+	case MSR_IA32_S_CET:
+	case MSR_KVM_GUEST_SSP:
+	case MSR_IA32_INT_SSP_TAB:
+		if (kvm_get_msr_common(vcpu, msr_info))
+			return 1;
+		if (msr_info->index == MSR_KVM_GUEST_SSP)
+			msr_info->data = vmcs_readl(GUEST_SSP);
+		else if (msr_info->index == MSR_IA32_S_CET)
+			msr_info->data = vmcs_readl(GUEST_S_CET);
+		else if (msr_info->index == MSR_IA32_INT_SSP_TAB)
+			msr_info->data = vmcs_readl(GUEST_INTR_SSP_TABLE);
+		break;
 	case MSR_IA32_DEBUGCTLMSR:
 		msr_info->data = vmcs_read64(GUEST_IA32_DEBUGCTL);
 		break;
@@ -2402,6 +2417,31 @@ static int vmx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 		else
 			vmx->pt_desc.guest.addr_a[index / 2] = data;
 		break;
+#define VMX_CET_CONTROL_MASK		(~GENMASK_ULL(9, 6))
+#define CET_LEG_BITMAP_BASE(data)	((data) >> 12)
+#define CET_EXCLUSIVE_BITS		(CET_SUPPRESS | CET_WAIT_ENDBR)
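
For reference, the three macros above combine into the control-MSR validity
test quoted further down; pulled out as a standalone predicate it would read
like this (my paraphrase of the patch's check, not new logic, and the helper
name is mine):

	/*
	 * Non-zero bits 9:6, a misaligned legacy-bitmap base, or both
	 * "exclusive" bits set at once make the write invalid, in which
	 * case vmx_set_msr() returns 1.
	 */
	static bool cet_ctrl_msr_invalid(u64 data)
	{
		return (data & ~VMX_CET_CONTROL_MASK) ||
		       !IS_ALIGNED(CET_LEG_BITMAP_BASE(data), 4) ||
		       (data & CET_EXCLUSIVE_BITS) == CET_EXCLUSIVE_BITS;
	}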

Bits 9:6 are reserved for both Intel and AMD. Shouldn't this check be done
in the common code?

My thinking is: on the AMD platform, bits 63:2 are anyway reserved since it
doesn't support IBT, so the checks in common code are enough for AMD. When
the execution flow comes here, it should be VMX, which needs this additional
check.

You can only say bits 5:2 and bits 63:10 are reserved because AMD doesn't
support IBT; bits 9:6 are reserved regardless of IBT support.
The checks against reserved bits are common for AMD and Intel:
1. If SHSTK is supported, bits 1:0 are not reserved.
2. If IBT is supported, bits 5:2 and bits 63:10 are not reserved.
3. Bits 9:6 are always reserved.
There is nothing specific to Intel.
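
If the check did move to common code, a minimal sketch of what that could
look like, assuming a helper along these lines (the name
cet_ctrl_reserved_bits() and its placement are my invention, not from the
series):

	static u64 cet_ctrl_reserved_bits(struct kvm_vcpu *vcpu)
	{
		/* bits 9:6 are always reserved */
		u64 reserved = GENMASK_ULL(9, 6);

		/* bits 1:0 (SH_STK_EN, WR_SHSTK_EN) exist only with SHSTK */
		if (!guest_cpuid_has(vcpu, X86_FEATURE_SHSTK))
			reserved |= GENMASK_ULL(1, 0);

		/* bits 5:2 and 63:10 exist only with IBT */
		if (!guest_cpuid_has(vcpu, X86_FEATURE_IBT))
			reserved |= GENMASK_ULL(5, 2) | GENMASK_ULL(63, 10);

		return reserved;
	}

kvm_set_msr_common() could then reject any write where
(data & cet_ctrl_reserved_bits(vcpu)) != 0, identically for both vendors.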

+	case MSR_IA32_PL0_SSP ... MSR_IA32_PL3_SSP:
+		return kvm_set_msr_common(vcpu, msr_info);

This hunk can be dropped as well.

In patch 16, these lines still need to be added back for PL{0,1,2}_SSP, so
I would like to keep it here.

If that's the case, better to move it to patch 16, where the change can be
justified. And PL3_SSP should be removed anyway; then the
"msr_index != MSR_IA32_PL3_SSP" check in the below code snippet from
patch 16 can go away.

+	/*
+	 * Write to the base SSP MSRs should happen ahead of toggling
+	 * of IA32_S_CET.SH_STK_EN bit.
+	 */
+	if (msr_index != MSR_IA32_PL3_SSP && data) {
+		vmx_disable_write_intercept_sss_msr(vcpu);
+		wrmsrl(msr_index, data);
+	}
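
For illustration, with PL3_SSP routed through common code only, the snippet
above would then reduce to something like this (a sketch of the suggested
simplification only, not a tested change):

	/*
	 * Write to the base SSP MSRs should happen ahead of toggling
	 * of IA32_S_CET.SH_STK_EN bit.
	 */
	if (data) {
		vmx_disable_write_intercept_sss_msr(vcpu);
		wrmsrl(msr_index, data);
	}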

Separately, in vmx_set_msr() the patch currently has:

+	case MSR_IA32_U_CET:
+	case MSR_IA32_S_CET:
+	case MSR_KVM_GUEST_SSP:
+	case MSR_IA32_INT_SSP_TAB:
+		if ((msr_index == MSR_IA32_U_CET ||
+		     msr_index == MSR_IA32_S_CET) &&
+		    ((data & ~VMX_CET_CONTROL_MASK) ||
+		     !IS_ALIGNED(CET_LEG_BITMAP_BASE(data), 4) ||
+		     (data & CET_EXCLUSIVE_BITS) == CET_EXCLUSIVE_BITS))
+			return 1;
+		break;

How about:

	case MSR_IA32_U_CET:
	case MSR_IA32_S_CET:
		if ((data & ~VMX_CET_CONTROL_MASK) || ...
		...
	case MSR_KVM_GUEST_SSP:
	case MSR_IA32_INT_SSP_TAB:

Do you mean to use "fallthrough"?

Yes.
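
Spelling that suggestion out with the explicit fallthrough keyword, I'd
expect something like the following (my sketch; the comment stands in for
whatever handling the four MSRs share, which the "..." above elides):

	case MSR_IA32_U_CET:
	case MSR_IA32_S_CET:
		/* validation applies only to the two control MSRs */
		if ((data & ~VMX_CET_CONTROL_MASK) ||
		    !IS_ALIGNED(CET_LEG_BITMAP_BASE(data), 4) ||
		    (data & CET_EXCLUSIVE_BITS) == CET_EXCLUSIVE_BITS)
			return 1;
		fallthrough;
	case MSR_KVM_GUEST_SSP:
	case MSR_IA32_INT_SSP_TAB:
		/* handling common to all four MSRs goes here */
		break;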