From 16cfacc8085782dab8e365979356ce1ca87fd6cc Mon Sep 17 00:00:00 2001
From: Sean Christopherson <sean.j.christopherson@intel.com>
Date: Tue, 3 Sep 2019 16:36:45 -0700
Subject: KVM: x86: Manually calculate reserved bits when loading PDPTRS

From: Sean Christopherson <sean.j.christopherson@intel.com>

commit 16cfacc8085782dab8e365979356ce1ca87fd6cc upstream.

Manually generate the PDPTR reserved bit mask when explicitly loading
PDPTRs.  The reserved bits that are being tracked by the MMU reflect the
current paging mode, which is unlikely to be PAE paging in the vast
majority of flows that use load_pdptrs(), e.g. CR0 and CR4 emulation,
__set_sregs(), etc...  This can cause KVM to incorrectly signal a bad
PDPTR, or more likely, miss a reserved bit check and subsequently fail
a VM-Enter due to a bad VMCS.GUEST_PDPTR.

Add a one-off helper to generate the reserved bits instead of sharing
code across the MMU's calculations and the PDPTR emulation.  The PDPTR
reserved bits are basically set in stone, and pushing a helper into
the MMU's calculation adds unnecessary complexity without improving
readability.

Opportunistically fix/update the comment for load_pdptrs().

Note, the buggy commit also introduced a deliberate functional change,
"Also remove bit 5-6 from rsvd_bits_mask per latest SDM.", which was
effectively (and correctly) reverted by commit cd9ae5fe47df ("KVM: x86:
Fix page-tables reserved bits").  A bit of SDM archaeology shows that
the SDM from late 2008 had a bug (likely a copy+paste error) where it
listed bits 6:5 as AVL and A for PDPTEs used for 4k entries but reserved
for 2mb entries.  I.e. the SDM contradicted itself, and bits 6:5 are and
always have been reserved.

Fixes: 20c466b56168d ("KVM: Use rsvd_bits_mask in load_pdptrs()")
Cc: stable@vger.kernel.org
Cc: Nadav Amit <nadav.amit@gmail.com>
Reported-by: Doug Reiland <doug.reiland@intel.com>
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

---
 arch/x86/kvm/x86.c |   11 ++++++++---
 1 file changed, 8 insertions(+), 3 deletions(-)

--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -535,8 +535,14 @@ static int kvm_read_nested_guest_page(st
 				       data, offset, len, access);
 }
 
+static inline u64 pdptr_rsvd_bits(struct kvm_vcpu *vcpu)
+{
+	return rsvd_bits(cpuid_maxphyaddr(vcpu), 63) | rsvd_bits(5, 8) |
+	       rsvd_bits(1, 2);
+}
+
 /*
- * Load the pae pdptrs.  Return true is they are all valid.
+ * Load the pae pdptrs.  Return 1 if they are all valid, 0 otherwise.
  */
 int load_pdptrs(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu, unsigned long cr3)
 {
@@ -555,8 +561,7 @@ int load_pdptrs(struct kvm_vcpu *vcpu, s
 	}
 	for (i = 0; i < ARRAY_SIZE(pdpte); ++i) {
 		if ((pdpte[i] & PT_PRESENT_MASK) &&
-		    (pdpte[i] &
-		     vcpu->arch.mmu.guest_rsvd_check.rsvd_bits_mask[0][2])) {
+		    (pdpte[i] & pdptr_rsvd_bits(vcpu))) {
 			ret = 0;
 			goto out;
 		}