From fcfbc617547fc6d9552cb6c1c563b6a90ee98085 Mon Sep 17 00:00:00 2001
From: Sean Christopherson <sean.j.christopherson@intel.com>
Date: Thu, 9 Jan 2020 15:56:18 -0800
Subject: KVM: Check for a bad hva before dropping into the ghc slow path

From: Sean Christopherson <sean.j.christopherson@intel.com>

commit fcfbc617547fc6d9552cb6c1c563b6a90ee98085 upstream.

When reading/writing using the guest/host cache, check for a bad hva
before checking for a NULL memslot, which triggers the slow path for
handling cross-page accesses.  Because the memslot is nullified on
error by __kvm_gfn_to_hva_cache_init(), if the bad hva is encountered
after crossing into a new page, then the kvm_{read,write}_guest() slow
path could potentially read or write the first chunk prior to
detecting the bad hva.
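
To make the ordering problem concrete outside of KVM, here is a
minimal userspace sketch; struct ghc, slow_path_write(),
write_cached_prepatch(), BAD_HVA and the page variables are
simplified stand-ins invented for illustration, not kernel API:

  /*
   * Minimal sketch with simplified stand-ins for the KVM types;
   * none of these names are kernel API.
   */
  #include <stdio.h>
  #include <string.h>

  #define BAD_HVA 0UL

  struct ghc {
      char *memslot;          /* nullified on cache-init error */
      unsigned long hva;      /* also set to BAD_HVA on error */
  };

  /*
   * Models the kvm_write_guest() slow path: it copies chunk by
   * chunk, so the first chunk lands before the second chunk's
   * mapping is ever examined.
   */
  static int slow_path_write(char *pg1, char *pg2, const char *src,
                             size_t len)
  {
      size_t half = len / 2;

      memcpy(pg1, src, half);         /* first page: succeeds */
      if (!pg2)
          return -1;                  /* error found only now */
      memcpy(pg2, src + half, len - half);
      return 0;
  }

  /*
   * Pre-patch check ordering: the NULL-memslot check runs first, so
   * an init error (which nullifies the memslot) drops into the slow
   * path and partially writes before the bad hva is looked at.
   */
  static int write_cached_prepatch(struct ghc *ghc, char *pg1,
                                   char *pg2, const char *data,
                                   size_t len)
  {
      if (!ghc->memslot)
          return slow_path_write(pg1, pg2, data, len);
      if (ghc->hva == BAD_HVA)
          return -1;
      /* fast path elided */
      return 0;
  }

  int main(void)
  {
      char pg1[8] = "", *pg2 = NULL;  /* second page "unmapped" */
      struct ghc ghc = { .memslot = NULL, .hva = BAD_HVA };

      write_cached_prepatch(&ghc, pg1, pg2, "AAAABBBB", 8);
      printf("first chunk after failed write: \"%.4s\"\n", pg1);
      return 0;
  }

Run as-is, the failed write still leaves "AAAA" in the first page,
which is exactly the partial-access behavior described above.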

Arguably, performing a partial access is semantically correct from an
architectural perspective, but that behavior is certainly not
intended.  In the original implementation, the memslot was not
explicitly nullified, and so the partial access behavior varied based
on whether the memslot itself was NULL or the hva was simply bad.  The
current behavior was introduced as a seemingly unintentional side
effect in commit f1b9dd5eb86c ("kvm: Disallow wraparound in
kvm_gfn_to_hva_cache_init"), which justified the change with "since
some callers don't check the return code from this function, it seems
prudent to clear ghc->memslot in the event of an error".

Regardless of intent, the partial access depends on _not_ checking
the result of the cache initialization, which is arguably a bug in
its own right, and at best simply weird.
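
For contrast, here is the post-patch ordering in the same sketch,
using the same stand-in types and helpers as above; the
authoritative change is the diff below:

  /*
   * Post-patch check ordering, under the same stand-ins as above:
   * the bad-hva check now runs first, so a nullified cache fails
   * cleanly before any partial copy can happen.
   */
  static int write_cached_patched(struct ghc *ghc, char *pg1,
                                  char *pg2, const char *data,
                                  size_t len)
  {
      if (ghc->hva == BAD_HVA)
          return -1;              /* fail before touching memory */
      if (!ghc->memslot)
          return slow_path_write(pg1, pg2, data, len);
      /* fast path elided */
      return 0;
  }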

Fixes: 8f964525a121 ("KVM: Allow cross page reads and writes from cached translations.")
Cc: Jim Mattson <jmattson@google.com>
Cc: Andrew Honig <ahonig@google.com>
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

---
 virt/kvm/kvm_main.c |   12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -2287,12 +2287,12 @@ int kvm_write_guest_offset_cached(struct
 	if (slots->generation != ghc->generation)
 		__kvm_gfn_to_hva_cache_init(slots, ghc, ghc->gpa, ghc->len);
 
-	if (unlikely(!ghc->memslot))
-		return kvm_write_guest(kvm, gpa, data, len);
-
 	if (kvm_is_error_hva(ghc->hva))
 		return -EFAULT;
 
+	if (unlikely(!ghc->memslot))
+		return kvm_write_guest(kvm, gpa, data, len);
+
 	r = __copy_to_user((void __user *)ghc->hva + offset, data, len);
 	if (r)
 		return -EFAULT;
@@ -2320,12 +2320,12 @@ int kvm_read_guest_cached(struct kvm *kv
 	if (slots->generation != ghc->generation)
 		__kvm_gfn_to_hva_cache_init(slots, ghc, ghc->gpa, ghc->len);
 
-	if (unlikely(!ghc->memslot))
-		return kvm_read_guest(kvm, ghc->gpa, data, len);
-
 	if (kvm_is_error_hva(ghc->hva))
 		return -EFAULT;
 
+	if (unlikely(!ghc->memslot))
+		return kvm_read_guest(kvm, ghc->gpa, data, len);
+
 	r = __copy_from_user(data, (void __user *)ghc->hva, len);
 	if (r)
 		return -EFAULT;