[2/4] mm/hmm: allow snapshot of the special zero page

Submitted by Ralph Campbell on Sept. 11, 2019, 10:28 p.m.

Details

Message ID: 20190911222829.28874-3-rcampbell@nvidia.com
State: New
Series "HMM tests and minor fixes" ( rev: 1 ) in AMD X.Org drivers

Commit Message

Ralph Campbell Sept. 11, 2019, 10:28 p.m.
Allow hmm_range_fault() to return success (0) when the CPU page table
entry points to the special shared zero page. The caller can then
handle the zero page, for example by clearing device private memory
locally instead of DMAing a page of zeroes.

Signed-off-by: Ralph Campbell <rcampbell@nvidia.com>
Cc: "Jérôme Glisse" <jglisse@redhat.com>
Cc: Jason Gunthorpe <jgg@mellanox.com>
Cc: Christoph Hellwig <hch@lst.de>
---
 mm/hmm.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/hmm.c b/mm/hmm.c
index 06041d4399ff..7217912bef13 100644
--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -532,7 +532,7 @@ static int hmm_vma_handle_pte(struct mm_walk *walk, unsigned long addr,
 			return -EBUSY;
 	} else if (IS_ENABLED(CONFIG_ARCH_HAS_PTE_SPECIAL) && pte_special(pte)) {
 		*pfn = range->values[HMM_PFN_SPECIAL];
-		return -EFAULT;
+		return is_zero_pfn(pte_pfn(pte)) ? 0 : -EFAULT;
 	}
 
 	*pfn = hmm_device_entry_from_pfn(range, pte_pfn(pte)) | cpu_flags;
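
To make the intended use concrete, here is a minimal caller-side sketch
(not part of this series; fill_device_mem() and its dev_mem argument are
hypothetical) of how a driver consuming hmm_range_fault() results could
recognize the zero page and clear device memory locally instead of
DMAing a page of zeroes:

	/*
	 * Hypothetical sketch only: with this change, a zero-page entry
	 * comes back as HMM_PFN_SPECIAL while the walk still succeeds,
	 * so the caller can zero its copy of the page locally.
	 */
	static void fill_device_mem(struct hmm_range *range, void *dev_mem)
	{
		unsigned long npages = (range->end - range->start) >> PAGE_SHIFT;
		unsigned long i;

		for (i = 0; i < npages; i++) {
			if (range->pfns[i] == range->values[HMM_PFN_SPECIAL]) {
				/* Zero page: clear locally, skip the DMA. */
				memset(dev_mem + i * PAGE_SIZE, 0, PAGE_SIZE);
				continue;
			}
			/* Otherwise map and DMA the backing page as before. */
		}
	}

A real driver would replace the memset() with its own device-memory
clear (e.g. a GPU fill command), but the HMM_PFN_SPECIAL test would be
the same either way.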

Comments

Christoph Hellwig Sept. 12, 2019

On Wed, Sep 11, 2019 at 03:28:27PM -0700, Ralph Campbell wrote:
> [...]
> @@ -532,7 +532,7 @@ static int hmm_vma_handle_pte(struct mm_walk *walk, unsigned long addr,
>  			return -EBUSY;
>  	} else if (IS_ENABLED(CONFIG_ARCH_HAS_PTE_SPECIAL) && pte_special(pte)) {
>  		*pfn = range->values[HMM_PFN_SPECIAL];
> -		return -EFAULT;
> +		return is_zero_pfn(pte_pfn(pte)) ? 0 : -EFAULT;

Any chance to just use a normal if here:

		if (!is_zero_pfn(pte_pfn(pte)))
			return -EFAULT;
		return 0;

Ralph Campbell Sept. 12, 2019

On 9/12/19 1:26 AM, Christoph Hellwig wrote:
> On Wed, Sep 11, 2019 at 03:28:27PM -0700, Ralph Campbell wrote:
>> [...]
> 
> Any chance to just use a normal if here:
> 
> 		if (!is_zero_pfn(pte_pfn(pte)))
> 			return -EFAULT;
> 		return 0;
> 

Sure, no problem.
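
With the suggestion applied, the hunk would presumably read as follows
in the respin (a sketch of the expected v2, not quoted from an actual
repost):

	} else if (IS_ENABLED(CONFIG_ARCH_HAS_PTE_SPECIAL) && pte_special(pte)) {
		*pfn = range->values[HMM_PFN_SPECIAL];
		if (!is_zero_pfn(pte_pfn(pte)))
			return -EFAULT;
		return 0;
	}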