Zero-copy user-space TCP send of dma_mmap_coherent() mapped memory
As I posted in an update to my question, the underlying problem is that zero-copy networking does not work for memory that has been mapped using `remap_pfn_range()` (which `dma_mmap_coherent()` happens to use under the hood as well). The reason is that this type of memory (with the `VM_PFNMAP` flag set) does not have metadata in the form of a `struct page *` associated with each page, which zero-copy needs.

The solution then is to allocate the memory in a way that associates a `struct page *` with each page.
The workflow that now works for me to allocate the memory is:

- Use `struct page* page = alloc_pages(GFP_USER, page_order);` to allocate a block of contiguous physical memory, where the number of contiguous pages that will be allocated is given by `2**page_order`.
- Split the high-order/compound page into 0-order pages by calling `split_page(page, page_order);`. This now means that `struct page* page` has become an array with `2**page_order` entries.
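As a minimal sketch of those two steps (the names `my_alloc_buffer`, `pages`, and the concrete `page_order` value are illustrative assumptions, not from my actual driver):

```c
#include <linux/gfp.h>
#include <linux/mm.h>

static unsigned int page_order = 4;   /* 2^4 = 16 contiguous pages, assumed for illustration */
static struct page *pages;

static int my_alloc_buffer(void)
{
	/* Step 1: grab one physically contiguous block of 2^page_order pages. */
	pages = alloc_pages(GFP_USER, page_order);
	if (!pages)
		return -ENOMEM;

	/* Step 2: split the high-order page so each 0-order page gets its own
	 * struct page and refcount; 'pages' can now be indexed like an array. */
	split_page(pages, page_order);
	return 0;
}
```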
Now to submit such a region to the DMA (for data reception):

```c
dma_addr = dma_map_page(dev, page, 0, length, DMA_FROM_DEVICE);
dma_desc = dmaengine_prep_slave_single(dma_chan, dma_addr, length, DMA_DEV_TO_MEM, 0);
dmaengine_submit(dma_desc);
```
When we get a callback from the DMA that the transfer has finished, we need to unmap the region to transfer ownership of this block of memory back to the CPU, which takes care of caches to make sure we're not reading stale data:

```c
dma_unmap_page(dev, dma_addr, length, DMA_FROM_DEVICE);
```
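Putting the receive path together, here is a hedged sketch of how the mapping, submission, and completion callback could fit; `my_start_rx`, `my_dma_done`, and the static state are my own illustration, not code from the answer above:

```c
#include <linux/dma-mapping.h>
#include <linux/dmaengine.h>

static dma_addr_t dma_addr;
static size_t dma_len;

/* Completion callback: runs when the DMA engine reports the transfer done. */
static void my_dma_done(void *param)
{
	struct device *dev = param;

	/* Hand the buffer back to the CPU; this also handles cache
	 * maintenance so we don't read stale data. */
	dma_unmap_page(dev, dma_addr, dma_len, DMA_FROM_DEVICE);
}

static int my_start_rx(struct device *dev, struct dma_chan *dma_chan,
		       struct page *page, size_t length)
{
	struct dma_async_tx_descriptor *dma_desc;

	dma_addr = dma_map_page(dev, page, 0, length, DMA_FROM_DEVICE);
	if (dma_mapping_error(dev, dma_addr))
		return -ENOMEM;
	dma_len = length;

	dma_desc = dmaengine_prep_slave_single(dma_chan, dma_addr, length,
					       DMA_DEV_TO_MEM, DMA_PREP_INTERRUPT);
	if (!dma_desc)
		return -EIO;

	dma_desc->callback = my_dma_done;
	dma_desc->callback_param = dev;

	dmaengine_submit(dma_desc);
	dma_async_issue_pending(dma_chan);
	return 0;
}
```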
Now, when we want to implement `mmap()`, all we really have to do is call `vm_insert_page()` repeatedly for all of the 0-order pages that we pre-allocated (note that `2**page_order` is not valid C; the loop bound is `1 << page_order`):

```c
static int my_mmap(struct file *file, struct vm_area_struct *vma)
{
	int i, res = 0;
	...
	for (i = 0; i < (1 << page_order); ++i) {
		/* Give each pre-allocated 0-order page a normal, refcounted
		 * mapping in the caller's address space. */
		res = vm_insert_page(vma, vma->vm_start + i * PAGE_SIZE, &page[i]);
		if (res < 0)
			break;
	}
	vma->vm_flags |= VM_LOCKED | VM_DONTCOPY | VM_DONTEXPAND | VM_DENYWRITE;
	...
	return res;
}
```
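For completeness, the handler is wired up like any other `mmap` implementation; a sketch, with `my_fops` being an assumed name:

```c
#include <linux/fs.h>
#include <linux/module.h>

static const struct file_operations my_fops = {
	.owner = THIS_MODULE,
	.mmap  = my_mmap,
	/* .open, .release, etc. as the driver requires */
};
```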
When the file is closed, don't forget to free the pages (using the same `page` array that `my_mmap()` indexed above):

```c
for (i = 0; i < (1 << page_order); ++i) {
	__free_page(&page[i]);
}
```
Implementing `mmap()` this way now allows a socket to use this buffer for `sendmsg()` with the `MSG_ZEROCOPY` flag.
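On the user-space side, the flow could look like the sketch below. The device node `/dev/mydev` and the buffer size are assumptions for illustration, error handling is omitted, and `SO_ZEROCOPY`/`MSG_ZEROCOPY` require kernel 4.14+ with matching headers:

```c
#include <fcntl.h>
#include <sys/mman.h>
#include <sys/socket.h>
#include <unistd.h>

#define BUF_LEN (16 * 4096)   /* must match the driver's 2^page_order pages */

int main(void)
{
	int one = 1;
	int dev_fd = open("/dev/mydev", O_RDWR);   /* hypothetical device node */
	int sock = socket(AF_INET, SOCK_STREAM, 0);

	/* Map the driver's DMA buffer into our address space. */
	char *buf = mmap(NULL, BUF_LEN, PROT_READ | PROT_WRITE,
			 MAP_SHARED, dev_fd, 0);

	/* Opt in to zero-copy, then send straight from the mapped buffer. */
	setsockopt(sock, SOL_SOCKET, SO_ZEROCOPY, &one, sizeof(one));
	/* ... connect(sock, ...) ... */
	send(sock, buf, BUF_LEN, MSG_ZEROCOPY);

	/* Completion notifications arrive on the socket's error queue
	 * (recvmsg() with MSG_ERRQUEUE); the buffer must stay untouched
	 * until the kernel reports it is done with those pages. */
	return 0;
}
```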
Although this works, there are two things that don't sit well with me about this approach:
- You can only allocate power-of-2-sized buffers with this method, although you could implement logic to call `alloc_pages` as many times as needed with decreasing orders to get any size buffer made up of sub-buffers of varying sizes. This will then require some logic to tie these buffers together in the `mmap()` and to DMA them with scatter-gather (`sg`) calls rather than `single` (see the sketch below).
- `split_page()` says in its documentation:

```
 * Note: this is probably too low level an operation for use in drivers.
 * Please consult with lkml before using this in your driver.
```
These issues would be easily solved if there were some interface in the kernel to allocate an arbitrary number of contiguous physical pages. I don't know why there isn't, but I don't find the above issues so important as to go digging into why this isn't available / how to implement it :-)
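To illustrate the workaround from the first caveat, here is a hedged sketch that covers an arbitrary page count by repeatedly allocating the largest order that still fits; `alloc_any_size` is my own illustration (error unwinding of already-allocated chunks is omitted):

```c
#include <linux/gfp.h>
#include <linux/kernel.h>
#include <linux/log2.h>
#include <linux/mm.h>

/* Fill 'out[]' with 'nr_pages' 0-order pages, using the largest contiguous
 * chunks the buddy allocator offers. */
static int alloc_any_size(struct page **out, unsigned long nr_pages)
{
	unsigned long done = 0, i;

	while (done < nr_pages) {
		/* Largest power-of-2 chunk that fits the remainder,
		 * capped at the allocator's maximum order. */
		unsigned int order = min_t(unsigned int,
					   ilog2(nr_pages - done), MAX_ORDER - 1);
		struct page *chunk = alloc_pages(GFP_USER, order);

		if (!chunk)
			return -ENOMEM;

		split_page(chunk, order);
		for (i = 0; i < (1UL << order); ++i)
			out[done + i] = &chunk[i];
		done += 1UL << order;
	}
	return 0;
}
```

Since every entry in `out[]` ends up as a plain 0-order page, `vm_insert_page()` can still map them one by one; only the DMA side has to switch to `sg` calls because the chunks are no longer one contiguous block.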
Maybe this will help you understand why `alloc_pages` requires a power-of-2 page count.

To optimize the page allocation process (and decrease external fragmentation), which is engaged frequently, the Linux kernel developed the per-CPU page cache and the buddy allocator to allocate memory (there is another allocator, slab, that serves memory allocations smaller than a page).

The per-CPU page cache serves single-page allocation requests, while the buddy allocator keeps 11 free lists, holding blocks of 2^0 through 2^10 contiguous physical pages respectively. These lists perform well when allocating and freeing pages, and of course, the premise is that you are requesting a power-of-2-sized buffer.
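As a concrete consequence, a request for an arbitrary size gets rounded up to the next power-of-2 order before it hits the buddy lists; `get_order()` does that rounding. A small illustrative example (the function name is mine):

```c
#include <linux/gfp.h>
#include <linux/mm.h>

static struct page *alloc_100k(void)
{
	/* get_order() rounds a byte size up to the next power-of-2 order:
	 * 100 KiB = 25 pages -> order 5 = 32 pages = 128 KiB, so 28 KiB
	 * of the returned block goes unused. */
	return alloc_pages(GFP_KERNEL, get_order(100 * 1024));
}
```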