"Allocations of larger pages are not reliable in Linux. If larger pages have to be allocated then one faces various choices of allowing graceful fallback or using vmalloc with a performance penalty due to the use of a page table," began Christoph Lameter, describing the third version of his virtual compound page support patchset. He continued, "a virtual compound allocation means that there will be first of all an attempt to satisfy the request with physically contiguous memory. If that is not possible then a virtually contiguous memory will be created." Christopher proposed two advantages:
"1. Current uses of vmalloc can be converted to allocate virtual compounds instead. In most cases physically contiguous memory can be used which avoids the vmalloc performance penalty. 2. Uses of higher order allocations (stacks, buffers etc) can be converted to use virtual compounds instead. Physically contiguous memory will still be used for those higher order allocs in general but the system can degrade to the use of vmalloc should memory become heavily fragmented."
"Currently there is a strong tendency to avoid larger page allocations in the kernel because of past fragmentation issues and the current defragmentation methods are still evolving," Christoph Lameter began, posting the first version of his Virtual Compound Page Support patches, a followup to his earlier RFC. He explained, "we use vmalloc allocations in many locations to provide a safe way to allocate larger arrays. That is due to the danger of higher order allocations failing." Christoph continued, "this patch set provides a way for a higher page allocation to fall back. Instead of a physically contiguous page a virtually contiguous page is provided. The functionality of the vmalloc layer is used to provide the necessary page tables and control structures to establish a virtually contiguous area."
He then listed four advantages, including, "if higher order allocations are failing then virtual compound pages consisting of a series of order-0 pages can stand in for those allocations." He also listed three disadvantages, noting that new functions are used to access the virtually mapped memory, adding "virtual mappings are less efficient than physical mappings, [so] performance will drop once virtual fall back occurs." He also noted, "virtual mappings have more memory overhead; vm_area control structures, page tables, page arrays etc need to be allocated and managed to provide virtual mappings."
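The fallback pattern Christoph describes can be sketched in userspace C. This is only an illustrative model, not the patchset's code: the real implementation works inside the page allocator and vmalloc, while here `malloc` stands in for a higher-order allocation and an array of page-sized chunks stands in for the order-0 pages tracked by a virtual mapping. All names (`compound_alloc`, `struct compound`) are hypothetical.

```c
#include <stdlib.h>

#define PAGE_SIZE 4096

/* Illustrative stand-in for a compound allocation that may be either
 * physically contiguous or a "virtual compound" built from order-0
 * pages.  Not the kernel's data structure. */
struct compound {
    int is_virtual;   /* 0: one contiguous block; 1: fallback used */
    size_t nr_pages;
    void *contig;     /* valid when !is_virtual */
    void **pages;     /* valid when is_virtual: one chunk per page */
};

static struct compound *compound_alloc(size_t nr_pages)
{
    struct compound *c = calloc(1, sizeof(*c));
    if (!c)
        return NULL;
    c->nr_pages = nr_pages;

    /* First choice: a single contiguous block (the "higher order"
     * allocation the patchset prefers). */
    c->contig = malloc(nr_pages * PAGE_SIZE);
    if (c->contig)
        return c;

    /* Fallback: allocate order-0 "pages" individually and track them
     * in a page array, analogous to what vmalloc's control
     * structures do. */
    c->is_virtual = 1;
    c->pages = calloc(nr_pages, sizeof(void *));
    if (!c->pages) {
        free(c);
        return NULL;
    }
    for (size_t i = 0; i < nr_pages; i++) {
        c->pages[i] = malloc(PAGE_SIZE);
        if (!c->pages[i]) {
            while (i--)
                free(c->pages[i]);
            free(c->pages);
            free(c);
            return NULL;
        }
    }
    return c;
}
```

The `is_virtual` flag mirrors the disadvantage Christoph notes: callers cannot simply index into the memory in the fallback case, so separate accessor functions are needed, and the page array and flag represent the extra memory overhead of the virtual mapping.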
James Bottomley announced the results of the September 5th Linux Foundation Technical Advisory Board election, saying, "sorry this has taken so long to get out ... I just, er, forgot." He noted that there were eight candidates. "Every candidate gave a nomination statement before the voting (with the three persons not present having their statements read to the meeting). We did single polling per position and had two rounds for a tie on the last candidate."
James then stated that the five people elected to the advisory board were Arjan van de Ven, Greg Kroah-Hartman, Christoph Lameter, Jon Corbet, and Olaf Kirch. The purpose of the advisory board was discussed earlier.
"Slab defragmentation is mainly an issue if Linux is used as a fileserver and large amounts of dentries, inodes and buffer heads accumulate," Christoph Lameter explained when posting the fifth version of his patchset. He continued, "in some load situations the slabs become very sparsely populated so that a lot of memory is wasted by slabs that only contain one or a few objects. In extreme cases the performance of a machine will become sluggish since we are continually running reclaim. Slab defragmentation adds the capability to recover wasted memory." Christoph noted that the patch is difficult to validate and measure because, "activities are only performed when special load situations are encountered." He then pointed to
updatedb as something that typically triggers slab defragmentation on his systems:
"Updatedb scans all files on the system which causes a high inode and dentry use. After updatedb is complete we need to go back to the regular use patterns (typical on my machine: kernel compiles). Those need the memory now for different purposes. The inodes and dentries used for updatedb will gradually be aged by the dentry/inode reclaim algorithm which will free up the dentries and inode entries randomly through the slabs that were allocated. As a result the slabs will become sparsely populated. If they become empty then they can be freed but a lot of them will remain sparsely populated. That is where slab defrag comes in: It removes the slabs with just a few entries reclaiming more memory for other uses."