NAME
vm_bind_l1 - make an entry in a root (level 1) page table
SYNOPSIS
#include <charm.h>
Status vm_bind_l1( Vaddr va, int pfn, Cap ptcap, Cap bindcap, PTE pte );
PARAMETERS
va Specifies the virtual address to which the physical memory is to be mapped.
pfn Specifies the physical page frame containing the root page table into which the entry is being made.
ptcap The write capability associated with pfn.
bindcap A capability for the page frame described in the pte entry. If the mapping specifies read-write protection, this must be the write capability; for any other protection it must be the read capability.
pte The page table entry to be entered into the root page table.
DESCRIPTION
This function makes an entry in a root page table that points at a level 2 page table. If the kernel detects that this is a new mapping, the page is zeroed. If the page is being swapped in, the kernel performs the appropriate checks to ensure that the mapping is legal. If the address given in va is not page aligned, it is rounded down to a page boundary.
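As an illustration, the following sketch shows how a user-level component might call vm_bind_l1(). It is hedged: the helper map_l2_table() and the way the capabilities and the pte value are obtained are assumptions, and only the call itself and the Status check reflect the interface described above.

    #include <charm.h>

    /*
     * Install a level 2 page table under a root page table. The
     * caller is assumed to already hold ptcap (the write capability
     * for the root page table frame) and bindcap (the capability
     * for the frame named in pte); constructing these and the pte
     * value is system specific and not shown here.
     */
    Status
    map_l2_table( Vaddr va, int root_pfn, Cap ptcap, Cap bindcap, PTE pte )
    {
        Status s;

        /* va need not be page aligned; the kernel rounds it down */
        s = vm_bind_l1( va, root_pfn, ptcap, bindcap, pte );
        if( s != SUCCEED )
            return s;   /* e.g. [VA_ALREADY_BACKED] if va is mapped */
        return SUCCEED;
    }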
RETURN VALUES
Upon successful completion, the vm_bind_l1() function returns [SUCCEED].
ERRORS
If the vm_bind_l1() function fails, a Status is returned that corresponds to one of the following values:
[VA_INVALID] The given virtual address is not valid.
[VA_ALREADY_BACKED] The given virtual address is already mapped to an address in physical memory.
[INVALID_PT_CAP] The capability specified by ptcap is invalid.
[INVALID_BIND_CAP] The capability specified by bindcap is invalid.
[PTE_INVALID] The page table entry is malformed.
[INVALID_BINDING] An attempt has been made to install an illegal page table.
RELATED INFORMATION
pm_change_caps(), vm_bind_l2(), vm_flush().
Notes for Gurus
Pages that are l1 or l2 page tables may only be mapped read-only; if this were not the case, a major security hole would exist. To enforce this, the kernel needs to know which pages represent page tables. This could be done with a dedicated system call, but we decided against that. The current proposal is to mark core map entries with a tag indicating that they are either l1 or l2 page tables when a vm_bind_l1() or vm_bind_l2() system call is performed.
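For concreteness, the tag might look like the sketch below. This is purely illustrative: every type and field name here is an assumption, not something taken from the kernel sources.

    enum pt_tag { PT_NONE = 0, PT_L1, PT_L2 };

    struct coremap_entry {
        enum pt_tag   tag;  /* set when vm_bind_l1()/vm_bind_l2()
                               installs this frame as a page table */
        unsigned long crc;  /* CRC of the frame contents, see below */
        /* ... other per-frame state ... */
    };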
A problem arises when paging page tables: they must be managed either by the kernel, which is undesirable, or at user level. If user level is responsible for paging, a security hole exists: a pager could substitute another page for a page table when the page is faulted in, and the substituted page table could give access to any page frame. For this reason the kernel must make some checks while binding a page table. In the worst case the kernel could validate every page table entry on the page being bound; however, this is not possible under the proposed capability scheme unless an array of capabilities, one entry per pte, were presented as well. This is clearly undesirable.
The proposed solution is to store an array of CRCs with each arena (one entry per pte) and one CRC per core map entry. The following algorithm is followed during a vm_bind_l1/2 operation:
if( core map entry is flagged as a page table ) {
        perform mapping
        update CRC stored with core map entry
}
else if( CRC entry exists in arena CRC table ) {
        if( page CRC matches CRC entry associated with the arena ) {
                // success
                put CRC into core map entry
                flag core map entry as page table
                perform mapping
        }
        else
                signal error and return
}
else {
        zero page
        put CRC of zeroed page into core map entry
        flag core map entry as page table
        perform mapping
}
On a vm_flush() operation, the CRC stored with the core map entry must be propagated back to the arenas whose mappings are being flushed.
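Putting the two halves together, a hedged C sketch of the bind-time check and the flush-time write-back might look as follows. It reuses the illustrative coremap_entry above; struct arena, crc_of_frame(), arena_crc_lookup(), arena_crc_store(), zero_frame() and do_map() are all assumed helpers, not documented kernel interfaces.

    typedef unsigned long crc_t;
    struct arena;   /* opaque here */

    /* assumed helpers */
    crc_t  crc_of_frame( int pfn );   /* CRC over the frame contents */
    int    arena_crc_lookup( struct arena *a, Vaddr va, crc_t *out );
    void   arena_crc_store( struct arena *a, Vaddr va, crc_t crc );
    void   zero_frame( int pfn );
    Status do_map( Vaddr va, int pfn );

    Status
    bind_page_table( struct arena *a, struct coremap_entry *cme,
                     Vaddr va, int pfn )
    {
        crc_t stored;

        if( cme->tag != PT_NONE ) {
            /* frame is already known to be a page table */
            cme->crc = crc_of_frame( pfn );
            return do_map( va, pfn );
        }
        if( arena_crc_lookup( a, va, &stored ) ) {
            /* page table being swapped back in: its CRC must match
               the one recorded when it was flushed out */
            if( crc_of_frame( pfn ) != stored )
                return INVALID_BINDING;   /* substituted page */
            cme->crc = stored;
            cme->tag = PT_L1;             /* PT_L2 for vm_bind_l2() */
            return do_map( va, pfn );
        }
        /* brand new page table: zero it so its contents are known */
        zero_frame( pfn );
        cme->crc = crc_of_frame( pfn );
        cme->tag = PT_L1;
        return do_map( va, pfn );
    }

    /* on vm_flush(), propagate the CRC back to the arena so the
       check above can succeed on a later rebind */
    void
    flush_page_table( struct arena *a, struct coremap_entry *cme, Vaddr va )
    {
        if( cme->tag != PT_NONE )
            arena_crc_store( a, va, cme->crc );
    }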
last updated: 11/2/99