NAME

vm_bind_l1 - Makes a binding from a level 1 page table to a physical page frame.

SYNOPSIS

#include <charm.h>

Status vm_bind_l1( Vaddr va, int pfn, Cap ptcap, Cap bindcap, PTE pte, bool use_hashing );

PARAMETERS

va Specifies the virtual address to which the physical memory is to be mapped.

pfn Specifies the physical page frame containing the root page table into which the entry is being made.

ptcap The write capability associated with pfn.

bindcap A capability for the page frame described in the pte entry. If the mapping specifies read-write protection, this must be the write capability; if the mapping specifies any other protection, it must be the read capability.

pte The page table entry to be entered into the root page table.

use_hashing Specifies that hashing is to be used when the page is unmapped (see below).

DESCRIPTION

This function makes an entry in a root (level 1) page table that points at a level 2 page table. If the kernel detects that this is a new mapping, the page is zeroed. If the page is being swapped in, the kernel performs the appropriate checks to ensure the mapping is legal. If the address given in va is not page aligned, it is rounded down to a page boundary.
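
For illustration, a caller binding a level 2 page table might look like the following sketch. The helper pte_make(), the protection flag PTE_READ_ONLY and the variable names are assumptions for illustration, not part of charm.h:

#include <charm.h>

/*
 * Bind the frame holding a level 2 page table into the root page
 * table at va. Page tables may only be mapped read-only (see Notes
 * for Gurus), so the read capability is passed as bindcap.
 */
Status bind_l2_table( Vaddr va, int root_pfn, Cap root_write_cap,
                      int l2_pfn, Cap l2_read_cap )
{
    PTE pte = pte_make( l2_pfn, PTE_READ_ONLY );   /* hypothetical helper */

    return vm_bind_l1( va, root_pfn, root_write_cap,
                       l2_read_cap, pte, false );
}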

If use_hashing is true, the kernel will calculate a hash code for the page before it is (re)mapped. This hash code is stored in a kernel data structure associated with the arena. This mechanism provides an efficient means to remove and later reinstate a page table (containing valid bindings). Without this mechanism, any extant bindings on the page would need to be recreated whenever the page was rebound.
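
Continuing the sketch above, reinstating a previously flushed page table might look like this (same illustrative variables as before):

/* Rebind the page table page; with use_hashing true the kernel checks
 * the page against the hash code recorded with the arena before
 * accepting the page's existing bindings. */
Status s = vm_bind_l1( va, root_pfn, root_write_cap,
                       l2_read_cap, pte, true );

if( s == VM_BAD_HASH ) {
    /* The page changed while it was unmapped; its old bindings cannot
     * be trusted and must be recreated. */
}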

RETURN VALUES

Upon successful completion, the vm_bind_l1() function returns [SUCCESS].

ERRORS

If the vm_bind_l1() function fails, a Status is returned that corresponds to one of the following values:

[VA_INVALID] The given virtual address is not valid.

[VA_ALREADY_BACKED] The given virtual address is already mapped to an address in physical memory.

[VM_BAD_HASH] The page provided does not match the stored hash code for the arena.

[INVALID_PT_CAP] The capability specified by ptcap is invalid.

[INVALID_BIND_CAP] The capability specified by bindcap is invalid.

[PTE_INVALID] The page table entry is malformed.

[INVALID_BINDING] An attempt has been made to install an illegal page table.
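
A caller might dispatch on the returned Status along the following lines. This is only a sketch; rebuild_page_table() and panic() are hypothetical, and how each error is handled is application policy:

switch( vm_bind_l1( va, root_pfn, root_write_cap,
                    l2_read_cap, pte, true ) ) {
case SUCCESS:
    break;                     /* entry installed */
case VM_BAD_HASH:
    rebuild_page_table();      /* recreate the bindings from scratch */
    break;
case VA_ALREADY_BACKED:
    break;                     /* unbind first, or pick another va */
default:
    panic( "bad capability or malformed pte" );
}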

RELATED INFORMATION

pm_change_caps(), vm_bind_l2(), vm_bind_root(), vm_unbind_root().

Notes for Gurus

Pages that are l1 or l2 page tables may only be mapped read-only; if this were not the case, a major security hole would exist. To enforce this, the kernel needs to know which pages represent page tables. This could be done with a dedicated system call, but we decided against that. The current proposal marks core map entries with a tag indicating that they are l1 or l2 page tables when a vm_bind_l1() or vm_bind_l2() system call is performed, as sketched below.
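
One way to picture the proposed tag (a sketch only; the core map layout and all names here are assumptions):

#include <stdint.h>

typedef enum { PT_NONE, PT_L1, PT_L2 } PtTag;

typedef struct {
    PtTag    pt_tag;    /* set by vm_bind_l1()/vm_bind_l2() */
    uint32_t crc;       /* page CRC, used by the algorithm below */
    /* ... remaining per-frame state ... */
} CoreMapEntry;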

A problem arises when paging page tables: they must be managed either by the kernel, which is undesirable, or at user level. If user level is responsible for paging, a security hole exists: a pager could substitute another page for a page table when the page is faulted in, and the substituted page table could give access to any page frame. For this reason the kernel must make some checks while binding a page table. In the worst case the kernel could validate every page table entry on the page being bound, but this is not possible under the proposed capability scheme unless an array of capabilities, one per pte, were presented as well. That is clearly undesirable.

The proposed solution is to store an array of CRCs with each arena (one entry per pte) and one CRC per core map entry. The following algorithm is followed during a vm_bind_l1/2 operation:

if( core map entry is flagged as a page table ) {
    perform mapping
    update CRC count stored with core map entry
}
else if( CRC entry exists in arena CRC table ) {
    if( page CRC matches CRC entry associated with the arena ) {
        // success
        put CRC count into core map entry
        flag core map entry as page table

        perform mapping
    }
    else signal error and return
}
else {
    zero page
    put zero CRC count into core map entry
    flag core map entry as page table

    perform mapping
}
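
The algorithm leaves the checksum function itself unspecified. One plausible choice, shown purely as a sketch, is a standard CRC-32 over the page; PAGE_SIZE and the function name are assumptions:

#include <stddef.h>
#include <stdint.h>

#define PAGE_SIZE 4096          /* assumption: 4 KB pages */

/* Bitwise CRC-32 (reflected, polynomial 0xEDB88320) over one page.
 * Any checksum with acceptable collision behaviour would do. */
static uint32_t page_crc( const uint8_t *page )
{
    uint32_t crc = 0xFFFFFFFFu;

    for( size_t i = 0; i < PAGE_SIZE; i++ ) {
        crc ^= page[i];
        for( int b = 0; b < 8; b++ )
            crc = (crc >> 1) ^ (0xEDB88320u & -(crc & 1u));
    }
    return ~crc;
}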

On a vm_flush() operation, the CRC stored with the core map entry must be propagated back to the arenas whose mappings are being flushed.
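
In code terms, reusing the sketched CoreMapEntry and page_crc() from above (Arena, frame_address() and arena_store_crc() are likewise hypothetical):

void flush_page_table( CoreMapEntry *cme, Arena *arenas[], int n_arenas )
{
    /* Recompute the CRC of the page table page being flushed... */
    cme->crc = page_crc( frame_address( cme ) );

    /* ...and hand it to every arena whose mapping is flushed, so that
     * a later vm_bind_l1/2 with use_hashing can verify the page. */
    for( int i = 0; i < n_arenas; i++ )
        arena_store_crc( arenas[i], cme->crc );
}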


last updated: 7/9/99