/* IDENT X-38 */
/************************************************************************
 *                                                                      *
 * © Copyright 1994, 2007 Hewlett-Packard Development Company, L.P.     *
 *                                                                      *
 * Confidential computer software. Valid license from HP and/or         *
 * its subsidiaries required for possession, use, or copying.           *
 *                                                                      *
 * Consistent with FAR 12.211 and 12.212, Commercial Computer Software, *
 * Computer Software Documentation, and Technical Data for Commercial   *
 * Items are licensed to the U.S. Government under vendor's standard    *
 * commercial license.                                                  *
 *                                                                      *
 * Neither HP nor any of its subsidiaries shall be liable for technical *
 * or editorial errors or omissions contained herein. The information   *
 * in this document is provided "as is" without warranty of any kind    *
 * and is subject to change without notice. The warranties for HP       *
 * products are set forth in the express limited warranty statements    *
 * accompanying such products. Nothing herein should be construed as    *
 * constituting an additional warranty.                                 *
 *                                                                      *
 ************************************************************************/
/*
 *++
 * FACILITY:
 *
 *      VMS Executive (LIB)
 *
 * ABSTRACT:
 *
 *      This header file provides the basic set of C inline functions for
 *      the 64-bit memory management system services.
 *
 *      Note: these functions may only be used from within the sys$vm.exe
 *      execlet due to references to global routines which are not made
 *      available as system vectors.
 *
 * AUTHOR:
 *
 *      Karen L. Noel
 *
 * CREATION DATE:  13-Oct-1994
 *
 * MODIFICATION HISTORY:
 *
 *      X-38    Mark Morris         17-Aug-2007
 *              Add check in $pt_no_delete for a negative phvindex
 *              field, which occurs for the Swapper process.
 *
 *      X-37    Ruth Goldenberg     17-May-2007
 *              Remove incorrect test in $is_valid_delete_range that
 *              required the range to be mapped by an integral number
 *              of shared page tables. It is possible to create such
 *              a section, and this overly strict requirement
 *              makes it impossible to delete the section, once created.
 *
 *      X-36    Ram Ramachandra C N 26-Apr-2007
 *              The FREWSLE_ACTIVE bit is stored in a volatile variable
 *              to prevent compiler optimisation when it is sampled.
 *
 *      X-35    Ram Ramachandra C N 23-Feb-2007
 *              On IA64, use __CMP_SWAP_QUAD instead of __CMP_STORE_QUAD
 *              since the latter is not available on IA64.
 *
 *      X-34    Ram Ramachandra C N 22-Feb-2007
 *              Add $atomic_write_keep_in_ws function.
 *              Also mmg$gl_sysphd is made global extern to avoid
 *              conflict when this header is included.
 *
 *      X-33    GHJ Gregory H. Jordan   18-Dec-2006
 *              In the $pt_no_delete macro, the synchronization of the
 *              phd$pq_pt_no_delete* fields is changing from using the
 *              MMG spinlock to the PCB specific spinlock.
 *
 *      X-32    Andy Kuehnel        16-Aug-2004
 *              The macros $is_valid_delete_range and $is_mapped_shpts
 *              have the implicit assumption that a GH region cannot be
 *              larger than the VA space mapped by a single level 3 PT.
 *              That is no longer true on I64.
 *
 *              Also: $is_mapped_shpt was holding MMG when this is not
 *              necessary. We are at IPL$_ASTDEL where we can take page
 *              faults but the address space cannot be changed.
 *
 *      X-31    Andy Kuehnel        30-Jun-2004
 *              On IA64, don't try to jump over the gap for more VA space:
 *              you might not land where you expect...
 *
 *      X-30    Clair Grant         08-Mar-2004
 *              Updates for 64b PFNs
 *
 *      X-29    MLM Mark L. Morris  26-FEB-2004
 *              Modify $is_last_section_page to expand checks
 *
 *      X-28    CMOS Christian Moser    10-FEB-2004
 *              Modify $update_peak_counters to shadow VIRTPEAK and IPAGEFL
 *              in the PHD.
 *
 *      X-26,27 KLN3374 Karen L. Noel   17-Oct-2003
 *              In $start_end_va, fix return_start_va for descending region
 *              with expreg. This case was only wrong when the length was not
 *              page size aligned, i.e., a partial section. PTR 75-101-325
 *
 *      X-24A18 KLN3025 Karen L. Noel   26-Feb-2002
 *              o Port PT space to IA64.
 *              o Remove inline pragmas. We trust the compiler now.
 *
 *      X-24A17 KLN2200 Karen L. Noel   15-Nov-2000
 *              Disable informationals for pointer casting. We know the
 *              pointer is 32-bits before we cast.
 *
 *      X-24A16 KLN2176 Karen L. Noel   17-Apr-2000
 *              Reference the L1 page table physically in $is_mapped_shpt
 *
 *      X-24A15 KLN2171 Karen L. Noel   28-Mar-2000
 *              Move $read_pte and $write_pte to pte_functions.h.
 *
 *      X-24A14 KLN2137 Karen L. Noel   11-Feb-2000
 *              o Make read and write PTE macros static so more than one
 *                module can include them.
 *              o Clean up interfaces so PTECHECK can use these macros too.
 *
 *      X-24A13 KLN2134 Karen L. Noel   10-Feb-2000
 *              Add macros to read and write PTEs.
 *
 *      X-24A12 Andy Kuehnel        8-Jul-1998
 *              Detect SHMGS pages as being memory resident.
 *
 *      X-24A11 Andy Kuehnel        18-Jun-1998
 *              - Allow galaxy shared pages as shared page tables.
 *              - Close minute windows: we must first get MMG, then see if the
 *                pages we want to touch are valid.
 *
 *      X-24A10 KLN2082 Karen L. Noel   04-Jun-1998
 *              Surround this file with short pointer pragmas in case someone
 *              wants to compile with long pointers from the command line.
 *
 *      X-24A9  Andy Kuehnel        20-Jan-1998
 *              Teach $gsd_insque how to deal with SHMGS sections and rename it
 *              to $insque_gsd.
 *
 *      X-24A8  NYK668 Nitin Y. Karkhanis   9-Sep-1996
 *              $start_end_va must align VA to PT-page boundary for
 *              shared PT regions.
 *
 *      X-24A7  NYK660 Nitin Y. Karkhanis   29-Aug-1996
 *              Add $is_in_region, $is_mapped_shpts, $is_valid_delete_range,
 *              and $is_last_section_page.
 *
 *      X-24A6  KLN1556 Karen L. Noel   19-Jun-1996
 *              Add function $is_mem_res
 *
 *      X-24A5  KLN1549 Karen L. Noel   31-May-1996
 *              Add function $stx_to_entry.
 *
 *      X-24A4  KLN1533 Karen L. Noel   24-Oct-1995
 *              Move $probe functions to sys_functions.h, a more appropriate
 *              home.
 *
 *      X-24A3  KLN1527 Karen L. Noel   9-Oct-1995
 *              1. In $more_pgflquota, purge zero page table pages to release
 *                 pagefile quota if the process has run out of quota.
 *              2. Add $keep_in_ws function.
 *
 *      X-24A2  KLN1515 Karen L. Noel   20-Sep-1995
 *              Return error from $in_region_64 if either VA is within
 *              the gap or if the range spans the gap.
 *
 *      X-24A1  KLN1503 Karen L. Noel   29-Aug-1995
 *              Allow start_va = 0 and no expreg in $start_end_va.
 *
 *      X-24    KLN1484 Karen L. Noel   21-Jul-1995
 *              Check for VA wrap in $start_end_va.
 *
 *      X-23    KLN1482 Karen L. Noel   20-Jul-1995
 *              Page count -> 64 bits.
 *
 *      X-22    KLN1481 Karen L. Noel   20-Jul-1995
 *              Only jump over the gap if P2
 *
 *      X-21    KLN1480 Karen L. Noel   19-Jul-1995
 *              Make length of P2 header region a natural non-sign-extended
 *              value.
 *
 *      X-20    KLN1476 Karen L. Noel   17-Jul-1995
 *              Return SS$_VA_IN_USE if $adjust_header_region overlaps
 *              created address space in header region.
 *
 *      X-19    KLN1472 Karen L. Noel   11-Jul-1995
 *              1. Hop over the gap properly
 *              2. Insert GSDs into the GSD queues properly
 *
 *      X-18    KLN1461 Karen L. Noel   19-Jun-1995
 *              Restore lost check-in.
 *
 *      X-17    NYK439 Nitin Y. Karkhanis   19-Jun-1995
 *              Restore lost edit that was X-16. That was "Virtual
 *              peak calculation in $update_peak_counters does not
 *              need created_length argument."
 *
 *      X-16    KLN1458 Karen L. Noel   12-Jun-1995
 *              Charge for page table pages for pagefile quota.
 *
 *      X-15    NYK422 Nitin Y. Karkhanis   8-Jun-1995
 *              Add $update_peak_counters function.
 *
 *      X-14    KLN1457 Karen L. Noel   05-Jun-1995
 *              Add functions $remove_rde and $adjust_header_region.
 *
 *      X-13    KLN1434 Karen L. Noel   18-Apr-1995
 *              Add $pt_no_delete function.
 *
 *      X-12    KLN1425 Karen L. Noel   30-Mar-1995
 *              Allow default P2 region to start at a higher VA.
 *
 *      X-11    KLN1383 Karen L. Noel   13-Feb-1995
 *              Fix $start_end_va for P1 block multiple ranges.
 *
 *      X-9,10  KLN1379 Karen L. Noel   6-Feb-1995
 *              Add $gsd_insque function.
 *
 *      X-7,8   KLN1354 Karen L. Noel   15-Dec-1994
 *              Fix problems found while debugging process private services.
 *
 *      X-6     KLN1347 Karen L. Noel   5-Dec-1994
 *              Add $start_end_va.
 *
 *      X-5     KLN1342 Karen L. Noel   30-Nov-1994
 *              Use $get_ps_info
 *
 *      X-4     KLN1337 Karen L. Noel   21-Nov-1994
 *              Miscellaneous enhancements.
 *
 *      X-3     KLN1333 Karen L. Noel   10-Nov-1994
 *              MMG utility functions.
 *
 *      X-2     KLN1327 Karen L. Noel   28-Oct-1994
 *              Improve $lookup_rde_va
 *--
 */

/* Include any header files we need to make these functions work */

#ifndef __MMG_FUNCTIONS_LOADED
#define __MMG_FUNCTIONS_LOADED 1

#ifdef __INITIAL_POINTER_SIZE           /* Defined whenever ptr size pragmas supported */
#pragma __required_pointer_size __save  /* Save the previously-defined required ptr size */
#pragma __required_pointer_size __short /* And set ptr size default to 32-bit pointers */
#endif

#include
#include
#include
#include
#include
#include
#include
#include
#include
#include
#include
#include
#include
#include
#include
#include
#include
#include
#include
#include
#include
#include
#include
#include

#define __MAX_PP_REGION_ID VA$C_P2      /* Max process permanent region id */

extern PHD * const mmg$gl_sysphd;

/*
 *++
 * $lookup_rde_id - Lookup region descriptor entry (RDE) address given a
 *                  region id.
 *
 * Input:       region_id - Region id of interest.
 *              phd - Address of PHD in P1 space.
 *              ipl - Current IPL if known; if not known or if 0, ipl will be
 *                    set to IPL$_ASTDEL upon entry and restored upon
 *                    completion.
 *
 * Output:      rde - Address of associated RDE.
 *
 *--
 */
static RDE *$lookup_rde_id (uint64 region_id, PHD * const phd, int ipl)
{
    extern RDE *ctl$a_region_table[RDE$C_REGION_TABLE_SIZE];

    RDE *pp_rde_array;          /* Process permanent RDE array */
    RDE *rde;                   /* Pointer to RDE */
    int saved_ipl;              /* Saved ipl for synchronization */
    uint32 index;               /* Index into region table */

    /* Process permanent regions */
    if (region_id < RDE$C_MIN_USER_ID)
    {
        if (region_id > __MAX_PP_REGION_ID)
            return (0);
        pp_rde_array = (RDE *)&(phd->phd$q_p0_rde);
        return (pp_rde_array + region_id);
    }

    /* Synchronize, if necessary */
    if (ipl < IPL$_ASTDEL)
        saved_ipl = __PAL_MTPR_IPL(IPL$_ASTDEL);

    /* User defined regions */
    index = (uint32)region_id & (RDE$C_REGION_TABLE_SIZE-1);
    rde = ctl$a_region_table[index];
    while ((rde != 0) && (rde->rde$q_region_id != region_id))
        rde = rde->rde$ps_table_link;

    /* Restore ipl */
    if (ipl < IPL$_ASTDEL)
        __PAL_MTPR_IPL(saved_ipl);

    return (rde);               /* return 0, or rde if found */
}

/*
 *++
 * $search_rde_va - Search for RDE in VA list
 *
 * Inputs:      va - Virtual address of interest.
 *              head - RDE list header of interest.
 *
 * Output:      RDE address if exact match
 *              *prev - address of previous RDE in list (may be head)
 *              *next - address of next RDE in list (may be head)
 *
 * Environment: Must be called at IPL$_ASTDEL for proper synchronization.
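 *
 *--
 */

/* Editor's illustration, not part of the original source: the user-region
 * lookup in $lookup_rde_id above -- mask the low bits of the region id to
 * pick a table bucket, then walk the collision chain -- reduced to portable,
 * self-contained C.  The names region_t, table, and lookup_region are
 * invented stand-ins for the VMS structures. */

```c
#include <stddef.h>
#include <stdint.h>

#define TABLE_SIZE 64                     /* must be a power of two */

typedef struct region {
    uint64_t id;                          /* region id */
    struct region *table_link;            /* collision-chain link */
} region_t;

static region_t *table[TABLE_SIZE];       /* stand-in for ctl$a_region_table */

/* Mask the low bits of the id to pick a bucket, then walk the chain. */
static region_t *lookup_region (uint64_t id)
{
    region_t *r = table[id & (TABLE_SIZE - 1)];
    while (r != NULL && r->id != id)
        r = r->table_link;
    return r;                             /* NULL if not found */
}
```

/*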
 *
 *--
 */
static RDE *$search_rde_va (VOID_PQ va, RDE *head, RDE **prev, RDE **next)
{
    /* External variables */
    extern const int mmg$gl_va_bits;

    /* Local variables */
    RDE *rde;

    /* Header region is ascending (P0, P2) */
    if (!head->rde$v_descend)
    {
        *prev = 0;
        rde = head;
        *next = head->rde$ps_va_list_flink;
        while ((*next != head) &&
               (va >= $$sext((uint64)rde->rde$pq_start_va + rde->rde$q_region_size)))
        {
            *prev = rde;
            rde = *next;
            *next = rde->rde$ps_va_list_flink;
        }

        /* Figure out what to return */
        if (va < $$sext((uint64)rde->rde$pq_start_va + rde->rde$q_region_size))
        {
            if (va >= rde->rde$pq_start_va)
                return (rde);   /* Got it */
            *next = rde;        /* rde is next */
            return (0);         /* prev is correct */
        }
        else                    /* va is above rde */
        {
            *prev = rde;        /* rde is prev */
            return (0);         /* next is correct */
        }
    }

    /* Header region is descending (P1) */
    if (head->rde$v_descend)
    {
        *prev = 0;
        rde = head;
        *next = head->rde$ps_va_list_flink;
        while ((*next != head) && (va < rde->rde$pq_start_va))
        {
            *prev = rde;
            rde = *next;
            *next = rde->rde$ps_va_list_flink;
        }

        /* Figure out what to return */
        if (va >= rde->rde$pq_start_va)
        {
            if ((uint64)va < ((uint64)rde->rde$pq_start_va + rde->rde$q_region_size))
                return (rde);   /* Got it */
            *next = rde;        /* rde is next */
            return (0);         /* prev is correct */
        }
        else                    /* va is below rde */
        {
            *prev = rde;        /* rde is prev */
            return (0);         /* next is correct */
        }
    }
}

/*
 *++
 * $lookup_rde_va:
 *
 * Lookup region descriptor entry (RDE) address given a virtual address.
 *
 * Input:       va - Virtual address of interest.
 *              phd - Address of PHD in P1 space.
 *              function -
 *                __LOOKUP_RDE_EXACT(0) = va must be within a region for an
 *                      RDE to be returned
 *                __LOOKUP_RDE_HIGHER(1) = If va is not within a region, the
 *                      RDE for the region whose starting VA is higher than
 *                      VA will be returned.
 *              ipl - Current IPL if known; if not known or if 0, ipl will be
 *                    set to IPL$_ASTDEL upon entry and restored upon
 *                    completion.
 *
 * Output:      rde - Address of associated RDE.
 *
 *--
 */
static RDE *$lookup_rde_va (VOID_PQ va, PHD * const phd, int function, int ipl)
{
    extern const VOID_PQ mmg$gq_process_space_limit;

    int saved_ipl;                  /* Saved ipl for synchronization */
    int index;                      /* Index into RDE array in PHD */
    RDE *rde, *prev, *head, *next;  /* RDE pointers for search */
    uint64 mo = 0;                  /* Quadword -1 */

    mo = ~mo;

#pragma message save
#pragma message disable pointerintcast

    /* Get proper list header */
    if ($is_p2_va(va,mmg$gq_process_space_limit))
        index = VA$C_P2;
    else if ($is_p1_va(va))
        index = VA$C_P1;
    else if ($is_p0_va(va))
        index = VA$C_P0;
    else
        /* -1 is wild-card start va, return P0 RDE */
        if (((uint64)va == mo) && (function == __LOOKUP_RDE_HIGHER))
            return ((RDE *)&(phd->phd$q_p0_rde));
        else
            return (0);         /* Non-process space addresses are an error */

#pragma message restore

    head = (RDE *)&(phd->phd$q_p0_rde) + index;

    /* Synchronize, if necessary */
    if (ipl < IPL$_ASTDEL)
        saved_ipl = __PAL_MTPR_IPL(IPL$_ASTDEL);

    /* Search list header for RDE associated with VA */
    rde = $search_rde_va (va, head, &prev, &next);
    if (rde != 0)
    {
        if (ipl < IPL$_ASTDEL)
            __PAL_MTPR_IPL(saved_ipl);
        return (rde);           /* Found match */
    }

    /* Did not find exact match */
    if (function == __LOOKUP_RDE_EXACT)
    {
        if (ipl < IPL$_ASTDEL)
            __PAL_MTPR_IPL(saved_ipl);
        return (0);
    }

    /* Function must be __LOOKUP_RDE_HIGHER */
    if (function == __LOOKUP_RDE_HIGHER)
    {
        /* If descending, prev is higher */
        if (head->rde$v_descend)
        {
            if (ipl < IPL$_ASTDEL)
                __PAL_MTPR_IPL(saved_ipl);
            return (prev);
        }

        /* Ascending:
         * If we have left a gap before start of header region,
         * return header region */
        if (va < head->rde$pq_start_va)
        {
            if (ipl < IPL$_ASTDEL)
                __PAL_MTPR_IPL(saved_ipl);
            return (head);
        }

        /* Otherwise, next is higher */
        if (next != head)
        {
            if (ipl < IPL$_ASTDEL)
                __PAL_MTPR_IPL(saved_ipl);
            return (next);
        }

        /* Otherwise, off end of list, go look at next list */
        if (index == __MAX_PP_REGION_ID)
        {
            if (ipl < IPL$_ASTDEL)
                __PAL_MTPR_IPL(saved_ipl);
            return (0);
        }
        head++;

        /* If ascending, head is next higher */
        if (!head->rde$v_descend)
        {
            if (ipl < IPL$_ASTDEL)
                __PAL_MTPR_IPL(saved_ipl);
            return (head);
        }

        /* If descending, blink is next higher */
        rde = head->rde$ps_va_list_blink;
        if (ipl < IPL$_ASTDEL)
            __PAL_MTPR_IPL(saved_ipl);
        return (rde);
    }

    /* Restore IPL */
    if (ipl < IPL$_ASTDEL)
        __PAL_MTPR_IPL(saved_ipl);
    return (0);                 /* Invalid function code */
}   /* End of $lookup_rde_va */

/*
 *++
 * $init_pgflquota - Initialize pagefile quota cache
 *
 * Input:       pcb - PCB address
 *              pages - page count of pages to map
 *              pagefile_cache - Address of pagefile cache storage
 *
 * Output:      pagefile_cache is updated to reflect any allocation or
 *              deallocation
 *--
 */
static void $init_pgflquota (const PCB * const pcb, int pages, int *pagefile_cache)
{
    uint32 min;
    JIB *jib;

    /* Do nothing if cached quota is enough */
    if (*pagefile_cache >= pages)
        return;

    /* Get address of JIB */
    jib = pcb->pcb$l_jib;

    /* Use min of pages and pagefile quota */
    while (1)
    {
        min = jib->jib$l_pgflcnt;
        if (pages < min)
            min = pages;
        __ADD_ATOMIC_LONG (&jib->jib$l_pgflcnt, -min);
        if (jib->jib$l_pgflcnt >= 0)
        {
            *pagefile_cache += min;
            return;
        }
        __ADD_ATOMIC_LONG (&jib->jib$l_pgflcnt, min);
    }
}

/*
 *++
 * $more_pgflquota - Get more pagefile quota cache
 *
 * Input:       pcb - PCB address
 *              pages_1 - page count - 1 of remaining pages to map
 *              pagefile_cache - Address of pagefile cache storage
 *
 * Output:      pagefile_cache is updated to reflect any allocation or
 *              deallocation
 *--
 */
static void $more_pgflquota (const PCB * const pcb, int pages_1, int *pagefile_cache)
{
    /* External routine */
    extern void mmg_std$purge_zpts (VOID_PQ start_va, VOID_PQ end_va);

    /* External variable */
    extern VOID_PQ const mmg$gq_process_space_limit;

    /* Local variables */
    int cache_before;           /* Save value of pagefile cache before call */

    cache_before = *pagefile_cache;

    /* Try for number of pages requested */
    $init_pgflquota(pcb,pages_1+1,pagefile_cache);

    /* If we've run out of pagefile quota, try purging page tables */
    if (*pagefile_cache == cache_before)
    {
        /* Purge zero'd page table pages */
        mmg_std$purge_zpts (0, (VOID_PQ)((uint64)mmg$gq_process_space_limit-1));

        /* Now try to get pagefile quota */
        $init_pgflquota(pcb,pages_1+1,pagefile_cache);
    }
}

/*
 *++
 * $ret_pgflquota - Return unused pagefile quota cache
 *
 * Input:       pcb - PCB address
 *              pagefile_cache - Address of pagefile cache storage
 *
 * Output:      Unused pagefile quota cache is returned to jib and
 *              pagefile_cache is reset to zero
 *--
 */
static void $ret_pgflquota (const PCB * const pcb, int *pagefile_cache)
{
    JIB *jib;

    /* Return cached page file quota */
    jib = pcb->pcb$l_jib;
    __ADD_ATOMIC_LONG (&jib->jib$l_pgflcnt, *pagefile_cache);
    *pagefile_cache = 0;
}

/*
 *++
 * $is_in_region - Is virtual address between starting address of region and
 *                 starting address plus length of region
 *
 * Input:       va - virtual address
 *              rde - region descriptor
 *
 * Output:      1 - is in region
 *              0 - is not in region
 *--
 */
#define $is_in_region(va,rde)\
    (((va) >= (rde)->rde$pq_start_va) && \
     ((va) < $$sext((uint64)(rde)->rde$pq_start_va + (rde)->rde$q_region_size)))

/*
 *++
 * $in_region_64 - check if request is within region
 *
 * Input:       rde - RDE address
 *              start_va - Starting address
 *              end_va - Ending address
 *
 * Output:      *expbytes - # of bytes to expand
 *              *pages - # of pages in request
 *              status = SS$_NORMAL - success
 *                       0 - failure
 *--
 */
static int $in_region_64 (RDE *rde, VOID_PQ start_va, VOID_PQ end_va,
                          uint64 *expbytes, UINT64_PQ pages)
{
    /* External variables */
    extern const uint64 mmg$gq_bwp_mask;
    extern const int mmg$gl_page_size, mmg$gl_bwp_width, mmg$gl_va_bits;
    extern VOID_PQ const mmg$gq_gap_lo_va, mmg$gq_gap_hi_va;

    /* Local variables */
    VOID_PQ lo, hi;

    /* Page align start_va and end_va and get them in order */
    if (start_va < end_va)
    {
        lo = (VOID_PQ)((uint64)start_va & ~mmg$gq_bwp_mask);
        hi = (VOID_PQ)((uint64)end_va & ~mmg$gq_bwp_mask);
    }
    else
    {
        hi = (VOID_PQ)((uint64)start_va & ~mmg$gq_bwp_mask);
        lo = (VOID_PQ)((uint64)end_va & ~mmg$gq_bwp_mask);
    }

    /* Compute and return page count */
    *pages = (((uint64)hi - (uint64)lo) >> mmg$gl_bwp_width) + 1;

    if (!rde->rde$v_descend)    /* ascending region */
    {
        /* Decide if this is all past current end of region */
        if (lo < rde->rde$pq_first_free_va)
            return (0);
        if (hi < rde->rde$pq_first_free_va)
            return (0);

        /* Also, return error if hi is higher than end of region */
        if (hi >= $$sext((uint64)rde->rde$pq_start_va + rde->rde$q_region_size))
            return (0);

#ifdef __alpha // Verified for IA64 port - ak
        /* Also, return error if either VA is within the gap, or
         * if the range spans the gap */
        if (rde->rde$q_region_id == VA$C_P2)
        {
            if ((lo >= mmg$gq_gap_lo_va) && (lo < mmg$gq_gap_hi_va))
                return (0);
            if ((hi >= mmg$gq_gap_lo_va) && (hi < mmg$gq_gap_hi_va))
                return (0);
            if ((lo < mmg$gq_gap_lo_va) && (hi >= mmg$gq_gap_hi_va))
                return (0);
        }
#endif

        /* Return # of bytes to expand (watch out for jumping over
         * the gap) */
        *expbytes = $$trunc(hi) - $$trunc(rde->rde$pq_first_free_va) + mmg$gl_page_size;
        return (SS$_NORMAL);
    }

    /* Descending region */
    if (hi > rde->rde$pq_first_free_va)
        return (0);
    if (lo > rde->rde$pq_first_free_va)
        return (0);

    /* Also return error if lo is lower than start of region */
    if (lo < rde->rde$pq_start_va)
        return (0);

    /* Return # of bytes to expand */
    *expbytes = ((uint64)rde->rde$pq_first_free_va - (uint64)lo) + mmg$gl_page_size;
    return (SS$_NORMAL);
}

/*
 *++
 * $service_init - Initialize MMG system service
 *
 * Input:       return_va - address of caller's return va
 *              return_length - address of caller's return length
 *
 * Output:      *callers_mode - mode of system service caller
 *              *ipl - current IPL
 *--
 */
static int $service_init (VOID_PPQ return_va, UINT64_PQ return_length,
                          int *callers_mode, int *ipl)
{
    uint64 mo;

    /* Create a 64-bit constant -1 */
    mo = 0;
    mo = ~mo;

    /* Get info from PS */
    $get_ps_info(callers_mode,ipl);

    /* Probe return_va and return_length */
    if ($probew_2q(return_va, return_length, *callers_mode) == 0)
        return (SS$_ACCVIO);

    /* Initialize return_va and return_length */
    *return_va = (VOID_PQ)mo;
    *return_length = 0;

    /* All set */
    return (SS$_NORMAL);
}

/*
 *++
 * $service_complete - Complete MMG system service.
 *
 * Input:       probe - If 1, probes will be done
 *                      If 0, probes will not be done
 *              va - virtual address to return to caller
 *              length - length to return to caller
 *              return_va - address of caller's return va
 *              return_length - address of caller's return length
 *              callers_mode - mode of system service caller (0 - no probes)
 *              *pagefile_cache - cached pagefile quota (0 - none)
 *
 * Output:      SS$_NORMAL if successful
 *              SS$_ACCVIO if return_va or return_length cannot be written
 *--
 */
static int $service_complete (int probe, VOID_PQ va, uint64 length,
                              VOID_PPQ return_va, UINT64_PQ return_length,
                              int callers_mode, int *pagefile_cache)
{
    extern PCB * const ctl$gl_pcb;

    /* Return any cached pagefile quota */
    if (pagefile_cache != 0)
        $ret_pgflquota (ctl$gl_pcb, pagefile_cache);

    /* If re-probing is required, probe return parameters */
    if ((probe) && ($probew_2q (return_va, return_length, callers_mode) == 0))
        return (SS$_ACCVIO);

    /* Return va and length to system service caller */
    *return_va = va;
    *return_length = length;

    /* All set */
    return (SS$_NORMAL);
}

/*
 *++
 * $start_end_va - compute starting and ending vas
 *
 * Inputs:      expreg - non-zero if region expansion specified
 *              rde - Address of region descriptor
 *              start_va - starting va specified: 0 if none, page aligned
 *                         if specified
 *              length - length in bytes
 *
 * Outputs:     *return_start_va - returned starting va
 *              *return_end_va - returned ending va
 *--
 */
static void $start_end_va (int expreg, RDE *rde, VOID_PQ start_va, uint64 length,
                           VOID_PQ *return_start_va, VOID_PQ *return_end_va)
{
    /* External variables */
    extern const uint64 mmg$gq_level_width, mmg$gq_page_size;
    extern VOID_PQ const mmg$gq_gap_lo_va, mmg$gq_gap_hi_va;

    uint64 bytes_mapped_by_l3pt;

    /* Compute number of bytes mapped by a PT page */
    bytes_mapped_by_l3pt = mmg$gq_page_size << mmg$gq_level_width;

    /* Ascending, expreg */
    if ((!rde->rde$v_descend) && expreg)
    {
        /* Calculate start va and end_va */
        *return_start_va = rde->rde$pq_first_free_va;

        /* For shared PT regions, make sure start_va is PT page aligned */
        if (rde->rde$v_shared_pts)
            *return_start_va = $align_va ( *return_start_va, bytes_mapped_by_l3pt, 1);

        *return_end_va = (VOID_PQ)((uint64)*return_start_va + (length - 1));

#ifdef __alpha // Verified for IA64 port - ak
        /* Jump over the gap if necessary */
        if ((rde->rde$q_region_id == VA$C_P2) &&
            (*return_end_va > mmg$gq_gap_lo_va) &&
            (*return_start_va < mmg$gq_gap_lo_va))
        {
            *return_start_va = mmg$gq_gap_hi_va;
            *return_end_va = (VOID_PQ)((uint64)*return_start_va + length - 1);
        }
#endif
    }

    /* Ascending, specified start_va */
    if ((!rde->rde$v_descend) && !expreg)
    {
        *return_start_va = start_va;
        *return_end_va = (VOID_PQ)((uint64)start_va + (length - 1));
    }

    /* Ascending, if VA wrap has occurred, set end va to largest number */
    if ((!rde->rde$v_descend) && (*return_end_va < *return_start_va))
        *return_end_va = (VOID_PQ)-1;

    /* Descending, expreg */
    if ((rde->rde$v_descend) && expreg)
    {
        if (rde->rde$v_shared_pts)
        {
            /* Calculate lowest used va */
            *return_end_va = (VOID_PQ)((uint64)rde->rde$pq_first_free_va +
                                       mmg$gq_page_size);

            /* Subtract length and PT-page align */
            *return_end_va = $align_va ( (VOID_PQ)((uint64)*return_end_va - length),
                                         bytes_mapped_by_l3pt, 0);

            /* Starting address is end_va plus length - 1 */
            *return_start_va = (VOID_PQ)((uint64)*return_end_va + length - 1);
        }
        else
        {
            /* Calculate lowest used va */
            *return_end_va = (VOID_PQ)((uint64)rde->rde$pq_first_free_va +
                                       mmg$gq_page_size);

            /* Subtract length and page align */
            *return_end_va = (VOID_PQ)(((uint64)*return_end_va - length) &
                                       ~(mmg$gq_page_size-1));

            /* Starting address is first free byte in region */
            *return_start_va = (VOID_PQ)((uint64)*return_end_va + length - 1);
        }
    }

    /* Descending, specified start_va */
    if ((rde->rde$v_descend) && !expreg)
    {
        *return_start_va = (VOID_PQ)((uint64)start_va + (length - 1));
        *return_end_va = start_va;
    }

    /* Descending, if VA wrap has occurred, set end va to smallest number */
    if ((rde->rde$v_descend) && (*return_end_va > *return_start_va))
        *return_end_va = 0;
}

/*
 *++
 * $insque_gsd - Insert GSD into appropriate global section queue
 *
 * Inputs:      flags - section flags
 *              *gsd - Global section descriptor
 *
 * Outputs:     none
 *
 * Environment: GSD mutex held
 *--
 */
static void $insque_gsd (SECDEF_FLAGS flags, GSD *gsd)
{
    /* External variables */
    extern GSD * exe$gl_gsdsysfl;
    extern GSD * exe$gl_gsdsysbl;
    extern GSD * exe$gl_gsdgrpfl;
    extern GSD * exe$gl_gsdgrpbl;
    extern GSD * exe$gl_glxsysfl;
    extern GSD * exe$gl_glxsysbl;
    extern GSD * exe$gl_glxgrpfl;
    extern GSD * exe$gl_glxgrpbl;

    /* Local variables */
    GSD **gsd_list_flink;
    GSD **gsd_list_blink;
    int type = 0;

    /* Find queue "index" */
    if (flags.secflg$v_shmgs)
        type = 2;
    if (flags.secflg$v_sysgbl)
        type |= 1;

    /* Insert gsd into appropriate list */
    switch (type)
    {
        case 0:         /* group global */
            gsd_list_flink = &exe$gl_gsdgrpfl;
            gsd_list_blink = &exe$gl_gsdgrpbl;
            break;
        case 1:         /* system global */
            gsd_list_flink = &exe$gl_gsdsysfl;
            gsd_list_blink = &exe$gl_gsdsysbl;
            break;
        case 2:         /* SHMGS group global */
            gsd_list_flink = &exe$gl_glxgrpfl;
            gsd_list_blink = &exe$gl_glxgrpbl;
            break;
        case 3:         /* SHMGS system global */
            gsd_list_flink = &exe$gl_glxsysfl;
            gsd_list_blink = &exe$gl_glxsysbl;
            break;
    }

    (*gsd_list_flink)->gsd$l_gsdbl = gsd;
    gsd->gsd$l_gsdfl = *gsd_list_flink;
    gsd->gsd$l_gsdbl = (GSD *)gsd_list_flink;
    *gsd_list_flink = gsd;
}

/*
 *++
 * $pt_no_delete - Set fields in PHD to synchronize PT create/delete
 *
 * Inputs:      l2pte1 - starting l2pte va (0 if clearing PHD fields)
 *              l2pte2 - ending l2pte va
 *              phd - process's PHD address
 *              IPL is at IPL$_ASTDEL
 *
 * Output:      SS$_WASSET if l2pte1 and l2pte2 are already within range of
 *              page tables already created.
 *              SS$_WASCLR if range not already created.
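 *
 *--
 */

/* Editor's illustration, not part of the original source: the flink/blink
 * queue insertion done at the end of $insque_gsd above, and the matching
 * unlink (as $remove_rde performs later), reduced to a self-contained C
 * sketch.  node_t and the function names are invented; the listhead is
 * self-referencing when empty, as in the VMS absolute queues. */

```c
#include <stddef.h>

typedef struct node {
    struct node *flink, *blink;       /* forward and backward links */
} node_t;

/* An empty queue is a listhead pointing at itself. */
static void list_init (node_t *head)
{
    head->flink = head->blink = head;
}

/* Insert n at the front of the queue, as $insque_gsd does with the GSD. */
static void list_insert_head (node_t *head, node_t *n)
{
    n->flink = head->flink;
    n->blink = head;
    head->flink->blink = n;
    head->flink = n;
}

/* Unlink n; its neighbors are rejoined, as in $remove_rde's VA-list step. */
static void list_remove (node_t *n)
{
    n->blink->flink = n->flink;
    n->flink->blink = n->blink;
}
```

/*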
 *--
 */
static int $pt_no_delete (PTE_PQ l2pte1, PTE_PQ l2pte2, PHD *phd)
{
    /* External variables */
    extern PCB **sch$gl_pcbvec;
    extern uint16 *PHV$GL_PIXBAS;
    extern PCB * const sch$ar_swppcb;   /* Swapper's PCB */

    /* Local variables */
    VOID_PQ pte1, pte2;
    PCB *pcb;
    int index;

    index = phd->phd$l_phvindex;        // Get the PHV index
    if (index == -1)                    // Handle Swapper case
    {
        pcb = sch$ar_swppcb;
    }
    else
    {
        index = PHV$GL_PIXBAS[index];   // Get the process index
        pcb = sch$gl_pcbvec[index];     // Get the PCB
    }

    /* If range is zero, we want to clear the PHD fields */
    if (l2pte1 == 0)
    {
        /* Clear PHD fields.
         * Note that phd$pq_pt_no_delete* are synchronized via the PCB
         * spinlock */
        device_lock ( pcb->pcb$l_spinlock, RAISE_IPL, NOSAVE_IPL );
        phd->phd$pq_pt_no_delete1 = 0;
        phd->phd$pq_pt_no_delete2 = 0;
        device_unlock ( pcb->pcb$l_spinlock, IPL$_ASTDEL, SMP_RELEASE );

        /* Done */
        return (SS$_WASCLR);
    }

    /* Swap PTE pointers if reversed */
    pte1 = l2pte1;
    pte2 = l2pte2;
    if (pte1 > pte2)
    {
        pte1 = l2pte2;
        pte2 = l2pte1;
    }

    /* If PHD fields are filled in, check range. If in range, all set. */
    if ((phd->phd$pq_pt_no_delete1 != 0) &&
        (pte1 >= phd->phd$pq_pt_no_delete1) &&
        (pte2 <= phd->phd$pq_pt_no_delete2))
        return (SS$_WASSET);

    /* Set PHD fields */
    device_lock ( pcb->pcb$l_spinlock, RAISE_IPL, NOSAVE_IPL );
    phd->phd$pq_pt_no_delete1 = pte1;
    phd->phd$pq_pt_no_delete2 = pte2;
    device_unlock ( pcb->pcb$l_spinlock, IPL$_ASTDEL, SMP_RELEASE );

    return (SS$_WASCLR);
}

/*+
 * $adjust_header_region - Internal function to adjust header region's RDE
 *
 * Inputs:      head - Header region's RDE
 *              next - New next RDE
 *
 * Output:      status - Error code if anything is inconsistent.
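 *
 */

/* Editor's illustration, not part of the original source: the pattern used
 * by $pt_no_delete above -- an unlocked fast-path check against a published
 * [lo,hi] range, with updates made under a lock.  pthread_mutex_t stands in
 * for the PCB spinlock, and protect_range and the globals are invented names;
 * this is a sketch of the idiom, not the VMS implementation. */

```c
#include <pthread.h>
#include <stdint.h>

static pthread_mutex_t range_lock = PTHREAD_MUTEX_INITIALIZER;
static uintptr_t no_delete_lo, no_delete_hi;    /* published range; 0 = none */

/* Returns 1 if [lo,hi] is already covered, else publishes it and returns 0. */
static int protect_range (uintptr_t lo, uintptr_t hi)
{
    if (lo > hi)                          /* order the endpoints first */
    {
        uintptr_t t = lo;
        lo = hi;
        hi = t;
    }

    /* Fast path: already covered by the published range, no lock needed. */
    if (no_delete_lo != 0 && lo >= no_delete_lo && hi <= no_delete_hi)
        return 1;

    /* Update under the lock, serializing with other writers. */
    pthread_mutex_lock (&range_lock);
    no_delete_lo = lo;
    no_delete_hi = hi;
    pthread_mutex_unlock (&range_lock);
    return 0;
}
```

/*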
 *
 */
static int $adjust_header_region (RDE *head,    /* Header region's RDE */
                                  RDE *next)    /* New next region */
{
    /* External variables */
    extern VOID_PQ const mmg$gq_process_space_limit;
    extern const int mmg$gl_page_size, mmg$gl_bwp_width;
    extern const int mmg$gl_va_bits;

    /* Local constants */
    const uint64 size_of_p0 = 0x40000000;
    const VOID_PQ p1_start_va = (VOID_PQ)0x40000000;
    const uint64 size_of_p1 = 0x40000000;

    /* Local variables */
    VOID_PQ start_va = 0;
    uint64 region_size = 0;

    /* Initialize start_va and region_size */
    if (next != head)
    {
        start_va = next->rde$pq_start_va;
        region_size = next->rde$q_region_size;
    }

    /* Ascending header region */
    if (!head->rde$v_descend)
    {
        if (next == head)       /* Header region is now empty */
        {
            if (head->rde$v_p0_space)
                head->rde$q_region_size = size_of_p0;
            else
            {
                /* Must be P2 space */
                head->rde$q_region_size = $$trunc(mmg$gq_process_space_limit);
                head->rde$q_region_size -= $$trunc(head->rde$pq_start_va);
            }
        }
        if (next != head)
        {
            /* Don't allow if less than header's first free va */
            if (start_va < head->rde$pq_first_free_va)
                return (SS$_VA_IN_USE);

            /* Adjust header region size */
            head->rde$q_region_size = $$trunc(start_va) -
                                      $$trunc(head->rde$pq_start_va);
        }
    }   /* End of ascending header region */

    /* Descending header region */
    if (head->rde$v_descend)
    {
        if (next == head)
        {
            /* Must be P1 region */
            head->rde$pq_start_va = p1_start_va;
            head->rde$q_region_size = size_of_p1;
        }
        if (next != head)
        {
            /* Don't allow if above header's first free va */
            if (((uint64)start_va + region_size) >
                ((uint64)head->rde$pq_first_free_va + mmg$gl_page_size))
                return (SS$_VA_IN_USE);

            /* Calculate new start_va, region_size */
            head->rde$pq_start_va = (VOID_PQ)((uint64)start_va + region_size);
            head->rde$q_region_size = (uint64)p1_start_va + size_of_p1;
            head->rde$q_region_size -= (uint64)head->rde$pq_start_va;
        }
    }

    /* All set */
    return (SS$_NORMAL);
}   /* End of $adjust_header_region */

/*
 *++
 * $remove_rde - Remove RDE from VA list and from region table.
* * input: rde - RDE to remove * head - RDE for VA list header * * output: none - This routine should not fail. RDE passed in must be on the * VA list and in the region table. *-- */ static void $remove_rde (RDE *rde, RDE *head) { /* External variables */ extern RDE * ctl$a_region_table[RDE$C_REGION_TABLE_SIZE]; /* Local variables */ int index; RDE *prev, *next; /* Remove from region table */ index = rde->rde$q_region_id & (RDE$C_REGION_TABLE_SIZE - 1); /* Special case: rde is at head of list */ prev = ctl$a_region_table[index]; if (rde == prev) ctl$a_region_table[index] = rde->rde$ps_table_link; else { /* Search for rde */ next = prev->rde$ps_table_link; while (next != rde) { prev = next; /* Accvio if we run off end of list */ next = next->rde$ps_table_link; } /* Found it, take rde off list */ prev->rde$ps_table_link = next->rde$ps_table_link; } /* Remove from VA list */ prev = rde->rde$ps_va_list_blink; next = rde->rde$ps_va_list_flink; prev->rde$ps_va_list_flink = next; next->rde$ps_va_list_blink = prev; /* Adjust header if necessary */ if (prev == head) $adjust_header_region(head, next); } /* End of $remove_rde */ /* *++ * $update_peak_counters - Update CTL$ peak counters * * input: none * * implicit inputs: * ctl$gl_pcb * ctl$gl_ipagefl * ctl$gq_virtpeak * * output: none * * implicit outputs: * ctl$gl_ipagefl * ctl$gq_virtpeak *-- */ static void $update_peak_counters (void) { /* External variables */ extern PCB * const ctl$gl_pcb; extern PHD * const ctl$gl_phd; extern uint32 ctl$gl_ipagefl; extern uint64 ctl$gq_virtpeak; extern const uint64 mmg$gq_process_va_pages; /* Local variables */ JIB *jib; uint32 pf_pages_used; uint64 virtual_pages_used; /* Update peak page file usage */ jib = ctl$gl_pcb->pcb$l_jib; pf_pages_used = jib->jib$l_pgflquota - jib->jib$l_pgflcnt; if (pf_pages_used > ctl$gl_phd->phd$l_ipagefl) { ctl$gl_phd->phd$l_ipagefl = pf_pages_used; ctl$gl_ipagefl = pf_pages_used; } /* Update peak virtual page usage */ virtual_pages_used = 
mmg$gq_process_va_pages - ctl$gl_phd->phd$q_free_pte_count; if (virtual_pages_used > ctl$gl_phd->phd$q_virtpeak) { ctl$gl_phd->phd$q_virtpeak = virtual_pages_used; ctl$gq_virtpeak = virtual_pages_used; } } /* End of $update_peak_counters */ /* $keep_in_ws * * Keep range of pages between va1 and va2 in the working set. * * Input: va1 - Beginning range of pages to be kept in working set. * -1 if no longer require pages to be kept. * va2 - Ending range of pages to be kept in working set. * 0 if only one page to keep. * -1 if no longer require pages to be kept. * * To keep just one page in the working set, $keep_in_ws us called as * follows: * * $keep_in_ws (va, 0); * * To keep a range of pages in the working set, $keep_in_ws is called * as follows: * * $keep_in_ws (start_va, end_va); * * When done requiring that pages be in working set, $keep_in_ws is called as * follows: * * $keep_in_ws (DONE_KIWS); */ #define DONE_KIWS (VOID_PQ)-1,(VOID_PQ)-1 static void $keep_in_ws (VOID_PQ va1, VOID_PQ va2) { /* External variable */ extern PCB * const ctl$gl_pcb; /* If VAs are both -1, we no longer want to keep pages in the WS */ if (((__int64)va1 == -1) && ((__int64)va2 == -1)) { ctl$gl_pcb->pcb$q_keep_in_ws = -1; ctl$gl_pcb->pcb$q_keep_in_ws2 = -1; return; } /* Store VAs in PCB fields for pagefault */ if (va2 != 0) ctl$gl_pcb->pcb$q_keep_in_ws2 = (__int64)va2; ctl$gl_pcb->pcb$q_keep_in_ws = (__int64)va1; /* Synchronize with pagefault if this process can be multithreaded */ if (ctl$gl_pcb->pcb$l_multithread > 1) { sys_lock(MMG,1,0); /* Synch with pagefault */ sys_unlock(MMG,IPL$_ASTDEL,0); } } /* $stx_to_entry * * Convert section table index to section table entry * * Input: phd - Process or system header * stx - Section table index * * Output: section table entry address * */ static SECDEF * $stx_to_entry (PHD *phd, int stx) { return ((SECDEF *)((int)phd + phd->phd$l_pst_base_offset - (SEC$C_LENGTH * stx))); } /* $is_mem_res - Is page a memory-resident section page * * Input: 
 *	PTE contents,
 *		if invalid, slave PTE
 *		if valid, slave or master PTE
 *
 * Output:	0, if not MRES
 *		1, if MRES
 *
 * Environment: MMG spinlock must be held
 */
static int $is_mem_res (PTE pte)
{
    /* External variables */
    extern PTE_PQ const mmg$gq_gpt_base;

    /* Local variables */
    PFN_PQ entry;
    PTE gpte;
    PFN_T pfn = 0;
    int gstx;
    SECDEF * gste;

    /* If pte is valid, get pfn */
    if (pte.pte$v_valid)
        pfn = pte.pte$v_pfn;
    else
    {
        /* PTE is invalid */
        if (pte.pte$v_typ1)
            return (0);		/* file backed process page */
        if (pte.pte$v_typ0)
        {
            /* global invalid */

            /* Get GPTE */
            gpte = mmg$gq_gpt_base[pte.pte$v_gptx];

            /* If GPTE valid, get pfn */
            if (gpte.pte$v_valid)
                pfn = gpte.pte$v_pfn;
            else
            {
                /* GPTE not valid, reject types other than section */
                if (!gpte.pte$v_typ0)
                    return (0);

                /* Get gstx from master PTE */
                gstx = gpte.pte$v_stx;
            }
        }
        else
        {
            /* transition or process dzro page */
            pfn = pte.pte$v_pfn;
            if (!pfn)
                return (0);	/* process dzro page */
        }
    }

    /* If we have a PFN, check for SHMGS page or get gstx from pfn data */
    if (pfn)
    {
        /* Get pfn database entry */
        entry = pfn_to_entry (pfn);

        /* Reject non global writable section backing storage */
        if (entry->pfn$v_pagtyp != PFN$C_GBLWRT)
            return (0);
        if (entry->pfn$v_shared)	/* SHMGS page is resident;	*/
            return (1);			/* no need to check the gste	*/
        if (!entry->pfn$v_typ0)
            return (0);
        if (entry->pfn$v_gblbak)
            return (0);

        /* Get gstx from BAK field in pfn database */
        gstx = entry->pfn$v_stx;
    }

    /* Check the global section table entry for MRES */
    gste = $stx_to_entry (mmg$gl_sysphd, gstx);
    if (!gste->sec$v_mres)
        return (0);

    return (1);
}

/* $is_mapped_shpts - Is page mapped by shared PTs
 *
 * Input:	PTE VA
 *
 * Output:	0, if not mapped with shared PTs
 *		1, if mapped with shared PTs
 *
 * Environment: Assumes IPL = IPL$_ASTDEL
 */
static int $is_mapped_shpts (PTE_PQ va_pte)
{
    /* External variables */

    /* Local variables */
    PFN_PQ entry;
    PTE_PQ l1pte, l2pte;
    SECDEF * gste;
    int old_ipl;
    PTE l1pte_contents;

    /* Get PFN database entry of L3PT PFN */
    l2pte =
        pte_va (va_pte);
    l1pte = pte_va (l2pte);

    /* Verify upper level PTs exist before touching L3PT */
    $read_pte (l1pte, &l1pte_contents);
    if ((!l1pte_contents.pte$v_valid) || (!l2pte->pte$v_valid))
        return (0);

    entry = pfn_to_entry (l2pte->pte$v_pfn);

    /* Is this a valid Galaxy shared page? */
    if ((!entry->pfn$v_shared) || (!entry->pfn$v_zeroed))
    {
        /* No: perform additional checks */

        /* If L3PT PFN page type is not global write, the PFN cannot be
           a shared PT PFN */
        if (entry->pfn$v_pagtyp != PFN$C_GBLWRT)
            return (0);

        /* If the type 0 bits and the WRT bits are clear, but the page
           type of a L3PT is global write, the entry is in an
           inconsistent state. */
        if ((!entry->pfn$v_wrt) && (!entry->pfn$v_typ0))
            bug_check (INCONMMGST, FATAL, COLD);

        /* Compute the address of the shared PT section and verify that
           the shared PTs bit is set; if it's clear, the GSTE is in an
           inconsistent state. */
        gste = $stx_to_entry (mmg$gl_sysphd, entry->pfn$v_stx);
        if (!gste->sec$v_shared_pts)
            bug_check (INCONMMGST, FATAL, COLD);
    }

    return (1);
}

/* $is_last_section_page - Is the specified page the last page in a
 *			   section
 *
 *			   This routine assumes that the VA in question
 *			   is part of a global section mapped into
 *			   a shared PT region.
 *
 * Input:	Virtual Address
 *
 * Output:	0, if not last section page
 *		1, if last section page
 *
 * Environment: Assumes IPL = IPL$_ASTDEL
 *
 */
static int $is_last_section_page (VOID_PQ va)
{
    /* External variables */
    extern int const mmg$gl_page_size;
    extern PTE_PQ const mmg$gq_gpt_base;
    extern SHM_DESC_PQ const glx$gpq_shm_desc_array;

    /* Local variables */
    PTE_PQ l1pte, l2pte, l3pte;
    PTE gpte;
    int old_ipl, pagelets_per_page;
    uint32 gptx, last_gptx, gstx, last_section_page, shm_reg_index;
    PFN_T pfn, last_section_pfn;
    PFN_PQ entry;
    SECDEF * gste;

    pagelets_per_page = mmg$gl_page_size / VA$C_PAGELET_SIZE;

    sys_lock (MMG, 1, &old_ipl);

    /* Verify upper level PTs exist before touching L3PT */
    l1pte = l1pte_va (va);
    l2pte = l2pte_va (va);
    l3pte = pte_va (va);
    if ((!l1pte->pte$v_valid) || (!l2pte->pte$v_valid))
    {
        sys_unlock (MMG, old_ipl, 1);
        return (0);
    }

    if ((l3pte->pte$v_valid) ||
        ((!l3pte->pte$v_valid) && (l3pte->pte$v_pfn) &&
         (!l3pte->pte$v_typ0)))
    {
        /* Valid or transition form: top part of PTE is a PFN */
        entry = pfn_to_entry (l3pte->pte$v_pfn);
        if (entry->pfn$v_shared)
        {
            /* Galactic section page: Find the gstx in the SHM_REG
               descriptor, then check that "our" page is the last page
               in this section */
            shm_reg_index = entry->pfn$r_shm_reg_id.shm_id$w_index;
            gstx = glx$gpq_shm_desc_array[shm_reg_index].shm_desc$l_gstx;
            gste = $stx_to_entry (mmg$gl_sysphd, gstx);
            last_section_page = gste->sec$l_vpx +
                (gste->sec$l_unit_cnt / pagelets_per_page) - 1;
            last_section_pfn =
                mmg$gq_gpt_base[last_section_page].pte$v_pfn;
            sys_unlock (MMG, old_ipl, 1);
            if (last_section_pfn == l3pte->pte$v_pfn)
                return 1;
            return 0;
        }
        else
        {
            /* Normal global page */
            gptx = (int) entry->pfn$q_pte_index;
            gste = $stx_to_entry (mmg$gl_sysphd, entry->pfn$v_stx);
        }
    }
    else
    {
        gptx = l3pte->pte$v_gptx;
        gpte = mmg$gq_gpt_base[gptx];
        if (gpte.pte$v_valid)
        {
            pfn = gpte.pte$v_pfn;
            entry = pfn_to_entry(pfn);
            gstx = entry->pfn$v_stx;
        }
        else
        {
            gstx = gpte.pte$v_stx;
        }
        gste = $stx_to_entry (mmg$gl_sysphd, gstx);
    }
    last_gptx = (gste->sec$l_unit_cnt / pagelets_per_page) - 1;
    last_gptx += gste->sec$l_vpx;

    sys_unlock (MMG, old_ipl, 1);

    if (gptx != last_gptx)
        return (0);

    /* If we reach this point, the specified page represents the last
       page of the section */
    return (1);
}

/* $is_valid_delete_range - Does the specified range of addresses wholly
 *			    contain its mappings to shared PT sections?
 *
 *			    This routine assumes the specified range
 *			    maps one or more global sections and that
 *			    the range wholly resides within a shared PT
 *			    region.
 *
 * Input:	PTE contents
 *
 * Output:	0, if start va or end va is mapped by shared PTs but does
 *		   not lie on an even PT page boundary
 *		1, if the range is "PT page inclusive"
 *
 * Environment: Assumes IPL = IPL$_ASTDEL
 *
 */
static int $is_valid_delete_range (VOID_PQ start_va, uint64 delete_length)
{
    /* External variables */
    extern const uint64 mmg$gq_page_size;
    extern const int mmg$gl_bwp_width;
    extern const int mmg$gl_level_width;
    extern const int *mmg$ar_gh_ps_vector;

    /* Local variables */
    VOID_PQ va, end_va;
    PTE_PQ end_va_pte;
    PTE pte_contents, zero_pte = {0};
    uint64 bytes_mapped_by_l3pt;

    bytes_mapped_by_l3pt =
        (uint64)1 << (mmg$gl_bwp_width + mmg$gl_level_width);

    /* If the first page to be deleted is mapped by a shared PT section,
       it must be aligned to the greater of the PT or GH alignment.
     */
    if ($is_mapped_shpts (pte_va (start_va)))
    {
        PTE l3pte = *pte_va (start_va);
        uint64 bytes_mapped_by_gh, min_align;

        if (l3pte.pte$v_valid)
        {
            bytes_mapped_by_gh = (uint64)1<pcb$q_keep_in_ws;
    int status;
    volatile unsigned int *frewsl;

    if(pcb->pcb$l_multithread <= 1)
    {
#ifdef MON_VERSION
        if (pcb->pcb$q_keep_in_ws != -1)
            bug_check(INCONMMGST, FATAL, COLD);
#endif
        pcb->pcb$q_keep_in_ws = va1;
        if (va2 != 0)
            pcb->pcb$q_keep_in_ws2 = va2;
        return;
    }

    /* Store VAs in PCB fields for pagefault */
    while(1)
    {
        while(pcb->pcb$q_keep_in_ws != -1) {}
#ifdef __alpha
        status = __CMP_STORE_QUAD(addr, -1, va1, addr);
#else
        status = __CMP_SWAP_QUAD(addr, -1, va1);
#endif
        if(status)
            break;
    }

    if (va2 != 0)
        pcb->pcb$q_keep_in_ws2 = va2;

    phd = pcb->pcb$l_phd;
    frewsl = &phd->phd$l_flags;
    __MB();
    while(*frewsl & PHD$M_FREWSLE_ACTIVE) {}
}

#ifdef __INITIAL_POINTER_SIZE	/* Defined whenever ptr size pragmas supported */
#pragma __required_pointer_size __restore	/* Restore the previously-defined required ptr size */
#endif

#endif /* __MMG_FUNCTIONS_LOADED */