switch (uarch) {
	case cpuinfo_uarch_cortex_a5:
		/*
		 * Cortex-A5 Technical Reference Manual:
		 * 6.3.1. Micro TLB
		 * The first level of caching for the page table information is a micro TLB of
		 * 10 entries that is implemented on each of the instruction and data sides.
		 * 6.3.2. Main TLB
		 * Misses from the instruction and data micro TLBs are handled by a unified main TLB.
		 * The main TLB is 128-entry two-way set-associative.
		 */
		break;
	case cpuinfo_uarch_cortex_a7:
		/*
		 * Cortex-A7 MPCore Technical Reference Manual:
		 * 5.3.1. Micro TLB
		 * The first level of caching for the page table information is a micro TLB of
		 * 10 entries that is implemented on each of the instruction and data sides.
		 * 5.3.2. Main TLB
		 * Misses from the micro TLBs are handled by a unified main TLB. This is a 256-entry 2-way
		 * set-associative structure. The main TLB supports all the VMSAv7 page sizes of
		 * 4KB, 64KB, 1MB and 16MB in addition to the LPAE page sizes of 2MB and 1GB.
		 */
		break;
	case cpuinfo_uarch_cortex_a8:
		/*
		 * Cortex-A8 Technical Reference Manual:
		 * 6.1. About the MMU
		 * The MMU features include the following:
		 * - separate, fully-associative, 32-entry data and instruction TLBs
		 * - TLB entries that support 4KB, 64KB, 1MB, and 16MB pages
		 */
		break;
	case cpuinfo_uarch_cortex_a9:
		/*
		 * ARM Cortex-A9 Technical Reference Manual:
		 * 6.2.1 Micro TLB
		 * The first level of caching for the page table information is a micro TLB of 32 entries on the data side,
		 * and configurable 32 or 64 entries on the instruction side.
		 * 6.2.2 Main TLB
		 * The main TLB is implemented as a combination of:
		 * - A fully-associative, lockable array of four elements.
		 * - A 2-way associative structure of 2x32, 2x64, 2x128 or 2x256 entries.
		 */
		break;
	case cpuinfo_uarch_cortex_a15:
		/*
		 * ARM Cortex-A15 MPCore Processor Technical Reference Manual:
		 * 5.2.1. L1 instruction TLB
		 * The L1 instruction TLB is a 32-entry fully-associative structure. This TLB caches entries at the 4KB
		 * granularity of Virtual Address (VA) to Physical Address (PA) mapping only. If the page tables map the
		 * memory region to a larger granularity than 4K, it only allocates one mapping for the particular 4K region
		 * to which the current access corresponds.
		 * 5.2.2. L1 data TLB
		 * There are two separate 32-entry fully-associative TLBs that are used for data loads and stores,
		 * respectively. Similar to the L1 instruction TLB, both of these cache entries at the 4KB granularity of
		 * VA to PA mappings only. At implementation time, the Cortex-A15 MPCore processor can be configured with
		 * the -l1tlb_1m option, to have the L1 data TLB cache entries at both the 4KB and 1MB granularity.
		 * With this configuration, any translation that results in a 1MB or larger page is cached in the L1 data
		 * TLB as a 1MB entry. Any translation that results in a page smaller than 1MB is cached in the L1 data TLB
		 * as a 4KB entry. By default, all translations are cached in the L1 data TLB as a 4KB entry.
		 * 5.2.3. L2 TLB
		 * Misses from the L1 instruction and data TLBs are handled by a unified L2 TLB. This is a 512-entry 4-way
		 * set-associative structure. The L2 TLB supports all the VMSAv7 page sizes of 4K, 64K, 1MB and 16MB in
		 * addition to the LPAE page sizes of 2MB and 1GB.
		 */
		break;
	case cpuinfo_uarch_cortex_a17:
		/*
		 * ARM Cortex-A17 MPCore Processor Technical Reference Manual:
		 * 5.2.1. Instruction micro TLB
		 * The instruction micro TLB is implemented as a 32, 48 or 64 entry, fully-associative structure. This TLB
		 * caches entries at the 4KB and 1MB granularity of Virtual Address (VA) to Physical Address (PA) mapping
		 * only. If the translation tables map the memory region to a larger granularity than 4KB or 1MB, it only
		 * allocates one mapping for the particular 4KB region to which the current access corresponds.
		 * 5.2.2. Data micro TLB
		 * The data micro TLB is a 32 entry fully-associative TLB that is used for data loads and stores. The cache
		 * entries have a 4KB and 1MB granularity of VA to PA mappings only.
		 * 5.2.3. Unified main TLB
		 * Misses from the instruction and data micro TLBs are handled by a unified main TLB. This is a 1024 entry
		 * 4-way set-associative structure. The main TLB supports all the VMSAv7 page sizes of 4K, 64K, 1MB and 16MB
		 * in addition to the LPAE page sizes of 2MB and 1GB.
		 */
		break;
	case cpuinfo_uarch_cortex_a35:
		/*
		 * ARM Cortex-A35 Processor Technical Reference Manual:
		 * A6.2 TLB Organization
		 * Micro TLB
		 * The first level of caching for the translation table information is a micro TLB of ten entries that
		 * is implemented on each of the instruction and data sides.
		 * Main TLB
		 * A unified main TLB handles misses from the micro TLBs. It has a 512-entry, 2-way, set-associative
		 * structure and supports all VMSAv8 block sizes, except 1GB. If it fetches a 1GB block, the TLB splits
		 * it into 512MB blocks and stores the appropriate block for the lookup.
		 */
		break;
	case cpuinfo_uarch_cortex_a53:
		/*
		 * ARM Cortex-A53 MPCore Processor Technical Reference Manual:
		 * 5.2.1. Micro TLB
		 * The first level of caching for the translation table information is a micro TLB of ten entries that is
		 * implemented on each of the instruction and data sides.
		 * 5.2.2. Main TLB
		 * A unified main TLB handles misses from the micro TLBs. This is a 512-entry, 4-way, set-associative
		 * structure. The main TLB supports all VMSAv8 block sizes, except 1GB. If a 1GB block is fetched, it is
		 * split into 512MB blocks and the appropriate block for the lookup stored.
		 */
		break;
	case cpuinfo_uarch_cortex_a57:
		/*
		 * ARM® Cortex-A57 MPCore Processor Technical Reference Manual:
		 * 5.2.1 L1 instruction TLB
		 * The L1 instruction TLB is a 48-entry fully-associative structure. This TLB caches entries of three
		 * different page sizes, natively 4KB, 64KB, and 1MB, of VA to PA mappings. If the page tables map the memory
		 * region to a larger granularity than 1MB, it only allocates one mapping for the particular 1MB region to
		 * which the current access corresponds.
		 * 5.2.2 L1 data TLB
		 * The L1 data TLB is a 32-entry fully-associative TLB that is used for data loads and stores. This TLB
		 * caches entries of three different page sizes, natively 4KB, 64KB, and 1MB, of VA to PA mappings.
		 * 5.2.3 L2 TLB
		 * Misses from the L1 instruction and data TLBs are handled by a unified L2 TLB. This is a 1024-entry 4-way
		 * set-associative structure. The L2 TLB supports the page sizes of 4K, 64K, 1MB and 16MB. It also supports
		 * page sizes of 2MB and 1GB for the long descriptor format translation in AArch32 state and in AArch64 state
		 * when using the 4KB translation granule. In addition, the L2 TLB supports the 512MB page map size defined
		 * for the AArch64 translations that use a 64KB translation granule.
		 */
		break;
}
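
/*
 * The cases above only quote the TLB geometry documented in each Technical Reference
 * Manual; none of them record the values anywhere. The sketch below is one illustrative
 * way those numbers could be captured, shown for the Cortex-A53 figures quoted in
 * sections 5.2.1 and 5.2.2 above (ten-entry micro TLBs, a 512-entry 4-way main TLB).
 * The struct and function names are hypothetical assumptions, not part of cpuinfo; in a
 * real implementation they would live at file scope, outside the switch above.
 */
#include <stdint.h> /* for uint32_t; normally included at the top of the file */

/* Hypothetical description of a single TLB level. */
struct tlb_level {
	uint32_t entries;       /* total number of entries */
	uint32_t associativity; /* number of ways; 0 denotes fully-associative */
};

/* Hypothetical per-core TLB hierarchy: split micro TLBs backed by a unified main TLB. */
struct tlb_hierarchy {
	struct tlb_level instruction_micro_tlb;
	struct tlb_level data_micro_tlb;
	struct tlb_level main_tlb;
};

/* Fill in a Cortex-A53 TLB hierarchy using the TRM figures quoted above. */
static void describe_cortex_a53_tlb(struct tlb_hierarchy* tlb) {
	/* 5.2.1. Micro TLB: ten entries on each of the instruction and data sides. */
	tlb->instruction_micro_tlb = (struct tlb_level){.entries = 10, .associativity = 0};
	tlb->data_micro_tlb = (struct tlb_level){.entries = 10, .associativity = 0};
	/* 5.2.2. Main TLB: a unified 512-entry, 4-way, set-associative structure. */
	tlb->main_tlb = (struct tlb_level){.entries = 512, .associativity = 4};
}

/*
 * Usage sketch: a caller interested in main-TLB reach for 4KB pages could compute
 * tlb.main_tlb.entries * 4096 bytes of address space covered without a TLB miss
 * (512 entries * 4KB = 2MB for the Cortex-A53 values above).
 */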