Commit 1bc318c

x86/mm/ident_map: Use gbpages only where full GB page should be mapped.
jira LE-3201
Rebuild_History Non-Buildable kernel-rt-4.18.0-553.30.1.rt7.371.el8_10
commit-author Steve Wahl <[email protected]>
commit d794734

When ident_pud_init() uses only gbpages to create identity maps, large
ranges of addresses not actually requested can be included in the
resulting table; a 4K request will map a full GB. On UV systems, this
ends up including regions that will cause hardware to halt the system
if accessed (these are marked "reserved" by BIOS). Even processor
speculation into these regions is enough to trigger the system halt.

Only use gbpages when map creation requests include the full GB page
of space. Fall back to using smaller 2M pages when only portions of a
GB page are included in the request.

No attempt is made to coalesce mapping requests. If a request requires
a map entry at the 2M (pmd) level, subsequent mapping requests within
the same 1G region will also be at the pmd level, even if adjacent or
overlapping such requests could have been combined to map a full
gbpage. Existing usage starts with larger regions and then adds
smaller regions, so this should not have any great consequence.

[ dhansen: fix up comment formatting, simplify changelog ]

Signed-off-by: Steve Wahl <[email protected]>
Signed-off-by: Dave Hansen <[email protected]>
Cc: [email protected]
Link: https://lore.kernel.org/all/20240126164841.170866-1-steve.wahl%40hpe.com
(cherry picked from commit d794734)
Signed-off-by: Jonathan Maple <[email protected]>
1 parent 7a7082b commit 1bc318c
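
To make the new eligibility test concrete, here is a minimal, stand-alone sketch of the check the patch introduces. The helper can_use_gbpage() and the user-space driver are hypothetical, and the PUD_SIZE/PUD_MASK definitions assume x86-64 4-level paging; they are not the kernel's own definitions.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Assumed constants: 1 GiB pud entries under x86-64 4-level paging. */
#define PUD_SHIFT 30
#define PUD_SIZE  (1ULL << PUD_SHIFT)
#define PUD_MASK  (~(PUD_SIZE - 1))

/*
 * Hypothetical stand-alone version of the test this patch adds: a
 * gbpage is used only when the request [addr, next) spans the whole
 * 1 GiB region, i.e. both boundaries are GB-aligned.
 */
static bool can_use_gbpage(uint64_t addr, uint64_t next, bool direct_gbpages)
{
        bool use_gbpage = direct_gbpages;

        /* Don't use gbpage if it maps more than the requested region. */
        use_gbpage &= ((addr & ~PUD_MASK) == 0);  /* aligned start */
        use_gbpage &= ((next & ~PUD_MASK) == 0);  /* aligned end   */

        return use_gbpage;
}

int main(void)
{
        /* A full, aligned GB: eligible for a gbpage (prints 1). */
        printf("%d\n", can_use_gbpage(0x40000000ULL, 0x80000000ULL, true));
        /* A 4K request inside a GB region: falls back to 2M pmd pages (prints 0). */
        printf("%d\n", can_use_gbpage(0x40000000ULL, 0x40001000ULL, true));
        return 0;
}

Before the patch, info->direct_gbpages alone decided the question, so the second request above would have mapped the entire GB containing it.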

File tree

1 file changed: +18 -5 lines

arch/x86/mm/ident_map.c

Lines changed: 18 additions & 5 deletions
@@ -26,18 +26,31 @@ static int ident_pud_init(struct x86_mapping_info *info, pud_t *pud_page,
 	for (; addr < end; addr = next) {
 		pud_t *pud = pud_page + pud_index(addr);
 		pmd_t *pmd;
+		bool use_gbpage;
 
 		next = (addr & PUD_MASK) + PUD_SIZE;
 		if (next > end)
 			next = end;
 
-		if (info->direct_gbpages) {
-			pud_t pudval;
+		/* if this is already a gbpage, this portion is already mapped */
+		if (pud_large(*pud))
+			continue;
+
+		/* Is using a gbpage allowed? */
+		use_gbpage = info->direct_gbpages;
 
-			if (pud_present(*pud))
-				continue;
+		/* Don't use gbpage if it maps more than the requested region. */
+		/* at the beginning: */
+		use_gbpage &= ((addr & ~PUD_MASK) == 0);
+		/* ... or at the end: */
+		use_gbpage &= ((next & ~PUD_MASK) == 0);
+
+		/* Never overwrite existing mappings */
+		use_gbpage &= !pud_present(*pud);
+
+		if (use_gbpage) {
+			pud_t pudval;
 
-			addr &= PUD_MASK;
 			pudval = __pud((addr - info->offset) | info->page_flag);
 			set_pud(pud, pudval);
 			continue;