Date: Mon, 20 Mar 2023 16:58:54 +0100
From: "Gerd Hoffmann" <kraxel@redhat.com>
To: Fiona Ebner
Cc: devel@edk2.groups.io, Jordan Justen, Pawel Polawski, Jiewen Yao,
	Oliver Steffen, Ard Biesheuvel, Thomas Lamprecht
Subject: Re: [PATCH v2 2/4] OvmfPkg/PlatformInitLib: detect physical address space
Message-ID: <20230320155854.tsrojzzjqxzzszmd@sirius.home.kraxel.org>
References: <20221004134728.55499-1-kraxel@redhat.com>
	<20221004134728.55499-3-kraxel@redhat.com>
	<5259991b-964c-4378-f206-9991053f7c7e@proxmox.com>
	<20230317140148.7ioafsne2asymfxi@sirius.home.kraxel.org>
	<550ddd42-2f53-b75e-c819-acfc12fd620f@proxmox.com>
In-Reply-To: <550ddd42-2f53-b75e-c819-acfc12fd620f@proxmox.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline

Hi,

> It seems that Page1GSupport is already TRUE in my case, so
> unfortunately, the suggested changes don't help.
>
> Before commit bbda386d25, PhysMemAddressWidth is 36, after the commit,
> it's 47. I tried with hardcoding different values:
> 45 - My VM boots fine.
> 46 - I run into a "KVM internal error. Suberror: 1" during Linux boot
> (that's also what happens with 47 and 750 MiB of memory).
> 47 - Hangs right away and display is never initialized.

Hmm. "KVM internal error" sounds like this could be a linux kernel bug.

I can't reproduce this, although I'm not testing with kvm but with tcg
because I don't have a machine with 48 phys-bits at hand. RedHat QE
didn't run into any problems either, although it certainly could be
they didn't test guests with only 512 MB.

> Is there any interest to use a smaller limit than 47 from upstream's
> perspective? Admittedly, it is a rather niche case to use OVMF with
> so little memory.

Well, the problem OVMF has (compared to physical platforms) is that it
needs to scale from tiny (512 MB) to huge (multi-TB). There are some
heuristics in place to deal with that, like the one limiting the
address space used in case there is no support for gigabyte pages.
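For illustration, a minimal sketch of the shape of that heuristic (not
the verbatim PlatformInitLib code: the function name is made up, it
assumes EDK2's BaseLib for AsmCpuid(), and it skips the CPUID
0x80000000 max-leaf checks a real implementation needs; the leaves,
the feature bit, and the 40-bit clamp should match what current OVMF
does):

  #include <Base.h>              // UINT8, UINT32, BOOLEAN, BIT26
  #include <Library/BaseLib.h>   // AsmCpuid()

  STATIC
  UINT8
  SketchPhysMemAddressWidth (
    VOID
    )
  {
    UINT32   RegEax;
    UINT32   RegEdx;
    UINT8    PhysBits;
    BOOLEAN  Page1GSupport;

    //
    // gigabyte page support: CPUID 0x80000001, EDX bit 26
    //
    AsmCpuid (0x80000001, NULL, NULL, NULL, &RegEdx);
    Page1GSupport = (BOOLEAN)((RegEdx & BIT26) != 0);

    //
    // physical address width: CPUID 0x80000008, EAX bits 7:0
    //
    AsmCpuid (0x80000008, &RegEax, NULL, NULL, NULL);
    PhysBits = (UINT8)RegEax;

    //
    // Without gigabyte pages, identity-mapping a huge address
    // space needs a lot of page-table memory, so play safe and
    // clamp the usable address space to 1 TiB (40 bits).
    //
    if (!Page1GSupport && (PhysBits > 40)) {
      PhysBits = 40;
    }

    return PhysBits;
  }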
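Rough numbers for why the gigabyte-page distinction matters (standard
x86-64 4-level paging, 4 KiB per table, identity-mapping the whole
guest address space):

  47 phys-bits = 128 TiB:
    1 GiB pages: 128 TiB / 512 GiB per PDPT =    256 tables ->  ~1 MiB
    2 MiB pages: 128 TiB /   1 GiB per PD   = 131072 tables -> 512 MiB

  40 phys-bits = 1 TiB:
    2 MiB pages:   1 TiB /   1 GiB per PD   =   1024 tables ->   4 MiB

So without gigabyte pages a full 47-bit identity map wouldn't even fit
into a 512 MB guest; with them it costs only about a megabyte.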
So, just lowering the limit doesn't look like a good plan. We can try
to tweak the heuristics so OVMF picks better defaults for both huge
and tiny guests. For that it would be good to figure out what the
actual root cause is. Is it really memory pressure? With gigabyte
pages available it should not need that much memory for page tables
(see the numbers above) ...

In any case you can run qemu with "-cpu $name,phys-bits=" (for
example "-cpu host,phys-bits=40") to make the guest address space
smaller.

take care,
  Gerd