From: Mark Rutland <mark.rutland@arm.com>
To: Tan Xiaojun
Cc: Laszlo Ersek, Ard Biesheuvel, Marc Zyngier, "edk2-devel@lists.01.org", Christoffer Dall
Date: Mon, 28 Jan 2019 13:46:18 +0000
Subject: Re: [PATCH] ArmPkg: update InvalidateInstructionCacheRange to flush only to PoU
Message-ID: <20190128134618.GB20888@lakrids.cambridge.arm.com>
In-Reply-To: <5C4EFF06.2050600@huawei.com>

On Mon, Jan 28, 2019 at 09:09:26PM +0800, Tan Xiaojun wrote:
> On 2019/1/28 19:54, Laszlo Ersek wrote:
> > On 01/28/19 11:46, Mark Rutland wrote:
> >> On Wed, Jan 23, 2019 at 10:54:56AM +0100, Laszlo Ersek wrote:
> >>> And even on the original (unspecified) hardware, the same binary works
> >>> frequently. My understanding is that there are five VMs executing reboot
> >>> loops in parallel, on the same host, and 4 out of 5 may hit the issue in
> >>> a reasonable time period (300 reboots or so).
> >>
> >> Interesting.
> >>
> >> Do you happen to know how many VMID bits the host has? If it has an
> >> 8-bit VMID, this could be indicative of some problem upon overflow.
> >
> > I'll let Tan Xiaojun (CC'd) answer this question.
> >
> >> Can you point us at the host kernel?
> >
> > In the report, Tan Xiaojun wrote "4.18.0-48.el8.aarch64"; I guess that
> > information is mostly useless in an upstream discussion. Unfortunately,
> > I couldn't reproduce the symptom at all (I know nothing about the
> > hardware in question), so I can't myself retest with an upstream host
> > kernel.
>
> I don't understand, what do you want me to do? What is the specific problem?

Could you let us know which CPU/system you've seen this issue with?

... and what the value of ID_AA64MMFR1_EL1.VMIDBits is?

Thanks,
Mark.