* [PATCH] OvmfPkg: End timer interrupt later to avoid stack overflow under load
From: Igor Druzhinin @ 2020-06-16 2:49 UTC
To: devel
Cc: jordan.l.justen, lersek, ard.biesheuvel, anthony.perard, julien, Igor Druzhinin

RestoreTPL called while at TPL_HIGH_LEVEL unconditionally enables interrupts
even if called in interrupt handler. That opens a window while interrupt
is not completely handled but another interrupt could be accepted.

If a VM starts on a heavily loaded host hundreds of periodic timer interrupts
might be queued while vCPU is descheduled (the behavior is typical for
a Xen host). The next time vCPU is scheduled again all of them get
delivered back to back causing OVMF to accept each one without finishing
a previous one and cleaning up the stack. That quickly results in stack
overflow and a triple fault.

Fix it by postponing sending EOI until we finished processing the current
tick giving interrupt handler opportunity to clean up the stack before
accepting the next tick.

Signed-off-by: Igor Druzhinin <igor.druzhinin@citrix.com>
---

Laszlo, Anthony,

Do you think it's the right way to address it?

Alternatively, we might avoid calling RaiseTPL in interrupt handler at all
like it's done in HpetTimer implementation for instance.

Or we might try to address it in Raise/RestoreTPL calls by saving/restoring
interrupt state along with TPL.

---
 OvmfPkg/8254TimerDxe/Timer.c      | 5 +++--
 OvmfPkg/XenTimerDxe/XenTimerDxe.c | 5 +++--
 2 files changed, 6 insertions(+), 4 deletions(-)

diff --git a/OvmfPkg/8254TimerDxe/Timer.c b/OvmfPkg/8254TimerDxe/Timer.c
index 67e22f5..fd1691b 100644
--- a/OvmfPkg/8254TimerDxe/Timer.c
+++ b/OvmfPkg/8254TimerDxe/Timer.c
@@ -79,8 +79,6 @@ TimerInterruptHandler (
 
   OriginalTPL = gBS->RaiseTPL (TPL_HIGH_LEVEL);
 
-  mLegacy8259->EndOfInterrupt (mLegacy8259, Efi8259Irq0);
-
   if (mTimerNotifyFunction != NULL) {
     //
     // @bug : This does not handle missed timer interrupts
@@ -89,6 +87,9 @@
   }
 
   gBS->RestoreTPL (OriginalTPL);
+
+  DisableInterrupts ();
+  mLegacy8259->EndOfInterrupt (mLegacy8259, Efi8259Irq0);
 }
 
 /**
diff --git a/OvmfPkg/XenTimerDxe/XenTimerDxe.c b/OvmfPkg/XenTimerDxe/XenTimerDxe.c
index 9f9e047..0bec593 100644
--- a/OvmfPkg/XenTimerDxe/XenTimerDxe.c
+++ b/OvmfPkg/XenTimerDxe/XenTimerDxe.c
@@ -61,8 +61,6 @@ TimerInterruptHandler (
 
   OriginalTPL = gBS->RaiseTPL (TPL_HIGH_LEVEL);
 
-  SendApicEoi();
-
   if (mTimerNotifyFunction != NULL) {
     //
     // @bug : This does not handle missed timer interrupts
@@ -71,6 +69,9 @@
   }
 
   gBS->RestoreTPL (OriginalTPL);
+
+  DisableInterrupts ();
+  SendApicEoi ();
 }
 
 /**
-- 
2.7.4

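For reference, with the two Timer.c hunks above applied, the 8254 handler ends
up shaped roughly as follows. This is a sketch assembled from the diff context
only: the function prototype, the mTimerNotifyFunction call and the added
comments are reconstructed for illustration, not quoted verbatim from the file.

  VOID
  EFIAPI
  TimerInterruptHandler (
    IN EFI_EXCEPTION_TYPE  InterruptType,
    IN EFI_SYSTEM_CONTEXT  SystemContext
    )
  {
    EFI_TPL  OriginalTPL;

    //
    // Mask interrupt delivery on the CPU while this tick is handled.
    //
    OriginalTPL = gBS->RaiseTPL (TPL_HIGH_LEVEL);

    if (mTimerNotifyFunction != NULL) {
      //
      // @bug : This does not handle missed timer interrupts
      //
      mTimerNotifyFunction (mTimerPeriod);
    }

    //
    // RestoreTPL() unmasks interrupts and may dispatch queued event
    // notification functions on this stack.
    //
    gBS->RestoreTPL (OriginalTPL);

    //
    // Only now, with interrupts masked again, tell the PIC the IRQ is done,
    // so the next backlogged tick cannot nest on top of this stack frame.
    //
    DisableInterrupts ();
    mLegacy8259->EndOfInterrupt (mLegacy8259, Efi8259Irq0);
  }
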
* Re: [PATCH] OvmfPkg: End timer interrupt later to avoid stack overflow under load
From: Laszlo Ersek @ 2020-06-16 18:42 UTC
To: Igor Druzhinin, devel
Cc: jordan.l.justen, ard.biesheuvel, anthony.perard, julien, Ray Ni, Paolo Bonzini

(+Paolo, +Ray)

On 06/16/20 04:49, Igor Druzhinin wrote:
> RestoreTPL called while at TPL_HIGH_LEVEL unconditionally enables interrupts
> even if called in interrupt handler. That opens a window while interrupt
> is not completely handled but another interrupt could be accepted.
>
> If a VM starts on a heavily loaded host hundreds of periodic timer interrupts
> might be queued while vCPU is descheduled (the behavior is typical for
> a Xen host). The next time vCPU is scheduled again all of them get
> delivered back to back causing OVMF to accept each one without finishing
> a previous one and cleaning up the stack. That quickly results in stack
> overflow and a triple fault.
>
> Fix it by postponing sending EOI until we finished processing the current
> tick giving interrupt handler opportunity to clean up the stack before
> accepting the next tick.
>
> Signed-off-by: Igor Druzhinin <igor.druzhinin@citrix.com>
> ---
>
> Laszlo, Anthony,
>
> Do you think it's the right way to address it?
>
> Alternatively, we might avoid calling RaiseTPL in interrupt handler at all
> like it's done in HpetTimer implementation for instance.
>
> Or we might try to address it in Raise/RestoreTPL calls by saving/restoring
> interrupt state along with TPL.
>
> ---
>  OvmfPkg/8254TimerDxe/Timer.c      | 5 +++--
>  OvmfPkg/XenTimerDxe/XenTimerDxe.c | 5 +++--
>  2 files changed, 6 insertions(+), 4 deletions(-)

If I understand correctly, TimerInterruptHandler()
[OvmfPkg/8254TimerDxe/Timer.c] currently does the following:

- RaiseTPL (TPL_HIGH_LEVEL) --> mask interrupts from being delivered

- mLegacy8259->EndOfInterrupt() --> permit the PIC to generate further
  interrupts (= make them pending)

- RestoreTPL() --> unmask interrupts (allow delivery)

RestoreTPL() is always expected to invoke handlers (on its own stack)
that have just been unmasked, so that behavior is not unexpected, in my
opinion.

What seems unexpected is the queueing of a huge number of timer
interrupts. I would think a timer interrupt is either pending or not
pending (i.e. if it's already pending, then the next generated interrupt
is coalesced, not queued). While there would still be a window between
the EOI and the unmasking, I don't think it would normally allow for a
*huge* number of queued interrupts (and consequently a stack overflow).

So I basically see the root of the problem in the interrupts being
queued rather than coalesced. I'm pretty unfamiliar with this x86 area
(= the 8259 PIC in general), but the following wiki article seems to
agree with my suspicion:

https://wiki.osdev.org/8259_PIC#How_does_the_8259_PIC_chip_work.3F

    [...] and whether there's an interrupt already pending. If the
    channel is unmasked and there's no interrupt pending, the PIC will
    raise the interrupt line [...]

Can we say that the interrupt queueing (as opposed to coalescing) is a
Xen issue?

(Hmmm... maybe the hypervisor *has* to queue the timer interrupts,
otherwise some of them would simply be lost, and the guest would lose
track of time.)

Either way, I'm not sure what the best approach is. This driver was
moved under OvmfPkg from PcAtChipsetPkg in commit 1a3ffdff82e6
("OvmfPkg: Copy 8254TimerDxe driver from PcAtChipsetPkg", 2019-04-11).
HpetTimerDxe also lives under PcAtChipsetPkg.

So I think I'll have to rely on the expertise of Ray here (CC'd).

Also, I recall a recent-ish QEMU commit that seems vaguely related
(i.e., to timer interrupt coalescing -- see 7a3e29b12f5a, "mc146818rtc:
fix timer interrupt reinjection again", 2019-11-19), so I'm CC'ing Paolo
too.

Some more comments / questions below:

>
> diff --git a/OvmfPkg/8254TimerDxe/Timer.c b/OvmfPkg/8254TimerDxe/Timer.c
> index 67e22f5..fd1691b 100644
> --- a/OvmfPkg/8254TimerDxe/Timer.c
> +++ b/OvmfPkg/8254TimerDxe/Timer.c
> @@ -79,8 +79,6 @@ TimerInterruptHandler (
>
>    OriginalTPL = gBS->RaiseTPL (TPL_HIGH_LEVEL);
>
> -  mLegacy8259->EndOfInterrupt (mLegacy8259, Efi8259Irq0);
> -
>    if (mTimerNotifyFunction != NULL) {
>      //
>      // @bug : This does not handle missed timer interrupts
> @@ -89,6 +87,9 @@
>    }
>
>    gBS->RestoreTPL (OriginalTPL);
> +
> +  DisableInterrupts ();
> +  mLegacy8259->EndOfInterrupt (mLegacy8259, Efi8259Irq0);
>  }

So this briefly (temporarily) unmasks interrupt delivery (between
RestoreTPL() and DisableInterrupts()) while the PIC is still blocked
from generating more, and then unblocks the PIC.

It looks plausible for preventing the unbounded recursion per se, but
why is it safe to leave the function with interrupts disabled? Before
the patch, that didn't use to be the case.

I'm generally concerned about such an "indiscriminate" 8254TimerDxe
patch, because I haven't seen a similar report on QEMU/KVM yet. And
again I'm quite out of my depth with the 8259 / 8254.

Even if we end up merging a patch like this, I feel tempted to restrict
it to Xen.

... Regarding XenTimerDxe, I don't have the slightest idea, unfortunately.

Thanks
Laszlo

>
> /**
> diff --git a/OvmfPkg/XenTimerDxe/XenTimerDxe.c b/OvmfPkg/XenTimerDxe/XenTimerDxe.c
> index 9f9e047..0bec593 100644
> --- a/OvmfPkg/XenTimerDxe/XenTimerDxe.c
> +++ b/OvmfPkg/XenTimerDxe/XenTimerDxe.c
> @@ -61,8 +61,6 @@ TimerInterruptHandler (
>
>    OriginalTPL = gBS->RaiseTPL (TPL_HIGH_LEVEL);
>
> -  SendApicEoi();
> -
>    if (mTimerNotifyFunction != NULL) {
>      //
>      // @bug : This does not handle missed timer interrupts
> @@ -71,6 +69,9 @@
>    }
>
>    gBS->RestoreTPL (OriginalTPL);
> +
> +  DisableInterrupts ();
> +  SendApicEoi ();
>  }
>
> /**

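For readers following along, the pre-patch ordering that Laszlo's three-bullet
reading above describes boils down to roughly the following. This is a
simplified sketch of the handler body, not verbatim driver code, with the
problematic window marked in comments:

  OriginalTPL = gBS->RaiseTPL (TPL_HIGH_LEVEL);            // CPU: interrupts masked
  mLegacy8259->EndOfInterrupt (mLegacy8259, Efi8259Irq0);  // PIC: free to raise IRQ0 again
  if (mTimerNotifyFunction != NULL) {
    mTimerNotifyFunction (mTimerPeriod);
  }
  gBS->RestoreTPL (OriginalTPL);                           // CPU: interrupts unmasked here
  //
  // If another tick is already pending at this point, it is delivered
  // before this invocation returns, so every backlogged tick adds one
  // more TimerInterruptHandler() frame to the same stack.
  //
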
* Re: [PATCH] OvmfPkg: End timer interrupt later to avoid stack overflow under load
From: Igor Druzhinin @ 2020-06-17 3:16 UTC
To: Laszlo Ersek, devel, xen-devel
Cc: jordan.l.justen, ard.biesheuvel, anthony.perard, julien, Ray Ni, Paolo Bonzini

On 16/06/2020 19:42, Laszlo Ersek wrote
> If I understand correctly, TimerInterruptHandler()
> [OvmfPkg/8254TimerDxe/Timer.c] currently does the following:
>
> - RaiseTPL (TPL_HIGH_LEVEL) --> mask interrupts from being delivered
>
> - mLegacy8259->EndOfInterrupt() --> permit the PIC to generate further
>   interrupts (= make them pending)
>
> - RestoreTPL() --> unmask interrupts (allow delivery)
>
> RestoreTPL() is always expected to invoke handlers (on its own stack)
> that have just been unmasked, so that behavior is not unexpected, in my
> opinion.

Yes, this is where I'd like to have a confirmation - opening a window
for uncontrollable number of nested interrupts with a small stack
looks dangerous.

> What seems unexpected is the queueing of a huge number of timer
> interrupts. I would think a timer interrupt is either pending or not
> pending (i.e. if it's already pending, then the next generated interrupt
> is coalesced, not queued). While there would still be a window between
> the EOI and the unmasking, I don't think it would normally allow for a
> *huge* number of queued interrupts (and consequently a stack overflow).

It's not a window between EOI and unmasking but the very fact vCPU is
descheduled for a considerable amount of time that causes backlog of
timer interrupts to build up. This is Xen default behavior and is
configurable (there are several timer modes including coalescing you
mention). That is done for compatibility with some guests basing time
accounting on the number of periodic interrupts they receive.

> So I basically see the root of the problem in the interrupts being
> queued rather than coalesced. I'm pretty unfamiliar with this x86 area
> (= the 8259 PIC in general), but the following wiki article seems to
> agree with my suspicion:
>
> https://wiki.osdev.org/8259_PIC#How_does_the_8259_PIC_chip_work.3F
>
>     [...] and whether there's an interrupt already pending. If the
>     channel is unmasked and there's no interrupt pending, the PIC will
>     raise the interrupt line [...]
>
> Can we say that the interrupt queueing (as opposed to coalescing) is a
> Xen issue?

I can admit that the whole issue might be Xen specific if that form
of timer mode is not used in QEMU-KVM. What mode is typical there then?
We might consider switching Xen to a different mode if so, as I believe
those guests are not in support for many years.

> (Hmmm... maybe the hypervisor *has* to queue the timer interrupts,
> otherwise some of them would simply be lost, and the guest would lose
> track of time.)
>
> Either way, I'm not sure what the best approach is. This driver was
> moved under OvmfPkg from PcAtChipsetPkg in commit 1a3ffdff82e6
> ("OvmfPkg: Copy 8254TimerDxe driver from PcAtChipsetPkg", 2019-04-11).
> HpetTimerDxe also lives under PcAtChipsetPkg.
>
> So I think I'll have to rely on the expertise of Ray here (CC'd).

Also note that since the issue might be Xen specific we might want to
try to fix it in XenTimer only - I modified 8254Timer due to the fact
Xen is still present in general config (but that should soon go away).

> Also, I recall a recent-ish QEMU commit that seems vaguely related
> (i.e., to timer interrupt coalescing -- see 7a3e29b12f5a, "mc146818rtc:
> fix timer interrupt reinjection again", 2019-11-19), so I'm CC'ing Paolo
> too.

Hmm that looks more like a RTC implementation specific issue.

> Some more comments / questions below:
>
>>
>> diff --git a/OvmfPkg/8254TimerDxe/Timer.c b/OvmfPkg/8254TimerDxe/Timer.c
>> index 67e22f5..fd1691b 100644
>> --- a/OvmfPkg/8254TimerDxe/Timer.c
>> +++ b/OvmfPkg/8254TimerDxe/Timer.c
>> @@ -79,8 +79,6 @@ TimerInterruptHandler (
>>
>>    OriginalTPL = gBS->RaiseTPL (TPL_HIGH_LEVEL);
>>
>> -  mLegacy8259->EndOfInterrupt (mLegacy8259, Efi8259Irq0);
>> -
>>    if (mTimerNotifyFunction != NULL) {
>>      //
>>      // @bug : This does not handle missed timer interrupts
>> @@ -89,6 +87,9 @@
>>    }
>>
>>    gBS->RestoreTPL (OriginalTPL);
>> +
>> +  DisableInterrupts ();
>> +  mLegacy8259->EndOfInterrupt (mLegacy8259, Efi8259Irq0);
>>  }
>
> So this briefly (temporarily) unmasks interrupt delivery (between
> RestoreTPL() and DisableInterrupts()) while the PIC is still blocked
> from generating more, and then unblocks the PIC.
>
> It looks plausible for preventing the unbounded recursion per se, but
> why is it safe to leave the function with interrupts disabled? Before
> the patch, that didn't use to be the case.

Quickly looking through the code it appears to me the first thing that
caller does after interrupt handler - it clears interrupt flag to make
sure those disabled. So I don't see any assumption that interrupts
should be enabled on exiting. But I might not know about all of the
possible combinations here.

Igor

* Re: [PATCH] OvmfPkg: End timer interrupt later to avoid stack overflow under load
From: Laszlo Ersek @ 2020-06-17 12:44 UTC
To: Igor Druzhinin, devel, xen-devel
Cc: jordan.l.justen, ard.biesheuvel, anthony.perard, julien, Ray Ni, Paolo Bonzini

On 06/17/20 05:16, Igor Druzhinin wrote:
> On 16/06/2020 19:42, Laszlo Ersek wrote
>> If I understand correctly, TimerInterruptHandler()
>> [OvmfPkg/8254TimerDxe/Timer.c] currently does the following:
>>
>> - RaiseTPL (TPL_HIGH_LEVEL) --> mask interrupts from being delivered
>>
>> - mLegacy8259->EndOfInterrupt() --> permit the PIC to generate further
>>   interrupts (= make them pending)
>>
>> - RestoreTPL() --> unmask interrupts (allow delivery)
>>
>> RestoreTPL() is always expected to invoke handlers (on its own stack)
>> that have just been unmasked, so that behavior is not unexpected, in my
>> opinion.
>
> Yes, this is where I'd like to have a confirmation - opening a window
> for uncontrollable number of nested interrupts with a small stack
> looks dangerous.

Sorry, I meant the above more generally. The sentence

    RestoreTPL() is always expected to invoke handlers (on its own
    stack) that have just been unmasked

doesn't only refer to actual timer hardware interrupts (in connection to
TPL_HIGH_LEVEL), but also to invoking event notification functions that
have been queued while running at the raised TPL.

Quoting "EFI_BOOT_SERVICES.CreateEvent()" from the spec:

    Events exist in one of two states, “waiting” or “signaled.” When an
    event is created, firmware puts it in the “waiting” state. When the
    event is signaled, firmware changes its state to “signaled” and, if
    EVT_NOTIFY_SIGNAL is specified, places a call to its notification
    function in a FIFO queue. There is a queue for each of the “basic”
    task priority levels defined in Section 7.1 (TPL_CALLBACK, and
    TPL_NOTIFY). The functions in these queues are invoked in FIFO
    order, starting with the highest priority level queue and proceeding
    to the lowest priority queue that is unmasked by the current TPL. If
    the current TPL is equal to or greater than the queued notification,
    it will wait until the TPL is lowered via
    EFI_BOOT_SERVICES.RestoreTPL().

In practice, when the event is signaled, and the current TPL is not
masking the TPL of the associated notify function, then the notify
function is called internally to signaling the event. Otherwise, if the
unmasking occurs via RestoreTPL(), then the queued notification
functions are invoked on the stack of RestoreTPL() -- in other words,
internally to the RestoreTPL() function call itself.

So all I meant was that notification functions running internally to
RestoreTPL() was by design. What's unexpected is the "uncontrollable
number" of nested interrupts.

>
>> What seems unexpected is the queueing of a huge number of timer
>> interrupts. I would think a timer interrupt is either pending or not
>> pending (i.e. if it's already pending, then the next generated interrupt
>> is coalesced, not queued). While there would still be a window between
>> the EOI and the unmasking, I don't think it would normally allow for a
>> *huge* number of queued interrupts (and consequently a stack overflow).
>
> It's not a window between EOI and unmasking but the very fact vCPU is
> descheduled for a considerable amount of time that causes backlog of
> timer interrupts to build up.
> This is Xen default behavior and is
> configurable (there are several timer modes including coalescing
> you mention). That is done for compatibility with some guests basing
> time accounting on the number of periodic interrupts they receive.

OK, thanks for explaining.

>
>> So I basically see the root of the problem in the interrupts being
>> queued rather than coalesced. I'm pretty unfamiliar with this x86 area
>> (= the 8259 PIC in general), but the following wiki article seems to
>> agree with my suspicion:
>>
>> https://wiki.osdev.org/8259_PIC#How_does_the_8259_PIC_chip_work.3F
>>
>>     [...] and whether there's an interrupt already pending. If the
>>     channel is unmasked and there's no interrupt pending, the PIC will
>>     raise the interrupt line [...]
>>
>> Can we say that the interrupt queueing (as opposed to coalescing) is a
>> Xen issue?
>
> I can admit that the whole issue might be Xen specific if that form
> of timer mode is not used in QEMU-KVM. What mode is typical there
> then?

That question is too difficult for me to answer :(

> We might consider switching Xen to a different mode if so, as I believe
> those guests are not in support for many years.

Can you perhaps test this hypothesis? If you select the coalescing timer
mode for the Xen guest in question, does the symptom go away?

>
>> (Hmmm... maybe the hypervisor *has* to queue the timer interrupts,
>> otherwise some of them would simply be lost, and the guest would lose
>> track of time.)
>>
>> Either way, I'm not sure what the best approach is. This driver was
>> moved under OvmfPkg from PcAtChipsetPkg in commit 1a3ffdff82e6
>> ("OvmfPkg: Copy 8254TimerDxe driver from PcAtChipsetPkg", 2019-04-11).
>> HpetTimerDxe also lives under PcAtChipsetPkg.
>>
>> So I think I'll have to rely on the expertise of Ray here (CC'd).
>
> Also note that since the issue might be Xen specific we might want to
> try to fix it in XenTimer only - I modified 8254Timer due to the
> fact Xen is still present in general config (but that should soon
> go away).

We could also modify 8254TimerDxe like this:

- provide the new variant of the TimerInterruptHandler() function for
  Xen only, without touching the existent one -- simply introduce it as
  a new function,

- in TimerDriverInitialize(), first call XenDetected() from
  XenPlatformLib, then choose the argument for the
  mCpu->RegisterInterruptHandler() call accordingly.

This wouldn't be difficult to locate and revert when
<https://bugzilla.tianocore.org/show_bug.cgi?id=2122> is addressed. (It
would be easy to find by grepping for XenDetected().)

[...]

Thanks!
Laszlo

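A minimal sketch of the Xen-only dispatch outlined in the two bullets above
might look like this in TimerDriverInitialize(). It is hypothetical:
TimerInterruptHandlerXen stands for the new reordered-EOI variant,
mCpu/mTimerVector are the module globals the existing driver already uses,
and the rest of the initialization is elided.

  EFI_CPU_INTERRUPT_HANDLER  Handler;

  //
  // Keep the current handler on QEMU/KVM; use the reordered-EOI variant
  // only when Xen is detected, so the behavioral change stays contained.
  //
  if (XenDetected ()) {
    Handler = TimerInterruptHandlerXen;
  } else {
    Handler = TimerInterruptHandler;
  }

  //
  // Install the chosen handler on the timer interrupt vector.
  //
  Status = mCpu->RegisterInterruptHandler (mCpu, mTimerVector, Handler);
  ASSERT_EFI_ERROR (Status);
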
* Re: [PATCH] OvmfPkg: End timer interrupt later to avoid stack overflow under load
From: Paolo Bonzini @ 2020-06-17 13:51 UTC
To: Laszlo Ersek, Igor Druzhinin, devel
Cc: jordan.l.justen, ard.biesheuvel, anthony.perard, julien, Ray Ni

On 16/06/20 20:42, Laszlo Ersek wrote:
> (Hmmm... maybe the hypervisor *has* to queue the timer interrupts,
> otherwise some of them would simply be lost, and the guest would lose
> track of time.)

Yes, there are various kinds of coalescing of interrupts that
hypervisors perform to help the guest keep track of time. This is
especially true of the PIT and RTC; newer OSes track time directly from
the TSC, the HPET or the APIC timer so they tolerate lost ticks much
better.

That said, Igor's patch seems correct to me. In fact, I'd even move
DisableInterrupts before gBS->RestoreTPL unless there's a good reason
not to do so.

Paolo

* Re: [PATCH] OvmfPkg: End timer interrupt later to avoid stack overflow under load
From: Laszlo Ersek @ 2020-06-17 15:46 UTC
To: Paolo Bonzini, Igor Druzhinin, devel
Cc: jordan.l.justen, ard.biesheuvel, anthony.perard, julien, Ray Ni

On 06/17/20 15:51, Paolo Bonzini wrote:
> On 16/06/20 20:42, Laszlo Ersek wrote:
>> (Hmmm... maybe the hypervisor *has* to queue the timer interrupts,
>> otherwise some of them would simply be lost, and the guest would lose
>> track of time.)
>
> Yes, there are various kinds of coalescing of interrupts that
> hypervisors perform to help the guest keep track of time. This is
> especially true of the PIT and RTC; newer OSes track time directly from
> the TSC, the HPET or the APIC timer so they tolerate lost ticks much
> better.
>
> That said, Igor's patch seems correct to me. In fact, I'd even move
> DisableInterrupts before gBS->RestoreTPL unless there's a good reason
> not to do so.

OK, thank you!

Igor, please confirm if you'd like to submit v2 with the update
suggested by Paolo, or if you prefer the current version. We're at the
beginning of the current development cycle, so I guess we can apply the
patch and see how it works in practice. If it ends up wreaking havoc on
some platforms, we can always revert the patch in time for the next
stable tag (edk2-stable202008).

Perhaps we should also file a TianoCore BZ for this issue, with a clear
problem statement, and the solution outline. The commit message is not
lacking, but I think a TianoCore BZ could be easier to find with a web
search, if users (not developers) want to comment after the patch is
merged. It's also easier to round up (possibly) important changes, for
stable tag content review, when there are BZs.

Thanks!
Laszlo

* Re: [PATCH] OvmfPkg: End timer interrupt later to avoid stack overflow under load
From: Paolo Bonzini @ 2020-06-17 16:59 UTC
To: Laszlo Ersek, Igor Druzhinin, devel
Cc: jordan.l.justen, ard.biesheuvel, anthony.perard, julien, Ray Ni

On 17/06/20 17:46, Laszlo Ersek wrote:
>> That said, Igor's patch seems correct to me. In fact, I'd even move
>> DisableInterrupts before gBS->RestoreTPL unless there's a good reason
>> not to do so.
> OK, thank you!
>
> Igor, please confirm if you'd like to submit v2 with the update
> suggested by Paolo, or if you prefer the current version. We're at the
> beginning of the current development cycle, so I guess we can apply the
> patch and see how it works in practice. If it ends up wreaking havoc on
> some platforms, we can always revert the patch in time for the next
> stable tag (edk2-stable202008).

For what it's worth "correct" means that I don't see anything that could
break and in fact I find it good policy hygienic not to allow recursive
interrupts.

v1 is okay for me too, so:

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>

Paolo

* Re: [PATCH] OvmfPkg: End timer interrupt later to avoid stack overflow under load
From: Igor Druzhinin @ 2020-06-17 17:23 UTC
To: Paolo Bonzini, Laszlo Ersek, devel
Cc: jordan.l.justen, ard.biesheuvel, anthony.perard, julien, Ray Ni

On 17/06/2020 17:59, Paolo Bonzini wrote:
> On 17/06/20 17:46, Laszlo Ersek wrote:
>>> That said, Igor's patch seems correct to me. In fact, I'd even move
>>> DisableInterrupts before gBS->RestoreTPL unless there's a good reason
>>> not to do so.
>> OK, thank you!
>>
>> Igor, please confirm if you'd like to submit v2 with the update
>> suggested by Paolo, or if you prefer the current version. We're at the
>> beginning of the current development cycle, so I guess we can apply the
>> patch and see how it works in practice. If it ends up wreaking havoc on
>> some platforms, we can always revert the patch in time for the next
>> stable tag (edk2-stable202008).
>
> For what it's worth "correct" means that I don't see anything that could
> break and in fact I find it good policy hygienic not to allow recursive
> interrupts.
>
> v1 is okay for me too, so:
>
> Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>

Thanks, unfortunately it's not possible to move DisableInterrupts inside
TPL block as RestoreTPL would immediately enable them according to current
logic.

IMO RaiseTPL could technically save interrupt state inside it (in that
case it was disabled) and then honor it in RestoreTPL but that might have
more surprise consequences than the proposed change I reckon.

I will create a BZ ticket as requested.

Igor

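Purely to illustrate the alternative Igor sketches in the paragraph above --
and emphatically not the way MdeModulePkg's DxeCore is actually written today
-- a Raise/RestoreTPL pair that tracks the interrupt state could look roughly
like this. mInterruptsWereEnabled is a hypothetical module global, and event
notification dispatch is elided.

  EFI_TPL
  EFIAPI
  CoreRaiseTpl (
    IN EFI_TPL  NewTpl
    )
  {
    EFI_TPL  OldTpl;

    OldTpl = gEfiCurrentTpl;
    if (NewTpl >= TPL_HIGH_LEVEL && OldTpl < TPL_HIGH_LEVEL) {
      //
      // Hypothetical: remember whether interrupts were enabled when first
      // raising to TPL_HIGH_LEVEL, instead of assuming they were.
      //
      mInterruptsWereEnabled = GetInterruptState ();
      DisableInterrupts ();
    }
    gEfiCurrentTpl = NewTpl;
    return OldTpl;
  }

  VOID
  EFIAPI
  CoreRestoreTpl (
    IN EFI_TPL  NewTpl
    )
  {
    //
    // ... dispatch queued event notifications down to NewTpl ...
    //
    gEfiCurrentTpl = NewTpl;
    if (NewTpl < TPL_HIGH_LEVEL && mInterruptsWereEnabled) {
      //
      // Hypothetical: only re-enable interrupts if they were enabled at the
      // matching RaiseTPL(); inside an interrupt handler they were not.
      //
      EnableInterrupts ();
    }
  }

That would stop an interrupt handler's TPL bracket from re-enabling interrupts
behind its back, at the cost of changing a long-standing DxeCore invariant --
presumably the "surprise consequences" mentioned above.
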
* Re: [PATCH] OvmfPkg: End timer interrupt later to avoid stack overflow under load
From: Laszlo Ersek @ 2020-06-18 8:36 UTC
To: Igor Druzhinin, Paolo Bonzini, devel
Cc: jordan.l.justen, ard.biesheuvel, anthony.perard, julien, Ray Ni

On 06/17/20 19:23, Igor Druzhinin wrote:
> On 17/06/2020 17:59, Paolo Bonzini wrote:
>> On 17/06/20 17:46, Laszlo Ersek wrote:
>>>> That said, Igor's patch seems correct to me. In fact, I'd even move
>>>> DisableInterrupts before gBS->RestoreTPL unless there's a good reason
>>>> not to do so.
>>> OK, thank you!
>>>
>>> Igor, please confirm if you'd like to submit v2 with the update
>>> suggested by Paolo, or if you prefer the current version. We're at the
>>> beginning of the current development cycle, so I guess we can apply the
>>> patch and see how it works in practice. If it ends up wreaking havoc on
>>> some platforms, we can always revert the patch in time for the next
>>> stable tag (edk2-stable202008).
>>
>> For what it's worth "correct" means that I don't see anything that could
>> break and in fact I find it good policy hygienic not to allow recursive
>> interrupts.
>>
>> v1 is okay for me too, so:
>>
>> Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>

Thank you, Paolo!

> Thanks, unfortunately it's not possible to move DisableInterrupts inside
> TPL block as RestoreTPL would immediately enable them according to current
> logic.
>
> IMO RaiseTPL could technically save interrupt state inside it (in that
> case it was disabled) and then honor it in RestoreTPL but that might have
> more surprise consequences than the proposed change I reckon.
>
> I will create a BZ ticket as requested.

Thanks -- please follow up with the URL, and then I'll pick up your
current patch, with Paolo's R-b and the BZ URL added to the commit
message.

Acked-by: Laszlo Ersek <lersek@redhat.com>

Thanks
Laszlo

* Re: [PATCH] OvmfPkg: End timer interrupt later to avoid stack overflow under load
From: Laszlo Ersek @ 2020-06-18 8:44 UTC
To: Igor Druzhinin, Paolo Bonzini, devel
Cc: jordan.l.justen, ard.biesheuvel, anthony.perard, julien, Ray Ni

On 06/18/20 10:36, Laszlo Ersek wrote:
> On 06/17/20 19:23, Igor Druzhinin wrote:
>> I will create a BZ ticket as requested.
>
> Thanks -- please follow up with the URL

nevermind, I've found, in my bugmail:

https://bugzilla.tianocore.org/show_bug.cgi?id=2815

Thanks
Laszlo

* Re: [edk2-devel] [PATCH] OvmfPkg: End timer interrupt later to avoid stack overflow under load
From: Laszlo Ersek @ 2020-06-18 12:10 UTC
To: devel, igor.druzhinin, Paolo Bonzini
Cc: jordan.l.justen, ard.biesheuvel, anthony.perard, julien, Ray Ni

On 06/17/20 19:23, Igor Druzhinin wrote:
> On 17/06/2020 17:59, Paolo Bonzini wrote:
>> On 17/06/20 17:46, Laszlo Ersek wrote:
>>>> That said, Igor's patch seems correct to me. In fact, I'd even move
>>>> DisableInterrupts before gBS->RestoreTPL unless there's a good reason
>>>> not to do so.
>>> OK, thank you!
>>>
>>> Igor, please confirm if you'd like to submit v2 with the update
>>> suggested by Paolo, or if you prefer the current version. We're at the
>>> beginning of the current development cycle, so I guess we can apply the
>>> patch and see how it works in practice. If it ends up wreaking havoc on
>>> some platforms, we can always revert the patch in time for the next
>>> stable tag (edk2-stable202008).
>>
>> For what it's worth "correct" means that I don't see anything that could
>> break and in fact I find it good policy hygienic not to allow recursive
>> interrupts.
>>
>> v1 is okay for me too, so:
>>
>> Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
>
> Thanks, unfortunately it's not possible to move DisableInterrupts inside
> TPL block as RestoreTPL would immediately enable them according to current
> logic.
>
> IMO RaiseTPL could technically save interrupt state inside it (in that
> case it was disabled) and then honor it in RestoreTPL but that might have
> more surprise consequences than the proposed change I reckon.
>
> I will create a BZ ticket as requested.

Merged via <https://github.com/tianocore/edk2/pull/709> as commit
239b50a86370, with the following updates:

[lersek@redhat.com: add BZ ref; rewrap msg to silence PatchCheck.py]

Thanks!
Laszlo