From: "Ethin Probst"
Date: Fri, 16 Apr 2021 18:50:01 -0500
Subject: Re: [edk2-devel] VirtIO Sound Driver (GSoC 2021)
To: Andrew Fish
Cc: edk2-devel-groups-io, Leif Lindholm, Michael Brown, Mike Kinney, Laszlo Ersek, "Desimone, Nathaniel L", Rafael Rodrigues Machado, Gerd Hoffmann

Andrew,

Thanks for that example -- it makes things a lot easier to understand now (I think, anyway). As for the simple text output protocol, I was only using it as an example of the problems we might face; I understand that the HII infrastructure draws graphical characters, as does the simple text output protocol, and that the screen is redrawn quite often.
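To make the example concrete, the kind of hook under discussion would look roughly like the sketch below. It is purely illustrative: EFI_SPEECH_PROTOCOL and its Speak() member are placeholders I made up for this mail (nothing like that exists in edk2), only EFI_SIMPLE_TEXT_OUTPUT_PROTOCOL and its EFI_TEXT_STRING member type are real definitions, and the code that locates the synthesizer and swaps the function pointer in is omitted.

  #include <Uefi.h>
  #include <Protocol/SimpleTextOut.h>

  //
  // Hypothetical speech protocol -- a placeholder for this sketch only.
  //
  typedef struct _EFI_SPEECH_PROTOCOL EFI_SPEECH_PROTOCOL;
  struct _EFI_SPEECH_PROTOCOL {
    EFI_STATUS (EFIAPI *Speak)(EFI_SPEECH_PROTOCOL *This, CONST CHAR16 *Text);
  };

  STATIC EFI_TEXT_STRING      mOriginalOutputString;  // saved ConOut->OutputString
  STATIC EFI_SPEECH_PROTOCOL  *mSpeech;               // located at init time, may stay NULL

  EFI_STATUS
  EFIAPI
  SpeakingOutputString (
    IN EFI_SIMPLE_TEXT_OUTPUT_PROTOCOL  *This,
    IN CHAR16                           *String
    )
  {
    //
    // Forward the text to the synthesizer first (ignoring any failure), then
    // let the original implementation draw it.  Frequent redraws turning into
    // overlapping speech is exactly the problem described above.
    //
    if (mSpeech != NULL) {
      mSpeech->Speak (mSpeech, String);
    }
    return mOriginalOutputString (This, String);
  }

Where such a hook would live -- the ConSplitter, each consumer, or somewhere else entirely -- is the open question.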
I'd prefer that the protocols themselves be made accessible instead of requiring applications to do it. The advantage is that very little work, if any, would be needed to make an application accessible, and nothing would have to be rebuilt for accessibility to be implemented: someone just updates the firmware, enables the accessibility features, and immediately gets an accessible user interface, with no changes required to user-facing applications such as bootloaders, secure boot utilities, shells, and setup utilities. But that's a long way off, and for now we should focus primarily on the audio output protocol rather than on details that are still abstract. Is the audio interface I proposed a couple of emails back sufficient, and something we can all agree on? We might also want a GetVersion() function that returns the revision number, so callers know which revision they're working with in case we add functions later on. (A rough sketch of the overall shape is inlined further down, just after the quoted API list.) As always, I've updated my gist; the new version can be found here:
https://gist.github.com/ethindp/56066e0561b901c1e276e82ff469436d

On 4/16/21, Andrew Fish wrote:
>
>> On Apr 16, 2021, at 10:55 AM, Ethin Probst wrote:
>>
>> Also, forgot to add this before sending: yes, speech synthesizers usually generate the PCM audio on the fly. They can write it to an output file, but if you're just using it in a screen reader, you end up streaming it to the audio device. This raises another issue I was pondering, though I don't think we need to handle it quite yet. The problem involves output generated by the HII, the console, etc.: when we're using a speech synthesizer, it might be configured to speak at a faster or slower rate depending on the preferences of the user (we definitely want to let the user control these things, because different people have different preferences on the speed of speech synthesizers: some can understand it at really fast rates and others can't, for example). The problem arises when we want to forward output from the screen (say, the simple text output protocol). Assume a user is running the EFI shell, which, if I'm not mistaken, uses this protocol. The shell calls EFI_SIMPLE_TEXT_OUTPUT_PROTOCOL.OutputString(). We probably want this function to forward the string it is passed on to the speech synthesizer, assuming accessibility features are enabled (I'm assuming we'd want to make that a toggle). The problem is that EFI_SIMPLE_TEXT_OUTPUT_PROTOCOL.OutputString() can be called many times. During all these calls, text is being sent to the synthesizer, it's generating samples, and those samples are being forwarded on to the audio output protocol. So how do we ensure that these samples don't overlap or cause other problems (e.g. interrupt streams that are still being processed)? As I said, we don't necessarily need to consider this now, but it is a problem we'll need to tackle in the future.
>>
>
> I'm not sure converting EFI_SIMPLE_TEXT_OUTPUT_PROTOCOL to audio directly would work from a practical point of view. If you look at the Hii based setup, a lot of text and a lot of text-based graphics characters get dumped onto the screen in no particular order. For the UEFI Shell there is a lot of cursor control, and the prompt could get redrawn at arbitrary times. Not sure how well that maps to pure text to speech.
> Also, in a text-based GUI you are just changing the attributes of what is selected as you redraw it. Thus for Hii I was thinking it may make more sense to just have the Hii forms browser pick the spots to do text to speech.
>
> I've also thought about mechanisms that would let the OS encode audio associated with the boot options so the Hii UI could use that. But I'm guessing speak-on-select in the UI might be more useful than raw EFI_SIMPLE_TEXT_OUTPUT_PROTOCOL text to speech. You can always choose to make the audio playback synchronous, or block certain UI progress on audio completion.
>
> Thanks,
>
> Andrew Fish
>
> PS The edk2 has a ConSplitter, so adding an extra EFI_SIMPLE_TEXT_OUTPUT_PROTOCOL would be easy; I'm just not sure it would work out well.
>
>> On 4/16/21, Ethin Probst wrote:
>>> Yes, three APIs (maybe like this) would work well:
>>> - Start, Stop: begin and end playback of a stream
>>> - SetVolume, GetVolume, Mute, Unmute: control output volume and enable muting
>>> - CreateStream, ReleaseStream, SetStreamSampleRate: control the sample rate of a stream (but not the sample format, since signed 16-bit PCM is enough)
>>> Marvin, how do you suggest we make the events then? We need some way of notifying the caller that the stream has concluded. We could make the driver create the event and pass it back to the caller, but you'd still have dangling pointers (this is C, after all). We could just provide an IsPlaying() function and a WaitForCompletion() function and let the driver do the event handling -- would that work?
>>>
>>> On 4/16/21, Andrew Fish wrote:
>>>>
>>>>
>>>>> On Apr 16, 2021, at 4:34 AM, Leif Lindholm wrote:
>>>>>
>>>>> Hi Ethin,
>>>>>
>>>>> I think we also want to have a SetMode function, even if we don't get around to implementing proper support for it as part of GSoC (although I expect at least for virtio, that should be pretty straightforward).
>>>>>
>>>>
>>>> Leif,
>>>>
>>>> I think if we have an API to load the buffer and a 2nd API to play the buffer, an optional 3rd API could configure the streams.
>>>>
>>>>> It's quite likely that speech for UI would be stored as 8kHz (or 20kHz) in some systems, whereas the example for playing a tune in GRUB would more likely be a 44.1 kHz mp3/wav/ogg/flac.
>>>>>
>>>>> For the GSoC project, I think it would be quite reasonable to pre-generate pure PCM streams for testing rather than decoding anything on the fly.
>>>>>
>>>>> Porting/writing decoders is really a separate task from enabling the output. I would much rather see USB *and* HDA support able to play PCM streams before worrying about decoding.
>>>>>
>>>>
>>>> I agree; it might turn out it is easier to have the text-to-speech code just encode a PCM directly.
>>>>
>>>> Thanks,
>>>>
>>>> Andrew Fish
>>>>
>>>>> /
>>>>>    Leif
>>>>>
>>>>> On Fri, Apr 16, 2021 at 00:33:06 -0500, Ethin Probst wrote:
>>>>>> Thanks for that explanation (I missed Mike's message). Earlier I sent a summary of the things we can agree on: mainly, that we have mute, volume control, a load buffer, (maybe) an unload buffer, and a start/stop stream function.
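To keep the quoted discussion concrete: putting the API groups above together with the revision idea from the top of this mail, the overall shape might look roughly like the sketch below. It is only a sketch of mine -- every type and member name is a placeholder invented for illustration, not the actual contents of the gist -- and it assumes nothing beyond the standard base types from <Uefi.h>.

  #include <Uefi.h>

  typedef struct _EFI_AUDIO_OUTPUT_PROTOCOL EFI_AUDIO_OUTPUT_PROTOCOL;
  typedef VOID *EFI_AUDIO_STREAM;   // opaque per-stream handle owned by the driver

  //
  // Stream management: one sample format only (signed 16-bit PCM), with the
  // sample rate chosen per stream at creation time.
  //
  typedef EFI_STATUS (EFIAPI *EFI_AUDIO_CREATE_STREAM)(
    IN EFI_AUDIO_OUTPUT_PROTOCOL *This, IN UINT32 SampleRate, IN UINT8 Channels,
    OUT EFI_AUDIO_STREAM *Stream);
  typedef EFI_STATUS (EFIAPI *EFI_AUDIO_RELEASE_STREAM)(
    IN EFI_AUDIO_OUTPUT_PROTOCOL *This, IN EFI_AUDIO_STREAM Stream);

  //
  // Playback: Start() submits a buffer of 16-bit PCM samples, Stop() halts the
  // stream.  Completion is exposed as polling/blocking calls so the caller
  // never has to own an EFI event.
  //
  typedef EFI_STATUS (EFIAPI *EFI_AUDIO_START)(
    IN EFI_AUDIO_OUTPUT_PROTOCOL *This, IN EFI_AUDIO_STREAM Stream,
    IN CONST VOID *Buffer, IN UINTN BufferSize);
  typedef EFI_STATUS (EFIAPI *EFI_AUDIO_STOP)(
    IN EFI_AUDIO_OUTPUT_PROTOCOL *This, IN EFI_AUDIO_STREAM Stream);
  typedef BOOLEAN (EFIAPI *EFI_AUDIO_IS_PLAYING)(
    IN EFI_AUDIO_OUTPUT_PROTOCOL *This, IN EFI_AUDIO_STREAM Stream);
  typedef EFI_STATUS (EFIAPI *EFI_AUDIO_WAIT_FOR_COMPLETION)(
    IN EFI_AUDIO_OUTPUT_PROTOCOL *This, IN EFI_AUDIO_STREAM Stream);

  //
  // Volume control (0-100) plus a single mute toggle (Mute/Unmute folded
  // into one call here just to keep the sketch short).
  //
  typedef EFI_STATUS (EFIAPI *EFI_AUDIO_SET_VOLUME)(
    IN EFI_AUDIO_OUTPUT_PROTOCOL *This, IN UINT8 Volume);
  typedef EFI_STATUS (EFIAPI *EFI_AUDIO_GET_VOLUME)(
    IN EFI_AUDIO_OUTPUT_PROTOCOL *This, OUT UINT8 *Volume);
  typedef EFI_STATUS (EFIAPI *EFI_AUDIO_SET_MUTE)(
    IN EFI_AUDIO_OUTPUT_PROTOCOL *This, IN BOOLEAN Mute);

  struct _EFI_AUDIO_OUTPUT_PROTOCOL {
    UINT32                          Revision;   // stands in for the GetVersion() idea
    EFI_AUDIO_CREATE_STREAM         CreateStream;
    EFI_AUDIO_RELEASE_STREAM        ReleaseStream;
    EFI_AUDIO_START                 Start;
    EFI_AUDIO_STOP                  Stop;
    EFI_AUDIO_IS_PLAYING            IsPlaying;
    EFI_AUDIO_WAIT_FOR_COMPLETION   WaitForCompletion;
    EFI_AUDIO_SET_VOLUME            SetVolume;
    EFI_AUDIO_GET_VOLUME            GetVolume;
    EFI_AUDIO_SET_MUTE              SetMute;
  };

An IsPlaying()/WaitForCompletion() pair like this keeps all of the event handling inside the driver, which still seems to me like the simplest answer to the dangling-event problem.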
>>>>>> Now that I fully understand the ramifications, I don't mind settling on a specific format and sample rate, and signed 16-bit PCM is, I think, the most widely used format out there, besides 64-bit floating-point samples, which I've only seen used in DAWs and which we don't need.
>>>>>> Are you sure you want the firmware itself to handle the decoding of WAV audio? I can make a library class for that, but I'll definitely need help with the security aspect.
>>>>>>
>>>>>> On 4/16/21, Andrew Fish via groups.io wrote:
>>>>>>>
>>>>>>>
>>>>>>>> On Apr 15, 2021, at 5:59 PM, Michael Brown wrote:
>>>>>>>>
>>>>>>>> On 16/04/2021 00:42, Ethin Probst wrote:
>>>>>>>>> Forcing a particular channel mapping, sample rate and sample format on everyone would complicate application code. From an application point of view, one would, with that type of protocol, need to do the following:
>>>>>>>>> 1) Load an audio file in any audio file format from any storage mechanism.
>>>>>>>>> 2) Decode the audio file format to extract the samples and audio metadata.
>>>>>>>>> 3) Resample the (now decoded) audio samples and convert (quantize) them into signed 16-bit PCM audio.
>>>>>>>>> 4) Forward the samples on to the EFI audio protocol.
>>>>>>>>
>>>>>>>> You have made an incorrect assumption that there exists a requirement to be able to play audio files in arbitrary formats. This requirement does not exist.
>>>>>>>>
>>>>>>>> With a protocol-mandated fixed baseline set of audio parameters (sample rate etc.), what would happen in practice is that the audio files would be encoded in that format at *build* time, using tools entirely external to UEFI. The application code is then trivially simple: it just does "load blob, pass blob to audio protocol".
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> Ethin,
>>>>>>>
>>>>>>> Given the goal is an industry standard, we value interoperability more than flexibility.
>>>>>>>
>>>>>>> How about another use case. Let's say the Linux OS loader (Grub) wants to have an accessible UI, so it decides to store sound files on the EFI System Partition and use our new fancy UEFI Audio Protocol to add audio to the OS loader GUI. That version of Grub needs to work on 1,000s of different PCs and a wide range of UEFI Audio driver implementations. It is a much easier world if Wave PCM 16-bit just works every place. You could add a lot of complexity and try to encode the audio on the fly, maybe even in Linux proper, but that falls down if you are booting from read-only media like a DVD or backup tape (yes, people still do that in server land).
>>>>>>>
>>>>>>> The other problem with flexibility is that you have just made the test matrix very large for every driver that needs to get implemented. For something as complex as Intel HDA, how you hook up the hardware and which CODECs you use may impact the quality of the playback for a given board.
>>>>>>> Your EFI is likely going to pick a single encoding that will get tested all the time if your system has audio, but all 50 other things you support, not so much. So that will require testing, and someone with audiophile ears (or an AI program) to test all the combinations. I'm not kidding: I get BZs on the quality of the boot bong on our systems.
>>>>>>>
>>>>>>>>> typedef struct EFI_SIMPLE_AUDIO_PROTOCOL {
>>>>>>>>>   EFI_SIMPLE_AUDIO_PROTOCOL_RESET Reset;
>>>>>>>>>   EFI_SIMPLE_AUDIO_PROTOCOL_START Start;
>>>>>>>>>   EFI_SIMPLE_AUDIO_PROTOCOL_STOP  Stop;
>>>>>>>>> } EFI_SIMPLE_AUDIO_PROTOCOL;
>>>>>>>>
>>>>>>>> This is now starting to look like something that belongs in boot-time firmware. :)
>>>>>>>>
>>>>>>>
>>>>>>> I think that got a little too simple. I'd go back and look at the example I posted to the thread, but add an API to load the buffer and then play the buffer (that way we can add an API in the future to twiddle knobs). That API also implements the async EFI interface. Trust me, the first thing that is going to happen when we add audio is that someone is going to complain that in xyz state we should mute audio, or that we should honor audio volume and mute settings from setup, or values set in the OS. Or someone is going to want the volume keys on the keyboard to work in EFI.
>>>>>>>
>>>>>>> Also, if you need to pick apart the Wave PCM 16-bit file to feed it into the audio hardware, that probably means we should have a library that does that work, so other audio drivers can share that code. Having a library also makes it easier to write a unit test. We need to be security conscious, as we have to treat the audio file as attacker-controlled data.
>>>>>>>
>>>>>>> Thanks,
>>>>>>>
>>>>>>> Andrew Fish
>>>>>>>
>>>>>>>> Michael
>>>>>>>>
>>>>>>>
>>>>>>
>>>>>> --
>>>>>> Signed,
>>>>>> Ethin D. Probst
>>>>>
>>>>
>>>
>>> --
>>> Signed,
>>> Ethin D. Probst
>>>
>>
>> --
>> Signed,
>> Ethin D. Probst
>>
>

--
Signed,
Ethin D. Probst
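P.S. To make the shared-library idea concrete: a defensive parse of a canonical 16-bit PCM WAV file might start out roughly like the sketch below. The function name, the fixed-layout header struct, and the restriction to canonical PCM files are all assumptions of mine rather than anything we've agreed on, and a real library class would need to walk the chunk list and get a proper security review; the point is simply that every length field is checked against the buffer we were actually given, since the file is attacker-controlled data.

  #include <Uefi.h>
  #include <Library/BaseMemoryLib.h>

  #pragma pack (1)
  typedef struct {
    UINT8   ChunkId[4];        // "RIFF"
    UINT32  ChunkSize;
    UINT8   Format[4];         // "WAVE"
    UINT8   Subchunk1Id[4];    // "fmt "
    UINT32  Subchunk1Size;     // 16 for canonical PCM
    UINT16  AudioFormat;       // 1 = PCM
    UINT16  NumChannels;
    UINT32  SampleRate;
    UINT32  ByteRate;
    UINT16  BlockAlign;
    UINT16  BitsPerSample;
    UINT8   Subchunk2Id[4];    // "data"
    UINT32  Subchunk2Size;     // number of sample bytes that follow
  } RIFF_WAVE_HEADER;
  #pragma pack ()

  EFI_STATUS
  EFIAPI
  WaveGetPcmData (
    IN  CONST VOID  *File,        // untrusted file contents
    IN  UINTN       FileSize,
    OUT CONST VOID  **Samples,
    OUT UINTN       *SampleBytes
    )
  {
    CONST RIFF_WAVE_HEADER  *Hdr;

    if (File == NULL || Samples == NULL || SampleBytes == NULL) {
      return EFI_INVALID_PARAMETER;
    }
    if (FileSize < sizeof (RIFF_WAVE_HEADER)) {
      return EFI_UNSUPPORTED;          // too small to even hold the header
    }
    Hdr = File;
    if (CompareMem (Hdr->ChunkId, "RIFF", 4) != 0 ||
        CompareMem (Hdr->Format, "WAVE", 4) != 0 ||
        CompareMem (Hdr->Subchunk1Id, "fmt ", 4) != 0 ||
        CompareMem (Hdr->Subchunk2Id, "data", 4) != 0) {
      return EFI_UNSUPPORTED;          // not a canonical RIFF/WAVE layout
    }
    if (Hdr->AudioFormat != 1 || Hdr->BitsPerSample != 16 || Hdr->Subchunk1Size != 16) {
      return EFI_UNSUPPORTED;          // only plain signed 16-bit PCM is accepted
    }
    if (Hdr->Subchunk2Size > FileSize - sizeof (RIFF_WAVE_HEADER)) {
      return EFI_UNSUPPORTED;          // declared data size exceeds the actual file
    }
    *Samples     = Hdr + 1;
    *SampleBytes = Hdr->Subchunk2Size;
    return EFI_SUCCESS;
  }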