Quite often I get questions from people who would like to play audio that they are receiving over the network, or recording from the microphone, but also want to save that audio to WAV at the same time. This is actually quite easy to achieve, so long as you think in terms of a “signal chain” (which is something I talk about a lot in my audio courses on Pluralsight).

Basically, the usual strategy I recommend for playing audio that you receive over the network or from the microphone is to put it into a BufferedWaveProvider. You fill it with (PCM) audio as it becomes available, and then in its Read method, it returns the audio, or silence if the buffer is empty.
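As a minimal sketch of that strategy (the wave format and the `OnAudioReceived` callback name are just illustrative assumptions; match them to your actual source):

```csharp
using NAudio.Wave;

// Assume 16-bit stereo PCM at 44.1kHz; adjust to match your incoming audio
var bufferedWaveProvider = new BufferedWaveProvider(new WaveFormat(44100, 16, 2));
// Optional: silently drop audio instead of throwing if the buffer fills up
bufferedWaveProvider.DiscardOnBufferOverflow = true;

// Call this whenever PCM audio arrives (from the network, or a recording callback)
void OnAudioReceived(byte[] pcm, int bytesReceived)
{
    bufferedWaveProvider.AddSamples(pcm, 0, bytesReceived);
}

// Any player calling bufferedWaveProvider.Read(...) now gets the buffered
// audio, or silence when the buffer is empty.
```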

Normally, you’d pass the BufferedWaveProvider directly to the IWavePlayer device (such as WaveOut), but to implement save to WAV, we’ll first wrap it in a new signal chain component that we’ll create for this purpose. We’ll call it SavingWaveProvider, and it will implement IWaveProvider. In its Read method, it will read from its source wave provider (the BufferedWaveProvider in our case) and write what it read to a WAV file before passing it on.

We’ll dispose the WaveFileWriter if we read 0 bytes from the source wave provider, which normally indicates we have reached the end of playback. But we also make the whole class implement IDisposable, since BufferedWaveProvider is set up to always return the number of bytes asked for in Read, so it will never reach the end on its own.

Here’s the code for SavingWaveProvider:

class SavingWaveProvider : IWaveProvider, IDisposable
{
    private readonly IWaveProvider sourceWaveProvider;
    private readonly WaveFileWriter writer;
    private bool isWriterDisposed;

    public SavingWaveProvider(IWaveProvider sourceWaveProvider, string wavFilePath)
    {
        this.sourceWaveProvider = sourceWaveProvider;
        writer = new WaveFileWriter(wavFilePath, sourceWaveProvider.WaveFormat);
    }

    public int Read(byte[] buffer, int offset, int count)
    {
        var read = sourceWaveProvider.Read(buffer, offset, count);
        if (read > 0 && !isWriterDisposed)
        {
            writer.Write(buffer, offset, read);
        }
        if (read == 0)
        {
            Dispose(); // auto-dispose in case users forget
        }
        return read;
    }

    public WaveFormat WaveFormat { get { return sourceWaveProvider.WaveFormat; } }

    public void Dispose()
    {
        if (!isWriterDisposed)
        {
            isWriterDisposed = true;
            writer.Dispose();
        }
    }
}

And here’s how you use it to both play and save audio at the same time (note: this is a very simplified WPF app with two buttons, and no checks that you don’t press Start twice in a row, etc.). The key is that we pass the BufferedWaveProvider into the constructor of SavingWaveProvider, and then pass that to waveOut.Init. Then all we need to do is make sure we dispose SavingWaveProvider so that the WAV file header gets written correctly:

public partial class MainWindow : Window
{
    private WaveIn recorder;
    private BufferedWaveProvider bufferedWaveProvider;
    private SavingWaveProvider savingWaveProvider;
    private WaveOut player;

    public MainWindow()
    {
        InitializeComponent();
    }

    private void OnStartRecordingClick(object sender, RoutedEventArgs e)
    {
        // set up the recorder
        recorder = new WaveIn();
        recorder.DataAvailable += RecorderOnDataAvailable;

        // set up our signal chain
        bufferedWaveProvider = new BufferedWaveProvider(recorder.WaveFormat);
        savingWaveProvider = new SavingWaveProvider(bufferedWaveProvider, "temp.wav");

        // set up playback
        player = new WaveOut();
        player.Init(savingWaveProvider);

        // begin playback & record
        player.Play();
        recorder.StartRecording();
    }

    private void RecorderOnDataAvailable(object sender, WaveInEventArgs waveInEventArgs)
    {
        bufferedWaveProvider.AddSamples(waveInEventArgs.Buffer, 0, waveInEventArgs.BytesRecorded);
    }

    private void OnStopRecordingClick(object sender, RoutedEventArgs e)
    {
        // stop recording
        recorder.StopRecording();
        // stop playback
        player.Stop();
        // finalise the WAV file
        savingWaveProvider.Dispose();
    }
}

This technique isn’t only for saving audio that you record or receive over the network. It’s also a great way to get a copy of the audio you just played, which is very handy when you want to troubleshoot audio issues by examining the exact audio that was sent to the soundcard. You can even insert several of these taps at different points in the chain, to hear what the audio sounded like at each stage.
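For example, here’s a hedged sketch of tapping the chain at two points, before and after a volume stage. The file names and the VolumeWaveProvider16 stage are illustrative assumptions, and this assumes 16-bit PCM audio (which VolumeWaveProvider16 requires):

```csharp
// Tap the chain before and after an (example) volume adjustment stage.
// "pre-volume.wav" captures the raw buffered audio; "post-volume.wav"
// captures exactly what is handed to the soundcard.
var preTap = new SavingWaveProvider(bufferedWaveProvider, "pre-volume.wav");
var volume = new VolumeWaveProvider16(preTap) { Volume = 0.5f };
var postTap = new SavingWaveProvider(volume, "post-volume.wav");
player.Init(postTap);
// Remember to Dispose both taps on stop so each WAV header is finalised
```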

Want to get up to speed with the fundamental principles of digital audio and how to go about writing audio applications with NAudio? Be sure to check out my Pluralsight courses, Digital Audio Fundamentals, and Audio Programming with NAudio.

Comments

Comment by TempGuy

How can I display a waveform from line-in (voice from a mic) in real time? Are there any C# examples?
Thank you...

TempGuy
Comment by Mark Heath

I'm afraid that isn't something I've included in the NAudio demo code, but it's a good idea, and perhaps I'll try to include that in a future NAudio release.

Mark Heath
Comment by Jacob Stolmeier

This seems to be a great feature of NAudio; however, there are some limitations. I am currently developing an application that needs to use the AsioOut functionality (I have to manipulate each channel independently), and the WaveIn recorder cannot start recording. When I call recorder.StartRecording(), an MMException is thrown saying that WaveInOpen has already been allocated. Is there any way I can make this a more flexible class?

Jacob Stolmeier
Comment by Mark Heath

Normally you don't use WaveIn and ASIO at the same time. ASIO is best used for applications that want exclusive access to the soundcard.

Mark Heath
Comment by Anna

Thank you for your article; it's what I was looking for. But I have one problem: I have only one button, and as long as it is pressed I want to hear what I speak into the microphone. So far I have put the recording into a background worker and it works, but when I talk with a headset, I can hear what I say, but every second or third word breaks up and doesn't sound complete. What could be the reason? The funny thing is that when I play music in the room, it has silent moments too, but after about 10 seconds the breaks disappear and I can hear the entire song that the mic was recording in my headset, and then I can also talk without breaks in the voice.
My WAV looks like this in the first seconds:
https://www.3aussies.de/cs/...

Anna
Comment by Paul Anthony Veluya

Hello Mark. I am reading your code in WasapiCapture to see how the data is buffered in the ReadNextPacket method. I'm getting the array around every 50 milliseconds. Is this dependent on the soundcard I am using, or can it get better (faster, say 20 milliseconds) with a good soundcard? I am working on getting the frequency of every chunk of data I retrieve.

Paul Anthony Veluya
Comment by Mark Heath

In exclusive mode I think you can get lower latencies, but really, the best driver for low-latency audio is usually an ASIO driver, if one is available.

Mark Heath
Comment by Nabeel

Hello Mark, can we record the output sound of a single application? I know it can be done with WASAPI, but that doesn't allow capturing the sound of one application; rather, it gives all sound playing.

Nabeel
Comment by Mark Heath

I'm afraid I don't think that's possible. Unless there are new WASAPI capabilities added since I last checked, you can only capture all sound for a single device

Mark Heath
Comment by sam

Hi! It works, but the sound is very, very low... How can I change this?

sam
Comment by Anderson Nunes

Hi Mark
I'm working on a 24/7 non-stop recording module for FREEWAVE.
It records audio from WaveIn and stores it to an MP3 file in real time.
I can't afford a delay on the hourly start/stop to close the file, so converting the whole memory stream from WAV to MP3 doesn't work for me.
I'd prefer Opus (I have a wrapper to encode the data from WaveIn, but I don't know how to put it into a container file), but MP3 works for now.
I'm trying to use this sample to encode the WaveIn data and append it to a file, but haven't had any success.
Any tips for me?
Thank you, and sorry for my bad English.

Anderson Nunes
Comment by Levin Obrian

Hi,
How can I save an mp3 stream from a URL? In your example player.Init takes savingWaveProvider, hence I cannot init the player with MediaFoundationReader.

Levin Obrian
Comment by Mark Heath

why not just download the URL to disk?

Mark Heath
Comment by Levin Obrian

I would like to process the stream live (or almost live) and identify possible samples using soundfingerprinting .net lib.

Levin Obrian
Comment by Eko Suprapto Wibowo

For me there is no sound at all...

Eko Suprapto Wibowo
Comment by Yahiya Sohail

I want to control the sound while recording: when the sound of the source is low, I want it to be louder in the recording, without affecting the file size much. Please guide me through this.

Yahiya Sohail
Comment by No Name

Hi, Mark! This article was very helpful!
A friend of mine has posted to stack overflow a question regarding the NAudio library for signal processing and I think some aspects shown in this article could almost be the answer. If you could perhaps take a look, it would be of great help, because our minds cannot comprehend the library yet. Thank you!
Question is Here

No Name
Comment by Nwenar Ismail

The code works, but what I hear is just noise.

Nwenar Ismail
Comment by ramanareddy

Which NuGet package is suitable for Xamarin UWP?

ramanareddy