In this post I will explain how to use the WaveFileWriter class that is part of NAudio. I will cover how to use it in NAudio 1.4 and mention some of the changes coming in NAudio 1.5.

The purpose of WaveFileWriter is to allow you to create a standard .WAV file. WAV files are often thought of as containing uncompressed PCM audio data, but in fact they can contain audio in any compression format, and are often used as containers for telephony compression types such as mu-law, ADPCM and G.722.
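As a rough illustration (this uses the WriteData method discussed further down, and assumes the mu-law encoding of the samples has already happened elsewhere), a telephony WAV file simply declares its compression format in the header:

WaveFormat muLawFormat = WaveFormat.CreateMuLawFormat(8000, 1); // 8kHz mono mu-law
using (WaveFileWriter muLawWriter = new WaveFileWriter("telephony.wav", muLawFormat))
{
    // muLawEncodedBytes is a hypothetical buffer of already-encoded mu-law data
    muLawWriter.WriteData(muLawEncodedBytes, 0, muLawEncodedBytes.Length);
}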

NAudio provides a one-line method to produce a WAV file if you have an existing WaveStream-derived class that can provide the data (in NAudio 1.5 this can be any IWaveProvider).

string tempFile = Path.Combine(Path.GetTempPath(), Guid.NewGuid().ToString() + ".wav");
WaveFormat waveFormat = new WaveFormat(8000, 8, 2); // 8kHz, 8 bit, stereo PCM
WaveStream sourceStream = new NullWaveStream(waveFormat, 10000); // test utility: a silent stream of fixed length
WaveFileWriter.CreateWaveFile(tempFile, sourceStream);

In the above example, I am using a simple utility class as my source stream, but in a real application this might be the output of a mixer, an effects chain, or a synthesizer. The most important thing to note is that the Read method of your source stream MUST eventually return 0, otherwise your file will keep on writing until your disk is full! So beware of classes in NAudio (such as WaveChannel32) that can be configured to always return the number of bytes asked for from the Read method.
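If you cannot be sure your source will ever stop, one simple safeguard is a wrapper that caps the total number of bytes read. The following is just a sketch of the idea (LengthLimitedWaveProvider is a hypothetical helper, not part of NAudio, written against the NAudio 1.5 style IWaveProvider):

// hypothetical helper, not part of NAudio: forces Read to return 0
// once the byte limit is reached, so CreateWaveFile can terminate
public class LengthLimitedWaveProvider : IWaveProvider
{
    private readonly IWaveProvider source;
    private long bytesRemaining;

    public LengthLimitedWaveProvider(IWaveProvider source, long maxBytes)
    {
        this.source = source;
        this.bytesRemaining = maxBytes;
    }

    public WaveFormat WaveFormat
    {
        get { return source.WaveFormat; }
    }

    public int Read(byte[] buffer, int offset, int count)
    {
        if (bytesRemaining <= 0) return 0; // signals end-of-stream to CreateWaveFile
        int bytesToRead = (int)Math.Min(count, bytesRemaining);
        int bytesRead = source.Read(buffer, offset, bytesToRead);
        bytesRemaining -= bytesRead;
        return bytesRead;
    }
}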

For greater control over the data you write, you can use the WriteData method (renamed to Write in NAudio 1.5, as WaveFileWriter will inherit from Stream). WriteData assumes you are providing raw data in the correct format and writes it directly into the data chunk of the WAV file. This is therefore the most general-purpose way of writing to a WaveFileWriter, and it can be used for both PCM and compressed formats.

byte[] testSequence = new byte[] { 0x1, 0x2, 0xFF, 0xFE };
using (WaveFileWriter writer = new WaveFileWriter(fileName, waveFormat))
{
    writer.WriteData(testSequence, 0, testSequence.Length);
}

WaveFileWriter has an additional constructor that takes a Stream instead of a filename, allowing you to write to any kind of stream (for example, a MemoryStream). Be aware, though, that when you dispose the WaveFileWriter, it disposes the output stream too, so wrap the output stream in the IgnoreDisposeStream utility class if you don't want that to happen.
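As a short sketch of that pattern, here the WAV data is written into a MemoryStream that remains usable after the writer has been disposed:

MemoryStream memoryStream = new MemoryStream();
// IgnoreDisposeStream (in NAudio.Utils) swallows the Dispose call,
// so memoryStream is still open after the using block
using (WaveFileWriter writer = new WaveFileWriter(new IgnoreDisposeStream(memoryStream), waveFormat))
{
    writer.WriteData(testSequence, 0, testSequence.Length);
}
memoryStream.Position = 0; // the finished WAV data can now be read back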

One of the most commonly used bit depths for PCM WAV files is 16 bit, so NAudio provides another WriteData overload (to be called WriteSamples in NAudio 1.5) that allows you to supply the data as an array of shorts (Int16). This only really makes sense if you are writing to a 16 bit WAV file, although the current implementation will also attempt to scale the sample values for other bit depths.

short[] samples = new short[1000];
// TODO: fill sample buffer with data
waveFileWriter.WriteData(samples, 0, samples.Length);

Another consideration is that, after applying various audio effects (even something as simple as changing the volume), the audio samples are very often stored as 32 bit floating point numbers (float or Single). To make writing these to a WAV file as simple as possible, a WriteSample function is provided, allowing you to write one sample at a time. If the underlying PCM format is a different bit depth (e.g. 16 or 24 bits), the WriteSample function will attempt to convert the sample to that bit depth before writing it to the file. NAudio 1.5 will also feature a WriteSamples function to allow arrays of floating point samples to be written. The following example writes one second of a 1kHz sine wave to a WAV file using the WriteSample function:

float amplitude = 0.25f;
float frequency = 1000;

for (int n = 0; n < waveFileWriter.WaveFormat.SampleRate; n++)
{
    float sample = (float)(amplitude * Math.Sin((2 * Math.PI * n * frequency) / waveFileWriter.WaveFormat.SampleRate));
    waveFileWriter.WriteSample(sample);
}
Want to get up to speed with the fundamental principles of digital audio and how to go about writing audio applications with NAudio? Be sure to check out my Pluralsight courses, Digital Audio Fundamentals and Audio Programming with NAudio.

Comments

Comment by hen vertis

Hi
First of all I want to congratulate you on the NAudio software package.
My question is:
I have two WaveProvider32s that produce sine waves at different amplitudes (like the "Play a Sine Wave" example). I also have a MultiplexingWaveProvider that connects the WaveProvider32 of sine wave 1 to output 0 and the WaveProvider32 of sine wave 2 to output 1.
(I don't do anything in the MultiplexingWaveProvider.Read function.) The sound works perfectly.
I want to be able to write both signals to a WAV file. How can I do it?
Should I pass a WaveFileWriter reference to the two instances of WaveProvider32?
Can I get the samples from the MultiplexingWaveProvider and write them?
And what about performance: can the file I/O be done on another thread, and if so, how?

Comment by Mark H

You can just pass your MultiplexingWaveProvider into the WaveFileWriter.CreateWaveFile function. CreateWaveFile will call Read repeatedly until it reaches the end. You can do this on another thread, no problem. One thing to watch: your source WaveProviders must return 0 from Read when they reach the end, or you'll create a never-ending WAV file that fills your hard disk up.
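Something like this should be all it takes (untested sketch):

// write the multiplexer's output to disk on a background thread;
// CreateWaveFile keeps calling Read until it returns 0
Thread writerThread = new Thread(() => WaveFileWriter.CreateWaveFile("mixed.wav", multiplexingWaveProvider));
writerThread.Start();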

Comment by hen vertis

Hi Mark
What am I doing wrong?


SineWaveProvider32 file

namespace Hello2ChannelsAudio
{
    public class SineWaveProvider32 : IWaveProvider
    {
        private WaveFormat waveFormat;
        float m_fAmplitude;
        float m_fFrequency;
        int m_nSample;

        public SineWaveProvider32(int sampleRate, int channels, float Amplitude, float Frequency)
        {
            m_fAmplitude = Amplitude;
            m_fFrequency = Frequency;
            SetWaveFormat(sampleRate, channels);
        }

        public void SetWaveFormat(int sampleRate, int channels)
        {
            this.waveFormat = WaveFormat.CreateIeeeFloatWaveFormat(sampleRate, channels);
        }

        public int Read(byte[] buffer, int offset, int count)
        {
            WaveBuffer waveBuffer = new WaveBuffer(buffer);
            int samplesRequired = count / 4;
            int samplesRead = Read(waveBuffer.FloatBuffer, offset / 4, samplesRequired);
            return samplesRead * 4;
        }

        public int Read(float[] buffer, int offset, int sampleCount)
        {
            int sampleRate = WaveFormat.SampleRate;
            for (int n = 0; n < sampleCount; n++)
            {
                buffer[n + offset] = (float)(m_fAmplitude * Math.Sin((2 * Math.PI * m_nSample * m_fFrequency) / sampleRate));
                m_nSample++;
                if (m_nSample >= sampleRate)
                    m_nSample = 0;
            }
            return sampleCount;
        }

        public WaveFormat WaveFormat
        {
            get { return waveFormat; }
        }
    }
}


Form file



private void button1_Click(object sender, EventArgs e)
{
    if (waveOut == null)
    {
        input1 = new SineWaveProvider32(10000, 1, 0.25F, 1000F);
        input2 = new SineWaveProvider32(10000, 1, 1.5F, 2000F);

        multiplexingWaveProvider = new MultiplexingWaveProvider(new IWaveProvider[] { input1, input2 }, 2);
        multiplexingWaveProvider.ConnectInputToOutput(0, 0);
        multiplexingWaveProvider.ConnectInputToOutput(1, 1);
        waveOut = new WaveOut();
        waveOut.Init(multiplexingWaveProvider);
        waveOut.Play();
        WaveFileWriter.CreateWaveFile("temp.wav", multiplexingWaveProvider);
    }
    else
    {
        waveOut.Stop();
        waveOut.Dispose();
        waveOut = null;
    }
}

Comment by Mark H

You've got two things trying to read from the same wave provider. You can either play or write a WAV file, but not both from the same provider. If you want to do both, create a new IWaveProvider whose Read method reads from the source, writes to a WAV file, and then passes the data on through.

Comment by hen vertis

Hi Mark
Thanks for the fast response.
So I need to pass the same instance of WaveFileWriter to both wave providers?

Comment by Mark H

No, the new wave provider you make should be the last thing in the pipeline, after the multiplexer. It makes its own WaveFileWriter and writes the data into it as it passes it through in its Read function.
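Something like this (an untested sketch; the class name is up to you):

public class WaveFileWriterProvider : IWaveProvider
{
    private readonly IWaveProvider source;
    private readonly WaveFileWriter writer;

    public WaveFileWriterProvider(IWaveProvider source, string fileName)
    {
        this.source = source;
        this.writer = new WaveFileWriter(fileName, source.WaveFormat);
    }

    public WaveFormat WaveFormat
    {
        get { return source.WaveFormat; }
    }

    public int Read(byte[] buffer, int offset, int count)
    {
        int bytesRead = source.Read(buffer, offset, count);
        writer.WriteData(buffer, offset, bytesRead); // Write in NAudio 1.5
        return bytesRead;
    }
}

Just remember to dispose the writer when playback finishes so the WAV header gets patched up.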

Comment by hen vertis

Hi again
I did what you described and a WAV file is generated, but it seems to be out of sync with what I hear from the player.

here is my code:

public class SineWaveProvider32 : IWaveProvider
{
    private WaveFormat waveFormat;
    float m_fAmplitude;
    float m_fFrequency;
    int m_nSample;
    WaveFileWriter m_waveFileWriter;

    public SineWaveProvider32(WaveFileWriter waveFileWriter, int sampleRate, int channels, float Amplitude, float Frequency)
    {
        m_waveFileWriter = waveFileWriter;
        m_fAmplitude = Amplitude;
        m_fFrequency = Frequency;
        SetWaveFormat(sampleRate, channels);
    }

    public void SetWaveFormat(int sampleRate, int channels)
    {
        this.waveFormat = WaveFormat.CreateIeeeFloatWaveFormat(sampleRate, channels);
    }

    public int Read(byte[] buffer, int offset, int count)
    {
        WaveBuffer waveBuffer = new WaveBuffer(buffer);
        int samplesRequired = count / 4;
        int samplesRead = Read(waveBuffer.FloatBuffer, offset / 4, samplesRequired);
        return samplesRead * 4;
    }

    public int Read(float[] buffer, int offset, int sampleCount)
    {
        int sampleRate = WaveFormat.SampleRate;
        for (int n = 0; n < sampleCount; n++)
        {
            buffer[n + offset] = (float)(m_fAmplitude * Math.Sin((2 * Math.PI * m_nSample * m_fFrequency) / sampleRate));
            m_nSample++;
            if (m_nSample >= sampleRate)
                m_nSample = 0;
            m_waveFileWriter.WriteSample(buffer[n + offset]);
        }
        return sampleCount;
    }

    public WaveFormat WaveFormat
    {
        get { return waveFormat; }
    }
}

private void button1_Click(object sender, EventArgs e)
{
    if (waveOut == null)
    {
        waveFileWriter = new WaveFileWriter("temp.wav", new WaveFormat(10000, 1));
        input1 = new SineWaveProvider32(waveFileWriter, 10000, 1, 0.25F, 1000F);
        input2 = new SineWaveProvider32(waveFileWriter, 10000, 1, 1.5F, 2000F);

        multiplexingWaveProvider = new MultiplexingWaveProvider(new IWaveProvider[] { input1, input2 }, 2);
        multiplexingWaveProvider.ConnectInputToOutput(0, 0);
        multiplexingWaveProvider.ConnectInputToOutput(1, 1);
        waveOut = new WaveOut();
        waveOut.Init(multiplexingWaveProvider);
        waveOut.Play();

        //WaveFileWriter.CreateWaveFile("temp.wav", multiplexingWaveProvider);
    }
    else
    {
        waveOut.Stop();
        waveOut.Dispose();
        waveOut = null;
        waveFileWriter.Close();
    }
}

Comment by Mark H

No, the WaveFileWriter must be after the multiplexer in the pipeline; you have it before. You need to create a new class that implements IWaveProvider. In its constructor it takes the multiplexing wave provider. In its Read method it reads from the multiplexing wave provider, writes what it gets to a file, and then returns what it read to the caller (which will be WaveOut).

Comment by hen vertis

Hi
I've lost you :(
I have two WaveProvider32s producing sine waves, followed by the MultiplexingWaveProvider.
Now I need to add another wave provider that gets its data from the MultiplexingWaveProvider?

Comment by Mark H

Yes, you'll need to create a helper class like that if you want to both play and record to WAV at the same time.

Comment by hen vertis

Hi
I think it works now.
I added a new class as you suggested:
class MultiplexingWaveProvider32Stereo : IWaveProvider
{
    private WaveFormat m_waveFormat;
    WaveFileWriter m_waveFileWriter;
    MultiplexingWaveProvider m_multiplexingWaveProvider;

    public MultiplexingWaveProvider32Stereo(MultiplexingWaveProvider multiplexingWaveProvider, WaveFileWriter waveFileWriter)
    {
        m_waveFileWriter = waveFileWriter;
        m_waveFormat = multiplexingWaveProvider.WaveFormat;
        m_multiplexingWaveProvider = multiplexingWaveProvider;
    }

    #region IWaveProvider Members

    public int Read(byte[] buffer, int offset, int count)
    {
        WaveBuffer waveBuffer = new WaveBuffer(buffer);
        int samplesRead = m_multiplexingWaveProvider.Read(waveBuffer.ByteBuffer, offset, count);
        for (int i = 0; i < samplesRead / 4; i++)
        {
            m_waveFileWriter.WriteSample(waveBuffer.FloatBuffer[i]);
        }
        return samplesRead;
    }

    public WaveFormat WaveFormat
    {
        get { return m_waveFormat; }
    }

    #endregion
}

Thanks


Comment by hen vertis

Hi Mark
I have two WaveProvider32s and the MultiplexingWaveProvider32Stereo. When playing and recording at the same time, I get lots of beeps at the beginning that disappear after 5-10 seconds, both in playback and in the file the WaveFileWriter creates.
What could be the problem?
Thanks

Comment by Anonymous

Hi, I just want to resample, but only CreateWaveFile works and I don't want a WAV file on disk.

byte[] data = Convert.FromBase64String("----- string encoded ---");
MemoryStream fs = new MemoryStream(data);
var baseDir = AppDomain.CurrentDomain.BaseDirectory;

using (var wfr = new WaveFileReader(fs))
{
    var outputFormat = new WaveFormat(8000, 16, 1);
    using (var pcmStream = new WaveFormatConversionStream(outputFormat, wfr))
    {
        using (var ms = new MemoryStream())
        {
            var bytesRead = -1;
            while (bytesRead != 0)
            {
                var buffer = new byte[pcmStream.WaveFormat.AverageBytesPerSecond];
                bytesRead = pcmStream.Read(buffer, 0, pcmStream.WaveFormat.AverageBytesPerSecond);
                ms.Write(buffer, 0, bytesRead);
            }

            program.WaveHeaderIN(ms.GetBuffer());
            ms.Position = 0;
            RawSourceWaveStream RawStram = new RawSourceWaveStream(ms, outputFormat);

            System.IO.File.WriteAllBytes(@"Desktop\waveConvertBy.wav", ms.GetBuffer());
            // to make a real wav file...

            ms.Position = 0;
            WaveFileWriter.CreateWaveFile(@"Desktop\output.wav", RawStram);
            Console.WriteLine("wavefile length: " + RawStram.Length);
        }
    }
}

ms.GetBuffer() is not a WAV file when I play it. Is there another way to get a RawSourceWaveStream into a byte array or MemoryStream?
Thanks

Anonymous
Comment by Aviad Zamir

Hi
I'm trying to create a WAV file from a wave created using SignalGenerator, but it doesn't work; please see below.
Does anyone know how to do this?

SignalGenerator mySinus = new SignalGenerator(44100, 1);
mySinus.Frequency = 2000;
mySinus.Gain = 1;
mySinus.Type = SignalGeneratorType.Sin;
IWaveProvider w = mySinus.ToWaveProvider16();
WaveFileWriter.CreateWaveFile("Check_WAV.wav", w);

Aviad Zamir
Comment by Mark Heath

You need to limit the duration, e.g. with the Take extension method:

var sine20Seconds = new SignalGenerator()
    {
        Gain = 1,
        Frequency = 1,
        Type = SignalGeneratorType.Sin
    }
    .Take(TimeSpan.FromSeconds(20));
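Then pass that to CreateWaveFile16 (available in more recent NAudio versions), e.g.:

WaveFileWriter.CreateWaveFile16("sine.wav", sine20Seconds);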

Mark Heath
Comment by Ross Carlson

Greetings,
I am working on an application that receives audio data over the network from multiple sources. You can think of it as a multi-user audio chat system like TeamSpeak. Currently, I feed the incoming audio to a BufferedWaveProvider, which then feeds through a Pcm16BitToSampleProvider. I have one such audio chain for each source of audio (each user in the chat.) These are then added as inputs on a MixingSampleProvider. Finally, the mixed audio is connected to a WaveOut via a SampleToWaveProvider.
This is all working very well. Now, I want to add the ability to toggle recording of the mixed audio to a wave file. I considered adding a WaveRecorder instance to the pipeline, just prior to the WaveOut, and that would work fine if I wanted to be recording at all times. However, I want to be able to turn recording on and off during the session.
So I'm not sure how best to go about setting up recording to a wave file when I'm receiving audio data from multiple sources over the network. Does anyone have any pointers?
(I'm also curious if there's a better way to mix audio from multiple network sources for feeding to a WaveOut device.)
Thank you!
-Ross

Ross Carlson
Comment by Mark Heath

You'd just make your own custom WaveRecorder in the pipeline with an on-off switch. If it's on, the Read method writes to a WaveFileWriter; if it's off, it skips the writing and just passes the audio through.
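Something along these lines (an untested sketch; the flag would be flipped from your UI):

public class ToggleableWaveRecorder : IWaveProvider
{
    private readonly IWaveProvider source;
    private readonly WaveFileWriter writer;
    public volatile bool IsRecording;

    public ToggleableWaveRecorder(IWaveProvider source, string fileName)
    {
        this.source = source;
        this.writer = new WaveFileWriter(fileName, source.WaveFormat);
    }

    public WaveFormat WaveFormat
    {
        get { return source.WaveFormat; }
    }

    public int Read(byte[] buffer, int offset, int count)
    {
        int bytesRead = source.Read(buffer, offset, count);
        if (IsRecording)
        {
            writer.Write(buffer, offset, bytesRead); // skipped when toggled off
        }
        return bytesRead;
    }
}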

Mark Heath
Comment by Ross Carlson

Ahh, that makes sense, but here's a use case that I see I failed to describe in my first post:
In some cases, I have multiple audio pipelines, each mixing audio from multiple sources and ending at a WaveOut device. The user can control which chat participants go to audio device 1 (such as a headset) and which chat participants go to audio device 2 (such as a set of speakers.) I would like to mix all of the audio, regardless of which of these pipelines it is in, into a single wave file, when recording is toggled on. Plus, I want to include in this recording any audio taken from the user's mic.
What I've tried so far to make this work is to have a third pipeline into which I feed all incoming audio, as well as the audio from the mic, and this pipeline terminates at a WaveRecorder. This obviously doesn't work since nothing is calling Read() on the WaveRecorder. I tried calling Read() on it myself immediately after stuffing audio data into the recording pipeline, but I don't know how many bytes I should read, so I either get a short read and the audio is cut off in the wave file, or I read too many bytes and the wave file has chunks of silence in it when it shouldn't. I'm not sure this simplistic approach of immediately reading after stuffing audio data into the pipeline could ever work anyway, since it seems it would be reading the data before the MixingSampleProvider has a chance to do its mixing.
Am I barking up the wrong tree here?
Thank you very much for your assistance!

Ross Carlson
Comment by Mark Heath

OK, that is a trickier problem to solve. There are a few approaches. One is quite similar to what you are trying, but assumes both devices are playing the whole time. Whenever they read, they copy the audio they are giving to the soundcard into a BufferedWaveProvider (one for each device). These are connected to a mixer, and whenever you write to one, you read out of the mixer the highest number of samples that both BufferedWaveProviders are able to provide. A more complex solution keeps a timestamped list of the buffers played by each device, with an algorithm that monitors the list and mixes a second of audio whenever enough has accumulated. That copes better with devices that aren't playing audio, or with playback glitches.
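A very rough, untested sketch of the first approach (all names here are made up, and it assumes the audio reaching each device is 32-bit float):

WaveFormat format = WaveFormat.CreateIeeeFloatWaveFormat(44100, 2);
BufferedWaveProvider device1Buffer = new BufferedWaveProvider(format);
BufferedWaveProvider device2Buffer = new BufferedWaveProvider(format);
MixingSampleProvider mixer = new MixingSampleProvider(
    new[] { device1Buffer.ToSampleProvider(), device2Buffer.ToSampleProvider() });
WaveFileWriter writer = new WaveFileWriter("mixed.wav", format);
float[] mixBuffer = new float[format.SampleRate * format.Channels];

// call this from each device's pipeline with whatever it just played
void OnDevicePlayed(BufferedWaveProvider deviceBuffer, byte[] audio, int byteCount)
{
    deviceBuffer.AddSamples(audio, 0, byteCount);
    // drain only what BOTH devices have delivered, keeping them aligned
    int availableBytes = Math.Min(device1Buffer.BufferedBytes, device2Buffer.BufferedBytes);
    int samplesToMix = availableBytes / 4; // 4 bytes per 32-bit float sample
    while (samplesToMix > 0)
    {
        int samplesRead = mixer.Read(mixBuffer, 0, Math.Min(samplesToMix, mixBuffer.Length));
        writer.WriteSamples(mixBuffer, 0, samplesRead);
        samplesToMix -= samplesRead;
    }
}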

Mark Heath
Comment by Ross Carlson

That makes sense. I think I would have to use the second approach, because the WaveOut devices definitely are not playing at all times. I mean, I call Play() on the WaveOut instances right when the app starts after building the pipelines, but they are not getting samples fed to them constantly.
Would you mind elaborating a bit more on the second approach? I'm not sure I understand why I would need timestamped buffers. Couldn't I just read from them on a 1 second timer?

Ross Carlson
Comment by Mark Heath

You can, but imagine if one of them for some reason is not being fed with audio. You need to decide when to insert silence into one of the streams if it doesn't have continuous input.

Mark Heath
Comment by Ross Carlson

I think I see what you mean. I guess I was assuming that using a MixingSampleProvider would take care of inserting silence for the inputs that didn't have data available to read. I just looked over the code for that class and I see that it removes its sources when they have nothing to read.
So maybe I could do what I'm already doing to mix the audio before passing it to a WaveOut device: first pass it through a BufferedWaveProvider before sending it to the mixer, which I think creates silence as needed (I'll inspect that class next). But then I assume my wave file would be constantly recording silence when no voice data is coming in over the network, which is not what I want.
I'll experiment a bit more and see what I can come up with. I think I should take your pluralsight course as well ...
Thank you!

Ross Carlson
Comment by Mark Heath

Yes, but BufferedWaveProvider produces a never-ending stream of silence, so if you aggressively Read from it before it's ready you'll artificially introduce silence

Mark Heath
Comment by Ross Carlson

Yup, makes sense. I'm thinking I could create a custom BufferedWaveProvider that allows the reader to ask the provider if it has data available. Then I could have a custom mixer that would have the same ability to tell the reader if it has data. The mixer would ask its sources if any of them have data, and return true if any do. The consumer of the mixer's stream would then poll the mixer on some interval and do a read() if it returns true. Perhaps it could be called TryRead(). I'm probably missing something here and it's not this simple, but I'll experiment with it.

Ross Carlson
Comment by Ross Carlson

I ended up making a SilenceTrimmingWaveRecorder class, based on your WaveRecorder, where the only difference is that it doesn't write to the wave file if the buffer is full of silence. The Read method looks like this:


public int Read(byte[] buffer, int offset, int count)
{
    int bytesRead = source.Read(buffer, offset, count);
    // only inspect the bytes actually read on this call, not the whole buffer
    if (buffer.Skip(offset).Take(bytesRead).Any(b => b != 0))
    {
        writer.Write(buffer, offset, bytesRead);
    }
    return bytesRead;
}

The recorder is fed from a MixingSampleProvider as I described earlier.
I have a timer that calls Read once per second and passes in WaveFormat.AverageBytesPerSecond for the count parameter. Seems to be working well so far.
Let me know if you see any problems with this approach.
Thank you again for your help!

Ross Carlson
Comment by bengine

I have a situation that is confusing. I have a simple application where I want to be able to start and stop recording repeatedly using a button and save the recorded audio to a file.
I have set up a small test application with a toggle button that triggers sourceStream.StartRecording() and sourceStream.StopRecording(), with the output going to a WaveFileWriter.
It appears to work; however, toggling the start/stop recording button a few times results in:
Exception thrown: 'NAudio.MmException' in NAudio.dll
I am missing something obvious, but I can't seem to get enough information to track down what the issue is.
Is there some trick required to be able to start and stop recording and have the sound appended to a file only when the toggle button is on?

bengine