Try changing "waveOutDevice = new WaveOut()" to "waveOutDevice = new WaveOutEvent()". This will move the decoding to a background thread so your UI updates don't affect the playback.
This should also fix the "de-sync" issue you mentioned.
Try changing "waveOutDevice = new WaveOut()" to "waveOutDevice = new WaveOutEvent()". This will move the decoding to a background thread so your UI updates don't affect the playback.
This should also fix the "de-sync" issue you mentioned.
Also, NumberOfBuffers should not be 64. It should be 2 or at most 3. You're giving the soundcard much more work than it needs by constantly cycling through very short buffers. Increasing the latency also protects against the chance that a garbage collection could make you miss a buffer refill. 200 or even 300ms might be acceptable for your case.
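As a rough sketch (the exact values are just an illustration, and mainOutputStream stands for whatever wave stream you are already playing), that setup might look like:
var waveOutDevice = new WaveOutEvent();
waveOutDevice.NumberOfBuffers = 2;   // play one buffer while the other is being refilled
waveOutDevice.DesiredLatency = 300;  // total duration in ms across all the buffers
waveOutDevice.Init(mainOutputStream);
waveOutDevice.Play();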
I have a bunch of audio files that are super quiet. I would like to increase the volume on these files. I was planning on using the AudioFileReader.Volume
member to increase the volume and then save the file back. Below is what I have so far:
using (AudioFileReader soft = new AudioFileReader(softAudio))
{
    Console.WriteLine("Current soft audio at: " + soft.Volume);
}
The issue is that the file's volume is already at 1.0f, which is the max.
Am I going at this the right way for what I want to do? Is it possible to increase the volume further if NAudio already thinks it's at max?
The reader volume is not telling you how loud the audio is; 1.0 simply means that every sample will be left at an unchanged volume. If you set volume to 2.0, then every sample value will be doubled (effectively a 6dB increase).
The trouble comes when you come to write them back to a file, because the samples will clip if they are outside the range +/- 1.0. The simplest approach is to hard-limit after boosting the volume. This will clip the very loudest parts of your audio file, but if they are very low level in the first place, you might be able to get a good boost without ever clipping.
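To illustrate the hard-limiting approach, here's a rough sketch (assuming WaveFileWriter.CreateWaveFile16 and a hypothetical LimiterSampleProvider helper; softAudio and boostedAudio are placeholder paths):
using (var reader = new AudioFileReader(softAudio))
{
    reader.Volume = 4.0f; // boost; some samples may now fall outside +/- 1.0
    WaveFileWriter.CreateWaveFile16(boostedAudio, new LimiterSampleProvider(reader));
}

// Hypothetical hard-limiter: clamps every sample back into the +/- 1.0 range
class LimiterSampleProvider : ISampleProvider
{
    private readonly ISampleProvider source;
    public LimiterSampleProvider(ISampleProvider source) { this.source = source; }
    public WaveFormat WaveFormat { get { return source.WaveFormat; } }
    public int Read(float[] buffer, int offset, int count)
    {
        int read = source.Read(buffer, offset, count);
        for (int i = 0; i < read; i++)
        {
            if (buffer[offset + i] > 1.0f) buffer[offset + i] = 1.0f;
            if (buffer[offset + i] < -1.0f) buffer[offset + i] = -1.0f;
        }
        return read;
    }
}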
The proper solution is to use dynamic range compression, which can be tricky to get your head round at first, as you need to select appropriate attack/release/threshold/ratio and make-up gain parameters for your audio. However, this is one of the most effective ways of getting a good audio level. NAudio does contain a very simple compressor, but it has not yet been reworked into the ISampleProvider interface, something I hope to do for the next version.
Mark
The change to WaveOutEvent and setting the number of buffers to 2 did the trick.
The latency I need to experiment with, so that it corresponds roughly to the wireless sending of the fire message, the ignition of the electrical match, and the firing of the fireworks. It cannot be perfect timing, but it's more than good enough when the brain combines the sound of the music and the sight of the pyro. I think I may need to add a slider to advance/delay the pyro cues if the operator thinks the audio is a bit off from the actual firing.
The timing issue became apparent when a broadcaster could not accept our audio feed. We decided on a simple "ready, set, go" to sync the pyro system with their playing of the MP3 on their computer. We tested it a few times and were surprised how much timing wander there was between the two systems. We tested again using Windows Media Player on both systems and saw the same result. Any ideas why?
Thanks Again!
Hi,
I need to set the audio output device (sound card). How can I assign that using NAudio?
One thing to note about the "latency" is that really I shouldn't have called it latency. It is the total duration of all the buffers. Actual latency is half that if you are working with two buffers (the computer is playing one buffer while it fills the other). Does NAudio lag behind the other PC or get ahead? I know the soundcards on windows PCs can sometimes be running at 48kHz even when they are playing 44.1kHz audio. It does of course do sample rate conversion, but it might account for a small drift perhaps. I don't know what the actual tolerance of audio clocks on soundcards is.
Obviously with fireworks you have the bigger issue of the speed of sound, which is much slower than people realise. It is 340 m/s, so moving just ten metres closer to the source of the sound will mean you hear it roughly 30ms earlier. Unless everyone is exactly the same distance from the speakers, they will not experience the same synchronization.
Anyway, sounds a really interesting use you are putting NAudio to.
Mark
Hi,
I need to set the audio output device (sound card). How can I assign an audio device using NAudio in a normal form? Can you please show some sample code?
public AudioPlaybackPanel([ImportMany]IEnumerable<IOutputDevicePlugin> outputDevicePlugins)
{
}
Where are you getting the "outputDevicePlugins" value from?
Thanks & Regards,
S.Maria Hinshin Das
Hi,
In NAudio, from where are you calling
[ImportingConstructor]
public AudioPlaybackPanel([ImportMany]IEnumerable<IOutputDevicePlugin> outputDevicePlugins)
{
InitializeComponent();
LoadOutputDevicePlugins(outputDevicePlugins);
}
I want to play multiple sounds at the same time on different sound cards. How do you get the "outputDevicePlugins" value when the constructor is called? How can I choose the audio device using NAudio without a User Control?
I have been trying this for the last 2 months. Please tell me the solution that you used in NAudio. Thanks in advance.
To play sound in multiple soundcards, you need an output device per soundcard. So for example create two instances of WaveOut, and set the DeviceNumber property on each one.
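A minimal sketch of that (the device numbers and file names here are just placeholders):
var reader1 = new Mp3FileReader("music1.mp3");
var device1 = new WaveOut();
device1.DeviceNumber = 0;   // first soundcard
device1.Init(reader1);

var reader2 = new Mp3FileReader("music2.mp3");
var device2 = new WaveOut();
device2.DeviceNumber = 1;   // second soundcard
device2.Init(reader2);

device1.Play();
device2.Play();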
Mark
hi,
In NAudio, how do I get the output devices and set the device number?
Thanks & Regards,
Hinshin
var outputDevice = new WaveOut();
outputDevice.DeviceNumber = 1;
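To see which device numbers are available, a small sketch along these lines (using WaveOut.DeviceCount and WaveOut.GetCapabilities) should list the names:
for (int n = 0; n < WaveOut.DeviceCount; n++)
{
    var caps = WaveOut.GetCapabilities(n);
    Console.WriteLine(n + ": " + caps.ProductName);
}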
Thank you, I got the solution.
Is there a way to modify a WAV file's frequency?
Hi,
Thanks for your quick response.
Do you mean change the sample rate? If so, you can use WaveFormatConversionStream or DmoResamplerStream.
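For example, a sketch of resampling with WaveFormatConversionStream (file names and target rate are placeholders, and the input is assumed to be PCM that ACM can convert):
using (var reader = new WaveFileReader("input.wav"))
{
    var outFormat = new WaveFormat(22050, reader.WaveFormat.Channels); // target sample rate
    using (var resampler = new WaveFormatConversionStream(outFormat, reader))
    {
        WaveFileWriter.CreateWaveFile("output.wav", resampler);
    }
}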
Hi Mark - thanks for the reply. In Windows Vista and later, you can use the core audio library to control the sample rate from the application. It's a big pain though, so I've abandoned this effort for now.
http://msdn.microsoft.com/en-us/library/windows/desktop/dd370884(v=vs.85).aspx
Thanks,
Giawa
Hi Mark and Keith,
You're more than welcome to use the AiffFileWriter that I wrote, which is here: http://giawa.com/tutorials/src/AiffFileWriter.cs
It depends on an IEEE floating point library, which is here: http://giawa.com/tutorials/src/IEEE.cs
The AiffFileReader currently included with NAudio has its own IEEE floating point conversion methods (I wrote it before the AiffFileWriter, so I didn't think to put it in its own static class). It should be pretty straightforward to consolidate the whole thing to work with the IEEE.cs provided.
Hope this helps,
Giawa
I am using NAudio to play Mp3 files in my .NET 4 app. First of all I initialize WaveOut:
IWavePlayer^ waveOutDevice = gcnew WaveOut();
Then I have 2 buttons. The Play button: (code)
volumeStream = gcnew WaveChannel32(gcnew Mp3FileReader(gcnew IO::FileStream(path, IO::FileMode::Open, IO::FileAccess::Read, IO::FileShare::ReadWrite)));
mainOutputStream = volumeStream;
waveOutDevice->Init(mainOutputStream);
waveOutDevice->Play();
It loads the MP3 from a FileStream and plays it. The 2nd button is Stop: (code)
waveOutDevice->Stop();
It just stops playing.
When I start my app it eats 5.344 KB. But when I hit the 2 buttons (Play then Stop; imagine I'm playing different MP3s) about 10 times, the app eats 14.912 KB!
And I don't know how to release this memory. To play MP3 I am using these NAudio objects:
IWavePlayer^ waveOutDevice;
WaveStream^ mainOutputStream;
WaveChannel32^ volumeStream;
Much appreciated!