In Unity3D editor, it does work.
I'm trying to convert from .wav to .aiff and noticed there is an example of how to convert a .aiff to a .wav using the AiffFileReader, but there doesn't appear to be an AiffFileWriter, which I'd hoped there would be.
Is there a way to use NAudio to convert .wav files to .aiff files?
I've tried to figure this out. Does this look correct?
Private Sub btnPlay_Click(sender As Object, e As EventArgs) Handles btnPlay.Click
    If Label1.Text = Nothing Then
        Dim wavFile As String
        OpenWavFile.InitialDirectory = ""
        OpenWavFile.ShowDialog()
        wavFile = OpenWavFile.FileName
        Label1.Text = wavFile
    Else
        Dim file As New FileStream(Label1.Text, FileMode.Open, FileAccess.Read)
        Dim format As New WaveFormat(44100, 16, 2)
        Dim rawStream As New RawSourceWaveStream(file, format)
        output = New AsioOut(comboBoxAsioDriver.Text)
        output.Init(New NAudio.Wave.WaveChannel32(rawStream))
        output.Play()
        btnPlay.Enabled = False
    End If
End Sub
Yes, this is fine, although the WaveChannel32 is redundant in this case. Also, only use ASIO if you have a particular need for very low latency.
Unfortunately there is no AiffFileWriter in NAudio. If someone were to contribute a working one to the project I would be glad to include it. You may be able to work out how to create one yourself by examining the code for the reader.
Mark
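(For anyone attempting this later: below is a rough, untested sketch of what a minimal AIFF writer could look like for 16-bit PCM, based on the AIFF container layout of big-endian FORM / COMM / SSND chunks with the sample rate stored as an 80-bit extended float. AiffWriterSketch and WriteAiff are made-up names, not NAudio APIs.)

using System;
using System.IO;
using NAudio.Wave;

// Hypothetical helper, not part of NAudio: writes 16-bit PCM from a
// WaveStream into a minimal AIFF file (FORM / COMM / SSND chunks).
// Ignores odd-length chunk padding and assumes a single Read() fills
// the buffer, which is fine for a sketch.
static class AiffWriterSketch
{
    public static void WriteAiff(string path, WaveStream source)
    {
        var data = new byte[source.Length];
        int bytesRead = source.Read(data, 0, data.Length);

        int channels = source.WaveFormat.Channels;
        int bits = source.WaveFormat.BitsPerSample; // assumed to be 16
        int frames = bytesRead / (channels * (bits / 8));

        using (var bw = new BinaryWriter(File.Create(path)))
        {
            bw.Write(Ascii("FORM"));
            bw.Write(BigEndian(4 + 26 + 16 + bytesRead)); // everything after this size field
            bw.Write(Ascii("AIFF"));

            // COMM chunk: channel count, frame count, bit depth, sample rate
            bw.Write(Ascii("COMM"));
            bw.Write(BigEndian(18));
            bw.Write(BigEndian((short)channels));
            bw.Write(BigEndian(frames));
            bw.Write(BigEndian((short)bits));
            bw.Write(ExtendedSampleRate(source.WaveFormat.SampleRate));

            // SSND chunk: zero offset and block size, then the sample data
            bw.Write(Ascii("SSND"));
            bw.Write(BigEndian(8 + bytesRead));
            bw.Write(BigEndian(0)); // offset
            bw.Write(BigEndian(0)); // block size

            // AIFF samples are big-endian, so swap each 16-bit sample
            for (int i = 0; i < bytesRead - 1; i += 2)
            {
                bw.Write(data[i + 1]);
                bw.Write(data[i]);
            }
        }
    }

    static byte[] Ascii(string s) => System.Text.Encoding.ASCII.GetBytes(s);

    static byte[] BigEndian(int v) { var b = BitConverter.GetBytes(v); Array.Reverse(b); return b; }

    static byte[] BigEndian(short v) { var b = BitConverter.GetBytes(v); Array.Reverse(b); return b; }

    // AIFF stores the sample rate as an 80-bit IEEE extended float;
    // this normalization is enough for integer rates > 0.
    static byte[] ExtendedSampleRate(int rate)
    {
        int exponent = 16383 + 63;
        ulong mantissa = (ulong)rate;
        while ((mantissa & 0x8000000000000000UL) == 0) { mantissa <<= 1; exponent--; }
        var bytes = new byte[10];
        bytes[0] = (byte)(exponent >> 8);
        bytes[1] = (byte)exponent;
        for (int i = 0; i < 8; i++)
            bytes[2 + i] = (byte)(mantissa >> (56 - 8 * i));
        return bytes;
    }
}

(You would call it as, say, AiffWriterSketch.WriteAiff("out.aiff", new WaveFileReader("in.wav")), but treat it as a starting point rather than production code.)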
Thanks for the prompt response Mark, I'll take a look at that
ok. thank you so much)
slinger
Hey,
Thanks for your awesome work. It's been rather enjoyable using your library for development. I have a question about performance, however. My intention is to create a Digital Audio Workstation with VST support (using VST.NET) and I would like to be able to sell it as a full-featured, powerful program. Will NAudio support dozens of audio tracks playing simultaneously? What about with the overhead of VST processing? Am I overestimating the performance of .NET, and should I instead be writing this in C++? Performance does matter. People recording instruments need very low latency between recording and playback otherwise... well, recording gets very confusing at that point. I need to accomplish a latency of no more than 20ms.
Should I re-think my approach?
Thanks!
Hello everyone,
I'm a beginner with NAudio and I want to create a basic sine wave and modify its frequency by moving the mouse. I followed this tutorial, and in my MainWindow I've just added the code below.
My sine wave changes its frequency, but the changes are not fluid. Does anyone have ideas on how to make them more fluid?
double GetFrequency(Rectangle sender, MouseEventArgs e)
{
    // This function just converts the position of the mouse to a frequency
    ...
}

private void Rectangle_MouseMove_2(object sender, MouseEventArgs e)
{
    if (Mouse.LeftButton == MouseButtonState.Pressed)
    {
        // Here I change the frequency while playing
        sineWaveProvider.Frequency = (float)GetFrequency((Rectangle)sender, e);
    }
}

private void Rectangle_MouseLeftButtonDown_1(object sender, MouseButtonEventArgs e)
{
    // When you click, the sound starts playing
    sineWaveProvider.Frequency = (float)GetFrequency((Rectangle)sender, e);
    sineWaveProvider.Amplitude = 0.25f;
    waveOut = new WaveOut();
    waveOut.Init(sineWaveProvider);
    waveOut.Play();
}

private void Rectangle_MouseLeftButtonUp_1(object sender, MouseButtonEventArgs e)
{
    // When you release, the sound stops
    waveOut.Stop();
    waveOut.Dispose();
    waveOut = null;
}
For smooth frequency changes you need to implement a portamento algorithm. One way to do this is with a wavetable, where you slowly change the offset into the wavetable you are operating with. It's a tricky algorithm if you are a beginner to DSP, but hopefully that points you in the right direction.
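(If it helps, here is a rough sketch of a simpler alternative: a per-sample frequency glide, which is one basic form of portamento. SmoothSineProvider, TargetFrequency and GlideFactor are made-up names, not NAudio APIs; only ISampleProvider is real NAudio.)

using System;
using NAudio.Wave;

// Hypothetical provider: a sine generator whose frequency glides toward
// a target instead of jumping, which removes the clicks heard when
// changing the frequency abruptly.
class SmoothSineProvider : ISampleProvider
{
    private double phase;                      // current phase in radians
    private double currentFrequency = 440.0;

    public float Amplitude { get; set; } = 0.25f;
    public double TargetFrequency { get; set; } = 440.0;  // set this from MouseMove
    public double GlideFactor { get; set; } = 0.0005;     // per-sample smoothing amount

    public WaveFormat WaveFormat { get; } = WaveFormat.CreateIeeeFloatWaveFormat(44100, 1);

    public int Read(float[] buffer, int offset, int count)
    {
        for (int i = 0; i < count; i++)
        {
            // Move a small fraction of the way toward the target each sample
            currentFrequency += (TargetFrequency - currentFrequency) * GlideFactor;

            buffer[offset + i] = Amplitude * (float)Math.Sin(phase);
            phase += 2 * Math.PI * currentFrequency / WaveFormat.SampleRate;
            if (phase > 2 * Math.PI) phase -= 2 * Math.PI;
        }
        return count;
    }
}

(From Rectangle_MouseMove_2 you would then set TargetFrequency instead of Frequency; recent NAudio versions accept an ISampleProvider directly in waveOut.Init().)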
.NET is plenty fast enough to do a project like that, *IF* you correctly optimize the code (using profilers as guides).
That said, I would recommend only opening a single output stream; this uses fewer handles and allows you to mix on the fly in your own code. I'd also recommend making sure all sources use the same sample rate as the output stream (we'll assume all of them are 32-bit float internally).
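(For illustration, a minimal sketch of that single-output-stream setup using NAudio's MixingSampleProvider; the file names are placeholders, and the inputs are assumed to already match the mixer's format.)

using NAudio.Wave;
using NAudio.Wave.SampleProviders;

// One mixer, one device handle; every track is an ISampleProvider.
// Assumes the files are already 44.1 kHz stereo after decoding;
// resample first if they are not.
var mixFormat = WaveFormat.CreateIeeeFloatWaveFormat(44100, 2);
var mixer = new MixingSampleProvider(mixFormat) { ReadFully = true };

// Your VST processing chain would wrap each track before it is added here
mixer.AddMixerInput(new AudioFileReader("track1.wav"));
mixer.AddMixerInput(new AudioFileReader("track2.wav"));

// A single output stream for the whole session
var output = new AsioOut(); // or WaveOutEvent if ASIO isn't needed
output.Init(mixer);
output.Play();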
No matter which language you use, this will be a large project... take your time and make sure you are always using the best algorithm for each piece of the system.
Good Luck!
Thank you very much. Your statements were very helpful. I had some problems calling the ADPCM.dll properly, but now it mostly seems to work.
Unfortunately only mostly, because the library processes the G.727 data only in 512-byte blocks. This results in the following ALaw block sizes:
2-bit: 2048 bytes
3-bit: 1365 bytes
4-bit: 1024 bytes
5-bit: 819 bytes
The Read() method asks for a 2880-byte buffer to be filled. If I use 4-bit samples, I process two blocks, fill 2048 bytes, and return those 2048. But then the next Read() call asks for the remaining 832 bytes, which is less than the library can process.
That leaves me with two possibilities:
1.) Get NAudio to ask for the right buffer size
2.) Process three blocks, fill exactly 2880 bytes, and buffer the remaining 192 bytes for the next call (see the sketch below)
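(A rough sketch of what 2.) could look like; DecodeBlock is a stand-in for the ADPCM.dll call, not a real API.)

using System;

// Sketch of option 2: always decode whole 512-byte G.727 blocks and keep
// any decoded bytes that don't fit in the caller's buffer for the next
// Read() call.
class BlockBufferingReader
{
    private readonly byte[] leftover = new byte[4096];
    private int leftoverCount;

    public int Read(byte[] buffer, int offset, int count)
    {
        int written = 0;

        // First drain anything left over from the previous call
        int fromLeftover = Math.Min(leftoverCount, count);
        Array.Copy(leftover, 0, buffer, offset, fromLeftover);
        Array.Copy(leftover, fromLeftover, leftover, 0, leftoverCount - fromLeftover);
        leftoverCount -= fromLeftover;
        written += fromLeftover;

        // Then decode whole blocks until the request is satisfied
        while (written < count)
        {
            byte[] block = DecodeBlock();   // e.g. 1024 bytes for 4-bit samples
            if (block == null) break;       // end of source data
            int take = Math.Min(block.Length, count - written);
            Array.Copy(block, 0, buffer, offset + written, take);
            written += take;
            // Stash the tail of the block for the next Read()
            Array.Copy(block, take, leftover, leftoverCount, block.Length - take);
            leftoverCount += block.Length - take;
        }
        return written;
    }

    private byte[] DecodeBlock() { /* call into ADPCM.dll here */ return null; }
}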
Is 1.) possible?
Best wishes
WaveFormatConversionStream should use the BlockAlign parameter of the source stream to manage its read sizes. Make sure that is set to the correct block size.
Mark
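(For illustration, one way to surface the decoder's block size through the source WaveFormat; the 1024 is the 4-bit case from the table above, and the sample rate and channel count are assumptions for G.727 telephony audio.)

using NAudio.Wave;

// Sketch: report the decoder's output block size as BlockAlign so that
// WaveFormatConversionStream reads in whole-block multiples.
var alawFormat = WaveFormat.CreateCustomFormat(
    WaveFormatEncoding.ALaw, // encoding of the decoded stream
    8000,                    // sample rate (assumed)
    1,                       // channels (assumed mono)
    8000,                    // average bytes per second for 8-bit ALaw at 8 kHz
    1024,                    // BlockAlign = one decodable block (4-bit case)
    8);                      // ALaw is 8 bits per sample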