January 29, 2013, 6:10 am
I'm afraid I never got round to implementing MIDI file playback in NAudio. My focus at the time I did MIDI was much more on manipulating MIDI files. I do think that some of the midiStream APIs can do it, but the wrappers for them in NAudio are only partially complete.
If you're making your own synth, then you'd probably be implementing the sequencer yourself anyway, as it wouldn't appear to Windows as a MIDI device. The drum machine demo I added to the NAudio WPF Demo project might point you in the right direction for doing this. However, drums are usually "one-shot", whilst other MIDI notes must respond to the note-off message at the right time.
January 29, 2013, 7:39 am
Respected Sir,
I am using waveIn to list the audio capture devices. The issue is that my machine has two audio capture devices, and if the audio service is stopped I still get a count of two, even if I unplug one of them.
Can you please tell me how to solve this, or is there any way to get a correct audio capture device list even when the audio service is stopped?
January 29, 2013, 7:42 am
Hi - thanks for replying.
I tried Google/Wikipedia for 'IEEE float' ->> I guess it is a general data format rather than an audio format. I then went to my recording software to try to record in that format. It turned out that the line:
NAudio.Wave.WaveChannel32 wave = new NAudio.Wave.WaveChannel32(new NAudio.Wave.WaveFileReader(open.FileName));
- can only open 'Windows PCM 16-bit'.
In this process (my own early learning process with C#) I have also found a way to export the data stream of the y-values, from above, into a .csv file, which rendered (in Excel) the values into this graph: http://www.preforce.dk/DIV/NAudio_codeplex.htm
This doesn't fit with the asymmetric output of my 'Windows PCM 16-bit' wav file in chart1.
So instead of;
NAudio.Wave.WaveChannel32 wave = new NAudio.Wave.WaveChannel32(new NAudio.Wave.WaveFileReader(open.FileName));
- should I then be using a different way to open the wave file? (The goal is, of course, to make the y-values fit the peaks at the wave position (x-values/time) in the recorded file.)
Although Giawa, who made the tutorial that I am merging from, could plot out to the chart, I am having big trouble understanding what's going on value-wise. I mean: the chart displays very different y-values (making peaks) ->> so how do I access those peak-making y-values?
Lastly, I also tried to alter the way the file's y-values are read/converted in the tutorial from:
chart1.Series["wave"].Points.Add(BitConverter.ToSingle(buffer, i * 4));
to:
chart1.Series["wave"].Points.Add(BitConverter.ToDouble(buffer, i * 4));
(Actually this was just to try something, as you wrote about the 'ToSingle' approach being wrong ('IEEE float').)
'ToDouble' results in an exception, but then again, this was a wild shot from my side :-/
January 29, 2013, 9:05 pm
Is there a way to change the playback device of an MMDevice?
I know there is a hacky way to change the DEFAULT playback device, but I only want to route a specific input (for example, a line coming from a turntable) to a specific audio output.
The same way it is done in this window.
Hope you can help me
Regards
JLuis
January 30, 2013, 1:18 am
To monitor an input you'd need to record sound from that input (WasapiCapture) and play it to the output (WasapiOut).
January 30, 2013, 3:24 am
Your WAV file is 16 bit, but WaveChannel32 converts 16-bit PCM into 32-bit IEEE float. So if you keep WaveChannel32, ToSingle is right, because your audio is IEEE float at that point. I'm not really clear on what you are trying to graph. Do you want to draw a waveform?
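If a waveform display is the goal, here is a minimal sketch of reading the 32-bit IEEE float samples that WaveChannel32 produces and taking one peak per block to plot (the file name, block size and use of Console output are placeholder choices for illustration):

```csharp
using System;
using NAudio.Wave;

// Sketch: WaveChannel32 converts 16-bit PCM to 32-bit IEEE float,
// so each sample is 4 bytes and BitConverter.ToSingle is correct here.
var wave = new WaveChannel32(new WaveFileReader("input.wav"));
int blockSize = 4096; // bytes per read; arbitrary
var buffer = new byte[blockSize];
int read;
while ((read = wave.Read(buffer, 0, blockSize)) > 0)
{
    float max = 0;
    for (int i = 0; i + 4 <= read; i += 4)
    {
        float sample = BitConverter.ToSingle(buffer, i);
        max = Math.Max(max, Math.Abs(sample));
    }
    // max is the peak for this block: add it as one chart point
    Console.WriteLine(max);
}
```

Plotting one peak per block rather than every sample keeps the chart point count manageable for longer files.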
January 30, 2013, 8:38 am
That seems kind of redundant :S
And what would be the process? I hope you can help me a bit, because I'm sort of lost here:
var waveIn = new WasapiCapture(inputDevice);
var waveOut = new WasapiOut(outputDevice, AudioClientShareMode.Shared, true, 150);
// What object do I need to create to provide an IWaveProvider for the Init method of waveOut?
Hope you can help me with this
Regards
JLuis
January 30, 2013, 8:59 am
Is there a way to amplify/normalize a wave file?
I need to raise the volume of files!
Hope you can help me
Regards
Rodrigo
January 30, 2013, 2:44 pm
Hello Mark Heath.
First I want to thank you and your helpers for this great sound lib. You have probably made the only full .NET sound lib with this many functions :) Well, I have to come to my question now:
I set the NAudio.dll reference, set up the imports and declared a new NAudio.Wave.WaveOut object. Then I init the ogg stream to it and start the playback. How can I implement an FFT spectrum of the stream now, and which WinForms element should I use to display it? I saw you support that in the DSP class. Any short usage example would be great. Full code:
Imports NVorbis
Imports NAudio

Public Class Form1
    Dim stream As System.IO.UnmanagedMemoryStream
    Dim waveOut = New NAudio.Wave.WaveOut()

    Private Sub Button1_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles Button1.Click
        stream = My.Resources.Test1 '// This is an ogg file; it is only named as a Wave file to get a stream ;)
        Dim OggStreamReader = New NVorbis.NAudioSupport.VorbisWaveReader(stream)
        waveOut.Init(OggStreamReader)
        waveOut.Play()
    End Sub
End Class
You can download the full Project from my public ftp here if you want to take a look into the full Project: ftp://www.untergrund.net/users/Freefall/News%20Section/NVorbis%26NAudioTest.rar
January 30, 2013, 6:55 pm
Interesting. Your voice recorder app only sets the Windows volume control to its max. How would you boost the audio signal, i.e. multiply its amplitude by a given value and output it in real time, so you can create virtually any volume you desire, up to audio distortion?
January 30, 2013, 8:10 pm
Don't bother, I found it myself.
As http://stackoverflow.com/questions/5694326/record-input-from-naudio-wavein-and-output-to-naudio-waveout helped me well with the recording part, I decided to share the amplification part for those who need it.
int Verstärkung = 30; // gain factor ("Verstärkung" is German for amplification)

void wi_DataAvailable(object sender, WaveInEventArgs e)
{
    var erg = new byte[e.BytesRecorded];
    for (int i = 0; i < e.BytesRecorded; i += 2)
    {
        // reassemble the 16-bit little-endian sample
        var sample = (short)(e.Buffer[i] | (e.Buffer[i + 1] << 8));
        // apply the gain and split the result back into two bytes
        erg[i] = (byte)((sample * Verstärkung) & 0xff);
        erg[i + 1] = (byte)(((sample * Verstärkung) >> 8) & 0xff);
    }
    bwp2.AddSamples(erg, 0, e.BytesRecorded);
}
Enter your desired amplification value for Verstärkung (80 seems to distort everything; use a slider to get to the limit).
One last question though: how do I improve latency?
January 30, 2013, 10:32 pm
You can set the buffer size on WaveIn before you start recording. WaveIn doesn't support really low latency, but you might find you can get down to around 50ms.
Your method of amplifying the sound will only work up to a point. It risks overflow as you are staying in 16 bit mode. Normally I would convert the output of the buffered wave provider to float. Then you can freely amplify as much as you like. Then, before converting back down to 16 bit to play or write to a WAV file, you clip any samples that go outside the +/- 1.0 range.
Mark
January 30, 2013, 10:35 pm
What does that look like in code for my example?
January 30, 2013, 11:04 pm
Hi, I explain how to use FFT in NAudio in this article. Be warned that if you are new to audio DSP, it can take a while to understand how to use FFT. If you want to visualise the FFT with a spectrum analyser, there is a great NAudio example in this project.
January 30, 2013, 11:38 pm
Try the AudioFileReader, which will give you IEEE float as an ISampleProvider. This will allow you to easily examine the value of each sample as it comes through. You amplify by multiplying each sample by a value > 1. AudioFileReader has a Volume property that will do this for you.
To write back to a wav file, you'd use a SampleToWaveProvider16 to get back to 16 bit (and will clip if necessary), then pass that into WaveFileWriter.CreateWaveFile.
Normalizing is the process of amplifying by the largest amount possible without actually clipping. It isn't actually a particularly useful feature, as a single sample close to the maximum value means that you can't amplify the volume at all. It is much better to use dynamic range compression, which is a more complex algorithm. I've got an example in the SimpleCompressor class, but it needs a bit of work to be easily inserted into the signal chain at this point.
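As a minimal sketch of the amplify-and-write-back pipeline described above (file names and the gain value are placeholders):

```csharp
using NAudio.Wave;

// Sketch: amplify a WAV file via AudioFileReader's Volume property,
// then convert back to 16-bit PCM and write it out.
var reader = new AudioFileReader("input.wav"); // exposes IEEE float samples
reader.Volume = 2.0f; // multiply every sample by 2

// SampleToWaveProvider16 converts the float samples back down to
// 16 bit, clipping anything outside the +/- 1.0 range
var pcm16 = new SampleToWaveProvider16(reader);
WaveFileWriter.CreateWaveFile("louder.wav", pcm16);
```

Because the amplification happens in floating point, intermediate values above 1.0 are not a problem; clipping only happens once, at the final conversion back to 16 bit.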
January 30, 2013, 11:39 pm
You can use the BufferedWaveProvider for this. When you receive audio from the capture device, write it into the BufferedWaveProvider. The player can just read continually from the BufferedWaveProvider.
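A minimal sketch of that wiring, assuming `inputDevice` and `outputDevice` are MMDevice instances you have already selected (the 150ms latency value is an arbitrary starting point):

```csharp
using NAudio.CoreAudioApi;
using NAudio.Wave;

// Sketch: monitor an input by pushing captured audio into a
// BufferedWaveProvider that the output device reads from.
var waveIn = new WasapiCapture(inputDevice);
var buffered = new BufferedWaveProvider(waveIn.WaveFormat);
waveIn.DataAvailable += (s, e) =>
    buffered.AddSamples(e.Buffer, 0, e.BytesRecorded);

var waveOut = new WasapiOut(outputDevice, AudioClientShareMode.Shared, true, 150);
waveOut.Init(buffered);
waveIn.StartRecording();
waveOut.Play();
```

The BufferedWaveProvider is the glue: the capture side writes into it from the DataAvailable event, and the playback side drains it on its own thread.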
January 30, 2013, 11:42 pm
The waveIn device count just returns the number of devices that Windows reports (using the waveInGetNumDevs function).
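For reference, a sketch of enumerating those devices with the waveIn API wrappers:

```csharp
using System;
using NAudio.Wave;

// Sketch: list whatever capture devices Windows currently reports.
// If the audio service is stopped, this list may be stale.
for (int n = 0; n < WaveIn.DeviceCount; n++)
{
    var caps = WaveIn.GetCapabilities(n);
    Console.WriteLine(n + ": " + caps.ProductName);
}
```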
January 31, 2013, 12:01 am
You need to set up an effect pipeline. You are doing roughly the right thing; it's just that EffectStream is expecting a WaveStream. I'd modify EffectStream to simply require an IWaveProvider - it doesn't really need a WaveStream. Then you will be able to call waveOuts.Init(effectStream)
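Since EffectStream is your own class, the details will differ, but the general shape of an effect that wraps an IWaveProvider rather than a WaveStream is something like this (the class name and the pass-through effect body are placeholders):

```csharp
using NAudio.Wave;

// Sketch: an effect stage that only needs IWaveProvider, so it can wrap
// any source (file reader, buffered provider, another effect, ...).
class EffectWaveProvider : IWaveProvider
{
    private readonly IWaveProvider source;

    public EffectWaveProvider(IWaveProvider source)
    {
        this.source = source;
    }

    public WaveFormat WaveFormat
    {
        get { return source.WaveFormat; }
    }

    public int Read(byte[] buffer, int offset, int count)
    {
        int read = source.Read(buffer, offset, count);
        // apply the effect in place to buffer[offset .. offset + read] here
        return read;
    }
}
```

Because it implements IWaveProvider itself, the result can be passed straight to waveOut.Init, or chained into a further effect.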
January 31, 2013, 8:11 am
Thanks for your quick reply. Well, it seems wpfsvl is a good way to go, but I have 2 problems now:
1) Wpfsvl needs .NET Framework 4.0, and I don't want to have to install it on my computers.
2) There is an example project showing the usage of NAudio with it, but source code or guides for the compiled NAudio example are not provided :-Y . That makes me angry; it wouldn't be that much work to include the example project source code...
=> So this seems to be a dead end. Could you please release an example of using FFT with NAudio? That would be fantastic!! I would be your biggest fan then :) It should only show the basic functions; I don't need a full example. I don't want to steal your spare time, you know :)
January 31, 2013, 8:20 am
1) I don't see why wpfsvl needs .NET 4.0. You should be able to recompile it to .NET 3.5.
2) The code is available here
My example of how to use FFT with NAudio is the one in the article I linked above.
For another, perhaps simpler example, look at the WPFDemo that comes with NAudio. It contains a class called SampleAggregator which calculates the FFTs, and SpectrumAnalyser draws them (but it doesn't look nearly as good as the one in wpfsvl).
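The core of what SampleAggregator does can be sketched as follows, using the FFT routines in NAudio.Dsp (the FFT length and the choice of Hamming window here are illustrative assumptions, not the demo's exact code):

```csharp
using System;
using NAudio.Dsp; // Complex, FastFourierTransform

// Sketch: accumulate samples into a buffer; when it is full,
// run an in-place FFT and hand the result to a display.
int fftLength = 1024; // must be a power of two
var fftBuffer = new Complex[fftLength];
int fftPos = 0;

void Add(float sample)
{
    // window each sample to reduce spectral leakage
    fftBuffer[fftPos].X = (float)(sample *
        FastFourierTransform.HammingWindow(fftPos, fftLength));
    fftBuffer[fftPos].Y = 0;
    fftPos++;
    if (fftPos >= fftLength)
    {
        fftPos = 0;
        // in-place FFT; second argument is log2 of the length
        FastFourierTransform.FFT(true, (int)Math.Log(fftLength, 2.0), fftBuffer);
        // fftBuffer now holds the spectrum: the magnitude of bin i is
        // sqrt(X*X + Y*Y); feed the first fftLength/2 bins to your display
    }
}
```

You would call Add for every sample as it passes through the playback chain, then raise an event (as SampleAggregator does) each time a full FFT result is ready.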
Mark