do you mean missing samples, or recording samples that have a value of 0? How long do you have to record before this problem is observed? Is there any chance it could be coinciding with a run of the .NET garbage collector?
Mark
Thank you very much for your reply!
I mean missing samples (the original sound from the computer is ABCDEF, but the recording is ABDEF). Sometimes it occurs after 10 seconds, sometimes after 2 minutes, and sometimes I get a recording without any issue... In the DataAvailable handler I just write the buffer to the file with writer.Write(e.Buffer, 0, e.BytesRecorded); ...
Thank you very much for your help!
You will be able to play AAC in NAudio 1.7 using the MediaFoundationReader so long as you are using Windows 7 and above. You can try this out by building the latest code or getting a prerelease from NuGet if you are interested.
Mark
hmmm, not sure what is causing that. It might possibly be the garbage collector, in which case recording with larger buffer sizes might help, although unfortunately that isn't currently configurable in WasapiCapture, which is something that probably ought to be rectified in a future NAudio release.
Mark
Hello,
I have just started playing around with the NAudio library.
As my sound card does not support a "what you hear" recording feature, I need NAudio to mix a WaveIn signal with another WaveStream playing in the background.
So far I've managed to record the input signal with a WaveIn object, and also to record any sound playing on the output instead using WasapiLoopbackCapture.
So it's either the one or the other that gets recorded.
I've looked for a way to mix these two signals using WaveMixerStream32, but I'm not sure if this is possible or how I should implement it
(WaveMixerStream32 takes no WaveIn objects, only streams).
Any help on this subject is appreciated!
Regards Niclas
you'd either need to remove streams from the mixer, or read out of the mixer until you reached the end of the streams. You'd probably be better off using the MixingSampleProvider as WaveMixerStream32 has some quirks.
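For the mixing question above, here is a minimal sketch using MixingSampleProvider. It assumes NAudio 1.7-era APIs, and that waveIn and loopbackCapture are your already-created WaveIn and WasapiLoopbackCapture instances; it is an illustration, not tested code.

```csharp
using NAudio.Wave;
using NAudio.Wave.SampleProviders;

// Feed each capture device's DataAvailable event into a BufferedWaveProvider,
// then mix the two buffers as sample providers.
// waveIn and loopbackCapture are assumed to exist already.
var micBuffer = new BufferedWaveProvider(waveIn.WaveFormat);
waveIn.DataAvailable += (s, e) => micBuffer.AddSamples(e.Buffer, 0, e.BytesRecorded);

var loopBuffer = new BufferedWaveProvider(loopbackCapture.WaveFormat);
loopbackCapture.DataAvailable += (s, e) => loopBuffer.AddSamples(e.Buffer, 0, e.BytesRecorded);

// The mixer needs IEEE float inputs with matching sample rate and channel count.
// ToSampleProvider converts PCM to float, but resampling (if the two devices
// disagree on rate or channels) is up to you.
var mixer = new MixingSampleProvider(WaveFormat.CreateIeeeFloatWaveFormat(44100, 2));
mixer.AddMixerInput(micBuffer.ToSampleProvider());
mixer.AddMixerInput(loopBuffer.ToSampleProvider());

// To record the mix, read from the mixer and write the samples to a
// WaveFileWriter; to play it, wrap it with SampleToWaveProvider first.
```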
OK. I found the names of the channels in AsioOut.driver.Capabilities.InputChannelInfos and made methods to get them.
I added this to AsioOut:
    public string AsioInputChannelName(int channel)
    {
        if (channel >= DriverInputChannelCount)
            return "";
        else
            return driver.Capabilities.InputChannelInfos[channel].name;
    }

    public string AsioOutputChannelName(int channel)
    {
        if (channel >= DriverOutputChannelCount)
            return "";
        else
            return driver.Capabilities.OutputChannelInfos[channel].name;
    }
OK, I tried it, but after I add a stream to the mixer, how do I get the result that I have to play through DirectSoundOut?
I saw your example, but I have some problems understanding it. :)
It needs to become a wave provider again (I intend to make this simpler in a future NAudio):
directSoundOut.Init(new SampleToWaveProvider(mixingSampleProvider))
Thanks for your reply!
I used the following code. It sounds exactly the same as described above: same rhythm, but really distorted.
    WaveFormat wFormat = WaveFormat.CreateIeeeFloatWaveFormat(44100, 1);
    WaveFileWriter waveWriter = new WaveFileWriter(path, wFormat);
    for (int i = 0; i < decodedSamples.Count; i++)
    {
        waveWriter.WriteSample(decodedSamples[i]);
    }
    waveWriter.Close();
I made some screenshots of Audacity.
(the bottom wave is the 1st channel of my decoder's output)
The following pic shows that the global shape matches:
http://reinverberne.nl/tmp/1.png
But when zoomed in, it's not a "nice" wave:
http://reinverberne.nl/tmp/2.png
If my decoder is bad (very probably), the problem could be in 1001 places in my code :/
How did you debug your decoder (NLayer) when writing it?
Regards
shouldn't you be making a stereo file?
I tried it, but it did not matter:
http://reinverberne.nl/tmp/3.png
I think I need a way to compare the outputs of the various decoder blocks with a working decoder,
all the way from the Huffman decoding block, through the requantize block, the reordering block, etc.,
to see where the problem lies.
Regards
I want to monitor the microphone input and listen for a square-wave-type input. This is actually Morse code, and I want to sense the "dots" and the "dashes". Programmatically I will have to determine which is which, as the length of each and the time in between can fluctuate. My question is: what would be the best way to parse the input stream to check for differences in amplitude? There is bound to be some background noise, so the input would be a low value, then go high (possibly with a leading spike), hold that high level, and then drop to a low value again. Any pointers on what commands I could use to monitor the stream of values?
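One simple approach is to compute an amplitude envelope from the incoming samples, threshold it, and classify each above-threshold run as a dot or a dash by its length. The sketch below is illustrative, not part of NAudio: the MorseDetector name and the threshold values are made up, and in practice the envelope would come from the samples you receive in DataAvailable (converted from 16-bit PCM to floats, smoothed with a short-window RMS to absorb the leading spike).

```csharp
using System;
using System.Collections.Generic;

// Hypothetical helper: classify a stream of envelope samples into dots and
// dashes by thresholding amplitude and measuring run lengths.
static class MorseDetector
{
    // Returns '.' or '-' for each above-threshold run; runs shorter than
    // dashThreshold samples count as dots. Threshold values are illustrative
    // and would need tuning against your actual signal and noise floor.
    public static List<char> Classify(float[] envelope, float threshold, int dashThreshold)
    {
        var symbols = new List<char>();
        int runLength = 0;
        foreach (float sample in envelope)
        {
            if (Math.Abs(sample) >= threshold)
            {
                runLength++; // still inside a tone
            }
            else if (runLength > 0)
            {
                // tone just ended: classify the run by its length
                symbols.Add(runLength >= dashThreshold ? '-' : '.');
                runLength = 0;
            }
        }
        if (runLength > 0)
            symbols.Add(runLength >= dashThreshold ? '-' : '.');
        return symbols;
    }
}
```

A short run of 2 loud samples followed by a run of 5 would come out as a dot then a dash with a dashThreshold of 4.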
I just finished a FLAC decoder. Of course it is not an MP3 decoder, but it's always the same story. I also got bad results. If nearly the whole wave contains mistakes, you won't be able to find the bug easily. I thought about different ways to solve my problem, but I ended up reading the code line by line again, searching for the mistakes I had made. If there is an open-source decoder (in my case there was no FLAC decoder), you can debug that decoder and compare its results with yours.
A very big problem is also when you have only very small mistakes in your wave. In that case you can use Audacity, zoom in, and find the sample where the mistake is. Then you can calculate the frame index where the error happens and start debugging there.
But all in all, debugging decoders is always very, very hard, because there is just such a huge amount of data and you have nothing more than numbers and bytes :(
I really can feel for you... (by the way, I spent over two months debugging until everything worked fine :P)
Yes, debugging decoders is really hard. Check in often and regularly so you can rewind mistakes, and have a good set of test files that you can check regularly. Still, you'll probably need to painstakingly code-review each line of your source to spot the problems.
MrGroovy,
You're on the right track with logging the output (though I'd also log side decode, bit reservoir calc, & scalefactor read). Just make sure you don't overdo it and force yourself to wade through too much data at once (i.e., only do one frame at a time).
From 2.png and 3.png, it looks like everything through stereo decode is fine, but everything after it is questionable...
One other thought: make sure SubBandSynthesis' outer loop reads out the polyphase information correctly (loop over ss, read element ss from each subband in order, then inverse polyphase decode into your PCM buffer, and repeat until all 18 elements have been decoded in all subbands).
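In loop form, that ordering looks roughly like the sketch below. This is illustrative pseudocode in C# syntax, not code from NAudio or NLayer; hybrid and InversePolyphase are placeholder names for your decoder's own hybrid-filter output and synthesis routine.

```csharp
// Illustrative only: hybrid[sb][ss] and InversePolyphase stand in for your
// decoder's hybrid-filter output and polyphase synthesis routine.
for (int ss = 0; ss < 18; ss++)              // 18 time samples per granule
{
    float[] polyphaseIn = new float[32];
    for (int sb = 0; sb < 32; sb++)          // take sample ss from each of the 32 subbands, in order
        polyphaseIn[sb] = hybrid[sb][ss];
    InversePolyphase(polyphaseIn, pcmBuffer, ss); // synthesizes 32 PCM samples per pass
}
```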
Layer III is a bear to get right, so good luck!
I'm building an application where it would be interesting to have access to ambient sound. The main problem is that the sound file should not get too big. I would like to know whether, using this framework, it is possible to record audio from a microphone only when the dB level is high enough.
Note: keep in mind that all the recorded data should go to the same file.
Is this possible with this framework?
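Something like this should be possible. Below is a rough sketch, assuming 16-bit mono PCM input; WaveInEvent and WaveFileWriter are NAudio classes, but the threshold value and the peak calculation are illustrative and would need tuning. It only writes buffers whose peak level is loud enough, so quiet passages never reach the file, and everything that is written goes to the same WaveFileWriter.

```csharp
using System;
using NAudio.Wave;

// Sketch: gate recording on a simple peak-level threshold.
var waveIn = new WaveInEvent { WaveFormat = new WaveFormat(44100, 16, 1) };
var writer = new WaveFileWriter("ambient.wav", waveIn.WaveFormat);
const float threshold = 0.1f; // roughly -20 dBFS; tune to your noise floor

waveIn.DataAvailable += (s, e) =>
{
    // Find the peak sample level in this buffer (16-bit little-endian PCM).
    float peak = 0f;
    for (int i = 0; i < e.BytesRecorded; i += 2)
    {
        short sample = (short)(e.Buffer[i] | (e.Buffer[i + 1] << 8));
        peak = Math.Max(peak, Math.Abs(sample / 32768f));
    }
    // Only loud-enough buffers are appended; quiet ones are dropped.
    if (peak >= threshold)
        writer.Write(e.Buffer, 0, e.BytesRecorded);
};
waveIn.StartRecording();
// Remember to StopRecording and Dispose the writer when you are done,
// otherwise the WAV header won't be finalized.
```

One caveat: gating whole buffers like this will clip the very start of sounds and can chop words; a real implementation would usually keep a short pre-roll buffer and a hold time before closing the gate.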