Channel: NAudio

New Post: Audio Input Capturing from multiple devices

markheath wrote:
How many physical soundcards do you have?
Only one. That could be the issue, of course.
markheath wrote:
Have you made 100% sure deviceID is what you are expecting it to be?
Yes, it is definitely different in each instance. Beginning at 1 and increasing with each instance.
markheath wrote:
Do you always dispose waveIn before opening it again?
No, I'm not doing that. But if I do so the DataAvailable event wouldn't be fired anymore, would it?

New Post: Audio Input Capturing from multiple devices

Your soundcard driver might not allow you to open all of its "devices" simultaneously.

Once you have opened a waveIn device and called StartRecording, the DataAvailable event will keep firing until you call StopRecording and Dispose. Then, and only then, can you open it again.
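A minimal sketch of that lifecycle, assuming NAudio's WaveInEvent (the device number and wave format here are placeholders):

```csharp
using System;
using NAudio.Wave;

class RecorderLifecycle
{
    static void Main()
    {
        var waveIn = new WaveInEvent
        {
            DeviceNumber = 0,                        // placeholder device id
            WaveFormat = new WaveFormat(44100, 16, 1) // 44.1 kHz, 16-bit, mono
        };

        waveIn.DataAvailable += (s, e) =>
        {
            // e.Buffer holds e.BytesRecorded bytes of captured audio;
            // this event keeps firing until StopRecording is called
        };

        waveIn.StartRecording();
        // ... record for a while ...
        waveIn.StopRecording();
        waveIn.Dispose(); // only after this is it safe to reopen the device

        // a new WaveInEvent for the same DeviceNumber can be created here
    }
}
```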

New Post: Volume quirk with Waveout when Pause/Resume play

Well, I solved this by the steps below. If anyone knows of a more elegant solution, I'm all ears.

1) On pause, save the current position and volume:

    TimeSpan positionTimeSpan = mainOutputStream.CurrentTime;
    float volumeOnPause = currVolume;

2) On resume, if the volume has changed since the pause, flush the output buffer and restore the position:

    if (volumeOnPause != currVolume)
    {
        waveOutDevice.Stop();   // flushes the buffer, brute-force method
        mainOutputStream.CurrentTime = positionTimeSpan;  // reset position
    }

    waveOutDevice.Play();

New Post: Volume quirk with Waveout when Pause/Resume play

Yes, calling Stop flushes the buffers. Just because the API is called "Stop" doesn't mean you can't use it when the user presses the "Pause" button.

New Post: Volume quirk with Waveout when Pause/Resume play

Ok thanks Mark. This solution seems to work fine.

New Post: Detecting Audio Level of a Stream, but not from device

I would like to be able to determine the sound or audio level of an MP3 stream. I can see how to do it using a device; however, if other sounds are playing on the system, I believe that approach will reflect everything coming through the speaker/device and give a false reading. Any help or direction would be appreciated. --Thanks

New Post: Get a Stream from WaveIn

Sorry to insist, but every WaveStream class that I've seen takes a WaveStream as an argument in its constructor.

Is it possible to get or build a WaveStream from WaveIn?

Please?

New Post: Audio Input Capturing from multiple devices

All right, then I'll have to do it differently. Thanks for the replies.

New Post: Detecting Audio Level of a Stream, but not from device

... But not from a device.

From what I have tested, all the examples get the audio level from the sound card, not from the stream. If some other application is playing sound, the readings will include its levels as well.

Here is another example. Let's say I have two streams going at the exact same time, both playing music. How would I detect or display the audio level of each stream independently? I do not want the level from the device (sound card), as that shows the combined levels.

Source code checked in, #88aa8c87b30f

The ASIO out AudioAvailable event was modified to allow writing to the output buffers directly, for maximum performance.

New Post: Get a Stream from WaveIn

You can use a BufferedWaveProvider to make an IWaveProvider. It doesn't derive from WaveStream because it doesn't support repositioning, but you could write a custom class. In any case, I'm pretty sure the "WaveStream" that MSDN is talking about means a WAV file inside a .NET Stream, which is not the same thing as an NAudio WaveStream.
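The WaveIn-to-BufferedWaveProvider plumbing suggested here can be sketched roughly like this (the format is a placeholder; adjust to your capture settings):

```csharp
using NAudio.Wave;

// Feed captured audio into a BufferedWaveProvider, which implements IWaveProvider
var waveIn = new WaveInEvent { WaveFormat = new WaveFormat(44100, 16, 1) };
var buffered = new BufferedWaveProvider(waveIn.WaveFormat);

waveIn.DataAvailable += (s, e) =>
    buffered.AddSamples(e.Buffer, 0, e.BytesRecorded);

waveIn.StartRecording();
// "buffered" can now be passed anywhere an IWaveProvider is expected,
// e.g. waveOut.Init(buffered)
```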

New Post: Get a Stream from WaveIn

Thank you for your reply.
Unfortunately, I still can't figure out how to get a WaveStream from anything other than a file.
For example, I don't see any docs or samples showing how to get a WaveStream from a BufferedWaveProvider.

New Post: wave file has finished playing

My C# GUI allows the user to press the Stop or Pause button while a wave file is playing. How can I disable those buttons and show the Play button when the file finishes playing?

New Post: Detecting Audio Level of a Stream, but not from device

Again, see my previous answer.
The NAudioDemo does real-time level detection on the stream being played back, so where's the problem? Just try it out yourself: play anything else on your computer and look at the demo app. No level will be shown for it.
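Per-stream metering along these lines can be sketched with NAudio's MeteringSampleProvider, inserted into one stream's own playback chain so the levels reflect only that stream (the file path here is a placeholder):

```csharp
using System;
using NAudio.Wave;
using NAudio.Wave.SampleProviders;

// Insert a meter into this stream's playback chain; levels reported here
// come only from this stream, not from the sound card's combined output
var reader = new AudioFileReader("song.mp3");   // placeholder path
var meter = new MeteringSampleProvider(reader);

meter.StreamVolume += (s, e) =>
{
    // e.MaxSampleValues holds the peak sample per channel since the last event
    Console.WriteLine("Left peak: {0:F2}", e.MaxSampleValues[0]);
};

var waveOut = new WaveOutEvent();
waveOut.Init(new SampleToWaveProvider(meter)); // samples pass through the meter
waveOut.Play();
```

A second stream would get its own reader and meter, so two simultaneous streams can be metered independently.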

New Post: wave file has finished playing

Well, I'd just disable Play after calling waveOut.Play(). Then subscribe to PlaybackStopped, and you can re-enable Play there.
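That button wiring might look like this (assuming WinForms; the button names playButton, pauseButton, and stopButton are hypothetical):

```csharp
// Re-enable Play (and disable Stop/Pause) when playback finishes
waveOut.PlaybackStopped += (s, e) =>
{
    playButton.Enabled = true;
    pauseButton.Enabled = false;
    stopButton.Enabled = false;
};

playButton.Click += (s, e) =>
{
    waveOut.Play();
    playButton.Enabled = false;
    pauseButton.Enabled = true;
    stopButton.Enabled = true;
};
```

Note that pressing Stop also raises PlaybackStopped, so the same handler covers both the user stopping playback and the file reaching its end.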

New Post: Get a Stream from WaveIn

I don't think a WaveStream will actually be any use to you, because the speech recognition wants a WAV file in a stream. But if you really want to turn a WaveProvider into a WaveStream, it is very easy to write a class to do so. Just pass Read and WaveFormat through to the WaveProvider, make Length return a suitably large number, have the Position setter throw an exception, and have the Position getter return the number of bytes read so far.
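The wrapper class described above can be sketched like this (the name WaveProviderStream is our own, not part of NAudio):

```csharp
using System;
using NAudio.Wave;

// Adapts any IWaveProvider to the WaveStream abstract class,
// following the recipe in the post above
public class WaveProviderStream : WaveStream
{
    private readonly IWaveProvider source;
    private long bytesRead;

    public WaveProviderStream(IWaveProvider source)
    {
        this.source = source;
    }

    // Pass WaveFormat straight through to the wrapped provider
    public override WaveFormat WaveFormat => source.WaveFormat;

    // "A suitably large number" - the stream has no real end
    public override long Length => long.MaxValue;

    public override long Position
    {
        get => bytesRead; // number of bytes read so far
        set => throw new NotSupportedException("Repositioning is not supported");
    }

    // Pass Read through, tracking how many bytes have been delivered
    public override int Read(byte[] buffer, int offset, int count)
    {
        int read = source.Read(buffer, offset, count);
        bytesRead += read;
        return read;
    }
}
```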

New Post: Buffer lenght in read wavestream

WaveChannel32 adjusts the volume of each individual sample as it passes through. 1.0 means no change, 0.0 means silence, and 2.0 doubles the amplitude of each sample, although you risk clipping.

WaveChannel32 is probably a bit cumbersome for this. I'd use a Wave16ToSampleProvider, then a VolumeSampleProvider, and finally a SampleToWaveProvider16 at the end before playback.
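That chain might be wired up like this (assuming a 16-bit PCM source; the file name is a placeholder):

```csharp
using NAudio.Wave;
using NAudio.Wave.SampleProviders;

// 16-bit PCM -> float samples -> volume adjustment -> back to 16-bit PCM
var reader = new WaveFileReader("input.wav");           // assumed 16-bit PCM
var floatSamples = new Wave16ToSampleProvider(reader);
var volume = new VolumeSampleProvider(floatSamples)
{
    Volume = 0.5f  // 1.0 = no change, 0.0 = silence, 2.0 = double (may clip)
};
var playable = new SampleToWaveProvider16(volume);

var waveOut = new WaveOutEvent();
waveOut.Init(playable);
waveOut.Play();
```

The advantage over WaveChannel32 is that the volume is applied to 32-bit float samples mid-chain, and you only convert back to 16-bit once, right before playback.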

New Post: Question about wav files

Hi all, I want to use NAudio in my current project, in which I have to recognize numbers with a camera and play a different wav file for each number.
So if I have one number it's OK, but when I have 5 numbers => 5 sounds, I have to mix them all. How can I do this with NAudio? Let's say I recognize 4 images; then I have to play 4 wave files mixed in a loop while the numbers are in the camera's point of view.
Thx in advance to anyone who read this. :)
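One possible approach, sketched with NAudio's MixingSampleProvider (the file names are placeholders, and all inputs are assumed to share the same sample rate and channel count, which MixingSampleProvider requires):

```csharp
using System.Linq;
using NAudio.Wave;
using NAudio.Wave.SampleProviders;

// One file per recognized number, mixed into a single output
string[] files = { "1.wav", "2.wav", "3.wav", "4.wav" };  // placeholder paths
var inputs = files.Select(f => new AudioFileReader(f));

var mixer = new MixingSampleProvider(inputs);

var waveOut = new WaveOutEvent();
waveOut.Init(new SampleToWaveProvider(mixer));
waveOut.Play();

// To keep the mix repeating while the numbers stay in view, each input
// could be wrapped in a loop-enabled stream (e.g. a custom LoopStream)
// before being added to the mixer.
```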

New Post: MixingSampleProvider to Mix and Output Two Buffers

Hello,

I've got a WaveStream object whose Read method populates buffers with a sine-wave signal in one instance and with a static sample array in the other. Now I'd like to mix the two together and output them via DirectSoundOut. Since I only need mono for each buffer, I thought perhaps to populate the L/R channels with the respective samples, but that seems ugly. Are there any applicable examples for which I could leverage MixingSampleProvider? I'm just not sure how it interfaces with DirectSoundOut. Would we even require a WaveStream?

Thanks much...
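A rough sketch of how MixingSampleProvider can feed DirectSoundOut without any WaveStream, assuming NAudio's SignalGenerator stands in for the sine-wave source (the static-buffer source would be your own ISampleProvider implementation):

```csharp
using NAudio.Wave;
using NAudio.Wave.SampleProviders;

// A mono sine source; your static-sample source would be a second
// ISampleProvider producing the same format
var sine = new SignalGenerator(44100, 1) { Frequency = 440, Gain = 0.25 };

// The mixer's format must be IEEE float and match all inputs
var mixer = new MixingSampleProvider(WaveFormat.CreateIeeeFloatWaveFormat(44100, 1));
mixer.AddMixerInput(sine);
// mixer.AddMixerInput(yourStaticSampleProvider);  // hypothetical second input

// DirectSoundOut.Init takes an IWaveProvider, so wrap the ISampleProvider
var output = new DirectSoundOut();
output.Init(new SampleToWaveProvider(mixer));
output.Play();
```

So no WaveStream is needed: both sources stay mono ISampleProviders, and the mixer sums them rather than splitting them across L/R channels.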