Channel: NAudio
Viewing all 5831 articles

New Post: Generating random numbers using NAudio

Hi Mark,

I take your point about NAudio not being focused on general programming utilities.

As regards creating a sequence of all zeros, the program guards against this as follows:

The program considers successive pairs of 16-bit samples from the audio buffer and looks at the least significant bit of each sample. If the two bits are the same, the program discards the pair and moves on to the next one. If the two bits are different, the program appends the second bit to the growing stream of random bits and then moves on to the next pair. [This is the line "if (b1 != b2)" in HandleNextBuffer.] This technique is known as a von Neumann randomness extractor. Here is an example from running the code on my computer. The first two columns are the pairs of 16-bit samples, and the sequence of 0's and 1's on the right-hand side is the stream of random bits. If every sample were 0000 the program would get stuck and never produce a single random bit.

0002 0002: least significant bits: 00 discard
fffe 0001: least significant bits: 01 generate 1
fffd fffe: least significant bits: 10 generate 0
ffff fffd: least significant bits: 11 discard
0001 fffe: least significant bits: 10 generate 0
0002 0001: least significant bits: 01 generate 1
0001 0000: least significant bits: 10 generate 0
fffe fffd: least significant bits: 01 generate 1
fffc fff9: least significant bits: 01 generate 1
fffc fff8: least significant bits: 00 discard
0001 fffa: least significant bits: 10 generate 0
0002 fffd: least significant bits: 01 generate 1
0002 fffe: least significant bits: 00 discard
0000 fffd: least significant bits: 01 generate 1
ffff fffc: least significant bits: 10 generate 0
0001 fffc: least significant bits: 10 generate 0
0001 fffd: least significant bits: 11 discard
0000 fffe: least significant bits: 00 discard
fffe fffe: least significant bits: 00 discard
fffd fffe: least significant bits: 10 generate 0
ffff fffe: least significant bits: 10 generate 0
0004 0000: least significant bits: 00 discard
0006 fffe: least significant bits: 00 discard
0005 fffd: least significant bits: 11 discard
0002 fffe: least significant bits: 00 discard
0003 0001: least significant bits: 11 discard
0005 0003: least significant bits: 11 discard
0005 0001: least significant bits: 11 discard
0001 ffff: least significant bits: 11 discard
fffd fffc: least significant bits: 10 generate 0
fffd fffd: least significant bits: 11 discard
0001 fffe: least significant bits: 10 generate 0

... and so on.
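For reference, the extraction step described above can be sketched as a standalone C# method (class and method names here are mine, not the original program's; the real code does the equivalent inline in HandleNextBuffer):

```csharp
using System;
using System.Collections.Generic;

class VonNeumannExtractor
{
    // Von Neumann extraction over the least significant bits of
    // successive pairs of 16-bit samples: equal bits are discarded,
    // and when the bits differ, the second bit is kept.
    public static List<int> ExtractBits(short[] samples)
    {
        var bits = new List<int>();
        for (int i = 0; i + 1 < samples.Length; i += 2)
        {
            int b1 = samples[i] & 1;
            int b2 = samples[i + 1] & 1;
            if (b1 != b2)
            {
                bits.Add(b2); // append the second bit of the pair
            }
        }
        return bits;
    }
}
```

Running this over the first three pairs in the trace above (0002 0002, fffe 0001, fffd fffe) discards the first pair and yields the bits 1 and 0, matching the log.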

Clive

New Post: Issue Streaming live data

I am trying to stream live audio data (U-Law, 8000 Hz, 8-bit, mono) over my network and play it on my client machine. I have it working, but I am having an issue with playback time.

I am using a BufferedWaveProvider to "stream" the audio data into so I can play it out. As data comes in, I increase the buffer size until the user presses stop; then I flush the buffers and start over when the user presses play again. My issue is that even though I am increasing the buffer size, my audio stops playing after 5 seconds (or whatever I have set the BufferDuration to). Is there a way to increase this time as the buffer grows so it doesn't stop playing?

How I currently have my BufferedWaveProvider set up:
_waveProvider1 = new BufferedWaveProvider(_inputWaveFormat);
_waveProvider1.BufferDuration = new TimeSpan(_bufferHours,
                                             _bufferMinutes,
                                             _bufferSeconds);
_waveProvider1.DiscardOnBufferOverflow = true;
I currently have the TimeSpan set to 5 minutes, and everything plays fine until I reach the 5-minute mark; then it just stops. If I set DiscardOnBufferOverflow to false, I get a buffer-full exception.

New Post: Convert WaveIn buffer to PCM integers

I think I may have stumbled upon the answer by iterating through the byte array in steps of 2. That way the BitConverter.ToInt16 method combines two byte elements into a 16-bit integer.

New Post: Buffer lenght in read wavestream

Hi,

I'm a beginner with NAudio. I'm working on tutorial 6 ... and made some changes to test things.

So, I now have a sourceStream_DataAvailable handler that fires on a microphone source stream.

I implemented a TestWaveStream derived from WaveStream. My sourceStream_DataAvailable handler sends its recorded buffer to TestWaveStream .... (I'll add effects here in the future.)

Then I have a WaveOut stream linked to TestWaveStream. In the Read function, I want to pass on the data that I previously acquired in sourceStream_DataAvailable.

For both streams, in and out, I'm using a WaveFormat of 16000 Hz, 16 bits, one channel.

Here I have a problem, because in the Read function the lengths are not the same for in and out.
I mean: the buffer in sourceStream_DataAvailable has a length of 3200 bytes, but in the Read function the buffer argument has a length of 2560 bytes and an offset of 1280. So I don't understand why the buffer in the Read function doesn't have a length of 3200; then I could copy the input buffer to the output one directly.

Did I miss something? Could someone explain it to me?

Thanks

New Post: Any recommendations on integrating SoundTouch?

I'm also trying to integrate them, but I haven't been successful. If somebody gets it going, please let me know. Thanks

New Post: Any recommendations on integrating SoundTouch?

ecruz wrote:
I'm also trying to integrate them but I haven't been successful. If somebody gets it going please let me know. Thanks
I have integrated SoundTouch using some of the design elements from the PracticeSharp library. I was using the WaveOut driver before integrating SoundTouch, but its buffer size is calculated from the WaveFormat and it was asking for too much audio, so there were lots of gaps. Those gaps were not present when I wrote each processed element to a file. I used the WasapiOut driver instead, which appears to ask for smaller packets. It seemed to work well, but I still have issues with delayed audio feedback when I change bass/treble, etc., because those adjustments occur before the resampling of the data and not after.

I am just curious whether adjusting the tempo prior to adjusting bass/treble, volume and left/right balance has any adverse effects as opposed to doing it in the original wave format.

New Post: Convert WaveIn buffer to PCM integers

Yes, each pair of bytes is a sample if you are recording 16-bit audio.
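As a minimal sketch of that conversion (a hypothetical helper, assuming the buffer is little-endian 16-bit PCM as WaveIn delivers it):

```csharp
using System;

class SampleConverter
{
    // Converts a little-endian 16-bit PCM byte buffer into signed samples.
    public static short[] ToSamples(byte[] buffer, int bytesRecorded)
    {
        var samples = new short[bytesRecorded / 2];
        for (int i = 0; i < samples.Length; i++)
        {
            // BitConverter.ToInt16 combines buffer[2*i] (low byte) and
            // buffer[2*i + 1] (high byte) into one signed 16-bit sample.
            samples[i] = BitConverter.ToInt16(buffer, i * 2);
        }
        return samples;
    }
}
```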

New Post: Buffer lenght in read wavestream

The buffer sizes of record and playback may not be the same. Use a BufferedWaveProvider for an easy way round this. Add audio as you receive it into the buffered wave provider.
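A minimal sketch of that pattern, assuming NAudio's WaveInEvent and WaveOutEvent (the format and console wiring here are illustrative, not from the original thread):

```csharp
using System;
using NAudio.Wave;

class MicToSpeaker
{
    static void Main()
    {
        var waveIn = new WaveInEvent { WaveFormat = new WaveFormat(16000, 16, 1) };
        var bufferedProvider = new BufferedWaveProvider(waveIn.WaveFormat);

        // Add audio into the buffered provider as it arrives; the output
        // device then reads whatever amount it needs, so the input and
        // output buffer sizes no longer have to match.
        waveIn.DataAvailable += (s, e) =>
            bufferedProvider.AddSamples(e.Buffer, 0, e.BytesRecorded);

        var waveOut = new WaveOutEvent();
        waveOut.Init(bufferedProvider);
        waveIn.StartRecording();
        waveOut.Play();

        Console.ReadLine(); // run until Enter is pressed
        waveIn.StopRecording();
        waveOut.Dispose();
        waveIn.Dispose();
    }
}
```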

Updated Wiki: Documentation


The NAudio Documentation Wiki

NAudio FAQ

What is NAudio?

NAudio is an open source audio API for .NET written in C# by Mark Heath, with contributions from many other developers. It is intended to provide a comprehensive set of useful utility classes from which you can construct your own audio application.

Why NAudio?

NAudio was created because the Framework Class Library that shipped with .NET 1.0 had no support for playing audio. The System.Media namespace introduced in .NET 2.0 provided a small amount of support, and the MediaElement in WPF and Silverlight took that a bit further. The vision behind NAudio is to provide a comprehensive set of audio related classes allowing easy development of utilities that play or record audio, or manipulate audio files in some way.

Can I Use NAudio in my Project?

NAudio is licensed under the Microsoft Public License (Ms-PL) which means that you can use it in whatever project you like including commercial projects. Of course we would love it if you share any bug-fixes or enhancements you made to the original NAudio project files.

Is .NET Performance Good Enough for Audio?

While .NET cannot compete with unmanaged languages for very low latency audio work, it still performs better than many people would expect. On a fairly modest PC, you can quite easily mix multiple WAV files together, including passing them through various effects and codecs, and play back glitch-free with a latency of around 50 ms.

How can I get help?

There are three main ways to get help. If you have a specific question concerning how to use NAudio, then I recommend that you ask on StackOverflow and tag your question with naudio. This gives you the best chance of getting a quick answer. You can also ask a question on the NAudio discussion forums here on CodePlex. I attempt to answer all questions, but since this is a spare-time project, occasionally I get behind. Finally, I am occasionally able to offer paid support for situations where you need quick advice, bug fixes or new features. Use the contact feature of the CodePlex website to get in touch with Mark Heath if you wish to pursue this option.

How do I submit a patch?

I welcome contributions to NAudio and have accepted many patches, but if you want your code to be included, please familiarise yourself with the following guidelines:

  • Your submission must be your own work, and able to be released under the MS-PL license.
  • You will need to make sure your code conforms to the layout and naming conventions used elsewhere in NAudio.
  • Remember that there are many existing users of NAudio. A patch that changes the public interface is not likely to be accepted.
  • Try to write "clean code" - avoid long functions and long classes. Try to add a new feature by creating a new class rather than putting loads of extra code inside an existing one.
  • I don't usually accept contributions I can't test, so please write unit tests (using NUnit) if at all possible. If not, give a clear explanation of how your feature can be unit tested and provide test data if appropriate. Tell me what you did to test it yourself, including what operating systems and soundcards you used.
  • If you are adding a new feature, please consider writing a short tutorial on how to use it.
  • Unless your patch is a small bugfix, I will code review it and give you feedback. You will need to be willing to make the recommended changes before it can be integrated into the main code.
  • The easiest way to provide a patch is to create your own fork on Mercurial and issue a pull request. See this video if you are new to Mercurial.
  • Please also bear in mind that when you add a feature to NAudio, that feature will generate future support requests and bug reports. Are you willing to stick around on the forums and help out people using it?

How do I...?

The best way to learn how to use NAudio is to download the source code and look at the two demo applications - NAudioDemo and NAudioWpfDemo. These demonstrate several of the key capabilities of the NAudio framework. They also have the advantage of being kept up to date, whilst some of the tutorials you will find on the internet refer to old versions of NAudio.

Getting Started with NAudio – Downloading and Compiling

  1. Download a copy of the NAudio source code (or a pre-compiled version, though the newest code is always available in source form first).
    http://naudio.codeplex.com/SourceControl/list/changesets
  2. The default project is set to the NAudio class library. Class libraries don't have anything to look at when you press F5. If this is your first time with NAudio, set the start-up project to NAudioDemo and then hit F5. The NAudioDemo project shows the base functionality you can use in your own project.
  3. In the bin directory of the built solution, you can find a copy of the NAudio library to reference in your own project. Make sure that you grab a copy of the NAudio.XML file as well if you're copying it over to your own project's directory; that way you will have the IntelliSense documentation for use in Visual Studio when working with the NAudio API.

New Post: Information of an audio file (wma, mp3)

I was trying to do the same...but apparently nobody answers this.

New Post: Information of an audio file (wma, mp3)

This isn't really something NAudio has much support for. For MP3 files, taglib-sharp is supposed to be very good. You can also often get information from IShellDispatch4 in Shell32.

New Post: Volume quirk with Waveout when Pause/Resume play

I found a Volume quirk with Waveout and am wondering if there is a work around.

I start a WaveOut playing with 100% volume. Pause it, then set volume to 0%. Then click Resume. I would expect the MP3 to continue playing with no sound. But instead, I get a quick burst of sound (maybe less than a tenth of a second worth), then play continues as expected at volume=0.

So I'm assuming that when I click the Resume button, there is a small amount of audio left in the WaveOut buffer. Is that true? If yes, is there a way to flush that buffer before continuing to play?

IWavePlayer waveOutDevice = new WaveOut();
string fileName = "myaudio.mp3";

WaveStream mediaFileReader = new Mp3FileReader(fileName);
WaveChannel32 volumeStream = new WaveChannel32(mediaFileReader);
WaveStream mainOutputStream = volumeStream;

waveOutDevice.Init(mainOutputStream);
waveOutDevice.Play();  //start playing

volumeStream.Volume = 1;   //vol 100%

...
waveOutDevice.Pause();  //pause
volumeStream.Volume = 0;   //vol 0%
...
waveOutDevice.Play();  //resume, volume burst happens here


New Post: Buffer lenght in read wavestream

Thank you for your quick and helpful answer; it works perfectly.

I have another question: when streaming the mic to the loudspeaker, the volume is really low compared with playing an mp3.

I implemented a MixerLine to control the microphone level; it works fine within certain limits.

I also implemented "waveOutSetVolume" from "winmm.dll"; it works, but not as well as I expected. The level is not very high.

So, using the solution you gave, I multiply every sample in the buffer (converting to Int16, multiplying, reconverting to a byte array) by a defined value before passing it to the BufferedWaveProvider. It seems to give good results, but I'm not really sure it is a good approach.
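A hedged sketch of that scale-and-clamp step (pure C#, no NAudio types; the class name is mine). Note that without the clamping, multiplying Int16 samples overflows and produces loud distortion:

```csharp
using System;

class VolumeBooster
{
    // Scales 16-bit PCM samples in-place by 'gain', clamping to the
    // Int16 range so loud samples clip cleanly instead of wrapping.
    public static void Amplify(byte[] buffer, int bytesRecorded, float gain)
    {
        for (int i = 0; i + 1 < bytesRecorded; i += 2)
        {
            int sample = BitConverter.ToInt16(buffer, i);
            int boosted = (int)(sample * gain);
            if (boosted > short.MaxValue) boosted = short.MaxValue;
            if (boosted < short.MinValue) boosted = short.MinValue;
            byte[] bytes = BitConverter.GetBytes((short)boosted);
            buffer[i] = bytes[0];
            buffer[i + 1] = bytes[1];
        }
    }
}
```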

I read about WaveChannel32 here: http://naudio.codeplex.com/discussions/264096 but I didn't understand how to adjust the volume using it ... and I didn't find more information.

Could you please tell me which way is best? Or is there another one I didn't find?

Thanks

New Post: Audio Input Capturing from multiple devices

I'm using the NAudio library to capture Audio Input in C#:
int _deviceID;
double _previewVal;
private void PreviewData()
{
    WaveIn waveIn = new WaveIn();
    waveIn.DeviceNumber = _deviceID;
    waveIn.DataAvailable += waveIn_DataAvailable;
    int sampleRate = 8000; // 8 kHz
    int channels = 1; // mono
    waveIn.WaveFormat = new WaveFormat(sampleRate, channels);
    waveIn.StartRecording();
}
void waveIn_DataAvailable(object sender, WaveInEventArgs e)
{
    for (int index = 0; index < e.BytesRecorded; index += 2)
    {
        short sample = (short)((e.Buffer[index + 1] << 8) |
                                e.Buffer[index + 0]);
        _previewVal = sample / 327.68f;
    }
}
The function PreviewData is called for each audio input device (there are 4 in my system) at the same time. When I only call the method for one device it seems to work, but if I call it again for another device I get the exception "AlreadyAllocated calling waveInOpen". Does someone know how to work around this?

New Post: Audio Input Capturing from multiple devices

You can only open each device for recording once. Are you sure you are using a different device number each time?

New Post: Audio Input Capturing from multiple devices

The code above is in the class Microphone, and PreviewData is called in its constructor.
This is the Method where the instances are created:
        private ObservableCollection<Microphone> devices = new ObservableCollection<Microphone>();
        private void UpdateMicrophones()
        {
            int waveInDevices = WaveIn.DeviceCount;
            devices.Clear();
            for (int waveInDevice = 0; waveInDevice < waveInDevices; waveInDevice++)
            {
                devices.Add(new Microphone(waveInDevice));
            }
        }
So I'm not recording one device multiple times; I want to record all the devices, each with a different ID, at the same time.

New Post: Audio Input Capturing from multiple devices

How many physical soundcards do you have?
Have you made 100% sure deviceID is what you are expecting it to be?
Do you always dispose waveIn before opening it again?
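Assuming the device numbers really are distinct, one hedged sketch of opening every device at once (using NAudio's WaveInEvent so no window message loop is needed; the console output and class name are illustrative):

```csharp
using System;
using System.Collections.Generic;
using NAudio.Wave;

class MultiDeviceCapture
{
    static void Main()
    {
        var recorders = new List<WaveInEvent>();
        for (int device = 0; device < WaveIn.DeviceCount; device++)
        {
            var waveIn = new WaveInEvent
            {
                DeviceNumber = device,
                WaveFormat = new WaveFormat(8000, 1) // 8 kHz mono
            };
            int id = device; // capture the loop variable for the handler
            waveIn.DataAvailable += (s, e) =>
                Console.WriteLine("device " + id + ": " + e.BytesRecorded + " bytes");
            waveIn.StartRecording();
            recorders.Add(waveIn); // keep a reference so it stays alive
        }

        Console.ReadLine();
        foreach (var r in recorders) { r.StopRecording(); r.Dispose(); }
    }
}
```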