Channel: NAudio

New Post: MixingSampleProvider to Output

There is a SampleToWaveProvider helper class to turn an ISampleProvider such as your MixingSampleProvider into an IWaveProvider, so you can pass it to Init on your WaveOut or DirectSoundOut. The latest NAudio code has helper extensions that will do this automatically.

And yes, you can add and remove mixer inputs on the fly.
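
For illustration, a minimal sketch of that wiring (the format and file name here are assumptions, not from the original post):

var mixer = new MixingSampleProvider(WaveFormat.CreateIeeeFloatWaveFormat(44100, 2));
mixer.ReadFully = true; // keep producing silence when there are no inputs

var waveOut = new WaveOut();
waveOut.Init(new SampleToWaveProvider(mixer));
waveOut.Play();

// inputs can be added and removed while playing;
// they must match the mixer's sample rate and channel count
var input = new AudioFileReader("example.wav");
mixer.AddMixerInput(input);
// ... later:
mixer.RemoveMixerInput(input);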

New Post: Mixing an audio gotten from network and AsioOut playback

Hi, ASIO is designed for super low latency performance, but this does mean you need to be comfortable working with low-level audio manipulation. It may be easier for you to just put received audio into a BufferedWaveProvider, turn it into an ISampleProvider with the ToSampleProvider extension, and then mix that with whatever else you are playing using a MixingSampleProvider.
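
A rough sketch of that approach (the wave formats and the packetBytes variable are assumptions):

// assumed format of the incoming network audio
var networkBuffer = new BufferedWaveProvider(new WaveFormat(44100, 16, 2));

// in your network receive callback:
networkBuffer.AddSamples(packetBytes, 0, packetBytes.Length);

// mix it with your other sources and play the mixer
var mixer = new MixingSampleProvider(WaveFormat.CreateIeeeFloatWaveFormat(44100, 2));
mixer.AddMixerInput(networkBuffer.ToSampleProvider());
// add other inputs here, then pass the mixer to Init on your output device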

New Post: What does the WaveInProvider class do? What is it used for?

It basically saves you wiring up a BufferedWaveProvider to a WaveIn, but it may be dropped in a future version, as it has not been maintained for a while, and a helper method could probably make the connection just as well.
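
The manual wiring it saves you is roughly this (a sketch, not the actual WaveInProvider source):

var waveIn = new WaveIn();
var buffer = new BufferedWaveProvider(waveIn.WaveFormat);
waveIn.DataAvailable += (s, e) => buffer.AddSamples(e.Buffer, 0, e.BytesRecorded);
waveIn.StartRecording();
// buffer can now be passed to Init on an output device, much like a WaveInProvider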

New Post: Setting WaveOut buffer size directly to a number of samples

OK, I tested it and it doesn't work. This just pushes the missed samples into the next read iteration, and so on...

I'm very unhappy with this solution. An in-place pitch shift of exactly each read iteration would be best.

New Post: Visual Studio C# and USB MIDI Keyboard Alesis Q88...

Yes, NAudio has basic support for MIDI in, and if you run the NAudio demo app included in the code (on GitHub), you can see an example of this working.
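
For reference, basic MIDI in looks something like this (device index 0 is an assumption; pick the right one for your keyboard):

// list the available MIDI in devices
for (int i = 0; i < MidiIn.NumberOfDevices; i++)
    Console.WriteLine(MidiIn.DeviceInfo(i).ProductName);

// open one and log incoming messages
var midiIn = new MidiIn(0);
midiIn.MessageReceived += (s, e) => Console.WriteLine(e.MidiEvent);
midiIn.Start();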

New Post: AudioMeterInformation PeakValues problem

Sorry, no ideas here. It might be worth reading up on the MSDN documentation for the IAudioMeterInformation interface, which is what NAudio is calling under the hood.
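
For anyone else hitting this, reading the meter through NAudio's CoreAudioApi wrapper looks roughly like this (default render device assumed):

var enumerator = new MMDeviceEnumerator();
var device = enumerator.GetDefaultAudioEndpoint(DataFlow.Render, Role.Multimedia);
// MasterPeakValue wraps IAudioMeterInformation::GetPeakValue
float peak = device.AudioMeterInformation.MasterPeakValue;
// per-channel values are available via the PeakValues collection
float left = device.AudioMeterInformation.PeakValues[0];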

New Post: Fire-and-forget game audio.

Sorry for the long delay replying.

Yes, to play two versions of the sound, create a new ISampleProvider for each playback (with a RawSourceWaveStream reading from memory), because Read will be called independently on each one.
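
A sketch of that pattern (the cached byte array, its format, and the mixer are assumptions carried over from a typical fire-and-forget setup):

// each playback gets its own reader over the shared byte array
ISampleProvider MakeVoice(byte[] cachedBytes, WaveFormat format)
{
    var raw = new RawSourceWaveStream(new MemoryStream(cachedBytes), format);
    return raw.ToSampleProvider();
}

// two overlapping instances of the same sound
mixer.AddMixerInput(MakeVoice(gunshotBytes, gunshotFormat));
mixer.AddMixerInput(MakeVoice(gunshotBytes, gunshotFormat));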

New Post: Preparing audio for using HTML 5 audio element

Hi!

Over the last few days I've been playing around trying to get a working solution. My first attempt was to run an Icecast server and send my audio data to it, so Icecast could do all the HTTP stuff. I had tested this before with an Icecast source client called butt. In Chrome I got a delay of about 4 seconds. Not very satisfying. I'm sure I didn't configure the optimal settings, though.

This has to go faster, so I tried to get it working without Icecast and butt. I found this on Stackoverflow and also this article from Mark. The bottom of the second article explains how to use external encoders.

I've written my own HTTP server and used lame.exe for mp3 and oggenc2.exe for ogg.
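
For reference, the piping to the external encoder boils down to something like this (the exact lame.exe flags are my best guess; check them against the lame documentation):

var psi = new ProcessStartInfo("lame.exe", "-r -s 44.1 --bitwidth 16 - -")
{
    RedirectStandardInput = true,
    RedirectStandardOutput = true,
    UseShellExecute = false
};
var lame = Process.Start(psi);
// write raw PCM to lame.StandardInput.BaseStream as it is captured,
// and stream the mp3 from lame.StandardOutput.BaseStream to the HTTP response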
My results in regard to delay:
  • Chrome ogg: ~3 seconds
  • Chrome mp3: ~2 seconds
  • Firefox ogg: ~1.5 seconds
  • Firefox mp3: ~2 seconds
(44.1kHz, 16bit, stereo)

I don't know if it's possible to get under 1.5 seconds. If I can do the encoding directly in C# (with interop), I guess it might be a bit faster. I found a library for this, Ogg Vorbis Interop; at least I think I can encode ogg with it. I haven't tried it yet.

I wanted to share my discoveries even if no one cares :P

Kind regards,
Stefan

New Post: Play from Line-In?

I have a setup with an FM/DAB radio board connected to Line-In. I want to be able to play this through my speakers using NAudio, so I can manipulate the stream via an equalizer etc.

What is the best way of doing this with the best quality and minimum latency?

Is it just to create a WaveInEvent, record from that, and create a WaveOutEvent with the provider from the WaveIn and my speaker DeviceId?
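
In code, what I have in mind is roughly this (device numbers and latencies are placeholders):

var waveIn = new WaveInEvent { DeviceNumber = lineInDeviceNumber, BufferMilliseconds = 50 };
var buffer = new BufferedWaveProvider(waveIn.WaveFormat);
waveIn.DataAvailable += (s, e) => buffer.AddSamples(e.Buffer, 0, e.BytesRecorded);

var waveOut = new WaveOutEvent { DeviceNumber = speakerDeviceNumber, DesiredLatency = 100 };
waveOut.Init(buffer); // an equalizer ISampleProvider could be inserted here
waveIn.StartRecording();
waveOut.Play();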

New Post: WdlResamplingSampleProvider applied Filter

I want to compare downsampling in NAudio with downsampling in Matlab. For that I have to know what filter is applied and what its equivalent in Matlab would be. At the moment I use Matlab's decimate function for downsampling.

As far as I understand, a FIR filter of order 4 (2*BiQuad) is applied. But when I use a FIR filter of order 4 in Matlab, the signals are not as similar as they should be.

The best result I got was from doing the resampling with the MediaFoundationResampler at quality 60 and a Chebyshev Type 1 filter of order 8 in Matlab. But there I have no idea what kind of filter MediaFoundation applies.

Thx for helping me out.

New Post: Visualization/Waveforms with mixed wav files

So I've started using NAudio and it's AWESOME. I had an application in mind, started building it with NAudio, and everything is working perfectly. I've only run into one issue: I'm taking lots of small wav file samples and concatenating them, truncating them, adding extra blank audio, etc. to create my music. I tried out the WaveForm demo to create a visualization and it worked great for a single wave file.

Then I got really advanced and started mixing wav files together. The mixing demo worked great for playing the different wav files, but then I realized that getting the visualization to work might be tough when mixing. I could follow the waveform demo, where it reads through an AudioFileReader and finds the max value of each sample, and then loop through the other wave files, making sure the buffer size is the same and comparing the max values to the previous max values.

BUT before I go through all that, the one question I wanted to ask to make my life easier is: can the MixingSampleProvider, instead of just PLAYING multiple inputs to waveout, actually OUTPUT the audio of the mixed wav files to one single wav file? If that's possible, I could save myself a lot of trouble and, when rendering the waveform, just read through the one mixed/master wav file.

New Post: Recording trouble

Hello,

I'm using the NAudio library in my C# project for recording audio. Below is a description of my trouble:

Before recording of the audio (Wave) starts, the user selects one of the available input devices. We want to record audio at 16 bits and a sample rate of 48 kHz, so we use a WaveFormat object in the initialization of the WaveFileWriter. The code looks like this:
waveSource = new WaveIn();
waveSource.DeviceNumber = deviceNumber;
WaveFormat waveFormat = new WaveFormat(sampleRate, bits, waveSource.WaveFormat.Channels);
waveSource.WaveFormat = waveFormat;

waveSource.DataAvailable += OnDataAvailable;
waveSource.RecordingStopped += OnRecordingStopped;

memoryStream = new MemoryStream();
waveWriter = new WaveFileWriter(memoryStream, waveSource.WaveFormat);
It works as expected on some sound cards, but the same code has a problem on a professional high-end sound card. The problem is that only half of the range is used, i.e. when setting 16 bits/sample, the output wave stays in the range [-16384, 16384], or [-2^14, 2^14], instead of [-2^15, 2^15]. Moreover, when I switch the bit depth to 8 bits/sample, the samples are -32640 -32641 -32640 32640 -32640 etc., which is really strange.

Is there any way how to fix this?

Thanks for your help!

New Post: Capture stream to memory and play

I made an application that runs speech synthesis and saves the result to a wave file with NAudio. That file is then played.
I would like to create a memory stream and play from it without creating a wave file. Is there an example of memory stream usage?
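
One approach that should work (a sketch, assuming System.Speech for the synthesis): have the synthesizer write the WAV into a MemoryStream, rewind it, and play it with a WaveFileReader:

var synth = new System.Speech.Synthesis.SpeechSynthesizer();
var ms = new MemoryStream();
synth.SetOutputToWaveStream(ms); // synthesis writes a complete WAV into memory
synth.Speak("hello world");

ms.Position = 0; // rewind before reading
var reader = new WaveFileReader(ms);
var waveOut = new WaveOutEvent();
waveOut.Init(reader);
waveOut.Play();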

New Post: simple example to control volume using NAudio

I know this probably sounds simple, but I'm just getting started and need to know how to control the volume. I've seen comments about using the VolumeSampleProvider, but I'm having trouble with it. My current code looks like this:

m_WavePlayer = new WaveOut();
m_AudioFileReader = new AudioFileReader(strFullPath);
m_WavePlayer.Init(m_AudioFileReader);
m_WavePlayer.Play();

A little help would go a long way... Thx, Ed

New Post: simple example to control volume using NAudio

Here's how I've done it.


format = WaveFormat.CreateIeeeFloatWaveFormat(readerStream.WaveFormat.SampleRate, 2);

// Provide PCM conversion if needed
if (readerStream.WaveFormat.Encoding != WaveFormatEncoding.Pcm)
{
    readerStream = WaveFormatConversionStream.CreatePcmStream(readerStream);
    readerStream = new BlockAlignReductionStream(readerStream);
}

// Provide conversion to 16 bits if needed
if (readerStream.WaveFormat.BitsPerSample != 16)
{
    readerStream = new WaveFormatConversionStream(format, readerStream);
}

mixingSampleProvider = new MixingSampleProvider(format);
waveChannel = new WaveChannel32(readerStream);

// Convert wave to samples
waveToSample = new WaveToSampleProvider(waveChannel);

// Provide metering support
meterSampleProvider = new MeteringSampleProvider(waveToSample);

// Provide volume support
volumeSampleProvider = new VolumeSampleProvider(meterSampleProvider);

// Convert samples back to wave
sampleToWave = new SampleToWaveProvider(volumeSampleProvider);

if (waveOutDevice == null)
{
    waveOutDevice = new WaveOutEvent();
    waveOutDevice.PlaybackStopped += new EventHandler<StoppedEventArgs>(waveOutDevice_PlaybackStopped);
}

waveOutDevice.DeviceNumber = CPCore.Instance.AudioOutDevice.Id;
waveOutDevice.Init(sampleToWave);



Every time you need to support a new provider, use the last one as the input for the next.

Like this:

// Convert wave to samples
waveToSample = new WaveToSampleProvider(waveChannel);

// Provide metering support
meterSampleProvider = new MeteringSampleProvider(waveToSample);

// Provide volume support
volumeSampleProvider = new VolumeSampleProvider(meterSampleProvider);

// Add the samples to the mixed stream used to enable crossfading
mixingSampleProvider.AddMixerInput(volumeSampleProvider);

// Add support for an equalizer
equalizerSampleProvider = new EqualizerSampleProvider(mixingSampleProvider, equalizerBands);
equalizerSampleProvider.IsActive = initialEQIsActive.GetValueOrDefault();

// Convert samples back to wave
sampleToWave = new SampleToWaveProvider(equalizerSampleProvider);

New Post: Visualization/Waveforms with mixed wav files

BAM. Solved:
private WaveFileReader reader1;
private WaveFileReader reader2;
private WaveFileReader reader3;

string myinfile1 = @"C:\samples\00test\test1.wav";
string myinfile2 = @"C:\samples\00test\test2.wav";
string myinfile3 = @"C:\samples\00test\test3.wav";
string mymainfile = @"C:\samples\00test\testmaster.wav";

reader1 = new WaveFileReader(myinfile1);
reader2 = new WaveFileReader(myinfile2);
reader3 = new WaveFileReader(myinfile3);

WaveChannel32 stream1 = new WaveChannel32(reader1);
WaveChannel32 stream2 = new WaveChannel32(reader2);
WaveChannel32 stream3 = new WaveChannel32(reader3);

WaveMixerStream32 mixer = new WaveMixerStream32();
mixer.AddInputStream(stream1);
mixer.AddInputStream(stream2);
mixer.AddInputStream(stream3);

Wave32To16Stream wavmixer = new Wave32To16Stream(mixer);

WaveFileWriter.CreateWaveFile(mymainfile, wavmixer);

New Post: VolumeSampleProvider and Wave volume

I'm using the VolumeSampleProvider with WaveOutEvent to control my volume, but after getting an external sound card with a DAC, it doesn't work anymore, as I now have to control the volume through the wave volume.

But WaveOut has a Volume property which controls its volume through waveOutSetVolume, so why the difference, and how should I control the volume with waveOutSetVolume through the VolumeSampleProvider?

New Post: simple example to control volume using NAudio

Thank you for the suggestion. Here is the working code for creating the wave player:
            m_WavePlayer = new WaveOut();
            m_AudioFileReader = new AudioFileReader(strFullPath);
            m_VolSampleProvider = new VolumeSampleProvider(m_AudioFileReader);
            m_VolSampleProvider.Volume = Volume;
            m_SmplToWaveProvider = new SampleToWaveProvider(m_VolSampleProvider);
            m_WavePlayer.Init(m_SmplToWaveProvider);
And then the volume can be changed dynamically using this:
            if (m_VolSampleProvider != null)
                m_VolSampleProvider.Volume = value;
Interestingly, the VolumeSampleProvider and SampleToWaveProvider don't have Dispose() methods. I'm wondering if I have a memory leak?

New Post: simple example to control volume using NAudio

Another interesting question: when I run this code from the main form thread (testing), there are no issues at all. But in my real application it must be performed from a background thread. Initially I attempted to create a member variable to be used by a background thread, but I started having issues similar to cross-threading: errors indicating that it was not called on the thread it was created on.

OK, so I decided I would just create a new instance from whatever thread I planned to use it on, then dispose of it when done. But I'm finding that the WaveOut object hangs the thread when I execute the line of code below. Any ideas?
            m_WavePlayer.Dispose();

New Post: Visualization/Waveforms with mixed wav files

Great, glad you got it working. You could also have used a MixingSampleProvider and CreateWaveFile16.
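
That is, something like this (file names assumed; the inputs must share a sample rate and channel count):

var mixer = new MixingSampleProvider(new ISampleProvider[] {
    new AudioFileReader("test1.wav"),
    new AudioFileReader("test2.wav"),
    new AudioFileReader("test3.wav") });
WaveFileWriter.CreateWaveFile16("testmaster.wav", mixer);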