Channel: NAudio

New Post: Using AcmMp3FrameDecompressor on a mp3 stream.

markheath wrote:

DecompressFrame turns MP3 into PCM. You cannot go directly from MP3 to A-law; you must go to PCM first. Also, A-law is almost always 8kHz mono, so you would need to resample as well. And are you really sure you want to convert it to A-law? Unless you are integrating with some antiquated telephony hardware, I can think of no reason to want to do this. Any music you use this on will sound horrible.

I have written a detailed article on CodeProject about how to convert between any audio formats which you can access here.

Thank you for the explanation. There's no music, only speech, so PCM is used by default in the software. I was hoping it was PCM coming from the DecompressFrame function, so that makes life a lot easier for me :).

Again, thank you for the response! I'll check out your article as well. 
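For anyone who does need the PCM-to-A-law step Mark mentions, the per-sample companding can be sketched like this. This is a minimal standalone G.711 A-law encoder written for illustration; it is not NAudio's own codec class, and the class/method names here are made up:

```csharp
using System;

// Illustrative sketch of G.711 A-law encoding of one 16-bit PCM sample
// (assumes the input has already been resampled to 8kHz mono).
static class ALawSketch
{
    public static byte LinearToALaw(short pcm)
    {
        int sample = pcm;
        int sign = (sample & 0x8000) >> 8;           // 0x80 when negative
        if (sign != 0) sample = -sample;
        if (sample > 32635) sample = 32635;          // clip to the A-law maximum
        int exponent = 7;
        for (int mask = 0x4000; (sample & mask) == 0 && exponent > 0; exponent--, mask >>= 1) { }
        int mantissa = (sample >> ((exponent == 0) ? 4 : (exponent + 3))) & 0x0F;
        byte alaw = (byte)(sign | (exponent << 4) | mantissa);
        return (byte)(alaw ^ 0xD5);                  // even-bit inversion per G.711
    }
}
```

Each output byte packs sign, a 3-bit exponent, and a 4-bit mantissa, which is why A-law halves the data rate of 16-bit PCM.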


New Post: How to playback webcam audio


Hello sir,

Thanks for your reply, but I am not able to follow either of those two solutions: WaveInProvider doesn't provide any method to stop putting audio into the buffer while playback is stopped, and the DiscardOnBufferOverflow property is defined on BufferedWaveProvider, whose instance is private inside WaveInProvider, so I cannot reach it from my code.

New Post: Volume Meter


The thing you pass to Init must have the metering sample provider in the audio pipeline or it will have no effect. Try this:

reader = new AudioFileReader(fileName); 

var postVolumeMeter = new MeteringSampleProvider(reader); 

postVolumeMeter.StreamVolume += OnPostVolumeMeter;

player.Init(postVolumeMeter);

player.Play();
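Conceptually, a metering stage passes samples through unchanged while tracking the peak absolute value of each block, which is what a StreamVolume-style event then reports. A standalone sketch of that computation (illustrative only, not NAudio's actual implementation):

```csharp
using System;

// Standalone sketch of what a metering stage computes per block:
// the peak absolute sample value, with samples otherwise untouched.
static class MeterSketch
{
    public static float PeakLevel(float[] samples)
    {
        float max = 0f;
        foreach (float s in samples)
        {
            float abs = Math.Abs(s);
            if (abs > max) max = abs;
        }
        return max;
    }
}
```

This is why the meter must sit in the pipeline you actually play: if playback never pulls samples through it, there is nothing to measure.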

New Post: How to playback webcam audio


Don't use WaveInProvider then. It's a very simple class that uses WaveIn and BufferedWaveProvider.
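The pattern those two classes combine is simple enough to reimplement yourself: the capture callback pushes bytes into a buffer, playback drains it, and overflow is discarded. A toy stand-in for illustration (this is not the real BufferedWaveProvider; the class name is made up):

```csharp
using System;
using System.Collections.Generic;

// Toy stand-in for the WaveIn + BufferedWaveProvider pattern: the capture
// side appends bytes, the playback side drains them, and writes beyond the
// capacity are dropped (the behaviour DiscardOnBufferOverflow enables).
class ToyBufferedProvider
{
    private readonly Queue<byte> buffer = new Queue<byte>();
    private readonly int maxBytes;

    public ToyBufferedProvider(int maxBytes) { this.maxBytes = maxBytes; }

    // Call this from your capture callback (e.g. WaveIn's DataAvailable event).
    public void AddSamples(byte[] data, int offset, int count)
    {
        for (int i = 0; i < count; i++)
        {
            if (buffer.Count >= maxBytes) return; // discard on overflow
            buffer.Enqueue(data[offset + i]);
        }
    }

    // Call this from the playback side; returns the number of bytes copied.
    public int Read(byte[] dest, int offset, int count)
    {
        int n = 0;
        while (n < count && buffer.Count > 0)
        {
            dest[offset + n++] = buffer.Dequeue();
        }
        return n;
    }

    public int BufferedBytes { get { return buffer.Count; } }
}
```

Because you own the AddSamples call, "stop putting audio into the buffer while playback is stopped" becomes a one-line check in your own capture handler.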

New Post: NAudio Volume Slider


Hi, I am currently at uni working on a media player and having a real newbie problem: I can't seem to find the code to connect my slider to my audio output. Any help would be incredibly helpful, as my lecturer doesn't know NAudio in much detail! Many thanks :)

here is the source code I have so far:

using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Linq;
using System.Text;
using System.Windows.Forms;


namespace naudiotrial
{

    public partial class Form1 : Form
    {
        public Form1()
        {
            InitializeComponent();
        }

        private NAudio.Wave.BlockAlignReductionStream stream = null;

        private NAudio.Wave.DirectSoundOut output = null;

        private void OpenAudio_Click(object sender, EventArgs e)
        {
            OpenFileDialog open = new OpenFileDialog();
            open.Filter = "Audio File (*.mp3;*.wav)|*.mp3;*.wav;";
            if (open.ShowDialog() != DialogResult.OK) return;

            DisposeWave();

            if (open.FileName.EndsWith(".mp3"))
            {

                NAudio.Wave.WaveStream pcm = NAudio.Wave.WaveFormatConversionStream.CreatePcmStream(new NAudio.Wave.Mp3FileReader(open.FileName));
                stream = new NAudio.Wave.BlockAlignReductionStream(pcm);
            }

            else if (open.FileName.EndsWith(".wav"))
            {
                NAudio.Wave.WaveStream pcm = new NAudio.Wave.WaveChannel32(new NAudio.Wave.WaveFileReader(open.FileName));
                stream = new NAudio.Wave.BlockAlignReductionStream(pcm);
            }

            else throw new InvalidOperationException("Not a correct Audio file type.");

            output = new NAudio.Wave.DirectSoundOut();
            output.Init(stream);
            output.Play();

            PausePlay.Enabled = true;

        }

        private void PausePlay_Click(object sender, EventArgs e)
        {

            if (output != null)
            {
                if (output.PlaybackState == NAudio.Wave.PlaybackState.Playing) output.Pause();
                else if (output.PlaybackState == NAudio.Wave.PlaybackState.Paused) output.Play();
            }
        }

        private void DisposeWave()
        {
            if (output != null)
            {
                if (output.PlaybackState == NAudio.Wave.PlaybackState.Playing) output.Stop();
                output.Dispose();
                output = null;
            }
            if (stream != null)
            {
                stream.Dispose();
                stream = null;
            }

        }


        private void Form1_FormClosing(object sender, FormClosingEventArgs e)
        {
            DisposeWave();

        }

        private void trackBar1_Scroll(object sender, EventArgs e)
        {

            label1.Text = Convert.ToString(trackBar1.Value);


        }
    }
}
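For a volume slider specifically, the missing piece is keeping a reference to something with a Volume property (WaveChannel32 exposes one) rather than only the BlockAlignReductionStream, and mapping the trackbar range to the 0.0-1.0 float it expects. The mapping itself is just this (a sketch; the slider range of 0-100 is an assumption):

```csharp
using System;

// Sketch: map a WinForms TrackBar value (assumed range 0..sliderMax) to the
// 0.0-1.0 float expected by a Volume property such as WaveChannel32.Volume.
static class VolumeSliderSketch
{
    public static float SliderToVolume(int sliderValue, int sliderMax)
    {
        if (sliderValue < 0) sliderValue = 0;        // clamp out-of-range values
        if (sliderValue > sliderMax) sliderValue = sliderMax;
        return sliderValue / (float)sliderMax;
    }
}
```

In the Scroll handler you would then do something like `waveChannel.Volume = VolumeSliderSketch.SliderToVolume(trackBar1.Value, trackBar1.Maximum);`, which means storing the WaveChannel32 in a field when you open the file.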

Created Issue: BlockAlignReductionStream.Read cannot read large chunks. [16378]

The BlockAlignReductionStream uses a circular buffer to allow unaligned reading from an otherwise block-aligned source stream. If the user tries to read more bytes than would fit in the circular buffer, the BlockAlignReductionStream only returns the amount that fits in the circular buffer, but internally advances the input stream by the full requested number of bytes. Future read requests will therefore miss a portion of the input stream.

A simple fix would be to clamp the number of bytes read from the input stream to the size of the circular buffer. I could not find the SVN URL for the source tree or figure out where to submit a patch, so I have just attached the modified source file.
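A toy model of the arithmetic behind the bug: when the source is advanced by the full request but only the circular buffer's worth is returned, every oversized read silently skips the difference (the buffer size below is illustrative, not the stream's actual capacity):

```csharp
using System;

// Toy model of the reported bug: if the source stream is advanced by the
// full requested count while only `bufferSize` bytes fit in the circular
// buffer, subsequent reads have silently skipped the difference.
static class ReadClampSketch
{
    public static int BytesSkipped(int requested, int bufferSize)
    {
        int buffered = Math.Min(requested, bufferSize); // the proposed fix clamps the source read to this
        return requested - buffered;                    // non-zero means data was lost
    }
}
```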

New Comment on "MP3"

Hi Mark, what do you think about this MatrixBus implementation? I don't know if I got all the locks right...

public class BusMatrixWaveProvider : IWaveProvider
{
    private readonly IList<IWaveProvider> inputs;
    private readonly WaveFormat waveFormat;
    private readonly int outputChannelCount;
    private int inputChannelCount;
    private readonly int bytesPerSample;
    private float[] volumes;

    public float GetVolume(int inputIndex, int outputIndex)
    {
        if (inputIndex < 0 || inputIndex >= InputChannelCount)
        {
            throw new ArgumentException("Invalid input channel");
        }
        if (outputIndex < 0 || outputIndex >= OutputChannelCount)
        {
            throw new ArgumentException("Invalid output channel");
        }
        lock (volumes)
        {
            int index = inputIndex + (outputIndex * inputChannelCount);
            return volumes[index];
        }
    }

    public void SetVolume(int inputIndex, int outputIndex, float value)
    {
        if (inputIndex < 0 || inputIndex >= InputChannelCount)
        {
            throw new ArgumentException("Invalid input channel");
        }
        if (outputIndex < 0 || outputIndex >= OutputChannelCount)
        {
            throw new ArgumentException("Invalid output channel");
        }
        lock (volumes)
        {
            int index = inputIndex + (outputIndex * inputChannelCount);
            volumes[index] = value;
        }
    }

    public void AddInput(IWaveProvider waveProvider)
    {
        lock (volumes)
        {
            //inputChannelCount += 1;
            inputChannelCount += waveProvider.WaveFormat.Channels;
            lock (inputs)
            {
                this.inputs.Add(waveProvider);
                Array.Resize<float>(ref volumes, outputChannelCount * inputChannelCount);
            }
        }
    }

    public void RemoveInput(IWaveProvider waveProvider)
    {
        int index = inputs.IndexOf(waveProvider);
        if (index == -1)
            throw new ArgumentException("input was not found in this mixer");
        lock (volumes)
        {
            //inputChannelCount -= 1;
            int inputChannels = waveProvider.WaveFormat.Channels;
            inputChannelCount -= inputChannels;
            lock (inputs)
            {
                inputs.Remove(waveProvider);
                int offset = 0;
                for (int i = index; i < volumes.Length; i += (inputChannelCount + inputChannels))
                {
                    Array.Copy(volumes, i + inputChannels, volumes, i - offset, inputChannelCount);
                    offset += inputChannels;
                }
                Array.Resize<float>(ref volumes, outputChannelCount * inputChannelCount);
            }
        }
    }

    public void RemoveInput(int index)
    {
        if (index > inputs.Count - 1 || index < 0)
            throw new ArgumentOutOfRangeException("index was not found in this bus");
        RemoveInput(inputs[index]);
    }

    /// <summary>
    /// Creates a mixing wave provider, allowing mixing of input channels to different
    /// output channels
    /// </summary>
    /// <param name="numberOfOutputChannels">Desired number of output channels.</param>
    public BusMatrixWaveProvider(int numberOfOutputChannels)
    {
        this.outputChannelCount = numberOfOutputChannels;
        if (numberOfOutputChannels < 1)
        {
            throw new ArgumentException("You must provide at least one output");
        }
    }

    /// <summary>
    /// Creates a mixing wave provider, allowing mixing of input channels to different
    /// output channels
    /// </summary>
    /// <param name="inputs">Input wave providers. Must all be of the same format, but can have any number of channels</param>
    /// <param name="numberOfOutputChannels">Desired number of output channels.</param>
    public BusMatrixWaveProvider(IEnumerable<IWaveProvider> inputs, int numberOfOutputChannels)
    {
        this.inputs = new List<IWaveProvider>(inputs);
        this.outputChannelCount = numberOfOutputChannels;
        if (this.inputs.Count == 0)
        {
            throw new ArgumentException("You must provide at least one input");
        }
        if (numberOfOutputChannels < 1)
        {
            throw new ArgumentException("You must provide at least one output");
        }
        foreach (var input in this.inputs)
        {
            if (this.waveFormat == null)
            {
                if (input.WaveFormat.Encoding == WaveFormatEncoding.Pcm)
                {
                    this.waveFormat = new WaveFormat(input.WaveFormat.SampleRate, input.WaveFormat.BitsPerSample, numberOfOutputChannels);
                }
                else if (input.WaveFormat.Encoding == WaveFormatEncoding.IeeeFloat)
                {
                    this.waveFormat = WaveFormat.CreateIeeeFloatWaveFormat(input.WaveFormat.SampleRate, numberOfOutputChannels);
                }
                else
                {
                    throw new ArgumentException("Only PCM and 32 bit float are supported");
                }
            }
            else
            {
                if (input.WaveFormat.BitsPerSample != this.waveFormat.BitsPerSample)
                {
                    throw new ArgumentException("All inputs must have the same bit depth");
                }
                if (input.WaveFormat.SampleRate != this.waveFormat.SampleRate)
                {
                    throw new ArgumentException("All inputs must have the same sample rate");
                }
            }
            inputChannelCount += input.WaveFormat.Channels;
        }
        this.bytesPerSample = this.waveFormat.BitsPerSample / 8;
        this.volumes = new float[this.inputChannelCount * this.outputChannelCount];
    }

    private byte[] inputBuffer;

    public int Read(byte[] buffer, int offset, int count)
    {
        //byte[] tempbuffer = new byte[count];
        Array.Clear(buffer, offset, count);
        int outputBytesPerFrame = bytesPerSample * outputChannelCount;
        int sampleFramesRequested = count / outputBytesPerFrame;
        int sampleFramesRead = 0;
        lock (volumes)
        {
            // now we must read from all inputs, even if we don't need their data, so they stay in sync
            for (int inputIndex = 0; inputIndex < inputChannelCount; inputIndex++)
            {
                int inputBytesPerFrame = bytesPerSample;
                int bytesRequired = sampleFramesRequested * inputBytesPerFrame;
                this.inputBuffer = BufferHelpers.Ensure(this.inputBuffer, bytesRequired);
                lock (inputs)
                {
                    int bytesRead = inputs[inputIndex].Read(inputBuffer, 0, bytesRequired);
                    sampleFramesRead = Math.Max(sampleFramesRead, bytesRead / inputBytesPerFrame);
                    Sum32BitAudio(buffer, inputBuffer, bytesRead, inputBytesPerFrame, inputIndex, outputChannelCount, volumes);
                }
            }
        }
        return sampleFramesRead * outputBytesPerFrame;
    }

    /// <summary>
    /// Actually performs the mixing
    /// </summary>
    private static unsafe void Sum32BitAudio(byte[] destBuffer, byte[] sourceBuffer, int bytesRead,
        int inputBytesPerFrame, int inputIndex, int outputChannelCount, float[] volumes)
    {
        fixed (byte* pDestBuffer = &destBuffer[0], pSourceBuffer = &sourceBuffer[0])
        {
            fixed (float* volume = &volumes[inputIndex])
            {
                float* pfDestBuffer = (float*)pDestBuffer;
                float* pfReadBuffer = (float*)pSourceBuffer;
                int samplesRead = bytesRead / inputBytesPerFrame;
                for (int n = 0; n < samplesRead; n++)
                {
                    for (int nOutput = 0; nOutput < outputChannelCount; nOutput++)
                    {
                        //pfDestBuffer[(n * outputChannelCount) + nOutput] += (pfReadBuffer[n]);
                        pfDestBuffer[n + nOutput] += (pfReadBuffer[n] * volume[outputChannelCount * nOutput]);
                    }
                }
            }
        }
    }

    public WaveFormat WaveFormat
    {
        get { return waveFormat; }
    }

    public int InputChannelCount
    {
        get { return inputChannelCount; }
    }

    public int OutputChannelCount
    {
        get { return outputChannelCount; }
    }
}

New Post: A possible memory issue in NAudio


Mark,

I have tested NAudio with hHeader pinned and everything seems to work as expected. As far as I can see, the hHeader pinning is the only change needed, at least on the WaveIn side.

It appears this problem only occurs if 1) the wave input buffer is over 85,000 bytes (at least 482-millisecond buffers with two 16-bit channels) and 2) the garbage collector needs to be aggressive.

In my case, the application windows that were open when NAudio gave an access violation were presenting real-time graphics of incoming analog data, with the input data analysis displayed each second. The processing involved creating a large number of short-lifetime but large objects; from Perfmon, the large object heap was growing into the hundreds of megabytes. I know - the code will be changed to reuse the large objects rather than creating new ones.

Thanks again, Mark, for a nice general-purpose product in NAudio.

John C

New Post: How to playback webcam audio


I am using BufferedWaveProvider directly and it's working now. Thank you.

New Post: Volume Meter


Hi,

I was getting an error because of an invalid parameter:

The best overloaded method match for 'NAudio.Wave.WaveOut.Init(NAudio.Wave.IWaveProvider)' has some invalid arguments
Argument '1': cannot convert from 'NAudio.Wave.SampleProviders.MeteringSampleProvider' to 'NAudio.Wave.IWaveProvider'

Thanks & Regards,

Hinshin

New Post: MF streaming


Hello,

Continuing the MediaFoundation series ;-)

I did a little streaming test (http://radio.reaper.fm/stream/) with MediaFoundationReader.

It works; you just have to change the little GetLength routine.

I did not find any indication of buffer loading progress.

 

Modification ::: MediaFoundationReader.cs

Add:

// Detect if streaming
private bool IsStreamingUrl = false;

In CreateReader:

protected virtual IMFSourceReader CreateReader(MediaFoundationReaderSettings settings)
{
    //var uri = new Uri(file);
    IMFSourceReader reader;
    //string fileTest = "http://radio.reaper.fm/stream/"; // change for FileName with an accent
    Uri nUrl = null;
    string str = "";
    if (Uri.TryCreate(file, UriKind.Absolute, out nUrl))
    {
        str = nUrl.ToString();
    }
    else
    {
        str = nUrl.AbsoluteUri;
    }
    // Detect streaming mode
    if (nUrl.Scheme != "file")
        IsStreamingUrl = true;

    MediaFoundationInterop.MFCreateSourceReaderFromURL(str, null, out reader);

    ........
}

In GetLength:

private long GetLength(IMFSourceReader reader)
{
    if (IsStreamingUrl) return 0;  // Return 0 !!!!
    ....
}

New Post: Volume Meter


Ah yes, convert it to an IWaveProvider (I'm planning to make that step automatic in future NAudio versions). You've already got the code that does this elsewhere in your app:

player.Init(new SampleToWaveProvider(postVolumeMeter));

New Post: NAudio Volume Slider


Do you mean that you want to reposition within the file using a scrollbar? If so, there is an example of how to do this in the NAudioDemo app. The basic technique is to use trackBar1.Value to work out the percentage of the way through your file you want to move to, and then set the Position of your underlying Mp3FileReader or WaveFileReader object (you'll need to keep a reference to it to do so).
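The repositioning arithmetic can be sketched as: scale the stream length by the slider fraction, then align down to a whole block so the reader stays on a frame boundary (the slider range and block align values below are illustrative):

```csharp
using System;

// Sketch of the seek arithmetic: slider fraction -> byte position,
// aligned down to a whole block so the stream stays frame-aligned.
static class SeekSketch
{
    public static long PositionFromSlider(int sliderValue, int sliderMax, long streamLength, int blockAlign)
    {
        long pos = (long)(streamLength * (sliderValue / (double)sliderMax));
        return pos - (pos % blockAlign); // align down to a block boundary
    }
}
```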

New Post: Volume Meter


Hi,

Now it is not playing, and it is not hitting the OnPostVolumeMeter(object sender, StreamVolumeEventArgs e) event.

Thanks & Regards,

Hinshin

New Post: How to code a circular buffer to create a delay, using WaveIn for the microphone and output to the speaker?


Hi again,

I have solved the problem by using the WaveMixerStream32 class with some editing.

Another question: I want to send this mixer's output over a UDP port to somewhere (as digital sound).

I have implemented a thread, and every 1000 ms I send a block from the mixer to the UDP port.

Every 1000 ms, my thread reads a block from the mixer via mixer.Read(data, 0, block_size).

My block size is: waveformat.sampleRate * waveformat.BlockAlign

It then sends this data to the UDP port.

On the server side I am receiving this data, but when I play it, it is very noisy. What could the problem be?

In addition to this: I have sent an existing wave file to the UDP port in the same way, and there was no problem - I received and played it on the server side. I have also played the mixer's output through a waveout device. So the problem is specifically with sending the mixer's output over UDP (capturing the mixer's output buffer)!

New Post: How to code a circular buffer to create a delay, using WaveIn for the microphone and output to the speaker?


I have solved the problem, thanks.

I had to do a 32-bit to 16-bit conversion, which I had forgotten before sending it over UDP.
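For anyone hitting the same thing: a 32-bit float stream (such as WaveMixerStream32's output) carries samples in the [-1, 1] range, and sending those raw bytes to a receiver expecting 16-bit PCM plays back as noise. The per-sample conversion can be sketched as:

```csharp
using System;

// Sketch of the 32-bit float to 16-bit PCM conversion: clamp each sample
// to [-1, 1] and scale to the short range before putting bytes on the wire.
static class FloatToPcm16Sketch
{
    public static short ToPcm16(float sample)
    {
        if (sample > 1.0f) sample = 1.0f;
        if (sample < -1.0f) sample = -1.0f;
        return (short)(sample * short.MaxValue);
    }
}
```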

New Post: Mf Reader :: Filename with Accent


I'm thinking that MediaFoundationReader should simply pass the file value straight through. This seems to work fine in my tests. This would allow it to be a URL or an absolute path and fixes the problem with accents in the filename.

Mark

Source code checked in, #8683b50b078e

Media Foundation Reader updated to pass the URL through unchanged, supporting files with accents in their names. The Media Foundation Reader demo now allows playing back from URLs. WaveOutEvent updated to allow multiple calls to Init.

New Post: Volume Meter


Hi,

Thank you for your response; I got the volume meter working.

However, I was not able to play a wave file with a bit rate of 319 kbps...

Thanks & Regards,

Hinshin

Source code checked in, #cefdb8fd6778

Support for streaming radio, where duration is not available, with MediaFoundationReader.