Channel: NAudio

New Post: Transfer function

I recommend looking at the FastFourierTransform class and the SampleAggregator class for a start.
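
As a rough illustration of what those classes do, here is a minimal sketch of running NAudio's FFT over one block of float samples. The buffering of incoming samples into power-of-two blocks (which is what SampleAggregator handles in the demo code) is omitted:

    using System;
    using NAudio.Dsp;

    class FftSketch
    {
        // returns the magnitude spectrum of one block of mono float samples;
        // samples.Length must be a power of two
        public static float[] Magnitudes(float[] samples)
        {
            int fftLength = samples.Length;
            int m = 0;
            for (int len = fftLength; len > 1; len >>= 1) m++; // log2(fftLength)

            var fft = new Complex[fftLength];
            for (int i = 0; i < fftLength; i++)
            {
                // window each sample to reduce spectral leakage
                fft[i].X = (float)(samples[i] * FastFourierTransform.HammingWindow(i, fftLength));
                fft[i].Y = 0f;
            }
            FastFourierTransform.FFT(true, m, fft); // in-place forward FFT

            // only the first half of the bins is meaningful for real input
            var magnitudes = new float[fftLength / 2];
            for (int i = 0; i < magnitudes.Length; i++)
                magnitudes[i] = (float)Math.Sqrt(fft[i].X * fft[i].X + fft[i].Y * fft[i].Y);
            return magnitudes;
        }
    }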

New Post: how to use SimpleCompressorStream?

SimpleCompressorStream is a class implementing ISampleProvider, meaning it is intended for output-driven systems (PULL, like an output device).

I see two options here:

1) Pass the data to a BufferedWaveProvider -> .ToSampleProvider -> SimpleCompressorStream -> read the data out again (PULL system)
2) Manually parse the data, as you did above, copy the SimpleCompressorStream functions and objects into your class and process the data directly (PUSH system)

I strongly recommend the first option. It also has the nice benefit that you don't have to parse the format yourself, and it is well tested and optimized.

New Post: how to use SimpleCompressorStream?

Do you have an example of how to do option 1? I'm pretty new to the entire audio topic and not sure I understood what I should do.

New Post: how to use SimpleCompressorStream?

In VB I'd do it this way (quick pseudo code):

Private BWP As BufferedWaveProvider
Private COMP As SimpleCompressorStream

Sub DataAvailable(ByVal sender As Object, ByVal e As WaveInEventArgs)

    If BWP Is Nothing Then
        BWP = New BufferedWaveProvider(e.WaveFormat)
        COMP = New SimpleCompressorStream(BWP.ToSampleProvider)
    End If

    BWP.AddSamples(e.Buffer, 0, e.BytesRecorded)

    Dim SampCount As Integer = e.BytesRecorded \ (e.WaveFormat.BitsPerSample \ 8)

    Dim Samples(SampCount - 1) As Single

    COMP.Read(Samples, 0, SampCount)

    ' Display the compressed samples, write to a file etc.

End Sub

New Post: how to use SimpleCompressorStream?

Thank you for your example, but I'm sorry, I don't manage to convert this to my C# code, because the parameters of
BWP = new BufferedWaveProvider(e.WaveFormat)
COMP = new SimpleCompressorStream(BWP.ToSampleProvider)
don't fit for me and I have no idea what to put there. The first line: e has no property called WaveFormat; I only have that from my record method, in m_WaveSource.WaveFormat. I tried to use that, but then I have the problem that the second line isn't working: the SimpleCompressorStream wants a WaveStream object, which I don't have...

Created Unassigned: In Visual Studio C#: How to play 2 wav files at certain dB values simultaneously [16515]

I am trying to build an application in C# / Visual Studio in which I want to play 2 wav files at certain dB values simultaneously. I looked into many Stack Overflow questions but didn't find any useful answers.

My scenario is: before using the application, the user fills in 2 different dB values for the 2 wav files. So I am facing two problems:

1. I am not able to figure out how to play two wav files at certain dB values.
2. How can I validate that the wav file is being played at the dB value I provided?

I tried using WaveChannel32, MediaPlayer and NAudio, but it didn't help.

Any help will be much appreciated.

Thanks, Robin
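
A minimal sketch of one possible approach: decode each file with an AudioFileReader, scale it with a VolumeSampleProvider (linear gain = 10^(dB/20), i.e. dB relative to full scale, not acoustic dB SPL, which software alone cannot guarantee) and mix the two with a MixingSampleProvider. This assumes both files share the same sample rate and channel count; the file names and gain values are placeholders:

    using System;
    using NAudio.Wave;
    using NAudio.Wave.SampleProviders;

    class DualPlayback
    {
        // dB relative to full scale; 0 dB = unchanged, -6 dB is roughly half amplitude
        static float DbToLinear(float db)
        {
            return (float)Math.Pow(10.0, db / 20.0);
        }

        static void Main()
        {
            using (var reader1 = new AudioFileReader("first.wav"))   // placeholder file names
            using (var reader2 = new AudioFileReader("second.wav"))
            using (var output = new WaveOutEvent())
            {
                var level1 = new VolumeSampleProvider(reader1) { Volume = DbToLinear(-6.0f) };
                var level2 = new VolumeSampleProvider(reader2) { Volume = DbToLinear(-12.0f) };

                // mix the two scaled streams; inputs must share the same format
                var mixer = new MixingSampleProvider(new[] { level1, level2 });

                output.Init(mixer);
                output.Play();
                Console.ReadKey(); // keep the app alive while playing
            }
        }
    }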

New Post: how to use SimpleCompressorStream?

Oh, alright, I was wrong. SimpleCompressorStream doesn't implement ISampleProvider, but WaveStream.

In this case simply change the second line to:
COMP = new SimpleCompressorStream(BWP)
and it should work. Discard the float-sample code and just call the Read method of COMP.

For the WaveFormat you are right: just pass the WaveFormat of the input device there.
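
A rough C# sketch of that chain (a sketch, not tested). Since SimpleCompressorStream's constructor declares a WaveStream parameter and BufferedWaveProvider only implements IWaveProvider, the sketch wraps the buffer in a thin WaveStream adapter of its own; sourceFormat stands for the input device's WaveFormat (e.g. m_WaveSource.WaveFormat):

    using NAudio.Wave;

    // thin adapter so a pull-only IWaveProvider can be used where a WaveStream is expected
    class WaveProviderStream : WaveStream
    {
        private readonly IWaveProvider source;

        public WaveProviderStream(IWaveProvider source)
        {
            this.source = source;
        }

        public override WaveFormat WaveFormat { get { return source.WaveFormat; } }
        public override long Length { get { return long.MaxValue; } } // unbounded live stream
        public override long Position { get; set; }                   // not meaningful for a live stream

        public override int Read(byte[] buffer, int offset, int count)
        {
            return source.Read(buffer, offset, count);
        }
    }

    class CompressorCapture
    {
        private readonly WaveFormat sourceFormat; // the input device's format
        private BufferedWaveProvider bwp;
        private SimpleCompressorStream comp;

        public CompressorCapture(WaveFormat sourceFormat)
        {
            this.sourceFormat = sourceFormat;
        }

        public void OnDataAvailable(object sender, WaveInEventArgs e)
        {
            if (bwp == null)
            {
                bwp = new BufferedWaveProvider(sourceFormat);
                comp = new SimpleCompressorStream(new WaveProviderStream(bwp));
                comp.Enabled = true;
            }

            bwp.AddSamples(e.Buffer, 0, e.BytesRecorded);           // push the recorded bytes in

            byte[] compressed = new byte[e.BytesRecorded];
            int read = comp.Read(compressed, 0, compressed.Length); // pull compressed bytes out
            // use compressed[0..read): write to a file, display it, etc.
        }
    }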

New Post: DirectSoundOut isn't fluent stream after passing effect (NAudio, C#)

Hi guys,
I've got a problem with playing my stream through DirectSoundOut. I open a wav file via OpenFileDialog - that's OK. Then I read the data from the wav file, convert it to float, apply my effect function to those samples, convert them back to byte[] and read them - and the problem starts when the stream of effected data is in DirectSoundOut: I can't hear a fluent, effected sound, it sounds like one sample is played several times. It changes when I change the latency, but with a small latency I can't hear any sound, and when I raise the latency I get the problem described above.
I think it's because:
1) The processing is too time-consuming and the data isn't processed in time, because my effect function is a transfer function which can vary in time - you have a graph of this function, you can change the positions of its points, and polynomial regression is applied to those points to get a parametric function; I call this method in the Read method and fit the samples to the equation.
2) I can't find the best latency...
        private BlockAlignReductionStream stream = null;
        private NAudio.Wave.DirectSoundOut output = null;
        private void button1_Click(object sender, EventArgs e)
        {
            OpenFileDialog open = new OpenFileDialog();
            open.Filter = "Wave File (*.wav)|*.wav;";
            if (open.ShowDialog() != DialogResult.OK) return;
            textBox1.Text = open.FileName;
            WaveChannel32 wave = new WaveChannel32(new WaveFileReader(open.FileName));
            EffectStream effect = new EffectStream(wave);
            stream = new BlockAlignReductionStream(effect);
            output = new DirectSoundOut();
            output.Init(stream);
            output.Play();
            button2.Enabled = true;
            chart1.Enabled = true;
        }
This is the part of the code where I try to open a file and play the effected audio.
public float getFunction(float x)
{
    double[] arrayX = new double[chart1.Series[0].Points.Count()]; // point positions from the chart, x axis
    double[] arrayY = new double[chart1.Series[0].Points.Count()]; // point positions from the chart, y axis
    double[] arrayResult = { };

    for (int i = 0; i < chart1.Series[0].Points.Count(); i++) // fill the arrays
    {
        arrayX[i] = chart1.Series[0].Points[i].XValue;
        arrayY[i] = chart1.Series[0].Points[i].YValues[0];
    }
    arrayResult = PolyRegression.Polyfit(arrayX, arrayY, 8); // PolyRegression solves the system of equations
    // functionVarA-I are the polynomial coefficients
    double functionVarI = arrayResult[0];
    double functionVarH = arrayResult[1];
    double functionVarG = arrayResult[2];
    double functionVarF = arrayResult[3];
    double functionVarE = arrayResult[4];
    double functionVarD = arrayResult[5];
    double functionVarC = arrayResult[6];
    double functionVarB = arrayResult[7];
    double functionVarA = arrayResult[8];

    // transfer function: 8th degree polynomial
    double equationVar = functionVarA * Math.Pow(x, 8) + functionVarB * Math.Pow(x, 7) + functionVarC * Math.Pow(x, 6) + functionVarD * Math.Pow(x, 5) + functionVarE * Math.Pow(x, 4) + functionVarF * Math.Pow(x, 3) + functionVarG * Math.Pow(x, 2) + functionVarH * x + functionVarI;
    float Transfer = Convert.ToSingle(equationVar); // convert to float
    return Transfer;
}
This is the code for getting the transfer function from the graph.
public class PolyRegression
{
    public static double[] Polyfit(double[] x, double[] y, int degree)
    {
        // least-squares polynomial regression: solve for the coefficients
        var v = new DenseMatrix(x.Length, degree + 1);
        for (int i = 0; i < v.RowCount; i++)
            for (int j = 0; j <= degree; j++)
                v[i, j] = Math.Pow(x[i], j); // Vandermonde matrix: v[i, j] = x[i]^j
        var yv = new DenseVector(y).ToColumnMatrix();
        QR qr = v.QR(); // QR decomposition (R is upper triangular)
        var r = qr.R.SubMatrix(0, degree + 1, 0, degree + 1);
        var q = v.Multiply(r.Inverse());
        var p = r.Inverse().Multiply(q.TransposeThisAndMultiply(yv));
        return p.Column(0).ToArray();
    }
}
This is how I solve the coefficients.
public override int Read(byte[] buffer, int offset, int count)
{
    Console.WriteLine("DirectSoundOut requested {0} bytes", count);

    int read = SourceStream.Read(buffer, offset, count);

    for (int i = 0; i < read / 4; i++)
    {
        // each 32-bit float sample goes through the transfer function
        float sample = BitConverter.ToSingle(buffer, offset + i * 4);
        sample = frm.getFunction(sample);

        byte[] bytes = BitConverter.GetBytes(sample);
        buffer[offset + i * 4 + 0] = bytes[0];
        buffer[offset + i * 4 + 1] = bytes[1];
        buffer[offset + i * 4 + 2] = bytes[2];
        buffer[offset + i * 4 + 3] = bytes[3];
    }

    return read;
}
And this is my Read function in EffectStream.

So guys, have you got any idea what I have to do? I need to hear a fluent, effected sound.
Thank you so much for your advice and comments. :-)

New Post: DirectSoundOut isn't fluent stream after passing effect (NAudio, C#)

Your Read method seems correct to me, so I assume your "getFunction" is causing the problems.

Also you can discard BlockAlignReductionStream, as it is not needed here.
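
In particular, getFunction above re-runs PolyRegression.Polyfit (a full QR decomposition) for every single sample, which at 44100 samples per second is almost certainly the bottleneck. A sketch of fitting once when the chart changes and only evaluating the cached polynomial from Read (class and member names are illustrative):

    using System;

    class CachedTransfer
    {
        private double[] coefficients;                 // Polyfit result, cached
        private readonly object coeffLock = new object();

        // call this from the chart's point-moved event handler
        public void UpdateCurve(double[] xs, double[] ys)
        {
            var fit = PolyRegression.Polyfit(xs, ys, 8);
            lock (coeffLock) { coefficients = fit; }
        }

        // cheap per-sample evaluation of the cached polynomial (Horner's rule)
        public float Transfer(float x)
        {
            double[] c;
            lock (coeffLock) { c = coefficients; }
            if (c == null) return x;                   // no curve yet: pass through

            double y = 0;
            for (int i = c.Length - 1; i >= 0; i--)    // c[i] multiplies x^i
                y = y * x + c[i];
            return (float)y;
        }
    }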

New Post: Request for some direction

I've successfully used NAudio to process an incoming UDP stream by defining a signal chain composed of a BufferedWaveProvider, followed by a SampleChannel, followed by a WdlResamplingSampleProvider, and finally a WaveOut device to play the audio stream on a USB audio device. Now my question. I am externally connecting the output of my USB audio device to the input of a pro audio card which has an ASIO driver, using PortAudio. I'd like to drop this external arrangement and instead patch the stream from the resampler directly into the code that currently gets its input through PortAudio. Is anyone willing to provide some direction for me to experiment with? I can provide more details. Basically I'd like to do away with the PortAudio stuff and patch the resampler output directly into other signal processing code I have, which is DttSP. Regards, Karin
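
One direction to experiment with (a sketch, assuming your DSP code can accept blocks of 32-bit float samples): drop the WaveOut device and drive the chain yourself by calling Read on the resampler from your own loop. ProcessBlock is a hypothetical stand-in for the DttSP entry point:

    using System.Threading;
    using NAudio.Wave;

    class ResamplerTap
    {
        // pulls from the end of the NAudio chain instead of letting WaveOut drive it
        public void PumpLoop(ISampleProvider resampler, CancellationToken token)
        {
            var block = new float[4096];       // block size: tune to your DSP's needs
            while (!token.IsCancellationRequested)
            {
                int read = resampler.Read(block, 0, block.Length);
                if (read == 0)
                {
                    Thread.Sleep(10);          // buffer empty: wait for more UDP data
                    continue;
                }
                ProcessBlock(block, read);     // hand the samples to DttSP here
            }
        }

        // placeholder for your own processing entry point
        void ProcessBlock(float[] samples, int count) { /* ... */ }
    }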

New Post: how to use SimpleCompressorStream?

I finally managed to make it work this way:

WaveFileReader audio = new WaveFileReader(strFile);

string strCompressedFile = "";

byte[] WaveData = new byte[audio.Length];

SimpleCompressorStream Compressor = new SimpleCompressorStream(audio);
Compressor.Enabled = true;

if (Compressor.Read(WaveData, 0, WaveData.Length) > 0)
{
    strCompressedFile = "xxx.wav";

    // dispose the writer so the wav header gets finalized
    using (WaveFileWriter CompressedWaveFile = new WaveFileWriter(strCompressedFile, audio.WaveFormat))
    {
        CompressedWaveFile.Write(WaveData, 0, WaveData.Length);
        CompressedWaveFile.Flush();
    }
}

New Post: How can I normalize my volume of my recorded wav file?

I hope someone can help me. I have a recorded wav file which I already sent through the SimpleCompressor class, like this:

WaveFileReader audio = new WaveFileReader(strFile);

string strCompressedFile = "";

byte[] WaveData = new byte[audio.Length];

SimpleCompressorStream Compressor = new SimpleCompressorStream(audio);
Compressor.Enabled = true;

if (Compressor.Read(WaveData, 0, WaveData.Length) > 0)
{
    strCompressedFile = "xxx.wav";

    // dispose the writer so the wav header gets finalized
    using (WaveFileWriter CompressedWaveFile = new WaveFileWriter(strCompressedFile, audio.WaveFormat))
    {
        CompressedWaveFile.Write(WaveData, 0, WaveData.Length);
        CompressedWaveFile.Flush();
    }
}
Afterwards I need to do some normalization of the volume, but I have no idea how to do that. Is there any function in NAudio for that, like the compressor class? If not, what do I have to do?

New Post: Fast Forwarding and Rewind

Hi All,

Please help me with how to do fast forwarding and rewind using NAudio.

New Post: DirectSoundOut isn't fluent stream after passing effect (NAudio, C#)

Could I ask? In this project I want to apply a waveshaper effect, but with a changing transfer function: getFunction(double x) is my transfer function, and I get it by taking the positions of the points from my chart and applying PolyRegression to get the polynomial function equationVar = functionVarA * Math.Pow(x, 8) + functionVarB * Math.Pow(x, 7) + functionVarC * Math.Pow(x, 6) + functionVarD * Math.Pow(x, 5) + functionVarE * Math.Pow(x, 4) + functionVarF * Math.Pow(x, 3) + functionVarG * Math.Pow(x, 2) + functionVarH * x + functionVarI.
And then I want to play this effected, destroyed sound: when I click the play button I want to hear the effected sound instead of my wav file's sound, and I still want to be able to change the transfer function while the audio is playing.
And my problem is: if I apply this function in the Read method, I can't hear a fluent effected sound.
So I ask: is there any solution for a waveshaper effect in NAudio?
I'll be so grateful for an answer. Thank you so much...

New Post: Transfer function

Hello, so I tried to do the FFT using SampleAggregator.cs, but I'm not sure if I really understand the SampleAggregator class. Is it an instance which takes samples from my wave file (or any input) and applies the FFT algorithm to every single sample of my wave file - that's the Add method in this class, right? And does the FFT length mean how large the step is between the counted frequencies in the spectrum? For example, if the frequency of my point[n] = 100 Hz, the frequency of point[n+1] is point[n] + (sample rate / FFT length), so with a sample rate of 44100 and an FFT length of 1024, if point[n] is 100 then point[n+1] = 100 + (44100 / 1024) = 143.066... ??? Am I right or absolutely dumb...?? :-D
Thanks for the answer.

New Post: Transfer function

I'm not deep into the maths, but what I know so far is:

Each bin of the FFT result is equally spaced in the frequency spectrum.

For example, when the SampleRate is 44100 Hz and the FFT size is 1024:

frequency[0] = 0 * 44100 / (1024 * 2)       // first bin is always 0 Hz, called the DC offset
frequency[1] = 1 * 44100 / (1024 * 2)       // second bin, here at ~22 Hz
frequency[2] = 2 * 44100 / (1024 * 2)       // third bin, here at ~43 Hz
...
frequency[1024] = 1024 * 44100 / (1024 * 2) // last bin is always half the SampleRate, here 22050 Hz

So just loop over the FFT result and calculate the frequency and amplitude of each bin.
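
As a sketch of that loop, assuming fftResult is NAudio's Complex[] FFT output, fftPoints is the total FFT size (the 1024 * 2 above) and sampleRate is e.g. 44100 (all names are placeholders):

    for (int bin = 0; bin <= fftPoints / 2; bin++)
    {
        double frequency = bin * sampleRate / (double)fftPoints; // Hz
        double amplitude = Math.Sqrt(fftResult[bin].X * fftResult[bin].X
                                   + fftResult[bin].Y * fftResult[bin].Y);
        // plot (frequency, amplitude)
    }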

New Post: Fast Forwarding and Rewind

You can do that with adjustable resampling, e.g. use the WdlResamplingSampleProvider and modify the output sample rate while playing.
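
As far as I know, WdlResamplingSampleProvider fixes its output rate at construction, so here is a sketch of a hand-rolled adjustable-speed provider instead: nearest-sample skipping, crude compared to proper resampling, with a mono source assumed. The Speed property can be changed while playing (2.0 = double speed, 0.5 = half speed); rewind would additionally need a seekable source, e.g. an AudioFileReader whose Position you reset:

    using NAudio.Wave;

    class VariableSpeedProvider : ISampleProvider
    {
        private readonly ISampleProvider source;
        private readonly float[] sourceBuffer = new float[8192];
        private double position; // fractional read position within sourceBuffer
        private int valid;       // number of valid samples in sourceBuffer

        public double Speed { get; set; } // 1.0 = normal, 2.0 = double speed

        public VariableSpeedProvider(ISampleProvider source)
        {
            this.source = source;
            Speed = 1.0;
        }

        public WaveFormat WaveFormat { get { return source.WaveFormat; } }

        public int Read(float[] buffer, int offset, int count)
        {
            int written = 0;
            while (written < count)
            {
                if (position >= valid) // exhausted the buffered samples: refill
                {
                    position -= valid;
                    valid = source.Read(sourceBuffer, 0, sourceBuffer.Length);
                    if (valid == 0) break; // end of source
                }
                buffer[offset + written++] = sourceBuffer[(int)position];
                position += Speed; // step through the source at the playback speed
            }
            return written;
        }
    }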

New Post: How can I normalize my volume of my recorded wav file?

Simply loop through all the samples and find the maximum absolute sample value. Then loop through the samples again and multiply each one by (1 / max).
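
A sketch of those two passes in C#, using AudioFileReader (which delivers samples as floats) and WaveFileWriter; the peak is taken as the maximum absolute value so negative peaks are handled too:

    using System;
    using NAudio.Wave;

    class Normalizer
    {
        public static void Normalize(string inputFile, string outputFile)
        {
            var buffer = new float[8192];
            float max = 0f;

            using (var reader = new AudioFileReader(inputFile)) // delivers float samples
            {
                int read;
                // pass 1: find the loudest sample (absolute value)
                while ((read = reader.Read(buffer, 0, buffer.Length)) > 0)
                    for (int i = 0; i < read; i++)
                        max = Math.Max(max, Math.Abs(buffer[i]));

                if (max == 0f) return; // silent file: nothing to scale

                reader.Position = 0;   // rewind for the second pass
                using (var writer = new WaveFileWriter(outputFile, reader.WaveFormat))
                {
                    // pass 2: multiply every sample by 1/max
                    while ((read = reader.Read(buffer, 0, buffer.Length)) > 0)
                    {
                        for (int i = 0; i < read; i++)
                            buffer[i] /= max;
                        writer.WriteSamples(buffer, 0, read);
                    }
                }
            }
        }
    }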

New Post: DirectSoundOut isn't fluent stream after passing effect (NAudio, C#)

As I said before, I assume your poly function is causing the distortions.

Here is an example of how to do realtime effects with NAudio.

New Post: DirectSoundOut isn't fluent stream after passing effect (NAudio, C#)

Yes, my poly function is doing the distortion, and I want to change the parameters functionVarI, functionVarH, functionVarG, functionVarF etc. in real time by dragging and moving points in the chart of my poly function, recomputing these parameters in real time and playing the destroyed sound...