I would like to see these references added as part of the NuGet package(s), since they're required. It took me a long time to figure out why an Azure deployment was failing because of this :(
↧
New Post: System.Drawing and System.Windows.Forms references
↧
New Post: Converting RTP Packets into wav format and writing to a wav file dynamically
Excellent article Mark. It took me a while to understand )).. A few questions after reading your article:
1) NAudio will only be able to convert a raw stream into PCM for ACM codecs or the ones supported by Media Foundation, right? If I have a custom codec installed on my machine, will ACM/MF be able to convert from that custom format into PCM? Also, I found an alternate way of generating the WAV file using your code and playing it in Media Player for the MuLaw format. I extracted the payloads from RTP packets and stored them directly into the WAV file without converting them into PCM, and created the WAV header with MuLaw format information instead of PCM. I am able to play that WAV file in Media Player. So I believe that if the correct codec is installed, Media Player will automatically create the PCM stream based on the codec info in the header and play it for us. But again the question remains: what will happen in the case of custom codecs about which Media Player is unaware?
2) Also, I want to write my audio in stereo format to a WAV file. I saw your code but it seems that stereo format writing is supported by NAudio only for PCM streams. If I want to write the MuLaw payloads in stereo format to a WAV file, is there a way to do it using NAudio?
Thanks,
Saleh
↧
New Post: Custom sample provider FFT sample capture issue
I was experimenting with WasapiOut.cs and found out the minimum supported period is 30000 (3 ms). So, I did the following:
audioClient.Initialize(shareMode, AudioClientStreamFlags.EventCallback,
    audioClient.MinimumDevicePeriod, 0, outputFormat, Guid.Empty);
It did not work. The stream latency ended up being 1.0666 seconds, which is the default.
Exclusive mode, based on MSDN documentation of IAudioClient::Initialize, should work if I set the buffer length and periodicity to MinimumDevicePeriod. It merely errors out: Buffer size not aligned.
What I can possibly do is, like you suggested, place the completed FftBuffers into a FIFO queue. Then I can (StreamLatency / 10000) / FftQueue.Length = Spectrum analyzer refresh rate. Then use a timer, with period based on refresh rate, to render the FFT.
It may not work as predicted, so I am still interested in retrieving the current frame based on a timer elapsed event.
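(As a rough sanity check of that formula, assuming StreamLatency is reported in 100-ns REFERENCE_TIME units so that dividing by 10,000 gives milliseconds: a latency of 1.0666 s is roughly 1,066 ms, and a queue of 10 or 11 completed FFT buffers per latency period works out to one buffer every ~100 ms, i.e. a timer period of about 100 ms and a refresh rate near 10 Hz, which matches the 10-11 FFT notifications per second mentioned elsewhere in this thread.)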
↧
New Post: System.Drawing and System.Windows.Forms references
@tommck, what class in NAudio were you using that needed WinForms in Azure?
↧
New Post: Custom sample provider FFT sample capture issue
You must make sure your buffer size is block aligned (use BlockAlign on the WaveFormat).
I'm not quite sure what you are referring to as a "frame" in this context.
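To illustrate what block alignment means here (a minimal sketch; waveFormat stands for whatever WaveFormat you are actually using, and 30000 is just an example size): BlockAlign is the size in bytes of one frame, i.e. Channels * (BitsPerSample / 8), so a buffer is block aligned when its length is a whole multiple of it.
// Round a requested byte count down to a whole number of frames.
// BlockAlign is 4 for 16-bit stereo or for 32-bit float mono, for example.
int desiredBytes = 30000;                              // hypothetical requested size
int remainder = desiredBytes % waveFormat.BlockAlign;  // bytes past the last whole frame
int alignedBytes = desiredBytes - remainder;           // largest aligned size <= desiredBytes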
↧
New Post: Converting RTP Packets into wav format and writing to a wav file dynamically
If it is an ACM codec you must use WaveFormatConversionStream, and if it is a Media Foundation codec you use MediaFoundationReader. It can be a bit of a pain working with custom ACM codecs - you have to examine what WaveFormat they are expecting and pass that in. The NAudioDemo project does a fair bit of the work for you in showing the input and output formats. Windows Media Player can use ACM and Media Foundation codecs so as long as the format chunk in your WAV file is correct, it should play fine.
There is nothing stopping you putting stereo MuLaw into a WAV file. Just set up the WaveFormat with the correct values (MuLaw encoding, 2 channels, 8 bits per sample, etc.) and you should be fine. MuLaw is not normally stereo, since its main use is telephony.
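For reference, a minimal sketch of writing raw mu-law payload bytes straight into a stereo WAV (the GetNextInterleavedPayload helper and the file name are invented here; interleaving the Rx/Tx payloads into left/right bytes is assumed to have been done already):
// 8 kHz, 2 channels, 8 bits per sample mu-law format chunk
var muLawStereo = WaveFormat.CreateMuLawFormat(8000, 2);
using (var writer = new WaveFileWriter("call.wav", muLawStereo))
{
    byte[] interleaved = GetNextInterleavedPayload(); // hypothetical: your RTP depacketiser
    writer.Write(interleaved, 0, interleaved.Length);
}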
↧
New Post: How would I detect drum beats in a live stream?
NAudio gives you access to the raw samples. You'd need to implement your own transient detection algorithm. How simple this is depends on what sort of signal you are looking at. If you can isolate a single drum then it is not too hard. If you mic a full kit, then it would be a much more tricky problem.
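For what it's worth, a very rough sketch of the energy-based approach (the 2.5x jump ratio and 0.01 noise floor are made-up values that would need tuning, and the class name is invented):
using System;

// Naive onset detector over 32-bit float samples, e.g. the buffer an
// ISampleProvider.Read call has just filled. Compares short-window RMS energy
// against the previous window and flags a sudden jump as a beat.
class SimpleBeatDetector
{
    private float previousRms;

    public bool IsBeat(float[] samples, int offset, int count)
    {
        double sum = 0;
        for (int i = 0; i < count; i++)
        {
            float s = samples[offset + i];
            sum += s * s;
        }
        float rms = (float)Math.Sqrt(sum / count);
        bool beat = rms > 0.01f && rms > previousRms * 2.5f;
        previousRms = rms;
        return beat;
    }
}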
↧
New Post: Custom sample provider FFT sample capture issue
I'm new at audio processing so you will have to forgive my ignorance.
I took a look at WaveFormat.BlockAlign. It only has a getter and the value was "4" when I retrieved it (I have been trying to figure out exactly what that pertains to).
frame: audio information for a given point in time.
The problem I seem to be having is the FFT calculation only happens 10 - 11 times per second and trying to time it correctly is... difficult.
If I rely on notifications, I get 10 - 11 (currently), one right after the other, then nothing for 1 second. So, I tried setting my timer to 100ms, which seemed to give me the correct data, but did not run smoothly.
I do need to get a better understanding of how the underlying code works because when I finish this project, I will be porting over to Windows Phone 8.
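A sketch of the queue-plus-timer idea mentioned earlier in the thread, to turn the bursty notifications into a steady display update (all names here are invented; it needs System and System.Collections.Concurrent, and the ~100 ms tick assumes roughly ten FFT frames per one-second buffer):
// Producer: whenever an FFT frame is completed, queue it instead of drawing it.
ConcurrentQueue<float[]> fftQueue = new ConcurrentQueue<float[]>();

void OnFftCalculated(float[] magnitudes)
{
    fftQueue.Enqueue(magnitudes);
}

// Consumer: a timer with a ~100 ms period pops one frame per tick, so the
// spectrum display refreshes smoothly instead of ten frames arriving at once.
void OnRenderTimerTick(object sender, EventArgs e)
{
    float[] frame;
    if (fftQueue.TryDequeue(out frame))
    {
        RenderSpectrum(frame); // hypothetical drawing routine
    }
}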
↧
New Post: change left or Right channel volume.
It works all right for a single song. When I go to the next song it is reset. What should I do now?
↧
New Post: Where to discuss NAudio 1.7-alpha
Where is an appropriate place to discuss NAudio 1.7-alpha builds?
I am very interested in using MediaFoundationReader. I have used it with some success, but I cannot detect the end of the stream in order to play another file. The stream just repeats forever.
Snippet:
// FileWaveStream is of type WaveStream
FileWaveStream = new MediaFoundationReader(fileNameToPlay,
    new MediaFoundationReader.MediaFoundationReaderSettings
    {
        RepositionInRead = true,
        RequestFloatOutput = false,
        SingleReaderObject = false
    });

// wavePlayer is of type IWavePlayer
wavePlayer.Init(FileWaveStream);
wavePlayer.Play();
↧
New Post: Where to discuss NAudio 1.7-alpha
That's very strange and unexpected. What sort of a file are you playing?
When you say "repeats" forever, do you mean it goes back to the start again? Or does it play silence?
↧
New Post: Where to discuss NAudio 1.7-alpha
The file format I am using is either mp3 or flac. I installed the mfFlac codec to support flac in Media Foundation. http://sourceforge.net/projects/mfflac/
Interestingly enough, in further testing the wavePlayer.PlaybackState never changes to PlaybackState.Stopped. So it gets to the end of the stream and just stops. The reason for the repeat was code I have to detect a threading problem ( https://naudio.codeplex.com/discussions/357995 ) where the wavePlayer would start to play a track and then just stop. I have updated the code to also check if we are at the end of the stream and fire off the next track. While this works, it doesn't seem ideal.
Any ideas on why wavePlayer isn't changing to the PlaybackState.Stopped state at the end of the stream?
↧
New Post: Wasapi Loopback Recording -> Heavy noise/distortion
Hey all,
I'm trying to write a simple stereo-mix substitute in C#, but can't seem to get it working even remotely.
public void StartRecording()
{
    recthread = new Thread(() =>
    {
        var cap = new WasapiLoopbackCapture();
    startlabel:
        var title = _helper.TrackTitle;
        var artist = _helper.Artist;
        if (title == "" || artist == "" || _helper.isPlayingAds)
        {
            Thread.Sleep(10);
            goto startlabel;
        }
        var writer = new WaveFileWriter(artist + " - " + SanitizeFilename(title, '-') + ".wav",
            cap.WaveFormat);
        cap.DataAvailable += (sender, args) =>
        {
            writer.Write(args.Buffer, 0, args.BytesRecorded);
        };
        cap.StartRecording();
        uint currentpos = _helper.CurrentTime;
        while (_helper.Artist == artist && _helper.TrackTitle == title)
        {
            Thread.Sleep(5);
            while (currentpos == _helper.CurrentTime)
                Thread.Sleep(1);
        }
        cap.StopRecording();
        writer.Flush();
        writer.Close();
        goto startlabel;
    });
    recthread.Start();
}
As you can see, nothing special going on there, but still I can't manage to get out what I hear when I captured the sound. Any idea?
Shouldn't matter at all, but just to mention it: I'm running it in Parallels on a MacBook (i7, good enough for this easy task normally).
Thanks already for your ideas and @mark for this awesome lib :)
EDIT:
I tried it with different WaveFormat options. If I use what I posted here, it basically works, but "stutters".
↧
New Post: Wasapi Loopback Recording -> Heavy noise/distortion
You do need to get the right WaveFormat - using the one from capture should be OK, although WASAPI annoyingly likes to use WAVEFORMATEXTENSIBLE, so often I turn that into the equivalent PCM or IEEE Float WAVEFORMAT.
You might simplify things by removing the creation of another thread.
I'd also look at the WAV file you make in an editor like Audacity and see if there is anything visually obviously wrong with the waveform. That can often provide clues.
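A sketch of the format conversion described above, in case it helps (it assumes the loopback mix format really is IEEE float, which is the common case; check capture.WaveFormat.Encoding and BitsPerSample before relying on it):
var capture = new WasapiLoopbackCapture();
WaveFormat writerFormat = capture.WaveFormat;
if (writerFormat is WaveFormatExtensible)
{
    // Rebuild an equivalent plain format so the WAV header is a simple
    // IEEE float fmt chunk rather than WAVEFORMATEXTENSIBLE.
    writerFormat = WaveFormat.CreateIeeeFloatWaveFormat(
        capture.WaveFormat.SampleRate, capture.WaveFormat.Channels);
}
var writer = new WaveFileWriter("loopback.wav", writerFormat);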
↧
New Post: Where to discuss NAudio 1.7-alpha
What IWavePlayer implementation are you using?
The idea is that FileWaveStream's Read method should return 0 when it reaches the end. Then the wavePlayer knows that it can stop playing. So you could start by writing a simple test that keeps calling Read on your FileWaveStream (e.g. into a buffer of 1024) and ensure that it eventually returns 0.
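Something along these lines (a throwaway sketch; fileNameToPlay is the same path used in the snippet above):
// Keep reading until the reader reports end of stream. If this loop never
// exits, the problem is in the reader; if it does, look at the IWavePlayer.
using (var reader = new MediaFoundationReader(fileNameToPlay))
{
    var buffer = new byte[1024];
    long totalRead = 0;
    int read;
    while ((read = reader.Read(buffer, 0, buffer.Length)) > 0)
    {
        totalRead += read;
    }
    Console.WriteLine("Read {0} bytes before Read returned 0", totalRead);
}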
↧
New Post: Wasapi Loopback Recording -> Heavy noise/distortion
Hello Mark,
thanks for your fast answer.
As you suggested, I tried it with WaveFormatExtensible now, which works partially.
Partially in the sense that it sometimes works perfectly, but sometimes I have the "stutter" effect again. I already increased all timers and the application itself isn't even using 1% CPU according to Task Manager, so that shouldn't be the problem.
The WaveFileWriter writes directly to the HDD, so it's not buffered, right?
I have an SSD in the MacBook, but could that be the problem?
thanks again
streppel
EDIT:
I'm stupid... subscribing to the same event multiple times isn't a good idea; the handler will fire too often and the result will be wrong values.
I think this fixed it :)
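For anyone hitting the same thing, a rough sketch of that fix: create the capture device and subscribe to DataAvailable exactly once, and swap the writer the handler uses per track (currentWriter and fileName are invented names; the track-change polling and proper synchronisation are omitted):
WaveFileWriter currentWriter = null;
var cap = new WasapiLoopbackCapture();
cap.DataAvailable += (sender, args) =>
{
    var w = currentWriter;             // single subscription; writer swapped per track
    if (w != null) w.Write(args.Buffer, 0, args.BytesRecorded);
};
cap.StartRecording();

// per track:
currentWriter = new WaveFileWriter(fileName, cap.WaveFormat);
// ... wait for the track to finish ...
var finished = currentWriter;
currentWriter = null;
finished.Dispose();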
↧
New Post: Record wav from single channel source
Hi,
First of all, thank you for making such an easy to use audio library. As a rookie in programming, NAudio helps me a lot in my project.
I need to record a mono signal from the Line-in jack; the source is only connected to the tip (left channel) of the 3.5mm plug, while the ring (right channel) is unused. I use the following code, written in Visual Basic Express 2010, to record and save the signal to a wave file.
Public WithEvents WaveIn As NAudio.Wave.WaveIn
Public WaveWriter As NAudio.Wave.WaveFileWriter

Private Sub ButtonRecord_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles ButtonRecord.Click
    Dim NoOfChannel As Integer = 2
    WaveIn = New NAudio.Wave.WaveIn
    WaveIn.DeviceNumber = 0
    WaveIn.WaveFormat = New NAudio.Wave.WaveFormat(96000, 8, NoOfChannel)
    WaveWriter = New NAudio.Wave.WaveFileWriter("Output.wav", WaveIn.WaveFormat)
    WaveIn.StartRecording()
End Sub

Private Sub ButtonStop_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles ButtonStop.Click
    WaveIn.StopRecording()
    WaveIn.Dispose()
    WaveWriter.Close()
End Sub

Private Sub WaveIn_DataAvailable(ByVal sender As System.Object, ByVal e As NAudio.Wave.WaveInEventArgs) Handles WaveIn.DataAvailable
    WaveWriter.Write(e.Buffer, 0, e.BytesRecorded)
End Sub
After setting the record volume to a suitable level with the Windows mixer, if I record in stereo format, NoOfChannel=2, I can get maximum amplitude from the left channel; each sample byte can swing all the way between 0 and 255, while each right channel sample byte stays at around 128. Since I only need the left channel, writing in stereo format doubles the file size, which wastes a lot of disk space.
If I record in mono format, NoOfChannel=1, the recorded amplitude is reduced by half; each sample byte can only swing between 64 and 192 approximately. I get the same result when using other audio recording programs such as Audacity. It seems that when recording in mono format, the computer tries to average the left and right signals, and since there is no signal applied to the right channel, the recorded amplitude is reduced by half. Even if I turn up the record volume, the recorded wave file shows clipping at levels 64 and 192.
I want to maintain maximum amplitude swing, but keep the filesize small. Is it possible to record the left channel signal only and write into mono format wave file?
Thanks!
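In case it helps, a sketch (in C# rather than VB for brevity) of one way to do that: take every other byte of the 8-bit stereo buffer in DataAvailable so only the left channel reaches a mono WaveFileWriter. It assumes recording stays at 8-bit stereo and that monoWriter is a hypothetical writer created with a 96000 Hz, 8-bit, 1-channel WaveFormat:
void OnDataAvailable(object sender, WaveInEventArgs e)
{
    // 8-bit stereo frames are two bytes: left sample then right sample,
    // so copying every other byte keeps only the left channel.
    var mono = new byte[e.BytesRecorded / 2];
    for (int i = 0, j = 0; i < e.BytesRecorded; i += 2, j++)
    {
        mono[j] = e.Buffer[i];
    }
    monoWriter.Write(mono, 0, mono.Length); // monoWriter: hypothetical mono WaveFileWriter
}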
↧
New Post: 10 band Equalizer
My calculations are working without causing any sound gaps, pops, or crackles, but I cannot hear the changes.
16 Hz through 500 Hz at +5 dB with the high frequencies at -5 dB should sound like the music is being muffled.
1000 Hz - 16000 Hz at +5 dB with the low frequencies at -5 dB should sound tinny.
FilterSampleProvider:
public class FilterSampleProvider : ISampleProvider
{
    private readonly BiQuadFilter[] _sampleFilters;
    private readonly WaveFormat _waveFormat;
    private readonly ISampleProvider _sampleProvider;

    public WaveFormat WaveFormat { get { return _waveFormat; } }

    public FilterSampleProvider(ISampleProvider source, BiQuadFilter[] filters)
    {
        _sampleFilters = filters;
        _sampleProvider = source;
        _waveFormat = _sampleProvider.WaveFormat;
    }

    public int Read(float[] buffer, int offset, int sampleCount)
    {
        var read = _sampleProvider.Read(buffer, offset, sampleCount);
        var filterBuffer = new float[buffer.Length];
        //cascade the filters
        for (var j = _sampleFilters.Length; j-- > 0; )
        {
            if (j == _sampleFilters.Length - 1)
            {
                filterBuffer = _sampleFilters[j].ProcessSample(buffer, offset, sampleCount);
                continue;
            }
            filterBuffer = _sampleFilters[j].ProcessSample(filterBuffer, offset, sampleCount);
        }
        buffer = filterBuffer;
        return read;
    }
}
I pre-calculate the coefficients when the dbGain or Q changes.
BiQuadFilter.ProcessSample:
public float[] ProcessSample(float[] buffer, int offset, int sampleCount)
{
    //skip processing if gain is 0 or we are recalculating due to a Gain or Q change
    if (_dbGain != 0f && !_calcInProgress)
    {
        //if Gain or Q changes during this process, ReCalculate will wait. Prevents odd results.
        _transformInProgress = true;
        var current = new float[buffer.Length];
        for (int i = 0; i < sampleCount; i += 2)
        {
            if (_counter == _blockAlign)
            {
                _counter = 0;
                Reset();
            }
            //Left channel
            var left = (_biQuadLeft.A0 * buffer[offset + i] + _biQuadLeft.A1 * _biQuadLeft.X1 + _biQuadLeft.A2
                * _biQuadLeft.X2 - _biQuadLeft.A3 * _biQuadLeft.Y1 - _biQuadLeft.A4 * _biQuadLeft.Y1);
            _biQuadLeft.X2 = _biQuadLeft.X1;
            _biQuadLeft.X1 = buffer[offset + i];
            _biQuadLeft.Y2 = _biQuadLeft.Y1;
            _biQuadLeft.Y1 = double.IsNaN(left) ? 0 : left;
            current[offset + i] = (float)_biQuadLeft.Y1;
            //Right channel
            var right = (_biQuadRight.A0 * buffer[offset + i + 1] + _biQuadRight.A1 * _biQuadRight.X1 + _biQuadRight.A2
                * _biQuadRight.X2 - _biQuadRight.A3 * _biQuadRight.Y1 - _biQuadRight.A4 * _biQuadRight.Y1);
            _biQuadRight.X2 = _biQuadRight.X1;
            _biQuadRight.X1 = buffer[offset + i + 1];
            _biQuadRight.Y2 = _biQuadRight.Y1;
            _biQuadRight.Y1 = double.IsNaN(right) ? 0 : right;
            current[offset + i + 1] = (float)_biQuadRight.Y1;
            _counter++;
        }
        _transformInProgress = false;
        return current;
    }
    return buffer;
}

private void Reset()
{
    _biQuadLeft.X1 = 0;
    _biQuadLeft.X2 = 0;
    _biQuadLeft.Y1 = 0;
    _biQuadLeft.Y2 = 0;
    _biQuadRight.X1 = 0;
    _biQuadRight.X2 = 0;
    _biQuadRight.Y1 = 0;
    _biQuadRight.Y2 = 0;
}
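As a point of comparison, a sketch of a per-sample approach that writes the filtered audio back into the caller's buffer, assuming the BiQuadFilter that ships in NAudio.Dsp (with its PeakingEQ factory and Transform method) is available in your NAudio version; the band frequencies, Q value and class name are placeholders:
using NAudio.Dsp;
using NAudio.Wave;

public class EqualizerSampleProvider : ISampleProvider
{
    private readonly ISampleProvider source;
    private readonly BiQuadFilter[][] filters; // [channel][band]

    public EqualizerSampleProvider(ISampleProvider source, float[] bandFrequencies, float[] bandGainsDb)
    {
        this.source = source;
        int channels = source.WaveFormat.Channels;
        filters = new BiQuadFilter[channels][];
        for (int ch = 0; ch < channels; ch++)
        {
            filters[ch] = new BiQuadFilter[bandFrequencies.Length];
            for (int band = 0; band < bandFrequencies.Length; band++)
            {
                filters[ch][band] = BiQuadFilter.PeakingEQ(
                    source.WaveFormat.SampleRate, bandFrequencies[band], 0.8f, bandGainsDb[band]);
            }
        }
    }

    public WaveFormat WaveFormat { get { return source.WaveFormat; } }

    public int Read(float[] buffer, int offset, int count)
    {
        int read = source.Read(buffer, offset, count);
        int channels = WaveFormat.Channels;
        for (int n = 0; n < read; n++)
        {
            int ch = n % channels;              // interleaved samples: one filter chain per channel
            float sample = buffer[offset + n];
            foreach (var filter in filters[ch]) // cascade all bands
            {
                sample = filter.Transform(sample);
            }
            buffer[offset + n] = sample;        // write back in place so callers hear the result
        }
        return read;
    }
}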
↧
Updated Wiki: Home
NAudio Overview
NAudio is an open source .NET audio and MIDI library, containing dozens of useful audio related classes intended to speed development of audio related utilities in .NET. It has been in development since 2002 and has grown to include a wide variety of features. While some parts of the library are relatively new and incomplete, the more mature features have undergone extensive testing and can be quickly used to add audio capabilities to an existing .NET application. NAudio can be quickly added to your .NET application using NuGet.
NAudio demo project showing an MP3 file playing:
NAudio WPF Project showing waveform visualisation and spectrum analyser:
Latest News
For the latest news and more documentation on NAudio, visit Mark Heath's blog.
- 26 Oct 2012 NAudio 1.6 Released. Read the release notes
- 9 Sep 2012 ASIO Recording Support added
- 19 Dec 2011 NAudio 1.5 Released. Read the release notes
- 20 Apr 2011 NAudio 1.4 Released. Read the release notes
- 15 Apr 2011 NAudio demo now shows how to select output devices (for WaveOut, DirectSound, WASAPI and ASIO), and can play 8 bit, 16 bit, 24 bit, and 32 bit float WAV files. Fixed a longstanding ASIO issue.
- 7 Nov 2010 Major improvements to Mp3FileReader
- 10 Oct 2009 Version 1.3 Released. Read the release notes
- 20 Sep 2009 We are getting close to feature complete for 1.3. Last chance to get in any feedback on the API
- 26 Aug 2009 WPF Waveform drawing demo project including FFT added
- 28 Feb 2009 Lots of new stuff is being added and planned, so do check out the Source Code tab to have a sneak peek at what's coming in 1.3
- 26 June 2008 Version 1.2 Released. Read the release notes
NAudio Features
- Play back audio using a variety of APIs (see the playback sketch after this list)
- WaveOut
- DirectSound
- ASIO
- WASAPI (Windows Vista and above)
- Decompress audio from different Wave Formats
- MP3 decode using ACM or DMO codec
- AIFF
- G.711 mu-law and a-law
- ADPCM
- G.722
- Speex (using NSpeex)
- SF2 files
- Decode using any ACM codec installed on your computer
- Record audio using WaveIn, WASAPI or ASIO
- Read and Write standard .WAV files
- Mix and manipulate audio streams using a 32 bit floating mixing engine
- Extensive support for reading and writing MIDI files
- Full MIDI event model
- Basic support for Windows Mixer APIs
- A collection of useful Windows Forms Controls
- Some basic audio effects, including a compressor
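To give a flavour of the API for the playback scenario listed above, a minimal sketch (not taken from the demo project; test.mp3 is a placeholder path):
using System;
using NAudio.Wave;

class Program
{
    static void Main()
    {
        // Decode the MP3 and send it to the default WaveOut device.
        using (var reader = new Mp3FileReader("test.mp3"))
        using (var waveOut = new WaveOut())
        {
            waveOut.Init(reader);
            waveOut.Play();
            Console.WriteLine("Playing - press Enter to stop");
            Console.ReadLine();
        }
    }
}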
Projects Using NAudio
NAudio currently is used to support a number of audio related utilities, some of which may be moved to be hosted on CodePlex in the future. If you have used NAudio for a project, please get in touch so we can post it here.
- Skype Voice Changer - Modify your voice with audio effects while talking on Skype
- .NET Voice Recorder - Record your voice, save to MP3, and visualise the waveform using WPF. Now includes autotune
- MIDI File Mapper - Utility for mapping MIDI drum files for use on other samplers
- MIDI File Splitter - Split MIDI files up at their markers
- SharpMod - managed port of MikMod, can play mod files in both WinForms and Silverlight
- NVorbis - Fully managed Vorbis decoder, with support for NAudio
- Practice# - Windows tool for practicing playing an instrument with playback music. Includes FLAC playback support and an equaliser for NAudio.
- WPF Sound Visualization Library - beautiful waveform and spectrum analyzer code written for WPF, comes with NAudio sample
- Teachey Teach - utility to help English language conversation teachers generate feedback for students
- Sound Mill - an audio player, list organizer and automation manager
- Bravura Studio - a modular, extensible application and platform for creating and experimenting with music and audio.
- SIPSorcery - .NET softphone framework
- Squiggle - A free open source LAN Messenger
- Helix 3D toolkit
- airphone-tv - A revival of axStream to implement control through the iPhone
- JamNet - a Silverlight drum sample player
- Jingle Jim - Jingle Software (German language)
- All My Music
- iSpy - Open Source Camera Security Software
- RadioTuna - Online internet radio player
- Fire Talk New - chat program
- AVR Audio Guard - utility to fix a HDMI related issue
More Info
For more information, please visit the NAudio Documentation Wiki
Donate
NAudio is a free open source project that is developed in personal time. You can show your appreciation for NAudio and support future development by donating.
↧
New Post: Converting RTP Packets into wav format and writing to a wav file dynamically
Finally I got the stereo format for MuLaw working. Now I am able to extract Rx and Tx payloads from RTP packets and write them into a WAV file in stereo mode, which plays fine in Media Player. I realized that some codecs that we support are DMO based, some are ACM and some are Media Foundation based. I am playing around with your code and other options to figure out if I can support all the codecs that we have. I will keep you posted as I go along on my conversion journey. Once again thanks a lot for your help, and I can definitely say that without your NAudio library I would not have reached this far.
Regards
Saleh
↧