Channel: NAudio
Viewing all 5831 articles
Browse latest View live

New Post: BufferedWaveProvider to Wavestream

In my project I use a WaveStream for my effects class, and now I need to bring the input from the microphone into that class. I get the microphone input with a WaveInProvider, but I don't know how to convert from a WaveInProvider to a WaveStream.

If I read from an MP3/WAV file with Mp3FileReader/WaveFileReader I manage to send the data to my class, but from a WaveInProvider I can't.

Is there a way to get microphone input into a stream (WaveStream)?

Thanks.

New Post: BufferedWaveProvider to Wavestream

There shouldn't really be a need for your effects to take a WaveStream, as the only things a WaveStream has over an IWaveProvider are Position get/set and Length get. If you really must turn an IWaveProvider into a WaveStream, it is very easy to create an adapter class: in the Position getter, return the total bytes read so far; throw an exception in the Position setter; and the Length getter can return whatever you want it to. All other members (Read and WaveFormat) can delegate down to the IWaveProvider.
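A minimal sketch of such an adapter might look like the following (the class and member names are my own; only the Position/Length behaviour follows the description above):

```csharp
using System;
using NAudio.Wave;

// Adapter that wraps an IWaveProvider (e.g. a BufferedWaveProvider fed by
// WaveIn) so it can be passed to code that expects a WaveStream.
public class WaveProviderToWaveStream : WaveStream
{
    private readonly IWaveProvider source;
    private long position; // total bytes read so far

    public WaveProviderToWaveStream(IWaveProvider source)
    {
        this.source = source;
    }

    public override WaveFormat WaveFormat
    {
        get { return source.WaveFormat; }
    }

    // Length can return whatever suits the consumer; a large value here
    // means callers never think they have reached the end of the stream.
    public override long Length
    {
        get { return long.MaxValue; }
    }

    public override long Position
    {
        get { return position; }
        set { throw new NotSupportedException("Cannot reposition a live input"); }
    }

    public override int Read(byte[] buffer, int offset, int count)
    {
        int read = source.Read(buffer, offset, count);
        position += read;
        return read;
    }
}
```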

New Post: BufferedWaveProvider to Wavestream

Now I am trying it. Thanks for the solution! :D

New Post: 10 band Equalizer

Currently working on a generic (reusable by everyone) solution.

New Comment on "Convert a MP3 to WAV"

I am using this code to convert from MP3 to WAV, but the original MP3 file is 55 minutes and 4 seconds long, while the WAV file is 54 minutes and 5 seconds. Is there a way to make the files come out the same length in time? It is critical that they be the same length for my application. Any ideas?

New Comment on "Convert a MP3 to WAV"

I forgot to say that the MP3 is from mp3skyperecorder; I am using 24-bit at 16 kHz.

New Post: How to detect insertion or removal of a USB sound card?

Is there any way to detect when a USB sound card is either inserted or removed? As a bonus, I'd also like to detect insertion/removal of a mic from the mic jack.

New Post: Naudio on Mono

I am building a C# application that plays music. I have heard about this project a lot and it seems like a good choice. I am curious whether it runs cross-platform on Mono. If so, what are the limitations? I have heard that it can't play MP3s cross-platform, but are there any workarounds? Thank you!

New Post: Custom sample provider FFT sample capture issue

The Read method of my custom sample provider only executes once per second, which does not work well for my spectrum analyzer. I based my sample provider on NotifyingSampleProvider and SampleChannel.

I can't seem to find the issue. I have tried adjusting the WasapiOut latency and the DispatcherTimer in the spectrum analyzer; neither change had any effect.

Is there something I'm missing?

I need the sample data every 32 ms.

Sample provider code:
using System;
using NAudio.Wave;
using NAudio.Wave.SampleProviders;

namespace MusicEngine.MediaFoundation
{
  public class FilterSampleProvider : ISampleProvider
  {
    public event EventHandler SampleReady;

    private readonly ISampleProvider _sampleProvider;
    private readonly WaveFormat _waveFormat;
    private readonly ISampleFilter[] _sampleFilters;
    private readonly int _channels;
    private readonly bool _captureSample;

    public WaveFormat WaveFormat { get { return _waveFormat; } }

    public float CurrentSample { get; private set; }

    public FilterSampleProvider(IWaveProvider waveProvider, ISampleFilter[] filters)
      : this(waveProvider, filters, false, false)
    {

    }

    public FilterSampleProvider(IWaveProvider waveProvider, ISampleFilter[] filters, bool forceStereo, bool captureSample)
    {
      _sampleFilters = filters;
      _captureSample = captureSample;

      _sampleProvider = SampleProviderConverters.ConvertWaveProviderIntoSampleProvider(waveProvider);

      if (_sampleProvider.WaveFormat.Channels == 1 && forceStereo)
      {
        _sampleProvider = new MonoToStereoSampleProvider(_sampleProvider);
      }

      _waveFormat = _sampleProvider.WaveFormat;
      _channels = WaveFormat.Channels;
    }

    public int Read(float[] buffer, int offset, int sampleCount)
    {
      var readCount = _sampleProvider.Read(buffer, offset, sampleCount);

      //var outputBuffer = new float[buffer.Length];

      //TODO: Filter processing. Processes peak EQ. Currently NOT working.
      //foreach (var sampleFilter in _sampleFilters)
      //{
      //  sampleFilter.TransformBuffer(buffer, outputBuffer);
      //}

      //FFT sample capture
      if (_captureSample)
      {
        for (var n = 0; n < readCount; n += _channels)
        {
          //if stereo, get average of right and left channels. If not, get the only channel.
          CurrentSample = _channels > 1 ? (buffer[offset + n] + buffer[offset + n + 1]) / 2 : buffer[offset + n];

          if (SampleReady != null)
            SampleReady(this, EventArgs.Empty);
        }
      }

      return readCount;
    }
  }
}
 
Code from playback engine:
    public async void OpenFile(string filePath, PlaybackCallback callback)
    {
      var file = await StorageFile.GetFileFromPathAsync(filePath);

      TrackInfo = new TrackInfo
      {
        MusicProperties = await file.Properties.GetMusicPropertiesAsync(),
        Thumbnail = await file.GetThumbnailAsync(ThumbnailMode.SingleItem, 500)
      };

      _sampleAggregator = new SampleAggregator(4096);

      var stream = await file.OpenAsync(FileAccessMode.Read);//  .OpenReadAsync();

      if (stream == null)
        return;

      using (stream)
      {
        //TODO: fix this!!!
        var task = Task.Factory.StartNew(() =>
          {
            _activeStream = new MediaFoundationReader(stream);
            _player = new WasapiOut(AudioClientShareMode.Shared, 200);
            Task.WaitAll(new[] { _player.Init(CreateInputStream(_activeStream)) });
          });

        Task.WaitAll(new[] { task });

        if (callback != null)
          callback(true);

        CanPlay = true;
      }
    }

    private IWaveProvider CreateInputStream(IWaveProvider fileStream)
    {
      _filterSampleProvider = new FilterSampleProvider(fileStream, _filters, true, true);

      _filterSampleProvider.SampleReady += (sender, args) => 
        _sampleAggregator.Add(_filterSampleProvider.CurrentSample);

      return new SampleToWaveProvider(_filterSampleProvider);
    }

    //Called every 32ms by the spectrum analyzer
    public float[] GetFFTBuffer()
    {
      var fftDataBuffer = new float[2048];

      _sampleAggregator.GetFFTResults(fftDataBuffer);

      return fftDataBuffer;
    }

New Post: How to detect insertion or removal of a USB sound card?

You should be able to use this as a starting point:

Detecting USB Drive Removal in a C# Program
Windows sends a WM_DEVICECHANGE message to all applications whenever a hardware change occurs, including when a flash drive (or other removable device) is inserted or removed. The WParam parameter of this message contains a code that specifies exactly which event occurred.
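A minimal WinForms sketch of listening for that message might look like this (the message and wParam values are documented Win32 constants; the class itself is illustrative):

```csharp
using System;
using System.Windows.Forms;

// Overrides WndProc to watch for WM_DEVICECHANGE notifications.
public class DeviceChangeForm : Form
{
    private const int WM_DEVICECHANGE = 0x0219;
    private const int DBT_DEVICEARRIVAL = 0x8000;        // a device was inserted
    private const int DBT_DEVICEREMOVECOMPLETE = 0x8004; // a device was removed

    protected override void WndProc(ref Message m)
    {
        if (m.Msg == WM_DEVICECHANGE)
        {
            switch (m.WParam.ToInt32())
            {
                case DBT_DEVICEARRIVAL:
                    Console.WriteLine("Device inserted - re-enumerate sound cards here");
                    break;
                case DBT_DEVICEREMOVECOMPLETE:
                    Console.WriteLine("Device removed - re-enumerate sound cards here");
                    break;
            }
        }
        base.WndProc(ref m);
    }
}
```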

New Post: Naudio on Mono

It depends on what you call cross-platform. Running your application using Mono on a Windows workstation wouldn't be a problem. However, NAudio depends heavily on P/Invoke calls into the underlying operating system (Windows), so running your application on a different operating system, e.g. Linux, will result in no audio at all.
There are parts of NAudio that will work; e.g. the WaveStream classes for reading data from WAV files don't rely on native OS calls.

I would suggest creating an abstraction for playing audio files in your application, so you can choose the appropriate audio framework based on the platform you are running on, and then use NAudio for playback on Windows.

New Post: Naudio on Mono

Thanks for the advice. So you recommend a different library for each platform?

New Post: Naudio on Mono

Probably the best idea, if you like the NAudio structure, is to create a new implementation of IWavePlayer for Mono that P/Invokes into Linux audio playback APIs (I'm not a Mono or Linux expert, so I have no idea how hard that would be). The same applies to audio format conversion: instead of WaveFormatConversionStream (which uses the Windows ACM API), you'd need something that hooks into codecs available on Linux.

New Post: Forking NAudio

Hi. To answer your first question: I use Mercurial for source control, so if you want to use TFS it will not be a "fork" in the sense that you can issue pull requests. However, nothing is stopping you from creating your own project that is a fork.

For the second question, I advise you start by reading the patch submission guidelines here

As a general rule, the submissions I accept are either bugfixes to existing NAudio components, or new components that I think will be useful to a wide range of users (the goal is not to collect every conceivable audio-related class). NAudio is very loosely coupled, so most of the time you can extend it simply by creating your own new classes that derive from existing ones, or by adding extension methods. It is rare that you would actually need to make your own custom fork of NAudio; just put your extensions in your own assembly. This is actually how I add almost all new features to NAudio: I create the components in another project, and once I am convinced they work well and might be general purpose, I move them into the core library.
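As an illustration of extending NAudio from your own assembly rather than forking, a small extension method might look like this (the method name is my own invention):

```csharp
using NAudio.Wave;

// Illustrative extension method kept in your own assembly: it wraps any
// IWaveProvider in a SampleChannel so the caller can adjust volume.
public static class WaveProviderExtensions
{
    public static ISampleProvider ToSampleProviderWithVolume(
        this IWaveProvider source, float volume)
    {
        var sampleChannel = new SampleChannel(source);
        sampleChannel.Volume = volume;
        return sampleChannel;
    }
}
```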

New Post: Converting RTP Packets into wav format and writing to a wav file dynamically

You don't have to use CreateWaveFile. Instead just open a WaveFileWriter and call the Write method whenever you receive more PCM data. You'll have to handle extracting the audio from the RTP packets yourself though.

Mark
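A sketch of the approach Mark describes, keeping a WaveFileWriter open and appending PCM as packets arrive (the class name is my own, and the RTP payload extraction is left to you):

```csharp
using System;
using NAudio.Wave;

// Incrementally writes PCM data into a WAV file as it arrives.
public class RtpWavRecorder : IDisposable
{
    private readonly WaveFileWriter writer;

    public RtpWavRecorder(string path, WaveFormat pcmFormat)
    {
        writer = new WaveFileWriter(path, pcmFormat);
    }

    // Call whenever an RTP packet's PCM payload has been extracted.
    public void OnPcmReceived(byte[] pcm, int offset, int count)
    {
        writer.Write(pcm, offset, count);
        writer.Flush(); // pushes pending data to disk between packets
    }

    public void Dispose()
    {
        writer.Dispose(); // finalizes the WAV header
    }
}
```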

New Post: Custom sample provider FFT sample capture issue

The NAudio SampleAggregator raises an event when it has received the required number of samples for an FFT (and has calculated the FFT). So just subscribe to the FftCalculated event. The NAudio WPF demo shows this in action.

Mark
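As a sketch, assuming the SampleAggregator class from the NAudio WPF demo (its PerformFFT flag, FftCalculated event, and FftEventArgs.Result member follow that demo and may differ in your copy):

```csharp
using System;

// Fragment: sampleAggregator is an instance of the WPF demo's SampleAggregator.
// e.Result is an array of NAudio.Dsp.Complex values (X/Y float fields).
sampleAggregator.PerformFFT = true;
sampleAggregator.FftCalculated += (s, e) =>
{
    foreach (var c in e.Result)
    {
        // Magnitude of each FFT bin; feed this to the spectrum analyzer.
        double magnitude = Math.Sqrt(c.X * c.X + c.Y * c.Y);
    }
};
```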

New Post: Forking NAudio

Hi, and thanks for your reply.
I didn't actually mean 'fork', but rather to add some features.
I appreciate your advice; I will add extension methods and external features as you said.

Best regards,
and thanks for the awesome work!


New Post: Mute left or Right channel volume.

I want to mute the left or right channel using NAudio. I would also like to know how to reduce the volume of just one channel (left or right).

New Post: Converting RTP Packets into wav format and writing to a wav file dynamically

Hello Mark,

Thank you for the reply. I was able to modify your code to convert a live RTP stream into WAV. I also used your CreatePcmStream method to convert the RTP packets into a PCM stream, and then dynamically wrote that PCM stream into a WAV file, updating the header every time I write new data, because in my case the requirement is to play the live WAV file as it is generated. At this point I have a couple of questions, if you can please provide some insight:

1) The WAV file that I have generated uses the mu-law WAV format, i.e. WaveFormat.CreateMuLawFormat(8000, 1). Even though the WAV file is generated properly, I get a clicking sound throughout playback. I verified that the RTP packets are received in the proper sequence and none of them are lost, so I am getting a valid RTP stream. However, something is happening during the conversion to PCM, and the WAV file plays with a clicking sound throughout. On the other hand, I have had this same audio converted to WAV by another means and it plays fine. When I compared the WAV headers, here is the difference that I saw:

The correct WAV file header:
channels: 2
sampleRate: 11025
fmtAvgBPS: 44100
fmtBlockAlign: 4
bitDepth: 16

The WAV file header generated using your code:
channels: 1
sampleRate: 8000
fmtAvgBPS: 16000
fmtBlockAlign: 2
bitDepth: 16

I believe the correct WAV file is generated in stereo mode with 2 channels, so I tried creating a new WaveFormat as follows:

WaveFormat waveformat = new WaveFormat(44100, 16, 2);

But when I do this, the stream is no longer converted to PCM; it is written directly into the WAV file, which is not valid output. So is there any way I can improve the quality of my WAV file? Is there a way to create a PCM stream using different WAV formats through your code?

2) Also, what is the best way to convert RTP into WAV? Do we always have to go via the PCM route? What would be the best route in my case?

Thank you..

Regards,
Saleh

New Post: Converting RTP Packets into wav format and writing to a wav file dynamically

Converting mu-law to PCM should not result in a change of sample rate or channel count; it simply goes from 8 bits per sample to 16.

I'm afraid I don't know about extracting audio from RTP packets, but you must definitely not simply write the whole packet into your WAV file, since it will contain additional data, which is probably the cause of your clicking noise.
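As a sketch of that conversion, NAudio's MuLawDecoder expands each 8-bit mu-law byte to a 16-bit PCM sample, so the byte count doubles while the sample rate and channel count stay the same (the helper method name here is my own):

```csharp
using NAudio.Codecs;

public static class MuLawConversion
{
    // Converts a buffer of 8-bit mu-law samples to 16-bit little-endian PCM.
    public static byte[] MuLawToPcm(byte[] muLaw, int offset, int count)
    {
        var pcm = new byte[count * 2];
        for (int i = 0; i < count; i++)
        {
            short sample = MuLawDecoder.MuLawToLinearSample(muLaw[offset + i]);
            pcm[2 * i] = (byte)(sample & 0xFF);       // low byte
            pcm[2 * i + 1] = (byte)(sample >> 8);     // high byte
        }
        return pcm;
    }
}
```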