
Updated Wiki: Create an ID3v2 Tag


You can create simple ID3v2 tags in NAudio by constructing a dictionary of frame IDs (four-character strings) and values. Here’s a simple example:

Dictionary<string, string> tags = new Dictionary<string, string>
{
 { "TIT2", "Song title" },
 { "TPE1", "Artist name" },
 { "TYER", "2012" }, /* year of the song */
 { "COMM", "Extra comment info" },
 { "TCON", "(110)Satire" } /* genre */ 
};
var tag = Id3v2Tag.Create(tags);

You can then write this to your MP3 stream (usually a FileStream) using the RawData property:

output.Write(tag.RawData, 0, tag.RawData.Length);
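
Putting the two snippets together, a minimal sketch of tagging an existing MP3 file might look like this (the file names are hypothetical, and the source file is assumed not to already carry a tag):

// a minimal sketch: write the tag, then append the existing MP3 audio data
// (assumes the usual using directives for System.IO and NAudio)
using (var input = File.OpenRead("input.mp3"))
using (var output = File.Create("output.mp3"))
{
    var tag = Id3v2Tag.Create(tags);                   // tags is the dictionary shown above
    output.Write(tag.RawData, 0, tag.RawData.Length);  // the ID3v2 tag goes at the start of the file
    input.CopyTo(output);                              // followed by the MP3 frames
}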

Updated Wiki: Documentation


The NAudio Documentation Wiki

NAudio FAQ

What is NAudio?

NAudio is an open source audio API for .NET written in C# by Mark Heath, with contributions from many other developers. It is intended to provide a comprehensive set of useful utility classes from which you can construct your own audio application.

Why NAudio?

NAudio was created because the Framework Class Library that shipped with .NET 1.0 had no support for playing audio. The System.Media namespace introduced in .NET 2.0 provided a small amount of support, and the MediaElement in WPF and Silverlight took that a bit further. The vision behind NAudio is to provide a comprehensive set of audio related classes allowing easy development of utilities that play or record audio, or manipulate audio files in some way.

Can I Use NAudio in my Project?

NAudio is licensed under the Microsoft Public License (Ms-PL) which means that you can use it in whatever project you like, including commercial projects. Of course, we would love it if you shared any bug-fixes or enhancements you make to the original NAudio project files.

Is .NET Performance Good Enough for Audio?

While .NET cannot compete with unmanaged languages for very low latency audio work, it still performs better than many people would expect. On a fairly modest PC, you can quite easily mix multiple WAV files together, pass them through various effects and codecs, and play back glitch-free with a latency of around 100ms.

How can I get help?

There are three main ways to get help. If you have a specific question concerning how to use NAudio, then I recommend that you ask on StackOverflow and tag your question with naudio. This gives you the best chance of getting a quick answer. You can also ask a question on the NAudio discussion forums here on CodePlex. I attempt to answer all questions, but since this is a spare time project, occasionally I get behind. Finally, I am occasionally able to offer paid support for situations where you need quick advice, bugfixes or new features. Use the contact feature of the Codeplex website to get in touch with Mark Heath if you wish to pursue this option.

How do I submit a patch?

I welcome contributions to NAudio and have accepted many patches, but if you want your code to be included, please familiarise yourself with the following guidelines:

  • Your submission must be your own work, and able to be released under the MS-PL license.
  • You will need to make sure your code conforms to the layout and naming conventions used elsewhere in NAudio.
  • Remember that there are many existing users of NAudio. A patch that changes the public interface is not likely to be accepted.
  • Try to write "clean code" - avoid long functions and long classes. Try to add a new feature by creating a new class rather than putting loads of extra code inside an existing one.
  • I don't usually accept contributions I can't test, so please write unit tests (using NUnit) if at all possible. If not, give a clear explanation of how your feature can be unit tested and provide test data if appropriate. Tell me what you did to test it yourself, including what operating systems and soundcards you used.
  • If you are adding a new feature, please consider writing a short tutorial on how to use it.
  • Unless your patch is a small bugfix, I will code review it and give you feedback. You will need to be willing to make the recommended changes before it can be integrated into the main code.
  • The easiest way to provide a patch is to create your own fork on Mercurial and issue a pull request. See this video if you are new to Mercurial.
  • Please also bear in mind that when you add a feature to NAudio, that feature will generate future support requests and bug reports. Are you willing to stick around on the forums and help out people using it?

How do I...?

The best way to learn how to use NAudio is to download the source code and look at the two demo applications - NAudioDemo and NAudioWpfDemo. These demonstrate several of the key capabilities of the NAudio framework. They also have the advantage of being kept up to date, whilst some of the tutorials you will find on the internet refer to old versions of NAudio.

Getting Started with NAudio – Downloading and Compiling

  1. Download a copy of the NAudio source code (a pre-compiled version is also available, but the newest code always appears in source form first).
    http://naudio.codeplex.com/SourceControl/list/changesets
  2. The default project is set to the NAudio class library. Class libraries don’t have anything to show when you press F5. If this is your first time with NAudio, set the start-up project to NAudioDemo and then hit F5. The NAudioDemo project shows the base functionality you can use from your own project.
  3. In the bin directory of the built solution you will find a copy of the NAudio library to reference from your own project. Make sure you grab a copy of the NAudio.XML file as well if you're copying it over to your own project's directory; that way you will have the IntelliSense documentation available in Visual Studio when working with the NAudio API.

Additional Tutorials from OpenSebJ's blog (n.b. these are for NAudio 1.3):


New Post: Resample problem


What kind of exception? The constructor overload of MixingSampleProvider that takes a WaveFormat is very simple. Why should it throw any exception?

If I use:

WaveFormat waveFormat = WaveFormat.CreateIeeeFloatWaveFormat(44100, 2);

Then the problem I mentioned earlier occurs - I cannot resample the input streams to that IEEE format.

New Post: Resample problem


Maybe I added that in a later version. All sample providers must be IEEE float; they will not work with PCM audio.

New Post: MP3 Encoding feature


If I understand this discussion correctly, the LAME library is not free to use?
I thought LAME was free!
Can somebody please explain the LAME licensing situation clearly?

New Post: MP3 Encoding feature


LAME is free, but it is not .NET, so it can't be part of NAudio. You can easily call LAME.exe from .NET, though, to convert a WAV file to MP3.
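
For instance, a rough sketch of shelling out to lame.exe from C# (assuming lame.exe is on the PATH; the file names and the -V2 quality preset are just illustrative):

using System.Diagnostics;

// invoke lame.exe to encode a WAV file as MP3
var startInfo = new ProcessStartInfo
{
    FileName = "lame.exe",
    Arguments = "-V2 \"input.wav\" \"output.mp3\"",  // VBR quality preset
    UseShellExecute = false,
    CreateNoWindow = true
};
using (var lame = Process.Start(startInfo))
{
    lame.WaitForExit();  // block until encoding finishes
}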

New Post: MP3 Encoding feature


LAME is free, but numerous patent holders claim it infringes their intellectual property when compiled. The authors of LAME have claimed that it can be distributed in source form as the source is just an educational description of the technologies. That doesn't seem like a watertight argument to me, but the patent owners obviously aren't willing to risk that a court might be convinced by it as they have not initiated legal proceedings.

The LAME team recommends that anyone compiling their software and distributing it in binary form as a part of another product first obtain licenses for the patents. The asking price for those licenses appears to be $2.50/unit, which is obviously a big problem for free software.

And, since it's not .NET, including it in a .NET project would require distributing it in binary form.

New Post: MP3 Encoding feature


Yes, LAME.exe will never be included with NAudio.


New Post: MP3 Encoding feature

markheath wrote:

NAudio uses the MP3 decoder that comes with Windows, which MS has already paid the license fee for. WMA Encoding is possible (I've done it once for a commercial project), but the API is ridiculously overcomplicated. Fully managed ogg encoding would be a nice idea.

Yeah, not just overcomplicated, but the programmer's guide seems deficient in the amount of useful guidance.

Xiph's reference implementation of a Vorbis encoder is a little intimidating, but it's only a few thousand lines of C :P

New Post: Latency in .net 4.5

I'm about to start doing some tests to see what latency I can get with NAudio and how much I need to worry about the garbage collector. Microsoft seem to have made some massive improvements to the GC since .NET 3.5. Has anybody else looked into this much yet?

New Post: Latency in .net 4.5


There are a few issues to consider with latency and NAudio.

First is simply the performance of the NAudio code. I have tried to write it with performance in mind, but it has never been through a round of serious performance tuning (too many other priorities unfortunately), so there is undoubtedly scope for improvement. One big annoyance with .NET is that there is no way to cast from a byte[] to a float[] or short[] without using either unsafe code or doing a manual buffer copy. Obviously the latter is slower, but the former results in code that can't be run in a sandboxed environment. There is a mixture of techniques in the existing NAudio codebase.
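
To illustrate the two options (just a sketch, not the exact code NAudio uses):

byte[] buffer = new byte[4096];   // raw bytes containing IEEE float samples

// option 1: managed copy - safe code, but pays for an extra allocation and copy
float[] samples = new float[buffer.Length / 4];
Buffer.BlockCopy(buffer, 0, samples, 0, buffer.Length);

// option 2: unsafe reinterpretation - no copy, but requires /unsafe and full trust
unsafe
{
    fixed (byte* bytePtr = buffer)
    {
        float* floatPtr = (float*)bytePtr;
        float firstSample = floatPtr[0];   // read the samples in place
    }
}

(NAudio's WaveBuffer class offers a third trick, overlapping a byte[] and a float[] via an explicit struct layout.)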

Second is the wide variation in performance between the various audio APIs. With WaveOut you will be lucky to go below 50ms, but WasapiOut and AsioOut can often go lower.

Third is the way latency is measured. For the purposes of playback with two buffers (one being filled while the other is played), this is often reported as the size of one buffer. But there can be complications, such as WaveOut with more than two buffers, or DirectSoundOut which uses a circular buffer technique.
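
As a back-of-the-envelope illustration (the 100ms buffer here is just a hypothetical figure):

int sampleRate = 44100;
int channels = 2;
int bytesPerSample = 2;            // 16-bit PCM
int bufferMilliseconds = 100;
int bufferBytes = sampleRate * channels * bytesPerSample * bufferMilliseconds / 1000;
// bufferBytes = 17640; the "latency" is often quoted as just this one buffer (100ms),
// even though two or more such buffers may be queued at any time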

Finally, there is garbage collection itself. The trouble with .NET garbage collection is that when it runs it stops everything, on every thread. The ideal scenario is for the garbage collector not to run while you are playing audio, and I have tried to write the code so as not to create any new objects during playback (buffers are reused etc). However, it is likely that if you have an application with a GUI, the garbage collector will be triggered. If it takes too long, you can be left without enough time to fill the next buffer, or even miss a buffer. Hopefully MS have improved the GC enough for it to not cause noticeable glitches, but I suspect at the sort of low latencies required in applications like DAWs (lower than 20ms), you would get the occasional glitch.

Anyway, will be interested to hear your findings.

New Post: Resample problem

Now I'm able to do the mixing like this:
WaveFormat waveFormat = new WaveFormat(44100, 2);
WaveFileReader reader = new WaveFileReader(fileName);
WaveFormatConversionStream convertStream = new WaveFormatConversionStream(waveFormat, reader);
Pcm16BitToSampleProvider sampleProvider = new Pcm16BitToSampleProvider(convertStream);
mixer.AddMixerInput(sampleProvider);

But how can I play/save the mixer stream? The problem is that it is 16-bit PCM at this stage and I need to convert it to an IWaveProvider.

 

New Post: Resample problem


If it is a sample provider then it is IEEE float. That is what Pcm16BitToSampleProvider does.

New Post: Resample problem


The mixer itself is 16-bit PCM. Pcm16BitToSampleProvider needs an IWaveProvider as its source, and the mixer is an ISampleProvider. If I convert it using SampleToWaveProvider it tells me that it must be floating point...

New Post: Resample problem


The WaveFormat of your sampleProvider class should be IEEE. Please do a sampleProvider.WaveFormat.ToString() and tell me what you've got.

You can go back from ISampleProvider to IWaveProvider using SampleToWaveProvider in order to play the audio. (n.b. in the very latest code there is SampleToWaveProvider16, which will also put it back to 16 bit PCM).
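
Something like this (a sketch; the WaveOut choice and file name are just for illustration):

// from the mixer (ISampleProvider) back to something playable
IWaveProvider playable = new SampleToWaveProvider(mixer);    // stays IEEE float
// or, in the very latest code: new SampleToWaveProvider16(mixer) for 16-bit PCM

var waveOut = new WaveOut();
waveOut.Init(playable);
waveOut.Play();

// to save instead of play (the mixer's inputs must eventually end):
// WaveFileWriter.CreateWaveFile("mix.wav", playable);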


New Post: Resample problem


sampleProvider.WaveFormat.ToString() - "IeeeFloat"

mixer.WaveFormat.ToString() - "16 bit PCM: 44kHz 2 channels"

I need to play the mixer, which is 16-bit PCM...

New Post: Resample problem


If you are using MixingSampleProvider you must pass an IEEE float format into its constructor. The bug that lets you pass in PCM was fixed last month. It simply will not work with PCM, even though the version you are using lets you pass in the wrong WaveFormat.
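
In other words, construction should look something like this (a sketch; the file name is hypothetical):

// the mixer itself must be IEEE float
var mixerFormat = WaveFormat.CreateIeeeFloatWaveFormat(44100, 2);
var mixer = new MixingSampleProvider(mixerFormat);

// convert each PCM input to 44.1kHz stereo, then to IEEE float, before adding it
var reader = new WaveFileReader("input.wav");
var pcmStream = new WaveFormatConversionStream(new WaveFormat(44100, 2), reader);
mixer.AddMixerInput(new Pcm16BitToSampleProvider(pcmStream));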

New Post: Resample problem


But when I pass this waveFormat instance

WaveFormat waveFormat = WaveFormat.CreateIeeeFloatWaveFormat(44100, 2);

the following line throws an exception:

WaveFormatConversionStream convertStream = new WaveFormatConversionStream(waveFormat, reader);

"AcmNotPossible calling acmStreamOpen". That's the issue that I'm trying to resolve.

New Post: Latency in .net 4.5


Mark,


Thanks for the detailed reply. My requirement is to get an end-to-end latency of about 100ms using VoIP over a LAN. By my calculations that would be just about possible with a buffer size of 20ms, using ASIO (my initial testing in NAudio would use WaveIn, so I expect higher latency).
Read buffer 20ms.
Network latency 1ms
Jitter buffer 20ms
Play buffer 20ms

I've been reading about garbage collection in .NET 4 and they have implemented background garbage collection. I've never really thought about garbage collection much, but my understanding is that a thread will only pause if it needs a garbage collection or more memory while the GC is collecting generation 2 objects. With background garbage collection, the GC will yield very quickly so that the thread doesn't pause for long. I possibly haven't explained that particularly well, but this page makes for interesting reading: http://msdn.microsoft.com/en-us/library/ee787088(v=vs.110).aspx.
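
For what it's worth, the GC latency mode can also be tweaked from code (a sketch, assuming .NET 4.5's SustainedLowLatency mode; it doesn't disable collections, it just makes blocking full collections less likely):

using System.Runtime;

GCSettings.LatencyMode = GCLatencyMode.SustainedLowLatency;
try
{
    // run the latency-critical audio streaming here
}
finally
{
    GCSettings.LatencyMode = GCLatencyMode.Interactive;   // restore the default
}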

In my app, the audio would run as a service, but I will have garbage collection overheads for rtp, sip and a wcf interface to the service. Will let you know how I get on; failure would mean writing a solution in unmanaged c++ or Delphi, neither of which appeals to me!


Dave

New Post: Ima Adpcm Error during Compression


Hi,

I am using the following WaveFormat for recording an audio file.

waveIn = New WaveIn()
waveIn.WaveFormat = New WaveFormat(8000, 1)
inputformat = waveIn.WaveFormat

Now I need to encode/compress this recorded audio file into IMA ADPCM format. I have also seen the discussion about this issue in your forum, where you said:

"Simply add a short in there. Extra size means how many extra bytes there are on top of the original WaveFormat structure. You need a variable to hold that data (a short because you have two extra bytes). And you should also set the contents of that extra data according to the specification of the specific WaveFormatTag you are using"

private class MyWaveFormat : WaveFormat
{
    short extraData;

    public MyWaveFormat(WaveFormatEncoding WaveFormatTag, short BitsPerSample, short Channels, int AverageBytesPerSecond, short BlockAlign, int SampleRate, short ExtraSize)
    {
        base.extraSize = ExtraSize;
        base.averageBytesPerSecond = AverageBytesPerSecond;
        base.bitsPerSample = BitsPerSample;
        base.blockAlign = BlockAlign;
        base.channels = Channels;
        base.sampleRate = SampleRate;
        base.waveFormatTag = WaveFormatTag;
    }
}

But I don't understand how to set the contents of "extraData" corresponding to the extra size.

I also tried to set the extra data using the following methods, but I am still getting the same error, "AcmNotPossible calling acmStreamOpen":

ptr = WaveFormat.MarshalToPtr(inputformat)
TargetFormat = WaveFormat.MarshalFromPtr(ptr)

Can you please guide me on this problem as early as possible?

Regards,

Chandan k.s
