Channel: NAudio

New Post: New Member and some questions

KarinAnne: Good news. Visual Studio 2017 Community edition is free. I've been using the Community edition since 2015. Unless you are in a team environment, it has everything you should need. Go for it. :-)

New Post: MOTU question (on windows 10)

Hi all,
Does anyone here have MOTU experience? I have a customer who bought a MOTU 16A with 16 output channels, but only one output channel (Output 1/2) shows up in the list of playback devices in the Windows Control Panel > Sound applet. Does anyone know how he can get it to expose all 16 outputs there?

New Post: New Member and some questions

Thanks, Theseus: I saw that availability. My XP machine has VS2008 on it and I don't want to get too confused by adding another version of VS to it. I loaded the Community edition on my Win7 laptop just to play with it a bit but haven't transitioned over to it. I've got a massive hobby project on the XP machine with VS2008 and am used to its eccentricities. So for now I've kind of answered my own question related to this thread and am OK to go. Karin

New Post: Using ASIO

I'm trying to implement UDP streaming audio in both directions between server and client. I have a previous implementation that uses ASIO drivers as a COM object, based on the work of Rob Philpot, and it is being used at a remote-controlled ham radio site. It works pretty well on the remote side: there is no GUI there, so nothing gets in the way of the COM object. But the local side is the main user interface, with a fairly complicated GUI, and when it updates I get glitches in the audio streaming. I've been trying to figure out on my own how to get around the eccentricities of COM objects, and when I came across NAudio I decided to give it a try.
I've got most of my original ASIO code converted over to NAudio, with a few exceptions.
I'm trying to figure out whether I can just pump audio into NAudio's ASIO implementation using AsioOut, or whether I need to first instantiate a WaveIn device as part of the signal chain. I'm referencing the two ASIO modules and the chat module in the NAudio demo, but I'm getting a disconnect in my own mind as to how simultaneous ASIO record and playback would properly work. Any takers? Regards, Karin
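For what it's worth, NAudio's AsioOut can handle record and playback on the same driver, so a separate WaveIn device shouldn't be needed; input arrives through the AudioAvailable event. A minimal sketch, assuming a 48 kHz, 2-channel device (rate, channel counts and driver index are placeholders):

    using NAudio.Wave;

    // Playback side: whatever is queued into this buffer goes to the ASIO outputs.
    var playBuffer = new BufferedWaveProvider(new WaveFormat(48000, 16, 2));

    var asioOut = new AsioOut(0); // first installed ASIO driver; select yours by index or name
    asioOut.AudioAvailable += (s, e) =>
    {
        // Capture side: driver input arrives here on every ASIO buffer swap.
        float[] samples = e.GetAsInterleavedSamples();
        // ... encode and send over UDP, etc. ...
    };

    // Record 2 channels; the playback format comes from playBuffer.
    asioOut.InitRecordAndPlayback(playBuffer, 2, 48000);
    asioOut.Play(); // starts capture and playback together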

New Post: How to play multiple sounds in quick succession?

Hello,

I’m completely new to NAudio so please bear with me…

I'm trying to play many short (about 500 ms) sounds (WAV files) in quick succession, with little to no time between them and no overlap.


Let us say, for example, I have 5 different sounds, each exactly 400 ms long. Ideally I would like to play them all in a row, getting a 2000 ms long output.

I accomplished that using the OffsetSampleProvider class, where I simply give each sound a fitting DelayBy value and then call the Play functions of all the corresponding IWavePlayers simultaneously. This way I get very good results (only a small time gap between sounds, about 5 ms).

So far so good.

However, the situation becomes slightly more complicated:

What if I want to change a sound while the one before is already playing?

Let us say I play a row of 5 sounds like in the previous example, but while the third one is playing I want to change the fourth one.

I tried to accomplish that again using the OffsetSampleProvider class, initializing each sound individually while the previous one is playing and giving it an offset (DelayBy value) based on the CurrentTime value of the AudioFileReader of the currently playing sound.
This works in principle, but I get huge (about 30 ms) time gaps and overlaps between the sounds.

Any ideas?

Thank you.
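Not an authoritative answer, but one approach avoids per-sound players entirely: a single output device reading from a custom queue-backed ISampleProvider. Because the Read loop moves straight to the next queued sound inside one driver callback, the transition is sample-accurate, and a queued entry can still be replaced before it starts. A sketch, assuming all sounds share the same sample rate and channel count (QueuedSampleProvider is a made-up name):

    using System.Collections.Generic;
    using NAudio.Wave;

    class QueuedSampleProvider : ISampleProvider
    {
        private readonly Queue<ISampleProvider> queue = new Queue<ISampleProvider>();
        private ISampleProvider current;

        public WaveFormat WaveFormat { get; private set; }

        public QueuedSampleProvider(WaveFormat format) { WaveFormat = format; }

        public void Enqueue(ISampleProvider sound) { lock (queue) queue.Enqueue(sound); }

        public int Read(float[] buffer, int offset, int count)
        {
            int written = 0;
            while (written < count)
            {
                if (current == null)
                {
                    lock (queue)
                    {
                        if (queue.Count == 0) break;
                        current = queue.Dequeue();
                    }
                }
                int read = current.Read(buffer, offset + written, count - written);
                if (read == 0) current = null; // source finished; move on immediately
                written += read;
            }
            // Pad with silence so the player keeps running while the queue is empty.
            for (int i = written; i < count; i++) buffer[offset + i] = 0f;
            return count;
        }
    }

A single WaveOutEvent (or WasapiOut) is initialized on this provider once and keeps playing; swapping the fourth sound is then a matter of replacing its entry before it is dequeued (a thread-safe replace would need a small extension to the queue).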

New Post: Using ASIO

Success! I have the UDP streaming audio working in both directions. I had some cleanup work to do converting the floating-point samples from the ASIO interface into byte-oriented samples so that I could use the default encoder and decoder in NAudio. I was reading another post about the code in GetAsInterleavedSamples, and there was a comment there about having this code run as quickly as possible. In my older code, without NAudio, I had to convert the floating-point samples to shorts, then run the A-law encoder on the shorts to get the bytes to send. I want the network traffic to have the least possible overhead because of data caps on the internet ISP I'm using (another long story there). Anyway, the basic performance looks OK to me. I've got a lot of cleanup work to do so I don't throw errors when closing the program, and the usual things that occur when you haven't thought out the error-processing aspects of the code.
I still need to check whether the issues I had with the larger GUI-based program will recur with the NAudio implementation of ASIO. Karin
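For anyone following along, a sketch of the conversion described above (FloatToALaw is a made-up helper; NAudio's ALawEncoder does the per-sample encoding):

    using NAudio.Codecs;

    static byte[] FloatToALaw(float[] samples, int count)
    {
        var encoded = new byte[count];
        for (int i = 0; i < count; i++)
        {
            float f = samples[i];
            if (f > 1.0f) f = 1.0f;   // clamp before scaling to avoid wrap-around
            if (f < -1.0f) f = -1.0f;
            short s = (short)(f * short.MaxValue);
            encoded[i] = ALawEncoder.LinearToALawSample(s); // one byte per sample
        }
        return encoded;
    }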

New Post: Detection of silent phases in audio

I want to detect the percentage of silent phases in audio using NAudio in C#.
The algorithm I can think of is to look for consecutive samples having an absolute value less than a threshold amount: 0.00006 is -84.437 dB. If my approach is right, please tell me how I can do it using NAudio in C#. If my approach is wrong, help me out in tackling this problem.
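A sketch of that approach, assuming AudioFileReader (which delivers 32-bit float samples). This counts individual sub-threshold samples; extending it to require consecutive runs means tracking a run length as well. The path and default threshold are placeholders:

    using System;
    using NAudio.Wave;

    static double SilentPercentage(string path, float threshold = 0.00006f)
    {
        using (var reader = new AudioFileReader(path)) // delivers 32-bit float samples
        {
            var buffer = new float[reader.WaveFormat.SampleRate * reader.WaveFormat.Channels];
            long total = 0, silent = 0;
            int read;
            while ((read = reader.Read(buffer, 0, buffer.Length)) > 0)
            {
                for (int i = 0; i < read; i++)
                    if (Math.Abs(buffer[i]) < threshold) silent++;
                total += read;
            }
            return total == 0 ? 0 : 100.0 * silent / total;
        }
    }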

New Post: Can NAudio take two 16-bit arrays and play them as stereo?

I'm working on a project to create an electronic version of a musical instrument, and I've figured out I'm having some phase issues between some of the samples. Each sample is recorded at 44,100 Hz, 16-bit mono, and is exactly one cycle of the frequency. I need to be able to feed in two 16-bit streams, preferably from two 16-bit integer arrays, shifting one to play ahead of or behind the other so that I can get the best phase match between the two channels. The plan is to feed the output in real time to Adobe Audition and use its frequency and phase analysis tools for visualization.

For this to work and be easy, I want to play the single cycle in a loop but increase or decrease the starting point of one loop relative to what is playing in the other channel. Once I figure out the best offset to use, I'll recreate the data array to start at that point. This data is then used in an Arduino-type processor to actually reproduce the sound based on the wave data for each note.

I guess this all boils down to: can NAudio take two 16-bit arrays and feed them in separately as the left and right channels of a stereo wave output?

Can I control the starting point and duration in terms of number of samples when playing?

That way I can keep the number of samples I need for a note but start it further in than the first data point. To allow for the shift, my plan is to duplicate the wave so that it can be clipped to the right number of samples and still preserve the integrity of the wave itself.
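To the core question: yes in principle, if you interleave the two arrays yourself into 16-bit stereo frames and feed the bytes to a BufferedWaveProvider with a WaveFormat of 44,100 Hz, 16 bits, 2 channels. A sketch (InterleaveStereo is a made-up helper; the modulo wrap gives the per-loop starting-point shift described above):

    using NAudio.Wave;

    static byte[] InterleaveStereo(short[] left, short[] right, int rightShift)
    {
        int n = left.Length; // both arrays hold one cycle of equal length
        var bytes = new byte[n * 4]; // 2 channels x 2 bytes per 16-bit sample
        for (int i = 0; i < n; i++)
        {
            short l = left[i];
            short r = right[(i + rightShift + n) % n]; // shifted start, wrapping the cycle
            bytes[i * 4 + 0] = (byte)(l & 0xFF);
            bytes[i * 4 + 1] = (byte)(l >> 8);
            bytes[i * 4 + 2] = (byte)(r & 0xFF);
            bytes[i * 4 + 3] = (byte)(r >> 8);
        }
        return bytes;
    }

    // Feed the frames to a BufferedWaveProvider in a loop to audition the offset:
    // var provider = new BufferedWaveProvider(new WaveFormat(44100, 16, 2));
    // provider.AddSamples(frames, 0, frames.Length); // re-add each cycle while playing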

New Post: AsioAudioAvailableEventArgs.cs

My question is related to the subject line. In this module there is the function GetAsInterleavedSamples, and there is a note: "Better performance if you use the overload that takes an array, and reuse the same one". Is this overload already buried somewhere in the NAudio source, or do I have to write it myself? My experience with C# has been good, but I'm still on a very steep learning curve. Any advice?
Regards, Karin
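Recent NAudio sources do appear to carry a GetAsInterleavedSamples(float[] samples) overload on AsioAudioAvailableEventArgs; if your version has it, the pattern is simply to allocate once and reuse. A sketch (the buffer sizing via SamplesPerBuffer and InputBuffers is my assumption about the required length):

    float[] interleaved; // field: allocated once, reused every callback

    void OnAudioAvailable(object sender, AsioAudioAvailableEventArgs e)
    {
        int needed = e.SamplesPerBuffer * e.InputBuffers.Length; // assumed sizing
        if (interleaved == null || interleaved.Length < needed)
            interleaved = new float[needed];
        e.GetAsInterleavedSamples(interleaved); // overload that fills an existing array
        // ... process interleaved[0 .. needed) ...
    }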

New Post: EQ Bandwidth

I was wondering what 0.8f for bandwidth, used in the examples, relates to. 80% of what? How wide is a 100% bandwidth? (I assume it is a percentage.)

Also, is there a way to set the actual Q? Q and bandwidth are two different things: Q would be the shape of the bandwidth bell curve. A way, perhaps, to set it from thin to fat?

Love NAudio, using it in this project:
http://github.com/PaulKeefe/MusicPlayer
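For reference: NAudio's BiQuadFilter follows the RBJ Audio EQ Cookbook, and the peaking-EQ parameter appears to be a dimensionless Q rather than a percentage, so 0.8f would be a Q value (an assumption worth checking against your NAudio version). The cookbook also gives a bandwidth-in-octaves to Q conversion; a sketch (BandwidthToQ is a made-up helper):

    using System;
    using NAudio.Dsp;

    static float BandwidthToQ(float bandwidthOctaves, float centreFrequency, float sampleRate)
    {
        // RBJ cookbook: 1/Q = 2*sinh(ln(2)/2 * BW * w0/sin(w0)), with w0 = 2*pi*f0/Fs
        double w0 = 2 * Math.PI * centreFrequency / sampleRate;
        double invQ = 2 * Math.Sinh(Math.Log(2) / 2 * bandwidthOctaves * w0 / Math.Sin(w0));
        return (float)(1.0 / invQ);
    }

    // e.g. a 1-octave-wide +6 dB peak at 1 kHz:
    // var filter = BiQuadFilter.PeakingEQ(44100, 1000, BandwidthToQ(1.0f, 1000, 44100), 6.0f);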

New Post: DirectSoundOut seems to be wrong

DirectSoundOut::GetPosition() seems to be useless, because you need to call it from the same thread the buffer was created on; otherwise an E_NOINTERFACE exception occurs. DirectSoundOut creates a private thread internally, so there is no chance to use it.

Not easy to fix, but firing an event at the beginning and/or end of the while loop in the separate thread would be helpful. (It would give me the chance to cache the position.)

*Sebastian

New Post: clicks/pops/static when playing short tones at a high rate

We're generating brief audio tones at varying frequencies at a high rate, used for audible feedback. We used the sample SineWaveProvider class along with the WasapiOut class. On each request, we set this up, play the tone, the SineWaveProvider fills in the buffer as requested, and the sound plays at the specified frequency.

What we get, though, when playing many brief tones in sequence, are clicking/popping/static-type noises alongside the sounds.

Not sure what's causing them. Anyone else have a similar experience? Any ideas?

New Post: DirectSoundOut seems to be wrong

I fixed it by modifying the NAudio source code in DirectSoundOut.cs:
  • check that IWaveProvider Init has been called before the thread is created
  • give easy access to IDirectSoundBuffer properties from another thread
  • expose some diagnostic properties and a playback event from the originating thread
  • make sure the thread is destroyed if ThreadProc throws an exception
Do you accept pull requests?

*Sebastian

New Post: How to pause recording and play or overwrite the audio being recorded

Hello,

I am recording audio, and I want to pause the recording, rewind and play it back, then pause again and resume recording, overwriting some of the already recorded audio. How is that possible?
Currently the audio file is either in use or corrupted.
Can anyone provide a sample code snippet?
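One guess at the "in use or corrupted" error: the WAV file is being opened for playback while the WaveFileWriter still holds it, and a WAV header is only finalized when the writer is disposed. A sketch of the pause-then-review flow under that assumption (file names are placeholders; true mid-file overwriting isn't supported by WaveFileWriter, so recording separate takes and splicing them afterwards is a common workaround):

    using NAudio.Wave;

    var waveIn = new WaveInEvent();
    var writer = new WaveFileWriter("take1.wav", waveIn.WaveFormat); // placeholder path
    waveIn.DataAvailable += (s, e) => writer.Write(e.Buffer, 0, e.BytesRecorded);

    waveIn.StartRecording();
    // ... later, to pause and review:
    waveIn.StopRecording();
    // In production, dispose the writer in the RecordingStopped handler to avoid a race.
    writer.Dispose(); // finalizes the WAV header and releases the file lock

    using (var reader = new AudioFileReader("take1.wav"))
    using (var player = new WaveOutEvent())
    {
        player.Init(reader);
        player.Play();
        // ...
    }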

New Post: Getting MIDI NoteOn NoteNumber: has exited with code -1073741819 (0xc0000005) 'Access violation'

Hi! I'm just starting with NAudio. I'm trying to modify the MIDI interface demo to display different messages for each MIDI note, so in the code I only modified this:
    void midiIn_MessageReceived(object sender, MidiInMessageEventArgs e)
    {
        // Optionally ignore auto-sensing messages
        if (checkBoxFilterAutoSensing.Checked && e.MidiEvent != null &&
            e.MidiEvent.CommandCode == MidiCommandCode.AutoSensing)
        {
            return;
        }

        if (e.MidiEvent.CommandCode == MidiCommandCode.NoteOn)
        {
            // Cast to NoteOnEvent to read the note number
            NoteOnEvent note = (NoteOnEvent)e.MidiEvent;
            progressLog1.LogMessage(Color.Blue, string.Format("Time {0} Message 0x{1:X8} Event {2}",
                e.Timestamp, e.RawMessage, note.NoteNumber.ToString()));
        }
    }
This is just to get the NoteNumber, before I write the if statements or whatever comes next. But the program displays the message and then immediately exits with code -1073741819 (0xC0000005) 'Access violation'.
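One unconfirmed possibility: NAudio delivers a NoteOn with velocity 0 as a plain NoteEvent, so the hard cast to NoteOnEvent throws inside the MIDI driver callback, and an unhandled exception there can take the whole process down. MessageReceived also fires on a driver callback thread, so WinForms controls should be reached via BeginInvoke. A defensive sketch under those assumptions:

    // Assumptions (not confirmed as the cause here): a NoteOn with velocity 0
    // may arrive as a plain NoteEvent, so the hard cast can throw; and the
    // handler runs on a driver callback thread, so UI access is marshaled.
    if (e.MidiEvent != null && e.MidiEvent.CommandCode == MidiCommandCode.NoteOn)
    {
        var note = e.MidiEvent as NoteOnEvent; // null instead of a crash on velocity 0
        if (note != null)
        {
            BeginInvoke((Action)(() => progressLog1.LogMessage(Color.Blue,
                string.Format("Time {0} Message 0x{1:X8} Event {2}",
                    e.Timestamp, e.RawMessage, note.NoteNumber))));
        }
    }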

New Post: clicks/pops/static when playing short tones at a high rate

Got this working thanks to a helpful suggestion from Mark Heath. We're now doing the following and no longer have the problem of clicking/popping artifacts when playing tones at a high rate.

1) We're using the FadeInOutSampleProvider between the SineWaveProvider and the output player object. When the application requests a tone, this is used to do a 5 ms fade-in, and then a fade-out after the tone.

2) After the fade-out, we no longer call Stop() on the output player object. We just let it continue playing at zero volume and use a timer which, if no tones are requested for a period of time (1 s), then calls Stop(). If another tone is requested before this timer expires, we reset the timer, etc. This way we're not starting/stopping the player at such a high rate, which was another source of the clicking/popping artifacts.

Hope this helps someone else!
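A sketch of the chain described above (SineWaveProvider is the sample class from the original post; the 5 ms fades and 1 s idle timeout are the values mentioned there, not library defaults):

    using NAudio.Wave;
    using NAudio.Wave.SampleProviders;

    var sine = new SineWaveProvider(); // your ISampleProvider tone generator
    var fade = new FadeInOutSampleProvider(sine, true); // true = start silent
    var player = new WasapiOut();
    player.Init(fade);
    player.Play(); // keep the player running between tones

    // Per tone request:
    fade.BeginFadeIn(5);   // 5 ms fade-in
    // ... after the tone duration elapses:
    fade.BeginFadeOut(5);  // 5 ms fade-out; output continues at silence

    // A timer (reset on each request) calls player.Stop() only after ~1 s idle.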

New Post: Help on using BufferedWaveProvider

I have a UDP stream of sampled audio arriving at my application. I can process the stream and retrieve the data from the UDP packets. The audio looks like a stereo, two-channel group of bytes; it is really sampled I and Q data from a remote DSP that is sending the data to the local port. All of the network stuff works OK. My question is how to choose an appropriate WaveFormat to provide as the argument to the BufferedWaveProvider. I need to separate the two channels, re-arrange each pair of adjacent bytes into a 16-bit word, then save the separate I and Q words into separate buffers that I can pass to an FFT engine as floating-point buffers. Any help would be appreciated.
Regards, Karin
Regards, Karin
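If the payload really is interleaved 16-bit little-endian stereo with I on the left and Q on the right, then new WaveFormat(sampleRate, 16, 2) matches it for the BufferedWaveProvider, and the split into float buffers is straightforward. A sketch (DeinterleaveIQ is a made-up helper; the sample rate is whatever the DSP sends):

    using System;

    static void DeinterleaveIQ(byte[] payload, float[] i, float[] q)
    {
        int frames = payload.Length / 4; // 2 channels x 2 bytes per 16-bit sample
        for (int n = 0; n < frames; n++)
        {
            short iSample = BitConverter.ToInt16(payload, n * 4);     // left = I
            short qSample = BitConverter.ToInt16(payload, n * 4 + 2); // right = Q
            i[n] = iSample / 32768f; // scale to [-1, 1) floats for the FFT
            q[n] = qSample / 32768f;
        }
    }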

New Post: AAC or MP4 to MP3

I have an extractor that I use to download videos from YouTube, but I want to convert them into MP3s. I have a workaround to download the audio as an AAC file, but even following all of the docs I am just not able to encode to MP3 without some type of error. The error usually has to do with unauthorized access, even though the code I wrote waits for the file to download, converts it to a wave file, and then deletes the original AAC file. I am wondering what I can do to jump straight from the downloaded AAC file to MP3.

Thanks guys and gals.
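One route that skips the WAV intermediate entirely, assuming Windows 8 or later (Media Foundation ships an AAC decoder and an MP3 encoder there) and that the download stream is fully closed before reading (a likely cause of the access error). File names are placeholders:

    using NAudio.MediaFoundation;
    using NAudio.Wave;

    MediaFoundationApi.Startup();
    using (var reader = new MediaFoundationReader("downloaded.m4a")) // reads AAC/MP4 directly
    {
        // 192 kbps target bitrate
        MediaFoundationEncoder.EncodeToMp3(reader, "output.mp3", 192000);
    }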

New Post: playing a .wav file stored as an embedded resource and controlling playback volume

We're able to access a .wav file stored as an embedded resource in the project with the below code. This gets us a stream to the sound file contents (the below assumes that there exists a SoundFiles sub-directory in the project containing the .wav sound files and that the files are set to be embedded resources).
                var asm = Assembly.GetExecutingAssembly();
                var resourceStream = asm?.GetManifestResourceStream(
                    asm.GetName().Name +
                    ".SoundFiles." +
                    soundFile +
                    ".wav"
                );
Once we have the stream, we can use it to create an instance of the WaveFileReader class and then provide that instance to the output player (WasapiOut for example). To control the playback volume, we could use IWavePlayer.Volume, but...

... the comments for IWavePlayer.Volume indicate that this should not be used but that volume should be set "on your input WaveProvider instead". The problem is that WaveFileReader does not have a Volume property.

So if using WaveFileReader, there doesn't appear to be any way to control the volume "correctly". Also it appears that setting volume on the IWavePlayer actually affects the global system sound volume, which is not desirable.

An alternative is to use the AudioFileReader class. While this class has a Volume property, it does not have a way to construct it using a stream.

What we're doing now is getting the stream for the embedded resource, copying the contents out to a temp file, pointing the AudioFileReader class at the temp file, playing the sound, and deleting the temp file when done.

This seems a bit hacky and unnecessary though. I'm posting this to suggest one of the following:

1) Add a Volume property to the WaveFileReader class
2) Add the ability to construct an AudioFileReader from a stream instead of just a file path string
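In the meantime, a third option worth considering: wrap the reader in NAudio's VolumeSampleProvider, which gives per-stream volume without a temp file or player-level volume. A sketch reusing the resource stream from the code above:

    using NAudio.Wave;
    using NAudio.Wave.SampleProviders;

    // resourceStream comes from GetManifestResourceStream as shown earlier
    var reader = new WaveFileReader(resourceStream);
    var volume = new VolumeSampleProvider(reader.ToSampleProvider()) { Volume = 0.5f };
    var player = new WasapiOut();
    player.Init(volume);
    player.Play();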