Sort of, although if you look at the example code I showed earlier, the looper comes after the mixing/multiplexing sample provider. You have to do that if you want it to loop only after the longest file has played.
Mark
What's the suggested action for the players of my game? Should I follow these steps?
Hi vaughands, what OS are you running on?
I did that first, like you said, but the question then is how I will access the Position property, since the metering sample provider doesn't have one (and the mixing sample provider is created with an array of metering sample providers).
Windows XP
Hello,
I'm working on an application that is essentially a network chat system, but I need to support multiple users talking at the same time. After some trial and error, I came up with a method that seems to be working well, but I feel like there is probably a better way, and I'm running into a small issue with different sound cards.
Here's how I have it working now: When the client starts up, it creates a pool of WaveOut instances, each with their own BufferedWaveProvider. The client currently creates a pool of 10 of these. These 10 WaveOuts are all assigned to the same audio device.
When sending audio, each client tags the encoded audio data with a unique client ID and sends it to the server. The server forwards this data to each other user in the chat. The client maintains a mapping of each client ID to one of the 10 BufferedWaveProviders. When the client receives audio data, it looks at the sender's client ID, looks it up in the map, and then feeds the samples to the associated BufferedWaveProvider. This way, if two remote clients are talking at once, their audio does not get intermingled.
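In simplified form (the names here are illustrative, not from my actual code, and the pool creation and decoding happen elsewhere), the routing is essentially:

using System.Collections.Generic;
using NAudio.Wave;

class ClientAudioRouter
{
    // maps a remote client's ID to the BufferedWaveProvider assigned to it
    private readonly Dictionary<int, BufferedWaveProvider> providersByClient =
        new Dictionary<int, BufferedWaveProvider>();
    private readonly Queue<BufferedWaveProvider> freePool;

    public ClientAudioRouter(IEnumerable<BufferedWaveProvider> pool)
    {
        freePool = new Queue<BufferedWaveProvider>(pool);
    }

    public void OnAudioReceived(int clientId, byte[] decodedPcm, int length)
    {
        BufferedWaveProvider provider;
        if (!providersByClient.TryGetValue(clientId, out provider))
        {
            // first packet from this client: assign it a provider from the pool
            provider = freePool.Dequeue();
            providersByClient[clientId] = provider;
        }
        provider.AddSamples(decodedPcm, 0, length);
    }
}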
Now for the problem ... this has been working great on my development machine, but when I installed the client on another computer (a netbook) an exception was thrown when it tried to create the 10 WaveOuts. The exception happened when it tried to Init() the WaveOut. The error was that it was "already allocated." If I reduced the number of WaveOuts to 5, the problem went away. I assume this is a sound driver limitation.
So, I'm wondering if there's a better way to handle this situation where there are multiple streams from distinct sources that need to be fed to the sound card independently. If not, I think I would at least need a way to determine the maximum number of WaveOuts that can be assigned to a single audio device. I see that the GetCapabilities() method can tell me the number of channels, but it doesn't seem to return a maximum number of wave streams the card can mix.
I know I could just keep constructing WaveOuts until I get the exception, but that seems rather hacky.
Any insight or pointers would be greatly appreciated!
Hello. Sorry, my English is bad.
I need help with my player.
I have a link: "http://cs5880.userapi.com/u9002353/audios/5080e5649b40.mp3" (example)
How can I play this track in near real time, i.e. starting after roughly 100-500 ms?
private MemoryStream ms;
private WaveOut waveOut;
This is the first (download) thread:
// requires: using System.Net; using System.IO;
using (var stream = WebRequest.Create(url).GetResponse().GetResponseStream())
{
    byte[] buffer = new byte[1024 * 64];
    int read;
    while ((read = stream.Read(buffer, 0, buffer.Length)) > 0)
    {
        // append the downloaded bytes at the end of the MemoryStream,
        // then restore the read position for the playback thread
        var pos = ms.Position;
        ms.Position = ms.Length;
        ms.Write(buffer, 0, read);
        ms.Position = pos;
    }
}
And this is the second (playback) thread:
ms.Position = 0;
using (var mp3FileReader = new Mp3FileReader(ms))
using (var waveFormatConv = WaveFormatConversionStream.CreatePcmStream(mp3FileReader))
using (var blockAlignedStream = new BlockAlignReductionStream(waveFormatConv))
{
    waveOut = new WaveOut();
    waveOut.Init(blockAlignedStream);
    waveOut.Play();
    while (!taskWaveStop)
    {
        Thread.Sleep(10);
    }
}
But the ID3 tag always ends up at a random position. If the tag's position is beyond what I have buffered so far, I get a random error.
What should I do?
Look at the NAudioDemo application source code. There is an example of playing a streaming MP3 file using the AcmMp3FrameDecompressor and the BufferedWaveProvider.
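Condensed from memory (check the actual NAudioDemo source for the real error handling and buffering logic), the core pattern is:

using NAudio.Wave;

// read MP3 frames from the network stream and decompress them one at a time
IMp3FrameDecompressor decompressor = null;
BufferedWaveProvider bufferedWaveProvider = null;
var buffer = new byte[65536]; // big enough for one decompressed frame

Mp3Frame frame;
while ((frame = Mp3Frame.LoadFromStream(responseStream)) != null)
{
    if (decompressor == null)
    {
        // build the decompressor and output buffer from the first frame's format
        var waveFormat = new Mp3WaveFormat(frame.SampleRate,
            frame.ChannelMode == ChannelMode.Mono ? 1 : 2,
            frame.FrameLength, frame.BitRate);
        decompressor = new AcmMp3FrameDecompressor(waveFormat);
        bufferedWaveProvider = new BufferedWaveProvider(decompressor.OutputFormat);
        // a WaveOut initialised with bufferedWaveProvider can start playing here
    }
    int decompressed = decompressor.DecompressFrame(frame, buffer, 0);
    bufferedWaveProvider.AddSamples(buffer, 0, decompressed);
}

This works frame by frame, so it doesn't need a seekable MemoryStream.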
Mark
It would be more appropriate to mix the different streams in software and have a single WaveOut. I tend to write my own mixer for these purposes. NAudio does have a bunch of mixers, such as MixingSampleProvider, MixingWaveProvider32 and WaveMixerStream32. I've recently updated the MixingSampleProvider to have an option to always read fully, making it easier to work with this type of situation.
Basically, the idea is that you would go from your BufferedWaveProviders into 32 bit floating point audio streams. Then each input goes into the mixer and the single WaveOut device plays from the mixer. You need to make sure that the mixer is set up to never 'end' (i.e. even if there is no incoming data, it should just blank out the buffer and return it).
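As a rough sketch (assuming 16-bit mono 8kHz buffered inputs; adjust the formats to match yours):

using NAudio.Wave;
using NAudio.Wave.SampleProviders;

// one mixer and one WaveOut for the whole application
var mixer = new MixingSampleProvider(WaveFormat.CreateIeeeFloatWaveFormat(8000, 1));
mixer.ReadFully = true; // return silence-padded buffers so playback never stops

var waveOut = new WaveOut();
waveOut.Init(new SampleToWaveProvider(mixer));
waveOut.Play();

// for each remote client: convert its 16-bit buffered audio to floats and mix it in
var clientBuffer = new BufferedWaveProvider(new WaveFormat(8000, 16, 1));
mixer.AddMixerInput(new Pcm16BitToSampleProvider(clientBuffer));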
Hope this points you in the right direction
Mark
OK, use the NAudioDemo application to examine what ACM codecs you do have. XP usually comes with an MP3 decoder so I'm surprised you don't see one.
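You can also enumerate them from code if that's easier (quick sketch):

using System;
using NAudio.Wave.Compression;

// print the installed ACM codec drivers
foreach (var driver in AcmDriver.EnumerateAcmDrivers())
{
    Console.WriteLine("{0} ({1})", driver.ShortName, driver.LongName);
}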
Mark
Also, can I ask where you saw the sample code that uses WaveFormatConversionStream and BlockAlignReductionStream? This has not been necessary for a very long time, but it seems everyone is still using it. If possible I'd like to update the documentation that shows this usage.
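These days the Mp3FileReader decodes to PCM itself, so something like this should be all you need (sketch):

using NAudio.Wave;

// Mp3FileReader already outputs PCM, so it can feed WaveOut directly
using (var reader = new Mp3FileReader("input.mp3"))
using (var waveOut = new WaveOut())
{
    waveOut.Init(reader);
    waveOut.Play();
    while (waveOut.PlaybackState == PlaybackState.Playing)
    {
        System.Threading.Thread.Sleep(100);
    }
}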
thanks
Mark
Thanks Mark, I'll dig around and see if I can figure out how to do what you're suggesting.
Thanks! I'll try it.
This PC has one, and I can assure you the demo application works to play an audio file (I just tried). So I'm not sure why MY Mp3FileReader would crash on the same file. :(
Hi Mark,
I seem to have this working correctly ... at least it's mixing the streams using a MixingSampleProvider. However, the audio quality suffers whenever there is more than one stream being fed to the mixer. It's hard to describe how the quality suffers ... sort of sounds like there's some clipping or truncation of the samples when they're mixed. Almost a "stuttering" kind of sound. This only happens if I use a codec before sending the data over the wire. If I just send the raw data I get from the waveIn, then the mixing works great with no quality issues. The only codec I've tried so far is Speex, both narrow band and ultra wide band, using the classes from the Network Chat demo.
In your reply above, you mentioned that the MixingSampleProvider has an "option" to always read fully. I wasn't able to find that setting, and I'm wondering if that might be the issue. How is that option turned on?
You also mentioned that I have to make sure that the mixer is set up to never end. How is that done? (Though it seems like it might already be set up that way, since I can connect and disconnect chat clients and I never have to "restart" the mixer.)
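For what it's worth, this is roughly what I was expecting to find (the property name is my guess, so treat it as a sketch):

// guessing at the setting I was looking for
var mixer = new MixingSampleProvider(WaveFormat.CreateIeeeFloatWaveFormat(8000, 1));
mixer.ReadFully = true; // always return a full, silence-padded buffer so the mixer never 'ends'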
Thanks for your help and the great piece of software.
Is your OS 64 bit and your app running as a 64 bit process, by any chance?