January 25, 2013, 9:12 pm
markheath wrote: It should be very rare that MP3s contain frames of different sample rates. It is usually a sign that someone has tried to concatenate parts of two completely different MP3 files.
I think this is not as rare as you think. VBR files are very common, such as "Sleep away.mp3" in Windows 7, and I have run into problem files like this many times.
Would you support this feature one day?
ref:
When you want to read info about an MPEG file, it is usually enough to find the first frame, read its header and assume that the other frames are the same. But this may not be always the case. Variable bitrate MPEG files
may use so called bitrate switching, which means that bitrate changes according to the content of each frame.
http://mpgedit.org/mpgedit/mpeg_format/MP3Format.html
http://en.wikipedia.org/wiki/MP3
↧
January 25, 2013, 11:54 pm
You are confusing bitrate and sample rate. NAudio has no problem playing VBR MP3s that have different bitrates in every frame. The trouble is when the sample rate or the channel count changes. If that happens, the decoder will start emitting a different PCM WAV format. If you were playing out of the soundcard, you would need to close and re-open the soundcard, or resample and convert the channel count to match the original. Otherwise the audio would speed up or slow down.
The same problem occurs if you are converting MP3 to WAV. A WAV file cannot contain sections with different PCM formats, so if different MP3 frames in the same file decode natively to different PCM sample rates or channel counts, you have to do a further conversion on the decoded audio.
I do wonder if some VBRs like the one you mention above contain just one frame at a different sample rate, and maybe NAudio could somehow detect this and skip over it.
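As a sketch of that further conversion, assuming the ACM codec on the machine can do the whole conversion in one step (otherwise you would resample and change channel count in two stages; the file names here are just placeholders):

```csharp
using NAudio.Wave;

class ForceUniformFormat
{
    static void Main()
    {
        // Decode the MP3, then force one fixed PCM format so the output
        // WAV file has a single consistent format throughout.
        var target = new WaveFormat(44100, 16, 2); // 44.1kHz, 16 bit, stereo
        using (var reader = new Mp3FileReader("input.mp3"))
        using (var converted = new WaveFormatConversionStream(target, reader))
        {
            WaveFileWriter.CreateWaveFile("output.wav", converted);
        }
    }
}
```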
↧
↧
January 26, 2013, 1:22 pm
In a mix of trying to alter code from this tutorial: https://www.youtube.com/watch?v=ZnFoVuOVrUQ
- and trying to get the peak values of a recorded wave, I ended up with this:
while (wave.Position < wave.Length)
{
    read = wave.Read(buffer, 0, 16384);
    for (int i = 0; i < read / 4; i++)
    {
        if (i < 5000)
        {
            chart1.Series["wave"].Points.Add(BitConverter.ToSingle(buffer, i * 4));
            liste.Add(chart1.Series["wave"].Points[Convert.ToInt32(wave.Position / 4096)].YValues[0]);
        }
        if (wave.Position % 16384 == 0 && i % 4096 == 0)
        {
            Console.WriteLine("Values!!: " + liste[i]);
            Console.WriteLine("values: " + chart1.Series["wave"].Points[Convert.ToInt32(BitConverter.ToDouble(buffer, i / 4))].YValues[0]);
        }
    }
    textBox1.Text = Convert.ToString(optaelPeaks);
    textBox2.Text = Convert.ToString(wp);
}
- but the output of this code doesn't really make sense in the console, as all the values lie between 0.027 and 0.089 (in a very loop-like pattern that doesn't resemble the .wav I am analyzing/loading), as opposed to the chart1, which shows values from -1.0 to 1.0.
I have also been checking up on Voicerecorder and this link: http://stackoverflow.com/questions/14350790/naudio-peak-volume-meter
- without getting further (as a Dane I have trouble with the more in-depth English math terms, like "take the log base 10 of the maximum value") ... Hope you can help me / thanks in advance / Thomas
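For what it's worth, "take the log base 10 of the maximum value" just means the standard decibel conversion of a linear peak value; a minimal sketch:

```csharp
using System;

class PeakToDecibels
{
    // dBFS = 20 * log10(peak), where peak is the largest absolute sample
    // value in the block, with samples normalized to the -1.0..1.0 range.
    public static double ToDecibels(double peak) => 20.0 * Math.Log10(peak);

    static void Main()
    {
        Console.WriteLine(ToDecibels(1.0)); // full scale: 0 dB
        Console.WriteLine(ToDecibels(0.5)); // roughly -6 dB
    }
}
```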
↧
January 26, 2013, 9:32 pm
Thank you for the reply.
I had CBR and VBR mixed up.
As you say, different frames may have different bitrates but the same sample rate.
Is it possible for them to have different sample rates too?
I have sent the strange MP3 file from the Windows sample music to your email. Please check it.
↧
January 28, 2013, 7:44 am
What format are you recording in? ToSingle assumes you are capturing as IEEE float.
↧
↧
January 28, 2013, 8:15 am
I'm reading an MP3 file and converting it to a WaveStream:
Mp3FileReader mp3 = new Mp3FileReader(open.FileName);
WaveStream pcm = WaveFormatConversionStream.CreatePcmStream(mp3);
Now I'd like to convert that stream to a double array.
I've read about WaveBuffer but have no idea how to use it for that purpose.
Can you help me out?
↧
January 28, 2013, 8:38 am
Yes, that particular file has been mentioned to me before. The first two frames are 48kHz, whilst the rest is 44.1kHz. I thought at first they might be XING or VBRI headers, but they don't appear to be. I have no idea why MS would make a file like this. I've only seen something like it on one other occasion, where a file had one frame at 48kHz and the rest at 44.1kHz. On that file it was actually album art being mistaken for a valid MP3 frame. A similar thing could be happening here, I suppose. There could be a problem with the code that tries to jump over the ID3v2 tag.
↧
January 28, 2013, 9:03 am
I've answered your question on StackOverflow. Also, unless you are using a really old NAudio, no need for the WaveFormatConversionStream. Simply reading out of the Mp3FileReader will give you PCM. It is most likely 16 bit stereo, so you'd want to split out
the channels before you do an FFT.
WaveBuffer saves you using BitConverter. Simply bind a WaveBuffer to your byte array containing PCM samples, and then you can access each sample using the ShortBuffer property. But your FFT probably wants float/double anyway, so you still need to divide
by 32,768 and put it into your separate left and right floating point arrays.
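As a self-contained sketch of that splitting (using BitConverter here so it runs standalone; binding a WaveBuffer and indexing its ShortBuffer property would replace the ToInt16 calls):

```csharp
using System;

class SplitChannels
{
    // Split an interleaved 16-bit stereo PCM byte buffer into normalized
    // left/right float arrays, dividing by 32768 as described above.
    public static (float[] Left, float[] Right) Split(byte[] pcm, int bytesRecorded)
    {
        int pairs = bytesRecorded / 4; // 2 bytes per sample * 2 channels
        var left = new float[pairs];
        var right = new float[pairs];
        for (int i = 0; i < pairs; i++)
        {
            left[i] = BitConverter.ToInt16(pcm, i * 4) / 32768f;
            right[i] = BitConverter.ToInt16(pcm, i * 4 + 2) / 32768f;
        }
        return (left, right);
    }

    static void Main()
    {
        // One sample pair: left = 0x4000 (0.5), right = 0xC000 (-0.5)
        byte[] pcm = { 0x00, 0x40, 0x00, 0xC0 };
        var (left, right) = Split(pcm, pcm.Length);
        Console.WriteLine($"{left[0]} {right[0]}"); // 0.5 -0.5
    }
}
```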
↧
January 28, 2013, 9:12 am
Thanks for the quick reply, Mark.
How can I split the channels, and why do I need to anyway?
My FFT (the one by Lomont) asks for interleaved pairs of the real and imaginary parts.
Up until now I thought the real part is the left channel and the imaginary part is the right channel.
↧
↧
January 28, 2013, 9:13 am
the left and right samples will be interleaved, so just throw every other one away if you only need to do the FFT of one channel. Audio is entirely real, so the imaginary component of the input signal will always be zero.
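A sketch of packing one channel into the interleaved real/imaginary layout such an FFT expects (assuming Lomont-style interleaving, with real parts at even indices and imaginary parts at odd indices):

```csharp
using System;

class ComplexPack
{
    // Interleave one channel's samples with zero imaginary parts:
    // data[2*i] = real, data[2*i + 1] = imaginary (always 0 for audio).
    public static double[] ToComplexInterleaved(float[] channel)
    {
        var data = new double[channel.Length * 2];
        for (int i = 0; i < channel.Length; i++)
            data[2 * i] = channel[i]; // odd indices stay 0.0
        return data;
    }

    static void Main()
    {
        var packed = ToComplexInterleaved(new float[] { 0.5f, -0.25f });
        Console.WriteLine(string.Join(", ", packed)); // 0.5, 0, -0.25, 0
    }
}
```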
↧
January 28, 2013, 9:20 am
Ok, very well :)
Is there an easy way to check whether the FFT worked correctly?
Before applying the FFT I split the audio material into chunks of 4096 bytes (to keep some kind of time domain).
I created a sine wave at 100 Hz and, after applying the FFT, looked in the double array for the high magnitude, but I couldn't find it. To find the correct entry in the array I calculated the frequency band of every array entry (44100/4096 = 10.7 Hz) and looked in the corresponding part of the array where I expected to find the high magnitude at 100 Hz.
I'm explaining this because I hope you see my error.
↧
January 28, 2013, 12:09 pm
FYI, I looked at the file in question... The first frame starts 528 bytes after the "official" end of the ID3v2 tag (when seeking from the start using the tag's length field). If the decoder just skips the tag and doesn't properly validate sync headers,
it will sync with some spurious data that "looks like" a Stereo Layer I frame @ 384kbps and 48kHz.
That said, if the sync is checked against the following, the spurious data will be properly skipped:
- Frame Sync (11 set bits)
- MPEG Version != reserved (01)
- Layer != reserved (00)
- Bitrate != invalid (1111)
- Sample Rate != reserved (11)
- Channel Mode Extension == 00 - or - Channel Mode == Joint Stereo
I believe the channel mode extension check is what is lacking...
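That checklist might look roughly like this as code (a sketch over the four header bytes only; a real decoder would also verify where the next frame lands):

```csharp
using System;

class Mp3HeaderCheck
{
    // Validate the first 4 bytes of a candidate MP3 frame header
    // against the checks listed above.
    public static bool LooksLikeFrameHeader(byte b0, byte b1, byte b2, byte b3)
    {
        if (b0 != 0xFF || (b1 & 0xE0) != 0xE0) return false; // 11-bit frame sync
        int version = (b1 >> 3) & 0x03;
        if (version == 0x01) return false;                   // reserved version
        int layer = (b1 >> 1) & 0x03;
        if (layer == 0x00) return false;                     // reserved layer
        int bitrate = (b2 >> 4) & 0x0F;
        if (bitrate == 0x0F) return false;                   // invalid bitrate
        int sampleRate = (b2 >> 2) & 0x03;
        if (sampleRate == 0x03) return false;                // reserved sample rate
        int channelMode = (b3 >> 6) & 0x03;                  // 01 = joint stereo
        int modeExtension = (b3 >> 4) & 0x03;
        if (modeExtension != 0x00 && channelMode != 0x01) return false;
        return true;
    }

    static void Main()
    {
        // 0xFF 0xFB 0x90 0x00: MPEG-1 Layer III, 128kbps, 44.1kHz, stereo
        Console.WriteLine(LooksLikeFrameHeader(0xFF, 0xFB, 0x90, 0x00)); // True
        Console.WriteLine(LooksLikeFrameHeader(0x00, 0x00, 0x00, 0x00)); // False
    }
}
```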
↧
January 28, 2013, 12:32 pm
thanks ioctlLR, I'll revisit the code, I know I check a few of those things, but maybe one is missing.
↧
↧
January 28, 2013, 1:59 pm
Hello. I am creating a project for a DSP guitar effect, and I have a problem getting MIC input into my effectStream. I capture data from the MIC input with WaveIn and put it into a BufferedWaveProvider, but I can't get the input data to pass through the effectStream.
Please help me with some sample code to pass the data through the effectStream.
public void Start()
{
    Stop();
    waveOuts = new WaveOut();
    sourceStream = new WaveIn();
    sourceStream.BufferMilliseconds = 20;
    sourceStream.DeviceNumber = 0;
    sourceStream.WaveFormat = new WaveFormat(44100, 16, 2);
    sourceStream.DataAvailable += new EventHandler<WaveInEventArgs>(sourceStream_DataAvailable);
    waveBuffer = new BufferedWaveProvider(sourceStream.WaveFormat);
    waveBuffer.DiscardOnBufferOverflow = true;
    sourceStream.StartRecording();
    //effectStream = new EffectStream(effects, waveBuffer);
    waveOuts.Init(waveBuffer);
    waveOuts.Play();
}

private void sourceStream_DataAvailable(object sender, WaveInEventArgs args)
{
    waveBuffer.AddSamples(args.Buffer, 0, args.BytesRecorded);
}
How can I get the data to pass through effectStream to my output? Please help me. Thank you so much.
PS. Sorry, my English is poor and I am a beginner in C#.
↧
January 29, 2013, 3:39 am
4096 is the number of bytes, not the number of samples. To work out the bin sizes, N is the number of samples (i.e. complex numbers).
↧
January 29, 2013, 3:43 am
I'm sorry, I don't quite get what you are trying to say. Can you explain it in a different way?
↧
January 29, 2013, 3:45 am
say you record 4096 bytes of audio. If it is 16 bit that means you have 2048 samples. If it is stereo, you have 1024 sample pairs (and you should only pass one channel in to the FFT). So your bin resolution is not 44100/4096 but 44100/1024.
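Worked through as a sketch (assuming 16-bit stereo at 44.1kHz, as above):

```csharp
using System;

class BinResolution
{
    // Given raw byte count, sample width, and channel count, compute the
    // FFT bin width in Hz (sample rate divided by samples per channel).
    public static double BinWidth(int sampleRate, int bytesRead, int bytesPerSample, int channels)
    {
        int fftLength = bytesRead / bytesPerSample / channels;
        return (double)sampleRate / fftLength;
    }

    static void Main()
    {
        // 4096 bytes of 16-bit stereo = 1024 samples per channel
        double binWidth = BinWidth(44100, 4096, 2, 2);       // ~43.07 Hz per bin
        int binFor100Hz = (int)Math.Round(100.0 / binWidth); // 100 Hz lands in bin 2
        Console.WriteLine($"{binWidth:F2} Hz per bin; 100 Hz -> bin {binFor100Hz}");
    }
}
```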
↧
↧
January 29, 2013, 3:53 am
more robust validation of MP3 Frame
↧
January 29, 2013, 3:55 am
brilliant, the channel mode check fixes the "Sleep Away" MP3, and I've checked the fix in. Unfortunately it does not fix the other two example files I have. I wonder if there is a problem with my jumping over the ID3v2 tag not properly calculating its length.
↧
January 29, 2013, 4:35 am
Has anyone got examples/tips on how to play MIDI files using NAudio?
All I have found is sending note signals to a MIDI synthesizer:
midiOut.Send(MidiMessage.ChangePatch(47, 0).RawData);
But looping through each note over 10+ tracks is a mess!
Also, another question: I have lots of wav files sorted into instruments and notes. How would I use my own wav files to play the MIDI? Or sf2 files? (my own synthesizer?)
↧