Hi,
I think we talked about this some years back.
But when I am recording with WASAPI, even with a 1000 ms buffer, the capture will skip if the system is under heavy load.
There are two solutions for this. One is making sure it doesn't skip (not sure if that can even be done?).
The other is pushing out silent data when audio is skipped, to make up for the lost data and keep the timeline correct.
So while getting rid of the skipping would be great if possible, the most important thing is the timeline.
I know you can do this by syncing to either the CPU QPC clock or the audio QPC clock.
And by "sync" I mean push silent data, or remove data, in order to keep the same time.
With the audio QPC it will most likely only ever be pushing silent data, since that clock runs in step with the actual data anyway.
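To make the "sync" idea concrete, here is a minimal sketch of the drift measurement (in Python, purely to show the arithmetic; the actual code is C#, and all names here are mine, not from the code below). The only assumption is the documented behaviour of `IAudioCaptureClient::GetBuffer`: its QPC position is reported in 100 ns units, so 10,000 ticks per millisecond.

```python
QPC_TICKS_PER_MS = 10_000  # GetBuffer reports QPC position in 100 ns units

def drift_ms(first_qpc: int, current_qpc: int,
             frames_captured: int, sample_rate: int) -> float:
    """Wall-clock time elapsed minus audio time actually captured.
    Positive: capture fell behind the clock (push silence).
    Negative: capture ran ahead (remove data)."""
    elapsed_ms = (current_qpc - first_qpc) / QPC_TICKS_PER_MS
    captured_ms = frames_captured * 1000 / sample_rate
    return elapsed_ms - captured_ms
```

For example, at 48 kHz, one second of QPC time (10,000,000 ticks) against only 43,200 captured frames (900 ms of audio) means the capture is 100 ms behind.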
The problem with the audio QPC in my case is that one of the devices is a USB device, and USB isn't handled purely at hardware level; it's serviced by CPU interrupts. So even the audio QPC can be affected by heavy load, though it's the data that gets affected first; the QPC can survive a bit longer.
The CPU QPC is ideal in that sense, as it's unaffected by load; the problem is that one then needs to both skip and push.
That would actually be great in my case, as it would sync the audio during recording (sync to the PC clock, which I need to do anyway).
Though one loses a bit of data that way, compared to resampling afterwards (not the topic here).
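For the "push" half, the silent block just has to be sized from the measured drift. A hedged sketch of that sizing (again Python for illustration only; zero-filled bytes are digital silence for integer PCM):

```python
def silence_block(drift_in_ms: int, sample_rate: int,
                  bytes_per_frame: int) -> bytes:
    """Zero-filled PCM to insert when capture has fallen behind the clock."""
    frames = drift_in_ms * sample_rate // 1000
    return bytes(frames * bytes_per_frame)  # bytes(n) is zero-initialized
```

At 48 kHz with 4 bytes per frame (16-bit stereo) that works out to 192 bytes per millisecond, which is where the `192 * difference` in the commented-out code below comes from.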
So I wonder what can be done?
I have played around a lot with the idea, and while I have gotten it to somewhat work at times, it's never perfect and always has issues.
Here is some of the code I played around with; it was quite a while ago, so I'm not sure what works, as it's all commented out now.
But you can probably grasp the idea of what I am trying to do, more or less (it's a mess).
long DevicePosition, TimeQPC, LastQPC = 0, CurrentQPC = 0;
int framesAvailable, CurrentSize = 0;
public long getStartQPC() => LastQPC;
public void setStartQPC(long d) => LastQPC = d;
int recordsize = 0;
int MSDiff = 0;
bool first = false;
int silence = 0;

private void ReadNextPacket(AudioCaptureClient capture)
{
    int packetSize = capture.GetNextPacketSize();
    int recordBufferOffset = 0;
    //Debug.WriteLine(string.Format("packet size: {0} samples", packetSize / 4));
    while (packetSize != 0)
    {
        AudioClientBufferFlags flags;
        IntPtr buffer = capture.GetBuffer(out framesAvailable, out flags, out DevicePosition, out TimeQPC);
        int bytesAvailable = framesAvailable * bytesPerFrame;

        // First packet: remember its QPC timestamp as the reference point and skip it.
        if (!first)
        {
            first = true;
            LastQPC = TimeQPC;
            capture.ReleaseBuffer(framesAvailable);
            packetSize = capture.GetNextPacketSize();
            continue;
        }

        // Experimental QPC/device-position reconciliation (commented out):
        //recordsize += bytesAvailable + (recordBufferOffset);
        ////CurrentSize += framesAvailable;
        //CurrentQPC = (TimeQPC - LastQPC);
        //var QPCTime = TimeSpan.FromMilliseconds((CurrentQPC / 10000));
        //var DeviceTime = TimeSpan.FromMilliseconds(DevicePosition / (framesAvailable / 10));
        //var Time = TimeSpan.FromMilliseconds(CurrentSize / (framesAvailable / 10));
        //Debug.WriteLine("QPC: " + QPCTime + " - DevicePosition: " + DeviceTime + " - CurrentSize: " + Time);
        //var difference = (long)(QPCTime.TotalMilliseconds - DeviceTime.TotalMilliseconds);
        //if (difference < -10)
        //{
        //    // Capture is ahead of the clock: drop this packet.
        //    Debug.WriteLine("Time: " + QPCTime);
        //    Debug.WriteLine("Skipped 10ms" + " - Total: " + MSDiff / -1 + "ms");
        //    recordsize -= recordBuffer.Length;
        //    recordBufferOffset += bytesAvailable;
        //    capture.ReleaseBuffer(framesAvailable);
        //    packetSize = capture.GetNextPacketSize();
        //    CurrentSize -= framesAvailable;
        //    continue;
        //}
        //else if (difference > 10)
        //{
        //    // Capture fell behind the clock: push a block of silence.
        //    recordsize += bytesAvailable + (recordBufferOffset);
        //    Debug.WriteLine("Add 10ms" + " - Total: " + MSDiff + "ms");
        //    MSDiff += 10;
        //    byte[] b = new byte[192 * (int)difference];
        //    silence += b.Length / 192;
        //    CurrentSize = ((int)Time.TotalMilliseconds + (int)difference) * (framesAvailable / 10);
        //    Debug.WriteLine("Silence: " + silence);
        //    DataAvailable(this, new WaveInEventArgs(b, b.Length));
        //    QPCTime = TimeSpan.FromMilliseconds((CurrentQPC / 10000));
        //    Time = TimeSpan.FromMilliseconds(CurrentSize / (framesAvailable / 10));
        //    difference = (long)(QPCTime.TotalMilliseconds - Time.TotalMilliseconds);
        //    Debug.WriteLine("QPC: " + QPCTime + " - DevicePosition: " + DeviceTime + " - CurrentSize: " + Time);
        //    Debug.WriteLine("Difference: " + difference);
        //    capture.ReleaseBuffer(framesAvailable);
        //    packetSize = capture.GetNextPacketSize();
        //    CurrentSize = ((int)QPCTime.TotalMilliseconds) * (framesAvailable * 10);
        //    return;
        //}

        // apparently it is sometimes possible to read more frames than we were expecting?
        // fix suggested by Michael Feld:
        int spaceRemaining = Math.Max(0, recordBuffer.Length - recordBufferOffset);
        if (spaceRemaining < bytesAvailable && recordBufferOffset > 0)
        {
            DataAvailable?.Invoke(this, new WaveInEventArgs(recordBuffer, recordBufferOffset));
            recordBufferOffset = 0;
        }

        // if not silence...
        if ((flags & AudioClientBufferFlags.Silent) != AudioClientBufferFlags.Silent)
        {
            Marshal.Copy(buffer, recordBuffer, recordBufferOffset, bytesAvailable);
        }
        else
        {
            Array.Clear(recordBuffer, recordBufferOffset, bytesAvailable);
        }
        recordBufferOffset += bytesAvailable;
        capture.ReleaseBuffer(framesAvailable);
        packetSize = capture.GetNextPacketSize();
    }

    // Flush whatever accumulated in the record buffer.
    DataAvailable?.Invoke(this, new WaveInEventArgs(recordBuffer, recordBufferOffset));
}