Optimizing Video and Audio Output Buffer Handling in iOS Apps for Smooth Recording Experience
Based on the code and issue description provided, here is an updated version of the captureOutput:didOutputSampleBuffer:fromConnection: delegate method, with improvements for handling both video and audio output buffers efficiently. Note that this delegate callback receives an AVCaptureOutput as its first parameter (not an AVCaptureSession), which is what lets the method distinguish audio buffers from video buffers.
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
    if (!CMSampleBufferDataIsReady(sampleBuffer)) {
        NSLog(@"Sample buffer is not ready. Skipping sample.");
        return;
    }
    lastSampleTime = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
    if (isRecording) {
        switch (videoWriter.status) {
            case AVAssetWriterStatusUnknown:
                NSLog(@"First-time execution: starting the writer session");
                [videoWriter startWriting];
                [videoWriter startSessionAtSourceTime:lastSampleTime];
                // Break if the writer did not reach the writing state;
                // otherwise fall through so this first buffer is appended too.
                if (videoWriter.status != AVAssetWriterStatusWriting) {
                    break;
                }
                // Intentional fall-through.
            case AVAssetWriterStatusWriting:
                if (captureOutput == audioOutput) {
                    NSLog(@"Audio buffer captured");
                    if (![audioWriterInput isReadyForMoreMediaData]) {
                        break;
                    }
                    @try {
                        if (![audioWriterInput appendSampleBuffer:sampleBuffer]) {
                            NSLog(@"Audio writing error: %@", videoWriter.error);
                        } else {
                            // Optional throttle: briefly pause this capture callback queue to
                            // give the writer more time to drain. The sleep must run on the
                            // delegate queue itself; dispatching it to the main queue would
                            // freeze the UI without slowing the capture pipeline at all.
                            [NSThread sleepForTimeInterval:0.03];
                        }
                    }
                    @catch (NSException *e) {
                        NSLog(@"Audio exception: %@", [e reason]);
                    }
                } else if (captureOutput == videoOutput) {
                    NSLog(@"Video buffer captured");
                    if (![videoWriterInput isReadyForMoreMediaData]) {
                        NSLog(@"Video writer input not ready for more data; skipping frame");
                        break;
                    }
                    @try {
                        CVImageBufferRef buffer = CMSampleBufferGetImageBuffer(sampleBuffer);
                        // Use the buffer's own timestamp; CMTimeMake(frame, fps) would also
                        // work for a fixed frame rate.
                        CMTime frameTime = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
                        if (buffer) {
                            if (![adaptor appendPixelBuffer:buffer withPresentationTime:frameTime]) {
                                NSLog(@"Failed to append pixel buffer");
                            } else {
                                // Optional throttle, as for audio: on the delegate queue only.
                                [NSThread sleepForTimeInterval:0.03];
                            }
                        }
                        frame++;
                    }
                    @catch (NSException *e) {
                        NSLog(@"Video exception: %@", [e reason]);
                    }
                }
                break;
            case AVAssetWriterStatusCompleted:
                return;
            case AVAssetWriterStatusFailed:
                NSLog(@"Critical error writing queues: %@", videoWriter.error);
                // Optionally set a failure flag here (e.g. writer_failed / _broadcastError)
                // so the UI can react.
                return;
            case AVAssetWriterStatusCancelled:
                break;
            default:
                break;
        }
    }
}
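For context, both data outputs should deliver their buffers to the same serial dispatch queue, so that the pointer comparisons against audioOutput and videoOutput above happen on a single, predictable thread. Below is a minimal sketch of that wiring, assuming session, videoOutput, and audioOutput are the instance variables referenced in the question; the queue label is illustrative:

// Create one serial queue for both capture callbacks (label is an example).
dispatch_queue_t captureQueue = dispatch_queue_create("com.example.captureQueue", DISPATCH_QUEUE_SERIAL);

videoOutput = [[AVCaptureVideoDataOutput alloc] init];
// Drop late frames rather than queueing them, so the delegate never falls far behind.
videoOutput.alwaysDiscardsLateVideoFrames = YES;
[videoOutput setSampleBufferDelegate:self queue:captureQueue];

audioOutput = [[AVCaptureAudioDataOutput alloc] init];
[audioOutput setSampleBufferDelegate:self queue:captureQueue];

if ([session canAddOutput:videoOutput]) [session addOutput:videoOutput];
if ([session canAddOutput:audioOutput]) [session addOutput:audioOutput];

Sharing one serial queue also means the optional sleep above throttles audio and video delivery together, which keeps their presentation timestamps roughly interleaved.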
In the updated code:
- The optional throttling delay runs directly on the capture delegate queue rather than being dispatched to the main queue, where it would have frozen the UI without slowing the capture pipeline.
- You can adjust the delay value, or remove the delay entirely, based on your specific requirements and performance constraints; a sketch for monitoring dropped frames follows below.
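Because the sleep back-pressures the camera, it is worth verifying that the throttle is not causing excessive frame drops. Here is a sketch using the standard didDropSampleBuffer: callback from AVCaptureVideoDataOutputSampleBufferDelegate; the droppedFrames counter is an assumed instance variable added for illustration:

- (void)captureOutput:(AVCaptureOutput *)output didDropSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
    // Count and log dropped frames; kCMSampleBufferAttachmentKey_DroppedFrameReason
    // explains why the capture pipeline discarded this buffer.
    droppedFrames++;
    CFTypeRef reason = CMGetAttachment(sampleBuffer, kCMSampleBufferAttachmentKey_DroppedFrameReason, NULL);
    NSLog(@"Dropped frame #%ld, reason: %@", (long)droppedFrames, reason);
}

If the drop count climbs steadily while recording, reduce or remove the delay.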
Additionally, consider the following general suggestions to optimize video and audio output buffer handling:
- Use AVAssetWriterInputPixelBufferAdaptor instead of appending directly to AVAssetWriterInput if you need to handle pixel buffers efficiently (see the sketch after this list).
- Implement a custom buffer queue, or use an existing one, to manage the processing of both video and audio buffers concurrently.
- Monitor the performance of your application and adjust the delay values as needed to maintain optimal output quality.
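For the first suggestion, here is a minimal sketch of creating the adaptor and videoWriterInput used in the updated method; the 1280x720 dimensions and BGRA pixel format are placeholder assumptions to adjust for your configuration:

NSDictionary *videoSettings = @{
    AVVideoCodecKey  : AVVideoCodecTypeH264,
    AVVideoWidthKey  : @1280,
    AVVideoHeightKey : @720
};
videoWriterInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo outputSettings:videoSettings];
// Real-time mode tells the writer not to hold buffers longer than necessary.
videoWriterInput.expectsMediaDataInRealTime = YES;

NSDictionary *pixelBufferAttributes = @{
    (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA)
};
adaptor = [AVAssetWriterInputPixelBufferAdaptor
    assetWriterInputPixelBufferAdaptorWithAssetWriterInput:videoWriterInput
    sourcePixelBufferAttributes:pixelBufferAttributes];

The adaptor also maintains a pixel buffer pool, which avoids repeated buffer allocation when frames arrive at capture rate.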
By implementing these suggestions, you should be able to optimize the handling of both video and audio output buffers in your captureOutput
method, leading to a smoother and more efficient recording experience.
Last modified on 2023-06-06