Replace Part of Pixel Buffer with White Pixels in iOS
Here is an easy way to manipulate a CVPixelBufferRef without pulling in other frameworks like Core Graphics or OpenGL:
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    const int kBytesPerPixel = 4;

    CVPixelBufferLockBaseAddress(pixelBuffer, 0);
    int bufferWidth = (int)CVPixelBufferGetWidth(pixelBuffer);
    int bufferHeight = (int)CVPixelBufferGetHeight(pixelBuffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer);
    uint8_t *baseAddress = CVPixelBufferGetBaseAddress(pixelBuffer);

    for (int row = 0; row < bufferHeight; row++)
    {
        uint8_t *pixel = baseAddress + row * bytesPerRow;
        for (int column = 0; column < bufferWidth; column++)
        {
            if ((row < 100) && (column < 100))
            {
                pixel[0] = 255; // BGRA: blue
                pixel[1] = 255; // green
                pixel[2] = 255; // red
            }
            pixel += kBytesPerPixel;
        }
    }
    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);

    // Do whatever needs to be done with the pixel buffer
}
This overwrites the top-left 100 x 100 pixel patch of the image with white pixels. I found this approach in Apple's RosyWriter sample code.
Kind of amazed I didn't get any answers here considering how easy this turned out to be. Hope this helps someone.
Update: here is a Swift implementation. Note that this version fills the whole frame rather than a 100 x 100 patch.
CVPixelBufferLockBaseAddress(pixelBuffer, CVPixelBufferLockFlags(rawValue: 0))
let bufferWidth = CVPixelBufferGetWidth(pixelBuffer)
let bufferHeight = CVPixelBufferGetHeight(pixelBuffer)
let bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer)
guard let baseAddress = CVPixelBufferGetBaseAddress(pixelBuffer) else {
    return
}

for row in 0..<bufferHeight {
    var pixel = baseAddress + row * bytesPerRow
    for _ in 0..<bufferWidth {
        // BGRA layout: byte 0 = blue, 1 = green, 2 = red, 3 = alpha
        let blue = pixel
        blue.storeBytes(of: 255, as: UInt8.self)
        let green = pixel + 1
        green.storeBytes(of: 255, as: UInt8.self)
        let red = pixel + 2
        red.storeBytes(of: 255, as: UInt8.self)
        let alpha = pixel + 3
        alpha.storeBytes(of: 255, as: UInt8.self)
        pixel += 4
    }
}
CVPixelBufferUnlockBaseAddress(pixelBuffer, CVPixelBufferLockFlags(rawValue: 0))
Since baseAddress gives you an UnsafeMutableRawPointer, which does not support subscripting, you have to use storeBytes instead. That is basically the only key difference from the Objective-C version above.
I had to process frames from the iPhone camera using captureOutput and CVPixelBuffer. I used your code (thanks!) to loop over about 200k pixels in the pixel buffer at 15 frames per second, but I constantly had dropped frames. It turned out that, in my tests, a Swift while loop was about 10x faster than a for ... in loop.
Like:
0.09 sec:

for row in 0..<bufferHeight {
    for col in 0..<bufferWidth {
        // process pixel (row, col)
    }
}

0.01 sec:

var x = 0
var y = 0
while y < bufferHeight
{
    x = 0
    while x < bufferWidth
    {
        // process pixel (x, y)
        x += 1
    }
    y += 1
}