OpenCV command line app can't access camera under macOS Mojave
It's not an ultimate solution, but I resolved it by installing any terminal application that requests access to your camera. Afterwards, your OpenCV C++ program will gain access to the FaceTime HD Camera.
For example, you can install and run ImageSnap:
brew install imagesnap
imagesnap -w 1 shot.png
Then grant camera permission in the pop-up that appears.
The problem was that the C++ program, for whatever reason, wasn't requesting camera access. I took the advice of @gerwin in the comments and gave it a try with Python. Running that program from Terminal caused Terminal to ask for camera access. Once I granted that, the C++ program was able to access the camera when run from Terminal.
As for CodeRunner, I don't know how to get it to run Python programs under a virtual environment, so I haven't been able to run a Python OpenCV program to make it ask for camera access. So at the moment I can't use CodeRunner to run a C++ program that accesses the camera.
A couple of comments here...
The error I'm seeing when trying to run OpenCV from my macOS development environment is:
OpenCV: not authorized to capture video (status 0), requesting...
OpenCV: camera failed to properly initialize!
Error opening video stream or file
Program ended with exit code: 255
I know those words originate from the OpenCV library here. My initial thought was that this was an OpenCV issue, but with a bit more testing I think it's something else: as others have noted, a macOS security/permissions issue. But here's the rub.
If I go to the Apple menu (upper left corner) --> System Preferences --> Security & Privacy, I can glean a lot of info.
Click on the Camera icon.
In my case this shows two applications that require additional permissions to access the camera: Terminal and VirtualBox (I'm not sure what happens with browsers or FaceTime). Notably, Xcode didn't make this list.
When I click over to Microphone, I see different apps listed, INCLUDING Xcode.
How does that even work? I did a whole lot of testing, including researching and modifying the Info.plist inside the Xcode application package (Finder --> Applications folder --> Xcode --> right-click, Show Package Contents; copy Info.plist, save it elsewhere, modify it via Xcode, put it back). Note: don't try this without keeping a copy of the original Info.plist. Total fail: adding the NSCameraUsageDescription key/value was a complete bust, and Xcode wouldn't open at all afterwards. Reminder: DON'T lose the original Info.plist.
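For reference, the key/value pair in question is the standard camera usage-description entry (the string below is an arbitrary example); editing the Info.plist inside Xcode's app bundle invalidates its code signature, which is likely why Xcode then refuses to launch:

```xml
<key>NSCameraUsageDescription</key>
<string>This program needs camera access to capture video.</string>
```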
This whole thing is baffling. Why does Apple allow us to access the camera via terminal but not in Xcode? What's the logic there?
I sure would like to be able to step thru code to understand frame by frame possible design issues. This just isn't fun.
So a couple of things to understand.
Yes, you can run an OpenCV project on macOS WITH your camera after the program has been successfully compiled to a Unix executable. You have to ensure permissions for Terminal are set in Security & Privacy as described above. Obviously you build the executable in your development tool (in my case Xcode), then open the executable from the project's Build/Debug folder. The app opens in the Terminal window and works just fine, as noted by SSteve.
If you really want to do some video / camera debugging, you do have the option to "pre-record" a video, then open that video in your development environment. At that point you can use the debugger. How do you guys do frame by frame analysis? This is the only way I know of that will at least partially work.
(Edit update 5/22/19...) Whoa, I just realized: you can attach the debugger to a running (Terminal) process. You can totally do frame-by-frame debugging using the camera (as long as the program compiles to a functional executable). Now this is pretty cool, and gets me to 98% functionality. To do this, start the executable in Terminal, then go to Xcode --> Debug --> Attach to Process, select the running application, add breakpoints to the source code, and debug/step along. Works well.
I start my OpenCV project with:
#include <opencv2/opencv.hpp>
using namespace cv;

// Example definition of the parser keys (an optional --video argument)
const String keys =
    "{video | | optional path to an input video file}";

int main(int argc, char** argv){
    // Parse command line arguments
    CommandLineParser parser(argc, argv, keys);
    // Create a VideoCapture object & open the input file,
    // falling back to the default camera (device 0)
    VideoCapture cap;
    if (parser.has("video")){
        cap.open(parser.get<String>("video"));
    }
    else
        cap.open(0);
    ...
It's a hack workaround, but better than nothing. (Sure wish Apple included the camera in the iOS Simulator; that would be another way to solve this, sigh.) Obviously a lot depends on where you are going with your project. Ultimately I need mine to run on an iPad: prove out on macOS, then wrap the code in Swift, etc.
For reference, I'm using macOS Mojave, 10.14.4, MacBook 2.7GHz i7
PS. The security preferences above don't show Chrome with camera access, which seems odd. I just tested the camera at this site... in Chrome, and it asks for permission and works exactly as expected. It's not clear what's going on here.
PS2. Am I the only person to file a bug report on this issue? Link included for your convenience. Thanks.