Here's the deal with the Quest 3 and Quest 3S: the Passthrough Camera API is now officially live for apps distributed on the Meta Horizon Store. Until now it was a backstage pass for developers only, accessible in an experimental phase where they could tinker with it and ship builds on SideQuest, but not in the big leagues. That finally changes with v76 of the Meta XR Core SDK, which lets apps using the API qualify for store inclusion after Meta reviews them.
So what even is Passthrough Camera Access? It's about the outward-facing cameras on the headset, which let you see the real world while you're not really in it. Until now, only the system software could touch those raw camera feeds. Developers were basically handed crayons when they needed paintbrushes: skeletal tracking data, environment meshes, and bounding boxes for furniture. Yep, furniture. They had no way to run their own computer vision models on actual camera images. Big bummer, right?
Here's the thing: if the user grants camera permission, apps get access to the forward-facing color cameras, along with metadata such as lens intrinsics and headset poses. From there, developers can run custom vision models. You could scan QR codes, track physical game boards, detect objects for enterprise use cases, or feed frames to cloud-hosted visual AI models. It's a playground limited only by the XR2 Gen 2 chipset's processing power, or by whatever cloud service an app can afford.
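To make that metadata concrete, here's a minimal sketch (plain Python with illustrative values, not the actual SDK API) of why lens intrinsics plus the headset pose matter: together they let you map a camera pixel to a ray in world space, which is the basic building block behind things like board tracking and object detection.

```python
import numpy as np

def pixel_to_world_ray(u, v, fx, fy, cx, cy, cam_to_world):
    """Unproject pixel (u, v) into a world-space ray.

    fx, fy, cx, cy: lens intrinsics (focal lengths and principal
    point, in pixels), the kind of values the frame metadata carries.
    cam_to_world: 4x4 camera pose matrix for the frame.
    """
    # Direction in camera space (simple pinhole model, z forward).
    d_cam = np.array([(u - cx) / fx, (v - cy) / fy, 1.0])
    d_cam /= np.linalg.norm(d_cam)
    # Rotate into world space; the ray starts at the camera position.
    origin = cam_to_world[:3, 3]
    direction = cam_to_world[:3, :3] @ d_cam
    return origin, direction

# Identity pose, principal point at the center of a 1280x960 frame:
# the center pixel maps straight down the camera's forward axis.
origin, direction = pixel_to_world_ray(
    640, 480, fx=500.0, fy=500.0, cx=640.0, cy=480.0,
    cam_to_world=np.eye(4))
print(direction)  # -> [0. 0. 1.]
```

Once you have that ray, intersecting it with the environment mesh or a detected plane gives you a 3D position for whatever the vision model found in the 2D image.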
Roberto Coviello, a Meta engineer, has a set of QuestCameraKit samples floating around that show what's possible. The camera streams arrive at up to 1280×960 per camera at 30 frames per second, but with roughly 40-60 milliseconds of latency. That rules out tracking anything fast-moving, and the resolution isn't enough to read very small text either.
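One common way to cope with that 40-60 ms delay is to keep a short history of timestamped headset poses and look up the pose closest to each frame's capture timestamp, rather than using the current pose. A rough sketch of the idea (plain Python, hypothetical names, not SDK code):

```python
from bisect import bisect_left

class PoseHistory:
    """Short buffer of (timestamp_ms, pose) samples for latency compensation."""

    def __init__(self, max_samples=100):
        self.times = []   # capture timestamps in ms, kept sorted
        self.poses = []
        self.max_samples = max_samples

    def push(self, t_ms, pose):
        self.times.append(t_ms)
        self.poses.append(pose)
        if len(self.times) > self.max_samples:
            self.times.pop(0)
            self.poses.pop(0)

    def closest(self, t_ms):
        """Return the pose whose timestamp is nearest to t_ms."""
        i = bisect_left(self.times, t_ms)
        candidates = [j for j in (i - 1, i) if 0 <= j < len(self.times)]
        best = min(candidates, key=lambda j: abs(self.times[j] - t_ms))
        return self.poses[best]

# Poses recorded every ~11 ms; a frame that arrives "now" was
# actually captured ~50 ms ago, so we look up the older pose.
hist = PoseHistory()
for t in range(0, 200, 11):
    hist.push(t, f"pose@{t}")
now_ms = 198
frame_capture_ms = now_ms - 50  # mid-range of the 40-60 ms latency
print(hist.closest(frame_capture_ms))  # -> pose@143
```

Pairing each frame with the pose from capture time, not display time, keeps any world-space math (like the pixel-to-ray projection above) from being skewed by head motion during the delay.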
And technically, there is no dedicated Meta Quest Camera API. Developers use a nifty workaround: request a Horizon OS camera permission, then drive the cameras through the standard Android Camera2 API alongside OpenXR. It's a little sneaky, in a smart way, and the same approach should carry over to Google's upcoming Android XR platform. Unity folks take a more direct route through the WebCamTexture API, much like they'd access a regular webcam on phones and PCs, but they hit a snag: only one camera can be accessed at a time. Annoying, right?
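In practice the workaround starts in the Android manifest: the app declares a Horizon OS camera permission alongside the standard Android one, and only opens the cameras through Camera2 once the user grants it. The headset permission name below is the one Meta's documentation uses; treat this fragment as a sketch rather than a complete manifest:

```xml
<!-- AndroidManifest.xml: Horizon OS headset camera permission,
     declared alongside the standard Android camera permission. -->
<uses-permission android:name="android.permission.CAMERA" />
<uses-permission android:name="horizonos.permission.HEADSET_CAMERA" />
```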
If you're a developer itching to dive in, there's documentation for both Unity and native Android. Meta has five Unity samples on GitHub, including CameraViewer and CameraToWorld, and Roberto's QuestCameraKit adds extras like Color Picker and Object Detection. So, developers, knock yourselves out! Or, you know, experiment cautiously. Your call.