I am building a React Native app that uses OpenCV under the hood. I want to project an image over the phone's camera feed onto a frame. I have the four corner points needed to compute cv::getPerspectiveTransform, and I could simply call cv::warpPerspective to overlay the image.
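For context, here is a stdlib-only sketch of the math involved: `cv::getPerspectiveTransform` solves an 8×8 linear system for the homography that maps four source points to four destination points (the function names and the Gaussian-elimination solver here are my own illustration, not OpenCV code):

```python
def get_perspective_transform(src, dst):
    """src, dst: four (x, y) point pairs; returns a 3x3 homography (h33 = 1)."""
    # Each correspondence (x, y) -> (u, v) gives two linear equations in the
    # eight unknowns h11..h32:
    #   h11*x + h12*y + h13 - u*x*h31 - u*y*h32 = u
    #   h21*x + h22*y + h23 - v*x*h31 - v*y*h32 = v
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    # Plain Gaussian elimination with partial pivoting on the 8x8 system.
    n = 8
    M = [row + [rhs] for row, rhs in zip(A, b)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    h = [0.0] * n
    for r in range(n - 1, -1, -1):
        h[r] = (M[r][n] - sum(M[r][c] * h[c] for c in range(r + 1, n))) / M[r][r]
    return [h[0:3], h[3:6], h[6:8] + [1.0]]

def apply_homography(H, pt):
    """Map a point through H with the perspective divide."""
    x, y = pt
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)
```

In practice this is a one-liner in OpenCV; the sketch is only to show that the result is a flat 3×3 matrix with no camera information in it, which is why the decomposition question below comes up.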
However, I would much rather get the transformation data and overlay the image in React Native land using the StyleSheet transform property (I also want to do some general logic there). From my understanding, I can recover 3D translation and rotation data by decomposing the homography. But this requires a camera matrix, and I don't want my users to have to go through the calibration steps.
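To make the dependence on the camera matrix concrete, here is a stdlib-only sketch of the planar case, where H ~ K·[r1 r2 t]: the rotation columns and translation only fall out after multiplying by K⁻¹. This is the math underlying cv::decomposeHomographyMat; the function names here are my own, and real decomposition handles multiple solutions and noise that this sketch ignores:

```python
import math

def mat_mul(A, B):
    """3x3 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def mat_inv3(M):
    """Inverse of a 3x3 matrix via the adjugate."""
    a, b, c = M[0]; d, e, f = M[1]; g, h, i = M[2]
    det = a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)
    adj = [[e * i - f * h, c * h - b * i, b * f - c * e],
           [f * g - d * i, a * i - c * g, c * d - a * f],
           [d * h - e * g, b * g - a * h, a * e - b * d]]
    return [[v / det for v in row] for row in adj]

def decompose_planar_homography(H, K):
    """Recover rotation columns r1, r2, r3 and translation t from
    H ~ K @ [r1 r2 t] (up to scale/sign; K is the camera matrix)."""
    B = mat_mul(mat_inv3(K), H)           # strip the intrinsics
    b1 = [B[i][0] for i in range(3)]
    lam = 1.0 / math.sqrt(sum(v * v for v in b1))  # scale fixed by |r1| = 1
    r1 = [lam * B[i][0] for i in range(3)]
    r2 = [lam * B[i][1] for i in range(3)]
    t  = [lam * B[i][2] for i in range(3)]
    # r3 completes the rotation matrix as the cross product r1 x r2.
    r3 = [r1[1] * r2[2] - r1[2] * r2[1],
          r1[2] * r2[0] - r1[0] * r2[2],
          r1[0] * r2[1] - r1[1] * r2[0]]
    return r1, r2, r3, t
```

The K⁻¹ on the first line of the decomposition is exactly where the calibration requirement comes from: without intrinsics there is no way to separate perspective distortion caused by the lens from distortion caused by the pose.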
Is there some way of getting this information that would prevent my users from having to do the calibration themselves? Or am I doomed to make it a fun little step in the whole process?
Note: For clarification, I am asking how to get the data, not how to send that data to React.