Google Vision API Samples: Get the CameraSource to Focus

I modified the CameraSourcePreview(Context, AttributeSet) constructor to be as follows:

```java
public CameraSourcePreview(Context context, AttributeSet attrs) {
    super(context, attrs);
    mContext = context;
    mStartRequested = false;
    mSurfaceAvailable = false;

    mSurfaceView = new SurfaceView(context);
    mSurfaceView.getHolder().addCallback(new SurfaceCallback());
    addView(mSurfaceView);

    // Trigger a focus attempt whenever the preview is tapped.
    mSurfaceView.setOnClickListener(new OnClickListener() {
        @Override
        public void onClick(View v) {
            cameraFocus(mCameraSource, Camera.Parameters.FOCUS_MODE_CONTINUOUS_VIDEO);
        }
    });
}

private static boolean cameraFocus(@NonNull CameraSource cameraSource, @NonNull …
```
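The body of the cameraFocus helper is cut off above. Because CameraSource does not expose its underlying Camera object, implementations of helpers like this commonly locate the Camera by scanning the CameraSource's declared fields via reflection for one of the right type. As a rough, testable illustration of just that field-scan technique (FakeCamera and FakeSource are hypothetical stand-ins, since the real private field names are internal and version-dependent):

```java
import java.lang.reflect.Field;

public class ReflectiveFieldLookup {

    // Scan an object's declared fields for the first one assignable to the
    // requested type. This mirrors how a cameraFocus helper can dig the
    // internal Camera out of a CameraSource without hard-coding the private
    // field's name.
    public static <T> T findFieldOfType(Object holder, Class<T> type) {
        for (Field field : holder.getClass().getDeclaredFields()) {
            if (type.isAssignableFrom(field.getType())) {
                field.setAccessible(true);
                try {
                    return type.cast(field.get(holder));
                } catch (IllegalAccessException e) {
                    return null;
                }
            }
        }
        return null; // no matching field; caller should treat focus as unsupported
    }

    // Hypothetical stand-ins: a "camera" held in a private field of a holder.
    public static class FakeCamera {
        public String focusMode = "auto";
    }

    public static class FakeSource {
        private final FakeCamera mCamera = new FakeCamera();
    }

    public static void main(String[] args) {
        FakeSource source = new FakeSource();
        FakeCamera camera = findFieldOfType(source, FakeCamera.class);
        // Analogous to calling setFocusMode(...) on the reflected Camera.
        camera.focusMode = "continuous-video";
        System.out.println(camera.focusMode);
    }
}
```

On a real device the reflected object would be an android.hardware.Camera, whose parameters you would update with getParameters()/setParameters(); reflection into private fields is fragile across library versions, so a null result should degrade gracefully.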

Media Recorder with Google Vision API

Solution 1: Starting with Android Lollipop (API 21), the MediaProjection API was introduced; used in conjunction with MediaRecorder, it can save the contents of a SurfaceView to a video file. This example shows how to record a SurfaceView to a video file.

Solution 2: Alternatively, you can use one of the Encoder classes provided in the Grafika repository. …
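For Solution 1, a minimal sketch of wiring MediaRecorder to a MediaProjection virtual display might look like the following. This is Android-only and not runnable off-device; resultCode and data are assumed to come from the screen-capture permission flow (onActivityResult), and the size, bitrate, and file name are illustrative choices:

```java
// Inside an Activity, after the user has granted screen-capture permission.
MediaRecorder recorder = new MediaRecorder();
recorder.setVideoSource(MediaRecorder.VideoSource.SURFACE);
recorder.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4);
recorder.setVideoEncoder(MediaRecorder.VideoEncoder.H264);
recorder.setVideoSize(1280, 720);
recorder.setVideoFrameRate(30);
recorder.setOutputFile(
        new File(getExternalFilesDir(null), "capture.mp4").getAbsolutePath());
recorder.prepare(); // must precede getSurface() for a SURFACE video source

MediaProjectionManager manager =
        (MediaProjectionManager) getSystemService(Context.MEDIA_PROJECTION_SERVICE);
MediaProjection projection = manager.getMediaProjection(resultCode, data);

// Route the captured pixels into the recorder's input surface.
projection.createVirtualDisplay("capture", 1280, 720, /* dpi */ 320,
        DisplayManager.VIRTUAL_DISPLAY_FLAG_AUTO_MIRROR,
        recorder.getSurface(), null, null);
recorder.start();
```

Stopping cleanly means calling recorder.stop() and recorder.release(), then projection.stop(); MediaRecorder is stateful, so the setter calls must happen in roughly the order shown before prepare().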

Mobile Vision API – concatenate new detector object to continue frame processing

Yes, it is possible. You'd need to create your own subclass of Detector that wraps FaceDetector and executes your extra frame-processing code in the detect method. It would look something like this:

```java
class MyFaceDetector extends Detector<Face> {
    private Detector<Face> mDelegate;

    MyFaceDetector(Detector<Face> delegate) {
        mDelegate = delegate;
    }

    public SparseArray<Face> detect(Frame frame) {
        // *** …
```
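The excerpt above is cut off inside detect. A hedged sketch of how such a wrapping detector could be completed (the forwarding of isOperational and setFocus, and the placement of the custom processing before delegation, are my assumptions about the intended pattern, not the original answer's exact code):

```java
import android.util.SparseArray;

import com.google.android.gms.vision.Detector;
import com.google.android.gms.vision.Frame;
import com.google.android.gms.vision.face.Face;

class MyFaceDetector extends Detector<Face> {
    private final Detector<Face> mDelegate;

    MyFaceDetector(Detector<Face> delegate) {
        mDelegate = delegate;
    }

    @Override
    public SparseArray<Face> detect(Frame frame) {
        // Custom per-frame processing goes here; the frame's pixel data is
        // available via frame.getGrayscaleImageData() or frame.getBitmap().
        return mDelegate.detect(frame); // then hand off to the real FaceDetector
    }

    // Forward the remaining Detector API to the wrapped FaceDetector.
    @Override
    public boolean isOperational() {
        return mDelegate.isOperational();
    }

    @Override
    public boolean setFocus(int id) {
        return mDelegate.setFocus(id);
    }
}
```

You would then build the real FaceDetector as usual, wrap it (new MyFaceDetector(faceDetector)), and pass the wrapper to the CameraSource builder, so every camera frame flows through your code before face detection.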