The Camera2 API is the newer API for controlling camera devices on Android. It is typically used in camera apps from an Activity or Fragment, but there are use cases where you want to acquire and process camera frames in the background while other apps are in the foreground.

Just recently I started to play with TensorFlow, and in my experimental app I needed to do camera frame processing in the background. Setting it up so that frames are processed in a Service wasn't that hard, though there were some differences in comparison to regular usage. I hope this post can give you some useful information to make it easier to accomplish the same thing.

I'll show two ways of using the camera in a Service. One way is to still render the frames into a surface that is drawn over all other apps. This acts like a preview window that stays visible even when other apps are in the foreground. The other mode just gets the frames in the background so that they can be processed by your app.

Depending on what you consider processing in the "background", the title of this post might be a bit misleading. I used a foreground Service to process camera frames. A background Service would also work, but due to the background execution limits introduced in Android 8.0, it wouldn't be very useful for processing frames while your app is not in the foreground.

You can find sample code related to this blog post on GitHub.

I also wrote a blogpost about the native camera API on Android. If you're using the NDK and trying to access camera from C code, you might find that information useful.

Additionally, if you are interested in efficient processing of images using C/C++ and RenderScript, check out this blogpost.

Create Foreground Service

As I mentioned earlier, you could keep the Service in the background, but because of background execution limits, the system would kill it a very short time after your Activity went to the background. Therefore, I decided to go with a foreground Service, and here I'll show you a quick summary. You can read my other post for some additional information about using foreground services together with OpenGL.

Because I want to show a preview of the camera while other apps are in the foreground, I need to add the SYSTEM_ALERT_WINDOW permission to my AndroidManifest.xml. This permission is required to render views over other apps. Additionally, since Android Pie (API level 28), you also need to add the FOREGROUND_SERVICE permission.

<manifest ...>

    <uses-permission android:name="android.permission.SYSTEM_ALERT_WINDOW"/>
    <uses-permission android:name="android.permission.FOREGROUND_SERVICE"/>

    <application ...>
        <service android:name=".CamService"/>
        ...
    </application>
</manifest>

The SYSTEM_ALERT_WINDOW permission additionally needs to be granted by the user in Android settings. Before you try to load a view in your foreground Service, you should first check if the user has granted this permission. You can then direct the user to the settings section where they can grant it (Display over other apps).

if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.M && !Settings.canDrawOverlays(this)) {

    // Don't have permission to draw over other apps yet - ask the user to grant it
    val settingsIntent = Intent(Settings.ACTION_MANAGE_OVERLAY_PERMISSION)
    startActivityForResult(settingsIntent, PERMISSION_REQUEST_CODE)
}

And here is the code that handles creating the Service and starting it in the foreground

class CamService: Service() {

    var rootView: View? = null
    var texPreview: TextureView? = null

    override fun onBind(p0: Intent?): IBinder? {
        return null
    }

    override fun onStartCommand(intent: Intent?, flags: Int, startId: Int): Int {

        when (intent?.action) {

            ACTION_START_WITH_PREVIEW -> startWithPreview()
        }

        return super.onStartCommand(intent, flags, startId)
    }

    override fun onCreate() {
        super.onCreate()
        startForeground()
    }

    private fun startWithPreview() {

        // Initialize view drawn over other apps
        val li = getSystemService(Context.LAYOUT_INFLATER_SERVICE) as LayoutInflater
        rootView = li.inflate(R.layout.overlay, null)
        texPreview = rootView?.findViewById(R.id.texPreview)

        // TYPE_SYSTEM_OVERLAY was deprecated in favor of TYPE_APPLICATION_OVERLAY in Android O
        val type = if (Build.VERSION.SDK_INT < Build.VERSION_CODES.O)
            WindowManager.LayoutParams.TYPE_SYSTEM_OVERLAY
        else
            WindowManager.LayoutParams.TYPE_APPLICATION_OVERLAY

        val params = WindowManager.LayoutParams(
            WindowManager.LayoutParams.WRAP_CONTENT,
            WindowManager.LayoutParams.WRAP_CONTENT,
            type,
            WindowManager.LayoutParams.FLAG_NOT_TOUCHABLE or WindowManager.LayoutParams.FLAG_NOT_FOCUSABLE,
            PixelFormat.TRANSLUCENT
        )

        val wm = getSystemService(Context.WINDOW_SERVICE) as WindowManager
        wm.addView(rootView, params)

        // Initialize camera here
        // ...
    }

    private fun startForeground() {

        // MainActivity here stands for whatever Activity you want opened
        // when the user taps the notification
        val pendingIntent: PendingIntent =
            Intent(this, MainActivity::class.java).let { notificationIntent ->
                PendingIntent.getActivity(this, 0, notificationIntent, 0)
            }

        if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.O) {
            val channel = NotificationChannel(CHANNEL_ID, CHANNEL_NAME, NotificationManager.IMPORTANCE_NONE)
            channel.lightColor = Color.BLUE
            channel.lockscreenVisibility = Notification.VISIBILITY_PRIVATE
            val nm = getSystemService(Context.NOTIFICATION_SERVICE) as NotificationManager
            nm.createNotificationChannel(channel)
        }

        val notification: Notification = NotificationCompat.Builder(this, CHANNEL_ID)
            .setContentTitle(getString(R.string.app_name))
            .setContentIntent(pendingIntent)
            .build()

        startForeground(ONGOING_NOTIFICATION_ID, notification)
    }

    companion object {

        const val ACTION_START_WITH_PREVIEW = "eu.sisik.backgroundcam.action.START_WITH_PREVIEW"

        const val ONGOING_NOTIFICATION_ID = 6660
        const val CHANNEL_ID = "cam_service_channel_id"
        const val CHANNEL_NAME = "cam_service_channel_name"
    }
}

The startForeground() method takes care of switching the service from background to foreground.

Once the service receives ACTION_START_WITH_PREVIEW, it calls startWithPreview() which loads the view that is rendered above other apps. You need to have the permission to draw over other apps (SYSTEM_ALERT_WINDOW) before calling wm.addView(rootView, params), or you'll get an exception.
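To tie this together, here is roughly how an Activity could start the Service once the permissions are granted. Treat this as a sketch; it assumes the CamService class and ACTION_START_WITH_PREVIEW constant from the snippets above:

```kotlin
// Ask CamService to start capturing with an overlay preview
val intent = Intent(this, CamService::class.java)
intent.action = CamService.ACTION_START_WITH_PREVIEW

if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.O)
    // Since Android 8.0, a Service that promotes itself to foreground
    // must be started with startForegroundService()
    startForegroundService(intent)
else
    startService(intent)
```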

The overlay.xml layout is very simple. It contains a TextureView which I would like to use to render the camera preview

<?xml version="1.0" encoding="utf-8"?>
<FrameLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent">

    <TextureView android:id="@+id/texPreview"
        android:layout_width="match_parent"
        android:layout_height="match_parent"/>

</FrameLayout>

At this point we should have the basic structure for starting our service and loading a view that can show our camera preview. Following sections will focus on using the Camera2 API to get camera frames for our preview and for further processing.

Camera Permissions

You need to declare the camera permission in your AndroidManifest.xml so that you're able to use camera devices

<manifest ...>

    <uses-permission android:name="android.permission.CAMERA"/>

The camera permission is considered a "dangerous" permission, therefore you'll also need to request it at runtime

val permission = Manifest.permission.CAMERA
if (ContextCompat.checkSelfPermission(this, permission) != PackageManager.PERMISSION_GRANTED) {

    // We don't have camera permission yet... Request it from the user
    ActivityCompat.requestPermissions(this, arrayOf(permission), CODE_PERM_CAMERA)
}

This piece of code should be called from your Activity or Fragment before you start the Service.

You can then check the result of the request by overriding onRequestPermissionsResult()

override fun onRequestPermissionsResult(requestCode: Int, permissions: Array<out String>, grantResults: IntArray) {
    super.onRequestPermissionsResult(requestCode, permissions, grantResults)
    when (requestCode) {
        CODE_PERM_CAMERA -> {
            if (grantResults.firstOrNull() != PackageManager.PERMISSION_GRANTED) {
                // Handle permission denial here
                // ...
            }
        }
    }
}

Finding Available Camera Devices

Android devices now often have more than one camera device built in. Additionally, cameras can be connected externally, for example through USB. In this example I just pick the first front-facing camera

val manager = getSystemService(Context.CAMERA_SERVICE) as CameraManager
var camId: String? = null

for (id in manager.cameraIdList) {
    val characteristics = manager.getCameraCharacteristics(id)
    val facing = characteristics.get(CameraCharacteristics.LENS_FACING)
    if (facing == CameraCharacteristics.LENS_FACING_FRONT) {
        camId = id
        break
    }
}

// Use camId to initialize camera
// ...

In this example I'm using a TextureView for showing the preview. TextureView has some lifecycle callbacks which I can use to plug in the camera initialization.

private val surfaceTextureListener = object : TextureView.SurfaceTextureListener {

    override fun onSurfaceTextureAvailable(texture: SurfaceTexture, width: Int, height: Int) {
        // Init camera here
        // ...
    }

    override fun onSurfaceTextureSizeChanged(texture: SurfaceTexture, width: Int, height: Int) {}

    override fun onSurfaceTextureDestroyed(texture: SurfaceTexture): Boolean {
        return true
    }

    override fun onSurfaceTextureUpdated(texture: SurfaceTexture) {}
}


private fun startWithPreview() {

    // Initialize view drawn over other apps
    // ...

    // Initialize camera here if texture view already initialized
    if (texPreview!!.isAvailable)
        initCam(texPreview!!.width, texPreview!!.height)
    else
        texPreview!!.surfaceTextureListener = surfaceTextureListener
}

When my Service receives the request to start the preview, I first check if my TextureView is already prepared for rendering (texPreview!!.isAvailable). If the TextureView is ready, I can directly initialize the camera-related stuff. If not, I just wait till onSurfaceTextureAvailable() is called and start camera initialization there.

Determine Suitable Preview Size

A camera device only supports a specific set of resolutions, and these may vary depending on which surface is chosen as an output. In our case we want the resolutions that can be used with a TextureView. The resolutions that we get from the Camera2 API might differ from the TextureView size and might have a completely different aspect ratio.

// Get all supported sizes for TextureView
val characteristics = manager.getCameraCharacteristics(cameraId)
val map = characteristics.get(CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP)
val supportedSizes = map.getOutputSizes(SurfaceTexture::class.java)

Once you have the supported resolutions, you can pick one that has a suitable aspect ratio and/or a resolution that is similar to your TextureView size. You can also try to adjust the TextureView size to fit a specific resolution, or you can pick from sizes that should be supported on all devices.

In our example I tried to pick an aspect ratio and resolution similar to the current size of our TextureView

val manager = getSystemService(Context.CAMERA_SERVICE) as CameraManager

// Get all supported sizes for TextureView
val characteristics = manager.getCameraCharacteristics(camId)
val map = characteristics.get(CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP)
val supportedSizes = map.getOutputSizes(SurfaceTexture::class.java)

// We want to find something near the size of our TextureView
val texViewArea = textureViewWidth * textureViewHeight
val texViewAspect = textureViewWidth.toFloat()/textureViewHeight.toFloat()

val nearestToFurthestSz = supportedSizes.sortedWith(compareBy(
        // First find something with a similar aspect ratio
        {
            val aspect = if (it.width < it.height) it.width.toFloat() / it.height.toFloat()
            else it.height.toFloat() / it.width.toFloat()
            (aspect - texViewAspect).absoluteValue
        },
        // Also try to get a similar resolution
        {
            (texViewArea - it.width * it.height).absoluteValue
        }
))

// The first entry should have similar size and aspect ratio
val mySize = nearestToFurthestSz[0]

Preparing CaptureRequest

CaptureRequest contains the various capture configuration options (e.g. focusing mode, flash control, ...) and you also use it to specify the target surfaces for image data.

There can be multiple target surfaces specified. In my example I decided to use two types of targets. I use the TextureView to display the preview, and I also created an ImageReader that can be used for background processing. If you only want to process the image data without displaying a preview, you can remove the TextureView completely and only use the ImageReader.

ImageReader uses an OnImageAvailableListener to notify you when a new frame is available. You need to call acquireLatestImage() or acquireNextImage() to dequeue the image data, otherwise the enqueued data reaches a memory limit and onImageAvailable() stops being called

private val imageListener = object : ImageReader.OnImageAvailableListener {
    override fun onImageAvailable(reader: ImageReader?) {
        val image = reader?.acquireLatestImage()
        Log.d(TAG, "Got image " + image?.width + "x" + image?.height)

        // Process image here (ideally async, so that you don't block the callback)
        // ...

        // The Image must be closed, otherwise the ImageReader's buffer fills up
        // and onImageAvailable() stops being called
        image?.close()
    }
}

Now you can use this callback to initialize your CaptureRequest. The request builder class allows you to create a request with various parameters

// Prepare CaptureRequest that can be used with CameraCaptureSession
val requestBuilder = cameraDevice!!.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW)

// Configure target surface for preview (TextureView)
val texture = textureView!!.surfaceTexture!!
texture.setDefaultBufferSize(previewSize!!.width, previewSize!!.height)
val previewSurface = Surface(texture)
requestBuilder.addTarget(previewSurface)

// Configure target surface for background processing (ImageReader)
imageReader = ImageReader.newInstance(
    previewSize!!.width, previewSize!!.height,
    ImageFormat.YUV_420_888, 2
)
imageReader!!.setOnImageAvailableListener(imageListener, null)
requestBuilder.addTarget(imageReader!!.surface)

// Set some additional parameters for the request
requestBuilder.set(CaptureRequest.CONTROL_AF_MODE, CaptureRequest.CONTROL_AF_MODE_CONTINUOUS_PICTURE)
requestBuilder.set(CaptureRequest.CONTROL_AE_MODE, CaptureRequest.CONTROL_AE_MODE_ON_AUTO_FLASH)

// We've configured a CaptureRequest
val captureRequest = requestBuilder.build()

// Now use the initialized request to create a CameraCaptureSession here
// ...

// Now use the initialized request to create a CameraCaptureSession here
// ...

In the code above I've created a Surface from the TextureView's SurfaceTexture and used it, together with the ImageReader's Surface, as targets for the CaptureRequest.

An important thing to note here is that before I created the Surface from a SurfaceTexture, I called setDefaultBufferSize() with my selected preview size. You should only use one of the supported preview sizes for SurfaceTexture that we queried for in the previous section. The order of the dimensions is also important (width, height != height, width). If you forget this step, a default size supported by the camera device will be picked automatically (the smallest supported size less than 1080p) and your preview might then look skewed.

I've used the imageListener from the previous step to initialize the ImageReader. Note that I supplied ImageFormat.YUV_420_888 as one of the parameters to ImageReader.newInstance(). This parameter affects the format of the pixels retrieved in onImageAvailable() and the conversions you need to perform before getting your data into a usable format. I picked the YUV format because support for it is guaranteed for preview (see the tables starting with "LEGACY-level guaranteed configurations"). The JPEG format should work too, but you would probably get a much lower frame rate due to the additional compression and processing.
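As an illustration of what such a conversion can look like (this helper is my own sketch, not part of the sample project), here is one way to copy the luminance (Y) plane of a YUV_420_888 Image into a tightly packed byte array, which is often all you need for grayscale-based processing:

```kotlin
// Copy the Y (luminance) plane of a YUV_420_888 Image into a tightly
// packed ByteArray, respecting the row stride reported by the camera
fun extractLuminance(image: Image): ByteArray {
    val yPlane = image.planes[0]
    val buffer = yPlane.buffer
    val rowStride = yPlane.rowStride
    val result = ByteArray(image.width * image.height)

    // Rows may be padded (rowStride >= width), so copy row by row
    for (row in 0 until image.height) {
        buffer.position(row * rowStride)
        buffer.get(result, row * image.width, image.width)
    }
    return result
}
```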

Preparing CameraCaptureSession

CameraCaptureSession is used to instruct the camera device to capture images.

The CaptureRequest and target surfaces that you've got in the previous section of this blogpost become the main configuration parameters for creating a capture session

// Prepare CaptureRequest and target Surfaces
// ...

// Initialize a CameraCaptureSession
cameraDevice!!.createCaptureSession(listOf(previewSurface, imageReader!!.surface),
    object : CameraCaptureSession.StateCallback() {

        override fun onConfigured(cameraCaptureSession: CameraCaptureSession) {
            // Only proceed when camera not already closed
            if (null == cameraDevice)
                return

            captureSession = cameraCaptureSession

            captureRequest = requestBuilder!!.build()
            captureSession!!.setRepeatingRequest(captureRequest!!, captureCallback, null)
        }

        override fun onConfigureFailed(cameraCaptureSession: CameraCaptureSession) {
            Log.e(TAG, "createCaptureSession() failed")
        }
    }, null
)

Calling setRepeatingRequest() will start capturing frames endlessly in a loop. You can then stop it by calling stopRepeating().
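For example, when tearing the Service down you could stop the capture loop like this (a minimal sketch; how you handle CameraAccessException is up to you):

```kotlin
// Stop the endless capture loop started by setRepeatingRequest()
try {
    captureSession?.stopRepeating()
} catch (e: CameraAccessException) {
    Log.e(TAG, "Failed to stop repeating request", e)
}
```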

You can also pass a CaptureCallback to setRepeatingRequest() to get more information about the state of image capturing. Here I've only used a callback that actually does nothing. It should also be possible to just pass null.

private val captureCallback = object : CameraCaptureSession.CaptureCallback() {

    override fun onCaptureProgressed(
        session: CameraCaptureSession,
        request: CaptureRequest,
        partialResult: CaptureResult
    ) {}

    override fun onCaptureCompleted(
        session: CameraCaptureSession,
        request: CaptureRequest,
        result: TotalCaptureResult
    ) {}
}

Similar to some other methods from the Camera2 API, you can pass a Handler to setRepeatingRequest(), which lets you control on which thread the CaptureCallback should be called. I only passed null in this case, so it is invoked on the looper of the thread that called setRepeatingRequest().
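If you'd rather keep the camera callbacks off the main thread, you can create a dedicated HandlerThread and pass its Handler to openCamera(), setOnImageAvailableListener() and setRepeatingRequest(). A possible sketch (the thread and handler names below are my own, not from the sample project):

```kotlin
// Dedicated background thread for camera callbacks, so that frame
// handling doesn't block the main thread
private var camThread: HandlerThread? = null
private var camHandler: Handler? = null

private fun startCameraThread() {
    camThread = HandlerThread("CamBackgroundThread").also { it.start() }
    camHandler = Handler(camThread!!.looper)
}

private fun stopCameraThread() {
    camThread?.quitSafely()
    camThread = null
    camHandler = null
}

// Usage, e.g.:
// cameraManager!!.openCamera(camId, stateCallback, camHandler)
// captureSession!!.setRepeatingRequest(captureRequest!!, captureCallback, camHandler)
```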

Opening Connection to Camera Device

Before you can actually do anything useful with a specific camera device, you need to open a connection to that device. You can do this by calling openCamera().

This again happens asynchronously, and you have a callback that notifies you when the connection is established. You need to wait till the connection is established before you try to create and use the CameraCaptureSession from the previous section

private val stateCallback = object : CameraDevice.StateCallback() {

    override fun onOpened(cameraDevice: CameraDevice) {
        this@CamService.cameraDevice = cameraDevice

        // Connection is now open - only now create the CameraCaptureSession
        // ...
    }

    override fun onDisconnected(cameraDevice: CameraDevice) {
        cameraDevice.close()
        this@CamService.cameraDevice = null
    }

    override fun onError(cameraDevice: CameraDevice, error: Int) {
        cameraDevice.close()
        this@CamService.cameraDevice = null
    }
}

Here is how I opened the connection

cameraManager = getSystemService(Context.CAMERA_SERVICE) as CameraManager

// Get camera id and suitable preview size
// ...

cameraManager!!.openCamera(camId, stateCallback, null)

Closing Camera Connection & Cleanup

When using the Camera2 API from a regular Activity/Fragment, you would normally do the cleanup somewhere in onPause(). A Service's lifecycle is of course different, and it all depends on your specific situation. I decided to do the cleanup in onDestroy()

captureSession?.close()
captureSession = null

cameraDevice?.close()
cameraDevice = null

imageReader?.close()
imageReader = null

More Useful Resources

A background service can also be used to apply effects to a video with OpenGL, even without requiring the additional permissions and overlay window. I wrote another blogpost which shows how to generate a video file from captured images using the MediaCodec API inside of a background IntentService.

Additionally, I also recommend checking out my other blogpost that shows how to process images efficiently with C/C++ and RenderScript.
