Android Face Detection


With the release of Google Play services 7.8, Google introduced the Mobile Vision API, which lets you perform face detection, barcode detection, and text detection.
In this tutorial, we'll develop an Android face detection application that lets you detect faces in an image.

Android Face Detection

The Android face detection API tracks faces in photos and videos using landmarks such as the eyes, nose, ears, cheeks, and mouth.

Rather than detecting individual features first, the API detects the face as a whole and then, if they are defined, the landmarks and classifications.
Besides, the API can also detect faces at various angles.

Android Face Detection – Landmarks

A landmark is a point of interest within a face.
The left eye, right eye, and base of the nose are all examples of landmarks.
The following are the landmarks the API can currently find:

  • left and right eyes
  • left and right ears
  • left and right ear tips
  • base of the nose
  • left and right cheeks
  • left and right corners of the mouth
  • base of the mouth

When "left" and "right" are used, they are relative to the subject.
For example, the LEFT_EYE landmark is the subject's left eye, not the eye that is on the left when viewing the image.
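
For illustration, here's a minimal sketch of reading a landmark's type and position (it assumes a detected Face object named "face", as obtained in the detection code later in this tutorial):

for (Landmark landmark : face.getLandmarks()) {
    if (landmark.getType() == Landmark.LEFT_EYE) {
        float x = landmark.getPosition().x; // the subject's left eye, in image coordinates
        float y = landmark.getPosition().y;
    }
}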

Classification

Classification determines whether a certain facial characteristic is present.
The Android Face API currently supports two classifications:

  • eyes open: the getIsLeftEyeOpenProbability() and getIsRightEyeOpenProbability() methods are used.

  • smiling: the getIsSmilingProbability() method is used.
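
As a minimal sketch of reading these probabilities (it assumes a detected Face object named "face"; the helper name describeClassifications is hypothetical):

// Hypothetical helper: summarizes the classification probabilities of a detected face.
// Face.UNCOMPUTED_PROBABILITY (-1) indicates a probability that could not be computed.
private String describeClassifications(Face face) {
    StringBuilder sb = new StringBuilder();
    float smile = face.getIsSmilingProbability();
    if (smile != Face.UNCOMPUTED_PROBABILITY) {
        sb.append(smile > 0.5f ? "Smiling" : "Not smiling").append("\n");
    }
    float leftEye = face.getIsLeftEyeOpenProbability();
    float rightEye = face.getIsRightEyeOpenProbability();
    if (leftEye != Face.UNCOMPUTED_PROBABILITY && rightEye != Face.UNCOMPUTED_PROBABILITY) {
        sb.append((leftEye > 0.5f && rightEye > 0.5f) ? "Eyes open" : "Eyes closed");
    }
    return sb.toString();
}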

Face Orientation

The orientation of a face is determined using Euler angles.
These refer to the rotation angle of the face around the X, Y, and Z axes.

  • Euler Y tells us whether the face is looking left or right.

  • Euler Z tells us whether the face is rotated/slanted.

  • Euler X tells us whether the face is looking up or down (currently not supported).

Note: if a probability can't be computed, it is set to -1.
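
As a quick sketch, the orientation of a detected Face can be read via its getEulerY() and getEulerZ() accessors (the 18-degree threshold below is an arbitrary choice for illustration):

float eulerY = face.getEulerY(); // left/right rotation around the vertical axis
float eulerZ = face.getEulerZ(); // in-plane tilt of the face
if (Math.abs(eulerY) > 18.0f) {
    // A profile-like view: some landmarks (e.g. the far eye) may not be reported.
}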

Let's jump to the business end of this tutorial.
Our application will contain a few sample images along with the functionality to capture your own image.
Note: the API supports face detection only.
Face recognition isn't supported with the current Mobile Vision API.

Android Face Detection Code

Add the following dependency inside the build.gradle file of your application.

compile 'com.google.android.gms:play-services-vision:11.0.4'
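
Note: newer versions of the Android Gradle plugin deprecate "compile" in favor of "implementation", in which case the line becomes:

implementation 'com.google.android.gms:play-services-vision:11.0.4'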

Add the following meta-data inside the application tag of the AndroidManifest.xml file, as shown below.

<meta-data
          android:name="com.google.android.gms.vision.DEPENDENCIES"
          android:value="face" />

This lets the Vision library know that you plan to detect faces within your application.

Add the following inside the manifest tag of AndroidManifest.xml to declare the camera feature and request the storage permission.

<uses-feature
      android:name="android.hardware.camera"
      android:required="true" />
  <uses-permission
      android:name="android.permission.WRITE_EXTERNAL_STORAGE" />

The code for the activity_main.xml layout file is given below.

<?xml version="1.0" encoding="utf-8"?>

<ScrollView xmlns:android="http://schemas.android.com/apk/res/android"
  xmlns:app="http://schemas.android.com/apk/res-auto"
  xmlns:tools="http://schemas.android.com/tools"
  android:layout_width="match_parent"
  android:layout_height="match_parent">

  <android.support.constraint.ConstraintLayout xmlns:app="http://schemas.android.com/apk/res-auto"
      xmlns:tools="http://schemas.android.com/tools"
      android:layout_width="match_parent"
      android:layout_height="wrap_content"
      tools:context="com.theitroad.facedetectionapi.MainActivity">

      <ImageView
          android:id="@+id/imageView"
          android:layout_width="300dp"
          android:layout_height="300dp"
          android:layout_marginTop="8dp"
          android:src="@drawable/sample_1"
          app:layout_constraintLeft_toLeftOf="parent"
          app:layout_constraintRight_toRightOf="parent"
          app:layout_constraintTop_toTopOf="parent" />

      <Button
          android:id="@+id/btnProcessNext"
          android:layout_width="wrap_content"
          android:layout_height="wrap_content"
          android:layout_marginTop="8dp"
          android:text="PROCESS NEXT"
          app:layout_constraintHorizontal_bias="0.501"
          app:layout_constraintLeft_toLeftOf="parent"
          app:layout_constraintRight_toRightOf="parent"
          app:layout_constraintTop_toBottomOf="@+id/imageView" />

      <ImageView
          android:id="@+id/imgTakePic"
          android:layout_width="250dp"
          android:layout_height="250dp"
          android:layout_marginTop="8dp"
          app:layout_constraintLeft_toLeftOf="parent"
          app:layout_constraintRight_toRightOf="parent"
          app:layout_constraintTop_toBottomOf="@+id/txtSampleDescription"
          app:srcCompat="@android:drawable/ic_menu_camera" />

      <Button
          android:id="@+id/btnTakePicture"
          android:layout_width="wrap_content"
          android:layout_height="wrap_content"
          android:layout_marginTop="8dp"
          android:text="TAKE PICTURE"
          app:layout_constraintLeft_toLeftOf="parent"
          app:layout_constraintRight_toRightOf="parent"
          app:layout_constraintTop_toBottomOf="@+id/imgTakePic" />

      <TextView
          android:id="@+id/txtSampleDescription"
          android:layout_width="match_parent"
          android:layout_height="wrap_content"
          android:layout_marginBottom="8dp"
          android:layout_marginTop="8dp"
          android:gravity="center"
          app:layout_constraintBottom_toTopOf="@+id/txtTakePicture"
          app:layout_constraintLeft_toLeftOf="parent"
          app:layout_constraintRight_toRightOf="parent"
          app:layout_constraintTop_toBottomOf="@+id/btnProcessNext"
          app:layout_constraintVertical_bias="0.0" />

      <TextView
          android:id="@+id/txtTakePicture"
          android:layout_width="wrap_content"
          android:layout_height="wrap_content"
          android:layout_marginTop="8dp"
          android:gravity="center"
          app:layout_constraintLeft_toLeftOf="parent"
          app:layout_constraintRight_toRightOf="parent"
          app:layout_constraintTop_toBottomOf="@+id/btnTakePicture" />

  </android.support.constraint.ConstraintLayout>

</ScrollView>

We've defined two ImageViews, TextViews, and Buttons.
One of each is used to loop through the sample images and display the results.
The other is used to capture an image from the camera.

The code for the MainActivity.java file is given below.

package com.theitroad.facedetectionapi;

import android.Manifest;
import android.content.Context;
import android.content.Intent;
import android.content.pm.PackageManager;
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.graphics.Canvas;
import android.graphics.Color;
import android.graphics.Paint;
import android.net.Uri;
import android.os.Environment;
import android.provider.MediaStore;
import android.support.annotation.NonNull;
import android.support.v4.app.ActivityCompat;
import android.support.v7.app.AppCompatActivity;
import android.os.Bundle;
import android.util.SparseArray;
import android.view.View;
import android.widget.Button;
import android.widget.ImageView;
import android.widget.TextView;
import android.widget.Toast;

import com.google.android.gms.vision.Frame;
import com.google.android.gms.vision.face.Face;
import com.google.android.gms.vision.face.FaceDetector;
import com.google.android.gms.vision.face.Landmark;

import java.io.File;
import java.io.FileNotFoundException;

public class MainActivity extends AppCompatActivity implements View.OnClickListener {

  ImageView imageView, imgTakePicture;
  Button btnProcessNext, btnTakePicture;
  TextView txtSampleDesc, txtTakenPicDesc;
  private FaceDetector detector;
  Bitmap editedBitmap;
  int currentIndex = 0;
  int[] imageArray;
  private Uri imageUri;
  private static final int REQUEST_WRITE_PERMISSION = 200;
  private static final int CAMERA_REQUEST = 101;

  private static final String SAVED_INSTANCE_URI = "uri";
  private static final String SAVED_INSTANCE_BITMAP = "bitmap";

  @Override
  protected void onCreate(Bundle savedInstanceState) {
      super.onCreate(savedInstanceState);
      setContentView(R.layout.activity_main);

      imageArray = new int[]{R.drawable.sample_1, R.drawable.sample_2, R.drawable.sample_3};
      detector = new FaceDetector.Builder(getApplicationContext())
              .setTrackingEnabled(false)
              .setLandmarkType(FaceDetector.ALL_LANDMARKS)
              .setClassificationType(FaceDetector.ALL_CLASSIFICATIONS)
              .build();

      initViews();

  }

  private void initViews() {
      imageView = (ImageView) findViewById(R.id.imageView);
      imgTakePicture = (ImageView) findViewById(R.id.imgTakePic);
      btnProcessNext = (Button) findViewById(R.id.btnProcessNext);
      btnTakePicture = (Button) findViewById(R.id.btnTakePicture);
      txtSampleDesc = (TextView) findViewById(R.id.txtSampleDescription);
      txtTakenPicDesc = (TextView) findViewById(R.id.txtTakePicture);

      processImage(imageArray[currentIndex]);
      currentIndex++;

      btnProcessNext.setOnClickListener(this);
      btnTakePicture.setOnClickListener(this);
      imgTakePicture.setOnClickListener(this);
  }

  @Override
  public void onClick(View v) {
      switch (v.getId()) {
          case R.id.btnProcessNext:
              imageView.setImageResource(imageArray[currentIndex]);
              processImage(imageArray[currentIndex]);
              if (currentIndex == imageArray.length - 1)
                  currentIndex = 0;
              else
                  currentIndex++;

              break;

          case R.id.btnTakePicture:
              ActivityCompat.requestPermissions(MainActivity.this, new
                      String[]{Manifest.permission.WRITE_EXTERNAL_STORAGE}, REQUEST_WRITE_PERMISSION);
              break;

          case R.id.imgTakePic:
              ActivityCompat.requestPermissions(MainActivity.this, new
                      String[]{Manifest.permission.WRITE_EXTERNAL_STORAGE}, REQUEST_WRITE_PERMISSION);
              break;
      }
  }

  @Override
  public void onRequestPermissionsResult(int requestCode, @NonNull String[] permissions, @NonNull int[] grantResults) {
      super.onRequestPermissionsResult(requestCode, permissions, grantResults);
      switch (requestCode) {
          case REQUEST_WRITE_PERMISSION:
              if (grantResults.length > 0 && grantResults[0] == PackageManager.PERMISSION_GRANTED) {
                  startCamera();
              } else {
                  Toast.makeText(getApplicationContext(), "Permission Denied!", Toast.LENGTH_SHORT).show();
              }
      }
  }

  @Override
  protected void onActivityResult(int requestCode, int resultCode, Intent data) {
      if (requestCode == CAMERA_REQUEST && resultCode == RESULT_OK) {
          launchMediaScanIntent();
          try {
              processCameraPicture();
          } catch (Exception e) {
              Toast.makeText(getApplicationContext(), "Failed to load Image", Toast.LENGTH_SHORT).show();
          }
      }
  }

  private void launchMediaScanIntent() {
      Intent mediaScanIntent = new Intent(Intent.ACTION_MEDIA_SCANNER_SCAN_FILE);
      mediaScanIntent.setData(imageUri);
      this.sendBroadcast(mediaScanIntent);
  }

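  // Note: Uri.fromFile() with EXTRA_OUTPUT works on older Android versions; on API 24+
  // (with targetSdkVersion >= 24) a FileProvider-backed content:// Uri is required instead
  // to avoid a FileUriExposedException.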
  private void startCamera() {
      Intent intent = new Intent(MediaStore.ACTION_IMAGE_CAPTURE);
      File photo = new File(Environment.getExternalStorageDirectory(), "photo.jpg");
      imageUri = Uri.fromFile(photo);
      intent.putExtra(MediaStore.EXTRA_OUTPUT, imageUri);
      startActivityForResult(intent, CAMERA_REQUEST);
  }

  @Override
  protected void onSaveInstanceState(Bundle outState) {
      if (imageUri != null) {
          outState.putParcelable(SAVED_INSTANCE_BITMAP, editedBitmap);
          outState.putString(SAVED_INSTANCE_URI, imageUri.toString());
      }
      super.onSaveInstanceState(outState);
  }

  private void processImage(int image) {

      Bitmap bitmap = decodeBitmapImage(image);
      if (detector.isOperational() && bitmap != null) {
          editedBitmap = Bitmap.createBitmap(bitmap.getWidth(), bitmap
                  .getHeight(), bitmap.getConfig());
          float scale = getResources().getDisplayMetrics().density;
          Paint paint = new Paint(Paint.ANTI_ALIAS_FLAG);
          paint.setColor(Color.GREEN);
          paint.setTextSize((int) (16 * scale));
          paint.setShadowLayer(1f, 0f, 1f, Color.WHITE);
          paint.setStyle(Paint.Style.STROKE);
          paint.setStrokeWidth(6f);
          Canvas canvas = new Canvas(editedBitmap);
          canvas.drawBitmap(bitmap, 0, 0, paint);
          Frame frame = new Frame.Builder().setBitmap(editedBitmap).build();
          SparseArray<Face> faces = detector.detect(frame);
          txtSampleDesc.setText(null);

          for (int index = 0; index < faces.size(); ++index) {
              Face face = faces.valueAt(index);
              canvas.drawRect(
                      face.getPosition().x,
                      face.getPosition().y,
                      face.getPosition().x + face.getWidth(),
                      face.getPosition().y + face.getHeight(), paint);

              canvas.drawText("Face " + (index + 1), face.getPosition().x + face.getWidth(), face.getPosition().y + face.getHeight(), paint);

              txtSampleDesc.setText(txtSampleDesc.getText() + "FACE " + (index + 1) + "\n");
              txtSampleDesc.setText(txtSampleDesc.getText() + "Smile probability:" + " " + face.getIsSmilingProbability() + "\n");
              txtSampleDesc.setText(txtSampleDesc.getText() + "Left Eye Is Open Probability: " + " " + face.getIsLeftEyeOpenProbability() + "\n");
              txtSampleDesc.setText(txtSampleDesc.getText() + "Right Eye Is Open Probability: " + " " + face.getIsRightEyeOpenProbability() + "\n\n");

              for (Landmark landmark : face.getLandmarks()) {
                  int cx = (int) (landmark.getPosition().x);
                  int cy = (int) (landmark.getPosition().y);
                  canvas.drawCircle(cx, cy, 8, paint);
              }

          }

          if (faces.size() == 0) {
              txtSampleDesc.setText("Scan Failed: Found nothing to scan");
          } else {
              imageView.setImageBitmap(editedBitmap);
              txtSampleDesc.setText(txtSampleDesc.getText() + "No of Faces Detected: " + " " + String.valueOf(faces.size()));
          }
      } else {
          txtSampleDesc.setText("Could not set up the detector!");
      }
  }

  private Bitmap decodeBitmapImage(int image) {
      int targetW = 300;
      int targetH = 300;
      BitmapFactory.Options bmOptions = new BitmapFactory.Options();
      bmOptions.inJustDecodeBounds = true;

      BitmapFactory.decodeResource(getResources(), image,
              bmOptions);

      int photoW = bmOptions.outWidth;
      int photoH = bmOptions.outHeight;

      int scaleFactor = Math.min(photoW/targetW, photoH/targetH);
      bmOptions.inJustDecodeBounds = false;
      bmOptions.inSampleSize = scaleFactor;

      return BitmapFactory.decodeResource(getResources(), image,
              bmOptions);
  }

  private void processCameraPicture() throws Exception {
      Bitmap bitmap = decodeBitmapUri(this, imageUri);
      if (detector.isOperational() && bitmap != null) {
          editedBitmap = Bitmap.createBitmap(bitmap.getWidth(), bitmap
                  .getHeight(), bitmap.getConfig());
          float scale = getResources().getDisplayMetrics().density;
          Paint paint = new Paint(Paint.ANTI_ALIAS_FLAG);
          paint.setColor(Color.GREEN);
          paint.setTextSize((int) (16 * scale));
          paint.setShadowLayer(1f, 0f, 1f, Color.WHITE);
          paint.setStyle(Paint.Style.STROKE);
          paint.setStrokeWidth(6f);
          Canvas canvas = new Canvas(editedBitmap);
          canvas.drawBitmap(bitmap, 0, 0, paint);
          Frame frame = new Frame.Builder().setBitmap(editedBitmap).build();
          SparseArray<Face> faces = detector.detect(frame);
          txtTakenPicDesc.setText(null);

          for (int index = 0; index < faces.size(); ++index) {
              Face face = faces.valueAt(index);
              canvas.drawRect(
                      face.getPosition().x,
                      face.getPosition().y,
                      face.getPosition().x + face.getWidth(),
                      face.getPosition().y + face.getHeight(), paint);

              canvas.drawText("Face " + (index + 1), face.getPosition().x + face.getWidth(), face.getPosition().y + face.getHeight(), paint);

              txtTakenPicDesc.setText("FACE " + (index + 1) + "\n");
              txtTakenPicDesc.setText(txtTakenPicDesc.getText() + "Smile probability:" + " " + face.getIsSmilingProbability() + "\n");
              txtTakenPicDesc.setText(txtTakenPicDesc.getText() + "Left Eye Is Open Probability: " + " " + face.getIsLeftEyeOpenProbability() + "\n");
              txtTakenPicDesc.setText(txtTakenPicDesc.getText() + "Right Eye Is Open Probability: " + " " + face.getIsRightEyeOpenProbability() + "\n\n");

              for (Landmark landmark : face.getLandmarks()) {
                  int cx = (int) (landmark.getPosition().x);
                  int cy = (int) (landmark.getPosition().y);
                  canvas.drawCircle(cx, cy, 8, paint);
              }

          }

          if (faces.size() == 0) {
              txtTakenPicDesc.setText("Scan Failed: Found nothing to scan");
          } else {
              imgTakePicture.setImageBitmap(editedBitmap);
              txtTakenPicDesc.setText(txtTakenPicDesc.getText() + "No of Faces Detected: " + " " + String.valueOf(faces.size()));
          }
      } else {
          txtTakenPicDesc.setText("Could not set up the detector!");
      }
  }

  private Bitmap decodeBitmapUri(Context ctx, Uri uri) throws FileNotFoundException {
      int targetW = 300;
      int targetH = 300;
      BitmapFactory.Options bmOptions = new BitmapFactory.Options();
      bmOptions.inJustDecodeBounds = true;
      BitmapFactory.decodeStream(ctx.getContentResolver().openInputStream(uri), null, bmOptions);
      int photoW = bmOptions.outWidth;
      int photoH = bmOptions.outHeight;

      int scaleFactor = Math.min(photoW/targetW, photoH/targetH);
      bmOptions.inJustDecodeBounds = false;
      bmOptions.inSampleSize = scaleFactor;

      return BitmapFactory.decodeStream(ctx.getContentResolver()
              .openInputStream(uri), null, bmOptions);
  }

  @Override
  protected void onDestroy() {
      super.onDestroy();
      detector.release();
  }
}

A few inferences drawn from the above code are:

  • imageArray holds the sample images that are scanned for faces when the "PROCESS NEXT" button is clicked.

  • The detector is instantiated with the snippet shown below (reproduced from MainActivity above).
    Landmarks add to the computation time, hence they need to be explicitly set.
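
detector = new FaceDetector.Builder(getApplicationContext())
        .setTrackingEnabled(false)
        .setLandmarkType(FaceDetector.ALL_LANDMARKS)
        .setClassificationType(FaceDetector.ALL_CLASSIFICATIONS)
        .build();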

The face detector can be set to "FAST_MODE" or "ACCURATE_MODE" depending on our requirements.

We've set tracking to false in the above code since we're dealing with still images.
It can be set to true to detect faces in a video.
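
A minimal sketch of these builder options (setMode() and the FAST_MODE/ACCURATE_MODE constants belong to the FaceDetector.Builder API; tracking is enabled here only to illustrate the video case):

detector = new FaceDetector.Builder(getApplicationContext())
        .setTrackingEnabled(true)                // true when detecting faces in video
        .setMode(FaceDetector.ACCURATE_MODE)     // or FaceDetector.FAST_MODE for speed
        .build();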

  • The processImage() and processCameraPicture() methods contain the code where we actually detect the faces and draw a rectangle over them.

  • " detector.isOperational()"用于检查手机中当前的Google Play服务库是否支持视觉API(如果不支持,则Google Play会下载所需的本机库以提供支持)。

  • The snippet that actually does the face detection (reproduced from processImage() above) is:
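
Frame frame = new Frame.Builder().setBitmap(editedBitmap).build();
SparseArray<Face> faces = detector.detect(frame);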

  • Once detected, we loop through the "faces" array to find the position and attributes of each face.

  • The attributes of each face are appended in the TextView beneath the button.

  • The same applies when we capture an image using the camera, except that we need to request the storage permission at runtime and save the URI and bitmap returned by the camera application.

Try capturing the photo of a dog and you'll see that the Vision API doesn't detect its face (the API detects human faces only).