Face Recognition with CoreImage

A while back some apps started offering face-recognition login, and out of curiosity I began looking into how it works. My first attempt used OpenCV, but the results were poor and the detection accuracy was low (probably because I don't know OpenCV well enough; I'll study it more deeply when I have time). This time I went with Apple's built-in CoreImage framework instead.

1. Import CoreImage.framework

I won't go into detail here on how to import the framework.
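For completeness: once the framework is linked to the target, the umbrella header just needs to be imported in the class that does the detection.

```objc
#import <CoreImage/CoreImage.h>
```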

2. The key face-detection code

/*
 * Face detection: returns the array of CIFaceFeature objects found
 * in the image, or nil if no face was detected.
 */
- (NSArray *)dettectFaceWithImage:(UIImage *)faceImage {
    // CIDetectorAccuracyHigh trades processing speed for better accuracy
    NSDictionary *opts = @{CIDetectorAccuracy: CIDetectorAccuracyHigh};

    CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeFace
                                              context:nil
                                              options:opts];
    CIImage *ciimage = [CIImage imageWithCGImage:faceImage.CGImage];
    // Smile and eye-blink information is only computed when it is
    // requested explicitly via these options
    NSArray *features = [detector featuresInImage:ciimage
                                          options:@{CIDetectorSmile: @YES,
                                                    CIDetectorEyeBlink: @YES}];
    return features.count > 0 ? features : nil;
}

A quick note on the two key symbols used above:

CIDetectorAccuracy is an option key used to set the detection accuracy.

CIDetector is the detection class provided by the CoreImage framework. It can detect faces (CIDetectorTypeFace), rectangles (CIDetectorTypeRectangle), QR codes (CIDetectorTypeQRCode), and text (CIDetectorTypeText).
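As a quick illustration of the other detector types (a sketch, not part of the original app): a QR-code detector is created the same way, and each resulting CIQRCodeFeature exposes the decoded payload through its messageString property.

```objc
NSDictionary *opts = @{CIDetectorAccuracy: CIDetectorAccuracyHigh};
CIDetector *qrDetector = [CIDetector detectorOfType:CIDetectorTypeQRCode
                                            context:nil
                                            options:opts];
// Assumes `ciimage` is the CIImage being analyzed, as in the face example
for (CIQRCodeFeature *feature in [qrDetector featuresInImage:ciimage]) {
    NSLog(@"QR payload: %@", feature.messageString);
}
```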

3. Full implementation

- (void)viewDidLoad {
    [super viewDidLoad];
    self.title = @"CoreImage";

    // Face detection
    UIImage *image = self.FaceImage.image;
    NSArray *result = [self dettectFaceWithImage:image];

    // Transparent overlay that will hold the markers drawn on top of the face
    UIView *viewFace = [[UIView alloc] initWithFrame:_FaceImage.frame];
    viewFace.backgroundColor = [UIColor clearColor];
    [self.view addSubview:viewFace];

    if (result.count > 0) {
        CIFaceFeature *face = [result firstObject];
        if (face.hasSmile) {
            NSLog(@"Smiling");
        }
        if (face.leftEyeClosed) {
            NSLog(@"Left eye closed");
        }
        if (face.rightEyeClosed) {
            NSLog(@"Right eye closed");
        }
        if (face.hasLeftEyePosition) {
            NSLog(@"Left eye position: %@", NSStringFromCGPoint(face.leftEyePosition));
            UIView *v = [[UIView alloc] initWithFrame:CGRectMake(face.leftEyePosition.x - 10, face.leftEyePosition.y - 10, 20, 20)];
            v.backgroundColor = [UIColor blackColor];
            v.alpha = 0.5;
            [viewFace addSubview:v];
        }
        if (face.hasRightEyePosition) {
            NSLog(@"Right eye position: %@", NSStringFromCGPoint(face.rightEyePosition));
            UIView *v = [[UIView alloc] initWithFrame:CGRectMake(face.rightEyePosition.x - 10, face.rightEyePosition.y - 10, 20, 20)];
            v.backgroundColor = [UIColor blackColor];
            v.alpha = 0.5;
            [viewFace addSubview:v];
        }
        if (face.hasMouthPosition) {
            NSLog(@"Mouth position: %@", NSStringFromCGPoint(face.mouthPosition));
            UIView *v = [[UIView alloc] initWithFrame:CGRectMake(face.mouthPosition.x - 10, face.mouthPosition.y - 10, 20, 20)];
            v.backgroundColor = [UIColor blueColor];
            v.alpha = 0.5;
            [viewFace addSubview:v];
        }
        NSLog(@"Face bounds: %@", NSStringFromCGRect(face.bounds));
        if (face.bounds.size.width == face.bounds.size.height) {
            NSLog(@"The face is round :-)");
            UIView *v = [[UIView alloc] initWithFrame:face.bounds];
            v.layer.borderWidth = 2;
            v.layer.borderColor = [UIColor redColor].CGColor;
            [viewFace addSubview:v];
        }

        self.FaceImage.image = image;
        [self.FaceImage sizeToFit];

        // Flip the overlay vertically: Core Image's origin is at the
        // bottom-left, UIKit's at the top-left
        viewFace.transform = CGAffineTransformMakeScale(1, -1);
    }
}

CoreImage exposes many convenient properties like these. Reading their values tells us what was detected about the face: whether it is smiling, whether its eyes are closed, and the positions of the eyes and mouth.

Note this line:

viewFace.transform = CGAffineTransformMakeScale(1, -1);

Core Image and UIKit use different coordinate systems: UIKit's origin is at the top-left corner of the screen, while Core Image's origin is at the bottom-left, so the overlay has to be transformed to compensate.
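The same correction can also be applied point by point instead of flipping the whole overlay. This is only a sketch, assuming the overlay and the analyzed image share the same size, with `ciimage` and `face` as in the code above:

```objc
// Convert a Core Image point (origin bottom-left) to UIKit
// coordinates (origin top-left) by mirroring the y value
CGFloat imageHeight = ciimage.extent.size.height;
CGPoint ciPoint = face.leftEyePosition;          // as reported by Core Image
CGPoint uiPoint = CGPointMake(ciPoint.x, imageHeight - ciPoint.y);
```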

To sum up: personally I feel this is only face detection, not real face recognition. I'll keep digging into face recognition when I have time.

      

Original article: https://www.cnblogs.com/HQBBOX/p/6548552.html