Learning Microsoft Cognitive Services

Deciding whether two faces belong to the same person

To decide whether two faces belong to the same person, we are going to call the Verify function of the API. The API allows us to detect when two faces are of the same person, which is called face-to-face verification. Detecting whether a face belongs to a specific person is called face-to-person verification.

The UI will consist of three button elements, two image elements, and one text block element. Two of the buttons will be used to browse for images, which are then shown in each image element. The last button will run the verification. The text block will output the result.

Lay out the UI how you want and bind the different elements to properties in the ViewModel, as we have done previously. In the ViewModel, there should be two BitmapImage properties for the image elements. There should be one string property, containing the verification result. Finally, there should be three ICommand properties, one for each of our buttons.
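
As a rough sketch, the bound properties could be declared as follows. The property names and the RaisePropertyChangedEvent helper are assumptions; adapt them to the ObservableObject-style base class used in your project. Image2Source follows the same pattern as Image1Source:

    // Assumes using System.Windows.Input and System.Windows.Media.Imaging
    private BitmapImage _image1Source;
    public BitmapImage Image1Source
    {
        get { return _image1Source; }
        set
        {
            _image1Source = value;
            RaisePropertyChangedEvent("Image1Source"); // assumed notification helper
        }
    }

    private string _faceVerificationResult;
    public string FaceVerificationResult
    {
        get { return _faceVerificationResult; }
        set
        {
            _faceVerificationResult = value;
            RaisePropertyChangedEvent("FaceVerificationResult");
        }
    }

    public ICommand BrowseImage1Command { get; private set; }
    public ICommand BrowseImage2Command { get; private set; }
    public ICommand VerifyImageCommand { get; private set; }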

Remember to add the UI to the MainView.xaml file as a new TabItem. In addition, add the ViewModel to the MainViewModel.cs file, where you will also need to add a new FaceServiceClient variable. This should be created with the Face API key, which we signed up for in Chapter 1, Getting Started with Microsoft Cognitive Services.
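
One possible way to wire this up in MainViewModel.cs is sketched below. The FaceVerificationVm property name and the constructor layout are assumptions; adjust them to fit your existing MainViewModel:

    private FaceServiceClient _faceServiceClient;

    // The new TabItem's DataContext should be bound to this property (name is hypothetical)
    public FaceVerificationViewModel FaceVerificationVm { get; private set; }

    public MainViewModel()
    {
        // Replace the placeholder with your own Face API key from Chapter 1
        _faceServiceClient = new FaceServiceClient("YOUR_FACE_API_KEY");
        FaceVerificationVm = new FaceVerificationViewModel(_faceServiceClient);
    }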

In the ViewModel, we need to declare the following three private variables:

    private FaceServiceClient _faceServiceClient;
    private Guid _faceId1 = Guid.Empty;
    private Guid _faceId2 = Guid.Empty;

We have seen the first one before; it will access the Face API. The two Guid variables will be assigned when we have run the face detection.

The constructor accepts one parameter, which is our FaceServiceClient object. This is assigned to the previously created variable, as shown in the following code:

    public FaceVerificationViewModel(FaceServiceClient faceServiceClient)
    {    
        _faceServiceClient = faceServiceClient;
        Initialize();
    }

From the constructor, we call the Initialize function to create the DelegateCommand properties, as follows:

    private void Initialize()
    {
        BrowseImage1Command = new DelegateCommand(BrowseImage1);
        BrowseImage2Command = new DelegateCommand(BrowseImage2);
        VerifyImageCommand = new DelegateCommand(VerifyFace, CanVerifyFace);
    }

The browse commands do not need to be disabled at any point, so we just pass on the command function, as follows:

    private async void BrowseImage1(object obj) {
        Image1Source = await BrowseImageAsync(1);
    }

Both functions look similar: we call another function to browse for an image and detect a face, passing along the image number so that we know which image we are dealing with.
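
For reference, the second browse command only differs in the image number and the target property (Image2Source is the assumed name of the second BitmapImage property):

    private async void BrowseImage2(object obj) {
        Image2Source = await BrowseImageAsync(2);
    }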

The BrowseImageAsync function accepts an int as a parameter and returns a BitmapImage object, which we assign to the BitmapImage property bound to our UI. The first part opens a browse dialog and gets the selected image; we pick up the code once we have the image and the path to it.
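
The browse-dialog part is not shown here; a minimal sketch, assuming the standard Microsoft.Win32.OpenFileDialog from WPF, could look like the following. The try block in the next snippet continues where this leaves off:

    private async Task<BitmapImage> BrowseImageAsync(int imageNumber)
    {
        BitmapImage image = null;

        // Assumption: a plain WPF open-file dialog, filtered to common image formats
        var openDialog = new Microsoft.Win32.OpenFileDialog
        {
            Filter = "Image files (*.jpg, *.png)|*.jpg;*.png"
        };

        bool? userClickedOk = openDialog.ShowDialog();

        if (!userClickedOk.HasValue || !userClickedOk.Value)
            return image;

        string filePath = openDialog.FileName;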

We open the image as a Stream object. The Stream object is used in the API call to detect faces. When we call the API, we can use the default call, as it will return the value we are interested in, as shown in the following code:

        try {
            using (Stream fileStream = File.OpenRead(filePath)) {
                Face[] detectedFaces = await _faceServiceClient.DetectAsync(fileStream);

When the detection process has completed, we check to see which image this is and assign the FaceId property to the correct Guid variable using the following code. For this example, we are assuming that there will be only one face per image:

                if (imageNumber == 1)
                    _faceId1 = detectedFaces[0].FaceId;
                else
                    _faceId2 = detectedFaces[0].FaceId;
            }
        }

Finish off the function by adding catch clauses as you see fit. You also need to create and return a BitmapImage object from the selected image.
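
One way to finish it off is sketched below; the exception handling and the BitmapImage construction are left open in the text, so treat this as one possible implementation rather than the book's exact code:

        catch (Exception ex)
        {
            Debug.WriteLine(ex.Message); // requires using System.Diagnostics
        }

        // Create a BitmapImage from the selected file and return it to the caller
        image = new BitmapImage(new Uri(filePath));

        return image;
    }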

Before the button for the face verification is enabled, we perform a check to see if both face IDs have been set using the following code:

    private bool CanVerifyFace(object obj)
    {
        return !_faceId1.Equals(Guid.Empty) && !_faceId2.Equals(Guid.Empty);
    }

The VerifyFace function is not a complex one, as you can see in the following code:

    private async void VerifyFace(object obj) {
        try {
            VerifyResult verificationResult = await _faceServiceClient.VerifyAsync(_faceId1, _faceId2);

With the face IDs set, we can make a call to the VerifyAsync function of the API. We pass on the face IDs as parameters and get a VerifyResult object in return. We use this object to provide the output, as follows:

            FaceVerificationResult = $"The two provided faces is identical: {verificationResult.IsIdentical}, with confidence: {verificationResult.Confidence}";
        }
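
The book leaves the error handling here to you; a minimal way to close the function could be the following (the message text is just a suggestion):

        catch (Exception ex)
        {
            FaceVerificationResult = $"Failed to verify faces: {ex.Message}";
        }
    }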

A successful call will return a 200 OK response. The response data contains a bool value, isIdentical, and a number, confidence.


At the time of writing, the NuGet package for the Face API only allows face-to-face verification. If we call the REST API directly, we can use face-to-person verification as well.

To use face-to-person verification, only one image is required. You pass in the face ID for that image, along with a person group ID and a person ID, which specify the person group to search in and the person within that group. We will cover person groups and persons later in this chapter.
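
Although the NuGet client does not expose it, the REST call for face-to-person verification is just an HTTP POST. The following is a rough sketch: the endpoint URI, region, and JSON field names (faceId, personGroupId, personId) are based on the v1.0 Face API at the time of writing, so check the current API reference before relying on them, and the person group ID and person ID are placeholders you would fill in yourself:

    // Rough sketch; requires using System.Net.Http and System.Text
    using (var client = new HttpClient())
    {
        client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", "YOUR_FACE_API_KEY");

        // The face ID comes from a prior detection call, while the person group ID
        // and person ID identify the person to verify against (placeholders here)
        string requestBody = "{" +
            $"\"faceId\":\"{_faceId1}\"," +
            "\"personGroupId\":\"PERSON_GROUP_ID\"," +
            "\"personId\":\"PERSON_ID\"" +
            "}";

        HttpResponseMessage response = await client.PostAsync(
            "https://westus.api.cognitive.microsoft.com/face/v1.0/verify",
            new StringContent(requestBody, Encoding.UTF8, "application/json"));

        string json = await response.Content.ReadAsStringAsync();
        // On success, the JSON contains isIdentical and confidence, as described earlier
    }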