Hi,
That's pretty shocking to hear. When 'matchPerson' mode returns someone of a completely different gender and race, something is definitely off; that should not be happening.
First, let's clarify the modes. 'matchFace' just looks for visually similar faces, ignoring identity. 'matchPerson' is supposed to be smarter: it uses a person model to understand the actual identity, filtering out faces that might look similar but belong to different people. It's the mode with the internal threshold you were trusting.
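To make the difference concrete, here is a minimal sketch of how the `mode` field is set on a Find Similar request. It assumes the Face API v1.0 REST body shape (`POST /face/v1.0/findsimilars`); the helper name and the placeholder IDs are mine, not part of your setup.

```python
def build_find_similar_body(face_id, large_face_list_id, mode):
    """Build the JSON body for a Find Similar call with an explicit mode.

    Assumes the Face API v1.0 REST shape; 'matchPerson' is the default mode
    on the service, but setting it explicitly documents your intent.
    """
    if mode not in ("matchPerson", "matchFace"):
        raise ValueError("mode must be 'matchPerson' or 'matchFace'")
    return {
        "faceId": face_id,                      # face detected earlier via /detect
        "largeFaceListId": large_face_list_id,  # the list to search
        "maxNumOfCandidatesReturned": 10,
        "mode": mode,                           # 'matchPerson' filters by identity
    }

# Placeholder IDs for illustration only:
body = build_find_similar_body("FACE_ID", "LARGE_FACE_LIST_ID", "matchPerson")
```

Switching the same request to `"matchFace"` is a quick way to compare the two behaviors on the face that's misbehaving.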
The fact that it's failing so badly suggests something may have changed on the backend. Microsoft occasionally updates its underlying AI models to improve accuracy, but sometimes these updates can introduce regressions for specific use cases.
Your immediate fix: start checking the confidence score. Even with 'matchPerson', never treat the top result as authoritative without verifying its confidence. Set a threshold for your application. What's acceptable? 0.7? 0.8? You'll need to test to find the right balance for your data.
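A minimal sketch of that client-side check, assuming the response shape Find Similar returns (a list of candidates with `persistedFaceId` and `confidence`). The 0.7 threshold and the sample data are assumptions for illustration; tune the threshold on known matching and non-matching pairs from your own data.

```python
CONFIDENCE_THRESHOLD = 0.7  # assumed starting point; raise or lower after testing

def filter_candidates(candidates, threshold=CONFIDENCE_THRESHOLD):
    """Keep only candidates whose confidence meets the threshold, best first."""
    kept = [c for c in candidates if c["confidence"] >= threshold]
    return sorted(kept, key=lambda c: c["confidence"], reverse=True)

# Fabricated response-shaped data for illustration:
results = [
    {"persistedFaceId": "a", "confidence": 0.91},
    {"persistedFaceId": "b", "confidence": 0.55},
]
good = filter_candidates(results)  # only face "a" survives the 0.7 cut
```

If `good` comes back empty, treat it as "no reliable match" rather than falling back to the raw top result.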
To troubleshoot, run the same face against your person group using the 'identify' call. Does it also return the wrong person with high confidence? If yes, the issue may lie with the quality of the faces enrolled in your person group. If 'identify' works correctly but 'find similar' does not, the issue is likely isolated to the 'find similar' API.
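For that cross-check, here's a hedged sketch of the Identify request body. It assumes the Face API v1.0 layout (`POST /face/v1.0/identify`); the helper and the 0.7 default are my placeholders, and you'd pass the same detected `faceId` you give Find Similar so the two calls are comparable.

```python
def build_identify_body(face_id, person_group_id, threshold=0.7):
    """Build the JSON body for POST /face/v1.0/identify (assumed v1.0 shape).

    Pass the same faceId you use with Find Similar so the results are
    directly comparable.
    """
    return {
        "personGroupId": person_group_id,
        "faceIds": [face_id],              # identify accepts a batch; one is enough here
        "maxNumOfCandidatesReturned": 5,
        "confidenceThreshold": threshold,  # server-side cut, unlike Find Similar
    }

# Placeholder IDs for illustration only:
identify_body = build_identify_body("FACE_ID", "PERSON_GROUP_ID", 0.8)
```

Note that Identify lets you pass `confidenceThreshold` in the request itself, so the service does the filtering for you; that makes it a clean baseline for comparing against your client-side filtering of Find Similar results.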
Finally, please report this behavior to Azure support. Provide example request IDs and the specific faces that produce the poor matches so they can investigate whether there's a service-side issue or a bug in the current model. Your feedback could help them fix it for everyone.
Really hope you get back to those great results soon; that kind of accuracy is why we use these services.
Best regards,
Alex
P.S. If my answer helped you, please accept it, and feel free to follow me here on Q&A. Thanks!