This topic has come up once or twice already, but I thought I’d raise it again in the light of experiments I’ve made with both pii-blur (TrekView) and the understand.ai anonymizer. I’ve had variable results with the former but quite encouraging results with the latter.
With the understand.ai anonymizer (https://github.com/understand-ai/anonymizer) I get pretty good results for faces that are clearly visible. I don’t have many panos with license plates, so that’s largely untested as yet, but one with a vaguely-visible license plate (which you can’t read properly even unblurred) was not detected.
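For reference, this is roughly how I’ve been invoking the anonymizer. The script path and flags are based on my reading of the project README, so double-check them against the current repo before relying on them; the threshold values in particular are just illustrative:

```shell
# Clone and set up the anonymizer (see the repo README for full setup details)
git clone https://github.com/understand-ai/anonymizer.git
cd anonymizer

# Run it over a folder of panos. The detection thresholds control
# sensitivity (lower = more detections, which may catch fainter faces
# and plates at the cost of false positives). Flag names and script
# path are taken from the README as I read it -- verify against the
# repo before use.
PYTHONPATH=$PYTHONPATH:. python anonymizer/bin/anonymize.py \
    --input /path/to/panos \
    --image-output /path/to/blurred_panos \
    --weights /path/to/weights \
    --face-threshold 0.2 \
    --plate-threshold 0.2
```

Lowering the thresholds might be worth trying for cases like that vaguely-visible plate, though it will also blur more things that aren’t actually faces or plates.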
However, I have a question about our legal obligations (my server is in Germany and I am in the UK, so both in GDPR-land, though the software could potentially be deployed anywhere in the world; this is the OpenWanderer project that Eesger and I are working on, which he’s mentioned elsewhere): to what extent do we need to blur faces and license plates?
With the understand.ai tool I can blur faces that are clearly visible, and the TrekView pii-blur tool adds further blurring to entire people and cars, though its detection isn’t as good as understand.ai’s.
However, faces further away from the camera are not reliably detected and blurred - though those faces are not clearly visible anyway. Do we therefore have to blur ALL people appearing in a panorama, irrespective of whether their face is clearly visible, or just the clearly visible faces? From a privacy and ethical point of view I’d have thought just the clearly visible faces, but IANAL.
Initial experiments here, incidentally.