Prime Minister Malcolm Turnbull has pushed state premiers to hand over their drivers' licence databases in order to enhance facial recognition systems, particularly at airports. COAG has agreed, with the ACT insisting that only perfect matches be used for non-counterterrorism purposes. It is hard to find this reassuring.
Like something out of a British spy movie, and sounding just as sinister, this biometric matching system is known in some circles as The Capability. It was introduced in 2015, using passport data. People who have recently travelled overseas might recall using SmartGates.
It is worth recalling that data retention has also taken effect, despite sustained protest from legal and tech experts. A home-affairs super-department was created for Peter Dutton only months ago. The thrust is clear: expand powers in the name of security even without consensus on merit.
Apart from adding millions of images from drivers' licences to the database, the Turnbull government has also proposed detaining terrorism suspects without charge for up to two weeks. It is a monumental break from the current pre-charge regime, which allows an initial day of detention to be extended by up to seven further days through a court process.
As terror law expert Dr Nicola McGarrity says: 'To the best of my knowledge, and based on previous inquiries, there are no situations in which it would have been necessary to hold someone in detention for more than those eight days.' No strong case has been made either about harvesting biometrics.
There is something shocking about our primary form of ID being captured like this, without the courtesy of having been asked, without having committed the slightest infraction.
In places where facial recognition has been deployed, such as the UK, US and Canada, it has not prevented mass murders. Some perpetrators were already known to police, a few for domestic violence. They were more likely to be locals. Their methods were incredibly low-tech, an ironic counterpoint to the massive resources funnelled into sophisticated surveillance software.
"In western countries with vast inequities, particularly an over-incarceration of blacks and Indigenous, the sample base for algorithms may be skewed from the start."
This is not to argue that identification isn't critical to crime investigations, but it bears emphasising that it is only one part. Police still must build their case on evidence, and be able to link that evidence to a person. It is reasonable to be sceptical about claims that automatic facial recognition makes better cops and safer citizens.
That has not been the experience for minorities. Studies of facial recognition software developed in various countries show that there are racial differences in accuracy. In the US, blacks are more likely to be misidentified than other races — errors that could be life-shattering and devastating for communities of colour.
These inaccuracies do not necessarily mean that the tool or its developers are racist. But they do demonstrate how such technologies can perpetuate existing injustices. In western countries with vast inequities, particularly an over-incarceration of blacks and Indigenous, the sample base for algorithms may be skewed from the start.
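To make concrete how a skewed sample base plays out, here is a minimal sketch with entirely invented numbers (none of these figures come from any real system or study): an overall error rate can look reassuringly small while concealing a large disparity between demographic groups.

```python
# Hypothetical illustration only: the group names and counts below are
# invented to show how aggregate accuracy can mask group-level disparity.

def false_match_rate(false_matches, comparisons):
    """Fraction of non-matching comparisons wrongly reported as matches."""
    return false_matches / comparisons

# Invented counts: a group under-represented in training data suffers
# eight times the error rate of the majority group.
groups = {
    "group_A": {"false_matches": 10, "comparisons": 10_000},
    "group_B": {"false_matches": 80, "comparisons": 10_000},
}

for name, counts in groups.items():
    rate = false_match_rate(counts["false_matches"], counts["comparisons"])
    print(f"{name}: false-match rate = {rate:.2%}")

# The blended figure averages the disparity away.
total_fm = sum(c["false_matches"] for c in groups.values())
total_cmp = sum(c["comparisons"] for c in groups.values())
print(f"overall: {false_match_rate(total_fm, total_cmp):.2%}")
```

The point of the sketch is narrow: a single headline accuracy number is not evidence that a system treats all communities alike.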
The availability of sensitive data also lends itself to authoritarian excess. In Maryland, for example, facial recognition software was used to identify those involved in protests following the death of a black man in police custody.
We ought to have learned that oppressive practices that hurt minorities first and the most, affect everyone eventually — even if differently. Privacy advocates have pointed out the possibility of such high-value systems being maliciously breached or disrupted, or even used inappropriately by those with official access.
Electronic Frontier Foundation analyst Jeremy Malcolm points out: 'When it's a password database that's breached, you can just change your password. When it's facial recognition, you can't change your face.' Australian Privacy Foundation chair David Vaile describes it as a potential lifelong liability.
Surveillance scholars have also pointed out the risk of feature creep, in which technologies are used for purposes beyond initial intent. Today, an argument is being made about terrorism. But databases or indices, once they exist, prove malleable to other contexts or agendas, such as civil suits and minor crimes or even entry into public buildings.
Are we really prepared for all this? How confident could anyone be that a future government, equipped with powerful capabilities like these, would remain restrained and benign? Do we even know that the current one has nothing more in store, given how successfully it has implemented other such policies?
Fatima Measham is a Eureka Street consulting editor. She co-hosts the ChatterSquare podcast, tweets as @foomeister and blogs on Medium.