Laws need to be updated to protect against abuse from surveillance tech

Canada's laws are "woefully out of date" on protecting people's privacy rights from the potential harms of technology like facial recognition and artificial intelligence, according to a deputy for the country's privacy watchdog.

"Our private sector law is now over 20 years old, so it was passed and conceived of before social media such as Facebook, [which] was only founded in 2004," Gregory Smolynec, deputy commissioner for policy and promotion in the Office of the Privacy Commissioner of Canada (OPC), told Spark host Nora Young.

Today, tools one might have once read about in a science fiction novel are being used in tandem by companies to better target potential customers, or by law enforcement with the stated intention of identifying criminals.

The OPC describes its role as working to "protect and promote the privacy rights of individuals," including reporting on how citizens' private information is handled by both the public and private sectors.

Smolynec said lawmakers likely didn't anticipate how quickly tech like AI or facial recognition technology (FRT) could evolve to the point where it could threaten individuals' privacy, whether through inadvertent or deliberate misuse.

Ethics of facial recognition

To some, FRT promises not only to identify faces and match them to images, but also to do things like detect emotion, or predict who might be truthful or make a good employee.

Luke Stark, assistant professor at Western University's Faculty of Information and Media Studies, isn't convinced it's that sophisticated — or that it's even a good idea.

"Facial recognition really just looks at the kind of patterns of lightness and darkness on a face. It's not doing much to identify you. It's just looking for patterns of facial ridges and other features," he said.

More worryingly, he argued, this purportedly mechanical categorization of facial features quickly becomes "the textbook definition of racism."

Past reports have found that FRT systems are worse at identifying the faces of people of colour, he said. Some have argued the problem is merely technical: include a wider range of people's faces and skin tones in the database, and you'll make it more accurate.

Stark draws the opposite conclusion. "At its core, this is a fundamentally dangerous technology," he said.