
Artificial intelligence can infringe on real-life civil liberties

Ever applied for a job and wondered why you never heard back? Curious about why you got overlooked for that house you were trying to buy?

Anyone who uses a computer to seek employment, housing, educational opportunities or goods and services is affected by sets of data that categorize searchers based on keywords. Those categorizations come from the algorithms used in artificial intelligence software.

What many people don’t realize is that there are gaps in AI regulations, some of which allow infringement on civil rights. That can include housing and employment opportunities.

RELATED: Civil liberties at risk as crisis managers turn to tech solutions

RELATED: War crimes investigations aided by artificial intelligence

As an example, Amazon stopped using its resume-screening algorithm for job applicants a few years back. The software was downgrading any resume that contained the term “women’s” and gave lower scores to applicants who had attended all-female colleges.
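To make the mechanics concrete, here is a minimal sketch, in Python, of how a keyword-weighted resume scorer can quietly encode that kind of bias. It is a hypothetical illustration, not Amazon’s actual system: the keywords, the weights and the penalty on the word “women’s” are all assumptions chosen to mirror the reported behavior.

```python
# Hypothetical keyword-weighted resume scorer -- NOT Amazon's system.
# It only illustrates how a single negative weight on one word can
# penalize an otherwise identical applicant.

KEYWORD_WEIGHTS = {
    "python": 2.0,
    "managed": 1.5,
    "captain": 1.0,
    "women's": -2.0,  # a weight like this is where the bias lives
}

def score_resume(text: str) -> float:
    """Sum the weights of every known keyword found in the resume text."""
    return sum(KEYWORD_WEIGHTS.get(word, 0.0) for word in text.lower().split())

resume_a = "Captain of the chess club, managed a Python project"
resume_b = "Captain of the women's chess club, managed a Python project"

print(score_resume(resume_a))  # 4.5
print(score_resume(resume_b))  # 2.5 -- lower, solely because of one word
```

In a real screening system such a weight would usually be learned from historical hiring data rather than written by hand, which is exactly how past discrimination can be reproduced at scale.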

But at least one study suggests that existing laws can be adapted to allow fair use of artificial intelligence without writing entirely new ones.

A copyrighted doctoral dissertation written by Carlos Ignacio Gutierrez Gaviria for the Pardee RAND Graduate School states that AI’s growing applications have the potential to affect public policy significantly by “generating instances where regulations are not adequate to confront the issues faced by society, also known as regulatory gaps.”

His objective was to understand how AI affects U.S. public policy by exploring the role of AI in those regulatory gaps.

In an article published by the Brookings Institution, Gutierrez said one of his main findings is that “few, if any, instances exist where AI’s challenges to policy require an overhaul of government. Adaptations to existing policies can serve as a solution to a gap.”

U.S. policies are equipped to withstand the social challenges brought on by AI, he said. “It appears that methods and applications of AI will not overtake the policymaking process to the point of requiring completely new approaches for the administration of government.”

Still, there is work to be done.

Artificial intelligence choices could be critical

“In the modern world, artificial intelligence helps us navigate through a sea of information, curating our online experience,” said Lindsay Nako, director of litigation and training for the Impact Fund, a nonprofit that provides strategic leadership and support for litigation to achieve economic and social justice.

“Which song will I want to listen to next? Is an email important or spam? What ads will interest me? Artificial intelligence prevents us from being inundated with irrelevant information — and that raises important questions.”

If an artificial intelligence program serves up a different song from the one she requested, it may be no big deal. But if it shows her ads for one type of job and not the type she is seeking, it could be illegal.

Nako, in an article for the Impact Fund, describes AI as the science and engineering of making intelligent machines, which use algorithms and data to perform functions similar to those performed by humans.

“The first generation of artificial intelligence in the 1980s and 1990s could apply rules written by humans to data to create outputs,” she said. “The second generation in the 2000s could ‘learn,’ meaning that programs could take data and guidance provided by humans, independently identify rules, and then apply those rules to new data to create outputs. The third generation of artificial intelligence, which we are currently in, seeks to incorporate ‘deep learning.’ Deep learning will permit programs to autonomously learn rules and automatically judge new data to create outputs, without human intervention.”
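Nako’s distinction between the first two generations can be illustrated with a short sketch: a rule written by a human versus a rule a program learns from labeled examples. The Python snippet below is an assumption-laden illustration only; it uses scikit-learn and a made-up spam-filtering dataset, and is not drawn from her article.

```python
# Contrast between a "first generation" human-written rule and a
# "second generation" rule learned from labeled examples.
# Requires scikit-learn; the tiny spam dataset is invented for illustration.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# First generation: a human writes the rule explicitly.
def rule_based_is_spam(message: str) -> bool:
    return "free money" in message.lower()

# Second generation: the program identifies its own rule from examples.
messages = [
    "Claim your free money now",
    "Meeting moved to 3pm",
    "Free money if you click here",
    "Lunch tomorrow?",
]
labels = [1, 0, 1, 0]  # 1 = spam, 0 = not spam

vectorizer = CountVectorizer()
model = LogisticRegression().fit(vectorizer.fit_transform(messages), labels)

new_message = "You have won free money"
print(rule_based_is_spam(new_message))                        # human-written rule
print(model.predict(vectorizer.transform([new_message]))[0])  # learned rule
```

The third generation Nako describes goes a step further: with deep learning, the program also discovers which features of the data matter, rather than relying on a human-chosen representation such as the word counts above.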

Artificial does not mean equitable

While avoiding human bias may sound noble, it does not guarantee equitable results, Nako said.

“Artificial intelligence prioritizes user preference, while our civil rights laws prioritize equality of opportunity.” AI-based products can potentially screen out vulnerable groups, such as older people, people of color, women, those with disabilities or those in the LGBTQ community. And that could be illegal. “Our civil rights laws prohibit targeting advertisements and making employment or housing decisions based on personal characteristics. This would also prohibit use of these characteristics as relevant data for AI decision-making.”

Policymakers face a critical decision, said Gutierrez, who is a Governance of Artificial Intelligence Fellow at the Sandra Day O’Connor College of Law, Arizona State University.

AI has made huge strides in recent years, said Mathias Risse, Harvard professor of philosophy and public policy, in his paper “Human Rights and Artificial Intelligence: An Urgently Needed Agenda.”

“AI is increasingly present in our lives, reflecting a growing tendency to turn for advice, or turn over decisions altogether, to algorithms. By ‘intelligence,’ I mean the ability to make predictions about the future and solve complex tasks. ‘Artificial’ intelligence, AI, is (an) ability demonstrated by machines, in smart phones, tablets, laptops, drones, self-operating vehicles or robots that might take on tasks ranging from household support, companionship of sorts, even sexual companionship, to policing and warfare.”

AI must be considered with an eye toward the future and the laws that need to be in place to regulate it, he said.