AI, ML and the Far Right



In 2018, former Trump strategist Steve Bannon used location data from smartphones to target churchgoers with get-out-the-vote advertisements.

“We’re sending a message from CatholicVote, not to go vote for a specific guy,” Bannon said. “But for all Catholics to go out and do their duty and they’re going to put out a thing to support President Trump.”

According to The Guardian, in 2018 executives at the data science firm Cambridge Analytica boasted of their role in getting Donald Trump elected. The firm, also at the heart of the Facebook data breach scandal at the time, said it used 'unattributable and untrackable' ads in its Trump campaign. Neither Hillary Clinton nor the American public knew that what had hit them could make a stellar sci-fi thriller. Only this was real.

The disgraced company’s head of data, Alex Tayler, added: “When you think about the fact that Donald Trump lost the popular vote by 3 million votes but won the electoral college vote, that’s down to the data and the research.”

That was back in 2016 — it’s a history we can’t ignore. The heads of two prominent artificial intelligence firms came under public scrutiny in 2020 for ties to far-right organizations. A report by Matt Stroud at OneZero identified the founder and CEO of surveillance firm Banjo, Damien Patton, as a former member of the Dixie Knights of the Ku Klux Klan who was charged with a hate crime for shooting at a synagogue in 1990. The report led the Utah Attorney General’s office to suspend a contract worth at least $750,000 with the company, and the firm reportedly also lost a $20.8 million contract with the state’s Department of Public Safety.

This news reveals deep and extensive connections between the far right and AI-driven surveillance firms that are contracting with law enforcement agencies and city and state governments.

We need to be urgently asking critical questions:

How is this persistent strain of right-wing and reactionary politics currently manifesting within the tech industry?

What do the views held by these AI founders suggest about the technologies they are building and bringing into the world?

And — most importantly — what should we do about it?

These firms have access to extensive data about the activities of members of the public — for example, the state of Utah gave Banjo access to real-time data streaming from the state’s traffic cameras, CCTV, and 911 emergency systems, among other things, which the company combined with social media and other sensitive sources of data. It combs through these sources to, as the company describes it, ‘detect anomalies’ in the real world.

We know that many AI systems reproduce patterns of racially biased social inequality. For example, many predictive policing systems draw on dirty data: in many jurisdictions, law enforcement agencies are using data produced during periods of flawed, racially biased, and sometimes unlawful policing practices to train these systems. Unsurprisingly, this means that racial bias is endemic in ‘crime-prevention’ analytic systems: as research on predictive policing by the scholar Sarah Brayne indicates, these data practices reinscribe existing patterns of inequality and exacerbate the over-policing and surveillance of communities of color.

But what we’re seeing here is something quite different. Another firm, Clearview AI, appears to have been designed with explicitly racist use cases in mind: according to a Huffington Post report, Chuck Johnson posted in January 2017, by which point Immigration and Customs Enforcement (ICE) was operating under the Trump administration, that he was involved in “building algorithms to ID all the illegal immigrants for the deportation squads,” and he bragged about the capabilities of the facial recognition software he was working on.

Clearview AI has since signed a paid contract with ICE, which is using predictive analytics and facial recognition software to accelerate the detention and deportation of undocumented people in the United States, even as the pandemic unfolds around us.

The bankrupt Cambridge Analytica, Clearview AI, and Banjo are only indicators of a much deeper and more extensive problem. We need to take a long, hard look at the fascination with the far right among some members of the tech industry, putting the politics and networks of those creating and profiting from AI systems at the heart of our analysis. And we should brace ourselves: we won’t like what we find.


1. Once at the center of a global storm for manipulating elections in the US and other countries, Cambridge Analytica is a non-entity today. It filed for bankruptcy in May 2018 after suffering a sharp drop in business in the aftermath of the whistleblower revelations about how the firm accessed Facebook users’ data through a third-party app and used it for targeted political advertising.

2. Citation: AI Now, a research institute at New York University.
