This week, a BBC Panorama investigation uncovered how troubling deepfake images of Trump surrounded by Black supporters found their way across the internet -- yet another example of why regulation of A.I. election disinformation is a necessity in the US. In the courts, Microsoft filed a motion to dismiss part of the New York Times' case against it and OpenAI, which alleges that ChatGPT imperils the very future of journalism; Microsoft compares The Times to Hollywood types panicking in the 1980s about the doomsday device that was the VCR. Speaking of OpenAI, the company clapped back at Elon Musk's lawsuit by posting a selection of Musk's emails on its blog, which, frankly, leaves Elon looking like something of a hypocrite. The company also signed an open letter pledging to build AI that will make for a better future for humanity. If those platitudes don't leave you feeling warm and fuzzy, don't worry -- it isn't just you. Finally, India signalled a move towards active A.I. regulation with a much-criticised and extremely confusing advisory which requires companies (but apparently not startups!) to obtain government clearance before releasing new unreliable models (i.e., every single model) onto the Indian internet.
It’s been a busy week in A.I. Here are some of the stories that caught my eye.
Last week, I noted how Brazil's electoral authority had taken proactive steps to combat A.I. disinformation. This week, just over a month after the furor ignited by the deepfake robocalls in New Hampshire that mimicked the voice of President Joe Biden [1], BBC Panorama provided yet further evidence of why robust regulation is necessary in the United States.
The BBC investigation uncovered several troubling deepfake images depicting Black people supporting Donald Trump [2]. While there isn't any evidence directly connecting the images to Trump's official campaign, they appear to have been fabricated by his actual supporters. The image below, for example, was created by a conservative talk show host based in Florida and posted to a Facebook page with 1 million followers.
Disinformation targeting marginalized communities in the U.S. isn't new. Back in 2016, an influencer who went by the moniker "Ricky Vaughn" tweeted images depicting Black and Hispanic women in convincing "political ads". These ads directed readers to "Avoid the Line. Vote from Home" and "Text 'Hillary' to 59925." By Vaughn's own admission, the intention was to "limit the black turnout"; in fact, a few thousand people did text in. He was later sentenced to seven months in prison and fined $15,000 [3].
Despite these issues, there isn't yet any federal law regulating A.I. in the electoral context. Vaughn, in the aforementioned case, was found guilty in federal court on a charge of conspiracy against rights -- for his part in a scheme to interfere with individuals' right to vote, rather than for the disinformation itself. The Federal Election Commission, the agency responsible for regulating campaign financing, prohibits the impersonation of candidates in certain contexts [4] and is reportedly considering extending this prohibition to deepfakes [5]. The FTC, too, is looking into tackling impersonation [6]. However, there's no telling when these regulations will appear. In the absence of applicable federal law on the books, states have been stepping in to fill the pages. Texas, for example, has made it a criminal offense to create and publish a deepfake video within 30 days of an election with intent to injure a candidate or influence an election [7].
A.I. tools make disinformation campaigns such as those described above troublingly easy to mount, and increasingly believable. In a world permeated by disinformation, sorting fact from fiction becomes incredibly difficult. Echo chambers become less permeable. Trust dissolves. And, against this backdrop of cognitive exhaustion and uncertainty, bad actors -- both domestic and, crucially, foreign -- can far more easily slip in and sway election outcomes.
I would argue that, while regulation of A.I. disinformation doesn't directly solve the threat of foreign influence on an election, it certainly helps clean up the game domestically. Not only should there be no question about doing whatever we can to maintain the integrity of the vote, but a clear stance on this subject could also foster the sort of culture of critical thinking we will need to navigate the uncertainty these technologies will continue to cause.
Back in December 2023, the New York Times sued Microsoft and OpenAI, alleging copyright infringement (by way of the technology behind ChatGPT) to an extent that imperils the very future of journalism. At the heart of the case is a copyright question critical to current A.I. tech: whether training deep neural networks on publicly accessible data constitutes fair use.
On Monday, 4 March 2024, Microsoft filed a motion to dismiss some of the seven claims that comprise the suit, particularly those related to what it deems "unsubstantiated suggestions that the public's use of GPT-based products harms The Times". It compares these to the alarmist arguments made by the entertainment industry against VCR technology in the 1980s -- technology which did not, in the end, destroy the film industry.
The claims Microsoft seeks to have dismissed are as follows:
This follows a fairly similar motion filed by OpenAI on 26 February 2024. Whether Microsoft and OpenAI succeed in having these claims dismissed remains to be seen. If they do, the direct, vicarious, and contributory copyright infringement claims relating to the training of OpenAI's GPT models would remain to be litigated, as well as a trademark dilution claim.
Curiously, OpenAI alleges that the examples the Times produced to support its allegations were the product of someone "hacking" ChatGPT to generate "highly anomalous results" -- behaviour not representative of the vast majority of actual end users. The argument, basically, is that the Times had to fiddle with ChatGPT, even directly inputting entire paragraphs of its articles, before the LLM began reproducing elements of the articles in question.
Let's see where this goes. I plan to write up a primer at some point setting out the recent copyright arguments made by creators (artists, writers, organizations such as The Times) taking on Generative A.I., as well as where they've landed.
Last week, I described how Elon Musk has filed a suit against OpenAI, ostensibly because he's concerned about how it has become for-profit and turned away from its original purpose as a fully "open" organization. The complaint depicts an organization founded on ideals of openness which has since turned to the dark side and gotten into bed with Microsoft -- to the point that GPT-4 is apparently a "de facto Microsoft proprietary algorithm". This latter point is evidenced, according to the complaint, by the fact that every GPT model up to and including GPT-3 was made public, while GPT-4 and OpenAI's work on an alleged Q* model have not been. I'll admit it's a pretty convincing piece of writing.
But OpenAI wasn’t about to take it lying down.
On 5 March 2024, Sam Altman, Greg Brockman, Ilya Sutskever and others at OpenAI published a blog post contesting Musk's allegations and framing the move to for-profit as a years-long acceptance that building AGI "will require far more resources than [they] initially imagined" [8]. Through a series of e-mails between the various OpenAI founders published in the post, what becomes clear is that in 2017 everyone -- Elon included -- was on board with a for-profit shift to raise the vast amount of capital required to achieve the dream. Elon, apparently, initially wanted majority equity, board control, and a position as interim CEO; he wanted OpenAI to merge with Tesla. When it didn't work out this way, he and OpenAI more or less parted ways.
It's easy to be reflexively cynical about Elon given his past behaviour (look, for example, to his shenanigans trying to wiggle out of his Twitter bid in 2022). In this case, however, I think the cynicism is justified. The e-mails certainly make his complaints feel a little hypocritical and a lot self-serving. Well played, OpenAI.
After a very hands-off approach to A.I., India has had a bit of an about-face, signalling potential future regulation with a new advisory. The advisory comes from the Ministry of Electronics and Information Technology and (though not legally binding) requires that tech firms seek government approval before launching "unreliable Artificial Intelligence model(s) /LLM/Generative AI, software(s) or algorithm(s)" (i.e., any model) in the country, and that they label the "possible and inherent fallibility or unreliability" of the output of said models [9]. This comes after an incident in which Google's Gemini model described India's PM, Narendra Modi, as a fascist [10].
This shift in policy was met with vociferous criticism from notables working in A.I. startups (such as Aravind Srinivas, co-founder of Perplexity.ai), concerned that this sort of regulation would stifle India's ability to compete and innovate [11]. Despite clarifications that the advisory is intended only for "significant platforms" and does not apply to start-ups, it seems inevitable that further issues will surface -- a clear reminder of how challenging it is to balance speed with deliberation when it comes to government regulation of evolving technologies.
Next week (13 March 2024), the European Parliament is expected to debate and approve the agreement on A.I. regulation reached with EU member countries. More to come in a few days.
SV Angel (https://svangel.com/), a venture fund with an investment in OpenAI, posted an open letter -- with space to add one's own signature -- committing to "building AI that will contribute to a better future for humanity". Notable signatories include OpenAI, Meta, Google, Salesforce, Y Combinator, Hugging Face, Mistral AI, and many others. Some have questioned the timing: it's convenient, of course, that an OpenAI investor would release such a pledge, with OpenAI as its second signatory, shortly after the company was sued by Elon Musk [12].
The letter proclaims that "[i]t is our collective responsibility to make choices that maximize AI’s benefits and mitigate the risks, for today and for future generations." I don't believe letters like this achieve very much beyond publicity, particularly where, as here, they are nebulous about how the stated goal is actually to be achieved. Still, the sentiment is sound. Developments in A.I. -- many of which may have seismic impacts across many different aspects of our lives -- must be accompanied by robust discussion of their implications. And yes, that discussion should look beyond just our generation. But should the responsibility for facilitating this dialogue be left to companies whose ultimate purpose is the maximization of shareholder value? Probably not.
I wanted to shout out an interesting podcast I listened to this week. Nilay Patel, editor-in-chief at The Verge, does insightful weekly interviews on his podcast, Decoder. This week's episode looked at the rising phenomenon of A.I. in the context of dating and companionship. The idea of A.I. companions might seem like the stuff of sci-fi, but it is very much a thing of the present, with companies such as Replika ("The AI companion who cares"). This use of A.I. raises some super-interesting and important ethical and regulatory questions. Aside from the obvious ones around privacy are the not-so-obvious issues, such as the fact that these companies treat their offerings as software, while the people forming relationships with them view them as much, much more. A software provider often retains the right to drastically shift and modify its product. But should it be able to fundamentally change a product that other humans have formed a meaningful relationship with? Not to mention... what does this mean for the future of human-to-human romantic interaction?
I might look into this further with a later post, but for now, happy listening! Check out this episode of Decoder here.
[1] Seitz-Wald, Alex. “Democratic Operative Admits to Commissioning Fake Biden Robocall That Used AI.” NBC News, February 25, 2024. https://www.nbcnews.com/politics/2024-election/democratic-operative-admits-commissioning-fake-biden-robocall-used-ai-rcna140402.
[2] Spring, Marianna. “Trump Supporters Target Black Voters with Faked AI Images.” BBC News, March 4, 2024. https://www.bbc.com/news/world-us-canada-68440150.
[3] “Social Media Influencer Sentenced for Election Interference in 2016 Presidential Race.” Office of Public Affairs | United States Department of Justice, October 18, 2023. https://www.justice.gov/opa/pr/social-media-influencer-sentenced-election-interference-2016-presidential-race.
[4] 11 CFR § 110.16 (2023) - Prohibitions on fraudulent misrepresentations.
[5] Brennan Center for Justice. “Regulating AI Deepfakes and Synthetic Media in the Political Arena,” December 12, 2023. https://www.brennancenter.org/our-work/research-reports/regulating-ai-deepfakes-and-synthetic-media-political-arena.
[6] Federal Trade Commission. “FTC Proposes New Protections to Combat AI Impersonation of Individuals,” February 15, 2024. https://www.ftc.gov/news-events/news/press-releases/2024/02/ftc-proposes-new-protections-combat-ai-impersonation-individuals.
[7] Tex. Election Code - ELEC § 255.004 (2021) - True Source of Communication.
[8] “OpenAI and Elon Musk,” March 5, 2024. https://openai.com/blog/openai-elon-musk.
[9] Sinha, Amber. “The Many Questions About India's New AI Advisory.” Tech Policy Press, March 6, 2024. https://www.techpolicy.press/the-many-questions-about-indias-new-ai-advisory/.
[10] Kalra, Aditya, and Munsif Vengattil. “India Asks Tech Firms to Seek Approval before Releasing ‘unreliable’ AI Tools.” Reuters, March 4, 2024. https://www.reuters.com/world/india/india-asks-tech-firms-seek-approval-before-releasing-unreliable-ai-tools-2024-03-04/.
[11] Singh, Manish. “India Reverses AI Stance, Requires Government Approval for Model Launches.” TechCrunch, March 3, 2024. https://techcrunch.com/2024/03/03/india-reverses-ai-stance-requires-government-approval-for-model-launches/.
[12] Metinko, Chris. “Eye On AI: Oh The Humanity — Everybody’s In A Rush To Reassure AI Development Will Help Us.” Crunchbase News, March 6, 2024. https://news.crunchbase.com/ai/musk-openai-lawsuit-altman-msft/.