This week, Brazil adopted new A.I. regulation tackling election disinformation. It requires disclosing A.I.-generated content as such, restricts chatbot use, and prohibits deepfakes. In an important election year across the globe, this'll be a space to watch closely. Another lawyer (this time in B.C., Canada) fell victim to LLM hallucinations and received a costs penalty for her blunder. Meanwhile, a Caribbean court is turning to A.I. for automatic transcript generation. Finally, Elon's suing OpenAI, ostensibly because the company is taking money from Microsoft and that isn't what OpenAI was meant to be. While it's hard not to be skeptical about his intentions, the case certainly opens up interesting questions around AGI.
Here are some of the stand-out A.I. stories from last week.
On Tuesday, 27th February, Brazil’s top electoral court (the TSE) adopted a regulation controlling the sorts of disinformation that might unduly influence the country’s upcoming (2024) elections [1]. Among other things, the regulation requires that A.I.-generated content be disclosed as such, restricts the use of chatbots, and prohibits deepfakes outright.
But is the approach effective? Depends on who's held accountable and what the end goal is. In Brazil's case, it appears the sights are levelled at candidates in breach, who, according to Alexandre de Moraes (president of the TSE), "will face the penalty of having their registration revoked and, if they have already been elected, their mandate revoked." [3]
So it's got teeth.
Won't this mean that deepfakes will just percolate under the surface in WhatsApp groups, Telegram channels, and the like? Probably. But forcing candidates (and, ideally, their affiliates) to think hard before stooping to these tactics or reposting material they're unsure about should still be helpful. Particularly where the price to pay is their spot on the ballot.
That's not to say that tech companies that serve up this sort of content shouldn't also be taken to task. In Brazil, TSE Resolution 23.714 of 2022, Article 2, gives the TSE power to order social media companies to remove offending disinformation, under threat of significant financial penalties that start racking up an hour after notification.
Ultimately, A.I. is going to make it ever easier to conjure up the sorts of believable fictions that may influence elections. As such, information integrity is a fundamental challenge democratic countries will have to wrestle with. Surely, as a starting point -- and no matter our political affiliations -- we all want a robust electoral system that is resilient to bad actors who can increasingly use A.I. as a tool to ‘legitimize’ their practices. I, for one, want to vote for whomever I think is the right person for the job, on the basis of the right (read: real and relevant) information available about them.
I'm working on a deeper dive into how other jurisdictions are tackling this pressing issue. In a year when many eyes will be on the lead-up to the U.S. elections in November, it's a conversation we need to be having. Watch this space.
A lawyer in British Columbia has been ordered to pay special costs in a novel case of ChatGPT (mis)use in Canadian courts [4]. The lawyer submitted an application that cited two non-existent cases, which were only discovered to be A.I. hallucinations when opposing counsel was unable to find them by citation alone. The B.C. Supreme Court judge found no "intent to deceive" but nonetheless ordered the lawyer to pay costs for the time it took opposing counsel to uncover that the two cases were non-existent.
The moral of the story is simple: approach LLMs with eyes wide open.
For most people, LLMs appear to be miraculous black boxes that can conjure up the answer to any question. However, ignorance of their limitations is obviously no excuse when putting them to use in a professional advisory capacity. It's important to bear in mind what they really are. Shanahan puts it rather concisely in a paper I highly recommend: LLMs are models that we can query, and their purpose is to "generate statistically likely sequences of words" [5]. Nothing more, nothing less. Hallucinations are an almost inevitable byproduct of these models, owing to issues such as overfitting and the quality of training data. They should be expected.
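To make that "statistically likely sequences of words" point concrete, here is a minimal sketch -- assuming Python, PyTorch, and the Hugging Face transformers library, with the small gpt2 model used purely for illustration. All it shows is that a language model produces a probability distribution over possible next tokens; whether the "case" it goes on to name actually exists is simply not part of the computation.

```python
# Minimal sketch: a causal language model only scores possible next tokens.
# Assumes `pip install torch transformers`; gpt2 is an illustrative stand-in for any LLM.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "The leading Canadian case on this point is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits[0, -1]   # raw scores for every token in the vocabulary
probs = torch.softmax(logits, dim=-1)        # convert scores into a probability distribution

# Print the five most "statistically likely" continuations. Nothing here checks
# whether any continuation corresponds to a real authority -- hence hallucinations.
top = torch.topk(probs, k=5)
for p, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode([int(token_id)])!r}: {float(p):.3f}")
```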
All that said, it’s not the first time and it won’t be the last time. LLMs save time and are a useful drafting tool. But, as it stands, they can’t yet reliably replace good-ol'-fashioned research.
Meanwhile, the Eastern Caribbean Supreme Court (ECSC) is soft-launching an A.I.-powered transcript tool in St Kitts and Nevis [6]. Ideally, this technology should save a tremendous amount of manual labour. The same lessons from the B.C. blunder mentioned above apply: hopefully, a pair of human eyes reviews the transcripts for any nonsense before they become part of any sort of official record.
Another week, another Elon story. Cynicism aside, this one raises some intriguing A.I.-related questions.
In a lawsuit filed against OpenAI on Thursday, 29 February, Musk alleges breach of contract, arguing that the company has deviated from its original purpose of pursuing AGI openly for the benefit of the public [7]. According to reports, there's no obvious founding agreement per se -- rather, the alleged agreement is pieced together from conversations and statements drawn from the company's documents of incorporation [8]. There's also apparently a claim of breach of fiduciary duty in there.
The wildest element, however, might be that Elon wants the court to determine that OpenAI has, in fact, created AGI with its latest model. The embedded demand is that this technology must be released to the public and should not benefit any of OpenAI's leadership or Microsoft (which has invested billions into a partnership with OpenAI for its tech).
But it's worth backing up for a moment. If this goes the way Elon intends, it will lead to the court -- indeed, a jury -- having to determine the bar for when something counts as AGI. That is arguably a crucial philosophical question still doing the rounds. It might be hard to find consensus on how to define AGI even amongst A.I. experts, let alone lawyers and laypeople. And with a determination of AGI comes a suite of other ethical issues about legal standing, and even discussions around the boundaries of rights that may or may not be afforded to AGI.
I'm not so sure the court is equipped to step into this particular minefield. That said, stranger things have happened.
[1] Mari, Angelica. 2024. “Brazil Outlines Rules For AI Use During Elections.” Forbes, February 28, 2024. https://www.forbes.com/sites/angelicamarideoliveira/2024/02/28/brazil-outlines-rules-for-ai-use-during-elections/?sh=8a78ac11f6ab.
[2] For a deeper regulatory overview, see: Rubio, Rafael, and Vitor de Andrade Monteiro. 2023. “Preserving Trust in Democracy: The Brazilian Superior Electoral Court’s Quest to Tackle Disinformation in Elections.” South African Journal of International Affairs 30 (3): 497–520. https://doi.org/10.1080/10220461.2023.2274860.
[3] Grattan, Steven. 2024. “Brazil Justice Moraes Warns Political Candidates Not to Use AI Against Opponents.” Reuters, February 29, 2024. https://www.reuters.com/world/americas/brazil-justice-moraes-warns-political-candidates-not-use-ai-against-opponents-2024-02-29/.
[4] Proctor, Jason. 2024. “B.C. Lawyer Reprimanded For Citing Fake Cases Invented by ChatGPT.” CBC, February 27, 2024. https://www.cbc.ca/news/canada/british-columbia/lawyer-chatgpt-fake-precedent-1.7126393.
[5] Shanahan, Murray. 2022. “Talking About Large Language Models.” arXiv preprint. https://doi.org/10.48550/arxiv.2212.03551.
[6] EIN News. 2024. “ECSC Initiates Pilot Project in St Kitts and Nevis With AI Technology to Revolutionise Court Proceedings.” EIN News, March 2, 2024. https://tech.einnews.com/pr_news/692890894/ecsc-initiates-pilot-project-in-st-kitts-and-nevis-with-ai-technology-to-revolutionise-court-proceedings.
[7] Reuters. 2024. “Elon Musk Launches Lawsuit Against OpenAI, Sam Altman. Why?” Global News, March 1, 2024. https://globalnews.ca/news/10328745/elon-musk-openai-sam-altman-lawsuit/.
[8] Livni, Ephrat, Lauren Hirsch, Sarah Kessler, and Michael J. De La Merced. 2024. “The Big Questions Raised by Elon Musk’s Lawsuit Against OpenAI.” The New York Times, March 2, 2024. https://www.nytimes.com/2024/03/02/business/dealbook/the-big-questions-raised-by-elon-musks-lawsuit-against-openai.html.