This week, the European Parliament passed the Artificial Intelligence Act (AIA) -- sweeping legislation governing the deployment of A.I. models and systems in the European market. The AIA also applies to A.I. companies situated outside of the E.U. that deploy their services in a way that is accessible within the E.U., and some of its provisions will kick in as soon as six months after the Act enters into force. India, meanwhile, pulled back from last week's advisory, which had signalled a turn towards more onerous regulatory requirements. Electoral integrity is front of mind around the world this year, and the European Commission has sent information requests to a number of major tech companies (including Google, X and TikTok) for details on generative A.I. and deepfake risks. In the U.S., Biden set out his 2025 budget request, which includes some $20 billion across major agencies for A.I. initiatives. Also in America, a Florida man was barred from practicing law for a year for submitting hallucinated case citations and apparently being entirely unrepentant. Are we surprised? Finally, Mercedes is trialing humanoid robots for mundane tasks in a slow shuffle towards a robotic future.
This week saw significant news on the regulatory front. First and foremost...
The major news this week is that the European Parliament (the E.U.'s directly elected legislative body) passed the Artificial Intelligence Act (AIA) by 523 votes to 46, with 49 abstentions [1]. This is the first sweeping piece of A.I. legislation from a major regulatory body with real clout over the big tech players. On top of that, it's a bit of a territory grab by the E.U., in that it inevitably creates a benchmark against which other, slower-to-act regulators will likely have to measure themselves.
You can find the provisional text here. I spent a fair amount of time reading through this gargantuan document this past week, so if you don't feel like slogging through it yourself, never fear... I'll be posting a more detailed outline of the AIA along with a flowchart to help explore when and to whom the Act applies.
In the meantime, here are a few headline points to start you off:
The intention of the AIA is to "promote the uptake of human-centric and trustworthy artificial intelligence". The notion of human-centric A.I. is one of developing technology that benefits and augments human abilities, rather than entirely supplanting them. This goal is balanced against the protection of European fundamental rights: health, safety, democracy, the rule of law, and environmental protection. The E.U. also seems to be acutely aware of the tension between innovation and regulation -- as highlighted in Article 1 -- although exactly how this balancing act plays out will depend on how effectively the relevant authorities and A.I. providers play together.
The AIA's definition of an A.I. system aligns with the OECD definition: "a machine-based system designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments" (see Article 3(1)).
Adoption of the OECD definition seems to be an acknowledgement that A.I. regulation should have globally consistent guiding principles, given the borderless nature of the companies innovating in the field and the fact that virtual deployment means access by anyone, anywhere. Coordinated definitions, which ensure that everyone's talking about the same thing, are a necessary starting point.
The AIA takes a risk-based approach [2]. Of particular note are prohibited practices, such as social scoring systems or real-time biometric identification for the purposes of law enforcement (Article 5), and high-risk systems, which are permitted but come with more stringent certification standards, systemic risk-mitigation requirements and reporting obligations (Article 6(2) and Annex III).
The AIA applies to, amongst others:
Super important note -- if you're a provider, it's irrelevant whether you're based in the E.U. or in another country (Article 2(1)(a)). If you're a deployer, the AIA only applies to you if you're based in the E.U. (Article 2(1)(b)).
The exception to the above is that the AIA captures any provider or deployer based outside of the E.U. where the output produced by the relevant system is used in the E.U. (Article 2(1)(c)). I've included a simplified sketch of this scope logic at the end of these points.
The AIA envisages a concept of "regulatory sandboxes" -- controlled environments for the deployment, testing and validation of A.I. systems in quasi-real-world conditions. Providers of novel systems can take their models from the laboratory testing environment into these sandboxes for a limited time, prior to applying to bring their systems to market. Every E.U. Member State is expected to have such a framework in place within two years of the Act entering into force, and the sandboxes are to be run in a way that provides some legal certainty for developers as well as helps them work with regulators to identify and mitigate risk. See Articles 57-59 for more.
The AIA will enter into force 20 days after publication in the Official Journal of the E.U. -- anticipated to be roughly in May or June 2024. It will apply from 24 months after that date, i.e. roughly from Q2 2026 (Article 113). The following exceptions apply:
The prohibitions (along with the general provisions of Chapters I and II) apply after just 6 months.
The governance rules and the obligations for general-purpose A.I. models apply after 12 months.
The obligations for high-risk systems classified under Article 6(1) apply after 36 months.
The AIA definitely also has some teeth to it. Penalties for non-compliance range from:
up to €7.5 million or 1% of worldwide annual turnover (whichever is higher) for supplying incorrect, incomplete or misleading information to the authorities;
up to €15 million or 3% for breaches of most other obligations under the Act; and
up to €35 million or 7% for violations of the Article 5 prohibitions (Article 99).
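Since the scope and risk-tier rules above lend themselves to exactly the kind of flowchart I mentioned earlier, here is a deliberately simplified sketch of that logic in Python. To be clear, this is my own illustration rather than anything from the Act: the field names and practice keywords are placeholders I've invented, the two lists stand in for much longer provisions, and the sketch assumes the system is otherwise placed on or accessible in the E.U. market.

```python
# A simplified, illustrative sketch of the AIA's territorial-scope and
# risk-tier logic, based on my reading of Articles 2, 5 and 6. All names
# are placeholders; the Act's actual tests carry many more conditions.
from dataclasses import dataclass

@dataclass
class AISystem:
    role: str                # "provider" or "deployer"
    based_in_eu: bool        # established or located in the E.U.?
    output_used_in_eu: bool  # is the system's output used in the E.U.?
    practice: str            # what the system is used for

# Illustrative stand-ins only; Article 5 and Annex III are far longer.
PROHIBITED = {"social_scoring", "realtime_biometric_id_for_law_enforcement"}
HIGH_RISK = {"employment_screening", "credit_scoring", "border_control"}

def aia_applies(s: AISystem) -> bool:
    """Territorial scope, per Article 2(1)(a)-(c), heavily simplified.
    Assumes the system is otherwise placed on the E.U. market."""
    if s.role == "provider":
        return True                 # providers: location is irrelevant
    if s.based_in_eu:
        return True                 # E.U.-based deployers are in scope
    return s.output_used_in_eu      # third-country actors whose output is used in the E.U.

def risk_tier(s: AISystem) -> str:
    """Risk classification, per Articles 5 and 6, heavily simplified."""
    if not aia_applies(s):
        return "out of scope"
    if s.practice in PROHIBITED:
        return "prohibited (Article 5)"
    if s.practice in HIGH_RISK:
        return "high-risk (Article 6(2) and Annex III)"
    return "limited/minimal risk (transparency duties may still apply)"

print(risk_tier(AISystem(role="deployer", based_in_eu=False,
                         output_used_in_eu=True, practice="credit_scoring")))
# -> high-risk (Article 6(2) and Annex III)
```

The point isn't the code itself, but how quickly the branching gets hairy even in toy form -- which is exactly why the full flowchart is worth having.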
It would be surprising if such a substantial piece of legislation covering such novel and newsworthy technology were passed without any criticism. A general argument here, of course, is that regulation is never going to be a panacea. It alone will not suddenly lead to responsible behaviour across the board. Notably, there is quite a lot of scope for amendment and review baked into the AIA to keep up with industry and the state of technology. This includes, for example, an advisory forum representing a balanced selection of stakeholders "including industry, start-ups, SMEs, civil society and academia" (Article 67).
That said, lawmakers inevitably move slowly. The first port of call when it comes to responsible A.I. development will therefore be the developers and engineers designing the models and systems (to the extent that they have full autonomy), followed by the organizations responsible for their deployment.
Major players such as Meta are unsurprisingly vocal about the troubles that overregulation might cause [3]. Most arguments along those lines suggest that a burdensome regulatory regime will simply drive innovation elsewhere in the world. And indeed it might. However, I think it is nevertheless important not to innovate recklessly. Anyone who says that we ought to let companies lead the charge conveniently forgets that the sole purpose of most businesses is to drive shareholder value -- a goal which is often starkly at odds with slower-paced, thoughtful development. One might therefore argue that the only way to rein in contentious new developments (think Clearview and its approach to facial recognition) is to provide clear boundaries of what is and isn't allowed -- to regulate in a meaningful and enforceable way. Whether the E.U. has indeed struck such a balance between innovation and deterrence remains to be seen.
Last week, I wrote about how India had suddenly signalled a move towards regulation of A.I., abandoning its traditionally more hands-off approach. In yet another about-face, the country's Ministry of Electronics and IT shared a new advisory on March 15th which walked that position back [4]. It would seem the government buckled under what must have been a significant amount of pressure, off the back of a decision that clearly wasn't entirely thought through.
Under the revised advisory, India will no longer require A.I. developers to seek government approval before deploying their models in the Indian market. Instead, models that are unreliable or fallible must be appropriately labelled as such before their release [5]. Presumably what this amounts to is simply a warning note about accuracy.
It's a huge year for elections across the globe (in approximately 64 countries -- check out this article in Time magazine, which includes a nifty map), including multiple European countries as well as elections for Members of the European Parliament (MEPs) across the whole of the bloc.
Consequently, it seems like not a week goes by without very real questions being posed about the integrity of electoral systems and the potential plague of deepfakes across platforms (digital and political). This week, the European Commission sent information requests under Article 67 of the Digital Services Act (2022) to major service providers such as Google, Instagram, Snapchat, X, and TikTok, seeking details of their mitigation measures for risks linked to generative A.I. -- notably hallucinations (where a model confidently produces false information), the viral dissemination of deepfakes, and the automated manipulation of services in ways that can mislead voters.
The companies will be required to provide answers by 5 April 2024 for the questions related to election protection, and by 26 April 2024 for the remainder [6]. Under Article 74(2) of the Act, the Commission can impose fines for "incorrect, incomplete, or misleading information" in these responses.
On Monday, 11 March, President Biden set out his budget request for the 2025 fiscal year. With regard to investment in A.I., the budget includes a proposed $20 billion spread across major agencies to regulate and promote A.I., as well as to implement aspects of Biden's A.I. executive order. It also includes funding for the establishment of Chief A.I. Officers and for the adoption of A.I. within government services [7].
A couple of weeks ago, I posted about a lawyer practicing in B.C., Canada, who discovered ChatGPT's proclivity to hallucinate case law the hard way. In that instance, the guilty lawyer pled complete ignorance, and consequently had to apologise and pay special costs for wasting opposing counsel's time.
This week, a Florida lawyer was suspended from practice for a year for not only citing fabricated cases, but also subsequently providing "non-responsive and evasive answers to the request for the cited authorities" when asked about it (this according to the reviewing committee). This fellow seems to have been entirely unapologetic and not to have grasped the seriousness of the issue [8]. Much more substantial than a slap on the wrist, but entirely fair enough.
Just your weekly reminder to check those citations!
Mercedes-Benz is set to trial humanoid robots for demanding and repetitive tasks. The trial is to take place in Hungary and the aim, according to FT reporting, is to take on the "physically demanding, repetitive and dull tasks for which it is increasingly hard to find reliable workers" [9]. Ostensibly these are tasks which the company expects humans don't want to do; however, a cynical reading between the lines is that these are tasks it believes it's no longer worth paying humans for. This pilot follows BMW's announcement of a similar trial in January with a company called Figure [10].
The robot in question is called Apollo and is made by a company called Apptronik; the trial represents Apollo's first deployment in a commercial setting [11].
On the plus side, and with no apologies for freely mixing pop-culture references, these awkwardly-shuffling bots don't exactly look like the foundations of a droid army ready to take over the galaxy. Check out the video above if you need a little reassurance that we aren't quite there. Yet.
[1] European Parliament. “Minutes - Artificial Intelligence Act - Wednesday, 13 March 2024,” March 13, 2024. https://www.europarl.europa.eu/doceo/document/PV-9-2024-03-13-ITM-008-02_EN.html.
[2] European Parliament. “Artificial Intelligence Act | Legislative Train Schedule,” n.d. https://www.europarl.europa.eu/legislative-train/theme-a-europe-fit-for-the-digital-age/file-regulation-on-artificial-intelligence.
[3] Milmo, Dan, and Alex Hern. “What Will the EU’s Proposed Act to Regulate AI Mean for Consumers?” The Guardian, March 14, 2024. https://www.theguardian.com/technology/2024/mar/14/what-will-eu-proposed-regulation-ai-mean-consumers.
[4] Singh, Manish. “India Drops Plan to Require Approval for AI Model Launches.” TechCrunch, March 15, 2024. https://techcrunch.com/2024/03/15/india-drops-plan-to-require-approval-for-ai-model-launches/.
[5] Rekhi, Dia, and Aashish Aryan. “Govt Withdraws Mandate Requiring AI Models to Seek Approval before Deployment.” The Economic Times, March 15, 2024. https://economictimes.indiatimes.com/tech/technology/govt-withdraws-advisory-on-regulation-of-ai-platforms/articleshow/108531687.cms.
[6] "Commission Sends Requests for Information on Generative AI Risks to 6 Very Large Online Platforms and 2 Very Large Online Search Engines under the Digital Services Act,” March 14, 2024. https://digital-strategy.ec.europa.eu/en/news/commission-sends-requests-information-generative-ai-risks-6-very-large-online-platforms-and-2-very.
[7] JD Supra. “President Biden Unveils Key AI Priorities in FY 2025 Budget Request,” March 14, 2024. https://www.jdsupra.com/legalnews/president-biden-unveils-key-ai-9802116.
[8] LawSites. “Federal Court Suspends Florida Attorney Over Filing Fabricated Cases Hallucinated by AI,” March 14, 2024. https://www.lawnext.com/2024/03/federal-court-suspends-florida-attorney-over-filing-fabricated-cases-hallucinated-by-ai.html.
[9] Harris, Gareth. “Mercedes Trials Humanlike Robots for ‘Demanding and Repetitive’ Tasks.” Financial Times, March 2, 2024. https://www.ft.com/content/0dd1227c-0971-4d90-960e-5aef7f18ee48.
[10] Weatherbed, Jess. “BMW Is Bringing Humanoid Robots to Its South Carolina Facility.” The Verge, January 18, 2024. https://www.theverge.com/2024/1/18/24043065/bmw-figure-robotics-humanoid-robot-workers-deployment.
[11] Apptronik. “Apptronik and Mercedes-Benz Enter Commercial Agreement,” n.d. https://apptronik.com/news-collection/apptronik-and-mercedes-benz-enter-commercial-agreement.