
Sunday Sweep: Week in Review (March 31, 2024)

Ankesh Chandaria

March 31, 2024 (updated April 20, 2024)
tl;dr

With this week's condensed version of the Sunday Sweep, we focus on a couple of key developments. First, Neuralink released an incredible video showing its first human patient, Noland Arbaugh, manipulating a cursor on a monitor entirely with his mind. This sort of technology demands serious consideration, both for the tremendous benefits it can bring and for the ethical issues that come hand-in-hand with it. On the regulatory side, representatives of the US Congress proposed the Protecting Consumers from Deceptive AI Act -- federal legislation designed to protect the public from deepfakes. The White House Office of Management and Budget issued its first major AI policy, and a pair of additional government departments released their own AI reports (including a Treasury report on managing AI cybersecurity risk in the financial services sector).

This Sunday I'm bringing you a shorter version of the sweep (back to the regular schedule soon!), focusing on a couple of major stories from the past few weeks.

Neuralink Update: A Glimpse of The Future

The standout news for me from the last couple of weeks remains Neuralink. Back in January, Neuralink announced that its first human patient — Noland Arbaugh, a 29-year-old paralyzed from the shoulders down — had received an implant and was on the road to recovery [1].  On March 20th, Neuralink went a step further, sharing a captivating live update video of Noland. In it, he describes his accident, recovery, and what having the chip in his head has been like… all while playing a game of chess with his mind.

This is incredible stuff. But don’t take my word for it. I highly suggest you watch the video.

So how does it work?

Basically, the device reads electrical patterns in the brain and maps them to actions in the outside world. Implanted directly into the patient's skull, it consists of over 60 polymer threads which provide over a thousand sites for recording neuronal electrical signals (neural electrophysiological recording, in case it comes up on a trivia night). These patterns of neural activation are measured against the patient's baseline and translated into various actions. Like I said: incredible stuff.
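
To make the general idea concrete, here's a toy sketch in Python. This is emphatically not Neuralink's actual pipeline -- the site count, the baseline normalization, and the linear-decoder choice are all illustrative assumptions -- but it shows the basic pattern described above: measure activity against a baseline, then translate the deviation into an action (here, a 2D cursor velocity).

```python
import numpy as np

N_SITES = 1024          # roughly the number of recording sites described above
rng = np.random.default_rng(0)

# Baseline firing rates, hypothetically estimated while the patient is at rest.
baseline = rng.poisson(lam=5.0, size=N_SITES).astype(float)

# A linear decoder mapping baseline-normalized activity -> (vx, vy).
# In a real system these weights would be fit during a calibration session.
W = rng.normal(scale=0.01, size=(2, N_SITES))

def decode_cursor_velocity(spike_counts: np.ndarray) -> np.ndarray:
    """Map one time-bin of spike counts to a 2D cursor velocity."""
    # Measure the activity pattern against the patient's baseline...
    deviation = spike_counts - baseline
    # ...and translate that pattern into an action.
    return W @ deviation

# Simulated time-bin of activity slightly above baseline.
sample = baseline + rng.normal(scale=1.0, size=N_SITES)
vx, vy = decode_cursor_velocity(sample)
print(f"cursor velocity: ({vx:+.3f}, {vy:+.3f})")
```

The real system is, of course, vastly more sophisticated (and machine-learned), but the baseline-and-translate structure is the core of it.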

There is reason to move forward with caution: this technology presents a new frontier built on one we barely understand (i.e., our brains). Indeed, we just don't know whether and how this sort of procedure might inadvertently interfere with a patient's brain. And what of potential misinterpretation of the signals sent through the device? A little further afield, there are concerns around hacking -- could a bad actor somehow hijack and damage the device and, as a result, the human as well? I will also admit to having some discomfort at the idea of a private company controlling the contents of a device like this. Could they shut down access? Push buggy updates? A list of concerns might go on for pages...

However, it is impossible to deny the good that could come from this sort of research and the hope it offers to so many people. By fully capitalizing on machine learning technology, Neuralink demonstrates just the first step into what will likely be a brave new world of treatment for those with paralysis, and perhaps eventually even other conditions like epilepsy and Parkinson's. We just have to make sure we walk into that world with our eyes wide open, and with the best interests of the human patients in mind.

U.S. (Federal!) Government Making Moves

Protecting Consumers from Deceptive AI Act

On 21 March 2024, Congresswoman Anna Eshoo (Co-Chair of the House AI Caucus) and Congressman Neal Dunn introduced the Protecting Consumers from Deceptive AI Act. This is the first proposed federal legislation from the House (similar legislation was introduced in the Senate in September 2023 [2]) that would offer the sorts of protections around deepfakes that individual States have been pushing ahead with. The bill appears to be endorsed by Hugging Face, IEEE, the Center for Countering Digital Hate, and others [3].

It would, amongst other things:

  • Direct NIST (the National Institute of Standards and Technology) to facilitate the development of standards for labelling and identifying AI content;
  • Require that generative-AI developers include machine-readable disclosures within audio and visual content generated by their AI apps (see the sketch after this list for one way such a disclosure might look); and
  • Require online platforms to use those disclosures to label AI-generated content.
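
What would a "machine-readable disclosure" actually look like? The bill doesn't say -- that's what NIST would be tasked with standardizing -- but here's a minimal Python sketch of the concept using Pillow to embed a disclosure in a PNG's metadata. The field names and values are invented for illustration; a real standard (something like C2PA-style content provenance) would look quite different.

```python
from PIL import Image
from PIL.PngPlugin import PngInfo  # noqa: illustration below uses the canonical import
from PIL.PngImagePlugin import PngInfo

# Stand-in for an image produced by a generative-AI app.
img = Image.new("RGB", (512, 512))

# Embed an invented, machine-readable disclosure as PNG text chunks.
meta = PngInfo()
meta.add_text("ai-generated", "true")              # hypothetical key
meta.add_text("ai-generator", "example-model-v1")  # hypothetical key

img.save("generated.png", pnginfo=meta)

# A platform receiving the file could then read the disclosure back
# and label the content accordingly, as the bill would require.
with Image.open("generated.png") as f:
    if f.text.get("ai-generated") == "true":
        print("Label: AI-generated content")
```
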
Government-Wide AI Policy

Just a week later, VP Harris announced that the White House Office of Management and Budget ("OMB") was issuing its first government-wide AI policy [4]. The policy governs Federal agencies' use of AI, including through "specific minimum risk management practices for uses of AI that impact the rights and safety of the public". Agencies must, for example, designate a Chief AI Officer (an individual responsible for the coordination, promotion, and risk management of the agency's use of AI) within 60 days of the issuance of the memo. A significant portion of the memo (Section 4) is dedicated to advancing responsible AI innovation with a view to removing unhelpful barriers whilst still maintaining effective guardrails -- certainly the struggle of anyone looking to regulate AI. Further, Section 5 of the memo spells out a risk mitigation approach for "safety-impacting" AI (i.e., AI that could impact human safety, climate, or critical infrastructure) and "rights-impacting" AI (i.e., AI that impacts one's civil rights and liberties, privacy, access to equal opportunities, or access to critical government services).

Reports Galore

In addition to the OMB policy, the NTIA (the National Telecommunications and Information Administration, under the Department of Commerce) published an AI Accountability Policy Report, and the Department of the Treasury set out a report on managing AI cybersecurity risks in the financial services sector.

Recommended Reading

If you've got a moment to digest something a little longer, I suggest Sarah Murray's recent piece in the FT: "What Does AI Mean For a Responsible Business?" Setting aside any thoughts one might have on ESG investing as a concept, the article presents a good summary of current concerns around the implementation of AI without due care, as well as some of what's being done to mitigate and assess risk.

References

[1] https://www.cbc.ca/news/business/musk-brain-implant-first-human-1.7099093

[2] https://www.klobuchar.senate.gov/public/index.cfm/news-releases?ID=AF782E4C-C2C9-4C7C-8696-374F72C03F90

[3] https://eshoo.house.gov/media/press-releases/rep-eshoo-introduces-bipartisan-bill-label-deepfakes

[4] https://www.whitehouse.gov/briefing-room/statements-releases/2024/03/28/fact-sheet-vice-president-harris-announces-omb-policy-to-advance-governance-innovation-and-risk-management-in-federal-agencies-use-of-artificial-intelligence/
