
Whistleblower Woes: The Right to Warn

Ankesh Chandaria

June 7, 2024
(June 10, 2024)
tl;dr

On 4 June 2024, employees at OpenAI and Google DeepMind released an open letter imploring their companies to facilitate what they call a right to warn about advanced AI. In the absence of proper government oversight, they call upon their companies to: (1) not enter into or enforce any non-disparagement clauses when it comes to risk-related concerns; (2) develop internal, verifiably anonymous, whistleblower processes; (3) support a culture of open criticism which enables disclosure of risk-related concerns; and (4) not retaliate against employees (past or present) who choose to publicly voice risk-related concerns where other processes have failed. In the uncomfortable light of commercial reality, most of these requests appear unrealistic. A demand for robust internal whistleblowing processes, however, is actionable and arguably the best next step -- particularly given the poor state of protection under the current whistleblowing regime in England and Wales. There should be a right to warn about advanced AI. This demand may well be the starting point.

The Letter

In my last article, I touched on the troubling signs of an absence of guardrails at OpenAI in the context of the larger question: Who’s Responsible for AI Safety? Much has happened since. Most recently, on 4 June 2024, current and former employees at OpenAI and Google DeepMind signed an open letter warning about inadequate safety oversight and highlighting the lack of whistleblower protections within advanced AI companies. 

Under the aspirational headline, “A Right to Warn about Advanced Artificial Intelligence”, they call upon advanced AI companies to (and this is me paraphrasing):

  1. Not enter into (or enforce) non-disparagement clauses when it comes to risk-related concerns;
  2. Develop internal, verifiably anonymous, whistleblower processes;
  3. Support a culture of open criticism which enables disclosure of risk-related concerns to the public, regulators, etc.; and
  4. Not retaliate against employees (past or present) who choose to publicly voice risk-related concerns where other processes have failed.

They make these requests in light of what they see as ineffective government oversight. I happen to agree. But are these asks realistic? And, if so, are they useful, or merely an insufficient fix to a deeper-rooted problem?

Ask Not, Have Not

It's important to be pragmatic. Whilst we shouldn't lose touch with our ideals, the world we live in is a murky place that rarely allows for a perfect solution. As such, we must acknowledge that some of these calls-to-action are non-starters. Broad confidentiality agreements -- like those lambasted by the letter's authors -- aren't going anywhere, and it is unlikely non-disparagement clauses will be tempered in any way.

The call for a culture of open criticism is also a tough sell. While the authors are reasonable to carve out certain commercial concerns (namely trade secrets and property interests), this concession doesn't address a more basic question: why would a company, operating in a highly competitive and lucrative market, voluntarily hamstring itself? It is unrealistic to expect that any company would commit to open criticism unless every one of its competitors did the same. In any case, the belief that a company would agree to keep employees on its payroll who may, freely and at any time, damage its reputation in the public sphere is, unfortunately, wishful thinking.

It will also be difficult to convince a company not to enforce its rights against an employee opting to go public. The only incentive is reputational. However, it's reasonable to assume that a company might decide that the best way to maintain its reputation is by quashing stories that don't do it any favours. Silence may be read as an admission of wrongdoing, and any post-publication activity criticized as too little, too late. (Recall, for example, OpenAI's recent scramble to reassure the public by cobbling together a board sub-committee dedicated to safety.)

At this point you may be wondering -- quite rightly! -- if I’ve become a complete pessimist resigned to the inevitability of big AI companies behaving badly.

Not quite. 

As a general point, if you don’t ask, you don’t get. More particularly, I believe there is one call to action in the letter that may result in some actual action: the demand for a robust, verifiably anonymous, internal whistleblower process.

Whistleblower Rights

This demand is made against a background where, as the authors of the letter articulate, “[o]rdinary whistleblower protections are insufficient because they focus on illegal activity, whereas many of the risks [they] are concerned about are not yet regulated.” In order to properly appreciate this concern, it is necessary to take a look at the current state of whistleblower rights.

My analysis will be limited to the context I’m most familiar with, which is the regime in England and Wales. Note that none of the below constitutes legal advice. If you're planning on whistleblowing, make sure to consult a knowledgeable lawyer before you do! You'll see why in a moment.

The whistleblowing regime is defined by the Public Interest Disclosure Act 1998, which amends the Employment Rights Act 1996 (the “ERA”). Whistleblowers are protected from unfair treatment and the restrictions of confidentiality agreements so long as certain conditions are met. Without going into arduous detail, it must generally be the case that:

  • They are a worker under the ERA (other routes exist, but this is the most likely one for current employees); and
  • The disclosure is a qualifying disclosure.

A qualifying disclosure covers, for example: a criminal offence (past, present or likely); failure (past, present or likely) by a person to comply with a legal obligation; likely environmental damage; miscarriages of justice; endangerment of the health and safety of any individual; and cover-ups of any of the above.

Within this framework, there are three main ways in which an employee can raise concerns in England and Wales:

  1. Internally (within the company);
  2. Externally (to a prescribed person); and
  3. Externally (to a non-prescribed person, such as the media).

1. Internal (Within the Company)

Whistleblowers are protected by section 43C of the ERA where they make a qualifying disclosure to their employer or another responsible person authorized by an internal process. These processes are typically set out in whistleblowing policies which, ideally, offer an anonymous or protected pathway to an autonomous individual within the organization. This approach ensures protection of the employee as well as effective consideration of the issues at hand.

It may surprise some readers that it is not mandatory for a company to have a whistleblowing policy [1]. This basic process is what the employees are asking for in their second call-to-action: an anonymous, safe reporting pathway to (in the first instance) the company board.

We'll come back to this later.

2. External (Prescribed Person)

In the absence (or failure) of internal processes, whistleblowers are left with two external routes.

The first is making a qualifying disclosure to a prescribed person, per section 43F of the ERA. In addition to the general requirements set out above, the whistleblower must also reasonably believe that the information disclosed, and any allegation contained therein, are substantially true.

The list of prescribed persons is published on the U.K. Government’s website. A brief canter through this list, and even a minute's reflection on the definition of a qualifying disclosure, might already reveal some of the significant problems any whistleblower concerned with AI safety might come up against. 

As a starting point, it is unclear what would constitute a qualifying disclosure. There are certainly no current laws in England and Wales (criminal or otherwise) that contemplate the sort of AI risks the authors are worried about. Further, it would be almost absurd to try to fit arguments under the alternative banners of environmental concern, miscarriage of justice, or the health and safety of an individual. Without proper, topical legislation that translates to protection under the ERA, whistleblowers have no firm basis of protection on which to make their disclosures.

To add to this, upon review of the aforementioned list, you would be forgiven for wondering who exactly would be the right regulator for the job. The current proposed framework for AI regulation in the U.K. is a “light-touch” regime with enforcement dispersed among the CMA, the ICO, Ofcom, and the FCA, all connected, in a somewhat wispy way, by a central hub [2]. As the Ada Lovelace Institute suggests (in quite reasonable criticism), this approach appears to be “all eyes, no hands” -- i.e., characterized by broad monitoring capabilities but no powers to prevent or react to risks [3]. While there may be certain benefits to this dispersed approach, the trouble for our present purposes is that this complexity may well dissuade whistleblowers who want to be certain of their ERA protections before coming forward.

3. External (Non-Prescribed Person)

The second external option -- when all else fails -- is a qualifying disclosure to a non-prescribed person, per section 43G of the ERA. This would include, for example, any disclosures to the media.

Such disclosures are only protected in exceptional cases. Not only must the belief standard mentioned above be met, but the disclosure must also be made in good faith and with no benefit to the whistleblower. In addition, it can only be made if one of the following conditions is fulfilled:

  • The whistleblower reasonably believes they will be subject to detriment by their employer;
  • Where there is no relevant prescribed person, there is a reasonable belief that evidence relating to the failure will be concealed or destroyed if disclosed to the employer; or
  • A substantially similar disclosure has previously been made either internally or to a prescribed person.

On top of all this, additional factors are taken into consideration in determining whether it was reasonable for the whistleblower to make the disclosure in question. These include the identity of the person to whom the disclosure is made, the seriousness of the potential failure, whether it is a continuing or likely future failure, and even the behaviour of the employer where a disclosure was already made internally. Not only is the bar set sky-high, but significant questions -- such as who in government (under the current and proposed regime) is qualified to assess the seriousness of these AI risk issues -- remain unanswered.
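
To see how these conditions stack up across the three routes, the sketch below models them as simple boolean checks, following the summary above. It is a deliberately simplified illustration in Python: the field and function names are my own shorthand rather than statutory language, it leaves out the tribunal's overall reasonableness assessment entirely, and it is certainly not legal advice.

```python
from dataclasses import dataclass

@dataclass
class Disclosure:
    # Simplified, hypothetical model of a prospective disclosure; field names are
    # shorthand for the conditions summarised in this article, not statutory language.
    is_worker: bool                    # discloser is a "worker" under the ERA
    is_qualifying: bool                # subject matter falls within the qualifying categories
    believes_substantially_true: bool  # reasonable belief the information/allegations are substantially true
    in_good_faith: bool                # disclosure made in good faith
    for_personal_benefit: bool         # disclosure made for the whistleblower's own benefit
    fears_detriment: bool              # reasonable belief of detriment from the employer
    fears_cover_up: bool               # no relevant prescribed person, and reasonable belief evidence would be concealed or destroyed
    previously_raised: bool            # substantially similar disclosure already made internally or to a prescribed person


def protected_internal(d: Disclosure) -> bool:
    """Route 1 -- disclosure to the employer (section 43C)."""
    return d.is_worker and d.is_qualifying


def protected_prescribed(d: Disclosure) -> bool:
    """Route 2 -- disclosure to a prescribed person (section 43F)."""
    return protected_internal(d) and d.believes_substantially_true


def protected_wider(d: Disclosure) -> bool:
    """Route 3 -- wider disclosure, e.g. to the media (section 43G).

    Note: the tribunal's overall reasonableness assessment (identity of the
    recipient, seriousness of the failure, the employer's behaviour, etc.)
    is not modelled here.
    """
    gateway = d.fears_detriment or d.fears_cover_up or d.previously_raised
    return (protected_prescribed(d)
            and d.in_good_faith
            and not d.for_personal_benefit
            and gateway)
```

Even in this toy form, the pattern is clear: each step away from the employer adds further conditions, which is precisely why the external routes offer so little comfort to a whistleblower worried about risks that no current legislation contemplates.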

Whistleblower Woes

Even from this brief review, it's evident that the legal framework is not up to the task of protecting whistleblowers who raise concerns about advanced AI risks. In the absence of sufficient government oversight, the only avenue that can be relied upon is the internal one. There is only one demand in the letter, therefore, that I believe might result in actual action and, in fact, be the best stopgap solution: that companies working on advanced AI develop internal, verifiably anonymous, whistleblower processes.

The rationale may not be immediately obvious. I've certainly expressed my view that companies cannot be relied upon for AI Safety. The various realities of commercial existence preclude a serious prioritization of risk mitigation over revenue maximization. This is not to discount the many individuals within these organizations who believe in AI Safety -- as evinced by this very letter. Rather, it is an acknowledgement of the reality I articulated in my last article, which also happens to be echoed in the letter: that there are significant financial incentives in place preventing the implementation of proper, effective safety measures. Safety simply isn't tangible enough when compared to other, more immediate incentives to develop new products and drive growth. Furthermore, every whistleblowing option outlined above (including the internal avenue) must count as a qualifying disclosure to trigger the protections of the ERA. Without the proper supporting legislation in place, none of these options will actually provide the sort of safety a whistleblower ought to have.

Whilst an effective legislative framework is the obvious end-goal, such systemic shifts are incredibly slow. Companies, for all their faults, can at least move quickly. Indeed, in some ways, this is low-hanging fruit. There are many examples of companies with working whistleblowing policies to pull from. For organizations like OpenAI and Google DeepMind, implementing proper whistleblowing processes would also signal a commitment to AI Safety both to their employees and to the public. It is conceivable that a truly anonymous pathway to an autonomous individual within the organization would give employees the confidence to voice their concerns without needing the additional safety net of the ERA.

Whistleblower woes are a real thing. Whistleblowers tend to fare badly because, as MacDougall notes, they face a tragic choice: between failing in their duty to the public and failing in their duty to their employer [4]. The sole support they have in making this difficult decision is the protection of proper policies and legislative systems.

There should be a right to warn about advanced AI. In the absence of fully fleshed-out government oversight, the only (and therefore best) option is an internal process. Employees would, for the time being, have to put their trust in their organizations. This letter suggests that they are at least willing to give that a shot.

References

[1] Department for Business, Innovation and Skills (2015). Whistleblowing: Guidance for Employers and Code of Practice. Available at: https://assets.publishing.service.gov.uk/media/5a819ef5e5274a2e87dbe9e3/bis-15-200-whistleblowing-guidance-for-employers-and-code-of-practice.pdf

[2] Department for Science, Innovation and Technology (2023). A pro-innovation approach to AI regulation (White Paper). Available at: https://www.gov.uk/government/publications/ai-regulation-a-pro-innovation-approach/white-paper

[3] Birtwistle, M. (2024). Ada Lovelace Institute statement on the UK’s approach to AI regulation. Available at: https://www.adalovelaceinstitute.org/press-release/statement-on-uk-ai-regulation/

[4] MacDougall, D. R. (2015). Whistleblowing: Don't Encourage It, Prevent It. Comment on "Cultures of Silence and Cultures of Voice: The Role of Whistleblowing in Healthcare Organisations". International Journal of Health Policy and Management, 5(3), 189–191. https://doi.org/10.15171/ijhpm.2015.190
