Whistleblower Woes: The Right to Warn
June 7, 2024
On 4 June 2024, employees at OpenAI and Google DeepMind released an open letter imploring their companies to facilitate what they call a right to warn about advanced AI. In the absence of proper government oversight, they call upon their companies to: (1) not enter into or enforce any non-disparagement clauses when it comes to risk-related concerns; (2) develop internal, verifiably anonymous whistleblower processes; (3) support a culture of open criticism that enables disclosure of risk-related concerns; and (4) not retaliate against employees (past or present) who choose to publicly voice risk-related concerns where other processes have failed. In the uncomfortable light of commercial reality, most of these requests appear unrealistic. A demand for robust internal whistleblowing processes, however, is actionable and arguably the best next step -- particularly given the poor state of protection under the current whistleblowing regime in England and Wales. There should be a right to warn about advanced AI. This demand may well be the starting point.