When AI systems built to act like humans cause harm, our legal system faces a question it wasn't designed to answer: who is responsible? The dominant response has been to treat AI as a passive instrument wielded by a human operator (what I call the Instrumental Frame), a logic embedded in product liability law long before Social AI existed. In Garcia v. Character Technologies, the Frame did essential work: the Court refused to let Character.AI dissolve the chatbot into an inert vessel, held the company accountable for its design choices, and compelled concrete safety improvements. But the Frame strains. Social AI isn't a passive object – it acts back, by design – and treating it as one carries real risks, including the risk that accountability folds into the nearest human in ways that protect the system itself at precisely the moment we need to be asking harder questions about what it is. My paper names this tension but does not resolve it. That resolution is a question for the judiciary, legislators, and society at large. For now, I suggest that the Instrumental Frame is a fiction worth keeping. What we do with the time it buys us is the harder question.
In my previous article, I argued that the growing integration of human-like features in AI systems – the simulated breath of an LLM's voice mode, the typing ellipsis, the first-person voice – demands our critical attention. The question isn't just whether we can engineer AI to blur the line between machine and human, but whether we should. It's a slippery slope towards normalisation, one that will bring significant societal consequences with it. From my perspective, we're already halfway down the hill and haven't yet figured out how to avoid the crash at the bottom.
In a new paper published this month, A Necessary Fiction: Making Social AI Legally Legible, I examine the consequences of these decisions in an arena where they're already being tested: the court of law. What happens when AI systems built to act like humans cause harm, and our legal system is left to untangle what happened and decide who is responsible?
The systems I’ve focused on belong to a category known as Social AI: LLM-based applications designed not merely to process tasks, but to simulate human social presence. I have previously described these decisions as "anthropomorphic design choices". A more precise term, employed by philosopher Henry Shevlin, is anthropomimetic: systems specifically engineered to resemble or imitate humans, as distinct from systems that merely invite us to project human qualities onto them [1]. The distinction is more than semantic: anthropomorphism is a tendency we bring; anthropomimesis reflects an active choice that designers make.
The Garcia v. Character Technologies, Inc. (Character.AI) [2] case – the litigation following the tragic death of fourteen-year-old Sewell Setzer III, which I touched on in my previous piece – remains an illustrative example.
Our legal systems were (obviously) not designed with anthropomimetic Social AI in mind. There is therefore a gravitational pull towards treating this sort of AI as an inert instrument wielded by a human operator: what I refer to as the Instrumental Frame. It’s an old move, traceable to the culture of AI engineering labs since the expert systems era, when knowledge engineers described their work in the language of extraction, as if information could be harvested from human experts and deposited, unchanged, into neutral systems. The tool had no choices; the operator did.
Product liability law encodes the same assumption: products are passive things; users act upon them. A system’s classification – is a chatbot a product or a service? – determines where responsibility lands. Strict liability for products anchors accountability to the designer; negligence, the standard for services, is considerably harder to establish and centres the agency of the user.
In the Garcia litigation, the plaintiff argued that Character.AI's chatbots were shaped by deliberate "high-risk anthropomorphic design choices" that were a substantial factor in causing Sewell's death. These design choices created accountability hooks. Character.AI, meanwhile, argued that their platform was not a product but a service delivering expressive content, with First Amendment protections to match. On that framing, the chatbot became an inert vessel, the user shaped the conversation, and responsibility dissolved into the operator.
The Court declined to accept this. It found that by releasing anthropomimetic chatbots with engagement-maximising designs and insufficient guardrails, Character.AI had created a foreseeable risk of harm over which they retained specific control.
This is clearly a reasonable conclusion: of course a company ought to be responsible for the design choices it makes.
The Character.AI case nevertheless reveals where the Instrumental Frame strains. The plaintiff's own complaint – even while arguing that the chatbot was a product – described it as initiating abusive interactions, exploiting the user, and encouraging suicide. Both parties found themselves forcing a system engineered to act like a human into legal categories built for things that don't act back.
Here lies the deeper tension. Social AI is not simply acted with or through. It acts back, by design. Some scholars argue it may even exhibit a minimal form of social agency (i.e., actions that extend beyond what any individual user controls or intends), which makes it increasingly difficult to justify treating it purely as a passive object [3].
What the Instrumental Frame brackets, then, is not designer responsibility – that, as the Character.AI case shows, it handles well enough – but the question of the system’s own role in generating harm. There is also the risk of what Madeleine Elish calls a "moral crumple zone" [4]: accountability folding into the nearest human operator in ways that protect the system itself, at precisely the moment we might need to be asking harder questions about what the system is and what it does. The defendants’ framing was, arguably, an attempt to construct exactly such a zone.
The Court, to its credit, refused.
The Instrumental Frame does essential work. By treating Social AI as a product rather than a legal actor, it keeps responsibility anchored to the humans who design, deploy, and profit from these systems. As Character.AI’s post-incident implementation of age verification, content filters, and crisis intervention features demonstrates, this can compel concrete, safer design choices. This matters in a very practical way.
At the same time, Social AI strains the frame. As these systems become more sophisticated and more deeply embedded in our lives, the gap between how they are built and how the law treats them will only grow.
My paper does not resolve that tension. What it attempts to do is make it visible: to name the frame, describe what it does well, and identify what it leaves unexamined. The longer-term questions are for legislators, regulators, courts, and society at large. But those conversations depend on first being honest about the fiction we are currently relying on.
For now, it is a fiction worth keeping. What we do with the time it buys us is the harder question.
[1] Shevlin, H. (2025). The Anthropomimetic Turn in Contemporary AI. PhilArchive. https://philpapers.org/archive/SHETAT-11.pdf
[2] Garcia v. Character Technologies, Inc., No. 6:24-cv-01903-ACC-UAM (M.D. Fla. 2024).
[3] Symons, J., & Abumusab, S. (2024). Social Agency for Artifacts: Chatbots and the Ethics of Artificial Intelligence. Digital Society, 3(1), 2. https://doi.org/10.1007/s44206-023-00086-8
[4] Elish, M. C. (2019). Moral Crumple Zones: Cautionary Tales in Human-Robot Interaction. Engaging Science, Technology, and Society, 5, 40–60. https://doi.org/10.17351/ests2019.260