A Necessary Fiction: The Legal Ontology of Social AI
March 21, 2026
When AI systems built to act like humans cause harm, our legal system faces a question it wasn't designed to answer: who is responsible? The dominant response has been to treat AI as a passive instrument wielded by a human operator (what I call the Instrumental Frame), a logic embedded in product liability law long before Social AI existed.

In Garcia v. Character Technologies, the Frame did essential work: the Court refused to let Character.AI reduce the chatbot to an inert vessel, held the company accountable for its design choices, and compelled concrete safety improvements. But the Frame strains. Social AI is not a passive object; it acts back, by design. Treating it as one carries real risks, including the risk that accountability collapses onto the nearest human in ways that shield the system itself at precisely the moment we should be asking harder questions about what it is.

My paper names this tension but does not resolve it. That resolution is a question for the judiciary, legislatures, and society at large. For now, I suggest that the Instrumental Frame is a fiction worth keeping. What we do with the time it buys us is the harder question.