Explores the legal ontology of social AI, how courts apply product liability to chatbots, and why AI-as-instrument is a fiction worth keeping for now.
Explore the ethics of anthropomorphic AI design, its risks, and a framework for assessing when it is necessary while safeguarding human-AI interactions.
AI models like GPT-4 and LLaMA-2 mimic theory of mind, performing at human levels on standard tests, with implications for mental health and decision-making.
Explore the call for a "right to warn" about advanced AI, which advocates whistleblower protections, open criticism, and safeguards against retaliation.