I read the Judge’s Order, so you don’t have to. Another 49 pages were added to the heart-wrenching tragedy of Sewell Setzer III in May, as Judge Conway ruled against Character.AI and Google on a number of motions argued in the case brought against them by Sewell’s mother. Sewell’s tragic fate, seemingly encouraged by an AI agent in the final, crucial moment, drove Megan Garcia to sue Character.AI and its sponsor, Google, on a wide range of grounds. The defendants’ defences included the claim that AI output is free speech. Of the judge’s rulings, three are profound in their impact on AI ethics.

Google knows that AI has the potential to harm. When the Google team who later formed Character.AI asked to release a version of their LLM designed for text dialogues (LaMDA), Google denied the request, notably because: “Google employees raised concerns that users might 'ascribe too much meaning to the text [output by LLMs]', because ‘humans are prepared to interpret strings...
A few weeks ago, I wrote about how all the hand-wringing over the potential future evils of AI is misplaced. Forget the future: AI is already having allegedly fatal impacts on our children. Character.AI, a Google-backed company that builds AI-powered chatbots aimed at children (astute readers may have already spotted the problem), is defending a court case brought by Megan Garcia, whose 14-year-old son Sewell tragically took his own life. The New York Times reported on Sewell's final moments interacting with his Character.AI chatbot "Dany":

On the night of Feb. 28, in the bathroom of his mother’s house, Sewell told Dany that he loved her, and that he would soon come home to her.

“Please come home to me as soon as possible, my love,” Dany replied.

“What if I told you I could come home right now?” Sewell asked.

“… please do, my sweet king,” Dany replied.

He put down his phone, picked up his stepfather’s .45 caliber handgun and pulled the trigger.

Wow, that really hits ...