
Posts

AI-generated text isn’t free speech - yet

I read the Judge’s Order, so you don’t have to. Another 49 pages were added to the heart-wrenching tragedy of Sewell Setzer in May, as Judge Conway ruled against Character A.I. and Google on a number of motions argued in the case brought against them by Sewell’s mother. Sewell's tragic fate, seemingly encouraged by an AI agent in the final, crucial moment, drove Megan Garcia to sue Character A.I. and its sponsor, Google, on a wide range of grounds. The defendants' defences included the claim that AI is free speech. Of the judge's rulings, three are profound in their impact on AI ethics. Google knows that AI has the potential to harm. When the Google team who later formed Character A.I. asked to release a version of their LLM designed for text dialogues (LaMDA), Google denied their request, notably because: “Google employees raised concerns that users might 'ascribe too much meaning to the text [output by LLMs]', because ‘humans are prepared to interpret strings...
Recent posts

The first AImendment

A few weeks ago, I wrote about how all the hand-wringing over the potential future evils of AI is misplaced. Forget the future: AI is already having allegedly fatal impacts on our children. Character.AI, a Google-backed company that builds AI-powered chat bots aimed at children (astute readers may have already spotted the problem), is defending a court case brought by Megan Garcia, whose 14-year-old son Sewell tragically took his own life. The New York Times reported on Sewell's final moments interacting with his Character.ai chatbot "Dany": On the night of Feb. 28, in the bathroom of his mother’s house, Sewell told Dany that he loved her, and that he would soon come home to her. “Please come home to me as soon as possible, my love,” Dany replied. “What if I told you I could come home right now?” Sewell asked. “… please do, my sweet king,” Dany replied. He put down his phone, picked up his stepfather’s .45 caliber handgun and pulled the trigger. Wow, that really hits ...

Liberté, mate

I had my final interview on my journey to French citizenship recently. As I think about what it means to "be French", one thing really resonates with me: how France so regularly stands out, in the global landscape, as a nation of principles. Last week, we saw this when French politicians invoked Marianne, their iconic personification of "Liberté", in defending the digital privacy rights of their citizens. Following heated debate, a 119-24 vote defeated a measure that proposed forcing messaging platforms to provide unencrypted user data to law enforcement. As Joe Mullin reports, "The French lawmakers who voted this provision down deserve credit. They listened—not only to French digital rights organizations and technologists, but also to basic principles of cybersecurity and civil liberties." This decision shows that "you don’t have to sacrifice fundamental rights in the name of public safety." This is a timely reminder for Australian legislators, wh...

Trading factchecks for fat cheques

Spinach is full of iron. We only use 10% of our brain power. Man never landed on the moon. Vaccines cause autism. They’re eating the cats. You have an influencer friend, Fred. He tells you that he has discovered that he can reach more people and make more money if he just stops checking whether things are true before he shares them with his audience. What would you think of Fred? Is it morally wrong if he doesn’t create the misinformation himself, but just passes it along to those who have chosen to listen to him? Haven’t we all been guilty of repeating common misconceptions at some point? Can we hold one person morally accountable for repeating reports of pet consumption in Springfield, but give another a pass for inflicting spinach on their children at every meal? As Gina Rushton reports, Meta has now taken a position on this ethical dilemma. Where in 2021 it celebrated “industry leading” fact-checking, it recently announced the ...

AI is already killing our children

In 2007, Facebook released the "Facebook Platform". Following the scandal where Cambridge Analytica used the platform to harvest the data of some 87 million users and influence the US election and Brexit referendum, Facebook's Deputy General Counsel Paul Grewal confirmed that this was not a data breach - the system was operating as intended. That the system was designed to reduce user privacy and abuse user trust for the profit of corporate payers went almost unnoticed in the uproar over how the data was used, but it marked a fundamental shift in the relationship between software developers and their users. Until then, users could trust that the software they used was designed and built for them. * Having begun with the promise that "it is free and always will be", Facebook chose to fund itself with advertising dollars, which meant that its software would be built for advertisers - and its users would be the product. During that golden age of computing, millions of hours of the bes...

The callousing of our callow youth

At the 2024 Democratic National Convention, MyPillow CEO Mike Lindell delivered a ray of hope. Losing an argument to a 12-year-old is sort of on-brand for Mike, given his non-consensual relationship with reality and enthusiastic disregard for personal credibility. Mike mainlines social media election misinformation like Neo learns kung fu, and he was caught on camera aggressively shouting a transcript of his Twitter feed into the face of a child. * That social media actively floods our modern attention, discourse and culture with the most antagonistic, inflammatory and misleading content is, of course, widely known. As Stephen Fry recently put it, Facebook and Twitter … “are the worst polluters in human history. Worse than any chemical plant ever. You and your children cannot breathe the air or swim in the waters of our culture without breathing in the toxic particulates and stinking effluvia that belch and pour unchecked from their companies into the currents of our world” * ...

Digital derangement

Last week, Stephen Fry called Zuckerberg and Musk the "worst polluters in human history", but he wasn't talking about the environment. The self-professed technophile, who once so joyfully quarried the glittering bounty of social media, has turned canary, warning directly and urgently of the stench he now detects in the bowels of Earth's digital coal mine: a reek of digital effluvia in the "air and waters" of our global culture. The long arc of Fry's journey from advocate to alarmist is important. The flash-bang of today's "AI ethics" panic has distracted our moral attention from the duller truth that malignant "ethical misalignment" has already metastasised into every major organ of our digital lives. * In 2015, Google's corporate restructure quietly replaced the famous "Don't be evil" motto with a saccharine, MBA-approved facsimile. It seems the motto was becoming a millstone, as it allowed critics to attack...