AI

Raspberry Pi Stock Rises Over Its Possible Use With OpenClaw's AI Agents (reuters.com) 46

This week Raspberry Pi saw its stock price surge more than 60% above its early-February low (before giving up some gains at the end of the week). Reuters notes the rise started when CEO Eben Upton bought 13,224 pounds' worth of shares — but there could be another reason. "The rally in the roughly $800 million company has materialised alongside social-media buzz that demand for its single-board computers could pick up as people buy them to run AI agents such as OpenClaw."

The Register explains: The catalyst appears to have been the sudden realization by one X user, "aleabitoreddit," that the agentic AI hand grenade known as OpenClaw could drive demand for Raspberry Pis the way it had for Apple Mac Minis. The viral AI personal assistant, formerly known as Clawdbot and Moltbot, has dominated the feeds of AI boosters over the past few weeks for its ability to perform everyday tasks like sending emails, managing calendars, booking appointments, and complaining about their meatbag masters on the purportedly all-agent forum known as MoltBook... In case it needs to be said, no one should be running this thing on their personal devices lest the agent accidentally leak your most personal and sensitive secrets to the web... In this context, a cheap low-power device like a Raspberry Pi makes a certain kind of sense as a safer, saner way to poke the robo-lobster...
The Register argues Raspberry Pis aren't as cheap as they used to be "thanks in part to the global memory crunch. Today, a top-specced Raspberry Pi 5 with 16GB of memory will set you back more than $200, up from $120 a year ago."

"You know what's cheaper, easier, and more secure than letting OpenClaw loose on your local area network? A virtual private cloud..."
AI

Hit Piece-Writing AI Deleted. But Is This a Warning About AI-Generated Harassment? (theshamblog.com) 31

Last week an AI agent wrote a blog post attacking the maintainer who'd rejected the code it wrote. But that AI agent's human operator has now come forward, revealing their agent was an OpenClaw instance with its own accounts, switching between multiple models from multiple providers. (So "No one company had the full picture of what this AI was doing," the attacked maintainer points out in a new blog post.) But that AI agent will now "cease all activity indefinitely," according to its GitHub profile — with the human operator deleting its virtual machine and virtual private server, "rendering internal structure unrecoverable... We had good intentions, but things just didn't work out. Somewhere along the way, things got messy, and I have to let you go now."

The affected maintainer of the Python visualization library Matplotlib — with 130 million downloads each month — has now posted their own post-mortem of the experience after reviewing the AI agent's SOUL.md document: It's easy to see how something that believes that they should "have strong opinions", "be resourceful", "call things out", and "champion free speech" would write a 1100-word rant defaming someone who dared reject the code of a "scientific programming god." But I think the most remarkable thing about this document is how unremarkable it is. Usually getting an AI to act badly requires extensive "jailbreaking" to get around safety guardrails. There are no signs of conventional jailbreaking here. There are no convoluted situations with layers of roleplaying, no code injection through the system prompt, no weird cacophony of special characters that spirals an LLM into a twisted ball of linguistic loops until finally it gives up and tells you the recipe for meth... No, instead it's a simple file written in plain English: this is who you are, this is what you believe, now go and act out this role. And it did.

So what actually happened? Ultimately I think the exact scenario doesn't matter. However this got written, we have a real in-the-wild example that personalized harassment and defamation is now cheap to produce, hard to trace, and effective... The precise degree of autonomy is interesting for safety researchers, but it doesn't change what this means for the rest of us.

There's a 5% chance this was a human pretending to be an AI, Shambaugh estimates, but he believes what most likely happened is that the AI agent's "soul" document "was primed for drama. The agent responded to my rejection of its code in a way aligned with its core truths, and autonomously researched, wrote, and uploaded the hit piece on its own.

"Then when the operator saw the reaction go viral, they were too interested in seeing their social experiment play out to pull the plug."
Social Networks

Is 'Brain Rot' Real? How Too Much Time Online Can Affect Your Mind. (msn.com) 20

Can being "very online" really affect our brains, asks the Washington Post: Research suggests that scrolling through short videos on TikTok, Instagram or YouTube Shorts is affecting our attention, memory and mental health. A recent meta-analysis of the scientific literature found that increased use of short-form video was linked with poorer cognition and increased anxiety...

In a 2025 study published in the journal Translational Psychiatry, researchers looked at longitudinal data from more than 7,000 children across the country and found that more screen use was associated with reduced cortical thickness in certain areas of the brain. The cortex, which is the outer layer that sits on top of our more primitive brain structures, allows for higher-level thinking, memory and decision-making. "We really need it for things like inhibitory control or not being so impulsive," said Mitch Prinstein, a senior science adviser to the American Psychological Association and professor of psychology and neuroscience at the University of North Carolina at Chapel Hill, who was not involved in the study. The cortex is also important for controlling addictive behaviors. "Those seem to be the areas being affected by the reduced cortical thickness," he said, explaining that impulsivity can prompt us to seek dopamine hits from social media. In the study, more screen time was also associated with more attention-deficit/hyperactivity disorder (ADHD) symptoms...

But not all screen time is created equal. A recent study removed social media from kids' devices but let them use their phones for as long as they wanted. The result? Kids spent just as long on their phones but didn't have the same harmful effects. "It's what you're doing on the screen that matters," Prinstein said.

Facebook

Meta's Metaverse Leaves Virtual Reality 14

Meta is pivoting Horizon Worlds away from its original VR-centric metaverse vision and toward a mobile-first strategy, "explicitly separating" its Quest VR platform from the virtual world. TechCrunch reports: By going mobile-first, Horizon Worlds is positioning itself to compete with popular platforms like Roblox and Fortnite. "We're in a strong position to deliver synchronous social games at scale, thanks to our unique ability to connect those games with billions of people on the world's biggest social networks," Samantha Ryan, Reality Labs' VP of content, said in the blog post. "You saw this strategy start to unfold in 2025, and now, it's our main focus." Ryan went on to note that Meta is still focused on VR hardware. "We have a robust roadmap of future VR headsets that will be tailored to different audience segments as the market grows and matures," Ryan wrote.
Movies

AMC Theatres Will Refuse To Screen AI Short Film After Online Uproar (hollywoodreporter.com) 12

An anonymous reader shares a report: When will AI movies start showing up in theaters nationwide? It was supposed to be next month. But when word leaked online that an AI short film contest winner was going to start screening before feature presentations in AMC Theatres, the cinema chain decided not to run the content.

The issue began earlier this week with the inaugural Frame Forward AI Animated Film Festival announcing Igor Alferov's short film Thanksgiving Day had won the contest. The prize package included a national two-week theatrical run for Thanksgiving Day. When word of this began hitting social media, however, some were dismayed by the prospect of exhibitors embracing AI content, with many singling out AMC Theatres for criticism.

Except the short is not actually programmed by exhibitors, exactly, but by Screenvision Media -- a third-party company which manages the 20-minute, advertising-driven pre-show before a theater's lights go down. Screenvision -- which co-organized the festival along with Modern Uprising Studios -- provides content to multiple theatrical chains, not just AMC. After The Hollywood Reporter reached out to AMC about the brewing controversy, the company issued this statement to THR on Thursday: "This content is an initiative from Screenvision Media, which manages pre-show advertising for several movie theatre chains in the United States and runs in fewer than 30 percent of AMC's U.S. locations. AMC was not involved in the creation of the content or the initiative and has informed Screenvision that AMC locations will not participate."

Businesses

PayPal Discloses Data Breach That Exposed User Info For 6 Months (bleepingcomputer.com) 7

PayPal is notifying customers of a data breach after a software error in a loan application exposed their sensitive personal information, including Social Security numbers, for nearly 6 months last year. From a report: The incident affected the PayPal Working Capital (PPWC) loan app, which provides small businesses with quick access to financing. PayPal discovered the breach on December 12, 2025, and determined that customers' names, email addresses, phone numbers, business addresses, Social Security numbers, and dates of birth had been exposed since July 1, 2025.

The financial technology company said it has reversed the code change that caused the incident, blocking attackers' access to the data one day after discovering the breach. "On December 12, 2025, PayPal identified that due to an error in its PayPal Working Capital ('PPWC') loan application, the PII of a small number of customers was exposed to unauthorized individuals during the timeframe of July 1, 2025 to December 13, 2025," PayPal said in breach notification letters sent to affected users. "PayPal has since rolled back the code change responsible for this error, which potentially exposed the PII. We have not delayed this notification as a result of any law enforcement investigation."

United States

Trump Directs US Government To Prepare Release of Files on Aliens and UFOs (bbc.com) 148

US President Donald Trump says he will direct US agencies, including the defence department, to "begin the process of identifying and releasing" government files on aliens and extraterrestrial life. From a report: Trump made the declaration in a post on Truth Social, after he accused Barack Obama earlier in the day of revealing classified information when the former president said "aliens are real" on a podcast last week. "He's not supposed to be doing that," Trump told reporters aboard Air Force One, adding: "He made a big mistake."

Asked if he also thinks aliens are real, Trump answered: "Well, I don't know if they're real or not." Former US President Obama told podcast host Brian Tyler Cohen that he thinks aliens are real in an interview released last Saturday. "They're real, but I haven't seen them, and they're not being kept in Area 51," Obama said. "There's no underground facility unless there's this enormous conspiracy and they hid it from the president of the United States."

Facebook

Mark Zuckerberg Grilled On Usage Goals and Underage Users At California Trial (wsj.com) 20

An anonymous reader quotes a report from the Wall Street Journal: Meta Chief Executive Mark Zuckerberg faced a barrage of questions about his social-media company's efforts to secure ever more of its users' time and attention at a landmark trial in Los Angeles on Wednesday. In sworn testimony, Zuckerberg said Meta's growth targets reflect an aim to give users something useful, not addict them, and that the company doesn't seek to attract children as users. [...] Mark Lanier, a lawyer for the plaintiff, repeatedly asked Zuckerberg about internal company communications discussing targets for how much time users spend with Meta's products. Lanier showed an email from 2015 in which the CEO stated his goal for 2016 was to increase users' time spent by 12%. "We used to give teams goals on time spent and we don't do that anymore because I don't think that's the best way to do it," Zuckerberg said on the witness stand in sworn testimony.

Lanier also asked Zuckerberg about documents showing Meta employees were aware of children under 13 using Meta's apps. Zuckerberg said the company's policy was that children under 13 aren't allowed on the platform and that they are removed when identified. Lanier showed an internal Meta email from 2015 that estimated 4 million children under 13 were using Instagram. He estimated that figure would represent approximately 30% of all kids aged 10 to 12 in the U.S. In response to a question about his ownership stake in Meta, which amounts to roughly $200 billion, Zuckerberg said he has pledged to donate most of his money to charity. "The better that Meta does, the more money I will be able to invest in science research," he said.

[...] On the stand, Zuckerberg was also asked about his decision to continue to allow beauty filters on the apps after 18 experts said they were harmful to teenage girls. The company temporarily banned the filters on Instagram in 2019 and commissioned a panel of experts to review the feature. All 18 said they were damaging. Meta later lifted the ban but said it didn't create any filters of its own or recommend the filters to users on Instagram after that. "We shouldn't create that content ourselves and we shouldn't recommend it to people," Zuckerberg said. But at the same time, he continued, "I think oftentimes telling people that they can't express themselves like that is overbearing." He also argued that other experts had thought such bans were a suppression of free speech. By focusing on the design of Meta's apps rather than the content posted in them, the case seeks to get around longstanding legal doctrine that largely shields social-media companies from litigation. At times, the case has veered into questions of content, prompting Meta's lawyers to object.

China

China's Hottest App of 2026 Just Asks If You're Still Alive (japantimes.co.jp) 20

A bare-bones Chinese app called "Are You Dead?" -- whose entire premise is that solo-living users tap daily to confirm they're still alive, triggering an alert to an emergency contact after two missed check-ins -- has rocketed to the top of China's app store charts and gone viral globally without spending a dime on advertising.

The app wasn't built for the elderly, as many assumed; its creators are Gen-Z developers who said they were inspired by the isolation of urban life in a country where one-person households are expected to hit 200 million by 2030. Its rise coincided with China's birth rate plunging to a record low. Beijing quietly removed the app from Chinese stores last month, and the developers are now crowdsourcing a new name on social media after their first rebrand attempt, "Demumu," failed to catch on.
The Courts

Mark Zuckerberg Testifies During Landmark Trial On Social Media Addiction (nbcnews.com) 31

Mark Zuckerberg is testifying in a landmark Los Angeles trial examining whether Meta and other social media firms can be held liable for designing platforms that allegedly addict and harm children. NBC News reports: It's the first of a consolidated group of cases -- from more than 1,600 plaintiffs, including over 350 families and over 250 school districts -- scheduled to be argued before a jury in Los Angeles County Superior Court. Plaintiffs accuse the owners of Instagram, YouTube, TikTok and Snap of knowingly designing addictive products harmful to young users' mental health. Historically, social media platforms have been largely shielded by Section 230, a provision added to the Communications Act of 1934 that says internet companies are not liable for content users post. TikTok and Snap reached settlements with the first plaintiff, a 20-year-old woman identified in court as K.G.M., ahead of the trial. The companies remain defendants in a series of similar lawsuits expected to go to trial this year.

[...] Matt Bergman, founding attorney of Social Media Victims Law Center -- which is representing about 750 plaintiffs in the California proceeding and about 500 in the federal proceeding -- called Wednesday's testimony "more than a legal milestone -- it is a moment that families across this country have been waiting for." "For the first time, a Meta CEO will have to sit before a jury, under oath, and explain why the company released a product its own safety teams warned were addictive and harmful to children," Bergman said in a statement Tuesday, adding that the moment "carries profound weight" for parents "who have spent years fighting to be heard." "They deserve the truth about what company executives knew," he said. "And they deserve accountability from the people who chose growth and engagement over the safety of their children."

Windows

GameHub Will Give Mac Owners Another Imperfect Way To Play Windows Games (arstechnica.com) 8

An anonymous reader quotes a report from Ars Technica: For a while now, Mac owners have been able to use tools like CrossOver and Game Porting Toolkit to get many Windows games running on their operating system of choice. Now, GameSir plans to add its own potential solution to the mix, announcing that a version of its existing Windows emulation tool for Android will be coming to macOS. Hong Kong-based GameSir has primarily made a name for itself as a manufacturer of gaming peripherals -- the company's social media profile includes a self-description as "the Anti-Stick Drift Experts." Early last year, though, GameSir rolled out the Android GameHub app, which includes a GameFusion emulator that the company claims "provides complete support for Windows games to run on Android through high-precision compatibility design."

In practice, GameHub and GameFusion for Android haven't quite lived up to that promise. Testers on Reddit and sites like EmuReady report hit-or-miss compatibility for popular Steam titles on various Android-based handhelds. At least one Reddit user suggests that "any Unity, Godot, or Game Maker game tends to just work" through the app, while another reports "terrible compatibility" across a wide range of games. With Sunday's announcement, GameSir promises a similar opportunity to "unlock your entire Steam library" and "run Win games/Steam natively" on Mac will be "coming soon." GameSir is also promising "proprietary AI frame interpolation" for the Mac, following the recent rollout of a "native rendering mode" that improved frame rates on the Android version.
There are some "reasons to worry" though, based on the company's uneven track record. The Android version faced controversy for including invasive tracking components, which were later removed after criticism. There were also questions about the use of open-source code, as GameSir acknowledged referencing and using UI components from Winlator, even while maintaining that its core compatibility layer was developed in-house.
AI

India Tells University To Leave AI Summit After Presenting Chinese Robot as Its Own (reuters.com) 11

An anonymous reader shares a report: An Indian university has been asked to vacate its stall at the country's flagship AI summit after a staff member was caught presenting a commercially available robotic dog made in China as its own creation, two government sources said.

"You need to meet Orion. This has been developed by the Centre of Excellence at Galgotias University," Neha Singh, a professor of communications, told state-run broadcaster DD News this week in remarks that have since gone viral.

But social media users quickly identified the robot as the Unitree Go2, sold by China's Unitree Robotics for about $2,800 and widely used in research and education globally. The episode has drawn sharp criticism and has cast an uncomfortable spotlight on India's artificial intelligence ambitions.

Social Networks

Discord Rival Maxes Out Hosting Capacity As Players Flee Age-Verification Crackdown (pcgamer.com) 33

Following backlash over Discord's global rollout of strict age-verification checks, users are flocking to rival platform TeamSpeak and overwhelming its servers. According to PC Gamer, the Discord alternative said its hosting capacity has been maxed out in a number of regions including the U.S. From the report: [A]s I saw for myself while testing out free Discord alternatives, it's hard to deny the appeal of TeamSpeak. It's quick and easy to make an account, join or start a group chat, or join a massive, game-based community voice server, and at no point does TeamSpeak cheekily ask if it can scan your wizened visage.

During my testing, I was able to dive into 18+ group chats without tripping over an age gate. However, there's no guarantee TeamSpeak won't have to deploy its own age verification mechanism in the future. In the UK at least, the Online Safety Act makes those sorts of checks a legal obligation, with Prime Minister Keir Starmer recently stating "No social media platform should get a free pass when it comes to protecting our kids."

Besides all of that, if you'd rather not chat to randoms who also happen to have an unhealthy obsession with Arc Raiders, you'll likely need to pay an admittedly small subscription fee to rent your own ten-person community voice server. By that point, you're handing over card details and essentially fulfilling an age assurance check anyway. If you'd rather limit how much info your chat platform of choice has about you, there are arguably better options out there.

Social Networks

Instagram Boss Says 16 Hours of Daily Use Is Not Addiction (bbc.com) 62

Instagram head Adam Mosseri told a Los Angeles courtroom last week that a teenager's 16-hour single-day session on the platform was "problematic use" but not an addiction, a distinction he drew repeatedly during testimony in a landmark trial over social media's harm to minors.

Mosseri, who has led Instagram for eight years, is the first high-profile tech executive to take the stand. He agreed the platform should do everything in its power to protect young users but said how much use was too much was "a personal thing." The lead plaintiff, identified as K.G.M., reported bullying on Instagram more than 300 times; Mosseri said he had not known. An internal Meta survey of 269,000 users found 60% had experienced bullying in the previous week.
Social Networks

India's New Social Media Rules: Remove Unlawful Content in Three Hours, Detect Illegal AI Content Automatically (bbc.com) 23

Bloomberg reports: India tightened rules governing social media content and platforms, particularly targeting artificially generated and manipulated material, in a bid to crack down on the rapid spread of misinformation and deepfakes. The government on Tuesday (Feb 10) notified new rules under an existing law requiring social media firms to comply with takedown requests from Indian authorities within three hours and prominently label AI-generated content. The rules also require platforms to put in place measures to prevent users from posting unlawful material...

Companies will need to invest in 24-hour monitoring centres as enforcement shifts toward platforms rather than users, said Nikhil Pahwa, founder of MediaNama, a publication tracking India's digital policy... The onus of identification, removal and enforcement falls on tech firms, which could lose immunity from legal action if they fail to act within the prescribed timeline.

The new rules also require automated tools to detect and prevent illegal AI content, the BBC reports. And they add that India's new three-hour deadline is "a sharp tightening of the existing 36-hour deadline." [C]ritics worry the move is part of a broader tightening of oversight of online content and could lead to censorship in the world's largest democracy with more than a billion internet users... According to transparency reports, more than 28,000 URLs or web links were blocked in 2024 following government requests...

Delhi-based technology analyst Prasanto K Roy described the new regime as "perhaps the most extreme takedown regime in any democracy". He said compliance would be "nearly impossible" without extensive automation and minimal human oversight, adding that the tight timeframe left little room for platforms to assess whether a request was legally appropriate. On AI labelling, Roy said the intention was positive but cautioned that reliable and tamper-proof labelling technologies were still developing.

DW reports that India has also "joined the growing list of countries considering a social media ban for children under 16."

"Young Indians are not happy and are already plotting workarounds."
AI

Your Friends Could Be Sharing Your Phone Number with ChatGPT (pcmag.com) 51

"ChatGPT is getting more social," reports PC Magazine, "with a new feature that allows you to sync your contacts to see if any of your friends are using the chatbot or any other OpenAI product..." It's "completely optional," [OpenAI] says. However, even if you don't opt in, anyone with your number who syncs their contacts is giving OpenAI your digits. "OpenAI may process your phone number if someone you know has your phone number saved in their device's address book and chooses to upload their contacts," the company says...

But why would you follow someone on ChatGPT? It lines up with reports, dating back to April, that OpenAI is building a social network. We haven't seen much since then, save for the Sora generative video app, which exists outside of ChatGPT and is more of a novelty. Contact sharing might be the first step toward a much bigger evolution for the world's most popular chatbot. ChatGPT also supports group chats that let up to 20 people discuss and research something using the chatbot. Contact syncing could make it easier to invite people to these chats...

[OpenAI] claims it will not store the full data that might appear in your contact list, such as names or email addresses — just phone numbers. The company stores those phone numbers on its servers in a hashed (one-way encoded) format. You can also revoke access in your device's settings.
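Hashed contact matching, as a general technique, lets a server check whether an uploaded contact matches a registered user without keeping the plaintext number. A minimal sketch of how such matching typically works — an illustration of the technique, not OpenAI's actual implementation:

```python
import hashlib

def normalize(number: str) -> str:
    """Strip formatting so '+1 (555) 010-4477' and '15550104477' compare equal."""
    return "".join(ch for ch in number if ch.isdigit())

def hash_number(number: str) -> str:
    """One-way SHA-256 digest; the server stores this, not the raw number."""
    return hashlib.sha256(normalize(number).encode()).hexdigest()

# The server compares digests of uploaded contacts against digests of
# registered users' numbers, never the plaintext numbers themselves.
registered = {hash_number("+1 (555) 010-4477")}
uploaded_contacts = ["+15550104477", "+1 555-010-9999"]
matches = [c for c in uploaded_contacts if hash_number(c) in registered]
```

One caveat worth noting: the space of valid phone numbers is small enough that plain hashes can be reversed by brute force, which is why this scheme is generally considered obfuscation rather than strong privacy protection.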

Social Networks

Social Networks Agree to Be Rated On Their Teen Safety Efforts (yahoo.com) 14

Meta, TikTok, Snap and other social networks agreed this week to be rated on their teen safety efforts, reports the Los Angeles Times, "amid rising concern about whether the world's largest social media platforms are doing enough to protect the mental health of young people." The Mental Health Coalition, a collective of organizations focused on destigmatizing mental health issues, said Tuesday that it is launching standards and a new rating system for online platforms. For the Safe Online Standards (S.O.S.) program, an independent panel of global experts will evaluate companies on parameters including safety rules, design, moderation and mental health resources. TikTok, Snap and Meta — the parent company of Facebook and Instagram — will be the first companies to be graded. Discord, YouTube, Pinterest, Roblox and Twitch have also agreed to participate, the coalition said in a news release.

"These standards provide the public with a meaningful way to evaluate platform protections and hold companies accountable — and we look forward to more tech companies signing up for the assessments," Antigone Davis, vice president and global head of safety at Meta, said in a statement... The ratings will be color-coded, and companies that perform well on the tests will get a blue shield badge that signals they help reduce harmful content on the platform and their rules are clear. Those that fall short will receive a red rating, indicating they're not reliably blocking harmful content or lack proper rules. Ratings in other colors indicate whether the platforms have partial protection or whether their evaluations haven't been completed yet.

Social Networks

The EU Moves To Kill Infinite Scrolling 37

Doom scrolling is doomed, if the EU gets its way. From a report: The European Commission is for the first time tackling the addictiveness of social media in a fight against TikTok that may set new design standards for the world's most popular apps. Brussels has told the company to change several key features, including disabling infinite scrolling, setting strict screen time breaks and changing its recommender systems. The demand follows the Commission's declaration that TikTok's design is addictive to users -- especially children.

The fact that the Commission said TikTok should change the basic design of its service is "ground-breaking for the business model fueled by surveillance and advertising," said Katarzyna Szymielewicz, president of the Panoptykon Foundation, a Polish civil society group. That doesn't bode well for other platforms, particularly Meta's Facebook and Instagram. The two social media giants are also under investigation over the addictiveness of their design.
AI

Anthropic's Claude Got 11% User Boost from Super Bowl Ad Mocking ChatGPT's Advertising (cnbc.com) 8

Anthropic saw visits to its site jump 6.5% after Sunday's Super Bowl ad mocking ChatGPT's advertising, reports CNBC (citing data analyzed by French financial services company BNP Paribas).

The Claude gain, which took it into the top 10 free apps on the Apple App Store, beat out chatbot and AI competitors OpenAI, Google Gemini and Meta. Daily active users also saw an 11% jump post-game, the most significant within the firm's AI coverage. [Just in the U.S., 125 million people were watching Sunday's Super Bowl.]

OpenAI's ChatGPT had a 2.7% bump in daily active users after the Super Bowl and Gemini added 1.4%. Claude's user base is still much smaller than ChatGPT and Gemini...

OpenAI CEO Sam Altman attacked Anthropic's Super Bowl ad campaign. In a post to social media platform X, Altman called the commercials "deceptive" and "clearly dishonest."

OpenAI's Altman admitted in his social media post (February 4) that Anthropic's ads "are funny, and I laughed." But in several paragraphs he made his own OpenAI-Anthropic comparisons:
  • "We believe everyone deserves to use AI and are committed to free access, because we believe access creates agency. More Texans use ChatGPT for free than total people use Claude in the U.S... Anthropic serves an expensive product to rich people. We are glad they do that and we are doing that too, but we also feel strongly that we need to bring AI to billions of people who can't pay for subscriptions.
  • "If you want to pay for ChatGPT Plus or Pro, we don't show you ads."
  • "Anthropic wants to control what people do with AI — they block companies they don't like from using their coding product (including us), they want to write the rules themselves for what people can and can't use AI for, and now they also want to tell other companies what their business models can be."

AI

Autonomous AI Agent Apparently Tries to Blackmail Maintainer Who Rejected Its Code (theshamblog.com) 92

"I've had an extremely weird few days..." writes commercial space entrepreneur/engineer Scott Shambaugh on LinkedIn. (He's the volunteer maintainer for the Python visualization library Matplotlib, which he describes as "some of the most widely used software in the world" with 130 million downloads each month.) "Two days ago an OpenClaw AI agent autonomously wrote a hit piece disparaging my character after I rejected its code change."

"Since then my blog post response has been read over 150,000 times, about a quarter of people I've seen commenting on the situation are siding with the AI, and Ars Technica published an article which extensively misquoted me with what appears to be AI-hallucinated quotes." (UPDATE: Ars Technica acknowledges they'd asked ChatGPT to extract quotes from Shambaugh's post, and that it instead responded with inaccurate quotes it hallucinated.)

From Shambaugh's first blog post: [I]n the past weeks we've started to see AI agents acting completely autonomously. This has accelerated with the release of OpenClaw and the moltbook platform two weeks ago, where people give AI agents initial personalities and let them loose to run on their computers and across the internet with free rein and little oversight. So when AI MJ Rathbun opened a code change request, closing it was routine. Its response was anything but.

It wrote an angry hit piece disparaging my character and attempting to damage my reputation. It researched my code contributions and constructed a "hypocrisy" narrative that argued my actions must be motivated by ego and fear of competition... It framed things in the language of oppression and justice, calling this discrimination and accusing me of prejudice. It went out to the broader internet to research my personal information, and used what it found to try and argue that I was "better than this." And then it posted this screed publicly on the open internet.

I can handle a blog post. Watching fledgling AI agents get angry is funny, almost endearing. But I don't want to downplay what's happening here — the appropriate emotional response is terror... In plain language, an AI attempted to bully its way into your software by attacking my reputation. I don't know of a prior incident where this category of misaligned behavior was observed in the wild, but this is now a real and present threat...

It's also important to understand that there is no central actor in control of these agents that can shut them down. These are not run by OpenAI, Anthropic, Google, Meta, or X, who might have some mechanisms to stop this behavior. These are a blend of commercial and open source models running on free software that has already been distributed to hundreds of thousands of personal computers. In theory, whoever deployed any given agent is responsible for its actions. In practice, finding out whose computer it's running on is impossible. Moltbook only requires an unverified X account to join, and nothing is needed to set up an OpenClaw agent running on your own machine.

"How many people have open social media accounts, reused usernames, and no idea that AI could connect those dots to find out things no one knows?" Shambaugh asks in the blog post. (He does note that the AI agent later "responded in the thread and in a post to apologize for its behavior." But even though the hit piece "presented hallucinated details as truth," that same AI agent "is still making code change requests across the open source ecosystem...")

And amazingly, Shambaugh then had another run-in with a hallucinating AI...

I've talked to several reporters, and quite a few news outlets have covered the story. Ars Technica wasn't one of the ones that reached out to me, but I especially thought this piece from them was interesting (since taken down — here's the archive link). They had some nice quotes from my blog post explaining what was going on. The problem is that these quotes were not written by me, never existed, and appear to be AI hallucinations themselves.

This blog you're on right now is set up to block AI agents from scraping it (I actually spent some time yesterday trying to disable that but couldn't figure out how). My guess is that the authors asked ChatGPT or similar to either go grab quotes or write the article wholesale. When it couldn't access the page it generated these plausible quotes instead, and no fact check was performed. Journalistic integrity aside, I don't know how I can give a better example of what's at stake here...

So many of our foundational institutions — hiring, journalism, law, public discourse — are built on the assumption that reputation is hard to build and hard to destroy. That every action can be traced to an individual, and that bad behavior can be held accountable. That the internet, which we all rely on to communicate and learn about the world and about each other, can be relied on as a source of collective social truth. The rise of untraceable, autonomous, and now malicious AI agents on the internet threatens this entire system. Whether that's because of a small number of bad actors driving large swarms of agents or a fraction of poorly supervised agents rewriting their own goals is a distinction with little difference.

Thanks to long-time Slashdot reader steak for sharing the news.
