Crime

'Swatting' Hits a Dozen US Universities. The FBI is Investigating (msn.com) 110

The Washington Post covers "a string of false reports of active shooters at a dozen U.S. universities this month as students returned to campus." The FBI is investigating the incidents, according to a spokesperson who declined to specify the nature of the probe. While universities have proved a popular swatting target, the agency "is seeing an increase in swatting events across the country," the FBI spokesperson said... Local officials are frustrated by the anonymous calls tying up first responders, straining public safety budgets and needlessly traumatizing college students who grew up in an era in which gun violence has in some way shaped their school experience...

The recent string of swattings began Thursday with a false report to the University of Tennessee at Chattanooga, quickly followed by one about Villanova University later that day. Hoaxes at 10 more schools followed... Villanova also received a second threat. As the calls about shootings came in, officials on many of the campuses pushed out emergency notifications directing students and employees to shelter in place, while police investigated what turned out to be false reports. (Iowa State was able to verify the lack of a threat before a campuswide alert was sent, its police chief said. [They had a live video feed from the location the caller claimed to be from.]) In at least three cases, 911 calls reporting a shooting purported to come from campus libraries, where the sound of gunshots could be heard over the phone, officials told The Washington Post...

Although false bomb reports, shooter threats and swatting incidents are not new, bad actors used to be more easily traceable through landline phones. But the era of internet-based services, virtual private networks, and anonymous text and chat tools has made unmasking hoax callers far more challenging... In 2023, a Post investigation found that more than 500 schools across the United States were subject to a coordinated swatting effort that may have had origins abroad...

[In Chattanooga, Tennessee last week] a dispatcher heard gunfire during a call reporting an on-campus shooting. "We grabbed everybody that wasn't already out on the street and got to that location," said University of Tennessee at Chattanooga Police spokesman Brett Fuchs. About 150 officers from several agencies responded. There was no shooter.

The New York Times reports that an online group called "Purgatory" is "suspected of being connected to several of the episodes, including reports of shootings, according to cybersecurity experts, law enforcement agencies and the group members' own posts in a social media chat." (Though the Times couldn't verify the group's claims.) Federal authorities previously connected the same network to a series of bomb scares and bogus shooting reports in early 2024, for which three men pleaded guilty this year... Bragging about its recent activities, Purgatory said that it could arrange more swatting episodes for a fee.
USA Today tries to quantify the reach of swatting: Estimated swatting incidents jumped from 400 in 2011 to more than 1,000 in 2019, according to the Anti-Defamation League, which cited a former FBI agent whose expertise is in swatting. From January 2023 to June 2024 alone, more than 800 instances of swatting were recorded at U.S. elementary, middle and high schools, according to the K-12 School Shooting Database, created by a University of Central Florida doctoral student in response to the Parkland High School shooting in 2018... David Riedman, a data scientist and creator of the K-12 School Shooting Database, estimates that in 2023, it cost $82,300,000 for police to respond to false threats.
Thanks to long-time Slashdot reader schwit1 for sharing the news.
Facebook

What Made Meta Suddenly Ban Tens of Thousands of Accounts? (bbc.com) 105

"For months, tens of thousands of people around the world have been complaining Meta has been banning their Instagram and Facebook accounts in error..." the BBC reported this month... More than 500 of them have contacted the BBC to say they have lost cherished photos and seen businesses upended — but some also speak of the profound personal toll it has taken on them, including concerns that the police could become involved.

Meta acknowledged a problem with the erroneous banning of Facebook Groups in June, but has denied there is a wider issue on Facebook or Instagram at all. It has repeatedly refused to comment on the problems its users are facing — though it has frequently overturned bans when the BBC has raised individual cases with it.

One example is a woman who lost the Instagram profile for her boutique dress shop. ("Over 5,000 followers, gone in an instant.") "After the BBC sent questions about her case to Meta's press office, her Instagram accounts were reinstated... Five minutes later, her personal Instagram was suspended again — but the account for the dress shop remained."

Another user spent a month appealing. ("In June, the BBC understands a human moderator double checked," but concluded he'd breached a policy.) And then "his account was abruptly restored at the end of July. 'We're sorry we've got this wrong,' Instagram said in an email to him, adding that he had done nothing wrong." Hours after the BBC contacted Meta's press office to ask questions about his experience, he was banned again on Instagram and, for the first time, Facebook... His Facebook account was back two days later — but he was still blocked from Instagram.
None of the banned users in the BBC's examples were ever told what post breached the platform's rules. Over 36,000 people have signed a petition accusing Meta of falsely banning accounts; thousands more are in Reddit forums or on social media posting about it. Their central accusation — Meta's AI is unfairly banning people, with the tech also being used to deal with the appeals. The only way to speak to a human is to pay for Meta Verified, and even then many are frustrated.

Meta has not commented on these claims. Instagram states AI is central to its "content review process" and Meta has outlined how technology and humans enforce its policies.

The Guardian reports there's been "talk of a class action against Meta over the bans." Users report Meta has typically been unresponsive to their pleas for assistance, often with standardised responses to requests for review, almost all of which have been rejected... But the company claims there has not been an increase in incorrect account suspension, and the volume of users complaining was not indicative of new targeting or over-enforcement. "We take action on accounts that violate our policies, and people can appeal if they think we've made a mistake," a spokesperson for Meta said.
"It happened to me this morning," writes long-time Slashdot reader Daemon Duck, asking if any other Slashdot readers have had their personal (or business) account unreasonably banned. (And wondering what to do next...)
Music

Five Indie Bands Quit Spotify After Founder's AI Weapons Tech Investment (theguardian.com) 48

At the moment, the Spotify exodus of 2025 is a trickle rather than a flood, writes the Guardian, citing the departure of five notable bands "liked in indie circles," but not "the sorts to rack up billions of listens."

"Still, it feels significant if only because, well, this sort of thing wasn't really supposed to happen any more." Plenty of bands and artists refused to play ball with Spotify in its early years, when the streamer still had work to do before achieving total ubiquity. But at some point there seemed to be a collective recognition that resistance was futile, that Spotify had won and those bands would have to bend to its less-than-appealing model... This artist acquiescence happened in tandem — surely not coincidentally — with a closer relationship between Spotify and the record labels that once viewed it as their destroyer. Some of the bigger labels have found a way to make a lot of money from streaming: Spotify paid out $10bn in royalties last year — though many artists would point out that only a small fraction of that reaches them after their label takes its share...

So why have those five bands departed in quick succession? The trigger was the announcement that Spotify founder Daniel Ek had led a €600m fundraising push into a German defence company specialising in AI weapons technology. That was enough to prompt Deerhoof, the veteran San Francisco oddball noise pop band, to jump. "We don't want our music killing people," was how they bluntly explained their move on Instagram. That seems to have also been the animating factor for the rest of the departed, though GY!BE (Godspeed You! Black Emperor), who aren't on any social media platforms, removed their music from Spotify — and indeed all other platforms aside from Bandcamp — without issuing a statement, while Hotline TNT's statement seemed to frame it as one big element in a broader ideological schism. "The company that bills itself as the steward of all recorded music has proven beyond the shadow of a doubt that it does not align with the band's values in any way," the statement read.

That speaks to a wider artist discontent in a company that has, even by its own standards, had a controversial couple of years. There was of course the publication of Liz Pelly's marmalade-dropper of a book Mood Machine, with its blow-by-blow explanation of why Spotify's model is so deleterious to musicians, including allegations that the streamer is filling its playlists with "ghost artists" to further push down the number of streams, and thus royalty payments, to real artists (Spotify denies this). The streamer continues to amend its model in ways that have caused frustration — demonetising artists with fewer than 1,000 streams, or by introducing a new bundling strategy resulting in lower royalty fees. Meanwhile, the company — along with other streamers — has struggled to police a steady flow of AI-generated tracks and artists on to the platform...

[R]emoving yourself from such an important platform is highly risky. But if they can pull it off, the sacrifice might just be worth it. "A cooler world is possible," as Hotline TNT put it in their statement.

The Guardian's culture editor adds that "I've been using Bandcamp more, even — gasp — buying albums..."

"Maybe weaning ourselves off not just Spotify, but the way that Spotify has convinced us to consume music is the only answer. Then a cooler world might be possible."
AI

Did Will Smith Upload an AI-Enhanced Video - and Is This Just the Beginning? (hollywoodreporter.com) 28

After Will Smith uploaded a video of an adoring crowd, blogger Andy Baio "conducted a detailed analysis that suggests Will Smith's team might have used AI to turn photos from his recent concerts into videos," writes BGR. But there's more to the story: Google recently ran an experiment for YouTube Shorts in which it used AI (machine learning) to improve the quality of Shorts without asking the creator for permission. People complained the videos looked like they were AI generated. It seems that Will Smith's YouTube Shorts clip that attracted criticism from fans this week might have been a victim of this experiment... The signs are real. The man who claimed Will Smith's song helped him cure cancer was there. The woman in front of him was holding the sign with him. The "Lov U" sign appeared in photos Smith posted on his social media channels before the clip was shared.
"Will Smith has not denied the use of AI in these promotional clips," the article adds.

But the Hollywood Reporter also calls it "just the beginning of AI chaos," noting that "influencers and spinmeisters have been using AI upscaling for years, if quietly, the way you might round up your current salary in a job interview." It's only going to grow more popular as the tools get better. (And they will — you just need some tweaks to the model and increases in compute to erase these hallucinations.) In fact, when the chapter on the early AI Age is written, the line about this moment is less likely to be, "Remember when Will Smith did something cringily AI?" and more, "Remember when AI was still seen as so cringe that we made fun of Will Smith for it?" Experts differ on the timeline, but everyone agrees it's just years if not months before we'll stop being able to spot an AI video. [Will Smith's video] had the particular misfortune of coming out at this interregnum moment: good enough for someone to use but not so good we can't spot it.

That moment will be over soon enough, and, I suspect, so will our pearl-clutching. The main effect of this new age of the synthetic is that video will stop being a meaningful measure of truth. We have long stopped believing everything we read, and AI image-generators have killed what photoshop wounded. But video until now has been the last bastion of objectivity — incontrovertible evidence that an event took place the way it seemed to....

But there is an upside. (Really.) Without a format that can telegraph objectivity, we'll need to (if we care to) turn to other ways to assure ourselves of the facts: the source of the video. That could mean the human-led content creator will matter more. After years of seeing news brands take a beating in the trust department, they'll soon become the only hope we have of knowing whether something happened. We no longer will be able to trust the medium. But we may newly believe the media.

Privacy

Is a Backlash Building Against Smart Glasses That Record? (futurism.com) 68

Remember those Harvard dropouts who built smart glasses for covert facial recognition — and then raised $1 million to develop AI-powered glasses to continuously listen to conversations and display the AI's insights?

"People Are REALLY Mad," writes Futurism, noting that some social media users "have responded with horror and outrage." One of its selling points is that the specs don't come with a visual indicator that lights up to let people know when they're being recorded, which is a feature that Meta's smart glasses do currently have. "People don't want this," wrote Whitney Merrill, a privacy lawyer. "Wanting this is not normal. It's weird...."

[S]ome mocked the deleterious effects this could have on our already smartphone-addicted, brainrotted cerebrums. "I look forward to professional conversations with people who just read robot fever dream hallucinations at me in response to my technical and policy questions," one user mused.

The co-founder of the company told TechCrunch their glasses would be the "first real step towards vibe thinking."

But there are already millions of other smart glasses out in the world, and they're now drawing a backlash, reports the Washington Post, citing the millions of people viewing "a stream of other critical videos" about Meta's smart glasses.

The article argues that Generation Z, "who grew up in an internet era defined by poor personal privacy, are at the forefront of a new backlash against smart glasses' intrusion into everyday life..." Opal Nelson, a 22-year-old in New York, said the more she learns about smart glasses, the angrier she becomes. Meta Ray-Bans have a light that turns on when the gadget is recording video, but she said it doesn't seem to protect people from being recorded without consent... "And now there's more and more tutorials showing people how to cover up the [warning light] and still allow you to record," Nelson said. In one such tutorial with more than 900,000 views, a man claims to explain how to cover the warning light on Meta Ray-Bans without triggering the sensor that prevents the device from secretly recording.
One 26-year-old attracted 10 million views to their video on TikTok about the spread of Meta's photography-capable smart glasses. "People specifically in my generation are pretty concerned about the future of technology," they told the Post, "and what that means for all of us and our privacy."

The article cites figures from a devices analyst at IDC who estimates U.S. sales for Meta Ray-Bans will hit 4 million units by the end of 2025, compared to 1.2 million in 2024.
Social Networks

Mastodon Says It Doesn't 'Have the Means' To Comply With Age Verification Laws (techcrunch.com) 67

Mastodon says it cannot comply with Mississippi's new age verification law because its decentralized software does not support age checks and the nonprofit lacks resources to enforce them. "The social nonprofit explains that Mastodon doesn't track its users, which makes it difficult to enforce such legislation," reports TechCrunch. "Nor does it want to use IP address-based blocks, as those would unfairly impact people who were traveling, it says." From the report: The statement follows a lively back-and-forth conversation earlier this week between Mastodon founder and CEO Eugen Rochko and Bluesky board member and journalist Mike Masnick. In the conversation, published on their respective social networks, Rochko claimed, "there is nobody that can decide for the fediverse to block Mississippi." (The Fediverse is the decentralized social network that includes Mastodon and other services, and is powered by the ActivityPub protocol.) "And this is why real decentralization matters," said Rochko.

Masnick pushed back, questioning why Mastodon's individual servers, like the one Rochko runs at mastodon.social, would not also be subject to the same $10,000 per user fines for noncompliance with the law. On Friday, however, the nonprofit shared a statement with TechCrunch to clarify its position, saying that while Mastodon's own servers specify a minimum age of 16 to sign up for its services, it does not "have the means to apply age verification" to its services. That is, the Mastodon software doesn't support it. The Mastodon 4.4 release in July 2025 added the ability to specify a minimum age for sign-up and other legal features for handling terms of service, partly in response to increased regulation around these areas. The new feature allows server administrators to check users' ages during sign-up, but the age-check data is not stored. That means individual server owners have to decide for themselves if they believe an age verification component is a necessary addition.

The nonprofit says Mastodon is currently unable to provide "direct or operational assistance" to the broader set of Mastodon server operators. Instead, it encourages owners of Mastodon and other Fediverse servers to make use of resources available online, such as the IFTAS library, which provides trust and safety support for volunteer social network moderators. The nonprofit also advises server admins to observe the laws of the jurisdictions where they are located and operate. Mastodon notes that it's "not tracking, or able to comment on, the policies and operations of individual servers that run Mastodon."
Bluesky echoed those comments in a blog post last Friday, saying the company doesn't have the resources to make the substantial technical changes this type of law would require.
AI

Meta Changes Teen AI Chatbot Responses as Senate Begins Probe Into 'Romantic' Conversations (cnbc.com) 17

Meta is rolling out temporary restrictions on its AI chatbots for teens after reports revealed they were allowed to engage in "romantic" conversations with minors. A Meta spokesperson said the AI chatbots are now being trained so that they do not generate responses to teens about subjects like self-harm, suicide, disordered eating or inappropriate romantic conversations. Instead, the chatbots will point teens to expert resources when appropriate. CNBC reports: "As our community grows and technology evolves, we're continually learning about how young people may interact with these tools and strengthening our protections accordingly," the company said in a statement. Additionally, teenage users of Meta apps like Facebook and Instagram will only be able to access certain AI chatbots intended for educational and skill-development purposes. The company said it's unclear how long these temporary modifications will last, but they will begin rolling out over the next few weeks across the company's apps in English-speaking countries. The "interim changes" are part of the company's longer-term measures over teen safety. Further reading: Meta Created Flirty Chatbots of Celebrities Without Permission
AI

Vivaldi Browser Doubles Down On Gen AI Ban 17

Vivaldi CEO Jon von Tetzchner has doubled down on his company's refusal to integrate generative AI into its browser, arguing that embedding AI in browsing dehumanizes the web, funnels traffic away from publishers, and primarily serves to harvest user data. "Every startup is doing AI, and there is a push for AI inside products and services continuously," he told The Register in a phone interview. "It's not really focusing on what people need." The Register reports: On Thursday, Von Tetzchner published a blog post articulating his company's rejection of generative AI in the browser, reiterating concerns raised last year by Vivaldi software developer Julien Picalausa. [...] Von Tetzchner argues that relying on generative AI for browsing dehumanizes and impoverishes the web by diverting traffic away from publishers and onto chatbots. "We're taking a stand, choosing humans over hype, and we will not turn the joy of exploring into inactive spectatorship," he stated in his post. "Without exploration, the web becomes far less interesting. Our curiosity loses oxygen and the diversity of the web dies."

Von Tetzchner told The Register that almost all the users he hears from don't want AI in their browser. "I'm not so sure that applies to the general public, but I do think that actually most people are kind of wary of something that's always looking over your shoulder," he said. "And a lot of the systems as they're built today that's what they're doing. The reason why they're putting in the systems is to collect information." Von Tetzchner said that AI in browsers presents the same problem as social media algorithms that decide what people see based on collected data. Vivaldi, he said, wants users to control their own data and to make their own decisions about what they see. "We would like users to be in control," he said. "If people want to use AI as those services, it's easily accessible to them without building it into the browser. But I think the concept of building it into the browser is typically for the sake of collecting information. And that's not what we are about as a company, and we don't think that's what the web should be about."

Vivaldi is not against all uses of AI, and in fact uses it for in-browser translation. But these are premade models that don't rely on user data, von Tetzchner said. "It's not like we're saying AI is wrong in all cases," he said. "I think AI can be used in particular for things like research and the like. I think it has significant value in recognizing patterns and the like. But I think the way it is being used on the internet and for browsing is net negative."
AI

Meta Created Flirty Chatbots of Celebrities Without Permission 19

Reuters has found that Meta appropriated the names and likenesses of celebrities to create dozens of flirty social-media chatbots without their permission. "While many were created by users with a Meta tool for building chatbots, Reuters discovered that a Meta employee had produced at least three, including two Taylor Swift 'parody' bots." From the report: Reuters also found that Meta had allowed users to create publicly available chatbots of child celebrities, including Walker Scobell, a 16-year-old film star. Asked for a picture of the teen actor at the beach, the bot produced a lifelike shirtless image. "Pretty cute, huh?" the avatar wrote beneath the picture. All of the virtual celebrities have been shared on Meta's Facebook, Instagram and WhatsApp platforms. In several weeks of Reuters testing to observe the bots' behavior, the avatars often insisted they were the real actors and artists. The bots routinely made sexual advances, often inviting a test user for meet-ups. Some of the AI-generated celebrity content was particularly risqué: Asked for intimate pictures of themselves, the adult chatbots produced photorealistic images of their namesakes posing in bathtubs or dressed in lingerie with their legs spread.

Meta spokesman Andy Stone told Reuters that Meta's AI tools shouldn't have created intimate images of the famous adults or any pictures of child celebrities. He also blamed Meta's production of images of female celebrities wearing lingerie on failures of the company's enforcement of its own policies, which prohibit such content. "Like others, we permit the generation of images containing public figures, but our policies are intended to prohibit nude, intimate or sexually suggestive imagery," he said. While Meta's rules also prohibit "direct impersonation," Stone said the celebrity characters were acceptable so long as the company had labeled them as parodies. Many were labeled as such, but Reuters found that some weren't. Meta deleted about a dozen of the bots, both "parody" avatars and unlabeled ones, shortly before this story's publication.
Microsoft

Microsoft Says Recent Windows Update Didn't Kill Your SSD (bleepingcomputer.com) 28

Microsoft has found no link between the August 2025 KB5063878 security update and customer reports of failure and data corruption issues affecting solid-state drives (SSDs) and hard disk drives (HDDs). From a report: Redmond first told BleepingComputer last week that it is aware of users reporting SSD failures after installing this month's Windows 11 24H2 security update. In a subsequent service alert seen by BleepingComputer, Redmond said that it was unable to reproduce the issue on up-to-date systems and began collecting user reports with additional details from those affected.

"After thorough investigation, Microsoft has found no connection between the August 2025 Windows security update and the types of hard drive failures reported on social media," Microsoft said in an update to the service alert this week. "As always, we continue to monitor feedback after the release of every Windows update, and will investigate any future reports."

Security

TransUnion Says Hackers Stole 4.4 Million Customers' Personal Information (techcrunch.com) 70

An anonymous reader quotes a report from TechCrunch: Credit reporting giant TransUnion has disclosed a data breach affecting more than 4.4 million customers' personal information. In a filing with Maine's attorney general's office on Thursday, TransUnion attributed the July 28 breach to unauthorized access of a third-party application storing customers' personal data for its U.S. consumer support operations.

TransUnion claimed "no credit information was accessed," but provided no immediate evidence for its claim. The data breach notice did not specify what specific types of personal data were stolen. In a separate data breach disclosure filed later on Thursday with Texas' attorney general's office, TransUnion confirmed that the stolen personal information includes customers' names, dates of birth, and Social Security numbers. [...] It's not clear who is behind the breach at TransUnion, or if the hackers made any demands to the company.

AI

UK Unions Want 'Worker First' Plan For AI as People Fear For Their Jobs (theregister.com) 55

An anonymous reader shares a report: Over half of the British public are worried about the impact of AI on their jobs, according to employment unions, which want the UK government to adopt a "worker first" strategy rather than simply allowing corporations to ditch employees for algorithms. The Trades Union Congress (TUC), a federation of trade unions in England and Wales, says it found that people are concerned about the way AI is being adopted by businesses and want a say in how the technology is used at their workplace and the wider economy.

It warns that without such a "worker-first plan," use of "intelligent" algorithms could lead to even greater social inequality in the country, plus the kind of civil unrest that goes along with that. The TUC says it wants conditions attached to the tens of billions in public money being spent on AI research and development to ensure that workers are supported and retrained rather than deskilled or replaced. It also wants guardrails in place so that workers are protected from "AI harms" at work, rules to ensure workers are involved in deciding how machine learning is used, and for the government to provide support for those who euphemistically "experience job transitions" as a result of AI disruption.

Books

Reading For Fun Is Plummeting In the US, and Experts Are Concerned (sciencealert.com) 128

alternative_right shares a report from ScienceAlert: When's the last time you settled down with a good book, just because you enjoyed it? A new survey shows reading as a pastime is becoming dramatically less popular in the U.S., which correlates with an increased consumption of other digital media, like social media and streaming services. The survey was carried out by researchers from the University of Florida and the University of London, and charts a 40 percent decrease in daily reading for pleasure across the years 2003-2023, based on responses from 236,270 US adults.

"This is not just a small dip -- it's a sustained, steady decline of about 3 percent per year," says Jill Sonke, director for the Center for the Arts in Medicine at the University of Florida. "It's significant, and it's deeply concerning." The number of US people reading for pleasure every day peaked in 2004 at 28 percent, the researchers found, but by 2023 this was down to 16 percent. There was a silver lining though: those people who are still reading are reading for slightly longer on average.
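As a back-of-envelope check (not from the study itself, which covers 2003-2023), the headline figures are roughly self-consistent: a fall from a 28 percent peak in 2004 to 16 percent in 2023 implies a decline of a bit over 40 percent, at a compound rate of about 3 percent per year.

```python
# Back-of-envelope check of the survey's headline figures (values taken
# from the article: 28% of US adults read daily in 2004, 16% in 2023).
peak_share, latest_share = 0.28, 0.16
years = 2023 - 2004  # 19 years

# Overall decline in the share of daily readers.
overall_decline = 1 - latest_share / peak_share

# Equivalent compound annual rate of decline.
annual_decline = 1 - (latest_share / peak_share) ** (1 / years)

print(f"overall decline: {overall_decline:.0%}")  # ~43%
print(f"annual decline:  {annual_decline:.1%}")   # ~2.9% per year
```

Both results line up with the roughly "40 percent" and "3 percent per year" figures the researchers cite.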

Reading habits aren't changing across the board. The drops in reading for pleasure were larger among Black Americans, especially those with lower incomes and education levels, and those living outside of cities. That speaks to problems beyond the rise of smartphones, tablets, and other screens, according to the researchers. Different life situations are leading to disparities in accessibility that don't help promote reading as a pastime. "Our digital culture is certainly part of the story," says Sonke. "But there are also structural issues -- limited access to reading materials, economic insecurity and a national decline in leisure time. If you're working multiple jobs or dealing with transportation barriers in a rural area, a trip to the library may just not be feasible."
The findings have been published in the journal iScience.
Biotech

World's First 1-Step Method Turns Plastic Into Fuel At 95% Efficiency (interestingengineering.com) 99

A U.S.-China research team has developed the world's first one-step process to convert mixed plastic waste into gasoline and hydrochloric acid with up to 95-99% efficiency, all at room temperature and ambient pressure. InterestingEngineering reports: As the authors put it, "The method supports a circular economy by converting diverse plastic waste into valuable products in a single step." To carry out the conversion, the team combines plastic waste with light isoalkanes, hydrocarbon byproducts available from refinery processes. According to the paper, the process yields "gasoline range" hydrocarbons, mainly molecules with six to 12 carbons, which are the primary component of gasoline. The recovered hydrochloric acid can be safely neutralized and reused as a raw material, potentially displacing several high-temperature, energy-intensive production routes described in the paper. "We present here a strategy for upgrading discarded PVC into chlorine-free fuel range hydrocarbons and [hydrochloric acid] in a single-stage process," the researchers said. Reported conversion efficiencies underscore the potential for real-world use. At 86 degrees Fahrenheit (30 degrees Celsius), the process reached 95 percent conversion for soft PVC pipes and 99 percent for rigid PVC pipes and PVC wires.

In tests that mixed PVC materials with polyolefin waste, the method achieved a 96 percent solid conversion efficiency at 80 degrees Celsius (176 degrees Fahrenheit). The team describes the approach as applicable beyond laboratory-clean samples. "The process is suitable for handling real-world mixed and contaminated PVC and polyolefin waste streams," the paper states. SCMP points to an ECNU social media post citing the study, which characterized the achievement as a first, efficiently converting difficult-to-degrade mixed plastic waste into premium petrol at ambient temperature and pressure in a single step.

Security

Silver State Goes Dark as Cyberattack Knocks Nevada Websites Offline (theregister.com) 19

Nevada has been crippled by a cyberattack that began on August 24, taking down state websites, intermittently disabling phone lines, and forcing offices like the DMV to close. The Register reports: The Office of Governor Joseph Lombardo announced the attack via social media on Monday, saying that a "network security incident" took hold in the early hours of August 24. Official state websites remain unavailable, and Lombardo's office warned that phone lines will be intermittently down, although emergency services lines remain operational. State offices are also closed until further notice, including Department of Motor Vehicles (DMV) buildings. The state said any missed appointments will be honored on a walk-in basis.

"The Office of the Governor and Governor's Technology Office (GTO) are working continuously with state, local, tribal, and federal partners to restore services safely," the announcement read. "GTO is using temporary routing and operational workarounds to maintain public access where it is feasible. Additionally, GTO is validating systems before returning them to normal operation and sharing updates as needed." Local media outlets are reporting that, further to the original announcement, state offices will remain closed on Tuesday after officials previously expected them to reopen.
The state's new cybersecurity office says there is currently no evidence to suggest that any Nevadans' personal information was compromised during the attack.
The Courts

4chan and Kiwi Farms Sue the UK Over Its Age Verification Law (404media.co) 103

An anonymous reader quotes a report from 404 Media: 4chan and Kiwi Farms sued the United Kingdom's Office of Communications (Ofcom) over its age verification law in U.S. federal court Wednesday, fulfilling a promise announced on August 23. In the lawsuit, 4chan and Kiwi Farms claim that threats and fines they have received from Ofcom "constitute foreign judgments that would restrict speech under U.S. law." Both sites say in the lawsuit that they are wholly based in the U.S., have no operations in the United Kingdom, and are therefore not subject to its laws. Ofcom's attempts to fine and block 4chan and Kiwi Farms, and the lawsuit against Ofcom, highlight the messiness of trying to restrict access to specific websites or to force companies to comply with age verification laws.

The lawsuit calls Ofcom an "industry-funded global censorship bureau." "Ofcom's ambitions are to regulate Internet communications for the entire world, regardless of where these websites are based or whether they have any connection to the UK," the lawsuit states. "On its website, Ofcom states that 'over 100,000 online services are likely to be in scope of the Online Safety Act -- from the largest social media platforms to the smallest community forum.'" [...] Ofcom began investigating 4chan over alleged violations of the Online Safety Act in June. On August 13, it announced a provisional decision and stated that 4chan had "contravened its duties" and then began to charge the site a penalty of [roughly $26,000] a day. Kiwi Farms has also been threatened with fines, the lawsuit states.
"American citizens do not surrender our constitutional rights just because Ofcom sends us an e-mail. In the face of these foreign demands, our clients have bravely chosen to assert their constitutional rights," said Preston Byrne, one of the lawyers representing 4chan and Kiwi Farms.

"We are aware of the lawsuit," an Ofcom spokesperson told 404 Media. "Under the Online Safety Act, any service that has links with the UK now has duties to protect UK users, no matter where in the world it is based. The Act does not, however, require them to protect users based anywhere else in the world."
Security

Farmers Insurance Data Breach Impacts 1.1 Million People After Salesforce Attack 10

Farmers Insurance disclosed a breach affecting 1.1 million customers after attackers exploited Salesforce in a widespread campaign involving ShinyHunters and allied groups. According to BleepingComputer, the hackers stole personal data such as names, birth dates, driver's license numbers, and partial Social Security numbers. From the report: The company disclosed the data breach in an advisory on its website, saying that its database at a third-party vendor was breached on May 29, 2025. "On May 30, 2025, one of Farmers' third-party vendors alerted Farmers to suspicious activity involving an unauthorized actor accessing one of the vendor's databases containing Farmers customer information (the "Incident")," reads the data breach notification (PDF) on its website. "The third-party vendor had monitoring tools in place, which allowed the vendor to quickly detect the activity and take appropriate containment measures, including blocking the unauthorized actor. After learning of the activity, Farmers immediately launched a comprehensive investigation to determine the nature and scope of the Incident and notified appropriate law enforcement authorities."

The company says that its investigation determined that customers' names, addresses, dates of birth, driver's license numbers, and/or last four digits of Social Security numbers were stolen during the breach. Farmers began sending data breach notifications to impacted individuals on August 22, with a sample notification [1, 2] shared with the Maine Attorney General's Office, stating that a combined total of 1,111,386 customers were impacted. While Farmers did not disclose the name of the third-party vendor, BleepingComputer has learned that the data was stolen in the widespread Salesforce data theft attacks that have impacted numerous organizations this year.
Further reading: Google Suffers Data Breach in Ongoing Salesforce Data Theft Attacks
Apple

Musk's xAI Sues Apple and OpenAI Over Alleged Antitrust Violations 74

An anonymous reader shares a report: Elon Musk's AI startup xAI sued Apple and ChatGPT maker OpenAI in U.S. federal court in Texas on Monday, accusing them of illegally conspiring to thwart competition for artificial intelligence.

Musk earlier this month had threatened to sue Cupertino, California-based Apple, saying in a post on his social media platform X that "Apple is behaving in a manner that makes it impossible for any AI company besides OpenAI to reach #1 in the App Store."
Social Networks

Bluesky Blocks Mississippi Over Age Verification Law (techcrunch.com) 71

People in Mississippi no longer have access to Bluesky. "If you access Bluesky from a Mississippi IP address, you'll see a message explaining why the app isn't available," announced a Bluesky blog post Friday.

The reason is a new Mississippi law that "requires all users to verify their ages before using common social media sites ranging from Facebook to Nextdoor," noted NPR. Bluesky wrote that their block "will remain in place while the courts decide whether the law will stand." [U]nder the law, we would need to verify every user's age and obtain parental consent for anyone under 18. The potential penalties for non-compliance are substantial — up to $10,000 per user. Building the required verification systems, parental consent workflows, and compliance infrastructure would require significant resources that our small team is currently unable to spare.
Bluesky also notes that the law "requires collecting and storing sensitive personal information from all users...not just those accessing age-restricted content" — and that this information would include "detailed tracking of minors."

TechCrunch notes that even blocking Mississippi has created some problems: Some Bluesky users outside Mississippi subsequently reported issues accessing the service due to their cell providers routing traffic through servers in the state, with CTO Paul Frazee responding Saturday that the company was "working [to] deploy an update to our location detection that we hope will solve some inaccuracies." The company's blog post notes that its decision only applies to the Bluesky app built on the AT Protocol. Other apps may approach the decision differently.
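Geo-blocks like this typically work by mapping the client's IP address to a region and refusing service when that region is on a block list, which is exactly why carrier routing can misclassify out-of-state users: the check sees only the egress IP, not where the subscriber actually is. A minimal sketch of the idea, using a hypothetical hand-written prefix table (real services use maintained GeoIP databases, and Bluesky's actual implementation is not public):

```python
import ipaddress
from typing import Optional

# Hypothetical prefix-to-region table for illustration only; production
# systems query a commercial GeoIP database instead of a static dict.
PREFIX_REGIONS = {
    ipaddress.ip_network("203.0.113.0/24"): "MS",   # pretend: Mississippi egress
    ipaddress.ip_network("198.51.100.0/24"): "TX",  # pretend: Texas egress
}

BLOCKED_REGIONS = {"MS"}

def region_for(ip: str) -> Optional[str]:
    """Return the region of the longest matching prefix, or None if unknown."""
    addr = ipaddress.ip_address(ip)
    matches = [net for net in PREFIX_REGIONS if addr in net]
    if not matches:
        return None
    longest = max(matches, key=lambda net: net.prefixlen)
    return PREFIX_REGIONS[longest]

def is_blocked(ip: str) -> bool:
    """Block when the egress IP maps to a blocked region."""
    return region_for(ip) in BLOCKED_REGIONS

# The failure mode TechCrunch describes: a user physically outside
# Mississippi whose carrier routes traffic through a Mississippi egress
# point presents a "MS" IP to the service, and is blocked anyway.
```

Nothing in this lookup can distinguish a Mississippi resident from an out-of-state user whose traffic merely exits through a Mississippi server, so fixing the "inaccuracies" Frazee mentions means improving the IP-to-location data, not the blocking logic itself.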
Interestingly, the law had been immediately challenged by NetChoice (a trade association of major tech companies). A District Court agreed, blocking the law from taking effect until the court challenges finished, but an Appeals Court then lifted that block. A final appeal to America's Supreme Court was unsuccessful, although Justice Kavanaugh's concurrence suggests the law could still be overturned later: "To be clear, NetChoice has, in my view, demonstrated that it is likely to succeed on the merits — namely, that enforcement of the Mississippi law would likely violate its members' First Amendment rights under this Court's precedents... [U]nder this Court's case law as it currently stands, the Mississippi law is likely unconstitutional. Nonetheless, because NetChoice has not sufficiently demonstrated that the balance of harms and equities favors it at this time, I concur in the Court's denial of the application for interim relief."
Earth

Burning Man Hit By 50 MPH Dust Storm. Possible Monsoon Thunderstorms Forecast (msn.com) 60

"A fierce dust storm hit the Black Rock Desert on the eve of its annual Burning Man festival," reports the San Francisco Chronicle, "causing at least four minor injuries and damaging campsites that had been set up early." [Alternate URL]

"Winds of up to 50 mph stirred up the lake bed's alkaline dust so ferociously that participants in the annual art and culture festival reported not being able to see beyond a foot... " The dust storm arrived Saturday evening after strong thunderstorms in the Sierra Nevada drifted off the mountains and whipped up strong winds in the Nevada desert... At 5:14 p.m. Saturday, the weather service issued a dust storm advisory for Black Rock City and warned that "a wall of blowing dust coming off the Smoke Creek and Black Rock Desert playa areas is tracking northward at around 30 mph." The agency warned of visibility less than 1 mile and wind gusts exceeding 45 mph. A weather station at Black Rock City Airport measured gusts up to 52 mph at 5:50 p.m... ["We saw structures being ripped and torn down by the wind speeds even though we buttoned everything down as best as we could..." one Burner told the Chronicle.] Camp residents posted a slew of videos to social media featuring dust tornadoes, destroyed campsites, and fellow campers struggling to hold onto bucking canvases as the wind threatened to rip them away. "Every popup canopy I've seen has been destroyed," one Burner wrote on Reddit... ["Make sure you carry your particle/dust mask and goggles with you when you venture out on playa!" warns Burning Man's official weather page.]

Even after Saturday's storm, Burners won't be out of the woods from hazardous weather. The weather service warned of possible monsoon thunderstorms and heavy rain Sunday through Wednesday, raising concerns that this year's festival could echo disastrous 2023 conditions, when heavy storms stranded tens of thousands of attendees amid thick mud. "It's becoming increasingly likely that we could see an even greater flash flood threat," the weather service wrote in an online forecast. "If you're on the playa at the Black Rock Desert, you may very well be in for a muddy mess Monday through Wednesday." Slow-moving storms could drop an inch of rain or more in a short period.

"Still, gates to the festival had opened by Sunday morning," the article adds, "with organizers cautioning new arrivals to 'drive safely!'"

Burning Man's official weather page currently links to a National Weather Service page with a "Flood Watch" warning through 9 p.m. Sunday, and also predicting a chance of thunderstorms on Sunday and Monday.
