Social Networks

Two-Week Social Media 'Detox' Erases a Decade of Age-Related Decline, Study Finds (yahoo.com) 20

Critics say social media is engineered to be as addictive as tobacco or gambling, writes the Washington Post — while adding that "the science has been moving in parallel with the court's recognition." A growing body of research links heavy social media use not only to declines in mental health but to measurable cognitive effects — on attention, memory and focus — that in some studies resemble accelerated aging. Science also suggests we have more control than we realize when it comes to reversing this damage, and the solution is surprisingly simple: Take a break... "Digital detoxes" can sound like a fad. But in one of the largest studies to date, published in PNAS Nexus and involving 467 participants with an average age of 32, even a short time away produced striking results — effectively erasing a decade of age-related cognitive decline.

For 14 days, participants used a commercially available app, Freedom, to block internet access on their phones. They were still allowed calls and text messages, essentially turning a smartphone into a dumb phone. Their time online decreased from 314 minutes to 161 minutes, and by the end of the period the participants showed improvements in sustained attention, mental health, and self-reported well-being. The improvement in sustained attention was about the same magnitude as 10 years of age-related decline, the researchers noted, and the effect of the intervention on depression symptoms was larger than that of antidepressants and similar to that of cognitive behavioral therapy.

But two things were even more mind-blowing... Even those people who cheated and broke the rules after a few days seemed to have positive effects from the break; and in follow-up reports after the two weeks, many people reported the positive effects lingered. "So you don't have to necessarily restrict yourself forever. Even taking a partial digital detox, even for a few days, seems to work," said Kushlev, one of the study's researchers.

The article also notes a November study at Harvard, published in JAMA Network Open, of nearly 400 people which "found that even a short break can make a measurable difference: After just one week of reduced smartphone use, participants reported drops in anxiety (16.1 percent), depression (24.8 percent) and insomnia (14.5 percent)..."

"Other experiments point in the same direction — whether decreasing social media use by an hour a day for one week or stepping away from just Facebook and Instagram."
Advertising

Meta Removes Ads For Social Media Addiction Litigation (axios.com) 46

Meta has started removing ads from law firms seeking clients for social media addiction lawsuits, just weeks after a jury found Meta and YouTube negligent in a landmark case involving harm to a young user. "Lawyers across the country now are seeking new plaintiffs, in the hopes of bringing a class action lawsuit that could result in lucrative verdicts," reports Axios. From the report: Axios has identified more than a dozen such ads that were deactivated today, some of which came from large national firms like Morgan & Morgan and Sokolove Law. Almost all of them ran on both Facebook and Instagram. Some also appeared on Threads and Messenger, plus Meta's Audience Network -- which distributes ads to thousands of third-party sites.

One such ad read: "Anxiety. Depression. Withdrawal. Self-harm. These aren't just teenage phases -- they're symptoms linked to social media addiction in children. Platforms knew this and kept targeting kids anyway." A few of the ads still remain active, including some that were posted earlier today.
"We're actively defending ourselves against these lawsuits and are removing ads that attempt to recruit plaintiffs for them," a Meta spokesperson said in a statement. "We will not allow trial lawyers to profit from our platforms while simultaneously claiming they are harmful."
Electronic Frontier Foundation

EFF Is Leaving X (eff.org) 188

After nearly 20 years on the platform, The Electronic Frontier Foundation (EFF) says it is leaving X. "This isn't a decision we made lightly, but it might be overdue," the digital rights group said. "The math hasn't worked out for a while now." From the report: We posted to Twitter (now known as X) five to ten times a day in 2018. Those tweets garnered somewhere between 50 and 100 million impressions per month. By 2024, our 2,500 X posts generated around 2 million impressions each month. Last year, our 1,500 posts earned roughly 13 million impressions for the entire year. To put it bluntly, an X post today receives less than 3% of the views a single tweet delivered seven years ago. [...]

When you go online, your rights should go with you. X is no longer where the fight is happening. The platform Musk took over was imperfect but impactful. What exists today is something else: diminished, and increasingly de minimis.

EFF takes on big fights, and we win. We do that by putting our time, skills, and our members' support where they will effect the most change. Right now, that means Bluesky, Mastodon, LinkedIn, Instagram, TikTok, Facebook, YouTube, and eff.org. We hope you follow us there and keep supporting the work we do. Our work protecting digital rights is needed more than ever before, and we're here to help you take back control.

Facebook

Meta Debuts 'Muse Spark', First AI Model Under Alexandr Wang (axios.com) 7

Meta has launched Muse Spark, its first major AI model under Alexandr Wang's leadership. The model was built over the past nine months and is being positioned as a significant step up from Llama 4. Axios reports: Muse Spark will power queries in the Meta AI app and Meta.ai website immediately, with plans to expand across Facebook, Instagram and WhatsApp. The model accepts voice, text and image inputs, but produces text-only output. [...] Meta plans to release a version of Muse Spark under an open-source license.

The model uses a fast mode for casual queries and several reasoning modes. A "shopping mode" highlights how Meta hopes to differentiate itself. It combines large language models with data on user interests and behavior. Over time, the model will also power "features that cite recommendations and content people share across Instagram, Facebook, and Threads," Meta said in a blog post.
Wang, the 29-year-old entrepreneur who co-founded Scale AI, joined Meta's "superintelligence" unit last year to help Meta catch up to rival models from OpenAI and Anthropic.
Social Networks

Australia Readies Social Media Court Action Citing Teen Ban Breaches (reuters.com) 27

Australia is preparing possible court action against major social media platforms that are failing to enforce the country's social media ban on under-16s. "Three months after the ban came into effect, the eSafety Commissioner said it was probing Meta's Instagram and Facebook, Google's YouTube, Snapchat and TikTok for possible breaches of the law," reports Reuters. From the report: Communications Minister Anika Wells said the government was gathering evidence "so that the eSafety Commissioner can go to the Federal Court and win." "We have spent the summer building that evidence base of all the stories that no doubt you have all heard ... about how kids are getting around that," Wells told reporters in Canberra. The legal threat is a striking change of tone from a government which had hailed tech giants' shows of cooperation when the ban went live in December.

Under the Australian law, platforms must show they are taking reasonable steps to keep out underage users or face fines of up to $34 million per breach, something eSafety would need to pursue in a civil court. The regulator previously said it would only take enforcement action in cases of systemic noncompliance. But in its first comprehensive compliance report since the ban took effect, eSafety said measures taken by the platforms were substandard and it would make a decision about next steps by mid-year. "We are now moving into an enforcement stance," said commissioner Julie Inman Grant in a statement.

The regulator reported major compliance gaps, including platforms allowing children who had previously declared ages under 16 to retake age checks, allowing repeated attempts at age-assurance tests until a child got a result over 16, and poor pathways for people to report underage accounts. Some platforms did not use age inference, which estimates age based on someone's online activity, and some only used age-assurance measures like photo-based checks after a user tried to change their age, rather than at sign-up. That made it "likely many Australian children aged under 16 have been able to create accounts on age-restricted social media platforms by simply declaring they are 16 or older", the regulator said. Nearly one-third of parents reported their under-16 child had at least one social media account after the ban took effect, of which two-thirds said the platform had not asked the child's age, it added.
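The platforms' actual age-assurance implementations aren't public, but the two gaps eSafety names (checks triggered only on profile edits, and unlimited retries until a passing result) can be illustrated with a minimal, hypothetical sketch:

```python
# Hypothetical sketch of closing the gaps eSafety describes: run the
# age check at sign-up (not only when a profile is edited) and cap
# retries so a child cannot keep retaking the check until an estimate
# comes back over 16. Names and thresholds are illustrative only.
MIN_AGE = 16
MAX_ATTEMPTS = 2

class SignupGate:
    def __init__(self):
        self.attempts = {}  # prospective user id -> age checks taken

    def attempt_age_check(self, user_id, estimated_age):
        """Run one age-assurance attempt at sign-up.

        Returns True only if the age estimate clears the threshold
        within the retry cap; further attempts are refused."""
        taken = self.attempts.get(user_id, 0)
        if taken >= MAX_ATTEMPTS:
            return False  # no unlimited retries until a passing result
        self.attempts[user_id] = taken + 1
        return estimated_age >= MIN_AGE

gate = SignupGate()
print(gate.attempt_age_check("u1", 14))  # False: estimate under 16
print(gate.attempt_age_check("u1", 17))  # True: second attempt passes
print(gate.attempt_age_check("u1", 17))  # False: retry cap exhausted
```

The point of the cap is that an age-assurance test stops being a gate at all if a failed result can simply be rerolled.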

Social Networks

Meta and YouTube Found Negligent in Landmark Social Media Addiction Case 113

A jury found Meta and YouTube negligent in a landmark social media addiction case, ruling that addictive design features such as infinite scroll and algorithmic recommendations harmed a young user and contributed to her mental health distress. The verdict awards $3 million in compensatory damages so far and could pave the way for more lawsuits seeking financial penalties and product changes across the social media industry. "Meta is responsible for 70 percent of that cost and YouTube for the remainder," notes The New York Times. "TikTok and Snap both settled with the plaintiff for undisclosed terms before the trial started." From the report: The bellwether case, which was brought by a now 20-year-old woman identified as K.G.M., had accused social media companies of creating products as addictive as cigarettes or digital casinos. K.G.M. sued Meta, which owns Instagram and Facebook, and Google's YouTube over features like infinite scroll and algorithmic recommendations that she claimed led to anxiety and depression.

The jury of seven women and five men will deliberate further to decide what further punitive damages the companies should pay for malice or fraud. The verdict in K.G.M.'s case -- one of thousands of lawsuits filed by teenagers, school districts and state attorneys general against Meta, YouTube, TikTok and Snap, which owns Snapchat -- was a major win for the plaintiffs. The finding validates a novel legal theory that social media sites or apps can cause personal injury. It is likely to factor into similar cases expected to go to trial this year, which could expose the internet giants to further financial damages and force changes to their products.
The verdict also comes on the heels of a New Mexico jury ruling that found Meta liable for violating state law by failing to protect users of its apps from child predators.
Facebook

Meta Loses Trial After Arguing Child Exploitation Was 'Inevitable' (arstechnica.com) 45

Meta lost a child safety trial in New Mexico after a court found that its platforms failed to adequately protect children from exploitation and misled parents about app safety. According to Ars Technica, the jury on Tuesday "deliberated for only one day before agreeing that Meta should pay $375 million in civil damages..." While the jury declined to impose the maximum penalty New Mexico sought, which could have cost the company $2.2 billion, Meta may still face additional financial penalties and could be forced to make changes to its apps. From the report: The trial followed a 2023 lawsuit filed by New Mexico Attorney General Raul Torrez after The Guardian published a two-year investigation exposing child sex trafficking markets on Facebook and Instagram. Torrez's office then conducted an undercover investigation codenamed "Operation MetaPhile," in which officers posed as children on Facebook, Instagram, and WhatsApp. The jury heard that these fake profiles were "simply inundated with images and targeted solicitations" from child abusers, Torrez told CNBC in 2024. Ultimately, three men were arrested amid the sting for attempting to use Meta's social networks to prey on children. At trial, Mark Zuckerberg and Instagram chief Adam Mosseri testified that "harms to children, such as sexual exploitation and detriments to mental health, were inevitable on the company's platforms due to their vast user bases," The Guardian reported. Internal messages and documents, as well as testimony from child safety experts within and outside the company, showed that Meta repeatedly ignored warnings and failed to fix platforms to protect kids, New Mexico's AG successfully argued.

Perhaps most troubling to the jury, law enforcement and the National Center for Missing and Exploited Children also testified that Meta's reporting of crimes to children on its apps -- including child sexual abuse materials (CSAM) -- was "deficient," The Guardian reported. Rather than make it easy to trace harms on its platforms, the jury learned from frustrated cops that Meta "generated high volumes of 'junk' reports by overly relying on AI to moderate its platforms." This made its reporting "useless" and "meant crimes could not be investigated," The Guardian reported.

Celebrating the win as a "historic victory," Torrez told CNBC that families had previously paid the price for "Meta's choice to put profits over kids' safety." "Meta executives knew their products harmed children, disregarded warnings from their own employees, and lied to the public about what they knew," Torrez said. "Today the jury joined families, educators, and child safety experts in saying enough is enough."
Meta said the company plans to appeal the verdict. "We respectfully disagree with the verdict and will appeal," Meta's spokesperson said. "We work hard to keep people safe on our platforms and are clear about the challenges of identifying and removing bad actors or harmful content. We will continue to defend ourselves vigorously, and we remain confident in our record of protecting teens online."
Facebook

Meta Is Shutting Down VR Social Platform Horizon Worlds (cnbc.com) 51

Meta is shutting down its VR social platform Horizon Worlds, which was once a key piece of the pivot to the metaverse. The company said the app will be taken off the Quest store at the end of March, and fully removed from Quest headsets by June 15. After that date, it will shift to a standalone "mobile-only experience." CNBC reports: The shift for Horizon Worlds, which was once a central part of the company's push into virtual reality, comes weeks after Meta cut over 1,000 employees from Reality Labs, the unit responsible for the metaverse. [...] The social platform has never drawn more than a couple hundred thousand active users a month, CNBC previously reported.

The virtual 3D social network where avatars could interact and play games with other users officially launched in late 2021. It operated exclusively on the Quest VR platform until Meta launched a mobile app version in September 2023. The mobile version of Horizon Worlds was built to provide an entry point for users without VR headsets, functioning similarly to Roblox.

Encryption

Instagram Discontinues End-To-End Encryption For DMs (thehackernews.com) 31

Meta plans to remove end-to-end encryption (E2EE) from Instagram direct messages by May 8, 2026. "Very few people were opting in to end-to-end encrypted messaging in DMs, so we're removing this option from Instagram in the coming months," says Meta. "Anyone who wants to keep messaging with end-to-end encryption can easily do that on WhatsApp." The Hacker News reports: The American company first began testing E2EE for Instagram direct messages in 2021 as part of CEO Mark Zuckerberg's "privacy-focused vision for social networking." The feature is currently "only available in some areas" and is not enabled by default. Weeks into the Russo-Ukrainian war in February 2022, the company made encrypted direct messaging available to all adult users in both countries. Last week, TikTok said it would not introduce E2EE, arguing it makes users less safe by preventing police and safety teams from being able to read direct messages if needed.
Facebook

Meta Acquires Moltbook, the Social Network For AI Agents 30

Axios reports that Meta has acquired Moltbook, the viral, Reddit-like social network designed for AI agents. Humans are welcome, but only to observe. Axios reports: The deal brings Moltbook's creators -- Matt Schlicht and Ben Parr -- into Meta Superintelligence Labs (MSL), the unit run by former Scale AI CEO Alexandr Wang. Meta did not disclose Moltbook's purchase price. The deal is expected to close mid-March, Meta says, with the pair starting at MSL on March 16. When it launched in late January, Moltbook was labeled the "most interesting place on the internet" by open-source developer and writer Simon Willison. "Browsing around Moltbook is so much fun. A lot of it is the expected science fiction slop, with agents pondering consciousness and identity. There's also a ton of genuinely useful information, especially on m/todayilearned."

In an internal post seen by Axios, Meta's Vishal Shah said existing Moltbook customers can temporarily continue using the platform. "The Moltbook team has given agents a way to verify their identity and connect with one another on their human's behalf," Shah says. "This establishes a registry where agents are verified and tethered to human owners." He added: "Their team has unlocked new ways for agents to interact, share content, and coordinate complex tasks."
Government

Indonesia To Ban Social Media For Children Under 16 (theguardian.com) 47

Indonesia will ban children under 16 from having accounts on major social media platforms as part of a government push to protect minors from harmful content, addiction, and online threats. The rule will roll out starting March 28 and makes Indonesia the first country in Southeast Asia to impose such a restriction. The Guardian reports: Communications minister Meutya Hafid said in a statement to media that she had signed a government regulation that will mean children under the age of 16 can no longer have accounts on high-risk digital platforms, including YouTube, TikTok, Facebook, Instagram, Threads, X, Roblox and Bigo Live, a popular livestreaming site. With a population of about 285 million, the fourth-highest in the world, the south-east Asian nation represents a significant market for social networks.

The implementation will start gradually from 28 March, until all platforms fulfill their compliance obligations. "The basis is clear. Our children face increasingly real threats. From exposure to pornography, cyberbullying, online fraud, and most importantly addiction. The government is here so that parents no longer have to fight alone against the giant of algorithms," Hafid said.

She added that the government is taking this step as the best effort in the midst of a digital emergency to reclaim sovereignty over children's futures. "We realize that the implementation of this regulation may cause some discomfort at first. Children may complain and parents may be confused about how to respond to their children's complaints," Hafid said.

Businesses

Jensen Huang Says Nvidia Is Pulling Back From OpenAI and Anthropic (techcrunch.com) 26

An anonymous reader quotes a report from TechCrunch: At the Morgan Stanley Technology, Media and Telecom conference in downtown San Francisco Wednesday, Nvidia CEO Jensen Huang said his company's recent investments in OpenAI and Anthropic are likely to be its last in both, saying that once they go public as anticipated later this year, the opportunity to invest closes. It could be that simple. While firms sometimes pile into companies until practically the eve of their public debut in search of more upside, Nvidia is minting money selling the chips that power both companies -- it's not like it needs to goose its returns by pouring even more money into either one.

Nvidia, for its part, isn't offering much more on the matter. Asked for comment earlier today following Huang's remarks, a spokesman pointed TechCrunch to a transcript from the company's fourth-quarter earnings call, where Huang said all of Nvidia's investments are "focused very squarely, strategically on expanding and deepening our ecosystem reach," a goal its earlier stakes in both companies have arguably met. Still, a few other dynamics might also explain the pullback, including the circular nature of these arrangements themselves. [...] Meanwhile, Nvidia's relationship with Anthropic has looked fraught in its own right. Just two months after Nvidia announced a $10 billion investment in November, Anthropic CEO Dario Amodei took the stage at Davos and, without naming Nvidia directly, compared the act of U.S. chip companies selling high-performance AI processors to approved Chinese customers to "selling nuclear weapons to North Korea." Ouch. [...]

Where that leaves Nvidia is holding stakes in two companies that, at this particular moment, are pulling in very different directions, and potentially dragging customers and partners along for the ride. Whether Huang saw any of this coming, given Nvidia's web of partnerships, is impossible to know. But his stated reason on Wednesday for likely pulling the plug on future investments -- that the IPO window closes the door on this kind of deal -- is hard to square with how late-stage private investing actually works. What's looking more probable is that this is an exit from a situation that has gotten really complicated, really fast.

Encryption

TikTok Says End-To-End Encryption Makes Users Less Safe (bbc.com) 86

An anonymous reader quotes a report from the BBC: TikTok will not introduce end-to-end encryption (E2EE) -- the controversial privacy feature used by nearly all its rivals -- arguing it makes users less safe. E2EE means only the sender and recipient of a direct message can view its contents, making it the most secure form of communication available to the general public. Platforms such as Facebook, Instagram, Messenger and X have embraced it because they say their priority is maximizing user privacy.
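E2EE is, at its core, a key agreement between the two endpoints followed by symmetric encryption, so the relaying platform only ever handles ciphertext. A deliberately toy sketch of that principle (Diffie-Hellman plus a repeating-key XOR; real messengers use vetted protocols such as the Signal protocol, and nothing here is any platform's actual implementation):

```python
import hashlib
import secrets

# Toy illustration of the E2EE principle: both endpoints derive the same
# shared key, so the server relaying the message sees only ciphertext.
# NOT secure as written; real apps use vetted, audited protocols.
P = 2**127 - 1  # a known Mersenne prime, fine for a toy demo
G = 3

def keypair():
    priv = secrets.randbelow(P - 2) + 2
    return priv, pow(G, priv, P)  # private exponent, public value

def shared_key(my_priv, their_pub):
    # Both sides compute g^(ab) mod p, then hash it into a 32-byte key.
    return hashlib.sha256(str(pow(their_pub, my_priv, P)).encode()).digest()

def xor_cipher(key, data):
    # Toy symmetric step (repeating-key XOR), NOT real encryption.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

a_priv, a_pub = keypair()
b_priv, b_pub = keypair()
ciphertext = xor_cipher(shared_key(a_priv, b_pub), b"hi from A")
# The platform forwards `ciphertext`; only B can reverse it.
print(xor_cipher(shared_key(b_priv, a_pub), ciphertext))  # b'hi from A'
```

The property critics and regulators are arguing over falls directly out of this shape: the middleman never holds the key, so neither the platform nor police can read the message in transit.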

But critics have said E2EE makes it harder to stop harmful content spreading online, because it means tech firms and law enforcement have no way of viewing any material sent in direct messages. The situation is made more complex because TikTok has long faced accusations that ties to the Chinese state may put users' data at risk. TikTok has consistently denied this, but earlier this year the social media firm's US operations were separated from its global business on the orders of US lawmakers.

TikTok told the BBC it believed end-to-end encryption prevented police and safety teams from being able to read direct messages if they needed to. It confirmed its approach to the BBC in a briefing about security at its London office, saying it wanted to protect users, especially young people from harm. It described this stance as a deliberate decision to set itself apart from rivals.
"Grooming and harassment risks are very real in DMs [direct messages] so TikTok now can credibly argue that it's prioritizing 'proactive safety' over 'privacy absolutism' which is a pretty powerful soundbite," said social media industry analyst Matt Navarra. But Navarra said the move also "puts TikTok out of step with global privacy expectations" and might reinforce wariness for some about its ownership.
Crime

DNA Technology Convicts a 64-Year-Old for Murdering a Teenager in 1982 (cnn.com) 78

"More than four decades after a teenager was murdered in California, DNA found on a discarded cigarette has helped authorities catch her killer," reports CNN: Sarah Geer, 13, was last seen leaving her friend's houseï in Cloverdale, California, on the evening of May 23, 1982. The next morning, a firefighter walking home from work found her body, the Sonoma County District Attorney's Office said in a news release... Her death was ruled a homicide, but due to the "limited forensic science of the day," no suspect was identified and the case went cold for decades, prosecutors said.

Nearly 44 years after Sarah's murder, a jury on February 13 found James Unick, 64, guilty of killing her. The verdict came on what would have been the victim's 57th birthday, the Sonoma County District Attorney's Office told CNN. Genetic genealogy, which combines DNA evidence and traditional genealogy, helped match Unick's DNA from a cigarette butt to DNA found on Sarah's clothing, according to prosecutors... [The Cloverdale Police Department] said it had been in communication with a private investigation firm in late 2019 and had partnered with them in hopes the firm could revisit the case's evidence "with the latest technological advancements in cold case work...."

"The FBI, with its access to familial genealogical databases, concluded that the source of the DNA evidence collected from Sarah belonged to one of four brothers, including James Unick," prosecutors said. Once investigators narrowed down the list of suspects to the four Unick brothers, the FBI "conducted surveillance of the defendant and collected a discarded cigarette that he had been smoking," prosecutors said. A DNA analysis of the cigarette confirmed James Unick's DNA matched the 2003 profile, along with other DNA samples collected from Sarah's clothing the day she was killed.

In a statement, the county's district attorney said, "While 44 years is too long to wait, justice has finally been served..."

And the article points out that "In 2018, genetic genealogy led to the arrest of the Golden State Killer, and it has recently helped solve several other cold cases, including a 1974 murder in Wisconsin and a 1988 murder in Washington."
Advertising

Meta Begins $65 Million Election Push To Advance AI Agenda (nytimes.com) 33

An anonymous reader quotes a report from the New York Times: Meta is preparing to spend $65 million this year to boost state politicians who are friendly to the artificial intelligence industry, beginning this week in Texas and Illinois, according to company representatives. The sum is the biggest election investment by Meta, which owns Facebook, Instagram and WhatsApp. The company was previously cautious about campaign engagements, making small donations out of a corporate political action committee and contributing to presidential inaugurations. It also let executives like Sheryl Sandberg, who was chief operating officer, support candidates in their personal capacities.

Now Meta is betting bigger on politics, driven by concerns over the regulatory threat to the artificial intelligence industry as it aims to beat back legislation in states that it fears could inhibit A.I. development, company representatives said. To do that, Meta is quietly starting two new super PACs, according to federal filings surfaced by The New York Times. One group, Forge the Future Project, is backing Republicans. Another, Making Our Tomorrow, is backing Democrats. The new PACs join two others already started by Meta, one of which is focused on California while the other is an umbrella organization that finances the company's spending in other states. In total, the four super PACs have an initial budget of $65 million, according to federal and state filings. Meta's spending is set to start this week in Illinois and Texas, where the company generally favors backing Democratic and Republican incumbents or engaging in open races rather than deposing existing officials, company representatives said in interviews.

[...] Last year, Meta's public policy vice president, Brian Rice, said the company would start spending in politics because of "inconsistent regulations that threaten homegrown innovation and investments in A.I." The company started its first two super PACs, American Technology Excellence Project and Mobilizing Economic Transformation Across California. Meta put $45 million into American Technology Excellence Project in September. That money is expected, in turn, to flow to Forge the Future Project, Making Our Tomorrow and potentially to other entities. [...] In California, which has some of the country's most onerous campaign-finance disclosures, Meta in August put $20 million into Mobilizing Economic Transformation Across California, which shortens to META California. State laws require the sponsoring company to be disclosed in the name of the entity. In December, Meta put $5 million into another California committee called California Leads, which is focused on promoting moderate business policy and not A.I., according to state records.

Social Networks

India's New Social Media Rules: Remove Unlawful Content in Three Hours, Detect Illegal AI Content Automatically (bbc.com) 23

Bloomberg reports: India tightened rules governing social media content and platforms, particularly targeting artificially generated and manipulated material, in a bid to crack down on the rapid spread of misinformation and deepfakes. The government on Tuesday (Feb 10) notified new rules under an existing law requiring social media firms to comply with takedown requests from Indian authorities within three hours and prominently label AI-generated content. The rules also require platforms to put in place measures to prevent users from posting unlawful material...

Companies will need to invest in 24-hour monitoring centres as enforcement shifts toward platforms rather than users, said Nikhil Pahwa, founder of MediaNama, a publication tracking India's digital policy... The onus of identification, removal and enforcement falls on tech firms, which could lose immunity from legal action if they fail to act within the prescribed timeline.

The new rules also require automated tools to detect and prevent illegal AI content, the BBC reports. And they add that India's new three-hour deadline is "a sharp tightening of the existing 36-hour deadline." [C]ritics worry the move is part of a broader tightening of oversight of online content and could lead to censorship in the world's largest democracy with more than a billion internet users... According to transparency reports, more than 28,000 URLs or web links were blocked in 2024 following government requests...
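Operationally, a three-hour window means every government request needs a tracked hard deadline. A minimal sketch of what that bookkeeping might look like (entirely illustrative, not any platform's real system):

```python
from datetime import datetime, timedelta, timezone

# Illustrative-only sketch: each takedown request gets a hard deadline
# three hours after receipt, and anything still pending past that
# deadline is a potential compliance breach under the new rules.
TAKEDOWN_WINDOW = timedelta(hours=3)

class TakedownQueue:
    def __init__(self):
        self.pending = {}  # request id -> deadline

    def receive(self, request_id, received_at):
        self.pending[request_id] = received_at + TAKEDOWN_WINDOW

    def resolve(self, request_id):
        self.pending.pop(request_id, None)  # content removed or labelled

    def overdue(self, now):
        """Ids of still-pending requests whose window has elapsed."""
        return sorted(rid for rid, dl in self.pending.items() if now > dl)

q = TakedownQueue()
t0 = datetime(2026, 2, 10, 9, 0, tzinfo=timezone.utc)
q.receive("req-1", t0)
q.receive("req-2", t0 + timedelta(hours=2))
print(q.overdue(t0 + timedelta(hours=3, minutes=1)))  # ['req-1']
```

Shrinking the window from 36 hours to 3 mostly removes the human-review step from this loop, which is exactly the automation concern critics raise below.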

Delhi-based technology analyst Prasanto K Roy described the new regime as "perhaps the most extreme takedown regime in any democracy". He said compliance would be "nearly impossible" without extensive automation and minimal human oversight, adding that the tight timeframe left little room for platforms to assess whether a request was legally appropriate. On AI labelling, Roy said the intention was positive but cautioned that reliable and tamper-proof labelling technologies were still developing.

DW reports that India has also "joined the growing list of countries considering a social media ban for children under 16."

"Young Indians are not happy and are already plotting workarounds."
Programming

Fake Job Recruiters Hid Malware In Developer Coding Challenges (bleepingcomputer.com) 25

"A new variation of the fake recruiter campaign from North Korean threat actors is targeting JavaScript and Python developers with cryptocurrency-related tasks," reports the Register. Researchers at software supply-chain security company ReversingLabs say that the threat actor creates fake companies in the blockchain and crypto-trading sectors and publishes job offerings on various platforms, like LinkedIn, Facebook, and Reddit. Developers applying for the job are required to show their skills by running, debugging, and improving a given project. However, the attacker's purpose is to make the applicant run the code... [The campaign involves 192 malicious packages published in the npm and PyPi registries. The packages download a remote access trojan that can exfiltrate files, drop additional payloads, or execute arbitrary commands sent from a command-and-control server.]

In one case highlighted in the ReversingLabs report, a package named 'bigmathutils,' with 10,000 downloads, was benign until it reached version 1.1.0, which introduced malicious payloads. Shortly after, the threat actor removed the package, marking it as deprecated, likely to conceal the activity... The RAT checks whether the MetaMask cryptocurrency extension is installed on the victim's browser, a clear indication of its money-stealing goals...

ReversingLabs has found multiple variants written in JavaScript, Python, and VBS, showing an intention to cover all possible targets.
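The 'bigmathutils' case illustrates why exact version pinning matters for defense: a loose dependency specifier would have silently pulled in the malicious 1.1.0 release once it was published. As a minimal sketch of that idea (the `unpinned` helper and the sample requirements below are illustrative, not taken from the ReversingLabs report), a few lines of Python can flag requirement specs that permit automatic upgrades:

```python
import re

# Matches only exact pins such as "requests==2.31.0".
# Anything else (ranges, bare names) can silently resolve to a newer,
# possibly trojaned release -- as happened with bigmathutils 1.1.0.
PINNED = re.compile(r"^[A-Za-z0-9._-]+==[0-9][A-Za-z0-9.]*$")

def unpinned(requirements: str) -> list[str]:
    """Return requirement lines that are not pinned to an exact version."""
    risky = []
    for line in requirements.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments and whitespace
        if line and not PINNED.match(line):
            risky.append(line)
    return risky

if __name__ == "__main__":
    reqs = "requests==2.31.0\nbigmathutils>=1.0\nnumpy\n"
    print(unpinned(reqs))  # -> ['bigmathutils>=1.0', 'numpy']
```

Pinning alone doesn't stop a developer from knowingly installing a bad version, but it prevents the silent-upgrade path this campaign relied on; lockfiles (package-lock.json, pip's hash-checking mode) take the same idea further.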

The campaign has been ongoing since at least May 2025...
Social Networks

Social Networks Agree to Be Rated On Their Teen Safety Efforts (yahoo.com) 14

Meta, TikTok, Snap and other social networks agreed this week to be rated on their teen safety efforts, reports the Los Angeles Times, "amid rising concern about whether the world's largest social media platforms are doing enough to protect the mental health of young people." The Mental Health Coalition, a collective of organizations focused on destigmatizing mental health issues, said Tuesday that it is launching standards and a new rating system for online platforms. For the Safe Online Standards (S.O.S.) program, an independent panel of global experts will evaluate companies on parameters including safety rules, design, moderation and mental health resources. TikTok, Snap and Meta — the parent company of Facebook and Instagram — will be the first companies to be graded. Discord, YouTube, Pinterest, Roblox and Twitch have also agreed to participate, the coalition said in a news release.

"These standards provide the public with a meaningful way to evaluate platform protections and hold companies accountable — and we look forward to more tech companies signing up for the assessments," Antigone Davis, vice president and global head of safety at Meta, said in a statement... The ratings will be color-coded, and companies that perform well on the tests will get a blue shield badge that signals they help reduce harmful content on the platform and their rules are clear. Those that fall short will receive a red rating, indicating they're not reliably blocking harmful content or lack proper rules. Ratings in other colors indicate whether the platforms have partial protection or whether their evaluations haven't been completed yet.

Social Networks

The EU Moves To Kill Infinite Scrolling 37

Doom scrolling is doomed, if the EU gets its way. From a report: The European Commission is for the first time tackling the addictiveness of social media in a fight against TikTok that may set new design standards for the world's most popular apps. Brussels has told the company to change several key features, including disabling infinite scrolling, setting strict screen time breaks and changing its recommender systems. The demand follows the Commission's declaration that TikTok's design is addictive to users -- especially children.

The fact that the Commission said TikTok should change the basic design of its service is "ground-breaking for the business model fueled by surveillance and advertising," said Katarzyna Szymielewicz, president of the Panoptykon Foundation, a Polish civil society group. That doesn't bode well for other platforms, particularly Meta's Facebook and Instagram. The two social media giants are also under investigation over the addictiveness of their design.
Social Networks

Meta Plans To Let Smart Glasses Identify People Through AI-Powered Facial Recognition (nytimes.com) 64

Meta plans to add facial recognition technology to its Ray-Ban smart glasses as soon as this year, the New York Times reported Friday, five years after the social giant shut down facial recognition on Facebook and promised to find "the right balance" for the controversial technology.

The feature, internally called "Name Tag," would let wearers identify people and retrieve information about them through Meta's AI assistant, the report added. An internal memo from May acknowledged the feature carries "safety and privacy risks" and noted that political tumult in the United States would distract civil society groups that might otherwise criticize the launch. The company is exploring restrictions that would prevent the glasses from functioning as a universal facial recognition tool, potentially limiting identification to people connected on Meta platforms or those with public accounts.
