Cellphones

French Lawmakers Vote To Ban Social Media Use By Under-15s (theguardian.com) 50

French lawmakers have voted to ban social media access for children under 15 and prohibit mobile phones in high schools, positioning France as the second country after Australia to impose sweeping age-based digital restrictions. The Guardian reports: France's lower house, the National Assembly, adopted the text by a vote of 130 to 21 in a lengthy overnight session from Monday to Tuesday. It will now go to the Senate, France's upper house, ahead of becoming law. Macron hailed the vote as a "major step" to protect French children and teenagers in a post on X. The legislation, which also provides for a ban on mobile phones in high schools, would make France the second country to take such a step following Australia's ban for under-16s in December. [...] "The emotions of our children and teenagers are not for sale or to be manipulated, either by American platforms or Chinese algorithms," Macron said in a video broadcast on Saturday. Authorities want the measures to be enforced from the start of the 2026 school year for new accounts.

Former prime minister Gabriel Attal, who leads Macron's Renaissance party in the lower house, said he hoped the Senate would pass the bill by mid-February so that the ban could come into force on September 1. He added that "social media platforms will then have until December 31 to deactivate existing accounts" that do not comply with the age limit. [...] The draft bill excludes online encyclopedias and educational platforms. An effective age verification system would have to come into force for the ban to become reality. Work on such a system is under way at the European level.

The Internet

Tim Berners-Lee Wants Us To Take Back the Internet (theguardian.com) 68

mspohr shares a report: When Sir Tim Berners-Lee invented the world wide web in 1989, his vision was clear: it would be used by everyone, filled with everything and, crucially, it would be free. Today, the British computer scientist's creation is regularly used by 5.5 billion people -- and bears little resemblance to the democratic force for humanity he intended.

Since Berners-Lee's disappointment a decade ago, he's thrown everything at a project that completely shifts the way data is held on the web, known as the Solid (social linked data) protocol. It's activism that is rooted in people power -- not unlike the first years of the web.

This version of the internet would turbocharge personal sovereignty and give control back to users. Berners-Lee has long seen AI -- which exists only because of the web and its data -- as having the potential to transform society far beyond the boundaries of self-interested companies. But now is the time, he says, to put guardrails in place so that AI remains a force for good -- and he's afraid the chance may pass humankind by.
Berners-Lee traces the web's corruption to the commercialization of the domain name system in the 1990s, when the .com space was "pounced on by charlatans." The 2016 US elections, he said, revealed to him just how toxic his creation could become. A corner of the web, he says, has been "optimised for nastiness" -- extractive, surveillance-heavy, and designed to maximize engagement at the cost of user wellbeing.

His answer is Solid, a protocol that gives users control through personal data "pods" functioning as secure backpacks of information. The Flanders government in Belgium already uses Solid pods for its citizens. On AI, his optimism remains dim. "The horse is bolting," he says, calling for a "Cern for AI" where scientists could collaboratively develop superintelligence under contained, non-commercial oversight.
Social Networks

Internal Messages May Doom Meta At Social Media Addiction Trial (arstechnica.com) 54

An anonymous reader quotes a report from Ars Technica: This week, the first high-profile lawsuit -- considered a "bellwether" case that could set meaningful precedent in the hundreds of other complaints -- goes to trial. That lawsuit documents the case of a 19-year-old, K.G.M., who hopes the jury will agree that Meta and YouTube caused psychological harm by designing features like infinite scroll and autoplay to push her down a path that she alleged triggered depression, anxiety, self-harm, and suicidality. TikTok and Snapchat were also targeted by the lawsuit, but both have settled. The Snapchat settlement came last week, while TikTok settled on Tuesday just hours before the trial started, Bloomberg reported. For now, YouTube and Meta remain in the fight. K.G.M. allegedly started watching YouTube when she was 6 years old and joined Instagram by age 11. She's fighting to claim untold damages -- including potentially punitive damages -- to help her family recoup losses from her pain and suffering and to punish social media companies and deter them from promoting harmful features to kids. She also wants the court to require prominent safety warnings on platforms to help parents be aware of the risks. [...]

To win, K.G.M.'s lawyers will need to "parcel out" how much harm is attributed to each platform, due to design features, not the content that was targeted to K.G.M., Clay Calvert, a technology policy expert and senior fellow at a think tank called the American Enterprise Institute, wrote. Internet law expert Eric Goldman told The Washington Post that detailing those harms will likely be K.G.M.'s biggest struggle, since social media addiction has yet to be legally recognized, and tracing who caused what harms may not be straightforward. However, Matthew Bergman, founder of the Social Media Victims Law Center and one of K.G.M.'s lawyers, told the Post that K.G.M. is prepared to put up this fight. "She is going to be able to explain in a very real sense what social media did to her over the course of her life and how in so many ways it robbed her of her childhood and her adolescence," Bergman said.

The research is unclear on whether social media is harmful for kids or whether social media addiction exists, Tamar Mendelson, a professor at Johns Hopkins Bloomberg School of Public Health, told the Post. And so far, research only shows a correlation between Internet use and mental health, Mendelson noted, which could doom K.G.M.'s case and others'. However, social media companies' internal research might concern a jury, Bergman told the Post. On Monday, the Tech Oversight Project, a nonprofit working to rein in Big Tech, published a report analyzing recently unsealed documents in K.G.M.'s case that supposedly provide "smoking-gun evidence" that platforms "purposefully designed their social media products to addict children and teens with no regard for known harms to their wellbeing" -- while putting increased engagement from young users at the center of their business models.
Most of the unsealed documents came from Meta. An internal email shows Mark Zuckerberg decided Meta's top strategic priority was getting teens "locked in" to Meta's family of apps. Another damning document discusses allowing "tweens" to use a private mode inspired by fake Instagram accounts ("finstas"). The same document includes an admission that internal data showed Facebook use correlated with lower well-being.

Internal communications showed Meta seemingly bragging that "teens can't switch off from Instagram even if they want to" and an employee declaring, "oh my gosh yall IG is a drug," likening all social media platforms to "pushers."
AI

Pinterest Cuts Up To 15% Jobs To Redirect Resources To AI (reuters.com) 19

Pinterest said on Tuesday it would trim its workforce by less than 15% and reduce office space, as the social media company looks to reallocate resources to AI-focused roles and initiatives. From a report: The announcement comes as the company competes with TikTok and Meta-owned Facebook and Instagram for digital advertising budgets, as these platforms continue to draw marketers with their extensive user base.

Pinterest had 5,205 full-time employees as of September 2025. The latest job cut would translate to less than 780 positions. Top executives at the World Economic Forum's annual meeting said while jobs would disappear, new ones would spring up, with two telling Reuters that AI would be used as an excuse by companies which were planning layoffs anyway. Last week, design software maker Autodesk also announced a 7% job cut to redirect investments to its cloud platform and AI efforts.

Social Networks

Reddit Lawyers Force Founder to Redact 'WallStreetBets' From Miami Event (yahoo.com) 43

Reddit has forced Jaime Rogozinski, the founder of infamous r/WallStreetBets, to strip the WallStreetBets name from an upcoming Miami conference after legal threats citing trademark rights. According to a press release, it's the "first known case of a social media company enforcing trademark control over a user-created community." From the report: After years of litigation, courts ultimately sided with Reddit in a decision now referred to as the "Rogozinski Ruling," a precedent that grants platforms broad authority to assert trademark ownership over user-created communities. That ruling now forms the basis for Reddit's demand that the words "WallStreetBets" be physically removed from the event. "They aren't afraid of the name being used," said Rogozinski. "If they were, they'd have to sue the internet. What they're afraid of is the creator hanging out with his creation. They're afraid of the community's independence. And they're afraid it's evolved into something bigger than a subreddit."

The irony is difficult to ignore. The original subreddit counts around three million subscribers, while conservative estimates put the number of WallStreetBets participants spread across other platforms at more than seven million. For a movement that built its reputation confronting corporate overreach, Reddit's decision to extend its authority beyond the confines of its web-based platform, reaching into real-world gatherings to police culture it did not create, risks stirring a hornet's nest with a long memory and a track record of collective action.

The event, formerly known as WallStreetBets Live, will proceed as scheduled on January 28-30 in Miami. In compliance with Reddit's demands, all references to the name will be physically redacted on-site.
"Reddit's lawyers did one thing right," Rogozinski continued. "They proved exactly why we need a decentralized future. This event has become a live case study in what's broken about modern social media. Platforms can deplatform creators, and now, with courts backing them, they can appropriate what users build."
Social Networks

TikTok Alternative 'Skylight' Soars To 380K+ Users After TikTok US Deal Finalized (techcrunch.com) 29

Skylight, an open-source, TikTok-style video app built on the AT Protocol, surged past 380,000 users after last week's shake-up around TikTok's U.S. ownership and privacy concerns. TechCrunch reports: Launched last year and backed by Mark Cuban and other investors, Skylight's mobile app is built on the AT Protocol, the technology that also powers the decentralized X rival Bluesky, which now has north of 42 million users. Skylight, co-founded by CEO Tori White and CTO Reed Harmeyer, offers a built-in video editor; user profiles; support for likes, commenting, and sharing; and the ability for community curators to create custom feeds for others to follow. The app now has over 150,000 videos uploaded directly to the platform. It can also stream videos from Bluesky because of its AT Protocol integration.

Harmeyer said Saturday that 1.4 million videos were played on the app the day before, up 3x over the past 24 hours. The app had also seen sign-ups increase more than 150%. Other noteworthy stats include a more than 50% increase in returning users, a more than 40% rise in videos played on average, and a more than 100% increase in posts created. This surge was likely triggered by concerns over TikTok's change in ownership and its unfortunately timed technical glitches. [...] Over the weekend, Skylight's CEO, Tori White, said the app added around 20,000 new users and is continuing to grow. So far this January, the app has seen around 95,000 monthly active users.
"We've seen what happens when one person dictates what's pushed into people's feeds," White told TechCrunch. "Not only does it harm a creator's connection with their followers, but the entire health of the platform. That's why we built Skylight Social on open standards. We wanted creator and user power to be guaranteed by the technology. Not an empty promise, but an irrevocable right."
Privacy

TikTok Is Now Collecting Even More Data About Its Users (wired.com) 41

An anonymous reader quotes a report from Wired: When TikTok users in the U.S. opened the app today, they were greeted with a pop-up asking them to agree to the social media platform's new terms of service and privacy policy before they could resume scrolling. These changes are part of TikTok's transition to new ownership. In order to continue operating in the U.S., TikTok was compelled by the U.S. government to transition from Chinese control to a new, American-majority corporate entity. Called TikTok USDS Joint Venture LLC, the new entity is made up of a group of investors that includes the software company Oracle. It's easy to tap "agree" and keep on scrolling through videos on TikTok, so users might not fully understand the extent of changes they are agreeing to with this pop-up.

Now that it's under U.S.-based ownership, TikTok potentially collects more detailed information about its users, including precise location data. Here are the three biggest changes to TikTok's privacy policy that users should know about. TikTok's change in location tracking is one of the most notable updates in this new privacy policy. Before this update, the app did not collect the precise, GPS-derived location data of U.S. users. Now, if you give TikTok permission to use your phone's location services, then the app may collect granular information about your exact whereabouts. Similar kinds of precise location data are also tracked by other social media apps, like Instagram and X.

[...] Rather than an adjustment, TikTok's policy on AI interactions adds a new topic to the privacy policy document. Now, users' interactions with any of TikTok's AI tools explicitly fall under data that the service may collect and store. This includes any prompts as well as the AI-generated outputs. The metadata attached to your interactions with AI tools may also be automatically logged. [...] This change to TikTok's privacy policy may not be as immediately noticeable to users, but it will likely have an impact on the types of ads you see outside of TikTok. So, rather than just using your collected data to target you while using the app, TikTok may now further leverage that info to serve you more relevant ads wherever you go online. As part of this advertising change, TikTok also now explicitly mentions publishers as one kind of partner the platform works with to get new data.

Social Networks

TikTok Finalizes Deal To Form New American Entity (npr.org) 18

An anonymous reader quotes a report from NPR: TikTok has finalized a deal to create a new American entity, avoiding the looming threat of a ban in the United States that has been in discussion for years. The social video platform company signed agreements with major investors including Oracle, Silver Lake and MGX to form the new TikTok U.S. joint venture. The new version will operate under "defined safeguards that protect national security through comprehensive data protections, algorithm security, content moderation and software assurances for U.S. users," the company said in a statement Thursday. American TikTok users can continue using the same app. [...] Adam Presser, who previously worked as TikTok's head of operations and trust and safety, will lead the new venture as its CEO. He will work alongside a seven-member, majority-American board of directors that includes TikTok's CEO Shou Chew.

[...] In addition to an emphasis on data protection, with U.S. user data being stored locally in a system run by Oracle, the joint venture will also focus on TikTok's algorithm. The content recommendation formula, which feeds users specific videos tailored to their preferences and interests, will be retrained, tested and updated on U.S. user data, the company said in its announcement. The algorithm has been a central issue in the security debate over TikTok. China previously maintained the algorithm must remain under Chinese control by law. But the U.S. regulation passed with bipartisan support said any divestment of TikTok must mean the platform cuts ties -- specifically the algorithm -- with ByteDance. Under the terms of this deal, ByteDance would license the algorithm to the U.S. entity for retraining.

The law prohibits "any cooperation with respect to the operation of a content recommendation algorithm" between ByteDance and a new potential American ownership group, so it is unclear how ByteDance's continued involvement in this arrangement will play out. Oracle, Silver Lake and the Emirati investment firm MGX are the three managing investors, who each hold a 15% share. Other investors include the investment firm of Michael Dell, the billionaire founder of Dell Technologies. ByteDance retains 19.9% of the joint venture.

The Courts

Snap Settles Social Media Addiction Lawsuit Ahead of Landmark Trial (bbc.com) 28

Snap has settled a social media addiction lawsuit just days before trial, while Meta, TikTok, and Alphabet remain defendants and are headed to court. "Terms of the deal were not announced as it was revealed by lawyers at a California Superior Court hearing, after which Snap told the BBC the parties were 'pleased to have been able to resolve this matter in an amicable manner.'" From the report: The plaintiff, a 19-year-old woman identified by the initials K.G.M., alleged that the algorithmic design of the platforms left her addicted and affected her mental health. In the absence of a settlement with the other parties, the trial is scheduled to go forward against the remaining three defendants, with jury selection due to begin on January 27. Meta boss Mark Zuckerberg is expected to testify, and until Tuesday's settlement, Snap CEO Evan Spiegel was also set to take the stand.

Snap is still a defendant in other social media addiction cases that have been consolidated in the court. The closely watched cases could challenge a legal theory that social media companies have used to shield themselves. They have long argued that Section 230 of the Communications Decency Act of 1996 protects them from liability for what third parties post on their platforms. But plaintiffs argue that the platforms are designed in a way that leaves users addicted through choices that affect their algorithms and notifications. The social media companies have said the plaintiffs' evidence falls short of proving that they are responsible for alleged harms such as depression and eating disorders.

Earth

Era of 'Global Water Bankruptcy' Is Here, UN Report Says (theguardian.com) 118

An anonymous reader quotes a report from the Guardian: The world has entered an era of "global water bankruptcy" that is harming billions of people, a UN report has declared. The overuse and pollution of water must be tackled urgently, the report's lead author said, because no one knew when the whole system could collapse, with implications for peace and social cohesion. All life depends on water but the report found many societies had long been using water faster than it could be replenished annually in rivers and soils, as well as over-exploiting or destroying long-term stores of water in aquifers and wetlands. This had led to water bankruptcy, the report said, with many human water systems past the point at which they could be restored to former levels. The climate crisis was exacerbating the problem by melting glaciers, which store water, and causing whiplashes between extremely dry and wet weather.

Prof Kaveh Madani, who led the report, said while not every basin and country was water bankrupt, the world was interconnected by trade and migration, and enough critical systems had crossed this threshold to fundamentally alter global water risk. The result was a world in which 75% of people lived in countries classified as water-insecure or critically water-insecure and 2 billion people lived on ground that is sinking as groundwater aquifers collapse. Conflicts over water had risen sharply since 2010, the report said, while major rivers, such as the Colorado, in the US, and the Murray-Darling system, in Australia, were failing to reach the sea, and "day zero" emergencies -- when cities run out of water, such as in Chennai, India -- were escalating. Half of the world's large lakes had shrunk since the early 1990s, the report noted. Even damp nations, such as the UK, were at risk because of reliance on imports of water-dependent food and other products. "This report tells an uncomfortable truth: many critical water systems are already bankrupt," said Madani, of the UN University's Institute for Water, Environment and Health. "It's extremely urgent [because] no one knows exactly when the whole system would collapse."

About 70% of fresh water taken by human withdrawals was used for agriculture, but Madani said: "Millions of farmers are trying to grow more food from shrinking, polluted or disappearing water sources. Water bankruptcy in India or Pakistan, for example, also means an impact on rice exports to a lot of places around the world." More than half of global food was grown in areas where water storage was declining or unstable, the report said. Madani said action to deal with water bankruptcy offered a chance to bring countries together in an increasingly fragmented world. "Water is a strategic, untapped opportunity to the world to create unity within and between nations. It is one of the very rare topics that left and right and north and south all agree on its importance." The UN report, which is based on a forthcoming paper in the peer-reviewed journal Water Resources Management, sets out how population growth, urbanization and economic growth have increased water demand for agriculture, industry, energy and cities. "These pressures have produced a global pattern that is now unmistakable," it said.

United Kingdom

UK Mulls Australia-Like Social Media Ban For Users Under 16 (engadget.com) 25

The UK government has launched a public consultation on whether to ban social media use for children under 16, drawing inspiration from Australia's recently enacted age-based restrictions. "It would also explore how to enforce that limit, how to limit tech companies from being able to access children's data and how to limit 'infinite scrolling,' as well as access to addictive online tools," reports Engadget. "In addition to seeking feedback from parents and young people themselves, the country's ministers are going to visit Australia to see the effects of the country's social media ban for kids, according to Financial Times."
AI

Energy Costs Will Decide Which Countries Win the AI Race, Microsoft's Nadella Says (cnbc.com) 60

Energy costs will be key to deciding which country wins the AI race, Microsoft CEO Satya Nadella has said. CNBC: As countries race to build AI infrastructure to capitalize on the technology's promise of huge efficiency gains, Nadella told the World Economic Forum (WEF) on Tuesday that "GDP growth in any place will be directly correlated" to the cost of energy in using AI.

He pointed to a new global commodity in "tokens" -- basic units of processing that are bought by users of AI models, allowing them to run tasks. "The job of every economy and every firm in the economy is to translate these tokens into economic growth, then if you have a cheaper commodity, it's better."

"I would say we will quickly lose even the social permission to actually take something like energy, which is a scarce resource, and use it to generate these tokens, if these tokens are not improving health outcomes, education outcomes, public sector efficiency, private sector competitiveness across all sectors," Nadella said.
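Nadella's framing of tokens as an energy-priced commodity can be made concrete with a back-of-the-envelope sketch. Every number below (accelerator power draw, token throughput, electricity price) is a hypothetical assumption for illustration, not a figure from the article:

```python
# Illustrative sketch: how electricity price feeds into the energy cost
# of generating AI "tokens". All inputs are invented for demonstration.

def cost_per_million_tokens(gpu_power_kw, tokens_per_second, price_per_kwh):
    """Energy cost (USD) of generating one million tokens on one accelerator."""
    seconds = 1_000_000 / tokens_per_second     # time to emit 1M tokens
    energy_kwh = gpu_power_kw * seconds / 3600  # kWh consumed in that time
    return energy_kwh * price_per_kwh

# Same hypothetical workload, two electricity prices:
cheap = cost_per_million_tokens(gpu_power_kw=1.0, tokens_per_second=1000, price_per_kwh=0.05)
costly = cost_per_million_tokens(gpu_power_kw=1.0, tokens_per_second=1000, price_per_kwh=0.25)

print(f"${cheap:.4f} vs ${costly:.4f} per million tokens")
```

On this toy model, a 5x difference in electricity price passes straight through as a 5x difference in energy cost per token -- the mechanism behind the claim that GDP growth from AI will correlate with local energy costs.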

Earth

Ocean Damage Nearly Doubles the Cost of Climate Change 38

A new study from Scripps Institution of Oceanography finds that factoring ocean damage into climate economics nearly doubles the estimated global cost of climate change, adding close to $2 trillion per year from losses to fisheries, coral reefs, and coastal infrastructure. "It is the first time a social cost of carbon (SCC) assessment -- a key measure of economic harm caused by climate change -- has included damages to the ocean," reports Inside Climate News. From the report: "For decades, we've been estimating the economic cost of climate change while effectively assigning a value of zero to the ocean," said Bernardo Bastien-Olvera, who led the study during his postdoctoral fellowship at Scripps. "Ocean loss is not just an environmental issue, but a central part of the economic story of climate change."

The social cost of carbon is an accounting method for working out the monetary cost of each ton of carbon dioxide released into the atmosphere. "[It] is one of the most efficient tools we have for internalizing climate damages into economic decision-making," said Amy Campbell, a United Nations climate advisor and former British government COP negotiator. Calculations have historically been used by international organizations and state departments like the U.S. Environmental Protection Agency to assess policy proposals -- though a 2025 White House memo from the Trump administration instructed federal agencies to ignore the data during cost-benefit analyses unless required by law. "It becomes politically contentious when deciding whose damages are counted, which sectors are included and most importantly how future and retrospective harms are valued," Campbell said.

Excluding ocean harm, the social cost of carbon is $51 per ton of carbon dioxide emitted. This increases to $97.20 per ton when the ocean, which covers 70 percent of the planet, is included. In 2024, global CO2 emissions were estimated to be 41.6 billion tons, making the 91 percent cost increase significant. Using greenhouse gas emission predictions, the report estimates the annual damages to traditional markets alone will be $1.66 trillion by 2100.
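The figures quoted above are internally consistent, which a quick check confirms (all numbers taken directly from the report's summary; no new data):

```python
# Sanity-check the social-cost-of-carbon figures quoted above.
scc_without_ocean = 51.00   # USD per ton of CO2, excluding ocean damages
scc_with_ocean = 97.20      # USD per ton, including ocean damages
emissions_2024 = 41.6e9     # estimated global CO2 emissions in 2024, tons

increase_pct = (scc_with_ocean - scc_without_ocean) / scc_without_ocean * 100
ocean_increment = (scc_with_ocean - scc_without_ocean) * emissions_2024

print(f"Cost increase: {increase_pct:.0f}%")  # the 91 percent figure above
print(f"Ocean damages: ${ocean_increment / 1e12:.2f} trillion/year")
```

The ocean increment works out to roughly $1.92 trillion per year, matching the "close to $2 trillion" headline figure.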
Transportation

Germany's EV Subsidies Will Include Chinese Brands (cnevpost.com) 55

Germany is reinstating EV subsidies after a sharp sales drop, rolling out a 3 billion-euro program offering 1,500-6,000 euros per buyer starting in May and running through 2029. Unlike some neighboring countries, the incentives are open to all manufacturers with a focus on low- and middle-income households. From a report: "I cannot see any evidence of this postulated major influx of Chinese car manufacturers in Germany, either in the figures or on the roads -- and that is why we are facing up to the competition and not imposing any restrictions," German Environment Minister Carsten Schneider said at a Monday press conference. The decision is a major boon for affordable Chinese automakers like BYD that are steadily gaining ground in the European market, [Bloomberg noted].

Germany's green light for Chinese EVs stands in stark contrast to other nations' approaches. In the UK, subsidies introduced last year effectively excluded Chinese battery-powered vehicles, while France's so-called social leasing scheme includes similar restrictions. [...] Germany maintains strong diplomatic ties with China, and German automakers are among the most significant players in China's automotive industry. Over the past years, China's policies -- including purchase subsidies and purchase tax reductions -- have not excluded models or automakers from specific countries. German automakers like Volkswagen and American automakers like Tesla all enjoy national-level purchase incentive policies in China on par with domestic automakers.

Social Networks

Threads Usage Overtakes X On Mobile (techcrunch.com) 37

New data from Similarweb shows Threads has overtaken X in daily mobile users. However, X still dominates on the web with around 150 million daily web visits compared to Threads' 8.5 million daily visits. TechCrunch reports: Similarweb's data shows that Threads had 141.5 million daily active users on iOS and Android as of January 7, 2026, after months of growth, while X has 125 million daily active users on mobile devices. This appears to be the result of longer-term trends, rather than a reaction to the recent X controversies [...]. Instead, Threads' boost in daily mobile usage may be driven by other factors, including cross-promotions from Meta's larger social apps like Facebook and Instagram (where Threads is regularly advertised to existing users), its focus on creators, and the rapid rollout of new features.

Over the past year, Threads has added features like interest-based communities, better filters, DMs, long-form text, disappearing posts, and has recently been spotted testing games. Combined, the daily active user increases suggest that more people are using Threads on mobile as a more regular habit.
Further reading: Threads Now Has More Than 400 Million Monthly Active Users
Electronic Frontier Foundation

Congress Wants To Hand Your Parenting To Big Tech 53

An anonymous reader quotes a report from the Electronic Frontier Foundation (EFF): Lawmakers in Washington are once again focusing on kids, screens, and mental health. But according to Congress, Big Tech is somehow both the problem and the solution. The Senate Commerce Committee held a hearing [Friday] on "examining the effect of technology on America's youth." Witnesses warned about "addictive" online content, mental health, and kids spending too much time buried in screens. At the center of the debate is a bill from Sens. Ted Cruz (R-TX) and Brian Schatz (D-HI) called the Kids Off Social Media Act (KOSMA), which they say will protect children and "empower parents."

That's a reasonable goal, especially at a time when many parents feel overwhelmed and nervous about how much time their kids spend on screens. But while the bill's press release contains soothing language, KOSMA doesn't actually give parents more control. Instead of respecting how most parents guide their kids towards healthy and educational content, KOSMA hands the control panel to Big Tech. That's right -- this bill would take power away from parents, and hand it over to the companies that lawmakers say are the problem. [...] This bill doesn't just set an age rule. It creates a legal duty for platforms to police families. Section 103(b) of the bill is blunt: if a platform knows a user is under 13, it "shall terminate any existing account or profile" belonging to that user. And "knows" doesn't just mean someone admits their age. The bill defines knowledge to include what is "fairly implied on the basis of objective circumstances" -- in other words, what a reasonable person would conclude from how the account is being used. The reality of how services would comply with KOSMA is clear: rather than risk liability for how they should have known a user was under 13, they will require all users to prove their age to ensure that they block anyone under 13.

KOSMA contains no exceptions for parental consent, for family accounts, or for educational or supervised use. The vast majority of people policed by this bill won't be kids sneaking around -- it will be minors who are following their parents' guidance, and the parents themselves. Imagine a child using their parent's YouTube account to watch science videos about how a volcano works. If they were to leave a comment saying, "Cool video -- I'll show this to my 6th grade teacher!" and YouTube becomes aware of the comment, the platform now has clear signals that a child is using that account. It doesn't matter whether the parent gave permission. Under KOSMA, the company is legally required to act. To avoid violating KOSMA, it would likely lock, suspend, or terminate the account, or demand proof it belongs to an adult. That proof would likely mean asking for a scan of a government ID, biometric data, or some other form of intrusive verification, all to keep what is essentially a "family" account from being shut down.

Violations of KOSMA are enforced by the FTC and state attorneys general. That's more than enough legal risk to make platforms err on the side of cutting people off. Platforms have no way to remove "just the kid" from a shared account. Their tools are blunt: freeze it, verify it, or delete it. Which means that even when a parent has explicitly approved and supervised their child's use, KOSMA forces Big Tech to override that family decision. [...] These companies don't know your family or your rules. They only know what their algorithms infer. Under KOSMA, those inferences carry the force of law. Rather than parents or teachers, decisions about who can be online, and for what purpose, will be made by corporate compliance teams and automated detection systems.

AI

Is the Possibility of Conscious AI a Dangerous Myth? (noemamag.com) 221

This week Noema magazine published a 7,000-word exploration of our modern "Mythology Of Conscious AI" written by a neuroscience professor who directs the University of Sussex Centre for Consciousness Science: The very idea of conscious AI rests on the assumption that consciousness is a matter of computation. More specifically, that implementing the right kind of computation, or information processing, is sufficient for consciousness to arise. This assumption, which philosophers call computational functionalism, is so deeply ingrained that it can be difficult to recognize it as an assumption at all. But that is what it is. And if it's wrong, as I think it may be, then real artificial consciousness is fully off the table, at least for the kinds of AI we're familiar with.
He makes detailed arguments against a computation-based consciousness (including "Simulation is not instantiation... If we simulate a living creature, we have not created life.") While a computer may seem like the perfect metaphor for a brain, the "dynamical systems" school of cognitive science (among other approaches) rejects the idea that minds can be entirely accounted for algorithmically. And maybe actual life needs to be present before something can be declared conscious.

He also warns that "Many social and psychological factors, including some well-understood cognitive biases, predispose us to overattribute consciousness to machines."

But then his essay reaches a surprising conclusion: As redundant as it may sound, nobody should be deliberately setting out to create conscious AI, whether in the service of some poorly thought-through techno-rapture, or for any other reason. Creating conscious machines would be an ethical disaster. We would be introducing into the world new moral subjects, and with them the potential for new forms of suffering, at (potentially) an exponential pace. And if we give these systems rights, as arguably we should if they really are conscious, we will hamper our ability to control them, or to shut them down if we need to. Even if I'm right that standard digital computers aren't up to the job, other emerging technologies might yet be, whether alternative forms of computation (analogue, neuromorphic, biological and so on) or rapidly developing methods in synthetic biology. For my money, we ought to be more worried about the accidental emergence of consciousness in cerebral organoids (brain-like structures typically grown from human embryonic stem cells) than in any new wave of LLM.

But our worries don't stop there. When it comes to the impact of AI in society, it is essential to draw a distinction between AI systems that are actually conscious and those that persuasively seem to be conscious but are, in fact, not. While there is inevitable uncertainty about the former, conscious-seeming systems are much, much closer... Machines that seem conscious pose serious ethical issues distinct from those posed by actually conscious machines. For example, we might give AI systems "rights" that they don't actually need, since they would not actually be conscious, restricting our ability to control them for no good reason. More generally, either we decide to care about conscious-seeming AI, distorting our circles of moral concern, or we decide not to, and risk brutalizing our minds. As Immanuel Kant argued long ago in his lectures on ethics, treating conscious-seeming things as if they lack consciousness is a psychologically unhealthy place to be...

One overlooked factor here is that even if we know, or believe, that an AI is not conscious, we still might be unable to resist feeling that it is. Illusions of artificial consciousness might be as impenetrable to our minds as some visual illusions... What's more, because there's no consensus over the necessary or sufficient conditions for consciousness, there aren't any definitive tests for deciding whether an AI is actually conscious....

Illusions of conscious AI are dangerous in their own distinctive ways, especially if we are constantly distracted and fascinated by the lure of truly sentient machines... If we conflate the richness of biological brains and human experience with the information-processing machinations of deepfake-boosted chatbots, or whatever the latest AI wizardry might be, we do our minds, brains and bodies a grave injustice. If we sell ourselves too cheaply to our machine creations, we overestimate them, and we underestimate ourselves...

The sociologist Sherry Turkle once said that technology can make us forget what we know about life. It's about time we started to remember.

Australia

Nearly 5 Million Accounts Removed Under Australia's New Social Media Ban (nytimes.com) 72

An anonymous reader quotes a report from the New York Times: Nearly five million social media accounts belonging to Australian teenagers have been deactivated or removed, a month after a landmark law barring those younger than 16 from using the services took effect, the government said on Thursday. The announcement was the first reported metric reflecting the rollout of the law, which is being closely watched by several other countries weighing whether the regulation can be a blueprint for protecting children from the harms of social media, or a cautionary tale highlighting the challenges of such attempts.

The law required 10 social media platforms, including Instagram, Facebook, Snapchat and Reddit, to prevent users under 16 from accessing their services. Under the law, which came into force in December, failure by the companies to take "reasonable steps" to remove underage users could lead to fines of up to 49.5 million Australian dollars, about $33 million. [...] The number of removed accounts offered only a limited picture of the ban's impact. Many teenagers have said in the weeks since the law took effect that they were able to get around the ban by lying about their age, or that they could easily bypass verification systems.

The regulator tasked with enforcing and tracking the law, the eSafety Commissioner, did not release a detailed breakdown beyond announcing that the companies had "removed access" to about 4.7 million accounts belonging to children under 16. Meta, the parent company of Instagram and Facebook, said this week that it had removed almost 550,000 accounts of users younger than 16 before the ban came into effect.

"Change doesn't happen overnight," said Prime Minister Anthony Albanese. "But these early signs show it's important we've acted to make this change."

Social Networks

Study Finds Weak Evidence Linking Social Media Use to Teen Mental Health Problems (theguardian.com) 40

An anonymous reader quotes a report from the Guardian: Screen time spent gaming or on social media does not cause mental health problems in teenagers, according to a large-scale study. [...] Researchers at the University of Manchester followed 25,000 11- to 14-year-olds over three school years, tracking their self-reported social media habits, gaming frequency and emotional difficulties to find out whether technology use genuinely predicted later mental health difficulties. Participants were asked how much time on a normal weekday in term time they spent on TikTok, Instagram, Snapchat and other social media, or gaming. They were also asked questions about their feelings, mood and wider mental health.

The study found no evidence for boys or girls that heavier social media use or more frequent gaming increased teenagers' symptoms of anxiety or depression over the following year. Increases in girls' and boys' social media use from year 8 to year 9 and from year 9 to year 10 had zero detrimental impact on their mental health the following year, the authors found. More time spent gaming also had a zero negative effect on pupils' mental health. "We know families are worried, but our results do not support the idea that simply spending time on social media or gaming leads to mental health problems -- the story is far more complex than that," said the lead author Dr Qiqi Cheng.

The research, published in the Journal of Public Health, also examined whether how pupils use social media makes a difference, with participants asked how much time they spent chatting with others, posting stories, pictures and videos, or browsing feeds and profiles and scrolling through photos and stories. The scientists found that neither actively chatting on social media nor passively scrolling through feeds appeared to drive mental health difficulties. The authors stressed that the findings did not mean online experiences were harmless. Hurtful messages, online pressures and extreme content could have detrimental effects on wellbeing, but focusing on screen time alone was not helpful, they said.

Social Networks

Digg Launches Its New Reddit Rival To the Public (techcrunch.com) 44

Digg is officially back under the ownership of its original founder, Kevin Rose, along with Reddit co-founder Alexis Ohanian. "Similar to Reddit, the new Digg offers a website and mobile app where you can browse feeds featuring posts from across a selection of its communities and join other communities that align with your interests," reports TechCrunch. "There, you can post, comment, and upvote (or 'digg') the site's content." From the report: [T]he rise of AI has presented an opportunity to rebuild Digg, Rose and Ohanian believe, leading them to acquire Digg last March through a leveraged buyout by True Ventures, Ohanian's firm Seven Seven Six, Rose and Ohanian themselves, and the venture firm S32. The company has not disclosed its funding. They're betting that AI can help to address some of the messiness and toxicity of today's social media landscape. At the same time, social platforms will need a new set of tools to ensure they're not taken over by AI bots posing as people.

"We obviously don't want to force everyone down some kind of crazy KYC process," said Rose in an interview with TechCrunch, referring to the 'know your customer' verification process used by financial institutions to confirm someone's identity. Instead of simply offering verification checkmarks to designate trust, Digg will try out new technologies, like using zero-knowledge proofs (cryptographic methods that verify information without revealing the underlying data) to verify the people using its platform. It could also do other things, like require that people who join a product-focused community verify they actually own or use the product being discussed there.

As an example, a community for Oura ring owners could verify that everyone who posts has proven they own one of the smart rings. Plus, Rose suggests Digg could use signals acquired from mobile devices to help verify members -- for instance, the app could identify when Digg users attended a meetup in the same location. "I don't think there's going to be any one silver bullet here," said Rose. "It's just going to be us saying ... here's a platter of things that you can add together to create trust."
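The verification idea Rose describes can be illustrated with a much simpler stand-in: an issuer-signed attestation that carries only a derived claim ("owns the product" or "over 16") and never the underlying personal data. The sketch below is hypothetical -- it is not Digg's implementation and not a true zero-knowledge proof (a real deployment would use an actual ZK proof system so the platform holds no verification secret); the issuer key and function names are invented for illustration.

```python
import hashlib
import hmac
import json

# Hypothetical shared secret between the trusted issuer and the platform.
# A genuine ZKP scheme would remove the need for any shared key.
ISSUER_KEY = b"demo-issuer-secret"

def issue_attestation(birth_year: int, current_year: int) -> dict:
    """The issuer (e.g. an ID service) sees the birthdate, but the
    attestation it returns contains only the derived boolean claim."""
    claim = {"over_16": current_year - birth_year >= 16}
    payload = json.dumps(claim, sort_keys=True).encode()  # canonical form
    tag = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "tag": tag}

def platform_verify(attestation: dict) -> bool:
    """The platform checks the issuer's signature over the claim;
    it never learns the birthdate itself."""
    payload = json.dumps(attestation["claim"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, attestation["tag"])
            and attestation["claim"]["over_16"])

att = issue_attestation(birth_year=2005, current_year=2026)
print(platform_verify(att))  # True: claim verified, birthdate never shared
```

Note that tampering with the claim (say, flipping `over_16` to `True`) invalidates the signature, so the platform rejects it -- the property that makes attribute-only disclosure useful for the kind of community-membership checks the article describes.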
