AI

Trump Orders Federal Agencies To Stop Using Anthropic AI Tech 'Immediately' 135

President Donald Trump has ordered all U.S. federal agencies to "immediately cease" using Anthropic's AI technology, escalating a standoff after the company sought limits on Pentagon use of its models. CNBC reports: The company, which in July signed a $200 million contract with Pentagon, wants assurances that the Defense Department will not use its AI models will not be used for fully autonomous weapons or mass domestic surveillance of Americans. The Pentagon had set a deadline of 5:01 p.m. ET Friday for Anthropic to agree to its demands to allow the Pentagon to use the technology for all lawful purposes. If Anthropic did not meet that deadline, Pete Hegseth threatened to label the company a "supply chain risk" or force it to comply by invoking the Defense Production Act.

"The Leftwing nut jobs at Anthropic have made a DISASTROUS MISTAKE trying to STRONG-ARM the Department of War, and force them to obey their Terms of Service instead of our Constitution," Trump said in a post on Truth Social. "Their selfishness is putting AMERICAN LIVES at risk, our Troops in danger, and our National Security in JEOPARDY."

"Therefore, I am directing EVERY Federal Agency in the United States Government to IMMEDIATELY CEASE all use of Anthropic's technology," Trump wrote. "We don't need it, we don't want it, and will not do business with them again! There will be a Six Month phase out period for Agencies like the Department of War who are using Anthropic's products, at various levels," Trump said.
On Friday, OpenAI said it would also draw the same red lines as Anthropic: no AI for mass surveillance or autonomous lethal weapons.
Crime

Four Convicted Over Spyware Affair That Shook Greece (bbc.com) 7

A Greek court has convicted four individuals linked to the marketing of Predator spyware in the wiretapping scandal that shook the country in 2022. The BBC reports: In what became known as "Greece's Watergate," surveillance software called Predator was used to target 87 people -- among them government ministers, senior military officials and journalists. The four who had marketed the software were found guilty by an Athens court of misdemeanours of violating the confidentiality of telephone communications and illegally accessing personal data and conversations.

The court sentenced the four defendants to lengthy jail sentences, suspended pending appeal. Although they each face 126 years, only eight would be typically served which is the upper limit for misdemeanors. One in three of the dozens of figures targeted had also been under legal surveillance by Greece's intelligence services (EYP). Prime Minister Kyriakos Mitsotakis, who had placed EYP directly under his supervision, called it a scandal, but no government officials have been charged in court and critics accuse the government of trying to cover up the truth.

The case dates back to the summer of 2022, when the current head of Greek Socialist party Pasok, Nikos Androulakis - then an MEP - was informed by the European Parliament's IT experts that he had received a malicious text message containing a link. Predator spyware, marketed by the Athens-based Israeli company Intellexa, can get access to a device's messages, camera, and microphone. Its use was illegal in Greece at that time but a new law passed in 2022 has since legalised state security use of surveillance software under strict conditions. Androulakis also discovered that he had been tracked for "national security reasons" by Greece's intelligence services. The scandal has since escalated into a debate over democratic accountability in Greece.

Operating Systems

Colorado Lawmakers Push for Age Verification at the Operating System Level (pcmag.com) 165

Colorado lawmakers are proposing SB26-051, a bill that would require operating systems to register a user's age bracket and share it with apps via an API. PCMag reports: The bill comes from state Sen. Matt Ball and Rep. Amy Paschal, both Democrats. "The intent is to create thoughtful safeguards for kids online through a privacy-forward framework for age assurance," Ball told PCMag. "Unlike some laws in other states, SB 51 doesn't require users to share personally identifiable information or use facial recognition technology."

The legislation also promises to centralize the age check through the OS, rather than mandating that each app enforce their own age-verification mechanism, which can involve scanning the user's official ID, thus raising privacy and security concerns. The bill also forbids the sharing of the age-bracket data for any other purpose. But it looks like it's easy to bypass the age check proposed by SB26-051. The legislation itself doesn't mention any state ID check to verify the owner's age. In addition, the bill doesn't seem to cover websites, only apps and app stores.
The report notes that the legislation was based on California's bill AB 1043, which was passed last year and expected to take effect January 1, 2027.
IOS

iPhone and iPad Are First Consumer Devices Cleared for NATO Classified Data (macrumors.com) 27

Apple's iPhone and iPad running iOS 26 and iPadOS 26 have become the first consumer mobile devices cleared for NATO-restricted classified data. No special software or settings are required. MacRumors reports: Apple's devices are the first and only consumer mobile products that have reached this government certification level after security testing and evaluation by the German government. iPhones and iPads running iOS 26 and iPadOS 26 are now certified for use with classified data in all NATO nations.

In an announcement of the security clearance, Apple touted its security features: "Apple designs security into all of its products from the start, ensuring the most sophisticated protections are built in across hardware, software, and Apple silicon. This unique approach allows Apple users to benefit from industry-leading security protections such as best-in-class encryption, biometric authentication with Face ID, and groundbreaking features like Memory Integrity Enforcement. These same protections are now recognized as meeting stringent government and international security requirements, even for restricted data."

Firefox

Firefox 148 Lets You Kill All AI Features in One Click (firefox.com) 48

Mozilla has released Firefox 148 for Windows, macOS and Linux, bringing a new AI Settings section that lets users disable all of the browser's AI-powered features in one click and then selectively re-enable the ones they actually want, such as the local translation tool that works locally rather than in the cloud.

The update also patches more than 50 security vulnerabilities -- none known to be under active exploitation -- over half of which Mozilla classifies as high risk, including five sandbox escape flaws and eight use-after-free bugs in the JavaScript engine that could allow code execution.
United States

Americans Are Leaving the US in Record Numbers (msn.com) 393

An anonymous reader shares a report: In its 250th year, is America, land of immigration, becoming a country of emigration? Last year the U.S. experienced something that hasn't definitively occurred since the Great Depression: More people moved out than moved in. The Trump administration has hailed the exodus -- negative net migration -- as the fulfillment of its promise to ramp up deportations and restrict new visas. Beneath the stormy optics of that immigration crackdown, however, lies a less-noticed reversal: America's own citizens are leaving in record numbers, replanting themselves and their families in lands they find more affordable and safe.

Since the Eisenhower administration, the U.S. hasn't collected comprehensive statistics on the number of citizens leaving. Yet data on residence permits, foreign home purchases, student enrollments and other metrics from more than 50 countries show that Americans are voting with their feet to an unprecedented degree. A millions-strong diaspora is studying, telecommuting and retiring overseas. The new American dream, for some of its citizens, is to no longer live there.

In the cobblestoned streets of Lisbon, so many Americans are snapping up apartments that the newest arrivals complain they mostly hear their own language -- not Portuguese. One of every 15 residents in Dublin's trendy Grand Canal Dock district was born in the U.S., according to realtors, higher than the percentage of Americans born in Ireland during the 19th-century influx following the Potato Famine. In Bali, Colombia and Thailand, the strains of housing American remote workers paid in dollars have inspired locals to mount protests against a wave of gentrification. More than 100,000 young students are enrolled abroad for a more affordable university degree. In nursing homes mushrooming across the Mexican border, elderly Americans are turning up for low-cost care.

[...] The U.S. experienced net negative migration -- an estimated loss of some 150,000 people -- in 2025, and the outflow will likely increase in 2026, according to calculations by the Brookings Institution, a public-policy think tank. The number could be larger or smaller because official U.S. data doesn't yet fully capture the number of people leaving, Brookings analysts noted. The total in-migration was between around 2.6 and 2.7 million in 2025, down from a peak of almost 6 million in 2023. The U.S. saw 675,000 deportations and 2.2 million "self-deportations" last year, according to data from the Department of Homeland Security. A Wall Street Journal analysis of 15 countries providing full or partial 2025 data showed that at least 180,000 Americans joined them -- a number likely to be far higher when other countries report full statistics.

Security

AI Can Find Hundreds of Software Bugs -- Fixing Them Is Another Story (theregister.com) 26

Anthropic last week promoted Claude Code Security, a research preview capability that uses its Claude Opus 4.6 model to hunt for software vulnerabilities, claiming its red team had surfaced over 500 bugs in production open-source codebases -- but security researchers say the real bottleneck was never discovery.

Guy Azari, a former security researcher at Microsoft and Palo Alto Networks, told The Register that only two to three of those 500 vulnerabilities have been fixed and none have received CVE assignments. The National Vulnerability Database already carried a backlog of roughly 30,000 CVE entries awaiting analysis in 2025, and nearly two-thirds of reported open-source vulnerabilities lacked an NVD severity score.

The curl project closed its bug bounty program because maintainers could no longer handle the flood of poorly crafted reports from AI tools and humans alike. Feross Aboukhadijeh, CEO of security firm Socket, said discovery is becoming dramatically cheaper but validating findings, coordinating with maintainers, and developing architecture-aligned patches remains slow, human-intensive work.
AI

Hacker Used Anthropic's Claude To Steal Sensitive Mexican Data (bloomberg.com) 22

A hacker exploited Anthropic's AI chatbot to carry out a series of attacks against Mexican government agencies, resulting in the theft of a huge trove of sensitive tax and voter information, according to cybersecurity researchers. From a report: The unknown Claude user wrote Spanish-language prompts for the chatbot to act as an elite hacker, finding vulnerabilities in government networks, writing computer scripts to exploit them and determining ways to automate data theft, Israeli cybersecurity startup Gambit Security said in research published Wednesday.

The activity started in December and continued for roughly a month. In all, 150 gigabytes of Mexican government data was stolen, including documents related to 195 million taxpayer records as well as voter records, government employee credentials and civil registry files, according to the researchers.

AI

Meta AI Security Researcher Said an OpenClaw Agent Ran Amok on Her Inbox (techcrunch.com) 75

Meta AI security researcher Summer Yue posted a now-viral account on X describing how an OpenClaw agent she had tasked with sorting through her overstuffed email inbox went rogue, deleting messages in what she called a "speed run" while ignoring her repeated commands from her phone to stop.

"I had to RUN to my Mac mini like I was defusing a bomb," Yue wrote, sharing screenshots of the ignored stop prompts as proof. Yue said she had previously tested the agent on a smaller "toy" inbox where it performed well enough to earn her trust, so she let it loose on the real thing. She believes the larger volume of data triggered compaction -- a process where the context window grows too large and the agent begins summarizing and compressing its running instructions, potentially dropping ones the user considers critical.

The agent may have reverted to its earlier toy-inbox behavior and skipped her last prompt telling it not to act. OpenClaw is an open-source AI agent designed to run as a personal assistant on local hardware.
Privacy

Russia Targets Telegram as Rift With Founder Pavel Durov Deepens (ft.com) 25

Russia has opened an investigation into Telegram founder Pavel Durov for "abetting terrorist activities," [non-paywalled source] in the latest sign that his uneasy relationship with the Kremlin has broken down. From a report: Two Russian newspapers, including the state-run Rossiiskaya Gazeta and Kremlin-friendly tabloid Komsomolskaya Pravda, alleged on Tuesday that the messaging app had become a tool of western and Ukrainian intelligence services.

The articles, credited to materials from Russia's FSB security service, accused Telegram of enabling attacks in Russia and said that Durov's "actions ... are under criminal investigation." Russia has restricted Telegram's functions, accusing it of flouting the law and is seeking to divert users towards Max, a state-run rival messenger. The steps escalate pressure on a platform that remains deeply embedded in Russian public life.

Open Source

'Open Source Registries Don't Have Enough Money To Implement Basic Security' (theregister.com) 24

Google and Microsoft contributed $5 million to launch Alpha-Omega in 2022 — a Linux Foundation project to help secure the open source supply chain. But its co-founder Michael Winser warns that open source registries are in financial peril, reports The Register, since they're still relying on non-continuous funding from grants and donations.

And it's not just because bandwidth is expensive, he said at this year's FOSDEM. "The problem is they don't have enough money to spend on the very security features that we all desperately need..." In a follow-up LinkedIn exchange after this article had posted, Winser estimated it could cost $5 million to $8 million a year to run a major registry the size of Crates.io, which gets about 125 billion downloads a year. And this number wouldn't include any substantial bandwidth and infrastructure donations (Like Fastly's for Crates.io). Adding to that bill is the growing cost of identifying malware, the proliferation of which has been amplified through the use of AI and scripts. These repositories have detected 845,000 malware packages from 2019 to January 2025 (the vast majority of those nasty packages came to npm)...

In some cases benevolent parties can cover [bandwidth] bills: Python's PyPI registry bandwidth needs for shipping copies of its 700,000+ packages (amounting to 747PB annually at a sustained rate of 189 Gbps) are underwritten by Fastly, for instance. Otherwise, the project would have to pony up about $1.8 million a month. Yet the costs Winser was most concerned about are not bandwidth or hosting; they are the security features needed to ensure the integrity of containers and packages. Alpha-Omega underwrites a "distressingly" large amount of security work around registries, he said. It's distressing because if Alpha-Omega itself were to miss a funding round, a lot of registries would be screwed. Alpha-Omega's recipients include the Python Software Foundation, Rust Foundation, Eclipse Foundation, OpenJS Foundation for Node.js and jQuery, and Ruby Central.

Donations and memberships certainly help defray costs. Volunteers do a lot of what otherwise would be very expensive work. And there are grants about...Winser did not offer a solution, though he suggested the key is to convince the corporate bean counters to consider paid registries as "a normal cost of doing business and have it show up in their opex as opposed to their [open source program office] donation budget."

The dilemma was summed up succinctly by the anonymous Slashdot reader who submitted this story.

"Free beer is great. Securing the keg costs money!"
AI

Amazon Disputes Report an AWS Service Was Taken Down By Its AI Coding Bot (aboutamazon.com) 10

Friday Amazon published a blog post "to address the inaccuracies" in a Financial Times report that the company's own AI tool Kiro caused two outages in an AWS service in December.

Amazon writes that the "brief" and "extremely limited" service interruption "was the result of user error — specifically misconfigured access controls — not AI as the story claims."

And "The Financial Times' claim that a second event impacted AWS is entirely false." The disruption was an extremely limited event last December affecting a single service (AWS Cost Explorer — which helps customers visualize, understand, and manage AWS costs and usage over time) in one of our 39 Geographic Regions around the world. It did not impact compute, storage, database, AI technologies, or any other of the hundreds of services that we run. The issue stemmed from a misconfigured role — the same issue that could occur with any developer tool (AI powered or not) or manual action.

We did not receive any customer inquiries regarding the interruption. We implemented numerous safeguards to prevent this from happening again — not because the event had a big impact (it didn't), but because we insist on learning from our operational experience to improve our security and resilience. Additional safeguards include mandatory peer review for production access. While operational incidents involving misconfigured access controls can occur with any developer tool — AI-powered or not — we think it is important to learn from these experiences.

Robotics

Man Accidentally Gains Control of 7,000 Robot Vacuums (popsci.com) 51

A software engineer tried steering his robot vacuum with a videogame controller, reports Popular Science — but ended up with "a sneak peak into thousands of people's homes." While building his own remote-control app, Sammy Azdoufal reportedly used an AI coding assistant to help reverse-engineer how the robot communicated with DJI's remote cloud servers. But he soon discovered that the same credentials that allowed him to see and control his own device also provided access to live camera feeds, microphone audio, maps, and status data from nearly 7,000 other vacuums across 24 countries.

The backend security bug effectively exposed an army of internet-connected robots that, in the wrong hands, could have turned into surveillance tools, all without their owners ever knowing. Luckily, Azdoufal chose not to exploit that. Instead, he shared his findings with The Verge, which quickly contacted DJI to report the flaw... He also claims he could compile 2D floor plans of the homes the robots were operating in. A quick look at the robots' IP addresses also revealed their approximate locations.

DJI told Popular Science the issue was addressed "through two updates, with an initial patch deployed on February 8 and a follow-up update completed on February 10."
Python

How Python's Security Response Team Keeps Python Users Safe (blogspot.com) 5

This week the Python Software Foundation explained how they keep Python secure. A new blog post recognizes the volunteers and paid Python Software Foundation staff on the Python Security Response Team (PSRT), who "triage and coordinate vulnerability reports and remediations keeping all Python users safe." Just last year the PSRT published 16 vulnerability advisories for CPython and pip, the most in a single year to date! And the PSRT usually can't do this work alone, PSRT coordinators are encouraged to involve maintainers and experts on the projects and submodules. By involving the experts directly in the remediation process ensures fixes adhere to existing API conventions and threat-models, are maintainable long-term, and have minimal impact on existing use-cases. Sometimes the PSRT even coordinates with other open source projects to avoid catching the Python ecosystem off-guard by publishing a vulnerability advisory that affects multiple other projects. The most recent example of this is PyPI's ZIP archive differential attack mitigation.

This work deserves recognition and celebration just like contributions to source code and documentation. [Security Developer-in-Residence Seth Larson and PSF Infrastructure Engineer Jacob Coffee] are developing further improvements to workflows involving "GitHub Security Advisories" to record the reporter, coordinator, and remediation developers and reviewers to CVE and OSV records to properly thank everyone involved in the otherwise private contribution to open source projects.

Security

Cyber Stocks Slide As Anthropic Unveils 'Claude Code Security' (bloomberg.com) 29

An anonymous reader quotes a report from Bloomberg: Shares of cybersecurity software companies tumbled Friday after Anthropic PBC introduced a new security feature into its Claude AI model. Crowdstrike Holdings was the among the biggest decliners, falling as much as 6.5%, while Cloudflare slumped more than 6%. Meanwhile, Zscaler dropped 3.5%, SailPoint shed 6.8%, and Okta declined 5.7%. The Global X Cybersecurity ETF fell as much as 3.8%, extending its losses on the year to 14%.

Anthropic said the new tool will "scans codebases for security vulnerabilities and suggests targeted software patches for human review." The firm said the update is available in a limited research preview for now.

Businesses

PayPal Discloses Data Breach That Exposed User Info For 6 Months (bleepingcomputer.com) 7

PayPal is notifying customers of a data breach after a software error in a loan application exposed their sensitive personal information, including Social Security numbers, for nearly 6 months last year. From a report: The incident affected the PayPal Working Capital (PPWC) loan app, which provides small businesses with quick access to financing. PayPal discovered the breach on December 12, 2025, and determined that customers' names, email addresses, phone numbers, business addresses, Social Security numbers, and dates of birth had been exposed since July 1, 2025.

The financial technology company said it has reversed the code change that caused the incident, blocking attackers' access to the data one day after discovering the breach. "On December 12, 2025, PayPal identified that due to an error in its PayPal Working Capital ('PPWC') loan application, the PII of a small number of customers was exposed to unauthorized individuals during the timeframe of July 1, 2025 to December 13, 2025," PayPal said in breach notification letters sent to affected users. "PayPal has since rolled back the code change responsible for this error, which potentially exposed the PII. We have not delayed this notification as a result of any law enforcement investigation."

Security

How Private Equity Debt Left a Leading VPN Open To Chinese Hackers (financialpost.com) 26

An anonymous reader quotes a report from Bloomberg: In early 2024, the agency that oversees cybersecurity for much of the US government issued a rare emergency order -- disconnect your Connect Secure virtual private network software immediately. Chinese spies had hacked the code and infiltrated nearly two dozen organizations. The directive applied to all civilian federal agencies, but given the product's customer base, its impact was more widely felt. The software, which is made by Ivanti Inc., was something of an industry standard across government and much of the corporate world. Clients included the US Air Force, Army, Navy and other parts of the Defense Department, the Department of State, the Federal Aviation Administration, the Federal Reserve, the National Aeronautics and Space Administration, thousands of companies and more than 2,000 banks including Wells Fargo & Co. and Deutsche Bank AG, according to federal procurement records, internal documents, interviews and the accounts of former Ivanti employees who requested anonymity because they were not authorized to disclose customer information.

Soon after sending out their order, which instructed agencies to install an Ivanti-issued fix, staffers at the Cybersecurity and Infrastructure Security Agency discovered that the threat was also inside their own house. Two sensitive CISA databases -- one containing information about personnel at chemical facilities, another assessing the vulnerabilities of critical infrastructure operators -- had been compromised via the agency's own Connect Secure software. CISA had followed all its own guidance. Ivanti's fix had failed. This was a breaking point for some American national security officials, who had long expressed concerns about Connect Secure VPNs. CISA subsequently published a letter with the Federal Bureau of Investigation and the national cybersecurity agencies of the UK, Canada, Australia and New Zealand warning customers of the "significant risk" associated with continuing to use the software. According to Laura Galante, then the top cyber official in the Office of the Director of National Intelligence, the government came to a simple conclusion about the technology. "You should not be using it," she said. "There really is no other way to put it."

That attack, along with several others that successfully targeted the Ivanti software, illustrate how private equity's push into the cybersecurity market ended up compromising the quality and safety of some critical VPN products, Bloomberg has found. Last year, Bloomberg reported that Citrix Systems Inc., another top VPN maker, experienced several major hacks after its private equity owners, Elliott Investment Management and Vista Equity Partners, cut most of the company's 70-member product security team following their acquisition of the company in 2022. Some government officials and private-sector executives are now reconsidering their approach to evaluating cybersecurity software. In addition to excising private equity-owned VPNs from their networks, some factor private equity ownership into their risk assessments of key technologies.

Censorship

US Plans Online Portal To Bypass Content Bans In Europe and Elsewhere 55

The U.S. State Department is reportedly developing a site called freedom.gov that would let users in Europe and elsewhere access content restricted under local laws, "including alleged hate speech and terrorist propaganda," reports Reuters. Washington views the move as a way to counter censorship. Reuters reports: One source said officials had discussed including a virtual private network function to make a user's traffic appear to originate in the U.S. and added that user activity on the site will not be tracked. Headed by Undersecretary for Public Diplomacy Sarah Rogers, the project was expected to be unveiled at last week's Munich Security Conference but was delayed, the sources said. Reuters could not determine why the launch did not happen, but some State Department officials, including lawyers, have raised concerns about the plan, two of the sources said, without detailing the concerns.

The project could further strain ties between the Trump administration and traditional U.S. allies in Europe, already heightened by disputes over trade, Russia's war in Ukraine and President Donald Trump's push to assert control over Greenland. The portal could also put Washington in the unfamiliar position of appearing to encourage citizens to flout local laws.
Security

OpenClaw Security Fears Lead Meta, Other AI Firms To Restrict Its Use (wired.com) 7

An anonymous reader quotes a report from Wired: Last month, Jason Grad issued a late-night warning to the 20 employees at his tech startup. "You've likely seen Clawdbot trending on X/LinkedIn. While cool, it is currently unvetted and high-risk for our environment," he wrote in a Slack message with a red siren emoji. "Please keep Clawdbot off all company hardware and away from work-linked accounts." Grad isn't the only tech executive who has raised concerns to staff about the experimental agentic AI tool, which was briefly known as MoltBot and is now named OpenClaw. A Meta executive says he recently told his team to keep OpenClaw off their regular work laptops or risk losing their jobs. The executive told reporters he believes the software is unpredictable and could lead to a privacy breach if used in otherwise secure environments. He spoke on the condition of anonymity to speak frankly.

[...] Some cybersecurity professionals have publicly urged companies to take measures to strictly control how their workforces use OpenClaw. And the recent bans show how companies are moving quickly to ensure security is prioritized ahead of their desire to experiment with emerging AI technologies. "Our policy is, 'mitigate first, investigate second' when we come across anything that could be harmful to our company, users, or clients," says Grad, who is cofounder and CEO of Massive, which provides Internet proxy tools to millions of users and businesses. His warning to staff went out on January 26, before any of his employees had installed OpenClaw, he says. At another tech company, Valere, which works on software for organizations including Johns Hopkins University, an employee posted about OpenClaw on January 29 on an internal Slack channel for sharing new tech to potentially try out. The company's president quickly responded that use of OpenClaw was strictly banned, Valere CEO Guy Pistone tells WIRED. "If it got access to one of our developer's machines, it could get access to our cloud services and our clients' sensitive information, including credit card information and GitHub codebases," Pistone says. "It's pretty good at cleaning up some of its actions, which also scares me."

A week later, Pistone did allow Valere's research team to run OpenClaw on an employee's old computer. The goal was to identify flaws in the software and potential fixes to make it more secure. The research team later advised limiting who can give orders to OpenClaw and exposing it to the Internet only with a password in place for its control panel to prevent unwanted access. In a report shared with WIRED, the Valere researchers added that users have to "accept that the bot can be tricked." For instance, if OpenClaw is set up to summarize a user's email, a hacker could send a malicious email to the person instructing the AI to share copies of files on the person's computer. But Pistone is confident that safeguards can be put in place to make OpenClaw more secure. He has given a team at Valere 60 days to investigate. "If we don't think we can do it in a reasonable time, we'll forgo it," he says. "Whoever figures out how to make it secure for businesses is definitely going to have a winner."

Security

LLM-Generated Passwords Look Strong but Crack in Hours, Researchers Find (theregister.com) 84

AI security firm Irregular has found that passwords generated by major large language models -- Claude, ChatGPT and Gemini -- appear complex but follow predictable patterns that make them crackable in hours, even on decades-old hardware. When researchers prompted Anthropic's Claude Opus 4.6 fifty times in separate conversations, only 30 of the returned passwords were unique, and 18 of the duplicates were the exact same string. The estimated entropy of LLM-generated 16-character passwords came in around 20 to 27 bits, far below the 98 to 120 bits expected of truly random passwords.

Slashdot Top Deals