Operating Systems

System76 Comments On Recent Age Verification Laws (phoronix.com) 87

In a blog post on Thursday, System76 CEO Carl Richell criticized new state laws in California, Colorado, and New York that would require operating systems to verify users' ages and expose that information to apps, arguing the rules are easy for kids to bypass and ultimately undermine privacy and freedom more than they protect minors.

"System76's position is interesting given that they sell Linux-loaded desktops, workstations and laptops plus being an operating system vendor with their in-house Pop!_OS distribution and COSMIC desktop environment," adds Phoronix's Michael Larabel, noting that they're also based out of Colorado.

Here's an excerpt from the post: "A parent that creates a non-admin account on a computer, sets the age for a child account they create, and hands the computer over is in no different state. The child can install a virtual machine, create an account on the virtual machine and set the age to 18 or over. It's a similar technique to installing a VPN to get around the Great Firewall of China (just consider that for a moment). Or the child can simply re-install the OS and not tell their parents. ... In the case of Colorado's and California's bills, effectiveness is lost. In the case of New York's bill, liberty is lost. In the case of centralized platforms, potential is lost. ... The challenges we face are neither technical nor legal. The only solution is to educate our children about life with digital abundance. Throwing them into the deep end when they're 16 or 18 is too late. It's a wonderful and weird world. Yes, there are dark corners. There always will be. We have to teach our children what to do when they encounter them and we have to trust them."

"We are accustomed to adding operating system features to comply with laws," writes Richell, in closing. "Accessibility features for ADA, and power efficiency settings for Energy Star regulations are two examples. We are a part of this world and we believe in the rule of law. We still hope these laws will be recognized for the folly they are and removed from the books or found unconstitutional."

AMD

AMD Will Bring Its 'Ryzen AI' Processors To Standard Desktop PCs For First Time (arstechnica.com) 27

An anonymous reader quotes a report from Ars Technica: AMD has been selling "Ryzen AI"-branded laptop processors for around a year and a half at this point. In addition to including modern CPU and GPU architectures, these are attempting to capitalize on the generative AI craze by offering chips with neural processing units (NPUs) suitable for running language and image-generation models locally, rather than on some company's server. But so far, AMD's desktop chips have lacked both these higher-performance NPUs and the Ryzen AI label. That changes today, at least a little: AMD is announcing its first three Ryzen AI chips for desktops using its AM5 CPU socket. These Ryzen AI 400-series CPUs are direct replacements for the Ryzen 8000G processors, rather than the Ryzen 9000-series, and they combine Zen 5-based CPU cores, RDNA 3.5 GPU cores, and an NPU capable of 50 trillion operations per second (TOPS). This makes them AMD's first desktop chips to qualify for Microsoft's Copilot+ PC label, which enables a handful of unique Windows 11 features like Recall and Click to Do.

The six chips AMD is announcing today -- the 65 W Ryzen AI 7 Pro 450G, Ryzen AI 5 Pro 440G, and Ryzen AI 5 Pro 435G, along with low-power 35 W "GE" variants -- all bear AMD's "Ryzen Pro" branding as well, which means they support a handful of device management capabilities that are important for business PCs managed by IT departments. At this point, it doesn't seem as though AMD will be offering boxed versions to regular consumers; the Ryzen AI desktop chips will appear mainly in business PCs that don't need a dedicated graphics card but still benefit from more robust graphics than AMD offers in regular Ryzen desktop CPUs. Like past G-series Ryzen chips, these are essentially laptop silicon repackaged for desktop systems. They share most of their specs with Ryzen AI 300 laptop processors, despite their Ryzen AI 400-series branding. The two chip generations are extremely similar overall, but the Ryzen AI 400-series laptop CPUs include slightly faster 55 TOPS NPUs.

Cloud

Amazon's Bahrain Data Center Targeted By Iran For US Military Support (cnbc.com) 168

Iranian state media said on Wednesday that it targeted Amazon's data center in Bahrain due to the company's support of the U.S. military. The drone strike that occurred on Sunday disrupted core cloud services and caused "prolonged" outages. Two data centers in the UAE were also damaged by drone strikes. CNBC reports: All of the facilities remain offline, according to the Amazon Web Services health dashboard. The attack in Bahrain was launched "to identify the role of these centers in supporting the enemy's military and intelligence activities," Iran's Fars News Agency said on Telegram.

In addition to structural damage, the data centers also experienced power disruptions and some water damage after firefighters worked to put out sparks and fire. Some popular AWS applications experienced "elevated error rates and degraded availability" due to the incident. AWS advised cloud customers to back up their data, consider migrating their workloads to other regions and direct traffic away from Bahrain and the UAE.

Government

US Tech Firms Pledge At White House To Bear Costs of Energy For Datacenters (theguardian.com) 62

Major tech companies including Google, Microsoft, Amazon, and Meta pledged at the White House to pay for new power generation and grid upgrades needed to support their rapidly expanding datacenters. The Guardian reports: The agreement is meant to help mitigate concerns that big tech's datacenters are driving up US electricity costs for homes and small businesses at a time the administration of Donald Trump is seeking to curb inflation. "This means that the tech companies and the datacenters will be able to get the electricity they need, all without driving up electricity costs for consumers," the president said at the pledge signing event. "This is a historic win for countless American families and we'll also make our electricity grid stronger and more resilient than ever before."

The so-called "Ratepayer Protection Pledge" was first announced by Trump in his State of the Union address, and comes as communities and state legislators increase scrutiny of rapidly proliferating datacenters. Datacenters consume vast amounts of electricity to run server racks and cooling systems for the development of technologies such as artificial intelligence. "Some datacenters were rejected by communities for that, and now I think it's going to be just the opposite," Trump said, referencing cancelled or postponed projects in recent months across several states after local opposition.

The pledge includes a commitment by technology companies to bring or buy electricity supplies for their datacenters, either from new power plants or existing plants with expanded output capacity. It also includes commitments from big tech to pay for upgrades to power delivery systems and to enter special electricity rate agreements with utilities. The effort is aimed at drawing support from towns and cities that otherwise oppose the projects, said the Trump official, who spoke on the condition of anonymity.


Businesses

Jensen Huang Says Nvidia Is Pulling Back From OpenAI and Anthropic (techcrunch.com) 26

An anonymous reader quotes a report from TechCrunch: At the Morgan Stanley Technology, Media and Telecom conference in downtown San Francisco Wednesday, Nvidia CEO Jensen Huang said his company's recent investments in OpenAI and Anthropic are likely to be its last in both, saying that once they go public as anticipated later this year, the opportunity to invest closes. It could be that simple. While firms sometimes pile into companies until practically the eve of their public debut in search of more upside, Nvidia is minting money selling the chips that power both companies -- it's not like it needs to goose its returns by pouring even more money into either one.

Nvidia, for its part, isn't offering much more on the matter. Asked for comment earlier today following Huang's remarks, a spokesman pointed TechCrunch to a transcript from the company's fourth-quarter earnings call, where Huang said all of Nvidia's investments are "focused very squarely, strategically on expanding and deepening our ecosystem reach," a goal its earlier stakes in both companies have arguably met. Still, a few other dynamics might also explain the pullback, including the circular nature of these arrangements themselves. [...] Meanwhile, Nvidia's relationship with Anthropic has looked fraught in its own right. Just two months after Nvidia announced a $10 billion investment in November, Anthropic CEO Dario Amodei took the stage at Davos and, without naming Nvidia directly, compared the act of U.S. chip companies selling high-performance AI processors to approved Chinese customers to "selling nuclear weapons to North Korea." Ouch. [...]

Where that leaves Nvidia is holding stakes in two companies that, at this particular moment, are pulling in very different directions, and potentially dragging customers and partners along for the ride. Whether Huang saw any of this coming, given Nvidia's web of partnerships, is impossible to know. But his stated reason on Wednesday for likely pulling the plug on future investments -- that the IPO window closes the door on this kind of deal -- is hard to square with how late-stage private investing actually works. What's looking more probable is that this is an exit from a situation that has gotten really complicated, really fast.

Power

A Nuclear Reactor Backed By Bill Gates Gets Federal Approval To Start Building 76

An anonymous reader quotes a report from the New York Times: A novel type of nuclear power plant in Wyoming backed by Bill Gates received a key federal permit on Wednesday, making it the first new U.S. commercial reactor in nearly a decade to receive clearance to begin construction. The Nuclear Regulatory Commission, the federal body that oversees reactor safety, unanimously voted (PDF) to grant a construction permit to TerraPower, a start-up founded by Mr. Gates. TerraPower is one of several companies trying to build a new wave of smaller, advanced reactors meant to be easier to build than the large reactors of old.

The permit, which comes after years of consultations and regulatory reviews, means that TerraPower can begin pouring concrete and building the nuclear components of its proposed nuclear plant in Kemmerer, Wyo. The plant, which still faces plenty of logistical hurdles, is currently expected to come online in 2031 near an old coal-burning power plant that is slated to retire a few years later. [...] With its construction permit in hand, the company says it plans to start work on the Wyoming reactor in the coming weeks. The company had already broken ground on the site in 2024 and had begun building the nonnuclear parts of the plant, which did not require a permit.

TerraPower has already had to push back its start date several times, and it will still face hurdles in trying to avoid the snags and cost overruns that have plagued other reactor projects, as well as in securing the fuel it needs. Before coming online, the reactor will also need to secure a separate operating license from the N.R.C., which has told the company it will continue to monitor several safety issues. TerraPower plans to sell electricity from its first plant to PacifiCorp, a utility in the Northwest. The company has also agreed to supply up to eight reactors to Meta to power its data centers in the coming years.

The Internet

Computer Scientists Caution Against Internet Age-Verification Mandates (reason.com) 79

fjo3 shares a report from Reason Magazine: Effective January 1, 2027, providers of computer operating systems in California will be required to implement age verification. That's just part of a wave of state and national laws attempting to limit children's access to potentially risky content without considering the perils such laws themselves pose. Now, not a moment too soon, over 400 computer scientists have signed an open letter warning that the rush to protect children from online dangers threatens to introduce new risks including censorship, centralized power, and loss of privacy. They caution that age-verification requirements "might cause more harm than good." The group of computer scientists from around the world cautions that "those deciding which age-based controls need to exist, and those enforcing them gain a tremendous influence on what content is accessible to whom on the internet." They add that "this influence could be used to censor information and prevent users from accessing services."

"Regulating the use of VPNs, or subjecting their use to age assurance controls, will decrease the capability of users to defend their privacy online. This will not only force regular users to leave a larger footprint on the network, but will leave a number of at-risk populations unprotected, such as journalists, activists, or domestic abuse victims." It continues: "We note that we do not believe that trying to regulate VPN use for non-compliant users would be any more effective than trying to forbid the use of end-to-end encrypted communication for criminals. Secure cryptography is widely available and can no longer be put back into a box."

"If minors or adults are deplatformed via age-related bans, they are likely to migrate to find similar services," warn the scientists. "Since the main platforms would all be regulated, it is likely that they would migrate to fringe sites that escape regulation." With data on everyone collected in order to restrict the activities of minors, data abuses and privacy risks increase. "This in itself increases privacy risks, with data being potentially abused by the provider itself or its subcontractors, or third parties that get access to it, e.g., after a data breach, like the 70K users that had their government ID photos leaked after appealing age assessment errors on Discord."

Rather than mandated age restrictions, the letter urges lawmakers to consider the dangers and suggests regulating social media algorithms instead. The signatories also recommend "support for parents to locally prevent access to non-age-appropriate content or apps, without age-based control needing to be implemented by service providers."

Intel

Intel's Make-Or-Break 18A Process Node Debuts For Data Center With 288-Core Xeon 6+ CPU (tomshardware.com) 40

Intel has formally unveiled its Xeon 6+ "Clearwater Forest" data-center processor with up to 288 cores, built on the company's new Intel 18A process and using Foveros Direct packaging. The chip targets telecom, cloud, and edge-AI workloads with massive parallelism, large caches, and high-bandwidth DDR5-8000 memory. Tom's Hardware reports: Intel's Xeon 6+ processors with up to 288 cores combine 12 compute chiplets containing 24 energy-efficient Darkmont cores per tile that are produced using 18A manufacturing technology, two I/O tiles made on Intel 7 production node, as well as three active base tiles made on Intel 3 fabrication process. The compute tiles are stacked on top of the base dies using Intel's Foveros Direct 3D technology, whereas lateral connections are enabled by Intel's EMIB bridges.

Intel's 'Darkmont' efficiency cores have received rather meaningful microarchitectural upgrades. Each core integrates a 64 KB L1 instruction cache, a broader fetch and decode pipeline, and a deeper out-of-order engine capable of tracking more in-flight operations. The number of execution ports has also been increased in a bid to improve both scalar and vector throughput under heavily threaded server workloads.

From a cache hierarchy standpoint, the design groups cores into four-core blocks that share approximately 4 MB of L2 cache per block. As a result, the aggregate last-level cache across the full package surpasses 1 GB, roughly 1,152 MB in total. This unusually large pool is intended to keep data close to hundreds of active cores and reduce dependence on external memory bandwidth, which in turn is meant to both increase performance and lower power consumption. Platform-wise, the processor remains drop-in compatible with the current Xeon server socket, so the CPU has 12 memory channels that support DDR5-8000, 96 PCIe 5.0 lanes with 64 lanes supporting CXL 2.0.
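The quoted figures hang together arithmetically. A minimal sketch that just restates the article's numbers (12 compute tiles, 24 cores per tile, roughly 4 MB of L2 per four-core block; the ~1,152 MB figure the article cites is the separate last-level cache, not this L2 pool):

```python
# Back-of-the-envelope check of the Xeon 6+ "Clearwater Forest" figures above.
compute_tiles = 12
cores_per_tile = 24
total_cores = compute_tiles * cores_per_tile   # 12 x 24 = 288 cores

cores_per_block = 4                            # cores sharing one L2 slice
l2_per_block_mb = 4                            # ~4 MB of L2 per four-core block
blocks = total_cores // cores_per_block        # 72 four-core blocks
aggregate_l2_mb = blocks * l2_per_block_mb     # 288 MB of shared L2 package-wide

print(total_cores, blocks, aggregate_l2_mb)    # 288 72 288
```

So the 288-core count follows directly from the tile layout, and the shared L2 alone totals 288 MB before the 1 GB-plus last-level cache is counted.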

The Internet

Qualcomm CEO: 'Resistance Is Futile' As 6G Mobile Revolution Approaches (fortune.com) 107

At Mobile World Congress, Cristiano Amon of Qualcomm argued that the coming 6G networks will power an AI-driven "agent economy," where devices and AI assistants constantly communicate across the network. "AI will fundamentally change our mobile experiences," Amon says. "It's going to change how we think about our smartphones. Think about our personal computing. Think about and interact with a car. The car is now a computing surface. If you actually believe in the AI revolution, 6G will be required. Resistance is futile." The company says early consumer testing could begin around the 2028 Los Angeles Olympics, with broader rollouts expected by 2029. Fortune's Kamal Ahmed reports: Akash Palkhiwala is Qualcomm's chief financial officer and chief operating officer. I spent some time with him at the company's stand, as his leading engineers took me through a 6G future where individuals will have real-time information delivered to them via their glasses. Palkhiwala compliments me on my watch, which only does one thing. It tells me the time. "6G is going to be the first time that connectivity and AI come together in the network. What we're building is the first AI-native wireless network that's ever been built," he explains.

"The traffic that we expect on 6G is way different than what we had before," says Palkhiwala. "Before, it was all about consumer traffic. We expect 6G to be driven by [AI] agent traffic. Think about all these use cases where there are AI agents sitting on various devices -- your glasses, your watch, your phone, your PC. These agents are going to be talking back and forth across the network to other agents and services. "The traffic completely changes. 6G is being built with this idea that the traffic that goes on the network is not just going to be consumer voice calls or downloading videos, we're going to have agents talking to each other, so the reliability of the network becomes very important."

On-device capabilities (the ability of your phone to process far more data); edge computing (locally sourced IT technology rather than distant data centers); more efficient use of available bandwidth (AI-enabled load control); and greater cloud access will all come together to produce a new wireless network. [...] "Today we are in the application economy," he notes. "On the phone, you want to make a travel reservation, you go to one application. You want to order an Uber, you go to a second application. You want to order food, you go to a third application, movie tickets, etc. The user has to go through that effort. In the future, you think of the app economy moving over to an agent economy, where there's one agent I'm interacting with, and I can ask that agent to book me a movie ticket or a plane ticket, to order food for me, get an Uber for me. It knows everything about me."

Cloud

Amazon Cloud Unit's Data Centers In UAE, Bahrain Damaged In Drone Strikes (reuters.com) 55

sizzlinkitty shares a Reuters report detailing how drone strikes in the Middle East conflict with Iran damaged AWS data centers in the UAE and Bahrain, disrupting core cloud services and causing "prolonged" outages. Following the initial report, where Reuters said "objects" had triggered a fire at the data centers, the article was updated with additional information: A strike on the UAE facility marks the first time a major U.S. tech company's data center has been disrupted by military action. It raises questions around Big Tech's pace of expansion in the region. "In the UAE, two of our facilities were directly struck, while in Bahrain, a drone strike in close proximity to one of our facilities caused physical impact to our infrastructure," Amazon's cloud unit Amazon Web Services (AWS) said in an update on its status page. "These strikes have caused structural damage, disrupted power delivery to our infrastructure, and in some cases required fire suppression activities that resulted in additional water damage," AWS said. "We are working to restore full service availability as quickly as possible, though we expect recovery to be prolonged given the nature of the physical damage involved," it added.

Financial institutions that use AWS services have been affected by the outage, one person with direct knowledge of the situation told Reuters, requesting anonymity because of the sensitivity of the matter. "Even as we work to restore these facilities, the ongoing conflict in the region means that the broader operating environment in the Middle East remains unpredictable," AWS said. The AWS outage disrupted a dozen core cloud services and the company advised customers to back up critical data and shift operations to servers in unaffected AWS regions. Abu Dhabi Commercial Bank said its platforms and mobile app were unavailable due to a region-wide IT disruption, although it did not directly link the outage to the AWS incident.

"In previous conflicts, regional adversaries such as Iran and its proxies targeted pipelines, refineries, and oil fields in Gulf partner states. In the compute era, these actors could also target data centers, energy infrastructure supporting compute, and fiber chokepoints," Washington-based think tank Center for Strategic and International Studies said last week.

United States

Iowa County Rolls Out Extensive Zoning Rules For Data Centers (insideclimatenews.org) 38

Linn County, Iowa has adopted what may be one of the nation's strictest local zoning ordinances for data centers, requiring detailed water studies, formal water-use agreements, 1,000-foot residential setbacks, noise and light limits, and infrastructure compensation. "But seated beneath a van-sized American flag hanging from the rafters of the drafty Palo Community Center gymnasium, residents asked for even stronger protections," reports Inside Climate News. "One by one, they approached the microphone at the front of the gym to voice concerns about water use, electricity rates, light pollution, the impacts of low-frequency noise on livestock, and the county's ability to enforce the terms of the ordinance. Some, including Dorothy Landt of Palo, called for a complete moratorium on new data center development."

Landt asked: "Why has Linn County, Iowa, become a dumping ground for soon-to-be obsolete technology that spoils our landscape and robs us of our resources? While I admire the efforts of the Board of Supervisors to propose a data center ordinance, I would prefer to see all future data centers banned from Linn County." From the report: The county is already home to two major data center projects, operated by Google and QTS. Both are located in Cedar Rapids, Iowa's second-largest city, and are therefore subject to its laws. The new ordinance would apply only to unincorporated areas of the county, which make up more than two-thirds of its geographic footprint. [...] In drafting the ordinance, [Charlie Nichols, director of planning and development for Linn County] and his staff drew on the experiences of communities nationwide, meeting with local government officials in regions that have seen massive booms in data center development, including several counties in northern Virginia, the "data center capital of the world."

As data center development balloons, many communities that initially zoned the operations as warehouses or standard commercial users are abandoning that practice, Nichols noted. The extreme energy and water demands of data centers simply cannot be accounted for by existing zoning frameworks, he said. "These are generational uses with generational infrastructure impacts, and treating them as a normal warehouse or normal commercial user is just not working." [...] The Linn County, Iowa, ordinance goes one step further than tightening existing zoning rules. Instead, it creates a new, exclusive-use zoning district for data centers, granting county officials the power to set specific application requirements and development standards for projects. No other counties in the state have introduced similar zoning requirements, said Nichols. In fact, few jurisdictions nationwide have. [...]

From its first reading to final adoption, the ordinance has expanded to include language setting light pollution standards, requiring a waste management plan, including the Iowa DNR in the water-use agreement to address potential well interference issues and requiring an applicant-led public meeting before any zoning commission meetings. "I am very confident that no ordinance for data centers in Iowa is asking for more information or asking for more requirements to be met than our ordinance right now," said Nichols at the final reading. The Cedar Rapids Metro Economic Alliance has said that it strongly supports current and future data center development in the area. The new ordinance is not an effective moratorium, Nichols said. He said he "strongly believes" that a data center can be built within the adopted framework.

AI

Apple Might Use Google Servers To Store Data For Its Upgraded AI Siri 21

Apple has reportedly asked Google to look into "setting up servers" for a Gemini-powered upgrade to Siri that meets Apple's privacy standards. The Verge reports: Apple had already announced in January that Google's Gemini AI models would help power the upgraded version of Siri it delayed last year, but The Information's report indicates Apple might lean even more on Google so it can catch up in AI.

The original partnership announcement said that "the next generation of Apple Foundation Models will be based on Google's Gemini models and cloud technology," and that the models would "help power future Apple Intelligence features," including "a more personalized Siri." While the announcement noted that Apple Intelligence would "continue to run on Apple devices and Private Cloud Compute," it didn't specify if the new Siri would run on Google's cloud.

Apple's Private Cloud Compute is not only underpowered but it's also underutilized in its current state, notes 9to5Mac, "with the company only using about 10% of its capacity on average, leading to some already-manufactured Apple servers to be sitting dormant on warehouse shelves."

Power

Japan To Ban In-Flight Use of Power Banks (asahi.com) 48

Japan will effectively ban the in-flight use of power banks starting in mid-April after a "recent series of alarming incidents," reports the Asahi Shimbun. From the report: Currently, mobile batteries in Japan are classified as "spare batteries" and are prohibited in checked luggage. For carry-on bags, those exceeding 160 watt-hours are banned, while passengers are limited to two units for those over 100 watt-hours. There is no quantity limit for batteries of 100 watt-hours or less. The new rule will limit passengers to a total of two spare batteries, including power banks.

While there is no limit on the number of spare batteries below 100 watt-hours, carrying power banks exceeding 160 watt-hours will remain prohibited. Power banks will be capped at two units regardless of power capacity. Additionally, charging them on board will be prohibited, and it will be "recommended" that passengers not use them at all. As a result, domestic airlines are expected to require passengers to stop using power banks, cementing the effective ban on in-flight use.
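The tiers above are watt-hour ratings, while consumer power banks are usually sold by milliamp-hour capacity. A minimal conversion sketch (the 3.7 V nominal lithium-ion cell voltage and the example capacities are typical assumptions, not figures from the report):

```python
def watt_hours(capacity_mah: float, nominal_voltage_v: float = 3.7) -> float:
    """Convert a battery's mAh rating to watt-hours: Wh = mAh x V / 1000."""
    return capacity_mah * nominal_voltage_v / 1000.0

# A typical 20,000 mAh power bank at 3.7 V nominal cell voltage:
print(watt_hours(20_000))  # 74.0 -- falls in the under-100 Wh tier
# Roughly 43,300 mAh at 3.7 V would cross the 160 Wh carry-on ceiling.
```

Many banks also print the watt-hour figure directly on the casing; when only mAh is listed, a conversion like this determines which tier applies.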

Open Source

Norway's Consumer Council Calls for Right to Repair and Antitrust Enforcement - and Mocks 'Enshittification' (forbrukerradet.no) 69

The Norwegian Consumer Council, a government-funded organization advocating for consumers' rights, released a report on the trend of "enshittification" in digital consumer goods and services, suggesting ways for consumers to resist. But they've also dramatized the problem with a funny four-minute video about a man whose boss calls for him to make things shitty for people.

"It's not just your imagination. Digital services are getting worse," the video concludes — before adding that "Luckily, it doesn't have to be this way." The Consumer Council's announcement recommends:
  • Stronger rights for consumers to control, adapt, repair, and alter their products and services,
  • Interoperability, data portability, and decentralisation as the norm, so the threshold for moving to different services becomes as low as possible,
  • Deterrent and vigorous enforcement of competition law, so that Big Tech companies are not allowed to indiscriminately acquire start-ups, competitors or otherwise steer the market to their advantage,
  • Better financing of initiatives to build, maintain or improve alternative digital services and infrastructure based on open source code and open protocols,
  • Reduced public sector dependence on big tech, to regain control and to contribute to a functioning market for service providers that respect fundamental rights,
  • Deterrent and consistent enforcement of other laws, including consumer and data protection law.

The Norwegian Consumer Council is also joining 58 organisations and experts in a letter asking the Norwegian government to rebalance power by committing enforcement resources and by prioritizing the procurement of services based on open source code. And "Our sister organisations are sending similar letters to their own governments in 12 countries."

They're also sending a second letter to the European Commission with 29 civil society organisations (including the EFF and Amnesty International) warning about the risks of deregulation and calling for reducing dependency on big tech.

Thanks to Slashdot reader DeanonymizedCoward for sharing the news.


The Military

America Used Anthropic's AI for Its Attack On Iran, One Day After Banning It (engadget.com) 64

Engadget reports: In a lengthy post on Truth Social on February 27, President Trump ordered all federal agencies to "immediately cease all use of Anthropic's technology" following strong disagreements between the Department of Defense and the AI company. A few hours later, the U.S. conducted a major air attack on Iran with the help of Anthropic's AI tools, according to a report from The Wall Street Journal.

Even Trump's post noted there would be a six-month phase-out for Anthropic's technology (adding that Anthropic "better get their act together, and be helpful during this phase out period, or I will use the Full Power of the Presidency to make them comply, with major civil and criminal consequences to follow.")

Anthropic's Claude technology was also used by the U.S. military less than two months ago in its operation in Venezuela, reportedly the first known use of an AI developer's technology in a classified U.S. War Department operation. The Wall Street Journal reported Anthropic's technology found its way into the mission through Anthropic's contract with Palantir.

The Military

Sam Altman Answers Questions on X.com About Pentagon Deal, Threats to Anthropic (x.com) 42

Saturday afternoon Sam Altman announced he'd start answering questions on X.com about OpenAI's work with America's Department of War — and all the developments over the past few days. (After that department's negotiations had failed with Anthropic, they announced they'd stop using Anthropic's technology and threatened to designate it a "Supply-Chain Risk to National Security". Then they'd reached a deal for OpenAI's technology — though Altman says it includes OpenAI's own similar prohibitions against using its products for domestic mass surveillance and its requirement of "human responsibility" for the use of force in autonomous weapon systems.)

Altman said Saturday that enforcing that "Supply-Chain Risk" designation on Anthropic "would be very bad for our industry and our country, and obviously their company. We said [that] to the Department of War before and after. We said that part of the reason we were willing to do this quickly was in the hopes of de-escalation.... We should all care very much about the precedent... To say it very clearly: I think this is a very bad decision from the Department of War and I hope they reverse it. If we take heat for strongly criticizing it, so be it."

Altman also said that for a long time, OpenAI was planning to do "non-classified work only," but this week found the Department of War "flexible on what we needed..." Sam Altman: The reason for rushing is an attempt to de-escalate the situation. I think the current path things are on is dangerous for Anthropic, healthy competition, and the U.S. We negotiated to make sure similar terms would be offered to all other AI labs.

I know what it's like to feel backed into a corner, and I think it's worth some empathy to the Department of War. They are... a very dedicated group of people with, as I mentioned, an extremely important mission. I cannot imagine doing their work. Our industry tells them "The technology we are building is going to be the high order bit in geopolitical conflict. China is rushing ahead. You are very behind." And then we say "But we won't help you, and we think you are kind of evil." I don't think I'd react great in that situation. I do not believe unelected leaders of private companies should have as much power as our democratically elected government. But I do think we need to help them.

Question: Are you worried at all about the potential for things to go really south during a possible dispute over what's legal or not later on and be deemed a supply chain risk...?

Sam Altman: Yes, I am. If we have to take on that fight we will, but it clearly exposes us to some risk. I am still very hopeful this is going to get resolved, and part of why we wanted to act fast was to help increase the chances of that...

Question: Why the rush to sign the deal? Obviously the optics don't look great.

Sam Altman: It was definitely rushed, and the optics don't look good. We really wanted to de-escalate things, and we thought the deal on offer was good.

If we are right and this does lead to a de-escalation between the Department of War and the industry, we will look like geniuses, and a company that took on a lot of pain to do things to help the industry. If not, we will continue to be characterized as rushed and uncareful. I don't know where it's going to land, but I have already seen promising signs. I think a good relationship between the government and the companies developing this technology is critical over the next couple of years...

Question: What was the core difference why you think the Department of War accepted OpenAI but not Anthropic?

Sam Altman: [...] We believe in a layered approach to safety — building a safety stack, deploying FDEs [embedded Forward Deployed Engineers] and having our safety and alignment researchers involved, deploying via cloud, working directly with the Department of War. Anthropic seemed more focused on specific prohibitions in the contract, rather than citing applicable laws, which we felt comfortable with. We feel that it's very important to build safe systems, and although documents are also important, I'd clearly rather rely on technical safeguards if I had to pick only one...

I think Anthropic may have wanted more operational control than we did...

Question: Were the terms that you accepted the same ones Anthropic rejected?

Sam Altman: No, we had some different ones. But our terms would now be available to them (and others) if they wanted.

Question: Will you turn off the tool if they violate the rules?

Sam Altman: Yes, we will turn it off in that very unlikely event, but we believe the U.S. government is an institution that does its best to follow law and policy. What we won't do is turn it off because we disagree with a particular (legal military) decision. We trust their authority.

Questions were also answered by OpenAI's head of National Security Partnerships (who at one point posted that they'd managed the White House response to the Snowden disclosures and helped write the post-Snowden policies constraining surveillance during the Obama years.) And they stressed that with OpenAI's deal with Department of War, "We control how we train the models and what types of requests the models refuse." Question: Are employees allowed to opt out of working on Department of War-related projects?

Answer: We won't ask employees to support Department of War-related projects if they don't want to.

Question: How much is the deal worth?

Answer: It's a few million $, completely inconsequential compared to our $20B+ in revenue, and definitely not worth the cost of a PR blowup. We're doing it because it's the right thing to do for the country, at great cost to ourselves, not because of revenue impact...

Question: Can you explicitly state which specific technical safeguard OpenAI has that allowed you to sign what Anthropic called a 'threat to democratic values'?

Answer: We think the deal we made has more guardrails than any previous agreement for classified AI deployments, including Anthropic's. Other AI labs (including Anthropic) have reduced or removed their safety guardrails and relied on usage policies as their primary safeguards in national security deployments. Usage policies, on their own, are not a guarantee of anything. Any responsible deployment of AI in classified environments should involve layered safeguards including a prudent safety stack, limits on deployment architecture, and the direct involvement of AI experts in consequential AI use cases. These are the terms we negotiated in our contract.

They also detailed OpenAI's position on LinkedIn: Deployment architecture matters more than contract language. Our contract limits our deployment to cloud API. Autonomous systems require inference at the edge. By limiting our deployment to cloud API, we can ensure that our models cannot be integrated directly into weapons systems, sensors, or other operational hardware...

Instead of hoping contract language will be enough, our contract allows us to embed forward deployed engineers, commits to giving us visibility into how models are being used, and gives us the ability to iterate on safety safeguards over time. If our team sees that our models aren't refusing queries they should refuse, or there's more operational risk than we expected, our contract allows us to make modifications at our discretion. This gives us far more influence over outcomes (and insight into possible abuse) than a static contract provision ever could.

U.S. law already constrains the worst outcomes. We accepted the "all lawful uses" language proposed by the Department, but required them to define the laws that constrained them on surveillance and autonomy directly in the contract. And because laws can change, having this codified in the contract protects against changes in law or policy that we can't anticipate.

AI

US Threatens Anthropic with 'Supply-Chain Risk' Designation. OpenAI Signs New War Department Deal (anthropic.com) 51

It started Friday when all U.S. federal agencies were ordered to "immediately cease" using Anthropic's AI technology after contract negotiations stalled when Anthropic requested prohibitions against mass domestic surveillance or fully autonomous weapons. But later Friday there were even more repercussions...

In a post to his 1.1 million followers on X.com, U.S. Secretary of War Pete Hegseth criticized Anthropic for what he called "a master class in arrogance and betrayal as well as a textbook case of how not to do business with the United States Government or the Pentagon." Our position has never wavered and will never waver: the Department of War must have full, unrestricted access to Anthropic's models for every LAWFUL purpose in defense of the Republic... Cloaked in the sanctimonious rhetoric of "effective altruism," [Anthropic and CEO Dario Amodei] have attempted to strong-arm the United States military into submission — a cowardly act of corporate virtue-signaling that places Silicon Valley ideology above American lives. The Terms of Service of Anthropic's defective altruism will never outweigh the safety, the readiness, or the lives of American troops on the battlefield. Their true objective is unmistakable: to seize veto power over the operational decisions of the United States military. That is unacceptable...

In conjunction with the President's directive for the Federal Government to cease all use of Anthropic's technology, I am directing the Department of War to designate Anthropic a Supply-Chain Risk to National Security. Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic... America's warfighters will never be held hostage by the ideological whims of Big Tech. This decision is final.

Meanwhile, Anthropic said on Friday that "no amount of intimidation or punishment from the Department of War will change our position." (And "We will challenge any supply chain risk designation in court.") Designating Anthropic as a supply chain risk would be an unprecedented action — one historically reserved for US adversaries, never before publicly applied to an American company. We are deeply saddened by these developments. As the first frontier AI company to deploy models in the US government's classified networks, Anthropic has supported American warfighters since June 2024 and has every intention of continuing to do so. We believe this designation would both be legally unsound and set a dangerous precedent for any American company that negotiates with the government... Secretary Hegseth has implied this designation would restrict anyone who does business with the military from doing business with Anthropic. The Secretary does not have the statutory authority to back up this statement.
Anthropic also defended the two exceptions they'd requested that had stalled contract negotiations. "[W]e do not believe that today's frontier AI models are reliable enough to be used in fully autonomous weapons. Allowing current models to be used in this way would endanger America's warfighters and civilians. Second, we believe that mass domestic surveillance of Americans constitutes a violation of fundamental rights."

Also Friday, OpenAI announced that "we reached an agreement with the Department of War to deploy our models in their classified network." OpenAI CEO Sam Altman emphasized that the agreement retains and confirms OpenAI's own prohibitions against using their products for domestic mass surveillance — and requires "human responsibility" for the use of force including for autonomous weapon systems. "The Department of War agrees with these principles, reflects them in law and policy, and we put them into our agreement. We also will build technical safeguards to ensure our models behave as they should, which the Department of War also wanted." We are asking the Department of War to offer these same terms to all AI companies, which we think everyone should be willing to accept. We have expressed our strong desire to see things de-escalate away from legal and governmental actions and towards reasonable agreements. We remain committed to serve all of humanity as best we can. The world is a complicated, messy, and sometimes dangerous place.
Space

Startup Plans April Launch for a Satellite to Reflect Sunlight to Earth at Night (msn.com) 53

A start-up called Reflect Orbital "proposes to use large, mirrored satellites to redirect sunlight to Earth at night," reports the Washington Post, "with plans to bathe solar farms, industrial sites and even entire cities in light that could, if desired, reach the intensity of daylight...."

Slashdot noted their idea in 2022 — but Reflect Orbital now expects to launch its first satellite in April, according to the article. "But its grand vision is largely 'aspirational,' as its young founder, Ben Nowack, told me..." Reflect Orbital's Nowack describes a scene right out of sci-fi: An extremely bright star appears on the northern horizon and makes its way across the sky, illuminating a 5-kilometer circle on Earth, then setting on the southern horizon about five minutes later, just as another such "star" appears in the north. To make the night even brighter, a customer could make 10 "stars" appear at once in the north by ordering them on an app. Two such artificial stars are in development in Reflect Orbital's factory. Nowack showed them to me on a Zoom call. The first to launch is 50 feet across, but he plans later to build them three times that size. If all goes according to plan, he'll have 50,000 of them circling the Earth in 2035 at an altitude of around 400 miles.

Nowack plans to start selling the service "in mostly developing nations or places that don't have streetlights yet." Eventually, he thinks, he can illuminate major cities, turn solar fields and farms into round-the-clock operations for any business or municipality that pays for it. He likened his technology to the invention of crop irrigation thousands of years ago. "I see this as much the same thing," he said, arguing that people would no longer have to "wait for the sun to shine."

The article adds that Elon Musk's SpaceX "wants to launch as many as a million satellites to serve as orbiting data centers — 70 times the number of satellites now in orbit." (America's satellite regulator, the Federal Communications Commission, grants a "categorical exclusion" from environmental review to satellites on the grounds that their operations "normally do not have significant effects on the human environment.")

The public comment periods for the two proposals close on March 6 and March 9.
AI

Perplexity Announces 'Computer,' an AI Agent That Assigns Work To Other AI Agents (arstechnica.com) 16

joshuark shares a report from Ars Technica: Perplexity has introduced "Computer," a new tool that allows users to assign tasks and see them carried out by a system that coordinates multiple agents running various models. The company claims that Computer, currently available to Perplexity Max subscribers, is "a system that creates and executes entire workflows" and "capable of running for hours or even months."

The idea is that the user describes a specific outcome -- something like "plan and execute a local digital marketing campaign for my restaurant" or "build me an Android app that helps me do a specific kind of research for my job." Computer then ideates subtasks and assigns them to multiple agents as needed, running the models Perplexity deems best for those tasks. The core reasoning engine currently runs Anthropic's Claude Opus 4.6, while Gemini is used for deep research, Nano Banana for image generation, Veo 3.1 for video production, Grok for lightweight tasks where speed is a consideration, and ChatGPT 5.2 for "long-context recall and wide search."

This kind of best-model-for-the-task approach differs from some competing products like Claude Cowork, which only uses Anthropic's models. All this happens in the cloud, with prebuilt integrations. "Every task runs in an isolated compute environment with access to a real filesystem, a real browser, and real tool integrations," Perplexity says. The idea is partly that this workflow was what some power users were already doing, and this aims to make that possible for a wider range of people who don't want to deal with all that setup.

People were already using multiple models and tailoring them to specific tasks based on perceived capabilities, while, for example, using MCP (Model Context Protocol) to give those models access to data and applications on their local machines. Perplexity Computer takes a different approach, but the goal is the same: have AI agents running tailor-picked models to perform tasks involving your own files, services, and applications. Then there is OpenClaw, which you could perceive as the immediate predecessor to this concept.

The Courts

New York Sues Valve For Enabling 'Illegal Gambling' With Loot Boxes (arstechnica.com) 79

New York state has filed a lawsuit against Valve alleging that randomized loot boxes in games like Counter-Strike 2, Team Fortress 2, and Dota 2 amount to a form of unregulated gambling, letting users "pay for the chance to win a rare virtual item of significant monetary value." From a report: While many randomized video game loot boxes have drawn attention and regulation from various government bodies in recent years, the New York suit calls out Valve's system specifically for "enabl[ing] users to sell the virtual items they have won, either through its own virtual marketplace, the Steam Community Market, or through third-party marketplaces."

The vast majority of Valve's in-game loot boxes contain skins that can only be resold for a few cents, the suit notes, while the rarest skins can be worth thousands of dollars through marketplaces on and off of Steam. That fits the statutory definition of gambling as "charging an individual for a chance to win something of value based on luck alone," according to the suit.

The Steam Wallet funds that users get through directly reselling skins "have the equivalent purchasing power on the Steam platform as cash," the suit notes. But if a user wants to convert those Steam funds to real cash, they can do so relatively easily by purchasing a Steam Deck and reselling it to any interested party, as an investigator did while preparing the lawsuit.
