Programming

Cloudflare Raves About Performance Gains After Rust Rewrite (cloudflare.com) 53

"We've spent the last year rebuilding major components of our system," Cloudflare announced this week, "and we've just slashed the latency of traffic passing through our network for millions of our customers." (There's a 10 ms cut in the median time to respond, plus a 25% performance boost as measured by CDN performance tests.) They replaced a 15-year-old system named FL (which runs Cloudflare's security and performance features), adding: "At the same time, we've made our system more secure, and we've reduced the time it takes for us to build and release new products."

And yes, Rust was involved: We write a lot of Rust, and we've gotten pretty good at it... We built FL2 in Rust, on Oxy [Cloudflare's Rust-based next generation proxy framework], and built a strict module framework to structure all the logic in FL2... Built in Rust, [Oxy] eliminates entire classes of bugs that plagued our Nginx/LuaJIT-based FL1, like memory safety issues and data races, while delivering C-level performance. At Cloudflare's scale, those guarantees aren't nice-to-haves, they're essential. Every microsecond saved per request translates into tangible improvements in user experience, and every crash or edge case avoided keeps the Internet running smoothly. Rust's strict compile-time guarantees also pair perfectly with FL2's modular architecture, where we enforce clear contracts between product modules and their inputs and outputs...

Rebuilding product logic in Rust is a big enough distraction from shipping products to customers. Asking all our teams to maintain two versions of their product logic, and to reimplement every change a second time until we finished our migration, was too much. So we implemented a layer in our old NGINX- and OpenResty-based FL which allowed the new modules to run. Instead of maintaining a parallel implementation, teams could implement their logic in Rust and replace their old Lua logic with it, without waiting for the full replacement of the old system.

Over 100 engineers worked on FL2 — and there was extensive testing, plus a fallback-to-FL1 procedure. But "We started running customer traffic through FL2 early in 2025, and have been progressively increasing the amount of traffic served throughout the year...." As we described at the start of this post, FL2 is substantially faster than FL1. The biggest reason for this is simply that FL2 performs less work [thanks to filters controlling whether modules need to run]... Another huge reason for better performance is that FL2 is a single codebase, implemented in a performance-focussed language. In comparison, FL1 was based on NGINX (which is written in C), combined with LuaJIT (Lua, and C interface layers), and also contained plenty of Rust modules. In FL1, we spent a lot of time and memory converting data from the representation needed by one language to the representation needed by another. As a result, our internal measures show that FL2 uses less than half the CPU of FL1, and much less than half the memory. That's a huge bonus — we can spend the CPU on delivering more and more features for our customers!
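The "filters controlling whether modules need to run" idea can be sketched roughly as follows. This is a hypothetical Python illustration of the pattern only (FL2 itself is Rust, and all names here are invented): each module declares a cheap predicate, and the pipeline skips any module whose predicate rejects the request, so work is only done where a product feature actually applies.

```python
# Hypothetical sketch of a filtered module pipeline (not Cloudflare's API).
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Request:
    host: str
    path: str
    headers: dict = field(default_factory=dict)

@dataclass
class Module:
    name: str
    filter: Callable[[Request], bool]   # cheap predicate: should this module run?
    run: Callable[[Request], None]      # the (expensive) product logic

def process(request: Request, modules: list) -> list:
    """Run only the modules whose filters match; return the names that ran."""
    ran = []
    for m in modules:
        if m.filter(request):           # skipping work entirely is the speedup
            m.run(request)
            ran.append(m.name)
    return ran

waf = Module("waf", lambda r: True, lambda r: None)   # always applies
bot_check = Module("bot_check",
                   lambda r: "bot" in r.headers.get("user-agent", ""),
                   lambda r: None)                    # applies only to bot-like UAs

req = Request("example.com", "/", {"user-agent": "curl"})
print(process(req, [waf, bot_check]))   # only "waf" runs for this request
```

The strict contracts the post mentions would live in the typed inputs and outputs of each module's `run`; here they are collapsed to a single `Request` for brevity.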

Using our own tools and independent benchmarks like CDNPerf, we measured the impact of FL2 as we rolled it out across the network. The results are clear: websites are responding 10 ms faster at the median, a 25% performance boost. FL2 is also more secure by design than FL1. No software system is perfect, but the Rust language brings us huge benefits over LuaJIT. Rust has strong compile-time memory checks and a type system that avoids large classes of errors. Combine that with our rigid module system, and we can make most changes with high confidence...

We have long followed a policy that any unexplained crash of our systems needs to be investigated as a high priority. We won't be relaxing that policy, though the main cause of novel crashes in FL2 so far has been due to hardware failure. The massively reduced rates of such crashes will give us time to do a good job of such investigations. We're spending the rest of 2025 completing the migration from FL1 to FL2, and will turn off FL1 in early 2026. We're already seeing the benefits in terms of customer performance and speed of development, and we're looking forward to giving these to all our customers.

After that, when everything is modular, in Rust and tested and scaled, we can really start to optimize...!

Thanks to long-time Slashdot reader Beeftopia for sharing the article.
Robotics

Researchers Consider The Advantages of 'Swarm Robotics' (msn.com) 30

The Wall Street Journal looks at swarm robotics, where no single robot is in charge, robots interact only with nearby robots — and the swarm accomplishes complex tasks through simple interactions.

"Researchers say this approach could excel where traditional robots fail, like situations where central control is impractical or impossible due to distance, scale or communication barriers." For instance, a swarm of drones might one day monitor vast areas to detect early-stage wildfires that current monitoring systems sometimes miss... A human operator might set parameters like where to search, but the drones would independently share information like which areas have been searched, adjust search patterns based on wind and other weather data from other drones in the swarm, and converge for more complete coverage of a particular area when one detects smoke. In another potential application, a swarm of robots could make deliveries across wide areas more efficient by alerting each other to changing traffic conditions or redistributing packages among themselves if one breaks down. Robot swarms could also manage agricultural operations in places without reliable internet service. And disaster-response teams see potential for swarms in hurricane and tsunami zones where communication infrastructure has been destroyed.
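The core idea, global behavior emerging from purely local rules, can be shown with a toy consensus sketch. This is illustrative only (the update rule and sensor values are invented): each robot repeatedly averages its estimate with its immediate neighbors', no robot is in charge, and yet the whole swarm converges on the global average.

```python
# Toy decentralized consensus on a ring of robots (illustrative only).
def step(values):
    """One round: every robot averages its value with its two neighbors'."""
    n = len(values)
    out = []
    for i in range(n):
        local = [values[i], values[(i - 1) % n], values[(i + 1) % n]]
        out.append(sum(local) / 3)       # purely local rule, no central controller
    return out

readings = [0.0, 10.0, 20.0, 30.0]       # e.g. per-drone smoke-sensor readings
for _ in range(100):
    readings = step(readings)

print(round(readings[0], 3))             # every robot ends near the mean, 15.0
```

Because each round only uses neighbor-to-neighbor communication, the same scheme keeps working when long-range links or central infrastructure are unavailable, which is the scenario the researchers describe.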

At the microscopic scale, researchers are developing tiny robots that could work together to navigate the human body to deliver medication or clear blockages without surgery... In recent demonstrations, teams of tiny magnetic robots — each about the size of a grain of sand — cleared blockages in artificial blood vessels by forming chains to push through the obstructions. The robots navigate individually through blood vessels to reach a clog, guided by doctors or technicians using magnetic fields to steer them, says researcher J.J. Wie, a professor of organic and nano engineering at Hanyang University in South Korea. When they reach an obstruction, the robots coordinate with each other to team up and break through. Wie's group is developing versions of these robots that biodegrade after use, eliminating the need for surgical removal, and coatings that make the robots compatible with human tissue. And while robots the size of sand grains work for some applications, Wie says that they will need to be shrunk to nano scale to cross biological barriers, such as cell membranes, or bind to specific molecular targets, like surface proteins or receptors on cancer cells.

Some researchers are even exploring emergent intelligence — "when simple machines, following only a few local cues, begin to organize and act as if they share a mind... beyond human-designed coordination."

Thanks to long-time Slashdot reader fjo3 for sharing the article.
Security

FCC To Rescind Ruling That Said ISPs Are Required To Secure Their Networks (arstechnica.com) 47

The FCC plans to repeal a Biden-era ruling that required ISPs to secure their networks under the Communications Assistance for Law Enforcement Act, instead relying on voluntary cybersecurity commitments from telecom providers. FCC Chairman Brendan Carr said the ruling "exceeded the agency's authority and did not present an effective or agile response to the relevant cybersecurity threats." Carr said the vote scheduled for November 20 comes after "extensive FCC engagement with carriers" who have taken "substantial steps... to strengthen their cybersecurity defenses." Ars Technica reports: The FCC's January 2025 declaratory ruling came in response to attacks by China, including the Salt Typhoon infiltration of major telecom providers such as Verizon and AT&T. The Biden-era FCC found that the Communications Assistance for Law Enforcement Act (CALEA), a 1994 law, "affirmatively requires telecommunications carriers to secure their networks from unlawful access or interception of communications."

"The Commission has previously found that section 105 of CALEA creates an affirmative obligation for a telecommunications carrier to avoid the risk that suppliers of untrusted equipment will 'illegally activate interceptions or other forms of surveillance within the carrier's switching premises without its knowledge,'" the January order said. "With this Declaratory Ruling, we clarify that telecommunications carriers' duties under section 105 of CALEA extend not only to the equipment they choose to use in their networks, but also to how they manage their networks."
A draft of the order that will be voted on in November can be found here (PDF).
Privacy

Denmark Reportedly Withdraws 'Chat Control' Proposal Following Controversy (therecord.media) 28

An anonymous reader quotes a report from The Record: Denmark's justice minister on Thursday said he will no longer push for an EU law requiring the mandatory scanning of electronic messages, including on end-to-end encrypted platforms. Earlier in its European Council presidency, Denmark had brought back a draft law which would have required the scanning, sparking an intense backlash. Known as Chat Control, the measure was intended to crack down on the trafficking of child sex abuse materials (CSAM). After days of silence, the German government on October 8 announced it would not support the proposal, tanking the Danish effort.

Danish Justice Minister Peter Hummelgaard told reporters on Thursday that his office will support voluntary CSAM detection. "This will mean that the search warrant will not be part of the EU presidency's new compromise proposal, and that it will continue to be voluntary for the tech giants to search for child sexual abuse material," Hummelgaard said, according to local news reports. The current model allowing for voluntary scanning expires in April, Hummelgaard said. "Right now we are in a situation where we risk completely losing a central tool in the fight against sexual abuse of children," he said. "That's why we have to act no matter what. We owe it to all the children who are subjected to monstrous abuse."

Communications

FCC's Gomez Slams Move To Revise Broadband Labels as 'Anti-Consumer' (lightreading.com) 21

An anonymous reader shares a report: The FCC adopted a notice of proposed rulemaking (NPRM) to rescind and revise certain rules attached to consumer broadband labels. The measure passed on a two-to-one vote, with Commissioner Anna Gomez, the lone Democrat on the FCC, voting no and calling the notice "one of the most anti-consumer items I have seen."

The vote was held at the Commission's open meeting for the month of October. As per a draft notice circulated earlier this month, the FCC is looking to roll back several rules, including requirements that service providers read the label to consumers via phone, itemize state and local pass-through fees, and display labels in consumer account portals, among others. Advocates at Public Knowledge urged the Commission to reconsider, saying in a recent filing that "the Commission could create a permission structure for ISPs to continue to act without accountability."

In her remarks during Tuesday's open meeting, Commissioner Gomez appeared to concur, depicting the move as "anti-consumer" and counter to the goals of Congress. The FCC was mandated via the 2021 Infrastructure Investment and Jobs Act (IIJA) to create rules for implementing consumer broadband labels. After a lengthy rulemaking process and discussions with industry and consumer groups, ISPs were required to start displaying labels in 2024.

"I typically vote in favor of notices of proposed rulemaking because I believe in asking balanced questions, even on proposals that I dislike, so that we can encourage fruitful and helpful public comment. Answers to tough questions help us strike the right balance so that our rules can both encourage competition and serve consumers. However, the questions posed in this NPRM are so anti-consumer that I could not bring myself to even agree to them," said Gomez.

Gomez stressed that the notice will harm consumers by enabling ISPs to hide add-on fees and stripping people of their ability to access information in their own language. Moreover, added Gomez, it's unclear why the FCC is doing this. "What adds insult to injury is that the FCC does not even explain why this proposal is necessary. Make it make sense," she added.

Chrome

Google Chrome Will Finally Default To Secure HTTPS Connections Starting in April (engadget.com) 35

An anonymous reader shares a report: The transition to the more-secure HTTPS web protocol has plateaued, according to Google. Since 2020, 95 to 99 percent of navigations in Chrome have used HTTPS. To help make it safer for users to click on links, Chrome will enable a setting called Always Use Secure Connections for public sites for all users by default. This will happen in October 2026 with the release of Chrome 154.

The change will happen earlier for those who have switched on Enhanced Safe Browsing protections in Chrome. Google will enable Always Use Secure Connections by default in April when Chrome 147 drops. When this setting is on, Chrome will ask for your permission before it first accesses a public website that doesn't use HTTPS.
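The behavior being described amounts to URL-upgrade logic: prefer `https://`, and only allow plain `http://` after the user has explicitly agreed. Here is our sketch of that idea (not Chrome's actual code; the allowlist stands in for the permission prompt):

```python
# Sketch of "HTTPS-first" navigation logic (illustrative, not Chrome's code).
from urllib.parse import urlparse

def navigate(typed, user_allowed_http=frozenset()):
    """Upgrade a typed address to https:// unless the user opted in to http."""
    url = typed if "://" in typed else "https://" + typed   # bare hosts get https
    parts = urlparse(url)
    if parts.scheme == "http" and parts.hostname not in user_allowed_http:
        # The user hasn't consented to an insecure connection: try HTTPS instead.
        url = url.replace("http://", "https://", 1)
    return url

print(navigate("example.com"))                        # https://example.com
print(navigate("http://legacy.lan", {"legacy.lan"}))  # stays http (user opted in)
```

In the real browser, the fallback to `http://` also depends on whether the HTTPS attempt actually fails; this sketch only models the consent step.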

IT

'ChatGPT's Atlas: The Browser That's Anti-Web' (anildash.com) 36

Blogger and technologist Anil Dash, writing about OpenAI's recently launched browser, Atlas: When I first got Atlas up and running, I tried giving it the easiest and most obvious tasks I could possibly give it. I looked up "Taylor Swift showgirl" to see if it would give me links to videos or playlists to watch or listen to the most popular music on the charts right now; this has to be just about the easiest possible prompt.

The results that came back looked like a web page, but they weren't. Instead, what I got was something closer to a last-minute book report written by a kid who had mostly plagiarized Wikipedia. The response mentioned some basic biographical information and had a few photos. Now we know that AI tools are prone to this kind of confabulation, but this is new, because it felt like I was in a web browser, typing into a search box on the Internet. And here's what was most notable: there was no link to her website.

I had typed "Taylor Swift" in a browser, and the response had literally zero links to Taylor Swift's actual website. If you stayed within what Atlas generated, you would have no way of knowing that Taylor Swift has a website at all.

Unless you were an expert, you would almost certainly think I had typed in a search box and gotten back a web page with search results. But in reality, I had typed in a prompt box and gotten back a synthesized response that superficially resembles a web page, and it uses some web technologies to display its output. Instead of a list of links to websites that had information about the topic, it had bullet points describing things it thought I should know. There were a few footnotes buried within some of those responses, but the clear intent was that I was meant to stay within the AI-generated results, trapped in that walled garden.

During its first run, there's a brief warning buried amidst all the other messages that says, "ChatGPT may give you inaccurate information", but nobody is going to think that means "sometimes this tool completely fabricates content, gives me a box that looks like a search box, and shows me the fabricated content in a display that looks like a web page when I type in the fake search box."

And it's not like the generated response is even that satisfying.

AI

Chegg Slashes 45% of Workforce, Blames 'New Realities of AI' (cnbc.com) 31

Chegg says it will lay off about 45% of its workforce, or 388 employees, as the "new realities" of artificial intelligence and diminished traffic from internet search have led to plummeting revenue. From a report: The online education company, founded 20 years ago, has been hit by the rise of generative AI software tools, such as OpenAI's ChatGPT, which have become increasingly popular among students.

Chegg also sued Google in February, arguing that AI summaries of search results have hurt its traffic and sales. The company reiterated that claim on Monday, saying AI and "reduced traffic from Google to content publishers" have damaged its business. "As a result, and reflecting the company's continued investment in AI, Chegg is restructuring the way it operates its academic learning products," the company said. The cuts come after Chegg in May laid off 22% of its workforce, citing increasing adoption of AI.
Chegg's market cap has fallen 98.8% in recent years to about $135 million.
Windows

Microsoft Disables Preview In File Explorer To Block Attacks (bleepingcomputer.com) 49

Slashdot reader joshuark writes: Microsoft says that File Explorer (formerly Windows Explorer) now automatically blocks previews for files downloaded from the Internet, to prevent credential-theft attacks via malicious documents, according to a report from BleepingComputer. This attack vector is particularly concerning because it requires no user interaction beyond selecting a file to preview, removing the need to trick a target into actually opening or executing it on their system.

For most users, no action is required since the protection is enabled automatically with the October 2025 security update, and existing workflows remain unaffected unless you regularly preview downloaded files.

"This change is designed to enhance security by preventing a vulnerability that could leak NTLM hashes when users preview potentially unsafe files," Microsoft says in a support document published Wednesday.

It is important to note that this may not take effect immediately and could require signing out and signing back in.
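Windows knows which files came from the Internet because browsers tag downloads with a "Zone.Identifier" alternate data stream, the so-called Mark of the Web, and that marker is what download-aware protections like this one key off. A minimal sketch of reading the marker (our illustration, not Microsoft's implementation; on filesystems without alternate data streams the stream simply doesn't exist):

```python
# Sketch: read a file's Mark of the Web (Zone.Identifier ADS), if present.
def download_zone(path):
    """Return the ZoneId from a file's Mark of the Web (3 = Internet), or None."""
    try:
        # On NTFS, "file.ext:Zone.Identifier" opens the alternate data stream.
        with open(path + ":Zone.Identifier", "r") as stream:
            for line in stream:
                if line.strip().startswith("ZoneId="):
                    return int(line.strip().split("=", 1)[1])
    except (OSError, ValueError):
        return None    # no stream, unreadable stream, or malformed ZoneId
    return None
```

A preview pane following the new policy would refuse to render any file where this returns 3 (Internet zone) or 4 (restricted sites), rather than parsing the document and risking an NTLM-leaking fetch.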

Programming

Does Generative AI Threaten the Open Source Ecosystem? (zdnet.com) 47

"Snippets of proprietary or copyleft reciprocal code can enter AI-generated outputs, contaminating codebases with material that developers can't realistically audit or license properly."

That's the warning from Sean O'Brien, who founded the Yale Privacy Lab at Yale Law School. ZDNet reports: Open software has always counted on its code being regularly replenished. As part of the process of using it, users modify it to improve it. They add features and help to guarantee usability across generations of technology. At the same time, users improve security and patch holes that might put everyone at risk. But O'Brien says, "When generative AI systems ingest thousands of FOSS projects and regurgitate fragments without any provenance, the cycle of reciprocity collapses. The generated snippet appears originless, stripped of its license, author, and context." This means the developer downstream can't meaningfully comply with reciprocal licensing terms because the output cuts the human link between coder and code. Even if an engineer suspects that a block of AI-generated code originated under an open source license, there's no feasible way to identify the source project. The training data has been abstracted into billions of statistical weights, the legal equivalent of a black hole.

The result is what O'Brien calls "license amnesia." He says, "Code floats free of its social contract and developers can't give back because they don't know where to send their contributions...."

"Once AI training sets subsume the collective work of decades of open collaboration, the global commons idea, substantiated into repos and code all over the world, risks becoming a nonrenewable resource, mined and never replenished," says O'Brien. "The damage isn't limited to legal uncertainty. If FOSS projects can't rely upon the energy and labor of contributors to help them fix and improve their code, let alone patch security issues, fundamentally important components of the software the world relies upon are at risk."

O'Brien says, "The commons was never just about free code. It was about freedom to build together." That freedom, and the critical infrastructure that underlies almost all of modern society, is at risk because attribution, ownership, and reciprocity are blurred when AIs siphon up everything on the Internet and launder it (the analogy of money laundering is apt), so that all that code's provenance is obscured.

AI

Is AI Responsible for Job Cuts - Or Just a Good Excuse? (cnbc.com) 45

Has AI just become an easy excuse for firms looking to downsize, asks CNBC: Fabian Stephany, assistant professor of AI and work at the Oxford Internet Institute, said there might be more to job cuts than meets the eye. Previously there may have been some stigma attached to using AI, but now companies are "scapegoating" the technology to take the fall for challenging business moves such as layoffs. "I'm really skeptical whether the layoffs that we see currently are really due to true efficiency gains. It's rather really a projection into AI in the sense of 'We can use AI to make good excuses,'" Stephany said in an interview with CNBC. Companies can essentially position themselves at the frontier of AI technology to appear innovative and competitive, and simultaneously conceal the real reasons for layoffs, according to Stephany... Some companies that flourished during the pandemic "significantly overhired" and the recent layoffs might just be a "market clearance...."

One founder, Jean-Christophe Bouglé, even said in a popular LinkedIn post that AI adoption is at a "much slower pace" than is being claimed, and that in large corporations "there's not much happening," with AI projects even being rolled back due to cost or security concerns. "At the same time there are announcements of big layoff plans 'because of AI.' It looks like a big excuse, in a context where the economy in many countries is slowing down..."

The Budget Lab, a non-partisan policy research center at Yale University, released a report on Wednesday which showed that U.S. labor has actually been little disrupted by AI automation since the release of ChatGPT in 2022... Additionally, New York Fed economists released research in early September showing that measures of AI use amongst firms "do not point to significant reductions in employment" across the services and manufacturing industries in the New York-Northern New Jersey region.

Crime

Myanmar Military Shuts Down a Major Cybercrime Center and Detains Over 2,000 People (apnews.com) 11

An anonymous reader shares this report from the Associated Press: Myanmar's military has shut down a major online scam operation near the border with Thailand, detaining more than 2,000 people and seizing dozens of Starlink satellite internet terminals, state media reported Monday... The centers are infamous for recruiting workers from other countries under false pretenses, promising them legitimate jobs and then holding them captive and forcing them to carry out criminal activities.

Scam operations were in the international spotlight last week when the United States and Britain enacted sanctions against organizers of a major Cambodian cyberscam gang, and its alleged ringleader was indicted by a federal court in New York. According to a report in Monday's Myanma Alinn newspaper, the army raided KK Park, a well-documented cybercrime center, as part of operations starting in early September to suppress online fraud, illegal gambling, and cross-border cybercrime.

The Internet

Browser Promising Privacy Protection Contains Malware-Like Features, Routes Traffic Through China (arstechnica.com) 16

A web browser linked to Chinese online gambling websites and downloaded millions of times routes all internet traffic through servers in China and covertly installs programs that run in the background, according to findings published by network security company Infoblox. The researchers said the Universe Browser, which advertises itself as offering privacy protection, includes malware-like features such as keylogging and surreptitious connections.

Infoblox collaborated with the United Nations Office on Drugs and Crime on the research. The investigators found links between the browser and Southeast Asia's cybercrime ecosystem, which has connections to money laundering, illegal online gambling, human trafficking and scam operations using forced labor. The browser is directly linked to BBIN, a major online gambling company that has existed since 1999. Infoblox researchers examined the Windows version of the browser and found that it checks users' locations and languages when launched, installs two browser extensions, and disables security features including sandboxing.
Communications

SpaceX Disables 2,500 Starlink Terminals Allegedly Used By Asian Scam Centers (arstechnica.com) 50

SpaceX has deactivated over 2,500 Starlink terminals allegedly used by scam operations in Myanmar, where the service isn't licensed but was reportedly enabling large-scale cybercrime networks tied to human trafficking and fraud. Ars Technica reports: Lauren Dreyer, vice president of Starlink business operations, described the action in an X post last night after reports that Myanmar's military shut down a major scam operation: "SpaceX complies with local laws in all 150+ markets where Starlink is licensed to operate," Dreyer wrote. "SpaceX continually works to identify violations of our Acceptable Use Policy and applicable law... On the rare occasion we identify a violation, we take appropriate action, including working with law enforcement agencies around the world. In Myanmar, for example, SpaceX proactively identified and disabled over 2,500 Starlink Kits in the vicinity of suspected 'scam centers.'"

Starlink is not licensed to operate in Myanmar. While Dreyer didn't say how the terminals were disabled, it's known that Starlink can disable individual terminals based on their ID numbers or use geofencing to block areas from receiving signals. On Monday, Myanmar state media reported that "Myanmar's military has shut down a major online scam operation near the border with Thailand, detaining more than 2,000 people and seizing dozens of Starlink satellite Internet terminals," according to an Associated Press article. The army reportedly raided a cybercrime center known as KK Park as part of operations that began in early September. The operations reportedly targeted 260 unregistered buildings and resulted in seizure of 30 Starlink terminals and detention of 2,198 people.

"Maj. Gen. Zaw Min Tun, the spokesperson for the military government, charged in a statement Monday night that the top leaders of the Karen National Union, an armed ethnic organization opposed to army rule, were involved in the scam projects at KK Park," the AP wrote. The Karen National Union is "part of the larger armed resistance movement in Myanmar's civil war" and "deny any involvement in the scams."
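While Dreyer didn't say how the terminals were disabled, the geofencing mechanism mentioned above can be sketched with a simple distance check. This is a toy model with invented coordinates and radius, not SpaceX's system, which would use licensed-country boundaries rather than a circle:

```python
# Toy geofence: serve a terminal only if it sits inside a licensed region,
# modeled here as a circle around a center point (illustrative only).
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * asin(sqrt(a))     # 6371 km = mean Earth radius

def in_service_area(lat, lon, center=(13.75, 100.5), radius_km=500):
    """Hypothetical check: is this terminal inside the served circle?"""
    return haversine_km(lat, lon, *center) <= radius_km

print(in_service_area(13.75, 100.5))    # True: a terminal at the center
print(in_service_area(48.9, 2.35))      # False: far outside the circle
```

Disabling by terminal ID, the other mechanism the article mentions, needs no geometry at all: it is just a denylist checked when a terminal authenticates.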

Music

Pitchfork Is Beta Testing User Reviews and Comments As It Approaches 30 (theverge.com) 8

As it nears its 30th anniversary, Pitchfork is testing user reviews and comments in a major shift from its long-standing critic-only model. The site will now let readers rate albums and leave comments, combining those into an aggregated "reader score" alongside the official Pitchfork score. The Verge reports: Pitchfork has historically been a one-sided affair. While it ran the occasional reader poll, there was no way for readers to directly voice their opinion on the site. If you thought that Jet's Shine On deserved better than a 0.0 (first off, you're wrong), there was no way to let the author know other than shouting into the void of this new thing at the time called Twitter. Now the site is considering letting users comment directly on reviews and give albums scores of their own. And then those scores will be averaged up into a single reader score for each album.
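The aggregation itself is straightforward. Here is a guess at the mechanics, not Pitchfork's implementation: average the valid user ratings and show one decimal, matching the site's editorial 0.0-10.0 scale.

```python
# Hypothetical "reader score" aggregation (not Pitchfork's actual code).
def reader_score(ratings):
    """Mean of 0.0-10.0 user ratings, rounded to one decimal; None if no votes."""
    valid = [r for r in ratings if 0.0 <= r <= 10.0]   # drop out-of-range votes
    if not valid:
        return None
    return round(sum(valid) / len(valid), 1)

print(reader_score([7.8, 9.0, 6.5]))   # 7.8
```

A production version would likely also weight or throttle ratings to resist brigading, a problem every user-score system from IMDb to Metacritic has had to confront.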
The Internet

Smart Beds Malfunctioned During AWS Outage (msn.com) 105

Early Monday, an Amazon Web Services outage disrupted banks, games, and Peloton classes. Eight Sleep customers faced a different problem. Their internet-enabled mattresses malfunctioned. People woke to beds locked in upright positions, excessive heat, flashing lights, and unexpected alarms. Matteo Franceschetti, the company's chief executive, apologized and said engineers were building an outage-proof mode. By Monday evening, all devices functioned again, though some experienced data processing delays. The mattresses adjust temperature between 55 and 110 degrees and elevate bodies into different positions. They activate soundscapes and vibrational alarms. The advanced models cost over $5,000. A yearly subscription of $199 to $399 is required for temperature controls.
The Internet

Internet Archive Celebrates 1 Trillion Web Pages Archived (archive.org) 15

alternative_right shares a report from the Internet Archive: This October, the Internet Archive's Wayback Machine is projected to hit a once-in-a-generation milestone: 1 trillion web pages archived. That's one trillion memories, moments, and movements -- preserved for the public and available to access via the Wayback Machine.

We'll be commemorating this historic achievement on October 22, 2025, with a global event: a party at our San Francisco headquarters and a livestream for friends and supporters around the world. More than a celebration, it's a tribute to what we've built together: a free and open digital library of the web.

Youtube

YouTube's Likeness Detection Has Arrived To Help Stop AI Doppelgangers (arstechnica.com) 19

An anonymous reader quotes a report from Ars Technica: AI content has proliferated across the Internet over the past few years, but those early confabulations with mutated hands have evolved into synthetic images and videos that can be hard to differentiate from reality. Having helped to create this problem, Google has some responsibility to keep AI video in check on YouTube. To that end, the company has started rolling out its promised likeness detection system for creators. [...] The likeness detection tool, which is similar to the site's copyright detection system, has now expanded beyond the initial small group of testers. YouTube says the first batch of eligible creators have been notified that they can use likeness detection, but interested parties will need to hand Google even more personal information to get protection from AI fakes.

Currently, likeness detection is a beta feature in limited testing, so not all creators will see it as an option in YouTube Studio. When it does appear, it will be tucked into the existing "Content detection" menu. In YouTube's demo video, the setup flow appears to assume the channel has only a single host whose likeness needs protection. That person must verify their identity, which requires a photo of a government ID and a video of their face. It's unclear why YouTube needs this data in addition to the videos people have already posted with their oh-so stealable faces, but rules are rules.

After signing up, YouTube will flag videos from other channels that appear to have the user's face. YouTube's algorithm can't know for sure what is and is not an AI video. So some of the face match results may be false positives from channels that have used a short clip under fair use guidelines. If creators do spot an AI fake, they can add some details and submit a report in a few minutes. If the video includes content copied from the creator's channel that does not adhere to fair use guidelines, YouTube suggests also submitting a copyright removal request. However, just because a person's likeness appears in an AI video does not necessarily mean YouTube will remove it.
Network

ISP Deceived Customers About Fiber Internet, German Court Finds (tomshardware.com) 36

The German Koblenz Regional Court has banned the internet service provider 1&1 from marketing its fiber-to-the-curb service as fiber-optic DSL. The court found that the company misled customers because its network uses copper cables for the final stage of connections, sometimes extending up to a mile from the distribution box to subscribers' homes.

Customers who visited the ISP's website and checked connection availability received a notification stating that a "1&1 fiber optic DSL connection" was available, even though fiber optic cables terminate at street-level distribution boxes or building service rooms. The company pairs the copper lines with vectoring technology to boost DSL speeds to 100 megabits per second. The Federation of German Consumer Organizations filed the lawsuit. Ramona Pop, the organization's chairperson, said that anyone who promises fiber optics but delivers only DSL is deceiving customers.
Cloud

Amazon's DNS Problem Knocked Out Half the Web, Likely Costing Billions 103

An anonymous reader quotes a report from Ars Technica: On Monday afternoon, Amazon confirmed that an outage affecting Amazon Web Services' cloud hosting, which had impacted millions across the Internet, had been resolved. Considered the worst outage since last year's CrowdStrike chaos, Amazon's outage caused "global turmoil," Reuters reported. AWS is the world's largest cloud provider and, therefore, the "backbone of much of the Internet," ZDNet noted. Ultimately, more than 28 AWS services were disrupted, causing perhaps billions in damages, one analyst estimated for CNN.

[...] Amazon's problems originated at a US site that is its "oldest and largest for web services" and often "the default region for many AWS services," Reuters noted. The same site has experienced two outages before in 2020 and 2021, but while the tech giant had confirmed that those prior issues had been "fully mitigated," apparently the fixes did not ensure stability into 2025. ZDNet noted that Amazon's first sign of the outage was "increased error rates and latency across numerous key services" tied to its cloud database technology. Although "engineers later identified a Domain Name System (DNS) resolution problem" as the root of these issues and quickly fixed it, "other AWS services began to fail in its wake, leaving the platform still impaired" as more than two dozen AWS services shut down. At the peak of the outage on Monday, Down Detector tracked more than 8 million reports globally from users panicked by the outage, ZDNet reported.
Ken Birman, a computer science professor at Cornell University, told Reuters that "software developers need to build better fault tolerance."

"When people cut costs and cut corners to try to get an application up, and then forget that they skipped that last step and didn't really protect against an outage, those companies are the ones who really ought to be scrutinized later."
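Birman's point about fault tolerance can be made concrete with a small sketch: instead of one DNS lookup that fails hard, retry with exponential backoff and then degrade to a cached answer. This is purely illustrative; the resolver and cache here are stand-ins, not anything AWS runs:

```python
# Sketch of fault-tolerant name resolution: retry, back off, then serve stale.
import time

def resolve_with_fallback(name, resolve, cache, attempts=3, base_delay=0.01):
    """Try resolve() with backoff; fall back to the last cached answer."""
    delay = base_delay
    for i in range(attempts):
        try:
            addr = resolve(name)        # may raise during an outage
            cache[name] = addr          # remember the last good answer
            return addr
        except OSError:
            if i < attempts - 1:
                time.sleep(delay)       # back off before retrying
                delay *= 2
    if name in cache:                   # degrade gracefully instead of crashing
        return cache[name]
    raise OSError("cannot resolve %s and no cached answer" % name)

# Simulated outage: the resolver always fails, but the cache saves us.
cache = {"dynamodb.example": "10.0.0.5"}
def broken_resolver(name):
    raise OSError("DNS resolution failed")

print(resolve_with_fallback("dynamodb.example", broken_resolver, cache))
```

Serving a stale address is not always safe (the backend may have moved), which is exactly the kind of trade-off Birman argues teams skip when they "cut corners to try to get an application up."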
