Operating Systems

System76's Open Firmware 'Re-Disables' Intel's Management Engine (phoronix.com) 19

Linux computer vendor System76 shared some news in a recent blog post. "We prefer to disable the Intel Management Engine wherever possible to reduce the amount of closed firmware running on System76 hardware. We've resolved a coreboot bug that allows the Intel ME (Management Engine) to once again be disabled."

Phoronix reports that the move will "benefit their latest Intel Core 13th Gen 'Raptor Lake' wares as well as prior generation devices." Intel ME is disabled for their latest Raptor Lake laptops and most older platforms, with some exceptions such as Tiger Lake, which has a silicon issue. System76 has also added a new firmware setup menu option for enabling/disabling UEFI Secure Boot. The motivation for making Secure Boot easier to toggle is to allow Windows 11 support with Secure Boot active while running System76 Open Firmware.
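On Linux, the current Secure Boot state is exposed through efivarfs, so a user can verify that the new firmware toggle took effect without rebooting into setup. The sketch below reads the standard SecureBoot EFI variable (the GUID is the UEFI spec's global-variable GUID); the helper name is ours:

```python
from pathlib import Path

# The SecureBoot variable under the standard EFI global-variable GUID.
# efivarfs prepends a 4-byte attributes header before the 1-byte payload.
SECUREBOOT_VAR = Path(
    "/sys/firmware/efi/efivars/"
    "SecureBoot-8be4df61-93ca-11d2-aa0d-00e098032b8c"
)

def secure_boot_enabled(raw: bytes) -> bool:
    """Interpret the efivarfs payload: byte after the header == 1 => enabled."""
    return len(raw) >= 5 and raw[4] == 1

if SECUREBOOT_VAR.exists():
    state = secure_boot_enabled(SECUREBOOT_VAR.read_bytes())
    print("Secure Boot:", "enabled" if state else "disabled")
else:
    print("Not a UEFI system (or efivarfs not mounted)")
```

The same state can also be queried with `mokutil --sb-state` on distributions that ship it.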
Data Storage

ARM Joins Linux Foundation's 'Open Programmable Infrastructure' Project (linuxfoundation.org) 18

ARM has joined the Linux Foundation's Open Programmable Infrastructure project, "a community-driven initiative focused on creating a standards-based open ecosystem for next-generation architectures and frameworks" based on programmable processor technologies like DPUs (Data Processing Units) and IPUs (Infrastructure Processing Units).

From the Linux Foundation's announcement: Launched in June 2021 under the Linux Foundation, the project is focused on utilizing open software and standards, as well as frameworks and toolkits, to enable the rapid adoption of DPUs. Arm joins other premier members including Dell Technologies, F5, Intel, Keysight Technologies, Marvell, Nvidia, Red Hat, Tencent, and ZTE. These member companies work together to create an ecosystem of blueprints and standards to ensure that compliant DPUs work with any server.

DPUs are used today to accelerate networking, security, and storage tasks. In addition to performance benefits, DPUs help improve data center security by providing physical isolation for running infrastructure tasks. DPUs also help to reduce latency and improve performance for applications that require real-time data processing. As DPUs create a logical split between infrastructure compute and client applications, the manageability of workloads within different development and management teams is streamlined.

"Arm has been contributing to the OPI Project for a while now," said Kris Murphy, Chair of the OPI Project Governing Board and Senior Principal Software Engineer at Red Hat. "Now, as a premier member, we are excited that they're bringing their leadership to the Governing Board and expertise to the technical steering committee and working groups. Their participation will help to ensure that the DPU components are optimized for programmable infrastructure solutions."

"Across network, storage, and security applications, DPUs are already proving the power efficiency and capex benefits of specialized processing technology," said Marc Meunier, director of ecosystem development, Infrastructure Line of Business, Arm and member of OPI Governing Board. "As a premier member of the OPI project, we look forward to contributing our expertise in heterogeneous computing and working with other leaders in the industry to create solution blueprints and standards that pave the way for successful deployments."

"The DPU market offers an opportunity for us to change how infrastructure services can be deployed and managed," Arpit Joshipura, General Manager, Networking, Edge, and IoT, the Linux Foundation. "With collaboration across software and hardware vendors representing silicon devices and the entire DPU software stack, the OPI Project is creating an open ecosystem for next generation data centers, private clouds, and edge deployments."

Open Source

'RISE' Project Building Open Source RISC-V Software Announced by Linux Foundation Europe (linuxfoundation.eu) 11

Linux Foundation Europe "has announced the RISC-V Software Ecosystem (RISE) Project to help facilitate more performant, commercial-ready software for the RISC-V processor architecture," reports Phoronix.

"Among the companies joining the RISE Project on their governing board are Andes, Google, Intel, Imagination Technologies, Mediatek, NVIDIA, Qualcomm, Red Hat, Rivos, Samsung, SiFive, T-Head, and Ventana."

Its top goal is to "accelerate the development of open source software for RISC-V," according to the official RISE website. The project's chair says it "brings together leaders with a shared sense of urgency to accelerate the RISC-V software ecosystem readiness in collaboration with RISC-V International." The CEO of RISC-V International, Calista Redmond, said "We are grateful to the thousands of engineers making upstream contributions and to the organizations coming together now to invest in tools and libraries in support of the RISC-V software ecosystem." RISE Project members will contribute financially and provide engineering talent to address specific software deliverables prioritized by the RISE Technical Steering Committee (TSC). RISE is dedicated to enabling a robust software ecosystem specifically for application processors that includes software development tools, virtualization support, language runtimes, Linux distribution integration, and system firmware, working upstream first with existing open source communities in accordance with open source best practices.

"The RISE Project is dedicated to enabling RISC-V in open source tools and libraries (e.g., LLVM, GCC, etc) to speed implementation and time-to-market," said Gabriele Columbro, General Manager of Linux Foundation Europe.

Google's director of engineering on Android said Google was "excited to partner with industry leaders to drive rapid maturity of the RISC-V software ecosystem in support of Android and more."

And the VP of system software at NVIDIA said "NVIDIA's accelerated computing platform — which includes GPUs, DPUs, chiplets, interconnects and software — will support the RISC-V open standard to help drive breakthroughs in data centers, and a wide range of industries, such as automotive, healthcare and robotics."
Hardware

Arm Announces the Cortex X4 For 2024, Plus a 14-Core M2-Fighter (arstechnica.com) 81

Arm unveiled its upcoming flagship CPUs for 2024, including the Arm Cortex X4, Cortex A720, and Cortex A520. These chips, built on the Armv9.2 architecture, promise higher performance and improved power efficiency. Arm also introduced a new 'QARMA3 algorithm' for memory security and showcased a potential 14-core mega-chip design for high-performance laptops. Ars Technica reports: Arm claims the big Cortex X4 chip will have 15 percent higher performance than this year's X3 chip, and "40 percent better power efficiency." The company also promises a 20 percent efficiency boost for the A700 series and a 22 percent efficiency boost for the A500. The new chips are all built on the new 'Armv9.2' architecture, which adds a "new QARMA3 algorithm" for Arm's Pointer Authentication memory security feature. Pointer authentication assigns a cryptographic signature to memory pointers and is meant to shut down memory corruption vulnerabilities like buffer overflows by making it harder for unauthenticated programs to create valid memory pointers. This feature has been around for a while, but Arm's new algorithm reduces the CPU overhead of all this extra memory work to just 1 percent of the chip's power, which hopefully will get more manufacturers to enable it.
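Pointer authentication can be illustrated in miniature: a keyed MAC is computed over the pointer value plus a context value, truncated, and tucked into the pointer's unused upper bits; before the pointer is used, the tag is re-verified and stripped. The sketch below uses HMAC-SHA256 as a stand-in for the QARMA3 cipher, and assumes a 48-bit address space with a 16-bit tag above it — illustrative choices, not Arm's exact layout:

```python
import hmac, hashlib, secrets

KEY = secrets.token_bytes(16)      # per-process key, held by "hardware"
ADDR_BITS = 48                     # assumed 48-bit virtual address space
ADDR_MASK = (1 << ADDR_BITS) - 1

def _tag(ptr: int, context: int) -> int:
    """Truncated keyed MAC over (pointer, context) -- QARMA3 stand-in."""
    msg = ptr.to_bytes(8, "little") + context.to_bytes(8, "little")
    digest = hmac.new(KEY, msg, hashlib.sha256).digest()
    return int.from_bytes(digest[:2], "little")  # 16-bit tag

def sign(ptr: int, context: int = 0) -> int:
    """PAC-style sign: store the tag in the unused upper pointer bits."""
    addr = ptr & ADDR_MASK
    return addr | (_tag(addr, context) << ADDR_BITS)

def authenticate(signed: int, context: int = 0) -> int:
    """Verify and strip the tag; a corrupted pointer raises."""
    addr = signed & ADDR_MASK
    if signed >> ADDR_BITS != _tag(addr, context):
        raise ValueError("pointer authentication failure")
    return addr
```

A buffer overflow that overwrites a signed return address will almost certainly produce a tag mismatch, turning silent corruption into an immediate fault — which is the property the Armv9.2 feature provides in hardware.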

Arm's SoC recommendations are usually a "1+3+4" design. That's one big X chip, three medium A700 chips, and four A500 chips. This year the company is floating a new layout, though, swapping out two small chips for two medium chips, which would put you at a "1+5+2" configuration. Arm's benchmarks -- which were run on Android 13 -- claim this will get you 27 percent more performance. That's assuming anything can cool and power that for a reasonable amount of time. Arm's blog post also mentions a 1+4+4 chip -- nine cores -- for a flagship smartphone. [...]

Every year with these Arm flagship chip announcements, the company also includes a wild design for a giant mega-chip that usually never gets built. Last year the company's blueprint monster was a design with eight Cortex X3 chips and four A715 cores, which the company claimed would rival an Intel Core i7. The biggest X3-based chip on the market is the Qualcomm Snapdragon 8cx Gen 3, which landed in a few Windows laptops. That was only a four X3/four A715 chip, though. This year's mega chip is a 14-core monster with 10 Cortex X4 chips and four A720 chips, which Arm says is meant for "high-performance laptops." Arm calls the design the company's "most powerful cluster ever built," but will it ever actually be built? Will it ever be more than words on a page?

Intel

Intel's Revival Plan Runs Into Trouble. 'We Had Some Serious Issues.' (wsj.com) 79

Rivals such as Nvidia have left Intel far behind. CEO Pat Gelsinger aims to reverse firm's fortunes by vastly expanding its factories. From a report: Pat Gelsinger is keenly aware he must act fast to stop Intel from becoming yet another storied American technology company left in the dust by nimbler competitors. Over the past decade, rivals overtook Intel in making the most advanced chips, graphics-chip maker Nvidia leapfrogged Intel to become America's most valuable semiconductor company, and perennial also-ran AMD has been stealing market share. Intel, by contrast, has faced repeated delays introducing new chips and frustration from would-be customers. "We didn't get into this mud hole because everything was going great," said Gelsinger, who took over as CEO in 2021. "We had some serious issues in terms of leadership, people, methodology, et cetera that we needed to attack."

As he sees it, Intel's problems stem largely from how it botched a transition in how chips are made. Intel came to prominence by both designing circuits and making them in its own factories. Now, chip companies tend to specialize either in circuit design or manufacturing, and Intel hasn't been able to pick up much business making chips designed by other people. So far, the turnaround has been rough. Gelsinger, 62 years old and a devout Christian, said he takes inspiration from the biblical story of Nehemiah, who rebuilt the walls of Jerusalem under attack from his enemies. Last year, he told a Christian group in Singapore: "You'll have your bad days, and you need to have a deep passion to rebuild." Gelsinger's plan is to invest as much as hundreds of billions of dollars into new factories that would make semiconductors for other companies alongside Intel's own chips. Two years in, that contract-manufacturing operation, called a "foundry" business, is bogged down with problems.

Intel

Intel Says AI is Overwhelming CPUs, GPUs, Even Clouds, So All Meteor Lakes Get a VPU (theregister.com) 63

Intel will bring the "VPU" tech it acquired along with Movidius in 2016 to all models of its forthcoming Meteor Lake client CPUs. From a report: Chipzilla already offers VPUs in some 13th-gen Core silicon. Ahead of the Computex conference in Taiwan, the company briefed The Register on their inclusion in Meteor Lake. Curiously, Intel didn't elucidate the acronym, but has previously said it stands for Vision Processing Unit. Chipzilla is, however, clear about what it does and why it's needed -- and it's more than vision. Intel Veep and general manager of Client AI John Rayfield said dedicated AI silicon is needed because AI is now present in many PC workloads. Video conferences, he said, feature lots of AI enhancing video and making participants sound great -- and users now just expect that PCs do brilliantly when Zooming or WebExing or Teamising. Games use lots of AI. And GPT-like models, and tools like Stable Diffusion, are already popular on the PC and available as local executables.

CPUs and GPUs do the heavy lifting today, but Rayfield said they'll be overwhelmed by the demands of AI workloads. Shifting that work to the cloud is pricey, and also impractical because buyers want PCs to perform. Meteor Lake therefore gets VPUs and emerges as an SoC that uses Intel's Foveros packaging tech to combine the CPU, GPU, and VPU. The VPU gets to handle "sustained AI and AI offload." CPUs will still be asked to do simple inference jobs with low latency, usually when the cost of doing so is less than the overhead of working with a driver to shunt the workload elsewhere. GPUs will get to do jobs involving performance parallelism and throughput. Other AI-related work will be offloaded to VPUs.
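Rayfield's three-way split reads like a scheduling heuristic: latency-critical small jobs stay on the CPU (driver overhead would dominate), throughput-parallel jobs go to the GPU, and sustained inference is offloaded to the VPU. A toy router along those lines — the threshold, field names, and device labels are invented for illustration, not Intel's actual runtime:

```python
from dataclasses import dataclass

@dataclass
class AIJob:
    ops: float        # estimated compute, in GFLOPs (hypothetical metric)
    sustained: bool   # long-running background inference?
    parallel: bool    # batched / embarrassingly parallel?

# Hypothetical fixed cost of shipping a job through an accelerator driver.
OFFLOAD_OVERHEAD_GFLOPS = 5.0

def route(job: AIJob) -> str:
    if job.sustained:
        return "VPU"   # sustained AI offload, per the article's split
    if job.ops < OFFLOAD_OVERHEAD_GFLOPS:
        return "CPU"   # cheaper than the driver round-trip
    if job.parallel:
        return "GPU"   # throughput parallelism
    return "VPU"

print(route(AIJob(ops=1.0, sustained=False, parallel=False)))   # small, latency-bound
print(route(AIJob(ops=500.0, sustained=False, parallel=True)))  # big batched job
print(route(AIJob(ops=50.0, sustained=True, parallel=False)))   # background effect
```

The design point is the middle branch: offloading only pays when the work outweighs the cost of crossing the driver boundary, which is why Intel says CPUs keep the small low-latency inference jobs.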

Intel

Intel Mulls Cutting Ties To 16 and 32-Bit Support (theregister.com) 239

Intel has proposed a potential simplification of the x86 architecture by creating a new x86S architecture that removes certain old features, such as 16-bit and some elements of 32-bit support. A technical note on Intel's developer blog proposes the change, with a 46-page white paper (PDF) providing more details. The Register reports: The result would be a family of processors which boot straight into x86-64 mode. That would mean bypassing the traditional series of transitions -- 16-bit real mode to 32-bit protected mode to 64-bit long mode; or 16-bit mode straight into 64-bit mode -- that chips are obliged to go through as the system starts up. [...] Some of the changes are quite dramatic, although the impact upon how most people use computers today would probably be invisible -- which is undoubtedly the idea.
Intel

Intel Gives Details on Future AI Chips as It Shifts Strategy (reuters.com) 36

Intel on Monday provided a handful of new details on a chip for artificial intelligence (AI) computing it plans to introduce in 2025 as it shifts strategy to compete against Nvidia and Advanced Micro Devices. From a report: At a supercomputing conference in Germany on Monday, Intel said its forthcoming "Falcon Shores" chip will have 288 gigabytes of memory and support 8-bit floating point computation. Those technical specifications are important as artificial intelligence models similar to services like ChatGPT have exploded in size, and businesses are looking for more powerful chips to run them.

The details are also among the first to trickle out as Intel carries out a strategy shift to catch up to Nvidia, which leads the market in chips for AI, and AMD, which is expected to challenge Nvidia's position with a chip called the MI300. Intel, by contrast, has essentially no market share after its would-be Nvidia competitor, a chip called Ponte Vecchio, suffered years of delays. Intel on Monday said it has nearly completed shipments for Argonne National Lab's Aurora supercomputer based on Ponte Vecchio, which Intel claims has better performance than Nvidia's latest AI chip, the H100. But Intel's Falcon Shores follow-on chip won't come to market until 2025, when Nvidia will likely have another chip of its own out.

AMD

AMD Now Powers 121 of the World's Fastest Supercomputers (tomshardware.com) 22

The Top 500 list of the fastest supercomputers in the world was released today, and AMD continues its streak of impressive wins with 121 systems now powered by AMD's silicon -- a year-over-year increase of 29%. From a report: Additionally, AMD continues to hold the #1 spot on the Top 500 with the Frontier supercomputer, while the test and development system based on the same architecture continues to hold the second spot in power efficiency metrics on the Green 500 list. Overall, AMD also powers seven of the top ten systems on the Green 500 list. The AMD-powered Frontier remains the only fully-qualified exascale-class supercomputer on the planet, as the Intel-powered two-exaflop Aurora has still not submitted a benchmark result after years of delays.

In contrast, Frontier is now fully operational and is being used by researchers in a multitude of science workloads. In fact, Frontier continues to improve from tuning -- the system entered the Top 500 list with 1.102 exaflops of performance in June 2022 but has now improved to 1.194 exaflops, an 8% increase. That's an impressive increase from the same 8,699,904 CPU cores it debuted with. For perspective, that extra 92 petaflops of performance from tuning represents the same amount of computational horsepower as the entire Perlmutter system that ranks eighth on the Top 500.

United States

How US Universities Hope to Build a New Semiconductor Workforce (ieee.org) 52

There are shortages of young semiconductor engineers around the world, reports IEEE Spectrum — partially explained by this quote from Intel's director of university research collaboration. "We hear from academics that we're losing EE students to software. But we also need the software. I think it's a totality of 'We need more students in STEM careers.'"

So after America's CHIPS and Science Act "aimed at kick-starting chip manufacturing in the United States," the article notes that universities must attempt to bring the U.S. "the qualified workforce needed to run these plants and design the chips." The United States today manufactures just 12 percent of the world's chips, down from 37 percent in 1990, according to a September 2020 report by the Semiconductor Industry Association. Over those decades, experts say, semiconductor and hardware education has stagnated. But for the CHIPS Act to succeed, each fab will need hundreds of skilled engineers and technicians of all stripes, with training ranging from two-year associate degrees to Ph.D.s. Engineering schools in the United States are now racing to produce that talent... There were around 20,000 job openings in the semiconductor industry at the end of 2022, according to Peter Bermel, an electrical and computer engineering professor at Purdue University. "Even if there's limited growth in this field, you'd need a minimum of 50,000 more hires in the next five years. We need to ramp up our efforts really quickly...."

More than being a partner, Intel sees itself as a catalyst for upgrading the higher-education system to produce the workforce it needs, says the company's director of university research collaboration, Gabriela Cruz Thompson. One of the few semiconductor companies still producing most of its wafers in the United States, Intel is expanding its fabs in Arizona, New Mexico, and Oregon. Of the 7,000 jobs created as a result, about 70 percent will be for people with two-year degrees... Since COVID, however, Intel has struggled to find enough operators and technicians with two-year degrees to keep the foundries running. This makes community colleges a crucial piece of the microelectronics workforce puzzle, Thompson says. In Ohio, the company is giving most of its educational funds to technical and community colleges so they can add semiconductor-specific training to existing advanced manufacturing programs. Intel is also asking universities to provide hands-on clean-room experience to community college students.

Samsung and Silicon Labs in Austin are similarly investing in neighboring community colleges and technical schools via scholarships, summer internships, and mentorship programs.

Beyond the deserts of Arizona, chipmakers are eyeing America's Midwest, the article points out, with its "abundance of research universities and technical colleges."
  • The University of Illinois Urbana-Champaign offers an Advanced Systems Design class "which leads senior-year undergrads through every step of making an integrated circuit."

Hardware

US Focuses on Invigorating 'Chiplet' Production in the US (nytimes.com) 19

More than a decade ago engineers at AMD "began toying with a radical idea," remembers the New York Times. Instead of designing one big microprocessor, they "conceived of creating one from smaller chips that would be packaged tightly together to work like one electronic brain."

But with "diminishing returns" from Moore's Law, packaging smaller chips suddenly becomes more important. [Alternate URL here.] As much as 80% of microprocessors will be using these designs by 2027, according to an estimate from the market research firm Yole Group cited by the Times: The concept, sometimes called chiplets, caught on in a big way, with AMD, Apple, Amazon, Tesla, IBM and Intel introducing such products. Chiplets rapidly gained traction because smaller chips are cheaper to make, while bundles of them can top the performance of any single slice of silicon. The strategy, based on advanced packaging technology, has since become an essential tool to enabling progress in semiconductors. And it represents one of the biggest shifts in years for an industry that drives innovations in fields like artificial intelligence, self-driving cars and military hardware. "Packaging is where the action is going to be," said Subramanian Iyer, a professor of electrical and computer engineering at the University of California, Los Angeles, who helped pioneer the chiplet concept. "It's happening because there is actually no other way."

The catch is that such packaging, like making chips themselves, is overwhelmingly dominated by companies in Asia. Although the United States accounts for around 12 percent of global semiconductor production, American companies provide just 3 percent of chip packaging, according to IPC, a trade association. That issue has now landed chiplets in the middle of U.S. industrial policymaking. The CHIPS Act, a $52 billion subsidy package that passed last summer, was seen as President Biden's move to reinvigorate domestic chip making by providing money to build more sophisticated factories called "fabs." But part of it was also aimed at stoking advanced packaging factories in the United States to capture more of that essential process... The Commerce Department is now accepting applications for manufacturing grants from the CHIPS Act, including for chip packaging factories. It is also allocating funding to a research program specifically on advanced packaging...

Some chip packaging companies are moving quickly for the funding. One is Integra Technologies in Wichita, Kan., which announced plans for a $1.8 billion expansion there but said that was contingent on receiving federal subsidies. Amkor Technology, an Arizona packaging service that has most of its operations in Asia, also said it was talking to customers and government officials about a U.S. production presence... Packaging services still need others to supply the substrates that chiplets require to connect to circuit boards and one another... But the United States has no major makers of those substrates, which are primarily produced in Asia and evolved from technologies used in manufacturing circuit boards. Many U.S. companies have also left that business, another worry that industry groups hope will spur federal funding to help board suppliers start making substrates.

In March, Mr. Biden issued a determination that advanced packaging and domestic circuit board production were essential for national security, and announced $50 million in Defense Production Act funding for American and Canadian companies in those fields. Even with such subsidies, assembling all the elements required to reduce U.S. dependence on Asian companies "is a huge challenge," said Andreas Olofsson, who ran a Defense Department research effort in the field before founding a packaging start-up called Zero ASIC. "You don't have suppliers. You don't have a work force. You don't have equipment. You have to sort of start from scratch."

Intel

Intel Plans Fresh Round of Layoffs, Other Cost Cuts (oregonlive.com) 33

Intel plans a fresh wave of layoffs in the wake of a steep decline in revenue over the last six months. The chipmaker, Oregon's largest corporate employer, blames a weak global economy. From a report: "We are focused on identifying cost reductions and efficiency gains through multiple initiatives, including some business and function-specific workforce reductions in areas across the company," Intel said in a written statement. "These are difficult decisions, and we are committed to treating impacted employees with dignity and respect," Intel said.

Dylan Patel with the technology research firm SemiAnalysis first reported the pending cuts over the weekend. Intel didn't say what else it's cutting, in what areas, or how these layoffs compare to a prior round of job cuts that ended last winter. Intel laid off more than 500 employees in California in job cuts announced last fall, according to filings there with state workforce agencies. It laid off employees in Oregon, too, but didn't make a similar filing here, suggesting that the layoffs represented a smaller percentage of the company's local workforce. Intel employs more than 22,000 at its Washington County campuses.

Linux

Linus Torvalds Cleaned Up the Intel LAM Code for Linux 6.4 (phoronix.com) 27

Last week Linus Torvalds personally cleaned up the x86 memory copy code for Linux 6.4, Phoronix reports — and this week "he's merged more of his own code as he took issue with some of the code merged by Intel engineers as part of their Linear Address Masking enabling." Back during the Linux 6.2 days at the end of last year, Linus rejected the Intel LAM code at the time for various technical issues. Intel then reworked it for Linux 6.4. This time around Linus merged Intel LAM into Linux 6.4 as this new CPU feature for letting user-space store metadata within some bits of pointers without masking it out before use. Intel LAM — like Arm TBI — can be of use to virtual machines, profiling / sanitizers / tagging, and other applications. But this time around there was some less-than-ideal code that he personally took to sprucing up...
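The feature itself is easiest to see with bit arithmetic. With LAM enabled, loads and stores ignore a range of upper pointer bits, so user space can keep a tag there and dereference the pointer without masking it first. The sketch below models the 6-bit LAM_U57 variant, which masks bits 57-62 of the pointer; the helper names are ours:

```python
TAG_SHIFT = 57
TAG_BITS = 6                       # LAM_U57: metadata lives in bits 57..62
TAG_MASK = ((1 << TAG_BITS) - 1) << TAG_SHIFT

def tag_pointer(ptr: int, tag: int) -> int:
    """Store a 6-bit tag in the bits the hardware will ignore."""
    assert 0 <= tag < (1 << TAG_BITS)
    return (ptr & ~TAG_MASK) | (tag << TAG_SHIFT)

def get_tag(ptr: int) -> int:
    """Recover the metadata -- e.g. a sanitizer's allocation tag."""
    return (ptr & TAG_MASK) >> TAG_SHIFT

def mmu_view(ptr: int) -> int:
    """What address translation sees with LAM active: tag bits masked off."""
    return ptr & ~TAG_MASK

p = 0x0000_7F12_3456_7000
tagged = tag_pointer(p, 0x2A)
assert get_tag(tagged) == 0x2A
assert mmu_view(tagged) == p       # tagged and untagged hit the same memory
```

Without LAM (or Arm's equivalent, TBI), every dereference would need the `mmu_view` masking step in software — which is exactly the overhead the hardware feature removes.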

Torvalds reworked around one hundred lines of code for cleaning it up.

It's fun to read Torvalds' commit messages (included in both Phoronix articles). Torvalds begins by writing that the LAM updates "made me unhappy about how 'access_ok()' was done, and it actually turned out to have a couple of small bugs in it too..."
AMD

Report: Microsoft is Partnering with AMD on Athena AI Chipset 17

According to Bloomberg (paywalled), Microsoft is helping finance AMD's expansion into AI chips. Meanwhile, AMD is working with Microsoft to create an in-house chipset, codenamed Athena, for the software giant's data centers. Paul Thurrott reports: Athena is designed as a cost-effective replacement for AI chipsets from Nvidia, which currently dominates this market. And it comes with newfound urgency, as Microsoft's ChatGPT-powered Bing chatbot workloads are incredibly expensive to run on third-party chips. With Microsoft planning to expand its use of AI dramatically this year, it needs a cheaper alternative.

Microsoft's secretive hardware efforts also come amid a period of Big Tech layoffs. But the firm's new Microsoft Silicon business, led by former Intel executive Rani Borkar, is growing and now has almost 1,000 employees, several hundred of whom are working on Athena. The software giant has invested about $2 billion in this effort so far, Bloomberg says. (And that's above the $11 billion it's invested in ChatGPT maker OpenAI.) Bloomberg also says that Microsoft intends to keep partnering with Nvidia too, and that it will continue buying Nvidia chipsets as needed.
Intel

Intel: Just You Wait. Again (mondaynote.com) 39

Analyst Jean-Louis Gassee, writing at Monday Note about Intel's habit of asking investors to wait for the company to catch up to the competition: Concurrently, the company's revenue for its new IFS foundry business decreased by 24% to an insignificant $118M, with a $140M operating loss gingerly explained as "increased spending to support strategic growth." Other Intel businesses such as Networking (NEX) products and Mobileye -- yet another Autonomous Driving Technology -- add nothing promising to the company's picture. This doesn't prevent [Intel CEO Pat] Gelsinger from once again intoning the Just You Wait refrain. This time, the promise is to "regain transistor performance and power performance leadership by 2025."

Is it credible?

We all agree that the US tech industry would be better served by Intel providing a better alternative to TSMC's and Samsung's advanced foundries. Indeed, We The Taxpayers are funding efforts to stimulate our country's semiconductor sector to the tune of $52B. I won't comment other than to reminisce about a difficult late 80s conversation with an industry CEO when, as an Apple exec, I naively opposed an attempt to combat the loss of semiconductor memory business to foreign competitors by subsidizing something tentatively called US Memories. But, in this really complicated 2023 world, what choices do we actually have?

For years I've watched Intel's repeated mistakes, the misplaced self-regard, the ineffective leadership changes for this Silicon Valley icon, for the inventor of the first commercial microprocessor, only to be disappointed time and again as the company failed to shake the Wintel yoke -- while Microsoft successfully diversified. I fervently hope Pat Gelsinger succeeds. His achievement would resonate deeply, it would bring to mind another historic turnaround: Steve Jobs' 1997 return to the Apple he had "left" in 1985.

Intel

Intel To Drop the 'i' Moniker In Upcoming CPU Rebrand (theregister.com) 107

When Intel debuts its forthcoming Meteor Lake client processors, the company may drop its iconic "i" CPU branding and add a new moniker. Chipzilla today told The Register "We are making brand changes as we're at an inflection point in our client roadmap in preparation for the upcoming launch of our Meteor Lake processors. We will provide more details regarding these exciting changes in the coming weeks." From the report: The Register asked Intel about branding after semiconductor analyst Dylan Patel on Monday tweeted "Imagine you're losing market share when you've been monopoly for decades, and your bright idea is to burn all brand recognition to the ground!" "That's Intel's plan by removing the 'i' in i7 i5 i3. All the decades brand recognition being lit on fire for no reason!"

Patel labelled the rebranding a "horrible very short sighted move" that won't fix Intel's woes and "will cause more harm than good, as many buyers know + recognize the i7 i5 branding, they won't once it's changed." "The new branding sounds bad with ultra strewn about + confusing scheme."

Patel's mention of "Ultra" branding appears to be a reference to this benchmark result for the game Ashes of the Singularity: Escalation, which lists a processor called "Intel Core Ultra 5 1003H".

Open Source

Red Hat's 30th Anniversary: How a Microsoft Competitor Rose from an Apartment-Based Startup (msn.com) 47

For Red Hat's 30th anniversary, North Carolina's News & Observer newspaper ran a special four-part series of articles.

In the first article Red Hat co-founder Bob Young remembers Red Hat's first big breakthrough: winning InfoWorld's "OS of the Year" award in 1998 — at a time when Microsoft's Windows controlled 85% of the market. "How is that possible," Young said, "that one of the world's biggest technology companies, on this strategically critical product, loses the product of the year to a company with 50 employees in the tobacco fields of North Carolina?" The answer, he would tell the many reporters who suddenly wanted to learn about his upstart company, strikes at "the beauty" of open-source software.

"Our engineering team is an order of magnitude bigger than Microsoft's engineering team on Windows, and I don't really care how many people they have," Young would say. "Like they may have thousands of the smartest operating system engineers that they could scour the planet for, and we had 10,000 engineers by comparison...."

Young was a 40-year-old Canadian computer equipment salesperson with a software catalog when he noticed what Marc Ewing was doing. [Ewing was a recent college graduate bored with his two-month job at IBM, selling customized Linux as a side hustle.] It's pretty primitive, but it's going in the right direction, Young thought. He began reselling Ewing's Red Hat product. Eventually, he called Ewing, and the two met at a tech conference in New York City. "I needed a product, and Marc needed some marketing help," said Young, who was living in Connecticut at the time. "So we put our two little businesses together."

Red Hat incorporated in March 1993, with the earliest employees operating the nascent business out of Ewing's Durham apartment. Eventually, the landlord discovered what they were doing and kicked them out.

The four articles capture the highlights. ("A visual effects group used its Linux 4.1 to design parts of the 1997 film Titanic.") And it doesn't leave out Red Hat's skirmishes with Microsoft. ("Microsoft was owned by the richest person in the world. Red Hat engineers were still linking servers together with extension cords.") "We were changing the industry and a lot of companies were mad at us," says Michael Ferris, Red Hat's VP of corporate development/strategy. Soon there were corporate partnerships with Netscape, Intel, Hewlett-Packard, Compaq, Dell, and IBM — and when Red Hat finally went public in 1999, its stock saw the eighth-largest first-day gain in Wall Street history, rising in value within days to over $7 billion and "making overnight millionaires of its earliest employees."

But there are also inspiring details, like the quote painted on the wall of Red Hat's headquarters in Durham: "Every revolution was first a thought in one man's mind; and when the same thought occurs to another man, it is the key to that era..." It's fun to see the story told by a local newspaper, with subheadings like "It started with a student from Finland" and "Red Hat takes on the Microsoft Goliath."

Something I'd never thought of. 2001's 9/11 terrorist attack on the World Trade Center "destroyed the principal data centers of many Wall Street investment banks, which were housed in the twin towers. With their computers wiped out, financial institutions had to choose whether to rebuild with standard proprietary software or the emergent open source. Many picked the latter." And by the mid-2000s, "Red Hat was the world's largest provider of Linux," according to part two of the series. "Soon, Red Hat was servicing more than 90% of Fortune 500 companies." By then, even the most vehement former critics were amenable to Red Hat's kind of software. Microsoft had begun to integrate open source into its core operations. "Microsoft was on the wrong side of history when open source exploded at the beginning of the century, and I can say that about me personally," Microsoft President Brad Smith later said.

In the 2010s, "open source has won" became a popular tagline among programmers. After years of fighting for legitimacy, former Red Hat executives said victory felt good. "There was never gloating," Tiemann said.

"But there was always pride."

In 2017 Red Hat's CEO answered questions from Slashdot's readers.
Privacy

The DOJ Detected the SolarWinds Hack 6 Months Earlier Than First Disclosed (wired.com) 19

An anonymous reader quotes a report from Wired: The U.S. Department of Justice, Mandiant, and Microsoft stumbled upon the SolarWinds breach six months earlier than previously reported, WIRED has learned, but were unaware of the significance of what they had found. The breach, publicly announced in December 2020, involved Russian hackers compromising the software maker SolarWinds and inserting a backdoor into software served to about 18,000 of its customers. That tainted software went on to infect at least nine US federal agencies, among them the Department of Justice (DOJ), the Department of Defense, Department of Homeland Security, and the Treasury Department, as well as top tech and security firms including Microsoft, Mandiant, Intel, Cisco, and Palo Alto Networks. The hackers had been in these various networks for between four and nine months before the campaign was exposed by Mandiant.

WIRED can now confirm that the operation was actually discovered by the DOJ six months earlier, in late May 2020 -- but the scale and significance of the breach wasn't immediately apparent. Suspicions were triggered when the department detected unusual traffic emanating from one of its servers that was running a trial version of the Orion software suite made by SolarWinds, according to sources familiar with the incident. The software, used by system administrators to manage and configure networks, was communicating externally with an unfamiliar system on the internet. The DOJ asked the security firm Mandiant to help determine whether the server had been hacked. It also engaged Microsoft, though it's not clear why the software maker was also brought onto the investigation.

It's not known what division of the DOJ experienced the breach, but representatives from the Justice Management Division and the US Trustee Program participated in discussions about the incident. The Trustee Program oversees the administration of bankruptcy cases and private trustees. The Management Division advises DOJ managers on budget and personnel management, ethics, procurement, and security. Investigators suspected the hackers had breached the DOJ server directly, possibly by exploiting a vulnerability in the Orion software. They reached out to SolarWinds to assist with the inquiry, but the company's engineers were unable to find a vulnerability in their code. In July 2020, with the mystery still unresolved, communication between investigators and SolarWinds stopped. A month later, the DOJ purchased the Orion system, suggesting that the department was satisfied that there was no further threat posed by the Orion suite, the sources say.

According to WIRED, the DOJ said it "notified the US Cybersecurity and Infrastructure Security Agency (CISA) about the breach at the time it occurred -- though a US National Security Agency spokesperson expressed frustration that the agency was not also notified."

"But in December 2020, when the public learned that a number of federal agencies were compromised in the SolarWinds campaign -- the DOJ among them -- neither the DOJ nor CISA revealed to the public that the operation had unknowingly been found months earlier. The DOJ initially said its chief information officer had discovered the breach on December 24."
China

Chinese Hackers Outnumber FBI Cyber Staff 50 To 1, Bureau Director Says (cnbc.com) 48

According to FBI Director Christopher Wray, Chinese hackers vastly outnumber U.S. cyber intelligence staff "by at least 50 to 1." CNBC reports: "To give you a sense of what we're up against, if each one of the FBI's cyber agents and intel analysts focused exclusively on the China threat, Chinese hackers would still outnumber FBI Cyber personnel by at least 50 to 1," Wray said in prepared remarks for a budget hearing before a House Appropriations subcommittee on Thursday. The disclosure highlights the massive scale of cyber threats the U.S. is facing, particularly from China. Wray said the country has "a bigger hacking program than every other major nation combined and have stolen more of our personal and corporate data than all other nations -- big or small -- combined."

The agency is requesting about $63 million to help it beef up its cyber staff with 192 new positions. Wray said this would also help the FBI put more cyber staff in field offices to be closer to where victims of cyber crimes actually are.

Intel

Intel Reports Largest Quarterly Loss In Company History (cnbc.com) 61

In the company's first-quarter earnings results (PDF) on Wednesday, Intel reported a 133% annual reduction in earnings per share. "Revenue dropped nearly 36% year over year to $11.7 billion," adds CNBC. From the report: In the first quarter, Intel swung to a net loss of $2.8 billion, or 66 cents per share, from a net profit of $8.1 billion, or $1.98 per share, last year. Excluding the impact of inventory restructuring, a recent change to employee stock options and other acquisition-related charges, Intel said it lost 4 cents a share, which was a narrower loss than analysts had expected. Revenue decreased to $11.7 billion from $18.4 billion a year ago.
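Those headline percentages can be sanity-checked with a bit of arithmetic using the dollar figures quoted above (a quick sketch, not taken from the article itself):

```python
# Intel Q1 2023 vs. Q1 2022, using the per-share and revenue figures quoted by CNBC.
eps_prior = 1.98     # net profit of $1.98 per share a year earlier
eps_current = -0.66  # net loss of 66 cents per share this quarter

# Year-over-year EPS change: a swing from +$1.98 to -$0.66
eps_change = (eps_current - eps_prior) / eps_prior * 100
print(f"EPS change: {eps_change:.0f}%")  # about -133%, i.e. the "133% reduction"

rev_prior = 18.4     # revenue a year ago, in billions of dollars
rev_current = 11.7   # revenue this quarter, in billions of dollars
rev_change = (rev_current - rev_prior) / rev_prior * 100
print(f"Revenue change: {rev_change:.1f}%")  # roughly -36%, matching "nearly 36%"
```

The "133% reduction" simply reflects that EPS fell past zero: the decline is larger than the entire prior-year figure.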

It's the fifth consecutive quarter of falling sales for the semiconductor giant and the second consecutive quarter of losses. It's also Intel's largest quarterly loss of all time, beating out the fourth quarter of 2017, when it lost $687 million. Intel hopes that by 2026 it can manufacture chips as advanced as those made by TSMC in Taiwan, and that it can compete for custom work like Apple's A-series chips in iPhones. Intel said on Thursday it was still on track to hit that goal.

Intel's Client Computing group, which includes the chips that power the majority of desktop and laptop Windows PCs, reported $5.8 billion in revenue, down 38% on an annual basis. Intel's server chip division, under its Data Center and AI segment, suffered an even worse decline, falling 39% to $3.7 billion. Its smallest full line of business, Network and Edge, posted $1.5 billion in sales, down 30% from the same time last year. One bright spot was Mobileye, which went public last year but is still controlled by Intel. Mobileye makes systems and software for self-driving cars, and it reported 16% sales growth to $458 million.
