Intel

Intel To Cut Jobs in Cost-Savings Drive as PC Slump Weighs on Earnings (wsj.com) 14

Intel has embarked on an aggressive cost-cutting push and is considering divestitures as the chip maker tries to navigate a sharp plunge in demand for PCs that has weighed on the company's earnings. From a report: Intel posted a 20% drop in third-quarter sales, issued a forecast for even weaker revenue in the current quarter and lowered its full-year outlook. The company is beginning targeted job cuts and making other adjustments including reducing factory hours to cope with the economic downturn, Chief Executive Pat Gelsinger said in an interview Thursday. He wouldn't specify how many of Intel's more than 120,000 employees would be affected.

"We are aggressively addressing costs and driving efficiencies across the business," he said. He added that the company was looking at possible divestitures, among other moves. Intel said it was working to deliver $3 billion in cost reductions in 2023, growing to $8 billion to $10 billion in annualized cost reductions and efficiency gains by the end of 2025. The company took a $664 million restructuring charge in the third quarter to reflect initial cost reductions.

Data Storage

How a Redditor Ended Up With an Industrial-Grade Netflix Server (vice.com) 40

A Redditor says they've managed to get hold of an old Netflix server for free, and has posted a detailed online look at the once-mysterious hardware. The devices were part of Netflix's Open Connect Content Delivery Network (CDN), and can often be found embedded within major ISP networks to ensure your Netflix streams don't suck. From a report: Reddit user PoisonWaffle3 said the ISP he currently works for has been offloading old Netflix servers as it upgrades to more modern equipment. In a Reddit thread titled "So I got a Netflix cache server..." he posted a photo of the server, which is bright Netflix red, and explained how he was curious about what's inside the boxes given how little public information was available.

"All I could find online was overviews, installation/config guides for their proprietary software, etc.," he said. "No specs, no clue what was inside the red box." Dave Temkin, Netflix's former Vice President of Network Systems Infrastructure, told Motherboard there's nothing too mysterious about what the servers can do, though they significantly help improve video streaming by shortening overall content transit time. "They're just an Intel FreeBSD box," he said. "We got Linux running on some of the generations of that box as well."

Netflix's Open Connect Content Delivery Network hardware caches popular Netflix content to reduce overall strain across broadband networks. Netflix lets major broadband ISPs embed a CDN server on the ISP network for free; the shorter transit time then helps improve video delivery, to the benefit of broadband providers and Netflix alike. It took all of three screws for PoisonWaffle3 to get inside the mysterious red unit, at which point he discovered a "fairly standard" Supermicro board, a single Xeon E5 2650L v2 processor, 64GB of DDR3 memory, and a 10 gigabit ethernet card. He also found 36 7.2TB 7200RPM drives and six 500GB Micron solid state drives, for a grand total of 262 terabytes of storage.
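As a quick sanity check, the drive counts reported in the teardown do add up to the quoted total (a back-of-the-envelope sketch, not anything from the Reddit post itself):

```python
# Capacity check on the reported Netflix cache server storage.
hdd_count, hdd_tb = 36, 7.2   # 36 spinning 7200RPM drives at 7.2 TB each
ssd_count, ssd_tb = 6, 0.5    # 6 Micron SSDs at 500 GB (0.5 TB) each

total_tb = hdd_count * hdd_tb + ssd_count * ssd_tb
print(f"{total_tb:.1f} TB")   # -> 262.2 TB, matching the ~262 TB figure
```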

Intel

Intel CEO Calls New US Restrictions on Chip Exports To China Inevitable (wsj.com) 9

Intel Chief Executive Pat Gelsinger said that recently imposed U.S. restrictions on semiconductor-industry exports to China were inevitable as America seeks to maintain technological leadership in competition with China. From a report: Speaking at The Wall Street Journal's annual Tech Live conference, Mr. Gelsinger said the restrictions, which require chip companies to obtain a license to export certain advanced artificial-intelligence and supercomputing chips as well as equipment used in advanced manufacturing, are part of a necessary shift of chip supply chains. "I viewed this geopolitically as inevitable," Mr. Gelsinger said. "And that's why the rebalancing of supply chains is so critical." His comments Monday followed high-profile public lobbying of Congress to pass the bipartisan Chips and Science Act, which extends nearly $53 billion in subsidies for research and development and to build or expand fabs in the U.S., in July. Mr. Gelsinger was a leading advocate for the legislation.

Mr. Gelsinger has embarked on a massive expansion of chip plants, referred to as fabs. The company has announced plans to erect new facilities in Ohio, Germany and elsewhere since Mr. Gelsinger took over last year at a combined cost potentially topping $100 billion. "Where the oil reserves are defined geopolitics for the last five decades. Where the fabs are for the next five decades is more important," Mr. Gelsinger said Monday. Mr. Gelsinger said the ambition for efforts to boost domestic chip manufacturing in Western countries was to shift from about 80% in Asia to about 50% by the end of the decade, with the U.S. taking 30% and Europe the remaining 20%. "We would all feel so good" if that were to happen, he said.

United States

Purdue University Races To Expand Semiconductor Education To Fill Yawning Workforce Gap That Threatens Reshoring Effort (washingtonpost.com) 56

An anonymous reader shares a report: On a recent afternoon, an unusual group of visitors peered through a window at Purdue University students tinkering in a lab: two dozen executives from the world's biggest semiconductor companies. The tech leaders had traveled to the small-town campus on the Wabash River to fix one of the biggest problems that they -- and the U.S. economy -- face: a desperate shortage of engineers. Leading the visitors on a tour of the high-tech lab, Engineering Professor Zhihong Chen mentioned that Purdue could really use some donated chip-making equipment as it scrambles to expand semiconductor education. "Okay, done. We can do that," Intel manufacturing chief Keyvan Esfarjani quickly replied. Just weeks before, his company broke ground on two massive chip factories in Ohio that aim to employ 3,000 people.

Computer chips are the brains that power all modern electronics, from smartphones to fighter jets. The United States used to build a lot of them but now largely depends on Asian manufacturers, a reliance that the Biden administration sees as a major economic and national security risk. Hefty new government subsidies aimed at reshoring manufacturing are sparking a construction boom of new chip factories, but a dire shortage of engineers threatens the ambitious project. By some estimates, the United States needs at least 50,000 new semiconductor engineers over the next five years to staff all of the new factories and research labs that companies have said they plan to build with subsidies from the Chips and Science Act, a number far exceeding current graduation rates nationwide, according to Purdue. Additionally, legions of engineers in other specialties will be needed to deliver on other White House priorities, including the retooling of auto manufacturing for electric vehicles and the production of technology aimed at reducing U.S. dependence on fossil fuels.

Hardware

Memtest86+ Is Back After 9 Years (tomshardware.com) 60

Memtest86+ just got its first update after 9 years. The program has reportedly been rewritten from scratch and is back in active development. The new version, 6.0, features a plethora of updates to bring the application up to date and support the latest system hardware from Intel and AMD. Tom's Hardware reports: For the uninitiated, MemTest86 was originally created back in the mid 1990s, and was one of the earliest memory testing applications for personal computers. But development stopped in 2013, when Memtest86 was split into MemTest86 and Memtest86+, with the former being bought by PassMark. Officially, we don't know why development stopped. Memtest86+ is the open-source variant of what is now the proprietary MemTest86.

Needless to say, version 6.00 features a lot of updates, which were required to bring it up to modern standards compared to the 2013 version. The new version includes completely rewritten code for UEFI-based motherboards, the modern version of a BIOS, for both 32-bit and 64-bit versions of the application. Furthermore, the application features added support for x64 long mode paging, support for up to 256 cores, added detection for DDR4 and DDR5 memory -- since DDR3 was the latest memory standard in 2013 -- and adds support for XMP version 3.0.

CPU support has been significantly enhanced, adding detection for all AMD pre-Zen and Zen-based processors, ranging from the Ryzen 1000 series to the 7000 series, as well as any older parts made after 2013. Intel support has also been added for chips up to 13th Gen Raptor Lake. Finally, the patch notes indicate version 6.0 adds support for older Nvidia and AMD chipsets -- probably pre-2010, since it mentions Nvidia nForce chipsets -- along with numerous bug fixes, optimizations and enhancements.
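At its core, a memory tester works by writing known patterns to RAM and reading them back to catch stuck or coupled bits. A toy Python sketch of the classic moving-inversions idea (a bytearray stands in for physical memory; the real Memtest86+ runs bare-metal with direct physical addressing and many more patterns):

```python
def moving_inversions(mem: bytearray, pattern: int = 0x55) -> list:
    """Write a pattern, verify it, then repeat with its complement.
    Returns the offsets of any bytes that read back wrong."""
    failures = []
    for pat in (pattern & 0xFF, ~pattern & 0xFF):
        for i in range(len(mem)):      # ascending write pass
            mem[i] = pat
        for i in range(len(mem)):      # verify pass
            if mem[i] != pat:
                failures.append(i)
    return failures

# A Python bytearray can't develop real bit errors, so this reports none:
print(moving_inversions(bytearray(4096)))  # -> []
```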

Intel

The Linux Kernel May Finally Phase Out Intel i486 CPU Support (phoronix.com) 154

"Linus Torvalds has backed the idea of possibly removing Intel 486 (i486) processor support from the Linux kernel," reports Phoronix: Since the Linux kernel dropped i386 support a decade ago, i486 has been the minimum x86 processor supported by the mainline Linux kernel. This latest attempt to kill off i486 support arose from Linus Torvalds himself, who floated the idea of requiring x86 32-bit CPUs to support "cmpxchg8b", which would mean Pentium CPUs and later:

Maybe we should just bite the bullet, and say that we only support x86-32 with 'cmpxchg8b' (ie Pentium and later).

Get rid of all the "emulate 64-bit atomics with cli/sti, knowing that nobody has SMP on those CPU's anyway", and implement a generic x86-32 xchg() setup using that try_cmpxchg64 loop.

I think most (all?) distros already enable X86_PAE anyway, which makes that X86_CMPXCHG64 be part of the base requirement.

Not that I'm convinced most distros even do 32-bit development anyway these days.... We got rid of i386 support back in 2012. Maybe it's time to get rid of i486 support in 2022?
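The try_cmpxchg64 loop Torvalds mentions is the standard trick of building other atomic operations out of a single compare-and-exchange primitive. A toy Python model of the idea (the kernel does this on raw memory with the cmpxchg8b instruction; here a lock simulates the hardware's atomicity, and the class name is purely illustrative):

```python
import threading

class AtomicU64:
    """Toy model: every atomic op built from one CAS primitive."""
    def __init__(self, value: int = 0):
        self._value = value
        self._lock = threading.Lock()  # stands in for CPU-level atomicity

    def try_cmpxchg(self, expected: int, new: int):
        """Atomically: if value == expected, store new. Returns (ok, old)."""
        with self._lock:
            old = self._value
            if old == expected:
                self._value = new
                return True, old
            return False, old

    def xchg(self, new: int) -> int:
        """Atomic exchange implemented as a CAS retry loop."""
        old = self._value              # racy read; the loop corrects it
        while True:
            ok, old = self.try_cmpxchg(old, new)
            if ok:
                return old

v = AtomicU64(1)
print(v.xchg(2))   # -> 1 (returns the previous value)
```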

Towards the end of his post, Torvalds makes the following observation about i486 systems: "At some point, people have them as museum pieces. They might as well run museum kernels."

Intel

Overclocker Breaks CPU Frequency World Record with Intel's Raptor Lake Core i9-13900K (tomshardware.com) 50

Hardcore overclocker Elmor "officially broke the CPU frequency world record with Intel's brand-new Core i9-13900K 24-core processor," reports Tom's Hardware -- by hitting "a staggering 8.812GHz using liquid nitrogen cooling, dethroning the 8-year reigning champion, the FX-8370, by 90MHz." That's right; it took eight years for a new CPU architecture to dethrone AMD's FX series processors. Those chips are infamous for their mediocre CPU performance at launch; however, they scaled incredibly well under liquid nitrogen overclocking....

Elmor accomplished this monumental feat thanks to Intel's new highly-clocked 13th Gen Raptor Lake CPU architecture. Out of the box, the Core i9-13900K can run over 5.5GHz on all P-cores while also hitting 5.8GHz under lightly threaded workloads. The 13900K is, by far, Intel's highest-clocking chip to date.

Open Source

Google Announces GUAC Open-Source Project On Software Supply Chains (therecord.media) 2

Google unveiled a new open source security project on Thursday centered around software supply chain management. The Record reports: Given the acronym GUAC -- which stands for Graph for Understanding Artifact Composition -- the project is focused on creating sets of data about a software's build, security and dependency. Google worked with Purdue University, Citibank and supply chain security company Kusari on GUAC, a free tool built to bring together many different sources of software security metadata. Google has also assembled a group of technical advisory members to help with the project -- including IBM, Intel, Anchore and more.

Google's Brandon Lum, Mihai Maruseac, Isaac Hepworth pitched the effort as one way to help address the explosion in software supply chain attacks -- most notably the widespread Log4j vulnerability that is still leaving organizations across the world exposed to attacks. "GUAC addresses a need created by the burgeoning efforts across the ecosystem to generate software build, security, and dependency metadata," they wrote in a blog post. "GUAC is meant to democratize the availability of this security information by making it freely accessible and useful for every organization, not just those with enterprise-scale security and IT funding."

Google shared a proof of concept of the project, which allows users to search data sets of software metadata. The three explained that GUAC effectively aggregates software security metadata into a database and makes it searchable. They used the example of a CISO or compliance officer that needs to understand the "blast radius" of a vulnerability. GUAC would allow them to "trace the relationship between a component and everything else in the portfolio." Google says the tool will allow anyone to figure out the most used critical components in their software supply chain ecosystem, the security weak points and any risky dependencies. As the project evolves, Maruseac, Lum and Hepworth said the next part of the work will center around scaling the project and adding new kinds of documents that can be submitted and ingested by the system.
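The "blast radius" query described above is, at its core, a reverse-dependency traversal over the metadata graph. A minimal sketch with made-up package names (GUAC's actual data model and query interface differ):

```python
from collections import deque

# Hypothetical dependency data: edges point from a package to what it uses.
deps = {
    "web-app":    ["api-lib", "log4j"],
    "api-lib":    ["log4j"],
    "batch-job":  ["report-gen"],
    "report-gen": [],
    "log4j":      [],
}

def blast_radius(vulnerable: str, deps: dict) -> set:
    """Everything that transitively depends on the vulnerable component."""
    # Invert the edges so we can walk from the vulnerability outward.
    rdeps = {}
    for pkg, ds in deps.items():
        for d in ds:
            rdeps.setdefault(d, []).append(pkg)
    affected, queue = set(), deque([vulnerable])
    while queue:
        for parent in rdeps.get(queue.popleft(), []):
            if parent not in affected:
                affected.add(parent)
                queue.append(parent)
    return affected

print(sorted(blast_radius("log4j", deps)))  # -> ['api-lib', 'web-app']
```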

Intel

Intel Sued Over Historic DEC Chip Site's Future (theregister.com) 43

Intel is being taken to court in Massachusetts over its proposals to build a distribution and logistics warehouse on the site of its defunct R&D offices and chip factory that closed in 2013. The Register reports: At the heart of this showdown are claims by townsfolk that Intel has not revealed to the surrounding community what exactly it intends to build, and that the land is supposed to be used for industry and manufacturing yet it appears a huge commercial warehouse will be built instead. The x86 giant has spent years trying to figure out what to do with the campus -- whether to salvage it for production or research, or to sell it to a developer. It came close to securing a buyer earlier this year.

The site in question is at 75 Reed Road in Hudson, Massachusetts, which holds a special place in computer history. It was the home of Digital Equipment Corporation's R&D and chip manufacturing before Intel took over the land and facility following a patent battle with DEC in 1997. Intel continued R&D at the site and kept it producing chips until it threw in the towel, leaving the location open to options. Ultimately, the site was put up for sale, with Intel planning to demolish the 40-year-old main buildings while offloading the land. However, the chipmaker, perhaps in response to a revitalization of American semiconductor manufacturing funded by CHIPS Act government subsidies, decided it wants to remake the property into a distribution, logistics, and storage facility -- something that might sound innocuous but has the nearby community up in arms.

Further, Intel doesn't have to use the redeveloped site for its own purposes at all: it can, and probably will, market the facility to a future tenant. And it can breeze through planning law requirements without having to reveal the full scope of traffic, pollution, and other impacts due to its status as a "logistics" facility. And that is what really has the locals enraged. Crucially, the site is adjacent to two retirement villages with 286 units and a childcare center. As a former R&D and manufacturing facility, neighboring communities understood the scope of traffic and resource impacts of such a factory. [...] The even bigger problem is that this represents another example of a large tech company wheedling its way through local restrictions to build community-damning facilities, said Michael Pill, the lawyer representing both retirement condo facilities and the childcare center in their legal challenge [PDF] to Intel.

"What Intel has done here is something deeply unpleasant that grows out of its desire to dump the property without any thought to the community where they were once an important pillar of manufacturing," Pill told The Register. "There is a pattern of development in which big companies come sailing into towns, saying they'll build million-plus square foot facilities with hundreds of loading docks and all the planning is done on spec."

In response to the lawsuit, Intel's lawyers said in a filing that the proposed changes are subject to approval by the town: "Because the proposed redevelopment is a permitted use in the zoning district, the project will require site plan review from the town of Hudson planning board."

IT

USB-C Can Hit 120Gbps With Newly Published USB4 Version 2.0 Spec (arstechnica.com) 69

An anonymous reader shares a report: We've said it before, and we'll say it again: USB-C is confusing. A USB-C port or cable can support a range of speeds, power capabilities, and other features, depending on the specification used. Today, USB-C can support various data transfer rates, from 0.48Gbps (USB 2.0) all the way to 40Gbps (USB4, Thunderbolt 3, and Thunderbolt 4). Things are only about to intensify, as today the USB Implementers Forum (USB-IF) published the USB4 Version 2.0 spec. It adds optional support for 80Gbps bidirectional bandwidth as well as the optional ability to send or receive data at up to 120Gbps.

The USB-IF first gave us word of USB4 Version 2.0 in September, saying it would support a data transfer rate of up to 80Gbps in either direction (40Gbps per lane, four lanes total), thanks to a new physical layer architecture (PHY) based on PAM-3 signal encoding. For what it's worth, Intel has also demoed Thunderbolt at 80Gbps but hasn't released an official spec yet. USB4 Version 2.0 offers a nice potential bump over the original USB4 spec, which introduced optional support for 40Gbps operation. You just have to check the spec sheets to know what sort of performance you're getting. Once USB4 Version 2.0 products come out, you'll be able to hit 80Gbps with USB-C passive cables that currently operate at 40Gbps, but you'll have to buy a new cable if you want a longer, active 80Gbps one.
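The 80Gbps and 120Gbps headline figures fall straight out of the lane arithmetic: a USB-C cable carries four 40Gbps lanes, split two-and-two in the normal symmetric mode or three-and-one in the optional asymmetric mode (a sketch based on the USB-IF's published numbers):

```python
LANE_GBPS = 40    # per-lane signaling rate with the new PAM-3 encoding
TOTAL_LANES = 4   # high-speed lanes in a USB-C cable

# Symmetric mode: two lanes each direction.
tx_sym = rx_sym = 2 * LANE_GBPS
print(tx_sym, rx_sym)   # -> 80 80

# Optional asymmetric mode: three lanes one way, one the other.
tx, rx = 3 * LANE_GBPS, 1 * LANE_GBPS
print(tx, rx)           # -> 120 40
```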

Software

VirtualBox 7.0 Adds First ARM Mac Client, Full Encryption, Windows 11 TPM (arstechnica.com) 19

Nearly four years after its last major release, VirtualBox 7.0 arrives with a... host of new features. Chief among them are Windows 11 support via TPM, EFI Secure Boot support, full encryption for virtual machines, and a few Linux niceties. From a report: The big news is support for Secure Boot and TPM 1.2 and 2.0, which makes it easier to install Windows 11 without registry hacks (the kind Oracle recommended for 6.1 users). It's strange to think of people unable to satisfy Windows 11's security requirements on their physical hardware yet able to do so with a couple of clicks in VirtualBox, but here we are. VirtualBox 7.0 also allows virtual machines to run with full encryption -- not just inside the guest OS, but also for logs, saved states, and other files connected to the VM. This support only works through the command line "for now," Oracle notes in the changelog.

This is the first official VirtualBox release with a Developer Preview for ARM-based Macs. Having loaded it on an M2 MacBook Air, I can report that the VirtualBox client informs you, extensively and consistently, about the non-production nature of your client. The changelog notes that it's an "unsupported work in progress" that is "known to have very modest performance." A "Beta Warning" shows up in the (new and unified) message center, and in the upper-right corner, a "BETA" warning on the window frame is stacked on top of a construction-style "Dev Preview" warning sign. It's still true that ARM-based Macs don't allow for running operating systems written for Intel or AMD-based processors inside virtual machines. You will, however, be able to run ARM-based Linux installations in macOS Ventura that can themselves run x86 binaries using Rosetta, Apple's own translation layer.

Microsoft

Microsoft Unveils Surface Pro 9 With Choice of Intel or ARM Models, No Headphone Jack (theverge.com) 79

Earlier today, Microsoft unveiled three new Surface computers: the Surface Pro 9, Surface Laptop 5, and Surface Studio 2+. While this year's Surface Pro 9 remains very similar to last year's Surface Pro 8, it's being offered with refreshed Intel 12th-gen CPUs or a "new 5G-equipped model with a custom SQ 3 Arm chip," reports Engadget. From the report: If that sounds confusing to you, well, it is. We last saw the company's SQ chip in the 2020 Surface Pro X, a computer that we found both beautiful and frustrating, thanks to Windows' crummy software compatibility with Arm chips. To shift that problem over to a computer with the same name as its Intel sibling is a recipe for disaster. (We can just imagine the frustrated Best Buy shoppers who are dazzled with the idea of a 5G Surface, only to learn they can't run most of their traditional Windows apps.) The 5G Pro 9 is also broken down into millimeter-wave and Sub-6 variants, which will be sold in their respective markets. It's understandable why Microsoft isn't keen to keep the Surface Pro X moniker going -- the Pro 8 lifted many of its modern design cues, after all. But from what we've seen, Windows 11 doesn't solve the problems we initially had with the Pro X.

After analyzing the product's tech specs, The Verge discovered that the Surface Pro 9 no longer appears to have a headphone jack. From the report: This seems to be the direct result of Microsoft bringing the Intel and Arm versions of the Surface Pro 9 together in the same chassis. The Surface Pro X has never had a 3.5mm jack, so now, the Intel hardware is coming in line with that design direction. But I'd argue it's a more controversial omission this time. Why? The new universal outer enclosure is essentially the same size as that of the Surface Pro 8.

The Surface Pro X hardware was quite a bit thinner than Microsoft's Intel hardware at the time (and still now). So excising the 3.5mm jack made sense. But we've now lost the headphone jack for a chassis that's basically identical in dimensions to last year's model. They really couldn't fit one on there somewhere?

Further reading: Microsoft's Surface Studio 2 Plus Ships With an RTX 3060 for $4,299

Microsoft

Microsoft's Surface Studio 2 Plus Ships With an RTX 3060 for $4,299 (theverge.com) 57

It's been a long time since Microsoft updated its Surface Studio line of all-in-one PCs. While rumors had suggested a Surface Studio 3 was on the way, Microsoft is debuting its Surface Studio 2 Plus today instead -- an upgrade on the Surface Studio 2 that launched four years ago. It includes some important upgrades on the inside, but the exterior is practically the same, and it all starts at an eye-watering $4,299. From a report: The Surface Studio 2 Plus will ship with Intel's 11th Gen Core i7-11370H processor, a chip that's rapidly approaching two years on the market. We're about to enter Intel's 13th Gen era, so it's hugely disappointing to see Microsoft not move to 12th Gen H series chips or wait for Intel's latest and greatest. "Our goal was ship to market sooner, especially for a lot of our commercial customers... so we focused on stability and supply with known good parts because the difference from 11th to 12th Gen on the H series wasn't something we needed to push for," explains Pete Kyriacou, vice president of program management at Microsoft, in an interview with The Verge.

Despite the disappointing CPU choice, Microsoft has opted for a graphics card upgrade here. The Surface Studio 2 Plus comes with Nvidia's RTX 3060 laptop GPU with 6GB of VRAM. Microsoft has redesigned its Surface Studio 2 Plus motherboard, and the RTX 3060 itself will be running at around 60-70 watts in a laptop configuration. Microsoft hides all of the components in the Studio 2 Plus inside a little laptop-like enclosure underneath the 28-inch display.

Businesses

Intel Plans Thousands of Job Cuts In Face of PC Slowdown 50

An anonymous reader quotes a report from Bloomberg: Intel is planning a major reduction in headcount, likely numbering in the thousands, to cut costs and cope with a sputtering personal-computer market, according to people with knowledge of the situation. The layoffs will be announced as early as this month, with the company planning to make the move around the same time as its third-quarter earnings report on Oct. 27, said the people, who asked not to be identified because the deliberations are private. The chipmaker had 113,700 employees as of July. Some divisions, including Intel's sales and marketing group, could see cuts affecting about 20% of staff, according to the people.

Intel is facing a steep decline in demand for PC processors, its main business, and has struggled to win back market share lost to rivals like Advanced Micro Devices Inc. In July, the company warned that 2022 sales would be about $11 billion lower than it previously expected. Analysts are predicting a third-quarter revenue drop of roughly 15%. And Intel's once-enviable margins have shriveled: They're about 15 percentage points narrower than historical numbers of around 60%. During its second-quarter earnings call, Intel acknowledged that it could make changes to improve profits. "We are also lowering core expenses in calendar year 2022 and will look to take additional actions in the second half of the year," Chief Executive Officer Pat Gelsinger said at the time.

Intel's last big wave of layoffs occurred in 2016, when it trimmed about 12,000 jobs, or 11% of its total. The company has made smaller cuts since then and shuttered several divisions, including its cellular modem and drone units. Like many companies in the technology industry, Intel also froze hiring earlier this year, when market conditions soured and fears of a recession grew. Gelsinger took the helm at Intel last year and has been working to restore the company's reputation as a Silicon Valley legend. But even before the PC slump, it was an uphill fight. Intel lost its long-held technological edge, and its own executives acknowledge that the company's culture of innovation withered in recent years. Now a broader slowdown is adding to those challenges. Intel's PC, data center and artificial intelligence groups are contending with a tech spending downturn, weighing on revenue and profit.

Google

Intel and Google Cloud Launch New Chip To Improve Data Center Performance (reuters.com) 17

Intel and Google Cloud on Tuesday said they have launched a co-designed chip that can make data centers more secure and efficient. From a report: The E2000 chip, code named Mount Evans, takes over the work of packaging data for networking from the expensive central processing units (CPU) that do the main computing. It also offers better security between different customers that may be sharing CPUs in the cloud, explained Google's vice president of engineering, Amin Vahdat. Chips are made up of basic processors called cores. There can be hundreds of cores on a chip and sometimes information can bleed between them. The E2000 creates secure routes to each core to prevent such a scenario. Companies are running increasingly complex algorithms, using progressively bigger data sets, at a time when the performance improvement of chips like CPUs is slowing down. Cloud companies are therefore looking for ways to make the data center itself more productive.

Intel

Intel Confirms Alder Lake BIOS Source Code Leaked (tomshardware.com) 61

Tom's Hardware reports: We recently broke the news that Intel's Alder Lake BIOS source code had been leaked to 4chan and Github, with the 6GB file containing tools and code for building and optimizing BIOS/UEFI images. We reported the leak within hours of the initial occurrence, so we didn't yet have confirmation from Intel that the leak was genuine. Intel has now issued a statement to Tom's Hardware confirming the incident:

"Our proprietary UEFI code appears to have been leaked by a third party. We do not believe this exposes any new security vulnerabilities as we do not rely on obfuscation of information as a security measure. This code is covered under our bug bounty program within the Project Circuit Breaker campaign, and we encourage any researchers who may identify potential vulnerabilities to bring them to our attention through this program...."

The BIOS/UEFI of a computer initializes the hardware before the operating system has loaded, so among its many responsibilities, is establishing connections to certain security mechanisms, like the TPM (Trusted Platform Module). Now that the BIOS/UEFI code is in the wild and Intel has confirmed it as legitimate, both nefarious actors and security researchers alike will undoubtedly probe it to search for potential backdoors and security vulnerabilities....

Intel hasn't confirmed who leaked the code or where and how it was exfiltrated. However, we do know that the GitHub repository, now taken down but already replicated widely, was created by what appears to be an employee of LC Future Center, a China-based ODM that manufactures laptops for several OEMs, including Lenovo.

Thanks to Slashdot reader Hmmmmmm for sharing the news.

China

China May Prove Arm Wrong About RISC-V's Role In the Datacenter (theregister.com) 49

Arm might not think RISC-V is a threat to its newfound foothold in the datacenter, but growing pressure on Chinese chipmaking could ultimately change that, Forrester Research analyst Glenn O'Donnell tells The Register. From the report: Over the past few years the US has piled on export bans and trade restrictions on Chinese chipmakers in an effort to stall the country's semiconductor industry. This has included barring companies with ties to the Chinese military from purchasing x86 processors and AI kit from the likes of Intel, AMD, and Nvidia. "Because the US-China trade war restricts x86 sales to China, Chinese infrastructure vendors and cloud providers need to adapt to remain in business," O'Donnell said. "They initially pivoted to Arm, but trade restrictions exist there too. Chinese players are showing great interest in RISC-V."

RISC-V provides China with a shortcut around the laborious prospect of developing their own architecture. "Coming up with a whole new architecture is nearly impossible," O'Donnell said. But "a design based on some architecture is very different from the architecture itself." So it should come as no surprise that the majority of RISC-V members are based in China, according to a report published last year. And the country's government-backed Chinese Academy of Sciences is actively developing open source RISC-V performance processors.

Alibaba's T-Head, which is already deploying Arm server processors and smartNICs, is also exploring RISC-V-based platforms. But for now, they're largely limited to edge and IoT appliances. However, O'Donnell emphasizes that there is no technical reason that would prevent someone from developing a server-grade RISC-V chip. "Similar to Arm, many people dismiss RISC-V as underpowered for more demanding applications. They are wrong. Both are architectures, not specific designs. As such, one can design a powerful processor based on either architecture," he said. [...] One of the most attractive things about RISC-V over Softbank-owned Arm is the relatively low cost of building chips based on the tech, especially for highly commoditized use cases like embedded processors, O'Donnell explained. While nowhere as glamorous as something like a server CPU, embedded applications are one of RISC-V's first avenues into the datacenter. [...] These embedded applications are where O'Donnell expects RISC-V will see widespread adoption, including in the datacenter. Whether the open source ISA will rise to the level of Arm or x86 is another matter entirely.

United States

The Biden Administration Issues Sweeping New Rules on Chip-Tech Exports To China (protocol.com) 90

The U.S. unveiled a set of new regulations Friday that aim to choke off China's access to advanced chips, the tools necessary to manufacture years-old designs, and the service and support mechanisms needed to keep chip fabrication systems running smoothly. From a report: On a briefing call with reporters Thursday, administration officials said the goal is to block the People's Liberation Army and China's domestic surveillance apparatus from gaining access to advanced computing capabilities that require the use of advanced semiconductors. The chips, tools, and software are helping China's military, including aiding the development of weapons of mass destruction, according to the officials, who asked to remain anonymous to discuss the administration's policies freely.

The new rules are comprehensive, and cover a range of advanced semiconductor technology, from chips produced by the likes of AMD and Nvidia to the expensive, complex equipment needed to make those chips. Much of the highest-quality chip manufacturing equipment is made by three U.S. companies (KLA, Applied Materials, and Lam Research), and cutting off China's access to their tools has the potential to damage the country's ambitions to become a chipmaking powerhouse. The Biden administration's new controls on chip exports represent a significant shift in U.S. policy related to China. For decades, the U.S. has attempted to keep China two generations of tech behind, typically by denying China access to the tools necessary to make advanced chips, or other technology, themselves. Now, the goal looks to be to cripple China's ability to produce chips with technology that is nearly a decade old, several generations behind the state-of-the-art capabilities.

Intel

Intel Laptop Users Should Avoid Linux 5.19.12 To Avoid Potentially Damaging The Display (phoronix.com) 48

Intel laptop users running Linux are being advised to avoid running the latest Linux 5.19.12 stable kernel point release as it can potentially damage the display. From a report: Intel Linux laptop users on Linux 5.19.12 have begun reporting "white flashing" display issues, with one user describing it as "[the] laptop display starts to blink like lights in a 90's rave party." Intel Linux kernel engineer Ville Syrjälä posted this week on the kernel mailing list: "After looking at some logs we do end up with potentially bogus panel power sequencing delays, which may harm the LCD panel."

Open Source

Intel CTO Wants Developers To Build Once, Run On Any GPU (venturebeat.com) 58

Greg Lavender, CTO of Intel, spoke to VentureBeat about the company's efforts to help developers build applications that can run on any operating system. From the report: "Today in the accelerated computing and GPU world, you can use CUDA and then you can only run on an Nvidia GPU, or you can go use AMD's CUDA equivalent running on an AMD GPU," Lavender told VentureBeat. "You can't use CUDA to program an Intel GPU, so what do you use?" That's where Intel is contributing heavily to the open-source SYCL specification (SYCL is pronounced like "sickle") that aims to do for GPU and accelerated computing what Java did decades ago for application development. Intel's investment in SYCL is not entirely selfless and isn't just about supporting an open-source effort; it's also about helping to steer more development toward its recently released consumer and data center GPUs. SYCL is an approach for data parallel programming in the C++ language and, according to Lavender, it looks a lot like CUDA.

To date, SYCL development has been managed by the Khronos Group, which is a multi-stakeholder organization that is helping to build out standards for parallel computing, virtual reality and 3D graphics. On June 1, Intel acquired Scottish development firm Codeplay Software, which is one of the leading contributors to the SYCL specification. "We should have an open programming language with extensions to C++ that are being standardized, that can run on Intel, AMD and Nvidia GPUs without changing your code," Lavender said. Lavender is also a realist and he knows that there is a lot of code already written specifically for CUDA. That's why Intel developers built an open-source tool called SYCLomatic, which aims to migrate CUDA code into SYCL. Lavender claimed that SYCLomatic today has coverage for approximately 95% of all the functionality that is present in CUDA. He noted that the 5% SYCLomatic doesn't cover are capabilities that are specific to Nvidia hardware.

With SYCL, Lavender said that there are code libraries that developers can use that are device independent. In practice, a developer writes the code once, and SYCL compiles it for whatever architecture is needed, be it an Nvidia, AMD or Intel GPU. Looking forward, Lavender said that he's hopeful that SYCL can become a Linux Foundation project, to further enable participation and growth of the open-source effort. [...] "We should have write once, run everywhere for accelerated computing, and then let the market decide which GPU they want to use, and level the playing field," Lavender said.
