Intel

Intel Suspends All Operations in Russia 'Effective Immediately' (arstechnica.com) 107

Intel, one of the world's largest semiconductor companies, is suspending business operations in Russia "effective immediately," the company announced late Tuesday. From a report: "Intel continues to join the global community in condemning Russia's war against Ukraine," the company said in a statement. Intel stopped shipping chips to customers in Russia and Belarus in early March. Intel said that it is "working to support all of our employees through this difficult situation, including our 1,200 employees in Russia."

Ordinarily, it would be a drastic step for a multinational company like Intel to exit a market the size of Russia. But Western sanctions have made it increasingly difficult for global companies to operate in Russia. Earlier this week, the Biden administration announced broad sanctions on the Russian electronics industry, which presumably includes many of Intel's partners and customers in Russia. Two of Intel's major competitors, AMD and Nvidia, halted sales of their products in Russia early last month. Taiwanese chipmaker TSMC has also restricted sales in Russia.

AMD

AMD To Acquire Pensando in a $1.9 Billion Bid for Networking Tech (protocol.com) 12

AMD said early Monday that it plans to acquire networking chip maker Pensando for $1.9 billion in cash, in a bid to arm itself with tech that competes directly with Nvidia's and Intel's data-center chip packages. From a report: Pensando was founded by several former Cisco engineers and makes edge computing technology that competes with AWS Nitro, Intel's DPU launched last year, and Nvidia's data processing units, called BlueField. In a release distributed in advance of the announcement, AMD said that buying the closely held Pensando will give it a networking platform that will bolster its existing server chip lineup. Pensando's chips are an increasingly important part of data center design, as it becomes impossible to simply throw larger numbers of processors at demanding computing tasks. As regular chips scale up, the networking connections become a bottleneck, and the DPU's goal (Intel calls its version an IPU) is to free up the central processor to perform other functions.
Intel

Intel Beats AMD and Nvidia with Arc GPU's Full AV1 Support (neowin.net) 81

Neowin notes growing support for the "very efficient, potent, royalty-free video codec" AV1, including Microsoft adding support for hardware-accelerated AV1 on Windows.

But AV1 even turned up in Intel's announcement this week of the Arc A-series, a new line of discrete GPUs, Neowin reports: Intel has been quick to respond, and the company has become the first GPU hardware vendor to offer full AV1 support on its newly launched Arc GPUs. While AMD and Nvidia both offer AV1 decoding with their newest GPUs, neither supports AV1 encoding.

Intel says that hardware encoding of AV1 on its new Arc GPUs is 50 times faster than software-only solutions. It also says that the efficiency of AV1 encode with Arc is 20% better compared to HEVC. With this feature, Intel hopes to capture at least some of the streaming and video editing market, built on users who are looking for a more robust AV1 encoding solution than CPU-based software approaches.

From Intel's announcement: Intel Arc A-Series GPUs are the first in the industry to offer full AV1 hardware acceleration, including both encode and decode, delivering faster video encode and higher quality streaming while consuming the same internet bandwidth. We've worked with industry partners to ensure that AV1 support is available today in many of the most popular media applications, with broader adoption expected this year. The AV1 codec will be a game changer for the future of video encoding and streaming.
Security

Lapsus$ Gang Claims New Hack With Data From Apple Health Partner (theverge.com) 5

After a short "vacation," the Lapsus$ hacking gang is back. In a post shared through the group's Telegram channel on Wednesday, Lapsus$ claimed to have stolen 70GB of data from Globant -- an international software development firm headquartered in Luxembourg, which boasts some of the world's largest companies as clients. From a report: Screenshots of the hacked data, originally posted by Lapsus$ and shared on Twitter by security researcher Dominic Alvieri, appeared to show folders bearing the names of a range of global businesses: among them were delivery and logistics company DHL, US cable network C-Span, and French bank BNP Paribas. Also in the list were tech giants Facebook and Apple, with the latter referred to in a folder titled "apple-health-app." The data appears to be development material for Globant's BeHealthy app, described in a prior press release as software developed in partnership with Apple to track employee health behaviors using features of the Apple Watch.
Intel

Intel Enters Discrete GPU Market With Launch of Arc A-Series For Laptops (hothardware.com) 23

MojoKid writes: Today Intel finally launched its first major foray into discrete GPUs for gamers and creators. The lineup, dubbed Intel Arc A-Series, comprises five different chips built on two Arc Alchemist SoCs; the company announced that its entry-level Arc 3 Graphics is shipping in market now, with laptop OEMs delivering new all-Intel products shortly. The two SoCs set the foundation across three performance tiers: Arc 3, Arc 5, and Arc 7.

For example, the Arc A370M arrives today with 8 Xe cores, 8 ray tracing units, 4GB of GDDR6 memory linked to a 64-bit memory bus, and a 1,550MHz graphics clock. Graphics power is rated at 35-50W. However, the Arc A770M, Intel's highest-end mobile GPU, will come with 32 Xe cores, 32 ray tracing units, 16GB of GDDR6 memory over a 256-bit interface, and a 1,650MHz graphics clock. Doing the math, the Arc A770M could be up to 4X more powerful than the Arc A370M. In terms of performance, Intel showcased benchmarks from a laptop outfitted with a Core i7-12700H processor and Arc A370M GPU that can top the 60 FPS threshold at 1080p in many games where integrated graphics would come up far short. Examples included Doom Eternal (63 fps) at high quality settings, and Hitman 3 (62 fps) and Destiny 2 (66 fps) at medium settings. Intel is also showcasing new innovations for content creators, with its Deep Link, Hyper Encode and AV1 video compression support offering big gains in video upscaling, encoding and streaming. Finally, Intel Arc Control software will offer unique features like Smooth Sync, which blends tearing artifacts when V-Sync is turned off, as well as Creator Studio, with background blur, frame tracking and broadcast features with direct support for game streaming services.

Linux

Asahi Linux Is Reverse-Engineering Support For Apple Silicon, Including M1 Ultra (arstechnica.com) 46

An anonymous reader quotes a report from Ars Technica: For months, a small group of volunteers has worked to get this Arch Linux-based distribution up and running on Apple Silicon Macs, adapting existing drivers and (in the case of the GPU) painstakingly writing their own. And that work is paying off -- last week, the team released its first alpha installer to the general public, and as of yesterday, the software supports the new M1 Ultra in the Mac Studio. In the current alpha, an impressive list of hardware already works, including Wi-Fi, USB 2.0 over the Thunderbolt ports (USB 3.0 only works on Macs with USB-A ports, but USB 3.0 over Thunderbolt is "coming soon"), and the built-in display. But there are still big features missing, including DisplayPort and Thunderbolt, the webcam, Bluetooth, sleep mode, and GPU acceleration. That said, regarding GPU acceleration, the developers say that the M1 is fast enough that a software-rendered Linux desktop feels faster on the M1 than a GPU-accelerated desktop feels on many other ARM chips.

Asahi's developers don't think the software will be "done," with all basic M1-series hardware and functionality supported and working out of the box, "for another year, maybe two." By then, Apple will probably have introduced another generation or two of M-series chips. But the developers are optimistic that much of the work they're doing now will continue to work on future generations of Apple hardware with relatively minimal effort. [...] If you want to try Asahi Linux on an M1 Mac, the current installer is run from the command line and requires "at least 53GB of free space" for an install with a KDE Plasma desktop. Asahi only needs about 15GB, but the installer requires you to leave at least 38GB of free space for the macOS install so that macOS system updates don't break. From there, dual-booting should work similarly to the process on Intel Macs, with the alternate OS visible from within Startup Disk or the boot picker you can launch when you start your Mac. Future updates should be installable from within your new Asahi Linux installation and shouldn't require you to reinstall from scratch.

Intel

Nvidia Would Consider Using Intel as a Foundry, CEO Says (bloomberg.com) 21

Nvidia, one of the largest buyers of outsourced chip production, said it will explore using Intel as a possible manufacturer of its products, but said Intel's journey to becoming a foundry will be difficult. From a report: Nvidia Chief Executive Officer Jensen Huang said he wants to diversify his company's suppliers as much as possible and will consider working with Intel. Nvidia currently uses Taiwan Semiconductor Manufacturing Co. and Samsung Electronics to build its products. "We're very open-minded to considering Intel," Huang said Wednesday in an online company event. "Foundry discussions take a long time. It's not just about desire. We're not buying milk here."
Google

Steam (Officially) Comes To Chrome OS 24

An anonymous reader shares a report: This may feel like deja vu because Google itself mistakenly leaked this announcement a few days ago, but the company today officially announced the launch of Steam on Chrome OS. Before you run off to install it, there are a few caveats: This is still an alpha release and only available on the more experimental and unstable Chrome OS Dev channel. The number of supported devices is also still limited, since it'll need at least 8GB of memory, an 11th-generation Intel Core i5 or i7 processor, and Intel Iris Xe Graphics. That's a relatively high-end configuration for what are generally meant to be highly affordable devices, and somewhat ironically means that you can now play games on Chrome OS devices that are mostly meant for business users. The list of supported games is also still limited but includes the likes of Portal 2, Skyrim, The Witcher 3: Wild Hunt, Half-Life 2, Stardew Valley, Factorio, Stellaris, Civilization V, Fallout 4, Disco Elysium and Untitled Goose Game.
Security

How to Eliminate the World's Need for Passwords (arstechnica.com) 166

The board members of the FIDO Alliance include Amazon, Google, PayPal, RSA, Apple, and Microsoft (as well as Intel and Arm). It describes its mission as reducing the world's "over-reliance on passwords."

Today Wired reports that the group thinks "it has finally identified the missing piece of the puzzle" for finally achieving large-scale adoption of a password-supplanting technology: On Thursday, the organization published a white paper that lays out FIDO's vision for solving the usability issues that have dogged passwordless features and, seemingly, kept them from achieving broad adoption....

The paper is conceptual, not technical, but after years of investment to integrate what are known as the FIDO2 and WebAuthn passwordless standards into Windows, Android, iOS, and more, everything is now riding on the success of this next step.... FIDO is looking to get to the heart of what still makes passwordless schemes tough to navigate. And the group has concluded that it all comes down to the procedure for switching or adding devices. If the process for setting up a new phone, say, is too complicated, and there's no simple way to log in to all of your apps and accounts — or if you have to fall back to passwords to reestablish your ownership of those accounts — then most users will conclude that it's too much of a hassle to change the status quo.

The passwordless FIDO standard already relies on a device's biometric scanners (or a master PIN you select) to authenticate you locally without any of your data traveling over the Internet to a web server for validation. The main concept that FIDO believes will ultimately solve the new device issue is for operating systems to implement a "FIDO credential" manager, which is somewhat similar to a built-in password manager. Instead of literally storing passwords, this mechanism will store cryptographic keys that can sync between devices and are guarded by your device's biometric or passcode lock. At Apple's Worldwide Developer Conference last summer, the company announced its own version of what FIDO is describing, an iCloud feature known as "Passkeys in iCloud Keychain," which Apple says is its "contribution to a post-password world...."
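The scheme described above is a challenge-response exchange built on per-site key pairs. A toy sketch of the shape of that exchange, using an HMAC secret as a symmetric stand-in for the authenticator's asymmetric private key (real FIDO2/WebAuthn uses public-key signatures, and all class and method names here are ours, purely for illustration):

```python
import hashlib
import hmac
import os
import secrets

class ToyAuthenticator:
    """Stands in for the device-side credential manager ("passkeys")."""

    def __init__(self):
        # site -> secret; in FIDO's vision these credentials sync across
        # your devices, guarded by the biometric or passcode lock.
        self._keys = {}

    def register(self, site: str) -> bytes:
        self._keys[site] = secrets.token_bytes(32)
        # A real authenticator would export only a public key here.
        return self._keys[site]

    def sign(self, site: str, challenge: bytes) -> bytes:
        # On a real device this step is gated by biometrics or a PIN.
        return hmac.new(self._keys[site], challenge, hashlib.sha256).digest()

class ToyServer:
    """Stands in for the relying party; it never sees a password."""

    def __init__(self, verifier: bytes):
        self._verifier = verifier

    def new_challenge(self) -> bytes:
        return os.urandom(16)

    def verify(self, challenge: bytes, response: bytes) -> bool:
        expected = hmac.new(self._verifier, challenge, hashlib.sha256).digest()
        return hmac.compare_digest(expected, response)

authenticator = ToyAuthenticator()
server = ToyServer(authenticator.register("example.com"))

challenge = server.new_challenge()
response = authenticator.sign("example.com", challenge)
print(server.verify(challenge, response))  # True
```

Because the credential is bound to the site and the response is bound to a fresh challenge, a phishing site cannot replay a captured response elsewhere, which is the property FIDO is counting on.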

FIDO's white paper also includes another component: a proposed addition to its specification that would allow one of your existing devices, like your laptop, to act as a hardware token itself, similar to stand-alone Bluetooth authentication dongles, and provide physical authentication over Bluetooth. The idea is that this would still be virtually phish-proof, since Bluetooth is a proximity-based protocol, and it could be a useful tool in developing versions of truly passwordless schemes that don't have to retain a backup password. Christiaan Brand, a product manager at Google who focuses on identity and security and collaborates on FIDO projects, says that the passkey-style plan follows logically from the smartphone-centric, multi-device image of a passwordless future. "This grand vision of 'Let's move beyond the password,' we've always had this end state in mind, to be honest; it just took until everyone had mobile phones in their pockets," Brand says....

To FIDO, the biggest priority is a paradigm shift in account security that will make phishing a thing of the past.... When asked if this is really it, if the death knell for passwords is truly, finally tolling, Google's Brand turns serious, but he doesn't hesitate to answer: "I feel like everything is coalescing," he says. "This should be durable."

Such a change won't happen overnight, the article points out. "With any other tech migration (ahem, Windows XP), the road will inevitably prove arduous."
Math

Linux Random Number Generator Sees Major Improvements (phoronix.com) 80

An anonymous Slashdot reader summarizes some important news from the web page of Jason Donenfeld (creator of the open-source VPN protocol WireGuard): The Linux kernel's random number generator has seen its first set of major improvements in over a decade, improving everything from the cryptography to the interface used. Not only does it finally retire SHA-1 in favor of BLAKE2s [in Linux kernel 5.17], but it also at long last unites '/dev/random' and '/dev/urandom' [in the upcoming Linux kernel 5.18], finally ending years of Slashdot banter and debate:

The most significant outward-facing change is that /dev/random and /dev/urandom are now exactly the same thing, with no differences between them at all, thanks to their unification in the "random: block in /dev/urandom" change. This removes a significant age-old crypto footgun, already accomplished by other operating systems eons ago. [...] The upshot is that every Internet message board disagreement on /dev/random versus /dev/urandom has now been resolved by making everybody simultaneously right! Now, for the first time, these are both the right choice to make, in addition to getrandom(0); they all return the same bytes with the same semantics. There are only right choices.

Phoronix adds: One exciting change to also note is the getrandom() system call may be a hell of a lot faster with the new kernel. The getrandom() call for obtaining random bytes is yielding much faster performance with the latest code in development. Intel's kernel test robot is seeing an 8450% improvement with the stress-ng getrandom() benchmark. Yes, an 8450% improvement.
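The unified interface is reachable from user space via the getrandom() syscall as well as the two device files. A minimal sketch in Python (the helper name is ours) that prefers getrandom() and falls back to the /dev/urandom-backed call on platforms without it:

```python
import os

def read_random(n: int) -> bytes:
    """Fetch n cryptographically secure random bytes.

    With the unified pool, getrandom(0), /dev/random, and /dev/urandom
    all return the same bytes with the same semantics.
    """
    if hasattr(os, "getrandom"):      # Linux 3.17+, exposed since Python 3.6
        return os.getrandom(n)        # default flags correspond to getrandom(0)
    return os.urandom(n)              # portable fallback

key = read_random(32)
print(len(key))  # 32
```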
Graphics

More Apple M1 Ultra Benchmarks Show It Doesn't Beat the Best GPUs from Nvidia and AMD (tomsguide.com) 121

Tom's Guide tested a Mac Studio workstation equipped with an M1 Ultra with the Geekbench 5.4 CPU benchmarks "to get a sense of how effectively it handles single-core and multi-core workflows."

"Since our M1 Ultra is the best you can buy (at a rough price of $6,199) it sports a 20-core CPU and a 64-core GPU, as well as 128GB of unified memory (RAM) and a 2TB SSD."

Slashdot reader exomondo shares their results: We ran the M1 Ultra through the Geekbench 5.4 CPU benchmarking test multiple times and after averaging the results, we found that the M1 Ultra does indeed outperform top-of-the-line Windows gaming PCs when it comes to multi-core CPU performance. Specifically, the M1 Ultra outperformed a recent Alienware Aurora R13 desktop we tested (w/ Intel Core i7-12700KF, GeForce RTX 3080, 32GB RAM), an Origin Millennium (2022) we just reviewed (Core i9-12900K CPU, RTX 3080 Ti GPU, 32GB RAM), and an even more powerful, RTX 3090-equipped HP Omen 45L we tested recently (Core i9-12900K, GeForce RTX 3090, 64GB RAM) in the Geekbench 5.4 multi-core CPU benchmark.

However, as you can see from the chart of results below, the M1 Ultra couldn't match its Intel-powered competition in terms of CPU single-core performance. The Ultra-powered Studio also proved slower to transcode video than the aforementioned gaming PCs, taking nearly 4 minutes to transcode a 4K video down to 1080p using Handbrake. All of the gaming PCs I just mentioned completed the same task faster, over 30 seconds faster in the case of the Origin Millennium. Before we even get into the GPU performance tests it's clear that while the M1 Ultra excels at multi-core workflows, it doesn't trounce the competition across the board. When we ran our Mac Studio review unit through the Geekbench 5.4 OpenCL test (which benchmarks GPU performance by simulating common tasks like image processing), the Ultra earned an average score of 83,868. That's quite good, but again it fails to outperform Nvidia GPUs in similarly-priced systems.

They also share some results from the OpenCL Benchmarks browser, which publicly displays scores from different GPUs that users have uploaded: Apple's various M1 chips are on the list as well, and while the M1 Ultra leads that pack it's still quite a ways down the list, with an average score of 83,940. Incidentally, that means it ranks below much older GPUs like Nvidia's GeForce RTX 2070 (85,639) and AMD's Radeon VII (86,509). So here again we see that while the Ultra is fast, it can't match the graphical performance of GPUs that are 2-3 years old at this point — at least, not in these synthetic benchmarks. These tests don't always accurately reflect real-world CPU and GPU performance, which can be dramatically influenced by what programs you're running and how they're optimized to make use of your PC's components.
Their conclusion? When it comes to tasks like photo editing or video and music production, the M1 Ultra w/ 128GB of RAM blazes through workloads, and it does so while remaining whisper-quiet. It also makes the Mac Studio a decent gaming machine, as I was able to play less demanding games like Crusader Kings III, Pathfinder: Wrath of the Righteous and Total War: Warhammer II at reasonable (30+ fps) framerates. But that's just not on par with the performance we expect from high-end GPUs like the Nvidia GeForce RTX 3090....

Of course, if you don't care about games and are in the market for a new Mac with more power than just about anything Apple's ever made, you want the Studio with M1 Ultra.

AMD

Radeon Super Resolution Arrives To Speed Up Your Games in AMD Adrenalin (anandtech.com) 7

Alongside their spring driver update, AMD this morning is also unveiling the first nugget of information about the next generation of their FidelityFX Super Resolution (FSR) technology. From a report: Dubbed FSR 2.0, the next generation of AMD's upscaling technology will be taking the logical leap into adding temporal data, giving FSR more data to work with, and thus improving its ability to generate details. And, while AMD is being coy with details for today's early teaser, at a high level this technology should put AMD much closer to competing with NVIDIA's temporal-based DLSS 2.0 upscaling technology, as well as Intel's forthcoming XeSS upscaling tech.

AMD's current version of FSR, which is now being referred to as FSR 1.0, was released last summer by the company. Implemented as a compute shader, FSR 1.0 was a (relatively) simple spatial upscaler, which could only use data from the current frame for generating a higher resolution frame. Spatial upscaling's simplicity is great for compatibility, but it's limited by the data it has access to; more advanced multi-frame techniques can draw on additional data to generate more detailed images. For that reason, AMD has been very careful with their image quality claims for FSR 1.0, treating it more like a supplement to other upscaling methods than a rival to NVIDIA's class-leading DLSS 2.0.
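The single-frame constraint of a spatial upscaler is easy to make concrete. A toy nearest-neighbor sketch in Python (FSR 1.0 itself is a far more sophisticated compute shader; this only illustrates that every output pixel is derived from the one current frame, with no temporal history to consult):

```python
def spatial_upscale(frame, scale):
    """Upscale a 2D frame (a list of rows) by an integer factor using
    nearest-neighbor sampling. Only the current frame is consulted --
    a temporal upscaler would also blend data from previous frames."""
    height, width = len(frame), len(frame[0])
    return [[frame[y // scale][x // scale]
             for x in range(width * scale)]
            for y in range(height * scale)]

low_res = [[10, 20],
           [30, 40]]
print(spatial_upscale(low_res, 2))
# [[10, 10, 20, 20], [10, 10, 20, 20], [30, 30, 40, 40], [30, 30, 40, 40]]
```

Because each output pixel can only repeat or blend nearby input pixels, fine detail that was lost at the lower resolution cannot be recovered, which is exactly the gap temporal techniques like FSR 2.0 and DLSS 2.0 aim to close.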

Apple

Apple's Charts Set the M1 Ultra up for an RTX 3090 Fight it Could Never Win (theverge.com) 142

An anonymous reader shares a report: When Apple introduced the M1 Ultra -- the company's most powerful in-house processor yet and the crown jewel of its brand-new Mac Studio -- it did so with charts boasting that the Ultra is capable of beating out Intel's best processor or Nvidia's RTX 3090 GPU all on its own. The charts, in Apple's recent fashion, were maddeningly labeled with "relative performance" on the Y-axis, and Apple doesn't tell us what specific tests it runs to arrive at whatever numbers it uses to then calculate "relative performance." But now that we have a Mac Studio, we can say that in most tests, the M1 Ultra isn't actually faster than an RTX 3090, as much as Apple would like to say it is.
Chrome

Google Casually Announces Steam For Chrome OS Is Coming In Alpha For Select Chromebooks (engadget.com) 19

At the 2022 Google for Games Developer Summit where its Stadia B2B cloud gaming platform was unveiled, Google announced the long-awaited availability of Steam on Chromebooks. 9to5Google reports: Google specifically said that the "Steam Alpha just launched, making this longtime PC game store available on select Chromebooks for users to try." That said, no other details appear to be live this morning, but we did reveal the device list last month. As we noted at the time: "At a minimum, your Chromebook needs to have an (11th gen) Intel Core i5 or i7 processor and a minimum of 7 GB of RAM. This eliminates almost all Chromebooks but those in the upper-mid range and high end."

Google today said "you can check that out on the Chromebook community forum." The post in question is now live, but without any actual availability timeline beyond "coming soon." However, we did learn that the "early, alpha-quality version of Steam" will first come to the Chrome OS Dev channel for a "small set" of devices.

Meanwhile, Google also said Chrome OS is getting a new "games overlay" on "select" Android titles to make them "playable with user-driven keyboard and mouse configurations on Chromebooks without developer changes." It will launch later this year in a public beta.
Further reading: The part of the keynote where this announcement was made can be viewed here.

Google's Domain Name Registrar is Out of Beta After Seven Years
Intel

Intel Announces $88 Billion Megafab to Keep Chipmaking in Europe (cnet.com) 16

Intel on Tuesday revealed plans for a second new "megafab," a chipmaking site in Magdeburg, Germany, that's the centerpiece of an expected $88 billion in investments across several European countries. The capacity expansion comes on top of other gargantuan spending commitments in the United States, including a planned megafab in Ohio, intended to bring Intel back to the forefront of chip manufacturing. From a report: "The world has an insatiable demand for semiconductors," Intel Chief Executive Pat Gelsinger said in a video announcing the investments. Today, 80% of chipmaking takes place in Asia, but the company's spending in the US and Europe will mean a "more balanced and resilient" supply chain that isn't so dependent on Asia. Intel will start with new chip fabrication facilities, called fabs, at the Magdeburg site costing about $19 billion, with construction set to begin in 2023 and manufacturing in 2027, Gelsinger said. That'll let Intel build chips with leading-edge technology, both for itself and, through a major expansion of its Intel Foundry Services business, for other customers as well.
AMD

Intel Finds Bug In AMD's Spectre Mitigation, AMD Issues Fix (tomshardware.com) 44

"News of a fresh Spectre BHB vulnerability that only impacts Intel and Arm processors emerged this week," reports Tom's Hardware, "but Intel's research around these new attack vectors unearthed another issue.

"One of the patches that AMD has used to fix the Spectre vulnerabilities has been broken since 2018." Intel's security team, STORM, found the issue with AMD's mitigation. In response, AMD has issued a security bulletin and updated its guidance to recommend using an alternative method to mitigate the Spectre vulnerabilities, thus repairing the issue anew....

Intel's research into AMD's Spectre fix begins in a roundabout way — Intel's processors were recently found to still be susceptible to Spectre v2-based attacks via a new Branch History Injection variant, this despite the company's use of the Enhanced Indirect Branch Restricted Speculation (eIBRS) and/or Retpoline mitigations that were thought to prevent further attacks. In need of a newer Spectre mitigation approach to patch the far-flung issue, Intel turned to studying alternative mitigation techniques. There are several other options, but all entail varying levels of performance tradeoffs. Intel says its ecosystem partners asked the company to consider using AMD's LFENCE/JMP technique. The "LFENCE/JMP" mitigation is a Retpoline alternative commonly referred to as "AMD's Retpoline."

As a result of Intel's investigation, the company discovered that the mitigation AMD has used since 2018 to patch the Spectre vulnerabilities isn't sufficient — the chips are still vulnerable. The issue impacts nearly every modern AMD processor spanning almost the entire Ryzen family for desktop PCs and laptops (second-gen to current-gen) and the EPYC family of datacenter chips....

In response to the STORM team's discovery and paper, AMD issued a security bulletin (AMD-SB-1026) that states it isn't aware of any currently active exploits using the method described in the paper. AMD also instructs its customers to switch to using "one of the other published mitigations (V2-1 aka 'generic retpoline' or V2-4 aka 'IBRS')." The company also published updated Spectre mitigation guidance reflecting those changes [PDF]....

AMD's security bulletin thanks Intel's STORM team by name and notes that Intel engaged in coordinated vulnerability disclosure, allowing AMD enough time to address the issue before it was made known to the public.

Thanks to Slashdot reader Hmmmmmm for submitting the story...
China

How China Built an Exascale Supercomputer Out of Old 14nm Tech (nextplatform.com) 29

Slashdot reader katydid77 shares a report from the supercomputing site The Next Platform: If you need any proof that it doesn't take the most advanced chip manufacturing processes to create an exascale-class supercomputer, you need look no further than the Sunway "OceanLight" system housed at the National Supercomputing Center in Wuxi, China. Some of the architectural details of the OceanLight supercomputer came to our attention as part of a paper published by Alibaba Group, Tsinghua University, DAMO Academy, Zhejiang Lab, and Beijing Academy of Artificial Intelligence, which describes running a pretrained machine learning model called BaGuaLu across more than 37 million cores with 14.5 trillion parameters (presumably in FP32 single precision), with the capability to scale to 174 trillion parameters, approaching what is called "brain-scale," where the number of parameters starts approaching the number of synapses in the human brain....

Add it all up, and the 105-cabinet system tested on the BaGuaLu training model, with its 107,250 SW26010-Pro processors, had a peak theoretical performance of 1.51 exaflops. We like base 2 numbers and think that the OceanLight system probably scales to 160 cabinets, which would be 163,840 nodes and just under 2.3 exaflops of peak FP64 and FP32 performance. If it is only 120 cabinets, OceanLight will come in at 1.72 exaflops peak. But these rack scales are, once again, just hunches. If the 160 cabinet scale is the maximum for OceanLight, then China could best the performance of the 1.5 exaflops "Frontier" supercomputer being tuned up at Oak Ridge National Laboratory today and also extend beyond the peak theoretical performance of the 2 exaflops "Aurora" supercomputer coming to Argonne National Laboratory later this year -- and maybe even further than the "El Capitan" supercomputer going into Lawrence Livermore National Laboratory in 2023 and expected to be around 2.2 exaflops to 2.3 exaflops, according to the scuttlebutt.
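The cabinet-scale estimates above are straightforward proportional scaling from the tested configuration. A quick sketch of the arithmetic (the 120- and 160-cabinet counts are the article's hunches, not confirmed specs):

```python
# Tested configuration from the BaGuaLu run, per the article.
TESTED_CABINETS = 105
TESTED_PEAK_EXAFLOPS = 1.51    # peak theoretical FP64/FP32 performance

per_cabinet = TESTED_PEAK_EXAFLOPS / TESTED_CABINETS

# Hypothetical full-machine sizes floated in the article.
for cabinets in (120, 160):
    peak = per_cabinet * cabinets
    print(f"{cabinets} cabinets -> {peak:.2f} exaflops peak")
# 120 cabinets -> 1.73 exaflops peak (the article truncates this to 1.72)
# 160 cabinets -> 2.30 exaflops peak ("just under 2.3")
```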

We would love to see the thermals and costs of OceanLight. The SW26010-Pro chip could burn very hot, to be sure, and run up the electric bill for power and cooling, but if SMIC [China's largest foundry] can get good yield on 14 nanometer processes, the chip could be a lot less expensive to make than, say, a massive GPU accelerator from Nvidia, AMD, or Intel. (It's hard to say.) Regardless, having indigenous parts matters more than power efficiency for China right now, and into its future, and we said as much last summer when contemplating China's long road to IT independence. Imagine what China can do with a shrink to 7 nanometer processes when SMIC delivers them — apparently not even using extreme ultraviolet (EUV) light — many years hence....

The bottom line is that the National Research Center of Parallel Computer Engineering and Technology (known as NRCPC), working with SMIC, has had an exascale machine in the field for a year already. (There are two, in fact.) Can the United States say that right now? No it can't.

Desktops (Apple)

YouTuber DIY Project Shrinks M1 Mac Mini By 78%, Without Sacrificing Performance (9to5mac.com) 43

In a 15-minute-long video, YouTuber Quinn Nelson from Snazzy Labs explains how he managed to shrink the current M1 Mac Mini by 78% without harming performance. 9to5Mac reports: In conclusion, by rearranging the internals and swapping out the power supply, Nelson was able to reduce the size of the Mac mini enclosure by 78%. He organized all the parts inside a 3D-printed body with a mini Mac Pro motif.

The reason the theoretical space savings are so huge is that when Apple released the first round of Apple Silicon computers, it did not change the hardware industrial design at all. So the current Mac mini enclosure is designed to fit an Intel CPU and circuit board, including accommodating the large fans and heat sinks the Intel chip required.

But with the power efficiency of the M1, Apple has the headroom to do something much more drastic. Indeed, much of the M1 Mac mini's interior is just empty space. The Snazzy Labs video gives a glimpse of what is possible if Apple is more ambitious with the next-generation Mac mini design and tries to create something truly mini.
The CAD files and schematics can be viewed here.
AMD

New UCIe Chiplet Standard Supported by Intel, AMD and Arm (anandtech.com) 20

A number of industry stalwarts including Intel, AMD, Arm, TSMC, and Samsung on Wednesday introduced a new Universal Chiplet Interconnect Express (UCIe) consortium. AnandTech: Taking significant inspiration from the very successful PCI-Express playbook, with UCIe the involved firms are creating a standard for connecting chiplets, with the goal of having a single set of standards that not only simplify the process for all involved, but lead the way towards full interoperability between chiplets from different manufacturers, allowing chips to mix-and-match chiplets as chip makers see fit. In other words, to make a complete and compatible ecosystem out of chiplets, much like today's ecosystem for PCIe-based expansion cards.

The comparisons to PCIe are apt on multiple levels, and this is perhaps the best way to quickly understand the UCIe group's goals. Not only is the new standard being made available in an open fashion, but the companies involved will be establishing a formal consortium group later this year to administer UCIe and further develop it. Meanwhile, from a general technology perspective, the use of chiplets is the latest step in the continual consolidation of integrated circuits, as smaller and smaller transistors have allowed more and more functionality to be brought on-chip. In essence, features that have been on an expansion card or separate chip up until now are starting to make their way onto the chip/SoC itself. So, just as PCIe moderates how these parts work together as expansion cards, a new standard is needed to moderate how these parts should work together as chiplets.
