Google

Steam (Officially) Comes To Chrome OS 24

An anonymous reader shares a report: This may feel like deja vu because Google itself mistakenly leaked this announcement a few days ago, but the company today officially announced the launch of Steam on Chrome OS. Before you run off to install it, there are a few caveats: This is still an alpha release and only available on the more experimental and unstable Chrome OS Dev channel. The number of supported devices is also still limited since it'll need at least 8GB of memory, an 11th-generation Intel Core i5 or i7 processor and Intel Iris Xe Graphics. That's a relatively high-end configuration for what are generally meant to be highly affordable devices and somewhat ironically means that you can now play games on Chrome OS devices that are mostly meant for business users. The list of supported games is also still limited but includes the likes of Portal 2, Skyrim, The Witcher 3: Wild Hunt, Half-Life 2, Stardew Valley, Factorio, Stellaris, Civilization V, Fallout 4, Disco Elysium and Untitled Goose Game.
Security

How to Eliminate the World's Need for Passwords (arstechnica.com) 166

The board members of the FIDO Alliance include Amazon, Google, PayPal, RSA, Apple, and Microsoft (as well as Intel and Arm). It describes its mission as reducing the world's "over-reliance on passwords."

Today Wired reports that the group thinks "it has finally identified the missing piece of the puzzle" for achieving large-scale adoption of a password-supplanting technology: On Thursday, the organization published a white paper that lays out FIDO's vision for solving the usability issues that have dogged passwordless features and, seemingly, kept them from achieving broad adoption....

The paper is conceptual, not technical, but after years of investment to integrate what are known as the FIDO2 and WebAuthn passwordless standards into Windows, Android, iOS, and more, everything is now riding on the success of this next step.... FIDO is looking to get to the heart of what still makes passwordless schemes tough to navigate. And the group has concluded that it all comes down to the procedure for switching or adding devices. If the process for setting up a new phone, say, is too complicated, and there's no simple way to log in to all of your apps and accounts — or if you have to fall back to passwords to reestablish your ownership of those accounts — then most users will conclude that it's too much of a hassle to change the status quo.

The passwordless FIDO standard already relies on a device's biometric scanners (or a master PIN you select) to authenticate you locally without any of your data traveling over the Internet to a web server for validation. The main concept that FIDO believes will ultimately solve the new device issue is for operating systems to implement a "FIDO credential" manager, which is somewhat similar to a built-in password manager. Instead of literally storing passwords, this mechanism will store cryptographic keys that can sync between devices and are guarded by your device's biometric or passcode lock. At Apple's Worldwide Developer Conference last summer, the company announced its own version of what FIDO is describing, an iCloud feature known as "Passkeys in iCloud Keychain," which Apple says is its "contribution to a post-password world...."
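The mechanism described above is, at its core, a public-key challenge-response: the server stores only a public key, the private key never leaves the device, and a local biometric or PIN merely unlocks the signing operation. Below is a hypothetical toy sketch of that flow using textbook RSA with deliberately tiny primes; real FIDO2/WebAuthn uses proper elliptic-curve keys, attestation, and origin binding, so this is illustration only:

```python
import hashlib, secrets

# Toy FIDO-style challenge-response (hypothetical sketch; tiny textbook RSA
# stands in for the real elliptic-curve credentials -- never use key sizes
# like this in practice).
p, q = 1000003, 1000033            # toy primes
n, e = p * q, 65537                # (n, e) is the public key the server stores
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent: never leaves the device

def sign(challenge: bytes) -> int:
    """Device signs the server's challenge after a local biometric unlock."""
    h = int.from_bytes(hashlib.sha256(challenge).digest(), "big") % n
    return pow(h, d, n)

def verify(challenge: bytes, sig: int) -> bool:
    """Server verifies using only the stored public key (n, e)."""
    h = int.from_bytes(hashlib.sha256(challenge).digest(), "big") % n
    return pow(sig, e, n) == h

challenge = secrets.token_bytes(32)      # fresh per login, so replays fail
assert verify(challenge, sign(challenge))
```

Syncing the private half of such a credential between devices, guarded by the device lock, is exactly what the proposed "FIDO credential" manager (and Apple's Passkeys) would handle.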

FIDO's white paper also includes another component, a proposed addition to its specification that would allow one of your existing devices, like your laptop, to act as a hardware token itself, similar to stand-alone Bluetooth authentication dongles, and provide physical authentication over Bluetooth. The idea is that this would still be virtually phish-proof since Bluetooth is a proximity-based protocol and can be a useful tool as needed in developing different versions of truly passwordless schemes that don't have to retain a backup password. Christiaan Brand, a product manager at Google who focuses on identity and security and collaborates on FIDO projects, says that the passkey-style plan follows logically from the smartphone or multi-device image of a passwordless future. "This grand vision of 'Let's move beyond the password,' we've always had this end state in mind to be honest, it just took until everyone had mobile phones in their pockets," Brand says....

To FIDO, the biggest priority is a paradigm shift in account security that will make phishing a thing of the past.... When asked if this is really it, if the death knell for passwords is truly, finally tolling, Google's Brand turns serious, but he doesn't hesitate to answer: "I feel like everything is coalescing," he says. "This should be durable."

Such a change won't happen overnight, the article points out. "With any other tech migration (ahem, Windows XP), the road will inevitably prove arduous."
Math

Linux Random Number Generator Sees Major Improvements (phoronix.com) 80

An anonymous Slashdot reader summarizes some important news from the web page of Jason Donenfeld (creator of the open-source VPN protocol WireGuard): The Linux kernel's random number generator has seen its first set of major improvements in over a decade, improving everything from the cryptography to the interface used. Not only does it retire SHA-1 in favor of BLAKE2s [in Linux kernel 5.17], but it also at long last unites '/dev/random' and '/dev/urandom' [in the upcoming Linux kernel 5.18], ending years of Slashdot banter and debate:

The most significant outward-facing change is that /dev/random and /dev/urandom are now exactly the same thing, with no differences between them at all, thanks to their unification in the "random: block in /dev/urandom" change. This removes a significant age-old crypto footgun, already accomplished by other operating systems eons ago. [...] The upshot is that every Internet message board disagreement on /dev/random versus /dev/urandom has now been resolved by making everybody simultaneously right! Now, for the first time, these are both the right choice to make, in addition to getrandom(0); they all return the same bytes with the same semantics. There are only right choices.
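The getrandom(0) interface mentioned above is the syscall-level equivalent of both device files: with flags=0 it blocks only until the kernel's entropy pool has been initialized once at boot, then returns immediately forever after. A quick way to see it from userspace is Python's thin wrapper (Linux only, Python 3.6+):

```python
import os

# os.getrandom() wraps the Linux getrandom() syscall. With the default
# flags=0 it blocks only until the pool is initialized once, after which
# it never blocks -- the same semantics /dev/random and /dev/urandom
# now share post-unification.
buf = os.getrandom(16)      # 16 cryptographically random bytes
print(len(buf), buf.hex())
```

Reading /dev/urandom, reading /dev/random, and calling getrandom(0) now all draw from the same BLAKE2s-fed generator.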

Phoronix adds: One exciting change to also note is the getrandom() system call may be a hell of a lot faster with the new kernel. The getrandom() call for obtaining random bytes is yielding much faster performance with the latest code in development. Intel's kernel test robot is seeing an 8450% improvement with the stress-ng getrandom() benchmark. Yes, an 8450% improvement.
Graphics

More Apple M1 Ultra Benchmarks Show It Doesn't Beat the Best GPUs from Nvidia and AMD (tomsguide.com) 121

Tom's Guide tested a Mac Studio workstation equipped with an M1 Ultra with the Geekbench 5.4 CPU benchmarks "to get a sense of how effectively it handles single-core and multi-core workflows."

"Since our M1 Ultra is the best you can buy (at a rough price of $6,199) it sports a 20-core CPU and a 64-core GPU, as well as 128GB of unified memory (RAM) and a 2TB SSD."

Slashdot reader exomondo shares their results: We ran the M1 Ultra through the Geekbench 5.4 CPU benchmarking test multiple times and after averaging the results, we found that the M1 Ultra does indeed outperform top-of-the-line Windows gaming PCs when it comes to multi-core CPU performance. Specifically, the M1 Ultra outperformed a recent Alienware Aurora R13 desktop we tested (w/ Intel Core i7-12700KF, GeForce RTX 3080, 32GB RAM), an Origin Millennium (2022) we just reviewed (Core i9-12900K CPU, RTX 3080 Ti GPU, 32GB RAM), and an even more powerful, RTX 3090-equipped HP Omen 45L we tested recently (Core i9-12900K, GeForce RTX 3090, 64GB RAM) in the Geekbench 5.4 multi-core CPU benchmark.

However, as you can see from the chart of results below, the M1 Ultra couldn't match its Intel-powered competition in terms of CPU single-core performance. The Ultra-powered Studio also proved slower to transcode video than the aforementioned gaming PCs, taking nearly 4 minutes to transcode a 4K video down to 1080p using Handbrake. All of the gaming PCs I just mentioned completed the same task faster, over 30 seconds faster in the case of the Origin Millennium. Before we even get into the GPU performance tests it's clear that while the M1 Ultra excels at multi-core workflows, it doesn't trounce the competition across the board. When we ran our Mac Studio review unit through the Geekbench 5.4 OpenCL test (which benchmarks GPU performance by simulating common tasks like image processing), the Ultra earned an average score of 83,868. That's quite good, but again it fails to outperform Nvidia GPUs in similarly-priced systems.

They also share some results from the OpenCL Benchmarks browser, which publicly displays scores from different GPUs that users have uploaded: Apple's various M1 chips are on the list as well, and while the M1 Ultra leads that pack it's still quite a ways down the list, with an average score of 83,940. Incidentally, that means it ranks below much older GPUs like Nvidia's GeForce RTX 2070 (85,639) and AMD's Radeon VII (86,509). So here again we see that while the Ultra is fast, it can't match the graphical performance of GPUs that are 2-3 years old at this point — at least, not in these synthetic benchmarks. These tests don't always accurately reflect real-world CPU and GPU performance, which can be dramatically influenced by what programs you're running and how they're optimized to make use of your PC's components.

Their conclusion? When it comes to tasks like photo editing or video and music production, the M1 Ultra w/ 128GB of RAM blazes through workloads, and it does so while remaining whisper-quiet. It also makes the Mac Studio a decent gaming machine, as I was able to play less demanding games like Crusader Kings III, Pathfinder: Wrath of the Righteous and Total War: Warhammer II at reasonable (30+ fps) framerates. But that's just not on par with the performance we expect from high-end GPUs like the Nvidia GeForce RTX 3090....

Of course, if you don't care about games and are in the market for a new Mac with more power than just about anything Apple's ever made, you want the Studio with M1 Ultra.

AMD

Radeon Super Resolution Arrives To Speed Up Your Games in AMD Adrenalin (anandtech.com) 7

Alongside their spring driver update, AMD this morning is also unveiling the first nugget of information about the next generation of their FidelityFX Super Resolution (FSR) technology. From a report: Dubbed FSR 2.0, the next generation of AMD's upscaling technology will be taking the logical leap into adding temporal data, giving FSR more data to work with, and thus improving its ability to generate details. And, while AMD is being coy with details for today's early teaser, at a high level this technology should put AMD much closer to competing with NVIDIA's temporal-based DLSS 2.0 upscaling technology, as well as Intel's forthcoming XeSS upscaling tech.

AMD's current version of FSR, which is now being referred to as FSR 1.0, was released last summer by the company. Implemented as a compute shader, FSR 1.0 was a (relatively) simple spatial upscaler, which could only use data from the current frame for generating a higher resolution frame. Spatial upscaling's simplicity is great for compatibility, but it's limited by the data it has access to; more advanced multi-frame techniques can draw on prior frames to generate more detailed images. For that reason, AMD has been very careful with their image quality claims for FSR 1.0, treating it more like a supplement to other upscaling methods than a rival to NVIDIA's class-leading DLSS 2.0.
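The distinction above comes down to inputs: a spatial upscaler computes every output pixel from the current frame alone, while a temporal one also folds in data from previous frames. A minimal sketch of the spatial case, using nearest-neighbor as a stand-in (FSR 1.0 itself uses a far more sophisticated edge-adaptive pass, EASU, plus RCAS sharpening):

```python
# Sketch of single-frame (spatial) upscaling: every output pixel is a
# function of the current frame only. Nearest-neighbor is used purely
# for brevity; it is not FSR's actual algorithm.
def upscale_nearest(frame, scale):
    h, w = len(frame), len(frame[0])
    return [[frame[y // scale][x // scale]
             for x in range(w * scale)]
            for y in range(h * scale)]

frame = [[1, 2],
         [3, 4]]
print(upscale_nearest(frame, 2))
# -> [[1, 1, 2, 2], [1, 1, 2, 2], [3, 3, 4, 4], [3, 3, 4, 4]]
```

FSR 2.0's temporal approach would additionally take motion vectors and the previous upscaled frame as inputs, which is where the extra detail comes from.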

Apple

Apple's Charts Set the M1 Ultra up for an RTX 3090 Fight it Could Never Win (theverge.com) 142

An anonymous reader shares a report: When Apple introduced the M1 Ultra -- the company's most powerful in-house processor yet and the crown jewel of its brand new Mac Studio -- it did so with charts boasting that the Ultra is capable of beating out Intel's best processor or Nvidia's RTX 3090 GPU all on its own. The charts, in Apple's recent fashion, were maddeningly labeled with "relative performance" on the Y-axis, and Apple doesn't tell us what specific tests it runs to arrive at whatever numbers it uses to then calculate "relative performance." But now that we have a Mac Studio, we can say that in most tests, the M1 Ultra isn't actually faster than an RTX 3090, as much as Apple would like to say it is.
Chrome

Google Casually Announces Steam For Chrome OS Is Coming In Alpha For Select Chromebooks (engadget.com) 19

At the 2022 Google for Games Developer Summit where its Stadia B2B cloud gaming platform was unveiled, Google announced the long-awaited availability of Steam on Chromebooks. 9to5Google reports: Google specifically said that the "Steam Alpha just launched, making this longtime PC game store available on select Chromebooks for users to try." That said, no other details appear to be live this morning, but we did reveal the device list last month. As we noted at the time: "At a minimum, your Chromebook needs to have an (11th gen) Intel Core i5 or i7 processor and a minimum of 7 GB of RAM. This eliminates almost all Chromebooks but those in the upper-mid range and high end."

Google today said "you can check that out on the Chromebook community forum." The post in question is now live, but without any actual availability timeline beyond "coming soon." However, we did learn that the "early, alpha-quality version of Steam" will first come to the Chrome OS Dev channel for a "small set" of devices.

Meanwhile, Google also said Chrome OS is getting a new "games overlay" on "select" Android titles to make them "playable with user-driven keyboard and mouse configurations on Chromebooks without developer changes." It will launch later this year in a public beta.
Further reading: The part of the keynote where this announcement was made can be viewed here.

Google's Domain Name Registrar is Out of Beta After Seven Years
Intel

Intel Announces $88 Billion Megafab to Keep Chipmaking in Europe (cnet.com) 16

Intel on Tuesday revealed plans for a second new "megafab," a chipmaking site in Magdeburg, Germany, that's the centerpiece of an expected $88 billion in investments across several European countries. The capacity expansion comes on top of other gargantuan spending commitments in the United States, including a planned megafab in Ohio, intended to bring Intel back to the forefront of chip manufacturing. From a report: "The world has an insatiable demand for semiconductors," Intel Chief Executive Pat Gelsinger said in a video announcing the investments. Today, 80% of chipmaking takes place in Asia, but the company's spending in the US and Europe will mean a "more balanced and resilient" supply chain that isn't so dependent on Asia. Intel will start with new chip fabrication facilities, called fabs, at the Magdeburg site costing about $19 billion, with construction set to begin in 2023 and manufacturing in 2027, Gelsinger said. That'll let Intel build leading-edge chips both for itself and, through a major expansion of its Intel Foundry Services business, for other customers as well.
AMD

Intel Finds Bug In AMD's Spectre Mitigation, AMD Issues Fix (tomshardware.com) 44

"News of a fresh Spectre BHB vulnerability that only impacts Intel and Arm processors emerged this week," reports Tom's Hardware, "but Intel's research around these new attack vectors unearthed another issue.

"One of the patches that AMD has used to fix the Spectre vulnerabilities has been broken since 2018." Intel's security team, STORM, found the issue with AMD's mitigation. In response, AMD has issued a security bulletin and updated its guidance to recommend using an alternative method to mitigate the Spectre vulnerabilities, thus repairing the issue anew....

Intel's research into AMD's Spectre fix begins in a roundabout way — Intel's processors were recently found to still be susceptible to Spectre v2-based attacks via a new Branch History Injection variant, this despite the company's use of the Enhanced Indirect Branch Restricted Speculation (eIBRS) and/or Retpoline mitigations that were thought to prevent further attacks. In need of a newer Spectre mitigation approach to patch the far-flung issue, Intel turned to studying alternative mitigation techniques. There are several other options, but all entail varying levels of performance tradeoffs. Intel says its ecosystem partners asked the company to consider using AMD's LFENCE/JMP technique. The "LFENCE/JMP" mitigation is a Retpoline alternative commonly referred to as "AMD's Retpoline."

As a result of Intel's investigation, the company discovered that the mitigation AMD has used since 2018 to patch the Spectre vulnerabilities isn't sufficient — the chips are still vulnerable. The issue impacts nearly every modern AMD processor spanning almost the entire Ryzen family for desktop PCs and laptops (second-gen to current-gen) and the EPYC family of datacenter chips....

In response to the STORM team's discovery and paper, AMD issued a security bulletin (AMD-SB-1026) that states it isn't aware of any currently active exploits using the method described in the paper. AMD also instructs its customers to switch to using "one of the other published mitigations (V2-1 aka 'generic retpoline' or V2-4 aka 'IBRS')." The company also published updated Spectre mitigation guidance reflecting those changes [PDF]....

AMD's security bulletin thanks Intel's STORM team by name and notes that it engaged in coordinated vulnerability disclosure, allowing AMD enough time to address the issue before it was made public.

Thanks to Slashdot reader Hmmmmmm for submitting the story...
China

How China Built an Exascale Supercomputer Out of Old 14nm Tech (nextplatform.com) 29

Slashdot reader katydid77 shares a report from the supercomputing site The Next Platform: If you need any proof that it doesn't take the most advanced chip manufacturing processes to create an exascale-class supercomputer, you need look no further than the Sunway "OceanLight" system housed at the National Supercomputing Center in Wuxi, China. Some of the architectural details of the OceanLight supercomputer came to our attention as part of a paper published by Alibaba Group, Tsinghua University, DAMO Academy, Zhejiang Lab, and Beijing Academy of Artificial Intelligence, which is running a pretrained machine learning model called BaGuaLu, across more than 37 million cores and 14.5 trillion parameters (presumably with FP32 single precision), and has the capability to scale to 174 trillion parameters (approaching what is called "brain-scale," where the number of parameters approaches the number of synapses in the human brain)....

Add it all up, and the 105 cabinet system tested on the BaGuaLu training model, with its 107,250 SW26010-Pro processors, had a peak theoretical performance of 1.51 exaflops. We like base 2 numbers and think that the OceanLight system probably scales to 160 cabinets, which would be 163,840 nodes and just under 2.3 exaflops of peak FP64 and FP32 performance. If it is only 120 cabinets (also a base 2 number), OceanLight will come in at 1.72 exaflops peak. But these rack scales are, once again, just hunches. If the 160 cabinet scale is the maximum for OceanLight, then China could best the performance of the 1.5 exaflops "Frontier" supercomputer being tuned up at Oak Ridge National Laboratories today and also extend beyond the peak theoretical performance of the 2 exaflops "Aurora" supercomputer coming to Argonne National Laboratory later this year — and maybe even further than the "El Capitan" supercomputer going into Lawrence Livermore National Laboratory in 2023 and expected to be around 2.2 exaflops to 2.3 exaflops according to the scuttlebutt.
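The cabinet-scaling estimates above follow from simple proportionality; a quick back-of-the-envelope check of the article's figures (the per-cabinet throughput is derived here, so treat the results as rough estimates, just as the article does):

```python
# Reproducing the article's scaling arithmetic from its stated figures:
# 105 cabinets tested at 1.51 exaflops peak, scaled linearly to larger builds.
peak_tested_ef = 1.51               # peak EF of the tested 105-cabinet config
cabinets_tested = 105

per_cabinet_ef = peak_tested_ef / cabinets_tested   # ~0.0144 EF per cabinet
print(f"160 cabinets: ~{per_cabinet_ef * 160:.2f} EF")  # just under 2.3 EF
print(f"120 cabinets: ~{per_cabinet_ef * 120:.2f} EF")  # ~1.7 EF
```

Both results line up with the article's "just under 2.3 exaflops" and "1.72 exaflops peak" figures to within rounding.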

We would love to see the thermals and costs of OceanLight. The SW26010-Pro chip could burn very hot, to be sure, and run up the electric bill for power and cooling, but if SMIC [China's largest foundry] can get good yield on 14 nanometer processes, the chip could be a lot less expensive to make than, say, a massive GPU accelerator from Nvidia, AMD, or Intel. (It's hard to say.) Regardless, having indigenous parts matters more than power efficiency for China right now, and into its future, and we said as much last summer when contemplating China's long road to IT independence. Imagine what China can do with a shrink to 7 nanometer processes when SMIC delivers them — apparently not even using extreme ultraviolet (EUV) light — many years hence....

The bottom line is that the National Research Center of Parallel Computer Engineering and Technology (known as NRCPC), working with SMIC, has had an exascale machine in the field for a year already. (There are two, in fact.) Can the United States say that right now? No it can't.

Desktops (Apple)

YouTuber's DIY Project Shrinks M1 Mac Mini By 78% Without Sacrificing Performance (9to5mac.com) 43

In a 15-minute-long video, YouTuber Quinn Nelson from Snazzy Labs explains how he managed to shrink the current M1 Mac Mini by 78% without harming performance. 9to5Mac reports: In conclusion, by rearranging the internals and swapping out the power supply, Nelson was able to reduce the size of the Mac mini enclosure by 78%. He organized all the parts inside a 3D-printed body with a mini Mac Pro motif.

The reason that theoretical space savings are so huge is that when Apple released the first round of Apple Silicon computers, it did not change the hardware industrial design at all. So the current Mac Mini enclosure is designed to fit an Intel CPU and circuit board, including having to accommodate the large fans and heat sinks the Intel chip required.

But with the power efficiency of the M1, Apple has the headroom to do something much more drastic. Indeed, a lot of the M1 Mac mini's internals are just empty space. The Snazzy Labs video gives a glimpse of what is possible if Apple is more ambitious with the next-generation Mac mini design and tries to create something truly mini.
The CAD files and schematics can be viewed here.
AMD

New UCIe Chiplet Standard Supported by Intel, AMD and Arm (anandtech.com) 20

A number of industry stalwarts including Intel, AMD, Arm, TSMC, and Samsung on Wednesday introduced a new Universal Chiplet Interconnect Express (UCIe) consortium. AnandTech: Taking significant inspiration from the very successful PCI-Express playbook, with UCIe the involved firms are creating a standard for connecting chiplets, with the goal of having a single set of standards that not only simplify the process for all involved, but lead the way towards full interoperability between chiplets from different manufacturers, allowing chip makers to mix and match chiplets as they see fit. In other words, to make a complete and compatible ecosystem out of chiplets, much like today's ecosystem for PCIe-based expansion cards.

The comparisons to PCIe are apt on multiple levels, and this is perhaps the best way to quickly understand the UCIe group's goals. Not only is the new standard being made available in an open fashion, but the companies involved will be establishing a formal consortium group later this year to administer UCIe and further develop it. Meanwhile from a general technology perspective, the use of chiplets is the latest step in the continual consolidation of integrated circuits, as smaller and smaller transistors have allowed more and more functionality to be brought on-chip. In essence, features that have been on an expansion card or separate chip up until now are starting to make their way on to the chip/SoC itself. Just as PCIe governs how these parts work together as expansion cards, a new standard is needed to govern how they work together as chiplets.

United States

Biden To Congress: Pass The Bill To Fund US Chip Manufacturing (cnet.com) 175

During his State of the Union speech Tuesday, President Joe Biden called on Congress to pass the CHIPS Act, a law that would provide chipmakers with $52 billion in subsidies to advance semiconductor manufacturing in the United States. From a report: Biden lauded Intel Chief Executive Pat Gelsinger, who last month announced a $20 billion investment for two new chip fabrication facilities, or fabs, that the company will build just west of Columbus, Ohio. Intel plans to spend $100 billion to build the Ohio "megafab" over the next decade, with an eventual total of eight fabs, but the speed of that investment will depend on the US subsidy, Gelsinger has said.

"Intel's CEO, Pat Gelsinger, who is here tonight, told me they are ready to increase their investment from $20 billion to $100 billion. That would be one of the biggest investments in manufacturing in American history," Biden said. "And all they're waiting for is for you to pass this bill. ... Send it to my desk. I'll sign it." The Senate passed a bill funding the CHIPS Act in 2021, and the House of Representatives followed suit in February, but the differences in the bills haven't been ironed out in committee and the subsidy hasn't arrived despite some bipartisan support. The funding would help the US compete with government help in Taiwan and South Korea, where leading chipmakers Taiwan Semiconductor Manufacturing Co. (TSMC) and Samsung have the bulk of their operations. The US subsidies would knock about $3 billion off the $10 billion price tag for a new fab, a subsidy level Intel says matches those in Asia.

The Almighty Buck

Italy Plans $4.6 Billion Fund To Boost Chipmaking (reuters.com) 19

An anonymous reader quotes a report from Reuters: Italy plans to set aside more than $4.6 billion until 2030 to boost domestic chip manufacturing as it seeks to attract more investment from tech companies such as Intel, a draft decree seen by Reuters showed on Tuesday. The government is trying to persuade the U.S. group to spend billions of euros on an advanced chipmaking plant in Italy that uses innovative technologies to produce finished chips.

Rome is ready to offer Intel public money and other favorable terms to fund part of the overall investment, which is expected to be worth around $9 billion over 10 years, Reuters reported in December. To boost domestic chipmaking, Italy is also in talks with French-Italian STMicroelectronics, Taiwanese-controlled MEMC Electronic Materials Inc and Israeli Tower Semiconductor, which is set to be bought by Intel. Negotiations with Intel are complex as the U.S. group has tabled very tough demands, a government source involved in the talks told Reuters.

As part of an 8 billion euro package to support the economy and curb surging energy bills, Italy plans to allocate 150 million euros in 2022 and 500 million euros per year from 2023 until 2030, the decree showed. The Italian government will promote "research and development of microprocessor technology and investments in new industrial applications of innovative technologies," the legislation added. Rome aims to use the funding also to convert existing industrial sites and favor the construction of new plants in Italy.

Hardware

Lenovo's Newest ThinkPads Feature Snapdragon Processors and 165Hz Screens (theverge.com) 51

An anonymous reader shares a report: Lenovo has dumped a whole bunch of new ThinkPads into the world, and there's some exciting stuff in there. We're getting a brand-new ThinkPad X13s powered by Snapdragon chips, a fifth-generation ThinkPad X1 Extreme with a WQXGA 165Hz screen option, and new additions to the P-series and T-series as well. The news I'm personally most excited about is the screen shape. A few months ago, Lenovo told me that much of its portfolio would be moving to the 16:10 aspect ratio this year. They appear to be keeping their word. Across the board, the new models are 16:10 -- taller and roomier than they were in their 16:9 eras. Some news that's a bit more... intriguing is the all-new ThinkPad X13s, which is the first laptop to feature the Snapdragon 8cx Gen 3 compute platform. Qualcomm made some lofty claims about this platform upon its release, including "60 percent greater performance per watt" over competing x86 platforms and "multi-day battery life." The ThinkPad X13s will run an Arm version of Windows 11, with its x64 app emulation support. The P-series models and Intel T-series models will all be here in April, with prices ranging from $1,399 to $1,419.
Hardware

Ukraine War Flashes Neon Warning Lights for Chips (reuters.com) 101

Russia's invasion of Ukraine by land, air and sea risks reverberating across the global chip industry and exacerbating current supply-chain constraints. Reuters Breakingviews: Ukraine is a major producer of neon gas critical for lasers used in chipmaking and supplies more than 90% of U.S. semiconductor-grade neon, according to estimates from research firm Techcet. About 35% of palladium, a rare metal also used for semiconductors, is sourced from Russia. A full-scale conflict disrupting exports of these elements might hit players like Intel, which gets about 50% of its neon from Eastern Europe, according to JPMorgan. ASML, which supplies machines to semiconductor makers, sources less than 20% of the gases it uses from the crisis-hit countries.
AMD

AMD Is Now Worth More Than Rival Intel (yahoo.com) 25

Hmmmmmm shares a report from Yahoo Finance: AMD's market cap currently stands at $188 billion after shares rose nearly 2% in Tuesday's session. Intel's market cap is $182 billion. That marks the second time in a week AMD's market value has climbed above Intel's -- the first time it happened was a week ago. Followers of this battle may not be surprised to see this one happen (and seeing it continue from here) for several reasons. First, AMD has been winning the battle on Wall Street with the sexier investment thesis. AMD last week closed its $35 billion acquisition of Xilinx. Second, AMD has flat out posted better financials than Intel (for some time) as it has gained market share in key areas (notably in servers). AMD's sales and profits rose 68% and 117%, respectively, in 2021. The company outlined 31% revenue growth for 2022 and gross profit margins of 51%. Intel's 2021 sales and earnings increased 2% and 7%, respectively. The company sees sales in 2022 rising about 2%. Profits are expected to drop 36% as Intel further builds out its chip-making capacity.
Intel

Intel Ramps Up Linux Investment By Acquiring Linutronix (phoronix.com) 3

Intel has acquired Linutronix, the German-based Linux consulting firm that is focused on embedded Linux and real-time computing. From a report: Intel's acquisition of Linutronix appears to be primarily an acqui-hire, bringing Linutronix's very talented staff to Intel. Among the prominent Linutronix engineers is their CTO Thomas Gleixner, a longtime kernel maintainer and important contributor on the x86 side, including with Linux's CPU security mitigations and perhaps most notably the real-time (PREEMPT_RT) work.
Intel

Intel's 12th Gen Alder Lake Chips for Thinner and Lighter Laptops Have Arrived (theverge.com) 28

Intel launched the first wave of its 12th Gen Alder Lake chips at CES 2022 -- but only for its H-series lineup of chips, destined for the most powerful and power-hungry laptops. And now, it's rolling out the rest of its Alder Lake laptop lineup: the P-series and U-series models it briefly showed off in January, which are set to power the thinner, lighter, and cheaper laptops of 2022. From a report: In total, there are a whopping 20 chips fit for a wide range of hardware across the P-series, U-series (15W), and U-series (9W) categories, with the first laptops powered by the new processors set to arrive in March. Like their more powerful H-series cousins (and the Alder Lake desktop chips that Intel launched in late 2021 and at CES 2022), the new P-series and U-series chips have a lot more cores than 2020's 11th Gen models, with a hybrid architecture approach that combines performance and efficiency cores to maximize both power and battery life. And Intel is promising some big improvements focused around those boosted core counts, touting up to 70 percent better multi-thread performance than previous 11th Gen (and AMD) hardware. The company also says that it wins out in benchmarks against chips like Apple's M1 and M1 Pro (although not the M1 Max), and AMD's Ryzen R7 5800U in tasks like web browsing and photo editing.
