Japan

Canon Is Building Its First Lithography Plant In 21 Years (petapixel.com) 13

Canon is about to begin constructing a new $345 million plant to build the equipment used in a crucial part of semiconductor manufacturing called lithography. PetaPixel reports: Lithography is the first step in building chips for everything from microwave ovens to defense systems. The machines involved in this process must perform each step with extreme precision and accuracy. It is part of what most people think of when they envision the large white clean rooms in processor manufacturing. According to Nikkei Asia, which covers the industry and economics of Japan, Canon is expected to invest more than $345 million in this new plant in the Tochigi prefecture, a sum covering the facility's construction and the equipment to produce these lithographic machines.

The company currently operates two other plants in Japan, mainly for the production of chips for the automotive industry, and anticipates that this new facility will double its production capacity. According to Nikkei Asia, sales of semiconductor lithography equipment are "expected to rise 29%, year on year, in 2022 to 180 units, a fourfold increase versus ten years ago." Currently, Canon produces 30% of the world's lithography equipment, about half the share of market leader ASML. Intel and Taiwan Semiconductor have said they will expand their operations as well.

Nikkei Asia also notes that Canon will "develop next-generation technology called nanoimprint lithography" due to the high cost and high energy consumption of current equipment, and nanoimprint lithography will handle "finer line widths," which means more capacity and reduced processing time per chip. Canon is reported to expect 40% lower costs for the new process, as well as a 90% reduction in power consumption. The new plant is expected to come online in 2025 and will be built adjacent to an existing plant. Canon has not built a new lithography plant in 21 years.

Businesses

Micron To Spend Up To $100 Billion To Build a Computer Chip Factory In New York (cnbc.com) 50

Micron will spend up to $100 billion over at least the next two decades building a new computer chip factory in upstate New York, the state said on Tuesday. CNBC reports: The announcement, first reported by The New York Times, comes after the passage of the CHIPS and Science Act of 2022, a federal law championed by Senate Majority Leader Chuck Schumer, D-N.Y., that allocates $52 billion to encourage more domestic semiconductor production. Micron CEO Sanjay Mehrotra credited the passage of the law for making the investment possible, according to the Times. [...] When the CHIPS Act became law, it spurred a wave of investment announcements by semiconductor companies, including Micron, which at the time pledged $40 billion through 2030 for U.S. chip manufacturing, saying it would create up to 40,000 domestic jobs. Qualcomm also committed to buying an additional $4.2 billion worth of chips from GlobalFoundries' plant in New York. Intel had said its plans to invest up to $100 billion in chip manufacturing in Ohio relied heavily on the federal legislation.

New York's Democratic governor, Kathy Hochul, also played a role, working to persuade Micron to bring its plant to Clay, a town near Syracuse, the Times reported. The performance-based incentive package from the state is valued at $5.5 billion and is tied to Micron's commitment to create 9,000 new jobs and to follow through on the $100 billion investment. Micron must also meet certain sustainability standards to get the tax credits. According to a press release from Hochul's office, an economic impact study by Regional Economic Models found the project will create an average of nearly 50,000 jobs in New York state per year over the first 31 years of its operation. It also estimated the project would generate an additional $16.7 billion in real, inflation-adjusted economic output for the state.

Open Source

Linux 6.0 Arrives With Support For Newer Chips, Core Fixes, and Oddities (arstechnica.com) 26

An anonymous reader quotes a report from Ars Technica: A stable version of Linux 6.0 is out, with 15,000 non-merge commits and a notable version number for the kernel. And while major Linux releases only happen when the prior number's dot numbers start looking too big -- "there is literally no other reason" -- there are a lot of notable things rolled into this release besides a marking in time. Most notable among them could be a patch that prevents a nearly two-decade slowdown for AMD chips, based on workaround code for power management in the early 2000s that hung around for far too long. [...]

Intel's new Arc GPUs are supported in their discrete laptop form in 6.0 (though still experimental). Linux blog Phoronix notes that Intel's Arc GPUs all seem to run on open source upstream drivers, so support should show up for future Intel cards and chipsets as they arrive on the market. Linux 6.0 includes several hardware drivers of note: fourth-generation Intel Xeon server chips, the not-quite-out 13th-generation Raptor Lake and Meteor Lake chips, AMD's RDNA 3 GPUs, Threadripper CPUs, EPYC systems, and audio drivers for a number of newer AMD systems. One small, quirky addition points to larger things happening inside Linux. Lenovo's ThinkPad X13s, based on an ARM-powered Qualcomm Snapdragon chip, get some early support in 6.0. ARM support is something Linux founder Linus Torvalds is eager to see [...].

Among other changes you can find in Linux 6.0, as compiled by LWN.net (in part one and part two):
- ACPI and power management improvements for Sapphire Rapids CPUs
- Support for SMB3 file transfer inside Samba, while SMB1 is further deprecated
- More work on RISC-V, OpenRISC, and LoongArch technologies
- Intel Habana Labs Gaudi2 support, allowing hardware acceleration for machine-learning libraries
- A "guest vCPU stall detector" that can tell a host when a virtual client is frozen
Ars' Kevin Purdy notes that in 2022, "there are patches in Linux 6.0 to help Atari's Falcon computers from the early 1990s (or their emulated descendants) better handle VGA modes, color, and other issues."

Not included in this release are Rust improvements, but they "are likely coming in the next point release, 6.1," writes Purdy.

Businesses

Intel's Self-Driving Technology Mobileye Unit Files for IPO (bloomberg.com) 15

Intel has filed for an initial public offering of its self-driving technology business, Mobileye Global, braving the worst market for new US listings since the financial crisis more than a decade ago. Bloomberg reports: The company didn't disclose terms of the planned share sale in its filing Friday with the US Securities and Exchange Commission. Mobileye will continue to be controlled by Intel after the IPO, according to the filing. Intel expects the IPO to value Mobileye at as much as $30 billion, less than originally hoped, Bloomberg News reported this month. If the listing goes ahead this year, it would be one of the biggest US offerings of 2022. Currently, only two companies have raised $1 billion or more on New York exchanges since Jan. 1, compared with 45 in 2021. This year, the US share of IPOs has shrunk to less than a seventh of the global total from half in 2021.

Intel Chief Executive Officer Pat Gelsinger is trying to capitalize on Jerusalem-based Mobileye, acquired in 2017 for $15 billion, with a partial spinoff of its shares. Mobileye makes chips for cameras and drive-assistance features, and is seen as a prized asset as the car industry races toward fully automated vehicles. Now with about 3,100 employees, Mobileye has collected data from 8.6 billion miles on the road from eight testing sites globally, according to its filing. The company says its technology leads in the race to shift the automotive industry away from human drivers. It's shipped 117 million units of its EyeQ product.

Mobileye has been a particularly bright spot for Intel and has consistently grown faster than its parent. As of July, it had $774 million of cash and cash equivalents. In the 12 months ended Dec. 25, it had a net loss of $75 million on revenue of $1.39 billion. The company said it plans to use proceeds from the IPO to pay down debt and for working capital and general corporate purposes.

AMD

Rewritten OpenGL Drivers Make AMD's GPUs 'Up To 72%' Faster in Some Pro Apps (arstechnica.com) 23

Most development effort in graphics drivers these days, whether you're talking about Nvidia, Intel, or AMD, is focused on new APIs like DirectX 12 or Vulkan, increasingly advanced upscaling technologies, and specific improvements for new game releases. But this year, AMD has also been focusing on an old problem area for its graphics drivers: OpenGL performance. From a report: Over the summer, AMD released a rewritten OpenGL driver that it said would boost the performance of Minecraft by up to 79 percent (independent testing also found gains in other OpenGL games and benchmarks, though not always to the same degree). Now those same optimizations are coming to AMD's officially validated GPU drivers for its Radeon Pro-series workstation cards, providing big boosts to professional apps like Solidworks and Autodesk Maya. "The AMD Software: PRO Edition 22.Q3 driver has been tested and approved by Dell, HP, and Lenovo for stability and is available through their driver downloads," the company wrote in its blog post. "AMD continues to work with software developers to certify the latest drivers." Using a Radeon Pro W6800 workstation GPU, AMD says that its new drivers can improve Solidworks rendering speeds by up to 52 percent at 4K and 28 percent at 1080p. Autodesk Maya performance goes up by 34 percent at 4K or 72 percent at the default resolution. The size of the improvements varies based on the app and the GPU, but AMD's testing shows significant, consistent improvements across the board on the Radeon Pro W6800, W6600, and W6400 GPUs, improvements that AMD says will help those GPUs outpace analogous Nvidia workstation GPUs like the RTX A5000 and A2000 and the Nvidia T600.
Crime

NSA Employee Leaked Classified Cyber Intel, Charged With Espionage (nextgov.com) 69

A former National Security Agency employee was arrested on Wednesday for spying on the U.S. government on behalf of a foreign government. Nextgov reports: Jareh Sebastian Dalke, 30, was arrested in Denver, Colorado after allegedly committing three separate violations of the Espionage Act. Law enforcement alleges that the violations were committed between August and September of 2022, after he worked as an information systems security designer at the agency earlier that summer. Dalke allegedly used an encrypted email account to leak sensitive and classified documents he obtained while working at the NSA to an individual who claimed to have worked for a foreign government.

The individual who received the documents was later revealed to be an undercover FBI agent. Dalke was arrested in September upon arriving at the location where he and the undercover agent agreed to exchange documentation for $85,000 in compensation. "Dalke told that individual that he had taken highly sensitive information relating to foreign targeting of U.S. systems, and information on U.S. cyber operations, among other topics," the press release from the Department of Justice reads. "To prove he had access to sensitive information, Dalke transmitted excerpts of three classified documents to the undercover FBI agent. Each excerpt contained classification markings."
"Should Dalke be found guilty, his sentence could include the dealth penalty or any term of years up to life imprisonment," notes the report.
Displays

Intel and Samsung Are Getting Ready For 'Slidable' PCs (theverge.com) 19

During Intel's Innovation keynote today, Samsung Display showed off a prototype PC that slides from a 13-inch tablet into a 17-inch display. Intel also announced that it's been experimenting with slidable PC form factors. The Verge reports: The prototype device that Samsung Display and Intel have shown off today essentially turns a 13-inch tablet into a 17-inch monitor with a flexible display and a sliding mechanism. Intel was quick to demonstrate its new Unison software on this display, which aims to connect Intel-powered computers to smartphones -- including iPhones. The slidable PC itself is just a concept for now, and there's no word from Intel or Samsung Display on when it will become a reality.
Intel

Intel's Unison App Syncs iOS and Android Phones With Your PC (theverge.com) 34

Intel has announced an intriguing new app called Unison, which aims to "seamlessly" connect Intel-powered computers to smartphones -- not just Android phones but iOS devices as well. From a report: Following what Intel says is a "simple pairing process," the Unison app will allow PCs to replicate four key features of the connected phone. They can answer and make calls; they can share photos and files (pictures taken with the phone will show up in a specific Unison gallery on the PC); they can send and receive texts; and they can receive (and, in some cases, respond to) notifications that the phone receives -- though if Unison is closed, they'll go to the Windows notification center. "The advantage we can bring to a PC user that's got a well-designed Windows PC is not having to choose their device based on the PC they have. They have an iPhone, they have an Android phone, any device they want to use will be able to connect with this capability," Josh Newman, Intel's VP of mobile innovation, told The Verge. "When you're ... on your laptop, and you get notifications or texts on your phone, you can keep it in your bag and get right back into the flow of your work."
Intel

Intel's 13th-Gen 'Raptor Lake' CPUs Are Official, Launch October 20 (arstechnica.com) 45

Codenamed Raptor Lake, Intel says it has made some improvements to the CPU architecture and the Intel 7 manufacturing process, but the strategy for improving their performance is both time-tested and easy to understand: add more cores, and make them run at higher clock speeds. From a report: Intel is announcing three new CPUs today, each with and without integrated graphics (per usual, the models with no GPUs have an "F" at the end): the Core i9-13900K, Core i7-13700K, and Core i5-13600K will launch on October 20 alongside new Z790 chipsets and motherboards. They will also work in all current-generation 600-series motherboards as long as your motherboard maker has provided a BIOS update, and will continue to support both DDR4 and DDR5 memory.

Raptor Lake uses the hybrid architecture that Intel introduced in its 12th-generation Alder Lake chips last year -- a combination of large performance cores (P-cores) that keep games and other performance-sensitive applications running quickly, plus clusters of smaller efficiency cores (E-cores) that use less power -- though in our testing across laptops and desktops, it's clear that "efficiency" is more about the number of cores that can be fit into a given area on a CPU die, and less about lower overall system power consumption. There have been a handful of other additions as well. The amount of L2 cache per core has been nearly doubled, going from 1.25MB to 2MB per P-core and from 2MB to 4MB per E-core cluster (E-cores always come in clusters of four). The CPUs will officially support DDR5-5600 RAM, up from a current maximum of DDR5-4800, though that DDR5-4800 maximum can easily be surpassed with XMP memory kits in 12th-generation motherboards. The maximum officially supported DDR4 RAM speed remains DDR4-3200, though the caveat about XMP applies there as well. As far as core counts and frequencies go, the Core i5 and Core i7 CPUs each pick up one extra E-core cluster, going from four E-cores to eight. The Core i9 gets two new E-core clusters, boosting the core count from eight all the way up to 16. All E-cores have maximum boost clocks that are 400MHz higher than they were before.
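Those per-core cache figures make the chip-level totals easy to tally; a quick back-of-envelope sketch (the helper function is purely illustrative, with core counts and cache sizes taken from the figures above):

```python
# Tally total L2 cache from the per-core figures quoted above.
# Raptor Lake defaults: 2 MB per P-core, 4 MB per four-core E-core cluster.
def total_l2_mb(p_cores, e_cores, p_l2=2.0, e_cluster_l2=4.0):
    assert e_cores % 4 == 0, "E-cores come in clusters of four"
    return p_cores * p_l2 + (e_cores // 4) * e_cluster_l2

# Core i9-13900K: 8 P-cores + 16 E-cores
print(total_l2_mb(8, 16))                              # 32.0 MB
# Last year's Core i9-12900K: 1.25 MB/P-core, 2 MB/cluster, 8 E-cores
print(total_l2_mb(8, 8, p_l2=1.25, e_cluster_l2=2.0))  # 14.0 MB
```

So the Core i9's total L2 capacity more than doubles generation over generation, from 14MB to 32MB.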

AMD

A 20 Year Old Chipset Workaround Has Been Hurting Modern AMD Linux Systems (phoronix.com) 53

AMD engineer K Prateek Nayak recently uncovered that a 20-year-old chipset workaround in the Linux kernel, still being applied to modern AMD systems, is responsible in some cases for hurting performance on modern Zen hardware. Fortunately, a fix is on the way to limit that workaround to old systems and in turn help performance on modern systems. Phoronix reports: Last week a patch was posted for the ACPI processor idle code to avoid an old chipset workaround on modern AMD Zen systems. Since ACPI support was added to the Linux kernel in 2002, there has been a "dummy wait op" to deal with some chipsets where STPCLK# doesn't get asserted in time. The dummy I/O read delays further instruction processing until the CPU is fully stopped. This was a problem with at least some AMD Athlon era systems with a VIA chipset... But not a problem with newer chipsets of roughly the past two decades.

With this workaround still being applied to even modern AMD systems, K Prateek Nayak discovered: "Sampling certain workloads with IBS on AMD Zen3 system shows that a significant amount of time is spent in the dummy op, which incorrectly gets accounted as C-State residency. A large C-State residency value can prime the cpuidle governor to recommend a deeper C-State during the subsequent idle instances, starting a vicious cycle, leading to performance degradation on workloads that rapidly switch between busy and idle phases. One such workload is tbench where a massive performance degradation can be observed during certain runs."

At least for Tbench, this long-time, unconditional workaround in the Linux kernel has been hurting AMD Ryzen / Threadripper / EPYC performance in select workloads. This workaround hasn't affected modern Intel systems since those newer Intel platforms use the alternative MWAIT-based intel_idle driver code path instead. The AMD patch evolved into this patch by Intel Linux engineer Dave Hansen. That patch to limit the "dummy wait" workaround to old systems is already queued into TIP's x86/urgent branch. With it going the route of "x86/urgent" and fixing an overzealous workaround that isn't needed on modern hardware, it's likely this patch will still be submitted this week for the Linux 6.0 kernel rather than needing to wait until the next (v6.1) merge window.
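The shape of the fix is simple to model: perform the legacy dummy I/O read only on the old platforms that may actually need it. A minimal Python sketch of that gating logic (the vendor and family cutoffs here are illustrative stand-ins, not the kernel's exact checks):

```python
# Illustrative model of the fix described above: apply the legacy "dummy
# wait" I/O read only on old platforms that may actually need it. The real
# change lives in the kernel's ACPI idle path; the values below are
# stand-ins for illustration, not kernel code.
def needs_dummy_wait(vendor: str, family: int) -> bool:
    """Return True only for old parts where STPCLK# may assert late."""
    if vendor == "AMD":
        return family < 0x17          # pre-Zen (illustrative cutoff)
    if vendor == "Intel":
        return False                  # modern Intel idles via intel_idle,
                                      # which bypasses this ACPI path
    return True                       # unknown vendors: keep the safe path

print(needs_dummy_wait("AMD", 0x06))  # True: Athlon-era system
print(needs_dummy_wait("AMD", 0x19))  # False: Zen 3 skips the dummy op
```

The key property is that the expensive I/O read disappears from the idle path on hardware where the erratum cannot occur, which is what removes the bogus C-state residency accounting described above.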

Media

AV1 Update Reduces CPU Encoding Times By Up To 34 Percent (tomshardware.com) 37

According to Phoronix, Google has released a new AOM-AV1 update -- version 3.5 -- that drastically improves encode times when streaming, rendering, or recording from the CPU. At its best, the update can improve encoding times by up to 34%. Tom's Hardware reports: It is a fantastic addition to AV1's capabilities, with the encoder becoming very popular among powerful video platforms such as YouTube. In addition, we are also seeing significant support for AV1 hardware acceleration on modern discrete GPUs now, such as Intel's Arc Alchemist GPUs and, most importantly, Nvidia's RTX 40-series GPUs. Depending on the resolution, encoding times with the new update have improved by 20% to 30%. For example, at 1080p, encode times featuring 16 threads of processing are reduced by 18% to 34%. At 4K, render times improved by 18% to 20% with 32 threads. Google was able to do this by adding Frame Parallel Encoding to heavily multi-threaded configurations. Google has also added several other improvements contributing to AV1's performance uplifts in other areas -- specifically in real-time encoding.

In other words, CPU utilization in programs such as OBS has been reduced, primarily for systems packing 16 CPU threads. As a result, users can put those CPU resources toward other tasks or push video quality even higher without any additional performance cost. If you are video editing and rendering out a video in AV1, processing times will be vastly reduced if you have a CPU with 16 threads or more.
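Note that a cut in encode time implies a somewhat larger speed multiplier than the headline percentage; a small sketch of the conversion (the helper is ours, for illustration only):

```python
# Convert an encode-time reduction (e.g. "encodes take 34% less time")
# into the equivalent throughput multiplier: time * (1 - r)  =>  speed / (1 - r).
def speedup_from_time_reduction(reduction_pct):
    return 1.0 / (1.0 - reduction_pct / 100.0)

print(round(speedup_from_time_reduction(34), 2))  # 1.52: best-case 1080p figure
print(round(speedup_from_time_reduction(20), 2))  # 1.25: upper end of the 4K figures
```

So the best-case 34% reduction at 1080p corresponds to encoding roughly 1.5x as many frames per second on the same hardware.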

Graphics

EVGA Abandons the GPU Market, Reportedly Citing Conflicts With Nvidia (tomshardware.com) 72

UnknowingFool writes: After a decades-long partnership with Nvidia, EVGA has announced it is ending the relationship. Citing conflicts with Nvidia, EVGA CEO Andrew Han said the company will not partner with Intel or AMD, and will be exiting the GPU market completely. The company will continue to sell existing RTX 30-series cards until its stock runs out but will not release a 4000-series card. YouTube channels JayzTwoCents and GamersNexus broke the news after sitting down with Han to discuss his frustrations with Nvidia as a partner. Jon Peddie Research also published a brief article on the matter.
Intel

Intel Processor Will Replace Pentium and Celeron in 2023 Laptops (theverge.com) 61

Intel is replacing its Pentium and Celeron brands with just Intel Processor. The new branding will replace both existing brands in 2023 notebooks and supposedly make things easier when consumers are looking to purchase budget laptops. From a report: Intel will now focus on its Core, Evo, and vPro brands for its flagship products and use Intel Processor in what it calls "essential" products. "Intel is committed to driving innovation to benefit users, and our entry-level processor families have been crucial for raising the PC standard across all price points," explains Josh Newman, VP and interim general manager of mobile client platforms at Intel. "The new Intel Processor branding will simplify our offerings so users can focus on choosing the right processor for their needs."

The end of the Pentium brand comes after nearly 30 years of use. Originally introduced in 1993, flagship Pentium chips were first introduced in high-end desktop machines before making the move to laptops. Intel has largely been using its Core branding for its flagship line of processors ever since its introduction in 2006, and Intel repurposed the Pentium branding for midrange processors instead. Celeron was Intel's brand name for chips in low-cost PCs. Launched around five years after Pentium, Celeron chips have always offered a lot less performance at a lot less cost for laptop makers and, ultimately, consumers. The first Celeron chip in 1998 was based on a Pentium II processor, and the latest Celeron processors are largely used in Chromebooks and low-cost laptops.

Intel

Why Is Intel's GPU Program Having Problems? 38

An anonymous reader shares a report: If you recall, DG2/Arc Alchemist was supposed to debut late last year as a holiday sales item. This would have been roughly 18 months late, something we won't defend. What was meant to be a device that nipped at the heels of the high end market was now a solid mid-range device. Silicon ages far worse than fish but it was finally coming out. That holiday release was delayed 4-6 weeks because the factory making the boards was hit by Covid and things obviously slowed down or stopped. SemiAccurate has confirmed this issue. If you are going to launch for holiday sales and you get delayed, it is probably a better idea to time it with the next obvious sales uplift than launch it between, oh say, Christmas and New Year's Day. So that pushed DG2/AA into mid/late Q1. Fair enough. During the Q2/22 analyst call, Intel pointed out that the standalone cards were delayed again and the program wasn't exactly doing well. While the card is out now, the reports of drivers being, let's be kind and say sub-optimal, abounded. The power/performance ratio was way off too, but there aren't many saying the price is way off unless you are looking at Intel's margins to determine what to buy the kiddies.

[...] Intel is usually pretty good at drivers but this time around things are quite uncharacteristic. Intel offered a few reasons for this on their Q2/22 analyst call which boiled down to, 'this is harder than we thought,' but that isn't actually the reason. If that was it, the SemiAccurate blamethrower would have been used and refueled several times already, so what really caused this mess? The short version is to look at where the drivers are being developed. In this case Intel is literally developing the DG2 drivers all over the world, as it does for many things, hardware and software. The problem this time is that key parts of the drivers for this GPU, specifically the shader compiler and related key performance pieces, were being done by the team in Russia. On February 24, Russia invaded Ukraine and the West put some rather stiff sanctions on the aggressor, essentially cutting off the ability to do business in the country. Even if businesses decided to stick with Russia, it would have been nearly impossible to pay the wages of their workers due to sanctions on financial institutions and related uses of foreign currencies. In short, Intel had a key development team cut off almost overnight with no warning. This is why SemiAccurate says it isn't Intel's fault: even if the company saw the war coming, it probably didn't see the sanctions coming.
Security

Retbleed Fix Slugs Linux VM Performance By Up To 70 Percent (theregister.com) 33

VMware engineers have tested the Linux kernel's fix for the Retbleed speculative execution bug, and report it can impact compute performance by a whopping 70 percent. The Register reports: In a post to the Linux Kernel Mailing List titled "Performance Regression in Linux Kernel 5.19", VMware performance engineering staffer Manikandan Jagatheesan reports the virtualization giant's internal testing found that running Linux VMs on the ESXi hypervisor using version 5.19 of the Linux kernel saw compute performance dip by up to 70 percent when using a single vCPU, networking fall by 30 percent, and storage performance dip by up to 13 percent. Jagatheesan said VMware's testers turned off the Retbleed remediation in version 5.19 of the kernel and ESXi performance returned to levels experienced under version 5.18.

Because speculative execution exists to speed processing, it is no surprise that disabling it impacts performance. A 70 percent decrease in computing performance will, however, have a major impact on application performance that could lead to unacceptable delays for some business processes. VMware's tests were run on Intel Skylake CPUs -- silicon released between 2015 and 2017 that will still be present in many server fleets. Subsequent CPUs addressed the underlying issues that allowed Retbleed and other Spectre-like attacks.
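To put the headline numbers in perspective, a drop in performance translates into a larger elapsed-time multiplier than the percentage alone suggests; a quick check (the helper is illustrative, not from the Register's report):

```python
# A drop of X% in performance means throughput falls to (100 - X)% of
# baseline, so elapsed time for the same work grows by the reciprocal.
def slowdown_factor(perf_drop_pct):
    remaining = 1.0 - perf_drop_pct / 100.0
    return 1.0 / remaining

print(round(slowdown_factor(70), 2))  # 3.33: compute, single vCPU
print(round(slowdown_factor(30), 2))  # 1.43: the networking dip
print(round(slowdown_factor(13), 2))  # 1.15: the storage dip
```

In other words, the worst-case compute workload doesn't run 70 percent slower in wall-clock terms; it takes well over three times as long.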

Intel

Intel Teases 6 GHz Raptor Lake at Stock, 8 GHz Overclocking World Record (tomshardware.com) 55

Tom's Hardware reports: We're here in Israel for Intel's Technology Tour 2022, where the company is sharing new information about its latest products, much of it under embargo until a later date. However, the company did share a slide touting that Raptor Lake is capable of operating at 6GHz at stock settings and that it has set a world overclocking record at 8GHz - obviously with liquid nitrogen (here's our deep dive on the 13th-Gen Intel processors). Intel also shared impressive performance projections for single- and multi-thread performance.

Notably, the peak of 6 GHz is 300 MHz faster than the 5.7 GHz for AMD's Ryzen 7000 processors, but Intel hasn't announced which product will hit that peak speed. We also aren't sure if a 6GHz chip will arrive with the first wave of chips or be a special edition 'KS' model. Intel also claimed that Raptor Lake will have a 15% gain in single-threaded performance and a 41% gain in multi-threaded, as measured by SPECintrate_2017 and compared to Alder Lake, and an overall '40% performance scaling.'

Intel

Intel Reveals Specs of Arc GPU (windowscentral.com) 23

Intel has dripped out details about its upcoming Arc graphics cards over the last few months, but until recently, we didn't have full specifications for the GPUs. That changed when Intel dropped a video and a post breaking down the full Arc A-series. From a report: The company shared the spec sheets of the Arc A380, Arc A580, Arc A750, and Arc A770. It also explained the naming structure of the new GPUs along with other details. Just about the only major piece of information we're still missing is the release date for the cards. At the top end of the range, Intel's Arc A770 will have 32 Xe cores, 32 ray-tracing units, and a graphics clock of 2100MHz. That GPU will be available with either 8GB or 16GB of memory. Sitting just below the Arc A770, the Arc A750 will have 28 Xe cores, 28 ray-tracing units, and 8GB of memory. The Intel Arc A580 will sit in the middle between the company's high-end GPUs and the Intel Arc A380.
Intel

Asus Packs 12-Core Intel i7 Into a Raspberry Pi-Sized Board (theregister.com) 30

An anonymous reader quotes a report from The Register: Aaeon's GENE-ADP6, announced this week, can pack as much as a 12-core/16-thread Intel processor with Iris Xe graphics into a 3.5-inch form factor. The diminutive system is aimed at machine-vision applications and can be configured with your choice of Intel silicon, including Celeron, Core i3, Core i5, or 10- or 12-core Core i7 processors. As with other SBCs we've seen from Aaeon and others, the processors aren't socketed so you won't be upgrading later. This device is pretty much aimed at embedded and industrial use, mind. All five SKUs are powered by Intel's current-gen Alder Lake mobile processor family, including a somewhat unusual 5-core Celeron processor that pairs a single performance core with four efficiency cores. However, only the i5 and i7 SKUs come equipped with Intel's Iris Xe integrated graphics. The i3 and Celeron are stuck on UHD graphics. The board can be equipped with up to 64GB of DDR5 memory operating at up to 4800 megatransfers/sec by way of a pair of SODIMM modules.

For I/O the board features a nice set of connectivity, including a pair of NICs operating at 2.5 Gbit/sec and 1 Gbit/sec, HDMI 2.1 and DisplayPort 1.4, three 10Gbit/sec-capable USB 3.2 Gen 2 ports, and a single USB-C port that supports up to 15W of power delivery and display out. For those looking for additional connectivity for their embedded applications, the system also features a plethora of pin headers for USB 2.0, display out, serial interfaces, and 8-bit GPIO. Storage is provided by your choice of a SATA 3.0 interface or an M.2 mSATA/NVMe SSD. Unlike Aaeon's Epic-TGH7 announced last month, the GENE-ADP6 is too small to accommodate a standard PCIe slot, but does feature an FPC connector, which the company says supports additional NVMe storage or external graphics by way of a 4x PCIe 4.0 interface.

Intel

Intel Details 12th Gen Core SoCs Optimized For Edge Applications (theregister.com) 6

Intel has made available versions of its 12th-generation Core processors optimized for edge and IoT applications, claiming the purpose-built chips enable smaller form factor designs, but with the AI inferencing performance to analyze data right at the edge. The Register reports: The latest members of the Alder Lake family, the 12th Gen Intel Core SoC processors for IoT edge (formerly Alder Lake PS), combine the performance profile and power envelope of the mobile chips with the LGA socket flexibility of the desktop chips, according to Intel, meaning they can be mounted directly on a system board or in a socket for easy replacement. Delivered as a multi-chip package, the new processors combine the Alder Lake cores with an integrated Platform Controller Hub (PCH) providing I/O functions and integrated Iris Xe graphics with up to 96 graphics execution units. [...]

Intel VP and general manager of the Network and Edge Compute Division Jeni Panhorst said in a statement that the new processors were designed for a wide range of vertical industries. "As the digitization of business processes continues to accelerate, the amount of data created at the edge and the need for it to be processed and analyzed locally continues to explode," she said. Another key capability for managing systems deployed in edge scenarios is that these processors include Intel vPro features, which include remote management capabilities built into the hardware at the silicon level, so an IT admin can reach into a system and perform actions such as changing settings, applying patches or rebooting the platform.

The chips support up to eight PCIe 4.0 lanes and four Thunderbolt 4/USB4 lanes, with up to 64GB of DDR5 or DDR4 memory, and the integrated graphics can drive four 4K displays or one 8K display. Operating system support includes Windows 10 IoT Enterprise 2021 Long Term Servicing Channel (LTSC) and Linux options. Intel said the new SoCs are aimed at a broad range of industries, including point-of-sale kit in the retail, banking, and hospitality sectors, industrial PCs and controllers for the manufacturing industry, plus healthcare.

AI

China Woos US Tech Giants Apple, Qualcomm, Meta at Shanghai AI Expo (nikkei.com) 20

Big U.S. tech companies have flocked to the World Artificial Intelligence Conference that opened Thursday in Shanghai, drawing a stark contrast with Washington's ongoing efforts to distance itself economically from China. From a report: The opening ceremony included a virtual address by Qualcomm CEO Cristiano Amon, who said the company will supply the most complete and comprehensive technology and solutions in China and the world. Apple, Advanced Micro Devices, Facebook parent Meta and GE HealthCare also have executives or booths at the event, according to Chinese media. Europe's semiconductor industry is represented as well, with executives from Netherlands-based NXP Semiconductors, a major supplier of automotive chips, and Germany's Infineon Technologies discussing development plans.

The strong American showing is good news for China, which needs advanced chip technology to power its AI development and is keen to win over companies that can provide it. The business opportunities afforded by the massive Chinese market remain essential to many American companies. China is a leading information technology production hub, as well as the world's top auto production center -- an increasingly important field for chipmakers as the number of semiconductors used in vehicles continues to rise. Qualcomm generated roughly two-thirds of its sales last year in China, a major production base for many of the smartphone manufacturers that are among its main customers. The country accounts for just under 30% of sales at AMD and Intel, 20% at Micron Technology and over 30% at NXP.
