
Akasa AK-ENU3M2-07 USB 3.2 Gen 2×2 SSD Enclosure Review: 20Gbps with Excellent Thermals

Storage bridges have become a ubiquitous part of today’s computing ecosystems. The bridges may be external or internal, with the former enabling a range of direct-attached storage (DAS) units. These may range from thumb drives using a UFD controller to full-blown RAID towers carrying InfiniBand and Thunderbolt links. From a bus-powered DAS viewpoint, Thunderbolt has been restricted to premium devices, but the variants of USB 3.2 have emerged as mass-market high-performance alternatives. USB 3.2 Gen 2×2 enables the highest performance class (up to 20 Gbps) in USB devices without resorting to PCIe tunneling. The key challenges for enclosures and portable SSDs supporting 20 Gbps speeds include handling power consumption and managing thermals. Today’s review takes a look at the relevant performance characteristics of Akasa’s AK-ENU3M2-07 – a USB 3.2 Gen 2×2 enclosure for M.2 NVMe SSDs.
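
For a rough sense of what 20 Gbps translates to in practice, the back-of-the-envelope calculation below is a minimal Python sketch: a USB 3.2 Gen 2×2 link runs two 10 Gbps lanes with 128b/132b encoding, while the additional protocol-overhead factor used here is an assumed ballpark, not a figure from the spec or from this review.

```python
# Rough payload ceiling for a USB 3.2 Gen 2x2 link (illustrative only).
# Two 10 Gbps lanes, each using 128b/132b line encoding; the extra
# protocol-overhead factor below is an assumed ballpark, not a spec value.

RAW_BPS = 2 * 10e9             # two lanes at 10 Gbps each
LINE_ENCODING = 128 / 132      # 128b/132b encoding efficiency
PROTOCOL_OVERHEAD = 0.90       # assumed fraction left after framing / NVMe bridging

line_rate_bytes = RAW_BPS * LINE_ENCODING / 8
practical_bytes = line_rate_bytes * PROTOCOL_OVERHEAD

print(f"Line-rate ceiling : {line_rate_bytes / 1e9:.2f} GB/s")
print(f"Practical estimate: {practical_bytes / 1e9:.2f} GB/s")
```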

OpenCAPI to Fold into CXL – CXL Set to Become Dominant CPU Interconnect Standard

With the 2022 Flash Memory Summit taking place this week, not only is there a slew of solid-state storage announcements in the pipe over the coming days, but the show has also become an increasingly popular venue for discussing I/O and interconnect developments. Kicking things off on that front, this afternoon the OpenCAPI and CXL consortiums are issuing a joint announcement that the two groups will be joining forces, with the OpenCAPI standard and the consortium’s assets being transferred to the CXL consortium. With this integration, CXL is set to become the dominant CPU-to-device interconnect standard, as virtually all major manufacturers are now backing it, while competing standards have bowed out of the race and been absorbed by CXL.

Pre-dating CXL by a few years, OpenCAPI was one of the earlier standards for a cache-coherent CPU interconnect. The standard, backed by AMD, Xilinx, and IBM, among others, was an extension of IBM’s existing Coherent Accelerator Processor Interface (CAPI) technology, opening it up to the rest of the industry and placing its control under an industry consortium. In the last six years, OpenCAPI has seen a modest amount of use, most notably being implemented in IBM’s POWER9 processor family. Like similar CPU-to-device interconnect standards, OpenCAPI was essentially an application extension on top of existing high speed I/O standards, adding things like cache-coherency and faster (lower latency) access modes so that CPUs and accelerators could work together more closely despite their physical disaggregation.

But, as one of several competing standards tackling this problem, OpenCAPI never quite caught fire in the industry. Born from IBM, the standard’s biggest user was IBM itself, at a time when IBM’s share of the server space was on the decline. And even consortium members on the rise, such as AMD, ended up passing on the technology, leveraging their own Infinity Fabric architecture for AMD server CPU/GPU connectivity, for example. This has left OpenCAPI without a strong champion – and without a sizable userbase to keep things moving forward.

Ultimately, the desire of the wider industry to consolidate behind a single interconnect standard – for the sake of both manufacturers and customers – has brought the interconnect wars to a head. And with Compute Express Link (CXL) quickly becoming the clear winner, the OpenCAPI consortium is becoming the latest interconnect standards body to bow out and become absorbed by CXL.

Under the terms of the proposed deal – pending approval by the necessary parties – the OpenCAPI consortium’s assets and standards will be transferred to the CXL consortium. This would include all of the relevant technology from OpenCAPI, as well as the group’s lesser-known Open Memory Interface (OMI) standard, which allowed for attaching DRAM to a system over OpenCAPI’s physical bus. In essence, the CXL consortium would be absorbing OpenCAPI; and while they won’t be continuing its development for obvious reasons, the transfer means that any useful technologies from OpenCAPI could be integrated into future versions of CXL, strengthening the overall ecosystem.

With the sublimation of OpenCAPI into CXL, this leaves the Intel-backed standard as the dominant interconnect standard – and the de facto standard for the industry going forward. The competing Gen-Z standard was similarly absorbed into CXL earlier this year, and the CCIX standard has been left behind, with its major backers joining the CXL consortium in recent years. So even with the first CXL-enabled CPUs not shipping quite yet, at this point CXL has cleared the neighborhood, as it were, becoming the sole remaining server CPU interconnect standard for everything from accelerator I/O (CXL.io) to memory expansion over the PCIe bus.

Best AMD Motherboards: July 2022

With AMD likely just months away from launching its highly anticipated Ryzen 7000 processors, the company’s longstanding AM4 platform has had a long and successful run. From the original Ryzen days with X370 all the way to today’s AM4 chipsets, including both X570 and B550, there’s a wide variety of models to choose from for Ryzen 5000 processors.

Despite readying a new socket for its Ryzen 7000 lineup with support for DDR5 memory only, AMD has confirmed that the AM4 platform will remain alive for a little while yet. When it comes to selecting a motherboard for an AMD-based desktop system, there’s plenty of choice, including the original X570 chipset, as well as the refreshed X570S models with passive chipset heatsinks, not forgetting the more affordable B550 range. It’s time to give you the lowdown on our picks, ranging from value all the way to the best that AM4 has to offer, in our latest AMD motherboard buyers guide for July 2022.

The Intel Core i9-12900KS Review: The Best of Intel’s Alder Lake, and the Hottest

As far as top-tier CPU SKUs go, Intel’s Core i9-12900KS processor stands in noticeably sharp contrast to the launch of AMD’s Ryzen 7 5800X3D processor with 96 MB of 3D V-Cache. Whereas AMD’s over-the-top chip was positioned as the world’s fastest gaming processor, Intel has kept the focus for its fastest chip on trying to beat the competition across the board and across every workload.

As the final 12th Generation Core (Alder Lake) desktop offering from Intel, the Core i9-12900KS is unambiguously designed to be the most powerful one. It’s a “special edition” processor, meaning that it’s a low-volume, high-priced chip aimed at customers who need or want the fastest thing possible, damn the price or the power consumption.

It’s a strategy that Intel has employed a couple of times now – most notably with the Coffee Lake-generation i9-9900KS – and which has been relatively successful for Intel. And to be sure, the market for such a top-end chip is rather small, but the overall mindshare impact of having the fastest chip on the market is huge. So, with Intel looking to put some distance between itself and AMD’s successful Ryzen 5000 family of chips, Intel has put together what is meant to be the final (and fastest) word in Alder Lake CPU performance, shipping a chip with peak (turbo) clockspeeds ramped up to 5.5GHz for its all-important performance cores.

For today’s review we’re putting Alder Lake’s fastest to the test, both against Intel’s other chips and AMD’s flagships. Does this clockspeed-boosted 12900K stand out from the crowd? And are the tradeoffs involved in hitting 5.5GHz worth it for what Intel is positioning as the fastest processor in the world? Let’s find out.

Intel To Wind Down Optane Memory Business – 3D XPoint Storage Tech Reaches Its End

It appears that the end may be in sight for Intel’s beleaguered Optane memory business. Tucked inside a brutal Q2’2022 earnings release for the company (more on that a bit later today) is a very curious statement in a section discussing non-GAAP adjustments: “In Q2 2022, we initiated the winding down of our Intel Optane memory business.” As well, Intel’s earnings report notes that the company is taking a $559 million “Optane inventory impairment” charge this quarter.

Taking these items at face value, it would seem that Intel is preparing to shut down its Optane memory business and development of the associated 3D XPoint technology. To be sure, there is a high degree of nuance here around the Optane name and product lines – which is why we’re looking for clarification from Intel – as Intel has several Optane products, including “Optane memory,” “Optane persistent memory,” and “Optane SSDs.” Nonetheless, within Intel’s previous earnings releases and other financial documents, the complete Optane business unit has traditionally been referred to as their “Optane memory business,” so it would appear that Intel is indeed winding down the complete Optane business unit, and not just the Optane Memory product.

Update: 6:40pm ET

Following our request, Intel has sent out a short statement on the Optane wind-down. While not offering much in the way of further details on Intel’s exit, it does confirm that Intel is indeed exiting the entire Optane business.

We continue to rationalize our portfolio in support of our IDM 2.0 strategy. This includes evaluating divesting businesses that are either not sufficiently profitable or not core to our strategic objectives. After careful consideration, Intel plans to cease future product development within its Optane business. We are committed to supporting Optane customers through the transition.

Intel’s associated 10-Q filing also contains a short statement on the matter.

In the second quarter of 2022, we initiated the wind-down of our Intel Optane memory business, which is part of our DCAI operating segment. While Intel Optane is a leading technology, it was not aligned to our strategic priorities. Separately, we continue to embrace the CXL standard. As a result, we recognized an inventory impairment of $559 million in Cost of sales on the Consolidated Condensed Statements of Income in the second quarter of 2022. The impairment charge is recognized as a Corporate charge in the “all other” category presented above. As we wind down the Intel Optane business, we expect to continue to meet existing customer commitments.

First announced by Intel in 2015, the company’s 3D XPoint memory technology was pitched as the convergence between DRAM and solid state storage. The unique, bit-addressable memory uses phase change technology to store data, rather than trapping electrons like NAND technology. As a result, 3D XPoint offers incredibly high endurance – on the order of millions of writes – as well as very high random read and write performance since its data doesn’t have to be organized into relatively large blocks.
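
The random-access advantage is easiest to appreciate with a small-block read pattern. The sketch below is a minimal, hypothetical microbenchmark using only the Python standard library; the path is a placeholder, and because it reads through the page cache it only illustrates the access pattern – the kind of 4 KB random-read workload where 3D XPoint-based SSDs pulled far ahead of NAND – rather than serving as a rigorous benchmark.

```python
import os, random, time

# Hypothetical 4 KB random-read sketch (illustrative only; not O_DIRECT).
PATH = "/path/to/test/file"       # placeholder: a large file you can safely read
BLOCK = 4096                      # 4 KB reads
COUNT = 10_000

fd = os.open(PATH, os.O_RDONLY)
size = os.lseek(fd, 0, os.SEEK_END)
# Pre-compute random, block-aligned offsets within the file.
offsets = [random.randrange(0, size - BLOCK) // BLOCK * BLOCK for _ in range(COUNT)]

start = time.perf_counter()
for off in offsets:
    os.pread(fd, BLOCK, off)      # read 4 KB at a random, aligned offset
elapsed = time.perf_counter() - start
os.close(fd)

print(f"{COUNT / elapsed:,.0f} random 4K reads/s")
```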

Intel, in turn, used 3D XPoint as the basis of two product lineups. For its datacenter customers, it offered Optane Persistent Memory, which packaged 3D XPoint into DIMMs as a partial replacement for traditional DRAM. Optane DIMMs offered greater bit density than DRAM, which, combined with their persistent, non-volatile nature, made for an interesting offering for systems that needed massive working memory sets, such as database servers. Meanwhile Intel also used 3D XPoint as the basis of several storage products, including high-performance SSDs for the server and client markets, and as a smaller high-speed cache for use with slower NAND SSDs.
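
In App Direct-style deployments, software typically reaches persistent memory through memory-mapped files on a DAX-aware filesystem. The snippet below is a minimal sketch of that access pattern using only the Python standard library; the mount point is a hypothetical example, and production code would normally go through a persistent-memory library rather than raw mmap.

```python
import mmap, os

# Hypothetical path on a DAX-mounted filesystem backed by persistent memory.
PMEM_FILE = "/mnt/pmem0/example.dat"   # placeholder mount point
SIZE = 4096

# Create/size the backing file, then map it into the address space.
fd = os.open(PMEM_FILE, os.O_CREAT | os.O_RDWR, 0o600)
os.ftruncate(fd, SIZE)
buf = mmap.mmap(fd, SIZE)

buf[0:11] = b"hello pmem!"   # byte-addressable store, no block I/O in the data path
buf.flush()                  # flush the mapping so the data reaches the media
buf.close()
os.close(fd)
```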

3D XPoint’s unique attributes have also been a challenge for Intel since the technology launched, however. Despite being designed for scalability via layer stacking, 3D XPoint manufacturing costs have remained higher than NAND on a per-bit basis, making the tech significantly more expensive than even high-performance NAND SSDs. Meanwhile Optane DIMMs, while filling a unique niche, were just as expensive and offered slower transfer rates than DRAM. So, despite Intel’s efforts to offer a product that could cross over the two product spaces, for workloads that don’t benefit from the technology’s unique abilities, 3D XPoint ended up being neither as good as DRAM nor as good as NAND at their respective tasks – making Optane products a hard sell.

As a result, Intel has been losing money on its Optane business for most (if not all) of its lifetime, including hundreds of millions of dollars in 2020. Intel does not break out Optane revenue information on a regular basis, but on the one-off occasions where it has published those numbers, the business has been well in the red on an operating income basis. As well, reports from Blocks & Files have claimed that Intel is sitting on a significant oversupply of 3D XPoint chips – on the order of two years’ worth of inventory as of earlier this year. All of which underscores the difficulty Intel has encountered in selling Optane products, and has set the stage for a write-down/write-off – which Intel is taking today with their $559M Optane impairment charge.

Consequently, a potential wind-down for Optane/3D XPoint has been in the tea leaves for a while now, and Intel has been taking steps to alter or curtail the business. Most notably, the dissolution of the Intel/Micron IMFT joint venture left Micron with possession of the sole production fab for 3D XPoint, even as Micron abandoned its own 3D XPoint plans. And after producing 3D XPoint memory into 2021, Micron eventually sold the fab to Texas Instruments for other uses. Since then, Intel has not had access to a high-volume fab for 3D XPoint – though if the inventory reports are true, the company hasn’t needed to produce more of the memory in quite some time.

Meanwhile, on the product side of matters, winding down the Optane business follows Intel’s earlier retreat from the client storage market. While the company has released two generations of Optane products for the datacenter market, it never released a second generation of consumer products to follow the likes of the Optane 905P. And, having sold its NAND business to SK Hynix (which now operates it as Solidigm), Intel no longer produces other types of client storage. So retiring the remaining datacenter products is the logical next step, albeit an unfortunate one.


Intel’s Former Optane Persistent Memory Roadmap: What Will Never Be

Overall, Intel has opted to wind down the Optane/3D XPoint business at a critical juncture for the company. With their Sapphire Rapids Xeon CPUs launching this year, Intel was previously scheduled to launch a matching third generation of Optane products. The most important of these was to be their “Crow Pass” 3rd generation persistent memory DIMMs, which among other things would update the Optane DIMM technology to use a DDR5 interface. While development of Crow Pass is presumably complete or nearly complete at this point (given Intel’s development schedule and the Sapphire Rapids delays), actually launching and supporting the product would still incur significant up-front and long-term costs, as well as requiring Intel to support the technology for another generation – giving Intel a strong incentive to finally exit the money-losing business unit.

In lieu of Optane persistent memory, Intel’s official strategy is to pivot towards CXL memory technology (CXL.mem), which allows attaching volatile and non-volatile memory to a CPU over a CXL-capable PCIe bus. This would accomplish many of the same goals as Optane (non-volatile memory, large capacities) without the costs of developing an entirely separate memory technology. Sapphire Rapids, in turn, will be Intel’s first CPU to support CXL, and the overall technology has a much broader industry backing.
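
On Linux, CXL-attached expansion memory is generally expected to surface as a CPU-less NUMA node that the OS can tier or bind to explicitly. The sketch below is an illustrative example based on standard sysfs paths rather than any CXL-specific API: it lists each NUMA node and flags the ones that expose memory but no CPUs, which is how such expansion memory would typically appear.

```python
import glob, os

# List NUMA nodes and flag CPU-less ones, which is how CXL-attached or other
# expansion memory typically shows up on Linux (illustrative sketch).
for node in sorted(glob.glob("/sys/devices/system/node/node[0-9]*")):
    name = os.path.basename(node)
    with open(os.path.join(node, "cpulist")) as f:
        cpus = f.read().strip()
    mem_total = "unknown"
    with open(os.path.join(node, "meminfo")) as f:
        for line in f:
            if "MemTotal" in line:
                mem_total = line.split()[-2] + " kB"
                break
    kind = "memory-only (possible CXL/expansion memory)" if not cpus else f"CPUs {cpus}"
    print(f"{name}: {kind}, MemTotal {mem_total}")
```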


Astera Labs: CXL Memory Topology

Still, Intel’s retirement of Optane/3D XPoint marks an unfortunate end of an interesting product lineup. 3D XPoint DIMMs were a novel idea even if they didn’t quite work out, and 3D XPoint made for ridiculously fast SSDs thanks to its massive random I/O advantage – and that’s a feature it doesn’t look like any other SSD vendor is going to be able to fully replicate any time soon. So for the solid state storage market, this marks the end of an era.

Best Internal Hard Drives: July 2022

Data storage requirements have kept increasing over the last several years. While SSDs have taken over the role of the primary drive in most computing systems, hard drives continue to be the storage media of choice in areas dealing with large amounts of relatively cold data. Hard drives are also suitable for workloads that are largely sequential and not performance sensitive. The $/GB metric for SSDs (particularly with QLC in the picture) is showing a downward trend, but it is still not low enough to match HDDs in that market segment.
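
As a quick illustration of the $/GB gap the guide refers to, the snippet below computes the metric for a pair of drives; the prices and capacities are made-up placeholders, not figures from this guide.

```python
# Hypothetical drives purely for illustrating the $/GB comparison.
drives = {
    "20TB NAS HDD (hypothetical)":     {"price_usd": 500, "capacity_gb": 20_000},
    "4TB QLC SATA SSD (hypothetical)": {"price_usd": 320, "capacity_gb": 4_000},
}

for name, d in drives.items():
    print(f"{name}: ${d['price_usd'] / d['capacity_gb']:.3f}/GB")
```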

Since the release of the last HDD guide, Western Digital has announced retail availability of their OptiNAND-equipped 22TB drives, Toshiba has introduced Pro lines for their X300 and N300 lineups, and prices have generally shown a downward trend. Some high-capacity models have become a lot more affordable – selling prices are running around 15–20% lower than launch MSRPs. These changes make for an interesting update to our list of recommended hard drives for NAS and desktop usage.

Silicon Motion Announces SM8366 PCIe 5.0 x4 NVMe Controller and MonTitan SSD Solutions Platform for Enterprise Storage

In the lead-up to the Flash Memory Summit next week, many vendors have started announcing their new products. Today, Silicon Motion is unveiling their first set of enterprise-focused PCIe 5.0 NVMe SSD controllers. The controllers are also being offered as part of a flexible turnkey solutions platform encompassing different EDSFF standards. A follow-up to the SM8266 introduced in November 2020, the SM8366 and SM8308 belong to Silicon Motion’s 3rd Generation enterprise NVMe controller family.

Silicon Motion’s 3rd Generation Enterprise SSD Controllers
                      SM8366                                 SM8308
Host Interface        PCIe 5.0 x4 / x2 (dual-port x2+x2 or x1+x1 capable)
NAND Interface        16ch, 2400 MT/s                        8ch, 2400 MT/s
DRAM                  2x 40-bit DDR4-3200 / DDR5-4800 (32-bit data + 8-bit ECC per channel)
Max. SSD Capacity     128 TB
Sequential Read       14 GB/s
Sequential Write      14 GB/s
Random Read           3M IOPS
Random Write          2.8M IOPS
Namespaces            Up to 128, with a total of 1024 queue pairs

Hyperscalers / cloud vendors require turnkey reference designs to quickly evaluate the capabilities of new controllers. In enterprise applications, the controller hardware is only half the story. The associated firmware / SDK, and user-programmability to enable customer differentiation are also key aspects. Keeping this in mind, Silicon Motion is also putting focus on the SM8366 reference design by giving it a separate moniker – MonTitan.

The MonTitan platform refers to the turnkey design / firmware development platform based on the OCP Data Center NVMe SSD and NVMe 2.0 specifications. Hyperscalers can readily deploy the MonTitan platform into their infrastructure for evaluation, while datacenter SSD vendors can use it to make and market their own datacenter and enterprise SSDs. The platform is currently available in U.2, E1.S, and E3.S form-factors.

Silicon Motion claims that the platform’s combination of ASIC and firmware allows enterprise-level security to be enabled without compromising on performance and QoS. Towards this, the company is touting two key features – PerformaShape and NANDCommand.

NVMe SSD controllers can present the SSD as multiple distinct storage volumes (namespaces), each with its own I/O queue to the host system. The PerformaShape algorithm can optimize the SSD’s performance differently for each namespace using per-namespace, user-defined QoS settings. Silicon Motion claims true hardware isolation in this case, delivering maximum bandwidth while ensuring that latency, QoS, and power targets are met. The NANDCommand feature refers to Silicon Motion’s use of real-time machine learning along with the LDPC engine to help with endurance (particularly important for QLC).
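
Silicon Motion hasn’t published the internals of PerformaShape, so the sketch below only illustrates the general idea of shaping bandwidth per namespace with a simple token-bucket model; the class name, rates, and limits are invented for illustration and do not reflect the actual controller firmware.

```python
import time

class NamespaceShaper:
    """Conceptual per-namespace token bucket; not Silicon Motion's algorithm."""
    def __init__(self, mb_per_s: float, burst_mb: float):
        self.rate = mb_per_s        # user-defined bandwidth target for this namespace
        self.capacity = burst_mb    # allowed burst size
        self.tokens = burst_mb
        self.last = time.monotonic()

    def admit(self, io_mb: float) -> bool:
        now = time.monotonic()
        # Refill tokens according to the configured rate, capped at the burst size.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= io_mb:
            self.tokens -= io_mb
            return True             # I/O fits within this namespace's QoS budget
        return False                # I/O exceeds the budget; queue or throttle it

# Example: two namespaces with different user-defined bandwidth targets.
shapers = {"ns1": NamespaceShaper(2000, 256), "ns2": NamespaceShaper(500, 64)}
print(shapers["ns2"].admit(32))   # True  – within budget
print(shapers["ns2"].admit(128))  # False – exceeds the remaining tokens
```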

The claimed performance numbers for the SM8366 controller can vary for specific designs depending on the NAND technology, number of dice, and form-factor power limitations. The company indicated that specific numbers for different form-factor reference designs will be announced later. Sampling is slated to begin in Q4 2022.

Silicon Motion’s press release shows the usual suspects providing supporting quotes – Micron, KIOXIA, and YMTC from amongst the NAND suppliers. Alibaba Cloud has also expressed interest in evaluating the platform, which bodes well for Silicon Motion’s enterprise SSD controller efforts.

Best Intel Motherboards: July 2022

The summer is here and a number of retailers are getting prepared for a wave of new processor releases set for the end of the year. This means there are plenty of motherboards based on chipsets such as Intel’s Z690, B660, and H670 that are either currently on offer, or at their lowest prices in months. Users looking to build a new system today will find plenty of options in Intel’s 12th Gen Core series, including the very affordable but competitive Core i3-12100 ($130), along with more performance-focused models such as the mid-range Core i5 and i7 processors with up to twelve cores (8P/4E). At the top of the stack are the flagship Core i9-12900K and Core i9-12900KS processors, with sixteen cores for the most demanding workflows, applications, and games.

We’re taking a look at what’s currently available on the market in terms of motherboards for Intel’s 12th Gen Core series processors ranging from ‘money no object’, all the way down to what’s hot in terms of value in our latest motherboard buyer’s guide for July 2022.