Sometimes, it just takes a challenge. After years of predictable and arguably modest advances, we’re beginning to witness an explosion of exciting and important new developments in the sometimes obscure world of semiconductors—commonly known as chips.
Thanks to demanding new applications, such as artificial intelligence (AI) and natural language processing (NLP), as well as a perceived threat to Moore’s Law (which has “guided” the semiconductor industry for over 50 years to a state of staggering capability and complexity), we’re starting to see an impressive range of new output from today’s silicon designers.
Entirely new chip designs, architectures and capabilities are coming from a wide array of key component players across the tech industry, including Intel, AMD, Nvidia, Qualcomm, Micron and ARM, as well as internal efforts from companies like Apple, Samsung, Huawei, Google and Microsoft.
It’s a digital revival that many thought would never come. In fact, just a few years ago, many were predicting the death, or at least serious weakening, of most major semiconductor players. Growth in many major hardware markets had started to slow, and there was a sense that improvements in semiconductor performance were reaching a point of diminishing returns, particularly in CPUs (central processing units), the most well-known type of chip.
The problem is, most people didn’t realize that hardware architectures were evolving and that many other components could take on tasks that were previously limited to CPUs. In addition, the overall system design of devices was being re-evaluated, with a particular focus on how to address bottlenecks between different components.
Today, the result is an entirely fresh new perspective on how to design products and tackle challenging new applications through multi-part hybrid designs. These new designs leverage a variety of different types of semiconductor computing elements, including CPUs, GPUs (graphics processing units), FPGAs (field programmable gate arrays), DSPs (digital signal processors) and other specialized “accelerators” that are optimized to do specific tasks well. Not only are these new combinations proving to be powerful, we’re also starting to see important new improvements within the elements themselves.
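The idea behind these hybrid designs can be sketched in a few lines of code. The sketch below is purely illustrative—not any vendor's actual scheduler or API—showing how a system might route each class of workload to the compute element best suited for it; the workload categories and the mapping are simplified assumptions.

```python
# Toy sketch (hypothetical, not a real vendor API) of heterogeneous
# dispatch: each accelerator handles the task class it is optimized for.
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    kind: str  # e.g. "branchy-logic", "data-parallel", "signal", "reconfigurable"

ACCELERATOR_FOR = {
    "branchy-logic": "CPU",    # complex control flow, low latency
    "data-parallel": "GPU",    # thousands of identical operations at once
    "signal": "DSP",           # continuous audio/radio stream processing
    "reconfigurable": "FPGA",  # custom pipelines defined after manufacture
}

def dispatch(workload: Workload) -> str:
    """Return the compute element a scheduler might pick for this workload."""
    return ACCELERATOR_FOR.get(workload.kind, "CPU")  # fall back to the CPU

print(dispatch(Workload("neural-net inference", "data-parallel")))  # GPU
```

The point of the sketch is simply that no single element wins at everything: the system's strength comes from matching each job to the right silicon.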
For example, even in the traditional CPU world, AMD’s new Ryzen line underwent significant architectural design changes, resulting in large speed improvements over the company’s previous chips. In fact, they’re now back in direct performance competition with Intel—a position AMD has not been in for over a decade. AMD started with the enthusiast-focused R7 line of desktop chips, but just announced the sub-$300 R5, which will be available for use in mainstream desktop and all-in-one PCs starting in April.
Nvidia has done a very impressive job of showing how much more than graphics its GPUs can do. From deep neural networks in data centers to autonomous driving in cars, the unique ability of GPUs to perform enormous numbers of relatively simple calculations simultaneously is making them essential to a number of important new applications. One of Nvidia’s latest developments is the Jetson TX2 board, which leverages one of their GPU cores, but is focused on doing data analysis and AI in embedded devices, such as robots, medical equipment, drones and more.
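What makes a workload GPU-friendly is worth spelling out. In the illustrative snippet below (plain Python, shown serially for clarity), each output element depends only on one input element and nothing else—exactly the kind of independent, repetitive arithmetic that a GPU can spread across thousands of hardware threads at once.

```python
# Illustrative only: an elementwise operation with no dependencies
# between elements. A GPU could compute every element simultaneously;
# plain Python just does them one after another.
def scale_and_offset(pixels, gain, bias):
    # Each output depends on exactly one input -- perfectly parallel.
    return [p * gain + bias for p in pixels]

frame = [1, 2, 3, 4]
print(scale_and_offset(frame, 2, 1))  # [3, 5, 7, 9]
```

Neural-network inference, image processing and sensor fusion are all dominated by operations with this independent, elementwise shape, which is why GPUs map onto them so well.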
Not to be outdone, Intel, in conjunction with Micron, has developed an entirely new memory/storage technology called 3D XPoint that works like a combination of DRAM—the working memory in devices—and flash storage, such as SSDs. Intel’s commercialized version of the technology, which took over 10 years to develop, is called Optane and will appear first in storage devices for data centers. What’s unique about Optane is that it addresses a performance bottleneck between memory and storage found in nearly all computing devices, and allows for performance advances for certain applications that will go way beyond what a faster CPU could do.
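The bottleneck in question is the enormous latency gap between working memory and flash storage. The figures below are rough, order-of-magnitude illustrations only (real numbers vary widely by device and generation), but they convey the tier that a technology like 3D XPoint aims to fill.

```python
# Rough, illustrative access latencies in nanoseconds (orders of
# magnitude only; actual figures vary by device). Note the large gap
# between DRAM and NAND flash -- the tier 3D XPoint targets.
LATENCY_NS = {
    "DRAM": 100,                     # working memory
    "3D XPoint (claimed)": 10_000,   # between memory and storage
    "NAND SSD": 100_000,             # flash storage
}

for tier, ns in LATENCY_NS.items():
    label = f"~{ns // 1000} us" if ns >= 1000 else f"~{ns} ns"
    print(f"{tier}: {label}")
```

Sitting roughly an order of magnitude slower than DRAM but an order of magnitude faster than flash, a new tier like this can speed up data-hungry applications in ways a faster CPU alone cannot.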
Qualcomm has proven to be very adept at combining multiple elements, including CPUs, GPUs, DSPs, modems and more, into sophisticated SoCs (systems on a chip), such as the new Snapdragon 835 chip. While most of its work has been focused on smartphones to date, the capabilities of its multi-element designs make them well-suited for many other devices—including autonomous cars—as well as some of the most demanding new applications, such as AI.
The in-house efforts of Apple, Samsung, Huawei—and to some degree Microsoft and Google—are also focused on these SoC designs. Each hopes to translate the unique characteristics it builds into its chips into distinct features and functions that can be incorporated into future devices.
Finally, the company that’s enabling many of these capabilities is ARM, the UK-based chip design house whose chip architectures (sold in the form of intellectual property, or IP) are at the heart of many (though not all) of the previously listed companies’ offerings. In fact, ARM just announced that over 100 billion chips based on its designs have shipped since the company was founded in 1990, with half of those coming in the last four years. The company’s latest advance is a new architecture called DynamIQ that, for the first time, allows the combination of multiple different types and sizes of computing elements, or cores, inside one of its Cortex-A architecture chip designs. The real-world results include up to a 50x boost in AI performance and a wide range of multifunction chip designs that can be architected to suit specific applications—in other words, the right kind of chips for the right kind of devices.
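The core idea of mixing large and small cores in one cluster can be sketched simply. The snippet below is a hypothetical illustration (not ARM's actual scheduling logic, and the 0.6 threshold is an invented assumption): demanding tasks go to fast, power-hungry cores, while background work runs on efficient ones.

```python
# Hypothetical sketch of big/little core selection -- not ARM's real
# scheduler. Heavy tasks get performance cores; light tasks get
# efficiency cores to save power.
BIG_CORES = ["big0", "big1"]                                  # fast, power-hungry
LITTLE_CORES = ["little0", "little1", "little2", "little3"]   # slower, efficient

def pick_core(estimated_load: float) -> str:
    """Choose a core pool by estimated load (0.0-1.0); threshold is illustrative."""
    pool = BIG_CORES if estimated_load > 0.6 else LITTLE_CORES
    return pool[0]  # a real scheduler also balances and migrates tasks

print(pick_core(0.9))  # big0
print(pick_core(0.1))  # little0
```

Letting one chip design combine different core types is what allows a single architecture to span everything from wearables to high-performance devices.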
The net result of all these developments is an extremely vibrant semiconductor market with a much brighter future than was commonly expected just a few years ago. Even better, this new range of chips portends an intriguing new array of devices and services that can take advantage of these key advancements in what will be exciting and unexpected ways. It’s bound to be magical.
Bob O’Donnell is the founder and chief analyst of TECHnalysis Research, LLC, a technology consulting and market research firm. You can follow him on Twitter @bobodtech. This article was originally published on Tech.pinions.