The End of Moore’s Law?



In 1965, Intel cofounder Gordon Moore published a remarkably foresighted paper that predicted the number of transistors on a chip would double roughly every two years, and that this pace would lead to computers becoming embedded in homes, cars and communication systems.

Honestly… it will never catch on!

We’ve become so used to the idea that our technology keeps getting more powerful and cheaper that we scarcely stop to think about how unprecedented that is. Prior to the digital age, nobody expected human-powered machinery, or even industrial-age technologies such as trains, cars and planes, to double in efficiency at a continuous rate.

Moore’s Law is the observation that:

Over the history of computing hardware, the number of transistors on integrated circuits doubles approximately every two years. 

Over the following half-century, this process of doubling proved so remarkably consistent that it became known as Moore’s Law, and it has driven the digital revolution ever since. This is partly because the law is now used in the semiconductor industry to guide long-term planning and to set targets for research and development.
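To get a feel for what that consistency implies, here is a minimal back-of-the-envelope sketch in Python. The starting figure of roughly 2,300 transistors for Intel’s 4004 in 1971 is a widely cited number, but the function and the projection below are purely illustrative, not from the original article:

    def transistors(n0, years, doubling_period=2):
        # Moore’s Law as compound doubling: n0 * 2 ** (years / doubling_period)
        return n0 * 2 ** (years / doubling_period)

    # Intel’s 4004 (1971) had roughly 2,300 transistors; project 49 years forward:
    print(f"{transistors(2300, 2020 - 1971):,.0f}")  # ≈ 54 billion

That simple compounding is what took chips from a few thousand transistors to the tens of billions found in today’s flagship processors.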

Nevertheless, modern organizations have come to rely on this continuous improvement to such an extent that people rarely think about what it means. But is Moore’s Law about to end?

The Von Neumann Bottleneck

Because of the power and consistency of Moore’s Law, we’ve come to associate technological advancement with processor speeds. Yet speed is only one dimension of performance, and the tech industry does many things besides speeding chips up to get our machines to do more at lower cost.

A prime example of this is the von Neumann bottleneck, named after John von Neumann, the mathematical genius responsible for the architecture in which our computers store programs and data in one place and perform calculations in another.

In the 1940s, when this idea emerged, it was a major breakthrough, but today it’s becoming somewhat of a problem!

This is because, thanks to Moore’s Law, our chips now run so fast that much valuable computing time is lost while information travels back and forth between the processor and memory. Ironically, as chip speeds continue to improve, this problem will only get worse, so a solution is needed.
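Some rough arithmetic shows the scale of the problem. The 3 GHz clock and the 100 ns main-memory latency below are typical ballpark figures assumed for illustration; they are not numbers from the article:

    # Cost of one round trip to main memory, measured in clock cycles
    clock_hz = 3e9                 # assume a 3 GHz processor
    cycle_ns = 1e9 / clock_hz      # ≈ 0.33 ns per clock cycle
    dram_latency_ns = 100          # common ballpark for a main-memory access
    stalled_cycles = dram_latency_ns / cycle_ns
    print(f"~{stalled_cycles:.0f} cycles idle per memory access")  # ~300

Caches hide much of this in practice, but every cache miss still leaves a fast processor waiting on slow memory, which is exactly the bottleneck described above.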

One way of increasing performance is to decrease distance at the level of the system. Currently, chips are designed in two dimensions, each performing a specific function: logic chips, memory chips and networking chips. Although none of them can do much by itself, acting together they allow us to do extremely complex tasks on basic devices.

So one approach to increasing performance, called 3D stacking, would combine those separate circuits into a single three-dimensional chip. This is harder than it sounds, because entirely new chip designs have to be devised, but it would vastly reduce the time circuits spend waiting for instructions from each other, increasing speed significantly while decreasing power dramatically thanks to far shorter communication paths.


The End of Moore’s Law?

Obviously, back in 1965, when Gordon Moore formulated his famous law, computers were enormous machines that few people ever saw.

After 20 years of continuous doubling and shrinking, we got personal computers small enough to fit under our desks, but powerful enough to generate a graphical display and interact with us through a keyboard and a mouse.

Twenty more years gave us the mobile revolution.

The end of Moore’s Law is not a new issue at all. Industry experts say it first began to unravel in 2003, when insulating components within transistors started failing due to quantum mechanical effects. Since then, chip manufacturers have kept progress going by finding new materials whose atomic properties are more resistant to those effects.

However, sometime around 2020 these workarounds will no longer suffice, as the silicon itself yields to quantum mechanical reality. Researchers at IBM are pursuing strategies such as carbon nanotubes and silicon photonics, which have the potential to increase chip speeds without having to shrink features to quantum scale.

Other approaches, such as quantum computing, change the nature of computing itself and can be exponentially more efficient for certain tasks, such as encryption. Quantum computers, however, need to be cooled to close to absolute zero, which limits their use as a mainstream technology.

As artificial intelligence has risen to the fore, some firms have begun designing chips specifically engineered to run their own deep learning tools. This greatly improves performance, but the economics only work at very high manufacturing volumes, so the approach is out of reach for most companies other than the familiar tech giants.

The truth may be that all of these strategies are merely stopgaps. They may help us continue to advance for a few more years, but with Moore’s Law ending, the real challenge is to come up with some fundamentally new ideas for computing.


A New Era Of Innovation

For the past 20 or 30 years, digital innovation could rely on technology improving at a predictable pace, and that allowed us to forecast, with a high degree of certainty, what would be possible in the years to come.

That led most innovation efforts to focus on systems, services and applications, with a heavy emphasis on the end user and the customer experience. Startups able to design an experience, test it, adapt and iterate quickly could outperform big firms that had far more resources and technological sophistication. This has made ‘Agile’ a defining competitive attribute that small companies have often been keen to promote.

In the near future, however, the pendulum is likely to swing from applications back to the fundamental technologies that make them possible, and both tech giants and small and medium enterprises will be operating in new territory. Some industry experts have predicted that it may be akin to a second digital revolution, starting all over again, with innovation looking more like it did back in the 1950s and 1960s.

Computing is not alone in this. There are other technologies that are also reaching their current theoretical limits.

The digital age also needs next-generation batteries to power our devices, drones, electric cars and the communities we operate in. At the same time, innovations such as nanotechnology and robotics are emerging ever more rapidly into our lives.

So… we are officially entering a new era of innovation! Suppliers and clients must be willing to work together to tackle these new technological challenges and realise the further benefits of doing so!


Full article courtesy of Inc.com and Digital Tonto