We've all been wondering exactly when the end of Moore's law will come. The exponential curve has already hit plenty of bumps over the last 20 years: heat limits, adding cores rather than clock speed, and improving power consumption rather than raw performance.
I don't know how much longer the curve will continue to look exponential, but it seems like it's time to start preparing for what's next. I can see a couple of patterns potentially emerging: software engineers being asked to write more efficient code, and applications relying less on single-threaded compute power and more on distributed systems.

Can Engineers Write Better Code?

Writing Code [1]

Yes! In several ways.

My favorite software-engineering-related quote has always been:

Mathematicians stand on each other's shoulders and computer scientists stand on each other's toes.  --Richard Hamming

Although these days I would say it's more accurate of software engineers than computer scientists. We still have a lot to overcome before we can begin to reach the heights of mathematicians.

Performance Oriented Languages

In college I learned how we have already gone in cycles with language features: COBOL -> C -> Smalltalk -> C++ -> Python/Ruby -> Java/C# -> Go/Swift/Rust. While I see plenty of room for languages like Ruby and Python, they were a major step backwards for many production use cases.
Go/Swift/Rust are far from perfect, but they all make reasonable trade-offs and focus heavily on performance. For most projects they are probably a better choice than C or C++, and I personally prefer their trade-offs to those of Java or C#.
I am certainly not convinced that Go/Swift/Rust represent the end of language evolution; I made that mistake in the past when Java was taking over the world. It's not even clear they will capture a significant market share. Nor do I really know how much the resurgence of performance-oriented languages has to do with the end, or slowdown, of Moore's law. But the timing is noteworthy, and my prediction is that languages like Go/Swift/Rust will continue to take off as the end of Moore's law nears.

Open Source and Standardization

Open source has probably helped more than anything else in solving the problem of standing on each other's toes. It generally has the effect of bringing problems and redundancies to light. They aren't always solved immediately, or ever in the case of vi/Emacs, GNOME/KDE, etc.
But think how bad it was before open source, or how bad it would be without it. Or how bad it still is in industries outside software. Software engineering has changed a lot in the last 20 years thanks to open source. The most effective engineers these days are the ones who can tie together existing open source technologies to solve new problems, typically with some custom code added on top to glue everything together. Accusations of NIH (not invented here) thinking often greet anyone who goes against this mindset.

Of course there are generally arguments for and against using existing technologies. On the positive side, the heavier the adoption of a particular component, the better it gets (usually). And it's doubtful your project can keep up with the amount of investment a shared third-party component will receive. On the negative side, third-party components are often not an ideal fit and cause some amount of undue installation, performance, or maintenance overhead.

I have always viewed the best part of open source as the fact that it truly does let us stand on our predecessors' shoulders. My hope is that we can increase the efficiency with which we do so by developing better ways of sharing technologies. This is one of the areas where OpenShift aims to help, by standardizing the platform from development to production. Configuration management tooling is still the state of the art when it comes to setting up a production environment, but use of such tooling is generally limited to operations teams.

Developers tend to tie together the necessary technologies on their laptops in an approximation of how things will eventually run in production. But this has serious downsides when it comes to transferring new work from development to production or recreating production issues in development. It's very rare for a developer to set up a production-quality environment with a scaled web framework, database, messaging tier, etc. And even if they manage to get it running, the odds of recreating a problem are diminished by subtle configuration differences.

In contrast, in the world of OpenShift/Kubernetes, applications are built in the same way they will eventually be deployed to production. Entire environments can be replicated with a single command. Scaling is a matter of clicking a button. And perhaps more important, the way application components are introduced into an environment is standardized: you pick an image. Of course that image may have to change over time as production needs drift, but using the same image across environments continually enforces organizational standards. My prediction is that the rise of open source will continue due to the ease with which it can be consumed and maintained by platforms like OpenShift/Kubernetes.

Engineering Culture

I have always been an advocate of getting engineers to understand complexity when writing code, but writing performance-oriented code falls in and out of fashion. My favorite interview question always starts with getting engineers to explain how a hashtable works. It's a great collection to dig into because it covers hashing, arrays, linked lists, and a lot of other collections and operations depending on what path the interviewee decides to take.
It's an important question to me because it's nearly impossible to be a software engineer and not use hashtables on a regular basis. And the easiest way to understand how a given operation is going to perform is to understand how the underlying collection/component works.
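To make that concrete, here is a minimal sketch of a separate-chaining hashtable in Go. It's my illustration, not any standard library's actual implementation; real ones add resizing, load-factor tuning, and stronger hash functions:

```go
package main

import "fmt"

// A minimal separate-chaining hashtable: an array of buckets, where
// each bucket is a linked list of key/value pairs.
type entry struct {
	key   string
	value int
	next  *entry // collisions chain into a linked list
}

type hashtable struct {
	buckets []*entry
}

func newHashtable(size int) *hashtable {
	return &hashtable{buckets: make([]*entry, size)}
}

// hash maps a key to a bucket index (FNV-1a, kept simple for the sketch).
func (h *hashtable) hash(key string) int {
	sum := uint32(2166136261)
	for i := 0; i < len(key); i++ {
		sum = (sum ^ uint32(key[i])) * 16777619
	}
	return int(sum % uint32(len(h.buckets)))
}

// Put is O(1) on average: hash, index into the array, walk a short chain.
func (h *hashtable) Put(key string, value int) {
	i := h.hash(key)
	for e := h.buckets[i]; e != nil; e = e.next {
		if e.key == key {
			e.value = value // key already present: overwrite
			return
		}
	}
	h.buckets[i] = &entry{key: key, value: value, next: h.buckets[i]}
}

// Get degrades toward O(n) only when many keys collide in one bucket.
func (h *hashtable) Get(key string) (int, bool) {
	for e := h.buckets[h.hash(key)]; e != nil; e = e.next {
		if e.key == key {
			return e.value, true
		}
	}
	return 0, false
}

func main() {
	h := newHashtable(16)
	h.Put("shoulders", 1)
	h.Put("toes", 2)
	fmt.Println(h.Get("toes")) // 2 true
}
```

The answer I'm after in the interview is visible right in the structure: the average-case O(1) lookup comes from the array index, and the worst case comes from walking a collision chain.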
Knowing this for a hashtable is important. And if they haven't learned how it works yet, there is a decent chance they are skipping over many other things they use on a daily basis as well, some of which are much larger sins:

  • Building a list UI that makes a REST/DB call for each row (the classic N+1 problem, sketched below)
  • Fetching an entire list of items just to show one item in a UI
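Here is a minimal sketch of that first sin and its fix; `fetchUser`, `fetchUsers`, and `render` are hypothetical stubs standing in for any real REST or database layer:

```go
package main

import "fmt"

// Hypothetical stubs standing in for real REST/DB round trips.
type user struct{ id int }

func fetchUser(id int) user { return user{id} } // one round trip per call

func fetchUsers(ids []int) []user { // one batched round trip (WHERE id IN (...))
	users := make([]user, 0, len(ids))
	for _, id := range ids {
		users = append(users, user{id})
	}
	return users
}

func render(u user) { fmt.Println("row", u.id) }

// The sin: one network/DB round trip per row. A 1,000-row list
// costs 1,000 queries.
func renderListSlow(ids []int) {
	for _, id := range ids {
		render(fetchUser(id))
	}
}

// The fix: one batched round trip, then render from memory.
func renderListFast(ids []int) {
	for _, u := range fetchUsers(ids) {
		render(u)
	}
}

func main() {
	renderListSlow([]int{1, 2, 3})
	renderListFast([]int{4, 5, 6})
}
```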

Any engineer worth his or her salt is capable of understanding performance pitfalls. And while most of them will still occasionally make mistakes (not seeing the forest for the trees being a common one), any decent profiling tool should enable them to work their way out of such performance problems.
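To pick one example, Go's standard library ships a CPU profiler that is only a few lines away (a minimal sketch; other ecosystems have equivalents such as perf or VisualVM):

```go
package main

import (
	"log"
	"os"
	"runtime/pprof"
)

// busyWork stands in for the suspect code path.
func busyWork() int {
	sum := 0
	for i := 0; i < 100000000; i++ {
		sum += i % 7
	}
	return sum
}

func main() {
	// Capture a CPU profile for the duration of the run.
	f, err := os.Create("cpu.prof")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()
	if err := pprof.StartCPUProfile(f); err != nil {
		log.Fatal(err)
	}
	defer pprof.StopCPUProfile()

	busyWork()
}
```

Running `go tool pprof cpu.prof` against the output then points straight at the hot spots.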
What I predict is that performance-driven engineering will see a big push as Moore's law slows to an end. For years now we (software engineers) have managed to write code that barely stands still from a performance perspective, while at the same time the hardware gets exponentially better. Which means our code is getting exponentially worse.

Of course decreasing performance is driven by multiple factors, complexity of features probably being the most influential. But performance is often a third- or fourth-level concern. If you aren't operating at Google/Facebook scale, code running 10x slower than it would if properly optimized is often a small price to pay versus the engineering time to make it better. Working code generally rules.

I have even seen a few ideas in my career fail initially because of performance. Web-based IDEs are a good example. I first remember being introduced to them ~10 years ago. It seemed like madness at the time; web browsers and JavaScript just weren't fast enough. The only use case I could imagine for them was seldom-used code bases, such as a previous version of a product kept around for support purposes. Fast forward 10 years and even desktop editors like Atom and VSCode are being built with JavaScript.

Yes, we did see a lot of improvements in JavaScript performance over those 10 years, but what's possible today still wouldn't be acceptable without the corresponding improvements in hardware over that same period.
After Moore's law ends, the demand for new features will continue. Engineers will be forced to either focus on performance to fit those features within an acceptable amount of compute power (whatever is available on laptops, smartphones, etc.) or not deliver them at all.

Cheap Hardware and Distributed Systems

Datacenter [2]

Capitalism to the rescue! In theory, once hardware improvements slow down, the hardware will also get cheaper. R&D costs will go down and the existing technology will be commoditized. We will still have to pay for power, but the overall cost of compute will go down. For datacenters, this means a lot less churn every 3-4 years to swap out hardware. For personal use, it becomes possible to have your own little datacenter in your home. To alleviate power concerns, hardware could be designed to automatically shut down unused cores/CPUs. I myself can't wait for my own 100-core home server.

The industry has been pushing multiple cores for several years now, but software written to consume multiple cores lags behind, and for good reason. It can be quite difficult to write multi-threaded code correctly, but the bigger problem is that many programs don't have a logical split that makes sense to organize into multiple threads. With server-side components, this is often less of an issue. The time to process a request needs to stay within some threshold; but once you get below that threshold, which can be anywhere from 1s to 10ms depending on the application, there isn't much need to push multi-threading for raw performance. What lets you consume multiple cores at that point is multiple concurrent requests.
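Go's standard HTTP server is a nice illustration of that last point. The handler below is plain sequential code, yet concurrent requests still spread across cores because the server runs each one on its own goroutine (a minimal sketch):

```go
package main

import (
	"fmt"
	"log"
	"net/http"
)

// The handler is plain sequential code; no explicit threading here.
func handler(w http.ResponseWriter, r *http.Request) {
	// Imagine ~10ms of real work: template rendering, a DB call, etc.
	fmt.Fprintln(w, "hello")
}

func main() {
	http.HandleFunc("/", handler)
	// net/http serves each incoming connection on its own goroutine,
	// so concurrent requests consume multiple cores with no extra effort.
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```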

The obvious prediction is that this will change as Moore's law comes to an end. Hard problems that need more compute power will need to distribute their complexity. It's a little sad to think this might be necessary. Things are running fine now without distributing every problem across multiple cores/CPUs/machines. Will we really make things so inefficient so quickly, even with the known constraint of slowly improving hardware? We can always hope not, but with cheaper compute resources available, it's very likely we won't resist the temptation for new features.
I would expect the overhead of developing a distributed system to push many solutions down the performance-optimization route as a first choice. But once distributed solutions for web frameworks, desktop software, etc. are developed, it will be tempting to use them even for simple cases to get a little more responsiveness. And from there, it's a slippery slope to distributing the problem further and continuing to add features.

Distributing a problem can involve an enormous amount of engineering effort. Often, the easiest way to tackle distribution is to design smaller components. This trend is already quite prominent in the current push toward microservices. Microservices are useful for distributing workloads, although they don't inherently add concurrency. But once a problem is distributed, the mechanisms for adding concurrency are often more obvious. Sometimes it's as simple as adding more workers pulling from a work queue.
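Here is a minimal Go sketch of that pattern. The work queue is a channel (in a truly distributed setup it would be a message broker), and adding concurrency is just a matter of raising the worker count:

```go
package main

import (
	"fmt"
	"sync"
)

// worker pulls jobs off the shared queue until it is closed and drained.
func worker(jobs <-chan int, results chan<- int, wg *sync.WaitGroup) {
	defer wg.Done()
	for j := range jobs {
		results <- j * j // stand-in for real work
	}
}

func main() {
	jobs := make(chan int, 100)
	results := make(chan int, 100)

	// Scaling up is just raising numWorkers; the queue abstraction
	// stays the same.
	const numWorkers = 4
	var wg sync.WaitGroup
	for w := 0; w < numWorkers; w++ {
		wg.Add(1)
		go worker(jobs, results, &wg)
	}

	for j := 1; j <= 20; j++ {
		jobs <- j
	}
	close(jobs)

	wg.Wait()
	close(results)
	for r := range results {
		fmt.Println(r)
	}
}
```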

The concept of distributing workloads is certainly not a new one. While not yet common for web applications, a lot of innovation is already happening in this area, driven by big data initiatives. Hadoop and, to more recent fanfare, Spark are two of the prevalent contenders in the space. Shekhar Gulati recently wrote a great article on running Spark on top of OpenShift.
At first glance, there may appear to be a decent amount of overlap between Spark and OpenShift/Kubernetes. Kubernetes already provides units of work (pods), storage, and a scheduler. But except for the simplest poor-man's use cases, Spark running on OpenShift/Kubernetes is exactly where both products want to be. OpenShift and Kubernetes both aim to be more generic than a microservices platform or a big data platform. Spark, Hadoop, and microservice frameworks are all layers that can take advantage of the application fabric OpenShift/Kubernetes provides.
The end result is that all applications can be orchestrated inside a single container-based platform. The key is that reuse of the pattern is essential to making this possible: distributing workloads is far too difficult under most circumstances unless you can build on top of existing solutions.

The End is Near?

I have no idea. The odds are high that someone will find a way to take us beyond Moore's law as the end approaches; it has been a very profitable business, after all. So, at most, we are likely to see a transition period in which some or all of these approaches serve as stop gaps. I do at least hope that, as Moore's law comes to a close, software engineers get a meaningful period to practice their craft without the benefit of ever-improving hardware. The lessons learned from a hardware drought would teach us not to take writing performant code for granted.

Attributions:

[1] https://commons.wikimedia.org/wiki/File:Programmer_writing_code_with_Unit_Tests.jpg

[2] https://commons.wikimedia.org/wiki/File:Datacenter-MIVITEC-MUNICH.jpg

 

