Embedded Software – The Next 10 Years

It is with a hint of sadness that I say my 7-year adventure in embedded systems is coming to an end.  I’ve had a great time and I’ve learned a lot, but now it is time for me to try something new.

For me, the great joy of working in embedded software is that the magic of computation is completely exposed; you learn every layer of the onion, from registers to the operating system, from the operating system to userspace, from instruction sets to cross-compilation, linkers, and debuggers.  And as you see through the layers, you learn the absolute necessity of abstraction for finding understanding amid the chaos.

Over my (brief) career in embedded software, I have seen a huge shift in approaches to developing software.  The last decade has seen the rise of Agile practices and the adoption of Test Driven Development.  While these practices are established at the major software houses, they are only starting to be recognized within the embedded industry.  So, what can we learn from the current leading edge in software development that might predict the future we in the embedded industry are headed towards (and perhaps adopt earlier than the competition)?

It’s always a bit risky to predict the future, but I figure what the hell, why not?  If you can’t laugh at yourself there’s no hope.  :-)  So here are my three big bets for the coming years in embedded software:

1. Executable Specifications (Automated Acceptance Tests)

This is already a hot ticket in mature software houses, but automated acceptance testing is just starting to take hold in the embedded arena.  I predict that tools very much like Cucumber and its ilk will become the de facto standard for requirement specification and progress measurement in embedded projects in the coming years.

A natural consequence of this is that automated hardware-in-the-loop testing will become necessary and commonplace.
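To make this concrete, here is a minimal sketch of what an executable specification might look like, written Given/When/Then style in Python.  Everything here is invented for illustration: a real suite would drive the actual device over a serial or debug link, but a simulated thermostat stands in so the example is self-contained.

```python
# Hypothetical sketch: an executable specification for an embedded device.
# The device, its behavior, and all names are invented for illustration;
# a real hardware-in-the-loop suite would talk to the physical unit.

class SimulatedThermostat:
    """Stand-in for the device under test."""
    def __init__(self):
        self.setpoint_c = 20.0
        self.heater_on = False

    def read_temperature(self):
        return 18.5  # pretend sensor reading, below the setpoint

    def tick(self):
        # The control rule the specification describes:
        # engage the heater when the temperature is below the setpoint.
        self.heater_on = self.read_temperature() < self.setpoint_c


def test_heater_engages_below_setpoint():
    # Given a thermostat with a setpoint of 20 °C
    device = SimulatedThermostat()
    # When the measured temperature is below the setpoint
    device.tick()
    # Then the heater output is switched on
    assert device.heater_on


test_heater_engages_below_setpoint()
print("specification passed")
```

The point is not the Python; it is that the requirement ("the heater engages below the setpoint") is written in a form a stakeholder can read and a machine can run against the real hardware.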

2. Executable Data Sheets (Automated Hardware Tests)

If hardware becomes as flexible as software (which, with the latest FPGAs and tools, it is), then the breakage that plagues software will come home to roost with the hardware guys ‘n’ gals as well.  And as more of the silicon becomes soft cores, the need to verify that the hardware works as the software expects will only increase.

Perhaps it will be the hardware engineers who create these test suites, or maybe it will be left to the software guys, but one thing’s for sure: datasheets will become executable.
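As a sketch of what an "executable data sheet" might look like: a script that checks the reset values the data sheet promises.  The register names, addresses, and values below are all invented for illustration, and a simulated bus stands in for what would really be a JTAG/SWD or debug-monitor read on the actual part.

```python
# Hypothetical sketch: checking data-sheet reset values automatically.
# Addresses and values are made up; a real test would read the real part.

EXPECTED_RESET_VALUES = {
    0x40021000: ("CLK_CTRL",  0x00000083),  # invented clock control register
    0x40010C00: ("PORT_CFG",  0x44444444),  # invented port config register
}

def read_register(address, bus):
    # Stand-in for a debug-link read of a memory-mapped register.
    return bus[address]

def check_reset_values(bus):
    failures = []
    for address, (name, expected) in EXPECTED_RESET_VALUES.items():
        actual = read_register(address, bus)
        if actual != expected:
            failures.append(f"{name} @ {address:#010x}: "
                            f"expected {expected:#010x}, got {actual:#010x}")
    return failures

# Simulated bus that behaves exactly as the (invented) data sheet promises.
bus = {0x40021000: 0x00000083, 0x40010C00: 0x44444444}
assert check_reset_values(bus) == []
print("data sheet checks passed")
```

Run this suite every time a new silicon revision, board spin, or soft-core bitstream arrives, and "the hardware changed under us" stops being a surprise discovered in the field.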

3. Open Source Toolchains

Gone are the days when one chip has one compiler, but that doesn’t mean the compilers we have are ideal.  In fact, I would go as far as to say that most of the toolchains we have are downright lousy, especially with regard to automation and scriptability.
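Scriptability is exactly what an open toolchain buys you.  As a small sketch, assuming the open source GNU Arm toolchain (`arm-none-eabi-gcc`) is on the PATH, a build step is just a command line you can generate and drive from any script — no license server, no GUI.  Here we only construct the command; a real build script would hand it to `subprocess.run`.

```python
# Hypothetical sketch: a scriptable cross-compile step for an open toolchain.
# The flags are standard GCC Arm options; the file names are invented.

def cross_compile_cmd(source, out, cpu="cortex-m4"):
    """Build the arm-none-eabi-gcc command line for one translation unit."""
    return [
        "arm-none-eabi-gcc",
        f"-mcpu={cpu}", "-mthumb",   # target an Arm Cortex-M core in Thumb mode
        "-Os", "-ffunction-sections",  # typical size-focused embedded flags
        "-o", out, source,
    ]

cmd = cross_compile_cmd("main.c", "main.elf")
print(" ".join(cmd))
```

Because the whole build is plain data like this, it slots straight into continuous integration — which is precisely where the proprietary, GUI-bound toolchains fall down.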

We have all kinds of hassle: licence management, incompatible upgrade paths (TI, I’m looking at you), vendors going out of business (or being bought up by the big hitters only for the technology to be buried), and closed-source RTOSes with critical but hidden or unfixable bugs.  These all represent an unacceptable risk to the future of the products we bring to market.

Of course, for this to happen, software engineers need a bigger say in the selection of silicon for their projects.  So in that respect it is up to us as professionals to get involved in defining the hardware requirements as early as possible and to speak up about the significant risks involved in hardware and toolchain selection.


I don’t think there is anything particularly outrageous about these predictions.  There are examples of all of these practices in embedded projects today.  People are presenting on these topics at conferences, and we are starting to see books and blog posts appearing.  What is perhaps most tragic is my prediction that it will take 10 years before we see widespread adoption of these techniques.  So it goes.

Historically, every change we have made in this industry has been concerned with increasing the quality of software or reducing project risk.  The first two predictions are concerned with increasing quality, but that is not their true motivation.  The real reason is to increase the agility and speed of development by closing the feedback loops.  The only way to go fast is to go well.

The final prediction is all about project risk.  It’s time we as an industry acknowledged that the tools we use represent project risks, and took the time to mitigate them to protect the future viability of our work.

So, what are your predictions?

