Teaching TDD from the ground up

One of the first steps in any embedded software project is to implement an assert macro.  This is true for a few reasons:

  • you have certain assumptions about how the hardware should behave and you want to know when those assumptions are broken,
  • you need a way to signal outside of the custom hardware when something has gone wrong,
  • there is no standard function that can deal with these problems.

This might seem like a wasteful thing for every project to undertake, but it is actually a very constructive place to start.  Firstly, it frames the development around verification.  Secondly, when you find a bug you don’t reach for printf debugging; instead you focus on adding assertions.  And finally, from the very outset of the project you demystify the black box of your embedded system.
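To make this concrete, here is a minimal sketch of such an assert macro (the names are made up).  On a host PC it just records and prints the failure; on real hardware the handler would typically signal over a UART or an LED and then halt or reset:

```c
#include <stdio.h>

/* Minimal embedded-style assert -- a sketch with hypothetical names.
 * On the host it records and reports the failure so the example can run;
 * a real port would halt, reset, or trap into the debugger instead. */
static const char *last_failed_expr = NULL;

static void my_assert_failed(const char *file, int line, const char *expr)
{
    last_failed_expr = expr;
    fprintf(stderr, "ASSERT FAILED %s:%d: %s\n", file, line, expr);
    /* On target: for (;;) {} or trigger a breakpoint here. */
}

#define MY_ASSERT(cond) \
    do { if (!(cond)) my_assert_failed(__FILE__, __LINE__, #cond); } while (0)
```

The do/while(0) wrapper keeps the macro safe to use as a single statement, e.g. inside an unbraced if.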

In the same way, I think that the first step in learning test driven development should be to write a testing framework.  And it needn’t be difficult – a simple test framework can be written in a few lines of code.  This is the test framework for testing the fake function framework.

/* Test Framework :-) */
#include <stdio.h>
#include <assert.h>

void setup();
#define TEST_F(SUITE, NAME) void NAME()
#define RUN_TEST(SUITE, TESTNAME) do { \
    printf(" Running %s.%s:\n", #SUITE, #TESTNAME); \
    setup(); TESTNAME(); \
    printf(" SUCCESS\n"); } while (0)
#define ASSERT_EQ(A, B) assert((A) == (B))
#define ASSERT_TRUE(A) assert((A))

https://github.com/meekrosoft/fff/blob/master/test/c_test_framework.h

It could be even more concise, but I wanted to make it compatible with the googletest framework so I can re-use the test cases in both C and C++ without modification.  Anyway, you get the point.
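For illustration, a test file built on this mini framework might look like the following sketch (the add function, the setup counter, and the test names are all hypothetical):

```c
#include <stdio.h>
#include <assert.h>

/* The mini framework, as in c_test_framework.h above. */
void setup();
#define TEST_F(SUITE, NAME) void NAME()
#define RUN_TEST(SUITE, TESTNAME) do { \
    printf(" Running %s.%s:\n", #SUITE, #TESTNAME); \
    setup(); TESTNAME(); \
    printf(" SUCCESS\n"); } while (0)
#define ASSERT_EQ(A, B) assert((A) == (B))
#define ASSERT_TRUE(A) assert((A))

/* Hypothetical code under test. */
static int add(int a, int b) { return a + b; }

/* setup() runs before every test; here it just counts invocations. */
static int setup_calls = 0;
void setup() { setup_calls++; }

TEST_F(AddSuite, two_plus_two_is_four)
{
    ASSERT_EQ(4, add(2, 2));
    ASSERT_TRUE(add(2, 2) > 0);
}
```

A main() then drives the suite with one RUN_TEST line per test, e.g. `RUN_TEST(AddSuite, two_plus_two_is_four);`.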

I am a big fan of testing frameworks, and for sure there is no need to rewrite JUnit every time we start a new Java program.  But for learning TDD, I think a lot of the mystery disappears when participants write a simple test framework and use it to write their first few tests.


3 Comments

Filed under fff, software, testing

Test Seams in C: Function pointers vs. Preprocessor hash defines vs. Link-time substitution

When replacing production C code with test doubles (or fakes), there are three basic approaches: function pointers, preprocessor hash defines, and link-time substitution.  All three are examples of what Michael Feathers calls a seam: a place where you can alter the behavior of a program without editing it in that place.  We use seams when we unit test software to decouple and localise the software under test.  Each of these types of seams has different trade-offs for testing embedded C.  Here’s what I think.

Function Pointers (run-time substitution)

  • + Easy to substitute at runtime for test functions
  • + Makes for more decoupled design
  • – Harder for IDE to jump to definitions, etc.
  • – Can make static code analysis not work so well
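A minimal sketch of a function-pointer seam, with made-up names: production code calls through the pointer, so a test can point it at a fake at run time.

```c
/* Function-pointer seam (illustrative names). */

static int real_read_temperature(void)
{
    return 21; /* on real hardware this would read a sensor */
}

/* The seam: wired to the real implementation by default. */
int (*read_temperature)(void) = real_read_temperature;

/* Production code under test calls through the pointer. */
int fan_should_run(void)
{
    return read_temperature() > 30;
}

/* A test substitutes its own implementation. */
static int fake_read_temperature(void) { return 35; }
```

A test simply assigns `read_temperature = fake_read_temperature;` before exercising fan_should_run(), and restores the real function afterwards.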

Preprocessor hash defines (preprocessor-time substitution)

  • + Good for getting legacy code under test
  • – Needs discipline to structure the hash defines well
  • – Can make code harder to read and understand
  • – Definitions can be hard to locate because they can be set on the command line (-D), include path, header files, etc and can be overridden at any location
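A minimal sketch of a preprocessor seam, again with made-up names.  A real project would set the switch on the test target’s command line with -DUNIT_TEST; it is defined inline here only so the sketch is self-contained.

```c
/* Preprocessor seam (illustrative names). */
#define UNIT_TEST 1 /* normally -DUNIT_TEST on the test build's command line */

#ifdef UNIT_TEST
#define flash_write fake_flash_write
#endif

static int fake_flash_write_count = 0;
static int fake_flash_write_last_value = 0;

static void fake_flash_write(int addr, int value)
{
    (void)addr;
    fake_flash_write_count++;
    fake_flash_write_last_value = value;
}

/* Production code: under UNIT_TEST the preprocessor rewrites the
 * flash_write call into a call to the fake. */
void save_setting(int value)
{
    flash_write(0x1000, value);
}
```

This is exactly why the discipline point above matters: the #define silently rewrites every occurrence of the name, so it has to be structured carefully.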

Link different function implementations (link-time substitution)

  • + Good for getting legacy C under test (limited changes needed to production code)
  • – Need to check the link definitions to understand what is running for any given executable
  • – Can’t have multiple implementations of a function in one executable
  • – Can create the need for multiple test executables
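A minimal sketch of a link-time seam, with made-up names.  In a real project these pieces live in separate files and the test executable simply links the fake translation unit instead of the real driver; they are shown together here so the sketch compiles as one unit.

```c
/* Link-time seam (illustrative names). */

/* uart.h -- the shared declaration both builds compile against */
void uart_send(char byte);

/* protocol.c -- production code under test; it knows only the declaration */
void send_ack(void)
{
    uart_send('A');
}

/* fake_uart.c -- linked into the test executable in place of the real
 * uart.c, which would poke the hardware registers */
static char uart_last_sent = 0;
void uart_send(char byte)
{
    uart_last_sent = byte;
}
```

Because there can only be one definition of uart_send per executable, a second fake with different behaviour forces a second test executable, which is the last trade-off listed above.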

You can use the fake function framework to generate the fake functions for testing, whatever type of seam you choose.  It just creates a function and the supporting structure to capture arguments and provide return values. You can find more information about the fake function framework here:

https://github.com/meekrosoft/fff#readme

10 Comments

Filed under Uncategorized

Have Your Own Object Calisthenics Refactoring Dojo

We have been having lots of fun at work with regular coding dojos.  In particular, we have been using refactoring dojos to hone our skills at improving existing code guided by tests.  For this week’s dojo, Juven and I created a particularly crappy implementation of TicTacToe to be the starting point.  Feel free to use the code and associated slides to host your own.  Fun guaranteed!



Filed under Uncategorized

Hierarchy of Software Needs

What happens when I apply Maslow’s hierarchy of needs to software?
Maslow's Hierarchy of Needs

Physiological

  • Breathing: There is a user for this software.
  • Sex: It compiles.  Software that does not compile has no future.

Safety

  • Morality: It adds value.  Software that does not add value does not survive.
  • Security of body: The software is under version control.
  • Security of health: There are no compiler or static analysis warnings.
  • Security of resources: There are limited signs of broken windows.  Tests, if any, run successfully.  Commented out code is minimal.

Love/Belonging

  • Friendship: The software meets the needs and expectations of the users.
  • Family: Developers enjoy working on the software.

Esteem

  • Achievement: Developers are proud of the software.
  • Confidence: Tests and quality assurance mean making changes is a low-risk proposition.
  • Respect of others: Future developers’ needs are considered.
  • Respect by others: Users and stakeholders trust the software to meet their needs.

Self-actualization

  • Morality: The code is “Good”.
  • Acceptance of facts: The software has executable Acceptance Tests to define its requirements.
  • Lack of prejudice: The problem domain is perfectly expressed in the source
  • Creativity, Spontaneity: It is easy and cheap to experiment with the code.
  • Homoiconic?

This may be a flawed extension of a flawed model, but the central point is that until the basic physiological and safety needs of the software are met it is really difficult to focus on the higher motivations of software development.  This might also provide some clues as to where to start cleaning house on a legacy renovation project.

1 Comment

Filed under legacy, quality, software, Uncategorized

Notes from a Refactoring Cyberdojo

We have regular coding dojos in our office.  We do a different problem every time, trying to solve it as best we can, using solid design and test driven development.  In a typical dojo we work in pairs to solve a well-defined problem incrementally.  The main focus is to improve software design and TDD skills through deliberate practice.

Last week we tried something different: instead of solving a problem from scratch we would try a refactoring dojo.  A refactoring dojo follows the same format, except instead of starting with a blank page we start with a fully implemented solution with full test coverage.

Why would we work on code that already solved the problem and had full test coverage?  I mean, if the code works and there are tests, then aren’t we already done? This is an opportunity to practice refactoring skills, but the motivations run deeper than that.

A coding dojo is an opportunity to practice coding, away from the usual constraints of the working environment.  One of the main constraints we have in our daily jobs is actually finishing tasks.  Many an agile consultant has filled a conference talk jabbering on about Definition of Done, but the reality is that every developer makes a judgement call about when the code is good enough for check-in.  So the refactoring dojo is an opportunity to see how far we can go when we don’t have the pressure of finishing the task.

Refactoring Dojo

The starting point was an implementation of Yahtzee calculator created by Jon Jagger for one of his training workshops.  It was expertly crafted to contain a large collection of code smells and opportunities for improvement.  In other words, it really sucked! :-) (it actually takes a lot of creativity to craft code like this, thanks Jon!).

For the first session we used the cyber-dojo software to solve the problem.  In the second session we repeated the exercise, but instead using visual studio and resharper.  In both sessions we rotated computers at regular intervals (the countdown timer on the wall shows how much time before next rotation).

Retrospective

We had each table work together to answer three questions: what went well, what wasn’t good, and what surprised you.  Each group discussed amongst themselves and when there was some agreement they wrote their thoughts on post it notes and put them on the board when they were ready.

Juven at the whiteboard

Good Things:

  • Pair rotation, so can learn a lot from others.
  • Quick communication and diverse opinions.
  • Baby steps.
  • Automation tests saved the day.
  • Cyber-Dojo is cool.
  • It is fun.
  • Sample code/problem is interesting.

Bad Things:

  • Not enough time in each pair session (10 minutes instead of 5 might have been better).
  • The whole dojo could have lasted longer.
  • The Cyber-Dojo server is not stable enough.
  • VS2008 has refactoring tools while Cyber-Dojo records the history; no tool is perfect.
  • While switching pairs, a big change without enough test coverage was found.

Surprising Things:

  • The code has so much room for improvement.
  • We can make big changes very quickly.
  • There are so many different ideas, working styles, and ways of using tools among different people.
  • Cyber-Dojo is amazing.
  • While switching pairs, someone found that people had left code with many tests commented out.

Once everyone had a chance to finish we all ended up around the board discussing the various items.

Closing discussion

Try it yourself!

It’s really easy to try this kata out.  Just point your browser at http://cyber-dojo.com/diff/show/E4466E24B2?avatar=elephant&tag=26 and select “Start a new practice kata from here”.  Then you will use this version of the kata as the starting point for your dojo.


6 Comments

Filed under dojo, refactoring, teams

Fake Function Framework – Request For Comments!

I have a little micro-framework called fff.h for generating fake functions (mocks) in C.  I have blogged about it in the past, and there have been some exciting changes over the last few weeks that I’d like to share.

Background

The basic premise is that testing a C source file is difficult because of all the external function calls that are hardwired into the production code.  fff.h helps by making it a one-liner to create fake implementations of these functions for the purposes of testing.

Now, the basic formula for testing a C module on my last project was like this:

Typical approach to testing legacy C code

This was crude but effective.  What really grated on me, though, was the amount of repetitive code that had to be written to answer very basic questions.  Was my fake function called? How many times? What parameters were passed in on each occasion? Was the unlock function called before the write function? To answer these questions we ended up writing endless amounts of (not very pretty) code like this:

Manual Fake Functions

It seemed to me that it should be possible to write a macro to generate this code.  This would tidy up the test files in terms of readability and would make it easier for my team to make tests.  And that was the genesis of the Fake Function Framework.
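The kind of hand-written fake being described looks roughly like this (all names are made up): each fake records how often it was called and with what arguments, and a shared sequence counter lets a test check call ordering, such as unlock before write.

```c
/* Hand-written fakes of the kind described above (illustrative names). */
static int sequence = 0;

static int unlock_flash_count = 0;
static int unlock_flash_order = 0;
void unlock_flash(void)
{
    unlock_flash_count++;
    unlock_flash_order = ++sequence;
}

static int write_flash_count = 0;
static int write_flash_order = 0;
static int write_flash_arg0 = 0;
void write_flash(int value)
{
    write_flash_count++;
    write_flash_order = ++sequence;
    write_flash_arg0 = value;
}

/* Hypothetical production code that calls the faked functions. */
void store_value(int value)
{
    unlock_flash();
    write_flash(value);
}
```

A test can then assert on the counts, the captured argument, and that unlock_flash_order is less than write_flash_order.  Multiply this boilerplate by every external function and the appeal of generating it with a macro is obvious.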

The best introduction to the Fake Function Framework is on the fff github site so I won’t rehash that here.

New Developments

I have since moved on to a new project and haven’t thought about fff.h in a wee while.  Through a variety of coincidences it happened that Tore, the architect on my previous project, met James Grenning during a training course and introduced him to fff.h. James played around with fff.h and sent me some great suggestions for cleanup, and how to improve fff.h to produce globally linkable fakes.  At first I thought that this was an unneeded complication for an otherwise simple framework, but James convinced me that reusable fakes had a lot of value.

I set to work on making the generator able to produce the new style of fakes.  I don’t know if I would have attempted this without having a full regression test suite for the framework.

The way it works is instead of using the usual syntax to create a fake using the FAKE_xxx_FUNCn macros, you create a header file to hold the declarations:

Global Fake example header file

And then create an implementation file for the function definitions:

Global Fake implementation file example

Then you can link many test suites against this single object without any problems. Simple!
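Hand-rolled, a globally linkable fake boils down to something like the following sketch (the names are made up; fff generates the equivalent of both halves from its macros).  The two pieces live in separate files in a real project and are shown together here only so the sketch compiles as one unit.

```c
/* flash_fakes.h -- declarations only, safe to include from many suites */
struct flash_write_fake_t {
    int call_count;
    int arg0_val;
};
extern struct flash_write_fake_t flash_write_fake;
void flash_write(int addr);

/* flash_fakes.c -- the single set of definitions every suite links against */
struct flash_write_fake_t flash_write_fake;
void flash_write(int addr)
{
    flash_write_fake.call_count++;
    flash_write_fake.arg0_val = addr;
}
```

Because only the header is included by each test suite, there is exactly one definition of the fake in the final executable, which is what makes it globally linkable.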

Breaking Changes

There have been some breaking changes to fff.h to enable these new features, and I have also taken the opportunity to clean up some weaknesses in the previous design.  But since fff.h is just a header file, both versions can happily coexist in your codebase.  All you have to do is rename the new version fff2.h and start using fff2.h in your new tests.

So these are the basic changes you should know about if you are already familiar with the fake function framework.  Somewhere in the test executable you must define the globals for fff.  Why not put them beside the main function?

Define FFF globals

In the old version of fff.h there was a shortcut for resetting all the fakes in one step.  This is now gone.  The reason is that it only worked in C++ using static initializers, and the introduction of global fakes was incompatible with this.

Reset Fakes

There has also been some cleanup.  All the fake function framework variables have been moved into a struct to avoid polluting the global namespace.

fff globals now in struct

And the variables for individual fakes are now defined in a struct.

Fake function data now in struct

Acknowledgements

The fake function framework would not exist as it does today without the support of key folks.  Tore Martin Hagen (and his whiteboard), my partner-in-crime in Oslo, was instrumental during the genesis of fff.  Jon Jagger, who during ACCU 2011 helped me teach the preprocessor to count.  And James Grenning, who convinced me of the value of global fakes, sent me a prototype implementation, and showed me how expressive a DSL can be.  Thanks to you all!

Request for comments

I hope you like the new changes, and if you have any feedback or suggestions for further improvements don’t hesitate to leave a comment or contact me via twitter.


47 Comments

Filed under fff, quality, software, testing

Metricide (or The Curious Consequences of Counting)

This is a PowerPointPoem I gave at the ACCU 2012 conference

4 Comments

Filed under quality, software

Triptych: Red, Green, Refactor!

Triptych: Red, green, refactor!

Inspired by James Grenning.


Filed under Uncategorized

Retrospective – Global Day of Coderetreat, Beijing!

On the 3rd of December a small collection of developers gathered in a basement conference room in Beijing to practice the craft of software development. The participants came from diverse backgrounds: architects, developers, students, and managers. There were people comfortable with C++, C#, Java, and Python, and others who hadn’t programmed in a couple of years. Here are my notes from the day:

Introduction

Coderetreat is a day-long, intensive practice event, focusing on the fundamentals of software development and design. By providing developers the opportunity to take part in focused practice, away from the pressures of ‘getting things done’, the coderetreat format has proven itself to be a highly effective means of skill improvement. Practicing the basic principles of modular and object-oriented design, developers can improve their ability to write code that minimizes the cost of change over time.

A coderetreat is a language-agnostic event. In each session, the pair chooses what language they want to work in. The day focuses on practicing the fundamentals of software: TDD and the 4 rules of simple design; these are applicable regardless of language.

Alex and Zilong pairing

Chocks Away!

After a brief introduction, we went into the first session to familiarize ourselves with the task: Conway’s Game of Life. Participants struggled with deleting the code at the end of the session; this was one of the biggest challenges that they faced throughout the day. The second session was to swap pairs and have a second go at the problem with a clean slate. The third session introduced the concept of ping-pong TDD.

Gospers Glider Gun

For lunch we went out to a local restaurant to get a chance to stretch our legs and have a fresh perspective.

Lunchtime!

In the afternoon we tried a couple of CyberDojos. The first session we didn’t change pairs so we could have a chance to get familiar with the CyberDojo software. After a few technical issues we were on our way. In the second session we changed the pairs every five minutes, really challenging ourselves to write code in very small increments.

CyberDojo

In the final session I gave the participants a choice: we could agree as a group either to create the absolute best code we could for the solution, or to create the absolute worst code possible that still implements the solution. It was a tough call, but the dirty code challenge won out. In this session there were many creative approaches, and a notable visual implementation that looked great in the UI and was a hornet’s nest underneath – you know who you are! ☺

At the end of the day we held the usual Closing Circle, where we each share with the group our feelings on what went well, what was surprising, and what we can take away from the event. There was a general consensus that the day was fun and that it highlighted the importance of communication, both between people and through the code. Also surprising was how many different approaches there were to the same problem.

Globalization

Hello from Japan!

The event is called a “Global Day” for a reason; the same event was happening in over 90 cities across the world on the same day. There were many ways the events connected: lots of activity on Twitter with the hashtag #gdcr11, and several events talked to each other via Google Hangouts or Skype. We had a quick chat with the coderetreat in Tokyo in the morning, and before lunch we managed a chat with the folks in Perth, Australia. This was a lot of fun and helped to energise the group.

Hello from Perth!

Thanks to Corey Haines and Jim Hurne for organizing the global day, and thanks to everyone who helped me organize in Beijing. Thanks to Tokyo and thanks to Perth. And special thanks to everyone in Beijing who came along on a blue-sky day to spend their Saturday in a basement coding with other software craftspeople.

p.s. I am starting a local software craftsmanship meetup group here in Beijing, sign up to hear about future events and meetings!

Related Posts:
Interview with Corey Haines
CyberDojo

Tags: software testing craftsmanship coderetreat coreyhaines tdd


Filed under coderetreat, software, testing

Less Architecture, More Microtecture

Microscope at Bioscience Center

I’m going to jump straight onto my soapbox: the world needs fewer architects. Am I suggesting that engineers of the world unite and overthrow these unjust overlords and reclaim software by the people, for the people? Well, yes, actually.  Let’s round up all the architects and send them to a dark place with no whiteboards.

Is there room for reform in my plan? Maybe. I’m not hopeful though.  It is a long and tortuous penance that must be exacted, and it will take all the concentration and commitment that an architect can muster.  It may also include humiliation and a few slices of humble pie.

You might ask yourself, why has this bozo got a chip on his shoulder? Well let me tell you.  The world is full of huge systems built on the watch of a network of highly talented architects performing due diligence and risk control.  Yet, 5 years later all that remains is a few MLOCs of hell that not even consultants can be tempted to work on.

How does this happen?  Was there a lack of boxes and arrows written down in some document? Was poor requirements management to blame? Hardly.  Codebases become unmanageable not because of the high-level design decisions, but because of the low-level details.  No UML drawing ever caused a maintenance clusterfuck (apologies to my mother for dropping the f-bomb). What destroys software is engineers.  We do this. It’s us.

We write 100-line functions. We inherit for code reuse. We write 10-line comments instead of taking 15 minutes to discuss naming a variable. We don’t write tests. We rush to deadlines. We put third-party framework calls in the middle of our production code. We think multi-hour-long build times are acceptable. We copy/paste instead of refactoring.

No more. I’m out. Done. Finished. I no longer wish to be associated with this crap. But I have a cunning plan.  Just like the architects, I want to solve this problem with a job title. From now on, you can call me a microtect.  You are welcome to join me on this crusade, of course. Viva la revolution!

Of course no revolution is complete without a manifesto, so without apology I offer my Microtecture Manifesto:

  • Sweat the small stuff
  • Small, loosely coupled over large, monolithic
  • Prefer composition over inheritance
  • Code is king
  • Optimise for change
  • Tight feedback loops
  • Value people, not technology stacks
  • Value reducing lines of code over adding more code
  • Name stuff good [sic]

Remember kids, take care of the pennies and the pounds take care of themselves.

2 Comments

Filed under Uncategorized