Embedded software and open source

Embedded guru and author Jack Ganssle’s latest “Embedded Muse” newsletter has a lot of good commentary on open source in embedded software projects:

http://www.ganssle.com/tem/tem199.htm

I subscribe to very few newsletters, and Jack’s is one of them.  I read every issue; it’s that good.

If you work in embedded software, or software of any kind, you should subscribe!  (I don’t get anything if you subscribe, I just think it’s a worthwhile read.)

While I’m in fanboy mode, I’ll also recommend Jack’s other articles – click on a random one and you’ll probably learn something.  My personal favorite is his guide to debouncing.  He does some good experiments and then shows hardware and software solutions to the pesky debouncing problems we embedded folks face.

Book Review: “Hardware/Firmware Interface Design”

I just finished Hardware/Firmware Interface Design: Best Practices for Improving Embedded Systems Development, by Gary Stringham.  Gary sent me a review copy of the book, btw, but I get no money for reading or reviewing it.  Though if you buy the book via my Amazon link, I get a bit of cash.

Anyway – the book is very good.  Gary says, “This book is written by a firmware engineer but is directed primarily to hardware engineers.”  I’ve been a hardware engineer and a firmware engineer, and I think both groups should read this book.

Gary has been in the trenches of firmware/hardware co-design for 20+ years, and this book shows it.  The book gives 300+ “Best Practices” that are actually usable and practical – a departure from many software or hardware design books.  Gary talks about low-level concepts like interrupts, register definitions, and debugging, as well as higher-level concepts like planning, documentation, and block partitioning across multiple product generations.

Summary: You should read this book if you’re a hardware or firmware engineer.

This is one of the books that I’ll probably revisit a couple of times a year to refresh myself on A Right Way to do hardware/firmware co-design.

‘Nuff said.

Technology vs. Psychology

Do you write software for a living?  Or design hardware?  Or maybe some of each?  While the particular projects any two software or hardware designers work on may be worlds apart, we can characterize what we do in the same way: our work is 20% technology and 80% psychology.

Most of the work we do is 100% solvable – it “merely” requires people and time to accomplish.  It may be difficult, it may be risky, but it’s doable.  Most projects don’t require quantum leaps in technology to accomplish – current hardware and software platforms will do just fine, thank you.  Most projects don’t fail because the IDE wasn’t up to the task, or the compiler, or the linker.  Most projects fail because of the carbon-based part of the tool chain, not the silicon-based one.

All projects have some inefficiencies and problems.  They start small, and at first they aren’t life- (or project-) threatening:

  • The build process is manual and annoying, and therefore error-prone; but most engineers run it before checking in new code, and the errors are always found quickly enough.
  • Code drops from external groups happen less frequently than we’d like, but it’s been okay so far.
  • The external contracting team has found a few bugs in the spec, but it’s nothing that can’t be fixed later, and we don’t have time to check the spec right now.

None of these problems will doom your project on their own.  And because you can live with the status quo, you won’t fix them now.  “I’ll get to that later when I have more time.”  But as the problems multiply, build in intensity and (gasp!) start to constructively interfere with each other, your forward progress will slow down.  It will take longer and longer to do what used to be quick tasks.  Many aspects of the job will become more frustrating, and morale will go down.

Inefficiencies and problems don’t persist because we lack the appropriate technology to fix them – they persist because we lack the appropriate psychology to fix them!

That’s an important thought, so I’m going to repeat it:

Inefficiencies and problems don’t persist because we lack the appropriate technology to fix them – they persist because we lack the appropriate psychology to fix them!

How do we address the psychological aspect?  We need to trick ourselves into doing the Right Things in the Right Ways to make continual progress.

Here are two guidelines for doing the Right Things in the Right Ways:

  1. We MUST make activities that are good for the project as easy as possible and as enjoyable as possible. 
    • Can your pre-commit tests be run automatically?  Yes?  Great – do it!  Make sure they run reasonably quickly, with easy-to-read status updates.  When the tests are done and passing, can we help the user enter a helpful commit message by supplying a list of the diffs automatically?  Yes?  Great – do it!
  2. We MUST make activities that are bad for the project as difficult and miserable as possible. 
    • Checking in without passing all tests?  Okay, but you have to run this ugly command line, fill out the TPS reports on this slow web page, and get a note from your mom.  Consider making “bad activities” impossible.  Checking in without passing all tests?  IMPOSSIBLE!  Can’t be done.  (A sketch of a hook that enforces this follows right after this list.)
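
Here’s what guideline #2 can look like in practice – a minimal sketch of a git pre-commit hook.  Hooks are usually shell scripts, but git will run any executable installed as .git/hooks/pre-commit and abort the commit if it exits nonzero.  The “make test” command is an assumption; substitute whatever runs your test suite.

  /* Sketch of a pre-commit hook: compile this and install the binary as
     .git/hooks/pre-commit.  Git runs it before every commit and aborts
     the commit if it exits nonzero.
     ASSUMPTION: "make test" runs your test suite -- substitute your own. */
  #include <stdio.h>
  #include <stdlib.h>

  int main(void)
  {
      int status = system("make test");   /* run the test suite */

      if (status != 0) {
          fprintf(stderr, "Tests failed (or could not run) -- commit rejected.\n");
          return 1;   /* nonzero exit: git aborts the commit */
      }

      return 0;       /* tests passed: the commit proceeds */
  }

With a hook like that installed, checking in without passing the tests really is impossible – no willpower required.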

Two of my favorite geniuses agree with me, so I must be right:

  • Albert Einstein said, “Everything should be made as simple as possible, but not simpler.”  We need to worry about making our processes as simple as possible.  Don’t worry about making them too simple; I doubt we’ll get that far.  Can you make it simpler?  Yes?  Then do it!
  • Kathy Sierra said, “Make the right things easy and the wrong things hard.”  She says it much better than I do – go read her post!

Of course sometimes we don’t have time to improve a process right now, because we (hopefully) have paying customers banging on the door, deadlines to meet, product to ship.  You’ve got to pick your battles and manage your time wisely.

But we geeks need to spend more time thinking about our psychology – the “how” and “why” of what we do – instead of just focusing on the technology – the “what” we do.

The earlier you find and fix inefficiencies and bugs in a product, the more time and money you save.  In the same way, the earlier you find and fix inefficiencies and bugs in the way you create a product, the more time and money you save.  But you get an extra bonus from improving the way you create a product: it produces not just a one-time boost but a many-time boost!  It’s like compounding interest.  Actually, it’s not just like compounding interest, it IS compounding interest!
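
To put some illustrative (completely made-up) numbers on that: fifty-two independent one-time boosts of 1% add up to a 52% gain, but a 1% improvement in how you work, compounded weekly for a year, is 1.01^52 ≈ 1.68 – a 68% gain.  And the earlier the fix lands, the longer it compounds.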

How to tell you’re a bad programmer

How to tell you’re a bad programmer:

1. You think you’re an awesome programmer.

2. But no one else has ever told you so.

3. You’ve never looked at old code you wrote and thought, “Ewwww! That is horrible code! What was I thinking???”

4. You’ve never looked at someone else’s code and thought, “Dang, whoever wrote this is a freaking genius.”

Note that this also works if you substitute <other profession> for “programmer,” and <output of other profession> for “code.”

If you don’t see growth, it probably ain’t happening.  If you don’t see growth potential, it probably ain’t happening either.

Using Stack Overflow

Joel Spolsky and Jeff Atwood are starting a new website called Stack Overflow – it’s going to be a free programming Q&A site.

I’m a fan of both of those guys (I even have an autographed copy of Joel’s book!), so I signed up to be a beta user to see how it develops. I was trying to come up with a good way to give Stack Overflow a test drive, and I think I’ve hit on something: use it to build a web 2.0-ish site. I’ve been wanting to build a site for myself for at least the last year – I can picture it in my head (and on paper), what it should do and what the user interaction should look like, but I haven’t spent any time actually figuring out how to build the darn thing.

You see, I’m comfortable in C/C++/Perl/Python/x86 assembly (really), but I’ve never done any database or web-y development. Enter Stack Overflow – hopefully it will be a good place to learn about web programming.

I’ll update this blog with my progress once Stack Overflow goes live, wish me luck.

And Jeff: Give me a call when you decide to learn C.  I taught it at Purdue for a few years back in grad school and really enjoyed teaching it – maybe you could trade me some .NET lessons for some C lessons.  Check out my post on pointers to get started. 🙂

Data is more agile than code

Peter Norvig talks about the need for a startup company to go fast – and also in the right direction – in his Startup School 2008 talk.

“Sure you gotta go fast, but if you’re not getting feedback to figure out if you’re going in the right direction it doesn’t matter how fast you go.” (2:47 in the video.)

That advice can apply to both technology and the business sides of a company, but here Norvig focuses on the feedback necessary to make sure the technology you’re developing succeeds.

He suggests you can get this vital feedback by:

“Acquiring lots of data [and] running machine learning over it… The key here is that no matter how agile you are as coders, and I understand that you’re all great, data is going to be more agile than code. Because you’ve got to write the code yourself, but the data you can leverage… there’s an immense multiplying factor that way.”

I guess this Lisp/AI guru from Google knows a thing or two about using lots of data, eh?

He goes on to describe how Google has used machine learning over large data sets for their image search, text segmentation and Google Sets. It’s a great talk, I highly recommend it.

I like the idea of letting the data and algorithms do as much of the heavy lifting as possible – the knowledge I want to share with my users may already be in the data, I’ve just got to dig it out!

Knuth hates XP

In this recent interview, Donald Knuth says:

“Still, I hate to duck your questions even though I also hate to offend other people’s sensibilities—given that software methodology has always been akin to religion. With the caveat that there’s no reason anybody should care about the opinions of a computer scientist/mathematician like me regarding software development, let me just say that almost everything I’ve ever heard associated with the term “extreme programming” sounds like exactly the wrong way to go…with one exception. The exception is the idea of working in teams and reading each other’s code. That idea is crucial, and it might even mask out all the terrible aspects of extreme programming that alarm me.

I also must confess to a strong bias against the fashion for reusable code. To me, “re-editable code” is much, much better than an untouchable black box or toolkit. I could go on and on about this. If you’re totally convinced that reusable code is wonderful, I probably won’t be able to sway you anyway, but you’ll never convince me that reusable code isn’t mostly a menace.”

There are only a few people who can get away with saying that and actually have people listen to them – and Knuth is definitely one of them.  I would love to hear him expand on the “terrible aspects of extreme programming.”  If he had an email address, I would ask him about it, but unfortunately he doesn’t.

Understanding C pointers: Part 1

As I said in “Understanding C pointers: Part 0,” I’m going to try to explain how C pointers work.

Let’s start with the basics. Here’s some simple C code:

  int x = 23;
  int y = x;


You can think of each variable as a box which holds the value of that variable. So in this example we have 2 boxes, named “x” and “y”. After these two statements execute the “x” box contains 23, and the “y” box also contains 23. The picture looks like this:
Example 1-1

Pretty straightforward stuff. If we add this code:

  x = 17;


the picture changes to look like this:
Example 1-2

Nothing too fancy there.
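
By the way, if you’d like to check these pictures against a real compiler, here’s the example so far as a minimal complete program (the expected output is shown in the comment):

  #include <stdio.h>

  int main(void)
  {
      int x = 23;
      int y = x;    /* "y" gets a COPY of the value in "x" */

      x = 17;       /* changes only the "x" box */

      printf("x = %d, y = %d\n", x, y);   /* prints: x = 17, y = 23 */
      return 0;
  }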

Next example: let’s add a pointer into the mix.

  int x = 23;
  int y = x;
  int * p = & x;


If the * or & in the above code scare you, please take a deep breath and relax. We’ll get through this, I promise. 🙂

“x” is a variable of type integer.  So is “y”.  The “int *” before “p” means that p is a variable of type “pointer to integer,” otherwise known as an “integer pointer.”  Nothing magical there.  The “&” before “x” can be read as “the address of x,” or “the box named x.”  That means the pointer “p” points to the box named “x”.

As in the previous examples we have a box named “x” and another box named “y”.  This example adds a pointer to an integer, named “p”.  You can think of this pointer as simply another box, named “p”.  The value in the “p” box is a pointer to another box.  For the boxes “x” and “y” we can say things like “x holds the number 23,” but for the box “p” we say “p holds a pointer to the box named ‘x’.”

A picture is worth at least a few words:

Example 1-3
Watch what happens when we add this next line to the example:

  *p = 17;


The * before the “p” tells us we’re changing the value of what “p” points to.  We are not changing the value of “p” itself.  The number 17 gets put wherever the “p” box points – which is the “x” box in this case.  After this code runs our picture looks like this:

Example 1-4

Notice that “p” has not changed. “p” still points to box “x”. Only the value in the box that “p” was pointing to changed.

Let’s add a couple more lines to that example:

   p = & y;
  *p = 42;


The first line changes the value in the box “p” to be a pointer to the box “y”.  The second line changes the value in the box that “p” points to, making it 42.  The result looks like this:

Example 1-5

Drawing these pictures may seem unnecessary, but I guarantee that drawing them will help you understand your code. Even if you understand pointers completely, when faced with a pointer-laden interview question it’s a good idea to draw your data structures and pointers. This way the interviewer can see how you’re thinking about the question, which is frequently more useful than simply getting the “right” answer.

Okay, that’s the basics. See – pointers aren’t that bad.
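
And if you want to watch the boxes change for yourself, here’s the whole example as a minimal complete program you can compile and run:

  #include <stdio.h>

  int main(void)
  {
      int x = 23;
      int y = x;
      int * p = & x;   /* "p" points to the box named "x" */

      *p = 17;         /* changes the box "p" points to -- that's "x" */
      p = & y;         /* "p" now points to the box named "y" */
      *p = 42;         /* changes the box "p" points to -- now that's "y" */

      printf("x = %d, y = %d, *p = %d\n", x, y, *p);
      /* prints: x = 17, y = 42, *p = 42 */
      return 0;
  }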

And it turns out that the way a computer actually implements variables/pointers is a lot like our simple “boxes” model. Tune in next time for more about that.

Understanding C pointers: Part 0

“C/C++ Pointers are evil. Ditto direct control of memory via malloc, free, new and delete. Java, C# and other ‘safe’ languages are the wave of the future, man!”

Even if you shouted a hearty, “Amen, brother!” after reading those sentences, the C/C++ languages can teach you something useful. Understanding how to directly control memory with “close to the metal” languages like C and C++ can make you smarter, which is a good goal even if you won’t admit to having used such old-school languages.

Of course that new knowledge may displace other knowledge you want to retain, like Mr. Belvedere episode plot lines or your wedding anniversary. You’ve been warned.

Over the next few posts I’m going to try to explain how C pointers work. I assume you’re at least a little familiar with a C-type imperative programming language – if you’ve ever seen Pascal, BASIC, Fortran, C# or Java you should be fine.

I’ve always heard pointers introduced to students as “a difficult thing, this is hard, you won’t understand it…” – which is baloney. It’s worse than just baloney, actually – it’s spoiled baloney (or bologna, I guess), because it not only tastes bad, but also makes you sick. It sets you up for failure. Telling someone that they can’t learn something, and then attempting to teach it to them is… well, I’ll just say it’s foolish and leave it at that.

Pointers are NOT difficult to understand when explained well – I hope that I’m able to explain C/C++ pointers in an easy-to-understand way.  Please let me know if anything doesn’t make sense!

“Understanding C pointers: Part 1” is now available – check it out.