Bret Victor: The Future of Programming

Yet another great quote from Bret Victor in his back-to-the-future talk, “The Future of Programming”:

“The most dangerous thought you can have as a creative person is to think that you know what you’re doing. Because once you think you know what you’re doing you stop looking around for other ways of doing things. And you stop being able to see other ways of doing things. You become blind.”

(the quote starts around 31:20)

 

Two views on developing for Apple

Two different (opposing?) views on developing for the Apple ecosystem after today’s WWDC love-fest announcements:

The first is from Des Traynor’s post, “Playing their game”:

Apple always look out for their customers. They will always look to improve the experience. If that means adding software to their platform then so be it. If that software is in direct competition with your software, then so be it. If they roll out the software as a free update across all operating systems, leaving you for dead, then so be it.

Their ball. Their game. Their rules.

The second is from Marco Arment’s post, “What Safari’s Reading List means for Instapaper”:

So I’m tentatively optimistic. Our world changes quickly, especially on the cutting edge, and I really don’t know what’s going to happen. (Nobody does.) But the more potential scenarios I consider, the more likely it seems that Safari’s Reading List is either going to have no noticeable effect on Instapaper, or it will improve sales dramatically. Time will tell.

Des says you should stay the heck out of Apple’s way. If Apple might ever create it, you don’t want to be anywhere near it. If Apple decides to create the thing that you’ve created, they’ll kill your business. That makes sense.

(He also says that Apple looks out for their customers, which is almost, but not completely, incorrect. Apple looks out for their customers’ money, not their customers. It’s an important distinction.)

Marco says a rising tide lifts all boats. Apple’s new (free) product looks a lot like Marco’s existing (not free) product. Most people don’t know offline reading exists, so with Apple’s (free) publicity he hopes more people use an offline reader, and if he gets some new revenue he makes out okay. That makes sense too.

It sounds like Des thinks of Apple as a powerful, childish tyrant. “You don’t want to be around when Apple loses its temper!”

Marco, however, seems to think of Apple as a morally neutral, unstoppable force of nature. “Good thing my boat is ready to sail, because we’re gonna get a lot of rain!”

Des and Marco each seem to be glad to be where they are, out of Apple’s direct path or in it.

Who is right? Apple: steer clear, or sail near?

And where are you? Out of Apple’s direct path (or Google’s, or Microsoft’s, or Salesforce’s, or Oracle’s…) or in it?

Technology vs. Psychology

Do you write software for a living?  Or design hardware?  Or maybe some of each?  While the particular projects any two software or hardware designers work on may be worlds apart, we can characterize what we do in the same way: our work is 20% technology and 80% psychology.

Most of the work we do is 100% solvable – it “merely” requires people and time to accomplish.  It may be difficult, it may be risky, but it’s doable.  Most projects don’t require quantum leaps in technology to accomplish – current hardware and software platforms will do just fine, thank you.  Most projects don’t fail because the IDE wasn’t up to the task, or the compiler, or the linker.  Most projects fail because of the carbon-based part of the tool chain, not the silicon-based one.

All projects have inefficiencies and problems.  They start small, and at first they aren’t life- (or project-) threatening:

  • The build process is manual and annoying, and therefore error-prone; but most engineers run it before checking in new code, and the errors are always found quickly enough.
  • Code drops from external groups happen less frequently than we’d like, but it’s been okay so far.
  • The external contracting team has found a few bugs in the spec, but it’s nothing that can’t be fixed later, and we don’t have time to check the spec right now.

None of these problems will doom your project on their own.  And because you can live with the status quo, you won’t fix them now.  “I’ll get to that later when I have more time.”  But as the problems multiply, build in intensity and (gasp!) start to constructively interfere with each other, your forward progress will slow down.  It will take longer and longer to do what used to be quick tasks.  Many aspects of the job will become more frustrating, and morale will go down.

Inefficiencies and problems don’t persist because we lack the appropriate technology to fix them – they persist because we lack the appropriate psychology to fix them!

That’s an important thought, so I’m going to repeat it:

Inefficiencies and problems don’t persist because we lack the appropriate technology to fix them – they persist because we lack the appropriate psychology to fix them!

How do we address the psychological aspect?  We need to trick ourselves into doing the Right Things in the Right Ways to make continual progress.

Here are two guidelines for doing the Right Things in the Right Ways:

  1. We MUST make activities that are good for the project as easy as possible and as enjoyable as possible. 
    • Can your pre-commit tests be run automatically?  Yes?  Great – do it!  Make sure they run reasonably quickly, with easy-to-read status updates.  When the tests are done and passing, can we help the user enter a helpful commit message by supplying a list of the diffs automatically?  Yes?  Great – do it!
  2. We MUST make activities that are bad for the project as difficult and miserable as possible. 
    • Checking in without passing all tests?  Okay, but you have to run this ugly command line, fill out the TPS reports on this slow web page, and get a note from your mom.  Consider making “bad activities” impossible.  Checking in without passing all tests?  IMPOSSIBLE!  Can’t be done.  (There’s a small sketch of one way to do that right after this list.)
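
To make that guideline concrete, here’s a minimal sketch of a “tests must pass” gate. Git hooks are usually shell scripts, but to stay in one language for this blog it’s written in C; the "make test" target is an assumption – substitute whatever actually runs your tests. Compile it, install the binary as .git/hooks/pre-commit, and git will refuse any commit where it exits non-zero.

  /* pre_commit_check.c - hypothetical sketch of a "tests must pass" gate.
     Assumes the project has a "make test" target; adjust for your build. */
  #include <stdio.h>
  #include <stdlib.h>

  int main(void)
  {
      printf("Running the test suite before allowing the commit...\n");

      /* system() returns the command's exit status; non-zero means failure */
      if (system("make test") != 0) {
          fprintf(stderr, "Tests failed - commit blocked. Fix them first!\n");
          return 1;   /* a non-zero exit makes git abort the commit */
      }

      printf("All tests passed - commit allowed.\n");
      return 0;
  }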

Two of my favorite geniuses agree with me, so I must be right:

  • Albert Einstein said, “Everything should be made as simple as possible, but not simpler.” We need to worry about making our processes as simple as possible.  Don’t worry about making them too simple; I doubt we’ll get that far.  Can you make it simpler?  Yes?  Then do it!
  • Kathy Sierra said, “Make the right things easy and the wrong things hard.”  She says it much better than me – go read her post!

Of course sometimes we don’t have time to improve a process right now, because we (hopefully) have paying customers banging on the door, deadlines to meet, product to ship.  You’ve got to pick your battles and manage your time wisely.

But we geeks need to spend more time thinking about our psychology – the “how” and “why” of what we do – instead of just focusing on the technology – the “what” we do.

The earlier you find and fix inefficiencies and bugs in a product, the more time and money you save.  In the same way, the earlier you find and fix inefficiencies and bugs in the way you create a product, the more time and money you save.  But you get an extra bonus from improving the way you create a product because it produces not just a one-time boost, it produces a many-time boost!  It’s like compounding interest.  Actually, it’s not just like compounding interest, it IS compounding interest!
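
Here’s a back-of-the-envelope illustration of that compounding effect – the 2% boost and the 25 improvements are invented numbers, purely for illustration:

  /* compounding.c - toy illustration of process improvements compounding.
     The 2% boost and 25 improvements are made-up numbers, not data. */
  #include <stdio.h>

  int main(void)
  {
      double efficiency = 1.0;     /* relative output per unit of effort */
      const double boost = 1.02;   /* each fix buys a hypothetical 2% */

      for (int i = 1; i <= 25; i++) {
          efficiency *= boost;     /* improvements multiply, they don't just add */
          if (i % 5 == 0)
              printf("After %2d improvements: %.2fx as productive\n", i, efficiency);
      }
      return 0;
  }

Twenty-five 2% improvements leave you about 1.64x as productive – not the 1.5x you’d get if they merely added up – because every fix also speeds up the work of making the next fix.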

How to tell you’re a bad programmer

How to tell you’re a bad programmer:

1. You think you’re an awesome programmer.

2. But no one else has ever told you so.

3. You’ve never looked at old code you wrote and thought, “Ewwww! That is horrible code! What was I thinking???”

4. You’ve never looked at someone else’s code and thought, “Dang, whoever wrote this is a freaking genius.”

Note that this also works if you substitute <other profession> for “programmer,” and <output of other profession> for “code.”

If you don’t see growth, it probably ain’t happening.  If you don’t see growth potential, it probably ain’t happening either.

Baby Steps

A few seemingly unrelated thoughts, and then a tie-’em-all-together thought:

1. Rands’ recent post on “Saving Seconds” really resonated with me. I forwarded it to my wife and said, “See, this is how I think!” so that she could better understand why I optimize the shortcuts on our PC, or the way I load the dishwasher, or the other thousand seemingly-OCD-inspired things I do. (Thankfully she understands me very well already!)

2. I think it’s incredibly important to be good to “the environment.” Whether you believe in the global warming story or not, it needs to be done. I recycle everything possible (and I got really excited when I found out that Ecology Action, our local recycling place, started recycling #3 – #7 plastics!), am a vegetarian, use reusable grocery bags (and don’t use individual plastic bags for bagging fruit or veggies), use BioBag compostable trash bags, and the list goes on.

3. Joel’s “Fire and Motion” post also resonates with me. Forward progress, even if it’s just a tiny bit of progress, is good. And necessary, in fact, to getting anything done.

If your goal is to save the Earth it’s easy to get overwhelmed because you can’t do it all yourself. What possible difference can a single person make? There’s too much to do!

If your goal is to improve some software or improve a software development process itself, it’s also easy to get overwhelmed because you can’t do it all yourself. What possible difference can a single person make? There’s too much to do!

Guess what? Maybe you’re right. Maybe you can’t save the Earth by yourself, or single-handedly improve the festering swamp that is your team’s software development process, but there is something that you can do – you can take baby steps.

Recycle something. Get a reusable bag or two for your groceries. Turn off the lights when you leave the office conference room. Ask a coworker to look over your code for bugs before you check in a change. Create an automated test suite for your software, starting with a single “does it compile?” test.
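
For that last baby step, the first test really can be trivial. Here’s a minimal sketch – the add() function and the assert-based style are placeholder assumptions, stand-ins for whatever your project actually contains:

  /* first_test.c - a deliberately tiny first test: if it compiles, links,
     and the assert holds, the "suite" passes. Grow it one test at a time. */
  #include <assert.h>
  #include <stdio.h>

  static int add(int a, int b)    /* stand-in for a function you already have */
  {
      return a + b;
  }

  int main(void)
  {
      assert(add(2, 2) == 4);     /* the entire test suite, for now */
      printf("All tests passed.\n");
      return 0;
  }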

It will require an intentional change in your thinking and behavior to do these things the first time, and the second time, and the seventh time. But soon enough doing a little bit extra for the environment or your software will become a habit, and those small bits of effort will add up into something meaningful over time.

And you will be motivated to keep adding more good habits over time because you will see the internal (“Yay! I feel better about myself!”) and external (“Yay! I found a bug!”) benefits from those habits you’ve already adopted.

And as you develop those small habits, you will be noticed by others around you who may even join you in small improvements – “viral marketing” at work.


A Couple Of Resources:

Joel’s “Getting Things Done When You’re Only a Grunt” post has more software development improvement ideas.

http://science.howstuffworks.com/save-earth-top-ten.htm has a few Earth-friendly things you can do.

Using Stack Overflow

Joel Spolsky and Jeff Atwood are starting a new website called Stack Overflow: a free programming Q&A site.

I’m a fan of both of those guys (I even have an autographed copy of Joel’s book!), so I signed up to be a beta user to see how it develops. I was trying to come up with a good way to give Stack Overflow a test drive, and I think I’ve hit on something: use it to build a web 2.0-ish site. I’ve been wanting to build a site for myself for at least the last year – I can picture it in my head (and on paper), what it should do and what the user interaction should look like, but I haven’t spent any time actually figuring out how to build the darn thing.

You see, I’m comfortable in C/C++/Perl/Python/x86 assembly (really), but I’ve never done any database or web-y development. Enter Stack Overflow – hopefully it will be a good place to learn about web programming.

I’ll update this blog with my progress once Stack Overflow goes live – wish me luck.

And Jeff: Give me a call when you decide to learn C – I taught it at Purdue for a few years back in grad school and really enjoyed teaching it – maybe you could trade me some .NET lessons for some C lessons. You can check my post on pointers to get started. 🙂

Data is more agile than code

Peter Norvig talks about the need for a startup company to go fast – and also in the right direction – at his Startup School 2008 talk.

“Sure you gotta go fast, but if you’re not getting feedback to figure out if you’re going in the right direction it doesn’t matter how fast you go.” (2:47 in the video.)

That advice applies to both the technology and business sides of a company, but here Norvig focuses on the feedback necessary to make sure the technology you’re developing succeeds.

He suggests you can get this vital feedback by:

“Acquiring lots of data [and] running machine learning over it… The key here is that no matter how agile you are as coders, and I understand that you’re all great, data is going to be more agile than code. Because you’ve got to write the code yourself, but the data you can leverage… there’s an immense multiplying factor that way.”

I guess this Lisp/AI guru from Google knows a thing or two about using lots of data, eh?

He goes on to describe how Google has used machine learning over large data sets for their image search, text segmentation and Google Sets. It’s a great talk, I highly recommend it.

I like the idea of letting the data and algorithms do as much of the heavy lifting as possible – the knowledge I want to share with my users may already be in the data, I’ve just got to dig it out!

Knuth hates XP

In this recent interview, Donald Knuth says:

“Still, I hate to duck your questions even though I also hate to offend other people’s sensibilities—given that software methodology has always been akin to religion. With the caveat that there’s no reason anybody should care about the opinions of a computer scientist/mathematician like me regarding software development, let me just say that almost everything I’ve ever heard associated with the term “extreme programming” sounds like exactly the wrong way to go…with one exception. The exception is the idea of working in teams and reading each other’s code. That idea is crucial, and it might even mask out all the terrible aspects of extreme programming that alarm me.

I also must confess to a strong bias against the fashion for reusable code. To me, “re-editable code” is much, much better than an untouchable black box or toolkit. I could go on and on about this. If you’re totally convinced that reusable code is wonderful, I probably won’t be able to sway you anyway, but you’ll never convince me that reusable code isn’t mostly a menace.”

There are only a few people who can get away with saying that and actually have people listen to them – and Knuth is definitely one of them. I would love to hear him expand on the “terrible aspects of extreme programming.” If he had an email address I would ask him about it, but unfortunately he doesn’t.

Understanding C pointers: Part 1

As I said in “Understanding C pointers: Part 0,” I’m going to try to explain how C pointers work.

Let’s start with the basics. Here’s some simple C code:

  int x = 23;
  int y = x;


You can think of each variable as a box which holds the value of that variable. So in this example we have two boxes, named “x” and “y”. After these two statements execute, the “x” box contains 23, and the “y” box also contains 23. The picture looks like this:
Example 1-1

Pretty straightforward stuff. If we add this code:

  x = 17;


the picture changes to look like this:
Example 1-2

Nothing too fancy there.

Next example: let’s add a pointer into the mix.

  int x = 23;
  int y = x;
  int * p = & x;


If the * or & in the above code scare you, please take a deep breath and relax. We’ll get through this, I promise. 🙂

“x” is a variable of type integer. So is “y”. The “int *” before p means that p is a variable of type “pointer to integer,” otherwise known as an “integer pointer.” Nothing magical there. The “&” before “x” can be read as the “address of x,” or “the box named x.” Which means the pointer “p” points to the box named “x”.

As in the previous examples we have a box named “x” and another box named “y”. This example adds a pointer to an integer called “p”. You can think of this pointer as simply another box, named “p”. The value in the “p” box is a pointer to another box. For the boxes “x” and “y” we can say things like “x holds the number 23,” but for the box “p” we say “p holds a pointer to the box named ‘x’.”

A picture is worth at least a few words:

Example 1-3

Watch what happens when we add this next line to the example:

  *p = 17;


The * before the “p” tells us we’re changing the value of what “p” points to. We are not changing the value of “p” itself. The number 17 gets put wherever the “p” box points – which is the “x” box in this case. After this code runs our picture looks like this:

Example 1-4

Notice that “p” has not changed. “p” still points to box “x”. Only the value in the box that “p” was pointing to changed.

Let’s add a couple more lines to that example:

   p = & y;
  *p = 42;


The first line changes the value in the box “p” to be a pointer to the box “y”. The second line changes the value in the box that “p” points to – which is now “y” – to 42. The result looks like this:

Example 1-5

Drawing these pictures may seem unnecessary, but I guarantee that drawing them will help you understand your code. Even if you understand pointers completely, when faced with a pointer-laden interview question it’s a good idea to draw your data structures and pointers. This way the interviewer can see how you’re thinking about the question, which is frequently more useful than simply getting the “right” answer.

Okay, that’s the basics. See – pointers aren’t that bad.
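
If you want to poke at the boxes yourself, here’s the whole example as one small program you can compile and run. The printf lines and the (void *) casts are my additions for inspection; the pointer behavior is exactly what the pictures above show (the printed addresses will differ on your machine).

  /* boxes.c - the examples from this post, with printouts so you can
     watch the boxes change as the code runs. */
  #include <stdio.h>

  int main(void)
  {
      int x = 23;
      int y = x;
      int * p = & x;      /* p holds "a pointer to the box named x" */

      printf("x = %d, y = %d, p points at %p (box x lives at %p)\n",
             x, y, (void *)p, (void *)&x);

      *p = 17;            /* change the value in the box p points to */
      printf("after *p = 17:          x = %d, y = %d\n", x, y);

      p = & y;            /* change p itself to point at the box y */
      *p = 42;            /* change the value in the box p now points to */
      printf("after p = &y; *p = 42:  x = %d, y = %d\n", x, y);

      return 0;
  }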

And it turns out that the way a computer actually implements variables/pointers is a lot like our simple “boxes” model. Tune in next time for more about that.

Understanding C pointers: Part 0

“C/C++ Pointers are evil. Ditto direct control of memory via malloc, free, new and delete. Java, C# and other ’safe’ languages are the wave of the future, man!”

Even if you shouted a hearty, “Amen, brother!” after reading those sentences, the C/C++ languages can teach you something useful. Understanding how to directly control memory with “close to the metal” languages like C and C++ can make you smarter, which is a good goal even if you won’t admit to having used such old-school languages.

Of course that new knowledge may displace other knowledge you want to retain, like Mr. Belvedere episode plot lines or your wedding anniversary. You’ve been warned.

Over the next few posts I’m going to try to explain how C pointers work. I assume you’re at least a little familiar with a C-type imperative programming language – if you’ve ever seen Pascal, BASIC, Fortran, C# or Java you should be fine.

I’ve always heard pointers introduced to students as “a difficult thing, this is hard, you won’t understand it…” – which is baloney. It’s worse than just baloney, actually – it’s spoiled baloney (or bologna, I guess), because it not only tastes bad, but also makes you sick. It sets you up for failure. Telling someone that they can’t learn something, and then attempting to teach it to them is… well, I’ll just say it’s foolish and leave it at that.

Pointers are NOT difficult to understand when explained well – I hope that I’m able to explain C/C++ pointers in an easy-to-understand way – please let me know if anything doesn’t make sense!

“Understanding C pointers: Part 1” is now available – check it out.