What You Actually Need to Know About Software Development

I wrote this post because of a friend who recently decided to take the plunge and go into software engineering. Said friend is intelligent, energetic, personable, and a great learner: there is every reason to think they’ll end up doing great things. But there’s a lot to learn when you first start out.

I started, as many do, by learning about the tools of the trade: syntax and data structures, debuggers and editors. But the day-to-day craft of actually using those tools to write good software – software that works, that solves problems, that can be maintained for long periods of time – that was never formally taught. Learning the craft was a long matter of oral tradition and painful mistakes.

Here’s an attempt to teach at least some of the craft up front.

And lest I give the wrong impression, of course I don’t actually manage to live up to all of these things all the time. None of us can: the only question is whether we’re trying to do better or not. For more on that… just keep reading!


The single most important thing to remember about software is that there is

No Magic

Software is 100% magic-free (despite the mass of creative ways we talk about it!). Every behavior you see, every effect you run across, it happens only because some code is making it happen, and it is always possible to track that code down.

That doesn’t mean it’s always useful to track it down, of course. Sometimes all you need to know is that something happens, and why it happens isn’t really relevant. This leads us to:

Squirrel Syndrome((This name will make more sense if you’ve seen Pixar’s Up.))

Software is an infinite series of distractions and ratholes that you can dive down. You’ll learn a lot following every squirrel you spot, but you’ll get more done if you take a step back every so often to ask whether you really need to go down this particular rathole. Trying to stay in a small(ish) boundary helps with getting yourself into

The Zone

Software is extremely complex, enough so that we regularly create code that no human can hold in their head all at once. We work on things this complex by structuring them in smaller, more-or-less self-contained parts, with defined interfaces between the parts. That way, if you can fit the smaller part in your head, it’s more-or-less safe (at least when things go well) to work on that chunk and trust the other pieces to be sane. ((If this reminds you of microservices, there’s a reason for that!))

Getting even one of the smaller chunks completely into your head can be hard, but while it’s there you can do things like just knowing the line you’re looking at is wrong, and how to fix it. That’s being in the zone for software. It’s very important, not because you do your best work there, but because it’s almost impossible to do anything right if you’re not in that zone. And that brings us to

Interruptions

which will shred your ability to get anything done by knocking some of that hard-won context right out of your head. The ten-minute interruption isn’t the killer: it’s the 45 minutes it takes to get everything back into your head so that you can keep going from where you were before the interruption.

This is possibly the hardest thing about software work to explain to people – though most fields have something like this, it’s often not as dramatic as it is for us. Part of the reason for the difference is

Intuition Failure

Much of what we think of as “intuition” is really taking advantage of structures in our brain that have become highly optimized, over a million years, to help us manage the complexity of the physical world around us. But computing isn’t really part of the physical world around us: my laptop creates her own extension of the world, and I have to go there to work.

The power of software engineering is that that extension of the world is so very, very malleable – and that power is amazing. But one of the curses is that since we’re no longer working in the physical world, our intuition no longer Just Works, which makes everything dramatically harder to understand unless we do extra work to enable our intuition to help us out.((A great example of this sort of enabling work: graphs are generally much easier on us humans than tables of numbers. We have to use the cognitive part of our brain to think about what the tables of numbers mean, but the graph engages a ton of pattern-recognition hardware in our visual cortex and we can grasp it instantly and intuitively. But the tables of numbers are easier to code… so guess which one gets used more often?))

One of the major things that breaks our intuition is that

Time is Weird

Computers these days are fast. Even your phone operates on a timescale a billion times faster than our brains: a second passing for us is kind of like 31 years, give or take, passing for it. That means that we can never watch what a computer is doing in its real time.
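That 31-years figure is easy to sanity-check (a back-of-the-envelope sketch, assuming roughly one operation per nanosecond, i.e. a ~1 GHz machine):

```python
# One second of our time, counted in the computer's nanosecond "heartbeats".
ops_per_second = 1_000_000_000

# If each of those heartbeats were a full second for us, how long is that?
seconds_per_year = 60 * 60 * 24 * 365.25
years = ops_per_second / seconds_per_year
print(round(years, 1))  # about 31.7 years
```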

Instead, we’ve had to invent ways to force the machine to match our glacial pace, usually by asking it to wait for us, or to remember everything it’s doing and leave a log that we can read later. Of course, these techniques slow things down so much that any timing-related problems will usually go away, or at least look different.

Taken to the extreme, we end up flying blind, with no idea at all of why the code just went off the rails.((Or, often worse, no idea why the code miraculously did the right thing!)) To avoid landing in this situation, we have to do real work to arrange for the code to explain its decisions to us. That work is often difficult and unsexy – but it’s critical, because of the major influence of
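One cheap way to do that work is plain logging: have the code narrate each decision as it makes it, so there’s a trail to read later. A minimal sketch using Python’s standard logging module – the function and the threshold here are invented for illustration:

```python
import logging

logging.basicConfig(level=logging.DEBUG, format="%(levelname)s %(message)s")
log = logging.getLogger(__name__)

def choose_batch_size(queue_depth):
    # Hypothetical decision logic -- the point is the narration, not the policy.
    if queue_depth > 1000:
        log.debug("queue_depth=%d exceeds 1000, halving batch size", queue_depth)
        return 50
    log.debug("queue_depth=%d is normal, using default batch size", queue_depth)
    return 100

choose_batch_size(1500)
```

When something goes off the rails at three in the morning, that log line is the code explaining its decision to you.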

Murphy’s Law

In computing, Murphy’s Law can be restated as “if you can’t prove that it’s impossible, it’s guaranteed to happen – often quickly.” Remember that billion-to-one speedup? If you do a billion operations a second, and each is 99.99999% reliable, you’ll see 100 failures every second.
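The arithmetic checks out, using the numbers from the paragraph above:

```python
ops_per_second = 1_000_000_000   # a billion operations per second
reliability = 0.9999999          # 99.99999% -- seven nines, per operation

# The tiny sliver of unreliability, multiplied by a billion tries.
failures_per_second = ops_per_second * (1 - reliability)
print(round(failures_per_second))  # 100 failures every second
```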

Our senses of probability and chance are rooted solidly in time: “this isn’t very likely” really means “this isn’t very likely during the time I’m thinking of.” Change the time scale radically and all of that changes.

Even if it didn’t, though, failures would still be a thing because of

Original Sin

Every single time you write code, you will write bugs. This is a true thing. No matter how senior you are, no matter how good, no matter what, you cannot write perfect code. It is a characteristic of being human.

So you cannot avoid writing bugs.

But you can be smart, and write fewer bugs:

1. Comment Your Brain

The purpose of comments in code is not to explain what the code does – I can read the code for that! No, the purpose of comments is to explain what you were thinking when you wrote the code.

Why do you think it’s important that the code do this? What has happened elsewhere to make it necessary? What effects should someone else watch out for after this happens? Is your code doing things in a way you like, or in a way that worries you? Why did you pick that way? What other ways could you have chosen? Might one of them be better? What were you thinking here?

Six months after writing something, you won’t remember the answers to any of the above, not even in your own code. Write it down. It’s priceless when you’re trying to figure out why something broke.
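Here’s the difference in miniature – a contrived Python sketch, with the retry scenario invented purely for illustration:

```python
# Bad: restates WHAT the code does. The code already says this.
# Retry three times.
MAX_RETRIES = 3

# Good: records WHY -- what you were thinking when you wrote it.
# (Hypothetical scenario for illustration.) The upstream API occasionally
# drops requests under load; in practice two retries recovered nearly all
# of them, and three is paranoia. If you raise this number, raise the
# caller's timeout too, or it will give up before we finish retrying.
MAX_RETRIES = 3
```

Six months on, the first comment tells you nothing you couldn’t read off the code; the second one answers the question you’ll actually be asking.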

2. Test First

You can write code that tests to make sure your other code is working. This turns out to be amazingly important, and most of us suck at it because there’s always other stuff we could be doing. So write the tests first. It’s the only way to know they’ll get written.
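The idea in miniature – a sketch where `slugify` is a made-up example function, and the test is written before the code it checks:

```python
# Written FIRST: these assertions pin down what "working" means
# before any real code exists. (slugify is a made-up example.)
def test_slugify():
    assert slugify("Hello World") == "hello-world"
    assert slugify("hello") == "hello"

# Written SECOND: just enough code to make the test above pass.
def slugify(text):
    return text.lower().replace(" ", "-")

test_slugify()
```

Run the test before writing `slugify` and it fails; that failure is the point – it proves the test is actually checking something, and it tells you when you’re done.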

3. Make Life Easy

Computers are fast, remember? And they’re getting faster. Your brain is not getting faster. So don’t pick tools optimized to make the computers’ lives easier – pick tools made for humans.

The C language is a great example of this. It wasn’t designed to be easy to work in; it was designed to be possible to implement on the machines of 40 years ago. As such, writing in C means you spend all your time thinking about implementation mechanics rather than about the problem you’re trying to solve: the how, not the what or the why. I stopped using C something like a decade ago. These days I write mostly in Python, which is much better about letting you think about what and why, rather than how.

This is where the purists jump in and say “But Python is slower than C!”((This is also the point where rabid fans of every language that isn’t Python jump up and start screaming. We’ll just take that as read and move on without wading into that debate, shall we? Python is definitely not a perfect language, and I’ll be writing more about languages later.)) And they’re absolutely right – but in 2016, that doesn’t matter for 95% of the world. The computer is so fast that it can soak up Python’s inefficiencies and still be fast enough most of the time. Languages like Python optimize for developer performance, not processor performance, and that is the correct tradeoff.

4. Solutions Beat Code

Writing code is easier than solving problems.

Code tends to have well-defined inputs and outputs, a predictable way to use it, and a time when you can say that you’ve finished the task of writing the code. Problems are messier. They involve people and organizations, and they have real-world constraints like getting people to actually use the thing you’ve built.

Solving problems is way more important than writing code. It’s easy to forget that, and important to remember it.

Finally, there’s one more thing that may be more important than all the rest combined:

5. Never Stop

Things move quickly in software. You’ll never know it all. In fact, until the moment you’re ready to quit, you’ll never even be at a point where you can say you know enough. Keep learning, keep broadening your horizons, keep your eyes open.

There’s a place in this world for people who put down their heads and specialize in one little area, but in my experience there are more places for people who learn voraciously and work on being able to solve any problem that comes their way.

All told, being able to move your fingers and change the world means some serious brain stretching and a non-stop influx of new information. But for all that, it’s wicked cool.

Welcome!