Myths programmers believe

I’m fascinated by misconceptions. I used to regularly host pub quizzes (aka trivia nights in some parts of our blue marble), and anyone who’s attended a few of my quizzes knows that if it sounds like a trick question, it probably is. I’ve done more than my fair share of “True or False” rounds where the answer to every question was false, but hardly anyone noticed the pattern, because each and every one of them was something people believe but that simply isn’t true. Wait, how does that relate to programming? Read on.

Programmers who comment on Hacker News or Proggit links are not a randomly sampled cross-section of the industry. They’re better. As with anyone who’s ever attended a conference, they’re self-selected individuals who are more passionate about their craft than the average bear. And yet, over and over again, I see comments from well-meaning, savvy coders expressing beliefs and misconceptions that seem to be as widespread as they are provably false. Below are some of my favourites, with explanations of why I think they’re myths.

C is magically fast.

I’ve written about this before. If anything on this list is going to get me downvoted, cause cognitive dissonance, and start flame wars, this will probably be the one.

As with everything else in this blog post, I just… don’t understand it. Maybe it’s because I learned C back when it was considered slow for games programming and only asm would do. Or maybe it’s that it all compiles down to machine code anyway, and I don’t harbour homeopathy-like tendencies to think that the asm, like water, has a memory of whence it came, causing the CPU to execute it faster. Which brings me to…

GC languages are magically slow.

Where GC means “tracing GC”. If I had a penny for each time someone comments on a thread about how tracing GCs just aren’t acceptable when one does serious systems programming I’d have… a couple of pounds and/or dollars.

I kind of understand this one, given that I believed it myself until 4 years ago. I too thought that the magical pixies working in the background to clear up my mess “obviously” caused a program to run slower. I’ll do a blog post on this one specifically.

Emacs is a text-mode program.

Any time somebody argues that text editors can’t compete with IDEs, somebody invariably asks why they’d want to use a text-mode editor. I’m an Emacs user, and I don’t know why anyone would want to do that either. Similarly…

Text editors can’t and don’t have features like autocomplete, go to definition, …

This usually shows up in the same discussions in which Emacs is assumed to be a terminal program. It also usually comes up when someone says an IDE is needed because of the features above. I don’t program without them; I think it’d be crazy to. Then again, I’ve rarely seen anyone use their editor of choice well. I’ve lost count of how many times I’ve watched someone open a file in vim, realise it’s not the one they want, close vim, open another file, close it again… aaarrrgh.

I’ve also worked with someone who “liked Eclipse”, but didn’t know “go to definition” was a thing. One wonders.

Given that the C ABI is the lingua franca of programming, the implementation should be written in C

C is the true glue language when it comes to binary interoperability. It got that way for historical reasons, but there’s no denying it reigns supreme in that respect. For reasons that dumbfound me, a lot of people take this to mean that if you have a C API, then obviously the only language you can implement it in is C. Never mind that the Visual Studio libc is written in C++; this, apparently, is why you pick C!
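
To make that concrete, here’s a minimal sketch of what such a header might look like (mylib.h and mylib_add are made-up names): everything the caller ever sees is the C ABI, and nothing in it dictates that the definition of mylib_add is written in C rather than C++, D, Rust, or anything else that can export C symbols.

/* mylib.h - declares a plain C API. */
#ifndef MYLIB_H
#define MYLIB_H

#ifdef __cplusplus
extern "C" {   /* only seen by C++ compilers; keeps the symbols unmangled */
#endif

/* Callers only care that this has the C name and calling convention;
   the definition could live in a C, C++, D or Rust object file. */
int mylib_add(int x, int y);

#ifdef __cplusplus
}
#endif

#endif /* MYLIB_H */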

C and C++ are the same language, and its real name is C/C++.

And nobody includes Objective-C in there (which, unlike C++, is an actual superset of C), because… reasons? There are so many things in common between the two of them that sometimes it makes sense to write C/C++, because whatever it is you’re saying applies to both. But that’s not how I see it used most of the time.
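
If an illustration helps, here’s a trivial, made-up snippet that is perfectly idiomatic C yet won’t compile as C++, because C++ doesn’t allow the implicit conversion from void *:

#include <stdlib.h>

int main(void) {
    /* Valid C: malloc returns void *, which converts implicitly to int *.
       In C++ this line is an error without an explicit cast. */
    int *p = malloc(10 * sizeof *p);
    free(p);
    return 0;
}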

Rust is unique in its fearless concurrency.

I guess all the people writing Pony, D, Haskell, Erlang, etc. didn’t get the memo. Is it way safer than in C++? Of course it is, but that’s a pretty low bar to clear.

The first time I wrote anything in Rust it took me 30min to write a deadlock. Not what I call fearless. “Fearless absence of data races”? Not as catchy, I guess.

You can only write kernels in C

Apparently Haiku and Redox don’t exist. Or any number of unikernel frameworks.

The endianness of your platform matters.

This is another one that’s always confused me. I never even understood why `htons` and `ntohs` existed. There’s so much confusion over this, and I’ve failed so miserably at explaining it to other people, that I just point them to someone who’s explained it better than I can.

EDIT: I didn’t mean that endianness never matters, it’s just that 99.9% of the time people think it does, it doesn’t. I had to read up on IEEE floats once, but that doesn’t mean that in the normal course of programming I need to know which bits are the mantissa.
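
For what it’s worth, the kind of code I point people at looks roughly like this sketch (decode_le32 is a made-up helper name): it reads a 32-bit little-endian value from a byte buffer and gives the right answer on any host, big- or little-endian, with no #ifdefs and no ntohl in sight.

#include <stdint.h>

/* Decode a 32-bit little-endian value from a byte stream.
   The arithmetic fixes the byte order; the host's own
   endianness never enters into it. */
uint32_t decode_le32(const unsigned char *buf) {
    return (uint32_t)buf[0]
         | ((uint32_t)buf[1] << 8)
         | ((uint32_t)buf[2] << 16)
         | ((uint32_t)buf[3] << 24);
}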

If your Haskell code compiles, it works!

Err… no. Just… no.

Simple languages get rid of complexity.

Also known as the “sweep the dust under the rug” fallacy.

EDIT: I apparently wasn’t clear about this. I mean that the inherent complexity of the task at hand doesn’t go away and might even be harder to solve with a simple language. There are many joke languages that are extremely simple but that would be a nightmare to code anything in.

Line coverage is a measure of test quality.

I think measuring coverage is important. I think looking at pretty HTML coverage reports helps to write better software. I’m confident that chasing line coverage targets is harmful and pointless. Here’s a silly function:

int divide(int x, int y) { return x + y; }

Here’s the test suite:

TEST(divide) { divide(4, 2); }

100% test coverage. The function doesn’t even return the right answer for anything. Oops. Ok, ok, so we make sure we actually assert on something:

TEST(divide) { 
    assert(divide(4, 2) == 2); 
    assert(divide(6, 3) == 2);
}

int divide(int x, int y) { return 2; }

100% test coverage. Hmm…

TEST(divide) { 
    assert(divide(4, 2) == 2); 
    assert(divide(9, 3) == 3);
}

int divide(int x, int y) { return x / y; }

Success! Unless, of course, you pass in 0 for y…
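
Just to drive the point home, here’s a sketch of what catching that case might look like (divide_checked, with its out parameter, is entirely hypothetical); note that the 100% line coverage we already had never nudged us towards writing it:

int divide_checked(int x, int y, int *ok) {
    if (y == 0) { *ok = 0; return 0; }  /* the case coverage never asked about */
    *ok = 1;
    return x / y;
}

TEST(divide_checked) {
    int ok;
    assert(divide_checked(4, 2, &ok) == 2 && ok);
    divide_checked(1, 0, &ok);
    assert(!ok);
}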

Measure test coverage. Look at the reports. Make informed decisions on what to do next. I’ve also written about this before.

C maps closely to hardware.

If “hardware” means a PDP-11 or a tiny embedded device, sure. There’s a reason you can’t write a kernel in pure C with no asm. And then there’s all of this: SIMD, multiple CPU cores, cache hierarchies, cache lines, memory prefetching, out-of-order execution, etc. At least it has atomics now.
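
As a sketch of how little the source tells you here (the array and function names are made up): both functions below are equivalent as far as C’s abstract machine is concerned, yet on a real cache hierarchy the strided version is typically several times slower, and nothing in the language lets you express or even see why.

#include <stddef.h>

#define N 2048
static float grid[N][N];

float sum_rows(void) {           /* walks memory sequentially */
    float total = 0.0f;
    for (size_t i = 0; i < N; ++i)
        for (size_t j = 0; j < N; ++j)
            total += grid[i][j];
    return total;
}

float sum_cols(void) {           /* same arithmetic, strided access */
    float total = 0.0f;
    for (size_t j = 0; j < N; ++j)
        for (size_t i = 0; i < N; ++i)
            total += grid[i][j];
    return total;
}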

I wonder if people will still say C maps closely to hardware when they’re writing code for quantum computers. It sounds silly, but I’ve heard worse.

Also, Lisp machines never existed and FPGAs/GPUs aren’t hardware (remember Poe’s law).

No other language can do X

No matter what your particular X is, Lisp probably can.

 


10 thoughts on “Myths programmers believe”

  1. Piotr Kłos says:

    “Line coverage is a measure of test quality”: this is actually true. It is a measure. It’s just not sufficient for all needs.

  2. Esquilax says:

    You realize that “Programmers that comment on Hacker News or Proggit links are not a randomly sampled cross section of the industry. They’re better.” is just another myth, right?

  3. Peter says:

    > int divide(int x, int y) { return 2; }

    .. And isn’t that exactly what testing against Mocks amounts to?

    Take this thing, make it return X, and then test that it returns X.

  4. user says:

    Another claim I heard is that functional programming doesn’t need refactoring because it just works with data without OOP.

  5. bla says:

    > Apparently Haiku and Redox don’t exist. Or any number of unikernel frameworks

    Not to mention Windows NT
