Open Source vs Closed Source Systems

The notion that open source software is inherently more secure than closed source software -- or the opposite notion -- is nonsense. When people say something like that, it is often just FUD and does not meaningfully advance the discussion.

To reason about this, you must limit the discussion to a specific project: a piece of software that scratches a specific itch, is created by an identified team, and has a well-defined target audience. For such a specific case it may be possible to reason about whether open source or closed source will serve the project best.

The problem with pitting all "open source" implementations against all "closed source" ones is that you aren't just comparing licenses. In practice, open source is favored by most volunteer efforts, while closed source is most common in commercial efforts. So we are actually comparing:

  • Licenses.
  • Access to source code.
  • Very different incentive structures, for-profit versus for fun.
  • Very different legal liability situations.
  • Different, and wildly varying, team sizes and team skillsets.
  • etc.

Any attempt to judge how all of this works out for security across all software released as open or closed source simply breaks down. It becomes a statement of opinion, not fact.


Maintained software is more secure than software which is not, the required maintenance effort being, of course, relative to the complexity of said software and the number (and skill) of the people who are looking at it. The theory behind opensource systems being more secure is that there are "many eyes" which look at the source code. But this depends quite a lot on the popularity of the system.

For instance, in 2008 several buffer overflows were discovered in OpenSSL, some of them leading to remote code execution. These bugs had been lying in the code for several years. So although OpenSSL was opensource and had a substantial user base (this is, after all, the main SSL library used for HTTPS websites), the number and skill of source code auditors was not sufficient to overcome the inherent complexity of ASN.1 decoding (the part of OpenSSL where the bugs lurked) and of the OpenSSL source code (quite frankly, this is not the most readable C source code ever).
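
To give a flavour of why that kind of decoding is so easy to get wrong, here is a deliberately simplified sketch (hypothetical code, not the actual OpenSSL ASN.1 parser; all names are made up) of the classic shape of such bugs: a tag-length-value decoder that copies a field into a fixed-size buffer using an attacker-supplied length.

#include <stdint.h>
#include <string.h>

/* Hypothetical TLV decoder: input is <1-byte tag><1-byte length><length bytes>.
   The length is validated against the input size, but never against the
   size of the destination buffer, so a field longer than 32 bytes smashes
   the stack - the classic shape of a parser buffer overflow. */
int decode_field(const uint8_t *in, size_t in_len, uint8_t *tag_out)
{
    uint8_t value[32];

    if ( in_len < 2 )
        return -1;

    *tag_out = in[0];
    size_t len = in[1];              /* attacker-controlled, up to 255 */

    if ( in_len < 2 + len )
        return -1;

    memcpy(value, in + 2, len);      /* missing: len > sizeof(value) check */

    /* ... decode 'value' ... */
    return (int)len;
}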

Closed source systems have, on average, far fewer people to do QA. However, many closed source systems have paid developers and testers, who can commit to the job full time. This is not really inherent to the open/closed question: some companies employ people to develop opensource systems, and, conceivably, one could produce closed source software for free (this is relatively common in the case of "freeware" for Windows). However, there is still a strong correlation between having paid testers and being closed source (correlation does not imply causation, but that does not mean correlations should be ignored either).

On the other hand, being closed source makes it easier to conceal security issues, which is bad, of course.

There are examples of both open and closed source systems with many, or very few, security issues. The opensource *BSD operating systems (FreeBSD, NetBSD and OpenBSD, and a few others) have a very good track record with regards to security. So did Solaris, even when it was a closed source operating system. On the other hand, Windows has (or had) a terrible reputation in that regard.

Summary: in my opinion, the "opensource implies security" idea is overrated. What is important is the time (and skill) devoted to tracking and fixing security issues, and this is mostly orthogonal to the question of openness of the source. However, you not only want a secure system, you also want a system that you positively know to be secure (not being burgled is important, but being able to sleep at night also matters). For that role, opensource systems have a slight advantage: it is easier to be convinced that there is no deliberately concealed security hole when the system is opensource. But trust is a fleeting thing, as was demonstrated with the recent tragicomedy around the alleged backdoors in OpenBSD (as far as I know, it turned out to be a red herring, but, conceptually, I cannot be sure unless I check the code myself).


I think the easiest, simplest take on this is a software engineering one. The argument usually goes: open source software is more secure because you can see the source!

Do you have the software engineering knowledge to understand the kernel top down? Sure, you can look at a given driver, but do you have a complete enough picture of what is going on to really say "ah yes, there must be a bug there"?

Here's an interesting example: not so long ago, a null pointer dereference bug turned up in one of the beta kernels and became a fairly big deal; it was discovered by the developer behind grsecurity (the PaX patches):

  • LWN Article
  • Slashdot coverage

It was introduced in a piece of code like this:

pointer = object->otherptr;   /* 'object' is dereferenced here */

if ( object == NULL )
{
    /* error handling */
}

/* code continues; because 'object' was already dereferenced above,
   the compiler may assume it cannot be NULL, optimise the check away,
   and let execution carry on even when 'object' really is NULL. Problem. */

and the object == NULL check was optimised out by the compiler, rightly - since object had already been dereferenced to read a member, and a null pointer cannot legally be dereferenced, it makes no sense (to the compiler) for that pointer ever to be null at that point. The compiler therefore removes the check the developer expected to be there.
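
If you want to see the effect for yourself outside the kernel, here is a minimal, self-contained sketch of the same pattern (hypothetical names, not the kernel code itself). With gcc at -O2, where -fdelete-null-pointer-checks is in effect, the compiler is entitled to drop the check; the Linux kernel now passes -fno-delete-null-pointer-checks precisely to avoid this class of surprise.

#include <stdio.h>

struct conn {
    int fd;
};

/* Intended contract: return the descriptor, or -1 if c is NULL.
   Because c is dereferenced before the NULL check, the compiler may
   assume c != NULL and delete the check - the same shape as above. */
int get_fd(struct conn *c)
{
    int fd = c->fd;          /* dereference happens first */

    if ( c == NULL )         /* may be optimised away at -O2 */
        return -1;

    return fd;
}

int main(void)
{
    struct conn c = { 42 };
    printf("%d\n", get_fd(&c));  /* calling get_fd(NULL) is undefined behaviour */
    return 0;
}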

Ergo, vis a vis, concordantly, the source code for such a large project may well appear correct - but actually isn't.

The problem is the level of knowledge needed here. Not only do you need to be fairly conversant with (in this case) C, assembly, the particular kernel subsystem, and everything else that goes along with developing kernels, but you also need to understand what your compiler is doing.

Don't get me wrong, I agree with Linus that, given enough eyes, all bugs are shallow. The problem is the knowledge in the brains behind the eyes. If you're paying 30 whizz-kids to develop your product, but your open source project only has 5 people with real knowledge of the code-base, then clearly the closed source version has a greater likelihood of fewer bugs, assuming relatively similar complexity.

Clearly, this also varies over time for any given project, as Thomas Pornin discusses.

Update: edited to remove references to gcc being wrong, as it wasn't.