Wrapping a C library in Python: C, Cython or ctypes?

Warning: a Cython core developer's opinion ahead.

I almost always recommend Cython over ctypes. The reason is that it has a much smoother upgrade path. If you use ctypes, many things will be simple at first, and it's certainly cool to write your FFI code in plain Python, without compilation, build dependencies and all that. However, at some point, you will almost certainly find that you have to call into your C library a lot, either in a loop or in a longer series of interdependent calls, and you would like to speed that up. That's the point where you'll notice that you can't do that with ctypes. Or, when you need callback functions and you find that your Python callback code becomes a bottleneck, you'd like to speed it up and/or move it down into C as well. Again, you cannot do that with ctypes. So you have to switch languages at that point and start rewriting parts of your code, potentially reverse engineering your Python/ctypes code into plain C, thus spoiling the whole benefit of writing your code in plain Python in the first place.

With Cython, OTOH, you're completely free to make the wrapping and calling code as thin or thick as you want. You can start with simple calls into your C code from regular Python code, and Cython will translate them into native C calls, without any additional calling overhead, and with an extremely low conversion overhead for Python parameters. When you notice that you need even more performance at some point where you are making too many expensive calls into your C library, you can start annotating your surrounding Python code with static types and let Cython optimise it straight down into C for you. Or, you can start rewriting parts of your C code in Cython in order to avoid calls and to specialise and tighten your loops algorithmically. And if you need a fast callback, just write a function with the appropriate signature and pass it into the C callback registry directly. Again, no overhead, and it gives you plain C calling performance. And in the much less likely case that you really cannot get your code fast enough in Cython, you can still consider rewriting the truly critical parts of it in C (or C++ or Fortran) and call it from your Cython code naturally and natively. But then, this really becomes the last resort instead of the only option.
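
Just to make the "thin wrapper" idea concrete, here's a minimal sketch; the header and function names (mylib.h, ml_process, ml_set_callback) are invented stand-ins for whatever your library actually exposes:

cdef extern from "mylib.h":
    double ml_process(double x)
    void ml_set_callback(double (*cb)(double))

def process(x):
    # Cython compiles this into a direct C call; only the Python float
    # <-> C double conversion at the boundary remains.
    return ml_process(x)

cdef double _scale(double x) noexcept:
    # A C-level function with the exact callback signature; noexcept keeps
    # it a plain C function so it can be passed as a function pointer.
    return 2.0 * x

def install_callback():
    ml_set_callback(_scale)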

So, ctypes is nice for doing simple things and for quickly getting something running. However, as soon as things start to grow, you'll most likely come to the point where you notice that you'd have been better off using Cython right from the start.


ctypes is your best bet for getting it done quickly, and it's a pleasure to work with as you're still writing Python!

I recently used ctypes to wrap an FTDI driver for communicating with a USB chip, and it was great. I had it all done and working in less than one work day. (I only implemented the functions we needed, about 15 of them.)

We were previously using a third-party module, PyUSB, for the same purpose. PyUSB is an actual C/Python extension module. But PyUSB wasn't releasing the GIL when doing blocking reads/writes, which was causing problems for us. So I wrote our own module using ctypes, which does release the GIL when calling the native functions.
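
For what it's worth, that GIL release comes for free with ctypes: calls made through CDLL/WinDLL drop the GIL for the duration of the foreign call (PyDLL is the variant that keeps it held). A rough sketch of what that buys you, with libc's usleep standing in for a blocking driver call (Linux path shown; on Windows you'd load your DLL with WinDLL, as in the example further down):

import ctypes
import threading

libc = ctypes.CDLL("libc.so.6")  # assumption: Linux; any blocking C call will do

def blocking_call():
    # The GIL is released while the C function runs, so other Python
    # threads keep executing instead of stalling for half a second.
    libc.usleep(500000)

worker = threading.Thread(target=blocking_call)
worker.start()
print("main thread stays responsive while the C call blocks")
worker.join()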

One thing to note is that ctypes knows nothing about the #define constants (or other preprocessor definitions) in the library's headers, only about the exported functions, so you'll have to redefine those constants in your own code.

Here's an example of how the code ended up looking (lots snipped out, just trying to show you the gist of it):

from ctypes import *

d2xx = WinDLL('ftd2xx')

# return codes from the DLL, redefined by hand (ctypes can't see the #defines)
OK = 0
INVALID_HANDLE = 1
DEVICE_NOT_FOUND = 2
DEVICE_NOT_OPENED = 3

...

def openEx(serial):
    serial = create_string_buffer(serial)  # C string holding the serial number
    handle = c_int()  # out-parameter; the DLL fills it in via byref() below
    if d2xx.FT_OpenEx(serial, OPEN_BY_SERIAL_NUMBER, byref(handle)) == OK:
        return Handle(handle.value)
    raise D2XXException

class Handle(object):
    def __init__(self, handle):
        self.handle = handle
    ...
    def read(self, bytes):
        buffer = create_string_buffer(bytes)
        count = c_int()  # out-parameter: how many bytes were actually read
        if d2xx.FT_Read(self.handle, buffer, bytes, byref(count)) == OK:
            return buffer.raw[:count.value]
        raise D2XXException
    def write(self, data):
        buffer = create_string_buffer(data)
        count = c_int()
        bytes = len(data)
        if d2xx.FT_Write(self.handle, buffer, bytes, byref(count)) == OK:
            return count.value
        raise D2XXException

Someone did some benchmarks on the various options.

I might be more hesitant if I had to wrap a C++ library with lots of classes/templates/etc. But ctypes works well with structs and can even callback into Python.
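
To illustrate both of those points, here's a sketch that mirrors a (made-up) C struct with ctypes.Structure and hands a Python function to a real C function (libc's qsort) as a callback via CFUNCTYPE:

import ctypes

class Point(ctypes.Structure):
    # mirrors: struct Point { double x; double y; };
    _fields_ = [("x", ctypes.c_double),
                ("y", ctypes.c_double)]

p = Point(1.0, 2.0)  # can be passed by value or via byref()

libc = ctypes.CDLL("libc.so.6")  # assumption: Linux

# qsort's comparator: int (*compar)(const void *, const void *)
CMPFUNC = ctypes.CFUNCTYPE(ctypes.c_int,
                           ctypes.POINTER(ctypes.c_int),
                           ctypes.POINTER(ctypes.c_int))

def py_cmp(a, b):
    # Ordinary Python function, called back from C while qsort runs.
    return a.contents.value - b.contents.value

values = (ctypes.c_int * 5)(5, 1, 4, 2, 3)
libc.qsort(values, len(values), ctypes.sizeof(ctypes.c_int), CMPFUNC(py_cmp))
print(list(values))  # [1, 2, 3, 4, 5]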


Cython is a pretty cool tool in itself, well worth learning, and is surprisingly close to the Python syntax. If you do any scientific computing with Numpy, then Cython is the way to go because it integrates with Numpy for fast matrix operations.
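
As a rough sketch of that integration (a typed-memoryview loop; the function name sum2d is made up, and the caller just passes in any 2-D float64 NumPy array):

def sum2d(double[:, :] a):
    # The indexing below compiles to plain C array access; no Python
    # objects are created inside the loops.
    cdef double total = 0.0
    cdef Py_ssize_t i, j
    for i in range(a.shape[0]):
        for j in range(a.shape[1]):
            total += a[i, j]
    return total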

Cython is a superset of the Python language. You can throw any valid Python file at it, and it will spit out valid C code for an equivalent extension module. In this case, Cython just maps the Python calls to the underlying CPython API. That alone gives perhaps a 50% speedup, because your code is no longer interpreted.

To get some real optimizations, you have to start telling Cython additional facts about your code, such as type declarations. If you tell it enough, it can boil the code down to pure C. That is, a for loop in Python becomes a for loop in C. This is where you will see massive speed gains. You can also link against external C libraries here.
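
For example, two cdef declarations are enough to turn this into a plain C loop (a throwaway sketch, nothing library-specific):

def triangular(int n):
    # With i and total declared as C types, the range() loop below is
    # compiled into an ordinary C for loop.
    cdef long long total = 0
    cdef int i
    for i in range(n):
        total += i
    return total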

Using Cython code is also incredibly easy. I thought the manual made it sound more difficult than it is. You literally just do:

$ cython mymodule.pyx
$ gcc [some arguments here] mymodule.c -o mymodule.so

and then you can import mymodule in your Python code and forget entirely that it compiles down to C.
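
If you'd rather not work out the gcc arguments by hand, the usual route is a two-line setup script and letting Cython and setuptools supply the flags (same hypothetical mymodule as above):

# setup.py
from setuptools import setup
from Cython.Build import cythonize

setup(ext_modules=cythonize("mymodule.pyx"))

$ python setup.py build_ext --inplace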

In any case, because Cython is so easy to set up and start using, I suggest trying it to see if it suits your needs. It won't be wasted effort if it turns out not to be the tool you're looking for.