Intermission: using Twisted to serve Django via wsgi

and now, while on break from our ongoing tutorial (table of contents).

are you tired of starting django and twisted via two separate processes, possibly having them run in two different shells for debugging purposes? i know i am.

fortunately, django and twisted can interoperate – Django supports the Python wsgi interface, and twisted can be used to serve wsgi applications. this is actually the ideal method for having the two interact with each other (and this is the first post in this tutorial that runs 100% counter to my disclaimer).

detailed documentation for this process is available at the twisted documentation site
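to make the shape of the change concrete, here's a hedged sketch of a twistd configuration (.tac) file that wraps django's wsgi application in a twisted resource – the file name, settings module and port are my assumptions, not the tutorial's actual values:

```python
# hypothetical chat_server.tac - run with: twistd -n -y chat_server.tac
import os
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "settings")  # assumption: adjust to your project

from twisted.application import internet, service
from twisted.internet import reactor
from twisted.web.server import Site
from twisted.web.wsgi import WSGIResource
from django.core.wsgi import get_wsgi_application

# wrap django's wsgi application as a twisted resource; requests are
# handled on the reactor's thread pool
wsgi_root = WSGIResource(reactor, reactor.getThreadPool(),
                         get_wsgi_application())

# twistd looks for a top-level variable named "application"
application = service.Application("django_twisted_chat")
internet.TCPServer(8000, Site(wsgi_root)).setServiceParent(application)
```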

give it a shot: try modifying your current tutorial code to make this work.

there are two components required to properly serve the django portions of this tutorial as wsgi resources:

  • configuring Django to work as a twisted wsgi resource
  • serving the static components of django through twisted

this is slightly more complicated than it sounds, since, in newer versions, django dynamically searches through an ordered list of directories when serving its static components. using twisted to mimic that same functionality can take some work. i opted to use a custom directory scanning twisted resource
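the core of that custom resource is just an ordered directory scan. here's a minimal sketch of the lookup on its own – the function name is mine, and the twisted resource wrapping (twisted.web.static.File for hits, a 404 resource for misses) is left out:

```python
import os

def find_static_file(relative_path, static_dirs):
    # return the first existing copy of relative_path, searching the
    # static directories in order (mimicking the scan django's
    # staticfiles finders perform); None if nothing matches
    for directory in static_dirs:
        candidate = os.path.join(directory, relative_path)
        if os.path.isfile(candidate):
            return candidate
    return None
```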

there aren’t very many decisions to make here; most of the work has to do with reading the documentation, and figuring out how to access the specific, relevant interfaces. if you’re stuck, you can find a clear example here; this basic resource might help, too.


tried it? given up? since i’ll be relying on this work when expanding the tutorial, it might be worth your time to see how i did it (and see if it makes sense to you). take a look at this commit/diff to get a list of the specific changes i made – let me know if it makes sense (or if i made a mistake/have a typo)

the git repository for the ongoing chat tutorial has this completed step tagged as v0.2.1 – if you’ve already cloned the git repo, you can check out a clean version like so:

git checkout v0.2.1

whyfore chat, with django, twisted and websockets?


upon seeing the work i’ve put into writing tutorials, showing how to get realtime chat working in django + twisted/websockets, you might make the assumption that i consider this architecture to be, in general, a good idea.


twisted’s implementation of websockets is, as of this writing, not integrated into the main branch.

don’t use code that isn’t considered, by its authors, to be reliable enough to merge into and release as part of their application distribution.


twisted is an event-driven networking engine
django is a solid, easy to use web framework
websockets, a tcp based protocol, is usually implemented as a strange mix between the tcp and http protocols – the connection begins life as an http handshake, then upgrades to websocket’s own message framing over the raw tcp connection

it is, generally speaking, not a good idea to mix abstraction levels; adding event-driven components to your application by combining twisted and django is a bad architectural decision. I strongly suggest you consider using twisted.web instead of mixing django and twisted.

websockets are a strange mix of protocols, and can be difficult to work with unless you are very careful with your choice of libraries and application design, scope and implementation. at the time of this post, i would recommend against using websockets, in production, with the standard deployment of twisted. i strongly urge you to consider the following alternatives, in rough order of likelihood to work for you:


A dictionary, a write_lock, and the GIL

In my previous post, as part of some Python refactoring, I found myself implementing a WriteProtectedDict – a wrapper around a Python dict, using a write_lock to make updates atomic. This triggered some warning bells.

Most people, when they speak of Python, will actually be speaking about CPython (I know I am). And current CPython implementations include a quirky component, called the Global Interpreter Lock (GIL).

Since the GIL is not a part of the language specification, it’s generally speaking unwise to rely on its existence. That said, it comes with some useful side effects, and, in practice, programmers ignore this bit of advice, and rely on the side effects more often than not.

For example, in a situation like the one I was faced with:

  • a shared dictionary
  • multiple, concurrent, threads accessing the dictionary
  • guaranteed single writes to each dictionary key
  • multiple concurrent reads from the dictionary

The GIL guarantees that, even though a race condition exists – multiple threads are writing to the same dictionary – each individual dictionary update is actually an atomic operation within the same process.
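A quick way to see this in action – four threads writing to one shared dict, with no lock, and (under CPython) no lost writes:

```python
import threading

shared = {}

def writer(thread_id):
    # each dict store below is a single bytecode-level operation, which
    # the GIL makes atomic in practice - no explicit lock is taken
    for i in range(1000):
        shared[(thread_id, i)] = i

threads = [threading.Thread(target=writer, args=(t,)) for t in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(len(shared))  # 4000 - every write survived
```

Other interpreters (Jython, IronPython) make no such promise, which is exactly why relying on this is a judgement call.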

So, while the write_lock is required when strictly adhering to the Python language specification, in practice it’s superfluous. Why, then, was I seeing behaviour, in my application, which made it look like the write operation was not atomic? The answer, if you’ve been reading the source, sits in one line of code:

 for published_at in sorted(self.messages)

Turns out that the code I was testing against, when I wrote the tutorial, was not sorting self.messages. Without the sort, the message update loop can hand clients messages out of chronological order – effectively skipping over messages that are stored in non-chronological order. As a side effect, all the longpolling chat clients will be missing messages in their chat history (in my basic tests, missing the same messages, to boot), a behaviour consistent with data being lost due to a race condition.
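In other words, the fix is to make the update loop walk the timestamps in chronological order. A minimal sketch of the corrected lookup – the function name and shape are illustrative, not the tutorial’s actual resource code:

```python
def messages_since(messages, last_seen):
    # messages maps float timestamps to message text; without sorted(),
    # dict iteration order need not be chronological, so a client could
    # be handed a "latest" timestamp that silently skips earlier messages
    return [(ts, messages[ts]) for ts in sorted(messages) if ts > last_seen]
```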

(I updated the original post to correct my error, leaving the related locking exercise intact)

So, what’s the appropriate course of action here? There are two schools of thought:

  • Since the write_lock isn’t strictly necessary, remove it. The GIL’s behaviour is confusing enough, and an unexpected lock can make the related code harder to read. Veteran Python programmers may even find the lock offensive. The first rule of Python is, one does not talk about the GIL
  • Optimizations which rely on implicit assumptions about the interpreter you’re running on are incredibly obfuscated, and should generally be avoided. Implement the lock, and explicitly optimize it away on systems with an appropriate GIL implementation.

For now, I went with the first option, and removed the write lock related code; I think that most Python programmers would probably do the same.

I reserve the right to change my mind, if I can figure out a practical implementation of option 2 (there’s no easy way I know of to detect whether a GIL exists).
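The closest approximation I can think of is checking which interpreter you’re running on – a crude heuristic, not a genuine GIL check. A sketch of what option 2 might look like under that assumption (the class name is mine):

```python
import platform
import threading

class MaybeLockedDict(dict):
    # take a real lock only when we can't assume CPython's GIL makes
    # single dict updates effectively atomic (crude interpreter check,
    # not genuine GIL detection)
    def __init__(self, *args, **kwargs):
        dict.__init__(self, *args, **kwargs)
        self._lock = None
        if platform.python_implementation() != "CPython":
            self._lock = threading.Lock()

    def __setitem__(self, key, value):
        if self._lock is not None:
            with self._lock:
                dict.__setitem__(self, key, value)
        else:
            dict.__setitem__(self, key, value)
```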

Some more discussion, in case what happened isn’t quite clear:

There’s definitely a race condition there – multiple chat clients might all be sending messages at roughly the same time. Without a write lock, they might interrupt each other, and not all of the updates would make it through, causing data loss. The GIL already locks the data on write, though, as a side-effect, guaranteeing that none of it is lost.

However, since the updates are concurrent, there’s no guarantee that messages are stored in the shared messages dictionary in the order they are received in.

Imagine, for example, that two clients are sending a message at the same time. Each will be handled by a separate twisted thread, and both threads will try to execute the following line of code:

self.messages[float(time.time())] = args['new_message'][0]

In a single threaded system, this is what the Python interpreter will do, in order:

  • Step A: get the current time – float(time.time())
  • Step B: store the new_message in the dictionary, at the index retrieved in Step A.

When there are two (or more) threads, each handling a different message, there’s no telling what order they’ll be going through this part of the code in. They might follow each other, like so:

Thread 1: Step A - get timestamp1
Thread 1: Step B - store message1 at timestamp1
Thread 2: Step A - get timestamp2
Thread 2: Step B - store message2 at timestamp2

Or go in reversed order:

Thread 2: Step A - get timestamp1
Thread 2: Step B - store message2 at timestamp1
Thread 1: Step A - get timestamp2
Thread 1: Step B - store message1 at timestamp2

leading to a dictionary which might look like this:

 {timestamp1: message1, timestamp2: message2}

or like this:

 {timestamp1: message2, timestamp2: message1}

Or, the threads might execute in a mixed up order like so:

Thread 1: Step A - get timestamp1
Thread 2: Step A - get timestamp2
Thread 2: Step B - store message2 at timestamp2
Thread 1: Step B - store message1 at timestamp1

leading to a dictionary which looks like this:

 {timestamp2: message2, timestamp1: message1}

Where timestamp1 < timestamp2. As a result, message2 is seen by the message update loop and sent out before message1. Chat clients are then updated to think they have all the messages up to timestamp2, and never request to be updated with message1.

The lack of a sort there is more than just a concurrency-related problem – dictionary data structures explicitly do not guarantee that elements will be iterated over in sorted key order – keys can, over time, change location within the data structure. Even if the messages are inserted in order by their received time, an iteration over the dictionary can return them out of order (dictionary implementations rely on this freedom for various optimizations).
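Easy to see for yourself: insert two timestamps out of chronological order, and only sorted() hands you back the timeline:

```python
msgs = {}
msgs[2.0] = "second message"  # stored first...
msgs[1.0] = "first message"   # ...but carries the earlier timestamp

# plain iteration reflects whatever order the dict happens to use -
# insertion order here - not the chronological order the chat needs
print(list(msgs))    # [2.0, 1.0]
print(sorted(msgs))  # [1.0, 2.0] - chronological
```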

A friend recommended reading Python in Practice, to help me be more Pythonic when I approach these kinds of problems

Refactoring a simple, sample, Twisted file

[a somewhat rambling description of the process, techniques and some of the thinking behind a basic refactoring of a Python/Twisted program, with Python neophytes in mind]

[original tutorial] [messy source code] [refactored source code]

I’m looking at you, messy twisted server source. You’re part of the tutorial I posted last week, and rather in need of some cleanup.

What to refactor?

If you haven’t yet, take a look at the file. You’ll see that it’s quite long, somewhat messy, and that there are distinct bits of code in it which are self contained (the WebsocketChat class, for example). There’s opportunity for some cleanup.

When seeing this file, I want to do three things:

  • Move all of the self contained chunks of code to separate files
  • Rearrange the code in this file, and any new files, to make it slightly more readable (for example, bring it closer to the PEP-8 standard)
  • Keep an eye out for any other small, opportunistic, code clean-up that might make sense to include

There are three interconnected classes, each with a fairly separate function, and fairly readable code calling them. A first cut at a refactoring might be to move each of the classes (WebSocketChat, ChatFactory, HttpChat) into a separate module/file. So, let’s start there.

Where should Python modules go?

Technically, the answer to this question can be quite complicated. The Python module system is complex, powerful, and can handle some fairly fancy bits of organization. It might be fun to write about the details of what can be done some other time. For this post, though, I’ll only review enough to explain the refactoring I’m making.

We probably want to create some modules, to store the code we’re factoring out of the main program. We also want Python to be able to find our modules when we import them – and Python has some conventions and built in assumptions, to help make this work as easy as possible.

If you read the relevant documentation, you’ll see that, unless you go to the trouble of configuring it to do something else, Python will try to find any modules you import in the following locations:

  1. a disk location storing built in and system defined modules
  2. any modules in the directory of the currently running script
  3. modules installed somewhere else in the PYTHONPATH (where you might, for example, find non core libraries and other related tools)
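These locations line up with the entries of sys.path, which you can inspect directly:

```python
import sys

# sys.path is searched in order on every import; the first entry is
# typically the directory of the running script (or '' meaning the
# current directory, in an interactive session)
for entry in sys.path:
    print(entry)
```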

For our case, 2. is probably the relevant choice – so we’ll go ahead and refactor the code into module files, stored in a subdirectory in our project.

Most often, any work and refactoring done on a Python project stays within the working directory of the currently running script (2. above). Working on more than one project at a time is fairly unusual, but common enough that most people will find themselves doing it – in which case 3. might apply: you might find yourself refactoring code into a module from a project external to the one the code is a part of (say, if you’re working on a test framework shared by your projects). It’s even more unusual to worry about modifying or creating system defined modules – at best, you might run into 1. when switching between different versions of Python for compatibility and other tests.


First, we’ll want a namespace for storing and referencing the classes we’re refactoring. “twisted_chat” seems a good enough namespace.

There’s a simple, 2 step, convention for defining a namespace in python – first, create a directory, and then place a file in it, called __init__.py:

bash: cd [directory where the django_twisted_chat files are checked out]
bash: mkdir twisted_chat
bash: touch twisted_chat/__init__.py

__init__.py can remain empty for now. It gets executed as part of the module import/initialization process, and can, if needed, do some pretty powerful bits of manipulation. At this stage in the project’s life, it’s not required to do much, though, so I’ll ignore it for now. (We’ll almost certainly want to modify it before sending this project to a production server, though – probably to add some code to handle module level logging, and maybe some other module initialization code.)

Modules are one of the standard kinds of namespaces used by Python; packages of modules are, almost always, represented as directories on disk. Python source files also function as namespaces – and, since we need to place our classes somewhere, let’s create some source files. In the twisted_chat directory, create three files:

touch twisted_chat/chat_protocols.py
touch twisted_chat/factories.py
touch twisted_chat/resources.py

Now, to the fun bit: move the three classes and all related, relevant code into each of the files – WebSocketChat into chat_protocols.py, ChatFactory into factories.py, and HttpChat into resources.py (the factories and resources names line up with the import statements we’ll need later; the protocol file’s name is up to you).

As an aside, some programmers might disagree with this specific choice of organization. The three classes are relatively small, and separating them into three files like that seems a bit wasteful. Also, there are some fairly intimate interdependencies between them, especially considering the shared lock and the shared messages dictionary. Some programmers might wait until there are a few more classes before splitting them out like this (and maybe, for now, store all three classes in just one file).

The most difficult part of moving the three classes out is figuring out what the “relevant code” bit is, especially when looking at longer, or more complex chunks of code. In our specific case, the extra relevant code is mostly import statements, something a Python programmer can probably do by just reading the source code.

For more complex classes, though, you might try an iterative approach. For example, move the smallest possible chunk of self contained source code (maybe just a subset of a class) into a separate file, import the new file back into the original program, and then test your program. Each time an import error shows up, correct it, until you run out of errors. This is especially easy to do if you have a solid set of tests to run against your code once the refactoring is complete – the more complex the work, the likelier it is that you’ll introduce errors, and you won’t find them without good test coverage. Once all of your tests pass, move another small bit of code out, testing thoroughly, and repeating until the entire complex section, or class, is factored out.

As you factor out the three classes, you’ll quickly lose the reference to the shared write_lock. The class signatures will need some editing to account for this – so, change the classes as you move them, so that they take a write_lock argument when they’re initialized. Pass the lock around from one class to the other, as needed. If you’ve never refactored Python code before, it’s a worthwhile exercise to do this work now. If you’re comfortable with refactoring, read the opportunistic cleanup section at the bottom of this post, for an additional bit of refactoring you can combine into your work, saving you some time in the long term.
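The overall shape of that change looks roughly like this – skeleton signatures only, with the twisted base classes and real method bodies omitted:

```python
import threading

# skeleton signatures only - the real classes subclass twisted's
# protocol/factory/resource classes and carry actual behaviour
class ChatFactory:
    def __init__(self, messages, write_lock):
        self.messages = messages      # shared timestamp -> message dict
        self.write_lock = write_lock  # shared lock, threaded through

class HttpChat:
    def __init__(self, messages, write_lock):
        self.messages = messages
        self.write_lock = write_lock

# one lock and one dict, shared by every class that touches messages
write_lock = threading.Lock()
messages = {}
factory = ChatFactory(messages, write_lock)
resource = HttpChat(messages, write_lock)
```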

As you go along, you’ll notice that the main file will no longer depend on some of the import statements you copied over – just remove them.

Also, don’t forget to import the newly created module and source files, as necessary. Your import statements in the main file will likely look like this:

from twisted_chat.factories import ChatFactory
from twisted_chat.resources import HttpChat

To make sure that you’ve completed the refactoring without breaking anything, try running the chat server:

bash: python manage.py runserver & twistd -n -y <your twisted server file> &

then connect to it and make sure that you can still send chat messages properly.

Opportunistic code cleanup

Earlier in this post, I mentioned making code more readable as a goal of this refactoring.

If you’re new to Python, you might not be aware of PEP-8 – a set of guidelines, describing a standard way to format and write Python code. If you don’t know about it, it’s worth reading. Since we’re doing all of this refactoring work, it’s worth seeing if the source can be made more readable, and also brought closer to the PEP-8 standard.

I also mentioned that I’ll keep an eye out for other refactoring work that might fit in with what we’re doing. Passing the write_lock around, you might have noticed, can be painful. So it’s probably worthwhile to look into refactoring our code, to find a way to avoid having to do that work.

A couple of hints present themselves. The write_lock is exclusively used with the shared messages dictionary, and the dictionary is already shared between all of the relevant structures. What if we were to attach the lock to the shared dictionary?

You can try doing this on your own:

  • create a subclass of the standard dictionary structure
  • find a way to pass the write_lock in to the structure’s initializer
  • override the methods which can be used to modify individual items in the dictionary (a complete wrapper would also need to cover update, pop, clear and setdefault):
class WriteProtectedDict(dict):
    def __init__(self, write_lock):
        ...
    def __setitem__(self, key, value):
        ...
    def __delitem__(self, key):
        ...
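If you’d like to check your work, here’s one possible completion of that skeleton – mine differs only in detail:

```python
import threading

class WriteProtectedDict(dict):
    # a dict whose single-item writes are serialized through a shared lock
    def __init__(self, write_lock):
        dict.__init__(self)
        self.write_lock = write_lock

    def __setitem__(self, key, value):
        with self.write_lock:
            dict.__setitem__(self, key, value)

    def __delitem__(self, key):
        with self.write_lock:
            dict.__delitem__(self, key)
```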

Have you succeeded? What’s your code look like? Take a look at the django_twisted_chat git repository – this version contains all of the work from this post (the twisted_chat module and the main server file are the relevant bits). You can download it and read it locally if you want:

git clone demo_source
cd demo_source/django_twisted_chat
git reset --hard 5d1a8e5a448f86ce6da6425754f14e00bb00e9b8


Wait, what? I put a write_lock on a system data structure… in Python!? OK. That’s… weird…  I smell a bug.

To be continued



if you’re learning how to refactor, or are just looking for a second opinion, you might want to check out the python questions on the codereview stackexchange site.

Chat with Django, Twisted and websockets – addendum 1

[Part 1] - [Part 2] - [Part 3] - [Addendums] - [Source]
[Table of Contents]

The simplest (to write) solution to the data update race condition I refer to at the end of step 3 in the tutorial, is to use a lock to guard your writes. It’s not necessarily the correct one to use on a production server, but should get the tutorial working properly. To implement it, create a lock:

import thread # Python 2’s low-level lock module (renamed _thread in Python 3)
write_lock = thread.allocate_lock() #forcing synchronized writes between the various kinds of clients

And then use it to guard all of the updates to shared data-structures (messages in the tutorial); for example:

        with write_lock:
            self.factory.messages[float(time.time())] = data
            self.wsFactory.messages = self.messages
            #and so on throughout the file.

You can get a version of the code for this tutorial, including this fix, from here (or by running the following at the command line):

bash: git clone
bash: git checkout tags/v0.1.3


This solution is simple, but, at production levels, might cost your twisted server some performance. Adding blocking components is generally a bad idea, from a scalability perspective. Odds are that, eventually, you’ll find yourself pushing this kind of synchronization work down into your data store (you might want to look into using something like Redis or MongoDB, and a related non-blocking twisted client library), and probably writing all sorts of fun time based queries to get at it (or maybe use an advanced tool like datomic)