Intermission: using Twisted to serve Django via wsgi

and now, while on break from our ongoing tutorial (table of contents).

are you tired of starting django and twisted via two separate processes, possibly having them run in two different shells for debugging purposes? i know i am.

fortunately, django and twisted can inter-operate – Django supports the Python wsgi interface, and twisted can be used to serve wsgi applications. this is actually the ideal method for having the two interact with each other (and is the first post in this tutorial that runs 100% counter to my disclaimer)

detailed documentation for this process is available at the twisted documentation site

try modifying your current tutorial code to make this work.

there are two components required to properly serve the django portions of this tutorial as wsgi resources:

  • configuring Django to work as a twisted wsgi resource
  • serving the static components of django through twisted

this is slightly more complicated than it sounds, since, in newer versions, django dynamically searches through an ordered list of directories when serving its static components. using twisted to mimic that same functionality can take some work. i opted to use a custom directory scanning twisted resource

there aren’t very many decisions to make here; most of the work has to do with reading the documentation, and figuring out how to access the specific, relevant interfaces. if you’re stuck, you can find a clear example here; this basic resource might help, too.


tried it? given up? since i’ll be relying on this work when expanding the tutorial, it might be worth your time to see how i did it (and see if it makes sense to you). take a look at this commit/diff to get a list of the specific changes i made – let me know if it makes sense (or if i made a mistake/have a typo)

the git repository for the ongoing chat tutorial has this completed step tagged as v0.2.1 – if you’ve already cloned the git repo, you can check out a clean version like so:

git checkout v0.2.1

whyfore chat, with django, twisted and websockets?


upon seeing the work i’ve put into writing tutorials, showing how to get realtime chat working in django + twisted/websockets, you might make the assumption that i consider this architecture to be, in general, a good idea.


twisted’s implementation of websockets is, as of this writing, not integrated into the main branch.

don’t use code that isn’t considered, by its authors, to be reliable enough to merge into and release as part of their application distribution.


twisted is an event-driven networking engine
django is a solid, easy to use web framework
websockets, a tcp based protocol, is usually implemented as a strange mix between the tcp and http protocols

it is, generally speaking, not a good idea to mix abstraction levels; adding event-driven components to your application by combining twisted and django is a bad architectural decision. I strongly suggest you consider using twisted.web instead of mixing django and twisted.

websockets are a strange mix of protocols, and can be difficult to work with unless you are very careful with your choice of libraries and application design, scope and implementation. at the time of this post, i would recommend against using websockets, in production, with the standard deployment of twisted. i strongly urge you to consider the following alternatives, in rough order of likelihood to work for you:


grinding from static design image to dynamic ui layout

say you’ve been given a static ui design – a gif, jpg, pdf, or other format file – and asked to create a pixel perfect equivalent version, in your dynamic application.

so you take a look at the design, and go back to the ui generating code of your application, and then start making small, incremental changes, gradually moving your ui components closer to the design – without breaking any of the underlying implementation and code interactions.

  1. make a small change.
  2. preview the built application ui.
  3. visually compare to the static file provided by your designer.
  4. how close are you?

in a previous post, i promised you an example of how i optimize the last step in this process. here is the example, in recipe format:


  • Mac/OS X system
  • modifying the ui for an iPhone application
  • design provided as a pdf file


  • Delta between the current state of the application design, and the static pdf
  • Direction for the delta
  • Pixel-level accuracy
  • No budget for expensive, related, design tools
  • No time to develop time consuming, related, design skills


  • Gimp is a freely available graphic design/image manipulation tool, which runs smoothly on a mac. Install gimp
  • Gimp is scriptable – significant portions of the process can be automated. Automate as much of the process as possible, using script-fu (gimp’s macro functionality)
  • OS X comes with some powerful, built in, graphic manipulation components. Pay especially close attention to the built in components (eg: screen capture) and the included tools (eg: Digital Color Meter)

For my specific example:


Command-Shift-4, then space, then click a window: Take a screenshot of a window and save it as a file on the desktop

  • ran the following command:
bash: defaults write name "Screencap"

standardizing the file names generated by the screen capture utility.

Process for generating the UI:

  • Modify ui definition components in XCode.
  • Display the result in the iPhone emulator (tab over to the emulator)
  • Screen shot of the emulator, only (Command-Shift-4, then space, then click a window)
  • Tab to a command prompt open on desktop and run:
bash: /Applications/ -b '(script-fu-overlay "Design_file.pdf" "Screen Shot 2013-09-17 at 7.04.32 PM.png")'
  • Visually inspect the two super-imposed images.
  • Delete “Screen Shot 2013-09-17 at 7.04.32 PM.png”, and start over

note that i actually automated a bit more of the process – starting gimp and deleting the image afterwards is a simple shell script, for example. the pattern, and many of the steps, have come in handy for many other similar situations – most recently, when working on an automatic pdf report generator

A dictionary, a write_lock, and the GIL

In my previous post, as part of some Python refactoring, I found myself implementing a WriteProtectedDict – a wrapper around a Python dict, using a write_lock to make updates atomic. This triggered some warning bells.

Most people, when they speak of Python, will actually be speaking about CPython (I know I am). And current CPython implementations include a quirky component, called the Global Interpreter Lock (GIL).

Since the GIL is not a part of the language specification, it’s generally speaking unwise to rely on its existence. That said, it comes with some useful side effects, and, in practice, programmers ignore this bit of advice, and rely on the side effects more often than not.

For example, in a situation like the one I was faced with:

  • a shared dictionary
  • multiple, concurrent, threads accessing the dictionary
  • guaranteed single writes to each dictionary key
  • multiple concurrent reads from the dictionary

The GIL guarantees that, even though a race condition exists – multiple threads are writing to the same dictionary – each individual dictionary update is actually an atomic operation within the same process.
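To make that side effect concrete, here's a minimal sketch (not the tutorial's code) of several threads writing to a shared dict with no lock at all; under CPython's GIL, each individual assignment is atomic, so no updates are lost:

```python
import threading

shared = {}

def writer(thread_id):
    # each thread writes 1000 entries; under CPython's GIL, each
    # individual dict assignment is an atomic operation
    for i in range(1000):
        shared[(thread_id, i)] = thread_id

threads = [threading.Thread(target=writer, args=(t,)) for t in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# all 4000 updates made it in, without an explicit write_lock
assert len(shared) == 4000
```

Note that this only demonstrates that individual updates aren't lost; as discussed below, it says nothing about the *order* the entries end up in.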

So, while the write_lock is required when strictly adhering to the Python language specification, in practice it’s superfluous. Why, then, was I seeing behaviour, in my application, which made it look like the write operation was not atomic? The answer, if you’ve been reading the source, sits in one line of code:

 for published_at in sorted(self.messages)

Turns out that the code I was testing against, when I wrote the tutorial, was not sorting self.messages. The message update loop will always return the first new message it finds – effectively ignoring some messages that are stored in non-chronological order. As a side effect, all the longpolling chat clients will be missing messages in their chat history (in my basic tests, missing the same messages, to boot), a behaviour consistent with data being lost due to a race condition.

(I updated the original post to correct my error, leaving the related locking exercise intact)
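To see why the sort matters, here's a small standalone sketch of the update loop's logic (illustrative names, not the tutorial's exact code):

```python
# messages keyed by timestamp, inserted out of chronological order
messages = {3.0: 'third', 1.0: 'first', 2.0: 'second'}

def next_message(messages, lastupdate):
    # mirror the fixed update loop: walk timestamps in sorted order,
    # returning the oldest message the client hasn't seen yet
    for published_at in sorted(messages):
        if published_at > lastupdate:
            return published_at, messages[published_at]

# clients receive messages in chronological order; without sorted(),
# the loop could return 'third' first, after which the client's
# lastupdate would be 3.0, and 'first' and 'second' would be skipped
assert next_message(messages, -1) == (1.0, 'first')
assert next_message(messages, 1.0) == (2.0, 'second')
```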

So, what’s the appropriate course of action here? There are two schools of thought:

  • Since the write_lock isn’t strictly necessary, remove it. The GIL’s behaviour is confusing enough, and an unexpected lock can make the related code harder to read. Veteran Python programmers may even find the lock offensive. The first rule of Python is, one does not talk about the GIL
  • Optimizations which rely on implicit assumptions about the interpreter you’re running on are incredibly obfuscated, and should generally be avoided. Implement the lock, and explicitly optimize it away on systems with an appropriate GIL implementation.

For now, I went with the first option, and removed the write lock related code; I think that most Python programmers would probably do the same.

I reserve the right to change my mind, if I can figure out a practical implementation of option 2 (there’s no easy way I know of, to detect whether a GIL exists).
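For what it's worth, the closest practical approximation of option 2 I can sketch is keying off the interpreter's name – a heuristic, not a real GIL check (platform.python_implementation() only reports which interpreter is running, and says nothing about its locking):

```python
import platform
import threading

class NullLock(object):
    # no-op context manager, for interpreters where the GIL already
    # makes single dict operations atomic
    def __enter__(self):
        return self

    def __exit__(self, *exc_info):
        return False

def make_write_lock():
    # assumption: CPython implies a GIL; the language specification
    # makes no such guarantee
    if platform.python_implementation() == 'CPython':
        return NullLock()
    return threading.Lock()

write_lock = make_write_lock()
with write_lock:
    pass  # a guarded dictionary update would go here
```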

Some more discussion, in case what happened isn’t quite clear:

There’s definitely a race condition there – multiple chat clients might all be sending messages at roughly the same time. Without a write lock, they might interrupt each other, and not all of the updates would make it through, causing data loss. The GIL already locks the data on write, though, as a side-effect, guaranteeing that none of it is lost.

However, since the updates are concurrent, there’s no guarantee that messages are stored in the shared messages dictionary in the order they are received.

Imagine, for example, that two clients are sending a message at the same time. Each will be handled by a separate twisted thread, and both threads will try to execute the following line of code:

self.messages[float(time.time())] = args['new_message'][0]

In a single threaded system, this is what the Python interpreter will do, in order:

  • Step A: get the current time – float(time.time())
  • Step B: store the new_message in the dictionary, at the index retrieved in Step A.
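In code, the two steps look something like this (a simplified, single-threaded sketch of the tutorial's line):

```python
import time

messages = {}

def store_message(new_message, messages):
    # Step A: get the current time
    timestamp = float(time.time())
    # Step B: store the message at that timestamp; in a threaded
    # server, another thread can run between these two steps
    messages[timestamp] = new_message
    return timestamp

t = store_message('hello', messages)
assert messages[t] == 'hello'
```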

When there are two (or more) threads, each handling a different message, there’s no telling what order they’ll be going through this part of the code in. They might follow each other, like so:

Thread 1: Step A - get timestamp1
Thread 1: Step B - store message1 at timestamp1
Thread 2: Step A - get timestamp2
Thread 2: Step B - store message2 at timestamp2

Or go in reversed order:

Thread 2: Step A - get timestamp1
Thread 2: Step B - store message2 at timestamp1
Thread 1: Step A - get timestamp2
Thread 1: Step B - store message1 at timestamp2

leading to a dictionary which might look like this:

{timestamp1: "message1", timestamp2: "message2"}

or like this:

{timestamp1: "message2", timestamp2: "message1"}

Or, the threads might execute in a mixed up order like so:

Thread 1: Step A - get timestamp1
Thread 2: Step A - get timestamp2
Thread 2: Step B - store message2 at timestamp2
Thread 1: Step B - store message1 at timestamp1

leading to a dictionary which looks like this:

{timestamp2: "message2", timestamp1: "message1"}

Where timestamp1 < timestamp2. As a result, message2 is seen by the message update loop and sent out before message1. Chat clients are then updated to think they have all the messages up to timestamp2, and never request to be updated with message1.

The lack of a sort there is more than just a concurrency related problem – dictionary datastructures explicitly do not guarantee that elements will be iterated over in sorted key order – keys can, over time, change location within the datastructure. Even if the messages are inserted in order by their received time, an iteration over the dictionary can return them out of order (dictionary implementations rely on this freedom for various optimizations).

A friend recommended reading Python in Practice, to help me be more Pythonic when I approach these kinds of problems.

Refactoring a simple, sample, Twisted file

[a somewhat rambling description of the process, techniques and some of the thinking behind a basic refactoring of a Python/Twisted program, with Python neophytes in mind]

[original tutorial] [messy source code] [refactored source code]

I’m looking at you, You’re part of the tutorial I posted last week, and are rather in need of some cleanup.

What to refactor?

If you haven’t yet, take a look at You’ll see that it’s quite long, somewhat messy, and that there are distinct bits of code there which are self contained (the WebsocketChat class, for example). There’s opportunity for some cleanup.

When seeing this file, I want to do three things:

  • Move all of the self contained chunks of code to separate files
  • Rearrange the code in this file, and any new files, to make it slightly more readable (for example, bring it closer to the PEP-8 standard)
  • Keep an eye out for any other small, opportunistic, code clean-up that might make sense to include

There are three interconnected classes, with fairly separate functions, and fairly readable code calling them. A first cut at a refactoring might be to move each of the classes (WebSocketChat, ChatFactory, HttpChat) into a separate module/file. So, let’s start there.

Where should Python modules go?

Technically, the answer to this question can be quite complicated. The Python module system is complex, powerful, and can handle some fairly fancy bits of organization. It might be fun to write about the details of what can be done some other time. For this post, though, I’ll only review enough to explain the refactoring I’m making.

We probably want to create some modules, to store the code we’re factoring out of the main program. We also want Python to be able to find our modules when we import them – and Python has some conventions and built in assumptions, to help make this work as easy as possible.

If you read the relevant documentation, you’ll see that, unless you go to the trouble of configuring it to do something else, Python will try and find any modules you import in the following locations:

  1. a disk location storing built in and system defined modules
  2. any modules in the directory of the currently running script
  3. modules installed somewhere else in the PYTHONPATH (where you might, for example, find non core libraries and other related tools)

For our case, 2. is probably the relevant choice – so we’ll go ahead and refactor the code into module files, stored in a subdirectory in our project.

Most often, any work and refactoring done on a Python project is restricted to the working directory of the currently running script (2. above). It’s fairly unusual to work on more than one project at a time, but common enough that most people will find themselves doing it – in which case 3. might apply – you might find yourself refactoring code into a module that’s from a project external to the one the code is a part of (say, if you’re working on a test framework for one of your projects). It’s even more unusual to worry about modifying or creating system defined modules – at best, you might run into 1. when switching between different versions of Python for compatibility and other tests.
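A quick way to see the search order Python actually uses is to inspect sys.path (for a script, its own directory – option 2. above – normally shows up at the front of the list):

```python
import sys

# sys.path holds the module search locations, in the order Python
# consults them when resolving an import
for entry in sys.path:
    print(entry)
```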


First, we’ll want a namespace for storing and referencing the classes we’re refactoring. “twisted_chat” seems a good enough namespace.

There’s a simple, 2 step, convention for defining a namespace in python – first, create a directory, and then place an file in it.

bash: cd [directory where the django_twisted_chat files are checked out]
bash: mkdir twisted_chat
bash: touch twisted_chat/ can remain empty for now. It gets executed as part of the module import/initialization process, and can, if needed, do some pretty powerful bits of manipulation. At this stage in the project’s life, it’s not required to do much, though, so I’ll ignore it for now. (We’ll almost certainly want to modify it before sending this project to a production server, though. Probably to add some code to handle module level logging and maybe some other module initialization code.)
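As a sketch of the kind of initialization hinted at above – module level logging, say – something like this could eventually live in the package's init file (names illustrative):

```python
# twisted_chat/ (sketch)
import logging

# give the package its own logger; applications that import
# twisted_chat can configure or silence it as they see fit
log = logging.getLogger('twisted_chat')
log.addHandler(logging.NullHandler())
```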

Modules are one of the standard kinds of namespaces used by Python, and are almost always represented as directories on disk. Python source files also function as namespaces – and, since we need to place our classes somewhere, let’s create some source files. In the twisted_chat directory, create three files:

touch twisted_chat/
touch twisted_chat/
touch twisted_chat/

Now, to the fun bit: move the three classes and all related, relevant code into each of the files. WebSocketChat into, ChatFactory into, and HttpChat into

As an aside, some programmers might disagree with this specific choice of organization. The three classes are relatively small, and separating them into three files like that seems a bit wasteful. Also, there are some fairly intimate interdependences between them, especially considering the shared lock and the shared messages dictionary. Some programmers might wait until there are a few more classes before splitting them out like this (and maybe, for now, they might store all three classes in just one file).

The most difficult part of moving the three classes out is figuring out what the “relevant code” bit is, especially when looking at longer, or more complex chunks of code. In our specific case, the extra relevant code is mostly import statements, something a Python programmer can probably do by just reading the source code.

For more complex classes, though, you might try to use an iterative approach. For example, you can move the smallest possible chunk of self contained source code (maybe just a subset of a class) into a separate file, import the new file back into the original program, and then test your program. Each time an import error shows up, correct it, until you run out of errors. This is especially easy to do if you have a solid set of tests to run against your code once the refactoring is complete – the more complex the work, the likelier it is that you’ll introduce errors, and you won’t find them without good test coverage. Once all of your tests pass, move another small bit of code out, testing thoroughly, and repeating until the entire complex section, or class, is factored out.

As you factor out the three classes, you’ll quickly lose the reference to the shared write_lock. The class signatures will need some editing to account for this – so, change the classes as you move them, so that they take a write_lock argument when they’re initialized. Pass the lock around from one class to the other, as needed. If you’ve never refactored Python code before, it’s a worthwhile exercise to do this work now. If you’re comfortable with refactoring, read the opportunistic cleanup section at the bottom of this post, for an additional bit of refactoring you can combine into your work, saving you some time in the longterm.

As you go along, you’ll notice that the file will no longer depend on some of the import statements you copy over – just remove them.

Also, don’t forget to import the newly created module and source files, as necessary. Your import statements  in will likely look like this:

from twisted_chat.factories import ChatFactory
from twisted_chat.resources import HttpChat

To make sure that you’ve completed the refactoring without breaking anything, try running the chat server:

bash: python runserver & twistd -n -y &

then connect to it and make sure that you can still send chat messages properly:

Opportunistic code cleanup

Earlier in this post, I mentioned making code more readable as a goal of this refactoring.

If you’re new to Python, you might not be aware of PEP-8 – a set of guidelines, describing a standard way to format and write Python code. If you don’t know about it, it’s worth reading. Since we’re doing all of this refactoring work, it’s worth seeing if the source can be made more readable, and also brought closer to the PEP-8 standard.

I also mentioned that I’ll keep an eye out for other refactoring work that might fit in with what we’re doing. Passing the write_lock around, you might have noticed, can be painful. So it’s probably worthwhile to look into refactoring our code, to find a way to avoid that work.

A couple of hints present themselves. The write_lock is exclusively used with the shared messages dictionary, and the dictionary is already shared between all of the relevant structures. What if we were to attach the lock to the shared dictionary?

You can try doing this on your own:

  • create a subclass of the standard dictionary structure
  • find a way to pass the write_lock in to the structure’s initializer
  • override the two methods which can be used to modify the dictionary:
class WriteProtectedDict(dict):
    def __init__(self, write_lock):
    def __setitem__(self, key, value):
    def __delitem__(self, key):

Have you succeeded? What’s your code look like? Take a look at the django_twisted_chat git repository – this version contains all of the work from this post (the twisted_chat module and are the relevant bits). You can download it and read it locally if you want:

git clone demo_source
cd demo_source/django_twisted_chat
git reset --hard 5d1a8e5a448f86ce6da6425754f14e00bb00e9b8
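If you'd rather check your work without digging through the repository, here's one possible shape for the solution (a sketch; the repository version may differ in details):

```python
import threading

class WriteProtectedDict(dict):
    def __init__(self, write_lock):
        dict.__init__(self)
        self.write_lock = write_lock

    def __setitem__(self, key, value):
        # serialize all writes through the shared lock
        with self.write_lock:
            dict.__setitem__(self, key, value)

    def __delitem__(self, key):
        with self.write_lock:
            dict.__delitem__(self, key)

messages = WriteProtectedDict(threading.Lock())
messages[1.0] = 'hello'
assert messages[1.0] == 'hello'
```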


Wait, what? I put a write_lock on a system data structure… in Python!? OK. That’s… weird…  I smell a bug.

To be continued



if you’re learning how to refactor, or are just looking for a second opinion, you might want to check out
the python questions on the codereview stackexchange site

Chat with Django, Twisted and websockets – addendum 1

[Part 1] - [Part 2] - [Part 3] - [Addendums] - [Source]
[Table of Contents]

The simplest (to write) solution to the data update race condition I refer to at the end of step 3 in the tutorial, is to use a lock to guard your writes. It’s not necessarily the correct one to use on a production server, but should get the tutorial working properly. To implement it, create a lock:

write_lock = thread.allocate_lock() #forcing synchronized writes between the various kinds of clients

And then use it to guard all of the updates to shared data-structures (messages in the tutorial); for example:

        with write_lock:
            self.factory.messages[float(time.time())] = data

        with write_lock:
            self.wsFactory.messages = self.messages
        #and so on throughout the file.

You can get a version of the code for this tutorial, including this fix, from here (or by running the following at the command line):

bash: git clone
bash: git checkout tags/v0.1.3


This solution is simple, but, at production levels, might cost you performance on your twisted server. Adding blocking components is generally a bad idea, from a scalability perspective. Odds are that, eventually, you’ll find yourself pushing this kind of synchronization work down into your data store (you might want to look into using something like Redis or MongoDB, and a related non-blocking twisted client library), and probably writing all sorts of fun time based queries to get at it (or maybe use an advanced tool like datomic)

TUTORIAL: Real-time chat with Django, Twisted and WebSockets – Part 3

[Part 1] - [Part 2] - [Part 3] - [Addendums] - [Source]
[Table of Contents]


You might wonder – why add a long-polling client, connecting to an http service, to a functional websocket client-server implementation? The quick answer:

  • A lot of people seem to want to know how to implement long polling clients against a twisted web server
  • It’s a useful skill to have in your toolbelt
  • The interaction between a blocking service (ie, one handling http requests, where long-polling might be used to emulate a continuous connection) and a non blocking service (ie, the websocket chat protocol) can be interesting to get right.
  • Websockets are the future, but not (yet) the now

Django components

We’ll reuse the server we finished in Part 2, and have it serve a chat room interface which connects to our server with an http connection, relying on long-polling to emulate a continuous, “real-time” connection.

First, add an entry to your chat/ file, telling Django where we want it to serve our view:

urlpatterns = patterns('',
        url(r'^$', views.index, name='index'),
        url(r'^(?P<chat_room_id>\d+)/$', views.chat_room, name='chat_room'),
        url(r'^long_poll/(?P<chat_room_id>\d+)/$', views.longpoll_chat_room, name='longpoll_chat_room'),
)

Next, create the relevant view in chat/

#imports shown for completeness; assuming the app is named chat, as in earlier parts
from django.shortcuts import render, get_object_or_404
from chat.models import ChatRoom

def longpoll_chat_room(request, chat_room_id):
  chat = get_object_or_404(ChatRoom, pk=chat_room_id)
  return render(request, 'chats/longpoll_chat_room.html', {'chat': chat})

You can see from the view definition, that we’re going to be using a template (since we’re passing a template reference into the render call), and, since the template reference is a path – ‘chats/longpoll_chat_room.html’ – that’s where Django is going to expect it to be on disk. So, go ahead and open up chat/templates/chats/longpoll_chat_room.html, and write the following chunk in it:

{% load staticfiles %}
<h1>{{ }}</h1>
<div id="message_list"><ul></ul></div>

You’ll notice that this file is almost identical to the websocket chat. The reference to the graceful.webSocket library isn’t here, since it’s not needed, and all of the javascript client connection work is now going to be in a separate file called long_poll.js (instead of inside of a <script></script> tag like the chat_room.html implementation).

So, let’s create the javascript client, then. Edit chat/static/long_poll.js file, and write the following code into it:

//Numeric representation of the last time we received a message from the server
var lastupdate = -1;

window.onload = function() {
    var inputBox = document.getElementById("inputbox");

    inputBox.addEventListener("keydown", function(e) {
      if (!e) { var e = window.event; }

      if (e.keyCode == 13) {
        e.preventDefault(); // sometimes useful
        postData(inputBox.value);
        inputBox.value = "";
      }
    }, false);

    // start the long-polling loop
    getData();
};

var getData = function() {
    $.ajax({
        type: "GET",
        // set the destination for the query
        url: ''+lastupdate+'&callback=?',
        dataType: 'jsonp',
        // needs to be set to true to avoid browser loading icons
        async: true,
        cache: false,
        // give the server up to a second to respond
        timeout: 1000,
        // process a successful response
        success: function(response) {
            // append the message list with the new message
            var message =;
            $("#message_list ul").append("<li>" + message + "</li>");
            // set lastupdate
            lastupdate = response.timestamp;
        },
        // poll again as soon as this request completes (or times out)
        complete: function() { getData(); }
    });
};

var postData = function(data) {
    $.ajax({
        type: "POST",
        // set the destination for the query
        url: '',
        data: {new_message: data},
        // needs to be set to true to avoid browser loading icons
        async: true,
        cache: false
    });
};
There’s a lot going on here.

We first add an almost-identical looking event listener to the one in the websocket based chat, to our input box, telling it to send messages when a user presses return/enter.

We then define two functions – getData and postData – which handle the actual communication with the chat server.

postData is the simpler of the two – it uses functionality defined by jQuery ($.ajax) to build, and then send, a post request to our chat server, with the contents of a message as the only argument. You can read the documentation for that command to learn more about how it does what it does. Note that we labeled sent information as  “new_message” – the server-side api component is going to have to unpack that by correctly referring to new_message when it’s received.

The more complex function is the one relying on long-polling to simulate real-time communication – getData. We’re doing a few interesting things here: First, we set the dataType to ‘jsonp’. This is necessary, since the javascript file is served by Django on one port (8000), and the chat interface is served by twisted on another (1025). When successful (when the server responds with a new message for us), we perform the same basic function as the websockets message receive function did – we add the message to our chat room.

The “long-polling” component is implemented by setting a function to execute on “complete”, and setting the timeout variable. With a timeout of 1000, we’re instructing jQuery to give the server at most 1 second to respond to our call. If either the server responds, or at least 1 second has gone by, jQuery will terminate the request and call the complete function.

This completes the long-polling loop: once every second, we open a server connection asking it “do you have any new messages for me?”, handling any messages as they come.

For a production level client, 1 second is probably not appropriate – we’d probably want an exponential backoff, to be more efficient in network use. For now, this’ll do though.

Now we have a functional long-polling chat client. You can test it by starting Django if it’s not yet running, and opening up one of your chat rooms like so:

Since the chat-server components aren’t implemented yet, you might see javascript connection errors in your browser’s console, and actual message sending/receiving won’t (quite) work. So, let’s fix that:

Twisted based Blocking (http) chat server

We’re going to perform some delicate bits of surgery on the existing twisted chat server, to add in a second, blocking, http-based, chat protocol. We’ll also want the two protocols to share data – so that they’ll provide a single set of chat rooms for people to connect to.

So. Open up your twisted server file ( and edit it to add the following to the top:

from twisted.web.websockets import WebSocketsResource, WebSocketsProtocol, lookupProtocolForFactory

import time, datetime, json, thread
from twisted.web.resource import Resource
from twisted.internet import task
from twisted.web.server import NOT_DONE_YET

These are references to the libraries and functions we’re going to be using. Next, we’ll define the chat protocol. Insert the following chunk into the file (replacing the existing ChatFactory definition):

from twisted.internet.protocol import Factory
class ChatFactory(Factory):
    protocol = WebsocketChat
    clients = []
    messages = {}

class HttpChat(Resource):
    isLeaf = True
    def __init__(self):
        # throttle in seconds to check app for new data
        self.throttle = 1
        # define a list to store client requests
        self.delayed_requests = []
        self.messages = {}

        #instantiate a ChatFactory, for generating the websocket protocols
        self.wsFactory = ChatFactory()

        # setup a loop to process delayed requests
        # not strictly necessary, but a useful optimization,
        # since it can force dropped connections to close, etc...
        loopingCall = task.LoopingCall(self.processDelayedRequests)
        loopingCall.start(self.throttle, False)

        #share the list of messages between the factories of the two protocols
        self.wsFactory.messages = self.messages
        # initialize parent
        Resource.__init__(self)

    def render_POST(self, request):
        request.setHeader('Content-Type', 'application/json')
        args = request.args
        if 'new_message' in args:
            self.messages[float(time.time())] = args['new_message'][0]
            if len(self.wsFactory.clients) > 0:
        return ''

    def render_GET(self, request):
        request.setHeader('Content-Type', 'application/json')
        args = request.args

        if 'callback' in args:
            request.jsonpcallback =  args['callback'][0]

        if 'lastupdate' in args:
            request.lastupdate = float(args['lastupdate'][0])
        else:
            request.lastupdate = 0.0

        if request.lastupdate < 0:
            return self.__format_response(request, 1, "connected...", timestamp=0.0)

        #get the next message for this user
        data = self.getData(request)

        if data:
            return self.__format_response(request, 1, data.message, timestamp=data.published_at)

        #no new data yet; hold on to the request until some arrives
        self.delayed_requests.append(request)
        return NOT_DONE_YET

    #returns the next sequential message,
    #and the time it was received at
    def getData(self, request):
        for published_at in sorted(self.messages):
            if published_at > request.lastupdate:
                return type('obj', (object,), {'published_at' : published_at, "message": self.messages[published_at]})()

    def processDelayedRequests(self):
        #iterate over a copy of the list, since we remove entries as we go
        for request in self.delayed_requests[:]:
            data = self.getData(request)

            if data:
                try:
                    request.write(self.__format_response(request, 1, data.message, data.published_at))
                    request.finish()
                except:
                    print 'connection lost before complete.'
                finally:
                    self.delayed_requests.remove(request)

    #the default timestamp is computed inside the function body --
    #a default argument would be evaluated only once, at definition time
    def __format_response(self, request, status, data, timestamp=None):
        if timestamp is None:
            timestamp = float(time.time())
        response = json.dumps({'status':status,'timestamp': timestamp, 'data':data})

        if hasattr(request, 'jsonpcallback'):
            return request.jsonpcallback+'('+response+')'
        return response

There is a lot going on here.

Let’s break it down some. We’ve added a “messages” structure to ChatFactory(); this structure is going to function as a shared repository for all of the messages we receive over both protocols – so that users connecting via either protocol see the same contents for a chat room.
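The sharing works because both factories hold a reference to one and the same dict, keyed by the float timestamp each message arrived at. A minimal stdlib sketch of just that mechanism (the stub class names here are invented for illustration, standing in for ChatFactory and HttpChat):

```python
import time

class WsFactoryStub(object):
    """Stands in for ChatFactory: will be handed a shared dict."""
    messages = {}

class HttpChatStub(object):
    """Stands in for HttpChat: owns the dict and shares it."""
    def __init__(self):
        self.messages = {}
        self.wsFactory = WsFactoryStub()
        # share one dict between both protocols
        self.wsFactory.messages = self.messages

chat = HttpChatStub()
# a message received over HTTP...
chat.messages[float(time.time())] = 'hello'
# ...is visible to the websocket side, since both names point at one dict
assert chat.wsFactory.messages is chat.messages
```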

Beyond the standard setup, the initialization function (__init__(self)) instantiates a ChatFactory(), and creates a reference to its messages, so that the two protocols now access the same list of messages, and effectively share a chat room. We also have the initialization function start a loop that runs the processDelayedRequests function once a second. This is not strictly necessary for sending out messages – as you’ll see when you read render_GET – but it helps optimize the use of server resources, since, besides sending out messages as quickly as possible after they’re received, it also has the side effect of freeing up resources dedicated to dropped connections.
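For intuition: task.LoopingCall(f).start(interval, now) asks the reactor to call f every interval seconds, and passing False as the second argument skips the immediate first call, so the first tick lands one full interval out. A rough stdlib analogue of that scheduling behavior, using sched (Twisted does this on the reactor, without blocking; this is just a sketch of the timing):

```python
import sched
import time

calls = []
clock = sched.scheduler(time.time, time.sleep)

def process():
    # stands in for processDelayedRequests
    calls.append(time.time())

# schedule three ticks, 0.01s apart; LoopingCall does this indefinitely,
# and start(interval, False) means the first tick is one interval away
for i in range(1, 4):
    clock.enter(0.01 * i, 1, process)
clock.run()
assert len(calls) == 3
```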

We define a render_POST function. The function name conforms to twisted conventions – twisted will attempt to call a function by this name every time an HTTP POST request comes in. Since, for now, only message sends use POST, we assume that we’re receiving a message, and go ahead and process it.
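One detail worth calling out: the request.args attribute twisted hands you is a plain dict mapping each parameter name to a *list* of values, because a parameter may legally repeat in a form body or query string. That’s why the code indexes args['new_message'][0]. The lookup pattern in isolation (the helper name is invented here):

```python
# the shape twisted.web gives you: each value is a list,
# because a parameter may appear more than once
args = {'new_message': ['hi there'], 'callback': ['jsonp123']}

def first(args, name, default=None):
    """Return the first value for a parameter, or a default."""
    values = args.get(name)
    return values[0] if values else default

assert first(args, 'new_message') == 'hi there'
assert first(args, 'missing', 'n/a') == 'n/a'
```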

First, we add a message to our list of messages. Then, we send the message out to all of the websocket based clients by calling the (soon to be implemented) updateClients method on the first websocket client we can find.

Finally, we call processDelayedRequests, to update any waiting httpclients with the new message.

We also define a render_GET function. This function responds to requests for new messages. Since the initial request is going to have a lastupdate time of -1 (this is hard-coded in the long_polling.js client), we check if the lastupdate is below 0, and, if it is, we send out a message to let the user know he’s connected, and to request updates at time 0 or higher.
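That handshake can be expressed as a pure function: a negative lastupdate means “first contact”, so we answer with a connected notice stamped at 0.0, which makes the client ask for messages newer than 0.0 on its next request. A sketch of just that decision (a stand-in for the render_GET/__format_response pair, not the tutorial code itself):

```python
import json

def handshake_response(lastupdate):
    """A negative lastupdate marks a client's first request:
    greet it, and reset its clock to 0.0."""
    if lastupdate < 0:
        return json.dumps(
            {'status': 1, 'timestamp': 0.0, 'data': 'connected...'})
    return None  # not a first request; fall through to getData

reply = handshake_response(-1.0)
assert json.loads(reply)['data'] == 'connected...'
assert handshake_response(5.0) is None
```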

We then check to see if there’s any data waiting, which this user should see – the getData function gets the next message that this user should see in his chat room; if there is a message for this user to see, we send it out, together with the time it was received at (so the user knows to ask for the next message in the sequence next time).

This creates a request-loop, with the user requesting each message, one by one, until he’s up to date with the chat. Note that in a production application, you’ll probably want to send messages back to the user in batches (since creating/closing http connections is an inefficient use of network and server resources).
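One way to do that batching would be to return every message newer than lastupdate in a single payload, stamped with the newest timestamp so the client can resume from there. A sketch of the idea (this is a suggested variation, not part of the tutorial code):

```python
def get_batch(messages, lastupdate):
    """Return (batch, newest_timestamp) for all messages after lastupdate."""
    pending = sorted(t for t in messages if t > lastupdate)
    if not pending:
        return [], lastupdate
    return [messages[t] for t in pending], pending[-1]

messages = {1.0: 'first', 2.0: 'second', 3.0: 'third'}
batch, newest = get_batch(messages, 1.0)
assert batch == ['second', 'third']
assert newest == 3.0  # the client asks for messages after 3.0 next time
```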

Once we’ve run out of messages to send out to the user, we append the request to a list of clients waiting for an update, and use the twisted shorthand NOT_DONE_YET to ensure that the connection is not closed when the render_GET function returns (twisted, by default, closes the http connection if we return any other value from this function). processDelayedRequests performs much the same function as the render_GET function, only it performs it for requests currently waiting for an update.
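Stripped of the twisted machinery, the delayed-request queue is a simple pattern: on each tick, serve every parked request that now has data, and keep the rest waiting. A sketch, modeling each parked request by just its lastupdate timestamp (invented helper, for illustration only):

```python
def process_delayed(delayed, messages):
    """Serve waiting requests that now have data; keep the rest parked."""
    served = []
    still_waiting = []
    for lastupdate in delayed:
        newer = sorted(t for t in messages if t > lastupdate)
        if newer:
            # the real code writes the response and calls request.finish()
            served.append((lastupdate, messages[newer[0]]))
        else:
            still_waiting.append(lastupdate)
    return served, still_waiting

served, waiting = process_delayed([1.0, 3.0], {2.0: 'hi'})
assert served == [(1.0, 'hi')]
assert waiting == [3.0]
```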

Once a message is sent out, the related connection is closed with a request.finish() function call, and all server resources allocated to it are freed as a side effect of removing it from the delayed_requests list. getData and __format_response are helper functions, which are fairly readable. Note that getData is dynamically creating/instantiating a python object from constructed text (the syntax is a bit weird, but I like being able to do this in python, so I take any excuse to teach people that it’s possible).
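If that one-liner in getData looks opaque: the three-argument form of type() builds a class on the fly – type(name, bases, attrs) returns a new class, and the trailing () instantiates it. The same trick in isolation:

```python
# type(name, bases, attrs) creates a class; calling it makes an instance
Point = type('Point', (object,), {'x': 1, 'y': 2})
p = Point()
assert p.x == 1 and p.y == 2

# the tutorial's one-liner, with sample values plugged in:
msg = type('obj', (object,), {'published_at': 2.0, 'message': 'hi'})()
assert msg.message == 'hi'
assert msg.published_at == 2.0
```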

We should probably also update our websocket based chat protocol, and have it send messages out to any of the http/blocking based clients. To do that, replace the existing dataReceived function with the following two (we’re factoring updateClients out of dataReceived to make it easier to call it from the POST function we wrote above):

    def dataReceived(self, data):
        self.factory.messages[float(time.time())] = data
        self.updateClients(data)

    def updateClients(self, data):
        #broadcast the payload to every connected websocket client
        for c in self.factory.clients:
            c.transport.write(data)
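The broadcast amounts to “write the payload to every connected protocol’s transport”. A stdlib sketch of the pattern, with a fake transport standing in for twisted’s (FakeTransport and FakeClient are invented for illustration):

```python
class FakeTransport(object):
    """Stands in for a Twisted transport: records what gets written."""
    def __init__(self):
        self.written = []
    def write(self, data):
        self.written.append(data)

class FakeClient(object):
    """Stands in for a connected websocket protocol instance."""
    def __init__(self):
        self.transport = FakeTransport()

def update_clients(clients, data):
    # send the payload to every connected client
    for c in clients:
        c.transport.write(data)

clients = [FakeClient(), FakeClient()]
update_clients(clients, 'hello room')
assert all(c.transport.written == ['hello room'] for c in clients)
```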

Next, we’ll have to tell twisted that we’re now running two resources, on two different ports, and give it some directions on how to construct its infrastructure for supporting them. We’ll do that by replacing the last bit in the file with the following:

#resource = WebSocketsResource(lookupProtocolForFactory(ChatFactory())) #this line can be removed

from twisted.web.resource import Resource
from twisted.web.server import Site

from twisted.internet import protocol
from twisted.application import service, internet

resource = HttpChat()
factory = Site(resource)
ws_resource = WebSocketsResource(lookupProtocolForFactory(resource.wsFactory))
root = Resource()
root.putChild("",resource) #the http protocol is up at /
root.putChild("ws",ws_resource) #the websocket protocol is at /ws
application = service.Application("chatserver")
internet.TCPServer(1025, Site(root)).setServiceParent(application)
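For intuition on the putChild calls: twisted.web dispatches requests by walking a tree of resources keyed by path segment, and the empty-string child is the root page itself. A toy model of that dispatch, using a plain dict (this is an illustration of the idea, not Twisted’s actual implementation):

```python
# a toy model of twisted.web's resource tree:
# each child is keyed by one path segment
root = {}
root[''] = 'HttpChat'               # served at /
root['ws'] = 'WebSocketsResource'   # served at /ws

def lookup(tree, path):
    """Resolve a request path to a resource name by its first segment."""
    segment = path.lstrip('/').split('/')[0]
    return tree.get(segment)

assert lookup(root, '/') == 'HttpChat'
assert lookup(root, '/ws') == 'WebSocketsResource'
```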

We now have the infrastructure for a twisted webserver running two chat protocols – one http based, at the root (/), and another, websocket based, at /ws. Let’s test and see if things work! Restart twisted if it’s running, and make sure Django is still up (if not, start it):

bash: python manage.py runserver &
bash: twistd -n -y

Once that’s done, in a flurry of keystrokes, open up two browser windows on one of your long_polling chat rooms, and another two windows on the websocket client for the same room – and chat away!

Source code

I’ve posted the source code for this tutorial to a git repository. Get the version up to this point here, or by running the following at a command line (explore the git repo for other work I’ve done beyond this tutorial):

git clone
git checkout tags/v0.1.2

What next?
You might notice a few things still need doing, for this chat room system to work – you can try to implement fixes yourself. In rough increasing order of difficulty:

  • try creating two chat rooms, and posting some messages into each. What happens? Warning: when trying to fix this problem, avoid trying to get the twisted and the Django server to communicate directly.
  • there are delays in some of the updates to the long_poll version of the chat rooms. Since things are running locally, the delays don’t have to be as long as they are (some of them don’t really have to be there at all!). As an exercise, try removing/reducing the delays
  • the websocket chat rooms don’t update chat history on disconnect (if you close a window and you open it, you won’t get back-chat history). If you want a fun next exercise, try adding that in!
  • messages don’t persist, if the server goes down (and take up more and more memory the longer the server is up for!). Try modifying your code to write messages to the database, as they are received, and only store a limited (fixed) number of them in memory at any point in time
  • if you send a lot of messages, quickly, in a websocket client, they won’t all make it over to the http/long polling clients [correction: it’s almost certain you won’t notice messages being lost, on account of the GIL]. Oh no! There’s a race condition somewhere in the system. Fix this bug! (solution)