Debugging Go 1.2 on Ubuntu 13.10 with GDB

While the Debugging Go Code with GDB documentation is fantastic, it unfortunately doesn’t work out of the box with Ubuntu 13.10. If you try it you’ll probably see an error similar to:

The problem is that Ubuntu 13.10 links GDB against Python 3.3, while the runtime-gdb.py script Go ships is written for Python 2. Someone has already filed an issue, and it appears to be fixed upstream (so expect Go 1.3 to Just Work).

Luckily backporting the fix is easy. Simply move your existing runtime-gdb.py out of the way and download the upstream version in its place.

If your $GOROOT is /usr/local/go the following should Just Work:


MmStats in Scripts

MmStats is a library I created to expose and read statistics, metrics, and debugging information from running Python processes without the overhead of syscalls (e.g. writing to a socket or file) or threads, and to let as many utilities as you want read those metrics without affecting the performance of the process exposing them.

I released 0.7 today to ease integration into multithreaded apps, but it made me realize a simpler tutorial would probably be helpful.

While I had web apps, job consumers, and other long running daemons in mind when I wrote mmstats, it turns out it’s also excellent for long running scripts.

You know the scripts: maintenance scripts, “fixer” scripts, slow build or deployment scripts, data migration scripts, etc.

If you’re like me, you always forget 2 things every time you write and run one of these scripts:

  1. Run it in screen
  2. Periodic progress output

Luckily for #1 there’s already disown.

For #2 we need an example script. Let’s pretend you have a Django app with users and you need to update their email addresses in a different system with something like this:

import otherdb
from django.contrib.auth import models
 
for user in models.User.objects.all():
    otherdb.update(user.username, email=user.email)

After forgetting to run it in screen, I’d restart it … and sit there … staring at my terminal … hating myself for not having it output anything.

But then these scripts never work the first time, so it’d probably die in flames on the first user without an email, or some similar exceptional condition I forgot to take into account.

So on my second attempt I’d probably quickly try to cobble together some progress indicator:

import otherdb
from django.contrib.auth import models
 
BATCH = ...
 
for i, user in enumerate(models.User.objects.all()):
    if i % BATCH == 0:
        print '{0} done'.format(i)
 
    # Only update users who have emails! Otherwise otherdb dies.
    if user.email:
        otherdb.update(user.username, email=user.email)

But what should BATCH be? If I have 10,000 users, BATCH = 1000 seems reasonable, but what if otherdb is really slow? In that case a smaller batch like 100 or 50 might be appropriate, so I don’t have to wonder whether otherdb has become unresponsive.

The best option is to always have your precise progress available on demand.

Using MmStats in Scripts

I’ve found mmstats fits this use case beautifully. No more guessing at what might be an appropriate batch size or using the wrong format string in an uncommon case and crashing my script halfway through.

Integrating mmstats is as easy as:

import time
import mmstats
import otherdb
from django.contrib.auth import models
 
# Define your stats in a model
class S(mmstats.MmStats):
    done = mmstats.CounterField(label="done")
    missing_email = mmstats.CounterField(label="missing_email")
    otherdb_timer = mmstats.TimerField(label="otherdb_timer")
    last_user = mmstats.StringField(label="user")
 
# Instantiate the stats model
stats = S(filename="update-emails-{0}.mmstats".format(time.time()), path=".")
 
for i, user in enumerate(models.User.objects.all()):
    # Update the username for readers to see
    stats.last_user = user.username
 
    # Only update users who have emails! Otherwise otherdb dies.
    if user.email:
        with stats.otherdb_timer:
            # Actually do the migration work
            otherdb.update(user.username, email=user.email)
    else:
        stats.missing_email.inc()
 
    # Increment the done counter to show another user has been processed
    stats.done.inc()

That’s it! Now just re-run in screen, pop back into a shell and check on the progress with slurpstats:

schmichael@prod9000:~$ slurpstats *.mmstats
==> ./update-emails-1234567890.mmstats
  done               113
  missing_email      12
  otherdb_timer      0.3601293582
  user               rob
  sys.created        1346884490.7
  sys.pid            10298
  sys.gid            549
  ...

This output would indicate 113 users have been checked, 12 of them had no email, “rob” is the current user being processed, and that otherdb.update(...) takes on average 360ms to complete. By default timers average the last 100 values, but that’s customizable via the size keyword argument.
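
For example, to average over a larger window you could set the size when declaring the field. A sketch, reusing the TimerField from the model above:

class S(mmstats.MmStats):
    # average the last 500 samples instead of the default 100
    otherdb_timer = mmstats.TimerField(label="otherdb_timer", size=500)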

That’s nice and all, but it’d be more fun to see how many users were updated per second. pollstats is a simple tool for doing just that:

schmichael@prod9000:~$ pollstats done,missing_email *.mmstats
       done         |      missing_email
                213 |                 20
                  3 |                  0
                  5 |                  1
                  1 |                  0
...

pollstats will print out the current value of the given counters initially, and then once per second print the delta. So in our contrived example we’d be processing somewhere between 1 and 5 users per second and less than 1 missing email per second.
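
The core loop is trivial to approximate (a sketch, not pollstats itself; read_counter is a stand-in for reading a counter out of an .mmstats file):

import time

def poll(read_counter, interval=1.0):
    # Print the counter's current value, then its per-interval delta
    last = read_counter()
    print last
    while True:
        time.sleep(interval)
        now = read_counter()
        print now - last
        last = now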

Sadly pollstats is extremely simplistic at the moment and lacks the ability to intelligently display non-counter fields. (Patches welcome!)

Even better: if your script dies, the mmstats file will be left behind for you to inspect. (Although if you want it perfectly in sync you should probably stats.flush() on each iteration.)
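
A sketch of that, tacked onto the bottom of the loop from the example above:

for i, user in enumerate(models.User.objects.all()):
    # ... all of the work from the example above ...
    stats.flush()  # sync the mmap so the on-disk file reflects the latest values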

mmstats is still young (pre-1.0 for a reason) and simplistic, but I already find it extremely useful not only in web apps and other daemons, but also in simple – or not so simple – one-off scripts. I hope you find it useful as well!


Building Python 2.6.8 on Ubuntu 12.04

Update 2012-06-01: Looks like pythonz is an easier way to install Python 2.6.8 (and all other Pythons) on Ubuntu 12.04.

Ubuntu 12.04 builds OpenSSL without SSLv2. Python 2.6.8 expects OpenSSL to be built with SSLv2.

This bug has been fixed in Python 2.7+, but the fix wasn’t backported to Python 2.6.

So if you build your own Python 2.6 binaries on Ubuntu 12.04 you’ll see errors like this when attempting to use anything SSL related:

*** WARNING: renaming "_ssl" since importing it failed: build/lib.linux-x86_64-2.6/_ssl.so: undefined symbol: SSLv2_method

Some of us still need Python 2.6, so I forked Python 2.6.8 and removed SSLv2 support. Tests pass and SSL works.
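
A quick way to smoke test a rebuilt interpreter (a sketch; any HTTPS host will do):

# On a broken build this fails at import time, since _ssl was never built
import socket
import ssl

sock = ssl.wrap_socket(socket.socket())
sock.connect(('www.python.org', 443))
print sock.cipher()  # the negotiated (cipher, protocol, bits) if SSL works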

You can also just grab the diff below:

Update: M2Crypto requires patching as well:

Update 2: If you trust me and use 64bit Ubuntu 12.04 you can download a pre-built python-2.6.8~nosslv2 tarball from me. Includes distribute and pip pre-installed.


Failing with MongoDB

Update: Sorry, this isn’t my best piece of writing, and there seems to be some confusion. The dataset in question was first in a 1.8 master/slave pair and then migrated to sharded replica sets and 2.0.0.

For a bit of history of my dealings with MongoDB at Urban Airship, I gave a couple versions of a Scaling with MongoDB talk:

My coworker Adam Lowry even gave a follow-up talk of sorts at Postgres Open 2011 (slides) about migrating one of our datasets off of MongoDB and (back) on to PostgreSQL.

After reading through those slides you’re probably wondering why we’re still dealing with MongoDB at all. We fully intended to migrate our data out of it by now, but priorities change, deadlines slip, and we never expected one of our last uses of MongoDB to experience a surge in growth.

The dataset in question seemed ideal for MongoDB:

  • Ephemeral – if we lose it we experience service degradation for a short while, but nothing catastrophic
  • Small – easily fits into memory (~15 GB)
  • Secondary index – In a key/value store we would have had to manage a secondary index manually

So this dataset dodged a lot of the previous problems we had with MongoDB and seemed safe to migrate at our leisure.

Global Write Lock

MongoDB has a global write lock. This means that while applying an insert or update, a single mongod instance can’t respond to other queries.

Our dataset may be small, but it has a heavy read and write load. When the service it backed experienced a surge in usage, MongoDB quickly became CPU bound. This was especially frustrating considering mongod was running in a simple master/slave setup on two servers, each with 16 cores and enough memory to hold all the data a few times over.

Because of the global write lock and the heavy write load, operations are effectively serialized and executed on a single core, meaning our servers didn’t even look loaded: just one core would be 100% utilized by mongod.

Let the Sharding Begin

So we need to utilize multiple cores…
To do that we need multiple write locks…
There’s 1 write lock per mongod. So…
…multiple mongods per server?

We’d been avoiding sharding after having no luck getting it working in the 1.5.x dev series, but it was now our only way to get multiple mongods. I ran some tests, and it seemed like we could turn our master/slave setup into a 2 shard setup with 2 mongods and 1 arbiter per shard, with downtime in the seconds or low minutes.

The operational complexity of configuring a MongoDB cluster is daunting with each component bringing its own caveats:

mongod config servers

  • You need exactly 3 config mongods (1 is fine for testing, which makes things appear simpler than they really are).
  • There are lots of caveats with the config servers, so read Changing Config Servers carefully before configuring your cluster.
  • Otherwise these mongod instances are fairly blackboxish to me. Despite being mongod processes you administer them completely differently.

mongos routers

  • 1 per app server. This wouldn’t be a big deal except that our mongoses often start failing and require flushRouterConfig to be run on them. 2.0.1 supposedly fixes this, but we haven’t tested that yet (and trading known problems for new unknown ones is always scary).
  • mongos instances can use a lot of CPU and seem to have random spikes where they fully utilize every core very briefly. Keep this in mind if your application servers are already CPU bound.
  • On the bright side mongos balanced our data rather quickly. Our shard key is a UUID, and it set up reasonable ranges in very short order without us having to preconfigure them.
  • “mongos” is a terribly confusing name. It sounds like multiple mongo instances. We’ve taken to calling them mongooses internally due to frequent typos and confusion.

arbiters

  • You need at least 3 members in a replica set in order to complete an election if 1 member goes down.
  • We haven’t had any issues with arbiters… not sure what we’d do if one broke somehow but since they have no persistent data they’re safe to restart at any time.

mongods

  • Early on we ran into a problem where changing replica set member entries wasn’t propagated to the config servers’ shard configuration. Restarting every mongos fixed it.
  • As far as I can tell a new replica set member will never leave the initial RECOVERING state until all operations to that set are stopped. Even 40 updates per second was enough of a trickle to prevent a new set member from leaving RECOVERING to becoming a SECONDARY. We had to shutdown mongoses to cut off all traffic to bring up a new member. (The replication log gave every indication of being caught up and our usual update load is thousands per second.)
  • Setting rest in the config file doesn’t seem to work. Put --rest in your command line options.
  • Sending an HTTP request to a mongod’s main port (instead of the HTTP endpoint) seems to be capable of crashing the mongod.

Client Drivers

While a single replica set member was in a RECOVERING state, our Java services couldn’t complete any operations, yet our Python service was happily working away.

Summary

Right now we’re getting by with 2 shards on 2 dedicated servers and then mongoses and config servers spread throughout other servers. There appears to be some data loss occurring, though due to the ephemeral, fast-changing nature of this dataset it’s very difficult to determine definitively or reproduce independently.

So we’re trying to migrate off of MongoDB to a custom service better suited for this dataset ASAP.


MemoryMapFile Convenience Class for Python

My obsession with mmap hasn’t died, but while Python’s mmap module is a wonderful low-level library, it’s a bit hard for a newcomer to use properly.

I’ve started toying with a convenience wrapper class for mmap.mmap (at least the Unix version):
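
As a rough illustration of the idea (not the actual class), a minimal Unix-only wrapper might look something like:

import mmap
import os


class MemoryMapFile(object):
    """Open (or create) a file and memory map it in one step."""

    def __init__(self, filename, size=mmap.PAGESIZE):
        self._fd = os.open(filename, os.O_CREAT | os.O_RDWR)
        # Make sure the backing file is big enough for the mapping
        os.ftruncate(self._fd, size)
        self._mmap = mmap.mmap(self._fd, size, mmap.MAP_SHARED,
                               mmap.PROT_READ | mmap.PROT_WRITE)

    def __getitem__(self, key):
        return self._mmap[key]

    def __setitem__(self, key, value):
        self._mmap[key] = value

    def close(self):
        self._mmap.close()
        os.close(self._fd)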

My original goal was to automatically grow the mmap whenever the user attempts to write beyond the current size of the mmap file, but that’s going to take carefully wrapping quite a few methods (write, __setitem__, and maybe get/read methods too).

If it becomes useful, I may use it in mmstats.

Feedback welcome!

Update: Discovered the hard way (segfaults) that resizing mmaps is tricky: the region can be moved and the data will be copied, but any existing pointers (from ctypes’ from_buffer in my case) will then point to freed memory and segfault upon use.

tl;dr – If at all possible, precompute the size of your mmap before using it.
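
In practice that means summing your field sizes before the first from_buffer call. A sketch with made-up fields:

import ctypes
import mmap

# Made-up field layout: compute the total size up front...
fields = [ctypes.c_int, ctypes.c_double, ctypes.c_char * 8]
total = sum(ctypes.sizeof(f) for f in fields)

# ...then round up to a whole page so the mapping never needs to grow
size = ((total + mmap.PAGESIZE - 1) // mmap.PAGESIZE) * mmap.PAGESIZE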


Sharing Python data between processes using mmap

I’ve been toying with an idea of exposing statistics for a Python application via shared memory to keep the performance impact on the application as low as possible. The goal being an application could passively expose a number of metrics that could either be periodically polled via munin/Icinga/etc plugins or interactive tools when diagnosing issues on a system.

But first things first: I need to put data into shared memory from Python. mmap is an excellent widely-implemented POSIX system call for creating a shared memory space backed by an on-disk file.

Usually in the UNIX world you have 2 ways of accessing/manipulating data: memory addresses or streams (files). Manipulating data via memory addresses means pointers, offsets, malloc/free, etc. Stream interfaces manipulate data via read/write/seek system calls for files and send/recv/etc for sockets.

mmap gives you both interfaces. A memory mapped file can be manipulated via read/write/seek or by directly accessing its mapped memory region. The advantage of the latter is that this memory region is in userspace — meaning you can manipulate a file without incurring the overhead of write system calls for every manipulation.
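
As a quick illustration of both interfaces on one mapping (an anonymous map, so no file is needed):

import mmap

buf = mmap.mmap(-1, mmap.PAGESIZE)  # -1 file descriptor means anonymous mapping

buf.write('hello')  # stream interface: writes advance an internal position
buf[0] = 'H'        # memory interface: index directly, no seeking

buf.seek(0)
print buf.read(5)   # prints 'Hello'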

Anyway, enough exposition; let’s see the full example. (Despite mmap’s nice featureset, I’m only using it as a simple memory sharing mechanism anyway.) The following code shares a tiny bit of data between 2 Python processes using the excellent mmap module in the stdlib. a.py writes to the memory mapped region, and b.py reads the data out. ctypes allows for an easy way to create values in a memory mapped region and manipulate them like “normal” Python objects.

These code samples were written using Python 2.7 on Linux. They should work fine on any POSIX system, but Windows users will have to change the mmap calls to match the Windows API.

a.py

#!/usr/bin/env python
import ctypes
import mmap
import os
import struct
 
 
def main():
    # Create new empty file to back memory map on disk
    fd = os.open('/tmp/mmaptest', os.O_CREAT | os.O_TRUNC | os.O_RDWR)
 
    # Zero out the file to ensure it's the right size
    assert os.write(fd, '\x00' * mmap.PAGESIZE) == mmap.PAGESIZE
 
    # Create the mmap instance with the following params:
    # fd: File descriptor which backs the mapping, or -1 for anonymous mapping
    # length: Must be a multiple of PAGESIZE (usually 4 KB)
    # flags: MAP_SHARED means other processes can share this mmap
    # prot: PROT_WRITE means this process can write to this mmap
    buf = mmap.mmap(fd, mmap.PAGESIZE, mmap.MAP_SHARED, mmap.PROT_WRITE)
 
    # Now create an int in the memory mapping
    i = ctypes.c_int.from_buffer(buf)
 
    # Set a value
    i.value = 10
 
    # And manipulate it for kicks
    i.value += 1
 
    assert i.value == 11
 
    # Before we create a new value, we need to find the offset of the next free
    # memory address within the mmap
    offset = struct.calcsize(i._type_)
 
    # The offset should be uninitialized ('\x00')
    assert buf[offset] == '\x00'
 
    # Now create a string containing 'foo' by first creating a c_char array
    s_type = ctypes.c_char * len('foo')
 
    # Now create the ctypes instance
    s = s_type.from_buffer(buf, offset)
 
    # And finally set it
    s.raw = 'foo'
 
    print 'First 10 bytes of memory mapping: %r' % buf[:10]
    raw_input('Now run b.py and press ENTER')
 
    print
    print 'Changing i'
    i.value *= i.value
 
    print 'Changing s'
    s.raw = 'bar'
 
    new_i = raw_input('Enter a new value for i: ')
    i.value = int(new_i)
 
 
if __name__ == '__main__':
    main()

b.py

import mmap
import os
import struct
import time
 
def main():
    # Open the file for reading
    fd = os.open('/tmp/mmaptest', os.O_RDONLY)
 
    # Memory map the file
    buf = mmap.mmap(fd, mmap.PAGESIZE, mmap.MAP_SHARED, mmap.PROT_READ)
 
    i = None
    s = None
 
    while 1:
        new_i, = struct.unpack('i', buf[:4])
        new_s, = struct.unpack('3s', buf[4:7])
 
        if i != new_i or s != new_s:
            print 'i: %s => %d' % (i, new_i)
            print 's: %s => %s' % (s, new_s)
            print 'Press Ctrl-C to exit'
            i = new_i
            s = new_s
 
        time.sleep(1)
 
 
if __name__ == '__main__':
    main()

(Note that I cruelly don’t clean up /tmp/mmaptest after the scripts finish. Consider it a 4KB tax for anyone who runs arbitrary code they found on the Internet without reading it first.)
