Friday, December 23, 2011

Changes in Facebook's SWF Flash handler..

Ever since Facebook introduced a change in their Facebook Connect Library that caused severe login issues for IE users for more than a week, we've used scripts to monitor Facebook's JavaScript Connect Library for any changes that might affect our users. Nate Frieldy created the first version to monitor for diffs, and I soon forked it here to catch diffs beyond the timestamp change that happens on the first line.
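The monitoring idea can be sketched in a few lines of Python (the URL and the helper names here are placeholders, not the actual script):

```python
import difflib
import urllib.request

# Hypothetical endpoint -- the real script watched Facebook's Connect JS.
CONNECT_JS_URL = "http://connect.facebook.net/en_US/all.js"

def fetch(url):
    """Download the current copy of the library."""
    return urllib.request.urlopen(url).read().decode("utf-8")

def significant_diff(old_text, new_text):
    """Diff two snapshots, skipping the first line (a build-timestamp
    comment that changes on every fetch) and keeping only changed lines."""
    old_lines = old_text.splitlines()[1:]
    new_lines = new_text.splitlines()[1:]
    return [line for line in difflib.unified_diff(old_lines, new_lines, lineterm="")
            if line.startswith(("+", "-")) and not line.startswith(("+++", "---"))]
```

A cron job comparing the latest fetch against the previous snapshot with `significant_diff` would then only alert on substantive changes.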

On December 8, 2011, our diff detection scripts picked up this change:

- /*1323218538,169893481,JIT Construction: v482006,en_US*/
+ /*1323305592,169920374,JIT Construction: v482779,en_US*/
...
  if (!window.FB) window.FB = {
      _apiKey: null,
...
      [10, 3, 181, 34],
      [11, 0, 0]
  ],
- "_swfPath": "rsrc.php\/v1\/yK\/r\/RIxWozDt5Qq.swf"
+ "_swfPath": "rsrc.php\/v1\/yD\/r\/GL74y29Am1r.swf"
  }, true);
  FB.provide("XD", {
      "_xdProxyUrl": "connect\/xd_proxy.php?version=3"

This SWF file is Facebook's cross-domain handler for web browsers that don't implement HTML5's postMessage() function, which allows messages to be passed between different domains; the SWF provides a Flash-based equivalent. Facebook doesn't often recompile the SWF file, so this diff caught my attention. The most reliable decompiler I've found is Sothink's SWF Decompiler, which can be used to export the ActionScript files with a 30-day trial.

I've decompiled the SWF and its ActionScript files from \/v1\/yD\/r\/GL74y29Am1r.swf and reviewed the diffs between the previously decompiled SWF and this one. Comparing the two files, you would see:
> private static var initialized:Boolean = false;
> private static var origin_validated:Boolean = false;
< Security.allowDomain("*");
< Security.allowInsecureDomain("*");
---
> if (XdComm.initialized)
> {
> return;
> }
> XdComm.initialized = true;
> var _loc_1:* = PostMessage.getCurrentDomain();
> Security.allowDomain(_loc_1);
> Security.allowInsecureDomain(_loc_1);
> ExternalInterface.addCallback("postMessage_init", this.initPostMessage);
> private function initPostMessage(param1:String, param2:String) : void
> {
> origin_validated = true;
> this.postMessage.init(param1, param2);
> return;
> }// end function
> public static function proxy(param1:String, param2:String) : void
> {
> if (origin_validated)
> {
>, param2);
> }
> return;
> }// end function

The changes indicate that Facebook has tightened its cross-domain security policies. Instead of passing a wildcard domain to its allowDomain() function to accept messages from anywhere, it now invokes getCurrentDomain(), a function that executes a call to document.domain, relying more on the browser to define the security restrictions.

Most of these changes should not affect your users... I just wish Facebook would discuss more of what's going on behind the scenes, since your apps may very well be using the Facebook Connect Library without realizing these changes are happening beneath you!

I've started to post the decompiled SWF files here:

Note that these updates are only done manually. If someone knows of an open-source SWF decompiler, then the diffs could be much more automated!

Wednesday, December 14, 2011

Setting up IPSec with racoon and a Cisco router..

Tools on Linux v2.6

The Linux 2.6 kernel already comes with IPSec support (the Ubuntu build appears to have AH/ESP support), so you only need two packages to get it working: ipsec-tools and racoon. You can install them with apt-get install ipsec-tools and apt-get install racoon respectively. You'll need to set up /etc/ipsec-tools.conf to define which IP subnets/hosts will be connected via VPN (and whether to use ESP and/or AH in tunnel or transport mode, as well as the gateway IPs that are used to bridge the connections). Racoon has other parameters for Phase 1 and Phase 2 negotiation that you need to set up too, defined in /etc/racoon/racoon.conf: you use the remote {} configuration blocks for Phase 1, and the sainfo blocks for Phase 2. See the racoon(8) and racoon.conf(5) man pages for more detailed info.
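As a rough illustration, a minimal racoon.conf might look like the following (all values here are placeholders; the algorithms and addresses must match whatever the other side's router is configured for):

```
# /etc/racoon/racoon.conf -- hypothetical example
remote anonymous {                  # Phase 1 (IKE) settings
        exchange_mode main;
        proposal {
                encryption_algorithm 3des;
                hash_algorithm sha1;
                authentication_method pre_shared_key;
                dh_group modp1024;
        }
}

sainfo anonymous {                  # Phase 2 (IPSec SA) settings
        encryption_algorithm aes;
        authentication_algorithm hmac_sha1;
        compression_algorithm deflate;
}
```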

Note that the 'unique' level allows multiple security associations to be used over the same host. Apparently using the keyword 'unique' instead of 'require' fixes the issue:
flush;
spdflush;
spdadd ${LOCAL_NETWORK} ${STJUST_NETWORK} any -P out ipsec esp/tunnel/${LOCAL_OUTSIDE}-${STJUST_OUTSIDE}/unique;
spdadd ${STJUST_NETWORK} ${LOCAL_NETWORK} any -P in ipsec esp/tunnel/${STJUST_OUTSIDE}-${LOCAL_OUTSIDE}/unique;
More documentation here about unique in the setkey man page:

The protocol/mode/src-dst/level part specifies the rule how to process the packet. Either ah, esp, or ipcomp must be used as protocol. mode is either transport or tunnel. If mode is tunnel, you must specify the end-point addresses of the SA as src and dst with '-' between these addresses, which is used to specify the SA to use. If mode is transport, both src and dst can be omitted. level is to be one of the following: default, use, require, or unique. If the SA is not available in every level, the kernel will ask the key exchange daemon to establish a suitable SA. default means the kernel consults the system wide default for the protocol you specified, e.g. the esp_trans_deflev sysctl variable, when the kernel processes the packet. use means that the kernel uses an SA if it's available, otherwise the kernel keeps normal operation. require means SA is required whenever the kernel sends a packet matched with the policy. unique is the same as require; in addition, it allows the policy to match the unique out-bound SA. If you specify the policy level unique, racoon(8) will configure the SA for the policy.

Racoon works by basically listening for commands from the Linux kernel. The tunnels get set up the first time you attempt to negotiate a connection to a host. If you have routes defined in /etc/ipsec-tools.conf and do a setkey -f /etc/ipsec-tools.conf, this information is loaded into the security policy database (SPD), and the kernel will send a trigger that causes Racoon to attempt to establish the connection.

The two commands you will use to initially test are:
sudo setkey -f /etc/ipsec-tools.conf
sudo racoon -F -f /etc/racoon/racoon.conf -v -ddd -l /etc/racoon/racoon.log

The most secure (but most complicated) approach is Internet Key Exchange (IKE) authentication. In this approach, both the VPN client and server sides announce that they will use a pre-shared key authentication mechanism, along with their authentication and hash algorithms. The pre-shared key is just some hard-coded value that both sides agree on before setting up the VPN connection. Once the connection is established, both sides use a Diffie-Hellman key exchange to generate shared keying material so that future exchanges will be encrypted: each side sends a public value and derives the same shared secret using its own private value. All this negotiation happens during what's called Phase 1 negotiation.
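The key-exchange step can be illustrated with a toy Diffie-Hellman example (tiny textbook numbers; real IKE uses much larger primes, e.g. the modp1024 group):

```python
# Toy Diffie-Hellman: both sides derive the same shared secret
# without ever sending it over the wire.
p, g = 23, 5              # public prime modulus and generator
a, b = 6, 15              # each side's private value (kept secret)

A = pow(g, a, p)          # side A sends g^a mod p
B = pow(g, b, p)          # side B sends g^b mod p

shared_a = pow(B, a, p)   # A computes (g^b)^a mod p
shared_b = pow(A, b, p)   # B computes (g^a)^b mod p
assert shared_a == shared_b == 2
```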

Alternate approaches to the IKE implementation call for setting up manual keys. In other words, both sides have to know how to encrypt the data beforehand instead of using this intricate key exchange. A lot of the Racoon documentation mentions setting up security associations (SAs) within /etc/ipsec-tools.conf, but this approach is unneeded if we are going to use the IKE-based approach, which is presumably more secure. If the IKE implementation is used, then Phase 2 negotiations must also occur.

A great guide to troubleshooting IPSec connections is here:

Want to know how all the nomenclature is laid out? Click here:

Phase 1
If you read the IPSec documents, you'll see there are 3 rounds of this Phase 1. You can use tshark/wireshark to watch the network dumps:

Round 1: agree on authentication, encryption, hash payload algorithm

Round 2: key exchanges w/ nonce values (to avoid replay attacks).

Round 3: validation of the hash/identification payloads using the secret keys.

Phase 2 (IKE only)
The second step in the IKE approach, also known as Quick Mode, is to negotiate a security association (SA) policy. These policies define not only which encryption/authentication algorithms should be used, but also which encryption keys should be used for data transfer and for which IP subnets.

The ipsec-tools package comes with an /etc/ipsec-tools.conf that defines the security association (SA) policies. This policy must match the information provided by the other side. In Northwestern Mutual's case, their IT department set up their Cisco router with an access control list that defines what's allowed to connect. You will notice in the ISAKMP protocol during Phase 2 negotiation that the packet structure also includes IDci and IDcr identity payloads. You can watch Racoon and see what bits get passed through:

2011-12-14 01:12:56: DEBUG: IDci:
2011-12-14 01:12:56: DEBUG: 04000000 <IP address here>

Data exchanges
Assuming everything is set up correctly, you need to set up your route table for the specific IP blocks to which you are connecting. Do netstat -rn to inspect the current routes, then use route add to add the correct ones. Unless you're bridging Ethernet interfaces, be sure that you are always sending packets over the same Ethernet interface.

You can confirm which packets go over the wire by using either tcpdump or tshark -i eth0 not port 22 (to exclude traces from your current SSH connection from being dumped out). If you are using ESP encryption, then you should also see the kernel encrypting packets destined for those IPs. Again, the Linux kernel is handling most of the work, as long as the routes are correctly defined.

Ways to debug:
Watch isakmp packets:

1. sudo tshark -i eth0 udp port 500 -V or
sudo tshark -i eth0 udp not port 22

2. ssh -X
sudo wireshark

(X11Forwarding needs to be temporarily enabled in /etc/ssh/sshd_config, then do /etc/init.d/ssh restart. You then need to make sure X11 forwarding is set up in your /etc/ssh/ssh_config on the client side).

Wireshark can actually decrypt ESP/AH authentication assuming you provide the Security Parameter Indexes (SPI) generated on-the-fly and encryption keys. Most of this data you can observe via running Racoon in debug mode.

FYI - You may also notice "next payload" in Racoon dumps. The ISAKMP standard appears to define multiple types of payloads. Often times you will see vendor ID and other data such as the following:
Vendor ID: RFC 3706 Detecting Dead IKE Peers (DPD)
    Next payload: Vendor ID (13)
    Payload length: 20
    Vendor ID: RFC 3706 Detecting Dead IKE Peers (DPD)
Vendor ID: XXXXXX
    Next payload: Vendor ID (13)
    Payload length: 20
    Vendor ID: XXXXXX
Vendor ID: draft-beaulieu-ike-xauth-02.txt
    Next payload: NONE (0)
    Payload length: 12
    Vendor ID: draft-beaulieu-ike-xauth-02.txt
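The chaining works through a generic payload header; here is a small sketch of parsing it, following RFC 2408's header layout:

```python
import struct

def parse_payload_header(data):
    """Parse an ISAKMP generic payload header (RFC 2408): next-payload
    (1 byte), reserved (1 byte), payload length (2 bytes, big-endian)."""
    next_payload, _reserved, length = struct.unpack("!BBH", data[:4])
    return next_payload, length

# A 20-byte Vendor ID payload whose "next payload" is another Vendor ID (13):
header = bytes([13, 0, 0, 20])
print(parse_payload_header(header))  # prints: (13, 20)
```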

Sunday, December 11, 2011

Setting up a VPN between two DD-WRT routers..

This setup worked for two WRT54GL routers running DD-WRT v24-sp1.

PPTP server (
First, one machine needs to be setup as a PPTP server.
1. Go to Services->PPTP to enable the PPTP server.
2. Set the server IP (should be a virtual LAN IP address different from your LAN IP address -- i.e.,
3. Set the Client IP block (, and then set up the CHAP Secrets (johndoe * mypassword *).

PPTP client ( :
1. Enable PPTP client.
2. Set the PPTP Server IP.
3. Set the Remote Subnet ( and Remote Subnet Mask (
4. Set the MPPE Encryption to "mppe required".
5. Set the MTU/MRU to be 1450.
6. Disable NAT mode.
7. Set the username and password to the CHAP secret set in the PPTP Server.

You should verify the PPTP connection is established by telnetting into the PPTP client box and attempting to ping the private IP address of the PPTP server (i.e. or ). If this succeeds, you may find that the routers can ping each other but other machines on the two networks cannot talk to each other. In this case, you may wish to confirm whether the PPTP server has set up a route to the remote subnet. To add it automatically, you need to do the following:

1. Go to Administration->Commands.
2. Add the following firewall commands. What usually happens is that the /tmp/pptpd_client/ip-up script gets created when the tunnel comes up; a delay is inserted before the route is appended to the ip-up script, which is then re-executed.
sleep 40
/bin/sh -c 'echo "ip route add dev ppp0" >> /tmp/pptpd/ip-up'

Click on Save->Firewall after saving.

If you want to reinitiate the PPTP connection, click Save/Apply Settings and wait for the VPN connection to be re-established. If you really want to check things out, you can configure a VPN client on Ubuntu 10.04 through the Network Manager (make sure to enable Point-to-Point MPPE Encryption, allow stateful encryption, and send PPP echo packets to help keep the connection alive). (Note: if you forget to enable the MPPE encryption/stateful encryption options, you may find that the VPN connection is flaky; it seems CHAP requests/rejects keep happening without these two options.) This VPN client will help you verify that the PPTP server is responding correctly.

You should also telnet to both DD-WRT routers and verify the routes have been established between the two subnets. You should also cat /tmp/pptpd_client/ip-up on the PPTP server to verify that the IP route was added correctly.

Saturday, December 10, 2011

Upgrading a Compaq Presario C700 from Vista to Windows 7..

Recently, I upgraded an old Compaq C714NR Presario laptop that had been running Windows Vista to Windows 7. Coffee had been spilled on the touchpad, rendering it inoperable. The keyboard still worked but the spacebar was sort of sticky so needed to be replaced. You may have older machines that might be worth upgrading, especially if you took advantage of Microsoft offers that allowed .edu email addresses to get a copy of Windows 7 Professional for $25.

1. Although your machine may be running 32-bit Windows Vista, chances are that if it has a dual-core processor, it can run 64-bit Windows 7 Professional. Most of the graphics, network, and sound drivers already come with Windows 7, so there usually isn't a need to download extra drivers. Windows 7 should install right out of the box.

2. If you're burning a copy of Windows 7, you may encounter issues like "Required cd/dvd drive device driver is missing". If you observe this, chances are the DVD you burned actually has problems, especially if you were using a disc that hadn't been used for a while. You may be led to think that there are driver incompatibility issues with the 64-bit Windows 7 version, but try to reburn the DVD and see if the install works.

3. The touchpad can be replaced, but you have to buy one that comes with the laptop casing too. Since most of the casing + touchpad parts are sold on eBay for $30+, it may be easier to simply attach a USB mouse instead. The picture below shows an example of the touchpad + upper casing:

4. There are web sites that sell spare keyboard keys (i.e., but buying one part can easily cost $8 and you can usually buy the entire keyboard replacement for $12. The HP service manual for replacing the keyboard is fairly straightforward, but there are a few key things to know. In the case of the C700, there were 3 screws at the bottom of the laptop, each with a keyboard icon at the bottom. One of these screws was obscured by the memory lid, so you may have to remove the lid first.

Second, there is the Zero-Insertion Force (ZIF) connector that attaches to the keyboard and laptop. What this usually means is that the sides of the connector need to be pushed out.

You should avoid pulling the ribbon cable out until the connector is released. The picture below shows one example of how the ZIF connector is pushed out. You can usually use your fingers and push the connector out slightly before inserting the ribbon cable. You should push down on the sides to fasten the ribbon cable securely.
Keep in mind that you should verify that all keys work. If the connector is not fully fastened, you may find some keys do not respond. You can try to boot up the computer with the keyboard installed, but be careful if any components are exposed.

5. Finally, if you need to replace any keys, you first have to figure out how the key mechanism works. There are a bunch of YouTube demos for replacing the keys in HP laptops, but none of the videos I found pointed out that it's easier to attach the larger plastic hinge to the key, and the other, smaller plastic hinge to the laptop. If you set up the hinges on the laptop first, they should move up and down when you apply pressure to them, supplementing the spring-like action of the button.

Once you figure out the right way to place them, take the large hinge and attach it to the key before attaching the other part. For this keyboard, I couldn't just put the key over the two plastic hinges since the pressure of the keys would cause both plastic to be pushed down without snapping into place. You have to be careful with this part since the plastic hooks can break, so avoid trying to force the keys to attach to the plastic hinges.

Celery 2.3+ crashes when channel.close() commands are issued...

This AMQPChannelException issue has happened for us over the last 3 weeks, so I decided to dig in to understand why we were getting AMQPChannelExceptions that caused our Celery workers to simply die. Well, it turns out this exception often appeared in our logs:

(404, u"NOT_FOUND - no exchange 'reply.celeryd.pidbox' in vhost 'myhost'", (60, 40), 'Channel.basic_publish')

The basic problem, I suspect, is that we have a task that checks for long-running Celery tasks. It relies on the celeryctl command, which is a special mechanism used by Celery to broadcast messages to all boxes running Celery (i.e. celeryctl inspect active).

The celeryctl command implements a multicast protocol by leveraging the AMQP standard (see the RabbitMQ tutorial). On startup, all Celery machines bind to the exchange celery.pidbox. When you send a celeryctl command, RabbitMQ receives this message and then delivers it to all machines listening on the celery.pidbox exchange.

The machines also send back their replies on a separate exchange called reply.celery.pidbox, which the main process that issued the celeryctl command uses to collect all the responses. Once that program completes, it deletes the reply exchange since it's no longer needed. Unfortunately, if a worker receives the broadcast but replies too late, it can trigger an "exchange not found" error, causing RabbitMQ to issue a channel.close() command. I suspect this happens especially under heavy load and/or during intermittent network failures, since the problem often shows up at those times.

Celery handles connection failures fine, but doesn't seem to deal with situations where the AMQP host issues a close command. I solved it in two ways: first, by allowing Celery to gracefully reset the entire connection when such an event happens (PR request to the Celery framework); and second, by increasing the window in which we check for replies so the exchange isn't deleted so quickly (i.e. celery inspect active --timeout=60). The latter may be the quicker way to solve it, though the former should help avoid the situation altogether (although it may cause other issues).

The fix is to trap the exception and try to establish a new connection. A similar approach is already being used for Celery control commands (an error msg "Error occurred while handling control command" gets printed out but the underlying Exception gets caught). This exception occurs when RabbitMQ sends a close() command to terminate the connection, which otherwise causes the entire process to die.

def on_control(self, body, message):
    """Process remote control command message."""
    try:
        self.pidbox_node.handle_message(body, message)
    except KeyError, exc:
        self.logger.error("No such control command: %s", exc)
    except Exception, exc:
        self.logger.error(
            "Error occurred while handling control command: %r\n%r",
            exc, traceback.format_exc(), exc_info=sys.exc_info())

We'll see in this pull-request whether the authors of Celery think this is a good idea...I suspect that it would be better to create a new channel than to restart the connection altogether.

Friday, December 9, 2011

Moving to Nose..

We've started to have our developers use Nose, a much more powerful unit test discovery package for Python. For one, in order to generate JUnit XML outputs for Hudson/Jenkins, the Nose test runner comes with a --with-xunit flag that lets you dump these results out.

Here are a few things that might help you get adjusted to using Nose:

1. As mentioned in the Testing Efficiently with Nose tutorial, the convention for running tests has changed slightly. The format is now:

python test app.tests:YourTestCaseClass
python test app.tests:YourTestCaseClass.your_test_method

One way to force the test results to use the same format is to create a class that inherits from unittest that overrides the __str__, __id__, and shortDescription methods. The __str__() method is used by the Django test runner to display what tests are running and which ones have failed, enabling you to copy/paste the test to re-run the test. The __id__() method is used by the XUnit plug-in to generate the test name, enabling you to swap out the class name with the Nose convention. Finally, the shortDescription() will prevent docstrings from replacing the test name when running the tests.

class BaseTestCase(unittest.TestCase):

    def __str__(self):
        # Use Nose testing format (colon to differentiate between module/class name)
        if 'django_nose' in settings.INSTALLED_APPS or 'nose' in settings.TEST_RUNNER.lower():
            return "%s:%s.%s" % (self.__module__, self.__class__.__name__, self._testMethodName)
        return "%s.%s.%s" % (self.__module__, self.__class__.__name__, self._testMethodName)

    def id(self):  # for XUnit outputs
        return self.__str__()

    def shortDescription(self):  # do not return the docstring
        return None

2. For SauceLabs jobs, you can also expose the URL of the job run. WebDriverExceptions inherit from the Exception class and add a 'msg' property that we can use to insert the SauceLabs URL. You want to avoid adding the URL in the id() and __str__() methods, since those routines are used to dump out the names of classes that Hudson/Jenkins may use to compare between builds.

def get_sauce_job_url(self):
    # Expose the SauceLabs job number for easy reference if an error occurs.
    if getattr(self, 'webdriver', None) and getattr(self.webdriver, 'session_id', None):
        session_id = "(%s)" % self.webdriver.session_id
    else:
        session_id = ''
    return session_id

def __str__(self):
    session_id = self.get_sauce_job_url()
    return " ".join([super(SeleniumTestSuite, self).__str__(), session_id])

def _exc_info(self):
    exc_info = super(SeleniumTestSuite, self)._exc_info()
    # WebDriver exceptions have a 'msg' attribute, which gets dumped out.
    # We can take advantage of this fact and store the SauceLabs job URL in there too!
    if exc_info[1] and hasattr(exc_info[1], 'msg'):
        session_id = self.get_sauce_job_url()
        exc_info[1].msg = session_id + "\n" + exc_info[1].msg
    return exc_info

3. Nose has a bunch of nifty options that you should try, including --with-id & --failed, which let you run your entire test suite and then re-run only the tests that failed. You can also use the attrib decorator, which lets you decorate test suites/test methods with various attributes, such as network-based tests or slow-running ones. The attrib plugin supports a limited boolean logic, so check the documentation carefully if you intend to use the -a flag (you can use --collect-only in conjunction to verify your tests are selected with the right logic).
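The idea behind the attrib plugin can be sketched as follows (a simplified stand-in for nose's actual decorator, shown only to illustrate how attribute tagging works):

```python
def attr(*names, **kwargs):
    """Tag a test function with attributes, in the spirit of
    nose.plugins.attrib: bare names become True flags, keyword
    arguments become named values."""
    def wrap(func):
        for name in names:
            setattr(func, name, True)
        for key, value in kwargs.items():
            setattr(func, key, value)
        return func
    return wrap

@attr('network', speed='slow')
def test_third_party_api():
    pass
```

With the real plugin, a selection such as nosetests -a '!network' would then skip any test tagged this way.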

Thursday, December 8, 2011

Hover states with Selenium 2..

We do a lot of automated tests on Selenium 2, and recently we noticed that some of our tests that attempt to verify mouseover/hover states intermittently break. We could see the mouseover event get fired, but then suddenly it would stop.

The phenomenon at first made us suspect possible race conditions in our JavaScript code. Were we accidentally clearing/hiding elements before WebDriver could reach them? Was the web browser loading multiple times and clearing out the original state? Or was it an issue in Selenium 2, since it attempts to calculate the center position of the element, scroll it into view, and then move the mouse cursor to the center of the element? You can look at how the Windows driver's mouseMove is implemented here:

It turns out that hover states for Windows in Selenium 2 are just problematic right now. You can even see inside the Selenium 2 test suite that hover tests are skipped for Windows-based systems:

if (!supportsHover()) {
    System.out.println("Skipping hover test: Hover is very short-lived on Windows. Issue 2067.");
    ...
}

The full bug report is here:

In Selenium v2.15.0, even the Internet Explorer wiki was updated to remind people that hover states (using mousemove) have issues:

The workaround seems to be generating your own synthetic events:


Saturday, December 3, 2011

AT&T Uverse Internet 18Mbps solution..

You have two modem/router choices for the AT&T Internet Uverse plan: Motorola NVG510 or the Motorola 2210-02-1ATT. The NVG510 doesn't have a bridge mode, and there are a wide variety of hacks to use the IP Passthrough option for the Motorola NVG510 modem:

If you read through the discussions, you'll notice they gravitate towards using IP Passthrough with a Fixed-DHCPS set for the router's IP address: "i'm using the IP Passthrough functionality. According to its own instructions on the right-hand side of the tab, the RG is supposed to pass its public IP address through to another device, using its own DHCP - but it won't do it. I had to manually enter the information into my regular router (an Airport Extreme Base Station, aka AEBS) to use the Passthrough functionality."

It seems better to go with the Motorola 2210 option if you have the choice. Also, don't confuse the Motorola 2210-02-1ATT model with the Motorola 2210-02-1022 model; the latter is an old DSL modem that you can buy off eBay. For one thing, the 2210-02-1022 models only speak PPPoE, and since the Uverse solution relies on authentication based on the MAC address, you won't be able to swap this modem in. You can check the diagnostics tool on the router to verify that you're connecting via IP-DSLAM, which is synonymous with the AT&T Uverse Internet 18Mbps solution.

Friday, December 2, 2011

Fujitsu C2010 specs

Intel 845MP/MZ chipset

Fujitsu Siemens LifeBook C2010 Graphics - ATI Radeon Mobility M6 LY Driver
Fujitsu Siemens LifeBook C2010 Human Interface - Belkin MI 2150 Trust Mouse Driver
Fujitsu Siemens LifeBook C2010 IEEE 1394 - Texas Instruments TSB43AB21 IEEE-1394a-2000 Controller Driver
Fujitsu Siemens LifeBook C2010 Modem - Intel 82801CA CAM AC97 Modem Driver
Fujitsu Siemens LifeBook C2010 Multimedia - Intel 82801CA CAM AC97 Audio Controller Driver
Fujitsu Siemens LifeBook C2010 Network - Realtek RTL 8139 8139C 8139C Driver
Fujitsu Siemens LifeBook C2010 PCMCIA - Texas Instruments PCI1520 PC Card Cardbus Driver
Fujitsu Siemens LifeBook C2010 Storage - Intel 82801CAM IDE U100 Driver
Fujitsu Siemens LifeBook C2010 Intel Chipset Drivers

Device manager for Fujitsu Siemens LifeBook C2010 Laptop
Radeon Mobility M6 LY - ATI
MI 2150 Trust Mouse - Belkin
TSB43AB21 IEEE-1394a-2000 Controller - Texas Instruments
82801CA CAM AC97 Modem - Intel
82801CA CAM AC97 Audio Controller - Intel
RTL 8139 8139C 8139C - Realtek
PCI1520 PC Card Cardbus - Texas Instruments
82801CAM IDE U100 - Intel
Brookdale (82845 845 Chipset Host Bridge) - Intel
Brookdale (82845 845 Chipset AGP Bridge) - Intel
82801CA CAM USB Controller #1 - Intel
82801CA CAM USB Controller #2 - Intel
82801 Mobile PCI Bridge - Intel
82801CAM ISA Bridge (LPC) - Intel
82801CA CAM SMBus Controller - Intel

Saturday, November 26, 2011

Setting up wake-on LAN in your own home

One of the drawbacks of using Slicehost or Amazon EC2 is that you're pretty much paying $16-$20/month for a VPS server even when you're not accessing the server that much. If you have data that you also don't want being stored on a cloud, you may opt to setup your own home server to store this information. You may want to have the ability to access this data but not have your machine turned on all the time adding to your electricity bill.

First, your computer(s) need to have wake-on-LAN enabled in the BIOS, and the computer's Ethernet port must be plugged directly into a switch/router (wireless wake-on-LAN NIC cards may also be possible). For Toshiba laptops, for instance, you need to reboot, hit Esc and then F1 to enter the BIOS. You can then enable Wake-on-LAN support and save the changes.

You also need a router that can run the DD-WRT or Tomato open source firmware. If you use the VPN version of DD-WRT, you can also set up a PPTP server with DynDNS so that you can VPN into your server even if your ISP does not provide static IPs. You'll want to set up your DD-WRT server to assign a static IP address to your remote computer (inside the Services tab, look for static leases).

If you have one of the Windows Professional versions, you can also take advantage of the Remote Desktop server. You can enable the Remote Desktop Server by going to Control Panel > System > Remote. You can also right-click My Computer (if the icon is shown on the desktop) and choose Properties.

To verify that Wake-on-LAN works, you can use netcat/socat to test things out. This blog posting explains how the magic packet for Wake-on-LAN is constructed: 6 bytes of FF followed by the LAN MAC address repeated 16 times. A bash script to generate a hex version follows -- just substitute the ETHER parameter with the MAC address of the machine that needs to be woken up:
ETHER2=`echo $ETHER | sed "s/://g"`
ETHER3="FFFFFFFFFFFF"
for i in {1..16}; do ETHER3="${ETHER3}${ETHER2}"; done
echo ${ETHER3} | xxd -r -p > wake.packet

The blog posting points out that you can use netcat to send to a router IP, but what about sending a broadcast address from within the same network? For this purpose, the socat utility seems to work better (not sure if netcat allows sending to broadcast IP addresses?)
socat - UDP-DATAGRAM:,broadcast < wake.packet 

You can also use DD-WRT to wake up any device (check the WoL section), but the above information is more background on how it actually works!
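The same magic packet can be built and sent from Python as well (a sketch; the MAC and broadcast addresses here are placeholders):

```python
import socket

def make_magic_packet(mac):
    """Build a Wake-on-LAN magic packet: 6 bytes of 0xFF followed by
    the MAC address repeated 16 times."""
    mac_bytes = bytes.fromhex(mac.replace(":", ""))
    return b"\xff" * 6 + mac_bytes * 16

def send_wol(mac, broadcast="192.168.1.255", port=9):
    packet = make_magic_packet(mac)
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)  # allow broadcast sends
    sock.sendto(packet, (broadcast, port))
    sock.close()
```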

To set up dynamic DNS, you'll need to go to the DDNS page of the DD-WRT firmware and input your username/password. The DynDNS service is now $20/year, though it used to be free. The instructions for setting up DynDNS are posted at:

To set up the PPTP server for VPN, go to the PPTP section and follow the setup instructions. One thing to note is that the PPTP server IP should be different from your router's internal IP, and the CHAP secrets format is (<username> * <password> *) -- e.g. johndoe * mypassword *. Yes, there is an asterisk after both the username and the password.

Ubuntu comes with 'tsclient', a Remote Desktop client. Once you've set up the VPN connection correctly, you can use it to connect to your Windows machine. To get your Windows machine to hibernate, you can also create this batch file and add it to your Desktop:

%windir%\system32\rundll32.exe powrprof.dll,SetSuspendState Hibernate

You can also get an iPad/Android app to wake up your computer remotely -- we found Mocha WoL, a free app that lets you wake up the machine remotely. You can also set the hostname to a DynDNS machine, and assuming you have UDP port forwarding setup to relay WoL broadcasts, it should be able to wake up the machine remotely.

Friday, November 25, 2011

WebDriverWait and SauceLabs

If you've used the WebDriverWait class with SauceLabs, you may have noticed that it can actually run well beyond the designated timeout period. I first noticed this issue when a wait took 40 minutes to complete with a 20 second implicit wait time and a 60 second timeout with a 0.5 second poll frequency! (60 / 0.5 = 120 loops * 20 seconds/implicit wait = 2400 seconds total)

The problem appears to be that WebDriverWait attempts to run self._timeout / self._poll times before raising a TimeoutException:

for _ in xrange(max(1, int(self._timeout/self._poll))):

Since each iteration can block for the full implicit wait time before an ElementNotFound exception is raised, the total time this routine takes is a function of the total # of loops (timeout / poll frequency) * the implicit wait time.

The correct patch appears to be recording the start time and looping until the maximum timeout period has been exceeded:

--- a/selenium/webdriver/support/
+++ b/selenium/webdriver/support/
@@ -24,7 +24,7 @@ class WebDriverWait(object):

def __init__(self, driver, timeout, poll_frequency=POLL_FREQUENCY):
"""Constructor, takes a WebDriver instance and timeout in seconds.
- driver - Instance of WebDriver (Ie, Firefox, Chrome or Remote)
- timeout - Number of seconds before timing out
@@ -43,7 +43,8 @@ class WebDriverWait(object):
def until(self, method):
"""Calls the method provided with the driver as an argument until the \
return value is not Falsy."""
- for _ in xrange(max(1, int(self._timeout/self._poll))):
+ end_time = time.time() + self._timeout
+ while(time.time() < end_time):
value = method(self._driver)
if value:

The bug report is filed here:
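The patched logic amounts to a simple wall-clock deadline loop. Here is a standalone sketch of the same idea outside of Selenium (the method/driver arguments are stand-ins for whatever condition you are polling):

```python
import time

class TimeoutException(Exception):
    pass

def until(method, driver, timeout, poll_frequency=0.5):
    """Poll method(driver) until it returns a truthy value or the wall-clock
    deadline passes -- regardless of how long each individual call blocks."""
    end_time = time.time() + timeout
    while time.time() < end_time:
        value = method(driver)
        if value:
            return value
        time.sleep(poll_frequency)
    raise TimeoutException("timed out after %s seconds" % timeout)
```

Because the loop checks elapsed wall-clock time instead of counting iterations, a slow method() call (e.g. one blocked by a long implicit wait) simply uses up the remaining budget rather than multiplying it.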

Saturday, November 12, 2011

How the celeryctl command works in Celery..

One of the most popular Django apps out there is the Celery task queue framework. It allows you to build your own task queue system for offline processing and provides an elegant framework for interfacing with message brokers such as AMQP, Redis, etc.

The celeryctl command in the Celery task queue is an amazing tool. It allows you to send commands to your Celery workers to figure out which ones are actively processing tasks, revoke tasks that have been dispatched so that workers will skip over processing them, and see which tasks are scheduled. We use it here to monitor long-running tasks, such as this script:
from celery.bin import celeryctl
import datetime

def probe_celeryctl():
    results = celeryctl.inspect().run("active")
    check_old_celery_tasks(results)

def check_old_celery_tasks(results):
    bad_tasks = []

    MAX_TIMEDELTA = {'hours': 1}
    for host, tasks in results.items():
        for task in tasks:
            task_start = task.get('time_start')
            timestamp = float(task_start)
            task_timestamp = datetime.datetime.fromtimestamp(timestamp)
            time_diff = abs(datetime.datetime.utcnow() - task_timestamp)
            if time_diff > datetime.timedelta(**MAX_TIMEDELTA):
                print "Hmm..%s %s (on %s)" % (time_diff, task, host)
                bad_tasks.append("Task %s elapsed (%s): name=%s, PID=%s, args=%s, kwargs=%s" % (time_diff, host, task.get('name'), task.get('worker_pid'), task.get('args'), task.get('kwargs')))

    if len(bad_tasks) > 0:
        message = "You'd better check on these tasks...they are slowing things down.\n\n"
        message += "\n".join(bad_tasks)
        print message

if __name__ == "__main__":
    probe_celeryctl()
How does celeryctl work? Assuming you're using the AMQP backend with Celery, celeryctl relies on the same concepts used in the AMQP open standard (click here for a basic intro). When it first starts up, Celery will create an AMQP exchange called "celeryd.pidbox" on the AMQP host. You can confirm this by using rabbitmqctl to list exchanges:
$ sudo rabbitmqctl -p fanmgmt_prod list_exchanges
Listing exchanges ...
celeryd.pidbox fanout
You'll notice that celeryd.pidbox is created as a fanout exchange (see the AMQP intro for more details). This way, using celeryctl on one machine will broadcast a message to this exchange. The AMQP host will deliver messages to each queue that is bound to this celeryd.pidbox exchange. On startup, every Celery worker will also create a queue (queue.hostname.celery.pidbox) bound to this exchange, which can be used to respond to celeryctl commands.

Replies from each Celery worker are passed back via a direct exchange using celeryd.reply.pidbox. When you start celeryctl, it sends messages to celeryd.pidbox and listens for messages to arrive on the celeryd.reply.pidbox queue. The timeout period to wait for replies is 1 second; if you wish to increase the time to wait for replies from Celery workers, you can do so with the -t parameter (i.e. celeryctl inspect active -t <seconds>)

Note: the reply exchange gets deleted after the celeryctl command exits (it is usually set to auto_delete=True). Since all the Celery workers are still bound to the other celeryd.pidbox exchange, it should persist until you shut down all Celery workers.

Thursday, November 10, 2011

Installing HipChat in Jenkins/Hudson..

1. git clone

2. Compile and build (See and

3. Copy the target/.hpi file that got generated into your hudson/plugins dir.

4. Restart Hudson.

5. Go into the Configure Hudson screen and provide the API key/conference room.

6. Make sure to enable HipChat notifications in each of your build projects!

FYI - this plug-in uses the HipChat API to publish information to the respective room that you designate:

   public void publish(String message, String color) {
      for (String roomId : roomIds) {"Posting: " + from + " to " + roomId + ": " + message + " " + color);
         HttpClient client = new HttpClient();
         String url = "" + token;
         PostMethod post = new PostMethod(url);
         try {
            post.addParameter("from", from);
            post.addParameter("room_id", roomId);
            post.addParameter("message", message);
            post.addParameter("color", color);
            client.executeMethod(post);
         }
         catch (HttpException e) {
            throw new RuntimeException("Error posting to HipChat", e);
         }
         catch (IOException e) {
            throw new RuntimeException("Error posting to HipChat", e);
         }
         finally {
            post.releaseConnection();
         }
      }
   }

   public void rooms() {
      HttpClient client = new HttpClient();
      String url = "" + token;
      GetMethod get = new GetMethod(url);
      try {
         client.executeMethod(get);
      }
      catch (HttpException e) {
         throw new RuntimeException("Error posting to HipChat", e);
      }
      catch (IOException e) {
         throw new RuntimeException("Error posting to HipChat", e);
      }
      finally {
         get.releaseConnection();
      }
   }


Installing HipChat on 64-bit Linux.

The instructions for HipChat are pretty straightforward, though you do need to run the commands as root.

# Adobe's instructions forget to mention ia32-libs-gtk
$ apt-get install lib32asound2 lib32gcc1 lib32ncurses5 lib32stdc++6 \
lib32z1 libc6 libc6-i386 ia32-libs-gtk lib32nss-mdns

$ wget
$ dpkg -i getlibs-all.deb

# incorrect in adobe's instructions (corrected)
$ getlibs -p gnome-keyring
$ getlibs -p libhal-storage1

Seems like these commands were not needed on Ubuntu 10.04:

$ ln -s /usr/lib32/ /usr/lib32/
$ ln -s /usr/lib32/ /usr/lib32/
$ ln -s /usr/lib32/ /usr/lib32/
$ ln -s /usr/lib32/ /usr/lib32/ # missing from adobe's instructions

chmod 777 AdobeAIRInstaller.bin

The final step is to download the Adobe AIR client here:

$ wget
$ /opt/Adobe\ AIR/Versions/1.0/Adobe\ AIR\ Application\ Installer hipchat.air

Thursday, November 3, 2011

Debugging Facebook's JavaScript code

In reviewing Facebook's JavaScript code, there apparently is a way to enable debugging of the JavaScript. If you set fb_debug=1, then the logging option will be enabled:
if (!options.logging &&
    window.location.toString().indexOf('fb_debug=1') < 0) {
  FB._logging = false;
}

Thursday, October 27, 2011

Celerybeat and celerybeat-schedule

In my effort to replace /etc/init.d/celerybeat with a version that works more reliably with fabric, one discovery is that celerybeat keeps firing off all tasks from the scheduled task list. Celerybeat stores the last_run time of each scheduled task (usually in a celerybeat-schedule file in the default dir); if a task hasn't been run in a while (by virtue of using an older celerybeat-schedule file), then you may see a lot of "Sending due task" messages.

You can check the last_run_at apparently by using the shelve library from Python, which celerybeat uses to store all the scheduled tasks defined in your Celery configurations. Each time you restart Celerybeat, this celerybeat-schedule gets merged with your Celery scheduled tasks. Those that don't exist already are added.
sudo python
>>> import shelve
>>> a ="celerybeat-schedule")
>>> a['entries']['my_task'].last_run_at
datetime.datetime(2011, 10, 28, 2, 1, 57, 717454)
The key is to explicitly define the celerybeat-schedule location:

export CELERYBEAT_OPTS="--schedule=/var/run/celerybeat-schedule"

IPython: interactive Python

- who: shows you what variables are defined
- store: stores a variable to a file (%store foo > a.txt)
- reset: clears the namespace
- logstart, logon, logoff
- lsmagic

- run -d <file>: run python code step-by-step in the debugger
- run -p <file>: run python code under the profiler

- xmode Context (xmode Verbose: shows the call values)
- pdb: turns on the debugger for uncaught exceptions
- time <expr>: times a run of the expression

Django caches all its models:

Saturday, October 22, 2011

Integrating OpenID Google Apps Single Sign On with Hudson/Jenkins....

A not so well-documented aspect of using Hudson is that you can integrate OpenID single-sign on (SSO) with your Google Apps domain. You could implement SSO using the Jenkins Crowd plugin that comes pre-packaged with Hudson, but then you'd have to do custom integration work. Since the Crowd protocol is all SOAP-based, just getting the SOAP bindings right can be a big pain. Then you'd have to go about either setting up a Crowd identity server or creating your own version via the Crowd API.

The OpenID plugin does not seem to be provided with the Hudson/Jenkins v2.1.2 release, but you can download and install it yourself. You do need the Sun version of Java (not OpenJDK), since there seem to be some dependencies in the Jenkins code base (the instructions for setting up on Ubuntu are listed here). You also need to install Maven (sudo apt-get install maven2) and configure your ~/.m2/settings.xml.

Once Java and Maven are set up, you can clone the OpenID repo and compile:

1) git clone

2) mvn

If the compile was successful, the openid.hpi plugin should have been compiled into the target/ dir. You need to copy this openid.hpi into your Hudson plugins/ dir (i.e. /var/lib/hudson/plugins). You don't appear to need to add an openid.hpi.pinned file to prevent Hudson from overwriting this package, since OpenID does not come bundled with Jenkins by default.

3) The OpenID plugin expects that the URL a user connects to on your continuous integration server ends with a trailing slash ('/'). In your Apache2 config, you may find that you need to add a rewrite rule to force connections to your server to always end with a '/'. If a request can hit the bare server root, the rewrite rule becomes:

RewriteEngine on
  RewriteRule  ^$  /  [R]

(The major reason is that the getRootUrl() command in the Jenkins code base borrows from the request URL. The OpenID plugin, when it concatenates the OpenID finish callbacks, assumes that there will be a trailing slash at the end. Without it, your OpenID authorization flows may not work.)


If you notice that the OpenID callbacks (i.e federatedLoginService/openid/finish) are not prefixed with a '/', it means that you are missing this trailing slash!

4) Inside the Hudson configuration screen, set the OpenID SSO provider to your Google Apps domain. Your permissions will be defined by the email address of the SSO. If you do not wish anonymous users to be able to login, make sure they do not have any permissions.

5) Make sure to enable OpenID SSO support in your Google Apps domain. The checkbox should be enabled inside "Manage this domain"->"Advanced Tools"->"Federated Authentication using OpenID".

One extra bonus...if you're using the Git plugin with Hudson, you may have also noticed, depending on which version of the Git plugin, that User accounts were based either on the full name or the e-mail username of the Git committer. If you want the user accounts associated with your Git committers to also be linked to your SSO solution, then this pull-request may also be useful.

(If you have pre-existing users, you may wish to convert their user directories from "John Doe" to the email-based username to be consistent.)

(Interesting note: the Git plugin used in the Jenkins/Hudson 2.1.2 release is located at, whereas the older v1 versions are at The code base appears to have diverged a little bit, so one commit patch incorporated in 3607d2ec90f69edcf8cedfcb358ce19a980b8f1a that attempted to create accounts based on the Git committer's username is not included in the v2.1.2 Jenkins release.)

Also, if you use automated build triggers, it appears they still work even if you turned on the OpenID SSO on too!

Update: it looks like the Git plug-in will start to expose an option to use the username's entire email address as a Hudson/Jenkins option.  See the PR below:

Thursday, October 20, 2011


Wondering what the internals of the start-stop-daemon source code are?

Crowd and WSDL

Need the latest copy of the Crowd WSDL file?

1. Visit

2. Download a copy.

3. Unpack the .jar files, and go into the atlassian-x.x.x directory.

4. vi crowd-webapp/WEB-INF/classes/


5. ./

6. Go to http://localhost:8095 (or your dev server IP).

7. You should be able to connect and setup the Crowd service.

8. Go through the setup flow, and get a license key from Atlassian.

9. wget

Need to get the WSDL working in Python? Either use ZSI (which is Google App Engine compatible) or the Python suds library:

The instructions below will show you how to do Crowd authentication using ZSI:

from SecurityServer_services import SecurityServerLocator, SecurityServerHttpBindingSOAP
import SecurityServer_services as sss
from SecurityServer_services_types import ns0

loc = SecurityServerLocator()
server = loc.getSecurityServerPortType()

# build up the application authentication token
r = ns0.ApplicationAuthenticationContext_Def('ApplicationAuthenticationContext').pyclass()
cred = ns0.PasswordCredential_Def('_credential').pyclass()
cred._credential = 'passwordGoesHere'
r._name = 'soaptest'
r._credential = cred
req = sss.authenticateApplicationRequest()
req._in0 = r
token = server.authenticateApplication( req )

# Look up a principal from the 'soaptest' application
prin = sss.findPrincipalByNameRequest()
prin._in0 = token._out
prin._in1 = 'cpepe'
me = server.findPrincipalByName( prin )
for i in me._out._attributes._SOAPAttribute:
    print '%s: %s' % (str(i._name), str(i._values.__dict__))

Using Fabric with sudo

If you've ever had to use Fabric, one of the issues is that your scripts must return an error code of 0 in order for the sudo() command to assert that the command executed successfully. Any non-zero error code will result in an error message.

Fatal error: sudo() encountered an error (return code 1) while executing 'sudo ...'

If you're using bash scripts, this means that any "set -e" or "bash -e" statement will trigger an abnormal exit on the first failing command. The "kill -0 <pid>" command, which allows you to test whether a process exists and can be signaled, suffers from a flaw: if you provide a PID that does not exist, it returns a non-zero status and causes bash to break out if "set -e" or "bash -e" is set (normally you can use $? to check the return value).

You also should check the integer value (if [ $? -eq 0 ]; then ... or if [ $? -eq 1 ]; then ...) to determine which step to take.
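One way to probe a PID safely under "set -e" is to use kill -0 as the condition of an if statement, since a non-zero exit inside a condition does not trip "set -e". A minimal sketch (the PID is a placeholder that is assumed not to exist):

```shell
#!/bin/bash
set -e

pid=999999  # placeholder PID that (almost certainly) does not exist

# Used as an if-condition, kill -0's non-zero exit will not abort the script
if kill -0 "$pid" 2>/dev/null; then
    echo "process $pid is running"
else
    echo "process $pid not found"
fi

echo "script completed"
```

Run directly, this prints "process 999999 not found" and then "script completed", demonstrating that "set -e" did not abort the script.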

Wednesday, October 19, 2011

Minus sign

In bash parameter expansion, the minus sign supplies a default value if the variable isn't set, and the fallbacks can be nested.
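A minimal illustration of the syntax with two nested fallbacks (the variable names are just examples):

```shell
#!/bin/bash
unset PAGER EDITOR

# Use $PAGER if set; otherwise fall back to $EDITOR; otherwise "less"
echo "${PAGER:-${EDITOR:-less}}"

EDITOR=vim
echo "${PAGER:-${EDITOR:-less}}"
```

The first echo prints "less" (neither variable is set); the second prints "vim" once EDITOR is defined.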



Chef template specificity

Template Location Specificity

Cookbooks are often designed to work on a variety of hosts and platforms. Templates often need to differ depending on the platform, host, or function of the node. When the differences are minor, they can be handled with a small amount of logic within the template itself. When templates differ dramatically, you can define multiple templates for the same file. Chef will decide which template to render based on the following rules.
Within a Cookbook's template directory, you might find a directory structure like this:
  • templates
    • ubuntu-8.04
    • ubuntu
    • default
For a node with FQDN of and the sudoers.erb resource above, we would match:
  • ubuntu-8.04/sudoers.erb
  • ubuntu/sudoers.erb
  • default/sudoers.erb
In that order.
Then, for example, a sudoers.rb placed under a host-specific directory within files/ will only be copied to the machine with that domain name. (Note the "host-" prefix to the directory name.)
So, the rule distilled:
  1. host-node[:fqdn]
  2. node[:platform]-node[:platform_version]
  3. node[:platform]
  4. default

Dealing with IOError and mod_wsgi

Monday, October 17, 2011

Sunday, October 16, 2011

get_task_logger() in Celery...

If you've looked at the Celery documentation, you'll notice that get_task_logger() examples
constantly show up:

@task
def add(x, y):
    logger = add.get_logger()"Adding %s + %s" % (x, y))
    return x + y
What does this function do? Well, it turns out that it will create a separate logger instance specifically tied to the task name (submitted as a PR on The propagate=False is always set, so that any messages passed to it will not move up the parent/ancestor chain.

Instead, a handler is always added to this task. If you wish to adjust the logger level,
you could do:

import logging
add.get_logger().setLevel(logging.DEBUG)
If no loglevel is specified in get_logger(), then the default log level defined in CELERYD_LOG_LEVEL is used. Be careful though! The right way is to set the level number (not the level name) if you are modifying directly through Python:

from celery import current_app
from celery.utils import LOG_LEVELS
current_app.conf.CELERYD_LOG_LEVEL = LOG_LEVELS['DEBUG']  # pretty much the same as logging.DEBUG

What's the purpose of get_task_logger()? Well, it appears the motivation is to allow logging by task names. If we were just to import the standard logging module, Celery will patch the logger module to add process-aware information (ensure_process_aware_logger()), and then add formats/handlers to both the root logger and the logger defined by the multiprocessing module (the multiprocessing get_logger() does not use process-shared logs, but it allows you to log things to the "multiprocessing" namespace, which adds SUBDEBUG/SUBWARNING debug levels).

def setup_logging_subsystem(self, loglevel=None, logfile=None, format=None,
        colorize=None, **kwargs):
    if Logging._setup:
        return
    loglevel = loglevel or self.loglevel
    format = format or self.format
    if colorize is None:
        colorize = self.supports_color(logfile)
    if mputil and hasattr(mputil, "_logger"):
        mputil._logger = None
    receivers = signals.setup_logging.send(sender=None,
                    loglevel=loglevel, logfile=logfile,
                    format=format, colorize=colorize)
    if not receivers:
        root = logging.getLogger()
        root.handlers = []
        mp = mputil.get_logger() if mputil else None
        for logger in filter(None, (root, mp)):
            self._setup_logger(logger, logfile, format, colorize, **kwargs)
            signals.after_setup_logger.send(sender=None, logger=logger,
                                    loglevel=loglevel, logfile=logfile,
                                    format=format, colorize=colorize)

Debugging Celery tasks locally

Want to make sure your Celery tasks work correctly before you deploy? Here are a bunch of useful tips you can do:

First, set the root logger and "celery.task.default" to use DEBUG mode:
import logging
logging.getLogger().setLevel(logging.DEBUG)
logging.getLogger("celery.task.default").setLevel(logging.DEBUG)

Set ALWAYS_EAGER mode so that Celery will always invoke tasks locally instead of dispatching to the Celery machine.

Set EAGER_PROPAGATES_EXCEPTION so that any exceptions within tasks will be bubbled up so that you can actually see any exceptions that may cause your batch calls to fail (i.e. any uncaught exception can cause a fatal error!)
from celery import current_app
current_app.conf.CELERY_ALWAYS_EAGER = True
current_app.conf.CELERY_EAGER_PROPAGATES_EXCEPTIONS = True

from celery.utils import LOG_LEVELS
current_app.conf.CELERYD_LOG_LEVEL = LOG_LEVELS['DEBUG']  # pretty much the same as logging.DEBUG

Finally, if you are invoking a task from the same Python script, you should import the task name as if it came from another module, even if the function is declared within the same file. The reason is that when the Celeryd daemon looks for registered tasks, Celery will consider the task function you invoked to come from the "__main__" module. The way to get around it is to import the task residing in the same file, assuming your PYTHONPATH is set correctly.

from celery.decorators import task

def task_name():
   print "here"
   return 1

if __name__ == "__main__":
   # import via this module's real name ("mymodule" is a placeholder)
   from mymodule import task_name


(Note: This information has been updated to reflect Celery v2.3.3 inner-workings).

Saturday, October 15, 2011

Celery and the big instance refactor

One of the strange parts in Celery is that if you want a logger that will write to celery.task.default instead of its own default name, you can do:

from celery.task import Task
logger = Task.get_logger()

The Task class appears to be a globally instantiated part of Celery. Normally, the task logger is set up via the get_logger() method, which calls setup_task_logger(), which in turn calls get_task_logger(). If you invoke get_logger() within a Task class, the task's name is used:

def setup_task_logger(self, loglevel=None, logfile=None, format=None,
            colorize=None, task_name=None, task_id=None, propagate=False,
            app=None, **kwargs):
        logger = self._setup_logger(self.get_task_logger(loglevel, task_name),
                                    logfile, format, colorize, **kwargs)

If you use Task.get_logger(), no name is used and the logger namespace is set to celery.task.default.

def get_task_logger(self, loglevel=None, name=None):
    logger = logging.getLogger(name or "celery.task.default")
    if loglevel is not None:
        logger.setLevel(loglevel)
    return logger

This Task appears to be part of “The Big Instance” refactor. It appears that there are plans to allow multiple instances of the Celery object to be instantiated.

Also, one thing to note:

A task is not instantiated for every request, but is registered in the task registry as a global instance.

This means that the __init__ constructor will only be called once per process, and that the task class is semantically closer to an Actor.


Friday, October 14, 2011

FQL is not being deprecated

You can now do FQL queries via the Open Graph API....

One of the most common misconceptions we hear from developers is the belief that FQL will be deprecated with the REST API. The primary reason for this misunderstanding is that you can only issue FQL queries using the fql.query or fql.multiquery methods. We want to make it clear that FQL is here to stay and that we do not have any plans to deprecate it. Today, we are enabling developers to issue FQL queries using the Graph API. With this change, FQL is now just an extension of Graph API.

You can issue an HTTP GET request to the Graph API's fql endpoint. The ‘q’ parameter can be a single FQL query or a multi-query. A multi-query is a JSON-encoded dictionary of queries.
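A sketch of building such a request in Python, assuming the endpoint is (the query and token below are placeholders):

```python
import json
try:
    from urllib import urlencode           # Python 2
except ImportError:
    from urllib.parse import urlencode     # Python 3

def build_fql_request(query, access_token=None):
    """Build the Graph API URL for an FQL query or a multiquery dict."""
    if isinstance(query, dict):
        query = json.dumps(query)          # multiquery: JSON dict of queries
    params = {"q": query}
    if access_token:
        params["access_token"] = access_token
    return "" + urlencode(params)

url = build_fql_request("SELECT uid, name FROM user WHERE uid = me()")
```

Fetching the resulting URL (with a valid access token) returns the query results as JSON, just like any other Graph API call.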

Thursday, October 13, 2011

Pylint on Ubuntu

Ever see this issue?
>>> from logilab.common.compat import builtins
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ImportError: cannot import name builtins

Chances are you have an old version of logilab that is stored inside /usr/lib/pymodules:
>>> import logilab
>>> logilab.common

>>> logilab.common.compat

>>> logilab.common.compat.__file__

The solution is to delete the logilab directory in /usr/lib/pymodules, or do:

sudo apt-get remove python-logilab-common
sudo apt-get remove python-logilab-astng

Then you can do:
pip install -U pylint

Friday, September 23, 2011

Facebook Python code for OAuth2

Facebook recently announced that on October 1st, 2011, all Facebook third-party apps will need to transition to OAuth2. The JavaScript and PHP SDK code is posted, but how would you make the change if you're using Python/Django?  To help others make the transition, we've released our own set of Python code at this GitHub repo:

One of the pain points is that users may have existing OAuth cookies set in their browser, which your current application may use to authenticate. However, because Facebook Connect's JavaScript library requires an apiKey parameter change, it is hard to use their existing library to force these fbs_ cookie deletions. Furthermore, you'd have to write your own JS, since the Facebook JS SDK is hard-coded to use only the new apiKey parameter.

We also show in this code how you can force these fbs_ cookie deletions on the server-side, primarily by setting the expiration date and providing the correct domain= parameter back to the client.  It's worked well for us in managing the transition to OAuth2, so we hope you will find the same approach useful.

Good luck!