Monday, December 31, 2012

How LinkedIn's JS API implements JavaScript parameters

If you've looked at LinkedIn's API, you may have noticed that they follow a curious approach to providing script parameters to an external JavaScript file:

https://developer.linkedin.com/documents/configuration-and-compatibility

<script type="text/javascript" src="http://platform.linkedin.com/in.js">
  api_key:    [API_KEY]
  onLoad:     [ONLOAD]
  authorize:  [AUTHORIZE]
</script>

Initial inspection seems to suggest that you are declaring three separate JavaScript variables: api_key, onLoad, and authorize. In actuality, you are just writing text inside the script tag, which gets parsed with a handful of regexps after the whitespace is stripped out.

Basically, the code below extracts the innerHTML and then sets the variables r and K to each key/value pair. Whitespace is removed with the replace() function.

http://platform.linkedin.com/in.js

        try {
            m = f.innerHTML.replace(A, n)
        } catch (z) {
            try {
                m = f.text.replace(A, n)
            } catch (y) {
            }
        }
    }
    m = m.replace(J, "$1").replace(A, n).replace(F, n);
    aa = C.test(m.replace(j, n));
    for (var T = 0, S = m.split(k), q = S.length; 
    T < q; 
    T++) {
        var s = S[T];
        if (!s || s.replace(j, n).length <= 0) {
            continue
        }
        try {
            V = s.match(g);
            r = V[1].replace(A, n);
            K = V[2].replace(A, n)
        } catch (X) {
            if (!aa) {
                console.warn("script tag contents must be key/value pairs separated by a colon. Source: " + X)
            }
            continue
        }
        N(r, K)
    }

Some of the regexps are defined at the top of in.js:

var R = {
        "bootstrapInit": +new Date()
    }, p = document,
        l = (/^https?:\/\/.*?linkedin.*?\/in\.js.*?$/),
        b = (/async=true/),
        D = (/^https:\/\//),
        J = (/\/\*((?:.|[\s])*?)\*\//m),
        F = (/\r/g),
        j = (/[\s]/g),
        g = (/^[\s]*(.*?)[\s]*:[\s]*(.*)[\s]*$/),
        x = (/_([a-z])/gi),
        A = (/^[\s]+|[\s]+$/g),
        u = (/^[a-z]{2}(_)[A-Z]{2}$/),
        C = (/suppress(Warnings|_warnings):true/gi),
        d = (/^api(Key|_key)$/gi),
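The parsing above can be sketched as follows — a hypothetical re-implementation in Python rather than the original JavaScript, with COMMENT and PAIR playing the roles of the J and g regexps:

```python
import re

# Strip /* ... */ comments (like regex J above), then parse each
# non-empty line as a "key: value" pair (like regex g above).
COMMENT = re.compile(r'/\*(?:.|\s)*?\*/')
PAIR = re.compile(r'^\s*(.*?)\s*:\s*(.*?)\s*$')

def parse_script_params(inner_html):
    """Parse the free-form text inside the <script> tag into a dict."""
    text = COMMENT.sub('', inner_html).replace('\r', '')
    params = {}
    for line in text.split('\n'):
        if not line.strip():
            continue
        m = PAIR.match(line)
        if m:
            params[m.group(1)] = m.group(2)
    return params

params = parse_script_params("""
  /* settings */
  api_key:    abc123
  onLoad:     init
  authorize:  true
""")
print(params)  # {'api_key': 'abc123', 'onLoad': 'init', 'authorize': 'true'}
```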

OAuth2: The Road to Hell

http://hueniverse.com/2012/07/oauth-2-0-and-the-road-to-hell/

http://hueniverse.com/2010/09/oauth-2-0-without-signatures-is-bad-for-the-web/


RealtimeConf - "OAuth 2.0 - Looking Back and Moving On" by Eran Hammer from &yet on Vimeo.

Saturday, December 22, 2012

Upgrading NetworkManager and ModemManager with Ubuntu 12.04

Ever since I upgraded to Ubuntu 12.04, I've noticed that Franklin U600 USB modem disconnects are frequent (especially when moving on a train) and prevent you from reconnecting to the USB device unless you reboot. Upgrading to the latest Ubuntu packages doesn't seem to resolve the issue. One of the main issues appears to be that the ModemManager that comes with Ubuntu 12.04 runs on version 0.5, which appears to have some stability issues with random disconnects (possibly because of this issue with modem disconnects, callbacks, and weak references).

To try to understand better, I started to dig into Modem Manager. Modem Manager has a Wiki article about troubleshooting. To prevent modem-manager from being restarted each time it was killed, I simply renamed the /usr/bin program and then started the program manually with the --debug flag. After numerous tries of watching the modem get disconnected, I noticed that this state change seemed to occur quite a bit. During the disconnection process, the modem would suddenly jump back to the connected state:
Nov 12 08:49:37 my-laptop modem-manager[8862]:   Modem /org/freedesktop/ModemManager/Modems/0: state changed (connected -> disconnecting)
Nov 12 08:49:37 my-laptop NetworkManager[8912]:    SCPlugin-Ifupdown: devices removed (path: /sys/devices/virtual/net/ppp0, iface: ppp0)
Nov 12 08:49:37 my-laptop modem-manager[8862]:   Modem /org/freedesktop/ModemManager/Modems/0: state changed (disconnecting -> connected)
ModemManager 0.6.0 appears to fix these issues, as well as introduce a new approach for writing plugins for the package.

Currently Ubuntu 12.04 doesn't have the latest packages for ModemManager, and the issue was severe enough to compel me to compile the latest version myself (http://ftp.gnome.org/pub/gnome/sources/ModemManager/):
sudo apt-get install libdbus-glib-1-dev
sudo apt-get install libgudev-1.0-dev
wget 
./configure
make
sudo apt-get remove modemmanager
sudo make install
I've been running with the latest 0.6.4 version and the problem doesn't happen anymore. If the connection drops, the modem will eventually be made available again in the Network Manager applet.

Another problem I've noticed is that the Network Manager applet often has blank entries for VPN connections, which appears to be a reported bug. I've also encountered issues with the Gigabit Ethernet port not reliably connecting over a longer cable, so I finally decided to take the hit and install the latest versions of Network Manager and the Network Manager applet. There is also the issue that the Network Manager applet fails to respond to any mouse clicks, though typing "nmcli nm wwan on" enables the Mobile Broadband option.

These issues all seemed to necessitate upgrading Network Manager from the standard Ubuntu 12.04 packages. Be forewarned: if you attempt this process, you could easily disable your ability to access the Internet. Without the magic that NetworkManager does to handle all your wired and wireless connections, you will have to resort to using ifconfig to set up a static IP address (i.e. ifconfig eth0 192.168.1.xxx), setting up a static default gateway (route add default gw 192.168.1.1), and adding a nameserver entry in /etc/resolv.conf (nameserver x.x.x.x). So I'd strongly suggest you try this process out on a local network unless you're also adept at connecting to wireless LANs via the command line.

Download the latest version of Network Manager from this address:
http://ftp.gnome.org/pub/GNOME/sources/NetworkManager/0.9/

Unpack the files and make sure you have these dev libraries installed:
sudo apt-get install intltool
sudo apt-get install libnl-dev 
sudo apt-get install uuid-dev
sudo apt-get install libnss3-dev
sudo apt-get install ppp-dev
./configure --sysconfdir=/etc --libexecdir=/usr/lib/NetworkManager --localstatedir=/var
make

At this point, you should avoid doing any make installs. You want to make sure both Network Manager and the Network Manager applet compile without any issues before removing the packaged versions. See http://projects.gnome.org/NetworkManager/developers/ since there are also different compile options in the instructions.

Note: The --sysconfdir is set to /etc, which will allow the compiled version of NetworkManager to find your current configuration (in /etc/NetworkManager). This way, you can re-use all your wired, wireless, and mobile broadband connections. The --libexecdir is needed since default Ubuntu installs dump the DHCP/PPTP scripts inside this directory, which are whitelisted by AppArmor. If you start seeing Permission Denied errors, chances are your paths are being blocked by AppArmor. You also need the --localstatedir option for the same reason, since this directory specifies where the DHCP client will try to write the PID data.

The Network Manager applet can be downloaded from this location:

http://ftp.gnome.org/pub/GNOME/sources/network-manager-applet/0.9/network-manager-applet-0.9.6.4.tar.xz
tar xvfp network-manager-applet-0.9.6.4.tar.xz
sudo apt-get install libgconf2-dev
sudo apt-get install libgnome-keyring-dev 
sudo apt-get install libnotify-dev 
sudo apt-get install libgtk-3-dev 
./configure --sysconfdir=/etc --libexecdir=/usr/lib/NetworkManager
make
Note: if you try to compile with libgtk2.0-dev, chances are that the compile will fail. The applet nominally supports GTK 2.0+, but on Ubuntu 12.04 it seems that the libgtk-3-dev library is what is really needed.

If the compiles were successful, now is the time to remove Network Manager. Be forewarned that at this point you could easily block your ability to access the Internet. So only proceed to this point if you're confident enough to recover.
sudo apt-get remove network-manager
sudo apt-get purge network-manager-gnome && sudo apt-get install network-manager-gnome

The --libexecdir for the Network Manager is needed for invoking other plugins, such as VPN. If you intend to use VPN, you'll also need to git clone the module you want (in this case, we're cloning the PPTP module).
git clone git://git.gnome.org/network-manager-pptp
cd network-manager-pptp/
./autogen.sh --sysconfdir=/etc --libexecdir=/usr/lib/NetworkManager --localstatedir=/var
make
sudo make install
You can then do make install in both project directories and attempt to start Network Manager (or log out and log back in).

Note: if you don't see the nm-applet at the top, here are a few places to check. The first requires using dconf-editor, which needs to be installed first:
sudo apt-get install dconf-tools
dconf-editor

The instructions for dconf-editor are listed here; basically, make sure that notifications are not disabled in the nm-applet namespace: http://askubuntu.com/questions/150406/how-do-i-re-enable-disabled-network-notifications-in-gnome-shell

You also want to make sure, using dconf-editor, that the com.canonical.Unity.Panel systray-whitelist is set to "['all']" (or at least includes nm-applet). I found that my system tray was not set to this value, which prevented nm-applet from being rendered:

http://askubuntu.com/questions/136733/some-system-tray-icons-invisible-in-gnome-classic-12-04

Normally I wouldn't recommend trying to compile and install Modem Manager and Network Manager, but the current packaged versions supplied in Ubuntu 12.04 appear to be so buggy and unstable that this approach helps deal with many of the issues that have been reported.

Thursday, December 20, 2012

Uploading videos to SmugMug

If a video upload fails, SmugMug allows you to view your Upload Log in your Settings page to see the error. In my case, however, a wrong video resolution caused it to return an "invalid video codec" error. After trying all different types of formats (WebM, H.264, FLV), I finally succeeded by converting the video from its original 400x224 resolution to 320x240, which resolved the problem.

avconv -i <MP4 input file> -s 320x240 <output file>

There isn't much documentation on SmugMug's site to give you any indication that it's the resolution size that has to be adjusted:

http://help.smugmug.com/customer/portal/articles/84569-how-to-convert-and-format-a-video-for-upload-to-smugmug

Side note: if you want to swap the audio/video streams, here's how you do it. The first map parameter, 0:1:0, means take input file 0 (since you can have multiple input files), stream 1, and map it to output stream 0. The second, 0:0:1, takes stream 0 of input file 0 and maps it to output stream 1.

avconv -i <input file 0> -vcodec copy -acodec copy -map 0:1:0 -map 0:0:1 <output file>

Wednesday, December 19, 2012

Why lxml.find() calls return False..

https://mailman-mail5.webfaction.com/pipermail/lxml/2005-August/000332.html


What happens is that an element evaluates to False if it has no 
children, and True if it does. The presence of text content, attributes 
or a tail does not affect the boolean status; if no elements exist it'll 
still be False.

find() has the behavior to return None if the value cannot be found. You 
can change your tests to something like:

if xmldoc.find('child') is not None:
     ...

to check whether you have a child.
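The same truthiness quirk exists in the standard library's ElementTree, which shares lxml's element API; a small sketch of why the `is not None` test is the safe one:

```python
import xml.etree.ElementTree as ET

# An element with text but no child elements is still "falsy", so a bare
# truth test on find()'s result can wrongly report an existing element
# as missing. Always compare against None instead.
doc = ET.fromstring('<root><child>some text</child></root>')

child = doc.find('child')
assert child is not None           # the element exists...
assert child.text == 'some text'   # ...and carries text content
assert len(child) == 0             # ...but has no child elements of its own

missing = doc.find('nope')
assert missing is None             # find() returns None when nothing matches
```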

Friday, December 14, 2012

Microsoft Internet Explorer 12152 errors..

IE has a 1 min keepalive timeout: http://support.microsoft.com/kb/813827

Apache2 has a default 5 second timeout: http://httpd.apache.org/docs/2.2/mod/core.html#keepalive

Therefore, Internet Explorer assumes the connection will stay open for a minute, but Apache2 attempts to close it after 5 seconds.  The problem is that if the server has closed the connection and the initial POST fails, Internet Explorer will retry by sending only the headers but not the HTTP body.  Since the Apache2 server is expecting the rest of the data, eventually a timeout occurs and the 12xxx errors appear.

http://stackoverflow.com/questions/4796305/why-does-internet-explorer-not-send-http-post-body-on-ajax-call-after-failure

These hotfixes exist in IE7-IE9, but they aren't activated by default.

The same note applies to Internet Explorer 7, 8, and 9 alike: "This hotfix is included in Internet Explorer [7/8/9]. However, the hotfix must still be enabled as described in the 'How to enable this hotfix' section."

http://support.microsoft.com/kb/895954
http://support.microsoft.com/kb/831167/en-us

One option is to disable keepalives for all Internet Explorer sessions, but here's a more targeted way using SetEnvIf in Apache 2.2 to accomplish it:

  # Remove after upgrading to Apache 2.4 and we can use SetEnvIfExpr instead of
  # this funky logic.
  #
  # Test by:
  # curl -v --user-agent 'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.2; WOW64; Trident/6.0; .NET4.0E; .NET4.0C; .NET CLR 3.5.30729; .NET CLR 2.0.50727; .NET CLR 3.0.30729; BRI/2;)' -H "X-Requested-With: XMLHttpRequest" -k 
  #
  # ...and all different combinations.
  #
  #  Based on: http://stephane.lesimple.fr/blog/2010-01-28/apache-logical-or-and-conditions-with-setenvif.html
  #
  # Logic below will disable keepalive's on Ajax-based IE browsers.  We may see a small performance
  # hit but better than random 12xxx Microsoft errors because of http://support.microsoft.com/kb/895954.
  # See http://stackoverflow.com/questions/4796305/why-does-internet-explorer-not-send-http-post-body-on-ajax-call-after-failure for
  # more context.
  #
  SetEnvIf User-Agent "^" disable_keepalive=0
  SetEnvIf User-Agent "MSIE [17-9]" disable_keepalive=1
  # Negative lookahead regexp matching.  If there is no Ajax XmlHttpRequest, we can invert
  # the flag that attempts to disable keepalive's.  Equivalent to performing an AND.
  SetEnvIf X-Requested-With "^(?!XMLHttpRequest).*" !disable_keepalive
  SetEnvIf disable_keepalive 1 nokeepalive downgrade-1.0 force-response-1.0
Also note that Apache2's default-ssl file has this configuration (see http://blogs.msdn.com/b/ieinternals/archive/2011/03/26/https-and-connection-close-is-your-apache-modssl-server-configuration-set-to-slow.aspx):
BrowserMatch "MSIE [2-6]" \
    nokeepalive ssl-unclean-shutdown \
    downgrade-1.0 force-response-1.0
# MSIE 7 and newer should be able to use keepalive
BrowserMatch "MSIE [17-9]" ssl-unclean-shutdown

(Yes, the MSIE [17-9] regex is correct https://bugs.launchpad.net/ubuntu/+source/apache2/+bug/626728)

What does ssl-unclean-shutdown do? It is documented in /etc/apache2/sites-available/default-ssl:
#   SSL Protocol Adjustments:
#   The safe and default but still SSL/TLS standard compliant shutdown
#   approach is that mod_ssl sends the close notify alert but doesn't wait for
#   the close notify alert from client. When you need a different shutdown
#   approach you can use one of the following variables:
#   o ssl-unclean-shutdown:
#     This forces an unclean shutdown when the connection is closed, i.e. no
#     SSL close notify alert is send or allowed to received.  This violates
#     the SSL/TLS standard but is needed for some brain-dead browsers. Use
#     this when you receive I/O errors because of the standard approach where
#     mod_ssl sends the close notify alert.
More details:
http://blogs.msdn.com/b/askie/archive/2009/06/18/change-in-behavior-with-internet-explorer-7-and-later-in-regard-to-connect-requests.aspx
https://groups.google.com/group/microsoft.public.winhttp/tree/browse_frm/month/2004-07/ee64525371504ef0?rnum=21&lnk=ol&pli=1


Addendum: The problem can often happen if you have load balancers with a shorter timeout than the keepalive.  In this case, you may wish to increase your load balancer timeouts to be greater than the IE Ajax timeouts.  If you adjust the load balancer timeouts, chances are you will see these 12xxx errors disappear too.

Thursday, December 13, 2012

Configuring Ubuntu 12.04 to take screenshot areas with shortcuts

MacOSX has Command-Shift-4 to take a screenshot area.

You can do the same with Ubuntu...

http://askubuntu.com/questions/170163/how-do-i-set-a-shortcut-to-screenshot-a-selected-area

The instructions are listed below.  The confusing part is that you add the Name/Command and then assign the keyboard shortcut by clicking on the entry after adding it.

  1. Open System Settings -> Keyboard settings -> Shortcuts
  2. Select Custom Shortcuts (you can go to Screenshots too and it will work)
  3. Click +
  4. Fill in the fields:
    • Name: Take a screenshot of area
    • Command: gnome-screenshot -a or shutter -s (if you prefer shutter)
  5. Click OK

Wednesday, December 12, 2012

Difference between max-age and expires cookies

http://mrcoles.com/blog/cookies-max-age-vs-expires/


  • Expires sets an expiry date for when a cookie gets deleted
  • Max-age sets the time in seconds for when a cookie will be deleted
  • Internet Explorer (ie6, ie7, and ie8) does not support “max-age”, while (mostly) all browsers support expires
http://blogs.msdn.com/b/ieinternals/archive/2009/08/20/wininet-ie-cookie-internals-faq.aspx

http://www.adobe.com/devnet/coldfusion/articles/coldfusion-securing-apps.html

Any cookies that you create with the httponly attribute will not be present in JavaScript's document.cookie variable on browsers where HttpOnly is supported. Browsers will still send HttpOnly cookies when making AJAX or XMLHttpRequest calls; however, their values still cannot be accessed from your JavaScript code.
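Given the IE limitation above, a common workaround is to send both attributes on the same cookie. A sketch using Python's http.cookies module — the cookie name and expiry date here are made up for illustration:

```python
from http.cookies import SimpleCookie

# Set both Expires (for old IE, which ignores Max-Age) and Max-Age (for
# everything else), plus HttpOnly to keep the value out of document.cookie.
cookie = SimpleCookie()
cookie['session_id'] = 'abc123'
cookie['session_id']['max-age'] = 3600
cookie['session_id']['expires'] = 'Wed, 01 Jan 2014 00:00:00 GMT'
cookie['session_id']['httponly'] = True

header = cookie.output()
print(header)
```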

Saturday, December 1, 2012

Using M2Crypto with virtualenv/pip

The M2Crypto version 0.21.1-1 breaks when installed via pip inside a virtualenv: http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=637904 If you do apt-get install python-m2crypto instead, you can create a symbolic link into the virtualenv:
ln -s /usr/lib/python2.7/dist-packages/M2Crypto ~/.virtualenvs/[virtualenv dir]/local/lib/python2.7/site-packages/M2Crypto

Sunday, November 18, 2012

Gnome Keyring

See this error?
/usr/lib/i386-linux-gnu/pkcs11/gnome-keyring-pkcs11.so: /usr/lib/i386-linux-gnu/pkcs11/gnome-keyring-pkcs11.so: cannot open shared object file: No such file or directory
It's an issue with Gnome key-ring, possibly this issue with multi-architecture support: https://bugs.launchpad.net/ubuntu/+source/gnome-keyring/+bug/859600

Wednesday, November 14, 2012

Using mod_auth_openid with Google Apps

Because of a recent release of a Jenkins plugin for Google Apps SSO, I wondered whether there was a way to get mod_auth_openid working without prompting a user to choose between Google Apps domains.  The basic approach would have been to set Apache configurations similar to the ones described in this previous blog posting, except for the following lines:
AuthOpenIDTrusted ^https://www.google.com/a/yourdomain.com/o8/ud
AuthOpenIDSingleIdP https://www.google.com/accounts/o8/site-xrds?hd=yourdomain.com
If you try to make mod_auth_openid work with the configuration above, chances are you'll see "OP is not authorized to make an assertion regarding the identity".  This issue was encountered by another user in this thread (the Step2 libraries referred to are the OpenID Google Apps discovery extensions that Google describes at https://code.google.com/p/step2/).  Unless a patch is made to mod_auth_openid or the underlying libopkele C++ library to support these Google discovery extensions, chances are that you won't be able to use this approach.

The apparent problem is that Google has a special way of dealing with OpenID discovery process:

https://developers.google.com/google-apps/help/faq/auth#what_discovery
It is important to note that the RP must use a slightly different discovery mechanism to support Google Apps accounts, which is covered in depth here. In short, during the OpenID discovery process, RPs must check both (using example.com as the example domain) https://www.google.com/accounts/o8/.well-known/host-meta?hd=example.com and http://example.com/.well-known/host-meta for discovery information, as the site owners may opt to have Google host this information, rather than host it themselves.
Basically it means that when a user authenticates on Google Apps, you're given back not only a unique identifier for the user (known as the claimed id) but also a URL. If your domain is mydomain.com, the URL that is returned is http://mydomain.com/openid?id=<id>. The OpenID specs expect this URL to be available for you to grab metadata to determine where to inquire about this ID.  If an OpenID library tries to open this URL, chances are that it will 404 and therefore fail the user discovery process.

Google proposes that users implementing Google Apps SSO look at both https://www.google.com/accounts/o8/.well-known/host-meta?hd=example.com and http://example.com/.well-known/host-meta. Neither of these URLs follows the OpenID spec: the latter requires you to host this metadata yourself, and you also have to deal with digitally signing it. The approach is summarized in the User Discovery section of the Google OpenID documentation.
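In code, the two discovery locations described above amount to something like the sketch below; example.com is a placeholder and no request is actually made here:

```python
def google_apps_discovery_urls(domain):
    """Return the two host-meta locations an RP must check, in order."""
    return [
        # Google-hosted metadata for the domain:
        "https://www.google.com/accounts/o8/.well-known/host-meta?hd=" + domain,
        # Metadata hosted by the domain owner themselves:
        "http://" + domain + "/.well-known/host-meta",
    ]

urls = google_apps_discovery_urls("example.com")
```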

The Jenkins OpenID and Python OpenID plugins skip this step and just assume https://www.google.com/accounts/o8/user-xrds?uri= is the correct URL to use.  Normally the metadata would need to include a <URITemplate> tag, which then gets used to perform the user verification.

https://github.com/jenkinsci/openid-plugin/commit/91beef6857f0d7956ee0ad27ac24744f961ae6c2 
https://github.com/adieu/python-openid/commit/789cc11950e94c09e7c912a34b4e1d1d8f20c62b

More background here: 

http://www.slideshare.net/timdream/google-apps-account-as-openid
https://sites.google.com/site/oauthgoog/fedlogininterp/openiddiscovery

Also, someone else attempted to side-step this issue with Google Apps and mod_auth_openid, but it feels more like a way to circumvent some of the OpenID discovery process:

https://gist.github.com/2635479

Jenkins Plugins updates!

Two improvements for Jenkins users that seem very promising.

First, the OpenID plugin now has explicit Google Apps support. You now set the domain, and the XML configuration automatically sets the OpenID XRDS discovery to rely on that specific domain. You could previously use Google Apps, but users were asked to select which domain to use if they were signed in with multiple accounts (i.e. your personal and company Gmail accounts).

Instead of:
https://www.google.com/accounts/o8/id
The Google Apps domain will now set the domain to be:
https://www.google.com/accounts/o8/site-xrds?hd=yourdomain.com
There is an extra Google Apps SSO option now exposed:


Keep in mind that you must use the Google Apps SSO with this new URL.  The reason is that Google Apps has a special OpenID discovery mechanism that breaks with standard OpenID.  For more details, check out the source code for this plugin.

Secondly, the Cobertura Plugin has support to allow fails on low coverage results as well as ratcheting to improve coverage percentages.

 https://wiki.jenkins-ci.org/display/JENKINS/Cobertura+Plugin


See the diff here:

https://github.com/jenkinsci/cobertura-plugin/commit/1cef2b6890bf3388e7e80f12bbe8588035049403

If a change reduces the current coverage, you can now have your builds fail!  There are also extra checkboxes to allow ratcheting to occur, which means that if the coverage is higher than the current build, the new minimum will be set to that number:

Publishing Cobertura coverage results...
Cobertura coverage report found.

Lines's new health minimum is: 90.8
Lines's new stability minimum is: 90.8

Tuesday, November 13, 2012

Verifying X509 certs and private keys

Using the Chef SSL cookbook and want a way to verify your X509 cert and private key sign correctly?  Here's how you can use M2Crypto and JSON loads to double check....

Adapted from http://note.harajuku-tech.org/m2crypto-signverify-with-x509-rsa-sha-256...
from M2Crypto import RSA, X509
import hashlib
import json

data_bag_1 = json.loads(open("mycert.json", "r").read())

key = str(data_bag_1['key'])
cert = str(data_bag_1['cert'])

# Sign a SHA-256 digest with the private key...
pk = RSA.load_key_string(key)
digest = hashlib.sha256("ABCDEFGHIJKLMN").digest()
signature = pk.sign(digest, 'sha256')

# ...and verify it against the certificate's public key. Note that the
# digest algorithm is passed explicitly, since M2Crypto defaults to 'sha1'.
pub = X509.load_cert_string(cert).get_pubkey().get_rsa()
pub.verify(digest, signature, 'sha256')

Sunday, November 11, 2012

Technology field offices: a new precedent?

"It's an experiment", Catherine Bracy, one of the staff members assigned to manage Obama's first ever technology field office in San Francisco, said to me in early March 2012. Instead of searching for volunteers to make phone calls to battleground states, she was recruiting for engineers. The directive from Chicago was to staff this field office with volunteers who wanted to code for the campaign.

I signed the 10-page document stating that any services rendered on the campaign's behalf belonged to Obama for America, that the work did not entitle us to employee benefits, and that anything confidential could not be disclosed.  Essentially, it boiled down to this: everything you did would be unpaid and wouldn't belong to you. Nonetheless, it also enabled people like myself to get a glimpse into the inner workings of the Obama campaign and contribute towards the reelection of the President. The expectation was that we'd commit 5-10 hours a week, which allowed us to still hold a full-time job.

"Fastest...turnaround...ever," Angus Durocher joked when I sent back the agreement within 10 minutes of receiving it. Angus had been one of the lead engineers at YouTube before its acquisition by Google in 2006 and left the startup world to work on the Obama campaign, first as the Deputy New Media Director in New Mexico for 2008 and now was tasked as the "one lonely engineer" to staff the Obama Technology Field Office for 2012. He'd be at the field office until the last person left, tuning into the latest Boston Red Sox or San Francisco Giants baseball games while fueled by donuts and apple crisps brought earlier in the week.

I worked on a variety of projects, refining the voting engine that powered the "Runway to Win" contest, where the top three submissions from amateur designers were sold as T-shirts on the Barack Obama store, or building a Romney translation tool that revealed the candidate's more right-leaning positions. Other volunteers were asked to work on projects that illustrated the impact of the American Recovery and Reinvestment Act or Affordable Health Care Act, along with a multitude of other applications. Catherine Bracy and Angus Durocher quarterbacked the product development and testing in San Francisco before handing off the projects to Chicago.

Perhaps one of the most significant contributions from the San Francisco technology office was the introduction of Trip Planner. Designed and built by volunteers Marc Love and Johnvey Hwang, among many others, it was a travel site that enabled supporters to find and offer housing and rides in battleground states. With thousands of emails asking California volunteers to travel to Nevada, the site gave individuals a way to arrange carpools, a tool that could undoubtedly be improved and enhanced for future presidential campaign cycles.

In 2008 and 2012, the Obama campaign established more than 700 field offices across the country to build a formidable get-out-the-vote operation.  In San Francisco, one was opened primarily focused on technology development efforts, which represented one of the first times members of the tech community were enlisted to create software for the campaign. In the spirit of the campaign's mantra, the response became "Yes We Code!"

Roger Hu is a former Obama delegate to the 2008 Democratic National Convention, and a volunteer from the San Francisco Technology Field Office for the Obama 2012 campaign.

Thursday, November 1, 2012

Using the Franklin U600 Sprint modem on Ubuntu 12.04

Ubuntu 12.04 seems to have support for the Franklin U600 Sprint modem right out of the box. Instead of configuring udev rules to auto-detect the device by issuing a modprobe usbserial vendor=0x1fac product=0x0151 command, Ubuntu 12.04 seems to detect this device automatically using the cdc_acm module:
$ usb-devices

T:  Bus=01 Lev=03 Prnt=11 Port=01 Cnt=01 Dev#= 12 Spd=12  MxCh= 0
D:  Ver= 1.10 Cls=02(commc) Sub=00 Prot=00 MxPS=64 #Cfgs=  1
P:  Vendor=1fac ProdID=0151 Rev=00.00
S:  Manufacturer=Franklin Wireless Corp.
S:  Product=U600 EVDO Modem 
C:  #Ifs= 6 Cfg#= 1 Atr=80 MxPwr=500mA
I:  If#= 0 Alt= 0 #EPs= 1 Cls=02(commc) Sub=02 Prot=01 Driver=cdc_acm
I:  If#= 1 Alt= 0 #EPs= 2 Cls=0a(data ) Sub=00 Prot=00 Driver=cdc_acm
I:  If#= 2 Alt= 0 #EPs= 2 Cls=ff(vend.) Sub=ff Prot=ff Driver=(none)
I:  If#= 3 Alt= 0 #EPs= 2 Cls=ff(vend.) Sub=ff Prot=ff Driver=(none)
I:  If#= 4 Alt= 0 #EPs= 3 Cls=ff(vend.) Sub=ff Prot=ff Driver=(none)
I:  If#= 5 Alt= 0 #EPs= 2 Cls=ff(vend.) Sub=ff Prot=ff Driver=(none)
Lately, I've noticed that sometimes the device fails to be auto-detected after removing it. One workaround that appears to work is to remove the kernel module yourself after physically taking out the USB device:
$ sudo rmmod cdc_acm
The cdc_acm driver is used only for 3G access (Qualcomm QSC6085 chipset). The U600 comes with a Beceem (now Broadcom) chipset and Sprint appears to have released a reference guide with patches for Linux 2.6. Instructions for setting up the 4G driver and WiMax certs are also included for Franklin U250, U300, and U600.

You may also need to restart modem-manager because of this bug too: https://bugs.launchpad.net/ubuntu/+source/ppp/+bug/869954
http://osdir.com/ml/networkmanager-list/2010-07/msg00071.html
https://mail.gnome.org/archives/networkmanager-list/2012-August/msg00125.html
https://mail.gnome.org/archives/networkmanager-list/2012-February/msg00097.html

Wednesday, October 31, 2012

BeautifulSoup v4.1.3 patch

Apparently a tag such as the following will break BeautifulSoup when using html5lib:

<svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink">

  File "/home/external/html5lib/treebuilders/_base.py", line 291, in insertElementNormal
    element.attributes = token["data"]
  File "/home/external/bs4/builder/_html5lib.py", line 147, in setAttributes
    new_name = NamespacedAttribute(*name)
  File "/home/external/bs4/element.py", line 30, in __new__
    obj = unicode.__new__(cls, prefix + ":" + name)
TypeError: unsupported operand type(s) for +: 'NoneType' and 'str'
https://bugs.launchpad.net/beautifulsoup/+bug/1073810

Tuesday, October 23, 2012

Getting HTML5 audio to work on Chrome Ubuntu

Apparently you need to install this package to make HTML5 audio tags play correctly:


sudo apt-get install chromium-codecs-ffmpeg-extra

Wednesday, October 17, 2012

Difference between mock and MagicMock


A MagicMock comes with magic methods such as __getitem__ preconfigured, while a plain Mock does not:

>>> x = mock.MagicMock(tst='abc')
>>> x[0]
<MagicMock name='mock.__getitem__()' id='...'>

>>> x = mock.Mock(tst='abc')
>>> x[0]
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: 'Mock' object does not support indexing
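The same session in script form — a sketch using unittest.mock, which the standalone mock library was folded into (note that Python 3's TypeError message reads "not subscriptable" rather than "does not support indexing"):

```python
from unittest import mock

# MagicMock preconfigures magic methods such as __getitem__,
# so indexing works and returns another MagicMock.
magic = mock.MagicMock(tst='abc')
item = magic[0]
assert magic.tst == 'abc'

# A plain Mock has no __getitem__, so indexing raises TypeError.
plain = mock.Mock(tst='abc')
try:
    plain[0]
    raised = False
except TypeError:
    raised = True
assert raised
```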

Friday, October 12, 2012

Using iptables and ufw

There are a lot of instructions out there about configuring a VPN server on Ubuntu, but how does it all work?   The basic idea is that once you set up a PPTPD server, you need to configure your iptables rules to allow packets to flow between your ppp interface and your Ethernet interface.

http://silverlinux.blogspot.com/2012/05/how-to-pptp-vpn-on-ubuntu-1204-pptpd.html

Here are some basic commands you can use for iptables.  There are INPUT, FORWARD, and OUTPUT chains, and each chain's default policy (either ACCEPT or DROP) determines the default action when no rule matches.

If you want to see how your rules are working, you can add a rule for logging:

iptables -A <INPUT/FORWARD/OUTPUT> -j LOG --log-prefix="INPUT/FORWARD/OUTPUT prefix" --log-level=3

(The -j flag jumps to the LOG target, which takes --log-prefix and --log-level as options.)

To replace an existing iptables rule (rules are numbered starting from 1), you can do:
iptables -R <INPUT/FORWARD/OUTPUT> <rule #> <rule>

To insert a rule at the beginning of a chain, you can do:
iptables -I <INPUT/FORWARD/OUTPUT> <rule>

If you don't want the default ACCEPT policy on the FORWARD chain that a lot of PPTPD documentation suggests, you can add explicit forwarding rules instead:

-A ufw-before-forward -i ppp0 -o eth0 -j ACCEPT
-A ufw-before-forward -i eth0 -o ppp0 -j ACCEPT

Apparently ufw adds some extra chains called ufw-before-input, ufw-before-output, and ufw-before-forward, so you can take advantage of them.

Monday, September 24, 2012

jQuery 1.8.2 released

http://blog.jquery.com/2012/09/20/jquery-1-8-2-released/

 ...and they added this patch, which means the problems with the Comcast Protection Suite are at least resolved.

#12423: jQuery breaks with Comcast Protection Guard and any anti-keylogging protection software on IE7+

Friday, September 21, 2012

More Unicode strangeness in Python 2.x

$ python -c "print u'Hey there\u2013t'"
Hey there–t
$ python -c "print u'Hey there\u2013t'" > `tempfile`
Traceback (most recent call last):
  File "<string>", line 1, in <module>
UnicodeEncodeError: 'ascii' codec can't encode character u'\u2013' in position 9: ordinal not in range(128)
The solution is:
PYTHONIOENCODING="utf_8" python -c "print u'Hey there\u2013t'" > `tempfile`
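The failure only appears on redirection because Python 2 picks up the terminal's encoding when stdout is a tty, but falls back to ASCII when stdout is a pipe or file. Besides setting PYTHONIOENCODING, you can encode explicitly before writing — a sketch of the idea:

```python
text = u'Hey there\u2013t'

# Encoding explicitly sidesteps Python 2's guess: through a pipe,
# sys.stdout.encoding is None and print falls back to ASCII.
data = text.encode('utf-8')

# On Python 2 you would then write the bytes directly, e.g.:
#   import sys; sys.stdout.write(data)
```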

Unicode Quirks in Django

Python 2.x's Unicode implementation really leaves something to be desired.  The major issue often arises when you need to go between the str() and unicode() types: the former is a byte string while the latter holds Unicode code points.  Calling .encode('utf-8') on a unicode object works as expected (e.g. u'\u2013t'.encode('utf-8')), but calling it again on the resulting str() makes Python implicitly decode it with the ascii codec first, which triggers ascii codec errors.

Here's a great introduction of troubleshooting Unicode issues:

http://collective-docs.readthedocs.org/en/latest/troubleshooting/unicode.html

There's a great slide deck about demystifying Unicode in Python, which should be required reading.  It goes into more detail about the complexities of UTF encodings, but it's worthwhile to review.

http://farmdev.com/talks/unicode/

The general rules of thumb you'll take away from this talk are: 1) decode early, 2) use unicode everywhere, and 3) encode late.

In Django, this approach is closely followed when writing data to the database.  You usually don't need to convert your unicode objects because it's being handled at the database layer.  Assuming your SQL database is configured properly and your Django settings are set correctly, Django's database layer handles the unicode to UTF-8 conversion seamlessly.  For example, just look inside the MySQLdb Python wrapper and right before a query is executed, the entire string is encoded into the specified character set:

MySQLdb/cursors.py:

if isinstance(query, unicode):
    query = query.encode(charset)
if args is not None:

What if you attempt to use logging.info() on Django objects (i.e. logging.info("%s" % User.objects.all()[0]))?  If you search on Stack Overflow, you'll see a recommendation to create a __str__(self) in your Python classes that calls unicode() and converts to UTF-8:

http://stackoverflow.com/questions/1307014/python-str-versus-unicode

def __str__(self):
    return unicode(self).encode('utf-8')
Django's base model definitions (django.db.models.base) also follow this convention:

def __str__(self):
    if hasattr(self, '__unicode__'):
        return force_unicode(self).encode('utf-8')
    return '%s object' % self.__class__.__name__

Normally, Python handles string interpolation automatically by determining whether the operands are unicode or str() types.  Consider these cases:

>>> print type("%s" % 'hey')
<type 'str'>
>>> print type("%s" % u'hey')
<type 'unicode'>
>>> print type("%s" % a)   # where a is an instance of the class A defined below
<type 'str'>

Assuming the character set on your database is set to UTF-8, consider this example of how Python deals with string interpolation for class instances. Normally Python does unicode conversions automatically, but for instances of classes, "%s" always means invoking the str() function.

class A(object):

    def __init__(self):
        self.tst = u'hello'

    def __str__(self):
        if hasattr(self, '__unicode__'):
            return self.__unicode__().encode('utf-8')
        return 'hey'

    def __unicode__(self):
        return u'hey\u2013t'

>>> a = A()
>>> print "%s" % a
hey–t
>>> print "%s %s" % (a, a.tst)
UnicodeDecodeError: 'ascii' codec can't decode byte 0xe2 in position 3: ordinal not in range(128)

In this failing case, the problem is that interpolating the A instance produces a str() type that gets intermixed with a.tst, which is a unicode type.  When this happens, you're likely to see a UnicodeDecodeError.

The same problem happens when you declare a __unicode__() method in your Django models and then print Django objects together with attributes that contain Unicode characters, similar to the issues reported in the Stack Overflow question linked above.  Because Python string interpolation invokes the __str__() method, you have to be careful about intermingling Django objects and Django attributes when printing or logging them.

What's the solution?  In your Django models, it can actually be useful to return the unicode type directly from the __str__() method, assuming you also have a __unicode__() method defined.  One of the quirks of Python 2 is that when __str__() returns a unicode object, the interpolation result is promoted to unicode instead of mixing str and unicode types.  It's somewhat counter-intuitive, but by adding this bit of code, you can avoid the hazards of intermingling Django objects and attributes:

# http://www.gossamer-threads.com/lists/python/bugs/842076
def __str__(self):
    return u'%s object' % self.__class__.__name__

The recommendation is also consistent with this python-dev discussion about how to implement __str__() and __unicode__() methods:

This was added to make the transition to all Unicode in 3k easier:

. __str__() may return a string or Unicode object.
. __unicode__() must return a Unicode object.

There is no restriction on the content of the Unicode string
for __str__().

Another alternative is to prefix your logging format strings with u'', which forces Python's string interpolation to call __unicode__() instead of __str__(). But it's easy to forget this prefix, so overriding the Django base model's __str__() like this has helped me avoid triggering these UnicodeDecodeError exceptions.
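Putting the recommendation together, here is a hedged sketch of a mixin you could place on your own base model — UnicodeSafeModel is a hypothetical name, and the pattern is written against Python 2 semantics (on Python 3 there is a single str type and only __str__ is needed):

```python
class UnicodeSafeModel(object):
    # Hypothetical base class illustrating the pattern discussed above.

    def __unicode__(self):
        return u'%s object' % self.__class__.__name__

    def __str__(self):
        # Returning a unicode object here (rather than encoding to UTF-8)
        # makes Python 2's "%s" interpolation promote the whole result to
        # unicode, so mixing the object with unicode attributes is safe.
        return u'%s object' % self.__class__.__name__
```

With this in place, a Python 2 expression like "%s %s" % (obj, obj.tst) stays entirely in unicode and never trips the ascii codec.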

Tuesday, September 18, 2012

Daily Deals/Result Glider Spyware


It has surprised me the extent to which this spyware has made its way into a lot of people's browsers.  The sites dropdowndeals.com, app.mysupercheap.com, and resultglider.com all seem to be the same company (see DNS trace below).  Both Mozilla and Intuit mention this spyware, so it's fairly pervasive.


If anyone finds a site that is prompting users to download this plug-in, let me know... Mozilla mentions that a lot of travel sites are offering this browser download.

Here's an article that talks about this company:


I also wonder if that's why we were seeing invalid crossdomain.xml requests too:


DNS records below:

Non-authoritative answer:
Address: 4.30.3.59


Non-authoritative answer:
Address: 4.30.3.140


Non-authoritative answer:
Address: 4.30.3.177

They also appear to be hosting YouTube Best Video Downloader (http://www.bestvideodownloader.com) and are using a mail forwarding address in Michigan:

http://www.bbb.org/western-michigan/business-reviews/internet-services/alactro-in-grandville-mi-38141335

Sunday, September 16, 2012

E911 for ObiTalk

This article seems to provide the best instructions about how to activate 911 service dialing through your ObiTalk device:

http://www.obitalk.com/forum/index.php?topic=339.msg1766#msg1766

Basically, ObiTalk devices are by default set up to dial 911 through the 'li' (Line Port).  The OutboundCallRoute setting provides a series of rules that are followed to determine how a call should be routed (see pp. 179-180 of the ObiDeviceAdminGuide.pdf document).

How do these rules work?  Each rule is enclosed in {}, and the rules are ORed together.  The first rule, {(<#>:911):li}, for instance, routes a dialed # or 911 to the line port.  The second rule routes **0 to the automated attendant.  The remaining rules are described on page 180 of the manual.

If you want to route 911 calls to a local 24-hour emergency line instead of the li port, you need to remove the rule that redirects 911 to the line port and then add {(<911:1xxxxxxxxxx):spX}, where spX is the SIP line you're using (i.e. sp1).

One further note: the config changes should be made in your obitalk.com settings, not directly in your ObiTalk firmware.  For some reason, when you reboot, local changes will be overridden by those set on ObiTalk.com (unless you uncheck the checkbox).

Wednesday, August 29, 2012

The pernicious effects of the Comcast Protection Suite....

This mystery had been eluding me for at least 5-6 months since we started introducing JavaScript exception monitoring, but I was finally able to understand why the Comcast Protection Suite has been causing so many problems for many of our users. No, it isn't injecting its own jQuery, and no, it isn't defining its own $; rather, it's trying to do anti-keystroke logging...


We knew anecdotally that disabling the Comcast plug-in solved the issue, but I could never explain why the exception occurred (such as the one below). Apparently the Comcast Protection Suite installs a Browser Helper Object DLL into Internet Explorer. The DLL has just as much control over the DOM as JavaScript code does. It also apparently adds keyup (and perhaps keydown) event handlers to the DOM, presumably to keep somebody else from capturing your keystrokes.

Not only does it slow down overall browser performance, but the DLL also causes conflicts with jQuery, because jQuery tries to execute the native events after executing jQuery events (i.e. running 'onclick' events after bind('click') or live('click') events). There seem to be issues invoking the DLL's onclick handlers from JavaScript, since an exception gets generated when apply() can't be called on these handlers. If you try...catch them, the problem at least gets mitigated... but this requires a change directly within jQuery 1.7.2. jQuery 1.8.0 is out, but it still has the same problem.

Also, the Comcast Protection Suite apparently gets bundled for a lot of new Comcast users, which is why we believe this problem is so pervasive. I'm even more astounded by Norton, which had this response after a user complained about slower keystroke rates once this software was installed:

http://community.norton.com/t5/Other-Norton-Products/disappearing-keystrokes-in-webform/td-p/613012 

This particular issue appears to be isolated to this specific site and is directly related to a JavaScript function the website owners have implemented to test whether the first name or last name text filed is only alpha characters.  The method employed at the website to check for alpha characters is not the standard approach, which is to check for the input as it is added onKeyUp. The standard JS best practice is to use a regular expression that checks and validates the entire field input at form submission.  Therefore we expect this to be an isolated issue.


Besides asking every user to disable the Comcast toolbar plug-in, the workaround/fix is actually quite simple.  The native onclick handler check needs to verify that apply() can actually be run: the DLL's onclick handlers appear as 'undefined' to jQuery and are therefore missing an apply() function.  We can enforce this check within the jQuery code, for which we'll be submitting a patch soon.

            // Note that this is a bare JS function and not a jQuery handler
            handle = ontype && cur[ ontype ];
-           if ( handle && jQuery.acceptData( cur ) && handle.apply( cur, data ) === false ) {
+           // Workaround for Comcast Protection Suite bug
+           // Apparently the handle function is 'undefined' but not really undefined...
+           if ( handle && jQuery.acceptData( cur ) && handle.apply && handle.apply( cur, data ) === false ) {
                event.preventDefault();
            }
        }


The jQuery bug report is here: http://bugs.jquery.com/ticket/12423

Using JavaScript line breaking in YUI Compressor

For the past 5 months of introducing JavaScript exception logging, there have been "Object doesn't support this property or method" and "Object doesn't support property or method 'apply'" errors that have been elusive to diagnose in Internet Explorer browsers.  The error messages never occurred in other browsers, but we saw them quite often from many different users.  While some of these exceptions were generated from calling methods on JavaScript objects that didn't exist, many of the stack traces seemed to emanate directly from jQuery.

One of our challenges was to understand where within jQuery these exceptions were occurring.  jQuery usually comes minified with no line breaks, so we re-minified it with the YUI Compressor's --line-break option (i.e. --line-break 150).  Before adding this option, Internet Explorer would often report an error on line 2, which pretty much amounted to the entire jQuery code. By breaking the minified code into smaller chunks, the line numbers gave us enough information to pinpoint the exact source of the conflict:

java -jar ../external/yuicompressor-2.4.7.jar jquery-1.7.2.min.js --line-break 150 > jquery-1.7.2_yui.min.js

url: https://www.myhost.com/static/js/jquery-1.7.2_yui.min.js
line: 33
context:
(f._data(m,"events")||{})[c.type]&&f._data(m,"handle"),q&&q.apply(m,d),q=o&&m[o],q&&f.acceptData(m)&&q.apply(m,d)===!1&&c.preve
 ntDefault()
}c.type=h,!g&&!c.isDefaultPrevented()&&(!p._default||p._default.apply(e.ownerDocument,d)===!1)&&(h!=="click"||!f.nodeName(e,"a"))&&f.acceptData(e)&&o&&e[h]&&(h!=="focus"&&h!=="blur"||c.target.offsetWidth!==0)&&!f.isWindow(e)&&(n=e[o],n&&(e[o]=null),f.event.triggered=h,e[h](),f.event.triggered=b,n&&(e[o]=n));return c.result}},dispatch:function(c){c=f.event.fix(c||a.event);var d=(f._data(this,"events")||{})[c.type]||[],e=d.delegateCount,g=[].slice.call(arguments,0),h=!c.exclusive&&!c.namespace,i=f.event.special[c.type]||{},j=[],k,l,m,n,o,p,q,r,s,t,u;g[0]=c,c.delegateTarget=this;if(!i.preDispatch||i.preDispatch.call(this,c)!==!1){if(e&&(!c.button||c.type!=="click")){n=f(this),n.context=this.ownerDocument||this;for(m=c.target;m!=this;m=m.parentNode||this){if(m.disabled!==!0){p={},r=[],n[0]=m;for(k=0;ke&&j.push({elem:this,matches:d.slice(e)});fo
 r(k=0;k

url: https://www.myhost.com/static/js/jquery-1.7.2_yui.min.js
column: None
line: 34
func: filter

url: https://www.myhost.com/static/js/jquery-1.7.2_yui.min.js
column: None
line: 31
func: trigger

This stack trace helped us pinpoint the issue to the Comcast Protection Suite, since it indicated the problem was happening directly inside the jQuery Event module.  The jQuery Event module is used to attach and trigger jQuery events, as well as to implement the event propagation path described in the W3C standard.  By adding try/catch clauses within the dispatch() code, we were able to find exactly where the exception occurred:

        for ( i = 0; i < eventPath.length && !event.isPropagationStopped(); i++ ) {

            cur = eventPath[i][0];
            event.type = eventPath[i][1];

            handle = ( jQuery._data( cur, "events" ) || {} )[ event.type ] && jQuery._data( cur, "handle" );
            if ( handle ) {
                handle.apply( cur, data );
            }

            // Note that this is a bare JS function and not a jQuery handler
            handle = ontype && cur[ ontype ];
            if ( handle && jQuery.acceptData( cur ) && handle.apply( cur, data ) === false ) {
                event.preventDefault();
            }
        }

In other words, jQuery executes all the jQuery-bound events before attempting to call the native JavaScript events (i.e. jQuery 'click' handlers get executed before the native 'onclick' handler is called).  Somehow the Comcast Protection Suite adds an onclick handler that appears as 'undefined' to jQuery.  The if statement passes, but the code then fails when it attempts to execute the handle.apply() call.

More on this finding on the Comcast Protection Suite in this next writeup...