Friday, December 23, 2011

Changes in Facebook's SWF flash handler..

Ever since Facebook introduced a change in their Facebook Connect Library that caused severe login issues for IE users lasting more than a week, we've kept scripts that monitor Facebook's JavaScript Connect Library to detect any changes that might affect our users. Nate Frieldy created the first version to monitor for diffs, and I soon forked it here to catch diffs that span more than just the timestamp change on the first line.
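As a rough illustration of what such a monitoring script does, here is a minimal sketch along those lines; the endpoint URL, cache path, and alerting below are placeholders, not the actual script we run:

import difflib
import os
import urllib2

# Placeholder endpoint/path -- substitute whichever Connect Library file you monitor.
LIBRARY_URL = "http://connect.facebook.net/en_US/all.js"
CACHED_COPY = "/var/tmp/fb_all.js"

def check_for_changes():
    new = urllib2.urlopen(LIBRARY_URL).read()
    old = open(CACHED_COPY).read() if os.path.exists(CACHED_COPY) else ""
    # Skip the first line of each copy, which only carries the build timestamp.
    diff = list(difflib.unified_diff(old.splitlines()[1:], new.splitlines()[1:], lineterm=""))
    if diff:
        print "\n".join(diff)  # in practice you'd mail or page someone with this diff
    with open(CACHED_COPY, "w") as f:
        f.write(new)

if __name__ == "__main__":
    check_for_changes()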

On December 8, 2011, our diff detection scripts picked up this change:

   1         /*1323218538,169893481,JIT Construction: v482006,en_US*/
        1    /*1323305592,169920374,JIT Construction: v482779,en_US*/
   2    2
   3    3    if (!window.FB) window.FB = {
   4    4      _apiKey: null,
5361 5361      [10, 3, 181, 34],
5362 5362      [11, 0, 0]
5363 5363    ],
5364           "_swfPath": "rsrc.php\/v1\/yK\/r\/RIxWozDt5Qq.swf"
     5364      "_swfPath": "rsrc.php\/v1\/yD\/r\/GL74y29Am1r.swf"
5365 5365    }, true);
5366 5366    FB.provide("XD", {
5367 5367      "_xdProxyUrl": "connect\/xd_proxy.php?version=3"

This SWF file is Facebook's cross-domain handler for web browsers that don't implement HTML5's postMessage() function, which allows messages to be passed between different domains; the SWF provides a Flash-based equivalent. Facebook doesn't often recompile the SWF file, so this diff caught my attention. The most reliable decompiler I've found is Sothink's SWF Decompiler, which can be used with its 30-day trial to export the ActionScript files (for more context about how to decompile, see http://hustoknow.blogspot.com/2011/06/facebooks-flash-xdcomm-receiver.html).

I decompiled the SWF and ActionScript files from http://static.ak.fbcdn.net/rsrc.php\/v1\/yD\/r\/GL74y29Am1r.swf and reviewed the diffs between the previously decompiled SWF and this one. If you were to compare the diff for the XdComm.as file, you would see:
15a16,17
> private static var initialized:Boolean = false;
> private static var origin_validated:Boolean = false;
20,21c22,29
< Security.allowDomain("*");
< Security.allowInsecureDomain("*");
---
> if (XdComm.initialized)
> {
> return;
> }
> XdComm.initialized = true;
> var _loc_1:* = PostMessage.getCurrentDomain();
> Security.allowDomain(_loc_1);
> Security.allowInsecureDomain(_loc_1);
51a60
> ExternalInterface.addCallback("postMessage_init", this.initPostMessage);
60a70,76
> private function initPostMessage(param1:String, param2:String) : void
> {
> origin_validated = true;
> this.postMessage.init(param1, param2);
> return;
> }// end function
>
164a181,189
> public static function proxy(param1:String, param2:String) : void
> {
> if (origin_validated)
> {
> ExternalInterface.call(param1, param2);
> }
> return;
> }// end function
>

The changes indicate that Facebook has tightened its cross-domain security policies. Instead of passing wildcard domains to its allowDomain() function, it now invokes getCurrentDomain(), a function defined in the PostMessage.as file that executes a call to document.domain, relying on the browser to define the security restrictions.

Most of these changes should not affect your users... I just wish Facebook would discuss more of what's going on behind the scenes, since your apps may very well be using the Facebook Connect Library without your realizing these changes are happening beneath you!

I've started to post the decompiled SWF files here:
https://github.com/rogerhu/connect-js/tree/master/swf

Note that these updates are only done manually. If someone knows of an open-source SWF decompiler, the diffs could be automated much further!

Wednesday, December 14, 2011

Setting up IPSec with racoon and a Cisco router..

Tools on Linux v2.6

The Linux 2.6 kernel already comes with IPSec support (the Ubuntu distribution appears to have AH/ESP support), so you need two packages to get it working: ipsec-tools and racoon. You can install them with apt-get install ipsec-tools and apt-get install racoon respectively. You'll need to set up /etc/ipsec-tools.conf to define which IP subnets/hosts will be connected via VPN (and whether to use ESP and/or AH in tunnel or transport mode, as well as the gateway IPs that are used to bridge the connections); a minimal example is sketched below. Racoon has other parameters for Phase 1 and Phase 2 negotiation that you need to set up too, which are defined in /etc/racoon/racoon.conf: you use the remote {} configuration block for Phase 1, and the sainfo block for Phase 2. See http://lists.freebsd.org/pipermail/freebsd-net/2006-June/010975.html for more detailed info.
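As a rough sketch (the subnets and gateway addresses below are made-up placeholders, not values from any real setup), an /etc/ipsec-tools.conf for an ESP tunnel between two sites might look something like this:

#!/usr/sbin/setkey -f
# Hypothetical addressing: local LAN 10.0.0.0/24 behind gateway 198.51.100.2,
# remote LAN 192.168.10.0/24 behind the Cisco router at 203.0.113.1.
flush;
spdflush;

spdadd 10.0.0.0/24 192.168.10.0/24 any -P out ipsec
       esp/tunnel/198.51.100.2-203.0.113.1/require;
spdadd 192.168.10.0/24 10.0.0.0/24 any -P in ipsec
       esp/tunnel/203.0.113.1-198.51.100.2/require;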

Note that 'unique' allows multiple security associations to be used over the same host; see http://pardini.net/blog/2008/08/21/ipsec-with-setkeyracoon-and-multiple-single-host-spds/. Apparently using the keyword 'unique' instead of 'require' fixes the issue:
flush;
spdflush;
spdadd ${LOCAL_NETWORK} ${STJUST_NETWORK} any -P out ipsec
       esp/tunnel/${LOCAL_OUTSIDE}-${STJUST_OUTSIDE}/unique;
spdadd ${STJUST_NETWORK} ${LOCAL_NETWORK} any -P in ipsec
       esp/tunnel/${STJUST_OUTSIDE}-${LOCAL_OUTSIDE}/unique;
There is more documentation about 'unique' in the setkey man page:

The protocol/mode/src-dst/level part specifies the rule how to process the packet. Either ah, esp, or ipcomp must be used as protocol. mode is either transport or tunnel. If mode is tunnel, you must specify the end-point addresses of the SA as src and dst with '-' between these addresses, which is used to specify the SA to use. If mode is transport, both src and dst can be omitted. level is to be one of the following: default, use, require, or unique. If the SA is not available in every level, the kernel will ask the key exchange daemon to establish a suitable SA. default means the kernel consults the system wide default for the protocol you specified, e.g. the esp_trans_deflev sysctl variable, when the kernel processes the packet. use means that the kernel uses an SA if it's available, otherwise the kernel keeps normal operation. require means SA is required whenever the kernel sends a packet matched with the policy. unique is the same as require; in addition, it allows the policy to match the unique out-bound SA. You just specify the policy level unique, racoon(8) will configure the SA for the policy.

Racoon works by basically listening for requests from the Linux kernel. The tunnels get set up the first time you attempt to negotiate a connection to a host. If you have routes defined in /etc/ipsec-tools.conf and do a setkey -f /etc/ipsec-tools.conf, this information will be loaded into the security policy database (SPD), and the kernel will send a trigger that causes Racoon to attempt to establish the connection.

The two commands you will use to initially test are:
sudo setkey -f /etc/ipsec-tools.conf
sudo racoon -F -f /etc/racoon/racoon.conf -v -ddd -l /etc/racoon/racoon.log

The most secure (but complicated) approach is to use Internet Key Exchange (IKE) authentication. In this approach, both VPN client/server sides announce that they will use a pre-shared key authentication mechanism along with their authentication and hash algorithms. The pre-shared key is just some hard-coded value that both sides agree on before setting up the VPN connection. Once the connection is established, both sides use a Diffie-Hellman exchange to generate public/private values so that future exchanges can be encrypted; each side sends its public value, and both are able to derive the same shared secret using their own private value. All of this negotiation happens during what's called Phase 1 negotiation.

Alternate approaches to the IKE implementation call for setting up manual keys. In other words, both sides have to know how to encrypt the data beforehand instead of using this intricate key exchange. A lot of the Racoon documentation mentions setting up security associations (SAs) within /etc/ipsec-tools.conf, but this approach is unneeded if we are going to use the IKE-based approach, which is presumably more secure. If the IKE implementation is used, then Phase 2 negotiations must also occur; a racoon.conf sketch covering both phases follows below.
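To make the two phases concrete, here is a minimal /etc/racoon/racoon.conf sketch for the IKE/pre-shared-key approach, reusing the made-up addresses from the ipsec-tools.conf example above and fairly common algorithm choices (your peer will dictate the actual proposal):

path pre_shared_key "/etc/racoon/psk.txt";   # one line per peer: <peer IP> <shared secret>
log debug;

# Phase 1 (ISAKMP SA) parameters for the remote endpoint
remote 203.0.113.1 {
        exchange_mode main;
        proposal {
                encryption_algorithm 3des;
                hash_algorithm sha1;
                authentication_method pre_shared_key;
                dh_group modp1024;
        }
}

# Phase 2 (IPSec SA) parameters for traffic between the two subnets
sainfo address 10.0.0.0/24 any address 192.168.10.0/24 any {
        encryption_algorithm aes;
        authentication_algorithm hmac_sha1;
        compression_algorithm deflate;
}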

A great guide to troubleshooting IPSec connections is here:

http://vpn.precision-guesswork.com/vpn/ipsec_troubleshooting.pdf

Want to know how all the nomenclature is laid out? Click here:

http://www.unixwiz.net/techtips/iguide-ipsec.html

Phase 1
If you read the IPSec documents, you'll see there are three rounds in Phase 1. You can use tshark/wireshark to watch the network dumps:

Round 1: agree on authentication, encryption, hash payload algorithm

Round 2: key exchanges w/ nonce values (to avoid replay attacks).

Round 3: validation of hash/identification payloads using the secret keys completes successfully.

Phase 2 (IKE only)
The second step in the IKE approach, also known as Quick Mode, is to negotiate a security association (SA) policy. These policies define which encryption/authentication algorithms should be used, which encryption keys should be used for transferring data, and for which IP subnets.

The ipsec-tools package comes with an /etc/ipsec-tools.conf that defines the security association (SA) policies. This policy must match the information provided by the customer side. In Northwestern Mutual's case, their IT department set up their Cisco router with an access control list that defines what's allowed to connect. You will notice in the ISAKMP protocol during Phase 2 negotiation that the packet structure also includes IDci and IDcr identity payloads. You can watch Racoon and see what bits get passed through:

2011-12-14 01:12:56: DEBUG: IDci:
2011-12-14 01:12:56: DEBUG:
04000000 <IP address here>

Data exchanges
Assuming everything is set up correctly, you need to set up your route table for the specific IP blocks to which you are connecting. Make sure to do netstat -rn and then do route adds for the correct routes (an example follows below). Unless you're bridging Ethernet interfaces, you need to be sure that you are always sending packets over the same Ethernet interface.
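For example, with the same made-up addressing used in the sketches above, adding the remote subnet to the routing table might look like:

netstat -rn
sudo route add -net 192.168.10.0 netmask 255.255.255.0 dev eth0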

You can confirm which packets go over the wire by using either tcpdump or tshark -i eth0 not port 22 (to exclude traffic from your current SSH connection from being dumped out). If you are using ESP encryption, then you should also see the kernel encrypting packets destined for those IPs. Again, the Linux kernel handles most of the work, so long as the routes are correctly defined.

Ways to debug:
Watch isakmp packets:

1. sudo tshark -i eth0 udp port 500 -V or
sudo tshark -i eth0 udp not port 22

2. ssh -X
sudo wireshark

(X11Forwarding needs to be temporarily enabled in /etc/ssh/sshd_config, then do /etc/init.d/ssh restart. You then need to make sure ForwardX11 is enabled in /etc/ssh/ssh_config on the client side; see the sketch below.)
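In other words, something along these lines (file locations may vary by distribution):

# On the remote box, in /etc/ssh/sshd_config (temporarily):
X11Forwarding yes
# then: sudo /etc/init.d/ssh restart

# On the client, in /etc/ssh/ssh_config (or ~/.ssh/config):
ForwardX11 yes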

Wireshark can actually decrypt ESP/AH traffic, assuming you provide the Security Parameter Indexes (SPIs) generated on the fly and the encryption keys. Most of this data you can observe by running Racoon in debug mode.

FYI - you may also notice "next payload" in Racoon dumps. The ISAKMP standard defines multiple types of payloads. Oftentimes you will see vendor ID and other data such as the following:
Vendor ID: RFC 3706 Detecting Dead IKE Peers (DPD)
        Next payload: Vendor ID (13)
        Payload length: 20
        Vendor ID: RFC 3706 Detecting Dead IKE Peers (DPD)
Vendor ID: XXXXXX
        Next payload: Vendor ID (13)
        Payload length: 20
        Vendor ID: XXXXXX
Vendor ID: draft-beaulieu-ike-xauth-02.txt
        Next payload: NONE (0)
        Payload length: 12
        Vendor ID: draft-beaulieu-ike-xauth-02.txt

Sunday, December 11, 2011

Setting up a VPN between two DD-WRT routers..

This setup worked for two WRT54GL routers running DD-WRT v24-sp1.

PPTP server (192.168.0.1):
First, one machine needs to be setup as a PPTP server.
1. Go to Services->PPTP to enable the PPTP server.
2. Set the server IP (should be a virtual LAN IP address different from your LAN IP address -- i.e. 192.168.0.2).
3. Set the Client IP block (192.168.0.50-192.168.0.70), and then set up the CHAP Secrets (johndoe * mypassword *).

PPTP client (192.168.1.1) :
1. Enable PPTP client.
2. Set the PPTP Server IP.
3. Set the Remote Subnet (192.168.0.0) and Remote Subnet Mask (255.255.255.0)
4. Set the MPPE Encryption to "mppe required".
5. Set the MTU/MRU to be 1450.
6. Disable NAT mode.
7. Set the username and password to the CHAP secret set in the PPTP Server.

You should verify the PPTP connection is established by telnetting into the PPTP client box and attempting to ping the private IP address of the PPTP server (i.e. 192.168.0.2 or 192.168.0.1). If this succeeds, you may find that the routers can ping each other but other machines on the network are not able to talk to each other. In this case, you may wish to confirm whether the PPTP server has set up a route to 192.168.1.0. To add it automatically, you need to do the following:

1. Go to Administration->Commands.
2. Add the following firewall commands. Usually what happens is that the /tmp/pptpd_client/ip-up script is created; a delay is inserted before adding the route, and then the ip-up script is re-executed.
sleep 40
/bin/sh -c 'echo "ip route add 192.168.1.0/24 dev ppp0" >> /tmp/pptpd_client/ip-up'
/tmp/pptpd_client/ip-up

Click on Save->Firewall after saving.

If you want to reinitiate the PPTP connection, try clicking Save/Apply Settings and waiting for the VPN connection to be re-established. If you really want to check things out, you can configure a VPN client on Ubuntu 10.04 through the Network Manager (make sure to enable Point-to-Point MPPE Encryption, allow stateful encryption, and send PPP echo packets to help keep the connection alive). (Note: if you forget to enable the MPPE encryption/stateful encryption options, you may find that the VPN connection is flaky; it seems as if there are CHAP requests/rejects that keep happening without these two options.) This VPN client will help you verify that the PPTP server is responding correctly.

You should also telnet to both DD-WRT routers and verify the routes have been established between the two subnets. You should also cat /tmp/pptpd_client/ip-up on the PPTP server to verify that the IP route was added correctly.
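A quick sanity check from the routers' shells (assuming the PPTP interface comes up as ppp0) might be:

route -n                      # the far side's 192.168.x.0/24 subnet should be reachable via ppp0
cat /tmp/pptpd_client/ip-up   # on the PPTP server, confirm the "ip route add" line was appended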

Saturday, December 10, 2011

Upgrading a Compaq Presario C700 from Vista to Windows 7..

Recently, I upgraded an old Compaq C714NR Presario laptop from Windows Vista to Windows 7. Coffee had been spilled on the touchpad, rendering it inoperable. The keyboard still worked, but the spacebar was sticky and needed to be replaced. You may have older machines that might be worth upgrading, especially if you took advantage of Microsoft offers that allowed .edu email addresses to get a copy of Windows 7 Professional for $25.

1. Although your machine may be running 32-bit Windows Vista, chances are that if the machine has a dual-core processor, it can run 64-bit Windows 7 Professional. Most of the graphics, network, and sound drivers already come with Windows 7, so there usually isn't a need to download extra drivers. Windows 7 should install right out of the box.

2. If you're burning a copy of Windows 7, you may encounter issues about "Required cd/dvd drive device driver is missing". If you see this, chances are the DVD you burned actually has problems, especially if you were using a disc that had been sitting around for a while. Originally you may be led to think that there are driver incompatibility issues with the 64-bit Windows 7 version, but try to reburn the DVD and see if the install works.

3. The touchpad can be replaced, but you have to buy one that comes with the laptop casing too. Since most of the casing + touchpad parts are sold on eBay for $30+, it may be easier to simply attach a USB mouse instead. The picture below shows an example of the touchpad + upper casing:


4. There are web sites that sell spare keyboard keys (i.e. laptopkey.com), but buying one key can easily cost $8, and you can usually buy an entire replacement keyboard for $12. The HP service manual for replacing the keyboard is fairly straightforward, but there are a few key things to know. In the case of the C700, there were 3 screws at the bottom of the laptop, each marked with a keyboard icon. One of these screws was obscured by the memory lid, so you may have to remove that lid first.

Second, there is the Zero-Insertion Force (ZIF) connector that attaches to the keyboard and laptop. What this usually means is that the sides of the connector need to be pushed out.

You should avoid pulling the ribbon cable out until the connector is released. The picture below shows one example of how the ZIF connector is pushed out. You can usually use your fingers and push the connector out slightly before inserting the ribbon cable. You should push down on the sides to fasten the ribbon cable securely.
Keep in mind that you should verify that all keys work. If the connector is not fully fastened, you may find some keys do not respond. You can try to boot up the computer with the keyboard installed, but be careful if any components are exposed.

5. Finally, if you need to replace any keys, you first have to figure out how the keyboard mechanism works. There are a bunch of YouTube demos for replacing the keys in HP laptops, but none of the videos I found pointed out that it's easier to attach the larger plastic hinge to the key, and the other, smaller plastic hinge to the laptop. If you set up the hinges on the laptop first, the plastic hinges should move up and down when you apply pressure to them, supplementing the spring-like action in the button.

Once you figure out the right way to place them, take the large hinge and attach it to the key before attaching the other part. For this keyboard, I couldn't just put the key over the two plastic hinges, since the pressure of the key would cause both pieces of plastic to be pushed down without snapping into place. You have to be careful with this part since the plastic hooks can break, so avoid trying to force the keys onto the plastic hinges.

Celery 2.3+ crashes when channel.close() commands are issued...

This AMQPChannelException issue has happened for us over the last 3 weeks, so I decided to dig in to understand why we were getting AMQPChannelExceptions that caused our Celery workers to simply die. Well, it turns out this exception often appeared in our logs:

(404, u"NOT_FOUND - no exchange 'reply.celeryd.pidbox' in vhost 'myhost'", (60, 40), 'Channel.basic_publish')

The basic problem, I suspect, is that we have a task that checks for long-running Celery tasks. It relies on the celeryctl command, which is a special mechanism used by Celery to broadcast messages to all boxes running Celery (i.e. celeryctl inspect active).

The celeryctl command implements a multicast protocol by leveraging the AMQP standard (see the RabbitMQ tutorial). All Celery machines bind on startup to the celery.pidbox exchange. When you send a celeryctl command, RabbitMQ receives this message and then delivers it to all machines listening on the celery.pidbox exchange.

The machines also send back their replies on a separate exchange called reply.celery.pidbox, which is used by the main process that issued the celeryctl command to collect all the responses. Once the program completes, it deletes this reply exchange since it's no longer needed. Unfortunately, if a worker receives the command and attempts to respond too late, it can trigger an "exchange not found" error, causing RabbitMQ to issue a channel.close() command. I suspect this happens especially during heavy loads and/or intermittent network failures, since the problem often shows up during these times.

Celery handles connection failures fine, but doesn't seem to deal with situations where the AMQP host issues a close command. I solved it in two ways: first, allowing Celery to gracefully reset the entire connection when such an event happens (a pull request to the Celery framework), and second, increasing the window in which we check for replies so the exchange isn't deleted so quickly (i.e. celery inspect active --timeout=60). The latter may be the quicker way to solve it, though the former should probably be something that would help avoid the situation altogether (although it may cause other issues).

The fix is essentially to trap the exception and try to establish a new connection. This approach is already being used for Celery control commands (an error msg "Error occurred while handling control command" gets printed out but the underlying Exception gets caught). It seems this exception occurs when RabbitMQ sends a close() command to terminate the connection, causing the entire process to die.

def on_control(self, body, message):
    """Process remote control command message."""
    try:
        self.pidbox_node.handle_message(body, message)
    except KeyError, exc:
        self.logger.error("No such control command: %s", exc)
    except Exception, exc:
        self.logger.error(
            "Error occurred while handling control command: %r\n%r",
            exc, traceback.format_exc(), exc_info=sys.exc_info())
        self.reset_pidbox_node()

We'll see in this pull-request whether the authors of Celery think this is a good idea...I suspect that it would be better to create a new channel than to restart the connection altogether.

https://github.com/ask/celery/pull/564

Friday, December 9, 2011

Moving to Nose..

We've started to have our developers use Nose, a much more powerful unit test discovery package for Python. For one, in order to generate JUnit XML output for Hudson/Jenkins, the test runner comes with a --with-xunit option that lets you dump these results out.

Here are a few things that might help you get adjusted to using Nose:

1. As mentioned in the Testing Efficiently with Nose tutorial, the convention for running individual tests is slightly different. The format is now:

python manage.py test app.tests:YourTestCaseClass
python manage.py test app.tests:YourTestCaseClass.your_test_method

One way to force the test results to use the same format is to create a class that inherits from unittest.TestCase and overrides the __str__(), id(), and shortDescription() methods. The __str__() method is used by the Django test runner to display which tests are running and which ones have failed, enabling you to copy/paste the test name to re-run it. The id() method is used by the xunit plug-in to generate the test name, enabling you to swap in the Nose convention for the class name. Finally, shortDescription() will prevent docstrings from replacing the test name when running the tests.

class BaseTestCase(unittest.TestCase):

    def __str__(self):
        # Use the Nose testing format (colon to differentiate between module/class name)
        if 'django_nose' in settings.INSTALLED_APPS or 'nose' in settings.TEST_RUNNER.lower():
            return "%s:%s.%s" % (self.__module__, self.__class__.__name__, self._testMethodName)
        else:
            return "%s.%s.%s" % (self.__module__, self.__class__.__name__, self._testMethodName)

    def id(self):  # for XUnit outputs
        return self.__str__()

    def shortDescription(self):  # do not return the docstring
        return None


2. For SauceLabs jobs, you can also expose the URL of the job in which you are running. WebDriverExceptions inherit from the Exception class and add a 'msg' property that we can use to insert the SauceLabs URL. You want to avoid adding the URL in the id() and __str__() methods, since those routines are used to dump out the test names that Hudson/Jenkins may use to compare between builds.

def get_sauce_job_url(self):
    # Expose the SauceLabs job number for easy reference if an error occurs.
    if getattr(self, 'webdriver', None) and hasattr(self.webdriver, 'session_id') and self.webdriver.session_id:
        session_id = "(http://saucelabs.com/jobs/%s)" % self.webdriver.session_id
    else:
        session_id = ''

    return session_id

def __str__(self):
    session_id = self.get_sauce_job_url()
    return " ".join([super(SeleniumTestSuite, self).__str__(), session_id])

def _exc_info(self):
    exc_info = super(SeleniumTestSuite, self)._exc_info()

    # WebDriver exceptions have a 'msg' attribute, which gets used when the exception is dumped out.
    # We can take advantage of this fact and store the SauceLabs job URL in there too!
    if exc_info[1] and hasattr(exc_info[1], 'msg'):
        session_id = self.get_sauce_job_url()
        exc_info[1].msg = session_id + "\n" + exc_info[1].msg
    return exc_info


3. Nose has a bunch of nifty options that you should try, including --with-id & --failed, which let you run your entire test suite and then run only the tests that failed. You can also use the attrib decorator, which lets you decorate certain test suites/test methods with various attributes, such as network-based tests or slow-running ones (a short sketch follows below). The attrib plugin seems to support limited boolean logic, so check the documentation carefully if you intend to use the -a flag (you can use --collect-only in conjunction to verify your tests are selected with the right logic).
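For instance, here is a minimal sketch of the attrib decorator (the 'slow' and 'network' attribute names are just illustrative):

from nose.plugins.attrib import attr

@attr('slow', 'network')
def test_sync_against_remote_service():
    # a long-running test that talks to an external service
    pass

# Run only the tests tagged 'slow':        nosetests -a slow
# Run everything except the 'slow' tests:  nosetests -a '!slow'
# Preview which tests would be selected:   nosetests -a '!slow' --collect-only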

Thursday, December 8, 2011

Hover states with Selenium 2..

We do a lot of automated tests on Selenium 2, and recently we noticed that some of our tests that attempt to verify mouseover/hover states intermittently break. We could see the mouseover event get fired, but then suddenly it would stop.

The phenomenon at first made us suspect there were possible race conditions in our JavaScript code. Were we accidentally clearing/hiding elements before WebDriver could reach them? Was the web browser loading multiple times and clearing out the original state? Or was it an issue in Selenium 2, since it attempts to calculate the center position of the element, scroll it into view, and then move the mouse cursor to the center of the element (see http://www.google.com/codesearch#2tHw6m3DZzo/trunk/javascript/atoms/dom.js)? You can look at how the Windows driver's mouseMove is implemented here:
http://www.google.com/codesearch#2tHw6m3DZzo/trunk/cpp/webdriver-interactions/interactions.cpp

It turns out that hover states for Windows in Selenium 2 are just problematic right now. You can even see inside the Selenium 2 test suite that hover tests are skipped for Windows-based systems:

http://www.google.com/codesearch#2tHw6m3DZzo/trunk/java/client/test/org/openqa/selenium/RenderedWebElementTest.java&q=2067%20package:http://selenium\.googlecode\.com

if (!supportsHover()) {
  System.out.println("Skipping hover test: Hover is very short-lived on Windows. Issue 2067.");
  return;
}

The full bug report is here:
http://code.google.com/p/selenium/issues/detail?id=2067

In Selenium v2.15.0, even the Internet Explorer wiki was updated to remind people that hover states (using mousemove) have issues:

http://code.google.com/p/selenium/source/diff?spec=svn14947&r=14947&format=side&path=/wiki/InternetExplorerDriver.wiki

The workaround seems to be generating your own synthetic events:

$('#myelem').trigger('mouseenter');
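In a Python-based WebDriver test, a minimal sketch of firing that synthetic event (assuming jQuery is already loaded on the page, '#myelem' exists, and driver is a live WebDriver instance) would be:

# Trigger the hover state via JavaScript instead of native mouse movement.
driver.execute_script("$('#myelem').trigger('mouseenter');")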

Saturday, December 3, 2011

AT&T Uverse Internet 18Mbps solution..

You have two modem/router choices for the AT&T Uverse Internet plan: the Motorola NVG510 or the Motorola 2210-02-1ATT. The NVG510 doesn't have a bridge mode, and there are a wide variety of hacks to use the IP Passthrough option for the Motorola NVG510 modem:

http://forums.att.com/t5/Features-and-How-To/NVG510-Bridge-Mode/td-p/2890841/page/2

If you read through the discussions, you'll notice that they gravitate towards using IP Passthrough with a Fixed-DHCPS set for the router's IP address: "i'm using the IP Passthrough functionality. According to its own instructions on the right-hand side of the tab, the RG is supposed to pass its public IP address through to another device, using its own DHCP - but it won't do it. I had to manually enter the information into my regular router (an Airport Extreme Base Station, aka AEBS) to use the Passthrough functionality."

...it seems better, if you have the choice, to go with the Motorola 2210 option. Also, don't confuse the Motorola 2210-02-1ATT model with the Motorola 2210-02-1022 model, the latter of which is an old DSL modem that you can buy off eBay. For one thing, the 2210-02-1022 models only speak PPPoE, and since the Uverse solution relies on authentication based on the MAC address, you won't be able to plug that modem in as a replacement. You can check the diagnostics tool on the router to verify that you're connecting via IP-DSLAM, which is synonymous with the AT&T Uverse Internet 18Mbps solution.

Friday, December 2, 2011

Fujitsu C2010 specs

http://implitech.com/manufacturers/fujitsu-siemens/lifebook-c2010/

Intel 845MP/MZ chipset

Fujitsu Siemens LifeBook C2010 Graphics - ATI Radeon Mobility M6 LY Driver
Fujitsu Siemens LifeBook C2010 Human Interface - Belkin MI 2150 Trust Mouse Driver
Fujitsu Siemens LifeBook C2010 IEEE 1394 - Texas Instruments TSB43AB21 IEEE-1394a-2000 Controller Driver
Fujitsu Siemens LifeBook C2010 Modem - Intel 82801CA CAM AC97 Modem Driver
Fujitsu Siemens LifeBook C2010 Multimedia - Intel 82801CA CAM AC97 Audio Controller Driver
Fujitsu Siemens LifeBook C2010 Network - Realtek RTL 8139 8139C 8139C Driver
Fujitsu Siemens LifeBook C2010 PCMCIA - Texas Instruments PCI1520 PC Card Cardbus Driver
Fujitsu Siemens LifeBook C2010 Storage - Intel 82801CAM IDE U100 Driver
Fujitsu Siemens LifeBook C2010 Intel Chipset Drivers

Device manager for Fujitsu Siemens LifeBook C2010 Laptop
Radeon Mobility M6 LY - ATI
PCI\VEN_1002&DEV_4C59&SUBSYS_113E10CF&REV_00\4&34C0AFA1&0&0008
MI 2150 Trust Mouse - Belkin
USB\VID_1241&PID_1166\5&95D66C5&0&2
TSB43AB21 IEEE-1394a-2000 Controller - Texas Instruments
PCI\VEN_104C&DEV_8026&SUBSYS_116210CF&REV_00\4&6B54384&0&30F0
82801CA CAM AC97 Modem - Intel
PCI\VEN_8086&DEV_2486&SUBSYS_10D110CF&REV_02\3&61AAA01&0&FE
82801CA CAM AC97 Audio Controller - Intel
PCI\VEN_8086&DEV_2485&SUBSYS_117710CF&REV_02\3&61AAA01&0&FD
RTL 8139 8139C 8139C - Realtek
PCI\VEN_10EC&DEV_8139&SUBSYS_111C10CF&REV_10\3&61AAA01&0&68
PCI1520 PC Card Cardbus - Texas Instruments
PCI\VEN_104C&DEV_AC55&SUBSYS_116410CF&REV_01\4&139E449D&0&50F0
82801CAM IDE U100 - Intel
PCI\VEN_8086&DEV_248A&SUBSYS_113D10CF&REV_02\3&61AAA01&0&F9
Brookdale (82845 845 Chipset Host Bridge) - Intel
PCI\VEN_8086&DEV_1A30&SUBSYS_00000000&REV_04\3&61AAA01&0&00
Brookdale (82845 845 Chipset AGP Bridge) - Intel
PCI\VEN_8086&DEV_1A31&SUBSYS_00000000&REV_04\3&61AAA01&0&08
82801CA CAM USB Controller #1 - Intel
PCI\VEN_8086&DEV_2482&SUBSYS_113D10CF&REV_02\3&61AAA01&0&E8
82801CA CAM USB Controller #2 - Intel
PCI\VEN_8086&DEV_2484&SUBSYS_113D10CF&REV_02\3&61AAA01&0&E9
82801 Mobile PCI Bridge - Intel
PCI\VEN_8086&DEV_2448&SUBSYS_00000000&REV_F3\3&33FD14CA&0&F0
82801CAM ISA Bridge (LPC) - Intel
PCI\VEN_8086&DEV_248C&SUBSYS_00000000&REV_02\3&61AAA01&0&F8
82801CA CAM SMBus Controller - Intel
PCI\VEN_8086&DEV_2483&SUBSYS_113D10CF&REV_02\3&61AAA01