Posted 05 Jan 2010 in xpath

In an earlier post I referred to XPaths but did not explain how to use them.

Say we have the following HTML document:

<html>  
 <body>
  <div></div>  
  <div id="content">  
   <ul>  
    <li>First item</li>  
    <li>Second item</li>  
   </ul>  
  </div>  
 </body>  
</html>

To access the list elements we follow the HTML structure from the root tag down to the li’s:

html > body > 2nd div > ul > many li's.

An XPath to represent this traversal is:

/html[1]/body[1]/div[2]/ul[1]/li

If a tag has no index then every tag of that type will be selected:

/html/body/div/ul/li

XPaths can also use attributes to select nodes:

/html/body/div[@id="content"]/ul/li 

And instead of using an absolute XPath from the root, the XPath can be made relative to a particular node by using a double slash:

//div[@id="content"]/ul/li

This is more reliable than an absolute XPath because it can still locate the correct content after the surrounding structure is changed.
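
These expressions can be tested directly in Python with lxml (the parsing library covered in an earlier post). Here is a minimal sketch using the example document above; for this document all of the expressions select the same two li elements:

from lxml import html

doc = html.fromstring('<html><body><div></div><div id="content">'
                      '<ul><li>First item</li><li>Second item</li></ul>'
                      '</div></body></html>')

# each of these selects the same two <li> elements in this document
doc.xpath('/html[1]/body[1]/div[2]/ul[1]/li')
doc.xpath('/html/body/div[2]/ul/li')
doc.xpath('/html/body/div[@id="content"]/ul/li')
doc.xpath('//div[@id="content"]/ul/li')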

There are other features in the XPath standard but the above are all I use regularly.

A handy way to find the XPath of a tag is with Firefox’s Firebug extension. To do this open the HTML tab in Firebug, right-click the element you are interested in, and select “Copy XPath”. (Alternatively use the “Inspect” button to select the tag.)
This will give you an XPath with indices only where there are multiple tags of the same type, such as:

/html/body/div[2]/ul/li

One thing to keep in mind is that Firefox will always create a tbody tag within tables, whether or not one existed in the original HTML. This has tripped me up a few times!
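
For example, if the original HTML contains a table without a tbody, the XPath copied from Firebug will include tbody but will not match when run against the raw source with a parser such as lxml, which does not insert one. A quick illustration with made-up HTML:

from lxml import html

doc = html.fromstring('<html><body><table><tr><td>cell</td></tr></table></body></html>')
doc.xpath('/html/body/table/tbody/tr')  # [] - the raw HTML has no tbody
doc.xpath('/html/body/table/tr')        # [<Element tr ...>] - matches the real structure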

For one-off scrapes the above XPath should be fine. But for long-term repeat scrapes it is better to use a relative XPath anchored on an ID element, using attributes rather than indices. In my experience such an XPath is more likely to survive minor modifications to the layout. However for a more robust solution see my SiteScraper library, which I will introduce in a later post.


Posted 02 Jan 2010 in html, lxml, and python

HTML is a tree structure: at the root is an <html> tag, followed by the <head> and <body> tags, and then further tags nested before the content itself. However when a webpage is downloaded all one gets is a series of characters. Working directly with that text is fine when using regular expressions, but often we want to traverse the webpage content, which requires parsing the tree structure.

Unfortunately the HTML of many webpages around the internet is invalid - for example a list may be missing closing tags:

<ul>  
 <li>abc
 <li>def  
 <li>ghi
</ul>

but it still needs to be interpreted as a proper list:

  • abc
  • def
  • ghi

This means we can’t naively parse HTML by assuming a tag ends when we find the next closing tag. Instead it is best to use one of the many HTML parsing libraries available, such as BeautifulSoup, lxml, html5lib, and libxml2dom.

The best known and most widely used of these libraries seems to be BeautifulSoup: a Google search for Python web scraping module currently returns BeautifulSoup as the first result.
However I use lxml instead because I find it more robust when parsing bad HTML. Additionally Ian Bicking found lxml more efficient than the other parsing libraries, though my priority is accuracy over speed.

You will need version 2 or later of lxml, which includes the html module. This meant compiling lxml myself on Ubuntu releases up to 8.10, which shipped with an earlier version.
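
If you are unsure which version is installed it can be checked from the interpreter (the exact number will of course vary):

>>> import lxml.etree
>>> lxml.etree.__version__
'2.2.4'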

Here is an example of how to parse the broken HTML above with lxml:

>>> from lxml import html  
>>> tree = html.fromstring('<ul><li>abc<li>def<li>ghi</ul>')  
>>> tree.xpath('//ul/li')  
[<Element li at 959553c>, <Element li at 95952fc>, <Element li at 959544c>]
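
To get the text out of those elements, lxml’s text_content method can be used; continuing the session above:

>>> [li.text_content() for li in tree.xpath('//ul/li')]
['abc', 'def', 'ghi']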


Posted 30 Dec 2009 in big picture and business

In this post I will try to clarify what web scraping is all about by walking through a typical (though fictional) project.

Firstly a client contacted me through my quote form requesting US demographic data from the official census website, delivered in a spreadsheet. I spent some time getting to know this website and found it followed a simple hierarchy, with navigation performed by selecting options from select boxes:

Overview page / state pages / county pages / city pages

I viewed the source of these webpages and found the content I was after embedded, which meant it did not rely on JavaScript and would be easier to scrape.

I emailed the client back that the census website was relatively small and easily navigable. I would be able to provide a spreadsheet of the census data within 3 days for $200. The client was satisfied with this arrangement, so it was time to get started.

The first step was to collect all the state page URLs from the select box using an XPath expression. I used Firefox’s Firebug extension to identify the appropriate XPath, and found that the county and city pages followed the same structure, so the same XPath could be used to extract URLs from them too. Now I had all the location URLs. These URLs could have been collected manually, but that would have taken longer, been boring, and been harder to update if the website changed in future.
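
As a rough illustration of that step (the overview URL and the structure of the select box here are invented, not taken from the real census site):

import urllib2  # Python 2, in keeping with the era of this post
from lxml import html

doc = html.fromstring(urllib2.urlopen('http://example.com/census/overview').read())
# assume each option's value attribute holds the URL of a state page
state_urls = doc.xpath('//select[@id="state"]/option/@value')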

I set the script downloading all these locations and meanwhile started work on the scraper part. Each location page had a large table of demographic details, so again I crafted a set of XPaths to extract the content.

Now I was on the home stretch. I combined these various parts into a single script that iterated over the location pages, extracted the content with XPath, and wrote the results out to a CSV spreadsheet.
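
A stripped-down sketch of what such a script might look like; the URLs, XPaths, and columns are all made up for illustration:

import csv
import urllib2
from lxml import html

# collected earlier from the select boxes
location_urls = ['http://example.com/census/state/1',
                 'http://example.com/census/state/2']

writer = csv.writer(open('demographics.csv', 'wb'))  # 'wb' for the csv module on Python 2
writer.writerow(['Location', 'Population', 'Median age'])
for url in location_urls:
    doc = html.fromstring(urllib2.urlopen(url).read())
    writer.writerow([
        doc.xpath('//h1/text()')[0],
        doc.xpath('//td[@id="population"]/text()')[0],
        doc.xpath('//td[@id="median_age"]/text()')[0],
    ])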

While the webpages were still downloading I provided a sample to the client for feedback. They requested separate spreadsheets for state, county, and city data, which was fine. Providing updated formats was straightforward because all the downloaded webpages were cached.
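
The cache does not need to be anything sophisticated; one minimal approach, sketched below, is a download function that saves each page to disk the first time it is fetched and reuses that copy afterwards:

import os
import urllib2
from hashlib import md5

def download(url, cache_dir='cache'):
    """Return the HTML for url, reusing a copy saved by an earlier run if one exists."""
    if not os.path.exists(cache_dir):
        os.makedirs(cache_dir)
    cache_file = os.path.join(cache_dir, md5(url).hexdigest() + '.html')
    if os.path.exists(cache_file):
        return open(cache_file).read()
    page = urllib2.urlopen(url).read()
    open(cache_file, 'w').write(page)
    return page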

When downloading had completed I sent the final version and an invoice. QED


Posted 20 Dec 2009 in big picture

The internet contains a huge amount of useful data but most is not easily accessible. Web scraping involves extracting this data from websites into a structured format.

Here are some typical use cases for web scraping:

  • Extract business contact details from Yellow Pages into a CSV spreadsheet
  • Extract reviews from Google Places into a MySQL database
  • Extract product prices from Amazon so your company can price match
  • Track what people are saying about your product on Twitter and Facebook

If this sounds interesting then feel welcome to contact me to discuss further.