As a student I was fortunate to have the opportunity to learn about web scraping under the guidance of Professor Timothy Baldwin. Frustrated by the amount of manual work in a previous project, I set out to build a tool to make scraping web pages easier.

My goal for this tool was that it should be possible to train a program to scrape a website just by giving the desired outputs for some example webpages. The tool would build a model of how to extract this content, and that model could then be applied to scrape other webpages that use the same template.

The tool was eventually called sitescraper and is available for download on bitbucket. For more information have a browse of this paper, which covers the implementation and results in detail.

I use sitescraper for much of my scraping work and sometimes make updates based on experience gained from a project. Here is some example usage:

>>> from sitescraper import sitescraper
>>> ss = sitescraper()  
>>> url = '
>>> data = [" python", ["Learning Python, 3rd Edition",   
  "Programming in Python 3: A Complete Introduction to the Python Language",
  "Python in a Nutshell, Second Edition (In a Nutshell (O'Reilly))"]]  
>>> ss.add(url, data)  
>>> # we can add multiple example cases,
>>> # but this is a simple example so one will do (I generally use 3)  
>>> # ss.add(url2, data2)   
>>> ss.scrape('
[" linux", [
    "A Practical Guide to Linux(R) Commands, Editors, and Shell Programming", 
    "Linux Pocket Guide", 
    "Linux in a Nutshell (In a Nutshell (O'Reilly))", 
    'Practical Guide to Ubuntu Linux (Versions 8.10 and 8.04), A (2nd Edition)', 
    'Linux Bible, 2008 Edition'

Using regular expressions for web scraping is sometimes criticized, but I believe they still have their place, particularly for one-off scrapes. Let’s say I want to extract the title of a particular webpage - here is an implementation using BeautifulSoup, lxml, and regular expressions:

import re
import time
import urllib2
from BeautifulSoup import BeautifulSoup
from lxml import html as lxmlhtml

def timeit(fn, *args):
    t1 = time.time()
    for i in range(100):
        fn(*args)  # call the function under test repeatedly
    t2 = time.time()
    print '%s took %0.3f ms' % (fn.func_name, (t2-t1)*1000.0)
def bs_test(html):
    soup = BeautifulSoup(html)
    return soup.html.head.title
def lxml_test(html):
    tree = lxmlhtml.fromstring(html)
    return tree.xpath('//title')[0].text_content()
def regex_test(html):
    return re.findall('<title>(.*?)</title>', html)[0]
if __name__ == '__main__':
    url = ''
    html = urllib2.urlopen(url).read()
    for fn in (bs_test, lxml_test, regex_test):
        timeit(fn, html)

The results are:

regex_test took 40.032 ms
lxml_test took 1863.463 ms
bs_test took 54206.303 ms

That means for this use case lxml takes over 40 times longer than regular expressions, and BeautifulSoup over 1000 times longer! This is because lxml and BeautifulSoup parse the entire document into their internal format, when only the title is required.

XPaths are very useful for most web scraping tasks, but there is still a use case for regular expressions.

In an earlier post I referred to XPaths but did not explain how to use them.

Say we have the following HTML document:

  <div id="content">  
    <li>First item</li>  
    <li>Second item</li>  

To access the list elements we follow the HTML structure from the root tag down to the li’s:

html > body > 2nd div > ul > many li's.

An XPath to represent this traversal is:

/html/body/div[2]/ul/li
If a tag has no index then every tag of that type will be selected:

/html/body/div/ul/li
XPaths can also use attributes to select nodes:

/html/body/div[@id="content"]/ul/li
And instead of using an absolute XPath from the root, the XPath can be made relative to a particular node by using a double slash:

//div[@id="content"]/ul/li
This is more reliable than an absolute XPath because it can still locate the correct content after the surrounding structure is changed.

There are other features in the XPath standard but the above are all I use regularly.
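These expressions can be tried out from Python with lxml. Here is a sketch using the example document above (the first div's content is a stand-in, since only the second div matters here):

```python
from lxml import html

doc = html.fromstring("""
<html><body>
  <div>sidebar</div>
  <div id="content">
    <ul>
      <li>First item</li>
      <li>Second item</li>
    </ul>
  </div>
</body></html>""")

# absolute XPath: the index [2] selects the second div under body
print([li.text for li in doc.xpath('/html/body/div[2]/ul/li')])
# relative XPath: anchored on the id attribute instead of position
print([li.text for li in doc.xpath('//div[@id="content"]/ul/li')])
```

Both expressions select the same two li elements, but the second keeps working if another div is inserted before the content div.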

A handy way to find the XPath of a tag is with Firefox’s Firebug extension. To do this open the HTML tab in Firebug, right click the element you are interested in, and select “Copy XPath”. (Alternatively use the “Inspect” button to select the tag.)
This will give you an XPath with indices only where there are multiple tags of the same type, such as:

/html/body/div[2]/ul/li
One thing to keep in mind is that Firefox will always create a tbody tag within tables, whether it existed in the original HTML or not. This has tripped me up a few times!
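The mismatch is easy to demonstrate with lxml, which (unlike Firefox) keeps the HTML as served:

```python
from lxml import html

# this table has no tbody in its source HTML
doc = html.fromstring('<table><tr><td>cell</td></tr></table>')

# an XPath copied from Firefox may include the generated tbody and match nothing
print(doc.xpath('//table/tbody/tr/td'))
# dropping the tbody matches the document as actually served
print(doc.xpath('//table/tr/td')[0].text)
```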

For one-off scrapes the above XPath should be fine. But for long term repeat scrapes it is better to use a relative XPath around an ID element with attributes instead of indices. From my experience such an XPath is more likely to survive minor modifications to the layout. However for a more robust solution see my SiteScraper library, which I will introduce in a later post.

HTML is a tree structure: at the root is a <html> tag followed by the <head> and <body> tags and then more tags before the content itself. However when a webpage is downloaded all one gets is a series of characters. Working directly with that text is fine when using regular expressions, but often we want to traverse the webpage content, which requires parsing the tree structure.

Unfortunately the HTML of many webpages around the internet is invalid - for example a list may be missing closing tags:

  <ul>
    <li>abc</li>
    <li>def
    <li>ghi</li>
  </ul>
but it still needs to be interpreted as a proper list:

  • abc
  • def
  • ghi

This means we can’t naively parse HTML by assuming a tag ends when we find the next closing tag. Instead it is best to use one of the many HTML parsing libraries available, such as BeautifulSoup, lxml, html5lib, and libxml2dom.

The best known and most widely used such library seems to be BeautifulSoup. A Google search for Python web scraping module currently returns BeautifulSoup as the first result.
However I use lxml instead because I find it more robust when parsing bad HTML. Additionally, Ian Bicking found lxml more efficient than the other parsing libraries, though my priority is accuracy over speed.

You will need version 2 onwards of lxml, which includes the html module. On Ubuntu releases up to 8.10, which shipped an earlier version, this meant compiling lxml myself.

Here is an example of how to parse the previous broken HTML with lxml:

>>> from lxml import html
>>> tree = html.fromstring('<ul><li>abc</li><li>def<li>ghi</li></ul>')
>>> tree.xpath('//li')
[<Element li at 959553c>, <Element li at 95952fc>, <Element li at 959544c>]

In this post I will try to clarify what web scraping is all about by walking through a typical (though fictional) project.

Firstly a client contacted me through my quote form requesting US demographic data in a spreadsheet from the official census website. I spent some time getting to know this website and found it followed a simple hierarchy, with navigation performed by selecting options from select boxes:

Overview page > state pages > county pages > city pages

I viewed the source of these webpages and found the content I was after embedded in the HTML, which meant it did not rely on JavaScript and would be easier to scrape.

I emailed the client back that the census website was relatively small and easily navigable, and that I would be able to provide a spreadsheet of the census data within 3 days for $200. The client was satisfied with this arrangement, so it was time to get started.

The first step was to collect all the state page URLs from the select box using an XPath expression. I used Firefox’s Firebug extension to identify the appropriate XPath. I found the county and city pages followed the same structure, so the same XPath could be used to extract URLs from them too, giving me all the location URLs. These URLs could have been collected manually, but that would take longer, be boring, and be harder to update if the website changed in future.
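A sketch of that first step with lxml, using made-up markup since the real census pages are not shown here:

```python
from lxml import html

# hypothetical select box standing in for the census overview page
page = html.fromstring("""
<form>
  <select id="state">
    <option value="/state/alabama">Alabama</option>
    <option value="/state/alaska">Alaska</option>
  </select>
</form>""")

# the @value step extracts the attribute directly rather than the element
state_urls = page.xpath('//select[@id="state"]/option/@value')
print(state_urls)
```

The same expression, pointed at a county or city page with the same structure, yields the next level of URLs.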

I set the script downloading all these locations and meanwhile started work on the scraper part. Here is a sample location page with a large table of demographic details. Again I crafted a set of XPaths to extract the content.

Now I am on the home stretch. I combine these various parts into a single script that iterates over the location pages, extracts the content with XPath, and writes the results out to a CSV spreadsheet file.
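The writing-out step is just Python's csv module. A minimal sketch with hypothetical extracted rows (an in-memory buffer stands in for the real output file):

```python
import csv
import io

# hypothetical rows as they might come back from the XPath extraction step
rows = [
    ('Example County', '10000'),
    ('Example City', '2500'),
]

buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(['location', 'population'])  # header row
writer.writerows(rows)
print(buf.getvalue())
```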

While the webpages are still downloading I provide a sample to the client for feedback. They request separate spreadsheets for state, county, and city, which is fine. Providing updated formats is straightforward because all downloaded webpages are cached.

When downloading has completed I send the final version and an invoice. QED

The internet contains a huge amount of useful data but most is not easily accessible. Web scraping involves extracting this data from websites into a structured format.

Here are some typical use cases for web scraping:

  • Extract business contact details from Yellow Pages into a CSV spreadsheet
  • Extract reviews from Google Places into a MySQL database
  • Extract product prices from Amazon so your company can price match
  • Track what people are saying about your product on Twitter and Facebook

If this sounds interesting then feel welcome to contact me to discuss further.