Posted 27 Aug 2010 in beautifulsoup, lxml, python, scrapy, and xpath

I have been asked a few times why I chose to reinvent the wheel when libraries such as Scrapy and lxml already exist.

I am aware of these libraries and have used them in the past with good results. However, my current work involves building relatively simple web scraping scripts that I want to run without hassle on the client’s machine. This rules out installing full frameworks such as Scrapy or compiling C-based libraries such as lxml - I need a pure Python solution. Sticking to pure Python also gives me the flexibility to run the scripts on Google App Engine.

To scrape webpages there are generally two stages: parse the HTML and then select the relevant nodes.
The best known Python HTML parser seems to be BeautifulSoup, but I find it slow, awkward to use (compared to XPath), prone to parsing HTML inaccurately, and - significantly - the original author has lost interest in developing it further. So I would not recommend using it - go with html5lib instead.
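
html5lib is pure Python and repairs broken markup the way a browser does. A quick sketch of what that looks like (the HTML snippet here is made up for illustration):

# Parse messy HTML with html5lib (pure Python, no C extensions).
# Note the unclosed <li> tags, which html5lib repairs like a browser would.
import html5lib

html = '<ul id="results"><li>first item<li>second item</ul>'
tree = html5lib.parse(html, namespaceHTMLElements=False)  # xml.etree tree by default
for li in tree.findall('.//li'):
    print(li.text)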

To select HTML content I use XPath. Is there a decent pure Python XPath solution? I didn’t find one six months ago when I needed it, so I developed this simple version that covers my typical use cases. I will deprecate it if a decent alternative does come along, but for now I am happy with my pure Python infrastructure.
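
To give a feel for it, typical usage looks roughly like this (the HTML snippet is made up, and treat the exact function names and outputs as illustrative of how my module is organised at the time of writing rather than a stable API):

# Illustrative usage of the simple pure Python XPath module described above.
from webscraping import xpath

html = '<div id="results"><a href="/page1">Page 1</a><a href="/page2">Page 2</a></div>'
print(xpath.search(html, '//div[@id="results"]/a/@href'))  # all matches, e.g. ['/page1', '/page2']
print(xpath.get(html, '//div[@id="results"]/a/@href'))     # first match only, e.g. '/page1'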


Posted 20 Aug 2010 in business, elance, and freelancing

When I started freelancing I created accounts on every freelance site I could find (oDesk, Guru, Scriptlance, etc) to get as much work as possible. However, I found that almost all of my work came from just one source - Elance. How is Elance different?

With most freelancing sites you create an account and immediately start bidding on jobs. There is no cost to bidding, so people bid on many projects even if they don’t have the skill or time to complete them. This is obviously frustrating for clients, who waste a lot of time sifting through bids.

Elance, on the other hand, has a high barrier to entry: you have to pass a test to show you understand their system, then receive a phone call to confirm your identity, and once established you pay for each job you bid on. I often see jobs on Elance with no bids because they require obscure experience - people aren’t willing to waste their money bidding on a job they can’t do. This barrier weeds out less serious freelancers, so the average bid is of higher quality.

In my experience the clients are different on Elance too. On most freelancing sites the client is trying to get the job done for the smallest amount of money possible and is often willing to spend their time sifting through dozens of proposals, hoping to get lucky. Elance seems to attract clients who consider their time valuable and are willing to pay a premium for good service.
Clients often contact me directly through Elance because I am a native English speaker and they want to avoid potential communication or cultural problems. One client even asked me to double my bid because “we are not cheap!”

After a year of freelancing I now get the majority of work directly through my website, but still get a decent percentage of clients through Elance.

My advice for new freelancers - focus on building your Elance profile and don’t waste your time with the others. (Though do let me know if you have had a good experience elsewhere.)


Posted 24 Jul 2010 in website

Regarding the title of this blog, “All your data are belong to us” - I realized not everyone gets the reference. See this Wikipedia article for an explanation.


Posted 10 Jul 2010 in cache and python

When crawling large websites I store the HTML in a local cache, so if I need to rescrape the website later I can load the webpages quickly and avoid putting extra load on their web server. This is often necessary when a client realizes they need additional features included in the scraped output.

I built the pdict library to manage my cache. Pdict provides a dictionary-like interface but stores the data in a sqlite database on disk rather than in memory. All data is automatically compressed (using zlib) before writing and decompressed after reading. Both zlib and sqlite3 come built in with Python (2.5+), so there are no external dependencies.

Here is some example usage of pdict (with a made-up cache file, URLs, and HTML):

>>> from webscraping.pdict import PersistentDict
>>> cache = PersistentDict('cache.db')
>>> cache['http://example.com/page1'] = '<html>page 1</html>'
>>> cache['http://example.com/page2'] = '<html>page 2</html>'
>>> 'http://example.com/page1' in cache
True
>>> cache['http://example.com/page1']
'<html>page 1</html>'
>>> cache.keys()
['http://example.com/page1', 'http://example.com/page2']
>>> del cache['http://example.com/page1']
>>> 'http://example.com/page1' in cache
False
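
For the curious, the core idea is just a thin wrapper around sqlite3 and zlib. This stripped-down sketch is not the actual pdict code, only an illustration of the compress-on-write / decompress-on-read round trip:

# A simplified sketch of the idea behind pdict (hypothetical class, not the real implementation):
# store zlib-compressed values in a sqlite3 table keyed by URL.
import sqlite3
import zlib

class CompressedCache(object):
    def __init__(self, filename):
        self.conn = sqlite3.connect(filename)
        self.conn.execute('CREATE TABLE IF NOT EXISTS cache (key TEXT PRIMARY KEY, value BLOB)')

    def __setitem__(self, key, html):
        data = sqlite3.Binary(zlib.compress(html.encode('utf-8')))  # compress before writing
        self.conn.execute('REPLACE INTO cache (key, value) VALUES (?, ?)', (key, data))
        self.conn.commit()

    def __getitem__(self, key):
        row = self.conn.execute('SELECT value FROM cache WHERE key=?', (key,)).fetchone()
        if row is None:
            raise KeyError(key)
        return zlib.decompress(bytes(row[0])).decode('utf-8')  # decompress after reading

    def __contains__(self, key):
        return self.conn.execute('SELECT 1 FROM cache WHERE key=?', (key,)).fetchone() is not None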


Posted 01 Jul 2010 in business

I prefer to quote per project rather than per hour for my web scraping work because it:

  • gives me incentive to increase my efficiency (by improving my infrastructure)
  • gives the client security about the total cost
  • avoids distrust about the number of hours actually worked
  • makes me look more competitive compared to the hourly rates available in Asia and Eastern Europe
  • avoids the difficulty of tracking time fairly when working on two or more projects simultaneously
  • is practical because the scope of a scraping job is easy to estimate from past experience, at least compared to building websites
  • involves less administration


Posted 12 Jun 2010 in opensource

For most scraping jobs I use the same general approach of crawling, selecting the appropriate nodes, and then saving the results. Consequently I reuse a lot of code across projects, which I have now combined into a library. Most of this infrastructure is now available as open source on Google Code.

The code in that repository is licensed under the LGPL, which means you are free to use it in your own applications (including commercial ones) but are obliged to release any changes you make to the library. This is different from the more popular GPL, which would make the library unusable in most commercial projects. It is also different from BSD and WTFPL style licenses, which let people do whatever they want with the library, including making changes and not releasing them.

I think the LGPL is a good balance for libraries because it lets anyone use the code while everyone can benefit from improvements made by individual users.