Analyzing “big data” doesn’t have to be expensive, and it just got cheaper, thanks to a new tool that automates the analysis needed to make sense of massive data sets.
Pete Warden, the man who became famous after he scraped 220 million public Facebook profiles last year, unveiled his Data Science Toolkit at GigaOM’s Structure BigData conference in New York City on March 23. The Data Science Toolkit allows anyone to do automated conversions and data analysis on large data sets, he said.
In a 20-minute talk titled “Supercomputing on a Minimum Wage,” Warden noted that data analysis doesn’t have to be expensive. “You can hire a hundred servers from Amazon for $10 an hour,” he said.
The toolkit is a collection of open data sets and open-source data analysis tools wrapped in an easy-to-use interface. Its features include extracting geographic locations from news articles and other unstructured data, and using OCR (optical character recognition) to convert PDFs of scanned images into text files, Warden said.
The Data Science Toolkit is available under the GPL (General Public License) and can be used either as a Web service or downloaded to run on Amazon EC2 (Elastic Compute Cloud) or a virtual machine.
Users can also convert street addresses or IP addresses into latitude/longitude coordinates and map those coordinates against political and demographic data, according to the toolkit’s website.
A quick test of a residential address in Brooklyn, N.Y., returned information about which Congressional district it was associated with.
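A minimal sketch of how such lookups might be scripted against the service. The endpoint paths below (`/street2coordinates`, `/coordinates2politics`) and base URL are assumptions modeled on the toolkit’s web API; consult the toolkit’s own documentation for the authoritative interface.

```python
import urllib.parse

# Assumed base URL; a self-hosted EC2 or VM instance of the toolkit
# would answer on its own host with the same paths.
BASE_URL = "http://www.datasciencetoolkit.org"

def street2coordinates_url(address):
    """Build the request URL for converting a street address into
    latitude/longitude coordinates. Fetching it returns JSON."""
    return BASE_URL + "/street2coordinates/" + urllib.parse.quote(address)

def coordinates2politics_url(lat, lon):
    """Build the request URL for mapping coordinates to political
    boundaries, such as a U.S. Congressional district."""
    return BASE_URL + "/coordinates2politics/%f,%f" % (lat, lon)

# Example: geocode an address, then feed the resulting coordinates
# into the politics lookup to learn its Congressional district.
print(street2coordinates_url("123 Main St, Brooklyn, NY"))
print(coordinates2politics_url(40.65, -73.95))
```

Fetching these URLs (for example with `urllib.request.urlopen`) would return JSON responses; the example here only constructs the requests.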
It can also pull country, city and regional names from a block of text and return the relevant coordinates using the Geodict tool, which the toolkit’s description likens to Yahoo’s Placemaker. Users can also submit blocks of HTML from any page, including a news article, and get back just the text that would actually be displayed in the browser, as well as identify real sentences within a block of text. It can also extract people’s names and titles, and guess gender from entered text.
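The text-processing features above are exposed as web endpoints that accept raw text or HTML in the request body. A hedged sketch, assuming endpoint paths (`/text2places`, `/html2text`) modeled on the toolkit’s API; the exact paths and response formats should be checked against the toolkit’s own documentation.

```python
import urllib.request

# Assumed base URL for the hosted service; a downloaded instance
# would expose the same paths on its own host.
BASE_URL = "http://www.datasciencetoolkit.org"

def text2places_request(text):
    """Build a POST request sending a block of text; the service
    (Geodict) returns JSON describing any country, city or regional
    names it finds, along with their coordinates."""
    return urllib.request.Request(
        BASE_URL + "/text2places",
        data=text.encode("utf-8"),
        method="POST")

def html2text_request(html):
    """Build a POST request sending raw HTML; the service returns
    only the text a browser would actually display."""
    return urllib.request.Request(
        BASE_URL + "/html2text",
        data=html.encode("utf-8"),
        method="POST")

# Sending either request with urllib.request.urlopen(req) would
# return the service's JSON response.
req = text2places_request("She flew from Springfield, Illinois to Paris.")
print(req.full_url, req.get_method())
```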
Warden had used Amazon servers and a number of tools to analyze the profile data of 220 million Facebook users in February 2010. He used a web crawler to crawl Facebook, scraping 500 million pages representing those 220 million users. Thanks to “about a hundred bucks” and Amazon’s servers, he transformed the scraped data into a database-ready format in 10 hours, he said.
He was able to analyze friendship relationships on Facebook using the data and created some fun visualizations of how cities and states in the United States are connected to each other through Facebook. He also mined the data for the most common names, fan pages and friend locations around the world.
Warden noted there were a number of ways to harvest similar data from other sources, including Google Profiles.
Facebook didn’t like what he was doing with the data, and took steps to stop him. It took him two months and $3,000 in legal fees to convince Facebook that what he was doing wasn’t illegal, he said, but he still had to delete the data from the servers. Facebook claimed that he didn’t have permission to scrape the profiles, although he did not hack or compromise any pages and looked only at publicly available ones. Facebook also claimed that his plan to make the raw data available to researchers violated its terms of service.
“Big data? Cheap. Lawyers? Not so cheap,” Warden said to audience laughter.