I Don’t Need No Stinking API – Web Scraping in 2016 and Beyond

Social media APIs and their rate limits have not been nice to me recently – especially Instagram’s. Who needs them anyway?

Sites are getting increasingly smart about detecting scraping / data mining attempts. AngelList even detects PhantomJS (I have not seen other sites do this). But if you are automating the exact actions you would take in a real browser, can this even be blocked?

First off, in terms of concurrency or the amount of horsepower you get for your hard-earned $$$ – Selenium sucks. It’s simply not built for what you would consider ‘scraping’. But with sites being built with more and more smarts these days, the only truly reliable way to mine data off the internets is to use browser automation.

My stack is pretty much all JavaScript. There go a few readers 😑😆 – WebdriverIO, Node.js and a bunch of NPM packages, including the likes of antigate (thanks to Troy Hunt – Breaking CAPTCHA with automated humans), but I’m sure most of my techniques can be applied to any flavour of the Selenium 2 driver. It just happens that I find JavaScript optimal for browser automation.

Purpose of This Post

I’m going to share everything that I’ve learnt to date from my recent love affair with Selenium automation/scraping/crawling. The purpose of this post is to illustrate some of the techniques I’ve come up with which I haven’t seen published anywhere else – broader, applicable ideas to be shared around and discussed by the webdev community. Also, I need that Kobe Bryant money.

Scraping? Isn’t it Illegal?

Taken from my Hacker News comment:

On the ethics side, I don’t scrape large amounts of data – eg. giving clients lead gen (x leads for y dollars) – in fact, I have never done a scraping job and don’t intend to do those jobs for profit.

For me it’s purely for personal use and my little side projects. I don’t even like the word scraping because it comes loaded with so many negative connotations (which sparked this whole comment thread) – and for good reason – it’s reflective of the demand in the market. People want cheap leads to spam, and that’s a bad use of technology.

Generally I tend to focus more on words and phrases like ‘automation’ and ‘scripting a bot’. I’m just automating my life: I’m writing a bot to replace what I would otherwise have to do on a daily basis – like looking on Facebook for some gifs and videos then manually posting them to my site. Would I spend an hour each and every day doing this? No, I’m much lazier than that.

Who is anyone to tell me what I can and can’t automate in my life?

Let’s get to it.

Faking Human Delays

It’s definitely good practice to add human-like, random pauses in some places, just to be extra safe:
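Something along these lines does the trick – a minimal sketch using WebdriverIO’s pause(); the randomDelay helper is just illustrative naming:

```js
// Minimal sketch – randomDelay() is an illustrative helper, not part of any library.
function randomDelay(min, max) {
  // Random whole number of milliseconds between min and max.
  return Math.floor(Math.random() * (max - min + 1)) + min;
}

// Pause for 2–5 seconds before the next action, like a human would.
browser.pause(randomDelay(2000, 5000));
```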

Parsing Data jQuery Style with Cheerio

Below is a snippet from a function that gets videos from a Facebook page:
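Roughly along these lines – a sketch of the approach, with an illustrative URL and selectors rather than the exact ones I use:

```js
var cheerio = require('cheerio');

// Grab the rendered page source from the browser, then parse it jQuery-style.
browser
  .url('https://www.facebook.com/SomePage/videos')
  .getSource()
  .then(function (html) {
    var $ = cheerio.load(html);
    var videos = [];

    // Walk the markup and collect anything that looks like a video link.
    $('a').each(function () {
      var href = $(this).attr('href');
      if (href && href.indexOf('/videos/') !== -1) {
        videos.push(href);
      }
    });

    console.log(videos);
  });
```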

 

I also use a similar method, plus some regex, to parse RSS feeds that can’t be read by command-line, cURL-like scripts.

It actually works pretty well – I’ve tested it with multiple sources.

Injecting JavaScript

If you get to my level, injecting JavaScript into the client side becomes commonplace.
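For example, via WebdriverIO’s execute() – a throwaway sketch:

```js
// Totally non-practical sketch – it only proves arbitrary client-side JS can be run.
browser.execute(function () {
  // This function runs inside the page, not in Node.
  document.body.style.background = 'hotpink';
  document.title = 'I was here';
});
```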

By the way, this is a totally non-practical example (in case you haven’t noticed). Check out the following headings.

Beating CAPTCHA

GIFLY.co had not updated for over 48 hours and I wondered why. My script, which gets animated GIFs from various Facebook pages, was being hit with the captcha screen 😮

Cracking Facebook’s captcha was actually pretty easy. It took me exactly 15 minutes to accomplish this. I’m sure there are ways to do this in-house, but with Antigate providing an NPM package and costs so low, it was a no-brainer for me.
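The flow looks roughly like this – a sketch only, with illustrative selectors and a hypothetical solveWithAntigate() wrapper around the antigate package, not the exact code behind GIFLY.co:

```js
browser
  .execute(function () {
    // Runs in the page: draw the captcha image onto a canvas and return it
    // as a Base64-encoded PNG data URI (the David Walsh canvas trick).
    var img = document.querySelector('img#captcha'); // illustrative selector
    var canvas = document.createElement('canvas');
    canvas.width = img.naturalWidth;
    canvas.height = img.naturalHeight;
    canvas.getContext('2d').drawImage(img, 0, 0);
    return canvas.toDataURL('image/png');
  })
  .then(function (result) {
    // result.value holds the data URI; strip the prefix and hand the raw Base64
    // to a wrapper around the antigate package (hypothetical helper).
    var base64 = result.value.replace(/^data:image\/png;base64,/, '');
    return solveWithAntigate(base64);
  })
  .then(function (text) {
    // Type the solved text into the captcha field and submit (illustrative selectors).
    return browser
      .setValue('input[name="captcha_response"]', text)
      .click('button[type="submit"]');
  });
```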

 

So injecting JavaScript becomes super handy here. I’m converting the image to a canvas, then calling .toDataURL() to get a Base64-encoded PNG to send to the Antigate endpoint. The conversion function was stolen from a site I steal a lot of things from – shouts to David Walsh. This solves the Facebook captcha, enters the value, then clicks submit.

Catching AJAX Errors

Why would you want to catch client-side AJAX errors? Because reasons. For example, I automated unfollowing everyone on Instagram, and I found that even through their website (not via the API) there is some kind of rate limit.

Because a follow/unfollow invokes an AJAX call, and being rate limited would surface as an AJAX error, I inject an AJAX error-capturing function and save the error count to a global variable.

I retrieve this value after each unfollow and terminate the script if I get 3 errors.
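Roughly like this – a sketch with illustrative variable names:

```js
// Injected once: count any XHR that comes back with an error status.
browser.execute(function () {
  window.__ajaxErrors = 0;
  var origOpen = XMLHttpRequest.prototype.open;
  XMLHttpRequest.prototype.open = function () {
    this.addEventListener('load', function () {
      if (this.status >= 400) {
        window.__ajaxErrors++;
      }
    });
    this.addEventListener('error', function () {
      window.__ajaxErrors++;
    });
    return origOpen.apply(this, arguments);
  };
});

// After each unfollow, read the counter back and bail if we look rate limited.
browser.execute(function () {
  return window.__ajaxErrors;
}).then(function (result) {
  if (result.value >= 3) {
    console.log('Looks like a rate limit – stopping here.');
    return browser.end();
  }
});
```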

Intercepting AJAX Data

While scraping/crawling/spidering Instagram, I ran into a problem. A tag page did not give me the post date in the DOM. I really needed this data for IQta.gs, and I couldn’t afford to visit every post as I’m parsing about 200 photos each time.

What I did find, though, is that there is a date variable stored in the post object that the browser receives. Heck, I ended up not even using this variable, but this is what I came up with:
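A sketch of the interception idea – the URL filter and response shape here are illustrative, not Instagram’s exact endpoints:

```js
// Injected once: stash the JSON payloads the page receives so the crawler
// can read them later from Node.
browser.execute(function () {
  window.__capturedResponses = [];
  var origSend = XMLHttpRequest.prototype.send;
  XMLHttpRequest.prototype.send = function () {
    this.addEventListener('load', function () {
      // Keep anything that looks like the tag page's post payload.
      if (this.responseURL && this.responseURL.indexOf('/query/') !== -1) {
        window.__capturedResponses.push(this.responseText);
      }
    });
    return origSend.apply(this, arguments);
  };
});

// Later, pull the captured payloads back into Node and dig out the dates.
browser.execute(function () {
  return window.__capturedResponses;
}).then(function (result) {
  result.value.forEach(function (body) {
    var post = JSON.parse(body);
    // e.g. post.date – the exact property depends on what the page actually receives.
  });
});
```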

It’s getting a little late in Melbourne.

Other Smarts

So I run all of this in a Docker container running on AWS. I’ve pretty much made my Instagram crawlers fault-tolerant with some bash scripting (goes to check if it’s running now).

Yes, it seems all 6 IQta.gs crawlers are running fine 🙂 I’ve run into some issues with Docker where an image becomes unusable – I have no idea why and didn’t spend time looking into the root cause – but basically my bash script will detect inactivity, completely remove the Selenium grid and start it again from a fresh image.

Random Closing Thoughts

I had this heading written down before I started writing this post and I have forgotten the random thoughts I had back then. Oh well, maybe it will come back tomorrow.

Ah, a special mention goes out to Hartley Brody and his post, which was a very popular article on Hacker News in 2012/2013 – it inspired me to write this.

For those of you wondering what the hell browser is:
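It’s a plain WebdriverIO standalone client – something along these lines (the capabilities and grid address are just an example):

```js
var webdriverio = require('webdriverio');

// Standard WebdriverIO (v4) standalone setup.
var browser = webdriverio
  .remote({
    desiredCapabilities: { browserName: 'chrome' },
    host: 'localhost', // Selenium server / grid
    port: 4444
  })
  .init();
```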

And argv comes from yargs:
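A minimal example (the --tag flag is just for illustration):

```js
// e.g. `node crawler.js --tag melbourne`
var argv = require('yargs').argv;

console.log(argv.tag); // 'melbourne'
```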

Thanks to my long time friend In-Ho @ Google for proofreading! You can check out some of his music production here.


Comments

  • I Don’t Need No Stinking API – Web Scraping in 2016 and Beyond | Raony Guimarães

    August 24, 2016 at 7:25 am

    […] Source: I Don’t Need No Stinking API – Web Scraping in 2016 and Beyond […]

  • VividSoftwareSolutions

    August 24, 2016 at 8:31 am

    Francis, very impressive. I like the Catching Ajax errors part.
    Thanks a ton for all the useful snippets!

  • Francis Kim

    August 24, 2016 at 9:48 am

    Thank you, glad you find it useful!

  • Bookmarks for August 23rd | Chris's Digital Detritus

    August 24, 2016 at 9:02 am

    […] I Don’t Need No Stinking API – Web Scraping in 2016 and Beyond – […]

  • Kevin

    August 24, 2016 at 9:27 am

    Just curious, how would you make money with scraping??

I’m a JS developer by profession, and have to set up Selenium + mocha / nightwatch frequently.

So I wonder if there is some way to make an extra buck 🙂

  • Francis Kim

    August 24, 2016 at 9:47 am

    Not sure to be honest lol. Most scraping jobs seem to be kind of low quality jobs wanting x leads for y dollars – that’s certainly not what I am after. But it has served me well for personal use. I think the money will be in creating a bot that allows smart automation rather than just doing the numbers.

  • chejazi

    August 24, 2016 at 1:22 pm

    If you can create sites with verticals that curate content automatically and better than a general purpose sharing platform, then you could make advertising revenue.

  • Francis Kim

    August 25, 2016 at 12:31 pm

    That’s what I’m trying to do but getting the site known doesn’t seem easy!

  • Nick Sweeting

    August 24, 2016 at 10:17 am

    Lead-gen jobs are all over the place, but they don’t pay very well unless you can scrape multiple sources and cross-reference to improve lead quality (which usually takes some data-science experience).

  • Cupcake89

    August 24, 2016 at 11:19 am

Your logo and especially its colors resemble speedof.me. Haha

  • Francis Kim

    August 24, 2016 at 12:27 pm

    Well, colours maybe haha.

  • zack

    August 24, 2016 at 12:24 pm

    It’s not scraping, it’s an HTML feed!

  • Francis Kim

    August 24, 2016 at 12:27 pm

    Haha, I like your thinking 🙂

  • I Don’t Need No Stinking API – Web Scraping in 2016 and Beyond | michaltheil

    August 24, 2016 at 2:52 pm

    […] Source: I Don’t Need No Stinking API – Web Scraping in 2016 and Beyond […]

  • Ronald

    August 24, 2016 at 5:31 pm

Great article and awesome work with GIFLY.co! I happen to have published a small scraper lib on npm for those looking to do some quick scraping of listing-type sites: https://www.npmjs.com/package/cerealscraper

    Keep up the good work 😉

  • JavaScript Scraping Techniques – Fraud Engineering

    August 24, 2016 at 6:22 pm

    […] Very informative article about modern scraping tips and tricks. […]

  • Francis Kim

    August 24, 2016 at 6:56 pm

Christian the man himself! Thanks for that, I shall start using it when the pyramid gets too high.

  • Nick

    August 25, 2016 at 6:46 am

    Hi! Ever do any scraping side projects? I need a few existing ones upgraded.

  • Anon

    August 24, 2016 at 9:10 pm

    There are so many different ways to scrape/mine data from the internet.

    How do you compare these custom builds vs the latest startups trying to solve the “scraping” problem for the rest?

    Have you looked into tools like;

    Import.io
    Mozenda
    ScrapingHub

    And other frameworks for scraping/parsing data?

    Scrapy for example?

    Pro Tip: You could also apply machine learning to your current setup to analyze, classify and filter your data before reposting it on your “curated” websites.

    MonkeyLearn is a great example of an easy to use machine learning API

  • Marc

    August 24, 2016 at 9:19 pm

    Glassdoor seems to be detecting Phantom.js as well. Did the “human” delay help here?

  • I'm a Bot Developer

    August 25, 2016 at 12:21 am

    […] Anyway, there’s brewing interest around chatbots and bots in general. Like a digital personal assistant, I think it’s great to see the productisation and humanisation of code. It’s easy for anyone to relate to and it doesn’t feel as rigid as ‘scraper’ or ‘data miner’ – maybe that’s a different matter, but I’m still stuck on my last post. […]

  • Douglas Muth

    August 25, 2016 at 4:07 am

    Being a bit of a newbie to Docker, I’d be interested in seeing an article on how to get these crawlers up and running in Docker, how you store/check output from the crawlers, etc. Thanks!

  • John Rowling

    August 25, 2016 at 4:40 am

    Great article Francis. These snippets are life-saver. Thanks for sharing.

  • 砍站的技巧? | Gea-Suan Lin's BLOG

    August 25, 2016 at 5:43 am

    […] 在 Hacker News 上看到的文章,講如何用 JavaScript 砍站:「I Don’t Need No Stinking API – Web Scraping in 2016 and Beyond」。 […]

  • Tom

    August 26, 2016 at 12:32 am

    Scrapy for example?
    Single threaded…

RobotFramework is actually a very interesting one for big projects (i.e. connecting multiple services together, or using its Selenium driver and injecting any JS (like jQuery) to do the job). I personally like tools like:
CasperJS
PhantomJS
but these two are so-so when it comes to captcha pooling. Usually this can be sorted out by making a screenshot of the whole page, extracting the coords of the captcha image, then cutting it out. Once you have this you can use antigate (if I understood the idea correctly), deathbycaptcha, decaptcher or any other decaptcha-type service.
    anything that supports headless

  • nick3499

    August 26, 2016 at 5:54 am

    [Rhetorical] How could this scraping cat-and-mouse game ever become illegal? Internet exploits publicized data. Sysops can swipe at scraper gnats, and add comments to their TOS to discourage, but nothing more.

    Virtual reality headsets draw user minds deeper into technoscapes, but Tron-like WAN awareness is impossible. No matter how often mindset-morphing mediascapes convince users they are nothing more than fleshy machines (just buzz to stimulate elite interest, and shift favor toward subservience).

    Humans will always transcend machines, just as your own instructions adapt to human-sysop strats. And wield the modules of high-level tools to thwart human-sysop foes. Throwing down a staff which transforms into Python.

  • John C.

    August 27, 2016 at 2:23 am

    “My stack looks like, pretty much all JavaScript”

    *vomit*, *close tab*…

  • Juliano

    August 28, 2016 at 3:47 am

Francis, great stuff! I’m actually working on writing an “HTML Feed” like yours – in fact, that’s how I ran into your article. I need a task that checks for changes weekly and, if a particular element in the DOM or a particular site changes, updates automatically and creates a JSON each time. I’m not as comfortable as you are with all these tools but I can pick things up. Recommend any other resources besides this great article?

  • Week 32 | import digest

    August 31, 2016 at 8:13 pm

    […] insights in this review of the state of the art in automated web scraping involving Selenium, phantomJS, node and a host of JavaScript […]

  • Link Dump: August 2016 – var blog = "";

    September 1, 2016 at 1:34 am

    […] I Don’t Need No Stinking API – Web Scraping in 2016 and Beyond […]

  • Top 10 JavaScript Articles for the Past Month (v.August) | 懒得折腾

    September 2, 2016 at 3:46 am

    […] Rank 1 […]
