
proxycrawl v1.3.0

ProxyCrawl node

Dependency-free module for scraping and crawling websites using the ProxyCrawl API

Installation

Install using npm

npm i proxycrawl

Require the API class in your project

const { ProxyCrawlAPI } = require('proxycrawl');

Usage

Initialize the API with one of your account tokens, either your normal token or your JavaScript token, then make GET or POST requests accordingly.
You can get your free ProxyCrawl token from here.

const api = new ProxyCrawlAPI({ token: 'YOUR_TOKEN' });

GET requests

Pass the URL that you want to scrape, plus any of the options available in the API documentation.

api.get(url, options);

Example:

api.get('https://www.facebook.com/britneyspears').then(response => {
  if (response.statusCode === 200) {
    console.log(response.body);
  }
}).catch(console.error);

You can pass any options from the ProxyCrawl API.

Example:

api.get('https://www.reddit.com/r/pics/comments/5bx4bx/thanks_obama/', {
  userAgent: 'Mozilla/5.0 (Windows NT 6.2; rv:20.0) Gecko/20121202 Firefox/30.0',
  format: 'json'
}).then(response => {
  if (response.statusCode === 200) {
    console.log(response.body);
  }
}).catch(console.error);

POST requests

Pass the URL that you want to scrape and the data that you want to send, which can be either JSON or a string, plus any of the options available in the API documentation.

api.post(url, data, options);

Example:

api.post('https://producthunt.com/search', { text: 'example search' }).then(response => {
  if (response.statusCode === 200) {
    console.log(response.body);
  }
}).catch(console.error);

You can send the data as application/json instead of x-www-form-urlencoded by setting the postType option to json.

api.post('https://httpbin.org/post', { some_json: 'with some value' }, { postType: 'json' }).then(response => {
  if (response.statusCode === 200) {
    console.log(response.body);
  }
}).catch(console.error);

PUT requests

Pass the URL that you want to scrape and the data that you want to send, which can be either JSON or a string, plus any of the options available in the API documentation.

api.put(url, data, options);

Example:

api.put('https://producthunt.com/search', { text: 'example search' }).then(response => {
  if (response.statusCode === 200) {
    console.log(response.body);
  }
}).catch(console.error);

JavaScript requests

If you need to scrape a website built with JavaScript (React, Angular, Vue, etc.), just pass your JavaScript token and use the same calls. Note that only .get is available with the JavaScript token, not .post.

const api = new ProxyCrawlAPI({ token: 'YOUR_JAVASCRIPT_TOKEN' });
api.get('https://www.nfl.com').then(response => {
  if (response.statusCode === 200) {
    console.log(response.body);
  }
}).catch(console.error);

In the same way, you can pass additional JavaScript options.

api.get('https://www.freelancer.com', { pageWait: 5000 }).then(response => {
  if (response.statusCode === 200) {
    console.log(response.body);
  }
}).catch(console.error);

Original status

You can always get the original status and the ProxyCrawl status from the response. Read the ProxyCrawl documentation to learn more about those statuses.

api.get('https://www.craigslist.org').then(response => {
  console.log(response.originalStatus, response.pcStatus);
}).catch(console.error);

If you have questions or need help using the library, please open an issue or contact us.


Copyright 2018 ProxyCrawl
