advanced-sitemap-generator v1.6.9

Sitemap Generator

Easily create XML sitemaps for your website.

Generates a sitemap by crawling your site. Uses streams to efficiently write the sitemap to your drive and runs asynchronously to avoid blocking the thread. Capable of creating multiple sitemaps if a threshold is reached. Respects robots.txt and meta tags.

Install

This module is available on npm.

$ npm install -S advanced-sitemap-generator

This module runs only with Node.js and is not meant to be used in the browser.

Usage

const SitemapGenerator = require('advanced-sitemap-generator');

// create generator
const generator = SitemapGenerator('http://example.com', {
  stripQuerystring: false,
  ignoreHreflang: true
});

// register event listeners
generator.on('done', () => {
  // sitemaps created
});

// start the crawler
generator.start();

The crawler will fetch all folder URLs and the file types parsed by Google. If present, the robots.txt is taken into account and its rules are applied to each URL to decide whether it should be added to the sitemap. The crawler will also not fetch URLs from a page if the robots meta tag with the value nofollow is present, and will ignore pages completely if the noindex rule is present. The crawler is able to apply the base value to found links.
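
As an illustration (the paths are hypothetical), a disallow rule in robots.txt that keeps matching URLs out of the sitemap looks like this:

# robots.txt
User-agent: *
Disallow: /private/

Likewise, a page carrying <meta name="robots" content="nofollow"> has its links skipped, while one carrying noindex is left out of the sitemap entirely.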

API

The generator offers straightforward methods to start and stop it. You can also add URLs manually.

start()

Starts the crawler asynchronously and writes the sitemap to disk.

stop()

Stops the running crawler and halts the sitemap generation.
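
For example, stop() can serve as a safety timeout for very large sites (the 60-second limit here is just an illustration):

// abort the crawl if it runs longer than 60 seconds
setTimeout(() => {
  generator.stop();
}, 60000);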

queueURL(url)

Adds a URL to the crawler's queue. Useful to help the crawler fetch pages it can't find itself.
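
A minimal sketch of queueing a page the crawler cannot discover on its own (the URL is hypothetical):

// queue a page that is not linked from anywhere on the site
generator.queueURL('http://example.com/unlinked-page');

// then start the crawl as usual
generator.start();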

Options

There are a couple of options to adjust the sitemap output. In addition to the options below, the options of the underlying crawler can be changed. For a complete list, please check its official documentation.

const path = require('path');

const generator = SitemapGenerator('http://example.com', {
  ignoreHreflang: true,
  maxDepth: 0,
  filepath: path.join(process.cwd(), 'sitemap.xml'),
  maxEntriesPerFile: 50000,
  stripQuerystring: true,
  excludeFileTypes: ['gif', 'jpg', 'jpeg', 'png', 'ico', 'bmp', 'ogg', 'webp', 'mp4', 'webm', 'mp3', 'ttf',
    'woff', 'json', 'rss', 'atom', 'gz', 'zip', 'rar', '7z', 'css', 'js', 'gzip', 'exe', 'svg',
    'xml'],
  excludeURLs: ['cxyz']
});

changeFreq

Type: string
Default: undefined

If defined, adds a <changefreq> line to each URL in the sitemap. Possible values are always, hourly, daily, weekly, monthly, yearly, never. All other values are ignored.
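
A short sketch; per the description above, every URL entry in the generated sitemap then carries the matching element:

const generator = SitemapGenerator('http://example.com', {
  changeFreq: 'daily'
});
// each <url> entry will contain: <changefreq>daily</changefreq>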

filepath

Type: string
Default: ./sitemap.xml

Filepath for the new sitemap. If multiple sitemaps are created, "part_$index" is appended to each filename.

httpAgent

Type: HTTPAgent
Default: http.globalAgent

Controls which HTTP agent to use. This is useful if you want to configure the HTTP connection through an HTTP/HTTPS proxy (see http-proxy-agent).

excludeFileTypes

Type: Array
Default: ['gif', 'jpg', 'jpeg', 'png', 'ico', 'bmp', 'ogg', 'webp', 'mp4', 'webm', 'mp3', 'ttf', 'woff', 'json', 'rss', 'atom', 'gz', 'zip', 'rar', '7z', 'css', 'js', 'gzip', 'exe', 'svg', 'xml']

Excludes specific file extensions from being crawled and added to the sitemap.

excludeURLs

Type: Array
Default: []

Excludes URLs matching the given patterns from being crawled and added to the sitemap.

httpsAgent

Type: HTTPAgent
Default: https.globalAgent

Controls which HTTPS agent to use. This is useful if you want to configure the HTTPS connection through an HTTP/HTTPS proxy (see https-proxy-agent).
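
A sketch of routing both connection types through a proxy, assuming the http-proxy-agent and https-proxy-agent packages (the proxy URL is hypothetical, and the exact export style varies between versions of those packages):

const HttpProxyAgent = require('http-proxy-agent');
const HttpsProxyAgent = require('https-proxy-agent');

const generator = SitemapGenerator('http://example.com', {
  // route plain HTTP and HTTPS requests through the same proxy
  httpAgent: new HttpProxyAgent('http://proxy.example.com:3128'),
  httpsAgent: new HttpsProxyAgent('http://proxy.example.com:3128')
});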

lastMod

Type: boolean
Default: false

Whether to add a <lastmod> line to each URL in the sitemap, and fill it with today's date.

maxEntriesPerFile

Type: number
Default: 50000

Google limits the maximum number of URLs in one sitemap to 50,000. If this limit is reached, the generator creates another sitemap, and a sitemap index file is created as well. For example, a crawl that finds 120,000 URLs produces three sitemap parts plus one index file.

stripQuerystring

Type: boolean
Default: true

Whether to treat URLs with query strings like http://www.example.com/?foo=bar as individual sites and add them to the sitemap.

ignoreHreflang

Type: boolean
Default: true

Whether to deep crawl every page searching for hreflang attributes and add the alternate links to the generated sitemap.
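
For reference, the hreflang attributes in question live on alternate-language link elements such as this one (the German variant is hypothetical):

<link rel="alternate" hreflang="de" href="http://example.com/de/" />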

Events

The Sitemap Generator emits several events which can be listened to.

add

Triggered when the crawler successfully adds a resource to the sitemap. Passes the URL as argument.

generator.on('add', (url) => {
  // log url
});

done

Triggered when the crawler has finished and the sitemap is created.

generator.on('done', () => {
  // sitemaps created
});

error

Triggered if an error occurred while fetching a URL. Passes an object with the HTTP status code, a message, and the URL as argument.

generator.on('error', (error) => {
  console.log(error);
  // => { code: 404, message: 'Not found.', url: 'http://example.com/foo' }
});

ignore

Triggered if a URL matches a disallow rule in the robots.txt file or the robots meta tag noindex is present. The URL will not be added to the sitemap. Passes the ignored URL as argument.

generator.on('ignore', (url) => {
  // log ignored url
});

License

MIT © Lars Graubner
