Project

snapcrawl

Score: 0.03
Low commit activity in the last 3 years
No release in over a year
Snapcrawl is a command line utility for crawling a website and saving screenshots.
(Commit activity chart, 2005-2024)
 Dependencies

Runtime

  • >= 0.8.1, < 2
  • ~> 0.6
  • ~> 0.21
  • ~> 0.3
  • ~> 1.10
  • ~> 0.4
  • ~> 0.1
 Project Readme

Snapcrawl - crawl a website and take screenshots

Snapcrawl is a command line utility for crawling a website and saving screenshots.

Features

  • Crawls a website to any given depth and saves screenshots
  • Can capture the full length of the page
  • Can use a specific resolution for screenshots
  • Skips capturing if the screenshot was already saved recently
  • Uses local caching to avoid expensive crawl operations if not needed
  • Reports broken links

Install

Using Docker

You can run Snapcrawl using this Docker image, which contains all the necessary prerequisites:

$ alias snapcrawl='docker run --rm -it --network host --volume "$PWD:/app" dannyben/snapcrawl'

For more information on the Docker image, refer to the docker-snapcrawl repository.
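
With the alias in place, the containerized Snapcrawl can be invoked like a local command; for example, to capture example.com two levels deep (settings are explained under Usage below):

$ snapcrawl example.com depth=2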

Using Ruby

$ gem install snapcrawl

Note that Snapcrawl requires PhantomJS and ImageMagick.
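
As a rough sketch, on a Debian-based system the prerequisites might be installed along these lines (package names and availability are assumptions):

$ sudo apt-get install imagemagick
$ sudo apt-get install phantomjs   # if still packaged for your distribution

PhantomJS is discontinued, so if no system package is available you may have to download a prebuilt binary from phantomjs.org and place it on your PATH.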

Usage

Snapcrawl can be configured either through a YAML configuration file or by specifying options on the command line.

$ snapcrawl
Usage:
  snapcrawl URL [--config FILE] [SETTINGS...]
  snapcrawl -h | --help
  snapcrawl -v | --version

The default configuration filename is snapcrawl.yml.

Using the --config flag creates a template configuration file if one is not already present:

$ snapcrawl example.com --config snapcrawl
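
Presumably the .yml extension is appended automatically, so the command above creates (or reuses) snapcrawl.yml, matching the default filename. A hypothetical custom name would work the same way:

$ snapcrawl example.com --config production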

Specifying options on the command line

All configuration options can be specified on the command line as key=value pairs:

$ snapcrawl example.com log_level=0 depth=2 width=1024
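
The two mechanisms can be combined; going by the usage string above, settings may follow the --config flag in a single invocation (whether command line values override file values is an assumption here, though that is the usual precedence):

$ snapcrawl example.com --config snapcrawl depth=3 width=1920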

Sample configuration file

# All values below are the default values

# log level (0-4) 0=DEBUG 1=INFO 2=WARN 3=ERROR 4=FATAL
log_level: 1

# log_color (yes, no, auto)
# yes  = always use colors
# no   = never use colors
# auto = only use colors when running in an interactive terminal
log_color: auto

# number of levels to crawl; 0 means capture only the root URL
depth: 1

# screenshot width in pixels
width: 1280

# screenshot height in pixels; 0 means the entire height
height: 0

# number of seconds to consider the page cache and its screenshot fresh
cache_life: 86400

# where to store the HTML page cache
cache_dir: cache

# where to store screenshots
snaps_dir: snaps

# screenshot filename template, where '%{url}' will be replaced with a 
# slug version of the URL (no need to include the .png extension)
name_template: '%{url}'

# URLs not matching this regular expression will be ignored
url_whitelist:

# URLs matching this regular expression will be ignored
url_blacklist:

# take a screenshot of this CSS selector only
css_selector: 

# when true, ignore SSL related errors
skip_ssl_verification: false

# number of seconds to wait for the page to load before taking a
# screenshot; leave empty to not wait at all (only needed for pages with
# animations or other post-load events)
screenshot_delay:
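
As an illustration, a hypothetical set of values for the filtering and naming options above might look like this (all patterns and names are made up):

# only crawl URLs that contain 'blog'
url_whitelist: blog

# ignore URLs ending with .pdf
url_blacklist: \.pdf$

# prefix each screenshot filename
name_template: 'shot-%{url}'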

Contributing / Support

If you experience any issues, have a question or suggestion, or wish to contribute, feel free to open an issue.