From 56cd64f4030f430bb7cf66c07b2448f4553b0ec9 Mon Sep 17 00:00:00 2001
From: Nick Sweeting
Date: Tue, 12 Mar 2019 19:06:31 -0400
Subject: [PATCH] Update README.md

---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 97deec52..5024d2d0 100644
--- a/README.md
+++ b/README.md
@@ -33,7 +33,7 @@ You can use it to preserve access to websites you care about by storing them loc
 
 #### How does it work?
 
-Simply download this repo, and run the `./archive` command each time you want to import new links and update your local archive. ArchiveBox is written in Python 3 and uses wget, Chrome headless, youtube-dl, pywb, and other common unix tools to save each page you add in multiple redundant formats.
+Simply download this repo, and run the `./archive < urls` command each time you want to import new links and update your local archive. ArchiveBox is written in Python 3 and uses wget, Chrome headless, youtube-dl, pywb, and other common unix tools to save each page you add in multiple redundant formats.
 It doesn't require a constantly running server or backend, just run the command and open the outputted static HTML in a browser to view the archive. It can import and export JSON (among other formats), so it's easy to script or hook up to other APIs. If you run it on a schedule and import from browser history or bookmarks regularly, you can sleep soundly knowing that the slice of the internet you care about will be automatically preserved in multiple, durable long-term formats that will be accessible for decades (or longer).
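
As context for the wording change above, here is a minimal usage sketch of the `./archive < urls` invocation the patch documents. The file name `urls` and the example URLs are placeholders chosen for illustration, not part of the patch itself:

```bash
# Write a newline-delimited list of URLs to import (the file name is arbitrary).
cat > urls <<'EOF'
https://example.com
https://en.wikipedia.org/wiki/Digital_preservation
EOF

# Feed the list to ArchiveBox on stdin, as the updated README line describes;
# each URL is saved in multiple redundant formats, and the resulting static
# HTML index can be opened directly in a browser.
./archive < urls
```

Re-running the same command with an updated `urls` file (or a browser bookmarks/history export, as the README paragraph mentions) refreshes the local archive without any long-running server.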