-ArchiveBox is a powerful self-hosted internet archiving solution written in Python 3. You feed it URLs of pages you want to archive, and it saves them to disk in a variety of formats depending on the configuration and the content it detects.
+ArchiveBox is a powerful self-hosted internet archiving solution written in Python. You feed it URLs of pages you want to archive, and it saves them to disk in a variety of formats depending on your configuration and the content it detects.
-Your archive can be managed through the command line with commands like `archivebox add`, through the built-in Web UI `archivebox server`, or via the Python library API (beta). It can ingest bookmarks from a browser or service like Pocket/Pinboard, your entire browsing history, RSS feeds, or URLs one at a time. You can also schedule regular/realtime imports with `archivebox schedule`.
+**🔢 Run ArchiveBox via [Docker Compose (recommended)](#Quickstart), Docker, Apt, Brew, or Pip ([see below](#Quickstart)).**
+
+```bash
+apt/brew/pip3 install archivebox
+
+archivebox init # run this in an empty folder
+archivebox add 'https://example.com' # start adding URLs to archive
+curl https://example.com/rss.xml | archivebox add # or add via stdin
+archivebox schedule --every=day https://example.com/rss.xml
+```
+
+For each URL added, ArchiveBox saves several types of HTML snapshots (wget, Chrome headless, singlefile), a PDF, a screenshot, a WARC archive, any git repositories, images, audio, video, subtitles, article text, [and more...](#output-formats).
+
+```bash
+archivebox server --createsuperuser 0.0.0.0:8000 # use the interactive web UI
+archivebox list 'https://example.com' # use the CLI commands (--help for more)
+ls ./archive/*/index.json # or browse directly via the filesystem
+```
+
+You can then manage your snapshots via the [filesystem](https://github.com/ArchiveBox/ArchiveBox/wiki/Usage#disk-layout), [CLI](https://github.com/ArchiveBox/ArchiveBox/wiki/Usage#CLI-Usage), [Web UI](https://github.com/ArchiveBox/ArchiveBox/wiki/Usage#UI-Usage), [SQLite DB](https://github.com/ArchiveBox/ArchiveBox/blob/dev/archivebox/core/models.py) (`./index.sqlite3`), [Python API](https://docs.archivebox.io/en/latest/modules.html) (alpha), [REST API](https://github.com/ArchiveBox/ArchiveBox/issues/496) (alpha), or [desktop app](https://github.com/ArchiveBox/electron-archivebox) (alpha).
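As a sketch of the SQLite route (the `core_snapshot` table name comes from the Django models linked above; the schema is not a stable API and may change between versions):

```shell
# run inside your collection folder to list a few archived URLs
# (core_snapshot is the Django model table; columns may differ per version)
sqlite3 ./index.sqlite3 'SELECT timestamp, url FROM core_snapshot LIMIT 5;'
```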
+
+At the end of the day, the goal is to sleep soundly knowing that the part of the internet you care about will be automatically preserved in multiple, durable long-term formats that will be accessible for decades (or longer).
+
+#### ⚡️ CLI Usage
+
+```bash
+# archivebox [subcommand] [--args]
+archivebox --version
+archivebox help
+```
+
+- `archivebox init/version/status/config/manage` to administer your collection
+- `archivebox add/remove/update/list` to manage Snapshots in the archive
+- `archivebox schedule` to pull in fresh URLs regularly from [bookmarks/history/Pocket/Pinboard/RSS/etc.](#input-formats)
+- `archivebox oneshot` to archive single URLs without starting a whole collection
+- `archivebox shell/manage dbshell` open a REPL to use the [Python API](https://docs.archivebox.io/en/latest/modules.html) (alpha), or SQL API
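For instance, `oneshot` can grab a single page into the current directory without running `archivebox init` first (a sketch; `--extract` limits which extractor modules run):

```shell
# archive a single URL without initializing a full collection
archivebox oneshot --extract=title,favicon,media 'https://example.com'
```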
+
+
-The main index is a self-contained `index.sqlite3` file, and each snapshot is stored as a folder `data/archive//`, with an easy-to-read `index.html` and `index.json` within. For each page, ArchiveBox auto-extracts many types of assets/media and saves them in standard formats, with out-of-the-box support for: several types of HTML snapshots (wget, Chrome headless, singlefile), PDF snapshotting, screenshotting, WARC archiving, git repositories, images, audio, video, subtitles, article text, and more. The snapshots are browseable and managable offline through the filesystem, the built-in webserver, or the Python library API.
### Quickstart
-It works on Linux/BSD (Intel and ARM CPUs with `docker`/`apt`/`pip3`), macOS (with `docker`/`brew`/`pip3`), and Windows (beta with `docker`/`pip3`).
+**🖥 Supported OSs:** Linux/BSD, macOS, Windows **🎮 CPU Architectures:** x86, amd64, arm7, arm8 (raspi >=3)
+**📦 Distributions:** `docker`/`apt`/`brew`/`pip3`/`npm` (in order of completeness)
-```bash
-pip3 install archivebox
-archivebox --version
-# install extras as-needed, or use one of full setup methods below to get everything out-of-the-box
-
-mkdir ~/archivebox && cd ~/archivebox # this can be anywhere
-archivebox init
-
-archivebox add 'https://example.com'
-archivebox add --depth=1 'https://example.com'
-archivebox schedule --every=day https://getpocket.com/users/USERNAME/feed/all
-archivebox oneshot --extract=title,favicon,media https://www.youtube.com/watch?v=dQw4w9WgXcQ
-archivebox help # to see more options
-```
-
-*(click to expand the sections below for full setup instructions)*
+*(click to expand your preferred **► `distribution`** below for full setup instructions)*
Get ArchiveBox with docker-compose on any platform (recommended, everything included out-of-the-box)
-First make sure you have Docker installed: https://docs.docker.com/get-docker/
-
-This is the recommended way to run ArchiveBox because it includes *all* the extractors like chrome, wget, youtube-dl, git, etc., as well as full-text search with sonic, and many other great features.
+First make sure you have Docker installed: https://docs.docker.com/get-docker/
-```bash
+
# create a new empty directory and initialize your collection (can be anywhere)
mkdir ~/archivebox && cd ~/archivebox
-curl -O https://raw.githubusercontent.com/ArchiveBox/ArchiveBox/master/docker-compose.yml
+curl -O 'https://raw.githubusercontent.com/ArchiveBox/ArchiveBox/master/docker-compose.yml'
docker-compose run archivebox init
docker-compose run archivebox --version
# start the webserver and open the UI (optional)
docker-compose run archivebox manage createsuperuser
docker-compose up -d
-open http://127.0.0.1:8000
+open 'http://127.0.0.1:8000'
# you can also add links and manage your archive via the CLI:
docker-compose run archivebox add 'https://example.com'
docker-compose run archivebox status
docker-compose run archivebox help # to see more options
-```
+
+
+This is the recommended way to run ArchiveBox because it includes all the extractors (chrome, wget, youtube-dl, git, etc.), full-text search with sonic, and many other great features.
Get ArchiveBox with docker on any platform
-First make sure you have Docker installed: https://docs.docker.com/get-docker/
-```bash
+First make sure you have Docker installed: https://docs.docker.com/get-docker/
+
+
# create a new empty directory and initialize your collection (can be anywhere)
mkdir ~/archivebox && cd ~/archivebox
docker run -v $PWD:/data -it archivebox/archivebox init
docker run -v $PWD:/data -it archivebox/archivebox --version
# start the webserver and open the UI (optional)
-docker run -v $PWD:/data -it archivebox/archivebox manage createsuperuser
-docker run -v $PWD:/data -p 8000:8000 archivebox/archivebox server 0.0.0.0:8000
+docker run -v $PWD:/data -it -p 8000:8000 archivebox/archivebox server --createsuperuser 0.0.0.0:8000
open http://127.0.0.1:8000
# you can also add links and manage your archive via the CLI:
docker run -v $PWD:/data -it archivebox/archivebox add 'https://example.com'
docker run -v $PWD:/data -it archivebox/archivebox status
docker run -v $PWD:/data -it archivebox/archivebox help # to see more options
-```
+
Get ArchiveBox with apt on Ubuntu >=20.04
-```bash
+First make sure you're on Ubuntu >= 20.04, or scroll down for older/non-Ubuntu instructions.
+
+
+# add the repo to your sources and install the archivebox package using apt
+sudo apt install software-properties-common
sudo add-apt-repository -u ppa:archivebox/archivebox
sudo apt install archivebox
@@ -117,8 +166,7 @@ archivebox init
archivebox --version
# start the webserver and open the web UI (optional)
-archivebox manage createsuperuser
-archivebox server 0.0.0.0:8000
+archivebox server --createsuperuser 0.0.0.0:8000
open http://127.0.0.1:8000
# you can also add URLs and manage the archive via the CLI and filesystem:
@@ -127,13 +175,17 @@ archivebox status
archivebox list --html --with-headers > index.html
archivebox list --json --with-headers > index.json
archivebox help # to see more options
-```
+
For other Debian-based systems or older Ubuntu systems you can add these sources to `/etc/apt/sources.list`:
-```bash
+
+
deb http://ppa.launchpad.net/archivebox/archivebox/ubuntu focal main
deb-src http://ppa.launchpad.net/archivebox/archivebox/ubuntu focal main
-```
+
+
+Then run `apt update; apt install archivebox; archivebox --version`.
+
(you may need to install some other dependencies manually, however)
@@ -141,7 +193,10 @@ deb-src http://ppa.launchpad.net/archivebox/archivebox/ubuntu focal main
Get ArchiveBox with brew on macOS >=10.13
-```bash
+First make sure you have Homebrew installed: https://brew.sh/#install
+
+
+# install the archivebox package using homebrew
brew install archivebox/archivebox/archivebox
# create a new empty directory and initialize your collection (can be anywhere)
@@ -151,8 +206,7 @@ archivebox init
archivebox --version
# start the webserver and open the web UI (optional)
-archivebox manage createsuperuser
-archivebox server 0.0.0.0:8000
+archivebox server --createsuperuser 0.0.0.0:8000
open http://127.0.0.1:8000
# you can also add URLs and manage the archive via the CLI and filesystem:
@@ -161,14 +215,17 @@ archivebox status
archivebox list --html --with-headers > index.html
archivebox list --json --with-headers > index.json
archivebox help # to see more options
-```
+
Get ArchiveBox with pip on any platform
-```bash
+First make sure you have Python >= 3.7 installed: https://realpython.com/installing-python/
+
+
+# install the archivebox package using pip3
pip3 install archivebox
# create a new empty directory and initialize your collection (can be anywhere)
@@ -179,8 +236,7 @@ archivebox --version
# Install any missing extras like wget/git/chrome/etc. manually as needed
# start the webserver and open the web UI (optional)
-archivebox manage createsuperuser
-archivebox server 0.0.0.0:8000
+archivebox server --createsuperuser 0.0.0.0:8000
open http://127.0.0.1:8000
# you can also add URLs and manage the archive via the CLI and filesystem:
@@ -189,56 +245,58 @@ archivebox status
archivebox list --html --with-headers > index.html
archivebox list --json --with-headers > index.json
archivebox help # to see more options
-```
+
-
----
-
-
-
-
-DEMO: archivebox.zervice.io/
-For more information, see the full Quickstart guide, Usage, and Configuration docs.
+No matter which install method you choose, all of them roughly follow the same 3-step process and provide the same CLI, Web UI, and on-disk data format.
+
+
+
+1. Install ArchiveBox: `apt/brew/pip3 install archivebox`
+2. Start a collection: `archivebox init`
+3. Start archiving: `archivebox add 'https://example.com'`
+
+
+
+
+
+
-
----
-
-
-# Overview
-
-ArchiveBox is a command line tool, self-hostable web-archiving server, and Python library all-in-one. It can be installed on Docker, macOS, and Linux/BSD, and Windows. You can download and install it as a Debian/Ubuntu package, Homebrew package, Python3 package, or a Docker image. No matter which install method you choose, they all provide the same CLI, Web UI, and on-disk data format.
-
-To use ArchiveBox you start by creating a folder for your data to live in (it can be anywhere on your system), and running `archivebox init` inside of it. That will create a sqlite3 index and an `ArchiveBox.conf` file. After that, you can continue to add/export/manage/etc using the CLI `archivebox help`, or you can run the Web UI (recommended). If you only want to archive a single site, you can run `archivebox oneshot` to avoid having to create a whole collection.
-
-The CLI is considered "stable", the ArchiveBox Python API and REST APIs are "beta", and the [desktop app](https://github.com/ArchiveBox/desktop) is "alpha".
-
-At the end of the day, the goal is to sleep soundly knowing that the part of the internet you care about will be automatically preserved in multiple, durable long-term formats that will be accessible for decades (or longer). You can also self-host your archivebox server on a public domain to provide archive.org-style public access to your site snapshots.
+
## Key Features
- [**Free & open source**](https://github.com/ArchiveBox/ArchiveBox/blob/master/LICENSE), doesn't require signing up for anything, stores all data locally
-- [**Few dependencies**](https://github.com/ArchiveBox/ArchiveBox/wiki/Install#dependencies) and [simple command line interface](https://github.com/ArchiveBox/ArchiveBox/wiki/Usage#CLI-Usage)
+- [**Powerful, intuitive command line interface**](https://github.com/ArchiveBox/ArchiveBox/wiki/Usage#CLI-Usage) with [modular optional dependencies](#dependencies)
- [**Comprehensive documentation**](https://github.com/ArchiveBox/ArchiveBox/wiki), [active development](https://github.com/ArchiveBox/ArchiveBox/wiki/Roadmap), and [rich community](https://github.com/ArchiveBox/ArchiveBox/wiki/Web-Archiving-Community)
-- Easy to set up **[scheduled importing](https://github.com/ArchiveBox/ArchiveBox/wiki/Scheduled-Archiving) from multiple sources**
-- Uses common, **durable, [long-term formats](#saves-lots-of-useful-stuff-for-each-imported-link)** like HTML, JSON, PDF, PNG, and WARC
-- ~~**Suitable for paywalled / [authenticated content](https://github.com/ArchiveBox/ArchiveBox/wiki/Configuration#chrome_user_data_dir)** (can use your cookies)~~ (do not do this until v0.5 is released with some security fixes)
-- **Doesn't require a constantly-running daemon**, proxy, or native app
-- Provides a CLI, Python API, self-hosted web UI, and REST API (WIP)
-- Architected to be able to run [**many varieties of scripts during archiving**](https://github.com/ArchiveBox/ArchiveBox/issues/51), e.g. to extract media, summarize articles, [scroll pages](https://github.com/ArchiveBox/ArchiveBox/issues/80), [close modals](https://github.com/ArchiveBox/ArchiveBox/issues/175), expand comment threads, etc.
-- Can also [**mirror content to 3rd-party archiving services**](https://github.com/ArchiveBox/ArchiveBox/wiki/Configuration#submit_archive_dot_org) automatically for redundancy
+- [**Extracts a wide variety of content out-of-the-box**](https://github.com/ArchiveBox/ArchiveBox/issues/51): [media (youtube-dl), articles (readability), code (git), etc.](#output-formats)
+- [**Supports scheduled/realtime importing**](https://github.com/ArchiveBox/ArchiveBox/wiki/Scheduled-Archiving) from [many types of sources](#input-formats)
+- [**Uses standard, durable, long-term formats**](#saves-lots-of-useful-stuff-for-each-imported-link) like HTML, JSON, PDF, PNG, and WARC
+- [**Usable as a oneshot CLI**](https://github.com/ArchiveBox/ArchiveBox/wiki/Usage#CLI-Usage), [**self-hosted web UI**](https://github.com/ArchiveBox/ArchiveBox/wiki/Usage#UI-Usage), [Python API](https://docs.archivebox.io/en/latest/modules.html) (BETA), [REST API](https://github.com/ArchiveBox/ArchiveBox/issues/496) (ALPHA), or [desktop app](https://github.com/ArchiveBox/electron-archivebox) (ALPHA)
+- [**Saves all pages to archive.org as well**](https://github.com/ArchiveBox/ArchiveBox/wiki/Configuration#submit_archive_dot_org) by default for redundancy (can be [disabled](https://github.com/ArchiveBox/ArchiveBox/wiki/Security-Overview#stealth-mode) for local-only mode)
+- Planned: support for archiving [content requiring a login/paywall/cookies](https://github.com/ArchiveBox/ArchiveBox/wiki/Configuration#chrome_user_data_dir) (working, but ill-advised until some pending fixes are released)
+- Planned: support for running [JS scripts during archiving](https://github.com/ArchiveBox/ArchiveBox/issues/51), e.g. adblock, [autoscroll](https://github.com/ArchiveBox/ArchiveBox/issues/80), [modal-hiding](https://github.com/ArchiveBox/ArchiveBox/issues/175), [thread-expander](https://github.com/ArchiveBox/ArchiveBox/issues/345), etc.
+
+
+
+---
+
+
+
+
## Input formats
@@ -253,9 +311,10 @@ archivebox add --depth=1 'https://example.com/some/downloads.html'
archivebox add --depth=1 'https://news.ycombinator.com#2020-12-12'
```
-- Browser history or bookmarks exports (Chrome, Firefox, Safari, IE, Opera, and more)
-- RSS, XML, JSON, CSV, SQL, HTML, Markdown, TXT, or any other text-based format
-- Pocket, Pinboard, Instapaper, Shaarli, Delicious, Reddit Saved Posts, Wallabag, Unmark.it, OneTab, and more
+
+- TXT, RSS, XML, JSON, CSV, SQL, HTML, Markdown, or [any other text-based format...](https://github.com/ArchiveBox/ArchiveBox/wiki/Usage#Import-a-list-of-URLs-from-a-text-file)
+- [Browser history](https://github.com/ArchiveBox/ArchiveBox/wiki/Quickstart#2-get-your-list-of-urls-to-archive) or [browser bookmarks](https://github.com/ArchiveBox/ArchiveBox/wiki/Quickstart#2-get-your-list-of-urls-to-archive) (see instructions for: [Chrome](https://support.google.com/chrome/answer/96816?hl=en), [Firefox](https://support.mozilla.org/en-US/kb/export-firefox-bookmarks-to-backup-or-transfer), [Safari](http://i.imgur.com/AtcvUZA.png), [IE](https://support.microsoft.com/en-us/help/211089/how-to-import-and-export-the-internet-explorer-favorites-folder-to-a-32-bit-version-of-windows), [Opera](http://help.opera.com/Windows/12.10/en/importexport.html), [and more...](https://github.com/ArchiveBox/ArchiveBox/wiki/Quickstart#2-get-your-list-of-urls-to-archive))
+- [Pocket](https://getpocket.com/export), [Pinboard](https://pinboard.in/export/), [Instapaper](https://www.instapaper.com/user/export), [Shaarli](https://shaarli.readthedocs.io/en/master/Usage/#importexport), [Delicious](https://www.groovypost.com/howto/howto/export-delicious-bookmarks-xml/), [Reddit Saved](https://github.com/csu/export-saved-reddit), [Wallabag](https://doc.wallabag.org/en/user/import/wallabagv2.html), [Unmark.it](http://help.unmark.it/import-export), [OneTab](https://www.addictivetips.com/web/onetab-save-close-all-chrome-tabs-to-restore-export-or-import/), [and more...](https://github.com/ArchiveBox/ArchiveBox/wiki/Quickstart#2-get-your-list-of-urls-to-archive)
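All of these are ingested the same way: pass a file or pipe text on stdin and ArchiveBox pulls the URLs out of it (the file paths below are just examples):

```shell
# import an exported browser bookmarks file
archivebox add < ~/Downloads/bookmarks_export.html
# or extract URLs from any text file and pipe them in
grep -oE 'https?://[^"<> ]+' notes.txt | archivebox add
```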
See the [Usage: CLI](https://github.com/ArchiveBox/ArchiveBox/wiki/Usage#CLI-Usage) page for documentation and examples.
@@ -272,34 +331,51 @@ The on-disk layout is optimized to be easy to browse by hand and durable long-te
```
- **Index:** `index.html` & `index.json` HTML and JSON index files containing metadata and details
-- **Title:** `title` title of the site
-- **Favicon:** `favicon.ico` favicon of the site
-- **Headers:** `headers.json` Any HTTP headers the site returns are saved in a json file
-- **SingleFile:** `singlefile.html` HTML snapshot rendered with headless Chrome using SingleFile
-- **WGET Clone:** `example.com/page-name.html` wget clone of the site, with .html appended if not present
-- **WARC:** `warc/.gz` gzipped WARC of all the resources fetched while archiving
-- **PDF:** `output.pdf` Printed PDF of site using headless chrome
-- **Screenshot:** `screenshot.png` 1440x900 screenshot of site using headless chrome
-- **DOM Dump:** `output.html` DOM Dump of the HTML after rendering using headless chrome
-- **Readability:** `article.html/json` Article text extraction using Readability
-- **URL to Archive.org:** `archive.org.txt` A link to the saved site on archive.org
+- **Title**, **Favicon**, **Headers:** parsed page title, site favicon, and raw HTTP response headers
+- **Wget Clone:** `example.com/page-name.html` wget clone of the site, plus a gzipped WARC of all fetched resources in `warc/.gz`
+- **Chrome Headless:**
+ - **SingleFile:** `singlefile.html` HTML snapshot rendered with headless Chrome using SingleFile
+ - **PDF:** `output.pdf` Printed PDF of site using headless chrome
+ - **Screenshot:** `screenshot.png` 1440x900 screenshot of site using headless chrome
+ - **DOM Dump:** `output.html` DOM Dump of the HTML after rendering using headless chrome
+ - **Readability:** `article.html/json` Article text extraction using Readability
+- **Archive.org Permalink:** `archive.org.txt` A link to the saved site on archive.org
- **Audio & Video:** `media/` all audio/video files + playlists, including subtitles & metadata with youtube-dl
- **Source Code:** `git/` clone of any repository found on github, bitbucket, or gitlab links
- _More coming soon! See the [Roadmap](https://github.com/ArchiveBox/ArchiveBox/wiki/Roadmap)..._
It does everything out-of-the-box by default, but you can disable or tweak [individual archive methods](https://github.com/ArchiveBox/ArchiveBox/wiki/Configuration) via environment variables or config file.
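For example, a sketch using the `archivebox config --set` command shown elsewhere in this README (`SAVE_MEDIA` is the only key referenced in this document; the other `SAVE_*` toggles are listed in the Configuration wiki):

```shell
# persist a toggle in the collection's ArchiveBox.conf
archivebox config --set SAVE_MEDIA=False
# or override it for a single run via an environment variable
SAVE_MEDIA=False archivebox add 'https://example.com'
```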
+
+
+
+
+
+
+---
+
+
+
## Dependencies
You don't need to install all the dependencies: ArchiveBox will automatically enable the relevant modules based on whatever you have available, but it's recommended to use the official [Docker image](https://github.com/ArchiveBox/ArchiveBox/wiki/Docker) with everything preinstalled.
-If you so choose, you can also install ArchiveBox and its dependencies directly on any Linux or macOS systems using the [automated setup script](https://github.com/ArchiveBox/ArchiveBox/wiki/Quickstart) or the [system package manager](https://github.com/ArchiveBox/ArchiveBox/wiki/Install).
+If you so choose, you can also install ArchiveBox and its dependencies directly on any Linux or macOS systems using the [system package manager](https://github.com/ArchiveBox/ArchiveBox/wiki/Install) or by running the [automated setup script](https://github.com/ArchiveBox/ArchiveBox/wiki/Quickstart).
ArchiveBox is written in Python 3 so it requires `python3` and `pip3` available on your system. It also uses a set of optional but highly recommended external dependencies for archiving sites: `wget` (for plain HTML, static files, and WARC saving), `chromium` (for screenshots, PDFs, JS execution, and more), `youtube-dl` (for audio and video), `git` (for cloning git repos), `nodejs` (for readability and singlefile), and more.
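A quick way to check which optional dependencies ArchiveBox has detected (a sketch; the `version` subcommand prints a found/missing listing in recent releases, though the exact output format varies):

```shell
archivebox version           # output includes a listing of detected/missing dependencies
which wget git node || true  # or spot-check individual binaries yourself
```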
+
+
+---
+
+
+
+
+
## Caveats
If you're importing URLs containing secret slugs or pages with private content (e.g. Google Docs, CodiMD notepads, etc.), you may want to disable some of the extractor modules to avoid leaking private URLs to 3rd-party APIs during the archiving process.
+
```bash
# don't do this:
archivebox add 'https://docs.google.com/document/d/12345somelongsecrethere'
@@ -312,6 +388,7 @@ archivebox config --set CHROME_BINARY=chromium # optional: switch to chromium t
```
Be aware that malicious archived JS can also read the contents of other pages in your archive due to snapshot CSRF and XSS protections being imperfect. See the [Security Overview](https://github.com/ArchiveBox/ArchiveBox/wiki/Security-Overview#stealth-mode) page for more details.
+
```bash
# visiting an archived page with malicious JS:
https://127.0.0.1:8000/archive/1602401954/example.com/index.html
@@ -323,20 +400,67 @@ https://127.0.0.1:8000/archive/*
```
Support for saving multiple snapshots of each site over time will be [added soon](https://github.com/ArchiveBox/ArchiveBox/issues/179) (along with the ability to view diffs of the changes between runs). For now ArchiveBox is designed to only archive each URL with each extractor type once. A workaround to take multiple snapshots of the same URL is to make them slightly different by adding a hash:
+
```bash
archivebox add 'https://example.com#2020-10-24'
...
archivebox add 'https://example.com#2020-10-25'
```
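The fragment workaround above can be scripted so each re-run gets today's date automatically (a sketch; the fragment is ignored by web servers but makes the URL unique to ArchiveBox):

```shell
# build a date-tagged URL so re-adding it creates a fresh snapshot
url='https://example.com'
tagged="${url}#$(date +%F)"   # e.g. https://example.com#2020-10-24
echo "$tagged"
# echo "$tagged" | archivebox add
```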
+
+
---
+
+
+## Screenshots
+
+---
+
+
+
-
+
----
-
# Background & Motivation
Vast treasure troves of knowledge are lost every day on the internet to link rot. As a society, we have an imperative to preserve some important parts of that treasure, just like we preserve our books, paintings, and music in physical libraries long after the originals go out of print or fade into obscurity.
@@ -376,6 +500,11 @@ Unlike crawler software that starts from a seed URL and works outwards, or publi
Because ArchiveBox is designed to ingest a firehose of browser history and bookmark feeds to a local disk, it can be much more disk-space intensive than a centralized service like the Internet Archive or Archive.today. However, as storage space gets cheaper and compression improves, you should be able to use it continuously over the years without having to delete anything. In my experience, ArchiveBox uses about 5gb per 1000 articles, but your mileage may vary depending on which options you have enabled and what types of sites you're archiving. By default, it archives everything in as many formats as possible, meaning it takes more space than using a single method, but more content is accurately replayable over extended periods of time. Storage requirements can be reduced by using a compressed/deduplicated filesystem like ZFS/BTRFS, or by setting `SAVE_MEDIA=False` to skip audio & video files.
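To see where the space is going, standard tools work on the collection folder directly (a sketch; run inside your ArchiveBox data directory):

```shell
# largest snapshot folders first
du -sh ./archive/*/ 2>/dev/null | sort -rh | head -20
# total collection size
du -sh . 2>/dev/null
```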
+
+
+
+
+
## Learn more
Whether you want to learn which organizations are the big players in the web archiving space, want to find a specific open-source tool for your web archiving needs, or just want to see where archivists hang out online, our Community Wiki page serves as an index of the broader web archiving community. Check it out to learn about some of the coolest web archiving projects and communities on the web!
@@ -383,20 +512,26 @@ Whether you want to learn which organizations are the big players in the web arc
- [Community Wiki](https://github.com/ArchiveBox/ArchiveBox/wiki/Web-Archiving-Community)
- - [The Master Lists](https://github.com/ArchiveBox/ArchiveBox/wiki/Web-Archiving-Community#The-Master-Lists)
+ - [The Master Lists](https://github.com/ArchiveBox/ArchiveBox/wiki/Web-Archiving-Community#the-master-lists)
_Community-maintained indexes of archiving tools and institutions._
- - [Web Archiving Software](https://github.com/ArchiveBox/ArchiveBox/wiki/Web-Archiving-Community#Web-Archiving-Projects)
+ - [Web Archiving Software](https://github.com/ArchiveBox/ArchiveBox/wiki/Web-Archiving-Community#web-archiving-projects)
_Open source tools and projects in the internet archiving space._
- - [Reading List](https://github.com/ArchiveBox/ArchiveBox/wiki/Web-Archiving-Community#Reading-List)
+ - [Reading List](https://github.com/ArchiveBox/ArchiveBox/wiki/Web-Archiving-Community#reading-list)
_Articles, posts, and blogs relevant to ArchiveBox and web archiving in general._
- - [Communities](https://github.com/ArchiveBox/ArchiveBox/wiki/Web-Archiving-Community#Communities)
+ - [Communities](https://github.com/ArchiveBox/ArchiveBox/wiki/Web-Archiving-Community#communities)
_A collection of the most active internet archiving communities and initiatives._
- Check out the ArchiveBox [Roadmap](https://github.com/ArchiveBox/ArchiveBox/wiki/Roadmap) and [Changelog](https://github.com/ArchiveBox/ArchiveBox/wiki/Changelog)
- Learn why archiving the internet is important by reading the "[On the Importance of Web Archiving](https://parameters.ssrc.org/2018/09/on-the-importance-of-web-archiving/)" blog post.
- Or reach out to me for questions and comments via [@ArchiveBoxApp](https://twitter.com/ArchiveBoxApp) or [@theSquashSH](https://twitter.com/thesquashSH) on Twitter.
+
+
---
+
+
+
+
# Documentation
@@ -422,8 +557,8 @@ You can also access the docs locally by looking in the [`ArchiveBox/docs/`](http
- [Chromium Install](https://github.com/ArchiveBox/ArchiveBox/wiki/Chromium-Install)
- [Security Overview](https://github.com/ArchiveBox/ArchiveBox/wiki/Security-Overview)
- [Troubleshooting](https://github.com/ArchiveBox/ArchiveBox/wiki/Troubleshooting)
-- [Python API](https://docs.archivebox.io/en/latest/modules.html)
-- REST API (coming soon...)
+- [Python API](https://docs.archivebox.io/en/latest/modules.html) (alpha)
+- [REST API](https://github.com/ArchiveBox/ArchiveBox/issues/496) (alpha)
## More Info
@@ -434,37 +569,58 @@ You can also access the docs locally by looking in the [`ArchiveBox/docs/`](http
- [Background & Motivation](https://github.com/ArchiveBox/ArchiveBox#background--motivation)
- [Web Archiving Community](https://github.com/ArchiveBox/ArchiveBox/wiki/Web-Archiving-Community)
+
+
---
+
+
+
+
# ArchiveBox Development
All contributions to ArchiveBox are welcome! Check our [issues](https://github.com/ArchiveBox/ArchiveBox/issues) and [Roadmap](https://github.com/ArchiveBox/ArchiveBox/wiki/Roadmap) for things to work on, and please open an issue to discuss your proposed implementation before working on things! Otherwise we may have to close your PR if it doesn't align with our roadmap.
+Low-hanging fruit / easy first tickets:
+
+
### Setup the dev environment
-First, install the system dependencies from the "Bare Metal" section above.
-Then you can clone the ArchiveBox repo and install
-```python3
-git clone https://github.com/ArchiveBox/ArchiveBox && cd ArchiveBox
-git checkout master # or the branch you want to test
+#### 1. Clone the main code repo (making sure to pull the submodules as well)
+
+```bash
+git clone --recurse-submodules https://github.com/ArchiveBox/ArchiveBox
+cd ArchiveBox
+git checkout dev # or the branch you want to test
git submodule update --init --recursive
git pull --recurse-submodules
+```
+#### 2. Option A: Install the Python, JS, and system dependencies directly on your machine
+
+```bash
# Install ArchiveBox + python dependencies
-python3 -m venv .venv && source .venv/bin/activate && pip install -e .[dev]
-# or with pipenv: pipenv install --dev && pipenv shell
+python3 -m venv .venv && source .venv/bin/activate && pip install -e '.[dev]'
+# or: pipenv install --dev && pipenv shell
# Install node dependencies
npm install
-# Optional: install extractor dependencies manually or with helper script
+# Check to see if anything is missing
+archivebox --version
+# install any missing dependencies manually, or use the helper script:
./bin/setup.sh
+```
+#### 2. Option B: Build the docker container and use that for development instead
+
+```bash
# Optional: develop via docker by mounting the code dir into the container
# if you edit e.g. ./archivebox/core/models.py on the docker host, runserver
# inside the container will reload and pick up your changes
docker build . -t archivebox
-docker run -it -p 8000:8000 \
+docker run -it --rm archivebox version
+docker run -it --rm -p 8000:8000 \
-v $PWD/data:/data \
-v $PWD/archivebox:/app/archivebox \
archivebox server 0.0.0.0:8000 --debug --reload
@@ -475,6 +631,21 @@ docker run -it -p 8000:8000 \
See the `./bin/` folder and read the source of the bash scripts within.
You can also run all these in Docker. For more examples see the Github Actions CI/CD tests that are run: `.github/workflows/*.yaml`.
+#### Run in DEBUG mode
+
+```bash
+archivebox config --set DEBUG=True
+# or
+archivebox server --debug ...
+```
+
+#### Build and run a Github branch
+
+```bash
+docker build -t archivebox:dev https://github.com/ArchiveBox/ArchiveBox.git#dev
+docker run -it -v $PWD:/data archivebox:dev ...
+```
+
#### Run the linters
```bash
@@ -491,17 +662,20 @@ You can also run all these in Docker. For more examples see the Github Actions C
#### Make migrations or enter a django shell
+Make sure to run this whenever you change things in `models.py`.
```bash
cd archivebox/
./manage.py makemigrations
-cd data/
+cd path/to/test/data/
archivebox shell
+archivebox manage dbshell
```
(uses `pytest -s`)
#### Build the docs, pip package, and docker image
+(Normally CI takes care of this, but these scripts can be run to do it manually)
```bash
./bin/build.sh
@@ -515,11 +689,17 @@ archivebox shell
#### Roll a release
+(Normally CI takes care of this, but these scripts can be run to do it manually)
```bash
./bin/release.sh
-```
-(bumps the version, builds, and pushes a release to PyPI, Docker Hub, and Github Packages)
+# or individually:
+./bin/release_docs.sh
+./bin/release_pip.sh
+./bin/release_deb.sh
+./bin/release_brew.sh
+./bin/release_docker.sh
+```
---
diff --git a/archivebox/cli/archivebox_schedule.py b/archivebox/cli/archivebox_schedule.py
index ec5e9146..568b25b9 100644
--- a/archivebox/cli/archivebox_schedule.py
+++ b/archivebox/cli/archivebox_schedule.py
@@ -42,6 +42,7 @@ def main(args: Optional[List[str]]=None, stdin: Optional[IO]=None, pwd: Optional
parser.add_argument(
'--depth', # '-d',
type=int,
+ choices=[0, 1],
default=0,
help='Depth to archive to [0] or 1, see "add" command help for more info.',
)
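Adding `choices=[0, 1]` moves depth validation into argparse itself, so an out-of-range value is rejected at parse time with a usage error instead of surfacing later. A minimal standalone sketch of the same behavior:

```python
import argparse

parser = argparse.ArgumentParser()
# mirrors the --depth option: only 0 or 1 are accepted
parser.add_argument('--depth', type=int, choices=[0, 1], default=0)

assert parser.parse_args([]).depth == 0               # default applies
assert parser.parse_args(['--depth', '1']).depth == 1

try:
    parser.parse_args(['--depth', '2'])               # not in choices
except SystemExit:
    print('rejected by argparse')
```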
diff --git a/archivebox/cli/archivebox_server.py b/archivebox/cli/archivebox_server.py
index dbacf7e5..a4d96dc9 100644
--- a/archivebox/cli/archivebox_server.py
+++ b/archivebox/cli/archivebox_server.py
@@ -43,6 +43,11 @@ def main(args: Optional[List[str]]=None, stdin: Optional[IO]=None, pwd: Optional
action='store_true',
help='Run archivebox init before starting the server',
)
+ parser.add_argument(
+ '--createsuperuser',
+ action='store_true',
+ help='Run archivebox manage createsuperuser before starting the server',
+ )
command = parser.parse_args(args or ())
reject_stdin(__command__, stdin)
@@ -51,6 +56,7 @@ def main(args: Optional[List[str]]=None, stdin: Optional[IO]=None, pwd: Optional
reload=command.reload,
debug=command.debug,
init=command.init,
+ createsuperuser=command.createsuperuser,
out_dir=pwd or OUTPUT_DIR,
)
diff --git a/archivebox/config.py b/archivebox/config.py
index 9a3f9a77..349817ec 100644
--- a/archivebox/config.py
+++ b/archivebox/config.py
@@ -27,6 +27,7 @@ import re
import sys
import json
import getpass
+import platform
import shutil
import django
@@ -51,7 +52,7 @@ CONFIG_SCHEMA: Dict[str, ConfigDefaultDict] = {
'SHELL_CONFIG': {
'IS_TTY': {'type': bool, 'default': lambda _: sys.stdout.isatty()},
'USE_COLOR': {'type': bool, 'default': lambda c: c['IS_TTY']},
- 'SHOW_PROGRESS': {'type': bool, 'default': lambda c: c['IS_TTY']},
+ 'SHOW_PROGRESS': {'type': bool, 'default': lambda c: (c['IS_TTY'] and platform.system() != 'Darwin')}, # progress bars are buggy on mac, disable for now
'IN_DOCKER': {'type': bool, 'default': False},
# TODO: 'SHOW_HINTS': {'type: bool, 'default': True},
},
@@ -76,7 +77,6 @@ CONFIG_SCHEMA: Dict[str, ConfigDefaultDict] = {
'PUBLIC_SNAPSHOTS': {'type': bool, 'default': True},
'PUBLIC_ADD_VIEW': {'type': bool, 'default': False},
'FOOTER_INFO': {'type': str, 'default': 'Content is hosted for personal archiving purposes only. Contact server owner for any takedown requests.'},
- 'ACTIVE_THEME': {'type': str, 'default': 'default'},
},
'ARCHIVE_METHOD_TOGGLES': {
@@ -116,16 +116,15 @@ CONFIG_SCHEMA: Dict[str, ConfigDefaultDict] = {
'--write-annotations',
'--write-thumbnail',
'--no-call-home',
- '--user-agent',
'--all-subs',
- '--extract-audio',
- '--keep-video',
+ '--yes-playlist',
+ '--continue',
'--ignore-errors',
'--geo-bypass',
- '--audio-format', 'mp3',
- '--audio-quality', '320K',
- '--embed-thumbnail',
- '--add-metadata']},
+ '--add-metadata',
+ '--max-filesize=750m',
+ ]},
+
'WGET_ARGS': {'type': list, 'default': ['--no-verbose',
'--adjust-extension',
@@ -205,12 +204,11 @@ def get_real_name(key: str) -> str:
################################ Constants #####################################
PACKAGE_DIR_NAME = 'archivebox'
-TEMPLATES_DIR_NAME = 'themes'
+TEMPLATES_DIR_NAME = 'templates'
ARCHIVE_DIR_NAME = 'archive'
SOURCES_DIR_NAME = 'sources'
LOGS_DIR_NAME = 'logs'
-STATIC_DIR_NAME = 'static'
SQL_INDEX_FILENAME = 'index.sqlite3'
JSON_INDEX_FILENAME = 'index.json'
HTML_INDEX_FILENAME = 'index.html'
@@ -703,7 +701,7 @@ def get_code_locations(config: ConfigDict) -> SimpleConfigValueDict:
'TEMPLATES_DIR': {
'path': (config['TEMPLATES_DIR']).resolve(),
'enabled': True,
- 'is_valid': (config['TEMPLATES_DIR'] / config['ACTIVE_THEME'] / 'static').exists(),
+ 'is_valid': (config['TEMPLATES_DIR'] / 'static').exists(),
},
# 'NODE_MODULES_DIR': {
# 'path': ,
@@ -775,7 +773,7 @@ def get_dependency_info(config: ConfigDict) -> ConfigValue:
'version': config['PYTHON_VERSION'],
'hash': bin_hash(config['PYTHON_BINARY']),
'enabled': True,
- 'is_valid': bool(config['DJANGO_VERSION']),
+ 'is_valid': bool(config['PYTHON_VERSION']),
},
'DJANGO_BINARY': {
'path': bin_path(config['DJANGO_BINARY']),
@@ -787,7 +785,7 @@ def get_dependency_info(config: ConfigDict) -> ConfigValue:
'CURL_BINARY': {
'path': bin_path(config['CURL_BINARY']),
'version': config['CURL_VERSION'],
- 'hash': bin_hash(config['PYTHON_BINARY']),
+ 'hash': bin_hash(config['CURL_BINARY']),
'enabled': config['USE_CURL'],
'is_valid': bool(config['CURL_VERSION']),
},
@@ -803,7 +801,7 @@ def get_dependency_info(config: ConfigDict) -> ConfigValue:
'version': config['NODE_VERSION'],
'hash': bin_hash(config['NODE_BINARY']),
'enabled': config['USE_NODE'],
- 'is_valid': bool(config['SINGLEFILE_VERSION']),
+ 'is_valid': bool(config['NODE_VERSION']),
},
'SINGLEFILE_BINARY': {
'path': bin_path(config['SINGLEFILE_BINARY']),
@@ -917,7 +915,12 @@ os.umask(0o777 - int(OUTPUT_PERMISSIONS, base=8)) # noqa: F821
NODE_BIN_PATH = str((Path(CONFIG["OUTPUT_DIR"]).absolute() / 'node_modules' / '.bin'))
sys.path.append(NODE_BIN_PATH)
-
+# disable stderr "you really shouldn't disable ssl" warnings with library config
+if not CONFIG['CHECK_SSL_VALIDITY']:
+ import urllib3
+ import requests
+ requests.packages.urllib3.disable_warnings(requests.packages.urllib3.exceptions.InsecureRequestWarning)
+ urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
########################### Config Validity Checkers ###########################
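The `CHECK_SSL_VALIDITY` hunk above suppresses urllib3's `InsecureRequestWarning` noise via the library's warning-disabling helpers. The underlying mechanism is just Python's warnings filter; a stdlib-only sketch (the warning class here is a local stand-in, not urllib3's real one):

```python
import warnings

class InsecureRequestWarning(Warning):
    """Local stand-in for urllib3.exceptions.InsecureRequestWarning."""

CHECK_SSL_VALIDITY = False  # stand-in for the ArchiveBox config flag

with warnings.catch_warnings(record=True) as caught:
    if not CHECK_SSL_VALIDITY:
        # same effect as urllib3.disable_warnings(): ignore the category globally
        warnings.simplefilter('ignore', InsecureRequestWarning)
    warnings.warn('Unverified HTTPS request is being made', InsecureRequestWarning)

assert caught == []  # the warning was filtered out before being recorded
```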
diff --git a/archivebox/config_stubs.py b/archivebox/config_stubs.py
index 988f58a1..f9c22a0c 100644
--- a/archivebox/config_stubs.py
+++ b/archivebox/config_stubs.py
@@ -50,7 +50,6 @@ class ConfigDict(BaseConfig, total=False):
PUBLIC_INDEX: bool
PUBLIC_SNAPSHOTS: bool
FOOTER_INFO: str
- ACTIVE_THEME: str
SAVE_TITLE: bool
SAVE_FAVICON: bool
diff --git a/archivebox/core/admin.py b/archivebox/core/admin.py
index 832bea38..bacc53c0 100644
--- a/archivebox/core/admin.py
+++ b/archivebox/core/admin.py
@@ -11,18 +11,29 @@ from django.shortcuts import render, redirect
from django.contrib.auth import get_user_model
from django import forms
+from ..util import htmldecode, urldecode, ansi_to_html
+
from core.models import Snapshot, Tag
from core.forms import AddLinkForm, TagField
from core.mixins import SearchResultsAdminMixin
from index.html import snapshot_icons
-from util import htmldecode, urldecode, ansi_to_html
from logging_util import printable_filesize
from main import add, remove
from config import OUTPUT_DIR
from extractors import archive_links
+# Admin URLs
+# /admin/
+# /admin/login/
+# /admin/core/
+# /admin/core/snapshot/
+# /admin/core/snapshot/:uuid/
+# /admin/core/tag/
+# /admin/core/tag/:uuid/
+
+
# TODO: https://stackoverflow.com/questions/40760880/add-custom-button-to-django-admin-panel
def update_snapshots(modeladmin, request, queryset):
@@ -88,13 +99,14 @@ class SnapshotAdmin(SearchResultsAdminMixin, admin.ModelAdmin):
list_display = ('added', 'title_str', 'url_str', 'files', 'size')
sort_fields = ('title_str', 'url_str', 'added')
readonly_fields = ('id', 'url', 'timestamp', 'num_outputs', 'is_archived', 'url_hash', 'added', 'updated')
- search_fields = ['url', 'timestamp', 'title', 'tags__name']
+ search_fields = ['url__icontains', 'timestamp', 'title', 'tags__name']
fields = (*readonly_fields, 'title', 'tags')
list_filter = ('added', 'updated', 'tags')
ordering = ['-added']
actions = [delete_snapshots, overwrite_snapshots, update_snapshots, update_titles, verify_snapshots]
actions_template = 'admin/actions_as_select.html'
form = SnapshotAdminForm
+ list_per_page = 40
def get_urls(self):
urls = super().get_urls()
@@ -170,7 +182,7 @@ class SnapshotAdmin(SearchResultsAdminMixin, admin.ModelAdmin):
saved_list_max_show_all = self.list_max_show_all
# Monkey patch here plus core_tags.py
- self.change_list_template = 'admin/grid_change_list.html'
+ self.change_list_template = 'private_index_grid.html'
self.list_per_page = 20
self.list_max_show_all = self.list_per_page
@@ -248,7 +260,7 @@ class ArchiveBoxAdmin(admin.AdminSite):
else:
context["form"] = form
- return render(template_name='add_links.html', request=request, context=context)
+ return render(template_name='add.html', request=request, context=context)
admin.site = ArchiveBoxAdmin()
admin.site.register(get_user_model())
diff --git a/archivebox/core/forms.py b/archivebox/core/forms.py
index 86b29bb7..ed584c68 100644
--- a/archivebox/core/forms.py
+++ b/archivebox/core/forms.py
@@ -22,10 +22,32 @@ class AddLinkForm(forms.Form):
url = forms.RegexField(label="URLs (one per line)", regex=URL_REGEX, min_length='6', strip=True, widget=forms.Textarea, required=True)
depth = forms.ChoiceField(label="Archive depth", choices=CHOICES, widget=forms.RadioSelect, initial='0')
archive_methods = forms.MultipleChoiceField(
+ label="Archive methods (select at least 1, otherwise all will be used by default)",
required=False,
widget=forms.SelectMultiple,
choices=ARCHIVE_METHODS,
)
+ # TODO: hook these up to the view and put them
+ # in a collapsible UI section labeled "Advanced"
+ #
+ # exclude_patterns = forms.CharField(
+ # label="Exclude patterns",
+ # min_length='1',
+ # required=False,
+ # initial=URL_BLACKLIST,
+ # )
+ # timeout = forms.IntegerField(
+ # initial=TIMEOUT,
+ # )
+ # overwrite = forms.BooleanField(
+ # label="Overwrite any existing Snapshots",
+ # initial=False,
+ # )
+ # index_only = forms.BooleanField(
+ # label="Add URLs to index without Snapshotting",
+ # initial=False,
+ # )
+
class TagWidgetMixin:
def format_value(self, value):
if value is not None and not isinstance(value, str):
diff --git a/archivebox/core/settings.py b/archivebox/core/settings.py
index e8ed6b16..e73c93d9 100644
--- a/archivebox/core/settings.py
+++ b/archivebox/core/settings.py
@@ -11,7 +11,6 @@ from ..config import (
SECRET_KEY,
ALLOWED_HOSTS,
PACKAGE_DIR,
- ACTIVE_THEME,
TEMPLATES_DIR_NAME,
SQL_INDEX_FILENAME,
OUTPUT_DIR,
@@ -34,6 +33,8 @@ LOGOUT_REDIRECT_URL = '/'
PASSWORD_RESET_URL = '/accounts/password_reset/'
APPEND_SLASH = True
+DEBUG = DEBUG or ('--debug' in sys.argv)
+
INSTALLED_APPS = [
'django.contrib.auth',
'django.contrib.contenttypes',
@@ -69,13 +70,12 @@ AUTHENTICATION_BACKENDS = [
STATIC_URL = '/static/'
STATICFILES_DIRS = [
- str(Path(PACKAGE_DIR) / TEMPLATES_DIR_NAME / ACTIVE_THEME / 'static'),
- str(Path(PACKAGE_DIR) / TEMPLATES_DIR_NAME / 'default' / 'static'),
+ str(Path(PACKAGE_DIR) / TEMPLATES_DIR_NAME / 'static'),
]
TEMPLATE_DIRS = [
- str(Path(PACKAGE_DIR) / TEMPLATES_DIR_NAME / ACTIVE_THEME),
- str(Path(PACKAGE_DIR) / TEMPLATES_DIR_NAME / 'default'),
+ str(Path(PACKAGE_DIR) / TEMPLATES_DIR_NAME / 'core'),
+ str(Path(PACKAGE_DIR) / TEMPLATES_DIR_NAME / 'admin'),
str(Path(PACKAGE_DIR) / TEMPLATES_DIR_NAME),
]
@@ -101,7 +101,7 @@ TEMPLATES = [
################################################################################
DATABASE_FILE = Path(OUTPUT_DIR) / SQL_INDEX_FILENAME
-DATABASE_NAME = os.environ.get("ARCHIVEBOX_DATABASE_NAME", DATABASE_FILE)
+DATABASE_NAME = os.environ.get("ARCHIVEBOX_DATABASE_NAME", str(DATABASE_FILE))
DATABASES = {
'default': {
diff --git a/archivebox/core/templatetags/core_tags.py b/archivebox/core/templatetags/core_tags.py
index 25f06852..9ac1ee27 100644
--- a/archivebox/core/templatetags/core_tags.py
+++ b/archivebox/core/templatetags/core_tags.py
@@ -14,7 +14,7 @@ register = template.Library()
def snapshot_image(snapshot):
result = ArchiveResult.objects.filter(snapshot=snapshot, extractor='screenshot', status='succeeded').first()
if result:
- return reverse('LinkAssets', args=[f'{str(snapshot.timestamp)}/{result.output}'])
+ return reverse('Snapshot', args=[f'{str(snapshot.timestamp)}/{result.output}'])
return static('archive.png')
diff --git a/archivebox/core/urls.py b/archivebox/core/urls.py
index b8e4bafb..182e4dca 100644
--- a/archivebox/core/urls.py
+++ b/archivebox/core/urls.py
@@ -5,22 +5,24 @@ from django.views import static
from django.conf import settings
from django.views.generic.base import RedirectView
-from core.views import MainIndex, LinkDetails, PublicArchiveView, AddView
+from core.views import HomepageView, SnapshotView, PublicIndexView, AddView
# print('DEBUG', settings.DEBUG)
urlpatterns = [
+ path('public/', PublicIndexView.as_view(), name='public-index'),
+
path('robots.txt', static.serve, {'document_root': settings.OUTPUT_DIR, 'path': 'robots.txt'}),
path('favicon.ico', static.serve, {'document_root': settings.OUTPUT_DIR, 'path': 'favicon.ico'}),
path('docs/', RedirectView.as_view(url='https://github.com/ArchiveBox/ArchiveBox/wiki'), name='Docs'),
path('archive/', RedirectView.as_view(url='/')),
-    path('archive/<path:path>', LinkDetails.as_view(), name='LinkAssets'),
+    path('archive/<path:path>', SnapshotView.as_view(), name='Snapshot'),
path('admin/core/snapshot/add/', RedirectView.as_view(url='/add/')),
- path('add/', AddView.as_view()),
+ path('add/', AddView.as_view(), name='add'),
path('accounts/login/', RedirectView.as_view(url='/admin/login/')),
path('accounts/logout/', RedirectView.as_view(url='/admin/logout/')),
@@ -31,6 +33,37 @@ urlpatterns = [
path('index.html', RedirectView.as_view(url='/')),
path('index.json', static.serve, {'document_root': settings.OUTPUT_DIR, 'path': 'index.json'}),
- path('', MainIndex.as_view(), name='Home'),
- path('public/', PublicArchiveView.as_view(), name='public-index'),
+ path('', HomepageView.as_view(), name='Home'),
]
+
+ # # Proposed UI URLs spec
+ # path('', HomepageView)
+ # path('/add', AddView)
+ # path('/public', PublicIndexView)
+ # path('/snapshot/:slug', SnapshotView)
+
+ # path('/admin', admin.site.urls)
+ # path('/accounts', django.contrib.auth.urls)
+
+    # # Proposed REST API spec
+ # # :slugs can be uuid, short_uuid, or any of the unique index_fields
+ # path('api/v1/'),
+ # path('api/v1/core/' [GET])
+ # path('api/v1/core/snapshot/', [GET, POST, PUT]),
+ # path('api/v1/core/snapshot/:slug', [GET, PATCH, DELETE]),
+ # path('api/v1/core/archiveresult', [GET, POST, PUT]),
+ # path('api/v1/core/archiveresult/:slug', [GET, PATCH, DELETE]),
+ # path('api/v1/core/tag/', [GET, POST, PUT]),
+ # path('api/v1/core/tag/:slug', [GET, PATCH, DELETE]),
+
+ # path('api/v1/cli/', [GET])
+ # path('api/v1/cli/{add,list,config,...}', [POST]), # pass query as kwargs directly to `run_subcommand` and return stdout, stderr, exitcode
+
+ # path('api/v1/extractors/', [GET])
+ # path('api/v1/extractors/:extractor/', [GET]),
+ # path('api/v1/extractors/:extractor/:func', [GET, POST]), # pass query as args directly to chosen function
+
+ # future, just an idea:
+ # path('api/v1/scheduler/', [GET])
+ # path('api/v1/scheduler/task/', [GET, POST, PUT]),
+ # path('api/v1/scheduler/task/:slug', [GET, PATCH, DELETE]),
diff --git a/archivebox/core/views.py b/archivebox/core/views.py
index b46e364e..0e19fad6 100644
--- a/archivebox/core/views.py
+++ b/archivebox/core/views.py
@@ -9,6 +9,7 @@ from django.http import HttpResponse
from django.views import View, static
from django.views.generic.list import ListView
from django.views.generic import FormView
+from django.db.models import Q
from django.contrib.auth.mixins import UserPassesTestMixin
from core.models import Snapshot
@@ -27,20 +28,20 @@ from ..util import base_url, ansi_to_html
from ..index.html import snapshot_icons
-class MainIndex(View):
- template = 'main_index.html'
-
+class HomepageView(View):
def get(self, request):
if request.user.is_authenticated:
return redirect('/admin/core/snapshot/')
if PUBLIC_INDEX:
- return redirect('public-index')
+ return redirect('/public')
return redirect(f'/admin/login/?next={request.path}')
-class LinkDetails(View):
+class SnapshotView(View):
+    # render static html index from filesystem archive/<timestamp>/index.html
+
def get(self, request, path):
# missing trailing slash -> redirect to index
if '/' not in path:
@@ -90,8 +91,8 @@ class LinkDetails(View):
status=404,
)
-class PublicArchiveView(ListView):
- template = 'snapshot_list.html'
+class PublicIndexView(ListView):
+ template_name = 'public_index.html'
model = Snapshot
paginate_by = 100
ordering = ['title']
@@ -107,7 +108,7 @@ class PublicArchiveView(ListView):
qs = super().get_queryset(**kwargs)
query = self.request.GET.get('q')
if query:
- qs = qs.filter(title__icontains=query)
+ qs = qs.filter(Q(title__icontains=query) | Q(url__icontains=query) | Q(timestamp__icontains=query) | Q(tags__name__icontains=query))
for snapshot in qs:
snapshot.icons = snapshot_icons(snapshot)
return qs
@@ -121,7 +122,7 @@ class PublicArchiveView(ListView):
class AddView(UserPassesTestMixin, FormView):
- template_name = "add_links.html"
+ template_name = "add.html"
form_class = AddLinkForm
def get_initial(self):
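The widened `Q(...)` filter ORs the search term across title, URL, timestamp, and tag name instead of matching title alone. Against the SQLite index this is equivalent to a multi-column `LIKE` query; a sketch using an in-memory DB (table and column names simplified for illustration, not the real schema):

```python
import sqlite3

db = sqlite3.connect(':memory:')
db.execute('CREATE TABLE snapshot (url TEXT, title TEXT, timestamp TEXT)')
db.executemany('INSERT INTO snapshot VALUES (?, ?, ?)', [
    ('https://example.com',   'Example Domain', '1600000000'),
    ('https://archivebox.io', 'ArchiveBox',     '1600000001'),
])

q = '%example%'
rows = db.execute(
    'SELECT url FROM snapshot'
    ' WHERE title LIKE ? OR url LIKE ? OR timestamp LIKE ?',
    (q, q, q),
).fetchall()

print(rows)  # only the row matching on title or url
```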
diff --git a/archivebox/extractors/__init__.py b/archivebox/extractors/__init__.py
index a4acef0b..15968097 100644
--- a/archivebox/extractors/__init__.py
+++ b/archivebox/extractors/__init__.py
@@ -102,7 +102,7 @@ def archive_link(link: Link, overwrite: bool=False, methods: Optional[Iterable[s
if method_name not in link.history:
link.history[method_name] = []
- if should_run(link, out_dir) or overwrite:
+ if should_run(link, out_dir, overwrite):
log_archive_method_started(method_name)
result = method_function(link=link, out_dir=out_dir)
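Passing `overwrite` through to `should_run` (and from there into every extractor's `should_save_*`) lets `--overwrite` force a re-run even when output already exists on disk. The shared pattern each extractor now follows, as a self-contained sketch (`SAVE_EXAMPLE` stands in for the per-method config toggle):

```python
from pathlib import Path
import tempfile

SAVE_EXAMPLE = True  # stand-in for a SAVE_* config toggle

def should_save_example(out_dir: Path, overwrite: bool = False) -> bool:
    # skip when output already exists, unless the caller asked to overwrite
    if not overwrite and (out_dir / 'output.html').exists():
        return False
    return SAVE_EXAMPLE

with tempfile.TemporaryDirectory() as d:
    out_dir = Path(d)
    assert should_save_example(out_dir)                    # nothing saved yet
    (out_dir / 'output.html').write_text('<html></html>')
    assert not should_save_example(out_dir)                # already archived
    assert should_save_example(out_dir, overwrite=True)    # forced re-archive
```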
diff --git a/archivebox/extractors/archive_org.py b/archivebox/extractors/archive_org.py
index f5598d6f..1f382190 100644
--- a/archivebox/extractors/archive_org.py
+++ b/archivebox/extractors/archive_org.py
@@ -25,12 +25,12 @@ from ..logging_util import TimedProgress
@enforce_types
-def should_save_archive_dot_org(link: Link, out_dir: Optional[Path]=None) -> bool:
- out_dir = out_dir or Path(link.link_dir)
+def should_save_archive_dot_org(link: Link, out_dir: Optional[Path]=None, overwrite: Optional[bool]=False) -> bool:
if is_static_file(link.url):
return False
- if (out_dir / "archive.org.txt").exists():
+ out_dir = out_dir or Path(link.link_dir)
+ if not overwrite and (out_dir / 'archive.org.txt').exists():
# if open(path, 'r').read().strip() != 'None':
return False
diff --git a/archivebox/extractors/dom.py b/archivebox/extractors/dom.py
index babbe71c..ec2df073 100644
--- a/archivebox/extractors/dom.py
+++ b/archivebox/extractors/dom.py
@@ -20,16 +20,16 @@ from ..logging_util import TimedProgress
@enforce_types
-def should_save_dom(link: Link, out_dir: Optional[Path]=None) -> bool:
- out_dir = out_dir or Path(link.link_dir)
+def should_save_dom(link: Link, out_dir: Optional[Path]=None, overwrite: Optional[bool]=False) -> bool:
if is_static_file(link.url):
return False
-
- if (out_dir / 'output.html').exists():
+
+ out_dir = out_dir or Path(link.link_dir)
+ if not overwrite and (out_dir / 'output.html').exists():
return False
return SAVE_DOM
-
+
@enforce_types
def save_dom(link: Link, out_dir: Optional[Path]=None, timeout: int=TIMEOUT) -> ArchiveResult:
"""print HTML of site to file using chrome --dump-html"""
diff --git a/archivebox/extractors/favicon.py b/archivebox/extractors/favicon.py
index 5e7c1fb0..b8831d0c 100644
--- a/archivebox/extractors/favicon.py
+++ b/archivebox/extractors/favicon.py
@@ -20,13 +20,13 @@ from ..logging_util import TimedProgress
@enforce_types
-def should_save_favicon(link: Link, out_dir: Optional[str]=None) -> bool:
- out_dir = out_dir or link.link_dir
- if (Path(out_dir) / 'favicon.ico').exists():
+def should_save_favicon(link: Link, out_dir: Optional[str]=None, overwrite: Optional[bool]=False) -> bool:
+ out_dir = out_dir or Path(link.link_dir)
+ if not overwrite and (out_dir / 'favicon.ico').exists():
return False
return SAVE_FAVICON
-
+
@enforce_types
def save_favicon(link: Link, out_dir: Optional[Path]=None, timeout: int=TIMEOUT) -> ArchiveResult:
"""download site favicon from google's favicon api"""
@@ -42,14 +42,13 @@ def save_favicon(link: Link, out_dir: Optional[Path]=None, timeout: int=TIMEOUT)
*([] if CHECK_SSL_VALIDITY else ['--insecure']),
'https://www.google.com/s2/favicons?domain={}'.format(domain(link.url)),
]
- status = 'pending'
+ status = 'failed'
timer = TimedProgress(timeout, prefix=' ')
try:
run(cmd, cwd=str(out_dir), timeout=timeout)
chmod_file(output, cwd=str(out_dir))
status = 'succeeded'
except Exception as err:
- status = 'failed'
output = err
finally:
timer.end()
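Initializing `status = 'failed'` and flipping it only after the command succeeds means the `except` branch no longer needs to set it, and an exception raised anywhere before the flip can't leave a stale `'pending'` status behind. The pattern in isolation:

```python
def run_step(step) -> str:
    status = 'failed'          # assume failure up front
    try:
        step()
        status = 'succeeded'   # only reached if step() didn't raise
    except Exception:
        pass                   # status is already 'failed'
    return status

assert run_step(lambda: None) == 'succeeded'
assert run_step(lambda: 1 / 0) == 'failed'
```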
diff --git a/archivebox/extractors/git.py b/archivebox/extractors/git.py
index fd20d4b6..efef37c2 100644
--- a/archivebox/extractors/git.py
+++ b/archivebox/extractors/git.py
@@ -28,12 +28,12 @@ from ..logging_util import TimedProgress
@enforce_types
-def should_save_git(link: Link, out_dir: Optional[Path]=None) -> bool:
- out_dir = out_dir or link.link_dir
+def should_save_git(link: Link, out_dir: Optional[Path]=None, overwrite: Optional[bool]=False) -> bool:
if is_static_file(link.url):
return False
- if (out_dir / "git").exists():
+ out_dir = out_dir or Path(link.link_dir)
+ if not overwrite and (out_dir / 'git').exists():
return False
is_clonable_url = (
diff --git a/archivebox/extractors/headers.py b/archivebox/extractors/headers.py
index 4e69dec1..91dcb8e3 100644
--- a/archivebox/extractors/headers.py
+++ b/archivebox/extractors/headers.py
@@ -22,11 +22,12 @@ from ..config import (
from ..logging_util import TimedProgress
@enforce_types
-def should_save_headers(link: Link, out_dir: Optional[str]=None) -> bool:
- out_dir = out_dir or link.link_dir
+def should_save_headers(link: Link, out_dir: Optional[str]=None, overwrite: Optional[bool]=False) -> bool:
+ out_dir = out_dir or Path(link.link_dir)
+ if not overwrite and (out_dir / 'headers.json').exists():
+ return False
- output = Path(out_dir or link.link_dir) / 'headers.json'
- return not output.exists() and SAVE_HEADERS
+ return SAVE_HEADERS
@enforce_types
diff --git a/archivebox/extractors/media.py b/archivebox/extractors/media.py
index 3792fd2a..1c0a21ba 100644
--- a/archivebox/extractors/media.py
+++ b/archivebox/extractors/media.py
@@ -21,13 +21,12 @@ from ..logging_util import TimedProgress
@enforce_types
-def should_save_media(link: Link, out_dir: Optional[Path]=None) -> bool:
- out_dir = out_dir or link.link_dir
-
+def should_save_media(link: Link, out_dir: Optional[Path]=None, overwrite: Optional[bool]=False) -> bool:
if is_static_file(link.url):
return False
- if (out_dir / "media").exists():
+ out_dir = out_dir or Path(link.link_dir)
+ if not overwrite and (out_dir / 'media').exists():
return False
return SAVE_MEDIA
diff --git a/archivebox/extractors/mercury.py b/archivebox/extractors/mercury.py
index 07c02420..d9e32c0a 100644
--- a/archivebox/extractors/mercury.py
+++ b/archivebox/extractors/mercury.py
@@ -37,13 +37,15 @@ def ShellError(cmd: List[str], result: CompletedProcess, lines: int=20) -> Archi
@enforce_types
-def should_save_mercury(link: Link, out_dir: Optional[str]=None) -> bool:
- out_dir = out_dir or link.link_dir
+def should_save_mercury(link: Link, out_dir: Optional[str]=None, overwrite: Optional[bool]=False) -> bool:
if is_static_file(link.url):
return False
- output = Path(out_dir or link.link_dir) / 'mercury'
- return SAVE_MERCURY and MERCURY_VERSION and (not output.exists())
+ out_dir = out_dir or Path(link.link_dir)
+ if not overwrite and (out_dir / 'mercury').exists():
+ return False
+
+ return SAVE_MERCURY
@enforce_types
diff --git a/archivebox/extractors/pdf.py b/archivebox/extractors/pdf.py
index 1b0201e3..7138206c 100644
--- a/archivebox/extractors/pdf.py
+++ b/archivebox/extractors/pdf.py
@@ -19,12 +19,12 @@ from ..logging_util import TimedProgress
@enforce_types
-def should_save_pdf(link: Link, out_dir: Optional[Path]=None) -> bool:
- out_dir = out_dir or Path(link.link_dir)
+def should_save_pdf(link: Link, out_dir: Optional[Path]=None, overwrite: Optional[bool]=False) -> bool:
if is_static_file(link.url):
return False
-
- if (out_dir / "output.pdf").exists():
+
+ out_dir = out_dir or Path(link.link_dir)
+ if not overwrite and (out_dir / 'output.pdf').exists():
return False
return SAVE_PDF
diff --git a/archivebox/extractors/readability.py b/archivebox/extractors/readability.py
index 9da620b4..6e48cd9a 100644
--- a/archivebox/extractors/readability.py
+++ b/archivebox/extractors/readability.py
@@ -46,13 +46,15 @@ def get_html(link: Link, path: Path) -> str:
return document
@enforce_types
-def should_save_readability(link: Link, out_dir: Optional[str]=None) -> bool:
- out_dir = out_dir or link.link_dir
+def should_save_readability(link: Link, out_dir: Optional[str]=None, overwrite: Optional[bool]=False) -> bool:
if is_static_file(link.url):
return False
- output = Path(out_dir or link.link_dir) / 'readability'
- return SAVE_READABILITY and READABILITY_VERSION and (not output.exists())
+ out_dir = out_dir or Path(link.link_dir)
+ if not overwrite and (out_dir / 'readability').exists():
+ return False
+
+ return SAVE_READABILITY
@enforce_types
diff --git a/archivebox/extractors/screenshot.py b/archivebox/extractors/screenshot.py
index 325584eb..cc748bf6 100644
--- a/archivebox/extractors/screenshot.py
+++ b/archivebox/extractors/screenshot.py
@@ -20,12 +20,12 @@ from ..logging_util import TimedProgress
@enforce_types
-def should_save_screenshot(link: Link, out_dir: Optional[Path]=None) -> bool:
- out_dir = out_dir or Path(link.link_dir)
+def should_save_screenshot(link: Link, out_dir: Optional[Path]=None, overwrite: Optional[bool]=False) -> bool:
if is_static_file(link.url):
return False
-
- if (out_dir / "screenshot.png").exists():
+
+ out_dir = out_dir or Path(link.link_dir)
+ if not overwrite and (out_dir / 'screenshot.png').exists():
return False
return SAVE_SCREENSHOT
diff --git a/archivebox/extractors/singlefile.py b/archivebox/extractors/singlefile.py
index 2e5c3896..3279960e 100644
--- a/archivebox/extractors/singlefile.py
+++ b/archivebox/extractors/singlefile.py
@@ -23,13 +23,15 @@ from ..logging_util import TimedProgress
@enforce_types
-def should_save_singlefile(link: Link, out_dir: Optional[Path]=None) -> bool:
- out_dir = out_dir or Path(link.link_dir)
+def should_save_singlefile(link: Link, out_dir: Optional[Path]=None, overwrite: Optional[bool]=False) -> bool:
if is_static_file(link.url):
return False
- output = out_dir / 'singlefile.html'
- return SAVE_SINGLEFILE and SINGLEFILE_VERSION and (not output.exists())
+ out_dir = out_dir or Path(link.link_dir)
+ if not overwrite and (out_dir / 'singlefile.html').exists():
+ return False
+
+ return SAVE_SINGLEFILE
@enforce_types
@@ -37,7 +39,7 @@ def save_singlefile(link: Link, out_dir: Optional[Path]=None, timeout: int=TIMEO
"""download full site using single-file"""
out_dir = out_dir or Path(link.link_dir)
- output = str(out_dir.absolute() / "singlefile.html")
+ output = "singlefile.html"
browser_args = chrome_args(TIMEOUT=0)
@@ -48,7 +50,7 @@ def save_singlefile(link: Link, out_dir: Optional[Path]=None, timeout: int=TIMEO
'--browser-executable-path={}'.format(CHROME_BINARY),
browser_args,
link.url,
- output
+ output,
]
status = 'succeeded'
@@ -69,9 +71,9 @@ def save_singlefile(link: Link, out_dir: Optional[Path]=None, timeout: int=TIMEO
)
# Check for common failure cases
- if (result.returncode > 0):
+ if (result.returncode > 0) or not (out_dir / output).is_file():
raise ArchiveError('SingleFile was not able to archive the page', hints)
- chmod_file(output)
+ chmod_file(output, cwd=str(out_dir))
except (Exception, OSError) as err:
status = 'failed'
# TODO: Make this prettier. This is necessary to run the command (escape JSON internal quotes).
diff --git a/archivebox/extractors/title.py b/archivebox/extractors/title.py
index 28cb128f..272eebc8 100644
--- a/archivebox/extractors/title.py
+++ b/archivebox/extractors/title.py
@@ -8,7 +8,6 @@ from typing import Optional
from ..index.schema import Link, ArchiveResult, ArchiveOutput, ArchiveError
from ..util import (
enforce_types,
- is_static_file,
download_url,
htmldecode,
)
@@ -61,12 +60,9 @@ class TitleParser(HTMLParser):
@enforce_types
-def should_save_title(link: Link, out_dir: Optional[str]=None) -> bool:
+def should_save_title(link: Link, out_dir: Optional[str]=None, overwrite: Optional[bool]=False) -> bool:
# if link already has valid title, skip it
- if link.title and not link.title.lower().startswith('http'):
- return False
-
- if is_static_file(link.url):
+ if not overwrite and link.title and not link.title.lower().startswith('http'):
return False
return SAVE_TITLE
@@ -113,7 +109,11 @@ def save_title(link: Link, out_dir: Optional[Path]=None, timeout: int=TIMEOUT) -
timestamp=link.timestamp)\
.update(title=output)
else:
- raise ArchiveError('Unable to detect page title')
+            # if no content was returned, don't save a title (because it might be a temporary error)
+ if not html:
+ raise ArchiveError('Unable to detect page title')
+ # output = html[:128] # use first bit of content as the title
+ output = link.base_url # use the filename as the title (better UX)
except Exception as err:
status = 'failed'
output = err
diff --git a/archivebox/extractors/wget.py b/archivebox/extractors/wget.py
index b7adbea0..4d04f673 100644
--- a/archivebox/extractors/wget.py
+++ b/archivebox/extractors/wget.py
@@ -10,8 +10,6 @@ from ..index.schema import Link, ArchiveResult, ArchiveOutput, ArchiveError
from ..system import run, chmod_file
from ..util import (
enforce_types,
- is_static_file,
- without_scheme,
without_fragment,
without_query,
path,
@@ -36,10 +34,10 @@ from ..logging_util import TimedProgress
@enforce_types
-def should_save_wget(link: Link, out_dir: Optional[Path]=None) -> bool:
+def should_save_wget(link: Link, out_dir: Optional[Path]=None, overwrite: Optional[bool]=False) -> bool:
output_path = wget_output_path(link)
out_dir = out_dir or Path(link.link_dir)
- if output_path and (out_dir / output_path).exists():
+ if not overwrite and output_path and (out_dir / output_path).exists():
return False
return SAVE_WGET
@@ -66,7 +64,7 @@ def save_wget(link: Link, out_dir: Optional[Path]=None, timeout: int=TIMEOUT) ->
*(['--warc-file={}'.format(str(warc_path))] if SAVE_WARC else []),
*(['--page-requisites'] if SAVE_WGET_REQUISITES else []),
*(['--user-agent={}'.format(WGET_USER_AGENT)] if WGET_USER_AGENT else []),
- *(['--load-cookies', COOKIES_FILE] if COOKIES_FILE else []),
+ *(['--load-cookies', str(COOKIES_FILE)] if COOKIES_FILE else []),
*(['--compression=auto'] if WGET_AUTO_COMPRESSION else []),
*([] if SAVE_WARC else ['--timestamping']),
*([] if CHECK_SSL_VALIDITY else ['--no-check-certificate', '--no-hsts']),
@@ -105,7 +103,12 @@ def save_wget(link: Link, out_dir: Optional[Path]=None, timeout: int=TIMEOUT) ->
if b'ERROR 500: Internal Server Error' in result.stderr:
raise ArchiveError('500 Internal Server Error', hints)
raise ArchiveError('Wget failed or got an error from the server', hints)
- chmod_file(output, cwd=str(out_dir))
+
+ if (out_dir / output).exists():
+ chmod_file(output, cwd=str(out_dir))
+ else:
+ print(f' {out_dir}/{output}')
+ raise ArchiveError('Failed to find wget output after running', hints)
except Exception as err:
status = 'failed'
output = err
@@ -129,9 +132,7 @@ def wget_output_path(link: Link) -> Optional[str]:
See docs on wget --adjust-extension (-E)
"""
- if is_static_file(link.url):
- return without_scheme(without_fragment(link.url))
-
+
# Wget downloads can save in a number of different ways depending on the url:
# https://example.com
# > example.com/index.html
@@ -175,14 +176,30 @@ def wget_output_path(link: Link) -> Optional[str]:
if html_files:
return str(html_files[0].relative_to(link.link_dir))
+ # sometimes wget'd URLs have no ext and return non-html
+    # e.g. /some/example/rss/all -> some RSS XML content
+    #      /some/other/url.o4g  -> some binary with an unrecognized ext
+ # test this with archivebox add --depth=1 https://getpocket.com/users/nikisweeting/feed/all
+ last_part_of_url = urldecode(full_path.rsplit('/', 1)[-1])
+    for file_present in search_dir.iterdir():
+        if file_present.name == last_part_of_url:
+            return str(file_present.relative_to(link.link_dir))
+
# Move up one directory level
search_dir = search_dir.parent
if str(search_dir) == link.link_dir:
break
+
+    # check for literally any file present that isn't an empty folder
+ domain_dir = Path(domain(link.url).replace(":", "+"))
+ files_within = list((Path(link.link_dir) / domain_dir).glob('**/*.*'))
+ if files_within:
+        return str(files_within[-1].relative_to(link.link_dir))
- search_dir = Path(link.link_dir) / domain(link.url).replace(":", "+") / urldecode(full_path)
- if not search_dir.is_dir():
- return str(search_dir.relative_to(link.link_dir))
+ # fallback to just the domain dir
+ search_dir = Path(link.link_dir) / domain(link.url).replace(":", "+")
+ if search_dir.is_dir():
+ return domain(link.url).replace(":", "+")
return None
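The fallback chain added to `wget_output_path` above (exact filename match, then any file under the domain dir, then the bare domain dir, then `None`) can be sketched in isolation. This is a simplified stand-in with hypothetical names, not ArchiveBox's actual helper:

```python
from pathlib import Path
from typing import Optional

def guess_wget_output(link_dir: str, domain_dir: str, last_url_part: str) -> Optional[str]:
    """Mirror the fallback order above: exact filename match, then any
    file inside the domain dir, then the bare domain dir, then None."""
    base = Path(link_dir) / domain_dir
    if not base.is_dir():
        return None
    # 1. exact match on the decoded last URL segment (e.g. 'feed' or 'all')
    for candidate in base.rglob('*'):
        if candidate.is_file() and candidate.name == last_url_part:
            return str(candidate.relative_to(link_dir))
    # 2. literally any file with an extension inside the domain dir
    files_within = sorted(base.glob('**/*.*'))
    if files_within:
        return str(files_within[-1].relative_to(link_dir))
    # 3. fall back to the bare domain dir itself
    return domain_dir
```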
diff --git a/archivebox/index/__init__.py b/archivebox/index/__init__.py
index 8eab1d38..04ab0a8d 100644
--- a/archivebox/index/__init__.py
+++ b/archivebox/index/__init__.py
@@ -2,7 +2,6 @@ __package__ = 'archivebox.index'
import os
import shutil
-import json as pyjson
from pathlib import Path
from itertools import chain
@@ -42,6 +41,7 @@ from .html import (
write_html_link_details,
)
from .json import (
+ pyjson,
parse_json_link_details,
write_json_link_details,
)
diff --git a/archivebox/index/html.py b/archivebox/index/html.py
index a62e2c7e..ebfe7d78 100644
--- a/archivebox/index/html.py
+++ b/archivebox/index/html.py
@@ -4,7 +4,7 @@ from datetime import datetime
from typing import List, Optional, Iterator, Mapping
from pathlib import Path
-from django.utils.html import format_html
+from django.utils.html import format_html, mark_safe
from collections import defaultdict
from .schema import Link
@@ -23,11 +23,12 @@ from ..config import (
GIT_SHA,
FOOTER_INFO,
HTML_INDEX_FILENAME,
+ SAVE_ARCHIVE_DOT_ORG,
)
-MAIN_INDEX_TEMPLATE = 'main_index.html'
-MINIMAL_INDEX_TEMPLATE = 'main_index_minimal.html'
-LINK_DETAILS_TEMPLATE = 'link_details.html'
+MAIN_INDEX_TEMPLATE = 'static_index.html'
+MINIMAL_INDEX_TEMPLATE = 'minimal_index.html'
+LINK_DETAILS_TEMPLATE = 'snapshot.html'
TITLE_LOADING_MSG = 'Not yet archived...'
@@ -103,6 +104,7 @@ def link_details_template(link: Link) -> str:
'status': 'archived' if link.is_archived else 'not yet archived',
'status_color': 'success' if link.is_archived else 'danger',
'oldest_archive_date': ts_to_date(link.oldest_archive_date),
+ 'SAVE_ARCHIVE_DOT_ORG': SAVE_ARCHIVE_DOT_ORG,
})
@enforce_types
@@ -116,12 +118,14 @@ def render_django_template(template: str, context: Mapping[str, str]) -> str:
def snapshot_icons(snapshot) -> str:
from core.models import EXTRACTORS
+ # start = datetime.now()
+
archive_results = snapshot.archiveresult_set.filter(status="succeeded")
link = snapshot.as_link()
path = link.archive_path
canon = link.canonical_outputs()
output = ""
- output_template = '<a href="/{}/{}" class="exists-{}" title="{}">{} </a>'
+ output_template = '<a href="/{}/{}" class="exists-{}" title="{}">{}</a> '
icons = {
"singlefile": "❶",
"wget": "🆆",
@@ -138,27 +142,45 @@ def snapshot_icons(snapshot) -> str:
exclude = ["favicon", "title", "headers", "archive_org"]
# Missing specific entry for WARC
- extractor_items = defaultdict(lambda: None)
+ extractor_outputs = defaultdict(lambda: None)
for extractor, _ in EXTRACTORS:
for result in archive_results:
- if result.extractor == extractor:
- extractor_items[extractor] = result
+ if result.extractor == extractor and result:
+ extractor_outputs[extractor] = result
for extractor, _ in EXTRACTORS:
if extractor not in exclude:
- exists = extractor_items[extractor] is not None
- output += output_template.format(path, canon[f"{extractor}_path"], str(exists),
- extractor, icons.get(extractor, "?"))
+ existing = extractor_outputs[extractor] and extractor_outputs[extractor].status == 'succeeded' and extractor_outputs[extractor].output
+ # Check filesystem to see if anything is actually present (too slow, needs optimization/caching)
+ # if existing:
+ # existing = (Path(path) / existing)
+ # if existing.is_file():
+ # existing = True
+ # elif existing.is_dir():
+ # existing = any(existing.glob('*.*'))
+ output += format_html(output_template, path, canon[f"{extractor}_path"], str(bool(existing)),
+ extractor, icons.get(extractor, "?"))
if extractor == "wget":
# warc isn't technically its own extractor, so we have to add it after wget
- exists = list((Path(path) / canon["warc_path"]).glob("*.warc.gz"))
- output += output_template.format(exists[0] if exists else '#', canon["warc_path"], str(bool(exists)), "warc", icons.get("warc", "?"))
+
+ # get from db (faster but less truthful)
+ exists = extractor_outputs[extractor] and extractor_outputs[extractor].status == 'succeeded' and extractor_outputs[extractor].output
+ # get from filesystem (slower but more accurate)
+ # exists = list((Path(path) / canon["warc_path"]).glob("*.warc.gz"))
+ output += format_html(output_template, 'warc/', canon["warc_path"], str(bool(exists)), "warc", icons.get("warc", "?"))
if extractor == "archive_org":
# The check for archive_org is different, so it has to be handled separately
- target_path = Path(path) / "archive.org.txt"
- exists = target_path.exists()
+
+ # get from db (faster)
+ exists = extractor_outputs[extractor] and extractor_outputs[extractor].status == 'succeeded' and extractor_outputs[extractor].output
+ # get from filesystem (slower)
+ # target_path = Path(path) / "archive.org.txt"
+ # exists = target_path.exists()
output += '<a href="{}" class="exists-{}" title="{}">{}</a> '.format(canon["archive_org_path"], str(exists),
"archive_org", icons.get("archive_org", "?"))
- return format_html(f'<span class="files-icons">{output}</span>')
+ result = format_html('<span class="files-icons">{}</span>', mark_safe(output))
+ # end = datetime.now()
+ # print(((end - start).total_seconds()*1000) // 1, 'ms')
+ return result
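Switching from plain `str.format` to `format_html` means every interpolated value gets HTML-escaped, while `mark_safe` lets already-built markup (the accumulated icon links) pass through un-escaped. A stdlib-only sketch of that contract, with hypothetical `*_sketch` names standing in for Django's real implementations in `django.utils.html`:

```python
from html import escape

class SafeStr(str):
    """Marker type for strings already known to be safe HTML."""

def mark_safe_sketch(s: str) -> SafeStr:
    return SafeStr(s)

def format_html_sketch(template: str, *args: object) -> str:
    # escape every interpolated value unless it was explicitly marked safe,
    # so untrusted titles/paths can't inject markup into the page
    return template.format(
        *(a if isinstance(a, SafeStr) else escape(str(a)) for a in args)
    )
```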
diff --git a/archivebox/index/schema.py b/archivebox/index/schema.py
index bc3a25da..7501da3a 100644
--- a/archivebox/index/schema.py
+++ b/archivebox/index/schema.py
@@ -412,12 +412,14 @@ class Link:
"""predict the expected output paths that should be present after archiving"""
from ..extractors.wget import wget_output_path
+ # TODO: banish this awful duplication from the codebase and import these
+ # from their respective extractor files
canonical = {
'index_path': 'index.html',
'favicon_path': 'favicon.ico',
'google_favicon_path': 'https://www.google.com/s2/favicons?domain={}'.format(self.domain),
'wget_path': wget_output_path(self),
- 'warc_path': 'warc',
+ 'warc_path': 'warc/',
'singlefile_path': 'singlefile.html',
'readability_path': 'readability/content.html',
'mercury_path': 'mercury/content.html',
@@ -425,8 +427,9 @@ class Link:
'screenshot_path': 'screenshot.png',
'dom_path': 'output.html',
'archive_org_path': 'https://web.archive.org/web/{}'.format(self.base_url),
- 'git_path': 'git',
- 'media_path': 'media',
+ 'git_path': 'git/',
+ 'media_path': 'media/',
+ 'headers_path': 'headers.json',
}
if self.is_static:
# static binary files like PDF and images are handled slightly differently.
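Appending the trailing slash to directory outputs (`warc/`, `git/`, `media/`) matters for link resolution, not just cosmetics: a relative href resolves inside the directory only when the base path ends with a slash. A quick stdlib demonstration with illustrative URLs:

```python
from urllib.parse import urljoin

# Without the trailing slash, a relative link resolves to the parent
# directory; with it, the link resolves inside the output directory.
without_slash = urljoin('https://host/archive/123/warc', '20210101.warc.gz')
with_slash = urljoin('https://host/archive/123/warc/', '20210101.warc.gz')
```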
diff --git a/archivebox/main.py b/archivebox/main.py
index eb8cd6a0..c55a2c04 100644
--- a/archivebox/main.py
+++ b/archivebox/main.py
@@ -79,7 +79,6 @@ from .config import (
ARCHIVE_DIR_NAME,
SOURCES_DIR_NAME,
LOGS_DIR_NAME,
- STATIC_DIR_NAME,
JSON_INDEX_FILENAME,
HTML_INDEX_FILENAME,
SQL_INDEX_FILENAME,
@@ -125,10 +124,10 @@ ALLOWED_IN_OUTPUT_DIR = {
'.virtualenv',
'node_modules',
'package-lock.json',
+ 'static',
ARCHIVE_DIR_NAME,
SOURCES_DIR_NAME,
LOGS_DIR_NAME,
- STATIC_DIR_NAME,
SQL_INDEX_FILENAME,
JSON_INDEX_FILENAME,
HTML_INDEX_FILENAME,
@@ -1060,6 +1059,7 @@ def server(runserver_args: Optional[List[str]]=None,
reload: bool=False,
debug: bool=False,
init: bool=False,
+ createsuperuser: bool=False,
out_dir: Path=OUTPUT_DIR) -> None:
"""Run the ArchiveBox HTTP server"""
@@ -1068,6 +1068,9 @@ def server(runserver_args: Optional[List[str]]=None,
if init:
run_subcommand('init', stdin=None, pwd=out_dir)
+ if createsuperuser:
+ run_subcommand('manage', subcommand_args=['createsuperuser'], pwd=out_dir)
+
# setup config for django runserver
from . import config
config.SHOW_PROGRESS = False
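The new `--createsuperuser` flag simply chains a `manage createsuperuser` run before starting the server, so a fresh install can come up in one command (`archivebox server --createsuperuser 0.0.0.0:8000`). The pre-dispatch pattern, as a minimal sketch with hypothetical names:

```python
from typing import List

def serve(init: bool = False, createsuperuser: bool = False) -> List[str]:
    """Run optional setup subcommands before the long-running server."""
    ran: List[str] = []
    if init:
        ran.append('init')             # bootstrap the collection first
    if createsuperuser:
        ran.append('createsuperuser')  # then create the admin account
    ran.append('runserver')            # finally start serving
    return ran
```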
diff --git a/archivebox/parsers/generic_txt.py b/archivebox/parsers/generic_txt.py
index e296ec7e..94dd523c 100644
--- a/archivebox/parsers/generic_txt.py
+++ b/archivebox/parsers/generic_txt.py
@@ -51,9 +51,9 @@ def parse_generic_txt_export(text_file: IO[str], **_kwargs) -> Iterable[Link]:
# look inside the URL for any sub-urls, e.g. for archive.org links
# https://web.archive.org/web/20200531203453/https://www.reddit.com/r/socialism/comments/gu24ke/nypd_officers_claim_they_are_protecting_the_rule/fsfq0sw/
# -> https://www.reddit.com/r/socialism/comments/gu24ke/nypd_officers_claim_they_are_protecting_the_rule/fsfq0sw/
- for url in re.findall(URL_REGEX, line[1:]):
+ for sub_url in re.findall(URL_REGEX, line[1:]):
yield Link(
- url=htmldecode(url),
+ url=htmldecode(sub_url),
timestamp=str(datetime.now().timestamp()),
title=None,
tags=None,
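The `url` to `sub_url` rename avoids shadowing the loop variable from the enclosing scope; the behavior it preserves (extracting URLs embedded inside another URL, such as archive.org wrappers) looks like this with a deliberately simplified regex, since ArchiveBox's real `URL_REGEX` is stricter:

```python
import re

# simplified stand-in for ArchiveBox's URL_REGEX (the real one is stricter)
URL_REGEX = re.compile(r'https?://[^\s<>"]+')

line = ('https://web.archive.org/web/20200531203453/'
        'https://www.reddit.com/r/socialism/')
# skip the first character so the outer URL itself can't match again,
# then collect any URLs embedded inside it
sub_urls = re.findall(URL_REGEX, line[1:])
```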
diff --git a/archivebox/parsers/wallabag_atom.py b/archivebox/parsers/wallabag_atom.py
index 0d77869f..7acfc2fc 100644
--- a/archivebox/parsers/wallabag_atom.py
+++ b/archivebox/parsers/wallabag_atom.py
@@ -45,7 +45,7 @@ def parse_wallabag_atom_export(rss_file: IO[str], **_kwargs) -> Iterable[Link]:
time = datetime.strptime(ts_str, "%Y-%m-%dT%H:%M:%S%z")
try:
tags = str_between(get_row('category'), 'label="', '" />')
- except:
+ except Exception:
tags = None
yield Link(
diff --git a/archivebox/search/backends/sonic.py b/archivebox/search/backends/sonic.py
index f0beaddd..f3ef6628 100644
--- a/archivebox/search/backends/sonic.py
+++ b/archivebox/search/backends/sonic.py
@@ -5,7 +5,7 @@ from sonic import IngestClient, SearchClient
from archivebox.util import enforce_types
from archivebox.config import SEARCH_BACKEND_HOST_NAME, SEARCH_BACKEND_PORT, SEARCH_BACKEND_PASSWORD, SONIC_BUCKET, SONIC_COLLECTION
-MAX_SONIC_TEXT_LENGTH = 20000
+MAX_SONIC_TEXT_LENGTH = 2000
@enforce_types
def index(snapshot_id: str, texts: List[str]):
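Dropping `MAX_SONIC_TEXT_LENGTH` from 20000 to 2000 presumably keeps each ingest payload under sonic's buffer limits; a common companion to such a limit is chunking longer texts before indexing. A hedged sketch, since the real backend's splitting may differ:

```python
from typing import List

MAX_SONIC_TEXT_LENGTH = 2000  # sonic rejects over-long ingest payloads

def chunk_text(text: str, max_len: int = MAX_SONIC_TEXT_LENGTH) -> List[str]:
    """Split text into pieces that each fit in a single ingest call."""
    return [text[i:i + max_len] for i in range(0, len(text), max_len)]
```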
diff --git a/archivebox/search/utils.py b/archivebox/search/utils.py
index 55c97e75..e6d15455 100644
--- a/archivebox/search/utils.py
+++ b/archivebox/search/utils.py
@@ -34,10 +34,11 @@ def get_indexable_content(results: QuerySet):
return []
# This should come from a plugin interface
+ # TODO: banish this duplication and get these from the extractor file
if method == 'readability':
return get_file_result_content(res, 'content.txt')
elif method == 'singlefile':
- return get_file_result_content(res, '')
+ return get_file_result_content(res, '', use_pwd=True)
elif method == 'dom':
return get_file_result_content(res,'',use_pwd=True)
elif method == 'wget':
diff --git a/archivebox/themes/admin/actions_as_select.html b/archivebox/templates/admin/actions_as_select.html
similarity index 100%
rename from archivebox/themes/admin/actions_as_select.html
rename to archivebox/templates/admin/actions_as_select.html
diff --git a/archivebox/themes/admin/app_index.html b/archivebox/templates/admin/app_index.html
similarity index 100%
rename from archivebox/themes/admin/app_index.html
rename to archivebox/templates/admin/app_index.html
diff --git a/archivebox/themes/admin/base.html b/archivebox/templates/admin/base.html
similarity index 100%
rename from archivebox/themes/admin/base.html
rename to archivebox/templates/admin/base.html
diff --git a/archivebox/themes/admin/login.html b/archivebox/templates/admin/login.html
similarity index 100%
rename from archivebox/themes/admin/login.html
rename to archivebox/templates/admin/login.html
diff --git a/archivebox/templates/admin/private_index.html b/archivebox/templates/admin/private_index.html
new file mode 100644
index 00000000..7afb62c3
--- /dev/null
+++ b/archivebox/templates/admin/private_index.html
@@ -0,0 +1,150 @@
+{% extends "base.html" %}
+{% load static %}
+
+{% block body %}
+<table>
+    <thead>
+        <tr>
+            <th>Bookmarked</th>
+            <th>Snapshot ({{object_list|length}})</th>
+            <th>Files</th>
+            <th>Original URL</th>
+        </tr>
+    </thead>
+    <tbody>
+        {% for link in object_list %}
+            {% include 'main_index_row.html' with link=link %}
+        {% endfor %}
+    </tbody>
+</table>
+
+<div class="pagination">
+    <span class="step-links">
+        {% if page_obj.has_previous %}
+            <a href="?page=1">&laquo; first</a>
+            <a href="?page={{ page_obj.previous_page_number }}">previous</a>
+        {% endif %}
+
+        <span class="current">
+            Page {{ page_obj.number }} of {{ page_obj.paginator.num_pages }}.
+        </span>
+
+        {% if page_obj.has_next %}
+            <a href="?page={{ page_obj.next_page_number }}">next</a>
+            <a href="?page={{ page_obj.paginator.num_pages }}">last &raquo;</a>
+        {% endif %}
+    </span>
+</div>
+{% endblock %}
diff --git a/archivebox/themes/default/main_index_minimal.html b/archivebox/templates/core/minimal_index.html
similarity index 90%
rename from archivebox/themes/default/main_index_minimal.html
rename to archivebox/templates/core/minimal_index.html
index dcfaa23f..3c69a831 100644
--- a/archivebox/themes/default/main_index_minimal.html
+++ b/archivebox/templates/core/minimal_index.html
@@ -16,9 +16,9 @@
{% for link in links %}
- {% include "main_index_row.html" with link=link %}
+ {% include "index_row.html" with link=link %}
{% endfor %}