To download a batch of files, create a plain text document listing the URLs, one per line, and call it, for instance, list.txt. Save list.txt somewhere in a directory, open a shell there and type:
wget --continue --tries=inf --input-file=list.txt
where:

--continue (or -c for short) will resume the download in case it previously failed,
--tries=inf (or -t inf for short) will retry a failing download indefinitely, which helps with spurious disconnects,
--input-file=list.txt (or -i list.txt for short) specifies that the URLs should be read from the file list.txt.

wget will then download all the files to the directory where you issued the command.
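For example, assuming a hypothetical list.txt containing the two placeholder URLs below, the following variation also caps the bandwidth and picks a destination directory (the rate and the downloads folder are illustrative choices, not part of the original command):

https://example.com/file1.iso
https://example.com/file2.iso

wget --continue --tries=inf --input-file=list.txt --limit-rate=500k --directory-prefix=downloads

Here --limit-rate=500k throttles the transfer to roughly 500 KB/s, and --directory-prefix=downloads (or -P downloads for short) saves the files into downloads/ instead of the current directory.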
When wget retrieves files from a long URL, it treats the URL's path components as directories and recreates them locally. Pass -np (short for --no-parent) as a parameter to wget to make it ignore the parent directory, so that a recursive download never ascends above the starting path.
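As a sketch, assuming a hypothetical documentation tree at https://example.com/docs/ (the address is a placeholder), the following fetches that subtree recursively without climbing up into https://example.com/:

wget --recursive --no-parent https://example.com/docs/

Add --no-directories (-nd) as well if the files should land flat in the current folder instead of mirroring the remote directory layout.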
The following command:
wget --page-requisites --convert-links --span-hosts --no-directories https://SITE.TLD/
where:
SITE.TLD is a website address,

will download the page, together with any resources it references, to the current folder, converting all the links such that the page will be browseable offline without an Internet connection. To fetch more than the single given page, recursion has to be enabled explicitly.
Note that URLs within JavaScript code or within contexts that do not represent an HTML link will remain unchanged.
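To capture an entire site rather than a single page, a common sketch (the address is again a placeholder) adds recursion to the options above:

wget --mirror --page-requisites --convert-links --adjust-extension --no-parent https://SITE.TLD/

--mirror turns on infinite-depth recursion with timestamping, and --adjust-extension saves HTML pages with an .html suffix so they open cleanly in a local browser.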