To download a batch of files, create a plain text file listing one URL per line and name it, for instance, list.txt. Save list.txt in a directory, open a shell there, and type:
wget --continue --tries=inf --input-file=list.txt
where:

- --continue (or -c for short) resumes a download that was interrupted,
- --tries=inf (or -t inf for short) retries a failing download indefinitely, which helps with spurious disconnects,
- --input-file=list.txt (or -i list.txt for short) reads the URLs from the file list.txt.

wget will then download all the files to the directory where you issued the command.
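Putting the steps together, a minimal sketch might look like this. The example.com URLs are placeholders; the wget command is prefixed with echo so the sketch can be run without starting an actual download:

```shell
#!/bin/sh
# Build list.txt with one URL per line (placeholder URLs).
cat > list.txt <<'EOF'
https://example.com/a.iso
https://example.com/b.iso
EOF

# -c resumes interrupted downloads, -t inf retries forever,
# -i reads the URLs from list.txt.
# Remove the leading "echo" to actually start downloading.
echo wget --continue --tries=inf --input-file=list.txt
```

The files land in whatever directory you run the command from, so cd there first.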
When wget retrieves files recursively from a long URL, it mirrors the URL's directory components as local directories and follows links upward as well as downward. Pass -np (short for --no-parent) to wget so that it never ascends to the parent directory, restricting the download to the given directory and below.
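As a sketch of that option in use, with a hypothetical example.com URL and the wget command again wrapped in echo so nothing is fetched when run as-is:

```shell
#!/bin/sh
# Recursively fetch only what lives under /pub/docs/, never its parent.
# Hypothetical URL; drop the echo (run "$cmd") to actually download.
cmd="wget --recursive --no-parent https://example.com/pub/docs/"
echo "$cmd"
```

Without --no-parent, a recursive run can wander up into /pub/ and pull far more than intended.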