Wget and cURL Downloaders


We have updated the script to fix an issue for users of certain versions of Mac OS X who were having trouble downloading from our new download server. Files posted to the new server have hashed, unpredictable URLs, so the updated script will not request authentication for files posted to this download server.


While the Discovery Genomics website works well for downloading small files, a web browser is not ideal for downloading very large files or large numbers of files. Therefore, we sometimes provide users with lists of download URLs to use with one of our downloader scripts. The file of download URLs is usually named "files.txt" and is posted under Results/Raw Data (an example of its format is shown below). To use one of the scripts below, click on the appropriate script and save it to the folder that will contain your downloaded data. Then save the files.txt file to the same folder. In that folder, from the command line, type either:



bash hadiscovery_gsl_wget_download.sh <URL_list_file.txt>


OR

bash hadiscovery_gsl_curl_download.sh <URL_list_file.txt>
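
The files.txt file contains one download URL per line. For example (the file names here are hypothetical and shown only to illustrate the format):

https://hadiscoverydata.dls.com/a1b2c3d4e5/sample1_R1.fastq.gz
https://hadiscoverydata.dls.com/a1b2c3d4e5/sample1_R2.fastq.gz
https://hadiscoverydata.dls.com/a1b2c3d4e5/sample2_R1.fastq.gz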



These scripts will prompt the user for a Discovery Genomics username and password (the same ones used to log in to the website) before downloading.
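
For reference, here is a minimal sketch of what such a script might do, assuming the server accepts HTTP Basic authentication; the script name, variable names, and authentication details are illustrative, and the actual hadiscovery scripts may differ:

#!/usr/bin/env bash
# Minimal sketch of an authenticated batch download. Assumes HTTP Basic
# authentication; the actual hadiscovery scripts may work differently.
URL_LIST="${1:?usage: bash download.sh <URL_list_file.txt>}"

read -p "Discovery Genomics username: " DG_USER
read -s -p "Discovery Genomics password: " DG_PASS
echo

# Fetch each URL in the list, skipping blank lines.
while IFS= read -r url; do
    [ -n "$url" ] || continue
    wget --user="$DG_USER" --password="$DG_PASS" "$url"
done < "$URL_LIST"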


Requirements: Both scripts require the Bash Unix shell to run. The wget version requires GNU Wget (standard on most Linux distributions). The curl version requires cURL (standard on Mac OS X).
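
To confirm that the required tools are installed, you can check their versions from the command line:

bash --version
wget --version
curl --version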

Download wget version (Linux)
Download curl version (Mac OS X)


If your downloads are hosted at hadiscoverydata.dls.com, you can also download your data with wget using the following command structure:


wget -i <URL_list_file.txt>
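
If a large transfer is interrupted, GNU Wget can resume partially downloaded files with the -c (continue) option:

wget -c -i <URL_list_file.txt>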


For Windows users, a GNU wget command-line client is also available.
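
Once installed, it accepts the same options; for example, from the Windows Command Prompt (assuming wget.exe is on your PATH and files.txt is in the current folder):

wget.exe -i files.txt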