 Rclone is a command-line program that supports file transfers and syncing of files between local storage and Google Drive, as well as a number of other storage services, including Dropbox and Swift/S3-based services. Rclone offers options to optimize a transfer and reach higher transfer speeds than other common transfer tools such as scp and rsync. It is written in Go and is free software released under the MIT license.

Installation of Rclone

If you wish to use rclone to transfer files to or from CHPC file systems, you can use the CHPC installation of rclone. There is a module to set the proper environment to use the tool. To use it, first run

module load rclone

If you wish to transfer files between file systems where neither the source nor the destination is on CHPC storage, for example when you want to use rclone to move files from your local desktop to Google Drive, you will first need to download and install rclone on the source or destination device.
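Whichever route you take, it is worth confirming rclone is actually on your PATH before configuring anything. A minimal check:

```shell
# Confirm rclone is available before configuring anything. On CHPC systems,
# `module load rclone` puts it on PATH; elsewhere, install it first.
if command -v rclone >/dev/null 2>&1; then
  rclone version
else
  echo "rclone not found: run 'module load rclone' or install it locally"
fi
```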

Configuration of Rclone

The next step is to configure rclone for the transfer partner. Below we give two examples: one for Google Drive and a second for the CHPC archive storage solution, pando. When on CHPC resources, you must do the configuration while in your home directory.

Example 1: Configuration for transferring files to/from the University of Utah's Google Drive storage.

There is a short training video that covers this process.

A few notes about this storage:

  • Any University faculty, student, or staff member can activate and access their official U of U Google Drive by visiting the above link and logging in.
  • Through a partnership between the University of Utah and Google for Education, U faculty, staff, and students have access to unlimited storage (up from 15 GB) in Google Drive.
  • Google Drive is not suitable for storing sensitive data, including personal information. For additional security information, consult the Security Section of the above link.

Update -- 10 October 2017: We have been informed that there are now daily limits on the amount of data a user can transfer to/from Google Drive: an upload limit of 750 GB/day and a download limit between 9 and 10 TB/day.

To configure (remember to first load the rclone module):

rclone config

This command will create a .rclone.conf file containing the configuration (this is why you should run it from your home directory). You will be asked a few questions:

    1. Choose ‘New remote’.
    2. Select Google Drive (“drive”).
    3. Enter a name for the Google Drive rclone handle. This will be typed out whenever you want to access the Drive, so make it short.
    4. Leave the Client ID and Client Secret fields blank; just press enter.
    5. Choose ‘No’ for auto-config.
    6. A URL will be printed. Open it on any device with a web browser. If a Google Account is already signed in, make sure it is the one you wish to use with rclone.
    7. Once logged in as the correct account, allow rclone access to the account.
    8. Copy the provided code back to the rclone prompt.
    9. Choose ‘Yes’ to finish the configuration, and then quit the config tool.
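After the steps above, rclone writes the new remote into its config file. A Drive entry looks roughly like the following sketch; the section name is whatever handle you chose in step 3, the token is filled in automatically by the authorization step, and the exact field names may differ between rclone versions:

```
[mydrive]
type = drive
client_id =
client_secret =
token = {"access_token":"...","token_type":"Bearer","expiry":"..."}
```

Treat the token like a password: anyone with read access to this file can access your Drive.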

You can now access your Google Drive via rclone.

Example 2: Configuration for transferring files to/from CHPC pando archive storage

There is a short training video that covers this process.

Rclone uses the Ceph gateway box to interact with the archive storage. Please note that groups must purchase space on the Ceph archive storage in order to make use of this space. For additional information about the archive storage, please see its description on the CHPC storage services page.

As before, you will have to first load the rclone module and then also configure rclone with the following selections:

  1. Add the Ceph configuration: rclone config
  2. Make New Remote: choose the name you wish to use.
  3. Type of Source: choose s3.
  4. Access Key: paste the s3 access key from the file in your home directory.
  5. Secret Key: paste the s3 secret key from the file in your home directory.
  6. Region: choose other-v4-signature (this includes Ceph).
  7. Endpoint: enter
  8. Location Constraints: none.
Test whether the configuration works by running rclone lsd {name}: (make sure to include the trailing colon). You will see a list of all the ‘buckets’ associated with the gateway. If you do not yet have any buckets, at least make sure there are no errors in the output. To create a bucket, run: rclone mkdir {name}:{bucket}

Rclone Usage

In this section some common rclone usage cases are presented. In the following examples the remote name mydrive is used; you would need to use the name you chose when doing your configuration. Note the trailing colon: it indicates to rclone that “mydrive” is a remote storage system, rather than a file or directory called “mydrive” in your current working directory. At any point, you may verify that these changes were successful by viewing your Drive from within a web browser. There is a short training video that covers the information presented below.

  • List all files in your Drive:  rclone ls mydrive:
  • List top-level folders in your Drive:  rclone lsd mydrive:
  • Make a new folder within your Drive called “my-rc-folder”: rclone mkdir mydrive:my-rc-folder
  • Create a bucket on pando: rclone mkdir {name}:{bucket}
  • Copy a file between two sources: rclone copy SOURCE DESTINATION
    • Example:  To copy a file called “rclone-test.txt” from your local machine home directory to your Drive, or a subdirectory within it: 

      • rclone copy ~/rclone-test.txt mydrive:
      • rclone copy ~/rclone-test.txt mydrive:my-rc-folder
    •  You can also transfer files directly between your Drive and another remote storage system, such as an object storage service for which you have configured rclone:
      • rclone copy mydrive:rclone-test.txt myobjectstorage:some-bucket
  • Synchronizing directories is done with the sync option.   rclone sync SOURCE/ DESTINATION/ [--drive-use-trash]
    • Rclone can synchronize an entire Drive folder with the destination directory. This is a full synchronization, so files at the destination prior to the sync will be overwritten or deleted. Double check the destination and its contents, and be mindful if the directory is already being synchronized by other services.
    • Example: Make a folder called “backup” on Google Drive, then sync a directory from the local machine to the new folder:
      • rclone mkdir mydrive:backup
      • rclone sync ~/local-folder mydrive:backup
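Because a full sync can delete files at the destination, it is worth previewing it first. rclone's --dry-run flag reports what a sync would copy or delete without changing anything. A minimal sketch, with the remote name and paths as placeholders:

```shell
# "mydrive" and ~/local-folder are placeholders; substitute your own
# remote name and source directory.
CMD="rclone sync ~/local-folder mydrive:backup"

# Preview first: --dry-run prints the planned copies and deletions
# without touching either side.
echo "$CMD --dry-run"
# After inspecting the preview, run the real sync:
echo "$CMD"
```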

Rclone Options

While the full list of options can be found in the official MANUAL file in the Rclone GitHub repo (or via ‘man rclone’ if rclone is installed), some important options are:

  • --config=FILE   (default FILE=.rclone.conf)

Specifies the rclone configuration file to use. Only necessary if the desired config file is not “~/.rclone.conf” (the default name and location of the config file).

  •   --transfers=N (default N=4)

Number of file transfers to be run in parallel. Increasing this may increase the overall speed of a large transfer, as long as the network and remote storage system can handle it (bandwidth and memory).

  •  --drive-chunk-size=SIZE   (default SIZE=8192)

The chunk size for a transfer in kilobytes; must be a power of 2 and at least 256. Each chunk is buffered in memory prior to the transfer, so increasing this increases how much memory is used.

  •  --drive-use-trash

Sends files to Google Drive’s trash instead of deleting (prior to a directory sync for instance). Note that this is not a default option, because the Trash is not accessible through Rclone and must be managed through a web browser.

  • --drive-formats (docx, pdf, txt, etc.)

Sets the format used when exporting files. For example, the option ‘--drive-formats pdf’ will automatically convert the chosen file(s) to PDF format. 
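The options above can be combined on a single command line. The sketch below checks that a chunk size satisfies the power-of-2/at-least-256 rule and then assembles a transfer with raised parallelism; the remote name, paths, and chosen values are placeholders, not tuned recommendations:

```shell
# --drive-chunk-size must be a power of 2 and at least 256 (KB); a quick
# sanity check before using a nonstandard value:
is_valid_chunk() {
  n=$1
  [ "$n" -ge 256 ] && [ $(( n & (n - 1) )) -eq 0 ]
}

CHUNK=32768     # 32 MB; an example value, not a tuned recommendation
is_valid_chunk "$CHUNK" || { echo "invalid chunk size"; exit 1; }

# Assemble a transfer that raises parallelism and chunk size above the
# defaults (4 transfers, 8192 KB chunks). "mydrive" and ~/data are
# placeholders for your remote name and source directory.
CMD="rclone copy --transfers=16 --drive-chunk-size=$CHUNK ~/data mydrive:backup"
echo "$CMD"     # review the assembled command, then run it
```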

Additional Important Considerations

  • Google limits transfers to about 2 files per second. This may cause uploads of many small files to be much slower than the available bandwidth would suggest. However, rclone will not stop the transfer and will continue to retry files that were blocked by Google’s rate limit. Consider compressing small files into a single larger file if this becomes a problem.
  • The campus firewall may impede larger transfers. The University has a Science DMZ network with Data Transfer Nodes (DTNs), which can be used to safely and conveniently facilitate larger transfers without the firewall’s limitations. Additional information on data transfer services can be found on our data transfer services page.
  • Transfer rate may vary heavily. A number of factors, including the current state of Google’s resources as well as University resources, determine the rate of transfer. Results may vary over minutes, hours, or days. If there is a consistent problem, check whether the machines involved in the transfer are running into network/disk/memory bottlenecks.
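As a workaround for the small-file rate limit, a directory of many small files can be packed into one archive before uploading, so rclone moves a single large file instead of thousands of small ones. A sketch (the directory, archive name, and remote are placeholders):

```shell
# Create a sample directory of small files (stand-in for your real data).
mkdir -p smallfiles
for i in 1 2 3; do echo "data $i" > "smallfiles/file$i.txt"; done

# Pack it into a single compressed archive.
tar czf smallfiles.tar.gz smallfiles

# List the archive contents to confirm everything was captured.
tar tzf smallfiles.tar.gz
# Then upload the one archive, e.g.: rclone copy smallfiles.tar.gz mydrive:
```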

Exploring the Effects of Options on Performance

In order to explore the effects of rclone options on data transfer performance, we completed multiple transfers of the contents of a directory on a data transfer node (DTN) to a folder on Google Drive. This directory contained 16 files of 1.7 GB each.

The data transfer node has a 40 Gb/s connection on the University of Utah Science DMZ. To explore performance, we completed runs with the number of parallel transfers set to 4 and to 16, and with the chunk size set to 8 MB, 16 MB, and 32 MB. The command was run ten times for each combination of options. A base transfer using a single transfer stream and a chunk size of 8 MB is included as a reference point. The chart below displays the achieved transfer rates for the different scenarios.


As the chart shows, increasing both the number of parallel transfers and the chunk size improves the transfer rate over using the default sizes.

Last Updated: 2/16/18