How to Mount Cloud Storage on a VM (Google Drive, GCS, S3)

By Slawomir Strumecki, October 22, 2025

When working with virtual machines (VMs) in the cloud, you often need to store and access data. Cloud providers offer various storage solutions, such as block storage and object storage, each with its own advantages and use cases. If you're new to renting cloud VMs, check out our guide on how to rent a GPU-enabled machine for AI development.

Object Storage

Object storage saves files (called objects) inside containers (called buckets). Each object gets a unique ID and its own web link, so you can open or download it with a browser or simple command-line tools. It can also store extra details (metadata) about each file.
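
For example, a publicly readable object can be downloaded with nothing more than curl. The sketch below assumes a Google Cloud Storage bucket with public access; the bucket and object names are placeholders:

curl -O https://storage.googleapis.com/my-bucket/reports/2025-09.csv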

It scales easily because objects live in a single flat namespace rather than a hierarchical file system. You don't need to run your own server to use it, and many users or apps can read and upload files at the same time.

However, it isn't ideal for data that changes constantly. You can't edit just part of a file — you have to upload the whole file again — so it's a poor fit for databases or heavy transaction workloads. Access goes over HTTP, which is usually slower than a disk directly attached to your VM. As a rule of thumb, use object storage for files you read often but update only occasionally.

In this guide I show how to mount object storage on a Linux VM using three tools:

  • rclone for Google Drive
  • gcsfuse for Google Cloud Storage
  • s3fs for AWS S3

The guide covers installation, authentication for headless servers, mount/unmount commands, caching/performance flags, and basic troubleshooting. Examples are copy-paste ready for Ubuntu/Debian VMs. I used CloudRift, but everything should work on any cloud provider or on a locally installed Linux distribution supported by these tools.


Google Drive

Google Drive can be mounted using rclone. rclone is a cross-platform command-line tool that supports many cloud storage providers and can mount a remote as a local folder via FUSE. It offers a VFS cache to improve app compatibility; use --vfs-cache-mode full for apps that edit files in place, while writes is sufficient for simple transfers. You can tune retries and bandwidth limits with flags, and add client-side encryption by wrapping the remote with rclone's crypt backend.

Install and Configure

Install rclone in your VM instance and configure it:

sudo apt install -y rclone
rclone config # select "Google Drive" (the menu number varies by rclone version; it was 18 here)

When asked about auto config, select no and run the printed command on your local machine to obtain an authorization code for your Google Drive.

Use auto config?
 * Say Y if not sure
 * Say N if you are working on a remote or headless machine
y) Yes (default)
n) No
y/n> n
Option config_token.
For this to work, you will need rclone available on a machine that has
a web browser available.
For more help and alternate methods see: https://rclone.org/remote_setup/
Execute the following on the machine with the web browser (same rclone
version recommended):
    rclone authorize "drive" "eyJzY29wZSI6ImRyaXZlIn0"
Then paste the result.
Enter a value.
config_token>

Running rclone authorize should open your browser, and after approval you will get a code:

rclone authorize "drive" "eyJzY29wZSI6ImRyaXZlIn0"
2025/09/16 20:05:48 NOTICE: Config file "/Users/slawomirstrumecki/.config/rclone/rclone.conf" not found - using defaults
2025/09/16 20:05:48 NOTICE: Make sure your Redirect URL is set to "http://127.0.0.1:53682/" in your custom config.
2025/09/16 20:05:48 NOTICE: If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth?state=igbvOOugWIeDf8KcR0v8WQ
2025/09/16 20:05:48 NOTICE: Log in and authorize rclone for access
2025/09/16 20:05:48 NOTICE: Waiting for code...
2025/09/16 20:05:57 NOTICE: Got code
Paste the following into your remote machine --->
YOUR AUTHORIZATION SECRET
<---End paste

Mount Google Drive

Now you can mount your Google Drive:

mkdir ~/gdrive
# gdrive is the remote name chosen during rclone config
# --daemon runs the mount in the background
rclone mount --daemon gdrive: ~/gdrive --vfs-cache-mode writes

To unmount:

fusermount -u ~/gdrive || umount ~/gdrive

Optional Performance Tuning

For apps that edit files in place:

rclone mount --daemon gdrive: ~/gdrive \
  --vfs-cache-mode full \
  --dir-cache-time 1000h --poll-interval 15s --buffer-size 64M

Once created, the config can be reused on other machines. To find where the config file is, use rclone config file (e.g. ~/.config/rclone/rclone.conf). Keep your config file private because it contains access tokens.
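
For example, a minimal way to reuse an existing config on a fresh VM (the host name new-vm below is a placeholder, and this assumes you have SSH access to it):

# show where the active config lives
rclone config file
# copy it to the new machine
ssh user@new-vm 'mkdir -p ~/.config/rclone'
scp ~/.config/rclone/rclone.conf user@new-vm:~/.config/rclone/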


Google Cloud Storage

Cloud Storage FUSE (gcsfuse) is an open source product supported by Google. Cloud Storage FUSE uses FUSE and Cloud Storage APIs to transparently expose buckets as locally mounted folders on your file system. It is a common choice for developers looking to store and access ML training and model data as objects in Cloud Storage.

Install gcsfuse

Install gcsfuse (see the full installation manual for details):

sudo apt-get install -y curl lsb-release

# add the gcsfuse package repository
export GCSFUSE_REPO=gcsfuse-`lsb_release -c -s`
echo "deb [signed-by=/usr/share/keyrings/cloud.google.asc] https://packages.cloud.google.com/apt $GCSFUSE_REPO main" | sudo tee /etc/apt/sources.list.d/gcsfuse.list

# import Google Cloud public key
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo tee /usr/share/keyrings/cloud.google.asc

# install gcsfuse
sudo apt update
sudo apt install -y gcsfuse

Install gcloud CLI following the official manual.
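
As a rough sketch, on Ubuntu/Debian the gcloud CLI can be installed from Google's apt repository, reusing the key imported above (package and repository names follow Google's current documentation; check the manual if they have changed):

echo "deb [signed-by=/usr/share/keyrings/cloud.google.asc] https://packages.cloud.google.com/apt cloud-sdk main" | sudo tee /etc/apt/sources.list.d/google-cloud-sdk.list
sudo apt update
sudo apt install -y google-cloud-cli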

Authenticate

Initialize gcloud and set up credentials for gcloud and gcsfuse:

gcloud init
gcloud auth login --no-launch-browser
gcloud auth application-default login

For non-interactive/production use, prefer a service account:

# point to your service account JSON key
export GOOGLE_APPLICATION_CREDENTIALS=/path/to/key.json

Mount GCS Bucket

Create a folder and mount a bucket into it:

mkdir ~/gcbucket
# --implicit-dirs lets you see "virtual" directories created by object prefixes
gcsfuse --implicit-dirs my-bucket-name ~/gcbucket
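
If you authenticated with a service account key instead of gcloud, the key can also be passed directly at mount time via gcsfuse's --key-file flag (check gcsfuse --help in your version; the path below is a placeholder):

gcsfuse --key-file /path/to/key.json --implicit-dirs my-bucket-name ~/gcbucket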

To unmount the storage bucket:

fusermount -u ~/gcbucket
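
To mount the bucket automatically at boot, gcsfuse also supports an /etc/fstab entry, analogous to the s3fs entry shown later. A sketch, assuming the gcsfuse package installed its mount helper and that allow_other is enabled in /etc/fuse.conf:

my-bucket-name /mnt/gcbucket gcsfuse rw,_netdev,allow_other,implicit_dirs 0 0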

AWS S3 Bucket

An S3 bucket can be mounted using rclone or s3fs. We already used rclone to mount Google Drive, so let's use s3fs this time.

Install s3fs

sudo apt install -y s3fs

AWS IAM Configuration

Get an access key from the AWS Management Console (IAM → Users). Ideally, create a separate user whose identity will be used only to access this bucket.

Attach the following permissions policy to that user in the AWS console:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket",
        "s3:ListBucketMultipartUploads"
      ],
      "Resource": "arn:aws:s3:::my-bucket"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:DeleteObject",
        "s3:AbortMultipartUpload",
        "s3:PutObjectAcl",
        "s3:GetObjectAcl"
      ],
      "Resource": "arn:aws:s3:::my-bucket/*"
    }
  ]
}

Note: If you mount with -o no_acl, you can omit s3:PutObjectAcl and s3:GetObjectAcl.

Store Credentials

Store credentials in ~/.aws/credentials:

[default]
aws_access_key_id = AKIA...
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCY...
region = us-east-1

Or in ~/.passwd-s3fs:

echo ACCESS_KEY:SECRET_KEY > ~/.passwd-s3fs
chmod 600 ~/.passwd-s3fs

Mount S3 Bucket

# In VM
mkdir ~/s3drive

# when ~/.aws/credentials is used
s3fs my-bucket ~/s3drive

# or when ~/.passwd-s3fs is used
s3fs my-bucket ~/s3drive -o passwd_file=~/.passwd-s3fs,url=https://s3.amazonaws.com

# on EC2 with an attached IAM role (avoids storing keys)
s3fs my-bucket ~/s3drive -o iam_role=auto

If you use allow_other (so other users can read the mount), enable it once:

echo user_allow_other | sudo tee -a /etc/fuse.conf

The bucket can be mounted on boot after adding an entry to /etc/fstab:

my-bucket /path/to/mountpoint fuse.s3fs _netdev,allow_other 0 0
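
Boot-time mounts run as root, which by default looks for a system-wide credential file rather than ~/.passwd-s3fs. A sketch, using the same ACCESS_KEY:SECRET_KEY format as above:

sudo sh -c 'echo ACCESS_KEY:SECRET_KEY > /etc/passwd-s3fs'
sudo chmod 640 /etc/passwd-s3fs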

Troubleshooting

In case of problems it's useful to run s3fs in the foreground and display debug logs:

s3fs my-bucket ~/s3drive -o dbglevel=debug -f -o curldbg

Unmount S3 Bucket

Unmount the S3 bucket with the standard umount command:

umount ~/s3drive

Conclusion

  • Pick the native tool for each provider: Google Drive → rclone, Google Cloud Storage → gcsfuse, AWS S3 → s3fs (or rclone if you need cross-cloud parity).
  • Prefer short-lived identities (service accounts/IAM roles) over static keys. Scope permissions to the minimum required and rotate regularly.
  • Expect higher latency than local disks. Use caching (rclone VFS, gcsfuse flags) and avoid databases or heavy random writes on FUSE mounts.
  • Automate mounts (systemd/fstab), add health checks, and log failures so you can detect when a remount is needed (see the sketch after this list).
  • For large or performance-critical workflows, consider syncing or using SDKs directly instead of mounting.
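
A minimal health-check sketch for the s3fs mount used earlier; run it from cron or a systemd timer. The bucket name and mount point are the same placeholders as above:

#!/usr/bin/env bash
# Remount ~/s3drive if the FUSE mount has disappeared (e.g. after a transport error).
MOUNTPOINT="$HOME/s3drive"
if ! mountpoint -q "$MOUNTPOINT"; then
    logger -t s3fs-healthcheck "mount missing, remounting $MOUNTPOINT"
    s3fs my-bucket "$MOUNTPOINT" -o passwd_file="$HOME/.passwd-s3fs"
fi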

Whether you're running ML workloads on an RTX 4090 or managing data pipelines on an RTX 5090, understanding how to efficiently mount and manage cloud storage on your VMs is essential for optimal performance.

