Transferring local files to a cloud server is a common requirement for development, operations, and day-to-day management. Whether it's deploying code, backing up data, or sharing resources, choosing the right approach can dramatically increase efficiency. This document describes various file transfer methods, covering different operating systems (Windows, Linux, and macOS) and scenario requirements, and provides operation examples and precautions.
The most fundamental approach is file transfer over the SSH protocol. Secure Shell (SSH) is the mainstream protocol for connecting to cloud servers and supports encrypted transmission. The following are common SSH-based tools:
Secure Copy (SCP) applies to scenarios where a single file or a small batch of files needs to be transferred quickly. It is available out of the box on Linux and macOS; on Windows, install OpenSSH or a third-party tool such as WinSCP. To copy a file from the local machine to the remote server:
scp /local/file/path username@server_ip:/remote/directory/path
Copy from the remote server to the local machine:
scp username@server_ip:/remote/file/path /local/directory/path
Example:
Upload the local app.tar.gz file to the /home/user directory on the server
scp ~/app.tar.gz user@123.45.67.89:/home/user/
Note that the server's SSH port (default 22) must be open. Large files may transfer slowly; compressing them first (for example, into a .tar.gz archive) is recommended.
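The compression advice above can be sketched as follows; the /tmp paths and the demo directory are hypothetical stand-ins for a real project:

```shell
# Bundle a directory into a single .tar.gz archive before uploading
# (hypothetical /tmp paths used so the sketch is self-contained).
mkdir -p /tmp/demo_app
echo "hello" > /tmp/demo_app/index.html
tar -C /tmp -zcf /tmp/app.tar.gz demo_app
# One archive moves much faster over scp than many small files:
# scp /tmp/app.tar.gz user@123.45.67.89:/home/user/
```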
SSH File Transfer Protocol (SFTP) suits interactive file transfer, directory browsing, and batch uploads. Supported tools include the sftp command on Linux and macOS, and WinSCP or FileZilla on Windows; graphical clients include WinSCP (Windows) and Cyberduck (macOS). First connect to the server:
sftp user@123.45.67.89
Upload file:
put /local/file/path /remote/directory/path
Download file:
get /remote/file/path /local/directory/path
Exit:
exit
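Beyond the interactive session above, sftp also accepts a batch file via its -b option, which is useful for scripted transfers; the paths and host below are hypothetical:

```shell
# Write an SFTP command list to a batch file (hypothetical paths).
cat > /tmp/sftp_batch.txt <<'EOF'
put /local/app.tar.gz /home/user/
get /home/user/backup.sql /local/backups/
exit
EOF
# Run the session unattended (key-based authentication recommended):
# sftp -b /tmp/sftp_batch.txt user@123.45.67.89
```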
Graphical tool (WinSCP):
1. Enter the server IP address, user name, and password.
2. Drag and drop the file to the destination directory to complete the transfer.
The efficient synchronization tool rsync is suitable for incremental synchronization of large files or frequently updated directories. Its advantage is that it transmits only the parts that have changed, saving bandwidth and time. Usage:
Local to remote (rsync required):
rsync -avz --progress /local/directory/ user@server_ip:/remote/directory/
Remote to local:
rsync -avz --progress user@server_ip:/remote/directory/ /local/directory/
The parameters are as follows: -a: archive mode (preserves permissions and timestamps). -v: verbose output. -z: compress during transfer. --progress: display the transfer progress.
rsync must be installed on the server (Ubuntu/Debian: sudo apt install rsync). The first synchronization may be slow, but subsequent incremental runs are extremely fast.
Mounting a remote directory with SMB or NFS maps a cloud server directory to the local machine, which is convenient for frequent read and write operations.
SMB/CIFS works across Windows and Linux. Server-side configuration (Linux): install Samba and configure the shared directory (see the Samba documentation). Client mounting (Windows):
Open File Explorer, enter \\server_ip\share_name, and enter credentials to access the share.
Client mounting (Linux/macOS) :
sudo mount -t cifs //server_ip/share_name /local/mount/point -o username=your_user,password=your_password
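Passing the password on the command line exposes it in shell history and the process list; a credentials file avoids that. This is a sketch with hypothetical values:

```shell
# Store SMB credentials in a root-only file (hypothetical values):
# sudo sh -c 'printf "username=your_user\npassword=your_password\n" > /root/.smbcred'
# sudo chmod 600 /root/.smbcred
# Then mount with the credentials option instead of inline username/password:
# sudo mount -t cifs //server_ip/share_name /mnt/share -o credentials=/root/.smbcred
```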
NFS (Linux/Unix) server-side configuration: install the NFS service (Ubuntu):
sudo apt install nfs-kernel-server
Edit /etc/exports to add a shared directory
/data client_ip(rw,sync,no_subtree_check)
Restart the service:
sudo systemctl restart nfs-kernel-server
Client mounting:
sudo mount -t nfs server_ip:/remote/directory /local/mount/point
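To have the NFS mount survive reboots, an entry can be added to /etc/fstab; the addresses and mount point below are hypothetical:

```shell
# /etc/fstab line for a persistent NFS mount (_netdev waits for the network):
# server_ip:/data   /mnt/data   nfs   defaults,_netdev   0   0
```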
Relaying through object storage or FTP is applicable when files must be transferred across regions or shared with multiple people.
With object storage, the steps are:
1. Upload the file to the cloud storage bucket (through the console or a CLI tool).
2. On the cloud server, download the file using the SDK or CLI.
With FTP, set up the service on the server (Linux) by installing vsftpd or proftpd and configuring user permissions and directories. The client then uses a tool such as FileZilla to connect to the FTP server and transfer files.
Automated transfers with scripts and DevOps tools suit scenarios requiring scheduled backups or integration into CI/CD pipelines.
Use curl or wget to download files from the server (an HTTP service is required):
wget http://server_ip/path/to/file
Upload files to an HTTP endpoint (requires a server-side API):
curl -X POST -F "file=@localfile.txt" http://server_ip/upload
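Whichever transfer path is used, comparing checksums on both ends confirms the file arrived intact; a minimal local sketch with a hypothetical file:

```shell
# Record a SHA-256 checksum before transfer, verify after (hypothetical file).
echo "payload" > /tmp/transfer_demo.bin
sha256sum /tmp/transfer_demo.bin > /tmp/transfer_demo.sha256
# After copying both files to the other side, run the same check there:
sha256sum -c /tmp/transfer_demo.sha256   # reports OK when the file is intact
```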
To sum up, select the most appropriate transfer method according to actual needs:
Quick and easy: SCP/SFTP (suitable for single files).
Efficient synchronization: Rsync (suitable for incremental updates).
Long-term mount: SMB/NFS (suitable for frequent read/write).
Automation: Ansible/scripts (suitable for batch operations).
After mastering these methods, development deployment and data migration alike become easy to handle. Always put security first, and configure permissions and encryption policies properly!
Precautions and optimization suggestions
1. Security:
Avoid plaintext transmission (such as FTP); prefer SSH or HTTPS.
Use SSH keys instead of passwords for authentication.
2. Large file transfer optimization:
Compress files first, for example tar -zcvf file.tar.gz directory.
Use a tool that supports resuming interrupted transfers, such as rsync or lftp.
3. Permission issues:
Ensure that the target directory is writable (for example, chmod 755 /target/directory).
Check whether SELinux/AppArmor blocks access.
4. Network problems:
If the transfer fails, check whether the firewall permits the relevant ports (such as 22, 21, and 445).
Use ping or traceroute to diagnose network latency or packet loss.
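The key-based authentication recommended in precaution 1 can be set up in two commands; the host is hypothetical and the key path is only an example:

```shell
# Generate an Ed25519 key pair (no passphrase here purely for the demo).
rm -f /tmp/demo_key /tmp/demo_key.pub
ssh-keygen -t ed25519 -f /tmp/demo_key -N "" -C "transfer key" -q
ls /tmp/demo_key /tmp/demo_key.pub
# Install the public key on the server; scp/sftp/rsync then stop prompting:
# ssh-copy-id -i /tmp/demo_key.pub user@123.45.67.89
```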