What to do if a newly added cloud disk doesn't show up? A comprehensive guide to cloud server disk mounting.
Time : 2025-11-19 14:40:12
Edit : Jtti

  In day-to-day cloud server operations and maintenance, administrators often run into a seemingly simple yet perplexing problem: a new cloud disk has been added in the cloud platform, but no new disk device is visible inside the system. Neither `lsblk` nor `df` shows the new disk. This is usually not a cloud platform failure; it stems from unfamiliarity with the cloud server's storage identification mechanism, the block device hot-plug process, and the mounting procedure.

  After a cloud disk is added or storage capacity is expanded in the cloud platform, the underlying host machine allocates a new block device to the virtual machine or updates the size of an existing one. However, the virtual machine does not always recognize this change automatically. On some virtualization platforms in particular, such as KVM, Xen, OpenStack, VMware, or a cloud provider's custom virtualization layer, the operating system may need to trigger a device scan itself. Therefore, when the dashboard shows the new cloud disk as attached to the instance but no new device appears in the system, the first step is to confirm whether the operating system has performed a block device scan. The most direct way is to list the currently recognized disks:

lsblk

  If the newly added disk does not appear, check whether any device events were logged in `dmesg` (SCSI disks show up as `sd*`, virtio disks as `vd*`, NVMe disks as `nvme*`):

dmesg | grep -iE 'sd|vd|nvme'

  If no new device logs are found, it usually means the system did not trigger a device scan. In this case, you can rescan the SCSI bus, which is the most common disk identification procedure in Linux. Execute the following command to perform a probe on all SCSI hosts:

for host in /sys/class/scsi_host/host*; do echo "- - -" > "$host/scan"; done
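On distributions that ship the sg3_utils package, the same rescan can be done with its helper script (package and script availability vary by distribution, so this is an optional alternative rather than a required step):

```shell
# Rescans all SCSI buses for new devices; provided by the sg3_utils package.
rescan-scsi-bus.sh
```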

  Then run `lsblk` again, and you should usually see the newly added disk, such as `/dev/vdb` or `/dev/sdc`.
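The check above can also be scripted. The hypothetical helper below filters `lsblk -rno NAME,TYPE` output and prints disks that have no partitions yet, which is typically what a freshly attached, empty cloud disk looks like (the function name and the `vdX`/`sdX`-style naming assumption are ours, not part of any tool):

```shell
# find_empty_disks: read "NAME TYPE" pairs (as printed by `lsblk -rno NAME,TYPE`)
# and print disks that have no partitions. Assumes vdX/sdX-style names, where a
# partition is the disk name plus trailing digits (NVMe's "p1" suffix differs).
find_empty_disks() {
  awk '$2 == "disk" { d[$1] = 1 }
       $2 == "part" { sub(/[0-9]+$/, "", $1); delete d[$1] }
       END { for (n in d) print n }'
}

# Usage on a live system:
#   lsblk -rno NAME,TYPE | find_empty_disks
```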

  If the cloud platform attaches a separate new cloud disk, a new block device will appear after a successful scan. If, however, the platform expands the capacity of an existing disk, say from 100GB to 200GB, `lsblk` may still show the old size, and the device needs to be re-read. Some platforms propagate the new capacity automatically (virtio disks such as `/dev/vda` usually pick it up without intervention), while others require the system to trigger it. For a SCSI-attached disk such as `/dev/sda`, you can execute:

echo 1 > /sys/class/block/sda/device/rescan

  If it is a partitioned disk, a partition scan is also required:

partprobe

  When a brand-new disk first shows up, it usually has no partition table or file system, so it will not appear in `df -h`. Many beginners conclude from this that the new disk failed to mount, but unformatted devices simply never appear in the file system list. At this point, partition the new disk, for example with `fdisk`:

fdisk /dev/vdb
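If you prefer a non-interactive approach, `parted` can create the label and partition in one scripted call (a sketch, assuming the new empty disk really is `/dev/vdb`; double-check the device name first, since this overwrites the disk):

```shell
# Create a GPT label and a single partition spanning the whole disk.
# WARNING: destroys any existing data on /dev/vdb.
parted -s /dev/vdb mklabel gpt mkpart primary ext4 0% 100%
```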

  After creating a new partition in the interactive prompts (`n` to add it, then `w` to write the table), `lsblk` will show the `/dev/vdb1` partition. Next, create a file system on the partition, such as EXT4:

mkfs.ext4 /dev/vdb1

  XFS can also be used:

mkfs.xfs /dev/vdb1

  Once the file system is created, the next step is to choose a mount point. Assuming you want to mount it to /data, you can execute:

mkdir /data
mount /dev/vdb1 /data

  After mounting is complete, the new disk appears in the file system list, which you can verify with `df`:

df -h

  However, this is only a temporary mount; without persistent configuration, the new disk will not be mounted automatically after a reboot. Edit `/etc/fstab` so the system mounts the file system at startup. A typical entry looks like this:

/dev/vdb1 /data ext4 defaults 0 0
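Device names like `/dev/vdb1` can change across reboots or when disks are detached and reattached, so it is safer to reference the file system by UUID. The UUID comes from `blkid` (the value below is illustrative), and the `nofail` option keeps the system bootable even if the disk is missing:

```shell
# Look up the partition's UUID (the printed value will differ on your system)
blkid /dev/vdb1

# Then use it in /etc/fstab instead of the device name, e.g.:
# UUID=0a1b2c3d-e4f5-6789-abcd-ef0123456789 /data ext4 defaults,nofail 0 2
```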

  To ensure the fstab entry contains no errors that could prevent the system from booting, run a mount test after saving and before rebooting:

mount -a

  If no errors occur, the configuration is valid and will take effect at the next boot.

  On some cloud platforms, after adding a disk, users may find that no new device appears in the system: the size of `/dev/vda` is unchanged and no new block device is created. This often means a data disk separate from the system disk was added, but the cloud platform has not actually bound it to the instance. In this case, first check the disk's attachment status in the cloud console and confirm the new disk is attached to the right instance. If it shows as unattached, attach it manually on the platform side. Some cloud platforms require an instance restart before the new disk takes effect, so follow the platform's instructions on whether a restart is needed.

  Another, more subtle situation is that the newly added cloud disk is recognized by the system, but the expected device node is not generated. In some edge cases, because of udev rule conflicts, custom system rules, or multipath setups, the new device node may appear under a different path, such as `/dev/disk/by-id` or `/dev/disk/by-path`. To list block devices by these stable paths, use:

ls -l /dev/disk/by-id

  Once the corresponding device is identified, you can directly partition and mount it using the corresponding path.

  However, even after the disk is successfully mounted, `df` and `lsblk` may still report inconsistent sizes. This mostly happens when an existing disk was expanded rather than a new independent disk added. After expansion, you must first extend the partition and then the file system. For a disk whose size has grown, use `growpart` (typically provided by the cloud-utils or cloud-guest-utils package):

growpart /dev/vda 1

  Then extend the file system. For EXT4:

resize2fs /dev/vda1

  Or, for XFS, grow it via its mount point (here assuming the file system is mounted at `/`):

xfs_growfs /

  This completes the disk expansion process.

  In some cases, even though the disk is recognized and mounted successfully, the reported space usage looks wrong. This is usually because the mount point directory already contained files. For example, an administrator mounts a new disk to /data, but that path already holds data; after mounting, the original files are hidden underneath the mount, so the space figures appear abnormal. In this situation, unmount first:

umount /data

  The original files then become visible again and can be moved aside (or removed) before remounting the disk.
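To check what a mount is hiding without unmounting it, one common trick is to bind-mount the parent file system somewhere else and look underneath (a sketch, assuming the shadowed directory is /data on the root file system; requires root):

```shell
# Bind-mount the root file system to a temporary view, then inspect the
# directory that sits "under" the /data mount point.
mkdir -p /mnt/rootview
mount --bind / /mnt/rootview
ls -la /mnt/rootview/data      # the files hidden by the mount, if any
umount /mnt/rootview
```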

  In containerized environments such as Docker, a newly added disk may also appear to be missing because device paths are mediated by the container runtime. In such environments, the host machine must recognize and mount the disk first, and the container should then consume it through a volume mapping, rather than manipulating the raw disk from inside the container.
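A minimal sketch of that order of operations (the image name `myapp:latest` and the container path are hypothetical placeholders):

```shell
# Host side: mount the new disk first (assumes /dev/vdb1 was formatted as above)
mkdir -p /data
mount /dev/vdb1 /data

# Container side: consume the already-mounted path as a bind volume
docker run -d --name myapp -v /data:/var/lib/myapp myapp:latest
```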

  Once users master the complete chain of cloud disk recognition, scanning, partitioning, formatting, mounting, and expansion, they will fundamentally understand why newly added cloud disks fail to display and how to successfully deploy them. Cloud disk mounting is essentially a matter of the operating system's management of block devices, while the cloud platform is only responsible for allocating devices; the actual recognition work is still led by Linux. Therefore, as long as the relevant principles and processes are understood, problems can be quickly located and correct mounting can be achieved.

