Want to know the true speed of your Hong Kong cloud server's hard drive? Try these two professional benchmarking tools.
Time : 2026-01-05 16:42:59
Edit : Jtti

Different cloud service providers offer different disk types, and the IOPS and throughput figures they advertise are not always reliable. When purchasing a Hong Kong cloud server, it is therefore essential to verify performance with professional benchmarking tools. On the Linux and Windows platforms, Iozone and Iometer are time-tested industry standards that can measure the actual performance of a storage system with instrument-like precision.

The core of storage performance testing lies in simulating real-world loads. Whether it's frequent read/write operations of small data blocks in a database or continuous large file transfers in video processing, the pressure on the disk is vastly different. An excellent benchmarking tool needs to be able to customize test modes, control key parameters such as read/write ratios, block size, and queue depth, and output core metrics like latency, bandwidth, and IOPS. This is precisely what Iozone and Iometer excel at.

Let's delve deeper into Iozone. It's an open-source, comprehensive file system benchmarking tool, particularly popular among system administrators and developers in Linux environments. Iozone's appeal lies in its ability to automatically run numerous tests covering different file operation modes. It works by creating files on the test partition and then performing a series of read/write operations. By measuring the time required to complete these operations, the corresponding performance data is calculated. Its test modes are very detailed, including multiple dimensions such as write, rewrite, read, reread, random read, and random write, and it can test the performance impact of different file sizes and record lengths. This is very useful for evaluating the performance of a file system in different application scenarios.

In practical use, you usually need to compile Iozone from source code. A typical test command might look like this, designed to test performance with file sizes ranging from 1GB to 4GB and output the results to an Excel file:

`./iozone -a -n 1G -g 4G -i 0 -i 1 -i 2 -f /mnt/testfile -Rb ./test_result.xls`

Each parameter has a specific meaning: `-a` enables automatic (full) test mode, `-n` and `-g` set the minimum and maximum test file sizes, each `-i` selects a test mode (0 = write/rewrite, 1 = read/reread, 2 = random read/write), `-f` points to the test file (which must reside on the disk you want to measure, not on the system disk), and `-Rb` writes the results as an Excel-compatible report. When interpreting the report, in addition to sequential read/write speeds (crucial for large file transfers), pay particular attention to random read/write performance, especially IOPS, as this directly determines the efficiency of databases, virtualization platforms, and other applications that perform large numbers of small I/O operations.
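Compiling Iozone from source, as noted above, is usually just a download and a make. The sketch below assumes a 64-bit Linux host; the version in the file name is an assumption, so check iozone.org for the current release (some distributions also ship a prebuilt `iozone3` package you can install instead).

```
# The version number is an assumption; see iozone.org for the current release
wget http://www.iozone.org/src/current/iozone3_506.tgz
tar xf iozone3_506.tgz
cd iozone3_506/src/current
make linux-AMD64      # or "make linux" for a generic build
./iozone -v           # print the version banner to confirm the build works
```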

In the Windows world, Iometer is a classic tool for storage performance evaluation. Originally developed by Intel, it is now open-source and also supports Linux. Iometer's design emphasizes applying precisely controllable loads to the disk subsystem. Its core concept is "workload configuration," where you can define an "Access Specification" to simulate almost any type of I/O pattern. You can set the read/write ratio, specify sequential or random access, define the requested data block size (e.g., simulating 4K or 8K blocks in a database, or 1MB blocks in a video stream), and set the queue depth. The queue depth parameter is particularly important; it simulates the number of I/O requests the system can send to the disk simultaneously. Increasing the queue depth usually yields higher peak IOPS from the hard drive, but it can also increase latency.
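As a concrete illustration, an OLTP database-style Access Specification is often approximated with something like a 70/30 read/write mix, 100% random access, an 8 KB transfer size, and 16 or 32 outstanding I/Os; these figures are illustrative only and should be adjusted to mirror your own application's I/O profile.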

When using Iometer, you typically configure a test and let the tool run continuously for a set period according to your settings. The final report details total IOPS, average throughput, and, most importantly, average response time. Response time, measured in milliseconds, reflects how quickly each I/O operation completes and is the key indicator of a storage system's responsiveness. A good storage system should not only deliver high IOPS at high queue depths but also maintain extremely low latency at low queue depths, ensuring smooth operation under pressure.
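Although most people drive Iometer through its GUI, it can also run unattended: an Access Specification saved from the GUI as an `.icf` configuration file can be passed on the command line, which makes it easy to repeat the identical test across several cloud disks. The file names below are hypothetical.

```
:: Hypothetical file names; the .icf was exported from the Iometer GUI beforehand
IOmeter.exe /c db_4k_random.icf /r hk_disk_A_results.csv
```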

So, how can these two tools be used in real-world operation and maintenance or selection scenarios? Let's assume you are evaluating several cloud disks for your company's database application. Your testing process could be as follows: First, mount the disks to be tested on the cloud host. For Linux test machines, use Iozone to perform sequential read/write and random read/write tests, focusing on whether random read/write performance meets the database requirements. Simultaneously, you can adjust the test parameters to simulate data block sizes close to actual business needs. For Windows test environments, use Iometer to create a test load with 4K or 8K data block sizes and read/write ratios that match the characteristics of the database, observing IOPS and latency performance at different queue depths. By comparing test results from different cloud disks, the data clearly tells you which product is best suited for your database workload.
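To make the Linux half of that comparison concrete, an Iozone run tailored to a database-style profile might look like the sketch below; the mount point and the 8 GB file size are assumptions and should be adapted so that the test file comfortably exceeds the instance's RAM.

```
# /mnt/dbdisk and the 8g file size are assumptions; adjust them to your environment
# -i 0 creates the file (write/rewrite), -i 2 runs random read/write,
# -r 8k matches a typical database page size, -I bypasses the page cache (O_DIRECT),
# -O reports results in operations per second (IOPS)
./iozone -i 0 -i 2 -r 8k -s 8g -I -O -f /mnt/dbdisk/iozone.tmp
```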

When conducting this type of testing, there are some best practices to follow. The primary principle is to ensure a clean test environment, minimizing interference from other processes on disk I/O. The test data volume should be large enough, far exceeding the server's memory capacity, to avoid inflated performance figures caused by operating system caching. The test duration should also be long enough, generally recommended to be at least several minutes, to capture performance stability and any fluctuations. In a cloud environment, it's also important to be aware of the differences between network storage and local storage; when testing network cloud disks, bandwidth and network latency will also be influencing factors.
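A few shell-level checks help enforce these practices. In the sketch below, the mount point and the 16 GB upper bound are assumptions; scale the maximum file size so it clearly exceeds the instance's memory.

```
free -g                                              # check installed RAM first
sync && echo 3 | sudo tee /proc/sys/vm/drop_caches   # flush the page cache between runs
# -n/-g span 1 GB to 16 GB; -I adds O_DIRECT so the cache cannot inflate the numbers
./iozone -a -n 1g -g 16g -I -f /mnt/testdisk/iozone.tmp -Rb ./cache_safe_result.xls
```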

Ultimately, Iozone and Iometer provide more than just cold, hard numbers; they are the basis for understanding system performance and making technical decisions. Through regular benchmarking or pre-project benchmarking, you can establish a performance baseline for your system. When performance declines in the future, repeating the same tests can quickly pinpoint whether the problem lies in the storage layer. In the cloud era, hands-on measurement is far more reliable than simply relying on specifications.

