How to configure efficient Linux routing for overseas cloud servers
Time : 2025-09-04 14:21:36
Edit : Jtti

Proper routing configuration is what allows an overseas cloud server to communicate efficiently. As businesses diversify globally, it is now common for companies and individuals to deploy servers across North America, Europe, Asia, and other regions. Cross-border access, diverse carrier lines, and complex application requirements all place higher demands on Linux routing configuration. Efficient routing not only improves bandwidth utilization but also reduces latency and packet loss, ensuring stable access for users worldwide.

Let's first understand the basic mechanisms of Linux routing. The Linux kernel features a powerful network protocol stack, which uses routing tables to determine packet forwarding paths. By default, the system assigns a direct route to each network interface and forwards traffic based on the configured default gateway. However, in overseas cloud server environments, policy-based routing, multiple egress selection, and optimized forwarding tables are often required to address complex network scenarios. The command

ip route show

can be used to view the current system routing table information.
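On a typical single-homed server the output looks roughly like the example below; the addresses, interface name, and other details are purely illustrative and will differ on every machine:

default via 203.0.113.1 dev eth0 proto static
203.0.113.0/24 dev eth0 proto kernel scope link src 203.0.113.10

The first line is the default gateway, while the second is the connected route the kernel adds automatically for the interface's own subnet.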

If a cloud server has multiple network connections, for example a public address from the local ISP alongside a cross-border dedicated line or CDN access, traffic must be distributed appropriately. The most common approach is policy-based routing keyed on source or destination addresses. In Linux, this is configured with the ip rule command. For example, to send traffic from a specific source IP address out through a designated gateway, you can configure the following:

ip rule add from 192.168.1.10/32 table 100
ip route add default via 203.0.113.1 dev eth1 table 100

This way, traffic from the source address 192.168.1.10 will be routed through the gateway corresponding to eth1, while other traffic will continue to use the default route.
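Rules and routes added with the ip command do not survive a reboot on their own. A common approach, assuming the standard iproute2 layout found on most distributions, is to give the extra table a readable name in /etc/iproute2/rt_tables and re-apply the rules by name from a network hook or startup script; the table name isp1 here is just an example:

echo "100 isp1" >> /etc/iproute2/rt_tables
ip rule add from 192.168.1.10/32 table isp1
ip route add default via 203.0.113.1 dev eth1 table isp1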

For cross-border applications, overseas users may need to distinguish between different destination networks when accessing the network. Using destination-based policy routing, traffic can be precisely forwarded to the optimal route. For example, if you need to route requests to an overseas content distribution network through a dedicated line, you can add the following rule:

ip rule add to 198.51.100.0/24 table 200
ip route add 198.51.100.0/24 via 192.0.2.1 dev eth2 table 200

This method prevents all traffic from going through a single egress, thereby achieving bandwidth distribution and optimization.
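To verify that the kernel really selects the intended table for a given destination, list the active rules and ask the kernel to resolve a test address inside the routed prefix (the address below is just an example):

ip rule show
ip route get 198.51.100.10

If the policy route is in effect, the output of ip route get should show the traffic leaving via 192.0.2.1 on eth2.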

Performance is an essential factor to consider during configuration. Older Linux kernels relied on an IPv4 route cache to speed up forwarding lookups, and in high-concurrency environments a poorly tuned or overflowing cache caused frequent full table lookups and extra kernel overhead. On such kernels the cache can be tuned through kernel parameters, for example:

sysctl -w net.ipv4.route.max_size=131072
sysctl -w net.ipv4.route.gc_timeout=300

Adjusting these parameters reduces problems caused by route cache overflows. Note, however, that the per-flow IPv4 route cache was removed in kernel 3.6, so on modern systems these two settings have little or no effect.
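Whichever sysctl values you decide to keep, settings applied with sysctl -w are lost after a reboot. To make the tuning persistent, write the parameters to a drop-in file under /etc/sysctl.d/ and reload them; the file name below is arbitrary, and net.ipv4.ip_forward is only required if the server forwards traffic on behalf of other hosts:

cat > /etc/sysctl.d/90-routing.conf <<'EOF'
# only needed if this server forwards traffic for other hosts
net.ipv4.ip_forward = 1
# route cache tuning (mainly relevant on older kernels)
net.ipv4.route.max_size = 131072
net.ipv4.route.gc_timeout = 300
EOF
sysctl --system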

Furthermore, for overseas cloud servers that need to handle both IPv4 and IPv6 traffic, ensure consistent routing configuration across both protocol stacks. IPv6 routing configuration also uses the ip command, for example:

ip -6 route add default via 2001:db8:21:70::1 dev eth0

Ensuring IPv6 packets are forwarded through the designated gateway improves global access compatibility.
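Policy-based routing works the same way for IPv6. As a sketch, the prefix, gateway, and table number below use the IPv6 documentation range and are placeholders for real values:

ip -6 rule add from 2001:db8:1::/64 table 300
ip -6 route add default via 2001:db8::1 dev eth1 table 300
ip -6 route show table 300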

Security is equally important in routing configuration. Overseas cloud servers may face complex network environments, so routing configuration should be combined with firewall rules to prevent unnecessary exposure. For example, using iptables or nftables, you can strictly restrict the ingress and egress of policy-based routing, allowing only legitimate traffic to pass.

iptables -A FORWARD -s 192.168.1.10 -d 198.51.100.0/24 -j ACCEPT
iptables -A FORWARD -s 192.168.1.10 -j DROP

This method prevents internal network traffic leakage caused by incorrect routing rules.
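For systems that have moved to nftables, a roughly equivalent sketch is shown below; the table and chain names are arbitrary, and the drop policy assumes this host is not expected to forward any other traffic:

nft add table inet filter
nft add chain inet filter forward { type filter hook forward priority 0 \; policy drop \; }
nft add rule inet filter forward ip saddr 192.168.1.10 ip daddr 198.51.100.0/24 accept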

Load balancing is also a key component of efficient routing in complex architectures. If an enterprise rents multiple cloud servers in different regions, traffic balancing can be achieved using Linux's native multipath routing. The configuration is as follows:

ip route add default scope global \
nexthop via 203.0.113.1 dev eth0 weight 1 \
nexthop via 203.0.113.2 dev eth1 weight 1

This configuration distributes default-route traffic across the two egress links on a per-flow basis (equal weights give a roughly even split), improving overall throughput and providing redundancy if one path degrades.
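By default the kernel hashes multipath traffic per flow on source and destination addresses. On reasonably recent kernels, two additional parameters are worth checking; they are optional tuning knobs rather than requirements:

sysctl -w net.ipv4.fib_multipath_hash_policy=1
sysctl -w net.ipv4.fib_multipath_use_neigh=1

The first includes layer-4 ports in the hash so that many connections between the same two hosts can still spread across both links; the second tells the kernel to skip next hops whose neighbor entries are unreachable, which improves failover when one gateway goes down.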

For large-scale enterprises, you can also run BGP directly on Linux servers using open-source routing suites such as FRRouting or the older Quagga. BGP dynamically learns optimal paths, enabling intelligent routing between multiple data centers across borders. This is particularly critical for e-commerce, finance, and video services that require low latency and high reliability.
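As a rough illustration only, a minimal FRRouting configuration (for example in /etc/frr/frr.conf, with bgpd enabled in /etc/frr/daemons) that announces one local prefix to a single upstream peer could look like the following; the AS numbers, neighbor address, and prefix are placeholders that must match what the carrier or data center actually assigns:

! announce 198.51.100.0/24 to one upstream peer (example values)
router bgp 65010
 neighbor 203.0.113.254 remote-as 65000
 address-family ipv4 unicast
  network 198.51.100.0/24
 exit-address-family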

During operations and maintenance, monitoring and debugging are crucial for ensuring correct routing configuration. Linux provides a variety of tools for verifying routing forwarding, such as using ping or traceroute to test the path:

ping -I eth1 8.8.8.8
traceroute -i eth2 www.google.com

These commands help administrators confirm that packets are being sent out of a specific interface as expected. Additionally, tcpdump can be used to capture packets and further confirm data flow:

tcpdump -i eth1 host 8.8.8.8

Packet capture verifies that routing policies are working correctly.
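Two more iproute2 helpers are useful here: ip route get asks the kernel which route it would actually choose for a given destination (optionally combined with a local source address, such as the 192.168.1.10 used earlier), and ip monitor prints routing changes in real time, which helps when rules are being rewritten by cloud-init or other automation:

ip route get 8.8.8.8 from 192.168.1.10
ip monitor route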

Optimizing routing for overseas cloud servers involves more than just configuration commands; it also requires a long-term tuning process. Regular routing table checks and policy updates are necessary, taking into account business characteristics, user distribution, and network operator availability, to avoid routing loops or black holes. Furthermore, in cross-border applications, it is recommended to utilize CDN and intelligent DNS resolution to divert user requests to the optimal server. This allows for refined scheduling through Linux routing to achieve optimal performance.
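To support such periodic reviews, assuming a standard cron setup, a small script can snapshot the rules and tables on a schedule so that unexpected changes are easy to diff later; the path and schedule below are only an example:

#!/bin/sh
# example: save as /etc/cron.daily/route-audit and make it executable
ts=$(date +%F)
{ ip rule show; ip route show table all; ip -6 route show table all; } > /var/log/route-audit-$ts.txt 2>&1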

Core strategies for configuring efficient Linux routing in overseas cloud server environments include: using policy-based routing to divert source or destination traffic; improving forwarding performance through kernel parameter adjustments; combining IPv4 and IPv6 dual-stack configurations to ensure compatibility; utilizing tools such as iptables to ensure routing security; applying multipath routing and BGP to achieve high availability and load balancing; and finally, using monitoring and debugging tools to ensure configuration effectiveness and stability.
