A Practical Guide to Data Model Selection and Server Resource Optimization
Time : 2025-05-31 15:22:17
Edit : Jtti

The star, snowflake, and constellation data models consume server resources very differently, and those differences directly affect hardware selection and operations strategy. This article analyzes the core considerations for deploying each model from the server's perspective.

I. Model Characteristics and Server Resource Mapping

Star model (high-throughput scenarios). CPU load: single-level JOINs keep computation light; in testing, the same query used about 40% fewer instruction cycles than under the snowflake model. Memory: redundant data in the dimension tables demands a larger memory cache (roughly 128 GB of RAM per TB of fact table to maintain the hot-data hit rate). Storage: columnar storage offers limited gains here; place dimension tables on SATA SSD and the fact table on NVMe SSD. Typical configuration: dual Intel Xeon 6348 (28 cores each), 512 GB DDR4, 4 × 7.68 TB NVMe in RAID 0.
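As a rough illustration of the sizing rule above, the following Python sketch turns a fact-table size into a RAM and storage-tier recommendation. The 128 GB-per-TB ratio comes from the text; the helper name and tier labels are illustrative assumptions.

def star_schema_sizing(fact_table_tb: float, ram_per_tb_gb: int = 128) -> dict:
    """Estimate RAM and storage tiers for a star-schema node (rule of thumb only)."""
    return {
        "fact_table_tb": fact_table_tb,
        "recommended_ram_gb": fact_table_tb * ram_per_tb_gb,
        "fact_table_storage": "NVMe SSD",   # large sequential/columnar scans
        "dimension_storage": "SATA SSD",    # redundant but smaller, latency-tolerant
    }

if __name__ == "__main__":
    # e.g. a 4 TB fact table maps to roughly 512 GB of RAM
    print(star_schema_sizing(4.0))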

Snowflake model (storage-sensitive scenarios). I/O pressure: multi-level joins sharply increase random reads, so high-IOPS storage (on the order of 50K IOPS per TB) is required. Network consumption: in a distributed environment, cross-node joins inflate network traffic; gigabit networks become a bottleneck, so 10 Gbps+ interconnects are needed. Cache strategy: the L3 cache hit rate should stay above 85% to avoid disk storms; 3D XPoint (Optane) memory is recommended. Typical configuration: AMD EPYC 9654 (96 cores), 1 TB DDR5, 2 × 1.6 TB PM1745 SSDs, 100 Gbps RDMA.
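The snippet below does the same back-of-the-envelope math for the snowflake model, translating the 50K IOPS/TB figure into total IOPS and an approximate read bandwidth. The 8 KiB block size and the function name are assumptions for illustration.

def snowflake_io_requirements(data_tb: float, iops_per_tb: int = 50_000) -> dict:
    """Estimate IOPS and read bandwidth for a snowflake-schema data set."""
    required_iops = data_tb * iops_per_tb
    # Assume 8 KiB random reads to convert IOPS into throughput.
    read_mb_s = required_iops * 8 / 1024
    return {
        "required_iops": required_iops,
        "approx_read_throughput_mb_s": round(read_mb_s, 1),
        "network": "10 Gbps+ (100 Gbps RDMA for large clusters)",
    }

print(snowflake_io_requirements(2.0))  # 2 TB -> 100,000 IOPS, ~781 MB/s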

Constellation model (enterprise-level integration). Concurrency bottleneck: parallel access to multiple fact tables requires high-throughput PCIe lanes (PCIe 5.0 x16 expansion slots are recommended). Resource isolation: use cgroups or Docker to isolate CPU and memory and prevent cross-model resource contention (see the sketch below). Disaster recovery: shared dimension tables need synchronous replication (delay within 1 ms); a stretch cluster deployment is recommended. Typical configuration: an 8-node cluster (64 cores and 512 GB RAM per node) with Ceph distributed storage (all-NVMe OSDs).
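A minimal sketch of the cgroup-based isolation mentioned above, assuming cgroup v2 and root privileges; the group name, core count, and memory ceiling are hypothetical, and in practice Docker or Kubernetes would manage these files for you.

from pathlib import Path

CGROUP_ROOT = Path("/sys/fs/cgroup")

def create_isolated_group(name: str, cpu_cores: int, memory_gb: int) -> None:
    """Create a cgroup v2 group with hard CPU and memory ceilings."""
    group = CGROUP_ROOT / name
    group.mkdir(exist_ok=True)
    # cpu.max = "<quota> <period>"; quota is in microseconds per period.
    (group / "cpu.max").write_text(f"{cpu_cores * 100_000} 100000")
    # memory.max is a byte count.
    (group / "memory.max").write_text(str(memory_gb * 1024**3))

# Example: dedicate 16 cores and 128 GB to one fact table's workload.
# create_isolated_group("fact_sales", cpu_cores=16, memory_gb=128)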

II. Hybrid Architecture Resource Allocation Strategy

Tiered deployment scheme. Hot data layer (star model): all-flash servers (3D NAND NVMe), a dimension-table cache ratio of 1:3, and 25 Gbps RoCEv2 networking. Warm data layer (snowflake model): hybrid NVMe + SATA SSD storage with ZFS L2ARC caching enabled, over a 10 Gbps TCP/IP network.

Cold data layer (constellation model): high-density HDD servers (JBOD architecture) with erasure coding (EC 8+3) to reduce storage overhead, connected over a 40 Gbps InfiniBand network.
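To make the EC 8+3 trade-off concrete, the sketch below compares the raw capacity needed under erasure coding versus 3x replication on the cold tier; the replica-3 baseline is an assumption used only for comparison.

def raw_capacity_needed(usable_tb: float, scheme: str) -> float:
    """Raw TB required to store usable_tb under a given redundancy scheme."""
    if scheme == "ec_8_3":
        return usable_tb * (8 + 3) / 8   # 1.375x raw per usable TB
    if scheme == "replica_3":
        return usable_tb * 3             # 3x raw per usable TB
    raise ValueError(f"unknown scheme: {scheme}")

for scheme in ("ec_8_3", "replica_3"):
    print(scheme, raw_capacity_needed(100, scheme), "TB raw for 100 TB usable")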

Key resource-monitoring indicators: CPU utilization threshold for JOIN operations, 75%; memory hit rate for dimension-table scans, 90%; cross-model query queue depth, under 5.
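These thresholds could be wired into a simple alerting check like the sketch below. The metric names and the plain-dict input are assumptions; a real deployment would pull these values from Prometheus, node_exporter, or the DBMS itself.

THRESHOLDS = {
    "join_cpu_utilization_pct": 75,      # alert above this value
    "dimension_cache_hit_rate_pct": 90,  # alert below this value
    "cross_model_queue_depth": 5,        # alert at or above this value
}

def check_metrics(metrics: dict) -> list[str]:
    """Return a list of human-readable alerts for any breached threshold."""
    alerts = []
    if metrics["join_cpu_utilization_pct"] > THRESHOLDS["join_cpu_utilization_pct"]:
        alerts.append("JOIN CPU utilization above 75%")
    if metrics["dimension_cache_hit_rate_pct"] < THRESHOLDS["dimension_cache_hit_rate_pct"]:
        alerts.append("Dimension-table cache hit rate below 90%")
    if metrics["cross_model_queue_depth"] >= THRESHOLDS["cross_model_queue_depth"]:
        alerts.append("Cross-model query queue depth at or above 5")
    return alerts

print(check_metrics({"join_cpu_utilization_pct": 82,
                     "dimension_cache_hit_rate_pct": 93,
                     "cross_model_queue_depth": 2}))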

III. Hardware Evolution and Model Adaptation

New technologies are shifting these trade-offs. CXL memory pooling addresses the star model's memory redundancy, cutting DIMM provisioning by about 40% in tests. Compute-storage separation benefits the snowflake model: pushing JOINs down to the storage nodes improved performance roughly threefold. For GPU acceleration, the complex joins of the constellation model can be accelerated with NVIDIA RAPIDS cuDF, improving ETL efficiency by about 8x.
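As a hedged illustration of the GPU-accelerated joins mentioned above, the following sketch offloads a fact/dimension join to RAPIDS cuDF. The file names and join key are hypothetical, and a CUDA-capable GPU with the cudf package installed is required.

import cudf

fact = cudf.read_parquet("fact_orders.parquet")    # large fact table
dim = cudf.read_parquet("dim_customer.parquet")    # shared dimension table

# GPU hash join; the API mirrors pandas' merge().
result = fact.merge(dim, on="customer_id", how="inner")
print(result.head())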

IV. Guide to Avoiding Pitfalls in Operation and Maintenance

Star model trap: avoid bloated, overly wide dimensions (more than 100 columns); split them into mini-dimensions instead. Snowflake model warning: for joins deeper than three levels, materialized views must be created to keep the execution plan under control. Constellation model risk: shared dimension-table versions require a strong consistency protocol (such as Raft). General safeguards: enable query circuit breaking (terminate any single SQL statement whose CPU consumption exceeds 30%; one possible implementation is sketched below), monitor SSD write endurance (DWPD ≥ 3), and automate hot/cold data tiering based on access frequency.
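One possible, purely illustrative implementation of the query circuit breaker described above, using psutil to watch per-backend CPU usage. The worker process name, sampling interval, and the choice to terminate the OS process rather than use the DBMS's own cancel command are all assumptions.

import psutil

CPU_LIMIT_PCT = 30.0

def circuit_break(worker_name: str = "postgres") -> None:
    """Cancel any database worker process exceeding the CPU threshold."""
    for proc in psutil.process_iter(["name", "pid"]):
        if proc.info["name"] != worker_name:
            continue
        if proc.cpu_percent(interval=1.0) > CPU_LIMIT_PCT:
            print(f"Backend {proc.info['pid']} over {CPU_LIMIT_PCT}% CPU, cancelling")
            proc.terminate()  # prefer pg_cancel_backend()/KILL QUERY in practice

# circuit_break()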

The essence of data model selection is an optimization problem in server resource allocation. With storage costs falling and computing power rising, a hybrid architecture that is primarily star-shaped with local snowflaking is recommended. For PB-scale enterprise applications, CXL memory pooling and compute-storage separation can deliver sub-second responses in 90% of scenarios while keeping TCO under control, matching server resources to business value.
