
Over my years as a DBA and StarRocks contributor, I've worked alongside a diverse array of community members and picked up plenty of best practices. In that time, I've found five specific areas that stand out as absolutely critical: deployment, data modeling, data ingestion, querying, and monitoring.

In this five-part series, I'm going to dig into each of them and share everything I've picked up over the years so you can get the best possible StarRocks experience.



Let's kick things off with deployment. I like to break deployment down into two stages, planning and configuration, and I'll share my specific tips for each stage below.


CPU Capacity Planning

If we assume memory and disk are not bottlenecks, the performance bottleneck for analytical queries lies in the CPU's processing power. Therefore, we should start by estimating cluster size from the CPU's computational requirements.

Total CPU resources needed for a cluster:

e_core = (scan_rows / cal_rows) / e_rt * e_qps
| Variable | Meaning | Example |
| --- | --- | --- |
| e_core | Estimated number of CPU cores (vCPU) | 540 cores |
| scan_rows | Data scan volume in a typical online query | 30 million rows |
| e_qps | Expected QPS | 180 QPS |
| e_rt | Expected response time | 300 ms |
| cal_rows | Rows StarRocks can process per second per core | 30 million rows/s |

Here's an example of what we could be looking at:

  • Data volume: 360 million rows of fact table data per year, approximately 1 million rows/day;

  • Typical query scenario: Joining a month's fact table data (30 million rows) with a small dimension table (tens of thousands of rows), then performing aggregation calculations like group by, sum;

  • Expectation: Response time within 300ms, peak business QPS around 180.


Estimating Our Example with StarRocks:

StarRocks' processing capability ranges from 10 million to 100 million rows per second per core. Given that this scenario involves multi-table joins, group by, and some relatively complex expression functions, we can estimate based on a computational capacity of 30 million rows/second per core that 3 vCPUs are needed per query:

30 million rows / (30 million rows/s) / 300 ms ≈ 3.3, rounded to 3 cores


With a peak concurrency of 180 qps, the requirement is:
3 cores * 180 = 540 cores


Thus, a total of 540 vCPUs are needed. Assuming each physical machine has 48 virtual cores (vCPUs), roughly 12 physical instances are required.
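The estimation above can be sketched as a small Python helper. The variable names follow the formula, the rounding mirrors the worked example (3.3 rounds to 3 cores per query), and `cores_per_node` is just the machine spec assumed in the article:

```python
import math

def estimate_cluster(scan_rows, cal_rows, e_rt_s, e_qps, cores_per_node):
    """Estimate total vCPUs and node count from the e_core formula."""
    # Seconds of single-core work needed to scan one typical query's rows
    single_core_seconds = scan_rows / cal_rows
    # Cores needed to finish one query within the target response time,
    # rounded as in the worked example (3.3 -> 3)
    cores_per_query = round(single_core_seconds / e_rt_s)
    total_cores = cores_per_query * e_qps
    nodes = math.ceil(total_cores / cores_per_node)
    return cores_per_query, total_cores, nodes

# Numbers from this article: 30M rows/query, 30M rows/s/core, 300 ms, 180 QPS, 48-vCPU nodes
print(estimate_cluster(30_000_000, 30_000_000, 0.3, 180, 48))  # -> (3, 540, 12)
```

Treat the output as a starting point for sizing, not a guarantee; as the POC below shows, real workloads can come in under the estimate.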

During an actual POC process I went through, 3 physical instances with 16 vCPUs each were used for stress testing, achieving a response time of 300-500 ms at 40 QPS. Eventually, 7 physical machines with 48 vCPUs each were confirmed for production use. I highly recommend performing POC tests based on your specific use case.


What was the actual outcome of our POC? Based on the test results, we settled on 3 FE nodes with 16 cores and 64 GB of memory each, and 7 BE nodes with 48 cores and 152 GB of memory each.


Additional Tips:

  • The more complex the query and the more columns processed per row, the fewer rows can be processed per second on the same hardware.

  • Better "condition filtering" in calculations (filtering out a lot of data) allows for more rows to be processed (due to internal index structures that help process data faster).

  • Different table types significantly affect processing capacity. The estimation above is based on a Duplicate Key table; other table types involve extra processing, so actual and perceived row counts can differ. Partitioning and bucketing also impact query performance.

  • For scenarios requiring scanning large volumes of data, disk performance also affects processing capacity. Using SSDs can accelerate processing when necessary.


Basic Environment Configurations

  • Required: Check environment settings per the StarRocks documentation, paying special attention to disabling swap, setting vm.overcommit_memory to 1, and configuring ulimit appropriately.
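As a rough sketch, on a typical Linux host these checks map to commands like the following. The specific ulimit values follow the StarRocks documentation's recommendations; verify them against the docs for your version:

```shell
# Disable swap now and keep it off across reboots
sudo swapoff -a
sudo sed -i '/ swap / s/^/#/' /etc/fstab

# Allow memory overcommit (StarRocks recommends vm.overcommit_memory = 1)
sudo sysctl -w vm.overcommit_memory=1
echo 'vm.overcommit_memory = 1' | sudo tee -a /etc/sysctl.conf

# Raise open-file and process limits for the user running StarRocks
ulimit -n 655350
ulimit -u 65535
```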


Hardware Configuration

  • FE Nodes

    • Recommended: 8 vCPUs, 32 GB of memory

    • Required: Data disk should be at least 200GB, preferably SSD.

  • BE Nodes (shared nothing)

      BE configuration largely depends on the size of your data and the type of workload you run. Below are general recommendations.

    • Recommended: a CPU-to-memory ratio of 1 vCPU : 4 GB; the minimum production configuration is 8 cores and 32 GB.

    • Recommended: single-node disk capacity of up to 10 TB, with no single data disk exceeding 2 TB, preferably SSD or NVMe. If using HDDs, throughput should exceed 150 MB/s and IOPS should exceed 500.

    • Recommended: no more than one data disk per 4 vCPUs. That is to say, with 16 vCPUs, the number of data disks should not exceed 4.

    • Recommended: a homogeneous cluster (identical machine specs) to avoid bottlenecks.

  • CN Nodes (shared data)

      CN configuration is largely the same as BE except for disk configuration. In shared data architecture, data is persisted in remote storage, and CN local disks are used as a cache for query acceleration. Configure the appropriate amount of disk space according to your query performance requirements.


Additional Deployment Planning Requirements

  • Required: the minimum production cluster size is 3 FE + 3 BE (deploying FE and BE on separate machines is recommended). If FE and BE share the same instances, configure mem_limit in be.conf to leave memory for other services. For example, if total memory is 40 GB and an FE is already deployed using 8 GB, set mem_limit=30G (40 - 8 - 2), with 2 GB reserved for the system.

  • Required: deploy FE with high availability in production: 1 Leader + 2 Followers. For higher read concurrency, consider adding Observer nodes.

  • Required: use a load balancer for reads and writes to the cluster; Nginx, HAProxy, and F5 are commonly used.
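As one hedged example, an Nginx TCP (stream) proxy in front of the FE nodes might look like the fragment below. Port 9030 is StarRocks' default MySQL-protocol query port, and the hostnames are placeholders; the stream block belongs at the top level of nginx.conf, alongside http, not inside it:

```
stream {
    upstream starrocks_fe {
        server fe1.example.com:9030;
        server fe2.example.com:9030;
        server fe3.example.com:9030;
    }
    server {
        listen 9030;
        proxy_pass starrocks_fe;
    }
}
```

Clients then connect to the load balancer with any MySQL client, and Nginx distributes connections across the three FEs.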


This sums up my advice for deployment, but there's a lot more to share. Head on over to my second article in this series that will take a look at data modeling with StarRocks. If you have questions, be sure to reach out to me on StarRocks' Slack.


Read Part 2