---
title: Benchmark Redis
weight: 6

### FIXED, DO NOT MODIFY
layout: learningpathall
---

## Benchmark Redis using redis-benchmark

The `redis-benchmark` tool is an official performance testing utility for Redis. It measures throughput (requests per second) and latency (response delay) across different workloads.

### Prerequisites

Before running benchmarks, verify that Redis is running and accessible:

```console
redis-cli ping
```

If you don't see a `PONG` response, start Redis:

```console
redis-server &
redis-cli ping
```

### Benchmark SET (write performance)

Benchmark data insertion performance:

```console
redis-benchmark -t set -n 100000 -c 50
```

This command:

- Runs the benchmark for SET operations only (`-t set`)
- Performs 100,000 total requests (`-n 100000`)
- Simulates 50 concurrent clients (`-c 50`)

The output is similar to:

```output
====== SET ======
...
0.170 0.056 0.167 0.183 0.191 1.095
```

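The last line of the summary packs the latency statistics into a single row. If you want to reuse those numbers in a script, a minimal Python sketch (assuming the column order shown in the sample above: avg, min, p50, p95, p99, max, all in milliseconds):

```python
# Parse the final latency line of the SET run above.
# Column order is assumed from the sample output: avg, min, p50, p95, p99, max (ms).
summary_line = "0.170 0.056 0.167 0.183 0.191 1.095"

avg_ms, min_ms, p50_ms, p95_ms, p99_ms, max_ms = (
    float(field) for field in summary_line.split()
)

print(f"avg={avg_ms:.3f} ms, p99={p99_ms:.3f} ms")
```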
### Benchmark GET (read performance)

Test data retrieval performance:

```console
redis-benchmark -t get -n 100000 -c 50
```

Parameters:

- `-t get`: Runs the benchmark only for GET operations.
- `-n 100000`: Executes 100,000 total requests.
- `-c 50`: Simulates 50 concurrent clients performing reads.

The output is similar to:

```output
====== GET ======
...
0.169 0.048 0.167 0.183 0.191 0.807
```

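The throughput figure `redis-benchmark` reports is total requests divided by elapsed wall-clock time. As a quick sanity check (the elapsed time here is back-calculated from the GET numbers above, not taken from the tool's output):

```python
# Back-of-the-envelope check: 100,000 requests at ~150,376 req/sec
# implies roughly 0.665 s of wall-clock time for the whole run.
total_requests = 100_000
reported_rps = 150_375.94          # GET throughput from the run above

elapsed_s = total_requests / reported_rps
print(f"elapsed ≈ {elapsed_s:.3f} s")   # ≈ 0.665 s
```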
## Interpret the benchmark metrics

The following table summarizes the benchmark results from the earlier run on the `c4a-standard-4` (4 vCPU, 16 GB memory) Arm64 VM in GCP (SUSE Enterprise Server). In the table, avg is the mean request latency, min and max are the fastest and slowest observed requests, and P50, P95, and P99 are the latencies under which 50%, 95%, and 99% of requests completed:

| Operation | Total Requests | Concurrent Clients | Avg Latency (ms) | Min (ms) | P50 (ms) | P95 (ms) | P99 (ms) | Max (ms) | Throughput (req/sec) | Description |
|-----------|----------------|--------------------|------------------|----------|----------|----------|----------|----------|----------------------|-------------|
| SET | 100,000 | 50 | 0.170 | 0.056 | 0.167 | 0.183 | 0.191 | 1.095 | 149,700.61 | Measures Redis write performance using the SET command |
| GET | 100,000 | 50 | 0.169 | 0.048 | 0.167 | 0.183 | 0.191 | 0.807 | 150,375.94 | Measures Redis read performance using the GET command |

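P50 is the median latency, while P95 and P99 describe the tail: they expose slow outliers that an average hides. A minimal illustration of the idea (a nearest-rank percentile over a made-up latency sample, not `redis-benchmark`'s exact algorithm):

```python
def percentile(samples_ms, p):
    """Nearest-rank percentile: the sample below which roughly p% of values fall."""
    ordered = sorted(samples_ms)
    index = round(p / 100 * (len(ordered) - 1))
    return ordered[index]

# Hypothetical latency sample (ms) with one slow outlier.
latencies = [0.16, 0.17, 0.17, 0.18, 0.18, 0.19, 0.19, 0.20, 0.21, 1.10]

print(percentile(latencies, 50))  # the median is insensitive to the outlier
print(percentile(latencies, 99))  # the tail percentile exposes it
```

This is why the table's Max column (1.095 ms for SET) can sit far above P99 (0.191 ms): a handful of slow requests dominate the worst case without moving the percentiles.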
Redis demonstrated excellent performance on the Arm64-based C4A VM, achieving over 150K operations per second for both read and write workloads with an average latency of approximately 0.17 ms. Both SET and GET operations showed nearly identical performance characteristics, indicating efficient CPU and memory optimization on the Arm architecture. The Arm-based C4A VM delivers competitive performance-per-watt efficiency, making it ideal for scalable, sustainable Redis deployments.

## What you've accomplished and what's next

In this section, you:

- Benchmarked Redis SET operations, achieving over 149K requests per second with 0.170 ms average latency
- Benchmarked Redis GET operations, achieving over 150K requests per second with 0.169 ms average latency
- Verified that Redis performs efficiently on Google Axion C4A Arm instances

You've successfully benchmarked Redis on Google Cloud's C4A Arm-based virtual machines, demonstrating strong performance for in-memory data operations.

For next steps, consider exploring Redis Cluster for distributed deployments, implementing persistence strategies for production workloads, or testing more advanced data structures like sorted sets and streams. You can also compare performance across different C4A machine types to optimize cost and performance for your specific use case.