# Benchmarking etcd v2.2.0
## Physical machines

GCE n1-highcpu-2 machine type
- 1x dedicated local SSD mounted as etcd data directory
- 1x dedicated slow disk for the OS
- 1.8 GB memory
- 2x CPUs
## etcd Cluster

3 etcd 2.2.0 members, each running on its own machine.
```
etcd Version: 2.2.0
Git SHA: e4561dd
Go Version: go1.5
Go OS/Arch: linux/amd64
```
## Testing

Bootstrap another machine outside of the etcd cluster and run the boom HTTP benchmark tool with a connection reuse patch to send requests to each etcd cluster member. See the benchmark instructions for the patch and the steps to reproduce these measurements.
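For illustration, a single read round against one member looks roughly like the following; the member address, request count, and concurrency level here are placeholder assumptions, not the exact parameters behind the numbers below.

```sh
# Hypothetical read round: 10,000 GETs of one key from 64 concurrent
# clients, aimed at a single member's v2 keys API (2379 is the
# default client port; the address is a placeholder).
boom -n 10000 -c 64 http://10.0.0.1:2379/v2/keys/foo
```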
Performance numbers are computed from the results of 100 benchmark rounds.
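The "average" and "stddev" columns below are a plain mean and standard deviation of each quantity over those rounds. A minimal sketch of that computation in Go, using made-up round results:

```go
package main

import (
	"fmt"
	"math"
)

// meanStddev returns the average and standard deviation of one
// measured quantity (e.g. read QPS) across benchmark rounds.
func meanStddev(rounds []float64) (mean, stddev float64) {
	for _, v := range rounds {
		mean += v
	}
	mean /= float64(len(rounds))
	for _, v := range rounds {
		stddev += (v - mean) * (v - mean)
	}
	stddev = math.Sqrt(stddev / float64(len(rounds)))
	return mean, stddev
}

func main() {
	// Hypothetical QPS results from a few benchmark rounds.
	qps := []float64{2100, 2250, 2180}
	m, s := meanStddev(qps)
	fmt.Printf("average QPS: %.0f, stddev: %.0f\n", m, s)
}
```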
## Single Key Read Performance
| key size in bytes | number of clients | target etcd server | average read QPS | read QPS stddev | average 90th Percentile Latency (ms) | latency stddev |
|-------------------|-------------------|--------------------|------------------|-----------------|--------------------------------------|----------------|
## Single Key Write Performance
| key size in bytes | number of clients | target etcd server | average write QPS | write QPS stddev | average 90th Percentile Latency (ms) | latency stddev |
|-------------------|-------------------|--------------------|-------------------|------------------|--------------------------------------|----------------|
Because etcd now records metrics for each API call, read QPS shows a minor decrease in most scenarios. This small performance cost was judged a reasonable investment for the breadth of monitoring and debugging information the metrics provide.
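etcd serves these metrics in Prometheus text format over HTTP, so they can be inspected directly; the example below assumes a local member on the default client port 2379.

```sh
# Dump the Prometheus-format metrics of a local member
# (address and port are assumptions for illustration).
curl http://127.0.0.1:2379/metrics
```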
Write QPS to cluster leaders increases by a small margin. This is because the main loop and the entry-apply loop in etcd's raft logic were decoupled, eliminating several points where one blocked the other.

Write QPS to all members increases by a significant margin, because followers now receive the latest commit index sooner and therefore commit proposals more quickly.
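A minimal sketch of that decoupling idea, not etcd's actual raft code: the main loop hands committed entries to a dedicated apply goroutine over a buffered channel instead of applying them inline, so raft processing no longer waits on the state machine.

```go
package main

import "fmt"

type entry struct {
	index uint64
	data  string
}

// applyLoop runs in its own goroutine, so the raft main loop never
// blocks on the state machine while committed entries are applied.
func applyLoop(entries <-chan []entry, done chan<- uint64) {
	for batch := range entries {
		var last uint64
		for _, e := range batch {
			// Apply e.data to the state machine (elided).
			last = e.index
		}
		done <- last // report the applied index back to the main loop
	}
	close(done)
}

func main() {
	toApply := make(chan []entry, 16) // buffered: the handoff rarely blocks
	applied := make(chan uint64)

	go applyLoop(toApply, applied)

	// Main loop (sketch): hand off committed entries and continue
	// raft processing, instead of applying them inline first.
	toApply <- []entry{{1, "set foo=bar"}, {2, "set baz=qux"}}
	close(toApply)

	for idx := range applied {
		fmt.Println("applied through index", idx)
	}
}
```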