Benchmarking etcd v2.2.0

Physical Machines

GCE n1-highcpu-2 machine type

  • 1x dedicated local SSD mounted as etcd data directory
  • 1x dedicated slow disk for the OS
  • 1.8 GB memory
  • 2x CPUs

etcd Cluster

3 etcd 2.2.0 members, each running on a single machine.

Detailed versions:

etcd Version: 2.2.0
Git SHA: e4561dd
Go Version: go1.5
Go OS/Arch: linux/amd64
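
Before benchmarking, each member's reported version can be confirmed over the HTTP API. The following is a minimal sketch for that sanity check; the member client URLs are assumptions and should be replaced with the addresses of the actual cluster.

```go
// versioncheck.go: a minimal sketch that asks each member for its version
// via the /version endpoint before benchmarking.
package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Assumed client URLs for the three members; substitute the real ones.
	members := []string{
		"http://10.0.0.1:2379",
		"http://10.0.0.2:2379",
		"http://10.0.0.3:2379",
	}
	for _, m := range members {
		resp, err := http.Get(m + "/version")
		if err != nil {
			fmt.Printf("%s: unreachable: %v\n", m, err)
			continue
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		fmt.Printf("%s: %s\n", m, body)
	}
}
```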

Testing

Bootstrap another machine, outside of the etcd cluster, and run the hey HTTP benchmark tool with a connection-reuse patch to send requests to each etcd cluster member. See the benchmark instructions for the patch and the steps to reproduce these results.

Performance figures are calculated from the results of 100 benchmark rounds.
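
The published numbers come from the patched hey tool described in the benchmark instructions. As a rough illustration of the read workload it generates, the sketch below issues concurrent GETs against the v2 keys API from a fixed number of clients and reports throughput and 90th-percentile latency. The target URL, key name, client count, and request count are assumptions, not the exact benchmark parameters.

```go
// readbench.go: a minimal sketch approximating the single-key read load.
// It is not the patched hey tool used for the published numbers; the
// endpoint, key, client count, and request count are assumptions.
package main

import (
	"fmt"
	"io"
	"net/http"
	"sort"
	"sync"
	"sync/atomic"
	"time"
)

func main() {
	const (
		target   = "http://10.0.0.1:2379/v2/keys/foo" // assumed member client URL and key
		clients  = 64                                 // concurrent clients, as in one benchmark row
		requests = 100000                             // total requests per round
	)

	latencies := make([]time.Duration, requests)
	var next int64 // index of the next request to issue

	start := time.Now()
	var wg sync.WaitGroup
	for c := 0; c < clients; c++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			// A per-goroutine client reuses keep-alive connections,
			// mirroring the connection-reuse patch applied to hey.
			client := &http.Client{}
			for {
				i := atomic.AddInt64(&next, 1) - 1
				if i >= requests {
					return
				}
				begin := time.Now()
				resp, err := client.Get(target)
				if err != nil {
					continue // failed requests keep a zero latency in this sketch
				}
				io.Copy(io.Discard, resp.Body)
				resp.Body.Close()
				latencies[i] = time.Since(begin)
			}
		}()
	}
	wg.Wait()
	elapsed := time.Since(start)

	sort.Slice(latencies, func(i, j int) bool { return latencies[i] < latencies[j] })
	p90 := latencies[len(latencies)*90/100]
	fmt.Printf("read QPS: %.0f  90th percentile latency: %v\n",
		float64(requests)/elapsed.Seconds(), p90)
}
```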

Performance

Single Key Read Performance

| key size in bytes | number of clients | target etcd server | average read QPS | read QPS stddev | average 90th percentile latency (ms) | latency stddev (ms) |
|---|---|---|---|---|---|---|
| 64 | 1 | leader only | 2303 | 200 | 0.49 | 0.06 |
| 64 | 64 | leader only | 15048 | 685 | 7.60 | 0.46 |
| 64 | 256 | leader only | 14508 | 434 | 29.76 | 1.05 |
| 256 | 1 | leader only | 2162 | 214 | 0.52 | 0.06 |
| 256 | 64 | leader only | 14789 | 792 | 7.69 | 0.48 |
| 256 | 256 | leader only | 14424 | 512 | 29.92 | 1.42 |
| 64 | 64 | all servers | 45752 | 2048 | 2.47 | 0.14 |
| 64 | 256 | all servers | 46592 | 1273 | 10.14 | 0.59 |
| 256 | 64 | all servers | 45332 | 1847 | 2.48 | 0.12 |
| 256 | 256 | all servers | 46485 | 1340 | 10.18 | 0.74 |

Single Key Write Performance

| key size in bytes | number of clients | target etcd server | average write QPS | write QPS stddev | average 90th percentile latency (ms) | latency stddev (ms) |
|---|---|---|---|---|---|---|
| 64 | 1 | leader only | 55 | 4 | 24.51 | 13.26 |
| 64 | 64 | leader only | 2139 | 125 | 35.23 | 3.40 |
| 64 | 256 | leader only | 4581 | 581 | 70.53 | 10.22 |
| 256 | 1 | leader only | 56 | 4 | 22.37 | 4.33 |
| 256 | 64 | leader only | 2052 | 151 | 36.83 | 4.20 |
| 256 | 256 | leader only | 4442 | 560 | 71.59 | 10.03 |
| 64 | 64 | all servers | 1625 | 85 | 58.51 | 5.14 |
| 64 | 256 | all servers | 4461 | 298 | 89.47 | 36.48 |
| 256 | 64 | all servers | 1599 | 94 | 60.11 | 6.43 |
| 256 | 256 | all servers | 4315 | 193 | 88.98 | 7.01 |

Performance Changes

  • Because etcd now records metrics for each API call, read QPS shows a minor decrease in most scenarios. This small performance cost was judged a reasonable investment for the breadth of monitoring and debugging information the metrics provide (the sketch after this list shows how to read them back from a member).

  • Write QPS to the cluster leader increases by a small margin. This is because the main loop and the entry-apply loop were decoupled in the etcd raft logic, eliminating several points where one blocked the other.

  • Write QPS to all members increases by a significant margin, because followers now receive the latest commit index sooner and commit proposals more quickly.
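
The per-call metrics mentioned above are exposed by each member in Prometheus text format at /metrics on its client URL. The sketch below simply dumps the etcd-prefixed series from one member; the member URL is an assumption.

```go
// metricscheck.go: a minimal sketch that dumps the Prometheus metrics an
// etcd 2.2 member exposes at /metrics. The member URL is an assumption.
package main

import (
	"bufio"
	"fmt"
	"net/http"
	"strings"
)

func main() {
	resp, err := http.Get("http://10.0.0.1:2379/metrics") // assumed client URL
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// Print only etcd-prefixed series to keep the output readable.
	scanner := bufio.NewScanner(resp.Body)
	for scanner.Scan() {
		line := scanner.Text()
		if strings.HasPrefix(line, "etcd_") {
			fmt.Println(line)
		}
	}
}
```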
