
Benchmarking etcd v2.2.0-rc

Physical machine

GCE n1-highcpu-2 machine type

  • 1x dedicated local SSD mounted under /var/lib/etcd
  • 1x dedicated slow disk for the OS
  • 1.8 GB memory
  • 2x CPUs

etcd Cluster

3 etcd 2.2.0-rc members, each running on a single machine.

Detailed versions:

etcd Version: 2.2.0-alpha.1+git
Git SHA: 59a5a7e
Go Version: go1.4.2
Go OS/Arch: linux/amd64
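
These fields correspond to what the etcd binary reports for its own build; on each member this can be collected with the version flag (the binary path below is illustrative):

```sh
# Print the build information of the etcd binary under test.
/usr/local/bin/etcd --version
```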

Also, we use 3 etcd 2.1.0 alpha-stage members to form a cluster as the performance baseline. etcd's commit head is at c7146bd5, the same commit used in the etcd 2.1 benchmark. The percentage changes in the tables below are relative to this baseline.

Testing

Bootstrap another machine and use the hey HTTP benchmark tool to send requests to each etcd member. Check the benchmark hacking guide for detailed instructions.
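
A typical invocation looks like the following; the member address, key name, request count, and client count are illustrative and not the exact parameters used for these runs:

```sh
# Read benchmark: GET a single key with 64 concurrent clients.
hey -n 100000 -c 64 http://10.0.1.10:2379/v2/keys/foo

# Write benchmark: PUT a 256-byte value with 64 concurrent clients.
hey -n 100000 -c 64 -m PUT -T "application/x-www-form-urlencoded" \
  -d "value=$(head -c 256 /dev/zero | tr '\0' 'x')" \
  http://10.0.1.10:2379/v2/keys/foo
```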

Performance

reading one single key

| key size in bytes | number of clients | target etcd server | read QPS | 90th Percentile Latency (ms) |
|------------------:|------------------:|--------------------|---------:|-----------------------------:|
| 64  | 1   | leader only | 2804 (-5%)  | 0.4 (+0%)  |
| 64  | 64  | leader only | 17816 (+0%) | 5.7 (-6%)  |
| 64  | 256 | leader only | 18667 (-6%) | 20.4 (+2%) |
| 256 | 1   | leader only | 2181 (-15%) | 0.5 (+25%) |
| 256 | 64  | leader only | 17435 (-7%) | 6.0 (+9%)  |
| 256 | 256 | leader only | 18180 (-8%) | 21.3 (+3%) |
| 64  | 64  | all servers | 46965 (-4%) | 2.1 (+0%)  |
| 64  | 256 | all servers | 55286 (-6%) | 7.4 (+6%)  |
| 256 | 64  | all servers | 46603 (-6%) | 2.1 (+5%)  |
| 256 | 256 | all servers | 55291 (-6%) | 7.3 (+4%)  |

writing one single key

| key size in bytes | number of clients | target etcd server | write QPS | 90th Percentile Latency (ms) |
|------------------:|------------------:|--------------------|----------:|-----------------------------:|
| 64  | 1   | leader only | 76 (+22%)   | 19.4 (-15%) |
| 64  | 64  | leader only | 2461 (+45%) | 31.8 (-32%) |
| 64  | 256 | leader only | 4275 (+1%)  | 69.6 (-10%) |
| 256 | 1   | leader only | 64 (+20%)   | 16.7 (-30%) |
| 256 | 64  | leader only | 2385 (+30%) | 31.5 (-19%) |
| 256 | 256 | leader only | 4353 (-3%)  | 74.0 (+9%)  |
| 64  | 64  | all servers | 2005 (+81%) | 49.8 (-55%) |
| 64  | 256 | all servers | 4868 (+35%) | 81.5 (-40%) |
| 256 | 64  | all servers | 1925 (+72%) | 47.7 (-59%) |
| 256 | 256 | all servers | 4975 (+36%) | 70.3 (-36%) |

performance changes explanation

  • Read QPS decreases by 5~8% in most scenarios because etcd now records store metrics for each store operation. These metrics are important for monitoring and debugging, so the cost is acceptable.

  • Write QPS to the leader increases by 20~30% because the raft main loop and the entry apply loop are now decoupled, so they no longer block each other (see the sketch after this list).

  • Write QPS to all servers increases by 30~80% because followers receive the latest commit index earlier and can commit proposals faster.
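
As a rough illustration of that decoupling, the sketch below (not etcd's actual code) shows the general pattern: the raft loop hands committed entries to a separate apply goroutine over a buffered channel, so committing new entries is not blocked while earlier ones are still being applied to the store.

```go
package main

import (
	"fmt"
	"time"
)

// entry stands in for a committed raft log entry.
type entry struct{ index uint64 }

func main() {
	// Buffered channel between the raft main loop and the apply loop.
	applyc := make(chan []entry, 128)
	done := make(chan struct{})

	// Apply loop: applies committed entries to the store at its own pace.
	go func() {
		defer close(done)
		for ents := range applyc {
			for _, e := range ents {
				time.Sleep(time.Millisecond) // stand-in for the store write
				fmt.Println("applied entry", e.index)
			}
		}
	}()

	// Raft main loop: after a batch commits, hand it off and keep processing
	// new proposals without waiting for the apply loop to finish.
	var next uint64 = 1
	for i := 0; i < 5; i++ {
		batch := []entry{{index: next}, {index: next + 1}}
		next += 2
		applyc <- batch
	}
	close(applyc)
	<-done // wait for the apply loop to drain (demo only)
}
```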

