
Storage Memory Usage Benchmark

Two components of etcd storage consume physical memory. The etcd process allocates an in-memory index to speed key lookup. The process’s page cache, managed by the operating system, stores recently-accessed data from disk for quick re-use.

The in-memory index holds all the keys in a B-tree data structure, along with pointers to the on-disk data (the values). Each key in the B-tree may contain multiple pointers, pointing to different versions of its values. The theoretical memory consumption of the in-memory index can hence be approximated with the formula:

N * (c1 + avg_key_size) + N * (avg_versions_of_key) * (c2 + size_of_pointer)

where c1 is the key metadata overhead and c2 is the version metadata overhead.
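
To make the formula concrete, here is a minimal Go sketch that evaluates it. This is illustrative only, not etcd code; the function and parameter names are hypothetical, and the example assumes an 8-byte pointer on a 64-bit machine.

    package main

    import "fmt"

    // approxIndexBytes evaluates the formula above; c1 and c2 are the
    // metadata overhead constants derived in the benchmark below.
    func approxIndexBytes(n, avgVersionsOfKey, avgKeySize, c1, c2, sizeOfPointer int) int {
        return n*(c1+avgKeySize) + n*avgVersionsOfKey*(c2+sizeOfPointer)
    }

    func main() {
        // 100K keys, one version each, 64-byte keys, with c1=120 and c2=30.
        fmt.Println(approxIndexBytes(100000, 1, 64, 120, 30, 8), "bytes") // ~22.2 MB
    }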

The diagram below shows the detailed structure of the in-memory index B-tree.



                                In mem index

                               +------------+
                               | key || ... |
  +--------------+             |     ||     |
  |              |             +------------+
  |              |             | v1  || ... |
  |   disk    <----------------|     ||     | Tree Node
  |              |             +------------+
  |              |             | v2  || ... |
  |           <----------------+     ||     |
  |              |             +------------+
  +--------------+       +-----+    |   |   |
                         |     |    |   |   |
                         |     +------------+
                         |
                         |
                         ^
                       +-----+
                      | ... |
                      |     |
                      +-----+
                      | ... | Tree Node
                      |     |
                      +-----+
                      | ... |
                      |     |
                       +-----+

Page cache memory is managed by the operating system and is not covered in detail in this document.

Testing Environment

etcd version

GCE n1-standard-2 machine type

  • 7.5 GB memory
  • 2x CPUs

In-memory index memory usage

In this test, we benchmark only the memory usage of the in-memory index. The goal is to find the constants c1 and c2 mentioned above and to understand the hard limit on the storage's memory consumption.

We measure memory consumption via Go's runtime.ReadMemStats, taking the difference in total allocated bytes before and after creating the index. This does not perfectly reflect the memory usage of the in-memory index itself, but it shows the rough consumption pattern.
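
Below is a minimal sketch of this measurement pattern; buildIndex is a hypothetical stand-in for constructing the index under test, not the benchmark's actual code.

    package main

    import (
        "fmt"
        "runtime"
    )

    // buildIndex is a hypothetical stand-in for building the in-memory
    // index: here it simply allocates 100K 64-byte keys.
    func buildIndex() [][]byte {
        keys := make([][]byte, 100000)
        for i := range keys {
            keys[i] = make([]byte, 64)
        }
        return keys
    }

    func main() {
        var before, after runtime.MemStats
        runtime.ReadMemStats(&before)

        index := buildIndex()

        runtime.ReadMemStats(&after)
        // TotalAlloc is cumulative, so the difference covers everything
        // allocated while the index was being built.
        fmt.Printf("allocated: %d bytes\n", after.TotalAlloc-before.TotalAlloc)
        runtime.KeepAlive(index)
    }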

N      versions   key size    memory usage
100K   1          64 bytes    22 MB
100K   5          64 bytes    39 MB
1M     1          64 bytes    218 MB
1M     5          64 bytes    432 MB
100K   1          256 bytes   41 MB
100K   5          256 bytes   65 MB
1M     1          256 bytes   409 MB
1M     5          256 bytes   506 MB

Based on the results, we can calculate c1 = 120 bytes and c2 = 30 bytes. Only two sets of data are needed to solve for c1 and c2, since they are the only unknowns in the formula; the values given here are the averages over the four (c1, c2) pairs we calculated. The key metadata overhead is still relatively nontrivial (about 50%) for small key-value pairs. However, this is a significant improvement over the old store, which had at least 1000% overhead.
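
As a sanity check, plugging the first row of the table into the formula (assuming an 8-byte pointer on a 64-bit machine) gives 100,000 × (120 + 64) + 100,000 × 1 × (30 + 8) ≈ 22.2 MB, which closely matches the measured 22 MB.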

Overall memory usage

The overall memory usage captures how much RSS the etcd process consumes together with its storage. Value size should have very little impact on etcd's overall memory usage, since values are kept on disk and only hot values are retained in memory, managed by the OS page cache.

N      versions   key size   value size   memory usage
100K   1          64 bytes   256 bytes    40 MB
100K   5          64 bytes   256 bytes    89 MB
1M     1          64 bytes   256 bytes    470 MB
1M     5          64 bytes   256 bytes    880 MB
100K   1          64 bytes   1 KB         102 MB
100K   5          64 bytes   1 KB         164 MB
1M     1          64 bytes   1 KB         587 MB
1M     5          64 bytes   1 KB         836 MB

Based on the results, value size does not significantly impact memory consumption; the minor increase is due to more data being held in the OS page cache.
