
Set up a local cluster

For testing and development deployments, the quickest and easiest way to get started is to configure a local cluster. For a production deployment, refer to the clustering section.

Local standalone cluster

Starting a cluster

Run the following to start etcd as a single-member, standalone cluster:

$ ./etcd
...

If the etcd binary is not present in the current working directory, it may instead be located at $GOPATH/bin/etcd or /usr/local/bin/etcd; adjust the path in the command accordingly.

The running etcd member listens on localhost:2379 for client requests.
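
To confirm the member is serving client requests, a quick health check can be run with etcdctl. This is a minimal sketch; it assumes etcdctl sits next to the etcd binary and that the member is using its default client port:

    $ ./etcdctl --endpoints=localhost:2379 endpoint health

The command reports whether the endpoint is healthy and how long the check took.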

Interacting with the cluster

Use etcdctl to interact with the running cluster:

  1. Store an example key-value pair in the cluster:

      $ ./etcdctl put foo bar
      OK
    

    If OK is printed, the key-value pair was stored successfully.

  2. Retrieve the value of foo:

    $ ./etcdctl get foo
    bar
    

    If bar is returned, interaction with the etcd cluster is working as expected.
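
Beyond put and get, etcdctl can also watch a key for changes. The following minimal sketch assumes the same standalone member: run the watch in one terminal, then update the key from a second terminal.

    # terminal 1: block and wait for events on the key "foo"
    $ ./etcdctl watch foo

    # terminal 2: update the key to trigger an event
    $ ./etcdctl put foo baz
    OK

The watching terminal should then print the PUT event for foo along with the new value.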

Local multi-member cluster

Starting a cluster

A Procfile at the base of the etcd git repository is provided to configure a local multi-member cluster easily. To start a multi-member cluster, navigate to the root of the etcd source tree and perform the following (an illustrative sketch of the flags that each Procfile entry passes to etcd appears after the steps):

  1. Install goreman to control Procfile-based applications:

    $ go get github.com/mattn/goreman
    
  2. Start a cluster with goreman using etcd’s stock Procfile:

    $ goreman -f Procfile start
    

    The members start running. They listen on localhost:2379, localhost:22379, and localhost:32379 respectively for client requests.
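
For reference, each entry in the Procfile starts one etcd member with its own name, ports, and data directory. The following is an abridged, illustrative sketch of the kind of flags involved for the first member; the exact values in the stock Procfile may differ:

    $ ./etcd --name infra1 --data-dir /tmp/infra1 \
      --listen-client-urls http://127.0.0.1:2379 --advertise-client-urls http://127.0.0.1:2379 \
      --listen-peer-urls http://127.0.0.1:2380 --initial-advertise-peer-urls http://127.0.0.1:2380 \
      --initial-cluster-token etcd-cluster-1 \
      --initial-cluster 'infra1=http://127.0.0.1:2380,infra2=http://127.0.0.1:22380,infra3=http://127.0.0.1:32380' \
      --initial-cluster-state new

The other two members follow the same pattern on the 22379/22380 and 32379/32380 port pairs, which matches the member list shown in the next section.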

Interacting with the cluster

Use etcdctl to interact with the running cluster:

  1. Print the list of members:

    $ etcdctl --write-out=table --endpoints=localhost:2379 member list
    

    The list of etcd members is displayed as follows:

    +------------------+---------+--------+------------------------+------------------------+
    |        ID        | STATUS  |  NAME  |       PEER ADDRS       |      CLIENT ADDRS      |
    +------------------+---------+--------+------------------------+------------------------+
    | 8211f1d0f64f3269 | started | infra1 | http://127.0.0.1:2380  | http://127.0.0.1:2379  |
    | 91bc3c398fb3c146 | started | infra2 | http://127.0.0.1:22380 | http://127.0.0.1:22379 |
    | fd422379fda50e48 | started | infra3 | http://127.0.0.1:32380 | http://127.0.0.1:32379 |
    +------------------+---------+--------+------------------------+------------------------+
    
  2. Store an example key-value pair in the cluster:

    $ etcdctl put foo bar
    OK
    

    If OK is printed, the key-value pair was stored successfully.
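
To inspect each member individually, etcdctl can also query the status of every endpoint. This is a minimal sketch assuming the default local client ports:

    $ etcdctl --write-out=table --endpoints=localhost:2379,localhost:22379,localhost:32379 endpoint status

Among other fields, the resulting table shows which member is currently the leader.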

Testing fault tolerance

To exercise etcd’s fault tolerance, kill a member and attempt to retrieve the key.

  1. Identify the process name of the member to be stopped.

    The Procfile lists the properties of the multi-member cluster. For example, consider the member with the process name etcd2.

  2. Stop the member:

    # kill etcd2
    $ goreman run stop etcd2
    
  3. Store a key:

    $ etcdctl put key hello
    OK
    
  4. Retrieve the key that is stored in the previous step:

    $ etcdctl get key
    hello
    
  5. Retrieve a key from the stopped member:

    $ etcdctl --endpoints=localhost:22379 get key
    

    The command should display an error caused by connection failure:

    2017/06/18 23:07:35 grpc: Conn.resetTransport failed to create client transport: connection error: desc = "transport: dial tcp 127.0.0.1:22379: getsockopt: connection refused"; Reconnecting to "localhost:22379"
    Error:  grpc: timed out trying to connect
    
  6. Restart the stopped member:

    $ goreman run restart etcd2
    
  7. Get the key from the restarted member:

    $ etcdctl --endpoints=localhost:22379 get key
    hello
    

    Restarting the member re-establishes the connection. etcdctl can now retrieve the key successfully. To learn more about interacting with etcd, read the interacting with etcd section.
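
As a final check, the health of all three members can be verified once etcd2 is back. This sketch assumes the same local endpoints used above:

    $ etcdctl --endpoints=localhost:2379,localhost:22379,localhost:32379 endpoint health

All three endpoints should report healthy once the restarted member has rejoined the cluster.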

© 2019 The etcd authors
