
The following test scenario sets up a Docker Swarm environment with a Swarm manager, a discovery backend service, and a single Swarm node. As the discovery backend we use Consul, which provides the following features:
* Service discovery (an interface for registering new Swarm nodes, a callback mechanism that lets Swarm managers react to newly added nodes, and a list of registered Swarm nodes)
* Failure detection
* Swarm store (a key-value store for persisting the cluster state)
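
Once the cluster from this scenario is running (after Step 5), these features can be observed directly through Consul's HTTP API on port 8500. A minimal sketch, assuming the standalone Swarm discovery backend stores its node entries under the default KV path 'docker/swarm/nodes' (verify against your Consul and Swarm versions):

    # service discovery: node entries registered via 'swarm join' (assumed default KV path)
    curl -s http://cluster-r730-1:8500/v1/kv/docker/swarm/nodes?recurse
    # failure detection: members of the Consul cluster and their status
    curl -s http://cluster-r730-1:8500/v1/catalog/nodes
    # key-value store: write and read back an arbitrary key
    curl -s -X PUT -d 'demo' http://cluster-r730-1:8500/v1/kv/test/key
    curl -s http://cluster-r730-1:8500/v1/kv/test/key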

The following diagram visualizes the components and their connections within the test scenario:



Using the Docker command line interface, we can access the local Docker daemon on port 2375 and the Docker Swarm on port 4000. The Swarm manager orchestrates containers on the remote Swarm node via port 2375, while the node registers itself with Consul via port 8500.
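
The target daemon is selected with the docker client's -H option; for example (hostnames as defined in the hosts file from Step 4):

    # talk to the local docker daemon over the TCP socket enabled in Step 4
    docker -H localhost:2375 info
    # talk to the Swarm manager, which reports on the whole cluster
    docker -H cluster-r730-1:4000 info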

Setup

Step 1: Preparation

  • Download & install VirtualBox (https://www.virtualbox.org/)
  • Download the latest CoreOS ISO image (https://coreos.com/os/docs/latest/booting-with-iso.html)
  • Create two virtual machines with the following settings (a scripted
    VBoxManage alternative is sketched after this list):

    Name:    CoreOS <version> - cluster-r730-1 /
             CoreOS <version> - cluster-r730-k20-1
    Type:    Linux
    Version: Linux 2.6 / 3.x / 4.x (64-bit)
    Memory:  1024 MB
    Storage: Create new virtual hard disk:
             Name: CoreOS <version> - cluster-r730-1.vdi
             Size: 8 GB (dynamically expanding storage)
    Network: Adapter 1: NAT
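
The same machines can also be created from the command line. A minimal sketch, assuming a recent VBoxManage is on the PATH; the VM name, disk path, and the 'Linux26_64' OS type are illustrative and may need adjusting:

    # create and register the first VM (repeat with the second name)
    VBoxManage createvm --name "CoreOS - cluster-r730-1" --ostype Linux26_64 --register
    VBoxManage modifyvm "CoreOS - cluster-r730-1" --memory 1024 --nic1 nat
    # create a dynamically expanding 8 GB disk and attach it via a SATA controller
    VBoxManage createhd --filename "CoreOS - cluster-r730-1.vdi" --size 8192
    VBoxManage storagectl "CoreOS - cluster-r730-1" --name SATA --add sata
    VBoxManage storageattach "CoreOS - cluster-r730-1" --storagectl SATA \
        --port 0 --device 0 --type hdd --medium "CoreOS - cluster-r730-1.vdi"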


Step 2: Install CoreOS

  • Mount the CoreOS image within the virtual machine 'CoreOS <version> - cluster-r730-1'
  • Run the virtual machine
  • Create a new user (mandatory) and install CoreOS

    Listing 1
    # create a password hash for the new user
    ssh_key=$(sudo openssl passwd -1)
    # specify user information
    cat > cloud-config-file <<- EOF
    #cloud-config
    
    users:
      - name: root
        passwd: ${ssh_key}
        groups:
          - sudo
          - docker
    EOF
    # install coreos to /dev/sda
    sudo coreos-install -d /dev/sda -C stable -c cloud-config-file
  • Repeat the above steps for the virtual machine 'CoreOS <version> - cluster-r730-k20-1'
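
After the installation and a reboot from disk, a quick sanity check (a sketch; the exact os-release contents vary with the CoreOS version):

    # confirm the machine booted from the installed disk, not the ISO
    cat /etc/os-release
    # confirm the password hash from the cloud-config file was applied
    sudo grep '^root:' /etc/shadow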


Step 3: Download Docker images

  • Pull the required Docker images:

    Listing 2
    docker pull swarm
    docker pull progrium/consul
    docker pull hello-world
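
A quick check that the pulls succeeded:

    # swarm, progrium/consul, and hello-world should all be listed
    docker images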

 

Step 4: Setup local network

  • Open the settings for the virtual machine 'CoreOS <version> - cluster-r730-1'
  • Change the network settings so the machines can reach each other on the 192.168.0.0/24 network (e.g. Network → Adapter 1: Bridged Adapter)
  • Run the virtual machine
  • Set static IP address

    Listing 3
    # set up a static IP address ('tee' is used because a plain '>' redirect
    # would be performed with user privileges, not root)
    sudo tee /etc/systemd/network/static.network <<- EOF
    [Match]
    Name=enp0s3
    
    [Network]
    Address=192.168.0.15/24
    Gateway=192.168.0.1
    EOF
    # apply the configuration
    sudo systemctl restart systemd-networkd
  • Add entries to the hosts file

    Listing 4
    sudo tee /etc/hosts <<- EOF
    # IP address      hostname           alias
    127.0.0.1         localhost          cluster-r730-1
    192.168.0.15      cluster-r730-1
    192.168.0.16      cluster-r730-k20-1
    EOF
  • Enable a TCP socket for the Docker daemon to allow remote access

    Listing 5
    sudo tee /etc/systemd/system/docker-tcp.socket <<- EOF
    [Unit]
    Description=Docker Socket for the API
    
    [Socket]
    ListenStream=2375
    BindIPv6Only=both
    Service=docker.service
    
    [Install]
    WantedBy=sockets.target
    EOF
    sudo systemctl enable docker-tcp.socket
    sudo systemctl stop docker
    sudo systemctl start docker-tcp.socket
    sudo systemctl start docker
    sudo reboot
  • Repeat these steps for the virtual machine 'CoreOS <version> - cluster-r730-k20-1' (adapt the localhost alias to cluster-r730-k20-1 and the static IP address to 192.168.0.16). A verification sketch follows below.
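
A short sketch to verify the network setup on each machine (interface name and addresses as configured above):

    # static address assigned?
    ip addr show enp0s3
    # name resolution via /etc/hosts?
    getent hosts cluster-r730-1 cluster-r730-k20-1
    # docker daemon reachable over its TCP socket?
    curl -s http://localhost:2375/version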


Step 5: Set up the test scenario scripts

  • Run the virtual machine 'CoreOS <version> - cluster-r730-1'
  • Set up the test scenario script

    Listing 6
    sudo tee ~/docker-swarm-master-consul-demo.sh <<- 'EOF'
    #!/bin/bash
    # docker-swarm-master-consul-demo.sh
    
    echo This demo script simulates a docker swarm environment with a consul discovery server and a swarm manager.
    
    echo STEP 1: Clean up existing docker containers.
    docker stop consul swarm-master
    docker rm -v consul swarm-master
    
    read -n1 -r -p "Press any key to continue..." key
    
    echo STEP 2: Run the consul server as a new docker container.
    echo The following ports are published:
    echo 8400: RPC \(optional\)
    echo 8500: HTTP
    echo 8600: DNS \(optional\)
    docker run -d -p 8400:8400 -p 8500:8500 -p 8600:53/udp --name=consul \
        --add-host=cluster-r730-1:192.168.0.15 \
        --add-host=cluster-r730-k20-1:192.168.0.16 \
        progrium/consul -server -bootstrap
    docker ps
    
    read -n1 -r -p "Press any key to continue..." key
    
    echo STEP 3: Run the swarm manager as a new docker container.
    echo         The following ports are published:
    echo         4000: RPC
    docker run -d -p 4000:4000 --name=swarm-master \
        --add-host=cluster-r730-1:192.168.0.15 \
        --add-host=cluster-r730-k20-1:192.168.0.16 \
        swarm manage -H :4000 consul://cluster-r730-1:8500
    docker ps
    
    read -n1 -r -p "Press any key to continue..." key
    
    echo STEP 4: Check that ports 4000 and 8500 are published.
    netstat -npl | grep '4000\|8500'
    
    read -n1 -r -p "Press any key to continue..." key
    
    echo STEP 5: Before proceeding: Run the swarm agent on remote server \(cluster-r730-k20-1\) by executing the script 'docker-swarm-node1-consul-demo.sh'.
    
    read -n1 -r -p "Press any key to continue..." key
    
    echo STEP 6: Check that swarm node \(1\) is reachable at cluster-r730-k20-1:2375.
    curl -s cluster-r730-k20-1:2375 > /dev/null && echo swarm node \(1\) is available || echo ERROR: swarm node \(1\) is not available
    
    echo STEP 7: List all nodes within the cluster.
    docker run --rm --add-host=cluster-r730-1:192.168.0.15 --add-host=cluster-r730-k20-1:192.168.0.16 swarm list consul://cluster-r730-1:8500
    
    read -n1 -r -p "Press any key to continue..." key
    
    echo STEP 8: Use regular docker commands on the swarm cluster.
    docker -H cluster-r730-1:4000 info
    
    echo End of the script.
    EOF
    chmod +x ~/docker-swarm-master-consul-demo.sh
  • Run the virtual machine 'CoreOS <version> - cluster-r730-k20-1'
  • Set up the test scenario script

    Listing 7
    sudo tee ~/docker-swarm-node1-consul-demo.sh <<- 'EOF'
    #!/bin/bash
    # docker-swarm-node1-consul-demo.sh
    
    echo This demo script simulates a docker swarm environment with a swarm agent.
    
    echo STEP 1: Clean up existing docker containers.
    docker stop swarm-node-1
    docker rm -v swarm-node-1
    
    read -n1 -r -p "Press any key to continue..." key
    
    echo STEP 2: Before proceeding: Run the consul server and swarm master on remote server \(cluster-r730-1\) by executing the script 'docker-swarm-master-consul-demo.sh'.
    
    read -n1 -r -p "Press any key to continue..." key
    
    echo STEP 3: Check that the consul server and swarm manager are reachable at cluster-r730-1:8500 and cluster-r730-1:4000.
    curl -s cluster-r730-1:8500 > /dev/null && echo consul server is available || echo ERROR: consul server is not available
    curl -s cluster-r730-1:4000 > /dev/null && echo swarm master is available || echo ERROR: swarm master is not available
    
    read -n1 -r -p "Press any key to continue..." key
    
    echo STEP 4: Run the swarm agent as a new docker container.
    echo        The agent advertises the local docker daemon on port 2375.
    docker run -d --name swarm-node-1 --add-host=cluster-r730-1:192.168.0.15 \
        --add-host=cluster-r730-k20-1:192.168.0.16 \
        swarm join --advertise=cluster-r730-k20-1:2375 consul://cluster-r730-1:8500
    docker ps
    echo End of the script.
    EOF
    chmod +x ~/docker-swarm-node1-consul-demo.sh
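
To run the demo, execute the two scripts side by side and follow the interactive prompts (the master script pauses until the node script has joined the cluster). A short usage sketch; the last command schedules the hello-world image pulled in Step 3 through the Swarm manager:

    # on cluster-r730-1:
    ~/docker-swarm-master-consul-demo.sh
    # on cluster-r730-k20-1, when prompted by the master script:
    ~/docker-swarm-node1-consul-demo.sh
    # afterwards, run a test container against the Swarm manager:
    docker -H cluster-r730-1:4000 run --rm hello-world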

 
