Use an SSH tunnel ending at the host from within a Docker container

There are many use cases for SSH tunnels, as they allow accessing resources behind firewalls and other obstacles over a secure channel. Since more and more services are containerized, it makes sense to use SSH tunnels within the context of containers too, especially for testing. Using SSH tunnels directly within Docker containers would require installing an SSH client and mounting keys. In many cases this is not possible without building a new Docker image which includes the client. As this is a cumbersome approach, an easy but insecure shortcut exists, which is recommended in many tutorials and StackOverflow posts: the --net=host flag, which gives the container access to all ports of the host, including open SSH tunnels. For obvious reasons, this is dangerous.

A better approach is to bind the SSH tunnel to the bridge network of the Docker service. This bridge is available to all containers connected to the particular network and can thus forward specific ports. This technique gives much finer-grained control over which containers may access the tunnel.

You can list the bridges with ifconfig.

br-b273916af970: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.18.0.1  netmask 255.255.0.0  broadcast 172.18.255.255
        ether dd:bb:aa:cc:bb  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 205  bytes 22043 (22.0 KB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

br-c92ab5650a7a: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.19.0.1  netmask 255.255.0.0  broadcast 172.19.255.255
        ether aa:bb:cc  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

docker0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.17.0.1  netmask 255.255.0.0  broadcast 172.17.255.255
        ether bb:aa:cc:aa:bb:  txqueuelen 0  (Ethernet)
        RX packets 3919  bytes 227485 (227.4 KB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 3205  bytes 8586636 (8.5 MB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

You can find out which bridge a container uses with docker inspect «container». The default bridge is called docker0. You then need to allow packet forwarding to this bridge in iptables. Note that the change below is not persisted; you need to apply it again after a reboot or make it permanent.

sudo iptables -I INPUT 3 -i docker0 -j ACCEPT
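One way to persist the rule, assuming an Ubuntu host where the iptables-persistent package is available, is sketched below:

# install the persistence helper (Debian/Ubuntu)
sudo apt-get install iptables-persistent
# save the currently active rules
sudo netfilter-persistent save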

After this step you can open an SSH tunnel on the host and also use it inside your containers, without having to install SSH clients, keys etc. The trick is to bind the SSH connection to the right interface. Below you can see an example command which allows connecting to a MySQL database via a bastion host, a typical scenario for cloud services.

ssh -L 172.17.0.1:7002:mysqlcluster:3306 bastion.example.org

Then you can access the forwarded port within the Docker container on the same IP / interface, e.g. 172.17.0.1:7002. This way you can, for instance, use the Percona PMM container for monitoring your cluster also on your local machine, without having to deploy it and expose it via the Web.
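From inside any container attached to the default bridge, the tunnel is then reachable via the gateway IP. A hypothetical check, assuming a container named app that has a MySQL client installed:

# connect to the tunnelled MySQL port on the docker0 gateway address
docker exec -it app mysql -h 172.17.0.1 -P 7002 -u dbuser -p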

Set up an SSH tunnel in IntelliJ

There seems to be a problem with the older MySQL driver and IntelliJ Ultimate. When I tried to connect to a remote MySQL instance through a tunnel, IntelliJ refused to connect and showed an error similar to the following:

Cannot connect to a database. Tried three times.

The linked online help page was not really helpful. I could create the tunnel manually and verify the settings; all seemed fine. As a next step I inspected the MySQL error log on the remote server and noticed that errors related to my connection attempts were showing up. Thus I knew at least that the issue was not caused by SSH, but rather seemed to be related to MySQL.

I then upgraded the MySQL driver and the connection worked out of the box. I use an SSH config section similar to this:

Host remote-mysql-database-server
	User bob
	Hostname 10.20.30.40
	IdentityFile ~/.ssh/rsa_id

This can then be used in IntelliJ. For more complex setups, for instance when you need to connect to AWS RDS and have to use a bastion host, I found it easier to set up the connection details in the SSH config file as well and keep the IntelliJ configuration simple.
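Such a bastion setup could look like the following sketch; the host names and the RDS endpoint are placeholders, and the LocalForward line exposes the remote database on a local port:

Host bastion-tunnel
	User bob
	Hostname bastion.example.org
	IdentityFile ~/.ssh/rsa_id
	# forward local port 3307 to the database as seen from the bastion
	LocalForward 3307 mydb.example.eu-west-1.rds.amazonaws.com:3306

With ssh bastion-tunnel running, IntelliJ then only needs to connect to localhost:3307.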

(Screenshots: IntelliJ MySQL settings, MySQL driver, SSH settings.)

Demystifying MySQL User Management (Part 1)

Managing user accounts can be a bit confusing when working with MySQL. The two main commands in this context are CREATE USER and GRANT. Obviously, CREATE USER is for adding a new user while GRANT is used for assigning permissions.

Prerequisites: Docker Compose

For testing access to a MySQL instance, we can use Docker for running said instance and two different client containers. The docker-compose snippet below orchestrates those three containers and puts them on a shared network.

version: '3'
services:
  mysql-db:
    image: mysql:8
    container_name: mysql-8-server
    restart: unless-stopped
    command: --default-authentication-plugin=mysql_native_password
    environment:
      MYSQL_DATABASE: 'world'
      MYSQL_ALLOW_EMPTY_PASSWORD: 'yes'      
    ports:
      - '3308:3306'
    expose:
      - '3306'      
    volumes:
      - my-data-volume:/var/lib/mysql
      - ./demo_data/world.sql:/docker-entrypoint-initdb.d/world.sql:ro
    networks:
      mysql-test-net:
        ipv4_address: 172.28.1.2
  mysql-client-1:
    image: alpine-mysql:latest
    container_name: client-1
    restart: unless-stopped
    command: tail -f /dev/null
    networks:
      mysql-test-net:
        ipv4_address: 172.28.1.3
  mysql-client-2:
      image: alpine-mysql:latest
      container_name: client-2
      restart: unless-stopped
      command: tail -f /dev/null
      networks:
        mysql-test-net:
          ipv4_address: 172.28.1.4
volumes:
  my-data-volume:
networks:
    mysql-test-net:
        ipam:
            driver: default
            config:
                - subnet: 172.28.0.0/16

The docker-compose script is pretty self-explanatory. We use a MySQL 8 container that allows connecting with the root user and an empty password. We persist the data of the MySQL database in a separate data volume called my-data-volume and we also load the example database called world, which we downloaded before.

In order to have a nice and simple image for the clients, we can use the Alpine image and pre-install the MySQL client inside it, by using the following Dockerfile.

FROM alpine:3.7
RUN apk add --no-cache mysql-client

You need to build it once on your host with the command

docker build --tag alpine-mysql:latest .

After starting docker-compose with docker-compose up, we end up with three containers:

  • mysql-8-server
  • client-1
  • client-2

MySQL User Accounts, Access and Permissions

In this blog post, I would like to highlight some issues that arise when you add accounts for users that can connect from different hosts. Our docker container accepts connections for root from any host. In the wild, this is of course bad practice and we are going to change that as soon as we have added a second account.

Connect to the MySQL server by starting the MySQL client inside the container client-1 and pointing it to the right host by providing the hostname of the database server. Docker resolves the container names for us nicely, because all three containers are inside the same network. In order to improve readability, we also change the prompt so that it is more obvious from which client we are connecting.

$ docker exec -it client-1 mysql -uroot -h mysql-8-server --prompt "Client 1 >"
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MySQL connection id is 9
Server version: 8.0.12 MySQL Community Server - GPL

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

Client 1 >

The Root User

Let's make our example more secure by allowing root to connect only from the machine where the database instance is running. While still connected from client-1 as root, we can change the permissions. But first, let's have a look at which users are there anyway.

Client 1 >SELECT User, Host, authentication_string  FROM mysql.user\G
*************************** 1. row ***************************
                 User: root
                 Host: %
authentication_string: 
*************************** 2. row ***************************
                 User: mysql.infoschema
                 Host: localhost
authentication_string: $A$005$THISISACOMBINATIONOFINVALIDSALTANDPASSWORDTHATMUSTNEVERBRBEUSED
*************************** 3. row ***************************
                 User: mysql.session
                 Host: localhost
authentication_string: $A$005$THISISACOMBINATIONOFINVALIDSALTANDPASSWORDTHATMUSTNEVERBRBEUSED
*************************** 4. row ***************************
                 User: mysql.sys
                 Host: localhost
authentication_string: $A$005$THISISACOMBINATIONOFINVALIDSALTANDPASSWORDTHATMUSTNEVERBRBEUSED
*************************** 5. row ***************************
                 User: root
                 Host: localhost
authentication_string: 
5 rows in set (0.00 sec)

As we can see, there are five preinstalled accounts. The users mysql.infoschema, mysql.session and mysql.sys are internal accounts that cannot be used for client connections. Do not delete them though, they are needed for internal tasks. Then there are two root accounts: 'root'@'%' and 'root'@'localhost'. The first one is the problematic one, because with the wildcard '%', clients can connect from any remote computer. We definitely do not want that.

So let's drop it:

DROP USER 'root'@'%';

When you now run the SELECT query again, you will notice that it still works, even after you run FLUSH PRIVILEGES. The reason is that the server verifies the connection settings only the next time the client connects. So let's exit the session and try to connect again.

$ docker exec -it client-1 mysql -uroot -h mysql-8-server --prompt "Client 1 >"
ERROR 1130 (HY000): Host '172.28.1.3' is not allowed to connect to this MySQL server

As expected, we cannot connect from a remote host anymore. Let's log in from within the MySQL container instead.

docker exec -it mysql-8-server  mysql -uroot -h localhost --prompt "Server >"
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 11
Server version: 8.0.12 MySQL Community Server - GPL

Copyright (c) 2000, 2018, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

Server >

Note how we are now connecting to the mysql-8-server and using localhost as the hostname.

Localhost vs 127.0.0.1

In the example above we connected using localhost as the hostname. When we try the same with the IP address 127.0.0.1 of the loopback interface, which is often considered synonymous, it does not work anymore.

docker exec -it mysql-8-server mysql -uroot -h 127.0.0.1 --prompt "Server >"
ERROR 1130 (HY000): Host '127.0.0.1' is not allowed to connect to this MySQL server

This is confusing; after all, we can verify that localhost resolves to 127.0.0.1 (after installing the tool inside the MySQL container with apt-get install inetutils-ping).

But there is more to it. First of all, MySQL can use hostname resolution and would make a DNS request to get the IP address for a hostname. As this can be pretty slow, it is turned off by default, as you can verify like this:

Server >show variables like 'skip_name_resolve';
+-------------------+-------+
| Variable_name     | Value |
+-------------------+-------+
| skip_name_resolve | ON    |
+-------------------+-------+
1 row in set (0.00 sec)

Furthermore, localhost is a special hostname for MySQL. You can connect to MySQL either via a TCP connection or via a file socket. The file socket only works on the same machine the server is running on, and it is the faster method. This is why MySQL treats connections coming from localhost as socket connections by default. If you want to use 127.0.0.1, you can add it as a second host for the existing user using GRANT.
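You can make this behavior explicit with the MySQL client's --protocol option:

# connects via the Unix socket file
mysql -u root -h localhost --protocol=SOCKET
# forces a TCP connection instead
mysql -u root -h 127.0.0.1 --protocol=TCP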

Server >GRANT ALL PRIVILEGES ON *.* TO 'root'@'127.0.0.1';
ERROR 1410 (42000): You are not allowed to create a user with GRANT

Now that's weird. After all, we are root and should be allowed to do anything, yet we see the error message above. The reason is that there does not exist an account for 'root'@'127.0.0.1' yet.

Server >CREATE USER 'root'@'127.0.0.1' IDENTIFIED BY 'super-secret';
Query OK, 0 rows affected (0.07 sec)

Server >GRANT ALL PRIVILEGES ON *.* to 'root'@'127.0.0.1';
Query OK, 0 rows affected (0.07 sec)

Server >GRANT GRANT OPTION ON *.* to 'root'@'127.0.0.1';
Query OK, 0 rows affected (0.08 sec)

You always need to create a user account first before setting permissions with GRANT.

While we are at it, we can also set a password for root. Note how we set the password for both root accounts. Accounts are always identified with ‘username’@’hostname’.

Server >ALTER USER 'root'@'localhost' IDENTIFIED BY 'super-secret';
Server >ALTER USER 'root'@'127.0.0.1' IDENTIFIED BY 'super-secret';

Adding Users

Let's log on with our new credentials again and create a new user called bob with some permissions for the world database.

CREATE USER 'bob' IDENTIFIED BY 'secret';

What looks pretty harmless adds an account for the user called bob. Without a host, MySQL simply adds the wildcard for hosts and therefore allows connections to the MySQL instance from any IP address. You can verify it with the following command, which I am going to use more often in the course of this post.

Server >SHOW GRANTS FOR 'bob';
+---------------------------------+
| Grants for bob@%                |
+---------------------------------+
| GRANT USAGE ON *.* TO `bob`@`%` |
+---------------------------------+
1 row in set (0.00 sec)

That's not what we had in mind. Let's delete bob's account again and create it with a proper host.

DROP USER 'bob';
CREATE USER 'bob'@'172.28.1.3' IDENTIFIED BY 'secret';

Note how we limited the account to allow connections only from Client 1, by providing its IP. You can't connect from Client 2.

stefan@stefan-desktop:~/Docker-Projects/MySQL-Demo$ docker exec -it client-1 mysql -ubob -h mysql-8-server -psecret --prompt "Client 1 >"
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MySQL connection id is 14
Server version: 8.0.12 MySQL Community Server - GPL

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

Client 1 >exit
Bye
stefan@stefan-desktop:~/Docker-Projects/MySQL-Demo$ docker exec -it client-2 mysql -ubob -h mysql-8-server -psecret --prompt "Client 2 >"
ERROR 1045 (28000): Access denied for user 'bob'@'172.28.1.4' (using password: YES)

These are the very basics of MySQL account management. In the future I plan to go into more details.

Percona Xtrabackup – Some General Tips

When you stream Xtrabackup to a gzipped tar file, it also contains the information about the binary log position where the server was at when the backup was started. This information is needed to start the slave at the same position again.

time nice -n 15 xtrabackup --backup --slave-info --safe-slave-backup --stream=tar | gzip -c > /var/backup/percona.dump.tar.gz


In order to get that position without extracting the archive file, you can use the following command:

tar -xOf /var/backup/percona.dump.tar.gz xtrabackup_binlog_info

You can list all files in the archive like this:

tar -ztvf percona.dump.tar.gz


Note that both operations can take quite some time and resources.
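For restoring from such an archive, a sketch like the following applies; the paths are placeholders, and the -i flag is required for tar to extract streamed Xtrabackup archives correctly:

mkdir -p /var/restore
tar -xizf /var/backup/percona.dump.tar.gz -C /var/restore
# apply the transaction logs so the data files are consistent
xtrabackup --prepare --target-dir=/var/restore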

Lightning fast integration tests with Docker, MySQL and tmpfs

Integration tests that involve database operations require tearing down and re-initializing a database multiple times. Although most developer machines have an SSD and plenty of RAM, database initialization can nevertheless consume a considerable amount of time.

Docker allows defining volumes that are mounted directly into memory by using tmpfs. We can use this feature to speed up disk-heavy operations such as database imports by moving the data from disk into memory.

The following example measures the time for writing 1GB worth of data to an SSD:

dd if=/dev/zero of=/tmp/output bs=1024k count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1,1 GB, 1,0 GiB) copied, 2,22033 s, 484 MB/s


For comparison, the following steps create a RAM disk and write the data to memory.

$ sudo mkdir -p /tmp/tmpfs
$ sudo mount -o size=1G -t tmpfs none /tmp/tmpfs
$ dd if=/dev/zero of=/tmp/tmpfs/output bs=1024k count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1,1 GB, 1,0 GiB) copied, 0,309017 s, 3,5 GB/s



As you can see, writing 1GB to memory is about 7x faster. With the following docker run command, you can spin up a default MySQL container whose data directory resides in a tmpfs.

docker run -d \
  --rm \
  --name mysql-56-in-memory \
  -p 3307:3306 \
  --tmpfs /var/lib/mysql:rw,noexec,nosuid,size=1024m \
  -e MYSQL_ALLOW_EMPTY_PASSWORD=TRUE \
  -e MYSQL_DATABASE=dev \
  mysql:5.6



The arguments of docker run mean the following:

  * --rm: delete the container once it is stopped
  * --name: a name for the container
  * -p: map the host's port 3307 to port 3306 inside the container. This allows running multiple MySQL containers in parallel and connecting to them from the host via the specified port
  * --tmpfs: mounts the data directory of the container to a RAM disk. It should be writable (rw). noexec prevents the execution of binaries, nosuid prevents changing the permission flags and size specifies the space occupied by the tmpfs partition in memory. Adapt this to your use case. The minimum for MySQL is around 200MB. Add the space needed for your data, indices etc.
  * MYSQL_ALLOW_EMPTY_PASSWORD does what it implies
  * MYSQL_DATABASE defines the name of a database to be created

If you run this command you can connect to the container like this: mysql -u root -h 127.0.0.1 -P 3307

The container behaves like a normal MySQL database, except that the data is not persisted on a hard disk, but only stored in ephemeral memory. If you stop the container, it will be removed by Docker, and if you reboot the machine the data will be gone. For obvious reasons this is only a good idea for test data that can be re-created at any time.
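You can verify that the data directory really lives in memory; assuming the container from above is running, the mount should be reported as tmpfs:

docker exec mysql-56-in-memory df -h /var/lib/mysql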

You can achieve the same also with Docker Compose if you would like to orchestrate multiple containers.

version: '3'
services:
  mysql-56-integration:
    container_name: mysql-56-in-memory
    restart: unless-stopped
    image: mysql:5.6
    environment:
      - MYSQL_ALLOW_EMPTY_PASSWORD='true'
      - MYSQL_HOST=''
    volumes:
      - data:/var/cache
    ports:
      - "3307:3306"

volumes:
  data:
    driver_opts:
      type: tmpfs
      device: tmpfs

Measure Replication Delay with Percona Toolkit Heartbeat (pt-heartbeat)

Replication lag occurs when the slave cannot keep up with the changes being made on the master instance. The reasons for a lag can be network congestion, poor performance on the slave or service interruptions of the slave mechanism, leading to a large pile of work that needs to be caught up by the slave. When you use SHOW SLAVE STATUS you can monitor the current delay, which is indicated in the Seconds_Behind_Master column. Ideally, this value is zero, but the smallest granularity you get for this delay counter is one second.

Percona offers a tool that allows measuring the delay more accurately, by writing timestamps into the master database and calculating the difference when they arrive at the slave: [pt-heartbeat][1]

On the Master

Create a dedicated database schema called percona and add the following table.

CREATE TABLE IF NOT EXISTS `percona`.`heartbeat` (
    ts                    varchar(26) NOT NULL,
    server_id             int unsigned NOT NULL PRIMARY KEY,
    file                  varchar(255) DEFAULT NULL,    -- SHOW MASTER STATUS
    position              bigint unsigned DEFAULT NULL, -- SHOW MASTER STATUS
    relay_master_log_file varchar(255) DEFAULT NULL,    -- SHOW SLAVE STATUS
    exec_master_log_pos   bigint unsigned DEFAULT NULL  -- SHOW SLAVE STATUS
);


Then you can launch the script on the master, where it will write new timestamps periodically as long as the script is running.

pt-heartbeat -D percona --update --utc h=127.0.0.1,u=root,p=SECRET,P=3306

The -D flag specifies the database schema where the table for the heartbeat data is stored. The --update option is needed on the master for updating the table, and the last part is the DSN, which specifies host address, user name and password. Very important is the --utc flag, which ensures that the timestamps are interpreted as UTC, regardless of the timezone setting.

On the Slave

Create a user for reading the replicated heartbeat table like this:

GRANT REPLICATION SLAVE, PROCESS ON *.* TO `percona_checksum`@'10.20.30.40' IDENTIFIED BY 'SECRET';
GRANT SELECT ON percona.* TO `percona_checksum`@'10.20.30.40';


Then you can run the script and point it to the slave. It will output precise delay counts in fractions of a second.

pt-heartbeat h=127.0.0.1,u=percona_checksum,p=SECRET,P=3306 -D percona --monitor --utc --master-server-id 1

Notice the different DSN, the --monitor flag and the --master-server-id option, which must match the server id of your master. You need this because the tool supports hierarchies of masters and therefore needs to know which one to compare against.

Results

The results will look similar to this:

0.09s [  0.00s,  0.00s,  0.00s ]
0.02s [  0.20s,  0.00s,  0.00s ]
0.09s [  0.00s,  0.00s,  0.00s ]
0.03s [  0.02s,  0.00s,  0.00s ]
0.09s [  0.01s,  0.00s,  0.00s ]
0.09s [  0.01s,  0.00s,  0.00s ]
0.09s [  0.01s,  0.00s,  0.00s ]
0.08s [  0.01s,  0.00s,  0.00s ]
0.08s [  0.01s,  0.00s,  0.00s ]
0.10s [  0.01s,  0.00s,  0.00s ]
0.12s [  0.02s,  0.00s,  0.00s ]
0.08s [  0.01s,  0.00s,  0.00s ]
0.11s [  0.02s,  0.00s,  0.00s ]
0.08s [  0.02s,  0.00s,  0.00s ]
0.09s [  0.02s,  0.00s,  0.00s ]
0.08s [  0.02s,  0.00s,  0.00s ]
0.08s [  0.03s,  0.01s,  0.00s ]


The output is the current delay followed by moving averages over 1m, 5m and 15m, as you might know from your favorite OS already.

Have a look at the [official documentation][1], there are a lot of options available.





<div class="twttr_buttons">
  <div class="twttr_twitter">
    <a href="http://twitter.com/share?text=Measure+Replication+Delay+with+Percona+Toolkit+Heartbeat+%28pt-heartbeat%29" class="twitter-share-button" data-via="" data-hashtags=""  data-size="default" data-url="https://blog.stefanproell.at/2018/12/06/measure-replication-delay-with-percona-toolkit-heartbeat-pt-heartbeat/"  data-related="" target="_blank">Tweet</a>
  </div>
  
  <div class="twttr_followme">
    <a href="https://twitter.com/@stefanproell" class="twitter-follow-button" data-show-count="true" data-size="default"  data-show-screen-name="false"  target="_blank">Follow me</a>
  </div>
</div>

 [1]: https://www.percona.com/doc/percona-toolkit/LATEST/pt-heartbeat.html

Setup AWS MySQL 5.6 Aurora as a Slave for an external Master with SSL

Setting up Aurora as a slave for an external MySQL server that acts as the master is a bit tricky. Of course we want a secured connection; for this reason we need to create client certificates to be used by AWS RDS. The steps below should work for RDS as well.

Generate and Sign the Certificates

The process is actually simple, but AWS is picky about how you generate the certificates. I was using a SHA flag that was accepted by a regular MySQL 5.6 instance, but caused a cryptic (pun intended) MySQL 2026 Generic SSL error, and it was quite hard to find the cause. Also note that you need different common names (CN) for all three certificate pairs. They do not necessarily need to match the actual domain name, but they need to be different.

First we need to create the certificate authority that can sign the keys:

# Generate a certificate authority key pair
openssl genrsa 2048 > ca-key.pem
# Notice the CN name. It needs to be different for all of the three key pairs that we create!
openssl req -new -x509 -nodes -days 3600 -key ca-key.pem -out ca.pem -subj "/C=AT/ST=Tirol/L=Innsbruck/O=The Good Corp/OU=IT Department/CN=ca.mysql"

Then create the server key pair:

# Generate a server key. Note again the different CN
openssl req -newkey rsa:2048 -days 3600 -nodes -keyout server-key.pem -out server-req.pem -subj "/C=AT/ST=Tirol/L=Innsbruck/O=The Good Corp/OU=IT Department/CN=server.mysql"
# Convert the format
openssl rsa -in server-key.pem -out server-key.pem
# Sign it
openssl x509 -req -in server-req.pem -days 3600 -CA ca.pem -CAkey ca-key.pem -set_serial 01 -out server-cert.pem

Finally we generate a client certificate and its key. You can repeat these steps to generate multiple certificates for clients.

# Again, note the CN
openssl req -newkey rsa:2048 -days 3600 -nodes -keyout client-key.pem -out client-req.pem -subj "/C=AT/ST=Tirol/L=Innsbruck/O=The Good Corp/OU=IT Department/CN=client.mysql"
# Convert
openssl rsa -in client-key.pem -out client-key.pem
# Sign
openssl x509 -req -in client-req.pem -days 3600 -CA ca.pem -CAkey ca-key.pem -set_serial 01 -out client-cert.pem
# Verify
openssl verify -CAfile ca.pem server-cert.pem client-cert.pem

Now we have all the certs we need.

Master Setup

The setup is pretty standard. Add the server certificates to the MySQL configuration of your master and restart.

# SSL Server Certificate
ssl-ca=/etc/mysql/ssl/ca.pem
ssl-cert=/etc/mysql/ssl/server-cert.pem
ssl-key=/etc/mysql/ssl/server-key.pem

Then create a user for the slave:

CREATE USER 'aws'@'%' IDENTIFIED BY 'SECRET';
GRANT REPLICATION CLIENT, REPLICATION SLAVE ON *.* TO 'aws'@'%' IDENTIFIED BY 'SECRET' REQUIRE SSL;
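Before configuring Aurora, you can check that the master accepts SSL connections with the freshly signed client certificate; a sketch with a placeholder host name:

mysql -h master.example.org -u aws -p \
  --ssl-ca=ca.pem --ssl-cert=client-cert.pem --ssl-key=client-key.pem \
  -e "SHOW STATUS LIKE 'Ssl_cipher';"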


Slave Setup

On AWS you do not have SUPER privileges, but you can use stored procedures provided by Amazon to set up the slave.

Start fresh by removing old records. If there was no previous setup, there might be an error.

CALL mysql.rds_remove_binlog_ssl_material;
CALL mysql.rds_reset_external_master;



Now you need to pass the client certificate data as a JSON to AWS Aurora.

CALL mysql.rds_import_binlog_ssl_material('{"ssl_ca":"-----BEGIN CERTIFICATE-----\nMIIBAgMBVRpcm9sMRIw… …\n-----END CERTIFICATE-----\n","ssl_cert":"-----BEGIN CERTIFICATE-----\nKAoIBAQCzn28awhyN8V56Z2bskCiMhJt4 …\n-----END CERTIFICATE-----\n","ssl_key":"-----BEGIN RSA PRIVATE KEY-----\nSbeLNsRzrPoCVGGqwqR6gE6AZu …\n-----END RSA PRIVATE KEY-----\n"}');



A message that the SSL data was accepted will appear if you pasted the certificate, the key and the CA certificate correctly.
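If the master connection details have not been registered yet, this is done with another stored procedure before starting the replication. A sketch with placeholder host, credentials and the binlog coordinates taken from SHOW MASTER STATUS on the master; the last argument enables SSL:

CALL mysql.rds_set_external_master('master.example.org', 3306, 'aws', 'SECRET', 'mysql-bin.000001', 4, 1);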

Finally, start the replication and check the status:

CALL mysql.rds_start_replication;
SHOW SLAVE STATUS\G

Tests and Troubleshooting

On the master, you can check whether the slave even tries to connect, for instance with tcpdump. In the example below, the IP 1.2.3.4 would be the AWS gateway address as seen by your firewall.

sudo tcpdump src host 1.2.3.4 -vv




<div class="twttr_buttons">
  <div class="twttr_twitter">
    <a href="http://twitter.com/share?text=Setup+AWS+MySQL+5.6+Aurora+as+a+Slave+for+an+external+Master+with+SSL" class="twitter-share-button" data-via="" data-hashtags=""  data-size="default" data-url="https://blog.stefanproell.at/2018/11/30/setup-aws-mysql-5-6-aurora-as-a-slave-for-an-external-master-with-ssl/"  data-related="" target="_blank">Tweet</a>
  </div>
  
  <div class="twttr_followme">
    <a href="https://twitter.com/@stefanproell" class="twitter-follow-button" data-show-count="true" data-size="default"  data-show-screen-name="false"  target="_blank">Follow me</a>
  </div>
</div>

Install Innotop from Source

Innotop is a great tool but not included in the current Ubuntu repositories. Here is how you install it manually:

# Install perl database interface
sudo apt-get install libdbi-perl
# Install MySQL and Terminal perl modules
sudo cpan Term::ReadKey DBD::mysql
# Clone innotop
git clone https://github.com/innotop/innotop.git
# Enter directory
cd innotop
# Make
perl Makefile.PL
# Install
sudo make install

Then you can run innotop like this:

innotop --user $ADMIN_USER --password $ADMIN_PASSWORD --host $HOST




<div class="twttr_buttons">
  <div class="twttr_twitter">
    <a href="http://twitter.com/share?text=Install+Innotop+from+Source" class="twitter-share-button" data-via="" data-hashtags=""  data-size="default" data-url="https://blog.stefanproell.at/2018/11/07/install-innotop-from-source/"  data-related="" target="_blank">Tweet</a>
  </div>
  
  <div class="twttr_followme">
    <a href="https://twitter.com/@stefanproell" class="twitter-follow-button" data-show-count="true" data-size="default"  data-show-screen-name="false"  target="_blank">Follow me</a>
  </div>
</div>

Grafana and InfluxDB with SSL inside a Docker Container

Self-signed SSL certificates

On the host, create a directory for storing the self-signed SSL certificates. This directory will be mounted into the Grafana container as well as into the InfluxDB container as /var/ssl. Create the self-signed SSL certificates as follows:

mkdir -p /docker/ssl
cd /docker/ssl/
# Generate a private key
openssl genrsa -des3 -out server.key 1024
# Generate CSR
openssl req -new -key server.key -out server.csr
# Remove password
openssl rsa -in server.key -out server.key
# Generate self signed cert
openssl x509 -req -days 365 -in server.csr -signkey server.key -out server.crt
# Set permissions
sudo chmod 644 server.crt
sudo chmod 600 server.key

Next, create a config directory and create individual configuration files for Grafana and InfluxDB: mkdir conf

Grafana

In the file ./conf/grafana/defaults.ini set the protocol to https and provide the paths to the mounted ssl directory in the container.

#################################### Server ##############################
[server]
# Protocol (http, https, socket)
protocol = https
...
...
# https certs & key file
cert_file = /var/ssl/server.crt
cert_key = /var/ssl/server.key
...


InfluxDB

The file ./conf/influxdb/influxdb.conf is also pretty simple. Add a [http] category and add the settings:

[meta]
  dir = "/var/lib/influxdb/meta"

[data]
  dir = "/var/lib/influxdb/data"
  engine = "tsm1"
  wal-dir = "/var/lib/influxdb/wal"

[http]
  https-enabled = true
  https-certificate = "/var/ssl/server.crt"
  https-private-key = "/var/ssl/server.key"



Environment

You can set environment variables in env files for the services.

env.grafana

GF_INSTALL_PLUGINS=grafana-clock-panel,briangann-gauge-panel,natel-plotly-panel,grafana-simple-json-datasource



env.influxdb

INFLUXDB_REPORTING_DISABLED=true
INFLUXDB_DB=
INFLUXDB_HTTP_AUTH_ENABLED=true
INFLUXDB_ADMIN_USER=admin
INFLUXDB_ADMIN_PASSWORD=
INFLUXDB_USER=
INFLUXDB_USER_PASSWORD=
INFLUXDB_WRITE_USER=
INFLUXDB_WRITE_USER_PASSWORD=



Docker Compose

Now you can launch the services by using docker-compose up with the following file.

version: '2'

services:
  influxdb:
    image: influxdb:latest
    container_name: influxdb
    ports:
      - "8083:8083"
      - "8086:8086"
      - "8090:8090"
    env_file:
      - 'env.influxdb'
    volumes:
      - data-influxdb:/var/lib/influxdb
      - /docker/ssl:/var/ssl
      - /docker/conf/influxdb/influxdb.conf:/etc/influxdb/influxdb.conf

  grafana:
    image: grafana/grafana:latest
    container_name: grafana
    ports:
      - "3000:3000"
    links:
      - influxdb
    env_file:
      - 'env.grafana'
    volumes:
      - data-grafana:/var/lib/grafana
      - /docker/ssl:/var/ssl
      - /docker/conf/grafana/defaults.ini:/usr/share/grafana/conf/defaults.ini

volumes:
  data-influxdb:
  data-grafana:
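Once the stack is up, a quick check that both services really speak HTTPS; the -k flag makes curl accept the self-signed certificate:

curl -k https://localhost:8086/ping
curl -k https://localhost:3000/login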

Let's Encrypt Setup

If you require valid certificates, you can also use certificates from Let's Encrypt.

First, create the certificates on the host:

certbot certonly --standalone --preferred-challenges http --renew-by-default -d iot.example.org


Then use this docker-compose file.

version: '2'

services:
  influxdb:
    image: influxdb:latest
    container_name: influxdb
    ports:
      - "8083:8083"
      - "8086:8086"
      - "8090:8090"
    env_file:
      - 'env.influxdb'
    volumes:
      - data-influxdb:/var/lib/influxdb
      - /etc/letsencrypt/live/iot.example.org/fullchain.pem:/var/ssl/server.crt
      - /etc/letsencrypt/live/iot.example.org/privkey.pem:/var/ssl/server.key
      - /docker/conf/influxdb/influxdb.conf:/etc/influxdb/influxdb.conf

  grafana:
    image: grafana/grafana:latest
    container_name: grafana
    ports:
      - "3000:3000"
    links:
      - influxdb
    env_file:
      - 'env.grafana'
    volumes:
      - data-grafana:/var/lib/grafana
      - /etc/letsencrypt/live/iot.example.org/fullchain.pem:/var/ssl/server.crt
      - /etc/letsencrypt/live/iot.example.org/privkey.pem:/var/ssl/server.key
      - /docker/conf/defaults.ini:/usr/share/grafana/conf/defaults.ini

volumes:
  data-influxdb:
  data-grafana:

Compile Percona Query Playback

Install the prerequisites and clone the repository.

sudo apt-get install libtbb-dev libmysqlclient-dev libboost-program-options-dev libboost-thread-dev libboost-regex-dev libboost-system-dev libboost-chrono-dev pkg-config cmake libssl-dev
git clone https://github.com/Percona-Lab/query-playback.git
cd query-playback/
mkdir build_dir
cd build_dir
cmake -DCMAKE_BUILD_TYPE=RelWithDebInfo ..


You might see this error:

CMake Error: The following variables are used in this project, but they are set to NOTFOUND.
Please set them or make sure they are set and tested correctly in the CMake files:
MYSQL_LIB
    linked by target "mysql_client" in directory /home/sproell/git/query-playback/percona_playback/mysql_client

-- Configuring incomplete, errors occurred!
See also "/home/sproell/git/query-playback/build_dir/CMakeFiles/CMakeOutput.log".
See also "/home/sproell/git/query-playback/build_dir/CMakeFiles/CMakeError.log".



I found this [issue on Github][1] and after editing the file CMakeLists.txt (in the directory ~/git/query-playback/percona_playback/mysql_client/CMakeLists.txt) as suggested, the tool compiles. You need to replace find_library(MYSQL_LIB "mysqlclient_r" PATH_SUFFIXES "mysql") with find_library(MYSQL_LIB "mysqlclient" PATH_SUFFIXES "mysql") (remove the _r suffix).

Then you can compile the project as [documented here][2].

cd build_dir
cmake -DCMAKE_BUILD_TYPE=RelWithDebInfo ..
make
sudo make install
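A typical invocation then replays a slow query log against a test instance. The options below are a sketch with placeholder credentials and log file; consult percona-playback --help for the authoritative flags:

percona-playback --mysql-host=127.0.0.1 --mysql-port=3306 \
    --mysql-username=root --mysql-password=SECRET \
    --mysql-schema=test --query-log-file=slow.log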

Jupyter docker stacks with a custom user

Jupyter allows setting a custom user instead of jovyan, which is the default for all containers of the [Jupyter Docker Stack][1]. You need to change this user or its UID and GID in order to get the permissions right when you mount a volume from the host into the Jupyter container. The following steps are required:

  1. Create an unprivileged user and an associated group on the host. Here we call both the user and the group docker_worker
  2. Add your host user to the group. This gives you the permissions to modify and read the files also on the host. This is useful if your working directory on the host is under source code control (e.g. git)
  3. Launch the container with the correct settings that change the user inside the container

It is important to know that during launch the container needs root privileges in order to change the permissions of the mounted host volume and inside the container. After the permissions have been changed, the user is switched back and the notebook does not run with root privileges, but as your new user. Thus make sure to secure your Docker service, as the permissions inside the container also apply to the host.

Prepare an unprivileged user on the host

1. sudo groupadd -g 1020 docker_worker
2. sudo useradd -s /bin/false -u 1010 -g 1020 docker_worker
3. Add your user to the group: sudo usermod -a -G docker_worker stefan


Docker-compose Caveats

It is important to know that docker-compose supports either an array or a dictionary for environment variables ([docs][2]). In the case below we use arrays and we quote all variables. If you accidentally use a dictionary, then the quotes would be passed along to the Jupyter script. You would then see this error message:

/usr/local/bin/start-notebook.sh: ignoring /usr/local/bin/start-notebook.d/*
Set username to: docker_worker
Changing ownership of /home/docker_worker to 1010:1020
chown: invalid user: ''-R''

The docker-compose file

version: '2'
services:
    datascience-notebook:
        image: jupyter/base-notebook:latest
        volumes:
            - /tmp/jupyter_test_dir:/home/docker_worker/work            
        ports:
            - 8891:8888
        command: "start-notebook.sh"
        user: root
        environment:
          NB_USER: 'docker_worker'
          NB_UID: 1010
          NB_GID: 1020
          CHOWN_HOME: 'yes'
          CHOWN_HOME_OPTS: -R


Here you can see that we set the variables that cause the container to ditch jovyan in favor of docker_worker.

> NB_USER: 'docker_worker'  
> NB_UID: 1010  
> NB_GID: 1020  
> CHOWN_HOME: 'yes'  
> CHOWN_HOME_OPTS: -R

This facilitates easy version control of the working directory of Jupyter. I also added the snippet to my [Github Jupyter template][3].

<div class="twttr_buttons">
  <div class="twttr_twitter">
    <a href="http://twitter.com/share?text=Jupyter+docker+stacks+with+a+custom+user" class="twitter-share-button" data-via="" data-hashtags=""  data-size="default" data-url="https://blog.stefanproell.at/2018/08/08/jupyter-docker-stacks-with-a-custom-user/"  data-related="" target="_blank">Tweet</a>
  </div>
  
  <div class="twttr_followme">
    <a href="https://twitter.com/@stefanproell" class="twitter-follow-button" data-show-count="true" data-size="default"  data-show-screen-name="false"  target="_blank">Follow me</a>
  </div>
</div>

 [1]: https://github.com/jupyter/docker-stacks
 [2]: https://docs.docker.com/compose/compose-file/#environment
 [3]: https://github.com/stefanproell/jupyter-notebook-docker-compose/blob/master/README.md

Unbrick a Netgear WNR3500L V2

It finally happened… After experimenting with alternative [firmware][1] for my WNR3500L router, I uploaded the wrong version and converted the device into a rather expensive paperweight. After some searching, I found a page explaining how to revitalize the router by using a USB-TTL converter cable. You can get one (e.g. a [CP2102 USB-to-TTL converter module][2]) for a few bucks on eBay, but expect a few weeks for the delivery from China. If you follow the tutorial on the OpenRouter page closely, it should work out quite nicely.

Update: 2018-01-21

It happened again. I flashed an unsupported version of LEDE onto the device and it did not want to boot again. I followed the same procedure as described [here][3], but the router was complaining about a version mismatch of the device ID and the image ID. It turns out I had to explicitly use binary mode for the TFTP transfer. Here are the steps again.

  1. Connect the serial cables as shown
  2. Make sure the router is off
  3. Launch minicom in a terminal and make sure the settings are valid for ttyUSB0
  4. Boot the router and immediately press Ctrl+C in the terminal
  5. Type tftpd in the prompt; the router should start a TFTP server on 192.168.1.1
  6. Open a second terminal and cd into the directory where the firmware is
  7. Make sure it's the right version 🙂
  8. Start the TFTP client (see the session sketch below): tftp 192.168.1.1
  9. Change to binary mode (this is very important): mode binary
  10. Move the file: put FIRMWARE.chr
  11. The router should detect the file and process it
  12. Reboot and enjoy
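A sketch of the transfer session in the second terminal; the firmware file name is a placeholder:

$ tftp 192.168.1.1
tftp> mode binary
tftp> put FIRMWARE.chr
tftp> quit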

Connecting a Serial TTYUSB Device to the Router

[Photo: serial TTYUSB connection to the router][4]

When you see this message below, you need to abort the boot process.

Welcome to minicom 2.7

OPTIONS: I18n
Compiled on Feb  7 2016, 13:37:27.
Port /dev/ttyUSB0, 17:47:29

Press CTRL-A Z for help on special keys
Found a Samsung NAND flash with 2048B pages or 128KB blocks; total size 128MB
Decompressing...done


CFE for WNR3500Lv2 version: v1.0.9
Build Date: Fri May  6 11:54:17 CST 2011 
Init Arena
Init Devs.
Boot partition size = 262144(0x40000)
NFLASH Boot partition size = 524288(0x80000)
et0: Broadcom BCM47XX 10/100/1000 Mbps Ethernet Controller 5.60.136 
CPU type 0x19749: 480MHz
Tot mem: 131072 KBytes

Device eth0:  hwaddr 84-1B-5E-4E-FF-84, ipaddr 192.168.1.1, mask 255.255.255.0
        gateway not set, nameserver not set
Checking crc...done.
Loader:raw Filesys:raw Dev:flash0.os File: Options:(null)
Loading: .. 3848 bytes read
Entry at 0x80001000
Closing network.
Starting program at 0x80001000

When pressing Ctrl+C does not work, make sure your minicom settings look like this:

A - Serial Device             : /dev/ttyUSB0
B - Lockfile Location         : /var/lock
C - Callin Program            :
D - Callout Program           :
E - Bps/Par/Bits              : 115200 8N1
F - Hardware Flow Control     : No
G - Software Flow Control     : No




<div class="twttr_buttons">
  <div class="twttr_twitter">
    <a href="http://twitter.com/share?text=Unbrick+a+Netgear+WNR3500L+V2" class="twitter-share-button" data-via="" data-hashtags=""  data-size="default" data-url="https://blog.stefanproell.at/2018/01/21/unbrick-a-netgear-wrt3500l-v2/"  data-related="" target="_blank">Tweet</a>
  </div>
  
  <div class="twttr_followme">
    <a href="https://twitter.com/@stefanproell" class="twitter-follow-button" data-show-count="true" data-size="default"  data-show-screen-name="false"  target="_blank">Follow me</a>
  </div>
</div>

 [1]: http://tomatousb.org/mods
 [2]: http://rover.ebay.com/rover/0/e11400.m1842.l3160/7?euid=c40a3d49f9c34424a8772822b99d9e6b&loc=http%3A%2F%2Fcgi.ebay.de%2Fws%2FeBayISAPI.dll%3FViewItem%26item%3D280736990675%26ssPageName%3DADME%3AL%3AOC%3ADE%3A3160
 [3]: https://www.myopenrouter.com/article/how-install-third-party-firmware-netgear-wnr3500lv2-serial-cable-linux
 [4]: ./media/2013/04/IMG_20130405_171101.jpg

Illegal mix of collations: IntelliJ and UTF8mb4

When using variables inside SQL scripts within IntelliJ products (e.g. DataGrip), certain queries will not work because the encodings of the IntelliJ client and the server do not match. This occurs for instance when you compare variables. A typical error message looks like this:

[HY000][1267] Illegal mix of collations (utf8mb4_unicode_520_ci,IMPLICIT) \
   and (utf8mb4_general_ci,IMPLICIT) for operation 'like'


IntelliJ products do not yet support MySQL's utf8mb4 character set encodings. The problem occurs when using variables in queries. By default, IntelliJ uses a UTF-8 encoding for the connection. When you use utf8mb4 as the database default character set, variables will be encoded in UTF-8 while the database content remains in utf8mb4. It is not possible to provide the character set encodings via the IntelliJ settings, as the client will refuse to connect.

Check your server settings using the MySQL client:

MySQL [cropster_research]> show variables like '%char%';
+--------------------------+----------------------------+
| Variable_name            | Value                      |
+--------------------------+----------------------------+
| character_set_client     | utf8mb4                    |
| character_set_connection | utf8mb4                    |
| character_set_database   | utf8mb4                    |
| character_set_filesystem | binary                     |
| character_set_results    | utf8                       |
| character_set_server     | utf8mb4                    |
| character_set_system     | utf8                       |
| character_sets_dir       | /usr/share/mysql/charsets/ |
+--------------------------+----------------------------+



This seems correct, but when you connect with the IntelliJ client, you will get wrong results when you use variables. Until the products support utf8mb4, you need to add the following settings to your script in order to force the right encoding.

SET character_set_connection=utf8mb4;
SET collation_connection=utf8mb4_unicode_520_ci;

Flashing a OnePlus One from CM to LineageOS

As Cyanogen Inc. closed down its operations in December 2016, CyanogenMod was affected too and my OnePlus One (OPO) did not receive updates anymore. This is not ideal, as new and old bugs will remain. For instance, my phone often did not reconnect to the 4G network when a wifi connection was lost, which was very annoying. For this reason, it was about time to upgrade to a new OS: LineageOS.

LineageOS

[LineageOS][1] is a fork of CyanogenMod and continues the quite successful project for our benefit. [LineageOS][2] 14 is compatible with Android 7.1 and very easy to install. These are the steps I had to follow.

How to Upgrade

  • Create a backup with Helium. Use the [Chrome Helium app][3] if the app on your mobile phone refuses to start.
  • Download and install [adb][4]
  • Download the TWRP custom recovery: https://twrp.me/Devices/
  • Download [LineageOS][5]. The codename for the OPO is bacon.
  • Download the [Google apps mini distribution][6] (stock is too large) [here][7].
  • Enable developer tools and connect the phone with a USB cable
  • Reboot the device with adb: ./adb reboot bootloader
  • Check if the device is recognized: ./fastboot devices
  • Enable OEM unlock: fastboot oem unlock
  • Install the custom recovery: fastboot flash recovery twrp-x.x.x-x-bacon.img
  • Boot into the new recovery: with the device powered down, hold the Volume Down and Power buttons.

Copy the files to the device:

# Lineage
adb push ~/Downloads/OPO-Upgrade/lineage-14.1-20170918-nightly-bacon-signed.zip /sdcard/
# Google Apps
adb push ~/Downloads/OPO-Upgrade/open_gapps-arm-7.1-mini-20170924.zip /sdcard/


Install both zip files by selecting first the LineageOS and then the Google Apps zip file.

That's it. Reboot and begin with the setup or restore the backup.

Update 28.09.2017

The problem that the phone would not reconnect to 3G/4G after losing the Wifi signal still persisted with LineageOS. A friend recommended flashing the firmware of the device. Installing the version 2016_1-25_.4.0.1.c7-00011 downloaded from [here][8] solved the issue for now. No more connection problems so far.

<div class="twttr_buttons">
  <div class="twttr_twitter">
    <a href="http://twitter.com/share?text=Flashing+a+OnePlus+One+from+CM+to+LineageOS" class="twitter-share-button" data-via="" data-hashtags=""  data-size="default" data-url="https://blog.stefanproell.at/2017/09/24/flashing-a-oneplus-one-from-cm-to-lineageos/"  data-related="" target="_blank">Tweet</a>
  </div>
  
  <div class="twttr_followme">
    <a href="https://twitter.com/@stefanproell" class="twitter-follow-button" data-show-count="true" data-size="default"  data-show-screen-name="false"  target="_blank">Follow me</a>
  </div>
</div>

 [1]: https://en.wikipedia.org/wiki/LineageOS
 [2]: https://lineageos.org/
 [3]: https://chrome.google.com/webstore/detail/helium-backup/gpglbgbpeobllokpmeagpoagjbfknanl
 [4]: https://wiki.lineageos.org/adb_fastboot_guide.html
 [5]: https://download.lineageos.org/bacon
 [6]: https://wiki.lineageos.org/gapps.html#installation
 [7]: http://opengapps.org/?api=7.1&variant=nano
 [8]: https://forum.xda-developers.com/oneplus-one/general/firmware-modem-oneplus-one-lollipop-t3300907

Verifying Replication Consistency with Percona’s pt-table-checksum

Replication is an important concept for improving database performance and security. In this blog post, I would like to demonstrate how the consistency between a MySQL master and a slave can be verified. We will create two Docker containers, one for the master and one for the slave.

Installing the Percona Toolkit

The Percona Toolkit is a collection of useful utilities, which can be obtained for free from the company's portal. The following commands install the prerequisites and then download and install the package.

sudo apt-get install -y wget libdbi-perl libdbd-mysql-perl libterm-readkey-perl libio-socket-ssl-perl
wget https://www.percona.com/downloads/percona-toolkit/3.0.4/binary/debian/xenial/x86_64/\
    percona-toolkit_3.0.4-1.xenial_amd64.deb
sudo dpkg -i percona-toolkit_3.0.4-1.xenial_amd64.deb 

Setting up a Test Environment with Docker

The following command creates and starts a Docker container. Note that these are minimal examples and are not suitable for a serious environment.

docker run --name mysql_master -e MYSQL_ALLOW_EMPTY_PASSWORD=true -d mysql:5.6 --log-bin \
   --binlog-format=ROW --server-id=1


Get the IP address from the master container:

docker inspect mysql_master | grep IPAddress

"SecondaryIPAddresses": null,
"IPAddress": "172.17.0.2"

You can connect to this container like this and verify the server id:

stefan@Lenovo ~/Docker-Projects $ mysql -u root -h 172.17.0.2
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 1
Server version: 5.6.35-log MySQL Community Server (GPL)

Copyright (c) 2000, 2017, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> show variables like 'server_id';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| server_id     | 1     |
+---------------+-------+
1 row in set (0,00 sec)

We repeat the command for the slave, but use a different id, port and name:

docker run --name mysql_slave -e MYSQL_ALLOW_EMPTY_PASSWORD=true -d mysql:5.6 --server-id=2


For simplicity, we did not use Docker links, but will rather use IP addresses assigned by Docker directly.

Replication Setup

First, we need to set up a user with replication privileges. This user will connect from the slave to the master.

On the host, interact with the master container

Get the IP address of the slave container

$ docker inspect mysql_slave | grep IPAddress
"SecondaryIPAddresses": null,
"IPAddress": "172.17.0.3",
"IPAddress": "172.17.0.3",

Log in to the MySQL console of the master

Grant permissions

GRANT REPLICATION SLAVE ON *.* TO replication@'172.17.0.3' IDENTIFIED BY 'SLAVE-SECRET';

Get the current binlog position

mysql> SHOW MASTER STATUS;
+-------------------+----------+--------------+------------------+-------------------+
| File              | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
+-------------------+----------+--------------+------------------+-------------------+
| mysqld-bin.000002 |      346 |              |                  |                   |
+-------------------+----------+--------------+------------------+-------------------+
1 row in set (0,00 sec)

Now log into the slave container and add the connection details for the master:

## Connect to the MySQL Slave instance
$ mysql -u root -h 172.17.0.3

### Setup the slave

mysql> CHANGE MASTER TO   
  MASTER_HOST='172.17.0.2',
  MASTER_PORT=3306,
  MASTER_USER='replication', 
  MASTER_PASSWORD='SLAVE-SECRET',
  MASTER_LOG_FILE='mysqld-bin.000002', 
  MASTER_LOG_POS=346;
Query OK, 0 rows affected, 2 warnings (0,05 sec)

### Start and check
mysql>   start slave;
Query OK, 0 rows affected (0,01 sec)

mysql> show slave status \G
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: 172.17.0.2
                  Master_User: percona
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: mysqld-bin.000002
          Read_Master_Log_Pos: 346
               Relay_Log_File: mysqld-relay-bin.000002
                Relay_Log_Pos: 284
        Relay_Master_Log_File: mysqld-bin.000002
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes

Now our simple slave setup is running.

Get some test data

Let's download the Sakila test database and import it into the master. It will be replicated immediately.

wget http://downloads.mysql.com/docs/sakila-db.tar.gz
~/Docker-Projects $ tar xvfz sakila-db.tar.gz 

mysql -u root -h 172.17.0.2 < sakila-db/sakila-schema.sql 
mysql -u root -h 172.17.0.2 < sakila-db/sakila-data.sql


Verify that the data is on the slave as well:

mysql -u root -h 172.17.0.3 -e "USE sakila; SHOW TABLES;"
+----------------------------+
| Tables_in_sakila           |
+----------------------------+
| actor                      |
| actor_info                 |
| address                    |
| category                   |
| city                       |
| country                    |
| customer                   |
...
| store                      |
+----------------------------+



After our setup is completed, we can proceed with Percona pt-table checksum.

Percona pt-table-checksum

The Percona pt-table-checksum tool requires the connection information of the master and the slave in a specific format. This is called the DSN (data source name), which is a comma-separated string. We can store this information in a dedicated database called percona in a table called DSN_Table. We create this table on the master. Note that the data gets replicated to the slave within the blink of an eye.

CREATE DATABASE percona;
USE percona;

CREATE TABLE DSN_Table (
  id int(11) NOT NULL AUTO_INCREMENT,
  dsn varchar(255) NOT NULL,
  PRIMARY KEY (id)
);



The next step involves creating permissions on the slave and the master!

GRANT REPLICATION SLAVE, PROCESS, SUPER, SELECT ON *.* TO 'percona'@'172.17.0.1' IDENTIFIED BY 'SECRET';
GRANT ALL PRIVILEGES ON percona.* TO 'percona'@'172.17.0.1';

The percona user is needed to run the script. Note that this time the IP address is that of the (Docker) host, which has the IP 172.17.0.1 by default. In real world scenarios, this script would either be run on the master or on the slave directly.

Now we need to add the information about the slave to the table we created. The Percona tool could also read this from the process list, but it is more reliable if we add the information ourselves. To do so, we add a record to the table we just created, which describes the slave DSN:

INSERT INTO percona.DSN_Table VALUES (1,'h=172.17.0.3,u=percona,p=SECRET,P=3306');


The pt-table-checksum tool then connects to the master instance and to the slave. It computes checksums of all databases and tables and compares the results. You can use the tool like this:

pt-table-checksum --replicate=percona.checksums --create-replicate-table --empty-replicate-table \
   --recursion-method=dsn=t=percona.DSN_Table -h 172.17.0.2 -P 3306 -u percona -pSECRET
            TS ERRORS  DIFFS     ROWS  CHUNKS SKIPPED    TIME TABLE
09-10T10:13:11      0      0        0       1       0   0.020 mysql.columns_priv
09-10T10:13:11      0      0        3       1       0   0.016 mysql.db
09-10T10:13:11      0      0        0       1       0   0.024 mysql.event
09-10T10:13:11      0      0        0       1       0   0.014 mysql.func
09-10T10:13:11      0      0       40       1       0   0.026 mysql.help_category
09-10T10:13:11      0      0      614       1       0   0.023 mysql.help_keyword
09-10T10:13:11      0      0     1224       1       0   0.022 mysql.help_relation
09-10T10:13:12      0      0      585       1       0   0.266 mysql.help_topic
09-10T10:13:12      0      0        0       1       0   0.031 mysql.ndb_binlog_index
09-10T10:13:12      0      0        0       1       0   0.024 mysql.plugin
09-10T10:13:12      0      0        6       1       0   0.287 mysql.proc
09-10T10:13:12      0      0        0       1       0   0.031 mysql.procs_priv
09-10T10:13:12      0      1        2       1       0   0.020 mysql.proxies_priv
09-10T10:13:12      0      0        0       1       0   0.024 mysql.servers
09-10T10:13:12      0      0        0       1       0   0.017 mysql.tables_priv
09-10T10:13:12      0      0     1820       1       0   0.019 mysql.time_zone
09-10T10:13:12      0      0        0       1       0   0.015 mysql.time_zone_leap_second
09-10T10:13:12      0      0     1820       1       0   0.267 mysql.time_zone_name
09-10T10:13:13      0      0   122530       1       0   0.326 mysql.time_zone_transition
09-10T10:13:13      0      0     8843       1       0   0.289 mysql.time_zone_transition_type
09-10T10:13:13      0      1        4       1       0   0.031 mysql.user
09-10T10:13:13      0      0        1       1       0   0.018 percona.DSN_Table
09-10T10:13:13      0      0      200       1       0   0.028 sakila.actor
09-10T10:13:13      0      0      603       1       0   0.023 sakila.address
09-10T10:13:13      0      0       16       1       0   0.033 sakila.category
09-10T10:13:13      0      0      600       1       0   0.023 sakila.city
09-10T10:13:13      0      0      109       1       0   0.029 sakila.country
09-10T10:13:14      0      0      599       1       0   0.279 sakila.customer
09-10T10:13:14      0      0     1000       1       0   0.287 sakila.film
09-10T10:13:14      0      0     5462       1       0   0.299 sakila.film_actor
09-10T10:13:14      0      0     1000       1       0   0.027 sakila.film_category
09-10T10:13:14      0      0     1000       1       0   0.032 sakila.film_text
09-10T10:13:14      0      0     4581       1       0   0.276 sakila.inventory
09-10T10:13:15      0      0        6       1       0   0.030 sakila.language
09-10T10:13:15      0      0    16049       1       0   0.303 sakila.payment
09-10T10:13:15      0      0    16044       1       0   0.310 sakila.rental
09-10T10:13:15      0      0        2       1       0   0.029 sakila.staff
09-10T10:13:15      0      0        2       1       0   0.020 sakila.store



The result shows a difference in the MySQL internal tables for permissions (mysql.proxies_priv and mysql.user). This is obviously not what we are interested in, as permissions are individual to a host. We therefore exclude the MySQL internal database and also the percona database from the comparison. In order to test that the tool works, we delete the last five category assignments from the table with mysql -u root -h 172.17.0.3 -e "DELETE FROM sakila.film_category WHERE film_id > 995;" and update a row in the city table:

mysql -u root -h 172.17.0.3 -e "UPDATE sakila.city SET city='Innsbruck' WHERE city_id=590;"
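
Before running the checksum again, you can confirm that master and slave have indeed drifted apart, for example by comparing the modified row on both instances (172.17.0.2 being the master, 172.17.0.3 the slave):

mysql -u root -h 172.17.0.2 -e "SELECT city FROM sakila.city WHERE city_id=590;"   # master: Yuncheng
mysql -u root -h 172.17.0.3 -e "SELECT city FROM sakila.city WHERE city_id=590;"   # slave: Innsbruck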

Now execute the command again:

pt-table-checksum --replicate=percona.checksums --create-replicate-table --empty-replicate-table \
   --recursion-method=dsn=t=percona.DSN_Table --ignore-databases mysql,percona -h 172.17.0.2 -P 3306 -u percona -pSECRET
            TS ERRORS  DIFFS     ROWS  CHUNKS SKIPPED    TIME TABLE
09-10T10:46:33      0      0      200       1       0   0.017 sakila.actor
09-10T10:46:34      0      0      603       1       0   0.282 sakila.address
09-10T10:46:34      0      0       16       1       0   0.034 sakila.category
09-10T10:46:34      0      1      600       1       0   0.269 sakila.city
09-10T10:46:34      0      0      109       1       0   0.028 sakila.country
09-10T10:46:34      0      0      599       1       0   0.285 sakila.customer
09-10T10:46:35      0      0     1000       1       0   0.297 sakila.film
09-10T10:46:35      0      0     5462       1       0   0.294 sakila.film_actor
09-10T10:46:35      0      1     1000       1       0   0.025 sakila.film_category
09-10T10:46:35      0      0     1000       1       0   0.031 sakila.film_text
09-10T10:46:35      0      0     4581       1       0   0.287 sakila.inventory
09-10T10:46:35      0      0        6       1       0   0.035 sakila.language
09-10T10:46:36      0      0    16049       1       0   0.312 sakila.payment
09-10T10:46:36      0      0    16044       1       0   0.320 sakila.rental
09-10T10:46:36      0      0        2       1       0   0.030 sakila.staff
09-10T10:46:36      0      0        2       1       0   0.027 sakila.store

You see that there is a difference in the table sakila.city and in the table sakila.film_category. Note that the DIFFS column does not report the actual number of differing rows, but the number of differing chunks. To get the actual differences, we need a different tool, which utilises the checksum table that the previous run created.
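
You can also inspect that checksum table yourself. A sketch of such a query, run on the slave, based on the columns pt-table-checksum stores (this_crc/this_cnt hold the replica's values, master_crc/master_cnt the master's):

-- list chunks whose checksum or row count differs from the master
SELECT db, tbl, chunk, this_cnt, master_cnt
FROM percona.checksums
WHERE this_crc <> master_crc
   OR this_cnt <> master_cnt
   OR ISNULL(master_crc) <> ISNULL(this_crc);

For our example this should list one chunk each for sakila.city and sakila.film_category.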

Show the differences with pt-table-sync

The pt-table-sync tool is the counterpart to the pt-table-checksum utility. It can print or even execute the SQL statements that would bring the slave back in sync with the master. As the tool is potentially dangerous, we run it with --dry-run first.

pt-table-sync --dry-run  --replicate=percona.checksums --sync-to-master h=172.17.0.3 -P 3306 \
   -u percona -pSECRET --ignore-databases mysql,percona
# NOTE: --dry-run does not show if data needs to be synced because it
#       does not access, compare or sync data.  --dry-run only shows
#       the work that would be done.
# Syncing via replication P=3306,h=172.17.0.3,p=...,u=percona in dry-run mode, without accessing or comparing data
# DELETE REPLACE INSERT UPDATE ALGORITHM START    END      EXIT DATABASE.TABLE
#      0       0      0      0 Chunk     08:57:51 08:57:51 0    sakila.city
#      0       0      0      0 Nibble    08:57:51 08:57:51 0    sakila.film_category

With --dry-run, you only see the affected tables, but not the actual data, because the tool does not really access the tables in question. Use --print additionally or instead of --dry-run to get a list of the statements:

pt-table-sync --print --replicate=percona.checksums --sync-to-master h=172.17.0.3 -P 3306 \
  -u percona -pSECRET --ignore-databases mysql,percona
REPLACE INTO `sakila`.`city`(`city_id`, `city`, `country_id`, `last_update`) VALUES \
   ('590', 'Yuncheng', '23', '2006-02-15 04:45:25') 
  \ /*percona-toolkit src_db:sakila src_tbl:city  ...
REPLACE INTO `sakila`.`film_category`(`film_id`, `category_id`, `last_update`) VALUES ... 
REPLACE INTO `sakila`.`film_category`(`film_id`, `category_id`, `last_update`) VALUES ('997',... 
REPLACE INTO `sakila`.`film_category`(`film_id`, `category_id`, `last_update`) VALUES ('998', '11 ...
REPLACE INTO `sakila`.`film_category`(`film_id`, `category_id`, `last_update`) VALUES ('999', '3', ...
REPLACE INTO `sakila`.`film_category`(`film_id`, `category_id`, `last_update`) VALUES ('1000', '5', ... 

The output shows how we can revert the city name from Innsbruck back to Yuncheng and also provides the REPLACE statements to restore the deleted records. When we replace --print with --execute, the data gets written to the master and replicated to the slave. To allow this, we need to grant additional permissions on the master:

GRANT INSERT, UPDATE, DELETE ON sakila.* TO 'percona'@'172.17.0.1';
pt-table-sync --execute  --replicate=percona.checksums --check-child-tables \ 
  --sync-to-master h=172.17.0.3 -P 3306 -u percona -pSECRET --ignore-databases mysql,percona
REPLACE statements on sakila.city can adversely affect child table `sakila`.`address` 
   because it has an ON UPDATE CASCADE foreign key constraint. 
   See --[no]check-child-tables in the documentation for more information. 
   --check-child-tables error  while doing sakila.city on 172.17.0.3

This error indicates that updating the city table has consequences, because child tables reference it via a foreign key. In this example, we are bold and ignore this warning. This is absolutely not recommended for real world scenarios.
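
The constraint the tool complains about ships with the stock sakila schema; the relevant excerpt from SHOW CREATE TABLE sakila.address looks roughly like this:

CONSTRAINT `fk_address_city` FOREIGN KEY (`city_id`) REFERENCES `city` (`city_id`)
    ON DELETE RESTRICT ON UPDATE CASCADE

Since a REPLACE is internally a DELETE followed by an INSERT, such a constraint can propagate side effects into the child table, which is why pt-table-sync refuses to continue by default.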

pt-table-sync --execute  --replicate=percona.checksums --no-check-child-tables \
   --no-foreign-key-checks --sync-to-master h=172.17.0.3 -P 3306 -u percona -pSECRET \ 
   --ignore-databases mysql,percona

The option --no-check-child-tables skips the check for affected child tables, and --no-foreign-key-checks disables foreign key checks while syncing.

Run the checksum command again to verify that the data has been restored:

pt-table-checksum --replicate=percona.checksums --create-replicate-table --empty-replicate-table \
   --recursion-method=dsn=t=percona.DSN_Table --ignore-databases mysql,percona \
   -h 172.17.0.2 -P 3306 -u percona -pSECRET

            TS ERRORS  DIFFS     ROWS  CHUNKS SKIPPED    TIME TABLE
09-10T11:24:42      0      0      200       1       0   0.268 sakila.actor
09-10T11:24:42      0      0      603       1       0   0.033 sakila.address
09-10T11:24:42      0      0       16       1       0   0.029 sakila.category
09-10T11:24:42      0      0      600       1       0   0.275 sakila.city
09-10T11:24:42      0      0      109       1       0   0.023 sakila.country
09-10T11:24:43      0      0      599       1       0   0.282 sakila.customer
09-10T11:24:43      0      0     1000       1       0   0.046 sakila.film
09-10T11:24:43      0      0     5462       1       0   0.284 sakila.film_actor
09-10T11:24:43      0      0     1000       1       0   0.036 sakila.film_category
09-10T11:24:43      0      0     1000       1       0   0.279 sakila.film_text
09-10T11:24:44      0      0     4581       1       0   0.292 sakila.inventory
09-10T11:24:44      0      0        6       1       0   0.031 sakila.language
09-10T11:24:44      0      0    16049       1       0   0.309 sakila.payment
09-10T11:24:44      0      0    16044       1       0   0.325 sakila.rental
09-10T11:24:44      0      0        2       1       0   0.029 sakila.staff
09-10T11:24:44      0      0        2       1       0   0.028 sakila.store

0 DIFFS, we are done!
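
If the percona helper database and the extra grants were only needed for this verification, you can remove them again. A possible cleanup, assuming you no longer need the stored checksums, run on the master:

REVOKE INSERT, UPDATE, DELETE ON sakila.* FROM 'percona'@'172.17.0.1';
DROP DATABASE percona;

The DROP DATABASE statement replicates to the slave as well, so the helper table disappears on both instances.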