MySQL



Setting up a Replication (Master-Slave)

  • Last tested on Ubuntu 14.04.5 LTS (trusty) as the master, and Ubuntu 16.04.2 LTS (xenial) as the slave.

In this procedure, the master is assumed to already have InnoDB tables containing data; some parameters and commands are specific to the InnoDB engine. My master happens to be running MySQL and the slave happens to be running MariaDB. This shouldn't cause any issues, as MariaDB is a drop-in replacement for MySQL.

The slave is a newly installed Ubuntu server installation.

Setting the replication master config (on master)

Activate the binary log on the master. innodb_flush_log_at_trx_commit and sync_binlog are added for the greatest possible durability and consistency in a replication setup using InnoDB with transactions.

[mysqld]
log-bin=mysql-bin
server-id=1
innodb_flush_log_at_trx_commit=1
sync_binlog=1

Create a user for replication (on master)

Create the new user and grant it the REPLICATION SLAVE privilege.

mysql> CREATE USER 'repl'@'%.mydomain.com' IDENTIFIED BY 'mypassword';
mysql> GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%.mydomain.com';
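
To confirm the grant took effect, you can list the new user's privileges:

mysql> SHOW GRANTS FOR 'repl'@'%.mydomain.com';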

Get master's binary log coordinate (on master)

On the master, stop commit operations on the InnoDB tables and read the binary log coordinates. The FLUSH TABLES WITH READ LOCK; command needs to be run in one session and that session left open and idle; exiting the client, or running other statements in it, can release the lock again.

mysql> FLUSH TABLES WITH READ LOCK;

Open another terminal session on the same server, and get the binary log position. Write down the values for File and Position somewhere. In this example, they would be mysql-bin.000003 and 73, respectively.

mysql> SHOW MASTER STATUS;
+------------------+----------+--------------+------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+------------------+----------+--------------+------------------+
| mysql-bin.000003 | 73       | test         | manual,mysql     |
+------------------+----------+--------------+------------------+

Create a snapshot using mysqldump (on master)

While the first session with FLUSH TABLES is still holding the read lock, run the following command in a different session to dump the databases. This assumes the existing data is in InnoDB tables.

$ mysqldump --all-databases --master-data > dbdump.db
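
The --master-data option writes the master's binary log coordinates into the dump itself. A quick way to confirm, assuming the dump file is dbdump.db:

$ grep -m 1 "CHANGE MASTER TO" dbdump.db
CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000003', MASTER_LOG_POS=73;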

After this is done, you can unlock the tables again.

mysql> unlock tables;

Copy dbdump.db to the slave server using scp or whatever else you use for server-to-server file copying.
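
For example, assuming the slave is reachable as slave.mydomain.com (a placeholder hostname):

$ scp dbdump.db mhan@slave.mydomain.com:~/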

Setting the replication slave configuration (on slave)

This procedure can be applied to multiple slave servers; just make sure each slave has a different server-id value. You may also want to turn binary logging on for the slave if you're planning to switch off the master in the future and turn the slave into a new master for other slaves (see the sketch after the config block below).

The configuration file can be found at /etc/mysql/mysql.conf.d/mysqld.cnf (MySQL) or 50-server.cnf (MariaDB).

[mysqld]
server-id=2
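
If you think you may promote this slave to a master later, a minimal sketch of the same config with binary logging also enabled (same log-bin setting as on the master):

[mysqld]
server-id=2
log-bin=mysql-bin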

Restart the mysql daemon. On mine, the command is systemctl restart mysql.service. The same applies to MariaDB, if you have the full package installed.

Setting the master config on the slave (on slave)

Configure the slave with the necessary connection information. Make sure MASTER_LOG_FILE and MASTER_LOG_POS match exactly what you wrote down above.

mysql> CHANGE MASTER TO
    ->   MASTER_HOST='master_host_name_or_ip_address',
    ->   MASTER_USER='repl',
    ->   MASTER_PASSWORD='mypassword',
    ->   MASTER_LOG_FILE='mysql-bin.000003',
    ->   MASTER_LOG_POS=73,
    ->   MASTER_CONNECT_RETRY=10;

Load the existing data on the slave

Replace ~/dbdump.db with the full path to the dump file you copied over from the master.

mysql> source ~/dbdump.db
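
Alternatively, the same dump can be loaded from the shell:

$ mysql -u root -p < ~/dbdump.db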

Start the slave replication process

Once loading of the data completes, you can start the slave replication.

mysql> start slave;

Checking the status of the replication setup

If you're interested in checking the status, run show slave status; on the slave.

On the master, you can run show master status;, or show processlist; to check the running processes.
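
For example, on the slave (output abbreviated), the main fields to check are Slave_IO_Running and Slave_SQL_Running, which should both read Yes:

mysql> SHOW SLAVE STATUS\G
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: master_host_name_or_ip_address
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
        Seconds_Behind_Master: 0
...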

List all databases without borders

  • Last tested on Ubuntu 16.04.01 LTS (xenial)
$ mysql -B -uusername -ppassword --disable-column-names --execute "show databases"
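
This is handy for scripting. For example, a rough sketch that dumps each non-system database to its own file (the /tmp paths and credentials are placeholders):

$ mysql -B -uusername -ppassword --disable-column-names --execute "show databases" \
    | grep -Ev '^(information_schema|performance_schema|sys|mysql)$' \
    | while read db; do mysqldump -uusername -ppassword "$db" > "/tmp/${db}.sql"; done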

mysqldump returning Errcode 13

  • Last tested on Ubuntu 14.04.2 LTS (trusty)

When I try to create a CSV file from a table in the database, I get an Errcode 13. Here I'm trying to extract a table called enc_codes3 from a database called testdb:

mhan@dbserver:~$ mysqldump --tab=./testdump -uroot -p --fields-terminated-by=, --fields-enclosed-by='"' --lines-terminated-by=0x0d0a -t testdb enc_codes3
Enter password:
mysqldump: Got error: 1: Can't create/write to file '/home/mhan/testdump/enc_codes3.txt' (Errcode: 13) when executing 'SELECT INTO OUTFILE'

Make sure that the mysql daemon (running as mysql:mysql on Ubuntu) is able to write to the target directory.
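
For example, for the testdump directory used above, one quick (if permissive) way is to let any user, including mysql, write there:

$ mkdir -p ~/testdump
$ chmod 777 ~/testdump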

AppArmor status can be checked via sudo aa-status.

If that doesn't help, temporarily disable AppArmor (service apparmor teardown), run the mysqldump command again, and then start AppArmor back up (service apparmor start).
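
If you'd rather leave AppArmor running, another option is to allow the path in the mysqld profile (assumed here to be /etc/apparmor.d/usr.sbin.mysqld, the usual location on Ubuntu) and reload it:

# add inside the profile block in /etc/apparmor.d/usr.sbin.mysqld
  /home/mhan/testdump/ rw,
  /home/mhan/testdump/** rw,

$ sudo apparmor_parser -r /etc/apparmor.d/usr.sbin.mysqld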

Moving all of the databases from one server to another

  • Last tested on Ubuntu 16.04.01 LTS (xenial)

Log in as an admin on the MySQL console and lock the databases to allow only read operations.

mysql> flush tables with read lock;
mysql> set global read_only = on;
mysql> exit

Dump all of the databases into a file.

$ mysqldump --lock-all-tables -u root -p --all-databases > dbs.sql

Copy the dump to the new server. rsync is preferred over scp, especially if the file is large.

$ rsync -tvz --progress dbs.sql mhan@newserver.com:~/files/
or
$ scp dbs.sql mhan@newserver.com:~/files/

The DB can optionally be unlocked again at this point. Keep in mind that any writes made on the old server after the dump won't be in the copy, so whether this is a good idea depends on your situation. Do it at your own risk.

mysql> set global read_only = off;
mysql> unlock tables;
mysql> exit

On the new server, execute this command to import the new SQL dump.

$ mysql -u root -p < ~/files/dbs.sql

IMPORTANT: If your dump file is large, or you simply have a lot of records, make sure max_allowed_packet is set to something larger than 16M in my.cnf (usually found under /etc/mysql/ or /etc/mysql/mysql.conf.d/) on the new server where you're doing the import. Otherwise the server can hang on a large insert operation and you may run into the "MySQL server has gone away" error, literally. On one of my servers I set it to 1024M just for this operation and brought it back down afterwards.
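
A minimal sketch of the relevant setting in my.cnf, using the 1024M value mentioned above:

[mysqld]
max_allowed_packet=1024M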

Copying MySQL databases on the same server

Last tested on Ubuntu 16.04.01 LTS (xenial) with MySQL Ver 14.14 Distrib 5.7.13

We had to make a copy of existing databases for development app instances. For example, a database called xp_main was for production and xpdev_main would be for development. Depending on how date strings were created, if you have a lot of dates in the records you may want to turn off the NO_ZERO_DATE mode; if you don't turn it off, the copying process can be interrupted. Go into your MySQL console.

mysql> select @@sql_mode;
+-------------------------------------------------------------------------------------------------------------------------------------------+
| @@sql_mode |
+-------------------------------------------------------------------------------------------------------------------------------------------+
| ONLY_FULL_GROUP_BY,STRICT_TRANS_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION |
+-------------------------------------------------------------------------------------------------------------------------------------------+

As you can see, NO_ZERO_DATE is in the list. Copy and paste the entire string without NO_ZERO_DATE:

mysql> set global sql_mode='ONLY_FULL_GROUP_BY,STRICT_TRANS_TABLES,NO_ZERO_IN_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION';
Query OK, 0 rows affected, 1 warning (0.00 sec)
mysql> exit
Bye

Next, we will copy the database using the mysqldbcopy utility. You may need to install the mysql-utilities package if you don't have it available.

$ mysqldbcopy --drop-first --source=root:mypassword@localhost --destination=root:mypassword@localhost xp_main:xpdev_main
WARNING: Using a password on the command line interface can be insecure.
# Source on localhost: ... connected.
# Destination on localhost: ... connected.
# Copying database xp_main renamed as xpdev_main
# Copying TABLE xp_main.accesses
# Copying TABLE xp_main.accessflags
# Copying TABLE xp_main.activities
# Copying TABLE xp_main.activitytype_items
# Copying TABLE xp_main.encounter_goals
# Copying TABLE xp_main.files
# Copying TABLE xp_main.tester1_intake_subseqvisit_goals
# Copying TABLE xp_main.tester1_game_careplan_goals
# Copying TABLE xp_main.localgames
# Copying TABLE xp_main.roles
# Copying GRANTS from xp_main
# Copying data for TABLE xp_main.accesses
# Copying data for TABLE xp_main.accessflags
# Copying data for TABLE xp_main.activities
# Copying data for TABLE xp_main.activitytype_items
# Copying data for TABLE xp_main.encounter_goals
# Copying data for TABLE xp_main.files
# Copying data for TABLE xp_main.tester1_intake_subseqvisit_goals
# Copying data for TABLE xp_main.tester1_game_careplan_goals
# Copying data for TABLE xp_main.localgames
# Copying data for TABLE xp_main.roles
#...done.
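
Once the copy finishes, you can restore the original sql_mode string (the one that still includes NO_ZERO_DATE):

mysql> set global sql_mode='ONLY_FULL_GROUP_BY,STRICT_TRANS_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION';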

That should do it!