
ClusterControl Tips & Tricks: Monitoring multiple MySQL instances on one machine


Requires ClusterControl 1.2.11 or later. Applies to MySQL-based instances/clusters.

On some occasions, you might want to run multiple instances of MySQL on a single machine. You might want to give different users access to their own mysqld servers that they manage themselves, or you might want to test a new MySQL release while keeping an existing production setup undisturbed.

It is possible to use a different MySQL server binary per instance, or use the same binary for multiple instances (or a combination of the two approaches). For example, you might run a server from MySQL 5.1 and one from MySQL 5.5, to see how the different versions handle a certain workload. Or you might run multiple instances of the latest MySQL version, each managing a different set of databases.

Whether or not you use distinct server binaries, each instance that you run must be configured with unique values for several operating parameters, which eliminates the potential for conflict between instances. You can use MySQL Sandbox to create multiple MySQL instances, or you can use mysqld_multi, shipped with MySQL, to start or stop any number of separate mysqld processes running on different TCP/IP ports and UNIX sockets.
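For example, a minimal sketch of a my.cnf for mysqld_multi driving two instances (the ports, sockets and data directories below are illustrative):

# one [mysqldN] group per instance
[mysqld_multi]
mysqld     = /usr/bin/mysqld_safe
mysqladmin = /usr/bin/mysqladmin

[mysqld1]
port    = 3306
datadir = /var/lib/mysql1
socket  = /var/run/mysqld/mysqld1.sock

[mysqld2]
port    = 3307
datadir = /var/lib/mysql2
socket  = /var/run/mysqld/mysqld2.sock

$ mysqld_multi start
$ mysqld_multi report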

In this blog post, we’ll show you how to monitor multiple MySQL instances on one host using ClusterControl.

ClusterControl Limitation

At the time of writing, ClusterControl does not support monitoring multiple instances on one host within a single cluster/server group. It assumes the following best practices:

  • Only one MySQL instance per host (physical server or virtual machine).
  • MySQL data redundancy should be configured across N+1 servers.
  • All MySQL instances run with a uniform configuration across the cluster/server group, e.g., listening port, error log, datadir, basedir and socket are identical.

With regards to the points mentioned above, ClusterControl assumes that in a cluster/server group:

  • MySQL instances are configured uniformly across a cluster: same port, same location of logs, same base/data directories and other critical configuration.
  • It monitors, manages and deploys only one MySQL instance per host.
  • A MySQL client must be installed on the host and available on the executable path for the corresponding OS user.
  • The MySQL instance is bound to an IP address reachable by the ClusterControl node (see the quick check after this list).
  • ClusterControl monitors host statistics, e.g., CPU/RAM/disk/network, for each MySQL instance individually. In an environment with multiple instances per host, you should expect redundant host statistics, since the same host is monitored multiple times.
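A quick way to verify the client and connectivity requirements on a managed host (the IP address and user below are illustrative):

$ which mysql
$ mysql -h 192.168.55.10 -P 3306 -u cmon -p -e "SELECT @@hostname, @@port"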

With the above assumptions, the following ClusterControl features do not work for a host with multiple instances:

  • Backup - Percona XtraBackup does not support multiple instances per host, and mysqldump executed by ClusterControl only connects to the default socket.
  • Process management - ClusterControl uses the standard ‘pgrep -f mysqld_safe’ to check if MySQL is running on the host. With multiple MySQL instances, this check matches every instance and produces false positives (see the example after this list). As such, automatic recovery for node/cluster won’t work.
  • Configuration management - ClusterControl provisions the standard MySQL configuration directory, which usually resides under /etc or /etc/mysql.
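To illustrate the process management issue: with three instances running, the process check matches all of them, so ClusterControl cannot tell which instance it actually found (the PIDs below are illustrative):

$ pgrep -f mysqld_safe
2201
2452
2687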

Workaround

Monitoring multiple MySQL instances on a machine is still possible with ClusterControl with a simple workaround: each MySQL instance must be treated as a separate entity, in its own server group.

In this example, we have 3 MySQL instances on a single host, created with MySQL Sandbox using the following commands:

$ su - sandbox
$ make_multiple_sandbox mysql-5.6.26-linux-glibc2.5-x86_64.tar.gz
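# make_multiple_sandbox unpacks the tarball and, by default, creates three
# instances (node1, node2, node3) under $HOME/sandboxes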

By default, MySQL Sandbox creates MySQL instances that listen on 127.0.0.1, so it is necessary to reconfigure each node to listen on all available IP addresses; one way to do this is shown after the summary below. Here is the summary of our MySQL instances on the host:

[sandbox@test multi_msb_mysql-5_6_26]$ cat default_connection.json
{
"node1":
    {
        "host":     "127.0.0.1",
        "port":     "15227",
        "socket":   "/tmp/mysql_sandbox15227.sock",
        "username": "msandbox@127.%",
        "password": "msandbox"
    }
,
"node2":
    {
        "host":     "127.0.0.1",
        "port":     "15228",
        "socket":   "/tmp/mysql_sandbox15228.sock",
        "username": "msandbox@127.%",
        "password": "msandbox"
    }
,
"node3":
    {
        "host":     "127.0.0.1",
        "port":     "15229",
        "socket":   "/tmp/mysql_sandbox15229.sock",
        "username": "msandbox@127.%",
        "password": "msandbox"
    }
}
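One way to make the nodes listen on all interfaces, assuming the default MySQL Sandbox layout under $HOME/sandboxes and that [mysqld] is the last section in each my.sandbox.cnf, is to append a bind-address line to each node and restart the group:

$ cd ~/sandboxes/multi_msb_mysql-5_6_26
$ for node in node1 node2 node3; do echo "bind-address = 0.0.0.0" >> $node/my.sandbox.cnf; done
$ ./restart_all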

From ClusterControl, we need to perform ‘Add Existing Server/Cluster’ for each instance, as we need to isolate them into separate server groups to make this work. For node1, go to ClusterControl > Add Existing Server/Cluster and enter the host’s IP address together with node1’s MySQL port, 15227.
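Note that the sandbox accounts are only allowed to connect from 127.%, so ClusterControl needs a database user it can use over the network. A minimal sketch of creating one on node1 through its socket (the user name, password and subnet are illustrative):

$ mysql -u root -pmsandbox -S /tmp/mysql_sandbox15227.sock
mysql> GRANT ALL PRIVILEGES ON *.* TO 'cmon'@'192.168.55.%' IDENTIFIED BY 'cmonP4ss' WITH GRANT OPTION;
mysql> FLUSH PRIVILEGES;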

You can monitor the progress by clicking on the spinning arrow icon in the top menu. You will see node1 in the UI once ClusterControl finishes the job.

Repeat the same steps to add the other two nodes, using ports 15228 and 15229. Once they are added, each of the three instances appears as its own cluster/server group in the UI.

There you go. We just added our existing MySQL instances into ClusterControl for monitoring. Happy monitoring!

