ClusterControl Tips & Tricks: Manage and Monitor your Existing MySQL NDB Cluster

Of the different types of clustered MySQL environments, NDB Cluster is among those that require the most effort and resources to administer. And unless you live on the command line, you will want a tool that gives you a full view of what is going on in your cluster.

It is possible to install ClusterControl and monitor your existing MySQL Cluster (NDB). In this blog post, we are going to show you how to add two existing MySQL Clusters (production and staging) to ClusterControl. Our hosts are as follows:

  • ClusterControl: 192.168.55.170
  • Cluster #1:
    • Management/API node: mgmd-api1 - 192.168.55.71
    • Management/API node: mgmd-api2 - 192.168.55.72
    • Data node: data1 - 192.168.55.73
    • Data node: data2 - 192.168.55.74
  • Cluster #2:
    • Management/API node: mgmd-api1 - 192.168.55.81
    • Management/API node: mgmd-api2 - 192.168.55.82
    • Data node: data1 - 192.168.55.83
    • Data node: data2 - 192.168.55.84

Adding the first cluster

Based on the setup above, here is what you should do to add the first cluster to ClusterControl:

1. Install the latest ClusterControl. Once done, register the default admin user/password and log into the ClusterControl dashboard.
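
At the time of writing, Severalnines ships an installer script for this; verify the current URL against the ClusterControl documentation before running it:

$ wget https://severalnines.com/downloads/cmon/install-cc
$ chmod +x install-cc
$ ./install-cc # run as root or with sudo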

2. As root or a sudo user, set up passwordless SSH to all nodes (including the ClusterControl node):

$ whoami
root
$ ssh-keygen -t rsa
$ ssh-copy-id 192.168.55.71
$ ssh-copy-id 192.168.55.72
$ ssh-copy-id 192.168.55.73
$ ssh-copy-id 192.168.55.74

Also ensure that the controller can connect to the management servers (ndb_mgmd) on port 1186, and to the API nodes on port 3306 (or whichever port the MySQL servers use).
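
A quick way to verify both, assuming netcat (nc) is available on the ClusterControl node:

$ ssh 192.168.55.71 "echo SSH OK"
$ nc -z -w3 192.168.55.71 1186 && echo "ndb_mgmd reachable"
$ nc -z -w3 192.168.55.71 3306 && echo "mysqld reachable"

Repeat for the remaining nodes as needed.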

3. Since this is the first cluster, we are going to assign it cluster ID number 1. Create a CMON configuration file at /etc/cmon.d/cmon_1.cnf and paste the following lines:

cluster_id=1
name=default_cluster_1
mode=controller
type=mysqlcluster
skip_name_resolve=1
mysql_port=3306
mysql_hostname=192.168.55.170
mysql_password=cmon
mysql_basedir=/usr/
hostname=192.168.55.170
osuser=root
os=redhat
logfile=/var/log/cmon_1.log
pidfile=/var/run/
ssh_identity=''
ssh_opts=-q
ssh_port=22
monitored_mountpoints=/var/lib/mysql/,/data/mysqlcluster/
monitored_mysql_root_password=rootpassword
ndb_connectstring=192.168.55.71:1186,192.168.55.72:1186
mysql_server_addresses=192.168.55.71,192.168.55.72
datanode_addresses=192.168.55.73,192.168.55.74
mgmnode_addresses=192.168.55.71,192.168.55.72
db_stats_collection_interval=30
host_stats_collection_interval=60
mysql_bindir=/usr/bin/
enable_cluster_autorecovery=0
enable_node_autorecovery=0
db_configdir=/etc/mysql

In the above example, the target MySQL Cluster:

  • is running on CentOS 6.5 (os=redhat)
  • uses root as the SSH user (osuser=root)
  • uses the default SSH key for user root (ssh_identity='')
  • consists of 2 MySQL API nodes (mysql_server_addresses=192.168.55.71,192.168.55.72)
  • consists of 2 management nodes (mgmnode_addresses=192.168.55.71,192.168.55.72)
  • consists of 2 data nodes (datanode_addresses=192.168.55.73,192.168.55.74)
  • runs ndb_mgmd on the management nodes on port 1186 (ndb_connectstring=192.168.55.71:1186,192.168.55.72:1186)
  • has its MySQL base directory under /usr/ (mysql_basedir=/usr/). You can verify this with SHOW VARIABLES LIKE 'basedir', as shown in the sketch after this list
  • has its MySQL binaries installed under /usr/bin (mysql_bindir=/usr/bin/)
  • has its MySQL data directory at /var/lib/mysql and its NDB data directory at /data/mysqlcluster/, both of which we want ClusterControl to monitor (monitored_mountpoints=/var/lib/mysql/,/data/mysqlcluster/)
  • stores the cluster's configuration files (my.cnf/config.ini) in /etc/mysql (db_configdir=/etc/mysql)

Details on configuration options are explained in the documentation page.
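
For example, to verify the MySQL base directory on one of the API nodes (a sketch; adjust the credentials to your setup):

$ mysql -uroot -p -h192.168.55.71 -e "SHOW VARIABLES LIKE 'basedir'"

The Value column should show /usr/, matching mysql_basedir above.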

4. Restart the CMON service to apply the changes:

$ service cmon restart

Monitor the log file for this cluster at /var/log/cmon_1.log and you should see something like this:

$ tail -f /var/log/cmon_1.log
Nov 18 16:22:52 : (INFO) Checking tables for duplicate indexes.
Nov 18 16:22:52 : (INFO) Checking indexes for selectivity.
Nov 18 16:22:52 : (INFO) Gathering Database Stats Completed
Nov 18 16:22:53 : (INFO) Query Monitor: Changed periodicity from -1 to 1
Nov 18 16:22:53 : (INFO) Query Monitor: Setting Query Sampling, Long_query_time: '0.5'
Nov 18 16:22:53 : (INFO) Query Monitor: Setting log_queries_not_using_indexes: 'OFF'
Nov 18 16:22:53 : (INFO) Query Monitor: Setting Query Sampling, Long_query_time: '0.5'
Nov 18 16:22:53 : (INFO) Query Monitor: Setting log_queries_not_using_indexes: 'OFF'
Nov 18 16:22:57 : (INFO) CmonCommandHandler: started, polling for commands.
Nov 18 16:22:57 : (INFO) Aborting DEQUEUED, RUNNING and DEFINED jobs
Nov 18 16:23:02 : (INFO) MonitorThread: Cluster STARTED   - all nodes are started  - running
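
You can also cross-check the cluster state from the command line with the ndb_mgm client, assuming it is installed on the ClusterControl node:

$ ndb_mgm -c 192.168.55.71:1186 -e show

All data, management and API nodes should be reported as connected.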

Then go to ClusterControl > Settings > Cluster Registrations > Synchronize Clusters to immediately load the cluster into the dashboard. You should now see the first cluster added to ClusterControl as cluster ID 1.

Adding more clusters

Once the first cluster is imported, add each subsequent MySQL Cluster as cluster ID 2 or higher. To achieve this, run the following steps on the ClusterControl node:

1. Copy the CMON configuration file of cluster ID 1 as a template for cluster ID 2:

$ cp /etc/cmon.d/cmon_1.cnf /etc/cmon.d/cmon_2.cnf

Update the following lines inside /etc/cmon.d/cmon_2.cnf to reflect the second cluster:

cluster_id=2
name=default_cluster_2
logfile=/var/log/cmon_2.log
mysql_server_addresses=[comma-separated list of SQL node IP addresses or hostnames]
datanode_addresses=[comma-separated list of data node IP addresses or hostnames]
mgmnode_addresses=[comma-separated list of management node IP addresses or hostnames]
ndb_connectstring=[NDB connection string of the cluster]

Details on configuration options are explained in the documentation page.
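
As a sketch, assuming the second cluster uses the addresses listed at the beginning of this post, you could fill in all of these lines in one pass with sed:

$ sed -i \
    -e 's/^cluster_id=.*/cluster_id=2/' \
    -e 's/^name=.*/name=default_cluster_2/' \
    -e 's|^logfile=.*|logfile=/var/log/cmon_2.log|' \
    -e 's/^mysql_server_addresses=.*/mysql_server_addresses=192.168.55.81,192.168.55.82/' \
    -e 's/^datanode_addresses=.*/datanode_addresses=192.168.55.83,192.168.55.84/' \
    -e 's/^mgmnode_addresses=.*/mgmnode_addresses=192.168.55.81,192.168.55.82/' \
    -e 's/^ndb_connectstring=.*/ndb_connectstring=192.168.55.81:1186,192.168.55.82:1186/' \
    /etc/cmon.d/cmon_2.cnf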

2. Set up passwordless SSH to all MySQL Cluster nodes in the second cluster using the following commands:

$ ssh-copy-id 192.168.55.81
$ ssh-copy-id 192.168.55.82
$ ssh-copy-id 192.168.55.83
$ ssh-copy-id 192.168.55.84

3. Restart the CMON service to apply the changes:

$ service cmon restart

At this point, ClusterControl should start provisioning all nodes defined in the CMON configuration files (both /etc/cmon.d/cmon_1.cnf and /etc/cmon.d/cmon_2.cnf). Monitor the output of the CMON controller for cluster ID 2 at /var/log/cmon_2.log.
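
As before, you can follow it with tail:

$ tail -f /var/log/cmon_2.log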

Then go to ClusterControl > Settings > Cluster Registrations > Synchronize Clusters to load the cluster into the dashboard. You should see it listed under the Database Cluster List.

You can now manage, monitor and scale your MySQL Clusters from the ClusterControl app. Happy clustering!
