ClusterControl Product Video for Galera Clusters

In this video, Art van Scheppingen shows you how easy it is to add your Galera setups to ClusterControl. ClusterControl provides you with a single console to deploy, manage, monitor, and scale your Galera setups and mixed environments. In addition to advanced monitoring, ClusterControl offers comprehensive security, automation, reporting, and point-and-click deployments.

Watch the video and learn more about the following topics:

  • Deploying a new Galera cluster in ClusterControl
  • Importing an existing Galera Cluster into ClusterControl
  • Navigating the Cluster Overview section in ClusterControl
  • Accessing your Galera monitoring dashboards in ClusterControl
  • Accessing your Galera node data tables in ClusterControl
  • Accessing server statistics in ClusterControl
  • Navigating the nodes section in ClusterControl
  • Enabling binary logging from your Galera nodes in ClusterControl
  • Using advisors in your Galera setup
  • Performing backups
  • Accessing log files
  • Adding new nodes to your Galera Clusters
  • Adding Asynchronous slaves
  • Cloning Clusters in ClusterControl
  • Load balancing with HAProxy, ProxySQL and MaxScale
  • Using Galera Arbitrator in ClusterControl

Video: ClusterControl for Galera Cluster

This video walks you through the features that ClusterControl offers for Galera Cluster and how you can use them to deploy, manage, monitor and scale your open source database environments.

ClusterControl and Galera Cluster

ClusterControl provides advanced deployment, management, monitoring, and scaling functionality to get your Galera clusters up and running, using proven methodologies that you can depend on.

At the core of ClusterControl is its automation functionality, which lets you automate many of the database tasks you have to perform regularly, like deploying new clusters, adding and scaling nodes, running backups and upgrades, and more.

ClusterControl is cluster-aware, topology-aware and able to provision all members of the Galera Cluster, including the replication chain connected to it.

To learn more check out the following resources…

How to Set Up Asynchronous Replication from Galera Cluster to Standalone MySQL server with GTID

Hybrid replication, i.e. combining Galera and asynchronous MySQL replication in the same setup, became much easier with the introduction of GTID in MySQL 5.6. Although it was fairly straightforward to replicate from a standalone MySQL server to a Galera Cluster, doing it the other way round (Galera → standalone MySQL) was a bit more challenging - at least until the arrival of GTID.

There are a few good reasons to attach an asynchronous slave to a Galera Cluster. For one, long-running reporting/OLAP type queries on a Galera node might slow down an entire cluster, if the reporting load is so intensive that the node has to spend considerable effort coping with it. So reporting queries can be sent to a standalone server, effectively isolating Galera from the reporting load. In a belts and suspenders approach, an asynchronous slave can also serve as a remote live backup.

In this blog post, we will show you how to replicate a Galera Cluster to a MySQL server with GTID, and how to fail over the replication in case the master node fails.

Hybrid Replication in MySQL 5.5

In MySQL 5.5, resuming broken replication requires you to determine the last binary log file and position, which differ on each Galera node if binary logging is enabled. We can illustrate this situation with the following figure:

Galera cluster asynchronous slave topology without GTID

If the MySQL master fails, replication breaks and the slave will need to switch over to another master. You will need to pick a new Galera node and manually determine the binary log file and position of the last transaction executed by the slave. Another option is to dump the data from the new master node, restore it on the slave and start replication with the new master. These options are doable, but not very practical in production.

How GTID Solves the Problem

GTID (Global Transaction Identifier), supported in MySQL 5.6, provides a better mapping of transactions across nodes. In Galera Cluster, all nodes generate different binlog files. The binlog events are the same and in the same order, but the binlog file names and offsets may vary. With GTID, slaves can identify a unique transaction coming in from several masters, and it can easily be mapped into the slave execution list if the slave needs to restart or resume replication.

Galera cluster asynchronous slave topology with GTID failover

All necessary information for synchronizing with the master is obtained directly from the replication stream. This means that when you are using GTIDs for replication, you do not need to include the MASTER_LOG_FILE or MASTER_LOG_POS options in the CHANGE MASTER TO statement; it is only necessary to enable the MASTER_AUTO_POSITION option. You can find more details about GTID in the MySQL documentation.

Setting Up Hybrid Replication by hand

Make sure the Galera nodes (masters) and slave(s) are running on MySQL 5.6 before proceeding with this setup. We have a database called sbtest in Galera, which we will replicate to the slave node.

1. Enable required replication options by specifying the following lines inside each DB node’s my.cnf (including the slave node):

For master (Galera) nodes:

gtid_mode=ON
log_bin=binlog
log_slave_updates=1
enforce_gtid_consistency
expire_logs_days=7
server_id=1         # 1 for master1, 2 for master2, 3 for master3
binlog_format=ROW

For slave node:

gtid_mode=ON
log_bin=binlog
log_slave_updates=1
enforce_gtid_consistency
expire_logs_days=7
server_id=101         # 101 for slave
binlog_format=ROW
replicate_do_db=sbtest
slave_net_timeout=60

2. Perform a cluster rolling restart of the Galera Cluster (from ClusterControl UI > Manage > Upgrade > Rolling Restart). This will reload each node with the new configurations, and enable GTID. Restart the slave as well.

3. Create a replication user by running the following statement on one of the Galera nodes:

mysql> GRANT REPLICATION SLAVE ON *.* TO 'slave'@'%' IDENTIFIED BY 'slavepassword';

4. Log into the slave and dump database sbtest from one of the Galera nodes:

$ mysqldump -uroot -p -h192.168.0.201 --single-transaction --skip-add-locks --triggers --routines --events sbtest > sbtest.sql

5. Restore the dump file onto the slave server:

$ mysql -uroot -p < sbtest.sql

6. Start replication on the slave node:

mysql> STOP SLAVE;
mysql> CHANGE MASTER TO MASTER_HOST = '192.168.0.201', MASTER_PORT = 3306, MASTER_USER = 'slave', MASTER_PASSWORD = 'slavepassword', MASTER_AUTO_POSITION = 1;
mysql> START SLAVE;

To verify that replication is running correctly, examine the output of slave status:

mysql> SHOW SLAVE STATUS\G
       ...
       Slave_IO_Running: Yes
       Slave_SQL_Running: Yes
       ...
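
To double-check that the slave has caught up, you can also compare GTID sets with the GTID_SUBSET() function, available in MySQL 5.6 and later. This is a minimal sketch; the GTID set passed as the first argument is a placeholder you would copy from the master's output:

mysql> -- on the master (any binlog-enabled Galera node):
mysql> SELECT @@GLOBAL.gtid_executed;
mysql> -- on the slave; returns 1 when everything from the master has been applied:
mysql> SELECT GTID_SUBSET('<gtid_executed value from the master>', @@GLOBAL.gtid_executed);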

Setting up Hybrid Replication using ClusterControl

In the previous section, we described all the necessary steps to enable binary logging, restart the cluster node by node, copy the data and then set up replication. This procedure is tedious, and you can easily make errors in one of these steps. In ClusterControl, we have automated all the necessary steps.

1. For ClusterControl users, you can go to each node on the Nodes page and enable binary logging.

Enable binary logging on Galera cluster using ClusterControl

This will open a dialogue that allows you to set the binary log expiration, enable GTID and auto restart.

Enable binary logging with GTID enabled

This initiates a job that writes these changes to the configuration, creates replication users with the proper grants, and restarts the node safely.


Repeat this process for each Galera node in the cluster, until all nodes indicate they are master.

All Galera Cluster nodes are now master

2. Add the asynchronous replication slave to the cluster

Adding an asynchronous replication slave to Galera Cluster using ClusterControl

And this is all you have to do. The entire process described in the previous paragraph has been automated by ClusterControl.

Changing Master

If the designated master goes down, the slave will try to reconnect after slave_net_timeout seconds (60 seconds in our setup - the default is 1 hour). You should see the following error in the slave status:

       Last_IO_Errno: 2003
       Last_IO_Error: error reconnecting to master 'slave@192.168.0.201:3306' - retry-time: 60  retries: 1

Since we are using Galera with GTID enabled, master failover is supported via ClusterControl when Cluster and Node Auto Recovery has been enabled. Whether the master would fail due to network connectivity or any other reason, ClusterControl will automatically fail over to the most suitable other master node in the cluster.

If you wish to perform the failover manually, simply change the master node as follows:

mysql> STOP SLAVE;
mysql> CHANGE MASTER TO MASTER_HOST = '192.168.0.202', MASTER_PORT = 3306, MASTER_USER = 'slave', MASTER_PASSWORD = 'slavepassword', MASTER_AUTO_POSITION = 1;
mysql> START SLAVE;

In some cases, you might encounter a “Duplicate entry .. for key” error after the master node has changed:

       Last_Errno: 1062
       Last_Error: Could not execute Write_rows event on table sbtest.sbtest; Duplicate entry '1089775' for key 'PRIMARY', Error_code: 1062; handler error HA_ERR_FOUND_DUPP_KEY; the event's master log mysqld-bin.000009, end_log_pos 85789000

In older versions of MySQL, you can just use SET GLOBAL SQL_SLAVE_SKIP_COUNTER = n to skip statements, but it does not work with GTID. Miguel from Percona wrote a great blog post on how to repair this by injecting empty transactions.
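
For reference, injecting an empty transaction looks roughly like the sketch below; the UUID and sequence number are placeholders for the GTID of the failing transaction, which you can find in the slave's error message or in Retrieved_Gtid_Set from SHOW SLAVE STATUS:

mysql> STOP SLAVE;
mysql> SET GTID_NEXT='aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee:12345'; -- placeholder GTID of the transaction to skip
mysql> BEGIN; COMMIT;  -- commit an empty transaction under that GTID
mysql> SET GTID_NEXT='AUTOMATIC';
mysql> START SLAVE;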

Another approach, for smaller databases, is to take a fresh dump from any of the available Galera nodes, restore it and use the RESET MASTER statement:

mysql> STOP SLAVE;
mysql> RESET MASTER;
mysql> DROP SCHEMA sbtest; CREATE SCHEMA sbtest; USE sbtest;
mysql> SOURCE /root/sbtest_from_galera2.sql; -- repeat step #4 above to get this dump
mysql> CHANGE MASTER TO MASTER_HOST = '192.168.0.202', MASTER_PORT = 3306, MASTER_USER = 'slave', MASTER_PASSWORD = 'slavepassword', MASTER_AUTO_POSITION = 1;
mysql> START SLAVE;

You may also use pt-table-checksum to verify replication integrity; more information in this blog post.

Note: Since the slave applier in MySQL replication is by default single-threaded, do not expect asynchronous replication performance to match Galera’s parallel replication. MySQL 5.6 and 5.7 have options to apply asynchronous replication in parallel on the slave nodes, but in principle this replication still depends on the correct ordering of transactions within the same schema. If the replication load is intensive and continuous, the slave lag will just keep growing. We have seen cases where the slave could never catch up with the master.
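
If you want to experiment with a parallel applier on the slave, the options below are a rough sketch for MySQL 5.7 (on 5.6, only per-database parallelism is available); the worker count is just an example to tune for your workload:

slave_parallel_type=LOGICAL_CLOCK    # 5.7: parallelize transactions that committed together on the master
slave_parallel_workers=4             # example value; tune to your workload
slave_preserve_commit_order=1        # keep commit order; requires log_bin and log_slave_updates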

How to Deploy Asynchronous Replication Slave to MariaDB Galera Cluster 10.x using ClusterControl

Combining Galera and asynchronous replication in the same MariaDB setup, aka Hybrid Replication, can be useful - e.g. as a live backup node in a remote datacenter or reporting/analytics server. We already blogged about this setup for Codership/Galera or Percona XtraDB Cluster users, but a master failover as described in that post does not work for MariaDB because of its different GTID approach. In this post, we will show you how to deploy an asynchronous replication slave to MariaDB Galera Cluster 10.x (with master failover!), using GTID with ClusterControl.

Preparing the Master

First and foremost, you must ensure that the master and slave nodes are running MariaDB Galera 10.0.2 or later. A MariaDB replication slave requires at least one master with GTID among the Galera nodes. However, we would recommend configuring all the MariaDB Galera nodes as masters. GTID, which is automatically enabled in MariaDB, will be used for master failover. The following must be true for the masters:

  • At least one master among the Galera nodes
  • All masters must be configured with the same domain ID
  • log_slave_updates must be enabled
  • All masters’ MariaDB port is accessible by ClusterControl and slaves
  • Must be running MariaDB version 10.0.2 or later

From ClusterControl this is easily done by selecting Enable Binary Logging in the drop-down menu for each node.

Enabling binary logging through ClusterControl

And then enable GTID in the dialogue:

Once Proceed has been clicked, a job will automatically configure the Galera node according to the settings described earlier.

If you wish to perform this action by hand, you can configure a Galera node as a master by changing the MariaDB configuration file for that node as shown below:

gtid_domain_id=<must be same across all mariadb servers participating in replication>
server_id=<must be unique>
binlog_format=ROW
log_slave_updates=1
log_bin=binlog

After making these changes, restart the nodes one by one or use a rolling restart (ClusterControl > Manage > Upgrades > Rolling Restart).


Preparing the Slave

For the slave, you need a separate host or VM, with or without MariaDB installed. If you do not have MariaDB installed, you need to perform the following tasks: configure the root password (based on monitored_mysql_root_password), create a slave user (based on repl_user, repl_password), configure MariaDB, start the server and finally start replication.
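
If you prefer to perform these steps by hand, the MariaDB-specific part is pointing the slave at a Galera node using GTID. A minimal sketch, where the GTID position, host and credentials are placeholders (repl_user/repl_password mirror the ClusterControl settings mentioned above):

mysql> SET GLOBAL gtid_slave_pos = '<gtid_binlog_pos captured on the master when the data was copied>';
mysql> CHANGE MASTER TO MASTER_HOST = '<galera node IP>', MASTER_PORT = 3306, MASTER_USER = 'repl_user', MASTER_PASSWORD = 'repl_password', MASTER_USE_GTID = slave_pos;
mysql> START SLAVE;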

When adding the slave using ClusterControl, all these steps are automated in the Add Replication Slave job, as described below.

Add replication slave to MariaDB Cluster

After adding our slave node, our deployment will look like this:

MariaDB Galera asynchronous slave topology

Master Failover and Recovery

Since we are using MariaDB with GTID enabled, master failover is supported via ClusterControl when Cluster and Node Auto Recovery has been enabled. Whether the master would fail due to network connectivity or any other reason, ClusterControl will automatically fail over to the most suitable other master node in the cluster.

Automatic slave failover to another master in Galera cluster

This way ClusterControl will add a robust asynchronous slave capability to your MariaDB Cluster!

How Galera Cluster Enables High Availability for High Traffic Websites

In today’s competitive technology environment, high availability is a must. There is no way around it - if your website or service is not available, then most probably you are losing money. It could relate directly to money loss - your customers cannot access your e-commerce service and they cannot spend their money (in addition, they are likely to use your competitors instead). Or it can be less directly linked - your sales reps cannot reach your web-based CRM system and that seriously limits their productivity. Either way, a website that cannot be reached is a problem for any organization. The question is - how do we ensure that your website stays available? Assuming you are using MySQL or MariaDB (not unlikely, if it is a website), one of the technologies that can be utilized is Galera Cluster. In this blog post, we’ll show you how to leverage a high availability database to improve the availability of your site.

High availability is hard with databases

Every website is different, but in general, we’ll see some frontend webservers, a database backend, load balancers, file system storage and additional components like caching systems. To make a website highly available, we would need each component to be highly available so we do not have any single point of failure (SPOF).

The webserver tier is usually relatively easy to scale, as it is possible to deploy multiple instances behind a load balancer. If you want to preserve session state across all webservers, you would probably store it in a shared datastore (Memcached, or even MySQL Cluster for a pretty robust alternative). Load balancers can be made highly available (e.g. HAProxy/Keepalived/VIP). There are a number of clustered file systems that can be used for the storage layer; we have previously covered solutions like csync2 with lsyncd, GlusterFS and OCFS2. The database service can be made highly available with e.g. DRBD, so all storage is replicated to a standby server. This means the service can be started on another host that has access to the database files.

This might not work very well for a high traffic website though, as all webservers will still be hitting the primary database instance. Failover can also take a while, since you are failing over to a cold standby server and MySQL has to perform crash recovery when starting up.

MySQL master-slave replication is another option, and there are different ways to make it highly available. There are inconveniences though, as not all nodes are the same - you need to ensure that you write to just one master and avoid diverging datasets across the nodes.

Galera Cluster for MySQL/MariaDB

Let’s start with discussing what Galera Cluster is and what it is not. It is a virtually synchronous, multi-master cluster. You can access any of the nodes and issue reads and writes - this is a significant improvement compared to replication setups - no need for failovers and master promotions; if one node is down, you can usually just connect to another node and execute queries. Galera provides a self-healing mechanism through state transfers - State Snapshot Transfer (SST) and Incremental State Transfer (IST). This means that when a node joins a cluster, Galera will attempt to bring it back into sync. It may just copy missing data from another node’s gcache (IST) or, if no node has the required data in its gcache, it will copy the full contents of one of the nodes to the joining node (via SST). This also makes it very easy for a Galera Cluster to recover even from serious failure conditions. Galera Cluster is also a way to scale reads - you can increase the size of the cluster and read from all of the nodes.

On the other hand, you have to keep in mind what Galera is not. Galera is not a solution which implements sharding (like NDB Cluster does) - it does not shard the data automatically, each and every Galera node contains the same full data set, just like a standalone MySQL node. Therefore, Galera is not a solution which can help you scale writes. It can help you squeeze more writes than standard, single-threaded replication as it can utilize multiple writers at once, but as of MySQL 5.7, you can use multithreaded replication for every workload so it’s not the same advantage it used to be. Galera is not a solution which can be left alone - even though it has some auto healing features, it still requires user supervision and it happens pretty often that Galera cannot recover on its own.

Having said that, Galera Cluster is still a great piece of software, which can be utilized to build highly available clusters. In the next section we’ll show you different deployment patterns for Galera. Before we get there, there is one more very important bit of information required to understand why we want to deploy Galera clusters the way we are about to describe - quorum calculations. Galera has a mechanism which prevents split-brain scenarios from happening. It detects the number of available nodes and checks if there is a quorum - Galera has to see more than 50% of the nodes to accept traffic. Otherwise it assumes that a split brain has happened and that it is part of a minority segment which may not contain the current data and cannot accept writes. Ok, now let’s talk about the different ways in which you can deploy a Galera cluster.

Deploying Galera cluster

Basic deployment - three node cluster

The most common way to deploy a Galera cluster is to use an odd number of nodes. The most basic setup is a three-node cluster. This setup can survive the loss of one node - it still operates just fine, even though its read capacity decreases and it cannot tolerate any further failures. Such a setup is a very common entry point - you can always add more nodes in the future, keeping in mind that you should use an odd number of nodes. We have seen Galera clusters as big as 11 - 13 nodes.
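
For reference, the Galera-related part of a node’s configuration in such a three-node cluster looks roughly like the sketch below (a deployment tool like ClusterControl generates this for you); the provider library path, addresses, cluster name and SST method are examples that vary per distribution and vendor:

binlog_format=ROW
default_storage_engine=InnoDB
innodb_autoinc_lock_mode=2
wsrep_on=ON                                        # required explicitly on MariaDB 10.1+
wsrep_provider=/usr/lib/galera/libgalera_smm.so    # example path; varies per distribution/vendor
wsrep_cluster_name=my_galera_cluster
wsrep_cluster_address=gcomm://192.168.0.201,192.168.0.202,192.168.0.203
wsrep_node_address=192.168.0.201                   # this node's own address
wsrep_sst_method=xtrabackup-v2                     # or rsync / mariabackup, depending on vendor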

Minimalistic approach - two nodes + garbd

If you want to do some testing with Galera, you can reduce the cluster size to two nodes. A 2-node cluster does not provide any fault tolerance, but we can improve this by leveraging Galera Arbitrator (garbd) - a daemon which can be started on a third node. It will receive all of the Galera traffic and, for the purpose of detecting failures and forming a quorum, it acts as a Galera node. Given that garbd doesn’t need to apply writesets (it just accepts them; no further action is taken), it doesn’t require beefy hardware like the database nodes would. This reduces the cost of the hardware needed to build the cluster.
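
Starting garbd on the third host is roughly a one-liner; the addresses and group name below are placeholders, and the group must match the cluster's wsrep_cluster_name:

$ garbd --address "gcomm://192.168.0.201:4567,192.168.0.202:4567" --group my_galera_cluster --daemon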

One step further - add asynchronous slave

At any point you can extend your Galera cluster by adding one or more asynchronous slaves. Ideally, your Galera cluster has GTID enabled - it makes adding and reslaving slaves so much easier. Even if not, it is still possible to create a slave, although replication is much less likely to survive a crash of the “master” Galera node. Such an asynchronous slave can be used for a variety of reasons. It can be used as a backup host - run your backups on it to minimize the impact on the Galera cluster. It can also be used for heavier, OLAP queries - again, to remove load from the Galera cluster. Another reason to use an asynchronous slave is to build a Disaster Recovery environment - set it up in a separate datacenter and, should the location of your Galera Cluster go up in flames, you will still have a copy of your data in a safe location.

Multi-datacenter Galera clusters

If you really care about the availability of your data, you can improve it by spanning your Galera cluster across multiple datacenters. Galera can be used across the WAN - some reconfiguration may be needed to adapt it to the higher latency of WAN connections, but it is perfectly suitable for such an environment. The only blocker would be if your application frequently modifies a very small subset of rows - this could significantly reduce the number of queries per second it is able to execute.

When talking about WAN-spanning Galera clusters, it is important to mention segments. Segments are used in Galera to differentiate the nodes of the cluster which are collocated in the same DC. For example, you may want to configure all nodes in the datacenter “A” to use segment 1 and all nodes in the datacenter “B” to use segment 2. We won’t go into details here (we covered this bit extensively in one of our posts) but in short, using segments reduces inter-segment communication - something you definitely want if segments are connected using WAN.
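
Assigning a node to a segment is done through the Galera provider options in that node's configuration, for example (the segment number is just an illustration - 1 for datacenter “A” nodes, 2 for datacenter “B” nodes as described above):

wsrep_provider_options="gmcast.segment=1"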

Another important aspect of building a highly available Galera Cluster over multiple datacenters is to use an odd number of datacenters. Two is not enough to build a setup which would automatically tolerate the loss of a datacenter. Let’s analyze the following setup.

As we mentioned earlier, Galera requires a quorum to operate. Let’s see what would happen if one datacenter is not available:

As you can clearly see, we ended up with 3 nodes up - only 50%, which is not enough to form a quorum. Manual action is required to assess the situation and promote the remaining part of the cluster to form a “Primary Component” - this takes time and prolongs downtime.

Let’s take a look at another option:

In this case we added one more node to the datacenter “B” to make sure it will take over the traffic when DC “A” is down. But what would happen if DC “B” goes down?

We have three nodes out of seven, less than 50%. The only way is to use a third datacenter. You can, of course, use one more segment of three nodes (or more, it’s important to have the same number of nodes in each datacenter) but you can minimize costs by utilizing Galera Arbitrator:

In this case, no matter which datacenter stops operating, as long as it is only one of them, garbd keeps us at quorum:

In our example it is: seven nodes in total, three down, four (3 + garbd) up - enough to form a quorum.

Multiple Galera clusters connected using asynchronous replication

We mentioned that you can deploy an asynchronous slave to a Galera cluster and use it, for example, as a DR host. You also have to keep in mind that a Galera cluster, to some extent, can be treated as a single MySQL instance. This means there’s no reason why you couldn’t connect two separate Galera clusters using asynchronous replication. Such a setup is pretty common. Again, as with regular slaves, it’s better to have GTID because it allows you to quickly reslave your “standby” Galera cluster to another Galera node in the “active” cluster. Without GTID this is also possible, but it’s much more time-consuming and error-prone. Of course, such a setup does not behave the way a multi-DC Galera cluster would - there’s no automated recovery of the replication link between datacenters, and there’s no automated reslaving if a “master” Galera node goes down. You may need to build your own tools to automate this process.

Proxy layer

There is no high availability without a proxy layer (unless you have built-in HA in your application). Static connections to a single host do not scale. What you need is a middleman - something that sits between your application and your database tier and masks the complexity of your database setup. Ideally, your application will have just a single point of access to your databases - connect to a given host and port - that’s it.

You can choose between different proxies - HAProxy, MaxScale, ProxySQL - all of them can work with Galera Cluster. Some of them, though, may require additional configuration - you need to know what needs to be done and how. Additionally, it’s extremely important that you don’t end up with the proxy itself as a single point of failure. Each of these proxies can be deployed in a highly available fashion, and you will need a few components working together.

How can ClusterControl help you build highly available Galera Clusters?

ClusterControl can deploy Galera clusters from all the available vendors: Codership, Percona XtraDB Cluster and MariaDB Cluster.

Once you have deployed a cluster, you can easily scale it. If needed, you can define different segments for a WAN-spanning cluster.

You can also, if you want, deploy an asynchronous slave to the Galera Cluster.

You can use ClusterControl to deploy Galera Arbitrator, which can be very helpful (as we showed previously) in multi-datacenter deployments.

Proxy layer

ClusterControl gives you the ability to deploy different proxies with your Galera Cluster. It supports deployments of HAProxy, MaxScale and ProxySQL. For HAProxy and ProxySQL, there are additional options to deploy redundant instances with Keepalived and Virtual IP.

For HAProxy, ClusterControl has a statistics page. You can also set a node to maintenance state:

For MaxScale, using ClusterControl, you have access to MaxScale’s CLI and you can perform any actions that are possible from it. Please note we deploy MaxScale version 1.4.3 - more recent versions introduced licensing limitations.

ClusterControl provides quite a bit of functionality for managing ProxySQL, including creating query rules and query caching. If you are interested in more details, we encourage you to watch the replay of one of our webinars in which we demoed this UI.

As mentioned previously, ClusterControl can also deploy HAProxy and ProxySQL in a highly available setup. We use virtual IP and Keepalived to track the state of services and perform a failover if needed.

As we have seen in this blog, Galera Cluster can be used in a number of ways to provide high availability of your database, which is a key part of the web infrastructure. You are welcome to download ClusterControl and try the different topologies we discussed above.

ClusterControl for Galera Cluster for MySQL

ClusterControl allows you to easily manage your database infrastructure on premise or in the cloud. With in-depth support for technologies like Galera Cluster for MySQL and MariaDB setups, you can truly automate mixed environments for next-level applications.

Since the launch of ClusterControl in 2012, we’ve experienced growth in new industries with customers who are benefiting from the advancements ClusterControl has to offer - in particular when it comes to Galera Cluster for MySQL.

In addition to reaching new highs in ClusterControl demand, this past year we’ve doubled the size of our team allowing us to continue to provide even more improvements to ClusterControl.

Take a look at this infographic for our top Galera Cluster for MySQL resources and information about how ClusterControl works with Galera Cluster.

Webinar Replay and Q&A: High Availability in ProxySQL for HA MySQL infrastructures

Thanks to everyone who participated in our recent webinar on High Availability in ProxySQL and on how to build a solid, scalable and manageable proxy layer using ProxySQL for highly available MySQL infrastructures.

This second joint webinar with ProxySQL creator René Cannaò saw lots of interest and some nice questions from our audience, which we’re sharing below in this blog post, along with our answers to them.

Building a highly available proxy layer creates additional challenges, such as how to manage multiple proxy instances, how to ensure that their configuration is in sync, how to handle Virtual IPs and failover, and more - all of which we covered in this webinar with René. And we demonstrated how you can make your ProxySQL highly available when deploying it from ClusterControl (download & try it free).

If you missed the webinar, would like to watch it again or browse through the slides, it is available for viewing online.

Watch the webinar replay

Webinar Questions & Answers

Q.: In a MySQL master/slave pair, I am inclined to deploy ProxySQL instances directly on both master and slave hosts. In an environment of 100s of master/slave pairs, with new hosts being built all the time, I can see this as a good way to combine host / MySQL / ProxySQL master/slave pair deploys via a single Ansible playbook. Do you guys have any thoughts on this?

A.: Our only concern here is that co-locating ProxySQL with database servers can make the debugging of database performance issues harder - the proxy will add overhead for CPU and memory and MySQL may have to compete for those resources.

Additionally, we’re not really sure what you’d like to achieve by deploying ProxySQL on all database servers - where would you like to connect? To one instance or to both? In the first case, you’d have to come up with a solution to handling potentially hundreds of failovers - when a master goes down, you’d have to re-route traffic to the ProxySQL instance on a slave. It adds more complexity than it’s really worth. The second case also creates complexity: instead of connecting to one proxy, the application would have to connect to both.

Co-locating ProxySQL on the application hosts is not that much more complex regarding configuration management than deploying it on database hosts. Yet it makes it so much easier for the application to route traffic - just connect to the local ProxySQL instance over the UNIX socket and that’s all.
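
As a small illustration, assuming ProxySQL's mysql-interfaces setting includes a UNIX socket (e.g. /tmp/proxysql.sock) next to the default traffic port 6033, the application simply connects like this:

$ mysql -u app_user -p -S /tmp/proxysql.sock     # over the local UNIX socket (path is an assumption)
$ mysql -u app_user -p -h 127.0.0.1 -P 6033      # or over TCP to the local ProxySQL traffic port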

Q.: Do you recommend for multiple ProxySQL instances to talk to each other or is it preferable for config changes to rely on each ProxySQL instance detecting the same issue at the same time? For example, would you make ProxySQL01 apply config changes in proxysql_master_switchover.sh to both itself and ProxySQL02 to ensure they stay the same? (I hope this isn't a stupid question... I've not yet succeeded in making this work so I thought maybe I'm missing something!)

A.: This is a very good question indeed. As long as you have scripts which would ensure that the configuration is the same on all of the ProxySQL instances - it should result in more consistent configuration across the whole infrastructure.

Q.: Sometimes I get the following warning 2017-04-04T02:11:43.996225+02:00 Keepalived_vrrp: Process [113479] didn't respond to SIGTERM. and VIP was moved to another server ... I can send you the complete configuration keepalived ... I didn't find a solution as to why I am getting this error/warning.

A.: This may happen from time to time. Timeout results in a failed check which triggers VIP failover. And as to why the monitored process didn't respond to signal in time, that’s really hard to tell. It is possible to increase the number of health-check fails required to trigger a VIP move to minimize the impact of such timeouts.

Q.: What load balancer can we use in front of ProxySQL?

A.: You can use virtually every load balancer out there, including ProxySQL itself - this is actually a topology we’d suggest. It’s better to rely on a single piece of software than to use ProxySQL and then another tool which would be redundant - a steeper learning curve and more issues to debug.

Q.: When I started using ProxySQL I had this issue "access denied for MySQL user"; it was random, what is the cause of it?

A.: If it is random and not systematic, it may be worth investigating whether it is a bug. We strongly recommend opening an issue on GitHub.

Q.: I have tried ProxySQL and the issue we faced was that after using ProxySQL to split read/write, the connection switched to the master for all reads. How can we prevent this?

A.: This is most likely a configuration issue, and there are multiple reasons why this may happen. For example, if transaction_persistent was set to 1 and reads were all within a transaction. Or perhaps the query rules in mysql_query_rules weren’t configured correctly, and all traffic was being sent to the default hostgroup (the master).
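
For illustration, a common read/write split rule set entered through ProxySQL's admin interface (port 6032) looks roughly like this; hostgroup 0 as the writer and hostgroup 1 as the readers are assumptions that must match your mysql_servers configuration:

mysql> INSERT INTO mysql_query_rules (rule_id, active, match_digest, destination_hostgroup, apply)
       VALUES (1, 1, '^SELECT.*FOR UPDATE', 0, 1),  -- SELECT ... FOR UPDATE goes to the writer
              (2, 1, '^SELECT', 1, 1);              -- all other SELECTs go to the readers
mysql> LOAD MYSQL QUERY RULES TO RUNTIME;
mysql> SAVE MYSQL QUERY RULES TO DISK;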

Q.: How can Service Discovery help me?

A.: If your infrastructure is constantly changing, tools like etcd, Zookeeper or Consul can help you to track those changes, detect and push configuration changes to proxies. When your database clusters are going up and down, this can simplify configuration management.

Q.: In the discussion on structure, the load balancer scenario was quickly moved on from because of its single point of failure. How about when having a HA load balancer using CNAMES (not IP) for example AWS ElasticLoadBalancer on TCP ports. Would that be a structure that could work well in production?

A.: As long as the load balancer is highly available, this is not a problem, because it’s not a single point of failure. ELB itself is deployed in HA mode, so having a single ELB in front of anything (database servers, a pool of ProxySQL instances) will not introduce a single point of failure.

Q.: Don't any of the silo approaches have a single point of failure in the proxy that is fronting the silo?

A.: Indeed, although it is not a single point of failure - it’s more like multiple points of failure introduced into the infrastructure. If we are talking about a huge infrastructure of hundreds or thousands of proxies, the loss of a very small subset of application hosts should be acceptable. If we are talking about smaller setups, it should be manageable to have one ProxySQL per application host.

Watch the webinar replay

Announcing ClusterControl 1.4.1 - the ProxySQL Edition

Today we are pleased to announce the 1.4.1 release of ClusterControl - the all-inclusive database management system that lets you easily deploy, monitor, manage and scale highly available open source databases - and load balancers - in any environment: on-premise or in the cloud.

This release contains key new management features for MySQL and MariaDB load balancing with ProxySQL, along with performance improvements and bug fixes.

Release Highlights

For ProxySQL

  • Support for:
    • MySQL Galera in addition to Replication clusters
    • Active-standby HA setup with Keepalived
    • Use the Query Monitor to view query digests
  • Management features:
    • Manage Query Rules (Query Caching, Query Rewrite)
    • Manage Host Groups (Servers)
    • Manage ProxySQL Database Users
    • Manage ProxySQL System Variables

For Galera Cluster for MySQL & Replication

  • Manage MySQL Galera and Replication clusters with management/public IPs
    • For monitoring connections and data/private IPs for replication traffic
  • Add MySQL Galera nodes or Replication Read Slaves with management and data IPs

Download ClusterControl

View release details and resources

Load balancers are an essential component in MySQL and MariaDB database high availability; especially when making topology changes transparent to applications and implementing read-write split functionality. As we all know, high-traffic database applications draw an enormous number of queries daily, which is why DBAs and SysAdmins require reliable technology solutions that can automatically scale to handle those connections while remaining available to handle still more.

And this is where load balancing technologies such as HAProxy, MaxScale and now ProxySQL come in.

ClusterControl has always come with support for HAProxy, as a generic TCP load balancer. We then added support for MariaDB’s MaxScale, an SQL-aware load balancer.

And today we’re happy to announce management support for ProxySQL, a lightweight yet complex protocol-aware proxy that sits between the MySQL clients and servers, in addition to the deployment and monitoring features for ProxySQL we announced two months ago.

Unlike others, ProxySQL understands the MySQL protocol, which allows it to implement features that are otherwise impossible. For example, ProxySQL is the only proxy supporting connection multiplexing and query caching.

With that said, the new management features in ClusterControl include the following:

MySQL Galera in addition to Replication clusters

Up until now, ClusterControl enabled users to deploy ProxySQL on MySQL Replication clusters and monitor its performance. The same is now true for Galera Cluster for MySQL, MariaDB Galera Cluster and Percona XtraDB Cluster. This also includes active-standby HA setups with Keepalived.

Use the Query Monitor to view query digests

ClusterControl offers unified and comprehensive real-time monitoring of your entire database and server infrastructure. You can easily visualize performance in custom dashboards to establish operational baselines and support capacity planning. And with comprehensive reports for ProxySQL, you have a clear view of data points like connections, queries, data transfer and utilization, and more.

For more information on how monitoring works in ProxySQL, see our blog post on MySQL Load Balancing with ProxySQL - An Overview.

Management features

With ClusterControl, you can now easily configure and manage your ProxySQL deployments with its comprehensive UI. You can create servers, reorientate your setup, create users, set rules, manage query routing, and enable variable configurations. The new management features in ClusterControl for ProxySQL include:

Manage Query Rules (Query Caching, Query Rewrite)

  • View running queries, create rules or cache and rewrite queries on the fly.

Manage Host Groups (Servers) - ProxySQL uses the concept of hostgroups - groups of backends which serve the same purpose or handle a similar type of traffic.

  • Add or remove servers to existing and new host groups.

Manage ProxySQL Database Users

  • Create new DB users or add existing MySQL users to ProxySQL.

Manage ProxySQL System Variables

  • View and change global runtime variables for tweaking your ProxySQL instance.

For more information on how ProxySQL helps with MySQL query cache, query rewrite, and on ProxySQL’s host groups, read our blog on How ProxySQL adds Failover and Query Control to your MySQL Replication Setup.
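
As a rough sketch of what query caching looks like under the hood (ClusterControl exposes the same settings in its UI), the rule below caches results of a matching digest for five seconds; the table name and hostgroup are placeholders:

mysql> INSERT INTO mysql_query_rules (rule_id, active, match_digest, destination_hostgroup, cache_ttl, apply)
       VALUES (10, 1, '^SELECT .* FROM sbtest1', 1, 5000, 1);  -- cache_ttl is in milliseconds
mysql> LOAD MYSQL QUERY RULES TO RUNTIME;
mysql> SAVE MYSQL QUERY RULES TO DISK;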

There are a number of other features and improvements that we have not mentioned here. You can find all details in the ChangeLog.

We encourage you to test this latest release and provide us with your feedback. If you’d like a demo, feel free to request one.

Thank you for your ongoing support, and load balancing!

PS.: For additional tips & tricks, follow our blog: https://severalnines.com/blog/


Video: Interview with ProxySQL Creator René Cannaò

We sat down at Percona Live 2017 with ProxySQL creator René Cannaò to talk about the new release of ClusterControl, what’s coming up for ProxySQL, how it is being received by the MySQL community and how it differs from MaxScale.

ClusterControl and ProxySQL

Packaged in the latest ClusterControl release, ProxySQL enables MySQL, MariaDB and Percona XtraDB database systems to easily manage intense, high-traffic database applications without losing availability.

ProxySQL is a lightweight yet complex protocol-aware proxy that sits between the MySQL clients and servers. It is a gate, which basically separates clients from databases, and is therefore an entry point used to access all the database servers.

ClusterControl provides the only GUI on the market for the easy deployment, configuration and management of ProxySQL.

To learn more check out the following resources…

ProxySQL: All the Severalnines Resources

Load balancers are an essential component in MySQL and MariaDB database high availability; especially when making topology changes transparent to applications and implementing read-write split functionality.

ProxySQL is a lightweight yet complex protocol-aware proxy that sits between the MySQL clients and servers. It is a gate, which basically separates clients from databases, and is therefore an entry point used to access all the database servers.

ProxySQL has an advanced multi-core architecture to handle a large number of connections, multiplexed to potentially hundreds of backend servers, squeezing every drop of performance out of your database cluster - all with zero downtime.

Here are our top resources for ProxySQL to get you started with this amazing new technology.

On-Demand Webinars

High Availability in ProxySQL

Joint webinar with ProxySQL’s creator, René Cannaò, where we discuss building a solid, scalable and manageable proxy layer using ProxySQL in order to create a highly available MySQL infrastructure.

Watch the replay!

MySQL & MariaDB Load Balancing with ProxySQL & ClusterControl

We’re delighted to be joined by ProxySQL’s creator, René Cannaò, to tell us more about this new MySQL & MariaDB load balancing proxy and its features. We also show you how you can deploy ProxySQL using ClusterControl.

Watch the replay!

Introducing the Severalnines MySQL© Replication Blueprint

The Severalnines Blueprint for MySQL Replication includes all aspects of a MySQL Replication topology, with the ins and outs of deployment, setting up replication, monitoring, upgrades, performing backups and managing high availability using proxies such as ProxySQL, MaxScale and HAProxy. This webinar provides an in-depth walk-through of this blueprint and explains how to make best use of it.

Watch the replay!

Videos

Video: Interview #2 with ProxySQL Creator René Cannaò

We sat down at Percona Live 2017 with ProxySQL creator René Cannaò to talk about the new release of ClusterControl, what’s coming up for ProxySQL, how it is being received by the MySQL community and how it differs from MaxScale.

Watch the Video!

Video: Interview with ProxySQL Creator René Cannaò

Severalnines sits down with ProxySQL founder and creator René Cannaò to discuss his product and the upcoming webinar MySQL & MariaDB Load Balancing with ProxySQL & ClusterControl.

Watch the Video!

Top Blogs

Announcing ClusterControl 1.4.1 - the ProxySQL Edition

Release of ClusterControl 1.4.1 with key new management features for MySQL and MariaDB load balancing with ProxySQL, along with performance improvements and bug fixes.

Read More!

Using ClusterControl to Deploy and Configure ProxySQL on top of MySQL Replication

This blog post shows you how to deploy and configure ProxySQL from ClusterControl 1.3.3, on top of a master-slave replication setup. ProxySQL is an SQL-aware load balancer. It allows you to automatically do read-write split of your traffic, by sending writes to your writeable master and sending reads to the read-only slaves.

Read More!

Tips and Tricks - How to Shard MySQL with ProxySQL in ClusterControl

Sharding can be a painful exercise. Databases need to be split on multiple servers, data migrated, and applications have to be tweaked to be aware of the shards. However, there are ways to make this less painful. This blog explains how to use ProxySQL to shard a table to another cluster in ClusterControl without changing your application.

Read More!

Sharding MySQL with MySQL Fabric and ProxySQL

This blog post shows you how to set up a sharded MySQL setup based on MySQL Fabric and ProxySQL.

Read More!

How to Setup Read-Write Split in Galera Cluster using ProxySQL

This blog shows you how to set up read-write split in Galera Cluster using ProxySQL. It also covers some limitations that we found at the time of writing.

Read More!

How ProxySQL adds Failover and Query Control to your MySQL Replication Setup

An overview of ProxySQL in a MySQL replication environment, and how ProxySQL adds features like failover and query control to the setup.

Read More!

MySQL Load Balancing with ProxySQL - An Overview

An overview of ProxySQL, an SQL-aware load balancer for MySQL and MariaDB. This blog post explains the concepts behind ProxySQL, and goes through installation and configuration.

Read More!

Whitepapers

Database Sharding with MySQL Fabric

Why do we shard? How does sharding work? What are the different ways I can shard my database? This white paper goes through some of the theory behind sharding. It also discusses three different tools which are designed to help users shard their MySQL databases. And last but not least, it shows you how to set up a sharded MySQL setup based on MySQL Fabric and ProxySQL.

Download the whitepaper

Migrating to MySQL 5.7 - The Database Upgrade Guide

Upgrading to a new major version involves risk, and it is important to plan the whole process carefully. In this whitepaper, we look at the important new changes in MySQL 5.7 and show you how to plan the test process. We then look at how to do a live system upgrade without downtime. For those who want to avoid connection failures during slave restarts and switchover, this document goes even further and shows you how to leverage ProxySQL to achieve a graceful upgrade process.

Download the whitepaper


ClusterControl for ProxySQL

Packaged in the latest ClusterControl release, ProxySQL enables MySQL, MariaDB and Percona XtraDB database systems to easily manage intense, high-traffic database applications without losing availability. ClusterControl offers advanced, point-and-click configuration management features for the load balancing technologies we support. We know the issues regularly faced and make it easy to customize and configure the load balancer for your unique application needs.

We know load balancing and support many different technologies. ClusterControl has many things preconfigured to get you started with a couple of clicks. If you run into challenges, we also provide resources and on-the-spot support to help ensure your configurations are running at peak performance.

ClusterControl delivers on an array of features to help deploy and manage ProxySQL

  • Advanced Graphical Interface - ClusterControl provides the only GUI on the market for the easy deployment, configuration and management of ProxySQL.
  • Point and Click deployment - With ClusterControl you’re able to apply point and click deployments to MySQL, MySQL replication, MySQL Cluster, Galera Cluster, MariaDB, MariaDB Galera Cluster, and Percona XtraDB technologies, as well as the top related load balancers: HAProxy, MaxScale and ProxySQL.
  • Suite of monitoring graphs - With comprehensive reports you have a clear view of data points like connections, queries, data transfer and utilization, and more.
  • Configuration Management - Easily configure and manage your ProxySQL deployments with a simple UI. With ClusterControl you can create servers, reorientate your setup, create users, set rules, manage query routing, and enable variable configurations.

How to deploy and manage HAProxy, MaxScale or ProxySQL with ClusterControl - Webinar May 9th

Join us for our new webinar next week, Tuesday May 9th, with Krzysztof Książek, Senior Support Engineer at Severalnines, who will discuss support for proxies for MySQL HA setups in ClusterControl: how they differ and what their pros and cons are. You’ll also be shown how you can easily deploy and manage HAProxy, MaxScale and ProxySQL from ClusterControl via a live demo.

Proxies are building blocks of high availability setups for MySQL. They can detect failed nodes and route queries to hosts which are still available. If your master failed and you had to promote one of your slaves, proxies will detect such topology changes and route your traffic accordingly. More advanced proxies can do much more, such as routing traffic based on precise query rules, caching queries or mirroring them. They can even be used to implement different types of sharding.

Register below to hear all about it!

Date, Time & Registration

Europe/MEA/APAC

Tuesday, May 9th at 09:00 BST (UK) / 10:00 CEST (Germany, France, Sweden)

Register Now

North America/LatAm

Tuesday, May 9th at 9:00 PST (US) / 12:00 EST (US)

Register Now

Agenda

  • Introduction
  • Why use a proxy layer?
  • Comparison of proxies - the pros & cons
    • HAProxy
    • MaxScale
    • ProxySQL
  • Live demo of proxy support in ClusterControl

Speaker

Krzysztof Książek, Senior Support Engineer at Severalnines, is a MySQL DBA with experience managing complex database environments for companies like Zendesk, Chegg, Pinterest and Flipboard.

We look forward to “seeing” you there and to insightful discussions!

If you have any questions or would like a personalised live demo, please do contact us.

Video: ProxySQL ClusterControl Demonstration

In my latest ClusterControl product demonstration video I show the latest features we have incorporated for ProxySQL, a highly flexible and versatile new load balancing technology.

In this video we demonstrate…

  • Deployment of ProxySQL for MySQL Replication and Galera Cluster
  • Installation instructions for ProxySQL
  • Adding ProxySQL Admins and monitoring users
  • Selecting the instances you wish to balance
  • Monitoring ProxySQL Host Groups
  • Query Monitoring
  • Managing Query Rules
  • Extracting top queries

ClusterControl for ProxySQL

Packaged in the latest ClusterControl release, ProxySQL enables MySQL, MariaDB and Percona XtraDB database systems to easily manage intense, high-traffic database applications without losing availability. ClusterControl offers advanced, point-and-click configuration management features for the load balancing technologies we support. We know the issues regularly faced and make it easy to customize and configure the load balancer for your unique application needs.

We know load balancing and support many different technologies. ClusterControl has many things preconfigured to get you started with a couple of clicks. If you run into challenges, we also provide resources and on-the-spot support to help ensure your configurations are running at peak performance.

ClusterControl delivers on an array of features to help deploy and manage ProxySQL

  • Advanced Graphical Interface - ClusterControl provides the only GUI on the market for the easy deployment, configuration and management of ProxySQL.
  • Point and Click deployment - With ClusterControl you’re able to apply point and click deployments to MySQL, MySQL replication, MySQL Cluster, Galera Cluster, MariaDB, MariaDB Galera Cluster, and Percona XtraDB technologies, as well as the top related load balancers: HAProxy, MaxScale and ProxySQL.
  • Suite of monitoring graphs - With comprehensive reports you have a clear view of data points like connections, queries, data transfer and utilization, and more.

  • Configuration Management - Easily configure and manage your ProxySQL deployments with a simple UI. With ClusterControl you can create servers, reorientate your setup, create users, set rules, manage query routing, and enable variable configurations.

Webinar Replay and Q&A: how to deploy and manage ProxySQL, HAProxy and MaxScale

Thanks to everyone who participated in yesterday’s webinar on how to deploy and manage ProxySQL, HAProxy and MaxScale with ClusterControl.

Krzysztof Książek, Senior Support Engineer at Severalnines, discussed support for proxies for MySQL HA setups in ClusterControl: how they differ and what their pros and cons are. And he demonstrated how you can easily deploy and manage HAProxy, MaxScale and ProxySQL from ClusterControl during a live demo.

If you missed the webinar, would like to watch it again or browse through the slides, it’s all available for viewing online now. You’ll also find below a transcript of the Q&A session, which took place at the end of the webinar.

Watch the webinar replay

Webinar Questions & Answers

Q.: I’d like to replace HAProxy with ProxySQL - can I deploy ProxySQL on the same VMs as my current HAProxy ones, or do I have to create new VMs? I deploy and manage it all with ClusterControl.

A.: Yes, as long as there is no conflict in ports used by those two proxies, there’s no reason why they couldn’t coexist. By default there is no such conflict, but a user may customize ports when deploying a proxy from ClusterControl, so if you are not sure how HAProxy is configured, it’s better to double-check it.

Q.: Do you know what happened to the admin interface MaxScale 2.0 and why it was removed?

A.: We don’t have detailed knowledge - it’s been deprecated due to security reasons, but what exactly is hidden behind this statement, we don’t know.

Q.: Have you any plans to talk about or support other load balancers in future, such as F5 BigIP, A10 Networks, or Citrix Netscaler? Or do you have any immediate thoughts on them you can share just now?

A.: As of now we don’t have any plans related to these load balancers, but if we get more requests for it, we’ll look into them more.

Q.: How can we sync users across multiple Proxysql instances? Or add existing users automatically to a newly added Proxysql instance?

A.: As of now, it is not possible to do that using ClusterControl - you can still do it manually, accessing ProxySQL through the command line interface. Having said that, we have plans to implement configuration syncing in one of the next ClusterControl releases. For adding users in batches, right now, the CLI is the best way - ProxySQL accepts passwords in the form of a MySQL password hash, so it’s fairly easy to write a script which will do the import. This is one of the feature requests we got, so we will, most likely, implement it at some point. We can’t share any ETA though.
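
A manual user import through the admin interface could look roughly like the sketch below; the username and the MySQL-style password hash are placeholders (ProxySQL treats a password value starting with '*' as a hash), and hostgroup 0 is assumed to be the writer:

mysql> INSERT INTO mysql_users (username, password, default_hostgroup)
       VALUES ('app_user', '*2470C0C06DEE42FD1618BB99005ADCA2EC9D1E19', 0);  -- placeholder hash
mysql> LOAD MYSQL USERS TO RUNTIME;
mysql> SAVE MYSQL USERS TO DISK;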

Q.: How does ClusterControl handle configuration changes in ProxySQL?

A.: ClusterControl does not take advantage of multiple configuration levels in ProxySQL - any change introduced via the UI is immediately loaded to runtime configuration.

Q.: Can you describe what the CPU usage is for ProxySQL or MaxScale on read/write split?

A.: Typically you’ll see ProxySQL utilizing less CPU resources compared to MaxScale, but it all depends on your workload and the number of query rules you may have added to ProxySQL.

Watch the webinar replay

Command Line Aficionados: Introducing s9s for ClusterControl


Easily Integrate ClusterControl With Your Existing DevOps Tools via s9s - Our New Command Line Interface

We’ve heard your call (and, selfishly, our own): please meet s9s - the new command line interface (CLI) for ClusterControl, our all-inclusive open source database management system.

At every conference we’ve attended so far, visitors have been asking us whether there is a command line interface for ClusterControl. And, we’re not afraid to admit, some of us at Severalnines have always wanted to have one as well. So those same colleagues have gone and created the s9s CLI for ClusterControl, which we’re happy to present today.

In fact, Johan Andersson, our CTO, is one of our command line aficionados and he describes the new CLI as follows:

What’s the ClusterControl CLI all about?

The ClusterControl CLI is an open source project and an optional package introduced with ClusterControl version 1.4.1. It is a command line tool to interact with, control and manage your entire database infrastructure using ClusterControl.

The ClusterControl CLI opens a new door for cluster automation where you can easily integrate it with existing deployment automation tools like Ansible, Puppet, Chef or Salt. This allows you to easily integrate scripts from your orchestration tools inside the CLI.

Users who have downloaded ClusterControl can use the CLI for all ClusterControl features while they’re on the Enterprise trial. Community users can use the deployment and monitoring functionalities, and existing customers can use the CLI to the full extent of ClusterControl.

Usage and Installation

The CLI can be installed by adding the s9s tools repository and using a package manager, or it can be compiled from source. The current ClusterControl installation script, install-cc, automatically installs the command line client. The client can also be installed on another computer or workstation for remote management. Finally, the CLI requires ClusterControl 1.4.1 or later.
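
As a minimal sketch, once the s9s tools repository has been added for your distribution (see the documentation linked below for the repository setup), the package can be installed with the regular package manager:

$ sudo apt-get install s9s-tools    # Debian/Ubuntu
$ sudo yum install s9s-tools        # RHEL/CentOS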

Moreover, all communication between the client and the controller is encrypted and secured using TLS.

The ClusterControl CLI allows you to deploy and manage open source databases and load balancers in a way that is fully integrated and aligned with the ClusterControl core and GUI.

The s9s command line project is open source and located on GitHub: https://github.com/severalnines/s9s-tools

For examples and additional information, e.g. how to set up users and authentication, please visit
https://severalnines.com/docs/components.html#clustercontrol-cli

Before you get started, you need to have ClusterControl version 1.4.1 or later installed, see https://severalnines.com/download-clustercontrol-database-management-system

Some of the things you can do from the CLI in ClusterControl

  • Deploy and manage database clusters
    • MySQL
    • PostgreSQL
    • MongoDB to be added soon
  • Monitor your databases
    • Status of nodes and clusters
    • Cluster properties can be extracted
    • Gives detailed enough information about your clusters
  • Manage your systems and integrate with DevOps tools
    • Create, stop or start clusters
    • Add, remove, or restart nodes in the cluster
    • Create database users (CREATE USER, GRANT privileges to user)
      • Users created in the CLI are traceable through the system
    • Create load balancers (HAProxy, ProxySQL)
    • Create and Restore backups
    • Use maintenance mode
    • Conduct configuration changes of db nodes
    • Integrate with existing deployment automation
      • Ansible, Puppet, Chef or Salt, ...

Actions you take from the CLI will be visible in the ClusterControl Web UI and vice versa.

How to contribute

The CLI project (aka s9s-tools) can be accessed via GitHub. We encourage users to contribute to the project by:

  • Trying out the CLI and give us feedback
  • Letting us know about missing features, wishes, or problems by opening issues on GitHub
  • Contributing patches to the project

To sum things up

The ClusterControl CLI and GUI are fully integrated and synced to allow you to utilize the CLI for deployment and management of your databases and load balancers, whilst using the advanced graphs in the GUI for monitoring and troubleshooting. The CLI offers detailed information about node stats and cluster stats, enabling scripts and other tools to benefit from those.

In our experience, System Administrators and DevOps professionals are the most likely to benefit from a CLI for ClusterControl, as they are accustomed to using scripts to perform their daily tasks.

Happy command-line-clustering!

Webinar - How to use ClusterControl from the command line and integrate with your DevOps tools


Join us for our new webinar on “How to use ClusterControl from the command line” on Tuesday, May 30th!

In this webinar we will walk you through the ClusterControl command line toolset capabilities and demonstrate its main functions and DevOps integration aspects.

How to use ClusterControl from the command line

The Severalnines command line interface offers an open source, command line alternative to the web-based ClusterControl user interface. You can perform almost any deployment, management or scaling task from the command line; any changes made are also visible in the web UI and vice versa.

The command line client opens up many possibilities to integrate ClusterControl into your infrastructure or development cycle. You can integrate the client easily with existing deployment automation tools like Ansible, Puppet and Chef. You can enrich your existing chatbot to interface with ClusterControl, or make use of Severalnines’ own chatbot, CCBot. The command line client can also produce output in various formats, including JSON, to make integration with any framework easy.
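
As a small illustration - assuming the client is already installed and configured against your controller, and that the --print-json option is available in your s9s version - machine-readable output can be requested like this:

$ s9s cluster --list --long --print-json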

Sign up below!

Date, Time & Registration

Europe/MEA/APAC

Tuesday, May 30th at 09:00 GMT / 10:00 CET (Germany, France, Sweden)

Register Now

North America/LatAm

Tuesday, May 30th at 09:00 Pacific Time (US) / 12:00 Eastern Time (US)

Register Now

Agenda

  • What is the ClusterControl command line client?
  • How to install the command line client
  • How to use the command line client
  • Integration possibilities
  • How you can contribute to the command line client
  • Live Demo

With new installations from ClusterControl 1.4.1 onwards, the Severalnines ClusterControl command line interface is installed by default. You can also install the command line client manually and use it with existing ClusterControl installations from version 1.4.1 onwards.

To ensure your command line client is secure, we use TLS between the client and the ClusterControl backend. This ensures all your commands, jobs and stats are transported and executed securely.

Speakers

Johan Andersson, CTO, Severalnines

Johan's technical background and interest are in high performance computing as demonstrated by the work he did on main-memory clustered databases at Ericsson as well as his research on parallel Java Virtual Machines at Trinity College Dublin in Ireland. Prior to co-founding Severalnines, Johan was Principal Consultant and lead of the MySQL Clustering & High Availability consulting group at MySQL / Sun Microsystems / Oracle, where he designed and implemented large-scale MySQL systems for key customers. Johan is a regular speaker at MySQL User Conferences as well as other high profile community gatherings with popular talks and tutorials around architecting and tuning MySQL Clusters.

Art van Scheppingen, Senior Support Engineer, Severalnines

Art van Scheppingen is a Senior Support Engineer at Severalnines. He’s a pragmatic database expert with over 16 years of experience in web development. He previously worked at Spil Games as Head of Database Engineering, where he kept a broad view of the whole database environment: from MySQL to MongoDB, Vertica to Hadoop, and from Sphinx Search to SOLR. He regularly presents his work and projects at various conferences (Percona Live, MongoDB Open House, FOSDEM) and related meetups.

We look forward to “seeing” you there!


How to use s9s -The Command Line Interface to ClusterControl


s9s is our official command line tool to interact with ClusterControl. We’re pretty excited about this, as we believe its ease of use and script-ability will make our users even more productive. Let’s have a look at how to use it. In this blog, we’ll show you how to use s9s to deploy and manage your database clusters.

ClusterControl v1.4.1 comes with an optional package called s9s-tools, which contains a binary called "s9s". As most of you already know, ClusterControl provides a graphical user interface from where you can deploy, monitor and manage your databases. The GUI interacts with the ClusterControl Controller (cmon) via an RPC interface. The new CLI client is another way to interact with the RPC interface, by using a collection of command line options and arguments. At the time of writing, the CLI has support for a big chunk of the ClusterControl functionality and the plan is to continue to build it out. Please refer to ClusterControl CLI documentation page for more details. It is worth mentioning that the CLI is open source, so it is possible for anybody to add functionality to it.

As a side note, s9s is the backbone that drives the automatic deployment when running ClusterControl and Galera Cluster on Docker Swarm, as shown in this blog post. You should take note that this tool is relatively new and comes with some limitations, like a different user management module and no support for Role Based Access Control.

Setting up the Client

Starting from version 1.4.1, the installer script will automatically install this package on the ClusterControl node. You can also install it on another computer or workstation to manage the database cluster remotely. All communication between the client and the controller is encrypted and secured using TLS.

In this example, the client will be installed on another workstation running on Ubuntu. We are going to connect to the ClusterControl server remotely. Here is the diagram to illustrate this:

We have covered the installation and configuration steps in the documentation. Ensure you perform the following steps:

  1. On the ClusterControl host, ensure it runs ClusterControl Controller 1.4.1 or later.
  2. On the ClusterControl host, ensure the CMON RPC interface (ports 9500 and 9501) is listening on an IP address that is routable from the external network. Follow these steps.
  3. Install s9s-tools package on the workstation. Follow these installation steps.
  4. Configure the Remote Access. Follow these steps.

Take note that it is also possible to build the s9s command line client on other Linux distributions and Mac OS X, as described here. The command line client installs manual pages, which can be viewed by entering the command:

$ man s9s

Deploy Everything through CLI

In this example, we are going to perform the following operations with the CLI:

  1. Deploy a three-node Galera Cluster
  2. Monitor state and process
  3. Create schema and user
  4. Take backups
  5. Cluster and node operations
  6. Scaling up/down

Deploy a three-node Galera Cluster

First, on the ClusterControl host, ensure we have setup passwordless SSH to all the target hosts:

(root@clustercontrol)$ ssh-copy-id 10.0.0.3
(root@clustercontrol)$ ssh-copy-id 10.0.0.4
(root@clustercontrol)$ ssh-copy-id 10.0.0.5

Then, from the client workstation:

(client)$ s9s cluster --create --cluster-type=galera --nodes="10.0.0.3;10.0.0.4;10.0.0.5"  --vendor=percona --provider-version=5.7 --db-admin-passwd="mySecr3t" --os-user=root --cluster-name="PXC_Cluster_57" --wait

We specified “--wait”, which means the command will run in the foreground and wait for the job to complete. It returns 0 for a successful job, or non-zero if the job fails. To run the job in the background instead, just omit this flag.

Then, you should see the progress bar:

Create Galera Cluster
\ Job  1 RUNNING    [▊       ]  26% Installing MySQL on 10.0.0.3

The same progress can be monitored under Activity (top menu) of the ClusterControl UI:

Notice that the job was initiated by user 'dba', which is our command line remote user.

Monitor state and process

There are several ways to look for the cluster. You can simply list out the cluster with --list:

$ s9s cluster --list --long
ID STATE   TYPE   OWNER GROUP NAME           COMMENT
 1 STARTED galera dba   users PXC_Cluster_57 All nodes are operational.
Total: 1

Also, there is another flag called --stat for a more detailed summary:

$ s9s cluster --stat
    Name: PXC_Cluster_57                      Owner: dba/users
      ID: 1                                   State: STARTED
    Type: GALERA                             Vendor: percona 5.7
  Status: All nodes are operational.
  Alarms:  0 crit   1 warn
    Jobs:  0 abort  0 defnd  0 dequd  0 faild  2 finsd  0 runng
  Config: '/etc/cmon.d/cmon_1.cnf'

Take note that you can use either the cluster ID or the cluster name as the identifier when manipulating our Galera Cluster, as shown below. More examples further down.
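
For example, assuming the --stat view accepts the same identifier flags used elsewhere in this post, both of the following should return the same cluster summary:

$ s9s cluster --stat --cluster-id=1
$ s9s cluster --stat --cluster-name='PXC_Cluster_57'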

To get an overview of all nodes, we can simply use the “node” command option:

$ s9s node --list --long
ST  VERSION         CID CLUSTER        HOST      PORT COMMENT
go- 5.7.17-13-57      1 PXC_Cluster_57 10.0.0.3  3306 Up and running
go- 5.7.17-13-57      1 PXC_Cluster_57 10.0.0.4  3306 Up and running
go- 5.7.17-13-57      1 PXC_Cluster_57 10.0.0.5  3306 Up and running
co- 1.4.1.1834        1 PXC_Cluster_57 10.0.0.7  9500 Up and running
go- 10.1.23-MariaDB   2 MariaDB_10.1   10.0.0.10 3306 Up and running
go- 10.1.23-MariaDB   2 MariaDB_10.1   10.0.0.11 3306 Up and running
co- 1.4.1.1834        2 MariaDB_10.1   10.0.0.7  9500 Up and running
gr- 10.1.23-MariaDB   2 MariaDB_10.1   10.0.0.9  3306 Failed
Total: 8

s9s allows you to have an aggregated view of all processes running on all nodes. It can be represented in a real-time format (similar to ‘top’ output) or one-time format (similar to ‘ps’ output). To monitor live processes, you can do:

$ s9s process --top --cluster-id=1
PXC_Cluster_57 - 04:27:12                                           All nodes are operational.
4 hosts, 16 cores,  6.3 us,  0.7 sy, 93.0 id,  0.0 wa,  0.0 st,
GiB Mem : 14.8 total, 2.9 free, 4.6 used, 0.0 buffers, 7.2 cached
GiB Swap: 7 total, 0 used, 7 free,

PID   USER   HOST     PR  VIRT      RES    S   %CPU   %MEM COMMAND
 4623 dba    10.0.0.5 20  5700352   434852 S  20.27  11.24 mysqld
 4772 dba    10.0.0.4 20  5634564   430864 S  19.99  11.14 mysqld
 2061 dba    10.0.0.3 20  5780688   459160 S  19.91  11.87 mysqld
  602 root   10.0.0.7 20  2331624    38636 S   8.26   1.00 cmon
  509 mysql  10.0.0.7 20  2613836   163124 S   0.66   4.22 mysqld
...

Similar to the top command, you can get a real-time summary for all nodes in this cluster. The first line tells us the cluster name, the current time and the cluster state. The second, third and fourth lines show the accumulated resources of all nodes in the cluster combined. In this example, we used 4 hosts (1 ClusterControl + 3 Galera), each with 4 cores, ~4 GB of RAM and around 2 GB of swap.

To list out processes (similar to ps output) of all nodes for cluster ID 1, you can do:

$ s9s process --list --cluster-id=1
PID   USER   HOST     PR  VIRT      RES    S   %CPU   %MEM COMMAND
 2061 dba    10.0.0.3 20  5780688   459160 S  25.03  11.87 mysqld
 4623 dba    10.0.0.5 20  5700352   434852 S  23.87  11.24 mysqld
 4772 dba    10.0.0.4 20  5634564   430864 S  20.86  11.14 mysqld
  602 root   10.0.0.7 20  2331624    42436 S   8.73   1.10 cmon
  509 mysql  10.0.0.7 20  2613836   173688 S   0.66   4.49 mysqld
...

You can see there is another column called “HOST”, which represents the host where the process is running. This centralized view will save you time, since you do not have to collect individual outputs for each node and then compare them.

Create database schema and user

Now we know our cluster is ready and healthy. We can then create a database schema:

$ s9s cluster --create-database --cluster-name='PXC_Cluster_57' --db-name=db1
Database 'db1' created.

The command below does the same thing but using the cluster ID as the cluster identifier instead:

$ s9s cluster --create-database --cluster-id=1 --db-name=db2
Database 'db2' created.

Then, create a database user associated with this database together with proper privileges:

$ s9s cluster --create-account --cluster-name='PXC_Cluster_57' --account='userdb1:password@10.0.0.%' --privileges='db1.*:SELECT,INSERT,UPDATE,DELETE,INDEX,CREATE'
Account 'userdb1' created.
Grant processed: GRANT SELECT, INSERT, UPDATE, DELETE, INDEX, CREATE ON db1.* TO 'userdb1'@'10.0.0.%'

You can now import data or start working with the database.
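
For example, a quick connectivity check with the user we just created, from a host in the allowed 10.0.0.% range (a plain MySQL client call, nothing ClusterControl-specific):

$ mysql -h 10.0.0.3 -u userdb1 -p db1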

Take backups with mysqldump and Xtrabackup

Creating a backup is simple. You just need to decide which node to back up and which backup method to use. By default, the backup is stored on the controller node, unless you specify the --on-node flag. If the backup destination directory does not exist, ClusterControl will create it for you.

Backup completion time varies depending on the database size. It’s good to let the backup job run in the background:

$ s9s backup --create --backup-method=mysqldump --cluster-id=1 --nodes=10.0.0.5:3306 --backup-directory=/storage/backups
Job with ID 4 registered.

The ID for the backup job is 4. We can list all jobs by simply listing them out:

$ s9s job --list
ID CID STATE    OWNER  GROUP  CREATED  RDY  COMMENT
 1   0 FINISHED dba    users  06:19:33 100% Create Galera Cluster
 2   1 FINISHED system admins 06:33:48 100% Galera Node Recovery
 3   1 FINISHED system admins 06:36:04 100% Galera Node Recovery
 4   1 RUNNING3 dba    users  07:21:30   0% Create Backup
Total: 4

The job list tells us that there is a running job with state RUNNING3 (job is running on thread #3). You can then attach to this job if you would like to monitor the progress:

$ s9s job --wait --job-id=4
Create Backup
\ Job  4 RUNNING3   [    █     ] ---% Job is running

Or, inspect the job messages using the --log flag:

$ s9s job --log --job-id=4
10.0.0.5:3306: Preparing for backup - host state (MYSQL_OK) is acceptable.
10.0.0.7: creating backup dir: /storage/backups/BACKUP-1
10.0.0.5:3306: detected version 5.7.17-13-57.
Extra-arguments be passed to mysqldump:  --set-gtid-purged=OFF
10.0.0.7: Starting nc -dl 9999 > /storage/backups/BACKUP-1/mysqldump_2017-05-09_072135_mysqldb.sql.gz 2>/tmp/netcat.log.
10.0.0.7: nc started, error log: 10.0.0.7:/tmp/netcat.log.
Backup (mysqldump, storage controller): '10.0.0.5: /usr/bin/mysqldump --defaults-file=/etc/my.cnf --flush-privileges --hex-blob --opt   --set-gtid-purged=OFF  --single-transaction --skip-comments --skip-lock-tables --skip-add-locks --databases mysql |gzip  - | nc 10.0.0.7 9999'.
10.0.0.5: MySQL >= 5.7.6 detected, enabling 'show_compatibility_56'
A progress message will be written every 1 minutes
...

The same applies to xtrabackup. Just change the backup method accordingly. The supported values are xtrabackupfull (full backup) and xtrabackupincr (incremental backup):

$ s9s backup --create --backup-method=xtrabackupfull --cluster-id=1 --nodes=10.0.0.5:3306 --backup-directory=/storage/backups
Job with ID 6 registered.

Take note that an incremental backup requires that there is already a full backup made of the same databases (all or individually specified), else the incremental backup will be upgraded to a full backup.
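
For instance, once the full xtrabackup above has completed, an incremental backup of the same node could be taken with the same flags, only changing the method:

$ s9s backup --create --backup-method=xtrabackupincr --cluster-id=1 --nodes=10.0.0.5:3306 --backup-directory=/storage/backups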

You can then list out the backups created for this cluster:

$ s9s backup --list --cluster-id=1 --long --human-readable
ID CID STATE     OWNER HOSTNAME CREATED  SIZE FILENAME
 1   1 COMPLETED dba   10.0.0.5 07:21:39 252K mysqldump_2017-05-09_072135_mysqldb.sql.gz
 1   1 COMPLETED dba   10.0.0.5 07:21:43 1014 mysqldump_2017-05-09_072135_schema.sql.gz
 1   1 COMPLETED dba   10.0.0.5 07:22:03 109M mysqldump_2017-05-09_072135_data.sql.gz
 1   1 COMPLETED dba   10.0.0.5 07:22:07  679 mysqldump_2017-05-09_072135_triggerseventsroutines.sql.gz
 2   1 COMPLETED dba   10.0.0.5 07:30:20 252K mysqldump_2017-05-09_073016_mysqldb.sql.gz
 2   1 COMPLETED dba   10.0.0.5 07:30:24 1014 mysqldump_2017-05-09_073016_schema.sql.gz
 2   1 COMPLETED dba   10.0.0.5 07:30:44 109M mysqldump_2017-05-09_073016_data.sql.gz
 2   1 COMPLETED dba   10.0.0.5 07:30:49  679 mysqldump_2017-05-09_073016_triggerseventsroutines.sql.gz

Omit the “--cluster-id=1” option to see the backup records for all your clusters.

Cluster and node operations

Performing a rolling restart (one node at a time) can be done with a single command line:

$ s9s cluster --rolling-restart --cluster-id=1 --wait
Rolling Restart
| Job  9 RUNNING    [███       ]  31% Stopping 10.0.0.4

For configuration management, we can get a list of configuration options defined inside a node’s my.cnf, and pipe the stdout to grep for filtering:

$ s9s node --list-config --nodes=10.0.0.3 | grep max_
MYSQLD      max_heap_table_size                    64M
MYSQLD      max_allowed_packet                     512M
MYSQLD      max_connections                        500
MYSQLD      wsrep_max_ws_rows                      131072
MYSQLD      wsrep_max_ws_size                      1073741824
mysqldump   max_allowed_packet                     512M

Let’s say we would like to reduce the max_connections. Then, we can use the “node” command option to perform the configuration update as shown in the following example:

$ s9s node --change-config --nodes=10.0.0.3 --opt-group=mysqld --opt-name=max_connections --opt-value=200
Variable 'max_connections' set to '200' and effective immediately.
Persisted change to configuration file /etc/my.cnf.
$ s9s node --change-config --nodes=10.0.0.4 --opt-group=mysqld --opt-name=max_connections --opt-value=200
Variable 'max_connections' set to '200' and effective immediately.
Persisted change to configuration file /etc/my.cnf.
$ s9s node --change-config --nodes=10.0.0.5 --opt-group=mysqld --opt-name=max_connections --opt-value=200
Variable 'max_connections' set to '200' and effective immediately.
Persisted change to configuration file /etc/my.cnf.

As stated in the job response, the changes are effective immediately, so it is not necessary to perform a node restart.
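
You can verify the change the same way the configuration was listed earlier, which should now show the new value:

$ s9s node --list-config --nodes=10.0.0.3 | grep max_connections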

Scaling up and down

Adding a new database node is simple. First, set up passwordless SSH to the new node:

(clustercontrol)$ ssh-copy-id 10.0.0.9

Then, specify the node’s IP address or hostname together with the cluster identifier (assume we want to add the node into a cluster with ID=2):

(client)$ s9s cluster --add-node --nodes=10.0.0.9 --cluster-id=2 --wait
Add Node to Cluster
| Job  9 FINISHED   [██████████] 100% Job finished.

To remove a node, one would do:

$ s9s cluster --remove-node --nodes=10.0.0.9 --cluster-id=2
Job with ID 10 registered.
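
Since we did not pass “--wait” this time, the job runs in the background; as with the backup job earlier, you can attach to it or inspect its messages:

$ s9s job --wait --job-id=10
$ s9s job --log --job-id=10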

Summary

ClusterControl CLI can be nicely integrated with your infrastructure automation tools. It opens a new way to interact and manipulate your databases. It is completely open source and available on GitHub. Go check it out!

Video: ClusterControl Command Line Client (CLI) Product Demonstration


The video below details the features and functions that are available in the new ClusterControl Command Line Client (CLI). Included in the video are…

  • Overview of the Command Line Interface
  • Installation of the CLI
  • Authentication & Security
  • Benefits of using the CLI
  • Context-based help through the CLI
  • Deploying new clusters
  • Viewing status of new clusters via CLI
  • Creating a database user

ClusterControl Command Line Client (CLI)

Included in the latest release of ClusterControl is the new Command Line Interface (CLI), which allows users to perform most of the functions available within ClusterControl using simple commands. Used in conjunction with the powerful GUI, it gives users alternative ways to manage their open source database environments using whatever engine they prefer.

The ClusterControl CLI allows users who prefer not to manage their databases through a GUI to have an alternative method to deploy and manage their databases, while still benefiting from the advanced automation and management features found in ClusterControl. All actions, such as deploying a cluster, using the CLI will be visible in the UI and vice versa.

The new CLI allows you to…

  • Deploy and manage database clusters
    • MySQL
    • PostgreSQL
  • Monitoring Features
    • Status of nodes and clusters
    • Cluster properties can be extracted
  • Management features
    • Create clusters
    • Stop or start clusters
    • Add or remove nodes
    • Restart nodes in the cluster
    • Create database users (CREATE USER, GRANT privileges to user)
    • Create load balancers (HAProxy, ProxySQL)
    • Create and Restore backups
    • Enable maintenance mode
    • Configuration changes of db nodes

To learn more about the new ClusterControl CLI, check it out here.

ClusterControl on Docker


(This blog was updated on June 20, 2017)

We’re excited to announce our first step towards dockerizing our products. Please welcome the official ClusterControl Docker image, available on Docker Hub. This will allow you to evaluate ClusterControl with a couple of commands:

$ docker pull severalnines/clustercontrol

The Docker image comes with ClusterControl installed and configured with all of its components, so you can immediately use it to manage and monitor your existing databases. Supported database servers/clusters:

  • Galera Cluster for MySQL
  • Percona XtraDB Cluster
  • MariaDB Galera Cluster
  • MySQL/MariaDB Replication
  • MySQL/MariaDB single instance
  • MongoDB Replica Set
  • PostgreSQL single instance

As more and more people know, Docker is based on the concept of so-called application containers and is much faster and more lightweight than full-stack virtual machines such as those run with VMware or VirtualBox. It is a very nice way to run applications and services in a completely isolated environment, and a user can launch and tear down the whole stack within seconds.

Having a Docker image for ClusterControl is convenient in terms of how quick it is to get up and running, and the setup is 100% reproducible. Docker users can now start testing ClusterControl, since we have an image that everyone can pull down and use to launch the tool.

ClusterControl Docker Image

The image is available on Docker Hub and the code is hosted in our Github repository. Please refer to the Docker Hub page or our Github repository for the latest instructions.

The image consists of ClusterControl and all of its components:

  • ClusterControl controller, CLI, cmonapi, UI and NodeJS packages installed via Severalnines repository.
  • Percona Server 5.6 installed via Percona repository.
  • Apache 2.4 (mod_ssl and mod_rewrite configured)
  • PHP 5.4 (gd, mysqli, ldap, cli, common, curl)
  • SSH key for root user.
  • Deployment script: deploy-container.sh

The core of the automatic deployment is a shell script called “deploy-container.sh”. This script monitors a custom table inside the CMON database called “cmon.containers”. A database container created by the run command or by an orchestration tool reports and registers itself into this table. The script looks for new entries and performs the necessary actions using s9s, the ClusterControl CLI. You can monitor the progress directly from the ClusterControl UI or with the “docker logs” command.

New Deployment - ClusterControl and Galera Cluster on Docker

There are several ways to deploy a new database cluster stack on containers, especially with respect to the container orchestration platform used. The currently supported methods are:

  • Docker (standalone)
  • Docker Swarm
  • Kubernetes

A new database cluster stack would usually consist of a ClusterControl server with a three-node Galera Cluster. From there, you can scale up or down according to your desired setup. We have built another Docker image to serve as a database base image called “centos-ssh”, which can be used with ClusterControl for automatic deployment. This image comes with simple bootstrapping logic to download the public key from the ClusterControl container (automatic passwordless SSH setup), register itself with ClusterControl and pass the environment variables for the chosen cluster setup.

Docker (Standalone)

To run on standalone Docker host, one would do the following:

  1. Run the ClusterControl container:

    $ docker run -d --name clustercontrol -p 5000:80 severalnines/clustercontrol
  2. Then, run the DB containers. Define the CC_HOST (the ClusterControl container's IP) environment variable or simply use container linking. Assuming the ClusterControl container name is ‘clustercontrol’:

    $ docker run -d --name galera1 -p 6661:3306 --link clustercontrol:clustercontrol -e CLUSTER_TYPE=galera -e CLUSTER_NAME=mygalera -e INITIAL_CLUSTER_SIZE=3 severalnines/centos-ssh
    $ docker run -d --name galera2 -p 6662:3306 --link clustercontrol:clustercontrol -e CLUSTER_TYPE=galera -e CLUSTER_NAME=mygalera -e INITIAL_CLUSTER_SIZE=3 severalnines/centos-ssh
    $ docker run -d --name galera3 -p 6663:3306 --link clustercontrol:clustercontrol -e CLUSTER_TYPE=galera -e CLUSTER_NAME=mygalera -e INITIAL_CLUSTER_SIZE=3 severalnines/centos-ssh

ClusterControl will automatically pick up the new containers to deploy. If it finds that the number of containers is equal to or greater than INITIAL_CLUSTER_SIZE, cluster deployment will begin. You can verify that with:

$ docker logs -f clustercontrol

Or, open the ClusterControl UI at http://{Docker_host}:5000/clustercontrol and look under Activity -> Jobs.

To scale up, just run new containers and ClusterControl will add them into the cluster automatically:

$ docker run -d --name galera4 -p 6664:3306 --link clustercontrol:clustercontrol -e CLUSTER_TYPE=galera -e CLUSTER_NAME=mygalera -e INITIAL_CLUSTER_SIZE=3 severalnines/centos-ssh

Docker Swarm

Docker Swarm allows container deployment on multiple hosts. It manages a cluster of Docker Engines called a swarm.

Take note of some prerequisites for running ClusterControl and Galera Cluster on Docker Swarm:

  • Docker Engine version 1.12 and later.
  • Docker Swarm Mode is initialized.
  • ClusterControl must be connected to the same overlay network as the database containers.

In Docker Swarm mode, centos-ssh defaults to looking for 'cc_clustercontrol' as the CC_HOST. If you create the ClusterControl container with 'cc_clustercontrol' as the service name, you can skip defining the CC_HOST variable, as in the sketch below.
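
As a rough sketch only (the blog post referenced below covers the full procedure), assuming Swarm mode is already initialized, the ClusterControl service could be created on a user-defined overlay network - here hypothetically named 'galera-net':

$ docker network create --driver overlay galera-net
$ docker service create --name cc_clustercontrol --network galera-net --publish 5000:80 severalnines/clustercontrol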

We have explained the deployment steps in this blog post.

Kubernetes

Kubernetes is an open-source platform for automating deployment, scaling and operations of application containers across clusters of hosts, and provides a container-centric infrastructure. In Kubernetes, centos-ssh defaults to looking for the “clustercontrol” service as the CC_HOST. If you use another name, please specify this variable accordingly.

An example YAML definition of a ClusterControl deployment on Kubernetes is available in our Github repository under the Kubernetes directory. To deploy a ClusterControl pod using a ReplicaSet, the recommended way is:

$ kubectl create -f cc-pv-pvc.yml
$ kubectl create -f cc-rs.yml

Kubernetes will then create the necessary PersistentVolume (PV) and PersistentVolumeClaim (PVC) by connecting to the NFS server. The deployment definition (cc-rs.yml) will then run a pod and use the created PV resources to map the datadir and cmon configuration directory for persistence. This allows the ClusterControl pod to be relocated to other Kubernetes nodes if it is rescheduled.
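
Once the resources are created, standard kubectl commands can be used to confirm that the persistent volume claim is bound and the ClusterControl pod is running:

$ kubectl get pv,pvc
$ kubectl get pods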

Import Existing DB Containers into ClusterControl

If you already have a Galera Cluster running on Docker and you would like ClusterControl to manage it, you can simply run the ClusterControl container in the same Docker network as the database containers. The only requirement is to ensure the target containers have SSH related packages installed (openssh-server, openssh-clients). Then allow passwordless SSH from ClusterControl to the database containers. Once done, use the “Add Existing Server/Cluster” feature and the cluster should be imported into ClusterControl.

Example of importing the existing DB containers

We have a physical host, 192.168.50.130, with Docker installed, and we assume that you already have a three-node Galera Cluster running in containers under the standard Docker bridge network. We are going to import the cluster into ClusterControl, which is running in another container. The following is the high-level architecture diagram:

Adding into ClusterControl

  1. Install OpenSSH related packages on the database containers, allow the root login, start it up and set the root password:

    $ docker exec -ti [db-container] apt-get update
    $ docker exec -ti [db-container] apt-get install -y openssh-server openssh-client
    $ docker exec -it [db-container] sed -i 's|^PermitRootLogin.*|PermitRootLogin yes|g' /etc/ssh/sshd_config
    $ docker exec -ti [db-container] service ssh start
    $ docker exec -it [db-container] passwd
  2. Start the ClusterControl container as daemon and forward port 80 on the container to port 5000 on the host:

    $ docker run -d --name clustercontrol -p 5000:80 severalnines/clustercontrol
  3. Verify the ClusterControl container is up:

    $ docker ps | grep clustercontrol
    59134c17fe5a        severalnines/clustercontrol   "/entrypoint.sh"       2 minutes ago       Up 2 minutes        22/tcp, 3306/tcp, 9500/tcp, 9600/tcp, 9999/tcp, 0.0.0.0:5000->80/tcp   clustercontrol
  4. Open a browser, go to http://{docker_host}:5000/clustercontrol and create a default admin user and password. You should now see the ClusterControl landing page.

  5. The last step is setting up the passwordless SSH to all database containers. Attach to ClusterControl container interactive console:

    $ docker exec -it clustercontrol /bin/bash
  6. Copy the SSH key to all database containers:

    $ ssh-copy-id 172.17.0.2
    $ ssh-copy-id 172.17.0.3
    $ ssh-copy-id 172.17.0.4
  7. Start importing the cluster into ClusterControl. Open a web browser, go to the Docker physical host’s IP address with the mapped port, e.g. http://192.168.50.130:5000/clustercontrol, click “Add Existing Cluster/Server” and specify the following information:

    Ensure you got the green tick when entering the hostname or IP address, indicating ClusterControl is able to communicate with the node. Then, click Import button and wait until ClusterControl finishes its job. The database cluster will be listed under the ClusterControl dashboard once imported.

Happy containerizing!

Docker: All the Severalnines Resources


While the idea of containers has been around since the early days of Unix, Docker made waves in 2013 when it hit the market with its innovative solution. What began as an open source project now allows you to package your stacks and applications into containers, where they share a common operating system kernel. This gives you a lightweight virtualized system with almost zero overhead. Docker also lets you bring containers up or down in seconds, making for rapid deployment of your stack.

Severalnines, like many other companies, got excited early on about Docker and began experimenting and developing ways to deploy advanced open source database configurations using Docker containers. We also released, early on, a Docker image of ClusterControl that lets you utilize the management and monitoring functionalities of ClusterControl with your existing database deployments.

Here are just some of the great resources we’ve developed for Docker over the last few years...

Severalnines on Docker Hub

In addition to the ClusterControl Docker Image, we have also provided a series of images to help you get started on Docker with other open source database technologies like Percona XtraDB Cluster and MariaDB.

Check Out the Docker Images

ClusterControl on Docker Documentation

For detailed instructions on how to install ClusterControl utilizing the Docker Image click on the link below.

Read More

Top Blogs

MySQL on Docker: Running Galera Cluster on Kubernetes

In our previous posts, we showed how one can run Galera Cluster on Docker Swarm, and discussed some of the limitations with regards to production environments. Kubernetes is widely used as an orchestration tool, and we’ll see whether we can leverage it to achieve a production-grade Galera Cluster on Docker.

Read More

MySQL on Docker: Swarm Mode Limitations for Galera Cluster in Production Setups

This blog post explains some of the Docker Swarm Mode limitations in handling Galera Cluster natively in production environments.

Read More

MySQL on Docker: Composing the Stack

Docker 1.13 introduces a long-awaited feature called Compose-file support. Compose-file defines everything about an application - services, databases, volumes, networks, and dependencies can all be defined in one place. In this blog, we’ll show you how to use Compose-file to simplify the Docker deployment of MySQL containers.

Read More

MySQL on Docker: Deploy a Homogeneous Galera Cluster with etcd

Our journey to make Galera Cluster run smoothly on Docker containers continues. Deploying Galera Cluster on Docker is tricky when using orchestration tools. With this blog, find out how to deploy a homogeneous Galera Cluster with etcd.

Read More

MySQL on Docker: Introduction to Docker Swarm Mode and Multi-Host Networking

This blog post covers the basics of managing MySQL containers on top of Docker swarm mode and overlay network.

Read More

MySQL on Docker: Single Host Networking for MySQL Containers

This blog covers the basics of how Docker handles single-host networking, and how MySQL containers can leverage that.

Read More

MySQL on Docker: Building the Container Image

In this post, we will show you two ways how to build a MySQL Docker image - changing a base image and committing, or using Dockerfile. We’ll show you how to extend the Docker team’s MySQL image, and add Percona XtraBackup to it.

Read More

MySQL Docker Containers: Understanding the basics

In this post, we will cover some basics around running MySQL in a Docker container. It walks you through how to properly fire up a MySQL container, change configuration parameters, how to connect to the container, and how the data is stored.

Read More


ClusterControl on Docker

ClusterControl provides advanced management and monitoring functionality to get your MySQL replication and clustered instances up-and-running using proven methodologies that you can depend on to work. Used in conjunction with other orchestration tools for deployment to the containers, ClusterControl makes managing your open source databases easy with point-and-click interfaces and no need to have specialized knowledge about the technology.

ClusterControl delivers on an array of features to help manage and monitor your open source database environments:

  • Management & Monitoring: ClusterControl provides management features to repair and recover broken nodes, as well as test and automate MySQL upgrades.
  • Advanced Monitoring: ClusterControl provides a unified view of all MySQL nodes and clusters across all your data centers and lets you drill down into individual nodes for more detailed statistics.
  • Automatic Failure Detection and Handling: ClusterControl takes care of your replication cluster’s health. If a master failure is detected, ClusterControl automatically promotes one of the available slaves to ensure your cluster is always up.

Learn more about how ClusterControl can enhance performance here or pull the Docker Image here.

We hope that these resources prove useful!

Happy Clustering!

Announcing ClusterControl 1.4.2 - the DevOps Edition


Today we are pleased to announce the 1.4.2 release of ClusterControl - the all-inclusive database management system that lets you easily deploy, monitor, manage and scale highly available open source databases - and load balancers - in your infrastructure.

Release Highlights

For MySQL

Set up transparent failover of ProxySQL with Keepalived and Virtual IP

Keep query rules, users and other settings in sync across multiple instances

For PostgreSQL

New primary - standby deployment wizard for streaming replication

Automated failover and slave to master promotion

For MySQL, MongoDB & PostgreSQL

New Integrations with communication and incident response management systems such as PagerDuty, VictorOps, OpsGenie, Telegram and Slack

New Web SSH Console

And more! Read about the full details below.

Download ClusterControl

View release details and resources

Release description

This maintenance release of ClusterControl is all about consolidating the popular database management features our users have come to appreciate. And we have some great new features aimed at DevOps teams!

Our new integration with popular incident management and chat services lets you customise the alarms and get alerted in the ops tools you are already using - e.g. PagerDuty, VictorOps, OpsGenie, Telegram and Slack. You can also run any command available in the ClusterControl CLI from your CCBot-enabled chat.

ProxySQL can now be deployed in active standby HA mode with Keepalived and Virtual IP. It is also possible to export and synchronize configurations across multiple instances, which is an essential feature in a distributed environment.

And we’re introducing automatic failover and replication management of your PostgreSQL replication setups.

In more detail …

ChatOps with ClusterControl’s CCBot

In our previous ClusterControl release we included the new ClusterControl command line client (CLI). We have now made a new and improved CCBot available that has full integration with the CLI. This means you can use any command available in the CLI from your CCBot-enabled chat!

The command line client is intuitive and easy to use, and if you are a frequent command line user, you will get accustomed to it quickly. However, not everyone has command line access to the host where ClusterControl is installed, and if external connections to this node are prohibited, the CLI will not be able to send commands to the ClusterControl backend. Also, some users may not be accustomed to working on the command line. Adding the CLI to our chatbot, CCBot, addresses both problems: it empowers those users to send commands to ClusterControl that they normally wouldn’t have been able to.

New Integrations with popular notification systems

Alarms and Events can now easily be sent to incident management services like PagerDuty and VictorOps, or to chat services like Slack and Telegram. You can also use Webhooks if you want to integrate with other services to act on status changes in your clusters. The direct connections with these popular incident communication services allow you to customize how you are alerted from ClusterControl when something goes wrong with your database environments.

  • Send Alarms and Events to:
    • PagerDuty, VictorOps, and OpsGenie
    • Slack and Telegram
    • User registered Webhooks

Automated failover for PostgreSQL

Starting from ClusterControl 1.4.2, you can deploy an entire PostgreSQL replication setup in the same way as you would deploy MySQL and MongoDB: you can use the “Deploy Cluster” menu to deploy a primary and one or more PostgreSQL standby servers. Once the replication setup is deployed, ClusterControl will manage the setup and automatically recover failed servers.

Another feature is the “Rebuild Replication Slave” job, which is available for all slaves (or standby servers) in the replication setup. It can be used, for instance, when you want to wipe out the data on a standby and rebuild it with a fresh copy of the data from the primary. It can also be useful if a standby server is unable to connect to and replicate from the primary for some reason.

You can now easily check which queries are responsible for the load on your PostgreSQL setup. You’ll see some basic performance data here - how many queries of a given type have been executed? What was their maximum and average execution time? What does the total execution time for a given query look like? Download ClusterControl to get started.

ProxySQL Improvements

In this release we have improvements for ProxySQL to help you deploy active/standby setups with Keepalived and Virtual IP. This improved integration with Keepalived and Virtual IP brings high availability and automatic failover to your load balancing.

And you can also easily synchronize a ProxySQL configuration that has query rules, users, and host groups with other instances to keep them identical.

  • Copy, Export and Import ProxySQL configurations to/from other instances to keep them in sync
  • Add Existing standalone ProxySQL instance
  • Add Existing Keepalived in active/passive setups
  • Deploy up to 3 ProxySQL instances with a Keepalived active/passive setup
  • Simplified Query Cache creation

New Web-based SSH Console

From the ClusterControl GUI, you now have SSH access to any of the database nodes right from your browser. This can be very useful if you need to quickly log into a database server and access the command line. Communication is based on HTTPS, so it is possible to access your servers from behind a firewall that restricts Internet access to only port 443. Access to WebSSH is configurable by the ClusterControl admin through the GUI.

  • Open a terminal window to any cluster nodes
    • Only supported with Apache 2.4+

There are a number of other features and improvements that we have not mentioned here. You can find all details in the ChangeLog.

We encourage you to test this latest release and provide us with your feedback. If you’d like a demo, feel free to request one.

Thank you for your ongoing support, and happy clustering!

PS.: For additional tips & tricks, follow our blog: https://severalnines.com/blog/.
