Channel: Severalnines - clustercontrol

Live webinar on ClusterControl 1.2.11: features support for MariaDB’s MaxScale and is our best PostgreSQL release yet!


Join us for this live webinar on Tuesday, October 27th, led by our colleague Art van Scheppingen, Senior Support Engineer at Severalnines. Art recently joined us from Spil Games in Amsterdam, where he was Head of Database Engineering. He’ll be discussing and demonstrating the new release of ClusterControl and will be available for questions on its new features.

Register here for Asia PAC / Europe MEA timezones

Register here for Latin AMER / North AMER timezones

This is our best release yet for Postgres users and we’re also introducing key new features for our MySQL / MariaDB users, such as support for MaxScale, an open-source, database-centric proxy that works with MariaDB Enterprise, MariaDB Enterprise Cluster, MariaDB 5.5, MariaDB 10 and Oracle MySQL. The release further includes a range of performance improvements and bug fixes. 

Some of the highlights of ClusterControl 1.2.11 include: 

  • For PostgreSQL
    • Deployment and Management of Postgres Replicated Setups
    • Customisable dashboards
    • Database performance charts for nodes
    • Enablement of ClusterControl DevStudio
  • Support for MaxScale
    • Deployment and management of MaxScale load balancer
  • For MySQL
    • Add Existing HAProxy and Keepalived
    • Deployment of MySQL Replication setups
    • Improvements in charting of metrics
    • Revamped Configuration Management
    • New Database Logs Page
    • Revamped MySQL User Management

  

Full details of the release:

We encourage you to provide feedback on your testing. And if you’d like a demo, feel free to request one.

Thank you for your ongoing support, and we look forward to seeing you at the webinar!



Severalnines’ Vinay Joosery named UK top 50 data leader & influencer


Information Age today unveiled the inaugural list of the UK’s top 50 data leaders and influencers

“Very strong on product and technical development of open-source databases, Vinay has helped global and UK businesses like BT, AutoTrader Group and Ping Identity to scale, manage and develop (data) cloud operations.” - as just announced by Information Age.

Congratulations to all the nominees and thanks to the selection committee at Information Age for this distinction!

Vinay is a passionate advocate of open source databases for mission-critical business. Prior to co-founding Severalnines, Vinay served as VP EMEA at Pentaho Corporation and held senior management roles at MySQL / Sun Microsystems / Oracle and Ericsson.

As our CEO, Vinay steers all aspects of the company from product development, support, marketing and sales through to ensuring that everyone has a seat at the table when we’re out for a company get together. 

vinay_new.png

First and foremost though, Vinay is a customer champion at Severalnines, and our customers are happy to say so:

“Vinay Joosery and his Severalnines team were superb on giving us advice on how to maximise the potential of ClusterControl and our database platforms. My team can now spend more time on creating and delivering innovative customer services.” said our UK customer BT Expedite in a recent interview.

As a company, our aim is to help companies build smart database infrastructure for mission-critical business, while benefiting from open source economics. We’re excited to see our accomplishments recognised by industry experts and peers via Vinay’s nomination as a data leader and influencer in the UK. 

Here’s to further success and content customers! Happy Severalnines clustering to all!


ClusterControl Tips & Tricks: Updating your MySQL Configuration


Requires ClusterControl 1.2.11 or later. Applies to MySQL based clusters.

From time to time it is necessary to tune and update your configuration. Here we will show you how to change individual parameters using the ClusterControl UI. Navigate to Manage > Configurations.

Suppose you want to change max_connections from 200 to 500 on all DB nodes in your cluster.

Click on Change Parameter. Select all MySQL Servers in the DB Instances drop down and select the Group (in this case MYSQLD) where the Parameter that you want to change resides, select the parameter (max_connections), and set the New Value to 500:

Press Proceed, and then you will be presented with a Config Change Log of your parameter change:

The Config Change Log says that:

  1. Change was successful
  2. The change was possible with SET GLOBAL (in this case SET GLOBAL max_connections=500)
  3. The change was persisted in my.cnf
  4. No restart is required

What if you don’t find the parameter you want to change in the Parameter drop-down? You can then type in the parameter by hand and give it a new value. If possible, SET GLOBAL variable_name=value will be executed; if not, a restart may be required. Remember, the change will be persisted in my.cnf upon successful execution.
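Done by hand, the change ClusterControl applies for a dynamic variable corresponds roughly to the following (a sketch; the exact statements may differ):

```sql
-- Applied at runtime on each DB node; max_connections is dynamic, so no restart is needed
SET GLOBAL max_connections = 500;
-- The change is then persisted under the [mysqld] section of my.cnf:
--   max_connections = 500
```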

Happy Clustering!

PS.: To get started with ClusterControl, click here!


ClusterControl Tips & Tricks: Securing your MySQL Installation


Requires ClusterControl 1.2.11 or later. Applies to MySQL based clusters.

Over the lifetime of a database installation, it is common for new user accounts to be created. It is good practice to verify once in a while that security is up to standard. That is, at the very least there should be no accounts with global access rights, and no accounts without a password.
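You can also run these checks by hand. On MySQL 5.6 and earlier, the queries look roughly like this (from 5.7 onwards, the Password column is replaced by authentication_string):

```sql
-- Accounts without a password
SELECT User, Host FROM mysql.user WHERE Password = '';
-- Accounts with a global SUPER privilege
SELECT User, Host FROM mysql.user WHERE Super_priv = 'Y';
```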

Using ClusterControl, you can at any time perform a security audit.

In the User Interface go to Manage > Developer Studio. Expand the folders so that you see s9s/mysql/programs. Click on security_audit.js and then press Compile and Run.

If there are problems, you will clearly see them in the Messages section:

Enlarged Messages output:

Here we have accounts that do not have a password. Those accounts should not exist in a secure database installation. That is rule number one. To correct this problem, click on mysql_secure_installation.js in the s9s/mysql/programs folder.

Click on the dropdown arrow next to Compile and Run and press Change Settings. You will see the following dialog; enter the argument “STRICT”:

Then press Execute. The mysql_secure_installation.js script will then perform the following on each MySQL database instance in the cluster:

  1. Delete anonymous users
  2. Drop the 'test' database (if it exists)
  3. If STRICT is given as an argument to mysql_secure_installation.js, additionally:
    • Remove accounts without passwords

In the Message box you will see:

The MySQL database servers that are part of this cluster have now been secured, and you have reduced the risk of your data being compromised.

You can re-run security_audit.js to verify that the actions have had effect.

Happy Clustering!

PS.: To get started with ClusterControl, click here!


ClusterControl Tips & Tricks: User Management for MySQL


Requires ClusterControl 1.2.11 or later. Applies to MySQL based clusters.

In this example we will look at how you can use ClusterControl to create a user and assign privileges to that user. We will create a user with enough privileges to run xtrabackup.

In the ClusterControl UI press Manage > Schemas and Users, and then press Create Account. You will see the following screen, and here we have filled out the details to create a user with enough privileges to run xtrabackup:

Server refers to the server from which the user is allowed to connect. In Create On DB Node you can select a particular server to execute the CREATE USER/GRANT on. However, if you are using Galera clustering, then the CREATE USER/GRANT will be replicated to all DB Nodes in the cluster.

Then press Save User and the following will be displayed.

Following this you can then look at the user from the Active Accounts page (a reload of the page may be needed before the user is visible - known bug to be fixed):

You have now created a backup user that is allowed to perform backups.
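Behind the scenes, the Create Account dialog boils down to statements along these lines (a sketch; the exact privilege set xtrabackup requires varies by version, so check the xtrabackup documentation for yours):

```sql
-- Hypothetical equivalent of what the dialog issues:
CREATE USER 'backupuser'@'localhost' IDENTIFIED BY 'supersecret';
-- RELOAD, LOCK TABLES, REPLICATION CLIENT and PROCESS are commonly
-- sufficient for xtrabackup to take a backup.
GRANT RELOAD, LOCK TABLES, REPLICATION CLIENT, PROCESS
  ON *.* TO 'backupuser'@'localhost';
```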

Next, you can go to Manage > Configurations and edit the configuration files of the DB nodes on which you want to execute backups. Add the following lines:

[xtrabackup]
user=backupuser
password=supersecret

Don’t forget to save the configuration file. This user will then be used by xtrabackup the next time you perform a backup.

Happy Clustering!

PS.: To get started with ClusterControl, click here!


Become a ClusterControl DBA - Deploying your Databases and Clusters


Many of our users speak highly of our product ClusterControl, especially how easy it is to install the software package. Installing new software is one thing, but using it properly is another.

We are all impatient to test new software and would rather toy around with an exciting new application than read the documentation up front. That is a bit unfortunate, as you may miss the most important features, or end up figuring things out yourself instead of reading how to do them the easy way.

This new blog series will cover all the basic operations of ClusterControl for MySQL, MongoDB & PostgreSQL, with examples explaining how to perform them, how to make the most of your setup, and a deep dive per subject to save you time.

These are the topics we'll cover in this series:

  • Deploying the first clusters
  • Adding your existing infrastructure
  • Monitoring performance and health
  • Making your components highly available
  • Managing workflows
  • Safeguarding your data
  • Protecting your data
  • An in-depth use case

In today’s post we cover installing ClusterControl and deploying your first clusters. 

Preparations

In this series we will make use of a set of Vagrant boxes, but you can use your own infrastructure if you like. In case you do want to test it with Vagrant, we made an example setup available in the following GitHub repository:
https://github.com/severalnines/vagrant

Clone the repo to your own machine:

git clone git@github.com:severalnines/vagrant.git

The topology of the vagrant nodes is as follows:

  • vm1: clustercontrol
  • vm2: database node1
  • vm3: database node2
  • vm4: database node3

Obviously you can easily add additional nodes if you like by changing the following line:

4.times do |n|
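For context, that line sits at the top of a loop in the Vagrantfile, roughly like this sketch (a hypothetical excerpt; see the repository for the actual file, including the box name and provisioning):

```ruby
# Sketch of the node-defining loop; raise the count to add more nodes.
Vagrant.configure("2") do |config|
  4.times do |n|
    config.vm.define "vm#{n + 1}" do |node|
      node.vm.hostname = "vm#{n + 1}"
    end
  end
end
```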

The Vagrant file is configured to automatically install ClusterControl on the first node and forward the user interface of ClusterControl to port 8080 on the host that runs Vagrant. So if your host’s IP address is 192.168.1.10, you will find the ClusterControl UI here: http://192.168.1.10:8080/clustercontrol/

Installing ClusterControl

You can skip this section if you chose to use the Vagrant file and got the automatic installation for free. Installing ClusterControl is easy and will take less than five minutes of your time.

With the package installation all you have to do is issue the following three commands on the ClusterControl node to get it installed:

$ wget http://www.severalnines.com/downloads/cmon/install-cc
$ chmod +x install-cc
$ ./install-cc   # as root or sudo user

That’s it: it can’t get easier than this. If the installation script did not encounter any issues, ClusterControl is installed and up and running. You can now log into ClusterControl at the following URL:
http://192.168.1.210/clustercontrol

After creating an administrator account and logging in, you will be prompted to add your first cluster.

Deploy a Galera cluster

In case you have installed ClusterControl via the package installation or when there are no clusters defined in ClusterControl you will be prompted to create a new database server/cluster or add an existing (i.e., already deployed) server or cluster:

severalnines-blogpost1-add-new-cluster-or-node.png

In this case we are going to deploy a Galera cluster and this only requires one screen to fill in:

several-nines-blogpost-add-galera-cluster.png

To allow ClusterControl to install the Galera nodes we use the root user that was granted ssh access by the Vagrant bootstrap scripts. In case you chose to use your own infrastructure, you must enter a user here that is allowed to do passwordless ssh to the nodes that ClusterControl is going to control.
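Setting up such passwordless SSH access by hand typically looks like this (hypothetical node names; run on the ClusterControl host):

```shell
# Generate a key on the ClusterControl host if one does not exist yet
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
# Copy the public key to every node ClusterControl should control
for node in vm2 vm3 vm4; do ssh-copy-id root@"$node"; done
```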

Also make sure you disable AppArmor/SELinux. See here why.

After filling in all the details and you have clicked Deploy, a job will be spawned to build the new cluster. The nice thing is that you can keep track of the progress of this job by clicking on the spinning circle between the messages and settings icon in the top menu bar:

severalnines-blogpost1-progress-indicator.png

Clicking on this icon will open a popup that keeps you updated on the progress of your job.

severalnines-blogpost-add-galera-progress1.png

severalnines-blogpost-add-galera-progress2.png

Once the job has finished, you have just created your first cluster. Opening the cluster overview should look like this:

severalnines-blogpost1-cluster-overview.png

In the Nodes tab, you can perform about any operation you would normally want to do on a cluster, and more. The Query Monitor gives you a good overview of both running and top queries. The Performance tab helps you keep a close eye on the performance of your cluster and also features the advisors, which help you act proactively on trends in the data. The Backups tab enables you to easily schedule backups that are stored either on the DB nodes or on the controller host, and the Manage tab enables you to expand your cluster or make it highly available to your applications through a load balancer.

All this functionality will be covered in later blog posts in this series.

Deploy a MySQL replication set

A new feature in ClusterControl 1.2.11 is that you can not only add slaves to existing clusters/nodes but you can also create new masters. In order to create a new replication set, the first step would be creating a new MySQL master:

severalnines-blogpost-add-mysql-master.png

After the master has been created you can now deploy a MySQL slave via the “Add Node” option in the cluster list:

severalnines-blogpost-add-mysql-slave-dialogue.png

Keep in mind that adding a slave to a master requires the master’s configuration to be stored in the ClusterControl repository. This happens automatically, but takes a minute to be imported and stored. After adding the slave node, ClusterControl will provision the slave with a copy of the data from its master using Xtrabackup. Depending on the size of your data, this may take a while.

Deploy a PostgreSQL replication set

Creating a PostgreSQL cluster requires one extra step compared to creating a Galera cluster, as it is divided into adding a standalone PostgreSQL server first and then adding a slave. This two-step approach lets you decide which server becomes the master and which becomes the slave.

A side note: the supported PostgreSQL versions are 9.x and higher. Make sure the correct version gets installed by adding the correct PostgreSQL repositories: http://www.postgresql.org/download/linux/

First we create a master by deploying a standalone PostgreSQL server:

severalnines-blogpost-postgresql-master.png

After deploying, the first node will become available in the cluster list as a single node instance. 

You can either open the cluster overview and then add the slave, or use the cluster list, which also gives you the option to immediately add a replication slave to this cluster:

severalnines-blogpost1-adding-postgresql-slave1.png

And adding a slave is as simple as selecting the master and filling in the FQDN of the new slave:

severalnines-blogpost-postgresql-slave.png

The PostgreSQL cluster overview gives you a good insight in your cluster:

severalnines-blogpost-postgresql-overview.png

Just like the Galera and MySQL cluster overviews, you can find all the necessary tabs and functions here: the Query Monitor, Performance and Backups tabs enable you to perform the necessary operations.

Deploy a MongoDB replicaSet

Deploying a new MongoDB replicaSet is similar to PostgreSQL. First we create a new master node:

severalnines-blogpost-mongodb-master.png

After installing the master we can add a slave to the replicaSet using the same dropdown from the cluster overview:

severalnines-blogpost-mongodb-add-node-dropdown.png

Keep in mind that you need to select the saved Mongo template here to start replicating from a replicaSet; in this case, select the mongod.conf.shardsvr configuration.

severalnines-blogpost-mongodb-slave.png

After adding the slave to the MongoDB replicaSet, a job will be spawned. Once this job has finished, it will take a short while before MongoDB adds the node to the cluster and it becomes visible in the cluster overview.

severalnines-blogpost-mongodb-cluster-overview.png

Similar to the PostgreSQL, Galera and MySQL cluster overviews, you can find all the necessary tabs and functions here: the Query Monitor, Performance and Backups tabs enable you to perform the necessary operations.

Final thoughts

With these three examples, we have shown you how easy it is to set up new clusters for MySQL, MongoDB and PostgreSQL from scratch, in only a couple of minutes. The beauty of this Vagrant setup is that you can tear the environment down and spawn it again just as easily as you created it. Impress your colleagues with how easily you can set up a working environment, and convince them to use it as their own test or devops environment.

Of course it would be equally interesting to add existing hosts and clusters into ClusterControl and that’s what we'll cover next time.


Become a ClusterControl DBA: Adding Existing Databases and clusters


In our previous blog post we covered the deployment of four types of clustering/replication: MySQL Galera, MySQL master-slave replication, PostgreSQL replication set and MongoDB replication set. This should enable you to create new clusters with great ease, but what if you already have 20 replication setups deployed and wish to manage them with ClusterControl?

This blog post will cover adding existing infrastructure components for these four types of clustering/replication to ClusterControl and how to have ClusterControl manage them.

Adding an existing Galera cluster to ClusterControl

Adding an existing Galera cluster to ClusterControl requires a MySQL user with the proper grants and an SSH user that is able to log in (without password) from the ClusterControl node to your existing databases and clusters.
 
Install ClusterControl on a separate VM. Once it is up, open the dialogue for adding an existing cluster. All you have to do is to add one of the Galera nodes and ClusterControl will figure out the rest:

severalnines-blogpost-add-existing-galera-cluster.png

Behind the scenes, ClusterControl will then connect to this host, detect all the necessary details of the full cluster, and register the cluster in the overview.

Adding an existing MySQL master-slave to ClusterControl

Adding an existing MySQL master-slave topology requires a bit more work than adding a Galera cluster. Whereas ClusterControl is able to extract the necessary information by itself for Galera, in the case of master-slave you need to specify every host within the replication setup.

severalnines-blogpost-add-existing-mysql-master-slave.png

After this, ClusterControl will connect to every host, see if they are part of the same topology and register them as part of one cluster (or server group) in the GUI.

Adding an existing PostgreSQL replication set to ClusterControl

Similar to adding the MySQL master-slave setup above, a PostgreSQL replication set also requires you to fill in all hosts within the same replication set.

severalnines-blogpost-add-existing-postgresql-replication-set.png

After this, ClusterControl will connect to every host, see if they are part of the same topology and register them as part of the same group. 

Adding an existing MongoDB replica set to ClusterControl

Adding an existing MongoDB replica set is just as easy as Galera: just one of the hosts in the replica set needs to be specified with its credentials and ClusterControl will automatically discover the other nodes in the replica set.

severalnines-blogpost-add-existing-mongodb-replica-set.png

Expanding your existing infrastructure

After adding the existing databases and clusters, they now have become manageable via ClusterControl and thus we can scale out our clusters.

For MySQL, MongoDB and PostgreSQL replication sets, this can easily be achieved the same way we showed in our previous blog post: simply add a node and ClusterControl will take care of the rest.

severalnines-blogpost1-adding-postgresql-slave1.png

For Galera, there is a bit more choice. The most obvious option is to add a (Galera) node to the cluster by choosing “Add Node” in the cluster list or cluster overview. Expand your Galera cluster in increments of two, so that the cluster can always retain a majority during a split-brain situation.

Alternatively, you could add a replication slave, and thus create an asynchronous slave in your synchronous cluster, which looks like this:

magtid_arch_full.png

Adding a slave node blindly under one of the Galera nodes can be dangerous: if that node goes down, the slave won’t receive updates from its master anymore. We blogged about this paradigm earlier, and you can read how to solve it in this blog post.

Final thoughts

We showed you how easy it is to add existing databases and clusters to ClusterControl; you can literally add clusters within minutes. So nothing should hold you back from using ClusterControl to manage your existing infrastructure. If you have a large infrastructure, adding ClusterControl will give you a better overview and save time in troubleshooting and maintaining your clusters.

Now the challenge is how to leverage ClusterControl to keep track of key performance indicators, show the global health of your clusters and proactively alert you in time when something is predicted to happen. And that’s the subject we'll cover next time.

Read also in the same series: 


ClusterControl Tips & Tricks: wtmp Log Rotation Settings for Sudo User


Requires ClusterControl. Applies to all supported database clusters. Applies to all supported operating systems (RHEL/CentOS/Debian/Ubuntu).

ClusterControl requires a super-privileged SSH user to provision database nodes. If you are running as a non-root user, the corresponding user must be able to execute sudo commands, with or without a sudo password. Unfortunately, this can create another issue: performing a remote command with sudo requires an interactive session (tty). We will explain this in detail in the next sections.

What’s up with sudo?

By default, most of the RHEL flavors have the following configured under /etc/sudoers:

Defaults requiretty

When an interactive session (tty) is required, each time the sudo user SSHes into the box with the -t flag (force pseudo-tty allocation), entries are created in /var/log/wtmp for the creation and destruction of terminals, or the assignment and release of terminals. These logs only record interactive sessions. If you do not specify -t, you will see the following error:

sudo: sorry, you must have a tty to run sudo

The root user does not require an interactive session when running remote SSH commands; its entries only appear in /var/log/secure or /var/log/auth.log, depending on the system configuration. Different distributions have different defaults in this regard. SSH does not write to wtmp for non-interactive sessions.
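You can see the difference yourself from the ClusterControl node (hypothetical host name; assumes a RHEL-style box with requiretty enabled for the sudo user):

```shell
ssh user@db-node1 "sudo whoami"      # fails: "sudo: sorry, you must have a tty to run sudo"
ssh -t user@db-node1 "sudo whoami"   # succeeds, and records a session in /var/log/wtmp
```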

To check the content of wtmp, we use the following command:

$ last -f /var/log/wtmp
ec2-user pts/0        ip-10-0-0-79.ap- Wed Oct 28 11:16 - 11:16  (00:00)
ec2-user pts/0        ip-10-0-0-79.ap- Wed Oct 28 11:16 - 11:16  (00:00)
ec2-user pts/0        ip-10-0-0-79.ap- Wed Oct 28 11:16 - 11:16  (00:00)
...

On Debian/Ubuntu systems, the sudo user does not need to acquire a tty, as “requiretty” is not configured by default. However, ClusterControl defaults to appending the -t flag if it detects that the SSH user is a non-root user. Since ClusterControl performs all monitoring and management tasks as this user, you may notice that /var/log/wtmp grows rapidly, as shown in the following section.

Log rotation for wtmp

Example: Take note of the following default configuration of wtmp in RHEL 7.1 inside /etc/logrotate.conf:

/var/log/wtmp {
    monthly
    create 0664 root utmp
    minsize 1M
    rotate 1
}

By running the following commands on one of the database nodes managed by ClusterControl, we can see how fast /var/log/wtmp grows every minute:

[user@server ~]$ a=$(du -b /var/log/wtmp | cut -f1) && sleep 60 && b=$(du -b /var/log/wtmp | cut -f1) && c=$(expr $b - $a ) && echo $c
89088

From the above result, ClusterControl causes the log file to grow by 89 KB per minute, which equals roughly 128 MB per day. With the logrotate configuration above (monthly rotation), /var/log/wtmp alone may consume 3.97 GB of disk space! If the partition where this file resides (usually the “/” partition) is small, which is common especially on cloud instances, there is a real risk of filling up the disk space on that partition in less than one month.
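These figures are easy to sanity-check (decimal units, starting from the 89 088 bytes per minute measured above):

```shell
# Back-of-the-envelope check of the wtmp growth figures
per_min=89088                        # bytes per minute, as measured with du above
per_day=$(( per_min * 60 * 24 ))     # 128286720 bytes, i.e. roughly 128 MB per day
per_month=$(( per_day * 31 ))        # close to 4 GB over a 31-day month
echo "$per_day bytes/day, $per_month bytes/month"
```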

Workaround

The workaround is to adjust the log rotation of wtmp. This is applicable to all operating systems mentioned at the beginning of this post. If you are affected, change the log rotation behaviour so the file does not grow more than expected. The following is what we recommend:

/var/log/wtmp {
     size 100M
     create 0664 root utmp
     rotate 3
     compress
}

The above settings specify that the maximum size of wtmp should be 100 MB, and that we keep the three most recent (compressed) files and remove older ones.

Logrotate is run from cron (via /etc/cron.daily/logrotate). It is not a daemon, so there is no need to reload its configuration; the next time cron executes logrotate, it will pick up the new configuration automatically.

Happy Clustering!

PS.: To get started with ClusterControl, click here!



Webinar Replay & Slides: Deploying MongoDB, MySQL, PostgreSQL & MariaDB’s MaxScale in 40min


Live demo recording of ClusterControl 1.2.11 and its new features 

Thanks to everyone who joined us for our recent live webinar on ‘ClusterControl 1.2.11 - new release’, led by Art van Scheppingen, Senior Support Engineer at Severalnines. The replay and slides of the webinar are now available to watch and read online via the links below.

During this live webinar, Art demonstrated the latest ClusterControl release and its new features such as support for MariaDB’s MaxScale and related MySQL updates; this is also the best ClusterControl release for PostgreSQL yet.

In fact, during the live demo Art not only introduced the new features, but also deployed MongoDB, PostgreSQL and MySQL clusters and replicated setups, as well as MariaDB’s MaxScale proxy, all in one session with one ClusterControl instance, within 40 minutes! Impressive stuff, which can be viewed again in this recording!

Watch the replay

Read the slides

 

1211_banner.png

TOPICS COVERED

  • For PostgreSQL
    • Deployment and Management of Postgres Replicated Setups
    • Customisable dashboards
    • Database performance charts for nodes
    • Enablement of ClusterControl DevStudio
  • Support for MaxScale
    • Deployment and management of MaxScale load balancer
  • For MySQL
    • Add Existing HAProxy and Keepalived
    • Deployment of MySQL Replication setups
    • Improvements in charting of metrics
    • Revamped Configuration Management
    • New Database Logs Page
    • Revamped MySQL User Management

SPEAKER

Art van Scheppingen is a Senior Support Engineer at Severalnines. He’s a pragmatic MySQL and database expert with over 15 years of experience in web development. He previously worked at Spil Games as Head of Database Engineering, where he kept a broad view of the whole database environment: from MySQL to Couchbase, Vertica to Hadoop, and from Sphinx Search to SOLR. He regularly presents his work and projects at various conferences (Percona Live, FOSDEM) and related meetups.

cc_deploy_multiple.png

For further blogs on ClusterControl visit: http://www.severalnines.com/blog-categories/clustercontrol

To view all our webinar replays visit: http://www.severalnines.com/webinars-replay


Become a ClusterControl DBA: performance monitoring and health


The blog series for MySQL, MongoDB & PostgreSQL administrators

In the previous two blog posts we covered both deploying the four types of clustering/replication (MySQL/Galera, MySQL Replication, MongoDB & PostgreSQL) and managing/monitoring your existing databases and clusters. After reading the first two posts, you were able to add your 20 existing replication setups to ClusterControl, expand them, and deploy two new Galera clusters, all while doing a ton of other things. Or maybe you deployed MongoDB and/or PostgreSQL systems. So now, how do you keep them healthy?

That’s exactly what this blog post is about: how to leverage ClusterControl’s performance monitoring and advisors functionality to keep your MySQL, MongoDB and/or PostgreSQL databases and clusters healthy. So how is this done in ClusterControl?

The cluster list

The most important information can already be found in the cluster list: as long as there are no alarms and no hosts are shown as down, everything is functioning fine. An alarm is raised when a certain condition is met, e.g. a host is swapping, and brings the issue you should investigate to your attention. That means alarms are raised not only during an outage, but also to allow you to proactively manage your databases.

If you logged into ClusterControl and saw a cluster listing like this, you would definitely have something to investigate: one node is down in the Galera cluster, for example, and every cluster has various alarms.

severalnines-blogpost-cluster-list-node-down-alarms.png

Once you click on one of the alarms, you will go to a detailed page on all alarms of the cluster. The alarm details will explain the issue and in most cases also advise the action to resolve the issue.

You can set up your own alarms by creating custom expressions, but that has been deprecated in favor of our new Developer Studio, which allows you to write custom JavaScript and execute it in the form of Advisors. We will get back to this topic later in this post.

The cluster overview - Dashboards

When opening up the cluster overview, we can immediately see the most important performance metrics for the cluster in the tabs. This overview may differ per cluster type as, for instance, Galera has different performance metrics to watch than traditional MySQL, Postgres or MongoDB.

severalnines-blogpost-cluster-overview-performance.png

Both the default overview and the pre-selected tabs are customizable. By clicking on Overview > Dash Settings you are given a dialogue that allows you to define the dashboard.

severalnines-blogpost-cluster-overview-add-dashboard.png

By pressing the plus sign, you can add and define your own metrics to graph on the dashboard. In our case, we will define a new dashboard featuring the Galera-specific receive and send queues:

severalnines-blogpost-cluster-overview-add-dashboard2.png

This new dashboard should give us good insight into the average queue length of our Galera cluster.

Once you have pressed save, the new dashboard will become available for this cluster:

severalnines-blogpost-cluster-overview-new-dashboard-added.png

Similarly you can do this for PostgreSQL as well by combining the checkpoints with the number of commits:

severalnines-blogpost-performance-overview-pgsql-add-metric.png

severalnines-blogpost-performance-overview-pgsql-add-metric2.png

severalnines-blogpost-performance-overview-pgsql2.png
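Under the hood, the checkpoint and commit counters charted here come from PostgreSQL's statistics collector. As an illustration only (view and column names per the PostgreSQL 9.x documentation), you could query the same counters manually:

```sql
-- Checkpoints since statistics were last reset:
SELECT checkpoints_timed, checkpoints_req FROM pg_stat_bgwriter;

-- Committed transactions per database:
SELECT datname, xact_commit FROM pg_stat_database;
```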

So as you can see, it is relatively easy to customize your own (default) dashboard.

Cluster overview - Query Monitor

The Query Monitor tab is available for both MySQL and PostgreSQL based setups and consists of three dashboards: Top Queries, Running Queries and Query Histogram.

In the Running Queries dashboard, you will find all queries that are currently running. This is basically ClusterControl’s equivalent of SHOW PROCESSLIST.

Top Queries and Query Histogram both rely on the input of the slow query log. To prevent ClusterControl from being too intrusive and the slow query log from growing too large, ClusterControl samples the slow query log by periodically turning it on and off. By default, the capture window is set to 1 second and long_query_time is set to 0.5 seconds. If you wish to change these settings for your cluster, you can do so via Settings -> Query Monitor.
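A rough sketch of what this sampling amounts to on each MySQL node (ClusterControl manages these settings for you; the statements are shown only for illustration, with the default values mentioned above):

```sql
SET GLOBAL long_query_time = 0.5;  -- sampling threshold
SET GLOBAL slow_query_log = ON;
-- ... capture for roughly 1 second ...
SET GLOBAL slow_query_log = OFF;
```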

Top Queries will, as the name says, show the top queries that were sampled. You can sort them by various columns: for instance frequency, average execution time or total execution time.

severalnines-blogpost-top-queries-overview.png

You can get more details about the query by selecting it and this will present the query execution plan (if available) and optimization hints/advisories. If necessary you can also select the query and have the details emailed to you by clicking on the “email query” button.

The Query Histogram is similar to Top Queries, but additionally allows you to filter the queries per host and compare them over time.

Cluster overview - Operations

For MongoDB clusters, the Operations overview is the counterpart of the Running Queries dashboard for PostgreSQL and MySQL. It is equivalent to issuing the db.currentOp() command within MongoDB.

severalnines-blogpost-mongodb-current-ops.png

Cluster overview - Performance

MySQL / Galera

The performance tab is probably the best place to find the overall performance and health of your clusters. For MySQL and Galera it consists of an Overview page, the Advisors, status/variables overviews, the Schema Analyzer and the Transaction log.

The Overview page will give you a graph overview of the most important metrics in your cluster. This is, obviously, different per cluster type. Eight metrics have been set by default, but you can easily set your own - up to 20 graphs if needed.

severalnines-blogpost-define-graphs.png

The Advisors section is one of the key features of ClusterControl: Advisors are scripted checks that can be run on demand. An Advisor can evaluate almost any fact known about the host and/or cluster, give its opinion on the health of the host and/or cluster, and even give advice on how to resolve issues or improve your hosts!

severalnines-blogpost-mysql-advisors.png

The best part is yet to come: you can create your own checks in the Developer Studio (Cluster -> Manage -> Developer Studio), run them on a regular interval and use them again in the Advisors section. We blogged about this new feature earlier this year.

We will skip the status/variables overviews of MySQL and Galera, as they are useful for reference but not for this blog post: it is enough to know they are there. It is also worth mentioning that the Status Time Machine can help you track specific status variables and see how they change over time.

Now suppose your database is growing but you want to know how fast it grew in the past week. You can actually keep track of the growth of both data and index sizes from right within ClusterControl:

Next to the total growth on disk, it can also report the top 25 largest schemas.
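If you want to cross-check these numbers by hand, a rough equivalent of this report can be pulled from MySQL's information_schema (the sizes are the server's own estimates, not exact on-disk usage):

```sql
SELECT table_schema,
       SUM(data_length)  AS data_bytes,
       SUM(index_length) AS index_bytes
FROM information_schema.tables
GROUP BY table_schema
ORDER BY SUM(data_length) + SUM(index_length) DESC
LIMIT 25;
```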

Another important feature is the Schema Analyzer within ClusterControl.

ClusterControl will analyze your schemas and look for redundant indexes, MyISAM tables and tables without a primary key. Of course, it is entirely up to you whether to keep a table without a primary key, since some application might have created it this way, but at least you get the advice here for free. The Schema Analyzer even constructs the necessary ALTER statement to fix the problem.
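The suggested fixes typically look like the following (the table and column names here are hypothetical examples, not output from the tool):

```sql
-- Table flagged for having no primary key:
ALTER TABLE mydb.orders ADD PRIMARY KEY (id);

-- MyISAM table flagged for conversion to InnoDB:
ALTER TABLE mydb.legacy_log ENGINE = InnoDB;
```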

PostgreSQL

For PostgreSQL the Advisors, DB Status and DB Variables can be found here.

severalnines-blogpost-postgresql-advisors.png

MongoDB

For MongoDB, the Mongo Stats and performance overview can be found under the Performance tab. The Mongo Stats is an overview of the output of mongostat, and the Performance overview gives a good graphical overview of the Mongo opcounters:

severalnines-blogpost-mongodb-performance.png

Final thoughts

We showed you how to keep your eyeballs on the most important monitoring and health checking features of ClusterControl. Obviously this is only the beginning of the journey, as we will soon start another blog series about the Developer Studio capabilities and how you can make the most of your own checks. Also keep in mind that our support for MongoDB and PostgreSQL is not as extensive as our MySQL toolset, but we are continuously improving on this.

You may ask yourself why we have skipped over the performance monitoring and health checks of HAProxy and MaxScale. We did that deliberately, as this blog series has so far only covered the deployment of clusters and not the deployment of HA components. So that’s the subject we'll cover next time.

Severalnines’ Vinay Joosery named UK top 50 data leader & influencer

Information Age today unveiled the inaugural list of the UK’s top 50 data leaders and influencers

“Very strong on product and technical development of open-source databases, Vinay has helped global and UK businesses like BT, AutoTrader Group and Ping Identity to scale, manage and develop (data) cloud operations.” - as just announced by Information Age.

Congratulations to all the nominees and thanks to the selection committee at Information Age for this distinction!

Vinay is a passionate advocate of open source databases for mission-critical business. Prior to co-founding Severalnines, Vinay served as VP EMEA at Pentaho Corporation and held senior management roles at MySQL / Sun Microsystems / Oracle and Ericsson.

As our CEO, Vinay steers all aspects of the company from product development, support, marketing and sales through to ensuring that everyone has a seat at the table when we’re out for a company get together. 

vinay_new.png

First and foremost though, Vinay is a customer champion at Severalnines, and our customers are happy to say so: 

“Vinay Joosery and his Severalnines team were superb on giving us advice on how to maximise the potential of ClusterControl and our database platforms. My team can now spend more time on creating and delivering innovative customer services.” said our UK customer BT Expedite in a recent interview.

As a company, our aim is to help companies build smart database infrastructure for mission-critical business, while benefiting from open source economics. We’re excited to see our accomplishments recognised by industry experts and peers via Vinay’s nomination as a data leader and influencer in the UK. 

Here’s to further success and content customers! Happy Severalnines clustering to all!

ClusterControl Tips & Tricks: Updating your MySQL Configuration

Requires ClusterControl 1.2.11 or later. Applies to MySQL-based clusters.

From time to time it is necessary to tune and update your configuration. Here we will show you how you can change/update individual parameters using the ClusterControl UI. Navigate to Manage > Configurations.

Suppose that you want to change max_connections from 200 to 500 on all DB nodes in your cluster.

Click on Change Parameter. Select all MySQL Servers in the DB Instances drop-down, select the Group where the parameter that you want to change resides (in this case MYSQLD), select the parameter (max_connections), and set the New Value to 500:

Press Proceed, and then you will be presented with a Config Change Log of your parameter change:

The Config Change Log says that:

  1. Change was successful
  2. The change was possible with SET GLOBAL (in this case SET GLOBAL max_connections=500)
  3. The change was persisted in my.cnf
  4. No restart is required

What if you don’t find the parameter you want to change in the Parameter drop-down? You can then type in the parameter by hand and give it a new value. If possible, SET GLOBAL variable_name=value will be executed, and if not, a restart may be required. Remember, the change will be persisted in my.cnf upon successful execution.

Happy Clustering!

PS.: To get started with ClusterControl, click here!

ClusterControl Tips & Tricks: Securing your MySQL Installation

Requires ClusterControl 1.2.11 or later. Applies to MySQL-based clusters.

During the life cycle of a database installation, it is common for new user accounts to be created. It is good practice to verify once in a while that security is up to standard. That is, there should at least not be any accounts with global access rights, or accounts without a password.

Using ClusterControl, you can at any time perform a security audit.

In the User Interface go to Manage > Developer Studio. Expand the folders so that you see s9s/mysql/programs. Click on security_audit.js and then press Compile and Run.

If there are problems, you will clearly see them in the Messages section:

Enlarged Messages output:

Here we have accounts that do not have a password. Those accounts should not exist in a secure database installation. That is rule number one. To correct this problem, click on mysql_secure_installation.js in the s9s/mysql/programs folder.

Click on the dropdown arrow next to Compile and Run and press Change Settings. You will see the following dialog and enter the argument “STRICT”:

Then press Execute. The mysql_secure_installation.js script will then do the following on each MySQL database instance that is part of the cluster:

  1. Delete anonymous users
  2. Drop the 'test' database (if it exists)
  3. If STRICT is given as an argument to mysql_secure_installation.js, also remove accounts without passwords
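For reference, the manual equivalents of these steps look roughly like this (assuming MySQL 5.6-style grant tables; MySQL 5.7 and later store password hashes in authentication_string instead of Password):

```sql
-- Find accounts without a password:
SELECT User, Host FROM mysql.user WHERE Password = '';

-- Delete an anonymous user:
DROP USER ''@'localhost';

-- Drop the test database:
DROP DATABASE IF EXISTS test;
```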

In the Message box you will see:

The MySQL database servers that are part of this cluster have now been secured, and you have reduced the risk of compromising your data.

You can re-run security_audit.js to verify that the actions have taken effect.

Happy Clustering!

PS.: To get started with ClusterControl, click here!

ClusterControl Tips & Tricks: User Management for MySQL

Requires ClusterControl 1.2.11 or later. Applies to MySQL-based clusters.

In this example we will look at how you can use ClusterControl to create a user and assign privileges to the user. We will create a user that has enough privileges to perform an xtrabackup.

In the ClusterControl UI press Manage > Schemas and Users, and then press Create Account. You will see the following screen, and here we have filled out the details to create a user with enough privileges to run xtrabackup:

Server refers to the server from which the user is allowed to connect. In Create On DB Node you can select a particular server to execute the CREATE USER/GRANT on. However, if you are using Galera clustering, then the CREATE USER/GRANT will be replicated to all DB Nodes in the cluster.

Then press Save User and the following will be displayed.

Following this you can then look at the user from the Active Accounts page (a reload of the page may be needed before the user is visible - known bug to be fixed):

You have now created a backup user that is allowed to perform backups.
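Under the hood, this boils down to statements along these lines (the user, host and password are the examples used in this post; the privilege list is what xtrabackup's documentation recommends for a backup user):

```sql
CREATE USER 'backupuser'@'localhost' IDENTIFIED BY 'supersecret';
GRANT RELOAD, LOCK TABLES, REPLICATION CLIENT, PROCESS ON *.* TO 'backupuser'@'localhost';
FLUSH PRIVILEGES;
```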

Next you can go to Manage > Configurations and edit the configuration files of the DB nodes that you want to execute backups on. Add the following lines:

[xtrabackup]
user=backupuser
password=supersecret

Don’t forget to save the configuration file. This user will then be used by xtrabackup the next time you perform a backup.

Happy Clustering!

PS.: To get started with ClusterControl, click here!

Become a ClusterControl DBA - Deploying your Databases and Clusters

Many of our users speak highly of our product ClusterControl, especially about how easy it is to install the software package. Installing new software is one thing, but using it properly is another.

We are all impatient to test new software and would rather toy around with an exciting new application than read the documentation up front. That is a bit unfortunate, as you may miss the most important features, or figure out your own way of doing things instead of reading how to do them the easy way.

This new blog series will cover all the basic operations of ClusterControl for MySQL, MongoDB & PostgreSQL, with examples explaining how to perform them, how to make the most of your setup, and a deep dive per subject to save you time. 

These are the topics we'll cover in this series:

  • Deploying the first clusters
  • Adding your existing infrastructure
  • Performance and health monitoring
  • Make your components HA
  • Workflow management
  • Safeguarding your data
  • Protecting your data
  • In depth use case

In today’s post we cover installing ClusterControl and deploying your first clusters. 

Preparations

In this series we will make use of a set of Vagrant boxes but you can use your own infrastructure if you like. In case you do want to test it with Vagrant, we made an example setup available from the following Github repository:
https://github.com/severalnines/vagrant

Clone the repo to your own machine:

git clone git@github.com:severalnines/vagrant.git

The topology of the vagrant nodes is as follows:

  • vm1: clustercontrol
  • vm2: database node1
  • vm3: database node2
  • vm4: database node3

Obviously you can easily add additional nodes if you like by changing the following line:

4.times do |n|

The Vagrant file is configured to automatically install ClusterControl on the first node and forward the user interface of ClusterControl to port 8080 on the host that runs Vagrant. So if your host’s IP address is 192.168.1.10, you will find the ClusterControl UI here: http://192.168.1.10:8080/clustercontrol/

Installing ClusterControl

You can skip this if you chose to use the Vagrant file and received the automatic installation for free. But installation of ClusterControl is easy and will take less than five minutes of your time.

With the package installation all you have to do is issue the following three commands on the ClusterControl node to get it installed:

$ wget http://www.severalnines.com/downloads/cmon/install-cc
$ chmod +x install-cc
$ ./install-cc   # as root or sudo user

That’s it: it can’t get easier than this. If the installation script did not encounter any issues ClusterControl has been installed and is up and running. You can now log into ClusterControl on the following URL:
http://192.168.1.210/clustercontrol

After creating an administrator account and logging in, you will be prompted to add your first cluster.

Deploy a Galera cluster

In case you have installed ClusterControl via the package installation or when there are no clusters defined in ClusterControl you will be prompted to create a new database server/cluster or add an existing (i.e., already deployed) server or cluster:

severalnines-blogpost1-add-new-cluster-or-node.png

In this case we are going to deploy a Galera cluster and this only requires one screen to fill in:

several-nines-blogpost-add-galera-cluster.png

To allow ClusterControl to install the Galera nodes we use the root user that was granted ssh access by the Vagrant bootstrap scripts. In case you chose to use your own infrastructure, you must enter a user here that is allowed to do passwordless ssh to the nodes that ClusterControl is going to control.

Also make sure you disable AppArmor/SELinux. See here why.

After you have filled in all the details and clicked Deploy, a job will be spawned to build the new cluster. The nice thing is that you can keep track of the progress of this job by clicking on the spinning circle between the messages and settings icons in the top menu bar:

severalnines-blogpost1-progress-indicator.png

Clicking on this icon will open a popup that keeps you updated on the progress of your job.

severalnines-blogpost-add-galera-progress1.png

severalnines-blogpost-add-galera-progress2.png

Once the job has finished, you have just created your first cluster. Opening the cluster overview should look like this:

severalnines-blogpost1-cluster-overview.png

In the nodes tab, you can do just about any operation you normally would like to do on a cluster, and more. The query monitor gives you a good overview of both running and top queries. The performance tab will help you keep a close eye on the performance of your cluster and also features the advisors that help you act proactively on trends in the data. The backups tab enables you to easily schedule backups that are stored either on the DB nodes or on the controller host, and the manage tab enables you to expand your cluster or make it highly available to your applications through a load balancer.

All this functionality will be covered in later blog posts in this series.

Deploy a MySQL replication set

A new feature in ClusterControl 1.2.11 is that you can not only add slaves to existing clusters/nodes but you can also create new masters. In order to create a new replication set, the first step would be creating a new MySQL master:

severalnines-blogpost-add-mysql-master.png

After the master has been created you can now deploy a MySQL slave via the “Add Node” option in the cluster list:

severalnines-blogpost-add-mysql-slave-dialogue.png

Keep in mind that adding a slave to a master requires the master’s configuration to be stored in the ClusterControl repository. This will happen automatically, but it takes a minute to be imported and stored. After adding the slave node, ClusterControl will provision the slave with a copy of the data from its master using Xtrabackup. Depending on the size of your data this may take a while.

Deploy a PostgreSQL replication set

Creating a PostgreSQL cluster requires one extra step compared to creating a Galera cluster, as it is divided into adding a standalone PostgreSQL server and then adding a slave. The two-step approach lets you decide which server will become the master and which one becomes the slave.

A side note is that the supported PostgreSQL version is 9.x and higher. Make sure the correct version gets installed by adding the official PostgreSQL repositories: http://www.postgresql.org/download/linux/

First we create a master by deploying a standalone PostgreSQL server:

severalnines-blogpost-postgresql-master.png

After deploying, the first node will become available in the cluster list as a single node instance. 

You can open the cluster overview and then add the slave, but the cluster list also gives you the option to immediately add a replication slave to this cluster:

severalnines-blogpost1-adding-postgresql-slave1.png

And adding a slave is as simple as selecting the master and filling in the FQDN for the new slave:

severalnines-blogpost-postgresql-slave.png

The PostgreSQL cluster overview gives you a good insight into your cluster:

severalnines-blogpost-postgresql-overview.png

Just like with the Galera and MySQL cluster overviews, you can find all the necessary tabs and functions here: the query monitor, performance and backup tabs enable you to perform the necessary operations.

Deploy a MongoDB replicaSet

Deploying a new MongoDB replicaSet is similar to PostgreSQL. First we create a new master node:

severalnines-blogpost-mongodb-master.png

After installing the master we can add a slave to the replicaSet using the same dropdown from the cluster overview:

severalnines-blogpost-mongodb-add-node-dropdown.png

Keep in mind that you need to select the saved Mongo template here to start replicating in a replicaSet; in this case, select the mongod.conf.shardsvr configuration.

severalnines-blogpost-mongodb-slave.png

After adding the slave to the MongoDB replicaSet, a job will be spawned. Once this job has finished, it will take a short while before MongoDB adds the node to the cluster and it becomes visible in the cluster overview.

severalnines-blogpost-mongodb-cluster-overview.png

Similar to the PostgreSQL, Galera and MySQL cluster overviews, you can find all the necessary tabs and functions here: the query monitor, performance and backup tabs enable you to perform the necessary operations.

Final thoughts

With these three examples we have shown you how easy it is to set up new clusters for MySQL, MongoDB and PostgreSQL from scratch in only a couple of minutes. The beauty of using this Vagrant setup is that you can tear the environment down just as easily as you spawned it, and then spawn it again. Impress your colleagues with how easily you can set up a working environment, and convince them to use it as their own test or devops environment.

Of course it would be equally interesting to add existing hosts and clusters into ClusterControl and that’s what we'll cover next time.

Become a ClusterControl DBA: Adding Existing Databases and clusters

In our previous blog post we covered the deployment of four types of clustering/replication: MySQL Galera, MySQL master-slave replication, PostgreSQL replication set and MongoDB replication set. This should enable you to create new clusters with great ease, but what if you already have 20 replication setups deployed and wish to manage them with ClusterControl?

This blog post will cover adding existing infrastructure components for these four types of clustering/replication to ClusterControl and how to have ClusterControl manage them.

Adding an existing Galera cluster to ClusterControl

Adding an existing Galera cluster to ClusterControl requires a MySQL user with the proper grants and an SSH user that is able to log in (without password) from the ClusterControl node to your existing databases and clusters.
 
Install ClusterControl on a separate VM. Once it is up, open the dialogue for adding an existing cluster. All you have to do is to add one of the Galera nodes and ClusterControl will figure out the rest:

severalnines-blogpost-add-existing-galera-cluster.png

Behind the scenes, ClusterControl will then connect to this host, detect all the necessary details for the full cluster and register the cluster in the overview.

Adding an existing MySQL master-slave to ClusterControl

Adding an existing MySQL master-slave topology requires a bit more work than adding a Galera cluster. While ClusterControl is able to extract the necessary information for Galera, in the case of master-slave you need to specify every host within the replication setup.

severalnines-blogpost-add-existing-mysql-master-slave.png

After this, ClusterControl will connect to every host, see if they are part of the same topology and register them as part of one cluster (or server group) in the GUI.

Adding an existing PostgreSQL replication set to ClusterControl

Similar to adding the MySQL master-slave above, the PostgreSQL replication set also requires you to fill in all hosts within the same replication set.

severalnines-blogpost-add-existing-postgresql-replication-set.png

After this, ClusterControl will connect to every host, see if they are part of the same topology and register them as part of the same group. 

Adding an existing MongoDB replica set to ClusterControl

Adding an existing MongoDB replica set is just as easy as Galera: just one of the hosts in the replica set needs to be specified with its credentials and ClusterControl will automatically discover the other nodes in the replica set.

severalnines-blogpost-add-existing-mongodb-replica-set.png

Expanding your existing infrastructure

After adding the existing databases and clusters, they have now become manageable via ClusterControl, and thus we can scale out our clusters.

For MySQL, MongoDB and PostgreSQL replication sets, this can easily be achieved in the same way we showed in our previous blog post: simply add a node and ClusterControl will take care of the rest.

severalnines-blogpost1-adding-postgresql-slave1.png

For Galera, there is a bit more choice. The most obvious choice is to add a (Galera) node to the cluster by simply choosing “add node” in the cluster list or cluster overview. Expanding your Galera cluster this way should happen in increments of two to ensure your cluster can always have a majority during a split-brain situation.

Alternatively you could add a replication slave and thus create an asynchronous slave of your synchronous cluster, which looks like this:

magtid_arch_full.png

Blindly adding a slave node under one of the Galera nodes can be dangerous: if this node goes down, the slave won’t receive updates from its master anymore. We blogged about this paradigm earlier, and you can read how to solve it in this blog post.

Final thoughts

We showed you how easy it is to add existing databases and clusters to ClusterControl; you can literally add clusters within minutes. So nothing should hold you back from using ClusterControl to manage your existing infrastructure. If you have a large infrastructure, the addition of ClusterControl will give you a better overview and save time in troubleshooting and maintaining your clusters.

Now the challenge is how to leverage ClusterControl to keep track of key performance indicators, show the global health of your clusters and proactively alert you in time when something is predicted to happen. And that’s the subject we'll cover next time.

Read also in the same series: 

ClusterControl Tips & Tricks: wtmp Log Rotation Settings for Sudo User

Requires ClusterControl. Applies to all supported database clusters. Applies to all supported operating systems (RHEL/CentOS/Debian/Ubuntu).

ClusterControl requires a super-privileged SSH user to provision database nodes. If you are running as a non-root user, the corresponding user must be able to execute sudo commands, with or without a sudo password. Unfortunately, this can generate another issue: performing a remote command with “sudo” requires an interactive session (tty). We will explain this in detail in the next sections.

What’s up with sudo?

By default, most of the RHEL flavors have the following configured under /etc/sudoers:

Defaults requiretty

When an interactive session (tty) is required, each time the sudo user SSHes into the box with the -t flag (force pseudo-tty allocation), entries will be created in /var/log/wtmp for the creation and destruction of terminals, or the assignment and release of terminals. These logs only record interactive sessions. If you didn’t specify -t, you would see the following error:

sudo: sorry, you must have a tty to run sudo

The root user does not require an interactive session when running remote SSH commands; the entries only appear in /var/log/secure or /var/log/auth.log, depending on the system configuration. Different distributions have different defaults in this regard. SSH does not write to wtmp if it is a non-interactive session.

To check the content of wtmp, we use the following command:

$ last -f /var/log/wtmp
ec2-user pts/0        ip-10-0-0-79.ap- Wed Oct 28 11:16 - 11:16  (00:00)
ec2-user pts/0        ip-10-0-0-79.ap- Wed Oct 28 11:16 - 11:16  (00:00)
ec2-user pts/0        ip-10-0-0-79.ap- Wed Oct 28 11:16 - 11:16  (00:00)
...

On Debian/Ubuntu systems, the sudo user does not need to acquire a tty, as “requiretty” is not configured by default. However, ClusterControl defaults to appending the -t flag if it detects that the SSH user is a non-root user. Since ClusterControl performs all the monitoring and management tasks as this user, you may notice that /var/log/wtmp will grow rapidly, as shown in the following section.

Log rotation for wtmp

Example: Take note of the following default configuration of wtmp in RHEL 7.1 inside /etc/logrotate.conf:

/var/log/wtmp {
    monthly
    create 0664 root utmp
    minsize 1M
    rotate 1
}

By running the following commands on one of the database nodes managed by ClusterControl, we can see how fast /var/log/wtmp grows every minute:

[user@server ~]$ a=$(du -b /var/log/wtmp | cut -f1) && sleep 60 && b=$(du -b /var/log/wtmp | cut -f1) && c=$(expr $b - $a ) && echo $c
89088

From the above result, ClusterControl causes the log file to grow by about 89 KB per minute, which equals roughly 128 MB per day. If the mentioned logrotate configuration is used (monthly rotation), /var/log/wtmp alone may consume 3.97 GB of disk space! If the partition where this file resides (usually the “/” partition) is small (it is common to have a small “/” partition, especially on a cloud instance), there is a potential risk that you would fill up the disk space on that partition in less than one month. 
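To sanity-check these numbers, the extrapolation from the measured growth rate is simple arithmetic (assuming the rate of 89,088 bytes per minute stays constant and a 31-day month):

```shell
per_min=89088                       # bytes of wtmp growth per minute, measured above
per_day=$(( per_min * 60 * 24 ))    # 128286720 bytes, roughly 128 MB per day
per_month=$(( per_day * 31 ))       # 3976888320 bytes, roughly 3.97 GB per month
echo "${per_day} bytes/day, ${per_month} bytes/month"
```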

Workaround

The workaround is to play with the log rotation of wtmp. This is applicable to all operating systems mentioned in the beginning of this post. For those who are affected by this, you have to change the log rotation behaviour so it does not grow more than expected. The following is what we recommend:

/var/log/wtmp {
     size 100M
     create 0664 root utmp
     rotate 3
     compress
}

The above settings specify that the maximum size of wtmp should be 100 MB, and that we should keep the 3 most recent (compressed) files and remove older ones.

Logrotate is run from crontab (via /etc/cron.daily/logrotate). It is not a daemon, so there is no need to reload its configuration. When the crontab executes logrotate, it will use the new config file automatically.

Happy Clustering!

PS.: To get started with ClusterControl, click here!

Webinar Replay & Slides: Deploying MongoDB, MySQL, PostgreSQL & MariaDB’s MaxScale in 40min

Live demo recording of ClusterControl 1.2.11 and its new features 

Thanks to everyone who joined us for our recent live webinar on ‘ClusterControl 1.2.11 - new release’, led by Art van Scheppingen, Senior Support Engineer at Severalnines. The replay and slides of the webinar are now available to watch and read online via the links below.

During this live webinar, Art demonstrated the latest ClusterControl release and its new features such as support for MariaDB’s MaxScale and related MySQL updates; this is also the best ClusterControl release for PostgreSQL yet.

In fact, and during a live demo, Art not only introduced the new features, but also proceeded to deploy MongoDB, PostgreSQL, MySQL clusters and replicated setups as well as MariaDB’s MaxScale proxy all in the one session and the one ClusterControl instance - within 40min! Impressive stuff, which can be viewed again in this recording!

Watch the replay

Read the slides

 

1211_banner.png

TOPICS COVERED

  • For PostgreSQL
    • Deployment and Management of Postgres Replicated Setups
    • Customisable dashboards
    • Database performance charts for nodes
    • Enablement of ClusterControl DevStudio
  • Support for MaxScale
    • Deployment and management of MaxScale load balancer
  • For MySQL
    • Add Existing HAProxy and Keepalived
    • Deployment of MySQL Replication setups
    • Improvements in charting of metrics
    • Revamped Configuration Management
    • New Database Logs Page
    • Revamped MySQL User Management

SPEAKER

Art van Scheppingen is a Senior Support Engineer at Severalnines. He’s a pragmatic MySQL and Database expert with over 15 years experience in web development. He previously worked at Spil Games as Head of Database Engineering, where he kept a broad vision upon the whole database environment: from MySQL to Couchbase, Vertica to Hadoop and from Sphinx Search to SOLR. He regularly presents his work and projects at various conferences (Percona Live, FOSDEM) and related meetups.

cc_deploy_multiple.png

For further blogs on ClusterControl visit: http://www.severalnines.com/blog-categories/clustercontrol

To view all our webinar replays visit: http://www.severalnines.com/webinars-replay


Become a ClusterControl DBA: performance monitoring and health


The blog series for MySQL, MongoDB & PostgreSQL administrators

In the previous two blog posts we covered deploying the four types of clustering/replication (MySQL/Galera, MySQL Replication, MongoDB and PostgreSQL) as well as managing and monitoring your existing databases and clusters. So, after reading those two posts, you were able to add your 20 existing replication setups to ClusterControl, expand them, and additionally deploy two new Galera clusters, among a ton of other things. Or maybe you deployed MongoDB and/or PostgreSQL systems. So now, how do you keep them healthy?

That’s exactly what this blog post is about: how to leverage ClusterControl’s performance monitoring and advisors functionality to keep your MySQL, MongoDB and/or PostgreSQL databases and clusters healthy. So how is this done in ClusterControl?

The cluster list

The most important information can already be found in the cluster list: as long as there are no alarms and no hosts are shown to be down, everything is functioning fine. An alarm is raised when a certain condition is met, e.g. a host is swapping, and brings the issue you should investigate to your attention. That means alarms are raised not only during an outage, but also to allow you to manage your databases proactively.

If you logged into ClusterControl and saw a cluster listing like this, you would definitely have something to investigate: one node is down in the Galera cluster, for example, and every cluster has various alarms.

[Screenshot: cluster list with a node down and active alarms]

Once you click on one of the alarms, you will go to a detailed page on all alarms of the cluster. The alarm details will explain the issue and in most cases also advise the action to resolve the issue.

You can set up your own alarms by creating custom expressions, but that functionality has been deprecated in favor of our new Developer Studio, which allows you to write custom JavaScript checks and execute these as Advisors. We will get back to this topic later in this post.

The cluster overview - Dashboards

When opening up the cluster overview, we can immediately see the most important performance metrics for the cluster in the tabs. This overview may differ per cluster type as, for instance, Galera has different performance metrics to watch than traditional MySQL, Postgres or MongoDB.

[Screenshot: cluster overview performance dashboard]

Both the default overview and the pre-selected tabs are customizable. By clicking on Overview > Dash Settings, you are presented with a dialog that allows you to define the dashboard.

[Screenshot: adding a custom dashboard in the cluster overview]

By pressing the plus sign you can add and define your own metrics to graph on the dashboard. In our case we will define a new dashboard featuring the Galera-specific receive and send queues:

[Screenshot: defining the Galera send/receive queue dashboard]

This new dashboard should give us good insight into the average queue length of our Galera cluster.
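If you want to check the same counters by hand, these are the standard Galera (wsrep) status variables behind such a dashboard:

```sql
-- Standard Galera status counters; run on any Galera node.
-- A recv queue average consistently above 0 suggests the node cannot
-- apply write-sets as fast as it receives them; a growing send queue
-- suggests replication to the other nodes is lagging.
SHOW GLOBAL STATUS LIKE 'wsrep_local_recv_queue_avg';
SHOW GLOBAL STATUS LIKE 'wsrep_local_send_queue_avg';
```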

Once you have pressed save, the new dashboard will become available for this cluster:

[Screenshot: the new dashboard added to the cluster overview]

Similarly, you can do this for PostgreSQL by combining the number of checkpoints with the number of commits:
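For reference, these charts can be approximated by hand from PostgreSQL’s standard statistics views (a rough equivalent of what gets graphed, not ClusterControl’s exact queries):

```sql
-- Checkpoint activity: scheduled (timed) vs requested checkpoints.
SELECT checkpoints_timed, checkpoints_req
FROM pg_stat_bgwriter;

-- Total number of committed transactions across all databases.
SELECT sum(xact_commit) AS total_commits
FROM pg_stat_database;
```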

[Screenshots: adding custom metrics to the PostgreSQL performance overview]

So as you can see, it is relatively easy to customize your own (default) dashboard.

Cluster overview - Query Monitor

The Query Monitor tab is available for both MySQL and PostgreSQL based setups and consists of three dashboards: Top Queries, Running Queries and Query Histogram.

In the Running Queries dashboard, you will find all queries that are currently running. This is basically ClusterControl’s equivalent of SHOW PROCESSLIST.

Top Queries and Query Histogram both rely on the input of the slow query log. To prevent ClusterControl from being too intrusive and the slow query log from growing too large, ClusterControl samples the slow query log by periodically turning it on and off again. By default the capture window is 1 second and long_query_time is set to 0.5 seconds. If you wish to change these settings for your cluster, you can do so via Settings -> Query Monitor.
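Under the hood, this sampling roughly corresponds to toggling the slow query log yourself (a simplified sketch, not ClusterControl’s exact implementation):

```sql
-- Capture queries slower than 0.5 seconds...
SET GLOBAL long_query_time = 0.5;
-- ...turn the slow query log on for a short capture window...
SET GLOBAL slow_query_log = 'ON';
-- ...and (roughly one second later) off again, so the log stays small.
SET GLOBAL slow_query_log = 'OFF';
```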

Top Queries will, as the name suggests, show the top queries that were sampled. You can sort them on various columns: for instance frequency, average execution time or total execution time.

[Screenshot: Top Queries overview]

You can get more details about a query by selecting it; this will present the query execution plan (if available) and optimization hints/advisories. If necessary, you can also have the details emailed to you by clicking the “email query” button.

The Query Histogram is similar to Top Queries, but additionally allows you to filter the queries per host and compare them over time.

Cluster overview - Operations

Similar to the Running Queries overview for PostgreSQL and MySQL systems, MongoDB clusters have an Operations overview. It is the equivalent of issuing the db.currentOp() command within MongoDB.
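You can run the underlying command yourself in the mongo shell; db.currentOp() is the standard MongoDB command, and the filter shown here is just an illustration:

```
// Show only active operations that have been running longer than 3 seconds.
db.currentOp({ "active": true, "secs_running": { "$gt": 3 } })
```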

[Screenshot: MongoDB current operations]

Cluster overview - Performance

MySQL / Galera

The performance tab is probably the best place to find the overall performance and health of your clusters. For MySQL and Galera it consists of an Overview page, the Advisors, status/variables overviews, the Schema Analyzer and the Transaction log.

The Overview page will give you a graph overview of the most important metrics in your cluster. This is, obviously, different per cluster type. Eight metrics have been set by default, but you can easily set your own - up to 20 graphs if needed.

[Screenshot: defining graphs on the performance overview]

Advisors are one of the key features of ClusterControl: they are scripted checks that can be run on demand. An advisor can evaluate almost any fact known about the host and/or cluster, give its opinion on their health, and even advise on how to resolve issues or improve your hosts.

[Screenshot: MySQL Advisors]

The best part is yet to come: you can create your own checks in the Developer Studio (Cluster -> Manage -> Developer Studio), run them on a regular interval and use them again in the Advisors section. We blogged about this new feature earlier this year.

We will skip the status/variables overviews of MySQL and Galera, as these are useful for reference but not for this blog post: it is enough to know they are there. It is also worth mentioning that the Status Time Machine can help you track specific status variables and see how they change over time.

Now suppose your database is growing, but you want to know how fast it grew in the past week. You can keep track of the growth of both data and index sizes right from within ClusterControl:

And next to the total growth on disk it can also report back the top 25 largest schemas.
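Those numbers come from metadata MySQL exposes itself; a comparable one-off check can be done from information_schema (a rough equivalent, not ClusterControl’s exact query):

```sql
-- Data and index size per schema, largest first (top 25).
SELECT table_schema,
       ROUND(SUM(data_length)  / 1024 / 1024, 1) AS data_mb,
       ROUND(SUM(index_length) / 1024 / 1024, 1) AS index_mb
FROM information_schema.tables
GROUP BY table_schema
ORDER BY SUM(data_length) + SUM(index_length) DESC
LIMIT 25;
```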

Another important feature is the Schema Analyzer within ClusterControl.

ClusterControl will analyze your schemas and look for redundant indexes, MyISAM tables and tables without a primary key. Of course, it is entirely up to you whether to keep a table without a primary key, as some application might depend on it being created this way, but at least you get the advice here for free. The Schema Analyzer even constructs the necessary ALTER statement to fix the problem.
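Two of those checks are easy to approximate by hand against information_schema, if you want to see what the Schema Analyzer is looking at (a hand-rolled sketch, not ClusterControl’s own queries):

```sql
-- MyISAM tables outside the system schemas:
SELECT table_schema, table_name
FROM information_schema.tables
WHERE engine = 'MyISAM'
  AND table_schema NOT IN ('mysql', 'information_schema', 'performance_schema');

-- Base tables without a PRIMARY KEY constraint:
SELECT t.table_schema, t.table_name
FROM information_schema.tables t
LEFT JOIN information_schema.table_constraints c
  ON  c.table_schema    = t.table_schema
  AND c.table_name      = t.table_name
  AND c.constraint_type = 'PRIMARY KEY'
WHERE t.table_type = 'BASE TABLE'
  AND c.constraint_name IS NULL
  AND t.table_schema NOT IN ('mysql', 'information_schema', 'performance_schema');
```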

PostgreSQL

For PostgreSQL the Advisors, DB Status and DB Variables can be found here.

[Screenshot: PostgreSQL Advisors]

MongoDB

For MongoDB, the Mongo Stats and performance overview can be found under the Performance tab. Mongo Stats is an overview of the output of mongostat, and the Performance overview gives a good graphical overview of the MongoDB opcounters:

[Screenshot: MongoDB performance overview]

Final thoughts

We showed you how to keep an eye on the most important monitoring and health-checking features of ClusterControl. Obviously this is only the beginning of the journey, as we will soon start another blog series about the Developer Studio capabilities and how you can make the most of your own checks. Also keep in mind that our support for MongoDB and PostgreSQL is not as extensive as our MySQL toolset, but we are continuously improving on this.

You may ask yourself why we skipped over the performance monitoring and health checks of HAProxy and MaxScale. We did that deliberately, as the blog series has so far covered only cluster deployments and not the deployment of HA components. That is the subject we'll cover next time.


ClusterControl Tips & Tricks for MySQL: Max Open Files


Requires ClusterControl 1.2.11 or later. Applies to MySQL single instances, replications and Galera clusters.

You have created a large database with thousands of tables (> 5,000 in MySQL 5.6). Then you want to create a backup using xtrabackup. Or, if it is a Galera cluster, you have to recover a Galera node using wsrep_sst_method=xtrabackup[-v2].

Unfortunately it fails, and the following is emitted in the Job Log messages:

xtrabackup: Generating a list of tablespaces
2015-11-03 19:36:02 7fdef130a780  InnoDB: Operating system error number 24 in a file operation.
InnoDB: Error number 24 means 'Too many open files'.
InnoDB: Some operating system error numbers are described at
InnoDB: http://dev.mysql.com/doc/refman/5.6/en/operating-system-error-codes.html
InnoDB: Error: could not open single-table tablespace file ./DB75/t69.ibd

In this case you can simply increase the open_files_limit of the MySQL server(s) by going to Manage > Configurations and clicking Change Parameter.

We want to make the change on all MySQL hosts and add open_files_limit to the ‘MYSQLD’ group. We then have to type ‘open_files_limit’ into the “Parameter” field, since the parameter has not yet been set in the my.cnf file of the selected servers. Then simply set the New Value to something appropriate. We have > 80,000 tables in the database, so we’ll set the new value to 100000.
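The result of this change is equivalent to adding the parameter to the [mysqld] section of my.cnf on each host yourself:

```ini
[mysqld]
open_files_limit = 100000
```

After the restart, you can verify the effective value with SELECT @@global.open_files_limit; MySQL may silently lower it if the operating system will not grant that many file descriptors.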

Next, press Proceed, and then you will be presented with the Config Change Log dialog:

The parameter change was successful, and the next step is to restart the MySQL servers as indicated in the dialogs. Please note that many operating systems impose an upper limit on the number of open files a process may have.
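If MySQL still reports a lower effective value than you configured after the restart, the operating system limit is likely the cap. On systemd-based distributions it is typically raised with a unit override such as the following (the unit name and file path here are illustrative, adjust them to your distribution):

```ini
# /etc/systemd/system/mysql.service.d/limits.conf (illustrative path)
[Service]
LimitNOFILE = 100000
```

On pre-systemd systems, the equivalent is usually an entry in /etc/security/limits.conf or a ulimit -n call in the init script that starts mysqld.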

PS.: To get started with ClusterControl, click here!

