ClusterControl Tips & Tricks: How to Manage Configuration Templates for your databases

ClusterControl makes it easy to deploy a database setup - just fill in some values (database vendor, database data directory, password and hostnames) in the deployment wizard and you’re good to go. The rest of the configuration options will be automatically determined (and calculated) based on the host specifications (CPU cores, memory, IP address etc) and applied to the template file that comes with ClusterControl. In this blog post, we are going to look into how ClusterControl uses default template files and how users can customize them to their needs.

Base Template Files

All services configured by ClusterControl use a base configuration template available under /usr/share/cmon/templates on the ClusterControl node. The following are template files provided by ClusterControl v1.4.0:

  • config.ini.mc: MySQL Cluster configuration file.
  • haproxy.cfg: HAProxy configuration template for Galera Cluster.
  • haproxy_rw_split.cfg: HAProxy configuration template for read-write splitting.
  • garbd.cnf: Galera arbitrator daemon (garbd) configuration file.
  • keepalived-1.2.7.conf: Legacy keepalived configuration file (pre 1.2.7). This is deprecated.
  • keepalived.conf: Keepalived configuration file.
  • keepalived.init: Keepalived init script.
  • MaxScale_template.cnf: MaxScale configuration template.
  • mongodb-2.6.conf.org: MongoDB 2.x configuration template.
  • mongodb.conf.org: MongoDB 3.x configuration template.
  • mongodb.conf.percona: MongoDB 3.x configuration template for Percona Server for MongoDB.
  • mongos.conf.org: Mongo router (mongos) configuration template.
  • my.cnf.galera: MySQL configuration template for Galera Cluster.
  • my57.cnf.galera: MySQL configuration template for Galera Cluster on MySQL 5.7.
  • my.cnf.grouprepl: MySQL configuration template for MySQL Group Replication.
  • my.cnf.gtid_replication: MySQL configuration template for MySQL Replication with GTID.
  • my.cnf.mysqlcluster: MySQL configuration template for MySQL Cluster.
  • my.cnf.pxc55: MySQL configuration template for Percona XtraDB Cluster v5.5.
  • my.cnf.repl57: MySQL configuration template for MySQL Replication v5.7.
  • my.cnf.replication: MySQL configuration template for MySQL/MariaDB without MySQL’s GTID.
  • mysqlchk.galera: MySQL health check script template for Galera Cluster.
  • mysqlchk.mysql: MySQL health check script template for MySQL Replication.
  • mysqlchk_xinetd: Xinetd configuration template for MySQL health check.
  • mysqld.service.override: Systemd unit file template for the MySQL service.
  • proxysql_template.cnf: ProxySQL configuration template.

The above list depends upon the feature set provided by the installed ClusterControl release. In an older version, you might not find some of them. You can modify these template files directly, although we do not recommend it as explained in the next sections.

Configuration Manager

Depending on the cluster type, ClusterControl imports the necessary base template file into the CMON database once deployment succeeds; it is then accessible via Manage -> Configurations -> Templates. For example, consider the following configuration template for a MariaDB Galera Cluster:

ClusterControl loads the content of the Galera base template from /usr/share/cmon/templates/my.cnf.galera into the CMON database (the cluster_configuration_templates table) after deployment succeeds. You can then customize the configuration template directly in the ClusterControl UI. Whenever you hit the Save button, the new version of the configuration template is stored inside the CMON database, without overwriting the base template file.

Once the cluster is deployed and running, the template in the UI takes precedence. The base template file is only used during the initial cluster deployment via ClusterControl -> Deploy -> Deploy Database Cluster. During the deployment stage, ClusterControl will use a temporary directory located at /var/tmp/ to prepare the content, for example:

/var/tmp/cmon-003862-6a7775ca76c62486.tmp

Dynamic Variables

There are a number of configuration variables which are configured dynamically by ClusterControl. These variables are represented by capital letters enclosed in ‘at’ signs (@), for example @DATADIR@. For full details on the supported variables, please refer to this page. Dynamic variables are automatically filled in based on the input specified during cluster deployment, or from what ClusterControl detects automatically: hostname, IP address, available RAM, number of CPU cores and so on. This simplifies deployment, as you only need to specify minimal options during the cluster deployment stage.

If a dynamic variable is already replaced with a value (or left undefined), ClusterControl skips it and uses the configured value instead. This is handy for advanced users, who usually have their own set of configuration options tailored to specific database workloads.
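
As an illustration of how such placeholders work (a sketch of the mechanism only, not ClusterControl’s actual code), a template line such as max_connections=@MAX_CONNECTIONS@ can be rendered into a concrete value with a simple substitution:

# Illustrative sketch: substitute dynamic variables in a copy of the base template.
# The values below are examples; ClusterControl derives them from the host specs.
DATADIR=/var/lib/mysql
MAX_CONNECTIONS=200
sed -e "s|@DATADIR@|${DATADIR}|g" \
    -e "s|@MAX_CONNECTIONS@|${MAX_CONNECTIONS}|g" \
    /usr/share/cmon/templates/my.cnf.galera > /tmp/my.cnf.rendered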

Pre-deployment Configuration Template Example

Instead of relying on ClusterControl’s dynamic variable for max_connections on our database nodes, we can change the following line inside /usr/share/cmon/templates/my57.cnf.galera, from:

max_connections=@MAX_CONNECTIONS@

To:

max_connections=50
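
If you prefer to make the change from the shell, a one-liner along these lines does the same (it keeps a .bak copy of the original template; adjust the path if your templates live elsewhere):

$ sed -i.bak 's/^max_connections=@MAX_CONNECTIONS@$/max_connections=50/' /usr/share/cmon/templates/my57.cnf.galera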

Save the file and, on the Deploy Database Cluster dialog, ensure ClusterControl uses the correct base template file:

Click on the Deploy button to start the database cluster deployment.

Post-deployment Configuration Template Example

After the database cluster deployment completes, you might do some fine tuning on the running servers before deciding to scale up. When scaling up, ClusterControl uses the configuration template inside the CMON database (the one populated under ClusterControl -> Configurations -> Templates) to deploy the new nodes. Hence, remember to apply any modifications you made on the database servers to the template file as well.

Before adding a new node, it’s good practice to review the configuration template to ensure that the new node gets what we expect. Then, go to ClusterControl -> Add Node and ensure the correct MySQL template file is selected:

Then, click on the “Add Node” button to start the deployment.

That’s it. Even though ClusterControl does various automation jobs when it comes to deployment, it still provides freedom for users to customize the deployment accordingly. Happy clustering!


What’s New in ClusterControl 1.4 - Backup Management

ClusterControl 1.4 introduces some major improvements in the area of backup management, with a revamped interface and simplified options to create backups. In this blog post, we’ll have a look at the new backup features available in this release.

Upgrading to 1.4

If you upgrade ClusterControl from version 1.3.x to version 1.4, the CMON process will internally migrate all backup related data/schedules to the new interface. The migration will happen during the first startup after you have upgraded (you are required to restart the CMON process after a package upgrade). To upgrade, please refer to the documentation.
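
On most installations the controller runs as the cmon service, so the restart itself is typically just one of the following, depending on your init system:

$ sudo service cmon restart        # SysV-style init
$ sudo systemctl restart cmon      # systemd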

Redesigned User Interface

In the user interface, we have consolidated all backup-related functionality into a single place. This includes the Backup Settings, which were previously found under ClusterControl -> Settings -> Backups and are now accessible under the same backup management tab:

The interface is now responsive to any action taken and requires no manual refresh. When a backup is created, you will see it in the backup list with a spinning arrows icon:

It is also possible now to schedule a backup every minute (the lowest interval) or year (the highest interval):

The backup options when scheduling or creating a backup now appear on the right side:

This allows you to quickly configure the backup, rather than having to scroll down the page.

Backup Report

Here is how it used to look pre v1.4:

After upgrading to ClusterControl v1.4, the report will look like this:

All incremental backups are automatically grouped together under the last full backup and expandable with a drop down. This makes the backups more organized per backup set. Each created backup will have “Restore” and “Log” buttons. The “Time” column also now contains timezone information, useful if you are dealing with geographically distributed infrastructure.

Restore to an Incremental Backup Point

You are now able to restore up to a certain incremental backup. Previously, ClusterControl supported restoration per backup set. All incremental backups under a single backup set would be restored and there was no way, for instance, to skip some of the incremental backups.

Consider the below example:

A full backup happens every day around 5:15 AM, while incremental backups are scheduled every 15 minutes. If something happened around 5:50 AM and you would like to restore up to the backup taken just before that, you can skip the 6 AM backup by simply clicking on the “Restore” link of the 5:45 AM incremental backup. You should then see the following Restore wizard and a couple of post-restoration options:

ClusterControl will then prepare the backup up until the selected point and the rest will be skipped. It also highlights “Warning” and “Notes” so you are aware of what will happen with the cluster during the restoration process. Note that mysqldump restoration can be performed online, while Xtrabackup requires the cluster/database instance to be stopped.

Operational Report

You might have multiple database systems running, and perhaps in different datacenters. Would it not be nice to get a consolidated report of the systems, when they were last backed up, and if there were any failed backups? This is available in 1.4. Note that you have other types of ops reports available in ClusterControl.

The report contains two sections and gives you a short summary of when the last backup was created and whether it completed successfully or failed. You can also check the list of backups executed on the cluster, with their state, type and size. This is as close as you can get to verifying that backups work correctly without running a full recovery test. However, we definitely recommend that such tests are performed regularly.

The operational report can be scheduled and emailed to a set of recipients under Settings -> Operational Reports section, as shown in the following screenshot:

Access via ClusterControl RPC interface

The new backup features are now exposed via the ClusterControl RPC interface, which means you can interact with them through API calls using the correct RPC key. For example, to list the created backups on cluster ID 2, the following call should be enough:

$ curl -XPOST -d '{"operation": "listbackups", "token": "RB81tydD0exsWsaM"}' http://localhost:9500/2/backup
{"cc_timestamp": 1477063671,"data": [
  {"backup": [
      {"db": "mysql","files": [
          {"class_name": "CmonBackupFile","created": "2016-10-21T15:26:40.000Z","hash": "md5:c7f4b2b80ea439ae5aaa28a0f3c213cb","path": "mysqldump_2016-10-21_172640_mysqldb.sql.gz","size": 161305,"type": "data,schema"
          } ],"start_time": "2016-10-21T15:26:41.000Z"
      } ],"backup_host": "192.168.33.125","cid": 101,"class_name": "CmonBackupRecord","config":
      {"backupDir": "/tmp","backupHost": "192.168.33.125","backupMethod": "mysqldump","backupToIndividualFiles": false,"backup_failover": false,"backup_failover_host": "","ccStorage": false,"checkHost": false,"compression": true,"includeDatabases": "","netcat_port": 9999,"origBackupDir": "/tmp","port": 3306,"set_gtid_purged_off": true,"throttle_rate_iops": 0,"throttle_rate_netbw": 0,"usePigz": false,"wsrep_desync": false,"xtrabackupParallellism": 1,"xtrabackup_locks": false
      },"created": "2016-10-21T15:26:40.000Z","created_by": "","description": "","finished": "2016-10-21T15:26:41.000Z","id": 5,"job_id": 2952,"log_file": "","lsn": 140128879096992,"method": "mysqldump","parent_id": 0,"root_dir": "/tmp/BACKUP-5","status": "Completed","storage_host": "192.168.33.125"
  },
  {"backup": [
      {"db": "","files": [
          {"class_name": "CmonBackupFile","created": "2016-10-21T15:21:50.000Z","hash": "md5:538196a9d645c34b63cec51d3e18cb47","path": "backup-full-2016-10-21_172148.xbstream.gz","size": 296000,"type": "full"
          } ],"start_time": "2016-10-21T15:21:50.000Z"
      } ],"backup_host": "192.168.33.125","cid": 101,"class_name": "CmonBackupRecord","config":
      {"backupDir": "/tmp","backupHost": "192.168.33.125","backupMethod": "xtrabackupfull","backupToIndividualFiles": false,"backup_failover": false,"backup_failover_host": "","ccStorage": false,"checkHost": false,"compression": true,"includeDatabases": "","netcat_port": 9999,"origBackupDir": "/tmp","port": 3306,"set_gtid_purged_off": true,"throttle_rate_iops": 0,"throttle_rate_netbw": 0,"usePigz": false,"wsrep_desync": false,"xtrabackupParallellism": 1,"xtrabackup_locks": true
      },"created": "2016-10-21T15:21:47.000Z","created_by": "","description": "","finished": "2016-10-21T15:21:50.000Z","id": 4,"job_id": 2951,"log_file": "","lsn": 1627039,"method": "xtrabackupfull","parent_id": 0,"root_dir": "/tmp/BACKUP-4","status": "Completed","storage_host": "192.168.33.125"
  } ],"requestStatus": "ok","total": 2
}

Other supported operations are listed below, followed by an example call:

  • deletebackup
  • listschedules
  • schedule
  • deleteschedule
  • updateschedule
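
A call to list the backup schedules on cluster ID 2 could look like the following; this assumes the schedule operations are served by the same /<cluster_id>/backup endpoint and RPC token as the listbackups example above:

$ curl -XPOST -d '{"operation": "listschedules", "token": "RB81tydD0exsWsaM"}' http://localhost:9500/2/backup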

With those operations exposed via the ClusterControl RPC interface, one can automate backup management and list the backup schedules from a script or application. Creating a backup, however, is handled differently, via a job call (operation: createJob), since some backups may take hours or days to complete. To create a backup on cluster ID 9, one would do:

$ curl -XPOST -d '{"token": "c8gY3Eq5iFE3DC4i", "username":"admin@domain.com","operation":"createJob","job":{"command":"backup", "job_data": {"backup_method":"xtrabackupfull", "hostname": "192.168.33.121", "port":3306, "backupdir": "/tmp/backups/" }}}' http://localhost:9500/9/job

Where:

  • The URL format is: http://[ClusterControl_host]:9500/[cluster_id]/job
  • Backup method: Xtrabackup (full)
  • RPC token: c8gY3Eq5iFE3DC4i (retrievable from cmon_X.cnf)
  • Backup host: 192.168.33.121, port 3306
  • Backup destination: /tmp/backups on the backup host

For example, it’s a good idea to create a backup before testing DDL statements like TRUNCATE or DROP, because those are not transactional, meaning they are impossible to roll back. We are going to cover this in detail in an upcoming blog post.

With a bash script and the correct API calls, it is now possible to have an automated script like the following (a sketch of such a script is shown after the example call):

$ test_disasterous_query.sh --host 192.168.33.121 --query 'TRUNCATE mydb.processes' --backup-first 1
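
A minimal sketch of what such a wrapper could look like is shown below. Everything in it is illustrative: the script name, the hard-coded values, the RPC token and the crude wait step would all need to be adapted to your environment (and the wait step ideally replaced with proper polling of the backup status):

#!/bin/bash
# Hypothetical wrapper: take a backup via the ClusterControl RPC interface
# before running a potentially destructive statement. All values are examples.
set -euo pipefail

CC_HOST="localhost"
CLUSTER_ID=9
RPC_TOKEN="c8gY3Eq5iFE3DC4i"       # from the cmon_X.cnf of this cluster
DB_HOST="192.168.33.121"
QUERY="TRUNCATE mydb.processes"

# 1. Ask ClusterControl to create a backup job first
PAYLOAD='{"token": "'"${RPC_TOKEN}"'", "operation": "createJob",
  "job": {"command": "backup", "job_data": {"backup_method": "xtrabackupfull",
  "hostname": "'"${DB_HOST}"'", "port": 3306, "backupdir": "/tmp/backups/"}}}'
curl -s -XPOST -d "${PAYLOAD}" "http://${CC_HOST}:9500/${CLUSTER_ID}/job"

# 2. Wait for the backup to finish (simplified; poll listbackups in a real script)
sleep 300

# 3. Only then run the potentially destructive statement
mysql -h "${DB_HOST}" -e "${QUERY}"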

There are many other reasons to upgrade to the latest ClusterControl version; the backup functionality is just one of many exciting new features introduced in ClusterControl v1.4. Do upgrade (or install ClusterControl if you haven’t used it yet), give it a try and let us know your thoughts. New installations come with a 30-day trial.

Demonstration Videos: Top Four Feature Sets of ClusterControl for MySQL, MongoDB & PostgreSQL

The videos below demonstrate the top features and functions included in ClusterControl.  

Deploy

Deploy the best open source database for the job at hand using repeatable deployments with best practice configurations for MySQL, MySQL Cluster, Galera Cluster, Percona, PostgreSQL or MongoDB databases. Reduce time spent on manual provisioning and free up more time for experimentation and innovation.

Management

Easily handle and automate your day to day tasks uniformly and transparently across a mixed database infrastructure. Automate backups, health checks, database repair/recovery, security and upgrades using battle tested best practices.

Monitoring

Unified and comprehensive real-time monitoring of your entire database and server infrastructure. Gain access to 100+ key database and host metrics that matter to your operational performance. Visualize performance in custom dashboards to establish operational baselines and support capacity planning.

Scaling

Handle unplanned workload changes by dynamically scaling out with more nodes. Optimize resource usage by scaling back nodes.

MySQL & MariaDB load balancing with ProxySQL & ClusterControl: introduction webinar

Proxies are building blocks of high availability setups for MySQL and MariaDB. They can detect failed nodes and route queries to hosts which are still available. If your master failed and you had to promote one of your slaves, proxies will detect such topology changes and route your traffic accordingly. More advanced proxies can do much more: route traffic based on precise query rules, cache queries or mirror them. They can even be used to implement different types of sharding.

Introducing ProxySQL!

Join us for this live joint webinar with ProxySQL’s creator, René Cannaò, who will tell us more about this new proxy and its features. We will also show you how you can deploy ProxySQL using ClusterControl. And we will give you an early walk-through of some of the exciting ClusterControl features for ProxySQL that we have planned.

Date, Time & Registration

Europe/MEA/APAC

Tuesday, February 28th at 09:00 GMT (UK) / 10:00 CET (Germany, France, Sweden)

Register Now

North America/LatAm

Tuesday, February 28th at 9:00 Pacific Time (US) / 12:00 Eastern Time (US)

Register Now

Agenda

  1. Introduction
  2. ProxySQL concepts (René Cannaò)
    • Hostgroups
    • Query rules
    • Connection multiplexing
    • Configuration management
  3. Demo of ProxySQL setup in ClusterControl (Krzysztof Książek)
  4. Upcoming ClusterControl features for ProxySQL

Speakers

René Cannaò, Creator & Founder, ProxySQL. René has 10 years of working experience as a System, Network and Database Administrator mainly on Linux/Unix platform. In the last 4-5 years his experience was focused mainly on MySQL, working as Senior MySQL Support Engineer at Sun/Oracle and then as Senior Operational DBA at Blackbird, (formerly PalominoDB). In this period he built an analytic and problem solving mindset and he is always eager to take on new challenges, especially if they are related to high performance. And then he created ProxySQL …

Krzysztof Książek, Senior Support Engineer at Severalnines, is a MySQL DBA with experience managing complex database environments for companies like Zendesk, Chegg, Pinterest and Flipboard.

We look forward to “seeing” you there and to insightful discussions!

If you have any questions or would like a personalised live demo, please do contact us.

Video Interview with ProxySQL Creator René Cannaò

In anticipation of this month’s webinar, MySQL & MariaDB Load Balancing with ProxySQL & ClusterControl, happening on February 28th, Severalnines sat down with ProxySQL founder and creator René Cannaò to discuss his revolutionary product, how it’s used, and what he plans to cover in the webinar. Watch the video or read the transcript of the interview below.

Transcript of Interview

Hi I’m Forrest and I’m from the Severalnines marketing team and I’m here interviewing René Cannaò from ProxySQL. René thanks for joining me. Let’s start by introducing yourself, where did you come from?

Thank you, Forrest. Without going into too many details, I come from a system administrator background, and as a system administrator I got fascinated by databases, so I then became a DBA. In my past experience I worked as a Support Engineer for MySQL/Sun/Oracle, where I got a lot of experience with MySQL... then as a remote DBA for PalominoDB, and after that working as a MySQL SRE for Dropbox, and I am the founder of ProxySQL.

So What is ProxySQL?

ProxySQL is a lightweight yet complex protocol-aware proxy that sits between the MySQL clients and servers. I like to describe it as a gate, in fact the Stargate is the logo, so basically it separates clients from databases, acting as an entry point to access all the database servers.

Why did you create ProxySQL?

That’s a very interesting question, and very important. As a DBA, it was always extremely difficult to control the traffic sent to the database. This was the main reason to create ProxySQL: basically it’s a layer that separates the database from the application (de facto splitting them into two different layers), and because it’s sitting in the middle it’s able to control and manage all the traffic between the two, and also to transparently manage failures.

So there are several database load balancers in the market, what differentiates ProxySQL from others?

First, most load balancers do not understand the MySQL protocol; ProxySQL understands it, and this allows the implementation of features that would otherwise be impossible. Among the few proxies that are able to understand the MySQL protocol, ProxySQL is the only one designed by DBAs for DBAs, therefore it is designed to solve the real issues and challenges a DBA faces. For example, ProxySQL is the only proxy supporting connection multiplexing and query caching.

I noticed on your website that you say that ProxySQL isn’t battle-tested, it’s WAR-tested. What have you done to put ProxySQL through its paces?

The point is that from the very beginning ProxySQL was architected and designed to behave correctly in extremely demanding and very complex setups with millions of client connections and thousands of database servers. Other proxies won’t be able to handle this. So, a lot of effort was invested in making sure ProxySQL is resilient in such complex setups. And, of course, no matter how resilient the setup is, it should not sacrifice performance.

On the 28th of February you will be co-hosting a webinar with Severalnines, with Krzysztof, one of our Support Engineers. What are some of the topics you are going to cover at that event?

ProxySQL is built upon new technology not present in other load balancers, so its features and concepts are not always intuitive. Some concepts are extremely original in ProxySQL. For this reason the topics I plan to cover at the event are hostgroups, query rules, connection multiplexing, failure handling, and configuration management. Again, those are all features and concepts that are only present in ProxySQL.

Excellent, well thank you for joining me, I’m really looking forward to this webinar on the 28th.

Thank you, Forrest.

MongoDB Tutorial - Top MongoDB Resources from Severalnines

As we continue to announce all the great new features we have been developing for MongoDB in ClusterControl, we want to take a moment to look back at some of the top content from the recent past that can help you securely deploy and manage your MongoDB instances.

Top MongoDB Blogs

Over the past year Severalnines has been developing blog content targeted at helping the MySQL DBA learn about MongoDB and how to integrate it into their database infrastructure. In our blog series “Become a MongoDB DBA (if you’re really a MySQL DBA)” we covered an array of topics to help you maximize your MongoDB knowledge.

  • MongoDB Provisioning and Deployment: If you are a MySQL DBA, you may ask yourself why you would install MongoDB? That is actually a very good question, as MongoDB and MySQL were in a flame war a couple of years ago. But there are many cases where you simply have to.
  • MongoDB: The Basics of Configuration: This blog covers the configuration of MongoDB, especially around ReplicaSet, security, authorization, SSL and the HTTP / REST API.
  • MongoDB: Monitoring and Trending (Part 1 and Part 2): These blogs give a primer in monitoring MongoDB: how to ship metrics using free open source tools. Part two dives deeper into monitoring MongoDB, what metrics to pay attention to and why.
  • MongoDB: Backing Up Your Data: How to design a good backup strategy for MongoDB, what tools are available and what you should watch out for.
  • MongoDB: Recovering Your Data: This blog covers how to recover MongoDB using a backup.
  • MongoDB: How to Scale Reads: The first in the series on how to scale MongoDB.
  • MongoDB: Sharding Ins and Outs (Part 1 and Part 2): These blogs cover how to shard your MongoDB databases and the theory behind it, how to monitor the shards to make sure they are performing and that data is distributed evenly between them, and how to consistently back up your data across shards.

Top MongoDB Webinars

If you are in the mood for some deep dives into the technical world of MongoDB, check out our top MongoDB webinar replays below and make sure to sign up for our next webinar, “How to Secure MongoDB”, scheduled for March 14, 2017.

  • Become a MongoDB DBA Webinar: So, maybe you’ve been working with MySQL for a while and are now being asked to also properly maintain one or more MongoDB instances. It is not uncommon that MySQL DBAs, developers, network/system administrators or DevOps folks with general backgrounds find themselves in this situation at some point in time. In fact, with more organisations operating polyglot environments, it’s starting to become commonplace.
  • What to Monitor in MongoDB Webinar: To operate MongoDB efficiently, you need to have insight into database performance. With that in mind, we dive into monitoring in this second webinar in the ‘Become a MongoDB DBA’ series. MongoDB offers many metrics through various status overviews and commands, but which ones really matter to you? How do you trend and alert on them? What is the meaning behind the metrics?
  • Scaling and Sharding with MongoDB Webinar: In this third webinar of the ‘Become a MongoDB DBA’ series, we focus on scaling and sharding your MongoDB setup.
  • Management and Automation of MongoDB Clusters: This webinar gives you the tools to manage your MongoDB cluster more effectively, immediately. The presentation includes code samples and a live Q&A session.

Securing MongoDB

The recent ransom hack has exposed a vulnerability in default deployments of MongoDB. While ClusterControl solves these issues by automatically providing the security people need to stay protected, here are some other blogs we previously released to help keep you secure.

  • How to Secure MongoDB from Ransomware - 10 Tips: Need tips on how to secure your MongoDB setup and protect yourself against ransomware? We have collected ten tips that are easy to follow and execute on your MongoDB databases.
  • Secure MongoDB and Protect Yourself from the Ransom Hack: Recently, several attackers were able to break into thousands of MongoDB systems, wipe the databases and leave a ransom note. This could have been prevented if those in charge had followed some standard security procedures. This blog post describes MongoDB ransomware: what it is, why it is a problem and what you can do to protect yourself against it.

Using MongoDB with ClusterControl

The ClusterControl team has spent the last year developing a full array of expanded features for MongoDB to provide developers and DBAs an alternative system for deploying and managing their infrastructures. Here are some useful resources for deploying and managing MongoDB using ClusterControl.

We have a lot more exciting and technical content for MongoDB in the works so follow us on our ClusterControl LinkedIn Page for even more information.

Let the new ClusterControl secure your MongoDB deployments

Today we’re happy to tell you about our release of ClusterControl for MongoDB, which completes our vision to let you fully manage MongoDB whether on premise or in the cloud. Our team has spent the last year developing a full array of expanded features for MongoDB to provide developers and DBAs an alternative system with which to securely deploy and manage their open source database infrastructures.

The ransom hack that has been extensively covered in the press recently has exposed a vulnerability in default deployments of MongoDB. While it’s easy enough to get started with MongoDB, a good bit of manual configuration is needed once it is installed, which is where security issues creep in. ClusterControl solves this for you by automatically providing the security you need to stay protected.

ClusterControl is used by enterprises with mission-critical environments worldwide, so you know you can depend on it for security and stability for your MongoDB infrastructures. And what’s more, it provides comparable functionality to existing ops managers at half the price.

Release Highlights

ClusterControl offers a rich set of features to securely deploy, monitor, manage and scale your MongoDB databases, including the following top 9 highlights.

  1. Single Interface: ClusterControl provides one single interface to automate your mixed MongoDB, MySQL, and PostgreSQL database environments.
  2. Easy Deployment: You can now automatically and securely deploy sharded MongoDB clusters or Replica Sets with ClusterControl’s free community version, as well as automatically convert a Replica Set into a sharded cluster if that’s required.
  3. Advanced Security: ClusterControl removes human error and provides access to a suite of security features automatically protecting your databases from hacks and other threats.
  4. Monitoring: ClusterControl provides a unified view of all sharded environments across your data centers and lets you drill down into individual nodes.
  5. Scaling: Easily add and remove nodes, resize instances, and clone your production clusters with ClusterControl.
  6. Management: ClusterControl provides management features that automatically repair and recover broken nodes, and test and automate upgrades.
  7. Consistent Backups of Sharded Clusters: Using the Percona MongoDB Consistent Backup tool, ClusterControl allows you to make consistent snapshots of your MongoDB sharded clusters.
  8. Advisors: ClusterControl’s library of Advisors allows you to extend the features of ClusterControl to add even more MongoDB management functionality.
  9. Developer Studio: The ClusterControl Developer Studio lets you customize your own MongoDB deployment to enable you to solve your unique problems.

View release details and resources

On MongoDB & sharded clusters

Extending our support for MongoDB, the rising star in the open source database world, has brought sharded clusters in addition to replica sets. This meant we had to add more metrics to our monitoring, add advisors and provide consistent backups for sharding. As a result, you can now convert a ReplicaSet cluster to a sharded cluster, add or remove shards from a sharded cluster, as well as add Mongos/routers to a sharded cluster.

On the new Severalnines database advisors for MongoDB

Advisors are mini-programs for specific database issues and we’ve added three new advisors for MongoDB in the latest ClusterControl release. The first one calculates the replication window, the second watches over the replication window, and the third checks for un-sharded databases/collections. In addition to this, we’ve also added a generic disk advisor. This advisor verifies if any optimizations can be done, like noatime and noop I/O scheduling, on the data disk that is being used for storage.

Download ClusterControl - it’s free

We encourage you to test ClusterControl and provide us with your feedback. If you’d like a demo, feel free to request one.

Thank you for your ongoing support, and happy clustering!

PS.: For additional tips & tricks, follow our blog: http://www.severalnines.com/blog/

Video: How to Secure MongoDB

Recently the IT world has seen a lot of news about tens of thousands of unsecured MongoDB instances that have been hacked and held for ransom.

In his blog “How to Secure MongoDB from Ransomware - Ten Tips”, Severalnines Support Engineer Art van Scheppingen gave us some great ways to help secure your data and prevent hackers from invading and ransoming your MongoDB instances. In the video below you will find Art’s ten tips and some more information about how ClusterControl provides automated security to help you stay safe.

How MongoDB Database Automation Improves Security

The growing number of cyberattacks on open source database deployments highlights the industry’s poor administrative and operational practices.

If 2016 taught us anything, it was the importance of sound operational practices and security measures in open source database deployments. For several years, researchers had warned about publicly exposed databases - with estimates ranging in the tens of thousands of servers. If the scale of the problem had not been apparent or frightening, well it surely is now.

Recently, ransomware groups deleted over 10,000 MongoDB databases within just a few days. Other open source databases (ElasticSearch, Hadoop, CouchDB) were also hit. Meanwhile, the number of exposed databases has gone up to about 100,000 instances.

What has led to this? Open source databases, and open source software in general, power a significant portion of today’s online services. Thanks to the increased use of agile development lifecycles, the cloud has become home to a variety of applications that are quickly deployed. Many businesses have also moved beyond using the cloud for non-critical functions, and now rely on the cloud for storing valuable data. This means more databases are being deployed in public clouds, in environments that are directly exposed to the Internet.

MongoDB in particular is very popular amongst developers, because of its convenience and expediency. But here’s the problem - quickly spinning up an environment for development is not the same thing as setting up for live production. They both demand very different levels of expertise. Thousands of database instances were not secured, and anyone could get read and write access to the databases (including any sensitive data) without any special tools and without having to circumvent any security measures. It is not a lapse of concentration by a few individuals that got us here; we’re facing a problem that is more widespread than anyone could imagine. We need to recognise that the middle ground between ease of use, speed of deployment and operational/security readiness is hard to find. So this begs the question - how can we collectively get beyond this type of problem?

If we could train every single individual who deploys MongoDB into a deployment engineer, it might help. At least there would be some level of protection, so that not just anyone could walk in through an open door.

Operations is not rocket science, but it might not be reasonable to expect all developers, who are the primary users of MongoDB, to turn into full-fledged systems/deployment engineers. The IT industry is moving towards faster, leaner implementations and deployment of services. The middle ground between ease of use, deployment speed and sound operational practices might seem even further away. Automation might just be the thing that helps us find that middle ground.

Database configurations suitable for production tend to be a bit more complex, but once designed, they can be duplicated many times with minimal variation.

Automation can be applied to initial provisioning and configuration, as well as ongoing patching, backups, anomaly detection and other maintenance activities. This is the basis for our own automation platform for MongoDB, ClusterControl. A well deployed and managed system can mitigate operational risk, and would certainly have prevented these thousands of databases from getting hacked.

MongoDB Tutorial - Monitoring & Securing MongoDB with ClusterControl Advisors

Eighty percent of database ops management consists of reading and interpreting your monitoring systems. Hundreds of metrics can be interpreted and combined in various ways to give you deep insights into your database systems and how to optimize them. When running multiple database systems, monitoring them can become quite a chore. If the interpretation and combination of metrics takes a lot of time, wouldn’t it be great if this could be automated in some way?

This is why we created database advisors in ClusterControl: small scripts that can interpret and combine metrics for you, and give you advice when applicable. For MySQL we have created an extensive library of the most commonly used MySQL monitoring checks, and for MongoDB we also have a broad library of advisors at your disposal. For this blog post, we have picked the nine most important ones and we’ll describe each of them in detail.

The nine MongoDB advisors we will cover in this blog post are:

  • Disk mount options check
  • Numa check
  • Collection lock percentage (MMAP)
  • Replication lag
  • Replication Window
  • Un-sharded databases and collections (sharded cluster only)
  • Authentication enabled check
  • Authentication/authorization sanity check
  • Error detection (new advisor)

Disk mount options advisor

It is very important to have your disks mounted in the most optimal way. With the ClusterControl disk mount options advisor, we look more closely at your data disk on a daily basis. In this advisor, we investigate the filesystem used, mount options and the io scheduler settings of the operating system.

We check if your disks have been mounted with noatime and nodiratime. Without these options, the performance of the disks will suffer, as on every access to a file or directory the access time has to be written to disk. Since this happens continuously on databases, mounting with noatime and nodiratime is a good performance setting and it also increases the durability of your SSDs.

For file systems, we recommend using modern file systems like xfs, zfs, ext4 or btrfs. These file systems are created with performance in mind. The I/O scheduler is advised to be either noop or deadline. Deadline has been the default recommendation for databases for years, but thanks to faster storage like SSDs the noop scheduler is making more sense nowadays.
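
As a rough manual equivalent of what this advisor looks at, you can inspect the mount options and the active I/O scheduler yourself; the data directory and block device below are examples and will differ per system:

$ findmnt -T /var/lib/mongodb -o TARGET,FSTYPE,OPTIONS   # look for noatime,nodiratime
$ cat /sys/block/sda/queue/scheduler                     # the entry in [brackets] is active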

Numa check advisor

For MongoDB we enable our NUMA check advisor. This advisor checks if NUMA (Non-Uniform Memory Access) has been enabled on your system and, if this is the case, advises you to switch it off.

When Non-Uniform Memory Access is enabled, each CPU of the server can only address its own memory directly, and not the memory attached to the other CPUs in the machine. This way the CPU can only allocate memory from its own memory space, and allocating anything in excess will result in swap usage. This architecture has a strong performance benefit for multi-processor applications that make use of all CPUs, but as MongoDB isn’t such an application, NUMA will decrease performance greatly and could lead to huge swap usage.
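
To check whether NUMA is in play, and to start mongod with memory interleaved across all nodes (the commonly recommended workaround), something along these lines can be used; the configuration path is an example:

$ numactl --hardware                                     # more than one node means NUMA is active
$ numactl --interleave=all mongod -f /etc/mongod.conf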

Collection lock percentage (MMAP)

As MMAP is a file-based storage engine, it doesn’t support the document-level locking found in WiredTiger and RocksDB. Instead, the lowest level of locking for MMAP is the collection lock. This means any write to a collection (insert, update or delete) will lock the entire collection. If the lock percentage is getting too high, it indicates you have contention problems on the collection. When not addressed properly, this could bring your write throughput to a grinding halt. Therefore having an advisor warn you up front is very helpful.

MongoDB Replication Lag advisor

If you are scaling out MongoDB for reads via secondaries, the replication lag is very important to keep an eye on. The MongoDB client drivers will only use secondaries that don’t lag too far behind; otherwise you risk serving stale data.

Inside MongoDB, the primary keeps track of the replication status of its secondaries. The advisor fetches this replication information and guards the replication lag. If the lag becomes too high, it will send out a warning or critical status message.
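
For reference, the same lag figures can be checked by hand from the mongo shell on the primary; the helper below exists in the 3.x shell (newer versions renamed it to rs.printSecondaryReplicationInfo()):

$ mongo --quiet --eval 'rs.printSlaveReplicationInfo()'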

MongoDB Replication Window advisor

Next to replication lag, the replication window is an important metric to watch. The MongoDB oplog is a single capped collection with a preset size. Once the oplog is full and a new transaction needs to be stored, the oldest transaction is evicted to make room for the new one. The replication window reflects the number of seconds between the oldest and newest transactions in the oplog.

This metric is very important, as you need to know how long you can take a secondary out of the replicaSet before it is no longer able to catch up with the primary due to being too far behind in replication. Also, if a secondary starts lagging behind, it is good to know how long we can tolerate this before it can no longer catch up.

In the MongoDB shell, a function is available to calculate the replication window, and this advisor in ClusterControl makes the same calculation. The benefit is that you can now be alerted when the replication window becomes too short.
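
The shell helper in question is presumably db.getReplicationInfo(); checking the window by hand could look like this, with timeDiff being the window in seconds:

$ mongo --quiet --eval 'var ri = db.getReplicationInfo(); print("replication window (s): " + ri.timeDiff)'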

MongoDB un-sharded databases and collections advisor

In a sharded MongoDB cluster, all un-sharded databases and collections are assigned to a default primary shard by the MongoDB shard router. This primary shard can vary between the databases and collections, but in general this would be the shard with the most disk space available.

Having an un-sharded database or collection doesn’t immediately pose a risk for your cluster. However, if an application or user starts to write large volumes of data to one of these, the primary shard could fill up quickly and create an outage on that shard. As the database or collection is not sharded, it is not able to make use of the other shards.

For this reason we have created an advisor that helps prevent this from happening. The advisor scans all databases and collections, and warns you about any that have not been sharded.

Authentication enabled check

Without authentication enabled in MongoDB, any user logging in is treated as an admin. This is a serious risk, as admin tasks like creating users or making backups become available to anyone. This, combined with exposed MongoDB servers, resulted in the recent MongoDB ransom hacks, even though simply enabling authentication would have prevented most of these cases.

We have implemented an advisor that verifies whether your MongoDB servers have authentication enabled. Authentication can be enabled explicitly in the configuration, or implicitly by enabling the replication keyfile. If this advisor detects that authentication has not been enabled, you should act immediately, as your server is vulnerable to compromise.
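
A quick manual check, assuming mongod reads /etc/mongod.conf in the YAML format: authentication is on when security.authorization is set to enabled (or when a replication keyFile is configured, which implies it). The output below is simply an example of what you want to see:

$ grep -A2 '^security' /etc/mongod.conf
security:
  authorization: enabled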

Authentication/authorization sanity check

Next to the authentication enabled advisor, we also have built an advisor that performs a sanity check for both authentication and authorization in MongoDB.

In MongoDB, authentication and authorization are not placed in a central location, but are performed and stored at the database level. Normally, users connect to the database, authenticating against the database they intend to use. However, with the correct grants, it is also possible to authenticate against one (unrelated) database and still make use of another. Normally this is perfectly fine, unless a user has excessive rights (like the admin role) over another database.

In this advisor, we verify whether such excessive roles are present and whether they could pose a threat. At the same time, we also check for weak and easy-to-guess passwords.
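
To review the grants by hand, all users and their roles can be listed from the admin database with standard shell functionality; the advisor’s exact checks may well differ from this simple listing:

$ mongo --quiet --eval 'db.getSiblingDB("admin").system.users.find({}, {user: 1, db: 1, roles: 1}).forEach(printjson)'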

Error detection (new advisor)

In MongoDB, any error encountered will be counted or logged. Within MongoDB there is a big variety of possible errors: user asserts, regular asserts, warnings and even internal server exceptions. If there are trends in these errors, it is likely that there is either a misconfiguration or an application issue.

This advisor looks at the statistics of MongoDB errors (asserts) and makes sense of them. We interpret the trends found and advise on how to decrease the number of errors in your MongoDB environment!

MongoDB Webinar - How to Secure MongoDB with ClusterControl

Join us for our new webinar on “How to secure MongoDB with ClusterControl” on Tuesday, March 14th!

In this webinar we will walk you through the essential steps necessary to secure MongoDB and how to verify if your MongoDB instance is safe.

How to secure MongoDB with ClusterControl

The recent MongoDB ransom hijack caused a lot of damage and outages, which could have been prevented with maybe two or three simple configuration changes. MongoDB offers a lot of security features out of the box; however, they are disabled by default.

In this webinar, we will explain which configuration changes are necessary to enable MongoDB’s security features, and how to test if your setup is secure after enabling them. We will demonstrate how ClusterControl enables security on default installations. And we will discuss how to leverage the ClusterControl advisors and the MongoDB Audit Log to constantly scan your environment, and harden your security even more.

Date, Time & Registration

Europe/MEA/APAC

Tuesday, March 14th at 09:00 GMT / 10:00 CET (Germany, France, Sweden)

Register Now

North America/LatAm

Tuesday, March 14th at 09:00 Pacific Time (US) / 12:00 Eastern Time (US)

Register Now

Agenda

  • What is the MongoDB ransom hack?
  • What other security threats are valid for MongoDB?
  • How to enable authentication / authorisation
  • How to secure MongoDB from ransomware
  • How to scan your system
  • ClusterControl MongoDB security advisors
  • Live Demo

Speaker

Art van Scheppingen is a Senior Support Engineer at Severalnines. He’s a pragmatic database expert with over 16 years experience in web development. He previously worked at Spil Games as Head of Database Engineering, where he kept a broad vision upon the whole database environment: from MySQL to MongoDB, Vertica to Hadoop and from Sphinx Search to SOLR. He regularly presents his work and projects at various conferences (Percona Live, MongoDB Open House, FOSDEM) and related meetups.

We look forward to “seeing” you there!

This session is based upon the experience we have securing MongoDB and implementing it for our database infrastructure management solution, ClusterControl. For more details, read through our ‘Become a MongoDB DBA’ blog series.

MongoDB tools from the community that complement ClusterControl

Since MongoDB is the favored database for many developers, it comes as no surprise that the community support is excellent. You can quickly find answers to most of your problems on knowledge sites like Stack Overflow, but the community also creates many tools, scripts and frameworks around MongoDB.

ClusterControl is part of the community tools that allow you to deploy, monitor, manage and scale any MongoDB topology. ClusterControl is designed around the database lifecycle, but naturally it can’t cover all aspects of a development cycle. This blog post will cover a selection of community tools that can be used to complement ClusterControl in managing a development cycle.

Schema management

The pain of schema changes in conventional RDBMS was one of the drivers behind the creation of MongoDB: we all suffered from painfully slow or failed schema migrations. Therefore MongoDB has been developed with a schemaless document design. This allows you to change your schema whenever you like, without the database holding you back.

Schema changes are generally made whenever there is application development. Adding new features to existing modules, or creating new modules may involve the creation of another version of your schema. Also schema and performance optimizations may create new versions of your schemas.

Even though many people will say it’s brilliant not being held back by the database, it also brings a couple of issues: since old data is not migrated to the new schema design, your application has to be able to cope with every schema version you have in your database. Alternatively, you could update all (old) data to the newer schema right after you have deployed the application.

The tools discussed in this section will all be very helpful in solving these schema issues.

Meteor2 collection

The Meteor2 collection module ensures that the schema is validated on both the client and the server side, so that all data gets written according to the defined schema. The module is only reactive, so whenever data does not get written according to the schema, a warning is returned.

Mongoose

Mongoose is Node.js middleware for schema modelling and validation. The schema definition is placed inside your Node.js application, and this will allow Mongoose to act as an ORM. Mongoose will not migrate existing data into the new schema definition.

MongoDB Schema

So far we have only spoken about schema changes, so it is time to introduce MongoDB Schema. MongoDB Schema is a schema analyzer that takes a (random) sample of your data and outputs the schema for the sampled data. This doesn’t necessarily mean it will be 100% accurate in its schema estimation though.

With this tool you could regularly check your data against your schema and detect important or unintentional changes in your schema.

Backups

ClusterControl supports two implementations for backing up MongoDB: mongodump and Percona Consistent Backup. Still, some less regularly used functionalities, like partial/incremental backups and streaming backups to other clusters, are not available out of the box.

MongoDB Backup

MongoDB Backup is a NodeJS logical backup solution that offers similar functionality as mongodump. In addition to this, it can also stream backups over the network, making it useful for transporting a collection from one MongoDB instance to another.

Another useful feature is that it has been written in NodeJS. This means it will be very easy to integrate in a Hubot chatbot, and automate the collection transfers. Don’t be afraid if your company isn’t using Hubot as a chatbot: it can also function as either a webhook or be controlled via the CLI.

Mongob

Mongob is another logical backup solution, but in this case it has been written in Python and is only available as a CLI tool. Just like MongoDB Backup, it is able to transfer databases and collections between MongoDB instances, but in addition to that, it can also limit the transfer rate.

Another useful feature of Mongob is that it will be able to create incremental backups. This is good if you wish to have more compact backups, but also if you need to perform a point in time recovery.

MongoRocks Strata

MongoRocks Strata is the backup tool for the MongoRocks storage engine. Percona Server for MongoDB includes the MongoRocks storage engine, however it lacks the Strata backup tool for making file level backups. In principle mongodump and Percona Consistent Backup are able to make reliable backups, but as they are logical dumps the recovery time will be long.

MongoRocks is a storage engine that relies on a LSM tree architecture. This basically means it is an append only storage. To be able to do this, it operates with buckets of data: older data will be stored in larger (archive) buckets, recent data will be stored in smaller (recent) buckets and all new incoming data will be written into a special memory bucket. Every time a compaction is done, data will trickle down from the memory bucket to the recent buckets, and recently changed data back to the archive bucket.

To make a backup of all buckets, Strata instructs MongoDB to flush the memory bucket to disk, and then it copies all buckets of data on file level. This will create a consistent backup of all available data. It will also be possible to instruct Strata to only copy the recent buckets and effectively take an incremental backup.

Another good point of Strata is that it provides the mongoq binary, which allows you to query the backups directly. This means there is no need to restore the backup to a MongoDB instance before being able to query it. You could leverage this functionality to ship your production data offline to your analytics system!

MongoDB GUIs

Within ClusterControl we allow querying the MongoDB databases and collections via advisors. These advisors can be developed in the ClusterControl Developer Studio interface. We don’t feature a direct interface with the databases, so to make changes to your data you will either need to log into the MongoDB shell, or use a tool that allows you to make these changes.

PHPMoAdmin

PHPMoAdmin is the MongoDB equivalent of PHPMyAdmin. It features similar functionality to PHPMyAdmin: data and admin management. The tool allows you to perform CRUD operations in both JSON and PHP syntax on all databases and collections. Next to all that, it also features import/export functionality for your current data selection.

Mongo-Express

If you seek a versatile data browser, Mongo-Express is a tool you definitely need to check out. Not only does it allow similar operations to PHPMoAdmin, it is also able to display images and videos inline. It even supports fetching large objects from GridFS buckets.

Robomongo

The tool that goes one step further is Robomongo. Being a crowd-funded tool, it has a huge feature list. It is able to perform all the same operations as Mongo-Express, but in addition also allows user, role and collection management. For connections it supports direct MongoDB connections, but also replicaSet topologies and MongoDB Atlas instances.

Conclusion

With this selection of free community tools, we hope we have given you a good overview of how to manage MongoDB data alongside ClusterControl.

Happy clustering!

Video: ClusterControl Developer Studio Introduction Video

The free ClusterControl Developer Studio provides you with a set of monitoring and performance advisors to use, and lets you create custom advisors to add security and stability to your MySQL, Galera, and MongoDB infrastructures.

ClusterControl’s library of Advisors allows you to extend the features of ClusterControl to add even more database management functionality.

Advisors in ClusterControl are powerful constructs; they provide specific advice on how to address issues in areas such as performance, security, log management, configuration, storage space, etc. They can be anything from simple configuration advice, warning on thresholds or more complex rules for predictions, or even cluster-wide automation tasks based on the state of your servers or databases.

Developer Studio Resources

Want to learn more about the Developer Studio in ClusterControl? Check out the information below!

Advisor Highlights

Here is some information on particular advisors that can help you with your instances.

High Availability in ProxySQL: new webinar with René Cannaò

Following the interest we saw in this topic during our recent introduction webinar to ProxySQL, we’re pleased to invite you to join this new webinar on high availability in ProxySQL.

As you will know, the proxy layer is crucial when building a highly available MySQL infrastructure. It is therefore imperative not to let it become a single point of failure on its own. And building a highly available proxy layer creates additional challenges, such as how to manage multiple proxy instances, how to ensure that their configuration is in sync, and how to handle Virtual IPs and failover.

In this new webinar with ProxySQL’s creator, René Cannaò, we’ll discuss building a solid, scalable and manageable proxy layer using ProxySQL. And we will demonstrate how you can make your ProxySQL highly available when deploying it from ClusterControl.

Date, Time & Registration

Europe/MEA/APAC

Tuesday, April 4th at 09:00 BST (UK) / 10:00 CEST (Germany, France, Sweden)

Register Now

North America/LatAm

Tuesday, April 4th at 9:00 Pacific Time (US) / 12:00 Eastern Time (US)

Register Now

Agenda

  • Introduction
  • High Availability in ProxySQL
    • Layered approach
    • Virtual IP
    • Keepalived
  • Configuration management in distributed ProxySQL clusters
  • Demo: ProxySQL + keepalived in ClusterControl
    • Deployment
    • Failover
  • Q&A

Speakers

René Cannaò, Creator & Founder, ProxySQL. René has 10 years of working experience as a System, Network and Database Administrator, mainly on Linux/Unix platforms. In the last 4-5 years his experience has focused mainly on MySQL, working as Senior MySQL Support Engineer at Sun/Oracle and then as Senior Operational DBA at Blackbird (formerly PalominoDB). In this period he built an analytical and problem-solving mindset and he is always eager to take on new challenges, especially if they are related to high performance. And then he created ProxySQL …

Krzysztof Książek, Senior Support Engineer at Severalnines, is a MySQL DBA with experience managing complex database environments for companies like Zendesk, Chegg, Pinterest and Flipboard.

We look forward to “seeing” you there and to insightful discussions!

If you have any questions or would like a personalised live demo, please do contact us.

MySQL Replication: All the Severalnines Resources


MySQL Replication has become an instrumental part of scale-out architectures in LAMP environments. MySQL offers plenty of solutions when there is a need to scale out, the most common being to add read replicas.

Building a database HA stack for production can be daunting. It is not just about setting up replication between a master and some slave servers, it’s also about how to restore broken topologies and fail-over, how applications can keep track of the writable master and the read-only slaves, what to do when servers are corrupted, how to perform backups, and more.

We’ve produced a number of resources aimed at helping users to get started with MySQL Replication or to get more out of their existing setups.

The White Papers

The MySQL© Replication Blueprint

This is a great resource for anyone wanting to build or optimise a MySQL replication setup. The MySQL Replication Blueprint is about having a complete ops-ready solution from end to end. From monitoring and management through to load balancing, all important aspects are covered.

Download the whitepaper

MySQL Replication for High Availability

This whitepaper covers MySQL Replication with information on the latest features introduced in 5.6 and 5.7. There is also a hands-on, practical section on how to quickly deploy and manage a replication setup using ClusterControl.

Download the whitepaper

The On-Demand Webinars

Top 9 Tips for Building a Stable MySQL© Replication Environment

MySQL replication is a widely known and proven solution to build scalable clusters of databases. It is very easy to deploy, even easier with GTID. However, ease of deployment doesn't mean you don't need knowledge and skills to operate it correctly. If you'd like to learn what is needed to build a stable environment using MySQL replication, then this webinar is for you.

Watch the replay!

Introducing the Severalnines MySQL© Replication Blueprint

The Severalnines Blueprint for MySQL Replication includes all aspects of a MySQL Replication topology with the ins and outs of deployment, setting up replication, monitoring, upgrades, performing backups and managing high availability using proxies such as ProxySQL, MaxScale and HAProxy. This webinar provides an in-depth walk-through of this blueprint and explains how to make best use of it.

Watch the replay!

Managing MySQL Replication for High Availability

This webinar covers deployment and management of MySQL replication topologies using ClusterControl. We show you how to schedule backups, promote slaves and what the most important metrics are worth keeping a close eye on. We also demonstrate how you can deal with schema and topology changes and how to solve the most common replication issues.

Watch the replay!

Become a MySQL DBA: Schema Changes for MySQL Replication & Galera Cluster

Find out how to implement schema changes in the least impacting way to your operations and ensure availability of your database. This webinar also covers some real-life examples and discusses how to handle them.

Watch the replay!

Become a MySQL DBA: Replication Topology Changes for MySQL and MariaDB

Discover how to perform replication topology changes in MySQL / MariaDB, and what the failover process may look like. This webinar also discusses some external tools you may find useful when dealing with these operations.

Watch the replay!

Tutorials

MySQL Replication for High Availability - Tutorial

Learn about a smarter Replication setup that uses a combination of advanced replication techniques including mixed binary replication logging, auto-increment offset seeding, semi-sync replication, automatic fail-over/resynchronization and one-click addition of read slaves.  Our tutorial covers the concepts behind our MySQL Replication solution and explains how to deploy and manage it.

Read the Tutorial!

Top Blogs

How to deploy and manage MySQL multi-master replication setups with ClusterControl 1.4

MySQL replication, while simple and popular, may come in different shapes and flavors. Master-slave or master-master topologies can be configured to suit your environment. ClusterControl 1.4 brings a list of enhancements to deploy and manage different types of MySQL replication setups. This blog outlines the different topologies that can be deployed, the merits of each topology, and shows how each can be managed in a live environment.

Read More!

Automatic failover of MySQL Replication - New in ClusterControl 1.4

MySQL replication setups are inevitably related to failovers - what do you do when your master fails and your applications are not able to write to the database anymore? Automated failover is required if you need to quickly recover an environment to keep your database up 24x7. This blog post discusses this new replication feature recently introduced in ClusterControl 1.4.

Read More!

Automating MySQL Replication with ClusterControl 1.4.0 - what’s new

This blog post will go through new replication features in ClusterControl 1.4.0, including enhanced multi-master deployment, managing replication topology changes, automated failover and handling of replication errors.

Read More!

MySQL Replication failover: Maxscale vs MHA (A Four Part Series)

This series describes how you can implement automated failover with MariaDB using MHA, how you can implement it using Maxscale and MariaDB Replication Manager, how you can implement it using Maxscale and MHA and how the two compare with each other, plus an addendum on the MariaDB Replication Manager covering its new and improved release.

Read More!

ClusterControl
Single Console for Your Entire Database Infrastructure
Find out what else is new in ClusterControl

ClusterControl for MySQL Replication

ClusterControl provides advanced deployment, management, monitoring, and scaling functionality to get your MySQL replication instances up-and-running using proven methodologies that you can depend on to work. It makes MySQL Replication easy and secure with point-and-click interfaces and no need to have specialized knowledge about the technology or multiple tools. It covers all aspects one might expect for a production-ready replication setup.

ClusterControl delivers on an array of features to help deploy, manage, monitor, and scale your MySQL Replication environments.

  • Point-and-Click Deployment: Point-and-click, automatic deployment for MySQL replication is available in both community and enterprise versions of ClusterControl.
  • Management & Monitoring: ClusterControl provides management features to repair and recover broken nodes, as well as test and automate MySQL upgrades. It also provides a unified view of all MySQL nodes across your data centers and lets you drill down into individual nodes for more detailed statistics.
  • Automatic Failure Detection and Handling: ClusterControl takes care of your replication cluster’s health. If a master failure is detected, ClusterControl automatically promotes one of the available slaves to ensure your cluster is always up.
  • Proxy Integration: ClusterControl makes it easy to build a proxy layer over your replication setup; it shields applications from replication topology changes, server failures and changed writable masters. With just a couple of clicks you can improve the availability of your stack.

Learn more about how ClusterControl can simplify deployment and enhance performance here.

We trust that these resources prove useful!

Happy replicating!


Video: MySQL Replication & ClusterControl Product Demonstration


The video below details the features and functions that are available in ClusterControl for MySQL Replication.  Included in the video are…

  • How to Deploy Master-Slave Replication
  • How to Deploy Multi-Master Replication
  • MySQL Replication overview including metrics
  • Individual Node overview & management
  • Backup management from Slaves or Masters
  • Adding Nodes
  • Adding Load Balancers
ClusterControl
Single Console for Your Entire Database Infrastructure
Find out what else is new in ClusterControl

ClusterControl for MySQL Replication

ClusterControl provides advanced deployment, management, monitoring, and scaling functionality to get your MySQL replication instances up-and-running using proven methodologies that you can depend on to work. It makes MySQL Replication easy and secure with point-and-click interfaces and no need to have specialized knowledge about the technology or multiple tools. It covers all aspects one might expect for a production-ready replication setup.

ClusterControl delivers on an array of features to help deploy, manage, monitor, and scale your MySQL Replication environments.

  • Point-and-Click Deployment:  Point-and-click, automatic deployment for MySQL replication is available in both community and enterprise versions of ClusterControl.
  • Management & Monitoring: ClusterControl provides management features to repair and recover broken nodes, as well as test and automate MySQL upgrades. It also provides a unified view of all MySQL nodes across your data centers and lets you drill down into individual nodes for more detailed statistics.
  • Automatic Failure Detection and Handling: ClusterControl takes care of your replication cluster’s health. If a master failure is detected, ClusterControl automatically promotes one of the available slaves to ensure your cluster is always up.
  • Proxy Integration: ClusterControl makes it easy to build a proxy layer over your replication setup; it shields applications from replication topology changes, server failures and changed writable masters. With just a couple of clicks you can improve the availability of your stack.

Learn more about how ClusterControl can simplify deployment and enhance performance here.

Webinar Replay and Q&A: Load balancing MySQL & MariaDB with ProxySQL & ClusterControl


Thanks to everyone who participated in our recent webinar on how to load balance MySQL and MariaDB with ClusterControl and ProxySQL!

This joint webinar with ProxySQL creator René Cannaò generated a lot of interest … and a lot of questions!

We covered topics such as ProxySQL concepts (with hostgroups, query rules, connection multiplexing and configuration management), went through a live demo of a ProxySQL setup in ClusterControl (try it free) and discussed upcoming ClusterControl features for ProxySQL.

These topics triggered a lot of related questions, to which you can find our answers below.

If you missed the webinar, would like to watch it again or browse through the slides, it is available for viewing online.

Watch the webinar replay

You can also join us for our follow-up webinar next week on Tuesday, April 4th 2017. We’re again joined by René and will be discussing High Availability in ProxySQL.

Sign up for the webinar on HA in ProxySQL

Webinar Questions & Answers

Q. Thank you for your presentation. I have a question about connection multiplexing: does ProxySQL ensure that all statements from start transaction to commit are sent through the same backend connection?

A. This is configurable.

A small preface first: at any time, each client’s session can have one or more backend connections associated with it. A backend connection is associated with a client when a query needs to be executed, and normally it returns immediately to the connection pool. “Normally” means that there are circumstances when this doesn’t happen. For example, when a transaction starts, the connection is not returned to the connection pool until the transaction completes (either commits or rolls back). This means that all the queries that should be routed to the same hostgroup where the transaction is running are guaranteed to run in the same connection.

Nonetheless, by default, a transaction doesn’t disable query routing. That means that while a transaction is running on one connection to a specific hostgroup and this connection is associated with only that client, if the client sends a query destined for another hostgroup, that query could be sent to a different connection.

Whether a query can be sent to a different connection based on query rules is configurable via the value of mysql_users.transaction_persistent:

  • 0 = queries for different hostgroups can be routed to different connections while a transaction is running;
  • 1 = query routing will be disabled while the transaction is running.

The behaviour is configurable because it depends on the application. Some applications require that all the queries are part of the same transaction, other applications don’t.
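
For reference, this is roughly how such a change could look on the ProxySQL admin interface (a minimal sketch; the username 'app' is a placeholder, not a value from the webinar):

-- connect to the ProxySQL admin interface (port 6032 by default) and adjust the user entry
UPDATE mysql_users SET transaction_persistent=1 WHERE username='app';
LOAD MYSQL USERS TO RUNTIME;
SAVE MYSQL USERS TO DISK;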

Q. What is the best way to set up a ProxySQL cluster? The main concern here is configuration of the ProxySQL cascading throughout the cluster.

A. ProxySQL can be deployed in numerous ways.

One typical deployment pattern is to deploy a ProxySQL instance on every application host. The application would then connect to the proxy using a very low latency connection via a Unix socket. If the number of application hosts increases, you can deploy a middle layer of 3-5 ProxySQL instances and configure all ProxySQL instances from the application servers to connect via this middle layer. Configuration management, typically, would be handled using Puppet/Chef/Ansible infrastructure orchestration tools. You can also easily use home-grown scripts, as ProxySQL’s admin interface is accessible via the MySQL command line and ProxySQL reconfiguration can be done by issuing a couple of SQL statements.
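
As an illustration of how lightweight such a reconfiguration is, adding a new backend to a reader hostgroup is just a couple of statements on the admin interface (a sketch only; hostgroup 20 and the IP address are placeholders):

-- add a backend server to hostgroup 20 and activate the change
INSERT INTO mysql_servers (hostgroup_id, hostname, port) VALUES (20, '10.0.0.12', 3306);
LOAD MYSQL SERVERS TO RUNTIME;
SAVE MYSQL SERVERS TO DISK;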

Q. How would you recommend to make the ProxySQL layer itself highly available?

A. There are numerous methods to achieve this.

One common method is to deploy a ProxySQL instance on every application host. The application would then connect to the proxy using very low latency connection via Unix socket. In such a deployment there is no single point of failure as every application host connects to the ProxySQL installed locally.

When you implement a middle-layer, you will also maintain HA as 3-5 ProxySQL nodes would be enough to make sure that at least some of them are available for local proxies from application hosts.

Another common method of deploying a highly available ProxySQL setup is to use tools like keepalived along with virtual IP. The application will connect to VIP and this IP will be moved from one ProxySQL instance to another if keepalived detects that something happened to the “main” ProxySQL.

Q. How can ProxySQL use the right hostgroup for each query?

A. ProxySQL routes queries to hostgroups based on query rules - it is up to the user to build a set of rules which make sense in their environment.
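
A typical example is read/write splitting via two rules matching on query digest (a sketch only; the rule numbers and hostgroups 10 for the writer and 20 for the readers are assumptions, not values from the webinar):

-- send SELECT ... FOR UPDATE to the writer, other SELECTs to the readers
INSERT INTO mysql_query_rules (rule_id, active, match_digest, destination_hostgroup, apply) VALUES (1, 1, '^SELECT .* FOR UPDATE', 10, 1);
INSERT INTO mysql_query_rules (rule_id, active, match_digest, destination_hostgroup, apply) VALUES (2, 1, '^SELECT', 20, 1);
LOAD MYSQL QUERY RULES TO RUNTIME;
SAVE MYSQL QUERY RULES TO DISK;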

Q. Can you tell us more about query mirroring?

A. In general, the implementation of query mirroring in ProxySQL allows you to send traffic to two hostgroups.

Traffic sent to the “main” hostgroup is ensured to reach it (unless there are no hosts in that hostgroup); the mirror hostgroup, on the other hand, receives traffic on a “best effort” basis - the query should, but is not guaranteed to, reach the mirrored hostgroup.

This limits the usefulness of mirroring as a method to replicate data. It is still an amazing way to do load testing of new hardware or redesigned schema. Of course, mirroring reduces the maximal throughput of the proxy - queries have to be executed twice so the load is also twice as high. The load is not split between the two, but duplicated.
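
Mirroring is enabled per query rule. Assuming the read rule from the earlier sketch (rule_id 2) and a hypothetical test hostgroup 30, it could look like this:

-- mirror matching traffic to hostgroup 30 on a best-effort basis
UPDATE mysql_query_rules SET mirror_hostgroup=30 WHERE rule_id=2;
LOAD MYSQL QUERY RULES TO RUNTIME;
SAVE MYSQL QUERY RULES TO DISK;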

Q. And what about query caching?

A. Query cache in ProxySQL is implemented as a simple key->value memory store with a Time To Live for every entry. What will be cached and for how long is decided at the query rules level. The user can define a query rule matching a particular query or a wider spectrum of them. To identify a query result set in the cache, ProxySQL uses a query hash along with information about the user and schema.

How to set the TTL for a query? The simplest answer is: to the maximum replication lag which is acceptable for this query. If you are ok reading stale data from a slave which is lagging 10 seconds, you should be fine reading stale data from the cache with a TTL of 10000 milliseconds.
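
Continuing the same hypothetical read rule from above, caching its result sets for 10 seconds is a single attribute on the query rule (cache_ttl is expressed in milliseconds):

UPDATE mysql_query_rules SET cache_ttl=10000 WHERE rule_id=2;
LOAD MYSQL QUERY RULES TO RUNTIME;
SAVE MYSQL QUERY RULES TO DISK;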

Q. Connection limit to backends?

A. ProxySQL indeed implements a connection limit to backend servers. The maximum number of connections to any backend instance is defined in mysql_servers table.

Because the same backend server can be present in multiple hostgroups, it is possible to define the maximum number of connections per server per hostgroup.

This is useful, for example, in the case of a small set of connections where specific long-running queries are queued without affecting the rest of the traffic destined for the same server.
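
As a sketch (the server IP and hostgroup are placeholders), capping the number of connections ProxySQL opens to one backend in one hostgroup could look like this:

UPDATE mysql_servers SET max_connections=20 WHERE hostgroup_id=30 AND hostname='10.0.0.12';
LOAD MYSQL SERVERS TO RUNTIME;
SAVE MYSQL SERVERS TO DISK;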

Q. Regarding the connection limit from the APP: are connections QUEUED?

A. If you reach the mysql-max_connections, further connections will be rejected with the error “Too many connections”.

It is important to remember that there is not a one-to-one mapping between application connections and backend connections.

That means that:

  • Access to the backends can be queued, but connections from the application are either accepted or rejected.
  • A large number of application connections can use a small number of backend connections.

Q. I haven’t heard of SHUN before: what does it mean?

A. SHUN means that the backend is temporarily marked as non-available, but ProxySQL will attempt to connect to it again after mysql-shun_recovery_time_sec seconds.
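
The recovery interval is a global variable, so lowering it to, say, 30 seconds (an arbitrary example value) is done like any other ProxySQL variable change:

UPDATE global_variables SET variable_value='30' WHERE variable_name='mysql-shun_recovery_time_sec';
LOAD MYSQL VARIABLES TO RUNTIME;
SAVE MYSQL VARIABLES TO DISK;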

Q. Is query sharding available across slaves?

A. Depending on the meaning of sharding, ProxySQL can be used to perform sharding across slaves. For example, it is possible to send all traffic for a specific set of tables to a set of slaves (in a hostgroup). Splitting the slaves into multiple hostgroups and performing query sharding accordingly is possible to improve performance, as each slave won’t read from disk data from tables for which it doesn’t process any query.

Q. How do you sync the configuration of ProxySQL when you have many instances for H.A ?

A. Configuration management, typically, would be handled using Puppet/Chef/Ansible infrastructure orchestration tools. You can also easily use home-grown scripts as ProxySQL’s admin interface is accessible via MySQL command line and ProxySQL reconfiguration can be done by issuing a couple of SQL statements.

Q. How flexible or feasible it is to change the ProxySQL config online, eg. if one database slave is down, how is that handled in such a scenario ?

A. ProxySQL configuration can be changed at any time; it’s been designed with such level of flexibility in mind.

‘Database down’ can be handled differently, depending on how ProxySQL is configured. If you rely on replication hostgroups to define writer and reader hostgroups (this is how ClusterControl deploys ProxySQL), ProxySQL will monitor the state of the read_only variable on hosts in both reader and writer hostgroups and it will move hosts between them as needed.

If a master is promoted by external tools (like ClusterControl, for example), read_only values will change, ProxySQL will detect the topology change and it will act accordingly. For a standard “slave down” scenario there is no required action from the management system standpoint - without any change in the read_only value, ProxySQL will just detect that the host is not available and it will stop sending queries to it, re-executing on other members of the hostgroup those queries which didn’t complete on the dead slave.

If we are talking about a setup not using replication hostgroups then it is up to the user and their scripts/tools to implement some sort of logic and reconfigure ProxySQL on runtime using admin interface. Slave down, though, most likely wouldn’t require any changes.

Q. Is it somehow possible to SELECT data from one host group into another host group?

A. No, at this point it is not possible to execute cross-hostgroup queries.

Q. What would be RAM/Disk requirements for logs , etc?

A. It basically depends on the amount of log entries and how verbose ProxySQL logging is in your environment. Typically it’s negligible.

Q. Instead of installing ProxySQL on all application servers, could you put a ProxySQL cluster behind a standard load balancer?

A. We see no reason why not. You can put whatever you like in front of ProxySQL - F5, another layer of software proxies - it is up to you. Please keep in mind, though, that every layer of proxies or load balancers adds latency to your network and, as a result, to your queries.

Q. Can you please comment on Reverse Proxy, whether it can be used in SQL or not?

A. ProxySQL is a Reverse Proxy. Contrary to a Forward Proxy (which acts as an intermediary that simply forwards requests), a Reverse Proxy processes clients’ requests and retrieves data from servers: clients send requests to ProxySQL, which will understand the request, analyze it, and decide what to do - rewrite, cache, block, re-execute on failure, etc.

Q. Does the user authentication layer work with non-local database accounts, e.g. with the pam modules available for proxying LDAP users to local users?

A. There is no direct support for LDAP integration but, as configuration management in ProxySQL is child’s play, it is really simple to put together a script which will pull the user details from LDAP and load them into ProxySQL. You can use cron to sync it often. All ProxySQL needs is a username and password hash in MySQL format - this is enough to add a user to ProxySQL.
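
Loading such a user could then be as simple as the sketch below (the username and hostgroup are purely illustrative, and the hash shown is simply the MySQL native hash of the literal string 'password'):

INSERT INTO mysql_users (username, password, default_hostgroup) VALUES ('app_user', '*2470C0C06DEE42FD1618BB99005ADCA2EC9D1E19', 10);
LOAD MYSQL USERS TO RUNTIME;
SAVE MYSQL USERS TO DISK;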

Q. It seems like the prescribed production deployment includes many proxies - are there any suggestions or upcoming work to address how to make configuration changes across all proxies in a consistent manner?

A. At this point it is recommended to leverage configuration management tools like Chef/Ansible/Puppet to manage ProxySQL’s configuration.

Watch the webinar replay

You can also join us for our follow-up webinar next week on Tuesday, April 4th 2017. We’re again joined by René and will be discussing High Availability in ProxySQL.

Sign up for the webinar on HA in ProxySQL

Top mistakes to avoid in MySQL replication


Setting up replication in MySQL is easy, but managing it in production has never been an easy task. Even with the newer GTID auto-positioning, it still can go wrong if you don’t know what you are doing. After setting up replication, all sorts of things can go wrong. Mistakes can easily be made and can have a disastrous ending for your data.

This post will highlight some of the most common mistakes made with MySQL replication, and how you can prevent them.

Setting up replication

When setting up MySQL replication, you need to prime the slave nodes with the dataset from the master. With solutions like Galera cluster, this is automatically handled for you with the method of your choice. For MySQL replication, you need to do this yourself, so naturally you take your standard backup tool.

For MySQL there is a huge variety of backup tools available, but the most commonly used one is mysqldump. Mysqldump outputs a logical backup of the dataset of your master. This means the copy of the data is not going to be a binary copy, but a big file containing queries to recreate your dataset. In most cases this should provide you with a (near) identical copy of your data, but there are cases where it will not - due to the dump being on a per object basis. This means that even before you start replicating data, your dataset is not the same as the one on the master.

There are a couple of tweaks you can make to get a more reliable mysqldump, like dumping in a single transaction, and don’t forget to include routines and triggers:

mysqldump -uuser -ppass --single-transaction --routines --triggers --all-databases > dumpfile.sql

A good practice to check that your slave node is 100% the same is to use pt-table-checksum after setting up the replication:

pt-table-checksum --replicate=test.checksums --ignore-databases mysql h=localhost,u=user,p=pass

This tool will calculate a checksum for each table on the master, replicate the command to the slave and then the slave node will perform the same checksum operation. If any of the tables are not the same, this should be clearly visible in the checksum table.
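
To list only the tables that differ, you can run a query along these lines against the checksum table on each slave (based on the test.checksums table created by the command above):

SELECT db, tbl, SUM(this_cnt) AS total_rows, COUNT(*) AS chunks
FROM test.checksums
WHERE (master_cnt <> this_cnt
   OR master_crc <> this_crc
   OR ISNULL(master_crc) <> ISNULL(this_crc))
GROUP BY db, tbl;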

Using the wrong replication method

The default replication method of MySQL used to be so-called statement-based replication. This method is exactly what it sounds like: a replication stream of every statement run on the master that will be replayed on the slave node. Since MySQL itself is multi-threaded but its (traditional) replication isn’t, the order of statements in the replication stream may not be 100% the same. Also, replaying a statement may give different results when it is not executed at the exact same time.

This may result in different datasets between the master and slave, due to data drift. This wasn’t an issue for many years, as not many ran MySQL with many simultaneous threads, but with modern multi-CPU architectures, this actually has become highly probable on a normal day-to-day workload.

The answer from MySQL was so-called row-based replication. Row-based replication will replicate the data whenever possible, but in some exceptional cases it still uses statements. A good example would be a DDL change of a table, where replication would otherwise have to copy every row in the table through the replication stream. Since this is inefficient, such a statement will be replicated in the traditional way. When row-based replication detects data drift, it will stop the slave thread to prevent making things worse.

Then there is a method in between these two: mixed mode replication. This type of replication will always replicate statements, except when the query uses the UUID() function, triggers, stored procedures, UDFs or a few other exceptions. Mixed mode will not solve the issue of data drift and, together with statement-based replication, should be avoided.
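
Switching an existing setup to row-based replication is a quick change; a minimal sketch (remember to also add binlog_format=ROW to the [mysqld] section of my.cnf so the setting survives a restart, and note that sessions opened before the change keep their old format):

SHOW GLOBAL VARIABLES LIKE 'binlog_format';
SET GLOBAL binlog_format = 'ROW';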

Circular replication

Running MySQL replication with multi-master is often necessary if you have a multi-datacenter environment. Since the application can’t wait for the master in the other datacenter to acknowledge your write, a local master is preferred. Normally the auto increment offset is used to prevent data clashes between the masters. Having two masters perform writes to each other in this way is a broadly accepted solution.

MySQL Master-Master replication

However if you need to write in multiple datacenters into the same database, you end up with multiple masters that need to write their data to each other. Before MySQL 5.7.6 there was no method to do a mesh type of replication, so the alternative would be to use a circular ring replication instead.

MySQL ring replication topology

Ring replication in MySQL is problematic for the following reasons: latency, high availability and data drift. Writing some data to server A, it would take three hops to end up on server D (via server B and C). Since (traditional) MySQL replication is single threaded, any long running query in the replication may stall the whole ring. Also if any of the servers would go down, the ring would be broken and currently there is no failover software that can repair ring structures. Then data drift may occur when data is written to server A and is altered at the same time on server C or D.

Broken ring replication

In general circular replication is not a good fit with MySQL and it should be avoided at all costs. Galera would be a good alternative for multi-datacenter writes, as it has been designed with that in mind.

Stalling your replication with large updates

Often various housekeeping batch jobs will perform various tasks, ranging from cleaning up old data to calculating averages of ‘likes’ fetched from another source. This means that at set intervals, a job will create a lot of database activity and, most likely, write a lot of data back to the database. Naturally this means the activity within the replication stream will increase equally.

Statement-based replication will replicate the exact queries used in the batch jobs, so if the query took half an hour to process on the master, the slave thread will be stalled for at least the same amount of time. This means no other data can replicate and the slave nodes will start lagging behind the master. If this exceeds the threshold of your failover tool or proxy, it may drop these slave nodes from the available nodes in the cluster. If you are using statement-based replication, you can prevent this by crunching the data for your job in smaller batches.
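
A simple way to crunch the data in smaller batches is to repeat a limited statement until it affects zero rows; the table name and retention period below are hypothetical:

-- delete old rows in chunks of 10,000; repeat (with a short pause) until 0 rows are affected
DELETE FROM page_views WHERE created_at < NOW() - INTERVAL 90 DAY LIMIT 10000;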

Now you may think row-based replication isn’t affected by this, as it will replicate the row information instead of the query. This is partly true, as for DDL changes, the replication reverts back to statement-based format. Also large numbers of CRUD operations will affect the replication stream: in most cases this is still a single threaded operation and thus every transaction will wait for the previous one to be replayed via replication. This means that if you have high concurrency on the master, the slave may stall on the overload of transactions during replication.

To get around this, both MariaDB and MySQL offer parallel replication. The implementation may differ per vendor and version. MySQL 5.6 offers parallel replication as long as the queries are separated by schema. MariaDB 10.0 and MySQL 5.7 both can handle parallel replication across schemas, but have other boundaries. Executing queries via parallel slave threads may speed up your replication stream if you are write heavy. However if you aren’t, it would be best to stick to the traditional single threaded replication.
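
On MySQL 5.7, enabling parallel apply on a slave could look like the sketch below; the number of workers is an arbitrary example, and MariaDB uses slave_parallel_threads instead:

STOP SLAVE SQL_THREAD;
SET GLOBAL slave_parallel_type = 'LOGICAL_CLOCK';
SET GLOBAL slave_parallel_workers = 4;
START SLAVE SQL_THREAD;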

Schema changes

Performing schema changes on a running production setup is always a pain. This has to do with the fact that a DDL change will most of the time lock a table and only release this lock once the DDL change has been applied. It even gets worse once you start replicating these DDL changes through MySQL replication, where it will in addition stall the replication stream.

A frequently used workaround is to apply the schema change to the slave nodes first. For statement-based replication this works fine, but for row-based replication this only works up to a certain degree. Row-based replication allows extra columns to exist at the end of the table, so as long as it is able to write the first columns it will be fine. First apply the change to all slaves, then failover to one of the slaves and then apply the change to the master and attach that as a slave. If your change involves inserting a column in the middle or removing a column, this will not work with row-based replication.

There are tools around that can perform online schema changes more reliably. Percona’s online schema change tool (also known as pt-online-schema-change or pt-osc) will create a shadow table with the new table structure, insert new data via triggers and backfill data in the background. Once it is done creating the new table, it will simply swap the old for the new table inside a transaction. This doesn’t work in all cases, especially if your existing table already has triggers.

An alternative is the new gh-ost tool by GitHub. This online schema change tool will first make a copy of your existing table layout, alter the table to the new layout and then hook up the process as a MySQL replica. It will make use of the replication stream to find new rows that have been inserted into the original table, and at the same time it backfills the table. Once it is done backfilling, the original and new tables will switch. Naturally all operations on the new table will end up in the replication stream as well, thus on each replica the migration happens at the same time.

Memory tables and replication

While we are on the subject of DDLs, a common issue is the creation of memory tables. Memory tables are non-persistent tables: their table structure remains, but they lose their data after a restart of MySQL. When creating a new memory table on both a master and a slave, both will have an empty table and this will work perfectly fine. Once either one gets restarted, the table will be emptied and replication errors will occur.

Row-based replication will break once the data in the slave node returns different results, and statement-based replication will break once it attempts to insert data that already exists. For memory tables this is a frequent replication-breaker. The fix is easy: simply make a fresh copy of the data, change the engine to InnoDB and it should now be replication safe.
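
A minimal sketch of such a conversion, using a hypothetical sessions table:

CREATE TABLE sessions_new LIKE sessions;
ALTER TABLE sessions_new ENGINE=InnoDB;
INSERT INTO sessions_new SELECT * FROM sessions;
RENAME TABLE sessions TO sessions_old, sessions_new TO sessions;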

Setting the read_only variable to true

As we described earlier, not having the same data in the slave nodes can break replication. Often this has been caused by something (or someone) altering the data on the slave node, but not on the master node. Once the master node’s data gets altered, this will be replicated to the slave where it can’t apply the change and this causes the replication to break.

There is an easy prevention for this: setting the read_only variable to true. This disallows anyone from making changes to the data, except for the replication and root users. Most failover managers set this flag automatically to prevent users from writing to the old master during failover. Some of them even retain this setting after the failover.

This still leaves the root user to execute an errant CRUD query on the slave node. To prevent this from happening, there is a super_read_only variable since MySQL 5.7.8 that even locks out the root user from updating data.
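
Enabling both is a one-liner each (super_read_only requires MySQL 5.7.8 or later):

SET GLOBAL read_only = ON;
SET GLOBAL super_read_only = ON;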

Enabling GTID

In MySQL replication, it is essential to start the slave from the correct position in the binary logs. Obtaining this position can be done when making a backup (xtrabackup and mysqldump support this) or when you have stopped slaving on a node that you are making a copy of. Starting replication with the CHANGE MASTER TO command would look like this:

mysql> CHANGE MASTER TO MASTER_HOST='x.x.x.x', MASTER_USER='replication_user', MASTER_PASSWORD='password', MASTER_LOG_FILE='master-bin.0001', MASTER_LOG_POS=04;

Starting replication at the wrong spot can have disastrous consequences: data may be double written or not updated. This causes data drift between the master and the slave node.

Failing over a master to a slave also involves finding the correct position and changing the master to the appropriate host. MySQL doesn’t retain the binary logs and positions from its master, but rather creates its own binary logs and positions. For re-aligning a slave node to the new master this could become a serious problem: the exact position of the old master at the moment of failover has to be found on the new master, and only then can all slaves be re-aligned.

To solve this issue, the Global Transaction Identifier (GTID) has been implemented by both Oracle and MariaDB. GTIDs allow auto aligning of slaves, and in both MySQL and MariaDB the server figures out by itself what the correct position is. However both have implemented the GTID in a different way and are therefore incompatible. If you need to set up replication from one to another, the replication should be set up with traditional binary log positioning. Also your failover software should be made aware not to make use of GTIDs.
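
For completeness, this is roughly what GTID auto-positioning looks like on each flavor (host and credentials are placeholders, as in the earlier example):

-- Oracle MySQL GTID
CHANGE MASTER TO MASTER_HOST='x.x.x.x', MASTER_USER='replication_user', MASTER_PASSWORD='password', MASTER_AUTO_POSITION=1;
-- MariaDB GTID
CHANGE MASTER TO MASTER_HOST='x.x.x.x', MASTER_USER='replication_user', MASTER_PASSWORD='password', MASTER_USE_GTID=slave_pos;
START SLAVE;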

Conclusion

We hope to have given you enough tips to stay out of trouble. These are all common practices used by MySQL experts, who had to learn them the hard way; with these tips, we hope you won’t have to.

We have some additional white papers that might be useful if you’d like to read more about MySQL replication.

MySQL High Availability tools - Comparing MHA, MRM and ClusterControl


We previously compared two high availability solutions for MySQL - MHA and MariaDB Replication Manager and looked into how they performed fail-over. In this blog post, we’ll see how ClusterControl stacks up against these solutions. Since MariaDB Replication Manager is under active development, we decided to take a look at the not yet released version 1.1.

Flapping

All solutions provide flapping detection. MHA, by default, executes a failover once. Even after you restart masterha_manager, it will still check whether the last failover happened too recently. If it did (by default, within the last 8 hours), no new failover will happen. You need to explicitly change the timeout or set the --ignore_last_failover flag.

MariaDB Replication Manager has less strict defaults - it will allow up to three failovers as long as each of them happens more than 10 seconds after the previous one. In our opinion this is a bit too flexible - if the first failover didn’t solve the problem, it is unlikely that another attempt will give better results. Still, default settings are there to be changed, so you can configure MRM however you like.

ClusterControl uses a similar approach to MHA - only one failover is attempted. The next one can happen only after the master has been successfully detected as online (for example, ClusterControl recovery or manual intervention by the admin managed to promote one of the slaves to a master) or after a restart of the cmon process.

Lost transactions

MHA can work in two modes - GTID or non-GTID. Those modes differ in how missing transactions are handled. Traditional replication, actually, is handled in a better way - as long as the old master is reachable, MHA connects to it and attempts to recover missing transactions from its binary logs. If you use GTID mode, this does not happen, which may lead to more significant data loss if your slaves didn’t manage to receive all relay logs - another very good reason to use semi-synchronous replication, which has you covered in this scenario.

MRM does not connect to the old master to get the logs. By default, it elects the most advanced slave and promotes it to master. The remaining slaves are slaved off this new master, making them as up to date as the new master. There is potential for data loss, on par with MHA’s GTID mode.

ClusterControl behaves similarly to MRM - it picks the most advanced slave as a master candidate and then, as long as it is safe (for example, there are no errant transactions), promotes it to become the new master. The remaining slaves get slaved off this new master. If ClusterControl detects errant transactions, it will stop the failover and alert the administrator that manual intervention is needed. It is also possible to configure ClusterControl to skip the errant transaction check and force the failover.

Network partitioning

For MHA, this has been taken care of by adding a second MHA Manager node, preferably in another section of your network. You can query it using secondary_check_script. It can be used to connect to another MHA node and execute masterha_check_repl to see how the cluster can be seen from that node. This gives MHA a better view of the situation and topology, so it might not fail over if it is unnecessary.

MRM implements another approach. It can be configured to use slaves, external MaxScale proxy or scripts executed through HTTP protocol on a custom port (like the scripts which governs HAProxy behavior) to build a full view of the topology and then make an informed decision based on this.

ClusterControl, at this moment, does not perform any advanced checks regarding availability of the master - it uses only its own view of the system, therefore it can take an action if there are network issues between the master and the ClusterControl host. Having said that, we are aware this can be a serious limitation and there is work in progress to improve how ClusterControl detects a failed master - using slaves and proxies like MaxScale or ProxySQL to get a broader picture of the topology.

Roles

Within MHA you are able to apply roles to a specific host, so for instance ‘candidate_master’ and ‘no_master’ will help you determine which hosts are preferred to become master. A good example could be the data center topology: spread the candidate master nodes over multiple racks to ensure HA. Or perhaps you have a delayed slave that may never become the new master even if it is the last node remaining.

This last scenario is likely to happen with MariaDB Replication Manager as it can’t see the other nodes anymore and thus can’t determine that this node is actually, for instance, 24 hours behind. MariaDB does not support the Delayed Slave command but it is possible to use pt-slave-delay instead. There is a way to set the maximum slave delay allowed for MRM, however MRM reads the Seconds_Behind_Master from the slave status output. Since MRM is executed after the master is dead, this value will obviously be null.

At the beginning of the failover procedure, ClusterControl builds a list of slaves which can be promoted to master. Most of the time, it will contain all slaves in the topology but the user has some additional control over it. There are two variables you can set in the cmon configuration:

replication_failover_whitelist

and

replication_failover_blacklist

The whitelist contains a list of IP’s or hostnames of slaves which should be used as potential master candidates. If this variable is set, only those hosts will be considered. The second variable may contain a list of hosts which will never be considered as master candidate. You can use it to list slaves that are used for backups or analytical queries. If the hardware varies between slaves, you may want to put here the slaves which use slower hardware.

replication_failover_whitelist takes precedence, meaning that replication_failover_blacklist is ignored if replication_failover_whitelist is set.

Integration

MHA is a standalone tool, it doesn’t integrate well with other external software. It does however provide hooks (pre/post failover scripts) which can be used to do some integration - for instance, execute scripts to make changes in the configuration of an external tool. MHA also uses read_only value to differentiate between master and slaves - this can also be used by external tools to drive topology changes. One example would be ProxySQL - MHA can work with this proxy using both pre/post failover scripts and with read_only values, depending on the ProxySQL configuration. It’s worth mentioning that, in GTID mode, MHA doesn’t support MariaDB GTID - it only supports Oracle MySQL or Percona Server.

MRM integrates nicely with MaxScale - it can be used along MaxScale in a couple of ways. It could be set so MaxScale will do the work to monitor the health of the nodes and execute MRM as needed, to perform failovers. Another option is that MRM drives MaxScale - monitoring is done on MRM’s side and MaxScale’s configuration is updated as needed. MRM also sets read_only variables so it makes it compatible with other tools which understand those settings (like ProxySQL, for example). A direct integration with HAProxy is also available - MRM, if collocated, may modify the HAProxy configuration whenever the topology changes. On the cons side, MRM works only with MariaDB installations - it is not possible to use it with Oracle MySQL’s version of GTID.

ClusterControl uses read_only variables to differentiate between master and slave nodes. This is enough to integrate with every kind of proxy which can be deployed from ClusterControl: ProxySQL, MaxScale and HAProxy. A failover executed by ClusterControl will be detected and handled by any of those proxies. ClusterControl also integrates with external tools regarding management. It provides access to the management console for MaxScale and, to some extent, to HAProxy. Advanced support for ProxySQL will be added shortly. Metrics are provided for HAProxy and ProxySQL. ClusterControl supports both Oracle GTID and MariaDB GTID.

Conclusion

If you are interested in details how MHA, MRM or ClusterControl handle failover, we’d like to encourage you to take a look at the blog posts listed below:

Below is a summary of the differences between the different HA solutions:

Replication support
  • MHA: non-GTID, Oracle GTID
  • MRM: MariaDB GTID
  • ClusterControl: Oracle GTID and MariaDB GTID

Flapping
  • MHA: One failover allowed
  • MRM: Defaults are less restrictive but can be modified
  • ClusterControl: One failover allowed unless it brings the master online

Lost transactions
  • MHA: Very good handling for non-GTID, no checking for transactions on master for GTID setups
  • MRM: No checking for transactions on master
  • ClusterControl: No checking for transactions on master

Network partitioning
  • MHA: No support built in, can be added through user-created scripts
  • MRM: Very good false positive detection using slaves, proxy or external scripts
  • ClusterControl: No support at this moment, work in progress to build false positive detection using proxy and slaves

Roles
  • MHA: Support for whitelist and blacklist of hosts to promote to master
  • MRM: No support
  • ClusterControl: Support for whitelist and blacklist of hosts to promote to master

Integration
  • MHA: Can be integrated with external tools using hooks. Uses read_only variable to identify master and slaves, which helps to integrate with other tools that understand this pattern.
  • MRM: Close integration with MaxScale, integration with HAProxy is also available. Uses read_only variable to identify master and slaves, which helps to integrate with other tools that understand this pattern.
  • ClusterControl: Can be integrated with external tools using hooks. Uses read_only variable to identify master and slaves, which helps to integrate with other tools that understand this pattern.

If we are talking about handling master failure, each of the solutions does its job well and feature-wise they are mostly on par. There are some differences in almost every aspect that we compared but, in the end, each of them should handle most master failures pretty well. ClusterControl lacks more advanced network partitioning detection, but this will change soon. What is important to keep in mind is that those tools support different replication methods, and this alone can limit your options. If you use non-GTID replication, MHA is the only option for you. If you use GTID, MHA and MRM are restricted to, respectively, Oracle MySQL and MariaDB GTID setups. Only ClusterControl (you can test it for free) is flexible enough to handle both types of GTID under one tool - this could be very useful if you have a mixed environment while you still would like to use one single tool to ensure high availability of your replication setup.

Database TCO - Calculating the Total Cost of Ownership for MySQL Management


Cost analysis and cost effectiveness for databases are hardly ever performed, while it actually makes a lot of sense to perform such calculations. The easiest method of gaining insights into these is by performing a total cost of ownership (TCO) calculation. You might have theories on what your greatest cost factor is, but do you really know for sure?

Why would you perform such a TCO analysis? As with most research: to prove your theory about the highest cost factor wrong. A TCO calculation is a great tool to give a precise cost analysis, and it may give you some surprising insights!

Cost factors for databases

The cost factors for databases can be divided into two separate groups: capital expenses (CAPEX) and operational expenses (OPEX). Both cost factors are part of the infrastructure lifecycle.

TCO lifetime cycle

Capital expenses are the costs you pay upfront during the acquire phase: hardware purchases, (non-recurring) licensing cost and any other one time cost factors like replacement parts. These expenses are a constant factor in the TCO and are spread out over the lifetime of your database servers. Most of these costs will happen in the acquire phase, however the replacement parts will obviously take place in the maintenance phase.

Operational expenses are the costs for running the database servers. As these costs are recurring, you pay them on a regular (e.g., yearly) interval, mostly during the maintenance phase. These costs include datacenter/rack rental, power consumption, network usage and operational costs like (remote) hands and personnel. The last one includes sysops, DBAs and all costs made to facilitate them, like desks, office space and training. Since these expenses are recurring, they will continue to grow during the lifetime of your database servers. The longer you operate these servers, the higher the operational expenses (OPEX) will be.

This means that the longer you use your database servers, the more the balance between CAPEX and OPEX will shift towards a higher share of OPEX. The one-time purchase of hardware may be considered a high cost upfront, but given that you will probably use the hardware for more than three years, it justifies the upfront cost.

For cloud hosting, the calculation will be similar. However, since you don’t have hardware to purchase upfront, the CAPEX will be a lot lower. As cloud hosting has a recurring monthly cost, the OPEX will be higher. In some cases, your cloud provider may calculate some (setup) cost upfront and this should be treated as CAPEX.

Example calculation for hardware

In this example we will make a calculation of a small company (under 100 employees) that hosts on hardware in their own racks in a data center. This company has two dedicated sysops and one experienced DBA (1-4 years), where the DBA is managing around 20 databases and the sysops around 200 hosts. The average DBA salary for this is $65,000, so the annual cost per database would be $3,250. The sysops average around $50,000 for the same experience, and cost $250 per host per year. The sysops are also the people who manage the datacenter. We will not factor in the facilitation costs as this would get over complicated.

For our example cluster, we will make use of a three node MySQL replication setup: one master and two slave nodes. Hardware is based upon the Dell R730 with 64GB of memory and six 400GB SSDs, as this is a very popular model for this purpose. The price of a R730 with this configuration is currently $7655.

Rental cost of a full rack is nowadays around $350[1], so the colocation cost per U is roughly $8 per month. Since the R730 is a 2U unit, the total cost for our databases would be $48 per month.

Modern colocation costs factor out the power consumption, as the power is a variable factor. Prices for power with colocation can vary a lot, but it currently averages around $0.20 per kWh. The average database server consumes around 200 watts, which results in a 144kWh consumption per month per server. For our three database servers this would result in $86 per month.

This results in the following TCO:

Cost item                             CAPEX      OPEX (per year)   TCO (3 years)
Purchase: hardware                    $22,965    -                 -
Professional support (DBA / Sysop)    -          $10,500           -
Colocation cost                       -          $576              -
Power cost (200W)                     -          $1,032            -
Replacement parts                     $1,500     -                 -
Total                                 $24,465    $12,108           $60,789
TCO for database servers (hardware)

There are a couple of conclusions we can draw from this calculation. Costs for colocation, power and replacement parts are negligible compared to the other cost factors. Also, during the lifetime of a database server, the support costs make up more than half of the total costs, and are far higher than the original purchase price of the servers.

Example calculation cloud hosting

In this example we will make a calculation for a company that hosts in the cloud. To compare fairly, we will again make use of a three node MySQL replication setup on EC2. Amazon provides a nice TCO calculator for these purposes, so we made use of this as input for the calculations below.

To make the database servers comparable, we chose the i3.2xlarge, which (currently) has 8 vCPUs, 61GB of RAM and 1900GB of SSD storage. This currently costs $0.624 per hour, which is slightly below $15 per day and $5466 per year.

In the cloud the upfront investments (CAPEX) are not necessary. This is true in many cases, except if you make use of reserved instances like in AWS. With reserved instances, you make a claim on Amazon to reserve (performant) capacity for you, that you can use at will. In our calculation, we will not make use of reserved instances. Next to the lower CAPEX, our OPEX should be lower since our sysops don’t have to go to the data center or install these servers.

This results in the following TCO:

Cost item                             CAPEX      OPEX (per year)   TCO (3 years)
Professional support (DBA only)       -          $9,750            -
AWS 3x i3.2xlarge                     -          $16,399           -
Total                                 $0         $26,149           $78,447
TCO for databases (cloud hosted)

Even though we have eliminated our upfront costs and capital investments (CAPEX), the OPEX is really high due to the premium we have to pay for high performance instances in AWS. Over a three year period, your TCO will be higher than with owning your own hardware.

OPEX has a large influence

As you can see from these calculations, the influence of the operational costs (OPEX) during the lifetime of the servers is far greater than the initial large investment of the CAPEX. This is mostly due to running (and owning) these servers for multiple years.

In the case of owning your own hardware, we have shown that the operational costs even outweigh the initial costs for purchasing these servers. For the AWS example, the total costs of “owning” these servers is even higher than for the hardware example. This is the premium paid for flexibility, as with a cloud environment you are free to upgrade to a newer instance every year.

For both examples it is clear that the professional support for running these databases is relatively expensive. The sysops are clearly far more efficient, as they manage more than 200 hosts, but they don’t have to bother with the additional tasks that the DBA is supposed to do. If only you could make the DBA more efficient.

Making the DBA more efficient

Luckily, there are a few methods to make your DBA more efficient and able to handle more database servers: either relieve the DBA of various tasks (others will perform them) or have the DBA perform fewer tasks through automation. The low hanging fruit would be to automate the most time consuming tasks or the most error prone ones.

The most time-consuming DBA tasks are provisioning, deployments, performance tuning, troubleshooting, backups and scaling clusters. Provisioning and installation of software is a repetitive task that can easily be automated, just like copying data and setting up replication when scaling out with read slaves. Similarly, backups can be automated and, with the help of a backup manager, the restore process as well.

Among the most error-prone actions we could identify are setting up replication, failover and schema management. In these tasks a single typo could have disastrous consequences, where the only way to recover is to restore from an earlier backup.

Automation of repetitive tasks and chores is a tedious, but useful task. It takes time to automate each and every one of these tasks, and your DBA will most certainly be more busy with automation than with the other (daily) tasks. This automation may actually interfere with the normal day to day jobs, especially if the DBA isn’t a developer type and is struggling with the automation jobs. Wouldn’t it be more productive to offer the DBA a readily available toolset to work with instead? Or perhaps provide the sysops with a way that allows them to perform the tasks instead?

Do you even need a DBA?

Not all companies have full-time DBAs, at least not if you have a small number of databases. A DBA is a very specialized role, where a single person is dedicated to performing all database related tasks. This requires specialized knowledge of specific hardware, specific software, operating systems and in-depth knowledge of SQL. Placing such a specialized person on a small number of databases means the person will not have a lot to do and will only cost money.

A sysop (or system administrator) is more of a generalist, and has to do a bit of everything. They generally manage hardware, operating systems, network, security, applications, databases and storage. They may have specific knowledge of one or more of these systems, but they can’t be specialized in all of them. The knowledge gap becomes more apparent when it comes to distributed setups (replication or clusters) and the need for high availability.

As shown in the example calculations, the difference in salary reflects this picture as well. A DBA will cost more than a sysop and is more difficult to find, as there are not many of them around. That means that you probably have to do without a DBA. The challenge for the sysop is then to have enough time to keep the databases performing well, troubleshoot any issues, monitor for any anomalies, maintain high availability, ensure data integrity and make sure that data is backed up (and backups are verified to be ok).

ClusterControl saving costs

In ClusterControl the full database lifetime cycle has already been automated for the most popular open source databases. This means the DBA doesn’t necessarily have to automate his/her job anymore, as most of the tasks already have been automated in ClusterControl. Reliability will increase as the automation done in ClusterControl has been tested through and through, while what the DBA produced will only be tested directly in production.

Implementing a complete lifetime cycle management tool like ClusterControl means the DBA can now spend more time on useful things and manage more database servers. Or it could also mean people with less knowledge on databases can perform the same tasks.
