
ClusterControl 1.2.12 - The Full Monty Release for MySQL, MariaDB, MongoDB & PostgreSQL


The Severalnines team is pleased to announce the release of ClusterControl 1.2.12.

This release contains key new features, such as support for the latest versions of MySQL, MariaDB, MongoDB & PostgreSQL, operational reports and enhanced backup options, along with performance improvements and bug fixes.

Highlights

  • New for MySQL, MariaDB, MongoDB & PostgreSQL
    • Operational Reports
    • Local Mirrored Repositories
    • Enhanced Backup Options
  • New for MySQL
    • Support for Oracle MySQL 5.7
    • New Replication Features for Master & Slave
    • Manage Garbd and MaxScale configurations
  • New for MariaDB
    • Support for 10.1
    • SSL Encryption of Galera Replication Links
    • Manage Garbd and MaxScale configurations
  • New for MongoDB
    • Support for 3.2
  • New for PostgreSQL
    • Support for 9.5

For additional details about the release:

Operational Reports for MySQL, MariaDB, MongoDB & PostgreSQL

It is now possible to generate, schedule and email out operational reports on the status of all databases managed by ClusterControl. See the Change Log for more details.

Enhanced Backup Options

Typically, sysadmins and DBAs tend to back up their databases from the same host; but what happens if that host goes down? Or if the host changes role from slave to master? How does that impact database performance?

Enabling the new auto-select feature lets ClusterControl always choose the optimal host to back up from. In case a specific host is configured for the backup, users can now select a failover host to back up from. See the full range of new backup options in the Change Log.

New MySQL Replication Features for Master & Slave

Whether users are looking for the most advanced MySQL slave server to use for master promotion, or whether they need delayed replication slaves in their cluster, ClusterControl now comes with a series of new features that simplify managing MySQL Replication setups. To find out more about these new features, read the Change Log.

Manage Garbd and MaxScale configurations

Configuration Management now also supports Garbd and MaxScale configurations. This allows you to customize both components after their deployment, without having to edit the configuration files on the command line.

SSL Encryption of Galera Replication Links

This new feature is now available to MySQL / MariaDB Galera Cluster users, who are running clusters in a multi-datacenter environment or on less trusted networks. They can now enable/disable SSL encryption of Galera replication links at a click of a button in ClusterControl.

There are a bunch of other features and improvements that we have not mentioned here. You can find all the details in the Change Log.

We encourage you to test this latest release and provide us with your feedback. If you’d like a demo, feel free to request one.

With over 8,000 users to date, ClusterControl is the leading, platform independent automation and management solution for MySQL, MariaDB, MongoDB and PostgreSQL.

Thank you for your ongoing support, and happy clustering!

For additional tips & tricks, follow our blog: http://www.severalnines.com/blog/

Automate your Database with CCBot: ClusterControl Hubot integration


With our new ClusterControl 1.2.12 release we have added many new features like operational reports, enhanced backup options, SSL Encryption for Galera replication links and improved the support for external tools. One of these tools is CCBot, the ClusterControl chatbot.

CCBot is based on the popular Hubot framework originally created by GitHub. GitHub uses Hubot as its DevOps tool of choice, allowing it to do Continuous Integration and Continuous Delivery across its entire infrastructure. So what does Hubot allow you to do?

Hubot

Hubot is a chatbot framework modelled after GitHub’s internal bot of the same name. It allows you to quickly create your own bot, extend it with various pre-made scripts and integrate it with many popular chat and messaging services.

Hubot is meant as a tool to help your team automate and operate at scale, like for instance when you are part of a DevOps team. You can give it a command and it will execute that command for you. For instance you could do code deployments, kick off continuous integrations, upgrade schemas using schema control, schedule backups and even scale infrastructure.

Not only is Hubot capable of executing tasks, it can also monitor systems for you. For instance if you create a post-commit hook that interfaces with Hubot you can alert all team members that someone just committed code. Take that one step further and you could even monitor your database servers.

Automating your Database with CCBot

CCBot is the Severalnines integration of ClusterControl in the Hubot framework and therefore supports most of the major chat services like Slack, Flowdock, Hipchat, Campfire, any XMPP based chat service and also IRC. We have tested and verified CCBot to work with Slack, Flowdock, Hipchat and Campfire. CCBot follows the philosophy of Severalnines by implementing the four pillars of ClusterControl: Deploy, Manage, Monitor and Scale.

Monitor and manage

The first release of CCBot covers the Manage and Monitor parts of ClusterControl, meaning CCBot will be able to keep your team up to date on the status of your clusters, jobs and backups. At the same time you can also create impromptu backups, read the last log lines of the MySQL error logs, schedule and create the daily reports.

Installing CCBot

There are two ways to integrate CCBot with Hubot: either as a standalone chatbot that operates from your ClusterControl host, or, if you already have a Hubot-based chatbot in your company, by integrating it into your existing Hubot framework. The latter may need a few adjustments to your startup script.

Installing CCBot is only a few minutes of work. You can find our repository here:
https://github.com/severalnines/ccbot

Integrate CCBot on an existing Hubot framework

In principle this should be relatively easy: as you already have a working Hubot chatbot, copying the source files to your chatbot and adding the CCBot parameters should be sufficient to make it work.

Installing CCBot scripts

Copy the following files from our ccbot repository to your existing Hubot instance in the respective directories:

git clone https://github.com/severalnines/ccbot
cd ccbot
cp -R src/config <hubot rootdir>/
cp -R src/scripts <hubot rootdir>/
cp -R src/utils <hubot rootdir>/

Then add the following parameters in your Hubot startup script if necessary:

export HUBOT_CMONRPC_TOKENS='TOKEN0,TOKEN1,TOKEN2,TOKEN3'
export HUBOT_CMONRPC_HOST='<your clustercontrol host>'
export HUBOT_CMONRPC_PORT=9500
export HUBOT_CMONRPC_MSGROOM='General'

These variables will be picked up by the config.coffee file and used inside the cmonrpc calls.

The HUBOT_CMONRPC_TOKENS variable should contain the RPC tokens set under the rpc_key parameter in the /etc/cmon.cnf and /etc/cmon.d/cmon_<cluster>.cnf configuration files. These tokens secure the CMON RPC API and hence have to be filled in when used.
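The token values themselves are arbitrary strings. As an illustration (one convenient way, not a requirement), you could generate a hard-to-guess token like this:

```shell
# Generate a 32-character random hex string suitable for use as an rpc_key / RPC token
TOKEN0=$(openssl rand -hex 16)
echo "rpc_key=${TOKEN0}"
```

Whatever values you choose, they must match between the cmon configuration files and the HUBOT_CMONRPC_TOKENS variable.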

NOTE: As of 1.2.12, the ClusterControl web application does not support having an RPC token in the cmon.cnf file. If you want to run CCBot and access the web application at the same time, comment out the RPC token in the cmon.cnf file.

For configuration of the HUBOT_CMONRPC_MSGROOM variable, see below in the standalone installation.

Bind ClusterControl to an external IP address

As of ClusterControl version 1.2.12 there is a change in the binding address of the CMON RPC: by default it binds to localhost (127.0.0.1). If your existing Hubot chatbot lives on a different host, you need to configure CMON to bind to another IP address as well. You can change this in the cmon defaults file (/etc/default/cmon):

RPC_PORT=9500
RPC_BIND_ADDRESSES="127.0.0.1,<your ip address>"

Install CCBot as a standalone chatbot

Prerequisites

First, we need Node.js installed, along with its package manager npm. Installing npm through your distribution's package manager should pull in the necessary Node.js packages as well, and allows you to install additional modules via npm.

Installing Hubot framework

For security, we create a separate hubot user, so that Hubot itself can’t do anything beyond running Hubot, and we create the directory to run Hubot from.

sudo useradd -m hubot
sudo mkdir /var/lib/hubot
sudo chown hubot /var/lib/hubot

To install the Hubot framework from scratch, follow this procedure, where the adapter is the chat service you are using (e.g. slack, hipchat, flowdock):

sudo npm install -g yo generator-hubot
sudo su - hubot
cd /var/lib/hubot
yo hubot --name CCBot --adapter <adapter>

So if you are using, for instance, Slack as your chat provider you would need to provide “slack” as your adapter. A complete list of all the Hubot adapters can be found here:
https://hubot.github.com/docs/adapters/
Don’t forget to configure your adapter accordingly in the hubot startup script.
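For example, with the Slack adapter, the hubot-slack module reads its API token from an environment variable, so your startup script would also export something like the following (the token value here is a placeholder):

```shell
# hubot-slack picks up the Slack API token from this environment variable
export HUBOT_SLACK_TOKEN='xoxb-your-slack-token'
```

Other adapters use their own variables; check the documentation of the adapter you chose.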

Also, if you choose to change CCBot’s name, keep in mind not to name the bot Hubot: the Hubot framework attempts to create a module named exactly the same as the name you give the bot. Since the framework itself is already named Hubot, this will cause a non-descriptive error.

Installing CCBot scripts

Copy the following files from our ccbot repository into the Hubot directory:

cd ~/
git clone https://github.com/severalnines/ccbot
cd ccbot
cp -R src/config /var/lib/hubot/
cp -R src/scripts /var/lib/hubot/
cp -R src/utils /var/lib/hubot/

Installing Hubot startup scripts

Obviously you can run Hubot in the background or in a screen session, but it is much better to daemonize Hubot using proper startup scripts. We supply three startup scripts for CCBot: a traditional Linux Standard Base init script (start, stop, status), a systemd wrapper for this init script and a supervisord script.

Linux Standard Base init script:

For Redhat/Centos 6.x (and lower):

sudo cp scripts/hubot.initd /etc/init.d/hubot
sudo cp scripts/hubot.env /var/lib/hubot
sudo chkconfig hubot on

For Debian/Ubuntu:

sudo cp scripts/hubot.initd /etc/init.d/hubot
sudo cp scripts/hubot.env /var/lib/hubot
sudo ln -s /etc/init.d/hubot /etc/rc3.d/S70hubot

Systemd:

For systemd based systems:

sudo cp scripts/hubot.initd /sbin/hubot
sudo cp scripts/hubot.env /var/lib/hubot
sudo cp scripts/hubot.systemd.conf /etc/systemd/hubot.conf
sudo systemctl daemon-reload
sudo systemctl enable hubot

Supervisord

For this step it is necessary to have supervisord installed on your system.

For Redhat/Centos:

sudo yum install supervisor
sudo cp scripts/hubot.initd /sbin/hubot
sudo cp scripts/hubot.supervisord.conf /etc/supervisor/conf.d/hubot.conf
sudo supervisorctl update

For Debian/Ubuntu:

sudo apt-get install supervisor
sudo cp scripts/hubot.initd /sbin/hubot
sudo cp scripts/hubot.supervisord.conf /etc/supervisor/conf.d/hubot.conf
sudo supervisorctl update

Hubot parameters

Then modify the following parameters in the Hubot environment script (/var/lib/hubot/hubot.env) or supervisord config if necessary:

export HUBOT_CMONRPC_TOKENS='TOKEN0,TOKEN1,TOKEN2,TOKEN3'
export HUBOT_CMONRPC_HOST='localhost'
export HUBOT_CMONRPC_PORT=9500
export HUBOT_CMONRPC_MSGROOM='General'

The HUBOT_CMONRPC_TOKENS variable should contain the RPC tokens set in the /etc/cmon.cnf and /etc/cmon.d/cmon_<cluster>.cnf configuration files. These tokens secure the CMON RPC API and hence have to be filled in when used. If you have no tokens in your configuration, you can leave this variable empty.

The HUBOT_CMONRPC_MSGROOM variable contains the team’s room the chatbot has to send its messages to. For the chat services we tested this with it should be something like this:

  • Slack: use the textual ‘General’ chatroom or a custom textual one.
  • Hipchat: similar to “17723_yourchat@conf.hipchat.com”. You can find your own room via “Room Settings”.
  • Flowdock: needs a room identifier similar to “a0ef5f5f-9d97-42aa-b6a3-c1a6bb87510e”. You can find your own identifier via Integrations -> Github -> popup url.
  • Campfire: a numeric room id, which is in the url of the room.

Hubot commands

You can operate Hubot by giving it commands in the chatroom. In principle it does not matter whether you issue the command in a general chatroom where Hubot is present or in a private chat with the bot itself. Sending a command works as follows:

botname command

Where botname is the name of your Hubot bot. So if, in our example, the bot is called “ccbot” and the command is “status”, you would send the command as follows:

@ccbot status

Note: when you are in a private chat with the chatbot, you must omit addressing the bot.

Command list

Status

Syntax:

status

Lists the clusters in ClusterControl and shows their status.

Example:

@ccbot status

Full backup

Syntax:

backup cluster <clusterid> host <hostname>

Schedules a full backup of an entire cluster using xtrabackup. Host is an optional parameter; if it is not provided, CCBot will pick the first host in the cluster.

Example:

@ccbot backup cluster 1 host 10.10.12.23

Schema backup

Syntax:

backup cluster <clusterid> schema <schema> host <hostname>

Schedules a backup of a single schema using mysqldump. Host is an optional parameter; if it is not provided, CCBot will pick the first host in the cluster.

Example:

@ccbot backup cluster 1 schema important_schema

Create operational report

Syntax:

createreport cluster <clusterid>

Creates an operational report for the given cluster.

Example:

@ccbot createreport cluster 1

List operational reports

Syntax:

listreports cluster <clusterid>

Lists all available reports for the given cluster.

Example:

@ccbot listreports cluster 1

Last loglines

Syntax:

lastlog cluster <cluster> host <host> filename <filename> limit <limit>

Returns the last log lines of the given cluster/host/filename.

Example:

@ccbot lastlog cluster 1 host 10.10.12.23 filename /var/log/mysqld.log limit 5

CCBot roadmap

As CCBot is meant to complement the ClusterControl UI, we will review what makes sense to put into a chatbot and what does not. So far we have identified that adding schemas, adding users, scaling clusters and running (custom) advisors make the most sense, and we will continue to extend CCBot with that functionality in the upcoming months. Obviously, if you have the urge and need to automate additional ClusterControl functions, we are all ears.

Planets9s: Download the new ClusterControl for MySQL, MongoDB & PostgreSQL


Welcome to today’s installment of Planets9s, our weekly communication on all the latest resources and technologies we create around automation and management of open source databases. I trust that these resources will be useful to you and would love to get your feedback on them.

Download ClusterControl - The Full Monty Release for MySQL, MongoDB & PostgreSQL

This week we’re pleased to announce the release of ClusterControl 1.2.12. This release contains key new features, such as support for the latest versions of MySQL, MongoDB & PostgreSQL, operational reports and enhanced backup options, along with performance improvements and bug fixes. Also worth highlighting are the new replication features for master & slave, as well as SSL encryption of Galera replication links. Read the release blog for more details.

Download ClusterControl today.

Automate your Database with CCBot: ClusterControl Hubot integration

With our new ClusterControl 1.2.12 release we have added many new features as outlined above and improved the support for external tools. One of these tools is CCBot, the ClusterControl chatbot. CCBot is based on the popular Hubot framework originally created by GitHub. GitHub uses Hubot as its DevOps tool of choice, allowing it to do Continuous Integration and Continuous Delivery across its entire infrastructure.

Find out more in our CCBot blog.

Sign up for our webinar on building scalable database infrastructures with MariaDB & HAProxy

There’s still time to sign up for our webinar on Tuesday next week, February 23rd. Our friends from WooServers will be giving an overview of their project at cloudstats.me and discussing their past infrastructure challenges of scaling MySQL “write” performance with minimal cost, performance overhead and database management burden. The session will cover how they came to choose a MariaDB with MaxScale and HAProxy clustering solution, and how they leveraged ClusterControl.

Sign up here.

Do share these resources with your colleagues and follow us in our social media channels.

Have a good end of the week,

Jean-Jérôme Schmidt
Planets9s Editor
Severalnines AB


Planets9s: Building scalable database infrastructures with MariaDB & HAProxy


Welcome to this week’s Planets9s, covering all the latest resources and technologies we create around automation and management of open source databases.

Watch the replay: How to build scalable database infrastructures with MariaDB & HAProxy

You can now sign up to watch the replay of this week’s webinar with our partner WooServers - How CloudStats.me moved from MySQL to clustered MariaDB for High Availability. This webinar covered how CloudStats.me evaluated solutions from NDB Cluster to MySQL Replication with application sharding in order to scale MySQL write performance.  

Sign up to watch the replay.

MySQL Replication failover: Maxscale vs MHA (Parts 1 to 3)

At present, the most commonly used products for automated failover are MySQL Master HA (aka MHA) and Percona Replication Manager, but newer options like Orchestrator and MaxScale + MariaDB Replication Manager have also become available lately. In this three-part blog series, we first focus on MySQL Master HA (MHA), in the second part we cover MaxScale + MariaDB Replication Manager, and the final part compares the two with each other.

Polyglot Persistence for the MongoDB, PostgreSQL & MySQL DBA

The introduction of DevOps in organisations has changed the development process, and perhaps introduced some challenges. Developers, in addition to their own preferred programming languages, also have their own preferences for backend storage. The former is often referred to as polyglot programming and the latter as polyglot persistence. This webinar covered the four major operational challenges for MySQL, MongoDB & PostgreSQL: deployment, management, monitoring and scaling, and how to deal with them.

Sign up to watch the replay!

Do share these resources with your colleagues and follow us in our social media channels.

Have a good end of the week,

Jean-Jérôme Schmidt
Planets9s Editor
Severalnines AB

Webinar Replay & Slides: How to build scalable database infrastructures with MariaDB & HAProxy


Thanks to everyone who participated in last week’s live webinar on how CloudStats.me moved from MySQL to clustered MariaDB for high availability with Severalnines ClusterControl. The webinar included use case discussions on cloudstats.me’s database infrastructure bundled with a live demonstration of ClusterControl to illustrate the key elements that were discussed.

We had a lot of questions from the audience, and you can read through the transcript of these further below in this blog.

If you missed the session and/or would like to watch the replay in your own time, it is now available online for sign up and viewing.

Replay Details

Get access to the replay

Agenda

  • CloudStats.me infrastructure overview
  • Database challenges
  • Limitations in cloud-based infrastructure
  • Scaling MySQL - many options
    • MySQL Cluster, Master-Slave Replication, Sharding, ...
  • Availability and failover
  • Application sharding vs auto-sharding
  • Migration to MariaDB / Galera Cluster with ClusterControl & NoSQL
  • Load Balancing with HAProxy & MaxScale
  • Infrastructure set up provided to CloudStats.me
    • Private Network, Cluster Nodes, H/W SSD Raid + BBU
  • What we learnt - “Know your data!”

Speakers

Andrey Vasilyev, CTO of Aqua Networks Limited - a London-based company which owns brands, such as WooServers.com, CloudStats.me and CloudLayar.com, and Art van Scheppingen, Senior Support Engineer at Severalnines, discussed the challenges encountered by CloudStats.me in achieving database high availability and performance, as well as the solutions that were implemented to overcome these challenges.

If you have any questions or would like a personalised live demo, please do contact us.

Follow our technical blogs: http://severalnines.com/blog


Questions & Answers - Transcript

Maybe my question is not directly related to the topic of the webinar... But will your company (I mean Severalnines) in the future also consider the possibility to install and setup Pivotal's Greenplum database?
Currently, there are no plans for that, as we have not received requests to support Greenplum yet. But it’s something we’ll keep in mind!

What about Spider and ClusterControl? Is this combination available / being used?
Spider can be used independently of ClusterControl, since ClusterControl can be used to manage the individual MySQL instances. We are not aware of any ClusterControl users who are using Spider.

Is MySQL Cluster NDB much faster than a Galera Cluster?
MySQL NDB Cluster and Galera Cluster are two different types of clustering. The main difference is that in Galera Cluster all nodes are equal and contain the same dataset, while in NDB Cluster the data nodes contain sharded/mirrored data sets. NDB Cluster can handle larger write workloads, but if you need multiple equal MySQL master nodes, Galera is a better choice. Galera also replicates data faster than traditional MySQL replication due to its ability to apply transactions in parallel.

Does CloudStats also support database backups on the end user level?
CloudStats can back up your files to S3, Azure, local storage etc., but for database backups it’s best to use ClusterControl; CloudStats handles the rest of your files.

Is it possible to restore the structure and the whole setup of a previous ClusterControl infrastructure from the backups?
Yes, that would be possible, if you make backups of your existing ClusterControl database and configuration files.

I'm using MaxScale with Galera. The Read/Write Split module drops the connection on very intensive operations involving reading and then writing up to 80,000 rows, but it works fine with the readconnroute module (which doesn't split). Is there any way I can scale writes with just Galera?
You could create two readconnroute interfaces in MaxScale and use one for writes only. You can do this by adding router_options=master to the configuration; with a Galera cluster this will write to only one single node in the cluster.
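As a hedged sketch of that approach in MaxScale's configuration format (section names, server list, credentials and port below are placeholders, not taken from an actual setup):

```ini
# A readconnroute service restricted to the "master" node, i.e. a single
# write node in the Galera cluster
[Galera Write Service]
type=service
router=readconnroute
router_options=master
servers=galera1,galera2,galera3
user=maxscale_user
passwd=maxscale_password

# Listener exposing the write service on its own port
[Galera Write Listener]
type=listener
service=Galera Write Service
protocol=MySQLClient
port=4007
```

Point your application's write connections at this listener's port, while reads continue to use your existing read service.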

Cluster is fast as the slowest node? like NODE1-SSD, NODE2-SSD, NODE3-SATA...
Yes, within Galera Cluster your slowest node will determine the speed of the whole cluster.

Galera Cluster is InnoDB only, correct? If so, is it recommended not to use MyISAM?
In principle Galera is InnoDB only; however, there is limited support for MyISAM if you encapsulate your queries in a transaction. Since MyISAM is not a transactional storage engine, there is no guarantee the data will be kept identical on all nodes, and this could cause data drift. Using MyISAM with Galera is therefore not advised.

Virtualized nodes should then be on SSD host storage. Not network storage because IOPS will be low. Correct?
Yes, that's correct, it's best to store it on local SSD.

mysqldump is slow, right?
mysqldump dumps the entire contents of your database as a logical backup and is therefore slower than Xtrabackup.

HAProxy instances are installed on 2x cluster control servers?
HAProxy instances are usually installed on dedicated hosts, not on the CC node.

What about MySQL proxy, and use cases with that tool? And would it be better to just split query R/W at application level?
MySQL Proxy can be used, but the tool is not maintained anymore, and we found HAProxy and MaxScale to be better options.

HAProxy can run custom scripts and these statuses can also be manually created right?
Exactly, you can do that. ClusterControl just has a few preset checks by default, but you can change them if you like.
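For illustration, a hedged sketch of the kind of preset check this refers to (addresses, names and ports below are placeholders): a common pattern for Galera behind HAProxy is an HTTP health check against a small script, typically served on port 9200, that reports whether the node is a healthy, synced cluster member:

```
listen galera_cluster
    bind *:3307
    mode tcp
    balance leastconn
    option httpchk
    server db1 10.0.0.11:3306 check port 9200
    server db2 10.0.0.12:3306 check port 9200
    server db3 10.0.0.13:3306 check port 9200
```

Replacing the script behind the port 9200 check, or the check itself, is how you would plug in custom statuses.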

In your experience, how does EC2 perform with a MariaDB based cluster?
According to our benchmarks, EC2 m3.xlarge instances showed read performance of 16,914 IOPS and write performance of 31,092 IOPS, about two times higher than a similarly sized Microsoft Azure DS3 instance (16,329 IOPS for reads and 15,900 IOPS for writes). So yes, according to our tests AWS may perform better than Azure for write performance, but it will depend on your application size and requirements. Local SSD storage on a server is recommended for higher IOPS performance.

[Benchmark chart: blue = reads, red = writes]

Does WooServers offer PCI DSS compliant servers?
Yes, WooServers offers PCI DSS compliant servers and is able to manage your current infrastructure, whether on Azure, on premises or AWS.

AAA pluggable / scriptable? A customer came up with Radius recently…
Unfortunately, authentication/authorization is limited to either ClusterControl’s internal AAA or LDAP only.

Also: GUI functions accessible via JSON/HTTP API ?
Yes, our most important GUI functions are available through our RPC API, so you would be able to automate deployments, backups and scaling easily.

Become a ClusterControl DBA: User Management


In the previous posts of this blog series, we covered deployment of clustering/replication (MySQL / Galera, MySQL Replication, MongoDB & PostgreSQL), management & monitoring of your existing databases and clusters, performance monitoring and health, how to make your setup highly available through HAProxy and MaxScale, how to prepare yourself against disasters by scheduling backups, how to manage your database configurations and in the last post how to manage your log files.

One of the most important aspects of becoming a ClusterControl DBA is being able to delegate tasks to team members and control access to ClusterControl functionality. This can be achieved by utilizing the User Management functionality, which allows you to control who can do what. You can even go a step further by adding teams or organizations to ClusterControl and mapping them to your DevOps roles.

Organizations

Organizations can be seen either as full organizations or as groups of users. Clusters can be assigned to organizations, and this way a cluster is only visible to the users in the organization it has been assigned to. This allows you to run multiple organizations within one ClusterControl environment. Obviously, the ClusterControl admin account will still be able to see and manage all clusters.

You can create a new Organization via Settings > User Management and clicking on the plus sign on the left side under Organizations:

After adding a new Organization, you can assign users to the organization.

Users

After selecting the newly created organization, you can add new users to it by pressing the plus sign in the dialogue on the right:

By selecting the role, you can limit the functionality of the user to Super Admin, Admin or User. You can extend these default roles in the Access Control section.

Access Control

Standard Roles

Within ClusterControl the default roles are: Super Admin, Admin and User. The Super Admin is the only account that can administrate organizations, users and roles. The Super Admin is also able to migrate clusters across organizations. The admin role belongs to a specific organization and is able to see all clusters in this organization. The user role is only able to see the clusters he/she created.

User Roles

You can add new roles in the role-based access control screen. For each role you can define, per piece of functionality, whether it is allowed (read-only), denied (no access), manage (allow changes) or modify (extended manage).

If we create a role with limited access:

As you can see, we can create a user with limited access rights (mostly read-only) and ensure this user does not break anything. This also means we could add non-technical roles like Manager here.

Notice that the Super Admin role is not listed here as it is a default role with the highest level of privileges within ClusterControl and thus can’t be changed.

LDAP Access

ClusterControl supports Active Directory, FreeIPA and LDAP authentication. This allows you to integrate ClusterControl within your organization without having to recreate the users. In earlier blog posts we described how to set up ClusterControl to authenticate against OpenLDAP, FreeIPA and Active Directory.

Once this has been set up, authentication against ClusterControl will follow the chart below:

Basically, the most important part here is to map the LDAP group to the ClusterControl role. This can be done fairly easily on the LDAP Access page under User Management.

The dialog above would map the DevopsTeam to the Limited User role in ClusterControl. Then repeat this for any other group you wish to map.

After this any user authenticating against ClusterControl will be authenticated and authorized via the LDAP integration.

Final thoughts

Combining all the above allows you to integrate ClusterControl better into your existing organization, create specific roles with limited or full access and connect users to these roles. The beauty of this is that you are now much more flexible in how you organize around your database infrastructure: who is allowed to do what? You could for instance offload the task of backup checking to a site reliability engineer instead of having the DBA check them daily. Allow your developers to check the MySQL, Postgres and MongoDB log files to correlate them with their monitoring. You could also allow a senior developer to scale the database by adding more nodes/shards or have a seasoned DevOps engineer write advisors.

As you can see, the possibilities here are endless; it is only a question of how to unlock them. In the Developer Studio blog series we dive deeper into automation with ClusterControl, and for DevOps integration we recently released CCBot.

Planets9s: Sign up for our best practices webinar on how to upgrade to MySQL 5.7


Welcome to this week’s Planets9s, covering all the latest resources and technologies we create around automation and management of open source databases.

Sign up for our best practices webinar on how to upgrade to MySQL 5.7

Join us on Tuesday, March 15th for this new webinar on best practices for upgrading to MySQL 5.7 led by Krzysztof Książek, Senior Support Engineer at Severalnines.

MySQL 5.7 has been around for a while now, and if you haven’t done so yet, it’s probably about time to start thinking about upgrading. There are a few things to keep in mind when planning an upgrade, such as important changes between versions 5.6 and 5.7 as well as detailed testing that needs to precede any upgrade process. Amongst other things, we’ll look at how to best research, prepare and perform such tests before the time comes to finally start the upgrade.

Sign up today

Newly Updated “MySQL Load Balancing with HAProxy” Tutorial

We are glad to announce that one of our most popular tutorials, MySQL Load Balancing with HAProxy, has just been updated and is available online. In this tutorial, we cover how HAProxy works, and how to deploy, configure and manage HAProxy in conjunction with MySQL using ClusterControl.

Read the tutorial

Watch the replay: How to build scalable database infrastructures with MariaDB & HAProxy

Thanks to everyone who participated in last week’s live webinar on how CloudStats.me moved from MySQL to clustered MariaDB for high availability with Severalnines ClusterControl. The webinar included use case discussions on CloudStats.me’s database infrastructure bundled with a live demonstration of ClusterControl to illustrate the key elements that were discussed.

Watch the replay

Do share these resources with your colleagues and follow us in our social media channels.

Have a good end of the week,

Jean-Jérôme Schmidt

Planets9s Editor
Severalnines AB

Press Release: Severalnines serves up Turkish delight for iyzico’s payment platform


iyzico uses ClusterControl to increase MySQL database uptime

Stockholm, Sweden and anywhere else in the world - 09 March 2016 - Severalnines, the provider of database automation and management software, today announced its latest customer, iyzico, a Turkish Payment Service Provider (PSP) that offers ecommerce merchants and marketplaces like sahibinden.com, Modanisa and Babil an efficient way to accept online payments in Turkey. It also provides other services such as analytics, fraud protection and settlement.

iyzico helps over 27,000 registered merchants navigate the difficult merchant registration process for vPOS in Turkey, a process that results in rejection rates as high as 80% for some businesses. iyzico makes it easier for merchants to start selling in Turkey via a single integration of the iyzico module, and becomes the primary contact for online payment procedures.

Offering online payments services requires iyzico to be online around the clock. iyzico needed to provide high availability and a seamless service to its merchants in order to stay competitive. After being recommended by the IT team, Severalnines’ ClusterControl product was used by iyzico to help keep their MySQL databases highly available. They needed a database management tool to communicate between the primary data centre in Istanbul and the fail-safe in Ankara; this required an active/active database cluster that could assist in failovers.

Severalnines was chosen as it could offer database replication at scale and the diagnostics required to manage iyzico’s databases. Including the trial period, it took only three weeks for iyzico to go live on ClusterControl, thanks to easy integration and Severalnines’ support in coding fail-safes between nodes and databases.

The collaboration between Severalnines and iyzico created a secure database management system offering high availability, even when a data centre was affected by a power cut. iyzico intends to move to the enterprise ClusterControl solution so it can manage encrypted data and work on developing the capabilities of a new data centre.

Tahsin Isin, co-founder and CTO of iyzico stated: “Severalnines is the perfect solution to help us combat the problem of using erratic data centres; last year we experienced several outages. Severalnines has helped us optimise the process of database replication and supporting active/active database clusters, so we can continue offering our services to our clients even when our main data centre is down.”

Vinay Joosery, Severalnines CEO, stated: “We are delighted to have such a fast-growing FinTech company working with us. We are fully committed to helping iyzico solve problems like data centre outages and continue to stay online with maximum uptime. We have enjoyed working with the iyzico team, helping them to continue innovating in a very challenging Turkish Financial Services environment.”

About Severalnines

Severalnines provides automation and management software for database clusters. We help companies deploy their databases in any environment, and manage all operational aspects to achieve high-scale availability.

Severalnines' products are used by developers and administrators of all skill levels to provide the full 'deploy, manage, monitor, scale' database cycle, thus freeing them from the complexity and learning curves that are typically associated with highly available database clusters. The company has enabled over 7,000 deployments to date via its popular online database configurator. Its customers currently include BT, Orange, Cisco, CNRS, Technicolour, AVG, Ping Identity and Paytrail. Severalnines is a private company headquartered in Stockholm, Sweden with offices in Singapore and Tokyo, Japan. To see who is using Severalnines today, visit http://www.severalnines.com/company

About iyzico

iyzico is a payment service provider (PSP) for online businesses and enterprises, particularly e-commerce platforms. iyzico’s payment system provides fast onboarding and easy integration in less than 24 hours, and is PCI-DSS certified to ensure maximum security. It offers online businesses and enterprises the ability to collect payments in their local currency through installments. Founded in 2013, iyzico has over 27,000 registered merchant accounts and is one of the fastest-growing financial technology companies in the region. http://www.iyzico.com


Planets9s - Watch the replay: How To Set Up SQL Load Balancing with HAProxy


Welcome to this week’s Planets9s, covering all the latest resources and technologies we create around automation and management of open source databases.

Watch the replay: How To Set Up SQL Load Balancing with HAProxy

This webinar covers the concepts around the popular open-source HAProxy load balancer, and shows you how to use it with your SQL-based database clusters. High availability strategies for HAProxy with Keepalived and Virtual IP are also discussed. This is a great webinar to watch as a complement to our popular load balancing tutorial.

Watch the replay

Sign up for our best practices webinar on how to upgrade to MySQL 5.7

Join us next Tuesday for this live webinar on best practices for upgrading to MySQL 5.7!

There are a few things to keep in mind when planning an upgrade like this, such as important changes between versions 5.6 and 5.7 as well as the detailed testing that needs to precede any upgrade process. Amongst other things, we’ll look at how to best research, prepare and perform such tests before the time comes to finally start the upgrade.

Sign up today

Success story: iyzico uses ClusterControl to increase MySQL database uptime

Discover why ClusterControl was chosen by iyzico, a PCI DSS Level-1 certified Payment Service Provider in Turkey, to manage its high availability databases across multiple datacenters. It took only three weeks for iyzico to go live on ClusterControl.

Read the customer success story

Do share these resources with your colleagues and follow us in our social media channels.

Have a good end of the week,

Jean-Jérôme Schmidt
Planets9s Editor
Severalnines AB

ClusterControl Developer Studio: automatically scale your clusters


In the previous blog posts, we gave a brief introduction to ClusterControl Developer Studio and the ClusterControl Domain Specific Language and how to extract information from the Performance Schema. ClusterControl’s Developer Studio allows you to write your own scripts, advisors and alerts. With just a few lines of code, you can already automate your clusters!

In this blog post we will dive deeper into Developer Studio and show you how you can keep an eye on performance and at the same time scale out the number of read slaves in your replication topology whenever it is necessary.

CMON RPC

The key element in our advisor will be talking to the CMON RPC: ClusterControl’s API that enables you to automate tasks. Many of ClusterControl’s own components make use of this API, and a great deal of functionality is accessible through it.

To be able to talk to the CMON RPC, we need to install/import the cmonrpc.js helper file from the Severalnines Github Developer Studio repository into your own Developer Studio. We described this process briefly in our introductory blog post. Alternatively, you can create a new file named common/cmonrpc.js and paste the contents in there.

This helper file has only one usable function that interacts with the CMON RPC at the moment: addNode. All the other functions in this helper support this process; for instance, the setCmonrpcToken function adds the RPC token to the JSON body if RPC tokens are in use.

The cmonrpc helper expects the following variables to be present:

var CMONRPC_HOST = 'localhost';
var CMONRPC_PORT = '9500';
var CMONRPC_TOKEN = ["token0", "token1", "token2"];
var FREE_HOSTS = ["10.10.10.12", "10.10.10.13", "10.10.10.14"];

The FREE_HOSTS variable contains the IP addresses of the hosts we want to use as read slaves. The findUnusedHost function compares this list against the hosts already present in the cluster and returns an unused host, or false if no unused host is available.
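As a plain-Python illustration (a hypothetical sketch, not the actual cmonrpc.js code), the findUnusedHost logic boils down to a simple set difference between the candidate pool and the cluster:

```python
# Sketch of the findUnusedHost idea: return the first candidate from
# FREE_HOSTS that is not yet part of the cluster, or False if the
# pool is exhausted.
FREE_HOSTS = ["10.10.10.12", "10.10.10.13", "10.10.10.14"]

def find_unused_host(cluster_hosts):
    for candidate in FREE_HOSTS:
        if candidate not in cluster_hosts:
            return candidate
    return False
```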

The CMONRPC_TOKEN variable contains the RPC tokens when used. The first token will be the token found in the main cmon.cnf. If you are not using RPC tokens in your configuration, you can leave them empty.
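For illustration, a helper along the lines of setCmonrpcToken could be sketched in Python as follows; the "token" body key and the per-cluster indexing are assumptions for the sake of the example, not taken from the actual cmonrpc.js source:

```python
# Illustrative sketch only: inject the configured RPC token into the
# JSON request body before sending it to the CMON RPC. The "token"
# key name is an assumption, not the real cmonrpc.js implementation.
CMONRPC_TOKEN = ["token0", "token1", "token2"]

def set_cmonrpc_token(body, cluster_id=0):
    if cluster_id < len(CMONRPC_TOKEN) and CMONRPC_TOKEN[cluster_id]:
        body["token"] = CMONRPC_TOKEN[cluster_id]
    return body
```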

NOTE: As of 1.2.12, the ClusterControl web application does not support having an RPC token in the cmon.cnf file. If you want to run both this advisor and the web application at the same time, comment out the RPC token in the cmon.cnf file and leave the CMONRPC_TOKEN variable empty.

Auto Scale

Our auto-scaling advisor is a very basic one: we simply look at the number of connections on our master and slaves. If we find the number of connections on the slaves excessive, we need to scale out our reads, which we can do by adding fresh servers.

We look at longer-term connection counts to prevent our advisor from scaling unnecessarily. Therefore we use the SQL statistics functionality of Developer Studio and determine the standard deviation of the connection count for each node in the cluster. You could customize this to the nth percentile, average or maximum number of connections if you like, but that last option could cause unnecessary scaling.

var endTime   = CmonDateTime::currentDateTime();
var startTime = endTime - 3600;
var stats     = host.sqlStats(startTime, endTime);
var config      = host.config();
var max_connections    = config.variable("max_connections")[0]['value'];

We retrieve the SQL statistics using the host.sqlStats function, passing it a start and end time, and we retrieve the configured maximum number of connections as well. The sqlStats function returns an array of maps containing all statistics collected during the selected period. Since the statistical functions of Developer Studio expect arrays containing only values, the array of maps isn’t usable in this form, so we need to create a new array and copy over the values for the number of connections.

var connections = [];
for(stx = 0; stx < stats.size(); ++stx) {
    connections[stx] = stats[stx]['connections'];
}

Then we can calculate the connections used during our selected period of time and express that as a percentage:

stdev_connections_pct = (stdev(connections) / max_connections) * 100;
if(stdev_connections_pct > WARNING_THRESHOLD) {
    THRESHOLD_MET = true;
}
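The same check can be sketched in plain Python (an illustration only, using the standard library's statistics module rather than Developer Studio's built-in stdev):

```python
import statistics

WARNING_THRESHOLD = 85  # percent

def needs_scale_out(connection_samples, max_connections):
    # Standard deviation of the sampled connection counts,
    # expressed as a percentage of max_connections.
    stdev_pct = (statistics.stdev(connection_samples) / max_connections) * 100
    return stdev_pct > WARNING_THRESHOLD
```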

Once our threshold is met, we add a new node to our cluster, and this is when we call the cmonrpc helper functions. However, we only want to do this once during our run, hence we set the THRESHOLD_MET variable. At the very end, we also add an extra line of advice to show that we are scaling out our cluster.

if (THRESHOLD_MET == true)
{
    /* find unused node */
    node = findUnusedHost();
    addNode(node);

    advice = new CmonAdvice();
    advice.setTitle(TITLE);
    advice.setAdvice("Scaling out cluster with new node:"+ node);
    advice.setJustification("Scaling slave nodes is necessary");
    advisorMap[idx+1]= advice;
}

Conclusion

There are still a few shortcomings with this advisor: it should not run more frequently than the period used for the SQL statistics selection. In our example we select 1 hour of statistics, so do not run the advisor more often than once per hour.

The advisor will also put extra stress on the master by copying its dataset to the new slave, so you should keep an eye on the master node in your master-slave topology as well. Advisors are currently limited to a runtime of 30 seconds, so a slow response to the curl calls could exceed that limit if you use the cmonrpc library for other purposes.

On the good side, this advisor shows how easily you can use advisors beyond what they were designed for and use them to trigger actions. Examples of such actions could be scheduling backups or setting hints in your configuration management tool (Zookeeper/Consul). The possibilities with Developer Studio are limited almost only by your imagination!

The complete advisor:

#include "common/mysql_helper.js"
#include "common/cmonrpc.js"

var CMONRPC_HOST = 'localhost';
var CMONRPC_PORT = '9500';
var CMONRPC_TOKEN = ["test12345", "someothertoken"];
var FREE_HOSTS = ["10.10.19.12", "10.10.19.13", "10.10.19.14"];

/**
 * Checks the percentage of used connections and scales accordingly
 * 
 */ 
var WARNING_THRESHOLD=85;
var TITLE="Auto scaling read slaves";
var THRESHOLD_MET = false;
var msg = '';

function main()
{
    var hosts     = cluster::mySqlNodes();
    var advisorMap = {};

    for (idx = 0; idx < hosts.size(); ++idx)
    {
        host        = hosts[idx];
        map         = host.toMap();
        connected     = map["connected"];
        var advice = new CmonAdvice();
        var endTime   = CmonDateTime::currentDateTime();
        var startTime = endTime - 10 * 60;
        var stats     = host.sqlStats(startTime, endTime);
        var config      = host.config();
        var max_connections    = config.variable("max_connections")[0]['value'];
        var connections = [];

        if(!connected)
            continue;
        if(checkPrecond(host) && host.role() != 'master')
        {
            /* Fetch the stats on connections over our selection period */
            for(stx = 0; stx < stats.size(); ++stx)
                connections[stx] = stats[stx]['connections'];
            stdev_connections_pct = (stdev(connections) / max_connections) * 100;
            if(stdev_connections_pct > WARNING_THRESHOLD)
            {
                THRESHOLD_MET = true;
                msg = "Slave node";
                advice.setJustification("Percentage of connections used (" + stdev_connections_pct + ") above " + WARNING_THRESHOLD + " so we need to scale out slaves.");
                advice.setSeverity(Warning); 
            }
            else
            {
                msg = "Slave node";
                advice.setJustification("Connections used ok.");
                advice.setSeverity(Ok);
            }
        }
        else
        {
            if (host.role() == 'master')
            {
                msg = "Master node";
                advice.setJustification("Master node will not be taken into consideration");
                advice.setSeverity(Ok);  
            }
            else
            {
                msg = "Cluster is not okay and there is no data";
                advice.setJustification("there is not enough load on the server or the uptime is too little.");
                advice.setSeverity(Ok);
            }
        }

        advice.setHost(host);
        advice.setTitle(TITLE);
        advice.setAdvice(msg);
        advisorMap[idx]= advice;
    }

    if (THRESHOLD_MET == true)
    {
        /* find unused node */
        var node = findUnusedHost();
        addNode(node);

        advice = new CmonAdvice();
        advice.setTitle(TITLE);
        advice.setAdvice("Scaling out cluster with new node:"+ node);
        advice.setJustification("Scaling slave nodes is necessary");
        advisorMap[idx+1]= advice;
    }


    return advisorMap;
}

Planets9s - Watch the replay: how to upgrade to MySQL 5.7 - best practices


Welcome to this week’s Planets9s, covering all the latest resources and technologies we create around automation and management of open source databases.

Watch the replay: how to upgrade to MySQL 5.7 - best practices

Thanks to everyone who participated in this week’s live webinar on how to upgrade to MySQL 5.7. Amongst other things, we discussed important changes between versions 5.6 and 5.7 and how to best research, prepare and perform adequate tests before the time comes to finally start the upgrade. The replay is now online to watch at your own leisure.

Watch the replay

Introducing NinesControl: Your Database, Any Cloud

We recently announced NinesControl, a developer-friendly service to deploy and manage MySQL, Percona, MariaDB and MongoDB clusters using your preferred cloud provider. NinesControl is specifically designed with developers in mind. It is currently in beta for DigitalOcean users, before we expand the service to other public cloud providers. We’re now inviting users to get an early look at the new NinesControl (beta).

Sign up to stay informed and apply for early access

Automagically scale your MySQL topologies with ClusterControl Developer Studio

This new blog post describes how to automatically scale a MySQL Replication topology via advisors, written in the ClusterControl Developer Studio. We show you how you can keep an eye on performance and at the same time scale out the number of read slaves in your replication topology whenever it is necessary.

Read the blog

That’s it for this week! Feel free to share these resources with your colleagues and follow us in our social media channels.

Have a good end of the week,

Jean-Jérôme Schmidt
Planets9s Editor
Severalnines AB

New Whitepaper: MySQL Replication for High Availability


MySQL Replication is probably the most popular high availability solution for MySQL, and widely used by top web properties like Twitter and Facebook. Although replication is easy to set up, ongoing maintenance tasks like software upgrades, schema changes, topology changes, failover and recovery have always been tricky. At least until MySQL 5.6.

Having recently discussed deployment and management of MySQL replication topologies, we’re now making this handy whitepaper available to all those of you who are currently working with or intending to work with MySQL Replication.

Our new whitepaper covers all you need to know about MySQL Replication, including information on the latest features introduced in 5.6 and 5.7. It also provides a more hands-on, practical section on how to quickly deploy and manage a replication setup using ClusterControl.

More specifically, the following topics are discussed:

  • What is MySQL Replication?
  • Topology for MySQL Replication
  • Deploying a MySQL Replication Setup
  • Connecting Application to the Replication Setup
  • Failover with ClusterControl
  • Operations: managing your MySQL Replication Setup
  • Issues & troubleshooting

To download your free copy of this new whitepaper, click here.

Planets9s - Download our new whitepaper: MySQL Replication for High Availability


Welcome to this week’s Planets9s, covering all the latest resources and technologies we create around automation and management of open source databases.

Download our new whitepaper: MySQL Replication for High Availability

This new whitepaper covers all you need to know about MySQL Replication, including information on the latest features introduced in 5.6 and 5.7. It also provides a more hands-on, practical section on how to quickly deploy and manage a replication setup using ClusterControl. This is a great resource for anyone wanting to get started with or refresh their knowledge on MySQL Replication.

Download the whitepaper

Sign up for our webinar: introducing a new Blueprint for MySQL Replication

In this new webinar, we will introduce the Severalnines Blueprint for MySQL Replication - this includes all aspects of a MySQL Replication topology with the ins and outs of deployment, setting up replication, monitoring, upgrades, performing backups and managing high availability using proxies such as ProxySQL, MaxScale and HAProxy.

Sign up for the webinar

Sign up for NinesControl: Your Database, Any Cloud

We recently announced NinesControl, a developer-friendly service to deploy and manage MySQL and MongoDB clusters using your preferred cloud provider. NinesControl is specifically designed with developers in mind. It is currently in beta for DigitalOcean users, before we expand the service to other public cloud providers. We’re now inviting users to get an early look and provide feedback.

Apply for early access

That’s it for this week! Feel free to share these resources with your colleagues and follow us in our social media channels.

Have a good end of the week,

Jean-Jérôme Schmidt
Planets9s Editor
Severalnines AB

Planets9s - Join us next Tuesday: introducing a new Blueprint for MySQL Replication


Welcome to this week’s Planets9s, covering all the latest resources and technologies we create around automation and management of open source databases.

Join us next Tuesday: introducing a new Blueprint for MySQL Replication

In this new webinar, we will introduce the Severalnines Blueprint for MySQL Replication - this includes all aspects of a MySQL Replication topology with the ins and outs of deployment, setting up replication, monitoring, upgrades, performing backups and managing high availability using proxies such as ProxySQL, MaxScale and HAProxy.

Sign up for the webinar

Read-Write Splitting for Java Apps using Connector/J, MySQL Replication and HAProxy

In this new blog post, we play around with Java, Connector/J, MySQL Replication and HAProxy. And we describe how to manually deploy a MariaDB Replication setup and add it into ClusterControl using “Add Existing Server/Cluster” as well as deploy HAProxy with Keepalived, and create a simple Java application to connect to our Replication setup.

Read the blog

How ProxySQL adds Failover and Query Control to your MySQL Replication Setup

In this new blog post we describe how to set up ProxySQL to work in a MySQL Replication environment managed by ClusterControl. We take a look at the metrics it provides to a DBA, and how this data can be used to ensure smooth operations.

Read the blog

That’s it for this week! Feel free to share these resources with your colleagues and follow us in our social media channels.

Have a good end of the week,

Jean-Jérôme Schmidt
Planets9s Editor
Severalnines AB

‘Become a MySQL DBA’ at Percona Live this month!


As per a related post from last week, we’re looking forward to Percona Live 2016 in Santa Clara, the go-to conference for MySQL & MongoDB users in particular, and open source database enthusiasts in general.

This year we’ve been given the opportunity to conduct a one-day tutorial based on our popular blog series ‘Become a MySQL DBA’. We’re reaching our 20th post in this series (the latest one is on troubleshooting with pt-stalk) and are excited to be able to share that content and knowledge during this one-day session in Santa Clara this month.

Art van Scheppingen

Our colleagues Art van Scheppingen and Ashraf Sharif will be walking participants through the content and practical exercises for about 6 hours spread across day 1 at Percona Live. The tutorial is scheduled to take place in Ballroom H. For more information, visit the Tutorials Schedule.

We look forward to seeing you there; and do come visit us at booth 308 in the exhibition hall as well!

Tutorial Description

Ashraf Sharif

Someone came to your desk and said: “You seem to know MySQL, can you take care of our database? It looks like it needs some love”. Does that sound familiar? Lots of DBAs have gone that way. At the beginning, you need to learn a lot about MySQL and how to operate it.

This hands-on tutorial is intended to help you navigate your way through the steps that lead to becoming a MySQL DBA. We are going to talk about the most important aspects of managing MySQL infrastructure and we will be sharing best practices and tips on how to perform the most common activities.

In this tutorial we are going to cover:

  • Monitoring and trending for your MySQL installation
  • What’s most important to look after?
  • What tools to use?
  • How to ensure you are proactive in monitoring the health of your MySQL?
  • How to diagnose issues with your MySQL setup?
  • Slow queries
  • Performance problems - what to look for?
  • Error logs
  • Hardware and OS issues
  • Backups
  • Binary and logical backup
  • What tools to use?
  • Most common maintenance operations
  • Schema changes
  • Batch operations
  • Replication topology changes
  • Database upgrades
  • How to prepare for an upgrade?
  • Performing minor and major version upgrades

We will provide a setup using virtual machines that you can freely test on.


Webinar Replay & Slides: the MySQL Replication Blueprint


Thanks to everyone who participated in this week’s live webinar introducing our blueprint for MySQL Replication. If you missed the session and/or would like to watch the replay in your own time, it is now available online for sign up and viewing.

MySQL replication has become an essential component of scalable architectures and the hub of scalable LAMP environments. The major bottleneck is generally not writing our data, but reading it back. Therefore the easiest way to scale MySQL is to add read replicas.

In this webinar, we introduced the Severalnines Blueprint for MySQL Replication - this includes all aspects of a MySQL Replication topology with the ins and outs of deployment, setting up replication, monitoring, upgrades, performing backups and managing high availability using proxies such as ProxySQL, MaxScale and HAProxy.

Replay Details

Get access to the replay
View the slides

Agenda

  • Why a Blueprint for Replication?
  • MySQL Replication Blueprint - Components & Topology
  • Monitoring & Trending
  • Management
  • Load balancing
  • Live Demo - ClusterControl

Speaker

Art van Scheppingen is a Senior Support Engineer at Severalnines. He’s a pragmatic MySQL and database expert with over 15 years’ experience in web development. He previously worked at Spil Games as Head of Database Engineering, where he maintained a broad view of the whole database environment: from MySQL to Couchbase, Vertica to Hadoop and from Sphinx Search to SOLR. He regularly presents his work and projects at various conferences (Percona Live, FOSDEM) and related meetups.


Planets9s - Watch the replay: Introducing a New Blueprint for MySQL Replication


Welcome to this week’s Planets9s, covering all the latest resources and technologies we create around automation and management of open source database infrastructures.

Watch the replay: Introducing a New Blueprint for MySQL Replication

MySQL failover or fallover? And why a Blueprint for MySQL Replication? These questions and more were answered during this week’s live webinar on the Severalnines Blueprint for MySQL Replication. Thanks to everyone who participated in it. If you missed the session and/or would like to watch the replay in your own time, it is now available online for sign up and viewing.

Watch the replay

Download the Whitepaper: MySQL Replication for HA

This whitepaper covers all you need to know about MySQL Replication, including information on the latest features introduced in 5.6 and 5.7. It also provides a more hands-on, practical section on how to quickly deploy and manage a replication setup using ClusterControl.

Download the whitepaper

ClusterControl Developer Studio: Automagically Scale Your Clusters

This blog post explores some of the cool advisor features in our Developer Studio and shows you how you can keep an eye on performance and at the same time scale out the number of read slaves in your replication topology whenever it is necessary.

Read the blog

That’s it for this week! Feel free to share these resources with your colleagues and follow us in our social media channels.

Have a good end of the week,

Jean-Jérôme Schmidt
Planets9s Editor
Severalnines AB

Infrastructure Automation - Ansible Role for ClusterControl


If you are automating your server infrastructure with Ansible, then this blog is for you. We are glad to announce the availability of an Ansible Role for ClusterControl. It is available at Ansible Galaxy. For those who are automating with Puppet or Chef, we already published a Puppet Module and Chef Cookbook for ClusterControl. You can also check out our Tools page.

ClusterControl Ansible Role

The Ansible role is also available from our Github repository. It does the following:

  1. Configure Severalnines repository.
  2. Install and configure MySQL (MariaDB for CentOS/RHEL 7).
  3. Install and configure Apache and PHP.
  4. Set up rewrite and SSL module for Apache.
  5. Install and configure ClusterControl suite (controller, UI and CMONAPI).
  6. Generate an SSH key for cmon_ssh_user (default is root).

This role is built on top of Ansible v1.9.4 and is tested on Debian 8 (Jessie), Ubuntu 12.04 (Precise), RHEL/CentOS 6 and 7.

Example Deployment

Let’s assume that we already have Ansible installed on a host, and we want to have a ClusterControl host to deploy and manage a three-node Percona XtraDB Cluster. The following is our architecture diagram:

  1. Get the ClusterControl Ansible role from Ansible Galaxy or Github.

    For Ansible Galaxy:

    $ ansible-galaxy install severalnines.clustercontrol

    For Github:

    $ git clone https://github.com/severalnines/ansible-clustercontrol
    $ cp -rf ansible-clustercontrol /etc/ansible/roles/severalnines.clustercontrol
  2. Configure the ClusterControl host in Ansible. Add the following line into /etc/ansible/hosts:

    192.168.0.10
  3. Create a playbook. In this example, we create a minimal Ansible playbook called cc.yml and add the following lines:

    - hosts: 192.168.0.10
      roles:
        - { role: severalnines.clustercontrol }
  4. Generate an SSH key and set up passwordless SSH from the Ansible control host to ClusterControl host as root user:

    $ ssh-keygen -t rsa
    $ ssh-copy-id 192.168.0.10
  5. Run the Ansible playbook.

    $ ansible-playbook cc.yml
  6. Once ClusterControl is installed, go to https://192.168.0.10/clustercontrol and create the default admin user/password.

  7. On the ClusterControl node, set up passwordless SSH to all target DB nodes. For example, if the ClusterControl node is 192.168.0.10 and the DB nodes are 192.168.0.11, 192.168.0.12 and 192.168.0.13:

    $ ssh-copy-id 192.168.0.11 # DB1
    $ ssh-copy-id 192.168.0.12 # DB2
    $ ssh-copy-id 192.168.0.13 # DB3

    Enter the password to complete the passwordless SSH setup.

  8. To deploy a new database cluster, click on “Create Database Cluster” and specify the following:

    Grab a cup of coffee while waiting for the cluster to be deployed. It usually takes 15 to 20 minutes depending on the internet connection. Once the job is completed, you should see the cluster in the database cluster list, similar to the screenshot below:

Galera cluster is now deployed.

Example Playbook

The simplest playbook would be (as shown in the above example):

- hosts: clustercontrol-server
  roles:
    - { role: severalnines.clustercontrol }

If you would like to specify custom configuration values as explained above, create a file called vars/main.yml and include it inside the playbook:

- hosts: 192.168.10.15
  vars_files:
    - vars/main.yml
  roles:
    - { role: severalnines.clustercontrol }

Inside vars/main.yml:

mysql_root_username: admin
mysql_root_password: super-user-password
cmon_mysql_password: super-cmon-password
cmon_mysql_port: 3307

If you are running as a non-root user, ensure the user has the ability to escalate as super user via sudo. Example playbook for Ubuntu 12.04 with sudo password enabled:

- hosts: 192.168.10.100
  remote_user: ubuntu
  become: yes
  become_user: root
  roles:
    - { role: severalnines.clustercontrol }

Then, execute the command with --ask-become-pass flag:

$ ansible-playbook cc.yml --ask-become-pass

For more details on the Role Variables, check out the Ansible Galaxy or Github repository. Happy clustering!

Press Release: Severalnines expands the reach of European scientific discovery


Stockholm, Sweden and anywhere else in the world - 20 April 2016 - Severalnines, the provider of database infrastructure management software, today announced its latest customer, the National Center for Scientific Research (CNRS), which is a subsidiary of the French Ministry of Higher Education and Research.

The CNRS has over 1,100 research units and is home to some of the largest scientific research facilities in the world. It partners with other global institutions and employs over 33,000 people. Working in partnership with universities, laboratories and dedicated scientists, CNRS has delivered advanced research in areas such as obesity, malaria and organic matter in space.

Its international outreach means CNRS has a dedicated department, the Directorate of Information Systems (CNRS-DSI), to handle the organisation’s information infrastructure. Thousands of gigabytes of administrative data are processed by CNRS-DSI internal systems every week, but with a tight budget CNRS needed software that was cost effective while delivering a high-quality, robust service.

To manage the high volume of data, CNRS deployed over 100 open source LAMP applications. The growth of the institution led to unprecedented usage of CNRS data from tens of thousands of users across the world accessing or transporting information. There was a need to increase the scalability, availability and robustness of the systems.

After launching a study to find a suitable database solution and realising that traditional MySQL clusters were too complicated to run without a database administrator (DBA), they found Severalnines’ ClusterControl in conjunction with MariaDB Galera Cluster, MySQL’s “little sister fork”. ClusterControl offered a comprehensive solution that is easy for all CNRS-DSI technical staff to use. It integrated well with the existing technological environment and was able to detect anomalies in the system.

Since Severalnines was deployed, the CNRS-DSI team has run development and production MariaDB Galera clusters with ClusterControl, with plans to have all of its LAMP applications running in this environment. In fact, CNRS-DSI recently extended all of its ClusterControl subscriptions.

Furthermore, besides these classical LAMP applications, CNRS-DSI is deploying a cloud storage solution for thousands of its users. For performance and availability reasons, MariaDB Galera was also chosen as the database component in place of classical standalone MySQL, and Severalnines ClusterControl was the natural choice to manage this critical service as well.

Olivier Lenormand, Technical Manager of CNRS-DSI, stated: “Technology is the backbone of scientific discovery which ultimately leads to human advancement. Data management is very important at CNRS because we want to continue our groundbreaking research and protect our data. Severalnines has helped us keep costs down whilst increasing the potential of our open source systems. We’ve found a database platform, which can both manage and use our LAMP applications, as well as cloud services. Severalnines is helping us enhance the capabilities at CNRS-DSI for the benefit of the global scientific community.”

Vinay Joosery, Severalnines CEO, said: “Data management in a large organisation like CNRS can present technical as well as economical challenges, but it should not get in the way of scientific research. We are really excited we can help CNRS use the best of open source software to increase collaboration in new, potentially life-saving research projects.”

About Severalnines

Severalnines provides automation and management software for database clusters. We help companies deploy their databases in any environment, and manage all operational aspects to achieve high-scale availability.

Severalnines' products are used by developers and administrators of all skill levels to provide the full 'deploy, manage, monitor, scale' database cycle, freeing them from the complexity and learning curves typically associated with highly available database clusters. The company has enabled over 8,000 deployments to date via its ClusterControl solution, and counts BT, Orange, Cisco, CNRS, Technicolour, AVG, Ping Identity and Paytrail among its customers. Severalnines is a private company headquartered in Stockholm, Sweden, with offices in Singapore and Tokyo, Japan. To see who is using Severalnines today, visit http://www.severalnines.com/customers

Watch the replay: Become a MongoDB DBA (if you’re really a MySQL user)


Thanks to everyone who participated in this week’s webinar on ‘Become a MongoDB DBA’! Our colleague Art van Scheppingen presented from the perspective of a MySQL DBA who might be called to manage a MongoDB database, which included a live demo on how to carry out the relevant DBA tasks using ClusterControl.

The replay and the slides are now available online in case you missed Tuesday’s live session or simply would like to see it again in your own time.

Watch the replay
Read the slides

This was the first session of our new webinar series: ‘How to Become a MongoDB DBA’ to answer the question: ‘what does a MongoDB DBA do’?

In this initial webinar, we went beyond the deployment phase and demonstrated how you can automate tasks, monitor a cluster and manage MongoDB; whilst also automating and managing your MySQL and/or PostgreSQL installations. Watch out for invitations for the next session in this series!

This Session's Agenda

  • Introduction to becoming a MongoDB DBA
  • Installing & configuring MongoDB
  • What to monitor and how
  • How to perform backups
  • Live Demo

Speaker

Art van Scheppingen is a Senior Support Engineer at Severalnines. He’s a pragmatic MySQL and database expert with over 15 years’ experience in web development. He previously worked at Spil Games as Head of Database Engineering, where he maintained a broad view of the whole database environment: from MySQL to Couchbase, Vertica to Hadoop and from Sphinx Search to SOLR. He regularly presents his work and projects at various conferences (Percona Live, FOSDEM) and related meetups.

This series is based upon the experience we have using MongoDB and implementing it for our database infrastructure management solution, ClusterControl. For more details, read through our ‘Become a ClusterControl DBA’ blog series.
