How You Can Hyperscale Your Applications Using Mesos & Marathon

MSys Editorial Jun 09 - 15 min read


In a previous blog post, we saw what Apache Mesos is and how it helps create dynamic partitioning of our available resources, which results in increased utilization, efficiency, reduced latency, and better ROI. We also discussed how to install, configure, and run Mesos and sample frameworks. But there is much more to Mesos than that.
In this post we will explore and experiment with a close-to-real-life Mesos cluster running multiple master-slave configurations along with Marathon, a meta-framework that acts as a cluster-wide init and control system for long-running services. We will set up 3 Mesos masters and 3 Mesos slaves, cluster them along with ZooKeeper and Marathon, and finally run a Ruby on Rails application on this Mesos cluster. The post will demo scaling the Rails application up and down with the help of Marathon. We will use Vagrant to set up our nodes inside VirtualBox and will link the relevant Vagrantfile later in this post.
To follow this guide you will need to obtain the binaries for:

    • Ubuntu 14.04 (64 bit arch) (Trusty)
    • Apache Mesos
    • Marathon
    • Apache Zookeeper
    • Ruby / Rails
    • VirtualBox
    • Vagrant
    • Vagrant Plugins

Let me briefly explain what Marathon and Zookeeper are.
Marathon is a meta-framework you can use to start other Mesos frameworks or applications (anything that you could launch from your standard shell). So if Mesos is your data center kernel, Marathon is your “init” or “upstart”. Marathon provides an excellent REST API to start, stop and scale your application.
Apache ZooKeeper is a coordination server for distributed systems: it maintains configuration information and naming, and provides distributed synchronization and group services. We will use ZooKeeper to coordinate among the masters themselves and between masters and slaves.
For Apache Mesos, Marathon and ZooKeeper we will use the excellent packages from Mesosphere, the company behind Marathon. This will save us a lot of time compared to building the binaries ourselves. Also, we get to leverage a bunch of helpers that these packages provide, such as creating required directories, configuration files and templates, startup/shutdown scripts, etc. Our cluster will look like this:

The above cluster configuration ensures that the Mesos cluster is highly available because of the multiple masters. Leader election, coordination and detection are ZooKeeper's responsibility. Later in this post we will show how all of these are configured to work together as a team. Operational Guidelines and High Availability are good reads to learn and understand more about this topic.

Installation

On each node we first add the Mesosphere APT repository and its key to the system's package sources, and then update the package index.
$ sudo apt-key adv --keyserver keyserver.ubuntu.com --recv E56151BF
$ echo "deb http://repos.mesosphere.io/ubuntu trusty main" |  sudo tee /etc/apt/sources.list.d/mesosphere.list
$ sudo apt-get -y update
If you are using a version other than Ubuntu 14.04, you will have to change the above line accordingly; and if you are using another distribution such as CentOS, you will have to use the relevant rpm and yum commands. This applies everywhere henceforth.
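For reference, a rough CentOS 7 equivalent is sketched below; the repo RPM path is an assumption on my part, so check Mesosphere's documentation for the exact URL for your release.
$ sudo rpm -Uvh http://repos.mesosphere.io/el/7/noarch/RPMS/mesosphere-el-repo-7-1.noarch.rpm
$ sudo yum makecache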

On master nodes:

In our configuration, we are running Marathon on the same boxes as the Mesos masters. The folks at Mesosphere have created a meta-package called mesosphere which installs Mesos, Marathon and also ZooKeeper.
$ sudo apt-get install mesosphere

On slave nodes:

On slave nodes, we only require ZooKeeper and Mesos to be installed. The following command takes care of it.
$ sudo apt-get install mesos
As mentioned above, installing these packages does more than just install the binaries. Much of the plumbing work is taken care of for you; you need not worry about whether the mandatory "work_dir" has been created (without which Apache Mesos would not run) and other such important details. If you want to understand more, extracting the scripts from the packages and studying them is highly recommended. That is what I did as well.
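If you would like to peek at that plumbing yourself, one way to do it on the Ubuntu boxes used here is to download a package and extract its maintainer scripts (the commands below are a sketch; adjust the package name as needed):
$ apt-get download mesos
$ dpkg-deb -e mesos_*.deb mesos-control
$ dpkg -L mesos
The first command fetches the .deb without installing it, the second extracts the maintainer scripts (postinst, prerm, etc.) into ./mesos-control, and the third lists every file an already-installed package placed on disk.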
You can save a lot of time if you clone this repository and then run the following command inside your copy.
$ vagrant up
This command will launch the cluster, set the IPs for all nodes, and install all the packages required to follow this post. You are now ready to configure your cluster.

Configuration

In this section we will configure each tool/application one by one. We will start with ZooKeeper, then the Mesos masters, then the Mesos slaves, and finally Marathon.

Zookeeper

Let us stop Apache Zookeeper on all nodes (masters and slaves).
$ sudo service zookeeper stop
Let us configure Apache Zookeeper on all masters. Do the following steps on each master.
Edit /etc/zookeeper/conf/myid on each of the master nodes. Replace the boilerplate text in this file with a unique number (per server) from 1 to 255. These numbers will be the IDs for the servers being coordinated by ZooKeeper. Let's choose 10, 30 and 50 as the IDs for the 3 Mesos master nodes. Save the files after adding 10, 30 and 50 respectively to /etc/zookeeper/conf/myid on the respective nodes. Here's what I had to do on the first master node; the same has to be repeated on the other nodes with their respective IDs.
$ echo 10 | sudo tee /etc/zookeeper/conf/myid
Next we edit the ZooKeeper configuration file ( /etc/zookeeper/conf/zoo.cfg ) on each master node. For the purpose of this blog we are just adding the master node IPs and the server IDs that were selected in the previous step.
Note the configuration template line below: server.id=host:port1:port2. port1 is used by peer ZooKeeper servers to communicate with each other, and port2 is used for leader election. The recommended values are 2888 and 3888 for port1 and port2 respectively, but you can choose custom values for your cluster.
Assuming that you have chosen the IP range 10.10.20.11-13 for your Mesos servers as mentioned above, edit /etc/zookeeper/conf/zoo.cfg to reflect the following:
# /etc/zookeeper/conf/zoo.cfg
server.10=10.10.20.11:2888:3888
server.30=10.10.20.12:2888:3888
server.50=10.10.20.13:2888:3888
This file has many other ZooKeeper-related configuration options which are beyond the scope of this post. If you are using the packages mentioned above, the configuration templates should be a lot of help; definitely read the comment sections, there is a lot to learn there.
This is a good tutorial for understanding the fundamentals of ZooKeeper. And this document is perhaps the latest and best document on administering Apache ZooKeeper; this section in particular is relevant to what we are doing.

All Nodes

Zookeeper Connection Details

For all nodes (masters and slaves), we have to set up the ZooKeeper connection details. These are stored in /etc/mesos/zk, a configuration file that you get thanks to the packages. Edit this file on each node and add the following URL carefully.
#/etc/mesos/zk
zk://10.10.20.11:2181,10.10.20.12:2181,10.10.20.13:2181/mesos
Port 2181 is the port ZooKeeper listens on for client connections. The IP addresses will differ if you chose different IPs for your servers.

IP Addresses

Next we set up IP address information for all nodes (masters and slaves).

Masters

$ echo <MASTER_NODE_IP> | sudo tee /etc/mesos-master/ip
$ sudo cp /etc/mesos-master/ip /etc/mesos-master/hostname
Replace <MASTER_NODE_IP> with the IP of that master node (e.g. 10.10.20.11 for the first master).

Slaves

$ echo <SLAVE_NODE_IP> | sudo tee /etc/mesos-slave/ip
$ sudo cp /etc/mesos-slave/ip /etc/mesos-slave/hostname
Replace <SLAVE_NODE_IP> with the IP of that slave node. Keeping the hostname the same as the IP sidesteps DNS resolution issues.
If you are using the Mesosphere packages, you get a bunch of intelligent defaults. One of the most important things you get is a convenient way to pass CLI options to Mesos: all you need to do is create a file with the same name as the CLI option and put the value you want to pass to Mesos (master or slave) in it. The file needs to be placed in the correct directory; for Mesos masters that is /etc/mesos-master, and for slaves it is /etc/mesos-slave. For example: echo 5051 | sudo tee /etc/mesos-slave/port (5051 is the slave's default port).
We will see more examples of this configuration style below. Here you can find all the CLI options that you can pass to the Mesos master/slave.
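To make the convention concrete, here are a couple of illustrative examples for a slave node; the option values are hypothetical and not required for this setup, but the file names match real Mesos slave options:
$ echo 'mesos' | sudo tee /etc/mesos-slave/containerizers
$ echo '5mins' | sudo tee /etc/mesos-slave/executor_registration_timeout
The first is equivalent to passing --containerizers=mesos and the second to --executor_registration_timeout=5mins.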

Mesos Servers

We need to set a quorum for the masters. This can be done by editing /etc/mesos-master/quorum and setting it to an appropriate value. The quorum is a strict majority of the masters, so for our 3-master cluster we will use 2. This means that at least 2 of the 3 master nodes must be running for our cluster to operate properly.
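Following the same file-per-option convention, setting the quorum on each master is a one-liner:
$ echo 2 | sudo tee /etc/mesos-master/quorum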
We need to stop the slave service on all masters if they are running. If they are not, the following command might give you a harmless warning.
$ sudo service mesos-slave stop
Then we disable the slave service by setting a manual override.
$ echo manual | sudo tee /etc/init/mesos-slave.override

Mesos Slaves

Similarly we need to stop the master service on all slaves if they are running. If they are not, the following command might give you a harmless warning. We also set the master and zookeeper service on each slave to manual override.
$ sudo service mesos-master stop
$ echo manual | sudo tee /etc/init/mesos-master.override
$ echo manual | sudo tee /etc/init/zookeeper.override
The above .override files are read by Upstart on Ubuntu to decide whether to start these services. If you are using a different distribution, or even Ubuntu 15.04 and later, you might have to do this differently.
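For what it is worth, on a systemd-based system (Ubuntu 15.04+, CentOS 7, etc.), and assuming the packages ship systemd units with the same service names, the rough equivalent would be:
$ sudo systemctl stop mesos-master zookeeper
$ sudo systemctl disable mesos-master zookeeper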

Marathon

We can now configure Marathon, which needs a little extra work. We will configure Marathon only on the master nodes.
First create a directory for Marathon configuration.
$ sudo mkdir -p /etc/marathon/conf
Then, as we did before, we will set configuration properties by creating files with the same name as the property to be set, with the value of the property as the only content of the file (see the note above).
The Marathon binary needs to know the values for --master and --hostname. We can reuse the files that we used for the Mesos configuration.
$ sudo cp /etc/mesos-master/ip /etc/marathon/conf/hostname
$ sudo cp /etc/mesos/zk /etc/marathon/conf/master
To make sure Marathon can use ZooKeeper for its own state, do the following (note that the znode path is different in this case, i.e. /marathon):
$ echo zk://10.10.20.11:2181,10.10.20.12:2181,10.10.20.13:2181/marathon \
| sudo tee /etc/marathon/conf/zk
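At this point a quick sanity check of the Marathon configuration directory should show the three files we created, with the master file carrying the same ZooKeeper URL we put in /etc/mesos/zk:
$ ls /etc/marathon/conf
hostname  master  zk
$ cat /etc/marathon/conf/master
zk://10.10.20.11:2181,10.10.20.12:2181,10.10.20.13:2181/mesos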
Here you can find all the command line options that you can pass to Marathon.

Starting Services

Now that we have configured our cluster, we can resume all services.

Master

$ sudo service zookeeper start
$ sudo service mesos-master start
$ sudo service marathon start
 

Slave

$ sudo service mesos-slave start
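As an optional sanity check once everything is up, ZooKeeper should answer its four-letter-word commands and the Mesos and Marathon HTTP endpoints should respond (shown here against the first master; any master works):
$ echo stat | nc 10.10.20.11 2181
$ curl -s http://10.10.20.11:5050/master/state.json | head
$ curl -s http://10.10.20.11:8080/v2/info
ZooKeeper should report its Mode as leader or follower, the Mesos state should list the registered slaves, and Marathon should return its build and configuration info.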
 

Running Your Application

Marathon provides a nice Web UI to set up your application. It also provides an excellent REST API to create, launch, and scale applications, check health status, and more.
Go to your Marathon Web UI; if you followed the above instructions, the URL will be one of the Mesos masters on port 8080 (i.e. http://10.10.20.11:8080). Click the "New App" button to deploy a new application and fill in the details. The application ID is mandatory. Select appropriate values for CPU, memory and disk space for your application. For now, let the number of instances be 1; we will increase it later when we scale up the application on our shiny new cluster.
There are a few optional settings that you might have to take care of depending on how your slaves are provisioned and configured. For this post, I made sure each slave had Ruby, the Ruby-related dependencies and the Bundler gem installed. I took care of this when I launched and provisioned the slave nodes.
One of the important optional settings is the "Command" that Marathon executes. Marathon monitors this command and reruns it if it stops for some reason; this is how Marathon earns its claim to fame as an "init" for long-running applications. For this post, I have used the following command (without the quotes).
"cd hello && bundle install && RAILS_ENV=production bundle exec unicorn -p 9999"
This command reads the Gemfile in the Rails application, installs all the necessary gems required for the application, and then runs the application on port 9999.
I am using a sample Ruby on Rails application and have put the URL of the tarred application in the URI field. Marathon understands a few archive/package formats and takes care of unpacking them, so we needn't worry about that. Applications need resources to run properly, and URIs can be used to supply them. Read more about applications and resources here.
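As an aside, the same application could be created via the REST API instead of the Web UI. A minimal sketch might look like the following, where the application ID matches the one used later in this post and the tarball URL is a placeholder you would replace with your own:
$ curl -X POST -H "Content-Type: application/json" http://10.10.20.11:8080/v2/apps -d '{
    "id": "rails-app-on-mesos-marathon",
    "cmd": "cd hello && bundle install && RAILS_ENV=production bundle exec unicorn -p 9999",
    "cpus": 0.5,
    "mem": 512,
    "instances": 1,
    "uris": ["http://example.com/path/to/hello.tar.gz"]
  }'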
Once you click "Create", you will see that Marathon starts deploying the Rails application. A slave is selected by Mesos, the application tarball is downloaded and untarred, the requirements are installed, and the application is run. You can monitor all these steps by watching the "Sandbox" logs, which you should find on the main Mesos web UI page. When the state of the task changes from "Staging" to "Running", we have a Rails application running via Marathon on a Mesos slave node. Hurrah!
If you followed the steps above and read the "Sandbox" logs, you know the IP of the node where the application was deployed. Navigate to SLAVE_NODE_IP:9999 to see your Rails application running.

Scaling Your Application

All good, but how do we scale? After all, the idea is for our application to reach web scale and become the next Twitter, and this post is all about scaling applications with Mesos and Marathon. Scaling up and down sounds difficult, but not when you have Mesos and Marathon for company. Navigate to the application's page in the Marathon UI. You should see a button that says "Scale". Click on it and increase the number to 2 or 3 or whatever you prefer (assuming you have that many slave nodes). In this post we have 3 slave nodes, so I can choose 2 or 3; I chose 3. And voila! The application is deployed seamlessly to the other two nodes just as it was deployed to the first node. You can see for yourself by navigating to SLAVE_NODE_IP:9999, where SLAVE_NODE_IP is the IP of any slave where the application was deployed. And there you go: you have your application running on multiple nodes.
It would be trivial to put these IPs behind a load balancer and a reverse proxy so that access to your application is as simple as possible.
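For illustration, a minimal HAProxy fragment might look like the sketch below; the slave IPs are placeholders for whatever IPs your slaves actually have, and tools also exist that generate such configurations from Marathon's REST API automatically.
# /etc/haproxy/haproxy.cfg (fragment, slave IPs are placeholders)
frontend rails_frontend
    bind *:80
    default_backend rails_backend
backend rails_backend
    balance roundrobin
    server slave1 10.10.20.14:9999 check
    server slave2 10.10.20.15:9999 check
    server slave3 10.10.20.16:9999 check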

Graceful Degradation (and vice versa)

Sometimes nodes in your cluster go down for one reason or another. Very often you get an email from your IaaS provider saying that a node will be retired in a few days' time; at other times a node dies before you can figure out what happened. When such inevitable things happen and the node in question is part of the cluster running the application, the dynamic duo of Mesos and Marathon have your back. The system will detect the failure, de-register the slave and deploy the application to a different slave available in the cluster. You could tie this up with your IaaS-provided scaling options and spawn the required number of new slave nodes as part of your cluster; once registered with Mesos, they can run your application.

Marathon REST API

Although we have used the Web UI to add a new application and scale it, we could have done the same (and much more) using the REST API, and thus drive Marathon operations from a program or script. Here's a simple example that scales the application to 2 instances. Use any REST client, or just curl, to make a PUT request to the application ID, in our case http://10.10.20.11:8080/v2/apps/rails-app-on-mesos-marathon, with the following JSON data as the payload. You will notice that Marathon deploys the application to another instance if there was only 1 instance before.
{
    "instances" : 2
}
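With curl, the same request looks like this:
$ curl -X PUT -H "Content-Type: application/json" \
    -d '{"instances": 2}' \
    http://10.10.20.11:8080/v2/apps/rails-app-on-mesos-marathon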
You can do much more than this: health checks, adding/suspending/killing/scaling applications, and so on. That could be a complete blog post in itself and will be dealt with at a later time.

Conclusion

Scaling your application becomes as easy as pressing a button with the combination of Mesos and Marathon. Setting up a cluster becomes almost trivial once you get your requirements in place and, ideally, automate the configuration and provisioning of your nodes. For this post, I relied on a simple Vagrantfile and a shell script to provision the systems, and then configured them by hand as per the steps above. Using Chef or a similar tool would turn the configuration step into a single command. In fact, there are a few open-source projects that are already very successful and do just that. I have played with everpeace/vagrant-mesos and it is an excellent starting point. Reading the code from these projects will help you understand a lot about building and configuring clusters with Mesos.
There are other projects that do similar things to Marathon, and sometimes more. I would definitely like to mention Apache Aurora and HubSpot's Singularity.

