Thursday, July 2, 2015

WSO2 API Manager Distributed Architecture : Configuration Tips (Ubuntu)

Hello all....!

I thought I would write another blog post on WSO2 API Manager. You may already have a basic understanding of what WSO2 API Manager is; if not, you can go through my previous blog post or refer to the WSO2 API Manager documentation.

I am writing this blog only two weeks after the WSO2 API Manager 1.9.0 release, and we are now working on releasing the next versions of WSO2 API Manager. For version 1.9.0 I was able to work closely on the distributed architecture of WSO2 API Manager.

There are a lot of configurations to do when creating a distributed setup for WSO2 API Manager, and you will get the chance to use several other WSO2 products as well. Other than that, you have to fulfill some prerequisites before installing the products.

The intention of this blog post is not to describe how to create an API Manager distributed setup; for that, you can refer to the WSO2 documentation. My intention is to guide you through some configuration steps of the distributed setup, saving you the time of searching here and there for what to do and how to do it, by pointing out the places where you might make mistakes and how to work with the remote machines when configuring the instances.

First of all, let's get a basic understanding of the distributed architecture of WSO2 API Manager.

API Deployment Distributed Architecture

There are four main components in the WSO2 API Manager distributed setup. There may be one instance of each, or more than one (clustered or not clustered), according to the requirement.

1. API Gateway  - The component that manages API calls by securing and scaling them.

2. Key Manager - Responsible for the Key-related security operations

3. Publisher - Instance which is used by API providers to publish APIs, share documents, provision keys and gather feedback on the features

4. Store - Instance used by consumers to self sign up, subscribe to APIs, invoke them and interact with the API publishers.

Other than that, you need a DBMS (e.g. MySQL Server) to create the three databases described below.

Databases :

There are three databases we need to have for the cluster deployment architecture. Those are,

  • API Manager Database (Information about APIs and API subscription details)
  • User Management Database  (Information about users and user roles)
  • Registry Database (Shared information between Publisher and Store)

Single Sign On Login with WSO2 Identity Server (WSO2 IS)

To enhance efficiency and accessibility, and for a better user experience, you are encouraged to configure Single Sign-On between the Publisher and Store in your distributed setup. You can follow the WSO2 documentation for SSO configuration to accomplish this task after setting up the cluster. For this we use another WSO2 product: WSO2 Identity Server, which manages identities across internal, shared and SaaS services.

As an example, this allows an authenticated user to access the API Manager Store without authenticating again, provided the user has already authenticated in the API Manager Publisher.

Publishing Runtime Statistics with WSO2 Business Activity Monitor (WSO2 BAM)

To collect the runtime statistics of WSO2 API Manager and analyse them, you are encouraged to use another WSO2 product, which was developed to aggregate and analyse data and present information about business activities. WSO2 API Manager uses this product to publish runtime statistics about API-related activities. Follow the WSO2 documentation to configure WSO2 API Manager with WSO2 BAM to publish API runtime statistics.

Load Balancing with nginx

For load balancing we used the WSO2 Elastic Load Balancer (ELB) earlier. Now customers are encouraged to use Nginx for load balancing instead. To do the Nginx configuration, follow the WSO2 documentation.

Now let's move on to the important tips.

1. Access remote instances 

When configuring a distributed setup for WSO2 API Manager you need to access the remote instances through the terminal. To access a remote machine you may have a key included in a .pem file. First of all you have to restrict the permissions of the .pem file so that only you can read and write it:

chmod 600 <path_to_pem_file>/<pem_file_name>.pem

Then you access the remote instance using the following command.

ssh -i <pem_file_name>.pem <remote_user_name>@<remote_host_ip>
e.g. ssh -i permission.pem ubuntu@<remote_host_ip>

You will log into the remote machine and you can work with it using terminal. 

2. Copying product binaries to remote machines

Since we are using remote machines and have to use several software products, it is necessary to download those files to the machines or transfer them. You can download the products to the remote machines through the terminal using the following command.

wget "<download_link_to_the_file>"


But this is a time-consuming task. The better solution is therefore to copy the files to the remote machine through the SSH connection you built. To do that we use the secure copy command, which allows copying files over an SSH connection. For this you also need the .pem file.

scp -i <path_to_pem_file>/<pem_file_name>.pem  <path_to_file_in_your_machine>/<file_to_copy> <remote_user_name>@<remote_host_ip>:<location_to_be_copied>

e.g. scp -i /home/API-deployment-Architecture/permission.pem wso2am-1.9.0.zip ubuntu@<remote_host_ip>:/home/ubuntu/

3. Installing prerequisites 

For each WSO2 product to run, there is a set of prerequisites to be installed on the machine it runs on. In this setup we are using WSO2 API Manager, WSO2 Business Activity Monitor and WSO2 Identity Server. To get more information on the prerequisites, go through the installation prerequisites.

You need to install all these through the terminal. Following are the steps to install them.

Install  Java : 
Follow this blog post of mine to install Java on the remote machine.

Installing Maven : 
sudo apt-get install maven

By running the above command you can install Maven on the remote machine. To check the version, execute the following command.

mvn -version

Installing ActiveMQ : 

Download ActiveMQ and extract the archive :

tar -zxvf apache-activemq-5.6.0-bin.tar.gz

Go to the bin folder of ActiveMQ and make the activemq script executable.
chmod 755 activemq

Start ActiveMQ :
sudo sh activemq start

Test the installation :
netstat -an | grep 61616

Since you are installing the binary distribution, not building from source, you do not need to install an SVN client, and if you do not compile and run the product samples on the remote machine, you do not need to install Apache Ant either. However, you do need to install the SVN client on the remote machine where deployment synchronization is going to be configured. Use the following command for that.

sudo apt-get install subversion

4. Installing MySQL Server 

Since you have to configure the three databases described above, you have to install a database server. Assuming you will choose MySQL Server, I will give the configuration tips for it. You can follow my previous post to install MySQL Server on your remote machines.

5. Port Offset

In this cluster setup you will have to run more than one WSO2 product in the same cluster, and also more than one WSO2 product on the same machine. To avoid port conflicts, make sure to set the port offset correctly by editing the <PRODUCT_HOME>/repository/conf/carbon.xml file.


According to the value you set, the ports used by the server are incremented, starting from 9443, which is the default HTTPS port.
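For example, to shift all ports of an instance by 1 (so 9443 becomes 9444), you would edit the Ports section of carbon.xml roughly as follows (a sketch; the file contains many more elements that you leave untouched):

```xml
<!-- <PRODUCT_HOME>/repository/conf/carbon.xml -->
<Ports>
    <!-- All ports the server opens are shifted by this value,
         e.g. the default HTTPS port 9443 becomes 9444 -->
    <Offset>1</Offset>
</Ports>
```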

6. Who needs what ?

Another thing you need to understand when setting up this distributed architecture is which component needs which other component in order to perform. We configure WSO2 BAM to publish runtime statistics; it is required by the Gateway, Publisher, Key Manager and Store. We configure Single Sign-On, which is required by (and affects) the Publisher and Store. We need load balancing if we are going to balance the load on the gateway cluster, which manages the API calls.
Considering the databases, the API Manager database is needed by the Publisher, Store and Key Manager, but the Gateway does not need it. The User Management database and the Registry database are needed by all four components in the cluster. Likewise, when doing the configuration we should keep in mind who needs what.

7. Configuring databases

When configuring the databases, make sure the usernames, passwords and database names are consistent everywhere. WSO2 BAM does not need its schema created by running a script, but the database itself must be created for all products; for the API Manager and Registry databases you should also run the corresponding SQL scripts. Use the proper database name both in the XML configuration and in the database server.
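As a sketch, a datasource entry for the API Manager database in <PRODUCT_HOME>/repository/conf/datasources/master-datasources.xml might look roughly like this; the host, database name and credentials are placeholders, and the exact element details can vary by product version:

```xml
<datasource>
    <name>WSO2AM_DB</name>
    <jndiConfig><name>jdbc/WSO2AM_DB</name></jndiConfig>
    <definition type="RDBMS">
        <configuration>
            <!-- db-host, apimgtdb and the credentials are examples only -->
            <url>jdbc:mysql://db-host:3306/apimgtdb?autoReconnect=true</url>
            <username>apimuser</username>
            <password>apimpassword</password>
            <driverClassName>com.mysql.jdbc.Driver</driverClassName>
        </configuration>
    </definition>
</datasource>
```

The database name in the URL here must match the name of the database you actually created on the MySQL server.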

8. Configuring API Manager for Stats

Properly setting the IP addresses in the API Manager statistics configuration is needed to avoid errors. Accordingly, the Event Receiver and Data Analyzer configurations should have URLs with the IP address of the machine where the WSO2 BAM instance is running. The statistics summary datasource should point to the URL of the statistics database, and the username and password should be those of your MySQL server.
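As a rough sketch (element names are from APIM 1.9.x and may differ in other versions; the IP, port and credentials are placeholders), the usage-tracking section of api-manager.xml points at the BAM host like this:

```xml
<!-- <APIM_HOME>/repository/conf/api-manager.xml -->
<APIUsageTracking>
    <Enabled>true</Enabled>
    <!-- Use the IP of the machine where WSO2 BAM runs, and the BAM
         Thrift port (adjusted by any port offset on the BAM node) -->
    <ThriftPort>7611</ThriftPort>
    <BAMServerURL>tcp://192.168.0.20:7611/</BAMServerURL>
    <BAMUsername>admin</BAMUsername>
    <BAMPassword>admin</BAMPassword>
</APIUsageTracking>
```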

9. Locating Database Driver 

Another important point that you may have missed is locating the MySQL JDBC driver in the product. In the setup described here, since we are using MySQL Server, we need the connector to perform the database calls. Therefore, in every WSO2 product you are using here, you need to place the MySQL JDBC driver jar file in the <PRODUCT_HOME>/repository/components/lib folder.
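The copy step can be scripted. The sketch below builds a scratch layout under a temporary directory so it is runnable as-is; in a real setup you would point DRIVER_JAR and the product home paths at your actual locations:

```shell
#!/bin/sh
# Sketch: copy the MySQL JDBC driver into each WSO2 product's
# repository/components/lib folder. All paths below are hypothetical
# examples built under a temp directory so the script runs anywhere.
BASE="$(mktemp -d)"
DRIVER_JAR="$BASE/mysql-connector-java-5.1.35-bin.jar"
: > "$DRIVER_JAR"   # stand-in for the real driver jar you downloaded

for PRODUCT_HOME in "$BASE/wso2am-1.9.0" "$BASE/wso2bam-2.5.0" "$BASE/wso2is-5.0.0"; do
    LIB_DIR="$PRODUCT_HOME/repository/components/lib"
    mkdir -p "$LIB_DIR"          # make sure the lib folder exists
    cp "$DRIVER_JAR" "$LIB_DIR/" # place the driver jar
    echo "copied driver to $LIB_DIR"
done
```

Remember to restart each server after placing the driver so it gets picked up.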

10. SVN-based deployment synchronization

In the distributed setup we usually cluster the Gateway nodes (sometimes the Key Manager as well) into one internal cluster domain with a manager node and a set of worker nodes. This requires setting up the load balancing configuration for the gateway, and for that, enabling deployment synchronization on the manager node is essential. The Registry-based deployment synchronizer does not work for WSO2 products based on Carbon 4.2.0 onwards, so we use SVN here. The deployment synchronizer uses this configuration to identify the manager and synchronize deployment artifacts across the nodes of a cluster. The WSO2 documentation on configuring SVN-based deployment synchronization will guide you through this process.
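As a sketch (the SVN URL and credentials below are placeholders you replace with your own repository details), the deployment synchronizer section of the manager node's carbon.xml looks roughly like this; on worker nodes AutoCommit is typically set to false:

```xml
<!-- <PRODUCT_HOME>/repository/conf/carbon.xml on the manager node -->
<DeploymentSynchronizer>
    <Enabled>true</Enabled>
    <AutoCommit>true</AutoCommit>     <!-- only the manager commits artifacts -->
    <AutoCheckout>true</AutoCheckout>
    <RepositoryType>svn</RepositoryType>
    <SvnUrl>http://svn.example.com/repos/wso2</SvnUrl>
    <SvnUser>svnuser</SvnUser>
    <SvnPassword>svnpassword</SvnPassword>
    <SvnUrlAppendTenantId>true</SvnUrlAppendTenantId>
</DeploymentSynchronizer>
```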

11. Updating known hosts

Make sure to map the host names to the host IP addresses on the machines from which you wish to access the instances of the cluster setup. To do that, open the hosts file in the /etc folder of the Ubuntu machine you are using and add the IP addresses and the host names.

vi /etc/hosts

For that you must be logged in as root.
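For example, the entries might look like this (the IP addresses and host names below are made-up placeholders; use your own):

```
# /etc/hosts
192.168.0.10    apim-gateway
192.168.0.11    apim-keymanager
192.168.0.12    apim-publisher
192.168.0.13    apim-store
192.168.0.20    bam-server
```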

12. Configuring the databases: 

There are three main databases in the cluster deployment setup that we need to configure. Those are the API Manager database, the Registry database and the User Management database. If you are using MySQL Server, you should create each database and run the corresponding MySQL script on it to create the database schema.

mysql> \. mysql.sql

API Manager database : <APIM_HOME>/dbscripts/apimgt/mysql.sql
Registry database : <APIM_HOME>/dbscripts/mysql.sql
User Management database : <APIM_HOME>/dbscripts/mysql.sql
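For example, assuming hypothetical database names apimgtdb, regdb and userdb, the session would look roughly like this:

```sql
-- inside a mysql session started with: mysql -u root -p
CREATE DATABASE apimgtdb;
USE apimgtdb;
SOURCE <APIM_HOME>/dbscripts/apimgt/mysql.sql;

CREATE DATABASE regdb;
USE regdb;
SOURCE <APIM_HOME>/dbscripts/mysql.sql;

CREATE DATABASE userdb;
USE userdb;
SOURCE <APIM_HOME>/dbscripts/mysql.sql;
```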

Other than that, you need to configure the statistics database for publishing statistics; for this you only need to create the database. When the WSO2 BAM instance runs, it executes the relevant queries and creates the database schema.

MySQL remote settings:

For all the instances in the cluster setup there needs to be one API Manager database, one Registry database and one User Management database. Therefore we should set the configuration by binding the address where the database is located in each of the instances.

For that, open the my.cnf file in the /etc/mysql directory and give the IP where the database is located as the bind address. Then log into MySQL as root and grant the database users permission to read and write to the database.
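For example (the IP address below is a placeholder for the database host's own IP):

```
# /etc/mysql/my.cnf
[mysqld]
bind-address = 192.168.0.30
```

Restart the MySQL service after changing this so the new bind address takes effect.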

Grant permission :
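A hedged example of granting access, assuming a hypothetical user apimuser and database apimgtdb (replace the names, password and host pattern with your own; '%' allows connections from any host):

```sql
-- run as the MySQL root user
CREATE USER 'apimuser'@'%' IDENTIFIED BY 'apimpassword';
GRANT ALL PRIVILEGES ON apimgtdb.* TO 'apimuser'@'%';
FLUSH PRIVILEGES;
```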


You can check the access from the remote host with this command.

mysql -u root -h <database_host_ip> -p

This will give the remote access to the databases in the cluster setup.

Other than that, make sure the database names, URLs and the username and password are correctly configured in the master-datasources.xml file where you add the XML configurations for the databases.

By concentrating on the above points you can reduce the time taken to do the configuration, and you can get a better understanding of the cluster setup you are implementing.