Tomcat 7 installation via yum on CentOS/RHEL



Before we dive into the steps for Tomcat 7 installation, let's first talk a little about what Apache Tomcat is.
Apache Tomcat is an open-source web server and servlet container developed by the Apache Software Foundation (ASF), and probably the most commonly used one today. Tomcat is used to run Java applications - Java servlets, JSP, WebSocket, etc.

By default, it listens on HTTP port 8080.

In this tutorial, we will learn how to install Tomcat 7 on CentOS, install the admin and manager packages, and use the manager GUI to upload, deploy, and remove a Java .war file.

Step 1 - Check for Java Version

Tomcat 7 needs Java SE 6 or later.

Check the existing Java version on your machine using the following command -

~ java -version



Tutorial - How To Install Java on CentOS/RHEL

Step 2 - Install Tomcat

Use the following command -

~ sudo yum install tomcat7



Answer yes to the confirmation prompt to install all the related dependencies for Tomcat 7.
All the important Tomcat files will reside under /usr/share/tomcat7/.

If you already have a Tomcat application ready to run, you can simply place its file in the /usr/share/tomcat7/webapps/ folder.


Step 3 - Start the Tomcat Service
So far you have only installed the Tomcat server; the Tomcat service is not running yet.

Start the Tomcat server to be able to use it -

~ sudo service tomcat7 start

To check the status of the Tomcat server -

~ sudo service tomcat7 status



Step 4 - Access Tomcat in a Browser
To access your Tomcat server in a web browser, take your machine's IP address and use port 8080 -

http://<IP Address>:8080

You should be able to see something like this -




Step 5 - Install ROOT page and Admin packages
There are some admin packages that are available to help you with the deployment of your Java applications and much more.

To install the packages, use the following command -

~ sudo yum install tomcat7-webapps tomcat7-admin-webapps



Answer yes to the confirmation prompt.

The above command will install the ROOT page, some sample applications, and the manager application to the /usr/share/tomcat7/webapps folder.

Step 6 - Create Users
To be able to use the manager application we need to log in to the Tomcat server. For this we will create users.

We need to edit the tomcat-users.xml file.

~ vi /usr/share/tomcat7/conf/tomcat-users.xml

This file already contains instructions on how to add the configuration details; read them carefully.

Everything is written inside the <tomcat-users> ... </tomcat-users> tag.

We need to create users with the manager-gui and admin-gui roles to be able to access the manager application we installed in the step above.

<!-- manager user can access only manager section, as specified in the roles -->
<role rolename="manager-gui" />
<user username="manager" password="_SECRET_PASSWORD_" roles="manager-gui" />
<!-- admin user can access manager and admin sections, as specified in the roles -->
<role rolename="admin-gui" />
<user username="admin" password="_SECRET_PASSWORD_" roles="manager-gui,admin-gui" />

Step 7 - Restart the Tomcat Server
Once done editing the tomcat-users.xml file, you will have to restart the server.

~ sudo service tomcat7 restart



Step 8 - Open the Manager GUI
After restarting the server, the last step is to go to your web browser and open the server URL as shown in Step 4.

Click the Manager button on the right. You will be prompted to log in with the credentials you created in Step 6.

You should see something like this -





This manager can be used to manage your Java applications. You can use this simple GUI to start, stop, reload, deploy and undeploy your applications. You can also run some diagnostics on your application.
And at the bottom, you can also see information about your server.

That’s it! You are now set to deploy your applications on Tomcat Server! :)



How to Install Oracle Java on CentOS/RHEL

A quick introduction -
Java is a common and popular software platform that lets you run Java applications and applets.
You can find three different variations of Java Platform on the official website - Standard Edition (SE), Enterprise Edition (EE) and Micro Edition (ME). We will be using SE in this tutorial.


Now the Java SE Platform itself has two different packages that can be installed - the Java Runtime Environment (JRE) and the Java Development Kit (JDK).
JRE - This is used to run the compiled Java applications.
JDK - As the name suggests, it is used for writing, developing/building and compiling the Java applications.
We will be covering both the installations.


  1. Installation of Oracle Java 8 JRE

Step 1.1 -
First, go to the Official Oracle Java 8 JRE Downloads Page and download the correct .rpm package using the wget command.

Note: You can choose any other version too, according to your requirements, just copy the respective download link and replace it in the below command.


wget --no-cookies --no-check-certificate --header "Cookie: gpw_e24=http%3A%2F%2Fwww.oracle.com%2F; oraclelicense=accept-securebackup-cookie" "http://download.oracle.com/otn-pub/java/jdk/8u102-b14/jre-8u102-linux-x64.rpm"

Step 1.2 -

Now we need to install the rpm package.


sudo yum localinstall jre-8u102-linux-x64.rpm

After this, Java should be installed at /usr/java/jre1.8.0_102/bin/java
Java 8 JRE installation is done!

2. Installation of Oracle Java 8 JDK
Step 2.1 -
First, go to the Official Oracle Java 8 JDK Downloads Page and download the correct .rpm package using the wget command.
Note: You can choose any other version too, according to your requirements, just copy the respective download link and replace it in the below command.


wget --no-cookies --no-check-certificate --header "Cookie: gpw_e24=http%3A%2F%2Fwww.oracle.com%2F; oraclelicense=accept-securebackup-cookie" "http://download.oracle.com/otn-pub/java/jdk/8u102-b14/jdk-8u102-linux-x64.rpm"
Step 2.2 -
Now we need to install this rpm package.

sudo yum localinstall jdk-8u102-linux-x64.rpm

After this, Java should be installed at /usr/java/jdk1.8.0_102/bin/java
Java 8 JDK installation is done!

3. Using the alternatives command
The alternatives command can be used to choose the default java version -

sudo alternatives --config java


Parse a text file uploaded to S3 and add its content to DynamoDB

We are going to read a text file as soon as it is uploaded to S3 and add its data into DynamoDB using Lambda (Node.js).

Things we will need to do:-
1. Create the S3 bucket and DynamoDB table.
2. Create a role (S3 access and DynamoDB access) for the lambda function that you will be creating.
3. Create the lambda function with an S3 object-creation event source.
4. Test.

Before we start, here are the names I have used in this example -

S3 bucket Name: s3-to-lambda-object-creation-01

 Lambda Function Name: s3-dynamodb-lambda-02

Dynamodb Table Name: Movies

Dynamodb Table Attributes:
Key - year,
Other Attribute - title.


Step 1:-
Before we dive into how to use lambda functions and their details, we need to have an S3 bucket and a DynamoDB table in place for the current problem statement.
Creating these two is pretty straightforward. You can refer to the links below :-
S3 :-

Dynamodb :-

Step 2:-
Before you start creating the lambda function, you need to assign it an Execution Role so that lambda has permission to access S3 and DynamoDB and make changes to them.
We will be giving it S3 access and DynamoDB access.
We will also give it permissions to write logs to CloudWatch.

The policy is -

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Stmt146834439900",
            "Effect": "Allow",
            "Action": [
                "dynamodb:PutItem"
            ],
            "Resource": [
                "arn:aws:dynamodb:us-east-1:64236701746:table/Movies"
            ]
        },
        {
            "Sid": "Stmt146834444000",
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogGroup",
                "logs:CreateLogStream",
                "logs:PutLogEvents"
            ],
            "Resource": [
                "arn:aws:logs:*:*:*"
            ]
        },
        {
            "Sid": "Stmt146834847000",
            "Effect": "Allow",
            "Action": [
                "s3:GetObject"
            ],
            "Resource": [
                "arn:aws:s3:::<bucket_name>/*"
            ]
        }
    ]
}

Replace the DynamoDB table ARN and the <bucket_name> placeholder in the policy with your own values.

Step 3:-
Now that you have your role in place, we can create the lambda function -

Step 3.1 -
Select Blueprint - You can skip this step since we are going to write our own custom code.

Step 3.2 -

Configure triggers -


Select S3 from the drop-down list.

After selecting S3 you will be asked to fill in the bucket name and other details -


Make sure you select the Object Created option for the event type.
You can enable the trigger now, or enable it later after creating the function.

Step 3.3 -
Configure function - Here we will define the function, including writing its code.


Below the code entry type field, insert your code.

Code -

'use strict';
console.log('Loading function');

let aws = require('aws-sdk');
let s3 = new aws.S3({ apiVersion: '2006-03-01' });

exports.handler = (event, context, callback) => {
    const bucket = event.Records[0].s3.bucket.name;
    const key = decodeURIComponent(event.Records[0].s3.object.key.replace(/\+/g, ' '));
    const params = {
        Bucket: bucket,
        Key: key
    };
    var docClient = new aws.DynamoDB.DocumentClient();
    var table = "Movies";
  

    s3.getObject(params, (err, data) => {
        if (err) {
            console.log(err);
            const message = `Error getting object ${key} from bucket ${bucket}. Make sure they exist and your bucket is in the same region as this function.`;
            console.log(message);
            callback(message);
        } else {
        
            //get data in the file using data.Body
            var file_data=String(data.Body);

            // split the data on the newline character so each line becomes an array element
            // (blank lines are filtered out so a trailing newline does not produce an empty row)
            var split_data_newline = file_data.split("\n").filter(function(line) { return line.trim() !== ""; });
            var final_split_data = []; // 2D array with the final split data from the file
            var temp_comma_split = []; // temp array for storing the ","-separated values of each line
            var i, j = 0;

            // iterate through the array whose elements are full lines; split each element on ","
            // collect the ","-separated items into a temp array (temp_comma_split)
            // then, in the inner loop, copy each element of the temp array into the 2D array (final_split_data)

            for(i=0; i<split_data_newline.length; i++) {
                final_split_data[i]= [];
                temp_comma_split = split_data_newline[i].split(",");
           
                for(j=0; j<temp_comma_split.length; j++) {
                    final_split_data[i][j]=temp_comma_split[j]; 
               }
            }
    

            // Iterate through the final 2D array with data to add into DB fields

            for(i=0; i<final_split_data.length; i++) {
                var year = final_split_data[i][0];
                var title = final_split_data[i][1];

                var DB_params = {
                    TableName:table,
                    Item:{
                        "year": year,
                        "title": title,
                    }
                };

                console.log("Adding a new item...");
                docClient.put(DB_params, function(err, data) {
                    if (err) {
                        console.error("Unable to add item. Error JSON:", JSON.stringify(err, null, 2));
                    } else {
                        console.log("Added item:", JSON.stringify(data, null, 2));
                    }
                });
            }
        }
    });
};




The code assumes that the input file looks something like this -
2011,baghban
2012,dhoom
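As a quick local sanity check, the splitting step can be exercised with plain Node against the sample data. This is a standalone sketch that mirrors only the parsing logic, not the full handler:

```javascript
// Standalone sketch of the parsing step: newline split, then comma split.
// The sample string mirrors the assumed input file shown above.
var file_data = "2011,baghban\n2012,dhoom\n";

var rows = file_data
    .split("\n")                                              // one element per line
    .filter(function (line) { return line.trim() !== ""; })   // drop blank lines
    .map(function (line) { return line.split(","); });        // split each line on ","

console.log(rows); // [ [ '2011', 'baghban' ], [ '2012', 'dhoom' ] ]
```

Filtering blank lines matters here: a trailing newline in the uploaded file would otherwise produce an empty row and a DynamoDB put with undefined attributes.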


After the code, AWS will pre-populate the Handler field for you. It has the form filename.handler-function.
The handler is the function in your code that AWS Lambda invokes when the service executes your code.

Next it will ask you for a role to attach; simply choose the role you created in Step 2.
The remaining settings can be left as they are, since our use-case doesn't require anything special.
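For example, if the code were saved in a file named index.js (a hypothetical name used only for illustration) and it exports a function called handler, the Handler field would read index.handler:

```javascript
// index.js - with this file name, the Handler field is "index.handler".
var handler = function (event, context, callback) {
    console.log("invoked with:", JSON.stringify(event));
    callback(null, "done"); // tell Lambda the invocation succeeded
};
exports.handler = handler;
```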

Step 3.4 -

Review - Review your function and hit Create Function button.

 Once done you should see something like this under your Triggers Tab -


Step 4:-

For Testing the setup -
Take the input file shown earlier and upload it to your S3 bucket.
You should see the two rows getting added to your DynamoDB table.
Also go to your CloudWatch Logs - you should see a new Log Group created with the same name as your Lambda function. Inside the group you can see all your logs accumulating.

Logs should look something like this- 


Launch an EC2 instance for every new row added into DynamoDB Table using Lambda

Problem Statement:- As soon as a row gets added to the DynamoDB table, launch an EC2 instance and use the Key column value of the table as the tag name for the instance.
Solution:-
Things we will need to do:-
1. Create the DynamoDB table.
2. Create a role (EC2 access and DynamoDB access) for the lambda function that you will be creating.
3. Create the lambda function with a DynamoDB trigger.
4. Test.
Before we start, here are the names I have used in this example -
Lambda Function Name: Dynamo_Trigger_Ec2_Function_v2
Dynamodb Table Name: Dynamo_Trigger
Dynamodb Table Attributes:
Key - Item_Id
Step 1:-
Before we dive into how to use lambda functions and their details, we need to have a DynamoDB table in place for the current problem statement.
Creating a table is pretty straightforward. You can refer to the links below :-
Dynamodb :-

Step 2:-
Before you start creating the lambda function, you need to assign it an Execution Role so that lambda has permission to access DynamoDB and EC2.
We will be giving it EC2 access and DynamoDB access.
We will also give it permissions to write logs to CloudWatch.


The policy is -
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Stmt146917013000",
            "Effect": "Allow",
            "Action": [
                "ec2:CreateTags",
                "ec2:RunInstances"
            ],
            "Resource": [
                "*"
            ]
        },
        {
            "Sid": "Stmt149170813000",
            "Effect": "Allow",
            "Action": [
                "dynamodb:GetItem",
                "dynamodb:GetRecords"
            ],
            "Resource": [
                "arn:aws:dynamodb:us-east-1:64237017496:table/Dynamo_Trigger"
            ]
        },
        {
            "Sid": "Stmt146917088600",
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogGroup",
                "logs:CreateLogStream",
                "logs:PutLogEvents"
            ],
            "Resource": [
                "arn:aws:logs:*:*:*"
            ]
        }
    ]
}

Replace the DynamoDB table ARN in the policy with your own table's ARN.

Step 3:-
Now that you have your role in place, we can create the lambda function -
Step 3.1 -
Select Blueprint - You can skip this step since we are going to write our own custom code.
Step 3.2 -
Configure triggers -

Select DynamoDB from the drop-down list.
After selecting DynamoDB you will be asked to fill in the table name and other details -

Make sure you select the right table for the trigger.
You can enable the trigger now, or enable it later after creating the function.
Step 3.3 -
Configure function - Here we will define the function, including writing its code.

Below the code entry type field, insert your code.
Code -
'use strict';
console.log('Loading function');

exports.handler = (event, context, callback) => {
    //console.log('Received event:', JSON.stringify(event, null, 2));
    //var docClient = new AWS.DynamoDB.DocumentClient();
    var tag_name;
    var event_name;
    var AWS = require('aws-sdk');
    AWS.config.region = 'us-east-1';
    var ec2 = new AWS.EC2();
    var params = {
        ImageId: 'ami-a4827dc9', /* Use this AMI ID for now */
        MaxCount: 1,
        MinCount: 1,
        DryRun: false,
        EbsOptimized: false,
        InstanceType: 't2.micro',
        KeyName: 'US-EAST-CLOUDAPI-LB-key',
        Monitoring: {
            Enabled: false
        },
        Placement: {
            Tenancy: 'default'
        },
        SecurityGroupIds: [
            'sg-436abe3b'
        ],
        SubnetId: 'subnet-5877fa2e'
    };

    event.Records.forEach((record) => {
        console.log(record.eventID);
        event_name = record.eventName;

        switch(event_name){

            case "INSERT":
                console.log('DynamoDB Record: %j', record.dynamodb);
                tag_name = record.dynamodb.Keys.Item_Id.S; // adding instance tag as the same key column value from the db.
                console.log(tag_name);

                // Create the instance
                ec2.runInstances(params, function(err, data) {
                    if (err) { console.log("Could not create instance", err); return; }

                    var instanceId = data.Instances[0].InstanceId;
                    console.log("Created instance", instanceId);

                    // Add tags to the instance
                    params = {Resources: [instanceId], Tags: [
                        {Key: 'Name', Value: tag_name}
                    ]};
                    ec2.createTags(params, function(err) {
                        console.log("Tagging instance", err ? "failure" : "success");
                    });
                });

                break;

            case "REMOVE":
                console.log('DynamoDB Record: %j', record.dynamodb);
                //console.log('key value: %j', record.dynamodb.Keys.Item_Id.S);
                console.log("Since it's a REMOVE event, there is nothing to do.");

        }
       
    });
    callback(null, `Successfully processed ${event.Records.length} records.`);
  

};
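The key extraction used for the instance tag can be tried locally against a hand-built record. The object below is a minimal sketch of a DynamoDB Streams INSERT record, trimmed to just the fields the handler reads:

```javascript
// Minimal INSERT record, shaped like what a DynamoDB stream delivers.
var record = {
    eventID: "1",
    eventName: "INSERT",
    dynamodb: { Keys: { Item_Id: { S: "dummy" } } }
};

var tag_name = null;
if (record.eventName === "INSERT") {
    // Same expression the handler uses to pick the instance tag.
    tag_name = record.dynamodb.Keys.Item_Id.S;
}
console.log(tag_name); // "dummy"
```

Note that stream records wrap every attribute value in a type descriptor (here `S` for string), which is why the handler reads `.Keys.Item_Id.S` rather than `.Keys.Item_Id`.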



After the code, AWS will pre-populate the Handler field for you. It has the form filename.handler-function.
The handler is the function in your code that AWS Lambda invokes when the service executes your code.

Next it will ask you for a role to attach; simply choose the role you created in Step 2.
The remaining settings can be left as they are, since our use-case doesn't require anything special.
Step 3.4 -
Review - Review your function and hit Create Function button.
Once done you should see something like this under your Triggers Tab -

Step 4:-
For testing the setup -
Add a row to your table.
You should see an EC2 instance being launched with a tag matching the Key column value in your table.
For example, here I added Item_Id = dummy, so in my EC2 console I can see an instance launching with the tag "dummy".
Also go to your CloudWatch Logs - you should see a new Log Group created with the same name as your Lambda function. Inside the group you can see all your logs accumulating.
Logs should look something like this-