
One-Click E-Business Suite Cloning on AWS Using Systems Manager

Cloning an Oracle E-Business Suite system is a complicated process and, depending on the environment topology, can involve many manual steps.

Partial automation of the E-Business Suite cloning procedure is often done with local scripts. In this post, I’ll explain how to use Systems Manager (SSM) in AWS to orchestrate and execute the cloning procedure without logging on to your servers.

Before we start, there are a few assumptions:

  • You have a basic understanding of AWS Systems Manager and its documents.
  • Your Oracle E-Business Suite environment runs in AWS Cloud.
  • This is not a new system setup, but a cloning process of E-Business Suite applications and databases.
  • Security, networking, storage, compute and IAM fundamentals are already established in AWS.
  • Underlying cloning scripts should have been set up and verified as working.
  • Systems Manager agent is already installed and running on the target EC2 servers.
  • The necessary IAM roles are granted (e.g., S3 access and SSM roles); a minimal sketch follows this list.
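
To make that last assumption concrete, here is a minimal sketch of the IAM setup. The role name is hypothetical and the bucket name is a placeholder, so adapt both to your environment:

# Attach the SSM core managed policy to the (hypothetical) EC2 instance role
aws iam attach-role-policy \
  --role-name EBS-Clone-EC2-Role \
  --policy-arn arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore

# Grant the same role read/write access to the clone artifact bucket
aws iam put-role-policy \
  --role-name EBS-Clone-EC2-Role \
  --policy-name EBS-Clone-S3-Access \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:ListBucket"],
      "Resource": ["arn:aws:s3:::your_s3_bucket_name",
                   "arn:aws:s3:::your_s3_bucket_name/*"]
    }]
  }'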

Basic E-Business Suite cloning workflow

Following is a basic E-Business Suite cloning workflow:

  1. Log on to the source app server and run adpreclone.
  2. Initiate tar/zip backup of app tier.
  3. Log on to the source DB server and run adpreclone.
  4. Initiate tar/zip backup of DB Oracle home and rman backup.
  5. Transfer artifacts from the source application server to the target application server.
  6. Transfer artifacts from the source database server to the target database server.
  7. Shut down the target application.
  8. Shut down the target database.
  9. Set up the Oracle Home software and create the database on the target database server.
  10. Set up the application by running adcfgclone on the target application server.
  11. Run post-clone steps.
  12. Perform sanity testing.

As you can see, the entire process can entail logging on to different servers and running a bunch of commands to create the clone environment.

Using AWS SSM, it’s possible to accomplish the entire cloning task in the following steps:

  1. Log on to the AWS console.
  2. Execute the “Clone Automation Runbook” in Systems Manager (SSM).
  3. Monitor the process and optionally look at clone log files in the AWS console or S3.
  4. Perform sanity testing.

That’s it. Four steps instead of twelve, and we didn’t even log on to the servers.

Sounds too good to be true? Read on to find out.

A brief overview of AWS Systems Manager (SSM)

Please remember that this is just an overview of SSM in the context of this post; SSM offers a myriad of options and functionalities. You can find more information in the official AWS Systems Manager documentation.

Systems Manager is an AWS cloud-native tool that lets you run commands on EC2 instances without logging on to them.

It requires an agent to be installed on the EC2 servers, along with the necessary IAM privileges and roles.
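
If you want to verify the agent before proceeding, a quick check on a systemd-based Linux instance (or from the CLI) might look like this; the instance ID is a placeholder:

# On the instance: confirm the SSM agent service is running
sudo systemctl status amazon-ssm-agent

# From the CLI: confirm the instance is registered as an SSM managed node
aws ssm describe-instance-information \
  --filters "Key=InstanceIds,Values=i-0123456789abcdef0"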

The interesting bit about Systems Manager is that it provides automation and orchestration via its documents, known as Run Command documents, Automation runbooks, and so on.

The Run Command document is what’s used to run commands on EC2 instances without logging on to them. For example, you can create a Run Command document that downloads a script from S3 and runs it on your EC2 instance. When executed, the document asks for inputs such as the EC2 instance ID, output storage options, and SNS notification options.
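
The console isn’t the only way to execute such a document; here is a hedged sketch of the equivalent AWS CLI call, where the document name and instance ID are placeholders:

# Run a Run Command document against an instance from the AWS CLI
aws ssm send-command \
  --document-name "Prepare_Source_App_For_Clone" \
  --instance-ids "i-0123456789abcdef0" \
  --comment "EBS clone: prepare source app tier" \
  --output-s3-bucket-name "your_s3_bucket_name"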

You can also assemble related Run Command documents into an orchestrated workflow using an Automation runbook.

Automation runbooks

The Automation runbook is where all the inputs required for a Run Command document are nested as input values. Once set up properly, it provides an interface to run the Run Command document without any manual intervention. Runbooks also offer numerous features such as branched workflows and approvals; for the purpose of this post, we’ll discuss only a few related options (an approval-gate sketch follows).
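
As one illustration of those features, a minimal approval gate could be placed ahead of the cloning steps. This is a sketch only; the SNS topic and approver ARNs are placeholders:

# Hypothetical runbook step: pause until an approver signs off
- name: Approve_Clone_Start
  action: 'aws:approve'
  timeoutSeconds: 3600
  inputs:
    NotificationArn: 'arn:aws:sns:us-east-1:111122223333:clone-approvals'
    MinRequiredApprovals: 1
    Approvers:
      - 'arn:aws:iam::111122223333:user/dba-lead'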

In the context of E-Business Suite cloning, then, I’ll provide an example of using Run Command documents to run the major cloning tasks. As mentioned in the assumptions section, the scripts that perform the actual cloning work should already be set up, verified, and working.

The list of cloning tasks is as follows:

  1. Prepare source system applications for cloning.
  2. Prepare source system database for cloning.
  3. Transfer source artifacts to an S3 bucket.
  4. Download source artifacts from the S3 bucket onto the target servers.
  5. Prepare target database Oracle Home and database.
  6. Prepare target applications.
  7. Run post-clone steps.

Each of these steps will have a corresponding script on the server to perform the task.

Each script will have a corresponding Run Command SSM document, which, when executed via the AWS console (after providing the necessary inputs), will execute the script on the server.
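
Assuming each document’s JSON is saved in a local file, registering it with SSM can be done along these lines; the document name and file name are examples, not fixed values:

# Register a Run Command document with SSM from its JSON definition
aws ssm create-document \
  --name "Prepare_Source_App_For_Clone" \
  --document-type "Command" \
  --document-format JSON \
  --content file://Prepare_Source_App_For_Clone.json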

For final orchestration, add all the above-mentioned Run Command documents to an Automation Runbook with specific input values to provide a one-click E-Business Suite cloning interface.

Example

Following is a demonstration of Step 1: preparing the source application tier for cloning.

A sample source app tier preparation script, named Prepare_Source_App_For_Cloning.sh, is shown below:

#!/bin/bash

# Variables
LOGD=/u02/Scripts/Log/oracle_logs/`date +%F`
LOGF=$LOGD/`(basename $0)`.`date +%d%m%y%H%M%S`.log
BACKD=/u01/APP_Tar_Files
AP_BASE=/u02/oracle/vision
S3_BUCKET=your_s3_bucket_name


function Run_Adpreclone {
echo "Started adpreclone execution at `date`"
echo "Sourcing run file system env file"
cd $AP_BASE
. EBSapps.env run
time perl $ADMIN_SCRIPTS_HOME/adpreclone.pl appsTier << EOF
<APPS_PASSWORD>
<WLS_PASSWORD>
EOF
Y=$?
echo "adpreclone execution completed at `date` with status code of $Y"
}

function Create_Tar_File {
echo "Starting tar file creation at `date`"
cd $AP_BASE
FL_NAME=Vision_app_tar_`date +%d%m%y%H%M%S`.tar.gz
time tar cvfz $BACKD/$FL_NAME .
X=$?
echo "Tar file creation completed at `date` with status code of $X"
}

function Upload_To_S3 {
echo "Starting tar file copy to S3 at `date`"
cd $BACKD
time aws s3 cp $FL_NAME s3://$S3_BUCKET/`date +%F`/$FL_NAME
Z=$?
echo "Copy completed at `date` with status code of $Z"
}

#main
mkdir -p $LOGD
# Redirect stdout and stderr to the log file (order matters: stdout first)
exec > $LOGF 2>&1
Run_Adpreclone
Create_Tar_File
Upload_To_S3

As you can see, this script runs adpreclone.pl on the source app tier, creates a tar file, uploads the tar file to an S3 bucket, and logs all output to a directory on the local server.

Following is sample JSON content of a “Run Command” document for this script:

{
  "schemaVersion": "2.2",
  "description": "Execute Script to prepare source App for Cloning",
  "parameters": {
    "commandLine": {
      "description": "(Required) Script to be executed",
      "type": "String",
      "default": "runuser -l applmgr -c 'sh /u02/Scripts/Prepare_Source_App_For_Cloning.sh'"
    },
    "executionTimeout": {
      "description": "(Optional) The time in seconds for a command to complete before it is considered to have failed. Default is 3600 (1 hour). Maximum is 28800 (8 hours).",
      "type": "String",
      "default": "7200",
      "allowedPattern": "([1-9][0-9]{0,3})|(1[0-9]{1,4})|(2[0-7][0-9]{1,3})|(28[0-7][0-9]{1,2})|(28800)"
    }
  },
  "mainSteps": [
    {
      "precondition": {
        "StringEquals": [
          "platformType",
          "Linux"
        ]
      },
      "action": "aws:runShellScript",
      "name": "runShellScript",
      "inputs": {
        "runCommand": [
          "",
          "directory=$(pwd)",
          "export PATH=$PATH:$directory",
          "  ",
          ""
        ],
        "timeoutSeconds": ""
      }
    }
  ]
}

When executed via the AWS console, this document connects to the source server through the SSM agent and runs the command line, which points to the script.

However, when you run this Run Command document, you still have to provide the instance ID of your source EC2 instance.

To automate this further, we create an Automation runbook that uses this Run Command document and has the instance ID baked in as an input parameter:

description: |-
  *This automation document prepares the source Application tier for cloning*  

  ---
  # Following run command document is used

  1. Prepare_Source_App_For_Clone
schemaVersion: '0.3'
mainSteps:
  - name: Prepare_Source_App_For_Clone
    action: 'aws:runCommand'
    inputs:
      DocumentName: Prepare_Source_App_For_Clone
      Targets:
        - Key: InstanceIds
          Values:
            - i-sample-id

Now this is a document you can execute without providing the source instance ID. It will connect to the source system and run the script without any manual intervention.
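
The runbook can also be started from the CLI; here is a sketch, assuming the automation document was saved under a hypothetical name:

# Start the automation runbook outside the console
aws ssm start-automation-execution \
  --document-name "Prepare_Source_App_Tier_Automation"

The call returns an AutomationExecutionId, which comes in handy for the monitoring shown later.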

To orchestrate the complete cloning process, create scripts along with their corresponding Run Command documents for each of the steps mentioned earlier, and tie them together in an Automation runbook. For brevity, the following is sample YAML content of the final EBS cloning “Automation Runbook”:

description: |-
*This automation document clones the entire Vision UAT stack*

  ---
  # Sequence of actions. 
  The following run command documents will be executed sequentially:

  1. Prepare_Source_App_Tier_and_Transfer_to_S3
  2. Prepare_Source_DB_Tier_and_Transfer_to_S3  
  3. Download-backups-from-S3
  4. Clone-UAT-Database
  5. EBS-Vision-DB-Run-Chkdb
  6. Clone-UAT-Applications
schemaVersion: '0.3'
mainSteps:
  - name: Prepare_Source_App_Tier_and_Transfer_to_S3
    action: 'aws:runCommand'
    inputs:
      DocumentName: Prepare_Source_App_Tier_and_Transfer_to_S3
      Targets:
        - Key: InstanceIds
          Values:
            - i-app-tier-instance-id
  - name: Prepare_Source_DB_Tier_and_Transfer_to_S3
    action: 'aws:runCommand'
    inputs:
      DocumentName: Prepare_Source_DB_Tier_and_Transfer_to_S3
      Targets:
        - Key: InstanceIds
          Values:
            - i-db-tier-instance-id
  - name: Download_backups_from_S3
    action: 'aws:runCommand'
    inputs:
      DocumentName: Download-backups-from-S3
      Targets:
        - Key: InstanceIds
          Values:
            - i-app-tier-instance-id
  - name: Clone_UAT_Database
    action: 'aws:runCommand'
    inputs:
      DocumentName: Clone-UAT-Database
      Targets:
        - Key: InstanceIds
          Values:
            - i-db-tier-instance-id
  - name: Check_UAT_Database_Status
    action: 'aws:runCommand'
    inputs:
      DocumentName: EBS-Vision-DB-Run-Chkdb
      Targets:
        - Key: InstanceIds
          Values:
            - i-db-tier-instance-id
  - name: Clone_UAT_Applications
    action: 'aws:runCommand'
    inputs:
      DocumentName: Clone-UAT-Applications
      Targets:
        - Key: InstanceIds
          Values:
            - i-app-tier-instance-id

End Result

When you hit the Execute Automation button in the Systems Manager section of the AWS console, this runbook runs all of the steps listed in the document. It will not ask for instance IDs, since they are already provided as input parameters to each ‘aws:runCommand’ action.

Each ‘aws:runCommand’ action corresponds to a Run Command document, and that document, in turn, runs the script on the server for that task.
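
If you prefer to monitor outside the console, the execution can be polled from the CLI; the execution ID below is a placeholder for the one returned when the runbook was started:

# Check overall status and per-step results of the automation
aws ssm get-automation-execution \
  --automation-execution-id "4105a4fc-f944-11e6-9d32-0123456789ab"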

Conclusion

Initially, it appears a bit complicated, but once you get the hang of how the Automation runbook works, the next question you’ll ask yourself is, “What else can I run as a one-click setup?”

I found the following one-click setups useful for my test environments:

  1. Starting up the entire test lab stack, including EC2 instances, database listener, database, and application services, in a specified order (see the sketch after this list).
  2. Shutting down the entire test lab stack, including application services, database listener, database, and EC2 instances, in that order.
  3. Running status check scripts on the database and application tiers.
  4. Running status checks for standby databases.
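
For the start-up case, SSM Automation even has a native action for the EC2 portion, so the first step doesn’t need a script at all. A minimal sketch with placeholder instance IDs:

# Hypothetical first runbook step: start the EC2 instances before the
# listener, database, and application services are brought up
- name: Start_EC2_Instances
  action: 'aws:changeInstanceState'
  inputs:
    InstanceIds:
      - i-db-tier-instance-id
      - i-app-tier-instance-id
    DesiredState: running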


You can take this even further by integrating AWS CLI commands to leverage native cloud features for provisioning new resources, such as AMI builds, storage snapshots, or syncing between Availability Zones or Regions.
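
For example, a shell step in a runbook could take a pre-clone AMI or volume snapshot; both commands below are sketches with placeholder IDs:

# Create an AMI of the app tier without rebooting the instance
aws ec2 create-image \
  --instance-id i-app-tier-instance-id \
  --name "ebs-app-pre-clone-$(date +%F)" \
  --no-reboot

# Snapshot the database storage volume
aws ec2 create-snapshot \
  --volume-id vol-0123456789abcdef0 \
  --description "EBS DB volume pre-clone snapshot"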

Make sure to sign up for updates so you don’t miss the next post, and happy automating!