How durable are your EPM backups?

Summary

This article discusses requirements and solutions for backing up Oracle EPM applications. It highlights a new procedure available in Oracle EPM Cloud that helps meet requirements for keeping application backups viable over a long period of time.

Introduction

Backups are one of those things we don't think too much about until we need them; then they suddenly become very important. It's something we know we have to do: routines are set up to do it, and we hope it works when it's needed. If we're really diligent, we test the recovery process from time to time to ensure it works as expected.

One thing I've learned about backups is that they are not all the same. We take backups for different reasons and with different expectations. Saying "I have a backup" sounds good when checking it off a list, and it makes our functional application owners feel good. However, depending on the circumstances when a request comes in to restore a backup, we may find we are not as well prepared as we thought.

Different flavors of backups

What I've come to realize is that we take backups for different reasons and to satisfy different requirements.
  1. We take backups for operational purposes. This is the traditional backup we think about when we want to recover an application from a failure or mistake. Common examples are an application that becomes corrupted or a user who accidentally deletes something they weren't supposed to. In these scenarios a request is made to restore the application to its last backup in order to recover the application and allow users to get back to work.
  2. Sometimes we take backups as a safety precaution when doing work in the application or on the underlying platform. Common examples are applying patches, break fix work, or new development enhancements being implemented in production. The backups become part of the rollback plan in case something doesn't go as planned.
  3. Another reason for backups is to satisfy data retention requirements. Sometimes we take snapshots for point-in-time recovery, to bring an application back online "as it was" at the time the backup was taken. Depending on the purpose, the timeframe to retain these snapshots could extend over a considerable period, often years.
It's this last scenario that introduces challenges you wouldn't necessarily have to think about with the first two. Backups for operational recovery and backups taken before patching or development changes will be used in the near term; if they are not needed, they are typically discarded. We may even refer to these kinds of backups as stale after a while: they often take up a lot of space somewhere but are not of much use once the particular activity is completed. We wouldn't typically have much use for a backup of a Planning app two weeks after it was taken, because users have been entering new data and the backup is too old to be meaningful, except perhaps for recovering an artifact like a business rule. The third scenario, data retention, requires a lot more thought.

Real world example

I learned a good lesson about application snapshot retention soon after I came to work at GE. I had been onboard for a few weeks when I was copied on an email thread about a severity one Essbase ticket. Auditors were onsite working on a project and requested some information from prior years; in fact, they were looking back about five years. A request came in to restore a backup of the application from five years ago! At first I thought this was odd (who would have backups going back that far?), but I learned the application SLA required quarterly snapshots to be retained and archived after each close. The ops team had in fact been taking backups of the app, but in this case they were not able to restore it.

I got involved with the recovery effort and discovered they were not using a very good method for taking the snapshots. The backups were copies of the server's app folder, taken while the app was stopped and copied to an archive directory. The thought process was that if the app needed to be recovered, they would create a new app with the same name, copy the app directory onto the server, and access the application. Now, if you've worked with Essbase you know this isn't a best practice, but in the real world it could work; at least they thought it would. What they had not accounted for were Essbase version upgrades that made those archive copies incompatible with the current version of the software running on the server; remember, this was a copy from five years prior.

A number of days were spent on this task; ultimately we found an old copy of the install files for the version the snapshots were taken in. We spun up the old version on a dev server, recovered the apps, then upgraded them to the newer version, and the auditors were able to access and recover what they needed. Needless to say, the functional executive was not happy with the amount of time it took to bring the application online.

Architecting a better solution

As we moved forward as a team and began our endeavor to implement a shared service within GE running on Exalytics servers, it was up to me to address this requirement and come up with a solid solution to ensure we didn't run into a similar issue. I also found it wasn't just Essbase apps; there were some Planning apps too, and since we were going to be moving HFM onto the new Exa platform, we would have the same snapshot requirements for HFM apps. That one was going to be particularly tricky, since all HFM apps on a server reside in the same relational schema.

I discussed the requirement with Oracle, and the initial thought was to take LCM backups of the applications and store them offline. We could then import them into a non-prod environment when needed. This sounded good at first, but I had learned a valuable lesson not long before: would those LCM snapshots be compatible with the version of the software I would be running when I needed to restore them? I proposed this hypothetical scenario to Oracle and asked, "How long is an LCM snapshot supported?" The answer was "within one release of the current version." Well, that was going to be a problem. It was completely unrealistic to think my LCM snapshots would be viable 2, 3, or 5 years down the road.

I spent a lot of time working with the functional owners, infrastructure team, and Oracle PMs. I proposed a number of different approaches to meet this requirement, and ultimately we went with a process where we keep the applications live on non-prod servers for Essbase and Planning; this ensures the archive copies are upgraded when the system is patched and keeps them viable. Fortunately, on Exalytics we had a tremendous amount of space to store all these copies. If the apps were not started they weren't doing much harm, and even if they were started by accident, we had enough processors and RAM to handle it. HFM was a bit more painful, however. Unlike Essbase and Planning, where each app is independent, in HFM multiple applications all reside in a single schema. Over time this schema would become extremely bloated and could suffer performance degradation. To meet the requirement we actually created a standalone archive zone to store HFM snapshots. All the apps are "live" in the archive zone, and it is patched and maintained the same as the other zones used by the business. Overall this works, but it is an expensive and time-consuming solution to the problem.

Moving to the cloud

I am now working on our roadmap to migrate our EPM applications to the cloud as our Exa platform reaches end of life, and just as before, I need to address this application archiving requirement. I knew early on it was not going to be practical to have multiple pods to support all my snapshots; we were going to have to come up with a way to keep our snapshots offline, but still keep them up to date with the latest EPM Cloud version to ensure they remain viable. I discussed this requirement with a few PMs at Oracle, along with Matt Bradley, the senior executive at Oracle responsible for EPM Cloud.

I discussed with them how the LCM process in EPM Cloud was superior to on-premises, and how much I loved how quickly I could recover an application. I felt confident that as long as the LCM snapshot was compatible with future versions, it would be a great way to keep snapshots in an archive directory and spin them up as needed. Oracle confirmed they could still only guarantee that an LCM export would be officially supported within one version. So what could we do? I hypothesized that if there were a way to load the LCM snapshot into the cloud periodically and apply the latest patch, we could then export it back out and have viable backups in perpetuity. Assuming we could automate this process, I would be able to run a job during off hours, using one of my environments, to keep my snapshots up to date.

The solution

To solve this requirement I was introduced to Vinay Gupta from the cloud ops team. Vinay developed two scripts, one for Windows and one for Linux, that use EPM Automate to cycle through a directory of LCM snapshots, load them into EPM Cloud, apply the latest patch, export each snapshot back down to our directory, and store it in a new folder with the same name. I tested the script Vinay provided, and it worked very well, doing exactly what was needed.

As a result of this process, the value proposition for moving to the cloud increases dramatically. Since we will be able to take all of our snapshots offline, and we will not need to maintain a separate archive environment, moving to the cloud will actually save us quite a bit of money. It will reduce our EPM footprint and provide a logical, stable approach to managing application snapshots over a long period of time.

This is a big win for us, and I am once again pleased with my collaboration with the Oracle product team. The process and the sample scripts are documented in the online EPM Automate documentation under the sample use cases. I am providing a copy here as well for reference.

Additional consideration

One side note I still have to work out is how to ensure my current snapshots remain viable when we switch products moving to the cloud. How do I restore an on-premises HFM application if I no longer have HFM because I moved to FCCS? This is something I will have to ponder further and work through with my functional counterparts. It may be necessary to keep a VM running HFM just for the purpose of restoration. Sounds costly, and now I have to make sure I keep my VM up to date :/



--------------------------------------------------------------------------------------------------------------------------

Copy of Oracle EPM Automate documentation for reference



Recreating an Old EPM Cloud Environment for Audits


Oracle Enterprise Performance Management Cloud supports snapshot compatibility for one monthly cycle only; you can migrate maintenance snapshots from the test environment to the production environment and vice versa. However, the auditing requirements of some customers may necessitate restoring snapshots from multiple years onto the latest environment and accessing the applications within a short period of time.
This scenario details a self-service solution that maintains an up-to-date library of snapshots using a script. You require an environment dedicated to upgrading and maintaining this library of up-to-date snapshots. The script contained in this section should be scheduled to run once every month to convert the available snapshots and make them compatible with the latest EPM Cloud patch level. Oracle recommends that you run the script after the third Friday of the month to ensure that all issues within the production environment have been resolved.
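On Linux/UNIX, one way to meet that schedule is a monthly cron job. The entry below is a sketch only: the path is a placeholder for wherever you saved upgradeSnapshots.sh and input.properties, and the script must run from that directory because it reads ./input.properties.

# Example crontab entry: run the upgrade script at 2:00 AM on the 22nd of each month
# (the 22nd always falls after the third Friday)
0 2 22 * * cd /home/exampleAdmin/some_directory && ./upgradeSnapshots.sh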
How the Script Works
For every snapshot stored by the customer, the upgrade script completes these tasks using the EPM Automate utility:
  1. Using the information in the input.properties file, logs into an environment
  2. Uses the recreate command to refurbish the environment
  3. Imports the snapshot into the environment
  4. Runs daily maintenance on the environment, which results in the snapshot being converted into the format compatible with the current EPM Cloud patch level.
  5. Downloads Artifact Snapshot (the maintenance snapshot) into a folder. For example, if you recreated an 18.05 environment by uploading snapshots from snapshots/18.05, Artifact Snapshot is downloaded into snapshots/18.06.
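Expressed as individual EPM Automate commands, each pass amounts to the sequence below. This is a simplified sketch of what the sample scripts automate; exampleAdmin, exampleDomain, and MyApp are placeholders.

epmautomate login exampleAdmin examplePassword exampleURL exampleDomain
epmautomate recreate -f
epmautomate uploadfile snapshots/18.05/MyApp.zip
epmautomate importsnapshot MyApp
epmautomate runDailyMaintenance -f skipNext=true
epmautomate downloadfile "Artifact Snapshot"
epmautomate deletefile MyApp
epmautomate logout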
Running the Script
  1. Create the input.properties file and update it with information for your environment. Save the file in a local directory; this directory is referred to as parentsnapshotdirectory in this discussion. The contents of this file differ depending on your operating system.
    Make sure that you have write privileges in this directory.
  2. Create upgradeSnapshots.ps1 (Windows) or upgradeSnapshots.sh (Linux/UNIX) script and save it in the parentsnapshotdirectory where input.properties is located.
  3. Create a sub-directory, for example, snapshots, within the parentsnapshotdirectory.
  4. Within the directory that you created in the preceding step (snapshots), create a sub-directory for the monthly snapshots that you want to convert to make them compatible with the current EPM Cloud patch level. Name the directory using the YY.MM format; for example, 18.05 for the directory that stores the May 2018 snapshots.
  5. Copy snapshots into the appropriate sub-directory. For example, copy the snapshots for May 2018 into snapshots/18.05.
  6. Launch the script. On Linux/UNIX, run ./upgradeSnapshots.sh; on Windows, run upgradeSnapshots.ps1 from a PowerShell prompt.
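When setup is complete, the layout looks like this (using the example names from the steps above; MyApp.zip stands in for one of your archived snapshots). Note that in the sample input.properties files below, the parentsnapshotdirectory property points to the snapshots directory itself, because the scripts enumerate the YY.MM folders directly beneath it.

some_directory/
    input.properties
    upgradeSnapshots.sh (or upgradeSnapshots.ps1 on Windows)
    snapshots/
        18.05/
            MyApp.zip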
Note:
If you are using the PDF version of this document: To avoid line breaks and footer information that will render these scripts unusable, copy them from the HTML version of this topic.
Windows
Create input.properties and upgradeSnapshots.ps1 script by copying the scripts in this section.
Creating input.properties
username=exampleAdmin
userpassword=examplePassword
serviceurl=exampleURL
identitydomain=exampleDomain
proxyserverusername=proxyServerUserName
proxyserverpassword=proxyPassword
proxyserverdomain=proxyDomain
parentsnapshotdirectory=C:/some_directory/snapshots
Updating input.properties
Note:
If authentication at a proxy server is not enabled for your Windows network environment, remove the properties proxyserverusername, proxyserverpassword, and proxyserverdomain from the input.properties file.
Table 3-3 input.properties Parameters

username: User name of a Service Administrator.
userpassword: Password of the Service Administrator.
serviceurl: URL of the environment that is used for this activity.
identitydomain: Identity domain of the environment.
proxyserverusername: The user name to authenticate a secure session with the proxy server that controls access to the internet.
proxyserverpassword: The password to authenticate the user with the proxy server.
proxyserverdomain: The name of the domain defined for the proxy server.
parentsnapshotdirectory: Absolute path of the directory that is to be used as the parent directory of the directory that stores the snapshots to be processed. Use forward slashes (/) as directory separators.
Creating upgradeSnapshots.ps1
Use this sample script to create upgradeSnapshots.ps1
# Script for recreating an old EPM Cloud environment

# read in key/value pairs from input.properties file
$inputproperties=ConvertFrom-StringData(Get-Content ./input.properties -raw)

# Global variables
$parentsnapshotdirectory="$($inputproperties.parentsnapshotdirectory)"
$username="$($inputproperties.username)"
$userpassword="$($inputproperties.userpassword)"
$serviceurl="$($inputproperties.serviceurl)"
$identitydomain="$($inputproperties.identitydomain)"
$proxyserverusername="$($inputproperties.proxyserverusername)"
$proxyserverpassword="$($inputproperties.proxyserverpassword)"
$proxyserverdomain="$($inputproperties.proxyserverdomain)"
$operationmessage="EPM Automate operation:"
$operationfailuremessage="EPM Automate operation failed:"
$operationsuccessmessage="EPM Automate operation completed successfully:"
$epmautomatescript="epmautomate.bat"

$workingdir="$pwd"
$logdir="$workingdir/logs/"
$logfile="$logdir/epmautomate-upgradesnapshots.log"

function LogMessage 
{
    $message=$args[0]
    $_mydate=$(get-date -f dd_MM_yy_HH_mm_ss)

    echo "[$_mydate] $message" >> $logfile
}

function LogAndEchoMessage
{
    $message=$args[0]
    $_mydate=$(get-date -f dd_MM_yy_HH_mm_ss)

    echo "[$_mydate] $message" | Tee-Object -Append -FilePath $logfile
}

function LogOutput
{
    $_mydate=$(get-date -f dd_MM_yy_HH_mm_ss)
    $op=$args[0]
    $opoutput=$args[1]
    $returncode=$args[2]

    #If error
    if ($returncode -ne 0) {
        $failmessage="[$_mydate] $operationfailuremessage $op"
        LogMessage $failmessage
        LogMessage $opoutput
        LogMessage "return code: $returncode"
    } else { 
        $successmessage="[$_mydate] $operationsuccessmessage $op"
        LogMessage $successmessage
        LogMessage $opoutput
        LogMessage "return code: $returncode"
    }
}

function ExecuteCommand
{
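    # Invoke epmautomate.bat with the given arguments and log the outcome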
    $op=$args[0]
    $epmautomatecall="$epmautomatescript $op"
    $date=$(get-date -f dd_MM_yy_HH_mm_ss)

    LogMessage "$operationmessage $epmautomatecall"
    $operationoutput=iex "& $epmautomatecall" >> $logfile 2>&1
    LogOutput $op $operationoutput $LastExitCode
}

function ProcessCommand
{
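    # Run a single EPM Automate command, skipping blank lines and lines starting with #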
    $command=$args[0]
    $date=$(get-date -f dd_MM_yy_HH_mm_ss)

    if (!([string]::IsNullOrWhitespace($command))) {
        if (!($command.StartsWith("#"))) {
            ExecuteCommand $command
        }
    }
}

function Init
{
    $logdirexists=Test-Path $logdir
    if (!($logdirexists)) {
        mkdir $logdir 2>&1 | out-null
    }

    # remove existing epmautomate debug logs, if any
    rm ./*.log -ErrorAction SilentlyContinue

    # remove existing log file, if present
    rm $logfile -ErrorAction SilentlyContinue
}

function GetNextDate
{
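    # Derive the next YY.MM directory name from the latest one, e.g. 18.05 -> 18.06, 18.12 -> 19.01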
    $latestyearmonth=$args[0]
    LogMessage "latest year.month: $latestyearmonth"
    $latestyear,$latestmonth=$latestyearmonth.split('\.')
    LogMessage "latest year: $latestyear"
    LogMessage "latest month: $latestmonth"
    $intlatestyear=[int]$latestyear
    $intlatestmonth=[int]$latestmonth

    if ($intlatestmonth -eq 12) {
        $intnextmonth=1
        $intnextyear=$intlatestyear+1
    } else {
        $intnextmonth=$intlatestmonth+1
        $intnextyear=$intlatestyear
    }

    $nextyear="{0:D2}" -f $intnextyear
    $nextmonth="{0:D2}" -f $intnextmonth

    echo "$nextyear.$nextmonth"
}

function ProcessSnapshot
{
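    # Recreate the environment, import one snapshot, run maintenance to convert it to the current
    # patch level, then download Artifact Snapshot and file it under the next month's directory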
    $snapshotfile=$args[0]
    LogMessage "snapshotfile: $snapshotfile"
    $nextdate=$args[1]
    LogMessage "nextdate: $nextdate"
    $snapshotfilename=$snapshotfile.split('/')[-1]
    LogMessage "snapshotfilename: $snapshotfilename"
    $snapshotname=$snapshotfilename.split('.')[0]
    LogMessage "snapshotname: $snapshotname"

    ProcessCommand "login $username $userpassword $serviceurl $identitydomain $proxyserverusername $proxyserverpassword $proxyserverdomain"
    ProcessCommand "recreate -f"
    ProcessCommand "uploadfile $snapshotfile"
    ProcessCommand "importsnapshot $snapshotname"
    ProcessCommand "runDailyMaintenance -f skipNext=true"
    ProcessCommand "downloadfile 'Artifact Snapshot'"
    ProcessCommand "deletefile $snapshotname"
    ProcessCommand "logout"

    $nextdatedirexists=Test-Path $parentsnapshotdirectory/$nextdate
    if (!($nextdatedirexists)) {
        mkdir $parentsnapshotdirectory/$nextdate 2>&1 | out-null
    }

    LogMessage "Renaming 'Artifact Snapshot.zip' to $snapshotname.zip and moving to $parentsnapshotdirectory/$nextdate"
    mv $workingdir/'Artifact Snapshot.zip' $workingdir/$snapshotname.zip >> $logfile 2>&1
    mv $workingdir/$snapshotname.zip $parentsnapshotdirectory/$nextdate >> $logfile 2>&1
}

#----- main body of processing
date
Init
LogAndEchoMessage "Starting upgrade snapshots processing"
$snapshotdirs=@(Get-ChildItem -Directory "$parentsnapshotdirectory" -name)
LogMessage "snapshot directories: $snapshotdirs"
$latestreleasedate=$snapshotdirs[-1]
LogMessage "latest release date: $latestreleasedate"
$latestreleasesnapshotdir="$parentsnapshotdirectory/$latestreleasedate"
LogMessage "latest release snapshot dir: $latestreleasesnapshotdir"
$nextdate=$(GetNextDate "$latestreleasedate")
$snapshotfiles=@(Get-ChildItem -File "$latestreleasesnapshotdir")
if ($snapshotfiles.length -eq 0) {
    LogAndEchoMessage "No snapshot files found in directory $latestreleasesnapshotdir. Exiting script."
    exit
}
foreach ($snapshotfile in $snapshotfiles) {
    LogAndEchoMessage "Processing snapshotfile: $snapshotfile"
    ProcessSnapshot $latestreleasesnapshotdir/$snapshotfile $nextdate
}
LogAndEchoMessage "Upgrade snapshots processing completed"
date
Linux/UNIX
Create upgradeSnapshots.sh and input.properties by copying the following scripts.
Creating input.properties for Linux/UNIX
Note:
If your network is not configured to use a proxy server to access the internet, remove the properties proxyserverusername, proxyserverpassword, and proxyserverdomain from the input.properties file.
username=exampleAdmin
userpassword=examplePassword
serviceurl=exampleURL
identitydomain=exampleDomain
proxyserverusername=
proxyserverpassword=
proxyserverdomain=
jdkdir=/home/user1/jdk160_35
epmautomatescript=/home/exampleAdmin/epmautomate/bin/epmautomate.sh
parentsnapshotdirectory=/home/exampleAdmin/some_directory/snapshots
Updating input.properties
Table 3-4 input.properties Parameters

username: User name of a Service Administrator.
userpassword: Password of the Service Administrator.
serviceurl: URL of the environment that is being used for this activity.
identitydomain: Identity domain of the environment.
proxyserverusername: The user name to authenticate a secure session with the proxy server that controls access to the internet.
proxyserverpassword: The password to authenticate the user with the proxy server.
proxyserverdomain: The name of the domain defined for the proxy server.
jdkdir: JAVA_HOME location.
epmautomatescript: Absolute path of the EPM Automate utility executable (epmautomate.sh).
parentsnapshotdirectory: Absolute path of the directory that is to be used as the parent directory of the directory that stores the snapshots to be processed.
Creating upgradeSnapshots.sh
Use this sample script to create upgradeSnapshots.sh
#!/bin/sh

. ./input.properties
workingdir=$(pwd)
logdir="${workingdir}/logs/"
logfile=epmautomate-upgradesnapshots.log
operationmessage="EPM Automate operation:"
operationfailuremessage="EPM Automate operation failed:"
operationsuccessmessage="EPM Automate operation completed successfully:"
logdebugmessages=true

if [ ! -d ${jdkdir} ]
then 
    echo "Could not locate JDK/JRE. Please set value for \"jdkdir\" property in input.properties file to a valid JDK/JRE location."
    exit
fi

if [ ! -f ${epmautomatescript} ]
then 
    echo "Could not locate EPM Automate script. Please set value for \"epmautomatescript\" property in the input.properties file."
    exit
fi

export JAVA_HOME=${jdkdir}

debugmessage() {
    # logdebugmessages is defined (or not) in input.properties
    if [ "${logdebugmessages}" = "true" ]
    then
        logmessage "$1"
    fi
}

logmessage() 
{
    local message=$1
    local _mydate=$(date)

    echo "[$_mydate] ${message}" >> "$logdir/$logfile"
}

echoandlogmessage() 
{
    local message=$1
    local _mydate=$(date)

    echo "[$_mydate] ${message}" | tee -a ${logdir}/${logfile}
}

logoutput()
{
    date=`date`
    op="$1"
    opoutput="$2"
    returncode="$3"

    #If error
    #if grep -q "EPMAT-" <<< "$2"
    if [ $returncode -ne 0 ]
    then
        failmessage="[${date}] ${operationfailuremessage} ${op}"
        logmessage "${failmessage}"
        logmessage "${opoutput}"
        logmessage "return code: ${returncode}"
    else
        successmessage="${operationsuccessmessage} ${op}"
        logmessage "${successmessage}"
        logmessage "${opoutput}"
        logmessage "return code: ${returncode}"
    fi
}

getLatestReleaseSnapshotDir()
{
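    # Return the most recent YY.MM snapshot directory (last entry in sorted order)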
    local snapshotdirs=$(find ${parentsnapshotdirectory} -type d | sort)
    debugmessage "snapshot directories: ${snapshotdirs}"
    local latestreleasesnapshotdir=$(echo ${snapshotdirs##*$\n} | rev | cut -d' ' -f1 | rev)
    debugmessage "latest release snapshot dir: ${latestreleasesnapshotdir}"
    echo "${latestreleasesnapshotdir}"
}

getNextDate()
{
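    # Derive the next YY.MM directory name from the latest one, e.g. 18.05 -> 18.06, 18.12 -> 19.01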
    local thisyearmonth=$1
    local thisyear=$(echo ${thisyearmonth} | cut -d'.' -f1)
    local thismonth=$(echo ${thisyearmonth} | cut -d'.' -f2)

    intthismonth=$(bc <<< ${thismonth})
    intthisyear=$(bc <<< ${thisyear})

    if [ ${intthismonth} -eq 12 ]
    then
        local intnextmonth=1
        local intnextyear=$((intthisyear+1))
    else 
        local intnextmonth=$((intthismonth+1))
        local intnextyear=${intthisyear}
    fi
    
    nextmonth=$(printf "%02d\n" ${intnextmonth})
    nextyear=$(printf "%02d\n" ${intnextyear})

    debugmessage "next date: ${nextyear}.${nextmonth}"

    echo "${nextyear}.${nextmonth}"
}

init()
{
    if [ ! -d "$logdir" ]
    then
        mkdir $logdir
    fi

    # removing existing epmautomate debug logs
    if ls ./*.log >/dev/null 2>&1
    then
       rm ./*.log
    fi

    # remove existing log files
    if [ -f "${logdir}/${logfile}" ]
    then
        rm ${logdir}/${logfile}
    fi
}

processCommand()
{
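    # Run a single EPM Automate command and log its output and return code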
    op="$1"
    date=`date`

    logmessage "$operationmessage $op"
    operationoutput=`eval "$epmautomatescript $op"`
    logoutput "$op" "$operationoutput" "$?"
}

processSnapshot()
{
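    # Recreate the environment, import one snapshot, run maintenance to convert it, then
    # download Artifact Snapshot and file it under the next month's directory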
    local snapshotfile="$1"
    local nextdate="$2"
    local snapshotname=$(echo "${snapshotfile}" | rev | cut -d'/' -f1 | rev | cut -d'.' -f1)

    processCommand "login ${username} ${userpassword} ${serviceurl} ${identitydomain} ${proxyserverusername} ${proxyserverpassword} ${proxyserverdomain}"
    processCommand "recreate -f"
    processCommand "uploadfile ${snapshotfile}"
    processCommand "importsnapshot \"${snapshotname}\""
    processCommand "runDailyMaintenance -f skipNext=true"
    processCommand "downloadfile \"Artifact Snapshot\""
    processCommand "deletefile \"${snapshotname}\""
    processCommand "logout"

    if [ ! -d ${parentsnapshotdirectory}/${nextdate} ]
    then
        mkdir ${parentsnapshotdirectory}/${nextdate}
    fi

    logmessage "Renaming \"Artifact Snapshot.zip\" to ${snapshotname}.zip and moving to ${parentsnapshotdirectory}/${nextdate}"
    mv "${workingdir}/Artifact Snapshot.zip" "${workingdir}/${snapshotname}.zip" >> "$logdir/$logfile" 2>&1
    mv "${workingdir}/${snapshotname}.zip" ${parentsnapshotdirectory}/${nextdate} >> "$logdir/$logfile" 2>&1
}

#----- main body of processing
date
echoandlogmessage "Starting upgrade snapshots processing"
init
latestreleasesnapshotdir=$(getLatestReleaseSnapshotDir)
latestreleasedate=$(echo "${latestreleasesnapshotdir}" | rev | cut -d'/' -f1 | rev)
debugmessage "latest release date: ${latestreleasedate}"
nextdate=$(getNextDate ${latestreleasedate})

snapshotfiles=$(find ${latestreleasesnapshotdir} -type f -name \*.zip | tr "\n" "|")
if [ ${#snapshotfiles} -eq 0 ]
then
    echoandlogmessage "No snapshot files found in directory ${latestreleasesnapshotdir}"
fi

IFS="|"
for snapshotfile in $snapshotfiles
do
    echoandlogmessage "Processing snapshotfile: ${snapshotfile}"
    processSnapshot ${snapshotfile} ${nextdate}
done
unset IFS
echoandlogmessage "Upgrade snapshots processing completed."

Comments

kw said…
Great post, and it is a huge concern for many companies that need data retention. Data retention was NOT planned for by Oracle very well. The whole POD thing is ridiculous and limits customers' ability to keep backups or archive copies as you could on-prem. Customers should have the ability to have as many applications/cubes as they wish on a POD and just pay for it. Having to buy a different URL/POD is cumbersome and not a very robust solution in my opinion.

As for the backups and updating them, Oracle should have included this functionality in the product as a standard feature. Maybe the snapshots could be saved in their cloud environment daily, customers could choose how many to keep, and upgrades would automatically upgrade those snapshots somehow. Maybe it would be for a fee, but it should still be an option. They missed on so many things with PBCS/FCCS. The backups of PBCS are horrid. They are actually saving off the .pag, .ind, .tct, .otl, .db, .dbb files. Thus if you want a Lev0 data backup you have to script it up yourself. That's what we did. We wrote a script to run a Lev0 DATAEXPORT in conjunction with a full LCM extract excluding the data piece. Then we move those over to our server and save them off for 60 days. The reason is the .pag files are 50GB while the Lev0 exports are 125MB. Prior to moving them over to our server we convert the Lev0 DATAEXPORTs to SUI format, so in the event of a restore they load right in via the interface and we don't need to build any FDMEE load. Maybe we should export the metadata to a file also, as the apps could be rebuilt via FDMEE as long as you had a valid file for each dimension. That way it doesn't matter if they upgrade and your backups aren't any good anymore. You can always rebuild from flat files if need be. Security, rules, forms, etc. might be a concern though.

Just a lot to think about and it's kind of concerning too in a way :)
Yes kw, one of the scenarios I proposed when evaluating this for our on-premises Exa platform was to break the apps down to their most basic components by extracting the data and metadata to text files. My thought process was not only versions, but what if we changed to a different product? My functional app owners did not favor this approach because it focuses primarily on data retention. The requirement I had was also to keep artifacts like forms and business rules "as-is" at the time of the snapshot. They basically want the ability to go back in time and navigate the app "as it was on that date," so this method gives us that ability without as much overhead.
Larry Lapp said…
You brought back a lot of nightmares about those app restorations, Gary 🙂. Very cool to see how Oracle is improving the backup and restore processes in the cloud, especially with Essbase. I think it's kind of a shame that you guys are going to be getting rid of those Exa servers after how long it took to architect those solutions. Those servers are still performance beasts and far from obsolete. If GE is looking to dump off any of those Superclusters, I'll pay for shipping 🙂. See you out in Seattle!
Tim Faitsch said…
Clever idea about LCM backups to/from the cloud. I was skeptical about LCM but once you get comfortable it makes life so much easier.
Unknown said…
I am trying to follow the steps above and I am getting an import error at the end, as I am doing it from Prod to UAT. If it failed only for security and groups it would have been fine, but this one fails on import. I am worried. Any suggestions here...