Tuesday, July 3, 2018

Building interactive charts and tables in PowerPoint with Smart View

Do you get tired of recreating the same PowerPoint decks each month when your numbers change?  Wouldn't it be great if you could just push a button and have the numbers in your ppt slide update to what is in the database?  Wouldn't it be even better if the data was used in visually rich MS Office objects such as tables and charts?  And, wouldn't it be awesome if you could interact with the data in real time during your presentation?

Well, did you know Smart View for PowerPoint does all of that?  That's right, I said Smart View for PowerPoint; Smart View is not just for Excel.

Smart View has had the functionality to work with the MS Office suite for some time, but frankly the functionality outside of Excel has been limited and challenging to work with at times.  While the PowerPoint and Word functionality is not 100%, it has come a long way, and with some patience you can make some really nice PowerPoint slides that are interactive with your underlying EPM data sources.

At Kscope18 I did a presentation with fellow Oracle Ace and GE colleague Gary Adashek.  We did a Shark Tank style pitch to "investors" on why they should help us "Save The Beverage Company".  If you don't know what The Beverage Company is, your Essbase street cred is going to take a serious hit.  Of course I am referring to the fabled bottling company made famous by the Essbase Sample Basic database.  Gary and I figured we could use our Smart View skills to create some really slick ppt slides to convince the investors to help us save this favored institution that had been around since the Arbor days.  Besides having some fun trying to convince our panel of investors (see the Photoshop pics at the end of this post) while the audience watched, we wanted to convey the very real message that you can do some interesting things in PowerPoint with Smart View.

In my effort to communicate useful EPM tips across various mediums, this seemed like a good topic for a blog post.  In this tutorial, I am going to walk through how to create interactive Smart View objects in PowerPoint.  I am working with Essbase 11.1.2.4, Office 2016, and Smart View 11.1.2.5.800.  I suggest using this latest version of Smart View since it has some fixes in it specifically associated with PowerPoint.

The first thing you will need to decide is how you are going to link your source data to your ppt.  You have three options to create what is referred to in Smart View as a 'function grid'.  You can base your function grid on:

  1. A Hyperion Financial Report (HFR) grid
  2. A Data Form (Hyperion Planning, (e)PBCS, and presumably HFM; I have not tested it)
  3. An ad hoc Smart View retrieve in Excel.

Each of these has its pros and cons.  PBCS data forms seem to have the most functionality while also being the most stable.  HFR grids are stable, but they lack the ability to change the POV after they have been turned into a function grid.  Excel offers the most flexibility in terms of different objects, but it is less stable since you are creating the function grid from an ad hoc report in Excel.

Forms

So to start off, let's take a look at building a ppt slide using a PBCS form as the source for a chart.

The first step is to either create a form or select one that has the data you are looking for.  Keep in mind that if your goal is to make a chart, not all forms are set up correctly to make a nice chart.  In my experience so far, I have created a separate folder for forms called 'Forms for PPT' where I save the ones I have created specifically for this purpose.

This is the form I created for demonstration.  You can see it is pretty straightforward, but note that I did add a dimension to the Page section of the form; you'll see why in a little bit.



When working with a Data Form or HFR report as a source, you can begin directly from PowerPoint; there is no need for Excel.

Steps

  1. Open PowerPoint and start with a blank presentation
  2. Connect to your data source, in this case PBCS, and navigate to the Forms folder and select the form you created as the basis for your chart

  3. At the bottom of the Smart View panel, click on 'Insert Chart'
    1. Be patient; this step may take a minute or so while Office renders the object
    2. It may also be a good idea to ensure Excel is closed before doing this.  I have found that if Excel is open prior to inserting the chart, it times out.  Technically, Smart View uses the Excel chart engine to render the chart and insert it into PowerPoint
  4. Once the chart is rendered you can resize it and move it around your slide to the desired location.  I do not recommend trying to move it to another slide; if you want it on another slide, it seems best to repeat the steps.

  5. Once the chart is created you can make changes to it as you would to a typical Office object.  You can go to the chart designer and change the chart type, the color theme, or various other options.  Smart View also provides some of these options in a pop-up menu that appears when you click on the chart; the options there are similar to the ones on the chart design ribbon, with the exception of the filter function, which allows you to filter out certain members.  The filter function gives you the option to use one large form with a lot of data and then filter it in ppt, rather than having to create multiple forms.  You can also insert your regular ppt content and wind up with something that looks like this.
  6. Now that I have a nice chart, I can take it one step further and make it interactive.  Remember before when I mentioned I put a dimension in the Page section of the form?  Let's go back to the Smart View panel, hit the drop down next to the little house icon, and select 'Document Contents'.  Click on your function grid, and then at the bottom of the panel click on 'Insert Reporting Object/Control'.  Now, click on the POV object

  7. You will see a grey box inserted onto the slide.  Note that this POV box will not become active until you enter PowerPoint presentation mode.  While in presentation mode, you can hit the drop down next to the dimension member that was placed in the Page section of the form and select a different member; hit the refresh button and your objects will re-render with the new data.

So you can see that I was able to very quickly create a presentation-quality image based on my PBCS data.  Next time my data changes, I can open this ppt file, go to the Smart View ribbon, hit Refresh, and the object will pull in the new data and update accordingly.


HFR Grid

Next, let's look at how to insert an HFR report.

The steps for inserting an HFR report are similar, but there are a few differences.  First, like the data forms, you need to start off with a report that has the data you want.  I created an HFR report similar to the data form in the previous example.



Steps

  1. Open PowerPoint and start with a blank presentation
  2. Connect to your data source; in this case we are still using PBCS, but we are going to choose the Reporting provider instead of the EPM provider.  Navigate to the folder where you saved your report and select it.  Then hit the Open link at the bottom of the Smart View panel

  3. When you click Open, the Import Workspace Document window will open.  By default you can import the report as an image, but we want to hit the drop down and select Function Grid instead
  4.  Click Finish
  5. You will be taken back to your slide and the Document Contents pane will be active in the Smart View panel. Click on the Insert New Reporting Object/Control link
  6. A new window pops up; scroll down and select Chart (note there is no option for POV).  Your chart is inserted and associated with the function grid, the same as above with the PBCS form.
You can now work with the chart the same way you did in the steps above.  So now, let's take a minute to explore some of the other objects (note these work the same way if you are using a form).
  1. Return to the Document Contents pane, select your function grid connection and insert another object 
  2. This time let's add an Office Table
  3. Once the table is inserted, click on the table and then go to the PowerPoint Design Ribbon and select a format for the table; you can then repeat for the chart.  You may also want to increase the font size for the table.
  4. Insert a new slide into your ppt
  5. On slide 2, insert a new reporting object/control, select Function Grid
  6. Note that unlike the Office table, the function grid inserts multiple text boxes, some with labels and others with active data links to your data source.  You can arrange these objects anywhere you would like, and again click on the design ribbon to alter the way the objects are formatted.

There are a number of options to play with to get the format the way you would like.  Note that from time to time I have encountered a few bugs and some inconsistencies in behavior between data sources.  I encourage you to log an SR with Oracle for any you come across to get this product working as well as possible.


Excel


For the last data source, let's look at an ad hoc retrieve from Excel.  Note I will use PBCS, but this works for other data sources such as Essbase as well.


Steps

  1. Open Excel and start with a blank workbook
  2. Connect to EPM data source via Smart View
  3. Using ad hoc analysis, create a basic retrieve 

  4. Select your data range by dragging mouse over the cells with data
  5. Go to Smart View ribbon and click Copy (note this is not the same as Excel Copy or ctrl + C)
  6. Open PowerPoint and start with a blank presentation
  7. Go to Smart View Ribbon and click Paste (note this is not the same as Paste on the Home ribbon or ctrl + V)
  8. At this point you will see that the function grid is actually placed in the slide.  Go ahead and run the refresh 

  9. Now let's add a chart and a POV slider: go to the Smart View panel, go to Document Contents, select your Smart View link, and then click on Insert New Reporting Object/Control.
  10. Select chart
  11. Go back to Document Content, select your Smart View link, and then click on Insert New Reporting Object/Control.
  12. Scroll to bottom and select Slider
  13. Select the dimension you want the slider for; I am going to choose Years, with members FY18, FY17, and FY16.
  14. Now when I enter presentation mode, my slider becomes active.  I can use my mouse to slide to different year selections.



Conclusion

There are an endless number of combinations and examples I could show, but I think this is a good stopping point.  If you were able to follow along and complete the steps, you now have a basic understanding of how to create Smart View PowerPoint objects that are linked to your EPM data source.  Experiment with different objects and different data sources; I think you will find some very cool features.  Don't be discouraged if you run up against something that doesn't work right; the Oracle team has been very responsive, and you just need to log an SR so they become aware of the issue.  Sometimes it is what we are doing, sometimes it is a bug, but as I said at the beginning of the post, the product has come a long way and I believe it can be very useful in the hands of the right users.

Best of luck, let me know your thoughts in the comment section.


Post-Conclusion

Pics from Kscope18 Save The Beverage Company presentation

1. Meet The Sharks, The Founders, and The Presenters!





Thursday, June 28, 2018

Implementing PBCS at ODTUG

As a Director on the ODTUG Board, I have the privilege of serving as the organization's Treasurer.  Some of my duties include overseeing the organization's financial reporting as well as budgeting and forecasts.  When I took over the role two years ago, I did an assessment of our reporting capabilities and was unhappy about our dependency on Excel as our primary reporting and analysis tool.  Our controller did a fantastic job of pulling the data each month for our close and finance review, but we were limited to the reports that were created, and any follow-up questions or curiosities that required digging deeper posed a challenge and required someone to go off and manually work on it.

As an EPM professional, I knew there were better tools out there, and my colleagues on the board agreed.  Of course this wasn't news; previous boards had also thought about this, but they were impaired by infrastructure requirements.  As a not-for-profit organization ODTUG runs on tight margins; an investment in servers to support an on-premise implementation of EPM software was not practical.  However, our board now had something our predecessors did not: access to EPM in the Cloud.  Running a SaaS application in the cloud would eliminate obstacles such as not having a data center or hardware.  We also would not have to carry assets on our balance sheet; with a subscription service, EPM Cloud would be a monthly operating expense.

About a year ago we began discussing this during a board meeting; we wanted to take our FP&A activities into PBCS, and we put a plan in place.  Working with our partners at Oracle, we obtained a small lot of PBCS licenses to build out our application.  I knew there would be a number of benefits if we could successfully implement PBCS at ODTUG.  We would be able to run the organization better, and we could share the experience with our members as a training opportunity.

My role on the board, along with the other board members, is a volunteer position; pretty much everyone at ODTUG is a volunteer, so embarking on a full scale implementation did make me a little nervous.  "Would I have enough time to work on this?", "Am I biting off more than I can chew?" were a couple of the thoughts that went through my head, but having prior experience with PBCS I had some confidence that it could be done in a reasonable amount of time.  So with the help of our controller from our management company, YCC, and some advice from fellow board member Jake Turrel, I jumped into the project.

I've been in EPM for a long time, and I have implemented a number of Hyperion Planning applications.  While PBCS may be considered by some to be "Hyperion Planning in the cloud", it's really so much more.  I was able to have the basic construct of the application up in a couple of hours and was already beginning to load some test data.  When I look at the length of the implementation, which lasted a couple of months in total (working part time on weekends and evenings), the least amount of time was spent on PBCS activities.  It was really all the pre-work on the ledger that took the bulk of the time.  Early on in the process I discovered I was going to have some issues loading data, because our ledger, which is managed in QuickBooks, did not have rigid rules around master data.  I quickly found multiple accounts with similar or the same names, and I knew this was going to be an issue that would haunt us if we didn't deal with it.  I discussed it with Jake and we agreed the first step to making the project a success was to clean up the ledger and implement a real chart of accounts.  I brought it up to the finance committee and we voted unanimously to approve the project.  A couple of weeks later I was on a plane to Wilmington, NC for a two-day work session with our controller, Nancy.  We spent two days recoding the chart, extracting the data, and loading it into PBCS.  By the time we were done we had an enterprise-class COA in place and all of our history was tied out in PBCS.

Over the next couple of weeks I continued to work with the data and took full advantage of what PBCS has to offer, most notably the dashboards.  As the Treasurer it is my job to report to the board each month how we are doing financially.  Prior to the board meeting the finance sub-committee meets to review the financials.  Both of these meetings would take a considerable amount of time as we combed through Excel reports; if we had any questions we would sometimes have to adjourn and reconvene at a later date after the information was collected.  With the PBCS implementation underway, I was able to create various dashboards to show quickly and clearly the financial health of the organization.  The dashboards showed more content than we had available in the past and with features like drill down enabled, we are able to explore the data more fluidly.  Finance reviews now take about 5-10 minutes each month and board members have the ability to login and look at the data anytime they want.

In addition to making life easier for the board and other ODTUG leaders and volunteers, we have also tried to identify where this project benefits our members.  One thing the board has discussed is how we can be more transparent about the inner workings of ODTUG.  We have discussed what financial information is appropriate for us to disclose to our members.  In the past this would have incurred additional cost to ODTUG to prepare the information, but now that we have PBCS we can easily add some reports with various metrics for external consumption.  This is an enhancement I am currently working on and will be disclosing in the near future.

Overall our PBCS implementation has been a huge success and I am happy to be able to share this information with you.  As mentioned above, I want to seize this moment and turn it into a learning opportunity for the EPM community.  Since the implementation, I have shared this information at a few events and I plan to continue doing so in various formats.

Back in May I did a presentation at a NYC meet-up on PBCS that focused on many of the learnings from the ODTUG implementation.  At this year's Kscope I did a presentation: Happily Ever After: ODTUG and Oracle Enterprise Planning and Budgeting Cloud Solution (EPBCS).  If you were unable to attend Kscope, the presentation was recorded and is available to ODTUG members.  I will also be doing a webinar on Aug 21 at 12pm EDT; you can register here.  I am also presenting this topic at OOW in October.

For those of you who prefer to get your content from a blog, I am planning a multi-part blog series on the ODTUG PBCS implementation.  I plan to write about the technical approach to building the PBCS app and touch on various topics such as:
  • How to create a new EPBCS application
  • How to customize PBCS settings to personalize the look and feel of the application
  • How to build and update dimensions using both the web interface and Smart View
  • How to import data
  • How to build forms, dashboard, and financial reports

I hope you will follow this series to see all the interesting content; you can follow this blog to get notifications when the posts are added.

Thanks for reading and please check back soon.


P.S. - here are a couple of other references on this subject to take a look at if you are interested.

ODTUG PBCS press release
DevLive interview with me discussing ODTUG implementation of EPBCS


Friday, April 14, 2017

Deleting a large number of groups in PBCS

Adjusting to working in the cloud takes some time; there are things we are used to being able to do on-premise where the functionality may be more mature or we have additional utilities to assist us.

Recently I ran into an issue in PBCS where we had imported a large number of groups and then decided we did not need them.  At first glance it appeared the only way to remove them was one by one via the GUI.  Since there were over 500 groups I was not willing to do that.  I opened an SR and unfortunately Oracle confirmed there was no way to do it and I would need to submit an enhancement request.

I did some poking around on blogs and in the documentation, and with some trial and error I actually figured out a way to do it by altering LCM exports and the Import Settings.  Below are the steps I took to accomplish this.  I suspect this method might be "unsupported" by Oracle; I'm not sure, so full disclosure: you are doing this at your own risk.  Please be sure to take a full backup of your environment before you attempt this in case you have to roll back.

With that said, these are the steps to delete groups en masse in PBCS.


1. LCM Export your current security, and name the snapshot GROUPS




2. Download the GROUPS Snapshot












3. Once downloaded, extract GROUPS.zip to a temp directory on your PC, such as C:\Temp. 

4. Open C:\Temp\GROUPS\HSS-Shared Services\resource\Native Directory\Groups.csv with a text editor.
     - Remove the groups you want to KEEP from the #group section (only the groups you want to DELETE should remain)
     - Remove all #group_children sections (you don't need them for the delete operation)

5. Go to C:\Temp\GROUPS\HSS-Shared Services\resource\Native Directory\Assigned Roles
     - Open the various folders and modify the .csv files in them.  You have to remove all references to the groups you are deleting.  Leave references to groups that you are keeping.
6. Now that you are done editing, you can Zip your modified files for re-import to PBCS.

*Note that I had a lot of issues when I first tried to re-zip my files; I was getting a lot of errors on upload and import.  I will show the method I used that seems to work; other approaches may or may not work.

-If you do not already have 7zip, download and install it.
-Open 7zip and navigate to the temp directory where the modified files are located.
Note it is important that you do not zip the GROUPS folder; you have to archive the folder 'HSS-Shared Services' and the two xml files 'Export.xml' and 'Import.xml' (see screenshot)








-Click the Add button 

In the 'Add to Archive' options select 
Archive format = Zip 
Compression level = Store

(This is a case where other compression methods may work, but I had errors with the few I tried.  Store essentially puts the files in a zip container without actually compressing them.  This was the way I got it to import back into PBCS successfully.)
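If you expect to repeat this process, the re-zip step can also be scripted.  Below is a minimal Python sketch using the standard zipfile module's ZIP_STORED mode (no compression); the `make_groupsmod_zip` helper name and the C:\Temp\GROUPS path are my own placeholders, and the folder layout matches the extracted snapshot described above:

```python
import zipfile
from pathlib import Path

def make_groupsmod_zip(base: Path) -> Path:
    """Zip an edited LCM snapshot with STORE (no compression).

    'base' is the extracted GROUPS folder (e.g. C:\\Temp\\GROUPS).  Note that we
    archive 'HSS-Shared Services' plus Export.xml and Import.xml -- not the
    GROUPS folder itself -- matching the 7zip steps above.
    """
    out = base.parent / "GROUPSMOD.zip"
    with zipfile.ZipFile(out, "w", zipfile.ZIP_STORED) as z:
        for item in ("HSS-Shared Services", "Export.xml", "Import.xml"):
            p = base / item
            if p.is_file():
                z.write(p, item)
            else:
                # walk the folder, storing paths relative to the GROUPS root
                for f in sorted(p.rglob("*")):
                    z.write(f, f.relative_to(base))
    return out
```

Calling make_groupsmod_zip(Path(r"C:\Temp\GROUPS")) produces GROUPSMOD.zip next to the GROUPS folder, already named for step 8 below.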



7. Rename your zip to GROUPSMOD.zip

8. Go to PBCS and import your zip file

9. The next setting is extremely important.  You need to tell LCM to Delete instead of add when you import the files back in.  Next to the Refresh button is the 'Import Settings' button; it looks like a hammer and wrench.  Open 'Import Settings', change 'Groups and Membership - Import Mode' to 'Delete', and click 'Save and Close'.


10. Import your modified LCM


Check the migration status report for Completion

Navigate back to Access Control and you should see your groups have been removed.

*** Important - Be sure to go back to 'Import Settings' and change the mode from 'Delete' back to the default 'Create or Update'.  Failing to do this could result in a big problem the next time someone tries to restore a backup or migrate across environments.


Tuesday, August 23, 2016

Smart View prompts user to upgrade when connecting to HFM provider

Ran into this issue a little while back and with HFM 11.1.2.4 it was a little tricky to figure out, so I thought it was worth posting.

Problem
Smart View users get a pop-up when connecting to the HFM provider stating there is a newer version of Smart View and prompting them to upgrade.
Many of our customers have stated they do not want users upgrading Smart View versions with each update and do not want Smart View to prompt them to do so.

Solution
The solution to this can be found in Oracle Knowledge Document 1993603.1

Please make sure you have a backup of your system before making any changes.
As these options are now removed from the EPM configurator, the following steps can be followed to turn off the warning messages.
 1. Log on to the Shared Services console.
 2. Expand Application Groups > Foundation > Deployment Metadata.
 3. Expand Financial Management > LWA-hfmadf.
 4. Right-click Properties and select Export.
 5. Export to a local drive and edit the file Component.properties:
                  ForceClientUpgrade=true -----change to false
                  WarnIfClientIsNewer=true -----change to false
 6. Save the file.
 7. Return to Financial Management > LWA-hfmadf.
 8. Right-click Properties and select Import.
 9. Restart HFM services for the change to take effect.


*Please note that this solution is for the HFM Smart View provider.  Users getting this pop-up when connecting to other products require a different approach, such as updating APS settings for the Essbase provider.

Monday, August 15, 2016

Finding EPM Logs on Exalytics T5 (Solaris)

When troubleshooting an EPM issue, reviewing log files is an important step.  The following explains how to find EPM log files on the Exalytics T5 (Solaris).

Step-by-step guide

  1. Connect to the host using a supported tool such as WinSCP, Filezilla, PuTTy, vSSH, etc.
  2. cd to the following directories
    1. Essbase server - cd /u01/Oracle/Middleware/user_projects/{host}/diagnostics/logs/essbase/essbase
    2. Essbase application - cd /u01/Oracle/Middleware/user_projects/{host}/diagnostics/logs/essbase/essbase/app/{app}
    3. Essbase Studio - cd /u01/Oracle/Middleware/user_projects/{host}/diagnostics/logs/EssbaseStudio
    4. HFM - cd /u01/Oracle/Middleware/user_projects/{host}/diagnostics/logs/hfm
    5. Planning - cd /u01/Oracle/Middleware/user_projects/{host}/diagnostics/logs/planning
    6. Weblogic - cd  /u01/Oracle/Middleware/user_projects/domains/EPMSystem/servers/{JVM_name}/logs
    7. Install - cd  /u01/Oracle/Middleware/EPMSystem11R1/diagnostics/logs
    8. Opatch / Config - cd  /u01/Oracle/Middleware/EPMSystem11R1/cfgtoollogs/
    9. Others - cd /u01/Oracle/Middleware/user_projects/{host}/diagnostics/logs
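When you are not sure which of these subdirectories a product wrote to, it can help to sweep the whole diagnostics tree for recently modified logs.  Here is a minimal Python sketch; the `recent_logs` helper and the 'myhost' instance name are my own placeholders standing in for the {host} value in the paths above:

```python
import time
from pathlib import Path

def recent_logs(root: Path, days: float = 1.0):
    """Return .log files under 'root' modified within the last 'days' days, newest first."""
    cutoff = time.time() - days * 86400
    hits = [p for p in root.rglob("*.log") if p.stat().st_mtime >= cutoff]
    return sorted(hits, key=lambda p: p.stat().st_mtime, reverse=True)

# Hypothetical instance name -- substitute your {host} value from the paths above
logroot = Path("/u01/Oracle/Middleware/user_projects/myhost/diagnostics/logs")
for log in recent_logs(logroot):
    print(log)
```

The same helper works against any of the directories listed above, e.g. recent_logs(Path(".../logs/essbase"), days=0.5) to narrow the sweep to Essbase logs from the last 12 hours.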

Wednesday, July 27, 2016

Firefox right click menu not working properly with Workspace

As more people continue to move away from IE and adopt Firefox as the default Oracle EPM browser, some run into an annoying little issue.

When performing a right click in Workspace using the Firefox browser, the Firefox shortcut menu overlays the Oracle Workspace menu, making it difficult to access the Oracle menu items.




A quick way to get to the Oracle menu is to press the escape key, but having to do that all the time can be annoying.

There is a setting in Firefox that drives this behavior.  The solution to this is to change the Context Menu option in Firefox to True. 

Basic solution


1) Open Firefox and type 'about:config' in the address bar


2) Search for 'dom.event.contextmenu.enabled' and change the value to 'true'



You can now go into Workspace and the right click menu will work as expected.

Advanced solution

We actually had an interesting time with this for a while.  Once we figured out how to correct the issue we were getting complaints from users that it worked for a while but then would revert back to the old behavior.  This took some more digging and I finally figured out the issue.

Some enterprise deployments of Firefox will actually distribute a 'Firefox preferences configuration file'.  This file has the various settings the administrator wants for the browser by default.  When a user logs in, the file is read and the Firefox settings are updated.  So in our case we were changing the option to true, but the config file had it set to false, so the next time the users went back into Firefox the option was resetting.

The steps for updating the config file are as follows:

32 bit Windows
  1. Make sure Firefox is closed
  2. Go to C:\Program Files\Mozilla Firefox
  3. Scroll to find mozilla.cfg
  4. Make a copy of the file called mozilla.cfg_old (this is just so you have a backup of original)
  5. Open mozilla.cfg in text editor
  6. Scroll to pref("dom.event.contextmenu.enabled",false);
  7. Change it to pref("dom.event.contextmenu.enabled",true);
  8. Save and close file
  9. Restart Firefox

64 bit Windows
  1. Make sure Firefox is closed
  2. Go to C:\Program Files (x86)\Mozilla Firefox
  3. Scroll to find mozilla.cfg
  4. Make a copy of the file called mozilla.cfg_old (this is just so you have a backup of original)
  5. Open mozilla.cfg in text editor
  6. Scroll to pref("dom.event.contextmenu.enabled",false);
  7. Change it to pref("dom.event.contextmenu.enabled",true);
  8. Save and close file
  9. Restart Firefox
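If you find yourself re-applying this edit every time the config file is redistributed, the change can be scripted.  This is a minimal Python sketch of steps 3-8 above; the `enable_context_menu` helper name is my own, and the install path is the one from the steps above (adjust for 32- vs 64-bit Windows):

```python
from pathlib import Path

def enable_context_menu(cfg: Path) -> bool:
    """Flip dom.event.contextmenu.enabled from false to true in mozilla.cfg.

    Keeps a mozilla.cfg_old backup, mirroring the manual steps above.
    Returns True if the file was changed, False if the pref was not found.
    """
    text = cfg.read_text()
    target = 'pref("dom.event.contextmenu.enabled",false);'
    if target not in text:
        return False
    cfg.with_suffix(".cfg_old").write_text(text)  # backup of the original
    cfg.write_text(text.replace(target, 'pref("dom.event.contextmenu.enabled",true);'))
    return True

# Hypothetical call -- on 64-bit Windows use 'Program Files (x86)' instead
# enable_context_menu(Path(r"C:\Program Files\Mozilla Firefox\mozilla.cfg"))
```

Run it with Firefox closed, the same as the manual steps.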

Now, depending on how your administrator distributes this file, you could still wind up with an issue if they overwrite your updated file, but at least now you know where to check.  You could discuss it with the administrator, find out why they want it set to false, and have them change it.  In our case there was a concern this was a security risk, but the admin team agreed it wasn't and changed it back to true.  In the worst-case scenario, if the admin insists it needs to be false, then depending on your willingness to break the rules you could always create a batch script that runs on login and edits the file.  Of course, depending on how strict your company is, this could result in you getting fired, so take that advice with a grain of salt and exercise your own discretion.  If you are stuck with the option set to false, refer to the beginning of this post where I suggested using the 'Escape' key!


Wednesday, June 1, 2016

Big feature comes to Essbase in standard PSU

Introduction

Back when Essbase version 7 came out and we were introduced to ASO, Hyperion also introduced us to MDX.  It was added as the calculation language for member formulas in ASO cubes, but we were also able to use it as a query language against the database.  As a bonus, we were not limited to just ASO; we could write queries against BSO databases too.  Having a powerful query language similar to SQL had tremendous potential for Essbase.  We could write complex scripts to pull very specific data from our cubes, and we even had the ability to create custom members and sets in the queries, alleviating the need to modify the actual outline just to get a calculated result.

Over time the MDX query language has been improved, and Oracle has done some very cool things with it behind the scenes to drive things like OBIEE.  For the average power user, though, MDX for Essbase had one major limitation: there was no good way to get the results of the query in a usable format.  When using MDX against Essbase, you had a few options to get data.

  1. You could pull the data through EAS using an MDX script.  
    1. While EAS could be useful for writing a script, the result set was returned into the EAS window with no way to get it out to Excel or even Notepad.  Large MDX scripts could time out and sometimes even take EAS down completely.
  2. You could copy and paste your MDX script into the Smartview query window.  
    1. This worked well enough for small to medium size queries.  Large queries would tend to cause issues with APS and Excel.  In addition, Excel became unresponsive while the query was running, so once you kicked it off you had to wait for it to complete.  If it was a 20-minute query, you weren't doing anything else in Excel during that time.  This also meant you had to run the query manually, with no way to automate or run multiple queries as a batch.
  3. Running the queries via MaxL.  
    1. MaxL had the benefit of running the query in the background, allowing you to run larger queries that typically didn't time out.  You could pull back really large data sets, and you could write a script to run multiple queries in a batch.  The problem with MaxL was that the output was not usable (see samples below).  MaxL did not allow for a delimiter, so results were fixed width; that itself was not a show stopper, but it was often problematic.  The real issue, however, was that MaxL would echo additional info about the query itself into the result set.  It would echo the query before the result set and then insert other info about the process.  If your intention was to have a user run these queries or to feed this data to another system, no one was going to be happy, because you would need to clean up the result set before it became usable.  This meant a one-off query could be done, but you weren't going to be able to use this as a standard solution.
  4. Use a third-party tool.  
    1. While a few have been developed, getting access to them usually involves additional licensing fees and adding another component to the system.
Looking at all of those options, MaxL often seemed like the best choice except for the formatting issues.  At least, that was the case up until now!

The Fix

I've been a vocal proponent of fixing this functionality for a long time, often discussing it at conferences with product and development managers, and submitting enhancement requests.  Recently I had the opportunity to work with the Essbase development team on some features and the issue of MDX output came up.  We talked through the short list of what was needed to get over this hump and the development team was very responsive.

With this latest release of Essbase 11.1.2.4.010, Oracle has added a few key enhancements to MaxL that will allow you to modify the MDX output as a clean and useful data set.  The rest of this post will be dedicated to these new features and how to use them.  I will also discuss the use of some existing MaxL features that will be needed to get the output the way you want it.

One more very important point to mention is that these enhancements are not limited to just MDX output; they also work for DISPLAY and QUERY commands.

11.1.2.4.010 new features in MaxL

  • set column_separator
    • Allows you to define a delimiter to separate your columns.  This could be a comma, tab, vertical bar, or pretty much any other character.  As a nice bonus, the dev team made this a multi-byte delimiter, meaning you can specify a sequence of characters as your delimiter.  This comes in handy when you have that brilliant financial analyst who thought he was being so cool when he decided to use a pipe (|) in his member descriptions and inadvertently broke all the exports going from one system to another.
    • Examples:
      • Pipe - set column_separator "|";
      • Comma - set column_separator ",";
      • Tab - set column_separator " ";
      • Multi character - set column_separator "~#";
  • set column_header
    • This setting has an ON | OFF switch allowing you to have column headers in your result set or suppress them.  In most cases you will want them on, but turning them off can be useful particularly when passing the output to another system.  In case you were wondering, it shows the column heading for dimensions in the rows as well, even when using complex Crossjoins.
    • Examples: 
      • On - set column_header ON;
      • Off - set column_header OFF;
  • set echo_mode
    • This setting has an ON | OFF switch that allows you to suppress all that excess content I was describing above.  By setting this to OFF, you will no longer get the query repeated back in the result set or any of the other system messages that you would find in prior versions.
    • Examples: 
      • On - set echo_mode ON;
      • Off - set echo_mode OFF;
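To see why the multi-byte column_separator mentioned above matters, here is a quick Python sketch (the sample rows are invented for illustration): a pipe inside a member description breaks a pipe-delimited row, while a two-character delimiter such as ~# survives intact.

```python
# A pipe inside a member description breaks a pipe-delimited row:
line_pipe = "East|Cola | Diet|2747.00"       # description is "Cola | Diet"
print(line_pipe.split("|"))                  # 4 fields instead of 3 -- misaligned

# A multi-character delimiter such as "~#" is unlikely to appear in the data:
line_multi = "East~#Cola | Diet~#2747.00"
print(line_multi.split("~#"))                # 3 fields, as intended
```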

Existing features in MaxL you will need or want to get this working

  • set column_width
    • The column_width setting is used here as a kind of safety feature.  If you don't set it large enough, there is a chance some data can get cut off; this is especially true in some of the DISPLAY result sets.  To be safe, set it really high, since doing so has no downside when using a delimiter.
    • Example: 
      • set column_width 10000;
  • set message level
    • Even with echo_mode off, certain messages will show up in your result set if you allow them to.  To be safe, I suggest setting this to only allow FATAL messages through.  Basically, if you hit a fatal error, your query is not going to finish anyway.
    • Example:
      • set message level fatal;
  • alter session set dml_output
    • dml_output allows you to apply some formatting to your query results.  Note that MDX output is not a replacement for the fully featured report script.  However, there are a few key alter statements you will want to consider when generating an MDX output set.
      • Turning aliases on and off.  Often I will get requests from users to either turn aliases on or off when running a query.
        • Examples:
          • alter session set dml_output alias off;
          • alter session set dml_output alias on;
      • Formatting numbers (e.g., decimal places) is often useful when running a query.
        • Examples:
          • alter session set dml_output numerical_display fixed_decimal;
          • alter session set dml_output precision 2;
      • Turn off cell status.  Cell status returns additional information about each cell.  Besides producing extra data you don't want, on really large queries it can even impact performance.
        • Example:
          • alter session set dml_output cell_status off;

Samples

Sample 1

This is an example of a basic MDX query and the result set generated in 11.1.2.4.009.  In other words, this is a "Before" example.

Script

login 'user' 'pwd' on 'host';
set message level fatal;
alter session set dml_output alias off;
alter session set dml_output cell_status off;
alter session set dml_output numerical_display fixed_decimal;
alter session set dml_output precision 2;
spool on to 'c:\Temp\CO\output.txt';

select {[May],[Jun]} on AXIS(0), 
{[100],[200]} on AXIS(1) 
from [Sample].[Basic];

spool off;
logout;
exit;

Output

MAXL > ; select {[May],[Jun]} on AXIS(0), 
     > ; {[100],[200]} on AXIS(1) 
     > ; from [Sample].[Basic];

 Axis-1              (May)               (Jun)              
+-------------------+-------------------+-------------------
 (100)                           2571.00             2859.00
 (200)                           2302.00             2445.00


Sample 2

This is the same query run in 11.1.2.4.010 with the new features.

Script

login 'user' 'pwd' on 'host';
set column_width 10000;
set message level fatal;
set column_separator "|";
set column_header on;
set echo_mode off;
alter session set dml_output alias off;
alter session set dml_output cell_status off;
alter session set dml_output numerical_display fixed_decimal;
alter session set dml_output precision 2;
spool on to 'c:\Temp\CO\output.txt';

select {[May],[Jun]} on AXIS(0), 
{[100],[200]} on AXIS(1) 
from [Sample].[Basic];

spool off;
logout;
exit;

Output

Product|May|Jun
100|2571.00|2859.00
200|2302.00|2445.00

Notes

You will see that in the second query we do not get the actual query echoed back into the spool.  We do not get "Axis-1" as a column header; we get the actual dimension name for the rows.  We do not get the repetitive dashes as a line break (-------------).  You will also note that the headers and row members are no longer enclosed in parentheses.  This output can be provided to any user to open in Excel or import into another application.

Sample 3

This is the same query as Sample 2, but with column headers turned off.

Script

login 'user' 'pwd' on 'host';
set column_width 10000;
set message level fatal;
set column_separator "|";
set column_header off;
set echo_mode off;
alter session set dml_output alias off;
alter session set dml_output cell_status off;
alter session set dml_output numerical_display fixed_decimal;
alter session set dml_output precision 2;
spool on to 'c:\Temp\CO\output.txt';

select {[May],[Jun]} on AXIS(0), 
{[100],[200]} on AXIS(1) 
from [Sample].[Basic];

spool off;
logout;
exit;

Output

100|2571.00|2859.00
200|2302.00|2445.00


Sample 4

This is the same query as Sample 1, but with the new features and a tab delimiter.

Script

login 'user' 'pwd' on 'host';
set column_width 10000;
set message level fatal;
set column_separator " ";
set column_header on;
set echo_mode off;
alter session set dml_output alias off;
alter session set dml_output cell_status off;
alter session set dml_output numerical_display fixed_decimal;
alter session set dml_output precision 2;
spool on to 'c:\Temp\CO\output.txt';

select {[May],[Jun]} on AXIS(0), 
{[100],[200]} on AXIS(1) 
from [Sample].[Basic];

spool off;
logout;
exit;

Output

Product May Jun
100 2571.00 2859.00
200 2302.00 2445.00

Notes

You will see that to get a tab delimiter, I simply pressed the Tab key between two double quotes.


Sample 5

This is a similar query, but now adding additional dimensions to the rows through the use of a Crossjoin.

Script

login 'user' 'pwd' on 'host';
set column_width 10000;
set message level fatal;
set column_separator "|";
set column_header on;
set echo_mode off;
alter session set dml_output alias off;
alter session set dml_output cell_status off;
alter session set dml_output numerical_display fixed_decimal;
alter session set dml_output precision 2;
alter session set dml_output get_missing_cells off;
spool on to 'c:\Temp\CO\output.txt';

select NON EMPTY 
{[Year].children} on AXIS(0), 
{CrossJoin([Market].children, [Product].children)} on AXIS(1) 
from [Sample].[Basic];

spool off;
logout;
exit;

Output

Market|Product|Qtr1|Qtr2|Qtr3|Qtr4
East|100|2747.00|3352.00|3740.00|2817.00
East|200|562.00|610.00|372.00|990.00
East|300|591.00|922.00|522.00|592.00
East|400|1480.00|1615.00|1712.00|1537.00
East|Diet|555.00|652.00|644.00|557.00
West|100|1042.00|849.00|899.00|759.00
West|200|2325.00|2423.00|2540.00|2439.00
West|300|2363.00|2739.00|2937.00|2692.00
West|400|1407.00|1504.00|1563.00|1380.00
West|Diet|2025.00|1975.00|2100.00|1987.00
South|100|1051.00|1198.00|1312.00|1212.00
South|200|1465.00|1540.00|1612.00|1498.00
South|300|561.00|529.00|591.00|669.00
South|400|#Missing|#Missing|#Missing|#Missing
South|Diet|1146.00|1289.00|1310.00|1167.00
Central|100|2208.00|2473.00|2560.00|2249.00
Central|200|2369.00|2457.00|2481.00|2271.00
Central|300|2414.00|2579.00|2648.00|2450.00
Central|400|2118.00|2317.00|2423.00|2245.00
Central|Diet|3291.00|3420.00|3478.00|3230.00

Sample 6

This is a similar query to Sample 5, but now using multiple axes instead of a Crossjoin.

Script

login 'user' 'pwd' on 'host';
set column_width 10000;
set message level fatal;
set column_separator "|";
set column_header on;
set echo_mode off;
alter session set dml_output alias off;
alter session set dml_output cell_status off;
alter session set dml_output numerical_display fixed_decimal;
alter session set dml_output precision 2;
alter session set dml_output get_missing_cells off;
spool on to 'c:\Temp\CO\output.txt';

select NON EMPTY 
{[Year].children} on AXIS(0), 
{[Market].children} on AXIS(1),
{[Product].children} on AXIS(2)
from [Sample].[Basic];

spool off;
logout;
exit;

Output

Market|Product|Qtr1|Qtr2|Qtr3|Qtr4
East|100|2747.00|3352.00|3740.00|2817.00
East|200|562.00|610.00|372.00|990.00
East|300|591.00|922.00|522.00|592.00
East|400|1480.00|1615.00|1712.00|1537.00
East|Diet|555.00|652.00|644.00|557.00
West|100|1042.00|849.00|899.00|759.00
West|200|2325.00|2423.00|2540.00|2439.00
West|300|2363.00|2739.00|2937.00|2692.00
West|400|1407.00|1504.00|1563.00|1380.00
West|Diet|2025.00|1975.00|2100.00|1987.00
South|100|1051.00|1198.00|1312.00|1212.00
South|200|1465.00|1540.00|1612.00|1498.00
South|300|561.00|529.00|591.00|669.00
South|400|#Missing|#Missing|#Missing|#Missing
South|Diet|1146.00|1289.00|1310.00|1167.00
Central|100|2208.00|2473.00|2560.00|2249.00
Central|200|2369.00|2457.00|2481.00|2271.00
Central|300|2414.00|2579.00|2648.00|2450.00
Central|400|2118.00|2317.00|2423.00|2245.00
Central|Diet|3291.00|3420.00|3478.00|3230.00

Notes

You will see that even with multiple axes, the column headings line up nicely.

Sample 7

The handling of #Missing poses some issues, as there is still no way to remove #Missing from an MDX result set.  Structuring the query a certain way and making use of the NON EMPTY directive can usually overcome this.  In most cases you can find a dimension where you are only selecting a single member (think year, scenario, view, etc.); placing that single member alone on the column axis effectively makes it a "Data" column.  In some cases you could even use an attribute dimension, although doing so could impact performance.

In this example I format the query in such a way as to get a level 0 export from Sample.Basic for Actual and Budget.  Note that if you run this without the NON EMPTY directive, you will get #Missing values in the "Data" column, but with NON EMPTY those rows are suppressed.

login 'user' 'pwd' on 'host';
set column_width 10000;
set message level fatal;
set column_separator "|";
set column_header on;
set echo_mode off;
alter session set dml_output alias off;
alter session set dml_output cell_status off;
alter session set dml_output numerical_display fixed_decimal;
alter session set dml_output precision 2;
alter session set dml_output get_missing_cells off;
spool on to 'c:\Temp\CO\output.txt';

WITH
SET [_Year] AS '{Descendants([Year], [Year].levels(0))}'
SET [_Measures] AS '{Descendants([Measures], [Measures].levels(0))}'
SET [_Product] AS '{Descendants([Product], [Product].levels(0))}'
SET [_Market] AS '{Descendants([Market], [Market].levels(0))}'
SET [_Scenario] AS '{[Actual], [Budget]}'
MEMBER [Caffeinated].[Data] AS '[Caffeinated]'

SELECT
{[Caffeinated].[Data]} ON AXIS(0),
NON EMPTY
CrossJoin([_Year], 
Crossjoin([_Measures], 
Crossjoin([_Product], 
Crossjoin([_Market], [_Scenario])
)
)
)
ON AXIS(1) 
FROM [Sample].[Basic]
;

Output (only partial output due to size)

Year|Measures|Product|Market|Scenario|Data
Jan|Sales|100-10|New York|Actual|678.00
Jan|Sales|100-10|New York|Budget|640.00
Jan|Sales|100-10|Massachusetts|Actual|494.00
Jan|Sales|100-10|Massachusetts|Budget|460.00
Jan|Sales|100-10|Florida|Actual|210.00
Jan|Sales|100-10|Florida|Budget|190.00
Jan|Sales|100-10|Connecticut|Actual|310.00
Jan|Sales|100-10|Connecticut|Budget|290.00
Jan|Sales|100-10|New Hampshire|Actual|120.00
Jan|Sales|100-10|New Hampshire|Budget|110.00
Jan|Sales|100-10|California|Actual|678.00

Closing

So there you have it: a simple enhancement with tremendous potential.  With clean, formatted output, users can run complex queries and work with the results in a format they expect.

I want to give credit to the Essbase development and product management team for working on this and turning it around so quickly.  They did a great job, and as a customer I am looking forward to continued collaboration with our primary vendor/partner.