Monday, December 8, 2008

MDX Resources

Today I received an email from someone asking what resource material I recommend for MDX with Essbase. After I wrote up and sent the response, I thought it would make a good post, so below is the body of the email I sent:

I would recommend the following reference material for working with MDX for Essbase.
If you have block storage experience, then the following PDF is very helpful. You can find a copy at
This content can also be found in the Technical Reference under the MDX node; there is a section called Aggregate Storage Topics.

Beyond the material provided by Hyperion, there are two books I recommend:
Fast Track to MDX – Whitehorn, Zare, and Pasumansky.
Fast Track is not Essbase specific; it is focused on MS Analysis Services, but it provides a good introduction to the basics of MDX that I found very useful. Mosha Pasumansky created the MDX language, so it stands to reason that if you are serious about learning MDX you should read at least one book he has authored or co-authored.

MDX Solutions with Microsoft SQL Server Analysis Services 2005 and Hyperion Essbase, Second Edition – Spofford, Harinath, Webb, Huang, Civardi
Make sure you get the second edition of this book; the second edition is the one that devotes some chapters to Essbase. Again, the content leans toward Analysis Services, but at least there are a couple of chapters dedicated to Essbase.
Beyond those resources: practice, experiment, and leverage the user forums.

Friday, December 5, 2008

Dec Update

Quick update since I haven't posted in a while. I actually have a topic to post on; I just haven't gotten around to putting it up, but should have it up soon. Things have been very busy lately. Working in the financial services industry is a challenge these days; actually, I suppose working in any industry is a challenge these days. Fortunately I do have some distractions, one of which is my participation on the ODTUG Hyperion SIG Board of Directors. Currently the BOD is engaged in putting together the agenda for the Kaleidoscope user conference in Monterey, CA, in June 2009. Over the past few weeks, we have been reviewing abstracts and structuring the different tracks for the five days of the conference. I am very pleased with the content we have and really like the way the program is coming together. While expenses are going to be very tight this year for most of us, I strongly urge you to check it out and try to attend. Remember, one of the main differentiators of this conference is the amount of training you will get; this will not be a typical wine-and-dine vendor junket. The bulk of the conference is devoted to technical training, and dollar for dollar there is no better deal on the amount of content you will learn for the price. I hope to see many of you there, and if you do attend, be sure to look for my presentations.

Best Regards

Wednesday, July 16, 2008

Time Functionality

Something I've worked on a number of times, and even spoke about at the ODTUG Kaleidoscope conference, is MDX, specifically how to use it for time-related calcs like period-to-date functions and time balancing. I've seen quite a few posts on the subject and figured it was worth putting up some examples here.

More recent releases of Essbase will do time balancing natively, but period-to-date is something you still have to work out. My preferred method is to add an analytic dimension to the database and set the formula in one of its calculated members. For those of you who aren't familiar with the term, an analytic dimension is one that you add to your database that has a single default load member; all other members are calculated. In the case of time functionality I usually call this dimension [View] and have a default member called [Per] (for Periodic). I will then have some other members as needed, like [Time Balance], [QTD], [YTD], etc.

For Example:
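Here is a sketch of what such an analytic dimension might look like (member names are illustrative, not from an actual outline):

View (dimension)
    Per (default stored load member, a.k.a. Periodic)
    QTD (calculated; formula shown below)
    YTD (calculated; formula shown below)
    Time Balance (calculated; formula shown below)

Everything loads to [Per]; the other members carry the MDX formulas that follow.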

The period-to-date functionality is pretty easy using the PeriodsToDate MDX function; the key is remembering to wrap it in a SUM, because PeriodsToDate returns a member set, not a value. YTD looks like this:

SUM(PeriodsToDate([Time].Generations(1), [Time].CurrentMember), [View].[Per])

QTD is almost identical, with a modification to the generation reference:

SUM(PeriodsToDate([Time].Generations(2), [Time].CurrentMember), [View].[Per])

Time balancing is a little more involved. First you need to flag the members you want time balanced with a User Defined Attribute (UDA). Once you have that set, you have to decide what kind of time balancing you want to do. If you just want to grab the last child member, that's fairly easy:

CASE WHEN IsUDA([Measures].CurrentMember, "TB") THEN
    IIF(IsLeaf([Time].CurrentMember),
        [View].[Per],
        (ClosingPeriod([Time].Levels(0), [Time].CurrentMember), [View].[Per]))
ELSE [View].[Per]
END

If you want to show the last value that has data, known as time balance last, skip missing, then you would use something like this:

CASE WHEN IsUDA([Measures].CurrentMember, "TB_Last") THEN
    IIF(IsLeaf([Time].CurrentMember),
        [View].[Per],
        IIF(NonEmptyCount([Time].CurrentMember.Children, [View].[Per]) > 0,
            ([View].[Per], Tail(Filter(Leaves([Time].CurrentMember),
                Not IsEmpty([View].[Per]))).Item(0).Item(0)),
            MISSING))
ELSE [View].[Per]
END
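To illustrate the skip-missing behavior, here is a hypothetical balance-type measure tagged with the TB_Last UDA (the data values are made up):

            Jan    Feb    Mar    Q1
Headcount   100    105    #Mi    105

Q1 returns 105: Mar is empty, so the formula skips it and takes the last leaf period with data, which is Feb.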

Now you'll want to modify your QTD and YTD formulas to work with your time balancing. QTD:

CASE WHEN IsUDA([Measures].CurrentMember, "TB_Last") THEN
    IIF(IsLeaf([Time].CurrentMember),
        [View].[Per],
        IIF(NonEmptyCount([Time].CurrentMember.Children, [View].[Per]) > 0,
            ([View].[Per], Tail(Filter(Leaves([Time].CurrentMember),
                Not IsEmpty([View].[Per]))).Item(0).Item(0)),
            MISSING))
ELSE SUM(PeriodsToDate([Time].Generations(2), [Time].CurrentMember), [View].[Per])
END

And YTD:

CASE WHEN IsUDA([Measures].CurrentMember, "TB_Last") THEN
    IIF(IsLeaf([Time].CurrentMember),
        [View].[Per],
        IIF(NonEmptyCount([Time].CurrentMember.Children, [View].[Per]) > 0,
            ([View].[Per], Tail(Filter(Leaves([Time].CurrentMember),
                Not IsEmpty([View].[Per]))).Item(0).Item(0)),
            MISSING))
ELSE SUM(PeriodsToDate([Time].Generations(1), [Time].CurrentMember), [View].[Per])
END

Tuesday, July 1, 2008

Alternate Time roll-up in ASO cube with EIS

Ok, so for my first post I figured I would write about something that happened the other day that I thought was pretty cool. By the way, this is pretty much going to be the format of this blog: when I do something that I think is cool, I'll write about it, and if anyone can get any benefit out of it, cool. So as I was saying, I came up against a little challenge the other day, and the result came out pretty nicely. At the end of the day, I had to create two different time roll-ups in my ASO cube. I thought about actually doing two different time dimensions (what the hell, it's an ASO cube, I can have as many dimensions as I want), but I resisted this temptation and went back to my OLAP basics: would the two dimensions have any meaning if I cross-tabbed them in a report? Answer: no. So I went with an alternate roll-up.

Now, an alternate roll-up is certainly nothing new, and probably not worth writing about in and of itself, but this one did pose a couple of challenges. (For those of you who are new, alternate roll-ups are pretty much what they sound like: an alternate way of rolling certain members up in a hierarchy. For example, you might have a member called [New York] that rolls up to a member called [North East Region]. You could then share the member [New York] and roll it up under another member like [Major Markets].)
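A rough sketch of that example as an outline (member names taken from the example above; purely illustrative):

Market
    North East Region
        New York
    Major Markets
        New York (Shared Member)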

So first let me set the stage. For starters, this database goes down to the daily level. The data set in this application flows on what is called a Production calendar. Basically, the Production calendar runs a few days short on each month, so the month of June ends on June 23rd and the month of July begins on June 24th. We had already hashed this out with the users and had them provide the production calendar for the next two years so I could integrate it with my standard calendar.

Now is a good time to mention that I've built my own little master data repository where I store approximately 4-5 standard hierarchies that I use throughout multiple applications. I have a standard time dimension (which also has the company's fiscal calendar built in), then a structure for the GL, a Cost Center roll-up, and another for a Branch hierarchy. For most of these structures I have both Parent/Child and Gen/Level tables, so I have a lot of flexibility in using them with EIS or SQL load rules. Whenever I build a new app, chances are one of these structures will be used, so I just grab them from the central repository. Perhaps I can post another time and get more in depth about what I am doing there.

Getting back to this post: I pulled down the time structure, joined it with the Prod calendar the users provided, and built the Production calendar for the app. After a while the users kept coming up with issues. It seems all internal reporting is done on the Production calendar, but anything going external, especially reporting that needed to match up against third parties, was done on the regular calendar (i.e., June is 6/1-6/30). I gave it some thought and came up with a pretty quick way to handle it. I already had the regular time roll-up, and my time dimension in EIS was pointing to a view, so I could simply modify the view with a UNION query to bring in the regular time roll-up. I had to do some manipulation to keep the upper-level members unique.
All I had to do was prefix all Prod calendar members with 'PY' (i.e., [PY2008-Q1]) and all Fiscal year members with 'FY' (i.e., [FY2008-Q1]).

The SQL query looked like this:

SELECT 'Production Calendar' AS Hierarchy,
       'PY' + CAST(prod_yr AS varchar) AS fiscal_year,
       'PY' + CAST(prod_yr AS varchar) + '-Q' + CAST(prod_qtr_no AS varchar) AS fiscal_qtr,
       'Q' + CAST(prod_qtr_no AS varchar) + '-PY' + CAST(prod_yr AS varchar) AS fiscal_qtr_desc,
       'PY' + CAST(prod_yr AS varchar) + '-' + RIGHT('00' + CAST(prod_mo_no AS varchar), 2) AS month,
       prod_mo_short + '-PY' + CAST(prod_yr AS varchar) AS month_desc,
       CAST(cal_year AS varchar) + '-' + RIGHT('00' + CAST(cal_mo_no AS varchar), 2) + '-' + RIGHT('00' + CAST(cal_day_no AS varchar), 2) AS day,
       prod_mo_short AS month_attr,
       prod_mo_sort AS month_attr_sort,
       cal_day_short AS day_attr,
       cal_day_of_week_no AS day_attr_sort,
       weekday,
       sort_id,
       NULL AS alias
FROM dbo.Periods

UNION ALL

SELECT 'Fiscal Calendar' AS Hierarchy,
       'FY' + CAST(fiscal_year AS varchar) AS fiscal_year,
       'FY' + CAST(fiscal_year AS varchar) + '-Q' + CAST(fiscal_qtr_no AS varchar) AS fiscal_qtr,
       'Q' + CAST(fiscal_qtr_no AS varchar) + '-FY' + CAST(fiscal_year AS varchar) AS fiscal_qtr_desc,
       'FY' + CAST(cal_year AS varchar) + '-' + RIGHT('00' + CAST(cal_mo_no AS varchar), 2) AS month,
       cal_month_short + '-FY' + CAST(cal_year AS varchar) AS month_desc,
       CAST(cal_year AS varchar) + '-' + RIGHT('00' + CAST(cal_mo_no AS varchar), 2) + '-' + RIGHT('00' + CAST(cal_day_no AS varchar), 2) AS day,
       cal_month_short AS month_attr,
       prod_mo_sort AS month_attr_sort,
       cal_day_short AS day_attr,
       cal_day_of_week_no AS day_attr_sort,
       weekday,
       1000000 + sort_id AS sort_id,
       NULL AS alias
FROM dbo.Periods

ORDER BY sort_id

The result set (screenshots omitted) contained both the 'Fiscal Calendar' and 'Production Calendar' hierarchy rows.


The next step was to modify my EIS meta-outline to allow for multiple hierarchies. I went to the properties for the dimension, opened the Outline Build properties, and set it to 'Stored for All Alternate Hierarchies'. This setting tells Essbase, when building this dimension, to make the root member allow multiple hierarchies and to set all Gen 2 members to Stored. This was ideal for this app: I didn't want to lose any performance with the alternate roll-up, so by having both hierarchies stored I'm able to maximize performance on both.

The end result worked out really well.

The Prod Year and Fiscal Year roll-ups each came out as its own stored hierarchy (screenshots omitted).
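Since the screenshots are not reproduced here, the dimension came out roughly like this (member names reconstructed from the prefixing scheme and SQL above; illustrative only):

Time (Multiple Hierarchies Enabled)
    Production Calendar (Stored Hierarchy)
        PY2008
            PY2008-Q1
                PY2008-01
                    2008-01-01
    Fiscal Calendar (Stored Hierarchy)
        FY2008
            FY2008-Q1
                FY2008-01
                    2008-01-01 (Shared)

The daily members are the same in both roll-ups, so the second hierarchy picks them up as shared members.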

Once I had the time roll-ups in place, I loaded the data and ran an aggregation. I cranked it up to about 10x the initial data load size and it flies; query response is very fast. End users can now see the data both ways. This was a quick win: total development time was less than two hours.

Feel free to post comments.