Channel: Glenn Schwartzberg's Essbase Blog

Solid State to the rescue


Today I’ve got a story of a client with a problem. It will be a short story, but one that is more common than you might think. When we first started our engagement with the client, we recommended a physical server and dedicated disk. Much to our dismay, they decided they could get the performance they needed using VMs and SAN storage.

Their system does EXTENSIVE allocations, and needless to say, they were not getting acceptable performance. For a long time we argued that their environment was part of the problem. They kept showing us stats indicating there was no bottleneck on either the VM or the SAN. Their calculation times ranged from 8 to 27 hours. It should be noted that between calculations the database was reset to the same initial state and the same data files were rerun. Yes, there were minor changes to one or two drivers, but nothing to make that big a difference.

Finally, a wise soul in IT at the client decided to try bringing up a parallel environment with a physical server and dedicated disk. Performance improved and they were getting more consistent times, but still longer than they liked. He went one step further and got a loan of some solid state drives. With them, the calculation time went down to 5-6 hours (depending on data volume), and it was consistently that. With proof of the improvement, they have implemented solid state drives in production and maintain the 5-6 hour time.

We have debugged SAN issues with multiple clients, and I have come to dislike SANs immensely. On the other hand, while I had trepidation about solid state drives in the past, I am a convert and think they can provide a huge performance boost for many applications, especially if the app does heavy read/write and calculations.


Smart View Compatibility


I have been speaking at a lot of conferences and client events over the last year touting the great new features of the new version of Smart View. As a matter of fact, I’m repeating the session on Feb 16th and 18th in interRel webinars (contact dwhite@interrel.com for more info). The questions I often get are “What do I need to upgrade to get functionality?” and “If I just upgrade Smart View, what functionality do I still get?”

Before I answer those questions, I’ll first answer “What version of Smart View should you upgrade to?” Since Smart View is pretty backward compatible, I would upgrade to the latest version, 11.1.2.2.310. This is a patch that was released last Monday and includes connectivity to OBIEE, or as many Oracle people now call it, BIFS (Business Intelligence Foundation Suite). Note, you must be on OBIEE 11.1.1.7 for the connectivity to work. There are a number of other enhancements and bug fixes in this version.

Next, what do you have to upgrade to get full functionality? Well, for Smart View itself, you should be on 11.1.2.1.102 (or higher), as well as APS and Essbase 11.1.2.1.102 or higher. If you are going the patch route on 11.1.2.1, then I would recommend you get the Smart View patch 11.1.2.1.103 and the APS and Essbase patches 11.1.2.1.104. Of course, you are even better off if you upgrade to 11.1.2.2.310.

Finally, if you just upgrade Smart View without upgrading APS or Essbase, the functionality you can expect to get (thank you, Smart View development team, for this list) is:

  • Ribbon specific to Provider
  • Re-designed Options Dialog
  • Smart View Panel
  • Sheet level Options
  • Retain Excel Formatting
  • Improved Function Builder
  • Fix Links (removes path references in cells with Smart View functions). Before: ='D:\Oracle\Smartview\bin\hstbar.xla'!HSGETVALUE(….. After: =HSGETVALUE(...
  • Performance Improvements

So while you won’t get cool things like multiple grids on a sheet or member name and alias, you get a few perks. The sheet-level options are one thing that was missing in prior versions of Smart View, and I’m glad they got put in.

If you upgrade to at least 11.1.2.2, you also get an awesome new Smart Query tool that enables you to create extremely complex queries that return sets of members/numbers which can be combined and saved. It is a fantastic feature that is not getting the press it deserves. In a future post, I’ll go through a detailed example so you can see its power.

InterRel News and news from down under


Typically I don’t blog about the company I work for, devoting my time to more technical articles, but I decided to deviate a little today to talk about a couple of things.

First, after a long time, interRel has a new website. It looks pretty nice. Check it out at www.interrel.com. I think it is nicer looking and more informative than our old site.

Second, interRel is hiring. If you have experience in the Hyperion line of products or OBIEE and are looking for a cool consulting firm to work for, we could be interested in you. interRel is a great company to work for, and we believe in consultant growth and training as much as we believe in customer service. I’m not going to give you the marketing on why you should join interRel; you probably already know. If you have an interest in talking to us, email info@interrel.com.

Finally, some news not associated with interRel in any way. If you plan to be in New Zealand or Australia next month, why not attend ODTUG’s Seriously Practical Conference in Melbourne March 21st and 22nd, or the NZOUG conference on March 18th and 19th at Te Papa, Wellington (sounds like a kind of steak to me). Sounds like a great way to get a paid vacation: go to the conference and see the countryside. This is truly not associated with interRel, as we will not be speaking at either conference, but my friend, mmip, Cameron Lackpour put together the EPM agenda for both conferences and will be speaking there. If you are there, tell him Glenn said to say hi. That is the secret phrase, and he might have a present for you (not really, it would just be fun).

I have issues


Were I Cameron Lackpour, I would call these stupid pet tricks, but since I’m not, I’ll say they are issues I’ve encountered. Luckily, I’ve resolved them, so perhaps I can save you the pain I went through.

The first issue came when I tried to use a Custom Defined Function (CDF) that runs a SQL statement or stored procedure from within a calc script. This function was written by Toufic Walkim (thank you) and was given to me a while ago. I’ve used it a few times at different clients, but on older versions of Essbase. In trying to get it to run on 11.1.2.2, I encountered a number of issues.

First, it could not find the correct ODBCJDBC driver. That was resolved by downloading the driver from Microsoft and changing the properties file to point to it (or so I thought). Turns out there are two drivers in the download: ODBCJDBC.DLL and ODBCJDBC4.DLL. After experimentation, I had put ODBCJDBC.DLL in the UDF directory and got an error that basically said I needed to use ODBCJDBC4.DLL. Adding it to the directory did not solve the issue, even when I removed ODBCJDBC.DLL. So thinking swiftly (OK, I was pretty slow), I renamed ODBCJDBC4.DLL to ODBCJDBC.DLL. Voilà, now it recognized the driver and knew it was the correct one.

My next issue was that once connected, even when trying to run a simple SQL delete statement, the calc script would hang and I would have to kill the process. Thanks to help from Robb Salzmann narrowing the issue down, I was able to Google a few things and found a bug published by Sun that basically says the version of the JDK installed with 11.1.2.2 will hang on connections. I found a later version of the JDK (jdk160_43 to be exact), installed it in the Oracle\Middleware directory, and pointed the JVMMODULELOCATION parameter in essbase.cfg to use it. Now my life is good and the CDF works fine. I did need to remember to bounce Essbase, and it took me a while to remember what I needed to do to get Essbase to run in the foreground so I could see the messages in the application window (but that is another story).
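For reference, here is a hedged sketch of the essbase.cfg entry involved; the path reflects my layout and JDK version, so treat it as illustrative:

; point Essbase at the newer JVM library (path is illustrative)
JVMMODULELOCATION D:\Oracle\Middleware\jdk160_43\jre\bin\server\jvm.dll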

My next opportunity was with Essbase Studio (11.1.2.2). I was trying to build dimensions and got the error “Cannot get async process state”. I started investigating and found the errors were all with my Entity dimension. If I built without that dimension, everything worked fine.

I should mention I’m not the only one working on the model. My client has SQL developers working to create views and add content. So I looked further and did a refresh of the Entity view. Imagine my surprise when I found that columns had been removed from the view I was using in one of my alias tables. The Studio table refresh would not let me update the view, since it knew something it was trying to remove was still in use. I tried having the column added back to the view, but still could not get the refresh working. So I went through my Essbase model properties and removed the alias table the column was in, then went into the alias table and removed the column from there as well. I was now able to refresh the view with the column changes. Moral of the story: if you get this message, see if your data source changed.

I’ve been reading the blogs and readmes for 11.1.2.3 and like the new features added. While Essbase Studio really only got bug fixes and Essbase itself only got a few new changes, I like what I see and can’t wait to try ASO Planning.

Humor for the day


While most of my posts are technical or Hyperion related, I think this one is informational as well, but in a different way. I got this in an email this morning from Alaska Airlines, and I think they are trying to tell us something. What do you think?

[image: Alaska Airlines promotional email]

How not to reverse engineer an Essbase cube to allow drill through


It has been a while since I’ve posted. No apologies, but I was busy getting ready for Kscope13 (which was a great conference; sorry if you missed it). Then I was a bit burned out and needed time to recover. In this post, I’m going to take a little from my KScope presentation of Advanced Studio Tips and Tricks to hopefully help you. This will deal with reverse engineering a cube to get drill-through functionality.

First, why would you want to reverse engineer a cube?

  • Cube already exists
    • Want to add drill through capabilities
    • Want to start migrating to Studio
    • Want to have hierarchies available for building other cubes

So you can learn from my mistakes, I’ll discuss the wrong way to try to reverse engineer a cube. I had a client that wanted to do this. I had recommended extracting the hierarchies from their existing cube and loading them into dimension tables. They wanted to try a different route. Their target table had all of the level zero dimension members in the fact table. They wanted to see if we could just build the level zero members (since they would only drill through at level zero) from the fact table. The actual data load would be done outside of Studio, so there would be no change in that.

The first thing I tried was to create user-defined tables that made fake dimension tables. I used dummy parent names so it looked like a parent/child build. I created the hierarchies and the Essbase model. In the model properties, I told it to ignore shared members so it would not build the new relationships.

I encountered my first problem: I could not create the custom SQL in the drill-through report. I fixed this problem by making the fake dimension hierarchies recursive.

Then I ran into my second problem. I had a dimension (Scenario) that I created as a manually defined hierarchy, and the deploy would not refresh the Essbase cube. So thinking swiftly, I created it as a user-defined table and joined to that “table”. That solved that issue (or at least I thought it did).

This change did allow me to deploy the cube, but I could not get the drill-through intersections to work in my existing test cube. If I built a new cube with the dummy intersections, the drill-through report would work. I figured out it was because of the “ignore shared members” setting: it was not actually creating the intersections in the cube, it just ignored what I was trying to build.

Bummer. What this meant was I could not build the cube from just level zero members. I would need at least level 0 and level 1 members to build the dimension tables.

I reminded the client of my original suggestion on how to reverse engineer, and they decided to take it. Basically, we extract all the dimensions using the Outline Extractor from Applied OLAP (you could also do it with ODI or other ways), then load the dimensions into the same database the fact table exists in.

Once they are there, we can do our joins and normal Essbase Studio steps to update the cube and drill through reports.

This is going to be a busy rest of the year for me. I’ve already spoken at a Hyperion Solutions roadshow with Oracle (losing my voice before the final of my 4 presentations). I’m scheduled to speak at the ODTUG-sponsored Sunday Symposium at Oracle OpenWorld, attend Oracle ACE Director meetings in Redwood City, and speak at 4-5 other events in the remainder of the year. This is in addition to trying to do real work. While I love to share information with you all, my first love is being a technical resource and solving problems in the Essbase/Hyperion world. I do as much of that as I can.

I’ll try to be more frequent in my blog posts. I think I might share more from my Studio tips and tricks next, or perhaps some things from my Thinking Outside the Box optimizations session from Kscope. That session almost allowed me to beat out Edward Roske for best conference speaker. I bear no hard feelings toward Edward; he deserved to win the award, but I gave him a run for the money (OK, wooden kaleidoscope). Funny, both our presentations were on optimization.

Till next time

Pimping my Ride


The other day, Edward Roske and I participated in a podcast hosted by Kevin McGinley and Stewart Bryson called Real Time BI. We spoke with them on integration of Essbase and OBIEE. It was a really great time. If you want to see what I really look like, how monotone I really am, or the witless banter between Edward and me, take a look at part one on YouTube (http://youtu.be/wwTIml_b4mE) or iTunes (http://bit.ly/QhwuSq)!

Part 2 will be out soon. In addition to being informational, it is also somewhat entertaining.

Smart View hangs on Studio Drill through issue


It is amazing, at least to me, that I have two posts in less than a month.

Recently I had both a client and another consultant in our firm with the same (or similar) issue. They had implemented Essbase Studio and were using drill-through reporting. In one case, when a number of users would try to retrieve from Smart View, Smart View would hang for all users. It would last a number of minutes before it would give an error message. If the users tried to retrieve again, they would get another message about “the prior request is still running”. The other message that would appear was “Decompression failed”. In a third instance, Smart View would hang for exactly 4 minutes, then free up. If the drill-through reports were turned off, Smart View performance went back to normal.

A number of things were attempted to try to remedy the situation, including:

1. Turning off compression in the APS essbase.properties file

2. Updating the registry on the APS server to change the port timeout from 4 minutes to 30 seconds

Neither of these worked, and I was befuddled (this is similar to Elmer Fudded, but you don’t try to shoot rabbits). Since I was not working directly on this, and the client had turned to Oracle support with no real help, my colleague continued to carve away at it. His email to me speaks better of it than I could, so here is his synopsis of what he tried.

“So we got something to work… However, the answer makes me think of the chicken dance, where chickens are given food at random intervals and start to develop a pattern of behavior from whatever they had been doing when the reward arrived. And in solving this problem, that is exactly what I was doing: dancing like a chicken.

As previously stated, we thought it had something to do with the ports. I knew ports were refreshed every 4 minutes, so I thought if it takes 4 minutes after Smart View freezes to come back… the answer must have to do with the ports being used and not available. We timed it… and it took 4 minutes exactly, every time. Thus, like the chicken, I did a little dance.

So we increased the ports and the frequency of port refresh… however, it did not work. We then checked the ports that were being used and found only 144 were in use when it froze. I then acquired another little move in my chicken dance. I was starting to move.

We tried the following:

• WebLogic – EPM Managed Servers Tuning

• APS – Essbase Property File Settings

• APS – Logging.xml

• Essbase – Compact Outline

• Essbase Config File

• Java Heap Size

• IE – Timeout Settings / Registries

Then I realized that if I stopped the application and restarted it, it would immediately become available. No waiting 4 minutes. I was pretty sure that perhaps if I changed something in the Essbase config file I could get it to work. Now I was really dancing.

I started to look for settings that had a 4-minute timeout… could not find any. I found a setting called SERVERTHREADS… I decided to try it. So the next morning I asked the administrator to restart the server so we could test it. He made one additional change, to increase the logging detail. We went ahead and tried it.

It worked!!! Now all we had to do was verify that this was really the fix.

We removed the SERVERTHREADS setting and restarted services, and it still worked. Wow, that was strange. What had caused it to work, since it was still working after removing the change and restarting services? We would need to retrace all of our steps.

So then we removed the detailed logging. To our surprise, it now failed. Wow… I was really dancing now. We tested this again and found that it only worked if we set the Essbase log file to show detail.

My guess is that there is some setting we can adjust so that we do not always have to have detailed logging. However, I have been dancing so hard that I think it is time to pass this dance on to Oracle support.

Like random droppings of a positive stimulus, I had danced long into the night, finding the right patterns to get my next little dropping of reinforcement.”

As for the solution …

What worked was:

In Provider Services :

D:\Applications\Oracle\Middleware\user_projects\domains\EPMSystem\config\fmwconfig\servers\AnalyticProviderServices0\Logging.xml

Original Entries :

  <logger name='' level='WARNING:1'>

  <logger name='oracle.EPMOHPS' level='WARNING:1' useParentHandlers='false'>

Modified Entries :

  <logger name='' level='TRACE:1'>

  <logger name='oracle.EPMOHPS' level='TRACE:1' useParentHandlers='false'>

“Why changing the logging level should have any impact… that I do not know!! I wish I were smart enough to answer that.

We only stumbled upon this by dumb luck, when Oracle asked us to change the XML log so that we could send them a more informative log. Like I said earlier… we were attempting things that made sense, only to find that the thing that made no sense worked.”

So while I did not find the solution, one was figured out. If you run into this, hopefully you can benefit from our research.

 

On another unrelated note, the second part of the podcast Edward and I did with Kevin and Stewart is available. Take a look; it was fun to do and I think informative. Here it is:

 


Exalytics T5-8 is here


While I fully expected my boss (Edward Roske) to blog about the new Exalytics box on his blog, LookSmarter.blogspot.com, he has been silent about it. Rather than leave you in the dark, I decided I can’t wait for him to spew the details, so I’ll do my best to give you the info.

Prior to Oracle OpenWorld (Sep 12th to be exact), a new price list was available, and in the Exalytics section was an entry we had not heard of before: Exalytics T5-8. There was no press about it, but at Oracle OpenWorld a few weeks later, they talked about the box.

Here is what I found out. Prior versions were labeled X2-4 and X3-4. Apparently the X stands for Intel, the 2 or 3 for the chip generation, and the final 4 for the number of sockets. As Edward mentioned when the X3-4 came out earlier this year, there is an upgrade kit available to turn an X2-4 into an X3-4.

So what is the new machine? It is listed as a T5-8, so T instead of X. Yep, it is not Intel chips but SPARC T5 processors. This machine runs on the Solaris operating system instead of Linux and includes 4 TB of DRAM, 3.2 TB of flash storage, and 7.2 TB of hard disk. This box comes with up to 128 CPU cores, much more than the 40 you can get with the X3-4.

[image: Exalytics T5-8 hardware specifications]

I’ve not had a chance to play with this box, but I have been told the main reason for it is scalability. It is meant for a large number of concurrent users. What I’ve not heard (officially) is how it performs vs. the X3-4. Historically, Intel chips have been faster for Essbase than SPARC chips, and the paperwork says nothing about a performance comparison. I’m guessing it is a little slower, but with the ability to consolidate 3-4 X3-4 machines into one, the user scalability should be really good.

So how much will this box set you back? According to the price list, the box itself is $330K, pretty cheap. You do have a cost per CPU and per user, which makes it much more, but that is not all that different from the older models. It sounds worth it to me. If/when I have a chance to test it out, I’ll let you know more.

If you like my brief summary here, I’ll be talking about Exalytics in more depth at the OAUG Connection Point conference in our beautiful capital, Washington D.C., on Oct 23rd. If the spending limit isn’t fixed by then, traffic in the city should be light! (This is not a political statement, just an observation.)

Edward or I will also be talking about it at the Hyperion Solutions Road Show in SoCal on Thursday, Oct 17th. It is in downtown LA, so there will be traffic. If you want more info on that event, email Danielle White at dwhite@interrel.com or register at the SoCal Road Show registration page. This event is limited to current and potential Oracle clients, not to partners, sorry. I hope to see you at one of the events.

Smart View Member Misnomer


Believe it or not, I will actually be updating my blog more frequently in the future. I’ve got 3 articles, each half written, and will be finishing them soon (I hope)! Two of them are on formatted columns and an undocumented change to Smart View behavior. Look for them soon.

But in the meantime, just to prove I’m not dead, here is a little tidbit that I have been asked about too many times. In Smart View, connecting to Essbase, you have the member display options “Member Name Only” and “Member Name and Alias”. (For now I’m going to ignore qualified member names.)

This confuses people, as when they select Member Name Only, they see the alias. What this setting really means is “Member Name or Alias, depending on what is selected in Alias”. If you select None, you get the member name; if you select an alias table, you get an alias. Simple, isn’t it?

During the beta long ago, I tried to get them to change the wording, but Oracle could not come up with anything meaningful that fit into the selection box, so we are stuck with what it is.

Formatted columns in Essbase


Am I crazy? (Yes!) Formatted columns in Essbase? What am I talking about? We know you format your data in the front end, so why would I do it in Essbase? That is a good question and brings up the topic of this post. This is one of those items I put into the category of “little used features of Essbase”, an ever-evolving presentation I give at various events. What I am talking about is format strings, which became available with Essbase 11.1. Lots of people jumped on the text and date measures bandwagon, and those are in use a lot now, but few if any have implemented format strings, and they can be very useful.

How about taking a date stored in Essbase and returning it as a formatted date in the format you want, or taking a numerical value and returning it as text? Wait, you say, I can do that with a text list. Well, you sort of can, but format strings give you more flexibility. For example, I can tell the format string that if the value of a column is between 0 and 28.5, return the text “Bad”; if the value is greater than 28.5 and less than 80.3, return “Good”; and if it is greater than that, return “Great”. Text lists have distinct integer values and can’t do that without some manipulation.
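As a hedged sketch, that Bad/Good/Great rule might look like the following format string (the MdxFormat syntax matches the Tech Reference example shown later in this post; the thresholds are the ones just described):

MdxFormat(
CASE
    WHEN CellValue() <= 28.5 THEN "Bad"
    WHEN CellValue() <= 80.3 THEN "Good"
    ELSE "Great"
END
)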

Using one of the examples in the Tech Reference (http://docs.oracle.com/cd/E12825_01/epm.111/esb_techref/frameset.htm?mdx_cellvalue.htm), I took Sample.Basic and enabled typed measures.

[screenshot: enabling typed measures on Sample.Basic]

I then went into the Variance % measure and added the following format string:

MdxFormat(
CASE
    WHEN CellValue() <= 5 THEN  "Low"
    WHEN CellValue() <= 10 THEN "Medium"
    WHEN CellValue() <= 15 THEN  "High"
    ELSE  "Very High"
END
)

Note: the example on that web page has the quotes as smart quotes, and you have to change them back to regular quotes or you will get an error something like “Error on line 3 unknown name ?”.

So what does the output look like? To help in checking the values, I added a member named Variance % Unformatted. You can see I now have text in my report that will change as the data does, and it does not require the results to be integer values.

[screenshot: report showing the formatted text alongside Variance % Unformatted]

There are a lot of possible uses for this to create more customized reporting. I should note that this is only possible with Smart View, as the classic Add-in does not support the text output.

Now that I have expanded your horizons, explore the possibilities.

Undocumented Change to Smart View 11.1.2.5


As we all know (or at least the cool kids know), the changes to Smart View have been coming quickly. So fast that the wise at Oracle decided to decouple its releases from the rest of the EPM stack. That is how we are at 11.1.2.5.200 while the rest of the stack is at 11.1.2.3. I think this is a good thing, as it allows Smart View to be more proactive in introducing changes to help us do more and better reporting. I applaud Oracle for doing this.

However, because the changes are coming so quickly, not everything gets documented very well. In the 11.1.2.5 new features document we see this:

“Change to Display of Duplicate Variable Names

With this release, Smart View added functionality to display fully qualified variable names when variable names are duplicated. This helps to identify variables defined at the global, application, and database levels. “

But what does it mean? One of the cool features of Smart View is the ability to use substitution variables. You are thinking, what is so cool about that? The Add-in could always do it, and Smart View has been able to since version 9.x. Well, yes; in both cases you could enter your substitution variable, like &CurrMonth, and you would get the value of the substitution variable returned when you refreshed the data. With this comes the limitation that if you save the spreadsheet, you save the actual value of the substitution variable and not the substitution variable itself. Huh? What I mean is, suppose the substitution variable is set to Mar. It saves Mar, not &CurrMonth.

Starting in Smart View 11.1.2.1.102, a new way to use variables was introduced: HsGetVariable. When used in a worksheet, it retrieves the value of the variable but keeps it as a formulaic member, so next month when you change the variable, your report updates with the new information. Pretty cool!

So let’s get on to the change. Suppose I have the following substitution variables:

[screenshot: the defined substitution variables]

Notice I have both a global and a database-specific variable with the same name.

In 11.1.2.3.x and below, I could enter

=HsGetVariable("HSActive","CurrMonth")

and when refreshed, get the variable. Note, HSActive means the current active connection for the sheet; I could also put in a private connection name.

[screenshot: the retrieved variable value]

Starting in 11.1.2.5, the command allows you to put in a qualified name

=HsGetVariable("HSActive","Sample.CurrMonth")

to determine the scope of where to get the variable from. If left unqualified, it seems to get the global variable. If qualified, it picks up the variable from where you specify.

[screenshot: the qualified variable retrieval]

The screenshot above shows what the formulas look like, but in reality, when you enter them, what you get is

[screenshot: the formulas as entered]

Once you refresh, you get:

[screenshot: the results after refresh]

The difference between rows 1 and 2 is that row 1 has been physically changed to Mar, while row 2 is still a formula. The interesting thing is that unless you qualify the application, you will get the global variable.

Further, I’m using HSACTIVE for the connection name, but you can actually use a different connection instead. For example, if I created a private connection for Demo.Basic called DEMO, I could use it, and it would pull from the Demo.Basic version of the variable even if I’m connected to Sample.Basic.

So here is where it gets more interesting. In 11.1.2.5.200 I tried the same thing. When trying to use a global variable

=HsGetVariable("HSActive","currmth")

I get an error message:

[screenshot: the error message]

and what did not work before, supplying both the application and database, now does work:

=HsGetVariable("HSActive","sample.basic.currmth")

As a test, I deleted the application- and database-level variables and then tried:

=HsGetVariable("HSActive","currmth")

and now it returns the global variable.

By the way, &Currmth stopped working as well.

Between the two versions, Oracle development has apparently been refining how this functionality works. So what worked in 11.1.2.5 does not necessarily work the same way in 11.1.2.5.200, and of course both differ from prior versions.

Anyway, this is a good enhancement to substitution variables, and I urge you to give it a try.

Another post on 11.1.2.3.500


It seems like everyone and their brother (and Cameron Lackpour, the younger brother I NEVER had and NEVER wanted) has jumped in on relating the cool things the latest Essbase patch has to offer. So as not to be left out in the cold, I thought I’d make a few comments as well.

I think this new patch is a real game changer with a lot of cool features. It will have many of us throwing out old optimization techniques and coming up with a whole new set. We will really have to think outside the box to figure out what is best.

That said, there are a few things you might want to consider. First, although this is listed as 11.1.2.3.500, it has significant changes in it. Why Oracle put such significant changes in something that is just a patch, I don’t know, but it is certainly better than waiting for a full release of the product. One supposes you could install the patch just for things like enhanced aggregation for MDX functions and the bug fixes; I would not use the other new functionality without extensive testing.

Next, I would like to go into a little detail on a couple of the new features. In general, the word of caution I would give is to test extensively if you are using any of them. While they can give you significant gains in performance, they can also cause you some issues.

Fix Parallel

The idea behind FIXPARALLEL is that there are many situations where CALCPARALLEL goes into serial mode and we as developers know better. We can use FIXPARALLEL to force the calculation into parallel mode. This implies that we actually know what we are doing and that there will be no conflicts if we go into this mode. During the beta testing, it was determined that FIXPARALLEL is not as fast as CALCPARALLEL in most cases, but it is faster than not calculating in parallel at all.
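To make the idea concrete, here is a minimal sketch (the thread count and member names are illustrative, not a recommendation). FIXPARALLEL takes the number of threads and a member list to split the work across:

/* Sketch only: aggregate the Product dimension in parallel across
   level 0 Market members, using 8 threads; names are illustrative */
FIXPARALLEL (8, @LEVMBRS("Market", 0))
    AGG ("Product");
ENDFIXPARALLEL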

Hybrid Mode

We have all been drooling over this idea since it was revealed at Kscope last summer: the power of a BSO cube with the aggregation speed of an ASO cube. How this is implemented is you take your sparse dimensions and make them dynamic. In addition, you add a parameter to the essbase.cfg file to turn this feature on (see the sketch below). During the beta, TimG tested hundreds of queries against hybrid mode, and most performed very well.
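For reference, the essbase.cfg setting involved is ASODYNAMICAGGINBSO; here is a hedged sketch of an entry (the application name and the FULL scope are illustrative):

; turn hybrid aggregation on for the Sample application (illustrative)
ASODYNAMICAGGINBSO Sample FULL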

This is a huge game changer, or at least it will be. I say that because this initial implementation is limited. There are a lot of things that will cause the cube to revert to BSO mode: using Dynamic Time Series (DTS), cross-dimensional operators in formulas, some very common functions in formulas (a list too long to include here), and attribute dimensions all force the cube into BSO mode. Frankly, while I think this feature is fantastic, currently it has a limited use case, and until some of the limitations are removed, tread lightly. Of course, if your cube is doing simple aggregation, then go for it and gain the benefits.

Exalytics – writing blocks back to the same location

As Cameron mentions in his blog post, many thought this already occurred, and to a certain extent it does. While blocks are written to a new location, Essbase will look at the free space to see if a block can fit into a spot vacated by another block; my guess is it seldom happens. Having blocks rewritten to the same location can reduce fragmentation a lot. I’m guessing this is Exalytics-only right now because most data in Exalytics is actually in flash memory or on solid state disk. This is just a guess on my part, but from testing the effect of fragmentation on BSO cubes in the past, I can say a heavily calculated cube (like Planning) will see a vast improvement in speed without the constant maintenance of defragging the DB.

 

As I said at the beginning of this post, this release is a real game changer. Oracle development should be commended and applauded for thinking outside the box and leapfrogging to this level of functionality. I can’t wait to see what improvements are on the horizon. See, I’m never happy with what I get; I always want more. Gabby Rubin told me at Kscope that his job is to make us all rewrite our optimization presentations every year or two. I think he is keeping his word. We all need to re-examine how we optimize given these fantastic changes to the product.

Don’t always believe what you read


I was helping another consultant with a calc script, as they were getting incorrect results. They wanted to do an @SUMRANGE for a particular intersection of data. They had coded the statement like:

@sumrange(actual,"no product","Final",.... 

I asked why they didn't use a cross-dimensional operator. They referred me to the Tech Reference:

[screenshot: the @SUMRANGE note from the Tech Reference]

 

For those of you who can’t read it, the note says

“Member name cannot be a cross-dimensional member combination.”

Having heard that, I looked for another solution and offered a couple of ideas, but I kept thinking about cross-dimensional operators. I could swear I had done that before. I asked the consultant to humor me and set the @SUMRANGE to use a cross-dim instead of how they had coded it. Amazingly, the code validated and ran, and actually gave the correct set of numbers.
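For illustration, here is a hedged sketch of the two forms (the member names are made up; the second, cross-dimensional form is the one that validated, ran, and returned the correct numbers):

/* As originally coded - separate arguments, incorrect results */
@SUMRANGE("Actual", "No Product", "Final", @CHILDREN("YearTotal"))

/* Cross-dimensional member as the first argument - worked,
   despite the Tech Reference note */
@SUMRANGE("Actual"->"No Product"->"Final", @CHILDREN("YearTotal"))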

The moral of the story: the Tech Reference is not always right. Even when things are in print, question them and experiment. This typo cost the consultant hours trying to figure out why the calc didn't work. Think outside the box and experiment.

KScope Deep Dive input needed


As the time for KScope draws near, I’ve been busy revising my presentations and getting ready to go. I’ll be there starting Saturday afternoon. I doubt I’ll get any sleep until the following Thursday.

Speaking of Thursday: this year at KScope, something new is being tried. The different tracks will have “Deep Dive” sessions Thursday morning that are 2 hours long. These will be well worth sticking around for. For the Essbase track, a panel like no other will happen. Why do I say like no other? Well, some of the greatest minds in the Essbase world will be on the panel, including:

  • Carol Crider – Senior Technical Support Specialist, Oracle
  • Steve Liebermensch – Essbase Product Manager, Oracle
  • Mark Rittman – Ace Director, Rittman Mead
  • Tim Tow – Ace Director, Applied OLAP
  • Sarah Zumbrum – Ace Associate, Finit Solutions

Either Edward Roske or I will also be on the panel. Edward may have to sit in instead of me, because whenever I get with Carol Crider I get tongue-tied. She is the internals support guru that we all go to when we need real help (no, Cameron, she does not have a degree in psychology). Steve Liebermensch knows more about Essbase features and functionality than anyone else I know. Tim is an expert on the Java API, and Mark knows more about OBIEE than anyone else I know. Why I was included on the panel with this esteemed group, I have no idea.

That said, without questions the group will be sitting there with nothing to say. We need you to submit questions for the group to answer. Please don’t ask things like “In my cube I have 29 dimensions and it takes 12 minutes longer to calculate than it did two years ago. How do I optimize it?” That is way too specific. But if you asked about the reasons calculations might take longer than they used to, that could be answered.

Your questions can be submitted in a couple of ways. First, by tweeting with the hashtag #EssbaseDeepDive, or at @CubeCoderDotCom (throwing in the #Kscope14 hashtag will help others see your question, too). Alternatively, by email to EssbaseDeepDive@gmail.com.

I look forward to your questions and to meeting many of you at KScope.

(Note: even if you are not going and have a thought-provoking question, go ahead and submit it.)


OBIEE and Essbase a few observations


I’m sitting in a hotel room in Panama writing this post. Why, you might ask? I’m in Panama as part of an Oracle Technology Network (OTN) Latin America tour. I will be going to 6 countries in 15 days. Perhaps at the end I’ll blog in detail about it. As I write, Edward Roske is in Brazil doing the same thing. Why am I writing this instead of being out exploring? I will be later today, as the local host arranged a tour for the visiting Oracle ACE Directors.

This post on OBIEE and Essbase is based on the creation of my KScope 14 presentation and quirks I noticed along the way. Note I was using Essbase 11.1.2.3.5 and the latest patched version of OBIEE 11.1.1.7.1.x (I don’t remember the x). I was using a VM instance of the sample Oracle supplies, and was using Essbase Studio with it to create the cube. I then brought the cube back into OBIEE and also into Publisher to create a mobile dashboard.

My initial problems came trying to install the BI Admin console on my laptop. I run Windows 8 (not by choice) and IE 11. BI Admin would not install no matter what I tried. I also had problems with Enterprise Manager and Analysis. I worked around these by using Remote Desktop to an environment that had lower versions, but only after spending hours trying to make these applications work.

After I got things working, I created a relational schema in an Oracle instance and tried to import the DDL and sample data from Essbase Studio. The tables created with no problems, including the keys and indexes as needed. When I tried to load the data, I got errors: the import did not account for transforming dates into the correct format. I added cast statements to each row needing it and it worked fine, but I should not have had to do it.

Next, I brought the relational tables into OBIEE for creating my Essbase cube through Studio. That was very easy, until I started talking to Wayne Van Sluys, a great OBIEE resource. He wanted me to create all sorts of logical tables and aggregations in my business mapping layer to turn my snowflake schema into a star schema and allow aggregation summaries. If I had wanted to use the data source for anything but feeding Essbase Studio (reporting from the source, doing federated Essbase/relational reports), I would have had to do this work, which seemed very daunting. Luckily, I only wanted to feed Essbase Studio, so it was easy: just bring in the source, move it to the business model and presentation layers, and I was done.

Next, in Essbase Studio, when I brought in the OBIEE tables, Studio did most of the work for me. In the minischema, Studio would not let me do joins between the tables (those were defined in the relational source), but it would, and needed me to, set up the self join for the parent/child accounts table. It was easy-peasy (as Cameron Lackpour would say) to create the hierarchies and build the Essbase cube from this. Of course, the sample tables were set up with Essbase in mind.

I brought the completed cube back into the OBIEE RPD just because I could. I actually did not use it there; I just wanted to show, for consistency of reporting, how to do it. There were a few little quirks. I had multiple alias tables, and although I only asked that the default table be used, when I got to the presentation layer all of the aliases showed up. I did remove them easily enough, but it was extra steps. Also, for accounts, I had set it up as a value dimension so I would not have to worry about additional generations being added. In the presentation layer, I changed the ugly wording of “gen7,accounts” to “All Accounts”, as it is more meaningful to the end users.

Working with Mobile App Designer (MAD) proved to be more of a challenge than I thought it would be, based on the demos I’ve seen. It could be a distinct possibility that I don’t know enough about it, but the multiple steps were difficult. For those of you who don’t know, MAD bypasses the RPD and uses queries from BI Publisher. My first issue was trying to create the MDX query. The query designer gave me problem after problem. I finally bypassed it, wrote my query in EAS, tested it, and pasted it into the query editor in Publisher. I found I had to use very simple queries, as complex queries with multiple row or column members would not work in subsequent steps.

Once I had the query in place, I went into the actual designer. It is a simple drag-and-drop interface, and I got the basics pretty quickly. Because of the input query, you are limited in what you can display. I will say I spent hours trying to get a single graph with three dimensions represented: Products, Periods, and Measures. I could never figure out how to do it, so I ended up with a graph for each quarter. It would have been more impressive if I had figured it out. I ended up going back to the MDX and making it simpler after trying multiple different things. Just before KScope, a new version of MAD came out, but I’ve not had a chance to see if it would have solved my problems.

I know this is a long post without pictures, but I thought you might be interested in my ramblings about using OBIEE and Essbase for my demo. Using OBIEE against Essbase is much easier than using it against relational sources, as I found out, and I was able to complete the demo in a fairly short amount of time, even if I did have issues. If you are going to go down this path, don’t travel alone; bring along a friend who knows OBIEE to help you with the obstacles along the way. You will be happier for it.

Is it really fame and fortune?


Fame and fortune and everything that goes with it, I thank you all!

I was reminded of this line from a Queen song as I returned from a speaking tour in Latin America through the Oracle Technology Network and the Oracle ACE program. I travelled to 6 countries in 15 days and may or may not blog about it later. Back to the song: I was asked a few times how much Oracle paid me to do the tour and whether I was an Oracle employee. While Oracle paid for the airfare and hotels, I paid for food and incidentals, and no, I don’t work for Oracle. In reality, with lost income and expenses, it cost me a lot to do the tour.

So why did I do it? Is it for fame, as the song says? While I found it funny that participants in other countries wanted to take pictures with me and others on the tour, I didn’t do it for notoriety. It is not fame or infamy that drives me to share my limited knowledge. As I explained to one person, through the years working in technology, others helped me along the way. They patiently answered my questions, suggested solutions, shared their knowledge, and gave me encouragement. Like most others in the Oracle ACE Director program, I am giving back to the community that helped me. The current overused phrase is paying it forward, but it fits too well in this case to use any other phrase.

My good friend Cameron Lackpour spends too many hours doing the same: researching and giving his knowledge away for free for the betterment of the community, all at a great personal loss, as much of what he does is non-billable. Why do I bring this up? It is certainly not for a pat on the back or your pity (for me or MMIP Cameron), but to urge you to get involved in the same manner.

It sounds like a cliché, but volunteer work is an incredibly rewarding activity. In part, volunteering is giving back to the community that helped you get where you are professionally (and even personally) – call it paying it forward, or paying your psychic debt, or just helping others as you have been helped. But there’s a deeper aspect to volunteer work as well. Humans are imperfect moral beings, but one of our better drives is to Do Good Things. It just feels good to be good. Try it, you’ll like it.

I know what you are going to say: “I don’t know as much as Cameron does or Glenn pretends to, so I can’t help.” To this I say poppycock. First, you probably know more than you think you do and can help others with their questions. Second, even if it is true that your knowledge is limited, you can still help. Get involved with your local user group, an online community, or your favorite conference, or write a blog about your experiences, trials, and tribulations. If you read my blog, you are part of the Hyperion EPM community, are stalking me, or are weird. The Hyperion EPM community is a growing, living entity that only gets better by sharing. I know Cameron is looking for people to help with the ODTUG EPM community that just started up. Become involved, if not with that then with something. Pay it forward, enjoy the personal satisfaction that you helped, and I will thank you all!

 

Postscript: I communicated with Cameron about this post, since I mention him heavily in it, and got this response:

This is Cameron aka MMIP. If you are interested in getting involved with ODTUG’s EPM community, I encourage you to sign up at ODTUG’s volunteer page: http://www.odtug.com/volunteer

We have many exciting initiatives, including:

· Local meetups

· Content sourcing for:

  · Webinars

  · ODTUG Technical Journal articles

  · EPM Newsletter articles

They are starting up and need volunteers to make them happen. This is your chance to define the future of the EPM community.

To Glenn’s point, don’t be shy about contributing. When I first met Glenn at Kaleidoscope 2008, he wasn’t:

· My friend

· An Oracle ACE Director (only an ACE)

· An Oracle EPM community rock star

· A trusted advisor and voice to many

But he is today, in large part because of his endless and valuable volunteer work. I’ve done my best to emulate him, and the results have been very rewarding. The same can be true for you. I look forward to talking to you on an EPM community initiative conference call, and look even more forward to the great work you’ll do.

Be seeing you.

I’m sure you are better than I am!


Have you sat through a conference session and thought “I can do a better job than the presenter” (most likely me), or thought it would be really cool to talk about (insert cool thing you did here)? Here is your chance: submissions for the Kscope conference are due by October 15th. Click Here to enter your submission.

If you are a little concerned about speaking, then partner with your favorite consulting firm on the presentation. Only the primary presenter gets the free pass, but the consulting firm would most likely be willing to present with you anyway. Just make sure you include them as the secondary speaker in the abstract, or they won’t be allowed to speak. This is a great way to give back for all the help you received along your learning path.

People are really interested in what you have to share. Give it a try; it won’t hurt.

Essbase Config file changes


Over time, I have come up with a list of Essbase configuration file settings that I typically use in my implementations. As new versions come out, new settings are added, and it appears that in the 11.1.2.3.5x versions a bunch of new settings arrived, or at least I just now noticed them.

Some of the new settings I’ll be adding to my list are

DIMBUILDERRORLIMIT – specifies the maximum number of error rows recorded during a dimension build. This is similar to DATALOADERRORLIMIT; both have a maximum of 65K rows.

ENABLERTSVLOGGING – logs runtime substitution variable usage in the log files.

ESTIMATEDHASHSIZE – specifies, in millions, the number of member names and aliases loaded into memory for outline maintenance. While I don’t know for sure, I think this is meant to allow us to open really big ASO outlines.

ENABLESWITCHTOBACKUPFILE – enables automated switching to the backup security file if the essbase.sec file gets corrupted. Looks good for automated recovery.

SSINVALIDTEXTDETECTION – controls whether an error is shown when a user enters text that could cause Essbase to misinterpret the data. Especially useful for asymmetric grids.
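As they might appear in essbase.cfg, a hedged sketch (the values are examples only, and some of these settings accept optional application/database scoping; check the Tech Reference for exact syntax):

; illustrative entries only - values are examples, not recommendations
DIMBUILDERRORLIMIT 10000
ENABLERTSVLOGGING TRUE
ENABLESWITCHTOBACKUPFILE TRUE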

 

There are a lot more settings that have been added over time. Take the time to go back through the Tech Reference and read each setting. Some have changed (like SSPROCROWLIMIT), others have been removed, and many more have been added. Stay current and adjust your systems accordingly. Remember, many settings require an Essbase server restart to take effect.

FixParallel – How fast is fast?


I have finally been able to use FIXPARALLEL, introduced in 11.1.2.3.500, on an Exalytics server. I’ve used it for calculations and data exports, so how fast is it, and does it really make a difference?

For my allocation calculations, I really can’t tell you how much of a difference it made, but I know it was a lot faster to do my allocations with FIXPARALLEL than without it. I just didn’t capture the times.

For my DATAEXPORT, I was able to measure the difference. I was exporting 1,083,702 level 0 blocks in column format with a block size of 9,984 bytes. I created a DATAEXPORT calc script and set CALCPARALLEL to 16 in the script. Running it took 336.95 seconds. I thought that was reasonable, but I wanted better.

I changed the script to use FIXPARALLEL with 16 threads across my Location dimension, which has about 800 members. The calculation took 9.94 seconds. If I multiply that number by 16, I come up with 159.04 seconds, which tells me the FIXPARALLEL calculation is improving performance beyond just the parallelization of the script.
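The script looked something like this sketch (the dimension, member list, file path, and export options are illustrative, not the client's actual script):

SET DATAEXPORTOPTIONS
{
    DataExportLevel "LEVEL0";
    DataExportColFormat ON;
};
FIXPARALLEL (16, @LEVMBRS("Location", 0))
    DATAEXPORT "File" "," "d:\exports\lev0data.txt";
ENDFIXPARALLEL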

What I did not expect is that, just like a parallel export, the FIXPARALLEL DATAEXPORT created a file for each thread, so instead of one file I ended up with 15. They were named with a suffix of _T?, where ? was a number between 1 and 15 (not sure why I didn’t have 16 files). I also don’t know what would happen if the file size spanned 2 gig; would it append a _1 to the file name? I tried reducing the number of threads to 3 and reran the script. Alas, I ended up with only three files, so I can’t give you an answer. But interestingly, the script took 690.63 seconds, much longer than the script without FIXPARALLEL, so apparently there is tuning we can do to the script. I could try including another dimension in my FIXPARALLEL, but I am happy with my less-than-10-second export. Perhaps a test for another day.

So is FIXPARALLEL worth it? My testing says YES! FIXPARALLEL for me was an awesome new feature and one I will use often.
