
MicroStrategy Tips and Tricks: Tuning Server Performance

Updated: May 14, 2020




I’ve been working with MicroStrategy for the better part of a decade. It’s a robust Business Intelligence platform, but optimizing your installation can be tricky. MicroStrategy doesn’t have a “cookbook” for performance tuning, and best practices are typically spread across multiple tech notes and the experiences of individual developers.


The standard answer for improving performance is to throw more resources at the servers, but that’s the easy way out. In every project I’ve worked on, squeezing every ounce of performance out of the hardware I had available was a win for our team.


Over the years I’ve identified a few settings that are my “go-tos.” These general settings have helped me get more performance out of my MicroStrategy installations. I’ve posted many of these methods directly on the MicroStrategy Community, but this post gives me the opportunity to consolidate them in a single place. Also of note: my installations have used MS SQL Server for the database connection.


I’ve listed and detailed my top 5 methods for easy performance gains below, or you can watch the video here.


1. Data Population For Reports (for Normalizing the Report Data)


Whenever a user runs a report or publishes a cube, the data needs to be normalized. In this context, normalization refers to how duplicate data is handled for attribute elements that appear multiple times. The default setting allows the duplicate data to flow through and handles it in memory.


The better option is “Normalize report/cube data in Intelligence Server.” The Intelligence Server is built for this type of in-memory processing, and the MicroStrategy documentation even notes it as the faster option.


In my sample tests I found minimal Intelligence Server hardware impact, with a performance gain of anywhere from 1 to 15 seconds on reports that would typically run for 30 to 60 seconds. That’s a significant performance boost. I also found that Web Server API calls went down by 20% to 60%, depending on the report.


This is the example provided in the MicroStrategy documentation:

"When a report is executed, the description information for the attributes (all data mapped to non-ID attribute forms) included on the report is repeated for every row. For example, a report includes the attributes Region and Store, with each region having one or more stores. Without performing normalization, the description information for the Region attribute would be repeated for every store. If the South region included five stores, then the information for South would be repeated five times."

This can be enabled in the VLDB settings of either the report or the database instance.


VLDB Properties > Query Optimizations > Data population for Reports > Normalize report data in Intelligence Server
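

To make the difference concrete, here is a minimal T-SQL sketch against a hypothetical LU_STORE/LU_REGION lookup schema. The table and column names are illustrative, not MicroStrategy-generated SQL:

-- Without normalization, a single pass carries the Region description on every row:
SELECT s.STORE_ID,
       s.STORE_DESC,
       r.REGION_ID,
       r.REGION_DESC   -- 'South' is repeated once per store in that region
FROM   LU_STORE s
JOIN   LU_REGION r ON r.REGION_ID = s.REGION_ID;

-- With "Normalize report data in Intelligence Server", the descriptions can be
-- retrieved once from the lookup table and matched back up in the Intelligence
-- Server's memory, so the wide text columns aren't repeated row by row:
SELECT s.STORE_ID, s.REGION_ID FROM LU_STORE s;       -- element pass (IDs only)
SELECT r.REGION_ID, r.REGION_DESC FROM LU_REGION r;   -- lookup pass (one row per region)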


2. Parallel Query Execution (PQE)


This is a great feature that can add significant performance gains but is disabled out of the box. The feature does exactly what the name describes: when it’s enabled, the analytical/SQL engine checks whether the report requires multiple passes and, if so, whether those passes depend on one another. If not, the SQL passes are executed in parallel rather than sequentially, one at a time.


This can be enabled in the VLDB settings of either the report or the database instance.

VLDB Properties > Query Optimizations > Parallel Query Execution > Enable Parallel Query Execution for all reports that support it.
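

As a rough illustration, consider a report that needs two independent aggregation passes. The table and temp-table names below are hypothetical, but the shape is typical of multi-pass SQL:

-- Passes 1 and 2 don't reference each other, so with PQE enabled the engine
-- can submit them concurrently instead of waiting for one to finish first.
SELECT REGION_ID, SUM(SALES) AS TOT_SALES
INTO   #PASS1
FROM   FACT_SALES
GROUP BY REGION_ID;

SELECT REGION_ID, SUM(COST) AS TOT_COST
INTO   #PASS2
FROM   FACT_COST
GROUP BY REGION_ID;

-- The final pass depends on both, so it still runs only after they complete.
SELECT p1.REGION_ID, p1.TOT_SALES, p2.TOT_COST
FROM   #PASS1 p1
JOIN   #PASS2 p2 ON p2.REGION_ID = p1.REGION_ID;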


2b. Parallel Query Execution Passes


As a bonus, you can set the number of parallel queries that are allowed to execute at the same time at the project level. By default, parallel passes are set to two when the feature is enabled. If you have the hardware to support more, go for it! Something to note: parallel queries are counted on a per-report basis.


To configure this at the project level:

Project Configuration > Project Definition > Advanced > Configure > Query Optimizations > Maximum Parallel Queries per Report


3. Sub Query Type


In just about every scenario I’ve tested, changing the sub query type has improved performance significantly. The default server setting is to perform an actual subquery, which usually selects from an intermediate table created in a previous pass. I’ve found that changing this setting to use a true temporary table instead of a subquery produces a join directly to that intermediate table, which increases performance.


This can be enabled in the VLDB settings of either the report or the database instance.

VLDB Properties > Query Optimizations > Sub Query Type > Use Temporary Table....
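

To sketch the difference (with hypothetical table names, not engine-generated SQL), the same qualification can be written either way:

-- Default: the qualification is expressed as a subquery against the
-- intermediate result from the previous pass.
SELECT f.STORE_ID, SUM(f.SALES) AS TOT_SALES
FROM   FACT_SALES f
WHERE  f.STORE_ID IN (SELECT STORE_ID FROM #QUAL_PASS)
GROUP BY f.STORE_ID;

-- With the temporary-table option, the engine joins directly to the
-- materialized intermediate table instead, which the optimizer can
-- generally plan more efficiently.
SELECT f.STORE_ID, SUM(f.SALES) AS TOT_SALES
FROM   FACT_SALES f
JOIN   #QUAL_PASS q ON q.STORE_ID = f.STORE_ID
GROUP BY f.STORE_ID;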


4. Intermediate Table Indexes


Now that the intermediate table type is set to a true table, we can leverage indexes to further increase performance. I’ve had the best results selecting the “only secondary index on intermediate table” option.


This can be enabled in the VLDB settings of either the report or the database instance.

VLDB Properties > Indexing > Intermediate Table Indexes > Create Only secondary index on intermediate table
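

Conceptually, the setting turns an intermediate pass into something like the following (the table and index names are illustrative; MicroStrategy generates its own):

-- Intermediate pass materialized as a true table (see tip 3).
SELECT STORE_ID, REGION_ID, SUM(SALES) AS TOT_SALES
INTO   #ZZMD00
FROM   FACT_SALES
GROUP BY STORE_ID, REGION_ID;

-- The secondary index this setting adds on the attribute ID columns, so later
-- passes that join back to the intermediate table can seek instead of scan.
CREATE INDEX IDX_ZZMD00 ON #ZZMD00 (STORE_ID, REGION_ID);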


4b. Indexing Metric Columns


With indexes enabled, we also have the option to index metric columns. The main consideration is whether the intermediate table would exceed the database’s allotted index columns/bytes. Intermediate passes won’t typically have more than 16 columns, so enabling this option is usually okay, but you’ll have to use discretion when enabling it.


This can be enabled in the VLDB settings of either the report or the database instance.

VLDB Properties > Indexing > Allow Index On Metrics > Allow the creation of indexes on metrics
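

With this option on, a metric column can be included in the index key as well. For reference, SQL Server historically capped an index key at 16 columns and 900 bytes (newer versions allow more for nonclustered indexes), which is the columns/bytes limit mentioned above. An illustrative statement, continuing the hypothetical #ZZMD00 table from tip 4:

-- Illustrative only: the metric column TOT_SALES added to the index key.
CREATE INDEX IDX_ZZMD00_M ON #ZZMD00 (STORE_ID, REGION_ID, TOT_SALES);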


4c. Indexes: Composite or Column?


If column limits are a consideration when creating indexes, you do have the choice between a composite index and column-based indexes. A composite index contains all of the columns in the intermediate table; column indexes create a single index per column in the table. I’ve found that composite indexes perform best.


This can be enabled in the VLDB settings of either the report or the database instance.

VLDB Properties > Indexing > Secondary Index Type > Composite/Individual
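

In index terms, the two choices look like this (again with illustrative names):

-- Composite: one index covering all of the intermediate table's columns.
CREATE INDEX IDX_ZZMD00_COMP ON #ZZMD00 (STORE_ID, REGION_ID, TOT_SALES);

-- Individual: one single-column index per column in the table.
CREATE INDEX IDX_ZZMD00_C1 ON #ZZMD00 (STORE_ID);
CREATE INDEX IDX_ZZMD00_C2 ON #ZZMD00 (REGION_ID);
CREATE INDEX IDX_ZZMD00_C3 ON #ZZMD00 (TOT_SALES);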


5. Transaction Isolation Level


Data warehouses are typically read-only, so their databases can be set to read uncommitted to prevent locks. MicroStrategy is a bit more unique as a BI platform since it has write-back functionality, so there may be other considerations when configuring the transaction isolation level. Luckily, we can selectively set the transaction isolation level to read uncommitted on a per-report basis. I’ve been doing this for years. Although the performance gains aren’t always obvious, they become clear when there is heavy database activity.


This can be enabled in the VLDB settings of either the report or the database instance.

VLDB Properties > Pre/Post Statements > Report Pre Statement > Type in “Set Transaction Isolation Level Read Uncommitted”.
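

The pre-statement simply emits that line ahead of the report SQL on the same connection. On SQL Server, the per-table NOLOCK hint has the same effect for a single query; it's shown here only for comparison, against a hypothetical fact table:

-- Emitted by the report pre-statement before the report's own SQL runs:
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;

-- Equivalent per-table form on SQL Server (comparison only):
SELECT STORE_ID, SUM(SALES) AS TOT_SALES
FROM   FACT_SALES WITH (NOLOCK)
GROUP BY STORE_ID;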



There are other settings I frequently tweak, but these are my top 5. If there are tweaks or adjustments you've made to your MicroStrategy implementation, I would love to hear about them.




 

Thanks for checking out this blog post. If you found this post helpful, please consider donating. Any contribution is appreciated! Just click the PayPal icon at the bottom of this page.





