US division of Prestige European Auto Manufacturer

Prestige Auto Manufacturer reports CPU savings of over 25% of their total prime-shift DB2 CPU usage from utilizing EZ-DB2:

EZ-DB2 takes a unique end-to-end ‘Workload-Centric & Workload-Aware’ approach to performance optimization. The EZ-DB2 approach first captures an SQL Workload, then identifies the ‘top-n’ SQL statements by Consolidating statistics (e.g. total CPU consumed) across all instances of like SQL statements in the Workload. EZ-DB2 Consolidates statistics for ‘otherwise identical’ statements even when those statements contain different literals and other variable information. In effect, EZ-DB2 determines how the Workload is distributed across its Consolidated SQL statements (“Workload SQL Distribution”), then provides the tools to leverage this information to better manage and optimize Workload performance.
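Conceptually, the Consolidation step can be sketched in a few lines of Python. The regexes, data shapes, and function names below are illustrative assumptions for the sake of the sketch, not EZ-DB2's actual implementation:

```python
import re
from collections import defaultdict

def normalize(sql: str) -> str:
    """Replace literals with parameter markers so otherwise-identical
    statements reduce to the same Consolidation key (illustrative only)."""
    s = re.sub(r"'[^']*'", "?", sql)           # string literals
    s = re.sub(r"\b\d+(\.\d+)?\b", "?", s)     # numeric literals
    return " ".join(s.upper().split())         # collapse case and whitespace

def consolidate(workload):
    """Aggregate count and total CPU across all instances of like SQL."""
    stats = defaultdict(lambda: {"count": 0, "total_cpu": 0.0})
    for sql, cpu_seconds in workload:
        entry = stats[normalize(sql)]
        entry["count"] += 1
        entry["total_cpu"] += cpu_seconds
    return stats

# Hypothetical captured workload: (statement text, CPU seconds) pairs.
workload = [
    ("SELECT NAME FROM DEALER WHERE ID = 101", 0.002),
    ("select name from dealer where id = 202", 0.003),
    ("SELECT NAME FROM DEALER WHERE ID = 303", 0.002),
]

stats = consolidate(workload)
for key, s in stats.items():
    print(f"{key}  count={s['count']}  total_cpu={s['total_cpu']:.3f}s")
```

All three statements differ only in their literals, so they collapse into a single Consolidated line with an execution count of three and their combined CPU.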


I have been using Cogito's EZ-DB2 for over a year now at a major Prestige Auto Manufacturer client. About 18 months ago, my client implemented a new distributed web system for their dealers. The DB2 CPU usage for that new system was enough to push the entire processor well beyond capacity, forcing an emergency CPU upgrade.

As a result of that implementation, management wanted a better way to track and tune systems. There was a good mechanism for tracking and tuning batch and CICS jobs, but web processes and DB2 Enclaves were another matter entirely. So I was charged with evaluating various performance collection and SQL tuning products. I looked at every product available in the industry at the time, even trialing one from a major third-party vendor. Unfortunately, nothing seemed to handle the situation where you have a very large number of identical SQL statements that each take very little time. Then we trialed EZ-DB2 and found that it offered the option of consolidating identical SQL statements. So instead of a million separate lines for the same SQL statement, we had one line showing that the statement was executed a million times, along with the total CPU consumed by all of those executions and the average time per execution.

It turned out that, except for a few cases, the SQL causing the performance problems was not the obvious CPU pigs but rather some of the very frequently executed SQL that used relatively little CPU per execution. The simple reporting in EZ-DB2 enabled us to go after these statements, delivering very significant CPU savings while also further reducing the average response times for these short but frequently executed SQL statements.
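The arithmetic behind that finding is easy to illustrate. The statements and figures below are hypothetical, not from the client's workload; they simply show how ranking by total CPU (count × average CPU) surfaces a cheap-but-frequent statement above an obvious CPU pig:

```python
# Hypothetical Consolidated results: (statement key, executions, avg CPU s/exec)
consolidated = [
    ("SELECT ... FROM BIG REPORTING JOIN ...", 5, 12.0),            # heavy, rare
    ("SELECT NAME FROM DEALER WHERE ID = ?", 1_000_000, 0.0002),    # cheap, frequent
]

# Rank by total CPU consumed across the whole workload.
ranked = sorted(consolidated, key=lambda r: r[1] * r[2], reverse=True)

for stmt, count, avg_cpu in ranked:
    print(f"{stmt}: total CPU = {count * avg_cpu:.1f}s over {count} executions")
```

The heavy join burns 60 CPU seconds in total, but the 0.2-millisecond statement run a million times burns 200 — so the frequent cheap statement is the better tuning target, even though no single execution of it looks expensive.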

This tuning effort has saved my client a very large amount of money and entirely eliminated the need for another very costly CPU upgrade. We now run systems in test prior to implementing them in production, allowing us to anticipate and tune the systems before problems actually happen. EZ-DB2 provides the capability to report not only by program (which helps for batch and CICS but is useless in a web environment) but also by Authid, as we have each web app use a common Authid. It also allows us to track changes over time: we save the trace histories, so we can look back two weeks or six months and compare how a given SQL statement performed then versus now. This lets us catch problems caused by changes to our systems.

In line with the tuning efforts, we use the EZ-DB2 stats copy feature (EZ-Stats) to populate the DB2 catalog with production statistics for various test copies of those production tables. This feature easily enables us to ensure that test access paths accurately reflect production.

One last feature that we use for maintenance tracking is the system change feature (EZ-Impact Analyzer). It allows us to see what access path changes occur when we make various changes to an environment, including applying DB2 code fixes. It eliminates the nasty surprises that can come with new DB2 maintenance or with rebinds.
Myron W. Miller, CEO
Premier Database Consultants

EZ-DB2 SQL Workload Optimization Suite

  • Industry's only end-to-end Workload-Centric & Workload-Aware suite for SQL Optimization

  • Unique Consolidation feature sees beyond literals and other variable information to group and rank otherwise-identical SQL

  • Only solution offering Workload-weighted access path impact analysis for Dynamic SQL

  • Only solution offering Workload-weighted index analysis and automated optimization

  • Low overhead SQL Workload capture; near-zero overhead for Dynamic SQL

  • SQL Consolidation in parallel with SQL capture means low storage requirements for even extremely large Workloads

  • Reports are based on Consolidated data and are available for instant viewing