DBTA Article on Database Administration Issues

By: Gregory Keller

Abstract: Greg Keller interviewed by DBTA on DBA issues

  1. Why is database performance so important to the business? Is this not a matter typically “contained” within the IT department?

GREGK: For some businesses, application response time can quite literally make or break the business. Think Wall Street, and when you think Wall Street, think of the hundreds of thousands of transactions that occur between brokers and customers. If a single operation is not failing but is executing at slower than sub-second rates, and that delay is compounded over the hundreds of thousands of times a day the operation executes, hours of the day are lost, and with them more transactions and closed business.
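To put rough numbers on that compounding effect, here is a back-of-the-envelope calculation; the 300 ms excess delay and 200,000 daily executions are illustrative assumptions, not figures from the interview.

```python
# Illustrative arithmetic only: per-call delay and daily volume are assumed.
excess_delay_s = 0.3          # operation runs 300 ms over its sub-second target
executions_per_day = 200_000  # e.g. broker/customer transactions per trading day

lost_seconds = excess_delay_s * executions_per_day
print(f"Cumulative time lost per day: {lost_seconds / 3600:.1f} hours")
# -> Cumulative time lost per day: 16.7 hours
```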

  2. What have been the approaches to database performance optimization to date? Why aren’t these methods as effective as they once were?

GREGK: There have been a few schools of thought, of varying success, ranging from deep dives on the code itself, e.g. examining and tuning packages, procedures and functions, to the literal ‘throwing the hands in the air’ method of applying more hardware to the problem to increase processing power and provide more headroom and tolerance for I/O. But the hardware route is not unlike putting a band-aid on a cyst that lies under the skin. What is really needed is to understand what is happening IN the database itself that could be the root cause of the wait-time issues. To do that, the database, like any other code or language, needs to be profiled.

  3. What are the most common issues impeding database performance? How much time do DBAs spend addressing recurring problems?

GREGK: Overutilization.

  4. Aren’t the native tools provided by RDBMS vendors with their products enough to handle most of these problems?

GREGK: Therein lies a major issue. Some are, and some aren’t. There is major inconsistency across these tools, and for a DBA, who in our surveys is typically managing 2-3 database ‘types’ (e.g. Oracle, SQL Server, DB2 et al), consistency is key to optimal job productivity. Further, and more importantly, consistency in the ‘depth’ and validity of the diagnosis is absolutely critical. DBAs need precision in analysis, consistently, across all of the platforms they oversee.

  5. Tracing and resolving database performance issues often takes time; perhaps even days or weeks in some cases. This isn’t acceptable in today’s Internet environment – is there a way to speed this up?

GREGK: Categorically, a DBA’s ability to resolve an issue must be measured in hours, if not minutes. As mentioned before, if one errant process executes at slower than sub-second response times and is repeated thousands of times a day, that spells disaster for a business within a single 9-5 working day. DBAs require early detection, followed by immediate diagnosis and a quick turnaround on the correction. It all begins with the tolerance thresholds DBAs set based upon their SLAs. Conservative DBAs will never let certain operations get within 50% of those SLAs; thus the early detection. Consistent monitoring to watch for encroachment on those agreements is the critical step before diving into profiling database instances to get to root causes. A minimal sketch of that threshold logic follows.
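The sketch below illustrates the 50%-of-SLA early-warning idea; the operation names, SLA values, and warning fraction are assumptions made up for this example.

```python
# Hypothetical early-warning check: flag an operation once its observed
# response time crosses 50% of its SLA, well before the SLA is violated.

SLAS_S = {"order_lookup": 1.0, "trade_post": 0.5}  # assumed SLAs, in seconds
WARN_FRACTION = 0.5  # conservative DBA threshold: warn at 50% of SLA

def check_operation(name: str, observed_s: float) -> None:
    sla = SLAS_S[name]
    if observed_s >= sla:
        print(f"VIOLATION: {name} took {observed_s:.3f}s (SLA {sla:.3f}s)")
    elif observed_s >= sla * WARN_FRACTION:
        print(f"EARLY WARNING: {name} at {observed_s / sla:.0%} of its SLA")

check_operation("trade_post", 0.31)  # -> EARLY WARNING: trade_post at 62% of its SLA
```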

  6. Profiling technology to analyze performance issues has been available to software developers for some time now. Could database managers benefit from the same kind of tools?

GREGK: Exactly! Java and C++ developers have had sophisticated code-profiling technology for a decade or more. It’s a complex affair, so the most successful implementations require very clear visualization of the profiled code, or database in this case: graphically displaying all executing operations in a way that draws the eye to the suspects, the likely gross violators behind the waits. From there, the DBA needs to be able to examine the activity metrics of the suspect operation, including explain plans, to gauge its ‘cost’ to the database. It has to be visual, to reduce the ‘noise’ and get the DBA or developer to the problem immediately so it can be corrected.
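As a small, self-contained illustration of pulling an explain plan programmatically, the raw material such a tool would visualize: SQLite is used here only so the example runs anywhere, and the table and query are invented.

```python
# Fetch a query plan to gauge a query's 'cost': the kind of raw data a
# profiling tool would visualize. SQLite is used purely for portability.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE trades (id INTEGER PRIMARY KEY, broker_id INT, amount REAL)")

# Without an index on broker_id, the plan reports a full table scan.
for row in conn.execute("EXPLAIN QUERY PLAN SELECT * FROM trades WHERE broker_id = 7"):
    print(row)  # e.g. (2, 0, 0, 'SCAN trades')

conn.execute("CREATE INDEX idx_trades_broker ON trades(broker_id)")
for row in conn.execute("EXPLAIN QUERY PLAN SELECT * FROM trades WHERE broker_id = 7"):
    print(row)  # e.g. (3, 0, 0, 'SEARCH trades USING INDEX idx_trades_broker (broker_id=?)')
```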

  7. How would such a tool be implemented in a database environment?

GREGK: Tools like this should be ‘production worthy’. In other words, they should cause no impact to the system via the operations they run in order to profile the SQL and, without question, install nothing on the systems they examine: no packages, no procedures, no code of any kind. No intrusion. The product should let any DBA ‘point and shoot’ and get reliable information back in minutes, again so that decisions can be made quickly.
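One way to realize that agentless, ‘point and shoot’ model is to read the database’s own dynamic performance views rather than installing anything on the target. The sketch below assumes an Oracle target and the python-oracledb driver; the credentials and DSN are placeholders.

```python
# Agentless profiling sketch: query Oracle's built-in v$ views instead of
# installing packages or procedures on the target system. The connection
# details are placeholders for illustration.
import oracledb

conn = oracledb.connect(user="monitor", password="***", dsn="prod-db:1521/ORCL")
cur = conn.cursor()

# Sessions currently in non-idle waits, with their wait events.
cur.execute("""
    SELECT sid, username, event, wait_time_micro
      FROM v$session
     WHERE wait_class <> 'Idle'
""")
for sid, user, event, wait_us in cur:
    print(f"session {sid} ({user}): waiting on {event} for {wait_us} us")
```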

  8. Also, as with the case of software applications, is there a “lifecycle” approach that can be taken with managing database performance? Can you discuss the stages of the database performance lifecycle, and who gets involved at which stage?

GREGK: Generally speaking, DBAs are the modern, more sophisticated equivalent of assembly-line architects: they are process-driven and methodical, because nothing, absolutely nothing, can be allowed to take down production systems. So, while there are a few well-known practice implementations, the general rules are to: set SLAs; monitor the SLAs to detect encroachment through performance degradation; diagnose the issue; correct the issue; and finally push the correction through the change management process. Specifically on performance, DBAs should not be reactive but proactive. Start profiling to see issues before they arise, and moreover start ensuring that SQL is developed as quality, performant code with every upfront keystroke, not post facto when it is in production. This is harder to do with off-the-shelf apps, but it is valuable in those situations as well, to understand how the vendor implemented the code.
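Those stages can be made explicit in tooling; here is a hypothetical encoding of the lifecycle, with stage names taken from the answer above, of the sort an incident- or change-tracking script might use.

```python
# Hypothetical encoding of the performance lifecycle stages described above,
# e.g. for tagging incidents or change tickets by their current stage.
from enum import Enum

class PerfLifecycle(Enum):
    SET_SLAS = 1
    MONITOR = 2            # detect SLA encroachment via performance degradation
    DIAGNOSE = 3
    CORRECT = 4
    CHANGE_MANAGEMENT = 5  # push the correction through change control

def next_stage(stage: PerfLifecycle) -> PerfLifecycle:
    order = list(PerfLifecycle)
    return order[min(order.index(stage) + 1, len(order) - 1)]

print(next_stage(PerfLifecycle.DIAGNOSE))  # -> PerfLifecycle.CORRECT
```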

  9. Performance may not always be an issue of hardware or SQL. In fact, the design itself is often a major contributor to the database’s performance. How can developers and administrators work together more closely to optimize the database design up front?

GREGK: Poor data models are proportionally a greater violator of database performance than anything else! What typically occurs is that development receives a design from a data architect who is less savvy about database performance but applies the data-modeling rules verbatim as the ‘business’ sees the data, not as the application needs to see it. That design is taken as gospel by development and implemented nearly ‘as is’: no account of the denormalization needed to reduce the table joins that grind queries to a halt, let alone an effective way to ‘see’ where critical indexes should be applied. So tooling that helps examine the schema before it is implemented, and provides design guidance to ensure performance, is a must.
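To make the denormalization point concrete, here is a contrived sketch: a normalized model whose hot query needs a three-table join, next to a denormalized read model (with its critical index) that answers the same question without joins. The schema is invented, and SQLite is used only so the snippet runs anywhere.

```python
# Contrived illustration of the design trade-off described above.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- Normalized model, as a data architect might hand it over:
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders    (id INTEGER PRIMARY KEY, customer_id INT);
    CREATE TABLE lines     (id INTEGER PRIMARY KEY, order_id INT, amount REAL);

    -- The hot reporting query must join all three tables:
    --   SELECT c.name, SUM(l.amount)
    --     FROM customers c
    --     JOIN orders o ON o.customer_id = c.id
    --     JOIN lines  l ON l.order_id   = o.id
    --    GROUP BY c.name;

    -- Denormalized read model for that query path: one table, zero joins,
    -- plus the critical index the query needs.
    CREATE TABLE order_lines_flat (customer_name TEXT, order_id INT, amount REAL);
    CREATE INDEX idx_flat_customer ON order_lines_flat(customer_name);
""")

# The flat model is answered via the index, with no joins at all.
for row in conn.execute(
    "EXPLAIN QUERY PLAN SELECT SUM(amount) FROM order_lines_flat WHERE customer_name = 'Acme'"
):
    print(row)
```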

  10. There’s a trend where DBAs are putting more onus back on developers to write code and perform various database functions, freeing up DBAs for truly required SLAs. How can DBAs trust that what they will continue to get from the developers will be ‘good’ code?

GREGK: Well, that’s the vicious circle, right? DBAs have always re-written code, because the developers creating it often do not have the background to know how to leverage the vendor’s optimization ‘hints’ (to use an Oracle term). But DBAs are now becoming less specialized and more generalized across many systems and databases. Companies have fewer human resources and more systems to manage, and therefore need to ensure that code is written right up front, not fixed after the fact by the DBA. The problem is that developers, too, need knowledge of all the platform nuances to write highly performant code. They can acquire those skills by studying the vendor’s resources (time consuming, and they may have more than one database platform to learn) or rely on productivity tooling that shows them how to write this ‘pre-optimized’ code as they type it. Clearly, the latter is the fastest way.
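For readers unfamiliar with the hint mechanism mentioned above, this is roughly what an Oracle optimizer hint looks like in application code; the table, index, and bind variable are invented for the example.

```python
# Hypothetical Oracle index hint: the /*+ INDEX(...) */ comment asks the
# optimizer to use a specific index. All names here are invented.
HINTED_QUERY = """
    SELECT /*+ INDEX(t idx_trades_broker) */ *
      FROM trades t
     WHERE t.broker_id = :broker_id
"""
# e.g. with a python-oracledb cursor:
# cursor.execute(HINTED_QUERY, broker_id=7)
```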

  11. Please cite examples of the kinds of returns a business will get as a result of having higher-performing databases.

GREGK: The benefits range from internal to external, with all points leading to positive outcomes for the business. Internally speaking, optimized databases will reduce physical resource requirements, assist with consolidation planning and virtualization, and reduce employee/user wait-time frustrations, ultimately ensuring DBAs meet their prescribed SLAs. Externally, partners whose systems interface via web services making calls to the database systems will also benefit from processing optimization on the database (and application server). Beyond business-process improvements such as workflow and execution-engine implementations, pre-optimization of database assets is the key to success in meeting business objectives.

