The Overly Shared Pool

Ever been in a really crowded pool? I have… and chances are so have your queries. This article will go into a little bit of high level detail about the shared pool and a portion of what it does for us, then show a real “good pools gone bad” issue resolution. If you feel like the shared pool is something you’ve already done triple backflips with a jackknife for a 10/10 finish into and you’d rather just sit poolside and order an extravagant beverage while reading my client’s tale of woe, you can skip down a bit.

Parsing = Translation and Optimization

When someone walks up to you and says, “Hello, how are you doing?” your brain receives audio input from your ears in the form of the English language. This input is then ‘parsed’ by the cerebral cortex in a place called “Wernicke’s Area”. This section of the brain takes the audio information and breaks it down based on inflection, syntax, and other factors. Once the question registers and is ‘understood’, your brain is able to come up with a suitable response. It can either dig through short-term memory to piece together recent events and describe, in detail, exactly how you are doing; or more efficiently, it can offer up a quick instinctual response like “Good, you?”

Oracle parsing works much the same way. When a statement is given to Oracle in the form of Structured Query Language (SQL), the statement must first be understood by the Oracle Instance (the brain, as it were). The statement is checked for syntax and permissions, and then broken down to a hash value so Oracle can offer an optimal response. The optimization phase of the parse is most important, and is done via one of three methods: hard parse, soft parse, or session cached cursor.

  • Hard Parse – Oracle searches for the statement in an area called the Library Cache, a subsection of the Shared Pool. If the statement is not found, Oracle invokes a tool called the Cost Based Optimizer (CBO) in order to come up with an optimal execution plan. This invocation is widely considered very expensive in terms of CPU, just as a well thought out response to a question requires more brainpower.
  • Soft Parse – Oracle searches for the statement in the Library Cache and finds it. This allows Oracle to re-use execution plans without having to re-optimize the query and is akin to responding “Good, you?” when asked how you are doing.
  • Cached Cursors – Oracle notices that you have run the same statement over and over again, and caches a handle to the cursor in your session’s private memory so most of the parse work against the Shared Pool is skipped. This is akin to someone asking you the same question over and over again, at which point the answer is given immediately with no real thought.

Hard Parses are a necessary evil in order to initially optimize a query. Soft parses are generally considered to be good, but can be problematic under heavy concurrency (explained later). Cached Cursor responses are great for performance, but require RAM resources granted explicitly via Oracle parameters (such as SESSION_CACHED_CURSORS) or client/OCI settings.
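
If you want to watch these three paths for your own session, one option is to sample the session-level parse statistics. Below is a minimal sketch, assuming the Oracle JDBC (ojdbc) driver is on the classpath, the account has SELECT access on V$MYSTAT and V$STATNAME, and the connection details are placeholders:

    import java.sql.*;

    public class ParseStats {
        public static void main(String[] args) throws SQLException {
            // Hypothetical connection details -- adjust to your environment.
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:oracle:thin:@//dbhost:1521/ORCL", "app_user", "app_pass");
                 PreparedStatement ps = conn.prepareStatement(
                     "SELECT sn.name, ms.value " +
                     "FROM v$mystat ms JOIN v$statname sn ON sn.statistic# = ms.statistic# " +
                     "WHERE sn.name IN ('parse count (total)', 'parse count (hard)', " +
                     "'session cursor cache hits', 'execute count')");
                 ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    // 'parse count (total)' minus 'parse count (hard)' approximates soft parses;
                    // 'session cursor cache hits' shows the cached-cursor path.
                    System.out.printf("%-30s %d%n", rs.getString(1), rs.getLong(2));
                }
            }
        }
    }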

Query executions go through three main phases:

  1. Parse – Translate and optimize the query
  2. Execute – Lay down the execution plan and run the query
  3. Fetch – Pull back data from Oracle objects based on the execution plan

It is important to note that most database client APIs (JDBC, OCI, and so on) issue each of the three phases as separate calls. When a query is run, it is possible to parse the query (known as a parse call) and then never execute or fetch at all. Reusing parsed queries (parse once, execute many) is a highly recommended practice, as the sketch below shows.
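
To make “parse once, execute many” concrete, here is a minimal JDBC sketch: the statement (with a bind variable) is prepared once and executed repeatedly, so Oracle only has to parse it a single time. The connection string and the classic SCOTT.EMP demo table are assumptions, not anything from a real system:

    import java.sql.*;

    public class ParseOnceExecuteMany {
        public static void main(String[] args) throws SQLException {
            // Hypothetical connection details and the classic SCOTT.EMP demo table.
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:oracle:thin:@//dbhost:1521/ORCL", "scott", "tiger");
                 // Prepared once: this is the parse call.
                 PreparedStatement ps = conn.prepareStatement(
                     "SELECT ename FROM emp WHERE empno = ?")) {
                int[] empnos = {7369, 7499, 7521, 7566};
                for (int empno : empnos) {
                    ps.setInt(1, empno);                     // only the bind value changes
                    try (ResultSet rs = ps.executeQuery()) { // execute + fetch, no new parse
                        while (rs.next()) {
                            System.out.println(rs.getString("ename"));
                        }
                    }
                }
            } // closing the statement releases the cursor
        }
    }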

The Shared Pool

When queries are hard parsed, the execution plans generated by the CBO are placed in the Shared Pool; specifically, in a subsection called the Library Cache. If the same query comes in again, Oracle must simply search the Library Cache and return the stored execution plan (soft parse).

However, both hard and soft parses put strain on the Shared Pool. The Shared Pool is simply a chunk of RAM, and CPU must manage RAM. This means that if 100 users are simultaneously trying to access the Shared Pool, CPUs will be dedicated to these users to access the RAM. If there are more users than there are CPUs, a queue will form and the CPU will work until all the users are done.

Think of it as a trip to the grocery store. To take items from the store, a cashier must first process you. If there are 100 shoppers trying to check out and only two cashiers, a large queue will form and long wait times will ensue. The bottleneck in this case is cashiers, and you could even call the event “waiting on resource: cashiers”. We can fix this by adding more cashiers, thereby cutting down on the wait times overall despite high concurrency.

In Oracle, the items are RAM segments, the cashiers are CPUs, and the shoppers are users or sessions. When more sessions need chunks of RAM than there are CPUs to serve them, queues and waits build up and overall access to the database degrades. Wait events such as “latch: library cache” and “cursor: pin S wait on X” are simply Oracle’s names for these queues, just as “waiting on the slow cashier with no bagger” is our name for a wait event at the grocery store.

Continuing this analogy, let’s say we have 20 shoppers and 10 cashiers. That queue is very reasonable and will be processed rather quickly. Now imagine we throw in 20 more shoppers who come in to buy the exact same three or four items every few minutes. Then imagine we bring in a few shoppers who load up their carts, get to the front of the line, and then decide they don’t want the items after all. On top of that, we throw in a few store employees who walk through the line demanding various sub-tasks on top of the normal cashier duties. This situation will descend into chaos, as anyone who has ever gone to Wal-Mart on a Saturday afternoon can tell you.

The same thing occurs in Oracle. The shared pool is required for parsing, for queries from DUAL, for sequences, and countless other components. There are sessions which come through over and over to run the same thing every time, sessions that make a parse call but never actually execute or fetch, sessions that bring in huge queries that take forever to parse, and so on. In addition, Oracle has its own control structures, sequences, views, and other components that must be handled behind the scenes while all this is going on. And here we’ve only discussed the checkout process! Imagine if during all this the cashier also had to order custom or out-of-stock items on the fly from warehouses (akin to disk reads from storage).

However, the answer to this predicament is not always to add more CPUs. We can also cut down on this behavior by introducing better design and optimizations. For instance:

  • Making the shared pool bigger allows more cursors to stay cached, so more queries can be soft parsed instead of hard parsed, cutting down on the CPU spent in the optimizer and easing pressure on shared pool latches.
  • Allowing repeat queries to parse in session memory instead of the shared pool reduces strain.
  • Ensuring that queries parse only when absolutely necessary at the application level reduces parse calls and therefore contention on the shared pool.
  • Not running meaningless queries like “select 1 from dual” to check for database availability cuts down on parse calls and executions that add no real value (a sketch of one alternative follows this list).
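
As one example of that last point, many connection pools can validate connections without firing a SQL ping at all. The helper below is a hypothetical sketch using the standard JDBC 4.0 Connection.isValid() call, which most modern drivers answer with a lightweight round trip rather than a parsed and executed query:

    import java.sql.Connection;
    import java.sql.SQLException;

    public class HealthCheck {
        // Hypothetical helper: prefer this (or your pool's built-in validation)
        // over firing "SELECT 1 FROM DUAL" on every request -- each of those
        // pings is another parse call and execution against the shared pool.
        static boolean isConnectionHealthy(Connection conn) {
            try {
                return conn != null && conn.isValid(5); // timeout in seconds
            } catch (SQLException e) {
                return false;
            }
        }
    }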

A Real Life Shared Pool Problem

I had a client who called me in due to an incredibly severe issue with the shared pool. They followed all the “rules”: they used bind variables religiously, avoided anonymous PL/SQL blocks, and set a decent value for session_cached_cursors. Yet the system would become unresponsive and the following waits kept occurring when concurrency got really heavy:

  • latch: library cache
  • row cache lock
  • cursor pin S wait on X
  • latch: shared pool

AWR/ASH and V$ data showed that soft parses were happening almost exclusively. But even though the system was properly caching and re-using SQL in the shared pool, resulting in soft parses, the contention against the shared pool was more than the system could handle and it would come to a grinding halt.

In the end, we went through a massive effort to reduce usage on the shared pool (hard or soft). This was accomplished by removing unnecessary DUAL calls (the code was doing a select from DUAL on every page) and most importantly, setting up true client cursor caching in the JBoss application server.

First have a look at this bit of nastiness:

   Elapsed:               59.50 (mins)
   DB Time:           16,227.94 (mins)

Remember that Average Active Sessions (AAS) is equal to DB Time divided by Elapsed Time, which gives us a good indication of the wait/work load during this roughly 60-minute window. As you can see, the AAS during this time was 16,227.94 / 59.50 ≈ 273, far too high for most any respectable system.

Looking at the Top 5 Timed Events, you can see that we’re abnormally tied up in concurrency wait:

Top 5 Timed Events                                         Avg %Total
~~~~~~~~~~~~~~~~~~                                        wait   Call
Event                                 Waits    Time (s)   (ms)   Time Wait Class
------------------------------ ------------ ----------- ------ ------ ----------
latch: library cache                169,214   3,677,789  21735  377.7 Concurrenc
row cache lock                       31,355      60,136   1918    6.2 Concurrenc
enq: TX - row lock contention        20,268      58,719   2897    6.0 Applicatio
latch: shared pool                  102,219      54,985    538    5.6 Concurrenc
cursor: pin S wait on X           4,895,909      53,456     11    5.5 Concurrenc

There were reams of other statistics and information used to correctly identify the issue, but the key symptom could be seen right here in the AWR under the SQL ordered by Parse Calls section (query text has been removed to protect the guilty):

                            % Total
 Parse Calls  Executions     Parses    SQL Id
------------ ------------ --------- -------------
     283,466      283,061     12.07 bd7nzg41xqmtu
Module: JDBC Thin Client
CONFIDENTIAL QUERY #1

     204,990      204,987      8.73 bajtpw34f1h8n
Module: JDBC Thin Client
CONFIDENTIAL QUERY #2
     166,868      166,870      7.11 avc1jqzz04wpr
SELECT 'x' FROM DUAL

     108,239      108,239      4.61 4rg9kqppkwb2x
Module: JDBC Thin Client
CONFIDENTIAL QUERY #3

You should notice three things from the above info:

  1. The SELECT ‘x’ FROM DUAL query is definitely excessive, coming in at roughly 46 times per second.
  2. More importantly, the Parse Calls match the Executions almost 100% for every query.
  3. There are a LOT of query executions.

Interestingly enough, conventional wisdom, advice, and even training always warn of the dangers of hard parsing vs. soft parsing, and only vaguely mention the parse-to-execute ratio and how important it is; for the most part, the focus is on the type of parse. But the case above shows that even soft parses aren’t enough: massive concurrency, an overabundance of small queries, and meticulous bind variable practices combined to create a huge case of shared pool contention with ~99% soft parses.
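
The same symptom can be checked outside of AWR by comparing PARSE_CALLS to EXECUTIONS in V$SQLAREA. Here is a rough sketch, assuming an existing JDBC connection, SELECT access on V$SQLAREA, and arbitrary thresholds:

    import java.sql.*;

    public class ParseToExecuteCheck {
        // Hypothetical helper: list statements that are parsed on (nearly) every execution.
        static void reportParseHeavySql(Connection conn) throws SQLException {
            String sql =
                "SELECT sql_id, parse_calls, executions " +
                "FROM v$sqlarea " +
                "WHERE executions > 1000 " +               // arbitrary noise filter
                "AND parse_calls >= executions * 0.9 " +   // parse-to-execute ratio near 1:1
                "ORDER BY parse_calls DESC";
            try (PreparedStatement ps = conn.prepareStatement(sql);
                 ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.printf("%s  parses=%,d  executions=%,d%n",
                            rs.getString("sql_id"), rs.getLong("parse_calls"),
                            rs.getLong("executions"));
                }
            }
        }
    }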

The application server in use was JBoss, which has a feature called the Prepared Statement Cache. Since querying in JBoss is done by setting up a Prepared Statement and running it against the DB, the Prepared Statement Cache allows JBoss to dedicate memory to keeping the cursor open and reusable following a successful parse/execute/fetch. Even with all the bind variables in the world, we couldn’t “Parse Once, Execute Many” without the client keeping the parse cursor open. Take a look at what happened when we found the root cause of the issue in the application server settings and corrected it (a value of 15 for prepared-statement-cache-size worked nicely):

                            % Total
Parse Calls  Executions     Parses    SQL Id
------------ ------------ --------- -------------
      13,481       66,349      2.32 8g5h090r1uxmc
Module: JDBC Thin Client
CONFIDENTIAL QUERY #1

      11,346      104,237      1.95 5nz8u1h2qyr92
Module: JDBC Thin Client
CONFIDENTIAL QUERY #2

      11,083       63,029      1.90 78xz4j1g270uw
Module: JDBC Thin Client
CONFIDENTIAL QUERY #3

      10,663       35,397      1.83 02s458bkx4z4a
Module: JDBC Thin Client
CONFIDENTIAL QUERY #4

Notice that the parse to execute ratio improved dramatically. We also got rid of the DUAL queries. Consequently, the latch issue went away, never to return. The shared pool now incurs about 10% of the waits it used to and never hits that breaking point where performance degrades exponentially.
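
The fix itself was the prepared-statement-cache-size attribute on the JBoss datasource, which is configuration rather than code. For anyone working outside JBoss, here is a sketch of the same idea one layer down, using Oracle JDBC’s implicit statement caching; the OracleConnection calls are real driver extensions, but the connection handling and the EMP lookup are illustrative assumptions, not the client’s actual code:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import oracle.jdbc.OracleConnection;

    public class ClientCursorCaching {
        // Enable driver-level statement caching so that closing a PreparedStatement
        // parks the open cursor in a client-side cache instead of discarding it --
        // the same idea as JBoss's prepared-statement-cache-size, one layer down.
        static void enableStatementCaching(Connection conn) throws SQLException {
            OracleConnection oconn = conn.unwrap(OracleConnection.class);
            oconn.setImplicitCachingEnabled(true);
            oconn.setStatementCacheSize(15); // mirrors the value that worked here
        }

        static String lookupName(Connection conn, int id) throws SQLException {
            // With caching enabled, repeated prepare/close cycles of the same SQL
            // text reuse the cached cursor rather than issuing a fresh parse call.
            try (PreparedStatement ps = conn.prepareStatement(
                     "SELECT ename FROM emp WHERE empno = ?")) { // hypothetical demo table
                ps.setInt(1, id);
                try (ResultSet rs = ps.executeQuery()) {
                    return rs.next() ? rs.getString(1) : null;
                }
            }
        }
    }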

The final lessons:

  • Soft parses are preferable to hard parses in an OLTP application where query stability is preferred (though auto-gathered histograms sometimes messed this up in 10g before Adaptive Cursor Sharing was introduced in 11g)
  • Bind variables can and should be used to improve plan reuse and reduce hard parses (CURSOR_SHARING is something of an alternative, but not preferred to actual bind variable usage in the code)
  • To actually lower your parse to execute ratio, bind variables aren’t always enough. Your client needs to actually be configured to take advantage of them. This one doesn’t seem to make it into all the how-tos or best practices, but it should.