Last night I was called in on an unfortunate problem. A client has an update that they must run which touches ~830,000 rows, setting one column equal to the other in a two-column table. On their development environment this took roughly twenty seconds. On their soon-to-be production environment, however, it was taking roughly 45 minutes.
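The statement was of this general shape (the table and column names here are placeholders; the client's actual schema wasn't shown):

```sql
-- Hypothetical two-column table standing in for the client's case:
-- set one column equal to the other across ~830,000 rows.
UPDATE big_table
   SET col_a = col_b;
COMMIT;
```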
Explain plans were checked, trace files examined, and not much popped up except that the production machine was attempting larger I/Os during the update and was consequently taking much longer. Comparing the initialization parameters between production and development showed the exact same parameters, except that the upcoming production box was using a 16k block size and development was using a 4k block size.
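For anyone comparing instances the same way, the block size is a one-liner to confirm on each side (standard Oracle, nothing specific to this case):

```sql
-- Returns 4096 on the development instance, 16384 on the new production box
SELECT name, value
  FROM v$parameter
 WHERE name = 'db_block_size';
```

In SQL*Plus, `SHOW PARAMETER db_block_size` does the same job.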
There was one other difference, which was that the upcoming production server was 64-bit and the development server was 32-bit. In order to make sure everything was the same, I created a fresh instance on the 64-bit production server with a block size of 4k, exported the objects from the instance with the 16k block size, and imported them.
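On a database of this vintage the move would have been done with the classic exp/imp utilities, along these lines (connect strings, usernames, and filenames below are placeholders, not the actual ones used):

```shell
# Export the schema's objects from the 16k-blocksize instance
exp scott/tiger@prod16k owner=scott file=objects.dmp log=exp.log

# Import them into the freshly created 4k-blocksize instance
imp scott/tiger@prod4k fromuser=scott touser=scott file=objects.dmp log=imp.log
```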
The final result: When the update was run against the 16k blocksize DB, it took 45 minutes. Against the 4k blocksize DB on the same box with the same parameters and the same FS mounts, it took 2 minutes 20 seconds.
I even took it a step further to see if we could squeeze any more performance out of it. Setting FILESYSTEMIO_OPTIONS='SETALL' (instead of NONE) I was able to get the update down to 1.5 minutes. Turning off DB_BLOCK_CHECKING (not recommended) I was able to get it down to 10 seconds.
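For reference, the two changes look like this (FILESYSTEMIO_OPTIONS is not dynamic, so it needs an SPFILE change and an instance restart; DB_BLOCK_CHECKING can be changed on the fly):

```sql
-- Enable both asynchronous and direct I/O at the filesystem layer
-- (takes effect after a restart)
ALTER SYSTEM SET filesystemio_options = 'SETALL' SCOPE = SPFILE;

-- Disable block checking -- not recommended outside of a test like this
ALTER SYSTEM SET db_block_checking = FALSE;
```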
By going from a 16k blocksize to a 4k blocksize with all other things being equal, we saw roughly a twenty-fold improvement. We shaved off even more time by setting FILESYSTEMIO_OPTIONS = SETALL. And then we changed DB_BLOCK_CHECKING, a parameter the Oracle documentation says adds only a 1 to 10% overhead depending on the concurrency of DML, which by itself made the update 6 times faster.
The final result was a 270 times improvement over the original.
To be fair, I also tried setting the FILESYSTEMIO_OPTIONS and DB_BLOCK_CHECKING the same on the 16k blocksize instance, which resulted in the update taking 30 minutes as opposed to 45. The results were better, but the 4k blocksize database still won by 180 times.
What’s more, all queries both large and small performed the same or better than in production, and a test insert of 100,000 rows went from 20 seconds on the 16k blocksize to 3 seconds on the 4k.
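The insert test was along these lines (the row-generation trick via CONNECT BY is my reconstruction; the actual test table and data weren't shown):

```sql
-- Generate and insert 100,000 rows into the hypothetical test table
INSERT INTO big_table (col_a, col_b)
SELECT level, level
  FROM dual
CONNECT BY level <= 100000;
COMMIT;
```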
Environment: Oracle 18.104.22.168 on RHEL4 64-bit, 8 CPUs, 2+2 RAID10 for data with a 128k segment size, 2+2 RAID10 for redo. Hardware diagnostics report no issues.
Honestly, there are no final conclusions yet. Like a weight loss commercial that shows a 400 pound man slimming down to 150 pounds, 'results may not be typical'. While it definitely shows that changing only blocksize can have a profound effect, more thorough analysis will help get to the core of why it had such an effect. Whatever the case, it definitely shows us to keep an open mind in tuning. Leave no stone unturned!
NOTE: There is one interesting outcome that I’ve observed: on the 16k blocksize database, the UPDATE statement performs full table scans against FILE$. On the 4k blocksize database, it performs range scans using the FILE_I2 index. This only makes a difference of about 1 consistent get per execution, but it is worth noting.
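One way to check this yourself is to fish the recursive statement against FILE$ out of the shared pool and display its plan (DBMS_XPLAN.DISPLAY_CURSOR is available as of 10g; the sql_id substitution below is a placeholder you fill in from the first query):

```sql
-- Locate the recursive SQL touching FILE$
SELECT sql_id, sql_text
  FROM v$sql
 WHERE sql_text LIKE '%file$%';

-- Display the cached plan for the sql_id found above
SELECT *
  FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR('&sql_id'));
```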
NOTE: It is also worth reiterating that this is an Oracle 22.214.171.124 database.