The problem: a SELECT from the table (full scan) runs for hours on a 400MB table, generating a lot of physical I/O against UNDO. The table also has 250MB of indexes (3 indexes in total).
Every time the excessive UNDO I/O occurs, process A (the full-scan SELECT) starts at the same time as other processes that modify the same table. Those other processes finish quickly, but process A is then left to crawl slowly through the 400MB table for hours.
The following statistics are excessive:
- "transaction tables consistent reads - undo records applied" = 37,711,804
- "consistent changes" = 37,712,584
- "physical reads" = 36,708,796
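For reference, these session-level counters can be read from V$SESSTAT; a minimal sketch, assuming the SID of process A is known and the session has SELECT privilege on the V$ views (:sid is a placeholder bind):

```sql
-- Read the three counters above for one session
SELECT sn.name, ss.value
FROM   v$sesstat  ss
JOIN   v$statname sn ON sn.statistic# = ss.statistic#
WHERE  ss.sid = :sid
AND    sn.name IN (
         'transaction tables consistent reads - undo records applied',
         'consistent changes',
         'physical reads');
```

Sampling this query before and after a run of process A gives the per-run deltas rather than the instance-wide AWR totals.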
AWR segment statistics for the table segment (over the researched 7-hour period):
- "db block changes" = 2,200,592, which roughly matches the row count of the table.
Table segment size: 48,128 blocks (394,264,576 bytes).
Ratio of "excessive statistics" / table segment size in blocks ≈ 37,711,804 / 48,128 ≈ 780.
I.e., to read one block of the table segment, roughly 780 blocks of physical I/O must be done.
By contrast, the ratio of "excessive statistics" / "db block changes" (≈ table row count) is only ≈ 17.
For comparison, the number of commits during the period (AWR statistics) is about 2,000.
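The two ratios above can be recomputed from the exact AWR figures (a quick arithmetic check, nothing Oracle-specific):

```python
# Exact AWR figures quoted above
undo_records_applied = 37_711_804  # "transaction tables ... undo records applied"
table_blocks = 48_128              # table segment size in blocks
db_block_changes = 2_200_592       # "db block changes" (~ table row count)

per_block = undo_records_applied / table_blocks    # undo records per table block
per_row = undo_records_applied / db_block_changes  # undo records per changed row
print(round(per_block), round(per_row, 1))
```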
Process A is (pseudocode):
  open cursor (full table scan);
  loop
    bulk fetch 1000 records into a varray;
    for each record in the varray loop
      do some PL/SQL-only processing;
    end loop;
  end loop;
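Spelled out, process A has essentially the following PL/SQL shape. This is a sketch only: the table name t, the cursor's column list, and the body of the processing loop are assumptions, not the real code.

```sql
DECLARE
  CURSOR c IS SELECT * FROM t;                    -- full table scan; cursor stays open for hours
  TYPE t_tab IS TABLE OF c%ROWTYPE INDEX BY PLS_INTEGER;
  v_rows t_tab;
BEGIN
  OPEN c;
  LOOP
    FETCH c BULK COLLECT INTO v_rows LIMIT 1000;  -- bulk fetch 1000 records
    EXIT WHEN v_rows.COUNT = 0;
    FOR i IN 1 .. v_rows.COUNT LOOP
      NULL;  -- PL/SQL-only processing here; no DML, no commits
    END LOOP;
  END LOOP;
  CLOSE c;
END;
/
```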