An Example of Oracle Fragmentation Tuning


A customer called in, saying he had run into a very strange problem: on a large table with tens of millions of rows, a simple SELECT * FROM <TAB_NAME> WHERE ROWNUM<100 took more than ten seconds to return. I asked whether the table was badly fragmented. He said that was impossible: the data had only been imported with IMP the day before, everything was fine yesterday, and the problem only showed up today. Besides, it was a call-detail-record table on which deletes are never performed, so there should be no fragmentation. I asked him to capture a 10046 trace right away and send it to me.
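
For readers who want to reproduce that step, here is a minimal sketch of one common way to capture and format a 10046 trace; the tracefile identifier, trace level, and file names below are assumptions for illustration, not what the customer actually ran:

-- In the session that will run the slow SQL: enable extended SQL trace.
-- Level 12 records wait events and bind values in the raw trace file.
ALTER SESSION SET tracefile_identifier = 'rownum_test';
ALTER SESSION SET events '10046 trace name context forever, level 12';

SELECT * FROM ttt WHERE ROWNUM < 100;

-- Turn tracing off once the statement has finished.
ALTER SESSION SET events '10046 trace name context off';

-- On the database server, format the raw trace with tkprof
-- (file names are placeholders):
-- $ tkprof <raw_trace_file>.trc rownum_test.txt sys=no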

Ten minutes later, he sent the trace over QQ: SELECT * FROM ttt where rownum<100

 

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.14       0.17         44        198          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        8      3.71       5.86      67489      68340          0          99
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total       10      3.85       6.03      67533      68538          0          99

 

The summary shows that the statement really did perform 67,533 physical reads and 68,538 logical reads, with an elapsed time of 6.03 seconds. Looking at the wait events in the raw trace:

BINDS #39:

EXEC #39:c=0,e=88,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,tim=1422207486718

WAIT #39: nam='SQL*Net message to client' ela= 7 driver id=1650815232 #bytes=1 p3=0 obj#=206418 tim=1422207486810

WAIT #39: nam='SQL*Net more data to client' ela= 203 driver id=1650815232 #bytes=2002 p3=0 obj#=206418 tim=1422207487071

WAIT #39: nam='SQL*Net more data to client' ela= 66 driver id=1650815232 #bytes=2020 p3=0 obj#=206418 tim=1422207487175

WAIT #39: nam='db file scattered read' ela=515 file#=146 block#=92900 blocks=5 obj#=206418 tim=1422207488208

WAIT #39: nam='db file scattered read' ela=918 file#=146 block#=92905 blocks=8 obj#=206418 tim=1422207489579

WAIT #39: nam='db file scattered read' ela=2121 file#=146 block#=92914 blocks=7 obj#=206418 tim=1422207492091

WAIT #39: nam='db file scattered read' ela=617 file#=146 block#=92921 blocks=8 obj#=206418 tim=1422207493135

WAIT #39: nam='db file scattered read' ela=493 file#=146 block#=92930 blocks=7 obj#=206418 tim=1422207494016

WAIT #39: nam='db file scattered read' ela=1666 file#=147 block#=897417 blocks=8 obj#=206418 tim=1422207496049

WAIT #39: nam='db file scattered read' ela=1026 file#=147 block#=897426 blocks=7 obj#=206418 tim=1422207497350

WAIT #39: nam='db file scattered read' ela=378 file#=147 block#=897433 blocks=8 obj#=206418 tim=1422207498049

WAIT #39: nam='db file scattered read' ela=1075 file#=147 block#=897442 blocks=7 obj#=206418 tim=1422207499416

WAIT #39: nam='db file scattered read' ela=1649 file#=147 block#=897449 blocks=3 obj#=206418 tim=1422207501237

WAIT #39: nam='db file scattered read' ela=2768 file#=147 block#=897453 blocks=4 obj#=206418 tim=1422207504191

WAIT #39: nam='db file scattered read' ela=653 file#=147 block#=897458 blocks=7 obj#=206418 tim=1422207505141

WAIT #39: nam='db file scattered read' ela=1588 file#=147 block#=897465 blocks=8 obj#=206418 tim=1422207507029

WAIT #39: nam='db file scattered read' ela=460 file#=147 block#=897474 blocks=7 obj#=206418 tim=1422207507787

WAIT #39: nam='db file scattered read' ela=608 file#=147 block#=897481 blocks=8 obj#=206418 tim=1422207508697

WAIT #39: nam='db file scattered read' ela=564 file#=147 block#=897490 blocks=7 obj#=206418 tim=1422207509571

WAIT #39: nam='db file scattered read' ela=832 file#=147 block#=897497 blocks=8 obj#=206418 tim=1422207510668

WAIT #39: nam='db file scattered read' ela=846 file#=148 block#=102411 blocks=16 obj#=206418 tim=1422207512030

WAIT #39: nam='db file scattered read' ela=4872 file#=148 block#=102427 blocks=16 obj#=206418 tim=1422207517488

WAIT #39: nam='db file scattered read' ela=1624 file#=148 block#=102443 blocks=16 obj#=206418 tim=1422207520062

The trace does contain a large number of db file scattered read waits, which reinforced my view that the table was heavily fragmented. Taking the parameters of the first scattered read, file#=146 block#=92900, I asked the customer to run alter system dump datafile 146 block min 92900 block max 92904.
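
As a sanity check (not part of the original troubleshooting), one could first confirm that this file#/block# range really belongs to the table before dumping it; a sketch, with the owner filter left out as an assumption:

-- Map file# 146, block# 92900 back to its owning segment;
-- on a real system, add an OWNER filter and expect this query to take a while.
SELECT owner, segment_name, segment_type
  FROM dba_extents
 WHERE file_id = 146
   AND 92900 BETWEEN block_id AND block_id + blocks - 1;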

The block dump produced the following result:

data_block_dump,data header at 0x6000000000208e64

===============

tsiz: 0x1f98

hsiz: 0x4c

pbl: 0x6000000000208e64

bdba: 0x24816ae4
     76543210

flag=--------

ntab=1

nrow=29

frre=0

fsbo=0x4c

fseo=0xf7

avsp=0x1f4c

tosp=0x1f4c

0xe:pti[0] nrow=29 offs=0

0x12:pri[0] sfll=1

0x14:pri[1] sfll=2

0x16:pri[2] sfll=3

0x18:pri[3] sfll=4

0x1a:pri[4] sfll=5

0x1c:pri[5] sfll=6

0x1e:pri[6] sfll=7

0x20:pri[7] sfll=8

0x22:pri[8] sfll=9

0x24:pri[9] sfll=10

0x26:pri[10] sfll=11

0x28:pri[11] sfll=12

0x2a:pri[12] sfll=13

0x2c:pri[13] sfll=14

0x2e:pri[14] sfll=15

0x30:pri[15] sfll=16

0x32:pri[16] sfll=17

0x34:pri[17] sfll=18

0x36:pri[18] sfll=19

0x38:pri[19] sfll=20

0x3a:pri[20] sfll=21

0x3c:pri[21] sfll=22

0x3e:pri[22] sfll=23

0x40:pri[23] sfll=24

0x42:pri[24] sfll=25

0x44:pri[25] sfll=26

0x46:pri[26] sfll=27

0x48:pri[27] sfll=28

0x4a:pri[28] sfll=-1

block_row_dump:

end_of_block_dump

 

The dumped blocks were completely empty: block_row_dump: is immediately followed by end_of_block_dump, so the block contains no row data, and the free-space counters avsp/tosp (0x1f4c) are nearly equal to the total data area tsiz (0x1f98). I suggested that the customer run ALTER TABLE <table> MOVE;. After the reorganization, the table that had been 12 GB shrank to only 800 MB, and running the same SQL again took just 12 buffer gets:


Statistics

----------------------------------------------------------

1 recursive calls

0 db block gets

12 consistent gets

1 physical reads

0 redo size

18921 bytes sent via SQL*Net to client

558 bytes received via SQL*Net from client

8 SQL*Net roundtrips to/from client
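
For reference, a minimal sketch of the reorganization described above. The segment and index names other than ttt are placeholders, and since ALTER TABLE ... MOVE locks the table and leaves its indexes UNUSABLE, on a production CDR table this would normally be done in a maintenance window:

-- Check the segment size before and after the move (segment name is an assumption).
SELECT segment_name, ROUND(bytes/1024/1024) AS size_mb
  FROM dba_segments
 WHERE segment_name = 'TTT';

-- Rebuild the table below its high-water mark.
ALTER TABLE ttt MOVE;

-- MOVE changes rowids, so any indexes on the table become UNUSABLE and must be rebuilt.
SELECT index_name, status FROM dba_indexes WHERE table_name = 'TTT';
ALTER INDEX ttt_pk REBUILD;   -- hypothetical index name; repeat for each UNUSABLE index

On 10g and later, an alternative is ALTER TABLE ttt ENABLE ROW MOVEMENT; followed by ALTER TABLE ttt SHRINK SPACE;, which reclaims space below the high-water mark without invalidating indexes, provided the table lives in an ASSM tablespace.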

This little example from Laobai is simple, but it shows a basic tuning workflow: when you hit a SQL problem, capture a 10046 trace to collect detailed information, analyze the trace to find the root cause, and then fix it. Here the cause turned out to be fragmentation; after the table was moved it shrank from 12 GB to 800 MB, the fragmentation was gone, and the SQL's performance improved.
