Back home and writing

After a few trips abroad and some very interesting projects with big VIP customers and new features, all of which demanded a lot of time and concentration… I’m back!

I want to share with you some goodies from my trips… and some advice.

Trinidad & Tobago: beware of indifferent and slow service; lots of Chinese places.

Brazil, Rio de Janeiro: try the Feijoada at Rondinella Rest., Av. Atlantica, Copacabana Beach… you won’t regret it.

Update under the hood … and some extreme side effects


At first glance this topic may look trivial, but it can be the source of severe performance issues, unnecessary maintenance tasks and/or wasted storage.

Yes, you may say “UPDATE is so basic, why should I care about it?!” Well, there are cases where using UPDATE carelessly can get ugly. I found this while playing with my old friend ALTER SYSTEM DUMP BLOCK and a few inserts and updates over a table with a few columns, two of them of VARCHAR2 type.

This was done on an Oracle 10.2.0.4 database; let’s see how it goes.

create table table5 (col1 number, col2 varchar2(10), col3 varchar2(46));

insert into table5 (col1,col2,col3) values (1,'row one','This is the first row');

commit;

insert into table5 (col1,col2,col3) values (2,'row two','This is the 2nd row,updates will be Committed!');

commit;
alter session set tracefile_identifier='table5_04_b103';
alter system dump datafile 6 block 103;

I strongly recommend setting the TRACEFILE_IDENTIFIER parameter, because it helps you find your trace file quickly.
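If you don’t already know which file and block your rows landed in, DBMS_ROWID can tell you; a minimal sketch against the table5 rows above (DBMS_ROWID is a standard supplied package):

select dbms_rowid.rowid_relative_fno(rowid) as file#,
       dbms_rowid.rowid_block_number(rowid) as block#
  from table5;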

Initial state dump for block where rows were inserted:


CF8F390 C1023320 01FFFF02 C102FF80 0203022C [ 3..........,...]
CF8F3A0 720703C1 7420776F 542D6F77 20736968 [...row two-This ]
CF8F3B0 74207369 32206568 7220646E 752C776F [is the 2nd row,u]
CF8F3C0 74616470 77207365 206C6C69 43206562 [pdates will be C]
CF8F3D0 696D6D6F 21646574 0203012C 720702C1 [ommitted!,.....r]
CF8F3E0 6F20776F 5415656E 20736968 74207369 [ow one.This is t]
CF8F3F0 66206568 74737269 776F7220 14A30601 [he first row....]
Block header dump: 0x01800067
...
0xe:pti[0] nrow=2 offs=0
0x12:pri[0] offs=0x1f74
0x14:pri[1] offs=0x1f38
block_row_dump:
tab 0, row 0, @0x1f74
tl: 36 fb: --H-FL-- lb: 0x1 cc: 3
col 0: [ 2] c1 02
col 1: [ 7] 72 6f 77 20 6f 6e 65
col 2: [21]
54 68 69 73 20 69 73 20 74 68 65 20 66 69 72 73 74 20 72 6f 77
tab 0, row 1, @0x1f38
tl: 60 fb: --H-FL-- lb: 0x2 cc: 3
col 0: [ 2] c1 03
col 1: [ 7] 72 6f 77 20 74 77 6f
col 2: [45]
54 68 69 73 20 69 73 20 74 68 65 20 32 6e 64 20 72 6f 77 2c 75 70 64 61 74
65 73 20 77 69 6c 6c 20 62 65 20 43 6f 6d 6d 69 74 65 64 21
end_of_block_dump
End dump data blocks tsn: 7 file#: 6 minblk 103 maxblk 103

It shows our two records: the first row starts at the bottom of the block, with the second row stacked on top of it (that’s why a ‘regular’ Oracle table is properly called a HEAP ORGANIZED TABLE).

Now let’s issue some updates on row #2: column col2 will change value from ‘row two’ to ‘row two-02’, and so on up to ‘row two-25’:

update table5 set col2 = 'row two-02' where col1 = 2;
...
update table5 set col2 = 'row two-25' where col1 = 2;


CF8F350 C3041100 FF373005 1702C102 03012C2F [.....07...../,..]
CF8F360 0A03C102 20776F72 2D6F7774 542D3532 [....row two-25-T]
CF8F370 20736968 74207369 32206568 7220646E [his is the 2nd r]
CF8F380 752C776F 74616470 77207365 206C6C69 [ow,updates will ]
CF8F390 43206562 696D6D6F 21646574 0203002C [be Commited!,...]
CF8F3A0 720703C1 7420776F 542D6F77 20736968 [...row two-This ]
CF8F3B0 74207369 32206568 7220646E 752C776F [is the 2nd row,u]
CF8F3C0 74616470 77207365 206C6C69 43206562 [pdates will be C]
CF8F3D0 696D6D6F 21646574 0203002C 720702C1 [ommited!,......r]
CF8F3E0 6F20776F 5415656E 20736968 74207369 [ow one.This is t]
CF8F3F0 66206568 74737269 776F7220 14F10602 [he first row....]
Block header dump: 0x01800067
...
0xe:pti[0] nrow=2 offs=0
0x12:pri[0] offs=0x1f74
0x14:pri[1] offs=0x1ef9
block_row_dump:
tab 0, row 0, @0x1f74
tl: 36 fb: --H-FL-- lb: 0x0 cc: 3
col 0: [ 2] c1 02
col 1: [ 7] 72 6f 77 20 6f 6e 65
col 2: [21]
54 68 69 73 20 69 73 20 74 68 65 20 66 69 72 73 74 20 72 6f 77
tab 0, row 1, @0x1ef9
tl: 63 fb: --H-FL-- lb: 0x1 cc: 3
col 0: [ 2] c1 03
col 1: [10] 72 6f 77 20 74 77 6f 2d 32 35
col 2: [45]
54 68 69 73 20 69 73 20 74 68 65 20 32 6e 64 20 72 6f 77 2c 75 70 64 61 74
65 73 20 77 69 6c 6c 20 62 65 20 43 6f 6d 6d 69 74 65 64 21
end_of_block_dump

Changing col2 from ‘row two’ to ‘row two-02’ required duplicating the whole row… of course, the record with col2=’row two’ was logically deleted, but it remains in the block.
Is this a dead record? Yes, it’s a dead record…

The 24 subsequent updates didn’t duplicate any information… sorry to disappoint you if you expected to see duplication in this case too. So what happened? It seems Oracle’s update algorithm is smart enough to detect that the old and new values have the same length, and simply updates col2 in place. Updates on CHAR columns (fixed-length datatypes) behave the same way.
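You can convince yourself the byte lengths really match with a quick check; a minimal sketch using VSIZE:

select vsize('row two-02') as old_len, vsize('row two-25') as new_len from dual;

Both come back as 10 bytes, so the new value can overwrite the old one in place.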

OK, now let’s try varying the length of the updated value between transactions. First, alternate updates on col2 and col3:

update table5 set col2 = 'row two-38' where col1 = 2;
commit;

update table5 set col3 = 'Now update the second row, shorter text!' where col1 = 2;
commit;

update table5 set col2 = 'row 2-39' where col1 = 2;
commit;

update table5 set col3 = 'Now re-update the second row, another text!' where col1 = 2;
commit;

update table5 set col2 = 'row two-40' where col1 = 2;
commit;

update table5 set col3 = 'Now re-re-update the second row, another text!' where col1 = 2;
commit;

update table5 set col2 = 'row 2-41' where col1 = 2;
commit;

update table5 set col3 = 'Now re-re-re-update the second row, will fit?' where col1 = 2;
commit;

update table5 set col2 = 'row two-42' where col1 = 2;
commit;

update table5 set col3 = 'Now re-re-re-re-update the 2nd row, changed?' where col1 = 2;
commit;

update table5 set col2 = 'row two-43' where col1 = 2;

and later, updates only on col2…

update table5 set col2 = 'row two-44' where col1 = 2;
commit;

update table5 set col2 = 'row 2-45' where col1 = 2;
commit;

update table5 set col2 = 'row two-46' where col1 = 2;
commit;

update table5 set col2 = 'row 2-47' where col1 = 2;
commit;

update table5 set col2 = 'row two-48' where col1 = 2;
commit;

update table5 set col2 = 'row 2-49' where col1 = 2;

Now see how the block dump looks: suddenly the block is filled as if we had issued inserts instead of updates, strange to say the least.


D84D000 61623763 4E5F3163 2C6F6765 C1020302 [c7bac1_Nego,....]
D84D010 6F720803 2D322077 4E2C3934 7220776F [..row 2-49,Now r]
D84D020 65722D65 2D65722D 752D6572 74616470 [e-re-re-re-updat]
D84D030 68742065 6E322065 6F722064 63202C77 [e the 2nd row, c]
D84D040 676E6168 2C3F6465 C1020300 6F720A03 [hanged?,......ro]
D84D050 77742077 38342D6F 776F4E2C 2D657220 [w two-48,Now re-]
D84D060 722D6572 65722D65 6470752D 20657461 [re-re-re-update ]
D84D070 20656874 20646E32 2C776F72 61686320 [the 2nd row, cha]
...... 46 rows not shown .....
D84D360 0A03C102 20776F72 2D6F7774 542D3833 [....row two-25-T]
D84D370 20736968 74207369 32206568 7220646E [his is the 2nd r]
D84D380 752C776F 74616470 77207365 206C6C69 [ow,updates will ]
D84D390 43206562 696D6D6F 21646574 0203002C [be Commited!,...]
D84D3A0 720703C1 7420776F 542D6F77 20736968 [...row two-This ]
D84D3B0 74207369 32206568 7220646E 752C776F [is the 2nd row,u]
D84D3C0 74616470 77207365 206C6C69 43206562 [pdates will be C]
D84D3D0 696D6D6F 21646574 0203002C 720702C1 [ommited!,......r]
D84D3E0 6F20776F 5415656E 20736968 74207369 [ow one.This is t]
D84D3F0 66206568 74737269 776F7220 61470602 [he first row..Ga]
Block header dump: 0x01800067

It seems the block contains a lot of records, but look at the nrow figures in the block header: the block contains 2 live rows, and everything else is logically deleted, dead… wasted space.


ntab=1
nrow=2
frre=-1
fsbo=0x16
fseo=0x1ba7
avsp=0x1f20
tosp=0x1f22
0xe:pti[0] nrow=2 offs=0
0x12:pri[0] offs=0x1f74
0x14:pri[1] offs=0x1ba7

Yep, this is something you must take care of. Conclusion: you cannot take update operations on variable-length fields lightly.

Remember there is always a trade-off; in this case the update algorithm for variable-length columns is optimized for performance, which is one of Oracle’s virtues and pillars.

It’s desirable to prevent this behavior at the design or programming stage, either by avoiding repeated updates to the same record or by making a weighted decision between CHAR, which means a fixed and constant waste of space, and VARCHAR2 with this corner case.

If a preventive solution is not possible, then Oracle provides ALTER TABLE statements to cope with this and perform table maintenance, removing these “dead rows”; see the sketch below.
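One classic maintenance option, as a sketch (I deliberately leave the SHRINK variants for the quiz below; the index name is illustrative):

alter table table5 move;
-- MOVE rewrites the rows into fresh blocks, discarding the dead row pieces,
-- but it leaves dependent indexes UNUSABLE, so rebuild them afterwards:
alter index table5_ix rebuild;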

Quiz: are ALTER TABLE {table} SHRINK SPACE or ALTER TABLE {table} SHRINK SPACE COMPACT our saving statements?
Which prerequisite must be met in order to use either of them?
Are they interchangeable, or what is the behavioral difference?

Thanks for your time reading today; next time I’ll look at updates on tables having INDEXES…


Starting Oracle Coherence 3.5 book review


Last Friday, Packt Publishing gave me a copy of their newest book, “Oracle Coherence 3.5”, which you can find by clicking the book cover image below.

Coherence is a middleware product from Oracle whose main purpose is to be a high-performance cache in the middle (2nd) tier, improving performance by reducing unnecessary traffic and requests to the database server. In the coming weeks, my comments on this book…


Oracle Exadata/Database Machine… nice to meet you


During the last four or five weeks I’ve had the rare privilege of working with this amazing piece of engineering, the Oracle Database Machine.

Engaged during its deployment, I handled the setup steps on the software side and applied upgrade patches, working side by side with the Oracle/Sun engineers to bring this marvel to life… it’s just exciting, watching all the slightly bluish-green LEDs showing disk activity, or admiring the most precise cabling I’ve ever seen… there is just one word: precision. I remember my first experience with an HP XP12000 SAN, five years ago, not being as exciting as this recent experience.

A few months ago I started my preparation by taking a web seminar on Oracle Exadata; the way it works seemed quite revolutionary to me, and I have to admit that first impression was quickly surpassed once I started working directly with this beauty.

A new paradigm is needed when working with Exadata; it’s clear the tuning role will change, one where (for instance) you may come to consider an index useless or even harmful.

Please stay with me on this exciting journey, I’ll try to share my experience on this as much as I can. Thank you for your attention.


Oracle 11g Streams Implementer’s Guide – Highlights – Part I


I finished reading this very nice book on Oracle Streams… yes, the one I shared with you weeks ago (see the post My first assignment on book reviews). Now, as promised, I’m getting back to you to share my comments on the book, starting with the chapters that clearly set it apart from other Oracle Streams books on the market, the ones that read more like a “rephrasing” of the manual.

Chapters 2 and 3 provide the foundation for success in implementing Streams: the Analysis and Design phase, which is often overlooked… but pays high rewards to those who follow it!!!

In particular, those chapters provide you with a practical, ready-to-use framework; whether you tweak it to fit your needs is your call.

Chapter 8 is packed with a useful list of recipes for change, such as adding a new replication site, detailed with full steps, plus recommendations to keep an eye on when changes are implemented. What I liked most, however, were the troubleshooting cases, very handy BTW, which the authors map against the replication process diagram… that is cool!!!

Wait for the final part next week; this Transaction is not committed yet!


My first assignment on book reviews


During the past week I got a copy of the “Oracle 11g Streams Implementer’s Guide” book from Packt Publishing; by the way, it’s their newest book on Oracle Streams and the only one covering release 11gR2. See more by clicking the book cover image on the left, which will lead you to the book’s page at Packt Publishing, where you may even download a sample chapter (Ch 5, on N-Way Replication) for free!!!

So, in the days to come you may see postings here related to this endeavor… stay tuned!


How to innocently corrupt a data table block


This really happened to me while trying to learn how the INITRANS attribute works; it was very surprising how easily you can reach this transitory error state.

Replicating the error is very straightforward: you just need one table and 4 transactions. It happened on a 10.2.0.4 database.

CREATE TABLESPACE UNIF
DATAFILE '/u01/MASTER/unif.dbf'
SIZE 20M
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 64K
SEGMENT SPACE MANAGEMENT AUTO;

create table scott.t2 (c1 number, c2 char(10)) tablespace unif;

The table is created with the storage defaults; as the block dumps below show, it starts out with 2 ITL slots.
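You can check the declared setting from the dictionary; a minimal sketch (note that DBA_TABLES.INI_TRANS reports the attribute as declared, while the itc field in a block dump shows the ITL slots actually present in that block):

select table_name, ini_trans, max_trans
  from dba_tables
 where owner = 'SCOTT'
   and table_name = 'T2';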

insert into scott.t2 (c1, c2) values (0,'TEST me!');

commit;

delete from scott.t2 where c1 = 0;

commit;

See this block dump, taken after the DELETE/COMMIT pair… that is also something to think about: it seems the data is only logically erased. Please observe the 2 ITL (Interested Transaction List) slots, corresponding to the 2 initial slots noted above.

DFAA3E0 00000000 00000000 00000000 0102012C  [............,...]
DFAA3F0 45540A80 6D205453 20202165 1CE60601 [..TEST me! ....]
Block header dump: 0x01800017
Object id on Block? Y
seg/obj: 0xd580 csc: 0x00.e1c2e itc: 2 flg: E typ: 1 - DATA
brn: 0 bdba: 0x1800011 ver: 0x01 opc: 0
inc: 0 exflg: 0

Itl           Xid                  Uba          Flag  Lck  Scn/Fsc
0x01   0x0001.013.000001ac  0x0080b037.01c8.17  --U-    1  fsc 0x0000.000e1ce6
0x02   0x0000.000.00000000  0x00000000.0000.00  ----    0  fsc 0x0000.00000000

Now let’s change the INITRANS attribute for T2, increasing it to, say, 8.

alter table scott.t2 initrans 8;

Take another block dump and you can see that the new ITL slots are not allocated right away (a detailed explanation of ITL can be found at http://advait.wordpress.com/2008/06/16/interested-transaction-list-itl/, where you can see that this allocation is dynamic).

Now we are going to try something trivial: throw some transactions at this table and see what happens. For that, we need at least 3 SQL*Plus sessions, because ITLs seem to be allocated on a session/block basis (to be confirmed in a later article).

Session 1:

insert into scott.t2 (c1, c2) values (1,'TEST y0u!');
commit;

update scott.t2
set c2 = 'TeST You!'
where c1 = 1;
(no commit for the moment, we want all transactions in pending state)

Session 2:
insert into scott.t2 (c1, c2) values (2,'kiSS him!');
(no commit for the moment, same as above session)

Session 3:
insert into scott.t2 (c1, c2) values (4,'TeaCH DeY!');
(no commit for the moment, same as above session)

What our knowledge and logic tell us is that by now the database should have created a new ITL slot, for a total of 3. However, this data block dump shows us a distinct reality:
1) The database engine didn’t create a new ITL slot
2) It did the 3rd insert on the same block as the other transactions
3) A careful look at the dump shows that the block appears to be “corrupted”… this is the shocking part.

buffer tsn: 7 rdba: 0x01800016 (6/22)
scn: 0x0000.000e1cad seq: 0x01 flg: 0x04 tail: 0x1cad0601
frmt: 0x02 chkval: 0x89f7 type: 0x06=trans data
Hex dump of corrupt header 3 = CHKVAL
Dump of memory from 0x0DA66400 to 0x0DA66414
DA66400 0000A206 01800016 000E1CAD 04010000 [................]
DA66410 000089F7 [....]
Hex dump of corrupt block <<--
Dump of memory from 0x0DA66414 to 0x0DA683FC
DA66410 00000001 0000D580 000E1C2E [............]
DA66420 00000000 00320002 01800011 0001000A [......2.........]
DA66430 000001BC 00800AB7 001302B1 00000001 [................]
DA66440 00000000 00000000 00000000 00000000 [................]
Repeat 1 times
DA66460 00000000 00010100 0014FFFF 1F731F87 [..............s.]
DA66470 00001F73 1F870001 00000000 00000000 [s...............]
DA66480 00000000 00000000 00000000 00000000 [................]
Repeat 501 times
DA683E0 00000000 00000000 2C000000 C1020201 [...........,....]
DA683F0 65540A05 20484361 21596544 [..TeaCH DeY!] -- 3rd Trx
buffer tsn: 7 rdba: 0x01800017 (6/23)
scn: 0x0000.000e219d seq: 0x01 flg: 0x00 tail: 0x219d0601
frmt: 0x02 chkval: 0x0000 type: 0x06=trans data
Hex dump of block: st=0, typ_found=1
Dump of memory from 0x0DA66400 to 0x0DA68400
DA66400 0000A206 01800017 000E219D 00010000 [.........!......]
DA66410 00000000 00000001 0000D580 000E2163 [............c!..]
DA66420 00000000 00320002 01800011 000E0002 [......2.........]
DA66430 000001E5 0080001E 0009028F 00000001 [................]
DA66440 00000000 00000004 000001B0 008017B6 [................]
DA66450 000701D2 00000001 00000000 00000000 [................]
DA66460 00000000 00030100 00180000 1F5E1F66 [............f.^.]
DA66470 00001F5E FFFF0003 1F661F77 00000000 [^.......w.f.....]
DA66480 00000000 00000000 00000000 00000000 [................]
Repeat 499 times
DA683C0 00000000 00000000 022C0000 03C10202 [..........,.....]
DA683D0 53696B0A 69682053 2C20216D C1020201 [.kiSS him! ,....] -- 2nd Transaction
DA683E0 65540A02 59205453 2021756F 0102023C [..TeST You! <...] -- 1st Transaction
DA683F0 45540A80 6D205453 20202165 219D0601 [..TEST me! ...!] -- dead row
Block header dump: 0x01800017
Object id on Block? Y
seg/obj: 0xd580 csc: 0x00.e2163 itc: 2 flg: E typ: 1 - DATA
brn: 0 bdba: 0x1800011 ver: 0x01 opc: 0
inc: 0 exflg: 0

Itl           Xid                  Uba          Flag  Lck  Scn/Fsc
0x01   0x0002.00e.000001e5  0x0080001e.028f.09  ----    1  fsc 0x0000.00000000
0x02   0x0004.000.000001b0  0x008017b6.01d2.07  ----    1  fsc 0x0000.00000000

data_block_dump,data header at 0xda66464
===============

At this point we may ask ourselves: “Where is the RDBMS keeping track of the ITL for the last transaction?”… we need to find that answer.

This needed confirmation, which I tried to get with DBMS_REPAIR.CHECK_OBJECT; however, everything looked just fine.
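Note that CHECK_OBJECT writes its findings into a repair table that must exist beforehand; a minimal sketch of creating one with the name used below (DBMS_REPAIR.ADMIN_TABLES is the supplied procedure for this):

begin
  dbms_repair.admin_tables(
    table_name => 'REPAIR_TABLEAU',
    table_type => dbms_repair.repair_table,
    action     => dbms_repair.create_action);
end;
/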

set serveroutput on

declare
  v_corrupted_blocks number;
begin
  dbms_repair.check_object('SCOTT', 'T2',
      object_type       => dbms_repair.table_object,
      repair_table_name => 'REPAIR_TABLEAU',
      corrupt_count     => v_corrupted_blocks);
  dbms_output.put_line('Corrupted Blocks = '||v_corrupted_blocks);
end;
/

Corrupted Blocks = 0

PL/SQL procedure successfully completed.

Confused, I tried a second block dump of the same table, and surprisingly this time no corruption was present; still no new ITL slot was created, but instead the database used another block for the last transaction.

buffer tsn: 7 rdba: 0x01800016 (6/22)
scn: 0x0000.000e222b seq: 0x01 flg: 0x04 tail: 0x222b0601
frmt: 0x02 chkval: 0x3930 type: 0x06=trans data
Hex dump of block: st=0, typ_found=1
Dump of memory from 0x0D26B400 to 0x0D26D400
...
D26B480 00000000 00000000 00000000 00000000 [................]
Repeat 501 times
D26D3E0 00000000 00000000 2C000000 C1020201 [...........,....]
D26D3F0 65540A05 20484361 21596544 222B0601 [..TeaCH DeY!..+"]
Block header dump: 0x01800016
Object id on Block? Y
seg/obj: 0xd580 csc: 0x00.e1c2e itc: 2 flg: E typ: 1 - DATA
brn: 0 bdba: 0x1800011 ver: 0x01 opc: 0
inc: 0 exflg: 0

Itl           Xid                  Uba          Flag  Lck  Scn/Fsc
0x01   0x000a.001.000001bc  0x00800ab7.02b1.13  ----    1  fsc 0x0000.00000000
0x02   0x0000.000.00000000  0x00000000.0000.00  ----    0  fsc 0x0000.00000000

data_block_dump,data header at 0xd26b464
===============

buffer tsn: 7 rdba: 0x01800017 (6/23)
scn: 0x0000.000e219d seq: 0x01 flg: 0x04 tail: 0x219d0601
frmt: 0x02 chkval: 0x5eaa type: 0x06=trans data
Hex dump of block: st=0, typ_found=1
Dump of memory from 0x0D26B400 to 0x0D26D400
...
D26B480 00000000 00000000 00000000 00000000 [................]
Repeat 499 times
D26D3C0 00000000 00000000 022C0000 03C10202 [..........,.....]
D26D3D0 53696B0A 69682053 2C20216D C1020201 [.kiSS him! ,....]
D26D3E0 65540A02 59205453 2021756F 0102023C [..TeST You! <...]
D26D3F0 45540A80 6D205453 20202165 219D0601 [..TEST me! ...!]
Block header dump: 0x01800017
Object id on Block? Y
seg/obj: 0xd580 csc: 0x00.e2163 itc: 2 flg: E typ: 1 - DATA
brn: 0 bdba: 0x1800011 ver: 0x01 opc: 0
inc: 0 exflg: 0

Itl           Xid                  Uba          Flag  Lck  Scn/Fsc
0x01   0x0002.00e.000001e5  0x0080001e.028f.09  ----    1  fsc 0x0000.00000000
0x02   0x0004.000.000001b0  0x008017b6.01d2.07  ----    1  fsc 0x0000.00000000

data_block_dump,data header at 0xd26b464
===============

Try to replicate this in your environments and share your release and behavior with us. Note: I tried this on 11gR2 and at least no block corruption showed up.

Thank you for your time, and see you next time.


Latest storage Adventure


Data mart/data warehouse performance issues can be scary events, taking into account the massive amounts of data they handle; the tuning task is even more daunting if they’re used for operational day-to-day decision making.

Over the last weeks and days, I’ve been involved with a 15 Tb data mart with performance issues on data loading, not querying. Tablespaces for indexes and data share filesystems; nowadays this is not a concern anymore, however I prefer to encourage customers to allocate independent storage for each type, so that they can discriminate the source of IO.

They have concurrent schedules for data loading processes and a parallelism degree set on the tables and indexes involved; a very good measure to speed up data loading is the /*+ APPEND */ hint on INSERT … AS SELECT, which they’re also using with PARALLEL. Parallelism has a multiplying factor, and if you’ve experienced its downsides you may already suspect where the wall is going to be hit: at the storage level.
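To make the pattern concrete, a minimal sketch of such a direct-path parallel load (the table names and degree of 8 are illustrative, not the customer’s actual objects):

alter session enable parallel dml;

insert /*+ append parallel(t, 8) */ into fact_sales t
select /*+ parallel(s, 8) */ *
  from stage_sales s;

commit;  -- rows loaded via direct path are not visible until the commit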

I generated a heat map for the filesystems and, focusing on write statistics, found the expected correlation: very high average write times on filesystems starting right at the time the data loading processes were scheduled… huge average times, on the order of seconds!

After a quick chat with the sysadmin and storage guys, something very important arose: the striping factor for all those overheated filesystems was 2; in other words, “storage you have, performance… what’s that?” This is a common mistake, and it results from teams working in near isolation, which is when I say: “Guys, you need to talk to each other, learn the jargon and the workings of the ‘machines’ the other teams deal with… even the inner workings, why not?!”

After identifying the set of objects that participated most heavily in the filesystem overheating, brand new filesystems with a striping factor of 8 were allocated and new tablespaces created to receive these objects; a sketch of the relocation follows. Even the maintenance task of moving all the tables/partitions/subpartitions was a breeze, and timings for the loading processes improved dramatically, even on high-load days… all this for a subtle “configuration number” in the OS storage options.
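The relocation itself boils down to statements like these, a sketch with illustrative names (remember that MOVE marks dependent indexes UNUSABLE):

alter table fact_sales move tablespace data_stripe8;
alter table fact_sales move partition p_200910 tablespace data_stripe8;
alter index fact_sales_ix rebuild tablespace data_stripe8;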


Resuming and the search for the 4Mb lost


It’s been a while that this place has been quiet; not anymore. I changed jobs in February and went through a stabilization process, a very long one if you like, or at least for me.

I missed very much facing the keyboard and sharing something with you; starting today I’m committing to posting a thorough update at least every 2 weeks.

Have you ever been through the Oracle server startup process, step by step? Here is an exercise for you:

As you may know, after starting the database in NOMOUNT state we already have an instance: background processes and memory. If you don’t believe me, do the test; it’s best if you try this in a Unix/Linux environment.

1) Issue startup nomount at the SQL*Plus prompt.
2) In another OS shell, issue ps -ef | grep ora_ and you’ll see all the background processes started.
3) Now issue ipcs -m and you’ll see all the shared memory segments allocated to the Oracle server.

Take note of the amount of memory allocated to Oracle and write the figure down, we’ll need it; a quick way to total it is sketched below.
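A minimal sketch for adding up the segment sizes on Linux (an assumption: on other platforms ipcs -m may print the size in a column other than the 5th):

ipcs -m | awk '/oracle/ {total += $5} END {print total, "bytes"}'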

Now go all the way and manually complete the remaining steps of database startup; issue the following commands in SQL*Plus:
1) alter database mount
2) alter database open

Now you have all the memory allocated to the Oracle server, and the internally reported set of figures provides a grand total labeled “Total System Global Area”. If you convert both the memory reported by ipcs and this figure to bytes, you’ll see that 4Mb are missing.
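The instance-side figure can be taken from V$SGA; a minimal sketch (the SQL*Plus command show sga prints the same grand total):

select name, value from v$sga;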

Does the same happen on your databases? Does somebody have the answer for this result?
I did this test under 10.2.0.4 on Linux; this behavior may not be present on other releases and/or platforms. Please share your results with us in a comment.

I want to thank you for giving me your attention; hope to see you real soon.


Index Dynamics – Part 3


See previous post Index Dynamics – Part II or first part Index Dynamics – Part I

In this part, I’m going to share with you the final results of my 2-week experiment with indexes in their natural habitat: a production environment.

I have to forewarn that some data was not available, as the ANALYZE method fails for objects currently locked; our environment is a “live” production database, and for consistency we scheduled all tests at a fixed time, which on some days collided with business processes. The Used Space graph shows those NA data points; for INDEX1 and INDEX4 the Nov 7th Delta data was interpolated. The only index for which we have all samples is INDEX2, coincidentally our research subject. The measurement method is sketched below.
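For reference, the kind of measurement behind these figures, as a minimal sketch (the index name is illustrative; ANALYZE … VALIDATE STRUCTURE locks the object, which is exactly why some samples were lost):

analyze index index2 validate structure;

select pct_used, lf_rows, del_lf_rows, br_blks
  from index_stats;  -- INDEX_STATS holds one row: the last index validated in this session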

First, let’s see the Used Space graph; here you can see how the Used Space percentage continues its decaying trend, which means the free space within the index nodes is increasing.


Compare the current Used Space figures to those sampled before the rebuild, and observe that after 15 days some indexes are midway to having the same amount of free space. That recalls the frequent doubts regarding index maintenance: “Must we rebuild indexes? How long will the structure’s ‘optimal’ state last?”

Used Space    First    Now
INDEX1(10c)    69%     99%
INDEX4(6c)     63%     74%
INDEX2(4c)     53%     77%
INDEX3(2c)     45%     92%

Given the partial conclusion stated in the last post (Index Dynamics – Part II),
“The branch blocks are the stressed part of the B-Tree right after the rebuild”,
we might expect less activity on the branch blocks during the following days, which is actually what occurred.


One remark: observe the Nov 6th high for INDEX1 against its previous behavior; on that day the transaction rate drove a higher reorganization within the index, showing the impact business patterns have on data structures.

I have to mention that my production environment went through a period-close week, so the increased activity and closing processes made their mark on the observations. An example of this is the resulting graphs for Leaf Blocks, with noticeable inter-day spikes.

Let’s pay a visit to our research subject, INDEX2. This chart is in line with everything previously stated; no surprises here.


This study motivated the following questions (that’s the problem with research: you answer one question and many new ones take its place):
1) How does PCT_FREE (or PCT_USED) impact the decay curve?
2) Are the splitting constants embedded in the code optimal figures?
3) Do we need a feature to change PCT_FREE after a rebuild has been done?

Conclusions

* High stress within the branch blocks may be observed as soon as transactions start to modify the index structure; the stress period and intensity will depend on the transaction rate and the index attributes.

Recommendation

* Do not rebuild indexes right before batch processes or high-load days if heavy writing is expected: the overhead of index leaf block splits may impact performance.

I have the pending assignment of proposing a mathematical model for the index decay rate… I’m working on that, with the help of calculus. My next delivery will talk about that; I’m pretty close…

Thank you for reading, don’t forget to leave your comments.

See previous post Index Dynamics – Part II or first part Index Dynamics – Part I

