Monday, October 26, 2009

PLSQL_OPTIMIZE_LEVEL

I came to Austin for an interview and I am on my way to New Jersey. I am hanging around the Austin airport with another three hours before my flight departs, so I thought I would write about one of the new Oracle 10g optimization parameters, PLSQL_OPTIMIZE_LEVEL.

This parameter determines the optimization level used to compile PL/SQL code. The higher the setting, the more effort Oracle spends compiling the code. Among other things, the optimizer eliminates dead code and moves code out of a loop when it does the same thing on every iteration. The parameter has three valid values: 0, 1 and 2. The default is 2.

Let us discuss each value of this parameter. Please note that Oracle has not published a detailed example of what each value does, so I cannot demonstrate exactly what Oracle does at each level. We can, however, see the performance improvement at each level.

PLSQL_OPTIMIZE_LEVEL = 0 The value 0 works somewhat like the pre-10g releases. The Oracle documentation says it still performs better than 9i. Let me write a procedure and run it in Oracle 10g with the value 0.

Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - Production
With the Partitioning, OLAP and Data Mining options

SQL> alter session set plsql_optimize_level =0;

Session altered.

SQL> set serveroutput on
SQL> create or replace procedure test as
2 a integer;
3 b integer;
4 c integer;
5 d integer;
6 v_time integer;
7 begin
8 v_time := Dbms_Utility.GET_CPU_TIME();
9 for j in 1..10000000 loop
10 a:= 100;
11 b:= null;
12 c:= nvl(b,1)+a;
13 end loop;
14 Dbms_Output.Put_Line(Dbms_Utility.GET_CPU_TIME()-v_time);
15 end;
16 /

Procedure created.

SQL> execute test;
770

PL/SQL procedure successfully completed.

The above procedure runs in 770 centiseconds, i.e. about 7.7 seconds of CPU time (DBMS_UTILITY.GET_CPU_TIME reports hundredths of a second), with plsql_optimize_level=0.

PLSQL_OPTIMIZE_LEVEL = 1 This level eliminates unnecessary computations and exceptions. Since Oracle has not given an example for each value, I would guess it removes the statement b := NULL in the TEST procedure; it makes no sense to assign NULL on every iteration of the loop.

SQL> alter procedure test compile plsql_optimize_level = 1;

Procedure altered.

SQL> execute test;
502

PL/SQL procedure successfully completed.

The above procedure executes in 502 centiseconds, about 5.02 seconds of CPU time, which is better than plsql_optimize_level=0.

PLSQL_OPTIMIZE_LEVEL = 2 This level applies more aggressive optimizations, including ones that may move code relatively far from its original location. I would guess it moves the constant assignments out of the loop, since assigning the same value on every iteration is not meaningful. Be aware that Oracle can sometimes take longer to compile a unit at level 2, because the code is rewritten at compilation time rather than at execution time.

SQL> alter procedure test compile plsql_optimize_level = 2;

Procedure altered.

SQL> execute test;
301

PL/SQL procedure successfully completed.

SQL>

The above procedure runs in 301 centiseconds, about 3.01 seconds of CPU time, and the performance is far better than with the value 1.
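We can verify which level a stored unit was last compiled with by querying the user_plsql_object_settings view (available since 10g; shown here for the TEST procedure above):

SELECT name, type, plsql_optimize_level
FROM   user_plsql_object_settings
WHERE  name = 'TEST';

This is handy after an ALTER ... COMPILE to confirm the new setting actually took effect.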

Monday, October 19, 2009

How to use histogram in Oracle

I would like to write about Oracle histograms today. A histogram is a very nice feature that helps the cost based optimizer make the right decision.

What is a histogram? Histograms are a CBO feature that tells the optimizer how the data is skewed (distributed) within a column. A histogram is worth creating on a highly skewed column that appears in WHERE clauses. It helps the optimizer decide whether to use an index or a full table scan, and helps it determine the fastest table join order.

What are the advantages of histograms? Histograms are useful in two places.

1. Histograms help the optimizer choose the right access method for a table.

2. They also help the optimizer decide the correct table join order. When we join multiple tables, histograms help minimize the size of the intermediate result sets, and smaller intermediate result sets improve performance.

Types of histograms: Oracle uses two types of histograms for column statistics: height-balanced histograms and frequency histograms.

1. Height-balanced histograms: The column values are divided into bands so that each band contains approximately the same number of rows. For instance, if we have 10 distinct values in the column and only five buckets, Oracle creates a height-balanced histogram and spreads the values evenly through the buckets. A height-balanced histogram arises when there are more distinct values than buckets, and its statistics show a range of rows across the buckets.

2. Frequency histograms: Each value of the column corresponds to a single bucket of the histogram; this is also called a value-based histogram. Each bucket contains the number of occurrences of that single value. Frequency histograms are automatically created instead of height-balanced histograms when the number of distinct values is less than or equal to the number of histogram buckets specified.

Method_opt parameter: This parameter controls histogram creation while collecting statistics; an example follows the value descriptions below. The default is FOR ALL COLUMNS SIZE AUTO in Oracle 10g. In Oracle 9i the default is FOR ALL COLUMNS SIZE 1, which turns off histogram collection.

FOR ALL [INDEXED | HIDDEN] COLUMNS [size_clause]
FOR COLUMNS [size_clause] column|attribute [size_clause] [,column|attribute [size_clause]...]

size_clause is defined as size_clause := SIZE {integer | REPEAT | AUTO | SKEWONLY}

integer : Number of histogram buckets. Must be in the range [1,254]

REPEAT : Collects histograms only on the columns that already have histograms.

AUTO : Oracle determines the columns on which to collect histograms based on the data distribution and the workload of the columns. The table sys.col_usage$ stores information about column usage, and dbms_stats uses this information to decide whether a histogram is required for a column.

SKEWONLY : Oracle determines the columns on which to collect histograms based only on the data distribution of the columns.
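As an illustration of the last two options, the calls below gather statistics on the demo table used later in this post (scott.emp); exactly which columns Oracle picks will depend on your data and workload:

-- Histograms only where the data distribution is skewed
EXEC DBMS_STATS.GATHER_TABLE_STATS(OWNNAME => 'SCOTT', TABNAME => 'EMP', METHOD_OPT => 'FOR ALL COLUMNS SIZE SKEWONLY');

-- Histograms where both the distribution and the recorded column usage justify them
EXEC DBMS_STATS.GATHER_TABLE_STATS(OWNNAME => 'SCOTT', TABNAME => 'EMP', METHOD_OPT => 'FOR ALL COLUMNS SIZE AUTO');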

Let me demonstrate how the optimizer works with and without a histogram in the two scenarios below. We take the emp table for this demonstration. The table has around 3.6 million records, and its emp_status column is highly skewed: it has two distinct values (Y, N). We have a bitmap index on the emp_status column.

Scenario 1: Let us gather statistics without any histogram and see what kind of execution path the optimizer uses. Without a histogram, Oracle assumes the data is evenly distributed, so the optimizer thinks we have around 1.8 million records with emp_status Y and around another 1.8 million with emp_status N.

Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - Production
With the Partitioning, OLAP and Data Mining options

SQL> select count(*),emp_status from scott.emp
2 group by emp_status;

COUNT(*) E
---------- -
1 N
3670016 Y

SQL> execute DBMS_STATS.GATHER_TABLE_STATS(OWNNAME => 'SCOTT', TABNAME => 'EMP',ESTIMATE_PERCENT =>
10, METHOD_OPT => 'FOR ALL COLUMNS SIZE 1',CASCADE => TRUE);

PL/SQL procedure successfully completed.

SQL> select ename from scott.emp where emp_status='Y';

3670016 rows selected.

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      | 1832K |   15M |  5374   (5)| 00:01:05 |
|*  1 |  TABLE ACCESS FULL| EMP  | 1832K |   15M |  5374   (5)| 00:01:05 |
--------------------------------------------------------------------------

SQL> select ename from scott.emp where emp_status='N';

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      | 1832K |   15M |  5374   (5)| 00:01:05 |
|*  1 |  TABLE ACCESS FULL| EMP  | 1832K |   15M |  5374   (5)| 00:01:05 |
--------------------------------------------------------------------------

Conclusion: The optimizer uses a full table scan for the query which returns 3670016 records, and it also uses a full table scan for the query which returns just one record. The latter is obviously incorrect. This problem is resolved by collecting a histogram; let us see that in the next scenario.

Scenario 2: Let us gather statistics with a histogram and see what execution path the optimizer uses. FOR COLUMNS SIZE 2 EMP_STATUS creates two buckets for the column emp_status. If we are not sure of the number of distinct values in the column, we can use the AUTO option to collect the histogram. With this histogram the optimizer knows the emp_status column is highly skewed: one bucket has around 3.6 million records with emp_status Y and the other has only one record with emp_status N. Now, depending on the query, the optimizer decides whether to use the index or a full table scan.

SQL> execute DBMS_STATS.GATHER_TABLE_STATS(OWNNAME => 'SCOTT', TABNAME => 'EMP',ESTIMATE_PERCENT =>
10, METHOD_OPT => 'FOR COLUMNS SIZE 2 EMP_STATUS',CASCADE => TRUE);

PL/SQL procedure successfully completed.

SQL> select ename from scott.emp where emp_status='Y';

3670016 rows selected.

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      | 3681K |   31M |  5375   (5)| 00:01:05 |
|*  1 |  TABLE ACCESS FULL| EMP  | 3681K |   31M |  5375   (5)| 00:01:05 |
--------------------------------------------------------------------------

SQL> select ename from scott.emp where emp_status='N';

---------------------------------------------------------------------------------------
| Id  | Operation                     | Name    | Rows | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |         |    1 |     9 |     1   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID  | EMP     |    1 |     9 |     1   (0)| 00:00:01 |
|   2 |   BITMAP CONVERSION TO ROWIDS |         |      |       |            |          |
|*  3 |    BITMAP INDEX SINGLE VALUE  | IDX_EMP |      |       |            |          |
---------------------------------------------------------------------------------------

Conclusion: The optimizer uses a full table scan for the query which returns 3670016 records, while using an index scan for the other query which returns one record. In this scenario the optimizer chooses the right execution plan based on the query's WHERE clause.

Data dictionary views for histograms (a sample query follows the list):
user_histograms
user_part_histograms
user_subpart_histograms
user_tab_histograms
user_tab_col_statistics
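To inspect the buckets that were actually created in scenario 2 above, we can query these views directly; for example:

-- Bucket endpoints for the demo column
SELECT column_name, endpoint_number, endpoint_value
FROM   user_tab_histograms
WHERE  table_name  = 'EMP'
AND    column_name = 'EMP_STATUS'
ORDER  BY endpoint_number;

-- Histogram type per column (NONE, FREQUENCY or HEIGHT BALANCED)
SELECT column_name, num_distinct, num_buckets, histogram
FROM   user_tab_col_statistics
WHERE  table_name = 'EMP';

Since emp_status has only two distinct values and we asked for two buckets, we would expect a FREQUENCY histogram here.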

Thursday, October 8, 2009

Transferring statistics between database

In general, a development DB usually has only a portion of the data compared to the production database. In such a scenario, when we fix a production issue, we make the change in the dev DB, test the code, and move it to the prod DB. While testing the code in the dev DB, if we want to compare execution plans between dev and prod, we can copy the prod DB statistics into the dev DB and forecast the optimizer behaviour on the development server.

DBMS_STATS has the ability to transfer statistics between servers, allowing consistent execution plans between servers with varying amounts of data. This article is tested on Oracle 10g. Here are the steps to transfer the statistics.

Source database : orcl
Source schema : sales
Target database : oradev
Target schema : sales


Now our goal is to copy the statistics from sales@orcl to sales@oradev.

Let us follow the steps below to copy the statistics from source (production) to target (development). I am running all the steps in the SYSTEM schema.

Step 1. First create a statistics table in the source database. The statistics table is created in the SYSTEM schema.

Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - Production
With the Partitioning, OLAP and Data Mining options

SQL> connect system/password@orcl
Connected.
SQL> EXEC DBMS_STATS.create_stat_table('SYSTEM','STATS_TABLE');

PL/SQL procedure successfully completed.

SQL>

Step 2. Export the sales schema statistics into it.

SQL> EXEC DBMS_STATS.export_schema_stats('SALES','STATS_TABLE',NULL,'SYSTEM');

PL/SQL procedure successfully completed.

SQL>

Step 3. Export the STATS_TABLE with the expdp or exp utility and move the dump file to the target (oradev) server.

Step 4. Import the dump file into the target database with the impdp or imp utility. Here I imported the dump file into the SYSTEM schema on the target server.
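For steps 3 and 4, the commands look roughly like the sketch below (the directory object, dump file name and passwords are placeholders; adjust them for your environment):

-- On the source server: export the statistics table from orcl
expdp system/password@orcl tables=STATS_TABLE directory=DATA_PUMP_DIR dumpfile=stats_table.dmp

-- Copy stats_table.dmp to the target server, then import it into oradev
impdp system/password@oradev tables=STATS_TABLE directory=DATA_PUMP_DIR dumpfile=stats_table.dmp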

Step 5. Import the statistics into the application schema (sales@oradev). Remember, in the previous step we imported the stats_table contents into the SYSTEM schema using impdp. In this step we import the statistics into the relevant data dictionary tables using the dbms_stats package.

SQL> EXEC DBMS_STATS.import_schema_stats('SALES','STATS_TABLE',NULL,'SYSTEM');

PL/SQL procedure successfully completed.

SQL>

Step 6. Drop the stats_table on the target server once the import is complete.

SQL> EXEC DBMS_STATS.drop_stat_table('SYSTEM','STATS_TABLE');

PL/SQL procedure successfully completed.

SQL>

Note: We can follow steps 1 and 2 to back up the statistics before we gather new statistics. It is always good to back up the statistics before we overwrite them; if we see any performance problem with the new statistics, we can import the old ones back. This option is also very useful for transferring statistics from one DB to another.

In Oracle 10g, the database automatically saves the last 31 days of statistics by default, and we can restore past statistics within the database at any time. That option is useful for restoring statistics within the same database; please refer to Restoring statistics.

Wednesday, October 7, 2009

Refreshing Stale Statistics

The Oracle optimizer uses statistics to choose the right path and execute queries efficiently, so it is important to keep the statistics recent. Oracle highly recommends using DBMS_STATS to gather statistics. Why does Oracle recommend DBMS_STATS? Click here for the answer. This article is based on Oracle 10g.

The DBMS_STATS package has a wonderful feature that is capable of analyzing only stale statistics. I am going to discuss collecting stale statistics with the dbms_stats package.

In general, gathering statistics consumes a lot of resources and CPU time. Once we have gathered statistics on a table, we do not need to collect them again until a reasonable amount of data has changed. Let us say we have a schema called sales with lots of tables, many holding huge numbers of records, and we schedule an analyze of the entire schema every day at 2AM. In day-to-day DML activity, some of the tables have no changes or very minimal changes, so we do not need to analyze those tables. But the scheduler automatically starts analyzing all the tables in the schema at 2AM every day. This process unnecessarily consumes extra resources and degrades server performance.

Can we stop analyzing tables when there is no DML activity or only very minimal DML activity? Yes, we can. Oracle introduced a feature in the DBMS_STATS package that collects statistics at the schema or database level only when the statistics are stale or out of date.

What are stale statistics? Oracle records an approximate count of the rows that have been inserted, updated, and deleted in a table. The information is recorded in the user_tab_modifications view. When that count reaches a threshold percentage of the number of rows in the table, the statistics are considered stale. Table monitoring must be enabled for the DML changes to be recorded in user_tab_modifications. In Oracle 10g, Oracle enables table monitoring automatically; prior to 10g we need to enable it manually.

How do we enable table monitoring? In Oracle 10g, table monitoring is on by default when the STATISTICS_LEVEL parameter is TYPICAL. Prior to Oracle 10g we had to enable it manually with the command below; since Oracle 10g the command has no effect.

ALTER TABLE table_name {MONITORING | NOMONITORING}

What is the threshold percentage? Oracle determines the threshold automatically. Oracle doesn't officially document it, so the threshold, and the entire algorithm, is subject to change over time.
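We can look at the tracked DML counts ourselves. The monitoring data is flushed from memory to the dictionary only periodically, so calling DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO first makes the view current; a quick check looks like this:

EXEC DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO;

SELECT table_name, inserts, updates, deletes, truncated, timestamp
FROM   user_tab_modifications
ORDER  BY table_name;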

Let me give an example how to analyze stale statistics :

Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - Production
With the Partitioning, OLAP and Data Mining options

SQL> BEGIN
2 DBMS_STATS.GATHER_SCHEMA_STATS (
3 ownname => 'SALES',
4 estimate_percent => 20,
5 block_sample => TRUE,
6 method_opt => 'FOR COLUMNS SIZE 10',
7 options => 'GATHER AUTO',
8 cascade => TRUE);
9 END;
10 /

PL/SQL procedure successfully completed.

SQL>

Note: It is very important that we use GATHER AUTO or GATHER STALE to analyze stale statistics, and table monitoring is mandatory. Since Oracle 10g, table monitoring is enabled by default, so we do not need to worry about it.

This feature is very useful for large and complex databases where refreshing statistics for all objects can cause a heavy drain on server resources.

Thursday, October 1, 2009

Analyze Versus DBMS_STATS

The cost based optimizer is the preferred optimizer mode in Oracle, and to make good use of the CBO you need accurate statistics. Prior to Oracle 8i, we used the ANALYZE command to gather statistics.

The DBMS_STATS package was introduced in Oracle 8i, and since then Oracle has highly recommended using DBMS_STATS instead of the ANALYZE command. This article is written against Oracle 10g. I am going to address the topics below in this thread.

1. Why does Oracle recommend the DBMS_STATS package?
2. What are the advantages of DBMS_STATS compared to ANALYZE?
3. How do we use the DBMS_STATS package to analyze a table?
4. What are the new features of DBMS_STATS in each version?

Why does Oracle recommend DBMS_STATS since Oracle 8i?

1. Gathering statistics can be done in parallel. This option is not available with the ANALYZE command.

2. It can collect stale statistics. I discussed collecting stale statistics in another topic; please refer to stale statistics to learn more.

3. DBMS_STATS is a PL/SQL package, so it is easy to call from code. ANALYZE is not.

4. It can collect statistics for external tables. ANALYZE cannot.

5. DBMS_STATS can collect system statistics. ANALYZE cannot.

6. Sometimes ANALYZE does not produce accurate statistics, but DBMS_STATS does.

7. We cannot use the ANALYZE command to gather statistics at the partition or subpartition level, but we can use DBMS_STATS to analyze any specific partition or subpartition. This is especially useful for partitioned tables: we do not need to analyze the historical data whenever we refresh the current partition.

8. We can transfer statistics from one DB to another when the statistics were collected through DBMS_STATS; this cannot be done when we use the ANALYZE command. Please refer to statistics transfer to learn more about transferring statistics.

What would force you to use the ANALYZE command in any Oracle version? ANALYZE can collect statistics like CHAIN_CNT, AVG_SPACE, and EMPTY_BLOCKS, which DBMS_STATS does not collect. We might need ANALYZE if we want to see chained rows, average free space, or empty blocks.
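For example, the pair of statements below populates and then displays those three columns (using scott.emp as a stand-in table; run the query from the scott schema):

ANALYZE TABLE emp COMPUTE STATISTICS;

SELECT chain_cnt, avg_space, empty_blocks
FROM   user_tables
WHERE  table_name = 'EMP';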

Several parameters exist for collecting statistics at the table, schema, database and system level. I do not want to explain all the parameters, which are already in the Oracle documentation, but I would like to explain some of them.

estimate_percent: Percentage of rows or blocks to sample. The valid range is 0.000001 to 100. When we pass NULL for this parameter, statistics are computed; compute is the same as a 100% sample. For instance, if we pass 20, Oracle samples roughly 20% of the rows or 20% of the blocks, depending on the BLOCK_SAMPLE parameter. This parameter applies at the table, index, schema and database level.

block_sample: Determines whether to use random block sampling instead of random row sampling. Block sampling is slightly less accurate when rows have roughly the same lifecycle and values are thus spread non-uniformly throughout the table. If you want to dig deeper into this, David Aldridge has a nice article on block sampling. This parameter applies at the table, schema and database level.

method_opt: Controls histograms on the table. It determines which columns get a histogram and how many buckets are created for each. This parameter applies at the table, schema and database level. Please refer to histogram to learn more about histograms in Oracle.

granularity: This parameter is useful when you want to gather statistics on a specific partition or subpartition of a table. The valid values are ALL, AUTO, GLOBAL, GLOBAL AND PARTITION, PARTITION and SUBPARTITION. It applies at the table, index, schema and database level, and only when the table or index is partitioned.

no_invalidate: If set to TRUE, dependent cursors and currently parsed SQL statements are not invalidated; if set to FALSE, dependent cursors are invalidated immediately. Use DBMS_STATS.AUTO_INVALIDATE to let Oracle decide when to invalidate dependent cursors. This parameter applies when analyzing or deleting statistics at the table, index, schema and database level.

Degree: Degree of parallelism. It has three valid values.

NULL : Oracle takes the value which is specified in degree clause of create or alter table statement.

DBMS_STATS.DEFAULT_DEGREE : It takes the value based on number of CPU's and init parameters.

DBMS_STATS.AUTO_DEGREE : It determines the value automatically. It is either 1 or default degree according to the size of the object.

Options: This parameter is used only when analyzing at the schema or DB level. The valid values are GATHER, GATHER AUTO, GATHER STALE, GATHER EMPTY, LIST AUTO, LIST STALE and LIST EMPTY. Let me explain these values briefly, since they are important for gathering statistics at the schema level; a short LIST STALE sketch follows the descriptions.

GATHER - Gathers statistics for all the objects in the schema or database.

GATHER AUTO - Gathers statistics when they are stale or missing; it combines GATHER STALE and GATHER EMPTY.

GATHER STALE - Gathers statistics only when they are stale; it does not collect for objects that have no statistics at all.

GATHER EMPTY - Gathers statistics only for objects that have no statistics.

LIST AUTO - Returns the list of objects that GATHER AUTO would process.

LIST STALE - Returns the list of stale objects, as determined by looking at user_tab_modifications.

LIST EMPTY - Returns the list of objects which currently have no statistics.
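Here is a minimal sketch of the LIST STALE option (the SALES schema is a placeholder; ObjectTab and ObjectElem are collection types declared inside DBMS_STATS):

SET SERVEROUTPUT ON

DECLARE
  l_objects DBMS_STATS.ObjectTab;  -- filled by the call below; nothing is gathered
BEGIN
  DBMS_STATS.GATHER_SCHEMA_STATS(
    ownname => 'SALES',
    options => 'LIST STALE',
    objlist => l_objects);
  FOR i IN 1 .. l_objects.COUNT LOOP
    DBMS_OUTPUT.PUT_LINE(l_objects(i).objtype || ' ' || l_objects(i).objname);
  END LOOP;
END;
/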

Gathering_mode: This parameter is used only for gathering system statistics. The valid modes are NOWORKLOAD, INTERVAL, START and STOP; the default is NOWORKLOAD. START and STOP begin and end the collection of workload system statistics.

Example for collecting statistics on table:

BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    OWNNAME          => 'TANDEB',
    TABNAME          => 'CUSTOMER',
    PARTNAME         => 'PART092009',
    GRANULARITY      => 'PARTITION',
    ESTIMATE_PERCENT => 10,
    METHOD_OPT       => 'FOR ALL COLUMNS SIZE 1',
    CASCADE          => TRUE,
    NO_INVALIDATE    => TRUE);
END;
/

Example for collecting statistics on Schema:

BEGIN
  DBMS_STATS.GATHER_SCHEMA_STATS(
    OWNNAME          => 'SCOMPPRD',
    ESTIMATE_PERCENT => 10,
    METHOD_OPT       => 'FOR ALL COLUMNS SIZE 1',
    OPTIONS          => 'GATHER',
    CASCADE          => TRUE,
    NO_INVALIDATE    => TRUE);
END;
/


Example for collecting system statistics:

BEGIN
  DBMS_STATS.GATHER_SYSTEM_STATS(
    GATHERING_MODE => 'INTERVAL',
    INTERVAL       => 10);
END;
/

Example for collecting database statistics:

BEGIN
  DBMS_STATS.GATHER_DATABASE_STATS(
    ESTIMATE_PERCENT => 10,
    METHOD_OPT       => 'FOR ALL COLUMNS SIZE 1',
    CASCADE          => TRUE,
    NO_INVALIDATE    => TRUE,
    GATHER_SYS       => FALSE);
END;
/


New feature in Oracle9i:

1. Introduced the ability to gather system statistics, such as I/O and CPU utilization.

2. It can direct the database to select the appropriate sample size to generate accurate statistics: the new value DBMS_STATS.AUTO_SAMPLE_SIZE for the ESTIMATE_PERCENT parameter lets Oracle decide the sample size necessary to ensure accurate statistics.

3. Oracle 9i introduced new values for the size clause of the METHOD_OPT parameter that automate the decision about which columns need histograms, while letting administrators control the factors affecting that decision. Besides specifying a numeric value for the size clause, administrators have the new options AUTO, SKEWONLY and REPEAT.

4. Oracle 9i introduced the ability to enable or disable table monitoring at the schema or DB level in one command:

DBMS_STATS.alter_schema_tab_monitoring('MYSCHEMA', TRUE);
DBMS_STATS.alter_schema_tab_monitoring('MYSCHEMA', FALSE);

DBMS_STATS.alter_database_tab_monitoring(TRUE);
DBMS_STATS.alter_database_tab_monitoring(FALSE);

New feature in Oracle10g:

1. Oracle 10g enables table monitoring automatically. Table monitoring is required to collect stale statistics, so we no longer need to enable monitoring explicitly. The feature is disabled when STATISTICS_LEVEL is BASIC and enabled when it is TYPICAL. The ALTER TABLE [NO]MONITORING clauses, as well as the alter_schema_tab_monitoring and alter_database_tab_monitoring procedures of the dbms_stats package, are obsolete in Oracle 10g; they still run without error, but they have no effect.

2. Oracle 10g introduced two new values for the granularity parameter, AUTO and GLOBAL AND PARTITION. They apply when analyzing partitioned tables.

AUTO : Oracle collects statistics at the global level and partition level, and at the subpartition level only if the subpartitioning method is LIST. If the subpartitioning method is not LIST, it collects only global and partition level statistics.

GLOBAL AND PARTITION : Oracle gathers the global and partition level statistics. No sub-partition level statistics are gathered.

3. Oracle 10g introduced the new value DBMS_STATS.AUTO_DEGREE for the degree parameter. When you specify auto_degree, Oracle determines the degree of parallelism automatically: either 1 (serial execution) or DEFAULT_DEGREE (the system default based on the number of CPUs and initialization parameters), according to the size of the object.

4. Oracle 10g has the ability to restore previous statistics. Oracle saves the last 31 days of statistics by default, so we can recover a previous day's statistics if the optimizer behaves differently with the current statistics. Please refer to my other post restoring statistics.

5. We can lock table statistics. This is helpful if we want to avoid gathering statistics on a table during the maintenance window (a short sketch follows this list). Please refer to my other post Locking statistics.

6. Oracle 10g has an automatic statistics gathering feature: it gathers statistics for the entire database every day during the maintenance window. Please refer to my other post automatic statistics gathering.

7. Statistics are collected automatically when we create an index. In Oracle 9i we had to use the COMPUTE STATISTICS clause to collect statistics while creating an index. Please refer to my other post compute index statistics.
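As a quick illustration of points 4 and 5, both capabilities are plain DBMS_STATS calls (SCOTT.EMP is a placeholder table):

-- Lock the table statistics so scheduled jobs leave them alone, and unlock later
EXEC DBMS_STATS.LOCK_TABLE_STATS('SCOTT', 'EMP');
EXEC DBMS_STATS.UNLOCK_TABLE_STATS('SCOTT', 'EMP');

-- Restore the statistics the table had one day ago
EXEC DBMS_STATS.RESTORE_TABLE_STATS('SCOTT', 'EMP', SYSTIMESTAMP - INTERVAL '1' DAY);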

What is the impact when we analyze tables during peak hours?

1. Oracle consumes more resources while gathering statistics, which slows down overall performance on the server.

2. When statistics are updated for a database object, Oracle invalidates any currently parsed SQL statements that access the object. The next time such a statement executes, it is re-parsed and the optimizer chooses a new execution plan based on the new statistics. This can degrade server performance during peak hours, but we can control it with the NO_INVALIDATE parameter, which has three values (TRUE, FALSE, DBMS_STATS.AUTO_INVALIDATE). TRUE does not invalidate already parsed SQL statements; FALSE invalidates them immediately; AUTO_INVALIDATE lets Oracle decide when to invalidate them.

Refer to the Oracle documentation for full details of the dbms_stats package: Oracle Help

Tuesday, September 29, 2009

Resetting High Water Mark in Oracle10g

I would like to write about how to reset the HWM in Oracle 10g. Prior to Oracle 10g, resetting the high water mark was a painful procedure in a busy environment; Oracle 10g made our life easy. I am not going to discuss what the HWM is, since I already covered that in another topic. Please click here to read about what the HWM is and the various options available to reset it.

We had traditional ways to reset the HWM prior to Oracle 10g. We might need table downtime when we use the traditional methods below.

1. exp/imp
2. ALTER TABLE ... MOVE (optionally to another tablespace)
3. Truncate and re-insert
4. Use the dbms_redefinition package to copy the table

Okay, let us talk about resetting the HWM in Oracle 10g. The tablespace should be ASSM (Automatic Segment Space Management) to leverage this feature. In Oracle 10g we do not need table downtime to reset the HWM, which makes it easy in a 24/7 environment.

What does Oracle do when we use the Oracle 10g feature to reset the HWM?

Oracle splits this process into two phases. Let me explain what happens in each phase.

Phase I. Oracle moves rows located in the middle or at the end of the segment down toward the beginning of the segment, making the segment more compact. This shrinking process looks like a delete and re-insert, but it is not really. The process moves the data row by row, acquiring a row level lock while each row is moved. The corresponding index entries are maintained like any other row level DML, so we do not need to worry about rebuilding indexes, and each row level lock is held only for a very short moment. Before we start this phase, we need to enable row movement. Here are the commands to complete phase I; I am using a table called bookings.

SQL> alter table bookings enable row movement;

Table altered.

SQL> alter table bookings shrink space compact;

Table altered

Phase II. This step resets the high water mark. It acquires a table level lock for a very short moment while resetting the HWM. Here are the commands to accomplish this task.

SQL> alter table bookings shrink space;

Table altered

SQL> alter table bookings shrink space cascade; (the cascade option shrinks all dependent objects, such as indexes, as well)

Table altered

If we want to reset the HWM in one go, the command below accomplishes the whole task: it moves the rows and resets the HWM.

SQL> alter table bookings shrink space;

Table altered
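To see the effect of the shrink, we can compare the segment size before and after; user_segments shows the allocated space (the actual numbers depend on your data):

SELECT segment_name, blocks, bytes/1024/1024 AS mb
FROM   user_segments
WHERE  segment_name = 'BOOKINGS';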

What are the advantages of using the Oracle 10g feature to reset the HWM?

There are several advantages over the traditional methods. Let me list them here.

1. It can be done online; a table level lock is held only for a very short moment. The traditional methods do not support resetting the HWM online.

2. It does not take extra space while resetting the HWM. Among the traditional methods, the DBMS_REDEFINITION package uses double the amount of space.

3. It acquires only row level locks while performing the majority of the shrinking (row movement) work, and acquires a table level lock only when it resets the HWM in phase II. The traditional methods require table downtime to reset the HWM, except for the dbms_redefinition package.

4. Indexes are maintained and remain usable. With the traditional methods we need to rebuild the indexes, especially when we use the ALTER TABLE ... MOVE command.

5. It can be done in one command (alter table emp shrink space). The traditional methods take multiple steps.

6. If you are not sure you can afford a table level lock at a specific time, you can do the majority of the shrinking work first and reset the HWM later, since the table level lock is required only while resetting the HWM. The whole process can be done in two steps as I explained above. This flexibility is not available with the traditional methods.

What are the restrictions on using the Oracle 10g feature to reset the HWM?

1. It is only possible in an ASSM tablespace.
2. It is not supported for clustered tables or tables with a LONG column.

Under what circumstances should we reset the HWM in two phases?

If the table is not frequently used (insert/update/delete) and can afford a table level lock, we can reset the HWM in one go. We reset it in two steps when the table is used by several people, is always busy, and cannot permit a table level lock even for a short moment at a specific time. In that scenario we move the rows to shrink the table first; then, at night or during off-peak hours, we reset the HWM.

Tuesday, September 15, 2009

DBMS_PROFILER

DBMS_PROFILER was introduced in Oracle 8i. It is a powerful tool to measure PL/SQL execution time and determine the bottleneck of a program unit. The tool reports how many times each line executes and how much time is spent executing each line of code. The basic idea behind profiling is that developers can see where the code spends most of its time, and then detect and optimize the slow PL/SQL code. We use SQL trace to find bottlenecks in SQL code; for PL/SQL code we can use the dbms_profiler utility to profile the run time behaviour. Steps might vary in other Oracle versions.

How do we set up dbms_profiler utility?

Please remember, the dbms_profiler setup is not part of the default Oracle installation; we need to set it up manually if we want to profile PL/SQL code. There are five simple steps to configure dbms_profiler. Let us start configuring the profiler. This article is tested on Oracle 10g R2.

Step 1. The dbms_profiler package is loaded by running the $ORACLE_HOME/rdbms/admin/profload.sql script as the SYS user.

Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - Production
With the Partitioning, OLAP and Data Mining options

SQL> connect sys/password@orcl as sysdba
Connected.
SQL> start c:/oracle/product/10.2.0/db_1/rdbms/admin/profload.sql

Package created.

Grant succeeded.

Synonym created.

Library created.

Package body created.

Testing for correct installation
SYS.DBMS_PROFILER successfully loaded.

PL/SQL procedure successfully completed.

Step 2. The dbms_profiler package requires some schema objects, which should be created in a central schema or in the application schema. We can either create a new schema or use an existing one; in this case I created a new schema named profiler.

SQL> create user profiler
2 identified by profiler;

User created.

SQL> grant create session,resource,connect to profiler;
Grant succeeded.

SQL>

Step 3. Run the $ORACLE_HOME/rdbms/admin/proftab.sql file in the profiler schema to create the objects that store the profiler information. The proftab.sql file creates the three tables below, along with some other objects. (The ORA-00942 and ORA-02289 errors in the output below are expected on a first install, since the script drops the objects before creating them.)

1.PLSQL_PROFILER_RUNS
2.PLSQL_PROFILER_UNITS
3.PLSQL_PROFILER_DATA

SQL> connect profiler/profiler@orcl
Connected.
SQL> start c:/oracle/product/10.2.0/db_1/rdbms/admin/proftab.sql
drop table plsql_profiler_data cascade constraints
*
ERROR at line 1:
ORA-00942: table or view does not exist

drop table plsql_profiler_units cascade constraints
*
ERROR at line 1:
ORA-00942: table or view does not exist

drop table plsql_profiler_runs cascade constraints
*
ERROR at line 1:
ORA-00942: table or view does not exist

drop sequence plsql_profiler_runnumber
*
ERROR at line 1:
ORA-02289: sequence does not exist

Table created.

Comment created.

Table created.

Comment created.

Table created.

Comment created.

Sequence created.

SQL>

Step 4. Connect to the profiler schema and grant the privileges below, so that other schemas can write profiling data.

GRANT SELECT ON plsql_profiler_runnumber TO PUBLIC;
GRANT SELECT, INSERT, UPDATE, DELETE ON plsql_profiler_data TO PUBLIC;
GRANT SELECT, INSERT, UPDATE, DELETE ON plsql_profiler_units TO PUBLIC;
GRANT SELECT, INSERT, UPDATE, DELETE ON plsql_profiler_runs TO PUBLIC;

SQL> connect profiler/profiler@orcl
Connected.
SQL> GRANT SELECT ON plsql_profiler_runnumber TO PUBLIC;

Grant succeeded.

SQL> GRANT SELECT, INSERT, UPDATE, DELETE ON plsql_profiler_data TO PUBLIC;

Grant succeeded.

SQL> GRANT SELECT, INSERT, UPDATE, DELETE ON plsql_profiler_units TO PUBLIC;

Grant succeeded.

SQL> GRANT SELECT, INSERT, UPDATE, DELETE ON plsql_profiler_runs TO PUBLIC;

Grant succeeded.

SQL>

Step 5. Connect to the sys schema and create public synonyms for the profiler objects, so that any schema can reference them.

CREATE PUBLIC SYNONYM plsql_profiler_runs FOR profiler.plsql_profiler_runs;
CREATE PUBLIC SYNONYM plsql_profiler_units FOR profiler.plsql_profiler_units;
CREATE PUBLIC SYNONYM plsql_profiler_data FOR profiler.plsql_profiler_data;
CREATE PUBLIC SYNONYM plsql_profiler_runnumber FOR profiler.plsql_profiler_runnumber;

SQL> connect sys/password@orcl as sysdba
Connected.
SQL> CREATE PUBLIC SYNONYM plsql_profiler_runs FOR profiler.plsql_profiler_runs;

Synonym created.

SQL> CREATE PUBLIC SYNONYM plsql_profiler_units FOR profiler.plsql_profiler_units;

Synonym created.

SQL> CREATE PUBLIC SYNONYM plsql_profiler_data FOR profiler.plsql_profiler_data;

Synonym created.

SQL> CREATE PUBLIC SYNONYM plsql_profiler_runnumber FOR profiler.plsql_profiler_runnumber;

Synonym created.

SQL>

Once all five steps have succeeded, we can start profiling PL/SQL code.

How do we profile the PLSQL Procedure?

Let us create a sample procedure and profile it. The overall flow is:

1. Start the profiler
2. Run the procedure
3. Stop the profiler
4. Flush the data from memory and save it into the tables
5. Analyze the data and see where the time is spent

Here I create a procedure called do_something in the SCOTT schema. You can profile a procedure in any schema in the database.

SQL> CREATE OR REPLACE PROCEDURE do_something (p_times IN NUMBER) AS
2 v_cnt NUMBER;
3 BEGIN
4 FOR i IN 1 .. p_times LOOP
5 SELECT count(*) + p_times
6 INTO v_cnt
7 FROM EMP;
8 END LOOP;
9 END;
10 /

Procedure created.

The anonymous PL/SQL block below starts the profiler and calls the procedure. Once the procedure has executed, it stops the profiler, then flushes the data from memory and saves it into the tables.

SQL> DECLARE
2 l_result BINARY_INTEGER;
3 BEGIN
4 l_result := DBMS_PROFILER.start_profiler(run_comment => 'do_something: ' || SYSDATE);
5 do_something(p_times => 100);
6 l_result := DBMS_PROFILER.stop_profiler;
7 dbms_profiler.flush_data;
8 END;
9 /

PL/SQL procedure successfully completed.

SQL>

Here are the queries to check the profiler results.

SQL> SET LINESIZE 200
SQL> SET TRIMOUT ON
SQL>
SQL> COLUMN runid FORMAT 99999
SQL> COLUMN run_comment FORMAT A50
SQL> SELECT runid,
2 run_date,
3 run_comment,
4 run_total_time
5 FROM plsql_profiler_runs
6 ORDER BY runid;

RUNID RUN_DATE RUN_COMMENT RUN_TOTAL_TIME
------ --------- -------------------------------------------------- --------------
4 15-SEP-09 do_something: 15-SEP-09 686370753

SQL> SELECT d.line#,
2 d.total_occur,
3 d.total_time
4 FROM plsql_profiler_units u
5 JOIN plsql_profiler_data d ON u.runid = d.runid AND u.unit_number = d.unit_number
6 WHERE u.runid = 4
7 and unit_name='DO_SOMETHING'
8 and unit_owner='SCOTT'
9 and unit_type='PROCEDURE'
10 ORDER BY u.unit_number, d.line#;

LINE# TOTAL_OCCUR TOTAL_TIME
---------- ----------- ----------
1 1 199466
4 101 2247771
5 100 513261322
9 1 85485

SQL> SELECT line || ' : ' || text
2 FROM all_source
3 WHERE owner = 'SCOTT'
4 AND type = 'PROCEDURE'
5 AND name = 'DO_SOMETHING';

LINE||':'||TEXT
----------------------------------------------------------------------------------------------------
1 : PROCEDURE do_something (p_times IN NUMBER) AS
2 : v_cnt NUMBER;
3 : BEGIN
4 : FOR i IN 1 .. p_times LOOP
5 : SELECT count(*) + p_times
6 : INTO v_cnt
7 : FROM EMP;
8 : END LOOP;
9 : END;

9 rows selected.

SQL>

Conclusion : Line number 4 runs 101 times and line number 5 runs 100 times, and the procedure spends most of its time on line 5. Now we have figured out exactly where the time goes; we can focus on tuning line 5 if we want to.
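When a run covers several program units, a per-unit rollup quickly shows which unit to drill into first. The sketch below sums the time per unit for the run above (profiler times are nominally in nanoseconds):

SELECT   u.unit_owner, u.unit_name, u.unit_type,
         ROUND(SUM(d.total_time)/1000000000, 3) AS seconds
FROM     plsql_profiler_units u
JOIN     plsql_profiler_data d
ON       u.runid = d.runid AND u.unit_number = d.unit_number
WHERE    u.runid = 4
GROUP BY u.unit_owner, u.unit_name, u.unit_type
ORDER BY seconds DESC;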

Where do we use dbms_profiler? TKPROF and explain plan help to find where a SQL statement takes a long time. Dbms_profiler is useful when we want to find out which line consumes the most time in a whole PL/SQL unit.

The source of this article is oracle-base.