Wednesday, December 26, 2012

Multiple For Loops Part I

I have a new blog post, http://wp.me/p2XmOn-3k, on WordPress.com.

This blog post sets the stage for testing multiple levels of FOR loops and the distribution of data among the different levels.

The ideal situation would be to refactor the FOR loops into BULK COLLECTs, but like most refactoring efforts that doesn't happen overnight.  So we need to understand the performance implications.
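For context, here is a minimal sketch of the two shapes being compared, assuming the classic DEPT/EMP demo tables and a hypothetical EMP_STAGE target table (these names are illustrative, not from the linked post):

-- Nested cursor FOR loops: one INSERT per inner-loop iteration.
BEGIN
  FOR d IN (SELECT deptno FROM dept) LOOP
    FOR e IN (SELECT empno, ename FROM emp WHERE deptno = d.deptno) LOOP
      INSERT INTO emp_stage (empno, ename) VALUES (e.empno, e.ename);
    END LOOP;
  END LOOP;
END;
/

-- The BULK COLLECT / FORALL version: one fetch, one bulk bind.
DECLARE
  TYPE num_tab  IS TABLE OF emp.empno%TYPE;
  TYPE name_tab IS TABLE OF emp.ename%TYPE;
  l_empnos num_tab;
  l_enames name_tab;
BEGIN
  SELECT empno, ename BULK COLLECT INTO l_empnos, l_enames FROM emp;
  FORALL i IN 1 .. l_empnos.COUNT
    INSERT INTO emp_stage (empno, ename) VALUES (l_empnos(i), l_enames(i));
END;
/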

Enjoy!

Wednesday, December 5, 2012

I'm Moving To Wordpress



I've moved to the above address.  I will try to link my posts back to Blogspot, but there's no guarantee.


Monday, December 3, 2012

Why Exception Handling

A common theme I run across is the lack of exception handling within production code.  The man-hours wasted tracking down resolutions to production issues get lost amongst obscure error messages, or error messages that send everyone down the wrong path.  It seems obvious to those of us charged with incident management that well-constructed exception handling saves time and frustration.

A typical firefighting scenario without well-written exception handling begins with the end user calling the help desk with the error message "ORA-01858: a non-numeric character was found where a numeric was expected".  Not too obscure, and fairly straightforward on the surface.  So we pull up the code at the line number where the error says it is being generated.  The line number points to a straight insert statement.

INSERT INTO table_detail (table_seq, descr, message) VALUES (vTbl_seq, vdesc, vmessage);

Only one column is defined as a number:

SQL> desc table_detail

 Name            Null?    Type
 --------------- -------- --------------
 TABLE_SEQ                NUMBER(9)
 DESCR                    VARCHAR2(255)
 MESSAGE                  VARCHAR2(4000)

The only numeric column is populated via a sequence, which is of course a number.  So where is the error being introduced?

On further review of the code, the actual error message is being inserted into the audit_log table (which is used to track the progress of different batch processes), and the process continues to the next step, which is an insert into table_detail.  This insert raises the ORA-01858 because it is trying to insert the variable values from the previous iteration: new values were never populated because the file could not be read.  The actual error that triggered this chain of events, and the reason the file could not be read, was ORA-29283: invalid file operation.  To further complicate the troubleshooting, neither the line number nor the directory in question is included in the message, and this particular process reads from a few different directories.

To recap: there is an error which is logged in the audit_log (and will never be seen by the end user unless they review the audit_log), the process continues, but the next step fails and raises the ORA-01858.  Now that we have the right error message, we know we can review the grants on each of the directories in DBA_TAB_PRIVS, as well as the permissions on the OS directories.  The time spent troubleshooting and debugging the code: approximately 2 hours.
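For the directory grants, a query along these lines is enough (FILE_DIR is the hypothetical directory object from this example; directory grants show up in DBA_TAB_PRIVS under the directory name):

SELECT grantee, privilege
  FROM dba_tab_privs
 WHERE table_name = 'FILE_DIR';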

If the original error message had included the actual Oracle error number and text, the line number, and some additional context such as "Invalid File Operation for Directory FILE_DIR and Filename Table_Load.txt Line 512", then the troubleshooting effort would have taken one individual five minutes at most, instead of two hours and multiple people.

Production is not the place to debug errors.  The exception handling should account for errors in a straightforward manner, allowing production support personnel to troubleshoot incidents efficiently and effectively without involving developers.  In my opinion the only type of exception handling worse than obscure messages is WHEN OTHERS THEN NULL.
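As a minimal sketch of a handler that meets that bar (load_file and log_error are hypothetical names for this example; SQLCODE, SQLERRM and DBMS_UTILITY.FORMAT_ERROR_BACKTRACE are standard built-ins):

BEGIN
  load_file('FILE_DIR', 'Table_Load.txt');  -- hypothetical file-loading procedure
EXCEPTION
  WHEN OTHERS THEN
    -- Record the real error code, text and originating line before re-raising,
    -- so support sees the ORA-29283 and its location, not a downstream symptom.
    log_error(p_code  => SQLCODE,                                  -- hypothetical logger
              p_text  => SQLERRM,
              p_trace => DBMS_UTILITY.FORMAT_ERROR_BACKTRACE);
    RAISE;
END;
/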


Tuesday, November 27, 2012

Enable Data Guard Broker in an Existing DG Environment

There are several advantages to using the Data Guard broker to configure and manage your Data Guard environment.  Don't get me wrong, I believe in the command line and in understanding what the tools and GUIs provide.  The broker provides a single integrated view of a Data Guard configuration.  Configuration changes, monitoring and switchover/failover are all available through the broker, providing efficient management of the environment.  When you are working with several environments that include logical standbys, an efficient method of managing all aspects is not only a time saver but a necessity.  Performing these tasks through the command line is more of a luxury.

So I already have a standby database (or multiple standbys in some cases) how do I set up the broker to make my life easier?

The broker requires a special static listener entry in order for failover/switchover to work.  The broker uses this entry to connect remotely as SYSDBA and perform the startup.  The key here is that the global name must be the db_unique_name with DGMGRL appended, plus the domain in which the database resides.  This must be done for both the primary and the standby.  No TNS entry is required because the broker knows this is the listener entry to use.

On the primary server:

SID_LIST_LISTENER =
   (SID_LIST =
      (SID_DESC =
         (GLOBAL_DBNAME = PRIMARY_DGMGRL)
         (ORACLE_HOME = /u01/app/oracle/product/11.2.0.3)
         (SID_NAME = PRIMARY)
    )
)

On the standby server:

SID_LIST_LISTENER =
   (SID_LIST =
      (SID_DESC =
         (GLOBAL_DBNAME = STANDBY_DGMGRL)
         (ORACLE_HOME = /u01/app/oracle/product/11.2.0.3)
         (SID_NAME = STANDBY)
    )
)

Enable the Data Guard broker on both the primary and the standby by setting the DG_BROKER_START parameter to TRUE:

ALTER SYSTEM SET DG_BROKER_START=TRUE;

There are two configuration files for the broker; the default locations are:

$ORACLE_HOME/dbs/dr1<db_unique_name>.dat
$ORACLE_HOME/dbs/dr2<db_unique_name>.dat

For RAC it is suggested that these be moved to shared storage, via the DG_BROKER_CONFIG_FILE1 and DG_BROKER_CONFIG_FILE2 parameters.
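A hedged example of relocating them (the +DATA disk group and file names are assumptions; note that DG_BROKER_START must still be FALSE at the time these are changed):

-- SID='*' applies the setting to all instances.
ALTER SYSTEM SET DG_BROKER_CONFIG_FILE1 = '+DATA/PRIMARY/dr1PRIMARY.dat' SCOPE=BOTH SID='*';
ALTER SYSTEM SET DG_BROKER_CONFIG_FILE2 = '+DATA/PRIMARY/dr2PRIMARY.dat' SCOPE=BOTH SID='*';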

Start the Data Guard broker command-line utility and connect as SYS.  The broker assumes the SYSDBA privilege, so you don't need to specify it.

dgmgrl
DGMGRL> CONNECT SYS
Password:

Next we create the configuration by specifying the configuration name, the primary database name, and its connect identifier.

DGMGRL> CREATE CONFIGURATION 'DG1' AS
PRIMARY DATABASE IS 'PRIMARY'
CONNECT IDENTIFIER IS 'PRIMARY' ;

Configuration "DG1" created with primary database "PRIMARY"

Add the standby databases to the configuration.

DGMGRL> ADD DATABASE 'STANDBY' AS
CONNECT IDENTIFIER IS   'STANDBY';

Database 'STANDBY' added

The Data Guard configuration must then be enabled.

DGMGRL> ENABLE CONFIGURATION;
Enabled
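To verify the setup, SHOW CONFIGURATION should report SUCCESS once the broker has completed its health checks (output abbreviated and illustrative):

DGMGRL> SHOW CONFIGURATION;

Configuration - DG1
  Protection Mode: MaxPerformance
  Databases:
    PRIMARY - Primary database
    STANDBY - Physical standby database

Configuration Status:
SUCCESS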

Wednesday, November 21, 2012

Data Pump Export Schema within Pl/SQL

I have one client where I don't have access to the database server but still need to perform exports to move metadata.  This can easily be accomplished using the Data Pump API.  Here's an example of the script that I use; it's essentially the example from the Data Pump API chapter of Oracle's Utilities manual.



DECLARE
  ind NUMBER;              -- loop index
  h1 NUMBER;               -- Data Pump job handle
  percent_done NUMBER;     -- percentage of job complete
  job_state VARCHAR2(30);  -- keeps track of job state
  le ku$_LogEntry;         -- WIP and error messages
  js ku$_JobStatus;        -- job status from get_status
  jd ku$_JobDesc;          -- job description from get_status
  sts ku$_Status;          -- status object returned by get_status
BEGIN

 -- Create a Data Pump handle for a schema-mode export.
 h1 := DBMS_DATAPUMP.OPEN('EXPORT','SCHEMA',NULL,'MYJOB','LATEST');

 -- Specify the dump file and the directory object it is written to.
 DBMS_DATAPUMP.ADD_FILE(h1,'myexport.dmp','EXPORT_DIR');

 -- Specify a log file for the job.
 DBMS_DATAPUMP.ADD_FILE(handle=>h1,filename=>'myexport.log',directory=>'EXPORT_DIR',
    filetype=>DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE);

 -- Export only the MY schema.
 DBMS_DATAPUMP.METADATA_FILTER(h1,'SCHEMA_EXPR','IN (''MY'')');

 -- Start the job; it runs asynchronously from here.
 DBMS_DATAPUMP.START_JOB(h1);

  percent_done := 0;
  job_state := 'UNDEFINED';
  -- Monitor the job until it completes or stops, echoing progress and messages.
  while (job_state != 'COMPLETED') and (job_state != 'STOPPED') loop
    dbms_datapump.get_status(h1,
           dbms_datapump.ku$_status_job_error +
           dbms_datapump.ku$_status_job_status +
           dbms_datapump.ku$_status_wip,-1,job_state,sts);
    js := sts.job_status;

    if js.percent_done != percent_done
    then
      dbms_output.put_line('*** Job percent done = ' ||
                           to_char(js.percent_done));
      percent_done := js.percent_done;
    end if;

   if (bitand(sts.mask,dbms_datapump.ku$_status_wip) != 0)
    then
      le := sts.wip;
    else
      if (bitand(sts.mask,dbms_datapump.ku$_status_job_error) != 0)
      then
        le := sts.error;
      else
        le := null;
      end if;
    end if;
    if le is not null
    then
      ind := le.FIRST;
      while ind is not null loop
        dbms_output.put_line(le(ind).LogText);
        ind := le.NEXT(ind);
      end loop;
    end if;
  end loop;

  dbms_output.put_line('Job has completed');
  dbms_output.put_line('Final job state = ' || job_state);
  dbms_datapump.detach(h1);
END;
/

For a table-level export, simply replace the following lines.  Replace

h1 := DBMS_DATAPUMP.OPEN('EXPORT','SCHEMA',NULL,'MYJOB','LATEST');

with

h1 := DBMS_DATAPUMP.OPEN('EXPORT','TABLE',NULL,'MYJOB','LATEST');

and replace

DBMS_DATAPUMP.METADATA_FILTER(h1,'SCHEMA_EXPR','IN (''MY'')');

with

DBMS_DATAPUMP.METADATA_FILTER(h1,'NAME_EXPR','IN (''TABLE1'',''TABLE2''...)');
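While the job runs, it can also be watched from another session via the standard DBA_DATAPUMP_JOBS view (MYJOB is the job name passed to OPEN above):

SELECT job_name, operation, job_mode, state
  FROM dba_datapump_jobs
 WHERE job_name = 'MYJOB';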
Wednesday, October 24, 2012

Commands I Always Forget

This post is aimed at being one location for maintaining commands I can't seem to keep straight and end up googling.  PSOUG and Morgan's Library are great resources, but sometimes I just want a quick place to stop by and retrieve what I need.  As I run into these commands I'll update this page.

Dropping a database 11.2 Single Instance

 SQLPLUS:

 SQL> shutdown immediate
 SQL> startup restricted mount
 SQL> drop database;

RMAN:

$ rman
RMAN> CONNECT TARGET SYS@orcl

target database Password: password
connected to target database: orcl (DBID=4920931111)

RMAN> STARTUP FORCE MOUNT
RMAN> SQL 'ALTER SYSTEM ENABLE RESTRICTED SESSION';
RMAN> DROP DATABASE INCLUDING BACKUPS NOPROMPT;

Stop and Start Physical Standby 

SQLPLUS

Start

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;

Stop

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;

CRSCTL

crsctl start crs
   start the clusterware stack (11.2; must be root or have sudo rights)

crsctl stop crs
    stop the clusterware stack (11.2; must be root or have sudo rights)

crsctl stat res -t
   displays the short form of the resource status (replaces 10.2 crs_stat)

crsctl enable crs
     enable automatic start of the clusterware daemons

crsctl disable crs
      disable automatic start of the clusterware daemons

crsctl add css votedisk path
     add a voting disk

crsctl delete css votedisk path
     remove a voting disk

crsctl check crs
    status of the CRS stack

crsctl check cssd
    status of the CSS daemon

crsctl check crsd
    status of the CRS daemon

crsctl check evmd
    status of the EVM daemon

crsctl check cluster
    status of the cluster stack across nodes

crsctl query crs softwareversion
    installed clusterware version

crsctl query crs activeversion
    active clusterware version


Check Inventory

./opatch lsinventory
    ** view inventory using OPatch

cat /etc/oraInst.loc | grep inventory_loc
     ** find the location of the inventory

cat /opt/app/oraInventory/ContentsXML/inventory.xml
     ** view inventory without using OPatch

Skipping Users In Logical Standby

Logical standbys have their place, and they require their own special attention.  They are generally used to offload reporting that would otherwise cause performance issues in the production instance.  However, not everything needs to be replicated to the logical standby.  Individual users who have access to the production environment and can create objects are one type of activity that can be skipped.  (If they are creating objects and data with a requirement to replicate, that's a whole other issue.)  When we create a user in the production environment, the user itself will be replicated to the logical standby; however, we do not want their objects or their data replicated from production to the logical standby.  So we issue the following with each user create:
ALTER DATABASE STOP LOGICAL STANDBY APPLY;
EXECUTE DBMS_LOGSTDBY.SKIP(stmt => 'DML', schema_name => 'NewUser', object_name => '%');
EXECUTE DBMS_LOGSTDBY.SKIP(stmt => 'SCHEMA_DDL', schema_name => 'NewUser', object_name => '%');
ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;

**Note: substitute the actual user for NewUser.

The key is to remember to do this when each user is created, or to have an automated process that triggers these commands on the logical standby.
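To confirm which skip rules are in place, the standard DBA_LOGSTDBY_SKIP view can be queried on the logical standby (NewUser follows the example above; match the case used in the SKIP calls):

SELECT owner, statement_opt, name
  FROM dba_logstdby_skip
 WHERE owner = 'NewUser';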