Thursday, January 24, 2008

PL/SQL Developer

When trying to compile a package with PL/SQL Developer against a 9.2.0.1 Oracle database, a developer ran into an ORA-00600 error. The error is raised in PL/SQL Developer and looks like this...

This problem can easily be fixed by bouncing the database, but this production database could not be stopped, so I had to find another way to solve it.
Could it be they were using a wrong version of PL/SQL Developer? I couldn't confirm that, so upgrading PL/SQL Developer seemed useless...
I found another workaround for this problem.
Maybe it had something to do with the "debug information" mode. To verify this, go to Tools > Preferences > Debugger tab page, disable the "Add debug information when compiling" option, and retry the compilation.
The developers compiled the package again and it worked...
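Incidentally, that IDE checkbox controls whether the package is compiled with debug information; the same effect can be had straight from SQL*Plus. A minimal sketch, assuming a package named my_pkg (a hypothetical name, not the customer's actual package):

```sql
-- Compile without debug information (same effect as disabling the option):
ALTER PACKAGE my_pkg COMPILE;

-- Compiling WITH debug information, which is what triggered the ORA-00600:
ALTER PACKAGE my_pkg COMPILE DEBUG;
```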

Tuesday, January 22, 2008

Using LogMiner

A customer wanted to know how a certain record in their EBS database had been updated.
A colleague of mine decided to use LogMiner to find out what happened to that specific record.
With LogMiner, all changes in the archived log files are made available through the v$logmnr_contents view, which you can query with ordinary SQL statements.

So, how does it work? We chose the online catalog variant... (see Metalink note 291686.1 for more options)

First, find out which log files you should examine; in our case, that meant finding out when the record was updated. We figured it had to be between 10:00 am and 4:00 pm, so we had to use some 33 archived log files! The change had to be in one of them.
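To find which archived logs cover a suspected time window, you can query v$archived_log; the NAME column gives the file to feed to ADD_LOGFILE. A sketch (the date below is a made-up example, not our actual window):

```sql
-- Archived logs whose time range overlaps the suspected window (10:00-16:00):
SELECT name, sequence#, first_time, next_time
FROM   v$archived_log
WHERE  first_time <= TO_DATE('09-01-2008 16:00', 'DD-MM-YYYY HH24:MI')
AND    next_time  >= TO_DATE('09-01-2008 10:00', 'DD-MM-YYYY HH24:MI')
ORDER  BY sequence#;
```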
Once you have decided which log files to use, you must 'add' them using the following syntax:

SQL> execute DBMS_LOGMNR.ADD_LOGFILE (LOGFILENAME => '/XXX01.LOG',OPTIONS =>DBMS_LOGMNR.NEW);

The first log file must be added with the DBMS_LOGMNR.NEW option. If you have more log files, the following ones must be added with the DBMS_LOGMNR.ADDFILE option; otherwise only the last log file is used :)
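Put together, adding a handful of logs looks like this (the file names are placeholders, following the pattern above):

```sql
-- First log with NEW, every subsequent one with ADDFILE:
BEGIN
  DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => '/XXX01.LOG',
                          OPTIONS     => DBMS_LOGMNR.NEW);
  DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => '/XXX02.LOG',
                          OPTIONS     => DBMS_LOGMNR.ADDFILE);
  DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => '/XXX03.LOG',
                          OPTIONS     => DBMS_LOGMNR.ADDFILE);
END;
/
```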

When all log files are added, you can start LogMiner:

SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);

Now you can query the v$logmnr_contents view. We decided to spool the output to make it a little easier to search.

SQL> select username, operation, SQL_REDO,SQL_UNDO FROM V$LOGMNR_CONTENTS where sql_redo like '%updated_table%';

When finished..

SQL> EXECUTE DBMS_LOGMNR.END_LOGMNR

Now the searching begins. We finally succeeded, but searching is not easy.

Friday, January 18, 2008

VMware Converter


For a colleague (an Oracle EBS functional consultant) I had to install two EBS environments on a 500 GB hard disk. Using VMware, he would be able to play around with the environments or use them for a demo. I had already created both environments using VMware Workstation, but decided it would be better to use VMware Server. Because VMware Server cannot work with the already created Workstation environments, I thought starting over was the only option. Just to check whether someone else had already faced the same problem, I googled for 'vmware conversion' and found something about VMware Converter. It seems VMware has already created a tool for exactly this kind of conversion!! You can download the tool from the VMware site. I converted the first Workstation environment and it worked: I started the new VMware Server environment and everything ran like before. Great tool!!

Friday, January 11, 2008

Tagged.....

I have been tagged.....three times already :). Gareth Roberts , OCP Advisor and Atul Kumar tagged me.
Following the rules of this new tagging feature among Oracle bloggers, I should now blog 8 things about myself and tag other Oracle bloggers. Look here for the rules.

So 8 things about myself you don't know already....

1. My name is actually Bastiaan
2. I have had a season ticket for Ajax for 18 years now...
3. My gaming alias is Djewie
4. I am part of a gaming (COD4) clan called Bullet Proof Underwear
5. I have studied economics, but never finished it
6. I love to get together for a drink with friends
7. My youngest son has been sick for three years now (..he is three years old :(..)
8. I love running to clear my head

Now, who am I going to tag ?

1. Marco Gralike
2. Rene Kuss
3. Arian Stijf
4. Bandari Huang
5. Johan Louwers
6. Richard Byrom
7. Madan Mohan
8. Navdeep Saini

Maybe some of them are already tagged by now, but we will see.

Thursday, January 10, 2008

Recover database

Monday evening I was told a company needed some help recovering an important database.
They were not able to get the database back online. The next morning, after the 'OK' from my manager, I visited the company. It seemed they had just moved to a new building: all servers were stopped and started again in the new building. The Oracle DBA who used to work there was gone, so nobody really knew what to do or how things were set up.
The database that could not be started was an online clone, made using snapshots, of a production database. The production database could not be stopped, so they used the online snapshot technology to clone it. So, what was the problem? Using an online backup for your clone means recovery, so I first checked the status of the database. The status was 'mounted', so the controlfile creation had succeeded?
Next I tried to open the database, but this command resulted in the following error..


SQL> alter database open resetlogs;
alter database open resetlogs
*

ERROR at line 1:
ORA-01113: file 1 needs media recovery
ORA-01110: data file 1: '/ORA/ONTW/DATA/system01.dbf'


More recovery ??
Because I did not know what happened exactly, I decided to clone again. This way I could see with my own eyes what was going wrong.
During the copy phase of the cloning, I also decided to create a new backup of the controlfile; the script used for the controlfile creation was already a year old. On the production database I created a new backup using the 'alter database backup controlfile to trace' statement and used that one instead of the old script. Maybe they had removed or added some datafiles...
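For reference, that statement dumps a ready-made CREATE CONTROLFILE script into a trace file in the instance's user dump directory:

```sql
-- Run on the production database; the resulting trace file in
-- user_dump_dest contains the CREATE CONTROLFILE statement.
ALTER DATABASE BACKUP CONTROLFILE TO TRACE;
```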
After the copy was ready I could start the actual clone.

$ sqlplus /nolog

SQL*Plus: Release 9.2.0.6.0 - Production on Wed Jan 9 09:24:30 2008

Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.

SQL> conn / as sysdba
Connected to an idle instance.
SQL> startup nomount
ORACLE instance started.

Total System Global Area 1060074720 bytes
Fixed Size 738528 bytes
Variable Size 922746880 bytes
Database Buffers 134217728 bytes
Redo Buffers 2371584 bytes
SQL> @control_ontw.sql
SQL> select status from v$instance;

STATUS
------------
MOUNTED

SQL> select status, checkpoint_change#,
2 to_char(checkpoint_time, 'DD-MON-YYYY HH24:MI:SS') as
3 checkpoint_time,
4 count(*)
5 from v$datafile_header
6 group by status, checkpoint_change#, checkpoint_time
7 order by status, checkpoint_change#, checkpoint_time;

STATUS CHECKPOINT_CHANGE# CHECKPOINT_TIME COUNT(*)
------- ------------------ -------------------- ----------
ONLINE 1.2965E+10 09-JAN-2008 08:45:49 43



In the CREATE CONTROLFILE statement I used the RESETLOGS option, which is needed when renaming the database, and the query showed me the datafiles were in sync.
So, now let's try to recover..

SQL> RECOVER DATABASE USING BACKUP CONTROLFILE
ORA-00279: change 12964929028 generated at 01/09/2008 08:45:49 needed for
thread 1
ORA-00289: suggestion : /ORA/ONTW/ARCH/arch60558.dbf
ORA-00280: change 12964929028 for thread 1 is in sequence #60558


Specify log: {=suggested filename AUTO CANCEL}


Strange, the suggested archive log file was not copied, and on the production server the file isn't there either??? So the recovery process is asking me for a future archived log?? This seemed strange; the missing data had to be in the online log files.
Searching the net showed me the solution: just enter the path to one of your online log files!!
The first log file didn't fix the problem, but the second one did..

Specify log: {=suggested filename AUTO CANCEL}
/ORA/ONTW/DATA/redo02a.log
Log applied.
Media recovery complete.
SQL> alter database open resetlogs;

Database altered.

SQL> select status from v$instance;

STATUS
------------
OPEN


The database is open and ready to use.

Friday, January 04, 2008

Cloning error

Yesterday I started cloning an EBS environment. After running the preclone, I started copying the environment on our NetApp filer.
After the environment was copied, I started the actual clone.
But when adcfgclone tried to create my new controlfile, the following error was reported in my alert.log..

Fri Jan 4 10:30:01 2008
scumnt: failed to lock /oebs/pubtst22/pubtst22db/9.2.0/dbs/lkPUBTST22 exclusive
Fri Jan 4 10:30:01 2008
ORA-09968: scumnt: unable to lock file
Linux Error: 37: No locks available
Additional information: 300
Fri Jan 4 10:30:01 2008
ORA-1503 signalled during: CREATE CONTROLFILE REUSE SET DATABASE "PUBTST22"

It seems Linux cannot lock the file that's needed to mount the database...
I searched Metalink and Google, but could not find anything that could explain my problem.
Finally we decided to unmount the complete mount point, but all directories were still visible???
We unmounted again, and this time the directories were gone. It seemed the mount point was mounted twice?
After mounting again, I started the clone again. This time no lock problems.....
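A quick way to spot a doubly-mounted mount point is to look for duplicates in the mount table. This is a hypothetical check, not the exact commands we ran; below, the pipeline is demonstrated against a simulated mount listing where /oebs appears twice:

```shell
# On a real box the check is:  mount | awk '{print $3}' | sort | uniq -d
# Here the same pipeline runs against simulated output; it prints
# any mount point that occurs more than once.
printf '%s\n' \
  'filer:/vol1 on /oebs type nfs (rw)' \
  'filer:/vol2 on /data type nfs (rw)' \
  'filer:/vol1 on /oebs type nfs (rw)' \
  | awk '{print $3}' | sort | uniq -d
```

An empty result means every filesystem is mounted exactly once.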

Fri Jan 4 11:04:30 2008
Successful mount of redo thread 1, with mount id 2212425917
Fri Jan 4 11:04:30 2008
Completed: CREATE CONTROLFILE REUSE SET DATABASE "PUBTST22"