Jared Still's Ramblings<br />
Here you will find posts that are mostly about my work as an Oracle DBA. There may occasionally be other topics posted, but by and large this blog will be about Oracle and other geeky IT topics. Perl will likely be mentioned from time to time.<br />
<br />
Display only the active archive log destinations (2012-01-06)<br />
<br />
One thing I find annoying is when I want to see the archive log destinations in an Oracle database.<br />
I usually want to see only those that are enabled, and have a non null value for the destination.<br />
<br />
show parameter log_archive_dest shows more than I care to look at.<br />
<br />
Try this:<br />
<br />
<pre><code>
select name, value
  from v$parameter
 where name = 'log_archive_dest'
   and value is not null
union all
select p.name, p.value
  from v$parameter p
 where p.name like 'log_archive_dest%'
   and p.name not like '%state%'
   and p.value is not null
   and 'enable' = (
      select lower(p2.value)
        from v$parameter p2
       where p2.name = substr(p.name,1,instr(p.name,'_',-1)) || 'state' || substr(p.name,instr(p.name,'_',-1))
   )
union all
select p.name, p.value
  from v$parameter p
 where p.name like 'log_archive_dest_stat%'
   and lower(p.value) = 'enable'
   and (
      select p2.value
        from v$parameter p2
       where p2.name = substr(p.name,1,16) || substr(p.name,instr(p.name,'_',-1))
   ) is not null
/
</code>
</pre><div><br />
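A simpler alternative, where the v$archive_dest view is available to you, is to query just the configured destinations that are currently valid (column names per that view):<br />
<br />
<pre><code>
select dest_name, destination, status
from v$archive_dest
where destination is not null
and status = 'VALID'
/
</code></pre>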
</div>
<br />
SSH root attacks on the rise (2011-12-05)<br />
<br />
This is not directly Oracle related, but it is probably still of interest.<br />
<br />
<a href="http://isc.sans.edu/diary/SSH+Password+Brute+Forcing+may+be+on+the+Rise/12133">SSH Password Brute Forcing may be on the Rise</a><br />
<br />
Out of curiosity I pulled the ssh login attempts from /var/log/messages on an internet-facing server, and the data corresponds to what was shown in the article.<br />
<br />
What was interesting was that all of the ssh attempts I saw were for root. In the past when I have looked at these logs there were a number of different accounts being attacked, but now the attacks are all aimed at root.<br />
<br />
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiUeZ1rXgWD2XcgZV-EDseizbyS0S41sVuRo6qPxPD6Z6QDXpPOd4bQz0ndNgfQOwyWbYKma33N798-w1e0maTu8CykwqnrJxpv8dFaLREwUaGw5RnTmorqo6wnQhBo6pCqiISV/s1600/root_ssh_attacks.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="161" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiUeZ1rXgWD2XcgZV-EDseizbyS0S41sVuRo6qPxPD6Z6QDXpPOd4bQz0ndNgfQOwyWbYKma33N798-w1e0maTu8CykwqnrJxpv8dFaLREwUaGw5RnTmorqo6wnQhBo6pCqiISV/s320/root_ssh_attacks.png" width="320" /></a></div>
<br />
Blogging at Pythian (2011-07-15)<br />
<br />
Since I started working for Pythian at the beginning of the year, I have begun blogging there as well.<br />
<br />
First post is today: <a href="http://www.pythian.com/news/24447/applying-external-timing-data-to-untimed-events/">Applying External Timing Data to Untimed Events</a><br />
<br />
I may still post here from time to time. Work at Pythian is quite enjoyable, but it is always so busy that there is less time for blogging. At least for me, anyway, as I have non-Oracle interests to attend to as well.
<br />
You can always learn something new (2010-11-27)<br />
<br />
It’s high time for this blog to come off hiatus. I really don’t know why I let it go so long; just preoccupied with work and extracurricular activities, I guess.<br />
<br />
One of those activities was contributing two chapters to a new book from Apress, <a href="http://apress.com/book/view/1430232285">Pro Oracle SQL</a>. Though it was only two chapters, it did consume a significant amount of time. Some folks seem able to bang out well-written prose and code with seemingly little effort; it seems that I labor over it more than most, or at least it feels that way at times. <br />
<br />
On to something new. Not really new, but it was new to me the other day. Or if it was not new, I had completely forgotten about it.<br />
<br />
It has to do with the innocuous date formats used with to_date(). I ran into some unexpected behavior from to_date() while running one of the scripts used for the aforementioned book.<br />
When logging into a database, part of my normal login includes setting the nls_date_format for my session:<br />
<br />
<pre style="background-image: URL(https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEih9YisCN7NXvqdCcRvWKqpT4CqQAKDRBIVmPnM2vsHhDEEYf3EweuVAyP6P-d3KvpTQXJofD-1bI_PAaYZMxQDMvCNpRFPI194WiMtSwy9DGHY5Z19RuzOdEpZAcntn6uIGfj-AQ/s320/codebg.gif); background: #f0f0f0; border: 1px dashed #CCCCCC; color: black; font-family: arial; font-size: 12px; height: auto; line-height: 20px; overflow: auto; padding: 0px; text-align: left; width: 99%;"><code style="color: black; word-wrap: normal;"> alter session set nls_date_format='mm/dd/yyyy hh24:mi:ss'
</code></pre><br />
The main purpose of doing so is so that DBA scripts that include dates will display in my preferred format without the need to use to_date(to_char()) to display the preferred format while preserving the data type.<br />
<br />
When writing scripts that may be used by others, or any circumstance where I cannot rely on a setting for nls_date_format, I will use to_char() and to_date() with format masks to ensure the script will run without error.<br />
<br />
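A minimal sketch of that defensive style, with an explicit format mask on both conversions (the column alias is just for illustration):<br />
<br />
<pre style="background-image: URL(https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEih9YisCN7NXvqdCcRvWKqpT4CqQAKDRBIVmPnM2vsHhDEEYf3EweuVAyP6P-d3KvpTQXJofD-1bI_PAaYZMxQDMvCNpRFPI194WiMtSwy9DGHY5Z19RuzOdEpZAcntn6uIGfj-AQ/s320/codebg.gif); background: #f0f0f0; border: 1px dashed #CCCCCC; color: black; font-family: arial; font-size: 12px; height: auto; line-height: 20px; overflow: auto; padding: 0px; text-align: left; width: 99%;"><code style="color: black; word-wrap: normal;"> -- no dependence on the session nls_date_format:
-- to_date() gets a mask for conversion, to_char() gets a mask for display
select to_char(
   to_date('01/01/2011 12:00:00','mm/dd/yyyy hh24:mi:ss')
   , 'mm/dd/yyyy hh24:mi:ss'
) event_time
from dual;
</code></pre><br />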
When developing scripts for use in published writing, I normally do not set nls_date_format for my sessions, but this time I had forgotten to disable it.<br />
<br />
So, when double checking the scripts to be included with the book chapters, I was rather surprised to see that one of them did not work.<br />
<br />
<pre style="background-image: URL(https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEih9YisCN7NXvqdCcRvWKqpT4CqQAKDRBIVmPnM2vsHhDEEYf3EweuVAyP6P-d3KvpTQXJofD-1bI_PAaYZMxQDMvCNpRFPI194WiMtSwy9DGHY5Z19RuzOdEpZAcntn6uIGfj-AQ/s320/codebg.gif); background: #f0f0f0; border: 1px dashed #CCCCCC; color: black; font-family: arial; font-size: 12px; height: auto; line-height: 20px; overflow: auto; padding: 0px; text-align: left; width: 99%;"><code style="color: black; word-wrap: normal;"> SQL> l
1 select
2 to_date('01/01/2011 12:00:00','mm/dd/yyyy hh24:mi:ss')
3 , to_date('01/01/2011')
4* from dual;
SQL>
, to_date('01/01/2011')
*
ERROR at line 3:
ORA-01843: not a valid month
</code></pre><br />
The SQL session I was checking it from was connected to a completely new and different database, set up just for the purpose of verifying that the scripts all worked as I expected, but one script failed on the to_date(). At first I thought it was just due to not having a format mask specified in the second to_date(), but then I immediately wondered why the script had always worked previously. You can probably guess why, though at first I did not understand what was occurring.<br />
<br />
The new environment was not setting nls_date_format upon login. I had inadvertently setup my initial test environment where the scripts were developed with nls_date_format=’mm/dd/yyyy hh24:mi:ss’.<br />
<br />
What surprised me was that to_date(‘01/01/2011’) had worked properly without a specific date format mask, and with a date string that did not fully match the nls_date_format.<br />
<br />
The “new” bit is that as long as the date string corresponds to a leading portion of the session nls_date_format setting, the conversion will work.<br />
<br />
So, with nls_date_format set to ‘mm/dd/yyyy hh24:mi:ss’, we should expect to_date(‘01/01/2011’) to succeed.<br />
<br />
This can easily be tested by setting a more restrictive nls_date_format, and then attempting to use to_date() without a format mask.<br />
<br />
<pre style="background-image: URL(https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEih9YisCN7NXvqdCcRvWKqpT4CqQAKDRBIVmPnM2vsHhDEEYf3EweuVAyP6P-d3KvpTQXJofD-1bI_PAaYZMxQDMvCNpRFPI194WiMtSwy9DGHY5Z19RuzOdEpZAcntn6uIGfj-AQ/s320/codebg.gif); background: #f0f0f0; border: 1px dashed #CCCCCC; color: black; font-family: arial; font-size: 12px; height: auto; line-height: 20px; overflow: auto; padding: 0px; text-align: left; width: 99%;"><code style="color: black; word-wrap: normal;"> SQL> alter session set nls_date_format = 'mm/dd/yyyy';
Session altered.
SQL> select to_date('01/01/2011 12:00') from dual;
select to_date('01/01/2011 12:00') from dual
*
ERROR at line 1:
ORA-01830: date format picture ends before converting entire input string
</code></pre><br />
When I saw that error message, I then understood what was happening. to_date() could be used without a format mask, as long as the date corresponded to a portion of the nls_date_format. When the specified date exceeded what could be expressed with the nls_date_format, an ORA-1830 error would be raised.<br />
<br />
In this sense it is much like number formats. I was a little surprised that I didn’t already know this, or had forgotten it so completely.<br />
<br />
But, here’s the real surprise. The following to_date calls will also be correctly translated by nls_date_format.<br />
<br />
<pre style="background-image: URL(https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEih9YisCN7NXvqdCcRvWKqpT4CqQAKDRBIVmPnM2vsHhDEEYf3EweuVAyP6P-d3KvpTQXJofD-1bI_PAaYZMxQDMvCNpRFPI194WiMtSwy9DGHY5Z19RuzOdEpZAcntn6uIGfj-AQ/s320/codebg.gif); background: #f0f0f0; border: 1px dashed #CCCCCC; color: black; font-family: arial; font-size: 12px; height: auto; line-height: 20px; overflow: auto; padding: 0px; text-align: left; width: 99%;"><code style="color: black; word-wrap: normal;"> SQL> select to_date('Jan-01 2011') from dual;
TO_DATE('JAN-012011
-------------------
01/01/2011 00:00:00
1 row selected.
SQL> select to_date('Jan-01 2011 12:00:00') from dual;
TO_DATE('JAN-012011
-------------------
01/01/2011 12:00:00
1 row selected.
</code></pre><br />
This was quite unexpected. It also is not new; I tested it on various Oracle versions going back to 9.2.0.8, and it worked the same way on all of them.<br />
<br />
There’s always something to learn when working with complex pieces of software such as Oracle, even something as seemingly simple as formatting dates.
<br />
Who's using a database link? (2010-03-05)<br />
<br />
Every once in a while it is useful to find out which sessions are using a database link in an Oracle database. It's one of those things that you may not need very often, but when you do need it, it is usually rather important.<br />
<br />
Unfortunately for those of us charged with the care and feeding of the Oracle RDBMS, this information is not terribly easy to track down.<br />
<br />
Some years ago, when I first needed to determine which sessions were at each end of a database link, I found a script that was supplied courtesy of <a href="http://www.linkedin.com/in/markjbobak">Mark Bobak</a>. When asked, Mark said he got the script from <a href="http://asktom.oracle.com/">Tom Kyte</a>. <a href="http://yong321.freeshell.org/">Yong Huang</a> includes this <a href="http://yong321.freeshell.org/computer/x$table.html">script</a> on his website, and notes that Mark further attributed authorship in Metalink Forum thread 524821.994. Yong has informed me that this note is no longer available. I have found the <a href="http://www.morganslibrary.org/reference/xdollarsign.html">script</a> in other locations as well, such as <a href="http://www.morganslibrary.com/about.html">Dan Morgan's</a> website. So now you know the script's provenance.<br />
<br />
<span style="font-family: inherit; font-size: small;">Here's the script, complete with comments. Following the script is an example of usage. </span><br />
<br />
<br />
<pre style="background-color: #eeeeee; border: 1px dashed rgb(153, 153, 153); color: black; font-family: Andale Mono,Lucida Console,Monaco,fixed,monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"><code>-- who is querying via dblink?
-- Courtesy of Tom Kyte, via Mark Bobak
-- this script can be used at both ends of the database link
-- to match up which session on the remote database started
-- the local transaction
-- the GTXID will match for those sessions
-- just run the script on both databases
Select /*+ ORDERED */
substr(s.ksusemnm,1,10)||'-'|| substr(s.ksusepid,1,10) "ORIGIN",
substr(g.K2GTITID_ORA,1,35) "GTXID",
substr(s.indx,1,4)||'.'|| substr(s.ksuseser,1,5) "LSESSION" ,
s2.username,
substr(
decode(bitand(ksuseidl,11),
1,'ACTIVE',
0, decode( bitand(ksuseflg,4096) , 0,'INACTIVE','CACHED'),
2,'SNIPED',
3,'SNIPED',
'KILLED'
),1,1
) "S",
substr(w.event,1,10) "WAITING"
from x$k2gte g, x$ktcxb t, x$ksuse s, v$session_wait w, v$session s2
where g.K2GTDXCB =t.ktcxbxba
and g.K2GTDSES=t.ktcxbses
and s.addr=g.K2GTDSES
and w.sid=s.indx
and s2.sid = w.sid
</code></pre><br />
Now let's take a look at the results of the script.<br />
<br />
Logging on to DB1 as system, create a database link to db2 using the SCOTT account:<br />
<br />
<pre style="background-color: #eeeeee; border: 1px dashed rgb(153, 153, 153); color: black; font-family: Andale Mono,Lucida Console,Monaco,fixed,monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"><code>create database link scott_link connect to scott identified by "tiger" using 'db2';
</code></pre><br />
Make sure it works:<br />
<br />
<pre style="background-color: #eeeeee; border: 1px dashed rgb(153, 153, 153); color: black; font-family: Andale Mono,Lucida Console,Monaco,fixed,monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"><code>system@db1 SQL> select sysdate from dual@scott_link;
SYSDATE
-------------------
03/05/2010 10:13:00
1 row selected.
</code></pre><br />
Now logon to DB1 as sysdba and run who_dblink.sql:<br />
<br />
<pre style="background-color: #eeeeee; border: 1px dashed rgb(153, 153, 153); color: black; font-family: Andale Mono,Lucida Console,Monaco,fixed,monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"><code>sys@db1 SQL> @who_dblink
ORIGIN GTXID LSESSION USERNAME S WAITING
--------------------- ----------------------------------- ---------- ---------- - ----------
oraserver.-<b>21901</b> DB1.d6d6d69e.3.16.7190 500.15059 SYSTEM I SQL*Net me
1 row selected.
</code></pre><br />
Now do the same on DB2:<br />
<br />
<pre style="background-color: #eeeeee; border: 1px dashed rgb(153, 153, 153); color: black; font-family: Andale Mono,Lucida Console,Monaco,fixed,monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"><code>sys@db2 SQL> @who_dblink
ORIGIN GTXID LSESSION USERNAME S WAITING
--------------------- ----------------------------------- ---------- ---------- - ----------
ordevdb01.-<b>21903</b> DB1.d6d6d69e.3.16.7190 138.28152 SCOTT I SQL*Net me
1 row selected.
</code></pre><br />
How do you identify the session on the database where the database link connection was initiated?<br />
<br />
Notice that the output from DB1 shows the PID in the ORIGIN column. In this case it is <b>21901</b>.<br />
<br />
Running the following SQL on DB1 we can identify the session from which the SYSTEM user initiated the database link connection:<br />
<br />
<pre style="background-color: #eeeeee; border: 1px dashed rgb(153, 153, 153); color: black; font-family: Andale Mono,Lucida Console,Monaco,fixed,monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"><code>select
b.username,
b.sid SID,
b.serial# SERIAL#,
b.osuser,
b.process
from v$session b,
v$process d,
v$sess_io e,
v$timer
where
b.sid = e.sid
and b.paddr = d.addr
and b.username is not null
-- the filter below is commented out so that all sessions
-- are shown, whether they have done IO or not
--and (e.consistent_Gets + e.block_Gets) > 0
-- uncomment to see only your own session
--and userenv('SESSIONID') = b.audsid
order by
b.username,
b.osuser,
b.serial#,
d.spid,
b.process,
b.status,
e.consistent_Gets,
e.block_Gets,
e.Physical_reads;
USERNAME SID SERIAL# OSUSER PROCESS
---------- ------ -------- -------------------- ------------------------
SYS 507 12708 oracle 22917
SYSTEM 500 15059 oracle <b>21901</b>
2 rows selected.</code></pre><br />
The session created using the SCOTT account via the database link can also be seen using the same query on DB2 and looking for the session with a PID of 21903:<br />
<br />
<pre style="background-color: #eeeeee; border: 1px dashed rgb(153, 153, 153); color: black; font-family: Andale Mono,Lucida Console,Monaco,fixed,monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"><code>USERNAME SID SERIAL# OSUSER PROCESS
---------- ---- ------- ------- ------------
SCOTT 138 28152 oracle <b>21903</b>
SYS 147 33860 oracle 22991
146 40204 oracle 24096
3 rows selected.
</code></pre>
<br />
Treasure Trove of Oracle Security Documents (2010-03-02)<br />
<br />
This morning I stumbled across a veritable treasure trove of Oracle security documents at the web site of the Defense Information Systems Agency.<br />
<br />
I've only spent a few minutes glancing through a small sampling of them, and they appear to be fairly comprehensive. When you consider that these are documents used to audit sensitive database installations, it makes a lot of sense that they would cross all the t's and dot all the i's.<br />
<br />
The index of documents can be found here: <a href="http://iase.disa.mil/stigs/checklist/index.html">DISA IT Security Documents</a><br />
<br />
As you can see for yourself, there are also documents covering security concerns for a number of other systems.
<br />
Cool but unknown RMAN feature (2010-02-19)<br />
<br />
Unknown to me anyway, until just this week.<br />
<br />
Some time ago I read a post about RMAN on Oracle-L that detailed what seemed like a very good idea.<br />
<br />
The poster's RMAN scripts were written so that the only connection made while taking backups was a local one, using only the control file for the RMAN repository.<br />
<pre style="background-color: #eeeeee; border: 1px dashed rgb(153, 153, 153); color: black; font-family: Andale Mono,Lucida Console,Monaco,fixed,monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"><code>rman target sys/manager nocatalog</code></pre><br />
After the backups were made, a connection was made to the RMAN catalog and a RESYNC CATALOG command was issued.<br />
<br />
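In outline, that approach looks something like this (a sketch only; rman/password@rcat is a placeholder connect string):<br />
<br />
<pre style="background-color: #eeeeee; border: 1px dashed rgb(153, 153, 153); color: black; font-family: Andale Mono,Lucida Console,Monaco,fixed,monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"><code># backup uses only the control file as the repository,
# so it succeeds even if the catalog database is down
rman target / nocatalog <<EOF
backup database plus archivelog;
EOF

# later, when the catalog is reachable, bring it up to date
rman target / catalog rman/password@rcat <<EOF
resync catalog;
EOF
</code></pre><br />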
The reason for this was that if the catalog was unavailable for some reason, the backups would still succeed, which would not be the case with this command:<br />
<br />
<br />
<pre style="background-color: #eeeeee; border: 1px dashed rgb(153, 153, 153); color: black; font-family: Andale Mono,Lucida Console,Monaco,fixed,monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"><code>rman target sys/manager catalog rman/password@rcat</code></pre><br />
This week I found out this is not true.<br />
<br />
Possibly this is news to no one but me, but I'm sharing anyway. :)<br />
<br />
Last week I cloned an apps system and created a new OID database on a server. I remembered to do nearly everything, but I did forget to set up TNS so that the catalog database could be found.<br />
<br />
After setting up the backups via NetBackup, the logs showed that there was an error condition, but the backup obviously succeeded:<br />
<br />
<pre style="background-color: #eeeeee; border: 1px dashed rgb(153, 153, 153); color: black; font-family: Andale Mono,Lucida Console,Monaco,fixed,monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"><code>archive log filename=/u01/oracle/oradata/oiddev/archive/oiddev_arch_1_294_709899427.dbf recid=232 stamp=710999909
deleted archive log
archive log filename=/u01/oracle/oradata/oiddev/archive/oiddev_arch_1_295_709899427.dbf recid=233 stamp=710999910
Deleted 11 objects
Starting backup at 16-FEB-10
released channel: ORA_DISK_1
allocated channel: ORA_SBT_TAPE_1
channel ORA_SBT_TAPE_1: sid=369 devtype=SBT_TAPE
channel ORA_SBT_TAPE_1: VERITAS NetBackup for Oracle - Release 6.0 (2008081305)
channel ORA_SBT_TAPE_1: starting full datafile backupset
channel ORA_SBT_TAPE_1: specifying datafile(s) in backupset
including current controlfile in backupset
channel ORA_SBT_TAPE_1: starting piece 1 at 16-FEB-10
channel ORA_SBT_TAPE_1: finished piece 1 at 16-FEB-10
piece handle=OIDDEV_T20100216_ctl_s73_p1_t711086776 comment=API Version 2.0,MMS Version 5.0.0.0
channel ORA_SBT_TAPE_1: backup set complete, elapsed time: 00:00:45
Finished backup at 16-FEB-10
Starting Control File and SPFILE Autobackup at 16-FEB-10
piece handle=c-3982952863-20100216-02 comment=API Version 2.0,MMS Version 5.0.0.0
Finished Control File and SPFILE Autobackup at 16-FEB-10
RMAN> RMAN>
Recovery Manager complete.
Script /usr/openv/netbackup/scripts/oiddev/oracle_db_rman.sh
==== ended in error on Tue Feb 16 04:07:59 PST 2010 ====
</code></pre><br />
That seemed rather strange, and it was happening in both of the new databases.<br />
The key to this was to look at the top of the log file, where I found the following:<br />
<br />
<pre style="background-color: #eeeeee; border: 1px dashed rgb(153, 153, 153); color: black; font-family: Andale Mono,Lucida Console,Monaco,fixed,monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"><code>BACKUP_MODE: lvl_0
BACKUP_TYPE: INCREMENTAL LEVEL=0
ORACLE_SID : oiddev
PWD_SID : oiddev
ORACLE_HOME: /u01/oracle/oas
PATH: /sbin:/usr/sbin:/bin:/usr/bin:/usr/X11R6/bin
Recovery Manager: Release 10.1.0.5.0 - Production
Copyright (c) 1995, 2004, Oracle. All rights reserved.
RMAN;
connected to target database: OIDDEV (DBID=3982952863)
RMAN;
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-04004: error from recovery catalog database: ORA-12154: TNS:could not resolve the connect identifier specified
RMAN;
Starting backup at 16-FEB-10
<b>using target database controlfile instead of recovery catalog</b>
allocated channel: ORA_SBT_TAPE_1
channel ORA_SBT_TAPE_1: sid=369 devtype=SBT_TAPE
channel ORA_SBT_TAPE_1: VERITAS NetBackup for Oracle - Release 6.0 (2008081305)
channel ORA_SBT_TAPE_1: starting incremental level 0 datafile backupset
</code></pre><br />
Notice the line near the bottom of the displayed output?<br />
<br />
The one that says "using target database controlfile instead of recovery catalog" ?<br />
<br />
RMAN will go ahead with the backup of the database even though the connection to the catalog database failed. This apparently only works when running in a scripted environment, as when I tried connecting on the command line RMAN would simply exit when the connection to the catalog could not be made.<br />
<br />
The RMAN scripts are being run on a linux server in the following format:<br />
<br />
<pre style="background-color: #eeeeee; border: 1px dashed rgb(153, 153, 153); color: black; font-family: Andale Mono,Lucida Console,Monaco,fixed,monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"><code>$OH/bin/rman target sys/manager catalog rman/password@rcat <<-EOF >> $LOGFILE
rman commands go here
EOF
</code></pre><br />
This was quite interesting to discover, and may be old news to many of you, but it was new to me.<br />
<br />
This is not exactly a new feature either: one of the databases being backed up is 9.2.0.6. And of course there is now no need to update the backup scripts.
<br />
Data Modeling (2009-11-12)<br />
<div>Most readers of this blog are probably DBAs, or do DBA work along with development or other duties.</div><div><br />
</div><div>Though my title is DBA, Data Modeling is something I really like to do.</div><div><br />
</div><div>When first learning Oracle, I cut my teeth on data modeling, and used CASE 5.1 on unix to model a database system. True, CASE 5.0 used an Oracle Forms 3.x based interface, and the GUI modeling was unix only.</div><div><br />
</div><div>That was alright with me, as the Form interface allowed manual changes to be made quite quickly.</div><div><br />
</div><div>And the graphic modeling tool was fairly decent, even on a PC running Hummingbird X Server.</div><div><br />
</div><div>When <a href="http://www.oracle.com/technology/products/designer/index.html">Designer 2000</a> came out, it was clearly a more capable tool. Not only did it do everything that CASE 5.1 could do, it could do more. I won't make any silly claim that I was ever able to fully exploit D2K, as it was an end-to-end tool that could do much more than model data and databases.</div><div><br />
</div><div>What it could do with just the databases however was quite good. Data models could be created, and then a physical database could be generated from the model.</div><div><br />
</div><div>Changes in the physical database could be reverse engineered back into the model, and changes in the model could be forward engineered into the physical database. D2K could truly separate logical and physical models, and allow changes to be migrated back and forth between the two.</div><div><br />
</div><div>There are other high end tools such as Erwin which can no doubt accomplish the same thing, but I have not used them. </div><div><br />
</div><div>One important differentiation for me between D2K and other tools was that D2K worked with <a href="http://en.wikipedia.org/wiki/Barker's_Notation">Barker Notation</a>, which is the notation I first learned, and the one I still prefer. </div><div><br />
</div><div>I should not speak of <a href="http://www.oracle.com/technology/products/designer/index.html">Designer 2000</a> in past tense I guess, as it is still available from Oracle as part of the Oracle Development Suite, but is now called Oracle Designer. It just hasn't received much attention in the past few years, as I think many people have come to think of data modeling as too much overhead. </div><div><br />
</div><div>I've tried several low end tools in the past few years, and while some claim to separate logical and physical models, those that I have tried actually do a rather poor job of it.</div><div><br />
</div><div>All this leads to some new (at least, new to me) developments from of all places, Microsoft.</div><div><br />
</div><div>Maybe you have heard of <a href="http://www.microsoft.com/soa/products/oslo.aspx">Oslo</a>, Microsoft's Data Modeling toolset that has been in development for the past couple years.</div><div><br />
</div><div>If you're just now hearing about it, you will likely be hearing much more. The bit I have read has made me think this will be a very impressive tool.</div><div><br />
</div><div>If you have done data modeling, you have likely used traditional tools that allow you to define entities, drop them on a graphical model, and define relationships.</div><div><br />
</div><div>The tool you used may even have allowed you to create domains that could be used to provide data consistency among the entities.</div><div><br />
</div><div>Oslo is different. </div><div><br />
</div><div>Oslo incorporates a data definition language <a href="http://msdn.microsoft.com/en-us/library/ee424598.aspx">M</a>. The definitions can be translated to T-SQL, which in turn can be used to create the physical aspects of the model. M also allows easy creation of strongly typed data types which are carried over into the model.</div><div><br />
</div><div>Whether Oslo will allow round trip engineering ala D2K, I don't yet know.</div><div><br />
</div><div>I do however think this is a very innovative approach to modeling data. </div><div><br />
</div><div>Here are a few Oslo related links to peruse :</div><div><br />
</div><div><a href="http://www.microsoft.com/soa/products/oslo.aspx">About Oslo</a><br />
</div><div><a href="http://msdn.microsoft.com/en-us/library/ee424598.aspx">Using Oslo to Speed Up Database Development</a></div><div><a href="http://msdn.microsoft.com/en-us/oslo/default.aspx">Compendium of Oslo Articles</a></div><div><br />
</div><div>You may be thinking that I have given <a href="http://www.oracle.com/technology/products/database/datamodeler/index.html">SQL Developer Data Modeler</a> short shrift.</div><div><br />
</div><div>Along with a lot of other folks, I eagerly anticipated the arrival of SQL Developer Data Modeler.</div><div><br />
</div><div>And along with many others, I was disappointed to learn that this add-on to SQL Developer would set us back a cool $3000 US per seat. That seems a pretty steep price for a tool that is nowhere near as capable as Oracle Designer, which is included as part of the Oracle Internet Developer Suite. True, the Suite's price is nearly double that of SQL Modeler at $5800, but you get quite a bit more than just Designer with the Suite.</div><div><br />
</div><div>As for the cost of Oslo, it's probably too early to tell.</div><div><br />
</div><div>Some reading suggests that it will be included as part of SQL Server 2008, but it's probably too soon to tell.</div><div><br />
</div><div>Why all the talk about a SQL Server specific tool?</div><div><br />
</div><div>Because data modeling has been in a rut for quite some time, and Microsoft seems to have broken out of that rut. It's time for Oracle to take notice and provide better tools for modeling, rather than upholding the status quo.</div><div><br />
</div>
<br />
MetaLink, we barely knew ye (2009-11-09)<br />
<br />
But, we wish we had more time to get better acquainted.<br />
<br />
If you work with Oracle, you probably know that MetaLink went the way of the <a href="http://en.wikipedia.org/wiki/Dodo">Dodo</a> as part of an upgrade to <a href="https://support.oracle.com/CSP/ui/flash.html">My Oracle Support</a> during the weekend of November 6th, 2009. <br />
<br />
And so far it hasn't gone too well, as evidenced by these threads on Oracle-L:<br />
<br />
<a href="http://www.freelists.org/post/oracle-l/Issues-with-My-Oracle-Support">Issues with My Oracle Support</a><br />
<a href="http://www.freelists.org/post/oracle-l/Metalink-Fiasco">Metalink Fiasco</a><br />
<br />
Many people were <a href="http://jkstill.blogspot.com/2009/07/classic-metalink-vs-my-oracle-support.html">lamenting the loss of MetaLink</a> well before its demise, but I don't think any were quite expecting the issues that are currently appearing.<br />
<br />
A few have reported that it is working fine for them, but personally, I have found it unusable all morning.<br />
<br />
At least one issue with MetaLink appears to have been cleared up in MOS, or at least it had been while I was still able to log in to it last week.<br />
<br />
During a routine audit of who had access to our CSI numbers, I came across a group of consultants that were no longer working for our company, and froze their accounts. The next day I received a frantic voice mail from a member of the consulting firm, who informed me that they had no access to MetaLink because I had frozen their accounts.<br />
<br />
<div style="margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px;"><br />
</div><div style="margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px;">I returned the call just a few minutes later, but they had already been able to resolve the issue, as one of their consultants with admin rights had been unaffected, and was able to unfreeze their accounts.<br />
</div><div style="margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px;"><br />
</div><div style="margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px;"><br />
</div><div style="margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px;">Removing them from the CSI is the better procedure, but in the past when I have attempted to do so, I found that there were still open issues owned by the accounts, and could not remove them. The application owners had been very clear that this access should be removed, so freezing the accounts is what I did on this occasion as well.<br />
</div><div><br />
</div><br />
<div style="margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px;">This all seemed quite bizarre to me. This must be a very strange schema in the ML user database, and some strange logic to go along with it. By granting a user access to a CSI, MetaLink was giving me Carte Blanche to effectively remove them from MetaLink.<br />
</div><div style="margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px;"><br />
</div><div style="margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px;">How has My Oracle Support fixed this? Try as I might, I could not find a 'freeze' button in user administration in MOS. So the fix seems to have been "remove the button".<br />
</div><div style="margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px;"><br />
</div>Jaredhttp://www.blogger.com/profile/09187267459019576469noreply@blogger.com8tag:blogger.com,1999:blog-20737409.post-40000385142840146382009-10-15T12:13:00.000-07:002009-12-30T08:58:51.053-08:00Oracle Open World 2009 Report - Part Two<span style="font-family: Arial; font-size: small;"><span style="font-size: 13px;"></span></span><br />
<span style="font-family: Arial; font-size: small;"><b>Tuesday October 13th</b><br />
<br />
<b>Unconference on Indexes</b><br />
<b>Richard Foote</b><br />
<b>10/13/2009 10:00 AM</b><br />
<br />
I started off the day attending the indexing presentation of fellow <a href="http://oaktable.net/">Oak Table</a> member Richard Foote. Foote has become quite well known for his expertise on index internals since the publication of <a href="http://richardfoote.wordpress.com/2007/12/11/index-internals-rebuilding-the-truth">Oracle B-Tree Index Internals: Rebuilding the Truth</a><br />
<br />
This was basically a Q&A session, and I will include just a couple of the questions.<br />
<br />
Q: Have you ever seen an index Skip Scan used correctly?<br />
A: The short answer was 'No'<br />
<br />
Foote mentioned that he had only rarely seen an index skip scan used, and then inappropriately. For more information on skip scan, see <a href="http://richardfoote.wordpress.com/2008/03/10/index-skip-scan-does-index-column-order-matter-any-more-warning-sign">Foote's blog entry on Index Skip Scans</a><br />
<br />
Q: When can you safely drop an index that doesn't seem to be used?<br />
A: That is very difficult to determine<br />
<br />
The explanation for this answer is that it is very difficult to determine if an index is never used. It does require some patience, as the code that uses the index may be run only rarely, making it difficult to determine whether the index is actually used.<br />
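Foote's point about patience can be helped along by Oracle's index monitoring feature. Below is a minimal sketch of a shell function that emits the relevant SQL; the owner and index names are hypothetical, and the function only prints the statements, so pipe the output to sqlplus yourself.<br />

```shell
#!/bin/bash
# Emit (not execute) the SQL for checking index usage via the
# MONITORING USAGE feature. Owner and index names are hypothetical.
gen_index_usage_sql() {
  local owner=$1 index=$2
  cat <<SQL
-- start recording whether the index is used
alter index ${owner}.${index} monitoring usage;

-- check back after the rarely run code has had a chance to execute
select index_name, table_name, monitoring, used
from v\$object_usage
where index_name = '${index}';

-- turn recording off when done
alter index ${owner}.${index} nomonitoring usage;
SQL
}

gen_index_usage_sql SCOTT EMP_IDX
```

Note that v$object_usage records only a yes/no flag, so even a 'used' index may be one whose benefit is marginal.<br />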
<br />
<b>Oracle Closed World</b><br />
<br />
OCW actually started on Monday, though due to the wonders of technology I missed it on that day. The event was invitation only, either by being present when it was mentioned, or by receiving an SMS text on your phone.<br />
<br />
This is where technology comes in. The SMS was rather garbled, and I received through a series of very short SMS messages what seemed to be an invitation to stroll into a dark alley somewhere in downtown San Francisco. It was later cleared up and I attended on Tuesday.<br />
<br />
Oracle Closed World is the brainchild of Mogens Norgaard, another Oak Table member, and co-founder of the Oracle consulting firm <a href="http://www.miracleas.com/">Miracle AS</a>.<br />
<br />
On Tuesday <a href="http://www.jlcomp.demon.co.uk/">Jonathan Lewis</a> reprised his "How to be an Expert" presentation, the difference being that this audience was comprised of folks with a wide breadth of Oracle knowledge.<br />
<br />
Lewis took advantage of this by making the questions harder, and chiding the audience for not knowing the answers. All was in good fun. Undoubtedly the presence of beer didn't make the questions any easier to answer.<br />
<br />
Wednesday was a presentation by <b>Jeremiah Wilton</b>, Oak Table member and formerly a DBA at Amazon.com. <br />
<br />
Wilton presented a live demo on using Amazon's Elastic Compute Cloud (EC2) to provision a Linux server, using Elastic Block Store (EBS) to provide persistent storage, and preconfigured Amazon Machine Images (AMI) to provision the server with Oracle already installed.<br />
<br />
That Wilton was able to complete the task during a 1 hour live demo, complete with the inevitable mishaps that can occur in any live demo, was quite impressive.<br />
<br />
This appears to be a great way to set up test instances of Oracle for experimentation. There are companies using it for production as well.<br />
<br />
<a href="http://aws.amazon.com/">Amazon Web Services</a><br />
<br />
<b>Perl - A DBA's and Developers Best (forgotten) Friend</b><br />
<b>Arjen Visser - Avisit Solutions</b><br />
<b>10/13/2009 </b><br />
<br />
Perl is a topic near and dear to my heart.<br />
<br />
I have been using it since version 4 in the early 1990's, and have advocated its use ever since. It is a robust and powerful language with a huge collection of libraries developed by the user community and archived in the <a href="http://cpan.org/">Comprehensive Perl Archive Network</a>.<br />
<br />
When I spotted the Perl session on the schedule I immediately signed up for it.<br />
<br />
What I had not noticed was the subtitle indicating it was a session for beginners.<br />
<br />
No matter, I had to go.<br />
<br />
The session began with a concise but clear introduction to Perl basics.<br />
<br />
So far, so good.<br />
<br />
When the time came to discuss Perl connectivity to Oracle, it was a bit surprising to be confronted with a slide showing how to use Perl as a wrapper for sqlplus.<br />
<br />
"Surely," I thought, "this is going to be a slide showing how not to do it."<br />
<br />
If you have used Perl with Oracle, you are no doubt familiar with <a href="http://search.cpan.org/%7Etimb/DBI-1.609/DBI.pm">DBI </a> and <a href="http://search.cpan.org/%7Epythian/DBD-Oracle-1.23/Oracle.pm">DBD::Oracle</a><br />
<br />
DBI is the Perl Database Interface module developed and maintained by <a href="http://blog.timbunce.org/">Tim Bunce</a>.<br />
<br />
DBD::Oracle is the Oracle driver for DBI, also originally developed and maintained by Tim Bunce, and now maintained by <a href="http://www.pythian.com/">The Pythian Group</a>.<br />
<br />
DBI and DBD::Oracle are very mature and robust Perl packages for using Perl with Oracle.<br />
<br />
You would also likely know that using Perl as a wrapper for sqlplus is very cumbersome and inelegant. So as to not write a whole treatise on why you shouldn't do this, I will simply say that doing so is rarely necessary, and never an optimal method.<br />
<br />
Which brings us to the next slide in the presentation, which had a diagram showing how DBI and DBD::Oracle fit into the Perl architecture.<br />
<br />
The speaker then told the audience that these were hard to install and difficult to use, and didn't recommend using them.<br />
<br />
After picking my jaw back up off the floor, I lost all interest in the rest of the presentation. I don't remember what the rest of the slides were. Maybe I blacked out from the shock. What I remember is walking away from the presentation rather incredulous.<br />
<br />
Just last week, a friend who had not used Perl asked me how to install it on a Solaris server. With only a few lines of email that I typed from memory, he was able to successfully install DBI and DBD::Oracle.<br />
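For the curious, the gist of that email was along these lines. This is a sketch from memory rather than a definitive procedure - the tarball versions match the CPAN links above, the ORACLE_HOME path is illustrative, and the function only prints the steps rather than running them.<br />

```shell
#!/bin/bash
# Print (not run) typical from-source install steps for DBI and
# DBD::Oracle. Versions and ORACLE_HOME path are illustrative only.
show_install_steps() {
  cat <<'STEPS'
# as the oracle software owner, with ORACLE_HOME set:
export ORACLE_HOME=/u01/app/oracle/product/10.2.0
gunzip -c DBI-1.609.tar.gz | tar xf - && cd DBI-1.609
perl Makefile.PL && make && make test && make install
cd ..
gunzip -c DBD-Oracle-1.23.tar.gz | tar xf - && cd DBD-Oracle-1.23
perl Makefile.PL && make && make test && make install
STEPS
}

show_install_steps
```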
<br />
Hard to install indeed.<br />
<br />
<b>11 Things about 11gR2</b><br />
<b>Tom Kyte</b><br />
<br />
Really it was Tom's top 10 list for 11gR2 - he liked his favorite feature so much he counted it twice.<br />
<br />
And that is the one I will mention.<br />
<br />
It is Edition-Based Redefinition.<br />
<br />
In a nutshell this feature allows you to create a new namespace for PL/SQL objects, creating new versions in a production database.<br />
<br />
This will allow upgrading applications with little or no downtime, something that has always been one of the DBA holy grails.<br />
<br />
Rather than try to explain it (OK, I don't yet know how it works) I will just tell you to take a look at Chapter 19 in the 11gR2 Advanced Application Developer's Guide.<br />
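That said, a glance at the documentation suggests the basic DDL looks something like the following. This is an untested sketch printed by a shell function (the edition and user names are made up); treat it as a pointer to the docs, not a recipe.<br />

```shell
#!/bin/bash
# Print a sketch of 11gR2 edition-based redefinition DDL.
gen_ebr_sql() {
  cat <<'SQL'
-- create a new edition to hold the changed PL/SQL
create edition release_2;
grant use on edition release_2 to scott;

-- compile new versions of procedures while old sessions
-- keep running the prior versions
alter session set edition = release_2;

-- once cutover is complete, new sessions get the new edition
alter database default edition = release_2;
SQL
}

gen_ebr_sql
```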
<br />
<b>Wednesday Keynote</b><br />
<b>Larry Ellison</b><br />
<br />
Ellison promised to discuss 4 topics, I will include 3 of them.<br />
<br />
I left before the Fusion Middleware discussion.<br />
<br />
<b>Oracle enterprise linux update</b><br />
<br />
One interesting fact presented was a survey performed by HP detailing Linux usage in corporate data centers. The numbers are rather surprising.<br />
<br />
* Oracle Enterprise Linux 65%<br />
* Redhat 37%<br />
* Suse 15%<br />
* Other 2%<br />
<br />
<br />
Next was the second generation of the <b>Exadata Database Machine</b>.<br />
<br />
Essentially, it is faster than gen 1.<br />
<br />
It was used to set a new TPC-C benchmark record - I believe it was 1,000,000 transactions per second.<br />
<br />
Ellison was proud of the record being 16 times faster than the record previously set by IBM, and rightfully so if those numbers are correct.<br />
<br />
It seems IBM has challenged the results however, claiming the Exadata 2 is 'only 6 times faster'. As you might imagine, Ellison had some fun with that, even offering a $10 million prize to anyone who can show that a Sun Exadata machine cannot run their application at least twice as fast as any other system. IBM is invited to participate.<br />
<br />
At this time Ellison welcomed a special guest to the stage: California Governor <b>Arnold Schwarzenegger</b>.<br />
<br />
On being in a room with so many IT folks, Schwarzenegger quipped, "As I came out on stage I felt my IQ shoot up 10 points."<br />
<br />
Schwarzenegger talked for a few minutes on the impact of technology on people's lives. "Technology's impact is flesh and blood," he said, in reference to how tech is used to aid the response of public services such as firefighting.<br />
<br />
Arnold called for a round of applause for Larry Ellison and Scott McNealy for being technology leaders.<br />
<br />
The camera cut to Ellison, looking uncharacteristically humble as he mouthed 'Thank you'. <br />
<br />
After Schwarzenegger left the stage, Ellison continued, this time discussing<b> My Oracle Support.</b><br />
<br />
My Oracle Support has been a hot topic lately, as the majority of DBAs are less than thrilled with the new Flash interface being imposed. It is my understanding that an HTML version of the interface will be maintained, so we won't have to deal with Flash if we don't want to.<br />
<br />
Here's where it gets interesting - the unification of Oracle Enterprise Manager and My Oracle Support.<br />
<br />
There is now a 'My Oracle Support' tab in OEM.<br />
<br />
DBAs will be allowed to opt in to OCM, Oracle Configuration Manager, allowing Oracle to perform automated discovery of bugs and needed patches, either in Oracle software or in other vendors' software on the server (OS bugs).<br />
<br />
Oracle will then have a global database to mine for proactive response to possible problems.<br />
<br />
When a configuration is found to have issues, all users with that configuration can be proactively notified.<br />
<br />
The real news IMO though is the impact on patching.<br />
<br />
Oracle recently started offering a new patch package - the PSU (Patch Set Update).<br />
<br />
This is different from the CPU (Critical Patch Update) system, as it may require merge patches to resolve patch conflicts.<br />
<br />
If the My Oracle Support integration in OEM determines that a merge patch is needed, it will automatically file an SR requesting the patch and notify you when it is available.<br />
<br />
Even if you don't like OEM, this may be a good use of it.<br />
<br />
Ok, that's enough for now, time for lunch.<br />
<br />
<br />
<br />
</span>Jaredhttp://www.blogger.com/profile/09187267459019576469noreply@blogger.com4tag:blogger.com,1999:blog-20737409.post-50741338594774222332009-10-14T10:03:00.000-07:002009-10-14T10:03:13.978-07:00Oracle Open World Report for October 11th and 12thAs I am attending Open World 2009 on blogger credentials, it seems proper I should actually blog about it.<br />
<br />
So, here it is. I won't be blogging about keynotes or other things that will appear in the news the following day, but rather on some of the sessions I attend.<br />
<br />
As I got back to my room too late and too tired to do this properly on Monday, I am putting Sunday and Monday in the same post.<br />
<br />
Here goes:<br />
<br />
<br />
<b><span class="Apple-style-span" style="font-size: x-large;">Open World - Sunday 10/11/2009</span></b><br />
<br />
While attending Oracle Open World 2009, I thought it a good idea to make some report of sessions attended, and any interesting developments at OOW.<br />
<br />
Some of the sessions I attended may not be considered DBA topics. I thought it would be interesting to break out of the DBA mold for a bit and attend some sessions outside the usual DBA realm.<br />
<br />
<b>Sue Harper - Everyday Tasks with Oracle SQL Developer</b><br />
<br />
Sue Harper is the product manager for SQL Developer, and was presenting some of the useful new features of the SQL Developer 2.1 Early Adopter release.<br />
<br />
While I have used SQL Developer from the time it was first released as Raptor, until recently I used it simply as a database browsing tool. After exploring some of the features that allow writing reports with master/detail sections, I converted some SQL*Plus scripts for use with SQL Developer. <br />
<br />
SQL Developer is a very capable tool, so I attended this session to see what else I might be missing out on.<br />
<br />
There was only one hour allocated for the session, and given the first 15 minutes were consumed convincing the audience why they should be using SQL Developer, there was just that much less time available to see the new features.<br />
<br />
Taking a few minutes to market it is probably just part of the product manager's DNA.<br />
<br />
Some of the features demonstrated were actually available in 1.5, but maybe not widely known. As I have not used 2.1, I won't always differentiate between versions here. Some of these features may not be new to 2.1, maybe just improved.<br />
<br />
Though not a new feature in 2.1, a few minutes were used to demonstrate the built-in version control integration. This is a very useful feature, and can be set up for seamless integration with CVS, Subversion, Perforce, and one other I can't recall now. It's definitely worth a look.<br />
<br />
Some features that are new to 2.1 looked very useful:<br />
<br />
Persistent column organizing and hiding. When viewing data in SQL Developer, the columns may be easily rearranged and selected or de-selected for viewing. While previous versions allowed dragging columns around, 2.1 has a nice dialog that makes this much easier.<br />
<br />
New to 2.1 is column filtering. By right clicking on a cell in the data pane, a dialog can be brought up to filter the data based on values found. This allows filtering the data without requerying the table.<br />
<br />
Also new to 2.1 is an XML DB Repository Navigator. It was mentioned, but alas there was not time to demonstrate it.<br />
<br />
<a href="http://www.oracle.com/technology/products/database/sql_developer/index.html">http://www.oracle.com/technology/products/database/sql_developer/index.html</a><br />
<br />
<a href="http://sqldeveloper.oracle.com/">http://sqldeveloper.oracle.com/</a><br />
<br />
<br />
<b>Christopher Jones - Python/Django Framework</b><br />
<br />
This was a hands-on developer session centered on using the Python scripting language with the Django Web application framework, and it was a fun one. The lab was already set up, running Oracle Linux VMs with access via individual laptops in the training room.<br />
<br />
The lab was a go-at-your-own-pace session, with instructions both printed and available via browser. The browser-based instructions were the way to go, as the examples could be cut and pasted, saving a lot of typing.<br />
<br />
I wasn't quite able to complete the lab, as I kept encountering an error when running the web app. It was probably due to an error in one of the scripts I modified during the session, but enough was accomplished to see that the Django Framework looked very interesting. Perhaps even simple enough for a DBA to use. Yes, I did search the schedule for a similar Perl session, perhaps using Mason or somesuch. <br />
<br />
The training materials are going to be placed on OTN in the Oracle By Example section after Open World concludes.<br />
<br />
<a href="http://www.oracle.com/technology/obe/start/index.html">http://www.oracle.com/technology/obe/start/index.html</a><br />
<br />
<b>Ray Smith - Linux Shell Scripting Craftsmanship</b><br />
<br />
The last session I attended on Sunday was all about shell script craftsmanship. Ray Smith was sharing some common sense methods that can be used to greatly enhance your shell scripts.<br />
<br />
If you have done any software development, the information presented would be similar to what you already know.<br />
<br />
<ul><li>Use white space and format your code for readability.</li>
<li>Don't be overly clever - other people will have to read the shell script.</li>
<li>Format your scripts with a header explaining the purpose of the script, and separate sections for independent and dependent variables, and a section for the code.</li>
<li>Use getopts to control command line arguments.</li>
</ul><br />
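Pulling those points together, a script skeleton along the lines Smith described might look like this (the script name and its single option are hypothetical):<br />

```shell
#!/bin/bash
#--------------------------------------------------------------------
# chk_space.sh - report filesystems that are nearly full
# Purpose, author, and change history belong in this header block.
#--------------------------------------------------------------------

# --- independent variables: edit these per site ---
THRESHOLD=90

# --- dependent variables: derived, do not edit ---
REPORT=/tmp/chk_space.$$

# --- command line ---
while getopts t:h opt; do
  case $opt in
    t) THRESHOLD=$OPTARG ;;
    h) echo "usage: $0 [-t pct]"; exit 0 ;;
    *) echo "invalid argument specified"; exit 1 ;;
  esac
done

# --- main ---
echo "reporting filesystems over ${THRESHOLD}% full to ${REPORT}"
```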
<br />
<div style="margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px;">Smith strongly advocated that everyone in the audience obtain a copy of the book "The Art of Unix Programming" by Eric S. Raymond. This is not a new book by any means, but Smith drew upon it for many of the principles he advocated in scripting.<br />
</div><div><br />
</div><div><a href="http://www.amazon.com/exec/obidos/tg/detail/-/0131429019">The Art of Unix Programming</a><br />
</div><br />
<br />
A couple of tools new to me were mentioned:<br />
<br />
Zenity and Dialog - both of these provide graphical dialog boxes that may be called from shell scripts in Linux.<br />
<br />
<a href="http://freshmeat.net/projects/zenity">http://freshmeat.net/projects/zenity</a><br />
<br />
Dialog is installed with most Linux distributions, so just do <i>man dialog</i> to check it out.<br />
<br />
It was an interesting presentation. Though a lot of it was not new to me, the two dialog tools mentioned were, showing that no matter how well you think you may know a subject, you can always learn something from someone else.<br />
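Since not every server will have both tools installed (or an X display for Zenity), a script can probe for what is available before deciding how to prompt. A minimal sketch, with a function name of my own invention:<br />

```shell
#!/bin/bash
# Report which dialog tool a script could use on this host, falling
# back to plain echo when neither zenity nor dialog is installed.
pick_dialog_tool() {
  if command -v zenity >/dev/null 2>&1; then
    echo zenity
  elif command -v dialog >/dev/null 2>&1; then
    echo dialog
  else
    echo echo
  fi
}

pick_dialog_tool
```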
<br />
<br />
<span class="Apple-style-span" style="font-size: x-large; font-weight: bold;">Open World - Monday 10/12/2009</span><br />
<br />
<br />
<b>Jonathan Lewis Unconference - How to be an Expert</b><br />
<a href="http://www.jlcomp.demon.co.uk/">http://www.jlcomp.demon.co.uk/</a><br />
<a href="http://jonathanlewis.wordpress.com/">http://jonathanlewis.wordpress.com/</a><br />
<br />
Jonathan Lewis had an interesting unconference presentation.<br />
<br />
<a href="http://wiki.oracle.com/page/Oracle+OpenWorld+Unconference">http://wiki.oracle.com/page/Oracle+OpenWorld+Unconference</a><br />
<br />
In a nutshell, it comes down to this:<br />
<br />
You must practice, and practice quite a lot.<br />
<br />
To make the point, he used the joke about the American tourist asking the groundskeeper how the lawns of the Royal Estates are maintained to be so lush, have such an even texture, and in short, to be so perfect.<br />
<br />
The groundskeeper explained while the tourist took notes.<br />
<br />
First you must dig down 4 inches.<br />
<br />
Then you must put down a layer of charcoal.<br />
<br />
Then another 1 inch layer of fine sharp sand.<br />
<br />
Finally a layer of fine loam goes on top.<br />
<br />
You then must seed the lawn, and water it very well for 6 weeks.<br />
<br />
After 6 weeks, you must cut the grass, being very careful to remove only a small amount as you mow. This must be done three times a week.<br />
<br />
And then you continue doing this for 200 years.<br />
<br />
Ok, everyone had a good laugh at that, but the point was made.<br />
<br />
Reading some books and being able to run some scripts does not make you an expert. Lots and lots of practice may make you an expert, if you apply yourself well.<br />
<br />
During the presentation he asked a number of questions of the audience, made up mostly of DBAs. I will reprise a couple of them here.<br />
<br />
Q1: Assuming you have a simple heap table, with no indexes, you update a single column in 1 row of the table. How many bytes of redo will that generate?<br />
<br />
Q2: Who in the audience, when inserting data into a table, deliberately inserts duplicate data into the database?<br />
<br />
I will leave you to speculate on the answers a bit.<br />
<br />
Of those 2 questions, only 1 was answered correctly by the audience.<br />
<br />
<b>Leng Tan and Tom Kyte DBA 2.0 - Battle of the DBA's</b><br />
<br />
What is the difference between a DBA 1.0 (the old days) and a DBA 2.0 (the modern DBA)?<br />
<br />
DBA 2.0 has modern tools and a self-managing database, enabled by AWR and the Diagnostics and Tuning packs.<br />
<br />
DBA 1.0 uses scripts and works from the command line.<br />
<br />
On the stage in addition to Kyte and Tan were two DBAs, each with a laptop and an Oracle server to work on.<br />
<br />
Two scenarios were presented for a timed hands on problem that each DBA must work through.<br />
<br />
First scenario - Security Audit<br />
<br />
Each DBA is given 6 minutes to do a database audit and report on possible vulnerabilities.<br />
<br />
DBA 1.0 ran scripts to check for open accounts, default passwords, publicly granted packages and umask settings.<br />
<br />
After doing so he ran a script to remove privileges granted to PUBLIC, and locked a couple of accounts.<br />
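As an illustration of the kind of checks DBA 1.0 ran, a function like the following could emit the audit SQL for sqlplus. This is my own sketch, not the script used on stage; note that the default-password view dba_users_with_defpwd exists only in 11g and later.<br />

```shell
#!/bin/bash
# Print a few basic security-audit queries; pipe the output to sqlplus.
gen_audit_sql() {
  cat <<'SQL'
-- accounts that are open
select username, account_status
from dba_users
where account_status = 'OPEN';

-- accounts still using default passwords (11g and later)
select username from dba_users_with_defpwd;

-- commonly revoked packages still granted to PUBLIC
select table_name
from dba_tab_privs
where grantee = 'PUBLIC'
and table_name in ('UTL_FILE','UTL_TCP','UTL_SMTP','UTL_HTTP');
SQL
}

gen_audit_sql
```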
<br />
DBA 2.0<br />
<br />
DBA 2.0 worked from the Oracle Enterprise Manager console, using the Secure<br />
Configuration for Oracle Database.<br />
<br />
He was able to observe the database security score, navigate through several screens and correct the same security problems that DBA 1.0 did. Following that he was able to see that the security score for the database had improved.<br />
<br />
So the conclusion made by the presenter is that OEM is clearly superior because OEM will automatically generate the needed data every night.<br />
<br />
By contrast DBA 1.0 can only do one db at a time.<br />
<br />
I do not believe this demonstration to be a valid comparison - it's quite simple to run the same scripts against any number of databases from a driver script, and report on anomalies.<br />
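To make that point concrete, here is a minimal sketch of such a driver loop. The database list and audit script name are hypothetical, and 'echo' keeps it a dry run:<br />

```shell
#!/bin/bash
# Dry-run driver: run one audit script against every database in a list.
audit_all() {
  local dblist="orcl devdb rptdb"   # could also be read from a file
  local db
  for db in $dblist; do
    # swap 'echo' for the real call once tested, for example:
    # sqlplus -s "system@${db}" @security_audit.sql > "audit_${db}.log"
    echo "auditing ${db}"
  done
}

audit_all
```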
<br />
At this point it should be mentioned that DBA 1.0 spent 4 minutes explaining what he was going to do, another minute explaining what the scripts were doing, with less than 1 minute spent actually running the scripts.<br />
<br />
By comparison, DBA 2.0 was navigating through screens through nearly the entire 6 minutes.<br />
<br />
The statement was made by the presenter that doing this with scripts at the command line was far too tedious a task, and DBA 1.0 would never be able to accomplish the task for 200 databases.<br />
<br />
I won't belabor the point (well, not too much) but automating these kinds of tasks is relatively simple for command line tools. Which is easier and more productive? Automating a set of scripts to poll all of your databases, or navigating through OEM for 200 databases?<br />
<br />
The presenter referred to using OEM as "really convenient". Sorry, but I have never found OEM to be at all convenient. Whenever I run into problems with it, it requires an SR to fix it.<br />
<br />
There was a round 2 as well, regarding testing execution plans both before and after setting optimizer_features_enable to a newer version. OEM fared well here compared to the scripting method, as the scripts used 'explain plan' while OEM actually executed the queries to gather execution plan information.<br />
<br />
That isn't to say however that the scripts could not be modified to do the same. No, I am not completely against GUI environments. I am just against making more work for DBA tasks.<br />
<br />
Enough for now, I will report on Tuesday's sessions later this week.Jaredhttp://www.blogger.com/profile/09187267459019576469noreply@blogger.com11tag:blogger.com,1999:blog-20737409.post-25187056004423241272009-09-06T16:27:00.000-07:002009-09-08T12:07:10.826-07:00Shell TricksDBAs from time to time must write shell scripts. If your environment is strictly Windows-based, this article may hold little interest for you.<br /><br />Many DBAs, however, rely on shell scripting to manage databases. Even if you use OEM for many tasks, you likely use shell scripts to manage some aspects of DBA work.<br /><br />Lately I have been writing a number of scripts to manage database statistics - gathering, deleting, and importing and exporting, both to and from statistics tables and exp files.<br /><br />Years ago I started using the shell builtin getopts to gather arguments from the command line. A typical use might look like the following:<br /><br /><pre style="border: 1px dashed rgb(153, 153, 153); padding: 5px; overflow: auto; font-family: Andale Mono,Lucida Console,Monaco,fixed,monospace; color: rgb(0, 0, 0); background-color: rgb(238, 238, 238); font-size: 12px; line-height: 14px; width: 100%;"><code>while getopts d:u:s:T:t:n: arg<br />do<br />case $arg in<br />d) DATABASE=$OPTARG<br />echo DATABASE: $DATABASE;;<br />u) USERNAME=$OPTARG<br />echo USERNAME: $USERNAME;;<br />s) SCHEMA=$OPTARG<br />echo SCHEMA: $SCHEMA;;<br />T) TYPE=$OPTARG<br />echo TYPE: $TYPE;;<br />t) TABLE_NAME=$OPTARG;;<br />#echo TABLE_NAME: $TABLE_NAME<br />n) OWNER=$OPTARG<br />echo OWNER: $OWNER;;<br />*) echo "invalid argument specified"; usage;exit 1;<br />esac<br /><br />done<br /><br /></code></pre><br />In this example, the valid arguments are -d, -u, -s, -T, -t and -n. 
All of these arguments require a value.<br /><br />The command line arguments might look like this:<br />somescript.sh -d orcl -u system -s scott<br /><br />If an invalid argument such as -z is passed, the script will exit with the exit code set to 1.<br /><br />For the script to work correctly, some checking of the arguments passed to the script must be done.<br /><br />For this script, the rules are as follows:<br /><ul><li>-d and -u must always be set</li><li>-s must be set if -T is 'SCHEMA'</li><li>-t and -n must both have a value or be blank</li><li>-s must be used with -T</li></ul><div>In this example, values for -T other than 'SCHEMA' are not being checked.</div><div><br /></div><div>The usual method (at least for me) to test the validity of command line arguments has always been to use the test, or [] operator with combinations of arguments.</div><div><br /></div><div>For the command line arguments just discussed, the tests might look like the following:</div><div><br /></div><pre style="border: 1px dashed rgb(153, 153, 153); padding: 5px; overflow: auto; font-family: Andale Mono,Lucida Console,Monaco,fixed,monospace; color: rgb(0, 0, 0); background-color: rgb(238, 238, 238); font-size: 12px; line-height: 14px; width: 100%;"><code>[ -z "$DATABASE" -o -z "$USERNAME" ] && {<br />echo Database or Username is blank<br />exit 2<br />}<br /><br /># include schema name if necessary<br />[ "$TYPE" == 'SCHEMA' -a -z "$SCHEMA" ] && {<br />echo Please include schema name<br />exit 3<br />}<br /><br /># both owner and tablename must have a value, or both be blank<br />[ \( -z "$TABLE_NAME" -a -n "$OWNER" \) -o \( -n "$TABLE_NAME" -a -z "$OWNER" \) ] && {<br />echo Please specify both owner and tablename<br />echo or leave both blank<br />exit 4<br />}<br /><br /># if -s is set, so must -T<br />[ -n "$SCHEMA" -a -z "$TYPE" ] && {<br />echo Please include a type with -T<br />exit 5<br />}<br /><br /></code></pre><div><br /></div>As you can see, there are a fair number of 
tests involved to determine the validity of the command line arguments. You may have guessed why I skipped one for this demo - I just did not want to write any more tests.<div><br /></div><div>Validating command line arguments really gets difficult with a larger number of possible arguments. Worse yet, any later modifications to the script that require a new command line argument become dreaded tasks that are put off as long as possible due to the complexity of testing the validity of command line arguments.</div><div><br /></div><div>While writing a script that had 11 possible arguments, I was dreading writing the command line argument validation section, and I thought there must be a better way.</div><div><br /></div><div>It seemed that there must be a simple method of using regular expressions to validate combinations of command line arguments. I had never seen this done, and after spending a fair bit of time googling the topic it became apparent that there was no code available for a cut-and-paste solution, so it seemed a nice opportunity to be innovative.</div><div><br /></div><div>After experimenting a bit, I found what I think <i>is</i> a better way.</div><div><br /></div><div>The method I use is to concatenate all possible command line arguments into a ':' delimited string, and then use a set of pre-prepared regexes to determine whether or not the command line arguments are valid.</div><div><br /></div><div>One immediately obvious drawback to this method is that arguments containing the ':' character cannot be used. 
However the delimiting character can easily be changed if needed.</div><div><br /></div><div>Using the same example as before, the command line arguments are all concatenated into a string and converted to upper case:</div><div><br /></div><pre style="border: 1px dashed rgb(153, 153, 153); padding: 5px; overflow: auto; font-family: Andale Mono,Lucida Console,Monaco,fixed,monospace; color: rgb(0, 0, 0); background-color: rgb(238, 238, 238); font-size: 12px; line-height: 14px; width: 100%;"><code>ALLARGS=":$USERNAME:$DATABASE:$OWNER:$TABLE_NAME:$SCHEMA:$TYPE:"<br /># upper case args<br />ALLARGS=$(echo $ALLARGS | tr "[a-z]" "[A-Z]")<br /><br /></code></pre><div><br /></div>Next, a series of regular expressions are created. The first two are generic, and may or may not be used as building blocks for other regular expressions. The others all correspond to a specific command line argument.<div><br /><pre style="border: 1px dashed rgb(153, 153, 153); padding: 5px; overflow: auto; font-family: Andale Mono,Lucida Console,Monaco,fixed,monospace; color: rgb(0, 0, 0); background-color: rgb(238, 238, 238); font-size: 12px; line-height: 14px; width: 100%;"><code><br /># alphanumeric only, at least 1 character<br />export ALNUM1="[[:alnum:]]+"<br /># alphanumeric only, at least 3 characters<br />export ALNUM3="[[:alnum:]]{3,}"<br /># username - alphanumeric only at least 3 characters<br />export USER_RE=$ALNUM3<br /># database - alphanumeric only at least 3 characters<br />export DATABASE_RE=$ALNUM3<br /># owner - alphanumeric and _ and $ characters<br />export OWNER_RE='[[:alnum:]_$]+'<br /># table_name - alphanumeric and _, # and $ characters<br />export TABLE_RE='[[:alnum:]_#$]+'<br /># schema - alphanumeric and _ and $ characters<br />export SCHEMA_RE='[[:alnum:]_$]+'<br /><br /></code></pre><div><br /></div><div>These regular expressions could use further refinement (such as requiring the username to start with a letter) but are sufficient for this demonstration.</div><div><br /></div><div>Next, the regular expressions are concatenated together into ':' delimited strings, with each possible command line argument represented either by its corresponding regex, or by null.</div><div><br /></div><div>The regexes are stuffed into a bash array. For our example, it looks like this:</div><pre style="border: 1px dashed rgb(153, 153, 153); padding: 5px; overflow: auto; font-family: Andale Mono,Lucida Console,Monaco,fixed,monospace; color: rgb(0, 0, 0); background-color: rgb(238, 238, 238); font-size: 12px; line-height: 14px; width: 100%;"><code># : user : db : owner : table : schema : type<br />VALID_ARGS=(<br />":$USER_RE:$DATABASE_RE:$OWNER_RE:$TABLE_RE::(DICTIONARY_STATS|SYSTEM_STATS|FIXED_OBJECTS_STATS):" \<br />":$USER_RE:$DATABASE_RE::::(DICTIONARY_STATS|SYSTEM_STATS|FIXED_OBJECTS_STATS):" \<br />":$USER_RE:$DATABASE_RE:$OWNER_RE:$TABLE_RE:$SCHEMA_RE:(SCHEMA):" \<br />":$USER_RE:$DATABASE_RE:::$SCHEMA_RE:SCHEMA:")<br /><br /></code></pre>Notice that there are four different combinations of command line arguments represented.<div><br /></div><div>In all cases the USERNAME and DATABASE are required and must correspond to the regex provided.</div><div><br /></div><div>In the first combination of arguments, the owner and table must also be specified, and type (-T) must be one of DICTIONARY_STATS, SYSTEM_STATS or FIXED_OBJECTS_STATS.</div><div><br /></div><div>In the second possible combination, the only argument allowed in addition to DATABASE and USERNAME is the type (-T) argument.</div><div><br /></div><div>The third combination requires the OWNER, TABLE_NAME and SCHEMA arguments to have valid values, and the TYPE argument must be set to SCHEMA.</div><div><br /></div><div>The final combination of arguments requires just the SCHEMA argument and the TYPE argument must be set to SCHEMA, in addition to the USERNAME and DATABASE arguments.</div><div><br /></div><div>By now you likely want to know just how these regular expressions are
tested. The following function is used to test the command line arguments against each regular expression:</div><pre style="border: 1px dashed rgb(153, 153, 153); padding: 5px; overflow: auto; font-family: Andale Mono,Lucida Console,Monaco,fixed,monospace; color: rgb(0, 0, 0); background-color: rgb(238, 238, 238); font-size: 12px; line-height: 14px; width: 100%;"><code>function validate_args {<br />typeset arglist<br />arglist=$1<br /><br />while shift<br />do<br />[ -z "$1" ] && break<br />if [ $(echo $arglist | grep -E $1 ) ]; then<br />return 0<br />fi<br /><br />done<br />return 1<br /><br />}<br /><br /></code></pre><div><br /></div>Here's how it is used in the script:</div><div><br /><div><pre style="border: 1px dashed rgb(153, 153, 153); padding: 5px; overflow: auto; font-family: Andale Mono,Lucida Console,Monaco,fixed,monospace; color: rgb(0, 0, 0); background-color: rgb(238, 238, 238); font-size: 12px; line-height: 14px; width: 100%;"><code># VALID_ARGS must NOT be quoted or it will appear as a single arg in the function<br />validate_args $ALLARGS ${VALID_ARGS[*]}<br /><br /></code></pre><br /><div><div>While this method may appear somewhat confusing at first, it becomes less so after using it a few times. It greatly simplifies the use of many command line arguments that may appear in differing combinations.</div><div><br /></div><div>As far as I know, this method only works properly with the bash shell. I have done testing on only two shells, bash and ksh. It does not work properly on ksh.</div><div><br /></div><div>Here's a demonstration of the ksh problem. 
The following script is run from both ksh and bash:</div><div><br /></div><pre style="border: 1px dashed rgb(153, 153, 153); padding: 5px; overflow: auto; font-family: Andale Mono,Lucida Console,Monaco,fixed,monospace; color: rgb(0, 0, 0); background-color: rgb(238, 238, 238); font-size: 12px; line-height: 14px; width: 100%;"><code>function va {<br /><br /> echo ARG1: $1<br />}<br /><br /><br />R1="[[:alnum]]+"<br />R2="[[:alnum]]{3,}"<br /><br />va $R1<br />va $R2<br /></code></pre><div>And here are the results:</div></div></div><pre style="border: 1px dashed rgb(153, 153, 153); padding: 5px; overflow: auto; font-family: Andale Mono,Lucida Console,Monaco,fixed,monospace; color: rgb(0, 0, 0); background-color: rgb(238, 238, 238); font-size: 12px; line-height: 14px; width: 100%;"><code>18:9-jkstill-18 > ksh t3<br />ARG1: [[:alnum]]+<br />ARG1: [[:alnum]]3<br />[ /home/jkstill/bin ]<br /><br />jkstill-18 > bash t3<br />ARG1: [[:alnum]]+<br />ARG1: [[:alnum]]{3,}<br />[ /home/jkstill/bin ]<br /><br /><br /></code></pre><div><br /></div>Notice that when the script is run with ksh, the '{', '}' and ',' are removed from the regular expression. I could find no combination of quoting and escape characters that could prevent that from happening. This method of command line argument validation could be made to work using ksh if those characters are not used in the regexes. That would be rather limiting though.<div><br /></div><div>One other drawback you may have noticed with this method of validating command line arguments is that when an error condition is encountered, the exit code is always 1. With the [] method it was easy to exit with different codes to indicate the nature of the error. 
Something similar could likely be done by embedding a code into each set of regexes, but I will leave that as an exercise for the reader.</div><div><br /></div><div>The complete prototype script, as well as a test script can be downloaded:</div><div><br /></div><div><a href="http://jaredstill.com/downloads/valid_args/validate_args_proto.sh">validate_args_proto.sh</a></div><div><a href="http://jaredstill.com/downloads/valid_args/validate_args_proto_unit_test.sh">validate_args_proto_unit_test.sh</a></div><div><br /></div><div>The next article will include a set of functions used along with the validate_args() function to make shell scripts a bit more robust.</div><div><br /></div></div>Jaredhttp://www.blogger.com/profile/09187267459019576469noreply@blogger.com12tag:blogger.com,1999:blog-20737409.post-41827465743131081232009-08-06T14:50:00.000-07:002010-03-18T10:20:29.516-07:00Detecting Corrupt Data BlocksOr more accurately, how not to detect corrupt data blocks.<br />
<br />
This <a href="http://www.freelists.org/post/oracle-l/Lost-Writes">thread</a> on <a href="http://www.freelists.org/list/oracle-l">Oracle-L</a> concerns lost writes on a database.<br />
<br />
One <a href="http://www.freelists.org/post/oracle-l/Lost-Writes,4">suggestion</a> was made to use the exp utility to export the database, thereby determining if there are corrupt blocks in the database due to disk failure. I didn't give it much thought at first, but fellow <a href="http://oaktable.net/">Oak Table</a> member <a href="http://www.rightsizinginc.com/principals.htm">Mark Farnham</a> got me thinking about it.<br />
<br />
Using exp to detect corrupt blocks, or rather the absence of corrupt blocks, may work, but then again, it may not. It is entirely possible to do a full table scan on a table successfully, as would happen during an export, even though the blocks on disk have been corrupted.<br />
<br />
This can be demonstrated by building a table, ensuring the contents are cached, then destroying the data in the data file, followed by a successful export of the table.<br />
<br />
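The underlying principle can be sketched outside of Oracle. Below is a minimal shell illustration only (the file name and contents are arbitrary, and a shell variable stands in for the buffer cache): a copy of the data held in memory is unaffected when the file on disk is overwritten.

```shell
# Write some data to disk
printf 'important data' > /tmp/cache_demo.dat

# "Cache" the file contents in memory
CACHED=$(cat /tmp/cache_demo.dat)

# Corrupt the file on disk in place, leaving its length unchanged
dd if=/dev/zero of=/tmp/cache_demo.dat bs=1 count=14 conv=notrunc 2>/dev/null

# The cached copy still reads back intact
echo "$CACHED"    # prints: important data

rm -f /tmp/cache_demo.dat
```

Oracle's buffer cache behaves analogously: a session reading fully cached blocks never touches the damaged copies on disk.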
Granted, there are a lot of mitigating factors that could be taken into consideration as to whether or not this would happen in a production database. That's not the point: the point is that it could happen, so exp is not a reliable indicator of the state of the data files on disk.<br />
<br />
This test was performed on Oracle 10.2.0.4 EE on RH Linux ES 4. Both are 32 bit.<br />
<br />
First create a test tablespace:<br />
<br />
<pre style="background-color: #eeeeee; border: 1px dashed rgb(153, 153, 153); color: black; font-family: Andale Mono,Lucida Console,Monaco,fixed,monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"><code>create tablespace lost_write datafile '/u01/oradata/dv11/lost_write.dbf' size 1m
extent management local
uniform size 64k
/
</code></pre><br />
<br />
Next the table LOST_WRITE is created in the tablespace of the same name. This will be used to test the assertion that a successful export of the table can be done even though the data on disk is corrupt.<br />
<br />
<pre style="background-color: #eeeeee; border: 1px dashed rgb(153, 153, 153); color: black; font-family: Andale Mono,Lucida Console,Monaco,fixed,monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"><code>create table lost_write
cache
tablespace lost_write
as
select * from dba_objects
where rownum <= 1000
/
begin
dbms_stats.gather_table_stats(user,'LOST_WRITE');
end;
/
select tablespace_name, blocks, bytes
from user_segments
where segment_name = 'LOST_WRITE'
/
TABLESPACE_NAME BLOCKS BYTES
------------------------------ ---------- ----------
LOST_WRITE 16 131072
1 row selected.
</code></pre><br />
<br />
Next, do a full table scan and verify that the blocks are cached:<br />
<br />
<pre style="background-color: #eeeeee; border: 1px dashed rgb(153, 153, 153); color: black; font-family: Andale Mono,Lucida Console,Monaco,fixed,monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"><code>select * from lost_write;
Verify in cache:
select file#,block#,class#, status
from v$bh where ts# = (select ts# from sys.ts$ where name = 'LOST_WRITE')
order by block#
/
FILE# BLOCK# CLASS# STATUS
---------- ---------- ---------- -------
40 2 13 xcur
40 3 12 xcur
40 9 8 xcur
40 10 9 xcur
40 11 4 xcur
40 12 1 xcur
40 13 1 xcur
40 14 1 xcur
40 15 1 xcur
40 16 1 xcur
40 17 1 xcur
40 18 1 xcur
40 19 1 xcur
40 20 1 xcur
40 21 1 xcur
40 22 1 xcur
40 23 1 xcur
</code></pre><br />
<br />
Now swap the bytes in the file, skipping the first 2 Oracle blocks.<br />
Caveat: I don't know if that was the correct number of blocks to skip, and I didn't spend any time trying to find out.<br />
Also, I belatedly saw that count probably should have been 22 rather than 16, but the results still served the purpose of corrupting the datafile, as we shall see in a bit.<br />
<br />
This dd command uses the same file for both input and output, rewriting blocks 3-18 and swapping each pair of bytes.<br />
<br />
<pre style="background-color: #eeeeee; border: 1px dashed rgb(153, 153, 153); color: black; font-family: Andale Mono,Lucida Console,Monaco,fixed,monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"><code>dd if=/u01/oradata/dv11/lost_write.dbf of=/u01/oradata/dv11/lost_write.dbf bs=8129 skip=2 count=16 conv=swab,notrunc
</code></pre><br />
<br />
The effect is demonstrated by this simple test:<br />
<br />
<pre style="background-color: #eeeeee; border: 1px dashed rgb(153, 153, 153); color: black; font-family: Andale Mono,Lucida Console,Monaco,fixed,monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"><code>jkstill-19 > echo hello | dd
hello
0+1 records in
0+1 records out
[ /home/jkstill ]
jkstill-19 > echo hello | dd conv=swab
ehll
o0+1 records in
0+1 records out
</code></pre><br />
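Before pointing dd at a datafile, the in-place byte swap can be tried safely on a scratch file. This is a small sketch (the file name is arbitrary) showing that conv=swab,notrunc rewrites the file in place with each byte pair reversed:

```shell
# Create an 8 byte scratch file
printf 'ABCDEFGH' > /tmp/swab_demo.dat

# Swap each pair of bytes in place; notrunc prevents truncating the output file
dd if=/tmp/swab_demo.dat of=/tmp/swab_demo.dat conv=swab,notrunc 2>/dev/null

cat /tmp/swab_demo.dat    # prints: BADCFEHG

rm -f /tmp/swab_demo.dat
```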
<br />
Now we can attempt the export:<br />
<br />
<pre style="background-color: #eeeeee; border: 1px dashed rgb(153, 153, 153); color: black; font-family: Andale Mono,Lucida Console,Monaco,fixed,monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"><code>exp tables=\(jkstill.lost_write\) ...
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, Oracle Label Security, Data Mining and Real Application Testing options
Export done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set
About to export specified tables via Conventional Path ...
. . exporting table LOST_WRITE 1000 rows exported
Export terminated successfully without warnings.
</code></pre><br />
<br />
So, even though the data on disk has been corrupted, the export succeeded. That is due to the table being created with the CACHE option, and all the blocks being cached at the time of export. It may not be necessary to use the CACHE option, but I used it to ensure the test would succeed.<br />
<br />
Now let's see what happens when trying to scan the table again. First the NOCACHE option will be set on the table, then a checkpoint.<br />
<br />
<pre style="background-color: #eeeeee; border: 1px dashed rgb(153, 153, 153); color: black; font-family: Andale Mono,Lucida Console,Monaco,fixed,monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"><code>10:42:45 dv11 SQL> alter table lost_write nocache;
10:43:02 dv11 SQL> alter system checkpoint;
Now try to scan the table again:
10:43:14 ordevdb01.radisys.com - js001292@dv11 SQL> /
select * from lost_write
*
ERROR at line 1:
ORA-00376: file 40 cannot be read at this time
ORA-01110: data file 40: '/u01/oradata/dv11/lost_write.dbf'
</code></pre><br />
<br />
A corollary conclusion can be drawn from this example.<br />
<br />
If you do discover bad data blocks, you just might be able to do an export of the tables that are in the affected region before doing any recovery. This might be a belt and suspenders approach, but DBAs are not generally known for taking unnecessary chances when possible data loss is on the line.Jaredhttp://www.blogger.com/profile/09187267459019576469noreply@blogger.com3tag:blogger.com,1999:blog-20737409.post-15340428722597328132009-07-09T16:01:00.000-07:002009-07-15T11:11:35.566-07:00Classic MetaLink vs. My Oracle SupportIf you are in any way involved with supporting Oracle products, then you know that the death knell for Classic MetaLink has sounded. MetaLink will be unplugged at the end of July 2009.<div><br /></div><div>The new support site, My Oracle Support, seems to be causing some pain for quite a few people in the Oracle user community.</div><div><br /></div><div>Some of the complaints regard limited platform support due to the Adobe Flash 9 requirement, navigation, and response times.</div><div><br /></div><div>On the other hand there are some cool new features such as Power View, Configuration Manager and the new Advanced Search options.</div><div><br /></div><div>How do you feel about it?</div><div><br /></div><div>Here's a chance to let your opinion be known, as a poll has been created where you can vote on it.<br /><br /></div><div><a href="http://tinyurl.com/metalink-vs-myoraclesupport">Poll: MetaLink vs My Oracle Support</a></div><div><br /></div><div>At this time 637 voters have voiced their opinion about MetaLink and My Oracle Support.<br /><br />Current Poll results can be found in this Excel File: <a href="http://jaredstill.com/downloads/metalink_vs_myoraclesupport.zip">MetaLink vs My Oracle Support Results</a><br /></div><div><br /></div>Jaredhttp://www.blogger.com/profile/09187267459019576469noreply@blogger.com4tag:blogger.com,1999:blog-20737409.post-81387683439677595842009-04-29T10:24:00.001-07:002009-08-06T22:25:16.667-07:00Querying
v$lockThere have been a number of scripts made available for querying v$lock to diagnose locking issues.<br /><br />One example is a script I got long ago from tsawmiller on Oracle-L. The original script showlock.sql, or something close to it, is still available at OraFaq.com: <a href="http://www.orafaq.com/scripts/performance/lockall.txt">showlock.sql</a><br /><br />showlock.sql has morphed over the years to keep up with changing versions of Oracle.<br /><br />At one time showlock.sql resembled the OH/rdbms/admin/utllockt.sql script, in that it created a temporary table to speed up the results, as the join on v$lock, dba_sessions and dba_waiters was so slow.<br /><br />That was remedied at one point by the use of the ordered hint. That hint may no longer be necessary, but the script is still fast on all versions of Oracle that I need it on (9i-11g), and I am too lazy to test something that isn't broken.<br /><br />This script could still be further updated by the use of the v$lock_type view, eliminating the large decode statements in the script. As v$lock_type is not available in 9i though, I leave the decodes in. When the last 9i database is gone from our environment, however, the script can be shortened considerably.<br /><br />The decode statements were mostly lifted from a script provided by Oracle. MetaLink document (or My Oracle Support now I guess) # 1020008.6 has a 'fully decoded' locking script that is current through 11g I believe.<br /><br />The problem with that script, however, is that it does not correctly look up the name of the object that is locked.<br /><br />The reason I have even brought this up is that a bit of my workday yesterday was spent updating the script and making sure it worked as expected. The COMMAND column was also added. In addition, the outer joins were converted to the much neater ANSI join syntax, and one outer join was eliminated.<br /><br />Here's the output from a test.
It may be easier to read if you cut and paste it into a text editor, as the formatting here doesn't work well for wide output. Better yet, test the script and look at the output for yourself.<br /><br /><br /><pre style="font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; color: #000000; background-color: #eee;font-size: 12px;border: 1px dashed #999999;line-height: 14px;padding: 5px; overflow: auto; width: 100%"><code> Oracle Database Lock Mode Mode OS OS<br /> SID Usernam WATR BLKR Object COMMAND Type Lock Description Held Requested Program Process<br />------ ------- ----- ----- ------------------------- --------------- ---- ---------------- --------------- ---------- ------------------ -------<br /> 73 JKSTILL 83 JKSTILL.A SELECT TM DML enqueue lock Exclusive None sqlplus@poirot (TN 21430<br /> 83 JKSTILL 73 JKSTILL.A LOCK TABLE TM DML enqueue lock None Exclusive sqlplus@poirot (TN 21455<br /><br />2 rows selected.<br /><br /><br /></code></pre><br /><br />Though utllockt.sql may work well enough, it does have a couple of drawbacks:<br /><br /> 1. it does not provide enough information<br /> 2. it creates a temporary table.<br /><br />That second item means that you better be sure to run the script from a session separate from any holding locks. In production that probably does not matter, as that is what would normally be done anyway. 
During testing, however, it can be a bit frustrating until you realize that the DDL in the script is causing your locks to be released.<br /><br />What I like about this script is that it shows me what I need to know, and it is very fast.<br />Of course, now that I have stated that, someone will run it on a system where it performs poorly...<br /><br />For showlock.sql to work, the dba_waiters view must be created.<br />If this has not already been done, it can be created by logging in as SYSDBA and running the OH/rdbms/admin/catblock.sql script.<br /><br />Here's how you can easily test showlock.sql:<br /><br />Session A -<br />create table a (a integer);<br />lock table a in exclusive mode;<br /><br />Session B<br />lock table a in exclusive mode;<br /><br />Now either from session A or a new session, run the showlock.sql script.<br /><br />Here's the script.<br /><br /><br /><pre style="font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; color: #000000; background-color: #eee;font-size: 12px;border: 1px dashed #999999;line-height: 14px;padding: 5px; overflow: auto; width: 100%"><code>-- showlock.sql - show all user locks<br />--<br />-- see ML Note 1020008.6 for fully decoded locking script<br />-- parts of that script do not work correctly, but the<br />-- lock types are current<br />-- (script doesn't find object that is locked )<br />--<br />-- speeded up greatly by changing order of where clause,<br />-- jks 04/09/1997 - show lock addresses and lockwait<br /><br />-- jks 04/09/1997 - outer join on all_objects<br />-- encountered situation on 7.2<br />-- where there was a lock with no<br />-- matching object_id<br />-- jks 02/24/1999 - join to dba_waiters to show waiters and blockers<br />-- jkstill 05/22/2006 - revert back to previous version without tmp tables<br />-- update lock info<br />-- add lock_description and rearrange output<br />-- jkstill 04/28/2008 - added command column<br />-- updated lock types<br />-- removed one outer join
by using inline view on sys.user$<br />-- jkstill 04/28/2008 - added subquery factoring<br />-- converted to ANSI joins<br />-- changed alias for v$lock to l and v$session to s<br /><br />set trimspool on<br />ttitle off<br />set linesize 150<br />set pagesize 60<br />column command format a15<br />column osuser heading 'OS|Username' format a7 truncate<br />column process heading 'OS|Process' format a7 truncate<br />column machine heading 'OS|Machine' format a10 truncate<br />column program heading 'OS|Program' format a18 truncate<br />column object heading 'Database|Object' format a25 truncate<br />column lock_type heading 'Lock|Type' format a4 truncate<br />column lock_description heading 'Lock Description'format a16 truncate<br />column mode_held heading 'Mode|Held' format a15 truncate<br />column mode_requested heading 'Mode|Requested' format a10 truncate<br />column sid heading 'SID' format 999<br />column username heading 'Oracle|Username' format a7 truncate<br />column image heading 'Active Image' format a20 truncate<br />column sid format 99999<br />col waiting_session head 'WATR' format 9999<br />col holding_session head 'BLKR' format 9999<br /><br />with dblocks as (<br />select /*+ ordered */<br /> l.kaddr,<br /> s.sid,<br /> s.username,<br /> lock_waiter.waiting_session,<br /> lock_blocker.holding_session,<br /> (<br /> select name<br /> from sys.user$<br /> where user# = o.owner#<br /> ) ||'.'||o.name<br /> object,<br /> decode(command,<br /> 0,'BACKGROUND',<br /> 1,'Create Table',<br /> 2,'INSERT',<br /> 3,'SELECT',<br /> 4,'CREATE CLUSTER',<br /> 5,'ALTER CLUSTER',<br /> 6,'UPDATE',<br /> 7,'DELETE',<br /> 8,'DROP',<br /> 9,'CREATE INDEX',<br /> 10,'DROP INDEX',<br /> 11,'ALTER INDEX',<br /> 12,'DROP TABLE',<br /> 13,'CREATE SEQUENCE',<br /> 14,'ALTER SEQUENCE',<br /> 15,'ALTER TABLE',<br /> 16,'DROP SEQUENCE',<br /> 17,'GRANT',<br /> 18,'REVOKE',<br /> 19,'CREATE SYNONYM',<br /> 20,'DROP SYNONYM',<br /> 21,'CREATE VIEW',<br /> 22,'DROP VIEW',<br /> 
23,'VALIDATE INDEX',<br /> 24,'CREATE PROCEDURE',<br /> 25,'ALTER PROCEDURE',<br /> 26,'LOCK TABLE',<br /> 27,'NO OPERATION',<br /> 28,'RENAME',<br /> 29,'COMMENT',<br /> 30,'AUDIT',<br /> 31,'NOAUDIT',<br /> 32,'CREATE EXTERNAL DATABASE',<br /> 33,'DROP EXTERNAL DATABASE',<br /> 34,'CREATE DATABASE',<br /> 35,'ALTER DATABASE',<br /> 36,'CREATE ROLLBACK SEGMENT',<br /> 37,'ALTER ROLLBACK SEGMENT',<br /> 38,'DROP ROLLBACK SEGMENT',<br /> 39,'CREATE TABLESPACE',<br /> 40,'ALTER TABLESPACE',<br /> 41,'DROP TABLESPACE',<br /> 42,'ALTER SESSION',<br /> 43,'ALTER USER',<br /> 44,'COMMIT',<br /> 45,'ROLLBACK',<br /> 46,'SAVEPOINT',<br /> 47,'PL/SQL EXECUTE',<br /> 48,'SET TRANSACTION',<br /> 49,'ALTER SYSTEM SWITCH LOG',<br /> 50,'EXPLAIN',<br /> 51,'CREATE USER',<br /> 52,'CREATE ROLE',<br /> 53,'DROP USER',<br /> 54,'DROP ROLE',<br /> 55,'SET ROLE',<br /> 56,'CREATE SCHEMA',<br /> 57,'CREATE CONTROL FILE',<br /> 58,'ALTER TRACING',<br /> 59,'CREATE TRIGGER',<br /> 60,'ALTER TRIGGER',<br /> 61,'DROP TRIGGER',<br /> 62,'ANALYZE TABLE',<br /> 63,'ANALYZE INDEX',<br /> 64,'ANALYZE CLUSTER',<br /> 65,'CREATE PROFILE',<br /> 66,'DROP PROFILE',<br /> 67,'ALTER PROFILE',<br /> 68,'DROP PROCEDURE',<br /> 69,'DROP PROCEDURE',<br /> 70,'ALTER RESOURCE COST',<br /> 71,'CREATE SNAPSHOT LOG',<br /> 72,'ALTER SNAPSHOT LOG',<br /> 73,'DROP SNAPSHOT LOG',<br /> 74,'CREATE SNAPSHOT',<br /> 75,'ALTER SNAPSHOT',<br /> 76,'DROP SNAPSHOT',<br /> 79,'ALTER ROLE',<br /> 85,'TRUNCATE TABLE',<br /> 86,'TRUNCATE CLUSTER',<br /> 87,'-',<br /> 88,'ALTER VIEW',<br /> 89,'-',<br /> 90,'-',<br /> 91,'CREATE FUNCTION',<br /> 92,'ALTER FUNCTION',<br /> 93,'DROP FUNCTION',<br /> 94,'CREATE PACKAGE',<br /> 95,'ALTER PACKAGE',<br /> 96,'DROP PACKAGE',<br /> 97,'CREATE PACKAGE BODY',<br /> 98,'ALTER PACKAGE BODY',<br /> 99,'DROP PACKAGE BODY',<br /> command||'-UNKNOWN'<br /> ) COMMAND,<br /> -- lock type<br /> -- will always be TM, TX or possible UL (user supplied) for user locks<br /> l.type lock_type,<br 
/> decode<br /> (<br /> l.type,<br /> 'BL','Buffer hash table instance lock',<br /> 'CF',' Control file schema global enqueue lock',<br /> 'CI','Cross-instance function invocation instance lock',<br /> 'CS','Control file schema global enqueue lock',<br /> 'CU','Cursor bind lock',<br /> 'DF','Data file instance lock',<br /> 'DL','Direct loader parallel index create',<br /> 'DM','Mount/startup db primary/secondary instance lock',<br /> 'DR','Distributed recovery process lock',<br /> 'DX','Distributed transaction entry lock',<br /> 'FI','SGA open-file information lock',<br /> 'FS','File set lock',<br /> 'HW','Space management operations on a specific segment lock',<br /> 'IN','Instance number lock',<br /> 'IR','Instance recovery serialization global enqueue lock',<br /> 'IS','Instance state lock',<br /> 'IV','Library cache invalidation instance lock',<br /> 'JQ','Job queue lock',<br /> 'KK','Thread kick lock',<br /> 'LA','Library cache lock instance lock (A=namespace)',<br /> 'LB','Library cache lock instance lock (B=namespace)',<br /> 'LC','Library cache lock instance lock (C=namespace)',<br /> 'LD','Library cache lock instance lock (D=namespace)',<br /> 'LE','Library cache lock instance lock (E=namespace)',<br /> 'LF','Library cache lock instance lock (F=namespace)',<br /> 'LG','Library cache lock instance lock (G=namespace)',<br /> 'LH','Library cache lock instance lock (H=namespace)',<br /> 'LI','Library cache lock instance lock (I=namespace)',<br /> 'LJ','Library cache lock instance lock (J=namespace)',<br /> 'LK','Library cache lock instance lock (K=namespace)',<br /> 'LL','Library cache lock instance lock (L=namespace)',<br /> 'LM','Library cache lock instance lock (M=namespace)',<br /> 'LN','Library cache lock instance lock (N=namespace)',<br /> 'LO','Library cache lock instance lock (O=namespace)',<br /> 'LP','Library cache lock instance lock (P=namespace)',<br /> 'LS','Log start/log switch enqueue lock',<br /> 'MB','Master buffer hash table instance 
lock',<br /> 'MM','Mount definition global enqueue lock',<br /> 'MR','Media recovery lock',<br /> 'PA','Library cache pin instance lock (A=namespace)',<br /> 'PB','Library cache pin instance lock (B=namespace)',<br /> 'PC','Library cache pin instance lock (C=namespace)',<br /> 'PD','Library cache pin instance lock (D=namespace)',<br /> 'PE','Library cache pin instance lock (E=namespace)',<br /> 'PF','Library cache pin instance lock (F=namespace)',<br /> 'PF','Password file lock',<br /> 'PG','Library cache pin instance lock (G=namespace)',<br /> 'PH','Library cache pin instance lock (H=namespace)',<br /> 'PI','Library cache pin instance lock (I=namespace)',<br /> 'PI','Parallel operation lock',<br /> 'PJ','Library cache pin instance lock (J=namespace)',<br /> 'PK','Library cache pin instance lock (L=namespace)',<br /> 'PL','Library cache pin instance lock (K=namespace)',<br /> 'PM','Library cache pin instance lock (M=namespace)',<br /> 'PN','Library cache pin instance lock (N=namespace)',<br /> 'PO','Library cache pin instance lock (O=namespace)',<br /> 'PP','Library cache pin instance lock (P=namespace)',<br /> 'PQ','Library cache pin instance lock (Q=namespace)',<br /> 'PR','Library cache pin instance lock (R=namespace)',<br /> 'PR','Process startup lock',<br /> 'PS','Library cache pin instance lock (S=namespace)',<br /> 'PS','Parallel operation lock',<br /> 'PT','Library cache pin instance lock (T=namespace)',<br /> 'PU','Library cache pin instance lock (U=namespace)',<br /> 'PV','Library cache pin instance lock (V=namespace)',<br /> 'PW','Library cache pin instance lock (W=namespace)',<br /> 'PX','Library cache pin instance lock (X=namespace)',<br /> 'PY','Library cache pin instance lock (Y=namespace)',<br /> 'PZ','Library cache pin instance lock (Z=namespace)',<br /> 'QA','Row cache instance lock (A=cache)',<br /> 'QB','Row cache instance lock (B=cache)',<br /> 'QC','Row cache instance lock (C=cache)',<br /> 'QD','Row cache instance lock (D=cache)',<br />
'QE','Row cache instance lock (E=cache)',<br /> 'QF','Row cache instance lock (F=cache)',<br /> 'QG','Row cache instance lock (G=cache)',<br /> 'QH','Row cache instance lock (H=cache)',<br /> 'QI','Row cache instance lock (I=cache)',<br /> 'QJ','Row cache instance lock (J=cache)',<br /> 'QK','Row cache instance lock (L=cache)',<br /> 'QL','Row cache instance lock (K=cache)',<br /> 'QM','Row cache instance lock (M=cache)',<br /> 'QN','Row cache instance lock (N=cache)',<br /> 'QO','Row cache instance lock (O=cache)',<br /> 'QP','Row cache instance lock (P=cache)',<br /> 'QQ','Row cache instance lock (Q=cache)',<br /> 'QR','Row cache instance lock (R=cache)',<br /> 'QS','Row cache instance lock (S=cache)',<br /> 'QT','Row cache instance lock (T=cache)',<br /> 'QU','Row cache instance lock (U=cache)',<br /> 'QV','Row cache instance lock (V=cache)',<br /> 'QW','Row cache instance lock (W=cache)',<br /> 'QX','Row cache instance lock (X=cache)',<br /> 'QY','Row cache instance lock (Y=cache)',<br /> 'QZ','Row cache instance lock (Z=cache)',<br /> 'RE','USE_ROW_ENQUEUE enforcement lock',<br /> 'RT','Redo thread global enqueue lock',<br /> 'RW','Row wait enqueue lock',<br /> 'SC','System commit number instance lock',<br /> 'SH','System commit number high water mark enqueue lock',<br /> 'SM','SMON lock',<br /> 'SN','Sequence number instance lock',<br /> 'SQ','Sequence number enqueue lock',<br /> 'SS','Sort segment lock',<br /> 'ST','Space transaction enqueue lock',<br /> 'SV','Sequence number value lock',<br /> 'TA','Generic enqueue lock',<br /> 'TD','DDL enqueue lock',<br /> 'TE','Extend-segment enqueue lock',<br /> 'TM','DML enqueue lock',<br /> 'TO','Temporary Table Object Enqueue',<br /> 'TS',decode(l.id2,<br /> 0,'Temporary segment enqueue lock (ID2=0)',<br /> 1,'New block allocation enqueue lock (ID2=1)',<br /> 'UNKNOWN!'<br /> ),<br /> 'TT','Temporary table enqueue lock',<br /> 'TX','Transaction enqueue lock',<br /> 'UL','User supplied lock',<br /> 'UN','User name 
lock',<br /> 'US','Undo segment DDL lock',<br /> 'WL','Being-written redo log instance lock',<br /> 'WS','Write-atomic-log-switch global enqueue lock',<br /> 'UNKNOWN'<br /> ) lock_description,<br /> decode<br /> (<br /> l.lmode,<br /> 0, 'None', /* Mon Lock equivalent */<br /> 1, 'No Lock', /* N */<br /> 2, 'Row-S (SS)', /* L */<br /> 3, 'Row-X (SX)', /* R */<br /> 4, 'Share', /* S */<br /> 5, 'S/Row-X (SSX)', /* C */<br /> 6, 'Exclusive', /* X */<br /> to_char(l.lmode)<br /> ) mode_held,<br /> decode<br /> (<br /> l.request,<br /> 0, 'None', /* Mon Lock equivalent */<br /> 1, 'No Lock', /* N */<br /> 2, 'Row-S (SS)', /* L */<br /> 3, 'Row-X (SX)', /* R */<br /> 4, 'Share', /* S */<br /> 5, 'S/Row-X (SSX)', /* C */<br /> 6, 'Exclusive', /* X */<br /> to_char(l.request)<br /> ) mode_requested,<br /> s.osuser,<br /> s.machine,<br /> s.program,<br /> s.process<br />from<br /> v$lock l<br /> join v$session s on s.sid = l.sid<br /> left outer join sys.dba_waiters lock_blocker on lock_blocker.waiting_session = s.sid<br /> left outer join sys.dba_waiters lock_waiter on lock_waiter.holding_session = s.sid<br /> left outer join sys.obj$ o on o.obj# = l.id1<br />where s.type != 'BACKGROUND'<br />)<br />select<br /> --kaddr,<br /> sid,<br /> username,<br /> waiting_session,<br /> holding_session,<br /> object,<br /> command,<br /> lock_type,<br /> lock_description,<br /> mode_held,<br /> mode_requested,<br /> --osuser,<br /> --machine,<br /> program,<br /> process<br />from dblocks<br />order by sid, object<br />/<br /><br /></code></pre>Jaredhttp://www.blogger.com/profile/09187267459019576469noreply@blogger.com2tag:blogger.com,1999:blog-20737409.post-79318998310392202232009-03-05T10:48:00.000-08:002009-03-05T12:03:28.567-08:00The Evils of Encoding Meaning Into DataAbout a year ago I worked on collating and transforming data from an application so that it could be imported into another app. 
I've performed this exercise a number of times in the past 20 or so years, and while it is never boring, it is sometimes quite challenging.<br /><br />Oft times when trying to make the data suitable for inclusion in the new application, I ask myself "What were they thinking?"<br /><br />I will leave the answer to that up to your imagination, as my answers to that particular question are not always complimentary.<br /><br />One of the problems I run into is when the data modeler and database designer chose to allow data with encoded special meanings.<br /><br />In other words, there is key data in the database, the meaning of which can only be ascertained by asking someone who knows, or finding it in the documentation (Is there documentation?)<br /><br />The example I will use is a real one I ran into.<br /><br />Given a table containing projects, and revisions of those projects that are indicated as such by a revision "number".<br /><br />A sensible series of revision numbers might be a numeric range beginning with 1, or an alpha range beginning with 'A', or even a combination thereof.<br /><br />Personally, I prefer the unambiguous nature of an ascending numeric range. 2 is always greater than 1. There is no case involved as there would be with an alpha range.<br /><br />Which is greater, "a1" or "A1"? You know how this will sort in the database, but will it be what the users expect?<br /><br />While a numeric range would have been preferable, the data I was given to work with used a range of revision "numbers" that was both numeric and alpha, with a "numeric" value being regarded as greater than an alpha value. The "numeric" is in quotes, as it obviously must be stored as a string, in this case with a leading zero.<br /><br />Given this rule, a project with revisions of A,B,C,D,01,02 would have a most recent revision of "02". 
This is not the way it works in the standard collating order in the database.<br /><br /><p style="font-family: courier new;">11:34:03 SQL> create table rev (version varchar2(2));<br /><br />Table created.<br /><br />11:34:03 SQL><br />11:34:03 SQL> insert into rev values('A');<br />11:34:03 SQL> insert into rev values('B');<br />11:34:03 SQL> insert into rev values('C');<br />11:34:03 SQL> insert into rev values('01');<br />11:34:03 SQL> insert into rev values('02');<br />11:34:03 SQL><br />11:34:03 SQL> select * from rev order by version;<br /><br />VE<br />--<br />01<br />02<br />A<br />B<br />C<br /><br />5 rows selected.<br /><br /></p>In a perfect world, the data modeler or data architect would work with the users to create a reasonable versioning method. In this case, however, there was no choice but to work with what I was given.<br /><br />From the data provided, only the most recent version was to be included in the imported data.<br />With a versioning system that doesn't follow the normal collating order, this requires some code to determine what is really the greatest version.<br /><br />If you know a method to accomplish this in straight SQL, please feel free to post it. 
I could not think of a pure SQL solution.<br /><br />The following generic function was created to determine the greatest revision value given a table name and the key columns.<br /><br /><p style="font-family: courier new;">-- maxrev.sql<br />-- return the maximum numeric revision<br />-- return an alpha if no numeric exists<br />-- maximum rev is numeric<br />-- revs are alpha - mixed number and character<br /><br />create or replace function maxrev<br />(<br />table_name_in varchar2,<br />key_column_in varchar2,<br />value_column_in varchar2,<br />key_value_in varchar2<br />)<br />return varchar2<br />as<br />v_max_rev integer := 0;<br />v_tmp_rev integer;<br />v_col_rev varchar2(10);<br />v_return_rev varchar2(30);<br />type curType is ref cursor;<br />l_cursor curType;<br />v_sql varchar2(1000);<br /><br />function is_number( chk_data_in varchar2 )<br />return boolean<br />is<br /> dummy number(38,4);<br />begin<br /> dummy := to_number(chk_data_in);<br /> return true;<br />exception<br />when value_error then<br /> return false;<br />when others then<br /> raise;<br />end;<br /><br />begin<br />-- get the maximum rev, whether alpha or numeric<br />-- there may not be any numerics<br />v_sql := 'select max(' || value_column_in || ') from ' || table_name_in || ' where ' || key_column_in || ' = :1';<br /><br />execute immediate v_sql into v_return_rev using key_value_in;<br /><br />--return v_return_rev;<br /><br />v_sql := 'select ' || value_column_in || ' from ' || table_name_in || ' where ' || key_column_in || ' = :1';<br /><br />open l_cursor for v_sql using key_value_in;<br />loop<br /> fetch l_cursor into v_col_rev;<br /> exit when l_cursor%notfound;<br /><br /> if (is_number(v_col_rev)) then<br /> v_tmp_rev := to_number(v_col_rev);<br /> end if;<br /> if (v_tmp_rev > v_max_rev) then<br /> v_max_rev := v_tmp_rev;<br /> v_return_rev := v_col_rev;<br /> end if;<br />end loop;<br /><br />return v_return_rev;<br />end;<br />/<br /><br /></p>( Reviewing this 
function just now, I see what could be considered a programming error.<br />Let me know if you spot it. )<br /><br />Here's a test case to prove that the function works as expected.<br /><br /><p style="font-family: courier new;">-- maxrev_test.sql<br />-- should always return numeric if it exists, otherwise alpha<br /><br />drop table maxrev_test;<br /><br />create table maxrev_test ( id varchar2(4), rev varchar2(2));<br /><br />insert into maxrev_test values('T1', 'A');<br />insert into maxrev_test values('T1', 'B');<br />insert into maxrev_test values('T1', '01');<br />insert into maxrev_test values('T1', '02');<br />insert into maxrev_test values('T2', '01');<br />insert into maxrev_test values('T2', '02');<br />insert into maxrev_test values('T2', '03');<br />insert into maxrev_test values('T3', 'X');<br />insert into maxrev_test values('T3', 'Y');<br />insert into maxrev_test values('T3', 'Z');<br /><br />commit;<br /><br />select * from maxrev_test order by id,rev;<br /><br />col rev format a10<br /><br />prompt<br />prompt Incorrect results<br />prompt<br /><br />select id, max(rev) rev<br />from maxrev_test<br />group by id<br />order by id<br />/<br /><br />prompt<br />prompt Correct results<br />prompt<br /><br />select id, maxrev('maxrev_test','id','rev',id) rev<br />from maxrev_test<br />group by id<br />order by id<br />/<br /></p><br /><br />And the results:<br /><br /><p style="font-family: courier new;"><b>Incorrect results</b><br /><br />ID REV<br />---- ----------<br />T1 B<br />T2 03<br />T3 Z<br /><br />3 rows selected.<br /><br /><b>Correct results</b><br /><br />ID REV<br />---- ----------<br />T1 02<br />T2 03<br />T3 Z<br /><br />3 rows selected.<br /><br /></p>Jaredhttp://www.blogger.com/profile/09187267459019576469noreply@blogger.com15tag:blogger.com,1999:blog-20737409.post-67068239928313092712008-10-13T09:56:00.001-07:002008-10-13T09:58:17.320-07:00SQL Developer Data Modeling UpdateOracle has released an 'early adopter' version of the 
data modeling enhancements to SQL Developer.<br /><br />See the <a href="http://www.oracle.com/technology/products/database/sql_developer/files/Modeling.html">OTN article</a> for details.<br /><br />I haven't tried it yet; it will be interesting to see just how well it works.Jaredhttp://www.blogger.com/profile/09187267459019576469noreply@blogger.com0tag:blogger.com,1999:blog-20737409.post-42820506340235797192008-10-01T16:58:00.000-07:002008-10-02T07:46:11.513-07:00Undocumented Oracle FunctionsUndocumented functions in Oracle are always fun, and you just may find something useful.<br /><br />The caveat of course is that they are undocumented. They can change without notice between releases or patch levels, so building apps that depend on them may be unwise.<br /><br />They are often quite useful from a DBA perspective when used in SQL scripts.<br /><br />Here are a few that I've played with. These are all found in Oracle 10.2.0.3.<br /><br />These functions have one thing in common - they have a prefix of SYS_OP_.<br /><br />Some of these appear to be identical to documented functions.<br /><br />I don't know of any official explanation regarding the purpose of undocumented functions that seem to mimic documented functions. It could be that the source for the undocumented functions is separate from that of the documented versions, ensuring that functionality will not change for an undocumented function that is used in the Oracle kernel, even though its documented doppelganger may change in future releases.<br /><br />In any case, undocumented functions are always interesting, and here are a few to play with.<br /><br />Just keep in mind that these are undocumented, and as such may change or disappear entirely in future releases.<br /><br /><span style="font-weight: bold;">sys_op_vecand</span> - Return the binary AND of two raw values. 
Results are in hex.<br /><br />SELECT sys_op_vecand(hextoraw('FF'),hextoraw('FE')) from dual;<br />FE<br /><br />16:13:12 SQL>SELECT sys_op_vecand(hextoraw('C3'),hextoraw('7E')) from dual;<br />42<br /><br /><span style="font-weight: bold;">sys_op_vecor</span> - Return the binary OR of two raw values. Results are in hex.<br /><br />16:14:39 SQL>SELECT sys_op_vecor(hextoraw('FF'),hextoraw('FE')) from dual;<br />FF<br /><br /><span style="font-weight: bold;">sys_op_vecxor</span> - Return the binary XOR of two raw values. Results are in hex.<br /><br />16:14:39 SQL>SELECT sys_op_vecxor(hextoraw('FF'),hextoraw('FE')) from dual;<br />01<br /><br /><span style="font-weight: bold;">sys_op_vecbit</span> - Return the value of the bit at position N in a raw value.<br /><br />The return value is 0 or 1.<br /><br />This is an interesting function as it can be used to determine the value of bits in a number. If for instance some flags are stored in a bit vector and you need to know the value of the 3rd bit, this is an easy way to do it.<br /><br />I believe the upper limit on the number of bits is 127.<br /><br />prompt<br />define decnum=10<br />prompt &&decnum dec = 1010 bin<br /><br />16:16:27 SQL>select 'Bit 0 is ' || sys_op_vecbit(SYS_OP_NUMTORAW(&&decnum),0) from dual;<br />Bit 0 is 0<br /><br />16:16:27 SQL>select 'Bit 1 is ' || sys_op_vecbit(SYS_OP_NUMTORAW(&&decnum),1) from dual;<br />Bit 1 is 1<br /><br />16:16:27 SQL>select 'Bit 2 is ' || sys_op_vecbit(SYS_OP_NUMTORAW(&&decnum),2) from dual;<br />Bit 2 is 0<br /><br />16:16:27 SQL>select 'Bit 3 is ' || sys_op_vecbit(SYS_OP_NUMTORAW(&&decnum),3) from dual;<br />Bit 3 is 1<br /><br /><span style="font-weight: bold;">sys_op_bitvec</span> - This appears to be used to build a bit vector, but I haven't figured out<br />how to use it. 
Please let me know if you do.<br /><br /><span style="font-weight: bold;">sys_op_map_nonnull</span> - This has been thoroughly discussed on Eddie Awad's blog:<br /><a href="http://awads.net/wp/2006/09/19/cool-undocumented-sql-function-sys_op_map_nonnull-and-some-alternatives/">sys_op_map_nonnull discussion</a><br /><br /><span style="font-weight: bold;">sys_op_descend</span> - Returns the value as it would be stored in a descending index: each byte is inverted (the one's complement) and an 0xFF terminator byte is appended, with the result shown in hex.<br /><br />16:32:41 SQL>select sys_op_descend('ABC') from dual;<br />BEBDBCFF<br /><br /><span style="font-weight: bold;">sys_op_undescend</span> - The inverse of sys_op_descend. Well, almost.<br /><br />17:12:59 SQL>select sys_op_undescend(sys_op_descend('ABC')) from dual<br />17:12:59 2 /<br /><br />414243<br /><br />Notice the bytes are back in the original order, but returned as a RAW value (41 42 43 is the hex for 'ABC') rather than as the original string.<br /><br /><br /><span style="font-weight: bold;">sys_op_dump</span> - dump the data from an ADT/UDT (Abstract/User-Defined Data Type).<br /><br />16:54:13 SQL>CREATE OR REPLACE TYPE my_adt AS OBJECT (<br />16:54:13 2 last_name varchar2(30),<br />16:54:13 3 first_name varchar2(30),<br />16:54:13 4 id number(6)<br />16:54:13 5 )<br />16:54:13 6 /<br />16:54:13 SQL><br />16:54:13 SQL><br />16:54:13 SQL>select sys_op_dump(my_adt('still','jared',234987)) from dual;<br /><br />('still','jared',234987)<br /><br />I don't use objects in the database, but this would likely be useful for someone that does.<br /><br /><span style="font-weight: bold;">sys_op_guid</span> - this appears to be identical to sys_guid.<br /><br />17:00:50 SQL>select sys_guid(), sys_op_guid() from dual;<br /><br />52BA7CF06BB488ECE040010A7C646200 52BA7CF06BB588ECE040010A7C646200Jaredhttp://www.blogger.com/profile/09187267459019576469noreply@blogger.com6tag:blogger.com,1999:blog-20737409.post-45411021937602461922008-09-25T22:33:00.000-07:002008-09-26T08:23:58.275-07:00Data Modeling with SQL DeveloperUnlike Open World 2007, there 
were many database oriented sessions at Oracle Open World 2008. There were many good performance oriented sessions; so many, in fact, that there were several conflicts in the schedule, and I had to choose among multiple sessions in some time slots.<br /><br />One of the more interesting sessions (for me anyway) at OOW 2008 was a session not on database performance, but on data modeling.<br /><br />The <a href="http://www.oracle.com/technology/products/database/sql_developer/index.html">SQL Developer</a> team has been hard at work creating a data modeling plugin for SQL Developer.<br /><br />This appears to be a very full featured tool, and may well be the answer to the question "What will replace Oracle Designer?"<br /><br />While Designer is much more than a data modeling tool, that is one of the core features of the tool, and many folks have used it just for its data modeling capabilities.<br /><br />The new ERD tool is no lightweight; it is quite full featured from a database modeling and design standpoint.<br /><br />Some of the features included:<br /><ul><li>Domains generated from data</li><li>Real logical and physical modeling, not just one model with 2 different names.</li><li>The ability to reverse engineer several schemas at once and have them appear not only as a master model, but each individually as a sub model.</li><li>Sub model views may be created on sets of objects as well.</li><li>The tool can determine all tables related to a table through FKs and create a sub model based on that set.</li><li>Two forms of notation: Barker and IE</li><li>Many options for displaying sub/super types (D2k fans rejoice!)</li><li>Glossary - a predefined set of names. 
These can be used to enforce naming conventions for entities, tables and relations.</li><li>Schema comparison with DDL change generation</li></ul>Also of note: in addition to Oracle, schemas can be imported from SQL Server, DB2, or any ODBC-connected database.<br /><br />The repository can be either file based or database based.<br />There are two versions of the tool, a plugin to SQL Developer, and a stand alone version. The stand alone version will use only the file based repository.<br /><br />Now for the bad news.<br /><br />The release date has not been established. The only release information given was 'sometime in the 2009 calendar year'. As the database repository has not yet been designed, the long time to release is understandable.<br /><br />And finally, licensing has not been established. It might be free, it might not. If not, at least we can hope for it to be reasonably priced. Personally I think having a decent data modeling tool that comes free of charge with SQL Developer would contribute to higher quality databases, as more people would use a real data modeling tool rather than a drawing tool.<br /><br />There was probably more that didn't make it into my notes.<br />Suffice it to say this is a great development for data modelers and database designers.<br /><br />Following are a few screen shots taken during the presentation.<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjFs4i-IWJWso4UP46QsKSHqat258a0H2U2uQ3bi4l6GSkWaPbxqdzU34jIz26DcC_i3u_uOfVqJHiYdpY9Kn_dUtTL9YFNaT2st-v8PymeVUUaRe-DJluLcrXgQTSiKeshF-wj/s1600-h/sql_developer_erd_1.jpg"><img style="margin: 0pt 10px 10px 0pt; float: left; cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjFs4i-IWJWso4UP46QsKSHqat258a0H2U2uQ3bi4l6GSkWaPbxqdzU34jIz26DcC_i3u_uOfVqJHiYdpY9Kn_dUtTL9YFNaT2st-v8PymeVUUaRe-DJluLcrXgQTSiKeshF-wj/s320/sql_developer_erd_1.jpg" alt="" 
id="BLOGGER_PHOTO_ID_5250207033058956946" border="0" /></a><br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiTlAjJdKfmk5Kib7vfx9hAu5dt0bqavOg9A8bXlUQ8FLsL42bhVq1aLE1YByLzwSg84aQZUOZ6S3UYgvxQ-vIeGUH5dskHtE7IBz2A8s1nDFwBRhfGMdo4ZLKZXaSxhTKwdHgK/s1600-h/sql_developer_erd_2.jpg"><img style="margin: 0pt 10px 10px 0pt; float: left; cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiTlAjJdKfmk5Kib7vfx9hAu5dt0bqavOg9A8bXlUQ8FLsL42bhVq1aLE1YByLzwSg84aQZUOZ6S3UYgvxQ-vIeGUH5dskHtE7IBz2A8s1nDFwBRhfGMdo4ZLKZXaSxhTKwdHgK/s320/sql_developer_erd_2.jpg" alt="" id="BLOGGER_PHOTO_ID_5250207700529082674" border="0" /></a><br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhSDnYP52uaW5gl0xm4mhXGX9YXNua-UU9CEyD3Ygg9t-ROJrQYarmMUsiCxB4y8TnzLbf_FNaBLutLS695ICZISeeOXecojBYop1rLOdtv7RWylO14-VkaZOZTRDWRwLUUzyEn/s1600-h/sql_developer_erd_3.jpg"><img style="margin: 0pt 10px 10px 0pt; float: left; cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhSDnYP52uaW5gl0xm4mhXGX9YXNua-UU9CEyD3Ygg9t-ROJrQYarmMUsiCxB4y8TnzLbf_FNaBLutLS695ICZISeeOXecojBYop1rLOdtv7RWylO14-VkaZOZTRDWRwLUUzyEn/s320/sql_developer_erd_3.jpg" alt="" id="BLOGGER_PHOTO_ID_5250207863607554226" border="0" /></a>Jaredhttp://www.blogger.com/profile/09187267459019576469noreply@blogger.com0tag:blogger.com,1999:blog-20737409.post-3207683091091178432008-08-18T12:04:00.000-07:002008-08-18T12:29:31.998-07:00AWR Usage PollA number of recent threads in the <a href="http://www.freelists.org/list/oracle-l">Oracle-L</a> list have made it pretty clear that Automated Workload Repository (AWR) is a tool that you are expected to use when troubleshooting a database problem.<br /><br />Never mind the fact that AWR is still a product that is licensed separately from the database, and that a large 
segment of the Oracle DBA population doesn't seem to realize that. Or that Active Session History (ASH) is part of AWR, and falls under the same license restrictions.<br /><br />So I conducted a poll regarding the use of AWR: <a href="http://www.misterpoll.com/polls/352785">AWR Usage Poll</a>. If you haven't taken the AWR Poll, please do so.<br /><br />While the web site does provide a chart of results, those results don't include the extra comments made by poll takers. You may or may not be able to download all the results; I'm not sure if that is restricted to the poll owner.<br /><br />Nonetheless, I have compiled the results from 100 or so respondents into an Excel workbook, along with a few charts. You may find some of the additional comments of interest as well. <a href="http://jaredstill.com/downloads/awr_usage_poll.xls">AWR Usage Results</a><br /><br />Draw your own conclusions regarding these results. I think it interesting that AWR appears to be quite widely used. Personally I fall into the category of not using it because of the expense. I may work on changing that for a couple of key servers, as AWR is not <span style="font-weight:bold;">that</span> expensive, but in a small shop, spending $20k on a feature that is not often needed is sometimes a hard sell.<br /><br />One question I purposely left out was "Do you use AWR even though you have not licensed it?" While it might satisfy the curiosity of some (including me), I didn't want to give any Oracle sales people (or Oracle attorneys for that matter) any reason to contact me regarding the poll.<br /><br />In retrospect a good question would have been: "Did you realize AWR/ASH is a separately licensed product?". Too late to add that now, but bringing that up quite often leads to lively discussion.<br /><br />Another interesting bit was that a few people have extended STATSPACK in some way, even using it on Oracle 10g+. 
One even mentioned the excellent repository of statspack scripts assembled by Tim Gorman: <a href="http://www.evdbt.com/tools.htm">Tim Gorman's Statspack Scripts</a>Jaredhttp://www.blogger.com/profile/09187267459019576469noreply@blogger.com1tag:blogger.com,1999:blog-20737409.post-75693075295481162072008-08-01T15:30:00.000-07:002009-08-06T22:30:02.480-07:00An unusual cause of ORA-12154The ORA-12154 error (and its cousin ORA-12514) has been the bane of many a novice DBA.<br /><br />This error is essentially telling you that you have messed up your database naming configuration, whether it be tnsnames, Oracle Names, or OID. The fix is normally quite simple - just correct the naming. <br /><br />This is usually quite easily done with netmgr, a tool that is fairly good at its job. The syntax for setting up a tnsname is a little convoluted, and I've fallen back on netmgr a number of times when I can't seem to get it just right by typing the entries in the file.<br /><br />There's at least one other way to cause ORA-12154 to pop up and consume more time than you may like to admit. I won't say how long I played with this...<br /><br />The cause is actually due to security policy. While the characters !@#$%^&*()_-=+~` are not normally allowed in Oracle passwords, it is actually quite simple to include them. 
Simply enclose the password in double quotes.<br /><pre style="font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; color: #000000; background-color: #eee;font-size: 12px;border: 1px dashed #999999;line-height: 14px;padding: 5px; overflow: auto; width: 100%"><code>alter user scott identified by "th!sat^st";<br /></code></pre><br />This was done on a number of accounts on our databases, all in the name of security.<br />These types of passwords have been used without issue for some time now.<br /><br />Today was a notable exception.<br /><br />After spending some time fiddling with tnsnames.ora files, I realized what the problem actually was.<br /><br />Suppose you were to have a password with an '@' in it? Yep, that was the problem.<br />First let's see what it looks like from within sqlplus:<br /><pre style="font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; color: #000000; background-color: #eee;font-size: 12px;border: 1px dashed #999999;line-height: 14px;padding: 5px; overflow: auto; width: 100%"><code><br />15:41:52 SQL> alter user scott identified by "what@mistake";<br /><br />User altered.<br /><br />15:42:03 SQL> connect scott/"what@mistake";<br />Connected.<br /><br />scott SQL> select user from dual;<br /><br />USER<br />------------------------------<br />SCOTT<br /><br />SQL> connect scott/what@mistake<br />ERROR:<br />ORA-12154: TNS:could not resolve the connect identifier specified<br /><br /><br /></code></pre><br />As long as the connect string was in double quotes, no problem.<br /><br />Now let's try it from a linux command line:<br /><pre style="font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; color: #000000; background-color: #eee;font-size: 12px;border: 1px dashed #999999;line-height: 14px;padding: 5px; overflow: auto; width: 100%"><code><br />Linux> sqlplus scott/'what@mistake'<br /><br />SQL*Plus: Release 10.2.0.3.0 - Production on Fri Aug 1 15:42:20 2008<br /><br />ERROR:<br />ORA-12154: 
TNS:could not resolve the connect identifier specified<br /><br /><br />Linux> sqlplus scott/'what\@mistake'<br /><br />SQL*Plus: Release 10.2.0.3.0 - Production on Fri Aug 1 15:46:14 2008<br /><br />ERROR:<br />ORA-01017: invalid username/password; logon denied<br /><br /><br />Linux> sqlplus scott/"what\@mistake"<br /><br />SQL*Plus: Release 10.2.0.3.0 - Production on Fri Aug 1 15:46:21 2008<br /><br />Copyright (c) 1982, 2006, Oracle. All Rights Reserved.<br /><br />ERROR:<br />ORA-01017: invalid username/password; logon denied<br /><br /><br />Linux> sqlplus 'scott/what\@mistake'<br /><br />SQL*Plus: Release 10.2.0.3.0 - Production on Fri Aug 1 15:47:23 2008<br /><br />Copyright (c) 1982, 2006, Oracle. All Rights Reserved.<br /><br />ERROR:<br />ORA-01017: invalid username/password; logon denied<br /><br />Linux > sqlplus scott/what\@mistake<br /><br />SQL*Plus: Release 10.2.0.3.0 - Production on Fri Aug 1 15:48:52 2008<br /><br />Copyright (c) 1982, 2006, Oracle. All Rights Reserved.<br /><br />ERROR:<br />ORA-12154: TNS:could not resolve the connect identifier specified<br /></code></pre><br />There doesn't seem to be any method to use a password with the @ character in it, at least not from sqlplus. 
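One workaround does exist: the double-quoted form works once you are already inside sqlplus, so you can start sqlplus with /nolog and issue the connect from the SQL prompt. This is a sketch of the idea, not a capture from the original session:<br /><pre style="font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; color: #000000; background-color: #eee;font-size: 12px;border: 1px dashed #999999;line-height: 14px;padding: 5px; overflow: auto; width: 100%"><code><br />Linux> sqlplus /nolog<br /><br />SQL> connect scott/"what@mistake"<br />Connected.<br /></code></pre><br />Since the shell never sees the password this way, no escaping is needed either.<br />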
The same password works fine when used on the command line with a perl script that logs on to the same database and account:<br /><pre style="font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; color: #000000; background-color: #eee;font-size: 12px;border: 1px dashed #999999;line-height: 14px;padding: 5px; overflow: auto; width: 100%"><code><br />Linux> DBI_template.pl -database dv11 -username scott -password 'what@mistake'<br /> X<br /><br /></code></pre><br />The 'X' is the correct output as this script simply selects * from dual.<br /><br />Lesson learned: do not ever, under any circumstances, use the @ character in the password for an Oracle database account.<br /><br />You might think that goes without saying, but it's actually pretty easy to get a password with '@' in it when the passwords are randomly generated characters.<br /><br />FWIW, this problem was manifested in Windows as well.Jaredhttp://www.blogger.com/profile/09187267459019576469noreply@blogger.com9tag:blogger.com,1999:blog-20737409.post-45798682182221654142008-07-23T14:44:00.000-07:002008-07-23T14:54:46.604-07:00Oracle Locator ExpressIf you do much work with the Oracle database on Windows, and you have 1+N Oracle homes installed, you've probably lamented the fact that the Oracle Home Switcher is no longer included with Oracle. <br /><br />I can't recall exactly what the tool was called or which version Oracle was at when it ceased to be a part of the installation. I do know that it doesn't work with 10g+.<br /><br />A little tool called Oracle Locator Express fills this niche nicely, and it does work with 10g. Sorry, have not yet tried it with 11g.<br /><br /><a href="http://www.dbmotive.com/ole/">"Oracle Locator Express"</a><br />I've used it for several months without issue, save one minor glitch.<br /><br />Sometimes Oracle 10g Homes are not displayed properly in the list of Oracle Homes to choose from. 
Other than that, no complaints.Jaredhttp://www.blogger.com/profile/09187267459019576469noreply@blogger.com3tag:blogger.com,1999:blog-20737409.post-47666236431670842082008-05-13T17:30:00.000-07:002008-05-13T17:53:53.880-07:00Oracle's CPU Patch NamingJust today I discovered that Oracle changed the nomenclature used to refer to Critical Patch Updates.<br /><br />Up through the patch bundle issued in October 2007, these were referred to by the date of release. The October 2007 release was "Oracle Critical Patch Update - October 2007", often shortened to "CPU Oct 2007".<br /><a href="http://www.oracle.com/technology/deploy/security/critical-patch-updates/cpuoct2007.html"> Oracle Critical Patch Update - October 2007</a><br /><br />When viewing the documentation for the October 2007 CPU you will see this:<br /><pre><br /> Oracle® Database Critical Patch Update Note<br /> Release 10.2.0.2 for Microsoft Windows (x64)<br /></pre><br />The documentation for the January 2008 CPU however looks a bit different:<br /><pre><br /> Oracle® Database Server Version 10.2.0.2 Patch 17<br /> Bundle Patch Update Note<br /> Release 10.2.0.2 for Microsoft Windows (x64)<br /></pre><br /><br />Note the difference. The Oct 2007 CPU is referred to by date, while the Jan 2008 CPU is referred to by patch number.<br /><br />By digging down into the Oct 2007 CPU documentation you will find that it is referred to as "Patch 16".<br /><br />Doing the same for the "Patch 17" documentation reveals that it is indeed the Jan 2008 CPU.<br /><br />Why does it matter?<br /><br />Imagine working with consultants for a major application system that uses Oracle as a database. Imagine also that these consultants know only 'Patch 17', while you know only 'Jan 2008 CPU'.<br /><br />This can lead to a great deal of confusion, as it did for me (as well as the consultants) recently. 
Digging down into the docs revealed the change in nomenclature.<br /><br />It led to (I am telling on myself here) me installing both the October 2007 CPU and the January 2008 CPU on a single database. This wouldn't have been so bad, had they not been applied in the wrong order.<br /><br />Maybe most other Oracle folks already know about this, and I just missed the memo explaining that Oracle would start referring to their CPUs by a different name.<br /><br />In any case, not knowing this caused a lot of extra work.Jaredhttp://www.blogger.com/profile/09187267459019576469noreply@blogger.com4tag:blogger.com,1999:blog-20737409.post-70576346117874175392008-03-13T12:33:00.000-07:002009-08-06T22:37:32.969-07:00ORA-6502 "Bulk Bind: Truncated Bind" errorORA-6502 is an error that is apparently not well documented when it occurs in conjunction with the use of PL/SQL tables, and possibly bulk binds.<br /><br />I ran into this problem recently when some custom code that had worked well for several years suddenly started failing.<br /><br />As it turns out, and you will see just a little later here, the error is rather easy to fix. What makes it difficult is if you've never encountered an ORA-6502 under these circumstances. There is precious little about it via MetaLink or Google. Writing about it here may be of help to the next innocent victim of ORA-6502 with bulk binds.<br /><br />My suspicion was that new data loaded into the system from another database had something to do with the problem; the problem was determining where it was occurring and why.<br /><br />The problem went unnoticed for some time due to a quirk in error handling built into the package. (Tom Kyte would likely agree.)<br /><a href="http://tkyte.blogspot.com/2008/01/why-do-people-do-this.html">"Why Do People Do This?"</a><br />Built into the package is detailed profiling information. This is a good thing. Also built into the package is an error handling routine. 
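The shape of that routine is something like the following sketch; this is an illustration of the pattern, not the actual package code:<br /><pre style="font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; color: #000000; background-color: #eee;font-size: 12px;border: 1px dashed #999999;line-height: 14px;padding: 5px; overflow: auto; width: 100%"><code><br />exception<br />when others then<br /> -- print the error and call stack, but never re-raise the error<br /> dbms_output.put_line(dbms_utility.format_error_stack);<br /> dbms_output.put_line(dbms_utility.format_call_stack);<br />end;<br /></code></pre><br />A final RAISE; in the handler would let the caller see the failure; without it, the block always ends successfully as far as the caller is concerned.<br />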
This might be a good thing to have if the package is only run from the command line, as it provides a call stack and the error message.<br /><br />There is an attribute of this type of error handling that does not lend itself well to use in a job run via DBMS_JOB: the error message is printed, but the error is not raised.<br /><br />The consequence of this type of error handling is that regardless of any errors encountered during the execution of the job, they are hidden from DBMS_JOB. No error is ever raised. After a user reports that production data is not propagating to the system, the DBA (me) checks the DBA_JOBS view and finds that it is executing successfully.<br /><br />Wondering what is going on, the erstwhile DBA (me again) runs the job manually and discovers that it does not actually work at all, but no error is raised.<br /><br />The printed error message was "PL/SQL: numeric or value error" - bulk bind truncated.<br /><br />The PL/SQL package that generated this is 3000 lines long and has some level of abstraction. Translation: hard to debug when you don't know what the code is doing.<br /><br />Before showing a more relevant example, I will show a simple one. If you found this page via Google, this is enough to help you understand what is going on.<br /><br />The problem is simply a mismatch of the data type length. Obvious once you see it. 
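The catch-print-and-continue pattern described above is easy to demonstrate outside of Oracle. Here is a minimal sketch in Python (illustrative only; the function and job names are made up, not from the actual package) showing why the scheduler never sees the failure:

```python
# Hypothetical sketch of the error-handling quirk: the handler prints
# the error but never re-raises it, so the caller (DBMS_JOB in the
# real case) records a clean run.

def propagate_data():
    # Stand-in for the failing bulk collect.
    raise ValueError("ORA-06502: PL/SQL: numeric or value error: "
                     "Bulk Bind: Truncated Bind")

def job_wrapper():
    try:
        propagate_data()
    except Exception as exc:
        # Useful when run by hand at the command line...
        print(f"error: {exc}")
        # ...but the error is NOT re-raised here.

def scheduler_run(job):
    # What DBMS_JOB effectively does: only a raised error marks failure.
    try:
        job()
        return "SUCCESS"
    except Exception:
        return "FAILED"

print(scheduler_run(job_wrapper))  # reports SUCCESS despite the failure
```

The data never moves, yet DBA_JOBS shows the job completing normally every time.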
This is a 9i example (10g would take less code) because the database it was written on was 9.2 at the time.<br /><br />If you are accustomed to working with the data dictionary, the problem will likely be obvious to you.<br /><br /><pre style="font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; color: #000000; background-color: #eee;font-size: 12px;border: 1px dashed #999999;line-height: 14px;padding: 5px; overflow: auto; width: 100%"><code>declare<br /> type owner_type is table of varchar2(30) index by pls_integer;<br /> type object_name_type is table of varchar2(20) index by pls_integer;<br /> type object_type_type is table of varchar2(18) index by pls_integer;<br /><br /> t_owner owner_type;<br /> t_object_name object_name_type;<br /> t_object_type object_type_type;<br /><br /> cursor c_obj<br /> is<br /> select owner, object_name, object_type<br /> from dba_objects;<br /><br />begin<br /> open c_obj;<br /> while c_obj%isopen<br /> loop<br /> fetch c_obj<br /> bulk collect into t_owner, t_object_name, t_object_type<br /> limit 100;<br /><br /> if c_obj%notfound then<br /> close c_obj;<br /> end if;<br /><br /> end loop;<br />end;<br />/<br /><br />*<br />ERROR at line 1:<br />ORA-06502: PL/SQL: numeric or value error: Bulk Bind: Truncated Bind<br />ORA-06512: at line 19<br /></code></pre><br />The fix is simple: change the length of object_name_type from 20 to 30. The data dictionary (DBA_OBJECTS.OBJECT_NAME) actually specifies this as 128, but this particular database has no objects with names longer than the standard length of 30 characters.<br /><br />The mystery of what caused this error becomes difficult to unravel when there are a few levels of indirection.<br /><br />There is a staging table that is the temporary home for data from another system. The PL/SQL code gets its specifications for data types from the staging table, not the source table. 
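For readers without a database handy, here is a rough Python analogue of the mismatch above (purely hypothetical code, not the PL/SQL involved): a collection declared for 20 characters receives a 30-character object name, and the fetch fails rather than silently truncating.

```python
# Hypothetical analogue of the bulk-bind length mismatch.
# MAX_OBJECT_NAME_LEN plays the role of the varchar2(20) declaration.

MAX_OBJECT_NAME_LEN = 20

def bulk_collect(rows, max_len):
    """Refuse values longer than the declared length, as Oracle does."""
    collected = []
    for name in rows:
        if len(name) > max_len:
            raise ValueError(
                "ORA-06502: PL/SQL: numeric or value error: "
                "Bulk Bind: Truncated Bind"
            )
        collected.append(name)
    return collected

# Works while all object names fit the declaration...
print(bulk_collect(["DUAL", "DBA_OBJECTS"], MAX_OBJECT_NAME_LEN))

# ...and fails as soon as newly loaded data exceeds it.
try:
    bulk_collect(["A_NEWLY_LOADED_30_CHAR_OBJECT_"], MAX_OBJECT_NAME_LEN)
except ValueError as e:
    print(e)

# The fix, as in the PL/SQL: widen the declaration to 30.
print(bulk_collect(["A_NEWLY_LOADED_30_CHAR_OBJECT_"], 30))
```

The code worked for years because no name longer than 20 characters had ever arrived; one new row was enough to break it.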
You probably already know what the problem is.<br /><br />This example is similar to what was happening.<br /><br />First create the staging table.<br /><pre style="font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; color: #000000; background-color: #eee;font-size: 12px;border: 1px dashed #999999;line-height: 14px;padding: 5px; overflow: auto; width: 100%"><code>create table tbind_test (<br /> owner varchar2(30) not null,<br /> object_name varchar2(20) not null,<br /> object_type varchar2(18) not null<br />)<br />/<br /></code></pre><br />Now run a similar, but somewhat different PL/SQL block.<br /><pre style="font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; color: #000000; background-color: #eee;font-size: 12px;border: 1px dashed #999999;line-height: 14px;padding: 5px; overflow: auto; width: 100%"><code><br />declare<br /> type owner_type is table of tbind_test.owner%TYPE index by pls_integer;<br /> type object_name_type is table of tbind_test.object_name%TYPE index by pls_integer;<br /> type object_type_type is table of tbind_test.object_type%TYPE index by pls_integer;<br /><br /> t_owner owner_type;<br /> t_object_name object_name_type;<br /> t_object_type object_type_type;<br /><br /> cursor c_obj<br /> is<br /> select owner, object_name, object_type<br /> from dba_objects;<br /><br />begin<br /> open c_obj;<br /> while c_obj%isopen<br /> loop<br /> fetch c_obj<br /> bulk collect into t_owner, t_object_name, t_object_type<br /> limit 100;<br /> if c_obj%notfound then<br /> close c_obj;<br /> end if;<br /><br /> end loop;<br />end;<br />/<br /><br />*<br />ERROR at line 1:<br />ORA-06502: PL/SQL: numeric or value error: Bulk Bind: Truncated Bind<br />ORA-06512: at line 19<br /></code></pre><br />Here's the offending bit of code:<br /><pre style="font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; color: #000000; background-color: #eee;font-size: 12px;border: 1px dashed #999999;line-height: 14px;padding: 5px; 
overflow: auto; width: 100%"><code>19 fetch c_obj<br />20 bulk collect into t_owner, t_object_name, t_object_type<br />21 limit 100;<br /><br /></code></pre><br />Hmmmm. The specification for the PL/SQL tables came straight from the<br />table via %TYPE, what could be the problem? The problem is that you<br />need to look at the cursor, and compare the datatypes in the table<br />referenced by the cursor with those of the TBIND_TEST table.<br /><br />Doing that you will find that TBIND_TEST.OBJECT_NAME should really be 128<br /><pre style="font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; color: #000000; background-color: #eee;font-size: 12px;border: 1px dashed #999999;line-height: 14px;padding: 5px; overflow: auto; width: 100%"><code> 1* alter table tbind_test modify (object_name varchar2(128) )<br /> SQL> /<br /><br />Table altered.<br /></code></pre><br />Now the bulk binds will run without error.Jaredhttp://www.blogger.com/profile/09187267459019576469noreply@blogger.com31tag:blogger.com,1999:blog-20737409.post-91027174512817153382008-01-31T02:03:00.000-08:002008-01-31T02:51:34.634-08:00Detect numbers with TRANSLATE() - Take twoLast week I wrote about using TRANSLATE to detect numbers in data <a href="http://jkstill.blogspot.com/2008/01/using-translate-function-to-detect.html">Using The TRANSLATE() function...</a><br /><br />Andrew Clarke at <a href="http://radiofreetooting.blogspot.com/2008/01/is-this-string-number-really.html">Radio Free Tooting</a> pointed out the shortcomings of using TRANSLATE() to detect numbers.<br /><br />As I said earlier, all I needed to do was detect if the characters in a string were all digits or not, and I wanted it to be very fast.<br /><br />But Andrew's remarks got me thinking - could translate be used to detect more complex numbers?<br /><br />Here's the short list of requirements:<br /><br />* Detect integers<br />* Detect numbers with decimal point ( 4.6, 0.2, .7)<br />* Detect negative and positive ( 
leading + or - )<br />* Reject text with more than 1 '.', such as an IP address ( 127.0.0.1 )<br />* Reject anything with alpha text<br /><br />And commas are considered text: 99,324.1 would be alpha.<br /><br />If you need to do this on 10g, no problem, as a regular expression can handle it.<br /><br />First, create some test data:<br /><br /><span style="font-family:courier new;">drop table number_test;</span><br /><br /><span style="font-family:courier new;">create table number_test( alphacol varchar2(20));</span><br /><br /><span style="font-family:courier new;">insert into number_test values('.5');</span><br /><span style="font-family:courier new;">insert into number_test values('1');</span><br /><span style="font-family:courier new;">insert into number_test values('2');</span><br /><span style="font-family:courier new;">insert into number_test values(' 3');</span><br /><span style="font-family:courier new;">insert into number_test values('4 ');</span><br /><span style="font-family:courier new;">insert into number_test values('3.14159');</span><br /><span style="font-family:courier new;">insert into number_test values('127.0.0.1');</span><br /><span style="font-family:courier new;">insert into number_test values('+34.45');</span><br /><span style="font-family:courier new;">insert into number_test values('-54.43');</span><br /><span style="font-family:courier new;">insert into number_test values('this is a test');</span><br /><span style="font-family:courier new;">insert into number_test values('th1s is 4 t3st');</span><br /><span style="font-family:courier new;">insert into number_test values('.');</span><br /><span style="font-family:courier new;">commit;</span><br /><br />Now select only columns where the value is a number:<br /><br /><span style="font-family:courier new;">select alphacol</span><br /><span style="font-family:courier new;">from number_test</span><br /><span style="font-family:courier new;">where 
regexp_instr(trim(alphacol),'^[-+]?[0-9]*(\.?[0-9]+)?$') > 0</span><br /><span style="font-family:courier new;">order by 1</span><br /><br /><span style="font-family:courier new;">SQL> /</span><br /><br /><span style="font-family:courier new;">ALPHACOL</span><br /><span style="font-family:courier new;">--------------------</span><br /><span style="font-family:courier new;">3</span><br /><span style="font-family:courier new;">+34.45</span><br /><span style="font-family:courier new;">-54.43</span><br /><span style="font-family:courier new;">1</span><br /><span style="font-family:courier new;">2</span><br /><span style="font-family:courier new;">3.14159</span><br /><span style="font-family:courier new;">4</span><br /><br /><span style="font-family:courier new;">7 rows selected.</span><br /><br />That seems to work.<br /><br />But what if you're stuck doing this on 9i? REGEXP_INSTR is not available.<br /><br />You can use the user defined function IS_NUMBER(), which works well, but is very slow if used on large amounts of data.<br /><br />Might we be able to use and abuse the TRANSLATE() function to speed this up? Here's a bit of convoluted SQL that works well on the limited test data:<span style="font-family:courier new;"><br /><p>select alphacol, alpha2<br />from<br />(<br />select alphacol,<br /> -- is there a sign +- ? - remove it<br /> decode(substr(alpha2,1,1),<br /> '-',substr(alpha2,2),<br /> '+',substr(alpha2,2),<br /> alpha2<br /> ) alpha2<br />from (<br /> select<br /> alphacol,<br /> -- remove a single '.' 
if one exists<br /> replace(substr(alphacol,1,instr(alphacol,'.')),'.') || substr(alphacol,instr(alphacol,'.')+1) alpha2<br /> from (<br /> select trim(alphacol) alphacol<br /> from number_test<br /> )<br />)<br />)<br />where substr('||||||||||||||||||||||||||||||||',1,length(alpha2)) = translate(alpha2,'0123456789','||||||||||')<br />/<br /></p></span><span style="font-family:courier new;"><p></p></span><br />(Sorry about formatting - I seem to lose all formatting when I paste SQL)<br /><br />Output from this nasty bit of SQL is nearly identical to that when using REGEXP_INSTR (the '.5' row is also detected here):<br /><p style="font-family: courier new;">ALPHACOL ALPHA2<br />-------------------- ----------------------------------------<br />.5 5<br />1 1<br />2 2<br />3 3<br />4 4<br />3.14159 314159<br />+34.45 3445<br />-54.43 5443<br /><br />8 rows selected.<br /></p>To make the TRANSLATE() function do what is needed, a lot of data manipulation had to be done in the SQL. There is so much work being done that it now takes nearly as long to run as the IS_NUMBER() function, so there isn't much point in using TRANSLATE().<br /><br />Runstats results:<br /><p style="font-family: courier new;">SQL> @th5<br />.047739 secs<br />.037447 secs<br />PL/SQL procedure successfully completed.<br /></p>If nothing else, this was an interesting exercise.Jaredhttp://www.blogger.com/profile/09187267459019576469noreply@blogger.com0
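The rule that convoluted SQL implements - trim, strip one leading sign, allow at most one '.', and require everything left to be digits - is easier to see in procedural form. Here is a quick Python sketch of the same logic (for illustration only, not a replacement for the SQL):

```python
def looks_like_number(s: str) -> bool:
    """Mirror the TRANSLATE()-based check: trim the value, drop one
    leading + or -, reject more than one '.', remove a single '.',
    then require a non-empty, all-digit remainder."""
    s = s.strip()
    if s[:1] in ("+", "-"):
        s = s[1:]
    if s.count(".") > 1:         # rejects IP-address-like strings
        return False
    s = s.replace(".", "", 1)    # remove a single '.' if one exists
    return s.isdigit()           # '' fails, commas and letters fail

# Same values as the NUMBER_TEST table above.
data = [".5", "1", "2", " 3", "4 ", "3.14159", "127.0.0.1",
        "+34.45", "-54.43", "this is a test", "th1s is 4 t3st", "."]
for value in data:
    print(repr(value), looks_like_number(value))
```

Each step in this function corresponds to one layer of the nested inline views in the SQL, which is a fair picture of why the SQL got so convoluted.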