Here you will find posts that are mostly about my work as an Oracle DBA. There may occasionally be other topics posted, but by and large this blog will be about Oracle and other geeky IT topics. Perl will likely be mentioned from time to time.
Monday, October 13, 2008
SQL Developer Data Modeling Update
See the OTN article for details.
I haven't tried it yet; it will be interesting to see just how well it works.
Wednesday, October 01, 2008
Undocumented Oracle Functions
The caveat of course is that they are undocumented. They can change without notice between releases or patch levels, so building apps that depend on them may be unwise.
They are often quite useful from a DBA perspective when used in SQL scripts.
Here are a few that I've played with. These are all found in Oracle 10.2.0.3
These functions have one thing in common - they have a prefix of SYS_OP_
Some of these appear to be identical to documented functions.
I don't know of any official explanation for undocumented functions that seem to mimic documented ones. It may be that the source for the undocumented functions is kept separate from that of their documented counterparts, ensuring that functionality will not change for an undocumented function used in the Oracle kernel, even though its documented doppelganger may change in future releases.
In any case, undocumented functions are always interesting, and here are a few to play with.
Just keep in mind that these are undocumented, and as such may change or disappear entirely in future releases
sys_op_vecand - Return the binary AND of two raw values. Results are in hex
SELECT sys_op_vecand(hextoraw('FF'),hextoraw('FE')) from dual;
FE
16:13:12 SQL>SELECT sys_op_vecand(hextoraw('C3'),hextoraw('7E')) from dual;
42
sys_op_vecor - Return the binary OR of two raw values. Results are in hex
16:14:39 SQL>SELECT sys_op_vecor(hextoraw('FF'),hextoraw('FE')) from dual;
FF
sys_op_vecxor - Return the binary XOR of two raw values. Results are in hex
SELECT sys_op_vecxor(hextoraw('FF'),hextoraw('FE')) from dual;
01
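These byte-wise operations are easy to check outside the database. Here's a small Python sketch that applies the same logic to raw values given as hex strings; vec_op is a made-up helper name, not an Oracle function.

```python
# A sketch of what sys_op_vecand / sys_op_vecor / sys_op_vecxor appear to do:
# apply a bitwise operator byte by byte to two equal-length raw (hex) values.

def vec_op(hex_a: str, hex_b: str, op) -> str:
    """Apply a binary operator to two hex strings, byte by byte."""
    a, b = bytes.fromhex(hex_a), bytes.fromhex(hex_b)
    return bytes(op(x, y) for x, y in zip(a, b)).hex().upper()

print(vec_op('C3', '7E', lambda x, y: x & y))  # 42 (AND)
print(vec_op('FF', 'FE', lambda x, y: x | y))  # FF (OR)
print(vec_op('FF', 'FE', lambda x, y: x ^ y))  # 01 (XOR)
```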
sys_op_vecbit - Return the value of the bit at position N in a raw value
The return value is 0 or 1
This is an interesting function, as it can be used to determine the value of bits in a number. If, for instance, some flags are stored in a bit vector and you need to know the value of the third bit, this is an easy way to do it.
I believe the upper limit on the number of bits is 127.
prompt
define decnum=10
prompt &&decnum dec = 1010 bin
16:16:27 SQL>select 'Bit 0 is ' || sys_op_vecbit(SYS_OP_NUMTORAW(&&decnum),0) from dual;
Bit 0 is 0
16:16:27 SQL>select 'Bit 1 is ' || sys_op_vecbit(SYS_OP_NUMTORAW(&&decnum),1) from dual;
Bit 1 is 1
16:16:27 SQL>select 'Bit 2 is ' || sys_op_vecbit(SYS_OP_NUMTORAW(&&decnum),2) from dual;
Bit 2 is 0
16:16:27 SQL>select 'Bit 3 is ' || sys_op_vecbit(SYS_OP_NUMTORAW(&&decnum),3) from dual;
Bit 3 is 1
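As the output shows, bit numbering starts at 0 with the least significant bit. The same check in Python is just a shift and a mask; vecbit here is a stand-in name for illustration, not the Oracle function.

```python
def vecbit(value: int, bit: int) -> int:
    """Return 0 or 1: the value of bit N, counting from the least significant bit."""
    return (value >> bit) & 1

flags = 10  # 1010 binary, same as the SQL*Plus example above
for bit in range(4):
    print(f"Bit {bit} is {vecbit(flags, bit)}")  # bits are 0, 1, 0, 1
```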
sys_op_bitvec - This appears to be used to build a bit vector, but I haven't figured out how to use it. Please let me know if you do.
sys_op_map_nonnull - This has been thoroughly discussed on Eddie Awad's blog:
sys_op_map_nonnull discussion
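As discussed there, its main use is null-safe comparison: the function maps NULL to a value that compares equal to itself, so two NULLs compare as equal. A rough Python model of the idea, using a made-up sentinel object (the real function returns a raw value; this just illustrates the trick):

```python
_NULL_SENTINEL = object()  # stands in for the magic value NULL is mapped to

def map_nonnull(v):
    """Map None to a fixed sentinel so that None compares equal to None."""
    return _NULL_SENTINEL if v is None else v

def null_safe_eq(a, b):
    return map_nonnull(a) == map_nonnull(b)

print(null_safe_eq(None, None))  # True, where SQL's NULL = NULL is not true
print(null_safe_eq(None, 'x'))   # False
```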
sys_op_descend - Returns the value as it would be stored in a descending index. Judging from the output, each byte is complemented and a 0xFF byte is appended, with the result shown in hex.
16:32:41 SQL>select sys_op_descend('ABC') from dual;
BEBDBCFF
sys_op_undescend - The inverse of sys_op_descend. Well, almost
17:12:59 SQL>select sys_op_undescend(sys_op_descend('ABC')) from dual
17:12:59 2 /
414243
Notice the output is in the original order, shown as the raw hex bytes of the original string: 41 42 43 is 'ABC'.
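Working only from the output above, the transformation looks like a byte-wise one's complement with a 0xFF terminator. A Python sketch of that guess follows; this is not the actual Oracle implementation, which may vary by character set.

```python
def descend(s: str) -> str:
    """Complement each byte and append 0xFF; return the result in hex."""
    return (bytes(0xFF - b for b in s.encode()) + b'\xff').hex().upper()

def undescend(hexval: str) -> str:
    """Invert descend(): drop the 0xFF terminator and complement the bytes back."""
    raw = bytes.fromhex(hexval)[:-1]
    return bytes(0xFF - b for b in raw).decode()

print(descend('ABC'))             # BEBDBCFF, matching sys_op_descend('ABC')
print(undescend(descend('ABC')))  # ABC
```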
sys_op_dump - dump the data from an ADT/UDT (Abstract/User-defined Data Type)
16:54:13 SQL>CREATE OR REPLACE TYPE my_adt AS OBJECT (
16:54:13 2 last_name varchar2(30),
16:54:13 3 first_name varchar2(30),
16:54:13 4 id number(6)
16:54:13 5 )
16:54:13 6 /
16:54:13 SQL>
16:54:13 SQL>
16:54:13 SQL>select sys_op_dump(my_adt('still','jared',234987)) from dual;
('still','jared',234987)
I don't use objects in the database, but this would likely be useful for someone who does.
sys_op_guid - this appears to be identical to sys_guid
17:00:50 SQL>select sys_guid(), sys_op_guid() from dual;
52BA7CF06BB488ECE040010A7C646200 52BA7CF06BB588ECE040010A7C646200
Thursday, September 25, 2008
Data Modeling with SQL Developer
One of the more interesting sessions (for me anyway) at OOW 2008 was a session not on database performance, but on data modeling.
The SQL Developer team has been hard at work creating a data modeling plugin for SQL Developer.
This appears to be a very full-featured tool, and may well be the answer to the question "What will replace Oracle Designer?"
While Designer is much more than a data modeling tool, data modeling is one of its core features, and many folks have used it just for that capability.
The new ERD tool is no lightweight; it is quite full featured from a database modeling and design standpoint.
Some of the features included:
- Domains generated from data
- Real logical and physical modeling, not just one model with 2 different names.
- The ability to reverse engineer several schemas at once and have them appear not only as a master model, but each individually as a sub model.
- Sub model views may be created on sets of objects as well.
- The tool can determine all tables related to a table through FKs and create a sub model based on that set.
- Two forms of notation: Barker and IE
- Many options for displaying sub/super types (D2k fans rejoice!)
- Glossary - a predefined set of names. These can be used to enforce naming conventions for entities, tables and relations.
- Schema comparison with DDL change generation
The repository can be either file based, or database based.
There are two versions of the tool, a plugin to SQL Developer, and a stand alone version. The stand alone version will use only the file based repository.
Now for the bad news.
The release date has not been established. The only release information given was 'sometime in the 2009 calendar year'. As the database repository has not yet been designed, the long time to release is understandable.
And finally, licensing has not been established. It might be free, it might not. If not, we can at least hope it is reasonably priced. Personally, I think having a decent data modeling tool free of charge with SQL Developer would contribute to higher quality databases, as more people would use a real database design tool rather than a drawing tool.
There was probably more that didn't make it into my notes.
Suffice it to say this is a great development for data modelers and database designers.
Following are a few screen shots taken during the presentation.
Monday, August 18, 2008
AWR Usage Poll
Never mind the fact that AWR is still a product that is licensed separately from the database, and that a large segment of the Oracle DBA population doesn't seem to realize that. Or that Active Session History (ASH) is part of AWR, and falls under the same license restrictions.
So I conducted a poll regarding the use of AWR: AWR Usage Poll. If you haven't taken the poll, please do so.
While the web site does provide a chart of results, those results don't include the extra comments made by poll takers. You may or may not be able to download all the results; I'm not sure if that is restricted to the poll owner.
Nonetheless, I have compiled the results from the 100 or so respondents into an Excel workbook, along with a few charts. You may find some of the additional comments of interest as well. AWR Usage Results
Draw your own conclusions regarding these results. I think it interesting that AWR appears to be quite widely used. Personally I fall into the category of not using it because of the expense. I may work on changing that for a couple of key servers, as AWR is not that expensive, but in a small shop, spending $20k on a feature that is not often needed can be a hard sell.
One question I purposely left out was "Do you use AWR even though you have not licensed it"? While it might satisfy the curiosity of some (including me) I didn't want to give any Oracle sales people (or Oracle attorneys for that matter) any reasons to contact me regarding the poll.
In retrospect, a good question would have been: "Did you realize AWR/ASH is a separately licensed product?" Too late to add that now, but bringing it up quite often leads to lively discussion.
Another interesting bit was that a few people have extended STATSPACK in some way, even using it on Oracle 10g+. One even mentioned the excellent repository of statspack scripts assembled by Tim Gorman. Tim Gorman's Statspack Scripts
Friday, August 01, 2008
An unusual cause of ORA-12154
This error is essentially telling you that you have messed up your database naming configuration, whether it be tnsnames, Oracle Names, or OID. The fix is normally quite simple: just correct the naming.
This is usually quite easily done with netmgr, a tool that is fairly good at its job. The syntax for setting up a tnsname is a little convoluted, and I've fallen back on netmgr a number of times when I can't seem to get it just right by typing the entries in the file.
There's at least one other way to cause ORA-12154 to pop up and consume more time than you may like to admit. I won't tell how long I played with this...
The cause is actually due to security policy. While the characters !@#$%^&*()_-=+~` are not normally allowed in Oracle passwords, it is actually quite simple to include them: simply enclose the password in double quotes.
alter user scott identified by "th!sat^st";
This was done on a number of accounts on our databases, all in the name of security.
These types of passwords have been used without issue for some time now.
Today was a notable exception.
After spending some time fiddling with tnsnames.ora files, I realized what the problem actually was.
Suppose you were to have a password with an '@' in it? Yep, that was the problem.
First let's see what it looks like from within sqlplus:
15:41:52 SQL> alter user scott identified by "what@mistake";
User altered.
15:42:03 SQL> connect scott/"what@mistake";
Connected.
scott SQL> select user from dual;
USER
------------------------------
SCOTT
SQL> connect scott/what@mistake
ERROR:
ORA-12154: TNS:could not resolve the connect identifier specified
As long as the connect string was in double quotes, no problem.
Now let's try it from a linux command line:
Linux> sqlplus scott/'what@mistake'
SQL*Plus: Release 10.2.0.3.0 - Production on Fri Aug 1 15:42:20 2008
ERROR:
ORA-12154: TNS:could not resolve the connect identifier specified
Linux> sqlplus scott/'what\@mistake'
SQL*Plus: Release 10.2.0.3.0 - Production on Fri Aug 1 15:46:14 2008
ERROR:
ORA-01017: invalid username/password; logon denied
Linux> sqlplus scott/"what\@mistake"
SQL*Plus: Release 10.2.0.3.0 - Production on Fri Aug 1 15:46:21 2008
Copyright (c) 1982, 2006, Oracle. All Rights Reserved.
ERROR:
ORA-01017: invalid username/password; logon denied
Linux> sqlplus 'scott/what\@mistake'
SQL*Plus: Release 10.2.0.3.0 - Production on Fri Aug 1 15:47:23 2008
Copyright (c) 1982, 2006, Oracle. All Rights Reserved.
ERROR:
ORA-01017: invalid username/password; logon denied
Linux > sqlplus scott/what\@mistake
SQL*Plus: Release 10.2.0.3.0 - Production on Fri Aug 1 15:48:52 2008
Copyright (c) 1982, 2006, Oracle. All Rights Reserved.
ERROR:
ORA-12154: TNS:could not resolve the connect identifier specified
There doesn't seem to be any way to use a password containing the @ character, at least not from sqlplus. The same password works fine when used on the command line with a Perl script that logs on to the same database and account:
Linux> DBI_template.pl -database dv11 -username scott -password 'what@mistake'
X
The 'X' is the correct output as this script simply selects * from dual.
Lesson learned, do not ever, under any circumstances, use the @ character in the password for an Oracle database account.
You might think that goes without saying, but it's actually pretty easy to get a password with '@' in it when the passwords are generated random characters.
FWIW, this problem manifested on Windows as well.
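One way to sidestep the problem entirely is to generate passwords from a pool that excludes '@' and the other characters that confuse connect strings. A sketch in Python; the "safe" character set here is an assumption based on this incident, so check it against your own password policy.

```python
import secrets
import string

# Characters assumed safe for Oracle passwords: letters, digits, and the
# identifier punctuation _ $ # -- notably no '@', '/' or quotes, which
# collide with connect-string syntax.
SAFE_CHARS = string.ascii_letters + string.digits + '_$#'

def make_password(length: int = 12) -> str:
    """Generate a random password containing no connect-string metacharacters."""
    return ''.join(secrets.choice(SAFE_CHARS) for _ in range(length))

print(make_password())
```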
Wednesday, July 23, 2008
Oracle Locator Express
I can't recall exactly what the tool was called or which version Oracle was at when it ceased to be a part of the installation. I do know that it doesn't work with 10g+.
A little tool called Oracle Locator Express fills this niche nicely, and it does work with 10g. Sorry, I have not yet tried it with 11g.
"Oracle Locator Express"
I've used it for several months without issue, save one minor glitch: sometimes Oracle 10g Homes are not displayed properly in the list of Oracle Homes to choose from. Other than that, no complaints.
Tuesday, May 13, 2008
Oracle's CPU Patch Naming
Up through the patch bundle issued in October 2007, these were referred to by the date of release. The October 2007 release was "Oracle Critical Patch Update - October 2007", often shortened to "CPU Oct 2007".
Oracle Critical Patch Update - October 2007
When viewing the documentation for the October 2007 CPU you will see this:
Oracle® DatabaseCritical Patch Update Note
Release 10.2.0.2 for Microsoft Windows (x64)
The documentation for the January 2008 CPU however looks a bit different:
Oracle® Database Server Version 10.2.0.2 Patch 17
Bundle Patch Update Note
Release 10.2.0.2 for Microsoft Windows (x64)
Note the difference. The Oct 2007 CPU is referred to by date, while the Jan 2008 CPU is referred to by patch number.
By digging down into the Oct 2007 CPU documentation you will find that it is referred to as "Patch 16".
Doing the same for the "Patch 17" documentation reveals that it is indeed the Jan 2008 CPU.
Why does it matter?
Imagine working with consultants for a major application system that uses Oracle as a database. Imagine also that these consultants know only 'Patch 17', while you know only 'Jan 2008 CPU'.
This can lead to a great deal of confusion, as it did for me (as well as the consultants) recently. Digging down into the docs revealed the change in nomenclature.
It led to (I am telling on myself here) me installing both the October 2007 CPU and the January 2008 CPU on a single database. This wouldn't have been so bad had they not been applied in the wrong order.
Maybe most other Oracle folks already know about this, and I just missed the memo explaining that Oracle would start referring to their CPU's by a different name.
In any case, not knowing this caused a lot of extra work.
Thursday, March 13, 2008
ORA-6502 "Bulk Bind: Truncated Bind" error
I ran into this problem recently when some custom code that had worked well for several years suddenly started failing.
As it turns out, and as you will see a little later here, the error is rather easy to fix. What makes it difficult is if you've never encountered an ORA-6502 under these circumstances; there is precious little about it via MetaLink or Google. Writing about it here may be of help to the next innocent victim of ORA-6502 with bulk binds.
My suspicion was that new data loaded into the system from another database had something to do with the problem; the trick was determining where the error was occurring and why.
The problem went unnoticed for some time due to a quirk in the error handling built into the package. (Tom Kyte would likely agree: "Why Do People Do This?")
Built into the package is detailed profiling information. This is a good thing. Also built into the package is an error handling routine. This might be a good thing if the package were only run from the command line, as it provides a call stack and the error message.
There is an attribute of this type of error handling, however, that does not lend itself well to a job run via DBMS_JOB: the error message is printed, but the error is not raised.
The consequence is that regardless of any errors encountered during execution, they are hidden from DBMS_JOB. No error is ever raised. After a user reports that production data is not propagating to the system, the DBA (me) checks the DBA_JOBS view and finds that the job is executing successfully.
Wondering what is going on, the erstwhile DBA (me again) runs the job manually and discovers that it does not actually work at all, but no error was raised.
The printed error message was "PL/SQL: numeric or value error - Bulk Bind: Truncated Bind".
The PL/SQL package that generated this is 3000 lines long and has some level of abstraction. Translation: hard to debug when you don't know what the code is doing.
Before showing a more relevant example, I will show a simple one. If you found this page via google, this is enough to help you understand what is going on.
The problem is simply a mismatch in data type length. Obvious once you see it. This is a 9i example (10g would take less code) because the database it was written on was 9.2 at the time.
If you are accustomed to working with the data dictionary the problem will likely be obvious to you.
declare
type owner_type is table of varchar2(30) index by pls_integer;
type object_name_type is table of varchar2(20) index by pls_integer;
type object_type_type is table of varchar2(18) index by pls_integer;
t_owner owner_type;
t_object_name object_name_type;
t_object_type object_type_type;
cursor c_obj
is
select owner, object_name, object_type
from dba_objects;
begin
open c_obj;
while c_obj%isopen
loop
fetch c_obj
bulk collect into t_owner, t_object_name, t_object_type
limit 100;
if c_obj%notfound then
close c_obj;
end if;
end loop;
end;
/
*
ERROR at line 1:
ORA-06502: PL/SQL: numeric or value error: Bulk Bind: Truncated Bind
ORA-06512: at line 19
The fix is simple: change the length of object_name_type from 20 to 30. The data dictionary (DBA_OBJECTS.OBJECT_NAME) actually specifies this as 128, but this particular database has no objects with names longer than the standard identifier length of 30 characters.
Where the mystery of what caused this error becomes difficult to unravel is when there are a few levels of indirection.
There is a staging table that is the temporary home for data from another system. The PL/SQL code gets its specifications for data types from the staging table, not the source table. You probably already know what the problem is.
This example is similar to what was happening.
First create the staging table.
create table tbind_test (
owner varchar2(30) not null,
object_name varchar2(20) not null,
object_type varchar2(18) not null
)
/
Now run a similar, but somewhat different PL/SQL block.
declare
type owner_type is table of tbind_test.owner%TYPE index by pls_integer;
type object_name_type is table of tbind_test.object_name%TYPE index by pls_integer;
type object_type_type is table of tbind_test.object_type%TYPE index by pls_integer;
t_owner owner_type;
t_object_name object_name_type;
t_object_type object_type_type;
cursor c_obj
is
select owner, object_name, object_type
from dba_objects;
begin
open c_obj;
while c_obj%isopen
loop
fetch c_obj
bulk collect into t_owner, t_object_name, t_object_type
limit 100;
if c_obj%notfound then
close c_obj;
end if;
end loop;
end;
/
*
ERROR at line 1:
ORA-06502: PL/SQL: numeric or value error: Bulk Bind: Truncated Bind
ORA-06512: at line 19
Here's the offending bit of code:
19 fetch c_obj
20 bulk collect into t_owner, t_object_name, t_object_type
21 limit 100;
Hmmmm. The specification for the PL/SQL tables came straight from the table via %TYPE, what could be the problem? The problem is that you need to look at the cursor, and compare the datatypes in the table referenced by the cursor with those of the TBIND_TEST table.
Doing that, you will find that TBIND_TEST.OBJECT_NAME should really be 128:
1* alter table tbind_test modify (object_name varchar2(128) )
SQL> /
Table altered.
Now the bulk binds will run without error.
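A defensive check could catch this class of problem before the load ever runs: compare the declared lengths of the staging columns against the source columns. The sketch below uses hard-coded dictionaries as hypothetical stand-ins for what a query against ALL_TAB_COLUMNS would return for each table.

```python
# Hypothetical column-length metadata; in practice these dictionaries would be
# built from ALL_TAB_COLUMNS for the source and staging tables.
source_cols = {'OWNER': 30, 'OBJECT_NAME': 128, 'OBJECT_TYPE': 18}
staging_cols = {'OWNER': 30, 'OBJECT_NAME': 20, 'OBJECT_TYPE': 18}

def find_truncation_risks(source, staging):
    """Return columns whose staging length is shorter than the source length."""
    return {col: (src_len, staging[col])
            for col, src_len in source.items()
            if col in staging and staging[col] < src_len}

print(find_truncation_risks(source_cols, staging_cols))  # {'OBJECT_NAME': (128, 20)}
```

Here the one flagged column, OBJECT_NAME, is exactly the one that raised ORA-06502 in the example above.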
Thursday, January 31, 2008
Detect numbers with TRANSLATE() - Take two
Andrew Clarke at Radio Free Tooting pointed out the shortcomings of using TRANSLATE() to detect numbers.
As I said earlier, all I needed to do was detect if the characters in a string were all digits or not, and I wanted it to be very fast.
But Andrew's remarks got me thinking - could translate be used to detect more complex numbers?
Here's the short list of requirements:
* Detect integers
* Detect numbers with decimal point ( 4.6, 0.2, .7)
* Detect negative and positive ( leading + or - )
* Reject text with more than 1 '.', such as an IP address ( 127.0.0.1 )
* Reject anything with alpha text
Commas are considered text: 99,324.1 would be alpha.
If you need to do this on 10g, no problem, as a regular expression can handle it.
First create some test data:
drop table number_test;
create table number_test( alphacol varchar2(20));
insert into number_test values('.5');
insert into number_test values('1');
insert into number_test values('2');
insert into number_test values(' 3');
insert into number_test values('4 ');
insert into number_test values('3.14159');
insert into number_test values('127.0.0.1');
insert into number_test values('+34.45');
insert into number_test values('-54.43');
insert into number_test values('this is a test');
insert into number_test values('th1s is 4 t3st');
insert into number_test values('.');
commit;
Now select only columns where the value is a number:
select alphacol
from number_test
where regexp_instr(trim(alphacol),'^[-+]?[0-9]*(\.?[0-9]+)?$') > 0
order by 1
SQL> /
ALPHACOL
--------------------
3
+34.45
-54.43
1
2
3.14159
4
7 rows selected.
That seems to work.
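The same pattern can be exercised outside the database. Here it is in Python, for illustration only; Python's re engine and Oracle's REGEXP_* functions happen to agree on this simple pattern, though that is not guaranteed in general.

```python
import re

# The pattern from the REGEXP_INSTR query above: optional sign, digits,
# optional decimal point followed by digits.
NUMERIC = re.compile(r'^[-+]?[0-9]*(\.?[0-9]+)?$')

def is_numeric(s: str) -> bool:
    s = s.strip()
    return s != '' and NUMERIC.match(s) is not None

values = ['.5', '1', ' 3', '3.14159', '127.0.0.1', '+34.45', 'th1s is 4 t3st', '.']
print([v for v in values if is_numeric(v)])  # ['.5', '1', ' 3', '3.14159', '+34.45']
```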
But what if you're stuck doing this on 9i? REGEXP_INSTR is not available.
You can use the user defined function IS_NUMBER(), which works well, but is very slow if used on large amounts of data.
Might we be able to use and abuse the TRANSLATE() function to speed this up? Here's a bit of convoluted SQL that works well on the limited test data:
select alphacol, alpha2
from
(
select alphacol,
-- is there a sign +- ? - remove it
decode(substr(alpha2,1,1),
'-',substr(alpha2,2),
'+',substr(alpha2,2),
alpha2
) alpha2
from (
select
alphacol,
-- remove a single '.' if it/they exists
replace(substr(alphacol,1,instr(alphacol,'.')),'.') || substr(alphacol,instr(alphacol,'.')+1) alpha2
from (
select trim(alphacol) alphacol
from number_test
)
)
)
where substr('||||||||||||||||||||||||||||||||',1,length(alpha2)) = translate(alpha2,'0123456789','||||||||||')
/
(Sorry about formatting - I seem to lose all formatting when I paste SQL)
Output from this nasty bit of SQL is identical to that when using REGEXP_INSTR:
ALPHACOL ALPHA2
-------------------- ----------------------------------------
.5 5
1 1
2 2
3 3
4 4
3.14159 314159
+34.45 3445
-54.43 5443
8 rows selected.
Runstats results:
SQL> @th5
.047739 secs
.037447 secs
PL/SQL procedure successfully completed.
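For anyone untangling what the nested SQL is doing, here is the same logic restated step by step in Python: strip one leading sign, delete at most one decimal point, then apply the translate-to-'|' test.

```python
# Map every digit to '|', mirroring TRANSLATE(alpha2, '0123456789', '||||||||||').
PIPES = str.maketrans('0123456789', '||||||||||')

def looks_numeric(s: str) -> bool:
    s = s.strip()
    if s and s[0] in '+-':      # is there a sign +- ? remove it
        s = s[1:]
    s = s.replace('.', '', 1)   # remove a single '.' if it exists
    # every remaining character must translate to '|', i.e. all are digits
    return s != '' and s.translate(PIPES) == '|' * len(s)

for v in ['.5', '+34.45', '-54.43', '127.0.0.1', 'th1s is 4 t3st']:
    print(v, looks_numeric(v))
```

A second '.' or any comma survives the translation and breaks the all-pipes comparison, which is exactly how the SQL version rejects IP addresses and formatted numbers.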
Sunday, January 27, 2008
RMAN Usage Survey
If you can spare a couple of minutes, please fill out this 10 question multiple choice survey: RMAN Usage Survey
Update:
It seems I made a poor choice of online Survey site.
To get more than 100 responses to this survey, a "Professional" version must be purchased. At a rate of 1 or 2 surveys a year (for me) that is not exactly practical.
Sharing the Survey results also requires the Professional version.
Here's a summary of the results in MS Excel.
RMAN Usage Survey Results
Thank you!
Thursday, January 24, 2008
Using the TRANSLATE() Function to Detect Numeric Data
Several ideas for blog posts had come to mind over the past few weeks; those seem to have vanished with the wind, as I could not remember any of them.
Google notebook to the rescue - I have started recording blog ideas.
Now I just need more time to write entries here.
Now that I've made my lame excuse for ignoring this for a few weeks, I will move on to the technical content.
We've recently had a requirement to move some data from one application into another. The apps are similar in purpose, but not identical.
One of the challenges in doing this is cleaning up the data into a form that our production app can use.
The source of the data has allowed a revision number for part numbers to be an alphanumeric column in a table. As you well might guess, this 'feature' was somewhat abused.
The data for a single part number may contain numerous rows in a table, each with a different revision number. Some of the revision 'numbers' are numeric, some are alpha.
We are only interested in those that are numeric.
For example, consider the following data:
Part# Revision
----- ----------
123 AA
123 AB
123 AC
123 01
123 02
123 03
The parts we need to import into our system are only those with revisions of 01,02 or 03.
The problem was how to exclude non-numeric revisions in a table.
You might consider the following SQL statement adequate:
select * from parts where revision in ('01','02','03',...)
The IN operator could be converted to use a table of known numeric revisions:
select * from parts where revision in (select revision from good_revisions)
That would work, but it would require building the table, which I would rather not do.
The data has not all arrived in one fell swoop; the next batch of data could break this method unless the good_revisions table is maintained. Ugh.
Compounding the problem is that the data is not consistent.
The revision could be ' 2' or '2 ' or '02'.
It seemed that this would call for a regular expression.
Had I built the migration schemas in a 10g database, I could have used the REGEXP_INSTR function to find the numeric-only revisions.
As the application is still on 9i, I used a 9i database to build the migration schemas.
Exercising my flawless 20/20 hindsight, I realized I should have used 10g for those schemas.
Too late to change now; not enough time to move everything.
The next choice was to use the OWA_PATTERN package to find numeric-only revisions.
From a purely functional perspective, this works perfectly.
From a 'waiting around staring at the computer while a job finishes' perspective, it was less than perfect.
Using OWA_PATTERN practically guarantees that any SQL it is used in will be slow.
There had to be a faster way.
At that point the TRANSLATE function came to mind.
While I have always found this function rather obtuse, it seemed it might do the job.
The TRANSLATE function accepts a string, a list of characters to locate in the string, and a list of characters to translate them to.
How can that be used to detect numeric only character data?
It works by transforming all numeric characters into a known character, and checking to see if the string consists solely of that character.
An example is the best way to show this.
Create a test table:
create table detect_numeric
as
select cast(chr(a.rownumber) || chr(b.rownumber) as varchar2(2)) fake_number
from
( select rownum rownumber from all_Tables where rownum <= 1000 ) a,
( select rownum rownumber from all_Tables where rownum <= 1000 ) b
where a.rownumber between 48 and 90
and b.rownumber between 48 and 90
/
Here's an example of using OWA_PATTERN to find numeric data:
SQL> l
1 select fake_number
2 from detect_numeric
3 where owa_pattern.amatch(fake_number,1,'^\d+$') > 0
4* order by 1
SQL> /
FA
--
00
01
...
98
99
100 rows selected.
SQL> l
1 select fake_number
2 from detect_numeric
3 where '||' = translate(fake_number,'0123456789','||||||||||')
4* order by 1
SQL> /
FA
--
00
01
...
98
99
100 rows selected.
If the returned value is '||' then this must be all numeric data.
There's a caveat with using this method. The character used in the TRANSLATE function must not appear in the data being checked.
This example is simplified in that it does not account for nulls, spaces or varying data lengths.
Nonetheless it works quite well.
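For the fixed-width two-character test data above, the check reduces to a one-liner in most languages. A Python rendering of the same translate trick, for illustration:

```python
# Translate every digit to '|'; a 2-character value is numeric
# exactly when it becomes '||', mirroring the SQL predicate above.
PIPES = str.maketrans('0123456789', '||||||||||')

def is_two_digit(s: str) -> bool:
    return s.translate(PIPES) == '||'

print(is_two_digit('42'), is_two_digit('AB'), is_two_digit('4B'))  # True False False
```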
Is it faster?
In the test I ran, the TRANSLATE function was 2 orders of magnitude faster than OWA_PATTERN.
Tom Kyte's run_stats was used to compare the run times and resource usage of both methods.
Run Stats
Running both methods in loop 20 times yielded the following run times:
10.265529 secs
.010235 secs
PL/SQL procedure successfully completed.
The resource usage was much better for TRANSLATE as well:
SQL> @run_stats
NAME RUN1 RUN2 DIFF
---------------------------------------- ---------- ---------- ----------
...
LATCH.SQL memory manager workarea list latch 268 0 -268
LATCH.checkpoint queue latch 640 0 -640
STAT...redo size 27764 28404 640
STAT...Elapsed Time 1028 3 -1025
STAT...recursive cpu usage 1029 3 -1026
STAT...session pga memory max 74048 65536 -8512
STAT...session uga memory 65464 0 -65464
STAT...session uga memory max 65464 0 -65464
STAT...session pga memory 74048 0 -74048
49 rows selected.
The results were surprising. While REGEXP_INSTR was very fast, TRANSLATE was still the faster method.
.048662 secs
.008829 secs
PL/SQL procedure successfully completed.
That should not be too big a surprise, as there were many PL/SQL optimizations included in 10gR2, but this was still somewhat unexpected.
------
Niall Litchfield (Niall Litchfield's Blog) wondered why I had not tried using an is_number function such as the one shown below.
create or replace function is_number( chk_data_in varchar2 )
return number
is
dummy number(38,4);
begin
dummy := to_number(chk_data_in);
return 1;
exception
when value_error then
return 0;
when others then
raise;
end;
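The same guard translates almost directly to Python, where the failed conversion raises ValueError instead of PL/SQL's value_error. Note that float() is a looser check than Oracle's TO_NUMBER (it accepts 'inf' and 'nan', for instance), so this is only an approximation for illustration.

```python
def is_number(chk_data_in: str) -> int:
    """Return 1 if the string converts to a number, else 0 (approximates TO_NUMBER)."""
    try:
        float(chk_data_in)
        return 1
    except ValueError:
        return 0

print(is_number('3.14'), is_number('abc'))  # 1 0
```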
Here are timings for both 9i and 10g. As expected, TRANSLATE is still quite a bit faster.
9i:
SQL> @th3
.092713 secs
.009951 secs
PL/SQL procedure successfully completed.
10g:
SQL> @th3
.362097 secs
.008479 secs
PL/SQL procedure successfully completed.