Those seem to have vanished with the wind, as I could not remember any of them.
Google Notebook to the rescue - I have started recording blog ideas.
Now I just need more time to write entries here.
Now that I've made my lame excuse for ignoring this for a few weeks, I will move on to the technical content.
We've recently had a requirement to move some data from one application into another. The apps are similar in purpose, but not identical.
One of the challenges in doing this is cleaning up the data into a form that our production app can use.
The source of the data allows the revision number for a part to be stored in an alphanumeric column. As you might well guess, this 'feature' was somewhat abused.
A single part number may have numerous rows in the table, each with a different revision number. Some of the revision 'numbers' are numeric, some are alpha.
We are only interested in those that are numeric.
For example, consider the following data:
Part# Revision
----- ----------
123 AA
123 AB
123 AC
123 01
123 02
123 03
The parts we need to import into our system are only those with revisions of 01, 02 or 03.
The problem was how to exclude non-numeric revisions in a table.
You might consider the following SQL statement adequate:
select * from parts where revision in ('01','02','03',...)
The IN operator could be converted to use a table of known numeric revisions:
select * from parts where revision in (select revision from good_revisions)
That would work, but would require building the table, which I would rather not do.
The data has not all arrived in one fell swoop - the next batch of data could break this method unless the good_revisions table is maintained. Ugh.
Compounding the problem is that the data is not consistent.
The revision could be ' 2' or '2 ' or '02'.
It seemed that this would call for a regular expression.
Had I built the migration schemas in a 10g database, I could have used the REGEXP_INSTR function to find the numeric-only revisions.
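Just for illustration, the 10g version might have looked something like this - a sketch only, reusing the parts/revision names from the example above, and not something I ran against the migration data:

-- 10g only: regexp_instr() returns the position of the first match, 0 if none
select *
from parts
where regexp_instr(trim(revision), '^[0-9]+$') > 0
/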
As the application is still in 9i, I used a 9i database to build the migration schemas.
Exercising my flawless 20/20 hindsight, I realized I should have used 10g for those schemas.
Too late to change now, not enough time to move everything.
The next choice was to use the OWA_PATTERN package to find numeric-only revisions.
From a purely functional perspective, this works perfectly.
From a 'waiting around staring at the computer while a job finishes' perspective, it was less than perfect.
Using OWA_PATTERN practically guarantees that any SQL it is used in will be slow.
There had to be a faster way.
At that point the TRANSLATE function came to mind.
While I have always found this function rather obtuse, it seemed it might do the job.
The TRANSLATE function accepts a string, a list of characters to locate in the string, and a list of characters to translate them to.
How can that be used to detect numeric only character data?
It works by transforming all numeric characters into a known character, and checking to see if the string consists solely of that character.
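Here is the idea in miniature, run against dual - digits collapse to '|' while anything else is left alone (the column aliases are just for illustration):

select translate('01','0123456789','||||||||||') numeric_rev,
       translate('AB','0123456789','||||||||||') alpha_rev
from dual
/

The first column comes back as '||' and the second as 'AB', so only the all-numeric value matches the target string of '|' characters.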
A fuller example is the best way to show this.
Create a test table:
create table detect_numeric
as
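-- chr(48) through chr(90) covers '0'-'9', a few punctuation characters and 'A'-'Z',
-- so fake_number gets a mix of numeric, alpha and mixed two-character values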
select cast(chr(a.rownumber) || chr(b.rownumber) as varchar2(2)) fake_number
from
( select rownum rownumber from all_Tables where rownum <= 1000 ) a,
( select rownum rownumber from all_Tables where rownum <= 1000 ) b
where a.rownumber between 48 and 90
and b.rownumber between 48 and 90
/
Here's an example of using OWA_PATTERN to find numeric data:
SQL> l
1 select fake_number
2 from detect_numeric
3 where owa_pattern.amatch(fake_number,1,'^\d+$') > 0
4* order by 1
SQL> /
FA
--
00
01
...
98
99
100 rows selected.
SQL> l
1 select fake_number
2 from detect_numeric
3 where '||' = translate(fake_number,'0123456789','||||||||||')
4* order by 1
SQL> /
FA
--
00
01
...
98
99
100 rows selected.
If the returned value is '||' then this must be all numeric data.
There's a caveat with using this method. The character used in the TRANSLATE function must not appear in the data being checked.
This example is simplified in that it does not account for nulls, spaces or varying data lengths.
Nonetheless it works quite well.
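For the real migration data, where a revision can be padded with spaces and vary in length, the same trick can be made a little more defensive. This is only a sketch, reusing the parts/revision names from earlier, and the caveat still applies: '|' must not be a legitimate character in the data.

-- trim() handles the ' 2' and '2 ' variants; rpad() builds a string of '|'
-- the same length as the trimmed revision, so any length of numeric
-- revision will match
select *
from parts
where trim(revision) is not null
and translate(trim(revision),'0123456789','||||||||||')
    = rpad('|',length(trim(revision)),'|')
/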
Is it faster?
In the test I ran, the TRANSLATE method is two orders of magnitude faster than OWA_PATTERN.
Tom Kyte's run_stats was used to compare the run times and resource usage of both methods.
Run Stats
Running both methods in a loop 20 times yielded the following run times (OWA_PATTERN first, then TRANSLATE):
10.265529 secs
.010235 secs
PL/SQL procedure successfully completed.
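The timing harness was nothing elaborate - something along these lines will reproduce the comparison. This is a sketch only, not the exact script; the real test also wrapped the two loops in run_stats snapshot calls, and dbms_utility.get_time only reports in hundredths of a second.

set serveroutput on

declare
   n  pls_integer;
   t1 number;
begin
   -- OWA_PATTERN: 20 passes over the test table
   t1 := dbms_utility.get_time;
   for i in 1 .. 20 loop
      select count(*) into n
      from detect_numeric
      where owa_pattern.amatch(fake_number,1,'^\d+$') > 0;
   end loop;
   dbms_output.put_line((dbms_utility.get_time - t1)/100 || ' secs');

   -- TRANSLATE: the same 20 passes
   t1 := dbms_utility.get_time;
   for i in 1 .. 20 loop
      select count(*) into n
      from detect_numeric
      where '||' = translate(fake_number,'0123456789','||||||||||');
   end loop;
   dbms_output.put_line((dbms_utility.get_time - t1)/100 || ' secs');
end;
/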
The resource usage was much better for TRANSLATE (RUN2 below) as well:
SQL> @run_stats
NAME                                           RUN1       RUN2       DIFF
---------------------------------------- ---------- ---------- ----------
...
LATCH.SQL memory manager workarea list latch    268          0       -268
LATCH.checkpoint queue latch                    640          0       -640
STAT...redo size                              27764      28404        640
STAT...Elapsed Time                            1028          3      -1025
STAT...recursive cpu usage                     1029          3      -1026
STAT...session pga memory max                 74048      65536      -8512
STAT...session uga memory                     65464          0     -65464
STAT...session uga memory max                 65464          0     -65464
STAT...session pga memory                     74048          0     -74048
49 rows selected.
Out of curiosity I also ran the comparison on a 10g database, substituting REGEXP_INSTR for OWA_PATTERN.
The results were surprising: while REGEXP_INSTR was very fast, TRANSLATE was still the faster method.
.048662 secs
.008829 secs
PL/SQL procedure successfully completed.
That should not be too big a surprise, as many PL/SQL optimizations were included in 10gR2, but it was still somewhat unexpected.
------
Niall Litchfield (of Niall Litchfield's Blog) wondered why I had not tried using an is_number function such as the one shown below.
create or replace function is_number( chk_data_in varchar2 )
return number
is
   dummy number(38,4);
begin
   dummy := to_number(chk_data_in);
   return 1;
exception
   when value_error then
      return 0;
   when others then
      raise;
end;
/
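Used as a predicate against the test table, it would look something like this:

select fake_number
from detect_numeric
where is_number(fake_number) = 1
order by 1
/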
Here are timings for both 9i and 10g. As expected, TRANSLATE is still quite a bit faster.
9i:
SQL> @th3
.092713 secs
.009951 secs
PL/SQL procedure successfully completed.
10g:
SQL> @th3
.362097 secs
.008479 secs
PL/SQL procedure successfully completed.