DBA in Exile: An exiled DBA's thoughts on Oracle, SQL Server, Unix, Windows or anything else IT related... (by Steve)

Knowing your optimizer (2017-08-18)<hr>
<i>"Sure you won't change your mind?" / "Is there something wrong with the one I have?" - Dr. Gillian Taylor & Mr. Spock - Star Trek IV: The Voyage Home</i>
<hr>
Not so very long ago, one of the subject matter experts came to talk to me about what he considered abysmal performance in a query against the ERP database we host. He was trying to find, via a SQL query, all open work orders in the database.
<p>
<b><u>The WHAT</u></b>
<p>
The query in question used two tables:<br>
MSF600 - The equipment register table. This table contains all the pieces of equipment and is keyed on a column called EQUIP_NO<br>
MSF620 - The work order table. This table contains all work orders, whether raised against equipment or not. It is keyed on WORK_ORDER but one of the many alternative indexes is based on WO_STATUS_M. Another alternative index is based upon the EQUIP_NO as well.
<p>
Work Orders have a column called WO_STATUS_M, which is the status of the work order. The valid values for status are:
<p>
A: Authorized<br>
C: Closed<br>
O: Open, not Authorized<br>
<p>
This post addresses a specific query and why it took as long as it did to run. Long-running queries are not necessarily a bad thing, depending on what they are doing; however, this particular query seemed like it should have run relatively quickly.
<p>
The SQL statement itself seemed fairly inoffensive:
<p><pre><font color=green>
select e.plant_no, e.equip_status, w.work_order, w.wo_desc, w.work_group,
w.raised_date, w.maint_sch_task, w.wo_status_m
from msf600 e
join msf620 w on w.equip_no = e.equip_no
where w.wo_status_m <> 'C'
order by raised_date
/
<p></pre></font>
Why should an apparently simple query perform so badly? Well, let's find out. Sometimes the simplest looking queries can do the weirdest things.
<p>
Oracle provides a statement, EXPLAIN PLAN, that you can add to the start of any query to generate an ESTIMATED execution plan. This is what the optimizer THINKS it should be doing. The actual plan can be something different, but we won't go into that today. Doing that here gives us:
<p><pre><font color=green>
SQL> explain plan for
2 select e.plant_no, e.equip_status, w.work_order, w.wo_desc, w.work_group,
3 w.raised_date, w.maint_sch_task, w.wo_status_m
4 from msf600 e
5 join msf620 w on w.equip_no = e.equip_no
6 where w.wo_status_m <> 'C'
7 order by raised_date
8 /
Explained.
SQL> select * from table(dbms_xplan.display);
PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------
Plan hash value: 2559632844
--------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
--------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 20366 | 2684K| | 57379 (1)| 00:00:03 |
| 1 | SORT ORDER BY | | 20366 | 2684K| 2976K| 57379 (1)| 00:00:03 |
|* 2 | HASH JOIN | | 20366 | 2684K| | 56764 (1)| 00:00:03 |
|* 3 | TABLE ACCESS FULL| MSF620 | 5006 | 430K| | 55431 (1)| 00:00:03 |
| 4 | TABLE ACCESS FULL| MSF600 | 38297 | 1757K| | 1333 (1)| 00:00:01 |
--------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
2 - access("W"."EQUIP_NO"="E"."EQUIP_NO")
3 - filter("W"."WO_STATUS_M"<>'C')
17 rows selected.
SQL>
<p></pre></font>
What we can see from this is that we are joining both tables using no indexes, doing full table scans on each.
<p>
As far as reading the plan, in general, the further a line is indented, the earlier in the whole execution of the statement it is executed.
<p>
So this query as written is doing a TABLE ACCESS FULL on both MSF600 and MSF620. TABLE ACCESS FULL means start at the first record and read EVERY record in the table. So this query as written reads ALL of MSF620, filters out the closed work orders, reads ALL of MSF600, matches the two sets together, sorts the result and returns it to the user.
<p>
<b><u>The FIX</u></b>
<p>
So, I modified the query just a little:
<p><pre><font color=green size=-1>
SQL> explain plan for
2 select e.plant_no, e.equip_status, w.work_order, w.wo_desc, w.work_group,
3 w.raised_date, w.maint_sch_task, w.wo_status_m
4 from msf600 e
5 join msf620 w on w.equip_no = e.equip_no
6 where w.wo_status_m in ('A', 'O')
7 order by raised_date
8 /
Explained.
SQL> select * from table(dbms_xplan.display);
PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------
Plan hash value: 2508940585
-------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
-------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 20364 | 2684K| | 2954 (1)| 00:00:01 |
| 1 | SORT ORDER BY | | 20364 | 2684K| 2976K| 2954 (1)| 00:00:01 |
|* 2 | HASH JOIN | | 20364 | 2684K| | 2339 (1)| 00:00:01 |
| 3 | INLIST ITERATOR | | | | | | |
| 4 | TABLE ACCESS BY INDEX ROWID BATCHED| MSF620 | 5005 | 430K| | 1007 (1)| 00:00:01 |
|* 5 | INDEX RANGE SCAN | MSF62001I9 | 5005 | | | 16 (0)| 00:00:01 |
| 6 | TABLE ACCESS FULL | MSF600 | 38297 | 1757K| | 1333 (1)| 00:00:01 |
-------------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
2 - access("W"."EQUIP_NO"="E"."EQUIP_NO")
5 - access("W"."WO_STATUS_M"='A' OR "W"."WO_STATUS_M"='O')
19 rows selected.
SQL>
<p></pre></font>
What happened here?
<p>
Look at our output now. There's an index in there: this is the index I talked about at the top, the one that uses WO_STATUS_M as the access path to the data. An INDEX RANGE SCAN is the optimizer telling us that it has found an index and can search it for a distinct range of values.
<p>
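If you're curious which indexes exist on a table in your own environment, the data dictionary will tell you. A quick sketch (the index name MSF62001I9 comes from the plan above; table and owner are from our system, so adjust to taste):
<p><pre><font color=green>
-- List the indexes on MSF620 and the columns they cover
SQL> select i.index_name, c.column_name, c.column_position
  2  from user_indexes i
  3  join user_ind_columns c on c.index_name = i.index_name
  4  where i.table_name = 'MSF620'
  5  order by i.index_name, c.column_position
  6  /
<p></pre></font>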
The Cost column is also a VERY abstract thing (it represents the amount of effort the engine expects to expend to satisfy your query) and you shouldn't go betting your house on it, but notice the huge drop in the cost of the query too: from 57,379 down to 2,954.
<p>
So instead of reading ALL of MSF620 to find the roughly 5,000 non-closed rows, it can go directly to the index, pick up those few thousand out of the million-plus, match them to MSF600 (which will always be a full scan for a query like this) and return those. Huge reduction in I/O here.
<p>
There are other issues that can affect execution time for things like this as well: are the rows already in the buffer cache, or do they need to be fetched from disk? Has the statement previously been parsed and compiled, and is it sitting in memory waiting to run again?
<p>
<b><u>The WHY</u></b>
<p>
The optimizer is both a beautiful thing and demon spawn all at the same time. It can only operate on what it knows, and sometimes it makes a bad decision. The design of this particular status field (WO_STATUS_M) gives us three statuses: A and O, which represent non-closed work orders, and C for closed. That means you have a small number of A's, then a huge bunch of C's, then a handful of O's, with the non-closed data scattered all through the table.
<p>
The optimizer takes into consideration the distribution of the data as well.
<p><pre><font color=green>
SQL> select wo_status_m, count(*)
2 from msf620
3 group by wo_status_m
4 order by 1
5 /
W COUNT(*)
- ----------
A 4946
C 1175836
O 62
<p></pre></font>
For this table, 99.5% of the values are 'C'. For whatever reason, the optimizer decided that since the range of values for "NOT C" was small, it was more efficient to scan the whole table. I can't tell you why. This is one of those bad decisions I told you about earlier.
<p>
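One way to make sure the optimizer knows about this kind of skew is to gather a histogram on the status column. This is a sketch of the standard approach, not something from the original investigation:
<p><pre><font color=green>
SQL> begin
  2    -- One histogram bucket per distinct status (A, C, O)
  3    dbms_stats.gather_table_stats(
  4      ownname    => user,
  5      tabname    => 'MSF620',
  6      method_opt => 'for columns wo_status_m size 3');
  7  end;
  8  /
<p></pre></font>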
When I changed it to be inclusive it turned around and said “OK, I know I have a few up front and a few at the end, let’s get them separately”.
<p>
It also doesn’t mean that this is always the case. There are ALWAYS exceptions.
<p>
I don’t have the time to go digging in the Oracle documentation for this at the moment, but there are several blogs and articles elsewhere on the internet that back up these findings, along the lines of “the Oracle optimizer doesn’t necessarily like inequalities”:
<p>
<a href="http://oracleinaction.com/conditions-based-inequalities-cant-use-indexes-toresolve/">http://oracleinaction.com/conditions-based-inequalities-cant-use-indexes-toresolve/</a><br>
<a href="http://oracle.readthedocs.io/en/latest/sql/indexes/predicates-equality-before-inequality.html">http://oracle.readthedocs.io/en/latest/sql/indexes/predicates-equality-before-inequality.html</a><br>
<a href="http://grokbase.com/t/freelists.org/oracle-l/11bf3r6eky/strategies-for-dealing-with-not-equal-conditions-and-indexes">http://grokbase.com/t/freelists.org/oracle-l/11bf3r6eky/strategies-for-dealing-with-not-equal-conditions-and-indexes</a><br>
<p>
So bottom line here: if you’re trying to run a query with an inequality and getting bad performance, try an equality instead and see what you get.
Back-Ups, Up and Away! (2017-03-23)<hr>
"Of all the things I've lost, I miss my mind the most" - apparently attributed to Mark Twain
<hr><p>
Well, I've been a real slacker in adding anything resembling new content to this blog for the last, oh, 4 years. Time to try to make an effort to correct that.
<p>
Let's talk about backups.
<p>
The definition of backup, at least in respect to computers as defined in the <a href="https://en.oxforddictionaries.com/definition/backup">Oxford English Dictionary</a> is "A copy of a file or other item of data made in case the original is lost or damaged."
<p>
So, all we need to do is write a script to copy all the files that make up our database off to a secondary location, right?
<p>
Right ?
<p>
Not so much.
<p>
If the database is down, you could take a copy of all of the files that make up your database purely with a copy at the operating system level. This was historically called a "cold" backup. This, of course, incurs downtime. In years gone by, regular nightly downtime for things like backups was a common affair. I remember when I first got into IT as a computer operator (back at the dawn of time) - we'd start up the machines around 6AM, they'd go from 7AM to 6PM and then we'd shut down the applications, run backups and batch jobs and the like.
<p>
However, that was another time. In our modern, internet world, users expect 99.99% uptime (well, they expect 100%, but lots of SLAs seem to float in the 99.99%+ area). This means you don't have the liberty of shutting things down for hours while you copy your database files to secondary disk or tape.
<p>
So why can't we just copy the database files to backup media with the rest of the files on our server and call it good?
<p>
Oracle knows about changes to the database by a number called the SCN - System Change Number.
<p>
Every datafile in an Oracle database has a header block. This header block contains lots of information but among it all, recorded in every datafile, is the SCN at last checkpoint.
<p>
The database writer (DBWR) likes to take frequent power naps: if it's not doing anything, it sleeps. It will wake up every 3 seconds if the database isn't really busy, or more frequently than that if it is, and flush unwritten blocks to disk. Periodically, the database performs a checkpoint, and as part of that process the header block of every datafile is updated with the same checkpoint SCN. To make sure database integrity is maintained, the SCNs across the headers must be consistent.
<p>
So, if you're copying a database while the instance is active, the SCN in the datafile headers is constantly being updated. You copy the first file and the SCN is 1000, you copy the second datafile and the SCN is 1100 and the third is at 1200. So then when you try to restore from your backup, the headers are inconsistent and the database won't open.
<p>
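You can see these SCNs for yourself. A quick sketch using the standard V$ views (run as a suitably privileged user):
<p><pre><font color=green>
SQL> -- The checkpoint SCN recorded in each datafile header
SQL> select file#, checkpoint_change# from v$datafile_header order by file#;
SQL> -- The current SCN of the database (10g and later)
SQL> select current_scn from v$database;
<p></pre></font>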
There are several ways to implement backup strategies to handle this requirement.
<p>
Oracle defines any type of backup not done using RMAN as a user managed backup. It also provides an integrated backup tool, Recovery Manager (RMAN), to perform backups, and most DBAs will probably want to go that route.
<p>
There are several things that can complicate user managed backups. If you use ASM, for example, you really need to use RMAN.
<p>
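As a taste of where we're headed, a minimal online RMAN backup (assuming the database is already in archivelog mode) can be as simple as:
<p><pre><font color=green>
C:\>rman target /
RMAN> backup database plus archivelog;
<p></pre></font>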
I think I'll wrap up this first entry here, since we've covered the why of backups and the couple of different options for performing Oracle backups. In the next article, I'll cover user managed backups and how to take them.
<p>
Until next time.
T-SQL Tuesday #44 - Second Chances (or how not to be a Horse's Rear End) (2013-07-09)<hr>
<i>"Mr Wazowski, what you lack simply cannot be taught. You're just not scary." - Dean Hardscrabble, 'Monsters University'
</i>
<hr>
<a href="http://www.sqlballs.com/2013/07/t-sql-tuesday-44-second-chance.html"><img style="float:left; margin:0 10px 10px 0;cursor:pointer; cursor:hand;width: 150px; height: 150px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiJquyEJB2QJGebzSIF-4krAaC28GclMK-POt-MUKo0E37ELJECQd46GyK2ek6vXVkaAid5vk4Blf2k2jzcdFtGWqSNeFH2Iio77ht_3yj5AuR943ro-LHleuUBUvXvwpXAva8j-Grqf16F/s400/tsqltuesday.jpg" border="0" alt="T-SQL Tuesday" id="BLOGGER_PHOTO_ID_5480538702979664562" /></a>After a year of always realizing on the second Wednesday that T-SQL Tuesday was yesterday, I remembered this month and went searching early. This month I found Bradley Ball(<a href="http://www.sqlballs.com">blog</a>|<a href="https://twitter.com/sqlballs">twitter</a>) asking us to write about "Second Chances" - time to reflect upon a time we could have done something better if given a second chance or when we royally screwed up and wished we could have a "Do over".
<p>
Since my "infrequently updated blog" has been idle for over 6 months now, it is probably due for an update.
<p>
I've been working in IT for over 25 years. Screwing up, particularly in production, can be pretty scary. My list of screw ups in that time is probably long and glorious. I'd like to say that I've never repeated the same screw up, but I'm sure that I have.
<p>
Making a mistake (unless your "mistake" is something criminal or your sense of humor when developing <a href="http://www.snopes.com/business/consumer/bastard.asp">test data</a> ends up in production) probably isn't going to be the defining point of your career - these things are things you can learn from and grow from.
<p>
Looking back at a quarter of a century of assorted screw ups, I remember such things as:
<ul>
<li>The time I truncated a table in production - except it was the wrong table (all part of being an accidental DBA I suppose)
<li>The time I defined the same physical disk on the SAN to two servers and wondered why I was having daisy chained DB failures
<li>The time I realized that my backups really weren't quite as good as I thought they were after the aforementioned disk issue
</ul>
There's more, I'm sure, but we'll leave it there.
<p>
Today's post is not just about learning from my mistakes, it's also writing about how the company I work for celebrates those mistakes.
<p>
Before I go into detail on that, I should note that I work for a small company. We all seem to get along very well together. Management truly has an open door policy - walk into the office of the President, CEO or anyone else and air concerns or just chat. Every two weeks we have a brief all hands conference call where successes are celebrated, updates on new business are given and at the end of the meeting, failures are also celebrated.
<p>
When you fail, you can expect to be the "proud" recipient of an "HA" award.
<p>
What is an HA you ask? HA stands for "Horse's A**" (since we're dealing with a blog post probably read in places of employment, I'll employ a censor to sanitize my post :-) ).
<p>
I've been with this company for over 5 years. I had done pretty well for the first 3 years and managed to avoid earning an HA of my very own.
<p>
I was doing some testing on one of the development servers. I forget what I was doing at the time - but it was something that required me to stop and restart the SQL Server service (you can see where this is going already, right?).
<p>
During the testing of whatever it was that I was doing, a call came in where I needed to jump onto the production server to check something out. I completed that and forgot to log out of the server.
<p>
I went back to where I had left off in my testing and stopped and restarted the SQL Server service. Within a few seconds, emails started coming in about a down production SQL Server as the monitoring system started screaming, and in short order after that the phone started ringing as users called in wondering where the ERP system we support had gone.
<p>
Stopping the database also caused a couple of other failures downstream as other services that relied on the database also stopped because the database wasn't there anymore (and they maintained an active connection).
<p>
It wasn't a huge deal to correct it and systems were back up in a matter of minutes, but there was lots to do afterwards, since every outage requires a Root Cause Analysis to be presented to the customer etc.
<p>
I mentioned earlier about learning from your mistakes. This particular outage caused me to always take the time to be double sure when doing things like stopping services. Now I also try to reconfigure my desktop on production servers (is the Start Button on the left of the screen? Must be a production server).
<p>
Of course in the grand scheme of things, as far as second chances go, this is a minor thing. The main reason I wanted to write about this particular incident was to "show off" the little certificate that you can earn at my place of employment for those times when you wish you had a "Second Chance".
<p>
<div class="separator" style="clear: both; text-align: center;">
<a href="http://home.comcast.net/~sjwales/images/ha_award.png" imageanchor="1" style="clear:left; float:left;margin-right:1em; margin-bottom:1em"><img border="0" height="475" width="615" src="http://home.comcast.net/~sjwales/images/ha_award.png" /></a></div>
<p>
(For the record, these are awarded humorously and without ill intent, just in case you're wondering - and the transgression doesn't necessarily have to be huge to earn one, either. As Forrest Gump says, "It Happens" and when it happens to you here, you're probably getting one of these!).
<p>
Until next time - may you never find yourself facing the behind of the horse!

FLASHBACK Technology in Oracle Databases (Part 2) (2012-11-24)<hr>
"By the way, did I mention it also travels in time?" - The Ninth Doctor, referring to the TARDIS
<hr>
Last time we talked about using Flashback technology to run queries against our database as it looked in the recent past.
<p>
This can certainly be useful, but what if it just isn't enough? What if we need to get back a table we just accidentally dropped, or recover from a data screw up so bad that recovery is our only option?
<p>
There are other aspects to Flashback technology that let us do just that.
<p>
What we looked at last time would be covered under the topic sub-titles "Flashback Query" and "Flashback Version Query". What other options do we have available to us?
<p>
Looking into the heart of our TARDIS (refer to Part 1 for the introduction of the Oracle TARDIS), we find the following options awaiting us:
<p>
<ul>
<li>Flashback Database - a physical level restore that restores the entire database to a specific point in time using flashback logs rather than doing a database recovery and rolling archive logs forward
<li>Flashback Table - recover a table to a point in time
<li>Flashback Drop - restores an accidentally dropped table as well as all related indexes, triggers and constraints
<li>Flashback Transaction - rollback a single transaction and optionally, all transactions dependent upon the rolled back transaction
<li>Flashback Transaction Query - see all the changes made by a specific transaction
</ul>
<p>
Today we'll be talking about the first three on that list (we'll deal with Flashback Transaction in part 3 of the series).
<p>
<b><font size=+1>Flashback Database</font></b>
<p>
Let's start with Flashback Database. Flashback Database is a physical level recovery, much like a database point in time recovery, up to and including the OPEN RESETLOGS at the end, but usually much faster because the initial restore of the data files from backup isn't needed.
<p>
In order to use Flashback Database, the Fast Recovery Area needs to be configured in the database that is a candidate for Flashback.
<p>
This means that the db_recovery_file_dest and db_recovery_file_dest_size initialization parameters need to be set. The db_flashback_retention_target parameter also comes into play: it specifies the number of minutes we can go back into the past and perform a Flashback Database (the default is 1440, i.e. one day).
<p>
Once this is configured, we must also make sure to issue ALTER DATABASE FLASHBACK ON.
<p>
What this will do is to start writing images of data blocks to the fast recovery area. The flashback logs are written sequentially and often in bulk. The database automatically creates, deletes and resizes these logs as needed. These logs are not archived.
<p>
The only thing that the DBA really needs to know about for the flashback logs is that they exist (from a sizing of the fast recovery area point of view) and that they are adding a performance overhead to the database.
<p>
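To keep an eye on that sizing, there's a standard view worth querying; a sketch showing the interesting columns:
<p><pre><font color=green>
SQL> -- How far back flashback logs can take us, and how much space they use
SQL> select oldest_flashback_scn, oldest_flashback_time,
  2         flashback_size, estimated_flashback_size
  3  from v$flashback_database_log
  4  /
<p></pre></font>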
The database also needs to be in Archivelog Mode for this functionality to work.
<p>
When doing a Flashback Database, the database uses information in these logs, as well as some information from archive logs, in order to perform the requested operation. It is because of this that you can't enable flashback after a failure and expect that you can flashback your database to before the failure.
<p>
Similar to the syntax used in Flashback Query, there are several options you can apply to Flashback Database:
<ul>
<li>TO SCN
<li>TO SEQUENCE
<li>TO TIMESTAMP
<li>TO RESTORE POINT
</ul>
<p>
Each of these also supports a TO BEFORE construct as well. Doing FLASHBACK TO will recover the database right up until the parameter specified. Doing FLASHBACK TO BEFORE will recover the database right up until immediately before the parameter specified.
<p>
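The syntax for each looks like this (the SCN and restore point values here are made up for illustration):
<p><pre><font color=green>
FLASHBACK DATABASE TO SCN 849611;
FLASHBACK DATABASE TO BEFORE SCN 849611;
FLASHBACK DATABASE TO TIMESTAMP systimestamp - interval '1' hour;
FLASHBACK DATABASE TO RESTORE POINT before_upgrade;
<p></pre></font>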
<b><font size=+1>Demo Time</font></b>
<p>
First things first, let's check that our database is set up correctly:
<p><pre><font color=green>
SQL> show parameter db_recovery
NAME TYPE VALUE
--------------------------- ----------- ------------------------------
db_recovery_file_dest string c:\oracle\flash_recovery_area
db_recovery_file_dest_size big integer 4977M
SQL> show parameter flashback;
NAME TYPE VALUE
----------------------------- ----------- -----
db_flashback_retention_target integer 1440
SQL> select flashback_on from v$database;
FLASHBACK_ON
------------------
NO
SQL> alter database flashback on;
Database altered.
SQL> select flashback_on from v$database;
FLASHBACK_ON
------------------
YES
SQL> select instance_name, archiver from v$instance;
INSTANCE_NAME ARCHIVE
---------------- -------
demo STARTED
<p></pre></font>
Next, let's check the status of the database as we left it after Part 1, then add a new table and populate it with some related data.
<p><pre><font color=green>
SQL> conn demo/demo
Connected.
SQL> alter session set nls_date_format = 'YYYY-MM-DD:HH24:MI:SS';
Session altered.
SQL> select table_name from user_tables;
TABLE_NAME
---------------
PREZ
SQL> desc prez
Name Null? Type
------------------------------------------ -------- -----------
NAME VARCHAR2(25)
ELECTED VARCHAR2(4)
SQL> select * from prez;
NAME ELEC
------------------------- ----
Barack Obama 2008
George W. Bush 2013
Bill Clinton 2013
George H. W. Bush 2013
Ronald Reagan 2013
SQL> select sysdate from dual;
SYSDATE
-------------------
2012-11-24:22:02:36
SQL> create table viceprez (name varchar2(25), elected varchar2(4));
Table created.
SQL> insert into viceprez values ('Joe Biden', '2008');
1 row created.
SQL> insert into viceprez values ('Dick Cheney', '2000');
1 row created.
SQL> insert into viceprez values ('Al Gore', '1992');
1 row created.
SQL> insert into viceprez values ('Dan Quayle', '1988');
1 row created.
SQL> insert into viceprez values ('George H. W. Bush', '1980');
1 row created.
SQL> select * from viceprez;
NAME ELEC
------------------------- ----
Joe Biden 2008
Dick Cheney 2000
Al Gore 1992
Dan Quayle 1988
George H. W. Bush 1980
SQL> select sysdate from dual;
SYSDATE
-------------------
2012-11-24:22:09:07
SQL> quit
<p></pre></font>
Note the timestamp here. At the time the timestamp was captured, the rows had not been committed to the database. The commit does not happen until we quit from SQL*Plus, where it happens implicitly.
<p>
So, a user gets into a database and starts deleting rows. They suddenly realize that they are in prod and not in test!
<p>
The DBA comes to the rescue (after first realizing that he must shut down the database in order to perform the flashback).
<p>
I've also included listings here showing the current incarnation of the database from RMAN between steps.
<p><pre><font color=green>
C:\>rman target /
RMAN> list incarnation of database;
using target database control file instead of recovery catalog
List of Database Incarnations
DB Key Inc Key DB Name DB ID STATUS Reset SCN Reset Time
------- ------- -------- ---------------- --- ---------- ----------
1 1 DEMO 3543868088 CURRENT 1 19-NOV-12
RMAN>quit
C:\> sqlplus demo/demo
Connected.
SQL> delete from viceprez where elected< 2000;
3 rows deleted.
SQL> delete from prez where elected = 2013;
4 rows deleted.
SQL> commit;
Commit complete.
SQL> select * from prez;
NAME ELEC
------------------------- ----
Barack Obama 2008
SQL> select * from viceprez;
NAME ELEC
------------------------- ----
Joe Biden 2008
Dick Cheney 2000
SQL>connect / as sysdba
Connected.
SQL> flashback database to timestamp to_timestamp('2012-11-24:22:09:07','YYYY-MM-DD:HH24:MI:SS');
flashback database to timestamp to_timestamp('2012-11-24:22:09:07','YYYY-MM-DD:HH24:MI:SS')
*
ERROR at line 1:
ORA-38757: Database must be mounted and not open to FLASHBACK.
SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> quit
C:\> sqlplus / as sysdba
SQL> startup mount
ORACLE instance started.
Total System Global Area 535662592 bytes
Fixed Size 1375792 bytes
Variable Size 415236560 bytes
Database Buffers 113246208 bytes
Redo Buffers 5804032 bytes
Database mounted.
SQL> flashback database to timestamp to_timestamp('2012-11-24:22:09:07','YYYY-MM-DD:HH24:MI:SS');
Flashback complete.
SQL> alter database open resetlogs;
Database altered.
SQL> connect demo/demo
Connected.
SQL> select * from prez;
NAME ELEC
------------------------- ----
Barack Obama 2008
George W. Bush 2013
Bill Clinton 2013
George H. W. Bush 2013
Ronald Reagan 2013
SQL> select * from viceprez;
no rows selected
SQL> quit
C:\>rman target /
RMAN> list incarnation of database;
using target database control file instead of recovery catalog
List of Database Incarnations
DB Key Inc Key DB Name DB ID STATUS Reset SCN Reset Time
------- ------- -------- ---------------- --- ---------- ----------
1 1 DEMO 3543868088 PARENT 1 19-NOV-12
2 2 DEMO 3543868088 CURRENT 849611 24-NOV-12
RMAN>quit
<p></pre></font>
This is where the earlier comment about the placement of the commit came into play (and also ties neatly into my earlier post this month about Transaction Isolation Levels in Oracle). Because the changes had not been committed at the timestamp specified, when the flashback database was performed, those rows are missing from the resultant database.
<p>
Let's assume that we now go back and recreate the data in the VICEPREZ table and that our same user goes and deletes the exact same records again. We'll pick it up at the delete and the subsequent rescue by the DBA:
<p><pre><font color=green>
SQL> select sysdate from dual;
SYSDATE
-------------------
2012-11-24:22:22:11
SQL> delete from viceprez where elected < '2000';
3 rows deleted.
SQL> delete from prez where elected = '2013';
4 rows deleted.
SQL> commit;
Commit complete.
SQL> select * from prez;
NAME ELEC
------------------------- ----
Barack Obama 2008
SQL> select * from viceprez;
NAME ELEC
------------------------- ----
Joe Biden 2008
Dick Cheney 2000
SQL> connect / as sysdba
Connected.
SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup mount
ORACLE instance started.
Total System Global Area 535662592 bytes
Fixed Size 1375792 bytes
Variable Size 415236560 bytes
Database Buffers 113246208 bytes
Redo Buffers 5804032 bytes
Database mounted.
SQL> flashback database to timestamp to_timestamp('2012-11-24:22:22:11','YYYY-MM-DD:HH24:MI:SS');
Flashback complete.
SQL> alter database open resetlogs;
Database altered.
SQL> connect demo/demo
Connected.
SQL> select * from prez;
NAME ELEC
------------------------- ----
Barack Obama 2008
George W. Bush 2013
Bill Clinton 2013
George H. W. Bush 2013
Ronald Reagan 2013
SQL> select * from viceprez;
NAME ELEC
------------------------- ----
Joe Biden 2008
Dick Cheney 2000
Al Gore 1992
Dan Quayle 1988
George H. W. Bush 1980
SQL> quit
C:\>rman target /
RMAN> list incarnation of database;
using target database control file instead of recovery catalog
List of Database Incarnations
DB Key Inc Key DB Name DB ID STATUS Reset SCN Reset Time
------- ------- -------- ---------------- --- ---------- ----------
1 1 DEMO 3543868088 PARENT 1 19-NOV-12
2 2 DEMO 3543868088 PARENT 849611 24-NOV-12
3 3 DEMO 3543868088 CURRENT 849931 24-NOV-12
RMAN>
<p></pre></font>
The DBA is the hero of the hour, having managed to successfully recover the database back to pre-delete state without the large time investment of a complete database restore.
<p>
Of course, such functionality needs to be used with great care in a production environment to make sure that the rest of the data you lose isn't more of a pain than the data you've already lost.
<p>
One scenario that immediately comes to mind for the use of something like this would be in a large Enterprise application where Payroll is one of the subsystems in the database. The developers in this theoretical shop are in the process of making major functional changes to the code. They want to test on as close to live data as possible so they request a refresh of the test database.
<p>
Being a large Enterprise level database, it takes 2-3 hours to restore the test server from backup. The developers run their test payroll, but it doesn't work. They need another fresh copy of the production database in order to run their test again.
<p>
Before Flashback Database, another 2-3 hours would be wasted waiting for the refresh to complete.
<p>
If the DBA sets up the test environment for Flashback before the developers start their testing, then whenever they need a fresh copy of the database, a quick Flashback Database later and they have one in a fraction of the time.
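<p>
The workflow might look something like this sketch (the restore point name here is my own invention; a flash recovery area must be configured and the database must be running in ARCHIVELOG mode):
<p><pre><font color=green>
-- Before handing the database over for testing:
create restore point before_test guarantee flashback database;

-- After a destructive test run, to rewind to the clean copy:
shutdown immediate
startup mount
flashback database to restore point before_test;
alter database open resetlogs;
</font></pre>
A guaranteed restore point ensures that the flashback logs needed to rewind to that point are retained, regardless of the db_flashback_retention_target setting.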
<p>
<b><font size=+1>Flashback Table</font></b>
<p>
Our next scenario is for Flashback Table. With this feature you can flash back just one table in the database to a particular SCN, TIMESTAMP or RESTORE POINT. Creating a restore point is as simple as:
<p><pre><font color=green>
SQL> create restore point flashback_table;
Restore point created.
<p></pre></font>
<p>
Again, let's assume that someone has run rampant over some table in the database. The table is standalone, with no dependencies on other objects, so it has been determined that it's safe to roll this one table back to a point in time in the past.
<p>
<b><font size=+1>Demo Time</font></b>
<p><pre><font color=green>
SQL> conn demo/demo
Connected.
SQL> select * from viceprez;
NAME ELEC
------------------------- ----
Joe Biden 2008
Dick Cheney 2000
Al Gore 1992
Dan Quayle 1988
George H. W. Bush 1980
SQL> select sysdate from dual;
SYSDATE
-------------------
2012-11-24:22:32:14
SQL> delete from viceprez where elected = '2008';
1 row deleted.
SQL> select * from viceprez;
NAME ELEC
------------------------- ----
Dick Cheney 2000
Al Gore 1992
Dan Quayle 1988
George H. W. Bush 1980
SQL> flashback table viceprez to restore point flashback_table;
flashback table viceprez to restore point flashback_table
*
ERROR at line 1:
ORA-08189: cannot flashback the table because row movement is not enabled
<p></pre></font>
<p>
What does that mean? Row movement is a setting that tells Oracle it's OK to assign new ROWIDs to the rows being recovered in this manner (since there's no guarantee the rows can be put back in their original physical location).
<p>
Fortunately, this is a setting that can be enabled after the fact!
<p><pre><font color=green>
SQL> alter table viceprez enable row movement;
Table altered.
SQL> select * from viceprez;
NAME ELEC
------------------------- ----
Dick Cheney 2000
Al Gore 1992
Dan Quayle 1988
George H. W. Bush 1980
SQL> flashback table viceprez to restore point flashback_table;
Flashback complete.
SQL> select * from viceprez;
NAME ELEC
------------------------- ----
Joe Biden 2008
Dick Cheney 2000
Al Gore 1992
Dan Quayle 1988
George H. W. Bush 1980
<p></pre></font>
<b><font size=+1>Flashback Drop</font></b>
<p>
For our last scenario, we're going to look at recovering a table that has been dropped.
<p>
In this scenario (ignoring the fact that a user shouldn't have drop rights on a table anyway), a user inserts a row into the table, realizes their error and attempts to remove it. However, instead of removing the row, they drop the entire table by mistake.
<p>
The DBA puts on his red cape and once again comes to the rescue!
<p>
<b><font size=+1>Demo Time</font></b>
<p><pre><font color=green>
SQL> insert into viceprez values ('The Man in the Moon', '2013');
1 row created.
SQL> select * from viceprez;
NAME ELEC
------------------------- ----
Joe Biden 2008
Dick Cheney 2000
Al Gore 1992
Dan Quayle 1988
George H. W. Bush 1980
The Man in the Moon 2013
6 rows selected.
SQL> drop table viceprez;
Table dropped.
SQL> select * from viceprez;
select * from viceprez
*
ERROR at line 1:
ORA-00942: table or view does not exist
SQL> select table_name from user_tables;
TABLE_NAME
------------------------------
PREZ
SQL> show recyclebin;
ORIGINAL NAME RECYCLEBIN NAME OBJECT TYPE DROP TIME
---------------- ------------------------------ ------------ -------------------
VICEPREZ BIN$bsEpu+DLQZWxoPngfpkZJQ==$0 TABLE 2012-11-24:22:40:18
SQL> flashback table viceprez to before drop;
Flashback complete.
SQL> show recyclebin;
SQL> select table_name from user_tables;
TABLE_NAME
------------------------------
PREZ
VICEPREZ
SQL> select * from viceprez;
NAME ELEC
------------------------- ----
Joe Biden 2008
Dick Cheney 2000
Al Gore 1992
Dan Quayle 1988
George H. W. Bush 1980
The Man in the Moon 2013
6 rows selected.
SQL> delete from viceprez where elected = '2013';
1 row deleted.
SQL> select * from viceprez;
NAME ELEC
------------------------- ----
Joe Biden 2008
Dick Cheney 2000
Al Gore 1992
Dan Quayle 1988
George H. W. Bush 1980
<p></pre></font>
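<p>
One wrinkle worth knowing about: if a new table with the original name has been created since the drop, FLASHBACK TABLE ... TO BEFORE DROP will fail with a name conflict. In that case you can restore the dropped table under a different name - a sketch, where the new name is just an example:
<p><pre><font color=green>
flashback table viceprez to before drop rename to viceprez_restored;
</font></pre>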
<b><font size=+1>Conclusion</font></b>
<p>
Used with caution, the above Flashback options could save you hours of time in the event that you accidentally drop the wrong table, or have the need to perform repetitive testing on a test copy of the database where the testing causes mass updates to the data (such as in a payroll update).
<p>
There are many more options to these commands than what we've talked about here. For full information, please refer to Chapter 18 of the Oracle 11g Database Backup and Recovery User's Guide.
<p>
<b><font size=+1>Next Time</font></b>
<p>
Next time, we'll dig a little into Flashback Transaction and Flashback Transaction Query.
Stevehttp://www.blogger.com/profile/10758481055115692692noreply@blogger.comtag:blogger.com,1999:blog-916087967433702978.post-77361393851311996152012-11-20T23:01:00.000-08:002012-12-04T10:04:47.599-08:00FLASHBACK Technology in Oracle Databases (Part 1)<hr>
"If it helps, I can travel in time as well" - The Tenth Doctor, referring to the TARDIS
<hr>
Today we're talking about Time Travel in Oracle Databases.
<p>
I almost made a reference in the title and quote of this post to Mr. Peabody and the Way Back Machine, but that would perhaps have shown my age - besides, quoting The Doctor is way cooler.
<p>
<img style="float:left; margin:0 10px 10px 0;cursor:pointer; cursor:hand;width: 255px; height: 384px;" src="http://25.media.tumblr.com/tumblr_ma0a6lxYwN1rg0y2ro1_1280.jpg" border="0" alt="The Doctor and the Tardis" />
<p>
I recently found myself in a situation where I needed to go back in time. I was rolling forward a copy of a database to a particular point in time from a backup. I needed to stop at that particular point because the project manager wanted me to run row counts on the 1,500-odd tables in the "before" and "after" databases, and I had taken the "before" counts at that specific point in time.
<p>
During my recovery operation I fat-fingered the UNTIL TIME clause and went past my stopping point.
<p>
After the expected cursing had died down I started wishing I had a TARDIS of my very own. Apart from the coolness factor of being able to zip back and forth through time at will, being able to go back about 15 minutes and fix my mistake would have been a miracle in and of itself.
<p>
It was then that I remembered FLASHBACK and it became my own TARDIS - Time and Relational Data in Schemas.
<p>
<b><font size=+1>What is Flashback Query</font></b>
<p>
First introduced in Oracle 9i, Flashback Query enables us to view the database as it existed at a point in time in the recent past. In Oracle 9i and onwards, this is implemented via the DBMS_FLASHBACK package.
<p>
This functionality was enhanced in Oracle 9i Release 2 with the addition of the "AS OF TIMESTAMP" clause to the SELECT statement.
<p>
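<p>
As a quick taste before the demos below, an AS OF query can be as simple as this sketch (the table name and interval are placeholders):
<p><pre><font color=green>
select * from employees
as of timestamp (systimestamp - interval '10' minute);
</font></pre>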
<b><font size=+1>Prerequisites and Limitations</font></b>
<p>
There are some prerequisites to using Flashback Query:
<p>
<ul>
<li>Initialization parameter requirement: undo_management = auto
<li>Initialization parameter requirement: undo_retention = nnnn (maximum number of seconds to be able to query into the past)
<li>Initialization parameter requirement: undo_tablespace = &lt;undo_tablespace_name&gt;
<li>FLASHBACK (specific tables) or FLASHBACK ANY TABLE system privilege
<li>EXECUTE privilege on DBMS_FLASHBACK
</ul>
<p>
There are some restrictions that you should be aware of as well:
<ul>
<li>Prior to 11g, DDL that alters the structure of a table invalidates old UNDO data, rendering Flashback Query inoperative. From 11g onward, most DDL is supported.
<li>Timestamps map to SCN (System Change Number) values via an internal mapping that is updated every 5 minutes. If operating within that window, query results may be inaccurate when using timestamps (use SCNs for increased accuracy).
<li>You must DISABLE/ENABLE the flashback mode before changing time windows.
<li>Only data state is affected. Any table that has had a DDL change will reflect the most recent state of the table.
<li>You cannot use Flashback Query on V$ views, but you can use it on data dictionary views like USER_TABLES.
<li>The SYS user cannot use DBMS_FLASHBACK but can use AS OF TIMESTAMP in a SELECT.
<li>DML and DDL operations cannot be performed in Flashback mode. Instead, open a cursor, fetch the data, end the flashback session and then manipulate the data - or use an INSERT INTO ... SELECT ... AS OF TIMESTAMP construct.
<li>The undo_retention parameter is a target - it is not a guarantee of data availability.
</ul>
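<p>
On that last point: if your application genuinely depends on being able to flash back a full undo_retention period, the undo tablespace can be altered to guarantee retention - at the cost of DML potentially failing when undo space is exhausted. A sketch (the tablespace name is assumed):
<p><pre><font color=green>
alter tablespace undotbs1 retention guarantee;

-- and to revert to the default behavior:
alter tablespace undotbs1 retention noguarantee;
</font></pre>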
<p>
<b><font size=+1>Demo Time</font></b>
<p>
The first thing we would need to do is to make sure that the user we're going to use has the correct permissions so we'd want to execute the following script:
<p><pre><font color=green>
create user demo identified by demo default tablespace users;
grant create session, flashback any table, create table to demo;
grant execute on dbms_flashback to demo;
alter user demo quota unlimited on users;
<p></pre></font>
Next, we will check our parameters to make sure that automatic undo is turned on and that we're archiving:
<p><pre><font color=green>
SQL> connect / as sysdba
Connected.
SQL> show parameter undo
NAME TYPE VALUE
---------------------------- ----------- ---------------
undo_management string AUTO
undo_retention integer 3600
undo_tablespace string UNDOTBS1
SQL> select instance_name, archiver from v$instance;
INSTANCE_NAME ARCHIVE
---------------- -------
demo STARTED
<p></pre></font>
After that, let's create a table and load a little data and validate what we have. At the end of the inserts, we'll commit the changes, grab our current SCN and our current date and time.
<p><pre><font color=green>
SQL> conn demo/demo
Connected.
SQL> create table prez (name varchar2(25), elected varchar2(4));
Table created.
SQL> insert into prez values ('Barack Obama', '2008');
1 row created.
SQL> insert into prez values ('George W. Bush', '2000');
1 row created.
SQL> insert into prez values ('Bill Clinton', '1992');
1 row created.
SQL> insert into prez values ('George H. W. Bush', '1988');
1 row created.
SQL> insert into prez values ('Ronald Reagan', '1980');
1 row created.
SQL> select * from Prez;
NAME ELEC
------------------------- ----
Barack Obama 2008
George W. Bush 2000
Bill Clinton 1992
George H. W. Bush 1988
Ronald Reagan 1980
SQL> commit;
Commit complete.
SQL> alter session set nls_date_Format = 'YYYY-MM-DD:HH24:MI:SS';
Session altered.
SQL> set serveroutput on
SQL> var scn number;
SQL> exec :scn := dbms_flashback.get_system_change_number;
PL/SQL procedure successfully completed.
SQL> exec dbms_output.put_line ('SCN = '||to_char(:scn));
SCN = 818801
PL/SQL procedure successfully completed.
SQL> select sysdate from dual;
SYSDATE
-------------------
2012-11-20:22:49:06
<p></pre></font>
Next, we'll run an update where we "accidentally" update all the rows in the table to an incorrect value. We'll also grab a timestamp here.
<p><pre><font color=green>
SQL> update prez set elected = '2013';
5 rows updated.
SQL> commit;
Commit complete.
SQL> select sysdate from dual;
SYSDATE
-------------------
2012-11-20:22:49:32
<p></pre></font>
With that done, it's now time to check the table out with current values and values as they were at a specific SCN. The first way uses the DBMS_FLASHBACK package to view the data as it was at that first SCN we assigned to the variable :scn. Then, we do it again using SELECT ... AS OF SCN.
<p><pre><font color=green>
SQL> select * from prez;
NAME ELEC
------------------------- ----
Barack Obama 2013
George W. Bush 2013
Bill Clinton 2013
George H. W. Bush 2013
Ronald Reagan 2013
SQL> exec dbms_flashback.enable_at_system_change_number(:scn);
PL/SQL procedure successfully completed.
SQL> select * from prez;
NAME ELEC
------------------------- ----
Barack Obama 2008
George W. Bush 2000
Bill Clinton 1992
George H. W. Bush 1988
Ronald Reagan 1980
SQL> exec dbms_flashback.disable;
PL/SQL procedure successfully completed.
SQL> select * from prez;
NAME ELEC
------------------------- ----
Barack Obama 2013
George W. Bush 2013
Bill Clinton 2013
George H. W. Bush 2013
Ronald Reagan 2013
SQL> select * from prez as of scn :scn;
NAME ELEC
------------------------- ----
Barack Obama 2008
George W. Bush 2000
Bill Clinton 1992
George H. W. Bush 1988
Ronald Reagan 1980
<p></pre></font>
For the next demo, we'll do effectively the same thing, using timestamps instead of SCNs. I made sure to wait at least 5 minutes so that the internal SCN-to-timestamp mapping had been updated, and then we went and did round 2.
<p><pre><font color=green>
SQL> select * from prez;
NAME ELEC
------------------------- ----
Barack Obama 2013
George W. Bush 2013
Bill Clinton 2013
George H. W. Bush 2013
Ronald Reagan 2013
SQL> exec dbms_flashback.enable_at_time(to_date('2012-11-20:22:49:00','YYYY-MM-DD:HH24:MI:SS'));
PL/SQL procedure successfully completed.
SQL> select * from prez;
NAME ELEC
------------------------- ----
Barack Obama 2008
George W. Bush 2000
Bill Clinton 1992
George H. W. Bush 1988
Ronald Reagan 1980
SQL> exec dbms_flashback.disable;
PL/SQL procedure successfully completed.
SQL> select * from prez
2 as of timestamp
3 to_timestamp('2012-11-20:22:49:00','YYYY-MM-DD:HH24:MI:SS');
NAME ELEC
------------------------- ----
Barack Obama 2008
George W. Bush 2000
Bill Clinton 1992
George H. W. Bush 1988
Ronald Reagan 1980
SQL> select * from prez
2 as of timestamp
3 to_timestamp('2012-11-20:22:49:32','YYYY-MM-DD:HH24:MI:SS');
NAME ELEC
------------------------- ----
Barack Obama 2013
George W. Bush 2013
Bill Clinton 2013
George H. W. Bush 2013
Ronald Reagan 2013
<p></pre></font>
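<p>
Incidentally, if you want to inspect the SCN-to-timestamp mapping yourself, Oracle provides a pair of conversion functions. A sketch (the output will obviously differ on your system, and the mapping is only retained for a limited period):
<p><pre><font color=green>
select timestamp_to_scn(systimestamp) from dual;
select scn_to_timestamp(818801) from dual;
</font></pre>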
Finally, there's another clause we can add to the SELECT statement to show what versions of a row may exist. You can specify SELECT ... VERSIONS BETWEEN.
<p>
We start by updating one row of the table, getting the new times and SCN's and then running the queries.
<p><pre><font color=green>
SQL> update prez set elected = '2008' where name = 'Barack Obama';
1 row updated.
SQL> select * from prez;
NAME ELEC
------------------------- ----
Barack Obama 2008
George W. Bush 2013
Bill Clinton 2013
George H. W. Bush 2013
Ronald Reagan 2013
SQL> commit;
Commit complete.
SQL> select sysdate from dual;
SYSDATE
-------------------
2012-11-20:23:11:54
SQL> exec dbms_output.put_line ('SCN = '||to_char(:scn));
SCN = 818801
PL/SQL procedure successfully completed.
SQL> var scn2 number;
SQL> exec :scn2 := dbms_flashback.get_system_change_number;
PL/SQL procedure successfully completed.
SQL> exec dbms_output.put_line ('SCN = '||to_char(:scn2));
SCN = 820456
PL/SQL procedure successfully completed.
SQL> select * from prez
2 versions between scn 818801 and 820456
3 where name = 'Barack Obama';
NAME ELEC
------------------------- ----
Barack Obama 2008
Barack Obama 2013
Barack Obama 2008
SQL> select sysdate from dual;
SYSDATE
-------------------
2012-11-20:23:19:24
SQL> select * from prez
2 versions between timestamp
3 to_timestamp('2012-11-20:22:49:00','YYYY-MM-DD:HH24:MI:SS')
4 and to_timestamp('2012-11-20:23:20:00','YYYY-MM-DD:HH24:MI:SS')
5 where name = 'Barack Obama';
NAME ELEC
------------------------- ----
Barack Obama 2008
Barack Obama 2013
Barack Obama 2008
<p></pre></font>
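<p>
The versions query becomes even more useful with the flashback version pseudocolumns, which expose each row version's lifetime and originating transaction. Something like this sketch against the same demo table:
<p><pre><font color=green>
select versions_startscn, versions_endscn,
       versions_operation, versions_xid,
       name, elected
from prez
versions between scn 818801 and 820456
where name = 'Barack Obama';
</font></pre>
VERSIONS_OPERATION shows I (insert), U (update) or D (delete), and VERSIONS_XID feeds directly into Flashback Transaction Query.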
<b><font size=+1>Conclusion</font></b><p>
One of the important things to note here is that while you're in Flashback mode, no DML is allowed. One way to get around that is to use SELECT ... AS OF to create a temporary copy of the data you need and then work from there to correct it.
<p>
The following would work:
<p><pre><font color=green>
create table temp_prez as select * from prez as of scn :scn;
<p></pre></font>
From there you'd be able to compare current and old data and update accordingly as needed.
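<p>
As a sketch of what that correction might look like - assuming, as in our toy table, that name is effectively a unique key - a correlated update can put changed rows back the way they were:
<p><pre><font color=green>
update prez p
set p.elected = (select t.elected from temp_prez t
                 where t.name = p.name)
where exists (select 1 from temp_prez t
              where t.name = p.name
              and t.elected != p.elected);
</font></pre>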
<p>
For further information on Flashback Technology in Oracle Databases, be sure to check the Oracle Documentation.
<p>
<b><font size=+1>Next Time</font></b><p>
Next time, we'll dig a little deeper into the features added in Oracle 10g and later, specifically Flashback Table, Flashback Drop and Flashback Database.Stevehttp://www.blogger.com/profile/10758481055115692692noreply@blogger.comtag:blogger.com,1999:blog-916087967433702978.post-31299880817547429962012-11-15T13:15:00.000-08:002012-11-15T13:15:38.414-08:00Question to self: Who am I ?<hr>
"I believe the appropriate metaphor here involves a river of excrement and a Native American water vessel without any means of propulsion." - Dr. Sheldon Cooper, The Big Bang Theory.
<hr>
Today we'll talk about some Oracle stuff for a change.
<p>
I had one of my colleagues come to me a couple of weeks ago and ask, "How do I tell which database I'm connected to after I connect?"
<p>
I had to scratch my head a little on that one. When connected as SYSDBA, it's very simple to select from v$instance and you have your instance name.
<p>
However, if you're not connected as SYSDBA, how do you do it?
<p>
Enter the SYS_CONTEXT function. SYS_CONTEXT allows you to extract parameter values associated with a particular namespace. Oracle provides a predefined namespace called USERENV which contains a wealth of information about the current session.
<p>
The following, for example would tell you which database you're currently connected to:
<pre>
<font color="green">
SELECT SYS_CONTEXT ('USERENV', 'INSTANCE_NAME')
FROM DUAL;
</font>
</pre>
<p>
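<p>
A few other USERENV parameters I find handy (these are all documented parameters of the same predefined namespace):
<p><pre><font color=green>
SELECT SYS_CONTEXT ('USERENV', 'DB_NAME')      AS db_name,
       SYS_CONTEXT ('USERENV', 'SERVER_HOST')  AS server_host,
       SYS_CONTEXT ('USERENV', 'SESSION_USER') AS sess_user,
       SYS_CONTEXT ('USERENV', 'ISDBA')        AS isdba
FROM DUAL;
</font></pre>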
There's a full list of the predefined parameters in the <a href="http://docs.oracle.com/cd/E11882_01/server.112/e26088/functions184.htm#SQLRF06117">Oracle SQL Language Reference Manual</a>.Stevehttp://www.blogger.com/profile/10758481055115692692noreply@blogger.comtag:blogger.com,1999:blog-916087967433702978.post-22880631124705652522012-11-14T23:20:00.001-08:002012-12-21T10:32:12.512-08:00Isolation Levels in Oracle vs SQL Server<hr>
"I am somewhat preoccupied telling the Laws of Physics to shut up and sit down." -Vaarsuvius, Order of the Stick Webcomic
<hr>
I sat in on a session at the PASS Summit in Seattle last week, delivered by Randy Knight, about Isolation Levels in SQL Server.
<p>
Knowing that Oracle handles things differently to SQL Server made me want to research things a little closer.
<p>
As a DBA who spends time in both Oracle and SQL Server, it's important to understand Isolation Levels in these two RDBMSs and how they work, because their default behavior is very different.
<p>
According to the ANSI Standard, there are four isolation levels in a Relational Database Management System, ordered below from least isolated to most isolated:
<p>
<ol>
<li>Read Uncommitted
<li>Read Committed
<li>Repeatable Read
<li>Serializable
</ol>
<p>
Going into the details of what each of these mean is outside the scope of what I want to write about today. There's a good write up at <a href="http://en.wikipedia.org/wiki/Isolation_%28database_systems%29">Wikipedia</a> or any of a dozen or more blogs, just search "Isolation Levels" on your favorite search engine.
<p>
It should be noted that SQL Server implements all 4 of the ANSI Standard Isolation Levels plus one of its own (SNAPSHOT Isolation - which has 2 different levels in itself - Statement level and Transaction Level). As with the basic tenets, going into the details of snapshot isolation is outside of the scope of what I want to talk about here. To learn more about SNAPSHOT Isolation, refer to Books Online.
<p>
Oracle, on the other hand, does not implement Read Uncommitted and also does not implement Repeatable Read. Oracle adds its own extra option called Read Only Isolation. You can read more about this option in the <a href="http://docs.oracle.com/cd/E11882_01/server.112/e25789/consist.htm">Oracle Documentation</a>.
<p>
Because I work daily in both Oracle and SQL Server, what I really wanted to find out was what are the core effects of how SQL Server approached the READ COMMITTED Isolation level vs how Oracle handles it.
<p>
In basic terms, SQL Server has a "pessimistic" view of data concurrency by default. When in the middle of a transaction, SQL Server will block readers on any row that has changed until that row is committed.
<p>
Oracle, on the other hand, has an "optimistic" view of concurrency by default. When in the middle of a transaction, Oracle will have readers see a version of the row as it existed at the start of the updating transaction, thus not blocking readers.
<p>
I found an <a href="http://www.oracle.com/technetwork/issue-archive/2005/05-nov/o65asktom-082389.html">article</a> written by Tom Kyte, Oracle's technology expert, where he talked about possible data inaccuracies when Read Committed is implemented the way that Microsoft does it in SQL Server and wanted to summarize it here as I used it for my own testing.
<p>
The basic setup is like this:
<p>
A bank account database has a table for balances that contains two columns, account and balance. The table has many hundreds of thousands of rows.
<p>
A query is started to sum balances. The database begins reading pages and accumulating balances. After 200,000 records have been read, a transaction is performed in another session: an account that has already been read is decremented by $400 and an account that has not yet been read (row number 300,000) is incremented by $400.
<p>
How does each database system handle this?
<p>
The timeline would go like this (I am going to represent the summing of balances by selecting one row at a time):
<p>
<ol>
<li>Transaction 1: select balance where account = 1;
<li>Transaction 1: select balance where account = 2;
<p>
... etc ...
<p>
<li>Transaction 2: Decrement the value of account 1 by $400;
<li>Transaction 2: Increment the value of account 300000 by $400;
<p>
... etc ...
<p>
<li>Transaction 1: select balance where account = 300000;
<p>
... etc ...
<p>
<li>Transaction 2: Commit
</ol>
<p>
Let's start by looking at how Oracle handles it.
<p>
I preload a table with three rows. Account 1 has a balance of $500, Account 2 has a balance of $200 and Account 300000 has a balance of $100.
<p>
Unlike SQL Server Management Studio, Oracle's SQL*Plus uses implicit transactions. You don't need to specify BEGIN TRANSACTION - a transaction is assumed from the moment you start until you quit SQL*Plus (implicit commit) or explicitly issue a commit or rollback.
<p>
In Session 1, we execute the following
<p>
<pre>
<font color=green>
SQL> select balance from accounts where account = 1;
BALANCE
----------
500
SQL> select balance from accounts where account = 2;
BALANCE
----------
200
SQL>
</font>
</pre>
<p>
We do not commit the transaction.
<p>
In Session 2 we update the values of our accounts
<p>
<pre>
<font color=green>
SQL> update accounts set balance = 100 where account = 1;
1 row updated.
SQL> update accounts set balance = 500 where account = 300000;
1 row updated.
SQL>
</font>
</pre>
<p>
Once again we leave the transaction open and do not commit.
<p>
We switch back to session 1:
<p>
<pre>
<font color=green>
SQL> select balance from accounts where account = 300000;
BALANCE
----------
100
SQL>
</font>
</pre>
<p>
The query immediately returns the value of account 300000 as it was at the beginning of this current transaction, giving us a consistent view of all the balances as of the time that the query started.
<p>
Returning to Session 2 now, we can commit the transaction over there just to tidy up loose ends.
<p>
<pre>
<font color=green>
SQL> Commit;
Commit complete.
SQL>
</font>
</pre>
<p>
Having seen how Oracle does it (the optimistic view), let's have a look at SQL Server.
<p>
On SQL Server, we need to explicitly tell SQL Server Management Studio that we want a transaction.
<p>
<pre>
<font color=green>
begin transaction
select * from accounts where account = 1;
select * from accounts where account = 2;
account balance
--------------------------------------- ---------------------------------------
1 500
(1 row(s) affected)
account balance
--------------------------------------- ---------------------------------------
2 200
(1 row(s) affected)
</font>
</pre>
<p>
So far so good.
<p>
Over to Session 2, we perform our balance transfer between accounts
<p>
<pre>
<font color=green>
begin transaction
update accounts set balance = 100 where account = 1;
update accounts set balance = 500 where account = 300000;
(1 row(s) affected)
(1 row(s) affected)
</font>
</pre>
<p>
Returning to Session 1, we try to query Account 300000:
<p>
<pre>
<font color=green>
select * from accounts where account = 300000;
</font>
</pre>
<p>
SSMS sits here and spins. It is waiting for the transaction from Session 2 to complete and commit the change to the database. A writer is blocking a reader now.
<p>
We swap back to session 2 and complete the transaction:
<p>
<pre>
<font color=green>
commit transaction
</font>
</pre>
<p>
Immediately the query in Session 1 returns a value:
<p>
<pre>
<font color=green>
account balance
--------------------------------------- ---------------------------------------
300000 500
(1 row(s) affected)
</font>
</pre>
<p>
That's not right... or is it? I'd have to say no - it's not right - not from a pure data accuracy standpoint.
<p>
From the standpoint of SQL Server's Read Committed Isolation level and the way it's implemented, that is the correct value - the value was read after it was committed to the database - but the net result is that the $400 has been counted twice. It was counted in the balance of account 1 and then again in the balance of account 300000. Our query has, in fact, returned incorrect data!
<p>
If I wanted an accurate picture of my account balances under this scenario, it appears that Snapshot Isolation is what's needed, but even then it can be a little tricky.
<p>
I ran through the scenario above again using Snapshot Isolation, using these scripts in the same order of command execution as presented above:
<p>
<u>Session 1</u>
<p>
<pre>
<font color=green>
set transaction isolation level snapshot
begin transaction
select * from accounts
select * from accounts where account = 300000
commit transaction
</font>
</pre>
<p>
<u>Session 2</u>
<p>
<pre>
<font color=green>
set transaction isolation level snapshot
begin transaction
update accounts set balance = 100 where account = 1;
update accounts set balance = 500 where account = 300000
commit transaction
</font>
</pre>
<p>
The last query in Session 1 returned $100 immediately, before I committed Session 2 - just like Oracle did. However, even after I had committed Session 2 and ended it completely, the query on the balance of account 300000 in Session 1 still showed $100, until I ran the commit in Session 1, at which point it returned the $500 value that was actually sitting in the database.
<p>
This actually makes sense, since we had done a begin transaction at the start of the query in Session 1, so it was showing us a picture of the data as it existed at the start of Session 1.
<p>
If we run through the example one more time, this time not putting the queries in Session 1 inside a transaction, as soon as Session 2 commits, the balance of account 300000 is showing the updated value immediately.
<p>
Microsoft provides an excellent <a href="http://msdn.microsoft.com/en-us/library/ms191190%28v=sql.105%29.aspx">example</a> of the different behavior under snapshot isolation and read committed snapshot isolation on their website (that uses vacation days instead of a bank account as the example).
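<p>
For completeness, both snapshot-based behaviors have to be switched on at the database level before any of the above will work. As a sketch (the database name is assumed, and changing READ_COMMITTED_SNAPSHOT requires that no other sessions are connected at the time):
<p><pre><font color=green>
-- Allow SET TRANSACTION ISOLATION LEVEL SNAPSHOT (transaction-level versioning):
ALTER DATABASE SalesDB SET ALLOW_SNAPSHOT_ISOLATION ON;

-- Make the READ COMMITTED level use row versioning (statement-level):
ALTER DATABASE SalesDB SET READ_COMMITTED_SNAPSHOT ON;
</font></pre>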
<p>So, there's a definite "Gotcha!" sitting in there. If this kind of level of accuracy is important to you and your application, make sure you completely understand how Isolation levels work in the database system you're using.
<p>
I should re-iterate that the demonstration scenario used in this post was used in Tom Kyte's article as linked above and replicated here in order to compare and contrast the differences between the default behavior in both Oracle and SQL Server and how it can be modified by using other options. While he mentioned "other vendors" in his article, I wanted to see for myself how this scenario specifically played out in SQL Server (as I already knew for certain how it would behave in Oracle).Stevehttp://www.blogger.com/profile/10758481055115692692noreply@blogger.comtag:blogger.com,1999:blog-916087967433702978.post-29232417661555011092012-11-11T18:26:00.000-08:002012-11-11T18:46:41.382-08:00PASS Summit 2012 Thoughts and Ruminations<hr>
"Sir, I think you have a problem with your brain being missing" - Zoe, Firefly, "The Train Job"
<hr>
The quote above is a little how I feel after this past week in Seattle at PASS Summit 2012. This was my second PASS Summit (many thanks to my boss for sending me out for another great week of education). My brain is so overloaded with information, I'm beginning to feel like it isn't there at all.
<p>
I learned last year a few things that were very helpful for me this year. Last year I struggled with notepads and pens and then trying not to lose all the pieces of paper I had acquired after I got home.
<p>
In the last 8 months or so I bought myself an iPad. Not really being a fan of Apple's soft keyboard, I got myself a Bluetooth keyboard/case to go with it. I saw at least one person with the ZAGGFolio case; my personal preference is the Airbender from New Trent. I used it heavily all week long in combination with Evernote as the note-taking app and it worked brilliantly.
<p>
A downside to the conference was the wifi - while it seemed to work brilliantly for anything that didn't require an SSL connection (with the sole exception, oddly enough, of the session review site, which I couldn't get to work on my iPad at all), I couldn't make an SSL connection to anything.
<p>
A new change this year was using the Guidebook site for electronic recording of speaker review forms. Even with the tablet access issue, I was able to use my phone to record session reviews with ease.
<p>
I was fortunate enough to also be there for both Pre Cons on Monday and Tuesday.
<p>
On Monday, Argenis Fernandez and Robert Davis talked about Best Practices for DBA's. While they covered some topics that I had already researched well and implemented, there were still lots of ideas for things to look at. I also met a new friend on Monday, a DBA from Singapore who I will be sure to follow up with this week.
<p>
Had a brief moment to introduce myself to Argenis and thank him for his assistance on Twitter. Whenever I post to #sqlhelp he usually has some good input.
<p>
On Tuesday I decided to spend the day with Allen White and go over some detail in Powershell. As a complete Powershell novice, I was very impressed at the things that can be done with this language.
<p>
I will need to look into getting this framework installed on all my servers and seeing just what I can leverage from it.
<p>
Tuesday evening was a great time to catch up with a former colleague of mine from my previous employer (if you saw the Princess wandering around on Thursday, it was her) and also got to meet face to face with one of my customers for the first time.
<p>
I tend to skip the Keynotes because I'm just not that excited by the whiz bang new features that Microsoft has to offer as they are announced - because I know I won't see any of them for a long time anyway, and it gives me a couple of hours in the morning to catch up on email and tend to any issues before hitting the sessions.
<p>
Highlights of Wednesday's sessions, for me, were Erin Stellato's morning session on DBCC options, Bob Dylan (Brent Ozar) doing a Lightning Talk on the war going on inside tempdb and the fabulous afternoon session with Tom LaRock and Tim Chapman (Choose your own adventure). For those who weren't there, this was a cleverly designed session where Tim and Tom had concocted a poorly performing system. The scenario was - the server is slow and we can't work. Every minute the server is in this state, it's costing the company $250,000. Fix it.
<p>
They would ask for suggestions of what to look at from the room and then follow it up with "Why that? What will that tell you, what are you looking for".
<p>
There was some controlled chaos in some parts, with people yelling out all sorts of ideas but it was a brilliant concept for a session. Kudos to you, gentlemen.
<p>
Wednesday evening was the amazing Hard Rock Cafe Karaoke. Note for next time: if you want to sing, be up there early - by the time I had imbibed enough liquid courage to put my name down, the list was full. The live band were amazing and the performances by the attendees were (for the most part :) ) pretty impressive as well. One thing I did learn was that I have apparently led a musically sheltered life because I had never heard Creep by Radiohead before - and I heard it twice that night (and once more the following night at EMP). Big thanks to Pragmatic Works, HP and Microsoft for sponsoring that one.
<p>
Thursday's highlights included a session from Brad McGehee on OS tuning and Brent Ozar's session on his sp_Blitz script. The new features in this new version give me high hopes that I might get some insights into tracking down a recurring process at my shop that locks the database to high heaven at the start of every month (I really didn't know you could pull execution plans out of the DMVs and place them in a free tool and get information on the parameters and what not out of it). Very excited about looking more into that at the start of next month.
<p>
The look on Brent's face after the session when my question started with "I have this really old COBOL based app that connects to SQL Server" was worth the price of admission.
<p>
Finally Friday came and topped off the week with some good tips from Kevin Kline on virtualization. We really don't virtualize database servers (they tend to be physical boxes) but I'm looking forward to digging more into this area.
<p>
There was so much to learn this week that it is going to take weeks to go through it all on every server and work out just what improvements can possibly be made.
<p>
Favorite session of the week is probably a split between Tom and Tim's "Choose Your Own Adventure" session and Brent's "sp_Blitz". If I had to choose, I think Tom and Tim would come out on top by a hair.
<p>
Next year, the Summit is off to North Carolina and I don't know if I'll be making it (we seem to be moving more and more toward Oracle and less and less toward SQL Server, and if that trend continues, I'll likely be investing more in Oracle education than in SQL Server education).
<p>
Still, all in all a great week of education. Next on the list is the Rocky Mountain Oracle User Group Training Days in Denver in February. Same kind of thing as PASS Summit, but for Oracle (and always a great couple of days of training)! If you do Oracle, I can't recommend this conference highly enough; go take a peek at <a href="http://www.rmoug.org/training/training-days-2013-registration/">http://www.rmoug.org/training/training-days-2013-registration/</a> for full information.
<p>
Many thanks to the organizers, the Board of Directors and the volunteers who made this all possible.Stevehttp://www.blogger.com/profile/10758481055115692692noreply@blogger.comtag:blogger.com,1999:blog-916087967433702978.post-49888280350026188692012-07-17T06:32:00.000-07:002012-07-31T06:30:21.941-07:00T-SQL Tuesday #32 - Day in the Life<hr>
<i>Insanity: doing the same thing over and over again and expecting different results. - Albert Einstein
</i>
<hr>
<a href="http://erinstellato.com/2012/07/invitation-for-tsql-tuesday-day-life/"><img style="float:left; margin:0 10px 10px 0;cursor:pointer; cursor:hand;width: 150px; height: 150px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiJquyEJB2QJGebzSIF-4krAaC28GclMK-POt-MUKo0E37ELJECQd46GyK2ek6vXVkaAid5vk4Blf2k2jzcdFtGWqSNeFH2Iio77ht_3yj5AuR943ro-LHleuUBUvXvwpXAva8j-Grqf16F/s400/tsqltuesday.jpg" border="0" alt="T-SQL Tuesday" id="BLOGGER_PHOTO_ID_5480538702979664562" /></a>After a couple of months in absentia from T-SQL Tuesday, I went hunting again this month and found Erin Stellato (<a href="http://www.erinstellato.com">blog</a>|<a href="https://twitter.com/erinstellato">twitter</a>) asking us to write about a "Day in the Life" on what we did on either July 11th or July 12th, 2012. Since I didn't really look until Wednesday, I chose Thursday.
<p>
There's more to a 'Day in the Life' of an exiled DBA than just the DBA stuff. Sometimes there are some personal things of note to report upon, and I thought I'd start this post with some of that.
<p>
The second Thursday of the month at 6AM local (Mountain) time is a scheduled maintenance window for one of my customers. Normally, while I'm at home, that means getting up at 5:55AM, stumbling down 2 flights of stairs to my basement (I work from home and telecommute), dialling into a conference bridge so I can coordinate with the server guy and do those maintenance things that usually happen.
<p>
This month, however, my family and I are away from home and staying with some friends for a few weeks in the state where my company's office is. So, in order not to wake everyone up, I needed to get up earlier so I could actually be in the office by 6 AM.
<p>
I awoke at 4:45AM and headed out of my friend's basement to find my 17-year-old son sitting in the living room watching TV.
<p>
The conversation went something like this.
<p>
Me: "What are you doing up so early?"<br>
Son: "Mom said I could stay up, but I am not allowed to be grumpy today"<br>
Me: "Aren't you going walking around the zoo today for 3 hours or so?"<br>
Son: "Yes"<br>
Me: "Trust me, go get some sleep"
<p>
(For the record, when my wife said "you can stay up", she meant for something resembling an hour, not "all night". I guess you need to be very specific when talking to teenagers on summer break).
<p>
Anyway, that aside, let's get back to the work related stuff.
<p>
Before we get started, what is it that I do ? My job title is "Senior Technical Consultant". What does that mean? It means I'm a Jack of All Trades (See my post for <a href="http://dba-in-exile.blogspot.com/2012/03/t-sql-tuesday-028-jack-of-all-trades.html">T-SQL Tuesday #28</a> if you really care :) ).
<p>
I do Oracle stuff, I do SQL Server Stuff, I do Windows Stuff, I do Unix Stuff, I write CMD Scripts, Shell Scripts, I still bravely wade neck deep into COBOL programming waters (no matter how much I try, I can't seem to ditch that one - reminds me of the story of the COBOL programmer who cryofreezes himself and gets woken up in 9999 AD in time to save the world from the Y10K bug because it says on his bio that he knows COBOL) and I also do system administration / troubleshooting on an ERP package that my company supports.
<p>
Arriving at work by 6AM, got together with the Windows guy and we did our thing. He patched the servers, I cleaned up some wayward security issues in the database and performed some index defragmentation and we got the application and monitoring restarted and continued along with our morning.
<p>
With the maintenance over, it was time to read the stack of unopened emails that appears in my inbox every morning. Backups! Check. Scheduled jobs! Check. Disk space and file growth! Check. Everything appeared to work perfectly last night.
<p>
Around mid morning I realized that an important thing had been missed today. Caffeine! After getting appropriately caffeinated, the day continued.
<p>
Drudgery time! Anyone who needs to keep up with timekeeping needs to find some time to enter their time sheet data (and I always seem to be a day behind with that). Mid-morning seemed to be as good a time as any to catch up on this chore.
<p>
Next up today (which, like any other day, appears to be heading down the path of controlled chaos) is a call about a web application that isn't responding. A check of the database shows me that the DB is responding just fine: no hung Agent jobs, queries returning normally. A short time later someone finds that a timeout setting on the web server side has been modified when it shouldn't have been. A quick change over there and all is well again. This is a fine example of how, when things stop working, people immediately suspect the database! Sometimes it is the database of course, but just as often (if not more so) it isn't.
<p>
Another project we have in place at the moment is to move what effectively is a small data center from one state to another. Several hours are spent today going over the requirements for the live move and working on debugging the bits and pieces in one of the applications after the test move the day before - this is a <b>very</b> legacy application that was written about a century and a half ago and can be very cantankerous. Making it work after shifting between servers hasn't been easy.
<p>
Lunch comes and goes and the stream of chaos is left behind for a half hour.
<p>
Plunging back into the current post-midday sustenance, it's digging into a malfunctioning trigger after the data conversion of the cantankerous app mentioned above and redoing a part of that data conversion.
<p>
A colleague then wants to analyze some data, so a quick extract into Excel later and they go back to their office humming happily over the bits and bytes contained in their pretty new spreadsheet.
<p>
The office manager strolls past my office and mentions that it's almost time for Open Enrollment again (my company has open enrollment for insurance for an October to September schedule), so a half hour goes bye bye as plans and costs and what not are reviewed. I hate Open Enrollment. Passionately.
<p>
A call comes in from a colleague asking why a report that's worked for the last 5 years isn't working today. I dig down through the code and after a short while realize that the report works fine ... as long as the user connects to the database using the correct login. Funny that. Log in as someone without the required permissions and the report doesn't work. Log in with the correct user (the one with the correct grants) and the report works!
<p>
Yesterday (in this context "yesterday" means July 11th - which was also my father's 75th birthday, so a happy birthday shout out to my dear old Dad), late in the day, I had implemented a change in a stored procedure that would allow some time sensitive data to be retained for 60 days instead of the 10 it was originally implemented with. Change Management requires validation of all changes in order to make sure they are successful. The end of my afternoon today is making sure that my change is accumulating data (since we can't prove the "delete" part of this change until 60 days from now).
<p>
Finally, as the day winds down, it seems like a good time to sit down and catch up on some of the administration that is needed in keeping track of all of the issues still before me. Machete in hand, it's time to dive into the jungle of the ticketing system and update everything that's been accomplished today, then perform the miracle that is getting the old time sheet updated before going home!
<p>
This day is probably not unlike many other people's days - organized chaos, many balls in the air at the same time and all of the tasks due yesterday.
<p>
Between the MS Outlook "To Do" List and a notepad that's always at my side, I somehow manage to keep 99.99% of my tasks and to do's tracked.
<p>
Which brings me to the end of my day and the end of the story.
<p>
Almost.
<p>
Let's go back to the opening story about a young boy who didn't want to go to bed.
<p>
What happens to a young man at the end of the day after all of about 2 hours of sleep and a 3.5 hour walk around the Denver Zoo (which is one of my favorite places btw) ?
<p>
Well, he gets in the car for the drive home, opens his gameboy to play and pass the time and this happens....
<p>
<div class="separator" style="clear: both; text-align: center;">
<a href="http://home.comcast.net/~sjwales/images/SleepingSmall.jpg" imageanchor="1" style="clear:left; float:left;margin-right:1em; margin-bottom:1em"><img border="0" height="365" width="274" src="http://home.comcast.net/~sjwales/images/SleepingSmall.jpg" /></a></div>
<p>
I love how the stylus for the Nintendo DS is sitting there in his right hand between his thumb and index finger.
<p>
Yes, I am a mean old Dad, but it needed to be done.
<p>
I'm sure when I show this post to my son he'll be cranky at me, but he will have a chuckle at it and then next time go to bed at a more reasonable hour!
<p>
(EDIT: When I showed my son this post he didn't let me down. He had a good chuckle and his response to the going to bed at a more reasonable hour was basically "Or not!")
<p>
Today was a fairly typical day. I delved into my Oracle databases (the trigger and the report that didn't work), my SQL Server databases (the maintenance window and the web app), did some scripting (data center move), analyzed some COBOL Code (part of the data center move - there's also a platform change in there somewhere) and even got some non-standard activities in today (Open Enrollment review).
<p>
At the end of the day there were also the stories about my son at the zoo. He had warned his younger sister not to bug him because he was tired and my wife countered with "Oh no, no, no. You bug him all you want, he's not allowed to be grumpy today".
<p>
Fortunately, I really enjoy doing what I do.
<p>
Hopefully your day was less chaotic than mine and if you're still with me, you haven't fallen asleep!
<p>
Until next time.Stevehttp://www.blogger.com/profile/10758481055115692692noreply@blogger.comtag:blogger.com,1999:blog-916087967433702978.post-35299211365410127502012-04-02T18:34:00.000-07:002014-06-08T13:11:29.588-07:00Scotty, Beam Me Up<hr>
<i>"N-C-C-1-7-0-1 ... No bloody A, B, C or D" - Captain Montgomery Scott, Star Trek: TNG - 'Relics'</i>
<hr>
With the topic of this post, a starting quote from Scotty himself seemed apropos.
<p>
However, this post has nothing to do with Star Trek at all. Today we'll be talking about Transportable Tablespaces in an Oracle database.
<p>
<h3>Why do this?</h3>
<p>
I have a database that I'm responsible for that has three primary schemas inside. When refreshing test from prod, I have to replace everything except one schema, which contains metadata for the others that is specific to the test instance. Replacing this metadata breaks the application in test.
<p>
Up until recently, I had been doing an export out of prod and then a truncate / import into test. This method is extremely time-consuming and I thought that there had to be a better way.
<p>
This is, of course, <a href="http://docs.oracle.com/cd/E11882_01/server.112/e25494/tspaces.htm#ADMIN11396">extensively documented</a> in the Oracle Documentation and there are some limitations to this (refer to the link) - but for what I need, this turns out to be an excellent way to get a refresh done in a much faster timeframe.
<p>
<h3>Setup</h3>
<p>
The first thing we need to do is to make sure that the tablespaces we're going to be moving are self-consistent as a set. No object in a tablespace being moved can depend on an object in a tablespace that's not part of the set being moved.
<p>
Oracle has a built in procedure to check this for us.
<p>
EXEC <a href="http://docs.oracle.com/cd/E11882_01/appdev.112/e25788/d_tts.htm#ARPLS68936">DBMS_TTS.TRANSPORT_SET_CHECK</a>('TS1_NAME, TS2_NAME, TS3_NAME, ...', TRUE);
<p>
The first parameter is a comma separated list of tablespace names, enclosed in single quotes. The second parameter is TRUE or FALSE indicating whether or not referential integrity constraints should be checked (default is FALSE); there is also a third boolean parameter (default FALSE) indicating whether a "full check" should be performed (refer to the documentation for full details).
<p>
Once complete, check to see if there are any inconsistencies:
<p>
select * from transport_set_violations;
<p>
There should be no rows returned.
<p>
My next step at this point was to check the owners of the objects in the tablespaces I was moving, since the object owners need to be defined during the import of the tablespaces.
<p>
select distinct owner from dba_tables where tablespace_name in ('TS1_NAME','TS2_NAME','TS3_NAME');
<br>
select distinct owner from dba_indexes where tablespace_name in ('TS1_NAME','TS2_NAME','TS3_NAME');
<p>
Write these down and keep them until later.
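<p>
As a variation, the same list can be pulled in one pass from DBA_SEGMENTS (a sketch only, using the same placeholder tablespace names as above; this also picks up the owners of other segment types, such as LOB segments):
<p>
<pre><font color=green>
select distinct owner
from dba_segments
where tablespace_name in ('TS1_NAME','TS2_NAME','TS3_NAME')
order by owner;
</font></pre>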
<p>
<h3>The Copy</h3>
<p>
The next step is to recover the production database to a dummy database on the test DB server. Once the recovery is complete, set the tablespaces being moved into read-only mode.
<p>
alter tablespace [name] read only;
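<p>
With more than a handful of tablespaces, those statements can be generated rather than typed; a quick sketch, again using the placeholder names:
<p>
<pre><font color=green>
select 'alter tablespace ' || tablespace_name || ' read only;'
from dba_tablespaces
where tablespace_name in ('TS1_NAME','TS2_NAME','TS3_NAME');
</font></pre>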
<p>
Once that's complete for each tablespace, we need to export the tablespace metadata:
<p>
$ORACLE_HOME/bin/exp transport_tablespace=y tablespaces=ts1_name,ts2_name,ts3_name triggers=y constraints=y grants=y statistics=none file=exp_ts.dmp
<p>
When prompted, enter "/ as sysdba" as the username (you'll need to be logged in as the Oracle owner to the operating system).
<P>
In the test database that I'm importing into, we first need to get rid of the existing tablespaces:
<p>
drop tablespace [name] including contents and datafiles;
<p>
Then copy the datafiles from the temporary copy that we recovered above into place for the test database we want to copy into, renaming the files at the OS level if needed.
<p>
<h3>The Import</h3>
<p>
Now, import the tablespace metadata:
<p>
$ORACLE_HOME/bin/imp file=exp_ts.dmp tablespaces=TS1_NAME,TS2_NAME,TS3_NAME tts_owners=user1,user2,user3 datafiles=datafile1,datafile2,datafile3 transport_tablespace=y
<p>
TTS_OWNERS is the list of schema owners from the earlier query.
<p>
When prompted, enter "/ as sysdba" as the username (you'll need to be logged in as the Oracle owner to the operating system).
<p>
When the import is complete, for each tablespace do:
<p>
alter tablespace [name] read write;
<P>
<h3>Conclusion</h3>
<p>
This method, for a non-trivially sized database, is much faster than export and import; the whole process completes in about a third of the time.Stevehttp://www.blogger.com/profile/10758481055115692692noreply@blogger.comtag:blogger.com,1999:blog-916087967433702978.post-53772460991983661682012-03-29T08:50:00.000-07:002012-03-29T17:18:21.325-07:00A Speaker's Nightmare<hr>
"I am not left handed either!" - Westley, "The Princess Bride"
<hr>
As noted in the profile section, I am a DBA who works with both Oracle and SQL Server. I had never really considered trying to speak at a technical conference. After I started paying more attention to the SQL Server online community, and attended the <a href="http://www.sqlpass.org/summit/2012/">SQL PASS Summit</a> in 2011, I thought I'd give it a shot.
<p>
I know more about Oracle than I do SQL Server and usually attend the <a href="http://www.rmoug.org/2012/03/13/training-days-2012-presentations/">Rocky Mountain Oracle User Group Training Days</a> event in Denver each February, so I submitted an abstract during the call for papers.
<p>
I have been attending the RMOUG Training Days event an average of about 2 years out of 3 for the last 10 years or so. In all that time, I never remember having seen a session on Oracle Heterogeneous Services. OHS is a feature of Oracle that lets you seamlessly connect to a non-Oracle database via either a fully functional (and separately licensed) target specific gateway or via the more restricted functionality of the (free!) Generic Gateway Agent.
<p>
Having needed to put together a real time query link to a SQL Server database in the not so distant past, I thought it would be a good topic to talk about and hoped it served up enough interest to get some backsides in seats.
<p>
I was pleasantly surprised, a couple of months later, to get an acceptance letter and started preparing my paper and slides.
<p>
During the course of the several weeks I worked on the presentation, I researched more into the topic. What I had done to meet my business requirement was only touching the tip of the iceberg. I explored, read, learned and tested. I installed MySQL, SQL Server and Oracle on my laptop and went through demo scenarios and making sure it all hung together.
<p>
On conference day, there was a 15 minute break between sessions and I turned up in my allotted room right after the end of the previous session to set up my laptop and make sure the projector and my remote were working.
<p>
The room monitor came in and we started talking and I joked that I'd had this nightmare that no one would show up.
<p>
At the top of the hour, as I'm supposed to start talking, I have 2 people in the room: One person who had walked in, and the room monitor.
<p>
Not to be discouraged, I ran through my presentation, took good questions from my small audience and provided answers.
<p>
Of course, post presentation, I was somewhat crushed. All that effort into putting the white paper and the slides together. The hours spent practicing the delivery, setting up the demo.
<p>
I'm sure that there were several contributing factors to the almost non-existent turnout. There were some big names in the Oracle world speaking at the same time on more mainstream topics than mine. If I were Joe Q. DBA and could see a big known name talk about something more mainstream (like the optimizer, performance tuning etc.) or go see some unknown talk about a rarely used part of the technology, I probably would have gone and seen the big name too.
<p>
Also, the reason that I'd never seen a presentation on OHS before is probably that not a lot of people use it or need to know about it. If you see a session on the track about a piece of technology that you never use, the likelihood of attending that session, instead of a session on something that you need to know about and use on a day-to-day basis, is low.
<p>
So, why am I writing this post and telling my sob story ?
<p>
The real value out of this whole process was the preparation. Even though the time spent researching, writing, building a slide deck seemed to have been wasted (because of the extremely small turnout), the reward wasn't in hearing the applause from a full room. The reward came from what I learned during the process. I now know a whole lot more about distributed transaction processing in Oracle, how it works over a Generic Gateway Agent as opposed to a target specific Gateway Agent and this whole piece of the technology in general.
<p>
The real reward was, pure and simply, the knowledge gained.
<p>
Does this mean that I'll never put forth another abstract in the future? No. If I can think of a topic with a more general scope that I am somewhat knowledgeable and passionate about, then I'll probably submit again.
<p>
The real lesson learned here is not to be afraid to put yourself out there and speak about a topic that you feel others could learn something from. Even if your session is poorly attended, you'll accomplish two pretty important things. You'll learn something along the way, and you'll get your name out there.
<p>
Who knows, in the years to come, someone's online presence and visibility in the technical conference circuit could have a huge impact on finding you your dream job. (Brent Ozar PLF recently hired their first employee and a large part of their decision making process was just that: see their <a href="http://www.brentozar.com/archive/2012/03/meet-brent-ozar-plfs-employee/">blog entry</a> about their hiring decision).
<p>
So, when it comes to deciding about trying to speak at a technical conference, as those Nike people used to say: "Just Do It!"
<p>
By the way, if you're someone who deals with Oracle and want to read the slides / paper, drop me an email (address in the About Me section of the blog) and I'll send the documents your way :)Stevehttp://www.blogger.com/profile/10758481055115692692noreply@blogger.comtag:blogger.com,1999:blog-916087967433702978.post-76839800083745837972012-03-28T07:15:00.000-07:002012-03-28T07:15:32.275-07:00Architectural notes on SQL Server vs. Oracle<hr>
"I am not left handed!" - Inigo Montoya, "The Princess Bride"
<hr>
One of the first things to cover under this heading is the concept of a schema. The difference between a schema in SQL Server and a schema in Oracle is significant.
<p>
The <a href="http://docs.oracle.com/cd/E11882_01/server.112/e25789/tablecls.htm#CNCPT111">Oracle 11g Database Concepts</a> manual defines a schema as follows:
<p><i>
"A database schema is a logical container for data structures, called schema objects. Examples of schema objects are tables and indexes. Schema objects are created and manipulated with SQL.
<p>
A database user has a password and various database privileges. Each user owns a single schema, which has the same name as the user. The schema contains the data for the user owning the schema. For example, the hr user owns the hr schema, which contains schema objects such as the employees table. In a production database, the schema owner usually represents a database application rather than a person.
<p>
Within a schema, each schema object of a particular type has a unique name. For example, hr.employees refers to the table employees in the hr schema. Figure 2-1 depicts a schema owner named hr and schema objects within the hr schema."
</i><p>
A schema does not necessarily need to contain tables. It contains schema objects - be they triggers, synonyms, views or whatever. Note that the schema has the same name as the user that owns it.
<p>
In SQL Server things are a little different. <a href="http://msdn.microsoft.com/en-us/library/ms190387.aspx">SQL Server Books Online</a> explains how things changed from SQL Server 2005 onward:
<p><i>
"The behavior of schemas changed in SQL Server 2005. Schemas are no longer equivalent to database users; each schema is now a distinct namespace that exists independently of the database user who created it. In other words, a schema is simply a container of objects. A schema can be owned by any user, and its ownership is transferable."
</i><p>
The AdventureWorks database (the demo database that comes with SQL Server) shows this. Inside the one database, there is a schema called Sales containing all the sales tables, another called Production for the products, and one called HumanResources for employee data. One user can be granted access to all of these tables (much like in Oracle or any other RDBMS) but they are each contained in their own schema. The schema is just a corral to hold these database objects.
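<p>
To illustrate, here's a minimal sketch of creating a schema and transferring its ownership; the schema, user and table names are made up for the example, not taken from AdventureWorks:
<p>
<pre><font color=green>
-- Create a schema initially owned by dbo
CREATE SCHEMA Reporting AUTHORIZATION dbo;
GO
-- Ownership is transferable, independent of any user's name
ALTER AUTHORIZATION ON SCHEMA::Reporting TO SomeOtherUser;
GO
-- Objects live in the schema (the container), not in a user
CREATE TABLE Reporting.DailyTotals (RunDate datetime, Total int);
</font></pre>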
<p>
The next piece of the architecture to discuss is how the physical data is stored. In Oracle, the database block size is determined at database creation and can be 2K, 4K, 8K, 16K or 32K (although each operating system may support a narrower range). In SQL Server, the database block, called a page, is always 8K (although there are exceptions for large-value data types such as varchar(max), varbinary(max), xml, image etc., which can span pages). In SQL Server, 8 x 8K pages form an extent, which is the smallest allocation unit and should be 8 contiguous pages. Extent allocation in Oracle is handled differently depending on whether the tablespace is defined with uniform extent sizes (each allocation is the same, based upon a value set by the DBA at tablespace creation) or is left as system managed (where Oracle decides how large the next extent needs to be based on internal algorithms).
<p>
Both Oracle and SQL Server store data in datafiles. In Oracle, one or more datafiles are assigned to a tablespace. (More accurately, when a tablespace is created, a datafile is defined for that tablespace. Later, that datafile can be expanded, or more datafiles can be added to the tablespace to increase overall space.) Each tablespace has at least one datafile but may have more than one.
<p>
In SQL Server, the filegroup is the concept that most closely matches the tablespace. A filegroup is a logical construct containing datafiles that can be used to spread data out over multiple physical disk devices. When a database is created, a PRIMARY filegroup is mandatory. Other user-defined filegroups can also be created, containing data files on various physical devices. Unless otherwise defined, the PRIMARY filegroup is the default, and new objects created without specifying a filegroup are assigned to the default filegroup.
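<p>
As a sketch of how that looks in practice (the database, filegroup, file names and paths below are hypothetical):
<p>
<pre><font color=green>
-- Add a user defined filegroup and a data file on another device
ALTER DATABASE SalesDB ADD FILEGROUP FG_ARCHIVE;
GO
ALTER DATABASE SalesDB
  ADD FILE (NAME = ArchiveData1,
            FILENAME = 'D:\MSSQL\Data\ArchiveData1.ndf',
            SIZE = 512MB, FILEGROWTH = 64MB)
  TO FILEGROUP FG_ARCHIVE;
GO
-- Place a table on the new filegroup; without the ON clause it
-- would be created on the default filegroup instead
CREATE TABLE dbo.OrderArchive (OrderID int, OrderDate datetime)
  ON FG_ARCHIVE;
</font></pre>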
<p>
Once again, <a href="http://msdn.microsoft.com/en-us/library/ms179316.aspx">SQL Server Books Online</a> has an excellent reference to the subject, that I will not replicate here.
<p>
The final thing I want to mention in this post is the difference between the transaction logging models.
<p>
Oracle databases are defined with a redo log. The redo log consists of two or more groups, each containing one or more online redo log files (members). It is common practice to multiplex the redo log files to protect against media failure. As changes are made to data, logging information is written to the currently active redo log group. When the current group fills, a log switch occurs and log information is written to the next group. This cycle continues through the groups as data is generated. If ARCHIVELOG mode is enabled, after a log group is filled a copy of the log is written to the log archive destination. These sequentially numbered archive logs are used for point-in-time recovery in the event of a failure.
<p>
The SQL Server transaction log, on the other hand, is one or more log files per database, written to continually in a wrap-around fashion. In a database using the full recovery model, the transaction log is written to continually and will continue to grow unless a transaction log backup is done. Once the backup is done, the transaction log is free to be written over again from the beginning. In the simple recovery model, changes are still logged to the transaction log, but once a transaction is completed, the space it used is able to be overwritten, so a chain of changes cannot be maintained (point-in-time recovery is impossible in this mode).
<p>
Oracle has two options for logging: ARCHIVELOG mode and NOARCHIVELOG mode. Transactions are still logged to the online redo logs, but the difference here is whether or not the redo logs are archived out to disk when full. The <a href="http://docs.oracle.com/cd/B19306_01/server.102/b14231/archredo.htm">Oracle Documentation</a> discusses this in more detail.
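<p>
For reference, switching an Oracle database into ARCHIVELOG mode requires a restart through the MOUNT state. A sketch, run as SYSDBA in SQL*Plus:
<p>
<pre><font color=green>
shutdown immediate
startup mount
alter database archivelog;
alter database open;
archive log list          -- confirm the new logging mode
</font></pre>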
<p>
SQL Server has three different recovery models: Full, Bulk Logged and Simple. Full and Simple respectively correspond to the two Oracle methods. <a href="http://msdn.microsoft.com/en-us/library/aa173531%28v=sql.80%29.aspx">SQL Server Books Online</a> covers these in more detail.
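<p>
A quick sketch of the SQL Server side (the database name and backup paths are made up for the example):
<p>
<pre><font color=green>
-- Full recovery: log backups are required to keep the log in check
ALTER DATABASE SalesDB SET RECOVERY FULL;
BACKUP DATABASE SalesDB TO DISK = 'D:\Backup\SalesDB.bak';
BACKUP LOG SalesDB TO DISK = 'D:\Backup\SalesDB_log.trn';
-- Simple recovery: log space is reused automatically, and
-- point-in-time recovery is no longer possible
ALTER DATABASE SalesDB SET RECOVERY SIMPLE;
</font></pre>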
<p>
SQL Server and Oracle are two different Relational Database Management Systems and hopefully these posts will help explain some of the differences between the two for those people, like me, who regularly are faced with using both.Stevehttp://www.blogger.com/profile/10758481055115692692noreply@blogger.comtag:blogger.com,1999:blog-916087967433702978.post-42562592339782680002012-03-19T09:51:00.002-07:002012-03-19T09:51:54.821-07:00Quick and Dirty Data Extract<hr>
"You've been mostly dead all day!" - <i>Fezzik, The Princess Bride</i>
<hr>
I was recently tasked with extracting data from a couple of dozen tables, tilde-delimited, with column headings.
<p>
So, I spent a little time coming up with this:
<p>
<pre>
<font color=green>
set nocount on
select 'select '
select ''''+a.name+''''+'+''~''+'
from syscolumns a, sysobjects b
where a.id = b.id
and b.name = 'EXTRACT_TABLE'
order by colid;
select 'select ';
select case when a.xtype = 106 then 'isnull(cast('+a.name+' as varchar), '' '')'
when a.xtype = 108 then 'isnull(cast('+a.name+' as varchar), '' '')'
when a.xtype = 61 then 'isnull(convert(varchar(8),'+a.name+',112), '' '')'
else 'isnull('+a.name+','' '')' END+'+''~''+'
from syscolumns a, sysobjects b
where a.id = b.id
and b.name = 'EXTRACT_TABLE'
order by colid;
select 'from extract_table';
</font>
</pre>
<p>
The output from this query needs a little tidying up - the last column of each generated select statement needs the +'~'+ removed from it (for obvious syntactical reasons), and the spacing of the output can always be cleaned up a little to make it prettier, but it got the job done.
<p>
This was from a SQL Server 2000 system and would need to be ever so slightly modified to use on 2005/2008/2012 - syscolumns and sysobjects live on as compatibility views, but the sys.columns and sys.objects catalog views are the preferred replacements.
<p>
Like I said in the subject line, quick and dirty (but it got the job done) :)Stevehttp://www.blogger.com/profile/10758481055115692692noreply@blogger.comtag:blogger.com,1999:blog-916087967433702978.post-32339085375493093612012-03-13T12:36:00.000-07:002012-03-13T12:36:42.890-07:00T-SQL Tuesday #028 - Jack of All Trades, Master of None<hr>
<i>"Certainty of death ... small chance of success ... what are we waiting for ?" - Gimli, Return of the King</i>
<hr>
<a href="http://sqlblog.com/blogs/argenis_fernandez/archive/2012/03/05/t-sql-tuesday-028-jack-of-all-trades-master-of-none.aspx"><img style="float:left; margin:0 10px 10px 0;cursor:pointer; cursor:hand;width: 150px; height: 150px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiJquyEJB2QJGebzSIF-4krAaC28GclMK-POt-MUKo0E37ELJECQd46GyK2ek6vXVkaAid5vk4Blf2k2jzcdFtGWqSNeFH2Iio77ht_3yj5AuR943ro-LHleuUBUvXvwpXAva8j-Grqf16F/s400/tsqltuesday.jpg" border="0" alt="T-SQL Tuesday" id="BLOGGER_PHOTO_ID_5480538702979664562" /></a>The second Tuesday of the month invariably means that it's time for T-SQL Tuesday again. Last time I participated was January and I promised all sorts of things for resolutions like attending my local User Group meetings (0/3 so far this year), updating my blog at least once a week (6 weeks since the last post) and reading more books this year than I usually do (doing OK on the fiction front, need to catch up on the tech books though). So, this month I decided to jump back into the pool and at least post something for T-SQL Tuesday.<p>
Argenis Fernandez(<a href="http://sqlblog.com/blogs/argenis_fernandez">blog</a>|<a href="http://twitter.com/#!/DBArgenis">twitter</a>) is the Maitre d' of our blogging hotel this month and he's asked us to talk about what we specialize in, if indeed that's anything.
<p>
Jack of All Trades, he posted as the title of this month's event. I've been working in Information Technology for over 25 years. Looking back over my career, the title "Jack of All Trades" probably fits me very well.
<p>
I started as a mainframe COBOL programmer back in the mid-80's and did that for several years. In 1991 I left my government job and joined the development staff of an Australian ERP Software company. Today, a little over 21 years later, I'm still working with their software, but my role has changed significantly.
<p>
It was with this company, after transferring to the USA that I started getting exposed to databases. Our machines were hosted out of Australia, our DBA support was Australian based (which meant if we had problems we got support starting around 3PM) and so I started teaching myself Oracle. After several years of being the "go to" guy for "in a pinch" support earlier than 3PM, my boss gave me the chance that changed my career direction. He was going to let me transfer and become an Oracle Database Administrator.
<p>
That was 12 years ago and I've loved working with them ever since.
<p>
However, owing to the niche environment I work in (I'm now working for a small consulting firm that supports this ERP product), I've found myself in an interesting place.
<p>
Because of the knowledge I have of the ERP product and the environment it runs in, I'm very much a generalist in what I do. Four years ago, I was forced to learn SQL Server. I'd pushed back against that for years but finally had to succumb to the inevitable.
<p>
The software package tends not to use a lot of the advanced features of the databases it runs against. The data isn't necessarily considered up-to-the-second critical (as it would be for banking transactions) and, in the event of problems, some down time can be tolerated.
<p>
For Oracle, for example, Real Application Clustering (RAC) isn't used in this environment. So I don't have those skills (and the few times I've looked at the job market, just to see what's out there, everyone seems to want it).
<p>
For SQL Server, I've never had to deal with clustering, or log shipping or replication.
<p>
Because of the specific requirements of my job at the moment, I'm very much a generalist (I do Oracle, I do SQL Server, I do Unix Shell and Windows CMD scripting, I still fall back to my COBOL roots as needed from time to time too) and for me, at this point in my career that's not a bad thing.
<p>
Does this mean I don't want to learn these other more specialized aspects of the various RDBMS systems I work with ? No. However, when you have to balance available time with what's needed for day to day operations, sometimes you don't always get what you want.
<p>
So for now, I am perfectly happy in what I'm doing. I love my job. The "restrictions" of not being a specialist in any particular field haven't held me back so far. Having such a broad, generalized knowledge base while working in a very niche field has actually been very helpful.
<p>
It could actually be said, come to think of it, that I'm a "specialist" in the technical area around this particular product I've been working with for the last two decades, while being a generalist in the underlying technology (the databases, the operating systems etc).
<p>
One of the things I have found in working across so many diverse areas is a need to keep some good notes about the things you learn (so that 2 years from now when a problem pops up again, you have a reference), and that there's always something new to learn!
<p>
The bottom line is that for me, generalizing across many areas has worked very well. Reading some of the other posts, I've seen that a lot of people tend to end up in one area and that doesn't surprise me. For me, being a Jack of All Trades just hasn't been a bad thing :)Stevehttp://www.blogger.com/profile/10758481055115692692noreply@blogger.comtag:blogger.com,1999:blog-916087967433702978.post-21108391608180343492012-01-17T12:59:00.000-08:002012-01-17T12:59:09.152-08:00Oracle and SQL Server: Compare and Contrast (Database vs Instance)<hr>
<i>"Sacrificing minions: Is there any problem it can't solve?" -Xykon, Order of the Stick webcomic </i>
<hr>
As it states in my profile, I work in both Oracle and SQL Server on a daily basis. When I was first introduced to SQL Server back in 2008, I had a great deal of trouble wrapping my mind around some of the concepts, since some of them seemed so strange and counterintuitive compared to what I was used to.
<p>
So, as an exercise in both self-review and in an effort to put something like this 'out there', I thought I might start a series of postings comparing and contrasting Oracle to SQL Server.
<p>
This could be looked at as an "Oracle for SQL Server DBA's", or alternatively, "SQL Server for Oracle DBA's", depending upon who's reading it and I'll try to cover both sides of the story.
<p>
To get the ball rolling, there are two concepts that form the core of a relational database management system. The terms are used often and just as often are probably used incorrectly. Database and Instance.
<p>
What is a database and what is an instance ?
<p>
Not surprisingly, the definitions vary depending on which system you're working with.
<p>
Let's start with Oracle. The Oracle Documentation (Oracle Database Concepts - Oracle 11g Release 2 - E25789-01) defines an instance as "An instance is a set of memory structures that manage database files. The instance consists of a shared memory area, called the system global area (SGA), and a set of background processes. An instance can exist independently of database files."
<p>
So, from an Oracle perspective, the instance is the memory processes that manage the files that make up the database.
<p>
The database is defined, from the same document, as "A database is a set of files, located on disk, that store data. These files can exist independently of a database instance."
<p>
The two are closely related, and the term "Oracle database" is often used to refer to both under the same umbrella, but in the purest sense, they are separate and distinct.
<p>
There are three different types of files that make up a database.
<ul>
<li>Data Files - these contain, funnily enough, your data, as well as the corresponding indexes and any other database objects (procedures and functions, triggers etc).
<li>Control Files - these contain the metadata about the physical structure of the database - names and locations of data files and the name of the database, among other things.
<li>Redo Log Files - these are a set of two or more files that record transactional changes made to the data in the database. As each one fills, Oracle switches to the next log in the group; in ARCHIVELOG mode, the full log is copied out to an archive log file before it can be reused.
</ul>
<p>
Also of note is that each instance (set of memory processes) supports one and only one database - at the basic level, anyway. Real Application Clusters (RAC) changes that, allowing multiple instances to access a single set of files, but that is outside the scope of what I'm talking about here.
<p>
The figure below is a representation of what makes up an Oracle instance and database (also from the same Oracle document). It also shows the assorted processes and everything else associated with the database, which is not what we're talking about here. For now, just look at the groupings indicated as Instance (memory) and Database (files).
<p>
<img src="http://home.comcast.net/~sjwales/blog/Oracle_Concepts.png">
<p>
It is important to note that you can create as many instances as you like from a single set of binaries installed on a server.
<p>
SQL Server considers an instance to be an installation of the binaries on a server. Your first install on a server is usually set up to be the Default Instance (but it doesn't have to be - any install, including the first, can be a Named Instance if desired). If you install the software a second or third time, these become separate Named Instances.
<p>
Each installed instance has its own set of memory processes when started.
<p>
The key difference between the two is that Oracle can spawn multiple instances (sets of memory processes) from a single Oracle software install. SQL Server has a one to one relationship between memory processes and SQL Server software installation.
<p>
Buck Woody (<a href="http://blogs.msdn.com/b/buckwoody/">blog</a>|<a href="https://twitter.com/buckwoody">twitter</a>) wrote an <a href="http://www.informit.com/guides/content.aspx?g=sqlserver&seqNum=29">article</a> talking about SQL Server instances in much more detail than I plan on talking about here, if you're interested in the finer points.
<p>
What about SQL Server databases ? A SQL Server database is defined as one or more data files plus at least one transaction log file. In SQL Server, one instance can support multiple databases.
<p>
Another difference to be noted here is that while Oracle rotates online redo logs through a group in a loop and writes old changes out to archive log files, the SQL Server transaction log will continue to grow and grow unless the database is running in Simple recovery mode or regular transaction log backups are taken. It is the log backup (not a full backup, as is sometimes assumed) that marks the inactive portion of the log as reusable, at which point writing can wrap back to the beginning of the file; the file itself is not shrunk.
<p>
Upon installation, SQL Server contains several system databases that contain objects shared by all other databases in the instance. From <a href="http://msdn.microsoft.com/en-us/library/ms178028.aspx">Books Online</a>, the SQL Server system databases are:
<ul>
<li>master - Contains all the instance wide metadata of the instance - logins, endpoints, linked servers, system configuration settings.
<li>model - Used as a template for creating all databases in the instance.
<li>msdb - Used by SQL Server agent for scheduling jobs. Also used by database mail
<li>tempdb - Used for temporary user objects for all databases. Also is the row version store for some isolation levels.
<li>Resource Database - a read only database that contains all of the system objects in SQL Server.
</ul>
<p>
I did find an interesting article on <a href="http://www.sqlservercentral.com/articles/SQL+Server/67007/">SQL Server Central</a> that covered some of what I've written about here and has a summary at the end mapping the system databases of SQL Server to appropriate Oracle entities.
<p>
A similarity between both systems is that it is possible to run multiple applications out of a single database, where they could be separated by different schemas, but I haven't seen that done a whole lot. Separating applications into different databases does seem to make maintenance tasks somewhat easier.
<p>
Next Time: Some of the Architectural differences between Oracle and SQL Server.Stevehttp://www.blogger.com/profile/10758481055115692692noreply@blogger.comtag:blogger.com,1999:blog-916087967433702978.post-25687621627017882102012-01-16T14:58:00.000-08:002012-01-17T11:07:27.545-08:00CASE in point, redux<hr>
<i>"My friends, you bow to no one" - Aragorn, to the hobbits, Return of the King</i>
<hr>
In a <a href="http://dba-in-exile.blogspot.com/2012/01/case-in-point.html">recent post</a> made for my own reference, I posted the syntax for the SQL Server CASE statement, because I always forget some of the finer points of less used syntax.
<p>
Well, it just so happens that I also found myself recently needing the syntax for a CASE statement used inside an update.
<p>
In keeping with the other posting, this seems a mighty fine place to record such a thing
<p>
<pre>
<font color=green>
update mytable
set col1 =
(CASE
WHEN
(condition)
THEN
(value)
ELSE
(other value)
END
)
where column = value
</font>
</pre>
<p>
Another quickie, mainly for my enjoyment at some stage down the road!Stevehttp://www.blogger.com/profile/10758481055115692692noreply@blogger.comtag:blogger.com,1999:blog-916087967433702978.post-84584455206007982362012-01-10T13:11:00.000-08:002012-01-17T11:07:06.111-08:00T-SQL Tuesday #14, er #26: Second Chances<hr>
<i>"Whoo-hoo-hoo, look who knows so much. It just so happens that your friend here is only MOSTLY dead. There's a big difference between mostly dead and all dead. Mostly dead is slightly alive." - Miracle Max, The Princess Bride</i>
<hr>
<a href="http://davidbrycehoward.com/archive/2012/01/tsql-tuesday-026-second-chances/"><img style="float:left; margin:0 10px 10px 0;cursor:pointer; cursor:hand;width: 150px; height: 150px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiJquyEJB2QJGebzSIF-4krAaC28GclMK-POt-MUKo0E37ELJECQd46GyK2ek6vXVkaAid5vk4Blf2k2jzcdFtGWqSNeFH2Iio77ht_3yj5AuR943ro-LHleuUBUvXvwpXAva8j-Grqf16F/s400/tsqltuesday.jpg" border="0" alt="T-SQL Tuesday" id="BLOGGER_PHOTO_ID_5480538702979664562" /></a>A new year is upon us and that inevitably means that there must be a second Tuesday of the month again. Which means it's T-SQL Tuesday again.<p>
David Howard (<a href="http://davidbrycehoward.com">blog</a>|<a href="http://twitter.com/#!/davidh0ward">twitter</a>) is acting as the Master of Ceremonies for the start of the 2012 volume of T-SQL Tuesday and his topic is "Second Chances", much like Westley got in the scene from "The Princess Bride" mentioned at the top. The theme this month is to revisit any of the previous 25 topics and write a little about one of them - without recycling your own or someone else's previous musings.
<p>
Before I begin, let me tell you a little story. Once upon a <a href="http://sqlblog.com/blogs/adam_machanic/archive/2009/12/09/t-sql-tuesday-001-the-roundup.aspx">date/time</a>, a young DBA was faced with a <a href="http://sqlblog.com/blogs/adam_machanic/archive/2010/02/08/t-sql-tuesday-002-the-roundup.aspx">Puzzling Situation</a>. He had been having a rough time with this <a href="http://msmvps.com/blogs/robfarley/archive/2010/02/13/t-sql-tuesday-003-relationships-the-round-up.aspx">Relationship</a> with his <a href="http://www.straightpathsql.com/archives/2010/03/t-sql-tuesday-004-a-roundup/">I/O subsystem</a>. For all the <a href="http://sqlvariant.com/wordpress/index.php/2010/04/t-sql-tuesday-005-reporting-the-round-up/">Reporting</a> he'd been doing trying to work out what was wrong, he hadn't had much luck. He felt like such a <a href="http://sqlblog.com/blogs/michael_coles/archive/2010/05/13/t-sql-tuesday-006-round-up.aspxx">BLOB</a>. He didn't really care though, because it was <a href="http://sqlchicken.com/2010/06/t-sql-tuesday-007-roundup/">Summertime</a>, and after picking up his kids from <a href="http://www.sqlsoldier.com/wp/sqlserver/tsqltuesday8roundup">Getting Schooled</a>, he was off for some good old fashioned <a href="http://jasonbrimhall.info/2010/08/12/tsql-tuesday-009-roundup/">Beach Time</a>. He had all of his vacation plans <a href="http://michaeljswart.com/2010/09/t-sql-tuesday-10-round-up-an-index-of-participants/">Indexed</a> and organized and even had the <a href="http://sankarreddy.com/2010/11/t-sql-tuesday-11-round-up-misconceptions-in-sql-server/">Misconception</a> that his <a href="http://sqlskills.com/BLOGS/PAUL/post/T-SQL-Tuesday-012-Summary-of-why-DBA-skills-are-necessary.aspx">DBA Skills</a> wouldn't be needed while he was away. 
However, the <a href="http://www.sqlservercentral.com/blogs/steve_jones/2010/12/17/t_2D00_sql-tuesday-_2300_13-roundup/">Business Wants what the Business thinks it wants</a> and apart from these <a href="http://www.midnightdba.com/Jen/2011/01/t-sql-tuesday-014-roundup/">Resolutions</a>, he was soon paged by his <a href="http://sqlasylum.wordpress.com/2011/02/11/t-sql-tuesday-15-summary/">Automated Systems</a>, and had to get on the phone and <a href="http://blogs.lessthandot.com/index.php/DataMgmt/DBProgramming/sum-t-sql-tuesday-016">Aggregate</a> and <a href="http://mattvelic.com/tsql-tuesday-17-roundup/">APPLY</a> his knowledge to help fix a problem with a misbehaving <a href="http://www.bobpusateri.com/archive/2011/05/t-sql-tuesday-18-wrapup/">CTE</a>. Thankfully there weren't any <a href="http://www.allenkinsel.com/archive/2011/06/t-sql-tuesday-19-wrapup/">Disasters</a> while he was away because of the <a href="http://troubleshootingsql.com/2011/07/15/t-sql-tuesday-20-wrap-up-and-a-few-smiles/">Best Practices</a> he employed - making sure that there was not any <a href="http://sqlblog.com/blogs/adam_machanic/archive/2011/08/03/t-sql-tuesday-21-a-day-late-and-totally-full-of-it.aspx">Crap Code</a> in the system really helped. His monitoring data was nicely <a href="http://www.sqlservercentral.com/blogs/pearlknows/2011/09/20/t-sql-tuesday-22-round-up-data-presentation/">Presented</a> and <a href="http://codegumbo.com/index.php/2011/10/10/tsql2sday-roundup/">Joined</a> with some clever <a href="http://bradsruminations.blogspot.com/2011/11/t-sql-tuesday-024-roundup.html">Procedures and Functions</a> he was able to <a href="http://sqlblog.com/blogs/allen_white/archive/2011/12/17/t-sql-tuesday-25-followup-just-in-time-for-the-holidays.aspx">Trick</a> his way to a good vacation.
<p>
OK, it's a convoluted and long winded story, but hidden in the links above you'll find all 25 of the previous topics embedded for your reference :) Most of them link directly to the roundup / summary but there's one or two where I just couldn't find the darn thing.
<p>
As you might have guessed from the topic, I was thinking I might revisit #14 (Resolutions). Being that it's the start of a new year, visiting the list of "Technical Resolutions" is probably not a bad way to get the show on the road. So, what are my technical resolutions for 2012?
<ul>
<li>Attend PASS Summit 2012 - My first ever PASS Summit was 2011 and I came away with such a mental overload that I've probably forgotten half of what I learned. Which leads nicely into ...
<li>Review the PASS Summit DVD's - I have all eight of them on my desk (they arrived last Tuesday as I was writing the draft of this post). I need to pay particular attention to the sessions I was in and go over all of it again - and then go over the sessions I did NOT attend.
<li>Try to be more active on this blog, making at least one post a week.
<li>Upgrade my Oracle credentials. (It's T-SQL post, but this is definitely on my list of things to do this year). Currently sitting certified for 9i, should really try to upgrade to at least 10g, if not 11g.
<li>I really need to try to get to the local user group meetings this year. I made plans to go about 4 times in 2011, and every time work got in the way. I need to try to tell myself that unless it's a real Priority One issue, that it can wait 3 hours while I drive over and attend a 2 hour user group meeting! (already blown this one for January, thought it was *next* week, turns out it was *yesterday*... Eleven out of Twelve, maybe?)
<li>Learn some Powershell. I've heard so many people evangelizing this tool, that I really need to suck it up and at least learn the basics!
<li>Read some, if not all of the tech books I've bought. I have four sitting on my desk, with ebook versions of three of them as well. Time to put my Kindle Fire that Santa brought me to use and get them read! (This doesn't include the shelf full of other books that I've bought and pick through as I need the relevant parts of them)
</ul>
<p>
What remains to be seen is, this time next year, how many of these have check marks next to them and how many have big red crosses.Stevehttp://www.blogger.com/profile/10758481055115692692noreply@blogger.comtag:blogger.com,1999:blog-916087967433702978.post-78619966280431980922012-01-09T19:39:00.000-08:002012-01-17T11:06:52.714-08:00SETORA: A Utility to set up a Unix Session for Oracle in a multi DB Environment<hr>
<i>"Of my friend, I can only say this: of all the souls I have encountered in my travels, his was the most... human." - Captain James T. Kirk, Star Trek II: The Wrath of Khan.</i>
<hr>
Back in my early Oracle days in 2000-2001, I had been given a chance by a manager who I had worked with for a long time to prove myself as an Oracle DBA. I had expressed an interest in getting into the field, had taught myself quite a bit and was learning a whole lot more as I went along.
<p>
I hadn't been at it more than about 6 months when a new face was hired into the company: a jovial fellow by the name of Stan Yellott, big and happy, with a bushy white beard.
<p>
In the 2 years I worked with him before I left that company's employ, Stan taught me a great deal. He had probably forgotten more about Oracle than I will ever know.
<p>
We were working for the Managed Services division of the company at the time, hosting servers for any number of different customers. The environment was set up so that from inside our network, the only way to get to the customer machines was via a single Unix host that was the gateway to all the other Unix servers where the databases were hosted.
<p>
We would often login to that gateway machine and connect to the Oracle databases on the other machines via TNS - which saved us having to login to those remote machines and made it much quicker to move from database to database to run queries.
<p>
Unfortunately, Stan passed away in November, 2006 after a long battle with illness and I was so happy to have a chance to talk to him a few days before he passed.
<p>
In keeping with my New Year's resolution (see tomorrow's post!) to be a little more active on this blog, I was thinking about what I could write about.
<p>
One of the tools that Stan developed (that I still carry in my toolkit to this day) is a perl script that parses tnsnames.ora and allows you to set your environment so that you can easily query databases on remote machines.
<p>
The utility was called "setora".
<p>
You could pass a parameter "-l" to it and it would parse your tnsnames.ora file and give you a list of every alias in there, along with the database name and host of the alias.
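<p>
The perl script that does this is reproduced further down, but the core idea of the "-l" listing - pulling the alias, HOST and SID out of tnsnames.ora - can be sketched in a few lines of awk. This is my own simplified approximation, not Stan's code, and it assumes well-formed entries where each alias starts in column one and HOST appears before SID:

```shell
# A sample tnsnames.ora to parse (two aliases, one per database)
cat > tnsnames.ora <<'EOF'
SALES =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = server1)(PORT = 1521))
    (CONNECT_DATA = (SID = sales))
  )
HR =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = server2)(PORT = 1521))
    (CONNECT_DATA = (SID = hr))
  )
EOF

# Remember the current alias; when a SID turns up, print SID, host, alias
awk '
/^[A-Za-z]/ { sub(/ *=.*/, ""); alias = $0 }
match($0, /HOST *= *[^)]*/) { h = substr($0, RSTART, RLENGTH); sub(/HOST *= */, "", h); host = h }
match($0, /SID *= *[^)]*/)  { s = substr($0, RSTART, RLENGTH); sub(/SID *= */, "", s);
                              printf "%-8s %-12s %s\n", s, host, alias }
' tnsnames.ora
```

It prints one line per alias, in the same SID / Host / Alias order that setora -l used.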
<p>
To set your environment, you would simply do: <pre><font color="green">setora alias</font></pre>
<p>
If your database was called sales, it would be: <pre><font color="green">setora sales</font></pre>
<p>
If you had a couple of databases called sales on two different machines, you could also specify the host and go: <pre><font color="green">setora sales@server1</font></pre>
<p>
So, whilst I don't claim to understand all that this perl script does, I thought I'd share it here, in the hope that someone else can benefit from Stan's wisdom.
<p>
This setup is based in a Unix environment and I'm copying the setup that Stan had configured. If someone wants to edit it, to suit themselves, go for it.
<p>
The first thing we need to do is to set up a directory called DBA under /home/oracle (or wherever your default Oracle login directory is).
<p>
Then under that, we'll have two more directories called fcn and bin.
<p>
In /home/oracle, edit .profile (or the login script specific to your shell; we used ksh) and add a line in there somewhere:
<pre><font color="green">
. /home/oracle/DBA/bin/profile.dba
</font></pre>
<p>
Don't forget the leading period and the space after it. "Dotting" (sourcing) the script ensures that the variables within are set in the current shell's context, rather than in a subshell that throws them away when it exits - which is what you get when a script is executed normally.
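<p>
If you want to see why the dot matters, here is a tiny illustrative sketch (not part of the setup itself):

```shell
# A child shell gets a copy of the environment; exports made there
# vanish when it exits. Sourcing (dotting) runs the same commands
# in the current shell, so the exports stick.
cat > setenv.sh <<'EOF'
export DEMO_SID=sales
EOF

sh setenv.sh                                  # runs in a subshell
echo "after subshell: ${DEMO_SID:-unset}"     # still unset

. ./setenv.sh                                 # sourced into this shell
echo "after sourcing: ${DEMO_SID:-unset}"     # now "sales"
```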
<p>
Next, let's create a file called /home/oracle/DBA/bin/profile.dba
<p>
<pre><font color="green">
# Setup various environment variables
set +o nounset
export NODENAME=`uname -n`
export ORACLE_BASE=/oracle
export HOME=/home/oracle
export ORACLE_HOME=${ORACLE_BASE}/product/10.2
export PATH=${PATH}:${HOME}/DBA/bin
# Load DBA functions
export FPATH=${FPATH}:${HOME}/DBA/fcn
. /home/oracle/DBA/fcn/setora
</font></pre>
<p>
The next step is to set up a file called /home/oracle/DBA/fcn/setora
<p>
This file loads into the shell a function called setora, which is the driver of our bus.
<p>
<pre><font color="green">
function setora
{
OptList=":hl"
PassArgs=""
Status=0
Usage="Usage: setora [-h] [-l] [sid | sid@host | alias]"
while getopts ${OptList} OptArg
do
case ${OptArg} in
h) PassArgs="${PassArgs}-h " ;;
l) PassArgs="${PassArgs}-l " ;;
\?) print "Invalid Option -${OPTARG}"
print "${Usage}"
Status=255 ;;
esac
done
if [[ ${Status} -eq 0 ]]
then
setora.pl "${PassArgs}${1:-}"
Status=$?
CmndFile=/tmp/setora.set.${LOGNAME}
if [[ ${Status} -eq 0 && -r ${CmndFile} ]]
then
. ${CmndFile}
rm ${CmndFile}
fi
fi
return ${Status}
}
</font></pre>
<p>
This function calls setora.pl (which will write a command file setting some environment variables) and then execute that file within the current shell context.
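<p>
The command-file dance exists because a child process (the perl script) cannot change its parent shell's environment directly; all it can do is write shell commands to a file for the parent to source. A minimal sketch of the same pattern, with made-up values standing in for what setora.pl would write:

```shell
# The child process writes shell commands to a temp file...
CmndFile=/tmp/setora.set.demo.$$
sh -c "echo 'export ORACLE_SID=sales TWO_TASK=sales' > $CmndFile"

# ...and the parent sources the file to pick up the settings, then removes it
. $CmndFile
rm $CmndFile
echo "SID=$ORACLE_SID TWO_TASK=$TWO_TASK"
```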
<p>
Finally, the file that does all the work (that I don't begin to completely understand, because perl isn't my thing), which lives in /home/oracle/DBA/bin/setora.pl:
<p>
<pre><font color="green">
#!/usr/bin/perl
#
#------------------------------------------------------------------------------
#
main:
{
$Status = &Initialization ;
unless ( $Status ) { $Status = &GetCmndArgs ;}
unless ( $Status ) { $Status = &ParseTnsnamesFile ;}
if ( $RunOptions{'Help'} )
{
unless ( $Status ) { $Status = &ShowCmndHelp ;}
unless ( $Status ) { $Status = &ListTnsData ;}
}
elsif ( $RunOptions{'List'} )
{
unless ( $Status ) { $Status = &ListTnsData ;}
}
else
{
unless ( $Status ) { $Status = &FindTargetDb ;}
unless ( $Status ) { $Status = &CreateCmndFile ;}
}
if ( $Status ) { &PrintErrorMessage ;}
exit ( $Status ) ;
}
#
#------------------------------------------------------------------------------
#
sub Initialization
{
my $LclReturnStatus = 0 ;
# Standard Variables
$Null = "" ;
# Translate current environment variables
$DatPath = $ENV{'DATPATH'} ;
$Home = $ENV{'HOME'} ;
$LdLibraryPath = $ENV{'LD_LIBRARY_PATH'} ;
$LibPath = $ENV{'LIBPATH'} ;
$LogName = $ENV{'LOGNAME'} ;
$LogPath = $ENV{'LOGPATH'} ;
$ManPath = $ENV{'MANPATH'} ;
$NodeName = $ENV{'NODENAME'} ;
$OracleBase = $ENV{'ORACLE_BASE'} ;
$OracleHome = $ENV{'ORACLE_HOME'} ;
$OraCommon = $ENV{'ORACOMMON'} ;
$OutPath = $ENV{'OUTPATH'} ;
$Path = $ENV{'PATH'} ;
$PID = $$ ;
$SqlPath = $ENV{'SQLPATH'} ;
$TmpPath = $ENV{'TMPPATH'} ;
$TnsAdmin = $ENV{'TNS_ADMIN'} ;
$TwoTask = $ENV{'TWO_TASK'} ;
# Include user library area
unshift ( @INC , "$Home/lib" ) ;
unshift ( @INC , "$OracleHome/lib" ) ;
unshift ( @INC , "$OraCommon/DBA/lib" ) ;
# Standard Packages
use Getopt::Std ;
# Return Codes & Messages
$ReturnStatus{'Success'} = 0 ;
$ReturnCode{'Success'} = 'Success' ;
$ReturnText{'Success'} = $Null ;
$ReturnStatus{'NullSIDArg'} = 1 ;
$ReturnCode{'NullSIDArg'} = 'CmdArgMissing' ;
$ReturnText{'NullSIDArg'} = 'SID must be specified' ;
$ReturnStatus{'NoNamesFile'} = 2 ;
$ReturnCode{'NoNamesFile'} = 'FileOpenErr' ;
$ReturnText{'NoNamesFile'} = $Null ;
$ReturnStatus{'MultSIDFound'} = 3 ;
$ReturnCode{'MultSIDFound'} = 'CmdArgInvalid' ;
$ReturnText{'MultSIDFound'} = 'Multiple SIDs found - Must specify host' ;
$ReturnStatus{'SIDNotFound'} = 4 ;
$ReturnCode{'SIDNotFound'} = 'SIDInvalid' ;
$ReturnText{'SIDNotFound'} = $Null ;
$ReturnStatus{'CmndFileErr'} = 5 ;
$ReturnCode{'CmndFileErr'} = 'FileCreateErr' ;
$ReturnText{'CmndFileErr'} = $Null ;
# TNS Data
$TnsCount = -1 ;
$TnsEntry = -1 ;
@TnsAlias = () ;
@TnsHome = () ;
@TnsHost = () ;
@TnsSid = () ;
# Return successful completion status
return $LclReturnStatus ;
}
#
#------------------------------------------------------------------------------
#
sub GetCmndArgs
{
my $LclReturnStatus = $ReturnStatus{'Success'} ;
my $LclOptionList = "hl" ;
# Define Processing Options
getopt( $LclOptionList ) ;
if ( $opt_h ) { $RunOptions{'Help'} = 1 ;}
if ( $opt_l ) { $RunOptions{'List'} = 1 ;}
# Define Command Arguments
$ArgValue{'DbSpec'} = $ARGV[0] ;
if ( $ARGV[0] eq $Null
&& ! $RunOptions{'List'}
&& ! $RunOptions{'Help'} )
{
$LclReturnStatus = $ReturnStatus{'NullSIDArg'} ;
}
return $LclReturnStatus ;
}
#
#------------------------------------------------------------------------------
#
sub ParseTnsnamesFile
{
my $LclReturnStatus = $ReturnStatus{'Success'} ;
my $LclFileSpec = $Null ;
my $LclLineChar = $Null ;
my $LclParenCnt = 0 ;
my $LclParenLeft = 0 ;
my $LclParenRight = 0 ;
my $LclTnsLine = $Null ;
if ( $TnsAdmin eq $Null )
{
$LclFileSpec = join ( "/" , $OracleHome ,
"network" , "admin" , "tnsnames.ora" ) ;
}
else
{
$LclFileSpec = join ( "/" , $TnsAdmin , "tnsnames.ora" ) ;
}
if ( open ( NAMESFILE , $LclFileSpec ) )
{
while ( <NAMESFILE> )
{
chop ( $LclFileLine = $_ ) ;
$LclLineChar = substr ( $LclFileLine , 0 , 1 ) ;
if ( $LclLineChar ne "#" )
{
if (( $LclParenLeft == $LclParenRight ) &&
(( $LclParenLeft != 0 ) || ( $LclParenRight != 0 )))
{
&ParseTnsLine ( $LclTnsLine ) ;
$LclParenLeft = 0 ;
$LclParenRight = 0 ;
$LclTnsLine = $Null ;
}
$LclParenCnt = tr/(/(/ ;
$LclParenLeft += $LclParenCnt ;
$LclParenCnt = tr/)/)/ ;
$LclParenRight += $LclParenCnt ;
$LclTnsLine = join ( $Null , $LclTnsLine , $LclFileLine ) ;
}
}
close ( NAMESFILE ) ;
}
else
{
$LclReturnStatus = $ReturnStatus{'NoNamesFile'} ;
$ReturnText{'NoNamesFile'} = $LclFileSpec ;
}
return $LclReturnStatus ;
}
#
#------------------------------------------------------------------------------
#
sub ParseTnsLine
{
my $LclReturnStatus = $ReturnStatus{'Success'} ;
my $LclTnsLine = $_[0] ;
my $LclLineChar = $Null ;
my $LclLineLength = 0 ;
my $LclLineOffset = 0 ;
my $LclParenLeft = 0 ;
my $LclParenRight = 0 ;
my $LclTnsAlias = $Null ;
my $LclTnsHome = $Null ;
my $LclTnsHost = $Null ;
my $LclTnsSid = $Null ;
my $LclTnsSpec = $Null ;
my $LclTnsSegment = $Null ;
my $LclTnsParm = $Null ;
my $LclTnsValue = $Null ;
( $LclTnsAlias , $LclTnsSpec )= split ( "=" , $LclTnsLine , 2 ) ;
$LclTnsAlias =~ tr/ //d ;
$LclLineLength = length ( $LclTnsSpec ) ;
for ( $LclLineOffset = 0 ; $LclLineOffset < $LclLineLength ;
$LclLineOffset++ )
{
$LclLineChar = substr ( $LclTnsSpec , $LclLineOffset , 1 ) ;
if ( $LclLineChar eq "(" )
{
$LclParenLeft = $LclLineOffset + 1 ;
}
elsif ( $LclLineChar eq ")" )
{
if ( $LclParenLeft != 0 )
{
$LclParenRight = $LclLineOffset ;
$LclTnsSegment = substr ( $LclTnsSpec , $LclParenLeft ,
$LclParenRight - $LclParenLeft ) ;
$LclTnsSegment =~ tr/ //d ;
( $LclTnsParm , $LclTnsValue ) = split ( "=" , $LclTnsSegment ) ;
$LclTnsParm = uc ( $LclTnsParm ) ;
if ( $LclTnsParm eq "HOST" ) { $LclTnsHost = $LclTnsValue ;}
elsif ( $LclTnsParm eq "ORACLE_HOME" ) { $LclTnsHome = $LclTnsValue ;}
elsif ( $LclTnsParm eq "SID" ) { $LclTnsSid = $LclTnsValue ;}
}
$LclParenLeft = 0 ;
$LclParenRight = 0 ;
}
}
$TnsCount++ ;
$TnsAlias[$TnsCount] = $LclTnsAlias ;
$TnsHome[$TnsCount] = $LclTnsHome ;
$TnsHost[$TnsCount] = $LclTnsHost ;
$TnsSid[$TnsCount] = $LclTnsSid ;
return $LclReturnStatus ;
}
#
#------------------------------------------------------------------------------
#
sub ShowCmndHelp
{
my $LclReturnStatus = $ReturnStatus{'Success'} ;
my $LclDash = "-" x 80 ;
my $LclHeaderFormat = "%-27.27s\n" ;
my $LclLineFormat = "%-8.8s %-12.12s %-40.40s\n" ;
printf $LclHeaderFormat , "List of Databases available" ;
printf $LclHeaderFormat , $LclDash ;
printf $LclLineFormat , " SID" , " Host" , " Alias" ;
printf $LclLineFormat , $LclDash , $LclDash , $LclDash ;
return $LclReturnStatus ;
}
#
#------------------------------------------------------------------------------
#
sub ListTnsData
{
my $LclReturnStatus = $ReturnStatus{'Success'} ;
my $LclCount = 0 ;
my $LclLineFormat = "%-8.8s %-12.12s %-40.40s\n" ;
for ( $LclCount = 0 ; $LclCount <= $TnsCount ; $LclCount++ )
{
printf $LclLineFormat , $TnsSid[$LclCount] , $TnsHost[$LclCount] ,
$TnsAlias[$LclCount] ;
}
return $LclReturnStatus ;
}
#
#------------------------------------------------------------------------------
#
sub FindTargetDb
{
my $LclReturnStatus = $ReturnStatus{'Success'} ;
my $LclCount = 0 ;
my $LclTestAlias = $Null ;
my $LclTestHostSid = $Null ;
my $LclTestSid = $Null ;
for ( $LclCount = 0 ; $LclCount <= $TnsCount ; $LclCount++ )
{
$LclTestAlias = $TnsAlias [ $LclCount ] ;
$LclTestHostSid = join ( "@" , $TnsSid [ $LclCount ] ,
$TnsHost [ $LclCount ] ) ;
$LclTestSid = $TnsSid [ $LclCount ] ;
if ( $ArgValue{'DbSpec'} eq $LclTestSid )
{
if ( $TnsEntry != -1 )
{
$LclReturnStatus = $ReturnStatus{'MultSIDFound'} ;
last ;
}
else
{
$TnsEntry = $LclCount ;
}
}
elsif ( $ArgValue{'DbSpec'} eq $LclTestHostSid )
{
$TnsEntry = $LclCount ;
}
elsif ( $ArgValue{'DbSpec'} eq $LclTestAlias )
{
$TnsEntry = $LclCount ;
}
}
if ( $TnsEntry == -1) { $LclReturnStatus = $ReturnStatus{'SIDNotFound'} ;}
return $LclReturnStatus ;
}
#
#------------------------------------------------------------------------------
#
sub CreateCmndFile
{
my $LclReturnStatus = $ReturnStatus{'Success'} ;
my $LclFileSpec = "/tmp/setora.set.$LogName" ;
my $LclAlias = $TnsAlias [ $TnsEntry ] ;
my $LclHome = $TnsHome [ $TnsEntry ] ;
my $LclHost = $TnsHost [ $TnsEntry ] ;
my $LclSid = $TnsSid [ $TnsEntry ] ;
my $LclTarget = $Null ;
my $LclValue = $Null ;
$DatPath = &ResetPath ( $DatPath , "DBA/dat" ) ;
$LdLibraryPath = &ResetPath ( $LdLibraryPath , "lib" ) ;
$LibPath = &ResetPath ( $LibPath , "DBA/lib" ) ;
$ManPath = &ResetPath ( $ManPath , "DBA/man" ) ;
$Path = &ResetPath ( $Path , "bin" ) ;
$Path = &ResetPath ( $Path , "DBA/bin" ) ;
$SqlPath = &ResetPath ( $SqlPath , "DBA/sql" ) ;
if ( open ( CMNDFILE , ">$LclFileSpec" ) )
{
print CMNDFILE "export DATPATH=$DatPath\n" ;
print CMNDFILE "export LIBPATH=$LibPath\n" ;
print CMNDFILE "export LD_LIBRARY_PATH=$LdLibraryPath\n" ;
print CMNDFILE "export MANPATH=$ManPath\n" ;
print CMNDFILE "export ORACLE_BASE=$OracleBase\n" ;
print CMNDFILE "export ORACLE_HOME=$LclHome\n" ;
print CMNDFILE "export ORACLE_SID=$LclSid\n" ;
print CMNDFILE "export ORAADMIN=$OracleBase/admin/$LclHost/$LclSid\n" ;
print CMNDFILE "export ORAHOST=$LclHost\n" ;
print CMNDFILE "export PATH=$Path\n" ;
print CMNDFILE "export SQLPATH=$SqlPath\n" ;
print CMNDFILE "export TNS_ALIAS=$LclAlias\n" ;
if ( $LclHost eq $NodeName )
{
print CMNDFILE "export ORAENV_ASK=NO\n" ;
print CMNDFILE "export TWO_TASK=\n" ;
print CMNDFILE "export ORAREMADMIN=\n" ;
print CMNDFILE ". oraenv\n" ;
print CMNDFILE "export ORAENV_ASK=\n" ;
}
else
{
print CMNDFILE "export TWO_TASK=$LclAlias\n" ;
print CMNDFILE "export ORAREMADMIN=$OracleBase/admin/$LclSid\n" ;
}
print CMNDFILE "PS1=\"$NodeName:$LogName:$LclSid\@$LclHost >> \"\n" ;
print CMNDFILE "\n" ;
close ( CMNDFILE ) ;
}
else
{
$LclReturnStatus = $ReturnStatus{'CmndFileErr'} ;
$ReturnText{'CmndFileErr'} = $LclFileSpec ;
}
return $LclReturnStatus ;
}
#
#------------------------------------------------------------------------------
#
sub ResetPath
{
my $LclReturnStatus = $ReturnStatus{'Success'} ;
my $LclOrigPath = $_[0] ;
my $LclSubDir = $_[1] ;
my $LclElement = $Null ;
my $LclHome = $TnsHome [ $TnsEntry ] ;
my $LclNewPath = $Null ;
my $LclNewValue = "$LclHome/$LclSubDir" ;
my $LclTarget = "$OracleHome/$LclSubDir" ;
if ( ( $LclOrigPath eq $Null )
|| ( $LclOrigPath eq $LclTarget ) )
{
$LclNewPath = $LclNewValue ;
}
else
{
$LclNewPath = $Null ;
foreach $LclElement ( split ( ":" , $LclOrigPath ) )
{
if ( $LclElement eq $LclTarget )
{
$LclElement = $LclNewValue ;
$LclNewValue = $Null ;
}
if ( $LclElement ne $Null )
{
if ( $LclNewPath eq $Null )
{
$LclNewPath = $LclElement ;
}
else
{
$LclNewPath = join ( ":" , $LclNewPath , $LclElement ) ;
}
}
}
if ( $LclNewValue ne $Null )
{
$LclNewPath = "$LclNewPath:$LclNewValue" ;
}
}
return $LclNewPath ;
}
#
#------------------------------------------------------------------------------
#
sub PrintErrorMessage
{
my $LclArrayKey = $Null ;
foreach $LclArrayKey (sort keys(%ReturnStatus))
{
if ( $ReturnStatus{$LclArrayKey} == $Status )
{
$LclErrKey = $ReturnCode{$LclArrayKey} ;
print "$ReturnCode{$LclArrayKey} $ReturnText{$LclArrayKey} \n" ;
}
}
}
#
#------------------------------------------------------------------------------
#
</font></pre>
<p>
Make sure all three new files have their owner's executable bit set (for example, chmod u+x on each file).
<p>
Log out and log in again so that the new .profile is processed and its settings are loaded into your environment.
<p>
For usage help:
<pre><font color="green">
setora -h
</font></pre>
<p>
For a list of known TNS aliases:
<pre><font color="green">
setora -l
</font></pre>
<p>
To set an environment
<pre><font color="green">
setora testdb
</font></pre>
<p>
Try it out and see what you think.
<p>
This code was not written by me but by my late friend Stan as mentioned above. Stan had a love of sharing knowledge. I hope that he won't mind me posting this for him. I use it every day and love it.
<p>
<i><font color="red">This post is dedicated to the memory of Stanley L. Yellott. You were a giant among men, my friend, and I still miss your joviality and friendship.</font></i>
CASE in point<hr>
<i>"They have a cave troll!" - Boromir, The Fellowship of the Ring</i>
<hr>
I recently needed to write a query where, based upon the value of a lookup field, I needed to find the description of a code.
<p>
I can never remember the exact syntax of a CASE expression, so I thought I would throw a post up here with what I did so that I could refer to it later.
<p>
<pre>
<font color="green">
select account_code,
case when account_ind = '1' then
(select cost_center_description
from cost_centers
where cost_center_code = account_code)
when account_ind = '2' then
(select exp_element_description
from exp_elements
where exp_element_code = account_code)
when account_ind = '3' then
(select gl_description
from gl_codes
where gl_code = account_code)
else 'Unknown'
end as account_desc,
open_balance,
end_balance
from account_codes
</font>
</pre>
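For reference, here is a minimal set of tables that the query above assumes. The table and column names are taken from the query itself; the column types are guesses for illustration only:
<p>
<pre>
<font color="green">
-- Hypothetical DDL reconstructed from the query; types are assumptions.
create table cost_centers (cost_center_code char(10), cost_center_description char(40));
create table exp_elements (exp_element_code char(10), exp_element_description char(40));
create table gl_codes (gl_code char(10), gl_description char(40));
create table account_codes (account_code char(10), account_ind char(1),
                            open_balance numeric(15,2), end_balance numeric(15,2));
</font>
</pre>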
<p>
Nothing groundbreaking in this one, but I know I'll refer to it regularly, because nitty-gritty details like syntax are something I often need to look up.
It's a New Year, time for a change in theme and a First for me<hr>
<i>"I got no rudder. Wind blows northerly, I go north. That's who I am." - Captain Malcolm Reynolds, Serenity</i>
<hr>
OK, the black background and white text (particularly with the colors of the hot links) were getting too much for these tired old eyes.
<p>
Time for a change!
<p>
Might try this blue and white thing and see how it holds up.
<p>
<a href="http://www.teamycc.com/RMOUG_2012_Conference/index.html"><img style="float:left; margin:0 10px 10px 0;cursor:pointer; cursor:hand;width: 150px; height: 150px;" src="http://home.comcast.net/~sjwales/images/SpeakerButton.jpg" border="0" alt="RMOUG 2012"/></a>
Also, I was quite happy to find out that I've had an abstract accepted for the Rocky Mountain Oracle Users Group Training Days at the Colorado Convention Center in Denver, CO happening February 15 and 16, 2012.
<p>
My paper is titled "Hailing Frequencies Open - An Introduction to Oracle Heterogeneous Services". This will be my first time presenting at a technical conference and while very nervous about the whole thing, I'm quite excited at the same time.
<p>
The abstract of the presentation reads:
<p>
Abstract: In business today, applications run on multiple platforms, across multiple database systems. You may have your timekeeping application running on Windows with Microsoft SQL Server, yet your payroll system runs on Oracle on Solaris. Perhaps you have some other system using MySQL on Linux. If you need to have your Oracle system talk to both of them, how do you do it? Making Oracle talk to Oracle is easy. How do we make Oracle talk to these alien systems? The answer is Oracle Heterogeneous Services. Learn how to configure Oracle and your other database systems to talk together. Hailing Frequencies Open!
<p>
Apart from the obvious Star Trek connotations in the title, I'll try to keep the Trek references to a minimum during the presentation, but there's some definite Trek related slides in the planning to help get a point across.
<p>
Definitely looking forward to this one!
Examining what's causing blocking<hr>
<i>"I prepared Explosive Ru ..." **BOOM** - Vaarsuvius, Order of the Stick webcomic</i>
<hr>
For some time now, we have had some fairly basic monitoring running on our primary servers for blocking that lasts for an extended period of time. In the last few days, we've started getting alerts in the wee hours of the morning, every night, pretty much like clockwork.
<p>
It doesn't last long, but it's routine - and at 1 AM each morning, it's annoying.
<p>
I wasn't quite sure how to track this down, so I threw out a quick tweet to #sqlhelp and @SQLSoldier suggested capturing some wait information.
<p>
First, I created a table that mirrored <a href="http://msdn.microsoft.com/en-us/library/ms188743%28v=SQL.90%29.aspx">sys.dm_os_waiting_tasks</a>, with an extra column up front for the Date/Time. Next, I created the following little stored procedure.
<p>
<font color="green">
<pre>
create procedure [dbo].[GetWaitingTasks]
as
begin
insert into prod_waiting_tasks
select getdate(), * from prod.sys.dm_os_waiting_tasks
end;
</pre>
</font>
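<p>
The mirror table itself can be created by cloning the DMV's shape. A quick sketch (the table name is assumed to match the procedure above) is to use SELECT INTO with the extra capture-time column up front:
<p>
<font color="green">
<pre>
-- Clone the column layout of the DMV into a permanent table,
-- with a capture timestamp as the first column.
-- WHERE 1 = 0 copies the structure but no rows.
select getdate() as capture_time, *
into prod_waiting_tasks
from sys.dm_os_waiting_tasks
where 1 = 0;
</pre>
</font>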
<p>
I scheduled that to run, every 1 minute in SQL Server Agent, between 12:30 AM and 1:30 AM.
<p>
When I queried this table the next morning, I saw a bunch of these messages:
<p>
<font color="green">
<pre>
pagelock fileid=3 pageid=8969941 dbid=11 id=lock1b2c6500 mode=IX associatedObjectId=72057594959429632
</pre>
</font>
<p>
Checking into sys.databases, I confirmed that dbid=11 was my prod database.<br>
Checking into sys.master_files, I confirmed that the file was the one containing most of the data in the database.
<p>
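Those two checks were simple lookups against the catalog views, along these lines (the dbid and fileid values come straight from the lock message above):
<p>
<font color="green">
<pre>
-- Which database is dbid 11?
select name from sys.databases where database_id = 11;
-- Which file is fileid 3 in that database?
select name, physical_name
from sys.master_files
where database_id = 11 and file_id = 3;
</pre>
</font>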
I also had a page number in the datafile. The problem was how to work out what object it belonged to.
<p>
That's where an undocumented feature came to my rescue. Paul Randal blogged about DBCC PAGE <a href="http://blogs.msdn.com/b/sqlserverstorageengine/archive/2006/06/10/625659.aspx">here</a>.
<p>
<font color="green">
<pre>
dbcc traceon (3604)
dbcc page (11, 3, 8969941, 1) -- dbid, fileid, pageid, printopt
</pre>
</font>
<p>
One of the pieces of output from this was the object ID.
<p>
Follow that up with:
<p>
<font color="green">
<pre>
select * from sys.objects where object_id = 738101670
</pre>
</font>
<p>
Once I had run that and had an object that was the cause of my nightly blocking, I was able to backtrack through the job history tables that the application provides to work out where my locking was coming from.
T-SQL Tuesday #025 - Tips and Tricks<hr>
<i>"Let me explain ... no there is too much, let me sum up" - Inigo Montoya, The Princess Bride</i>
<hr>
<a href="http://sqlblog.com/blogs/allen_white/archive/2011/12/05/t-sql-tuesday-025-invitation-to-share-your-tricks.aspx"><img style="float:left; margin:0 10px 10px 0;cursor:pointer; cursor:hand;width: 150px; height: 150px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiJquyEJB2QJGebzSIF-4krAaC28GclMK-POt-MUKo0E37ELJECQd46GyK2ek6vXVkaAid5vk4Blf2k2jzcdFtGWqSNeFH2Iio77ht_3yj5AuR943ro-LHleuUBUvXvwpXAva8j-Grqf16F/s400/tsqltuesday.jpg" border="0" alt="T-SQL Tuesday" id="BLOGGER_PHOTO_ID_5480538702979664562" /></a>When I created this blog, I made a promise to myself that I would attempt to update it regularly. The last month has been very busy with some tight deadlines and I haven't been able to keep that promise. I looked at the calendar today and saw that it was the second Tuesday of the month and thought "Ack! It's T-SQL Tuesday!" - so I'm forcing myself to spend a little time to throw something together. A complete lack of time to prepare something is a shame, but there are two things I'd like to share.<p>
Allen White (<a href="http://sqlblog.com/blogs/allen_white">blog</a>|<a href="http://twitter.com/#!/SQLRunr">twitter</a>) is hosting this month's event and the topic du jour is "Sharing your Tricks". The theme is to share some of the things that make your job easier.
<p>
Batter up! (However, owing to the shortage of time I talked about above, I'm going to have to cheat and recycle a previous post or two because they are a couple of my favorite tricks).
<p>
One of the things I was used to as an Oracle DBA on Unix for years, before adding a SQL Server cap to the many other hats I wear, was being able to use Unix shell scripting to parse my alert logs and the like, and also to script out information like the free space remaining in my tablespaces and whatnot.
<p>
Not being able to get this information out of SQL Server bugged me. There were ways of course, but I wasn't aware of them since I got thrown into the deep end of the SQL Server pool very suddenly a few years back.
<p>
When I attended SQL Saturday #94 here in Salt Lake City back in September, I attended Chris Shaw's session on his Utility Database and immediately got some return on investment. I sat down for a few hours and expanded upon it. One of the things my customers had recently asked was an estimate on growth of their databases - how much data was being added on a monthly basis so there could be some growth planning. I took what Chris started and expanded on it to suit my needs. The end result was a monthly report showing data file growth in a month (physical footprint on disk) as well as data growth (size of data in each file group) in a month and the average growth per day.
<p>
It's all described <a href="http://dba-in-exile.blogspot.com/2011/10/expanding-on-chris-shaws-utility.html">in this post</a> in detail.
<p>
The other thing I needed to work out how to do was to extract information out of SQL Server via a Windows CMD script. One of the applications I help maintain has a very active batch system - the users can request to run reports out of the online application and feed parameters to a request record. The batch report executes, picks up these parameters and runs the report. Sometimes, at the end of the report / update process there's something else that needs to be done.
<p>
I found myself needing to be able to pick up a Unique Identifier from the operating system that identified the job, query the database for the parameters, return one of the parameters to the OS and act upon it (one example was to query the parameters to find an email address and then email a particular piece of output to that person).
<p>
So, I ended up writing a Windows CMD script to pass in the UUID, assign the output to another environment variable and be able to work on that. The details of that trick are <a href="http://dba-in-exile.blogspot.com/2011/09/passing-parameter-to-sql-script-and.html">in this post</a>. (For the SQL Server folks reading this, ignore the second half of the article where I go into Unix / Oracle specifics) :)
<p>
Apologies for the lack of "new content" for this one, but I find the above two items have helped me a lot and were worthy contenders for this month's T-SQL Tuesday.
Flattening Your Data - How to Make Multiple Rows into Columns<hr>
<i>"Mercy is the mark of a great man ... <stab> ...Guess I'm just a good one ... <stab> ... Well, I'm alright" - Captain Malcolm Reynolds, Firefly - 'Shindig'.</i>
<hr>
<p>
I recently had to extract some data from my database. The general format of the tables was like so:
<p>
<font color=green>
<pre>
Supplier Table:
supplier_code char(6)
supplier_name char(40)
supplier_address char(40)
sup_status char(1)
etc
Colloquial Name Table:
supplier_code char(6)
colloquial_name char(40)
Email Address Table:
supplier_code char(6)
sup_email_address char(40)
</pre></font>
<p>
The base requirement was to output one supplier per line with the base information on each supplier from the first table and from 0 through 6 colloquial names for the supplier, depending on how many were present. There was also an optional email address. Finally, there was a status code on the table and I had to include every supplier with a status of "3" or less.
<p>
I'll be the first to admit that I do not fully understand the depth of the OVER / PARTITION BY syntax but I get the gist of it. I ended up needing to get some help on this one but the end query looked like this:
<p>
<font color=green>
<pre>
select a.supplier_code, a.supplier_name, a.supplier_address,
d.sup_email_addr, g.colloq_1, g.colloq_2, g.colloq_3,
g.colloq_4, g.colloq_5, g.colloq_6
FROM SUPPLIER a
LEFT OUTER JOIN SUP_EMAIL D on a.supplier_code = d.supplier_code
LEFT OUTER JOIN
(
select a.supplier_code,
MAX(CASE WHEN ClqOrder = 1 THEN Colloquial_name ELSE ' ' END) AS colloq_1,
MAX(CASE WHEN ClqOrder = 2 THEN Colloquial_name ELSE ' ' END) AS colloq_2,
MAX(CASE WHEN ClqOrder = 3 THEN Colloquial_name ELSE ' ' END) AS colloq_3,
MAX(CASE WHEN ClqOrder = 4 THEN Colloquial_name ELSE ' ' END) AS colloq_4,
MAX(CASE WHEN ClqOrder = 5 THEN Colloquial_name ELSE ' ' END) AS colloq_5,
MAX(CASE WHEN ClqOrder = 6 THEN Colloquial_name ELSE ' ' END) AS colloq_6
FROM (
SELECT s.*, B.Colloquial_name, B.ClqOrder
FROM SUPPLIER S
INNER JOIN
(SELECT supplier_code, Colloquial_Name,
ROW_NUMBER() OVER (PARTITION BY supplier_code ORDER BY Colloquial_name) AS ClqOrder
FROM sup_colloq) B
on S.SUPPLIER_CODE = B.SUPPLIER_CODE) A
GROUP by a.Supplier_CODE
) G on a.supplier_CODE = g.supplier_CODE
where a.sup_status <= '3'
</pre></font>
<p>
So this all works, but how does it break down?
<p>
For simplicity's sake, and the limited width of the blog page, we're going to shorten all those char(40) columns to something much shorter. Let's start with making the tables and populating some base data.
<p>
<font color=green>
<pre>
SQL> create table supplier
2 (supplier_code char(2),
3 supplier_name char(5),
4 supplier_address char(5),
5 sup_status char(1));
Table created.
SQL> create table sup_colloq
2 (supplier_code char(2),
3 colloquial_name char(5));
Table created.
SQL> create table sup_email
2 (supplier_code char(2),
3 sup_email_addr char(5));
Table created.
SQL> insert into supplier values ('01','Sup#1','Addr1','1');
1 row created.
SQL> insert into supplier values ('02','Sup#2','Addr2','1');
1 row created.
SQL> insert into supplier values ('03','Sup#3','Addr3','1');
1 row created.
SQL> insert into sup_email values ('01','a@a');
1 row created.
SQL> insert into sup_email values ('02','b@b');
1 row created.
SQL> insert into sup_colloq values('01','Cql01');
1 row created.
SQL> insert into sup_colloq values('01','Cql02');
1 row created.
SQL> insert into sup_colloq values('01','Cql03');
1 row created.
SQL> insert into sup_colloq values('01','Cql04');
1 row created.
SQL> insert into sup_colloq values('01','Cql05');
1 row created.
SQL> insert into sup_colloq values('01','Cql06');
1 row created.
SQL> insert into sup_colloq values('03','Cql07');
1 row created.
SQL> insert into sup_colloq values('03','Cql08');
1 row created.
</pre></font>
<p>
Running the big query gives us the expected output:
<p>
<font color=green>
<pre>
SQL> select a.supplier_code, a.supplier_name, a.supplier_address,
2 d.sup_email_addr, g.colloq_1, g.colloq_2, g.colloq_3,
3 g.colloq_4, g.colloq_5, g.colloq_6
4 FROM SUPPLIER a
5 LEFT OUTER JOIN SUP_EMAIL D on a.supplier_code = d.supplier_code
6 LEFT OUTER JOIN
7 (
8 select a.supplier_code,
9 MAX(CASE WHEN ClqOrder = 1 THEN Colloquial_name ELSE ' ' END) AS colloq_1,
10 MAX(CASE WHEN ClqOrder = 2 THEN Colloquial_name ELSE ' ' END) AS colloq_2,
11 MAX(CASE WHEN ClqOrder = 3 THEN Colloquial_name ELSE ' ' END) AS colloq_3,
12 MAX(CASE WHEN ClqOrder = 4 THEN Colloquial_name ELSE ' ' END) AS colloq_4,
13 MAX(CASE WHEN ClqOrder = 5 THEN Colloquial_name ELSE ' ' END) AS colloq_5,
14 MAX(CASE WHEN ClqOrder = 6 THEN Colloquial_name ELSE ' ' END) AS colloq_6
15 FROM (
16 SELECT s.*, B.Colloquial_name, B.ClqOrder
17 FROM SUPPLIER S
18 INNER JOIN
19 (SELECT supplier_code, Colloquial_Name,
20 ROW_NUMBER() OVER (PARTITION BY supplier_code ORDER BY Colloquial_name) AS ClqOrder
21 FROM sup_colloq) B
22 on S.SUPPLIER_CODE = B.SUPPLIER_CODE) A
23 GROUP by a.Supplier_CODE
24 ) G on a.supplier_CODE = g.supplier_CODE
25 where a.sup_status <= '3'
26
SQL> /
SU SUPPL SUPPL SUP_E COLLO COLLO COLLO COLLO COLLO COLLO
-- ----- ----- ----- ----- ----- ----- ----- ----- -----
01 Sup#1 Addr1 a@a Cql01 Cql02 Cql03 Cql04 Cql05 Cql06
03 Sup#3 Addr3 Cql07 Cql08
02 Sup#2 Addr2 b@b
</pre></font>
<p>
So now we see what it did, let's break it down. The first small select gets our colloquial names with row numbers in an inline table. These row numbers will help us later.
<p>
<font color=green>
<pre>
SQL> SELECT supplier_code, Colloquial_Name,
2 ROW_NUMBER() OVER (PARTITION BY supplier_code ORDER BY Colloquial_name) AS ClqOrder
3 FROM sup_colloq;
SU COLLO CLQORDER
-- ----- ----------
01 Cql01 1
01 Cql02 2
01 Cql03 3
01 Cql04 4
01 Cql05 5
01 Cql06 6
03 Cql07 1
03 Cql08 2
8 rows selected.
</pre></font>
<p>
When we grab the next chunk of code, with the CASE / MAX, we can query on each of the ClqOrder columns and turn that into something a little more friendly to what we need:
<p>
<font color=green>
<pre>
SQL> select a.supplier_code,
2 MAX(CASE WHEN ClqOrder = 1 THEN Colloquial_name ELSE ' ' END) AS colloq_1,
3 MAX(CASE WHEN ClqOrder = 2 THEN Colloquial_name ELSE ' ' END) AS colloq_2,
4 MAX(CASE WHEN ClqOrder = 3 THEN Colloquial_name ELSE ' ' END) AS colloq_3,
5 MAX(CASE WHEN ClqOrder = 4 THEN Colloquial_name ELSE ' ' END) AS colloq_4,
6 MAX(CASE WHEN ClqOrder = 5 THEN Colloquial_name ELSE ' ' END) AS colloq_5,
7 MAX(CASE WHEN ClqOrder = 6 THEN Colloquial_name ELSE ' ' END) AS colloq_6
8 FROM (
9 SELECT s.*, B.Colloquial_name, B.ClqOrder
10 FROM SUPPLIER S
11 INNER JOIN
12 (SELECT supplier_code, Colloquial_Name,
13 ROW_NUMBER() OVER (PARTITION BY supplier_code ORDER BY Colloquial_name) AS ClqOrder
14 FROM sup_colloq) B
15 on S.SUPPLIER_CODE = B.SUPPLIER_CODE) A
16 GROUP by a.Supplier_CODE;
SU COLLO COLLO COLLO COLLO COLLO COLLO
-- ----- ----- ----- ----- ----- -----
01 Cql01 Cql02 Cql03 Cql04 Cql05 Cql06
03 Cql07 Cql08
</pre></font>
<p>
Finally, adding in the outer joins as shown at the top, we pick up the grand combination of everything. This was definitely something I've had to do before and thought it was worth preserving here. It falls squarely into the "for my edification" bucket that was talked about in the first post :)
T-SQL Tuesday #024: Prox 'n' Funx<hr>
<i>"It doesn't matter if you win or lose, as long as you look cool doing it." - Julio Scoundrél - Order of the Stick webcomic.</i>
<hr>
<a href="http://bradsruminations.blogspot.com/2011/10/invitation-for-t-sql-tuesday-024-prox-n.html"><img style="float:left; margin:0 10px 10px 0;cursor:pointer; cursor:hand;width: 150px; height: 150px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiJquyEJB2QJGebzSIF-4krAaC28GclMK-POt-MUKo0E37ELJECQd46GyK2ek6vXVkaAid5vk4Blf2k2jzcdFtGWqSNeFH2Iio77ht_3yj5AuR943ro-LHleuUBUvXvwpXAva8j-Grqf16F/s400/tsqltuesday.jpg" border="0" alt="T-SQL Tuesday" id="BLOGGER_PHOTO_ID_5480538702979664562" /></a>I only recently became aware of T-SQL Tuesday and considering my last post on <a href="http://dba-in-exile.blogspot.com/2011/10/another-way-of-looking-at-joins.html">Joins</a>, if I had known about it, that post would have made a good submission for <a href="http://codegumbo.com/index.php/2011/09/27/tsql2sday-t-sql-tuesday-23early-edition/">last month's T-SQL Tuesday</a>. However, that aside, it's not last month, it's this month.
<p>
Brad Schulz (<a href="http://bradsruminations.blogspot.com">blog</a>) is hosting this month's party and his topic of choice is "Prox 'n' Funx" (Procedures and Functions).
<p>
So here we go.
<p>
Anyone who spent any time learning anything about just about any kind of programming language (there are exceptions there too) should have learned early on about the usefulness of procedures and functions. The <a href="http://en.wikipedia.org/wiki/Function_%28computer_science%29">Wikipedia</a> entry on this topic defines it as follows: "<i>a subroutine (also called procedure, function, routine, method, or subprogram) is a portion of code within a larger program that performs a specific task and is relatively independent of the remaining code</i>". This definition holds true in SQL Server - however instead of being a part of a larger program, they are stored in the database as executable pieces of code - and these can be called by any other T-SQL program in the database - indeed you can even call them across databases (let's face it, the majority of system procedures are stored in a system database).
<p>
With Programming 101 out of the way, I thought I'd write a little about some of the differences between Oracle and SQL Server in Procedure and Function definition for those people (like me), who have to work in both database engines. Since I recently had to port some PL/SQL code to T-SQL, it seems like a good time to write about it.
<p>
The differences aren't so much in how the procedure is defined as much as how you call it.
<p>
The basic construct for both languages is the same:
<p>
<font color=green>
<pre>
CREATE [or replace] PROCEDURE schema.procname(parameters)
AS
</pre>
</font>
<p>
The schema part of the declaration is optional. In SQL Server it defaults to the default schema (usually dbo) and in Oracle it's the user name you're logged in as. (If you're not familiar with Oracle's security model you'll have to trust me, that's a post for another time and I won't go into it here). Oracle also allows the "OR REPLACE" syntax to overlay an existing stored procedure - as opposed to SQL Server where you need to drop an object before recreating it (or you can use "alter procedure" to replace the procedure so that permissions on the object are not lost).
<p>
Definition of the parameters varies only slightly.
<p>
For Oracle, list the parameter names and types, each declaration separated by a comma:
<p>
<font color=green>
<pre>
CREATE PROCEDURE myproc1 (param1 char(10), param2 integer, param3 date)
AS
</pre>
</font>
<p>
The SQL Server syntax is almost exactly the same, except you need to prefix the parameter names with the "@" sign.
<p>
<font color=green>
<pre>
CREATE PROCEDURE myproc1 (@param1 char(10), @param2 int, @param3 datetime)
AS
</pre>
</font>
<p>
The function definitions are also very close. The syntax for the returning of the function value differs only slightly. In Oracle it looks like this:
<p>
<font color=green>
<pre>
CREATE FUNCTION myfunc1 (param1 date) RETURN DATE
AS
</pre>
</font>
<p>
The function definitions in SQL Server looks like this:
<p>
<font color=green>
<pre>
CREATE FUNCTION myfunc1 (@param1 datetime) RETURNS DATETIME
AS
</pre>
</font>
<p>
The big difference between the two is how they are called. This was something that tripped me up the first time I tried converting code between the two. Both expect a comma-separated parameter list; however, Oracle expects the parameters to be enclosed in parentheses, while SQL Server does not. What we come down to is this for Oracle:
<p>
<font color=green>
<pre>
EXEC myproc1 (param1, param2, param3);
</pre>
</font>
<p>
while in SQL Server it looks like this:
<p>
<font color=green>
<pre>
EXEC myproc1 @param1, @param2, @param3;
</pre>
</font>
<p>
This all gets a little murkier when you realize that in SQL Server user functions are called differently. A user function has to be called using a two-part name (you have to call it as dbo.myfunc and not just myfunc) *and* its parameters are passed in parentheses, not as a bare list following the name as in a procedure call. This is yet another "gotcha" for the cross-database person.
<p>
Example time. In SQL Server it looks like this.
<p>
<font color=green>
<pre>
create procedure dbo.test1 (@date1 datetime, @days1 int)
as
declare @result1 datetime;
select @result1 = dateadd(dd, @days1, @date1);
print 'Input : ' + convert(varchar(10), @date1, 101);
print 'Output: ' + convert(varchar(10), @result1, 101);
go
Command(s) completed successfully.
declare @indate1 datetime;
select @indate1=getdate();
exec dbo.test1 @indate1,2
Input : 11/07/2011
Output: 11/09/2011
</pre>
</font>
<p>
And the same kind of thing in Oracle:
<p>
<font color=green>
<pre>
create or replace procedure test1 (date1 date, days1 integer)
as
result1 date;
begin
result1 := date1 + days1;
dbms_output.put_line('Input : ' || to_char(date1, 'MM/DD/YYYY'));
dbms_output.put_line('Output: ' || to_char(result1, 'MM/DD/YYYY'));
end;
/
Procedure created.
SQL> set serveroutput on
SQL> exec test1(sysdate, 2);
Input : 11/07/2011
Output: 11/09/2011
PL/SQL procedure successfully completed.
SQL>
</pre>
</font>
<p>
The results are the same, but there are little syntactical tricks needed in order to make it work.
<p>
Finally, same treatment for similar functions. SQL Server first.
<p>
<font color=green>
<pre>
create function dbo.func1 (@date1 datetime, @days1 int)
returns datetime
as
begin
declare @result1 datetime;
select @result1 = dateadd(dd, @days1, @date1);
RETURN @result1
end
GO
Command(s) completed successfully.
declare @indate1 datetime;
declare @outdate1 datetime;
select @indate1=getdate();
select @outdate1=dbo.func1(@indate1, 3)
print 'Input : ' + convert(varchar(10), @indate1, 101);
print 'Output: ' + convert(varchar(10), @outdate1, 101);
Input : 11/07/2011
Output: 11/10/2011
</pre>
</font>
<p>
As noted above, if you replace "dbo.func1" with "func1", you get this error message:
<p>
<pre>
<font color="red">Msg 195, Level 15, State 10, Line 4
'func1' is not a recognized built-in function name.</font>
</pre>
<p>
And finally, for Oracle:
<p>
<font color=green>
<pre>
create or replace function func1 (date1 date, days1 integer)
return date
as
result1 date;
begin
result1 := date1 + days1;
return result1;
end;
/
Function created.
SQL> select 'Input Date = '||sysdate||' / Output Date = '||func1(sysdate, 3)
2 as Input_Output_Date from dual;
INPUT_OUTPUT_DATE
------------------------------------------------
Input Date = 07-NOV-11 / Output Date = 10-NOV-11
SQL>
</pre>
</font>
<p>
When working in two different databases, it's important to keep the differences in syntax between them in mind, whatever solution you happen to be working on. Little things like a missing pair of parentheses, or a missing @, can cause just enough irritation to make you want to pull out what, in my case, is the very little hair left on your head. The bottom line, especially for people like me who walk both sides of the Oracle / SQL Server line, is to make sure you understand the subtle differences in the syntax you're working with.
<p>
As an "end of post" bonus, just because we're talking about functions, there's something else I'd like to throw out there. Here's a post I wrote in September and how a compatibility setting can "break" a Dynamic Management Function - <a href="http://dba-in-exile.blogspot.com/2011/09/dmv-sysdmdbindexphysicalstats-is-broken.html">sys.dm_db_index_physical_stats is broken !?</a>Stevehttp://www.blogger.com/profile/10758481055115692692noreply@blogger.comtag:blogger.com,1999:blog-916087967433702978.post-74372698205183493812011-11-06T00:31:00.000-07:002012-01-17T11:05:09.684-08:00Addendum to "My database went down during a hot backup"<hr>
<i>"This place gives me an uncomfortableness." - Jayne Cobb, Firefly - 'Safe'</i>
<hr>
In checking the archives of my blog, I noticed <a href="http://dba-in-exile.blogspot.com/2011/09/oh-no-my-database-went-down-during-hot.html">this posting</a> from September.
<p>
There's another way to fix this, and for completeness, I thought I should post it here.
<p>
Once you've attempted to start the database, you'll get as far as "mounted" before the error message appears. Check your alert log to find the affected tablespaces, then take each one out of backup mode:
<p>
<pre>
alter tablespace MYTS end backup;
</pre>
<p>
all you'll need to do is to issue "ALTER DATABASE OPEN;" and the database will open.
<p>
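If you'd rather not dig through the alert log, you can also ask the database directly which datafiles are still in backup mode, via the V$BACKUP and V$DATAFILE views. A minimal sketch; note that on 10g and later, a single ALTER DATABASE END BACKUP will take every datafile out of backup mode in one statement:
<p>
<pre>
-- Which datafiles are still in hot-backup mode?
select d.name, b.status
from v$backup b
join v$datafile d on d.file# = b.file#
where b.status = 'ACTIVE';

-- 10g and later: end backup mode for all datafiles at once
alter database end backup;
</pre>
<p>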
Either method will work. I've had the opportunity to use both in the last couple of months.Stevehttp://www.blogger.com/profile/10758481055115692692noreply@blogger.com