
Saturday, November 24, 2012

FLASHBACK Technology in Oracle Databases (Part 2)


"By the way, did I mention it also travels in time?" - The Ninth Doctor, referring to the TARDIS
Last time we talked about using Flashback technology to run queries against our database as it looked in the recent past.

This can certainly be useful, but what if it just isn't enough? What if we need to get back a table we just accidentally dropped, or to repair a data screw-up so bad that recovery is our only option?

There are other aspects to Flashback technology that let us do just that.

What we looked at last time would be covered under the topic sub-titles "Flashback Query" and "Flashback Version Query". What other options do we have available to us?

Looking into the heart of our TARDIS (refer to Part 1 for the introduction of the Oracle TARDIS), we find the following options awaiting us:

  • Flashback Database - a physical level restore that restores the entire database to a specific point in time using flashback logs rather than doing a database recovery and rolling archive logs forward
  • Flashback Table - recover a table to a point in time
  • Flashback Drop - restores an accidentally dropped table as well as all related indexes, triggers and constraints
  • Flashback Transaction - rollback a single transaction and optionally, all transactions dependent upon the rolled back transaction
  • Flashback Transaction Query - see all the changes made by a specific transaction

Today we'll be talking about the first three on that list (we'll deal with Flashback Transaction in part 3 of the series).

Flashback Database

Let's start with Flashback Database. This is a physical-level recovery, much like a database point-in-time recovery, up to and including the OPEN RESETLOGS at the end, but usually much faster because the initial restore of the data files from backup isn't needed.

In order to use Flashback Database, the Fast Recovery Area needs to be configured in the database that is a candidate for Flashback.

This means that the db_recovery_file_dest and db_recovery_file_dest_size initialization parameters need to be set. Furthermore, in order to enable Flashback Database, db_flashback_retention_target also needs to be set. This parameter specifies how many minutes into the past we want to be able to take the database with Flashback Database.

Once this is configured, we must also make sure to issue ALTER DATABASE FLASHBACK ON.
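Putting the pieces together, the setup might look something like this. The path and sizes here are illustrative values from this demo environment, not recommendations, and on releases before 11g Release 2 the ALTER DATABASE FLASHBACK ON must be issued in MOUNT mode:

```sql
-- Illustrative setup for Flashback Database (values are examples only)
ALTER SYSTEM SET db_recovery_file_dest_size = 5G SCOPE=BOTH;
ALTER SYSTEM SET db_recovery_file_dest = 'c:\oracle\flash_recovery_area' SCOPE=BOTH;
ALTER SYSTEM SET db_flashback_retention_target = 1440 SCOPE=BOTH;  -- minutes (24 hours)
ALTER DATABASE FLASHBACK ON;
```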

Enabling flashback starts writing images of data blocks to the fast recovery area. The flashback logs are written sequentially and often in bulk. The database automatically creates, deletes and resizes these logs as needed. These logs are not archived.

The only things the DBA really needs to know about the flashback logs are that they exist (for sizing the fast recovery area) and that they add some performance overhead to the database.

The database also needs to be in Archivelog Mode for this functionality to work.

When doing a Flashback Database, the database uses information in these logs, as well as some information from archive logs, to perform the requested operation. This is why you can't enable flashback after a failure and expect to flash the database back to a point before the failure.
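To see how far back the flashback logs will actually let you go, the V$FLASHBACK_DATABASE_LOG view reports the current window and estimated space usage:

```sql
-- Check the real flashback window and estimated log space
SELECT oldest_flashback_scn,
       oldest_flashback_time,
       retention_target,
       estimated_flashback_size
  FROM v$flashback_database_log;
```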

Similar to the syntax used in Flashback Query, there are several options you can apply to Flashback Database:

  • TO SCN
  • TO SEQUENCE
  • TO TIMESTAMP
  • TO RESTORE POINT

Each of these also supports a TO BEFORE variant. FLASHBACK DATABASE TO recovers the database up to and including the specified point; FLASHBACK DATABASE TO BEFORE recovers it up to, but not including, that point.
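In sketch form, the variants look like this. The SCN, timestamp and restore point name below are made-up values, and the TO SEQUENCE form is issued from RMAN:

```sql
-- Illustrative FLASHBACK DATABASE targets (values are made up)
FLASHBACK DATABASE TO SCN 849611;
FLASHBACK DATABASE TO BEFORE SCN 849611;   -- ends just before SCN 849611
FLASHBACK DATABASE TO TIMESTAMP
  TO_TIMESTAMP('2012-11-24:22:09:07','YYYY-MM-DD:HH24:MI:SS');
FLASHBACK DATABASE TO RESTORE POINT before_change;
-- From RMAN: FLASHBACK DATABASE TO SEQUENCE 42 THREAD 1;
```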

Demo Time

First things first, let's check that our database is set up correctly:


SQL> show parameter db_recovery

NAME                        TYPE        VALUE                          
--------------------------- ----------- ------------------------------ 
db_recovery_file_dest       string      c:\oracle\flash_recovery_area  
db_recovery_file_dest_size  big integer 4977M                          

SQL> show parameter flashback;

NAME                          TYPE        VALUE
----------------------------- ----------- -----
db_flashback_retention_target integer     1440

SQL> select flashback_on from v$database;

FLASHBACK_ON                                               
------------------                                               
NO                                                               

SQL> alter database flashback on;

Database altered.

SQL> select flashback_on from v$database;

FLASHBACK_ON                                                     
------------------                                               
YES                                                              

SQL> select instance_name, archiver from v$instance;

INSTANCE_NAME    ARCHIVE                              
---------------- -------                              
demo             STARTED

Next, let's check the status of the database as we left it after Part 1, then add a new table and populate it with some related data.


SQL> conn demo/demo
Connected.
SQL> alter session set nls_date_format = 'YYYY-MM-DD:HH24:MI:SS';

Session altered.

SQL> select table_name from user_tables;

TABLE_NAME                                                    
---------------                                               
PREZ                                                          

SQL> desc prez
 Name                                      Null?    Type 
------------------------------------------ -------- -----------
 NAME                                      VARCHAR2(25)
 ELECTED                                   VARCHAR2(4)

SQL> select * from prez;

NAME                      ELEC                     
------------------------- ----                                               
Barack Obama              2008                     
George W. Bush            2013                     
Bill Clinton              2013                                           
George H. W. Bush         2013                     
Ronald Reagan             2013                                           

SQL> select sysdate from dual;

SYSDATE                                                           
-------------------                   
2012-11-24:22:02:36                                               

SQL> create table viceprez (name varchar2(25), elected varchar2(4));

Table created.

SQL> insert into viceprez values ('Joe Biden', '2008');

1 row created.

SQL> insert into viceprez values ('Dick Cheney', '2000');

1 row created.

SQL> insert into viceprez values ('Al Gore', '1992');

1 row created.

SQL> insert into viceprez values ('Dan Quayle', '1988');

1 row created.

SQL> insert into viceprez values ('George H. W. Bush', '1980');

1 row created.

SQL> select * from viceprez;

NAME                      ELEC                                    
------------------------- ----                                           
Joe Biden                 2008                     
Dick Cheney               2000                     
Al Gore                   1992                     
Dan Quayle                1988                     
George H. W. Bush         1980                     

SQL> select sysdate from dual;

SYSDATE                                                                      
-------------------                                                          
2012-11-24:22:09:07                                                          

SQL> quit

Note the timestamp here. At the time the timestamp is shown, the rows have not yet been committed; the commit does not happen until we quit SQL*Plus, where it happens implicitly.

So, a user gets into a database and starts deleting rows. They suddenly realize that they are in prod and not in test!

The DBA comes to the rescue (after first realizing that he must shut down the database in order to perform the flashback).

I've also included listings here showing the current incarnation of the database from RMAN between steps.


C:\>rman target /
RMAN> list incarnation of database;

using target database control file instead of recovery catalog

List of Database Incarnations
DB Key  Inc Key DB Name  DB ID            STATUS  Reset SCN  Reset Time
------- ------- -------- ---------------- --- ---------- ----------
1       1       DEMO     3543868088       CURRENT 1          19-NOV-12

RMAN>quit

C:\> sqlplus demo/demo
Connected.

SQL> delete from viceprez where elected < 2000;

3 rows deleted.

SQL> delete from prez where elected = 2013;

4 rows deleted.

SQL> commit;

Commit complete.

SQL> select * from prez;

NAME                      ELEC
------------------------- ----
Barack Obama              2008

SQL> select * from viceprez;

NAME                      ELEC
------------------------- ----
Joe Biden                 2008
Dick Cheney               2000

SQL>connect / as sysdba
Connected.


SQL> flashback database to timestamp to_timestamp('2012-11-24:22:09:07','YYYY-MM-DD:HH24:MI:SS');
flashback database to timestamp to_timestamp('2012-11-24:22:09:07','YYYY-MM-DD:HH24:MI:SS')
*
ERROR at line 1:
ORA-38757: Database must be mounted and not open to FLASHBACK. 

SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.

SQL> quit

C:\> sqlplus / as sysdba

SQL> startup mount
ORACLE instance started.

Total System Global Area  535662592 bytes                                       
Fixed Size                  1375792 bytes                                       
Variable Size             415236560 bytes                                       
Database Buffers          113246208 bytes                                       
Redo Buffers                5804032 bytes                                       
Database mounted.
SQL> flashback database to timestamp to_timestamp('2012-11-24:22:09:07','YYYY-MM-DD:HH24:MI:SS');

Flashback complete.

SQL> alter database open resetlogs;

Database altered.

SQL> connect demo/demo
Connected.

SQL> select * from prez;

NAME                      ELEC                                                  
------------------------- ----                                                  
Barack Obama              2008                                                  
George W. Bush            2013                                                  
Bill Clinton              2013                                                  
George H. W. Bush         2013                                                  
Ronald Reagan             2013                                                  

SQL> select * from viceprez;

no rows selected

SQL> quit

C:\>rman target /
RMAN> list incarnation of database;

using target database control file instead of recovery catalog

List of Database Incarnations
DB Key  Inc Key DB Name  DB ID            STATUS  Reset SCN  Reset Time
------- ------- -------- ---------------- --- ---------- ----------
1       1       DEMO     3543868088       PARENT  1          19-NOV-12
2       2       DEMO     3543868088       CURRENT 849611     24-NOV-12

RMAN>quit

This is where the earlier comment about the placement of the commit came into play (and also ties neatly into my earlier post this month about Transaction Isolation Levels in Oracle). Because the changes had not been committed at the timestamp specified, when the flashback database was performed, those rows are missing from the resultant database.

Let's assume that we now go back and recreate the data in the VICEPREZ table and that our same user goes and deletes the exact same records again. We'll pick it up at the delete and the subsequent rescue by the DBA:


SQL> select sysdate from dual;

SYSDATE                                                                         
-------------------                                                             
2012-11-24:22:22:11                                                             

SQL> delete from viceprez where elected < '2000';

3 rows deleted.

SQL> delete from prez where elected = '2013';

4 rows deleted.

SQL> commit;

Commit complete.

SQL> select * from prez;

NAME                      ELEC
------------------------- ----
Barack Obama              2008

SQL> select * from viceprez;

NAME                      ELEC
------------------------- ----
Joe Biden                 2008
Dick Cheney               2000

SQL> connect / as sysdba
Connected.

SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup mount
ORACLE instance started.

Total System Global Area  535662592 bytes                                       
Fixed Size                  1375792 bytes                                       
Variable Size             415236560 bytes                                       
Database Buffers          113246208 bytes                                       
Redo Buffers                5804032 bytes                                       
Database mounted.
SQL> flashback database to timestamp to_timestamp('2012-11-24:22:22:11','YYYY-MM-DD:HH24:MI:SS');

Flashback complete.

SQL> alter database open resetlogs;

Database altered.

SQL> connect demo/demo
Connected.

SQL> select * from prez;

NAME                      ELEC                                                  
------------------------- ----                                                  
Barack Obama              2008                                                  
George W. Bush            2013                                                  
Bill Clinton              2013                                                  
George H. W. Bush         2013                                                  
Ronald Reagan             2013                                                  

SQL> select * from viceprez;

NAME                      ELEC                                                  
------------------------- ----                                                  
Joe Biden                 2008                                                  
Dick Cheney               2000                                                  
Al Gore                   1992                                                  
Dan Quayle                1988                                                  
George H. W. Bush         1980                                                  

SQL> quit

C:\>rman target /
RMAN> list incarnation of database;

using target database control file instead of recovery catalog

List of Database Incarnations
DB Key  Inc Key DB Name  DB ID            STATUS  Reset SCN  Reset Time
------- ------- -------- ---------------- --- ---------- ----------
1       1       DEMO     3543868088       PARENT  1          19-NOV-12
2       2       DEMO     3543868088       PARENT  849611     24-NOV-12
3       3       DEMO     3543868088       CURRENT 849931     24-NOV-12

RMAN>

The DBA is the hero of the hour, having managed to successfully recover the database back to pre-delete state without the large time investment of a complete database restore.

Of course, such functionality needs to be used with great care in a production environment: flashing back discards every change made since the target time, so make sure the data you lose that way isn't more of a pain than the data you've already lost.

One scenario that immediately comes to mind for the use of something like this would be in a large Enterprise application where Payroll is one of the subsystems in the database. The developers in this theoretical shop are in the process of making major functional changes to the code. They want to test on as close to live data as possible so they request a refresh of the test database.

Being a large Enterprise-level database, it takes 2-3 hours to restore the test server from backup. The developers run their test payroll, but it doesn't work. They need another fresh copy of the production database in order to run their test again.

Before Flashback Database, another 2-3 hours would be wasted waiting for the refresh to complete.

If the DBA instead sets up the test environment for Flashback, then whenever the developers need a fresh copy of the database, a quick Flashback Database gives them one in a fraction of the time.
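One way to sketch that workflow is with a guaranteed restore point, which retains the needed flashback data regardless of the retention target (the restore point name here is hypothetical):

```sql
-- Before handing the database to the developers:
CREATE RESTORE POINT before_payroll_test GUARANTEE FLASHBACK DATABASE;

-- When they need a "fresh copy" again (from a MOUNTed instance):
FLASHBACK DATABASE TO RESTORE POINT before_payroll_test;
ALTER DATABASE OPEN RESETLOGS;
```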

Flashback Table

Our next scenario is for Flashback Table. With this feature you can flashback just one table in the database to a particular SCN, TIMESTAMP or RESTORE POINT. Creating a restore point is as simple as:


SQL> create restore point flashback_table;

Restore point created.
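For reference, the other target forms look like this; the SCN and timestamp are illustrative values only:

```sql
-- Illustrative alternatives to a restore point (values are made up)
FLASHBACK TABLE viceprez TO SCN 849931;
FLASHBACK TABLE viceprez TO TIMESTAMP
  TO_TIMESTAMP('2012-11-24:22:32:14','YYYY-MM-DD:HH24:MI:SS');
```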

Again, let's assume that someone has run rampant over some table in the database. The table is standalone, with no dependencies on other objects, so it has been determined that it's safe to roll this one table back to a point in time in the past.

Demo Time


SQL> conn demo/demo
Connected.
SQL> select * from viceprez;

NAME                      ELEC
------------------------- ----
Joe Biden                 2008
Dick Cheney               2000
Al Gore                   1992
Dan Quayle                1988
George H. W. Bush         1980

SQL> select sysdate from dual;

SYSDATE            
-------------------
2012-11-24:22:32:14

SQL> delete from viceprez where elected = '2008';

1 row deleted.

SQL> select * from viceprez;

NAME                      ELEC
------------------------- ----
Dick Cheney               2000
Al Gore                   1992
Dan Quayle                1988
George H. W. Bush         1980

SQL> flashback table viceprez to restore point flashback_table;
flashback table viceprez to restore point flashback_table
                *
ERROR at line 1:
ORA-08189: cannot flashback the table because row movement is not enabled 

What does that mean? Row movement is a setting that tells Oracle it's OK to generate new ROWIDs for rows being recovered in this manner (since there's no guarantee that the old ROWID is still free).

Fortunately, this is a setting that can be enabled after the fact!


SQL> alter table viceprez enable row movement;

Table altered.

SQL> select * from viceprez;

NAME                      ELEC                                                  
------------------------- ----                                                  
Dick Cheney               2000                                                  
Al Gore                   1992                                                  
Dan Quayle                1988                                                  
George H. W. Bush         1980                                                  

SQL> flashback table viceprez to restore point flashback_table;

Flashback complete.

SQL> select * from viceprez;

NAME                      ELEC                                                  
------------------------- ----                                                  
Joe Biden                 2008                                                  
Dick Cheney               2000                                                  
Al Gore                   1992                                                  
Dan Quayle                1988                                                  
George H. W. Bush         1980                                                  

Flashback Drop

For our last scenario, we're going to look at recovering a table that has been dropped.

In this scenario (ignoring the fact that a user shouldn't have drop rights on a table anyway), a user inserts a row into the table, realizes their error and attempts to remove it. However, instead of removing the row, they drop the table in error.

The DBA puts on his red cape and once again comes to the rescue!

Demo Time


SQL> insert into viceprez values ('The Man in the Moon', '2013');

1 row created.

SQL> select * from viceprez;

NAME                      ELEC                                                  
------------------------- ----                                                  
Joe Biden                 2008                                                  
Dick Cheney               2000                                                  
Al Gore                   1992                                                  
Dan Quayle                1988                                                  
George H. W. Bush         1980                                                  
The Man in the Moon       2013                                                  

6 rows selected.

SQL> drop table viceprez;

Table dropped.

SQL> select * from viceprez;
select * from viceprez
              *
ERROR at line 1:
ORA-00942: table or view does not exist 


SQL> select table_name from user_tables;

TABLE_NAME                                                                      
------------------------------                                                  
PREZ                                                                            

SQL> show recyclebin;
ORIGINAL NAME    RECYCLEBIN NAME                OBJECT TYPE  DROP TIME          
---------------- ------------------------------ ------------ -------------------
VICEPREZ         BIN$bsEpu+DLQZWxoPngfpkZJQ==$0 TABLE        2012-11-24:22:40:18

SQL> flashback table viceprez to before drop;

Flashback complete.

SQL> show recyclebin;
SQL> select table_name from user_tables;

TABLE_NAME                                                                      
------------------------------                                                  
PREZ                                                                            
VICEPREZ                                                                        

SQL> select * from viceprez;

NAME                      ELEC                                                  
------------------------- ----                                                  
Joe Biden                 2008                                                  
Dick Cheney               2000                                                  
Al Gore                   1992                                                  
Dan Quayle                1988                                                  
George H. W. Bush         1980                                                  
The Man in the Moon       2013                                                  

6 rows selected.

SQL> delete from viceprez where elected = '2013';

1 row deleted.

SQL> select * from viceprez;

NAME                      ELEC                                                  
------------------------- ----                                                  
Joe Biden                 2008                                                  
Dick Cheney               2000                                                  
Al Gore                   1992                                                  
Dan Quayle                1988                                                  
George H. W. Bush         1980                                                  

Conclusion

Used with caution, the above Flashback options could save you hours of time in the event that you accidentally drop the wrong table, or have the need to perform repetitive testing on a test copy of the database where the testing causes mass updates to the data (such as in a payroll update).

There are many more options to these commands than what we've talked about here. For full information, please refer to Chapter 18 of the Oracle 11g Database Backup and Recovery User's Guide.

Next Time

Next time, we'll dig a little into Flashback Transaction and Flashback Transaction Query.

Tuesday, November 20, 2012

FLASHBACK Technology in Oracle Databases (Part 1)


"If it helps, I can travel in time as well" - The Tenth Doctor, referring to the TARDIS
Today we're talking about Time Travel in Oracle Databases.

I almost made a reference in the title and quote of this post about Mr. Peabody and the Way Back Machine but that would have perhaps shown my age - besides, quoting The Doctor is way cooler.

The Doctor and the Tardis

I recently found myself in a situation where I needed to go back in time. I was rolling forward a copy of a database to a particular point in time from a backup. I needed to stop at that particular point because the project manager wanted me to run row counts on the 1,500-odd tables in the "before" and "after" databases, and I had taken the "before" at that specific point in time.

During my recovery operation I fat fingered the UNTIL TIME clause and went past my stopping point.

After the expected cursing had died down I started wishing I had a TARDIS of my very own. Apart from the coolness factor of being able to zip back and forth through time at will, being able to go back about 15 minutes and fix my mistake would have been a miracle in and of itself.

It was then that I remembered FLASHBACK and it became my own TARDIS - Time and Relational Data in Schemas.

What is Flashback Query?

First introduced in Oracle 9i, Flashback Query enables us to view the database as it existed at a point in time in the recent past. In Oracle 9i and onwards, this is implemented via the DBMS_FLASHBACK package.

This functionality was enhanced in Oracle 9i Release 2 with the addition of the "AS OF TIMESTAMP" clause to the SELECT statement.

Prerequisites and Limitations

There are some prerequisites for using Flashback Query:

  • Initialization parameter requirement: undo_management = auto
  • Initialization parameter requirement: undo_retention = nnnn (maximum number of seconds to be able to query into the past)
  • Initialization parameter requirement: undo_tablespace = <undo_tablespace_name>
  • FLASHBACK (specific tables) or FLASHBACK ANY TABLE system privilege
  • EXECUTE privilege on DBMS_FLASHBACK

There are some restrictions that you should be aware of as well:

  • Prior to 11g, DDL that alters the structure of a table invalidates old UNDO data, rendering Flashback Query inoperative. From 11g onward, most DDL is supported.
  • Timestamps map to an SCN (System Change Number) value, and the mapping is recorded internally every 5 minutes. If operating within that window, query results may be inaccurate when using timestamps (use SCNs for increased accuracy).
  • You must DISABLE/ENABLE the flashback mode before changing time windows.
  • Only data state is affected. Any table that has had a DDL change will reflect the most recent state of the table.
  • You cannot use Flashback Query on V$ views, but you can use it on catalog tables like USER_TABLES.
  • The SYS user cannot use DBMS_FLASHBACK but can use AS OF TIMESTAMP in a SELECT.
  • DML or DDL operations cannot be performed in Flashback mode. Open a cursor, fetch the data, end the flashback session and manipulate the data or use INSERT INTO ... SELECT AS OF TIMESTAMP constructs.
  • The undo_retention parameter is a target - it is not a guarantee of data availability.
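Where timestamp granularity matters, the mapping can also be inspected directly with the SCN_TO_TIMESTAMP and TIMESTAMP_TO_SCN conversion functions (the SCN value below is illustrative):

```sql
-- Convert between SCNs and timestamps explicitly (SCN value is illustrative)
SELECT SCN_TO_TIMESTAMP(818801) FROM dual;
SELECT TIMESTAMP_TO_SCN(
         TO_TIMESTAMP('2012-11-20:22:49:06','YYYY-MM-DD:HH24:MI:SS')
       ) FROM dual;
```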

Demo Time

The first thing we need to do is make sure that the user we're going to use has the correct permissions, so we'd execute the following script:


create user demo identified by demo default tablespace users;
grant create session, flashback any table, create table to demo;
grant execute on dbms_flashback to demo;
alter user demo quota unlimited on users;

Next, we will check our parameters to make sure that automatic undo is turned on and that we're archiving:


SQL> connect / as sysdba
Connected.
SQL> show parameter undo

NAME                         TYPE        VALUE
---------------------------- ----------- ---------------
undo_management              string      AUTO
undo_retention               integer     3600
undo_tablespace              string      UNDOTBS1

SQL> select instance_name, archiver from v$instance;

INSTANCE_NAME    ARCHIVE                              
---------------- -------                              
demo             STARTED                              

After that, let's create a table and load a little data and validate what we have. At the end of the inserts, we'll commit the changes, grab our current SCN and our current date and time.


SQL> conn demo/demo
Connected.

SQL> create table prez (name varchar2(25), elected varchar2(4));

Table created.

SQL> insert into prez values ('Barack Obama', '2008');

1 row created.

SQL> insert into prez values ('George W. Bush', '2000');

1 row created.

SQL> insert into prez values ('Bill Clinton', '1992');

1 row created.

SQL> insert into prez values ('George H. W. Bush', '1988');

1 row created.

SQL> insert into prez values ('Ronald Reagan', '1980');

1 row created.

SQL> select * from Prez;

NAME                      ELEC
------------------------- ----
Barack Obama              2008
George W. Bush            2000
Bill Clinton              1992
George H. W. Bush         1988
Ronald Reagan             1980

SQL> commit;

Commit complete.

SQL> alter session set nls_date_Format = 'YYYY-MM-DD:HH24:MI:SS';

Session altered.

SQL> set serveroutput on
SQL> var scn number;
SQL> exec :scn := dbms_flashback.get_system_change_number;

PL/SQL procedure successfully completed.

SQL> exec dbms_output.put_line ('SCN = '||to_char(:scn));
SCN = 818801

PL/SQL procedure successfully completed.

SQL> select sysdate from dual;

SYSDATE 
-------------------
2012-11-20:22:49:06

Next, we'll run an update where we "accidentally" update all the rows in the table to an incorrect value. We'll also grab a timestamp here.


SQL> update prez set elected = '2013';

5 rows updated.

SQL> commit;

Commit complete.

SQL> select sysdate from dual;

SYSDATE
-------------------
2012-11-20:22:49:32

With that done, it's now time to check the table out with current values and values as they were at a specific SCN. The first way uses the DBMS_FLASHBACK package to view the data as it was at that first SCN we assigned to the variable :scn. Then, we do it again using SELECT ... AS OF SCN.


SQL> select * from prez;

NAME                      ELEC
------------------------- ----
Barack Obama              2013
George W. Bush            2013
Bill Clinton              2013
George H. W. Bush         2013
Ronald Reagan             2013

SQL> exec dbms_flashback.enable_at_system_change_number(:scn);

PL/SQL procedure successfully completed.

SQL> select * from prez;

NAME                      ELEC
------------------------- ----
Barack Obama              2008
George W. Bush            2000
Bill Clinton              1992
George H. W. Bush         1988
Ronald Reagan             1980

SQL> exec dbms_flashback.disable;

PL/SQL procedure successfully completed.

SQL> select * from prez;

NAME                      ELEC
------------------------- ----
Barack Obama              2013
George W. Bush            2013
Bill Clinton              2013
George H. W. Bush         2013
Ronald Reagan             2013

SQL> select * from prez as of scn :scn;

NAME                      ELEC
------------------------- ----
Barack Obama              2008
George W. Bush            2000
Bill Clinton              1992
George H. W. Bush         1988
Ronald Reagan             1980

For the next demo, we'll do effectively the same thing, using timestamps instead of SCNs. I made sure to wait at least 5 minutes so that the internal SCN-to-timestamp mapping was updated correctly, and then did round 2.


SQL> select * from prez;

NAME                      ELEC
------------------------- ----
Barack Obama              2013
George W. Bush            2013
Bill Clinton              2013
George H. W. Bush         2013
Ronald Reagan             2013

SQL> exec dbms_flashback.enable_at_time(to_date('2012-11-20:22:49:00','YYYY-MM-DD:HH24:MI:SS'));

PL/SQL procedure successfully completed.

SQL> select * from prez;

NAME                      ELEC
------------------------- ----
Barack Obama              2008
George W. Bush            2000
Bill Clinton              1992
George H. W. Bush         1988
Ronald Reagan             1980

SQL> exec dbms_flashback.disable;

PL/SQL procedure successfully completed.

SQL> select * from prez 
  2  as of timestamp 
  3  to_timestamp('2012-11-20:22:49:00','YYYY-MM-DD:HH24:MI:SS');

NAME                      ELEC
------------------------- ----
Barack Obama              2008
George W. Bush            2000
Bill Clinton              1992
George H. W. Bush         1988
Ronald Reagan             1980

SQL> select * from prez 
  2  as of timestamp 
  3  to_timestamp('2012-11-20:22:49:32','YYYY-MM-DD:HH24:MI:SS');

NAME                      ELEC
------------------------- ----
Barack Obama              2013
George W. Bush            2013
Bill Clinton              2013
George H. W. Bush         2013
Ronald Reagan             2013
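As an aside, Oracle also provides the SCN_TO_TIMESTAMP and TIMESTAMP_TO_SCN functions to convert between the two formats, which can help when deciding what to flash back to. A quick sketch (the mapping is only as precise as Oracle's internal SCN/time recording, so expect a few seconds of slack):

```sql
-- Convert our saved SCN to the approximate time it represents
select scn_to_timestamp(:scn) from dual;

-- And go the other way, from a timestamp to an SCN
select timestamp_to_scn(systimestamp) from dual;
```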

Finally, there's another clause we can add to the SELECT statement to show the versions of a row that have existed over time: SELECT ... VERSIONS BETWEEN.

We start by updating one row of the table, capturing the new time and SCN, and then running the queries.


SQL> update prez set elected = '2008' where name = 'Barack Obama';

1 row updated.

SQL> select * from prez;

NAME                      ELEC
------------------------- ----
Barack Obama              2008
George W. Bush            2013
Bill Clinton              2013
George H. W. Bush         2013
Ronald Reagan             2013

SQL> commit;

Commit complete.

SQL> select sysdate from dual;

SYSDATE                                                           
-------------------                                               
2012-11-20:23:11:54                                               

SQL> exec dbms_output.put_line ('SCN = '||to_char(:scn));
SCN = 818801                                              

PL/SQL procedure successfully completed.

SQL> var scn2 number;
SQL> exec :scn2 := dbms_flashback.get_system_change_number;

PL/SQL procedure successfully completed.

SQL> exec dbms_output.put_line ('SCN = '||to_char(:scn2));
SCN = 820456                                         

PL/SQL procedure successfully completed.

SQL> select * from prez 
  2  versions between scn 818801 and 820456 
  3  where name = 'Barack Obama';

NAME                      ELEC
------------------------- ----
Barack Obama              2008
Barack Obama              2013
Barack Obama              2008

SQL> select sysdate from dual;

SYSDATE                                               
-------------------                                               
2012-11-20:23:19:24                                               

SQL> select * from prez 
  2  versions between timestamp 
  3  to_timestamp('2012-11-20:22:49:00','YYYY-MM-DD:HH24:MI:SS')
  4  and to_timestamp('2012-11-20:23:20:00','YYYY-MM-DD:HH24:MI:SS')
  5  where name = 'Barack Obama';

NAME                      ELEC
------------------------- ----
Barack Obama              2008
Barack Obama              2013
Barack Obama              2008
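The VERSIONS clause also exposes a set of pseudocolumns that identify each row version, which makes output like the above much easier to interpret. A sketch against the same table (actual SCNs and transaction IDs will vary):

```sql
-- VERSIONS_STARTSCN / VERSIONS_ENDSCN : SCN range in which the version was current
-- VERSIONS_XID                        : transaction that created the version
-- VERSIONS_OPERATION                  : I(nsert), U(pdate) or D(elete)
select versions_startscn, versions_endscn,
       versions_xid, versions_operation,
       name, elected
  from prez
  versions between scn minvalue and maxvalue
 where name = 'Barack Obama';
```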

Conclusion

One of the important things to note here is that while Flashback mode is enabled through DBMS_FLASHBACK, no DML is allowed. One way around that is to use SELECT ... AS OF to create a temporary copy of the data you need, and then work from that copy to correct the live data.

The following would work:


create table temp_prez as select * from prez as of scn :scn;

From there you'd be able to compare current and old data and update accordingly as needed.
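For instance, a correlated update from the copy could push the old values back into the live table. This is a sketch only, assuming the temp_prez copy created above and that name uniquely identifies a row:

```sql
-- Restore the point-in-time values from the flashback copy
update prez p
   set p.elected = (select t.elected
                      from temp_prez t
                     where t.name = p.name)
 where exists (select 1
                 from temp_prez t
                where t.name = p.name);
```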

For further information on Flashback Technology in Oracle Databases, be sure to check the Oracle Documentation.

Next Time

Next time, we'll dig a little deeper into the features added in Oracle 10g and later, specifically Flashback Table, Flashback Drop and Flashback Database.

Thursday, November 15, 2012

Question to self: Who am I?


"I believe the appropriate metaphor here involves a river of excrement and a Native American water vessel without any means of propulsion." - Dr. Sheldon Cooper, The Big Bang Theory.
Today we'll talk about some Oracle stuff for a change.

I had one of my colleagues come to me a couple of weeks ago and ask, "How do I tell which database I'm connected to after I connect?"

I had to scratch my head a little on that one. When connected as SYSDBA, it's very simple to select from v$instance and you have your instance name.

However, if you're not connected as SYSDBA, how do you do it?

Enter the SYS_CONTEXT function. SYS_CONTEXT allows you to extract parameter values associated with a particular namespace. Oracle provides a predefined namespace called USERENV which contains a wealth of information about the current session.

The following, for example, would tell you which instance you're currently connected to:


SELECT SYS_CONTEXT ('USERENV', 'INSTANCE_NAME') 
   FROM DUAL;
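A few other USERENV parameters that come in handy (the parameter names are as documented; the column aliases are just mine):

```sql
SELECT SYS_CONTEXT ('USERENV', 'DB_NAME')        AS db,
       SYS_CONTEXT ('USERENV', 'SESSION_USER')   AS usr,
       SYS_CONTEXT ('USERENV', 'CURRENT_SCHEMA') AS sch,
       SYS_CONTEXT ('USERENV', 'HOST')           AS client_host
   FROM DUAL;
```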

There's a full list of the predefined parameters in the Oracle SQL Language Reference Manual.

Wednesday, November 14, 2012

Isolation Levels in Oracle vs SQL Server


"I am somewhat preoccupied telling the Laws of Physics to shut up and sit down." -Vaarsuvius, Order of the Stick Webcomic
I sat in on a session at the PASS Summit in Seattle last week listening to a session delivered by Randy Knight talking about Isolation Levels in SQL Server.

Knowing that Oracle handles things differently to SQL Server made me want to research things a little closer.

As a DBA who spends time in both Oracle and SQL Server, I need to understand how isolation levels work in each of these RDBMSs, because their default behavior is very different.

The ANSI standard defines 4 isolation levels for a relational database management system, ordered below from least isolated to most isolated.

  1. Read Uncommitted
  2. Read Committed
  3. Repeatable Read
  4. Serializable

Going into the details of what each of these mean is outside the scope of what I want to write about today. There's a good write up at Wikipedia or any of a dozen or more blogs, just search "Isolation Levels" on your favorite search engine.

It should be noted that SQL Server implements all 4 of the ANSI standard isolation levels plus one of its own (SNAPSHOT isolation, which itself comes in two flavors: statement level and transaction level). As with the four standard levels, going into the details of snapshot isolation is outside the scope of this post. To learn more about SNAPSHOT isolation, refer to Books Online.

Oracle, on the other hand, implements neither Read Uncommitted nor Repeatable Read, and adds an extra option of its own called Read Only isolation. You can read more about this option in the Oracle Documentation.

Because I work daily in both Oracle and SQL Server, what I really wanted to find out was the core difference between how SQL Server and Oracle handle the READ COMMITTED isolation level.

In basic terms, SQL Server has a "pessimistic" view of data concurrency by default. When in the middle of a transaction, SQL Server will block readers on any row that has changed until that row is committed.

Oracle, on the other hand has an "optimistic" view of concurrency by default. When in the middle of a transaction, Oracle will have readers see a version of the row as it existed at the start of the update transaction, thus not blocking readers.

I found an article written by Tom Kyte, Oracle's technology expert, where he talked about possible data inaccuracies when Read Committed is implemented the way Microsoft does it in SQL Server; I wanted to summarize it here, as I used it for my own testing.

The basic setup is like this:

A bank account database has a balances table that contains two columns, account and balance. The table has many hundreds of thousands of rows.

A query is started to sum the balances. The database begins reading pages and accumulating balances. After 200,000 records have been read, a transaction runs in the database: an account that has already been read is decremented by $400, and an account that has not yet been read (row number 300,000) is incremented by $400.

How do both database systems handle this?

The timeline would go like this (I am going to represent the summing of balances by selecting one row at a time):

  1. Transaction 1: select balance where account = 1;
  2. Transaction 1: select balance where account = 2;

    ... etc ...

  3. Transaction 2: Decrement the value of account 1 by $400;
  4. Transaction 2: Increment the value of account 300000 by $400;

    ... etc ...

  5. Transaction 1: select balance where account = 300000;

    ... etc ...

  6. Transaction 2: Commit

Let's start by looking at how Oracle handles it.

I preload a table with three rows. Account 1 has a balance of $500, Account 2 has a balance of $200 and Account 300000 has a balance of $100.
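For reference, the preload can be done with something like this (the table and column names are mine, chosen to match the queries that follow):

```sql
create table accounts (
  account number primary key,
  balance number not null
);

insert into accounts values (1, 500);
insert into accounts values (2, 200);
insert into accounts values (300000, 100);
commit;
```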

Oracle's SQL*Plus, unlike SQL Server Management Studio, uses an implicit transaction. You don't need to specify BEGIN TRANSACTION - a transaction is assumed from the moment you start until you quit SQL*Plus (an implicit commit) or explicitly issue a commit or rollback.

In Session 1, we execute the following


SQL> select balance from accounts where account = 1;

   BALANCE
----------
       500

SQL> select balance from accounts where account = 2;

   BALANCE
----------
       200

SQL>

We do not commit the transaction.

In Session 2 we update the values of our accounts


SQL> update accounts set balance = 100 where account = 1;

1 row updated.

SQL> update accounts set balance = 500 where account = 300000;

1 row updated.

SQL>

Once again we leave the transaction open and do not commit.

We switch back to session 1:


SQL> select balance from accounts where account = 300000;

   BALANCE
----------
       100

SQL>

The query immediately returns the value of account 300000 as it was at the beginning of this current transaction, giving us a consistent view of all the balances as of the time that the query started.

Returning to Session 2 now, we can commit the transaction over there just to tidy up loose ends.


SQL> Commit;

Commit complete.

SQL> 

Having seen how Oracle does it (the optimistic view), let's have a look at SQL Server.

On SQL Server, we need to explicitly tell SQL Server Management Studio that we want a transaction.


begin transaction

select * from accounts where account = 1;
select * from accounts where account = 2;


account                                 balance
--------------------------------------- ---------------------------------------
1                                       500

(1 row(s) affected)

account                                 balance
--------------------------------------- ---------------------------------------
2                                       200

(1 row(s) affected)

So far so good.

Over to Session 2, we perform our balance transfer between accounts


begin transaction

update accounts set balance = 100 where account = 1;
update accounts set balance = 500 where account = 300000;

(1 row(s) affected)

(1 row(s) affected)

Returning to Session 1, we try to query Account 300000:


select * from accounts where account = 300000;

SSMS sits here and spins. It is waiting for the transaction from Session 2 to complete and commit the change to the database. A writer is blocking a reader now.

We swap back to session 2 and complete the transaction:


commit transaction

Immediately the query in Session 1 returns a value:


account                                 balance
--------------------------------------- ---------------------------------------
300000                                  500

(1 row(s) affected)

That's not right ... or is it? I'd have to say no - it's not right - not from a pure data accuracy standpoint, anyway.

Under SQL Server's Read Committed isolation level, as implemented, that is the correct value - the value was read after it was committed to the database. But the net result is that the $400 has been counted twice: once in the balance of account 1 and again in the balance of account 300000. Our query has, in fact, returned incorrect data!

If I wanted an accurate picture of my account balances under this scenario, it appears that Snapshot isolation is what's needed - but even then it can be a little tricky.

I ran through the scenario above again using Snapshot isolation, using these scripts in the same order of command execution as presented above:

Session 1


set transaction isolation level snapshot
begin transaction
select * from accounts
select * from accounts where account = 300000
commit transaction

Session 2


set transaction isolation level snapshot
begin transaction
update accounts set balance = 100 where account = 1;
update accounts set balance = 500 where account = 300000
commit transaction
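One gotcha worth noting: SET TRANSACTION ISOLATION LEVEL SNAPSHOT will fail unless snapshot isolation has first been enabled at the database level. Something like the following, where the database name is just a placeholder:

```sql
ALTER DATABASE SalesDb SET ALLOW_SNAPSHOT_ISOLATION ON;

-- READ_COMMITTED_SNAPSHOT is the separate, statement-level variant:
-- ALTER DATABASE SalesDb SET READ_COMMITTED_SNAPSHOT ON;
```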

The value from the last query in Session 1 showed $100 before I committed Session 2, returning immediately, just as Oracle did. However, even after I had committed Session 2 and ended it completely, the query on the balance of account 300000 in Session 1 still showed $100. Only when I ran the COMMIT TRANSACTION in Session 1 did it return the $500 value that was actually sitting in the database.

This actually makes sense, since we had done a begin transaction at the start of the query in Session 1, so it was showing us a picture of the data as it existed at the start of Session 1.

If we run through the example one more time, this time not putting the Session 1 queries inside a transaction, then as soon as Session 2 commits, the balance of account 300000 shows the updated value immediately.

Microsoft provides an excellent example of the different behavior under snapshot isolation and read committed snapshot isolation on their website (that uses vacation days instead of a bank account as the example).

So, there's a definite "Gotcha!" sitting in there. If this kind of level of accuracy is important to you and your application, make sure you completely understand how Isolation levels work in the database system you're using.

I should reiterate that the demonstration scenario used in this post came from Tom Kyte's article, as linked above, and was replicated here to compare and contrast the default behavior of Oracle and SQL Server and how it can be modified with other options. While he mentioned "other vendors" in his article, I wanted to see for myself how this scenario played out in SQL Server (I already knew for certain how it would behave in Oracle).

Sunday, November 11, 2012

PASS Summit 2012 Thoughts and Ruminations


"Sir, I think you have a problem with your brain being missing" - Zoe, Firefly, "The Train Job"
The quote above is a little how I feel after this past week in Seattle at PASS Summit 2012. This was my second PASS Summit (many thanks to my boss for sending me out for another great week of education). My brain is so overloaded with information, I'm beginning to feel like it isn't there at all.

I learned last year a few things that were very helpful for me this year. Last year I struggled with notepads and pens and then trying not to lose all the pieces of paper I had acquired after I got home.

In the last 8 months or so I bought myself an iPad. Not really being a fan of Apple's soft keyboard, I got a bluetooth keyboard/case to go with it. I saw at least one person with the ZAGGFolio case, but my personal preference is the Airbender from New Trent. I used it heavily all week long in combination with Evernote as the note-taking app, and it worked brilliantly.

A downside to the conference was the wifi. It seemed to work brilliantly for anything that didn't require an SSL connection (with the sole exception, oddly enough, of the session review site - I couldn't get that to work on my iPad at all), but I couldn't make an SSL connection to anything.

A new change this year was using the Guidebook site for electronic recording of speaker review forms. Even with the tablet access issue, I was able to use my phone to record session reviews with ease.

I was fortunate enough to also be there for both Pre Cons on Monday and Tuesday.

On Monday, Argenis Fernandez and Robert Davis talked about best practices for DBAs. While they covered some topics that I had already researched well and implemented, there were still lots of ideas for things to look at. I also met a new friend on Monday, a DBA from Singapore who I will be sure to follow up with this week.

Had a brief moment to introduce myself to Argenis and thank him for his assistance on Twitter. Whenever I post to #sqlhelp he usually has some good input.

On Tuesday I decided to spend the day with Allen White and go over some detail in Powershell. As a complete Powershell novice, I was very impressed at the things that can be done with this language.

I will need to look into getting this framework installed on all my servers and seeing just what I can leverage from it.

Tuesday evening was a great time to catch up with a former colleague of mine from my previous employer (if you saw the Princess wandering around on Thursday, it was her) and also got to meet face to face with one of my customers for the first time.

I tend to skip the Keynotes because I'm just not that excited by the whiz bang new features that Microsoft has to offer as they are announced - because I know I won't see any of them for a long time anyway, and it gives me a couple of hours in the morning to catch up on email and tend to any issues before hitting the sessions.

Highlights of Wednesday's sessions, for me, were Erin Stellato's morning session on DBCC options, Bob Dylan (Brent Ozar) doing a Lightning Talk on the war going on inside tempdb and the fabulous afternoon session with Tom LaRock and Tim Chapman (Choose your own adventure). For those who weren't there, this was a cleverly designed session where Tim and Tom had concocted a poorly performing system. The scenario was - the server is slow and we can't work. Every minute the server is in this state, it's costing the company $250,000. Fix it.

They would ask for suggestions from the room of what to look at, and then follow it up with "Why that? What will that tell you? What are you looking for?"

There was some controlled chaos in some parts, with people yelling out all sorts of ideas but it was a brilliant concept for a session. Kudos to you, gentlemen.

Wednesday evening was the amazing Hard Rock Cafe Karaoke. Note for next time, if you want to sing, be up there early - by the time I had imbibed enough liquid courage to put my name down, the list was full. The live band were amazing and the performances by the attendees were (for the most part :) ) pretty amazing as well. One thing I did learn was that I have apparently led a musically sheltered life because I had never heard Creep by Radiohead before - and I heard it twice that night (and once more the following night at EMP). Big thanks to Pragmatic Worx, HP and Microsoft for sponsoring that one.

Thursday's highlights included a session from Brad McGehee on OS tuning and Brent Ozar's session on his sp_Blitz script. The new features in this new version give me high hopes that I might get some insight into tracking down a recurring process at my shop that locks the database to high heaven at the start of every month (I really didn't know you could pull execution plans out of the DMVs, place them in a free tool and get information on the parameters and whatnot out of them). Very excited about looking more into that at the start of next month.

The look on Brent's face after the session when my question started with "I have this really old COBOL based app that connects to SQL Server" was worth the price of admission.

Finally Friday came and topped off the week with some good tips from Kevin Kline on virtualization. We really don't virtualize database servers (they tend to be physical boxes) but I'm looking forward to digging more into this area.

There was so much to learn this week that it is going to take weeks to go through it all on every server and work out just what improvements can possibly be made.

Favorite session of the week is probably a split between Tom/Tim's "Choose Your Own Adventure" session or Brent's "sp_Blitz". If I had to choose, I think Tom and Tim would come out on top by a hair.

Next year, the Summit is off to North Carolina and I don't know if I'll be making it (we seem to be moving more and more toward Oracle and less and less toward SQL Server, and if that trend continues, I'll likely be investing more in Oracle education than in SQL Server).

Still, all in all, a great week of education. Next on the list is the Rocky Mountain Oracle User Group Training Days in Denver in February. It's the same kind of thing as the PASS Summit, but for Oracle (and always a great couple of days of training)! If you do Oracle, I can't recommend this conference highly enough - go take a peek at http://www.rmoug.org/training/training-days-2013-registration/ for full information.

Many thanks to the organizers, the Board of Directors and the volunteers who made this all possible.

Tuesday, July 17, 2012

T-SQL Tuesday #32 - Day in the Life


Insanity: doing the same thing over and over again and expecting different results. - Albert Einstein
After a couple of months in absentia from T-SQL Tuesday, I went hunting again this month and found Erin Stellato (blog|twitter) asking us to write about a "Day in the Life" - what we did on either July 11th or July 12th, 2012. Since I didn't really look until Wednesday, I chose Thursday.

There's more to a 'Day in the Life' of an exiled DBA that just the DBA stuff. Sometimes, there's some personal things of note to report upon and I thought I'd start this post with some of that.

The second Thursday of the month at 6AM local (Mountain) time is a scheduled maintenance window for one of my customers. Normally, while I'm at home, that means getting up at 5:55AM, stumbling down 2 flights of stairs to my basement (I work from home and telecommute), dialling into a conference bridge so I can coordinate with the server guy and do those maintenance things that usually happen.

This month, however, my family and I are away from home and staying with some friends for a few weeks in the state where my company's office is. So, in order not to wake up every one, I needed to get up earlier to actually be in the office by 6 AM.

I awoke at 4:45AM and headed out of my friend's basement to find my 17 year old son sitting in the living room watching TV.

The conversation went something like this.

Me: "What are you doing up so early?"
Son: "Mom said I could stay up, but I am not allowed to be grumpy today."
Me: "Aren't you going walking around the zoo today for 3 hours or so?"
Son: "Yes."
Me: "Trust me, go get some sleep."

(For the record, when my wife said "you can stay up", she meant for something resembling an hour, not "all night". I guess you need to be very specific when talking to teenagers on summer break).

Anyway, that aside, let's get back to the work related stuff.

Before we get started, what is it that I do? My job title is "Senior Technical Consultant". What does that mean? It means I'm a Jack of All Trades (see my post for T-SQL Tuesday #28 if you really care :) ).

I do Oracle stuff, I do SQL Server stuff, I do Windows stuff, I do Unix stuff, I write CMD scripts and shell scripts, and I still bravely wade neck-deep into COBOL programming waters (no matter how much I try, I can't seem to ditch that one - it reminds me of the story of the COBOL programmer who cryofreezes himself and gets woken up in 9999 AD, just in time to save the world from the Y10K bug, because his bio says he knows COBOL). I also do system administration and troubleshooting on an ERP package that my company supports.

Arriving at work by 6AM, got together with the Windows guy and we did our thing. He patched the servers, I cleaned up some wayward security issues in the database and performed some index defragmentation and we got the application and monitoring restarted and continued along with our morning.

With the maintenance over it was time to read the stack of un-opened emails that appears in my inbox every morning. Backups! Check. Scheduled jobs! Check. Disk space and file growth! Check. Everything appeared to work perfectly last night.

Around mid morning I realized that an important thing had been missed today. Caffeine! After getting appropriately caffeinated, the day continued.

Drudgery time! Anyone who needs to keep up with timekeeping needs to find some time to enter their time sheet data (and I always seem to be a day behind with that). Mid-morning seemed to be as good a time as any to catch up on this chore.

Next up today (which like any other days appears to be heading down the path of controlled chaos) is a call about a web application that isn't responding. A check of the database shows me that the DB is responding just fine, no hung Agent jobs, queries returning normally. A short time later someone finds that a timeout setting has been modified when it shouldn't have been on the web server side. A quick change over there and all is well again. This is a fine example of how when things stop working, people immediately suspect the database! Sometimes it is the database of course, but just as often (if not more so) it isn't.

Another project we have in place at the moment is to move what effectively is a small data center from one state to another. Several hours are spent today going over the requirements for the live move and working on debugging the bits and pieces in one of the applications after the test move the day before - this is a very legacy application that was written about a century and a half ago and can be very cantankerous. Making it work after shifting between servers hasn't been easy.

Lunch comes and goes and the stream of chaos is left behind for a half hour.

Plunging back into the current post midday sustenance and it's digging into a malfunctioning trigger after the data conversion of the cantankerous app mentioned above and redoing a part of that data conversion.

A colleague then wants to analyze some data, so a quick extract into Excel later and they go back to their office humming happily over the bits and bytes contained in their pretty new spreadsheet.

The office manager strolls past my office and mentions that it's almost time for Open Enrollment again (my company has open enrollment for insurance for an October to September schedule), so a half hour goes bye bye as plans and costs and what not are reviewed. I hate Open Enrollment. Passionately.

A call comes in from a colleague asking why a report that's worked for the last 5 years isn't working today. I dig down through the code and after a short while realize that the report works fine ... as long as the user connects to the database using the correct login. Funny that. Log in as someone without the required permissions and the report doesn't work. Log in with the correct user (the one with the correct grants) and the report works!

Yesterday (in this context "yesterday" means July 11th - which was also my father's 75th birthday, so a happy birthday shout out to my dear old Dad), late in the day, I had implemented a change in a stored procedure that would allow some time sensitive data to be retained for 60 days instead of the 10 it was originally implemented with. Change Management requires validation of all changes in order to make sure they are successful. The end of my afternoon today is making sure that my change is accumulating data (since we can't prove the "delete" part of this change until 60 days from now).

Finally, as the day winds down, it seems like a good time to sit down and catch up on some of the administration that is needed in keeping track of all of the issues still before me. Machete in hand, it's time to dive into the jungle of the ticketing system and update everything that's been accomplished today, then perform the miracle that is getting the old time sheet updated before going home!

This day is probably not unlike many other people's days - organized chaos, many balls in the air at the same time and all of the tasks due yesterday.

Between the MS Outlook "To Do" List and a notepad that's always at my side, I somehow manage to keep 99.99% of my tasks and to do's tracked.

Which brings me to the end of my day and the end of the story.

Almost.

Let's go back to the opening story about a young boy who didn't want to go to bed.

What happens to a young man at the end of the day, after all of about 2 hours of sleep and a 3.5 hour walk around the Denver Zoo (which is one of my favorite places, btw)?

Well, he gets in the car for the drive home, opens his gameboy to play and pass the time and this happens....

I love how the stylus for the Nintendo DS is sitting there in his right hand, between his thumb and index finger.

Yes, I am a mean old Dad, but it needed to be done.

I'm sure when I show this post to my son he'll be cranky at me, but he will have a chuckle at it and then next time go to bed at a more reasonable hour!

(EDIT: When I showed my son this post he didn't let me down. He had a good chuckle and his response to the going to bed at a more reasonable hour was basically "Or not!")

Today was a fairly typical day. I delved into my Oracle databases (the trigger and the report that didn't work), my SQL Server databases (the maintenance window and the web app), did some scripting (data center move), analyzed some COBOL Code (part of the data center move - there's also a platform change in there somewhere) and even got some non-standard activities in today (Open Enrollment review).

At the end of the day there were also the stories about my son at the zoo. He had warned his younger sister not to bug him because he was tired, and my wife countered with "Oh no, no, no. You bug him all you want, he's not allowed to be grumpy today".

Fortunately, I really enjoy doing what I do.

Hopefully your day was less chaotic than mine and if you're still with me, you haven't fallen asleep!

Until next time.

Monday, April 2, 2012

Scotty, Beam Me Up


"N-C-C-1-7-0-1 ... No bloody A, B, C or D" - Captain Montgomery Scott, Star Trek: TNG - 'Relics'
With the topic of this post, a starting quote from Scotty himself seemed apropos.

However, this post has nothing to do with Star Trek at all. Today we'll be talking about Transportable Tablespaces in an Oracle database.

Why do this?

I have a database that I'm responsible for that has three primary schemas inside. When refreshing test from prod, I have to replace everything except one schema, which contains metadata for the others that is specific to the test instance. Replacing this metadata breaks the application in test.

Up until recently, I had been doing an export out of prod and then truncate / import into test. This method is extremely time consuming and I thought that there had to be a better way.

This is, of course, extensively documented in the Oracle Documentation and there are some limitations to this (refer to the link) - but for what I need, this turns out to be an excellent way to get a refresh done in a much faster timeframe.

Setup

The first thing we need to do is make sure that the tablespaces we're going to be moving are self-consistent as a set. You can't have an object in the set depend on an object in a tablespace that's not part of the set being moved.

Oracle has a built in procedure to check this for us.

EXEC DBMS_TTS.TRANSPORT_SET_CHECK('TS1_NAME, TS2_NAME, TS3_NAME, ...', TRUE);

The first parameter is a comma separated list of tablespace names, enclosed in single quotes. The second parameter is TRUE or FALSE, indicating whether referential integrity constraints should be checked (the default is FALSE). There is also a third boolean parameter (default FALSE) indicating whether a "full check" should be performed (refer to the documentation for full details).

Once complete, check to see if there are any inconsistencies:

select * from transport_set_violations;

There should be no rows returned.

My next step at this point was to check the owners of the objects in the tablespaces I was moving, since the object owners need to be defined during the import of the tablespaces.

select distinct owner from dba_tables where tablespace_name in ('TS1_NAME','TS2_NAME','TS3_NAME');
select distinct owner from dba_indexes where tablespace_name in ('TS1_NAME','TS2_NAME','TS3_NAME');

Write these down and keep them until later.

The Copy

The next step is to recover the production database to a dummy database on the test DB server. Once the recovery is complete, set each tablespace being moved into read-only mode:

alter tablespace [name] read only;

Once that is complete for each tablespace, we need to export the tablespace metadata:

$ORACLE_HOME/bin/exp transport_tablespace=y tablespaces=ts1_name,ts2_name,ts3_name triggers=y constraints=y grants=y statistics=none file=exp_ts.dmp

When prompted, enter "/ as sysdba" as the username (you'll need to be logged in as the Oracle owner to the operating system).

In the test database that we're importing into, we first need to get rid of the existing copies of the tablespaces:

drop tablespace [name] including contents and datafiles;

Then copy the datafiles from the temporary copy that we recovered above into place for the test database we want to copy into, renaming the files at the OS level if needed.

The Import

Now, import the tablespace metadata:

$ORACLE_HOME/bin/imp file=exp_ts.dmp tablespaces=TS1_NAME,TS2_NAME,TS3_NAME tts_owners=user1,user2,user3 datafiles=datafile1,datafile2,datafile3 transport_tablespace=y

TTS_OWNERS is the list of schema owners from the earlier query.

When prompted, enter "/ as sysdba" as the username (you'll need to be logged in as the Oracle owner to the operating system).

When the import is complete, for each tablespace do:

alter tablespace [name] read write;
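With more than a handful of tablespaces, the ALTER statements themselves can be generated rather than typed. A sketch, using the same placeholder names as earlier:

```sql
-- Generate the read-write statements for each tablespace in the transport set
SELECT 'alter tablespace ' || tablespace_name || ' read write;'
FROM dba_tablespaces
WHERE tablespace_name IN ('TS1_NAME', 'TS2_NAME', 'TS3_NAME');
```

Spool the output and run it as a script; the same trick works for the earlier read-only step.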

Conclusion

For a non-trivially sized database, this method is much faster than export and import; the whole process completes in about a third of the time.

Thursday, March 29, 2012

A Speaker's Nightmare


"I am not left handed either!" - Westley, "The Princess Bride"
As noted in the profile section, I am a DBA who works with both Oracle and SQL Server. I had never really considered trying to speak at a technical conference. After I started paying more attention to the SQL Server online community, and attended the SQL PASS Summit in 2011, I thought I'd give it a shot.

I know more about Oracle than I do SQL Server and usually attend the Rocky Mountain Oracle User Group Training Days event in Denver each February, so I submitted an abstract during the call for papers.

I have been attending the RMOUG Training Days event about two years out of three for the last 10 years or so. In all that time, I don't remember ever having seen a session on Oracle Heterogeneous Services. OHS is a feature of Oracle that lets you seamlessly connect to a non-Oracle database, via either a fully functional (and separately licensed) target-specific gateway or the more restricted functionality of the (free!) Generic Gateway Agent.

Having needed to put together a real-time query link to a SQL Server database in the not-so-distant past, I thought it would be a good topic to talk about and hoped it would serve up enough interest to get some backsides in seats.

I was pleasantly surprised, a couple of months later, to receive an acceptance letter, and I started preparing my paper and slides.

During the several weeks I worked on the presentation, I researched the topic further. What I had done to meet my business requirement had only touched the tip of the iceberg. I explored, read, learned and tested. I installed MySQL, SQL Server and Oracle on my laptop and went through demo scenarios, making sure it all hung together.

On conference day, there was a 15-minute break between sessions, so I turned up at my allotted room right after the previous session ended to set up my laptop and make sure the projector and my remote were working.

The room monitor came in and we started talking and I joked that I'd had this nightmare that no one would show up.

At the top of the hour, as I was supposed to start talking, I had two people in the room: one person who had walked in, and the room monitor.

Not to be discouraged, I ran through my presentation, took good questions from my small audience and provided answers.

Of course, post presentation, I was somewhat crushed. All that effort into putting the white paper and the slides together. The hours spent practicing the delivery, setting up the demo.

I'm sure that there were several contributing factors to the almost non-existent turnout. There were some big names in the Oracle world speaking at the same time on more mainstream topics than mine. If I were Joe Q. DBA and could either see a big name talk about something mainstream (like the optimizer, performance tuning, etc.) or go see some unknown talk about a rarely used part of the technology, I probably would have gone to see the big name too.

Also, the reason I'd never seen a presentation on OHS before is probably that not a lot of people use it or need to know about it. If you see a session on the schedule about a piece of technology you never use, the likelihood of attending it, instead of a session on something you need to know about and use on a day-to-day basis, is low.

So, why am I writing this post and telling my sob story?

The real value out of this whole process was the preparation. Even though the time spent researching, writing and building a slide deck seemed to have been wasted (because of the extremely small turnout), the reward wasn't in hearing the applause from a full room. The reward came from what I learned during the process. I now know a whole lot more about distributed transaction processing in Oracle, how it works over a Generic Gateway Agent as opposed to a target-specific Gateway Agent, and this whole piece of the technology in general.

The real reward was, pure and simply, the knowledge gained.

Does this mean that I'll never put forth another abstract in the future? No. If I can think of a topic with a more general scope that I am somewhat knowledgeable and passionate about then I'll probably submit again.

The real lesson learned here is not to be afraid to put yourself out there and speak about a topic that you feel others could learn something from. Even if your session is poorly attended, you'll accomplish two pretty important things. You'll learn something along the way, and you'll get your name out there.

Who knows, in the years to come, someone's online presence and visibility on the technical conference circuit could have a huge impact on finding you your dream job. (Brent Ozar PLF recently hired their first employee, and a large part of their decision-making process was just that: see their blog entry about their hiring decision.)

So, when it comes to deciding about trying to speak at a technical conference, as those Nike people used to say: "Just Do It!"

By the way, if you're someone who deals with Oracle and want to read the slides / paper, drop me an email (address in the About Me section of the blog) and I'll send the documents your way :)

Wednesday, March 28, 2012

Architectural notes on SQL Server vs. Oracle


"I am not left handed!" - Inigo Montoya, "The Princess Bride"
One of the first things to cover under this heading is the concept of a schema, which differs distinctly between SQL Server and Oracle.

The Oracle 11g Database Concepts manual defines a schema as follows:

"A database schema is a logical container for data structures, called schema objects. Examples of schema objects are tables and indexes. Schema objects are created and manipulated with SQL.

A database user has a password and various database privileges. Each user owns a single schema, which has the same name as the user. The schema contains the data for the user owning the schema. For example, the hr user owns the hr schema, which contains schema objects such as the employees table. In a production database, the schema owner usually represents a database application rather than a person.

Within a schema, each schema object of a particular type has a unique name. For example, hr.employees refers to the table employees in the hr schema. Figure 2-1 depicts a schema owner named hr and schema objects within the hr schema."

A schema does not necessarily need to contain tables. It contains schema objects, be they triggers, synonyms, views or whatever. Note that the schema has the same name as the user that owns it.
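As a minimal sketch of that one-to-one pairing (the user name, password and tablespace here are hypothetical):

```sql
-- Creating the user implicitly creates the schema of the same name
CREATE USER hr IDENTIFIED BY some_password;
GRANT CREATE SESSION, CREATE TABLE TO hr;
ALTER USER hr QUOTA UNLIMITED ON users;

-- A table created by hr lives in the hr schema, referenced as hr.employees
CREATE TABLE hr.employees (
    employee_id NUMBER PRIMARY KEY,
    last_name   VARCHAR2(30)
);
```

Dropping the user with the CASCADE option removes the schema and every object in it.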

In SQL Server things are a little different. SQL Server Books Online explains how, from SQL Server 2005 onward:

"The behavior of schemas changed in SQL Server 2005. Schemas are no longer equivalent to database users; each schema is now a distinct namespace that exists independently of the database user who created it. In other words, a schema is simply a container of objects. A schema can be owned by any user, and its ownership is transferable."

The AdventureWorks database (the demo database that comes with SQL Server) shows this. Inside the one database there is a schema called Sales containing the sales tables, another called Production for product data, and one called HumanResources for employee data. One user can be granted access to all of these tables (much like in Oracle or any other RDBMS), but they are each contained in their own schema. The schema is just a corral to hold these database objects.

The next piece of the architecture to discuss is how the physical data is stored. In Oracle, the database block size is determined at database creation and can be 2K, 4K, 8K, 16K or 32K (although each operating system may have a narrower range that restricts that). In SQL Server, the database block, called a page, is always 8K (although there are exceptions for data such as varchar(max), varbinary(max), xml and image values, which can be stored off-row). Eight contiguous 8K pages form an extent, which is the basic unit in which space is allocated. Extent allocation in Oracle is handled differently depending on whether the tablespace is defined with uniform extent sizes (each allocation is the same, based on a value set by the DBA at tablespace creation) or is left system-managed (where Oracle decides how large the next extent needs to be based on internal algorithms).

Both Oracle and SQL Server store data in datafiles. In Oracle, one or more datafiles are assigned to a tablespace: when a tablespace is created, at least one datafile is defined for it, and later that datafile can be expanded, or more datafiles can be added to the tablespace to increase the overall space.

In SQL Server, the filegroup is the concept that most closely matches the tablespace. A filegroup is a logical construct containing datafiles that can be used to spread data out over multiple physical disk devices. When a database is created, a PRIMARY filegroup is mandatory. Other user-defined filegroups can also be created, containing data files on various physical devices. Unless otherwise specified, the PRIMARY filegroup is the default, and new objects created without the FILEGROUP keyword are assigned to the default filegroup.
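A sketch of what that looks like in DDL; the database name, file paths and sizes are all made up for illustration:

```sql
-- Database with the mandatory PRIMARY filegroup plus one user-defined filegroup
CREATE DATABASE SalesDB
ON PRIMARY
    (NAME = SalesDB_data, FILENAME = 'D:\data\SalesDB.mdf',      SIZE = 100MB),
FILEGROUP FG_HISTORY
    (NAME = SalesDB_hist, FILENAME = 'E:\data\SalesDB_hist.ndf', SIZE = 100MB)
LOG ON
    (NAME = SalesDB_log,  FILENAME = 'F:\log\SalesDB_log.ldf',   SIZE = 50MB);

-- Place a table on the user-defined filegroup explicitly
CREATE TABLE dbo.SalesHistory (SaleID int NOT NULL) ON FG_HISTORY;
```

Without the ON clause, the table would land in whichever filegroup is marked as the default (PRIMARY, unless changed).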

Once again, SQL Server Books Online has an excellent reference on the subject, which I won't replicate here.

The final thing I want to mention in this post is the difference between the transaction logging models.

Every Oracle database is defined with a redo log. The redo log consists of two or more groups, each containing one or more online redo log files (members). It is common practice to multiplex the redo log files to protect against media failure. As changes are made to data, logging information is written to the currently active redo log group. When the current group fills, a log switch occurs and log information is written to the next group. This cycle continues through the groups as redo is generated. If ARCHIVELOG mode is enabled, after a log group fills a copy of the log is written to the log archive destination. These sequentially numbered archive logs are used for point-in-time recovery in the event of a failure.

The SQL Server transaction log, on the other hand, is one or more log files per database that are written to continually in a wrap-around fashion. In a database using the full recovery model, the transaction log will continue to grow unless a transaction log backup is done; once the backup is done, the log space is free to be written over again from the beginning. In the simple recovery model, changes are still logged to the transaction log, but once a transaction completes, its space in the transaction log is able to be overwritten, so a chain of changes cannot be maintained (point-in-time recovery is impossible in this mode).

Oracle has two options for logging: ARCHIVELOG mode and NOARCHIVELOG mode. Transactions are still logged to the online redo logs, but the difference here is whether or not the redo logs are archived out to disk when full. The Oracle Documentation discusses this in more detail.

SQL Server has three recovery models: full, bulk-logged and simple. Full and simple correspond roughly to ARCHIVELOG and NOARCHIVELOG mode respectively. SQL Server Books Online covers these in more detail.
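Either setting is easy to verify. A quick sketch of checking each product:

```sql
-- Oracle: is the database in ARCHIVELOG or NOARCHIVELOG mode?
SELECT log_mode FROM v$database;

-- SQL Server: which recovery model is each database using?
SELECT name, recovery_model_desc FROM sys.databases;
```

Changing the mode is a bounce-and-ALTER operation in Oracle, and a simple ALTER DATABASE ... SET RECOVERY statement in SQL Server.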

SQL Server and Oracle are two different Relational Database Management Systems, and hopefully these posts will help explain some of the differences between the two for those people who, like me, are regularly faced with using both.

Monday, March 19, 2012

Quick and Dirty Data Extract


"You've been mostly dead all day!" - Fezzik, The Princess Bride
I was recently tasked with extracting data from a couple of dozen tables, tilde-delimited, with column headings.

So, I spent a little time coming up with this:


set nocount on;

-- Generate the header row: the quoted column names, tilde separated
select 'select ';
select ''''+a.name+''''+'+''~''+'
from syscolumns a, sysobjects b
where a.id = b.id
and b.name = 'EXTRACT_TABLE'
order by a.colid;

-- Generate the select list, casting by type and replacing NULLs:
--   xtype 106 = decimal, 108 = numeric, 61 = datetime (formatted YYYYMMDD)
select 'select ';
select case when a.xtype = 106 then 'isnull(cast('+a.name+' as varchar), '' '')'
            when a.xtype = 108 then 'isnull(cast('+a.name+' as varchar), '' '')'
            when a.xtype =  61 then 'isnull(convert(varchar(8),'+a.name+',112), '' '')'
            else 'isnull('+a.name+','' '')' END+'+''~''+'
from syscolumns a, sysobjects b
where a.id = b.id
and b.name = 'EXTRACT_TABLE'
order by a.colid;

select 'from extract_table';

The output from this query needs a little tidying up: the last column of each generated select statement needs the +'~'+ removed (for obvious syntactical reasons), and the spacing of the output can always be prettied up a little, but it got the job done.

This was from a SQL Server 2000 system and would need to be ever so slightly modified to use on 2005/2008/2012.
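As a sketch of part of that modification, on 2005 and later the catalog views sys.columns and sys.objects replace syscolumns and sysobjects (the table name below is the same placeholder used above):

```sql
-- SQL Server 2005+ version of the header-row generator, using catalog views
SELECT '''' + c.name + '''' + '+''~''+'
FROM sys.columns c
JOIN sys.objects o ON o.object_id = c.object_id
WHERE o.name = 'EXTRACT_TABLE'
ORDER BY c.column_id;
```

The type-dependent CASE in the second half would change similarly, testing c.system_type_id (or TYPE_NAME(c.user_type_id)) instead of xtype.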

Like I said in the subject line, quick and dirty (but it got the job done) :)

Tuesday, March 13, 2012

T-SQL Tuesday #028 - Jack of All Trades, Master of None


"Certainty of death ... small chance of success ... what are we waiting for ?" - Gimli, Return of the King
The second Tuesday of the month invariably means that it's time for T-SQL Tuesday again. The last time I participated was January, when I promised all sorts of resolutions like attending my local user group meetings (0/3 so far this year), updating my blog at least once a week (6 weeks since the last post) and reading more books this year than I usually do (doing OK on the fiction front; need to catch up on the tech books, though). So, this month I decided to jump back into the pool and at least post something for T-SQL Tuesday.

Argenis Fernandez(blog|twitter) is the Maitre d' of our blogging hotel this month and he's asked us to talk about what we specialize in, if indeed that's anything.

Jack of All Trades, he posted as the title of this month's event. I've been working in Information Technology for over 25 years. Looking back over my career, the title "Jack of All Trades" probably fits me very well.

I started as a mainframe COBOL programmer back in the mid-80's and did that for several years. In 1991 I left my government job and joined the development staff of an Australian ERP Software company. Today, a little over 21 years later, I'm still working with their software, but my role has changed significantly.

It was with this company, after transferring to the USA, that I started getting exposed to databases. Our machines were hosted out of Australia, our DBA support was Australian based (which meant if we had problems we got support starting around 3PM), and so I started teaching myself Oracle. After several years of being the "go to" guy for "in a pinch" support earlier than 3PM, my boss gave me the chance that changed my career direction. He was going to let me transfer and become an Oracle Database Administrator.

That was 12 years ago and I've loved working with them ever since.

However, owing to the niche environment I work in (I'm now working for a small consulting firm that supports this ERP product), I've found myself in an interesting place.

Because of the knowledge I have of the ERP product and the environment it runs in, I'm very much a generalist in what I do. Four years ago, I was forced to learn SQL Server. I'd pushed back against that for years but finally had to succumb to the inevitable.

The software package tends not to use a lot of the advanced features of the databases it runs against. The data isn't necessarily considered up to the second critical (such as for banking transactions) and in the event of problems, some down time can be tolerated.

For Oracle, for example, Real Application Clustering (RAC) isn't used in this environment. So I don't have those skills (and the few times I've looked at the job market, just to see what's out there, everyone seems to want it).

For SQL Server, I've never had to deal with clustering, or log shipping or replication.

Because of the specific requirements of my job at the moment, I'm very much a generalist (I do Oracle, I do SQL Server, I do Unix Shell and Windows CMD scripting, I still fall back to my COBOL roots as needed from time to time too) and for me, at this point in my career that's not a bad thing.

Does this mean I don't want to learn these other, more specialized aspects of the various RDBMS systems I work with? No. However, when you have to balance available time against what's needed for day-to-day operations, sometimes you don't always get what you want.

So for now, I am perfectly happy in what I'm doing. I love my job. The "restrictions" on not being a specialist in any particular field have not held me back any at the moment. Having such a broad, generalized knowledge base, when working in a very niche field has actually been very helpful.

It could actually be said, come to think of it, that I'm a "specialist" in the technical area around this particular product I've been working with for the last two decades, while being a generalist in the underlying technology (the databases, the operating systems etc).

One of the things I have found in working across so many diverse areas is a need to keep some good notes about the things you learn (so that 2 years from now when a problem pops up again, you have a reference), and that there's always something new to learn!

The bottom line is that for me, generalizing across many areas has worked very well. Reading some of the other posts, I've seen that a lot of people tend to end up in one area and that doesn't surprise me. For me, being a Jack of All Trades just hasn't been a bad thing :)

Tuesday, January 17, 2012

Oracle and SQL Server: Compare and Contrast (Database vs Instance)


"Sacrificing minions: Is there any problem it can't solve?" -Xykon, Order of the Stick webcomic
As it states in my profile, I work in both Oracle and SQL Server on a daily basis. When I was first introduced to SQL Server back in 2008, I had a great deal of trouble wrapping my mind around some of the concepts, since some of them were so strange and counter intuitive to what I was used to.

So, as an exercise in both self-review and in an effort to put something like this 'out there', I thought I might start a series of postings comparing and contrasting Oracle to SQL Server.

This could be looked at as an "Oracle for SQL Server DBA's", or alternatively, "SQL Server for Oracle DBA's", depending upon who's reading it and I'll try to cover both sides of the story.

To get the ball rolling, there are two concepts that form the core of a relational database management system. The terms are used often and just as often are probably used incorrectly. Database and Instance.

What is a database and what is an instance?

Not surprisingly, the definitions vary depending on which system you're working with.

Let's start with Oracle. The Oracle Documentation (Oracle Database Concepts - Oracle 11g Release 2 - E25789-01) defines an instance as "An instance is a set of memory structures that manage database files. The instance consists of a shared memory area, called the system global area (SGA), and a set of background processes. An instance can exist independently of database files."

So, from an Oracle perspective, the instance is the set of memory structures and background processes that manage the files that make up the database.

The database is defined, from the same document, as "A database is a set of files, located on disk, that store data. These files can exist independently of a database instance."

The two are closely related, and the term "Oracle database" is often used to refer to both under the same umbrella, but in the purest sense, they are separate and distinct.

There are three different types of files that make up a database.

  • Data Files - these contain, funnily enough, your data, as well as the corresponding indexes and any other database objects (procedures and functions, triggers etc).
  • Control Files - these contain the metadata about the physical structure of the database - names and locations of data files and the name of the database, among other things.
  • Redo Log Files - these are a set of two or more files that record transactional changes made to the data in the database. Log writing rotates through the files in the redo log group as they fill; in ARCHIVELOG mode, each filled log is copied out to an archive log file before it can be reused.
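Each of these file types can be listed from the data dictionary and dynamic performance views, for example:

```sql
SELECT file_name FROM dba_data_files;   -- data files
SELECT name      FROM v$controlfile;    -- control files
SELECT member    FROM v$logfile;        -- online redo log files
```

These are handy sanity checks when you inherit a database and want to see where everything physically lives.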

Also of note is that each instance (set of memory processes) supports one and only one database (at the basic level. Real Application Clusters - RAC - can change that, where you have multiple instances accessing a single set of files, but that is outside of the scope of what I'm talking about here).

The figure below is a representation of what makes up an Oracle instance and database (also from the same Oracle document). It also shows the assorted processes and everything else associated with the database, which is not what we're talking about here. For now, just look at the groupings indicated as Instance (memory) and Database (files).

It is important to note that you can create as many instances as you like from a single set of binaries installed on a server.

SQL Server considers an instance to be the installation of the binaries on a server. Your first install on a server is usually set up as the default instance (but it doesn't have to be). If you install the software a second or third time, these become separate named instances. Any instance, including the first, can be a named instance if desired.

Each installed instance has its own set of memory processes when started.

The key difference between the two is that Oracle can spawn multiple instances (sets of memory processes) from a single Oracle software install. SQL Server has a one-to-one relationship between memory processes and SQL Server software installations.

Buck Woody (blog|twitter) wrote an article talking about SQL Server instances in much more detail than I plan on talking about here, if you're interested in the finer points.

What about SQL Server databases? A SQL Server database is defined as one or more data files and a log file. In SQL Server, one instance can support multiple databases.

Another difference to be noted here is that while Oracle rotates online redo logs through a group in a loop and writes old changes out to archive log files, the SQL Server transaction log will continue to grow and grow and grow unless the database is running in the simple recovery model or a transaction log backup is taken, at which point the log space is freed and writing starts again from the beginning of the file.
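In the full recovery model, it is the regular log backups that keep the transaction log from growing without bound. A sketch, with a made-up database name and path:

```sql
-- Back up the transaction log, freeing its space for reuse
BACKUP LOG SalesDB TO DISK = 'G:\backup\SalesDB_log.trn';
```

These log backups also form the chain that makes point-in-time recovery possible, much as Oracle's archive logs do.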

Upon installation, SQL Server contains several system databases that contain objects shared by all other databases in the instance. From Books Online, the SQL Server system databases are:

  • master - Contains all the instance wide metadata of the instance - logins, endpoints, linked servers, system configuration settings.
  • model - Used as a template for creating all databases in the instance.
  • msdb - Used by SQL Server Agent for scheduling jobs. Also used by Database Mail.
  • tempdb - Used for temporary user objects for all databases. Also is the row version store for some isolation levels.
  • Resource Database - a read only database that contains all of the system objects in SQL Server.

I did find an interesting article on SQL Server Central that covered some of what I've written about here and has a summary at the end mapping the system databases of SQL Server to appropriate Oracle entities.

A similarity between both systems is that it is possible to run multiple applications out of a single database, where they could be separated by different schemas, but I haven't seen that done a whole lot. Separating applications into different databases does seem to make maintenance tasks somewhat easier.

Next Time: Some of the Architectural differences between Oracle and SQL Server.

Monday, January 16, 2012

CASE in point, redux


"My friends, you bow to no one" - Aragorn, to the hobbits, Return of the King
In a recent post made for my reference, I posted the syntax for the SQL Server CASE statement, because I always forget some of the finer points of less-used syntax.

Well, it just so happens that I also found myself recently needing the syntax for a CASE statement used inside an update.

In keeping with the other posting, this seems a mighty fine place to record such a thing.


update mytable
set col1 =
  (CASE
     WHEN
        (condition)
     THEN
        (value)
     ELSE
        (other value)
     END
  )
where column = value;
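A concrete sketch of the pattern, using a hypothetical orders table:

```sql
-- Flag this year's large orders as high priority (table and columns are made up)
UPDATE orders
SET priority =
  (CASE
     WHEN order_total > 1000
     THEN 'HIGH'
     ELSE 'NORMAL'
   END)
WHERE order_date >= '2012-01-01';
```

The CASE expression is evaluated once per row that survives the WHERE clause, so every matching row gets one of the two values.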

Another quickie, mainly for my enjoyment at some stage down the road!

Tuesday, January 10, 2012

T-SQL Tuesday #14, er #26: Second Chances


"Whoo-hoo-hoo, look who knows so much. It just so happens that your friend here is only MOSTLY dead. There's a big difference between mostly dead and all dead. Mostly dead is slightly alive." - Miracle Max, The Princess Bride
A new year is upon us and that inevitably means that there must be a second Tuesday of the month again. Which means it's T-SQL Tuesday again.

David Howard (blog|twitter) is acting as the Master of Ceremonies for the start of the 2012 volume of T-SQL Tuesday and his topic is "Second Chances", much like Westley got in the scene from "The Princess Bride" mentioned at the top. The theme this month is to revisit any of the previous 25 topics and write a little about one of them - without recycling your own or someone else's previous musings.

Before I begin, let me tell you a little story. Once upon a date/time, a young DBA was faced with a Puzzling Situation. He had been having a rough time with his Relationship with his I/O subsystem. For all the Reporting he'd been doing trying to work out what was wrong, he hadn't had much luck. He felt like such a BLOB. He didn't really care though, because it was Summertime, and after picking up his kids from Getting Schooled, he was off for some good old fashioned Beach Time. He had all of his vacation plans Indexed and organized and even had the Misconception that his DBA Skills wouldn't be needed while he was away. However, the Business Wants what the Business thinks it wants and apart from these Resolutions, he was soon paged by his Automated Systems, and had to get on the phone and Aggregate and APPLY his knowledge to help fix a problem with a misbehaving CTE. Thankfully there weren't any Disasters while he was away because of the Best Practices he employed - making sure that there was not any Crap Code in the system really helped. His monitoring data was nicely Presented and Joined with some clever Procedures and Functions he was able to Trick his way to a good vacation.

OK, it's a convoluted and long-winded story, but hidden in the links above you'll find all 25 of the previous topics embedded for your reference :) Most of them link directly to the roundup / summary, but there are one or two where I just couldn't find the darn thing.

As you might have guessed from the topic, I was thinking I might revisit #14 (Resolutions). Being that it's the start of a new year, visiting the list of "Technical Resolutions" is probably not a bad way to get the show on the road. So, what are my technical resolutions for 2012?

  • Attend PASS Summit 2012 - My first ever PASS Summit was 2011 and I came away with such a mental overload that I've probably forgotten half of what I learned. Which leads nicely into ...
  • Review the PASS Summit DVD's - I have all eight of them on my desk (they arrived last Tuesday as I was writing the draft of this post). I need to pay particular attention to the sessions I was in and go over all of it again - and then go over the sessions I did NOT attend.
  • Try to be more active on this blog, making at least one post a week.
  • Upgrade my Oracle credentials. (It's T-SQL post, but this is definitely on my list of things to do this year). Currently sitting certified for 9i, should really try to upgrade to at least 10g, if not 11g.
  • I really need to try to get to the local user group meetings this year. I made plans to go about 4 times in 2011, and every time work got in the way. I need to try to tell myself that unless it's a real Priority One issue, that it can wait 3 hours while I drive over and attend a 2 hour user group meeting! (already blown this one for January, thought it was *next* week, turns out it was *yesterday*... Eleven out of Twelve, maybe?)
  • Learn some Powershell. I've heard so many people evangelizing this tool, that I really need to suck it up and at least learn the basics!
  • Read some, if not all of the tech books I've bought. I have four sitting on my desk, with ebook versions of three of them as well. Time to put my Kindle Fire that Santa brought me to use and get them read! (This doesn't include the shelf full of other books that I've bought and pick through as I need the relevant parts of them)

What remains to be seen is, this time next year, how many of these have check marks next to them and how many have big red crosses.