Sunday, December 25, 2011

FLASHBACK DATA ARCHIVE


To start with, this feature is not available in Oracle Standard, Personal, or Express Editions.
Since Oracle 10g, the flashback features have depended mainly on undo segments (except Flashback Database, which uses flashback logs). This limits how far back we can retrieve data, since it depends on the undo data available in the undo tablespace. To overcome this, Oracle has provided a new feature: FLASHBACK DATA ARCHIVE.
It is a database object (it may seem odd to call it an object, but it is one!) that holds historical data for one or more tables. An additional advantage of Flashback Data Archive is that it also has a space quota and a retention/purging policy.
For Flashback Data Archive, Oracle has a background process, FBDA, to deal with all the related archiving work.
On the OS it appears as something like ora_fbda_padwsdpr, where PADWSDPR is the name of the instance to which the process belongs.
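You can confirm the process exists by querying V$BGPROCESS, or simply from the OS (a quick check; the instance name will of course differ on your system):

SQL> select name, description from v$bgprocess where name = 'FBDA';

$ ps -ef | grep fbda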
Let's see how to create and use a flashback data archive.
To work with flashback data archives, a user must have the FLASHBACK ARCHIVE ADMINISTER privilege.
SQL> select * from v$version;

BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
PL/SQL Release 11.2.0.2.0 - Production
CORE    11.2.0.2.0      Production
TNS for Linux: Version 11.2.0.2.0 - Production
NLSRTL Version 11.2.0.2.0 - Production

SQL> select * from dba_sys_privs where privilege like '%FLASH%';

GRANTEE                        PRIVILEGE                                ADM
------------------------------ ---------------------------------------- ---
SYS                            FLASHBACK ANY TABLE                      NO
DBA                            FLASHBACK ANY TABLE                      YES
SYS                            FLASHBACK ARCHIVE ADMINISTER             NO
DBA                            FLASHBACK ARCHIVE ADMINISTER             YES
MDSYS                          FLASHBACK ANY TABLE                      NO

SQL> grant flashback archive administer to Mohit_flash;

Grant succeeded.

Two main parameters used with the CREATE FLASHBACK ARCHIVE statement are RETENTION and QUOTA. RETENTION sets the maximum time for which the flashback data is kept, and QUOTA limits the space the archive may use in the specified tablespace.

Once the quota is reached, Oracle issues an out-of-space alert.
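If you want to see such alerts, one place to check is the DBA_OUTSTANDING_ALERTS view (a sketch; the exact alert text varies by version):

SQL> select object_name, reason from dba_outstanding_alerts;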

SQL> create flashback archive flash2
  2  tablespace users quota 1024M retention 5 year;

Flashback archive created.

Other options available when altering a flashback archive are:

alter flashback archive flash2 set default;

alter flashback archive flash2 add tablespace user2;

alter flashback archive flash2 modify tablespace user2;

(SET DEFAULT can only be issued with SYSDBA privileges.)
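A few more variations are worth knowing (a sketch reusing the names above; the retention and quota values are arbitrary):

alter flashback archive flash2 modify retention 3 year;

alter flashback archive flash2 modify tablespace user2 quota 2048M;

alter flashback archive flash2 remove tablespace user2;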

Now let's see how to use a flashback archive with tables.

SQL> create table test11 ( name varchar2(30), address varchar2(50))
  2  flashback archive flash2;

Table created.

An already created table can be altered this way:

SQL> alter table test12 flashback archive flash2;

Table altered.
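If a default flashback archive has been set (see SET DEFAULT above), the archive name can be omitted and the table is assigned to the default archive (a sketch; test13 is a hypothetical table):

SQL> alter table test13 flashback archive;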

Checking which tables are enabled for flashback archiving:

SQL> select * from dba_flashback_archive_tables;

TABLE_NAME                OWNER_NAME                FLASHBACK_ARCHIVE_NAME    ARCHIVE_TABLE_NAME        STATUS
------------------------- ------------------------- ------------------------- ------------------------- -------------------------
TEST11                    DBCHECK                   FLASH2                    SYS_FBA_HIST_366239       ENABLED

SQL> select flashback_archive_name,retention_in_days from dba_flashback_archive;

FLASHBACK_ARCHIVE_NAME    RETENTION_IN_DAYS
------------------------- -----------------
FLASH2                                 1825

SQL>

Now that the table is created with a flashback archive, let's try to use it. Here I will update the table and then try to read the before image of the updated column from the flashback data.

SQL> select * from test11;

NAME                           ADDRESS
------------------------------ --------------------------------------------------
MOHIT                            A
MOHIT                            A
MOHIT                            A
MOHIT                            A
MOHIT                            A

SQL>
SQL> !date
Wed Dec 21 05:14:17 EST 2011

SQL> !date
Wed Dec 21 05:15:35 EST 2011

SQL> update test11 set address='B' where name='MOHIT';

5 rows updated.

SQL> commit;

Commit complete.

SQL> select * from test11;

NAME                           ADDRESS
------------------------------ --------------------------------------------------
MOHIT                            B
MOHIT                            B
MOHIT                            B
MOHIT                            B
MOHIT                            B

Now the before image should be available in the flashback data, so let's check:

SQL> select * from SYS_FBA_HIST_366239;

no rows selected

SQL> /

no rows selected.

It should be available but unfortunately it is not…!! Here is why: the background process FBDA wakes up at a system-determined interval (the default is 5 minutes) and only then copies the corresponding undo data into the archive. So don't be surprised if changes are not reflected immediately.

Let’s try again:

SQL> !date
Wed Dec 21 05:45:59 EST 2011

SQL> select NAME,ADDRESS from SYS_FBA_HIST_366239;

NAME                           ADDRESS
------------------------------ --------------------------------------------------
MOHIT                            A
MOHIT                            A
MOHIT                            A
MOHIT                            A
MOHIT                            A

SQL> select * from test11;

NAME                           ADDRESS
------------------------------ --------------------------------------------------
MOHIT                            B
MOHIT                            B
MOHIT                            B
MOHIT                            B
MOHIT                            B

Now the data is available in the flashback archive. One more point to add: we cannot modify the flashback data; it allows read-only access only.
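Also note that querying the internal SYS_FBA_HIST_% table directly is only for illustration here; the supported way to read archived data is an ordinary flashback query (a sketch reusing the timestamp captured above):

SQL> select name, address from test11
  2  as of timestamp to_timestamp('21-DEC-2011 05:15:35', 'DD-MON-YYYY HH24:MI:SS');

This returns the pre-update rows with ADDRESS = 'A'; Oracle reads from undo or from the archive transparently, as needed.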

Now, if needed, we can also purge the flashback data.

SQL> alter flashback archive FLASH2 purge all;

Flashback archive altered.

Other options are also available for purging; some of them are:

alter flashback archive flash1 purge before timestamp (systimestamp - interval '2' day);

alter flashback archive flash1 purge before scn 123456; 

In the next section, we will take a short look at some limitations of this feature.

Tables enabled for the flashback archive feature do not allow the following DDL statements:

- ALTER TABLE statement that does any of the following:
       - Drops, renames, or modifies a column
       - Performs partition or subpartition operations
       - Converts a LONG column to a LOB column
       - Includes an UPGRADE TABLE clause, with or without an INCLUDING DATA clause
- DROP TABLE statement
- RENAME TABLE statement
- TRUNCATE TABLE statement

An attempt to drop a table that is enabled for flashback data archive:

SQL> drop table test11;
drop table test11
           *
ERROR at line 1:
ORA-55610: Invalid DDL statement on history-tracked table

Solution:

SQL> alter table test11 no flashback archive;

Table altered.

SQL> drop table test11;

Table dropped.

Finally, the flashback data archive itself can be dropped:

SQL> drop flashback archive FLASH2;

Flashback archive dropped.

SQL>

Also, please read the Oracle documentation for more information.

Thanks

Thursday, February 10, 2011

Basic differences between Data pump and Export/Import


If you have worked with databases prior to 10g, you are probably familiar with the exp/imp utilities of the Oracle database. Oracle 10g introduces a new feature called Data Pump export and import. Data Pump export/import differs from the original export/import; the differences are listed below.

1)Impdp/Expdp are self-tuning utilities. Tuning parameters that were used in original Export and Import, such as BUFFER and RECORDLENGTH, are neither required nor supported by Data Pump Export and Import.

2)Data Pump represents metadata in the dump file set as XML documents rather than as DDL commands.

3)Impdp/Expdp use parallel execution rather than a single stream of execution, for improved performance.

4)In Data Pump, expdp full=y followed by impdp schemas=prod is the same as expdp schemas=prod followed by impdp full=y, whereas the original export/import does not always exhibit this behavior.

5)Expdp/Impdp access files on the server rather than on the client (see the sketch after this list).

6)Expdp/Impdp operate on a group of files called a dump file set rather than on a single sequential dump file.

7)Sequential media, such as tapes and pipes, are not supported in Oracle Data Pump, whereas with the original export/import we could compress the dump directly by using pipes.

8)The Data Pump method for moving data between different database versions is different than the method used by original Export/Import.

9)When you are importing data into an existing table using either APPEND or TRUNCATE, if any row violates an active constraint, the load is discontinued and no data is loaded. This is different from original Import, which logs any rows that are in violation and continues with the load.

10)Expdp/Impdp consume more undo tablespace than original Export and Import.

11)If a table has compression enabled, Data Pump Import attempts to compress the data being loaded. The original Import utility, on the other hand, loaded data in such a way that even if a table had compression enabled, the data was not compressed upon import.

12)Data Pump supports character set conversion for both direct path and external tables. Most of the restrictions that exist for character set conversions in the original Import utility do not apply to Data Pump. The one case in which character set conversions are not supported under the Data Pump is when using transportable tablespaces.

13)There is no option to merge extents when you re-create tables. In original Import, this was provided by the COMPRESS parameter. Instead, extents are reallocated according to storage parameters for the target table.
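Regarding point 5: since Data Pump reads and writes files on the server, a directory object is required. A minimal sketch (the path, schema name, and password are hypothetical):

SQL> create directory dump_dir as '/u01/app/oracle/dumps';
SQL> grant read, write on directory dump_dir to prod;

$ expdp prod/prod_pwd directory=dump_dir dumpfile=prod.dmp logfile=prod_exp.log schemas=prod
$ impdp prod/prod_pwd directory=dump_dir dumpfile=prod.dmp logfile=prod_imp.log full=y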

Differences between Data Pump impdp and import utility

The original import utility dates back to the earliest releases of Oracle, and it's quite slow and primitive compared to Data Pump.  While the old import (imp) and Data Pump import (impdp) do the same thing, they are completely different utilities, with different syntax and characteristics. 
Here are the major syntax differences between import and Data Pump impdp:
  • Data Pump does not use the BUFFER parameter
  • Data Pump export represents the data in XML format
  • A Data Pump schema import will recreate the user and apply all of the associated security privileges (grants, user password history).
  • Data Pump's parallel processing feature is dynamic. You can attach to a Data Pump job that is currently running and dynamically alter the number of parallel processes (see the sketch below).
  • Data Pump will recreate the user, whereas the old imp utility required the DBA to create the user ID before importing.
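As an example of the dynamic parallel feature mentioned above, you can attach to a running job and change its degree of parallelism from the interactive prompt (a sketch; the job name is hypothetical and can be looked up in DBA_DATAPUMP_JOBS):

$ expdp system/manager attach=SYS_EXPORT_SCHEMA_01

Export> parallel=4
Export> status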