Tip
2010.04.06 10:39

How to Use LogMiner


(V9I) ORACLE9I NEW FEATURE : LOGMINER FEATURES AND USAGE
=========================================================

Purpose
-------
This note covers the new features of the Oracle9i LogMiner and how to use them.
(For the general concepts and features of the 8i LogMiner, see BUL. 12033.)


Explanation
-----------
LogMiner, first introduced in 8i, is used to analyze the redo log files or
archive log files of Oracle 8 and later.
9i includes everything 8i provided. In addition, where 8i could only write
the dictionary information needed for LogMiner analysis to a flat file
(output), from 9i onward the dictionary can be stored in the online redo
logs as well as in a flat file, and a new option lets LogMiner skip just the
affected portion when block corruption is encountered.

The 9i new features are summarized below.

1. 9i New Features

1) DDL support. Note: only redo/archive log files generated by 9i or later
   can be analyzed for DDL.
 : DDL statements show up in the OPERATION column of V$LOGMNR_CONTENTS.

2) The dictionary built for LogMiner analysis can be stored in the online redo logs.
 : The database must be running in archivelog mode.
 : Use DBMS_LOGMNR_D.BUILD to create the dictionary.
 : It can be written either to a flat file (as before) or to the redo logs.
 : e.g.) Flat file
  - SQL> EXECUTE dbms_logmnr_d.build
   (DICTIONARY_FILENAME => 'dictionary.ora'
   ,DICTIONARY_LOCATION => '/oracle/database'
   ,OPTIONS => DBMS_LOGMNR_D.STORE_IN_FLAT_FILE);
 e.g.) Redo log
  - SQL> EXECUTE dbms_logmnr_d.build
   (OPTIONS => DBMS_LOGMNR_D.STORE_IN_REDO_LOGS);

3) When redo log block corruption is encountered, the corrupt portion can be
   skipped and the analysis continues.
 : In 8i, log corruption terminated LogMiner and the analysis had to be restarted.
 : In 9i, the SKIP_CORRUPTION option of DBMS_LOGMNR.START_LOGMNR skips it
   (see the combined sketch after this list).

4) Only committed transactions can be displayed.
 : COMMITTED_DATA_ONLY option of DBMS_LOGMNR.START_LOGMNR
   (see the combined sketch after this list).

5) DML against index clusters is supported (not available in 8i).

6) Chained and migrated rows are analyzed.
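
As mentioned in 3) and 4), the two START_LOGMNR options can be combined in a
single call. A minimal sketch (the dictionary path is illustrative, not from
the original note):

  - SQL> EXECUTE dbms_logmnr.start_logmnr
     (DICTFILENAME => '/oracle/database/dictionary.ora'
     ,OPTIONS => DBMS_LOGMNR.SKIP_CORRUPTION + DBMS_LOGMNR.COMMITTED_DATA_ONLY);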


2. Restrictions (not supported by the 9i LogMiner)

1) LONG and LOB data types
2) Object types
3) Nested tables
4) Object REFs
5) IOTs (index-organized tables)


3. LogMiner Views

1) V$LOGMNR_CONTENTS - contents of the redo log files currently being analyzed
2) V$LOGMNR_DICTIONARY - the dictionary file in use
3) V$LOGMNR_LOGS - the redo log files being analyzed
4) V$LOGMNR_PARAMETERS - the parameter values currently set for LogMiner
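
For example, after log files have been registered, a quick check of what
LogMiner will read (a sketch; LOG_ID and FILENAME are documented columns):

SQL> select log_id, filename from v$logmnr_logs;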


4. Setup for Using LogMiner

1) Create the dictionary for LogMiner (flat file or online redo log)
2) Register the archive log files or redo log files
3) Start the redo log analysis
4) Query the redo log contents
5) End LogMiner


5. LogMiner Example

1) Check the location where the flat file will be created

SQL> show parameter utl

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
utl_file_dir                         string      /home/ora920/product/9.2.0/smlee     
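
If utl_file_dir is not set, note that it is a static parameter: it has to go
into the init.ora (or the spfile) and the instance must be restarted. A
sketch, assuming an spfile is in use:

SQL> alter system set utl_file_dir='/home/ora920/product/9.2.0/smlee' scope=spfile;
SQL> shutdown immediate
SQL> startup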


2) Define the flat file that will hold the dictionary information -> here named dictionary.ora

SQL> execute dbms_logmnr_d.build -
> (dictionary_filename => 'dictionary.ora', -
> dictionary_location => '/home/ora920/product/9.2.0/smlee', -
> options => dbms_logmnr_d.store_in_flat_file);

PL/SQL procedure successfully completed.
 
Or, skip the utl_file_dir setup entirely and store the dictionary in the
redo logs:

 - SQL> EXECUTE dbms_logmnr_d.build
   (OPTIONS => DBMS_LOGMNR_D.STORE_IN_REDO_LOGS);
   

3) Switch the logfile, and note the current logfile name and the current time.

SQL> alter system switch logfile;

System altered.

SQL> select member from v$logfile, v$log
  2  where v$logfile.group# = v$log.group#
  3  and v$log.status='CURRENT';

MEMBER
--------------------------------------------------------------------------------
/home/ora920/oradata/ORA920/redo02.log   

SQL> select current_timestamp from dual;

CURRENT_TIMESTAMP
---------------------------------------------------------------------------
13-NOV-02 10.37.14.887671 AM +09:00    


4) For the test, create table emp30, then update it and drop it

SQL> create table emp30 as
  2  select employee_id, last_name, salary from hr.employees
  3  where department_id=30;

Table created.   

SQL> alter table emp30 add (new_salary number(8,2));

Table altered.

SQL> update emp30 set new_salary = salary * 1.5;

6 rows updated.

SQL> rollback;

Rollback complete.

SQL> update emp30 set new_salary = salary * 1.2;

6 rows updated.

SQL> commit;

Commit complete.

SQL> drop table emp30;
Table dropped.

SQL>  select current_timestamp from dual;

CURRENT_TIMESTAMP
---------------------------------------------------------------------------
13-NOV-02 10.39.20.390685 AM +09:00          


5) Start LogMiner (do this in a separate session)

SQL> connect /as sysdba
Connected.

SQL> execute dbms_logmnr.add_logfile ( -
> logfilename => -
> '/home/ora920/oradata/ORA920/redo02.log', -
> options => dbms_logmnr.new)

PL/SQL procedure successfully completed.

SQL> execute dbms_logmnr.start_logmnr( -
> dictfilename => '/home/ora920/product/9.2.0/smlee/dictionary.ora', -
> starttime => to_date('13-NOV-02 10:37:44','DD-MON-RR HH24:MI:SS'), -
> endtime => to_date('13-NOV-02 10:39:21','DD-MON-RR HH24:MI:SS'), -
> options => dbms_logmnr.ddl_dict_tracking + dbms_logmnr.committed_data_only)

PL/SQL procedure successfully completed. 


6) Query the v$logmnr_contents view

SQL> select to_char(timestamp,'DD-MON-RR HH24:MI:SS') "TIMESTAMP",
  2  username, operation, sql_redo
  3  from v$logmnr_contents
  4  where username='HR'
  5  and (seg_name = 'EMP30' or seg_name is null);

TIMESTAMP           USERNAME  OPERATION  SQL_REDO
------------------- --------- ---------- ----------------------------------
13-NOV-02 10:38:20  HR        START      set transaction read write;

13-NOV-02 10:38:20  HR        DDL        CREATE TABLE emp30 AS
                                         SELECT EMPLOYEE_ID, LAST_NAME,
                                         SALARY FROM HR.EMPLOYEES
                                         WHERE DEPARTMENT_ID=30;

13-NOV-02 10:38:20  HR        COMMIT     commit;

13-NOV-02 10:38:50  HR        DDL        ALTER TABLE emp30 ADD
                                         (new_salary NUMBER(8,2));

13-NOV-02 10:39:02  HR        UPDATE     update "HR"."EMP30" set
                                         "NEW_SALARY" = '16500' where
                                         "NEW_SALARY" IS NULL and ROWID
                                         = 'AAABnFAAEAALkUAAA';

13-NOV-02 10:39:10  HR        DDL        DROP TABLE emp30;


7) End LogMiner

SQL> execute dbms_logmnr.end_logmnr
 
PL/SQL procedure successfully completed. 


Reference Documents
-------------------
Note. 148616.1

A sample query for comparing SQL_UNDO and SQL_REDO for a given segment:

   col TIMESTAMP format a20
   col USERNAME  format a5
   col JOB_NEW   format a20
   col JOB_OLD   format a20
   col OPERATION format a9
   select to_char(timestamp,'DD-MON-YYYY HH24:MI:SS') "TIMESTAMP",
          username,
          sql_undo JOB_OLD,
          sql_redo JOB_NEW,
          operation
   from v$logmnr_contents
   where SEG_OWNER = 'SCOTT'
   and SEG_NAME = 'EMP'
   order by SCN, TIMESTAMP;

 

 

(Source: unknown. I will take this post down if it causes a problem.)


The note below is the Metalink bulletin.

===================================================================================

Subject:  Oracle9i LogMiner New Features
  Doc ID:  Note:148616.1    Type:  BULLETIN
  Last Revision Date:  12-AUG-2002    Status:  PUBLISHED


PURPOSE
  This note introduces the new LogMiner features/enhancements available in
  Oracle9i.

SCOPE & APPLICATION
  This note assumes the reader has an understanding of the basic functionality
  of LogMiner in Oracle8i.


Oracle9i LogMiner New Features:
===============================

-------------------------------------------------
  New Dictionary Options
-------------------------------------------------

In Oracle8i, there was only one option for the LogMiner dictionary which was
to export the dictionary to a flat file.

In Oracle9i, there are now three options:


A. Export to a flat file
   ---------------------

1. Make sure UTL_FILE_DIR is set in your init.ora file. For example:

     UTL_FILE_DIR=/oracle/logminer

2. Use the DBMS_LOGMNR_D package to build the dictionary. To ensure that the
   dictionary contains a consistent snapshot, ensure there are no DDL
   operations occurring during the build process.

     SQL> execute DBMS_LOGMNR_D.BUILD('dictionary.ora', '/oracle/logminer', -
     >                            options => dbms_logmnr_d.store_in_flat_file);

3. Use the DBMS_LOGMNR.ADD_LOGFILE procedure to create the list of logs to be
   analyzed:
 
     SQL> execute DBMS_LOGMNR.ADD_LOGFILE('/ora/ora901/oradata/V901/redo01.log',-
     >                                    options => dbms_logmnr.new);

     SQL> execute DBMS_LOGMNR.ADD_LOGFILE('/ora/ora901/oradata/V901/redo02.log',-
     >                                    options => dbms_logmnr.addfile);

4. Start the log analysis, specifying the dictionary file you just created:

     SQL> execute DBMS_LOGMNR.START_LOGMNR(dictfilename => -
     >    '/oracle/logminer/dictionary.ora');
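
   Once started, the mined records can be examined through V$LOGMNR_CONTENTS,
   for example:

     SQL> SELECT username, operation, sql_redo
          FROM v$logmnr_contents;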


B. Export to the redo log files
   ----------------------------

To export the LogMiner dictionary to the redo log files, the following
restrictions exist:

   - DBMS_LOGMNR_D.BUILD must be run on an Oracle9i database
   - The database must be in archivelog mode
   - The COMPATIBLE parameter value must be set to 9.0.X
   - The dictionary must belong to the same database as the redo logs to be
     analyzed
   - No DDL is allowed during the dictionary extraction


     SQL> execute DBMS_LOGMNR_D.BUILD(options => -
     >                             dbms_logmnr_d.store_in_redo_logs);

Note that the build process generates extra redo.

1. Use the DBMS_LOGMNR.ADD_LOGFILE procedure to add the logs to be analyzed

2. Start the log analysis:

     SQL> execute DBMS_LOGMNR.START_LOGMNR(options => -
     >                                     dbms_logmnr.dict_from_redo_logs);

LogMiner expects to find a dictionary in the redo log files specified with
DBMS_LOGMNR.ADD_LOGFILE.  Display the V$LOGMNR_LOGS view to list the redo logs
that contain the dictionary.
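
For example (DICTIONARY_BEGIN and DICTIONARY_END are YES/NO columns marking
the logs that hold the extracted dictionary):

     SQL> SELECT filename, dictionary_begin, dictionary_end
          FROM v$logmnr_logs;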


C. Use the online data dictionary
   ------------------------------

To instruct LogMiner to use the database data dictionary, simply provide this
option to the DBMS_LOGMNR.START_LOGMNR procedure after adding the logs to be
analyzed. No dictionary build is done.

     SQL> execute DBMS_LOGMNR.START_LOGMNR(options => -
     >                                   dbms_logmnr.dict_from_online_catalog);

 

-------------------------------------------------
  DDL Tracking
-------------------------------------------------

A. LogMiner automatically records the SQL statement used for a DDL operation
   as such, so that operations like a DROP/ALTER/CREATE table can be easily
   tracked. In Oracle8i, only the internal operations against the data
   dictionary were recorded, which made these operations difficult to track
   (a DROP table results in several DML statements against the data
   dictionary).

B. By specifying the DBMS_LOGMNR.DDL_DICT_TRACKING option when starting
   LogMiner, the LogMiner internal dictionary is updated if a DDL event is
   found in the redo log files. This allows the SQL_REDO in V$LOGMNR_CONTENTS
   to accurately display information for objects that are modified by user DML
   statements after LogMiner dictionary is built.

   This option is not valid with the DICT_FROM_ONLINE_CATALOG option.

   1. Build the dictionary:

      SQL> execute DBMS_LOGMNR_D.BUILD('dictionary.ora','/database/9i/logminer');

   2. Alter the table to add a column :

      SQL> alter table test add(c4 number);

   3. Add the log which contains the ALTER statement:

      SQL> execute DBMS_LOGMNR.ADD_LOGFILE(logfilename => -
      >                                    '/database/9i/arch/1_683.dbf', -
      >                                    options => dbms_logmnr.new);

   4. Start the log analysis:

      => Without the DDL_DICT_TRACKING option:

         SQL> execute DBMS_LOGMNR.START_LOGMNR(dictfilename => -
         >                        '/database/9i/logminer/dictionary.ora');

         V$LOGMNR_CONTENTS.SQL_REDO contains:

         insert into "SCOTT"."TEST"("COL 1","COL 2","COL 3","COL 4") values
         (HEXTORAW('c102'),HEXTORAW('c103'),HEXTORAW('c104'),HEXTORAW('c105'));

     => With the DDL_DICT_TRACKING option:

        SQL> execute DBMS_LOGMNR.START_LOGMNR(dictfilename => -
        >                             '/database/9i/logminer/dictionary.ora', -
        >                             options => dbms_logmnr.ddl_dict_tracking);

        V$LOGMNR_CONTENTS.SQL_REDO contains:

        insert into "SCOTT"."TEST"("C1","C2","C3","C4") values ('1','2','3','4');

   Note: You must be sure that you have included the log which contains the DDL
         statement in the logs to be analyzed with DBMS_LOGMNR.ADD_LOGFILE.

 

-------------------------------------------------
  Skip Past Redo Log Corruption
-------------------------------------------------

Oracle9i LogMiner allows you to skip corruption in the redo log being
analyzed and continue. This does not work if the corrupted block is in the
header of the redo log. The INFO column of V$LOGMNR_CONTENTS contains
information about the corrupt blocks skipped by LogMiner.

    SQL> execute DBMS_LOGMNR.START_LOGMNR(options => -
    >                                     dbms_logmnr.skip_corruption);
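
After the analysis, one way to inspect what was skipped is to look for rows
where INFO is populated (a sketch; the exact INFO text varies by release):

     SQL> SELECT operation, status, info
          FROM v$logmnr_contents
          WHERE info IS NOT NULL;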

 

-------------------------------------------------
  Ability to Record Only Committed Transactions
-------------------------------------------------

Oracle9i LogMiner allows users to see only transactions that have been
committed. Transactions are returned in commit order.

    SQL> execute DBMS_LOGMNR.START_LOGMNR(options => -
    >                                     dbms_logmnr.committed_data_only);

 

-------------------------------------------------
  Support for Chained/Migrated Rows
-------------------------------------------------

Oracle9i LogMiner accurately reflects the sql_redo and sql_undo in
V$LOGMNR_CONTENTS for DML on chained and migrated rows.

 

-------------------------------------------------
  Support for Clustered Tables
-------------------------------------------------

Oracle9i LogMiner accurately reflects the sql_redo and sql_undo in
V$LOGMNR_CONTENTS for DML on clustered tables.

 

-------------------------------------------------
  Direct Path Insert Logging
-------------------------------------------------

Direct path inserts are logged as a series of individual inserts in
V$LOGMNR_CONTENTS. The operation column is set to DIRECT INSERT.
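
Since the operation column is set to DIRECT INSERT, those rows can be
isolated with a simple filter:

     SQL> SELECT sql_redo
          FROM v$logmnr_contents
          WHERE operation = 'DIRECT INSERT';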

 

-------------------------------------------------
  Supplemental Logging
-------------------------------------------------

Oracle9i has the ability to log columns in the redo which are not actually
changed as part of the DML statements. This is useful for maintaining copies
of tables on other databases.

Prior to 9i, LogMiner returned only the columns that were changed and
identified the row with a WHERE clause containing its ROWID. But ROWIDs are
not portable to other databases, so it was not possible to extract SQL with
LogMiner that could be replayed on another database. The ROWID is still
included in V$LOGMNR_CONTENTS.sql_redo, but it can be removed if necessary.
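
One way to have LogMiner omit the ROWID predicate is the NO_ROWID_IN_STMT
option (present in 9.2 as far as I know; verify against your release):

     SQL> execute DBMS_LOGMNR.START_LOGMNR(options => -
     >                                     dbms_logmnr.no_rowid_in_stmt);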

There are two types of supplemental logging: database and table.

*** Database supplemental logging

Database supplemental logging allows you to specify logging of primary keys,
unique indexes or both. With this enabled, whenever a DML is performed, the
columns involved in the primary key or unique index are always logged even if
they were not involved in the DML.

To turn on database-wide supplemental logging for both primary keys and unique
indexes, execute the following:

    SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA
         (PRIMARY KEY, UNIQUE INDEX) COLUMNS;

This only takes effect for statements which have not yet been parsed. It also
invalidates all DML cursors in the cursor cache and therefore has an effect on
performance until the cache is repopulated.
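
To check what is enabled database-wide, V$DATABASE exposes status columns
(names as documented for 9i; verify in your release):

     SQL> SELECT supplemental_log_data_pk, supplemental_log_data_ui
          FROM v$database;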

1. The EMP table has a primary key defined on the EMPID column.

--> If supplemental logging is turned on for primary key columns, then any
    update to EMP logs the EMPID column.

    SQL> select * from emp;

         EMPID        SAL
    ---------- ----------
            10     100000

    SQL> update emp set sal=150000;
    1 row updated.

--> Without supplemental logging, V$LOGMNR_CONTENTS.sql_redo contains:

    update "SCOTT"."EMP" set "SAL" = '150000' where "SAL" = '100000' and
    ROWID ='AAABOaAABAAANZ/AAA';

    But, with primary key supplemental logging enabled as above,
    V$LOGMNR_CONTENTS.sql_redo contains:

    update "SCOTT"."EMP" set "SAL" = '150000' where "EMPID" = '10' and
    "SAL" ='100000' and ROWID = 'AAABOaAABAAANZ/AAA';

2. To turn off the supplemental logging, execute the following:

    SQL> ALTER DATABASE DROP SUPPLEMENTAL LOG DATA;


*** Table-level supplemental logging

Table-level supplemental logging allows users to define log groups and specify
which columns are always logged in the redo stream. It is done on a
table-by-table basis.  The ALWAYS keyword is used to indicate that if a row is
updated, all columns in the group are logged. If ALWAYS is not used, the
columns in the log group are logged when at least one of the columns in the
group is updated.

You can define a log group using either CREATE TABLE or ALTER TABLE.

1. Create a table with a log group:

    SQL> CREATE TABLE test_log(c1 number, c2 number, c3 number,
         SUPPLEMENTAL LOG GROUP test_always(c1,c2) always);

    SQL> select * from test_log;

            C1         C2         C3
    ---------- ---------- ----------
             1          2          3

2. Update a column that does not belong to the log group:

    SQL> update test_log set c3=99;
    1 row updated.

--> Without supplemental logging, v$logmnr_contents.sql_redo contains:

    update "SCOTT"."TEST_LOG" set "C3" = '99' where "C3" = '3' and ROWID =
    'AAABOZAABAAANZ6AAA';

--> With the log group test_always defined above, V$LOGMNR_CONTENTS.sql_redo
    contains:

    update "SCOTT"."TEST_LOG" set "C3" = '99' where "C1" = '1' and "C2" = '2'
    and "C3" = '3' and ROWID = 'AAABOZAABAAANZ6AAA';

3. Define another log group:

    SQL> alter table test_log
      2  add supplemental log group test_sometimes(c1,c2);

4.1 Update a column that belongs to the log group:

    SQL> update test_log set c2=10;
    1 row updated.

    V$LOGMNR_CONTENTS.sql_redo contains:

    update "SCOTT"."TEST_LOG" set "C2" = '10' where "C1" = '1' and "C2" = '2'
    and ROWID = 'AAABLtAABAAANYgAAA';

4.2  Update a column that does not belong to the log group:
     => no information is logged.

5. There are 2 sets of views for log groups:

   ALL_/USER_/DBA_LOG_GROUPS - log group definitions on tables
   ALL_/USER_/DBA_LOG_GROUP_COLUMNS - columns that are specified in a log group
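
   For example (the SCOTT filter is illustrative):

     SQL> SELECT log_group_name, table_name, always
          FROM dba_log_groups
          WHERE owner = 'SCOTT';

     SQL> SELECT log_group_name, column_name, position
          FROM dba_log_group_columns
          WHERE owner = 'SCOTT';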

6. To drop a log group, issue an ALTER TABLE:

     SQL> ALTER TABLE test_log DROP SUPPLEMENTAL LOG GROUP test_always;


-------------------------------------------------------------------
  DBMS_LOGMNR.MINE_VALUE and DBMS_LOGMNR.COLUMN_PRESENT Functions
-------------------------------------------------------------------

These functions allow you to access the actual data in columns from the redo
log files being analyzed and perform more detailed analysis than is possible
with just the columns available in v$logmnr_contents.

A. DBMS_LOGMNR.MINE_VALUE returns the specified value of the first parameter
   (either redo_value or undo_value) for the column name specified in the
   second parameter.

   Example:

   To return update statements executed against SCOTT.EMP which changed SAL to
   more than twice its original value, the following SQL could be used:

    SQL> SELECT sql_redo
         FROM v$logmnr_contents
         WHERE dbms_logmnr.mine_value(redo_value,'SCOTT.EMP.SAL') >
         2*dbms_logmnr.mine_value(undo_value,'SCOTT.EMP.SAL')
         AND operation='UPDATE';

B. DBMS_LOGMNR.COLUMN_PRESENT returns 1 if the specified column is contained
   in a redo record and 0 if it is not.

   Note: COLUMN_PRESENT returns a 1 not only if the column was involved in a
         DML, but also if the column is logged as a result of being part of an
         identification key or a logging group.

C. MINE_VALUE can return a NULL in two scenarios:

   1. The redo record contains the column and its value is actually NULL.
   2. The redo record does not contain the column.

   So, COLUMN_PRESENT can be used to supplement the MINE_VALUE function to
   identify NULLs which are the result of a DML which changed the column to
   NULL vs. a NULL indicating a column was not present in the redo log.

   Example:

   To select redo values for the SAL column and filter out the case where SAL
   is not present in the redo, the following SQL could be used:

    SQL> SELECT dbms_logmnr.mine_value(redo_value,'SCOTT.EMP.SAL')
         FROM v$logmnr_contents
         WHERE dbms_logmnr.mine_value(redo_value,'SCOTT.EMP.SAL') is NOT NULL
         OR   (dbms_logmnr.mine_value(redo_value,'SCOTT.EMP.SAL') is NULL
         AND   dbms_logmnr.column_present(redo_value,'SCOTT.EMP.SAL')=1);

   These functions do not support LONG, LOB, ADT or COLLECTION data types.

 

References:
-----------

Oracle9i Administrator's Guide
Oracle9i Supplied PL/SQL Packages Reference

  Note 160534.1 ORA-01336 or ORA-01309 When Creating the Logminer Dictionary
  Note 139410.1 ORA-1330 Extracting the Dictionary with LogMiner
  Bug 1897764   LOGMINER THROWS ORA-1332: INTERNAL LOGMINER DICTIONARY ERROR
                  FROM START_LOGMNR

===================================================================================

