
Oracle Database log files analysis

By: iSee · Date: Thu, 20 Aug 2009, 3:59 AM

First, what LogMiner is

At present, the only way to analyze Oracle's logs is with the LogMiner tool that Oracle itself provides. All changes made to an Oracle database are recorded in the redo logs, but the raw log records are something we simply cannot read; LogMiner is the tool that turns that log information into something we can understand. In this sense it is much like tkprof: one formats log information, the other formats a trace file. By analyzing the logs we can achieve the following:

1. identify logical changes to the database; 2. detect and correct a user's mistaken operations; 3. perform after-the-fact auditing; 4. analyze how changes were carried out.

Moreover, the information recorded in the log includes: the history of changes to the database; the type of each change (INSERT, UPDATE, DELETE, DDL, and so on); the SCN corresponding to each change; and information about the user who performed the operation. From the log, LogMiner reconstructs equivalent SQL statements and UNDO statements (recorded respectively in the SQL_REDO and SQL_UNDO columns of the view V$LOGMNR_CONTENTS). Note that these are equivalent statements, not the original SQL. For example, suppose we originally executed "delete from a where c1 <> 'cyx';" and it deleted six rows; what LogMiner reconstructs is six equivalent single-row DELETE statements. So be aware that V$LOGMNR_CONTENTS does not show the original statements. From the database's point of view this is easy to understand: the log records the individual row operations, because the same "delete from a where c1 <> 'cyx';" statement may delete a different number of rows in a different environment, so recording the statement itself would have no practical meaning. LogMiner therefore reconstructs the operation as it actually happened, as a series of single-row statements.

Also, because the Oracle redo log does not record original object names (such as the name of a table and its columns) but only Oracle's internal IDs (for a table, its object ID in the database; for the columns within a table, their ordinal positions: COL 1, COL 2, and so on), the SQL statements LogMiner reconstructs are only easily readable if those IDs are translated back into the corresponding names. That translation requires a data dictionary (strictly speaking, LogMiner can run without one; see the analysis steps below). LogMiner uses the DBMS_LOGMNR_D.BUILD() procedure to extract the data dictionary information.

LogMiner consists of two PL/SQL packages and several views:

1. The dbms_logmnr_d package, which contains only one procedure for extracting the data dictionary information, namely dbms_logmnr_d.build().

2. The dbms_logmnr package, which has three procedures:

add_logfile(name varchar2, options number) - adds or removes log files for analysis;
start_logmnr(start_scn number, end_scn number, start_time number, end_time number, dictfilename varchar2, options number) - starts the log analysis, sets the time/SCN window for the analysis, and specifies whether to use the extracted data dictionary information;
end_logmnr() - terminates the analysis session and releases the memory occupied by LogMiner.

The views associated with LogMiner are:

1. v$logmnr_dictionary - the data dictionary information LogMiner may use; since logmnr can have more than one dictionary file, this view displays them.
2. v$logmnr_parameters - the parameters currently set for LogMiner.
3. v$logmnr_logs - the list of logs currently being analyzed.
4. v$logmnr_contents - the results of the log analysis.

Second, Oracle9i LogMiner enhancements:

1. Support for more data and storage types: chained/migrated rows, CLUSTER table operations, DIRECT PATH inserts, and DDL operations. For DDL operations, the original statement can be seen in the SQL_REDO column of V$LOGMNR_CONTENTS (except CREATE USER, where the password appears in encrypted form rather than as the original password). If the TX_AUDITING initialization parameter is set to TRUE, all of the database's account operations are recorded.

2. Options for extracting and using the data dictionary: the dictionary can now be extracted not only to an external file but also directly into the redo log stream, which provides, within the log stream, a snapshot of the data dictionary taken at the time of the operation, so that fully offline analysis becomes possible.

3. DML operations can be grouped by transaction: the COMMITTED_DATA_ONLY option of START_LOGMNR() groups the DML operations by transaction, so that committed transactions are returned in SCN order.

4. Support for SCHEMA changes: with the database open, if LogMiner's DDL_DICT_TRACKING option is used, Oracle9i's LogMiner automatically compares the original log stream with the current system data dictionary and returns the correct DDL statements; it also automatically detects and marks the differences between the current data dictionary and the initial log stream, so that even if a table involved in the initial log stream has been altered or no longer exists, LogMiner still returns the correct DDL statements.

5. More information recorded in the log: for example, for an UPDATE operation, not only the updated row but also its before image can be captured.

6. Value-based queries: Oracle9i's LogMiner supports queries based not only on metadata (operation, object, and so on) but also on the actual data involved. For example, in a payroll case we can now easily find the original UPDATE statement that changed an employee's salary from 1000 to 2000, whereas previously we could only select all the UPDATE statements.
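As a minimal sketch of point 3 above, assuming Oracle9i or later (where the dbms_logmnr.committed_data_only constant is defined; the dictionary path is just the one used later in this article):

```sql
-- Sketch: group DML operations by transaction, returned in commit-SCN order.
-- Requires Oracle9i+; the dictionary file path is this article's example path.
exec dbms_logmnr.start_logmnr( -
     dictfilename => '/data6/cyx/logmnr/dic.ora', -
     options      => dbms_logmnr.committed_data_only);
```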

Third, Oracle8i/9i log analysis

LogMiner can run as long as the instance is started. LogMiner uses a dictionary file to translate Oracle's internal object IDs into names; without a dictionary file, it displays the internal object IDs directly. For example, suppose we execute the following statement:

delete from "C"."A" where "C1" = 'gototop' and ROWID = 'AAABg1AAFAAABQaAAH';

Without a dictionary file, the result of LogMiner's analysis would be:

delete from "UNKNOWN"."OBJ# 6197" where "COL 1" = HEXTORAW('d6a7d4ae') and ROWID = 'AAABg1AAFAAABQaAAH';

If you want to use a dictionary file, the database must be at least in MOUNT state. Then run the dbms_logmnr_d.build procedure to extract the data dictionary information to an external file. The detailed analysis steps follow:

1. Confirm the setting of the initialization parameter UTL_FILE_DIR, make sure Oracle has read and write permission on that directory, and then start the instance. In the following example UTL_FILE_DIR is:

SQL> show parameter utl

NAME          TYPE    VALUE
------------- ------- -----------------
utl_file_dir  string  /data6/cyx/logmnr

This directory is mainly used to store the dictionary information file produced by the dbms_logmnr_d.build procedure. If you do not intend to use a dictionary file, you can skip this step.
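In Oracle8i/9i, UTL_FILE_DIR is a static parameter, so it has to be set in the init.ora parameter file before the instance is started; a fragment using this article's example directory might look like:

```
# init.ora fragment (static parameter; the instance must be restarted
# after changing it; the directory is this article's example path)
utl_file_dir = /data6/cyx/logmnr
```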

2. Generate the dictionary information file:

exec dbms_logmnr_d.build(dictionary_filename => 'dic.ora', dictionary_location => '/data6/cyx/logmnr');

Here dictionary_location is the directory where the dictionary information file is stored; it must match the value of UTL_FILE_DIR exactly. For example, if UTL_FILE_DIR = /data6/cyx/logmnr/, the statement above would fail, simply because it is missing the trailing "/" that UTL_FILE_DIR has, even though in many other places Oracle is not sensitive to a trailing "/". dictionary_filename is the name of the dictionary information file and can be chosen freely. Of course, the two parameter names need not be written explicitly, that is:

exec dbms_logmnr_d.build('dic.ora', '/data6/cyx/logmnr');

If you skipped the parameter setting in step 1 and went straight to this step, Oracle reports the following error:

ERROR at line 1:
ORA-01308: initialization parameter utl_file_dir is not set
ORA-06512: at "SYS.DBMS_LOGMNR_D", line 923
ORA-06512: at "SYS.DBMS_LOGMNR_D", line 1938
ORA-06512: at line 1

Note that the Oracle 8.1.7 for Windows version may produce the following error instead:

14:26:05 SQL> execute dbms_logmnr_d.build('oradict.ora', 'c:\oracle\admin\ora\log');
BEGIN dbms_logmnr_d.build('oradict.ora', 'c:\oracle\admin\ora\log'); END;
*
ERROR at line 1:
ORA-06532: Subscript outside of limit
ORA-06512: at "SYS.DBMS_LOGMNR_D", line 793
ORA-06512: at line 1

The solution:

Edit the file $ORACLE_HOME/rdbms/admin/dbmslmd.sql, changing

TYPE col_desc_array IS VARRAY(513) OF col_description;

to:

TYPE col_desc_array IS VARRAY(700) OF col_description;

Save the file, then run the script again:

15:09:06 SQL> @c:\oracle\ora81\rdbms\admin\dbmslmd.sql
Package created.
Package body created.
No errors.
Grant succeeded.

Then recompile the DBMS_LOGMNR_D package body:

15:09:51 SQL> alter package DBMS_LOGMNR_D compile body;
Package body altered.

After that, re-running dbms_logmnr_d.build succeeds:

15:10:06 SQL> execute dbms_logmnr_d.build('oradict.ora', 'c:\oracle\admin\ora\log');
PL/SQL procedure successfully completed.

3. Add the log files to be analyzed:

SQL> exec dbms_logmnr.add_logfile(logfilename => '/data6/cyx/rac1arch/arch_1_197.arc', options => dbms_logmnr.new);
PL/SQL procedure successfully completed.

Three values can be used for the options parameter here:
NEW - creates a new log file list;
ADDFILE - adds a log file to the list, as in the following example;
REMOVEFILE - the opposite of addfile.

SQL> exec dbms_logmnr.add_logfile(logfilename => '/data6/cyx/rac1arch/arch_2_86.arc', options => dbms_logmnr.addfile);
PL/SQL procedure successfully completed.

4. Once the log files to be analyzed have been added, start the LogMiner analysis:

SQL> exec dbms_logmnr.start_logmnr(dictfilename => '/data6/cyx/logmnr/dic.ora');
PL/SQL procedure successfully completed.

If you are not using dictionary file information (in which case the instance merely has to be started), the dictfilename parameter is not needed:

SQL> exec dbms_logmnr.start_logmnr();
PL/SQL procedure successfully completed.

The dbms_logmnr.start_logmnr() procedure also has several other parameters that define the time/SCN window of the analysis:
STARTSCN / ENDSCN - define the starting and ending SCN of the analysis;
STARTTIME / ENDTIME - define the starting and ending time of the analysis.

For example, the following analysis covers only the logs from '2003-09-21 09:39:00' to '2003-09-21 09:45:00':

SQL> exec dbms_logmnr.start_logmnr(dictfilename => '/data6/cyx/logmnr/dic.ora', -
starttime => '2003-09-21 09:39:00', endtime => '2003-09-21 09:45:00');
PL/SQL procedure successfully completed.

The "-" at the end of the first line above is the SQL*Plus line-continuation character; if everything is written on one line, it is not needed. We can check the timestamps actually covered by the logs:

SQL> select distinct timestamp from v$logmnr_contents;

TIMESTAMP
-------------------
2003-09-21 09:40:02
2003-09-21 09:42:39

Note that the dates above can be written directly in this format only because I had already set the NLS_DATE_FORMAT environment variable; if you have not set it, you need to use the to_date function to convert them.

SQL> !env | grep NLS
NLS_LANG=american_america.zhs16cgb231280
NLS_DATE_FORMAT=YYYY-MM-DD HH24:MI:SS
ORA_NLS33=/oracle/oracle9/app/oracle/product/9.2.0/ocommon/nls/admin/data

With to_date, the format is as follows:

exec dbms_logmnr.start_logmnr(dictfilename => '/data6/cyx/logmnr/dic.ora', -
starttime => to_date('2003-09-21 09:39:00', 'YYYY-MM-DD HH24:MI:SS'), -
endtime => to_date('2003-09-21 09:45:00', 'YYYY-MM-DD HH24:MI:SS'));

The STARTSCN and ENDSCN parameters are used similarly.
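Since the text says STARTSCN and ENDSCN are used similarly but does not show them, here is a minimal sketch; the SCN values are invented purely for illustration:

```sql
-- Sketch: bound the analysis window by SCN instead of by time.
-- The SCN values below are hypothetical; the dictionary path is
-- this article's example path.
exec dbms_logmnr.start_logmnr(dictfilename => '/data6/cyx/logmnr/dic.ora', -
     startscn => 100000, -
     endscn   => 100500);
```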

5. Once the procedure above completes, we can query the LogMiner views for the information we need. In v$logmnr_logs we can see the list of logs currently under analysis; if the database has two instances (that is, OPS/RAC), v$logmnr_logs will contain two different THREAD_IDs. The real analysis results are in v$logmnr_contents, which carries a great deal of information from which we can track whatever we are interested in. I will cover some common tracking cases separately later.
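The article does not show a query against v$logmnr_contents, so here is a minimal sketch. The column names (SCN, TIMESTAMP, USERNAME, OPERATION, SEG_OWNER, SEG_NAME, SQL_REDO, SQL_UNDO) are the documented ones; schema "C" and table "A" are simply the example objects used earlier in this article:

```sql
-- Sketch: pull the reconstructed redo/undo SQL for one table's changes.
-- Filter values are this article's example schema and table.
SELECT scn, timestamp, username, operation, sql_redo, sql_undo
  FROM v$logmnr_contents
 WHERE seg_owner = 'C'
   AND seg_name  = 'A'
 ORDER BY scn;
```

The SQL_UNDO column is what makes recovering from a user's mistaken operation practical: it holds the statement that reverses the corresponding change.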

6. When everything is finished, run the dbms_logmnr.end_logmnr procedure to exit the LogMiner analysis:

SQL> exec dbms_logmnr.end_logmnr;

Alternatively, simply exit SQL*Plus and the session terminates automatically. (END)
