Applies to:
Oracle Database - Enterprise Edition - Version 10.2.0.4 and later
Oracle Database Cloud Schema Service - Version N/A and later
Oracle Database Exadata Cloud Machine - Version N/A and later
Oracle Cloud Infrastructure - Database Service - Version N/A and later
Oracle Database Backup Service - Version N/A and later
Information in this document applies to any platform.

Purpose
This document discusses the ORA-600 [kdsgrp1] error, its possible causes, and the workarounds that can be tried.

Troubleshooting Steps
The ORA-600 [kdsgrp1] error is raised when a fetch operation fails to find the expected row. Because the error is detected in memory, it may be a memory-only problem or the result of corruption on disk. It may indicate (but is not restricted to) conditions such as index corruption, data block corruption, or consistent-read problems, all of which are discussed below.
A full list of known issues is given in the notes listed under References below. The issue may be intermittent, or it may persist until the underlying disk-level corruption is fixed. Intermittent occurrences are likely to be memory based (although intermittent access to an on-disk corruption can be confused with an intermittent memory issue).

Common Workaround Solutions
If the issue is in memory only, we can try to resolve it immediately by flushing the buffer cache, but remember to consider the performance impact on production systems:

SQL> alter system flush buffer_cache;
If we have an intermittent consistent-read issue, we can try disabling rowCR, an optimization that reduces consistent-read rollbacks during queries, by setting _row_cr=FALSE in the initialization parameters. However, this could degrade query performance, so first check the ratio of the two statistics "RowCR hits" / "RowCR attempts" to determine whether the workaround is worth applying.

If the error is the result of index corruption, we can drop and rebuild the index. Note that this will require a maintenance window on production systems.

Root Cause Determination
The incident trace file records the SQL statement that hit the error:

----- Current SQL Statement for this session (sql_id=9mamr7xn4wg7x) -----
This immediately shows us the data objects that were accessed. Searching the trace file for the text string 'Plan Table' will locate the SQL execution plan dumped within the same trace file. For a persistent issue this allows us to determine which indexes were accessed, and so to identify the indexes that should be validated for block corruption:

SQL> analyze index <OWNER>.<INDEX NAME> validate structure online;
Index analyzed.
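If the validation instead reports corruption (typically with an ORA-1499 error), the index can be dropped and recreated, or rebuilt. A minimal sketch, assuming a maintenance window and using the same placeholder names as above:

SQL> alter index <OWNER>.<INDEX NAME> rebuild online;

Rebuilding recreates the index structure from the table data, discarding the corrupt blocks.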
The trace file also identifies the row that could not be found, giving its file number, block number and slot:

*** SESSION ID:(3202.5644) 2011-03-19 04:12:16.910
row 07c7c8c7.a continuation at file# 31 block# 510151 slot 11 not found
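Where a trace line reports only the 8-digit hexadecimal rdba (here 07c7c8c7) without explicit file# and block# values, the address can be decoded with the standard DBMS_UTILITY package. A sketch using the value from the trace line above:

SQL> select dbms_utility.data_block_address_file(to_number('07c7c8c7', 'xxxxxxxx')) as file#,
            dbms_utility.data_block_address_block(to_number('07c7c8c7', 'xxxxxxxx')) as block#
     from dual;

For this rdba the query returns file# 31 and block# 510151, matching the values printed in the trace line.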
The owning segment can then be identified from the file# and block# reported in the trace:

SQL> select owner, segment_name, segment_type, partition_name, tablespace_name
     from dba_extents
     where relative_fno = <file id>
     and <block#> between block_id and (block_id + blocks - 1);
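As a usage example, substituting the file# and block# from the trace line above:

SQL> select owner, segment_name, segment_type, partition_name, tablespace_name
     from dba_extents
     where relative_fno = 31
     and 510151 between block_id and (block_id + blocks - 1);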
Once the table is known, validate it together with all of its indexes:

SQL> analyze table <OWNER>.<TABLE NAME> validate structure cascade online;
Remember that we may be dealing with a permanent corruption that is not located in the object blocks themselves; for example, corruption in undo can cause consistent-read failures even though the table and index blocks validate cleanly.
If analyze reports no corruption, check whether the table contains any chained rows. If chained rows exist, we may have an undetected corruption, and the issue should reproduce whenever the SQL is run. Exporting the table will also detect this condition. If analyze and a table export (in the presence of chained rows) both report no errors, the problem should be treated as a consistent-read issue.

Once you understand the nature of the problem, review the list of known bugs and determine which one matches your condition. If you cannot determine which issue is affecting you, open a service request with Oracle Support and upload the RDBMS and ASM (if applicable) instance alert logs from all nodes, any trace and incident files generated, and a full description of the nature of the problem.

References
NOTE:285586.1 - ORA-600 [kdsgrp1]
NOTE:28814.1 - Handling Oracle Block Corruptions
NOTE:411.1 - Collecting Diagnostics For Oracle Support