Oracle7 Parallel Server Concepts and Administrator's Guide
See Also: Oracle7 Server Migration for instructions on upgrading your database.
The following initialization parameters were added:
CLEANUP_ROLLBACK_ENTRIES
DELAYED_LOGGING_BLOCK_CLEANOUTS
GC_FREELIST_GROUPS
GC_RELEASABLE_LOCKS
The following views were added:
V$SORT_SEGMENT
V$ACTIVE_INSTANCES
Fine grain locking is a more efficient method of locking in a multinode configuration. It reduces the rate of lock collisions and the space required for managing locks, particularly in MPP systems. This feature relies on facilities provided by the hardware and operating system platform, and may not be available on all platforms.
Fine grain locking is discussed in the section "Two Methods of PCM Locking: Hashed and Fine Grain".
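As an illustrative sketch, and assuming that the new GC_RELEASABLE_LOCKS parameter listed above controls the number of releasable (fine grain) locks on your platform, it might be set in the initialization file as follows (the value shown is only an example):
GC_RELEASABLE_LOCKS = 10000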
For best results, try to establish stable sort space. Remember that sort space is cached in the instance: one instance does not release the space unless another instance runs out of space and issues a call asking it to do so. This is an expensive, serialized process that hurts performance. If your system never settles into stable sort space, it is better to overallocate space, or simply not to use temporary tablespaces.
To determine the stability of your sort space, you can check the V$SORT_SEGMENT view. This new view shows every instance's history of sorting. If the FREED_EXTENTS and ADDED_EXTENTS columns show excessive allocation/deallocation activity, you should consider adding more space to the corresponding tablespace. Check also the FREE_REQUESTS value to determine if there is inter-instance conflict over sort space.
Another reason for excessive allocation and deallocation may be that some sorts are simply too big. It may be worthwhile to assign a different temporary tablespace to operations that require huge sorts. The MAX_SORT_SIZE value can help you determine whether such large sorts have indeed occurred.
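For example, a query along the following lines summarizes sort segment activity (the columns named reflect the view as described above; the TABLESPACE_NAME column is an assumption):
SELECT TABLESPACE_NAME, ADDED_EXTENTS, FREED_EXTENTS, FREE_REQUESTS, MAX_SORT_SIZE
FROM V$SORT_SEGMENT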
See Also: Oracle7 Server Administrator's Guide for more information on sort enhancements.
The following three latches perform much better, and so enhance scalability:
An option has been added to make the XA_RECOVER call wait for instance recovery. This feature enables one Oracle instance to do recovery on behalf of a failed Oracle instance, when both are part of the same OPS cluster.
The XA_INFO string has a new clause called OPS_FAILOVER. If this is set to true for a given XA resource manager connection, any XA_RECOVER call issued from that connection will wait for any needed instance recovery to complete. The syntax is as follows:
OPS_FAILOVER=T
Upper- or lowercase (T or t) can be used. The default value of OPS_FAILOVER is false (F or f).
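For illustration, the clause might appear at the end of an xa_open information string such as the following (the account and session timeout details shown are placeholders, not part of the OPS_FAILOVER syntax):
Oracle_XA+Acc=P/scott/tiger+SesTm=30+OPS_FAILOVER=T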
Previously, there was no guarantee that an XA_RECOVER call would return the list of in-doubt transactions from the failed instance. Setting OPS_FAILOVER=T ensures that this will happen.
When OPS_FAILOVER is set to true, the XA_RECOVER call waits until SMON has finished cache recovery, identified the in-doubt transactions, and added them to PENDING_TRANS$, the table that lists in-doubt transactions.
This means that transactions active at the time of the failure are not yet rolled back; they appear active (holding row locks) to users of the system. Furthermore, all transactions system-wide that were active at the time of failure are marked DEAD, and the rollback segments containing them are marked PARTIALLY AVAILABLE. These transactions are recovered as part of SMON recovery in the background, or by foreground processes that encounter them, as described in the next section. The rollback segment remains available for onlining.
Recovery by Foreground Processes Rows may be locked by a transaction that has not yet been recovered. Any foreground process that encounters such a row can itself recover the transaction. Recovery by SMON still takes place, so the entire transaction recovery completes eventually. But if a foreground process runs into a row lock, it can quickly recover the transaction holding the lock and continue. In this way recovery operations are parallelized on an as-needed basis: dead transactions do not hold up active transactions. Previously, active transactions had to wait for SMON to recover the dead transactions.
Recovery is done on a per-rollback segment basis. This prevents multiple foreground processes in different instances from recovering transactions in the same rollback segment, which would cause pinging. The foreground process fully recovers the transaction it would otherwise have waited for. In addition, it makes a pass over the entire rollback segment and partially recovers all unrecovered transactions, applying a configurable number of changes (undo records) to each transaction. This allows short transactions to be recovered quickly, without waiting for long transactions to be recovered. The initialization parameter CLEANUP_ROLLBACK_ENTRIES specifies the number of changes to apply.
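For example, the following initialization file setting (the value shown is illustrative, not a recommendation) would apply up to 20 undo records per transaction on each pass:
CLEANUP_ROLLBACK_ENTRIES = 20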
Recovery by Onlining a Rollback Segment Onlining a rollback segment now causes complete recovery of all transactions it contains. Previously, the onlining process posted SMON to do the recovery. Note that implicit onlining of rollback segments as part of warm start or instance startup does not recover all transactions, but instead marks them DEAD.
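For example, explicitly onlining a rollback segment now recovers all of its transactions as part of the operation:
ALTER ROLLBACK SEGMENT rollback_segment_name ONLINE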
For more information about load balancing at connect, please see the SQL*Net documentation for Oracle7 Server release 7.3.
Oracle7 Server release 7.3 provides a new initialization parameter, DELAYED_LOGGING_BLOCK_CLEANOUTS, which is TRUE by default.
When Oracle commits a transaction, each block that the transaction changed is not immediately marked with the commit time. This is done later, on demand, when the block is next read or updated; this is called block cleanout. When block cleanout is done during an update to a current block, the cleanout changes and their redo records are piggybacked on those of the update. In previous releases, when block cleanout was needed during a read of a current block, extra cleanout redo records were generated and the block was dirtied. This has been changed.
As of release 7.3, when a transaction commits, all blocks changed by the transaction are cleaned out immediately. This cleanout performed at commit time is a "fast version" which does not generate redo log records and does not repin the block. Most blocks will be cleaned out in this way, with the exception of blocks changed by long running transactions.
During queries, therefore, the data block's transaction information is normally up-to-date and the frequency with which block cleanout is needed is much reduced. Regular block cleanouts are still needed when querying a block where the transactions are still truly active, or when querying a block which was not cleaned out during commit.
During changes (INSERT, UPDATE, DELETE), the cleanout redo log records are still generated and piggybacked on the redo of the changes.
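If you need to revert to the pre-7.3 behavior, the feature can be disabled in the initialization file; for example:
DELAYED_LOGGING_BLOCK_CLEANOUTS = FALSE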
Parallel query slaves are now assigned based on disk transfer rates and CPU processing rates for user queries. Work is assigned to query slaves that have preferred access to local disks, since access to remote disks is more costly. In this way data locality improves parallel query performance.
For best results, you should evenly divide data among the parallel server instances and nodes--particularly for moderate to large size tables that substantially dominate the processing. Data should be fairly evenly distributed on various disks, or across all the nodes. For very small tables, this is not necessary.
For example, if you have two nodes, a table should not be divided in an unbalanced way such that 90% resides on one node and 10% on the other node. Similarly, if you have four disks, one should not contain 90% of the data and the others contain only 10%. Rather, data should be spread evenly across available nodes and disks. This happens automatically if you use disk striping. If you do not use disk striping, you must manually ensure that this happens, if you desire optimum performance.
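If you are not striping, one way to place extents explicitly is the ALLOCATE EXTENT clause; as a sketch (the placeholders stand for your own table name, datafile name, and instance number):
ALTER TABLE table_name ALLOCATE EXTENT (DATAFILE 'filename' INSTANCE instance_number)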
FILE_LOCK
FILE_PING
V$BH
V$CACHE
V$PING
V$LOCK_ACTIVITY
The following views were added:
V$FALSE_PING
V$LOCKS_WITH_COLLISIONS
V$LOCK_ELEMENT
ALTER SESSION SET INSTANCE = instance_number
ALTER TABLE table_name DISABLE TABLE LOCK
Re-enabling table locks is accomplished using the following command:
ALTER TABLE table_name ENABLE TABLE LOCK
V$BH
V$CACHE
V$PING
V$LOCK_ACTIVITY
Table A-1 shows the file operations and corresponding SQL statements that cannot be performed in Oracle Version 6 with the database mounted in parallel mode.
A redo log file cannot be dropped when it is active, or when dropping it would reduce the number of groups for that thread below two. When taking a datafile online or offline in Oracle7, the instance can have the database either open or closed and mounted. If any other instance has the database open, the instance taking the file online or offline must also have the database open.
Note: Whenever you add a datafile, create a tablespace, or drop a tablespace and its datafiles, you should adjust the values of GC_FILES_TO_LOCKS and GC_DB_LOCKS, if necessary, before restarting Oracle in parallel mode. Failure to do so may result in an insufficient number of locks to cover the new file.
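As a sketch, GC_FILES_TO_LOCKS maps datafiles to numbers of PCM locks with entries of the following form (the file numbers and lock counts shown are illustrative only):
GC_FILES_TO_LOCKS = "1=500:2-4=1000"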
Version 6 does not support taking tablespaces offline in parallel mode, so the initialization parameter GC_SAVE_ROLLBACK_LOCKS is not necessary in Oracle Version 6. In Oracle7, this parameter is required for deferred rollback segments.
In Oracle7, each instance has its own set of redo log files. A set of redo log files is called a thread of redo. Thread numbers are associated with redo log files when the files are added to the database, and each instance acquires a thread number when it starts up.
Log switches are performed on a per-instance basis in Oracle7; log switches in Oracle Version 6 apply to all instances, because the instances share redo log files.
Oracle7 introduces mirroring of online redo log files. The degree of mirroring is determined on a per-instance basis. This allows you to specify mirroring according to the requirements of the applications that run on each instance.
There is no global option for this SQL statement in Oracle7, but you can force all instances to switch log files (and archive all online log files up to the switch) by using the ALTER SYSTEM ARCHIVE LOG CURRENT statement.
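For example:
ALTER SYSTEM ARCHIVE LOG CURRENT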
In Oracle7, clusters (except for most hash clusters) can use multiple free lists by specifying the FREELISTS and FREELIST GROUPS storage options of CREATE CLUSTER and by assigning extents to instances with the statement ALTER CLUSTER ALLOCATE EXTENT (INSTANCE n).
Hash clusters in Oracle7 can have free lists and free list groups if they are created with a user-defined key for the hashing function and the key is partitioned by instance.
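As an illustrative sketch (the cluster name, key column, and storage values are placeholders), a cluster might be created with multiple free lists and free list groups, and an extent then assigned to a particular instance:
CREATE CLUSTER cluster_name (deptno NUMBER)
SIZE 512
STORAGE (INITIAL 100K NEXT 100K FREELIST GROUPS 2 FREELISTS 4)
ALTER CLUSTER cluster_name ALLOCATE EXTENT (INSTANCE 1)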
In Oracle7, operations can be performed using either commands or the SQL*DBA menu interface, as described in Oracle7 Server Utilities Guide, Release 7.0.
For a complete list of new parameters, refer to the Oracle7 Server Reference manual.
In Oracle7, each instance has its own set of online redo log files so that automatic archiving only archives for the current instance. Oracle7 can also archive closed threads. Manual archiving allows you to archive online redo log files for all instances. You can use the THREAD option of the ALTER SYSTEM ARCHIVE LOG statement to archive redo log files for any specific instance.
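For example, to manually archive all online redo log files of a specific thread (the thread number shown is illustrative):
ALTER SYSTEM ARCHIVE LOG THREAD 2 ALL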
In Oracle7, the filenames of archived redo log files can include the thread number and log sequence number.
A new CREATE DATABASE parameter, MAXLOGHISTORY, can be specified to keep an archive history in the control file. A new initialization parameter, LOG_ARCHIVE_FORMAT, specifies the format of the archived filename.
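For example, a format that embeds both values might look like the following, assuming that on your platform %t stands for the thread number and %s for the log sequence number:
LOG_ARCHIVE_FORMAT = "log_%t_%s.arc"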
In either mode, the database or object being recovered cannot be in use during recovery.