Oracle7 Parallel Server Concepts and Administrator's Guide
See Also: "Overview of Locking Mechanisms" for an understanding of lock hierarchy in Oracle.
Oracle with/without the Parallel Server Option | Exclusive Mode | Shared Mode, Single Node | Shared Mode, Multiple Nodes
OPS not installed | Yes: default | No | No
OPS installed | Yes: default | Yes: single shared | Yes: multiple shared
In shared mode, one or more instances of a parallel server mount the same database. All instances mount the database in shared mode and read from and write to the same datafiles. Single shared mode describes an Oracle Parallel Server configuration in which only one instance is running. Global operations are available but are not needed while only a single instance runs. The instance operates as though it were in a cluster (with DLM overhead, and so on), although there is no contention for resources. Multiple shared mode describes an Oracle Parallel Server configuration in which multiple instances are running.
Note: "Shared" mode is also known as "parallel" mode. There is no difference between the options PARALLEL and SHARED in either the ALTER DATABASE statement or the STARTUP command.
Figure 4 - 1 illustrates a typical configuration of Oracle running in shared mode with three instances on separate nodes accessing the database.
Figure 4 - 1. Shared Mode Sharing Disks
Oracle supports different levels of parallel processing within a node and between nodes. For a shared memory system, standard Oracle can be used. For a shared disk or shared nothing system, the Oracle Parallel Server must be used so that each node can access the same database.
Parallel Query Option | Standard Oracle | Oracle Parallel Server
Supported architectures | Shared Memory | Shared Memory, Shared Disk, Shared Nothing
The Oracle Parallel Query Option runs in all of these cases, because it runs as part of Oracle. Oracle Parallel Server provides the framework for the Parallel Query Option to work between nodes, but on a single node standard Oracle makes use of the system's shared memory.
In Oracle Parallel Server exclusive mode, all synchronization is done within the instance. In shared mode, synchronization is accomplished with the help of a distributed lock manager provided with the operating system.
Block level locking occurs only in OPS shared mode, and is transparent to the user. (Row level locking also operates in shared mode, just as in exclusive mode.)
Consider the following example. Instance 1 reads file 2, block 10 in order to update row 1. Instance 2 also reads file 2, block 10, in order to update row 2. Here, instance 1 obtains an instance lock on block 10, then locks and updates row 1. (The row lock is implicit because of the UPDATE statement.)
Instance 2 will then force instance 1 to write the updated block to disk, and instance 1 will give up ownership of the lock on block 10 so that instance 2 can have ownership of it. Instance 2 will then lock row 2 and perform its own UPDATE.
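The two statements in this example might look like the following sketch. The table emp and its rows are hypothetical, and the block-level lock transfer is handled transparently by Oracle and the DLM:

    -- Issued on instance 1: reads file 2, block 10, takes the PCM lock
    -- on that block, then implicitly locks and updates row 1.
    UPDATE emp SET sal = sal * 1.10 WHERE empno = 7369;

    -- Issued on instance 2: needs the same block for row 2; instance 1
    -- writes the block to disk and releases the PCM lock before this
    -- statement can lock row 2 and proceed.
    UPDATE emp SET sal = sal * 1.10 WHERE empno = 7499;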
Since OPS runs in an environment with multiple memories, multiple copies of the same data block can exist across those memories. Internode synchronization through the DLM ensures that all copies of the block are valid: these block-level locks are the buffer cache locks.
The problem of allocating space for inserts illustrates space management issues. When a table uses more space, how can you make sure that no one else uses the same space? How can you make sure that two nodes are not inserting into the same space on the same disk, in the same file?
Consider the following example. Instance 1 reads file 2, block 10 in order to insert a row. Instance 2 reads file 3, block 20, in order to insert another row. Each instance proceeds to insert rows as needed. If one particular block were responsible for assigning enough space for all these inserts, that block would be constantly pinged back and forth between the instances. Instance 1 would lose ownership of the block when instance 2 needs to make an insert, and so forth. The situation would involve a great deal of contention, and performance would suffer.
By contrast, free list groups make good space management possible. If two instances are inserting into the same object (such as a table), but each instance has its own set of free lists for that object, then contention for a single block would be avoided. Each instance would insert into a different block belonging to the object.
Within a single instance, Oracle uses a buffer cache in memory to reduce the amount of disk I/O necessary for database operations. Since each node in the parallel server has its own memory that is not shared with other nodes, the DLM must coordinate the buffer caches of different nodes while minimizing additional disk I/O that could reduce performance. The Oracle parallel cache management technology maintains the high-performance features of Oracle while coordinating multiple buffer caches.
Oracle only reads data blocks from disk if they are not already in the buffer cache of the instance that needs the data. Because data block writes are deferred, they often contain modifications from multiple transactions.
Optimally, Oracle writes modified data blocks to disk only when necessary, such as at a checkpoint, when the buffer space is needed for other data, or when another instance requires the block for update.
See Also: "How to Detect False Pinging" for information about false pinging.
If you operate Oracle in ARCHIVELOG mode, online redo log files are archived before they can be overwritten. In a parallel server, each instance can automatically archive its own redo log files or one or more instances can archive the redo log files manually for all instances.
In ARCHIVELOG mode, you can make both online and offline backups. If you operate Oracle in NOARCHIVELOG mode, you can only make offline backups.
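As a sketch of both approaches, the commands below enable ARCHIVELOG mode and then manually archive the redo log files of another instance's thread; the thread number is only an example:

    -- Enable ARCHIVELOG mode (the database must be mounted, not open):
    ALTER DATABASE ARCHIVELOG;

    -- Each instance can archive its own thread automatically
    -- (LOG_ARCHIVE_START = TRUE in its parameter file), or one instance
    -- can archive another instance's filled redo log files manually:
    ALTER SYSTEM ARCHIVE LOG THREAD 2 ALL;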
The sequence number generator allows multiple instances to access and increment a sequence without contention among instances for sequence numbers and without waiting for any transactions to commit. Each instance can have its own sequence cache for faster access to sequence numbers. Distributed locks coordinate sequences across instances in a parallel server.
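For illustration, the sequence below caches 100 numbers in each instance's SGA and, with NOORDER, does not force instances to coordinate the order in which numbers are issued; the sequence name is hypothetical:

    -- Each instance caches its own range of numbers, so most NEXTVAL
    -- calls are satisfied locally without inter-instance lock activity.
    CREATE SEQUENCE order_seq
        INCREMENT BY 1
        START WITH 1
        CACHE 100
        NOORDER;

    SELECT order_seq.NEXTVAL FROM dual;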
With a single free list, when multiple inserts are taking place, single threading occurs as these processes try to allocate space from the free list. The advantage of using multiple free lists is that it allows processes to search a specific pool of blocks when space is needed, thus reducing contention among users for free space.
Even when an object has multiple free lists, they all reside in a single block, so on OPS that block would be pinged between the instances constantly. To avoid this problem, free lists can be grouped, with one group assigned to each instance. Each instance then has its own block containing free lists. Since each instance uses its own free lists, there is no contention between instances for the same free list block.
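For illustration, the table below is created with one free list group per instance in a two-node parallel server; the table name, columns, and storage numbers are placeholders:

    -- Two free list groups (one per instance), each with four free lists;
    -- inserts from different instances allocate space from different
    -- blocks, avoiding pinging of a single free list block.
    CREATE TABLE orders (
        order_id   NUMBER,
        order_date DATE )
    STORAGE (INITIAL 100K NEXT 100K
             FREELISTS 4
             FREELIST GROUPS 2);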
See Also: "Using Free List Groups to Partition Data" regarding proper use of free lists to achieve optimal performance in an OPS environment.
"Online and Offline Backups" .
Cache coherency is provided by the Parallel Cache Manager for the buffer caches of instances located on separate nodes. The set of global constant (GC_*) initialization parameters associated with PCM buffer cache locks is not used with the dictionary cache, library cache, and so on.
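As a sketch only, PCM locks are allocated to datafiles through parameter file (init.ora) entries such as the following; the file numbers and lock counts are placeholders, and the complete GC_* parameter set is described elsewhere in this guide:

    # Hypothetical init.ora entries for PCM (buffer cache) locks
    GC_DB_LOCKS = 1000                    # total PCM locks for the instance
    GC_FILES_TO_LOCKS = "1=300:2-3=500"   # PCM locks assigned per datafile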
The Parallel Cache Manager ensures that a master copy data block in an SGA has identical copies in other SGAs that require a copy of the master. Thus, the most recent copy of the block in all SGAs contains all changes made to that block by all instances in the system, regardless of whether any of the transactions on those instances have committed.
If a data block is modified in one buffer cache, then all existing copies in other buffer caches are no longer current. New copies can be obtained after the modification operation completes.
Parallel cache management enforces cache coherency while minimizing I/O and use of the distributed lock manager. I/O and lock operations for cache coherency are only done when the current version of a data block is in one instance's buffer cache and another instance requests that block for update.
Multiple transactions running on a single instance of a parallel server can share access to a set of data blocks without additional distributed lock operations, as long as the blocks are not needed by transactions running on other instances.
In shared mode, the distributed lock manager (DLM) maintains the status of distributed locks across the nodes. In exclusive mode, an instance does not need the DLM to coordinate database resources.
Instances use distributed locks simply to indicate the ownership of a master copy of a resource. When an instance becomes the owner of a master copy of a database resource, it also inherently becomes the owner of the distributed lock covering the resource. The master copy is the current, updatable copy of the resource. The instance only disowns the distributed lock when another instance requests the resource for update. Once another instance owns the master copy of the resource, it becomes the owner of the distributed lock.
See Also: "How Buffer State and Lock Mode Change" .
When a master copy of a data block is copied into an instance's SGA, that instance becomes the owner of the PCM lock covering the data block. The PCM lock and the data block it covers are only disowned when another instance requests the data block for update. Once the requesting instance reads the master copy of the data block into its SGA, it becomes the owner of the PCM lock covering the data block.
Attention: Transactions and parallel cache management are autonomous mechanisms in Oracle. PCM locks function independently of any form of transaction lock.
Example
Consider the following example and the illustrations in Figure 4 - 2. (This example assumes that one PCM lock covers one block--although many blocks could be covered.)
In contrast, transactions do not release row locks until changes to the rows are either committed or rolled back. Oracle uses internal mechanisms for concurrency control to isolate transactions, so that modifications to data made by one transaction are not visible to other transactions until the transaction modifying the data commits. The row lock concurrency control mechanisms are independent of parallel cache management: concurrency control does not require PCM locks, and PCM lock operations do not depend on individual transactions committing or rolling back.
Understanding the way that caches are synchronized across instances can help you to understand the ongoing overhead which affects the performance of your system. Consider a five-node parallel server in which someone drops a table on one node. Each of the five dictionary caches has a copy of the definition of that particular table, thus the node that drops the table from its own dictionary cache must also flush the other four dictionary caches. It does this automatically through the DLM. Users on the other nodes will be notified of the change in lock status.
There are big advantages to having each node cache the library and table information. Occasionally, a command like DROP TABLE will force other caches to be flushed, but the brief effect this may have on performance does not diminish the advantage of having multiple caches.
See Also: "Space Management" .
"System Change Number" for additional examples of non-PCM cache management issues.
Copyright © 1996 Oracle Corporation. All Rights Reserved.