T3:
#Undo:
Oracle Database maintains information that is used to roll back, or undo, changes to the database. Before transactions are committed, Oracle keeps records of their actions; it needs this information to undo the changes. These records are called rollback or undo records. They are used to:
* Rollback transactions - when a ROLLBACK statement is issued, undo records are used to undo changes that were made to the database by the uncommitted transaction.
* Recover the database - during database recovery, undo records are used to undo any uncommitted changes applied from the redo log to the data files.
* Provide read consistency - undo records provide read consistency by maintaining the before image of the data for users who are accessing the data at the same time that another user is changing it.
* Analyze data as of an earlier point in time by using Flashback Query.
* Recover from logical corruptions using Flashback features.
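As an illustration of the Flashback Query use case, a query like the following (table name and interval are hypothetical) reads data as of an earlier point in time using undo:

```sql
-- Hypothetical example: view EMP rows as they were 15 minutes ago.
-- Requires enough undo retention (UNDO_RETENTION / undo tablespace size).
SELECT empno, sal
FROM   emp
AS OF TIMESTAMP (SYSTIMESTAMP - INTERVAL '15' MINUTE)
WHERE  deptno = 10;
```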
@VOracle_VOITG
T4:
In RAC 19c, a #service is automatically relocated back to its preferred node when the service is added with the (-failback YES) option on #RAC19c. In 11g, relocating a service back after a failure is a manual operation.
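For example, the service could be created with failback enabled like this (database, service, and instance names are hypothetical):

```
srvctl add service -db ORCL -service app_svc \
  -preferred ORCL1 -available ORCL2 -failback YES
```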
@VOracle_VOITG
T5:
#OCR (Oracle Cluster Registry) vs #OLR (Oracle Local Registry). The OCR stores Oracle RAC configuration and metadata and resides in ASM. The OLR keeps the local node's Clusterware configuration on the local file system. Because the OCR files are inside an ASM disk group, CRS cannot access the OCR at startup to find the cluster resource information: at that point the ASM instance is still down. So the OLR (which lives on the normal file system) is the first registry resource available, and it is read to start the required local resources, including ASM; only after ASM comes up does the OCR become accessible.
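The two registries can be inspected with the ocrcheck utility (run the full check as root); the -local flag reads the OLR from the node's file system:

```
ocrcheck          # checks the OCR stored in the ASM disk group
ocrcheck -local   # checks the OLR on the local file system
```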
@VOracle_VOITG
T6:
#CSS service:
CSS is the service responsible for determining which nodes of the cluster are available. It uses the private interconnect for communications as well as the Clusterware voting disks: through a combination of heartbeat messages over the interconnect and the voting disks, CSS determines the status of each node of the cluster (node fencing). CSS uses the voting disks to determine which node has gone offline, and then works with Oracle Clusterware to evict the missing node from the cluster.
--CSS daemon (ocssd) – Manages cluster node membership information.
--CSS Agent (cssdagent) – Monitors the cluster and provides fencing services.
--CSS Monitor (cssdmonitor) – Monitors for node hangs and monitors the OCSSD process for hangs.
@VOracle_VOITG
T7:
Various types of #checkpoints in #Oracle:
- Full checkpoint
- Thread checkpoint
- File checkpoint
- Parallel Query checkpoint
- Object checkpoint
- Log switch checkpoint
- Incremental checkpoint
For example, a full checkpoint:
- Is caused by:
. ALTER SYSTEM CHECKPOINT [GLOBAL]
. ALTER DATABASE BEGIN BACKUP
. ALTER DATABASE CLOSE
. SHUTDOWN [IMMEDIATE]
- Controlfile and datafile headers are updated:
. CHECKPOINT_CHANGE#
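A minimal sketch of forcing a full checkpoint and then reading the resulting CHECKPOINT_CHANGE# back:

```sql
ALTER SYSTEM CHECKPOINT GLOBAL;

-- Checkpoint SCN recorded in the controlfile:
SELECT checkpoint_change# FROM v$database;

-- Checkpoint SCNs in the datafile headers (should now match):
SELECT file#, checkpoint_change# FROM v$datafile_header;
```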
@VOracle_VOITG
T8:
User Global Area (#UGA) concept: the UGA is a memory area that holds session-based information. In the shared server architecture, sessions can be served by multiple server processes, so the UGA cannot be stored in the PGA and is moved to the SGA. In the dedicated server architecture, one session = one dedicated process, and the UGA is stored in the PGA.
@VOracle_VOITG
T9:
Oracle Dynamic #Remastering in #RAC
In RAC, every data block is mastered by an instance. Mastering a block simply means that master instance keeps track of the state of the block until the next reconfiguration event.
When an instance leaves the cluster, the GRD portion maintained by that instance must be redistributed to the surviving nodes. Similarly, when a new instance enters the cluster, the GRD portions of the existing instances must be redistributed to create the GRD portion of the new instance. This is called dynamic resource reconfiguration. In addition, if one instance, compared to the others, is heavily accessing blocks of the same object, the GCS can decide to dynamically migrate all of that object's resources to the instance that accesses the object most. Remastering activity is performed by the LMD process.
Reference-: Oracleinaction
@VOracle_VOITG
T10:
#ASM redundancy levels
are:
* External redundancy: Oracle ASM does not provide mirroring and relies on the storage system to provide RAID functionality.
* Normal redundancy (two-way mirroring): the loss of one Oracle ASM disk is tolerated.
* High redundancy (three-way mirroring): the loss of two Oracle ASM disks in different failure groups is tolerated.
@VOracle_VOITG
VOracle Interview Tips Group(VOITG)
My primary goal is to review Oracle DBA interview tips and give a quick overview of Oracle database concepts.
Group link:
https://t.me/VOracle_VOITG
T11:
#Authentication Methods in #Oracle:
Oracle supports a robust set of authentication methods:
• Database authentication (username and password stored in the database)
• OS authentication
• Network authentication
• Global user authentication and authorization
• External service authentication
@VOracle_VOITG
T12:
Oracle Row #Chaining and #Migration
Row Chaining: a row is too large to fit into a single database block. Row Migration: a row is migrated when an update to that row would cause it to no longer fit in its block. Row migration means that the entire row moves and we just leave behind the "forwarding address":
the original block keeps only the rowid of the new block, and the entire row is moved there.
* Row migration is typically caused by UPDATE operation
* Row chaining is typically caused by INSERT operation.
* To diagnose chained/migrated rows use ANALYZE command, query V$SYSSTAT view.
* To fix migrated rows, rebuild the table with a higher PCTFREE using ALTER TABLE MOVE.
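A sketch of the diagnosis and fix described above (table name is hypothetical; the CHAINED_ROWS table is created by the supplied utlchain.sql script):

```sql
-- System-wide counter of continued-row fetches:
SELECT name, value FROM v$sysstat
WHERE  name = 'table fetch continued row';

-- Per-table: list chained/migrated rows into CHAINED_ROWS
-- (created by @?/rdbms/admin/utlchain.sql):
ANALYZE TABLE emp LIST CHAINED ROWS INTO chained_rows;

-- Fix migrated rows: rebuild with more free space, then rebuild indexes:
ALTER TABLE emp MOVE PCTFREE 30;
```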
Reference-: akadia
@VOracle_VOITG
T13:
#Partition #Pruning:
Partition pruning is the skipping of unnecessary index and data partitions or subpartitions by a query. Partition pruning is the simplest and also the most substantial means of improving performance using partitioning.
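A minimal sketch with a hypothetical range-partitioned table: a predicate on the partition key lets the optimizer scan only the matching partition.

```sql
CREATE TABLE sales (
  sale_date DATE,
  amount    NUMBER
)
PARTITION BY RANGE (sale_date) (
  PARTITION p2018 VALUES LESS THAN (DATE '2019-01-01'),
  PARTITION p2019 VALUES LESS THAN (DATE '2020-01-01')
);

-- Only partition p2019 is scanned, visible in the execution plan
-- as PARTITION RANGE SINGLE with matching Pstart/Pstop:
SELECT SUM(amount) FROM sales
WHERE  sale_date >= DATE '2019-07-01';
```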
@VOracle_VOITG
T14:
What is the reason for #checkpoint #incomplete
in your alert log?
The first reason is undersized redo log files; sometimes the cause is related to DBWR: LGWR has to wait until DBWR has completed writing the dirty buffers to disk. This can come from a lack of I/O capacity or a configuration problem, for example on the storage array.
@VOracle_VOITG
T15:
Oracle #Restore Point
A restore point is an Oracle feature that lets us flash the database back to the time at which the restore point was created. Using this facility assures us that we go back to the correct time (before the change), at the correct SCN.
Types of restore point:
1-Guaranteed restore point: we can flash the database back to the restore point regardless of the DB_FLASHBACK_RETENTION_TARGET parameter.
2-Normal restore point: we can flash the database back to the restore point within the time period specified by the DB_FLASHBACK_RETENTION_TARGET parameter.
In which instance state can we flash the database back to the restore point? In MOUNT mode, and the database must be in ARCHIVELOG mode.
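The basic commands look like this (the restore point name is arbitrary):

```sql
-- Create the restore point; GUARANTEE FLASHBACK DATABASE makes it type 1:
CREATE RESTORE POINT before_upgrade GUARANTEE FLASHBACK DATABASE;

-- Later, flash back from MOUNT mode:
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
FLASHBACK DATABASE TO RESTORE POINT before_upgrade;
ALTER DATABASE OPEN RESETLOGS;
```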
@VOracle_VOITG
Point-:4
Data Protection Modes in DG
(1) Maximum Performance (default)
NOAFFIRM/ASYNC
(2) Maximum Availability
AFFIRM or NOAFFIRM/SYNC
(3) Maximum Protection
AFFIRM/SYNC
Thanks
Ahmad
T16:
What is I/O #fencing or Disk #fencing in RAC:
In a RAC environment, heartbeats are sent over the private network. When a node fails, communication in the cluster breaks down and the remaining nodes take corrective action: the failed node is fenced off from the shared disks to avoid data corruption. In RAC, the first node that detects that other nodes are unreachable will evict the failed node from the cluster.
@VOracle_VOITG
T17:
Why do we need odd number of voting disks?
The reason for the odd number of voting disks is to solve the "split-brain" problem. The split-brain problem occurs when the cluster interconnect goes down and each node cannot obtain a heartbeat from the other nodes. At this time, each node in the cluster thinks they are the only one alive and they think they should become the "master node". There can be only one master node. Each half thinks they are the brains of the operation, which we cannot allow.
So how do we resolve this?
We set up a race. Each candidate node wants to be the master so we put it up to a vote. Whichever contestant gets the most votes wins. So node 1 contacts the first voting disk and says "I am here first so I am the master!". Simultaneously, node 2 contacts the second voting disk and says "I am here first so I am the master!". They both can't be right. If there were an even number of voting disks, then it is possible that each candidate node could come up with exactly half the number of total votes and we would have a tie, which must be resolved. Now the real race begins. The two nodes run to the third and final voting disk. The node that gets there first now has 2 votes to 1, wins the election, and becomes the master node in the cluster.
Thanks
Ahmad
Reference-:dbaschool
T18:
Points about SCAN-:
--------------------------------
• SCAN Name will represent the cluster in the network.
• SCAN used by clients to connect to any database in the cluster.
• SCAN is a GSD resource, which is managed by CRS.
• SCAN provides a single domain name (via DNS), allowing end-users to address a RAC cluster as-if it were a single IP address.
• Removes the requirement to change the client connection if cluster changes.
• Load balances across the instances providing a service. SCAN listener selects least loaded node.
• SCAN requires a DNS entry or GNS to be used.
• SCAN is an alternative to the transparent application failover (TAF) for automatic load balancing.
• Provides failover between instances.
• SCAN VIP/LISTENER will failover to another node in the cluster.
• Clients do not require VIP information.
• Must resolve to at least one address on the public network.
• SCAN provides location independence for the databases.
• Allow clients to use EZConnect or JDBC connections.
• SCAN Listener would forward the request to local listener that’s running on VIPs.
• Each cluster will have 3 SCAN listeners, each having a SCAN VIP defined as cluster resources.
• Instance registers with local listener on its node. Database “REMOTE_LISTENER” registers instances with all SCAN listeners.
• SCAN listeners run on nodes in the cluster; they are not replacements for the regular (local) listeners.
• PMON process of each instance registers the database services with the default listener on the local node and with each SCAN listener, which is specified by the REMOTE_LISTENER database parameter.
• Client queries DNS to resolve to SCAN.
Thanks
Ahmad
Reference-: Satya Blog
T19:
#Voting Disks in RAC
Oracle Clusterware uses voting disk files to determine which nodes are members of a cluster.
Since ASM was introduced as a place to store these files, they are also called voting files.
What Information is stored in VOTING DISK/FILE
a. Which node is part of the cluster.
b. Which node is leaving the cluster.
c. Which node is joining the cluster.
@VOracle_VOITG
Point-5-:
Data Guard Protection Modes-:
(1) Maximum Performance (D)
-Potential for minimal data loss.
-ASYNC
-The primary never waits for standby acknowledgement before signaling commit success to the application. There can be no guarantee of zero data loss.
(2) Maximum Protection
-Zero Data Loss
-Double Failure Protection
-SYNC
-Commit success is signaled to the application only after acknowledgement is received from a standby database that the redo for that transaction has been hardened to disk. The production database cannot proceed until the acknowledgement has been received.
(3) Maximum Availability
-Zero Data Loss
-Single Failure Protection
-SYNC/Fast SYNC/Far SYNC
-Commit success is signaled to the application only after acknowledgement is received from a standby database or after the NET_TIMEOUT threshold period expires, whichever occurs first.
-A network or standby outage does not affect the availability of the production database.
Thanks
Ahmad
http://www.oracleride.com/
T20:
How does Oracle #instance #recovery work and what happens internally?
When an instance terminates abnormally (crashes), the database goes down in an inconsistent state: ongoing transactions, committed or uncommitted, were not completed. Before the database can be opened it must be made consistent, so Oracle performs instance recovery. Starting from the last checkpoint SCN in the control file, Oracle rolls forward, applying both committed and uncommitted changes from the redo logs; at this stage the database is in MOUNT state. When you open the database, Oracle uses undo data to roll back the uncommitted transactions, and after that the database is in a consistent state.
@VOracle_VOITG
T21:
When does a #checkpoint occur? And what will happen when a #checkpoint occur?
A checkpoint occurs on below events:
• Redo log switch occurs
• If MTTR is enabled by setting parameter FAST_START_MTTR_TARGET
• On consistent shutdown
• A checkpoint is triggered manually: ALTER SYSTEM CHECKPOINT
• A partial checkpoint occurs while taking the datafile offline or read-only
• When active redo log is to be written by LGWR
• Every 3 seconds
• After commit.
Checkpoint operations:
• Redo entries from the SGA are written to the online redo logs on disk.
(LGWR)
• Dirty buffers from the SGA are written to the data files on disk.
(DBWR)
• A checkpoint record is written to the redo log file.
• Data file headers are updated with the latest checkpoint SCN, which is also recorded in the control file; therefore the datafile headers stay in sync with the control file.
@VOracle_VOITG
Point-6-:
How Connection happened in 11g R2 RAC-:
(1)Pmon process of each instance registers the database services with the default listener on the local node and with each SCAN listener which is specified by the REMOTE_LISTENER database parameter.
(2)The Oracle client connects using the SCAN name.
(3)The client queries DNS to resolve the SCAN name.
(4)A SCAN listener selects the least-loaded node.
(5)The client connects to the local listener on the least-loaded node. The local listener starts a dedicated server process for the connection to the database.
(6)The client connects directly to the dedicated server process on the least-loaded node and accesses the database instance.
Thanks
Ahmad
http://www.oracleride.com/
T22:
Rman Block Change Tracking (#Block #Change #Tracking)
Block Change Tracking is a mechanism for improving the performance of incremental backups.
It works by keeping the track of Data Block changes in the block change tracking file.
So instead of scanning all data blocks, RMAN uses this file to identify the changed blocks that need to be backed up.
How do we activate this feature, with ALTER SYSTEM or ALTER DATABASE? The answer is quite clear:
ALTER DATABASE ENABLE BLOCK CHANGE TRACKING;
Be careful: after enabling this feature you must first take a full (level 0) backup; block change tracking is then actually used starting with the next incremental backup.
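The feature's state and tracking file can be checked afterwards (the file path is hypothetical):

```sql
-- Enable with an explicit file location, then verify:
ALTER DATABASE ENABLE BLOCK CHANGE TRACKING
  USING FILE '/u01/app/oracle/bct/orcl_bct.f';

SELECT status, filename FROM v$block_change_tracking;
```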
@VOracle_VOITG
T23:
Difference between #Complete and #Incomplete #Recovery?
In fact, in complete recovery Oracle uses the redo logs to recover the database: it applies all of the redo changes from the archived logs and the online redo logs without the loss of any committed transactions. Complete recovery needs all archived log backups.
In incomplete recovery, or point-in-time recovery, Oracle does not recover the database up to the current time; rather, it recovers the database up to a specified point in time or SCN.
Oracle uses a backup to produce a noncurrent version of the database. In other words, we do not apply all of the redo records generated after the most recent backup.
After an incomplete recovery we have to open the database with the RESETLOGS option.
We usually perform incomplete recovery of the whole database in the following situations:
-Media failure and loss of some or all online redo log files, or loss of the control file.
-User error, for example, a user drops a table.
-Missing archived logs.
@VOracle_VOITG
Point-7-:
Listeners' Importance in a RAC DB
1)The local listener runs on each node.
-It is set by the LOCAL_LISTENER parameter.
-The local listener uses the node's virtual IP (VIP) as its address.
2)The remote listener is used for the SCAN name.
-From 11g R2 RAC, the remote listener identifies the SCAN listeners.
-The REMOTE_LISTENER parameter is set to the SCAN name.
Thanks
Ahmad
http://www.oracleride.com/
Oracle #RAC #Cache #Fusion
T24:
Cache Fusion is one of the most important concepts in Oracle RAC.
As we know, each instance of a RAC cluster has its own local buffer cache. Cache Fusion transfers data blocks from the buffer cache of one instance to the buffer cache of another instance over the high-speed cluster interconnect.
Cache Fusion is implemented by a controlling mechanism called the Global Cache Service (GCS); maintaining the integrity of the data is the task of the GCS.
We can say that the GCS is the heart of the Cache Fusion process. The GCS runs in RAC as the LMS background process.
Reference: oracle-help.com, myorastuff.blogspot.com
@VOracle_VOITG
#Rac #Block #Transfer
T25:
In a RAC environment, when a session on one instance requests a block and the block is not available in the local cache of that instance, a request is sent to the master of the object.
When the request for the block is received, the GCS processes get involved, locating the block with the help of the GRD.
The master then sends a request to the current holder, and the entire block is physically transferred across the interconnect to the requesting instance.
@VOracle_VOITG
#Rac #Read #Block from #Disk
T26:
A user session or process attached to instance node4 requests a block, and the block is not available in the local cache of the instance.
node4 determines that the master for the specific block containing the row is node3. The request is directed to instance node3, where the GRD for the object is maintained. After checking against the GRD, instance node3 determines that neither node3 nor any other instance in the cluster has a copy of the block, so the data must be on disk. Hence, a grant message is sent to the requesting instance to read the block from disk.
The grant gives the reader on instance node4 permission to read the block from disk.
@VOracle_VOITG
T27:
#Data #Guard #Services:
- Log Transport Services: Data Guard transport services transmit the redo directly from the primary database log
buffer to the standby database(s) where it is written to a standby redo log file.
- Log Apply Services: apply services on the standby database read redo records from a standby redo log file, perform continuous validation to ensure that the redo is not corrupt, and then apply the redo changes to the standby database.
- Role Management Services: Change the role of a database from a standby database to a primary database,
or from a primary database to a standby database using either a switchover or a
failover operation.
@VOracle_VOITG
T28:
Data #Guard #LNS #RFS #MRP #process:
LNS (Log Network Server):
-reads redo information from the redo buffer in SGA of PRIMARY Database.
-passes redo to Oracle Net Services for transmission to the STANDBY database.
RFS (Remote File Server):
-records the redo information transmitted by the LNS at the STANDBY database on standby redo log files.
-transmits an acknowledgment back to the LNS process on the primary database.
MRP (Managed Recovery Process):
-A physical standby database applies redo received from the primary using MRP.
@VOracle_VOITG
T29:
Benefits of a #Physical #Standby Database :
- Disaster recovery and high availability: Easy-to-manage switchover and failover
capabilities allow easy role reversals between primary and physical standby database.
- Data protection : Using a physical standby database, Data Guard can ensure no data loss.
- Reduction in primary database workload: Oracle Recovery Manager (RMAN) can use physical standby databases to
off-load backups from the primary database saving valuable CPU and I/O
cycles.
- Performance: Redo Apply technology is a highly efficient apply mechanism.
@VOracle_VOITG
T30:
#Log #Apply #services use:
#Redo Apply (physical standby databases only)
#SQL Apply (logical standby databases only)
@VOracle_VOITG
T31:
#RAC #Load #Balancing #Methods
Two types of load balancing that you can implement for an Oracle RAC database: client-side and server-side load balancing.
On the client side: On Client TNSNAME.ora set (LOAD_BALANCE=ON)
On the server side: After a listener receives the connection request, it can forward the request to another instance based on the connect time load balancing goal (CLB_GOAL)specified for the service.
CLB_GOAL can be:
-LONG (default): used for application connections that stay connected for a long period, such as third-party connection pools and SQL*Forms applications.
. The listener load-balances on the number of sessions.
. The runtime load balancing goal is not used in this case.
-SHORT: used for application connections that are short in duration.
. Integrated with the Load Balancing Advisory (LBA).
. The listener uses the LBA to place the connection based on CPU utilization on the nodes.
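A sketch of both sides (database, service, and host names are hypothetical). Client side, in tnsnames.ora:

```
APP =
  (DESCRIPTION =
    (LOAD_BALANCE = ON)
    (ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = node2-vip)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = app_svc))
  )
```

Server side, setting the connect-time load balancing goal per service:

```
srvctl modify service -db ORCL -service app_svc -clbgoal SHORT
```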
@VOracle_VOITG
T32:
Advantages of #Run #Time #load #Balancing (#RTLB)
1- Run time load balancing is achieved using connection pools.
Work requests are automatically balanced across the pool of connections.
2- provides load balancing at the transaction level instead of load balancing at the time of initial connection.
3- With server-side load balancing, the listener directs a connection request to the best instance currently providing the service by using the load balancing advisory.
@VOracle_VOITG
T33:
What happens after user #Commit in Oracle
When a commit is issued, the transaction ends successfully and all of its changes are recorded permanently in the database.
After #commit:
1- The Oracle database assigns a unique system change number (SCN) to the transaction.
2- The LGWR writes the redo log buffer entries for the transaction from the redo log buffer to
the online redo log files. This guarantees that the changes will not be lost even if there is an instance failure.
3-Server process sends a transaction completion message to the user process and any locks that Oracle holds are released, and Oracle marks the transaction as complete.
@VOracle_VOITG
T34:
Oracle #RAC #NODE #EVICTION
What are the root causes of #NODE #EVICTION
1-Missing network heartbeat:
A node will be evicted from the cluster if it can’t communicate with other nodes in the cluster for example lost private network.
2-Missing disk heartbeat:
A node will be evicted from the cluster if it can’t access the voting disks.
3-Some kind of hang in accessing the voting disk:
If one of the RAC instances is hanging, the database LMON process will request a member kill and ask the CSS process to remove the hanging database instance from the cluster; in the same way, a node that has problems writing to the voting disk file can be evicted.
4-cssdagent stopping:
cssdagent and cssdmonitor are monitoring processes; therefore, if the execution of cssdagent runs into a problem or stops, the related node will be evicted.
@VOracle_VOITG
T35:
#RAC #processes which are basically deciding about node #evictions:
1. OCSSD : This process is primarily responsible for inter-node health monitoring and instance endpoint recovery.
2. OPROCD : This process performs hang checking, detecting node hangs and driver freezes on a machine.
@VOracle_VOITG
T36:
Oracle #Log #File #Sync #Wait #Event
The Oracle “log file sync” wait event is triggered when a user session issues a commit (or a rollback).
When a commit occurs then the LGWR has to write the log buffer to the online redo log files.
The wait is related to LGWR as we know after LGWR write Redos, It sends confirmation of its completion back to the user session.
The wait time includes the writing of the log buffer and the post-confirmation back.
-- What are the reasons for this wait?
- Commit frequency, the application might be committing after every row, rather than batching COMMITs.
- Auditing. Review the Oracle auditing settings: if auditing is enabled for SELECT statements, Oracle could be spending time writing and committing audit data to the AUD$ table.
- Slow I/O
—Solutions:
-Reduce other I/O activity on the disks containing the redo logs, or use dedicated disks.
-Move the redo logs to faster disks or a faster I/O subsystem (for example, switch from RAID 5 to RAID 1).
-Reduce the amount of redo being written with NOLOGGING / UNRECOVERABLE options.
-Use the COMMIT NOWAIT option (COMMIT_LOGGING=BATCH, COMMIT_WAIT=NOWAIT); note that an instance crash may cause the loss of data still batched inside the log buffer.
-Check the size of the log buffer as it may be so large that LGWR is writing too many blocks at one time.
Reference: Dean Richards
@VOracle_VOITG
T37:
#Current #Block Read (#CUR_BLOCK) and #Consistent #Block Read (#CR_BLOCK) in #RAC
Current block (CUR): the first time a block is read into a buffer of any participating instance, it is termed a current block.
Consistent read block (CR): a copy of the block transferred from one instance to another because a session on the other instance requested the block.
@VOracle_VOITG
T38:
What are the #Hot #blocks?
Hot blocks are an application issue. When the application has to repeatedly access the same block or blocks.
This happens when multiple sessions repeatedly access one or more blocks that are protected by the same child cache buffers chains latch.
Oracle Hot Blocks Cause Cache Buffers Chains Latches Contention and generally, hot blocks have higher touch count values.
In most cases, increasing the number of cache buffers chains latches will do little to improve performance.
How to spread the rows across more blocks:
-Deleting and reinserting some of the rows by ROWID.
-Exporting the table, increasing PCTFREE significantly, and importing the data. This minimizes the number of rows per block.
-Minimizing the number of records per block in the table.
-For indexes, rebuilding them with higher PCTFREE values.
-Considering a smaller block size.
-Using a better index type, such as a reverse key index.
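For the index case, a reverse key index stores the key bytes reversed, which spreads sequential key inserts across many leaf blocks (index, table, and column names are hypothetical):

```sql
CREATE INDEX emp_id_rix ON emp (empno) REVERSE;

-- Or rebuild an existing index with more free space:
ALTER INDEX emp_id_ix REBUILD PCTFREE 40;
```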
Reference: logicalread.com
@VOracle_VOITG
T39:
Why we must enable #force #logging on production for building a #standby?
-By enabling force logging, all operations are logged in the redo logs.
-With NOLOGGING operations, no redo changes are logged on the primary database, so those changes cannot be applied to the standby database.
-Redo logs are critical for media recovery operation and redo applying on standby database.
-In some SQL statements, the user has the option of specifying the NOLOGGING clause,
which indicates that the database operation is not logged in the online redo log file.
Even though the user specifies the clause, a redo record is still written to the online
redo log file. However, there is no data associated with this record. This can result in
log application or data access errors at the standby site and manual recovery might be
required to resume applying log files.
-In Force logging mode Oracle database must write the redo records even when NOLOGGING is used with DDL Statements.
It will force the write of REDO records even when no-logging is specified because every change should be recorded and updated in standby server while syncing.
--For example: create a table in NOLOGGING mode on the primary, then select from the table on the standby side, and we get:
ERROR at line 1:
ORA-01578: ORACLE data block corrupted (file # 100, block # 2658)
ORA-01110: data file 100: '/oradata/ORCL/appdata_test_66.dbf'
ORA-26040: Data block was loaded using the NOLOGGING option
.
This feature can be enabled at TWO levels:
– Database level
– Tablespace level
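Enabling and checking force logging at the two levels (the tablespace name is hypothetical):

```sql
-- Database level:
ALTER DATABASE FORCE LOGGING;
SELECT force_logging FROM v$database;

-- Tablespace level:
ALTER TABLESPACE users FORCE LOGGING;
SELECT tablespace_name, force_logging FROM dba_tablespaces;
```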
@VOracle_VOITG
T40:
All members of the CURRENT #redo #log group are corrupted. What will happen, and what is the action?
The online redo log group is in CURRENT state and the database is in ARCHIVELOG mode:
LGWR will terminate the instance in this case. We have to perform incomplete recovery.
#Incomplete #Recovery
-SCN-based incomplete recovery
SELECT archivelog_change#-1 "SCN" FROM v$database;
RUN {
SET UNTIL SCN 1048438;
RESTORE DATABASE;
RECOVER DATABASE;
ALTER DATABASE OPEN RESETLOGS;
}
-------------
-Time-based incomplete recovery
SET UNTIL TIME "TO_DATE('2019-07-01:10:00:00', 'YYYY-MM-DD:HH24:MI:SS')";
-------------
-Sequence-based incomplete recovery
SET UNTIL SEQUENCE 3400;
-------------
Every incomplete recovery needs to be followed by a resetlogs.
@VOracle_VOITG
T41:
#RMAN #Recovery #Catalog
A recovery catalog is a database schema used by RMAN to store metadata about one or more Oracle databases. Typically, you store the catalog in a dedicated database.
A recovery catalog provides the following benefits:
-If the target control file and all backups are lost, then the RMAN metadata still exists in the recovery catalog.
-A recovery catalog centralizes metadata for all your target databases.
-A recovery catalog can store metadata history much longer than the control file.
-You can use a stored script as an alternative to a command file for managing frequently used sequences of RMAN commands.
-A recovery catalog is required for backups that use KEEP FOREVER.
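A typical setup, sketched with hypothetical user, tablespace, and connect-string names (the catalog owner needs the RECOVERY_CATALOG_OWNER role):

```
-- In the catalog database, as a DBA:
CREATE USER rman_cat IDENTIFIED BY secret
  DEFAULT TABLESPACE cat_ts QUOTA UNLIMITED ON cat_ts;
GRANT recovery_catalog_owner TO rman_cat;

-- From the RMAN client:
rman CATALOG rman_cat/secret@catdb
RMAN> CREATE CATALOG;
RMAN> CONNECT TARGET /
RMAN> REGISTER DATABASE;
```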
@VOracle_VOITG
T42:
#RMAN #Virtual #Private #Catalogs
You may want to restrict each database administrator to modifying only the backup metadata belonging to the databases they are responsible for.
This goal can be achieved by implementing virtual private catalogs.
-There is no restriction on the number of virtual private catalogs that can be created under one recovery catalog. Each virtual private catalog is owned by a database schema user different from the user who owns the recovery catalog.
-Every virtual private catalog has access to all global stored scripts, but cannot access non-global stored scripts that belong to databases it does not have privileges on.
@VOracle_VOITG
T43:
What is #SCN?
Why Oracle use unique #SCN numbers?
SCN (System Change Number) is the primary mechanism for maintaining data consistency in an Oracle database. In fact, it is Oracle's clock: every time we commit,
the clock increments. The SCN just marks a consistent point in time in the database.
SCNs are necessary to satisfy the ACID properties of a transaction;
therefore, after a commit, a unique commit SCN is assigned to the transaction and recorded in the transaction table.
SCN is used primarily in the following areas:
1-Every redo record has an SCN version of the redo record in the redo header. (redo records can have non-unique SCN)
2-Every data block also has block SCN (aka block version).
3-Read consistency also uses SCN. Every query includes an SCN at the start of the query.
4-Commit. Every commit will generate SCN, aka commit SCN, that marks a transaction boundary.
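The current SCN can be observed directly (the values returned depend entirely on the database):

```sql
SELECT current_scn FROM v$database;

-- Equivalent, via the supplied package:
SELECT dbms_flashback.get_system_change_number FROM dual;

-- Convert an SCN to an approximate wall-clock time:
SELECT scn_to_timestamp(current_scn) FROM v$database;
```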
Reference: orainternals.wordpress.com
@VOracle_VOITG
T44:
What happened behind #begin #backup?
After the BEGIN BACKUP command, the database goes into backup mode: the datafile headers are frozen (their checkpoint SCN stops advancing), but the database otherwise works normally, just as before BEGIN BACKUP. Meanwhile, extra redo is kept for the changes made in the database, and when the END BACKUP command is run, the datafile headers are synchronized with the last SCN in the redo log files.
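The commands, plus the view that shows which files are in backup mode:

```sql
ALTER DATABASE BEGIN BACKUP;

-- Files currently in backup mode show STATUS = 'ACTIVE':
SELECT file#, status, change#, time FROM v$backup;

ALTER DATABASE END BACKUP;
```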
@VOracle_VOITG
T45:
Query running slow on instance 1 and fast in instance 2 what is our approach
In this case, I reviewed the execution plan of the query on each instance and noticed the query was hitting the cache on node 2 all the time, whereas on node 1 it was doing physical reads all the time. Therefore, to find the root cause, we should check the execution plan of the query on each instance, gather statistics, use tkprof to trace session behavior, and check DB_CACHE_SIZE.
@VOracle_VOITG
T46:
Oracle #SGA and #PGA
• SGA (System Global Area) is a memory area allocated during an instance start up.
• SGA size is controlled by parameters such as SGA_TARGET and SGA_MAX_SIZE defined in the initialization parameter file (init.ora file or SPFILE); DB_CACHE_SIZE sizes the buffer cache component.
• PGA (Program or Process Global Area) is a memory area that stores a user session specific information.
@VOracle_VOITG
T47:
What is #Database #Writer (#DBWR) and when does #DBWR write to the data file?
• DBWR is a background process that writes dirty data blocks from the database buffer cache to the data files.
There are 4 important situations when DBWR writes to the data files:
• Every 3 seconds
• Whenever a checkpoint occurs
• When a server process needs free space in the database buffer cache to read new blocks
• Whenever the number of changed (dirty) blocks reaches a threshold value
@VOracle_VOITG
T47:
Oracle #ASM #Filter #Driver
1- Oracle ASM Filter Driver (Oracle ASMFD) is installed with an Oracle Grid Infrastructure installation.
2- Oracle ASMFD rejects write I/O requests that are not issued by Oracle software.
3- Oracle ASMFD simplifies the configuration and management of disk devices by eliminating the need to rebind disk devices used with Oracle ASM each time the system is restarted.
4- ASMFD is a superset of ASMLIB; therefore it includes base-ASMLIB features (permissions persistence & sharing open handles).
5- The Oracle ASM filter driver (ASMFD) is a kernel module that resides in the I/O path of the Oracle ASM disks. Logically, ASMFD provides an interface between Oracle binaries and the underlying operating environment which includes the storage hardware interfaces.
Following are descriptions of the key capabilities of ASMFD:
• Reject non-Oracle I/Os
• Reduce OS resource usage.
• Enable device name persistence.
• Faster node recovery.
@VOracle_VOITG
T48:
#SQL #plan #baseline.
A plan baseline is a set of accepted plans that the optimizer is allowed to use for a SQL statement.
In the typical use case, the database accepts a plan into the plan baseline only after verifying that the plan performs well.
a plan includes all plan-related information (for example, SQL plan identifier, set of hints, bind values, and optimizer environment)
that the optimizer needs to reproduce an execution plan.
SQL Plan Baselines, on the other hand, are much more automated, and more proactive in their behavior.
New plans can only be added to the SQL Plan Baseline if they will not result in slower performance and only plans that are actually
in the SQL Plan Baseline are used by the optimizer to generate the plans to process statements.
A SQL plan baseline for a SQL statement consists of a set of accepted plans. When the statement is parsed,
the optimizer will only select the best plan from among this set. SQL plan baselines are generated by the SQL plan management (SPM) mechanism.
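A sketch of manually capturing a baseline from the cursor cache with DBMS_SPM; the SQL_ID is a placeholder:

```sql
-- Capture the plan(s) for one statement from the cursor cache
DECLARE
  n PLS_INTEGER;
BEGIN
  n := DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE(sql_id => 'abcd1234efgh5');
END;
/

-- Inspect the captured baselines
SELECT sql_handle, plan_name, enabled, accepted
FROM   dba_sql_plan_baselines;
```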
@VOracle_VOITG
T49:
#SQL #profiles
SQL profiles provide additional information to the optimizer to help select the best plan.
A SQL profile is a set of auxiliary information specific to a SQL statement. Conceptually, a SQL
profile is to a SQL statement what statistics are to a table or index. The database can use auxiliary information to improve execution plans.
A SQL profile helps the optimizer pick the best plan in different situations.
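SQL profiles are typically produced by the SQL Tuning Advisor; a hedged sketch of that flow (the SQL_ID and profile name are placeholders):

```sql
DECLARE
  tname VARCHAR2(64);
BEGIN
  -- Create and run a tuning task for one statement
  tname := DBMS_SQLTUNE.CREATE_TUNING_TASK(sql_id => 'abcd1234efgh5');
  DBMS_SQLTUNE.EXECUTE_TUNING_TASK(tname);
  DBMS_OUTPUT.PUT_LINE('Task: ' || tname);
END;
/

-- If the advisor recommends a profile, accept it (task name from above)
-- EXEC DBMS_SQLTUNE.ACCEPT_SQL_PROFILE(task_name => 'TASK_123', name => 'my_profile');
```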
@VOracle_VOITG
T50:
#SQL #Plan #Management (#SPM)
SPM is a feature introduced in Oracle Database 11g that enables the system to automatically control SQL plans by maintaining
SQL plan baselines.
With this feature enabled, a newly generated plan can be used only if the Oracle optimizer can
determine that using the new plan will not impact the performance of the statement. However, if the optimizer determines that
the new SQL plan is more efficient than the previous plan, it will use the new plan and will save this new plan as
the new plan baseline for future use.
SQL Plan Management (SPM) has three components: plan capture, plan selection, and plan evolution.
SPM plan capture:
When the statement is executed it will be hard parsed and a cost based plan will be generated.
Once the cost based plan is determined it will be compared to the plans that exist in the SQL plan baseline.
If the plan matches one of the accepted plans in the SQL plan baseline, Oracle uses it. However,
if the cost based plan doesn't match any of the accepted plans in the SQL plan baseline it will be added to the plan baseline as an unaccepted plan.
SPM plan selection:
When a SQL statement with a SQL plan baseline is parsed, the accepted plan with the best cost will be chosen.
This process uses the regular optimizer. The presence of a SQL profile will affect the estimated cost of each of
these plans and thus potentially the plan that is finally selected.
SPM plan evolution:
The third sub-component of SPM is verification or evolution of non-accepted plans.
The evolution process test-executes the non-accepted plan against the best of the accepted plans.
The best accepted plan is selected based on cost. Again, if a SQL profile exists,
it will influence the estimated cost and thus the accepted plan chosen for comparison against the non-accepted plan.
Reference: blogs.oracle.com
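The evolution step above can be sketched with DBMS_SPM in the 11g style; the SQL handle is a placeholder (from 12c onward there is also a task-based API, DBMS_SPM.CREATE_EVOLVE_TASK):

```sql
-- Test-execute unaccepted plans for one baseline and report the outcome
SET SERVEROUTPUT ON
DECLARE
  report CLOB;
BEGIN
  report := DBMS_SPM.EVOLVE_SQL_PLAN_BASELINE(sql_handle => 'SQL_abc123');
  DBMS_OUTPUT.PUT_LINE(report);
END;
/
```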
@VOracle_VOITG
T51:
#Hard #parse vs #Soft #parse
If a user executes a SQL statement and it does not exist in the shared pool, then Oracle has to do a hard parse.
If a user executes a SQL statement that already exists in the shared pool and a shareable version of it is present, Oracle does a soft parse.
During the parse call Oracle performs:
-Syntax check: parses the syntax to validate the SQL keywords.
-Semantic check: verifies that the referenced objects and columns exist and that the user has the required privileges.
-Shared Pool check: during the parse, Oracle uses a hashing algorithm to generate a hash value
for every SQL statement. This hash value is checked in the shared pool. If an existing parsed statement has the same hash value,
it can be reused.
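Parse activity can be observed from the dynamic views; a small sketch:

```sql
-- Instance-wide parse counters
SELECT name, value
FROM   v$sysstat
WHERE  name IN ('parse count (total)', 'parse count (hard)');

-- Statements that are parsed more often than they are executed
SELECT sql_id, parse_calls, executions, SUBSTR(sql_text, 1, 40) AS sql_text
FROM   v$sql
WHERE  parse_calls > executions
AND    ROWNUM <= 10;
```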
@VOracle_VOITG
T52:
#Oracle #Shared #Pool
The shared pool is the second largest part of the SGA; it keeps SQL statements, information on objects, packages, and PL/SQL blocks.
Information created in memory for one session can be useful to another session. The shared pool contains:
- Library Cache: it includes the shared SQL area and private SQL area.
The library cache holds parsed SQL statements, their execution plans, and parsed PL/SQL code.
- Dictionary Cache: it holds information about user privileges, table and column definitions, passwords,
etc.
@VOracle_VOITG
T53:
#CURSORS in Oracle
A cursor is a group of information stored in memory (SGA) about a SQL statement.
A cursor lives in the library cache in the SGA and is allocated to a SQL statement.
A cursor stores different information about the statement, such as its text, execution plan,
statistics, etc.
Each SQL statement has:
-One parent cursor
-One or more child cursors
Each parent requires at least one child cursor.
@VOracle_VOITG
T54:
#CURSORS + #HARD #Parse + #SOFT #Parse
When a new SQL statement arrives, the database tries to find a suitable child cursor in the library cache. If there is no parent cursor, then it is a hard parse.
If there is a parent cursor, but its existing child cursors cannot be reused by this call
(because of different sizes of bind variables, different optimizer settings,
or different NLS settings), there will be a HARD PARSE.
If an existing child cursor can be reused by this call, there will be a soft parse.
The parent cursor contains the SQL statement text only.
Child cursors contain the execution plans.
Reference: scribd.com
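The parent/child structure above can be inspected directly; the SQL_ID is a placeholder:

```sql
-- One row per child cursor of the statement
SELECT sql_id, child_number, plan_hash_value
FROM   v$sql
WHERE  sql_id = 'abcd1234efgh5';

-- Why the children could not be shared: each 'Y' column is a mismatch reason
SELECT *
FROM   v$sql_shared_cursor
WHERE  sql_id = 'abcd1234efgh5';
```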
@VOracle_VOITG
T55:
How can I avoid unnecessary #Parsing in Oracle?
By using bind variables we can avoid unnecessary parsing.
If bind variables are not used, then every SQL statement is hard parsed. This has
a severe impact on performance, and you will face high parse-related waits in your DB.
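A small SQL*Plus sketch of the difference; the table and column names are illustrative (the classic SCOTT.EMP example):

```sql
-- Each distinct literal produces a new hard parse
SELECT ename FROM emp WHERE empno = 7369;
SELECT ename FROM emp WHERE empno = 7499;

-- With a bind variable, one shared cursor is reused (soft parse)
VARIABLE id NUMBER
EXEC :id := 7369
SELECT ename FROM emp WHERE empno = :id;
```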
@VOracle_VOITG
T56:
#Fencing in Oracle #RAC
I/O fencing prevents updates by failed instances by detecting the failure and preventing split brain in the cluster.
When a cluster node fails, the failed node needs to be fenced off from all the shared disk devices or diskgroups.
This methodology is called I/O fencing, sometimes called disk fencing or failure fencing.
@VOracle_VOITG
T57:
#GPnP ( #Grid #plug and #play ) #profile in #Oracle
The GPnP profile is a small XML file located in GRID_HOME/gpnp; it removes the
need for explicit add and delete node steps.
Each node maintains a local copy of the GPnP profile, which is maintained by the GPnP daemon (GPnPD).
The GPnP daemon ensures the synchronization of the GPnP profile across all the nodes in the cluster, and the GPnP profile is
used by Clusterware to establish the correct global personality of a node.
It cannot be stored on ASM, as it is required prior to the start of ASM; hence, it is stored locally on each node.
GPnP Profile contains various attributes:
Cluster name
Network classifications (Public/Private).
Storage to be used for CSS.
Storage to be used for ASM: SPFILE location, ASM DiskString, etc
Digital signature information.
@VOracle_VOITG
T58:
#Data #Redaction in Oracle Database 12c
Oracle Data Redaction is one of the new features introduced in Oracle Database 12c. This new feature is part of the Advanced Security option and enables the protection of data shown to the user in real-time, without requiring changes to the application.
We can create redaction policies which specify conditions that must be met before the data gets redacted and returned to the user. During the definition of such policies, the DBA can specify which columns and the type of protection that must be applied.
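A hedged sketch of such a policy with DBMS_REDACT; the schema, table, column, and condition are illustrative:

```sql
-- Fully redact a column for everyone except a privileged user (names are illustrative)
BEGIN
  DBMS_REDACT.ADD_POLICY(
    object_schema => 'APP',
    object_name   => 'CUSTOMERS',
    column_name   => 'CREDIT_CARD',
    policy_name   => 'redact_cc',
    function_type => DBMS_REDACT.FULL,
    expression    => 'SYS_CONTEXT(''USERENV'',''SESSION_USER'') != ''APP_ADMIN''');
END;
/
```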
@VOracle_VOITG
T59:
Oracle #Security Solutions/Controls
-Data Masking
-Advance Security (TDE, Data Redaction)
-Label security
-Virtual Private Database (VPD)
-Fine Grained Auditing (FGA)
-Data Vault
-Database Auditing
-Audit Vault
-Storage/Network authentication
-Oracle Database Firewall
@VOracle_VOITG
T60:
What is #LMS #process in Oracle #RAC? (#Global #Cache Service Process)
LMS (Global Cache Service Process) is one of the Oracle RAC background processes in the database. The main task
of LMS is to transmit block images between the buffer caches of different instances.
This processing is part of the Cache Fusion feature.
The block images are transported using the high-speed interconnect.
Another task is to maintain records of the data file statuses and of each cached block by recording information in the Global Resource Directory (GRD).
@VOracle_VOITG
T61:
What is #LMD #process in Oracle #RAC? (#Global #Enqueue Service Daemon)
The LMD process manages incoming remote resource requests within each instance
(remote resource requests are requests originating from another instance).
The LMD process also handles deadlock detection and remote enqueue (lock) requests.
@VOracle_VOITG
T62:
#Buffer #Busy #Waits
The Buffer Busy Waits Oracle metric occurs when an Oracle session needs to access a block in the buffer cache,
but cannot because the buffer copy of the data block is locked.
This buffer busy wait condition can happen for either of the following reasons:
A: The block is being read into the buffer by another session, so the waiting session must wait for the block read to complete.
B: Another session has the buffer block locked in a mode that is incompatible with the waiting session's request.
@VOracle_VOITG
T63:
#GC #Buffer #Busy #Waits
in #RAC
GC buffer busy event means that a session is trying to access a buffer, but there is an open request for Global cache lock for that block already, and so,
the session must wait for the GC lock request to complete before proceeding. This wait is instrumented as 'gc buffer busy' event.
From 11g onwards, the GC buffer busy event is differentiated into two cases:
1-If the existing GC open request originated from the local instance, then the current session will wait on 'gc buffer busy acquire'.
Essentially, the current process is waiting for another process in the local instance to acquire the GC lock on behalf of the local instance. Once the GC lock is acquired,
the current process can access that buffer without additional GC processing (if the lock is acquired in a compatible mode).
2-If the existing GC open request originated from a remote instance, then the current session will wait on the
'gc buffer busy release' event. In this case the session is waiting for another session (on another instance)
to release the GC lock, so that the local instance can acquire the buffer.
Reference: orainternals.wordpress.com
@VOracle_VOITG
T64:
The #optimizer #tasks in Oracle
The optimizer (also known as the "query optimizer") performs its tasks during the parse phase of
SQL processing. Most of these tasks are performed only during a hard parse, because the optimizer output, the execution plan, is stored with the cursor in the shared pool and reused afterwards. Optimizer
operations include:
• Transforming queries
• Estimating
• Generating plans
@VOracle_VOITG
T65:
#Estimating
The estimator generates three types of measures:
• Selectivity
• Cardinality
• Cost
The end goal of the estimator is to estimate the overall cost of a given plan
• Selectivity represents a fraction of rows from a row source.
• Cardinality represents the number of rows in a row source. Here, the row source can be a base table, a view, or the result of a join or GROUP BY operator.
• Cost represents the number of units of work (or resource) that are used. The query optimizer uses disk I/O, CPU usage, and memory usage as units of work.
@VOracle_VOITG
T66:
#Controlling the #Behavior of the #Optimizer with #Parameters.
Optimizer behavior can be controlled by using the following initialization parameters:
• CURSOR_SHARING
• DB_FILE_MULTIBLOCK_READ_COUNT (autotuned)
• OPTIMIZER_INDEX_CACHING
• OPTIMIZER_INDEX_COST_ADJ
• PGA_AGGREGATE_TARGET
• OPTIMIZER_MODE
• OPTIMIZER_FEATURES_ENABLE
• OPTIMIZER_ADAPTIVE_FEATURES
• OPTIMIZER_ADAPTIVE_REPORTING_ONLY
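A small sketch of inspecting and adjusting some of these parameters; the values are illustrative experiments, not recommendations:

```sql
-- Inspect current optimizer parameters (from SQL*Plus)
SHOW PARAMETER optimizer

-- Session-level experiment: make index access paths look cheaper
ALTER SESSION SET optimizer_index_cost_adj = 50;

-- Pin optimizer behavior to an older release for this session only
ALTER SESSION SET optimizer_features_enable = '11.2.0.4';
```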
@VOracle_VOITG
T67:
#Adaptive #Execution #Plans
The Adaptive Execution Plans feature enables the optimizer to automatically adapt a poorly
performing execution plan at run time and prevent a poor plan from being chosen on
subsequent executions. With an adaptive plan, the optimizer instruments its chosen plan so
that at run time, it can detect if the estimates are not optimal, and the plan can change to
automatically adapt to the actual conditions.
The two Adaptive Plan techniques are:
• Dynamic plans
• Re-optimization
The database uses adaptive execution plans when OPTIMIZER_FEATURES_ENABLE is set to 12.1.0.1 or later.
@VOracle_VOITG
T68:
What are the #Access #Paths in Oracle?
Access paths are the ways in which data is retrieved from the database.
The optimizer chooses among the access paths below when generating the best execution plan.
• Full table scan
• Row ID scan
• Index scan
• Sample table scan
• Cluster scan
@VOracle_VOITG
T69:
Why does Oracle use #Full #Table #Scans?
• Lack of index
• A large amount of data
• Small table
• Multiblock I/O calls
• All blocks below high-water mark
Full table scans have a lower cost than index range scans when accessing a large fraction of
the blocks in a table, because full table scans can use larger I/O calls, and making fewer large
I/O calls has a lower cost than making many smaller calls.
@VOracle_VOITG
T70:
#Row #ID #Scans
The row ID provides the data file, data block, and the location of the row in that block. Locating a row by
specifying its row ID is the fastest way to retrieve a single row.
@VOracle_VOITG
T71:
#Types of #index #scans:
• Index unique scan
• Index range scan
• Index range scan descending
• Index skip scan
• Full scan
• Fast-full index scan
• Index join
• Bitmap operations
@VOracle_VOITG
T72:
#Read #I/O #Wait #events:
• db file sequential read: A count of standard reads, one block at a time into the buffer
cache. This is seldom seen in a full table scan (FTS).
• db files scattered read: A count of multiblock reads into the buffer cache. Up to
DB_FILE_MULTIBLOCK_READ_COUNT consecutive blocks are read at a time and
scattered into buffers in the buffer cache.
• direct path read: A count of waits on direct reads into PGA or a temporary segment.
• direct path read temp: A count of waits on direct reads from a temporary tablespace.
This is most likely seen with sorts or temporary tables that do not fit in memory.
@VOracle_VOITG
T7:
Various types of #checkpoints in #Oracle:
- Full checkpoint
- Thread checkpoint
- File checkpoint
- Parallel Query checkpoint
- Object checkpoint
- Log switch checkpoint
- Incremental checkpoint
For example, a Full checkpoint:
- Caused by:
. ALTER SYSTEM CHECKPOINT [GLOBAL]
. ALTER DATABASE BEGIN BACKUP
. ALTER DATABASE CLOSE
. SHUTDOWN [IMMEDIATE]
- Controlfile and datafile headers are updated:
. CHECKPOINT_CHANGE#
@VOracle_VOITG
T8:
User Global Area (#UGA) concept: the UGA is a memory area that holds session-based information. In the shared server architecture, sessions can be served by multiple server processes, so the UGA cannot be stored in the PGA and is moved to the SGA. In the dedicated server architecture, one session = one dedicated process, so the UGA is stored in the PGA.
@VOracle_VOITG
T9:
Oracle Dynamic #Remastering in #RAC
In RAC, every data block is mastered by an instance. Mastering a block simply means that the master instance keeps track of the state of the block until the next reconfiguration event.
When an instance leaves the cluster, the GRD portion of that instance must be redistributed to the surviving nodes. Similarly, when a new instance enters the cluster, the GRD portions of the existing instances must be redistributed to create the GRD portion of the new instance. This is called dynamic resource reconfiguration. In addition, if one instance, compared to the others, is heavily accessing blocks from the same object, the GCS can decide to dynamically migrate all of that object's resources to the instance that is accessing the object most. Remaster activity is performed by the LMD process.
Reference-: Oracleinaction
@VOracle_VOITG
T10:
#ASM redundancy levels
are:
*External redundancy: Oracle ASM does not provide mirroring redundancy and relies on the storage system to provide RAID functionality.
*Normal redundancy: A loss of one Oracle ASM disk is tolerated.
*High redundancy: A loss of two Oracle ASM disks in different failure groups is tolerated.
@VOracle_VOITG
VOracle Interview Tips Group (VOITG)
My primary goal is to review Oracle DBA interview tips and give a quick overview of Oracle database concepts.
Group link:
https://t.me/VOracle_VOITG
T11:
#Authentication Methods in #Oracle:
Oracle supports a robust set of authentication methods:
• Database authentication (username and password stored in the database)
• OS authentication
• Network authentication
• Global user authentication and authorization
• External service authentication
@VOracle_VOITG
T12:
Oracle Row #Chaining and #Migration
Row chaining: a row is too large to fit into a single database block. Row migration: a row is migrated when an update to that row would cause it to no longer fit in its block. Row migration means that the entire row moves and we just leave behind the «forwarding address».
So, the original block just has the rowid of the new block, and the entire row is moved.
* Row migration is typically caused by UPDATE operations.
* Row chaining is typically caused by INSERT operations.
* To diagnose chained/migrated rows, use the ANALYZE command and query the V$SYSSTAT view.
* To remove migrated rows, rebuild the table with a higher PCTFREE using ALTER TABLE MOVE.
Reference-: akadia
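The diagnosis and fix above can be sketched as follows; the table name is illustrative, and the CHAINED_ROWS table is created once with the supplied ?/rdbms/admin/utlchain.sql script:

```sql
-- List chained/migrated rows of the (illustrative) EMP table
ANALYZE TABLE emp LIST CHAINED ROWS INTO chained_rows;

SELECT COUNT(*) FROM chained_rows WHERE table_name = 'EMP';

-- Raise PCTFREE and rebuild the table to fix migrated rows
-- (indexes become UNUSABLE after MOVE and must be rebuilt)
ALTER TABLE emp PCTFREE 20;
ALTER TABLE emp MOVE;
```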
@VOracle_VOITG
T13:
#Partition #Pruning:
Partition pruning is the skipping of unnecessary index and data partitions or subpartitions by a query. Partitioning pruning is the simplest and also the most substantial means to improve performance using partitioning.
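A minimal sketch of pruning in action; the table, partitions, and dates are illustrative:

```sql
-- A range-partitioned table (names are illustrative)
CREATE TABLE sales (
  sale_date DATE,
  amount    NUMBER
)
PARTITION BY RANGE (sale_date) (
  PARTITION p2023 VALUES LESS THAN (DATE '2024-01-01'),
  PARTITION p2024 VALUES LESS THAN (DATE '2025-01-01')
);

-- The predicate on the partition key lets the optimizer skip P2023 entirely;
-- EXPLAIN PLAN shows this in the PSTART/PSTOP columns
SELECT SUM(amount)
FROM   sales
WHERE  sale_date >= DATE '2024-01-01';
```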
@VOracle_VOITG
T14:
What is the reason for #checkpoint #incomplete
in your alert log?
The first reason is redo log files that are too small, and sometimes the reason is related to DBWR: LGWR has to wait until DBWR has completed writing the dirty buffers to disk, which can come from a lack of I/O capacity on the storage or from a configuration problem, for example on the storage array.
@VOracle_VOITG
T15:
Oracle #Restore Point
A restore point is an Oracle feature that flashes the database back to the time the restore point was created. Using this facility assures us that we will go back to the correct time (before the change), at the correct SCN.
Types of restore point:
1-Guaranteed restore point: we can flash the database back to the restore point regardless of the db_flashback_retention_target parameter.
2-Normal restore point: we can flash the database back to the restore point within the time period specified by the db_flashback_retention_target parameter.
In which instance state can we flash the database back to a restore point? In MOUNT mode, and the database must be in ARCHIVELOG mode.
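A sketch of the full cycle from SQL*Plus; the restore point name is illustrative:

```sql
-- Before a risky change
CREATE RESTORE POINT before_upgrade GUARANTEE FLASHBACK DATABASE;

-- To roll back: the database must be mounted (and in ARCHIVELOG mode)
SHUTDOWN IMMEDIATE
STARTUP MOUNT
FLASHBACK DATABASE TO RESTORE POINT before_upgrade;
ALTER DATABASE OPEN RESETLOGS;

-- Clean up when no longer needed
DROP RESTORE POINT before_upgrade;
```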
@VOracle_VOITG
Point-4:
Data Protection Modes in DG
(1) Maximum Performance (default):
NOAFFIRM / ASYNC
(2) Maximum Availability:
AFFIRM or NOAFFIRM / SYNC
(3) Maximum Protection:
AFFIRM / SYNC
Thanks
Ahmad
T16:
What is I/O #fencing or Disk #fencing in RAC:
In a RAC environment, heartbeats are sent over the private network, so when a node fails, communication breaks down in the cluster and the remaining nodes take corrective action: the failed node is prevented from accessing the shared disks, to avoid data corruption. In RAC, the first node that detects the unreachable node will evict the failed node from the cluster.
@VOracle_VOITG
T17:
Why do we need odd number of voting disks?
The reason for the odd number of voting disks is to solve the "split-brain" problem. The split-brain problem occurs when the cluster interconnect goes down and each node cannot obtain a heartbeat from the other nodes. At this time, each node in the cluster thinks they are the only one alive and they think they should become the "master node". There can be only one master node. Each half thinks they are the brains of the operation, which we cannot allow.
So how do we resolve this?
We set up a race. Each candidate node wants to be the master so we put it up to a vote. Whichever contestant gets the most votes wins. So node 1 contacts the first voting disk and says "I am here first so I am the master!". Simultaneously, node 2 contacts the second voting disk and says "I am here first so I am the master!". They both can't be right. If there were an even number of voting disks, then it is possible that each candidate node could come up with exactly half the number of total votes and we would have a tie, which must be resolved. Now the real race begins. The two nodes run to the third and final voting disk. The node that gets there first now has 2 votes to 1, wins the election, and becomes the master node in the cluster.
Thanks
Ahmad
Reference-:dbaschool
T18:
Points about SCAN-:
--------------------------------
• SCAN Name will represent the cluster in the network.
• SCAN used by clients to connect to any database in the cluster.
• SCAN is a GSD resource, which is managed by CRS.
• SCAN provides a single domain name (via DNS), allowing end-users to address a RAC cluster as-if it were a single IP address.
• Removes the requirement to change the client connection if cluster changes.
• Load balances across the instances providing a service. SCAN listener selects least loaded node.
• SCAN requires a DNS entry or GNS to be used.
• SCAN is an alternative to the transparent application failover (TAF) for automatic load balancing.
• Provides failover between instances.
• SCAN VIP/LISTENER will failover to another node in the cluster.
• Clients do not require VIP information.
• Must resolve to at least one address on the public network.
• SCAN provides location independence for the databases.
• Allow clients to use EZConnect or JDBC connections.
• SCAN Listener would forward the request to local listener that’s running on VIPs.
• Each cluster will have 3 SCAN listeners, each having a SCAN VIP defined as cluster resources.
• Instance registers with local listener on its node. Database “REMOTE_LISTENER” registers instances with all SCAN listeners.
• SCAN listeners on each node in the cluster, which are not replacements for a regular listener.
• PMON process of each instance registers the database services with the default listener on the local node and with each SCAN listener, which is specified by the REMOTE_LISTENER database parameter.
• Client queries DNS to resolve to SCAN.
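A client-side sketch of connecting through the SCAN; the host, port, and service names are illustrative:

```sql
-- EZConnect through the SCAN name (names are illustrative)
CONNECT app_user@"rac-scan.example.com:1521/sales_svc"

-- On the database side, instances register with the SCAN listeners via
SHOW PARAMETER remote_listener
```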
Thanks
Ahmad
Reference-: Satya Blog
T19:
#Voting Disks in RAC
Oracle Clusterware uses voting disk files to determine which nodes are members of a cluster.
Once ASM is used to store these files, they are also called VOTING FILES.
What Information is stored in VOTING DISK/FILE
a. Which node is part of the cluster.
b. Which node is leaving the cluster.
c. Which node is joining the cluster.
@VOracle_VOITG
Point-5-:
Data Guard Protection Modes-:
(1) Maximum Performance (D)
-Potential for minimal data loss.
-ASYNC
-The primary never waits for a standby acknowledgement to signal commit success to the application. There can be no guarantee of zero data loss.
(2) Maximum Protection
-Zero data loss
-Double failure protection
-SYNC
-Commit success is signaled to the application only after acknowledgement is received from a standby database that the redo for that transaction has been hardened to disk. The production database cannot proceed until the acknowledgement has been received.
(3) Maximum Availability
-Zero data loss
-Single failure protection
-SYNC / Fast Sync / Far Sync
-Commit success is signaled to the application only after acknowledgement is received from a standby database, or after the NET_TIMEOUT threshold period expires, whichever occurs first.
-A network or standby outage does not affect the availability of the production database.
Thanks
Ahmad
http://www.oracleride.com/
T20:
How does Oracle #instance #recovery work and what happens internally?
When an instance terminates abnormally (crashes), the database goes down in an inconsistent state: the changes of ongoing transactions, committed or uncommitted, were not completely written to the datafiles. Before it can be opened, the database must be made consistent,
so Oracle performs instance recovery. Starting from the last checkpoint SCN in the control file, Oracle applies both committed and uncommitted changes from the redo logs; at this
stage the database is in MOUNT state. Then, when you open the database, Oracle uses the UNDO data to roll back the uncommitted transactions;
after that, the database is in a stable state.
@VOracle_VOITG
T21:
When does a #checkpoint occur? And what will happen when a #checkpoint occur?
A checkpoint occurs on the events below:
• A redo log switch occurs
• If MTTR is enabled by setting the FAST_START_MTTR_TARGET parameter
• On consistent shutdown
• When a checkpoint is triggered manually: ALTER SYSTEM CHECKPOINT
• A partial checkpoint occurs when taking a datafile offline or read-only
• When an active redo log is about to be overwritten by LGWR
• Every 3 seconds (incremental checkpoint)
• After commit
Checkpoint operation:
• Redo records from the SGA are written to the online redo logs on disk.
(LGWR)
• Dirty buffers from the SGA are written to the data files on disk.
(DBWR)
• A checkpoint record is written to the redo log file.
• Data file headers are updated with the latest checkpoint SCN, which is also recorded in the control file; therefore, the datafile headers are in sync with the control file.
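The SCN synchronization above can be observed directly:

```sql
-- Force a checkpoint
ALTER SYSTEM CHECKPOINT;

-- The controlfile checkpoint SCN and the datafile header SCNs should now match
SELECT checkpoint_change# FROM v$database;
SELECT file#, checkpoint_change# FROM v$datafile_header;
```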
@VOracle_VOITG
Point-6-:
How Connection happened in 11g R2 RAC-:
(1)The PMON process of each instance registers the database services with the default listener on the local node and with each SCAN listener, which is specified by the REMOTE_LISTENER database parameter.
(2)The Oracle client connects using the SCAN name.
(3)The client queries DNS to resolve the scan_name.
(4)The SCAN listener selects the least loaded node.
(5)The client connects to the local listener on the least loaded node. The local listener starts a dedicated server process for the connection to the database.
(6)The client connects directly to the dedicated server process on the least loaded node and accesses the database instance.
Thanks
Ahmad
http://www.oracleride.com/
T22:
Rman Block Change Tracking (#Block #Change #Tracking)
Block Change Tracking is a mechanism for improving the performance of incremental backups.
It works by keeping the track of Data Block changes in the block change tracking file.
So instead of scanning all data blocks, RMAN uses this file to identify the changed blocks that need to be backed up.
How do we activate the above feature, with ALTER SYSTEM or ALTER DATABASE? The answer is quite clear:
ALTER DATABASE ENABLE BLOCK CHANGE TRACKING;
Be careful: after enabling this feature, you must first take a full (level 0) backup so that block change tracking takes effect from the next incremental backup.
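A few related commands; the tracking file path is illustrative (if USING FILE is omitted, an Oracle-managed file is created):

```sql
-- Name the tracking file explicitly (path is illustrative)
ALTER DATABASE ENABLE BLOCK CHANGE TRACKING
  USING FILE '/u01/app/oracle/bct/change_tracking.f';

-- Verify the feature status
SELECT status, filename FROM v$block_change_tracking;

-- Turn it off again
ALTER DATABASE DISABLE BLOCK CHANGE TRACKING;
```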
@VOracle_VOITG
T23:
Difference between #Complete and #Incomplete #Recovery?
In complete recovery, Oracle uses the redo logs to recover the database: Oracle applies all of the redo changes from the archived logs and online redo logs, without the loss of any committed transactions. Complete recovery needs all archived log backups.
In incomplete recovery, or point-in-time recovery, Oracle does not recover the database up to the current time; rather, Oracle
recovers the database up to a past point in time or a specific SCN.
Oracle uses a backup to produce a noncurrent version of the database; in other words,
we do not apply all of the redo records generated after the most recent backup.
After an incomplete recovery, we have to open the database with the RESETLOGS option.
We usually perform incomplete recovery of the whole database in the situations below:
-Media failure and loss of some or all online redo log files, or loss of the control file
-User error, for example, a user drops a table
-Missing archived logs
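An RMAN sketch of an incomplete (point-in-time) recovery; the timestamp is illustrative, and the database must be mounted:

```sql
RUN {
  SET UNTIL TIME "TO_DATE('2024-05-01 12:00:00','YYYY-MM-DD HH24:MI:SS')";
  RESTORE DATABASE;
  RECOVER DATABASE;
}
ALTER DATABASE OPEN RESETLOGS;
```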
@VOracle_VOITG
Point-7-:
Listeners' Importance in a RAC DB
1)The local listener runs on each node.
-It is set via the LOCAL_LISTENER parameter.
-The local listener uses the virtual IP (VIP) as its address.
2)The remote listener is used for the SCAN name.
-From 11gR2 RAC, the remote listener identifies the SCAN listeners.
-The REMOTE_LISTENER parameter is set for the SCAN listeners.
-In this parameter we set the SCAN name.
Thanks
Ahmad
http://www.oracleride.com/
Oracle #RAC #Cache #Fusion
T24:
Cache Fusion is one of the most important concepts in Oracle RAC.
As we know, each instance of a RAC cluster has its own
local buffer cache. Cache Fusion transfers data blocks from the buffer cache of one instance to the
buffer cache of another instance using the cluster high-speed interconnect.
Cache Fusion is implemented by a controlling mechanism called the Global Cache Service (GCS), and
maintaining the integrity of the data is the task of the GCS.
We can say that the GCS is the heart of the Cache Fusion process. The GCS runs in RAC as the LMS background process.
Reference: oracle-help.com, myorastuff.blogspot.com
@VOracle_VOITG
#Rac #Block #Transfer
T25:
In a RAC environment when a session on one instance requests for a block, and the block is not available on the local cache of the instance, then a request is sent to the master of the object.
When a request for a block is received, the GCS processes get involved, locating the block with the help of the GRD.
The master will then send a request to the
current holder, and the entire block is physically transferred across the interconnect to the requesting instance.
@VOracle_VOITG
#Rac #Read #Block from #Disk
T26:
A user session or process attached to instance node4 requests a block, and the block is not available in the local cache of the instance.
node4 determines that the master for the specific block that
contains the row is node3. The request is directed to instance node3, where the GRD for
the object is maintained. After checking against the GRD, instance node3 determines that neither it nor any
other instance in the cluster has a copy of the block, so the data must be on
disk. Hence, a grant message is sent to the requesting instance to read the block from disk.
The grant gives the reader on instance node4 permission to read the block from disk.
@VOracle_VOITG
T27:
#Data #Guard #Services:
- Log Transport Services: Data Guard transport services transmit the redo directly from the primary database log
buffer to the standby database(s) where it is written to a standby redo log file.
- Log Apply Services: apply services on the standby database read redo records from a standby redo log file, perform continuous Oracle validation to ensure
that the redo is not corrupt, and then apply the redo changes to the standby database.
- Role Management Services: Change the role of a database from a standby database to a primary database,
or from a primary database to a standby database using either a switchover or a
failover operation.
@VOracle_VOITG
T28:
Data #Guard #LNS #RFS #MRP #process:
LNS (Log Network Server):
-reads redo information from the redo buffer in SGA of PRIMARY Database.
-passes redo to Oracle Net Services for transmission to the STANDBY database.
RFS (Remote File Server):
-records the redo information transmitted by the LNS at the STANDBY database on standby redo log files.
-transmits an acknowledgment back to the LNS process on the primary database.
MRP (Managed Recovery Process):
-A physical standby database applies redo received from the primary using MRP.
@VOracle_VOITG
T29:
Benefits of a #Physical #Standby Database :
- Disaster recovery and high availability: Easy-to-manage switchover and failover
capabilities allow easy role reversals between primary and physical standby database.
- Data protection : Using a physical standby database, Data Guard can ensure no data loss.
- Reduction in primary database workload: Oracle Recovery Manager (RMAN) can use physical standby databases to
off-load backups from the primary database saving valuable CPU and I/O
cycles.
- Performance: redo apply technology as a highly efficient mechanism.
@VOracle_VOITG
T30:
#Log #Apply #services use:
#Redo Apply (physical standby databases only)
#SQL Apply (logical standby databases only)
@VOracle_VOITG
T31:
#RAC #Load #Balancing #Methods
Two types of load balancing that you can implement for an Oracle RAC database: client-side and server-side load balancing.
On the client side: in the client's TNSNAMES.ora, set (LOAD_BALANCE=ON)
On the server side: After a listener receives the connection request, it can forward the request to another instance based on the connect time load balancing goal (CLB_GOAL)specified for the service.
CLB_GOAL: can be:
-LONG (default): used for application connections that stay connected for a long period, such as third-party connection pools and SQL*Forms applications.
. The listener load balances on the number of sessions.
. The run-time load balancing goal is not used in this case.
-SHORT: used for application connections that are short in duration.
. Integrated with the Load Balancing Advisory (LBA).
. The listener uses the LBA to route the connection based on CPU utilization on the nodes.
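A sketch of both sides, with illustrative host, database, and service names (oltp_srv, node1-vip, etc. are placeholders):

```sql
-- Client side: TNSNAMES.ora entry with client-side load balancing
-- ORCL =
--   (DESCRIPTION =
--     (ADDRESS_LIST =
--       (LOAD_BALANCE = ON)
--       (ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip)(PORT = 1521))
--       (ADDRESS = (PROTOCOL = TCP)(HOST = node2-vip)(PORT = 1521)))
--     (CONNECT_DATA = (SERVICE_NAME = oltp_srv)))

-- Server side: set the connect-time load balancing goal for a service
-- (for RAC-managed services this is usually done with
--  srvctl modify service -db orcl -service oltp_srv -clbgoal SHORT)
BEGIN
  DBMS_SERVICE.MODIFY_SERVICE(
    service_name => 'oltp_srv',
    clb_goal     => DBMS_SERVICE.CLB_GOAL_SHORT);
END;
/
```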
@VOracle_VOITG
T32:
Advantages of #Run #Time #load #Balancing (#RTLB)
1- Run time load balancing is achieved using connection pools.
Work requests are automatically balanced across the pool of connections.
2- provides load balancing at the transaction level instead of load balancing at the time of initial connection.
3- With server-side load balancing, the listener directs a connection request to the best instance currently providing the service by using the load balancing advisory.
@VOracle_VOITG
T33:
What happens after user #Commit in Oracle
When a COMMIT is issued, the transaction ends successfully and all its changes are recorded permanently in the database.
After #commit:
1- The Oracle database assigns a unique system change number (SCN) to the transaction.
2- The LGWR writes the redo log buffer entries for the transaction from the redo log buffer to
the online redo log files. This guarantees that the changes will not be lost even if there is an instance failure.
3-Server process sends a transaction completion message to the user process and any locks that Oracle holds are released, and Oracle marks the transaction as complete.
@VOracle_VOITG
T34:
Oracle #RAC #NODE #EVICTION
What are the root causes of #NODE #EVICTION
1-Missing network heartbeat:
A node will be evicted from the cluster if it can’t communicate with other nodes in the cluster for example lost private network.
2-Missing disk heartbeat:
A node will be evicted from the cluster if it can’t access the voting disks.
3-Some kind of hang in accessing voting disk:
If one of the RAC instances is hanging, the database LMON process will request a member
kill and ask the CSS process to remove the hanging database instance from the cluster; this can also happen when a node has problems writing to the voting disk file.
4-cssdagent stopping:
cssdagent and cssdmonitor are monitoring processes; therefore, if cssdagent runs into problems or stops,
the related node will be evicted.
@VOracle_VOITG
T35:
#RAC #processes which are basically deciding about node #evictions:
1. OCSSD : This process is primarily responsible for inter-node health monitoring and instance endpoint recovery.
2. OPROCD : This process performs hang checking and detects driver freezes on the machine.
@VOracle_VOITG
T36:
Oracle #Log #File #Sync #Wait #Event
The Oracle “log file sync” wait event is triggered when a user session issues a commit (or a rollback).
When a commit occurs then the LGWR has to write the log buffer to the online redo log files.
The wait is related to LGWR as we know after LGWR write Redos, It sends confirmation of its completion back to the user session.
The wait time includes the writing of the log buffer and the post-confirmation back.
-- What are the reasons for this wait?
- Commit frequency, the application might be committing after every row, rather than batching COMMITs.
- SELECT statements: review the Oracle auditing settings. If auditing is enabled for SELECT statements,
Oracle could be spending time writing and committing audit data to the AUD$ table.
- Slow I/O
—Solutions:
-Reduce other I/O activity on the disks containing the redo logs, or use dedicated disks.
-Move the redo logs to faster disks or a faster I/O subsystem (for example, switch from RAID 5 to RAID 1).
-Reduce the amount of redo being written with NOLOGGING / UNRECOVERABLE options.
-Use the COMMIT NOWAIT option (COMMIT_LOGGING=batch, COMMIT_WAIT=nowait). Note that an instance crash may cause loss of data that is still being batched inside the log buffer.
-Check the size of the log buffer as it may be so large that LGWR is writing too many blocks at one time.
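To see how much time the instance spends on this event, a query along these lines can be used (the COMMIT variant shown is the per-statement form of the NOWAIT option above):

```sql
-- System-wide waits on log file sync
SELECT event, total_waits, time_waited_micro
FROM   v$system_event
WHERE  event = 'log file sync';

-- Per-statement variant of the COMMIT NOWAIT option
COMMIT WRITE BATCH NOWAIT;
```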
Reference: Dean Richards
@VOracle_VOITG
T37:
#Current #block Read(#CUR_BLOCK) and #Consistent #Block Read(#CR_BLOCK) in #RAC
Current block(CUR): the first time a block is read into a buffer of any participating instance, it’s termed a current block.
Consistent Block(CR): is when the block is transferred from one instance to another instance because a session on that instance requested the block.
@VOracle_VOITG
T38:
What are the #Hot #blocks?
Hot blocks are an application issue that arises when the application has to repeatedly access the same block or blocks.
This happens when multiple sessions repeatedly access one or more blocks that are protected by the same child cache buffers chains latch.
Oracle Hot Blocks Cause Cache Buffers Chains Latches Contention and generally, hot blocks have higher touch count values.
In most cases, increasing the number of cache buffers chains latches will do little to improve performance.
How to spread the blocks
-Deleting and reinserting some of the rows by ROWID.
-Exporting the table, increasing the PCTFREE significantly, and importing the data. This minimizes the number of rows per block.
-Minimizing the number of records per block in the table.
-For indexes, you can rebuild them with higher PCTFREE values.
-Consider reducing the block size.
-Use a better index type, such as a reverse key index.
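Contention on the cache buffers chains latch children can be checked with a query along these lines (FETCH FIRST is 12c+ syntax):

```sql
-- Child latches with the most sleeps indicate the chains
-- protecting the hottest blocks
SELECT addr, gets, sleeps
FROM   v$latch_children
WHERE  name = 'cache buffers chains'
AND    sleeps > 0
ORDER  BY sleeps DESC
FETCH FIRST 10 ROWS ONLY;
```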
Reference: logicalread.com
@VOracle_VOITG
T39:
Why must we enable #force #logging on production when building a #standby?
-By enabling force logging, all operations are logged in the redo logs.
-With NOLOGGING operations, no redo changes are logged on the primary database, so no changes will be applied to the standby database.
-Redo logs are critical for media recovery operation and redo applying on standby database.
-In some SQL statements, the user has the option of specifying the NOLOGGING clause,
which indicates that the database operation is not logged in the online redo log file.
Even though the user specifies the clause, a redo record is still written to the online
redo log file. However, there is no data associated with this record. This can result in
log application or data access errors at the standby site and manual recovery might be
required to resume applying log files.
-In force logging mode, the Oracle database must write the redo records even when NOLOGGING is used with DDL statements.
It forces the writing of redo records even when NOLOGGING is specified, because every change should be recorded and applied on the standby server while syncing.
--For example: create a table in NOLOGGING mode on the primary and select from the table on the standby side, and we get:
ERROR at line 1:
ORA-01578: ORACLE data block corrupted (file # 100, block # 2658)
ORA-01110: data file 100: '/oradata/ORCL/appdata_test_66.dbf'
ORA-26040: Data block was loaded using the NOLOGGING option
This feature can be enabled at TWO levels:
– Database level
– Tablespace level
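A sketch of enabling it at each level (the USERS tablespace is just an example):

```sql
-- Database level
ALTER DATABASE FORCE LOGGING;
SELECT force_logging FROM v$database;

-- Tablespace level
ALTER TABLESPACE users FORCE LOGGING;
SELECT tablespace_name, force_logging FROM dba_tablespaces;
```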
@VOracle_VOITG
T40:
All #redo #log members of the CURRENT group are corrupted. What will happen, and what is the action?
The online redo log group is in the CURRENT state and the database is in archivelog mode:
LGWR will be terminated in this case, and we have to perform incomplete recovery.
#Incomplete #Recovery
-SCN-based incomplete recovery
SELECT archivelog_change#-1 "SCN" FROM v$database;
RUN {
SET UNTIL SCN 1048438;
RESTORE DATABASE;
RECOVER DATABASE;
ALTER DATABASE OPEN RESETLOGS;
}
-------------
-Time-based incomplete recovery
SET UNTIL TIME "TO_DATE('2019-07-01:10:00:00', 'YYYY-MM-DD:HH24:MI:SS')";
-------------
-Sequence-based incomplete recovery
SET UNTIL SEQUENCE 3400;
-------------
Every incomplete recovery needs to be followed by a resetlogs.
@VOracle_VOITG
T41:
#RMAN #Recovery #Catalog
A recovery catalog is a database schema used by RMAN to store metadata about one or more Oracle databases. Typically, you store the catalog in a dedicated database.
A recovery catalog provides the following benefits:
-If the target control file and all backups are lost, then the RMAN metadata still exists in the recovery catalog.
-A recovery catalog centralizes metadata for all your target databases.
-A recovery catalog can store metadata history much longer than the control file.
-You can use a stored script as an alternative to a command file for managing frequently used sequences of RMAN commands.
-A recovery catalog is required in order to use the KEEP FOREVER backup retention option.
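Creating and using a catalog might look like this (the catalog owner rco, its tablespace, and the catdb alias are placeholders):

```sql
-- In the catalog database, create the catalog owner first:
-- CREATE USER rco IDENTIFIED BY password
--   DEFAULT TABLESPACE rman_ts QUOTA UNLIMITED ON rman_ts;
-- GRANT recovery_catalog_owner TO rco;

-- Then, from RMAN:
-- $ rman TARGET / CATALOG rco@catdb
CREATE CATALOG;
REGISTER DATABASE;
LIST INCARNATION;
```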
@VOracle_VOITG
T42:
#RMAN #Virtual #Private #Catalogs
You may want to restrict each database administrator to modifying only the backup metadata belonging to the databases they are responsible for.
This goal can be achieved by implementing virtual private catalogs.
-There is no restriction on the number of virtual private catalogs that can be created under one recovery catalog. Each virtual private catalog is owned by a database schema user different from the user who owns the recovery catalog.
-Every virtual private catalog has access to all global stored scripts, but cannot access non-global stored
scripts that belong to databases it has no privileges for.
@VOracle_VOITG
T43:
What is #SCN?
Why Oracle use unique #SCN numbers?
SCN (System Change Number) is a primary mechanism to maintain data consistency in Oracle database, in fact, it is Oracle’s clock, every time we commit,
the clock increments. The SCN just marks a consistent point in time in the database.
SCN is necessary to satisfy the ACID properties of a transaction.
Therefore, after a commit generates an SCN, the transaction's unique SCN is assigned and recorded in the transaction table.
SCN is used primarily in the following areas:
1-Every redo record has an SCN version of the redo record in the redo header. (redo records can have non-unique SCN)
2-Every data block also has block SCN (aka block version).
3-Read consistency also uses SCN. Every query includes an SCN at the start of the query.
4-Commit. Every commit will generate SCN, aka commit SCN, that marks a transaction boundary.
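The current SCN and the SCN/time mapping can be inspected like this:

```sql
-- Current SCN of the database
SELECT current_scn FROM v$database;

-- Approximate mapping between SCNs and wall-clock time
SELECT scn_to_timestamp(current_scn) FROM v$database;
SELECT timestamp_to_scn(SYSTIMESTAMP) FROM dual;
```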
Reference: orainternals.wordpress.com
@VOracle_VOITG
T44:
What happens behind #begin #backup?
After the BEGIN BACKUP command, the datafiles go into backup mode, which means the datafile headers are no longer updated, but the database keeps working normally as before. Meanwhile the changes are still recorded in the redo, and when the END BACKUP command runs, the datafile headers are synchronized with the last SCN in the redo log files.
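A sketch of a user-managed hot backup of a single tablespace (USERS is an example name):

```sql
ALTER TABLESPACE users BEGIN BACKUP;
-- ... copy the tablespace's datafiles with OS tools ...
ALTER TABLESPACE users END BACKUP;

-- Files currently in backup mode show STATUS = 'ACTIVE'
SELECT file#, status, change# FROM v$backup;
```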
@VOracle_VOITG
T45:
A query runs slow on instance 1 and fast on instance 2; what is our approach?
In this case, I reviewed the execution plan of the query on each instance and noticed the query was hitting the cache on node 2 all the time, whereas on node 1 it
was doing physical reads all the time. Therefore, to understand the main reason, we should check the execution plan of the query on each instance,
gather statistics, use tkprof to trace session behavior, and check DB_CACHE_SIZE.
@VOracle_VOITG
T46:
Oracle #SGA and #PGA
• SGA (System Global Area) is a memory area allocated during an instance start up.
• SGA size is controlled by the SGA_TARGET / SGA_MAX_SIZE parameters defined in the initialization parameter file (init.ora file or SPFILE); DB_CACHE_SIZE sizes only the buffer cache component.
• PGA (Program or Process Global Area) is a memory area that stores a user session specific information.
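The relevant memory parameters can be checked as follows:

```sql
-- Current memory settings
SELECT name, value
FROM   v$parameter
WHERE  name IN ('sga_target', 'sga_max_size',
                'pga_aggregate_target', 'db_cache_size');

-- Breakdown of the SGA components
SELECT component, current_size FROM v$sga_dynamic_components;
```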
@VOracle_VOITG
T47:
What is #Database #Writer (#DBWR) and when does #DBWR write to the data file?
• DBWR is a background process that writes data blocks information from Database buffer cache to data files.
There are 4 important situations when DBWR writes to data file
• Every 3 seconds
• Whenever checkpoint occurs
• When server process needs free space in the database buffer cache to read new blocks.
• Whenever number of changed blocks reaches a maximum value.
@VOracle_VOITG
T47:
Oracle #ASM #Filter #Driver
1- Oracle ASM Filter Driver (Oracle ASMFD) is installed with an Oracle Grid Infrastructure installation.
2- Oracle ASMFD rejects write I/O requests that are not issued by Oracle software.
3- Oracle ASMFD simplifies the configuration and management of disk devices by eliminating the need to rebind disk devices used with Oracle ASM each time the system is restarted.
4- ASMFD is a superset of ASMLIB; therefore it includes base-ASMLIB features (permissions persistence & sharing open handles).
5- The Oracle ASM filter driver (ASMFD) is a kernel module that resides in the I/O path of the Oracle ASM disks. Logically, ASMFD provides an interface between Oracle binaries and the underlying operating environment which includes the storage hardware interfaces.
Following are descriptions of the key capabilities of ASMFD:
• Reject non-Oracle I/Os
• Reduce OS resource usage.
• Enable device name persistence.
• Faster node recovery.
@VOracle_VOITG
T48:
#SQL #plan #baseline.
A plan baseline is a set of accepted plans that the optimizer is allowed to use for a SQL statement.
In the typical use case, the database accepts a plan into the plan baseline only after verifying that the plan performs well.
A plan includes all plan-related information (for example, SQL plan identifier, set of hints, bind values, and optimizer environment)
that the optimizer needs to reproduce an execution plan.
SQL Plan Baselines, on the other hand, are much more automated, and more proactive in their behavior.
New plans can only be added to the SQL Plan Baseline if they will not result in slower performance and only plans that are actually
in the SQL Plan Baseline are used by the optimizer to generate the plans to process statements.
A SQL plan baseline for a SQL statement consists of a set of accepted plans. When the statement is parsed,
the optimizer will only select the best plan from among this set. SQL plan baselines generated by the SQL plan management mechanism(SPM).
@VOracle_VOITG
T49:
#SQL #profiles
SQL profiles provide additional information to the optimizer to help select the best plan.
A SQL profile is a set of auxiliary information specific to a SQL statement. Conceptually, a SQL
profile is to a SQL statement what statistics are to a table or index. The database can use auxiliary information to improve execution plans.
A SQL profile helps the optimizer pick the best plan in different situations.
@VOracle_VOITG
T50:
#SQL #Plan #Management (#SPM)
introduced in Oracle Database 11g that enables the system to automatically control the SQL plan by maintaining
SQL plan baselines.
With this feature enabled, a newly generated plan can be used only if the Oracle optimizer can
determine that using the new plan will not impact the performance of the statement. However, if the optimizer determines that
the SQL plan is more efficient compared to the previous plan, it will use the new plan and will save this new plan as
the new plan baseline for future use.
SQL Plan Management (SPM) has three components: plan capture, plan selection, and plan evolution.
SPM plan capture:
When the statement is executed it will be hard parsed and a cost based plan will be generated.
Once the cost based plan is determined it will be compared to the plans that exist in the SQL plan baseline.
If the plan matches one of the accepted plans in the SQL plan baseline, Oracle uses it. However,
if the cost based plan doesn't match any of the accepted plans in the SQL plan baseline it will be added to the plan baseline as an unaccepted plan.
SPM plan selection:
When a SQL statement with a SQL plan baseline is parsed, the accepted plan with the best cost will be chosen.
This process uses the regular optimizer. The presence of a SQL profile will affect the estimated cost of each of
these plans and thus potentially the plan that is finally selected.
SPM plan evolution:
The third sub-component of SPM is verification or evolution of non-accepted plans.
The evolution process test-executes the non-accepted plan against the best of the accepted plans.
The best accepted plan is selected based on cost. Again, if a SQL profile exists,
it will influence the estimated cost and thus the accepted plan chosen for comparison against the non-accepted plan.
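The capture and inspection steps above can be sketched with the DBMS_SPM package (the sql_id value is a placeholder):

```sql
-- Capture plans automatically for this session
ALTER SESSION SET optimizer_capture_sql_plan_baselines = TRUE;

-- Or load a specific plan from the cursor cache
DECLARE
  n PLS_INTEGER;
BEGIN
  n := DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE(sql_id => 'abcd1234efgh5');
END;
/

-- Inspect the stored baselines
SELECT sql_handle, plan_name, enabled, accepted
FROM   dba_sql_plan_baselines;
```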
Reference: blogs.oracle.com
@VOracle_VOITG
T51:
#Hard #parse vs #Soft #parse
If a user executes a SQL statement and it does not exist in the shared pool, Oracle has to do a hard parse.
If a user executes a SQL statement and a shareable version of it exists in the shared pool, Oracle does a soft parse.
During the parse call Oracle performs:
-Syntax check: Parsing syntax to check SQL keywords.
-Semantic check: checks whether the objects exist, that the related columns exist in the tables, and that the user has the required privileges.
-Shared pool check: during the parse, Oracle uses a hashing algorithm to generate a hash value
for every SQL statement. This hash value is checked in the shared pool; if an existing parsed statement has the same hash value,
it can be reused.
@VOracle_VOITG
T52:
#Oracle #Shared #Pool
The shared pool is the second largest part of the SGA; it keeps SQL statements, information about objects, packages, and PL/SQL blocks.
Information created in memory for one session can be useful to another session. The shared pool contains:
- Library cache: it includes the shared SQL area and private SQL areas.
The library cache holds parsed SQL statements, their execution plans, and parsed PL/SQL code.
- Dictionary cache: it holds information about user privileges, table and column definitions, passwords,
etc.
@VOracle_VOITG
T53:
#CURSORS in Oracle
A cursor is a set of information about a SQL statement stored in memory (the SGA).
A cursor lives in the library cache of the SGA and is allocated to a SQL statement.
A cursor stores different information about the statement, such as its text, execution plan,
statistics, etc.
Each SQL statement has:
-One parent cursor
-One or more child cursors
Each parent requires at least one child cursor.
@VOracle_VOITG
T54:
#CURSORS + #HARD #Parse + #SOFT #Parse
When a new SQL statement arrives, the database tries to find a suitable child cursor in the library cache. If there is no parent cursor, there will be a hard parse.
If there is a parent cursor, but its existing children can't be reused by this call
(because of different bind variable sizes, different optimizer settings,
or different NLS settings), there will also be a hard parse.
If an existing child cursor can be reused by this call, there will be a soft parse.
Parent cursor contains the SQL statement text only.
Child cursor contains Execution plans.
Reference: scribd.com
@VOracle_VOITG
T55:
How can I avoid unnecessary #Parsing in Oracle?
By using bind variables we can avoid unnecessary Parsing.
If bind variables are not used, then every SQL statement is hard parsed. This has
a severe impact on performance, and you will face high waits on your DB.
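A sketch of the difference, in SQL*Plus syntax (the EMPLOYEES table is illustrative):

```sql
-- Without binds: each literal value produces a new hard parse
SELECT * FROM employees WHERE employee_id = 100;
SELECT * FROM employees WHERE employee_id = 101;

-- With a bind variable: one shared cursor, soft parses after the first call
VARIABLE emp_id NUMBER
EXEC :emp_id := 100
SELECT * FROM employees WHERE employee_id = :emp_id;
```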
@VOracle_VOITG
T56:
#Fencing in Oracle #RAC
I/O fencing prevents updates by failed instances by detecting the failure and preventing split brain in the cluster.
When a cluster node fails, the failed node needs to be fenced off from all the shared disk devices or diskgroups.
This methodology is called I/O fencing, sometimes also called disk fencing or failure fencing.
@VOracle_VOITG
T57:
#GPnP ( #Grid #plug and #play ) #profile in #Oracle
The GPnP profile is a small XML file located in GRID_HOME/gpnp.
It removes the need for explicit add and delete node steps.
Each node maintains a local copy of the GPnP profile, which is maintained by the GPnP daemon (GPnPD).
GPnP daemon ensures the synchronization of GPnP profile across all the nodes in the cluster and GPnP profile is
used by Clusterware to establish the correct global personality of a node.
It cannot be stored on ASM, as it is required prior to the start of ASM. Hence, it is stored locally on each node.
GPnP Profile contains various attributes:
Cluster name
Network classifications (Public/Private).
Storage to be used for CSS.
Storage to be used for ASM: SPFILE location, ASM DiskString, etc
Digital signature information.
@VOracle_VOITG
T58:
#Data #Redaction in Oracle Database 12c
Oracle Data Redaction is one of the new features introduced in Oracle Database 12c. This new feature is part of the Advanced Security option and enables the protection of data shown to the user in real-time, without requiring changes to the application.
We can create redaction policies which specify conditions that must be met before the data gets redacted and returned to the user. During the definition of such policies, the DBA can specify which columns and the type of protection that must be applied.
@VOracle_VOITG
T59:
Oracle #Security Solutions/Controls
-Data Masking
-Advanced Security (TDE, Data Redaction)
-Label security
-Virtual Private Database (VPD)
-Fine Grained Auditing (FGA)
-Data Vault
-Database Auditing
-Audit Vault
-Storage/Network authentication
-Oracle Database Firewall
@VOracle_VOITG
T60:
What is #LMS #process in Oracle #RAC? (#Global #Cache Service Process)
LMS (Global Cache Service Process) is one of the Oracle RAC background processes in the database. The main task
of LMS is to transmit block images between the buffer caches of different instances.
This processing is part of the Cache Fusion feature.
The block images are transported over the high-speed interconnect.
Another task is to maintain records of the data file statuses and of each cached block by recording information in the Global Resource Directory (GRD).
@VOracle_VOITG
T61:
What is #LMD #process in Oracle #RAC? (#Global #Enqueue Service Daemon)
The LMD process manages incoming remote resource requests within each instance;
remote resource requests are requests originating from another instance.
The LMD process also handles deadlock detection and remote enqueue (lock) requests.
@VOracle_VOITG
T62:
#Buffer #Busy #Waits
The Buffer Busy Waits Oracle metric occurs when an Oracle session needs to access a block in the buffer cache,
but cannot because the buffer copy of the data block is locked.
This buffer busy wait condition can happen for either of the following reasons:
A: The block is being read into the buffer by another session, so the waiting session must wait for the block read to complete.
B: Another session has the buffer block locked in a mode that is incompatible with the waiting session's request.
@VOracle_VOITG
T63:
#GC #Buffer #Busy #Waits
in #RAC
GC buffer busy event means that a session is trying to access a buffer, but there is an open request for Global cache lock for that block already, and so,
the session must wait for the GC lock request to complete before proceeding. This wait is instrumented as 'gc buffer busy' event.
From 11g onwards, GC buffer busy event differentiated between two cases:
1-If existing GC open request originated from the local instance, then the current session will wait for 'gc buffer busy acquire'.
Essentially, the current process is waiting for another process in the local instance to acquire GC lock, on behalf of the local instance. Once GC lock is acquired,
the current process can access that buffer without additional GC processing (if the lock is acquired in a compatible mode).
2-If existing GC open request originated from a remote instance, then the current session will wait for
'gc buffer busy release' event. In this case the session is waiting for a session on another instance
to release the GC lock, so that the local instance can acquire the buffer.
Reference: orainternals.wordpress.com
@VOracle_VOITG
T64:
The #optimizer #tasks in Oracle
The optimizer (also known as the "query optimizer") performs its tasks during the parse phase of
SQL processing. Most of these tasks are performed only during a hard parse, because the optimizer's output, the execution plan, is stored with the cursor in the shared pool. Optimizer
operations include:
• Transforming queries
• Estimating
• Generating plans
@VOracle_VOITG
T65:
#Estimating
The estimator generates three types of measures:
• Selectivity
• Cardinality
• Cost
The end goal of the estimator is to estimate the overall cost of a given plan
• Selectivity represents a fraction of rows from a row source.
• Cardinality represents the number of rows in a row source. Here, the row source can be a base table, a view, or the result of a join or GROUP BY operator.
• Cost represents the number of units of work (or resource) that are used. The query optimizer uses disk I/O, CPU usage, and memory usage as units of work.
@VOracle_VOITG
T66:
#Controlling the #Behavior of the #Optimizer with #Parameters.
Optimizer behavior can be controlled by using the following initialization parameters:
• CURSOR_SHARING
• DB_FILE_MULTIBLOCK_READ_COUNT (autotuned)
• OPTIMIZER_INDEX_CACHING
• OPTIMIZER_INDEX_COST_ADJ
• PGA_AGGREGATE_TARGET
• OPTIMIZER_MODE
• OPTIMIZER_FEATURES_ENABLE
• OPTIMIZER_ADAPTIVE_FEATURES
• OPTIMIZER_ADAPTIVE_REPORTING_ONLY
@VOracle_VOITG
T67:
#Adaptive #Execution #Plans
The Adaptive Execution Plans feature enables the optimizer to automatically adapt a poorly
performing execution plan at run time and prevent a poor plan from being chosen on
subsequent executions. With an adaptive plan, the optimizer instruments its chosen plan so
that at run time, it can detect if the estimates are not optimal, and the plan can change to
automatically adapt to the actual conditions.
The two Adaptive Plan techniques are:
• Dynamic plans
• Re-optimization
The database uses adaptive execution plans when OPTIMIZER_FEATURES_ENABLE is set to 12.1.0.1 or later.
@VOracle_VOITG
T68:
What are the #Access #Paths in Oracle?
Access paths are ways in which data is retrieved from the database.
The optimizer chooses below access path for generating the best execution plan.
• Full table scan
• Row ID scan
• Index scan
• Sample table scan
• Cluster scan
@VOracle_VOITG
T69:
Why does Oracle use #Full #Table #Scans?
• Lack of index
• A large amount of data
• Small table
• Multiblock I/O calls
• All blocks below high-water mark
Full table scans have a lower cost than index range scans when accessing a large fraction of
the blocks in a table, because full table scans can use larger I/O calls, and making fewer large
I/O calls has a lower cost than making many smaller calls.
@VOracle_VOITG
T70:
#Row #ID #Scans
The row ID provides the data file, data block, and the location of the row in that block. Locating a row by
specifying its row ID is the fastest way to retrieve a single row.
@VOracle_VOITG
T71:
#Types of #index #scans:
• Index unique scan
• Index range scan
• Index range scan descending
• Index skip scan
• Full scan
• Fast-full index scan
• Index join
• Bitmap operations
@VOracle_VOITG
T72:
#Read #I/O #Wait #events:
• db file sequential read: A count of standard reads, one block at a time into the buffer
cache. This is seldom seen in a full table scan (FTS).
• db file scattered read: A count of multiblock reads into the buffer cache. Up to
DB_FILE_MULTIBLOCK_READ_COUNT consecutive blocks are read at a time and
scattered into buffers in the buffer cache.
• direct path read: A count of waits on direct reads into PGA or a temporary segment.
• direct path read temp: A count of waits on direct reads from a temporary tablespace.
This is most likely seen with sorts or temporary tables that do not fit in memory.
@VOracle_VOITG