
To CDB or not to CDB, that’s the question
A product manager's job has several parts. There is outbound PM, which is what you, the readers or the public, see. But we also have inbound tasks; one of them is answering internal questions and advising on how to use the product. This particular question came from a colleague who is doing an upgrade at a customer and ran into some strange things while moving from a non-CDB to a CDB.
On My Oracle Support there is a very interesting mos-note: Reusing the Source Standby Database Files When Plugging a non-CDB as a PDB into the Primary Database of a Data Guard Configuration (Doc ID 2273304.1)
Wouldn’t it be nice to test this? In short it means that, as long as you share the same ASM disk groups, you do not need to copy the files: you can simply “use” them in the PDB, and on top of that you’re not messing up your standby database. So let’s explore how this works.
The idea

The idea is simple: after the upgrade of the database to 19.3, they want to (and I strongly agree with and encourage this) convert it to PDBs.
So it starts with a normal upgrade of the DB to a 19.3 non-CDB, and then we want to plug it into a CDB and go multitenant.
The question is ... will this work?
I have set up my lab accordingly. I created a non-CDB 19.3 database, just next-next-finish, nothing fancy (you will read later why I added this sentence).
Next to that, I created an empty CDB, without any PDBs.
For both databases, Data Guard was set up and is managed by the broker.
Maybe not too important in general, but it is for this blog post: we use ASM on Oracle Restart, with two disk groups, +DATA and +RECO.
Both VMs are set up EXACTLY the same.
My non-CDB is called dgdemo. The DB unique name for the primary is dgdemovm1 and for the standby dgdemovm2. You will see later on why this was not a smart choice.
The CDB is called (how could you guess it) cdgdemo, with instance names cdgdemovm1 for the primary and cdgdemovm2 for the standby.
Ready? Ok, let’s go!
Prerequisites
As with everything, this too has prerequisites. The mos-note states:
- The source non-CDB and destination CDB must be the same version.
- The Source non-CDB primary database and destination CDB Primary database share storage
- The Source non-CDB standby database and destination CDB standby database share storage
- The Source non-CDB standby database and the destination CDB standby database must use the same diskgroup, ASM aliases cannot span multiple diskgroups
- The db_create_file_dest parameter in the CDB standby database must be set and should be set to the diskgroup name being used by the standby database
Ok, did you read it? Stop now and re-read it, please. It is important.
I will borrow a sentence from the mos-note, as I cannot put it better: “Using this method the data files of both the Primary and the Standby non-CDB databases will physically remain in their existing directory structure and be consumed by the destination CDB and its Standby. The steps as documented are for ASM storage although a similar process can be used with non-ASM using softlinks.”
This is interesting, right? It means that no (big) file copies need to be done: we can reuse the data in ASM (or on a file system) with just (what I call) a metadata change.
The procedure
Let me manage expectations right away: this is not a big “copy/paste” procedure; otherwise I’d simply have given you a script that does everything for you.
As a first step, we need to know the GUID of the database, so we can verify that we are working on the correct database.
You get that from the current (non-CDB) primary database:
SQL> select guid from v$containers;

GUID
--------------------------------
8AC1435ADD2C1F1EE0536538A8C039FF

SQL>
On the standby server, against the non-CDB standby database, create a SQL script (build_crt_alias_noncdb.sql) with the following content:
set newpage 0
set linesize 999
set pagesize 0
set feedback off
set heading off
set echo off
set space 0
set tab off
set trimspool on
set ver off
spool crt_noncdb_alias.sql
prompt set echo on
select 'ALTER DISKGROUP &&diskgrp_name_without_plus_sign add directory '||''''||'+&&diskgrp_name_without_plus_sign/&&new_stby_name_in_upper_case/'||guid||''''||';' from v$containers;
select 'ALTER DISKGROUP &&diskgrp_name_without_plus_sign add directory '||''''||'+&&diskgrp_name_without_plus_sign/&&new_stby_name_in_upper_case/'||guid||'/DATAFILE'||''''||';' from v$containers;
select 'ALTER DISKGROUP &&diskgrp_name_without_plus_sign add alias '||''''||replace(replace(replace(name,'.','_'),'&old_stby_name_in_upper_case','&&new_stby_name_in_upper_case'),'DATAFILE',guid||'/DATAFILE')||''''||' for '||''''||name||''''||';' from v$datafile df, (select guid from v$containers) con;
exit
Then execute it against the non-CDB physical standby database. This script generates another script, which you will run later to create the ALIASES (so if you ever want to remove them, use rmalias, as rm would also remove the underlying file) needed for the metadata change to succeed. I agree ... this is a bit ... whatever, but currently we do not support single-PDB role transitions, so bear with me for a moment.
When you execute it, the output will look comparable to this:
[oracle@oelvm2 ~]$ vi build_crt_alias_noncdb.sql
[oracle@oelvm2 ~]$ . oraenv
ORACLE_SID = [cdgdemovm2] ? dgdemovm2
The Oracle base remains unchanged with value /u01/app/oracle
[oracle@oelvm2 ~]$ sqlplus / as sysdba

SQL*Plus: Release 19.0.0.0.0 - Production on Sat Jun 8 06:59:04 2019
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle.  All rights reserved.

Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0

SQL> @build_crt_alias_noncdb.sql
set echo on
Enter value for diskgrp_name_without_plus_sign: DATA
Enter value for new_stby_name_in_upper_case: cdgdemovm2
ALTER DISKGROUP DATA add directory '+DATA/cdgdemovm2/8AC1435ADD2C1F1EE0536538A8C039FF';
ALTER DISKGROUP DATA add directory '+DATA/cdgdemovm2/8AC1435ADD2C1F1EE0536538A8C039FF/DATAFILE';
Enter value for old_stby_name_in_upper_case: DGDEMOVM2
ALTER DISKGROUP DATA add alias '+DATA/cdgdemovm2/8AC1435ADD2C1F1EE0536538A8C039FF/DATAFILE/system_263_1010331949' for '+DATA/DGDEMOVM2/DATAFILE/system.263.1010331949';
ALTER DISKGROUP DATA add alias '+DATA/cdgdemovm2/8AC1435ADD2C1F1EE0536538A8C039FF/DATAFILE/sysaux_264_1010331963' for '+DATA/DGDEMOVM2/DATAFILE/sysaux.264.1010331963';
ALTER DISKGROUP DATA add alias '+DATA/cdgdemovm2/8AC1435ADD2C1F1EE0536538A8C039FF/DATAFILE/undotbs1_265_1010331967' for '+DATA/DGDEMOVM2/DATAFILE/undotbs1.265.1010331967';
ALTER DISKGROUP DATA add alias '+DATA/cdgdemovm2/8AC1435ADD2C1F1EE0536538A8C039FF/DATAFILE/users_266_1010331971' for '+DATA/DGDEMOVM2/DATAFILE/users.266.1010331971';
Disconnected from Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0
[oracle@oelvm2 ~]$ ls
build_crt_alias_noncdb.sql  crt_noncdb_alias.sql  postDg.sql
[oracle@oelvm2 ~]$
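To make the alias names in that output concrete: the generated script derives each alias from the OMF file name by replacing the dots with underscores, swapping in the new DB unique name, and inserting the GUID before DATAFILE. A little shell sketch of that translation, using values from this lab (this is just an illustration, not part of the actual procedure):

```shell
# Hypothetical illustration of how the generated SQL builds each alias
# name; src and guid are taken from this lab environment.
src='+DATA/DGDEMOVM2/DATAFILE/system.263.1010331949'
guid='8AC1435ADD2C1F1EE0536538A8C039FF'

# 1) dots become underscores, 2) old DB unique name -> new one,
# 3) the GUID is inserted before DATAFILE
alias=$(echo "$src" | sed \
  -e 's/\./_/g' \
  -e 's#DGDEMOVM2#cdgdemovm2#' \
  -e "s#DATAFILE#${guid}/DATAFILE#")

echo "$alias"
# +DATA/cdgdemovm2/8AC1435ADD2C1F1EE0536538A8C039FF/DATAFILE/system_263_1010331949
```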
Next, we need to tell the ASM instance on the standby server what we are about to do. We are not really messing around with it, but it will need the aliases. So log on to the ASM instance on the standby server and execute the generated script:
[oracle@oelvm2 ~]$ . oraenv
ORACLE_SID = [dgdemovm2] ? +ASM
The Oracle base has been changed from /u01/app/oracle to /u01/app/grid
[oracle@oelvm2 ~]$ sqlplus / as sysasm

SQL*Plus: Release 19.0.0.0.0 - Production on Sat Jun 8 07:11:01 2019
Version 19.2.0.0.0

Copyright (c) 1982, 2018, Oracle.  All rights reserved.

Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.2.0.0.0

SQL>
SQL> set echo on
SQL> @crt_noncdb_alias
SQL> set echo on
SQL> Enter value for diskgrp_name_without_plus_sign: DATA
SP2-0734: unknown command beginning "Enter valu..." - rest of line ignored.
SQL> Enter value for new_stby_name_in_upper_case: cdgdemovm2
SP2-0734: unknown command beginning "Enter valu..." - rest of line ignored.
SQL> ALTER DISKGROUP DATA add directory '+DATA/cdgdemovm2/8AC1435ADD2C1F1EE0536538A8C039FF';

Diskgroup altered.

SQL> ALTER DISKGROUP DATA add directory '+DATA/cdgdemovm2/8AC1435ADD2C1F1EE0536538A8C039FF/DATAFILE';

Diskgroup altered.

SQL> Enter value for old_stby_name_in_upper_case: DGDEMOVM2
SP2-0734: unknown command beginning "Enter valu..." - rest of line ignored.
SQL> ALTER DISKGROUP DATA add alias '+DATA/cdgdemovm2/8AC1435ADD2C1F1EE0536538A8C039FF/DATAFILE/system_263_1010331949' for '+DATA/DGDEMOVM2/DATAFILE/system.263.1010331949';

Diskgroup altered.

SQL> ALTER DISKGROUP DATA add alias '+DATA/cdgdemovm2/8AC1435ADD2C1F1EE0536538A8C039FF/DATAFILE/sysaux_264_1010331963' for '+DATA/DGDEMOVM2/DATAFILE/sysaux.264.1010331963';

Diskgroup altered.

SQL> ALTER DISKGROUP DATA add alias '+DATA/cdgdemovm2/8AC1435ADD2C1F1EE0536538A8C039FF/DATAFILE/undotbs1_265_1010331967' for '+DATA/DGDEMOVM2/DATAFILE/undotbs1.265.1010331967';

Diskgroup altered.

SQL> ALTER DISKGROUP DATA add alias '+DATA/cdgdemovm2/8AC1435ADD2C1F1EE0536538A8C039FF/DATAFILE/users_266_1010331971' for '+DATA/DGDEMOVM2/DATAFILE/users.266.1010331971';

Diskgroup altered.

SQL>
At this point the standby ASM is ready, so the CDB standby database will be able to find the non-CDB standby database’s data files once the non-CDB primary is plugged into the primary CDB.
Now it is time to stop redo apply on the non-CDB standby database. You can use either SQL or the broker; in my example, I prefer the broker.
[oracle@oelvm1 ~]$ dgmgrl
DGMGRL for Linux: Release 19.0.0.0.0 - Production on Sat Jun 8 07:13:24 2019
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.

Welcome to DGMGRL, type "help" for information.
DGMGRL> connect sys@dgdemovm1
Password:
Connected to "dgdemovm1"
Connected as SYSDBA.
DGMGRL> edit database dgdemovm2 set state='apply-off';
Succeeded.
DGMGRL>
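For reference, since the text mentions the SQL route as an alternative: stopping managed recovery without the broker would look like the statement below (a sketch; when the broker manages the configuration, prefer the broker command so its state stays in sync).

```sql
-- On the non-CDB standby: stop redo apply without using the broker
alter database recover managed standby database cancel;
```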
If you use Active Data Guard and the physical standby is open read-only, you should now restart it in mount mode, as we need to roll it forward a little.
If the primary is a RAC database, shut down all other instances and continue on one instance only.
The next step is to stop the primary cleanly using srvctl and then start it again in mount mode. Then we flush the redo to the standby site and open the database read-only. Flushing, in this case, sends the redo to the standby server, but it is not applied yet, because we stopped redo apply earlier.
[oracle@oelvm1 ~]$ srvctl stop database -db dgdemovm1
[oracle@oelvm1 ~]$ sqlplus / as sysdba

SQL*Plus: Release 19.0.0.0.0 - Production on Sat Jun 8 07:17:49 2019
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle.  All rights reserved.

Connected to an idle instance.

SQL> startup mount
ORACLE instance started.

Total System Global Area 1073737800 bytes
Fixed Size                  8904776 bytes
Variable Size             281018368 bytes
Database Buffers          780140544 bytes
Redo Buffers                3674112 bytes
Database mounted.
SQL> alter system flush redo to dgdemovm2 no confirm apply;

System altered.

SQL> alter database open read only;

Database altered.

SQL>
This is a very important moment. We now need to determine the checkpoint change number, as it has to be the same on primary and standby; otherwise the rest will fail.
On the non-cdb primary:
SQL> select file#, CHECKPOINT_CHANGE# from v$datafile_header where file#=1;

     FILE# CHECKPOINT_CHANGE#
---------- ------------------
         1            2824060

SQL>
Then recover the standby database up to this change number and verify that all went well:
[oracle@oelvm2 ~]$ sqlplus / as sysdba

SQL*Plus: Release 19.0.0.0.0 - Production on Sat Jun 8 07:19:48 2019
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle.  All rights reserved.

Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0

SQL> alter database recover managed standby database until change 2824060;

Database altered.

SQL> select file#, CHECKPOINT_CHANGE# from v$datafile_header where file#=1;

     FILE# CHECKPOINT_CHANGE#
---------- ------------------
         1            2824060

SQL>
So far so good. This MUST match, otherwise you may find yourself in interesting situations later on.
Now it is time to create a manifest file, which we need later to plug the database into the CDB. After that, we shut the database down.
SQL> set serveroutput on
SQL> exec dbms_pdb.describe('/home/oracle/dgdemovm1.xml');

PL/SQL procedure successfully completed.

SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL>
Then also shut down the non-CDB standby database:
SQL> shutdown immediate;
ORA-01109: database not open

Database dismounted.
ORACLE instance shut down.
SQL> exit
Disconnected from Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0
[oracle@oelvm2 ~]$
Now everything is ready to plug the non-CDB into the CDB as a PDB. Except ... and this has bitten me in the past (hey, the one without mistakes ...): at this point, check your standby_file_management parameter. It should be AUTO! If it is not, change it to AUTO on both the CDB primary and the CDB standby.
SQL> show parameter standby

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
enabled_PDBs_on_standby              string      *
standby_db_preserve_states           string      NONE
standby_file_management              string      MANUAL
standby_pdb_source_file_dblink       string
standby_pdb_source_file_directory    string
SQL> alter system set standby_file_management='AUTO' scope=both;

System altered.

SQL>
Then, on the CDB primary, plug in the non-CDB as follows:
[oracle@oelvm1 ~]$ . oraenv
ORACLE_SID = [dgdemovm1] ? cdgdemovm1
The Oracle base remains unchanged with value /u01/app/oracle
[oracle@oelvm1 ~]$ sqlplus / as sysdba

SQL*Plus: Release 19.0.0.0.0 - Production on Sat Jun 8 07:24:14 2019
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle.  All rights reserved.

Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0

SQL> create pluggable database dgdemovm1 using '/home/oracle/dgdemovm1.xml' tempfile reuse nocopy;

Pluggable database created.

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         4 DGDEMOVM1                      MOUNTED
SQL>
Verify this on the standby CDB
SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       MOUNTED
         4 DGDEMOVM1                      MOUNTED
SQL>
Trust is good, but verify. In the alert log of the standby, you need to find messages that the new data files have been added to media recovery:
...
2019-06-08T07:27:25.256604-04:00
Recovery created pluggable database DGDEMOVM1
DGDEMOVM1(4):Recovery scanning directory +DATA/CDGDEMOVM2/8AC1435ADD2C1F1EE0536538A8C039FF/DATAFILE for any matching files
DGDEMOVM1(4):Datafile 8 added to flashback set
DGDEMOVM1(4):Successfully added datafile 8 to media recovery
DGDEMOVM1(4):Datafile #8: '+DATA/CDGDEMOVM2/8AC1435ADD2C1F1EE0536538A8C039FF/DATAFILE/system_263_1010331949'
DGDEMOVM1(4):Datafile 9 added to flashback set
DGDEMOVM1(4):Successfully added datafile 9 to media recovery
DGDEMOVM1(4):Datafile #9: '+DATA/CDGDEMOVM2/8AC1435ADD2C1F1EE0536538A8C039FF/DATAFILE/sysaux_264_1010331963'
DGDEMOVM1(4):Datafile 10 added to flashback set
DGDEMOVM1(4):Successfully added datafile 10 to media recovery
DGDEMOVM1(4):Datafile #10: '+DATA/CDGDEMOVM2/8AC1435ADD2C1F1EE0536538A8C039FF/DATAFILE/undotbs1_265_1010331967'
DGDEMOVM1(4):Datafile 11 added to flashback set
DGDEMOVM1(4):Successfully added datafile 11 to media recovery
DGDEMOVM1(4):Datafile #11: '+DATA/CDGDEMOVM2/8AC1435ADD2C1F1EE0536538A8C039FF/DATAFILE/users_266_1010331971'
...
So this looks good!
On the primary CDB, we now need to convert the non-CDB into a PDB. To do so, log on to the mounted container and run the noncdb_to_pdb.sql script:
SQL> alter session set container = dgdemovm1
  2  ;

Session altered.

SQL> @?/rdbms/admin/noncdb_to_pdb.sql

... lots of output ...

SQL> set wrap ON
SQL> set xmloptimizationcheck OFF
SQL>
Sorry to skip all the output, but at the end you should have your prompt back. Then open the PDB and verify whether all went well:
SQL> alter pluggable database DGDEMOVM1 open;

Warning: PDB altered with errors.

SQL> select con_id, type, message, status
  2  from PDB_PLUG_IN_VIOLATIONS
  3  where status<>'RESOLVED'
  4  order by time;

CON_ID TYPE      MESSAGE                                            STATUS
------ --------- -------------------------------------------------- ---------
     4 ERROR     Database option DV mismatch: PDB installed version PENDING
                 19.0.0.0.0. CDB installed version NULL.
     4 ERROR     Database option OLS mismatch: PDB installed versio PENDING
                 n 19.0.0.0.0. CDB installed version NULL.

SQL>
For this demo it is not a real problem, as everything we wanted to show worked fine, but in a real production situation this should have been carefully taken care of upfront.
In other words: create your CDB with the correct options, make sure no options are missing and everything matches, and these violations will not show up.
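One way to catch such option mismatches upfront (a sketch, not part of the mos-note procedure) is to run the same component query on the source non-CDB and on the destination CDB root before plugging in, and compare the two result sets:

```sql
-- Run on both the source non-CDB and the destination CDB root, then compare:
-- every component installed in the source should also exist in the CDB.
select comp_id, comp_name, version, status
from   dba_registry
order  by comp_id;
```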
The key takeaway: we did NOT perform a file copy and we did NOT transfer lots of data, yet we kept our DR in place while plugging the non-CDB into a fresh CDB.
As always: questions, remarks? Find me on Twitter: @vanpupi