Hi Friends,
This post is about one of the subjects we are asked about the most.
I hope you all enjoy it, as it is a very detailed one….
Consistent CDC is a concept where journalizing is carried out at the Model level. As you are aware, CDC in ODI comes in two flavors, namely Simple and Consistent. We have already covered the Simple flavor earlier; you can visit this link for the Simple CDC steps: http://odiexperts.com/changed-data-capture-cdc
Consistent CDC is great when the tables in a schema are related to each other via PK-FK relationships, because Consistent CDC is applied at the Model level rather than at the Datastore level as practiced in Simple CDC.
Please find below the step-by-step instructions for the initial setup of Consistent CDC.
Step 1. Edit the Model, select the Consistent Set journalizing mode and pick the appropriate consistent JKM.
Step 2. Add the data stores into CDC.
As you can see, all the datastores now show the orange clock-shaped icon.
Step 3. Add the subscriber(s)
Step 4. Start the Journal
Once the journalizing is successful, the orange clock icon changes to a green one.
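Once the journal is started, you can sanity-check what was created in the work schema. The query below is only a sketch: the object-name prefixes follow ODI's usual conventions, the exact list of objects depends on the JKM and version, and ODI_WORK is a placeholder for your own physical work schema.
-- Sketch: list the CDC objects created by Start Journal (Oracle dictionary query).
-- ODI_WORK stands in for your physical work schema.
SELECT object_type, object_name
FROM   all_objects
WHERE  owner = 'ODI_WORK'
AND   (object_name LIKE 'J$%'        -- journal (change) tables
    OR object_name LIKE 'JV$%'       -- JV$ / JV$D consumption and data views
    OR object_name LIKE 'T$%'        -- capture triggers (trigger-based JKMs)
    OR object_name LIKE 'SNP_CDC%')  -- CDC set / subscriber infrastructure tables
ORDER BY object_type, object_name;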
Testing Consistent CDC in various scenarios.
A simple insert is carried out on the source table.
The data shows up under the Journal Data option.
Let's see if the journalized data flows through.
The records do not seem to come through. Let's look at the real reason behind it.
The interface reads the journalized source table through the JV$ view, whereas the Journal Data option shows the data from the JV$D view.
The reason ODI reads from JV$ rather than from JV$D is that JV$ exposes only the records that have been captured from JV$D into the consistency window by locking the subscriber.
Consider this example: say at 10 PM there are 3 inserts, 5 updates and 1 delete on the source side, and they get captured in the respective JV$D views.
We run Extend Window and Lock Subscriber, and all the records in JV$D get captured into JV$. Now suppose one more insert happens at that moment; it gets captured in the JV$D views, but since the subscriber is locked, the new record will not appear in JV$ until the next window is extended and locked.
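To make the difference between JV$ and JV$D concrete, here is a simplified sketch of how the two views behave. It is not the DDL the JKM actually generates; the table and column names (J$REGIONS, SUBSCRIBER_WINDOWS, WINDOW_ID and so on) are only illustrative.
-- Illustrative only: a simplified model of the consistent CDC views.
-- JV$D: every change captured so far (roughly what the Journal Data option shows).
CREATE OR REPLACE VIEW JV$DREGIONS AS
SELECT j.JRN_FLAG, j.JRN_DATE, j.JRN_SUBSCRIBER, s.*
FROM   J$REGIONS j
JOIN   REGIONS   s ON s.REGION_ID = j.REGION_ID;
-- JV$: only the changes inside the consistency window frozen by
-- Extend Window / Lock Subscriber. A change captured after the lock is not yet
-- assigned to the window, so it stays invisible until the next Extend/Lock.
CREATE OR REPLACE VIEW JV$REGIONS AS
SELECT j.JRN_FLAG, j.JRN_DATE, j.JRN_SUBSCRIBER, s.*
FROM   J$REGIONS j
JOIN   REGIONS   s ON s.REGION_ID = j.REGION_ID
JOIN   SUBSCRIBER_WINDOWS w ON w.JRN_SUBSCRIBER = j.JRN_SUBSCRIBER
WHERE  j.WINDOW_ID >  w.MIN_WINDOW_ID
AND    j.WINDOW_ID <= w.MAX_WINDOW_ID;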
Let’s repeat the necessary steps and rerun the interface.
Step 1. Extend Window
Step 2. Lock Subscriber
Select the right subscriber
Step 3. Run the interface
Step 4. Unlock Subscriber
Step 5. Purge Journal
Once the data has been consumed, the journal is purged.
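Conceptually, and continuing the illustrative schema from the sketch above (so again not ODI's real repository DDL), the four journalizing actions map onto the window ids roughly like this:
-- Extend Window: stamp the newly captured, not-yet-windowed changes with a
-- new window id, so the set of changes to consume is closed.
UPDATE CDC_SET SET CUR_WINDOW_ID = CUR_WINDOW_ID + 1;
UPDATE J$REGIONS
SET    WINDOW_ID = (SELECT CUR_WINDOW_ID FROM CDC_SET)
WHERE  WINDOW_ID IS NULL;
-- Lock Subscriber: freeze the window range this subscriber will consume;
-- JV$ is bounded by this range while the interfaces run.
UPDATE SUBSCRIBER_WINDOWS
SET    MAX_WINDOW_ID = (SELECT CUR_WINDOW_ID FROM CDC_SET)
WHERE  JRN_SUBSCRIBER = 'SUNOPSIS';
-- Unlock Subscriber: mark the locked range as consumed.
UPDATE SUBSCRIBER_WINDOWS
SET    MIN_WINDOW_ID = MAX_WINDOW_ID
WHERE  JRN_SUBSCRIBER = 'SUNOPSIS';
-- Purge Journal: remove the changes already consumed by every subscriber.
DELETE FROM J$REGIONS
WHERE  WINDOW_ID <= (SELECT MIN(MIN_WINDOW_ID) FROM SUBSCRIBER_WINDOWS);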
Let's see how we can run the above steps through a Package.
Step 1 – OdiWaitForLogData
Apart from the default settings, provide the Logical Schema and either the table name or the CDC Set.
Step 2 – Extend Window and Lock Subscribers on the model
Drag and drop the Model into the Package, and in the Type select Journalizing Model,
then check Extend Window and Lock Subscribers.
Step 3 – Interfaces
Now call the interfaces.
Step 4 – Unlock Subscribers and Purge Journal
As the next steps after the interfaces, drag and drop the Model again and check Unlock Subscribers and Purge Journal.
Let's test the package. I am inserting another record; let's see if it comes through.
Yes, the data has been inserted into the target table.
Single journalized table per interface limit.
If there are n journalized tables in an interface, only one of them can be marked as Journalized Data, even if they are consistent-set based.
To work around this limit, I have revised the code and rewritten it as below, so that we can use two journalized tables without having to read through the complete table of the other source.
Note – I am making a big assumption here: the two tables, parent and child, are related via PK-FK, so whenever there is an insert into the parent table there is a corresponding insert into the child table too.
<% if (odiRef.getDataSet(i, "HAS_JRN").equals("1")) { %>
<%=odiRef.getJrnInfo("FULL_TABLE_NAME")%>.JRN_FLAG IND_UPDATE <%} else {%>'I' IND_UPDATE <%}%>
from
<%=odiRef.getSrcTablesList("", "[WORK_SCHEMA].<? if (\u0022[IND_JRN]\u0022.equals(\u00221\u0022)) {?>JV$[TABLE_NAME] [TABLE_NAME] <?} else {?>[TABLE_NAME] [TABLE_NAME]<?}?> ", " , ", "")%>
where (1=1)
<%=odiRef.getJoin(i)%>
<%=odiRef.getFilter(i)%>
<%=odiRef.getSrcTablesList(" ", "<? if (\u0022[IND_JRN]\u0022.equals(\u00221\u0022)) {?>AND [TABLE_NAME].JRN_SUBSCRIBER = 'SUNOPSIS' /* AND JRN_DATE < sysdate */ <?} else {?><?}?> ", " ", "")%>
<%=odiRef.getGrpBy(i)%>
<%=odiRef.getHaving(i)%>
<%}%>
To use the above code in any KM:
Mark the parent datastore with the Journalized Data option.
Replace the code JRN_FLAG IND_UPDATE with <%=odiRef.getJrnInfo("FULL_TABLE_NAME")%>.JRN_FLAG IND_UPDATE
so that it uses the parent's JRN_FLAG to drive the target accordingly.
Replace the code <%=odiRef.getFrom(i)%> with <%=odiRef.getSrcTablesList("", "[WORK_SCHEMA].<? if (\u0022[IND_JRN]\u0022.equals(\u00221\u0022)) {?>JV$[TABLE_NAME] [TABLE_NAME] <?} else {?>[TABLE_NAME] [TABLE_NAME]<?}?> ", " , ", "")%>
so that, if a datastore is journalized, its associated JV$ view is used; otherwise the table itself is used.
Similarly, replace <%=odiRef.getJrnFilter(i)%> with <%=odiRef.getSrcTablesList(" ", "<? if (\u0022[IND_JRN]\u0022.equals(\u00221\u0022)) {?>AND [TABLE_NAME].JRN_SUBSCRIBER = 'SUNOPSIS' /* AND JRN_DATE < sysdate */ <?} else {?><?}?> ", " ", "")%>
For this example I have hard-coded the subscriber name SUNOPSIS; instead, you can use odiRef.getOption and pass the subscriber name through a KM option set on the interface, as in the sketch below.
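Here is a sketch of that idea. The option name SUBSCRIBER_NAME is just an example of a KM option you would create yourself and set per interface; adapt it to your KM:
<%=odiRef.getSrcTablesList(" ", "<? if (\u0022[IND_JRN]\u0022.equals(\u00221\u0022)) {?>AND [TABLE_NAME].JRN_SUBSCRIBER = '" + odiRef.getOption("SUBSCRIBER_NAME") + "' /* AND JRN_DATE < sysdate */ <?} else {?><?}?> ", " ", "")%>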
The main reason for rewriting the KM this way is that, when only one table can be treated as journalized, ODI has to read through the complete child tables even though they are journalized too.
Let's see an example of the above code.
SOURCE
Two new regions are inserted into the REGIONS table, with region ids 7 and 8.
A record is inserted with country_id 'ID' and region_id 7.
Looking at this combination, with the parent driving, both changes get captured into the JV$D views of the respective tables.
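The source-side changes described above correspond to statements along these lines (the layout follows the familiar HR-style REGIONS/COUNTRIES sample schema, and the literal values other than the ids are just placeholders):
-- Two new regions inserted on the source side.
INSERT INTO REGIONS (REGION_ID, REGION_NAME) VALUES (7, 'Region 7');
INSERT INTO REGIONS (REGION_ID, REGION_NAME) VALUES (8, 'Region 8');
-- A new country referencing region 7 through the PK-FK relationship.
INSERT INTO COUNTRIES (COUNTRY_ID, COUNTRY_NAME, REGION_ID) VALUES ('ID', 'Country ID', 7);
COMMIT;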
TARGET
As you can see, for country_id 'ID' with region_id 7, the target table has correctly fetched the associated region name.
Hope this article gives you a better understanding of the Consistent CDC type.
Thank you and please, keep visiting us.
April 19, 2019 at 9:26 AM
Hi… I am using ODI 12c… I have implemented CDC in the Dev environment..
I have to move the changes to another environment (similar to prod) which has only an Execution Repository..
Generally we deploy only Load Plans and all mapping scenarios..
In this case we have done the same thing, but it is not working in the environment I have moved to..
Error Message –
java.sql.SQLSyntaxErrorException: ORA-00942: table or view does not exist
….
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: Error : 942, Position : 46, Sql = select count(FULL_DATA_VIEW) from ODI_WORKING.SNP_CDC_SET_TABLE where CDC_SET_NAME=:1 , OriginalSql = select count(FULL_DATA_VIEW) from ODI_WORKING.SNP_CDC_SET_TABLE where CDC_SET_NAME=?, Error Msg = ORA-00942: table or view does not exist
at oracle.jdbc.driver.T4CTTIoer11.processError(T4CTTIoer11.java:499)
… 43 more
The table listed in the error message – ODI_WORKING.SNP_CDC_SET_TABLE – gets created while defining the subscriber..
In addition to the Mapping and Package scenarios, do I need to move anything specific to the Execution Repository?
Please help with this query…
April 19, 2019 at 3:44 PM
Hi… Did you already check the user's grants? The most common problem when this error is raised is that the user being used in the connection has no grant on the table in the query…
November 27, 2017 at 8:49 PM
Hi,
When was this page last updated? When I reviewed this page before, I don't remember the OdiWaitForLogData component.
November 27, 2017 at 9:50 PM
Hi Alex,
there has been no update for years…
August 7, 2017 at 11:46 AM
Hi Cezar,
I am using ODI 11g. I have a model with 4 datastores and these datastores are used by different subscribers, like:
Datastores 1 and 2 are used by subscriber1, and datastores 1, 2 and 3 are used by subscriber2. This results in lots of junk data in the JV$D table which will never be used by any process.
Creating a separate model per subscriber is not an option.
Could you please suggest something to handle this situation?
August 7, 2017 at 6:25 PM
Hi,
If I understood right, you should create the subscribers per table, not per model, and the problem should be solved.
But if not, just create an ODI procedure that deletes any unnecessary data from J$ and executes at the end of consumption. Make sense?
August 21, 2017 at 9:14 AM
Hi Cezar,
We cannot create a subscriber on a table; it is only model-based.
Yes, I have created a procedure for deleting the records from J$, but I want to know if there is any other approach?
May 15, 2017 at 8:37 AM
Hi,
What would be the best way to migrate aggregated data from one instance to another instance:
1. Every time, purge all the data from the target instance and reload it all again.
This is not possible at all, because millions of rows cannot be migrated (purged and reloaded) every hour.
2. Load everything the first time and then move only the changed data to the target instance.
Question 1: How do we migrate aggregated data from the source DB to the target DB?
This step is done.
Question 2: And how do we move the changed aggregated data after the first load?
I am using the term aggregated for Sales or Collections transactions, not the master data.
Regards,
Waqas
August 13, 2014 at 8:52 AM
Can I create indexes on the J$ tables?
August 14, 2014 at 6:14 PM
Yes, and I suggest you add the create-index step to the JKM…
May 29, 2014 at 4:01 AM
Hi, this is an old post, but hopefully, someone can still answer my question.
I’ve tried following the instructions to modify the knowledge module to overcome the “Single Journalized table per interface Limit”.
However, I get the following errors because of this line:
<%=odiRef.getSrcTablesList("", "[WORK_SCHEMA].JV$[TABLE_NAME] [TABLE_NAME] [TABLE_NAME] [TABLE_NAME] ", " , ", "")%>
The error I am getting is:
com.sunopsis.tools.core.exception.SnpsSimpleMessageException: ODI-17517: Error during task interpretation.
Task: 17
java.lang.Exception: BeanShell script error: Sourced file: inline evaluation of: "out.print("insert /*+ APPEND */ into ") ; out.print(snpRef.getObjectName("L", " . . . " : Undefined argument: u00221u0022 : at Line: 8 : in file: inline evaluation of: "out.print("insert /*+ APPEND */ into ") ; out.print(snpRef.getObjectName("L", " . . . " : ( u00221u0022 )
It is complaining about the argument "u00221u0022". Could someone shed some light on what this argument is, and whether this is the correct value or a typo in the example shown?
Much appreciated 🙂
May 29, 2014 at 4:04 AM
Sorry, the code that I am having a problem with is:
<%=odiRef.getSrcTablesList(" ", "AND [TABLE_NAME].JRN_SUBSCRIBER = 'SUNOPSIS' /* AND JRN_DATE < sysdate */ ", " ", "")%>
Also, we are using ODI 11g.
May 29, 2014 at 4:06 AM
<%=odiRef.getSrcTablesList("", "[WORK_SCHEMA].JV$[TABLE_NAME] [TABLE_NAME] [TABLE_NAME] [TABLE_NAME] ", " , ", "")%>
January 30, 2014 at 8:48 AM
I have followed the above post to implement Consistent CDC using the latest ODI 12c and the JKM Oracle 11g Consistent (Streams) KM. I would like to draw the attention of those users trying to implement the same kind of CDC to the fact that, when one chooses the KM in question for a Data Model, one needs to change the 'Validate' option from its default of true to false.
The reason behind this change is to ignore validation errors coming from the KM's code when it tries to re-create the tables, PKs, etc. during a subsequent 'Start Journal' task. The first 'Start Journal' task on any datastore within the data model creates the necessary tables with the STREAMS_OBJECT_GROUP prefix. Subsequent 'Start Journal' tasks on other datastores will try to re-create the same tables, and if the 'Validate' option is set to true this raises errors, causing the 'Start Journal' task to fail.
January 31, 2014 at 11:56 AM
Thank you Mario!!!!
November 22, 2013 at 4:50 AM
Hi,
Please reply to the above request if anyone has an idea.
Thanks in advance.
Regards,
Archana
November 20, 2013 at 11:51 AM
Hi,
There is one table which has a foreign key constraint referencing a table in another schema. Can I use the consistent JKM in that case? If yes, how?
Please advise on priority.
Thanks,
Rao
August 5, 2013 at 5:04 AM
Can you please tell me where to place the above code in the KM?
cheers,
Hari….
August 5, 2013 at 1:25 PM
You will need to use this code in the step that loads the I$ table.
September 26, 2011 at 12:37 AM
I have a situation where I need consistent CDC. The form of consistency required is that, for 2 tables with a PK-FK relationship, either both are journalized or neither is.
For example, if there are changes to a set of parent-child records, they should be journalized. If there are changes to the child record but no changes to the respective parent record, they should not be journalized.
I've tried the steps on this webpage multiple times but it doesn't seem to work the way I want. Are there any details I need to specifically watch out for?
Thank you
July 6, 2011 at 2:02 PM
Cezar, devendra,
Can we use ODI for loading data from an Oracle database into a Coherence cache?
If yes, then please share the details.
June 8, 2011 at 2:22 AM
Hi Cezar,
in your post (Single Journalized table per interface in ODI 11) you said:
"Replace the code JRN_FLAG IND_UPDATE with .JRN_FLAG IND_UPDATE"
"Replace the code with <%=odiRef.getSrcTablesList("", "[WORK_SCHEMA].JV$[TABLE_NAME] [TABLE_NAME] [TABLE_NAME] [TABLE_NAME] ", " , ", "")%>"
I searched in all the KMs in ODI 10.1.3… I didn't see odiRef.getFrom(i) or odiRef.getJrnFilter(i).
Where can I find odiRef.getFrom(i) and odiRef.getJrnFilter(i) in the ODI 10.1.3 KMs?
Thank you very much. THANK YOU
June 7, 2011 at 5:14 AM
Hi cezar
I think ODI 11g uses odiRef.getFrom(i) and odiRef.getJrnFilter(i) in its KMs,
but I want to do this in ODI 10.1.3…, which uses snpRef in its KMs.
Please, I want it in detail;
it's very important for my project.
THANK YOU VERY MUCH
April 15, 2011 at 7:52 AM
Is this for 10g or 11g? I'm trying to programmatically drop the journal (remove the J$ tables and JV$ views). Any idea? Thanks
April 15, 2011 at 9:28 AM
This post was built on 11g, but I don't see any problem applying it to 10g. Maybe some small details…
To drop the J$ and JV$ objects, use the "Stop Journal" option on the model or table.
Make sense?