ODIExperts.com

The blog for Oracle Data Integrator ( ODI )

October 19, 2009
by Cezar Santos
12 Comments

Categories: Administration , Architecture , How to , ODI , Tips and Tricks



Topology – Data Server – Which user should be used to connect?

Hi All,

Today we will discuss which database user should be used at the Data Server (Topology) level and why. As usual, an Oracle database will be used in the example.

When you start with ODI, one of the first things you learn is that it is necessary to configure database connections in order to reverse-engineer the source and target tables.

And what does almost every beginner do? Configure both Data Servers (source and target) using the owner schema of the source and target tables and then, in the ODI physical schema, choose that same schema as both “Data Schema” and “Work Schema”.

This is natural behavior for a beginner, but let us make a simple analysis of what was done from the database point of view.

  1. A new set of tables (C$, I$, E$) was added to the database with no planning from the DA (Data Administrator)
  2. A new application is accessing the database through a user that was not created for it
  3. Thinking about tablespaces and the ETL/ELT process, an unplanned load is happening with no warning to those responsible for managing the database

Some possible effects of these 3 points:

  1. Developers of software other than ODI start to see the ODI tables and could interfere with its work, for instance by dropping an I$ table that “shouldn’t” be there, since it doesn’t belong to the original schema.
  2. The number of allowed connections may not be sufficient for the ODI connections
  3. Massive data transfers can interfere with the normal working of the source and target systems, for example:
    1. The target database raises an error about no more tablespace space in the end-user GUI at 2:15 PM, and an SR is opened with the infrastructure department
    2. When the DBA investigates at 2:30 PM, he finds nothing, since the projected space is available on the database
    3. Real situation: a massive process creates a huge C$ and a huge I$ that use up the whole tablespace (at that moment the end user gets the error), but as the C$ and I$ tables are dropped at the end of execution, everything goes back to normal. See below:
                        2:00 PM   2:15 PM   2:30 PM
Free tablespace            55         0        55
Normal use                 45        45        45
ODI instantaneous use       0        55         0

 

At 2:15 PM there is no free tablespace left for the database to work with.
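This is also why the DBA finds nothing at 2:30 PM: a query such as the sketch below (against Oracle’s dba_free_space view) shows the projected free space again, because the C$ and I$ tables have already been dropped by then:

```sql
-- Free space per tablespace, in MB; by 2:30 PM the values are back to normal.
select tablespace_name, round(sum(bytes) / 1024 / 1024) as free_mb
  from dba_free_space
 group by tablespace_name;
```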

Then what can be done to avoid all of these problems (and a few more not discussed here)???

The solution is very simple and the same as always…  CREATE A USER JUST FOR ODI!  And make it your Staging Area (the Work Schema of the Physical Schema).

In this way it will be possible to:

  • Define a specific tablespace for it
  • Have total security control, since it will be necessary to explicitly grant access to any table that needs to be read, updated, etc.
  • Avoid interference with systems already established
  • Have fewer ODI objects, since only one Data Server (Topology) per database will be necessary
  • Have centralized control of the database objects created by ODI, since they will all be under the same owner/schema
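As an illustration, such a dedicated user could be created on Oracle along these lines. This is a minimal sketch: the user, password, tablespace and table names are hypothetical, and the exact grants depend on what your interfaces actually read and write:

```sql
-- Hypothetical names; size the tablespace according to your expected C$/I$ volume.
CREATE TABLESPACE odi_stage DATAFILE 'odi_stage01.dbf' SIZE 2G;

CREATE USER odi_staging IDENTIFIED BY a_strong_password
  DEFAULT TABLESPACE odi_stage
  QUOTA UNLIMITED ON odi_stage;

GRANT CREATE SESSION, CREATE TABLE, CREATE VIEW TO odi_staging;

-- Grant access only to the specific tables ODI needs to read or load:
GRANT SELECT ON src_owner.customers TO odi_staging;
GRANT SELECT, INSERT, UPDATE ON tgt_owner.dim_customer TO odi_staging;
```

In Topology, the Data Server then connects as this dedicated user, the “Data Schema” of each Physical Schema points to the real table owner, and the dedicated user is used as the “Work Schema”.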

These are some advantages, among others!

Well folks, I hope to have helped in the understanding of a “Best Practice” of ODI administration.

One small tip:

–  Print this post and take it to the DBA of your environment; I can assure you that a lot of questions and problems will be avoided when ODI starts… 😉

As always, comments are very welcome.

Cezar Santos

October 16, 2009
by Cezar Santos
53 Comments

Categories: How to , Logic , ODI , Tips and Tricks



How to use PL/SQL procedures and functions in ODI

Hi people,

It’s very frequent that I get emails asking me how to use PL/SQL functions and procedures in ODI.

For that, all you need is an anonymous PL/SQL block. That is all.

A simple example:

1 – Situation:

You need to call a generic ERP PL/SQL procedure that has 3 “IN” parameters before an interface starts.

Procedure description:

MY_PLSQL_PROC(par1 in varchar2, par2 in number, par3 in date)

Solution:

  1. Create an ODI Procedure with one step inside
  2. Set the technology to Oracle
  3. Choose the right Logical Schema that reaches the procedure
  4. Write the following code:

Begin

  MY_PLSQL_PROC('#ODI_var1', #ODI_var2, sysdate);

end;

(Note that #ODI_var1 is quoted because par1 is a varchar2 and ODI substitutes variables as plain text.)

Voila!!! It’s possible to call a procedure!

And yes, I know that I showed the simplest possible case… hehehehehe!

Ok, let us go to something a little more complex:

 

2 – Situation:

You need to call a generic ERP PL/SQL procedure that has 2 “IN” parameters and 1 “OUT” parameter before an interface starts. This “OUT” parameter indicates whether there is a business error.

Procedure description:

MY_PLSQL_PROC(par1 in varchar2, par2 in number, par3 out varchar2)

 Solution:

  1. Create an ODI Procedure with one step inside
  2. Set the technology to Oracle
  3. Choose the right Logical Schema that reaches the procedure
  4. Write the following code:

Declare

  v_ret varchar2(1000);

Begin

  MY_PLSQL_PROC('#ODI_var1', #ODI_var2, v_ret);

  If v_ret is not null then
    raise_application_error(-20001, 'The following error was raised: ' || v_ret);
  end if;

end;

This code will allow you to get a business error into the Operator, and even to use the getPrevStepLog API to send an email with the error description.

Ok, can we add more complexity???

 

3 – Situation:

You need to call a generic ERP PL/SQL procedure that has 2 “IN” parameters and 1 “OUT” parameter before an interface starts. This “OUT” value needs to be passed to an ODI variable for future use.

Procedure description:

MY_PLSQL_PROC(par1 in varchar2, par2 in number, par3 out varchar2)

Solution:

  1. Create an ODI Procedure with 3 steps inside
  • First Step
    1. Set the technology to Oracle
    2. Choose the right Logical Schema that reaches the procedure
    3. Write the following code:

create or replace package P$_temp as

  pv_ret VARCHAR2(1000);

  function get_ret return varchar2;

end P$_temp;

 

  • Second Step
    1. Set the technology to Oracle
    2. Choose the right Logical Schema that reaches the procedure
    3. Write the following code:

create or replace package body P$_temp as

  function get_ret return varchar2 as
  begin
    return pv_ret;
  end get_ret;

end P$_temp;

 

  • Third Step
    1. Set the technology to Oracle
    2. Choose the right Logical Schema that reaches the procedure
    3. Write the following code:

Declare

  v_ret varchar2(1000);

Begin

  MY_PLSQL_PROC('#ODI_var1', #ODI_var2, v_ret);

  P$_temp.pv_ret := v_ret;

end;

Now, recovering this value into an ODI variable is as simple as creating the ODI variable with the following query in its Refreshing tab (use the same Logical Schema as used in the procedure):

select P$_temp.get_ret

from dual

Add the procedure followed by the ODI variable (in Refresh mode) to your package, and the ODI variable will contain the “OUT” value from the PL/SQL procedure.

 

Well my friends, that is all. In truth, it isn’t so complicated.

Any comment is highly appreciated!

Best Regards to all,

Cezar Santos

October 12, 2009
by Cezar Santos
17 Comments

Categories: Architecture , ODI



Context, Logical And Physical Schema – How does it work?

Hi people,

Let me try to explain a subject that I already got a lot of emails asking me about how it works:

  • Data Server
    • Physical Schema
  • Logical Schema
  • Context

Doubts very commonly arise from this combination because it is based on 3 concepts that I call “the three bases”.

The most important point is: one doesn’t exist without the others.

Topology Engine

It’s good to remember that the Data Server can be understood as the level above the Physical Schema, and that one Physical Schema is linked to one, and no more than one, Data Server.

For a better understanding, take a look at the following flow:

Logical Schema, Context and Physical Schema inter-relationship

How does this flow work?

  • Data Server
    • The object that defines the connection to the database. It stores the IP, user and password, for instance.
  • Physical Schema
    • Defines 2 database schemas (in the Oracle sense of the word): one from which to read the data and another for ODI to work in (the work area where the C$ and I$ tables can be created if necessary).
  • Context
    • Defines an “environment”, a particular instance for code execution. The most common example is Development, Test and Production environments, but there are several other possibilities.
  • Logical Schema
    • It is an alias for a “logical structure”; I mean, when code is developed in a Development environment (a single ODI interface, for example), it is expected (and necessary) that every database table and column used by it also exists in any new environment where this code is deployed, because, if not, a database error will be raised.

The Logical Schema is the final piece needed to understand the flow. The idea behind its existence is to allow the same code to be used in any environment, since it is an alias.

But this alias, the Logical Schema, cannot work alone, since it only represents a logical structure, not the connection itself (user, IP, etc.). For that, the Physical Schema exists. It completes the Logical Schema with the physical characteristics of the connection.

Because of that, one Logical Schema is linked to one, and just one, Physical Schema.

But why have an alias for the user, IP and password??? Because then there is no need to include these physical characteristics in the scenario (“compiled code”), which means that if, for instance, a password is changed, there is no need to regenerate the scenario!!!

Well, after understanding the link between Physical and Logical Schemas, how does the Context fit into this equation?

It determines, at execution time, on which hardware the Logical Schema, through its Physical Schema, will be executed. Hardware here can be understood as Development, Test or Production.

If you take a look at the users and schemas used in the figure, you will see differences between the environments. I will explain those differences in another post, about “Connection Architecture”.

Friends, I hope to have been helpful in the understanding of all these concepts!

See you soon.

 

Cezar Santos

October 5, 2009
by Cezar Santos
9 Comments

Categories: Architecture , ODI , Technology



Repository Architecture – Work Repository, Development or Execution? When and why uses each one.

Hi Friends,

I get a lot of emails asking me what the difference is between a Development Work Repository (DWR) and an Execution Work Repository (EWR), and also why and when to use each one.

In this post, I will try to explain these questions.

First we will discuss the concept of each type of repository and, after that, the inter-relationship between them.

What is a work repository from ODI point of view?

It is a set of tables where all information from Designer and/or Operator is stored: code, interfaces, procedures, variables, packages, execution logs, scenarios, etc.

In this way, the ODI client install is only a GUI (Graphical User Interface), so if, for instance, the hard disk of an ODI developer’s machine crashes, as long as the database wasn’t on that same disk, absolutely nothing is lost.

 Concepts

Development Work Repository (DWR) – The code storage.

  • It’s where all development (all written code) is stored: every single comma written in an interface or procedure, every OK line of a package, every datastore created or reverse-engineered in a model, etc. Plus, it stores the execution log of any process started from there.

In my concept of a perfect ODI environment, it’s where the unit tests are executed, I mean the tests done by developers after finishing the development of an object.

Execution Work Repository (EWR) – The “compiled code” (scenario) storage.

  • It’s where all scenarios are stored and no code alteration is allowed. It is accessed only through Operator.

It means that after a development is ready to go to test (or production) and a scenario is generated from it (“compiled code” in ODI terms), the scenario should be imported into an EWR for integration tests (once again, in my concept of a perfect ODI environment).

 

Work Repositories Inter-Relationship

OK, having defined each repository, let me try to show how to use each one.

In theory, any program code, no matter whether ODI, Java, Visual Basic or PL/SQL, should have just one version in production and just one code line to be altered. Of course there are exceptions, when a bug fix is necessary for some old version, but this is more frequent in software products, not in single programs or processes.

Based on this theory, the Test and Production environments shouldn’t have a DWR. Imagine the following sequence and its consequences:

  1. All environments have a DWR
  2. A generic process is developed in the Development environment
  3. After being imported into the Test environment, a small bug is detected
  4. The tester decides to fix it himself, since it is just “adding a comma”, and doesn’t talk to the developer
  5. The process goes to Production (also a DWR)
  6. The Operator sees some other small problem and, again, decides to fix it himself.
  7. Now the Business Analyst asks for a new mapping and the developer implements it.

Think… In which code will the new Business Analyst request be implemented? Can anyone assure that the same process will be tested by the same tester, and that he will remember the process and fix the old issue once more?

Of course I’m describing extreme situations, but that is to build a terrible scenario and scare you, for sure!!!

Worse than that, imagine if someone puts malicious code into production, since it can be accessed through the Designer module.

I know that controls can be created to try to avoid these situations, but I prefer not to create a door if no one needs to pass through it. There is no reason to create it.

Well, these are my reasons to defend a Work Repository architecture with just one Development Work Repository and as many Execution Work Repositories as your environment requires!

Best Regards,

Cezar Santos

September 29, 2009
by kdevendr
0 comments

Categories: Logic , ODI , Tips and Tricks



Removing Special Characters Using a Jython Script

This post describes a simple method to remove one or more special characters using a Jython script.

My source flat file had this special character ( " )

Image

To remove such special characters from within ODI, without running Unix or other external scripts, please follow this simple ODI procedure using a Jython script.

Image

Image

PROCEDURE – REMOVE SPECIAL CHARACTER IN FILE

STEP – REMOVE

source_file = open('D:/xp_odi/oracledi/demo/file/pacs08.csv', 'r')
temp_file = open('D:/xp_odi/oracledi/demo/file/pacs08_new.csv', 'w')
count_record = source_file.readline()
while count_record:
    s = count_record.replace('"', '')
    temp_file.write(s)
    count_record = source_file.readline()
temp_file.close()
source_file.close()

STEP – MOVE FILE

OdiFileMove -FILE=D:/xp_odi/oracledi/demo/file/pacs08_new.csv -TOFILE=D:/xp_odi/oracledi/demo/file/pacs08.csv

In the first part of the script

  • source_file = open('D:/xp_odi/oracledi/demo/file/pacs08.csv', 'r')

I am opening my source file in read mode.

  • temp_file = open('D:/xp_odi/oracledi/demo/file/pacs08_new.csv', 'w')

Here I am defining the temporary file in write mode.

  • count_record = source_file.readline()

Reading the first record of the file.

  • while count_record:
        s = count_record.replace('"', '')
        temp_file.write(s)
        count_record = source_file.readline()

Here, for each record, I am replacing the special character (") with an empty string; temp_file.write(s) then writes the record to the target file.

Finally, after the complete transfer, I move the temp file over the source file so that we don’t need to make any changes on the source side.

Run the ODI procedure and the special character will be removed; the final source file will be:

Image

Solution 2 – Suppressing at the ODI Datastore level [ Easiest solution]

image

Under the required datastore, provide the special character in the Text Delimiter field and it will be suppressed while loading into the target. As for my example, I have provided the semicolon in the Text Delimiter.

The file itself still contains the special character, but this is the easiest solution.

image

September 25, 2009
by kdevendr
0 comments

Categories: Knowledge Modules , Logic , ODI , Tips and Tricks



A Faster and Easier Way to Design Interfaces

Define your Model Folder and Models .

Right click on the Target Model and select ‘Generate Interfaces IN’.

Image

Select your Generation Folder and Optimization Context accordingly, and select the datastore name for which the interfaces need to be generated; or just click OK and interfaces will be generated for all datastores in the selected folder.

Image

The sample interfaces are generated as shown below, according to the desired datastore name.

Image

Drag in your source, apply the LKM, IKM and CKM, and run the interface. Now, there is a simple process to make the selection of the KMs automatic.

Consider an example:

Source – SQL Server, Target – Oracle; LKM SQL to Oracle; IKM SQL Control Append with Flow Control set to No and Truncate set to Yes in the options; and no CKM, since flow control is disabled in the IKM.

Open your LKM – LKM SQL to Oracle – and check the option ‘Default KM for this pair of Technologies’.

Image

Do the same for IKM SQL Control Append

Image

Go to the IKM options and change the required settings.

Image

[ Note: Please do not change the standard IKM; either make a duplicate and do the required changes there, or add a change history note with the required changes in both cases. These are some good practices of ODI KM usage and reusability. ]

Image

Go to CKM Oracle and CKM SQL and uncheck the mark ‘Default KM for this pair of Technologies’, as the CKM won’t be used since we are disabling the flow control option.

Image

Now drag in your source and your LKM, IKM and CKM will be selected automatically. If not, check whether other KMs are overriding the above-mentioned ‘Default KM for this pair of Technologies’ option.

Keep looking at www.odiexperts.com for more wonderful tips and tricks.

September 23, 2009
by kdevendr
12 Comments

Categories: Architecture , Logic , ODI , Tips and Tricks



AUTOMATE ODI REPOSITORY EXPORTS

This post describes a simple process to set up an automatic daily backup of the ODI Master and Work Repositories. This process is especially important where no backup, or an improperly scheduled backup, is carried out by the DBAs.

Steps

  1. Export the Master Repository using ODIEXPORTMASTER
  2. Export the Work Repository using ODIEXPORTWORK
  3. Refresh the SYSDATE variable
  4. Delete the old files using ODIFILEDELETE
  5. Finally, send a mail notification (optional)

ODIEXPORTMASTER

Provide your target directory and give the zip file_name as

MASTER_REP_BACKUP_<%=odiRef.getSysDate("dd-MM-yyyy")%>.zip

This way the exported Master Repository is saved with a meaningful filename including the date of the backup.

ODIEXPORTWORK

Repeat the above process for ODIEXPORTWORK

Provide your target directory and give the zip file_name as

WORK_REP_BACKUP_<%=odiRef.getSysDate("dd-MM-yyyy")%>.zip

Sample output of the above files.

Keeping such daily exports of the repository forever isn’t ideal, so it’s good to delete the old archives and keep just two days’ worth.

To do so , Please follow these simple steps.

Create a variable named ‘SYSDATE’ and give it a refreshing query.

Here I am using -2, as I want to keep just two days of files and delete all older files. Change this value according to your need.
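On Oracle, the refreshing query could look like the sketch below (assuming the same dd-MM-yyyy format used in the export filenames):

```sql
-- Returns the date from two days ago, formatted like the export filenames,
-- so that '*_#SYSDATE.zip' matches the archives to be deleted.
select to_char(sysdate - 2, 'dd-MM-yyyy') from dual
```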

ODIFILEDELETE

Call ODIFILEDELETE to delete the required files.

The filename ‘*_#SYSDATE.zip’ will match the required day’s files; so if today is 23-09-2009, the above command will delete the 21-09-2009 files.

This simple procedure gives you a basic backup of the Master and Work Repositories, especially in conditions where there is no proper backup of the development database, or backups are not carried out at proper intervals.

Schedule the package using the ODI Scheduler accordingly, and add OdiSendMail for success or failure notifications.

[ Note: No ODI developer should be given write permission on the archive folder, so that no one deletes the above files accidentally or corrupts them with improper handling. Only ODI should have write access to this folder. ]

If an error occurs and you are not able to recover the database or repository, these backup files come in handy: using the right import procedure, the whole development box can be brought back into action. Please keep looking at odiexperts.com for a future post on how to do a proper import without affecting objects’ internal IDs.

September 23, 2009
by kdevendr
1 Comment

Categories: ODI



Informix Connection

To download the Latest driver for Informix, please visit here

http://www14.software.ibm.com/webapp/download/search.jsp?go=y&rs=ifxjdbc

There is also a driver present for Informix in ODI

Driver Name – IBM Informix JDBC Driver
JDBC Driver – com.informix.jdbc.IfxDriver
JDBC URL – jdbc:informix-sqli://165.202.560.369:1576:informixserver=infxdw;database=ftp

URL Parameters:

<host>: server network name or IP address. < 165.202.560.369 >
<port>: server port number. < 1576 >
<server name>: name of the Informix server < infxdw >
<dbname>: Informix database name <ftp>
<property>=<value>: Connection properties. Refer to the driver’s documentation for a list of available properties.

September 23, 2009
by kdevendr
6 Comments

Categories: ODI , Technology



SQL Server connection

Connecting Oracle Data Integrator and MS SQL Server

1. Refer to this Download link for Microsoft SQL Server 2005 JDBC Driver 1.2

http://www.microsoft.com/downloads/details.aspx?FamilyId=C47053EB-3B64-4794-950D-81E1EC91C1BA&displaylang=en

and for Microsoft SQL Server JDBC Driver 2.0 here

http://www.microsoft.com/downloads/details.aspx?familyid=99B21B65-E98F-4A61-B811-19912601FDC9&displaylang=en

JDBC Driver 1.2

Extract the downloaded .exe or .gz and copy the jar file “sqljdbc.jar” into the $ODI_HOME/drivers folder.

Now go to Topology and, under the Physical Architecture of Microsoft SQL Server, define a new Data Server with the required name and user connection.

Go to the JDBC tab and enter the JDBC Driver and JDBC URL:

JDBC Driver : com.microsoft.sqlserver.jdbc.SQLServerDriver
JDBC URL : jdbc:sqlserver://<server address>:<port (default 1433)>;selectMethod=cursor
JDBC URL : jdbc:sqlserver://<server address>:<port (default 1433)>;selectMethod=cursor;responseBuffering=adaptive

The above works great with SQL Server 2005; for SQL Server 2000, try this:

JDBC Driver : com.microsoft.jdbc.sqlserver.SQLServerDriver
JDBC URL : jdbc:microsoft:sqlserver://<server address>:<port (default 1433)>;selectMethod=cursor

Learn more about options like selectMethod=cursor and responseBuffering=adaptive at http://msdn.microsoft.com/en-us/library/bb879937.aspx, and about other connection string properties at http://msdn.microsoft.com/en-us/library/ms378428.aspx.

To learn more about the URL options, check the “Release” document available with the extracted .exe or .gz.

Click Test to test the connection, then select the appropriate database and owner for the Physical Schema and define the appropriate Logical Schema.

Note: for ‘Owner (Schema)’, select the right owner of the database, mostly “dbo”. Check with your SQL Server DBA for the right schema owner.

For the work schema, a user who has permission to access that schema has to be selected.

JDBC Driver 2.0

JDBC Driver 2.0 works only with JDK or JRE 1.5 or greater, and as ODI is shipped with 1.4, you will surely encounter an error.

To resolve this issue, download JDK or JRE 1.5 or greater and change ODI_JAVA_HOME accordingly to point to the new JAVA_HOME.

The SQL Server JDBC driver is shipped with both ‘sqljdbc’ and ‘sqljdbc4’. To use the JDBC Type 4 driver, copy only ‘sqljdbc4’ into the ‘oracledi/drivers’ folder.

[ Note : Please refer to the release document – point number 1 under Known Issues ]

Copy the ‘sqljdbc4’ driver and test the connection. Restart the agent, try the same driver and URL as shown above, and refer to the ‘Release’ document for more details.

Learn more about the SQL JDBC driver from your extracted driver folder at ‘enuhelpindex.html’, or access all this information online at http://msdn.microsoft.com/en-us/library/ee229547%28SQL.10%29.aspx

September 23, 2009
by Cezar Santos
2 Comments

Categories: Architecture , ODI



Repository Architecture – Two or more Masters – Part 3

Hi Friends,

I got some emails and comments asking me to publish something about how to work with more than one Master Repository and, as you will see, it isn’t so complicated.

I will use the post Repository Architecture – Just one Master – Part 2 as a base to show how to have a multiple Master Repository environment.

First:

Question: Why have 2 or more Master Repositories?

Answer: When there is any restriction, physical or by policy, on connectivity between environments; i.e. the Development environment has no physical connection to Production. This is a very common architecture in financial institutions like banks.

Take a look at the following image; we will discuss it:

Three environments with no connection between them

Here is how it will work:

  1. Development
    • One Master Repository (MR)
    • One Development Work Repository (DWR)
    • The ODI modules – Designer and Operator – are used for development, tests and log viewing
    • Topology has only connections to the development databases (source and target)
    • CENTRAL POINT OF ARCHITECTURE: ALL LOGICAL SCHEMAS AND LOGICAL AGENTS NEED TO BE DEFINED EXACTLY AS THEY WILL BE DEFINED IN THE OTHER ENVIRONMENTS
  2. Test
    • One Master Repository (MR)
    • One Execution Work Repository (EWR)
    • Operator will be used to import and export scenarios and to create scheduled processes
    • Metadata Navigator will be used for tests (manual executions by Business Analysts)
    • Topology has only connections to the test databases (source and target)
    • CENTRAL POINT OF ARCHITECTURE: ALL LOGICAL SCHEMAS AND LOGICAL AGENTS NEED TO BE DEFINED EXACTLY AS THEY WILL BE DEFINED IN THE OTHER ENVIRONMENTS
  3. Production
    • One Master Repository (MR)
    • One Execution Work Repository (EWR)
    • Operator will be used to import and export scenarios and to create scheduled processes
    • Metadata Navigator will be used for log viewing and manual executions by final users (when necessary)
    • Topology has only connections to the production databases (source and target)
    • CENTRAL POINT OF ARCHITECTURE: ALL LOGICAL SCHEMAS AND LOGICAL AGENTS NEED TO BE DEFINED EXACTLY AS THEY WERE DEFINED IN THE OTHER ENVIRONMENTS

 

About the “CENTRAL POINT OF ARCHITECTURE”

ODI works with logical objects (schemas and agents) to separate development from “physical” changes, I mean, changes of IP, user, password, hardware, etc.

Using this concept, once exactly the same Logical objects are declared (created) in the three environments, it is absolutely possible to migrate scenarios from Development to Test to Production.

The connections and agents (physical objects) created in Topology can, with no problem, be created in each environment with their own parameters, since they will be different anyway, each one pointing to distinct hardware. I mean, no ODI import/export is necessary for physical objects.

Another possible question is “Why is there just one Development Work Repository?”, but that one I will answer in my next post. 😉

I hope to have helped you obtain more knowledge about ODI Repositories.

See you soon!

Cezar Santos