Monday, December 30, 2013

Scenario 11: ORA-01502: index 'XYZ.UNIQUE_UK1' or partition of such index is in unusable state

Answer: This error relates to the database table XYZ, which has a unique index UNIQUE_UK1. The index has somehow been left in an UNUSABLE state, and that is why we are getting this error.

Resolution: We have to ask the DBA team to rebuild the index, or rebuild it ourselves (a sketch follows the note below). Once the index is rebuilt, we are good to restart the job.

***Note***
Generally a unique index ends up unusable when a direct-path (bulk) load pushes duplicate key values into the table, or after partition maintenance on the table.
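
A minimal sketch of checking and rebuilding the index, assuming the owner and index name from the error message (a partitioned index needs REBUILD PARTITION per partition instead):

select index_name, status from all_indexes where owner = 'XYZ' and index_name = 'UNIQUE_UK1';

alter index XYZ.UNIQUE_UK1 rebuild;

If duplicate key values were loaded into the table, the rebuild itself will fail (typically with ORA-01452) until the duplicates are removed.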

Saturday, December 28, 2013

Scenario 10: Incomplete record at the end of file (Abinitio error)

This is an Abinitio error. We usually get this kind of error when a delimiter character appears inside a field value. As a result, records do not end with the proper delimiters.

Example:

We have a flat file with the following delimiters:

Field Delimiter: "&&"
Record Delimiter: "\n"

While creating the flat file, one of the field values contained "\n". From that point the record is split into another record, because "\n" is treated as the record delimiter. As a result, one complete record is split into two incomplete records.

Example records:

100&&Rama&&156&&Good Boy
101&&Laxmi&&1560&&Bad Boy
102&&Suryanarayana
&&1760&&Good Boy
103&&JrNtr&&9999&&Perfect Actor

From the example records above, we can see that record 102 was split after "Suryanarayana" because of the newline character ("\n") at the end of the field, and the remaining two fields were treated as another new record. Both records are incomplete.

Resolution:

We have to remove the newline character ("\n") from the value ("Suryanarayana") and re-create the flat file. The data will then load without corruption.
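
If the flat file is extracted from a database table, one way to prevent this is to strip embedded line breaks during extraction. A minimal sketch, assuming a hypothetical source table CUSTOMER with a NAME column:

select replace(replace(NAME, chr(10), ''), chr(13), '') as NAME
  from CUSTOMER;

CHR(10) is the linefeed and CHR(13) the carriage return; removing both covers Unix and Windows line endings before the record delimiter is applied.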

Friday, December 20, 2013

ORA-01422: exact fetch returns more than requested number of rows

Scenario:
ORA-01422: exact fetch returns more than requested number of rows

Solution: This error is related to SELECT ... INTO in PL/SQL. We declare a variable and then try to fetch a value into it with a SELECT statement. If the SELECT returns more than one row at a time, it throws the error below.

Example:

BEGIN
  SELECT A.XYZ
    INTO l_XYZ
    FROM XYZ_TABLE_LOG A
   WHERE A.XYZ IN (SELECT MAX(B.XYZ)
                     FROM XYZ_TABLE_LOG B
                    WHERE B.TEXT = 'SAMPLE');
END;

Error:

ORA-01422: exact fetch returns more than requested number of rows

Here the outer SELECT returns more than one row, so the SELECT ... INTO throws the error.

SELECT A.XYZ FROM XYZ_TABLE_LOG A
 WHERE A.TEXT = 'SAMPLE'
   AND A.XYZ = (SELECT MAX(B.XYZ) FROM XYZ_TABLE_LOG B WHERE B.TEXT = 'SAMPLE');

342
342

The maximum value is 342, but two rows in XYZ_TABLE_LOG carry that value, so the condition matches two rows and the SELECT ... INTO fails. Remove one of the duplicate rows from table XYZ_TABLE_LOG.

Please investigate why two rows share the same MAX value; once the duplicate is resolved, the issue will be resolved.
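
A minimal sketch of a safer pattern, assuming the table and variable from the example: fetch the aggregate directly, so the statement can only ever return one row:

DECLARE
  l_XYZ XYZ_TABLE_LOG.XYZ%TYPE;
BEGIN
  SELECT MAX(A.XYZ)
    INTO l_XYZ
    FROM XYZ_TABLE_LOG A
   WHERE A.TEXT = 'SAMPLE';
END;
/

An aggregate without GROUP BY always returns exactly one row, so this SELECT ... INTO can no longer raise ORA-01422.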

Thanks,
Anilkumar.

Saturday, November 30, 2013

Why use a Persistent Cache in the Lookup Transformation in Informatica?

Why use a persistent cache in an Informatica lookup?

Ans: Persistent cache is one of the lookup cache types in Informatica, and it plays an important role in lookup performance.

If your lookup file/table is static and will not change regularly, you can check the Persistent Cache option in the Lookup transformation. Informatica then saves the cache files after the first run and reuses them in later runs instead of rebuilding the cache.

Note: make sure the data in the lookup table/file stays constant if we check this option.

This helps increase lookup performance, since the cache does not have to be rebuilt on every session run.

Monday, November 18, 2013

Q: How to Print Data in a Multifile Partition?

Q: How to Print Data in a Multifile Partition?

Ans: To print data from a single partition of a multifile:

Ex:  m_dump Loader.dml mfile:mfs8/tempfile.dat -partition 6

The above example prints the data of the multifile tempfile.dat from partition 6.

Q: How to Print Data from a Multifile in Abinitio?

Q: How to Print Data of a Multifile in Abinitio?

Answer: To print data from a multifile:

m_dump {location of the DML file} {location of the data file}

Ex: m_dump Loader.dml mfile:mfs8/tempfile.dat


Q: How Does the CHARACTER SET in Abinitio Affect the Output?

Q: How Does a Character Set Affect the Output?

Answer: The actual value that the DML utility (m_eval) prints depends on the character set it uses.

Ex:

 Suppose I want to print the DEL control character in the UTF-16 character set; the output will be 007F.
 The same DEL character in the ISO-8859-1 character set will be output as 7F.

We can see the same character-set difference on Unix as well.

Ex: $ m_eval '"abc\x7f"'          ---> output: "abc\x7f"
      $ m_eval '"abc\x7f\u1000"'  ---> output: "abc\u007f\u1000"

The second string contains \u1000, which forces Unicode output, so \x7f is printed as \u007f.

Thanks,
Anilkumar.


Thursday, November 14, 2013

Q: What is the Abinitiorc File?

Q: What is the Abinitiorc File?

Answer: The name itself indicates Abinitio remote connection: it is the configuration file the Co>Operating System reads for its settings, including the ones needed to reach remote hosts.

Example: suppose we have two servers, ABC and XYZ, with the Co>Operating System installed on both. If we want to connect from one server to the other, the abinitiorc file should be present on both servers.

That is what allows the servers to communicate with each other.

If we want to connect to the XYZ server from the ABC server, we can log in as below:

ssh XYZ

The above command will take you to the XYZ server from the ABC server.

Sunday, November 10, 2013

Realtime Scenario 9: ORA-12505: TNS:listener could not resolve SID given in connect descriptor

Realtime Scenario 9:

ORA-12505: TNS:listener could not resolve SID given in connect descriptor.

Solution:

This is purely a database-side error, so please check with the database administration (DBA) team. The listener did not recognize the SID in the connect descriptor, which usually means the SID is wrong in the TNS entry or the database instance is not registered with the listener.

If there is an issue on the DBA side, wait for it to be resolved. If there is no issue, try logging in to that database (just for confirmation); if you are able to log in successfully, restart the failed job.

Thanks,
Anilkumar.

RealTime Scenario 8: ORA-01722: INVALID NUMBER

Realtime Scenario 8:

ORA-01722: INVALID NUMBER
Database driver error...
Solutions:

This one is a little difficult to track down, but if you follow the steps below you can find the data that is causing the issue.

Below are the steps:

*********************

1-> Let's suppose I have a table EXAMPLE with the fields below.

          1. USER_NAME VARCHAR2(10);
          2. USER_ID NUMBER(3);
          3. PLACE VARCHAR2(20);
          4. PINCODE NUMBER(6); 
          5. DATE_OF_ENTRY  DATE;
          6. COMMENTS VARCHAR2(100);

         Now, when I execute the query below, I get the following error.

         select * from EXAMPLE;
        error:
        ORA-01722: INVALID NUMBER

2-> The error says INVALID NUMBER, so we have to check all the fields defined with the NUMBER data type; we need not bother about the VARCHAR/VARCHAR2/DATE data types.

3-> View the data for the two NUMBER columns below from the table EXAMPLE.

              select USER_ID,PINCODE from EXAMPLE;

 Note: Please select the data for each column independently from the table data. (Press F4 on the table to get the table description, then select the Data tab.)

4-> Once the data is fetched, sort the USER_ID field in descending/ascending order (to view the data). This will surface any wrong record whose value does not match the defined data type.

5-> Repeat the same for the second column, PINCODE.

6-> From the above two steps (step 4 and step 5), you will know which field is causing the issue. See the sketch after these steps for a query-based alternative.

7-> Correct the bad record that was identified. That record almost certainly came in from the source, so modify the record at the source end.

8-> Once the data is corrected, restart the failed job.
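
In many ORA-01722 cases the bad values actually sit in a staging table whose columns are VARCHAR2 and only get converted to NUMBER during the load. A minimal sketch for spotting non-numeric values in that situation, assuming a hypothetical staging table XYZ.STG_EXAMPLE that holds the same columns as VARCHAR2:

select USER_ID, PINCODE
  from XYZ.STG_EXAMPLE
 where not regexp_like(trim(USER_ID), '^-?[0-9]+$')
    or not regexp_like(trim(PINCODE), '^-?[0-9]+$');

Any row this query returns contains a value that cannot be converted to NUMBER and would raise ORA-01722 during the load.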

Thanks,
Anilkumar.



Real time Scenario 7: MAX-CORE Value issue in Hash-Joins(Abinitio)

Real Scenario 7: (Abinitio)

Unable to subdivide hash-join input into partitions smaller than
max-core = 157286400 bytes.   This can be caused by a very large number
of records having the same key value.  Either increase max-core to
at least 157483759 or sort and use the merge-join component.


Solution:

We usually face this kind of error in the Sort/Join components in Abinitio. When we use the hash-join component we must declare a max-core value; in this scenario it is 150 MB (157286400 bytes).

If we receive a larger number of records, the hash-join component cannot handle them all within a max-core value of 150 MB. We have to increase the max-core value based on the number of records we are processing (the error above suggests at least 157483759 bytes), or sort the inputs and use the merge-join component instead.

Once we have increased the max-core value, we are good to restart the Abinitio graph; it should complete without any issues.

Thanks,
Anilkumar.

Real Scenario 6: ORA-00001: unique constraint (XYZ.EXAMPLE) violated

Real Scenario 6: ORA-00001: unique constraint (XYZ.EXAMPLE) violated
Solutions:
We can solve this issue in the following ways:
****Temporary fix***************
We can allow a number of rejects by using the Stop On Errors property in the session properties.
All the rejected records will then be skipped and the job will complete successfully.
******Permanent fix*************
1-> We have to identify the duplicate records.
   How to find the duplicate records in a table:
       ** We need to check the primary key constraint created on this table.
Example:
Let's say the EXAMPLE table has a primary key constraint EXAMPLE_PK defined on two columns, USER_NAME and USER_ID.
If my job failed due to the unique constraint violation, I need to run the query below to check for duplicate records in the table.
   Query to find duplicates:
      
       select count(*),A.USER_NAME,A.USER_ID from XYZ.EXAMPLE A GROUP BY A.USER_NAME,A.USER_ID having count(*)>1;
This query will fetch the duplicate records in the table EXAMPLE.
2-> If we find any duplicates in the EXAMPLE table, keep the valid records and delete the duplicate data (see the deletion sketch after these steps).
3-> If the indexes on the table are disabled, rebuild them.
4-> Restart the job; it should complete.
5-> For a permanent fix, check on the source side why we got duplicate records.

If we did not find any duplicate records in step 1 (permanent fix) and the job still failed with the unique constraint error, we need to check the following:
6-> If there are no duplicate records in the EXAMPLE table, we need to check the source of the EXAMPLE table.
Example:
Let's say the EXAMPLE table is sourced from the SOURCE_EXAMPLE table; then do the following.
     To find the duplicates in the source table:
select count(*),B.USER_NAME,B.USER_ID from XYZ.SOURCE_EXAMPLE B GROUP BY B.USER_NAME,B.USER_ID having count(*)>1;
*******
Note: if the primary key is defined on different columns, adjust the query accordingly.
7-> If the source records are not valid, delete the invalid records. If they are valid, sync the constraint keys of the source and target tables (define the primary keys on the proper fields so that the source and target tables match).
8-> After you are done deleting the duplicates, restart the job (make sure the indexes are enabled).
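
For step 2, a minimal sketch of the deletion, assuming the EXAMPLE_PK key columns from the example; it keeps one row per key (the one with the lowest ROWID) and deletes the rest, so review the result before committing:

delete from XYZ.EXAMPLE A
 where A.rowid not in (select min(B.rowid)
                         from XYZ.EXAMPLE B
                        group by B.USER_NAME, B.USER_ID);

Which duplicate to keep is a business decision; swap MIN for MAX (or add extra conditions) as needed.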

Thanks,
Anilkumar.

Real Scenario 5: ABINITIO: Unexpected agent disconnect on host localhost

Real Scenario 5:

ABINITIO: Unexpected agent disconnect on host localhost

Solutions:

This is a runtime Abinitio error. We can restart this job; after a restart it should complete.

Reason:

Security scans running on the servers can cause this type of error.

Real Scenario 4: ORA-12899 Inserted value too large for Column.

Real Scenario 4: ORA-12899 Inserted value too large for Column.

Solutions:
We can solve this issue in the following ways (Informatica):
****Temporary fix***************
We can allow a number of rejects by using the Stop On Errors property in the session properties.
All the rejected records will then be skipped and the job will complete successfully.
******Permanent fix*************
1-> We have to identify the records that were rejected.
2-> We need to check the length of the USER_NAME field in the rejected records; it will be greater than the column's defined size.
      Ex: the column USER_NAME is defined as below
                    
                      USER_NAME VARCHAR2(10);
     But the USER_NAME values in the rejected records are longer than 10 characters; because of this, the records are rejected while loading into the EXAMPLE table.
     We can check the length of USER_NAME for all the rejected records as below.
     Suppose we are sourcing the data from the SOURCE_EXAMPLE table and loading it into the EXAMPLE table.
      
     select LENGTH(USER_NAME),USER_ID from XYZ.SOURCE_EXAMPLE where LENGTH(USER_NAME)>10;
    
     The query above will return all the records having length > 10.
3-> If the data is valid, we need to widen the column so the records fit into the EXAMPLE table; see the sketch below.
4-> If the data in the USER_NAME field is not valid, correct the data at the source end, then rerun the failed job.
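
For step 3, a minimal sketch of widening the column, assuming the table from the example and an assumed new length of 50 (pick the real length from the source data):

alter table XYZ.EXAMPLE modify (USER_NAME VARCHAR2(50));

Increasing a VARCHAR2 length is a dictionary-only change in Oracle and does not rewrite the existing rows.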
Thanks,
Anilkumar

Realtime Scenario 3: ORA-01401: inserted value too large for column (Abinitio)

Real Scenario 3:
There were more records rejected than the maximum allowed.
The maximum number of rejects allowed is 50.
If you wish to allow more rejects then increase the num_errors parameter.


== SQL*Loader Log =================================
ORA-01401: inserted value too large for column
Record 130098: Rejected - Error on table XYZ.EXAMPLE, column "NAME_USER".
ORA-01401: inserted value too large for column


Solutions:

We can solve this issue in the following ways:

****Temporary fix***************(Abinitio)
We can raise the allowed number of rejects above 50, up to a value of your choice (if 100 records are causing the issue, set it to 100), via the num_errors parameter mentioned in the error.
All the rejected records will then be skipped and the job will complete successfully.

******Permanent fix*************

1-> We have to identify the records that were rejected (the log above shows record 130098, for example).
2-> We need to check the length of the rejected column (NAME_USER in the log above; USER_NAME in the example below) in the rejected records; it will be greater than the column's defined size.
      Ex: the column USER_NAME is defined as below
                    
                      USER_NAME VARCHAR2(10);

     But the USER_NAME values in the rejected records are longer than 10 characters; because of this, the records are rejected while loading into the EXAMPLE table.
     We can check the length of USER_NAME for all the rejected records as below.
     Suppose we are sourcing the data from the SOURCE_EXAMPLE table and loading it into the EXAMPLE table.
      
     select LENGTH(USER_NAME),USER_ID from XYZ.SOURCE_EXAMPLE where LENGTH(USER_NAME)>10;
    
     The query above will return all the records having length > 10.

3-> If the data is valid, widen the column (as in Scenario 4) or trim the value during the load so the records fit into the EXAMPLE table; see the sketch below.
4-> If the data in the USER_NAME field is not valid, correct the data at the source end, then rerun the failed job.
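
If business rules allow truncation instead of widening, a hedged alternative for step 3 is to trim the value to the column size during the load. A minimal sketch, assuming the source and target tables from the example (other columns omitted for brevity):

insert into XYZ.EXAMPLE (USER_NAME, USER_ID)
select substr(S.USER_NAME, 1, 10), S.USER_ID
  from XYZ.SOURCE_EXAMPLE S;

SUBSTR caps the value at the 10 characters the target column allows; anything beyond that is silently dropped, so use this only when truncation is acceptable.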
Thanks,
Anilkumar.
Real Scenario 2:

SQL Error [
[IBM][CLI Driver] SQL30081N  A communication error has been detected. Communication protocol being used: "TCP/IP".  Communication API being used: "SOCKETS".

Solution: this is a transient communication error; it should be good after a restart.

Thanks,
Anilkumar. 

Scenario: ORA-00942 table or view does not exist

Real Scenario 1:

Q: ORA-00942: table or view does not exist

Answer: when we get this type of error in Informatica, we need to check the two scenarios below.

** First, we need to check whether the schema mentioned in front of the table name exists in the database.

*** If the schema exists, we need to check whether the table exists in that schema.

Example: suppose the query below failed when I ran it in the XYZ database.

select * from A.EXAMPLE;

I am getting the error below:

ORA-00942: table or view does not exist.

So first we need to check whether schema A exists in the XYZ DB.

Second, we need to check whether the EXAMPLE table exists in schema A in the XYZ database. The sketch below shows one way to run both checks.
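
A minimal sketch of both checks using the Oracle data dictionary (run as the connecting user):

select username from all_users where username = 'A';

select owner, table_name
  from all_tables
 where owner = 'A' and table_name = 'EXAMPLE';

Note that ORA-00942 is also raised when the table exists but the connecting user has no SELECT privilege on it, so if the second query returns no rows while the DBA confirms the table exists, the likely cause is a missing grant.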

Thanks,
Anilkumar.

Friday, November 8, 2013

CDC in DWH

Q: What is Change Data Capture (CDC) in Data Warehousing?

Answer: Change Data Capture identifies the data in the source system that has been modified since the last extraction.

Example:
 Suppose on 11/07/2013 you loaded your target table, so your last extraction date for the source table is 11/07/2013. For the next load (11/08/2013) you want to load the target table again. Instead of pulling all the data from the source table, we can extract only the data inserted, updated, or deleted in the source system after 11/07/2013 (the last extraction date). This saves time compared to reloading the entire target table.

CDC is especially useful when the source table contains millions of records.
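
A minimal sketch of the simplest timestamp-based CDC, assuming a hypothetical source table with a LAST_UPDATE_DATE audit column and a hypothetical ETL_CONTROL table that stores the last extraction date:

select *
  from SOURCE_TABLE S
 where S.LAST_UPDATE_DATE > (select LAST_EXTRACT_DATE
                               from ETL_CONTROL
                              where TABLE_NAME = 'SOURCE_TABLE');

After a successful load the job updates LAST_EXTRACT_DATE in ETL_CONTROL. Note that plain timestamp CDC cannot see physical deletes; those need soft-delete flags or a log-based CDC mechanism.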
Q: What is StatusCode in Stored Procedure Transformation in Informatica?

Answer: the status code provides error handling for the Informatica session. The stored procedure issues a status code that indicates whether it completed successfully.

Users cannot see this value; the Informatica server uses it to decide whether to continue running the session or to stop it.
Q: What is Constraint Based Loading?

Answer: suppose we define target tables with primary key and foreign key relationships. We can enable the option below in the Config Object tab:

Config Object tab: check Constraint Based Load Ordering.

If we enable this option, the Integration Service loads the rows in an order that honors the key constraints: rows for the parent (primary key) table are loaded before the corresponding rows of its child (foreign key) tables.
Q: What is meta Data?

Answer: in real-time data warehousing, designing data marts involves writing and storing a complex set of instructions: you need to know where to get the data (sources), how to change it (using transformations), and finally where to store it (targets). PowerMart and PowerCenter call this set of instructions metadata, and each piece of metadata can contain comments about it.

In simple words, metadata is "data about data".
Please refer to this link for more information: http://en.wikipedia.org/wiki/Metadata


Q: Why can't we use Sorted Input Option for Incremental Aggregation?

Answer: in incremental aggregation, the aggregate calculations are stored in historical caches on the server, and the data in those caches is not (and need not be) in sorted order. Even if you pre-sort the incoming records for a particular run, the data already in the historical cache is not sorted. For this reason, the Sorted Input option is not allowed for incremental aggregation.
Q: Target Load Order Plan in Informatica?

Ans: if your mapping has multiple Source Qualifiers connected to multiple targets, you can define the order in which the Informatica server loads the data into the targets.

The target load order plan determines the order in which data is extracted from the Source Qualifiers.

You can find this option under Mappings > Target Load Plan.
Q: How to Avoid Duplicate Records in the Target Table by Using a Lookup?

Answer: below are the ways to do that:

1. Connected Lookup: use a connected Lookup with the dynamic cache enabled, and check the NewLookupRow output port to decide which rows get inserted into the target table. The lookup decides whether each incoming record already exists in the cache.

2. Unconnected Lookup: create an unconnected Lookup, call it from an Expression transformation, and check the returned value (NULL/NOT NULL) to decide whether the incoming record already exists in the target table.
Q: Different Types of Tracing Levels in Informatica?

Ans: the tracing level controls the amount of information written to the session log file. We have four types of tracing levels in Informatica.

1. Normal: gives initialization and status information, counts of successful and target rows, and summary information about rows skipped due to transformation errors.

2. Terse: logs initialization information, error messages, and notifications of rejected data; it writes less detail than Normal.

3. Verbose Initialization: in addition to Normal tracing, it logs the location of the data, cache, and index files used by each transformation in the mapping, which makes tracing much easier.

4. Verbose Data: in addition to Verbose Initialization, it writes each and every record processed by the Informatica server, which can take much more space.
Q: What is DTM Process in Informatica?

Ans: the DTM (Data Transformation Manager) process creates threads to initialize the session, read, write, and transform data, and to handle pre- and post-session operations.
Q: Different Types of Lookup Caches?

Ans:
Static Cache: a static cache is a read-only cache. When you create a lookup, Informatica creates a static cache by default: it caches the lookup table and looks up values in the cache for the rows entering the Lookup transformation. The cache is not updated while the transformation processes rows, even when the lookup condition is true.

Dynamic Cache: unlike a static cache, a dynamic cache is updated when the lookup condition calls for it, so the Lookup transformation can insert new rows into the cache in step with the target table. This cache is very useful for avoiding duplicate rows in the target table.

Persistent Cache: we can save the lookup cache files and reuse them the next time the Informatica server processes the lookup.

Recache from Database: if the persistent cache is not synchronized with the lookup table, you can configure the Lookup transformation to rebuild the lookup cache.

Shared Cache: you can share the lookup cache between multiple transformations. An unnamed cache can be shared between transformations in the same mapping.

Thursday, November 7, 2013

Q: Different Types of Joiners in Informatica?

Ans: there are four join types in the Joiner transformation in Informatica.

1. Normal Join: gives only the matching rows from the master and detail tables.
2. Master Outer: all the rows from the detail table and the matching rows from the master table.
3. Detail Outer: all the rows from the master table and the matching rows from the detail table.
4. Full Outer: all the rows from both the master and detail tables (including non-matching rows).
Q: What is Data Driven?

Answer: Data Driven is a session property in Informatica that follows the instructions coded into the Update Strategy transformation; the transformation determines how each record is flagged for insert, update, delete, or reject.

Q: Is Data Driven (session property) mandatory with the Update Strategy transformation?


Q: What is the use of the Update Strategy transformation? Why is it necessary in Informatica mappings?


Q: What is Incremental Aggregation?