Oracle GoldenGate Best Practices 101

How to Solve Problems with Oracle GoldenGate

CONTENTS
SECTION 1-) GOLDENGATE INSTALLATION

• Goldengate Installation
• Extract Process
• GGSCI commands
• Parameter Files
• As400 Extract Example
• Oracle Extract Example
• Pump Example
• Replicat Process
• GGSCI commands
• Parameter files
• Replicat Example
• Manager Operation
• GGSCI commands
• Parameter File
• Manager Example
• Useful commands

SECTION 2-) USE OF GOLDENGATE

• Goldengate Daily Usage
• INFO
• STATS
• Using Extract
a. AS400 Example
i. Info
ii. Stats
iii. Adding/Deleting Tables
b. Oracle Example
i. Info
ii. Stats
iii. Adding/Deleting Tables
• Pump Usage
a. AS400 Example
i. Info
ii. Stats
b. Oracle Example
i. Info
ii. Stats

• Replicat Usage
a. Oracle Example
i. Info
ii. Stats
iii. Adding/Deleting Tables

SECTION 3-) GOLDENGATE ERROR SOLUTION

• Getting Errors in GoldenGate
• Error Examples and Solution Techniques

SECTION 4-) SCHEDULED SCRIPTS/DB PROCEDURES

• Startup Sh
• Swep Sh
• DB Procedure

SECTION 5-) CONTROL MECHANISMS

• Goldengate Log Monitoring
• Database controls (taking table count)

SECTION 1-) GOLDENGATE INSTALLATION

• GoldenGate Installation
For GoldenGate (GG) installation, the Media Pack for the relevant platform is first downloaded from http://edelivery.oracle.com under the title Oracle Middleware. The downloaded zip file is moved to the installation directory.
GG installation and management at the operating system level is done by a separate user. On the source side, this user must meet the following two conditions:
• It owns the directory where GG will be installed
• It has permission to read the table log files
On the target side, the owner of the GG application must either own the schema to be replicated in the target database, hold the privileges of the schema owners, or hold the privileges of the user of the data-generating application on the source side.
With the user in question, go to the directory where the installation will be done and open the zip file with the "unzip" command. In the resulting directory, the installation file is the one with the "tar" extension, and it is opened with the following command.
"tar -xvof <file name>"
The files are extracted into the current directory. In this directory, when the ggsci application is started from the command line, the "GGSCI>" prompt appears and all GG commands can be run in this environment. The steps after this point are done by following the installation document of the relevant platform.
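As a minimal sketch of this sequence (the zip and tar file names here are hypothetical; the actual names depend on the downloaded Media Pack):

cd /goldengate
unzip V38714-01.zip
tar -xvof fbo_ggs_Linux_x64_ora11g_64bit.tar
./ggsci

After the last command, the "GGSCI>" prompt should appear.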

• Extract Process
The extract process is the process run on the source side. The pump process is also an extract process and is located on the source systems. The preliminary work of enabling the reading of the source system's log files is covered in the installation document. After that work is completed, the extract process is created in two steps.
• GGSCI prompt commands required for the process to be up and running
• Creation of the prm file in which the parameters are entered

• GGSCI commands
These commands are platform independent and are applied unchanged. First, the extract operation is created.
"ADD EXTRACT <extract name>, TRANLOG, BEGIN NOW"
While creating the extract process here, the "TRANLOG, BEGIN NOW" clause ensures that the extract process takes the current physical position of the log file as its starting point when the command is run. A retrospective starting point can also be given in the format "…, BEGIN yyyy-mm-dd:hh:mi[:ss[.999999]]".
Then, the trail file that the extract process will use is introduced to the process.
"ADD EXTTRAIL ./dirdat/e1, EXTRACT <extract name>"
Here, the name of the trail file is limited to two characters; it is not possible to enter more. With this command, the extract process automatically names the files (e1000000, e1000001, …). The next step is to create the parameter file.
If a pump process is being created, the commands differ even though it is also an extract process, because in the pump process the source is a trail file, the target is another physical machine, and the network comes into play.
With the command below, the pump process is created and the trail file format and location to be used as a source are determined.
"ADD EXTRACT <pump name>, EXTTRAILSOURCE ./dirdat/e1"
The e1 given in this command must be the same as the file name in the extract process. The other step is defining the name and location of the file to be transferred to the target system.
"ADD RMTTRAIL ./dirdat/r1, EXTRACT <pump name>"
The r1 in this definition is the format of the trail file that will be created on the target system and is the source file for the replicat process.
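As a worked example of this and the previous sub-section together, using the E01ANA and P01ANA process names that appear later in this document:

ADD EXTRACT E01ANA, TRANLOG, BEGIN NOW
ADD EXTTRAIL ./dirdat/e1, EXTRACT E01ANA
ADD EXTRACT P01ANA, EXTTRAILSOURCE ./dirdat/e1
ADD RMTTRAIL ./dirdat/r1, EXTRACT P01ANA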

• Parameter File
The parameter file content varies according to the environment read on the source system. AS400 and Oracle examples are given here.

• AS400 Parameter File Example

EXTRACT E01ANA
-- The name of the extract to be created is entered here. Case is insensitive.

-- Database Connection Information
SOURCEDB PROD USERID JVLOGN, PASSWORD HHD75869CK
-- If connecting to an external DB, a DSN is defined and entered as SOURCEDB <dsn>. In Oracle, these two keywords are not needed.
-- Cache parameters
CACHEMGR CACHESIZE 64GB
-- If the cache value is low at the OS level, it can be set here on a session basis. A low cache value is reported as a warning in the report files.
-- This definition is optional and may be omitted.

-- Statistics and Report Information
STATOPTIONS RESETREPORTSTATS, REPORTFETCH
REPORTCOUNT EVERY 1 MINUTES, RATE
REPORT AT 00:00
REPORTROLLOVER ON SUNDAY
-- Commands to regularly produce statistics and report information.

-- Discard File Information
DISCARDFILE ./dirrpt/E01ANA.dsc, APPEND, MEGABYTES 100
-- Discarded data is written here; see the reference document, page 185, for possible discard situations.

-- Generated Trail File Information
EXTTRAIL ./dirdat/e1
-- The path and name of the exttrail file to be created. Files are created at this path by appending a 6-digit number, starting from zero, after e1. When a pump is used,
-- this file is the source of the pump. If there is no pump, it is defined with the RMTTRAIL keyword instead and sent directly to the target machine.

GETUPDATEBEFORES
-- Writes both the before and after images of updated records to the trail file for update statements.

NOCOMPRESSDELETES
-- Ensures that all columns, not just the PK columns, are written to the trail for delete statements.

-- Long Running Transaction Information
WARNLONGTRANS 1H, CHECKINTERVAL 10M
-- Allows long-running transactions to be written to ggserr.log.

-- Table Information
-- The definitions of the tables to be extracted. A YPROD.* definition can also be used when the whole schema is to be captured.
TABLE YPROD.FGSEGAC;
TABLE YPROD.FDBINA;

• Oracle Parameter File Example

EXTRACT E01SWE
-- The name of the extract to be created is entered here. Case is insensitive. This version is for Oracle.

-- Database Connection Information
USERID goldengate@swepprd, PASSWORD XXXXXXXX
-- When connecting to an Oracle database, the userid and password are used directly. With the following command, an encrypted password can be generated and used above.
-- GGSCI> encrypt password <password> blowfish encryptkey default
-- The value generated here is then used in the definition above as "USERID goldengate@swepprd, PASSWORD <encrypted password>, BLOWFISH, ENCRYPTKEY DEFAULT".

-- Statistics and Report Information
STATOPTIONS RESETREPORTSTATS, REPORTFETCH
REPORTCOUNT EVERY 1 MINUTES, RATE
REPORT AT 00:00
REPORTROLLOVER ON SUNDAY
-- Commands to regularly produce statistics and report information.

-- Discard File Information
DISCARDFILE ./dirrpt/E01SWE.dsc, APPEND, MEGABYTES 100
-- Discarded data is written here; see the reference document, page 185, for possible discard situations.

-- Generated Trail File Information
EXTTRAIL ./dirdat/e1
-- The path and name of the exttrail file to be created. Files are created at this path by appending a 6-digit number, starting from zero, after e1. When a pump is used,
-- this file is the source of the pump. If there is no pump, it is defined with the RMTTRAIL keyword instead and sent directly to the target machine.

GETUPDATEBEFORES
-- Writes both the before and after images of updated records to the trail file for update statements.

NOCOMPRESSDELETES
-- Ensures that all columns, not just the PK columns, are written to the trail for delete statements.

-- Long Running Transaction Information
WARNLONGTRANS 1H, CHECKINTERVAL 10M
-- Allows long-running transactions to be written to ggserr.log.

-- Table Information
TABLE SWEP.MUS_MUS_MUSTERI;
TABLE SWEP.TREE_DEFINITION;

• Pump Parameter File Example

EXTRACT P01ANA
-- The name of the pump process is set here; case is insensitive.
-- There is no environment-specific definition (such as AS400 or Oracle) in this process.

-- Statistics and Report Information
STATOPTIONS RESETREPORTSTATS, REPORTFETCH
REPORTCOUNT EVERY 1 MINUTES, RATE
REPORT AT 00:00
REPORTROLLOVER ON SUNDAY
-- Commands to regularly produce statistics and report information.

-- Discard File Information
DISCARDFILE ./dirrpt/P01ANA.dsc, APPEND, MEGABYTES 100
-- Discarded data is written here; see the reference document, page 185, for possible discard situations.

-- Remote Host Information
RMTHOST 10.1.4.96, MGRPORT 7809, COMPRESS
-- If the remote machine can be resolved by name rather than IP, the machine name can be given. The manager port on the target is set here.
-- COMPRESS enables the data to be compressed before being sent over the network.

RMTTRAIL ./dirdat/rp
-- The name of the trail file that will be created on the target machine and serve as the source of the replicat process is set here.

-- Passthru Mode
PASSTHRU
-- This parameter is used because the data is passed through without any transformation or mapping.

-- Table Information
TABLE YPROD.*; TABLE YKIBRIS.*;
TABLE YIST.*; TABLE YSAGDAT.*;
-- Here, the trail from the extract process is captured, and all the tables of the above schemas are set to be sent to the target trail.

• Replicat Process
The replicat process is the process on the target system that runs the commands coming from the source. Therefore, it must have the DML/DDL privileges required for the commands to be run on the target side for the related tables. Preferably, a separate GG user is created in the database and given the necessary roles and privileges. Apart from this, an optional but recommended preliminary step is the checkpoint table defined at the database level, described in the installation document. When this table is defined in the GLOBALS file, it becomes valid for all replicat processes created at the GG level.
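A minimal sketch of that preliminary step, assuming a GG database user named GOLDENGATE (the checkpoint table name is an assumption). In the GLOBALS file:

CHECKPOINTTABLE goldengate.checkpoint

Then, from GGSCI, after logging in to the target database:

DBLOGIN USERID goldengate, PASSWORD <password>
ADD CHECKPOINTTABLE goldengate.checkpoint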
There are two steps in the replicat process, as in the extract process.
• GGSCI prompt commands required for the process to be up and running
• Creation of the prm file in which the parameters are entered

• GGSCI commands
These commands are platform independent and are applied unchanged. First, the replicat process is created.
"ADD REPLICAT <replicat name>, EXTTRAIL ./dirdat/r1"
The file named r1 given in this command must have the same format as the file sent by the pump process.
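For example, with the R01PROD replicat name that appears later in this document (if a checkpoint table was defined in GLOBALS, it is picked up automatically):

ADD REPLICAT R01PROD, EXTTRAIL ./dirdat/r1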

• Parameter File
The example given is for Oracle, as the target system for the replicat process here is Oracle only.

• Replicat Sample File

REPLICAT R01PROD
-- The name of the replicat to be created is entered here. Case is insensitive. This file contains Oracle-specific definitions.

-- Database Connection Information
SETENV (NLS_LANG = "AMERICAN_AMERICA.UTF8")
-- The session language settings used for the DML/DDL to be run in the target environment are set here. Other session-based parameters can be set likewise.

USERID GOLDENGATE, PASSWORD AACAAAAAAAAAAAAAKAUCPDBFOAJAMBQDBGGAYHKCECHATCKJDJ, BLOWFISH, ENCRYPTKEY DEFAULT
-- Since we are in the Oracle environment, we connect directly to the local Oracle DB with the userid and password information.
-- Since a password encrypted with the Blowfish method is used here, the encryption method is also added to the connection information.
-- This password is created with the command "GGSCI> encrypt password <password> blowfish encryptkey default".

-- Statistics and Report Information
STATOPTIONS RESETREPORTSTATS, REPORTFETCH
REPORTCOUNT EVERY 1 MINUTES, RATE
REPORT AT 00:00
REPORTROLLOVER ON SUNDAY
-- Commands to regularly produce statistics and report information.

-- Discard File Information
DISCARDFILE ./dirrpt/R01PROD.dsc, APPEND, MEGABYTES 100
-- Discarded data is written here; see the reference document, page 185, for possible discard situations.

-- Table definition information
SOURCEDEFS ./dirprm/source.def
-- This definition file, which must be created for the AS400 environment, is needed in the Oracle environment only when the table definitions differ.
-- As this environment is fed from AS400 tables, it was created to include all tables by default.
-- This file is created with the command "defgen paramfile ./dirprm/defgen.prm" on the source side. The defgen.prm file here is the parameter file
-- containing our definitions and the names of the tables to be captured. The relevant file can be found in the attachment of the e-mail.
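-- A minimal sketch of such a defgen.prm file, reusing the connection information from the AS400 extract example above (the table list is illustrative):
--   DEFSFILE ./dirprm/source.def
--   SOURCEDB PROD USERID JVLOGN, PASSWORD HHD75869CK
--   TABLE YPROD.FGSEGAC;
--   TABLE YPROD.FDBINA;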

--GROUPTRANSOPS 1000
-- Sets how many DML operations are grouped into a single transaction run by the replicat.

--MAXTRANSOPS 1
-- The parameter for splitting very large transactions into smaller groups.

--HANDLECOLLISIONS
-- Ensures that problems with duplicate or missing records are not raised as errors but bypassed.
-- With this setting, the replicat is run for a while after the initial load. However, it should not be used in the final configuration.

INSERTMISSINGUPDATES
-- If the record is not found in the target during an update operation, the entire record is inserted as a new record.

-- Performance parameters
--BATCHSQL
-- Reorders similar SQL statements back to back, without breaking data consistency, so that they run faster.

--INSERTAPPEND
-- Allows writing to the table in append mode. See "APPEND hint in SQL insert statements".

-- Table Information
MAP YPROD.FGPOLMAS, TARGET GGATE.FGPOLMAS, colmap ( usedefaults, sys_date = @GETENV ("GGHEADER", "COMMITTIMESTAMP"));
MAP YPROD.FGMUSTER, TARGET GGATE.FGMUSTER, colmap ( usedefaults, sys_date = @GETENV ("GGHEADER", "COMMITTIMESTAMP"));
-- Above, which source table's data will be written to which target table is entered.
-- usedefaults ensures that all columns with the same column names are automatically mapped to each other.
-- The sys_date column is a column we added to the target table. It is set so that the transaction commit timestamp from the source DB is written to this column.

• Manager Process
The Manager process provides the control and management of all processes, purges the trail files, observes and reports on the entire system, determines the port over which trail files are sent or received, and covers other administrative needs. Stopping it is not preferred. The most correct approach is to write scripts that start the manager process automatically after possible server restarts. There are also two steps for creating the Manager process.
a. GGSCI commands
b. Creating the parameter file

a. GGSCI Commands
The command that creates the Manager process is simply the start command. The following command directly creates and starts the process.
"START MGR"
If settings are changed, the following command is run so that the new settings take effect; the process does not need to be stop-started.
"REFRESH MGR"

b. Parameter File
The Manager parameter file does not change according to the environment and is generally used in the same format. As long as there is no port change between systems, it can be carried to newly installed systems by copy-paste.

I. Manager Parameter Example

-- Manager Port No.
PORT 7809
DYNAMICPORTLIST 7810-7820
-- The Manager process tries to open port 7809 by default; if that port cannot be used, it uses a port in the range given as the dynamic port list.

-- Deletion Interval of Old Trail Files
PURGEOLDEXTRACTS ./dirdat/*, USECHECKPOINTS
-- The path of the trail files to be deleted is specified here. The USECHECKPOINTS option ensures files are purged only after all processes are confirmed to have finished with the trail file in question.

-- Autorestart Parameters
AUTORESTART ER *, RETRIES 3, WAITMINUTES 5, RESETMINUTES 30
-- The Manager process uses the AUTORESTART command to regularly check that the extract and replicat processes are up. Here, any extract or replicat process that fails to start is retried 3 times, with 5 minutes between attempts. The retry cycle is then terminated, and after 30 minutes a new retry cycle begins.

-- Lag Control Parameters
LAGCRITICALMINUTES 3
LAGINFOMINUTES 2
LAGREPORTMINUTES 1
-- These commands allow lag information to be written to the ggserr.log file. In the example above, the manager process checks the lag information every 1 minute. If lag reaches 2 minutes, it is logged with "INFO" severity. As soon as lag reaches 3 minutes, it is logged with "WARNING" severity.

• Useful Commands

• To start the Extract process from a certain date:
"alter extract <extract name>, begin yyyy-mm-dd:hh:mi[:ss.999999]"
• For a detailed report of the Extract process:
"send extract <extract name>, report"
• For the DML statistics produced by the Extract process:
"stats <extract name>"
• To start an extract from where another, aborted extract left off:
"info <extract name>, showch" (retrieve the recovery checkpoint information, extseqno and extrba, here)
"alter extract <extract name>, extseqno xxx, extrba yyy"
• To delete an Extract:
"delete extract <extract name>"
• To activate changed Manager settings:
"refresh mgr"
• To start a replicat from a particular trail file point:
"alter replicat <replicat name>, extseqno xxx, extrba yyy"
• To delete a Replicat:
1. dblogin userid xxx, password yyy
2. delete replicat <replicat name>
• To look at the information in the trail file:
1. logdump (in the GoldenGate folder, this command is run instead of ggsci)
2. open <trail file>
3. detail on
4. detail data
5. ghdr on
6. n
• To return to the beginning of the file in logdump:
"pos 0"

SECTION 2-) USE OF GOLDENGATE

• Goldengate Daily Usage

The requirements for daily use of GoldenGate are approximately the same for both extract and replicat. First, connect with a utility such as Putty to the system where the tables are read or written, and check the status of the jobs in ggsci under the goldengate folder. The jobs that are in working condition are examined with the commands 'STATS <process name>' and 'INFO <process name>'.
The actions to be taken according to the status are explained in the following sub-headings.

• INFO
With the INFO command, information about the relevant extract, pump, or replicat is listed: the date it was last started, its status, the last read checkpoint, which file was last read, and the position within that file.
To view all processes at the same time, the INFO ALL command is entered under ggsci. The status, checkpoints, and similar information of all extracts, pumps, and replicats are listed one under the other.

An extract must be consistent with its respective pump and replicat. If the extract, pump, and replicat are working properly, their read checkpoints should be the same or very close (except during bulk transactions). Examples of these are shown in the following sub-headings.

• STATS
The STATS command lists the hourly and daily counts, and the counts since the last start time, of the insert, update, and delete operations on the tables read or processed. It is a statistic of the transactions made on the tables.
The stats command only returns results if the extract/pump/replicat is running.
As with the info command, an extract must be consistent with its pump and replicat. Examples of these are shown in the following sub-headings.

• Using Extract

For daily use of extracts, after connecting to the source system with a utility such as Putty or Moca by entering a user name and password, the status of the extract should be checked by going under the goldengate folder, entering ggsci, and using the commands shown in the first sub-heading. However many different databases are captured from, all of them should be checked. In addition, it is not recommended to keep an extract stopped for more than 72 hours. In such cases, if losing the initial load or the system dates is not acceptable, the situation can be corrected in the database with the help of test tables.

• AS400 EXAMPLE
If the AS400 database is the source system, the following credentials are used when connecting to the system with utilities such as Putty or Moca:

Username: GGUSER
Password: GGUSER1

If the connection is established with the Moca program, after entering the user name, move to the password line with the tab key, enter the password, and press Enter. After connecting to the system, the 'call qp2term' command is written on the page that opens, and a new page opens with Enter. From this page, the goldengate folder is reached: type the 'cd zz' command and press Enter. This command is case sensitive and must be written in lowercase. To enter ggsci, type 'ggsci' and press Enter.
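A sketch of that session (the prompt text is illustrative):

call qp2term
cd zz
ggsci
GGSCI (PROD.anadolusigorta.com.tr) 1>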

• INFO
The output of the info all command for all extracts and pumps is as follows.

 GGSCI (PROD.anadolusigorta.com.tr) 5> info all

Program Status Group Lag at Chkpt Time Since Chkpt

MANAGER RUNNING
EXTRACT RUNNING E01ANA 00:00:00 00:00:06
EXTRACT RUNNING E02ANA 00:00:00 00:00:01
EXTRACT RUNNING E03ANA 00:00:00 00:00:05
EXTRACT RUNNING E04ANA 00:00:00 00:00:09
EXTRACT RUNNING P01ANA 00:00:00 00:00:03
EXTRACT RUNNING P02ANA 00:00:00 00:00:05
EXTRACT RUNNING P03ANA 00:00:00 00:00:03
EXTRACT RUNNING P04ANA 00:00:00 00:00:02

The names of the extract and pump processes are listed under Group here. Time Since Chkpt represents the time since the last checkpoint, or how long ago the extract or pump errored or stopped. Lag at Chkpt is the latency of the transactions still to be processed up to the current time after the extract or pump is started; the actual time may be shorter or longer than shown, depending on how intense the transaction volume is. As can be seen in this example, groups starting with E are extracts and those starting with P are the pump processes of the relevant extracts.
The first thing to check is always MANAGER. Manager must be running.

To check the status of a particular extract, type INFO E01ANA and press enter.

GGSCI (PROD.anadolusigorta.com.tr) 2> info e01ana

EXTRACT E01ANA Last Started 2013-11-04 08:57 Status RUNNING
Checkpoint Lag 00:00:00 (updated 00:00:03 ago)
VAM Read Checkpoint 2013-11-28 13:31:22.355952

Here, the status of the relevant extract, the date it was last started, the last read checkpoint, and the checkpoint lag time showing how up to date the extract is are listed.
In this example, the extract is up to date because the lag time is 00:00:00.

• STATS
With the help of the stats command, the tables in an extract and, if there are transactions on these tables, the breakdown of transaction counts for all of them are listed. As an example:

GGSCI (PROD.anadolusigorta.com.tr) 4> stats e01ana

Sending STATS request to EXTRACT E01ANA …
Start of Statistics at 2013-11-28 14:19:59.

Output to ./dirdat/e1:

Extracting from YPROD.FGPOLMAS to YPROD.FGPOLMAS:

*** Total statistics since 2013-11-04 08:57:52 ***
Total inserts 358704.00
Total updates 304504.00
Total before 304504.00
Total deletes 4892.00
Total discards 0.00
Total operations 668100.00

*** Daily statistics since 2013-11-28 00:00:00 ***
Total inserts 10336.00
Total updates 10700.00
Total before 10700.00
Total deletes 83.00
Total discards 0.00
Total operations 21119.00

*** Hourly statistics since 2013-11-28 14:00:00 ***
Total inserts 580.00
Total updates 521.00
Total before 521.00
Total deletes 5.00
Total discards 0.00
Total operations 1106.00

*** Latest statistics since 2013-11-04 08:57:52 ***
Total inserts 358704.00
Total updates 304504.00
Total before 304504.00
Total deletes 4892.00
Total discards 0.00
Total operations 668100.00

As can be seen here, from which table to which table the extract was captured, and all the transactions that took place daily, hourly, and since the last start of the extract, are listed in this output.

• ADD/DELETE TABLES
When adding a new table to an extract from the AS400 database, the first thing to do is to see which of the before/after/both options the journal image of this table has. The required setting is *BOTH. To check this, it is necessary to look at the properties of the table.
If the connection is established with Moca, enter 'dspfd <library>/<table name>' and press Enter on the page that opens after entering the username and password. On the page that opens, check which option is written in the Journal images section.

selection or command
===> dspfd yprod/fgpolmas

File is currently journaled . . . . . . . . : yes
Current or last journal . . . . . . . . . . : LINK07
library . . . . . . . . . . . . . . . . . : SHJRNLIB
Journal images. . . . . . . . . . . . . . : IMAGES *BOTH
Journal entries to be omitted . . . . . . . : OMTJRNE *OPNCLO

If the journal image option is not *BOTH, the system admin should be asked to set the journal image of the relevant table to *BOTH.
Apart from this, another point to consider is that tables with the same journal and library must be included in the same extract.

It is necessary to create the source definition of the table to be added. What needs to be done for this:
• In AS400, the new tables are added to defgen.prm under the dirprm directory in the zz folder.
• In AS400, the "defgen paramfile dirprm/defgen.prm" command is run under the zz directory in the AIX shell.
• This command produces a file named source.def in the same directory (dirprm).
• This file is taken to the local machine and any places that need correction are corrected. For example, if a column name starts with the letter Ö (ÖRTP), it is written with a # character (#RTP) in the source.def file; this needs to be fixed locally.
• The locally corrected source.def file is placed under the dirprm directory on the target side. It is recommended to back up before performing these operations.

After these steps are completed, the table can be added to the prm file of the relevant extract. The prm file is taken to the local machine and the table information is added in the form 'TABLE YPROD.FGAYKAPA;'. The file is then put back under the dirprm directory, and the process is completed by stop-starting the relevant extract.

To check whether trandata is enabled for the newly added table, a database connection is made with dblogin under ggsci. The connection information is the database connection information contained in the prm file of the extract.

GGSCI (PROD.anadolusigorta.com.tr) 3> dblogin SOURCEDB PROD USERID JVLOGN, PASSWORD HHD75869CK

2013-11-28 17:07:48 INFO OGG-03036 Database character set identified as UTF-8. Locale: tr_TR.

2013-11-28 17:07:48 INFO OGG-03037 Session character set identified as UTF-8.
Successfully logged into database.

GGSCI (PROD.anadolusigorta.com.tr) 4> info trandata yprod/fgpolmas

Journaling for object YPROD/FGPOLMAS is enabled to journal @JRNLIB/LINK07.

As can be seen here, trandata is enabled for this table, so it is suitable for capture.

For table deletion, take the prm file of the relevant extract to the local machine and comment out the TABLE expression of the table to be removed by prefixing it with two dashes: 'TABLE <schema>.<table>;' becomes '--TABLE <schema>.<table>;'. The file is saved and uploaded back to the dirprm directory. After that, the process is completed by stop-starting the extract.

• ORACLE EXAMPLE
In Oracle-based databases, for daily extract use, the connection to the relevant database server is first established with a utility such as Putty and the necessary user name and password are entered. As an example, the RH+ database is used.

login as: gguser
gguser@askaccsora04's password: GoldPass

Once these are entered, the system connection is established. Next, go under the goldengate directory and then into ggsci.

gguser:askaccsora04:/home/gguser:>cd /goldengate
gguser:askaccsora04:/goldengate:>ggsci

After entering ggsci, examination of the state of the extracts and pumps can begin.

• INFO
After entering ggsci, the general status of all running extracts and pumps is listed with the following command.

GGSCI (askaccsora04) 1> info all

Program Status Group Lag at Chkpt Time Since Chkpt

MANAGER RUNNING
EXTRACT RUNNING E01RH 00:00:01 00:00:04
EXTRACT RUNNING P01RH 00:00:00 00:00:06

GGSCI (askaccsora04) 5> info e01rh

EXTRACT E01RH Last Started 2013-12-02 11:48 Status RUNNING
Checkpoint Lag 00:00:01 (updated 00:00:03 ago)
Log Read Checkpoint Oracle Redo Logs
2013-12-03 10:21:17 Thread 1, Seqno 4017, RBA 380355600
SCN 13.63040701 (55897615549)
Log Read Checkpoint Oracle Redo Logs
2013-12-03 10:21:17 Thread 2, Seqno 2150, RBA 560060416
SCN 13.63041298 (55897616146)

In this example, information such as the extract's name, status, last start date, and lag status is listed. In addition, since the source database writes two redo threads, there are two redo log checkpoints, and it can be seen that the two are consistent.

• STATS
To view the stats of the relevant extract, it is necessary to be under ggsci, as with info. The stats command, which lists the transactions coming to the tables in the extract, is checked per extract; there is no overall view.

GGSCI (askaccsora04) 7> stats e01rh

Sending STATS request to EXTRACT E01RH …

Start of Statistics at 2013-12-03 11:03:54.

Output to ./dirdat/e1:

Extracting from CCS_COMPONENTS.SUPPLIER_SRVC to CCS_COMPONENTS.SUPPLIER_SRVC:

*** Total statistics since 2013-12-02 11:57:03 ***
Total inserts 3.00
Total updates 2742.00
Total deletes 0.00
Total discards 0.00
Total operations 2745.00

*** Daily statistics since 2013-12-03 00:00:00 ***
Total inserts 1.00
Total updates 269.00
Total deletes 0.00
Total discards 0.00
Total operations 270.00

*** Hourly statistics since 2013-12-03 11:00:00 ***
Total inserts 0.00
Total updates 6.00
Total deletes 0.00
Total discards 0.00
Total operations 6.00

*** Latest statistics since 2013-12-02 11:57:03 ***
Total inserts 3.00
Total updates 2742.00
Total deletes 0.00
Total discards 0.00
Total operations 2745.00

Only one table's statistics are listed here. Normally, the statistics of all tables captured by the extract that have received transactions are listed in this form. If the extract captures many tables, it lists as many as it can and then truncates the output with the message:

Statistics reply buffer exceeded. Results truncated...

• ADD/DELETE TABLES

When adding a new table to an extract from an Oracle-based database, the prm file of the extract is taken from the dirprm directory to the local machine, the table is added to the file in the form 'TABLE CCS_COMPONENTS.LINE;', and the file is put back under the dirprm directory.
In addition, trandata must be added for the new table. After connecting to the source system with Putty:

Go under the goldengate directory and enter ggsci:

gguser:askaccsora04:/home/gguser:>cd /goldengate
gguser:askaccsora04:/goldengate:>ggsci

In order to add trandata, it is necessary to connect to the system with dblogin.

GGSCI (askaccsora04) 1> dblogin USERID goldengate, PASSWORD AACAAAAAAAAAAAKAOAGHVEYBSJZHMBBIAJHEEDMHDBAAJGAD BLOWFISH ENCRYPTKEY DEFAULT
Successfully logged into database.

The username and password used here (USERID and what follows) are taken from the database connection information section of the relevant extract's prm file.

Trandata for the table is added with the following command.

GGSCI (askaccsora04) 2> add trandata ccs_components.line

To check, while still connected with dblogin:

GGSCI (askaccsora04) 2> info trandata ccs_components.line
Logging of supplemental redo log data is enabled for table CCS_COMPONENTS.LINE.

Columns supplementally logged for table CCS_COMPONENTS.LINE: ALL.

As can be seen from here, trandata for the relevant table is enabled.

After these steps are completed, the relevant extract is stop-started and the table addition is complete.
For table deletion, take the prm file of the relevant extract to the local machine and comment out the TABLE expression of the table to be removed by prefixing it with two dashes: 'TABLE <schema>.<table>;' becomes '--TABLE <schema>.<table>;'. The file is saved and uploaded back to the dirprm directory. After that, the process is completed by stop-starting the extract.

• Pump Usage

For the daily use of pumps, after connecting to the source system with a utility such as Putty or Moca by entering the user name and password, the status of the pump should be checked by going under the goldengate folder, entering ggsci, and using the commands shown in the first sub-heading. However many different databases are captured from, all of them should be checked.

• AS400 EXAMPLE
After connecting to the AS400 database, daily checks can be started after entering ggsci, just as with extracts. The first thing to consider is the status of the pumps. Things to pay attention to: Is the pump working? Is it lagging? Is it reading the correct file? And so on. For this, look at the info and stats of the pump. As mentioned before, a pump must be consistent with its extract and replicat; pay particular attention to this consistency while performing these checks.

• INFO
For the info of a pump process, again under ggsci:

GGSCI (askaccsora04) 9> info p01rh

EXTRACT P01RH Last Started 2013-12-02 11:47 Status RUNNING
Checkpoint Lag 00:00:00 (updated 00:00:08 ago)
Log Read Checkpoint File ./dirdat/e1000497
2013-12-03 11:20:45.000000 RBA 70937786

As can be seen here, the name of the relevant pump, its status, the date it was last started, how recently its checkpoint was updated, the last read checkpoint, which extract trail file the pump is reading, and the RBA position within that file are listed.

• STATS

Since each pump must be consistent with its respective extract and replicat, the table statistics must also be consistent. Therefore, when looking at the stats of a pump, they should be checked against the stats of the relevant extract and replicat.
Under ggsci, to look at the stats of a pump:

GGSCI (askaccsora04) 10> stats p01rh
Sending STATS request to EXTRACT P01RH …

Start of Statistics at 2013-12-03 11:41:15.

Output to ./dirdat/rr:

Extracting from CCS_COMPONENTS.SUPPLIER_SRVC to CCS_COMPONENTS.SUPPLIER_SRVC:

*** Total statistics since 2013-12-02 11:57:04 ***
Total inserts 3.00
Total updates 2824.00
Total deletes 0.00
Total discards 0.00
Total operations 2827.00

*** Daily statistics since 2013-12-03 00:00:00 ***
Total inserts 1.00
Total updates 351.00
Total deletes 0.00
Total discards 0.00
Total operations 352.00

*** Hourly statistics since 2013-12-03 11:00:00 ***
Total inserts 0.00
Total updates 88.00
Total deletes 0.00
Total discards 0.00
Total operations 88.00

*** Latest statistics since 2013-12-02 11:57:04 ***
Total inserts 3.00
Total updates 2824.00
Total deletes 0.00
Total discards 0.00
Total operations 2827.00

As can be seen here, the daily and hourly transaction counts on the tables, and the totals since the last start time, are listed.

• Replicat Usage

For the daily use of replicats, the system is connected to by entering the user name and password by means of a utility such as Putty. After connecting to the system, go under the goldengate directory and enter ggsci. Under ggsci, the daily status of replicats is checked with the help of the commands described in the first heading.

• ORACLE EXAMPLE

For the use of replicats in the Oracle database, the system connection is made, the user name and password are entered, and the connection is completed.

login as: gguser
gguser@askaodsora01's password: AsGold

After connecting to the system, reach under the goldengate directory and go under ggsci.

gguser:askaodsora01:/home/gguser:>cd /goldengate
gguser:askaodsora01:/goldengate:>ggsci

The status of replicats can be examined under ggsci. The points to consider when examining replicats are the status of the replicat, its general information, its statistics, and whether they are consistent with the relevant extract and pump. In cases where there are many transactions, the replicat catches up in time even if it falls behind.

• INFO

In Replicat, the operations are the same as in extract and pump. The general status of all replicats is listed with the info all command.

GGSCI (askaodsora01) 263> info all

Program Status Group Lag at Chkpt Time Since Chkpt

MANAGER RUNNING
REPLICAT RUNNING R01ANA 00:00:00 00:00:00
REPLICAT RUNNING R01CRM 00:00:00 00:00:04
REPLICAT RUNNING R01HKK 00:00:00 00:00:09
REPLICAT RUNNING R01RH 00:00:00 00:00:02
REPLICAT RUNNING R01SWE 00:00:07 00:00:00
REPLICAT RUNNING R02ANA 00:00:00 00:00:01

To check the status of a particular replicat, it can be examined with the info and stats commands described in the first chapter.

GGSCI (askaodsora01) 264> info r01ana

REPLICAT R01ANA Last Started 2013-12-03 10:18 Status RUNNING
Checkpoint Lag 00:00:00 (updated 00:00:01 ago)
Log Read Checkpoint File ./dirdat/r1000941
2013-12-03 16:35:58.687614 RBA 47195104

As can be seen here, the date the replicat was last started, its status, its lag status, and which file and which RBA position are being processed are displayed.
• STATS

With the stats command, the daily and hourly transaction counts and the totals for the replicat are listed. However many tables are mapped in the replicat, the statistics of all of them are shown.
Sometimes the entire transaction breakdown of all tables may not fit in the output; in that case it is truncated with the message: Statistics reply buffer exceeded. Results truncated...

GGSCI (askaodsora01) 265> stats r01ana

Sending STATS request to REPLICAT R01ANA …

Start of Statistics at 2013-12-03 16:48:10.

Replicating from YPROD.FDBINA to YPROD.FDBINA:

*** Total statistics since 2013-12-03 10:18:07 ***
Total inserts 7855.00
Total updates 0.00
Total deletes 0.00
Total discards 0.00
Total operations 7855.00

*** Daily statistics since 2013-12-03 10:18:07 ***
Total inserts 7855.00
Total updates 0.00
Total deletes 0.00
Total discards 0.00
Total operations 7855.00

*** Hourly statistics since 2013-12-03 16:00:00 ***
Total inserts 317.00
Total updates 0.00
Total deletes 0.00
Total discards 0.00
Total operations 317.00

*** Latest statistics since 2013-12-03 10:18:07 ***
Total inserts 7855.00
Total updates 0.00
Total deletes 0.00
Total discards 0.00
Total operations 7855.00
• ADD/DELETE TABLES

The new table to be added to the replicat must first be added to the relevant extract, and its schema must be present in the relevant pump. If the schema already exists in the pump, there is no need to rewrite it.
When adding a new table, after adding the table to the relevant extract, the prm file of the replicat is taken to the local machine and the desired table is mapped in the following form and saved:

MAP <schema name>.<table name>, TARGET <schema name>.<table name>, colmap( usedefaults, sys_date = @GETENV ("GGHEADER", "COMMITTIMESTAMP"));

Then the file is restored from the local machine to the system, and the replicat is restarted with stop-start. In this mapping, the sys_date column is on the target side, and the data that should be written to this column is defined. This is where the data definitions for new columns added on the target side are made.

Table deletion is done by placing two dashes (--) in front of the expression:

--MAP <schema name>.<table name>, TARGET <schema name>.<table name>, colmap( usedefaults, sys_date = @GETENV ("GGHEADER", "COMMITTIMESTAMP"));

All of these operations must be done by taking the prm file to the local machine and then uploading it back to the system.

SECTION 3-) GOLDENGATE ERROR SOLUTION

• Getting Errors in GoldenGate
Even if GoldenGate is generally in good working order, it is likely to produce errors sometimes. If one or more of the running jobs show the ABENDED status instead of RUNNING, an error has occurred. Below is an example from a replicat.

Program Status Group Lag at Chkpt Time Since Chkpt

MANAGER RUNNING
REPLICAT ABENDED R01ANA 00:00:03 02:24:36
REPLICAT RUNNING R01CRM 00:00:00 00:00:08
REPLICAT RUNNING R01HKK 00:00:00 00:00:07
REPLICAT RUNNING R01RH 00:00:00 00:00:03
REPLICAT RUNNING R01SWE 00:00:00 00:00:07
REPLICAT RUNNING R02ANA 00:00:00 00:00:00
REPLICAT RUNNING R02SWE 00:00:09 00:00:00
REPLICAT RUNNING R03ANA 00:00:00 00:00:03
As seen in the example above, job R01ANA has abended. Here, Time Since Chkpt indicates how long ago the error occurred.
When investigating the causes of errors occurring in extracts or in the replicats that process their files, and the appropriate solutions, the first thing to do, after establishing the system connection with a program such as CoreFTP or WinSCP, is to check the report (rpt) file of the relevant extract or replicat in the dirrpt directory under the GG folder. Then, the record containing the last 'ERROR' statement in the relevant discard (dsc) file under the same directory is found.
According to the error written in the report or dsc file, it is resolved with the error resolution methods described in the following sections.
These files can also be viewed through Putty, but using the utilities is recommended because the files are large and searching for the error in them takes a long time.

• Error Examples and Solution Techniques

Below are some errors encountered in GoldenGate and examples of how they should be resolved.

No Data Found: This error is usually encountered with deletes and updates. It is not encountered in normal operation; it occurs only when some data escaped during the initial load. It is a replicat error.

---- OCI Error ORA-01403: no data found, SQL
Aborting transaction on ./dirdat/rr beginning at seqno 158 rba 19824121
error at seqno 158 rba 19835886
Problem replicating CCS_COMPONENTS.FINANCIAL_TX to CCS_COMPONENTS.FINANCIAL_TX
Record not found
Mapping problem with delete record (target format)…
*
N_FIN_TX_ID =

000000: 00 37 00 30 00 42 00 41 00 46 00 31 00 31 00 35 |.7.0.B.A.F.1.1.5|

000010: 00 44 00 38 00 39 00 32 00 42 00 31 00 39 00 43 |.D.8.9.2.B.1.9.C|

N_FIN_DTL_ID =

000000: 00 33 00 35 00 39 00 36 00 39 00 35 00 43 00 31 |.3.5.9.6.9.5.C.1|

000010: 00 36 00 44 00 30 00 43 00 46 00 31 00 35 00 33 |.6.D.0.C.F.1.5.3|

After the last record containing 'ERROR' is found in the discard file, the delete or update that failed in the relevant table can be identified. The solution method for this is to manually insert the relevant data into the database via the PKs, that is, only the PK columns or the NOT NULL columns, if any. The discard file writes the data that should be in all the columns of the table.
The row is inserted manually by connecting to the ODS database with Toad. The replicat is run again.
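A hedged sketch of such a manual fix, with the PK values decoded from the discard dump above (verify the column list and values against the actual table and discard record before running):

INSERT INTO CCS_COMPONENTS.FINANCIAL_TX (N_FIN_TX_ID, N_FIN_DTL_ID)
VALUES ('70BAF115D892B19C', '359695C16D0CF153');
COMMIT;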

Unique Constraint Violated: A primary key error. This error is received if the replicat tries to re-insert a record that already exists in the table. Under normal circumstances, this error is not received. It is caused by data missed or duplicated during the initial load arriving both with the initial load and with the extract.

----- OCI Error ORA-00001: unique constraint (CCS_COMPONENTS.PKFN_RELATION) violated (status = 1). INSERT INTO "CCS_COMPONENTS"."FN_RELATION" ("N_FILE_NTE_ID","N_CLAIM_ID","N_LINE_ID","D_CREATE_TS","D_LAST_UPDT_TS","N_LAST_MOD","N_SYSTEM_MOD","C_RCD_DEL","N_LAST_UPD", "SYS_DATE") VALUES (:a0,:a1,:a2,:a3,:a4,:a5,:a6,:a7,:a8,:a9,:a10)
Aborting transaction on ./dirdat/rr beginning at seqno 150 rba 62403384
error at seqno 150 rba 62404447
Problem replicating CCS_COMPONENTS.FN_RELATION to CCS_COMPONENTS.FN_RELATION
Mapping problem with insert record (target format)…
*
N_FILE_NTE_ID =
000000: 00 41 00 32 00 38 00 32 00 42 00 35 00 33 00 46 |.A.2.8.2.B.5.3.F|
000010: 00 31 00 43 00 46 00 43 00 46 00 37 00 43 00 30 |.1.C.F.C.F.7.C.0|

N_CLAIM_ID =
000000: 00 35 00 46 00 35 00 36 00 42 00 45 00 32 00 45 |.5.F.5.6.B.E.2.E|
000010: 00 46 00 36 00 35 00 44 00 39 00 46 00 41 00 38 |.F.6.5.D.9.F.A.8|

Since this error is received because the replicat tries to write an already existing record, this record is manually deleted from the table by connecting to the database. The replicat is run again, and this time it writes the data in.
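A hedged sketch of that manual cleanup, with key values decoded from the discard dump above (verify the full unique key against the actual constraint before deleting):

DELETE FROM CCS_COMPONENTS.FN_RELATION
WHERE N_FILE_NTE_ID = 'A282B53F1CFCF7C0'
AND N_CLAIM_ID = '5F56BE2EF65D9FA8';
COMMIT;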

Cannot Insert Null Into: There can be two reasons for this error. The first is a NOT NULL constraint on the column in the erroring table; in this case, if the column is not a PK, the issue can be resolved by disabling the constraint. The second may be that the incoming data cannot be read properly.

----- OCI Error ORA-01400: cannot insert NULL into ("CCS_COMPONENTS"."SUPPLIER_LOCATION"."N_SUPP_LOC_ID") (status = 140). INSERT INTO "CCS_COMPONENTS"."SUPPLIER_LOCATION" ("N_SUPP_LOC_ID","N_CLIENT_ID","D_CREATE_TS","D_LAST_UPDT_TS","N_LAST_MOD","N_SYSTEM_MOD","C_RCD_LOC_ID","N_LAST_UP_TRY","N_SYSTEM_MOD","N_RCD_SUPPY_NCD","NC_LAST_UP_TRY", "C_WORK_CITY_CD","C_WORK_TOWN_CD","C_CUR_COUNTRY_CD","C_CUR_CITY_CD","C_CUR_TOWN_CD","SYS_DATE") VALUES (:a0,:a1,:a2,:a3,:a4,:a5,:a6,:a7, :a8,:a9,:a10,:a11,:a12,:a13,:a14,:a15)
Aborting transaction on ./dirdat/rr beginning at seqno 110 rba 12020405
error at seqno 110 rba 12020728
Problem replicating CCS_COMPONENTS.SUPPLIER_LOCATION to CCS_COMPONENTS.SUPPLIER_LOCATION
Mapping problem with compressed key update record (target format)…
*
N_SUPP_LOC_ID =

N_SUPP_LOC_ID =

N_CLIENT_ID = NULL
D_CREATE_TS = NULL

In such cases, it is investigated why the column cannot be read. It is usually caused by a mismatch between the character set written in the prm file and the character set of the data written in the column. In this case, the necessary character set definition is added for this table in the replicat prm file. There may be cases where the language defined in the prm files is not sufficient: (SETENV (NLS_LANG = "AMERICAN_AMERICA.UTF8")), (SETENV (NLS_LANG = "AMERICAN_AMERICA.AL32UTF8")), etc.

Bad Column Index: This means that a new column has been added to the erroring table on the source side. When this new column is extracted and the replicat tries to write it, this error is raised because the column does not exist on the target side. Therefore, the column newly added to the source table should be added to the target side in the same way. After this addition is made, the replicat is run again.

---- 2013-11-22 11:23:26 ERROR OGG-01161 Bad column index (44) specified for table CCS_COMPONENTS.SUPPLIER_SRVC, max columns = 44

As seen here, a bad column index error was received for the CCS_COMPONENTS.SUPPLIER_SRVC table. Looking at the source side, the newly added column or columns were added to the target side, and the problem was solved.

Char/Varchar2 Confusion: Character columns in tables in the AS400 database are defined as CHAR, while in ODS they are defined as VARCHAR2. When these columns are PKs or constraints of the table, a mismatch occurs: a CHAR column is padded with spaces up to its defined length, while a VARCHAR2 column stores the data directly, without trailing spaces, so the values do not match. This problem is usually encountered when an update or delete arrives. The problem is overcome by converting the relevant column to CHAR.

Bad/Incompatible Record/Header: Such errors are encountered when the data from the extract cannot be read by the replicat, or the replicat gets stuck at the header of a newly started file. The report file writes which file is stuck at which RBA. To solve such errors, exit ggsci; under /goldengate, the following operations are performed in order.

gguser:askaodsora01:/goldengate:>logdump
Logdump 133 >open ./dirdat/r1000960    (960 is the extseqno)
Current LogTrail is /goldengate/dirdat/r1000960
Logdump 129 >ghdr on
Logdump 130 >detail on
Logdump 131 >detail data
Logdump 137 >pos 0    (the RBA number with the error in the report file)
Reading forward from RBA 0
Logdump 138 >n

2013/12/06 13:49:12.151.386 FileHeader Len 1245 RBA 0
Name: FileHeader
3000 01ae 3000 0008 4747 0d0a 544c 0a0d 3100 0002 | 0…0…DD..TL..1…
0003 3200 0004 2000 0000 3300 0008 02f2 130f b0d6 | ..2… …3………
fd5a 3400 0015 0013 7572 693a 5052 4f44 3a3a 7a7a | .Z4…..uri:PROD::zz
3a50 3031 414e 4135 0000 1935 0000 1500 1375 7269 | :P01ANA5…5…..uri
3a50 524f 443a 3a7a 7a3a 4530 3141 4e41 3600 0013 | :PROD::zz:E01ANA6…
0011 2e2f 6469 7264 6174 2f72 3130 3030 3936 3037 | …/dirdat/r10009607
0000 0101 3800 0004 0000 03c0 39ff 0008 0000 0000 | ….8…….9…….

Logdump 139 >n


Hdr-Ind : E (x45) Partition : . (x04)
UndoFlag : . (x00) BeforeAfter: A (x41)
RecLength : 684 (x02ac) IO Time : 2013/12/06 14:52:23.941.541
IOType: 5 (x05) OrigNode: 255 (xff)
TransInd: . (x01) FormatType: R (x52)
SyskeyLen: 0 (x00) Incomplete: . (x00)
AuditRBA : 1017874249 AuditPos : 0
Continued : N (x00) RecCount : 1 (x01)

2013/12/06 14:52:23.941.541 Insert Len 684 RBA 1253

Here, if the error is a Bad/Incompatible Record error, the record is displayed with n in logdump. The record at the next RBA is shown by typing n again. That RBA number is noted, logdump is exited, and you go under ggsci.

Logdump 140 >exit
gguser:askaodsora01:/goldengate:>ggsci
GGSCI (askaodsora01) 1>alter <replicat name>, extseqno <seqno>, extrba <rba>    (the RBA written here is the RBA of the record after the erroneous one)
GGSCI (askaodsora01) 1>alter r01ana, extseqno 960, extrba 1253

With this operation, the erroneous record was bypassed, and processing continues with the next record.

If the error is a header error, when a record is requested with n while performing the operations in logdump, no result is shown, and when n is typed again, still no record is shown. What needs to be done is to exit logdump and go under ggsci.

GGSCI (askaodsora01) 1>alter <replicat name>, extseqno <seqno>, extrba <rba>    (it is enough to write an RBA one greater than the RBA of the header; say the RBA of the header is 1082)

GGSCI (askaodsora01) 1>alter r01ana, extseqno 960, extrba 1083

The replicat is then run again.

Replicat/Extract Stuck: In cases where the extract or replicat has been abended for a long time, or there are too many transactions, the extract or replicat may get stuck at a certain point in time. Even after the error is fixed, the extract or replicat may be stuck although it appears to be working. When the info of the relevant extract or replicat is checked, if the date field in the Log Read Checkpoint section stays at an earlier date and does not change for a long time, it means the extract or replicat is stuck.

----- GGSCI (askaodsora01) 71> info r01ana

REPLICAT R01ANA Last Started 2013-12-06 15:59 Status RUNNING
Checkpoint Lag 00:00:00 (updated 00:00:00 ago)
Log Read Checkpoint File ./dirdat/r1000961
2013-12-02 16:14:54.406888 RBA 898900

For example, if on the 6th of the 12th month the replicat still shows the 2nd of the 12th month, it is stuck. To solve this:

GGSCI (askaodsora01) 72> alter replicat r01ana, begin 2013-12-02 16:14:54.406888

By running this, the replicat is pulled back to that moment, and thus it is corrected.

Journal Errors: These are the errors that occur when the journals and libraries of the AS400 tables in an extract are not the same. In other words, the tables in a new or an existing extract should be grouped according to their journals: the journals of the tables in an extract, and the libraries of those journals, should be the same. This should be considered when adding a new table.
To look at the journal and library information of a table, on the page that comes after connecting to the system with the Moca program and entering the user name and password, type

dspfd <library>/<table name>

( dspfd yprod/fgpoltem )

and press Enter; all the necessary table information is listed. Paging down twice shows the journal and library information of the table.

File is currently journaled . . . . . . . . : yes
Current or last journal . . . . . . . . . . : LINK07
library . . . . . . . . . . . . . . . . . : SHJRNLIB
Journal images. . . . . . . . . . . . . . : IMAGES *BOTH
Journal entries to be omitted . . . . . . . : OMTJRNE *OPNCLO
Last journal start date/time . . . . . . . : 20/06/10 02:03:31

SECTION 4-) SCHEDULED SCRIPTS/DB PROCEDURES

• STARTUP SH

There is a shell script that automatically brings all replicats back up when the ODS database server is restarted. The start_gg.sh script under the gguser user's home directory calls the start_gg.txt file under the goldengate directory and, by bringing the replicats back into operation, ensures that the transfer continues. This script runs automatically when the server restarts. To look at these, when we list all the .sh files under gguser after connecting to the system with Putty:

gguser:askaodsora01:/home/gguser:>ls *.sh
control.sh start_gg.sh
master_rowcount_job.sh stop_R02SWE.sh
rowcount_control.sh trunc_rowcount_control.sh
start_R02SWE.sh

start_gg.sh first changes to the goldengate directory and then runs start_gg.txt with the obey command.

gguser:askaodsora01:/home/gguser:>cat start_gg.sh
cd /goldengate
./ggsci << EOF
OBEY /goldengate/start_gg.txt
EOF

The start_gg.txt file is under the goldengate directory. In this txt file, there is the command to run the manager and the command to run all replicats.

gguser:askaodsora01:/goldengate:>cat start_gg.txt
START MGR
START ER *
INFO ALL

• SWEP SH

Since the R02SWE replicat was not wanted to run during the day, shell scripts were needed to stop this replicat in the morning and restart it in the evening. For this reason, two .sh files were added under gguser. To look at these, when we list all the .sh files under gguser after connecting to the system with Putty:

gguser:askaodsora01:/home/gguser:>ls *.sh
control.sh start_gg.sh
master_rowcount_job.sh stop_R02SWE.sh
rowcount_control.sh trunc_rowcount_control.sh
start_R02SWE.sh

start_R02SWE.sh is the script that starts the R02SWE replicat. It changes to the goldengate directory and runs start_R02SWE.txt with the obey command.

gguser:askaodsora01:/home/gguser:>cat start_R02SWE.sh
cd /goldengate
./ggsci << EOF
OBEY /goldengate/start_R02SWE.txt
EOF

The start_R02SWE.txt file contains the command to run the R02SWE replicat.

gguser:askaodsora01:/goldengate:>cat start_R02SWE.txt
START R02SWE

stop_R02SWE.sh is the script that stops the R02SWE replicat. It changes to the goldengate directory and runs stop_R02SWE.txt with the obey command.

gguser:askaodsora01:/home/gguser:>cat stop_R02SWE.sh
cd /goldengate
./ggsci << EOF
OBEY /goldengate/stop_R02SWE.txt
EOF

In the stop_R02SWE.txt file, there is a command to stop the R02SWE replicat.

gguser:askaodsora01:/goldengate:>cat stop_R02SWE.txt
STOP R02SWE

These two scripts are scheduled: stop_R02SWE.sh is set to run in the morning and start_R02SWE.sh in the evening.
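A sketch of how such scheduling might look in the gguser crontab (the times here are assumptions, not the actual schedule):

# stop the replicat in the morning, start it again in the evening
0 8 * * * /home/gguser/stop_R02SWE.sh
0 18 * * * /home/gguser/start_R02SWE.sh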

• DB Procedure

The procedure named ODS.DATE_KONTROL, belonging to the ODS user, was created to build a date-formatted value from year, month, and day parameters; if the data sent is not meaningful, it produces the date 01 January 1900 and returns it through an out parameter. This procedure is used to fill the TANZIMTARIH_DT, BASLANGICTARIH_DT, and BITISTARIH_DT fields in the table named AGGR.POLMAS for the replicat named R01ANA. Usage formats are given below.

Usage in R01ana.prm:

MAP YPROD.FDPOLMAS, TARGET AGGR.POLMAS, SQLEXEC ( SPNAME ODS.DATE_KONTROL , ID DK_TANZIM_2 , PARAMS ( P_YIL = FTANYL , P_AY = FTANAY , P_GUN = FTANGN ) , ……… = DK_TANZIM_2.P_DATE , ….)
If it is required to be used for more than one field in the same table, the SQLEXEC definition is duplicated: the SPNAME definitions stay the same, but the ID definitions differ.
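A hedged sketch of that duplication (the second ID and its P_YIL/P_AY/P_GUN source columns are hypothetical; the target column names follow the fields listed above):

MAP YPROD.FDPOLMAS, TARGET AGGR.POLMAS,
SQLEXEC ( SPNAME ODS.DATE_KONTROL, ID DK_TANZIM_2, PARAMS ( P_YIL = FTANYL, P_AY = FTANAY, P_GUN = FTANGN ) ),
SQLEXEC ( SPNAME ODS.DATE_KONTROL, ID DK_BITIS_2, PARAMS ( P_YIL = FBITYL, P_AY = FBITAY, P_GUN = FBITGN ) ),
COLMAP ( USEDEFAULTS, TANZIMTARIH_DT = DK_TANZIM_2.P_DATE, BITISTARIH_DT = DK_BITIS_2.P_DATE );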

SECTION 5-) CONTROL MECHANISMS

Control mechanisms work at two different levels. The GoldenGate log monitoring mechanism is based on the logic of writing the file named ggserr.log under /goldengate (/zz on AS400) into a database table and tracking stopped or delayed jobs from there. However, it does not report on the state of the data itself. At that point, a data-level comparison is made by obtaining the row counts of the tables, via operating-system scripts or DB procedures triggered by IBM DataStage, with queries run at the same time.

• Goldengate Log Monitoring

In this process, in accordance with the settings in the parameter file of the manager process, messages with the INFO title are automatically written to the log file produced by GoldenGate when the time set with the LAGINFOMINUTES parameter is exceeded, and messages with the WARNING title when the time set with LAGCRITICALMINUTES is exceeded. These messages are written to the table below.

GOLDENGATE.PROCESS_STATUS_HISTORY:

SERVER_NAME VARCHAR2(30 BYTE),
PROCESS_NAME VARCHAR2(30 BYTE),
MESSAGE VARCHAR2(4000 BYTE),
LAG INTERVAL DAY(9) TO SECOND(9),
SYS_DATE TIMESTAMP(6)

The columns in this table hold the following data.

SERVER_NAME => Name of the server whose log is monitored
PROCESS_NAME => Name of the extract, pump or replicat process running on the server in question
MESSAGE => The message of the transaction in the log file
LAG => The delay time of the process in question (if the process is abended or stopped, this information is not written, and this column of the row stays empty). Lag information is logged continuously in hour format; therefore, when the delay passes the day level, a value of more than 24 hours can be seen. The parsed lag information is inserted into this table with the following function; the example lag time here is 25 minutes 44 seconds.
numtodsinterval(AVG(to_number(REGEXP_SUBSTR('00:25:44', '[^:]+', 1, 1))*60*60 + to_number(REGEXP_SUBSTR('00:25:44', '[^:]+', 1, 2))*60 + to_number(REGEXP_SUBSTR('00:25:44', '[^:]+', 1, 3))), 'second')
SYS_DATE => The time the message was written.

The log is read by Java code running on all the servers listed below and written to this table on the ODS system.

SERVER          PROCESSES
askaodsora01    Replicat
askaccsora04    Extract
askacrmora01    Extract
asasweora02     Extract

Messages written to the process_status_history table are sent by e-mail to the responsible person to give notice of possible process problems. In addition, since this table is a historical information source for a report created in the SAP BO environment, the history of delays or problems over time can also be reported.
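As an illustration, a hedged query sketch over this table (the 3-minute threshold and one-day window are assumptions tied to the manager settings above):

-- processes whose lag exceeded the critical threshold in the last day
SELECT server_name, process_name, lag, sys_date
FROM goldengate.process_status_history
WHERE lag > NUMTODSINTERVAL(3, 'MINUTE')
AND sys_date > SYSDATE - 1
ORDER BY sys_date DESC;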

• Database controls (taking table count)

This step aims to control differences at the data level through count logic. Counting is driven from a single point, that is, via scripts triggered by IBM DataStage. This mechanism currently compares tables between ODS and AS400; other servers are not included. Since the count process creates a serious load, especially on Oracle databases, it should be applied carefully on source systems and should be centered on DataStage. The important thing is to run the count queries against the tables to be compared on the source and target at the same time. If the delays at the GoldenGate level are negligible, the count differences are expected to be close to 0. However, there may be slight differences in heavily modified tables; note that this difference should not always be at the same level. Since a constant difference may mean a constant loss of data, possible loss should be detected and corrected by comparing the source table with the table on the target side.
On the ODS side, the count operation is started by triggering the script named master_rowcount_job.sh under the /home/gguser folder. This script file calls the script file named rowcount_control.sh, which is also in the same directory, for each table to be counted. The contents of this file are as follows.

#!/bin/sh

sqlplus / as sysdba<<END
execute ods.rowcounts('$1','$2');
exit;
END

As can be seen here, this shell script calls the procedure named ods.rowcounts in the database. This procedure takes the schema owner and table name and runs a count operation.
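For example, a hypothetical invocation counting a single table (the parameter order follows the procedure's schema/table signature):

sh /home/gguser/rowcount_control.sh YPROD FGPOLMAS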

CREATE OR REPLACE PROCEDURE ODS.ROWCOUNTS(SCHEMA_NAME IN VARCHAR2, TABLE_NAME IN VARCHAR2)
IS
V_SHEMA_NAME CONSTANT VARCHAR2(30) := SCHEMA_NAME;
V_TABLE_NAME CONSTANT VARCHAR2(30) := TABLE_NAME;

BEGIN
EXECUTE IMMEDIATE 'INSERT INTO ODS.ODS_ROWCOUNTS
(TABLE_NAME,ROW_COUNT,SYS_DATE)
SELECT '''||V_SHEMA_NAME||'.'||V_TABLE_NAME||''', COUNT(*), SYSDATE FROM '||V_SHEMA_NAME||'.'||V_TABLE_NAME;
COMMIT;
END;
/

This procedure writes the count information of the table in question to the ODS.ODS_ROWCOUNTS table in the database. The information for the AS400 environment is written to the ODS.AS400_ROWCOUNTS table with a DataStage job. The SYS_DATE column in these tables records the time the data was written after the row count operation; therefore, historical data can be displayed. Then the data in these two tables is compared on a per-table basis and summarized in the ODS.GG_TABLE_ROW_DIFFS table, into which the row-count difference between source and target and the ratio of this difference to the table's row count are inserted. If the same system is set up for other sources, a table can be created for them as well and used in the comparison, with the differences written to the ODS.GG_TABLE_ROW_DIFFS table.
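A hedged sketch of that comparison (the column names of the rowcount and diff tables beyond those named above are assumptions):

-- compare counts taken at the same time and store the difference and its ratio
INSERT INTO ODS.GG_TABLE_ROW_DIFFS (TABLE_NAME, ROW_DIFF, DIFF_RATIO, SYS_DATE)
SELECT o.TABLE_NAME,
o.ROW_COUNT - a.ROW_COUNT,
(o.ROW_COUNT - a.ROW_COUNT) / NULLIF(o.ROW_COUNT, 0),
SYSDATE
FROM ODS.ODS_ROWCOUNTS o
JOIN ODS.AS400_ROWCOUNTS a ON a.TABLE_NAME = o.TABLE_NAME;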
