
Dec 16, 2014

Setup standby with PostgreSQL 8.4

Here is a step-by-step guide on how to set up a standby with PostgreSQL 8.4. This article will also give you a fair idea of log shipping to a remote server, AKA log archiving.
 
 
Pre-requisites:
 
1. Two machines with the same OS and version.
2. PostgreSQL 8.4 installed on both machines.
3. Passwordless SSH authentication for the postgres user (from server A to server B and vice versa).
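The passwordless authentication above can be set up with SSH keys. A minimal sketch, assuming OpenSSH and the postgres OS user; the key is generated in a temporary directory purely for illustration, in practice it lives in ~postgres/.ssh:

```shell
# Generate a key pair for the postgres user (temp dir for illustration only;
# normally this is ~postgres/.ssh).
KEYDIR=$(mktemp -d)
ssh-keygen -t rsa -N '' -q -f "$KEYDIR/id_rsa"

# Then push the public key to the peer host (run once in each direction,
# entering the password one last time):
# ssh-copy-id -i "$KEYDIR/id_rsa.pub" postgres@192.168.178.101
```

After both directions are done, `ssh postgres@192.168.178.101 true` should return without prompting, which is what the scp in archive_command relies on.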
 
 
To start with, all you need is two hosts with PostgreSQL 8.4 installed on them.
 
Host Details
Host-1: RHEL1.ajay.com IP: 192.168.178.100
Host-2: RHEL2.ajay.com IP: 192.168.178.101
 

OS and version: RHEL (same on both hosts)
 
PostgreSQL version: 8.4 (8.4.18, same on both hosts)
 
Step-1
 
Edit the postgresql.conf file on the master, modify the parameters below, and save the file.
Note: Make sure the locations we are using for the archives exist on both hosts. If not, we need to create the directory/directories.
 
archive_mode = on               # allows archiving to be done
                                # (change requires restart)
archive_command = 'cp %p /data/archive/%f && scp %p postgres@192.168.178.101:/data/prodarchive/%f'          # command to use to archive a logfile segment
archive_timeout = 100           # force a logfile segment switch after this
                                # number of seconds; 0 disables
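To see what that archive_command actually does, here is a local simulation of its cp half using throwaway directories; the scp half, which ships the same segment to the standby, is shown commented out, and the file name and size are stand-ins:

```shell
WALDIR=$(mktemp -d)    # stands in for $PGDATA/pg_xlog, where %p lives
ARCHIVE=$(mktemp -d)   # stands in for /data/archive
SEG=00000001000000000000003C

head -c 16384 /dev/zero > "$WALDIR/$SEG"   # fake segment (real ones are 16MB)

# The 'cp %p /data/archive/%f' half: keep a local copy under the segment name.
cp "$WALDIR/$SEG" "$ARCHIVE/$SEG"
# The shipping half would then be:
# scp "$WALDIR/$SEG" postgres@192.168.178.101:/data/prodarchive/$SEG
```

Because of the &&, PostgreSQL considers the segment archived only if both commands succeed; if the scp fails, the whole command fails and the server retries the segment later.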
                                               

 
 
Step-2
 
As enabling archiving requires a restart of the cluster, we have to stop and start the PostgreSQL cluster.
 
Stop the cluster
/opt/postgres_8.4/bin/pg_ctl -D /data/8.4_data/ stop
 
Start the cluster
/opt/postgres_8.4/bin/pg_ctl -D /data/8.4_data/ start
 
Or
/opt/postgres_8.4/bin/pg_ctl -D /data/8.4_data/ restart
 


Step-3
 
Now connect via psql to make sure all the parameters we modified reflect the new values. You can also check the WAL archive locations on the master and the slave to verify whether archives are being copied.
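One way to do that check, assuming the port (5444) and paths used elsewhere in this article; these commands need the live cluster, so they are shown as comments:

```shell
# /opt/postgres_8.4/bin/psql -p 5444 -d postgres -c "SHOW archive_mode;"
# /opt/postgres_8.4/bin/psql -p 5444 -d postgres -c "SHOW archive_command;"
# On the master, freshly archived segments should keep appearing:
# ls -lrt /data/archive | tail
```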
Step-4
 
Assuming everything is working fine, we are now ready to create the initial snapshot (backup) of the master.
 
Connect via psql and put the cluster into backup mode:
 
[postgres@RHEL1 ~]$ /opt/postgres_8.4/bin/psql -p 5444 -d postgres
psql (8.4.18)
Type "help" for help.
 
postgres=# SELECT pg_start_backup('label', true);
 

Step-5
Now we are going to tar the complete data directory and transfer the backup file to the slave.
 
cd /data
tar -vcf /backup_location/backup_of_cluster_8.4.tar 8.4_data 
 
scp /backup_location/backup_of_cluster_8.4.tar postgres@192.168.178.101:/u01/
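The pack-and-ship step can be rehearsed locally. A sketch with a mock data directory; the paths are stand-ins and the scp hop is skipped:

```shell
SRC=$(mktemp -d)   # stands in for /data on the master
DST=$(mktemp -d)   # stands in for /data on the slave

# Mock a tiny data directory and pack it the same way as above.
mkdir -p "$SRC/8.4_data/global"
echo control > "$SRC/8.4_data/global/pg_control"
tar -C "$SRC" -cf "$SRC/backup_of_cluster_8.4.tar" 8.4_data

# Network hop, skipped in this simulation:
# scp "$SRC/backup_of_cluster_8.4.tar" postgres@192.168.178.101:/u01/

# What Step-7 will do on the slave:
tar -C "$DST" -xf "$SRC/backup_of_cluster_8.4.tar"
```

Note the tarball unpacks to a directory named 8.4_data, since that is the path that was archived; this matters for the rename in Step-7.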

 
Step-6
Once the backup is transferred, we need to take the cluster out of backup mode (on the master):
 
postgres=# SELECT pg_stop_backup();
 
 
Step-7
On the slave, untar the backup of the master into the location which is going to be your data directory, in my case "/data/standby8.4".
 
mv /u01/backup_of_cluster_8.4.tar /data
tar -xvf backup_of_cluster_8.4.tar
 
Rename the extracted directory (8.4_data) to standby8.4:
 
mv /data/8.4_data /data/standby8.4
 
 
Step-8
Clean up files which are not required (inside the standby's data directory):
 
rm -f postmaster.pid postmaster.opts backup_label
rm -rf pg_log/*
rm -rf pg_xlog/*
mkdir -p pg_xlog/archive_status   # recreate it if the wildcard removed it
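A rehearsal of this cleanup on a throwaway directory; the file names are the ones above, the contents are mocked. Note that pg_xlog/archive_status is kept, as the server expects that subdirectory to exist:

```shell
DATADIR=$(mktemp -d)   # stands in for /data/standby8.4

# Mock the leftovers copied over from the master.
mkdir -p "$DATADIR/pg_log" "$DATADIR/pg_xlog/archive_status"
touch "$DATADIR/postmaster.pid" "$DATADIR/postmaster.opts" "$DATADIR/backup_label"
touch "$DATADIR/pg_log/server.log" "$DATADIR/pg_xlog/00000001000000000000003C"

cd "$DATADIR"
rm -f postmaster.pid postmaster.opts backup_label
rm -f pg_log/*
find pg_xlog -mindepth 1 -maxdepth 1 -type f -delete   # clears WAL, keeps archive_status/
```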
 
 
Step-9
 
 
Create a recovery.conf file in the standby's data directory, add the entries below, and save the file.
 
restore_command = '/opt/postgres_8.4/bin/pg_standby -d -t /tmp/pgsql.trigger.5444 /data/standby8.4/prodarchive %f %p %r 2>>/var/log/postgresql/standby.log'
recovery_end_command = 'rm -f /tmp/pgsql.trigger.5444'
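The -t option names the failover trigger file: pg_standby keeps replaying archives until that file appears. Promoting the standby later is just a matter of creating it; the path is the one from this recovery.conf, demonstrated here on a stand-in:

```shell
TRIGGER=$(mktemp -u)   # stand-in for /tmp/pgsql.trigger.5444

touch "$TRIGGER"   # on failover day: pg_standby sees this, stops waiting, and
                   # recovery finishes so the standby opens for read-write
rm -f "$TRIGGER"   # what recovery_end_command then does, so a later restart
                   # does not immediately trigger again
```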
 

Step-10
Edit the standby's postgresql.conf file and comment out the entries below (as we have copied everything from the master, including its configuration):
 
 
#archive_mode = on               # allows archiving to be done
                                # (change requires restart)
#archive_command = 'cp %p /data/archive/%f && scp %p postgres@192.168.178.101:/data/prodarchive/%f'          # command to use to archive a logfile segment
#archive_timeout = 100           # force a logfile segment switch after this
                                # number of seconds; 0 disables
 

Step-11
We are all set to start our slave now. Open two sessions: one to start the cluster, and another to tail the cluster's log file.
 
/opt/postgres_8.4/bin/pg_ctl -D /data/standby8.4 start


Step-12
The log file should look like the one below:
 
[postgres@RHEL2 pg_log]$ tail -f  postgresql-2014-12-11_040146.log
 
 
[postgres@RHEL2 pg_log]$ more postgresql-2014-12-11_040146.log
LOG:  database system was interrupted; last known up at 2014-12-11 03:54:53 IST
LOG:  starting archive recovery
LOG:  restore_command = '/opt/postgres_8.4/bin/pg_standby -d -t /tmp/pgsql.trigger.5444 /data/standby8.4/prodarchive %f %p %r'
LOG:  recovery_end_command = 'rm -f /tmp/pgsql.trigger.5444'
 
Trigger file            : /tmp/pgsql.trigger.5444
Waiting for WAL file    : 00000001.history
WAL file path           : /data/standby8.4/prodarchive/00000001.history
Restoring to...         : pg_xlog/RECOVERYHISTORY
Sleep interval          : 5 seconds
Max wait interval       : 0 forever
Command for restore     : cp "/data/standby8.4/prodarchive/00000001.history" "pg_xlog/RECOVERYHISTORY"
Keep archive history    : 000000000000000000000000 and later
running restore         :cp: cannot stat `/data/standby8.4/prodarchive/00000001.history': No such file or directory
cp: cannot stat `/data/standby8.4/prodarchive/00000001.history': No such file or directory
cp: cannot stat `/data/standby8.4/prodarchive/00000001.history': No such file or directory
cp: cannot stat `/data/standby8.4/prodarchive/00000001.history': No such file or directory
not restored            : history file not found
 
Trigger file            : /tmp/pgsql.trigger.5444
Waiting for WAL file    : 00000001000000000000003C
WAL file path           : /data/standby8.4/prodarchive/00000001000000000000003C
Restoring to...         : pg_xlog/RECOVERYXLOG
Sleep interval          : 5 seconds
Max wait interval       : 0 forever
Command for restore     : cp "/data/standby8.4/prodarchive/00000001000000000000003C" "pg_xlog/RECOVERYXLOG"
Keep archive history    : 000000000000000000000000 and later
running restore         : OK
LOG:  restored log file "00000001000000000000003C" from archive
LOG:  automatic recovery in progress
LOG:  redo starts at 0/3C000068
LOG:  consistent recovery state reached
 
Trigger file            : /tmp/pgsql.trigger.5444
Waiting for WAL file    : 00000001000000000000003D
WAL file path           : /data/standby8.4/prodarchive/00000001000000000000003D
Restoring to...         : pg_xlog/RECOVERYXLOG
Sleep interval          : 5 seconds
Max wait interval       : 0 forever
Command for restore     : cp "/data/standby8.4/prodarchive/00000001000000000000003D" "pg_xlog/RECOVERYXLOG"
Keep archive history    : 00000001000000000000003C and later
running restore         : OK
removing "/data/standby8.4/prodarchive/000000010000000000000037"
removing "/data/standby8.4/prodarchive/00000001000000000000003B"
removing "/data/standby8.4/prodarchive/00000001000000000000003A"
removing "/data/standby8.4/prodarchive/000000010000000000000039"
 

Mar 8, 2010

Oracle 10g: SGA-related Parameters

Auto tuned SGA parameters

  • db_cache_size
  • shared_pool_size
  • large_pool_size
  • java_pool_size
These parameters are called auto-tuned because Automatic Shared Memory Management, if enabled, can dynamically change the sizes of these pools.

DB_CACHE_SIZE

The value of this parameter affects the size of the SGA: It sets the size of the default buffer pool.
According to metalink note 223299.1, this is one of the top parameters affecting performance.
With Oracle 10g, one can turn on Automatic Shared Memory Management which distributes shared memory as required. See also sga_target.

JAVA_POOL_SIZE

The value of this parameter affects the size of the SGA.
With Oracle 10g, one can turn on Automatic Shared Memory Management which distributes shared memory as required. See also sga_target.

SHARED_POOL_SIZE

The value of this parameter affects the size of the SGA or more appropriately the size of the shared pool within the SGA.
Apparently, when installing JServer, this parameter must be set to at least 24M.

LARGE_POOL_SIZE

With Oracle 10g, one can turn on Automatic Shared Memory Management which distributes shared memory as required. See also sga_target.

Manual SGA parameters

  • db_keep_cache_size
  • db_recycle_cache_size
  • db_NNk_cache_size
    NN being one of 2, 4, 8, 16, 32
  • log_buffer
  • streams_pool_size

DB_KEEP_CACHE_SIZE

According to metalink note 223299.1, this is one of the top parameters affecting performance.

DB_RECYCLE_CACHE_SIZE

According to metalink note 223299.1, this is one of the top parameters affecting performance.

LOG_BUFFER

Up to 9i

The value of this parameter defines the size of the redo log buffer. As the redo log buffer is part of the SGA, it affects the size of the SGA as well.
Setting this value to anything greater than 3M is useless, as the log buffer is flushed anyway when it is filled up to 1M or when it reaches one third of its capacity, whichever comes first. (Thanks to William White who notified me of an error here.)

10g

From 10g onwards, there is more going on with the initialization parameter log_buffer. Metalink note 351857.1 states that from 10gR2 the size of the log buffer cannot be changed with this parameter; instead, its size is set by Oracle.

STREAMS_POOL_SIZE

Sizes the Streams pool used by Oracle Streams. If it is left at zero, Streams memory is taken from the shared pool instead.

Other parameters

DB_BLOCK_BUFFERS

The value of this parameter affects the size of the SGA, or more precisely, the size of the buffer cache. This parameter is deprecated since 9i, db_cache_size should be used instead.
With Oracle 10g, one can turn on Automatic Shared Memory Management which distributes shared memory (of which the buffer cache is part) as required. See also sga_target.

DB_BLOCK_CHECKSUM

Controls whether Oracle computes and verifies checksums on data blocks to detect corruption. It is a data-integrity setting and does not size any SGA component.

DB_BLOCK_SIZE

Determines the size of a database block.

SGA_MAX_SIZE

sga_max_size sets the maximum value for sga_target.
If sga_max_size is less than the sum of db_cache_size + log_buffer + shared_pool_size + large_pool_size at initialization time, then the value of sga_max_size is ignored.
According to metalink note 223299.1, this is one of the top parameters affecting performance.

SGA_TARGET

This parameter is new with Oracle 10g. It specifies the total amount of SGA memory available to an instance. Setting this parameter makes Oracle distribute the available memory among the various components - such as the shared pool (for SQL and PL/SQL), the Java pool, the large pool and the buffer cache - as required.
This new feature is called Automatic Shared Memory Management. With ASMM, the parameters java_pool_size, shared_pool_size, large_pool_size and db_cache_size need not be specified explicitly anymore.
sga_target cannot be higher than sga_max_size.
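A sketch of turning on ASMM as described above; the values are illustrative, not recommendations:

```sql
-- Assumes an spfile; both changes take effect at the next restart.
ALTER SYSTEM SET sga_max_size = 1G SCOPE=SPFILE;
ALTER SYSTEM SET sga_target = 800M SCOPE=SPFILE;  -- must not exceed sga_max_size
-- With sga_target set, any values still given for db_cache_size,
-- shared_pool_size, large_pool_size or java_pool_size act as lower bounds.
```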



May 26, 2009

What is PRESENTATION=RO in tnsnames.ora file?

Check the entry in tnsnames.ora file:

EXTPROC_CONNECTION_DATA =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1))
    )
    (CONNECT_DATA =
      (SID = PLSExtProc)
      (PRESENTATION = RO)
    )
  )

In this article we will discuss the PRESENTATION clause in this entry.

Little history
The whole tnsnames.ora file is meant as information for client software that will be connecting to the Oracle server (database).

The client must know where the database resides, which PROTOCOL to use for the connection, the unique identifier of the database, and so on.

Back to EXTPROC_CONNECTION_DATA
But Oracle uses this particular tnsnames.ora entry for connecting to external procedures. Examples of external procedures are procedures written in C/C++/VB which are compiled and made available as shared libraries (DLLs).

PRESENTATION in connect descriptor
There must be a presentation layer between client and server in case the character sets of the two differ. This layer ensures that information sent from the application layer of one system is readable by the application layer of the other system.

The various presentation layer options available are
1. Two-Task Common (TTC)
2. JavaTTC
3. FTP
4. HTTP
5. GIOP (for IIOP)
6. IMAP
7. POP
8. IMAPSSL (6, 7, and 8 are for email), etc.
9. RO

TTC
TTC (Two-Task Common) is Oracle's implementation of the presentation layer. It provides character set and datatype conversion between different character sets or formats on the client and server. This layer is optimized on a per-connection basis to perform conversion only when required.

Information obtained from: 
http://download-west.oracle.com/docs/cd/A87860_01/doc/network.817/a76933/architec.htm#1007271

JavaTTC
This is a Java implementation of TTC for the Oracle Net foundation layer, capable of providing character set and datatype conversion.

It is responsible for 
  a. Negotiating protocol version and datatype
  b. Determining any conversions
  c. SQL statement execution

RO
For external procedures, the PRESENTATION value will normally be RO, which stands for "Remote Operation". From this parameter the application layer knows that a remote procedure call (RPC) has to be made.

QUERY parameter in Export Utility

This parameter is used in conjunction with the TABLES parameter of Oracle's exp (export) utility. It enables selection of rows from the tables listed in the TABLES parameter. The value of this parameter is a WHERE clause for the SELECT statement you would normally issue.

For example if you want to query all records of employees for a particular department you will use:

SELECT *
FROM employees
WHERE dept = 10;

To export these rows into a file using exp utility you will follow the below syntax:

exp scott/tiger TABLES=employees QUERY=\"WHERE dept=10\"

Use \ to escape quote characters and special characters such as the less-than or greater-than symbols inside the string. Characters the operating system shell treats specially likewise need \ as an escape character.
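If the escaping gets unwieldy, a parameter file avoids it altogether, since the shell never sees the quotes. A sketch; the file name and contents are illustrative:

```shell
# Write the export options to a parameter file; no backslash escaping needed here.
PARFILE=$(mktemp)
cat > "$PARFILE" <<'EOF'
TABLES=employees
QUERY="WHERE dept = 10"
FILE=emp_dept10.dmp
LOG=emp_dept10.log
EOF

# Then run the export against a real instance:
# exp scott/tiger PARFILE="$PARFILE"
```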

For example:

exp scott/tiger TABLES=employees QUERY=\"WHERE name=\'ANANTHA\' and sal \<1600\"

The default value of the QUERY parameter is NULL.

Note: If there are multiple tables in the TABLES parameter, the QUERY will be applied to all those tables.

You can also use ROWID for exporting, for example:

exp scott/tiger@slspnc1 tables=emp query=\"where ROWID='AAAMgzAAEAAAAAgAAB'\" file=test.dmp log=test.log

On Windows it is not required to escape the single quote character, whereas you do need to escape it if you are using Linux or its variants.

exp scott/tiger@slspnc1 tables=emp query=\"where ROWID=\'AAAMgzAAEAAAAAgAAB\'\" file=test.dmp log=test.log

Restrictions when using the QUERY parameter:
  • The QUERY parameter cannot be specified for full, user, or tablespace-mode exports.
  • The QUERY parameter must be applicable to all specified tables.
  • The QUERY parameter cannot be specified in a direct path export (DIRECT=y).
  • The QUERY parameter cannot be specified for tables with inner nested tables.
  • You cannot determine from the contents of the export file whether the data is the result of a QUERY export.