11.2.0.3 ASM reports ORA-04031


On database 11.2.0.3, the ASM instances report errors like ORA-04031 in the background:

ERROR at line 1:
ORA-04031: unable to allocate 256 bytes of shared memory ("shared
pool","unknown object","PCursor","kglob")
 
Cause:
1) ASM instance was running out of shared pool free memory.
2) The shared pool free memory was close to 0 MB on the affected ASM instances:
 

+ASM1
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
SQL> select bytes/1024/1024 MB
  2  from v$sgastat
  3  where pool = 'shared pool' and name = 'free memory';

        MB
----------
17.8854446
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

 

+ASM2
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
SQL> select bytes/1024/1024 MB
  2  from v$sgastat
  3  where pool = 'shared pool' and name = 'free memory';

        MB
----------
22.6575012
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

 

+ASM3
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
SQL> select bytes/1024/1024 MB
  2  from v$sgastat
  3  where pool = 'shared pool' and name = 'free memory';
select bytes/1024/1024 MB
*
ERROR at line 1:
ORA-04031: unable to allocate 256 bytes of shared memory ("shared
pool","unknown object","PCursor","kglob")
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

 


 

Explanation:

In 11.2.0.3, the PROCESSES parameter defaults to "available CPU cores * 80 + 40" in the ASM spfile. Because the default value of MEMORY_TARGET is derived from PROCESSES, it can be insufficient when there is a large number of CPU cores or a large number of diskgroups, which can cause issues (e.g. the Grid Infrastructure stack fails to stop with ORA-04031) per Bug:13605735 and Bug:12885278. It is therefore recommended to increase MEMORY_MAX_TARGET and MEMORY_TARGET before upgrading to or installing 11.2.0.3 (this does not apply to 10g ASM).
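To get a feel for why the default can balloon, the formula above can be worked through for a hypothetical host (the core count below is invented purely for illustration):

```shell
# Sketch of the 11.2.0.3 ASM default: PROCESSES = CPU cores * 80 + 40.
# cpu_cores is hard-coded here for illustration; on a real host you
# might derive it from `nproc` or `getconf _NPROCESSORS_ONLN`.
cpu_cores=16
processes=$(( cpu_cores * 80 + 40 ))
echo "Default ASM PROCESSES for ${cpu_cores} cores: ${processes}"   # prints 1320
```

A many-core server thus gets a large PROCESSES default, and the MEMORY_TARGET derived from it can still fall short once diskgroup metadata is factored in.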

 

Solution:


Log in to ASM:

SQL> show parameter memory_target

If the value is smaller than 1536m, issue the following:

SQL> alter system set memory_max_target=4096m scope=spfile;
SQL> alter system set memory_target=1536m scope=spfile;


The value 1536m has proven sufficient for most environments. The change will not take effect until the next restart of the ASM instance.

Configure Direct NFS Client (DNFS) on Linux (11g)


Oracle 11g introduces the Direct NFS Client (DNFS). Once it is configured, the database accesses files on the NFS server directly, avoiding the extra overhead imposed by the OS kernel NFS client.

DIRECT NFS CLIENT OVERVIEW

      Standard NFS client software, provided by the operating system, is not optimized for Oracle Database file I/O access patterns.  With Oracle Database 11g, you can configure Oracle Database to access NFS V3 NAS devices directly using Oracle Direct NFS Client, rather than using the operating system kernel NFS client.  Oracle Database will access files stored on the NFS server directly through the integrated Direct NFS Client eliminating the overhead imposed by the operating system kernel NFS.  These files are also accessible via the operating system kernel NFS client thereby allowing seamless administration.

 

 

Direct NFS Client obtains its mount point settings from one of the following three files:

1. $ORACLE_HOME/dbs/oranfstab

2. /etc/oranfstab

3. /etc/mtab

 

Notes:

Direct NFS Client can use a dedicated configuration file (oranfstab) or the mount tab file (/etc/mtab on Linux) to determine the mount point settings for NFS storage devices.

The oranfstab file is required only when configuring Direct NFS for load balancing or for settings specific to a single database. You can still enable Direct NFS without configuring oranfstab; in that case DNFS takes the NFS mount point settings from /etc/mtab on Linux.

 

In RAC, oranfstab must be configured on all nodes, and the /etc/oranfstab file must be kept synchronized across all nodes.
 

When the oranfstab file is placed in $ORACLE_HOME/dbs, the entries in the file are specific to a single database. In this case, all nodes running an Oracle RAC database use the same $ORACLE_HOME/dbs/oranfstab file.

When the oranfstab file is placed in /etc, it is globally available to all Oracle databases and can contain mount points used by all Oracle databases running on nodes in the cluster, including single-instance databases. However, on Oracle RAC systems, if the oranfstab file is placed in /etc, you must replicate /etc/oranfstab on all nodes and keep each copy synchronized, just as you must with the /etc/fstab file.

 

Configuration steps:

1. Edit oranfstab

[root@oradbca ~]# cat /etc/oranfstab 
server: oradbca
path: 192.168.1.11
export: /oraclenfsserver        mount: /oraclenfs

 

server: the NFS server name

path: up to four network paths to the NFS server, given as IP addresses or hostnames

export: the exported path on the NFS server

mount: the local mount point for the exported filesystem
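Since up to four paths are allowed, an oranfstab entry for a multi-homed NFS server could look like the following sketch (the server name, addresses, and paths here are all hypothetical, not taken from the environment above):

```
server: mynas
path: 192.168.1.11
path: 192.168.2.11
export: /vol/oradata    mount: /oradata
```

With more than one path listed, Direct NFS can spread I/O across them and fail over if one path becomes unreachable.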

 

2. Link the ODM library

[oracle@oradbca ~]$ cd $ORACLE_HOME/lib
[oracle@oradbca lib]$ mv libodm11.so libodm11.so_bak 
[oracle@oradbca lib]$ ln -s libnfsodm11.so libodm11.so

3. Mount the NFS export

[root@oradbca~]# mount -t nfs 192.168.1.11:/oraclenfsserver /oraclenfs

 

4. Start the database

SQL> startup
ORACLE instance started.

Total System Global Area  422670336 bytes
Fixed Size                  1336960 bytes
Variable Size             281020800 bytes
Database Buffers          134217728 bytes
Redo Buffers                6094848 bytes
Database mounted.
Database opened.

 

The alert log will show:

Oracle instance running with ODM: Oracle Direct NFS ODM Library Version 2.0 
Sat Apr 20 01:36:16 2013

 

5. Test by creating a tablespace

SQL> create tablespace test datafile '/oraclenfs/test.dbf' size 1m ;

Alert log output:

create tablespace test datafile '/oraclenfs/test.dbf' size 1m 
Direct NFS: NFS3ERR 1 Not owner. path oradbca mntport 728 nfsport 2049
Direct NFS: NFS3ERR 1 Not owner. path oradbca mntport 728 nfsport 2049

 

6. Check the dynamic views

SQL> desc v$dnfs_servers;
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 ID                                                 NUMBER
 SVRNAME                                            VARCHAR2(255)
 DIRNAME                                            VARCHAR2(1024)
 MNTPORT                                            NUMBER
 NFSPORT                                            NUMBER
 WTMAX                                              NUMBER
 RTMAX                                              NUMBER

SQL> 
SQL> 
SQL> select svrname,dirname,mntport,nfsport from v$dnfs_servers;

SVRNAME              DIRNAME                           MNTPORT    NFSPORT
-------------------- ------------------------------ ---------- ----------
oradbca                 /oraclenfsserver                      728       2049

 

ORA-00245 during RMAN backup in a RAC environment


Version:

BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
PL/SQL Release 11.2.0.3.0 - Production
CORE    11.2.0.3.0      Production
TNS for Linux: Version 11.2.0.3.0 - Production
NLSRTL Version 11.2.0.3.0 - Production

The following error appeared during an RMAN backup of the database:

RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================

RMAN-03009: failure of backup command on ch1 channel at 04/19/2013 18:04:39
ORA-00245: control file backup failed; target is likely on a local file system

 

Cause

The snapshot controlfile must be accessible by every node. If it is not placed on shared storage, RMAN raises this error when it takes the controlfile snapshot during backup.

From the MOS note:

From 11gR2 onwards, the controlfile backup happens without holding the controlfile enqueue. For a non-RAC database this changes nothing. But for a RAC database, due to the changes made to the controlfile backup mechanism in 11gR2, any instance in the cluster may write to the snapshot controlfile, so the snapshot controlfile needs to be visible to all instances.

The snapshot controlfile MUST be accessible by all nodes of a RAC database; if it does not reside on a shared device, an error is raised when RMAN takes the controlfile snapshot during backup.

The same applies to backing up the controlfile from SQL*Plus, or to having controlfile autobackup configured on a non-shared location.

Solution:

1. Check the snapshot controlfile location:

RMAN> show snapshot controlfile name;

2. Configure the snapshot controlfile to a shared disk:

RMAN> CONFIGURE SNAPSHOT CONTROLFILE NAME TO '<shared_disk>/snapcf_<DBNAME>.f';

Or, in the case of ASM:

RMAN> CONFIGURE SNAPSHOT CONTROLFILE NAME TO '+<DiskGroup>/snapcf_<DBNAME>.f';

 

Reference: In RAC environment from 11.2 onwards Backup Or Snapshot controlfile needs to be in shared location [ID 1472171.1]

 

 

 

 

 

Oracle mount options for NFS


When NFS is used in an Oracle environment with the default mount options, Oracle backups and other operations will fail.

The options below are the official recommendations from Mount Options for Oracle files when used with NFS on NAS devices [ID 359515.1].

 

RAC including RACone and single instance RAC

 

In the table below:

  • Binaries are the shared mount points where the Oracle Home and CRS_HOME are installed.
  • Datafiles includes online logs, controlfiles, and datafiles.
  • nfsvers and vers are identical on those OS platforms that have nfsvers. The vers option is an alternative to the nfsvers option, included for compatibility with other operating systems.
  • Note that the mount options in each of the following cells apply only to the types of files listed in the column heading.
  • For RMAN backup sets, image copies, and Data Pump dump files, the "noac" mount option should not be specified; RMAN and Data Pump do not check this option, and specifying it can adversely affect performance.

 

Mount options by operating system (Binaries / Oracle Datafiles / CRS Voting Disk and OCR):

Sun Solaris *
  Binaries:    rw,bg,hard,nointr,rsize=32768,wsize=32768,proto=tcp,noac,vers=3,suid
  Datafiles:   rw,bg,hard,nointr,rsize=32768,wsize=32768,proto=tcp,noac,forcedirectio,vers=3,suid
  OCR/Voting:  rw,bg,hard,nointr,rsize=32768,wsize=32768,proto=tcp,vers=3,noac,forcedirectio

AIX (5L) **
  Binaries:    rw,bg,hard,nointr,rsize=32768,wsize=32768,proto=tcp,vers=3,timeo=600
  Datafiles:   cio,rw,bg,hard,nointr,rsize=32768,wsize=32768,proto=tcp,noac,vers=3,timeo=600
  OCR/Voting:  cio,rw,bg,hard,intr,rsize=32768,wsize=32768,tcp,noac,vers=3,timeo=600

HPUX 11.23 ****
  Binaries:    rw,bg,vers=3,proto=tcp,noac,hard,nointr,timeo=600,rsize=32768,wsize=32768,suid
  Datafiles:   rw,bg,vers=3,proto=tcp,noac,forcedirectio,hard,nointr,timeo=600,rsize=32768,wsize=32768,suid
  OCR/Voting:  rw,bg,vers=3,proto=tcp,noac,forcedirectio,hard,nointr,timeo=600,rsize=32768,wsize=32768,suid

Linux x86 #
  Binaries:    rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,vers=3,timeo=600,actimeo=0
  Datafiles:   rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,actimeo=0,vers=3,timeo=600
  OCR/Voting:  rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,noac,vers=3,timeo=600

Linux x86-64 #
  Binaries:    rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,vers=3,timeo=600,actimeo=0
  Datafiles:   rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,actimeo=0,vers=3,timeo=600
  OCR/Voting:  rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,vers=3,timeo=600,noac

Linux – Itanium
  Binaries:    rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,vers=3,timeo=600,actimeo=0
  Datafiles:   rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,actimeo=0,vers=3,timeo=600
  OCR/Voting:  rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,noac,vers=3,timeo=600

* NFS mount option “forcedirectio” is required on Solaris platforms when mounting the OCR/CRS files when using Oracle 10.1.0.4 or 10.2.0.2 or later (Oracle unpublished bug 4466428) 
** AIX is only supported with NAS on AIX 5.3 TL04 and higher with Oracle 10.2.0.1 and later (NetApp) 
*** NAS devices are only supported with HPUX 11.23 or higher ONLY 

# These mount options are for Linux kernels 2.6 and above. For older kernels please check Note 279393.1

Due to unpublished Bug 5856342, it is necessary to use the following init.ora parameter when using NAS with all versions of RAC on Linux (x86 and x86-64) until 10.2.0.4; the fix is included in the 10.2.0.4 patchset.

filesystemio_options = DIRECTIO

 

Single Instance (non-RAC)

Mount options by operating system (Binaries / Oracle Datafiles):

Sun Solaris * (8, 9, 10)
  Binaries:    rw,bg,hard,rsize=32768,wsize=32768,vers=3,nointr,proto=tcp,suid
  Datafiles:   rw,bg,hard,rsize=32768,wsize=32768,vers=3,[forcedirectio or llock],nointr,proto=tcp,suid

AIX (5L) **
  Binaries:    rw,bg,hard,rsize=32768,wsize=32768,vers=3,intr,timeo=600,proto=tcp
  Datafiles:   rw,bg,hard,rsize=32768,wsize=32768,vers=3,cio,intr,timeo=600,proto=tcp

HPUX 11.23 ****
  Binaries:    rw,bg,hard,rsize=32768,wsize=32768,vers=3,nointr,timeo=600,proto=tcp,suid
  Datafiles:   rw,bg,hard,rsize=32768,wsize=32768,vers=3,nointr,timeo=600,proto=tcp,suid,forcedirectio

Linux x86 #
  Binaries:    rw,bg,hard,rsize=32768,wsize=32768,vers=3,nointr,timeo=600,tcp
  Datafiles:   rw,bg,hard,rsize=32768,wsize=32768,vers=3,nointr,timeo=600,tcp

Linux x86-64 #
  Binaries:    rw,bg,hard,rsize=32768,wsize=32768,vers=3,nointr,timeo=600,tcp
  Datafiles:   rw,bg,hard,rsize=32768,wsize=32768,vers=3,nointr,timeo=600,tcp

Linux – Itanium
  Binaries:    rw,bg,hard,rsize=32768,wsize=32768,vers=3,nointr,timeo=600,tcp
  Datafiles:   rw,bg,hard,rsize=32768,wsize=32768,vers=3,nointr,timeo=600,tcp
* actimeo=0 or noac can be used

To make the mount persistent across reboots, add an entry to /etc/fstab.

Example:

192.168.1.11:/data      /u01/oraclebak          nfs     rw,bg,hard,rsize=32768,wsize=32768,vers=3,nointr,timeo=600,tcp        0  0

 

The meanings of the options are as follows:
rw          mount the filesystem read-write (it must also be exported that way)
ro          mount the filesystem read-only
bg          if the mount fails (server not responding), keep retrying in the background and move on to other mount requests
hard        hard mount (the default); if the server goes down, operations that touch it block until the server recovers
soft        soft mount; if the server goes down, operations that touch it fail with an error, which helps keep processes from hanging on non-critical mounts
intr        allow users to interrupt blocked operations (they return an error)
nointr      do not allow users to interrupt blocked operations
retrans=n   on a soft mount, the number of times a request is retried before an error is returned
timeo=n     request timeout, in tenths of a second
rsize=n     read buffer size in bytes; applies to both TCP and UDP mounts, though the optimal values differ (32K works well)
wsize=n     write buffer size in bytes; applies to both TCP and UDP mounts
nfsvers=n   NFS protocol version, 2 or 3 (normally negotiated automatically)
tcp         transport over TCP; the default is UDP
fg          the opposite of bg; this is the default
mountport   set the mount port
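A couple of these values are easier to sanity-check in familiar units; the arithmetic below simply restates the recommended Linux numbers (nothing here touches a real mount):

```shell
# timeo is in tenths of a second; rsize/wsize are in bytes
timeo=600
rsize=32768
echo "timeo=600   -> $(( timeo / 10 )) second RPC timeout"
echo "rsize=32768 -> $(( rsize / 1024 )) KB read buffer"
```

So timeo=600 is a 60-second timeout, and rsize/wsize=32768 are the 32K buffers mentioned above.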

 

 

Resizing tmpfs on RedHat 6


Before RedHat 6, resizing tmpfs only required changing the value in /etc/fstab.

On RedHat 6, in addition to /etc/fstab you also need to edit the script /etc/rc.d/rc.sysinit.

  1. Edit /etc/fstab

Check total memory:

[root@oradbca ~]# grep MemTotal /proc/meminfo 
MemTotal:       65951744 kB
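The size=60g chosen below sits just under physical RAM; the quick conversion from the MemTotal figure above is:

```shell
# Convert the MemTotal value from /proc/meminfo (in kB) to whole GB
mem_kb=65951744
echo "MemTotal is about $(( mem_kb / 1024 / 1024 )) GB"   # prints 62, so size=60g fits
```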

The default entry:

[root@oradbca ~]# grep tmpfs /etc/fstab 
tmpfs                   /dev/shm                tmpfs   defaults        0 0

 

Change it (with vi) to:

tmpfs                   /dev/shm                tmpfs   size=60g        0 0

 

   2. Edit /etc/rc.d/rc.sysinit

if [ "$READONLY" != "yes" ] ; then
        # Clear mtab
        (> /etc/mtab) &> /dev/null

        # Remove stale backups
        rm -f /etc/mtab~ /etc/mtab~~

        # Enter mounted filesystems into /etc/mtab
        mount -f /
        mount -f /proc >/dev/null 2>&1
        mount -f /sys >/dev/null 2>&1
        mount -f /dev/pts >/dev/null 2>&1
        mount -f /dev/shm >/dev/null 2>&1
        mount -f /proc/bus/usb >/dev/null 2>&1
fi

# Mount all other filesystems (except for NFS and /proc, which is already
# mounted). Contrary to standard usage,
# filesystems are NOT unmounted in single user mode.
# The 'no' applies to all listed filesystem types. See mount(8).
if [ "$READONLY" != "yes" ] ; then
        action $"Mounting local filesystems: " mount -a -t nonfs,nfs4,smbfs,ncpfs,cifs,gfs,gfs2 -O no_netdev
else
        action $"Mounting local filesystems: " mount -a -n -t nonfs,nfs4,smbfs,ncpfs,cifs,gfs,gfs2 -O no_netdev
fi

 

Change it to:

if [ "$READONLY" != "yes" ] ; then
        # Clear mtab
        (> /etc/mtab) &> /dev/null

        # Remove stale backups
        rm -f /etc/mtab~ /etc/mtab~~

        # Enter mounted filesystems into /etc/mtab
        mount -f /
        mount -f /proc >/dev/null 2>&1
        mount -f /sys >/dev/null 2>&1
        mount -f /dev/pts >/dev/null 2>&1
        ##mount -f /dev/shm >/dev/null 2>&1
        mount -f /proc/bus/usb >/dev/null 2>&1
fi

# Mount all other filesystems (except for NFS and /proc, which is already
# mounted). Contrary to standard usage,
# filesystems are NOT unmounted in single user mode.
# The 'no' applies to all listed filesystem types. See mount(8).
if [ "$READONLY" != "yes" ] ; then
        action $"Mounting local filesystems: " mount -a -t nonfs,tmpfs,nfs4,smbfs,ncpfs,cifs,gfs,gfs2 -O no_netdev
else
        action $"Mounting local filesystems: " mount -a -n -t nonfs,tmpfs,nfs4,smbfs,ncpfs,cifs,gfs,gfs2 -O no_netdev
fi

 

Three lines were changed (lines 13, 22, and 24 of the snippet): the mount -f /dev/shm line was commented out, and tmpfs was added to the -t filesystem list in both mount -a commands.

Save the file; the changes take effect after a reboot. Alternatively, if you are confident the edits are correct, you can resize tmpfs on the fly without rebooting (the change takes effect immediately but does not persist):

mount -o remount,size=60g /dev/shm

 

[root@oradbca ~]# df -h |grep tmpfs
tmpfs                  60G     0   60G   0% /dev/shm

The change took effect successfully.