Category Archives: linux

"Found duplicate PV" errors when creating LVM


When creating the LVM volume group, the following errors were reported:

[root@oradbca ~]#  vgcreate vg_data /dev/sddlmaa
  Found duplicate PV YjFs3QZw30enKplsoQ5YmFFnDV08Owyy: using /dev/sdb not /dev/sddlmaa
  Found duplicate PV YjFs3QZw30enKplsoQ5YmFFnDV08Owyy: using /dev/sdc not /dev/sdb
  Found duplicate PV YjFs3QZw30enKplsoQ5YmFFnDV08Owyy: using /dev/sdd not /dev/sdc
  Found duplicate PV YjFs3QZw30enKplsoQ5YmFFnDV08Owyy: using /dev/sde not /dev/sdd
  Found duplicate PV YjFs3QZw30enKplsoQ5YmFFnDV08Owyy: using /dev/sddlmaa not /dev/sde
  Found duplicate PV YjFs3QZw30enKplsoQ5YmFFnDV08Owyy: using /dev/sdb not /dev/sddlmaa
  Found duplicate PV YjFs3QZw30enKplsoQ5YmFFnDV08Owyy: using /dev/sdc not /dev/sdb
  Found duplicate PV YjFs3QZw30enKplsoQ5YmFFnDV08Owyy: using /dev/sdd not /dev/sdc
  Found duplicate PV YjFs3QZw30enKplsoQ5YmFFnDV08Owyy: using /dev/sde not /dev/sdd
  Volume group "vg_data" successfully created

The volume group was created, but something felt off.

And the PV ended up on a plain single-path device instead of my multipath device.

[root@oradbca ~]# pvdisplay 
  Found duplicate PV YjFs3QZw30enKplsoQ5YmFFnDV08Owyy: using /dev/sdb not /dev/sddlmaa
  Found duplicate PV YjFs3QZw30enKplsoQ5YmFFnDV08Owyy: using /dev/sdc not /dev/sdb
  Found duplicate PV YjFs3QZw30enKplsoQ5YmFFnDV08Owyy: using /dev/sdd not /dev/sdc
  Found duplicate PV YjFs3QZw30enKplsoQ5YmFFnDV08Owyy: using /dev/sde not /dev/sdd
  --- Physical volume ---
  PV Name               /dev/sde
  VG Name               vg_data
  PV Size               2.44 TiB / not usable 4.00 MiB
  Allocatable           yes 
  PE Size               4.00 MiB
  Total PE              639999
  Free PE               367
  Allocated PE          639632
  PV UUID               YjFs3Q-Zw30-enKp-lsoQ-5YmF-FnDV-08Owyy

The cause: by default LVM scans every block device. With multipathing, several devices (each individual path plus the multipath device itself) expose identical LVM metadata at the start of the disk, so they all report the same PV.

So the fix is simply to edit the LVM configuration file and change the scan filter.
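
A quick way to confirm that /dev/sdb through /dev/sde and /dev/sddlmaa really are paths to the same LUN is to compare their SCSI WWIDs. A minimal sketch, assuming scsi_id lives in /lib/udev as on RHEL 6 and that the multipath device answers SCSI inquiries like its member paths:

# All five devices should print the same WWID if they are paths to one LUN
for d in sdb sdc sdd sde sddlmaa; do
    printf '%-10s %s\n' "$d" "$(/lib/udev/scsi_id --whitelisted --device=/dev/$d)"
done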

My fdisk -l output:


Disk /dev/sda: 1199.1 GB, 1199101181952 bytes
255 heads, 63 sectors/track, 145782 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0000a399

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          26      204800   83  Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2              26        8185    65536000   82  Linux swap / Solaris
/dev/sda3            8185      145783  1105255424   8e  Linux LVM

Disk /dev/sdb: 2684.4 GB, 2684354560000 bytes
255 heads, 63 sectors/track, 326354 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000


Disk /dev/sdc: 2684.4 GB, 2684354560000 bytes
255 heads, 63 sectors/track, 326354 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000


Disk /dev/mapper/VolGroup-LogVol00: 1131.8 GB, 1131774214144 bytes
255 heads, 63 sectors/track, 137597 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000


Disk /dev/sdd: 2684.4 GB, 2684354560000 bytes
255 heads, 63 sectors/track, 326354 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000


Disk /dev/sde: 2684.4 GB, 2684354560000 bytes
255 heads, 63 sectors/track, 326354 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000


Disk /dev/sddlmaa: 2684.4 GB, 2684354560000 bytes
255 heads, 63 sectors/track, 326354 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000


Disk /dev/mapper/vg_data-lv_data: 2682.8 GB, 2682811056128 bytes
255 heads, 63 sectors/track, 326166 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

 

 

Edit the file /etc/lvm/lvm.conf.

Change the original

	filter = [ "a/.*/" ]

to

	filter = [ "a/sddl.*/","a/sda.*/", "r/sd.*/" ]

a means accept, r means reject.

Since I also have LVM on sda, it is accepted explicitly.
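
Before making the change permanent, you can preview what a filter would do from the command line. A sketch, assuming the --config override of lvm2 on RHEL 6:

# Preview which PVs LVM reports with the new filter, without editing lvm.conf
pvs --config 'devices { filter = [ "a/sddl.*/", "a/sda.*/", "r/sd.*/" ] }'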

 

After the change, run vgscan -v to rebuild the LVM cache.

Then run lvmdiskscan to check which device paths LVM now sees:

 

[root@whljk ~]# lvmdiskscan 
  /dev/ram0            [      16.00 MiB] 
  /dev/sddlmaa         [       2.44 TiB] LVM physical volume
  /dev/root            [       1.03 TiB] 
  /dev/ram1            [      16.00 MiB] 
  /dev/sda1            [     200.00 MiB] 
  /dev/vg_data/lv_data [       2.44 TiB] 
  /dev/ram2            [      16.00 MiB] 
  /dev/sda2            [      62.50 GiB] 
  /dev/ram3            [      16.00 MiB] 
  /dev/sda3            [       1.03 TiB] LVM physical volume
  /dev/ram4            [      16.00 MiB] 
  /dev/ram5            [      16.00 MiB] 
  /dev/ram6            [      16.00 MiB] 
  /dev/ram7            [      16.00 MiB] 
  /dev/ram8            [      16.00 MiB] 
  /dev/ram9            [      16.00 MiB] 
  /dev/ram10           [      16.00 MiB] 
  /dev/ram11           [      16.00 MiB] 
  /dev/ram12           [      16.00 MiB] 
  /dev/ram13           [      16.00 MiB] 
  /dev/ram14           [      16.00 MiB] 
  /dev/ram15           [      16.00 MiB] 
  2 disks
  18 partitions
  1 LVM physical volume whole disk
  1 LVM physical volume

 

The devices now show up as expected.

Setting permissions on multipath disks with udev on Red Hat 6


On Red Hat 6, restarting the multipathd service resets the disks' ownership back to root, so udev rules are needed to bind the ownership and permissions.

Check the devices:

[root@oradbca ~]# dmsetup ls 
data01  (253:2)
crs03   (253:3)
VG0-LV_ROOT     (253:6)
crs02   (253:1)
crs01   (253:0)
data03  (253:5)
data02  (253:4)

 

Configure the udev rules:

[root@oradbca ~]# cat /etc/udev/rules.d/12-dm-permissions.rules 
ENV{DM_NAME}=="crs01", OWNER:="grid", GROUP:="asmadmin", MODE:="660", SYMLINK+="iscsi/oraasm-$env{DM_NAME}"
ENV{DM_NAME}=="crs02", OWNER:="grid", GROUP:="asmadmin", MODE:="660", SYMLINK+="iscsi/oraasm-$env{DM_NAME}"
ENV{DM_NAME}=="crs03", OWNER:="grid", GROUP:="asmadmin", MODE:="660", SYMLINK+="iscsi/oraasm-$env{DM_NAME}"
ENV{DM_NAME}=="data01", OWNER:="grid", GROUP:="asmadmin", MODE:="660", SYMLINK+="iscsi/oraasm-$env{DM_NAME}"
ENV{DM_NAME}=="data02", OWNER:="grid", GROUP:="asmadmin", MODE:="660", SYMLINK+="iscsi/oraasm-$env{DM_NAME}"
ENV{DM_NAME}=="data03", OWNER:="grid", GROUP:="asmadmin", MODE:="660", SYMLINK+="iscsi/oraasm-$env{DM_NAME}"

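If you would rather not restart multipathd just to apply the rules, udev can usually re-apply them in place. A minimal sketch, using the same RHEL 6 udevadm commands shown later in this archive:

# Reload the rule files and replay "change" events for block devices,
# so the ENV{DM_NAME} matches above are re-evaluated
udevadm control --reload-rules
udevadm trigger --action=change --subsystem-match=block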
 

Restart the service:

	service multipathd restart

 

Check the permissions:

[root@oradbca ~]# ll /dev/dm-*
brw-rw---- 1 grid asmadmin 253, 0 Jul 23 14:26 /dev/dm-0
brw-rw---- 1 grid asmadmin 253, 1 Jul 23 14:26 /dev/dm-1
brw-rw---- 1 grid asmadmin 253, 2 Jul 13 03:48 /dev/dm-2
brw-rw---- 1 grid asmadmin 253, 3 Jul 23 14:26 /dev/dm-3
brw-rw---- 1 grid asmadmin 253, 4 Jul 13 03:48 /dev/dm-4
brw-rw---- 1 grid asmadmin 253, 5 Jul 23 14:26 /dev/dm-5
brw-rw---- 1 root disk     253, 6 Jul 13 03:04 /dev/dm-6

 

My multipath configuration file:

blacklist {
        devnode "^(sda)"
#       devnode "^(ram|raw|loop|fd|md|sr|scd|st|sdh|sdq)[0-9]*"
        devnode "^cciss.*"
}

## Use user friendly names, instead of using WWIDs as names.
defaults {
        user_friendly_names yes
        max_fds             max
        queue_without_daemon no  
        flush_on_last_del yes 
        dev_loss_tmo infinity
        fast_io_fail_tmo 5
}


devices {
        device {
                vendor                  "NetAPP"
                product                 "FAS2240"
                path_grouping_policy    group_by_prio
                features                "3 queue_if_no_path pg_init_retries 50"
                prio                    "ontap"
                getuid_callout          "/lib/udev/scsi_id -g -u -d /dev/%n"
                path_checker            tur
                path_selector           "round-robin 0"
                hardware_handler        "0"
                failback                immediate
                rr_weight               uniform
                rr_min_io               128
        }
}




multipaths {
        multipath {
                wwid                    360a98000426b5953423f456e30774c61
                alias                   crs01
        }
        multipath {
                wwid                    360a98000426b5953423f456e30774c63
                alias                   crs02
        }
        multipath {
                wwid                    360a98000426b5953423f456e30774c65
                alias                   crs03
        }
        multipath {
                wwid                    360a98000426b5953423f456e30774c67
                alias                   data01
        }
        multipath {
                wwid                    360a98000426b5953423f456e30774c69
                alias                   data02
        }
        multipath {
                wwid                    360a98000426b5953423f456e30774c6b
                alias                   data03
        }
}
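
For reference, the wwid values in the multipaths section come straight from the existing maps; multipath can list them together with their member paths:

# Show each multipath map with its WWID and paths
multipath -ll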

Using udev for Oracle ASM


The following shows how to bind disks with udev on RHEL 5 and RHEL 6 using their SCSI IDs.

 

RHEL5:

1. Check which disks need to be bound

ll /dev/sd*

 

2. Generate the rules file

for i in b c d e ;
do
echo "KERNEL==\"sd*\", BUS==\"scsi\", PROGRAM==\"/sbin/scsi_id -g -u -s %p\", RESULT==\"`scsi_id -g -u -s /block/sd$i`\", NAME=\"asm-disk$i\", OWNER=\"grid\", GROUP=\"asmadmin\", MODE=\"0660\"" >> /tmp/99-oracle-asmdevices.rules
done

 

3. Review 99-oracle-asmdevices.rules

cat /tmp/99-oracle-asmdevices.rules

 

4. Copy it into the udev rules directory (for RAC, copy the same file to the other node as well)

cp /tmp/99-oracle-asmdevices.rules /etc/udev/rules.d/99-oracle-asmdevices.rules

 

Restart udev:

start_udev
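
After start_udev, the device names created by the rules and their ownership can be checked (names as generated by the loop above):

ls -l /dev/asm-disk*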

 

 

RHEL6:

1. Check which disks need to be bound

ll /dev/sd*

 

 

2. Generate the rules file (either of the two loops below works; the second uses the long option names)

for i in b c d e ;
do
echo "KERNEL==\"sd*\", BUS==\"scsi\", PROGRAM==\"/sbin/scsi_id -g -u -d /dev/$name\", RESULT==\"`scsi_id -g -u -d /dev/sd$i`\", NAME=\"asm-disk$i\", OWNER=\"grid\", GROUP=\"asmadmin\", MODE=\"0660\"" >> /tmp/99-oracle-asmdevices.rules
done

for i in b c d e ;
do
echo "KERNEL==\"sd*\", BUS==\"scsi\", PROGRAM==\"/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/\$name\", RESULT==\"`/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/sd$i`\", NAME=\"asm-disk$i\", OWNER=\"grid\", GROUP=\"asmadmin\", MODE=\"0660\""      >> /tmp/99-oracle-asmdevices.rules
done

 

3. Review 99-oracle-asmdevices.rules

cat /tmp/99-oracle-asmdevices.rules

 

4. Copy it into the udev rules directory (for RAC, copy the same file to the other node as well)

cp /tmp/99-oracle-asmdevices.rules /etc/udev/rules.d/99-oracle-asmdevices.rules

 

Restart udev:

start_udev

 

Managing udev

1. Test that the rules work as expected (udevtest / udevadm test).

# #OL5
# udevtest /block/sdb
# udevtest /block/sdc
# udevtest /block/sdd
# udevtest /block/sde

# #OL6
# udevadm test /block/sdb
# udevadm test /block/sdc
# udevadm test /block/sdd
# udevadm test /block/sde

 

2. Restart the udev service.

# #OL5
# /sbin/udevcontrol reload_rules

# #OL6
# udevadm control --reload-rules

# #OL5 and OL6
# /sbin/start_udev

 

3. Show udev info for a device.

# #OL5
# udevinfo -q all -n /dev/sdb

# #OL6
# udevadm info --query=all --path=/sys/block/sdb
# udevadm info --query=all --name=asm-diska

Configure Direct NFS Client (DNFS) on Linux (11g)


Oracle 11g introduces the Direct NFS Client (DNFS). Once configured, the database accesses files on the NFS server directly, avoiding the overhead imposed by the OS kernel NFS client.

DIRECT NFS CLIENT OVERVIEW

      Standard NFS client software, provided by the operating system, is not optimized for Oracle Database file I/O access patterns.  With Oracle Database 11g, you can configure Oracle Database to access NFS V3 NAS devices directly using Oracle Direct NFS Client, rather than using the operating system kernel NFS client.  Oracle Database will access files stored on the NFS server directly through the integrated Direct NFS Client eliminating the overhead imposed by the operating system kernel NFS.  These files are also accessible via the operating system kernel NFS client thereby allowing seamless administration.

 

 

The Direct NFS client can take its mount point settings from one of the following three locations:

1. $ORACLE_HOME/dbs/oranfstab

2. /etc/oranfstab

3. /etc/mtab

 

Notes:

Direct NFS Client can use a new configuration file or the mount tab file (/etc/mtab on Linux) to determine the mount point settings for NFS storage devices.

The oranfstab file is required only when configuring Direct NFS for load balancing or for settings specific to a single database. You can still enable Direct NFS without configuring an oranfstab file; DNFS will then take the NFS mount point settings from /etc/mtab on Linux.

 

In RAC, oranfstab must be configured on all nodes, and the /etc/oranfstab file must be kept synchronized across them.

When the oranfstab file is placed in $ORACLE_HOME/dbs, the entries in the file are specific to a single database. In this case, all nodes running an Oracle RAC database use the same $ORACLE_HOME/dbs/oranfstab file.

When the oranfstab file is placed in /etc, it is globally available to all Oracle databases and can contain mount points used by all Oracle databases running on nodes in the cluster, including single-instance databases. However, on Oracle RAC systems, if the oranfstab file is placed in /etc, you must replicate the /etc/oranfstab file on all nodes and keep each copy synchronized, just as you must with the /etc/fstab file.

 

Configuration steps:

1. Edit oranfstab

[root@oradbca ~]# cat /etc/oranfstab 
server: oradbca
path: 192.168.1.11
export: /oraclenfsserver        mount: /oraclenfs

 

server: the NFS server name
path: up to four network paths to the NFS server, given as IP addresses or hostnames (see the sketch after this list)
export: the exported path on the NFS server
mount: the local mount point for the NFS export
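
As noted for path above, several network paths can be listed for load balancing. A minimal sketch based on the same syntax as the file above; the second address is purely hypothetical:

server: oradbca
path: 192.168.1.11
path: 192.168.1.12
export: /oraclenfsserver        mount: /oraclenfs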

 

2. Link in the Direct NFS ODM library

[oracle@oradbca ~]$ cd $ORACLE_HOME/lib
[oracle@oradbca lib]$ mv libodm11.so libodm11.so_bak 
[oracle@oradbca lib]$ ln -s libnfsodm11.so libodm11.so
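
On 11gR2 the same library switch is also exposed as a make target, which avoids renaming the file by hand. A sketch, assuming the dnfs_on target present in 11.2's ins_rdbms.mk:

cd $ORACLE_HOME/rdbms/lib
make -f ins_rdbms.mk dnfs_on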

3. Mount the NFS file system

[root@oradbca~]# mount -t nfs 192.168.1.11:/oraclenfsserver /oraclenfs

 

4. Start the database

SQL> startup
ORACLE instance started.

Total System Global Area  422670336 bytes
Fixed Size                  1336960 bytes
Variable Size             281020800 bytes
Database Buffers          134217728 bytes
Redo Buffers                6094848 bytes
Database mounted.
Database opened.

 

The alert log will show:

Oracle instance running with ODM: Oracle Direct NFS ODM Library Version 2.0 
Sat Apr 20 01:36:16 2013

 

5. Test by creating a new tablespace

SQL> create tablespace test datafile '/oraclenfs/test.dbf' size 1m ;

Alert log output:

create tablespace test datafile '/oraclenfs/test.dbf' size 1m 
Direct NFS: NFS3ERR 1 Not owner. path oradbca mntport 728 nfsport 2049
Direct NFS: NFS3ERR 1 Not owner. path oradbca mntport 728 nfsport 2049

 

6. Check the dynamic performance views

SQL> desc v$dnfs_servers;
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 ID                                                 NUMBER
 SVRNAME                                            VARCHAR2(255)
 DIRNAME                                            VARCHAR2(1024)
 MNTPORT                                            NUMBER
 NFSPORT                                            NUMBER
 WTMAX                                              NUMBER
 RTMAX                                              NUMBER

SQL> 
SQL> 
SQL> select svrname,dirname,mntport,nfsport from v$dnfs_servers;

SVRNAME              DIRNAME                           MNTPORT    NFSPORT
-------------------- ------------------------------ ---------- ----------
oradbca                 /oraclenfsserver                      728       2049

 

Oracle mount options for NFS


When NFS is used in an Oracle environment with the default mount options, Oracle backups and other operations will throw errors.

The options below are the ones recommended in the official note: Mount Options for Oracle files when used with NFS on NAS devices [ID 359515.1].

 

RAC (including RAC One Node and single-instance RAC)

 

In the tables below:

  • Binaries is the shared mount point where the Oracle Home and CRS_HOME are installed.
  • Datafiles includes online logs, controlfiles and datafiles.
  • nfsvers and vers are identical on those OS platforms that have nfsvers. The vers option is an alternative to the nfsvers option, included for compatibility with other operating systems.
  • Note that the mount options in each of the following cells apply only to the type of files listed in the column heading.
  • For RMAN backup sets, image copies, and Data Pump dump files, the "NOAC" mount option should not be specified, because RMAN and Data Pump do not check this option and specifying it can adversely affect performance.

 

Sun Solaris *
  Binaries:                 rw,bg,hard,nointr,rsize=32768,wsize=32768,proto=tcp,noac,vers=3,suid
  Oracle Datafiles:         rw,bg,hard,nointr,rsize=32768,wsize=32768,proto=tcp,noac,forcedirectio,vers=3,suid
  CRS Voting Disk and OCR:  rw,bg,hard,nointr,rsize=32768,wsize=32768,proto=tcp,vers=3,noac,forcedirectio

AIX (5L) **
  Binaries:                 rw,bg,hard,nointr,rsize=32768,wsize=32768,proto=tcp,vers=3,timeo=600
  Oracle Datafiles:         cio,rw,bg,hard,nointr,rsize=32768,wsize=32768,proto=tcp,noac,vers=3,timeo=600
  CRS Voting Disk and OCR:  cio,rw,bg,hard,intr,rsize=32768,wsize=32768,tcp,noac,vers=3,timeo=600

HPUX 11.23 ****
  Binaries:                 rw,bg,vers=3,proto=tcp,noac,hard,nointr,timeo=600,rsize=32768,wsize=32768,suid
  Oracle Datafiles:         rw,bg,vers=3,proto=tcp,noac,forcedirectio,hard,nointr,timeo=600,rsize=32768,wsize=32768,suid
  CRS Voting Disk and OCR:  rw,bg,vers=3,proto=tcp,noac,forcedirectio,hard,nointr,timeo=600,rsize=32768,wsize=32768,suid

Linux x86 #
  Binaries:                 rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,vers=3,timeo=600,actimeo=0
  Oracle Datafiles:         rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,actimeo=0,vers=3,timeo=600
  CRS Voting Disk and OCR:  rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,noac,vers=3,timeo=600

Linux x86-64 #
  Binaries:                 rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,vers=3,timeo=600,actimeo=0
  Oracle Datafiles:         rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,actimeo=0,vers=3,timeo=600
  CRS Voting Disk and OCR:  rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,vers=3,timeo=600,noac

Linux – Itanium
  Binaries:                 rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,vers=3,timeo=600,actimeo=0
  Oracle Datafiles:         rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,actimeo=0,vers=3,timeo=600
  CRS Voting Disk and OCR:  rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,noac,vers=3,timeo=600

* NFS mount option “forcedirectio” is required on Solaris platforms when mounting the OCR/CRS files when using Oracle 10.1.0.4 or 10.2.0.2 or later (Oracle unpublished bug 4466428) 
** AIX is only supported with NAS on AIX 5.3 TL04 and higher with Oracle 10.2.0.1 and later (NetApp) 
*** NAS devices are only supported with HPUX 11.23 or higher ONLY 

# These mount options are for Linux kernels 2.6 and above. For older kernels please check Note 279393.1

Due to Unpublished bug 5856342, it is necessary to use the following init.ora parameter when using NAS with all versions of RAC on Linux (x86 & X86-64 platforms) until 10.2.0.4. This bug is fixed and included in 10.2.0.4 patchset.
filesystemio_options = DIRECTIO

 

Single Instance (non-RAC)

Sun Solaris * (8, 9, 10)
  Binaries:                 rw,bg,hard,rsize=32768,wsize=32768,vers=3,nointr,proto=tcp,suid
  Oracle Datafiles:         rw,bg,hard,rsize=32768,wsize=32768,vers=3,[forcedirectio or llock],nointr,proto=tcp,suid

AIX (5L) **
  Binaries:                 rw,bg,hard,rsize=32768,wsize=32768,vers=3,intr,timeo=600,proto=tcp
  Oracle Datafiles:         rw,bg,hard,rsize=32768,wsize=32768,vers=3,cio,intr,timeo=600,proto=tcp

HPUX 11.23 ****
  Binaries:                 rw,bg,hard,rsize=32768,wsize=32768,vers=3,nointr,timeo=600,proto=tcp,suid
  Oracle Datafiles:         rw,bg,hard,rsize=32768,wsize=32768,vers=3,nointr,timeo=600,proto=tcp,suid,forcedirectio

Linux x86 #
  Binaries:                 rw,bg,hard,rsize=32768,wsize=32768,vers=3,nointr,timeo=600,tcp
  Oracle Datafiles:         rw,bg,hard,rsize=32768,wsize=32768,vers=3,nointr,timeo=600,tcp

Linux x86-64 #
  Binaries:                 rw,bg,hard,rsize=32768,wsize=32768,vers=3,nointr,timeo=600,tcp
  Oracle Datafiles:         rw,bg,hard,rsize=32768,wsize=32768,vers=3,nointr,timeo=600,tcp

Linux – Itanium
  Binaries:                 rw,bg,hard,rsize=32768,wsize=32768,vers=3,nointr,timeo=600,tcp
  Oracle Datafiles:         rw,bg,hard,rsize=32768,wsize=32768,vers=3,nointr,timeo=600,tcp
* actimeo=0 or noac can be used

To have the mount set up at boot, add it to /etc/fstab.

Example:

192.168.1.11:/data      /u01/oraclebak          nfs     rw,bg,hard,rsize=32768,wsize=32768,vers=3,nointr,timeo=600,tcp        0  0
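
Once the entry is in place, the mount can be brought up and verified without a reboot (paths as in the example line above):

mount /u01/oraclebak
mount | grep oraclebak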

 

The meanings of the options are as follows:
rw: mount the file system read-write (it must also be exported read-write)
ro: mount the file system read-only
bg: if the mount fails (the server does not respond), keep retrying in the background and continue with other mount requests
hard: hard mount (the default); if the server goes down, operations that try to access it block until the server comes back
soft: soft mount; if the server goes down, operations that try to access it fail and return an error; useful to keep processes from hanging on unimportant mounts
intr: allow users to interrupt blocked operations (they then return an error)
nointr: do not allow users to interrupt
retrans=n: on soft mounts, the number of times a request is retried before an error is returned
timeo=n: request timeout, in tenths of a second
rsize=n: read buffer size in bytes; applies to both TCP and UDP mounts, though the optimal values differ (32K works well)
wsize=n: write buffer size in bytes; applies to both TCP and UDP mounts
nfsvers=n: NFS protocol version, 2 or 3 (normally negotiated automatically)
tcp: transport over TCP; the default is UDP
fg: the opposite of bg, and the default
mountport: set the mount port