Mounting HDFS to a local or remote directory via NFSv3 (for non-Kerberos setups)

This covers the most basic configuration. For AIX, Kerberos, and other setups, see http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HdfsNfsGateway.html

Once HDFS is mounted locally via NFSv3, the following operations become possible:

  • Users can browse the HDFS file system through their local file system on NFSv3 client compatible operating systems.
  • Users can download files from the HDFS file system on to their local file system.
  • Users can upload files from their local file system directly to the HDFS file system.
  • Users can stream data directly to HDFS through the mount point. File append is supported but random write is not supported.

I. Official configuration overview

1. Update the relevant settings in core-site.xml

<property>
  <name>hadoop.proxyuser.nfsserver.groups</name>
  <value>root,users-group1,users-group2</value>
  <description>The 'nfsserver' user is allowed to proxy all members of the
    'users-group1' and 'users-group2' groups. Note that in most cases you will need
    to include the group "root" because the user "root" (which usually belongs to the
    "root" group) will generally be the user that initially executes the mount on the
    NFS client system. Set this to '*' to allow the nfsserver user to proxy any group.

    The NFS gateway uses a proxy user to proxy all users accessing the NFS mount. In
    non-secure mode, the user running the NFS gateway is the proxy user, so the
    'nfsserver' placeholder in the property name should be replaced with the name of
    the proxy user that starts nfs3.
  </description>
</property>

<property>
  <name>hadoop.proxyuser.nfsserver.hosts</name>
  <value>nfs-client-host1.com</value>
  <description>This is the host where the nfs gateway is running. Set this to '*' to
    allow requests from any hosts to be proxied.

    The host names that are allowed to mount the export.
  </description>
</property>
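
After editing these proxy-user settings, the NameNode has to reload them before the gateway will work. A minimal sketch, assuming a running cluster and that $HADOOP_HOME points at the Hadoop installation; the refresh command avoids a full NameNode restart (on some versions a restart may still be required):

   [hdfs]$ $HADOOP_HOME/bin/hdfs dfsadmin -refreshSuperUserGroupsConfiguration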

2. Update the relevant settings in hdfs-site.xml

<property>
  <name>dfs.namenode.accesstime.precision</name>
  <value>3600000</value>
  <description>The access time for HDFS file is precise upto this value.
    The default value is 1 hour. Setting a value of 0 disables
    access times for HDFS. (This is the default; if you do not need to
    change it, this property can be omitted.)
  </description>
</property>
<property>
  <name>nfs.dump.dir</name>
  <value>/tmp/.hdfs-nfs</value>
  <description>Users are expected to update the file dump directory. NFS clients often
    reorder writes, especially when the export is not mounted with the "sync" option.
    Sequential writes can arrive at the NFS gateway in random order. This directory is
    used to temporarily save out-of-order writes before writing to HDFS. For each file,
    the out-of-order writes are dumped after they accumulate to exceed a certain
    threshold (e.g., 1MB) in memory. One needs to make sure the directory has enough
    space. For example, if the application uploads 10 files of 100MB each, it is
    recommended that this directory have roughly 1GB of space in case a worst-case
    write reorder happens to every file. Only the NFS gateway needs to restart after
    this property is updated.
  </description>
</property>
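
It is worth provisioning the dump directory by hand before starting the gateway; a sketch assuming the gateway runs as the hdfs user and that the partition holding /tmp has enough free space:

   [root]> mkdir -p /tmp/.hdfs-nfs
   [root]> chown hdfs:hdfs /tmp/.hdfs-nfs
   [root]> df -h /tmp        # confirm enough space for worst-case write reordering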

<property>
  <name>nfs.exports.allowed.hosts</name>
  <value>* rw</value>
</property>
<property>
  <name>nfs.superuser</name>
  <value>the_name_of_hdfs_superuser</value>
  <description>The user that runs the NameNode process. Unset by default. If set,
    this user on any NFS client allowed by nfs.exports.allowed.hosts can access any
    file on HDFS.
  </description>
</property>
<property>
  <name>nfs.metrics.percentiles.intervals</name>
  <value>100</value>
  <description>Enable the latency histograms for read, write and
     commit requests. The time unit is 100 seconds in this example.
  </description>
</property>

Export point. One can specify the NFS export point of HDFS. Exactly one export point is supported.
A full path is required when configuring the export point. By default, the export point is the root directory "/".

<property>
  <name>nfs.export.point</name>
  <value>/</value>
</property>

II. Practice

1. Update core-site.xml

<property>
  <name>hadoop.proxyuser.hadoop.groups</name>
  <value>*</value>
  <description>Here the gateway is started by the 'hadoop' user, so the proxy user
    in the property name is 'hadoop'; '*' lets it proxy members of any group.
  </description>
</property>

<property>
  <name>hadoop.proxyuser.hadoop.hosts</name>
  <value>*</value>
  <description>This is the host where the nfs gateway is running. Set this to '*' to
    allow requests from any hosts to be proxied.
  </description>
</property>

2. Update hdfs-site.xml

<property>
  <name>nfs.dump.dir</name>
  <value>/home/hadoop/data/.hdfs-nfs</value>
</property>
<property>
  <name>nfs.exports.allowed.hosts</name>
  <value>* rw</value>
</property>
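
As with the official config above, the dump directory must exist and be writable by the gateway user before nfs3 starts; a sketch assuming the gateway runs as the hadoop user:

   [root]> mkdir -p /home/hadoop/data/.hdfs-nfs
   [root]> chown -R hadoop:hadoop /home/hadoop/data/.hdfs-nfs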

3. JVM and log configuration

   Log (add to log4j.properties):

   log4j.logger.org.apache.hadoop.hdfs.nfs=DEBUG

   log4j.logger.org.apache.hadoop.oncrpc=DEBUG

   JVM (add to hadoop-env.sh):

   export HADOOP_NFS3_OPTS="$HADOOP_NFS3_OPTS"

   export HADOOP_PORTMAP_OPTS="-Xmx512m $HADOOP_PORTMAP_OPTS"

4. Start nfs3 and portmap

   1) Stop the system NFSv3 service and rpcbind/portmap

        [root]> service nfs stop

        [root]> service rpcbind stop

   2) Start Hadoop's portmap

        [root]> $HADOOP_HOME/bin/hdfs --daemon start portmap

   3) Start nfs3

        [hdfs]$ $HADOOP_HOME/bin/hdfs --daemon start nfs3
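
        If both daemons came up cleanly, jps on the gateway host should list a Portmap and an Nfs3 process (exact class names may vary slightly across Hadoop versions; check the daemon logs under $HADOOP_HOME/logs if either is missing):

        [root]> jps | grep -Ei 'portmap|nfs3'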

5. Verify the NFS service is available

   1) Confirm all services are up and running

       [root]> rpcinfo -p $nfs_server_ip

       Output similar to the following indicates success:

       program  vers  proto  port
       100005   1     tcp    4242  mountd
       100005   2     udp    4242  mountd
       100005   2     tcp    4242  mountd
       100000   2     tcp    111   portmapper
       100000   2     udp    111   portmapper
       100005   3     udp    4242  mountd
       100005   1     udp    4242  mountd
       100003   3     tcp    2049  nfs
       100005   3     tcp    4242  mountd

   2) Verify the HDFS namespace is exported and can be mounted

       [root]> showmount -e $nfs_server_ip

       Output like the following indicates success:

        Exports list on $nfs_server_ip :

        / (everyone)

6. Mount the export "/"

   [root]> mkdir -p $mount_point

   [root]> mount -t nfs -o vers=3,proto=tcp,nolock,noacl,sync $server:/ $mount_point
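
   Once mounted, the mount point behaves like an ordinary directory for browsing, copying, and appending. A quick smoke test, reusing $mount_point from above (random writes, such as editing the middle of an existing file, are expected to fail):

   [root]> ls $mount_point                            # browse the HDFS namespace
   [root]> cp /etc/hosts $mount_point/hosts.copy      # upload a local file
   [root]> cat /etc/hosts >> $mount_point/hosts.copy  # append is supported
   [root]> cp $mount_point/hosts.copy /tmp/           # download back to the local FS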

Done!

7. The HDFS file system can also be mounted on a remote node, even one outside the Hadoop cluster:

  Run step 6 (the mount) on the remote machine.

  Prerequisite: the remote machine and the NFSv3 server host can reach each other on the network (ping works in both directions).
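
  Before mounting remotely, the checks from step 5 can be reused from the remote host; a sketch assuming $nfs_server_ip is the gateway's address and no firewall blocks ports 111 (portmapper), 2049 (nfs), and 4242 (mountd):

  [root]> ping -c 3 $nfs_server_ip
  [root]> rpcinfo -p $nfs_server_ip
  [root]> showmount -e $nfs_server_ip
  [root]> mount -t nfs -o vers=3,proto=tcp,nolock,noacl,sync $nfs_server_ip:/ $mount_point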

Original article: https://www.cnblogs.com/roger888/p/6097820.html