Setting up a Maven development environment to test Hadoop's HDFS file system commands

1. Eclipse is already installed on the PC; the test environment is Windows 10 plus a CentOS 6.8 virtual machine.

2. Create a new Maven project.
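If you prefer the command line to the Eclipse "New Maven Project" wizard, a minimal sketch of creating an equivalent project is shown below. The groupId, artifactId, and version match the pom.xml in the next step; maven-archetype-quickstart is just one convenient starting archetype and is an assumption, not something from the original post.

# Run in a terminal; creates a bigdata001 folder with the standard Maven layout
mvn archetype:generate -DgroupId=com.neusoft -DartifactId=bigdata001 -Dversion=0.0.1-SNAPSHOT -DarchetypeArtifactId=maven-archetype-quickstart -DinteractiveMode=false

After generation, the project can still be imported into Eclipse via "Import > Existing Maven Projects".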

        

3. Open pom.xml and add the following content to the Maven project's POM file:

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.neusoft</groupId>
    <artifactId>bigdata001</artifactId>
    <version>0.0.1-SNAPSHOT</version>
    <name>bigdata001</name>

    <dependencies>
        <!-- https://mvnrepository.com/artifact/org.apache.hadoop/hadoop-common -->
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-common</artifactId>
            <version>2.6.0</version>
        </dependency>
        <!-- https://mvnrepository.com/artifact/org.apache.hadoop/hadoop-hdfs -->
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-hdfs</artifactId>
            <version>2.6.0</version>
        </dependency>
        <!-- https://mvnrepository.com/artifact/org.apache.hadoop/hadoop-client -->
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-client</artifactId>
            <version>2.6.0</version>
        </dependency>
        <!-- tools.jar from the local JDK (workaround for the jdk.tools artifact referenced by hadoop-common) -->
        <dependency>
            <groupId>jdk.tools</groupId>
            <artifactId>jdk.tools</artifactId>
            <version>1.7</version>
            <scope>system</scope>
            <systemPath>${JAVA_HOME}/lib/tools.jar</systemPath>
        </dependency>
        <!-- https://mvnrepository.com/artifact/org.apache.zookeeper/zookeeper -->
        <dependency>
            <groupId>org.apache.zookeeper</groupId>
            <artifactId>zookeeper</artifactId>
            <version>3.4.6</version>
        </dependency>
    </dependencies>

    <build>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
                <version>2.3.2</version>
                <configuration>
                    <encoding>UTF-8</encoding>
                    <source>1.7</source>
                    <target>1.7</target>
                    <showWarnings>true</showWarnings>
                </configuration>
            </plugin>
        </plugins>
    </build>
</project>

 4. Next, create a new package named hdfs under src/main/java/ and add a Java class, FileSystemTest.java.

         

5. The content of FileSystemTest.java is as follows:

package hdfs;

import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class FileSystemTest {
    public static void main(String[] args) throws Exception {
        // Connect to the HDFS NameNode running on the CentOS VM
        FileSystem fileSystem = FileSystem.newInstance(new URI("hdfs://neusoft-master:9000"), new Configuration());
        // List the entries directly under the HDFS root directory
        FileStatus[] listStatus = fileSystem.listStatus(new Path("/"));
        for (FileStatus fileStatus : listStatus) {
            System.out.println(fileStatus);
        }
        fileSystem.close();
    }
}

6. The output of the run is as follows:

log4j:WARN No appenders could be found for logger (org.apache.hadoop.metrics2.lib.MutableMetricsFactory).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
FileStatus{path=hdfs://neusoft-master:9000/hbase; isDirectory=true; modification_time=1483712306703; access_time=0; owner=root; group=supergroup; permission=rwxr-xr-x; isSymlink=false}
FileStatus{path=hdfs://neusoft-master:9000/tmp; isDirectory=true; modification_time=1483709831059; access_time=0; owner=root; group=supergroup; permission=rwx------; isSymlink=false}
FileStatus{path=hdfs://neusoft-master:9000/user; isDirectory=true; modification_time=1483709981792; access_time=0; owner=root; group=supergroup; permission=rwxr-xr-x; isSymlink=false}

      Running this Java class lists the top-level directories of Hadoop's HDFS file system.
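Beyond listStatus, the same FileSystem handle can exercise other common HDFS operations from the title of this post. The sketch below is a hypothetical extension, not part of the original code: it assumes the same hdfs://neusoft-master:9000 address, and the paths /demo and D:/test.txt are made-up examples; the client runs as the local user, so directory permissions on the cluster may need adjusting.

package hdfs;

import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class FileSystemOpsTest {
    public static void main(String[] args) throws Exception {
        FileSystem fileSystem = FileSystem.newInstance(new URI("hdfs://neusoft-master:9000"), new Configuration());

        // Create a directory (like "hdfs dfs -mkdir -p /demo")
        fileSystem.mkdirs(new Path("/demo"));

        // Upload a local file (like "hdfs dfs -put D:/test.txt /demo/")
        fileSystem.copyFromLocalFile(new Path("D:/test.txt"), new Path("/demo/test.txt"));

        // Check that the file now exists
        System.out.println(fileSystem.exists(new Path("/demo/test.txt")));

        // Delete the directory recursively (like "hdfs dfs -rm -r /demo")
        fileSystem.delete(new Path("/demo"), true);

        fileSystem.close();
    }
}

The same results can be cross-checked on the CentOS VM with the HDFS shell, e.g. hdfs dfs -ls /.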

7. The test was run against the Hadoop pseudo-distributed environment set up in the VM.

    

8. Summary

    The steps above show how to view the contents of the HDFS file system programmatically from Eclipse on Windows.

 Note: the code uses hdfs://neusoft-master:9000, so the hostname neusoft-master must be configured on both the Windows host and the Linux VM.

    On Windows, open C:\Windows\System32\drivers\etc\hosts as administrator and add "192.168.191.130 NEUSOFT-MASTER".

    On Linux, change the hostname (vi /etc/sysconfig/network and edit the HOSTNAME line) and edit /etc/hosts (vi /etc/hosts) to map the IP address to the hostname.
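For reference, the entries could look like the following; the IP 192.168.191.130 and hostname neusoft-master come from this post, so substitute your own VM address.

# Linux: /etc/hosts
192.168.191.130   neusoft-master

# Linux: /etc/sysconfig/network (CentOS 6)
NETWORKING=yes
HOSTNAME=neusoft-master

# Windows: C:\Windows\System32\drivers\etc\hosts (edit as administrator)
192.168.191.130   neusoft-master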

Blog: http://www.cnblogs.com/jackchen-Net/
Original post: https://www.cnblogs.com/jackchen-Net/p/6264529.html