Spark Lab: Writing a Standalone Application for Data Deduplication

2. Writing a standalone application for data deduplication

Given two input files A and B, write a standalone Spark application that merges the two files, removes the duplicated content, and produces a new file C. A sample of the input and output files is given below for reference.

Sample contents of input file A:

20170101 x
20170102 y
20170103 x
20170104 y
20170105 z
20170106 z

Sample contents of input file B:

20170101 y
20170102 y
20170103 x
20170104 z
20170105 y

Sample contents of the output file C obtained by merging input files A and B:

20170101 x
20170101 y
20170102 y
20170103 x
20170104 y
20170104 z
20170105 y
20170105 z
20170106 z
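In other words, C contains each distinct line that occurs in A or B exactly once, in ascending order; for example, "20170102 y" appears in both input files but only once in C.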

mkdir -p /usr/local/spark/mycode/remdup/src/main/scala
cd /usr/local/spark/mycode/remdup
vim /usr/local/spark/mycode/remdup/src/main/scala/remdup.scala
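The file remdup.scala contains the following standalone program. Both input files are read into a single RDD (textFile accepts a comma-separated list of paths), and distinct() removes the duplicated lines: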
import org.apache.spark.SparkContext
import org.apache.spark.SparkContext._
import org.apache.spark.SparkConf
object RemDup {
    def main(args: Array[String]) {
        val conf = new SparkConf().setAppName("RemDup")
        val sc = new SparkContext(conf)
        // textFile accepts a comma-separated list of paths, so A.txt and
        // B.txt are read into a single RDD
        val dataFile = "file:///usr/local/spark/sparksqldata/A.txt,file:///usr/local/spark/sparksqldata/B.txt"
        val data = sc.textFile(dataFile, 2)
        // distinct() removes lines that appear in both files (or more than once in one file)
        val res = data.distinct()
        // collect to the driver and sort so the console output matches sample C
        res.collect().sorted.foreach(println)
        sc.stop()
    }
}
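The program above prints the deduplicated lines to the console rather than producing an actual file C. A minimal sketch of a variant that writes the result out, assuming the hypothetical output path file:///usr/local/spark/mycode/remdup/C (note that saveAsTextFile creates this as a directory of part files, and fails if it already exists):

import org.apache.spark.SparkContext
import org.apache.spark.SparkConf
object RemDupToFile {
    def main(args: Array[String]) {
        val conf = new SparkConf().setAppName("RemDupToFile")
        val sc = new SparkContext(conf)
        val dataFile = "file:///usr/local/spark/sparksqldata/A.txt,file:///usr/local/spark/sparksqldata/B.txt"
        // deduplicate, then sort into a single partition so the output is
        // one part file whose contents match the sample file C
        val res = sc.textFile(dataFile, 2).distinct().sortBy(line => line, true, 1)
        // hypothetical output location; Spark writes a directory, not a single file named C
        res.saveAsTextFile("file:///usr/local/spark/mycode/remdup/C")
        sc.stop()
    }
}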
vim /usr/local/spark/mycode/remdup/simple.sbt
name := "Simple Project"
version := "1.0"
scalaVersion := "2.11.8"
libraryDependencies += "org.apache.spark" %% "spark-core" % "2.1.0"
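Note that the versions in simple.sbt must be consistent with the installed Spark: Spark 2.1.0 is built against Scala 2.11, which is why scalaVersion is set to 2.11.8 and the packaged jar ends up under target/scala-2.11.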
cd /usr/local/spark/mycode/remdup
sudo /usr/local/sbt/sbt package
/usr/local/spark/bin/spark-submit --class "RemDup" /usr/local/spark/mycode/remdup/target/scala-2.11/simple-project_2.11-1.0.jar
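Since the program prints its result with println, the deduplicated lines appear in the console output of spark-submit, mixed with Spark's INFO logging; redirecting stderr and filtering, for example by appending 2>&1 | grep "2017" to the command above, makes them easier to spot.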
Original article: https://www.cnblogs.com/a155-/p/14288116.html