Selecting Multiple Columns from a Spark DataFrame

val df = sc.parallelize(Seq(
  (0,"cat26",30.9), 
  (1,"cat67",28.5), 
  (2,"cat56",39.6),
  (3,"cat8",35.6))).toDF("Hour", "Category", "Value")

// Alternatively, the column names could be read from a file into a List
val cols = List("Hour", "Value")

scala> df.select(cols.head, cols.tail: _*).show
+----+-----+
|Hour|Value|
+----+-----+
|   1| 28.5|
|   3| 35.6|
|   2| 39.6|
|   0| 30.9|
+----+-----+
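The `select(cols.head, cols.tail: _*)` idiom is needed because `select(col: String, cols: String*)` takes a first argument plus varargs. An equivalent, arguably cleaner approach is to map the names to `Column` objects and use the `select(cols: Column*)` overload; a minimal sketch (assuming a `SparkSession` named `spark` is already available, as in `spark-shell`):

```scala
import org.apache.spark.sql.functions.col
import spark.implicits._

val df = Seq(
  (0, "cat26", 30.9),
  (1, "cat67", 28.5),
  (2, "cat56", 39.6),
  (3, "cat8", 35.6)
).toDF("Hour", "Category", "Value")

val cols = List("Hour", "Value")

// Convert each name to a Column, then splat the list as varargs
df.select(cols.map(col): _*).show()
```

This variant avoids the `head`/`tail` split and fails cleanly (at `select`) rather than throwing on an empty list's `head`.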
Original source: https://www.cnblogs.com/v5captain/p/14208534.html