Spark example job reports warning: WARN scheduler.TaskSchedulerImpl: Initial job has not accepted any resources

After setting up a Spark environment, the following warning appeared while test-running one of the Spark examples:
WARN scheduler.TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources


[hadoop@gpmaster bin]$ ./run-example org.apache.spark.examples.SparkPi
15/10/01 08:59:33 INFO spark.SparkContext: Running Spark version 1.5.0
.......................
15/10/01 08:59:35 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriver@192.168.1.128:17514]
15/10/01 08:59:35 INFO util.Utils: Successfully started service 'sparkDriver' on port 17514.
.......................
15/10/01 08:59:36 INFO ui.SparkUI: Started SparkUI at http://192.168.1.128:4040
15/10/01 08:59:37 INFO spark.SparkContext: Added JAR file:/home/hadoop/spark/lib/spark-examples-1.5.0-hadoop2.6.0.jar at http://192.168.1.128:36471/jars/spark-examples-1.5.0-hadoop2.6.0.jar with timestamp 1443661177865
15/10/01 08:59:37 WARN metrics.MetricsSystem: Using default name DAGScheduler for source because spark.app.id is not set.
15/10/01 08:59:38 INFO client.AppClient$ClientEndpoint: Connecting to master spark://192.168.1.128:7077...
15/10/01 08:59:38 INFO cluster.SparkDeploySchedulerBackend: Connected to Spark cluster with app ID app-20151001085938-0000
.................................
15/10/01 08:59:40 INFO scheduler.TaskSchedulerImpl: Adding task set 0.0 with 2 tasks
15/10/01 08:59:55 WARN scheduler.TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
15/10/01 09:00:10 WARN scheduler.TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
.................................
(the same warning repeats every 15 seconds)

The warning tells us roughly what is wrong:
the job received no resources during initialization, and we are advised to check the cluster UI to make sure the workers are registered and have sufficient resources (CPU cores and memory). The cluster UI here is the standalone Master's web UI, which listens on port 8080 by default — http://192.168.1.128:8080 in this cluster — and lists each registered worker along with its free cores and memory.
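If you prefer the command line, the standalone master serves the same status as JSON under /json on the web UI port. A quick sketch, assuming the default port 8080:

[hadoop@gpmaster ~]$ curl -s http://192.168.1.128:8080/json
# JSON listing of registered workers (with their free cores/memory) and running applications;
# an empty "workers" array means no worker ever registered with this master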


There are several possible causes; they can be ruled out one by one:
1. Is the hostname-to-IP mapping configured correctly?
First check that /etc/hosts is correct on every node, as shown below.
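A minimal check, run on the master. The worker entry (gpslave1 / 192.168.1.129) below is hypothetical — substitute your own nodes; the point is that every hostname must resolve to a real, routable address rather than 127.0.0.1:

[hadoop@gpmaster ~]$ cat /etc/hosts
192.168.1.128   gpmaster
192.168.1.129   gpslave1        # hypothetical worker entry
[hadoop@gpmaster ~]$ hostname -i
192.168.1.128                   # should print the node's real IP, not 127.0.0.1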


You can also use spark-shell to inspect the configuration the SparkContext actually picked up, as follows:
[hadoop@gpmaster bin]$ ./spark-shell
........
scala> sc.getConf.getAll.foreach(println)
(spark.fileserver.uri,http://192.168.1.128:34634)
(spark.app.name,Spark shell)
(spark.driver.port,25392)
(spark.app.id,app-20151001090322-0001)
(spark.repl.class.uri,http://192.168.1.128:24988)
(spark.externalBlockStore.folderName,spark-1254a794-fbfa-4b4c-9757-b5a94dc26ffc)
(spark.jars,)
(spark.executor.id,driver)
(spark.submit.deployMode,client)
(spark.driver.host,192.168.1.128)
(spark.master,spark://192.168.1.128:7077)


scala> sc.getConf.toDebugString
res8: String = 
spark.app.id=app-20151001090322-0001
spark.app.name=Spark shell
spark.driver.host=192.168.1.128
spark.driver.port=25392
spark.executor.id=driver
spark.externalBlockStore.folderName=spark-1254a794-fbfa-4b4c-9757-b5a94dc26ffc
spark.fileserver.uri=http://192.168.1.128:34634
spark.jars=
spark.master=spark://192.168.1.128:7077
spark.repl.class.uri=http://192.168.1.128:24988
spark.submit.deployMode=client


In this output, pay particular attention to spark.master and spark.driver.host: the former must match the master address shown in the web UI, and the latter must be an address the workers can reach (if it resolves to 127.0.0.1, executors will fail to connect back to the driver).

2. Insufficient memory
In my environment, memory was the actual cause.
In my cluster, spark-env.sh was configured as follows:
export JAVA_HOME=/usr/java/jdk1.7.0_60
export SCALA_HOME=/usr/local/scala
export SPARK_MASTER_IP=192.168.1.128
export SPARK_WORKER_MEMORY=100m        # each worker offers only 100 MB to executors
export HADOOP_CONF_DIR=/home/hadoop/hadoop-2.6.0/etc/hadoop
export MASTER=spark://192.168.1.128:7077


In my cluster, each node had only about 500 MB of memory left. Because I had not set SPARK_EXECUTOR_MEMORY, each executor requested the default of 1 GB — more than any worker could offer — so no executor was ever launched, and the scheduler kept printing the warning above.

The fix is therefore to add the following parameter:

export SPARK_EXECUTOR_MEMORY=512m

Note that an executor can only be placed on a worker whose advertised memory (SPARK_WORKER_MEMORY) is at least as large, so with SPARK_WORKER_MEMORY=100m as above, the worker memory must also be raised to at least 512m before the executor can actually be scheduled.
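Putting it together, a sketch of the corrected settings (the values mirror this post's environment; adjust to your own):

export SPARK_WORKER_MEMORY=512m      # memory the worker offers to executors
export SPARK_EXECUTOR_MEMORY=512m    # memory each executor requests; must fit within the worker's offer

After editing spark-env.sh on every node, restart the cluster so the workers re-register with the new limits:

[hadoop@gpmaster spark]$ sbin/stop-all.sh
[hadoop@gpmaster spark]$ sbin/start-all.sh

Alternatively, the executor memory can be set per application instead, e.g. with spark-submit --executor-memory 512m.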



3. The port is occupied because a previously submitted program is still running.
An earlier application may still be holding the cluster's cores and memory (and its ports), leaving nothing for the new job. Check the Running Applications list in the master web UI and kill anything stale.
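A quick way to look for leftovers from the shell (jps ships with the JDK and lists Java processes; 4040 is the default SparkUI port — both are assumptions about your setup):

[hadoop@gpmaster ~]$ jps
# look for stale SparkSubmit / CoarseGrainedExecutorBackend processes from earlier runs
[hadoop@gpmaster ~]$ netstat -tlnp | grep 4040
# a previous driver may still be holding the default UI port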

Original post (Chinese): https://www.cnblogs.com/snowbook/p/5831473.html