ERROR: but there is no HDFS_NAMENODE_USER defined. Aborting operation.

When starting Hadoop with the start-all.sh command, the following errors are reported:

[root@iZbp13pwlxqwiu1xxb6szsZ hadoop-3.2.1]# start-all.sh
Starting namenodes on [iZbp13pwlxqwiu1xxb6szsZ]
ERROR: Attempting to operate on hdfs namenode as root
ERROR: but there is no HDFS_NAMENODE_USER defined. Aborting operation.
Starting datanodes
ERROR: Attempting to operate on hdfs datanode as root
ERROR: but there is no HDFS_DATANODE_USER defined. Aborting operation.
Starting secondary namenodes [iZbp13pwlxqwiu1xxb6szsZ]
ERROR: Attempting to operate on hdfs secondarynamenode as root
ERROR: but there is no HDFS_SECONDARYNAMENODE_USER defined. Aborting operation.
Starting resourcemanager
ERROR: Attempting to operate on yarn resourcemanager as root
ERROR: but there is no YARN_RESOURCEMANAGER_USER defined. Aborting operation.
Starting nodemanagers
ERROR: Attempting to operate on yarn nodemanager as root
ERROR: but there is no YARN_NODEMANAGER_USER defined. Aborting operation.
[root@iZbp13pwlxqwiu1xxb6szsZ hadoop-3.2.1]#
Solution:
Method 1:

Locate the sbin folder under the Hadoop installation directory,

then modify the following four files in it.

1. For start-dfs.sh and stop-dfs.sh, add the following parameters at the top of the file (below the existing shebang line):

#!/usr/bin/env bash
HDFS_DATANODE_USER=root
HADOOP_SECURE_DN_USER=hdfs
HDFS_NAMENODE_USER=root
HDFS_SECONDARYNAMENODE_USER=root
2. For start-yarn.sh and stop-yarn.sh, add the following parameters:

#!/usr/bin/env bash
YARN_RESOURCEMANAGER_USER=root
HADOOP_SECURE_DN_USER=yarn
YARN_NODEMANAGER_USER=root
Then restart the services, for example as sketched below.
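A minimal restart sketch, assuming $HADOOP_HOME/sbin is on the PATH and you are still working as root (using start-all.sh/stop-all.sh as in the transcript above; you can of course restart the DFS and YARN scripts individually instead):

stop-all.sh    # stop any daemons that may have partially started
start-all.sh   # start HDFS and YARN again, now picking up the *_USER variables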

Method 2 (recommended; tested by the author and works):

The root cause of this problem is one of the following:

Hadoop was installed as one user, but you are starting the HDFS/YARN services as a different user; or
the HDFS_NAMENODE_USER and HDFS_DATANODE_USER values specified in Hadoop's hadoop-env.sh refer to some other user.

We therefore need to correct this and make the user consistent everywhere. A simple solution is to edit hadoop-env.sh and add the user name under which you want to start the services. Edit $HADOOP_HOME/etc/hadoop/hadoop-env.sh and add the following lines:

export HDFS_NAMENODE_USER=root
export HDFS_DATANODE_USER=root
export HDFS_SECONDARYNAMENODE_USER=root
export YARN_RESOURCEMANAGER_USER=root
export YARN_NODEMANAGER_USER=root
Now save the file, restart the HDFS and YARN services, and check that everything works.
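For example, a minimal check, assuming the same root session and $HADOOP_HOME/sbin on the PATH (jps is the process-listing tool shipped with the JDK):

start-all.sh   # should now start all daemons without the *_USER errors
jps            # NameNode, DataNode, SecondaryNameNode, ResourceManager and NodeManager should be listed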

The hadoop-env.sh file itself describes this mechanism:

To prevent accidents, shell commands be (superficially) locked to only allow certain users to execute certain subcommands.

It uses the format of (command)_(subcommand)_USER. For example, to limit who can execute the namenode command, export HDFS_NAMENODE_USER=hdfs.
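The variables set in Method 2 follow this same (command)_(subcommand)_USER pattern; a couple of illustrative mappings (the root value is simply this post's choice, since the cluster is run as root):

export HDFS_DATANODE_USER=root        # locks the "hdfs datanode" subcommand to root
export YARN_NODEMANAGER_USER=root     # locks the "yarn nodemanager" subcommand to root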
————————————————
Copyright notice: This is an original article by CSDN blogger "yaoshengting", licensed under the CC 4.0 BY-SA license. When reposting, please include the original source link and this notice.
Original article: https://blog.csdn.net/ystyaoshengting/java/article/details/103026872

Source: https://www.cnblogs.com/guohu/p/13199744.html