Linux Hadoop: root password required after entering "start-all.sh"

Note: this page is a mirrored translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. If you reuse or share it, you must follow the same CC BY-SA license and attribute it to the original authors (not me). Original: http://stackoverflow.com/questions/15195048/

Hadoop: requires root's password after entering "start-all.sh"

linux, ubuntu, hadoop, ssh, sudo

Asked by Munichong

I have installed Hadoop and SSH on my laptop. "ssh localhost" works fine. After formatting HDFS, I tried to start Hadoop.

munichong@GrindPad:~$ sudo /usr/sbin/start-all.sh
starting namenode, logging to /var/log/hadoop/root/hadoop-root-namenode-GrindPad.out
root@localhost's password: 
root@localhost's password: localhost: Permission denied, please try again.

localhost: Permission denied (publickey,password).

It asks for a password. My account is "munichong", but munichong's password does not work here, because the prompt has changed to "root". I don't know whether I missed something here.

Can anyone help me?

Thanks!

Answered by javamak

I ran into the same problem. As Amar said, if you are running it with sudo, Hadoop will ask for the root password. If you don't have a root password, you can set one using:

 sudo passwd
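
Running it looks roughly like this (the prompt wording may vary by distribution; shown only as an illustration):

 New password:
 Retype new password:
 passwd: password updated successfully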

The URL below gives you more detail about user management:

https://help.ubuntu.com/12.04/serverguide/user-management.html

Answered by bun

Create and set up SSH certificates

Hadoop requires SSH access to manage its nodes, i.e. remote machines plus our local machine. For our single-node setup of Hadoop, we therefore need to configure SSH access to localhost.

So we need to have SSH up and running on our machine, configured to allow SSH public-key authentication.
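
If the SSH server is not installed yet, on Ubuntu (per the question's tags) it can typically be installed with:

 sudo apt-get install openssh-server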

Hadoop uses SSH to access its nodes, which would normally require the user to enter a password. However, this requirement can be eliminated by creating and setting up SSH certificates using the following commands. If asked for a filename, just leave it blank and press the Enter key to continue.
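
The commands themselves were not preserved in this mirror; a typical sequence for passwordless SSH to localhost looks like this (a sketch, not necessarily the author's exact commands):

 ssh-keygen -t rsa -P ""
 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys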

Check this site.

Answered by Nishant Shrivastava

As in the above case, munichong is a user (munichong@GrindPad).

  1. In my case: log in as hduser

  2. First, remove the directory: sudo rm -rf ~/.ssh

  3. Re-generate the ~/.ssh directory with default settings:

    [hduser@localhost ~]$ ssh-keygen
    
  4. Append the content of id_rsa.pub (generated by the command above) to the authorized_keys file:

    [hduser@localhost ~]$ sudo cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
    
  5. [hduser@localhost ~]$ chmod -R 750 ~/.ssh/authorized_keys

  6. [hduser@localhost ~]$ ssh localhost

    The authenticity of host 'localhost (127.0.0.1)' can't be established. RSA key fingerprint is 04:e8:80:64:dc:71:b5:2f:c0:d9:28:86:1f:61:60:8a. Are you sure you want to continue connecting (yes/no)? yes

    Warning: Permanently added 'localhost' (RSA) to the list of known hosts. Last login: Mon Jan 4 14:31:05 2016 from localhost.localdomain

  7. [hduser@localhost ~]$ jps
    18531 Jps

  8. [hduser@localhost ~]$ start-all.sh

  9. All daemons start

Note: sometimes other problems occur because of the log files; in that case, remove only the .out files from /usr/local/hadoop/logs/.
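
For example, assuming the log directory from the note (a sketch; adjust the path to your installation):

 rm /usr/local/hadoop/logs/*.out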

Answered by KARTHIKEYAN.A

Solution:

1) Generate an SSH key without a password

$ ssh-keygen -t rsa -P ""

2) Copy id_rsa.pub to authorized_keys

$  cat $HOME/.ssh/id_rsa.pub >> $HOME/.ssh/authorized_keys

3) SSH to localhost

$ ssh localhost

4) Now go to the Hadoop sbin directory and start Hadoop:

$ ./start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
Starting namenodes on [localhost]
localhost: starting namenode, logging to /home/amtex/Documents/installed/hadoop/logs/hadoop-amtex-namenode-amtex-desktop.out
localhost: starting datanode, logging to /home/amtex/Documents/installed/hadoop/logs/hadoop-amtex-datanode-amtex-desktop.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /home/amtex/Documents/installed/hadoop/logs/hadoop-amtex-secondarynamenode-amtex-desktop.out
starting yarn daemons
starting resourcemanager, logging to /home/amtex/Documents/installed/hadoop/logs/yarn-amtex-resourcemanager-amtex-desktop.out
localhost: starting nodemanager, logging to /home/amtex/Documents/installed/hadoop/logs/yarn-amtex-nodemanager-amtex-desktop.out

5) No password is asked for:

$ jps 
12373 Jps
11823 SecondaryNameNode
11643 DataNode
12278 NodeManager
11974 ResourceManager
11499 NameNode

Answered by user3029620

Log in as the superuser (root):

$ su

Password:

Give ownership to your user:

$ sudo chown -R <login user> /usr/local/hadoop/

For your example, the login user is munichong.
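
Substituting that username into the command above (an illustration, not from the original answer):

$ sudo chown -R munichong /usr/local/hadoop/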

HADOOP_HOME = /usr/local/hadoop/

Answered by AVA

It seems you have logged in as root and are invoking start-all.sh.

Instead, log in as the owner of the directory $SPARK_HOME and invoke Spark's start-all.sh.

(or)

If user hadoop is the owner of the directory $SPARK_HOME and you are currently logged in as root, then the command would be as follows:

sudo -u hadoop start-all.sh

Assumptions:
a) PATH includes the directory $SPARK_HOME/bin
b) Certificate-based authentication is configured for user hadoop
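
For illustration, if Spark is installed under /opt/spark and owned by user hadoop (both values are examples, not from the original answer), the call from a root shell might be:

 sudo -u hadoop /opt/spark/sbin/start-all.sh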
