Linux: How to check if Hadoop daemons are running?

Warning: this content is provided under the CC BY-SA 4.0 license. You are free to use/share it, but you must attribute it to the original authors (not me): StackOverflow. Original question: http://stackoverflow.com/questions/15555965/


Tags: linux, hadoop

Asked by Bohdan

What are simple commands to check if Hadoop daemons are running?

For example, if I'm trying to figure out why HDFS is not set up correctly, I'll want to know a way to check whether the namenode/datanode/jobtracker/tasktracker are running on this machine.

Is there any way to check it quickly without looking into logs or using ps (on Linux)?

Accepted answer by Bohdan

I did not find a great solution to it, so I used

ps -ef | grep hadoop | grep -P 'namenode|datanode|tasktracker|jobtracker'

just to see if stuff is running

and

./hadoop dfsadmin -report

but the latter was not helpful until the server was running.

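If you want this as a one-shot health check, the ps/grep approach can be wrapped in a small loop. This is only a sketch: the daemon list is the Hadoop 1.x set from the question, and the match is loose (for example, "namenode" also matches a SecondaryNameNode process).

for daemon in namenode datanode jobtracker tasktracker; do
    # grep -v grep drops the grep process itself; -iq does a quiet,
    # case-insensitive match against the full command line.
    if ps -ef | grep -v grep | grep -iq "$daemon"; then
        echo "$daemon: running"
    else
        echo "$daemon: NOT running"
    fi
done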

Answered by Mark Vickery

In the shell, type 'jps' (you might need a JDK to run jps). It lists all the running Java processes, including any Hadoop daemons that are running.

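To narrow jps down to just the Hadoop daemons, you can filter its output. A minimal sketch, assuming the Hadoop 1.x daemon class names:

# jps prints one "PID ClassName" line per Java process; keep only
# the Hadoop daemons.
jps | grep -E 'NameNode|SecondaryNameNode|DataNode|JobTracker|TaskTracker'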

Answered by Tariq

Apart from jps, another good idea is to use the web interfaces for the NameNode and JobTracker provided by Hadoop. They not only show you the processes but also give you a lot of other useful info, like your cluster summary, ongoing jobs, etc. To reach the NN UI, point your web browser to "YOUR_NAMENODE_HOST:50070", and for the JT UI, to "YOUR_JOBTRACKER_HOST:50030" (the Hadoop 1.x web UI defaults; 9000 and 9001 are typically the RPC ports, not the web UIs).

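The web UIs can also be probed from the command line instead of a browser. A sketch, with placeholder hostnames and the Hadoop 1.x default ports assumed; adjust both to your cluster:

# Prints the HTTP status code for each UI; 200 means the daemon's
# embedded web server is answering.
curl -s -o /dev/null -w 'NameNode UI: %{http_code}\n' http://YOUR_NAMENODE_HOST:50070/
curl -s -o /dev/null -w 'JobTracker UI: %{http_code}\n' http://YOUR_JOBTRACKER_HOST:50030/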

Answered by Pranay Goyal

If you see that no Hadoop processes show up in ps -ef | grep hadoop, run sbin/start-dfs.sh. Monitor with hdfs dfsadmin -report:

[mapr@node1 bin]$ hadoop dfsadmin -report
Configured Capacity: 105689374720 (98.43 GB)
Present Capacity: 96537456640 (89.91 GB)
DFS Remaining: 96448180224 (89.82 GB)
DFS Used: 89276416 (85.14 MB)
DFS Used%: 0.09%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0

-------------------------------------------------
Datanodes available: 2 (2 total, 0 dead)

Name: 192.168.1.16:50010
Decommission Status : Normal
Configured Capacity: 52844687360 (49.22 GB)
DFS Used: 44638208 (42.57 MB)
Non DFS Used: 4986138624 (4.64 GB)
DFS Remaining: 47813910528(44.53 GB)
DFS Used%: 0.08%
DFS Remaining%: 90.48%
Last contact: Tue Aug 20 13:23:32 EDT 2013


Name: 192.168.1.17:50010
Decommission Status : Normal
Configured Capacity: 52844687360 (49.22 GB)
DFS Used: 44638208 (42.57 MB)
Non DFS Used: 4165779456 (3.88 GB)
DFS Remaining: 48634269696(45.29 GB)
DFS Used%: 0.08%
DFS Remaining%: 92.03%
Last contact: Tue Aug 20 13:23:34 EDT 2013
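
If you only need a quick pass/fail signal rather than the full report, one option is to grep out the datanode summary line (the wording matches the report format shown above):

# Prints e.g. "Datanodes available: 2 (2 total, 0 dead)".
hadoop dfsadmin -report 2>/dev/null | grep 'Datanodes available'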

Answered by CuriousMind

Try the jps command. It lists the Java processes that are up and running.

Answered by Flowra

You can use the jps command, as vipin said, like this:

/usr/lib/java/jdk1.8.0_25/bin/jps  

Of course, you will change the path of java to the one you have ("the path you installed Java in").
jps is a nifty tool for checking whether the expected Hadoop processes are running (part of Sun's Java since v1.5.0).
The result will be something like this:

2287 TaskTracker  
2149 JobTracker  
1938 DataNode  
2085 SecondaryNameNode  
2349 Jps  
1788 NameNode  

I got the answer from this tutorial: http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/

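To avoid typing the full path every time, one option is to put the JDK's bin directory on your PATH. A sketch reusing the example JDK location from above (substitute the path you actually installed Java in):

# The JDK location below is just the example from this answer.
export JAVA_HOME=/usr/lib/java/jdk1.8.0_25
export PATH="$JAVA_HOME/bin:$PATH"
jps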

Answered by Prashant Chutke

Try running this:

for service in /etc/init.d/hadoop-hdfs-*; do $service status; done;
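
On a systemd-based distribution there may be no /etc/init.d scripts. An equivalent check, assuming your packages install services under similar hadoop-hdfs-* names (the naming is distribution-dependent, not universal), might look like:

for service in hadoop-hdfs-namenode hadoop-hdfs-datanode hadoop-hdfs-secondarynamenode; do
    systemctl status "$service" --no-pager
done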

Answered by Santosh Singh

To check whether the Hadoop nodes are running or not:

sudo -u hdfs hdfs dfsadmin -report

Configured Capacity: 28799380685 (26.82 GB)
Present Capacity: 25104842752 (23.38 GB)
DFS Remaining: 25012056064 (23.29 GB)
DFS Used: 92786688 (88.49 MB)
DFS Used%: 0.37%
Under replicated blocks: 436
Blocks with corrupt replicas: 0
Missing blocks: 0


Datanodes available: 1 (1 total, 0 dead)

Live datanodes:
Name: 127.0.0.1:50010 (localhost.localdomain)
Hostname: localhost.localdomain
Rack: /default
Decommission Status : Normal
Configured Capacity: 28799380685 (26.82 GB)
DFS Used: 92786688 (88.49 MB)
Non DFS Used: 3694537933 (3.44 GB)
DFS Remaining: 25012056064 (23.29 GB)
DFS Used%: 0.32%
DFS Remaining%: 86.85%
Last contact: Thu Mar 01 22:01:38 IST 2018

Answered by Sahil Agnihotri

To check whether the daemons are running:

You can check with the jps command.

You can also use the commands below:

ps -ef | grep -w namenode

ps -ef | grep -w datanode

ps -ef | grep -w jobtracker

ps -ef | grep -w tasktracker 

-w :- restricts grep to whole-word matches, so you fetch only the exact string

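pgrep can express the same checks more compactly. A sketch: -f matches against the full command line (which is where the daemon class name appears) and -l prints the matched name alongside the PID:

# Exit status is 0 if at least one matching process is found.
pgrep -fl namenode
pgrep -fl datanode
pgrep -fl jobtracker
pgrep -fl tasktracker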

If you have superuser privileges, then you can also use the following for the same purpose:

./hadoop dfsadmin -report

Hope this will help!
