Wednesday, May 14, 2014

Hadoop Control Scripts - Start/Stop Daemons

Hadoop includes a set of control scripts for starting and stopping its daemons on a cluster or on a single machine.

The examples below use Hadoop 1.0.3 on a cluster running the CentOS operating system.


1. masters/slaves Files

Hadoop includes masters and slaves files that list the cluster machines on which the control scripts should run commands; the scripts connect to each listed host over SSH. These files are located under $HADOOP_HOME/conf.

Suppose you have a cluster of 4 servers: server1, server2, server3 and server4. On server1, you want to start the namenode, jobtracker and secondary namenode. On the other 3 servers, you want to run the datanode and tasktracker daemons.

masters file
server1


slaves file
server2
server3
server4

If you run Hadoop on a single machine, put your server name in both files.
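
For example, a minimal single-machine setup could look like the sketch below (localhost is used here as the example hostname; the control scripts connect to every listed host over SSH, so passwordless SSH to that host should already be configured):

>cat $HADOOP_HOME/conf/masters
localhost
>cat $HADOOP_HOME/conf/slaves
localhost
>ssh localhost true    # should return without asking for a password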

2. Control Scripts

Let's look at the scripts. They are located in $HADOOP_HOME/bin.

1. start-dfs.sh

This script should be run on the namenode.
> $HADOOP_HOME/bin/start-dfs.sh

  • Starts the namenode on the server where the script is run.
  • Starts datanodes on the servers listed in the slaves file.
  • Starts the secondary namenode on the servers listed in the masters file.
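
As a quick sanity check (not part of the script itself), the jps tool that ships with the JDK lists the running Hadoop daemons; server2 below stands for one of the datanode hosts from the slaves file:

> jps
> ssh server2 jps

On the namenode you should see NameNode and SecondaryNameNode entries (the latter because server1 is also listed in the masters file), and on server2 a DataNode entry. With the default configuration, the namenode web UI is also available at http://server1:50070.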


2. start-mapred.sh

This script should be run on the jobtracker node.
> $HADOOP_HOME/bin/start-mapred.sh

  • Starts the jobtracker on the server where the script is run.
  • Starts tasktrackers on the servers listed in the slaves file.
  • This script does not use the masters file.
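
To verify the whole MapReduce stack end to end, you can submit the pi example job that ships with Hadoop; the jar name and location below assume the standard Hadoop 1.0.3 tarball layout, so adjust them if your installation differs:

> $HADOOP_HOME/bin/hadoop jar $HADOOP_HOME/hadoop-examples-1.0.3.jar pi 2 10

With the default configuration, the jobtracker web UI at http://server1:50030 shows the running job and the tasktrackers that joined the cluster.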


3. stop-dfs.sh

This script should be run on the namenode.
> $HADOOP_HOME/bin/stop-dfs.sh

  • Stops the namenode on the server where the script is run.
  • Stops datanodes on the servers listed in the slaves file.
  • Stops the secondary namenode on the servers listed in the masters file.


4. stop-mapred.sh

This script should be run on the jobtracker node.
> $HADOOP_HOME/bin/stop-mapred.sh

  • Stops the jobtracker on the server where the script is run.
  • Stops tasktrackers on the servers listed in the slaves file.


5. hadoop-daemon.sh

This script starts or stops a single daemon, given as a parameter, and acts only on the machine where it is run. It is typically used on the datanode/tasktracker nodes to manage individual daemons.

Start/Stop datanode
>$HADOOP_HOME/bin/hadoop-daemon.sh start datanode
>$HADOOP_HOME/bin/hadoop-daemon.sh stop datanode

Start/Stop tasktracker
>$HADOOP_HOME/bin/hadoop-daemon.sh start tasktracker
>$HADOOP_HOME/bin/hadoop-daemon.sh stop tasktracker
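
The daemon name is just a parameter, so the same script can manage any daemon on the local machine (for example the namenode). Hadoop 1.x also ships a hadoop-daemons.sh companion script (note the plural) that runs the given command on every host in the slaves file over SSH. A short sketch:
>$HADOOP_HOME/bin/hadoop-daemon.sh start namenode
>$HADOOP_HOME/bin/hadoop-daemon.sh stop namenode
>$HADOOP_HOME/bin/hadoop-daemons.sh start datanode    # runs on all hosts in the slaves file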

6. start-all.sh

This script runs start-dfs.sh and start-mapred.sh, starting all daemons according to the masters and slaves files. It should be run on the namenode/jobtracker node.
>$HADOOP_HOME/bin/start-all.sh

7. stop-all.sh

This script runs stop-dfs.sh and stop-mapred.sh, stopping all daemons according to the masters and slaves files. It should be run on the namenode/jobtracker node.
>$HADOOP_HOME/bin/stop-all.sh
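
If a daemon does not start or stop as expected, the daemon log files are the first place to look. With the default configuration they are written under $HADOOP_HOME/logs and named after the user, the daemon and the host; the wildcards below are only a convenience:
>ls $HADOOP_HOME/logs
>tail -f $HADOOP_HOME/logs/hadoop-*-namenode-*.log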






