Decommissioning of Node Manager in a Hadoop cluster

In this article, we will learn how to decommission node managers in a Hadoop cluster.

The decommissioning process ensures that running jobs are moved to other node managers instead of failing.

1) Check Ambari UI

If you are using HDP (Hortonworks Data Platform), you can check the Ambari UI to see how many node managers are present in your cluster.

The picture below shows that the cluster has 3 node managers. We would like to decommission one node manager from the cluster.
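If you do not have access to Ambari, you can get the same information from the command line on any node where the YARN client is configured. The yarn node -list command prints the node managers currently registered with the resource manager. The sample output below is only illustrative; of the hostnames shown, only master2 is used later in this article.

yarn node -list
Total Nodes:3
         Node-Id   Node-State  Node-Http-Address  Number-of-Running-Containers
 master1:45454        RUNNING       master1:8042                              0
 master2:45454        RUNNING       master2:8042                              0
 master3:45454        RUNNING       master3:8042                              0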




2) Check yarn.resourcemanager.nodes.exclude-path property 

The cluster should have the yarn.resourcemanager.nodes.exclude-path property in the yarn-site.xml file. If the property is not present, we should add it.
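If you need to add it, the entry in yarn-site.xml typically looks like the snippet below. The value assumes the exclude file path used in the next step (/etc/hadoop/conf/yarn.exclude). On an Ambari-managed cluster, add the property through the YARN configuration screen instead of editing the file directly, so that Ambari does not overwrite the change.

<property>
  <name>yarn.resourcemanager.nodes.exclude-path</name>
  <value>/etc/hadoop/conf/yarn.exclude</value>
</property>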



3) Update exclude file

Update the /etc/hadoop/conf/yarn.exclude file with the hostname of the node on which you want to decommission the node manager.

I have updated the file with the master2 hostname to decommission the node manager on the master2 node.
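The exclude file is a plain text file with one hostname per line. The hostname must match the name with which the node manager registered with the resource manager (the name shown by yarn node -list), so use the fully qualified name if that is what the resource manager reports. After the change, the file looks like this:

cat /etc/hadoop/conf/yarn.exclude
master2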




4) Run refreshNodes command

Run the yarn rmadmin -refreshNodes command to initiate decommissioning of the node managers.
This command needs to be run as the yarn user.
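A minimal sketch of the commands, run on a node with the YARN client configuration (typically the resource manager node); the su step assumes a standard HDP install where the yarn service user exists:

su - yarn
# the resource manager re-reads yarn.exclude and starts decommissioning the listed hosts
yarn rmadmin -refreshNodes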

The picture below shows the refreshNodes command being run.




5) Check Ambari UI 

Log in to the Ambari UI and click on the YARN service to check the decommissioned node managers.

The picture below shows that 1 node manager is decommissioned; I have highlighted it in yellow.
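The state change can also be confirmed from the command line. yarn node -list only shows running nodes, but adding the -all flag includes nodes in every state, so the decommissioned host should now appear with the DECOMMISSIONED node state:

yarn node -list -all
# master2 should be listed with Node-State DECOMMISSIONED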




Troubleshooting:

If decommissioning of the node managers is not working, check the logs of the node manager that you are decommissioning and the logs of the active resource manager.
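On an HDP cluster the YARN daemon logs are usually under /var/log/hadoop-yarn/yarn/; the exact file names include the hostname, so the paths below are only illustrative:

# on the node being decommissioned
tail -n 100 /var/log/hadoop-yarn/yarn/yarn-yarn-nodemanager-*.log
# on the active resource manager node
grep -i decommission /var/log/hadoop-yarn/yarn/yarn-yarn-resourcemanager-*.log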

