Usually Hadoop can automatically recover cluster nodes from the Unhealthy state by cleaning log and temporary directories, but sometimes nodes stay unhealthy for a long time and manual intervention is required to bring them back.
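To spot such nodes, a small sketch like the following can parse the output of the standard `yarn node -list -all` CLI command and list the nodes YARN reports as UNHEALTHY. The sample report in the usage note is made up for illustration; the exact column layout can differ slightly between Hadoop versions.

```python
# Sketch: extract UNHEALTHY nodes from `yarn node -list -all` output.
import subprocess

def unhealthy_nodes(report: str) -> list:
    """Return the Node-Ids whose Node-State column is UNHEALTHY."""
    nodes = []
    for line in report.splitlines():
        fields = line.split()
        # Data rows look like: <node-id> <state> <http-address> <running-containers>
        if len(fields) >= 2 and fields[1] == "UNHEALTHY":
            nodes.append(fields[0])
    return nodes

def fetch_report() -> str:
    # Requires a live YARN cluster; on an EMR node this shells out to the CLI.
    return subprocess.run(["yarn", "node", "-list", "-all"],
                          capture_output=True, text=True, check=True).stdout
```

For example, feeding `unhealthy_nodes` a report that contains a row `ip-10-0-0-2:8041 UNHEALTHY ...` returns `["ip-10-0-0-2:8041"]`, which you can then use to decide where manual cleanup is needed.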
Amazon EMR allows you to define scale-out and scale-in rules to automatically add and remove instances based on the metrics you specify.
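As a sketch, an auto-scaling policy attached to an EMR instance group might look like the fragment below, which adds instances when the YarnMemoryAvailablePercentage CloudWatch metric drops too low. The capacities, thresholds, and rule name are made-up illustration values, not recommendations:

```json
{
  "Constraints": {"MinCapacity": 2, "MaxCapacity": 20},
  "Rules": [
    {
      "Name": "ScaleOutOnLowMemory",
      "Action": {
        "SimpleScalingPolicyConfiguration": {
          "AdjustmentType": "CHANGE_IN_CAPACITY",
          "ScalingAdjustment": 2,
          "CoolDown": 300
        }
      },
      "Trigger": {
        "CloudWatchAlarmDefinition": {
          "MetricName": "YarnMemoryAvailablePercentage",
          "ComparisonOperator": "LESS_THAN",
          "Threshold": 15,
          "EvaluationPeriods": 1,
          "Period": 300,
          "Statistic": "AVERAGE"
        }
      }
    }
  ]
}
```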
In this article I am going to explore the instance controller logs, which can be very useful for monitoring auto-scaling. The logs are located in the /emr/instance-controller/log directory on the EMR master node.
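Since the directory usually contains several rotated files, a small helper like this can pick the most recently written log to tail or parse. Only the directory path comes from the article; the rest is a generic sketch.

```python
# Sketch: find the newest file in the instance-controller log directory.
import os

def newest_log(log_dir: str = "/emr/instance-controller/log") -> str:
    """Return the path of the most recently modified file in log_dir."""
    paths = [os.path.join(log_dir, name) for name in os.listdir(log_dir)]
    files = [p for p in paths if os.path.isfile(p)]
    if not files:
        raise FileNotFoundError(f"no log files in {log_dir}")
    return max(files, key=os.path.getmtime)
```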
When you run many Hadoop clusters it is useful to automatically collect metrics from all of them in a single place (a Hive table, for example).
This allows you to perform advanced, custom analysis of your cluster workloads instead of being limited to the features of the Hadoop administration UI tools, which often offer only a per-cluster view, making it hard to see the whole picture of your data platform.
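One lightweight way to build such a collector is to poll each cluster's YARN ResourceManager REST API (GET /ws/v1/cluster/metrics) and flatten the response into rows for a central table. The sketch below assumes the default ResourceManager port 8088; the cluster name and host are placeholders, while the field names come from the standard clusterMetrics response.

```python
# Sketch: pull cluster-level metrics from the YARN ResourceManager REST API
# and flatten them into a row suitable for loading into a metrics table.
import json
import time
from urllib.request import urlopen

def fetch_metrics(rm_host: str) -> dict:
    # Needs network access to a live ResourceManager (default port 8088).
    with urlopen(f"http://{rm_host}:8088/ws/v1/cluster/metrics") as resp:
        return json.load(resp)

def to_row(cluster: str, payload: dict) -> dict:
    """Flatten the clusterMetrics JSON into one row keyed by cluster name."""
    m = payload["clusterMetrics"]
    return {
        "cluster": cluster,
        "ts": int(time.time()),
        "total_mb": m["totalMB"],
        "allocated_mb": m["allocatedMB"],
        "available_mb": m["availableMB"],
        "active_nodes": m["activeNodes"],
        "unhealthy_nodes": m["unhealthyNodes"],
    }
```

Running `to_row("etl-cluster", fetch_metrics("my-rm-host"))` on each cluster and appending the rows to one Hive table gives you the single consolidated view described above.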
Analyzing a Hadoop cluster, I noticed that it runs only 2 GB and 4 GB containers and does not allocate all of the available memory to applications, always leaving about 150 GB free.
The clusters run Apache Pig and Hive applications with the default memory settings (these are also inherited by the Tez engine used by Pig and Hive):
-- from mapred-site.xml
mapreduce.map.memory.mb            1408
mapreduce.reduce.memory.mb         2816
yarn.app.mapreduce.am.resource.mb  2816
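The 2 GB and 4 GB container sizes are consistent with YARN rounding every memory request up to the next multiple of yarn.scheduler.minimum-allocation-mb. The sketch below assumes a minimum allocation of 2048 MB purely for illustration (the article does not state the cluster's actual value); under that assumption, the 1408 MB map request becomes a 2 GB container and the 2816 MB reduce and AM requests become 4 GB containers.

```python
# Sketch: YARN rounds each container request up to a multiple of
# yarn.scheduler.minimum-allocation-mb. The 2048 MB minimum is an assumed
# value chosen to show how 1408/2816 MB requests could become the observed
# 2 GB and 4 GB containers.
import math

def allocated_mb(requested_mb: int, min_allocation_mb: int = 2048) -> int:
    """Round a memory request up to a multiple of the minimum allocation."""
    return math.ceil(requested_mb / min_allocation_mb) * min_allocation_mb

for name, request in [("mapreduce.map.memory.mb", 1408),
                      ("mapreduce.reduce.memory.mb", 2816),
                      ("yarn.app.mapreduce.am.resource.mb", 2816)]:
    print(f"{name}: requested {request} MB -> allocated {allocated_mb(request)} MB")
```

This rounding also explains part of the unused memory: every container wastes the gap between the requested and the rounded-up size.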