I already wrote about recovering ghost nodes in Amazon EMR clusters; in this article I am going to extend that information to EMR clusters configured for auto-scaling.
-
Amazon EMR – Recovering Ghost Nodes
In a Hadoop cluster, besides Active nodes, you may also have Unhealthy, Lost and Decommissioned nodes. Unhealthy nodes are running but are excluded from task scheduling because, for example, they do not have enough free disk space. Lost nodes are nodes that are no longer reachable. Decommissioned nodes are nodes that terminated gracefully and left the cluster.
But all of these nodes are known to the Hadoop YARN cluster, and you can see their details such as IP addresses, last health updates and so on.
At the same time there can also be Ghost nodes, i.e. nodes that are run by Amazon EMR services but that Hadoop itself does not know anything about. Let's see how you can find them.
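Here is a minimal sketch of one way to detect them: compare the instances that Amazon EMR reports as running with the node list that the YARN Resource Manager actually knows about. The cluster ID and Resource Manager address below are placeholders, and the hostname-to-IP conversion assumes the default EC2 internal hostnames (ip-10-0-1-23.ec2.internal and so on):
import boto3
import requests

CLUSTER_ID = "j-XXXXXXXXXXXXX"          # EMR cluster ID (placeholder)
RM_URL = "http://<RM_IP_Address>:8088"  # YARN Resource Manager address (placeholder)

# Instances that EMR believes are running (core and task nodes)
emr = boto3.client("emr")
emr_ips = set()
for page in emr.get_paginator("list_instances").paginate(
        ClusterId=CLUSTER_ID,
        InstanceGroupTypes=["CORE", "TASK"],
        InstanceStates=["RUNNING"]):
    for inst in page["Instances"]:
        emr_ips.add(inst["PrivateIpAddress"])

# Nodes that YARN knows about, in any state (RUNNING, UNHEALTHY, LOST, ...)
nodes = requests.get(RM_URL + "/ws/v1/cluster/nodes").json()
yarn_ips = set()
for node in (nodes.get("nodes") or {}).get("node") or []:
    # Assumes EC2 internal hostnames: ip-10-0-1-23.ec2.internal -> 10.0.1.23
    yarn_ips.add(node["nodeHostName"].split(".")[0].replace("ip-", "").replace("-", "."))

# Ghost nodes: running according to EMR but unknown to YARN
for ip in sorted(emr_ips - yarn_ips):
    print("Possible ghost node:", ip)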
-
Amazon EMR – Recovering Unhealthy Nodes with EMR Services Down
Usually Hadoop is able to automatically recover cluster nodes from the Unhealthy state by cleaning log and temporary directories. But sometimes nodes stay unhealthy for a long time, and manual intervention is necessary to bring them back.
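As a quick illustration (not the recovery procedure itself), this small sketch lists the nodes YARN reports as UNHEALTHY together with their health report, which usually names the log or local directory that triggered the state; the Resource Manager address is a placeholder:
import requests

RM_URL = "http://<RM_IP_Address>:8088"  # YARN Resource Manager address (placeholder)

resp = requests.get(RM_URL + "/ws/v1/cluster/nodes", params={"states": "UNHEALTHY"}).json()
for node in (resp.get("nodes") or {}).get("node") or []:
    print(node["nodeHostName"])
    print("  last health update:", node["lastHealthUpdate"])
    print("  health report     :", node["healthReport"])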
-
Amazon EMR – Monitoring Auto-Scaling using Instance Controller Logs
Amazon EMR allows you to define scale-out and scale-in rules to automatically add and remove instances based on the metrics you specify.
In this article I am going to explore the instance controller logs, which can be very useful for monitoring auto-scaling. The logs are located in the
/emr/instance-controller/log/
directory on the EMR master node.
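For example, a rough sketch that scans these logs for lines related to scaling activity could look like the following; the keyword list is only an assumption for illustration, so adjust it to whatever your log lines actually contain:
import glob
import gzip

LOG_DIR = "/emr/instance-controller/log"
KEYWORDS = ("resize", "instance count", "provision")  # assumed keywords, adjust as needed

def open_log(path):
    # Rotated instance-controller logs are gzip-compressed, the current one is plain text
    if path.endswith(".gz"):
        return gzip.open(path, "rt", errors="replace")
    return open(path, "rt", errors="replace")

for path in sorted(glob.glob(LOG_DIR + "/instance-controller.log*")):
    with open_log(path) as f:
        for line in f:
            if any(k in line.lower() for k in KEYWORDS):
                print(path, line.rstrip(), sep=": ")
-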
Hadoop YARN – Collecting Utilization Metrics from Multiple Clusters
When you run many Hadoop clusters it is useful to automatically collect metrics from all of them in a single place (a Hive table, for example).
This allows you to perform advanced, custom analysis of your clusters' workload without being limited to the features of the Hadoop administration UI tools, which often offer only a per-cluster view, making it hard to see the whole picture of your data platform.
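A minimal sketch, assuming each cluster's Resource Manager REST API is reachable from the collector host: pull /ws/v1/cluster/metrics from every cluster and gather the rows in one place, which you can then load into a Hive table or a database. The cluster names and addresses are placeholders:
import time
import requests

# Placeholder cluster names and Resource Manager addresses
CLUSTERS = {
    "etl-cluster":   "http://10.0.1.10:8088",
    "adhoc-cluster": "http://10.0.2.10:8088",
}

rows = []
for name, rm_url in CLUSTERS.items():
    m = requests.get(rm_url + "/ws/v1/cluster/metrics").json()["clusterMetrics"]
    rows.append({
        "cluster": name,
        "collected_at": int(time.time()),
        "apps_running": m["appsRunning"],
        "apps_pending": m["appsPending"],
        "allocated_mb": m["allocatedMB"],
        "available_mb": m["availableMB"],
        "active_nodes": m["activeNodes"],
    })

for row in rows:
    print(row)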
-
YARN Memory Under-Utilization Running Low-Memory Instances (e.g. c4.xlarge)
While analyzing a Hadoop cluster I noticed that it runs only 2 GB and 4 GB containers and never allocates all of the available memory to applications, always leaving about 150 GB of memory free.
The clusters run Apache Pig and Hive applications with the default memory settings (which are also inherited by the Tez engine used by Pig and Hive):
-- from mapred-site.xml
mapreduce.map.memory.mb              1408
mapreduce.reduce.memory.mb           2816
yarn.app.mapreduce.am.resource.mb    2816
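To see how small containers can leave memory stranded on low-memory instances, here is a back-of-the-envelope sketch; all the numbers (node memory, container size, node count) are assumptions for illustration, 5632 MB being the YARN memory that EMR typically configures on a c4.xlarge node:
NODE_MEMORY_MB = 5632  # assumed yarn.nodemanager.resource.memory-mb on a c4.xlarge node
CONTAINER_MB   = 4096  # a 4 GB container
NODES          = 100   # assumed number of core/task nodes

containers_per_node = NODE_MEMORY_MB // CONTAINER_MB                       # 1
stranded_per_node   = NODE_MEMORY_MB - containers_per_node * CONTAINER_MB  # 1536 MB
stranded_total_gb   = stranded_per_node * NODES / 1024

print(f"{containers_per_node} container(s) per node, {stranded_per_node} MB stranded per node, "
      f"~{stranded_total_gb:.0f} GB unusable across {NODES} nodes")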
-
Tez Memory Tuning – Container is Running Beyond Physical Memory Limits – Solving By Reducing Memory Settings
Can reducing the Tez memory settings help solve memory limit problems? Sometimes this paradox works.
One day one of our Hive queries failed with the following error:
Container is running beyond physical memory limits. Current usage: 4.1 GB of 4 GB physical memory used; 6.0 GB of 20 GB virtual memory used. Killing container.
-
YARN Resource Manager Silent Restarts – Java Heap Space Error – Amazon EMR
When you run a job in Hadoop you may notice the following error:
Application with id 'application_1545962730597_2614' doesn't exist in RM
Later, looking at the YARN Resource Manager UI at http://<RM_IP_Address>:8088/cluster/apps
you can see low Application ID numbers:
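The sequence number at the end of a YARN application ID restarts from 1 every time the Resource Manager starts, and the first number is the Resource Manager start timestamp in milliseconds, so a changed timestamp or suspiciously low sequence numbers point to an RM restart. A small sketch that decodes the application ID from the error above:
from datetime import datetime, timezone

app_id = "application_1545962730597_2614"
_, ts_ms, seq = app_id.split("_")

rm_started = datetime.fromtimestamp(int(ts_ms) / 1000, tz=timezone.utc)
print("Resource Manager start time:", rm_started)
print("Application sequence number:", int(seq), "since that RM start")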