Large-Scale Data Engineering in Cloud

Performance Tuning, Cost Optimization / Internals, Research

  • AWS, EMR, Hadoop, YARN

    Amazon EMR – Downscaling and Ghost (Impaired) Nodes

    August 27, 2019

    I already wrote about recovering ghost nodes in Amazon EMR clusters; in this article I extend that discussion to EMR clusters configured for auto-scaling.

    Read More
    dmtolpeko
  • AWS, EMR, Hadoop, YARN

    Amazon EMR – Recovering Ghost Nodes

    May 23, 2019

    In a Hadoop cluster, besides Active nodes, you may also have Unhealthy, Lost and Decommissioned nodes. Unhealthy nodes are running but excluded from task scheduling because, for example, they do not have enough free disk space. Lost nodes are nodes that are no longer reachable. Decommissioned nodes are nodes that terminated gracefully and left the cluster.

    All these nodes are still known to the Hadoop YARN cluster, and you can see their details such as IP addresses, last health updates and so on.

    At the same time there can also be Ghost nodes, i.e. nodes that are run by Amazon EMR services but that Hadoop itself does not know anything about. Let's see how you can find them.
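
    To sketch the idea ahead of the full article: a node is a ghost candidate if EMR reports a running instance whose hostname never appears in the YARN node list. A minimal illustration in Python, assuming boto3 and a placeholder cluster ID (neither is prescribed by the article):

      import subprocess
      import boto3

      # Hostnames that YARN knows about (Active, Unhealthy, Lost, Decommissioned)
      out = subprocess.run(["yarn", "node", "-list", "-all"],
                           capture_output=True, text=True).stdout
      yarn_hosts = {line.split(":")[0].split(".")[0]
                    for line in out.splitlines() if line.lstrip().startswith("ip-")}

      # Instances that EMR believes are running; the cluster ID is a placeholder
      emr = boto3.client("emr")
      resp = emr.list_instances(ClusterId="j-XXXXXXXXXXXXX",
                                InstanceStates=["RUNNING"])
      emr_hosts = {i["PrivateDnsName"].split(".")[0] for i in resp["Instances"]}

      # Ghost candidates: running according to EMR, unknown to YARN
      print(sorted(emr_hosts - yarn_hosts))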

    Read More
    dmtolpeko
  • AWS, EMR, Hadoop, YARN

    Amazon EMR – Recovering Unhealthy Nodes with EMR Services Down

    May 23, 2019

    Usually Hadoop can automatically recover cluster nodes from the Unhealthy state by cleaning log and temporary directories. But sometimes nodes stay unhealthy for a long time, and manual intervention is necessary to bring them back.
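
    Before intervening manually, it helps to see which nodes are unhealthy and why. The YARN ResourceManager REST API reports each node's state and health report; a small Python sketch (the master address is a placeholder):

      import requests

      # query the YARN ResourceManager REST API for all cluster nodes
      rm = "http://<RM_IP_Address>:8088"
      nodes = requests.get(f"{rm}/ws/v1/cluster/nodes").json()["nodes"]["node"]

      for n in nodes:
          if n["state"] == "UNHEALTHY":
              # healthReport usually explains the reason, e.g. low disk space
              print(n["nodeHostName"], "-", n["healthReport"])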

    Read More
    dmtolpeko
  • AWS, EMR, Hadoop, YARN

    Amazon EMR – Monitoring Auto-Scaling using Instance Controller Logs

    May 20, 2019

    Amazon EMR allows you to define scale-out and scale-in rules to automatically add and remove instances based on the metrics you specify.

    In this article I explore the instance controller logs, which can be very useful for monitoring auto-scaling. The logs are located in the /emr/instance-controller/log/ directory on the EMR master node.
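
    As a taste of what these logs can be used for, here is a minimal Python sketch that filters the current log for resize-related messages; the exact file name and keyword are my assumptions, so adjust them to what you actually find in the directory:

      # current instance controller log on the EMR master node
      # (the file name is an assumption; rotated copies also live in this directory)
      LOG = "/emr/instance-controller/log/instance-controller.log"

      with open(LOG, errors="replace") as f:
          for line in f:
              # the keyword filter is an assumption; adjust to the messages you see
              if "resize" in line.lower():
                  print(line.rstrip())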

    Read More
    dmtolpeko
  • AWS, EMR, Hadoop, YARN

    Hadoop YARN – Collecting Utilization Metrics from Multiple Clusters

    May 15, 2019

    When you run many Hadoop clusters, it is useful to automatically collect metrics from all of them in a single place (e.g., a Hive table).

    This allows you to perform advanced, custom analysis of your cluster workloads instead of being limited to the Hadoop administration UI tools, which often offer only a per-cluster view and make it hard to see the whole picture of your data platform.
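
    One way to implement such a collector is to poll each ResourceManager's REST metrics endpoint and emit one row per cluster for loading into a central table. A sketch under those assumptions (the host list, the polled fields and the CSV output are mine, not from the article):

      import time
      import requests

      # ResourceManager addresses of the clusters to poll (placeholders)
      CLUSTERS = ["http://master-1:8088", "http://master-2:8088"]

      for rm in CLUSTERS:
          m = requests.get(f"{rm}/ws/v1/cluster/metrics").json()["clusterMetrics"]
          # one CSV row per cluster: timestamp, cluster, memory and app counters
          row = (int(time.time()), rm, m["allocatedMB"], m["totalMB"],
                 m["appsRunning"], m["appsPending"])
          print(",".join(map(str, row)))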

    Read More
    dmtolpeko
  • Hadoop, Memory, YARN

    YARN Memory Under-Utilization When Running Low-Memory Instances (e.g., c4.xlarge)

    April 19, 2019

    Analyzing a Hadoop cluster, I noticed that it runs only 2 GB and 4 GB containers and does not allocate all of the available memory to applications, always leaving about 150 GB free.

    The clusters run Apache Pig and Hive applications with the default memory settings (which are also inherited by the Tez engine used by Pig and Hive):

    -- from mapred-site.xml
    mapreduce.map.memory.mb            1408
    mapreduce.reduce.memory.mb         2816
    yarn.app.mapreduce.am.resource.mb  2816
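
    A likely explanation for seeing only 2 GB and 4 GB containers is that YARN rounds every container request up to a multiple of the scheduler's minimum allocation. Assuming yarn.scheduler.minimum-allocation-mb is 2048 on these nodes (my assumption, not a value from the article), the arithmetic works out as follows:

      import math

      MIN_ALLOC_MB = 2048   # assumed yarn.scheduler.minimum-allocation-mb

      def allocated(requested_mb):
          # YARN rounds each request up to a multiple of the minimum allocation
          return math.ceil(requested_mb / MIN_ALLOC_MB) * MIN_ALLOC_MB

      for name, req in [("map", 1408), ("reduce", 2816), ("AM", 2816)]:
          print(f"{name}: requested {req} MB -> allocated {allocated(req)} MB")

      # map: requested 1408 MB -> allocated 2048 MB
      # reduce: requested 2816 MB -> allocated 4096 MB
      # AM: requested 2816 MB -> allocated 4096 MB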
    
    Read More
    dmtolpeko
  • Hive, Memory, Tez, YARN

    Tez Memory Tuning – Container is Running Beyond Physical Memory Limits – Solving by Reducing Memory Settings

    January 21, 2019

    Can reducing the Tez memory settings help solve memory limit problems? Paradoxically, sometimes it can.

    One day one of our Hive queries failed with the following error: Container is running beyond physical memory limits. Current usage: 4.1 GB of 4 GB physical memory used; 6.0 GB of 20 GB virtual memory used. Killing container.
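
    The intuition behind the paradox: YARN enforces the limit on total physical memory, which is the Java heap plus off-heap overhead (metaspace, thread stacks, native buffers), so shrinking the heap can bring the total back under the container limit. An illustrative calculation with made-up numbers:

      CONTAINER_MB = 4096    # physical memory limit enforced by YARN
      OFF_HEAP_MB = 900      # assumed JVM off-heap overhead

      for heap_mb in (3400, 3000):   # e.g. -Xmx before and after tuning
          total = heap_mb + OFF_HEAP_MB
          verdict = "killed" if total > CONTAINER_MB else "fits"
          print(f"heap {heap_mb} + off-heap {OFF_HEAP_MB} = {total} MB -> {verdict}")

      # heap 3400 + off-heap 900 = 4300 MB -> killed
      # heap 3000 + off-heap 900 = 3900 MB -> fits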

    Read More
    dmtolpeko
  • Amazon, AWS, EMR, YARN

    YARN Resource Manager Silent Restarts – Java Heap Space Error – Amazon EMR

    January 4, 2019

    When you run a job in Hadoop, you may notice the following error: Application with id 'application_1545962730597_2614' doesn't exist in RM. Later, looking at the YARN Resource Manager UI at http://<RM_IP_Address>:8088/cluster/apps, you may see unexpectedly low application ID numbers.
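
    Low IDs are a clue because a YARN application ID has the form application_<cluster timestamp>_<sequence number>: the timestamp is the ResourceManager start time in epoch milliseconds, and the sequence resets on every restart. A quick way to see when the RM last (re)started:

      from datetime import datetime, timezone

      app_id = "application_1545962730597_2614"

      # the middle component is the ResourceManager start time in epoch milliseconds
      ts_ms = int(app_id.split("_")[1])
      print(datetime.fromtimestamp(ts_ms / 1000, tz=timezone.utc))
      # 2018-12-28 02:05:30.597000+00:00 -- when this RM instance started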

    Read More
    dmtolpeko

Recent Posts

  • Apr 20, 2022 Amazon EMR Spark – Ignoring Partition Filter and Listing All Partitions When Reading from S3A
  • Mar 19, 2021 Spark – Reading Parquet – Why the Number of Tasks can be Much Larger than the Number of Row Groups
  • Mar 07, 2021 Spark – Reading Parquet – Predicate Pushdown for LIKE Operator – EqualTo, StartsWith and Contains Pushed Filters
  • Jan 15, 2021 Parquet 1.x File Format – Footer Content
  • Jan 02, 2021 Flink and S3 Entropy Injection for Checkpoints

Archives

  • April 2022 (1)
  • March 2021 (2)
  • January 2021 (2)
  • June 2020 (4)
  • May 2020 (8)
  • April 2020 (3)
  • February 2020 (3)
  • December 2019 (5)
  • November 2019 (4)
  • October 2019 (1)
  • September 2019 (2)
  • August 2019 (1)
  • May 2019 (9)
  • April 2019 (2)
  • January 2019 (3)
  • December 2018 (4)
  • November 2018 (1)
  • October 2018 (6)
  • September 2018 (2)

Categories

  • Amazon (12)
  • Auto Scaling (1)
  • AWS (26)
  • Cost Optimization (1)
  • CPU (2)
  • Data Skew (1)
  • Distributed (1)
  • EC2 (1)
  • EMR (11)
  • ETL (2)
  • Flink (5)
  • Hadoop (14)
  • Hive (17)
  • Hue (1)
  • I/O (20)
  • JVM (3)
  • Kinesis (1)
  • Logs (1)
  • Memory (7)
  • Monitoring (4)
  • ORC (5)
  • Parquet (7)
  • Pig (2)
  • Presto (3)
  • Qubole (2)
  • RDS (1)
  • S3 (18)
  • Snowflake (6)
  • Spark (5)
  • Storage (12)
  • Tez (10)
  • YARN (18)
