Large-Scale Data Engineering in Cloud

Performance Tuning, Cost Optimization / Internals, Research by Dmitry Tolpeko

  • About
  • Hadoop,  YARN

    Hadoop YARN – Monitoring Resource Consumption by Running Applications in Multi-Cluster Environments

    June 25, 2020

    In the cloud it is typical to run multiple compute clusters, so browsing the Web UI of every cluster to check the current resource consumption by applications is not always easy or convenient, especially when the YARN clusters are managed by different Hadoop distributions (Amazon EMR, Cloudera, Qubole etc.).

    Let’s see how you can automate this process and find out how many applications are running and which resources they are consuming (containers, memory and CPU).
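
    As a rough illustration of this kind of automation (not the exact script from the full post), the sketch below polls the ResourceManager REST API (/ws/v1/cluster/apps) of several clusters and sums the containers, memory and vCores allocated to running applications. The ResourceManager addresses are placeholders and error handling is omitted:

     # Sketch: summarize running applications across several YARN clusters
     # via the ResourceManager REST API (the RM addresses are placeholders).
     import requests

     RESOURCE_MANAGERS = {
         "emr-cluster": "http://emr-rm-host:8088",
         "qubole-cluster": "http://qubole-rm-host:8088",
     }

     for name, rm in RESOURCE_MANAGERS.items():
         resp = requests.get(f"{rm}/ws/v1/cluster/apps", params={"states": "RUNNING"})
         apps = (resp.json().get("apps") or {}).get("app", [])
         containers = sum(a["runningContainers"] for a in apps)
         memory_mb = sum(a["allocatedMB"] for a in apps)
         vcores = sum(a["allocatedVCores"] for a in apps)
         print(f"{name}: {len(apps)} apps, {containers} containers, "
               f"{memory_mb} MB, {vcores} vCores")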

    Read More
    dmtolpeko
  • CPU,  Hadoop,  YARN

    YARN – Negative vCores – Capacity Scheduler with Memory Resource Type

    May 8, 2020

    You might expect that the total number of vCores available to YARN limits the number of containers you can run concurrently, but that is not true in some cases.

    Let’s consider one of them – Capacity Scheduler with DefaultResourceCalculator (Memory only).

    Read More
    dmtolpeko
  • AWS,  CPU,  EC2,  EMR,  Hadoop,  Qubole,  YARN

    AWS EC2 vCPU and YARN vCores – M4, C4, R4 Instances

    May 7, 2020

    Let’s review how EC2 vCPUs correspond to YARN vCores in Amazon EMR and Qubole Hadoop clusters. As an example, I will choose m4.4xlarge, r4.4xlarge and c4.4xlarge EC2 instance types.

    An EC2 vCPU is a thread of a CPU core (typically, there are two threads per core). Does it mean that the number of YARN vCores should be equal to the number of EC2 vCPUs? That’s not always the case.
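
    If you want to check what your own cluster reports, one option (assuming access to the ResourceManager web port, 8088 by default) is to read the per-node vCores from the YARN nodes API and compare them with the 16 EC2 vCPUs of these 4xlarge instance types; a minimal sketch:

     # Sketch: list YARN vCores per node to compare with the EC2 vCPU count
     # (16 for m4.4xlarge, r4.4xlarge and c4.4xlarge). RM address is a placeholder.
     import requests

     RM = "http://resource-manager-host:8088"

     nodes = requests.get(f"{RM}/ws/v1/cluster/nodes").json()["nodes"]["node"]
     for n in nodes:
         total_vcores = n["usedVirtualCores"] + n["availableVirtualCores"]
         print(f'{n["nodeHostName"]}: {total_vcores} YARN vCores '
               f'({n["usedVirtualCores"]} used)')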

    Read More
    dmtolpeko
  • Hadoop,  Hive,  Tez,  YARN

    Hive on Tez – Shuffle Failed with Too Many Fetch Failures and Insufficient Progress

    February 26, 2020

    On one of the clusters I noticed an increased rate of shuffle errors, and restarting the job did not help; it still failed with the same error.

    The error was as follows:

     Error: Error while running task ( failure ) : 
      org.apache.tez.runtime.library.common.shuffle.orderedgrouped.Shuffle$ShuffleError: 
        error in shuffle in Fetcher 
     at org.apache.tez.runtime.library.common.shuffle.orderedgrouped.Shuffle$RunShuffleCallable.callInternal
     (Shuffle.java:301)
    
    Caused by: java.io.IOException: 
      Shuffle failed with too many fetch failures and insufficient progress!failureCounts=1,
        pendingInputs=1, fetcherHealthy=false, reducerProgressedEnough=true, reducerStalled=true
    
    Read More
    dmtolpeko
  • Hadoop,  JVM,  Memory,  YARN

    Hadoop YARN – Container Virtual Memory – Understanding and Solving “Container is running beyond virtual memory limits” Errors

    February 19, 2020

    In the previous article about YARN container memory (see Tez Memory Tuning – Container is Running Beyond Physical Memory Limits) I wrote about physical memory. Now I would like to turn to virtual memory in YARN.

    A typical YARN memory error may look like this:

    Container is running beyond virtual memory limits. Current usage: 1.0 GB of 1.1 GB physical memory used; 2.9 GB of 2.4 GB virtual memory used. Killing container.
    

    So what is virtual memory, how do you solve such errors, and why is the virtual memory size often so large?
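
    As a quick orientation before the full post (based on the standard YARN NodeManager settings, not on the article text): the virtual memory limit is the container’s physical allocation multiplied by yarn.nodemanager.vmem-pmem-ratio (2.1 by default), and the check itself is controlled by yarn.nodemanager.vmem-check-enabled. A back-of-the-envelope sketch, where the container size is only an example:

     # Sketch: how the virtual memory limit in the error above is derived.
     # It is the container's physical allocation multiplied by
     # yarn.nodemanager.vmem-pmem-ratio (2.1 by default); the GB figures in
     # the error message are rounded, and the container size here is an example.
     VMEM_PMEM_RATIO = 2.1      # yarn.nodemanager.vmem-pmem-ratio default
     container_mb = 1152        # example allocation, about 1.1 GB

     vmem_limit_mb = container_mb * VMEM_PMEM_RATIO
     print(f"physical: {container_mb} MB, virtual limit: {vmem_limit_mb:.0f} MB")
     # 1152 MB * 2.1 = 2419 MB, i.e. roughly the 2.4 GB limit shown in the error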

    Read More
    dmtolpeko
  • Hadoop,  YARN

    Hadoop YARN Cluster Idle Time

    February 14, 2020

    In the previous article Calculating Utilization of Cluster using Resource Manager Logs I showed how to estimate per-second utilization for a Hadoop cluster.

    This information can be useful for calculating idle time statistics for a cluster, i.e. the time when no containers are running at all.
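
    As a trivial illustration of the idea (the per-second series below is made up; in practice it comes from the utilization calculation in the previous article), idle time is just the number of seconds with zero running containers:

     # Sketch: idle time from a per-second utilization series.
     # `per_second` maps a Unix timestamp to the number of running containers;
     # the sample data below is made up for illustration.
     per_second = {1581638400: 12, 1581638401: 0, 1581638402: 0, 1581638403: 5}

     idle_seconds = sum(1 for containers in per_second.values() if containers == 0)
     print(f"Idle: {idle_seconds} s out of {len(per_second)} s "
           f"({100.0 * idle_seconds / len(per_second):.1f}%)")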

    Read More
    dmtolpeko
  • Hadoop,  YARN

    Hadoop YARN – Calculating Per Second Utilization of Cluster using Resource Manager Logs

    September 11, 2019

    You can use the YARN REST API to collect various Hadoop cluster metrics such as available and allocated memory, CPU, containers and so on.

    If you set up a process to extract data from this API, for example once per minute, you can easily collect and analyze historical and current cluster utilization quite accurately. For more details, see the Collecting Utilization Metrics from Multiple Clusters article.
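
    For example, a minimal once-per-minute collector could look like the sketch below (the ResourceManager address is a placeholder):

     # Sketch: periodic snapshot of cluster-wide YARN metrics from the
     # ResourceManager REST API (the RM address is a placeholder).
     import time
     import requests

     RM = "http://resource-manager-host:8088"

     while True:
         m = requests.get(f"{RM}/ws/v1/cluster/metrics").json()["clusterMetrics"]
         print(time.strftime("%Y-%m-%d %H:%M:%S"),
               f'apps={m["appsRunning"]}',
               f'containers={m["containersAllocated"]}',
               f'memory={m["allocatedMB"]}/{m["totalMB"]} MB',
               f'vcores={m["allocatedVirtualCores"]}/{m["totalVirtualCores"]}')
         time.sleep(60)     # one snapshot per minute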

    But even if you query the YARN REST API every second, it can still only provide a snapshot of the YARN resources in use. It does not show which application allocates or releases containers, their memory and CPU capacity, in which order these events occur, what their exact timestamps are, and so on.

    For this reason I prefer a different approach that uses the YARN Resource Manager logs to calculate exact per-second utilization metrics of a Hadoop cluster.

    Read More
    dmtolpeko
  • Hadoop,  Hive,  Memory,  YARN

    Tuning Hadoop YARN – Boosting Memory Settings Beyond the Limits to Increase Cluster Capacity and Utilization

    September 4, 2019

    Memory allocation in Hadoop YARN clusters has some drawbacks that may lead to significant cluster under-utilization and at the same time (!) to large queues of pending applications.

    So you have to pay for extra compute resources that you do not use and still have unsatisfied users. Let’s see how this can happen and how you can mitigate this.

    Read More
    dmtolpeko
  • AWS,  EMR,  Hadoop,  YARN

    Amazon EMR – Downscaling and Ghost (Impaired) Nodes

    August 27, 2019

    I already wrote about recovering ghost nodes in Amazon EMR clusters, but in this article I am going to extend this information to EMR clusters configured for auto-scaling.

    Read More
    dmtolpeko
  • AWS,  EMR,  Hadoop,  YARN

    Amazon EMR – Recovering Ghost Nodes

    May 23, 2019

    In a Hadoop cluster, besides Active nodes, you may also have Unhealthy, Lost and Decommissioned nodes. Unhealthy nodes are running but are excluded from task scheduling because, for example, they do not have enough disk space. Lost nodes are nodes that are no longer reachable. Decommissioned nodes are nodes that terminated successfully and left the cluster.

    But all these nodes are known to the Hadoop YARN cluster and you can see their details such as IP addresses, last health updates and so on.

    At the same time there can also be Ghost nodes, i.e. nodes that are still run by Amazon EMR services but that Hadoop itself knows nothing about. Let’s see how you can find them.
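
    One possible way to spot them, sketched below under the assumption that you can call both the EMR API (via boto3) and the YARN ResourceManager REST API, is to compare the instances EMR reports as running with the hosts YARN knows about; the cluster id and RM address are placeholders:

     # Sketch: EC2 instances that EMR reports as RUNNING but that are not
     # present in the YARN node list are candidate "ghost" nodes.
     # The cluster id and RM address are placeholders; this assumes YARN
     # reports the same private DNS names that EMR returns.
     import boto3
     import requests

     CLUSTER_ID = "j-XXXXXXXXXXXXX"
     RM = "http://resource-manager-host:8088"

     emr = boto3.client("emr")
     emr_hosts = {
         i["PrivateDnsName"]
         for i in emr.list_instances(ClusterId=CLUSTER_ID,
                                     InstanceGroupTypes=["CORE", "TASK"],
                                     InstanceStates=["RUNNING"])["Instances"]
     }

     yarn_nodes = requests.get(f"{RM}/ws/v1/cluster/nodes").json()["nodes"]["node"]
     yarn_hosts = {n["nodeHostName"] for n in yarn_nodes}

     for host in sorted(emr_hosts - yarn_hosts):
         print("Possible ghost node:", host)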

    Read More
    dmtolpeko

Recent Posts

  • Nov 26, 2023 ORDER BY in Spark – How Global Sort Is Implemented, Sampling, Range Partitioning and Skew
  • Oct 25, 2023 Reading JSON in Spark – Full Read for Inferring Schema and Sampling, SamplingRatio Option Implementation and Issues
  • Oct 15, 2023 Distributed COUNT DISTINCT – How it Works in Spark, Multiple COUNT DISTINCT, Transform to COUNT with Expand, Exploded Shuffle, Partial Aggregations
  • Oct 10, 2023 Spark – Reading Parquet – Pushed Filters, SUBSTR(timestamp, 1, 10), LIKE and StringStartsWith
  • Oct 06, 2023 Spark Stage Restarts – Partial Restarts, Multiple Retry Attempts with Different Task Sets, Accepted Late Results from Failed Stages, Cost of Restarts

Archives

  • November 2023 (1)
  • October 2023 (5)
  • September 2023 (1)
  • July 2023 (1)
  • August 2022 (4)
  • April 2022 (1)
  • March 2021 (2)
  • January 2021 (2)
  • June 2020 (4)
  • May 2020 (8)
  • April 2020 (3)
  • February 2020 (3)
  • December 2019 (5)
  • November 2019 (4)
  • October 2019 (1)
  • September 2019 (2)
  • August 2019 (1)
  • May 2019 (9)
  • April 2019 (2)
  • January 2019 (3)
  • December 2018 (4)
  • November 2018 (1)
  • October 2018 (6)
  • September 2018 (2)

Categories

  • Amazon (14)
  • Auto Scaling (1)
  • AWS (28)
  • Cost Optimization (1)
  • CPU (2)
  • Data Skew (2)
  • Distributed (1)
  • EC2 (1)
  • EMR (13)
  • ETL (2)
  • Flink (5)
  • Hadoop (14)
  • Hive (17)
  • Hue (1)
  • I/O (25)
  • JSON (1)
  • JVM (3)
  • Kinesis (1)
  • Logs (1)
  • Memory (7)
  • Monitoring (4)
  • Optimizer (2)
  • ORC (5)
  • Parquet (8)
  • Pig (2)
  • Presto (3)
  • Qubole (2)
  • RDS (1)
  • S3 (18)
  • Snowflake (6)
  • Spark (17)
  • Storage (14)
  • Tez (10)
  • YARN (18)
