Large-Scale Data Engineering in Cloud

Performance Tuning, Cost Optimization / Internals, Research by Dmitry Tolpeko

  • About
  • AWS, Hive, S3

    Hive Table for S3 Access Logs

    May 26, 2020

    Although Amazon S3 can generate a lot of logs, and it makes sense to have an ETL process that parses and combines them into Parquet or ORC format for better query performance, there is still an easy way to analyze the logs using a Hive table created directly on top of the raw S3 log directory.
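    A minimal sketch of such a table, assuming the RegexSerDe that ships with Hive (the table name, bucket and regex below are illustrative – the full field list and the exact pattern for S3 server access logs are described in the AWS documentation):

    CREATE EXTERNAL TABLE s3_access_logs (
      bucket_owner  STRING,
      bucket        STRING,
      request_time  STRING,
      remote_ip     STRING,
      requester     STRING,
      request_id    STRING,
      operation     STRING,
      key           STRING,
      request_uri   STRING,
      http_status   STRING
    )
    ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.RegexSerDe'
    WITH SERDEPROPERTIES (
      'input.regex' = '([^ ]*) ([^ ]*) \\[(.*?)\\] ([^ ]*) ([^ ]*) ([^ ]*) ([^ ]*) ([^ ]*) "([^"]*)" ([^ ]*).*'
    )
    LOCATION 's3://your-log-bucket/access-logs/';

    No ETL is required – queries simply parse the raw log lines at read time.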

    Read More
    dmtolpeko
  • Hadoop, Hive, Tez, YARN

    Hive on Tez – Shuffle Failed with Too Many Fetch Failures and Insufficient Progress

    February 26, 2020

    On one of the clusters I noticed an increased rate of shuffle errors, and restarting the job did not help; it failed with the same error again.

    The error was as follows:

     Error: Error while running task ( failure ) : 
      org.apache.tez.runtime.library.common.shuffle.orderedgrouped.Shuffle$ShuffleError: 
        error in shuffle in Fetcher 
     at org.apache.tez.runtime.library.common.shuffle.orderedgrouped.Shuffle$RunShuffleCallable.callInternal
     (Shuffle.java:301)
    
    Caused by: java.io.IOException: 
      Shuffle failed with too many fetch failures and insufficient progress!failureCounts=1,
        pendingInputs=1, fetcherHealthy=false, reducerProgressedEnough=true, reducerStalled=true
    
    Read More
    dmtolpeko
  • Amazon, AWS, Hive, I/O, Parquet, S3, Spark

    Spark – Slow Load Into Partitioned Hive Table on S3 – Direct Writes, Output Committer Algorithms

    December 30, 2019

    I have a Spark job that transforms incoming data from compressed text files into Parquet format and loads it into a daily partition of a Hive table. This is a typical data lake job; it is quite simple, but in my case it was very slow.

    Initially it took about 4 hours to convert ~2,100 input .gz files (~1.9 TB of data) into Parquet, while the actual Spark job took just 38 minutes to run and the remaining time was spent on loading data into a Hive partition.

    Let’s see what the reason for this behavior is and how we can improve the performance.
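    For context, this behavior is usually governed by the output committer and partition-overwrite settings. A sketch of a spark-defaults.conf fragment (these are real Spark/Hadoop options, but their exact effect depends on the Spark and Hadoop versions in use):

    # Use the "version 2" FileOutputCommitter algorithm: task output is moved
    # to the destination directly instead of a second job-level rename pass
    spark.hadoop.mapreduce.fileoutputcommitter.algorithm.version  2
    # Overwrite only the partitions the job actually writes
    spark.sql.sources.partitionOverwriteMode                      dynamic

    On S3, renames are copy-and-delete operations, which is why committer choice matters so much more than on HDFS.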

    Read More
    dmtolpeko
  • Hive, Memory, Tez, YARN

    Hive – Issues With Large YARN Containers – Low Concurrency and Utilization, High Execution Time

    December 11, 2019

    I was asked to tune a Hive query that ran more than 10 hours. It was running on a 100 node cluster with 16 GB available for YARN containers on each node.

    Although the query processed about 2 TB of input data, it performed a fairly simple aggregation on the user_id column and did not look too complex. It had 1 Map stage with 1,500 tasks and 1 Reduce stage with 7,000 tasks.

    All map tasks completed within 30 minutes, and the query got stuck in the Reduce phase. So what was wrong?
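    As a reference point (not the fix from this article – the values are illustrative, not recommendations), container memory for Hive on Tez is usually controlled per query like this:

    SET hive.tez.container.size=4096;    -- container size in MB
    SET hive.tez.java.opts=-Xmx3276m;    -- JVM heap, typically ~80% of the container
    SET tez.runtime.io.sort.mb=1024;     -- sort buffer allocated inside the heap

    Smaller containers let more tasks run concurrently on the same nodes.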

    Read More
    dmtolpeko
  • ETL, Hive, Presto

    Presto vs Hive – SLA Risks for Long Running ETL – Failures and Retries Due to Node Loss

    December 4, 2019

    Presto is an extremely powerful distributed SQL query engine, so at some point you may consider using it to replace SQL-based ETL processes that you currently run on Apache Hive.

    Although this is entirely possible, you should be aware of some limitations that may affect your SLAs.

    Read More
    dmtolpeko
  • Hive, JVM, Tez

    Apache Hive on Tez – Quick On The Fly Profiling of Long Running Tasks Using Jstack Probes and Flame Graphs

    November 20, 2019

    I was asked to diagnose and tune a long and complex ad-hoc Hive query that spent more than 4 hours in the reduce stage. The fetch from the map tasks and the merge phase completed fairly quickly (within 10 minutes), and the reducers spent most of their time iterating over the input rows and performing the aggregations defined by the query – MIN, SUM, COUNT, PERCENTILE_APPROX and others – on the specific columns.

    After the merge phase, a Tez reducer does not output many log records that could help you diagnose performance issues and find bottlenecks. In this article I will describe how you can profile an already running Tez task without restarting the job.

    Read More
    dmtolpeko
  • Hive, Tez

    Apache Hive – Monitoring Progress of Long-Running Reducers – hive.log.every.n.records Option

    November 16, 2019

    A reducer aggregates rows within input groups (i.e. rows having the same grouping key), typically producing one output row per group. For example, the following query returns ~200 rows even if the source events table has billions of rows:

    SELECT country, count(*) 
    FROM events 
    GROUP BY country;
    

    The problem is that despite the small number of output rows, the aggregation can still be a very long process taking many hours.
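    To see whether the reducers are actually making progress, you can ask Hive to log a row counter periodically (the value is illustrative):

    SET hive.log.every.n.records=100000;  -- each task logs every 100,000 processed rows

    The counters then appear in the task logs, so a stalled reducer is easy to tell apart from a slow but progressing one.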

    Read More
    dmtolpeko
  • Hive, Pig, Tez

    Apache Hive/Pig on Tez – Long Running Tasks and Their Failed Attempts – Analyzing Performance and Finding Bottlenecks (Insufficient Parallelism) using Application Master Logs

    November 1, 2019

    Apache Tez is the main distributed execution engine for Apache Hive and Apache Pig jobs.

    Tez represents a data flow as a DAG (directed acyclic graph) that consists of a set of vertices connected by edges. Vertices represent data transformations, while edges represent the movement of data between them.

    For example, the following Hive SQL query:

    SELECT u.name, sum_items 
    FROM
     (
       SELECT user_id, SUM(items) sum_items
       FROM sales
       GROUP BY user_id
     ) s
     JOIN
       users u
     ON s.user_id = u.id
    

    and its corresponding Apache Pig script:

    sales = LOAD 'sales' USING org.apache.hive.hcatalog.pig.HCatLoader();
    users = LOAD 'users' USING org.apache.hive.hcatalog.pig.HCatLoader();
    
    sales_agg = FOREACH (GROUP sales BY user_id)
      GENERATE 
        group.user_id as user_id,
        SUM(sales.items) as sum_items;
        
    data = JOIN sales_agg BY user_id, users BY id;
    

    can be represented as the following DAG in Tez:

    In my case the job ran almost 5 hours:

    Why did it take so long to run the job? Is there any way to improve its performance?

    Read More
    dmtolpeko
  • Hadoop, Hive, Memory, YARN

    Tuning Hadoop YARN – Boosting Memory Settings Beyond the Limits to Increase Cluster Capacity and Utilization

    September 4, 2019

    Memory allocation in Hadoop YARN clusters has some drawbacks that may lead to significant cluster under-utilization and at the same time (!) to large queues of pending applications.

    So you have to pay for extra compute resources that you do not use and still have unsatisfied users. Let’s see how this can happen and how you can mitigate it.
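    For reference, the per-node memory capacity and the container size bounds are defined in yarn-site.xml (a sketch; the values are illustrative, not recommendations):

    <property>
      <!-- memory YARN is allowed to allocate for containers on this node -->
      <name>yarn.nodemanager.resource.memory-mb</name>
      <value>116736</value>
    </property>
    <property>
      <!-- smallest and largest container YARN will grant -->
      <name>yarn.scheduler.minimum-allocation-mb</name>
      <value>1024</value>
    </property>
    <property>
      <name>yarn.scheduler.maximum-allocation-mb</name>
      <value>16384</value>
    </property>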

    Read More
    dmtolpeko
  • Hive, Memory, Tez, YARN

    Tez Memory Tuning – Container is Running Beyond Physical Memory Limits – Solving By Reducing Memory Settings

    January 21, 2019

    Can reducing the Tez memory settings help solve memory limit problems? Sometimes this paradox works.

    One day one of our Hive queries failed with the following error: Container is running beyond physical memory limits. Current usage: 4.1 GB of 4 GB physical memory used; 6.0 GB of 20 GB virtual memory used. Killing container.
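    The knobs involved are typically the Tez container size and the buffers allocated inside it; a sketch of reducing them per query (the values are illustrative):

    SET hive.tez.container.size=3072;    -- request a smaller container
    SET hive.tez.java.opts=-Xmx2458m;    -- shrink the JVM heap accordingly
    SET tez.runtime.io.sort.mb=768;      -- and the sort buffer within the heap

    A smaller heap can leave more headroom for off-heap allocations inside the physical memory limit that YARN enforces.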

    Read More
    dmtolpeko

Recent Posts

  • Nov 26, 2023 ORDER BY in Spark – How Global Sort Is Implemented, Sampling, Range Partitioning and Skew
  • Oct 25, 2023 Reading JSON in Spark – Full Read for Inferring Schema and Sampling, SamplingRatio Option Implementation and Issues
  • Oct 15, 2023 Distributed COUNT DISTINCT – How it Works in Spark, Multiple COUNT DISTINCT, Transform to COUNT with Expand, Exploded Shuffle, Partial Aggregations
  • Oct 10, 2023 Spark – Reading Parquet – Pushed Filters, SUBSTR(timestamp, 1, 10), LIKE and StringStartsWith
  • Oct 06, 2023 Spark Stage Restarts – Partial Restarts, Multiple Retry Attempts with Different Task Sets, Accepted Late Results from Failed Stages, Cost of Restarts

Archives

  • November 2023 (1)
  • October 2023 (5)
  • September 2023 (1)
  • July 2023 (1)
  • August 2022 (4)
  • April 2022 (1)
  • March 2021 (2)
  • January 2021 (2)
  • June 2020 (4)
  • May 2020 (8)
  • April 2020 (3)
  • February 2020 (3)
  • December 2019 (5)
  • November 2019 (4)
  • October 2019 (1)
  • September 2019 (2)
  • August 2019 (1)
  • May 2019 (9)
  • April 2019 (2)
  • January 2019 (3)
  • December 2018 (4)
  • November 2018 (1)
  • October 2018 (6)
  • September 2018 (2)

Categories

  • Amazon (14)
  • Auto Scaling (1)
  • AWS (28)
  • Cost Optimization (1)
  • CPU (2)
  • Data Skew (2)
  • Distributed (1)
  • EC2 (1)
  • EMR (13)
  • ETL (2)
  • Flink (5)
  • Hadoop (14)
  • Hive (17)
  • Hue (1)
  • I/O (25)
  • JSON (1)
  • JVM (3)
  • Kinesis (1)
  • Logs (1)
  • Memory (7)
  • Monitoring (4)
  • Optimizer (2)
  • ORC (5)
  • Parquet (8)
  • Pig (2)
  • Presto (3)
  • Qubole (2)
  • RDS (1)
  • S3 (18)
  • Snowflake (6)
  • Spark (17)
  • Storage (14)
  • Tez (10)
  • YARN (18)
