Large-Scale Data Engineering in Cloud

Performance Tuning, Cost Optimization / Internals, Research by Dmitry Tolpeko

  • About
  • Memory, Presto

    Presto – Troubleshooting Query Exceeded Per-Node Total Memory Limit – resource_overcommit, query.max-total-memory-per-node, Reserved Pool, Disk Spill

    November 28, 2019

    I had a SQL query that failed on one of the Presto 0.208 clusters with the “Query exceeded per-node total memory” (com.facebook.presto.ExceededMemoryLimitException) error. How can you solve this problem? I will consider a few possible solutions, but first let’s review how memory allocation works in Presto.
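
    As a quick illustration, one of the options mentioned in the post title can be enabled at the session level before re-running the query. A minimal sketch, assuming Presto 0.208 (query.max-total-memory-per-node itself is a config.properties setting, so raising it requires changing the cluster configuration and restarting):

    -- Allow this session's queries to exceed the per-node memory limits;
    -- they may still be killed if the cluster actually runs out of memory
    SET SESSION resource_overcommit = true;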

  • Hive, JVM, Tez

    Apache Hive on Tez – Quick On The Fly Profiling of Long Running Tasks Using Jstack Probes and Flame Graphs

    November 20, 2019

    I was asked to diagnose and tune a long and complex ad-hoc Hive query that spent more than 4 hours in the reduce stage. The fetch from the map tasks and the merge phase completed fairly quickly (within 10 minutes), and the reducers spent most of their time iterating over the input rows and performing the aggregations defined by the query – MIN, SUM, COUNT, PERCENTILE_APPROX and others – on specific columns.

    After the merge phase a Tez reducer does not write many log records that could help you diagnose performance issues and find the bottlenecks. In this article I will describe how you can profile an already running Tez task without restarting the job.

  • Hive, Tez

    Apache Hive – Monitoring Progress of Long-Running Reducers – hive.log.every.n.records Option

    November 16, 2019

    A reducer aggregates rows within input groups (i.e. rows having the same grouping key), typically producing one output row per group. For example, the following query returns ~200 rows even if the source events table has billions of rows:

    SELECT country, count(*) 
    FROM events 
    GROUP BY country;
    

    The problem is that, despite the small number of output rows, the aggregation can still be a very long process taking many hours.
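
    The option from the post title can be set directly in the Hive session before running such a query. A minimal sketch with an assumed threshold value:

    -- Assumed example value: write a log record for every 100,000 rows
    -- processed, so reducer progress becomes visible in the task logs
    SET hive.log.every.n.records=100000;

    SELECT country, count(*)
    FROM events
    GROUP BY country;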

  • Hive, Pig, Tez

    Apache Hive/Pig on Tez – Long Running Tasks and Their Failed Attempts – Analyzing Performance and Finding Bottlenecks (Insufficient Parallelism) using Application Master Logs

    November 1, 2019

    Apache Tez is the main distributed execution engine for Apache Hive and Apache Pig jobs.

    Tez represents a data flow as a DAG (directed acyclic graph) that consists of a set of vertices connected by edges. Vertices represent data transformations, while edges represent the movement of data between vertices.

    For example, the following Hive SQL query:

    SELECT u.name, sum_items 
    FROM
     (
       SELECT user_id, SUM(items) sum_items
       FROM sales
       GROUP BY user_id
     ) s
     JOIN
       users u
     ON s.user_id = u.id
    

    and its corresponding Apache Pig script:

    sales = LOAD 'sales' USING org.apache.hive.hcatalog.pig.HCatLoader();
    users = LOAD 'users' USING org.apache.hive.hcatalog.pig.HCatLoader();
    
    sales_agg = FOREACH (GROUP sales BY user_id)
      GENERATE 
    group as user_id,
        SUM(sales.items) as sum_items;
        
    data = JOIN sales_agg BY user_id, users BY id;
    

    can be represented as the following DAG in Tez:

    [Figure: Tez DAG for this query]

    In my case the job ran almost 5 hours:

    [Figure: job duration]

    Why did it take so long to run the job? Is there any way to improve its performance?

  • Amazon, AWS, RDS

    Sqoop Import from Amazon RDS Read Replica – ERROR: Canceling Statement due to Conflict with Recovery

    October 18, 2019

    Apache Sqoop is widely used to import data from relational databases into the cloud. One of our databases uses Amazon RDS for PostgreSQL to store sales data, and its Sqoop import periodically failed with the following error:

    Error: java.io.IOException: SQLException in nextKeyValue
    Caused by: org.postgresql.util.PSQLException: ERROR: canceling statement due to conflict with recovery
      Detail: User query might have needed to see row versions that must be removed.
    

    In this article I will describe a solution that helped resolve the problem in our specific case.
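
    For context, a common generic mitigation for this PostgreSQL error (not necessarily the solution described in the article) is to give queries on the read replica more time before recovery cancels them. A hedged sketch for a self-managed instance; on Amazon RDS the parameter is changed through a DB parameter group instead:

    -- Assumed example value: let replica queries run up to 30 minutes
    -- before WAL replay conflicts cancel them
    ALTER SYSTEM SET max_standby_streaming_delay = '30min';
    SELECT pg_reload_conf();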

  • Hadoop, YARN

    Hadoop YARN – Calculating Per Second Utilization of Cluster using Resource Manager Logs

    September 11, 2019

    You can use the YARN REST API to collect various Hadoop cluster metrics such as available and allocated memory, CPU, containers and so on.

    If you set up a process to extract data from this API, for example once per minute, you can quite easily collect and analyze historical and current cluster utilization fairly accurately. For more details, see the Collecting Utilization Metrics from Multiple Clusters article.

    But even if you query the YARN REST API every second, it can still only provide a snapshot of the used YARN resources. It does not show which application allocates or releases containers, what their memory and CPU capacity is, in which order these events occur, what their exact timestamps are, and so on.

    For this reason I prefer a different approach that is based on using the YARN Resource Manager logs to calculate exact per-second utilization metrics of a Hadoop cluster.

  • Hadoop, Hive, Memory, YARN

    Tuning Hadoop YARN – Boosting Memory Settings Beyond the Limits to Increase Cluster Capacity and Utilization

    September 4, 2019

    Memory allocation in Hadoop YARN clusters has some drawbacks that may lead to significant cluster under-utilization and at the same time (!) to large queues of pending applications.

    So you have to pay for extra compute resources that you do not use and still have unsatisfied users. Let’s see how this can happen and how you can mitigate it.

  • AWS, EMR, Hadoop, YARN

    Amazon EMR – Downscaling and Ghost (Impaired) Nodes

    August 27, 2019

    I already wrote about recovering ghost nodes in Amazon EMR clusters, but in this article I am going to extend this information to EMR clusters configured for auto-scaling.

  • AWS, EMR, Hadoop, YARN

    Amazon EMR – Recovering Ghost Nodes

    May 23, 2019

    In a Hadoop cluster, besides Active nodes, you may also have Unhealthy, Lost and Decommissioned nodes. Unhealthy nodes are running but are excluded from task scheduling because, for example, they do not have enough free disk space. Lost nodes are nodes that are not reachable anymore. Decommissioned nodes are nodes that successfully terminated and left the cluster.

    But all these nodes are known to the Hadoop YARN cluster, and you can see their details such as IP addresses, last health updates and so on.

    At the same time there can also be Ghost nodes, i.e. nodes that are run by Amazon EMR services but that Hadoop itself does not know anything about. Let’s see how you can find them.

  • AWS, EMR, Hadoop, YARN

    Amazon EMR – Recovering Unhealthy Nodes with EMR Services Down

    May 23, 2019

    Usually Hadoop is able to automatically recover cluster nodes from the Unhealthy state by cleaning up log and temporary directories. But sometimes nodes stay unhealthy for a long time, and manual intervention is necessary to bring them back.


Recent Posts

  • Nov 26, 2023 ORDER BY in Spark – How Global Sort Is Implemented, Sampling, Range Partitioning and Skew
  • Oct 25, 2023 Reading JSON in Spark – Full Read for Inferring Schema and Sampling, SamplingRatio Option Implementation and Issues
  • Oct 15, 2023 Distributed COUNT DISTINCT – How it Works in Spark, Multiple COUNT DISTINCT, Transform to COUNT with Expand, Exploded Shuffle, Partial Aggregations
  • Oct 10, 2023 Spark – Reading Parquet – Pushed Filters, SUBSTR(timestamp, 1, 10), LIKE and StringStartsWith
  • Oct 06, 2023 Spark Stage Restarts – Partial Restarts, Multiple Retry Attempts with Different Task Sets, Accepted Late Results from Failed Stages, Cost of Restarts

Archives

  • November 2023 (1)
  • October 2023 (5)
  • September 2023 (1)
  • July 2023 (1)
  • August 2022 (4)
  • April 2022 (1)
  • March 2021 (2)
  • January 2021 (2)
  • June 2020 (4)
  • May 2020 (8)
  • April 2020 (3)
  • February 2020 (3)
  • December 2019 (5)
  • November 2019 (4)
  • October 2019 (1)
  • September 2019 (2)
  • August 2019 (1)
  • May 2019 (9)
  • April 2019 (2)
  • January 2019 (3)
  • December 2018 (4)
  • November 2018 (1)
  • October 2018 (6)
  • September 2018 (2)

Categories

  • Amazon (14)
  • Auto Scaling (1)
  • AWS (28)
  • Cost Optimization (1)
  • CPU (2)
  • Data Skew (2)
  • Distributed (1)
  • EC2 (1)
  • EMR (13)
  • ETL (2)
  • Flink (5)
  • Hadoop (14)
  • Hive (17)
  • Hue (1)
  • I/O (25)
  • JSON (1)
  • JVM (3)
  • Kinesis (1)
  • Logs (1)
  • Memory (7)
  • Monitoring (4)
  • Optimizer (2)
  • ORC (5)
  • Parquet (8)
  • Pig (2)
  • Presto (3)
  • Qubole (2)
  • RDS (1)
  • S3 (18)
  • Snowflake (6)
  • Spark (17)
  • Storage (14)
  • Tez (10)
  • YARN (18)
