Large-Scale Data Engineering in Cloud

Performance Tuning, Cost Optimization / Internals, Research by Dmitry Tolpeko

  • About
  • AWS,  I/O,  S3

    S3 Multipart Upload – S3 Access Log Messages

    April 17, 2020

    Most applications writing data into S3 use the S3 multipart upload API to upload data in parts. First, you initiate the upload, then upload the parts, and finally complete the multipart upload.
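
    As a rough illustration of these three steps, here is a minimal boto3 sketch (the bucket name, key and part size are hypothetical; the post itself looks at how these calls show up on the server side):

      import boto3

      s3 = boto3.client("s3")
      bucket, key = "my-bucket", "data/data.gz"   # hypothetical names

      # Step 1: initiate the multipart upload and remember the upload ID
      upload_id = s3.create_multipart_upload(Bucket=bucket, Key=key)["UploadId"]

      # Step 2: upload the parts (every part except the last must be at least 5 MB)
      parts = []
      with open("data.gz", "rb") as f:
          part_number = 1
          while True:
              chunk = f.read(8 * 1024 * 1024)
              if not chunk:
                  break
              resp = s3.upload_part(Bucket=bucket, Key=key, PartNumber=part_number,
                                    UploadId=upload_id, Body=chunk)
              parts.append({"ETag": resp["ETag"], "PartNumber": part_number})
              part_number += 1

      # Step 3: complete the multipart upload by listing the uploaded parts
      s3.complete_multipart_upload(Bucket=bucket, Key=key, UploadId=upload_id,
                                   MultipartUpload={"Parts": parts})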

    Let’s see how this operation is reflected in the S3 access log. My application uploaded the file data.gz into S3, and I can view it as follows:

    Read More
    dmtolpeko
  • AWS,  Flink,  I/O,  S3

    Flink – Tuning Writes to S3 Sink – fs.s3a.threads.max

    April 12, 2020

    One of our Flink streaming jobs had significant variance in the time spent on writing files to S3 by the same Task Manager process.

    What settings do you need to check first?

    Read More
    dmtolpeko
  • Hadoop,  Hive,  Tez,  YARN

    Hive on Tez – Shuffle Failed with Too Many Fetch Failures and Insufficient Progress

    February 26, 2020

    On one of the clusters I noticed an increased rate of shuffle errors, and restarting the job did not help; it still failed with the same error.

    The error was as follows:

     Error: Error while running task ( failure ) :
     org.apache.tez.runtime.library.common.shuffle.orderedgrouped.Shuffle$ShuffleError:
       error in shuffle in Fetcher
         at org.apache.tez.runtime.library.common.shuffle.orderedgrouped.Shuffle$RunShuffleCallable.callInternal(Shuffle.java:301)

     Caused by: java.io.IOException:
       Shuffle failed with too many fetch failures and insufficient progress!failureCounts=1,
       pendingInputs=1, fetcherHealthy=false, reducerProgressedEnough=true, reducerStalled=true
    Read More
    dmtolpeko
  • Hadoop,  JVM,  Memory,  YARN

    Hadoop YARN – Container Virtual Memory – Understanding and Solving “Container is running beyond virtual memory limits” Errors

    February 19, 2020

    In the previous article about YARN container memory (see Tez Memory Tuning – Container is Running Beyond Physical Memory Limits), I wrote about physical memory. Now I would like to pay attention to virtual memory in YARN.

    A typical YARN memory error may look like this:

    Container is running beyond virtual memory limits. Current usage: 1.0 GB of 1.1 GB physical memory used; 2.9 GB of 2.4 GB virtual memory used. Killing container.
    

    So what is virtual memory, how do you solve such errors, and why is the virtual memory size often so large?

    Read More
    dmtolpeko
  • Hadoop,  YARN

    Hadoop YARN Cluster Idle Time

    February 14, 2020

    In the previous article Calculating Utilization of Cluster using Resource Manager Logs I showed how to estimate per-second utilization for a Hadoop cluster.

    This information can be useful for calculating idle time statistics for a cluster, i.e. the time when no containers are running.
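
    As a tiny sketch of the idea, assuming the per-second container counts have already been extracted from the Resource Manager logs as described in that article, idle time can be computed by counting the seconds with zero running containers:

      # containers_per_second: mapping from a UNIX timestamp (second) to the
      # number of containers running in that second (hypothetical structure)
      def idle_time_stats(containers_per_second, start_ts, end_ts):
          idle_seconds = 0
          for ts in range(start_ts, end_ts + 1):
              # seconds missing from the map also mean no containers were running
              if containers_per_second.get(ts, 0) == 0:
                  idle_seconds += 1
          total_seconds = end_ts - start_ts + 1
          return idle_seconds, 100.0 * idle_seconds / total_seconds

      # Example: a 10-second window in which containers ran for only 4 seconds
      counts = {100: 3, 101: 3, 102: 1, 103: 2}
      print(idle_time_stats(counts, 100, 109))   # (6, 60.0)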

    Read More
    dmtolpeko
  • Amazon,  AWS,  Hive,  I/O,  Parquet,  S3,  Spark

    Spark – Slow Load Into Partitioned Hive Table on S3 – Direct Writes, Output Committer Algorithms

    December 30, 2019

    I have a Spark job that transforms incoming data from compressed text files into Parquet format and loads it into a daily partition of a Hive table. This is a typical data lake job; it is quite simple, but in my case it was very slow.

    Initially it took about 4 hours to convert ~2,100 input .gz files (~1.9 TB of data) into Parquet, while the actual Spark job took just 38 minutes to run and the remaining time was spent on loading data into a Hive partition.
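
    For context, a simplified PySpark skeleton of this kind of job might look as follows; the paths, table and column names are made up, and the committer-related settings from the title are shown only to indicate where they are configured, not as the post's final recommendation:

      from pyspark.sql import SparkSession, functions as F

      spark = (SparkSession.builder
               .appName("gz-to-parquet-daily-load")
               # output committer algorithm and partition overwrite mode
               # referenced in the post title
               .config("spark.hadoop.mapreduce.fileoutputcommitter.algorithm.version", "2")
               .config("spark.sql.sources.partitionOverwriteMode", "dynamic")
               .enableHiveSupport()
               .getOrCreate())

      # Spark decompresses the .gz text files transparently
      raw = spark.read.text("s3://my-bucket/incoming/2019-12-30/*.gz")

      # ... parsing of raw lines into typed columns is omitted ...
      events = raw.select(F.col("value").alias("payload"),
                          F.lit("2019-12-30").alias("event_date"))

      # Write Parquet data into the daily partition of the Hive table
      events.write.mode("overwrite").insertInto("analytics.events")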

    Let’s see what the reason for this behavior is and how we can improve the performance.

    Read More
    dmtolpeko
  • Hive,  Memory,  Tez,  YARN

    Hive – Issues With Large YARN Containers – Low Concurrency and Utilization, High Execution Time

    December 11, 2019

    I was asked to tune a Hive query that ran for more than 10 hours. It was running on a 100-node cluster with 16 GB available for YARN containers on each node.

    Although the query processed about 2 TB of input data, it performed a fairly simple aggregation on the user_id column and did not look too complex. It had 1 Map stage with 1,500 tasks and 1 Reduce stage with 7,000 tasks.

    All map tasks completed within 30 minutes, and the query got stuck in the Reduce phase. So what was wrong?

    Read More
    dmtolpeko
  • Auto Scaling,  Presto

    Presto – Query Auto Scaling Limitations – Low Utilization After Upscale – GROUP BY Partitioned Stage, query-manager.required-workers

    December 4, 2019

    One of the powerful features of Presto is auto scaling of compute resources for already running queries. When a cluster is idle you can reduce its size or even terminate the cluster, and then dynamically add nodes when necessary depending on the compute requirements of queries.

    Unfortunately, there are some limitations since not all stages can scale on the fly, so you have to use some workarounds to get reasonable cold start performance.

    Read More
    dmtolpeko
  • ETL,  Hive,  Presto

    Presto vs Hive – SLA Risks for Long Running ETL – Failures and Retries Due to Node Loss

    December 4, 2019

    Presto is an extremely powerful distributed SQL query engine, so at some point you may consider using it to replace SQL-based ETL processes that you currently run on Apache Hive.

    Although it is completely possible, you should be aware of some limitations that may affect your SLAs.

    Read More
    dmtolpeko
  • I/O,  Snowflake

    Snowflake – Micro-Partitions and Clustering Depth

    December 2, 2019

    Traditional data warehouses require you to explicitly specify partition columns for tables using the PARTITION BY clause. There is no PARTITION BY clause in the CREATE TABLE statement in Snowflake although it still heavily relies on partitions.

    I already wrote about partitions in Snowflake (see MIN/MAX Functions and Partition Pruning in Snowflake), but in this article I am going to investigate some more details.
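
    As a small, hypothetical example of inspecting micro-partition clustering from Python (the table and column names are invented; SYSTEM$CLUSTERING_DEPTH is the Snowflake function that reports the average overlap depth of micro-partitions):

      import snowflake.connector

      # Hypothetical connection parameters
      conn = snowflake.connector.connect(account="my_account", user="my_user",
                                         password="***", warehouse="my_wh",
                                         database="my_db", schema="public")
      cur = conn.cursor()

      # Average depth of overlapping micro-partitions for the given column
      cur.execute("SELECT SYSTEM$CLUSTERING_DEPTH('events', '(event_date)')")
      print(cur.fetchone()[0])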

    Read More
    dmtolpeko

Recent Posts

  • Nov 26, 2023 ORDER BY in Spark – How Global Sort Is Implemented, Sampling, Range Partitioning and Skew
  • Oct 25, 2023 Reading JSON in Spark – Full Read for Inferring Schema and Sampling, SamplingRatio Option Implementation and Issues
  • Oct 15, 2023 Distributed COUNT DISTINCT – How it Works in Spark, Multiple COUNT DISTINCT, Transform to COUNT with Expand, Exploded Shuffle, Partial Aggregations
  • Oct 10, 2023 Spark – Reading Parquet – Pushed Filters, SUBSTR(timestamp, 1, 10), LIKE and StringStartsWith
  • Oct 06, 2023 Spark Stage Restarts – Partial Restarts, Multiple Retry Attempts with Different Task Sets, Accepted Late Results from Failed Stages, Cost of Restarts

Archives

  • November 2023 (1)
  • October 2023 (5)
  • September 2023 (1)
  • July 2023 (1)
  • August 2022 (4)
  • April 2022 (1)
  • March 2021 (2)
  • January 2021 (2)
  • June 2020 (4)
  • May 2020 (8)
  • April 2020 (3)
  • February 2020 (3)
  • December 2019 (5)
  • November 2019 (4)
  • October 2019 (1)
  • September 2019 (2)
  • August 2019 (1)
  • May 2019 (9)
  • April 2019 (2)
  • January 2019 (3)
  • December 2018 (4)
  • November 2018 (1)
  • October 2018 (6)
  • September 2018 (2)

Categories

  • Amazon (14)
  • Auto Scaling (1)
  • AWS (28)
  • Cost Optimization (1)
  • CPU (2)
  • Data Skew (2)
  • Distributed (1)
  • EC2 (1)
  • EMR (13)
  • ETL (2)
  • Flink (5)
  • Hadoop (14)
  • Hive (17)
  • Hue (1)
  • I/O (25)
  • JSON (1)
  • JVM (3)
  • Kinesis (1)
  • Logs (1)
  • Memory (7)
  • Monitoring (4)
  • Optimizer (2)
  • ORC (5)
  • Parquet (8)
  • Pig (2)
  • Presto (3)
  • Qubole (2)
  • RDS (1)
  • S3 (18)
  • Snowflake (6)
  • Spark (17)
  • Storage (14)
  • Tez (10)
  • YARN (18)
