Large-Scale Data Engineering and Analytics in Cloud

Performance Tuning and Optimization / Internals, Research

  • About
  • AWS,  Kinesis

    Kinesis Client Library (KCL 2.x) Consumer – Load Balancing, Rebalancing – Taking, Renewing and Stealing Leases

    May 20, 2020

    For zero-downtime, large-scale systems you can run multiple compute clusters in different availability zones.

    The Kinesis Client Library (KCL) 2.x Consumer helps you build highly scalable, elastic and fault-tolerant streaming data processing pipelines for Amazon Kinesis. Let’s review some KCL internals related to load balancing and the response to compute node/cluster failures, and how you can tune and monitor these activities.

    Read More
    dmtolpeko
  • CPU,  Hadoop,  YARN

    YARN – Negative vCores – Capacity Scheduler with Memory Resource Type

    May 8, 2020

    You might expect that the total number of vCores available to YARN limits the number of containers you can run concurrently, but in some cases that is not true.

    Let’s consider one of them – Capacity Scheduler with DefaultResourceCalculator (Memory only).
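    The mechanism can be illustrated with a toy simulation (not YARN code, just a sketch of the idea): with DefaultResourceCalculator the Capacity Scheduler fits containers by memory only, so the vCores counter is merely reported, not enforced, and can drop below zero.

    ```python
    def allocate(node_mem_mb, node_vcores, requests):
        """Place containers checking memory only; vCores are tracked as a side effect."""
        avail_mem, avail_vcores = node_mem_mb, node_vcores
        placed = 0
        for mem_mb, vcores in requests:
            if mem_mb <= avail_mem:        # only memory is checked
                avail_mem -= mem_mb
                avail_vcores -= vcores     # may become negative
                placed += 1
        return placed, avail_mem, avail_vcores

    # A node with 32 GB and 8 vCores receives 10 requests of 3 GB / 2 vCores each:
    # all 10 fit by memory, and the reported available vCores goes to -12
    print(allocate(32768, 8, [(3072, 2)] * 10))
    ```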

    Read More
    dmtolpeko
  • AWS,  CPU,  EC2,  EMR,  Hadoop,  Qubole,  YARN

    AWS EC2 vCPU and YARN vCores – M4, C4, R4 Instances

    May 7, 2020

    Let’s review how EC2 vCPUs correspond to YARN vCores in Amazon EMR and Qubole Hadoop clusters. As an example, I will choose m4.4xlarge, r4.4xlarge and c4.4xlarge EC2 instance types.

    An EC2 vCPU is a thread of a CPU core (typically, there are two threads per core). Does this mean that YARN vCores should be equal to the number of EC2 vCPUs? That’s not always the case.
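    The vCPU arithmetic itself is simple: vCPUs = physical cores × threads per core, so the 4xlarge instances above each expose 16 vCPUs (8 cores with 2 hyper-threads each).

    ```python
    # An EC2 vCPU is a hardware thread, so:
    def vcpus(cores: int, threads_per_core: int = 2) -> int:
        return cores * threads_per_core

    # m4.4xlarge, r4.4xlarge and c4.4xlarge: 8 physical cores, 2 threads per core
    print(vcpus(8))  # 16
    ```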

    Read More
    dmtolpeko
  • AWS,  S3

    S3 REST API – HTTP/1.1 Requests for Uploading Files

    May 2, 2020

    Let’s review major REST API requests for uploading files to S3 (PutObject, CreateMultipartUpload, UploadPart and CompleteMultipartUpload) that you can observe in S3 access logs.

    This can be helpful for monitoring S3 write performance. See also S3 Multipart Upload – S3 Access Log Messages.
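    As a rough guide (operation names quoted from memory – verify against your own logs), these API calls show up in the access log’s operation field as:

    ```
    PutObject                REST.PUT.OBJECT
    CreateMultipartUpload    REST.POST.UPLOADS
    UploadPart               REST.PUT.PART
    CompleteMultipartUpload  REST.POST.UPLOAD
    ```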

    Read More
    dmtolpeko
  • Flink,  JVM,  Memory,  YARN

    Flink 1.9 – Off-Heap Memory on YARN – Troubleshooting Container is Running Beyond Physical Memory Limits Errors

    April 29, 2020

    On one of my clusters I got my favorite YARN error, although now it was in a Flink application:

    Container is running beyond physical memory limits. Current usage: 99.5 GB of 99.5 GB physical memory used; 105.1 GB of 227.8 GB virtual memory used. Killing container.

    Why did the container take so much physical memory and fail? Let’s investigate in detail.
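    One piece of background worth sketching up front: in Flink 1.9 on YARN, part of the container is reserved for off-heap/native memory via a heap “cutoff”, and the JVM heap gets the rest. Assuming the 1.9-era defaults (containerized.heap-cutoff-ratio = 0.25, containerized.heap-cutoff-min = 600 MB – check your flink-conf.yaml), the split looks roughly like this:

    ```python
    # Sketch of Flink 1.9's YARN heap cutoff (assumed defaults shown below).
    # If off-heap usage grows beyond the cutoff, the container exceeds its
    # physical memory limit and YARN kills it.
    def heap_size_mb(container_mb, cutoff_ratio=0.25, cutoff_min_mb=600):
        cutoff = max(cutoff_min_mb, int(container_mb * cutoff_ratio))
        return container_mb - cutoff

    # A 4 GB container: 1024 MB reserved for off-heap, 3072 MB for the JVM heap
    print(heap_size_mb(4096))  # 3072
    ```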

    Read More
    dmtolpeko
  • AWS,  I/O,  S3

    S3 Multipart Upload – S3 Access Log Messages

    April 17, 2020

    Most applications writing data into S3 use the S3 multipart upload API to upload data in parts. First, you initiate the upload, then upload the parts and finally complete the multipart upload.

    Let’s see how this operation is reflected in the S3 access log. My application uploaded the file data.gz into S3, and I can view it as follows:
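    As a side note on the part arithmetic in the CreateMultipartUpload → UploadPart × N → CompleteMultipartUpload sequence: S3 requires every part except the last to be at least 5 MiB, and allows at most 10,000 parts per object. A quick sketch:

    ```python
    import math

    MIN_PART = 5 * 1024 * 1024   # minimum size for all parts except the last
    MAX_PARTS = 10_000           # per-object part limit

    def part_count(object_size: int, part_size: int) -> int:
        assert part_size >= MIN_PART, "parts (except the last) must be >= 5 MiB"
        n = math.ceil(object_size / part_size)
        assert n <= MAX_PARTS, "increase the part size"
        return n

    # a 1 GiB object uploaded in 8 MiB parts
    print(part_count(1 << 30, 8 * 1024 * 1024))  # 128
    ```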

    Read More
    dmtolpeko
  • AWS,  Flink,  I/O,  S3

    Flink – Tuning Writes to S3 Sink – fs.s3a.threads.max

    April 12, 2020

    One of our Flink streaming jobs had significant variance in the time spent on writing files to S3 by the same Task Manager process.

    What settings do you need to check first?
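    For reference, the property in question lives in the s3a filesystem configuration (core-site.xml or the Flink-side Hadoop config); the values below are purely illustrative, not recommendations:

    ```xml
    <!-- fs.s3a.threads.max caps the s3a thread pool used for parallel
         (multipart) uploads; fs.s3a.max.total.tasks bounds the queue of
         uploads waiting for a thread. Values here are hypothetical. -->
    <property>
      <name>fs.s3a.threads.max</name>
      <value>64</value>
    </property>
    <property>
      <name>fs.s3a.max.total.tasks</name>
      <value>128</value>
    </property>
    ```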

    Read More
    dmtolpeko
  • Hadoop,  Hive,  Tez,  YARN

    Hive on Tez – Shuffle Failed with Too Many Fetch Failures and Insufficient Progress

    February 26, 2020

    On one of the clusters I noticed an increased rate of shuffle errors. Restarting the job did not help; it still failed with the same error.

    The error was as follows:

     Error: Error while running task ( failure ) : 
      org.apache.tez.runtime.library.common.shuffle.orderedgrouped.Shuffle$ShuffleError: 
        error in shuffle in Fetcher 
     at org.apache.tez.runtime.library.common.shuffle.orderedgrouped.Shuffle$RunShuffleCallable.callInternal
     (Shuffle.java:301)
    
    Caused by: java.io.IOException: 
      Shuffle failed with too many fetch failures and insufficient progress!failureCounts=1,
        pendingInputs=1, fetcherHealthy=false, reducerProgressedEnough=true, reducerStalled=true
    
    Read More
    dmtolpeko
  • Hadoop,  JVM,  Memory,  YARN

    Hadoop YARN – Container Virtual Memory – Understanding and Solving “Container is running beyond virtual memory limits” Errors

    February 19, 2020

    In the previous article about YARN container memory (see Tez Memory Tuning – Container is Running Beyond Physical Memory Limits) I wrote about physical memory. Now I would like to turn to virtual memory in YARN.

    A typical YARN memory error may look like this:

    Container is running beyond virtual memory limits. Current usage: 1.0 GB of 1.1 GB physical memory used; 2.9 GB of 2.4 GB virtual memory used. Killing container.
    

    So what is virtual memory, why is its size often so large, and how do you resolve such errors?
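    The arithmetic behind the limit is worth spelling out: YARN derives the virtual memory limit from the container’s physical allocation via yarn.nodemanager.vmem-pmem-ratio (default 2.1), and a JVM’s address-space reservations easily exceed a limit that small. Common remedies are raising the ratio or disabling the check with yarn.nodemanager.vmem-check-enabled=false.

    ```python
    # vmem limit = physical container size * yarn.nodemanager.vmem-pmem-ratio
    def vmem_limit_gb(container_gb: float, vmem_pmem_ratio: float = 2.1) -> float:
        return container_gb * vmem_pmem_ratio

    # a ~1.1 GB container gets roughly a 2.3 GB virtual memory limit,
    # in line with the 2.4 GB limit in the error message above
    print(round(vmem_limit_gb(1.1), 2))
    ```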

    Read More
    dmtolpeko
  • Hadoop,  YARN

    Hadoop YARN Cluster Idle Time

    February 14, 2020

    In the previous article, Calculating Utilization of Cluster using Resource Manager Logs, I showed how to estimate per-second utilization for a Hadoop cluster.

    This information can be used to calculate idle time statistics for a cluster, i.e. periods when no containers are running.
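    Given a per-second series of running-container counts (such as the one derived from the Resource Manager logs), the idle time is simply the number of seconds with zero containers – a minimal sketch:

    ```python
    def idle_seconds(containers_per_second):
        """Count seconds during which no containers were running."""
        return sum(1 for c in containers_per_second if c == 0)

    # zero containers at seconds 0, 4 and 5 -> 3 idle seconds
    print(idle_seconds([0, 3, 5, 2, 0, 0, 1]))  # 3
    ```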

    Read More
    dmtolpeko

Recent Posts

  • Jan 15, 2021 Parquet 1.x File Format – Footer Content
  • Jan 02, 2021 Flink and S3 Entropy Injection for Checkpoints
  • Jun 25, 2020 Hadoop YARN – Monitoring Resource Consumption by Running Applications in Multi-Cluster Environments
  • Jun 18, 2020 How Map Column is Written to Parquet – Converting JSON to Map to Increase Read Performance
  • Jun 09, 2020 Flink Streaming to Parquet Files in S3 – Massive Write IOPS on Checkpoint

Archives

  • January 2021 (2)
  • June 2020 (4)
  • May 2020 (8)
  • April 2020 (3)
  • February 2020 (3)
  • December 2019 (5)
  • November 2019 (4)
  • October 2019 (1)
  • September 2019 (2)
  • August 2019 (1)
  • May 2019 (9)
  • April 2019 (2)
  • January 2019 (3)
  • December 2018 (4)
  • November 2018 (1)
  • October 2018 (6)
  • September 2018 (2)

Categories

  • Amazon (11)
  • Auto Scaling (1)
  • AWS (25)
  • Cost Optimization (1)
  • CPU (2)
  • Data Skew (1)
  • Distributed (1)
  • EC2 (1)
  • EMR (10)
  • ETL (2)
  • Flink (5)
  • Hadoop (14)
  • Hive (17)
  • Hue (1)
  • I/O (18)
  • JVM (3)
  • Kinesis (1)
  • Logs (1)
  • Memory (7)
  • Monitoring (4)
  • ORC (5)
  • Parquet (5)
  • Pig (2)
  • Presto (3)
  • Qubole (2)
  • RDS (1)
  • S3 (17)
  • Snowflake (6)
  • Spark (2)
  • Storage (12)
  • Tez (10)
  • YARN (18)
