Large-Scale Data Engineering in Cloud

Performance Tuning, Cost Optimization / Internals, Research by Dmitry Tolpeko

  • About
  • AWS,  EMR,  Hadoop,  YARN

    Amazon EMR – Monitoring Auto-Scaling using Instance Controller Logs

    May 20, 2019

    Amazon EMR allows you to define scale-out and scale-in rules to automatically add and remove instances based on the metrics you specify.

    In this article I am going to explore the instance controller logs, which can be very useful for monitoring auto-scaling. The logs are located in the /emr/instance-controller/log/ directory on the EMR master node.

    Read More
    dmtolpeko
  • AWS,  EMR,  Hadoop,  YARN

    Hadoop YARN – Collecting Utilization Metrics from Multiple Clusters

    May 15, 2019

    When you run many Hadoop clusters, it is useful to automatically collect metrics from all of them in a single place (e.g., a Hive table).

    This allows you to perform advanced, custom analysis of your clusters’ workload without being limited to the Hadoop administration UI tools, which often offer only a per-cluster view and make it hard to see the whole picture of your data platform.
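    As a rough illustration of the idea (the yarn_cluster_metrics table and its columns below are hypothetical, not taken from the original post), once per-cluster snapshots land in a single Hive table, cross-cluster analysis becomes plain SQL:

    -- Hypothetical Hive table holding periodic YARN snapshots from every cluster
    CREATE TABLE IF NOT EXISTS yarn_cluster_metrics (
      cluster_name       STRING,
      collected_at       TIMESTAMP,
      memory_total_mb    BIGINT,
      memory_used_mb     BIGINT,
      containers_running INT
    );

    -- Average memory utilization per cluster across all collected snapshots
    SELECT cluster_name,
           AVG(memory_used_mb / memory_total_mb) AS avg_memory_utilization,
           MAX(collected_at)                     AS last_snapshot
    FROM yarn_cluster_metrics
    GROUP BY cluster_name;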

    Read More
    dmtolpeko
  • I/O,  Snowflake,  Storage

    Snowflake – Monitoring Data Ingestion using QUERY_HISTORY and COPY_HISTORY – Single Large File vs Multiple Small Files

    May 14, 2019

    Snowflake provides various options to monitor data ingestion from external storage such as Amazon S3. In this article I am going to review the QUERY_HISTORY and COPY_HISTORY table functions.

    The COPY commands are widely used to move data into Snowflake on a time-interval basis, and we can monitor their execution by accessing the query history with a query_type = 'COPY' filter.
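    For example (a sketch only; the 24-hour window and the EVENTS table name are placeholders, not from the original post), recent COPY statements can be listed from the query history, and COPY_HISTORY adds per-file load details:

    SELECT query_id, query_text, warehouse_name, start_time, end_time, total_elapsed_time
    FROM TABLE(INFORMATION_SCHEMA.QUERY_HISTORY(
           END_TIME_RANGE_START => DATEADD('hour', -24, CURRENT_TIMESTAMP())))
    WHERE query_type = 'COPY'   -- keep only COPY commands
    ORDER BY start_time;

    SELECT file_name, file_size, row_count, status, last_load_time
    FROM TABLE(INFORMATION_SCHEMA.COPY_HISTORY(
           TABLE_NAME => 'EVENTS',    -- placeholder table name
           START_TIME => DATEADD('hour', -24, CURRENT_TIMESTAMP())));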

    Read More
    dmtolpeko
  • Cost Optimization,  Snowflake

    Snowflake – Data Ingestion – Cluster Utilization, Idle Time and Compute Cost

    May 11, 2019

    Snowflake separates compute and storage, so it is typical to have a dedicated compute cluster (virtual warehouse) to handle data ingestion into Snowflake (if you do not use Snowpipe).

    Like reporting and ad-hoc SQL, data ingestion has its own specifics. Usually there are many tables (data sources), each with its own schedule (daily, hourly, every 5, 10 or 15 minutes, etc.) for periodic data transfer. ETL processes can overlap, can have spikes followed by idle time, and so on.
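    As a hedged sketch of one way to look at this (the LOAD_WH warehouse name is a placeholder), the WAREHOUSE_METERING_HISTORY table function shows hourly credit consumption of the ingestion warehouse, which makes spikes and idle periods easy to spot:

    SELECT start_time, end_time, credits_used
    FROM TABLE(INFORMATION_SCHEMA.WAREHOUSE_METERING_HISTORY(
           DATE_RANGE_START => DATEADD('day', -7, CURRENT_DATE()),
           WAREHOUSE_NAME   => 'LOAD_WH'))   -- placeholder warehouse name
    ORDER BY start_time;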

    Read More
    dmtolpeko
  • ETL,  Snowflake

    Snowflake – Reloading Data from Stage – TRUNCATE, DELETE, COPY and Transactions

    May 6, 2019

    Sometimes you need to reload the entire data set from the source storage into Snowflake. For example, you may want to fully refresh a fairly large lookup table (2 GB compressed) without keeping the history. Let’s see how to do this in Snowflake and what issues you need to take into account.
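    A minimal sketch of one approach (the table and stage names are placeholders; the full post compares the options and their transactional behavior): wrap DELETE and COPY in an explicit transaction so that readers see either the old or the new version of the table:

    BEGIN;
    -- DELETE is transactional, so the reload appears atomic to readers
    DELETE FROM country_lookup;
    -- Reload the full data set from the external stage
    COPY INTO country_lookup
      FROM @my_stage/country/
      FILE_FORMAT = (TYPE = CSV);
    COMMIT;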

    Read More
    dmtolpeko
  • I/O,  Snowflake

    Snowflake – Remote Disk I/O, Local Disk Cache – Capacity, Utilization and Transfer Rate

    May 4, 2019

    Snowflake uses a cloud storage service such as Amazon S3 as permanent storage for data (Remote Disk in Snowflake terms), but it can also use Local Disk (SSD) to temporarily cache data used by SQL queries. Let’s test Remote and Local I/O performance by executing a sample SQL query multiple times on X-Large and Medium size Snowflake warehouses:

    SELECT MIN(event_hour), MAX(event_hour) FROM events WHERE event_name = 'LOGIN';
    

    Note that you should disable the Result Cache for queries in your session to perform such tests; otherwise, Snowflake will just return the cached result immediately after the first attempt:

    alter session set USE_CACHED_RESULT = FALSE;
    Read More
    dmtolpeko
  • Snowflake

    Performance of MIN/MAX Functions – Metadata Operations and Partition Pruning in Snowflake

    May 3, 2019

    Snowflake stores table data in micro-partitions and uses a columnar storage format, keeping MIN/MAX value statistics for each column in every micro-partition as well as for the entire table. Let’s investigate how this affects the performance of SQL queries involving the MIN and MAX functions.
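    For example (reusing the events table from the earlier query; whether metadata alone is enough is exactly what the post examines), compare an unfiltered MIN/MAX with a filtered one:

    -- May potentially be answered from micro-partition metadata
    SELECT MIN(event_hour), MAX(event_hour) FROM events;

    -- The filter on event_name generally requires scanning the micro-partitions left after pruning
    SELECT MIN(event_hour), MAX(event_hour) FROM events WHERE event_name = 'LOGIN';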

    Read More
    dmtolpeko
  • Hadoop,  Memory,  YARN

    YARN Memory Under-Utilization Running Low-Memory Instances (c4.xlarge i.e.)

    April 19, 2019

    Analyzing a Hadoop cluster, I noticed that it runs only 2 GB and 4 GB containers and does not allocate all of the available memory to applications, always leaving about 150 GB free.

    The clusters run Apache Pig and Hive applications and use the default memory settings (which are also inherited by the Tez engine used by Pig and Hive):

    -- from mapred-site.xml
    mapreduce.map.memory.mb            1408
    mapreduce.reduce.memory.mb         2816
    yarn.app.mapreduce.am.resource.mb  2816
    
    Read More
    dmtolpeko
  • Amazon,  EMR,  Spark

    Extremely Large Number of RDD Partitions and Tasks in Spark on Amazon EMR

    April 8, 2019

    After creating an Amazon EMR cluster with Spark support and running a Spark application, you may notice that the Spark job creates too many tasks to process even a very small data set.

    For example, I have a small table country_iso_codes with 249 rows, stored in a comma-delimited text file 10,657 bytes in length.

    When running the following application on an Amazon EMR 5.7 cluster with Spark 2.1.1 and the default settings, I can see a large number of partitions generated:

    Read More
    dmtolpeko
  • Data Skew,  Distributed,  Pig,  Tez

    Reduce Number of Output Files for Skewed Data – ORDER in Apache Pig – Sampler and Weighted Range Partitioner to Balance Reducers

    January 25, 2019

    One of our event tables is very large: it contains billions of rows per day. Since the analytics team is often interested in specific events only, it makes sense to process the raw events and generate a partition for every event type. Then, when a data analyst runs an ad-hoc query, it reads data only for the required event type, which increases query performance.

    The problem is that 3,000 map tasks are launched to read the daily data and there are 250 distinct event types, so the mappers will produce 3,000 * 250 = 750,000 files per day. That’s too much.

    Read More
    dmtolpeko

Recent Posts

  • Nov 26, 2023 ORDER BY in Spark – How Global Sort Is Implemented, Sampling, Range Partitioning and Skew
  • Oct 25, 2023 Reading JSON in Spark – Full Read for Inferring Schema and Sampling, SamplingRatio Option Implementation and Issues
  • Oct 15, 2023 Distributed COUNT DISTINCT – How it Works in Spark, Multiple COUNT DISTINCT, Transform to COUNT with Expand, Exploded Shuffle, Partial Aggregations
  • Oct 10, 2023 Spark – Reading Parquet – Pushed Filters, SUBSTR(timestamp, 1, 10), LIKE and StringStartsWith
  • Oct 06, 2023 Spark Stage Restarts – Partial Restarts, Multiple Retry Attempts with Different Task Sets, Accepted Late Results from Failed Stages, Cost of Restarts

Archives

  • November 2023 (1)
  • October 2023 (5)
  • September 2023 (1)
  • July 2023 (1)
  • August 2022 (4)
  • April 2022 (1)
  • March 2021 (2)
  • January 2021 (2)
  • June 2020 (4)
  • May 2020 (8)
  • April 2020 (3)
  • February 2020 (3)
  • December 2019 (5)
  • November 2019 (4)
  • October 2019 (1)
  • September 2019 (2)
  • August 2019 (1)
  • May 2019 (9)
  • April 2019 (2)
  • January 2019 (3)
  • December 2018 (4)
  • November 2018 (1)
  • October 2018 (6)
  • September 2018 (2)

Categories

  • Amazon (14)
  • Auto Scaling (1)
  • AWS (28)
  • Cost Optimization (1)
  • CPU (2)
  • Data Skew (2)
  • Distributed (1)
  • EC2 (1)
  • EMR (13)
  • ETL (2)
  • Flink (5)
  • Hadoop (14)
  • Hive (17)
  • Hue (1)
  • I/O (25)
  • JSON (1)
  • JVM (3)
  • Kinesis (1)
  • Logs (1)
  • Memory (7)
  • Monitoring (4)
  • Optimizer (2)
  • ORC (5)
  • Parquet (8)
  • Pig (2)
  • Presto (3)
  • Qubole (2)
  • RDS (1)
  • S3 (18)
  • Snowflake (6)
  • Spark (17)
  • Storage (14)
  • Tez (10)
  • YARN (18)
