• Hadoop,  Memory,  YARN

    YARN Memory Under-Utilization When Running Low-Memory Instances (e.g., c4.xlarge)

    While analyzing a Hadoop cluster, I noticed that it runs only 2 GB and 4 GB containers and never allocates all of the available memory to applications, always leaving about 150 GB of memory free.

    The clusters run Apache Pig and Hive applications and use the default memory settings (which are also inherited by the Tez engine used by Pig and Hive):

    -- from mapred-site.xml
    mapreduce.map.memory.mb            1408
    mapreduce.reduce.memory.mb         2816
    yarn.app.mapreduce.am.resource.mb  2816
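
    YARN's Capacity Scheduler normalizes every container request by rounding it up to a multiple of yarn.scheduler.minimum-allocation-mb, so the containers actually granted can be larger than the sizes configured above. The sketch below is a rough back-of-the-envelope illustration, not code from the cluster, and it assumes yarn.scheduler.minimum-allocation-mb = 2048; under that assumption the 1408 MB and 2816 MB requests are rounded up to the 2 GB and 4 GB containers observed:

    // Illustrative sketch only: assumes yarn.scheduler.minimum-allocation-mb = 2048
    object YarnContainerRounding {
      def main(args: Array[String]): Unit = {
        val minAllocationMb = 2048                   // assumed scheduler minimum allocation

        // Capacity Scheduler normalization: round a request up to a multiple of the minimum
        def normalize(requestMb: Int): Int =
          math.ceil(requestMb.toDouble / minAllocationMb).toInt * minAllocationMb

        val mapRequestMb    = 1408                   // mapreduce.map.memory.mb
        val reduceRequestMb = 2816                   // mapreduce.reduce.memory.mb

        println(s"Map container:    requested $mapRequestMb MB, allocated ${normalize(mapRequestMb)} MB")
        println(s"Reduce container: requested $reduceRequestMb MB, allocated ${normalize(reduceRequestMb)} MB")
      }
    }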
    
  • Amazon,  EMR,  Spark

    Extremely Large Number of RDD Partitions and Tasks in Spark on Amazon EMR

    After creating an Amazon EMR cluster with Spark support and running a Spark application, you may notice that the Spark job creates far too many tasks to process even a very small data set.

    For example, I have a small table country_iso_codes with 249 rows, stored in a comma-delimited text file 10,657 bytes in length.

    When running the following application on an Amazon EMR 5.7 cluster with Spark 2.1.1 and the default settings, I can see the large number of partitions generated:
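
    A minimal sketch of such an application (not necessarily the original one) could look as follows, assuming country_iso_codes is registered in the Hive metastore; it reads the table and prints the number of partitions, i.e. the number of tasks Spark schedules for the scan:

    // Hypothetical minimal reproduction; assumes country_iso_codes exists in the Hive metastore
    import org.apache.spark.sql.SparkSession

    object CountryIsoCodesPartitions {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("CountryIsoCodesPartitions")
          .enableHiveSupport()
          .getOrCreate()

        // country_iso_codes: 249 rows, 10,657 bytes, comma-delimited text file
        val df = spark.table("country_iso_codes")

        println(s"Partitions: ${df.rdd.getNumPartitions}")  // number of tasks used to read the table
        println(s"Rows: ${df.count()}")

        spark.stop()
      }
    }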