While analyzing a Hadoop cluster, I noticed that it runs only 2 GB and 4 GB containers and never allocates all of the available memory to applications, always leaving about 150 GB of memory free.
The cluster runs Apache Pig and Hive applications with the default memory settings (they are also inherited by the Tez engine that Pig and Hive use):
-- from mapred-site.xml
mapreduce.map.memory.mb              1408
mapreduce.reduce.memory.mb           2816
yarn.app.mapreduce.am.resource.mb    2816
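Note that these requested sizes do not literally match the observed 2 GB and 4 GB containers. One plausible way such requests surface as 2 GB and 4 GB allocations is YARN's request normalization: the scheduler rounds every container ask up to the next multiple of yarn.scheduler.minimum-allocation-mb. Below is a minimal sketch of that rounding, assuming a hypothetical minimum allocation of 2048 MB (the actual value on this cluster is not shown above):

# Sketch of YARN's container request normalization.
# Assumption: yarn.scheduler.minimum-allocation-mb = 2048 (hypothetical,
# not taken from the cluster configuration shown above).
import math

def normalize(requested_mb: int, min_allocation_mb: int) -> int:
    # Round the request up to the next multiple of the minimum allocation
    return min_allocation_mb * math.ceil(requested_mb / min_allocation_mb)

MIN_ALLOCATION_MB = 2048  # assumed yarn.scheduler.minimum-allocation-mb

for name, requested in [
    ("mapreduce.map.memory.mb", 1408),
    ("mapreduce.reduce.memory.mb", 2816),
    ("yarn.app.mapreduce.am.resource.mb", 2816),
]:
    allocated = normalize(requested, MIN_ALLOCATION_MB)
    print(f"{name}: requested {requested} MB -> allocated {allocated} MB")

# Output:
# mapreduce.map.memory.mb: requested 1408 MB -> allocated 2048 MB
# mapreduce.reduce.memory.mb: requested 2816 MB -> allocated 4096 MB
# yarn.app.mapreduce.am.resource.mb: requested 2816 MB -> allocated 4096 MB

Under that assumed minimum allocation, the 1408 MB map request becomes a 2 GB container and both 2816 MB requests become 4 GB containers, which would match the observed allocations.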