Hive, Memory, Tez, YARN

Tez Memory Tuning – Container is Running Beyond Physical Memory Limits – Solved by Reducing Memory Settings

Can reducing the Tez memory settings help solve memory limit problems? Paradoxically, sometimes it can.

One day one of our Hive queries failed with the following error: Container is running beyond physical memory limits. Current usage: 4.1 GB of 4 GB physical memory used; 6.0 GB of 20 GB virtual memory used. Killing container.

I checked the configuration and noticed that the query was running with the following Hive settings:

set hive.tez.container.size=4096;
set hive.tez.java.opts=-Xmx4096m;

By accident, I re-ran the query with roughly halved memory settings, and it worked:

set hive.tez.container.size=2048;
set hive.tez.java.opts=-Xmx1700m;

Neither the failure nor the success was transient: I got the same results every time I reran the query with the corresponding settings. How is that possible?

hive.tez.container.size=4096 is a “logical” option: at run time YARN uses it to check whether the container process takes more physical memory than this option allows.

hive.tez.java.opts=-Xmx4096m runs the container JVM with the specified maximum heap size. This option is passed to the JVM, and the heap limit is enforced by the JVM itself, not by YARN.
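
To make the difference concrete, here are the two failing settings again, annotated with who enforces what (the annotations are mine):

set hive.tez.container.size=4096;
-- enforced by YARN: the whole container process must stay under 4096 MB of physical memory
set hive.tez.java.opts=-Xmx4096m;
-- enforced by the JVM: the heap alone may grow up to 4096 MB, i.e. up to
-- 100% of the container budget, leaving no room for non-heap memory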

But the memory of a Java process consists of more than just the heap: it also includes thread stacks, garbage collector structures, class metadata, I/O buffers, shared code and so on.
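
Several of these areas have their own JVM limits, independent of -Xmx; for example:

-Xmx<size>                      maximum heap size
-Xss<size>                      stack size per thread
-XX:MaxMetaspaceSize=<size>     class metadata
-XX:MaxDirectMemorySize=<size>  direct (off-heap) buffers, often used for I/O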

More importantly, heap memory occupied by dead Java objects is not reclaimed immediately, especially with a large heap, where the garbage collector runs less often. Moreover, once the JVM has grown the heap, it is slow to return the committed pages to the operating system. As a result, the physical memory of the process running the YARN container can stay high even though much of the heap holds garbage not yet cleared by the garbage collector.

So when YARN periodically checks how much physical memory the container process uses from the OS standpoint, not the JVM's (the check interval is defined by yarn.nodemanager.container-monitor.interval-ms, 3 seconds by default), it sees more than 4 GB and kills the container.
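
Here is a back-of-the-envelope view of the failing run; the split between heap and non-heap memory is my assumption, for illustration only:

heap (may grow up to -Xmx4096m, partly uncollected garbage)    ~4.0 GB
non-heap (stacks, metaspace, GC structures, buffers; assumed)  ~0.1 GB
----------------------------------------------------------------------
physical memory seen by YARN                                   ~4.1 GB > 4 GB limit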

With a lower heap size, the garbage collector runs more often and reclaims unused memory sooner, so the Java process takes less physical memory. This explains why the query ran successfully with the lower setting hive.tez.java.opts=-Xmx1700m.
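
For comparison, the working configuration keeps explicit headroom between the heap and the container limit (the annotations are mine):

set hive.tez.container.size=2048;   -- YARN kills the process above 2048 MB
set hive.tez.java.opts=-Xmx1700m;   -- heap capped at 1700 MB, ~348 MB left for non-heap memory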

Conclusion

  • Do not set the heap size hive.tez.java.opts equal to the Tez container size hive.tez.container.size. Remember that the Java process requires some memory overhead besides the heap. It is often better not to set hive.tez.java.opts at all, since by default Tez allocates 80% of the container size for the heap (see the sketch after this list).

  • “Container is running beyond…” is a “logical” error: it is YARN that reports the memory problem, not the JVM. If you already use a large container and heap size, make sure the two are balanced; as my example shows, sometimes reducing the memory settings helps. But with a small container (1 GB, for example) the relative overhead of non-heap memory can become high, and then you need to increase the container size to solve the problem.

  • The Java Heap Space error (java.lang.OutOfMemoryError: Java heap space) is the one you should really worry about. It is a “physical” JVM error meaning you truly do not have enough heap to process your data in the YARN container.

  • But remember that by increasing the YARN container size for your tasks, you limit the number of containers that can run concurrently on the cluster, and the cluster can become underutilized. So blindly increasing the container size is not always a good idea.
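
As a minimal sketch of the first point, assuming Tez derives the heap size from its default heap fraction of 0.8 (the tez.container.max.java.heap.fraction property):

set hive.tez.container.size=4096;
-- hive.tez.java.opts deliberately left unset: Tez computes -Xmx as roughly
-- 0.8 * 4096 MB ≈ 3276 MB, leaving ~820 MB of headroom for non-heap memory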