I have already written about recovering ghost nodes in Amazon EMR clusters, but in this article I am going to extend that discussion to EMR clusters configured for auto-scaling.
In a Hadoop cluster, besides Active nodes you may also have Unhealthy, Lost and Decommissioned nodes. Unhealthy nodes are running but are excluded from task scheduling because, for example, they do not have enough free disk space. Lost nodes are nodes that are no longer reachable. Decommissioned nodes are nodes that terminated gracefully and left the cluster.
All of these nodes are still known to the Hadoop YARN cluster, and you can see their details such as IP addresses, last health updates and so on.
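For example, you can list them from the master node with the YARN CLI (the hostname below is just an illustration):

# list all nodes known to YARN, including Unhealthy, Lost and Decommissioned ones
yarn node -list -all

# detailed status (health report, last health update) for a single node
yarn node -status ip-10-0-0-12.ec2.internal:8041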
At the same time there can also be Ghost nodes, i.e. nodes that are run by Amazon EMR services but that Hadoop itself does not know anything about. Let’s see how you can find them.
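One way to find them is to compare the instances EMR reports for the cluster with the nodes YARN reports; anything in the first list that is missing from the second is a candidate ghost node. A sketch, assuming the AWS CLI is configured and j-XXXXXXXXXXXX is your cluster ID:

# instances EMR believes are running in the cluster
aws emr list-instances --cluster-id j-XXXXXXXXXXXX \
    --instance-states RUNNING --query 'Instances[].PrivateIpAddress'

# nodes Hadoop YARN knows about
yarn node -list -all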
Usually Hadoop is able to automatically recover cluster nodes from the Unhealthy state by cleaning log and temporary directories. But sometimes nodes stay unhealthy for a long time and manual intervention is necessary to bring them back.
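For example, when a node is unhealthy because of low disk space, you can log in to it and free the space yourself (a sketch; the container log location is typical for EMR but may differ in your setup):

# YARN marks a node unhealthy when disk utilization exceeds the
# yarn.nodemanager.disk-health-checker threshold (90% by default)
df -h /mnt

# remove container logs older than two days to free space
sudo find /mnt/var/log/hadoop-yarn/containers -type f -mtime +2 -delete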
Amazon EMR allows you to define scale-out and scale-in rules to automatically add and remove instances based on the metrics you specify.
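For example, a scale-out rule on available YARN memory can be attached to an instance group like this (a trimmed sketch; YarnMemoryAvailablePercentage is one of the CloudWatch metrics EMR publishes, and the IDs are placeholders):

aws emr put-auto-scaling-policy --cluster-id j-XXXXXXXXXXXX \
  --instance-group-id ig-XXXXXXXXXXXX \
  --auto-scaling-policy '{
    "Constraints": {"MinCapacity": 2, "MaxCapacity": 20},
    "Rules": [{
      "Name": "ScaleOutOnLowMemory",
      "Action": {"SimpleScalingPolicyConfiguration":
        {"ScalingAdjustment": 2, "CoolDown": 300}},
      "Trigger": {"CloudWatchAlarmDefinition": {
        "MetricName": "YarnMemoryAvailablePercentage",
        "ComparisonOperator": "LESS_THAN",
        "Threshold": 15, "Unit": "PERCENT",
        "Period": 300, "EvaluationPeriods": 1}}
    }]
  }'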
In this article I am going to explore the instance controller logs that can be very useful in monitoring auto-scaling. The logs are located in the /emr/instance-controller/log/ directory on the EMR master node.
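There you can follow the current log directly (instance-controller.log is the usual file name; rotated logs are stored gzipped alongside it):

tail -f /emr/instance-controller/log/instance-controller.log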
When you run many Hadoop clusters it is useful to automatically collect metrics from all of them in a single place (a Hive table, for example).
This allows you to perform advanced and custom analysis of your cluster workloads without being limited to the features of Hadoop administration UI tools, which often offer only a per-cluster view, making it hard to see the whole picture of your data platform.
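A minimal sketch of such a table (the schema and S3 location are hypothetical; adjust them to whatever metrics you collect):

CREATE EXTERNAL TABLE IF NOT EXISTS cluster_metrics (
  cluster_id    STRING,
  metric_name   STRING,
  metric_value  DOUBLE,
  collected_at  TIMESTAMP
)
PARTITIONED BY (dt STRING)
STORED AS PARQUET
LOCATION 's3://your-bucket/metrics/cluster_metrics/';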
Snowflake provides various options to monitor data ingestion from external storage such as Amazon S3. In this article I am going to review QUERY_HISTORY and COPY_HISTORY table functions.
The COPY commands are widely used to move data into Snowflake on a time-interval basis, and we can monitor their execution by querying the query history with the query_type = 'COPY' filter.
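For example (the events table name is just an illustration, and the time ranges are up to you):

-- COPY statements executed over the last 24 hours
SELECT query_text, start_time, total_elapsed_time, execution_status
FROM TABLE(INFORMATION_SCHEMA.QUERY_HISTORY(
       END_TIME_RANGE_START => DATEADD('hour', -24, CURRENT_TIMESTAMP()),
       RESULT_LIMIT => 10000))
WHERE query_type = 'COPY'
ORDER BY start_time;

-- per-file load history for a single table
SELECT file_name, last_load_time, row_count, status
FROM TABLE(INFORMATION_SCHEMA.COPY_HISTORY(
       TABLE_NAME => 'events',
       START_TIME => DATEADD('hour', -24, CURRENT_TIMESTAMP())));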
Snowflake separates compute and storage, so it is typical to have a dedicated compute cluster (virtual warehouse) to handle data ingestion into Snowflake (if you do not use Snowpipe).
Like reporting and ad-hoc SQL, data ingestion has its own specifics. Usually there are many tables (data sources), each with its own schedule (daily, hourly, every 5, 10 or 15 minutes, etc.) for periodic data transfer. ETL processes can overlap, have spikes followed by idle time, and so on.
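This workload pattern is usually served by a dedicated load warehouse with aggressive auto-suspend (a sketch; the name and size are illustrative):

CREATE WAREHOUSE IF NOT EXISTS load_wh
  WITH WAREHOUSE_SIZE = 'SMALL'
  AUTO_SUSPEND = 60          -- suspend after 60 seconds of inactivity
  AUTO_RESUME = TRUE;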
Sometimes you need to reload an entire data set from the source storage into Snowflake. For example, you may want to fully refresh a fairly large lookup table (2 GB compressed) without keeping its history. Let’s see how to do this in Snowflake and what issues you need to take into account.
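The straightforward approach is to truncate and reload (a sketch, assuming the files are staged in @lookup_stage and the target table is lookup_dim):

-- TRUNCATE also clears the COPY load history for the table,
-- so the same files can be loaded again
TRUNCATE TABLE lookup_dim;

COPY INTO lookup_dim
FROM @lookup_stage
FILE_FORMAT = (TYPE = CSV);

Note that if you emptied the table with DELETE instead, Snowflake would keep the load metadata (for 64 days) and silently skip the previously loaded files, so the COPY command would need FORCE = TRUE.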
Snowflake uses a cloud storage service such as Amazon S3 as permanent storage for data (Remote Disk in terms of Snowflake), but it can also use Local Disk (SSD) to temporarily cache data used by SQL queries. Let’s test Remote and Local I/O performance by executing a sample SQL query multiple times on X-Large and Medium size Snowflake warehouses:
SELECT MIN(event_hour), MAX(event_hour) FROM events WHERE event_name = 'LOGIN';
Note that you should disable the Result Cache for queries in your session to perform such tests, otherwise Snowflake will just return the cached result immediately after the first attempt:
alter session set USE_CACHED_RESULT = FALSE;
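To compare warehouse sizes and to separate Remote Disk reads from Local Disk cache hits, you can resize and suspend the warehouse between runs; suspending drops the local SSD cache (the warehouse name is illustrative):

ALTER WAREHOUSE test_wh SET WAREHOUSE_SIZE = 'XLARGE';
-- run the query several times, note the elapsed times, then resize
ALTER WAREHOUSE test_wh SET WAREHOUSE_SIZE = 'MEDIUM';

-- suspend and resume to start the next run with a cold local cache
ALTER WAREHOUSE test_wh SUSPEND;
ALTER WAREHOUSE test_wh RESUME;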
Snowflake stores table data in micro-partitions and uses a columnar storage format, keeping MIN/MAX value statistics for each column in every partition and for the entire table as well. Let’s investigate how this affects the performance of SQL queries involving MIN and MAX functions.
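For example, without a filter the same aggregation can be answered from micro-partition metadata alone, with no table scan at all (using the events table from the test above):

-- no WHERE clause: Snowflake can return this from MIN/MAX partition statistics
SELECT MIN(event_hour), MAX(event_hour) FROM events;

The earlier query with the event_name filter, by contrast, has to scan the matching micro-partitions to compute the result.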