• Amazon,  AWS,  RDS

    Sqoop Import from Amazon RDS Read Replica – ERROR: Canceling Statement due to Conflict with Recovery

    Apache Sqoop is widely used to import data from relational databases into the cloud. One of our databases uses Amazon RDS for PostgreSQL to store sales data, and its Sqoop import periodically failed with the following error:

    Error: java.io.IOException: SQLException in nextKeyValue
    Caused by: org.postgresql.util.PSQLException: ERROR: canceling statement due to conflict with recovery
      Detail: User query might have needed to see row versions that must be removed.
    

    In this article I will describe a solution that helped resolve the problem in our specific case.
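
    The error itself is governed by the recovery-conflict settings of the PostgreSQL read replica. As a starting point (this is only a diagnostic sketch with placeholder connection details, not necessarily the solution described in the article), you can check the current values of those settings on the replica:

    # Check recovery-conflict settings on a PostgreSQL read replica.
    # The endpoint, database and credentials below are placeholders.
    import psycopg2

    conn = psycopg2.connect(
        host="sales-replica.xxxxxxxx.us-east-1.rds.amazonaws.com",  # placeholder endpoint
        dbname="sales",
        user="sqoop_user",
        password="***")

    with conn.cursor() as cur:
        # Settings that control when the replica cancels queries that
        # conflict with WAL replay coming from the primary
        for param in ("max_standby_streaming_delay",
                      "max_standby_archive_delay",
                      "hot_standby_feedback"):
            cur.execute("SHOW " + param)
            print(param, "=", cur.fetchone()[0])

    conn.close()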

  • AWS,  EMR,  Hadoop,  YARN

    Amazon EMR – Recovering Ghost Nodes

    In a Hadoop cluster, besides Active nodes, you may also have Unhealthy, Lost and Decommissioned nodes. Unhealthy nodes are running but are excluded from task scheduling because, for example, they do not have enough disk space. Lost nodes are nodes that are no longer reachable. Decommissioned nodes are nodes that terminated successfully and left the cluster.

    But all these nodes are known to the Hadoop YARN cluster, and you can see their details such as IP addresses, last health updates and so on.

    At the same time there can also be Ghost nodes, i.e. nodes that are run by Amazon EMR services but that Hadoop itself does not know anything about. Let’s see how you can find them.
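
    One possible way to detect them (a sketch assuming the script runs on a host that can reach both the EMR API and the YARN ResourceManager REST API; the cluster ID and ResourceManager address are placeholders) is to compare the instances Amazon EMR reports with the nodes YARN knows about:

    # Compare EMR's view of core/task instances with YARN's node list
    # to spot "ghost" nodes.
    import boto3
    import requests

    CLUSTER_ID = "j-XXXXXXXXXXXXX"                       # placeholder EMR cluster id
    RM_URL = "http://ip-10-0-0-10.ec2.internal:8088"     # placeholder ResourceManager address

    # Private DNS names of running core/task instances as seen by Amazon EMR
    emr = boto3.client("emr")
    emr_hosts = set()
    paginator = emr.get_paginator("list_instances")
    for page in paginator.paginate(ClusterId=CLUSTER_ID,
                                   InstanceGroupTypes=["CORE", "TASK"],
                                   InstanceStates=["RUNNING"]):
        for instance in page["Instances"]:
            emr_hosts.add(instance["PrivateDnsName"])

    # Host names of the nodes known to YARN (in any state)
    nodes = requests.get(RM_URL + "/ws/v1/cluster/nodes").json()
    yarn_hosts = {n["nodeHostName"] for n in nodes["nodes"]["node"]}

    # Instances that EMR runs but YARN knows nothing about
    for host in sorted(emr_hosts - yarn_hosts):
        print("Ghost node:", host)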

  • AWS,  EMR,  Hadoop,  YARN

    Hadoop YARN – Collecting Utilization Metrics from Multiple Clusters

    When you run many Hadoop clusters, it is useful to automatically collect metrics from all of them in a single place (a Hive table, for example).

    This allows you to perform advanced and custom analysis of your clusters' workload and not be limited to the features provided by the Hadoop administration UI tools, which often offer only a per-cluster view, making it hard to see the whole picture of your data platform.
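
    For illustration (this is not the exact collector described in the article; the ResourceManager addresses and the output columns are assumptions), cluster-level metrics can be pulled from the ResourceManager REST API of each cluster and emitted as rows for a single Hive table:

    # Pull cluster-level utilization metrics from several YARN ResourceManagers
    # and print them as tab-separated rows, ready to be loaded into one Hive table.
    import time
    import requests

    # ResourceManager addresses of the clusters to monitor (placeholders)
    RESOURCE_MANAGERS = {
        "etl_cluster":   "http://ip-10-0-1-10.ec2.internal:8088",
        "adhoc_cluster": "http://ip-10-0-2-10.ec2.internal:8088",
    }

    now = int(time.time())

    for cluster, url in RESOURCE_MANAGERS.items():
        m = requests.get(url + "/ws/v1/cluster/metrics").json()["clusterMetrics"]
        # One row per cluster per collection time
        row = (cluster, now,
               m["activeNodes"], m["appsRunning"], m["appsPending"],
               m["allocatedMB"], m["totalMB"],
               m["allocatedVirtualCores"], m["totalVirtualCores"])
        print("\t".join(str(v) for v in row))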

  • Amazon,  AWS,  EMR,  Hive,  I/O,  S3

    S3 Writes When Inserting Data into a Hive Table in Amazon EMR

    Often in an ETL process we move data from one source into another, typically applying some filtering, transformations and aggregations along the way. Let’s consider which write operations are performed in S3.

    To focus on S3 writes, I am going to use a very simple SQL INSERT statement that just moves data from one table into another without any transformations, as follows:

    INSERT OVERWRITE TABLE events PARTITION (event_dt = '2018-12-02', event_hour = '00')
    SELECT
      record_id,
      event_timestamp,
      event_name,
      app_name,
      country,
      city,
      payload
    FROM events_raw;
    
  • Amazon,  AWS,  EMR,  Hive,  ORC,  Tez

    Tez Internals #2 – Number of Map Tasks for Large ORC Files with Small Stripes in Amazon EMR

    Let’s see how Hive on Tez defines the number of map tasks when the input data is stored in large ORC files with small stripes.

    Note. All experiments below were executed on Hive 2.1.1 in Amazon EMR. This article does not apply to Qubole running on AWS, as Qubole has a different algorithm to define the number of map tasks for ORC files.
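
    If you want to check whether your ORC files actually have small stripes, one quick way (a sketch using pyarrow on a local copy of a file; the path is a placeholder) is to look at the stripe count and the average stripe size:

    # Inspect an ORC file: row count, number of stripes and average stripe size.
    # Requires pyarrow; the file path is hypothetical.
    import os
    from pyarrow import orc

    path = "/tmp/events_raw/000000_0"    # local copy of an ORC file

    f = orc.ORCFile(path)
    size_mb = os.path.getsize(path) / (1024 * 1024)

    print("Rows:       ", f.nrows)
    print("Stripes:    ", f.nstripes)
    print("File size:   %.1f MB" % size_mb)
    print("Avg stripe:  %.1f MB" % (size_mb / f.nstripes))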

  • Amazon,  AWS,  I/O,  Monitoring,  S3,  Storage

    S3 Monitoring #4 – Read Operations and Tables

    Knowing how Hive table storage is organized can help us extract additional information about S3 read operations for each table.

    In most cases (and you can easily adapt this to your specific table storage pattern), tables are stored in an S3 bucket under the following key structure:

    s3://<bucket_name>/hive/<database_name>/<table_name>/<partition1>/<partition2>/...
    

    For example, hourly data for the orders table can be stored as follows:

    s3://cloudsqale/hive/sales.db/orders/created_dt=2018-10-18/hour=00/
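
    With this layout, the table behind each S3 read operation can be derived directly from the object key. A minimal sketch (assuming the key structure above) could look like this:

    # Map an S3 object key to (database, table) based on the layout
    #   hive/<database_name>/<table_name>/<partitions...>
    # Keys that do not match the layout return None.
    def key_to_table(key):
        parts = key.split("/")
        if len(parts) >= 3 and parts[0] == "hive":
            return parts[1], parts[2]
        return None

    # Example: a key from the orders table
    print(key_to_table("hive/sales.db/orders/created_dt=2018-10-18/hour=00/000000_0"))
    # -> ('sales.db', 'orders')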