Overview

This notebook demonstrates using Apache Hudi on Amazon EMR to consume streaming updates to an S3 data lake.


Read the Raw Data

Let's start by initializing the Spark Session to connect this notebook to our Spark EMR cluster:

In [1]:
%%configure -f
{
    "conf":  { 
             "spark.jars":"hdfs:///hudi-spark-bundle.jar,hdfs:///spark-avro.jar",
             "spark.serializer":"org.apache.spark.serializer.KryoSerializer",
             "spark.sql.hive.convertMetastoreParquet":"false",
             "spark.dynamicAllocation.executorIdleTimeout": 3600,
             "spark.executor.memory": "7G",
             "spark.executor.cores": 1,
             "spark.dynamicAllocation.initialExecutors":16,
             "spark.sql.parquet.outputTimestampType":"TIMESTAMP_MILLIS"
           } 
}
Current session configs: {'conf': {'spark.jars': 'hdfs:///hudi-spark-bundle.jar,hdfs:///spark-avro.jar', 'spark.serializer': 'org.apache.spark.serializer.KryoSerializer', 'spark.sql.hive.convertMetastoreParquet': 'false', 'spark.dynamicAllocation.executorIdleTimeout': 3600, 'spark.executor.memory': '7G', 'spark.executor.cores': 1, 'spark.dynamicAllocation.initialExecutors': 16, 'spark.sql.parquet.outputTimestampType': 'TIMESTAMP_MILLIS'}, 'kind': 'spark'}
No active sessions.
In [2]:
val s3_bucket="hudi-workshop-100231-899011185738"
val dataPath=s"s3://$s3_bucket/dms-full-load-path/salesdb/SALES_ORDER_DETAIL/LOAD*"
Starting Spark application
ID  YARN Application ID             Kind   State  Spark UI  Driver log  Current session?
0   application_1576872917892_0001  spark  idle   Link      Link
SparkSession available as 'spark'.
s3_bucket: String = hudi-workshop-100231-899011185738
dataPath: String = s3://hudi-workshop-100231-899011185738/dms-full-load-path/salesdb/SALES_ORDER_DETAIL/LOAD*

Let's read the data from our SALES_ORDER_DETAIL table in the Raw Data tier of our Data Lake:

In [3]:
var df=spark.read.parquet(dataPath)
df=df.toDF(df.columns map(_.toLowerCase): _*)
df.printSchema()
df.count()
df: org.apache.spark.sql.DataFrame = [LINE_ID: int, LINE_NUMBER: int ... 8 more fields]
df: org.apache.spark.sql.DataFrame = [line_id: int, line_number: int ... 8 more fields]
root
 |-- line_id: integer (nullable = true)
 |-- line_number: integer (nullable = true)
 |-- order_id: integer (nullable = true)
 |-- product_id: integer (nullable = true)
 |-- quantity: integer (nullable = true)
 |-- unit_price: decimal(38,10) (nullable = true)
 |-- discount: decimal(38,10) (nullable = true)
 |-- supply_cost: decimal(38,10) (nullable = true)
 |-- tax: decimal(38,10) (nullable = true)
 |-- order_date: date (nullable = true)

res2: Long = 98000
In [4]:
import org.apache.spark.sql.SaveMode
import org.apache.spark.sql.functions._
import org.apache.hudi.DataSourceWriteOptions
import org.apache.hudi.config.HoodieWriteConfig
import org.apache.hudi.hive.MultiPartKeysValueExtractor
import com.google.common.collect.Lists;
import org.apache.hudi.ComplexKeyGenerator
import org.apache.spark.sql.functions.{concat, lit}
import org.apache.spark.sql.functions.{year, month, dayofmonth, hour}
import org.apache.spark.sql.SaveMode
import org.apache.spark.sql.functions._
import org.apache.hudi.DataSourceWriteOptions
import org.apache.hudi.config.HoodieWriteConfig
import org.apache.hudi.hive.MultiPartKeysValueExtractor
import com.google.common.collect.Lists
import org.apache.hudi.ComplexKeyGenerator
import org.apache.spark.sql.functions.{concat, lit}
import org.apache.spark.sql.functions.{year, month, dayofmonth, hour}

Create Copy on Write Tables

The Copy on Write (COW) storage type stores data exclusively in columnar file formats (e.g., Parquet). Updates simply version and rewrite the affected files by performing a synchronous merge during the write.
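
To illustrate that synchronous merge, here is a minimal upsert sketch. It reuses the hudiOptions, hudiTableName and hudiTablePath values defined further down in this notebook, and assumes a hypothetical updatesDF DataFrame carrying changed rows:

// Sketch only: an upsert into a COW table merges the incoming rows with the
// affected parquet files and rewrites those files as new versions during the write.
(
 updatesDF.write
  .format("org.apache.hudi")
  .option(DataSourceWriteOptions.STORAGE_TYPE_OPT_KEY, DataSourceWriteOptions.COW_STORAGE_TYPE_OPT_VAL)
  .options(hudiOptions)
  .option(HoodieWriteConfig.TABLE_NAME, hudiTableName)
  .option(DataSourceWriteOptions.OPERATION_OPT_KEY, DataSourceWriteOptions.UPSERT_OPERATION_OPT_VAL)
  .mode(SaveMode.Append)
  .save(hudiTablePath)
)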

In [5]:
//Hudi Copy on Write Table
val hudiTableName = "sales_order_detail_hudi_cow"
val hudiTableRecordKey = "record_key"
val hudiTablePartitionKey = "partition_key"
val hudiTablePrecombineKey = "order_date"
val hudiTablePath = s"s3://$s3_bucket/hudi/" + hudiTableName
val hudiHiveTablePartitionKey = "year,month"

// Add Primary Key - RECORD_KEY
var inputDF = df.withColumn(hudiTableRecordKey, concat(col("order_id"), lit("#"), col("line_id")))
inputDF=inputDF.select(inputDF.columns.map(x => col(x).as(x.toLowerCase)): _*)
hudiTableName: String = sales_order_detail_hudi_cow
hudiTableRecordKey: String = record_key
hudiTablePartitionKey: String = partition_key
hudiTablePrecombineKey: String = order_date
hudiTablePath: String = s3://hudi-workshop-100231-899011185738/hudi/sales_order_detail_hudi_cow
hudiHiveTablePartitionKey: String = year,month
inputDF: org.apache.spark.sql.DataFrame = [line_id: int, line_number: int ... 9 more fields]
inputDF: org.apache.spark.sql.DataFrame = [line_id: int, line_number: int ... 9 more fields]

We will perform some transformations to ensure that the partition columns YEAR and MONTH are of type string.

In [6]:
{ 
  import org.apache.spark.sql.types.DateType
  import org.apache.spark.sql.types.StringType

  inputDF = inputDF.withColumn("order_date", inputDF("order_date").cast(DateType))
  inputDF = inputDF.withColumn("year",year($"order_date").cast(StringType))
    .withColumn("month",month($"order_date").cast(StringType))

  inputDF = inputDF.withColumn(hudiTablePartitionKey,concat(lit("year="),$"year",lit("/month="),$"month"))

  inputDF.first()
}
res6: org.apache.spark.sql.Row = [1,1,1,427,66,23.0000000000,0E-10,11.0000000000,1.0000000000,2015-11-11,1#1,2015,11,year=2015/month=11]
In [7]:
inputDF.printSchema()
root
 |-- line_id: integer (nullable = true)
 |-- line_number: integer (nullable = true)
 |-- order_id: integer (nullable = true)
 |-- product_id: integer (nullable = true)
 |-- quantity: integer (nullable = true)
 |-- unit_price: decimal(38,10) (nullable = true)
 |-- discount: decimal(38,10) (nullable = true)
 |-- supply_cost: decimal(38,10) (nullable = true)
 |-- tax: decimal(38,10) (nullable = true)
 |-- order_date: date (nullable = true)
 |-- record_key: string (nullable = true)
 |-- year: string (nullable = true)
 |-- month: string (nullable = true)
 |-- partition_key: string (nullable = true)

Now that the input data is prepared, let's write the data to create the Hudi COW table in the Analytics tier of our Data Lake:

In [8]:
// Set up our Hudi Data Source Options
val hudiOptions = Map[String,String](
    DataSourceWriteOptions.RECORDKEY_FIELD_OPT_KEY -> hudiTableRecordKey,
    DataSourceWriteOptions.PARTITIONPATH_FIELD_OPT_KEY -> hudiTablePartitionKey, 
    DataSourceWriteOptions.PRECOMBINE_FIELD_OPT_KEY -> hudiTablePrecombineKey, 
    DataSourceWriteOptions.HIVE_SYNC_ENABLED_OPT_KEY -> "true", 
    DataSourceWriteOptions.HIVE_PARTITION_FIELDS_OPT_KEY -> hudiHiveTablePartitionKey, 
    DataSourceWriteOptions.HIVE_ASSUME_DATE_PARTITION_OPT_KEY -> "false", 
    DataSourceWriteOptions.HIVE_PARTITION_EXTRACTOR_CLASS_OPT_KEY ->
        classOf[MultiPartKeysValueExtractor].getName,
    "hoodie.parquet.max.file.size" -> String.valueOf(1024 * 1024 * 1024),
    "hoodie.parquet.small.file.limit" -> String.valueOf(64 * 1024 * 1024),
    "hoodie.parquet.compression.ratio" -> String.valueOf(0.5),
    "hoodie.insert.shuffle.parallelism" -> String.valueOf(2))
hudiOptions: scala.collection.immutable.Map[String,String] = Map(hoodie.parquet.small.file.limit -> 67108864, hoodie.insert.shuffle.parallelism -> 2, hoodie.parquet.compression.ratio -> 0.5, hoodie.datasource.write.precombine.field -> order_date, hoodie.datasource.hive_sync.partition_fields -> year,month, hoodie.datasource.hive_sync.partition_extractor_class -> org.apache.hudi.hive.MultiPartKeysValueExtractor, hoodie.parquet.max.file.size -> 1073741824, hoodie.datasource.hive_sync.enable -> true, hoodie.datasource.write.recordkey.field -> record_key, hoodie.datasource.hive_sync.assume_date_partitioning -> false, hoodie.datasource.write.partitionpath.field -> partition_key)
In [9]:
(
 inputDF.write 
  .format("org.apache.hudi")
  //Copy on Write Table
  .option(DataSourceWriteOptions.STORAGE_TYPE_OPT_KEY, DataSourceWriteOptions.COW_STORAGE_TYPE_OPT_VAL)
  .options(hudiOptions)
  .option(HoodieWriteConfig.TABLE_NAME,hudiTableName)
  .option(DataSourceWriteOptions.HIVE_TABLE_OPT_KEY, hudiTableName)
  .option(DataSourceWriteOptions.OPERATION_OPT_KEY, DataSourceWriteOptions.INSERT_OPERATION_OPT_VAL)
  .mode(SaveMode.Overwrite)
  .save(hudiTablePath)
)

We can now view and query the table created in Spark-SQL:

In [10]:
spark.sql("show create table "+hudiTableName).collect.foreach(println)
[CREATE EXTERNAL TABLE `sales_order_detail_hudi_cow`(`_hoodie_commit_time` STRING, `_hoodie_commit_seqno` STRING, `_hoodie_record_key` STRING, `_hoodie_partition_path` STRING, `_hoodie_file_name` STRING, `line_id` INT, `line_number` INT, `order_id` INT, `product_id` INT, `quantity` INT, `unit_price` DECIMAL(38,10), `discount` DECIMAL(38,10), `supply_cost` DECIMAL(38,10), `tax` DECIMAL(38,10), `order_date` DATE, `record_key` STRING, `partition_key` STRING)
PARTITIONED BY (`year` STRING, `month` STRING)
ROW FORMAT SERDE 'org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe'
WITH SERDEPROPERTIES (
  'serialization.format' = '1'
)
STORED AS
  INPUTFORMAT 'org.apache.hudi.hadoop.HoodieParquetInputFormat'
  OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat'
LOCATION 's3://hudi-workshop-100231-899011185738/hudi/sales_order_detail_hudi_cow'
TBLPROPERTIES (
  'last_commit_time_sync' = '20191220210449',
  'transient_lastDdlTime' = '1576875917'
)
]
In [11]:
spark.sql("show partitions "+hudiTableName).show(100,false)
+------------------+
|partition         |
+------------------+
|year=2015/month=1 |
|year=2015/month=10|
|year=2015/month=11|
|year=2015/month=12|
|year=2015/month=2 |
|year=2015/month=3 |
|year=2015/month=4 |
|year=2015/month=5 |
|year=2015/month=6 |
|year=2015/month=7 |
|year=2015/month=8 |
|year=2015/month=9 |
+------------------+

In [12]:
spark.sql("select count(*) from "+hudiTableName).show(100,false)
+--------+
|count(1)|
+--------+
|98000   |
+--------+
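
Besides the Hive-synced table, the data can also be read straight from its S3 path through the Hudi DataSource. A small sketch follows; on this Hudi release the load path may need globs matching the year=/month= partition depth:

// Snapshot read of the COW table directly from S3 (sketch; adjust the globs to the partition depth).
val cowDF = spark.read
  .format("org.apache.hudi")
  .load(hudiTablePath + "/*/*/*")
cowDF.select("_hoodie_commit_time", "record_key", "order_date", "quantity").show(5, false)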

Create Merge on Read Tables

The Merge on Read (MOR) storage type enables clients to ingest data quickly into a row-based data format such as Avro. Any new data written to a Hudi dataset using the MOR table type goes into new log/delta files that internally store the data as Avro-encoded bytes.

A compaction process (configured as inline or asynchronous) converts the log files into the columnar base file format (Parquet). Two different InputFormats expose two different views of this data:

  • the Read Optimized view serves only the compacted base files and offers columnar Parquet read performance
  • the Realtime view merges base and log files at query time, exposing the latest data at a higher read cost

Updating an existing set of rows will result in either:

  • a) a companion log/delta file alongside an existing base Parquet file generated from a previous compaction, or
  • b) an update written to a log/delta file when no compaction has ever happened for that file.

Hence, all writes to such datasets are limited by Avro/log file writing performance, which is much faster than writing Parquet, although there is a higher cost to pay when reading log/delta files compared to columnar (Parquet) files.
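
Because Hive sync registers both views, the difference can also be observed from Spark SQL once the MOR table below has been created and has received some updates. A small sketch, assuming the _rt suffix that Hudi's Hive sync appends for the Realtime view on this release:

// Read Optimized view: serves only the compacted parquet base files.
spark.sql("select count(*) from sales_order_detail_hudi_mor").show()

// Realtime view: merges base files with any uncompacted log/delta files at query time.
spark.sql("select count(*) from sales_order_detail_hudi_mor_rt").show()
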
In [13]:
//Hudi Merge On Read Table
val hudiTableName = "sales_order_detail_hudi_mor"
val hudiTablePath = s"s3://$s3_bucket/hudi/" + hudiTableName
hudiTableName: String = sales_order_detail_hudi_mor
hudiTablePath: String = s3://hudi-workshop-100231-899011185738/hudi/sales_order_detail_hudi_mor

The write operation to create the MOR Storage Type table:

In [14]:
(
 inputDF.write 
  .format("org.apache.hudi")
  // Merge on Read Table this time.  
  .option(DataSourceWriteOptions.STORAGE_TYPE_OPT_KEY, DataSourceWriteOptions.MOR_STORAGE_TYPE_OPT_VAL)
  .options(hudiOptions)
  .option(HoodieWriteConfig.TABLE_NAME,hudiTableName)
  .option(DataSourceWriteOptions.HIVE_TABLE_OPT_KEY, hudiTableName)
  .option(DataSourceWriteOptions.OPERATION_OPT_KEY, DataSourceWriteOptions.INSERT_OPERATION_OPT_VAL)
  .mode(SaveMode.Overwrite)
  .save(hudiTablePath)
)
In [15]:
spark.sql("show create table "+hudiTableName).collect.foreach(println)
[CREATE EXTERNAL TABLE `sales_order_detail_hudi_mor`(`_hoodie_commit_time` STRING, `_hoodie_commit_seqno` STRING, `_hoodie_record_key` STRING, `_hoodie_partition_path` STRING, `_hoodie_file_name` STRING, `line_id` INT, `line_number` INT, `order_id` INT, `product_id` INT, `quantity` INT, `unit_price` DECIMAL(38,10), `discount` DECIMAL(38,10), `supply_cost` DECIMAL(38,10), `tax` DECIMAL(38,10), `order_date` DATE, `record_key` STRING, `partition_key` STRING)
PARTITIONED BY (`year` STRING, `month` STRING)
ROW FORMAT SERDE 'org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe'
WITH SERDEPROPERTIES (
  'serialization.format' = '1'
)
STORED AS
  INPUTFORMAT 'org.apache.hudi.hadoop.HoodieParquetInputFormat'
  OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat'
LOCATION 's3://hudi-workshop-100231-899011185738/hudi/sales_order_detail_hudi_mor'
TBLPROPERTIES (
  'last_commit_time_sync' = '20191220210537',
  'transient_lastDdlTime' = '1576875951'
)
]
In [16]:
spark.sql("show partitions "+hudiTableName).show(100,false)
+------------------+
|partition         |
+------------------+
|year=2015/month=1 |
|year=2015/month=10|
|year=2015/month=11|
|year=2015/month=12|
|year=2015/month=2 |
|year=2015/month=3 |
|year=2015/month=4 |
|year=2015/month=5 |
|year=2015/month=6 |
|year=2015/month=7 |
|year=2015/month=8 |
|year=2015/month=9 |
+------------------+

In [17]:
spark.sql("select count(*) from "+hudiTableName).show(100,false)
+--------+
|count(1)|
+--------+
|98000   |
+--------+

Query the tables from Presto and Hive

You can SSH to the Presto EMR cluster from the Jupyter terminal:

$> cd SageMaker
$> chmod 400 ee-default-keypair.pem
$> ssh -i ee-default-keypair.pem hadoop@ec2-54-80-95-22.compute-1.amazonaws.com
$> presto-cli

and query the data:

presto> use hive.default;
presto> show tables;
presto> select count(*) from sales_order_detail_hudi_cow;
presto> select count(*) from sales_order_detail_hudi_mor;

Press Ctrl+D to exit the Presto CLI, and run the following command to start Hive.

$> hive
# view the tables
hive> show tables;
Note: Make sure to run Kernel->Shutdown on this notebook before continuing. This frees up resources on the Spark EMR cluster for the next steps.

Streaming Updates to Copy on Write Tables

Note: The following steps need to be executed from the terminal available within Jupyter. Please start the Simulate Random Updates step in the 1st notebook.

SSH to the Spark EMR cluster from the Jupyter terminal:

$> cd SageMaker
$> chmod 400 ee-default-keypair.pem
$> ssh -i ee-default-keypair.pem hadoop@ec2-54-158-247-127.compute-1.amazonaws.com

Run the following command in the terminal once ssh-ed into the EMR cluster:

EMR $> spark-submit --conf "spark.serializer=org.apache.spark.serializer.KryoSerializer" --conf "spark.sql.hive.convertMetastoreParquet=false" --conf "spark.dynamicAllocation.maxExecutors=10" --jars hdfs:///hudi-spark-bundle.jar,hdfs:///spark-avro.jar --packages org.apache.spark:spark-streaming-kafka-0-10_2.11:2.4.2 --class com.hudiConsumer.SparkKafkaConsumerHudiProcessor_COW SparkKafkaConsumerHudiProcessor-assembly-1.0.jar hudi-workshop-100231-899011185738 b-2.kafkacluster1.zf8cl7.c6.kafka.us-east-1.amazonaws.com:9094,b-1.kafkacluster1.zf8cl7.c6.kafka.us-east-1.amazonaws.com:9094

This command launches a Spark Streaming job that continuously monitors the Kafka topic 's3_event_streams' and writes the consumed updates into the Hudi table 'sales_order_detail_hudi_cow' in S3.
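
The SparkKafkaConsumerHudiProcessor_COW class is packaged for the workshop; as a rough illustration of what such a consumer can look like, here is a sketch written with Structured Streaming and foreachBatch. It assumes the spark-sql-kafka-0-10 package rather than the DStream package used above, a placeholder broker list, and a hypothetical parseSalesOrderDetail helper that maps the Kafka payload onto the table schema:

// Sketch: consume change records from Kafka and upsert each micro-batch into the COW table.
import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.streaming.Trigger

val kafkaDF = spark.readStream
  .format("kafka")
  .option("kafka.bootstrap.servers", "<broker-list>")
  .option("subscribe", "s3_event_streams")
  .load()

kafkaDF.selectExpr("CAST(value AS STRING) as json")
  .writeStream
  .trigger(Trigger.ProcessingTime("60 seconds"))
  .foreachBatch { (batchDF: DataFrame, batchId: Long) =>
    // parseSalesOrderDetail is hypothetical: it maps the JSON payload onto the
    // sales_order_detail schema and adds record_key/partition_key as done above.
    parseSalesOrderDetail(batchDF).write
      .format("org.apache.hudi")
      .option(DataSourceWriteOptions.STORAGE_TYPE_OPT_KEY, DataSourceWriteOptions.COW_STORAGE_TYPE_OPT_VAL)
      .options(hudiOptions)
      .option(HoodieWriteConfig.TABLE_NAME, "sales_order_detail_hudi_cow")
      .option(DataSourceWriteOptions.OPERATION_OPT_KEY, DataSourceWriteOptions.UPSERT_OPERATION_OPT_VAL)
      .mode(SaveMode.Append)
      .save(s"s3://$s3_bucket/hudi/sales_order_detail_hudi_cow")
  }
  .start()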

Query the changing data

You can SSH to the Presto EMR cluster from the Jupyter terminal:

$> cd SageMaker
$> ssh -i ee-default-keypair.pem hadoop@ec2-54-80-95-22.compute-1.amazonaws.com
$> presto-cli

and query the data:

presto> use hive.default;
presto> show tables;

# pick one record key here
presto> select record_key,quantity,month from sales_order_detail_hudi_cow where record_key = '<record_key>';
Note: Please press Ctrl+C in the terminal connected to the EMR Spark Cluster to stop the Spark Streaming job.

Streaming Updates to Merge on Read Tables

Note: The following steps need to be executed from the terminal available within Jupyter. Make sure you have the Database Streaming Updates step running in the other notebook.

SSH to the Spark EMR cluster from the Jupyter terminal:

ssh -i ee-default-keypair.pem hadoop@ec2-54-158-247-127.compute-1.amazonaws.com

Run the following command in the terminal once ssh-ed into the EMR cluster:

spark-submit --conf "spark.serializer=org.apache.spark.serializer.KryoSerializer" --conf "spark.sql.hive.convertMetastoreParquet=false" --conf "spark.dynamicAllocation.maxExecutors=10" --jars hdfs:///hudi-spark-bundle.jar,hdfs:///spark-avro.jar --packages org.apache.spark:spark-streaming-kafka-0-10_2.11:2.4.2 --class com.hudiConsumer.SparkKafkaConsumerHudiProcessor_MOR SparkKafkaConsumerHudiProcessor-assembly-1.0.jar hudi-workshop-100231-899011185738 b-2.kafkacluster1.zf8cl7.c6.kafka.us-east-1.amazonaws.com:9094,b-1.kafkacluster1.zf8cl7.c6.kafka.us-east-1.amazonaws.com:9094

The command launches a Spark Streaming job that continuously monitors the Kafka topic 's3_event_streams' and writes the consumed updates into the Hudi table 'sales_order_detail_hudi_mor' in S3.
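
Relative to the COW sketch shown earlier, the MOR consumer presumably differs mainly in the storage type it passes on the write path and in the target table/path, for example (assumption, mirroring the notebook's constants):

      .option(DataSourceWriteOptions.STORAGE_TYPE_OPT_KEY, DataSourceWriteOptions.MOR_STORAGE_TYPE_OPT_VAL)
      .save(s"s3://$s3_bucket/hudi/sales_order_detail_hudi_mor")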

Query the changing data

You can SSH to the Presto EMR cluster from the Jupyter terminal:

ssh -i ee-default-keypair.pem hadoop@ec2-54-80-95-22.compute-1.amazonaws.com

and query the data, but this time we will use Hive to run our queries:

Note: Replace the record_key argument value in the next command with a record_key from the Spark Streaming logs.
$> hive
# view the tables
hive> show tables;
# pick one record key here
hive> select record_key,quantity,month from sales_order_detail_hudi_mor where record_key = '<record_key>'; 
# let's query the same record in the realtime table
hive> select record_key,quantity,month from sales_order_detail_hudi_mor_rt where record_key = '<record_key>';

We can observe that the Realtime table exposes the latest changes, which have not yet been compacted into the base files of our main MOR table.

Note: Please press Ctrl+C in the terminal connected to the EMR Spark Cluster to stop the Spark Streaming job.

Run the Hudi Compaction Process

Let's now run the Apache Hudi Compaction process manually so that we understand the behavior. These steps will typically be automated in a production environment.
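
In production, compaction is usually configured inline or scheduled asynchronously instead of being triggered by hand. A minimal sketch of the inline option on the MOR write path is shown below (hoodie.compact.inline and hoodie.compact.inline.max.delta.commits are standard Hudi configs; the threshold value is illustrative). The CLI walkthrough that follows performs the same compaction manually.

// Sketch: compact inline as part of the MOR upsert, after every N delta commits.
(
 inputDF.write
  .format("org.apache.hudi")
  .option(DataSourceWriteOptions.STORAGE_TYPE_OPT_KEY, DataSourceWriteOptions.MOR_STORAGE_TYPE_OPT_VAL)
  .options(hudiOptions)
  .option("hoodie.compact.inline", "true")                 // run compaction synchronously with the write
  .option("hoodie.compact.inline.max.delta.commits", "5")  // compact after every 5 delta commits (illustrative)
  .option(HoodieWriteConfig.TABLE_NAME, hudiTableName)
  .option(DataSourceWriteOptions.OPERATION_OPT_KEY, DataSourceWriteOptions.UPSERT_OPERATION_OPT_VAL)
  .mode(SaveMode.Append)
  .save(hudiTablePath)
)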

## From the terminal, let's connect to the Spark EMR cluster and start the hudi cli
$> /usr/lib/hudi/cli/bin/hudi-cli.sh

## at the hudi cli, let's connect to the datapath for the MOR table 
hudi-> connect --path s3://hudi-workshop-100231-899011185738/hudi/sales_order_detail_hudi_mor

## run a describe on this table
hudi:sales_order_detail_hudi_mor-> desc

## view the pending commits
hudi:sales_order_detail_hudi_mor-> commits show

## schedule the compactions on the table
hudi:sales_order_detail_hudi_mor-> compaction schedule

## refresh hudi metadata after compaction schedule is successful
hudi-> connect --path s3://hudi-workshop-100231-899011185738/hudi/sales_order_detail_hudi_mor

## view the pending compactions
hudi:sales_order_detail_hudi_mor-> compactions show all
Note: Replace the compactionInstant argument value in the next command as obtained from the previous command.
## execute the compactions on the table
hudi:sales_order_detail_hudi_mor-> compaction run --parallelism 12 --sparkMemory 100GB --retry 1 --compactionInstant <compactionInstant> --schemaFilePath s3://hudi-workshop-100231-899011185738/config/sales_order_detail.schema

## refresh hudi metadata after compactions run is successful
hudi-> connect --path s3://hudi-workshop-100231-899011185738/hudi/sales_order_detail_hudi_mor

## view the completed compactions
hudi:sales_order_detail_hudi_mor-> compactions show all

## the compactions should show Completed now. Let's view the commits and query the changes in Hive as well:
hudi:sales_order_detail_hudi_mor-> commits show
Note: Replace the latest commit id argument value in the next command as obtained from the previous command.
## Let's now rollback the latest commit.
hudi:sales_order_detail_hudi_mor-> commit rollback --commit <latest commit id>

## After the step completes, let's view the commits again and query the changes in Hive as well:
hudi:sales_order_detail_hudi_mor-> commits show

We can now view the compacted 'sales_order_detail_hudi_mor' table to view the latest changes. Let's do that from Hive in our Presto EMR Cluster:

## start the hive cli
$> hive
Note: Replace with the same record_key as used above.
## query the changed record
hive> select record_key,quantity,month from sales_order_detail_hudi_mor where record_key = '<record_key>';