This notebook demonstrates using Apache Hudi on Amazon EMR to consume streaming updates to an S3 data lake.
Let's start by initializing the Spark Session to connect this notebook to our Spark EMR cluster:
%%configure -f
{
"conf": {
"spark.jars":"hdfs:///hudi-spark-bundle.jar,hdfs:///spark-avro.jar",
"spark.serializer":"org.apache.spark.serializer.KryoSerializer",
"spark.sql.hive.convertMetastoreParquet":"false",
"spark.dynamicAllocation.executorIdleTimeout": 3600,
"spark.executor.memory": "7G",
"spark.executor.cores": 1,
"spark.dynamicAllocation.initialExecutors":16,
"spark.sql.parquet.outputTimestampType":"TIMESTAMP_MILLIS"
}
}
val s3_bucket="hudi-workshop-100231-899011185738"
val dataPath=s"s3://$s3_bucket/dms-full-load-path/salesdb/SALES_ORDER_DETAIL/LOAD*"
Let's read data from our SALES_ORDER_DETAIL table in the Raw Data tier of our Data Lake:
var df=spark.read.parquet(dataPath)
df=df.toDF(df.columns map(_.toLowerCase): _*)
df.printSchema()
df.count()
import org.apache.spark.sql.SaveMode
import org.apache.spark.sql.functions._
import org.apache.hudi.DataSourceWriteOptions
import org.apache.hudi.config.HoodieWriteConfig
import org.apache.hudi.hive.MultiPartKeysValueExtractor
import com.google.common.collect.Lists
import org.apache.hudi.ComplexKeyGenerator
import org.apache.spark.sql.functions.{concat, lit}
import org.apache.spark.sql.functions.{year, month, dayofmonth, hour}
The Copy on Write (COW) storage type stores data exclusively in columnar file formats (e.g. Parquet). Updates simply version and rewrite the files by performing a synchronous merge during the write.
//Hudi Copy on Write Table
val hudiTableName = "sales_order_detail_hudi_cow"
val hudiTableRecordKey = "record_key"
val hudiTablePartitionKey = "partition_key"
val hudiTablePrecombineKey = "order_date"
val hudiTablePath = s"s3://$s3_bucket/hudi/" + hudiTableName
val hudiHiveTablePartitionKey = "year,month"
// Add Primary Key - RECORD_KEY
var inputDF = df.withColumn(hudiTableRecordKey, concat(col("order_id"), lit("#"), col("line_id")))
inputDF=inputDF.select(inputDF.columns.map(x => col(x).as(x.toLowerCase)): _*)
We will perform some transformations to ensure that the partition columns YEAR and MONTH are of type string.
{
import org.apache.spark.sql.types.DateType
import org.apache.spark.sql.types.StringType
inputDF = inputDF.withColumn("order_date", inputDF("order_date").cast(DateType))
inputDF = inputDF.withColumn("year",year($"order_date").cast(StringType))
.withColumn("month",month($"order_date").cast(StringType))
inputDF = inputDF.withColumn(hudiTablePartitionKey,concat(lit("year="),$"year",lit("/month="),$"month"))
inputDF.first()
}
inputDF.printSchema()
Now that the input data is prepared, let's write it out to create the Hudi COW table in the Analytics tier of our Data Lake:
// Set up our Hudi Data Source Options
val hudiOptions = Map[String,String](
DataSourceWriteOptions.RECORDKEY_FIELD_OPT_KEY -> hudiTableRecordKey,
DataSourceWriteOptions.PARTITIONPATH_FIELD_OPT_KEY -> hudiTablePartitionKey,
DataSourceWriteOptions.PRECOMBINE_FIELD_OPT_KEY -> hudiTablePrecombineKey,
DataSourceWriteOptions.HIVE_SYNC_ENABLED_OPT_KEY -> "true",
DataSourceWriteOptions.HIVE_PARTITION_FIELDS_OPT_KEY -> hudiHiveTablePartitionKey,
DataSourceWriteOptions.HIVE_ASSUME_DATE_PARTITION_OPT_KEY -> "false",
DataSourceWriteOptions.HIVE_PARTITION_EXTRACTOR_CLASS_OPT_KEY ->
classOf[MultiPartKeysValueExtractor].getName,
"hoodie.parquet.max.file.size" -> String.valueOf(1024 * 1024 * 1024),
"hoodie.parquet.small.file.limit" -> String.valueOf(64 * 1024 * 1024),
"hoodie.parquet.compression.ratio" -> String.valueOf(0.5),
"hoodie.insert.shuffle.parallelism" -> String.valueOf(2))
(
inputDF.write
.format("org.apache.hudi")
//Copy on Write Table
.option(DataSourceWriteOptions.STORAGE_TYPE_OPT_KEY, DataSourceWriteOptions.COW_STORAGE_TYPE_OPT_VAL)
.options(hudiOptions)
.option(HoodieWriteConfig.TABLE_NAME,hudiTableName)
.option(DataSourceWriteOptions.HIVE_TABLE_OPT_KEY, hudiTableName)
.option(DataSourceWriteOptions.OPERATION_OPT_KEY, DataSourceWriteOptions.INSERT_OPERATION_OPT_VAL)
.mode(SaveMode.Overwrite)
.save(hudiTablePath)
)
We can now view and query the table we just created using Spark SQL:
spark.sql("show create table "+hudiTableName).collect.foreach(println)
spark.sql("show partitions "+hudiTableName).show(100,false)
spark.sql("select count(*) from "+hudiTableName).show(100,false)
The Merge on Read (MOR) storage type enables clients to quickly ingest data into a row-based format such as Avro. New data written to a Hudi dataset of the MOR table type lands in log/delta files that internally store the data as Avro-encoded bytes.
A compaction process (configured as inline or asynchronous) converts the log files into the columnar base file format (Parquet). Two different InputFormats expose two views of this data: a Read Optimized view that serves only the compacted base files, and a Realtime view that merges the base files with the latest log files at query time.
Updating an existing set of rows therefore writes the changes to log/delta files, and a subsequent compaction folds them into the base Parquet files.
//Hudi Merge On Read Table
val hudiTableName = "sales_order_detail_hudi_mor"
val hudiTablePath = s"s3://$s3_bucket/hudi/" + hudiTableName
The write operation to create the MOR Storage Type table:
(
inputDF.write
.format("org.apache.hudi")
// Merge on Read Table this time.
.option(DataSourceWriteOptions.STORAGE_TYPE_OPT_KEY, DataSourceWriteOptions.MOR_STORAGE_TYPE_OPT_VAL)
.options(hudiOptions)
.option(HoodieWriteConfig.TABLE_NAME,hudiTableName)
.option(DataSourceWriteOptions.HIVE_TABLE_OPT_KEY, hudiTableName)
.option(DataSourceWriteOptions.OPERATION_OPT_KEY, DataSourceWriteOptions.INSERT_OPERATION_OPT_VAL)
.mode(SaveMode.Overwrite)
.save(hudiTablePath)
)
spark.sql("show create table "+hudiTableName).collect.foreach(println)
spark.sql("show partitions "+hudiTableName).show(100,false)
spark.sql("select count(*) from "+hudiTableName).show(100,false)
You can SSH to the Presto EMR cluster from the Jupyter terminal:
$> cd SageMaker
$> chmod 400 ee-default-keypair.pem
$> ssh -i ee-default-keypair.pem hadoop@ec2-54-80-95-22.compute-1.amazonaws.com
$> presto-cli
and query the data:
presto> use hive.default;
presto> show tables;
presto> select count(*) from sales_order_detail_hudi_cow;
presto> select count(*) from sales_order_detail_hudi_mor;
Press Ctrl+D to exit the presto-cli, then run the following command to start Hive:
$> hive
# view the tables
hive> show tables;
SSH to the Spark EMR cluster from the Jupyter terminal:
$> cd SageMaker
$> chmod 400 ee-default-keypair.pem
$> ssh -i ee-default-keypair.pem hadoop@ec2-54-158-247-127.compute-1.amazonaws.com
Run the following command in the terminal once ssh-ed into the EMR cluster:
EMR $> spark-submit --conf "spark.serializer=org.apache.spark.serializer.KryoSerializer" --conf "spark.sql.hive.convertMetastoreParquet=false" --conf "spark.dynamicAllocation.maxExecutors=10" --jars hdfs:///hudi-spark-bundle.jar,hdfs:///spark-avro.jar --packages org.apache.spark:spark-streaming-kafka-0-10_2.11:2.4.2 --class com.hudiConsumer.SparkKafkaConsumerHudiProcessor_COW SparkKafkaConsumerHudiProcessor-assembly-1.0.jar hudi-workshop-100231-899011185738 b-2.kafkacluster1.zf8cl7.c6.kafka.us-east-1.amazonaws.com:9094,b-1.kafkacluster1.zf8cl7.c6.kafka.us-east-1.amazonaws.com:9094
This command launches a Spark Streaming job that continuously monitors the Kafka topic 's3_event_streams' and applies the incoming updates to the Hudi table 'sales_order_detail_hudi_cow' in S3.
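The consumer class itself ships pre-built in the assembly jar, so you do not need to write it for this workshop. For orientation only, here is a rough, hypothetical sketch of what such a COW consumer might look like, assuming the DStream-based Kafka 0-10 integration referenced on the command line; the topic name matches the one above, while the group id, batch interval, JSON payload handling and the reuse of this notebook's hudiOptions and SparkSession are assumptions:
import org.apache.kafka.common.serialization.StringDeserializer
import org.apache.spark.sql.Encoders
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka010.KafkaUtils
import org.apache.spark.streaming.kafka010.LocationStrategies.PreferConsistent
import org.apache.spark.streaming.kafka010.ConsumerStrategies.Subscribe

// Assumes a SparkSession named spark and the hudiOptions map defined earlier
val ssc = new StreamingContext(spark.sparkContext, Seconds(30))   // assumed batch interval

val kafkaParams = Map[String, Object](
  "bootstrap.servers" -> "<broker-list>",          // the MSK brokers passed on the command line
  "key.deserializer" -> classOf[StringDeserializer],
  "value.deserializer" -> classOf[StringDeserializer],
  "group.id" -> "hudi-cow-consumer",               // assumed group id
  "auto.offset.reset" -> "latest")

val stream = KafkaUtils.createDirectStream[String, String](
  ssc, PreferConsistent, Subscribe[String, String](Seq("s3_event_streams"), kafkaParams))

stream.foreachRDD { rdd =>
  if (!rdd.isEmpty()) {
    // Parse the JSON change events into a DataFrame; the real job also derives the
    // record_key/partition_key columns the same way this notebook did earlier
    val updates = spark.read.json(spark.createDataset(rdd.map(_.value))(Encoders.STRING))
    (
    updates.write
      .format("org.apache.hudi")
      .options(hudiOptions)                        // same key/partition/hive-sync options as above
      .option(HoodieWriteConfig.TABLE_NAME, "sales_order_detail_hudi_cow")
      .option(DataSourceWriteOptions.OPERATION_OPT_KEY, DataSourceWriteOptions.UPSERT_OPERATION_OPT_VAL)
      .mode(SaveMode.Append)
      .save(s"s3://$s3_bucket/hudi/sales_order_detail_hudi_cow")
    )
  }
}

ssc.start()
ssc.awaitTermination()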
You can SSH to the Presto EMR cluster from the Jupyter terminal:
$> cd SageMaker
$> ssh -i ee-default-keypair.pem hadoop@ec2-54-80-95-22.compute-1.amazonaws.com
$> presto-cli
and query the data:
presto> use hive.default;
presto> show tables;
# pick one record key here
presto> select record_key,quantity,month from sales_order_detail_hudi_cow where record_key = '<record_key>';
SSH to the Spark EMR cluster from the Jupyter terminal:
$> ssh -i ee-default-keypair.pem hadoop@ec2-54-158-247-127.compute-1.amazonaws.com
Run the following command in the terminal once ssh-ed into the EMR cluster:
EMR $> spark-submit --conf "spark.serializer=org.apache.spark.serializer.KryoSerializer" --conf "spark.sql.hive.convertMetastoreParquet=false" --conf "spark.dynamicAllocation.maxExecutors=10" --jars hdfs:///hudi-spark-bundle.jar,hdfs:///spark-avro.jar --packages org.apache.spark:spark-streaming-kafka-0-10_2.11:2.4.2 --class com.hudiConsumer.SparkKafkaConsumerHudiProcessor_MOR SparkKafkaConsumerHudiProcessor-assembly-1.0.jar hudi-workshop-100231-899011185738 b-2.kafkacluster1.zf8cl7.c6.kafka.us-east-1.amazonaws.com:9094,b-1.kafkacluster1.zf8cl7.c6.kafka.us-east-1.amazonaws.com:9094
The command launches a Spark Streaming job that continuously monitors the Kafka topic 's3_event_streams' and applies the incoming updates to the Hudi table 'sales_order_detail_hudi_mor' in S3.
You can SSH to the Presto EMR cluster from the Jupyter terminal:
$> ssh -i ee-default-keypair.pem hadoop@ec2-54-80-95-22.compute-1.amazonaws.com
and query the data, but this time we are going to use Hive to run our queries:
$> hive
# view the tables
hive> show tables;
# pick one record key here
hive> select record_key,quantity,month from sales_order_detail_hudi_mor where record_key = '<record_key>';
# let's query the same record in the realtime table
hive> select record_key,quantity,month from sales_order_detail_hudi_mor_rt where record_key = '<record_key>';
We can observe that the Realtime table has the latest view of the changes that are not yet compacted into our main MOR base table.
Let's now run the Apache Hudi Compaction process manually so that we understand the behavior. These steps will typically be automated in a production environment.
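In a production pipeline you would usually let Hudi trigger compaction itself instead of invoking it by hand. As a rough sketch (not used in this workshop), the MOR writer options defined earlier could be extended with Hudi's inline-compaction settings; the delta-commit threshold of 2 is just an example value:
// Sketch only: compact automatically after every 2 delta commits
val morOptionsWithInlineCompaction = hudiOptions ++ Map(
  "hoodie.compact.inline" -> "true",
  "hoodie.compact.inline.max.delta.commits" -> "2")
Here, though, we run the compaction manually from the Hudi CLI so each step is visible: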
## From the terminal, let's connect to the Spark EMR cluster and start the hudi cli
$> /usr/lib/hudi/cli/bin/hudi-cli.sh
## at the hudi cli, let's connect to the datapath for the MOR table
hudi-> connect --path s3://hudi-workshop-100231-899011185738/hudi/sales_order_detail_hudi_mor
## run a describe on this table
hudi:sales_order_detail_hudi_mor-> desc
## view the pending commits
hudi:sales_order_detail_hudi_mor-> commits show
## schedule the compactions on the table
hudi:sales_order_detail_hudi_mor-> compaction schedule
## refresh hudi metadata after compaction schedule is successful
hudi-> connect --path s3://hudi-workshop-100231-899011185738/hudi/sales_order_detail_hudi_mor
## view the pending compactions
hudi:sales_order_detail_hudi_mor-> compactions show all
## execute the compactions on the table
hudi:sales_order_detail_hudi_mor-> compaction run --parallelism 12 --sparkMemory 100GB --retry 1 --compactionInstant <compactionInstant> --schemaFilePath s3://hudi-workshop-100231-899011185738/config/sales_order_detail.schema
## refresh hudi metadata after compactions run is successful
hudi-> connect --path s3://hudi-workshop-100231-899011185738/hudi/sales_order_detail_hudi_mor
## view the completed compactions
hudi:sales_order_detail_hudi_mor-> compactions show all
## the compactions should show Completed now. Let's view the commits and query the changes in Hive as well:
hudi:sales_order_detail_hudi_mor-> commits show
## Let's now rollback the latest commit.
hudi:sales_order_detail_hudi_mor-> commit rollback --commit <latest commit id>
## After the step completes, let's view the commits again and query the changes in Hive as well:
hudi:sales_order_detail_hudi_mor-> commits show
We can now query the compacted 'sales_order_detail_hudi_mor' table to see the latest changes. Let's do that from Hive on our Presto EMR cluster:
## start the hive cli
$> hive
## query the changed record
hive> select record_key,quantity,month from sales_order_detail_hudi_mor where record_key = '<record_key>';