GDPR Deletes

Table of Contents:

  1. Soft Deletes
  2. Hard Deletes

GDPR has made deletes a must-have tool in everyone's data management toolbox. Apache Hudi supports implementing two types of deletes on data stored in Hudi datasets; the stronger, hard-delete variant works by letting the user specify a different record payload implementation.

  • Soft Deletes : With a soft delete, the user retains the record key but nulls out the values of all other fields. This can be achieved by ensuring the appropriate fields are nullable in the dataset schema and simply upserting the dataset after setting those fields to null.

  • Hard Deletes : A stronger form of delete that physically removes any trace of the record from the dataset; a sketch of both flavors follows this list.
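
As a preview of how the two flavors differ at write time, here is a minimal sketch (assuming a DataFrame named affectedRows holding the rows to delete, plus the hudiOptions, hudiTableName and hudiTablePath values defined later in this notebook); the cells below walk through each step against the workshop table:

import org.apache.spark.sql.SaveMode
import org.apache.spark.sql.functions.lit
import org.apache.spark.sql.types.IntegerType
import org.apache.hudi.DataSourceWriteOptions
import org.apache.hudi.config.HoodieWriteConfig

// Soft delete: null out the sensitive (nullable) fields, keep the key, and upsert.
affectedRows
  .withColumn("quantity", lit(null).cast(IntegerType))
  .write.format("org.apache.hudi")
  .options(hudiOptions)
  .option(HoodieWriteConfig.TABLE_NAME, hudiTableName)
  .option(DataSourceWriteOptions.OPERATION_OPT_KEY, DataSourceWriteOptions.UPSERT_OPERATION_OPT_VAL)
  .mode(SaveMode.Append)
  .save(hudiTablePath)

// Hard delete: the same upsert, but with a payload class that drops the record entirely.
affectedRows
  .write.format("org.apache.hudi")
  .options(hudiOptions)
  .option(HoodieWriteConfig.TABLE_NAME, hudiTableName)
  .option(DataSourceWriteOptions.OPERATION_OPT_KEY, DataSourceWriteOptions.UPSERT_OPERATION_OPT_VAL)
  .option(DataSourceWriteOptions.PAYLOAD_CLASS_OPT_KEY, "org.apache.hudi.EmptyHoodieRecordPayload")
  .mode(SaveMode.Append)
  .save(hudiTablePath)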

Let's now execute some delete operations on our dataset.

In [1]:
%%configure -f
{
    "conf":  { 
             "spark.jars":"hdfs:///hudi-spark-bundle.jar,hdfs:///spark-avro.jar",
             "spark.serializer":"org.apache.spark.serializer.KryoSerializer",
             "spark.sql.hive.convertMetastoreParquet":"false",
             "spark.dynamicAllocation.executorIdleTimeout": 3600,
             "spark.executor.memory": "7G",
             "spark.executor.cores": 1,
             "spark.dynamicAllocation.initialExecutors":16,
             "spark.sql.parquet.outputTimestampType":"TIMESTAMP_MILLIS"
           } 
}
Current session configs: {'conf': {'spark.jars': 'hdfs:///hudi-spark-bundle.jar,hdfs:///spark-avro.jar', 'spark.serializer': 'org.apache.spark.serializer.KryoSerializer', 'spark.sql.hive.convertMetastoreParquet': 'false', 'spark.dynamicAllocation.executorIdleTimeout': 3600, 'spark.executor.memory': '7G', 'spark.executor.cores': 1, 'spark.dynamicAllocation.initialExecutors': 16, 'spark.sql.parquet.outputTimestampType': 'TIMESTAMP_MILLIS'}, 'kind': 'spark'}
No active sessions.
In [2]:
import org.apache.spark.sql.SaveMode
import org.apache.spark.sql.functions._
import org.apache.hudi.DataSourceWriteOptions
import org.apache.hudi.config.HoodieWriteConfig
import org.apache.hudi.hive.MultiPartKeysValueExtractor
import com.google.common.collect.Lists;
import org.apache.hudi.ComplexKeyGenerator
import org.apache.spark.sql.functions.{concat, lit}
import org.apache.spark.sql.functions.{year, month, dayofmonth, hour}
Starting Spark application
ID  YARN Application ID             Kind   State
1   application_1576872917892_0002  spark  idle
SparkSession available as 'spark'.
import org.apache.spark.sql.SaveMode
import org.apache.spark.sql.functions._
import org.apache.hudi.DataSourceWriteOptions
import org.apache.hudi.config.HoodieWriteConfig
import org.apache.hudi.hive.MultiPartKeysValueExtractor
import com.google.common.collect.Lists
import org.apache.hudi.ComplexKeyGenerator
import org.apache.spark.sql.functions.{concat, lit}
import org.apache.spark.sql.functions.{year, month, dayofmonth, hour}

Soft Deletes

Let's pick a few records to test the soft delete functionality on:

In [3]:
//Hudi Copy on Write Table
val s3_bucket="hudi-workshop-100231-899011185738"
val hudiTableName = "sales_order_detail_hudi_cow"
val hudiTableRecordKey = "record_key"
val hudiTablePartitionKey = "partition_key"
val hudiTablePrecombineKey = "order_date"
val hudiTablePath = s"s3://$s3_bucket/hudi/" + hudiTableName
val hudiHiveTablePartitionKey = "year,month"
s3_bucket: String = hudi-workshop-100231-899011185738
hudiTableName: String = sales_order_detail_hudi_cow
hudiTableRecordKey: String = record_key
hudiTablePartitionKey: String = partition_key
hudiTablePrecombineKey: String = order_date
hudiTablePath: String = s3://hudi-workshop-100231-899011185738/hudi/sales_order_detail_hudi_cow
hudiHiveTablePartitionKey: String = year,month

Let's pick a few random order_ids for this exercise:

In [4]:
val df=spark.sql("select order_id, quantity, order_date from "+hudiTableName+" where order_id in (10001,10002,10003)")
df.show(100,false)
df: org.apache.spark.sql.DataFrame = [order_id: int, quantity: int ... 1 more field]
+--------+--------+----------+
|order_id|quantity|order_date|
+--------+--------+----------+
|10001   |103     |2015-08-31|
|10001   |118     |2015-08-31|
|10001   |144     |2015-08-31|
|10001   |55      |2015-08-31|
|10001   |96      |2015-08-31|
|10002   |77      |2015-04-25|
|10003   |146     |2015-04-05|
|10002   |10      |2015-04-25|
+--------+--------+----------+

In [5]:
import org.apache.spark.sql.types.IntegerType

val df=spark.sql("select * from "+hudiTableName+" where order_id in (10001,10002,10003)")
df.printSchema()
val updatedDF = df.withColumn("quantity", lit("-1").cast(IntegerType))
updatedDF.printSchema()
import org.apache.spark.sql.types.IntegerType
df: org.apache.spark.sql.DataFrame = [_hoodie_commit_time: string, _hoodie_commit_seqno: string ... 17 more fields]
root
 |-- _hoodie_commit_time: string (nullable = true)
 |-- _hoodie_commit_seqno: string (nullable = true)
 |-- _hoodie_record_key: string (nullable = true)
 |-- _hoodie_partition_path: string (nullable = true)
 |-- _hoodie_file_name: string (nullable = true)
 |-- line_id: integer (nullable = true)
 |-- line_number: integer (nullable = true)
 |-- order_id: integer (nullable = true)
 |-- product_id: integer (nullable = true)
 |-- quantity: integer (nullable = true)
 |-- unit_price: decimal(38,10) (nullable = true)
 |-- discount: decimal(38,10) (nullable = true)
 |-- supply_cost: decimal(38,10) (nullable = true)
 |-- tax: decimal(38,10) (nullable = true)
 |-- order_date: date (nullable = true)
 |-- record_key: string (nullable = true)
 |-- partition_key: string (nullable = true)
 |-- year: string (nullable = true)
 |-- month: string (nullable = true)

updatedDF: org.apache.spark.sql.DataFrame = [_hoodie_commit_time: string, _hoodie_commit_seqno: string ... 17 more fields]
root
 |-- _hoodie_commit_time: string (nullable = true)
 |-- _hoodie_commit_seqno: string (nullable = true)
 |-- _hoodie_record_key: string (nullable = true)
 |-- _hoodie_partition_path: string (nullable = true)
 |-- _hoodie_file_name: string (nullable = true)
 |-- line_id: integer (nullable = true)
 |-- line_number: integer (nullable = true)
 |-- order_id: integer (nullable = true)
 |-- product_id: integer (nullable = true)
 |-- quantity: integer (nullable = true)
 |-- unit_price: decimal(38,10) (nullable = true)
 |-- discount: decimal(38,10) (nullable = true)
 |-- supply_cost: decimal(38,10) (nullable = true)
 |-- tax: decimal(38,10) (nullable = true)
 |-- order_date: date (nullable = true)
 |-- record_key: string (nullable = true)
 |-- partition_key: string (nullable = true)
 |-- year: string (nullable = true)
 |-- month: string (nullable = true)

In [6]:
// Set up our Hudi Data Source Options
val hudiOptions = Map[String,String](
    DataSourceWriteOptions.RECORDKEY_FIELD_OPT_KEY -> hudiTableRecordKey,
    DataSourceWriteOptions.PARTITIONPATH_FIELD_OPT_KEY -> hudiTablePartitionKey, 
    DataSourceWriteOptions.PRECOMBINE_FIELD_OPT_KEY -> hudiTablePrecombineKey, 
    DataSourceWriteOptions.HIVE_SYNC_ENABLED_OPT_KEY -> "true", 
    DataSourceWriteOptions.HIVE_PARTITION_FIELDS_OPT_KEY -> hudiHiveTablePartitionKey, 
    DataSourceWriteOptions.HIVE_ASSUME_DATE_PARTITION_OPT_KEY -> "false", 
    DataSourceWriteOptions.HIVE_PARTITION_EXTRACTOR_CLASS_OPT_KEY ->
        classOf[MultiPartKeysValueExtractor].getName,
    "hoodie.parquet.max.file.size" -> String.valueOf(1024 * 1024 * 1024),
    "hoodie.parquet.small.file.limit" -> String.valueOf(64 * 1024 * 1024),
    "hoodie.parquet.compression.ratio" -> String.valueOf(0.5))
hudiOptions: scala.collection.immutable.Map[String,String] = Map(hoodie.parquet.small.file.limit -> 67108864, hoodie.parquet.compression.ratio -> 0.5, hoodie.datasource.write.precombine.field -> order_date, hoodie.datasource.hive_sync.partition_fields -> year,month, hoodie.datasource.hive_sync.partition_extractor_class -> org.apache.hudi.hive.MultiPartKeysValueExtractor, hoodie.parquet.max.file.size -> 1073741824, hoodie.datasource.hive_sync.enable -> true, hoodie.datasource.write.recordkey.field -> record_key, hoodie.datasource.hive_sync.assume_date_partitioning -> false, hoodie.datasource.write.partitionpath.field -> partition_key)
In [7]:
(
 updatedDF.write 
  .format("org.apache.hudi")
  //Copy on Write Table
  .option(DataSourceWriteOptions.STORAGE_TYPE_OPT_KEY, DataSourceWriteOptions.COW_STORAGE_TYPE_OPT_VAL)
  .options(hudiOptions)
  .option(HoodieWriteConfig.TABLE_NAME,hudiTableName)
  .option(DataSourceWriteOptions.HIVE_TABLE_OPT_KEY, hudiTableName)
  .option(DataSourceWriteOptions.OPERATION_OPT_KEY, DataSourceWriteOptions.UPSERT_OPERATION_OPT_VAL)
  .mode(SaveMode.Append)
  .save(hudiTablePath)
)

Let's now view the changed data in our tables:

In [8]:
val df=spark.sql("select order_id, quantity, order_date from "+hudiTableName+" where order_id in (10001,10002,10003)")
df.show(100,false)
df: org.apache.spark.sql.DataFrame = [order_id: int, quantity: int ... 1 more field]
+--------+--------+----------+
|order_id|quantity|order_date|
+--------+--------+----------+
|10002   |-1      |2015-04-25|
|10003   |-1      |2015-04-05|
|10002   |-1      |2015-04-25|
|10001   |-1      |2015-08-31|
|10001   |-1      |2015-08-31|
|10001   |-1      |2015-08-31|
|10001   |-1      |2015-08-31|
|10001   |-1      |2015-08-31|
+--------+--------+----------+

We can see that the quantity field has been updated. So essentially a soft delete is an upsert where certain fields have been cleared out; here we set quantity to -1 for visibility, but you would typically set nullable PII or PHI columns to null to anonymize the records.
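
As a sketch of what that anonymization could look like in practice (the customer_name and customer_email columns below are hypothetical and are not part of this table's schema):

import org.apache.spark.sql.functions.lit
import org.apache.spark.sql.types.StringType

// Hypothetical PII columns, used only for illustration.
val piiColumns = Seq("customer_name", "customer_email")

// Null out each PII column while keeping the record key intact.
val anonymizedDF = piiColumns.foldLeft(df) { (acc, colName) =>
  acc.withColumn(colName, lit(null).cast(StringType))
}
// anonymizedDF would then be written back with the same upsert call shown in In [7] above.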

Hard Deletes

Let's test the hard delete functionality:

In [9]:
val deleteDF=spark.sql("select * from "+hudiTableName+" where order_id in (10001,10002,10003)")
deleteDF.count()
deleteDF: org.apache.spark.sql.DataFrame = [_hoodie_commit_time: string, _hoodie_commit_seqno: string ... 17 more fields]
res9: Long = 8
In [10]:
(
 deleteDF.write
   .format("org.apache.hudi")
   .option(DataSourceWriteOptions.STORAGE_TYPE_OPT_KEY, DataSourceWriteOptions.COW_STORAGE_TYPE_OPT_VAL)
   .options(hudiOptions)
   .option(HoodieWriteConfig.TABLE_NAME,hudiTableName)
   .option(DataSourceWriteOptions.HIVE_TABLE_OPT_KEY, hudiTableName)
   .option(DataSourceWriteOptions.OPERATION_OPT_KEY, DataSourceWriteOptions.UPSERT_OPERATION_OPT_VAL)
   // Empty out the row with the EmptyHoodieRecordPayload
   .option(DataSourceWriteOptions.PAYLOAD_CLASS_OPT_KEY, "org.apache.hudi.EmptyHoodieRecordPayload")
   .mode(SaveMode.Append)
   .save(hudiTablePath)
)
In [11]:
val df=spark.sql("select * from "+hudiTableName+" where order_id in (10001,10002,10003)")
df.count()
df: org.apache.spark.sql.DataFrame = [_hoodie_commit_time: string, _hoodie_commit_seqno: string ... 17 more fields]
res11: Long = 0

We can see that the records have been deleted from our data lake.