Showing posts from June, 2021

Call Recordings and other Binary Data and Metadata in the Data Lake

Data lakes hold data in any format. This includes structured data, semi-structured text data, documents, and binary data. Organizing that binary data and its metadata can be done in several ways. We're talking about binary data and its associated descriptive metadata. This shows some of the metadata that could be associated with each call recording. The recording itself is highly sensitive because we don't know exactly what was said. The extracted text is also highly sensitive because it is a full-text copy with the same risk. Media and binary files can add up. We could have millions of call recordings and all of their associated metadata; it is a large data problem. We have to pick the format for the binary, non-rectangular data and its associated metadata. We can use the native formats and link to them, or embed the binary data inside another format. Here are two of the major options. Bin
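One way to keep the native binary format and link to it is a "sidecar" metadata record per recording. This is a minimal sketch of that idea; the field names, the `s3://` path, and the `build_call_metadata` helper are illustrative assumptions, not a standard schema.

```python
import hashlib
import json


def build_call_metadata(recording_path: str, audio_bytes: bytes,
                        agent_id: str, duration_seconds: int) -> str:
    """Build a JSON sidecar record describing one call recording.

    The binary stays in its native format at `recording_path`; this
    record carries the descriptive metadata and a content hash so the
    link between metadata and binary can be verified later.
    """
    record = {
        "recording_path": recording_path,                    # link to the native binary
        "sha256": hashlib.sha256(audio_bytes).hexdigest(),   # integrity check on the binary
        "agent_id": agent_id,
        "duration_seconds": duration_seconds,
        "sensitivity": "high",   # recordings are treated as highly sensitive data
    }
    return json.dumps(record)


# Hypothetical usage: path and IDs are made up for illustration.
meta = build_call_metadata("s3://lake/calls/2021/06/call-001.wav",
                           b"\x00\x01", "agent-42", 180)
```

The alternative option (embedding the binary inside another format, such as a column of a container file) trades this loose coupling for a single self-describing object.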

Cloud Data Lake vs Warehouse - fit for purpose

Data Lakes and Data Warehouses each have their own strengths and weaknesses. You may need one or the other depending on your needs. Look at your use cases to determine whether it makes sense to have one, the other, or both. Maybe this can give you more things to think about when deciding between them. My general experience has been: Data Lakes tend to be the choice when feeding operational systems and when storing binary data. They are often used for massive data transformations or ML feature creation. Sometimes security concerns and partitioning may drive highly sensitive data to protected lakes. Data Warehouses tend to be the choice when humans need big data for reporting, data exploration, and collaborative environments. Use cases that put them in the middle of data flows for operational systems should be evaluated for uptime and latency. Different companies will prioritize differently. I've seen companies that were lake-only, companies that had

Streaming Ecosystems Still Need Extract and Load

Enterprises move from batch to streaming data ingestion in order to make data available in a more near-real-time manner. This does not remove the need for extract and load capabilities. Streaming systems only operate on data that is in the stream right now. There is no data available from outside the retention window or from before the system was implemented. A whole other set of lifecycle operations requires some type of bulk operation. Examples include:

- Initial data loads where data was collected prior to, or outside of, streaming processing.
- Original event streams that need to be re-ingested because they were mis-processed or because you wish to extract the data differently.
- Original event streams that are fixed or modified and re-ingested in order to fix errors or add information in the operational store.
- Privacy and retention rules that may require the generation of synthetic events to make data change
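The backfill cases above can be sketched as a bulk load that wraps historical rows as events and publishes them in order. This is an assumed, minimal illustration: `publish` here just appends to an in-memory list standing in for a real stream, and the event envelope fields are invented for the example.

```python
import json
from datetime import datetime, timezone


def backfill(rows, publish):
    """Wrap historical records as events and publish them in order.

    Tagging each event as a "backfill" lets downstream consumers
    distinguish replayed history from live traffic.
    """
    for row in rows:
        event = {
            "event_type": "backfill",  # mark as a replay, not a live event
            "ingested_at": datetime.now(timezone.utc).isoformat(),
            "payload": row,            # the original historical record
        }
        publish(json.dumps(event))


# Simulated stream: a list stands in for a topic or partition.
stream = []
backfill([{"id": 1}, {"id": 2}], stream.append)
```

In a real system `publish` would be a producer call to your streaming platform; the shape of the loop is the same either way.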