You need to recommend the appropriate storage and processing solution.
What should you recommend?
A. Enable auto-shrink on the database.
B. Flush the blob cache using Windows PowerShell.
C. Enable Apache Spark RDD caching.
D. Enable Databricks IO (DBIO) caching.
E. Configure the reading speed using Azure Data Studio.
Answer: D
Explanation:
Scenario: You must be able to use a file system view of data stored in a blob, and you must build an architecture that allows Contoso to use the DBFS filesystem layer over a blob store.
Databricks File System (DBFS) is a distributed file system installed on Azure Databricks clusters. Files in DBFS persist to Azure Blob storage, so you won’t lose data even after you terminate a cluster.
The Databricks Delta cache, previously named Databricks IO (DBIO) caching, accelerates data reads by creating copies of remote files in nodes’ local storage using a fast intermediate data format. The data is cached automatically whenever a file has to be fetched from a remote location. Successive reads of the same data are then performed locally, which results in significantly improved reading speed.
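On a Databricks cluster, the disk cache (formerly DBIO cache) is enabled through a Spark configuration flag. A minimal sketch, assuming a Databricks environment where `spark` is the preconfigured SparkSession; the mount path is hypothetical:

```python
# Enable the Databricks disk cache (formerly DBIO cache) for this session.
# spark.databricks.io.cache.enabled is a Databricks-specific setting and has
# no effect on open-source Apache Spark.
spark.conf.set("spark.databricks.io.cache.enabled", "true")

# The first scan fetches Parquet files from remote Blob storage and writes
# local copies to the worker nodes' storage in a fast intermediate format.
df = spark.read.parquet("dbfs:/mnt/contoso/sales")  # hypothetical path
df.count()

# Subsequent scans of the same files are served from the local cache,
# which is what improves the reading speed described in the scenario.
df.count()
```

Note that this differs from option C: `df.cache()` (RDD/DataFrame caching) stores deserialized data in executor memory per job, whereas the disk cache transparently keeps local copies of the remote files themselves.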
Reference: https://docs.databricks.com/delta/delta-cache.html#delta-cache