Testlet 5
Case study
Overview
ADatum Corporation is a retailer that sells products through two sales channels: retail stores and a website.
Existing Environment
ADatum has one database server that has Microsoft SQL Server 2016 installed. The server hosts three mission-critical databases named SALESDB, DOCDB, and REPORTINGDB.
SALESDB collects data from the stores and the website.
DOCDB stores documents that relate to the sales data in SALESDB. The documents are stored in two different JSON formats based on the sales channel.
REPORTINGDB stores reporting data and contains several columnstore indexes. A daily process creates reporting data in REPORTINGDB from the data in SALESDB. The process is implemented as a SQL Server Integration Services (SSIS) package that runs a stored procedure from SALESDB.
Requirements
Planned Changes
ADatum plans to move the current data infrastructure to Azure.
The new infrastructure has the following requirements:
– Migrate SALESDB and REPORTINGDB to an Azure SQL database.
– Migrate DOCDB to Azure Cosmos DB.
– The sales data, including the documents in JSON format, must be gathered as it arrives and analyzed online by using Azure Stream Analytics. The analytic process will perform aggregations that must be done continuously, without gaps and without overlapping.
– As they arrive, all the sales documents in JSON format must be transformed into one consistent format.
– Azure Data Factory will replace the SSIS process of copying the data from SALESDB to REPORTINGDB.
Technical Requirements
The new Azure data infrastructure must meet the following technical requirements:
– Data in SALESDB must be encrypted by using Transparent Data Encryption (TDE). The encryption must use your own key.
– SALESDB must be restorable to any given minute within the past three weeks.
– Real-time processing must be monitored to ensure that workloads are sized properly based on actual usage patterns.
– Missing indexes must be created automatically for REPORTINGDB.
– Disk IO, CPU, and memory usage must be monitored for SALESDB.
How should you monitor SALESDB to meet the technical requirements?
A. Query the sys.resource_stats dynamic management view.
B. Review the Query Performance Insight for SALESDB.
C. Query the sys.dm_os_wait_stats dynamic management view.
D. Review the auditing information of SALESDB.
Answer: A
Explanation:
Scenario: Disk IO, CPU, and memory usage must be monitored for SALESDB
The sys.resource_stats dynamic management view returns historical CPU, IO, and DTU consumption data. There is one row per five-minute interval for each database on an Azure SQL logical server, provided the metrics changed during that interval.
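As a sketch, the following T-SQL query (run in the master database of the Azure SQL logical server) retrieves the historical resource metrics for SALESDB from sys.resource_stats. The column names shown are the view's documented columns; the 14-day window reflects the view's retention period.

```sql
-- Run against the master database of the Azure SQL logical server.
-- Returns one row per 5-minute interval in which the metrics changed,
-- covering Disk IO, CPU, and memory-related consumption for SALESDB.
SELECT start_time,
       end_time,
       avg_cpu_percent,       -- CPU usage as a percentage of the service tier limit
       avg_data_io_percent,   -- data file IO as a percentage of the limit
       avg_log_write_percent  -- log write IO as a percentage of the limit
FROM sys.resource_stats
WHERE database_name = 'SALESDB'
ORDER BY start_time DESC;
```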
Incorrect Answers:
B: Query Performance Insight helps you to quickly identify what your longest running queries are, how they change over time, and what waits are affecting them.
C: sys.dm_os_wait_stats: specific types of waits during query execution can indicate bottlenecks or stall points within a query. Similarly, high wait times or wait counts server-wide can indicate bottlenecks or hot spots in query interactions within the server instance. For example, lock waits indicate data contention by queries, page IO latch waits indicate slow IO response times, and page latch update waits indicate incorrect file layout.
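For comparison, a typical wait-statistics query looks like the sketch below. Note that it surfaces wait types, not the Disk IO, CPU, and memory usage metrics the scenario requires (and in Azure SQL Database the database-scoped equivalent is sys.dm_db_wait_stats).

```sql
-- Top waits by cumulative wait time since the last stats reset.
-- High PAGEIOLATCH_* waits suggest slow IO; LCK_* waits suggest contention.
SELECT TOP (10)
       wait_type,
       waiting_tasks_count,
       wait_time_ms,
       signal_wait_time_ms
FROM sys.dm_os_wait_stats
ORDER BY wait_time_ms DESC;
```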
References: https://dataplatformlabs.com/monitoring-azure-sql-database-with-sys-resource_stats/