A city has been collecting data on its public bicycle share program for the past three years.
The 5PB dataset currently resides on Amazon S3.
The data contains the following data points:
• Bicycle origination points
• Bicycle destination points
• Mileage between the points
• Number of bicycle slots available at the station (which is variable based on the station location)
• Number of slots available and taken at each station at a given time
The program has received additional funds to increase the number of bicycle stations available. All data is regularly archived to Amazon Glacier.
The new bicycle stations must be located to provide the most riders access to bicycles.
How should this task be performed?
A. Move the data from Amazon S3 into Amazon EBS-backed volumes and use an EC2-based Hadoop cluster with spot instances to run a Spark job that performs a stochastic gradient descent optimization.
B. Use the Amazon Redshift COPY command to move the data from Amazon S3 into Amazon Redshift and run a SQL query that outputs the most popular bicycle stations.
C. Persist the data on Amazon S3 and use a transient EMR cluster with spot instances to run a Spark streaming job that will move the data into Amazon Kinesis.
D. Keep the data on Amazon S3 and use an Amazon EMR-based Hadoop cluster with spot instances to run a Spark job that performs a stochastic gradient descent optimization over EMRFS.
Answer: D
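Option D avoids copying the 5PB dataset off S3: EMRFS lets the Spark job read S3 objects in place, spot instances keep the transient cluster cheap, and SGD can optimize station placement against rider demand. As illustration only, here is a minimal PySpark sketch of that approach; the bucket name, file layout, and column names (start_lat, start_lon) are assumptions, and a real job at this scale would run the optimization distributed rather than collecting a sample to the driver.

```python
# Minimal sketch, assuming a trips CSV at s3://city-bike-share/trips.csv
# with start_lat/start_lon columns; on EMR the s3:// scheme is served by EMRFS.
import random
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("StationPlacementSGD").getOrCreate()

# EMRFS lets Spark read the S3 dataset in place -- no bulk copy required.
trips = spark.read.csv("s3://city-bike-share/trips.csv",
                       header=True, inferSchema=True)

# Pull a small sample of origination points to the driver for a toy SGD loop.
points = [(r["start_lat"], r["start_lon"])
          for r in trips.select("start_lat", "start_lon").sample(0.001).collect()]

# SGD on mean squared distance to rider origination points; the minimizer is
# the centroid of demand, a stand-in for "most riders get access".
station = list(points[0])
lr = 0.05
for _ in range(10_000):
    lat, lon = random.choice(points)
    # Gradient of (station - point)^2 with respect to station is 2*(station - point).
    station[0] -= lr * 2 * (station[0] - lat)
    station[1] -= lr * 2 * (station[1] - lon)

print(f"Suggested station location: {station[0]:.5f}, {station[1]:.5f}")
spark.stop()
```

A production version would express the objective and updates as Spark transformations (or use a library optimizer) so the full dataset never leaves the cluster, but the shape of the computation is the same.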