Your application provides data transformation services. Files containing data to be transformed are first uploaded to Amazon S3 and then transformed by a fleet of spot EC2 Instances. Files submitted by your premium customers must be transformed with the highest priority. How should you implement such a system?
Use a DynamoDB table with an attribute defining the priority level.
Transformation instances will scan the table for tasks, sorting the results by priority level.
Use Route 53 latency based-routing to send high priority tasks to the closest transformation instances.
Use two SQS queues, one for high-priority messages, the other for default priority. Transformation instances first poll the high-priority queue; if there is no message, they poll the default-priority queue.
Use a single SQS queue. Each message contains the priority level. Transformation instances poll high-priority messages first.
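The two-queue option describes a simple polling pattern: always drain the high-priority queue before touching the default one. A minimal sketch of that logic, using in-memory queues as stand-ins for the two SQS queues (with real SQS these would be two `receive_message` calls against separate queue URLs; the queue names and file names below are illustrative only):

```python
import queue

# Stand-ins for the premium and default SQS queues (assumption: in a real
# system these would be boto3 SQS clients pointed at two queue URLs).
high_priority = queue.Queue()
default_priority = queue.Queue()

def poll_next_task():
    """Return the next task, draining the high-priority queue first."""
    try:
        return high_priority.get_nowait()
    except queue.Empty:
        pass
    try:
        return default_priority.get_nowait()
    except queue.Empty:
        return None  # both queues empty

default_priority.put("standard-file.csv")
high_priority.put("premium-file.csv")

print(poll_next_task())  # the premium customer's file is served first
print(poll_next_task())
```

Note the design consequence: because priority is encoded in queue membership rather than in message attributes, workers never need to scan or sort a backlog, which is why this option scales better than the DynamoDB or single-queue alternatives.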
Which of the following are characteristics of Amazon VPC subnets? Choose 2 answers
Each subnet spans at least 2 Availability Zones to provide a high-availability environment.
Each subnet maps to a single Availability Zone.
A CIDR block mask of /25 is the smallest range supported.
By default, all subnets can route between each other, whether they are private or public.
Instances in a private subnet can communicate with the Internet only if they have an Elastic IP address.
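For the CIDR distractor above, it helps to do the subnet arithmetic. A short sketch using the standard-library `ipaddress` module, under the assumption (per AWS documentation) that VPC subnet masks range from /16 to /28 and that AWS reserves the first four and the last IP address in each subnet:

```python
import ipaddress

# /28 is the smallest subnet AWS VPC supports (not /25, as the distractor
# claims). AWS reserves 5 addresses per subnet: the network address, the
# next three, and the broadcast address.
subnet = ipaddress.ip_network("10.0.0.0/28")
total = subnet.num_addresses   # 16 addresses in a /28
usable = total - 5             # 11 addresses left for instances
print(total, usable)
```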
In AWS, which security aspects are the customer's responsibility? Choose 4 answers
Security Group and ACL (Access Control List) settings
Decommissioning storage devices
Patch management on the EC2 instance's operating system
Life-cycle management of IAM credentials
Controlling physical access to compute resources
Encryption of EBS (Elastic Block Storage) volumes
When you put objects in Amazon S3, what is the indication that an object was successfully stored?
An HTTP 200 result code and MD5 checksum, taken together, indicate that the operation was successful.
Amazon S3 is engineered for 99.999999999% durability. Therefore there is no need to confirm that data was inserted.
A success code is inserted into the S3 object metadata.
Each S3 account has a special bucket named _s3_logs. Success codes are written to this bucket with a timestamp and checksum.
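The correct option describes client-side verification: check the HTTP 200 status and compare the local MD5 of the payload against the checksum S3 returns (for a simple, non-multipart PUT the ETag is the MD5 of the object body). A minimal sketch, with `fake_put` as a hypothetical stand-in for a real S3 client call:

```python
import hashlib

def fake_put(body: bytes) -> dict:
    """Stand-in for an S3 PUT: returns a 200 status and an MD5-based ETag,
    mimicking a successful simple (non-multipart) upload."""
    return {"status": 200, "etag": hashlib.md5(body).hexdigest()}

def upload_verified(body: bytes) -> bool:
    """Upload and confirm success: HTTP 200 plus a matching MD5 checksum."""
    response = fake_put(body)
    local_md5 = hashlib.md5(body).hexdigest()
    return response["status"] == 200 and response["etag"] == local_md5

print(upload_verified(b"hello world"))  # True when status and checksum agree
```

With a real client, the same comparison can be made against the `ETag` header of the PutObject response, or S3 can be asked to verify server-side by sending a `Content-MD5` header with the request.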