

killexams.com


Amazon


DVA-C02


AWS Certified Developer - Associate


https://killexams.com/pass4sure/exam-detail/DVA-C02

Question: 334


A company is migrating legacy internal applications to AWS. Leadership wants to rewrite the internal employee directory to use native AWS services. A developer needs to create a solution for storing employee contact details and high-resolution photos for use with the new application.


Which solution will enable the search and retrieval of each employee's individual details and high-resolution photos using AWS APIs?


  1. Encode each employee's contact information and photos using Base64. Store the information in an Amazon DynamoDB table using a sort key.

  2. Store each employee's contact information in an Amazon DynamoDB table along with the object keys for the photos stored in Amazon S3.

  3. Use Amazon Cognito user pools to implement the employee directory in a fully managed software-as-a-service (SaaS) method.

  4. Store employee contact information in an Amazon RDS DB instance with the photos stored in Amazon Elastic File System (Amazon EFS).


Answer: B
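For illustration, option B might look like the following boto3 sketch (the table name, bucket name, and attribute names are hypothetical):

import boto3

dynamodb = boto3.resource("dynamodb")
s3 = boto3.client("s3")
table = dynamodb.Table("EmployeeDirectory")  # hypothetical table name

# Store the contact details together with the S3 object key of the photo.
table.put_item(Item={
    "employee_id": "e-1001",
    "name": "Jane Doe",
    "email": "jane.doe@example.com",
    "photo_key": "photos/e-1001.jpg",
})

# Search/retrieve: read the item, then fetch the photo from S3 by its key.
item = table.get_item(Key={"employee_id": "e-1001"})["Item"]
photo = s3.get_object(Bucket="employee-photos", Key=item["photo_key"])["Body"].read()

This keeps large binary photos out of DynamoDB, which has a 400 KB item size limit, while both pieces remain retrievable through AWS APIs.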
Question: 335
A developer is migrating some features from a legacy monolithic application to use AWS Lambda functions instead. The application currently stores data in an Amazon Aurora DB cluster that runs in private subnets in a VPC. The AWS account has one VPC deployed.

The Lambda functions and the DB cluster are deployed in the same AWS Region in the same AWS account. The developer needs to ensure that the Lambda functions can securely access the DB cluster without crossing the public internet.


Which solution will meet these requirements?


  1. Configure the DB cluster's public access setting to Yes.

  2. Configure an Amazon RDS database proxy for the Lambda functions.

  3. Configure a NAT gateway and a security group for the Lambda functions.

  4. Configure the VPC, subnets, and a security group for the Lambda functions.


Answer: D
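As a sketch of option D, the function can be attached to the VPC with boto3 (the function name, subnet IDs, and security group ID are hypothetical):

import boto3

lam = boto3.client("lambda")

# Place the function in the same private subnets as the DB cluster and give
# it a security group that the cluster's security group allows inbound on
# the database port.
lam.update_function_configuration(
    FunctionName="orders-handler",
    VpcConfig={
        "SubnetIds": ["subnet-0aaa1111", "subnet-0bbb2222"],
        "SecurityGroupIds": ["sg-0ccc3333"],
    },
)

With this configuration, traffic from the Lambda functions to Aurora stays inside the VPC and never touches the public internet.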
Question: 336
A company wants to share information with a third party. The third party has an HTTP API endpoint that the company can use to share the information. The company has the required API key to access the HTTP API.


The company needs a way to manage the API key by using code. The integration of the API key with the application code cannot affect application performance.


Which solution will meet these requirements MOST securely?


  1. Store the API credentials in AWS Secrets Manager. Retrieve the API credentials at runtime by using the AWS SDK. Use the credentials to make the API call.

  2. Store the API credentials in a local code variable. Push the code to a secure Git repository. Use the local code variable at runtime to make the API call.

  3. Store the API credentials as an object in a private Amazon S3 bucket. Restrict access to the S3 object by using IAM policies. Retrieve the API credentials at runtime by using the AWS SDK. Use the credentials to make the API call.

  4. Store the API credentials in an Amazon DynamoDB table. Restrict access to the table by using resource-based policies. Retrieve the API credentials at runtime by using the AWS SDK. Use the credentials to make the API call.


Answer: A
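A minimal sketch of the Secrets Manager approach, assuming the secret is stored as a JSON string with an "api_key" field (the secret name is hypothetical). Fetching the secret once outside the handler caches it for the life of the execution environment, so the lookup does not slow down every request:

import json
import boto3

secrets = boto3.client("secretsmanager")
API_KEY = json.loads(
    secrets.get_secret_value(SecretId="third-party/api-key")["SecretString"]
)["api_key"]

def handler(event, context):
    # Use API_KEY in the request headers of the third-party HTTP call.
    ...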
Question: 337
An application uses Lambda functions to extract metadata from files uploaded to an S3 bucket; the metadata is stored in Amazon DynamoDB. The application starts behaving unexpectedly, and the developer wants to examine the logs of the Lambda function code for errors.


Based on this system configuration, where would the developer find the logs?


  1. Amazon S3

  2. AWS CloudTrail

  3. Amazon CloudWatch

  4. Amazon DynamoDB


Answer: C
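Lambda writes the function's stdout, stderr, and runtime errors to the log group /aws/lambda/<function-name> in CloudWatch Logs. A sketch of pulling recent errors with boto3 (the function name is hypothetical):

import boto3

logs = boto3.client("logs")

resp = logs.filter_log_events(
    logGroupName="/aws/lambda/extract-metadata",
    filterPattern="ERROR",
)
for event in resp["events"]:
    print(event["message"])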
Question: 338
A developer is creating an application that includes an Amazon API Gateway REST API in the us-east-2 Region. The developer wants to use Amazon CloudFront and a custom domain name for the API. The developer has acquired an SSL/TLS certificate for the domain from a third-party provider.


How should the developer configure the custom domain for the application?


  1. Import the SSL/TLS certificate into AWS Certificate Manager (ACM) in the same Region as the API. Create a DNS A record for the custom domain.

  2. Import the SSL/TLS certificate into CloudFront. Create a DNS CNAME record for the custom domain.

  3. Import the SSL/TLS certificate into AWS Certificate Manager (ACM) in the same Region as the API. Create a DNS CNAME record for the custom domain.

  4. Import the SSL/TLS certificate into AWS Certificate Manager (ACM) in the us-east-1 Region. Create a DNS CNAME record for the custom domain.


Answer: D
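CloudFront only uses certificates from ACM in us-east-1 (or the legacy IAM certificate store) for custom domains, regardless of the API's Region. A sketch of the import, assuming the certificate files are available locally:

import boto3

acm = boto3.client("acm", region_name="us-east-1")  # must be us-east-1 for CloudFront

with open("cert.pem", "rb") as c, open("key.pem", "rb") as k, open("chain.pem", "rb") as ch:
    acm.import_certificate(
        Certificate=c.read(),
        PrivateKey=k.read(),
        CertificateChain=ch.read(),
    )

# Then create a DNS CNAME record pointing the custom domain at the
# CloudFront distribution's domain name.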
Question: 339
An application that is hosted on an Amazon EC2 instance needs access to files that are stored in an Amazon S3 bucket. The application lists the objects that are stored in the S3 bucket and displays a table to the user. During testing, a developer discovers that the application does not show any objects in the list.


What is the MOST secure way to resolve this issue?


  1. Update the IAM instance profile that is attached to the EC2 instance to include the s3:* permission for the S3 bucket.

  2. Update the IAM instance profile that is attached to the EC2 instance to include the s3:ListBucket permission for the S3 bucket.

  3. Update the developer's user permissions to include the s3:ListBucket permission for the S3 bucket.

  4. Update the S3 bucket policy by including the s3:ListBucket permission and by setting the Principal element to specify the account number of the EC2 instance.


Answer: B
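A sketch of option B: grant only s3:ListBucket on the one bucket to the role behind the instance profile (the role, policy, and bucket names are hypothetical):

import json
import boto3

iam = boto3.client("iam")

iam.put_role_policy(
    RoleName="app-instance-role",
    PolicyName="list-app-bucket",
    PolicyDocument=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": "s3:ListBucket",
            "Resource": "arn:aws:s3:::app-bucket",
        }],
    }),
)

This follows least privilege: the broader s3:* grant in option A would also work, but it exposes far more than the application needs.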
Question: 340
A developer is designing a serverless application with two AWS Lambda functions to process photos. One Lambda function stores objects in an Amazon S3 bucket and stores the associated metadata in an Amazon DynamoDB table. The other Lambda function fetches the objects from the S3 bucket by using the metadata from the DynamoDB table.


Both Lambda functions use the same Python library to perform complex computations and are approaching the quota for the maximum size of zipped deployment packages.


What should the developer do to reduce the size of the Lambda deployment packages with the LEAST operational overhead?


  1. Package each Python library in its own .zip file archive. Deploy each Lambda function with its own copy of the library.

  2. Create a Lambda layer with the required Python library. Use the Lambda layer in both Lambda functions.

  3. Combine the two Lambda functions into one Lambda function. Deploy the Lambda function as a single .zip file archive.

  4. Download the Python library to an S3 bucket. Program the Lambda functions to reference the object URLs.


Answer: B
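A sketch of option B with boto3: publish the shared library once as a layer and reference it from both functions (the layer, file, and function names are hypothetical):

import boto3

lam = boto3.client("lambda")

with open("layer.zip", "rb") as f:
    layer = lam.publish_layer_version(
        LayerName="shared-compute-lib",
        Content={"ZipFile": f.read()},
        CompatibleRuntimes=["python3.12"],
    )

for fn in ("store-photos", "fetch-photos"):
    lam.update_function_configuration(
        FunctionName=fn,
        Layers=[layer["LayerVersionArn"]],
    )

Each function's own deployment package then contains only its handler code, which keeps both packages well under the size quota.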
Question: 241
A developer is creating an AWS Lambda function that needs credentials to connect to an Amazon RDS for MySQL database. An Amazon S3 bucket currently stores the credentials. The developer needs to improve the existing solution by implementing credential rotation and secure storage. The developer also needs to provide integration with the Lambda function.


Which solution should the developer use to store and retrieve the credentials with the LEAST management overhead?


  1. Store the credentials in AWS Systems Manager Parameter Store. Select the database that the parameter will access. Use the default AWS Key Management Service (AWS KMS) key to encrypt the parameter. Enable automatic rotation for the parameter. Use the parameter from Parameter Store on the Lambda function to connect to the database.

  2. Encrypt the credentials with the default AWS Key Management Service (AWS KMS) key. Store the credentials as environment variables for the Lambda function. Create a second Lambda function to generate new credentials and to rotate the credentials by updating the environment variables of the first Lambda function. Invoke the second Lambda function by using an Amazon EventBridge rule that runs on a schedule. Update the database to use the new credentials. On the first Lambda function, retrieve the credentials from the environment variables. Decrypt the credentials by using AWS KMS. Connect to the database.

  3. Store the credentials in AWS Secrets Manager. Set the secret type to Credentials for Amazon RDS database. Select the database that the secret will access. Use the default AWS Key Management Service (AWS KMS) key to encrypt the secret. Enable automatic rotation for the secret. Use the secret from Secrets Manager on the Lambda function to connect to the database.

  4. Encrypt the credentials by using AWS Key Management Service (AWS KMS). Store the credentials in an Amazon DynamoDB table. Create a second Lambda function to rotate the credentials. Invoke the second Lambda function by using an Amazon EventBridge rule that runs on a schedule. Update the DynamoDB table. Update the database to use the generated credentials. Retrieve the credentials from DynamoDB with the first Lambda function. Connect to the database.


Answer: C
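A sketch of option C from inside the Lambda function, assuming the secret is of the "Credentials for Amazon RDS database" type (which stores keys such as username, password, host, and dbname) and that a MySQL client library such as PyMySQL is bundled with the function:

import json
import boto3
import pymysql  # assumed to be packaged with the deployment

secrets = boto3.client("secretsmanager")

def handler(event, context):
    # Fetching the current version on each cold start picks up rotated
    # credentials automatically.
    creds = json.loads(
        secrets.get_secret_value(SecretId="prod/mysql-credentials")["SecretString"]
    )
    conn = pymysql.connect(
        host=creds["host"],
        user=creds["username"],
        password=creds["password"],
        database=creds.get("dbname"),
    )
    ...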
Question: 341
A developer wants to insert a record into an Amazon DynamoDB table as soon as a new file is added to an Amazon S3 bucket.


Which set of steps would be necessary to achieve this?


  1. Create an event with Amazon EventBridge that will monitor the S3 bucket and then insert the records into DynamoDB.

  2. Configure an S3 event to invoke an AWS Lambda function that inserts records into DynamoDB.

  3. Create an AWS Lambda function that will poll the S3 bucket and then insert the records into DynamoDB.

  4. Create a cron job that will run at a scheduled time and insert the records into DynamoDB.


Answer: B
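A sketch of the handler for option B, invoked by the S3 event notification (the table name is hypothetical):

import boto3
from urllib.parse import unquote_plus

table = boto3.resource("dynamodb").Table("FileIndex")

def handler(event, context):
    # One record per newly created object; object keys arrive URL-encoded.
    for record in event["Records"]:
        table.put_item(Item={
            "object_key": unquote_plus(record["s3"]["object"]["key"]),
            "bucket": record["s3"]["bucket"]["name"],
            "size": record["s3"]["object"]["size"],
        })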
Question: 342
A developer is deploying an AWS Lambda function. The developer wants the ability to return to older versions of the function quickly and seamlessly.


How can the developer achieve this goal with the LEAST operational overhead?


  1. Use AWS OpsWorks to perform blue/green deployments.

  2. Use a function alias with different versions.

  3. Maintain deployment packages for older versions in Amazon S3.

  4. Use AWS CodePipeline for deployments and rollbacks.


Answer: B
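With option B, callers invoke the function through an alias, and a rollback is a single pointer update to a previously published version. A sketch (the function name, alias, and version are hypothetical):

import boto3

lam = boto3.client("lambda")

lam.update_alias(
    FunctionName="orders-handler",
    Name="live",
    FunctionVersion="7",  # the known-good version to return to
)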
Question: 343
A development team maintains a web application by using a single AWS CloudFormation template. The template defines web servers and an Amazon RDS database. The team uses the CloudFormation template to deploy the CloudFormation stack to different environments.


During a recent application deployment, a developer caused the primary development database to be dropped and recreated. The result of this incident was a loss of data. The team needs to avoid accidental database deletion in the future.


Which solutions will meet these requirements? (Choose two.)


  1. Add a CloudFormation DeletionPolicy attribute with the Retain value to the database resource.

  2. Update the CloudFormation stack policy to prevent updates to the database.

  3. Modify the database to use a Multi-AZ deployment.

  4. Create a CloudFormation stack set for the web application and database deployments.

  5. Add a CloudFormation DeletionPolicy attribute with the Retain value to the stack.

Answer: A,B
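A sketch of the two controls: a DeletionPolicy on the database resource (option A) and a stack policy that denies destructive updates to it (option B). The resource and stack names are hypothetical, the template fragment is expressed as a Python dict for brevity, and the DB properties are elided:

import json
import boto3

# Template fragment: Retain keeps the database even if the resource is
# removed from the stack or the stack itself is deleted.
template_fragment = {
    "AppDatabase": {
        "Type": "AWS::RDS::DBInstance",
        "DeletionPolicy": "Retain",
        "Properties": {},  # engine, storage, credentials, etc. elided
    }
}

cfn = boto3.client("cloudformation")
cfn.set_stack_policy(
    StackName="web-app-dev",
    StackPolicyBody=json.dumps({
        "Statement": [
            {"Effect": "Allow", "Action": "Update:*",
             "Principal": "*", "Resource": "*"},
            {"Effect": "Deny",
             "Action": ["Update:Replace", "Update:Delete"],
             "Principal": "*", "Resource": "LogicalResourceId/AppDatabase"},
        ]
    }),
)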
Question: 344
A company hosts a client-side web application for one of its subsidiaries on Amazon S3. The web application can be accessed through Amazon CloudFront from https://www.example.com. After a successful rollout, the company wants to host three more client-side web applications for its remaining subsidiaries on three separate S3 buckets.


To achieve this goal, a developer moves all the common JavaScript files and web fonts to a central S3 bucket that serves the web applications. However, during testing, the developer notices that the browser blocks the JavaScript files and web fonts.


What should the developer do to prevent the browser from blocking the JavaScript files and web fonts?


  1. Create four access points that allow access to the central S3 bucket. Assign an access point to each web application bucket.

  2. Create a bucket policy that allows access to the central S3 bucket. Attach the bucket policy to the central S3 bucket.

  3. Create a cross-origin resource sharing (CORS) configuration that allows access to the central S3 bucket. Add the CORS configuration to the central S3 bucket.

  4. Create a Content-MD5 header that provides a message integrity check for the central S3 bucket. Insert the Content-MD5 header for each web application request.


Answer: C
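A sketch of option C: a CORS configuration on the central assets bucket that allows the subsidiary origins to fetch the scripts and fonts (the bucket name and origins are hypothetical):

import boto3

s3 = boto3.client("s3")

s3.put_bucket_cors(
    Bucket="central-web-assets",
    CORSConfiguration={
        "CORSRules": [{
            "AllowedOrigins": [
                "https://www.example.com",
                "https://app1.example.com",
                "https://app2.example.com",
                "https://app3.example.com",
            ],
            "AllowedMethods": ["GET"],
            "AllowedHeaders": ["*"],
            "MaxAgeSeconds": 3000,
        }]
    },
)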
Question: 345
A company wants to deploy and maintain static websites on AWS. Each website's source code is hosted in one of several version control systems, including AWS CodeCommit, Bitbucket, and GitHub.


The company wants to implement phased releases by using development, staging, user acceptance testing, and production environments in the AWS Cloud. Deployments to each environment must be started by code merges on the relevant Git branch. The company wants to use HTTPS for all data exchange. The company needs a solution that does not require servers to run continuously.


Which solution will meet these requirements with the LEAST operational overhead?


  1. Host each website by using AWS Amplify with a serverless backend. Connect the repository branches that correspond to each of the desired environments. Start deployments by merging code changes to a desired branch.

  2. Host each website in AWS Elastic Beanstalk with multiple environments. Use the EB CLI to link each repository branch. Integrate AWS CodePipeline to automate deployments from version control code merges.

  3. Host each website in different Amazon S3 buckets for each environment. Configure AWS CodePipeline to pull source code from version control. Add an AWS CodeBuild stage to copy source code to Amazon S3.

  4. Host each website on its own Amazon EC2 instance. Write a custom deployment script to bundle each website's static assets. Copy the assets to Amazon EC2. Set up a workflow to run the script when code is merged.


Answer: A

Question: 346
For a deployment using AWS CodeDeploy, what is the run order of the hooks for in-place deployments?


  1. BeforeInstall -> ApplicationStop -> ApplicationStart -> AfterInstall

  2. ApplicationStop -> BeforeInstall -> AfterInstall -> ApplicationStart

  3. BeforeInstall -> ApplicationStop -> ValidateService -> ApplicationStart

  4. ApplicationStop -> BeforeInstall -> ValidateService -> ApplicationStart


Answer: B

Question: 347
A company is implementing an application on Amazon EC2 instances. The application needs to process incoming transactions. When the application detects a transaction that is not valid, the application must send a chat message to the company's support team. To send the message, the application needs to retrieve the access token to authenticate by using the chat API.


A developer needs to implement a solution to store the access token. The access token must be encrypted at rest and in transit. The access token must also be accessible from other AWS accounts.


Which solution will meet these requirements with the LEAST management overhead?


  1. Use an AWS Systems Manager Parameter Store SecureString parameter that uses an AWS Key Management Service (AWS KMS) AWS managed key to store the access token. Add a resource-based policy to the parameter to allow access from other accounts. Update the IAM role of the EC2 instances with permissions to access Parameter Store. Retrieve the token from Parameter Store with the decrypt flag enabled. Use the decrypted access token to send the message to the chat.

  2. Encrypt the access token by using an AWS Key Management Service (AWS KMS) customer managed key. Store the access token in an Amazon DynamoDB table. Update the IAM role of the EC2 instances with permissions to access DynamoDB and AWS KMS. Retrieve the token from DynamoDB. Decrypt the token by using AWS KMS on the EC2 instances. Use the decrypted access token to send the message to the chat.

  3. Use AWS Secrets Manager with an AWS Key Management Service (AWS KMS) customer managed key to store the access token. Add a resource-based policy to the secret to allow access from other accounts. Update the IAM role of the EC2 instances with permissions to access Secrets Manager. Retrieve the token from Secrets Manager. Use the decrypted access token to send the message to the chat.

  4. Encrypt the access token by using an AWS Key Management Service (AWS KMS) AWS managed key. Store the access token in an Amazon S3 bucket. Add a bucket policy to the S3 bucket to allow access from other accounts. Update the IAM role of the EC2 instances with permissions to access Amazon S3 and AWS KMS. Retrieve the token from the S3 bucket. Decrypt the token by using AWS KMS on the EC2 instances. Use the decrypted access token to send the message to the chat.


Answer: C
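A sketch of the cross-account piece of option C: a resource-based policy on the secret (the secret name and account ID are hypothetical). Because the secret uses a customer managed KMS key, that key's policy must also grant the other account kms:Decrypt; AWS managed keys cannot be used from other accounts, which is why the Parameter Store and S3 options fail:

import json
import boto3

secrets = boto3.client("secretsmanager")

secrets.put_resource_policy(
    SecretId="chat/api-access-token",
    ResourcePolicy=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
            "Action": "secretsmanager:GetSecretValue",
            "Resource": "*",
        }],
    }),
)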
Question: 348
A company is building a scalable data management solution by using AWS services to improve the speed and agility of development. The solution will ingest large volumes of data from various sources and will process this data through multiple business rules and transformations.


The solution requires business rules to run in sequence and to handle reprocessing of data if errors occur when the business rules run. The company needs the solution to be scalable and to require the least possible maintenance.


Which AWS service should the company use to manage and automate the orchestration of the data flows to meet these requirements?

  1. AWS Batch

  2. AWS Step Functions

  3. AWS Glue

  4. AWS Lambda


Answer: B
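A sketch of why Step Functions fits: an Amazon States Language definition runs the rules in sequence, and a Retry block reprocesses a failed step without custom code (the function ARNs, role ARN, and names are hypothetical):

import json
import boto3

definition = {
    "StartAt": "RuleOne",
    "States": {
        "RuleOne": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:111122223333:function:rule-one",
            "Retry": [{"ErrorEquals": ["States.ALL"], "MaxAttempts": 3,
                       "IntervalSeconds": 5, "BackoffRate": 2.0}],
            "Next": "RuleTwo",
        },
        "RuleTwo": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:111122223333:function:rule-two",
            "End": True,
        },
    },
}

boto3.client("stepfunctions").create_state_machine(
    name="data-business-rules",
    definition=json.dumps(definition),
    roleArn="arn:aws:iam::111122223333:role/sfn-exec-role",
)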
Question: 349
A company is building a serverless application on AWS. The application uses an AWS Lambda function to process customer orders 24 hours a day, 7 days a week. The Lambda function calls an external vendor's HTTP API to process payments.


During load tests, a developer discovers that the external vendor payment processing API occasionally times out and returns errors. The company expects that some payment processing API calls will return errors.


The company wants the support team to receive notifications in near real time only when the external payment processing API error rate exceeds 5% of the total number of transactions in an hour. Developers need to use an existing Amazon Simple Notification Service (Amazon SNS) topic that is configured to notify the support team.


Which solution will meet these requirements?


  1. Write the results of payment processing API calls to Amazon CloudWatch. Use Amazon CloudWatch Logs Insights to query the CloudWatch logs. Schedule the Lambda function to check the CloudWatch logs and notify the existing SNS topic.

  2. Publish custom metrics to CloudWatch that record the failures of the external payment processing API calls. Configure a CloudWatch alarm to notify the existing SNS topic when the error rate exceeds the specified rate.

  3. Publish the results of the external payment processing API calls to a new Amazon SNS topic. Subscribe the support team members to the new SNS topic.

  4. Write the results of the external payment processing API calls to Amazon S3. Schedule an Amazon Athena query to run at regular intervals. Configure Athena to send notifications to the existing SNS topic when the error rate exceeds the specified rate.


Answer: B
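A sketch of option B: the function publishes one count per call outcome, and a metric-math alarm fires when errors exceed 5% of calls over an hour (the namespace, metric names, and topic ARN are hypothetical):

import boto3

cw = boto3.client("cloudwatch")

# Inside the Lambda function, after each failed payment API call:
cw.put_metric_data(
    Namespace="PaymentsApp",
    MetricData=[{"MetricName": "PaymentApiErrors", "Value": 1.0, "Unit": "Count"}],
)

# One-time alarm setup: rate = 100 * errors / total over a 1-hour period.
cw.put_metric_alarm(
    AlarmName="payment-api-error-rate",
    Metrics=[
        {"Id": "errors", "ReturnData": False, "MetricStat": {
            "Metric": {"Namespace": "PaymentsApp", "MetricName": "PaymentApiErrors"},
            "Period": 3600, "Stat": "Sum"}},
        {"Id": "total", "ReturnData": False, "MetricStat": {
            "Metric": {"Namespace": "PaymentsApp", "MetricName": "PaymentApiCalls"},
            "Period": 3600, "Stat": "Sum"}},
        {"Id": "rate", "Expression": "100 * errors / total", "ReturnData": True},
    ],
    ComparisonOperator="GreaterThanThreshold",
    Threshold=5.0,
    EvaluationPeriods=1,
    AlarmActions=["arn:aws:sns:us-east-1:111122223333:support-notifications"],
)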
Question: 350
A developer is creating an application that will be deployed on IoT devices. The application will send data to a RESTful API that is deployed as an AWS Lambda function. The application will assign each API request a unique identifier. The volume of API requests from the application can randomly increase at any given time of day.


During periods of request throttling, the application might need to retry requests. The API must be able to handle duplicate requests without inconsistencies or data loss.


Which solution will meet these requirements?


  1. Create an Amazon RDS for MySQL DB instance. Store the unique identifier for each request in a database table. Modify the Lambda function to check the table for the identifier before processing the request.

  2. Create an Amazon DynamoDB table. Store the unique identifier for each request in the table. Modify the Lambda function to check the table for the identifier before processing the request.

  3. Create an Amazon DynamoDB table. Store the unique identifier for each request in the table. Modify the Lambda function to return a client error response when the function receives a duplicate request.

  4. Create an Amazon ElastiCache for Memcached instance. Store the unique identifier for each request in the cache. Modify the Lambda function to check the cache for the identifier before processing the request.


Answer: B
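A sketch of option B using a conditional write, which makes retried requests harmless no-ops (the table name and event shape are hypothetical):

import boto3
from botocore.exceptions import ClientError

table = boto3.resource("dynamodb").Table("ProcessedRequests")

def handler(event, context):
    request_id = event["requestId"]
    try:
        # Succeeds only the first time this identifier is seen.
        table.put_item(
            Item={"request_id": request_id},
            ConditionExpression="attribute_not_exists(request_id)",
        )
    except ClientError as e:
        if e.response["Error"]["Code"] == "ConditionalCheckFailedException":
            return {"status": "duplicate ignored"}
        raise
    # ...process the request...
    return {"status": "processed"}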
Question: 351
A company is running a custom application on a set of on-premises Linux servers that are accessed using Amazon API Gateway. AWS X-Ray tracing has been enabled on the API test stage.


How can a developer enable X-Ray tracing on the on-premises servers with the LEAST amount of configuration?


  1. Install and run the X-Ray SDK on the on-premises servers to capture and relay the data to the X-Ray service.

  2. Install and run the X-Ray daemon on the on-premises servers to capture and relay the data to the X-Ray service.

  3. Capture incoming requests on-premises and configure an AWS Lambda function to pull, process, and relay relevant data to X-Ray using the PutTraceSegments API call.

  4. Capture incoming requests on-premises and configure an AWS Lambda function to pull, process, and relay relevant data to X-Ray using the PutTelemetryRecords API call.


Answer: B
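The daemon is a standalone binary that listens on UDP port 2000 and relays trace segments to the X-Ray service; the application's instrumentation only needs to point at it. A sketch, assuming the daemon is already installed and running on each server and the aws_xray_sdk package is available:

from aws_xray_sdk.core import xray_recorder

xray_recorder.configure(
    service="legacy-app",             # hypothetical service name
    daemon_address="127.0.0.1:2000",  # the daemon's default listener
)

@xray_recorder.capture("process_request")
def process_request(request):
    ...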
Question: 352
A developer has created an AWS Lambda function that is written in Python. The Lambda function reads data from objects in Amazon S3 and writes data to an Amazon DynamoDB table. The function is successfully invoked from an S3 event notification when an object is created. However, the function fails when it attempts to write to the DynamoDB table.


What is the MOST likely cause of this issue?


  1. The Lambda function's concurrency limit has been exceeded.

  2. The DynamoDB table requires a global secondary index (GSI) to support writes.

  3. The Lambda function does not have IAM permissions to write to DynamoDB.

  4. The DynamoDB table is not running in the same Availability Zone as the Lambda function.


Answer: C
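A sketch of the fix for answer C: grant the function's execution role write access to the one table (the role, policy, and table names, Region, and account ID are hypothetical):

import json
import boto3

iam = boto3.client("iam")

iam.put_role_policy(
    RoleName="metadata-function-role",
    PolicyName="dynamodb-write-metadata",
    PolicyDocument=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["dynamodb:PutItem", "dynamodb:UpdateItem"],
            "Resource": "arn:aws:dynamodb:us-east-1:111122223333:table/FileMetadata",
        }],
    }),
)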