Domain Overview
Domain 1 (SDLC Automation) accounts for 22% of the exam (approximately 16-17 questions out of 75). This domain focuses on automating the software development lifecycle using AWS services.
---
Official Exam Syllabus for Domain 1
Task Statement 1.1: Implement CI/CD pipelines
Design and implement CI/CD pipelines using AWS services
Integrate automated testing into pipelines
Manage artifacts and dependencies
Implement deployment strategies
Task Statement 1.2: Integrate automated testing
Unit testing, integration testing, and end-to-end testing
Continuous Delivery vs. Continuous Deployment
Continuous Delivery: Automated deployment to staging, manual approval for production
Continuous Deployment: Fully automated deployment to production
Deployment Strategies

| Strategy | Description | Use Case |
| --- | --- | --- |
| In-Place | Update existing instances | Simple apps, downtime acceptable |
| Rolling | Update in batches | Zero downtime, gradual rollout |
| Blue/Green | Two identical environments | Zero downtime, instant rollback |
| Canary | Small percentage first | Risk mitigation, testing in production |
| Linear | Gradual traffic shift | Controlled rollout |
| All-at-Once | Update everything simultaneously | Fast deployment, testing environments |
Testing in CI/CD
Unit Tests: Test individual components
Integration Tests: Test component interactions
End-to-End Tests: Test complete workflows
Security Tests: SAST, DAST, dependency scanning
Performance Tests: Load testing, stress testing
Artifact Management
Versioning strategies (semantic versioning)
Immutable artifacts
Artifact promotion between environments
Dependency caching
Security in SDLC
Secrets management (never in code)
IAM roles for service-to-service communication
Encryption at rest and in transit
Compliance validation in pipelines
Vulnerability scanning
---
📝 Practice Questions
Question 1
A company uses AWS CodePipeline to deploy applications. The security team requires that all deployments to production must be approved by a senior engineer before proceeding. The approval must include comments about what is being deployed.
Which solution meets these requirements?
A. Add a manual approval action in the pipeline stage before the production deployment action
B. Configure an AWS Lambda function to send emails requesting approval
C. Use Amazon SNS to notify the senior engineer and wait for a response
D. Configure IAM policies to require MFA for production deployments
Answer: A
Explanation:
Manual approval actions in CodePipeline allow you to pause pipeline execution and require human approval before proceeding. Approvers can add comments when approving or rejecting. This is the native and correct way to implement deployment approvals.
Why others are wrong:
B: Lambda can send notifications but cannot pause the pipeline and wait for approval
C: SNS is for notifications, not for controlling pipeline flow
D: IAM MFA is for authentication, not deployment approval workflows
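The approval flow described above can be sketched as a CloudFormation pipeline fragment. All names and the SNS topic ARN are illustrative, not from the source:

```yaml
# Illustrative stage: a manual approval action gates the production
# deploy action in the same stage (RunOrder 1 before RunOrder 2).
- Name: Production
  Actions:
    - Name: ApproveRelease
      ActionTypeId:
        Category: Approval
        Owner: AWS
        Provider: Manual
        Version: '1'
      Configuration:
        NotificationArn: arn:aws:sns:us-east-1:111122223333:release-approvals  # example ARN
        CustomData: Describe what is being deployed when approving
      RunOrder: 1
    - Name: DeployToProd
      ActionTypeId:
        Category: Deploy
        Owner: AWS
        Provider: CodeDeploy
        Version: '1'
      Configuration:
        ApplicationName: my-app          # example name
        DeploymentGroupName: prod-group  # example name
      RunOrder: 2
```

The approver's comments are recorded with the approval result in the pipeline's execution history.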
Question 2
A DevOps engineer is designing a CI/CD pipeline using AWS CodePipeline. The pipeline must deploy the same application to three AWS Regions simultaneously to reduce deployment time.
Which configuration achieves this requirement?
A. Create three separate pipelines, one for each Region
B. Configure a single pipeline with sequential deployment actions for each Region
C. Configure a single pipeline with parallel deployment actions in the same stage for each Region
D. Use AWS CodeDeploy with a deployment group spanning multiple Regions
Answer: C
Explanation:
CodePipeline supports parallel actions within a single stage: actions that share the same runOrder value execute in parallel. Assigning each Region's deployment action the same runOrder deploys to all three Regions simultaneously.
Why others are wrong:
A: Three separate pipelines triple the configuration to maintain and do not guarantee simultaneous execution
B: Sequential actions deploy one after another, not simultaneously
D: CodeDeploy deployment groups cannot span multiple Regions
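A minimal sketch of such a stage (two of the three Regions shown; stack names, bucket names, and role ARNs are placeholders):

```yaml
# Illustrative stage: both deploy actions share RunOrder: 1,
# so CodePipeline runs them in parallel.
- Name: DeployAllRegions
  Actions:
    - Name: DeployUsEast1
      Region: us-east-1
      RunOrder: 1            # same RunOrder => parallel execution
      ActionTypeId:
        Category: Deploy
        Owner: AWS
        Provider: CloudFormation
        Version: '1'
      Configuration:
        ActionMode: CREATE_UPDATE
        StackName: my-app                                    # example
        TemplatePath: BuildOutput::template.yaml
        RoleArn: arn:aws:iam::111122223333:role/cfn-deploy   # example
      InputArtifacts:
        - Name: BuildOutput
    - Name: DeployEuWest1
      Region: eu-west-1
      RunOrder: 1
      ActionTypeId:
        Category: Deploy
        Owner: AWS
        Provider: CloudFormation
        Version: '1'
      Configuration:
        ActionMode: CREATE_UPDATE
        StackName: my-app
        TemplatePath: BuildOutput::template.yaml
        RoleArn: arn:aws:iam::111122223333:role/cfn-deploy
      InputArtifacts:
        - Name: BuildOutput
```

Cross-region actions like these also require a regional artifact store in each target Region (see Question 15).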
Question 3
A company has a CodePipeline that deploys a web application. The pipeline has been failing intermittently at the deploy stage with the error "Deployment failed: The overall deployment failed because too many individual instances failed deployment."
Which actions should the DevOps engineer take to troubleshoot this issue? (Choose TWO)
A. Review the CodeDeploy deployment logs in CloudWatch Logs
B. Check the CodeDeploy agent logs on the EC2 instances
C. Review the CodePipeline execution history in AWS CloudTrail
D. Examine the VPC flow logs for the EC2 instances
E. Check the CodeBuild build logs for compilation errors
Answer: A, B
Explanation:
When CodeDeploy deployments fail on instances, you need to check both the CodeDeploy service logs (in CloudWatch Logs if configured) and the CodeDeploy agent logs directly on the EC2 instances (/var/log/aws/codedeploy-agent/). These logs contain detailed information about what failed during deployment.
Why others are wrong:
C: CloudTrail tracks API calls, not deployment failures
D: VPC flow logs show network traffic, not application deployment issues
E: CodeBuild is for building, not deploying; the error indicates a deploy stage failure
Question 4
A company wants to implement a CI/CD pipeline where code changes are automatically deployed to a staging environment, but production deployments require manual approval. After production approval, the deployment should automatically proceed if staging tests passed.
Which pipeline design meets these requirements?
A. Create two separate pipelines with EventBridge triggering the production pipeline after staging
B. Create a single pipeline with staging deploy, test, manual approval, and production deploy stages in sequence
C. Create a single pipeline with parallel stages for staging and production
D. Create a single pipeline with a Lambda function that checks staging test results before production deployment
Answer: B
Explanation:
A single pipeline with sequential stages (Source → Build → Deploy to Staging → Test → Manual Approval → Deploy to Production) meets all requirements. The manual approval action pauses the pipeline, and since the test stage already passed, production deployment proceeds automatically after approval.
Why others are wrong:
A: Two pipelines add complexity and may lose artifact consistency
C: Parallel stages would deploy to production without waiting for staging tests
D: Lambda cannot replace manual approval and adds unnecessary complexity
Question 5
A DevOps engineer needs to configure a CodePipeline to deploy to a different AWS account. The target account has an IAM role that the pipeline should assume.
Which configuration is required in the pipeline? (Choose TWO)
A. Configure the pipeline's service role with sts:AssumeRole permission for the target account role
B. Add the target account's credentials to the pipeline's environment variables
C. Configure the deployment action with the roleArn parameter pointing to the target account role
D. Create an IAM user in the target account and store credentials in Secrets Manager
E. Configure VPC peering between the two accounts
Answer: A, C
Explanation:
For cross-account deployments, the pipeline's service role needs sts:AssumeRole permission, and the deployment action must specify the roleArn of the role to assume in the target account. The target account role must have a trust policy allowing the source account to assume it.
Why others are wrong:
B: Storing credentials is not secure and not the recommended approach
D: Using IAM users with long-term credentials is not secure
E: VPC peering is for network connectivity, not IAM cross-account access
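The two required pieces can be sketched as follows; account IDs and role names are examples:

```yaml
# 1) Illustrative deploy action in the pipeline: RoleArn points at the
#    role in the target account that the action will assume.
- Name: DeployToTargetAccount
  ActionTypeId:
    Category: Deploy
    Owner: AWS
    Provider: CloudFormation
    Version: '1'
  RoleArn: arn:aws:iam::222233334444:role/CrossAccountDeployRole  # target-account role (example)
  Configuration:
    ActionMode: CREATE_UPDATE
    StackName: my-app
    TemplatePath: BuildOutput::template.yaml
  InputArtifacts:
    - Name: BuildOutput

# 2) Illustrative statement on the pipeline's service role, allowing it
#    to assume that target-account role.
# - Effect: Allow
#   Action: sts:AssumeRole
#   Resource: arn:aws:iam::222233334444:role/CrossAccountDeployRole
```

The target-account role's trust policy must also name the source account as a trusted principal, as the explanation notes.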
Question 6
A company uses CodePipeline with CodeBuild for CI/CD. Builds are taking too long because dependencies are downloaded every time. The DevOps engineer wants to reduce build times.
What is the MOST effective solution?
A. Increase the CodeBuild compute type
B. Configure S3 caching in the buildspec.yml file
C. Use a larger EC2 instance for CodeBuild
D. Pre-install dependencies in a custom AMI
Answer: B
Explanation:
CodeBuild supports S3 caching, which allows you to cache dependencies and other files between builds. By configuring the cache section in buildspec.yml, subsequent builds can reuse cached dependencies instead of downloading them again, significantly reducing build time.
Why others are wrong:
A: Larger compute type speeds up processing but doesn't reduce download time
C: CodeBuild doesn't use EC2 instances you manage; it uses managed build environments
D: CodeBuild uses Docker images, not AMIs
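A minimal buildspec sketch of the caching approach (npm is assumed as the package manager; S3 caching must also be enabled in the CodeBuild project settings with a cache bucket):

```yaml
version: 0.2
phases:
  install:
    commands:
      - npm ci        # resolves much faster when node_modules is restored from cache
cache:
  paths:
    - 'node_modules/**/*'   # cached to S3 between builds
```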
Question 7
A company's CodePipeline is triggered by changes to a CodeCommit repository. The team wants to prevent the pipeline from running when only documentation files (*.md files) are changed.
Which solution meets this requirement?
A. Configure a CodeCommit trigger with file path filters
B. Use EventBridge with an event pattern that filters by file type
C. Add a Lambda function at the start of the pipeline to check changed files and stop the pipeline if needed
D. Configure the source action in CodePipeline with file path conditions
Answer: C
Explanation:
Currently, CodePipeline and CodeCommit triggers don't support file-path filtering. The solution is to add a Lambda function as the first action in the pipeline that checks the commit contents using the CodeCommit API and stops the pipeline execution if only documentation files changed.
Why others are wrong:
A: CodeCommit triggers don't support file path filters
B: EventBridge events from CodeCommit don't include file-level details
D: CodePipeline source actions don't support file path conditions
Question 8
A company is using AWS CodePipeline to deploy a containerized application to Amazon ECS. The deployment must use the blue/green deployment strategy with AWS CodeDeploy.
Which appspec.yml structure is correct for this deployment?
A.
```yaml
version: 0.0
os: linux
files:
  - source: /
    destination: /var/www/html
```
D.
```yaml
version: 1.0
phases:
  install:
    runtime-versions:
      docker: 18
```
Answer: B
Explanation:
For ECS deployments with CodeDeploy, the appspec.yml must specify the ECS service as the target with TaskDefinition and LoadBalancerInfo properties. The structure includes Resources with TargetService of Type AWS::ECS::Service.
Why others are wrong:
A: This is the format for EC2/on-premises deployments, not ECS
C: This is the format for Lambda deployments
D: This is a buildspec.yml structure for CodeBuild, not appspec.yml for CodeDeploy
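As a sketch, the ECS appspec structure the explanation describes looks like this (container name and port are examples; `<TASK_DEFINITION>` is the literal placeholder that CodePipeline substitutes at deploy time):

```yaml
version: 0.0
Resources:
  - TargetService:
      Type: AWS::ECS::Service
      Properties:
        TaskDefinition: <TASK_DEFINITION>   # replaced with the task definition ARN during deployment
        LoadBalancerInfo:
          ContainerName: web                # example container name from the task definition
          ContainerPort: 80
```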
Question 9
A DevOps engineer needs to implement a deployment pipeline that deploys to multiple AWS accounts (development, staging, production). Each account has different VPC configurations and the application needs to access VPC resources during deployment.
Which approach should be used?
A. Create separate pipelines in each account
B. Use a single pipeline in the management account with cross-account roles and account-specific deployment configurations
C. Use AWS Organizations to deploy to all accounts simultaneously
D. Create a shared VPC across all accounts for deployments
Answer: B
Explanation:
A single pipeline in a central account with cross-account IAM roles is the best approach. Each account should have a role that the pipeline can assume, and deployment configurations (like environment variables, VPC settings) can be specified per environment in the pipeline actions.
Why others are wrong:
A: Multiple pipelines create management overhead and inconsistent deployments
D: Shared VPCs add complexity and may not meet security requirements
Question 10
A company's CodePipeline occasionally fails because the CodeBuild project times out. The build typically takes 45 minutes but sometimes exceeds the default timeout.
Which solution should the DevOps engineer implement?
A. Increase the CodeBuild project's timeout setting
B. Split the build into multiple CodeBuild projects
C. Use a larger compute type in CodeBuild
D. Configure CodeBuild to run on a dedicated EC2 instance
Answer: A
Explanation:
CodeBuild has a configurable timeout setting (default is 60 minutes, maximum is 8 hours/480 minutes). If builds occasionally exceed the default timeout, increasing the timeout value is the simplest solution.
Why others are wrong:
B: Splitting builds adds complexity and may not be necessary
C: Larger compute might help but doesn't address the timeout setting
D: CodeBuild doesn't run on dedicated EC2 instances you manage
Question 11
A company wants to trigger a CodePipeline execution only when a specific branch in CodeCommit is updated. The pipeline should not run for changes to other branches.
Which configuration achieves this?
A. Configure the CodePipeline source action with a BranchName parameter
B. Use EventBridge to filter events by branch name before triggering the pipeline
C. Configure a CodeCommit trigger that filters by branch
D. Add a Lambda function to check the branch before proceeding
Answer: A
Explanation:
When configuring a CodeCommit source action in CodePipeline, you specify the BranchName parameter. The pipeline will only trigger when changes are pushed to that specific branch.
Why others are wrong:
B: EventBridge can filter by branch, but the simplest solution is the native source action configuration
C: CodeCommit repository triggers notify SNS topics or invoke Lambda functions; they do not control which branch starts the pipeline
D: Adding Lambda adds unnecessary complexity
Question 12
A DevOps engineer is troubleshooting a CodePipeline that has stopped executing. The pipeline shows "Stopped" status, and no recent executions are visible.
What is the MOST likely cause?
A. The IAM service role for the pipeline was deleted or modified
B. The CodeCommit repository was deleted
C. The pipeline was manually disabled
D. The S3 artifact bucket was deleted
Answer: C
Explanation:
Pipelines can be manually disabled (stopped) using the console or CLI. When disabled, the pipeline won't execute on source changes and shows "Disabled" or "Stopped" status. This is the most likely cause if no error messages are present.
Why others are wrong:
A: IAM role issues would show permission errors, not a stopped status
B: A deleted repository would cause failures, not a stopped status
D: A deleted S3 bucket would cause failures during execution
Question 13
A company uses CodePipeline to deploy to Amazon ECS. They want to implement automated rollback if the new deployment causes HTTP 500 errors.
Which solution should be implemented?
A. Configure a CloudWatch alarm on the ALB 5XX error metric and associate it with the CodeDeploy deployment group
B. Add a Lambda function in the pipeline to monitor errors after deployment
C. Configure ECS service auto-scaling to handle errors
D. Use X-Ray to detect errors and trigger rollback
Answer: A
Explanation:
CodeDeploy supports automatic rollback based on CloudWatch alarms. By creating an alarm that triggers when HTTP 500 errors exceed a threshold and associating it with the deployment group, CodeDeploy will automatically roll back to the previous version if the alarm enters ALARM state during deployment.
Why others are wrong:
B: Lambda can monitor but can't trigger native CodeDeploy rollback
C: Auto-scaling handles load, not deployment failures
D: X-Ray is for tracing, not deployment rollback
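The deployment-group side of this can be sketched as a CloudFormation fragment (the alarm name is an example; the alarm itself would monitor the ALB's 5XX metric):

```yaml
# Illustrative AWS::CodeDeploy::DeploymentGroup properties:
AlarmConfiguration:
  Enabled: true
  Alarms:
    - Name: alb-5xx-alarm        # example CloudWatch alarm on HTTP 5XX errors
AutoRollbackConfiguration:
  Enabled: true
  Events:
    - DEPLOYMENT_STOP_ON_ALARM   # roll back when the alarm fires mid-deployment
    - DEPLOYMENT_FAILURE
```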
Question 14
A company requires that all CodePipeline artifacts be encrypted using a customer-managed KMS key. How should this be configured?
A. Configure each CodeBuild project to encrypt artifacts with the KMS key
B. Configure the pipeline's artifact store to use the customer-managed KMS key
C. Configure S3 default encryption on the artifact bucket
D. Configure KMS key policies to automatically encrypt CodePipeline artifacts
Answer: B
Explanation:
When creating or updating a CodePipeline, you can specify a customer-managed KMS key for the artifact store. This key will be used to encrypt all artifacts stored in S3 as they pass between pipeline stages.
Why others are wrong:
A: CodeBuild encryption is separate from pipeline artifact encryption
C: S3 default encryption may not use the specific customer-managed key required
D: KMS key policies control access, not automatic encryption of specific resources
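A sketch of the artifact-store configuration in a CloudFormation pipeline definition (bucket name and key ARN are placeholders):

```yaml
# Illustrative AWS::CodePipeline::Pipeline property:
ArtifactStore:
  Type: S3
  Location: my-pipeline-artifacts   # example artifact bucket
  EncryptionKey:
    Id: arn:aws:kms:us-east-1:111122223333:key/EXAMPLE-KEY-ID  # customer-managed key (example)
    Type: KMS
```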
Question 15
A DevOps engineer needs to pass build artifacts from a CodeBuild project in one Region to a deployment action in another Region within the same pipeline.
Which configuration is required?
A. Configure cross-region replication on the artifact S3 bucket
B. Add a Lambda function to copy artifacts between Regions
C. Configure the pipeline with artifact stores in both Regions
D. Use CodePipeline's built-in cross-region artifact handling
Answer: C
Explanation:
For cross-region actions in CodePipeline, you must configure artifact stores (S3 buckets) in each Region where actions will run. CodePipeline automatically copies artifacts to the appropriate regional bucket when needed.
Why others are wrong:
A: Manual S3 replication is not integrated with pipeline execution
B: Lambda adds unnecessary complexity
D: While CodePipeline handles cross-region artifacts, you must still configure the regional artifact stores
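For cross-region pipelines, the single `ArtifactStore` property is replaced by an `ArtifactStores` map with one entry per Region; a sketch with example bucket names:

```yaml
# Illustrative AWS::CodePipeline::Pipeline property:
ArtifactStores:
  - Region: us-east-1
    ArtifactStore:
      Type: S3
      Location: my-artifacts-us-east-1   # bucket must exist in this Region
  - Region: eu-west-1
    ArtifactStore:
      Type: S3
      Location: my-artifacts-eu-west-1
```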
Question 16
A company wants to implement feature flags to control which features are deployed to production. They want to gradually roll out new features to users.
Which AWS service should be used?
A. AWS AppConfig
B. AWS Systems Manager Parameter Store
C. Amazon DynamoDB
D. AWS Lambda environment variables
Answer: A
Explanation:
AWS AppConfig is specifically designed for feature flags and application configuration. It supports gradual rollouts with deployment strategies (linear, canary), validation, and automatic rollback. It integrates with Lambda, ECS, and EC2.
Why others are wrong:
B: Parameter Store can store flags but lacks gradual rollout capabilities
C: DynamoDB can store flags but requires custom implementation for rollouts
D: Lambda environment variables require redeployment to change
Question 17
A company uses CodePipeline with GitHub as the source. The pipeline must trigger only when changes are made to a specific folder in the repository.
Which solution enables this requirement?
A. Configure the GitHub webhook with path filters
B. Use CodePipeline's source action file path filters
C. Add a Lambda function to check changed files after the source action
D. Configure GitHub Actions to filter before triggering CodePipeline
Answer: C
Explanation:
CodePipeline's GitHub source action doesn't support file path filters. The solution is to add a Lambda function as the first action after the source stage to check which files changed using the GitHub API or artifact contents, and stop the pipeline if the target folder wasn't modified.
Why others are wrong:
A: GitHub webhooks can have path filters, but CodePipeline doesn't interpret them
B: CodePipeline source actions don't support file path filters
D: This adds external dependency and complexity
Question 18
A DevOps engineer is configuring a CodePipeline that uses CodeBuild. The build needs to access a private npm registry that requires authentication.
What is the MOST secure way to provide the registry credentials to CodeBuild?
A. Store credentials in the buildspec.yml file
B. Store credentials in CodeBuild environment variables as plaintext
C. Store credentials in AWS Secrets Manager and reference them in the buildspec.yml
D. Store credentials in a .npmrc file in the source repository
Answer: C
Explanation:
AWS Secrets Manager provides secure storage for credentials with encryption, access control, and rotation capabilities. CodeBuild can retrieve secrets at build time using the secrets-manager reference type in environment variables or the buildspec.yml.
Why others are wrong:
A: Credentials in buildspec.yml are stored in source control (insecure)
B: Plaintext environment variables are visible in the console
D: Credentials in source repository are visible to anyone with repo access
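A buildspec sketch of the secrets-manager reference (the secret name, JSON key, and registry URL are examples):

```yaml
version: 0.2
env:
  secrets-manager:
    NPM_TOKEN: npm/registry:token   # secret-id:json-key (example); value is masked in logs
phases:
  pre_build:
    commands:
      - echo "//registry.example.com/:_authToken=${NPM_TOKEN}" > ~/.npmrc
  build:
    commands:
      - npm ci
```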
Question 19
A company has a CodePipeline that deploys to EC2 instances using CodeDeploy. The team notices that deployments succeed but the application doesn't start correctly. The ValidateService lifecycle hook is not catching the issue.
What should the DevOps engineer do to improve deployment validation?
A. Add health checks in the ValidateService script that verify application functionality
B. Increase the deployment timeout
C. Use a different deployment configuration
D. Add more instances to the deployment group
Answer: A
Explanation:
The ValidateService lifecycle hook runs scripts after deployment to validate the application. If it's not catching issues, the validation scripts need to be enhanced with proper health checks (HTTP requests, process verification, etc.) that accurately verify application functionality.
Why others are wrong:
B: Timeout doesn't affect validation logic
C: Deployment configuration affects rollout speed, not validation
D: More instances don't improve validation
Question 20
A company needs to implement a deployment pipeline for a serverless application built with AWS SAM. The pipeline must include testing and staged deployments.
Which pipeline configuration is recommended?
A. CodeCommit → CodeBuild (sam build) → CodeDeploy (deploy to Lambda)
B. CodeCommit → CodeBuild (sam build, sam package) → CloudFormation (deploy)
C. CodeCommit → CodeBuild (sam build) → Lambda (direct update)
D. CodeCommit → CloudFormation (deploy SAM template directly)
Answer: B
Explanation:
SAM applications should be built using sam build, packaged using sam package (which uploads artifacts to S3), and then deployed using CloudFormation (SAM templates are CloudFormation transforms). CodePipeline supports CloudFormation as a deploy action provider.
Why others are wrong:
A: SAM applications are deployed via CloudFormation, not CodeDeploy directly
C: Direct Lambda updates bypass SAM's CloudFormation-based deployment
D: SAM templates need to be built and packaged before deployment
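The build stage of that pipeline can be sketched as a buildspec; `ARTIFACT_BUCKET` is an assumed environment variable holding the packaging bucket name:

```yaml
version: 0.2
phases:
  build:
    commands:
      - sam build
      # Uploads code artifacts to S3 and rewrites local paths in the template:
      - sam package --s3-bucket "$ARTIFACT_BUCKET" --output-template-file packaged.yaml
artifacts:
  files:
    - packaged.yaml   # consumed by the CloudFormation deploy action
```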
Question 21
A DevOps engineer needs to implement a pipeline that deploys the same artifact to multiple environments (dev, staging, prod) with different configurations for each environment.
Which approach should be used?
A. Build separate artifacts for each environment
B. Use parameter overrides in the CloudFormation deploy action for each environment
C. Create separate pipelines for each environment
D. Use different source branches for each environment
Answer: B
Explanation:
Using the same artifact with parameter overrides ensures consistency across environments. CloudFormation deploy actions in CodePipeline support parameter overrides, allowing environment-specific values (like database endpoints, scaling settings) without rebuilding.
Why others are wrong:
A: Building separate artifacts can introduce inconsistencies
C: Separate pipelines reduce consistency and increase management overhead
D: Different branches mean different code, not just different configurations
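A sketch of the staging stage's deploy action configuration; the production stage would reuse the same input artifact with different overrides (stack names and parameter values are examples):

```yaml
# Illustrative CloudFormation deploy action configuration:
Configuration:
  ActionMode: CREATE_UPDATE
  StackName: my-app-staging
  TemplatePath: BuildOutput::template.yaml
  # ParameterOverrides is a JSON string of environment-specific values:
  ParameterOverrides: '{"Environment": "staging", "DesiredCount": "2"}'
```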
Question 22
A company's CodePipeline occasionally fails with the error "ActionExecution timed out." The deploy action typically takes 30 minutes.
What should the DevOps engineer check first?
A. The deployment timeout in the appspec.yml
B. The action timeout configuration in the pipeline
C. The Lambda function timeout if using Lambda for deployment
D. The ECS task timeout settings
Answer: B
Explanation:
CodePipeline actions have configurable timeouts. If an action consistently takes longer than the configured timeout, the action times out and fails. The engineer should check if the action timeout is set appropriately for the deployment duration.
Why others are wrong:
A: appspec.yml doesn't control pipeline action timeout
C: Only relevant if Lambda is the action provider
D: ECS task timeout is not related to pipeline action timeout
Question 23
A company uses CodePipeline with a manual approval action. They want to notify approvers via Slack when approval is needed.
Which solution should be implemented?
A. Configure the manual approval action to send to an SNS topic, then use Lambda to send to Slack
B. Configure CodePipeline to send directly to Slack
C. Use EventBridge to detect approval actions and invoke Lambda
D. Configure AWS Chatbot to monitor the pipeline
Answer: A
Explanation:
Manual approval actions can be configured with an SNS topic for notifications. A Lambda function subscribed to that topic can format and send the message to Slack via the Slack webhook API.
Why others are wrong:
B: CodePipeline cannot send directly to Slack
C: EventBridge can detect pipeline events, but using the native SNS integration is more direct
D: AWS Chatbot can send to Slack but requires SNS as an intermediary anyway
Question 24
A DevOps team wants to prevent unauthorized changes to their CodePipeline configuration. Only specific IAM roles should be able to modify the pipeline.
Which approach should be used?
A. Use resource-based policies on the pipeline
B. Configure IAM policies with conditions restricting codepipeline:UpdatePipeline
C. Enable AWS Organizations SCPs
D. Use AWS Config rules to detect changes
Answer: B
Explanation:
IAM policies should be configured to restrict codepipeline:UpdatePipeline, codepipeline:DeletePipeline, and related actions to specific roles. This is the direct way to control who can modify pipeline configurations.
Why others are wrong:
A: CodePipeline doesn't support resource-based policies
C: SCPs are for organizational control, more appropriate at account level
D: Config rules detect changes but don't prevent them
Question 25
A company has multiple development teams, each needing their own CI/CD pipeline. All pipelines should follow the same structure and security standards.
Which approach ensures consistency while allowing team customization?
A. Create a shared CodePipeline that all teams use
B. Use CloudFormation templates to create standardized pipelines for each team
C. Let each team create their own pipeline with documentation
D. Use AWS Service Catalog to provide pipeline templates
Answer: D
Explanation:
AWS Service Catalog allows you to create standardized product templates (CloudFormation) that teams can deploy. This ensures all pipelines follow security and structural standards while allowing teams to deploy their own instances with permitted customizations.
Why others are wrong:
A: A shared pipeline doesn't allow team-specific customization
B: CloudFormation templates help but don't enforce usage
C: Documentation doesn't enforce standards
Question 26
A CodeBuild project needs to access resources in a private VPC, including a private npm registry and a database for integration tests.
Which configuration is required?
A. Configure VPC settings in the CodeBuild project with appropriate subnets and security groups
B. Create a VPN connection between CodeBuild and the VPC
C. Use AWS PrivateLink to connect CodeBuild to the VPC
D. Deploy a NAT gateway and configure CodeBuild to use it
Answer: A
Explanation:
CodeBuild supports VPC configuration where you specify the VPC, subnets, and security groups. CodeBuild runs the build in the specified VPC, allowing access to private resources. The subnets should be private subnets with NAT gateway access for internet connectivity.
Why others are wrong:
B: VPN is for connecting external networks, not CodeBuild
C: PrivateLink is for accessing AWS services privately
D: NAT gateway is needed for the VPC, but the main configuration is VPC settings in CodeBuild
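A sketch of the project-level setting (all IDs are placeholders):

```yaml
# Illustrative AWS::CodeBuild::Project property:
VpcConfig:
  VpcId: vpc-0abc123example
  Subnets:
    - subnet-0private1example   # private subnets routing through a NAT gateway
    - subnet-0private2example
  SecurityGroupIds:
    - sg-0buildexample          # must allow egress to the registry and database
```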
Question 27
A DevOps engineer needs to run multiple CodeBuild builds in parallel as part of a single build project. Each build should test a different component of the application.
Which CodeBuild feature should be used?
A. Multiple CodeBuild projects
B. Batch builds with build matrix
C. Concurrent build limits
D. Multiple buildspec files
Answer: B
Explanation:
CodeBuild batch builds allow you to run multiple builds in parallel using a build matrix. You define variables, and CodeBuild creates a build for each combination, running them in parallel. This is ideal for testing multiple components or configurations simultaneously.
Why others are wrong:
A: Multiple projects add management overhead
C: Concurrent build limits control how many builds can run, not matrix builds
D: Multiple buildspec files don't run in parallel by default
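A batch buildspec sketch: the matrix variable fans out into one parallel build per component (the component names and test script are hypothetical):

```yaml
version: 0.2
batch:
  build-matrix:
    dynamic:
      env:
        variables:
          COMPONENT:     # one parallel build per value
            - api
            - web
            - worker
phases:
  build:
    commands:
      - ./scripts/test.sh "$COMPONENT"   # example per-component test entry point
```

Batch builds must also be enabled on the CodeBuild project, and the pipeline action (if any) must be configured to start a batch build.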
Question 28
A CodeBuild project uses environment variables that contain sensitive database credentials. The DevOps engineer wants to ensure these credentials are not visible in build logs.
What should be configured?
A. Encrypt the environment variables with KMS
B. Use Parameter Store SecureString for the credentials and reference them in buildspec.yml
C. Configure the environment variables as "secrets-manager" type
D. Both B and C are valid solutions
Answer: D
Explanation:
Both Parameter Store SecureString and Secrets Manager can be used to securely provide credentials to CodeBuild. When configured as type "parameter-store" or "secrets-manager" in the environment variables, values are retrieved securely and masked in logs.
Why others are wrong:
A: KMS encryption alone doesn't mask values in logs
B: Correct, but not the only option
C: Correct, but not the only option
Question 29
A CodeBuild project builds Docker images and pushes them to ECR. The builds are failing with "no space left on device" errors.
Which solution should the DevOps engineer implement?
A. Use a larger compute type
B. Enable privileged mode
C. Configure local caching with Docker layer cache mode
D. Clear Docker cache in the pre_build phase
Answer: D
Explanation:
Docker images and layers accumulate during builds, consuming disk space. Clearing the Docker cache (docker system prune) in the pre_build phase frees up space. Additionally, using a larger compute type may help as it provides more disk space.
Why others are wrong:
A: Larger compute helps but doesn't address the root cause
B: Privileged mode is for running Docker, not disk space
C: Caching can actually increase disk usage
Question 30
A company wants to display build status badges in their repository README for their CodeBuild projects.
How should this be configured?
A. Enable build badges in the CodeBuild project and use the provided badge URL
B. Create a Lambda function to generate badges based on build status
C. Use a third-party service to monitor builds and generate badges
D. Configure EventBridge to update badges after each build
Answer: A
Explanation:
CodeBuild has built-in support for build badges. When enabled, CodeBuild provides a publicly accessible URL that returns an SVG badge showing the current build status (passing, failing, unknown). This URL can be embedded in README files.
Why others are wrong:
B: Lambda adds unnecessary complexity
C: Third-party services aren't needed
D: EventBridge can trigger actions but isn't needed for badges
Question 31
A DevOps engineer is troubleshooting a CodeBuild project that fails during the BUILD phase. The logs show that a required environment variable is not set.
Where should the engineer check for environment variable configuration? (Choose THREE)
A. The CodeBuild project configuration
B. The buildspec.yml env section
C. The CodePipeline action configuration
D. The EC2 instance metadata
E. The CloudFormation template outputs
Answer: A, B, C
Explanation:
Environment variables in CodeBuild can be defined in three places: the CodeBuild project configuration (console/CLI), the buildspec.yml file's env section, and the CodePipeline action configuration when CodeBuild is invoked by CodePipeline. All three should be checked.
Why others are wrong:
D: CodeBuild doesn't use EC2 instance metadata for environment variables
E: CloudFormation outputs don't directly set CodeBuild environment variables
Question 32
A company's CodeBuild project needs to run tests that require a PostgreSQL database. The tests should run in an isolated environment.
Which approach is recommended?
A. Configure CodeBuild VPC to access an RDS PostgreSQL instance
B. Use a PostgreSQL Docker container in the build using Docker Compose
C. Install PostgreSQL in the install phase of the build
D. Use DynamoDB as an alternative to PostgreSQL
Answer: B
Explanation:
Running a PostgreSQL container using Docker Compose during the build provides an isolated database for testing. This is a common pattern for integration testing in CI/CD. Enable privileged mode in CodeBuild to run Docker.
Why others are wrong:
A: RDS adds cost and shared state issues
C: Installing PostgreSQL adds time and complexity
D: DynamoDB is not a replacement for PostgreSQL
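Assuming privileged mode is enabled on the project, one common sketch starts the database with Docker Compose before the tests run (the compose file name and test command are hypothetical):

```yaml
# buildspec.yml sketch — requires privileged mode on the CodeBuild project
version: 0.2
phases:
  pre_build:
    commands:
      # docker-compose.test.yml is a hypothetical file defining a postgres service
      - docker compose -f docker-compose.test.yml up -d postgres
      - sleep 10   # crude wait for PostgreSQL to accept connections
  build:
    commands:
      - npm test   # tests connect to the container on localhost:5432
  post_build:
    commands:
      - docker compose -f docker-compose.test.yml down
```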
Question 33
A CodeBuild project needs to access multiple Git repositories during the build. The main repository is configured as the primary source.
How should secondary sources be configured?
A. Use git clone commands in the buildspec.yml
B. Configure secondary sources in the CodeBuild project
C. Create a monorepo containing all required code
D. Use Git submodules in the primary repository
Answer: B
Explanation:
CodeBuild supports secondary sources configuration, where you can specify additional Git repositories (CodeCommit, GitHub, Bitbucket). These sources are automatically checked out during the build, and you can reference them in buildspec.yml using the sourceIdentifier.
Why others are wrong:
A: Git clone works but requires credential management
C: Monorepo is an architectural change, not a configuration
D: Submodules work but add complexity
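When a secondary source with identifier `shared_lib` (a hypothetical name) is configured on the project, CodeBuild exposes its checkout directory to the build as `CODEBUILD_SRC_DIR_shared_lib`:

```yaml
version: 0.2
phases:
  build:
    commands:
      # the primary source is the build's working directory ($CODEBUILD_SRC_DIR)
      - ls "$CODEBUILD_SRC_DIR"
      # secondary source configured with sourceIdentifier "shared_lib"
      - cp -r "$CODEBUILD_SRC_DIR_shared_lib/lib" ./vendor/
```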
Question 34
A DevOps engineer wants to cache npm dependencies between CodeBuild runs to speed up builds. The dependencies are installed in the node_modules directory.
Which buildspec.yml configuration is correct?
A. ```yaml
cache:
  paths:
    - 'node_modules/**/*'
```
B. ```yaml
cache:
  type: s3
  location: my-cache-bucket/npm
```
C. ```yaml
artifacts:
  files:
    - node_modules/**/*
```
D. ```yaml
phases:
  install:
    cache: node_modules
```
Answer: A
Explanation:
The cache section in buildspec.yml specifies paths to cache in S3. Using 'node_modules/**/*' caches all npm dependencies. You must also enable S3 caching in the CodeBuild project configuration and specify an S3 bucket.
Why others are wrong:
B: Cache location is configured in the project, not buildspec.yml
C: Artifacts are build outputs, not cache
D: This is not valid buildspec syntax
Question 35
A company needs to build and test code on multiple operating systems (Linux and Windows) for each commit.
Which CodeBuild configuration achieves this?
A. Create one CodeBuild project with a Linux environment
B. Create two CodeBuild projects, one for each OS, and trigger both from CodePipeline
C. Configure batch builds with environment variable matrix for OS
D. Use a single project with a custom Docker image containing both OS environments
Answer: B
Explanation:
CodeBuild projects are tied to a specific environment (Linux or Windows). To build on both, you need separate projects. CodePipeline can run them in parallel by configuring them as parallel actions in the same stage.
Why others are wrong:
A: Single project can't build on both OS
C: Batch builds can't switch OS within the same project
D: A Docker image runs on one OS
Question 36
A CodeBuild project's build logs contain sensitive information that should not be visible. The engineer wants to prevent certain strings from appearing in logs.
What should be configured?
A. Configure log encryption with KMS
B. Disable CloudWatch Logs for the project
C. Use parameter-store or secrets-manager type for sensitive environment variables
D. Configure log filtering in CloudWatch
Answer: C
Explanation:
When environment variables are configured as parameter-store or secrets-manager type, CodeBuild automatically masks their values in build logs. This prevents sensitive data from appearing in logs while still allowing the build to use the values.
Why others are wrong:
A: Encryption protects at rest, not in log output
B: Disabling logs reduces visibility for troubleshooting
D: CloudWatch log filtering helps you search logs; it doesn't mask sensitive values before they are written
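As a minimal sketch (the secret and parameter names are hypothetical), declaring sensitive values with these env types lets CodeBuild resolve them at build time and mask them as `***` in the logs:

```yaml
version: 0.2
env:
  secrets-manager:
    DB_PASSWORD: "prod/myapp/db:password"   # hypothetical secret name and JSON key
  parameter-store:
    API_KEY: "/myapp/api-key"               # hypothetical SecureString parameter
phases:
  build:
    commands:
      - ./run-tests.sh   # can read $DB_PASSWORD and $API_KEY; values are masked in logs
```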
Question 37
A DevOps engineer needs to create a CodeBuild project that builds a Docker image and pushes it to ECR. The buildspec.yml needs to authenticate with ECR.
Which pre_build command is correct?
A. `aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin <account-id>.dkr.ecr.us-east-1.amazonaws.com`
B. `aws ecr get-login-password --region us-east-1`
C. `$(aws ecr get-login --no-include-email --region us-east-1)`
D. Both A and C are correct for different AWS CLI versions
Answer: D
Explanation:
For AWS CLI v2, you use aws ecr get-login-password piped to docker login. For AWS CLI v1, you use $(aws ecr get-login --no-include-email) which outputs a complete docker login command. Both achieve ECR authentication.
Why others are wrong:
A: Correct for CLI v2
B: Incomplete command
C: Correct for CLI v1
Question 38
A CodeBuild project is configured in a VPC. The build needs to access both private resources (database) and public resources (public npm registry).
What network configuration is required?
A. Configure CodeBuild with public and private subnets
B. Configure CodeBuild with private subnets that have NAT gateway access
C. Configure CodeBuild with public subnets
D. Configure VPC endpoints for all required services
Answer: B
Explanation:
CodeBuild must be in private subnets when configured for VPC access. These private subnets need a NAT gateway to access public internet resources (like public npm registry) while still being able to access private VPC resources.
Why others are wrong:
A: CodeBuild can only be in one subnet type
C: Public subnets don't provide access to private resources
D: VPC endpoints help but don't replace NAT for public internet access
Question 39
A company wants to generate code coverage reports during CodeBuild and display them in the CodeBuild console.
Which configuration is required?
A. Configure the reports section in buildspec.yml with coverage report group
B. Upload coverage reports to S3 as artifacts
C. Configure CloudWatch to receive coverage metrics
D. Use a third-party coverage reporting tool
Answer: A
Explanation:
CodeBuild has native support for test and coverage reports through the reports section in buildspec.yml. You specify the report group ARN and file locations. Reports appear in the CodeBuild console with visualizations.
Why others are wrong:
B: S3 artifacts don't integrate with CodeBuild reporting
C: CloudWatch metrics don't show coverage reports
D: Native support is available
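A minimal sketch of the reports section configured for coverage (the report group name and file paths are illustrative; CLOVERXML is one of the supported coverage formats, alongside COBERTURAXML, JACOCOXML, and SIMPLECOV):

```yaml
version: 0.2
phases:
  build:
    commands:
      - npm test -- --coverage   # assumed to emit coverage/clover.xml (tool-dependent)
reports:
  coverage-report:               # hypothetical report group name
    files:
      - 'clover.xml'
    base-directory: coverage
    file-format: CLOVERXML
```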
Question 40
A DevOps engineer needs to run a CodeBuild project on a specific schedule (every night at midnight).
Which solution should be implemented?
A. Configure a CloudWatch Events rule with a cron expression to trigger CodeBuild
B. Use CodePipeline with a scheduled trigger
C. Configure Lambda to invoke CodeBuild on schedule
D. Use EventBridge Scheduler to trigger CodeBuild
Answer: D
Explanation:
EventBridge Scheduler supports cron expressions and can invoke the CodeBuild StartBuild API directly. This is the simplest and most current solution for scheduling builds.
Why others are wrong:
A: CloudWatch Events rules still work, but EventBridge (its successor) and EventBridge Scheduler are the current services, making D the better answer
B: CodePipeline doesn't have native scheduled triggers
C: Lambda adds unnecessary complexity
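Either mechanism boils down to a cron expression plus a target. As a CloudFormation sketch using a classic EventBridge rule (resource names are placeholders; the target role must be allowed to call codebuild:StartBuild):

```yaml
Resources:
  NightlyBuildRule:
    Type: AWS::Events::Rule
    Properties:
      ScheduleExpression: "cron(0 0 * * ? *)"   # every night at midnight UTC
      State: ENABLED
      Targets:
        - Id: nightly-codebuild
          Arn: !GetAtt BuildProject.Arn         # the CodeBuild project ARN
          RoleArn: !GetAtt EventsRole.Arn       # role granting codebuild:StartBuild
```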
Question 41
A CodeBuild project needs to produce both a JAR file and a Docker image as outputs.
How should this be configured in buildspec.yml?
A. Configure two separate artifacts sections
B. Configure secondary-artifacts with multiple artifact definitions
C. Run two separate builds
D. Upload the Docker image and JAR to the same S3 location
Answer: B
Explanation:
CodeBuild supports secondary-artifacts in buildspec.yml, allowing you to define multiple artifact outputs. Each secondary artifact can have different configurations (files, locations, names). The Docker image would be pushed to ECR in the build phase.
Why others are wrong:
A: Only one artifacts section, but it can include secondary-artifacts
C: Multiple builds are unnecessary
D: Docker images go to ECR, not S3
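A buildspec sketch along these lines (the artifact identifiers, Maven output path, and the `$ECR_REPO` variable are all illustrative): the image is pushed to ECR during the build phase, while the JAR and any other file outputs are declared under secondary-artifacts:

```yaml
version: 0.2
phases:
  build:
    commands:
      - mvn package                          # assumed to produce target/app.jar
      - docker build -t "$ECR_REPO:latest" .
      - docker push "$ECR_REPO:latest"       # image goes to ECR, not the artifact store
artifacts:
  secondary-artifacts:
    jar_output:                              # hypothetical artifact identifier
      files:
        - target/app.jar
      name: app-jar
    reports_output:
      files:
        - target/surefire-reports/**/*
      name: test-reports
```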
Question 42
A company uses CodeBuild with S3 caching enabled, but cache hits are inconsistent. Builds sometimes download all dependencies despite caching being configured.
What are possible causes? (Choose TWO)
A. The cache has expired based on the TTL
B. The buildspec.yml cache paths are incorrect
C. The build is running in a different AWS Region
D. The KMS key for S3 encryption was rotated
E. The CodeBuild compute type was changed
Answer: A, B
Explanation:
S3 cache can expire (default lifecycle), and cache paths in buildspec.yml must exactly match the actual locations of cached files. Incorrect paths mean nothing is cached. Cache invalidation and path issues are common causes of cache misses.
Why others are wrong:
C: Cache bucket region is consistent for a project
D: KMS key rotation doesn't invalidate cache
E: Compute type change doesn't affect S3 cache
Question 43
A DevOps engineer wants to limit the maximum duration a CodeBuild build can run to prevent runaway builds from consuming resources.
Which configuration should be modified?
A. The timeout setting in the CodeBuild project
B. The resource limits in the build environment
C. The QueuedTimeoutInMinutes setting
D. The concurrent build limit
Answer: A
Explanation:
CodeBuild project timeout (build timeout) controls the maximum duration a build can run before being stopped. This prevents runaway builds. The default is 60 minutes, maximum is 480 minutes (8 hours).
Why others are wrong:
B: No resource limits configuration in CodeBuild
C: QueuedTimeoutInMinutes is how long a build waits in queue
D: Concurrent limits control parallel builds, not duration
Question 44
A CodeBuild project needs elevated permissions during the build to run Docker daemon operations.
Which setting must be enabled?
A. PrivilegedMode in the build environment
B. AdminAccess IAM policy for the service role
C. Root access in the buildspec.yml
D. Docker privileged flag in buildspec.yml commands
Answer: A
Explanation:
To run Docker daemon operations (building images, running containers), CodeBuild needs PrivilegedMode enabled in the project's environment configuration. This gives the build container elevated permissions required for Docker.
Why others are wrong:
B: IAM policies control AWS API access, not container privileges
C: Not a valid buildspec setting
D: Build environment controls, not individual commands
Question 45
A company wants to standardize their CodeBuild projects using CloudFormation. Which AWS::CodeBuild::Project properties are required?
A. Name, Source, Environment, ServiceRole
B. Source, Environment, ServiceRole, Artifacts
C. Name, Source, ServiceRole
D. Source, Environment
Answer: B
Explanation:
For AWS::CodeBuild::Project in CloudFormation, the required properties are Source (where to get code), Environment (build environment), ServiceRole (IAM role), and Artifacts (build output configuration, even if NONE). Name is optional.
Why others are wrong:
A: Artifacts is required
C: Environment and Artifacts are required
D: ServiceRole and Artifacts are required
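A minimal template showing just the four required properties (the role ARN, image, and repository URL are placeholders):

```yaml
Resources:
  BuildProject:
    Type: AWS::CodeBuild::Project
    Properties:
      ServiceRole: arn:aws:iam::123456789012:role/codebuild-role   # required
      Artifacts:
        Type: NO_ARTIFACTS          # required even when producing no artifacts
      Environment:                  # required
        Type: LINUX_CONTAINER
        ComputeType: BUILD_GENERAL1_SMALL
        Image: aws/codebuild/standard:7.0
      Source:                       # required
        Type: CODECOMMIT
        Location: https://git-codecommit.us-east-1.amazonaws.com/v1/repos/my-repo
```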
Question 46
A CodeBuild project uses a custom Docker image from ECR. The build fails with "unable to pull image" error.
What should the DevOps engineer check? (Choose TWO)
A. The CodeBuild service role has ecr:GetDownloadUrlForLayer permission
B. The ECR repository policy allows the CodeBuild service role
C. The VPC configuration allows outbound internet access
D. The Docker image tag exists in the repository
E. The CodeBuild project has privileged mode enabled
Answer: A, D
Explanation:
To pull images from ECR, the CodeBuild service role needs ECR read permissions (ecr:GetDownloadUrlForLayer, ecr:BatchGetImage). Additionally, the specified image tag must exist in the repository. If using VPC mode, NAT gateway is needed for ECR access unless using VPC endpoints.
Why others are wrong:
B: Repository policy isn't needed if the service role has ECR permissions
C: Relevant if in VPC mode, but not the primary check
E: Privileged mode is for running Docker, not pulling images
Question 47
A DevOps engineer needs to pass build outputs from one CodeBuild project to another in a pipeline.
How is this accomplished in CodePipeline?
A. Configure the first project's artifacts as input to the second project
B. Use S3 to share artifacts between projects
C. Configure environment variables to pass artifact locations
D. Use Parameter Store to share artifact information
Answer: A
Explanation:
CodePipeline manages artifact passing between actions. The first CodeBuild project's output artifacts are automatically stored in the pipeline's artifact bucket and can be configured as input artifacts for the second CodeBuild action.
Why others are wrong:
B: CodePipeline handles this via its artifact store
C: Not necessary; pipeline manages artifact flow
D: Not needed for artifact sharing
Question 48
A company needs to ensure their CodeBuild projects only use approved Docker images from their internal ECR repository.
Which controls should be implemented? (Choose TWO)
A. Configure IAM policies to restrict ecr:BatchGetImage to specific repositories
B. Use AWS Organizations SCP to restrict CodeBuild image sources
C. Configure the CodeBuild project to use images only from the approved ECR repository
D. Use AWS Config rules to detect non-compliant image sources
E. Enable image scanning in ECR
Answer: C, D
Explanation:
Configure CodeBuild projects to use specific ECR repositories for custom images. AWS Config custom rules can monitor CodeBuild projects and alert if they use non-approved image sources, providing detective controls.
Why others are wrong:
A: Doesn't prevent using non-ECR images
B: SCP doesn't have this level of granularity for CodeBuild images
E: Scanning is for vulnerabilities, not source restriction
Question 49
A CodeBuild project generates test results in JUnit XML format. The engineer wants these results visible in the CodeBuild console.
Which buildspec.yml configuration is required?
A. ```yaml
reports:
  junit-reports:
    files:
      - '**/*.xml'
    base-directory: test-results
    file-format: JUNITXML
```
B. ```yaml
artifacts:
  files:
    - test-results/**/*.xml
  name: test-reports
```
C. ```yaml
phases:
  post_build:
    reports:
      - test-results/**/*.xml
```
D. ```yaml
test-reports:
  format: junit
  files: '**/*.xml'
```
Answer: A
Explanation:
CodeBuild test reports are configured in the reports section of buildspec.yml. You specify a report group name, files to include, base directory, and file format (JUNITXML for JUnit XML reports). Reports then appear in the CodeBuild console.
Why others are wrong:
B: Artifacts are for build outputs, not test reports
C: Invalid syntax
D: Invalid syntax
Question 50
A DevOps engineer is configuring local caching for a CodeBuild project to improve build performance.
Which caching modes are available? (Choose THREE)
A. Docker layer cache
B. Source cache
C. Custom cache
D. Maven cache
E. npm cache
Answer: A, B, C
Explanation:
CodeBuild local caching supports three modes: Docker layer cache (for Docker builds), Source cache (caches Git metadata for faster clones), and Custom cache (caches paths you specify, similar to S3 caching but stored locally on the build host).
Why others are wrong:
D: Maven dependencies are cached using custom cache mode
E: npm dependencies are cached using custom cache mode
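In the project's CloudFormation definition, all three local modes can be enabled together; a fragment of AWS::CodeBuild::Project as a sketch:

```yaml
# Cache property fragment of an AWS::CodeBuild::Project resource
Cache:
  Type: LOCAL
  Modes:
    - LOCAL_DOCKER_LAYER_CACHE   # reuses Docker layers between builds
    - LOCAL_SOURCE_CACHE         # caches Git metadata for faster clones
    - LOCAL_CUSTOM_CACHE         # caches the paths listed under cache.paths in buildspec.yml
```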
Question 51
A company is implementing blue/green deployments for EC2 instances using CodeDeploy. During deployment, traffic should shift gradually over 10 minutes.
Which deployment configuration should be used?
A. CodeDeployDefault.AllAtOnce
B. CodeDeployDefault.Linear10PercentEvery1Minutes
C. CodeDeployDefault.Canary10Percent10Minutes
D. Custom configuration with MinimumHealthyHosts
Answer: B
Explanation:
CodeDeployDefault.Linear10PercentEvery1Minutes shifts 10% of traffic every minute, completing in 10 minutes. This provides a gradual traffic shift allowing time to detect issues before full deployment.
Why others are wrong:
A: AllAtOnce shifts all traffic immediately
C: Canary shifts 10% initially, then waits 10 minutes, then shifts remaining 90%
D: MinimumHealthyHosts is for EC2 in-place, not traffic shifting
Question 52
A DevOps engineer is troubleshooting a CodeDeploy deployment that fails at the AfterInstall hook. The EC2 instances are running Amazon Linux 2.
Where should the engineer look for detailed error logs?
A. /var/log/messages
B. /var/log/aws/codedeploy-agent/codedeploy-agent.log
C. /opt/codedeploy-agent/logs
D. CloudWatch Logs (if configured)
Answer: B
Explanation:
The CodeDeploy agent log at /var/log/aws/codedeploy-agent/codedeploy-agent.log contains detailed information about deployment execution, including lifecycle hook script output and errors. Per-deployment logs are also available under /opt/codedeploy-agent/deployment-root.
Why others are wrong:
A: /var/log/messages contains general system logs, not detailed CodeDeploy hook output
C: Not the agent's log location; per-deployment logs live under /opt/codedeploy-agent/deployment-root
D: CloudWatch Logs only contains agent logs if log streaming has been configured, which isn't the default
Question 53
A company uses CodeDeploy to deploy to an Auto Scaling group. They notice that new instances launched by scaling activities don't receive deployments.
What configuration is required?
A. Enable Amazon SNS notifications for the deployment group
B. Configure the Auto Scaling group as a deployment target in CodeDeploy
C. Install CodeDeploy agent in the AMI and configure the deployment group to use the Auto Scaling group
D. Create a lifecycle hook in the Auto Scaling group
Answer: C
Explanation:
When an Auto Scaling group is configured as a deployment target in CodeDeploy, new instances automatically receive the last successful deployment. The CodeDeploy agent must be installed (typically in the AMI), and the deployment group targets the Auto Scaling group.
Why others are wrong:
A: SNS notifications are for alerts, not deployment to new instances
B: Correct but incomplete without agent installation
D: CodeDeploy creates the required Auto Scaling lifecycle hook automatically when the group is registered as a deployment target
Question 54
A DevOps engineer needs to configure CodeDeploy to automatically roll back a deployment if CloudWatch alarms trigger.
Which configuration is required? (Choose TWO)
A. Associate CloudWatch alarms with the deployment group
B. Enable automatic rollbacks in the deployment group
C. Configure Lambda functions to monitor and trigger rollback
D. Enable deployment notifications with SNS
E. Configure CodePipeline to monitor alarms
Answer: A, B
Explanation:
To enable automatic rollback on alarm, you must associate CloudWatch alarms with the deployment group AND enable automatic rollbacks (specifically for alarm-triggered rollbacks). CodeDeploy monitors these alarms during deployment and rolls back if any enter ALARM state.
Why others are wrong:
C: Lambda isn't needed; CodeDeploy has native alarm integration
D: SNS notifications are for alerts, not rollback
E: CodePipeline doesn't control CodeDeploy rollback
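Both pieces appear together on the deployment group; a CloudFormation fragment of AWS::CodeDeploy::DeploymentGroup as a sketch (the alarm name is a placeholder):

```yaml
# Deployment group fragment: alarms associated + automatic rollback enabled
AlarmConfiguration:
  Enabled: true
  Alarms:
    - Name: app-5xx-errors            # hypothetical CloudWatch alarm
AutoRollbackConfiguration:
  Enabled: true
  Events:
    - DEPLOYMENT_STOP_ON_ALARM        # roll back when an associated alarm fires
    - DEPLOYMENT_FAILURE
```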
Question 55
A company deploys a Node.js application to EC2 using CodeDeploy. The application should start after deployment and stop before the next deployment.
Which appspec.yml configuration is correct?
A. ```yaml
version: 0.0
os: linux
hooks:
  ApplicationStop:
    - location: scripts/stop.sh
      timeout: 120
  ApplicationStart:
    - location: scripts/start.sh
      timeout: 120
```
B. ```yaml
version: 0.0
os: linux
files:
  - source: /
    destination: /opt/app
hooks:
  ApplicationStop:
    - location: scripts/stop.sh
      timeout: 120
  ApplicationStart:
    - location: scripts/start.sh
      timeout: 120
```
C. ```yaml
version: 0.0
os: linux
files:
  - source: /
    destination: /opt/app
hooks:
  BeforeInstall:
    - location: scripts/stop.sh
      timeout: 120
  ApplicationStart:
    - location: scripts/start.sh
      timeout: 120
```
D. ```yaml
version: 0.0
os: linux
hooks:
  Install:
    - location: scripts/install.sh
  Start:
    - location: scripts/start.sh
```
Answer: B
Explanation:
The appspec.yml should include a files section for copying application files and hooks for ApplicationStop (runs before new deployment) and ApplicationStart (runs after installation). Option B has the correct structure.
Why others are wrong:
A: Missing files section for copying the application
C: BeforeInstall runs during the new deployment, after the bundle is downloaded; ApplicationStop is the correct hook for stopping the previous version
D: Install and Start are not valid lifecycle hooks
Question 56
A company performs CodeDeploy blue/green deployments to EC2 instances behind an Application Load Balancer. After deployment, they want the old (blue) environment to remain for 1 hour before termination.
Which setting controls this?
A. TerminationWaitTimeInMinutes in the deployment group
B. BlueGreenDeploymentConfiguration with terminateBlueInstancesOnDeploymentSuccess wait time
C. AutoScaling group cooldown period
D. ALB deregistration delay
Answer: B
Explanation:
In CodeDeploy blue/green deployment configuration, the terminateBlueInstancesOnDeploymentSuccess setting includes a waitTimeInMinutes parameter that controls how long to wait before terminating the original (blue) instances after successful deployment.
Why others are wrong:
A: Not a valid setting name
C: Cooldown is for scaling activities
D: Deregistration delay is for target removal, not instance termination timing
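In CloudFormation, this lives in the deployment group's blue/green configuration; a sketch of the relevant fragment:

```yaml
# BlueGreenDeploymentConfiguration fragment of AWS::CodeDeploy::DeploymentGroup
BlueGreenDeploymentConfiguration:
  TerminateBlueInstancesOnDeploymentSuccess:
    Action: TERMINATE
    TerminationWaitTimeInMinutes: 60   # keep the blue instances for 1 hour
  DeploymentReadyOption:
    ActionOnTimeout: CONTINUE_DEPLOYMENT
```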
Question 57
A DevOps engineer is configuring CodeDeploy for Lambda functions. The deployment should shift 10% of traffic to the new version, wait 5 minutes, then complete the shift.
Which deployment configuration should be used?
A. CodeDeployDefault.LambdaCanary10Percent5Minutes
B. CodeDeployDefault.LambdaLinear10PercentEvery5Minutes
C. CodeDeployDefault.LambdaAllAtOnce
D. Custom configuration with trafficRoutingConfig
Answer: A
Explanation:
CodeDeployDefault.LambdaCanary10Percent5Minutes shifts 10% of traffic initially, monitors for 5 minutes, then shifts the remaining 90%. This is the canary pattern - testing with a small percentage before full deployment.
Why others are wrong:
B: Linear shifts incrementally over time, not a single wait period
C: AllAtOnce shifts immediately with no gradual rollout
D: Custom config is possible but the pre-defined option matches requirements
Question 58
A company's CodeDeploy deployments to EC2 occasionally fail with "HEALTH_CONSTRAINTS" errors even though most instances deploy successfully.
What causes this error?
A. Too many instances failed the health check
B. The deployment configuration minimum healthy hosts wasn't met
C. The Auto Scaling group health check failed
D. Both A and B are correct
Answer: D
Explanation:
HEALTH_CONSTRAINTS errors occur when the number of healthy instances falls below the minimum required by the deployment configuration. This happens when too many instances fail deployment, causing the remaining healthy instances to be below the threshold.
Why others are wrong:
A: Correct but incomplete
B: Correct but incomplete
C: Auto Scaling health checks are separate from CodeDeploy
Question 59
A DevOps engineer needs to perform database migrations as part of a CodeDeploy deployment. The migration should run only once, not on every instance.
Which approach should be used?
A. Run migrations in the BeforeInstall hook
B. Run migrations in the ApplicationStart hook with a lock file
C. Use CodeDeploy's run_order configuration
D. Run migrations in a separate CodePipeline action before CodeDeploy
Answer: D
Explanation:
Database migrations should run once before deployment, not on every instance. Running migrations as a separate CodePipeline action (using Lambda or CodeBuild) before the CodeDeploy action ensures migrations complete once before any instance is updated.
Why others are wrong:
A: Would run on every instance
B: Lock files are complex to manage across instances
C: run_order doesn't prevent multiple executions
Question 60
A company uses CodeDeploy for on-premises server deployments. The CodeDeploy agent can't connect to AWS.
What should the DevOps engineer check? (Choose TWO)
A. The IAM instance profile attached to the servers
B. Network connectivity to CodeDeploy service endpoints
C. The CodeDeploy agent configuration file for correct Region
D. The server's ability to resolve AWS DNS
E. The EC2 key pair configuration
Answer: B, C
Explanation:
On-premises servers don't have instance profiles; they use IAM users for authentication (configured in the agent). Network connectivity to AWS endpoints and correct Region configuration in the agent are critical for successful connection.
Why others are wrong:
A: On-premises servers don't use instance profiles
D: DNS resolution helps but isn't specific to CodeDeploy
E: Key pairs are for SSH, not CodeDeploy
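The agent reads its Region and IAM user credentials from a configuration file; a sketch of its documented shape (all credential values below are placeholders):

```yaml
# /etc/codedeploy-agent/conf/codedeploy.onpremises.yml (placeholder values)
---
aws_access_key_id: AKIAEXAMPLEKEY
aws_secret_access_key: exampleSecretKey123
iam_user_arn: arn:aws:iam::123456789012:user/CodeDeployOnPremUser
region: us-east-1   # must match the Region where the on-premises instance is registered
```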
Question 61
A company wants to implement feature toggles during CodeDeploy deployments. New features should be disabled initially and enabled gradually.
Which approach integrates best with CodeDeploy?
A. Use environment variables in appspec.yml
B. Integrate with AWS AppConfig for feature flags
C. Use CodeDeploy deployment configuration variables
D. Store feature flags in Parameter Store and read during ApplicationStart
Answer: B
Explanation:
AWS AppConfig is designed for feature flags and configuration management. It integrates with deployment strategies for gradual rollout and can work alongside CodeDeploy for feature-level control independent of code deployment.
Why others are wrong:
A: appspec.yml doesn't support environment variables for feature flags
C: CodeDeploy doesn't have deployment configuration variables
D: Parameter Store stores values but lacks AppConfig's rollout features
Question 62
A DevOps engineer is configuring CodeDeploy for an ECS service. The deployment must use a test listener on the ALB to validate the new task set before shifting production traffic.
Which appspec.yml elements are required?
A. TaskDefinition, ContainerName, ContainerPort
B. TaskDefinition, LoadBalancerInfo, TestTrafficListenerArn
C. TaskDefinition, LoadBalancerInfo including TestTrafficRoute
D. ECSService, LoadBalancer, TestListener
Answer: C
Explanation:
For ECS blue/green deployments with test traffic, the appspec.yml needs TaskDefinition information and LoadBalancerInfo that includes the test traffic route configuration. This allows CodeDeploy to send test traffic to the new task set via the test listener.
Why others are wrong:
A: Missing LoadBalancerInfo and test listener configuration
B: TestTrafficListenerArn isn't the correct element name
D: Incorrect element names
Question 63
A company's CodeDeploy deployments succeed, but instances show as unhealthy after deployment. The ValidateService hook passes.
What could cause this issue?
A. The ELB health check is failing
B. The CodeDeploy agent is outdated
C. The appspec.yml has incorrect file permissions
D. The deployment configuration is incorrect
Answer: A
Explanation:
If ValidateService passes but instances are unhealthy, the ELB health check (separate from CodeDeploy validation) is likely failing. The application might respond to ValidateService scripts but not to the ELB health check endpoint correctly.
Why others are wrong:
B: Outdated agent would cause deployment issues, not post-deployment health
C: File permissions would cause application errors during deployment
D: Deployment configuration affects rollout, not post-deployment health
Question 64
A DevOps engineer needs to implement a deployment strategy where 25% of instances are updated first, validated, then the remaining 75% are updated.
Which CodeDeploy configuration achieves this?
A. CodeDeployDefault.HalfAtATime
B. Create a custom deployment configuration with MinimumHealthyHostsPerZone
C. CodeDeployDefault.OneAtATime with manual approval between batches
D. Create a custom deployment configuration with MinimumHealthyHosts of 75%
Answer: D
Explanation:
A custom deployment configuration with MinimumHealthyHosts set to 75% means at most 25% of instances can be deployed simultaneously. CodeDeploy will update 25% first, then the remaining instances while maintaining 75% healthy.
Why others are wrong:
A: HalfAtATime updates 50% at a time
B: MinimumHealthyHostsPerZone is for zone-aware deployments
C: OneAtATime is too slow and doesn't support batches
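The custom configuration can be sketched in CloudFormation (the configuration name is a placeholder):

```yaml
# At most 25% of instances updated at once: keep 75% of the fleet healthy
Resources:
  QuarterAtATime:
    Type: AWS::CodeDeploy::DeploymentConfig
    Properties:
      DeploymentConfigName: QuarterAtATime
      MinimumHealthyHosts:
        Type: FLEET_PERCENT
        Value: 75
```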
Question 65
A company uses CodeDeploy to deploy to a fleet of 100 EC2 instances. Deployments are taking too long because they update one instance at a time.
Which change would speed up deployments while limiting risk?
A. Use CodeDeployDefault.AllAtOnce
B. Use CodeDeployDefault.HalfAtATime
C. Create a custom configuration with MinimumHealthyHosts of 90%
D. Reduce the number of lifecycle hooks
Answer: C
Explanation:
A custom configuration with 90% minimum healthy hosts allows 10% of instances (10 instances) to be updated simultaneously. This speeds up deployment while limiting risk to 10% of the fleet at any time.
Why others are wrong:
A: AllAtOnce is too risky for 100 instances
B: HalfAtATime (50%) might be too aggressive
D: Reducing hooks might skip necessary steps
Question 66
A DevOps engineer needs to ensure that CodeDeploy deployments to EC2 instances include validation of the application's HTTP endpoint responding correctly.
Which approach is recommended?
A. Add a ValidateService hook script that performs HTTP health checks
B. Configure an ELB health check
C. Add CloudWatch alarms for HTTP 5xx errors
D. All of the above for comprehensive validation
Answer: D
Explanation:
Comprehensive validation includes ValidateService hooks for immediate post-deployment validation, ELB health checks for ongoing health, and CloudWatch alarms for automatic rollback on errors. Using all three provides defense in depth.
Why others are wrong:
A: Good but not comprehensive alone
B: Good but runs after deployment hooks
C: Good for rollback but after the fact
Question 67
A company is migrating from in-place deployments to blue/green deployments using CodeDeploy. What additional infrastructure is required?
A. A second Auto Scaling group
B. Two separate VPCs
C. An Application Load Balancer (if not already using one)
D. CodeDeploy creates and manages the green environment automatically
Answer: D
Explanation:
For EC2 blue/green deployments with Auto Scaling, CodeDeploy can automatically create the green environment by copying the Auto Scaling group configuration, launching new instances, deploying the application, and shifting traffic. You need an ALB but CodeDeploy manages the green ASG.
Why others are wrong:
A: CodeDeploy creates this automatically
B: Same VPC is typically used
C: ALB is required but might already exist
Question 68
A DevOps engineer is configuring CodeDeploy for Lambda. The deployment should use hooks to run integration tests before and after traffic shifting.
Which hooks are available for Lambda deployments?
A. BeforeInstall, AfterInstall, ApplicationStart
B. BeforeAllowTraffic, AfterAllowTraffic
C. BeforeTrafficShift, AfterTrafficShift
D. PreTraffic, PostTraffic
Answer: B
Explanation:
Lambda deployments in CodeDeploy support BeforeAllowTraffic and AfterAllowTraffic hooks. These hooks invoke specified Lambda functions for validation before shifting traffic to the new version and after the shift is complete.
Why others are wrong:
A: These are EC2/on-premises hooks
C: Not valid hook names
D: Not valid hook names
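A sketch of a Lambda appspec.yml wiring in both hooks (the function, alias, versions, and hook function names are placeholders):

```yaml
version: 0.0
Resources:
  - myFunction:
      Type: AWS::Lambda::Function
      Properties:
        Name: my-function          # hypothetical function name
        Alias: live
        CurrentVersion: "1"
        TargetVersion: "2"
Hooks:
  - BeforeAllowTraffic: "IntegrationTestPreTraffic"   # Lambda run before the traffic shift
  - AfterAllowTraffic: "IntegrationTestPostTraffic"   # Lambda run after the shift completes
```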
Question 69
A company's CodeDeploy deployment to EC2 fails at the DownloadBundle step.
What are possible causes? (Choose TWO)
A. The CodeDeploy agent cannot access S3 or GitHub
B. The appspec.yml is invalid
C. The IAM role doesn't have S3 read permissions
D. The EC2 instance has insufficient disk space
E. The application stop script failed
Answer: A, C
Explanation:
DownloadBundle failures typically indicate the agent can't download the revision from S3 (or GitHub). This is usually due to network connectivity issues to S3 or insufficient IAM permissions to read from the S3 bucket containing the artifact.
Why others are wrong:
B: appspec.yml validation happens after download
D: Disk space issues would show different errors
E: ApplicationStop runs after bundle download
Question 70
A DevOps engineer needs to implement gradual rollout for an ECS service using CodeDeploy. Traffic should shift 25% every 5 minutes.
Which deployment configuration should be used?
A. CodeDeployDefault.ECSCanary25Percent5Minutes
B. CodeDeployDefault.ECSLinear25PercentEvery5Minutes
C. Create a custom ECS deployment configuration
D. CodeDeployDefault.ECSAllAtOnce with CloudWatch alarms
Answer: C
Explanation:
While CodeDeploy offers predefined ECS configurations like ECSLinear10PercentEvery1Minute, a 25% every 5 minutes configuration requires creating a custom deployment configuration with the specific linear traffic routing settings.
Why others are wrong:
A: Not a predefined configuration name
B: Not a predefined configuration name (predefined options are 10% intervals)
D: AllAtOnce doesn't provide gradual rollout
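The custom configuration can be sketched in CloudFormation (the resource and configuration names are placeholders):

```yaml
# Custom ECS deployment configuration: shift 25% of traffic every 5 minutes
Resources:
  EcsLinear25Every5:
    Type: AWS::CodeDeploy::DeploymentConfig
    Properties:
      ComputePlatform: ECS
      TrafficRoutingConfig:
        Type: TimeBasedLinear
        TimeBasedLinear:
          LinearPercentage: 25
          LinearInterval: 5    # minutes between each 25% shift
```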
Question 71
A company requires that CodeDeploy deployments only occur during maintenance windows. Deployments initiated outside these windows should be blocked.
Which approach enforces this?
A. Use IAM policies with time-based conditions
B. Configure CodeDeploy to respect maintenance windows
C. Use AWS Systems Manager Maintenance Windows with CodeDeploy
D. Create a Lambda function that blocks deployments outside windows
Answer: C
Explanation:
AWS Systems Manager Maintenance Windows can be configured to run CodeDeploy deployments only during specified time windows. This provides built-in scheduling and enforcement of maintenance windows for deployments.
Why others are wrong:
A: IAM time-based conditions are complex to implement for this use case
B: CodeDeploy doesn't have native maintenance window support
D: Lambda could work but isn't the native solution
Question 72
A DevOps engineer is implementing canary deployments for a Lambda function using CodeDeploy. The validation function should check that the new version responds correctly before allowing more traffic.
How should this be configured?
A. Create a separate Lambda function and reference it in the BeforeAllowTraffic hook in appspec.yml
B. Add validation logic to the main Lambda function
C. Configure CloudWatch alarms to validate the new version
D. Use Lambda destination configurations
Answer: A
Explanation:
For Lambda deployments, the BeforeAllowTraffic hook in appspec.yml specifies a Lambda function that runs before traffic shifts. This validation function should test the new version and return success/failure to control deployment progression.
Why others are wrong:
B: Main function shouldn't contain deployment validation logic
C: CloudWatch alarms are for rollback, not pre-traffic validation
D: Destinations are for async invocation results
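A minimal appspec.yml sketch for this setup (function name, alias, versions, and the validator name are illustrative):

```yaml
version: 0.0
Resources:
  - myFunction:
      Type: AWS::Lambda::Function
      Properties:
        Name: "payment-service"      # function being deployed (illustrative)
        Alias: "live"                # alias whose traffic CodeDeploy shifts
        CurrentVersion: "3"
        TargetVersion: "4"
Hooks:
  - BeforeAllowTraffic: "payment-service-validator"   # separate validation Lambda
```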
Question 73
A company's CodeDeploy agent keeps reporting "agent is in an unhealthy state." The EC2 instances have outbound internet access.
What should the DevOps engineer check?
A. The IAM role attached to the EC2 instances has CodeDeploy permissions
B. The CodeDeploy agent service is running
C. The instance has sufficient memory and CPU
D. All of the above
Answer: D
Explanation:
An unhealthy agent can be caused by various issues: insufficient IAM permissions prevent the agent from polling for deployments, a stopped agent service prevents any communication, and resource constraints can cause the agent process to fail. All should be checked.
Why others are wrong:
A: Correct but not the only cause
B: Correct but not the only cause
C: Correct but not the only cause
Question 74
A DevOps engineer needs to track which CodeDeploy deployments have been applied to which instances over time.
Which AWS service provides this information?
A. AWS CloudTrail
B. AWS Config
C. CodeDeploy deployment history
D. Amazon CloudWatch
Answer: C
Explanation:
CodeDeploy maintains deployment history for each deployment group, showing all deployments, their status, which instances were included, and timing. The console and API provide access to this historical information.
Why others are wrong:
A: CloudTrail tracks API calls, not deployment details per instance
B: Config tracks configuration changes, not deployment history
D: CloudWatch provides metrics but not deployment history
Question 75
A company wants to implement a deployment strategy where they can easily switch between the current and previous versions of their application on EC2.
Which CodeDeploy deployment type supports this?
A. In-place deployment with automatic rollback
B. Blue/green deployment
C. Rolling deployment
D. Canary deployment
Answer: B
Explanation:
Blue/green deployments maintain both the current (blue) and new (green) environments. After deployment, you can easily switch traffic between them by re-routing at the load balancer level, enabling instant rollback or version switching.
Why others are wrong:
A: In-place deployment replaces the application; rollback requires redeployment
C: Rolling updates instances sequentially; no parallel environment
D: Canary is a traffic shifting pattern, not a deployment type
Question 76
A company uses CodeCommit for source control. They need to prevent direct pushes to the main branch; all changes must go through pull requests.
Which configuration achieves this?
A. Configure branch permissions in CodeCommit repository settings
B. Create an IAM policy that denies GitPush to the main branch
C. Use a pre-receive hook to reject direct pushes
D. Configure approval rule templates for pull requests
Answer: B
Explanation:
IAM policies can include conditions to deny GitPush to specific branches using the codecommit:References condition key. This prevents direct pushes to protected branches while allowing pushes to other branches.
Why others are wrong:
A: CodeCommit doesn't have branch permissions in repository settings
C: CodeCommit doesn't support custom pre-receive hooks
D: Approval rules control PR approval, not direct push prevention
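A sketch of such a deny statement, following the pattern in the AWS documentation (account ID and repository name are placeholders). The `Null` condition keeps the deny scoped to pushes that actually reference branches:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Deny",
      "Action": "codecommit:GitPush",
      "Resource": "arn:aws:codecommit:us-east-1:111122223333:MyRepo",
      "Condition": {
        "StringEqualsIfExists": {
          "codecommit:References": ["refs/heads/main"]
        },
        "Null": {
          "codecommit:References": "false"
        }
      }
    }
  ]
}
```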
Question 77
A DevOps engineer needs to trigger a Lambda function whenever code is pushed to a CodeCommit repository.
Which configuration is required?
A. Create a CodeCommit trigger pointing to the Lambda function
B. Configure EventBridge to capture CodeCommit events and invoke Lambda
C. Use SNS as an intermediary between CodeCommit and Lambda
D. Both A and B are valid approaches
Answer: D
Explanation:
CodeCommit triggers can directly invoke Lambda functions for repository events (push, branch creation, etc.). Alternatively, EventBridge can capture CodeCommit events and invoke Lambda with more filtering options. Both are valid approaches.
Why others are wrong:
A: Correct but not the only option
B: Correct but not the only option
C: SNS is one option but not required; direct trigger or EventBridge work
Question 78
A company needs to grant developers in a different AWS account access to their CodeCommit repository.
Which approach should be used?
A. Create IAM users for the developers in the repository account
B. Configure a cross-account IAM role and resource-based policy on the repository
C. Share the repository using AWS RAM
D. Configure repository mirroring to the other account
Answer: B
Explanation:
Cross-account CodeCommit access is achieved through a cross-account IAM role in the repository account that developers in the other account can assume. The role needs CodeCommit permissions, and the repository can have a resource-based policy.
Why others are wrong:
A: Creating users in another account is not best practice
C: CodeCommit doesn't integrate with AWS RAM
D: Mirroring creates a copy, not shared access
Question 79
A DevOps engineer needs to enforce that all commits to a CodeCommit repository are signed with GPG keys.
How can this be achieved?
A. Enable commit signing enforcement in CodeCommit settings
B. Use a Lambda trigger to validate commit signatures
C. Configure IAM policies to require signed commits
D. CodeCommit does not support commit signature verification
Answer: D
Explanation:
CodeCommit does not currently support GPG commit signature verification at the server side. While you can sign commits locally and push them, CodeCommit won't enforce or verify the signatures. This would require external validation if needed.
Why others are wrong:
A: No such setting exists in CodeCommit
B: Lambda could validate but can't reject the commit after push
C: IAM doesn't have conditions for commit signatures
Question 80
A company wants to automatically close pull requests that have been inactive for more than 30 days.
Which approach should be implemented?
A. Configure CodeCommit pull request expiration settings
B. Create a Lambda function triggered by EventBridge Scheduler to close stale PRs
C. Use AWS Config rules to detect and close stale PRs
D. Configure an SNS topic to notify about stale PRs
Answer: B
Explanation:
CodeCommit doesn't have native pull request expiration. A Lambda function scheduled by EventBridge Scheduler can use the CodeCommit API to list pull requests, identify those inactive for 30+ days, and close them automatically.
Why others are wrong:
A: CodeCommit doesn't have PR expiration settings
C: Config rules detect configuration drift, not PR status
D: SNS notifies but doesn't close PRs
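The selection logic such a scheduled Lambda would run can be sketched in plain Python (the input mirrors the shape of CodeCommit's `get_pull_request` response; the real function would enumerate PRs with `list_pull_requests` and close the stale ones with `update_pull_request_status`):

```python
from datetime import datetime, timedelta, timezone

STALE_AFTER = timedelta(days=30)

def stale_pr_ids(pull_requests, now=None):
    """Return the IDs of open pull requests inactive for more than 30 days."""
    now = now or datetime.now(timezone.utc)
    return [
        pr["pullRequestId"]
        for pr in pull_requests
        if pr["pullRequestStatus"] == "OPEN"
        and now - pr["lastActivityDate"] > STALE_AFTER
    ]
```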
Question 81
A DevOps engineer needs to migrate a large Git repository from GitHub to CodeCommit while preserving all history and branches.
Which approach should be used?
A. Clone the GitHub repository with --mirror flag and push to CodeCommit
B. Export the GitHub repository as a ZIP and import to CodeCommit
C. Use AWS DataSync to transfer the repository
D. Use CodePipeline to sync from GitHub to CodeCommit
Answer: A
Explanation:
Using git clone --mirror creates a bare clone with all refs (branches, tags, etc.). Then pushing this mirror to CodeCommit with git push --mirror transfers everything. This preserves complete history and all branches.
Why others are wrong:
B: ZIP export loses Git history
C: DataSync is for file/object transfer, not Git repositories
D: CodePipeline syncs specific branches, not mirrors
Question 82
A company requires that all pull requests have at least two approvals before merging, with at least one approval from a senior developer.
Which CodeCommit feature supports this?
A. Branch policies
B. Approval rule templates
C. IAM condition policies
D. Merge strategies
Answer: B
Explanation:
CodeCommit approval rule templates allow you to define approval requirements for pull requests, including the number of approvals required and specific approval pool members (like senior developers) who must approve.
Why others are wrong:
A: CodeCommit doesn't have branch policies
C: IAM controls actions, not approval requirements
D: Merge strategies control how merges happen, not approvals
Question 83
A DevOps engineer needs to configure notifications when pull request comments are added in CodeCommit.
Which service should be used?
A. CodeCommit triggers
B. Amazon EventBridge
C. Amazon SNS direct integration
D. AWS Chatbot
Answer: B
Explanation:
CodeCommit sends events to EventBridge for various activities including pull request comments. EventBridge rules can filter these events and route them to targets like SNS, Lambda, or other services for notification.
Why others are wrong:
A: Triggers fire on repository events such as pushes and branch changes, not pull request comments
C: SNS requires EventBridge or triggers to receive CodeCommit events
D: Chatbot is a target, not an event source
Question 84
A company's CodeCommit repository is approaching the 2GB file size limit for some binary files.
What solution should be implemented?
A. Increase the CodeCommit file size limit via support request
B. Use Git LFS (Large File Storage) with CodeCommit
C. Store large files in S3 and reference them in the repository
D. Split the repository into smaller repositories
Answer: C
Explanation:
CodeCommit has a 2GB file size limit that cannot be increased and does not support Git LFS. The solution is to store large binary files in S3 and keep references (URLs or paths) in the CodeCommit repository.
Why others are wrong:
A: The limit cannot be increased
B: CodeCommit doesn't support Git LFS
D: Splitting doesn't solve the large file issue
Question 85
A DevOps engineer is configuring HTTPS access to CodeCommit for developers.
Which authentication method should be used?
A. SSH keys registered in CodeCommit
B. Git credentials (username/password) generated in IAM
C. AWS access keys
D. Federated identity with SAML
Answer: B
Explanation:
For HTTPS access to CodeCommit, developers use Git credentials (username and password) generated in the IAM console. These are specific to CodeCommit and different from regular AWS access keys.
Why others are wrong:
A: SSH keys are for SSH access, not HTTPS
C: AWS access keys work with credential helper, not standard HTTPS
D: Federated identity works with git-remote-codecommit helper
Question 86
A company wants to set up repository mirroring from CodeCommit to a backup repository in another Region.
What approach should be used?
A. Configure CodeCommit replication
B. Use a Lambda function triggered by repository events to push to the backup
C. Configure EventBridge to replicate commits
D. Use AWS Backup to back up CodeCommit repositories
Answer: B
Explanation:
CodeCommit doesn't have native cross-Region replication. A Lambda function triggered by CodeCommit events (via triggers or EventBridge) can perform git push to the backup repository to maintain synchronization.
Why others are wrong:
A: CodeCommit doesn't have native replication
C: EventBridge doesn't replicate; it triggers actions
D: AWS Backup doesn't support CodeCommit
Question 87
A DevOps engineer needs to find which commit introduced a specific bug in a CodeCommit repository.
Which approach is most efficient?
A. Use git log to review commit history
B. Use git bisect to perform binary search
C. Review all pull requests from the time period
D. Use AWS X-Ray to trace the issue
Answer: B
Explanation:
Git bisect performs a binary search through commit history to identify which commit introduced a bug. You mark commits as good or bad, and git bisect narrows down to the exact commit that introduced the issue.
Why others are wrong:
A: git log is manual and time-consuming for many commits
C: PRs might not correlate directly with the bug
D: X-Ray is for application tracing, not Git history
Question 88
A company uses CodeCommit and wants to automatically tag commits that pass all pipeline stages.
Which approach should be implemented?
A. Configure CodeDeploy to tag commits after successful deployment
B. Add a Lambda function in the pipeline that tags the commit after all stages pass
C. Use EventBridge to detect successful pipeline execution and tag via Lambda
D. Configure CodePipeline to automatically tag source commits
Answer: C
Explanation:
EventBridge can capture CodePipeline state change events (pipeline succeeded). A Lambda function triggered by this event can use the CodeCommit API to create a tag on the source commit, marking it as successfully deployed.
Why others are wrong:
A: CodeDeploy doesn't have commit tagging functionality
B: An in-pipeline Lambda action runs as part of a stage and cannot observe that the overall execution succeeded
D: CodePipeline doesn't have automatic commit tagging
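A sketch of the EventBridge event pattern that would route successful executions of a pipeline (name illustrative) to the tagging Lambda:

```json
{
  "source": ["aws.codepipeline"],
  "detail-type": ["CodePipeline Pipeline Execution State Change"],
  "detail": {
    "state": ["SUCCEEDED"],
    "pipeline": ["my-app-pipeline"]
  }
}
```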
Question 89
A DevOps engineer needs to implement a workflow where feature branches are automatically deleted after merging to main.
How should this be configured?
A. Configure CodeCommit auto-delete branch setting on the repository
B. Create a Lambda function triggered by pull request merge events
C. Configure the pull request merge to delete the source branch
D. Both B and C are valid approaches
Answer: D
Explanation:
CodeCommit allows you to delete the source branch when merging a pull request (option in the merge action). Additionally, a Lambda function triggered by pull request merge events via EventBridge can automate branch deletion for any merge method.
Why others are wrong:
A: No such repository-level setting exists
B: Correct but not the only option
C: Correct but requires manual selection each time without automation
Question 90
A company needs to audit all CodeCommit repository access and modifications.
Which AWS service provides this capability?
A. Amazon CloudWatch Logs
B. AWS CloudTrail
C. AWS Config
D. Amazon EventBridge
Answer: B
Explanation:
AWS CloudTrail logs all CodeCommit API calls including repository access, commits, merges, and configuration changes. This provides a complete audit trail of who did what and when.
Why others are wrong:
A: CloudWatch Logs stores application logs, not CodeCommit audit logs
C: Config tracks resource configuration changes, not access patterns
D: EventBridge captures events but doesn't store audit logs
Question 91
A DevOps engineer needs to restrict developers from deleting CodeCommit repositories.
Which IAM action should be explicitly denied?
A. codecommit:DeleteRepository
B. codecommit:RemoveRepository
C. codecommit:DestroyRepository
D. codecommit:TerminateRepository
Answer: A
Explanation:
The IAM action to delete a CodeCommit repository is codecommit:DeleteRepository. Denying this action in IAM policies prevents users from deleting repositories.
Why others are wrong:
B, C, D: These are not valid CodeCommit IAM actions
Question 92
A company uses CodeCommit and wants to enforce commit message formatting (requiring JIRA ticket numbers).
How can this be implemented?
A. Configure CodeCommit commit message validation
B. Use a Lambda trigger to reject commits with invalid messages
C. Configure pre-commit hooks on developer machines
D. Use client-side Git hooks enforced through repository configuration
Answer: C
Explanation:
CodeCommit doesn't support server-side commit message validation. The solution is to configure pre-commit hooks on developer machines that validate commit message format before allowing the commit. This must be set up in developer environments.
Why others are wrong:
A: CodeCommit doesn't have commit message validation
B: Lambda triggers run after push; can't reject commits already pushed
D: Repository can't enforce client-side hooks
Question 93
A DevOps engineer needs to configure a CodePipeline source action that triggers only when files in the /src directory change, not for documentation changes.
Which configuration achieves this?
A. Configure the CodeCommit source action with FilePaths filter
B. Use a Lambda action after source to check changed files
C. Configure EventBridge with file path pattern matching
D. Use a CodeBuild action to check files and stop pipeline if needed
Answer: B
Explanation:
CodePipeline's CodeCommit source action doesn't support file path filters. A Lambda action after the source stage can check which files changed (using the commit information) and stop the pipeline if only documentation was modified.
Why others are wrong:
A: Source actions don't support file path filters
C: EventBridge CodeCommit events don't include file paths
D: CodeBuild could perform the check, but a Lambda action is simpler for this task
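The check inside such a Lambda action reduces to a path-prefix test, sketched below (the real function would list changed paths with CodeCommit's `get_differences` and report back to the pipeline with `put_job_success_result` / `put_job_failure_result`; the `src/` prefix matches the `/src` directory in the question):

```python
def should_continue(changed_files, code_prefix="src/"):
    """True if any changed file is under the code directory,
    i.e. the pipeline should proceed; False for docs-only changes."""
    return any(path.startswith(code_prefix) for path in changed_files)
```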
Question 94
A company stores sensitive configuration in their CodeCommit repository. They want to detect if anyone commits secrets (API keys, passwords).
Which solution should be implemented?
A. Enable CodeCommit secret scanning
B. Use Amazon CodeGuru Security scanning
C. Add a CodeBuild step with secret scanning tools
D. Both B and C are valid approaches
Answer: D
Explanation:
Amazon CodeGuru Security can scan repositories for security issues including hardcoded secrets. Additionally, CodeBuild can run open-source secret scanning tools (like git-secrets, truffleHog) as part of the CI pipeline. Both approaches work.
Why others are wrong:
A: CodeCommit doesn't have native secret scanning
B: Correct but not the only option
C: Correct but not the only option
Question 95
A DevOps engineer is setting up a new CodeCommit repository with initial content from a local Git repository.
Which command sequence is correct?
A. `git remote add codecommit <repository-URL>` then `git push codecommit main`
B. Clone the empty CodeCommit repository, copy the project files in, and commit them
C. `git remote add codecommit <repository-URL>` then `git push codecommit --all`
D. `aws codecommit import-repository --local-path .`
Answer: C
Explanation:
Adding CodeCommit as a remote and pushing all branches with git push --all transfers all branches to CodeCommit. Using --all ensures all local branches are pushed, not just the current branch.
Why others are wrong:
A: Only pushes main branch, might miss other branches
B: Cloning empty repo and copying loses Git history
D: No such AWS CLI command exists
Question 96
A company wants to use CodeArtifact as a central package repository. Developers should be able to download public npm packages through CodeArtifact.
Which configuration is required?
A. Configure an upstream repository pointing to the public npm registry
B. Manually copy packages from npm to CodeArtifact
C. Configure npm to use both CodeArtifact and public npm registry
D. Use a Lambda function to sync packages from npm
Answer: A
Explanation:
CodeArtifact supports upstream repositories, including public registries like npmjs.com. When configured as an upstream, developers request packages from CodeArtifact, and if not found locally, CodeArtifact fetches from the public registry and caches the package.
Why others are wrong:
B: Manual copying is not practical or efficient
C: Direct access to public registry bypasses CodeArtifact's caching and control
D: Lambda sync is unnecessary with upstream configuration
Question 97
A DevOps engineer needs to configure CodeBuild to use CodeArtifact for npm package resolution.
Which buildspec.yml configuration is required?
A. Set NPM_REGISTRY environment variable to CodeArtifact URL
B. Run `aws codeartifact login --tool npm` in pre_build phase
C. Configure npm using .npmrc file in the source repository
D. All options can work depending on requirements
Answer: D
Explanation:
All options are valid: environment variables can point npm at the CodeArtifact registry, the `aws codeartifact login` command sets up temporary authentication, and a .npmrc file can carry the repository configuration. The login command is the most common approach.
Why others are wrong:
A: Works but may require authentication token
B: Correct and common approach
C: Works but credentials management is needed
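A buildspec.yml sketch using the login-command approach (domain and repository names are placeholders):

```yaml
version: 0.2
phases:
  pre_build:
    commands:
      # Fetches a temporary auth token and points npm at CodeArtifact
      - aws codeartifact login --tool npm --domain my-domain --repository my-repo
  build:
    commands:
      - npm ci
      - npm run build
```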
Question 98
A company uses CodeArtifact across multiple AWS accounts. They want a central domain with repositories shared across accounts.
Which configuration is required?
A. Create a domain in the central account and configure resource policies for cross-account access
B. Create separate domains in each account with synchronization
C. Use AWS Organizations to share CodeArtifact resources
D. Configure VPC endpoints in each account
Answer: A
Explanation:
CodeArtifact domains support resource-based policies that allow cross-account access. Creating a central domain and configuring policies to allow other accounts access is the recommended approach for multi-account CodeArtifact usage.
Why others are wrong:
B: Separate domains don't share packages efficiently
D: VPC endpoints are for private access, not cross-account sharing
Question 99
A DevOps engineer needs to prevent developers from downloading packages older than a specific version due to security vulnerabilities.
Which CodeArtifact feature should be used?
A. Package origin controls
B. Package version disposition (unlisting)
C. Repository policies with version conditions
D. Upstream repository filtering
Answer: B
Explanation:
CodeArtifact package version disposition allows you to unlist specific package versions. Unlisted versions won't be returned in package listings and require explicit version requests, effectively preventing accidental use of vulnerable versions.
Why others are wrong:
A: Origin controls manage where packages can come from
C: Repository policies don't have version conditions
D: Upstream filtering doesn't control specific versions
Question 100
A company needs to ensure that all npm packages used in their builds come from CodeArtifact, not directly from the public npm registry.
Which controls should be implemented? (Choose TWO)
A. Configure CodeArtifact as the only upstream for external packages
B. Block direct internet access from CodeBuild VPC
C. Use IAM policies to restrict npm registry access
D. Configure package origin controls in CodeArtifact
E. Use AWS WAF to block npm registry access
Answer: A, D
Explanation:
Package origin controls in CodeArtifact restrict where packages can come from (upstream or direct publish). Combined with configuring CodeArtifact as the only npm registry source, this ensures all packages flow through CodeArtifact.
Why others are wrong:
B: Blocks all internet, not just npm
C: IAM doesn't control npm registry access
E: WAF is for web applications, not npm access
Question 101
A company uses Amazon ECR for container images. They want to automatically delete untagged images older than 30 days.
Which ECR feature should be configured?
A. Repository policies
B. Lifecycle policies
C. Image scanning policies
D. Replication policies
Answer: B
Explanation:
ECR lifecycle policies allow you to define rules for automatic image cleanup based on criteria like age, tag status (tagged/untagged), and count. A rule targeting untagged images older than 30 days will automatically delete them.
Why others are wrong:
A: Repository policies control access, not cleanup
C: Scanning policies are for vulnerability scanning
D: Replication copies images, doesn't delete them
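A lifecycle policy implementing the rule in the answer looks like this (JSON as accepted by `put-lifecycle-policy`):

```json
{
  "rules": [
    {
      "rulePriority": 1,
      "description": "Expire untagged images older than 30 days",
      "selection": {
        "tagStatus": "untagged",
        "countType": "sinceImagePushed",
        "countUnit": "days",
        "countNumber": 30
      },
      "action": {
        "type": "expire"
      }
    }
  ]
}
```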
Question 102
A DevOps engineer needs to ensure that container images pushed to ECR are scanned for vulnerabilities before being used in production.
Which scanning configuration is recommended?
A. Enable basic scanning on push
B. Enable enhanced scanning with Amazon Inspector
C. Use a third-party scanning tool in the CI pipeline
D. Scan images manually before deployment
Answer: B
Explanation:
Amazon Inspector enhanced scanning provides continuous vulnerability scanning with a more comprehensive vulnerability database than basic scanning. It scans on push and continuously rescans as new vulnerabilities are discovered.
Why others are wrong:
A: Basic scanning is less comprehensive
C: Third-party tools add complexity; native solution available
D: Manual scanning doesn't scale
Question 103
A company wants to use the same container image across multiple AWS Regions for disaster recovery.
Which ECR feature should be configured?
A. Cross-region replication
B. Pull-through cache
C. Multi-region S3 replication for the ECR backend
D. Manual image pushing to each Region
Answer: A
Explanation:
ECR supports cross-region replication through replication configuration on the registry. You can configure images to automatically replicate to specified destination Regions, ensuring availability for DR.
Why others are wrong:
B: Pull-through cache is for caching public images
C: You can't configure S3 replication for ECR directly
D: Manual pushing is error-prone and doesn't scale
Question 104
A DevOps engineer needs to prevent tags from being overwritten on ECR images to ensure deployment consistency.
Which configuration should be enabled?
A. Tag immutability
B. Repository locking
C. Image signing
D. Version pinning
Answer: A
Explanation:
ECR tag immutability prevents image tags from being overwritten. Once enabled, pushing an image with an existing tag will fail, ensuring that a specific tag always refers to the same image.
Why others are wrong:
B: Repository locking doesn't exist in ECR
C: Image signing verifies authenticity, not tag protection
D: Version pinning is a consumer-side concept
Question 105
A company uses CodeBuild to build and push Docker images. The builds sometimes fail because the ECR repository doesn't exist.
What solution ensures repositories exist before pushing?
A. Create repositories manually before the first build
B. Use CloudFormation to create ECR repositories
C. Add `aws ecr create-repository` command in buildspec.yml with --if-not-exists logic
D. Both B and C are valid approaches
Answer: D
Explanation:
Both approaches work: CloudFormation can manage ECR repositories as infrastructure, and buildspec.yml can include conditional repository creation. The CLI approach requires checking if the repository exists first or handling the already-exists error.
Why others are wrong:
A: Manual creation doesn't scale
B: Correct but not the only option
C: Correct but not the only option
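A buildspec.yml sketch of the conditional-creation pattern (there is no literal `--if-not-exists` flag, so the usual idiom is describe-or-create; `$IMAGE_REPO` is an illustrative variable):

```yaml
version: 0.2
phases:
  pre_build:
    commands:
      # describe-repositories fails when the repo is missing; create it then
      - >
        aws ecr describe-repositories --repository-names "$IMAGE_REPO" ||
        aws ecr create-repository --repository-name "$IMAGE_REPO"
```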
Question 106
A DevOps engineer needs to implement integration testing in a CodePipeline. The tests require a running instance of the application.
Which approach is recommended?
A. Deploy to a testing environment, run tests, then deploy to production
B. Run tests against production with feature flags
C. Use CodeBuild to deploy and test in an isolated environment
D. Use Lambda to run integration tests
Answer: A
Explanation:
The standard approach is to deploy to a dedicated testing environment, run integration tests against it, and only proceed to production after tests pass. This isolates testing from production and provides realistic test conditions.
Why others are wrong:
B: Testing in production is risky
C: CodeBuild is for building; deployment should use CodeDeploy
D: Lambda is for serverless functions, not integration test environments
Question 107
A company wants to run parallel integration tests in CodeBuild to reduce test execution time.
Which configuration enables parallel test execution?
A. Configure CodeBuild batch builds with test matrix
B. Use multiple CodeBuild projects running simultaneously
C. Configure test parallelization in the test framework settings
D. All of the above can achieve parallel testing
Answer: D
Explanation:
All approaches enable parallel testing: batch builds run multiple builds in parallel, multiple projects can run simultaneously in a pipeline stage, and test frameworks (like Jest, pytest) support parallel test execution within a single build.
Why others are wrong:
A: Correct for build-level parallelism
B: Correct for project-level parallelism
C: Correct for test-level parallelism
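A buildspec.yml sketch of a batch build matrix that fans test suites out into parallel builds (suite names and the npm script are illustrative):

```yaml
version: 0.2
batch:
  build-matrix:
    dynamic:
      env:
        variables:
          TEST_SUITE:
            - unit
            - integration
            - e2e
phases:
  build:
    commands:
      # Each matrix build receives one TEST_SUITE value
      - npm test -- --suite "$TEST_SUITE"
```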
Question 108
A DevOps engineer needs to store test results from CodeBuild and display trends over time.
Which CodeBuild feature should be used?
A. Build artifacts saved to S3
B. CodeBuild test reports
C. CloudWatch custom metrics
D. CodeBuild build logs
Answer: B
Explanation:
CodeBuild test reports store test results (JUnit, NUnit, etc.) with trend visualization in the console. You can see pass/fail rates over time, test duration trends, and individual test results across builds.
Why others are wrong:
A: S3 artifacts store files but don't provide visualization
C: Custom metrics require additional setup and don't store details
D: Build logs are text, not structured test results
Question 109
A company requires security scanning as part of their CI/CD pipeline. The scanning should check for vulnerable dependencies and code security issues.
Which AWS service should be integrated?
A. Amazon Inspector
B. Amazon CodeGuru Security
C. AWS WAF
D. Both A and B, depending on what's being scanned
Answer: D
Explanation:
Amazon Inspector scans container images and EC2 instances for vulnerabilities. Amazon CodeGuru Security scans code repositories for security issues and vulnerable dependencies. Both should be used for comprehensive security scanning.
Why others are wrong:
A: Correct for container/instance scanning
B: Correct for code/dependency scanning
C: WAF is for web application firewall, not code scanning
Question 110
A DevOps engineer needs to implement a quality gate in the pipeline that fails if code coverage drops below 80%.
Which approach should be used?
A. Configure CodeBuild to fail if coverage threshold isn't met
B. Use CodeBuild test reports with coverage thresholds
C. Add a Lambda function to check coverage reports
D. Configure CloudWatch alarms on coverage metrics
Answer: A
Explanation:
Configure your test framework and CodeBuild to fail the build if coverage drops below the threshold. Most test frameworks support coverage thresholds as part of their configuration, and CodeBuild will fail if the tests fail.
Why others are wrong:
B: Test reports display data but don't enforce thresholds
C: Lambda adds complexity; testing framework can enforce
D: CloudWatch alarms are after the fact
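With pytest-cov, for example, the gate is a single flag; the resulting non-zero exit code fails the CodeBuild build (project paths are illustrative):

```yaml
version: 0.2
phases:
  build:
    commands:
      # --cov-fail-under makes pytest exit non-zero below 80% coverage
      - pytest --cov=app --cov-fail-under=80 tests/
```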
Question 111
A company uses S3 to store build artifacts between pipeline stages. They want to minimize storage costs for artifacts that are no longer needed.
Which S3 feature should be configured?
A. S3 Lifecycle policies to transition or delete old artifacts
B. S3 Intelligent Tiering
C. S3 versioning with lifecycle rules
D. Both A and C depending on requirements
Answer: D
Explanation:
S3 Lifecycle policies can delete or transition old artifacts to cheaper storage classes. If versioning is enabled (recommended for artifact integrity), lifecycle rules can also manage non-current versions. Both approaches work together.
Why others are wrong:
A: Correct for basic cleanup
B: Intelligent Tiering optimizes but doesn't delete
C: Correct when versioning is needed
Question 112
A DevOps engineer needs to promote artifacts through environments (dev → staging → production) while ensuring the exact same artifact is used.
Which practice ensures artifact consistency?
A. Rebuild artifacts for each environment
B. Use immutable artifacts with unique identifiers (like Git commit SHA)
C. Store artifacts in environment-specific S3 buckets
D. Use latest tag for artifacts
Answer: B
Explanation:
Immutable artifacts with unique identifiers (commit SHA, build number) ensure the exact same artifact is deployed across environments. This provides traceability and guarantees consistency between environments.
Why others are wrong:
A: Rebuilding can introduce inconsistencies
C: Different buckets don't ensure same artifact
D: Latest tag can change between environments
Question 113
A company needs to share a CodePipeline artifact bucket with a deployment account while maintaining security.
Which configuration is required?
A. Make the S3 bucket public
B. Configure bucket policy with cross-account principal and KMS key policy for the artifact encryption key
C. Copy artifacts to the deployment account's S3 bucket
D. Use AWS Resource Access Manager to share the bucket
Answer: B
Explanation:
For cross-account artifact access, the S3 bucket policy must allow the deployment account's role to read objects. Additionally, if artifacts are encrypted with KMS, the key policy must allow the deployment account to use the key.
Why others are wrong:
A: Public buckets are insecure
C: Copying adds latency and complexity
D: RAM doesn't support S3 bucket sharing
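A sketch of the bucket-policy half (the account ID and logical names are placeholders); the KMS key policy needs a matching statement granting the same principal kms:Decrypt and kms:DescribeKey:

```yaml
ArtifactBucketPolicy:
  Type: AWS::S3::BucketPolicy
  Properties:
    Bucket: !Ref ArtifactBucket
    PolicyDocument:
      Statement:
        - Effect: Allow
          Principal:
            AWS: arn:aws:iam::222222222222:root   # deployment account
          Action:
            - s3:GetObject
            - s3:GetBucketLocation
          Resource:
            - !GetAtt ArtifactBucket.Arn
            - !Sub '${ArtifactBucket.Arn}/*'
```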
Question 114
A DevOps engineer needs to implement smoke tests that run immediately after deployment to verify basic functionality.
Where should smoke tests be implemented?
A. In CodeBuild before deployment
B. In CodeDeploy lifecycle hooks (ValidateService)
C. In a Lambda function triggered after deployment
D. Both B and C are valid, depending on the deployment target
Answer: D
Explanation:
For EC2/on-premises CodeDeploy deployments, the ValidateService lifecycle hook is ideal for smoke tests. For other deployment types or when more complex testing is needed, a Lambda function triggered by EventBridge after deployment can run smoke tests.
Why others are wrong:
A: Before deployment means testing the old version
B: Correct for CodeDeploy to EC2
C: Correct for other deployment types
Question 115
A company uses Python packages and wants to host internal packages while also accessing PyPI public packages.
Which CodeArtifact configuration is needed?
A. Create a repository with PyPI as an upstream
B. Create two repositories: one for internal, one for PyPI
C. Configure pip to use multiple registries
D. Mirror all PyPI packages locally
Answer: A
Explanation:
A single CodeArtifact repository can have the public PyPI as an upstream. Internal packages are published directly to the repository, and public packages are fetched through the upstream, cached, and served to developers.
Why others are wrong:
B: One repository with upstream is simpler
C: Multiple registries complicate configuration
D: Mirroring all of PyPI is impractical
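A minimal CloudFormation sketch of this setup (domain and repository names are illustrative):

```yaml
PythonPackages:
  Type: AWS::CodeArtifact::Repository
  Properties:
    DomainName: !Ref Domain
    RepositoryName: python-packages
    ExternalConnections:
      - public:pypi   # public packages are fetched and cached on demand
```

Developers then point pip at this single repository (for example with `aws codeartifact login --tool pip`), and internal packages are published to the same repository.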
Question 116
A company wants to implement canary deployments for their ECS service. They want to route 10% of traffic to the new version initially.
Which services are involved in this deployment? (Choose THREE)
A. AWS CodeDeploy
B. Application Load Balancer
C. Amazon CloudWatch
D. AWS Lambda
E. Amazon ECS
F. Amazon EC2
Answer: A, B, E
Explanation:
ECS canary deployments use CodeDeploy to manage the deployment, ECS to run the container tasks, and Application Load Balancer to route traffic between task sets. CodeDeploy controls traffic shifting based on the deployment configuration.
Why others are wrong:
C: CloudWatch can be used for alarms but isn't core to canary deployment
D: Lambda is for Lambda deployments, not ECS
F: ECS uses Fargate or EC2 for capacity, but EC2 isn't a required component
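For reference, the CodeDeploy side of an ECS canary is driven by an appspec like the sketch below, paired with a canary deployment configuration such as CodeDeployDefault.ECSCanary10Percent5Minutes (container name and port are illustrative):

```yaml
# appspec.yaml (sketch): CodeDeploy shifts ALB traffic between the old and
# new ECS task sets according to the deployment configuration
version: 0.0
Resources:
  - TargetService:
      Type: AWS::ECS::Service
      Properties:
        TaskDefinition: <TASK_DEFINITION>   # injected by the pipeline
        LoadBalancerInfo:
          ContainerName: web
          ContainerPort: 80
```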
Question 117
A DevOps engineer is implementing blue/green deployments for Lambda functions. The deployment should automatically roll back if errors increase.
Which configuration is required? (Choose TWO)
A. Configure Lambda alias with weighted routing
B. Configure CodeDeploy with CloudWatch alarms for Lambda errors
C. Enable automatic rollback in the CodeDeploy deployment group
D. Configure Lambda provisioned concurrency
E. Create a Lambda destination for failures
Answer: B, C
Explanation:
CodeDeploy Lambda deployments support automatic rollback based on CloudWatch alarms. Configure alarms to trigger on Lambda error metrics and enable automatic rollback in the deployment group. CodeDeploy will roll back if alarms trigger during deployment.
Why others are wrong:
A: CodeDeploy manages alias traffic shifting automatically
D: Provisioned concurrency is for cold start optimization
E: Destinations are for async invocation handling
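The two correct answers map to two properties on the deployment group; a hedged CloudFormation sketch (logical names are illustrative):

```yaml
DeploymentGroup:
  Type: AWS::CodeDeploy::DeploymentGroup
  Properties:
    ApplicationName: !Ref CodeDeployApp
    ServiceRoleArn: !GetAtt CodeDeployServiceRole.Arn
    DeploymentConfigName: CodeDeployDefault.LambdaCanary10Percent5Minutes
    DeploymentStyle:
      DeploymentType: BLUE_GREEN
      DeploymentOption: WITH_TRAFFIC_CONTROL
    AlarmConfiguration:          # answer B: alarm on Lambda error metrics
      Enabled: true
      Alarms:
        - Name: !Ref LambdaErrorsAlarm
    AutoRollbackConfiguration:   # answer C: automatic rollback
      Enabled: true
      Events:
        - DEPLOYMENT_FAILURE
        - DEPLOYMENT_STOP_ON_ALARM
```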
Question 118
A company needs to deploy a new version of their application without any downtime. The application runs on EC2 instances behind an ALB.
Which deployment strategies provide zero downtime? (Choose TWO)
A. In-place with AllAtOnce configuration
B. In-place with OneAtATime configuration
C. Blue/green deployment
D. Rolling deployment with minimum healthy hosts
Answer: C, D
Explanation:
Blue/green deployments maintain the original environment while deploying to a new one, then switch traffic. Rolling deployments with minimum healthy hosts ensure some instances are always serving traffic. Both provide zero downtime.
Why others are wrong:
A: AllAtOnce updates all instances simultaneously, causing downtime
B: OneAtATime minimizes impact, but each instance is briefly out of service while it is updated in place, so it is not strictly zero downtime
Question 119
A DevOps engineer needs to implement a deployment strategy that allows instant rollback to the previous version.
Which deployment type provides this capability?
A. In-place deployment with automatic rollback
B. Blue/green deployment
C. Rolling deployment
D. Canary deployment
Answer: B
Explanation:
Blue/green deployments maintain both versions running simultaneously (before terminating the old environment). Rollback is instant by switching traffic back to the original (blue) environment without redeployment.
Why others are wrong:
A: In-place rollback requires redeploying the previous version
C: Rolling requires redeploying to roll back
D: Canary is a traffic pattern; rollback depends on underlying deployment type
Question 120
A company deploys Lambda functions using SAM. They want to use CodeDeploy for gradual traffic shifting with validation.
Which SAM template configuration enables this?
A. ```yaml
AutoPublishAlias: live
DeploymentPreference:
  Type: Canary10Percent10Minutes
  Alarms:
    - !Ref AliasErrorMetricGreaterThanZeroAlarm
```
B. ```yaml
CodeDeployApplication: !Ref MyCodeDeployApp
DeploymentGroup: !Ref MyDeploymentGroup
```
C. ```yaml
DeploymentType: BlueGreen
TrafficShift: 10Percent
```
D. ```yaml
Deployment:
  Strategy: Canary
  Percentage: 10
```
Answer: A
Explanation:
SAM integrates with CodeDeploy through the DeploymentPreference property. AutoPublishAlias creates an alias, and DeploymentPreference specifies the traffic shifting type, alarms for rollback, and optional hooks.
Why others are wrong:
B: Doesn't specify deployment type
C: Invalid SAM syntax
D: Invalid SAM syntax
Question 121
A company uses feature flags to control feature availability. They want to gradually enable a feature for users over several days.
Which AWS service supports this use case?
A. AWS AppConfig
B. AWS Systems Manager Parameter Store
C. Amazon DynamoDB
D. AWS Lambda environment variables
Answer: A
Explanation:
AWS AppConfig supports feature flags with gradual rollout capabilities. You can deploy configuration changes gradually using deployment strategies (linear, canary) and roll back if issues occur. This is purpose-built for feature flag management.
Why others are wrong:
B: Parameter Store stores values but lacks gradual rollout
C: DynamoDB requires custom implementation
D: Environment variables require redeployment
Question 122
A DevOps engineer needs to implement a pipeline where deployments to production can only occur after business hours.
Which approach should be used?
A. Configure IAM policies with time-based conditions
B. Use AWS Systems Manager Maintenance Windows
C. Add a Lambda function that checks the time before approval
D. Configure CodePipeline with time-based triggers
Answer: B
Explanation:
Systems Manager Maintenance Windows can schedule deployment actions during specific time windows. Combined with CodePipeline, you can ensure production deployments only occur during allowed maintenance windows.
Why others are wrong:
A: IAM time conditions are complex for this use case
C: Lambda checking time doesn't prevent deployments
D: CodePipeline doesn't have native time-based triggers
Question 123
A company wants to implement a deployment approval process where the deployment must be approved by two different people.
Which CodePipeline configuration supports this?
A. Configure two consecutive manual approval actions
B. Configure a single approval action with two required approvers
C. Use IAM policies to require two approvals
D. Manual approval actions support only single approver
Answer: A
Explanation:
CodePipeline manual approval actions require one approval each. To require two approvals, configure two consecutive manual approval actions in the pipeline. Each action can have a different SNS topic and approval group.
Why others are wrong:
B: Single approval action requires only one approval
C: IAM doesn't control number of approvers
D: Correct that single action needs one approval, but multiple actions work
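In a CloudFormation pipeline definition this looks like two Approval actions in one stage, sequenced by RunOrder (action and topic names are illustrative):

```yaml
- Name: ProductionApproval
  Actions:
    - Name: SeniorEngineerApproval
      ActionTypeId:
        Category: Approval
        Owner: AWS
        Provider: Manual
        Version: '1'
      Configuration:
        NotificationArn: !Ref FirstApproverTopic
      RunOrder: 1
    - Name: ReleaseManagerApproval
      ActionTypeId:
        Category: Approval
        Owner: AWS
        Provider: Manual
        Version: '1'
      Configuration:
        NotificationArn: !Ref SecondApproverTopic
      RunOrder: 2
```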
Question 124
A DevOps engineer is troubleshooting a canary deployment that rolled back. The team wants to understand why the rollback occurred.
Where should they look for information? (Choose TWO)
A. CodeDeploy deployment logs
B. CloudWatch alarm history
C. CodePipeline execution history
D. EC2 instance logs
E. CloudTrail logs
Answer: A, B
Explanation:
CodeDeploy deployment logs show the deployment progression and rollback reason. If the rollback was due to CloudWatch alarms, the alarm history shows when alarms triggered and their state changes during the deployment.
Why others are wrong:
C: CodePipeline shows action success/failure, not detailed rollback reasons
D: Instance logs might help but aren't the primary source
E: CloudTrail shows API calls, not deployment reasoning
Question 125
A company needs to implement a deployment strategy for their database schema changes that can't be rolled back automatically.
Which approach is recommended?
A. Use blue/green deployment for database changes
B. Use expand and contract pattern (backward compatible migrations)
C. Deploy database changes in the application deployment
D. Stop using automated deployments for database changes
Answer: B
Explanation:
The expand and contract pattern involves making backward-compatible schema changes (expand), deploying application changes, then removing deprecated elements (contract). This allows the application to work with both old and new schemas during transition.
Why others are wrong:
A: Blue/green for databases is complex and expensive
C: Coupling app and DB changes can cause issues
D: Automation is still valuable with proper patterns
Question 126
A company uses AWS CDK to define their CI/CD pipeline. They want the pipeline to automatically update itself when the CDK code changes.
Which CDK feature enables this?
A. CDK auto-deploy
B. CDK Pipelines (self-mutating pipeline)
C. CDK bootstrap with auto-update
D. CloudFormation StackSets
Answer: B
Explanation:
CDK Pipelines is a construct library for creating self-mutating CI/CD pipelines. When the pipeline definition in CDK code changes, the pipeline automatically updates itself before proceeding with application deployments.
Why others are wrong:
A: Not a real CDK feature
C: Bootstrap is for environment setup
D: StackSets are for multi-account deployment
Question 127
A DevOps engineer needs to deploy CloudFormation stacks to multiple AWS accounts simultaneously as part of a pipeline.
Which service should be used?
A. CodePipeline with multiple deploy actions
B. CloudFormation StackSets
C. AWS Organizations deployment
D. Both A and B can work
Answer: D
Explanation:
CodePipeline can have deploy actions for different accounts (assuming cross-account roles are configured). CloudFormation StackSets can deploy to multiple accounts from a single stack definition. Both approaches work for multi-account deployment.
Why others are wrong:
A: Correct but not the only option
B: Correct for simultaneous multi-account deployment
C: Organizations doesn't deploy directly
Question 128
A company is implementing GitOps practices with AWS. They want Kubernetes deployments to be driven by Git repository changes.
Which approach aligns with GitOps on AWS?
A. Use CodePipeline to deploy to EKS when CodeCommit changes
B. Use ArgoCD or Flux running in EKS, syncing from CodeCommit
C. Use Lambda to apply Kubernetes manifests on Git push
D. Both A and B are valid GitOps approaches
Answer: D
Explanation:
GitOps can be implemented with CI/CD pipelines (CodePipeline) triggering deployments, or with pull-based tools like ArgoCD/Flux running in the cluster, continuously syncing from Git. Both patterns are valid GitOps implementations.
Why others are wrong:
A: Correct (push-based GitOps)
B: Correct (pull-based GitOps)
C: Lambda is possible but not standard GitOps
Question 129
A DevOps engineer needs to implement semantic versioning for their application artifacts. Each build should produce an artifact with a version based on Git tags.
Which approach should be used?
A. Use CodeBuild environment variables to capture Git tag
B. Manually specify version in buildspec.yml
C. Use semantic-release tool in CodeBuild
D. Configure CodePipeline to pass tag information
Answer: C
Explanation:
Tools like semantic-release analyze commit messages (following conventional commits) and automatically determine the next version, create Git tags, and update package files. This automates semantic versioning in CI/CD.
Why others are wrong:
A: Captures existing tags but doesn't determine next version
B: Manual versioning is error-prone
D: CodePipeline passes source information but doesn't determine versions
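A minimal buildspec sketch (assumes Node tooling and a semantic-release configuration already present in the repository):

```yaml
version: 0.2
phases:
  install:
    commands:
      - npm ci
  build:
    commands:
      # analyzes conventional-commit messages since the last release,
      # determines the next semantic version, tags the repo, and publishes
      - npx semantic-release
```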
Question 130
A company uses multiple AWS services in their CI/CD pipeline. They want centralized visibility into pipeline performance and failures.
Which approach provides this visibility?
A. Create a custom CloudWatch dashboard aggregating metrics
B. Use AWS Service Catalog for pipeline management
C. Enable AWS X-Ray tracing for pipelines
D. Use Amazon DevOps Guru
Answer: A
Explanation:
A CloudWatch dashboard can aggregate metrics from CodePipeline, CodeBuild, CodeDeploy, and other services. This provides a centralized view of pipeline health, build times, deployment success rates, and other KPIs.
Why others are wrong:
B: Service Catalog is for product provisioning
C: X-Ray is for application tracing, not pipeline tracing
D: DevOps Guru is for operational insights, not CI/CD metrics
Question 131
A DevOps engineer needs to implement a pipeline that deploys to EKS. The deployment should update Kubernetes deployments without downtime.
Which approach should be used?
A. Use CodeBuild to run kubectl apply with rolling update strategy
B. Use Lambda to call EKS API
C. Use CodeDeploy for EKS deployments
D. Use Helm with CodeBuild for deployment
Answer: D
Explanation:
Helm is the standard package manager for Kubernetes. Using CodeBuild to run Helm upgrade with appropriate values allows controlled deployments with rollback capability. Kubernetes rolling update strategy ensures zero downtime.
Why others are wrong:
A: kubectl works but Helm provides better management
B: Lambda adds complexity without benefit
C: CodeDeploy doesn't natively support EKS (only ECS)
Question 132
A company wants to prevent deployments if any security vulnerabilities are found in their container images.
Which pipeline configuration enforces this?
A. Add Amazon Inspector scanning and fail the build on critical vulnerabilities
B. Enable ECR image scanning and check results in the pipeline
C. Use CodeGuru Security as a quality gate
D. Both A and B can be used as quality gates
Answer: D
Explanation:
Amazon Inspector continuously scans container images in ECR for vulnerabilities, and its findings can be checked from the pipeline (for example via the API or an EventBridge-triggered Lambda function). ECR image scanning results can likewise be retrieved through the DescribeImageScanFindings API. Either can be integrated into pipelines as a quality gate that blocks vulnerable images.
Why others are wrong:
A: Correct for container vulnerability scanning
B: Correct for ECR-based scanning
C: CodeGuru Security is for code, not containers
Question 133
A DevOps engineer is implementing infrastructure deployment using Terraform in a CodePipeline.
Which approach is recommended for Terraform state management?
A. Store state files in the CodePipeline artifact bucket
B. Use Terraform Cloud or S3 backend with DynamoDB locking
C. Store state files in CodeCommit
D. Use local state in CodeBuild
Answer: B
Explanation:
Terraform state should be stored in a proper backend like S3 with DynamoDB for state locking. This prevents concurrent modifications and maintains state independently of the pipeline execution.
Why others are wrong:
A: Artifact bucket isn't designed for persistent state
C: CodeCommit isn't appropriate for state files
D: Local state is lost between builds
Question 134
A company needs to deploy the same application version to multiple environments (dev, test, staging) with environment-specific configurations.
Which pattern should be implemented?
A. Build separate artifacts per environment
B. Use the same artifact with environment-specific configuration files
C. Use the same artifact with parameter overrides during deployment
D. Both B and C are valid approaches
Answer: D
Explanation:
The same artifact should be used across environments for consistency. Configuration can be externalized in environment-specific files packaged with the artifact, or configuration can be injected at deployment time through parameter overrides.
Why others are wrong:
A: Different artifacts can have inconsistencies
B: Correct approach
C: Correct approach
Question 135
A DevOps engineer needs to implement a pipeline that handles both application code and infrastructure code changes.
Which design should be used?
A. Single pipeline for both application and infrastructure
B. Separate pipelines for application and infrastructure with dependencies
C. One pipeline with conditional stages based on changed files
D. The design depends on organizational and technical requirements
Answer: D
Explanation:
The design choice depends on several factors: team structure, change frequency, blast radius concerns, and coupling between application and infrastructure. All options can be valid depending on specific requirements.
Why others are wrong:
A: Can work for tightly coupled systems
B: Can work for loosely coupled systems
C: Can work when changes are usually isolated
Question 136
A company uses multiple microservices, each with its own CodePipeline. They want to trigger a deployment pipeline only when all component service pipelines have succeeded.
Which approach should be used?
A. Use EventBridge to track pipeline completions and trigger when all succeed
B. Create a parent pipeline that includes all service deployments
C. Use AWS Step Functions to orchestrate pipeline executions
D. Both A and C are valid approaches
Answer: D
Explanation:
EventBridge can capture pipeline completion events and a Lambda function can track whether all required pipelines have succeeded before triggering deployment. Step Functions can also orchestrate multiple pipeline executions with complex logic.
Why others are wrong:
A: Correct for event-driven orchestration
B: Monolithic pipeline is harder to manage
C: Correct for workflow orchestration
Question 137
A DevOps engineer needs to ensure that infrastructure changes are peer-reviewed before being applied.
Which approach implements this for CloudFormation deployments?
A. Use CloudFormation change sets with manual review
B. Implement infrastructure code in CodeCommit with pull requests
C. Add a manual approval action before CloudFormation deployment
D. All of the above can be part of the review process
Answer: D
Explanation:
Comprehensive review includes: code review via pull requests before merge, change sets to preview what will change, and manual approval in the pipeline as a final gate. All three can be combined for thorough review.
Why others are wrong:
A: Correct for reviewing actual changes
B: Correct for code review
C: Correct for final approval
Question 138
A company wants to implement chaos engineering practices integrated with their CI/CD pipeline.
Which approach should be used?
A. Run AWS Fault Injection Simulator experiments during pipeline testing stages
B. Use chaos testing only in production
C. Implement custom chaos scripts in Lambda
D. Chaos engineering isn't compatible with CI/CD
Answer: A
Explanation:
AWS Fault Injection Simulator (FIS) can be integrated into CI/CD pipelines to run chaos experiments during testing stages. This validates resilience before production deployment and can be automated as part of the release process.
Why others are wrong:
B: Testing in pre-production reduces production risk
C: FIS provides managed chaos engineering
D: Chaos engineering enhances CI/CD confidence
Question 139
A DevOps engineer needs to implement blue/green deployment for an RDS database along with the application.
Which approach is recommended?
A. Create a read replica, promote it, switch application connection
B. Use RDS proxy for connection switching
C. Implement database changes with backward-compatible migrations
D. Use Aurora with clone for blue/green
Answer: D
Explanation:
Amazon Aurora supports blue/green deployments natively: using RDS Blue/Green Deployments (or a fast clone of the production cluster), you apply changes to a synchronized staging copy and switch over when ready. This provides safe database deployments with rollback capability.
Why others are wrong:
A: Works but is more complex to manage
B: Proxy helps with failover, not blue/green
C: Migration pattern, not blue/green for entire database
Question 140
A company needs to implement compliance checking as part of their CI/CD pipeline. Deployments should fail if resources don't comply with company policies.
Which service should be integrated?
A. AWS Config rules evaluated during deployment
B. AWS IAM Access Analyzer
C. CloudFormation Guard for template validation
D. Both A and C depending on what's being validated
Answer: D
Explanation:
CloudFormation Guard can validate templates against policies before deployment. AWS Config rules can be evaluated to ensure deployed resources comply with policies, triggering rollback if non-compliant. Both serve different stages of compliance.
Why others are wrong:
A: Correct for post-deployment compliance
B: Access Analyzer is for IAM policy analysis
C: Correct for pre-deployment template validation
Question 141
A DevOps engineer is implementing a pipeline for a containerized application. The pipeline should support both quick hotfix deployments and regular feature deployments.
Which pipeline design supports this?
A. Single pipeline with different paths based on branch
B. Two separate pipelines for hotfix and feature branches
C. Single pipeline with feature flags to control deployment speed
D. Both A and B are valid approaches
Answer: D
Explanation:
You can use a single pipeline with conditional logic based on the source branch, or maintain separate pipelines for hotfixes (faster, fewer tests) and features (complete testing). Both patterns are common in enterprise environments.
Why others are wrong:
A: Valid for unified pipeline management
B: Valid for different deployment needs
C: Feature flags control features, not deployment speed
Question 142
A company needs to implement database migration as part of their CodePipeline. The migration should run once before application deployment to any instance.
Which approach should be used?
A. Run migrations in CodeDeploy BeforeInstall hook
B. Add a Lambda action before CodeDeploy that runs migrations
C. Add a CodeBuild action before CodeDeploy that runs migrations
D. Both B and C work depending on migration complexity
Answer: D
Explanation:
Simple migrations can run in Lambda; complex migrations requiring database tools run better in CodeBuild with a custom image. Both run as a single action before application deployment, ensuring migration completes once.
Why others are wrong:
A: Would run on every instance
B: Works for simple migrations
C: Works for complex migrations
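For the CodeBuild variant, a hedged buildspec sketch (Alembic and the secret name are assumptions for illustration):

```yaml
version: 0.2
env:
  secrets-manager:
    DB_URL: prod/db/connection-string   # illustrative secret id
phases:
  install:
    commands:
      - pip install alembic
  build:
    commands:
      - alembic upgrade head   # runs the migration once, before CodeDeploy
```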
Question 143
A DevOps engineer needs to implement a pipeline that deploys to both AWS and on-premises servers.
Which approach should be used?
A. Use separate pipelines for AWS and on-premises
B. Use CodeDeploy with deployment groups for both AWS and on-premises
C. Use Ansible or similar tool for on-premises, CodeDeploy for AWS
D. Both B and C are valid approaches
Answer: D
Explanation:
CodeDeploy supports both EC2/Lambda and on-premises servers (with the agent installed). Alternatively, different deployment tools can be used for different environments within the same pipeline using CodeBuild or Lambda actions.
Why others are wrong:
A: Separate pipelines reduce consistency
B: Correct for unified CodeDeploy approach
C: Correct for hybrid tooling approach
Question 144
A company uses AWS Organizations with multiple accounts. They want to deploy a standardized CI/CD pipeline to all developer accounts.
Which approach should be used?
A. Manually create pipelines in each account
B. Use CloudFormation StackSets to deploy pipeline templates across accounts
C. Use AWS Service Catalog to provide pipeline products
D. Both B and C can work together
Answer: D
Explanation:
StackSets can deploy standardized pipeline infrastructure across accounts automatically. Service Catalog can provide self-service pipeline provisioning for teams. They can work together: StackSets for baseline, Service Catalog for team-specific pipelines.
Why others are wrong:
A: Manual creation doesn't scale
B: Correct for automated deployment
C: Correct for self-service
Question 145
A DevOps engineer needs to implement a pipeline that uses dynamic credentials rather than long-term access keys.
Which approach should be used?
A. Store access keys in Secrets Manager with rotation
B. Use IAM roles for CodeBuild and CodeDeploy
C. Use AssumeRole for cross-account access
D. Both B and C are best practices
Answer: D
Explanation:
CodeBuild and CodeDeploy should use IAM roles attached to the service, providing temporary credentials. Cross-account access should use AssumeRole rather than long-term credentials. Together, these eliminate long-term credential management.
Why others are wrong:
A: Even rotated, access keys are less secure than roles
B: Correct for service roles
C: Correct for cross-account access
Question 146
A company wants to implement trunk-based development with their CodeCommit and CodePipeline setup.
Which configuration supports trunk-based development?
A. Configure pipeline to trigger on main branch only
B. Use feature flags for incomplete features
C. Implement short-lived feature branches with frequent merges
D. All of the above are part of trunk-based development
Answer: D
Explanation:
Trunk-based development involves a single main branch (trunk) with short-lived feature branches, frequent integration to main, feature flags for incomplete features, and CI/CD that deploys from the main branch.
Why others are wrong:
A: Core principle
B: Core principle
C: Core principle
Question 147
A DevOps engineer needs to implement approval workflows that integrate with the company's existing ticketing system (JIRA).
Which approach should be used?
A. Use manual approval with external URL pointing to JIRA
B. Create a Lambda function that checks JIRA ticket status before allowing deployment
C. Use AWS Service Catalog with JIRA integration
D. Both A and B can be part of the solution
Answer: D
Explanation:
Manual approvals can include a custom URL for context (JIRA ticket link). Lambda can integrate with JIRA API to verify ticket status/approvals before allowing deployment to proceed. Both create a connected workflow.
Why others are wrong:
A: Provides context but doesn't enforce
B: Enforces but may need manual context
C: Service Catalog isn't for pipeline approval
Question 148
A company needs to implement rollback for serverless applications deployed with SAM.
Which approach provides rollback capability?
A. SAM with CloudFormation rollback triggers
B. SAM with CodeDeploy deployment preferences and alarms
C. Manual rollback using SAM CLI
D. Both A and B for comprehensive rollback
Answer: D
Explanation:
SAM deployments use CloudFormation, which supports rollback triggers. Adding DeploymentPreference in SAM templates enables CodeDeploy-managed Lambda deployments with automatic rollback based on alarms. Together, they provide infrastructure and application rollback.
Why others are wrong:
A: Handles infrastructure rollback
B: Handles traffic shift rollback
C: Manual isn't automatic
Question 149
A DevOps engineer is implementing a multi-region deployment pipeline. The application should deploy to the primary region first, and only proceed to secondary regions after validation.
Which pipeline design achieves this?
A. Sequential stages for each region
B. Parallel deployment to all regions with regional testing
C. Primary region deployment with validation, then parallel secondary deployments
D. Use CloudFormation StackSets for simultaneous deployment
Answer: C
Explanation:
Deploy to primary region first, run validation tests, then deploy to secondary regions in parallel. This ensures the primary region is working before affecting secondary regions, while still enabling fast secondary region rollout.
Why others are wrong:
A: Sequential is slower than needed
B: Doesn't ensure primary is validated first
D: StackSets deploys simultaneously without validation between
Question 150
A company wants to measure the performance of their CI/CD pipeline and identify bottlenecks.
Which metrics should be tracked? (Choose THREE)
A. Lead time for changes (commit to production)
B. Deployment frequency
C. Number of CodeCommit repositories
D. Change failure rate
E. Number of pipeline stages
F. Mean time to recovery
Answer: A, B, D
Explanation:
Lead time, deployment frequency, change failure rate, and mean time to recovery are the four DORA (DevOps Research and Assessment) metrics that measure CI/CD performance. These identify bottlenecks and areas for improvement.
Why others are wrong:
C: Repository count isn't a performance metric
E: Stage count doesn't indicate performance
F: Mean time to recovery is also a DORA metric, but the question asks for only three answers
---
📚 Summary: Key Points for Domain 1
Service Limits and Defaults
CodeBuild timeout: default 60 minutes, max 480 minutes
CodeDeploy traffic shifting for Lambda/ECS: AllAtOnce, Canary (X% first, remainder after a wait), Linear (an additional X% every Y minutes)
Critical Concepts
Blue/Green: Two environments, instant rollback
Canary: Small percentage first for testing
Rolling: Batch updates maintaining availability
Feature Flags: AppConfig for gradual feature rollout
---
Good luck with your AWS DevOps Professional exam! Remember to focus on understanding *why* services and configurations are used, not just *what* they do. The exam tests practical knowledge and scenario-based problem solving.