A company uses AWS Organizations to manage its AWS accounts. A DevOps engineer must ensure that all users who access the AWS Management Console are authenticated through the company's corporate identity provider (IdP).
Which combination of steps will meet these requirements? (Select TWO.)
Correct: B, E

* Step 1: Federating Console Access Through IAM Identity Center
To route all AWS Management Console sign-ins through the corporate IdP, configure identity federation with SAML 2.0 in AWS IAM Identity Center.
Action: Use AWS IAM Identity Center to configure identity federation with SAML 2.0.
Why: IAM Identity Center becomes the single entry point for console access, so every sign-in is authenticated by the corporate IdP.
This corresponds to Option B: Use AWS IAM Identity Center to configure identity federation with SAML 2.0.
* Step 2: Creating an SCP to Deny Password Logins for IAM Users
To enforce that IAM users cannot create passwords and sign in to the Management Console directly, bypassing the corporate IdP, create a service control policy (SCP) in AWS Organizations that denies password creation for IAM users.
Action: Create an SCP that denies password creation for IAM users.
Why: This ensures that users cannot set passwords on their IAM user accounts, forcing them to use federated access through the corporate IdP for console login.
This corresponds to Option E: Create an SCP in Organizations to deny password creation for IAM users.
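As a minimal sketch, an SCP of this kind could deny the IAM actions that create or change console passwords. The action names below are standard IAM actions; the exact set your organization blocks may differ.

```python
import json

# Hedged sketch of an SCP that blocks console password creation for IAM users.
deny_iam_passwords_scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyIamUserConsolePasswords",
            "Effect": "Deny",
            "Action": [
                "iam:CreateLoginProfile",  # sets an initial console password
                "iam:UpdateLoginProfile",  # changes an existing console password
            ],
            "Resource": "*",
        }
    ],
}

print(json.dumps(deny_iam_passwords_scp, indent=2))
```

Attaching this SCP at the organization root applies it to every member account, so no IAM user anywhere in the organization can be given a console password.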
A company recently migrated its application to an Amazon Elastic Kubernetes Service (Amazon EKS) cluster that uses Amazon EC2 instances. The company configured the application to automatically scale based on CPU utilization.
The application produces memory errors when it experiences heavy loads. The application also does not scale out enough to handle the increased load. The company needs to collect and analyze memory metrics for the application over time.
Which combination of steps will meet these requirements? (Select THREE.)
Correct: A, C, E

* Step 1: Granting Permissions for the CloudWatch Agent
The EC2 instances need IAM permissions to publish the metrics that the CloudWatch agent collects. Attaching the CloudWatchAgentServerPolicy managed policy to the instance profile grants these permissions.
Action: Attach the CloudWatchAgentServerPolicy managed IAM policy to the IAM instance profile that the cluster uses.
Why: Without these permissions, the agent cannot send memory metrics to CloudWatch.
This corresponds to Option A: Attach the CloudWatchAgentServerPolicy managed IAM policy to the IAM instance profile that the cluster uses.
* Step 2: Deploying the CloudWatch Agent to the EC2 Instances
To collect memory metrics from the EC2 instances that run the EKS cluster's nodes, deploy the CloudWatch agent to those instances. The agent collects system-level metrics, including memory usage, which EC2 does not report by default.
Action: Deploy the unified Amazon CloudWatch agent to the existing EC2 instances in the EKS cluster, and update the Amazon Machine Image (AMI) so that future instances include the agent.
Why: The CloudWatch agent collects detailed memory metrics from the EC2 instances; these metrics are not enabled by default.
This corresponds to Option C: Collect performance metrics by deploying the unified Amazon CloudWatch agent to the existing EC2 instances in the cluster. Add the agent to the AMI for any new EC2 instances that are added to the cluster.
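A minimal agent configuration for this step might enable only the memory metrics. The field names below follow the CloudWatch agent's JSON configuration schema; the collection interval and measurement list are illustrative choices.

```python
import json

# Hedged sketch of a CloudWatch agent configuration that adds memory metrics,
# which the default EC2 metrics do not include.
agent_config = {
    "metrics": {
        "metrics_collected": {
            "mem": {
                "measurement": ["mem_used_percent"],  # percent of memory in use
                "metrics_collection_interval": 60,    # seconds between samples
            }
        }
    }
}

# The agent reads this JSON from its configuration file on each instance.
print(json.dumps(agent_config, indent=2))
```

Baking this configuration into the AMI alongside the agent ensures that nodes added by scaling events report memory metrics immediately.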
* Step 3: Analyzing Memory Metrics with Container Insights
After the memory metrics are collected, analyze them with the pod_memory_utilization metric in Amazon CloudWatch Container Insights. This metric provides visibility into the memory usage of the pods in the EKS cluster.
Action: Analyze the pod_memory_utilization CloudWatch metric in the Container Insights namespace by using the Service dimension.
Why: This provides detailed insight into memory usage at the container level, which helps diagnose memory-related issues over time.
This corresponds to Option E: Analyze the pod_memory_utilization Amazon CloudWatch metric in the Container Insights namespace by using the Service dimension.
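A query for this metric can be sketched as a CloudWatch GetMetricData metric-data-query document. The cluster, namespace, and service names below are placeholders.

```python
# Hedged sketch of a metric data query for the Container Insights metric.
# "my-eks-cluster", "default", and "my-service" are placeholder values.
query = {
    "Id": "podMemory",
    "MetricStat": {
        "Metric": {
            "Namespace": "ContainerInsights",
            "MetricName": "pod_memory_utilization",
            "Dimensions": [
                {"Name": "ClusterName", "Value": "my-eks-cluster"},
                {"Name": "Namespace", "Value": "default"},
                {"Name": "Service", "Value": "my-service"},
            ],
        },
        "Period": 300,        # 5-minute aggregation
        "Stat": "Average",
    },
}
# A list such as [query] would be passed to CloudWatch's get_metric_data
# call together with a start and end time to chart memory usage over time.
```

Charting this query over the periods when the application saw heavy load shows whether memory, rather than CPU, should drive the scaling policy.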
A company has developed a static website hosted on an Amazon S3 bucket. The website is deployed using AWS CloudFormation. The CloudFormation template defines an S3 bucket and a custom resource that copies content into the bucket from a source location.
The company has decided that it needs to move the website to a new location, so the existing CloudFormation stack must be deleted and re-created. However, CloudFormation reports that the stack could not be deleted cleanly.
What is the MOST likely cause and how can the DevOps engineer mitigate this problem for this and future versions of the website?
Correct: B

The most likely cause is that the S3 bucket still contains the website content: CloudFormation cannot delete a bucket that is not empty, so the stack deletion fails. The fix is to have the custom resource empty the bucket whenever the stack is deleted.
This corresponds to Option B: Deletion has failed because the S3 bucket is not empty. Modify the custom resource's AWS Lambda function code to recursively empty the bucket when RequestType is Delete.
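The custom resource's Delete path could look like the following sketch. The response signalling back to CloudFormation and the Create/Update copy logic are elided, and versioned objects are not handled here for brevity.

```python
def empty_bucket(s3, bucket):
    """Delete every object in the bucket, one page of listings at a time."""
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=bucket):
        objects = [{"Key": obj["Key"]} for obj in page.get("Contents", [])]
        if objects:
            s3.delete_objects(Bucket=bucket, Delete={"Objects": objects})


def handler(event, context):
    # Hedged sketch of the custom resource handler: empty the bucket on
    # Delete so CloudFormation can then remove the (now empty) bucket.
    if event["RequestType"] == "Delete":
        import boto3  # imported lazily; the Lambda runtime provides it
        empty_bucket(boto3.client("s3"), event["ResourceProperties"]["BucketName"])
    # ... on Create/Update, copy the site content into the bucket, then
    # send a SUCCESS/FAILED response back to CloudFormation.
```

Because the same handler runs for every future stack deletion, this fixes the problem for this version of the website and all later ones.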
A company uses an AWS CodeCommit repository to store its source code and corresponding unit tests. The company has configured an AWS CodePipeline pipeline that includes an AWS CodeBuild project that runs when code is merged to the main branch of the repository.
The company wants the CodeBuild project to run the unit tests. If the unit tests pass, the CodeBuild project must tag the most recent commit.
How should the company configure the CodeBuild project to meet these requirements?
Correct: A

Running the tests and the tagging in the same build keeps the workflow simple: because build commands stop at the first failure, the tag is created and pushed only when the unit tests pass.
This corresponds to Option A: Configure the CodeBuild project to use native Git to clone the CodeCommit repository. Configure the project to run the unit tests. Configure the project to use native Git to create a tag and to push the Git tag to the repository if the code passes the unit tests.
A company has an organization in AWS Organizations for its multi-account environment. A DevOps engineer is developing an AWS CodeArtifact based strategy for application package management across the organization. Each application team at the company has its own account in the organization. Each application team also has limited access to a centralized shared services account.
Each application team needs full access to download, publish, and grant access to its own packages. Some common library packages that the application teams use must also be shared with the entire organization.
Which combination of steps will meet these requirements with the LEAST administrative overhead? (Select THREE.)
Correct: B, D, E

* Step 1: Creating a Central CodeArtifact Domain
A single CodeArtifact domain in the shared services account gives the organization one place to manage packages and permissions. Granting the organization read access and CreateRepository access lets each application team create and manage its own repositories within the domain.
Action: Create a domain in the shared services account. Grant the organization read access and CreateRepository access.
Why: A central domain minimizes administrative overhead while still letting each team own its repositories.
This corresponds to Option B: Create a domain in the shared services account. Grant the organization read access and CreateRepository access.
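A domain policy along these lines could grant the whole organization access with a single statement, keyed on the organization ID rather than on individual accounts. The organization ID and action list below are illustrative.

```python
import json

# Hedged sketch of a CodeArtifact domain policy that grants every account
# in the organization read access and CreateRepository access.
# "o-exampleorgid" is a placeholder organization ID.
domain_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "OrgWideDomainAccess",
            "Effect": "Allow",
            "Principal": "*",
            "Action": [
                "codeartifact:GetAuthorizationToken",  # needed to pull packages
                "codeartifact:DescribeDomain",
                "codeartifact:CreateRepository",       # teams create their own repos
            ],
            "Resource": "*",
            "Condition": {"StringEquals": {"aws:PrincipalOrgID": "o-exampleorgid"}},
        }
    ],
}

print(json.dumps(domain_policy, indent=2))
```

Scoping by `aws:PrincipalOrgID` means new member accounts are covered automatically, which is what keeps the administrative overhead low.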
* Step 2: Sharing Common Packages Through an Upstream Repository
To share common library packages across the organization, each application team's repository can declare the shared services repository as an upstream repository. Teams then resolve shared packages through their own repositories without managing copies in each account.
Action: Create a repository in the shared services account and set it as the upstream repository for each application team's repository.
Why: Upstream repositories allow package sharing while each team keeps its own repository for managing its own packages.
This corresponds to Option D: Create a repository in the shared services account. Grant the organization read access to the repository in the shared services account. Set the repository as the upstream repository in each application team's repository.
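The upstream link can be expressed in the repository-creation request. The parameter dict below is a sketch with placeholder names; it matches the shape that CodeArtifact's create_repository call expects.

```python
# Hedged sketch of create_repository parameters that point a team's
# repository at the shared repository as an upstream.
# "corp-packages", "team-a-repo", and "shared-libs" are placeholders.
params = {
    "domain": "corp-packages",       # the central domain in shared services
    "repository": "team-a-repo",     # the application team's own repository
    "upstreams": [
        {"repositoryName": "shared-libs"}  # shared services repository
    ],
}
# In the team's account this would be passed as
# codeartifact.create_repository(**params).
```

With the upstream in place, a package request against `team-a-repo` that misses falls through to `shared-libs`, so common libraries need to be published only once.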
* Step 3: Using Resource-Based Policies for Cross-Account Access
For teams that need to share their own packages with other application teams, resource-based policies on the repositories grant the necessary permissions. These policies allow cross-account access without managing IAM permissions in every consuming account.
Action: Create resource-based policies that allow read access to the repositories from other application teams' accounts.
Why: Resource-based policies keep permissions attached to the repository itself, which simplifies management while enabling cross-team collaboration.
This corresponds to Option E: For teams that require shared packages, create resource-based policies that allow read access to the repository from other application teams' accounts.
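Such a repository policy could be as small as the sketch below. The account ID is a placeholder, and the action list is a minimal illustrative choice.

```python
import json

# Hedged sketch of a repository resource policy that lets another team's
# account read packages. "111122223333" is a placeholder account ID.
repo_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "CrossTeamRead",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
            "Action": ["codeartifact:ReadFromRepository"],
            "Resource": "*",
        }
    ],
}

print(json.dumps(repo_policy, indent=2))
```

Each owning team attaches a policy like this only to the repositories it chooses to share, so access decisions stay with the package owners rather than a central administrator.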