You are designing a system with three different environments: development, quality assurance (QA), and production.
Each environment will be deployed with Terraform and has a Google Kubernetes Engine (GKE) cluster created so that application teams can deploy their applications. Anthos Config Management will be used and templated to deploy infrastructure-level resources in each GKE cluster. All users (for example, infrastructure operators and application owners) will use GitOps. How should you structure your source control repositories for both Infrastructure as Code (IaC) and application code?
Correct : B
The correct answer is B: the Cloud Infrastructure (Terraform) repository is shared, with different directories for different environments; the GKE infrastructure (Anthos Config Management Kustomize manifests) repositories are separated, with different branches for different environments; and the application (app source code) repositories are separated, with different branches for different features.
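For illustration, a shared Terraform repository following this pattern might be laid out with one directory per environment (the directory and module names below are hypothetical):

terraform-infra/                 # single shared IaC repository
    modules/
        gke-cluster/             # reusable module for creating GKE clusters
    environments/
        dev/                     # dev backend, variables, and root module
        qa/                      # qa backend, variables, and root module
        prod/                    # prod backend, variables, and root module

The Anthos Config Management and application repositories, in contrast, keep a single directory layout and use branches (one per environment, or one per feature) to separate their variants.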
This answer follows the best practices for using Terraform and Anthos Config Management with GitOps, as described in the following sources:
1: Best practices for using Terraform | Google Cloud
2: Terraform Recommended Practices - Part 1 | Terraform - HashiCorp Learn
3: Deploy Anthos on GKE with Terraform part 1: GitOps with Config Sync | Google Cloud Blog
4: GitOps-style continuous delivery with Cloud Build | Cloud Build Documentation | Google Cloud
Your Cloud Run application writes unstructured logs as text strings to Cloud Logging. You want to convert the unstructured logs to JSON-based structured logs. What should you do?
Correct : D
The correct answer is D: modify the application to use the Cloud Logging software development kit (SDK), and send log entries with a jsonPayload field. For example, using the Node.js client library:
// Imports the Google Cloud Logging client library
const {Logging} = require('@google-cloud/logging');
// Creates a client
const logging = new Logging();
// Selects the log to write to
const log = logging.log('my-log');

async function writeStructuredLog() {
  // The message to include in the structured payload
  const text = 'Hello, world!';
  // Entry metadata: set the Cloud Run service name and revision as labels
  const metadata = {
    labels: {
      service_name: process.env.K_SERVICE || 'unknown',
      revision_name: process.env.K_REVISION || 'unknown',
    },
  };
  // Passing an object as the second argument writes the entry as a jsonPayload
  const entry = log.entry(metadata, {
    message: text,
    timestamp: new Date().toISOString(),
  });
  // Writes the log entry
  await log.write(entry);
  console.log(`Logged: ${text}`);
}

writeStructuredLog().catch(console.error);
Using Cloud Logging SDKs is the best way to convert unstructured logs to structured logs, as it provides more flexibility and control over the format and content of your log entries.
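Note that on Cloud Run you can also produce structured logs without the client library: if the application writes a single line of serialized JSON to standard output, Cloud Run's logging integration parses it into the jsonPayload of the log entry. A minimal sketch (the severity and component fields are only illustrative):

// Writes one line of JSON to stdout; Cloud Run turns it into a structured
// log entry with these fields in jsonPayload.
console.log(JSON.stringify({
  severity: 'INFO',
  message: 'Hello, world!',
  component: 'checkout-service', // hypothetical custom field
}));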
Using the log agent in the Cloud Run container image is not possible, as the log agent is not supported on Cloud Run. The log agent is a service that runs on Compute Engine or Google Kubernetes Engine instances and collects logs from various applications and system components. However, Cloud Run does not allow you to install or run any agents on its underlying infrastructure, as it is a fully managed service that abstracts away the details of the underlying platform.
1: Writing structured logs | Cloud Run Documentation | Google Cloud
2: Write structured logs | Cloud Run Documentation | Google Cloud
3: Fluent Bit - Fast and Lightweight Log Processor & Forwarder
4: Logging Best Practices for Serverless Applications - Google Codelabs
5: About the logging agent | Cloud Logging Documentation | Google Cloud
6: Cloud Run FAQ | Google Cloud
You are the Operations Lead for an ongoing incident with one of your services. The service usually runs at around 70% capacity. You notice that one node is returning 5xx errors for all requests. There has also been a noticeable increase in support cases from customers. You need to remove the offending node from the load balancer pool so that you can isolate and investigate the node. You want to follow Google-recommended practices to manage the incident and reduce the impact on users. What should you do?
Correct : A
The correct answer is A: communicate your intent to the incident team. Perform a load analysis to determine whether the remaining nodes can handle the increase in traffic offloaded from the removed node, and scale appropriately. When any new nodes report healthy, drain traffic from the unhealthy node, and remove the unhealthy node from service.
Google's incident management best practices include: maintain a clear line of command; designate clearly defined roles; keep a working record of debugging and mitigation as you go; and declare incidents early and often.
Communicate your intent before taking any action that might affect the service or the incident response. This helps to avoid confusion, duplication of work, or unintended consequences.
Perform a load analysis before removing a node from the load balancer pool, as this might affect the capacity and performance of the service. Scale the pool as necessary to handle the expected load.
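For example, with a hypothetical pool of four nodes averaging 70% utilization, removing one node would push the remaining three to roughly 70% × 4 / 3 ≈ 93% utilization, so at least one replacement node should be added and reporting healthy before the unhealthy node is drained.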
Drain traffic from the unhealthy node before removing it from service, as this helps to avoid dropping requests or causing errors for users.
Answer A follows these best practices by communicating the intent to the incident team, performing a load analysis and scaling the pool, and draining traffic from the unhealthy node before removing it.
Answer B does not follow the best practice of performing a load analysis before adding or removing nodes, as this might cause overloading or underutilization of resources.
Answer C does not follow the best practice of communicating the intent before taking any action, as this might cause confusion or conflict with other responders.
Answer D does not follow the best practice of draining traffic from the unhealthy node before removing it, as this might cause errors for users.
You recently migrated an ecommerce application to Google Cloud. You now need to prepare the application for the upcoming peak traffic season. You want to follow Google-recommended practices. What should you do first to prepare for the busy season?
Correct : B
The correct answer is B: load test the application. By load testing your ecommerce application before the peak traffic season, you can ensure that it is ready to handle the expected load and provide a good user experience. You can also use the results of your load tests to plan and implement other steps to prepare your application for the busy season, such as migrating to a more scalable platform, creating a Terraform configuration for deploying to additional regions, or pre-provisioning additional compute power.
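As a minimal sketch of such a load test (the target URL and traffic numbers are hypothetical placeholders, and a dedicated load-testing tool is preferable for production-scale tests), a small Node.js script can generate batches of concurrent requests and report latency percentiles to compare against the expected peak-season traffic:

// Minimal load-test sketch: fires batches of concurrent requests against a
// target URL and reports p50/p95 latency (requires Node.js 18+ for global fetch).
const TARGET = process.env.TARGET_URL || 'https://shop.example.com/';
const CONCURRENCY = 50;   // concurrent requests per batch
const BATCHES = 20;       // number of batches to run

async function timedRequest(url) {
  const start = Date.now();
  const res = await fetch(url);
  await res.arrayBuffer(); // drain the response body
  return Date.now() - start;
}

async function main() {
  const latencies = [];
  for (let b = 0; b < BATCHES; b++) {
    const batch = await Promise.all(
      Array.from({length: CONCURRENCY}, () => timedRequest(TARGET))
    );
    latencies.push(...batch);
  }
  latencies.sort((a, b) => a - b);
  const pct = p => latencies[Math.floor(latencies.length * p)];
  console.log(`requests=${latencies.length} p50=${pct(0.5)}ms p95=${pct(0.95)}ms`);
}

main().catch(console.error);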
1: Load Testing 101: How To Test Website Performance | BlazeMeter
2: Scaling applications | Google Cloud
3: Load testing using Google Cloud | Solutions | Google Cloud
4: Serverless load testing using Cloud Functions | Solutions | Google Cloud
You are analyzing Java applications in production. All applications have Cloud Profiler and Cloud Trace installed and configured by default. You want to determine which applications need performance tuning. What should you do?
Choose 2 answers
Correct : A, D
The correct answers are A and D.
Answer B is incorrect because increasing the memory resource allocation will not help if the application is CPU-bound or I/O-bound. Memory allocation affects how much data the application can store and access in memory, but it does not affect how fast the application can process that data.
Answer C is incorrect because increasing the local disk storage allocation will not help if the application is CPU-bound or I/O-bound. Disk storage affects how much data the application can store and access on disk, but it does not affect how fast the application can process that data.
Answer E is incorrect because examining the heap usage of the application will not help to determine if the application needs performance tuning. Heap usage affects how much memory the application allocates for dynamic objects, but it does not affect how fast the application can process those objects. Moreover, low heap usage does not necessarily mean that the application is inefficient or unoptimized.