Google Professional Cloud Developer Practice Exam
Notes: Hi all, the Google Professional Cloud Developer Practice Exam will familiarize you with the types of questions you may encounter on the certification exam and help you determine whether you are ready or need more preparation and/or experience. Successfully completing the practice exam does not guarantee you will pass the certification exam, as the actual exam is longer and covers a wider range of topics.
We highly recommend you also take the Google Professional Cloud Developer Guarantee Part, because it includes real questions with highlighted answers collected from the exam. It will help you pass the exam more easily.
1. You have a service running on Compute Engine virtual machine instances behind a global load balancer. You need to ensure that when the instance fails, it is recovered. What should you do?
A. Set up health checks in the load balancer configuration.
B. Deploy a service to the instances to notify you when they fail.
C. Use Stackdriver alerting to trigger a workflow to reboot the instance.
D. Set up health checks in the managed instance group configuration.
Hint Answers: D is correct because the managed instance group health check will recreate the instance when it fails, and this is the platform-native way to satisfy this use case.
2. You are analyzing your application’s performance. You observe that certain Cloud Bigtable tables in your cluster are used much more than others, causing inconsistent application performance for end users. You discover that some tablets have large sections of similarly named row keys and are heavily utilized, while other tablets are running idle. You discover that a user’s ZIP code is the first component of the row key, and your application is being heavily used by profiles originating from that ZIP code. You want to change how you generate row keys so that they are human readable and so that Cloud Bigtable demand is more evenly distributed within the cluster. What should you do?
A. Use serially generated integer values.
B. Use a concatenation of multiple human-readable attributes.
C. Use a subset of the MD5 hash of the row contents.
D. Use UNIX epoch-styled timestamps represented in milliseconds.
Hint Answers: B is correct because concatenating several delimited, human-readable attributes spreads row keys across tablets while keeping them readable.
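As a minimal sketch of option B (all field names here are hypothetical), a row-key builder can lead with a high-cardinality attribute so the key prefix varies across users instead of clustering on the ZIP code:

```python
# Sketch of a concatenated, human-readable Bigtable row key.
# Leading with user_id (high cardinality) rather than zip_code
# avoids hotspotting one tablet for a popular ZIP code, while the
# delimited key stays easy to read when debugging.
def make_row_key(user_id: str, zip_code: str, event_ts: str) -> str:
    return "#".join([user_id, zip_code, event_ts])

key = make_row_key("u8231", "10001", "2024-01-15T12:00:00Z")
print(key)  # u8231#10001#2024-01-15T12:00:00Z
```

The ZIP code is still part of the key, so prefix scans per user remain possible, but writes no longer concentrate on a single key range.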
3. Which architecture should HipLocal use for log analysis?
A. Use Cloud Spanner to store each event.
B. Start storing key metrics in Cloud Memorystore.
C. Use Stackdriver Logging with a BigQuery sink.
D. Use Stackdriver Logging with a Cloud Storage sink.
Hint Answers: C is correct because it utilizes GCP’s scalable logging solution with an automated sink to BigQuery in order to provide analytics.
4. Your company has a successful multi-player game that has become popular in the US. Now, it wants to expand to other regions. It is launching a new feature that allows users to trade points. This feature will work for users across the globe. Your company’s current MySQL backend is reaching the limit of the Compute Engine instance that hosts the game. Your company wants to migrate to a different database that will provide global consistency and high availability across the regions. Which database should they choose?
B. Cloud SQL
C. Cloud Spanner
D. Cloud Bigtable
Hint Answers: C is correct because only Cloud Spanner provides global consistency and availability.
5. Your company plans to expand their analytics use cases. One of the new use cases requires your data analysts to analyze events using SQL on a near real–time basis. You expect rapid growth and want to use managed services as much as possible. What should you do?
A. Create a Kafka instance on a large Compute Engine instance. Stream your events from the source into a Kafka pipeline. Leverage Cloud Dataflow to ingest these events into Cloud Storage.
B. Create a Cloud Pub/Sub topic and a subscription. Stream your events from the source into the Pub/Sub topic. Leverage Cloud Dataflow to ingest these events into Cloud Storage.
C. Create a Cloud Pub/Sub topic and a subscription. Stream your events from the source into the Pub/Sub topic. Leverage Cloud Dataflow to ingest these events into BigQuery.
D. Create a Cloud Pub/Sub topic and a subscription. Stream your events from the source into the Pub/Sub topic. Leverage Cloud Dataflow to ingest these events into Cloud Datastore.
Hint Answers: C is correct because all three products involved can scale to significant volumes, and writing the data to BigQuery allows for immediate analysis via SQL.
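As a rough sketch of the producer side of option C, events can be serialized to JSON bytes before publishing; the topic path and event fields below are hypothetical, and the actual publish call (which needs `google-cloud-pubsub` and credentials) is shown only as a comment:

```python
import json

def encode_event(event: dict) -> bytes:
    # Pub/Sub message payloads are bytes; JSON keeps each event
    # queryable once Dataflow writes it into BigQuery.
    return json.dumps(event, sort_keys=True).encode("utf-8")

payload = encode_event({"user": "u1", "action": "click", "ts": 1700000000})
# With google-cloud-pubsub installed and credentials configured, the
# publish step would look roughly like (topic path hypothetical):
#   publisher = pubsub_v1.PublisherClient()
#   publisher.publish("projects/my-project/topics/events", payload)
print(payload.decode("utf-8"))
```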
6. HipLocal needs to migrate their existing data analytics platform to Google Cloud without any major change in code. Which service should they use?
A. Cloud Storage
B. Cloud Dataflow
C. Cloud Dataproc
D. Persistent Disk
Hint Answers: C is correct because Cloud Dataproc runs existing Hadoop and Spark workloads as a managed service with minimal code changes.
7. Your organization develops and tests multiple applications on Compute Engine virtual machine instances across three environments: Test, Staging, and Production. The separate development teams for each application require minimal access to Production but broad access in Test and Staging. You need to design the Resource Manager structure to support your organization in following least-privilege best practices. What should you do?
A. Create one project per environment. Assign the application team members an Identity Access Management role at the project level.
B. Create one project per environment. Group each application team member into a Google Group. Assign the application team’s Google Group an Identity Access Management role at the project level.
C. Create one project per environment per application. Assign the application team members an Identity Access Management role at the project level.
D. Create one project per environment per application. Group each application team member into a Google Group. Assign the application team’s Google Group an Identity Access Management role at the project level.
Hint Answers: D is correct because a project provides good isolation for each application team, and managing membership via a group is easier to maintain over time.
8. Your application in App Engine standard environment receives a large amount of traffic. You are concerned that deploying changes to the application could affect all users negatively. You want to avoid full-scale load testing due to cost concerns, but you still want to deploy new features as quickly as possible. Which approach should you take?
A. Schedule weekly load tests against the production application.
B. Use the local development environment to perform load testing outside Google Cloud Platform.
C. Before allowing users to access new features, deploy as a new version and perform smoke tests. Then enable all users to access the new features.
D. Use App Engine traffic splitting to have a smaller part of the users test out new features, and slowly adjust traffic splitting until all users get the new features.
Hint Answers: D is correct because traffic splitting allows real user testing without impacting all users and reduces load testing costs.
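The idea behind cookie-based traffic splitting in option D can be simulated locally. This is only an illustration of the bucketing concept, not App Engine's actual implementation; version names and the cookie value are made up:

```python
import hashlib

def choose_version(cookie: str, new_fraction: float) -> str:
    # Deterministic bucketing: hash the cookie into [0, 1) so a given
    # user always lands on the same version, mirroring how cookie-based
    # traffic splitting keeps a user's experience stable.
    digest = int(hashlib.sha256(cookie.encode()).hexdigest(), 16)
    bucket = (digest % 10_000) / 10_000
    return "v2-canary" if bucket < new_fraction else "v1-stable"

# The same cookie always maps to the same version for a given split.
assert choose_version("user-cookie-abc", 0.1) == choose_version("user-cookie-abc", 0.1)
```

Gradually raising `new_fraction` moves more users onto the new version, which is the "slowly adjust traffic splitting" step the answer describes.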
9. You have two tables in an ANSI-SQL compliant database with identical columns that you need to quickly combine into a single table, removing duplicate rows from the result set. What should you do?
A. Query the tables from a Linux shell, combine the results into a single CSV, and re-import the rows into the database. Use the UNION ALL operator in SQL to combine the tables.
B. Use the JOIN operator in SQL to combine the tables.
C. Use nested WITH statements to combine the tables.
D. Use the UNION operator in SQL to combine the tables.
Hint Answers: D is correct because the UNION operator combines result sets while removing duplicates.
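The difference between UNION and UNION ALL can be demonstrated locally with SQLite; the same semantics apply in any ANSI-SQL compliant database (table and column names here are arbitrary):

```python
import sqlite3

# Two tables with identical columns and one duplicate row: (2, 'y').
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE a (id INTEGER, name TEXT);
    CREATE TABLE b (id INTEGER, name TEXT);
    INSERT INTO a VALUES (1, 'x'), (2, 'y');
    INSERT INTO b VALUES (2, 'y'), (3, 'z');
""")
union = conn.execute(
    "SELECT id, name FROM a UNION SELECT id, name FROM b ORDER BY id"
).fetchall()
union_all = conn.execute(
    "SELECT id, name FROM a UNION ALL SELECT id, name FROM b"
).fetchall()
print(len(union))      # 3 -- duplicate row (2, 'y') removed
print(len(union_all))  # 4 -- duplicates kept
```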
10. Your website is deployed on Compute Engine. Your marketing team wants to test conversion rates between 3 different website designs. You are not able to make changes to your application code. What should you do?
A. Deploy website on App Engine and use traffic splitting.
B. Deploy website on App Engine as three separate services.
C. Deploy website on Cloud Functions and implement custom code to show different designs.
D. Deploy website on Cloud Functions as three separate functions.
Hint Answers: A is correct because it allows you to route traffic to a single domain and split it between versions based on IP address or cookie.
11. You are building a storage layer for an analytics Hadoop cluster for your company. This cluster will run multiple jobs on a nightly basis, and you need to access the data frequently. You want to use Cloud Storage for this purpose. Which storage option should you choose?
A. Multi-regional storage
B. Regional storage
C. Nearline storage
D. Coldline storage
Hint Answers: B is correct because regional storage colocates the data with the Hadoop cluster's compute resources, which suits frequent access by nightly jobs.
12. You have an application that accepts inputs from users. The application needs to kick off different background tasks based on these inputs. You want to allow for automated asynchronous execution of these tasks as soon as input is submitted by the user. Which product should you use?
A. Cloud Tasks
B. Cloud Bigtable
C. Cloud Pub/Sub
D. Cloud Composer
Hint Answers: A is correct because Cloud Tasks is built for asynchronous execution of application-initiated background tasks.
13. You have a data warehouse built on BigQuery that contains a table with array fields. To analyze the data for a specific use case using Standard SQL, you need to read all elements from the array and write them with all other non-array fields in a table. You don’t want to lose any records if they don’t match records in the array fields. What should you do?
A. Perform SELECT * FROM tablename.
B. Perform UNNEST and JOIN with the table to get these results.
C. Perform UNNEST and INNER JOIN with the table to get these results.
D. Perform UNNEST and CROSS JOIN with the table to get these results.
Hint Answers: D is correct because it does not lose records when the join is performed.
14. As part of their expansion, HipLocal is creating new projects in order to separate resources. They want to build a system to automate enabling of their APIs. What should they do?
A. Copy existing persistent disks to the new project.
B. Use the service management API to define a new service.
C. Use the service management API to enable the Compute API.
D. Use the service management API to enable the Cloud Storage API.
Hint Answers: C is correct because the Compute API will be required to provision VMs.
15. You have deployed your website in a managed instance group. The managed instance group is configured to have a size of three instances and to perform an HTTP health check on port 80. When the managed instance group is created, three instances are created and started. When you connect to the instance using SSH, you confirm that the website is running and available on port 80. However, the managed instance group is re-creating the instances when they fail verification. What should you do?
A. Change the type to an unmanaged instance group.
B. Disable autoscaling on the managed instance group.
C. Increase the initial delay timeout to ensure that the instance is created.
D. Check the firewall rules and ensure that the probes can access the instance.
Hint Answers: D is correct because the instance has been created and the website is being served, but the health check is failing verification.
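A sketch of what the fix in option D might look like, assuming a default VPC network and a hypothetical rule name; the source ranges below are the documented Google Cloud health-check probe ranges:

```shell
# Allow Google Cloud health-check probes to reach the website on
# port 80 (rule name and network are placeholders).
gcloud compute firewall-rules create allow-health-checks \
    --network=default \
    --action=ALLOW \
    --rules=tcp:80 \
    --source-ranges=130.211.0.0/22,35.191.0.0/16
```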
16. Your team is using App Engine to write every Cloud Pub/Sub message to both a Cloud Storage object and a BigQuery table. You want to achieve the greatest resource efficiency. Which architecture should you implement?
Hint Answers: B is correct because each App Engine service will get its own message to write and can retry/fail independently.
17. Your teammate has asked you to review the code below. Its purpose is to query account entities in Cloud Datastore for those with a balance greater than 10000 and an age less than 4. Which improvement should you suggest your teammate make?
A. The query needs to have an ancestor query.
B. The query should be performed in a transaction.
C. Change the argument to OrderBy.desc to be “balance” instead of “age.”
D. Send two queries—one for balances over 10000, and another for ages less than 4—and compute the intersection.
Hint Answers: D is correct because a Datastore query cannot apply inequality filters to more than one property, so the two result sets must be retrieved separately and merged.
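The client-side merge that option D describes can be sketched in plain Python. Entities are modeled here as dicts with a hypothetical `key` field standing in for the Datastore entity key:

```python
# Datastore allows inequality filters on only one property per query,
# so run one query per inequality and intersect the results by key.
def intersect_by_key(high_balance, young_accounts):
    young_keys = {e["key"] for e in young_accounts}
    return [e for e in high_balance if e["key"] in young_keys]

# Results a real client might return for balance > 10000 and age < 4:
high_balance = [{"key": "a1", "balance": 15000, "age": 2},
                {"key": "a2", "balance": 20000, "age": 7}]
young = [{"key": "a1", "balance": 15000, "age": 2},
         {"key": "a3", "balance": 5000, "age": 1}]
result = intersect_by_key(high_balance, young)
print([e["key"] for e in result])  # ['a1']
```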
18. Your organization has grown, and new teams need access to manage network connectivity within and across projects. You are now seeing intermittent timeout errors in your application. You want to find the cause of the problem. What should you do?
A. Set up wireshark on each Google Cloud Virtual Machine instance.
B. Review the instance admin activity logs in Stackdriver for the application instances.
C. Configure VPC flow logs for each of the subnets in your VPC.
D. Configure firewall rules logging for each of the firewalls in your VPC.
Hint Answers: C is correct because VPC flow logs record network flows at the subnet level, which is the right telemetry for diagnosing intermittent connection timeouts.
19. Your application starts on the VM as a systemd service. Your application outputs its log information to stdout. You need to send the application logs to Stackdriver without changing the application. What should you do?
A. Review the application logs from the Compute Engine VM Instance activity logs in Stackdriver.
B. Review the application logs from the Compute Engine VM Instance data access logs in Stackdriver.
C. Install Stackdriver Logging Agent. Review the application logs from the Compute Engine VM Instance syslog logs in Stackdriver.
D. Install Stackdriver Logging Agent. Review the application logs from the Compute Engine VM Instance system event logs in Stackdriver.
Hint Answers: C is correct because a service running in systemd that outputs to stdout will have logs in syslog and will be scraped by the logging agent. (https://github.com/GoogleCloudPlatform/fluentd-catch-all-config/tree/master/configs/config.d)
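To illustrate option C, a hypothetical systemd unit for such a service might look like the fragment below; stdout lands in the journal, which journald forwards to syslog (when `ForwardToSyslog=yes` in `journald.conf`), where the Logging Agent tails it. All names and paths are placeholders.

```ini
[Unit]
Description=Example application service

[Service]
# stdout/stderr go to the systemd journal; journald forwards them
# to syslog, which the Stackdriver Logging Agent scrapes.
ExecStart=/opt/app/bin/server
StandardOutput=journal
StandardError=journal

[Install]
WantedBy=multi-user.target
```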
20. You are capturing important audit activity in Stackdriver Logging. You need to read the information from Stackdriver Logging to perform real-time analysis of the logs. You will have multiple processes performing different types of analysis on the logging data. What should you do?
A. Read the logs directly from the Stackdriver Logging API.
B. Set up a Stackdriver Logging sync to BigQuery, and read the logs from the BigQuery table.
C. Set up a Stackdriver Logging sync to Cloud Pub/Sub, and read the logs from a Cloud Pub/Sub topic.
D. Set up a Stackdriver Logging sync to Cloud Storage, and read the logs from a Cloud Storage bucket.
Hint Answers: C is correct because this solution is real time. (https://cloud.google.com/logging/docs/export/using_exported_logs#pubsub-availability)
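A sketch of the sink setup option C describes, with placeholder project, sink, and topic names; multiple subscriptions on the one topic then let each analysis process read the stream independently:

```shell
# Export audit log entries to a Pub/Sub topic for near-real-time
# consumption (names are placeholders).
gcloud logging sinks create audit-sink \
    pubsub.googleapis.com/projects/my-project/topics/audit-logs \
    --log-filter='logName:"cloudaudit.googleapis.com"'
```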