

Professional-Cloud-Database-Engineer: Google Cloud Certified - Professional Cloud Database Engineer Questions and Answers

Question # 4

You work for a large retail and ecommerce company that is starting to extend their business globally. Your company plans to migrate to Google Cloud. You want to use platforms that will scale easily, handle transactions with the least amount of latency, and provide a reliable customer experience. You need a storage layer for sales transactions and current inventory levels. You want to retain the same relational schema that your existing platform uses. What should you do?

A.

Store your data in Firestore in a multi-region location, and place your compute resources in one of the constituent regions.

B.

Deploy Cloud Spanner using a multi-region instance, and place your compute resources close to the default leader region.

C.

Build an in-memory cache in Memorystore, and deploy to the specific geographic regions where your application resides.

D.

Deploy a Bigtable instance with a cluster in one region and a replica cluster in another geographic region.

Question # 5

Your company is evaluating Google Cloud database options for a mission-critical global payments gateway application. The application must be available 24/7 to users worldwide, horizontally scalable, and support open source databases. You need to select an automatically shardable, fully managed database with 99.999% availability and strong transactional consistency. What should you do?

A.

Select Bare Metal Solution for Oracle.

B.

Select Cloud SQL.

C.

Select Bigtable.

D.

Select Cloud Spanner.

Question # 6

You work in the logistics department. Your data analysis team needs daily extracts from Cloud SQL for MySQL to train a machine learning model. The model will be used to optimize next-day routes. You need to export the data in CSV format. You want to follow Google-recommended practices. What should you do?

A.

Use Cloud Scheduler to trigger a Cloud Function that will run a select * from table(s) query to call the cloudsql.instances.export API.

B.

Use Cloud Scheduler to trigger a Cloud Function through Pub/Sub to call the cloudsql.instances.export API.

C.

Use Cloud Composer to orchestrate an export by calling the cloudsql.instances.export API.

D.

Use Cloud Composer to execute a select * from table(s) query and export results.
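For background on the mechanics these options refer to: below is a minimal sketch of a Pub/Sub-triggered Cloud Function that calls the Cloud SQL Admin API (the REST surface behind cloudsql.instances.export) to produce a CSV export. All project, instance, database, and bucket names are hypothetical placeholders.

```python
# Sketch only: Pub/Sub-triggered Cloud Function calling the Cloud SQL
# Admin API to export query results as CSV to Cloud Storage.
import googleapiclient.discovery

def export_csv(event, context):
    service = googleapiclient.discovery.build("sqladmin", "v1")
    body = {
        "exportContext": {
            "fileType": "CSV",
            "uri": "gs://example-exports/routes.csv",  # placeholder bucket
            "databases": ["logistics"],                # placeholder database
            "csvExportOptions": {"selectQuery": "SELECT * FROM deliveries"},
        }
    }
    operation = service.instances().export(
        project="example-project", instance="example-instance", body=body
    ).execute()
    print(f"Export operation started: {operation['name']}")
```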

Question # 7

You are managing multiple applications connecting to a database on Cloud SQL for PostgreSQL. You need to be able to monitor database performance to easily identify applications with long-running and resource-intensive queries. What should you do?

A.

Use log messages produced by Cloud SQL.

B.

Use Query Insights for Cloud SQL.

C.

Use the Cloud Monitoring dashboard with available metrics from Cloud SQL.

D.

Use Cloud SQL instance monitoring in the Google Cloud Console.
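As context for the options above: Query Insights is a per-instance setting that can be enabled through the Admin API. A hedged sketch, assuming placeholder project and instance names:

```python
# Sketch only: enable Query Insights on an existing Cloud SQL instance.
import googleapiclient.discovery

service = googleapiclient.discovery.build("sqladmin", "v1")
body = {
    "settings": {
        "insightsConfig": {
            "queryInsightsEnabled": True,
            "recordApplicationTags": True,  # attribute queries to applications
            "queryStringLength": 1024,      # capture longer query text
        }
    }
}
service.instances().patch(
    project="example-project", instance="example-instance", body=body
).execute()
```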

Question # 8

Your customer has a global chat application that uses a multi-regional Cloud Spanner instance. The application has recently experienced degraded performance after a new version of the application was launched. Your customer asked you for assistance. During initial troubleshooting, you observed high read latency. What should you do?

A.

Use query parameters to speed up frequently executed queries.

B.

Change the Cloud Spanner configuration from multi-region to single region.

C.

Use SQL statements to analyze SPANNER_SYS.READ_STATS* tables.

D.

Use SQL statements to analyze SPANNER_SYS.QUERY_STATS* tables.
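For reference, the SPANNER_SYS statistics tables can be queried like any other table. A minimal sketch using the google-cloud-spanner client; the instance and database IDs are placeholders, and the selected columns are illustrative of the read-stats schema rather than an exhaustive list:

```python
# Sketch only: inspect recent read statistics for a Spanner database.
from google.cloud import spanner

client = spanner.Client(project="example-project")
database = client.instance("example-instance").database("chat-db")

with database.snapshot() as snapshot:
    rows = snapshot.execute_sql(
        """
        SELECT interval_end, read_columns, execution_count, avg_cpu_seconds
        FROM SPANNER_SYS.READ_STATS_TOP_10MINUTE
        ORDER BY avg_cpu_seconds DESC
        LIMIT 10
        """
    )
    for row in rows:
        print(row)
```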

Question # 9

Your organization operates in a highly regulated industry. Separation of concerns (SoC) and security principle of least privilege (PoLP) are critical. The operations team consists of:

Person A is a database administrator.

Person B is an analyst who generates metric reports.

Application C is responsible for automatic backups.

You need to assign roles to team members for Cloud Spanner. Which roles should you assign?

A.

roles/spanner.databaseAdmin for Person A

roles/spanner.databaseReader for Person B

roles/spanner.backupWriter for Application C

B.

roles/spanner.databaseAdmin for Person A

roles/spanner.databaseReader for Person B

roles/spanner.backupAdmin for Application C

C.

roles/spanner.databaseAdmin for Person A

roles/spanner.databaseUser for Person B

roles/spanner.databaseReader for Application C

D.

roles/spanner.databaseAdmin for Person A

roles/spanner.databaseUser for Person B

roles/spanner.backupWriter for Application C
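For illustration, role bindings like those in the options can be applied with a setIamPolicy call. A hedged sketch at the instance level, with hypothetical principals; a production setup would read-modify-write the existing policy rather than overwrite it as done here:

```python
# Sketch only: bind least-privilege Spanner roles on an instance.
import googleapiclient.discovery

service = googleapiclient.discovery.build("spanner", "v1")
resource = "projects/example-project/instances/example-instance"
policy = {
    "bindings": [
        {"role": "roles/spanner.databaseAdmin",
         "members": ["user:person-a@example.com"]},
        {"role": "roles/spanner.databaseReader",
         "members": ["user:person-b@example.com"]},
        {"role": "roles/spanner.backupWriter",
         "members": ["serviceAccount:app-c@example-project.iam.gserviceaccount.com"]},
    ]
}
service.projects().instances().setIamPolicy(
    resource=resource, body={"policy": policy}
).execute()
```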

Question # 10

Your organization is running a critical production database on a virtual machine (VM) on Compute Engine. The VM has an ext4-formatted persistent disk for data files. The database will soon run out of storage space. You need to implement a solution that avoids downtime. What should you do?

A.

In the Google Cloud Console, increase the size of the persistent disk, and use the resize2fs command to extend the disk.

B.

In the Google Cloud Console, increase the size of the persistent disk, and use the fdisk command to verify that the new space is ready to use.

C.

In the Google Cloud Console, create a snapshot of the persistent disk, restore the snapshot to a new larger disk, unmount the old disk, mount the new disk, and restart the database service.

D.

In the Google Cloud Console, create a new persistent disk attached to the VM, and configure the database service to move the files to the new disk.
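As background: persistent disks can be grown while attached and in use, and an ext4 filesystem can then be extended online. A sketch with placeholder names; the resize2fs step runs on the VM afterwards:

```python
# Sketch only: grow a persistent disk in place via the Compute Engine API.
# Afterwards, extend the filesystem on the VM (no downtime for ext4):
#   sudo resize2fs /dev/sdb   # device name is a placeholder
import googleapiclient.discovery

compute = googleapiclient.discovery.build("compute", "v1")
compute.disks().resize(
    project="example-project",
    zone="us-central1-a",
    disk="db-data-disk",
    body={"sizeGb": "2048"},  # disks can only grow, never shrink
).execute()
```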

Question # 11

Your team is building an application that stores and analyzes streaming time series financial data. You need a database solution that can perform time series-based scans with sub-second latency. The solution must scale into the hundreds of terabytes and be able to write up to 10k records per second and read up to 200 MB per second. What should you do?

A.

Use Firestore.

B.

Use Bigtable.

C.

Use BigQuery.

D.

Use Cloud Spanner.

Question # 12

You have deployed a Cloud SQL for SQL Server instance. In addition, you created a cross-region read replica for disaster recovery (DR) purposes. Your company requires you to maintain and monitor a recovery point objective (RPO) of less than 5 minutes. You need to verify that your cross-region read replica meets the allowed RPO. What should you do?

A.

Use Cloud SQL instance monitoring.

B.

Use the Cloud Monitoring dashboard with available metrics from Cloud SQL.

C.

Use Cloud SQL logs.

D.

Use the SQL Server Always On Availability Group dashboard.
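For context, replication lag is exposed as a Cloud Monitoring metric, so an RPO threshold can be checked (or alerted on) programmatically. A hedged sketch with a placeholder project ID:

```python
# Sketch only: read the Cloud SQL replica-lag metric for the last hour.
import time
from google.cloud import monitoring_v3

client = monitoring_v3.MetricServiceClient()
now = int(time.time())
interval = monitoring_v3.TimeInterval(
    {"end_time": {"seconds": now}, "start_time": {"seconds": now - 3600}}
)
results = client.list_time_series(
    name="projects/example-project",
    filter='metric.type = "cloudsql.googleapis.com/database/replication/replica_lag"',
    interval=interval,
    view=monitoring_v3.ListTimeSeriesRequest.TimeSeriesView.FULL,
)
for series in results:
    for point in series.points:
        lag = point.value.double_value  # seconds behind the primary
        print(point.interval.end_time, lag, "RPO breach!" if lag > 300 else "ok")
```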

Question # 13

You are choosing a database backend for a new application. The application will ingest data points from IoT sensors. You need to ensure that the application can scale up to millions of requests per second with sub-10ms latency and store up to 100 TB of history. What should you do?

A.

Use Cloud SQL with read replicas for throughput.

B.

Use Firestore, and rely on automatic serverless scaling.

C.

Use Memorystore for Memcached, and add nodes as necessary to achieve the required throughput.

D.

Use Bigtable, and add nodes as necessary to achieve the required throughput.

Question # 14

Your team is building a new inventory management application that will require read and write database instances in multiple Google Cloud regions around the globe. Your database solution requires 99.99% availability and global transactional consistency. You need a fully managed backend relational database to store inventory changes. What should you do?

A.

Use Bigtable.

B.

Use Firestore.

C.

Use Cloud SQL for MySQL.

D.

Use Cloud Spanner.

Question # 15

You are designing a database strategy for a new web application in one region. You need to minimize write latency. What should you do?

A.

Use Cloud SQL with cross-region replicas.

B.

Use high availability (HA) Cloud SQL with multiple zones.

C.

Use zonal Cloud SQL without high availability (HA).

D.

Use Cloud Spanner in a regional configuration.

Question # 16

Your company is developing a 24/7, global, real-time analytics platform that needs to store and process large amounts of versioned time-series data. You need to design a platform that is highly scalable to accommodate traffic spikes and ensure high availability for mission-critical operations. What should you do?

A.

Implement a single-cluster Bigtable instance with autoscaling enabled and an optimized row key design.

B.

Implement a multi-cluster Bigtable instance with autoscaling enabled and optimal schema design.

C.

Implement a multi-cluster Bigtable instance across multiple regions with replication.

D.

Implement AlloyDB for PostgreSQL to handle the analytical workload using read replicas.

Question # 17

You are building a data warehouse on BigQuery. Sources of data include several MySQL databases located on-premises.

You need to transfer data from these databases into BigQuery for analytics. You want to use a managed solution that has low latency and is easy to set up. What should you do?

A.

Create extracts from your on-premises databases periodically, and push these extracts to Cloud Storage.

Upload the changes into BigQuery, and merge them with existing tables.

B.

Use Cloud Data Fusion and scheduled workflows to extract data from MySQL. Transform this data into the appropriate schema, and load this data into your BigQuery database.

C.

Use Datastream to connect to your on-premises database and create a stream. Have Datastream write to Cloud Storage. Then use Dataflow to process the data into BigQuery.

D.

Use Database Migration Service to replicate data to a Cloud SQL for MySQL instance. Create federated tables in BigQuery on top of the replicated instances to transform and load the data into your BigQuery database.

Question # 18

You are using Compute Engine on Google Cloud and your data center to manage a set of MySQL databases in a hybrid configuration. You need to create replicas to scale reads and to offload part of the management operation. What should you do?

A.

Use external server replication.

B.

Use Data Migration Service.

C.

Use Cloud SQL for MySQL external replica.

D.

Use the mysqldump utility and binary logs.

Question # 19

You recently launched a new product to the US market. You currently have two Bigtable clusters in one US region to serve all the traffic. Your marketing team is planning an immediate expansion to APAC. You need to roll out the regional expansion while implementing high availability according to Google-recommended practices. What should you do?

A.

Maintain a target of 23% CPU utilization by locating:

cluster-a in zone us-central1-a

cluster-b in zone europe-west1-d

cluster-c in zone asia-east1-b

B.

Maintain a target of 23% CPU utilization by locating:

cluster-a in zone us-central1-a

cluster-b in zone us-central1-b

cluster-c in zone us-east1-a

C.

Maintain a target of 35% CPU utilization by locating:

cluster-a in zone us-central1-a

cluster-b in zone australia-southeast1-a

cluster-c in zone europe-west1-d

cluster-d in zone asia-east1-b

D.

Maintain a target of 35% CPU utilization by locating:

cluster-a in zone us-central1-a

cluster-b in zone us-central2-a

cluster-c in zone asia-northeast1-b

cluster-d in zone asia-east1-b
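For reference, a replicated Bigtable deployment is a single instance with multiple clusters, each in its own zone. A minimal sketch with the google-cloud-bigtable client; the instance ID, zones, and node counts are illustrative only, not the answer key for this question:

```python
# Sketch only: create a Bigtable instance replicated across three zones.
from google.cloud import bigtable
from google.cloud.bigtable import enums

client = bigtable.Client(project="example-project", admin=True)
instance = client.instance(
    "retail-instance", instance_type=enums.Instance.Type.PRODUCTION
)
clusters = [
    instance.cluster("cluster-a", location_id="us-central1-a", serve_nodes=3),
    instance.cluster("cluster-b", location_id="us-central1-b", serve_nodes=3),
    instance.cluster("cluster-c", location_id="asia-east1-b", serve_nodes=3),
]
operation = instance.create(clusters=clusters)
operation.result(timeout=300)  # block until the instance is ready
```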

Question # 20

You manage a production MySQL database running on Cloud SQL at a retail company. You perform routine maintenance on Sunday at midnight when traffic is slow, but you want to skip routine maintenance during the year-end holiday shopping season. You need to ensure that your production system is available 24/7 during the holidays. What should you do?

A.

Define a maintenance window on Sundays between 12 AM and 1 AM, and deny maintenance periods between November 1 and January 15.

B.

Define a maintenance window on Sundays between 12 AM and 5 AM, and deny maintenance periods between November 1 and February 15.

C.

Build a Cloud Composer job to start a maintenance window on Sundays between 12 AM and 1 AM, and deny maintenance periods between November 1 and January 15.

D.

Create a Cloud Scheduler job to start maintenance at 12 AM on Sundays. Pause the Cloud Scheduler job between November 1 and January 15.
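As context for the maintenance options: both the recurring window and deny periods are instance settings in the Cloud SQL Admin API. A hedged sketch with placeholder names and dates:

```python
# Sketch only: Sunday-midnight maintenance window plus a holiday deny period.
import googleapiclient.discovery

service = googleapiclient.discovery.build("sqladmin", "v1")
body = {
    "settings": {
        "maintenanceWindow": {"day": 7, "hour": 0},  # 7 = Sunday, 00:00
        "denyMaintenancePeriods": [
            {"startDate": "2023-11-01", "endDate": "2024-01-15",
             "time": "00:00:00"}  # year values are placeholders
        ],
    }
}
service.instances().patch(
    project="example-project", instance="example-instance", body=body
).execute()
```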

Question # 21

You are working on a new centralized inventory management system to track items available in 200 stores, which each have 500 GB of data. You are planning a gradual rollout of the system to a few stores each week. You need to design an SQL database architecture that minimizes costs and user disruption during each regional rollout and can scale up or down on nights and holidays. What should you do?

A.

Use Oracle Real Application Cluster (RAC) databases on Bare Metal Solution for Oracle.

B.

Use sharded Cloud SQL instances with one or more stores per database instance.

C.

Use a Bigtable cluster with autoscaling.

D.

Use Cloud Spanner with a custom autoscaling solution.

Question # 22

Your ecommerce website captures user clickstream data to analyze customer traffic patterns in real time and support personalization features on your website. You plan to analyze this data using big data tools. You need a low-latency solution that can store 8TB of data and can scale to millions of read and write requests per second. What should you do?

A.

Write your data into Bigtable and use Dataproc and the Apache HBase libraries for analysis.

B.

Deploy a Cloud SQL environment with read replicas for improved performance. Use Datastream to export data to Cloud Storage and analyze with Dataproc and the Cloud Storage connector.

C.

Use Memorystore to handle your low-latency requirements and for real-time analytics.

D.

Stream your data into BigQuery and use Dataproc and the BigQuery Storage API to analyze large volumes of data.

Question # 23

You use Python scripts to generate weekly SQL reports to assess the state of your databases and determine whether you need to reorganize tables or run statistics. You want to automate this report but need to minimize operational costs and overhead. What should you do?

A.

Create a VM in Compute Engine, and run a cron job.

B.

Create a Cloud Composer instance, and create a directed acyclic graph (DAG).

C.

Create a Cloud Function, and call the Cloud Function using Cloud Scheduler.

D.

Create a Cloud Function, and call the Cloud Function from a Cloud Tasks queue.
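For background on the serverless pattern in these options: a Cloud Scheduler job can invoke an HTTP-triggered Cloud Function on a cron schedule. A minimal sketch; the function URL, project, and region are placeholders:

```python
# Sketch only: schedule a weekly HTTP call to a report-generating function.
from google.cloud import scheduler_v1

client = scheduler_v1.CloudSchedulerClient()
parent = client.common_location_path("example-project", "us-central1")
job = scheduler_v1.Job(
    name=f"{parent}/jobs/weekly-sql-report",
    schedule="0 6 * * 1",  # cron: Mondays at 06:00
    time_zone="America/New_York",
    http_target=scheduler_v1.HttpTarget(
        uri="https://us-central1-example-project.cloudfunctions.net/run-report",
        http_method=scheduler_v1.HttpMethod.POST,
    ),
)
client.create_job(parent=parent, job=job)
```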

Question # 24

Your organization works with sensitive data that requires you to manage your own encryption keys. You are working on a project that stores that data in a Cloud SQL database. You need to ensure that stored data is encrypted with your keys. What should you do?

A.

Export data periodically to a Cloud Storage bucket protected by Customer-Supplied Encryption Keys.

B.

Use Cloud SQL Auth proxy.

C.

Connect to Cloud SQL using a connection that has SSL encryption.

D.

Use customer-managed encryption keys with Cloud SQL.
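For reference, customer-managed encryption keys (CMEK) are specified at instance creation time by pointing the instance at a Cloud KMS key. A hedged sketch; every resource name below is a placeholder:

```python
# Sketch only: create a Cloud SQL instance encrypted with a CMEK key.
import googleapiclient.discovery

service = googleapiclient.discovery.build("sqladmin", "v1")
body = {
    "name": "cmek-instance",
    "region": "us-central1",
    "databaseVersion": "POSTGRES_14",
    "diskEncryptionConfiguration": {
        "kmsKeyName": ("projects/example-project/locations/us-central1/"
                       "keyRings/example-ring/cryptoKeys/example-key")
    },
    "settings": {"tier": "db-custom-2-7680"},
}
service.instances().insert(project="example-project", body=body).execute()
```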

Question # 25

Your organization has a ticketing system that needs an online marketing analytics and reporting application. You need to select a relational database that can manage hundreds of terabytes of data to support this new application. Which database should you use?

A.

Cloud SQL

B.

BigQuery

C.

Cloud Spanner

D.

Bigtable

Question # 26

You are deploying a Cloud SQL for MySQL database to serve a non-critical application. The database size is 10 GB and will be updated every night with data stored in a Cloud Storage bucket. The database serves read-only traffic from the application during the day. The data locality requirement of this application mandates that data must reside in a single region. You want to minimize the cost of running this database while maintaining an RTO of 1 day. What should you do?

A.

Create a Cloud SQL for MySQL instance with high availability (HA) enabled. Configure automated backups of the Cloud SQL instance and use the default backup location.

B.

Create a Cloud SQL for MySQL instance with high availability (HA) disabled. Create a read replica in the same zone.

C.

Create a Cloud SQL for MySQL instance with high availability (HA) disabled. Configure automated backups of the Cloud SQL instance and use a custom backup location to store backups in a Cloud Storage bucket in the same region.

D.

Create a Cloud SQL for MySQL instance with high availability (HA) disabled. Create a read replica in a second region.

Question # 27

You are managing a Cloud SQL for MySQL environment in Google Cloud. You have deployed a primary instance in Zone A and a read replica instance in Zone B, both in the same region. You are notified that the replica instance in Zone B was unavailable for 10 minutes. You need to ensure that the read replica instance is still working. What should you do?

A.

Use the Google Cloud Console or gcloud CLI to manually create a new clone database.

B.

Use the Google Cloud Console or gcloud CLI to manually create a new failover replica from backup.

C.

Verify that the new replica is created automatically.

D.

Start the original primary instance and resume replication.

Question # 28

Your company's mission-critical, globally available application is supported by a Cloud Spanner database. Experienced users of the application have read and write access to the database, but new users are assigned read-only access to the database. You need to assign the appropriate Cloud Spanner Identity and Access Management (IAM) role to new users being onboarded soon. What roles should you set up?

A.

roles/spanner.databaseReader

B.

roles/spanner.databaseUser

C.

roles/spanner.viewer

D.

roles/spanner.backupWriter

Question # 29

You are deploying a new Cloud SQL instance on Google Cloud using the Cloud SQL Auth proxy. You have identified snippets of application code that need to access the new Cloud SQL instance. The snippets reside and execute on an application server running on a Compute Engine machine. You want to follow Google-recommended practices to set up Identity and Access Management (IAM) as quickly and securely as possible. What should you do?

A.

For each application code, set up a common shared user account.

B.

For each application code, set up a dedicated user account.

C.

For the application server, set up a service account.

D.

For the application server, set up a common shared user account.

Question # 30

During an internal audit, you realized that one of your Cloud SQL for MySQL instances does not have high availability (HA) enabled. You want to follow Google-recommended practices to enable HA on your existing instance. What should you do?

A.

Create a new Cloud SQL for MySQL instance, enable HA, and use the export and import option to migrate your data.

B.

Create a new Cloud SQL for MySQL instance, enable HA, and use Cloud Data Fusion to migrate your data.

C.

Use the gcloud sql instances patch command to update your existing Cloud SQL for MySQL instance.

D.

Shut down your existing Cloud SQL for MySQL instance, and enable HA.
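As context: availability type is an in-place setting on an existing instance, equivalent to running gcloud sql instances patch with --availability-type=REGIONAL. A sketch with placeholder names:

```python
# Sketch only: switch an existing instance to regional (HA) availability.
import googleapiclient.discovery

service = googleapiclient.discovery.build("sqladmin", "v1")
service.instances().patch(
    project="example-project",
    instance="example-instance",
    body={"settings": {"availabilityType": "REGIONAL"}},
).execute()
```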

Question # 31

You are designing a highly available (HA) Cloud SQL for PostgreSQL instance that will be used by 100 databases. Each database contains 80 tables that were migrated from your on-premises environment to Google Cloud. The applications that use these databases are located in multiple regions in the US, and you need to ensure that read and write operations have low latency. What should you do?

A.

Deploy 2 Cloud SQL instances in the us-central1 region with HA enabled, and create read replicas in us-east1 and us-west1.

B.

Deploy 2 Cloud SQL instances in the us-central1 region, and create read replicas in us-east1 and us-west1.

C.

Deploy 4 Cloud SQL instances in the us-central1 region with HA enabled, and create read replicas in us-central1, us-east1, and us-west1.

D.

Deploy 4 Cloud SQL instances in the us-central1 region, and create read replicas in us-central1, us-east1 and us-west1.

Question # 32

You are running a mission-critical application on a Cloud SQL for PostgreSQL database with a multi-zonal setup. The primary and read replica instances are in the same region but in different zones. You need to ensure that you split the application load between both instances. What should you do?

A.

Use Cloud Load Balancing for load balancing between the Cloud SQL primary and read replica instances.

B.

Use PgBouncer to set up database connection pooling between the Cloud SQL primary and read replica instances.

C.

Use HTTP(S) Load Balancing for database connection pooling between the Cloud SQL primary and read replica instances.

D.

Use the Cloud SQL Auth proxy for database connection pooling between the Cloud SQL primary and read replica instances.

Question # 33

You want to migrate an on-premises mission-critical PostgreSQL database to Cloud SQL. The database must be able to withstand a zonal failure with less than five minutes of downtime and still not lose any transactions. You want to follow Google-recommended practices for the migration. What should you do?

A.

Take nightly snapshots of the primary database instance, and restore them in a secondary zone.

B.

Build a change data capture (CDC) pipeline to read transactions from the primary instance, and replicate them to a secondary instance.

C.

Create a read replica in another region, and promote the read replica if a failure occurs.

D.

Enable high availability (HA) for the database to make it regional.

Question # 34

Your organization has an existing app that just went viral. The app uses a Cloud SQL for MySQL backend database that is experiencing slow disk performance while using hard disk drives (HDDs). You need to improve performance and reduce disk I/O wait times. What should you do?

A.

Export the data from the existing instance, and import the data into a new instance with solid-state drives (SSDs).

B.

Edit the instance to change the storage type from HDD to SSD.

C.

Create a high availability (HA) failover instance with SSDs, and perform a failover to the new instance.

D.

Create a read replica of the instance with SSDs, and perform a failover to the new instance.

Question # 35

You plan to use Database Migration Service to migrate data from a PostgreSQL on-premises instance to Cloud SQL. You need to identify the prerequisites for creating and automating the task. What should you do? (Choose two.)

A.

Drop or disable all users except database administration users.

B.

Disable all foreign key constraints on the source PostgreSQL database.

C.

Ensure that all PostgreSQL tables have a primary key.

D.

Shut down the database before the Data Migration Service task is started.

E.

Ensure that pglogical is installed on the source PostgreSQL database.
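For context on these prerequisites: the source database needs the pglogical extension, and tables without primary keys need attention before continuous replication. A hedged sketch that lists such tables; the connection parameters are placeholders:

```python
# Sketch only: find source tables that lack a primary key.
# (pglogical must also be installed on the source: CREATE EXTENSION pglogical;)
import psycopg2

conn = psycopg2.connect(host="10.0.0.5", dbname="appdb",
                        user="postgres", password="example")
with conn.cursor() as cur:
    cur.execute("""
        SELECT t.table_schema, t.table_name
        FROM information_schema.tables t
        LEFT JOIN information_schema.table_constraints c
          ON c.table_schema = t.table_schema
         AND c.table_name = t.table_name
         AND c.constraint_type = 'PRIMARY KEY'
        WHERE t.table_type = 'BASE TABLE'
          AND t.table_schema NOT IN ('pg_catalog', 'information_schema')
          AND c.constraint_name IS NULL
        ORDER BY 1, 2
    """)
    for schema, table in cur.fetchall():
        print(f"Missing primary key: {schema}.{table}")
```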

Question # 36

You are building an Android game that needs to store data on a Google Cloud serverless database. The database will log user activity, store user preferences, and receive in-game updates. The target audience resides in developing countries that have intermittent internet connectivity. You need to ensure that the game can synchronize game data to the backend database whenever an internet network is available. What should you do?

A.

Use Firestore.

B.

Use Cloud SQL with an external (public) IP address.

C.

Use an in-app embedded database.

D.

Use Cloud Spanner.

Question # 37

Your company uses Cloud Spanner for a mission-critical inventory management system that is globally available. You recently loaded stock keeping unit (SKU) and product catalog data from a company acquisition and observed hot-spots in the Cloud Spanner database. You want to follow Google-recommended schema design practices to avoid performance degradation. What should you do? (Choose two.)

A.

Use an auto-incrementing value as the primary key.

B.

Normalize the data model.

C.

Promote low-cardinality attributes in multi-attribute primary keys.

D.

Promote high-cardinality attributes in multi-attribute primary keys.

E.

Use a bit-reversed sequential value as the primary key.
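To illustrate the key-design trade-off behind these options: monotonically increasing keys concentrate writes on one split, while bit-reversed or high-cardinality leading values spread them out. A self-contained sketch:

```python
# Sketch only: two write-spreading strategies for Spanner primary keys.
import uuid

def bit_reverse_64(n: int) -> int:
    """Reverse the bits of a 64-bit unsigned integer."""
    result = 0
    for _ in range(64):
        result = (result << 1) | (n & 1)
        n >>= 1
    return result

# Sequential IDs (1001, 1002, ...) land on the same split; their
# bit-reversed forms scatter across the whole key space.
for order_id in (1001, 1002, 1003):
    print(f"{order_id} -> {bit_reverse_64(order_id):020d}")

# A UUIDv4 is another high-cardinality choice for a leading key part.
print(str(uuid.uuid4()))
```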

Question # 38

Your online delivery business that primarily serves retail customers uses Cloud SQL for MySQL for its inventory and scheduling application. The required recovery time objective (RTO) and recovery point objective (RPO) must be in minutes rather than hours as a part of your high availability and disaster recovery design. You need a high availability configuration that can recover without data loss during a zonal or a regional failure. What should you do?

A.

Set up all read replicas in a different region using asynchronous replication.

B.

Set up all read replicas in the same region as the primary instance with synchronous replication.

C.

Set up read replicas in different zones of the same region as the primary instance with synchronous replication, and set up read replicas in different regions with asynchronous replication.

D.

Set up read replicas in different zones of the same region as the primary instance with asynchronous replication, and set up read replicas in different regions with synchronous replication.

Question # 39

Your organization stores marketing data such as customer preferences and purchase history on Bigtable. The consumers of this database are predominantly data analysts and operations users. You receive a service ticket from the database operations department citing poor database performance between 9 AM-10 AM every day. The application team has confirmed no latency from their logs. A new cohort of pilot users that is testing a dataset loaded from a third-party data provider is experiencing poor database performance. Other users are not affected. You need to troubleshoot the issue. What should you do?

A.

Isolate the data analysts and operations user groups to use different Bigtable instances.

B.

Check the Cloud Monitoring table/bytes_used metric from Bigtable.

C.

Use Key Visualizer for Bigtable.

D.

Add more nodes to the Bigtable cluster.

Question # 40

Your company has PostgreSQL databases on-premises and on Amazon Web Services (AWS). You are planning multiple database migrations to Cloud SQL in an effort to reduce costs and downtime. You want to follow Google-recommended practices and use Google native data migration tools. You also want to closely monitor the migrations as part of the cutover strategy. What should you do?

A.

Use Database Migration Service to migrate all databases to Cloud SQL.

B.

Use Database Migration Service for one-time migrations, and use third-party or partner tools for change data capture (CDC) style migrations.

C.

Use data replication tools and CDC tools to enable migration.

D.

Use a combination of Database Migration Service and partner tools to support the data migration strategy.

Question # 41

Your company wants to migrate an Oracle-based application to Google Cloud. The application team currently uses Oracle Recovery Manager (RMAN) to back up the database to tape for long-term retention (LTR). You need a cost-effective backup and restore solution that meets a 2-hour recovery time objective (RTO) and a 15-minute recovery point objective (RPO). What should you do?

A.

Migrate the Oracle databases to Bare Metal Solution for Oracle, and store backups on tapes on-premises.

B.

Migrate the Oracle databases to Bare Metal Solution for Oracle, and use Actifio to store backup files on Cloud Storage using the Nearline Storage class.

C.

Migrate the Oracle databases to Bare Metal Solution for Oracle, and back up the Oracle databases to Cloud Storage using the Standard Storage class.

D.

Migrate the Oracle databases to Compute Engine, and store backups on tapes on-premises.

Question # 42

You are the DBA of your organization. You provided a cloned instance from the production Cloud SQL for PostgreSQL database to the developers for testing purposes. After the creation of the clone, your developers noticed missing data in one of the recently altered tables. What should you do to ensure that all data is included?

A.

Take a backup of the production database and restore it to another Cloud SQL for PostgreSQL instance. Provide access to the new instance to the developers.

B.

Check for missing roles and privileges in the cloned Cloud SQL instance. Grant missing privileges to the developers.

C.

Clone the current production database, and restore it to an earlier point in time (PITR). Provide access to the cloned instance to the developers.

D.

Dump the production database to a file. Modify the dumped file with ALTER TABLE ... SET LOGGED for tables that were unlogged in production. Reload the data into the new Cloud SQL for PostgreSQL instance.
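For reference, point-in-time clones are created through the same clone API as ordinary clones, with a timestamp added. A hedged sketch with placeholder names and a hypothetical timestamp:

```python
# Sketch only: clone a Cloud SQL instance to an earlier point in time (PITR).
import googleapiclient.discovery

service = googleapiclient.discovery.build("sqladmin", "v1")
body = {
    "cloneContext": {
        "destinationInstanceName": "dev-clone",
        "pointInTime": "2023-06-01T10:00:00.000Z",  # before the data went missing
    }
}
service.instances().clone(
    project="example-project", instance="prod-instance", body=body
).execute()
```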
