

ARA-C01 SnowPro Advanced: Architect Certification Exam Questions and Answers

Question # 4

An Architect needs to design a solution for building environments for development, test, and pre-production, all located in a single Snowflake account. The environments should be based on production data.

Which solution would be MOST cost-effective and performant?

A.

Use zero-copy cloning into transient tables.

B.

Use zero-copy cloning into permanent tables.

C.

Use CREATE TABLE ... AS SELECT (CTAS) statements.

D.

Use a Snowflake task to trigger a stored procedure to copy data.

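For reference, a minimal sketch of zero-copy cloning into a transient table (option A), using hypothetical database and table names:

-- Zero-copy clone: no data is physically duplicated at clone time.
-- TRANSIENT avoids Fail-safe storage costs for the non-production copy.
CREATE TRANSIENT TABLE dev_db.public.orders CLONE prod_db.public.orders;
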
Question # 5

A Snowflake Architect is designing a multi-tenant application strategy for an organization in the Snowflake Data Cloud and is considering using an Account Per Tenant strategy.

Which requirements will be addressed with this approach? (Choose two.)

A.

There need to be fewer objects per tenant.

B.

Security and Role-Based Access Control (RBAC) policies must be simple to configure.

C.

Compute costs must be optimized.

D.

Tenant data shape may be unique per tenant.

E.

Storage costs must be optimized.

Question # 6

A Snowflake Architect is designing a multiple-account strategy.

In which scenarios will this strategy be MOST cost-effective? (Select TWO).

A.

The company wants to clone a production database that resides on AWS to a development database that resides on Azure.

B.

The company needs to share data between two databases, where one must support Payment Card Industry Data Security Standard (PCI DSS) compliance but the other one does not.

C.

The company needs to support different role-based access control features for the development, test, and production environments.

D.

The company security policy mandates the use of different Active Directory instances for the development, test, and production environments.

E.

The company must use a specific network policy for certain users to allow and block given IP addresses.

Question # 7

A company wants to integrate its main enterprise identity provider with Snowflake using federated authentication.

The authentication integration has been configured and roles have been created in Snowflake. However, users are not automatically appearing in Snowflake when created, and their group membership is not reflected in their assigned roles.

How can the missing functionality be enabled with the LEAST amount of operational overhead?

A.

OAuth must be configured between the identity provider and Snowflake. Then the authorization server must be configured with the right mapping of users and roles.

B.

OAuth must be configured between the identity provider and Snowflake. Then the authorization server must be configured with the right mapping of users, and the resource server must be configured with the right mapping of role assignment.

C.

SCIM must be enabled between the identity provider and Snowflake. Once both are synchronized through SCIM, their groups will get created as group accounts in Snowflake and the proper roles can be granted.

D.

SCIM must be enabled between the identity provider and Snowflake. Once both are synchronized through SCIM, users will automatically get created and their group membership will be reflected as roles in Snowflake.

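For reference, a minimal sketch of enabling SCIM provisioning, shown here for Okta with hypothetical names (equivalents exist for Azure AD and generic SCIM clients):

USE ROLE ACCOUNTADMIN;
CREATE ROLE IF NOT EXISTS okta_provisioner;
GRANT CREATE USER ON ACCOUNT TO ROLE okta_provisioner;
GRANT CREATE ROLE ON ACCOUNT TO ROLE okta_provisioner;

-- The identity provider then calls Snowflake's SCIM endpoint to create
-- users and map groups to roles automatically.
CREATE OR REPLACE SECURITY INTEGRATION okta_scim
  TYPE = SCIM
  SCIM_CLIENT = 'OKTA'
  RUN_AS_ROLE = 'OKTA_PROVISIONER';
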
Question # 8

A company’s daily Snowflake workload consists of a very large number of concurrent queries triggered between 9 pm and 11 pm. Individually, these queries are small statements that complete within a short time period.

What configuration can the company’s Architect implement to enhance the performance of this workload? (Choose two.)

A.

Enable a multi-cluster virtual warehouse in maximized mode during the workload duration.

B.

Set the MAX_CONCURRENCY_LEVEL to a higher value than its default value of 8 at the virtual warehouse level.

C.

Increase the size of the virtual warehouse to size X-Large.

D.

Reduce the amount of data that is being processed through this workload.

E.

Set the connection timeout to a higher value than its default.

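For reference, a sketch of the two warehouse-level settings named in the options, using a hypothetical warehouse name:

-- Maximized mode: MIN_CLUSTER_COUNT = MAX_CLUSTER_COUNT (> 1), so all
-- clusters run for the duration of the workload.
ALTER WAREHOUSE night_wh SET
  MIN_CLUSTER_COUNT = 4
  MAX_CLUSTER_COUNT = 4;

-- Raise the concurrency target above its default of 8.
ALTER WAREHOUSE night_wh SET MAX_CONCURRENCY_LEVEL = 12;
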
Question # 9

A Snowflake Architect is designing an application and tenancy strategy for an organization where strong legal isolation rules as well as multi-tenancy are requirements.

Which approach will meet these requirements if Role-Based Access Policies (RBAC) is a viable option for isolating tenants?

A.

Create accounts for each tenant in the Snowflake organization.

B.

Create an object-per-tenant strategy if row-level security is viable for isolating tenants.

C.

Create an object-per-tenant strategy if row-level security is not viable for isolating tenants.

D.

Create a multi-tenant table strategy if row-level security is not viable for isolating tenants.

Question # 10

An Architect needs to allow a user to create a database from an inbound share.

To meet this requirement, the user’s role must have which privileges? (Choose two.)

A.

IMPORT SHARE;

B.

IMPORT PRIVILEGES;

C.

CREATE DATABASE;

D.

CREATE SHARE;

E.

IMPORT DATABASE;

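For reference, a sketch of granting the two privileges and then consuming the share, with hypothetical names:

USE ROLE ACCOUNTADMIN;
GRANT IMPORT SHARE ON ACCOUNT TO ROLE data_consumer;
GRANT CREATE DATABASE ON ACCOUNT TO ROLE data_consumer;

USE ROLE data_consumer;
CREATE DATABASE shared_sales FROM SHARE provider_acct.sales_share;
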
Question # 11

A company is designing its serving layer for data that is in cloud storage. Multiple terabytes of the data will be used for reporting. Some data does not have a clear use case but could be useful for experimental analysis. This experimentation data changes frequently and is sometimes wiped out and replaced completely in a few days.

The company wants to centralize access control, provide a single point of connection for the end-users, and maintain data governance.

What solution meets these requirements while MINIMIZING costs, administrative effort, and development overhead?

A.

Import the data used for reporting into a Snowflake schema with native tables. Then create external tables pointing to the cloud storage folders used for the experimentation data. Then create two different roles with grants to the different datasets to match the different user personas, and grant these roles to the corresponding users.

B.

Import all the data in cloud storage to be used for reporting into a Snowflake schema with native tables. Then create a role that has access to this schema and manage access to the data through that role.

C.

Import all the data in cloud storage to be used for reporting into a Snowflake schema with native tables. Then create two different roles with grants to the different datasets to match the different user personas, and grant these roles to the corresponding users.

D.

Import the data used for reporting into a Snowflake schema with native tables. Then create views that have SELECT commands pointing to the cloud storage files for the experimentation data. Then create two different roles to match the different user personas, and grant these roles to the corresponding users.

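For reference, a sketch of the external-table piece of option A, with a hypothetical stage and path:

-- Experimentation data stays in cloud storage; Snowflake reads it in place.
CREATE OR REPLACE EXTERNAL TABLE analytics.exp.events
  LOCATION = @analytics.exp.ext_stage/event_logs/
  FILE_FORMAT = (TYPE = PARQUET)
  AUTO_REFRESH = TRUE;
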
Question # 12

What are purposes for creating a storage integration? (Choose three.)

A.

Control access to Snowflake data using a master encryption key that is maintained in the cloud provider’s key management service.

B.

Store a generated identity and access management (IAM) entity for an external cloud provider regardless of the cloud provider that hosts the Snowflake account.

C.

Support multiple external stages using one single Snowflake object.

D.

Avoid supplying credentials when creating a stage or when loading or unloading data.

E.

Create private VPC endpoints that allow direct, secure connectivity between VPCs without traversing the public internet.

F.

Manage credentials from multiple cloud providers in one single Snowflake object.

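For reference, a sketch of one storage integration backing multiple stages without embedded credentials, using hypothetical AWS resources:

CREATE STORAGE INTEGRATION s3_int
  TYPE = EXTERNAL_STAGE
  STORAGE_PROVIDER = 'S3'
  ENABLED = TRUE
  STORAGE_AWS_ROLE_ARN = 'arn:aws:iam::123456789012:role/snowflake_access'
  STORAGE_ALLOWED_LOCATIONS = ('s3://acme-raw/', 's3://acme-curated/');

-- Stages reference the integration; no keys or secrets are supplied.
CREATE STAGE raw_stage
  URL = 's3://acme-raw/events/'
  STORAGE_INTEGRATION = s3_int;

CREATE STAGE curated_stage
  URL = 's3://acme-curated/'
  STORAGE_INTEGRATION = s3_int;
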
Question # 13

Which of the following are characteristics of Snowflake’s parameter hierarchy?

A.

Session parameters override virtual warehouse parameters.

B.

Virtual warehouse parameters override user parameters.

C.

Table parameters override virtual warehouse parameters.

D.

Schema parameters override account parameters.

Question # 14

A DevOps team has a requirement for recovery of staging tables used in a complex set of data pipelines. The staging tables are all located in the same staging schema. One of the requirements is to have online recovery of data on a rolling 7-day basis.

After setting DATA_RETENTION_TIME_IN_DAYS at the database level, certain tables remain unrecoverable past 1 day.

What would cause this to occur? (Choose two.)

A.

The staging schema has not been set up for MANAGED ACCESS.

B.

The DATA_RETENTION_TIME_IN_DAYS for the staging schema has been set to 1 day.

C.

The tables exceed the 1 TB limit for data recovery.

D.

The staging tables are of the TRANSIENT type.

E.

The DevOps role should be granted ALLOW_RECOVERY privilege on the staging schema.

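For reference, a sketch of setting and checking retention, with hypothetical names; note that transient tables support at most 1 day of Time Travel regardless of the parameter:

ALTER DATABASE staging_db SET DATA_RETENTION_TIME_IN_DAYS = 7;

-- A schema- or table-level setting overrides the database-level value:
SHOW PARAMETERS LIKE 'DATA_RETENTION_TIME_IN_DAYS'
  IN TABLE staging_db.staging.stg_orders;
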
Question # 15

An Architect is troubleshooting a query with poor performance using the QUERY_HISTORY function. The Architect observes that the COMPILATION_TIME is greater than the EXECUTION_TIME.

What is the reason for this?

A.

The query is processing a very large dataset.

B.

The query has overly complex logic.

C.

The query is queued for execution.

D.

The query is reading from remote storage.

Question # 16

Which of the following objects can be cloned in Snowflake?

A.

Permanent table

B.

Transient table

C.

Temporary table

D.

External tables

E.

Internal stages

Question # 17

Which data models can be used when modeling tables in a Snowflake environment? (Select THREE).

A.

Graph model

B.

Dimensional/Kimball

C.

Data lake

D.

Inmon/3NF

E.

Bayesian hierarchical model

F.

Data vault

Question # 18

A Data Engineer is designing a near real-time ingestion pipeline for a retail company to ingest event logs into Snowflake to derive insights. A Snowflake Architect is asked to define security best practices for configuring access control privileges for the data load, using Snowpipe auto-ingest.

What are the MINIMUM object privileges required for the Snowpipe user to execute Snowpipe?

A.

OWNERSHIP on the named pipe, USAGE on the named stage, target database, and schema, and INSERT and SELECT on the target table

B.

OWNERSHIP on the named pipe, USAGE and READ on the named stage, USAGE on the target database and schema, and INSERT and SELECT on the target table

C.

CREATE on the named pipe, USAGE and READ on the named stage, USAGE on the target database and schema, and INSERT and SELECT on the target table

D.

USAGE on the named pipe, named stage, target database, and schema, and INSERT and SELECT on the target table

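For reference, a sketch of the grants described in option B, using hypothetical names (READ applies to internal stages; an external stage needs only USAGE):

GRANT USAGE ON DATABASE ingest_db TO ROLE snowpipe_role;
GRANT USAGE ON SCHEMA ingest_db.raw TO ROLE snowpipe_role;
GRANT USAGE ON STAGE ingest_db.raw.events_stage TO ROLE snowpipe_role;
GRANT INSERT, SELECT ON TABLE ingest_db.raw.events TO ROLE snowpipe_role;
GRANT OWNERSHIP ON PIPE ingest_db.raw.events_pipe TO ROLE snowpipe_role;
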
Question # 19

Company A would like to share data in Snowflake with Company B. Company B is not on the same cloud platform as Company A.

What is required to allow data sharing between these two companies?

A.

Create a pipeline to write shared data to a cloud storage location in the target cloud provider.

B.

Ensure that all views are persisted, as views cannot be shared across cloud platforms.

C.

Set up data replication to the region and cloud platform where the consumer resides.

D.

Company A and Company B must agree to use a single cloud platform; data sharing is only possible if the companies share the same cloud provider.

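For reference, a sketch of replicating a database to an account in the consumer's cloud before sharing, with hypothetical organization and account names:

-- On Company A's primary account:
ALTER DATABASE sales_db ENABLE REPLICATION TO ACCOUNTS acme_org.azure_eastus2;

-- On Company A's account in the consumer's region/cloud:
CREATE DATABASE sales_db AS REPLICA OF acme_org.aws_useast1.sales_db;
ALTER DATABASE sales_db REFRESH;
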
Question # 20

Which steps are recommended best practices for prioritizing cluster keys in Snowflake? (Choose two.)

A.

Choose columns that are frequently used in join predicates.

B.

Choose lower cardinality columns to support clustering keys and cost effectiveness.

C.

Choose TIMESTAMP columns with nanoseconds for the highest number of unique rows.

D.

Choose cluster columns that are most actively used in selective filters.

E.

Choose cluster columns that are actively used in the GROUP BY clauses.

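For reference, a sketch of defining a clustering key on columns used in selective filters and joins, with hypothetical names; casting a timestamp down to a date is a common way to keep the key's cardinality manageable:

ALTER TABLE sales CLUSTER BY (TO_DATE(sale_ts), store_id);
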
Question # 21

Which Snowflake objects can be used in a data share? (Select TWO).

A.

Standard view

B.

Secure view

C.

Stored procedure

D.

External table

E.

Stream

Question # 22

What is a key consideration when setting up search optimization service for a table?

A.

Search optimization service works best with a column that has a minimum of 100K distinct values.

B.

Search optimization service can significantly improve query performance on partitioned external tables.

C.

Search optimization service can help to optimize storage usage by compressing the data into a GZIP format.

D.

The table must be clustered with a key having multiple columns for effective search optimization.

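For reference, a sketch of enabling the search optimization service, with hypothetical names:

ALTER TABLE events ADD SEARCH OPTIMIZATION;

-- Or scope it to particular columns and lookup types:
ALTER TABLE events ADD SEARCH OPTIMIZATION ON EQUALITY(session_id);
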
Question # 23

Which Snowflake data modeling approach is designed for BI queries?

A.

3NF

B.

Star schema

C.

Data Vault

D.

Snowflake schema

Question # 24

An event table has 150B rows and 1.5M micro-partitions, with the following statistics:

Column        NDV*
A_ID          11K
C_DATE        110
NAME          300K
EVENT_ACT_0   1.1G
EVENT_ACT_4   2.2G

*NDV = Number of Distinct Values

Which three clustering keys should be used, and in what order?

A.

C_DATE, A_ID, NAME

B.

A_ID, NAME, C_DATE

C.

C_DATE, A_ID, EVENT_ACT_0

D.

C_DATE, A_ID, EVENT_ACT_4

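For reference, a sketch of applying a multi-column clustering key ordered from lowest to higher cardinality (the general best practice), using the column names from the statistics above:

ALTER TABLE event_table CLUSTER BY (C_DATE, A_ID, NAME);
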
Question # 25

When loading data into a table that captures the load time in a column with a default value of either CURRENT_TIME() or CURRENT_TIMESTAMP(), what will occur?

A.

All rows loaded using a specific COPY statement will have varying timestamps based on when the rows were inserted.

B.

Any rows loaded using a specific COPY statement will have varying timestamps based on when the rows were read from the source.

C.

Any rows loaded using a specific COPY statement will have varying timestamps based on when the rows were created in the source.

D.

All rows loaded using a specific COPY statement will have the same timestamp value.

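For reference, a sketch of such a table, with hypothetical names; in Snowflake, date/time functions such as CURRENT_TIMESTAMP() are evaluated as of the start of the statement, not per row:

CREATE TABLE raw_orders (
  order_payload VARIANT,
  load_ts TIMESTAMP_LTZ DEFAULT CURRENT_TIMESTAMP()
);

COPY INTO raw_orders (order_payload)
  FROM @orders_stage/2024-01-01/
  FILE_FORMAT = (TYPE = JSON);
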
Question # 26

An Architect would like to save quarter-end financial results for the previous six years.

Which Snowflake feature can the Architect use to accomplish this?

A.

Search optimization service

B.

Materialized view

C.

Time Travel

D.

Zero-copy cloning

E.

Secure views

Question # 27

A company has a Snowflake account named ACCOUNTA in the AWS us-east-1 region. The company stores its marketing data in a Snowflake database named MARKET_DB. One of the company’s business partners has an account named PARTNERB in the Azure East US 2 region. For marketing purposes, the company has agreed to share the database MARKET_DB with the partner account.

Which of the following steps MUST be performed for the account PARTNERB to consume data from the MARKET_DB database?

A.

Create a new account (called AZABC123) in Azure East US 2 region. From account ACCOUNTA create a share of database MARKET_DB, create a new database out of this share locally in AWS us-east-1 region, and replicate this new database to AZABC123 account. Then set up data sharing to the PARTNERB account.

B.

From account ACCOUNTA create a share of database MARKET_DB, and create a new database out of this share locally in AWS us-east-1 region. Then make this database the provider and share it with the PARTNERB account.

C.

Create a new account (called AZABC123) in Azure East US 2 region. From account ACCOUNTA replicate the database MARKET_DB to AZABC123 and from this account set up the data sharing to the PARTNERB account.

D.

Create a share of database MARKET_DB, and create a new database out of this share locally in AWS us-east-1 region. Then replicate this database to the partner’s account PARTNERB.

Question # 28

An Architect is designing a Snowflake architecture to support fast Data Analyst reporting. To optimize costs, the virtual warehouse is configured to auto-suspend after 2 minutes of idle time. Queries are run once in the morning after the refresh, but queries run later in the day are slow.

Why is this occurring?

A.

The warehouse is not large enough.

B.

The warehouse was not configured as a multi-cluster warehouse.

C.

The warehouse was not created with USE_CACHE = TRUE.

D.

When the warehouse was suspended, the cache was dropped.

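For reference, a sketch of lengthening the auto-suspend window so the warehouse cache survives between query bursts, with a hypothetical warehouse name:

-- 600 seconds instead of 120; a suspended warehouse loses its local disk cache.
ALTER WAREHOUSE analyst_wh SET AUTO_SUSPEND = 600;
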
Question # 29

A retail company has over 3000 stores all using the same Point of Sale (POS) system. The company wants to deliver near real-time sales results to category managers. The stores operate in a variety of time zones and exhibit a dynamic range of transactions each minute, with some stores having higher sales volumes than others.

Sales results are provided in a uniform fashion using data engineered fields that will be calculated in a complex data pipeline. Calculations include exceptions, aggregations, and scoring using external functions interfaced to scoring algorithms. The source data for aggregations has over 100M rows.

Every minute, the POS sends all sales transaction files to a cloud storage location with a naming convention that includes store numbers and timestamps to identify the set of transactions contained in the files. The files are typically less than 10 MB in size.

How can the near real-time results be provided to the category managers? (Select TWO).

A.

All files should be concatenated before ingestion into Snowflake to avoid micro-ingestion.

B.

A Snowpipe should be created and configured with AUTO_INGEST = true. A stream should be created to process INSERTS into a single target table using the stream metadata to inform the store number and timestamps.

C.

A stream should be created to accumulate the near real-time data and a task should be created that runs at a frequency that matches the real-time analytics needs.

D.

An external scheduler should examine the contents of the cloud storage location and issue SnowSQL commands to process the data at a frequency that matches the real-time analytics needs.

E.

The copy into command with a task scheduled to run every second should be used to achieve the near-real time requirement.

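For reference, a sketch of pairing a stream with a scheduled task (the pattern described in options B and C), with hypothetical names and a simplified aggregation:

CREATE STREAM pos_stream ON TABLE raw_pos;

CREATE TASK pos_task
  WAREHOUSE = transform_wh
  SCHEDULE = '1 MINUTE'
  WHEN SYSTEM$STREAM_HAS_DATA('POS_STREAM')
AS
  INSERT INTO sales_results (store_id, sale_minute, total_amount)
  SELECT store_id, DATE_TRUNC('MINUTE', sale_ts), SUM(amount)
  FROM pos_stream
  GROUP BY store_id, DATE_TRUNC('MINUTE', sale_ts);

ALTER TASK pos_task RESUME;
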
Question # 30

The following statements have been executed successfully:

USE ROLE SYSADMIN;

CREATE OR REPLACE DATABASE DEV_TEST_DB;

CREATE OR REPLACE SCHEMA DEV_TEST_DB.SCHTEST WITH MANAGED ACCESS;

GRANT USAGE ON DATABASE DEV_TEST_DB TO ROLE DEV_PROJ_OWN;

GRANT USAGE ON SCHEMA DEV_TEST_DB.SCHTEST TO ROLE DEV_PROJ_OWN;

GRANT USAGE ON DATABASE DEV_TEST_DB TO ROLE ANALYST_PROJ;

GRANT USAGE ON SCHEMA DEV_TEST_DB.SCHTEST TO ROLE ANALYST_PROJ;

GRANT CREATE TABLE ON SCHEMA DEV_TEST_DB.SCHTEST TO ROLE DEV_PROJ_OWN;

USE ROLE DEV_PROJ_OWN;

CREATE OR REPLACE TABLE DEV_TEST_DB.SCHTEST.CURRENCY (

COUNTRY VARCHAR(255),

CURRENCY_NAME VARCHAR(255),

ISO_CURRENCY_CODE VARCHAR(15),

CURRENCY_CD NUMBER(38,0),

MINOR_UNIT VARCHAR(255),

WITHDRAWAL_DATE VARCHAR(255)

);

The role hierarchy is as follows (simplified from the diagram):

    ACCOUNTADMIN
    └─ DEV_SYSADMIN
       └─ DEV_PROJ_OWN
          └─ ANALYST_PROJ

Separately:

    ACCOUNTADMIN
    └─ SYSADMIN
       └─ MAPPING_ROLE

Which statements will return the records from the table DEV_TEST_DB.SCHTEST.CURRENCY? (Select TWO).

A.

USE ROLE DEV_PROJ_OWN;

GRANT SELECT ON DEV_TEST_DB.SCHTEST.CURRENCY TO ROLE ANALYST_PROJ;

USE ROLE ANALYST_PROJ;

SELECT * FROM DEV_TEST_DB.SCHTEST.CURRENCY;

B.

USE ROLE DEV_PROJ_OWN;

SELECT * FROM DEV_TEST_DB.SCHTEST.CURRENCY;

C.

USE ROLE SYSADMIN;

SELECT * FROM DEV_TEST_DB.SCHTEST.CURRENCY;

D.

USE ROLE MAPPING_ROLE;

SELECT * FROM DEV_TEST_DB.SCHTEST.CURRENCY;

E.

USE ROLE ACCOUNTADMIN;

SELECT * FROM DEV_TEST_DB.SCHTEST.CURRENCY;

Question # 31

An Architect needs to improve the performance of reports that pull data from multiple Snowflake tables, join, and then aggregate the data. Users access the reports using several dashboards. There are performance issues on Monday mornings between 9:00am-11:00am when many users check the sales reports.

The size of the group has increased from 4 to 8 users. Waiting times to refresh the dashboards have increased significantly. Currently this workload is being served by a virtual warehouse with the following parameters:

AUTO_RESUME = TRUE
AUTO_SUSPEND = 60
SIZE = MEDIUM

What is the MOST cost-effective way to increase the availability of the reports?

A.

Use materialized views and pre-calculate the data.

B.

Increase the warehouse to size Large and set auto_suspend = 600.

C.

Use a multi-cluster warehouse in maximized mode with 2 size Medium clusters.

D.

Use a multi-cluster warehouse in auto-scale mode with 1 size Medium cluster, and set min_cluster_count = 1 and max_cluster_count = 4.

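For reference, a sketch of the configuration described in option D, with a hypothetical warehouse name:

ALTER WAREHOUSE report_wh SET
  WAREHOUSE_SIZE = MEDIUM
  MIN_CLUSTER_COUNT = 1
  MAX_CLUSTER_COUNT = 4
  SCALING_POLICY = 'STANDARD';
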
Question # 32

Which of the following commands will use warehouse credits?

A.

SHOW TABLES LIKE 'SNOWFL%';

B.

SELECT MAX(FLAKE_ID) FROM SNOWFLAKE;

C.

SELECT COUNT(*) FROM SNOWFLAKE;

D.

SELECT COUNT(FLAKE_ID) FROM SNOWFLAKE GROUP BY FLAKE_ID;

Question # 33

A Developer is having a performance issue with a Snowflake query. The query receives up to 10 different values for one parameter and then performs an aggregation over the majority of a fact table. It then joins against a smaller dimension table. This parameter value is selected by the different query users when they execute it during business hours. Both the fact and dimension tables are loaded with new data in an overnight import process.

On a Small or Medium-sized virtual warehouse, the query performs slowly. Performance is acceptable on a size Large or bigger warehouse. However, there is no budget to increase costs. The Developer needs a recommendation that does not increase compute costs to run this query.

What should the Architect recommend?

A.

Create a task that will run the 10 different variations of the query corresponding to the 10 different parameters before the users come in to work. The query results will then be cached and ready to respond quickly when the users re-issue the query.

B.

Create a task that will run the 10 different variations of the query corresponding to the 10 different parameters before the users come in to work. The task will be scheduled to align with the users' working hours in order to allow the warehouse cache to be used.

C.

Enable the search optimization service on the table. When the users execute the query, the search optimization service will automatically adjust the query execution plan based on the frequently-used parameters.

D.

Create a dedicated size Large warehouse for this particular set of queries. Create a new role that has USAGE permission on this warehouse and has the appropriate read permissions over the fact and dimension tables. Have users switch to this role and use this warehouse when they want to access this data.

Question # 34

Based on the architecture in the image (not reproduced here), how can the data from DB1 be copied into TBL2? (Select TWO).

[Answer options A through E are shown as images in the original exam and are not reproduced here.]

A.

Option A

B.

Option B

C.

Option C

D.

Option D

E.

Option E

Question # 35

A media company needs a data pipeline that will ingest customer review data into a Snowflake table and apply some transformations. The company also needs to use Amazon Comprehend to do sentiment analysis and make the de-identified final data set available publicly for advertising companies that use different cloud providers in different regions.

The data pipeline needs to run continuously and efficiently as new records arrive in the object storage, leveraging event notifications. Also, the operational complexity, the maintenance of the infrastructure (including platform upgrades and security), and the development effort should be minimal.

Which design will meet these requirements?

A.

Ingest the data using COPY INTO and use streams and tasks to orchestrate transformations. Export the data into Amazon S3 to do model inference with Amazon Comprehend and ingest the data back into a Snowflake table. Then create a listing in the Snowflake Marketplace to make the data available to other companies.

B.

Ingest the data using Snowpipe and use streams and tasks to orchestrate transformations. Create an external function to do model inference with Amazon Comprehend and write the final records to a Snowflake table. Then create a listing in the Snowflake Marketplace to make the data available to other companies.

C.

Ingest the data into Snowflake using Amazon EMR and PySpark using the Snowflake Spark connector. Apply transformations using another Spark job. Develop a python program to do model inference by leveraging the Amazon Comprehend text analysis API. Then write the results to a Snowflake table and create a listing in the Snowflake Marketplace to make the data available to other companies.

D.

Ingest the data using Snowpipe and use streams and tasks to orchestrate transformations. Export the data into Amazon S3 to do model inference with Amazon Comprehend and ingest the data back into a Snowflake table. Then create a listing in the Snowflake Marketplace to make the data available to other companies.

Question # 36

Files arrive in an external stage every 10 seconds from a proprietary system. The files range in size from 500 KB to 3 MB. The data must be accessible by dashboards as soon as it arrives.

How can a Snowflake Architect meet this requirement with the LEAST amount of coding? (Choose two.)

A.

Use Snowpipe with auto-ingest.

B.

Use a COPY command with a task.

C.

Use a materialized view on an external table.

D.

Use the COPY INTO command.

E.

Use a combination of a task and a stream.

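For reference, a sketch of a Snowpipe with auto-ingest, using hypothetical names:

CREATE PIPE ingest_db.raw.files_pipe AUTO_INGEST = TRUE AS
  COPY INTO ingest_db.raw.landing
  FROM @ingest_db.raw.ext_stage
  FILE_FORMAT = (TYPE = CSV);

-- SHOW PIPES exposes the notification channel to wire cloud storage
-- event notifications to the pipe.
SHOW PIPES LIKE 'FILES_PIPE' IN SCHEMA ingest_db.raw;
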
Question # 37

A user has activated primary and secondary roles for a session.

What operation is the user prohibited from performing as part of SQL actions in Snowflake when using the secondary role?

A.

Insert

B.

Create

C.

Delete

D.

Truncate

Question # 38

A user, analyst_user, has been granted the analyst_role, and is deploying a SnowSQL script to run as a background service to extract data from Snowflake.

What steps should be taken to allow access from the required IP addresses? (Select TWO).

A.

ALTER ROLE ANALYST_ROLE SET NETWORK_POLICY = 'ANALYST_POLICY';

B.

ALTER USER ANALYST_USER SET NETWORK_POLICY = 'ANALYST_POLICY';

C.

ALTER USER ANALYST_USER SET NETWORK_POLICY = '10.1.1.20';

D.

USE ROLE SECURITYADMIN;
CREATE OR REPLACE NETWORK POLICY ANALYST_POLICY ALLOWED_IP_LIST = ('10.1.1.20');

E.

USE ROLE USERADMIN;
CREATE OR REPLACE NETWORK POLICY ANALYST_POLICY ALLOWED_IP_LIST = ('10.1.1.20');

Question # 39

An Architect executes the following statements in order:

CREATE TABLE emp (id INTEGER);

INSERT INTO emp VALUES (1),(2);

CREATE TEMPORARY TABLE emp (id INTEGER);

INSERT INTO emp VALUES (1);

Then executes:

SELECT COUNT(*) FROM emp;

DROP TABLE emp;

SELECT COUNT(*) FROM emp;

What will be the result?

A.

COUNT(*) = 2

COUNT(*) = 1

B.

COUNT(*) = 1

COUNT(*) = 2

C.

COUNT(*) = 2

COUNT(*) = 2

D.

The final query results in an error.

Question # 40

A retailer's enterprise data organization is exploring the use of Data Vault 2.0 to model its data lake solution. A Snowflake Architect has been asked to provide recommendations for using Data Vault 2.0 on Snowflake.

What should the Architect tell the data organization? (Select TWO).

A.

Change data capture can be performed using the Data Vault 2.0 HASH_DIFF concept.

B.

Change data capture can be performed using the Data Vault 2.0 HASH_DELTA concept.

C.

Using the multi-table insert feature in Snowflake, multiple Point-in-Time (PIT) tables can be loaded in parallel from a single join query from the data vault.

D.

Using the multi-table insert feature, multiple Point-in-Time (PIT) tables can be loaded sequentially from a single join query from the data vault.

E.

There are performance challenges when using Snowflake to load multiple Point-in-Time (PIT) tables in parallel from a single join query from the data vault.

Question # 41

A new user user_01 is created within Snowflake. The following two commands are executed:

Command 1→ SHOW GRANTS TO USER user_01;

Command 2→ SHOW GRANTS ON USER user_01;

What inferences can be made about these commands?

A.

Command 1 defines which user owns user_01. Command 2 defines all the grants which have been given to user_01.

B.

Command 1 defines all the grants which are given to user_01. Command 2 defines which user owns user_01.

C.

Command 1 defines which role owns user_01. Command 2 defines all the grants which have been given to user_01.

D.

Command 1 defines all the grants which are given to user_01. Command 2 defines which role owns user_01.

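For reference, a sketch of the two commands side by side:

SHOW GRANTS TO USER user_01;  -- roles that have been granted to the user
SHOW GRANTS ON USER user_01;  -- ownership/privileges held on the user object itself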