Which activities are included in the Cloud Services layer? (Select TWO).
Data storage
Dynamic data masking
Partition scanning
User authentication
Infrastructure management
The Cloud Services layer in Snowflake is responsible for a wide range of services that facilitate the management and use of Snowflake, including user authentication, infrastructure management, metadata management, query parsing and optimization, and access control. Of the options listed, user authentication and infrastructure management belong to this layer; data storage and partition scanning belong to the storage and query processing layers, and dynamic data masking is a governance feature rather than an activity of the layer itself.
These services are part of Snowflake's fully managed, cloud-based architecture, which abstracts and automates many of the complexities associated with data warehousing.
Which Snowflake edition offers the highest level of security for organizations that have the strictest requirements?
Standard
Enterprise
Business Critical
Virtual Private Snowflake (VPS)
The Virtual Private Snowflake (VPS) edition offers the highest level of security for organizations with the strictest security requirements. This edition provides a dedicated and isolated instance of Snowflake, including enhanced security features and compliance certifications to meet the needs of highly regulated industries or any organization requiring the utmost in data protection and privacy.
What is it called when a customer managed key is combined with a Snowflake managed key to create a composite key for encryption?
Hierarchical key model
Client-side encryption
Tri-secret secure encryption
Key pair authentication
Tri-secret secure encryption is a security model employed by Snowflake that involves combining a customer-managed key with a Snowflake-managed key to create a composite key for encrypting data. This model enhances data security by requiring both the customer-managed key and the Snowflake-managed key to decrypt data, thus ensuring that neither party can access the data independently. It represents a balanced approach to key management, leveraging both customer control and Snowflake's managed services for robust data encryption.
Regardless of which notation is used, what are considerations for writing the column name and element names when traversing semi-structured data?
The column name and element names are both case-sensitive.
The column name and element names are both case-insensitive.
The column name is case-sensitive but element names are case-insensitive.
The column name is case-insensitive but element names are case-sensitive.
When querying semi-structured data in Snowflake, the behavior towards case sensitivity is distinct between column names and the names of elements within the semi-structured data. Column names follow the general SQL norm of being case-insensitive, meaning you can reference them in any case without affecting the query. However, element names within JSON, XML, or other semi-structured data are case-sensitive. This distinction is crucial for accurate data retrieval and manipulation in Snowflake, especially when working with JSON objects where the case of keys can significantly alter the outcome of queries.
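A short sketch of this behavior, using a hypothetical table `car_sales` with a VARIANT column `src`:

```sql
CREATE OR REPLACE TABLE car_sales (src VARIANT);
INSERT INTO car_sales
  SELECT PARSE_JSON('{"dealership": "Valley View Auto"}');

-- The column name is case-insensitive: src, SRC, and Src all resolve.
SELECT SRC:dealership FROM car_sales;   -- returns "Valley View Auto"

-- Element names are case-sensitive: this key does not exist as written.
SELECT src:Dealership FROM car_sales;   -- returns NULL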
Which role has the ability to create a share from a shared database by default?
ACCOUNTADMIN
SECURITYADMIN
SYSADMIN
ORGADMIN
By default, the ACCOUNTADMIN role in Snowflake has the ability to create a share from a shared database. This role has the highest level of access within a Snowflake account, including the management of all aspects of the account, such as users, roles, warehouses, and databases, as well as the creation and management of shares for secure data sharing with other Snowflake accounts.
While clustering a table, columns with which data types can be used as clustering keys? (Select TWO).
BINARY
GEOGRAPHY
GEOMETRY
OBJECT
VARIANT
A clustering key can be defined when a table is created by appending a CLUSTER BY clause, or added later with ALTER TABLE ... CLUSTER BY. Each clustering key consists of one or more table columns or expressions, which can be of any data type except GEOGRAPHY, VARIANT, OBJECT, or ARRAY. Of the options listed, BINARY and GEOMETRY are therefore valid clustering key types. https://docs.snowflake.com/en/user-guide/tables-clustering-keys
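As a sketch with hypothetical table and column names, a key can be set at creation or afterwards:

```sql
-- Clustering keys may use most data types (e.g. DATE, VARCHAR, BINARY),
-- but not VARIANT, OBJECT, ARRAY, or GEOGRAPHY directly.
CREATE TABLE events (
    event_date DATE,
    region     VARCHAR,
    payload    VARIANT
)
CLUSTER BY (event_date, region);

-- An expression that casts a semi-structured element to a clusterable
-- type is allowed, even though VARIANT itself is not:
ALTER TABLE events CLUSTER BY (event_date, payload:country::VARCHAR);
```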
A Snowflake user wants to temporarily bypass a network policy by configuring the user object property MINS_TO_BYPASS_NETWORK_POLICY.
What should they do?
Use the SECURITYADMIN role.
Use the SYSADMIN role.
Use the USERADMIN role.
Contact Snowflake Support.
The MINS_TO_BYPASS_NETWORK_POLICY user property cannot be set by customers directly, not even with the SECURITYADMIN or USERADMIN roles; its value can only be set by Snowflake Support. A user who needs to temporarily bypass a network policy must therefore contact Snowflake Support, which can set the property to permit access for a limited number of minutes without permanently altering the network security configuration.
Which Snowflake mechanism is used to limit the number of micro-partitions scanned by a query?
Caching
Cluster depth
Query pruning
Retrieval optimization
Query pruning in Snowflake is the mechanism used to limit the number of micro-partitions scanned by a query. By analyzing the filters and conditions applied in a query, Snowflake can skip over micro-partitions that do not contain relevant data, thereby reducing the amount of data processed and improving query performance. This technique is particularly effective for large datasets and is a key component of Snowflake's performance optimization features.
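A sketch of how pruning interacts with a filter, assuming a hypothetical `sales` table:

```sql
-- Snowflake keeps min/max metadata per micro-partition; a selective
-- filter lets it skip partitions whose value range excludes the predicate.
SELECT COUNT(*)
FROM sales
WHERE sale_date = '2023-06-01';

-- The query profile (or EXPLAIN) reports "Partitions scanned" against
-- "Partitions total", showing how effective pruning was.
EXPLAIN SELECT COUNT(*) FROM sales WHERE sale_date = '2023-06-01';
```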
What is the Fail-safe period for a transient table in the Snowflake Enterprise edition and higher?
0 days
1 day
7 days
14 days
The Fail-safe period for a transient table in Snowflake, regardless of the edition (including Enterprise edition and higher), is 0 days. Fail-safe is a data protection feature that provides additional retention beyond the Time Travel period for recovering data in case of accidental deletion or corruption. However, transient tables are designed for temporary or short-term use and do not benefit from the Fail-safe feature, meaning that once their Time Travel period expires, data cannot be recovered.
If a virtual warehouse runs for 61 seconds, shut down, and then restart and runs for 30 seconds, for how many seconds is it billed?
60
91
120
121
Snowflake bills virtual warehouse usage per second, with a 60-second minimum each time a warehouse starts or resumes. The first run of 61 seconds exceeds the minimum, so it is billed as 61 seconds. The second run of 30 seconds falls below the minimum, so it is billed as 60 seconds. The total billed time is therefore 61 + 60 = 121 seconds.
Which data types optimally store semi-structured data? (Select TWO).
ARRAY
CHARACTER
STRING
VARCHAR
VARIANT
In Snowflake, semi-structured data is optimally stored using specific data types that are designed to handle the flexibility and complexity of such data. The VARIANT data type can store structured and semi-structured data types, including JSON, Avro, ORC, Parquet, or XML, in a single column. The ARRAY data type, on the other hand, is suitable for storing ordered sequences of elements, which can be particularly useful for semi-structured data types like JSON arrays. These data types provide the necessary flexibility to store and query semi-structured data efficiently in Snowflake.
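A minimal sketch of both types, with hypothetical names:

```sql
CREATE OR REPLACE TABLE events (
    raw  VARIANT,   -- stores a whole JSON/Avro/ORC/Parquet/XML value
    tags ARRAY      -- stores an ordered sequence of values
);

INSERT INTO events
  SELECT PARSE_JSON('{"user": "ana", "action": "login"}'),
         ARRAY_CONSTRUCT('web', 'prod');

SELECT raw:user::STRING AS user_name,
       tags[0]::STRING  AS first_tag
FROM events;
```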
Which view can be used to determine if a table has frequent row updates or deletes?
TABLES
TABLE_STORAGE_METRICS
STORAGE_DAILY_HISTORY
STORAGE_USAGE
The TABLE_STORAGE_METRICS view can be used to determine if a table has frequent row updates or deletes. This view provides detailed metrics on the storage utilization of tables within Snowflake, including metrics that reflect the impact of DML operations such as updates and deletes on table storage. For example, metrics related to the number of active and deleted rows can help identify tables that experience high levels of row modifications, indicating frequent updates or deletions.
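For example (database name hypothetical), churn-heavy tables tend to show large Time Travel and Fail-safe byte counts relative to their active storage:

```sql
SELECT table_name,
       active_bytes,
       time_travel_bytes,
       failsafe_bytes
FROM snowflake.account_usage.table_storage_metrics
WHERE table_catalog = 'SALES_DB'
ORDER BY time_travel_bytes DESC
LIMIT 10;
```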
What is the default value in the Snowflake Web Interface (UI) for auto-suspending a Virtual Warehouse?
1 minute
5 minutes
10 minutes
15 minutes
The default value for auto-suspending a Virtual Warehouse in the Snowflake Web Interface (UI) is 10 minutes. This setting helps manage compute costs by automatically suspending warehouses that are not in use, ensuring that compute resources are efficiently allocated and not wasted on idle warehouses.
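In SQL the same setting is expressed in seconds, so the 10-minute UI default corresponds to 600 (warehouse name hypothetical):

```sql
ALTER WAREHOUSE my_wh SET AUTO_SUSPEND = 600;  -- suspend after 10 idle minutes
```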
How does Snowflake reorganize data when it is loaded? (Select TWO).
Binary format
Columnar format
Compressed format
Raw format
Zipped format
When data is loaded into Snowflake, it undergoes a reorganization process where the data is stored in a columnar format and compressed. The columnar storage format enables efficient querying and data retrieval, as it allows for reading only the necessary columns for a query, thereby reducing IO operations. Additionally, Snowflake uses advanced compression techniques to minimize storage costs and improve performance. This combination of columnar storage and compression is key to Snowflake's data warehousing capabilities.
A user has semi-structured data to load into Snowflake but is not sure what types of operations will need to be performed on the data. Based on this situation, what type of column does Snowflake recommend be used?
ARRAY
OBJECT
TEXT
VARIANT
When dealing with semi-structured data in Snowflake, and the specific types of operations to be performed on the data are not yet determined, Snowflake recommends using the VARIANT data type. The VARIANT type is highly flexible and capable of storing data in multiple formats, including JSON, AVRO, BSON, and more, within a single column. This flexibility allows users to perform various operations on the data, including querying and manipulation of nested data structures without predefined schemas.
Which Snowflake layer is associated with virtual warehouses?
Cloud services
Query processing
Elastic memory
Database storage
The layer of Snowflake's architecture associated with virtual warehouses is the Query Processing layer. Virtual warehouses in Snowflake are dedicated compute clusters that execute SQL queries against the stored data. This layer is responsible for the entire query execution process, including parsing, optimization, and the actual computation. It operates independently of the storage layer, enabling Snowflake to scale compute and storage resources separately for efficiency and cost-effectiveness.
What are the benefits of the replication feature in Snowflake? (Select TWO).
Disaster recovery
Time Travel
Fail-safe
Database failover and fallback
Data security
The replication feature in Snowflake provides several benefits, with disaster recovery and database failover and fallback being two of the primary advantages. Replication allows for the continuous copying of data from one Snowflake account to another, ensuring that a secondary copy of the data is available in case of outages or disasters. This capability supports disaster recovery strategies by allowing operations to quickly switch to the replicated data in a different account or region. Additionally, it facilitates database failover and fallback procedures, ensuring business continuity and minimizing downtime.
Which command removes a role from another role or a user in Snowflake?
ALTER ROLE
REVOKE ROLE
USE ROLE
USE SECONDARY ROLES
The REVOKE ROLE command is used to remove a role from another role or a user in Snowflake. This command is part of Snowflake's role-based access control system, allowing administrators to manage permissions and access to database objects efficiently by adding or removing roles from users or other roles.
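For example, with hypothetical role and user names:

```sql
REVOKE ROLE analyst FROM USER jsmith;     -- remove the role from a user
REVOKE ROLE analyst FROM ROLE sysadmin;   -- remove it from another role
```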
What happens when a network policy includes values that appear in both the allowed and blocked IP address list?
Those IP addresses are allowed access to the Snowflake account as Snowflake applies the allowed IP address list first.
Those IP addresses are denied access to the Snowflake account as Snowflake applies the blocked IP address list first.
Snowflake issues an alert message and adds the duplicate IP address values to both the allowed and blocked IP address lists.
Snowflake issues an error message and adds the duplicate IP address values to both the allowed and blocked IP address lists.
In Snowflake, when setting up a network policy that specifies both allowed and blocked IP address lists, if an IP address appears in both lists, access from that IP address will be denied. The reason is that Snowflake prioritizes security, and the presence of an IP address in the blocked list indicates it should not be allowed regardless of its presence in the allowed list. This ensures that access controls remain stringent and that any potentially unsafe IP addresses are not inadvertently permitted access.
When using the ALLOW_CLIENT_MFA_CACHING parameter, how long is a cached Multi-Factor Authentication (MFA) token valid for?
1 hour
2 hours
4 hours
8 hours
A cached MFA token is valid for up to four hours. https://docs.snowflake.com/en/user-guide/security-mfa#using-mfa-token-caching-to-minimize-the-number-of-prompts-during-authentication-optional
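MFA token caching must be explicitly allowed at the account level before a connecting client can opt in; a sketch of that administrative step:

```sql
-- Requires an administrator; once set, compatible drivers can cache the
-- MFA token and reuse it for up to four hours.
ALTER ACCOUNT SET ALLOW_CLIENT_MFA_CACHING = TRUE;
```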
Which SQL command can be used to verify the privileges that are granted to a role?
SHOW GRANTS ON ROLE
SHOW ROLES
SHOW GRANTS TO ROLE
SHOW GRANTS FOR ROLE
To verify the privileges that have been granted to a specific role in Snowflake, the correct SQL command is SHOW GRANTS TO ROLE <Role Name>. This command lists all the privileges granted to the specified role, including access to schemas, tables, and other database objects. This is a useful command for administrators and users with sufficient privileges to audit and manage role permissions within the Snowflake environment.
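For contrast, with a hypothetical role name:

```sql
SHOW GRANTS TO ROLE analyst;   -- privileges the role holds (the answer here)
SHOW GRANTS OF ROLE analyst;   -- users and roles that have been granted ANALYST
```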
What is the Fail-safe retention period for transient and temporary tables?
0 days
1 day
7 days
90 days
The Fail-safe retention period for transient and temporary tables in Snowflake is 0 days. Fail-safe is a feature designed to protect data against accidental loss or deletion by retaining historical data for a period after its Time Travel retention period expires. However, transient and temporary tables, which are designed for temporary or short-term storage and operations, do not have a Fail-safe period. Once the data is deleted or the table is dropped, it cannot be recovered.
Which privilege is needed for a Snowflake user to see the definition of a secure view?
OWNERSHIP
MODIFY
CREATE
USAGE
To see the definition of a secure view in Snowflake, the minimum privilege required is OWNERSHIP of the view. Ownership grants the ability to view the definition as well as to modify or drop the view. Secure views are designed to protect sensitive data, and thus the definition of these views is restricted to users with sufficient privileges to ensure data security.
What are characteristics of reader accounts in Snowflake? (Select TWO).
Reader account users cannot add new data to the account.
Reader account users can share data to other reader accounts.
A single reader account can consume data from multiple provider accounts.
Data consumers are responsible for reader account setup and data usage costs.
Reader accounts enable data consumers to access and query data shared by the provider.
Characteristics of reader accounts in Snowflake include the following: reader account users cannot add new data to the account, and reader accounts enable data consumers to access and query data shared by the provider. Reader accounts are created, owned, and paid for by the data provider (not the consumer), can consume data only from the single provider account that created them, and cannot re-share data to other accounts.
How does the Access_History view enhance overall data governance pertaining to read and write operations? (Select TWO).
Shows how the accessed data was moved from the source to the target objects
Provides a unified picture of what data was accessed and when it was accessed
Protects sensitive data from unauthorized access while allowing authorized users to access it at query runtime
Identifies columns with personal information and tags them so masking policies can be applied to protect sensitive data
Determines whether a given row in a table can be accessed by the user by filtering the data based on a given policy
The ACCESS_HISTORY view in Snowflake is a powerful tool for enhancing data governance, especially for monitoring and auditing data access patterns across both read and write operations. It enhances governance in two key ways: it shows how accessed data was moved from the source to the target objects (lineage for write operations such as CTAS or INSERT ... SELECT), and it provides a unified picture of what data was accessed and when it was accessed (auditing for read operations).
ACCESS_HISTORY does not automatically apply data masking or tag columns with personal information. However, the insights derived from analyzing ACCESS_HISTORY can be used to identify sensitive data and inform the application of masking policies or other security measures.
While running a query on a virtual warehouse in auto-scale mode, additional clusters are started immediately if which setting is configured?
Option A
Option B
Option C
Option D
In Snowflake, auto-scaling allows virtual warehouses to automatically start additional clusters to handle increasing query loads. The setting that triggers the immediate start of additional clusters when a warehouse is running in auto-scale mode is Option A: MAX_CLUSTER_COUNT is increased and new_max_clusters is greater than running_clusters. When the maximum number of clusters (MAX_CLUSTER_COUNT) is increased and the new maximum is higher than the number of clusters currently running (running_clusters), additional clusters start immediately if required by the workload. This configuration ensures that performance scales with demand by allowing more compute resources to be provisioned as needed.
This behavior is designed to maintain performance by dynamically adjusting the compute resources without manual intervention, ensuring that queries are executed with minimal delay, even under varying workloads. It aligns with the principles of elasticity and scalability in cloud computing, particularly within Snowflake's architecture.
What is the MINIMUM role required to set the value for the parameter ENABLE_ACCOUNT_DATABASE_REPLICATION?
ACCOUNTADMIN
SECURITYADMIN
SYSADMIN
ORGADMIN
The ENABLE_ACCOUNT_DATABASE_REPLICATION parameter controls whether the databases in an account can be replicated to other accounts. It is set at the organization level using the SYSTEM$GLOBAL_ACCOUNT_SET_PARAMETER function, which can only be called by a user with the ORGADMIN role. ORGADMIN is therefore the minimum role required to set this parameter, ensuring that cross-account replication is governed at the organization level. References: Snowflake Documentation on Database Replication
How does Snowflake handle the data retention period for a table if a stream has not been consumed?
The data retention period is reduced to a minimum of 14 days.
The data retention period is permanently extended for the table.
The data retention period is temporarily extended to the stream's offset.
The data retention period is not affected by the stream consumption.
In Snowflake, the use of streams impacts how the data retention period for a table is handled, particularly in scenarios where the stream has not been consumed. The key point to understand is that Snowflake's streams are designed to capture data manipulation language (DML) changes such as INSERTS, UPDATES, and DELETES that occur on a source table. Streams maintain a record of these changes until they are consumed by a DML operation or a COPY command that references the stream.
When a stream is created on a table and remains unconsumed, Snowflake extends the data retention period of the table to ensure that the changes captured by the stream are preserved. This extension is specifically up to the point in time represented by the stream's offset, which effectively ensures that the data necessary for consuming the stream's contents is retained. This mechanism is in place to prevent data loss and ensure the integrity of the stream's data, facilitating accurate and reliable data processing and analysis based on the captured DML changes.
This behavior emphasizes the importance of managing streams and their consumption appropriately to balance between data retention needs and storage costs. It's also crucial to understand how this temporary extension of the data retention period impacts the overall management of data within Snowflake, including aspects related to data lifecycle, storage cost implications, and the planning of data consumption strategies.
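A minimal sketch of this lifecycle, with hypothetical names:

```sql
-- The stream records DML changes on ORDERS from its offset onward.
CREATE OR REPLACE STREAM orders_stream ON TABLE orders;

-- Until a DML statement consumes the stream, Snowflake retains table
-- history back to the stream's offset, even past the normal retention.
INSERT INTO orders_audit
  SELECT * FROM orders_stream;  -- consuming the stream advances its offset
```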
Authorization to execute CREATE <object> statements comes from which role?
Primary role
Secondary role
Application role
Database role
In Snowflake, the authorization to execute CREATE <object> statements, such as creating tables, views, databases, etc., is determined by the role currently set as the user's primary role. The primary role of a user or session specifies the set of privileges (including creation privileges) that the user has. While users can have multiple roles, only the primary role is used to determine what objects the user can create unless explicitly specified in the session.
Snowflake's access control framework combines which models for securing data? (Select TWO).
Attribute-based Access Control (ABAC)
Discretionary Access Control (DAC)
Access Control List (ACL)
Role-based Access Control (RBAC)
Rule-based Access Control (RuBAC)
Snowflake's access control framework utilizes a combination of Discretionary Access Control (DAC) and Role-based Access Control (RBAC). DAC in Snowflake allows the object owner to grant access privileges to other roles. RBAC involves assigning roles to users and then granting privileges to those roles. Through roles, Snowflake manages which users have access to specific objects and what actions they can perform, which is central to security and governance in the Snowflake environment. References: Snowflake Documentation on Access Control
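Both models in a short sketch (names hypothetical):

```sql
-- RBAC: privileges are granted to roles, and roles to users.
GRANT SELECT ON TABLE sales_db.public.orders TO ROLE analyst;
GRANT ROLE analyst TO USER jsmith;

-- DAC: each object has an owner (the role that created it), and that
-- owner can in turn grant access on the object to other roles.
GRANT USAGE ON DATABASE sales_db TO ROLE analyst;
```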
When sharing data in Snowflake, what privileges does a provider need to grant along with a share? (Select TWO).
SELECT on the specific tables in the database.
USAGE on the specific tables in the database.
MODIFY on the specific tables in the database.
USAGE on the database and the schema containing the tables to share
OPERATE on the database and the schema containing the tables to share.
When sharing data in Snowflake, the provider needs to grant the following privileges along with a share: USAGE on the database and on the schema containing the tables to share, and SELECT on the specific tables being shared.
These privileges are crucial for setting up secure and controlled access to the shared data, ensuring that only authorized users can access the specified resources.
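Put together, a provider's grants typically look like this (names hypothetical):

```sql
CREATE SHARE sales_share;
GRANT USAGE ON DATABASE sales_db TO SHARE sales_share;
GRANT USAGE ON SCHEMA sales_db.public TO SHARE sales_share;
GRANT SELECT ON TABLE sales_db.public.orders TO SHARE sales_share;
```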
Which system-defined Snowflake role has permission to rename an account and specify whether the original URL can be used to access the renamed account?
ACCOUNTADMIN
SECURITYADMIN
SYSADMIN
ORGADMIN
The ORGADMIN (organization administrator) role manages operations at the organization level, including creating, viewing, and renaming the accounts in the organization. Using ALTER ACCOUNT <name> RENAME TO <new_name>, an ORGADMIN can rename an account and specify whether the original URL can still be used to access the renamed account (the SAVE_OLD_URL option). ACCOUNTADMIN, by contrast, administers objects within a single account and cannot rename the account itself. References: Snowflake Documentation on Managing Accounts in an Organization
A clustering key was defined on a table, but it is no longer needed. How can the key be removed?
ALTER TABLE
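The ALTER TABLE command removes the key via its DROP CLUSTERING KEY clause; a sketch with a hypothetical table name:

```sql
ALTER TABLE events DROP CLUSTERING KEY;  -- stops clustering on EVENTS
```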