What types of worksheets can be created in Snowsight? (Select TWO).
SQL
Javascript
Scala
Java
Python
Snowsight supports two worksheet types: SQL worksheets and Python worksheets. SQL worksheets allow users to execute queries, create objects, and perform data analysis using ANSI SQL and Snowflake-specific extensions. Python worksheets, powered by Snowpark, allow users to write Python code that interacts directly with Snowflake tables, data frames, and machine learning workflows.
Java, Scala, and JavaScript are supported via Snowpark APIs or UDF development, but they cannot be used as worksheet languages. Worksheets are designed for interactive analysis, visualization, and iterative development, with native runtimes only for SQL and Python.
Thus, only SQL and Python worksheets can be created within Snowsight.
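For illustration, a Python worksheet body in Snowsight follows a Snowpark handler pattern. This is a minimal sketch, assuming a Snowpark session is supplied by the worksheet runtime; the table name MY_TABLE and the filter are placeholders:

```python
# Sketch of a Snowsight Python worksheet handler. Requires the
# snowflake-snowpark-python package, which Snowsight provides; the
# session argument is injected by the worksheet runtime.
import snowflake.snowpark as snowpark

def main(session: snowpark.Session):
    # Read a table as a Snowpark DataFrame (MY_TABLE is a placeholder).
    df = session.table("MY_TABLE").filter("AMOUNT > 100")
    # The returned DataFrame is rendered as the worksheet's results.
    return df
```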
==================
What is the Snowsight Query Profile used for?
To execute SQL queries
To create new database objects
To manage data loading processes
To visualize and analyze query performance
The Snowsight Query Profile is a powerful diagnostic tool that provides a visual breakdown of how Snowflake executed a query. Its primary purpose is to help users visualize and analyze query performance. It displays execution steps, including scan operations, join strategies, pruning results, aggregation methods, and data movement between processing nodes.
The profile shows metrics such as execution time per step, partition pruning effectiveness, bytes scanned, and operator relationships. This allows developers, analysts, and DBAs to identify bottlenecks—such as unnecessary full-table scans, non-selective filters, or inefficient joins—and tune SQL accordingly.
Query Profile does not execute queries; execution happens in worksheets or programmatic interfaces. It does not create objects or manage data loading; those tasks involve separate SQL commands and UI interfaces.
Overall, Query Profile is essential for performance tuning, helping teams reduce compute costs, optimize warehouse sizing, and improve query efficiency.
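The operator-level statistics that Query Profile visualizes can also be retrieved in SQL via the GET_QUERY_OPERATOR_STATS table function. A hedged sketch (my_table is a placeholder):

```sql
-- Run a query, then inspect its operator-level statistics.
-- LAST_QUERY_ID() returns the ID of the previous statement in the session.
SELECT SUM(amount) FROM my_table;

SELECT operator_type,
       operator_statistics,
       execution_time_breakdown
FROM TABLE(GET_QUERY_OPERATOR_STATS(LAST_QUERY_ID()));
```

This is useful for programmatic tuning workflows where opening the Query Profile UI for each query is impractical.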
==================
Which statement is true about Snowflake Data Exchange? (Choose any 2 options)
It is limited to internal data sharing only
It requires complex ETL processes to transfer data
It supports data sharing between different regions and cloud providers
It allows organizations to securely share live, governed data
Snowflake Data Exchange provides governed, real-time data collaboration between Snowflake accounts. It enables providers to publish live datasets while consumers query that data without copying or moving it. Because Snowflake uses secure data sharing primitives at the metadata layer, no ETL pipelines or data duplication are required.
A key advantage is support for cross-region and cross-cloud sharing, allowing collaboration across AWS, Azure, and GCP regions seamlessly.
Data Exchange listings support controlled visibility, entitlement management, and auditing. Providers maintain full control over updates since consumers always access the live, authoritative version of the dataset.
Incorrect statements:
It is not limited to internal sharing—external sharing is a major feature.
ETL is not required because Snowflake’s architecture exposes shared objects directly.
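Under the hood, Data Exchange listings build on Snowflake's secure-sharing primitives. As a sketch (all object and account names are placeholders), a provider might expose live data like this:

```sql
-- Provider side: create a share and grant access to live objects.
CREATE SHARE sales_share;
GRANT USAGE ON DATABASE sales_db TO SHARE sales_share;
GRANT USAGE ON SCHEMA sales_db.public TO SHARE sales_share;
GRANT SELECT ON TABLE sales_db.public.orders TO SHARE sales_share;

-- Entitle a consumer account; no data is copied or moved.
ALTER SHARE sales_share ADD ACCOUNTS = consumer_account;
```

Consumers then create a database from the share and query the provider's live, authoritative data directly.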
====================================================
Given a table named MY_TABLE, which SQL statement would create a clone named MY_TABLE_CLONE?
COPY TABLE MY_TABLE TO MY_TABLE_CLONE;
CREATE TABLE MY_TABLE_CLONE CLONE MY_TABLE;
BACKUP TABLE MY_TABLE TO MY_TABLE_CLONE;
RESTORE TABLE MY_TABLE TO MY_TABLE_CLONE;
The correct SQL syntax to create a zero-copy clone of an existing table is:
CREATE TABLE MY_TABLE_CLONE CLONE MY_TABLE;
This command instantly creates a new table that references the same underlying micro-partitions as the original. Because of Snowflake's metadata-only cloning, no storage is consumed at the time of creation. Storage only increases when either the original or the clone diverges through DML operations, following a copy-on-write model.
Cloning is available for multiple object types—tables, schemas, databases, stages, streams, tasks, and more. This capability enables rapid creation of development sandboxes, QA environments, rollback copies, or controlled experimentation without duplicating data.
Incorrect options:
“COPY TABLE” is not a valid Snowflake command.
BACKUP/RESTORE are not Snowflake SQL commands.
RESTORE applies only to Time Travel or Fail-safe, not to cloning.
Thus, the CLONE keyword is the only correct method for zero-copy duplication.
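Cloning also composes with Time Travel, so a clone can reflect a past state of the table. A sketch (the offset value is illustrative):

```sql
-- Zero-copy clone of the current table state.
CREATE TABLE my_table_clone CLONE my_table;

-- Clone the table as it existed one hour ago, via Time Travel.
CREATE TABLE my_table_1h_ago CLONE my_table
  AT (OFFSET => -3600);
```

Both statements are metadata-only operations; neither copies micro-partitions at creation time.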
====================================================
Which functions in Snowflake Cortex LLM are task-specific functions? (Select THREE)
TRANSLATE
CLASSIFY_TEXT / AI_CLASSIFY
COUNT_TOKENS
PARSE_DOCUMENT
Snowflake’s Cortex LLM includes task-specific functions, meaning each performs a well-defined AI operation with predictable outputs. Examples include:
TRANSLATE – Converts text between languages; deterministic and domain-independent.
CLASSIFY_TEXT / AI_CLASSIFY – Assigns text to predefined categories, ideal for sentiment, topics, or routing tasks.
PARSE_DOCUMENT – Extracts structured information from documents (PDFs, invoices, receipts, contracts) including layout-aware content.
These functions are optimized for reliability, reproducibility, and governance, making them suitable for production pipelines.
COUNT_TOKENS is not task-specific—it’s a utility function used to estimate LLM token usage rather than perform a primary AI task.
Thus, TRANSLATE, CLASSIFY_TEXT, and PARSE_DOCUMENT are the correct task-specific functions.
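In SQL, these functions are invoked from the SNOWFLAKE.CORTEX schema. A hedged sketch (the input strings, categories, and model name are illustrative):

```sql
-- Translate English text to French.
SELECT SNOWFLAKE.CORTEX.TRANSLATE('How are you?', 'en', 'fr');

-- Classify free text into predefined categories.
SELECT SNOWFLAKE.CORTEX.CLASSIFY_TEXT(
         'The invoice total is wrong',
         ['billing', 'shipping', 'technical support']);

-- Utility (not task-specific): estimate token usage for a model.
SELECT SNOWFLAKE.CORTEX.COUNT_TOKENS('snowflake-arctic', 'Hello world');
```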
====================================================
What information can be accessed using the Snowsight Monitoring tab?
Virtual warehouse usage metrics
Query execution history
Database Time Travel snapshots
Database schema changes history
The Snowsight Monitoring tab provides a centralized view of virtual warehouse usage metrics, enabling administrators and developers to evaluate how compute resources are being consumed. This includes critical insights such as credit usage, query load, concurrency levels, average queue times, execution durations, and auto-scaling activity (for multi-cluster warehouses). These metrics help determine whether a warehouse is correctly sized, whether concurrency issues are occurring, or whether workloads require scaling up or adding clusters.
Query history is available in a different section, “Activity → Query History”, not under Monitoring. Time Travel snapshots are not visualized within Monitoring; Time Travel is controlled via retention parameters and accessed with SQL (AT/BEFORE clauses). Schema change history is also not part of Monitoring and instead is discoverable through ACCOUNT_USAGE or specific metadata views.
The Monitoring tab exists specifically to help evaluate warehouse performance and resource consumption, enabling optimization of compute spending and better workload management.
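Comparable warehouse-consumption data can also be queried directly from the ACCOUNT_USAGE schema. A sketch (note that ACCOUNT_USAGE views have ingestion latency, typically up to a few hours):

```sql
-- Credits consumed per warehouse over the last 7 days.
SELECT warehouse_name,
       SUM(credits_used) AS total_credits
FROM SNOWFLAKE.ACCOUNT_USAGE.WAREHOUSE_METERING_HISTORY
WHERE start_time >= DATEADD('day', -7, CURRENT_TIMESTAMP())
GROUP BY warehouse_name
ORDER BY total_credits DESC;
```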
==================
Which of the following parameters can be used with the COPY INTO