

1z0-1127-25 Oracle Cloud Infrastructure 2025 Generative AI Professional Questions and Answers

Question # 4

What differentiates Semantic search from traditional keyword search?

A.

It relies solely on matching exact keywords in the content.

B.

It depends on the number of times keywords appear in the content.

C.

It involves understanding the intent and context of the search.

D.

It is based on the date and author of the content.

Question # 5

Which component of Retrieval-Augmented Generation (RAG) evaluates and prioritizes the information retrieved by the retrieval system?

A.

Retriever

B.

Encoder-Decoder

C.

Generator

D.

Ranker

Question # 6

What is the purpose of embeddings in natural language processing?

A.

To increase the complexity and size of text data

B.

To translate text into a different language

C.

To create numerical representations of text that capture the meaning and relationships between words or phrases

D.

To compress text data into smaller files for storage

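The idea behind embeddings can be sketched in a few lines. The three-dimensional vectors below are invented for illustration; real embedding models produce vectors with hundreds of learned dimensions.

```python
# Toy illustration: embeddings map text to numeric vectors so that
# related words end up close together in the vector space.
embeddings = {
    "king":  [0.8, 0.6, 0.1],
    "queen": [0.7, 0.7, 0.1],
    "apple": [0.1, 0.2, 0.9],
}

def dot(a, b):
    """Dot product: higher means the vectors point in more similar directions."""
    return sum(x * y for x, y in zip(a, b))

print(dot(embeddings["king"], embeddings["queen"]))  # related pair: high score
print(dot(embeddings["king"], embeddings["apple"]))  # unrelated pair: low score
```

Because meaning is encoded numerically, "king" and "queen" score higher against each other than either does against "apple", which is exactly what keyword matching cannot capture.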
Question # 7

What is the purpose of the "stop sequence" parameter in the OCI Generative AI Generation models?

A.

It specifies a string that tells the model to stop generating more content.

B.

It assigns a penalty to frequently occurring tokens to reduce repetitive text.

C.

It determines the maximum number of tokens the model can generate per response.

D.

It controls the randomness of the model’s output, affecting its creativity.

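The effect of a stop sequence can be illustrated with a small, hypothetical post-processing helper (this is not the OCI service internals, just the idea: generation halts at the first occurrence of the stop string, which is not included in the output).

```python
def apply_stop_sequence(generated: str, stop: str) -> str:
    """Return the text up to (but not including) the first stop sequence."""
    idx = generated.find(stop)
    return generated[:idx] if idx != -1 else generated

# Stopping at "\nQ:" keeps the model from generating a follow-up question.
print(apply_stop_sequence("Q: What is OCI?\nA: A cloud platform.\nQ:", "\nQ:"))
```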
Question # 8

Given the following code:

chain = prompt | llm

Which statement is true about LangChain Expression Language (LCEL)?

A.

LCEL is a programming language used to write documentation for LangChain.

B.

LCEL is a legacy method for creating chains in LangChain.

C.

LCEL is a declarative and preferred way to compose chains together.

D.

LCEL is an older Python library for building Large Language Models.

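The `prompt | llm` syntax works because LCEL runnables overload the `|` operator to compose steps declaratively. A minimal toy mimic of that mechanism (these classes are invented for illustration and are not the real LangChain API):

```python
class Runnable:
    """Minimal mimic of LCEL-style composition via the | operator."""
    def __or__(self, other):
        return Chain(self, other)

class Chain(Runnable):
    def __init__(self, first, second):
        self.first, self.second = first, second
    def invoke(self, value):
        # Run the left step, then feed its output to the right step.
        return self.second.invoke(self.first.invoke(value))

class Prompt(Runnable):
    def __init__(self, template):
        self.template = template
    def invoke(self, value):
        return self.template.format(**value)

class FakeLLM(Runnable):
    def invoke(self, prompt):
        return f"LLM reply to: {prompt}"

chain = Prompt("Tell me about {topic}") | FakeLLM()
print(chain.invoke({"topic": "OCI"}))  # LLM reply to: Tell me about OCI
```

The declarative pipe reads left to right, which is why LCEL is the preferred way to compose chains over the older imperative chain classes.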
Question # 9

Why is it challenging to apply diffusion models to text generation?

A.

Because text generation does not require complex models

B.

Because text is not categorical

C.

Because text representation is categorical unlike images

D.

Because diffusion models can only produce images

Question # 10

An LLM emits intermediate reasoning steps as part of its responses. Which of the following techniques is being utilized?

A.

In-context Learning

B.

Step-Back Prompting

C.

Least-to-Most Prompting

D.

Chain-of-Thought

Question # 11

What does in-context learning in Large Language Models involve?

A.

Pretraining the model on a specific domain

B.

Training the model using reinforcement learning

C.

Conditioning the model with task-specific instructions or demonstrations

D.

Adding more layers to the model

Question # 12

Which statement is true about the "Top p" parameter of the OCI Generative AI Generation models?

A.

"Top p" selects tokens from the "Top k" tokens sorted by probability.

B.

"Top p" assigns penalties to frequently occurring tokens.

C.

"Top p" limits token selection based on the sum of their probabilities.

D.

"Top p" determines the maximum number of tokens per response.

Question # 13

What does a cosine distance of 0 indicate about the relationship between two embeddings?

A.

They are completely dissimilar

B.

They are unrelated

C.

They are similar in direction

D.

They have the same magnitude

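Cosine distance is defined as 1 minus cosine similarity, so a distance of 0 means the vectors point in the same direction, regardless of their magnitudes:

```python
import math

def cosine_distance(a, b):
    """1 - cosine similarity: 0 when the vectors point the same way."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return 1 - dot / (norm_a * norm_b)

v = [1.0, 2.0, 3.0]
w = [2.0, 4.0, 6.0]           # same direction, twice the magnitude
print(cosine_distance(v, w))  # ~0.0: similar direction despite different magnitude
```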
Question # 14

When does a chain typically interact with memory in a run within the LangChain framework?

A.

Only after the output has been generated.

B.

Before user input and after chain execution.

C.

After user input but before chain execution, and again after core logic but before output.

D.

Continuously throughout the entire chain execution process.

Question # 15

How does a presence penalty function in language model generation when using the OCI Generative AI service?

A.

It penalizes all tokens equally, regardless of how often they have appeared.

B.

It only penalizes tokens that have never appeared in the text before.

C.

It applies a penalty only if the token has appeared more than twice.

D.

It penalizes a token each time it appears after the first occurrence.

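The presence penalty is binary: a token that has appeared at all gets the same flat penalty whether it appeared once or ten times (a frequency penalty, by contrast, grows with each repetition). A hypothetical sketch of the mechanism, not the OCI service internals:

```python
def apply_presence_penalty(logits, generated_tokens, penalty=1.0):
    """Subtract a flat penalty from every token that has already appeared."""
    seen = set(generated_tokens)
    return {tok: score - penalty if tok in seen else score
            for tok, score in logits.items()}

logits = {"the": 2.0, "cat": 1.5, "dog": 1.0}
print(apply_presence_penalty(logits, ["the", "the", "cat"]))
# "the" and "cat" each lose 1.0 exactly once; "dog" is untouched
```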
Question # 16

What is LangChain?

A.

A JavaScript library for natural language processing

B.

A Python library for building applications with Large Language Models

C.

A Java library for text summarization

D.

A Ruby library for text generation

Question # 18

Which statement describes the difference between "Top k" and "Top p" in selecting the next token in the OCI Generative AI Generation models?

A.

"Top k" and "Top p" are identical in their approach to token selection but differ in their application of penalties to tokens.

B.

"Top k" selects the next token based on its position in the list of probable tokens, whereas "Top p" selects based on the cumulative probability of the top tokens.

C.

"Top k" considers the sum of probabilities of the top tokens, whereas "Top p" selects from the "Top k" tokens sorted by probability.

D.

"Top k" and "Top p" both select from the same set of tokens but use different methods to prioritize them based on frequency.

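The contrast can be sketched with two toy filters: "Top k" keeps a fixed-size candidate set ranked by probability, while "Top p" keeps a variable-size set driven by cumulative probability.

```python
def top_k_filter(probs, k):
    """Keep the k most probable tokens (fixed-size candidate set)."""
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    return dict(ranked[:k])

def top_p_filter(probs, p):
    """Keep the smallest set of top tokens whose probabilities sum to >= p
    (variable-size candidate set, driven by cumulative probability)."""
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    kept, total = {}, 0.0
    for tok, prob in ranked:
        kept[tok] = prob
        total += prob
        if total >= p:
            break
    return kept

probs = {"a": 0.5, "b": 0.3, "c": 0.15, "d": 0.05}
print(top_k_filter(probs, 2))     # {'a': 0.5, 'b': 0.3}
print(top_p_filter(probs, 0.75))  # {'a': 0.5, 'b': 0.3} -- cumulative 0.8 exceeds 0.75
```

With a flatter distribution, the Top p set would grow automatically, while Top k would still keep exactly k tokens.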
Question # 19

What do prompt templates use for templating in language model applications?

A.

Python's list comprehension syntax

B.

Python's str.format syntax

C.

Python's lambda functions

D.

Python's class and object structures

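The str.format-style templating looks like this in plain Python: named placeholders in braces are filled in at call time (the template text here is made up for illustration).

```python
# A prompt template with named placeholders, filled via str.format.
template = "You are a helpful assistant. Answer this question about {topic}: {question}"

prompt = template.format(topic="OCI", question="What is a dedicated AI cluster?")
print(prompt)
```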
Question # 20

Given the following prompts used with a Large Language Model, classify each as employing the Chain-of-Thought, Least-to-Most, or Step-Back prompting technique:

1.

"Calculate the total number of wheels needed for 3 cars. Cars have 4 wheels each. Then, use the total number of wheels to determine how many sets of wheels we can buy with $200 if one set (4 wheels) costs $50."

2.

"Solve a complex math problem by first identifying the formula needed, and then solve a simpler version of the problem before tackling the full question."

3.

"To understand the impact of greenhouse gases on climate change, let’s start by defining what greenhouse gases are. Next, we’ll explore how they trap heat in the Earth’s atmosphere."

A.

1: Step-Back, 2: Chain-of-Thought, 3: Least-to-Most

B.

1: Least-to-Most, 2: Chain-of-Thought, 3: Step-Back

C.

1: Chain-of-Thought, 2: Step-Back, 3: Least-to-Most

D.

1: Chain-of-Thought, 2: Least-to-Most, 3: Step-Back

Question # 21

In which scenario is soft prompting especially appropriate compared to other training styles?

A.

When there is a significant amount of labeled, task-specific data available.

B.

When the model needs to be adapted to perform well in a different domain it was not originally trained on.

C.

When there is a need to add learnable parameters to a Large Language Model (LLM) without task-specific training.

D.

When the model requires continued pre-training on unlabeled data.

Question # 22

What does the term "hallucination" refer to in the context of Large Language Models (LLMs)?

A.

The model's ability to generate imaginative and creative content

B.

A technique used to enhance the model's performance on specific tasks

C.

The process by which the model visualizes and describes images in detail

D.

The phenomenon where the model generates factually incorrect information or unrelated content as if it were true

Question # 23

An AI development company is building an AI-assisted chatbot for a customer, an online retail company. The goal is to create an assistant that can best answer queries about company policies while retaining the chat history throughout a session. Considering these requirements, which type of model would be best?

A.

A keyword search-based AI that responds based on specific keywords identified in customer queries.

B.

An LLM enhanced with Retrieval-Augmented Generation (RAG) for dynamic information retrieval and response generation.

C.

An LLM dedicated to generating text responses without external data integration.

D.

A pre-trained LLM model from Cohere or OpenAI.

Question # 24

Which is a key characteristic of Large Language Models (LLMs) without Retrieval Augmented Generation (RAG)?

A.

They always use an external database for generating responses.

B.

They rely on internal knowledge learned during pretraining on a large text corpus.

C.

They cannot generate responses without fine-tuning.

D.

They use vector databases exclusively to produce answers.

Question # 25

What does the Loss metric indicate about a model's predictions?

A.

Loss measures the total number of predictions made by a model.

B.

Loss is a measure that indicates how wrong the model's predictions are.

C.

Loss indicates how good a prediction is, and it should increase as the model improves.

D.

Loss describes the accuracy of the right predictions rather than the incorrect ones.

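A common concrete instance is cross-entropy loss, shown here for a single prediction: the less probability the model assigns to the correct answer, the larger (worse) the loss, and the loss shrinks toward zero as the model improves.

```python
import math

def cross_entropy_loss(predicted_prob: float) -> float:
    """Cross-entropy for the correct class: low probability on the
    right answer means a large loss; high probability means a small loss."""
    return -math.log(predicted_prob)

print(cross_entropy_loss(0.9))  # confident, correct prediction: small loss
print(cross_entropy_loss(0.1))  # poor prediction: large loss
```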
Question # 26

What is the function of the Generator in a text generation system?

A.

To collect user queries and convert them into database search terms

B.

To rank the information based on its relevance to the user's query

C.

To generate human-like text using the information retrieved and ranked, along with the user's original query

D.

To store the generated responses for future use
