For each of the following statements, select Yes if the statement is true. Otherwise, select No.
NOTE: Each correct selection is worth one point.



According to the Microsoft Azure AI Fundamentals (AI-900) official study materials and the Microsoft Learn module “Describe features of common AI workloads”, conversational AI solutions like chatbots can be created using various methods—not only through custom code. Azure provides both no-code/low-code and developer-focused approaches. For instance, users can design chatbots using Power Virtual Agents, which requires no programming knowledge, or they can use Azure Bot Service with the Bot Framework SDK for fully customized scenarios. Hence, the statement “Chatbots can only be built by using custom code” is False (No) because Azure supports multiple levels of technical involvement for building bots.
The second statement is True (Yes) because the Azure Bot Service is designed specifically to host, manage, and connect conversational bots to users across different channels. Microsoft Learn explicitly explains that the service provides integrated hosting, connection management, and telemetry for bots built using the Bot Framework or Power Virtual Agents. It acts as the foundation for deploying, scaling, and managing chatbot workloads in Azure.
The third statement is also True (Yes) because Azure Bot Service supports integration with Microsoft Teams, among many other channels such as Skype, Facebook Messenger, Slack, and web chat. Microsoft documentation states that Azure-hosted bots can communicate directly with Teams users through the Teams channel, enabling intelligent virtual assistants within the Teams environment.
For each of the following statements, select Yes if the statement is true. Otherwise, select No.
NOTE: Each correct selection is worth one point.



According to the Microsoft Azure AI Fundamentals (AI-900) study guide and Azure Cognitive Services documentation, the Custom Vision service is a specialized computer vision tool that allows users to build, train, and deploy custom image classification and object detection models. It is part of the Azure Cognitive Services suite, designed for scenarios where pre-built Computer Vision models do not meet specific business requirements.
“The Custom Vision service can be used to detect objects in an image.” → Yes. This statement is true. The Custom Vision service supports object detection, enabling the model to identify and locate multiple objects within a single image using bounding boxes. For example, it can locate cars, products, or animals in photos.
“The Custom Vision service requires that you provide your own data to train the model.” → Yes. This statement is true. Unlike pre-trained models such as the standard Computer Vision API, the Custom Vision service requires users to upload and label their own images. The system uses this labeled dataset to train a model specific to the user’s scenario, improving accuracy for custom use cases.
“The Custom Vision service can be used to analyze video files.” → No. This statement is false. The Custom Vision service works only with static images, not videos. To analyze video files, Azure provides Video Indexer and Azure Media Services, which are designed for extracting insights from moving visual content.
Which Azure AI Document Intelligence prebuilt model should you use to extract parties and jurisdictions from a legal document?
contract
layout
general document
read
Within Azure AI Document Intelligence (formerly Form Recognizer), the Contract prebuilt model is designed to extract key information from legal and business contracts, including parties, jurisdictions, dates, and terms. According to Microsoft Learn, this prebuilt model identifies structured entities such as contracting parties, effective dates, governing jurisdictions, and termination clauses.
Layout (B) extracts text, tables, and structure but does not identify semantic information such as parties or jurisdictions.
General document (C) extracts key-value pairs and entities but lacks domain-specific contract analysis.
Read (D) performs OCR (optical character recognition) to extract raw text but not contextual metadata.
Thus, when the requirement is to extract parties and jurisdictions from a legal document, the Contract model is the correct Azure AI Document Intelligence choice.
What is an example of the Microsoft responsible AI principle of transparency?
helping users understand the decisions made by an AI system
ensuring that developers are accountable for the solutions they create
ensuring that opportunities are allocated equally to all applicants
ensuring that the privileged data of users is stored in a secure manner
The correct answer is A. Helping users understand the decisions made by an AI system.
According to the Microsoft Responsible AI principles described in the AI-900 study guide and Microsoft Learn Responsible AI documentation, transparency focuses on ensuring that users and stakeholders clearly understand how an AI system functions, makes decisions, and what data it relies on. This includes communicating limitations, assumptions, and levels of confidence in AI-driven outcomes.
For instance, if an AI model recommends loan approvals, transparency means explaining which factors influenced the decision and how much weight each factor carried. This helps build trust and accountability while allowing users to make informed judgments about the AI’s reliability.
Option review:
A. Helping users understand decisions made by an AI system — ✅ Correct.
B. Accountability: Refers to ensuring developers or organizations are responsible for AI outcomes.
C. Fairness: Ensures equal opportunities and mitigates bias.
D. Privacy and security: Focuses on protecting user data.
Hence, Transparency = clarity and explainability about AI processes.
To complete the sentence, select the appropriate option in the answer area.



According to the Microsoft Azure AI Fundamentals (AI-900) official study guide and the Microsoft Learn module “Explore fundamental principles of machine learning”, feature engineering is the process used to generate additional features or transform existing data into forms that improve model performance. Features are individual measurable properties or characteristics used as input for machine learning algorithms. The goal of feature engineering is to create new informative variables that better represent the underlying patterns in the data.
Feature engineering may include tasks such as:
Combining or transforming raw data columns (e.g., creating a “total purchase amount” from price × quantity).
Extracting time-based components (e.g., year, month, day, hour) from datetime values.
Encoding categorical variables (e.g., one-hot encoding or label encoding).
Scaling or normalizing numerical features.
Creating polynomial or interaction terms to capture complex relationships.
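The first three tasks above can be sketched in a few lines of plain Python (a generic illustration, not tied to any Azure SDK; the record schema and column names are hypothetical):

```python
from datetime import datetime

# A raw record as it might arrive from a transactions table (hypothetical schema)
record = {"price": 20.0, "quantity": 3, "timestamp": "2024-07-15T14:30:00", "category": "toys"}

# 1. Combine raw columns into a new, more informative feature
record["total_purchase_amount"] = record["price"] * record["quantity"]

# 2. Extract time-based components from the datetime value
ts = datetime.fromisoformat(record["timestamp"])
record["month"], record["day_of_week"], record["hour"] = ts.month, ts.weekday(), ts.hour

# 3. One-hot encode a categorical variable over a known set of categories
categories = ["books", "toys", "electronics"]
for c in categories:
    record[f"category_{c}"] = 1 if record["category"] == c else 0

print(record["total_purchase_amount"])  # 60.0
```

In a real pipeline the same transformations would typically run over a whole dataset (for example with pandas) rather than a single record, but the idea is identical: each new key is a new feature the model can learn from.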
Microsoft’s AI-900 learning material emphasizes that the process of preparing data for machine learning involves data cleaning, feature engineering, and feature selection. While feature selection is about choosing the most relevant features from the existing dataset, feature engineering focuses on creating or generating new features to enhance model accuracy and generalization.
The other options do not fit this definition:
Feature selection is about removing redundant or irrelevant features, not generating new ones.
Model evaluation involves assessing the model’s performance using metrics like accuracy or F1 score.
Model training is the phase where the algorithm learns patterns from the data, not when features are created.
Therefore, based on the AI-900 official concepts and Microsoft’s documentation, the correct answer is Feature engineering, as it is the process specifically used to generate additional features that improve machine learning model performance and predictive capability.
For each of the following statements, select Yes if the statement is true. Otherwise, select No.
NOTE: Each correct selection is worth one point.



Box 1: Yes
Automated machine learning, also referred to as automated ML or AutoML, is the process of automating the time-consuming, iterative tasks of machine learning model development. It allows data scientists, analysts, and developers to build ML models with high scale, efficiency, and productivity, all while sustaining model quality.
Box 2: No
Box 3: Yes
During training, Azure Machine Learning creates a number of pipelines in parallel that try different algorithms and parameters for you. The service iterates through ML algorithms paired with feature selections, where each iteration produces a model with a training score. The higher the score, the better the model is considered to “fit” your data. It will stop once it hits the exit criteria defined in the experiment.
Box 4: No
Apply automated ML when you want Azure Machine Learning to train and tune a model for you using the target metric you specify.
The label is the column you want to predict.
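The iterative search described for Box 3 can be sketched as a toy loop in plain Python (this is only an illustration of the idea; it is not the Azure AutoML service, and the candidate "models" and data are made up):

```python
# Toy sketch of the AutoML idea: try several candidate models, score each,
# and keep the best. Real AutoML also varies featurization and hyperparameters.

# Hypothetical validation data: (input, expected label)
validation = [(1, 1), (2, 1), (3, 0), (4, 0)]

# Candidate "models" stand in for different algorithm/parameter pairings
candidates = {
    "always_one": lambda x: 1,
    "threshold_at_2": lambda x: 1 if x <= 2 else 0,
    "threshold_at_3": lambda x: 1 if x <= 3 else 0,
}

def score(model):
    """Fraction of validation examples predicted correctly (the 'training score')."""
    return sum(model(x) == y for x, y in validation) / len(validation)

# Iterate through the candidates, keep the highest-scoring model, and stop
# early once an exit criterion (here: a perfect score) is reached.
best_name, best_score = None, -1.0
for name, model in candidates.items():
    s = score(model)
    if s > best_score:
        best_name, best_score = name, s
    if best_score == 1.0:  # exit criteria
        break

print(best_name, best_score)  # threshold_at_2 1.0
```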
Providing contextual information to improve the response quality of a generative AI solution is an example of which prompt engineering technique?
providing examples
fine-tuning
grounding data
system messages
In Microsoft Azure OpenAI Service and the AI-900/AI-102 study materials, grounding data is the correct term used to describe the process of providing contextual or external information to improve the accuracy, relevance, and quality of responses generated by a generative AI model such as GPT-3.5 or GPT-4.
Grounding is a prompt engineering technique where the AI model is supplemented with relevant background data, such as company documents, knowledge bases, or user context, that helps the model generate factually correct and context-aware responses. Microsoft Learn defines grounding as a way to connect the model’s general knowledge to specific, real-world information. For example, if you ask a GPT-3.5 model about your organization’s HR policies, the base model will not know them unless that policy information is provided (grounded) in the prompt. By embedding this contextual data, the AI becomes “grounded” in the facts it needs to respond reliably.
This technique differs from other prompt engineering concepts:
A. Providing examples (few-shot prompting) shows the model sample inputs and outputs to guide formatting or style, not factual context.
B. Fine-tuning involves retraining the model with labeled data to permanently adjust its behavior — it’s not a prompt-based technique.
D. System messages define the model’s role, tone, or style (for example, “You are a helpful assistant”) but do not add factual context.
Therefore, when you provide contextual information (like product details, policy documents, or reference text) within a prompt to enhance the quality and factual reliability of the model’s responses, you are applying the grounding data technique.
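A minimal sketch of grounding in Python, using the chat-message shape common to services like Azure OpenAI (the policy document, question, and instruction wording are all hypothetical):

```python
# Grounding: inject relevant reference text into the prompt so the model
# answers from supplied facts rather than only its general training data.

policy_doc = (  # hypothetical grounding data, e.g. retrieved from a knowledge base
    "HR policy: Employees accrue 1.5 vacation days per month of service."
)

user_question = "How many vacation days do I earn each month?"

messages = [
    {"role": "system",
     "content": "Answer using ONLY the reference text below. "
                "If the answer is not in the text, say you don't know.\n\n"
                f"Reference:\n{policy_doc}"},
    {"role": "user", "content": user_question},
]
```

The `messages` list would then be sent to a chat completion endpoint. Note that the grounding text travels inside the prompt itself, which is why this is a prompt engineering technique rather than fine-tuning: no model weights change.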
For each of the following statements, select Yes if the statement is true. Otherwise, select No.
NOTE: Each correct selection is worth one point.


Statements:
A bot that responds to queries by internal users is an example of a conversational AI workload. → Yes
An application that displays images relating to an entered search term is an example of a conversational AI workload. → No
A web form used to submit a request to reset a password is an example of a conversational AI workload. → No
According to the Microsoft Azure AI Fundamentals (AI-900) official study materials, conversational AI workloads are those that enable interaction between humans and AI systems through natural language conversation, either by text or speech. These workloads are typically implemented using Azure Bot Service, Azure Cognitive Services for Language, and Azure OpenAI Service. The key characteristic of a conversational AI workload is the presence of dialogue—the AI interprets user intent and provides a meaningful, contextual response in a conversation-like manner.
“A bot that responds to queries by internal users is an example of a conversational AI workload.” → Yes. This fits the definition perfectly. A chatbot that helps employees (internal users) by answering questions about policies, IT issues, or HR procedures is a typical example of conversational AI. It uses natural language understanding to interpret questions and provide automated responses. Microsoft Learn explicitly identifies chatbots as conversational AI solutions designed for both internal and external interactions.
“An application that displays images relating to an entered search term is an example of a conversational AI workload.” → No. This is not conversational AI because there is no dialogue or language understanding involved. It is an example of information retrieval, or of computer vision if it uses image recognition, but not conversation.
“A web form used to submit a request to reset a password is an example of a conversational AI workload.” → No. A password reset form is a simple UI-driven process that doesn’t require AI or conversational logic. It performs a fixed function based on user input but does not understand or respond to natural language.
Therefore, based on the AI-900 study guide, only the first statement is an example of a conversational AI workload, while the second and third statements are not.
A smart device that responds to the question “What is the stock price of Contoso, Ltd.?” is an example of which AI workload?
computer vision
anomaly detection
knowledge mining
natural language processing
The question describes a smart device that can understand and respond to a spoken or written question such as, “What is the stock price of Contoso, Ltd.?” This scenario directly maps to the Natural Language Processing (NLP) workload in Microsoft Azure AI.
According to the Microsoft AI Fundamentals (AI-900) study guide and the Microsoft Learn module “Describe features of common AI workloads,” NLP enables systems to understand, interpret, and generate human language. Azure AI Language and Azure Speech services are examples of NLP-based solutions.
In this case, the smart device performs several NLP tasks:
Speech recognition – converts spoken input into text.
Language understanding – interprets the user’s intent, i.e., retrieving the stock price of a specific company.
Response generation – formulates a meaningful answer that can be presented back as text or speech.
This process shows a full pipeline of natural language understanding (NLU) and conversational AI. It does not involve visual data (computer vision), data pattern analysis (anomaly detection), or document search (knowledge mining).
Hence, the correct AI workload is D. Natural Language Processing.
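The three stages listed above can be sketched as a toy pipeline in plain Python (purely illustrative; a real solution would use the Azure Speech and Azure AI Language services, and the stock price here is made up):

```python
import re

def recognize_speech(audio: bytes) -> str:
    """Stage 1 stand-in: pretend the audio was transcribed to text."""
    return "What is the stock price of Contoso, Ltd.?"

def understand(text: str):
    """Stage 2: extract the intent and the company entity with a naive pattern."""
    m = re.search(r"stock price of (.+?)\?", text, re.IGNORECASE)
    return ("GetStockPrice", m.group(1)) if m else ("None", None)

def respond(intent: str, company: str) -> str:
    """Stage 3: formulate an answer (the price lookup is a hypothetical data source)."""
    prices = {"Contoso, Ltd.": 123.45}
    return f"The stock price of {company} is {prices[company]}."

intent, company = understand(recognize_speech(b""))
print(respond(intent, company))  # The stock price of Contoso, Ltd. is 123.45.
```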
For each of the following statements, select Yes if the statement is true. Otherwise, select No.
NOTE: Each correct selection is worth one point.



The correct answers are Yes, Yes, and Yes.
According to the Microsoft Azure AI Fundamentals (AI-900) official study guide and Microsoft Learn content in the section “Describe features of conversational AI workloads on Azure”, bots created using Azure Bot Service can interact with users across multiple channels. The AI-900 syllabus explains that Azure Bot Service integrates with various communication platforms, allowing developers to build a single bot that can be deployed in many contexts without rewriting the logic.
“You can communicate with a bot by using Cortana.” – Yes. The AI-900 learning materials explain that Cortana, Microsoft’s intelligent personal assistant, can serve as a channel for bots built with the Azure Bot Service. Through the Bot Framework, bots can be connected to Cortana to allow users to interact via voice or text. Although Cortana is less prominent now, it remains conceptually included in the AI-900 coverage as an example of a voice-based conversational AI channel.
“You can communicate with a bot by using Microsoft Teams.” – Yes. This statement is true and directly referenced in the AI-900 syllabus. Microsoft Teams is a fully supported communication channel for Azure Bot Service. Bots in Teams can handle chat messages, commands, and interactions in team or personal contexts. The Microsoft Learn materials specify Teams as one of the native connectors where enterprise users can interact with organizational bots.
“You can communicate with a bot by using a webchat interface.” – Yes. This is also true. The Web Chat channel is one of the most common ways to deploy bots publicly. Azure Bot Service provides a Web Chat control that can be embedded directly into a webpage or web application. This allows users to interact with the bot using a chat window, just like on customer service websites.
Therefore, all three interfaces—Cortana (voice-based), Microsoft Teams (enterprise chat), and Web Chat (browser-based)—are valid and officially supported communication channels for Azure bots.
Select the answer that correctly completes the sentence.



According to the Microsoft Azure AI Fundamentals (AI-900) official study materials and the Microsoft Learn module “Describe Azure Machine Learning and Automated ML,” Azure Machine Learning designer (formerly known as Azure Machine Learning Studio) is a drag-and-drop, low-code/no-code environment that allows users to create, train, and evaluate machine learning models visually — without the need for extensive programming knowledge.
The designer provides a visual interface, known as the canvas, where users can:
Import and prepare data using modules for data transformation and cleaning.
Split data into training and testing datasets.
Select and configure algorithms (classification, regression, or clustering).
Train and evaluate the model.
Deploy the model as a web service directly from the designer.
The official Microsoft Learn content emphasizes that “Azure Machine Learning designer enables users to build, test, and deploy models by adding and connecting prebuilt modules on a visual interface.” This allows business analysts, data professionals, and beginners to experiment with machine learning workflows without writing code.
By comparison:
Automatically performing common data preparation tasks refers to Automated ML, not the designer.
Automatically selecting an algorithm is also part of Automated ML, which optimizes models algorithmically.
Using a code-first notebook experience applies to Azure Machine Learning notebooks, intended for data scientists familiar with Python and SDKs.
Therefore, as per the AI-900 study guide and Microsoft Learn documentation, the verified and correct answer is:
✅ Adding and connecting modules on a visual canvas, which accurately describes how Azure Machine Learning designer operates.
Select the answer that correctly completes the sentence.



This question refers to a system that monitors a user’s emotions or expressions—in this case, identifying whether a kiosk user is annoyed—through a video feed. According to the Microsoft Azure AI Fundamentals (AI-900) official study guide and Microsoft Learn module “Identify Azure services for computer vision,” this scenario falls under facial analysis, which is a capability of Azure AI Vision or the Face API.
Facial analysis involves detecting human faces in images or video and analyzing facial features to interpret emotions, expressions, age, gender, or facial landmarks. The AI model does not try to identify who the person is but rather interprets how they appear or feel. For example, facial analysis can detect emotions such as happiness, anger, sadness, or surprise, which allows applications to infer a user’s engagement or frustration level while interacting with a system.
Option review:
Face detection: Identifies the presence and location of a face in an image but does not interpret expressions or emotions.
Facial recognition: Matches a detected face to a known individual’s identity (for authentication or security), not for emotion detection.
Optical character recognition (OCR): Extracts text from images or scanned documents and has no relation to human emotion or facial features.
Therefore, determining whether a kiosk user is annoyed, happy, or frustrated involves emotion detection within facial analysis, making Facial analysis the correct answer.
This aligns with AI-900’s definition of computer vision workloads, where facial analysis provides insights into emotions and expressions, supporting user experience optimization and customer behavior analytics.
You have a bot that identifies the brand names of products in images of supermarket shelves.
Which service does the bot use?
AI enrichment for Azure Search capabilities
Computer Vision Image Analysis capabilities
Custom Vision Image Classification capabilities
Language understanding capabilities
According to the Microsoft Azure AI Fundamentals (AI-900) official study guide and Microsoft Learn module “Describe features of computer vision workloads on Azure,” the Custom Vision service is a specialized part of Azure Cognitive Services that allows developers to train image classification and object detection models tailored to their own data. It is particularly useful when prebuilt models, such as those in the standard Computer Vision service, cannot accurately recognize domain-specific objects — such as specific product brands or packaging.
In this scenario, the bot must identify brand names of products in images of supermarket shelves. Since brand logos and packaging designs are unique to each company, a general-purpose image analysis model would not perform accurately. The Custom Vision Image Classification capability allows you to upload labeled images (e.g., various brands) and train a model to distinguish between them. Once trained, the model can classify new images and recognize which brand appears on the shelf.
Let’s analyze the other options:
A. AI enrichment for Azure Search capabilities: Used in knowledge mining to extract information from documents, not image brand identification.
B. Computer Vision Image Analysis capabilities: Provides prebuilt functionality such as detecting objects, describing images, and identifying common items (like “bottle” or “box”) but cannot differentiate custom brand names.
D. Language understanding capabilities: Deals with processing and understanding natural language text, not images.
Therefore, identifying specific brand names from images requires a custom-trained image classification model, making Custom Vision Image Classification capabilities the correct answer.
✅ Final Verified Answer:
C. Custom Vision Image Classification capabilities
To complete the sentence, select the appropriate option in the answer area.



The correct answer is “adding and connecting modules on a visual canvas.”
According to the Microsoft Azure AI Fundamentals (AI-900) Official Study Guide and the Microsoft Learn module “Explore automated machine learning in Azure Machine Learning,” the Azure Machine Learning designer is a drag-and-drop, no-code environment that allows users to create, train, and deploy machine learning models visually. It is specifically designed for users who prefer an intuitive graphical interface rather than writing extensive code.
Microsoft Learn defines Azure Machine Learning designer as a tool that allows you to “build, test, and deploy machine learning models by dragging and connecting pre-built modules on a visual canvas.” These modules can represent data inputs, transformations, training algorithms, and evaluation processes. By linking them together, users can create an end-to-end machine learning pipeline.
The designer simplifies the machine learning workflow by allowing data scientists, analysts, and even non-developers to:
Import and prepare datasets visually.
Choose and connect algorithm modules (e.g., classification, regression, clustering).
Train and evaluate models interactively.
Publish inference pipelines as web services for prediction.
Let’s analyze the other options:
Automatically performing common data preparation tasks – This describes Automated ML (AutoML), not the Designer.
Automatically selecting an algorithm to build the most accurate model – Also a characteristic of AutoML, where the system tests multiple algorithms automatically.
Using a code-first notebook experience – This describes the Azure Machine Learning notebooks environment, which uses Python and SDKs, not the Designer interface.
Therefore, based on the official AI-900 learning objectives and Microsoft Learn documentation, the Azure Machine Learning designer allows you to create models by adding and connecting modules on a visual canvas, providing a no-code, interactive experience ideal for users building custom machine learning workflows visually.
Match the types of AI workloads to the appropriate scenarios.
To answer, drag the appropriate workload type from the column on the left to its scenario on the right. Each workload type may be used once, more than once, or not at all.
NOTE: Each correct selection is worth one point.



Box 3: Natural language processing
Natural language processing (NLP) is used for tasks such as sentiment analysis, topic detection, language detection, key phrase extraction, and document categorization.
Which AI service can you use to interpret the meaning of a user input such as “Call me back later”?
Translator Text
Text Analytics
Speech
Language Understanding (LUIS)
According to the Microsoft Azure AI Fundamentals (AI-900) learning content, Language Understanding Intelligent Service (LUIS) is part of Azure Cognitive Services used to interpret the meaning or intent behind a user’s input in natural language. When a user says, “Call me back later,” the system must recognize that the user intends for a call to be scheduled or delayed—this is not just about translating or analyzing text but understanding intent and relevant entities.
LUIS allows developers to train models to identify intents (such as ScheduleCall, CancelMeeting, etc.) and extract key entities (like names, times, or actions) from text inputs. It is typically integrated with conversational agents such as Azure Bot Service, enabling more natural, human-like interactions.
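The kind of structured result such a service returns can be illustrated with a hypothetical example (the field names, intent names, and scores below are illustrative, not the exact LUIS response schema):

```python
# Hypothetical language-understanding result for "Call me back later":
# the service maps free-form text to a top-scoring intent plus extracted entities.
result = {
    "query": "Call me back later",
    "topIntent": "ScheduleCall",
    "intents": {"ScheduleCall": 0.93, "CancelMeeting": 0.04, "None": 0.03},
    "entities": [{"type": "datetime", "text": "later"}],
}

# A bot branches on the recognized intent rather than on the raw text:
if result["topIntent"] == "ScheduleCall":
    action = "queue_callback"
else:
    action = "fallback"
print(action)  # queue_callback
```

This separation of intent from wording is what lets the same bot handle “Ring me in an hour” and “Call me back later” with one piece of logic.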
Other options do not fit the scenario:
Translator Text (A) translates text between languages but does not interpret meaning.
Text Analytics (B) performs sentiment analysis, key phrase extraction, and named entity recognition, but it doesn’t identify intent.
Speech (C) converts spoken language to text or text to speech but doesn’t interpret what the words mean.
Therefore, for understanding user intent such as “Call me back later,” the correct AI service is D. Language Understanding (LUIS).
For each of the following statements, select Yes if the statement is true. Otherwise, select No.
NOTE: Each correct selection is worth one point.



This question assesses knowledge of the Azure Cognitive Services Speech and Text Analytics capabilities, as described in the Microsoft Azure AI Fundamentals (AI-900) official study guide and Microsoft Learn modules “Explore natural language processing” and “Explore speech capabilities.” These services are part of Azure Cognitive Services, which provide prebuilt AI capabilities for speech, language, and text understanding.
You can use the Speech service to transcribe a call to text → Yes. The Speech-to-Text feature in the Azure Speech service automatically converts spoken words into written text. Microsoft Learn explains: “The Speech-to-Text capability enables applications to transcribe spoken audio to text in real time or from recorded files.” This makes it ideal for call transcription, voice assistants, and meeting captioning.
You can use the Text Analytics service to extract key entities from a call transcript → Yes. Once a call has been transcribed into text, the Text Analytics service (part of Azure Cognitive Services for Language) can process that text to extract key entities, key phrases, and sentiment. For example, it can identify names, organizations, locations, and product mentions. Microsoft Learn notes: “Text Analytics can extract key phrases and named entities from text to derive insights and structure from unstructured data.”
You can use the Speech service to translate the audio of a call to a different language → Yes. The Azure Speech service also includes Speech Translation, which can translate spoken language in real time. It converts audio input from one language into translated text or speech output in another language. Microsoft Learn describes this as: “Speech Translation combines speech recognition and translation to translate spoken audio to another language.”
Which natural language processing feature can be used to identify the main talking points in customer feedback surveys?
language detection
translation
entity recognition
key phrase extraction
According to the Microsoft Azure AI Fundamentals (AI-900) official study guide and the Microsoft Learn module “Explore natural language processing (NLP) in Azure”, key phrase extraction is a core feature of the Azure AI Language Service that enables you to automatically identify the most important ideas or topics discussed in a body of text.
When analyzing customer feedback surveys, key phrase extraction helps summarize the main talking points or recurring themes by detecting significant words and phrases. For instance, if multiple customers write comments like “The checkout process is slow” or “Website speed could be improved,” the model may extract key phrases such as “checkout process” and “website speed.” This allows businesses to quickly understand the most common subjects without manually reading each response.
Let’s review the other options:
A. Language detection: Determines the language of the text (e.g., English, French, or Spanish) but does not identify main ideas.
B. Translation: Converts text from one language to another using Azure Translator; it does not summarize or extract key information.
C. Entity recognition: Identifies named entities such as people, organizations, locations, or dates. While useful for identifying specific details, it does not capture general topics or overall discussion points.
Therefore, the appropriate NLP feature for identifying main topics or themes within textual data such as survey responses is Key Phrase Extraction.
This capability is part of the Azure AI Language Service and is commonly used in sentiment analysis pipelines, customer feedback analytics, and business intelligence workflows to summarize large text datasets efficiently.
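As a toy illustration of the underlying idea (this is not the Azure AI Language service, just a naive stopword-based sketch; the stopword list is deliberately tiny):

```python
# Naive key phrase extraction: keep contiguous runs of non-stopwords.
# Real services use trained language models, not a stopword filter.
STOPWORDS = {"the", "is", "could", "be", "a", "an", "and", "or", "to", "of"}

def key_phrases(text: str):
    phrases, current = [], []
    for word in text.lower().strip(".").split():
        if word in STOPWORDS:
            if current:
                phrases.append(" ".join(current))
                current = []
        else:
            current.append(word)
    if current:
        phrases.append(" ".join(current))
    return phrases

print(key_phrases("The checkout process is slow"))     # ['checkout process', 'slow']
print(key_phrases("Website speed could be improved"))  # ['website speed', 'improved']
```

Even this crude version surfaces “checkout process” and “website speed” as the talking points from the example comments above, which is the behavior the question is testing for.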
You build a QnA Maker bot by using a frequently asked questions (FAQ) page.
You need to add professional greetings and other responses to make the bot more user friendly.
What should you do?
Increase the confidence threshold of responses
Enable active learning
Create multi-turn questions
Add chit-chat
According to the Microsoft Learn module “Build a QnA Maker knowledge base”, QnA Maker allows developers to create bots that answer user queries based on documents like FAQs or manuals. To make a bot more natural and conversational, Microsoft provides a “chit-chat” feature — a prebuilt, professionally written set of responses to common conversational phrases such as greetings (“Hello”), small talk (“How are you?”), and polite phrases (“Thank you”).
Adding chit-chat improves user experience by making the bot sound friendlier and more human-like. It doesn’t alter the main Q&A logic but enhances the bot’s tone and responsiveness.
The other options are not correct:
A. Increase the confidence threshold makes the bot more selective in responses but doesn’t add new conversational features.
B. Enable active learning improves knowledge base accuracy over time through user feedback.
C. Create multi-turn questions adds conversational flow for related topics but doesn’t add greetings or casual dialogue.
Thus, to make the bot more personable, the correct action is to Add chit-chat.
Match the Azure Cognitive Services service to the appropriate actions.
To answer, drag the appropriate service from the column on the left to its action on the right. Each service may be used once, more than once, or not at all.
NOTE: Each correct match is worth one point.



These matches are based on the Microsoft Azure AI Fundamentals (AI-900) Official Study Guide and the Microsoft Learn module “Explore Azure Cognitive Services.”
Microsoft Azure provides Cognitive Services that enable developers to integrate artificial intelligence capabilities—such as vision, speech, language understanding, and decision-making—into applications without requiring in-depth AI expertise.
Convert a user’s speech to text → Speech Service. The Azure Speech Service supports speech-to-text (STT) conversion, which transcribes spoken language into written text. This feature is commonly used in voice assistants, transcription systems, and voice-enabled apps. The service uses advanced speech recognition models to handle different accents, languages, and background noise.
Identify a user’s intent → Language Service. The Azure AI Language Service (which includes capabilities from LUIS – Language Understanding) is used to interpret what a user means or wants to achieve based on their words. It identifies intents (the goal or action behind the input) and entities (key pieces of information) in natural language text. This is a key component of conversational AI applications, allowing chatbots and virtual assistants to respond intelligently.
Provide a spoken response to the user → Speech Service. The Speech Service also supports text-to-speech (TTS) functionality, which converts textual responses into natural-sounding speech. This enables applications to communicate audibly with users, completing the conversational loop.
Translator Text is not used here because it’s primarily designed for language translation between different languages, not for speech recognition or intent understanding.
You need to develop a web-based AI solution for a customer support system. Users must be able to interact with a web app that will guide them to the best resource or answer.
Which service should you integrate with the web app to meet the goal?
Azure AI Language Service
Face
Azure AI Translator
Azure AI Custom Vision
QnA Maker is a cloud-based API service that lets you create a conversational question-and-answer layer over your existing data. Use it to build a knowledge base by extracting questions and answers from your semi-structured content, including FAQs, manuals, and documents. Answer users’ questions with the best answers from the QnAs in your knowledge base—automatically. Your knowledge base gets smarter, too, as it continually learns from user behavior.
Match the AI workload to the appropriate task.
To answer, drag the appropriate AI workload from the column on the left to its task on the right. Each workload may be used once, more than once, or not at all.
NOTE: Each correct match is worth one point.



This question tests your understanding of AI workloads as described in the Microsoft Azure AI Fundamentals (AI-900) study guide. Each Azure AI workload is designed to handle specific types of data and tasks: text, images, documents, or content generation.
Extract data from medical admission forms for import into a patient tracking database → Azure AI Document Intelligence. Formerly known as Form Recognizer, this service belongs to the Azure AI Document Intelligence workload. It extracts key-value pairs, tables, and textual information from structured and semi-structured documents such as forms, invoices, and admission sheets. For medical forms, Document Intelligence can identify fields like patient name, admission date, and diagnosis and export them into structured formats for database import.
Automatically create drafts for a monthly newsletter → Generative AI. This task involves creating original written content, which is a capability of generative AI. Microsoft’s Azure OpenAI Service uses large language models (like GPT-4) to generate human-like text, summaries, or articles. Generative AI workloads are ideal for automating creative writing, drafting newsletters, producing blogs, or summarizing reports.
Analyze aerial photos to identify flooded areas → Computer Vision. Computer vision workloads involve analyzing and interpreting visual data from images or videos. This includes detecting objects, classifying scenes, and identifying patterns such as flooded regions in aerial imagery. Azure’s Computer Vision or Custom Vision services can be trained to detect water coverage or terrain changes using image recognition techniques.
Thus, the correct matches are:
Azure AI Document Intelligence → Extract medical form data
Generative AI → Create newsletter drafts
Computer Vision → Identify flooded areas from aerial photos
You are processing photos of runners in a race.
You need to read the numbers on the runners’ shirts to identify the runners in the photos.
Which type of computer vision should you use?
facial recognition
optical character recognition (OCR)
semantic segmentation
object detection
According to the Microsoft Azure AI Fundamentals (AI-900) official study guide and the Microsoft Learn module “Identify features of computer vision workloads”, Optical Character Recognition (OCR) is a core capability within the computer vision domain that enables systems to detect and extract text from images or documents. OCR technology can identify printed or handwritten characters in photographs, scanned documents, or camera feeds, and convert them into machine-readable text.
In this scenario, the task is to read the numbers on runners’ shirts in race photos. These numbers are textual or numeric characters embedded within images. OCR is specifically designed for this purpose — to locate and recognize characters within visual data and convert them into usable text. Once extracted, those numbers can be cross-referenced with a database to identify each runner.
Let’s analyze why the other options are incorrect:
A. Facial recognition focuses on identifying individuals based on unique facial features, not reading text or numbers.
C. Semantic segmentation classifies each pixel of an image into categories (for example, separating road, sky, and people), but it doesn’t read text.
D. Object detection identifies and locates objects within an image (such as detecting people or vehicles) but does not extract readable text or numbers.
Therefore, since the task involves reading textual or numeric content from an image, the appropriate type of computer vision to use is Optical Character Recognition (OCR).
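The cross-referencing step described above can be sketched in a few lines; the registry contents and the OCR output values below are hypothetical placeholders, not results from a real OCR call:

```python
# Hypothetical runner registry: bib number -> registered name
registry = {"1042": "A. Jensen", "0077": "M. Okafor", "5513": "R. Silva"}

def identify_runners(ocr_numbers):
    """Map bib numbers read by OCR to registered runner names."""
    return {num: registry.get(num, "unknown") for num in ocr_numbers}

# "1042" is a known bib; "9999" simulates a misread or unregistered number
print(identify_runners(["1042", "9999"]))
```

In a real pipeline, the list of numbers would come from an OCR service's text-extraction results rather than being hard-coded.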
You need to generate images based on user prompts. Which Azure OpenAI model should you use?
GPT-4
DALL-E
GPT-3
Whisper
According to the Microsoft Azure OpenAI Service documentation and AI-900 official study materials, the DALL-E model is specifically designed to generate and edit images from natural language prompts. When a user provides a descriptive text input such as “a futuristic city skyline at sunset”, DALL-E interprets the textual prompt and produces an image that visually represents the content described. This functionality is known as text-to-image generation and is one of the creative AI capabilities supported by Azure OpenAI.
DALL-E belongs to the family of generative models that can create new visual content, expand existing images, or apply transformations to images based on textual instructions. Within Azure OpenAI, the DALL-E API enables developers to integrate image creation directly into applications—useful for design assistance, marketing content generation, or visualization tools. The model learns from vast datasets of text–image pairs and is optimized to ensure alignment, diversity, and accuracy in the produced visuals.
By contrast, the other options serve different purposes:
A. GPT-4 is a large language model for text-based generation, reasoning, and conversation, not for creating images.
C. GPT-3 is an earlier text generation model, primarily used for language tasks like summarization, classification, and question answering.
D. Whisper is an automatic speech recognition (ASR) model used to convert spoken language into written text; it has no image-generation capability.
Therefore, when the requirement is to generate images based on user prompts, the only Azure OpenAI model that fulfills this purpose is DALL-E. This aligns directly with the AI-900 learning objective covering Azure OpenAI generative capabilities for text, code, and image creation.
You build a machine learning model by using the automated machine learning user interface (UI).
You need to ensure that the model meets the Microsoft transparency principle for responsible AI.
What should you do?
Set Validation type to Auto.
Enable Explain best model.
Set Primary metric to accuracy.
Set Max concurrent iterations to 0.
Model explainability.
Most businesses run on trust, and being able to open the ML “black box” helps build transparency and trust. In heavily regulated industries like healthcare and banking, it is critical to comply with regulations and best practices. One key aspect of this is understanding the relationship between input variables (features) and model output. Knowing both the magnitude and direction of the impact each feature has on the predicted value (feature importance) helps you better understand and explain the model. With model explainability, automated ML runs can surface feature importance as part of their results.
Match the types of computer vision workloads to the appropriate scenarios.
To answer, drag the appropriate workload type from the column on the left to its scenario on the right. Each workload type may be used once, more than once, or not at all.
NOTE: Each correct match is worth one point.



In the Microsoft Azure AI Fundamentals (AI-900) curriculum, computer vision workloads are grouped into distinct types, each serving a specific purpose. The three major workloads illustrated here are image classification, object detection, and optical character recognition (OCR). Understanding their use cases is essential for correctly mapping them to real-world scenarios.
Generate captions for images → Image classification. The image classification workload is used to identify the main subject or context of an image and assign descriptive labels. In Microsoft Learn’s “Describe features of computer vision workloads,” image classification models are trained to recognize content (e.g., a cat, a beach, or a city). Caption generation builds on classification results by describing the image’s contents in human-readable language, based on what the model identifies as key visual features.
Extract movie title names from movie poster images → Optical character recognition (OCR). OCR is a vision workload that detects and extracts text from images. Azure AI Vision’s Read API or Document Intelligence OCR models can identify printed or handwritten text within posters, signs, or documents. In this case, the movie title text on a poster is best extracted using OCR.
Locate vehicles in images → Object detection. The object detection workload identifies multiple objects within an image and provides their locations using bounding boxes. It’s ideal for tasks like counting cars in a parking lot or tracking objects in traffic images.
Which OpenAI model does GitHub Copilot use to make suggestions for client-side JavaScript?
GPT-4
Codex
DALL-E
GPT-3
According to the Microsoft Azure AI Fundamentals (AI-900) learning path and Microsoft Learn documentation on GitHub Copilot, GitHub Copilot is powered by OpenAI Codex, a specialized language model derived from the GPT-3 family but fine-tuned specifically on programming languages and code data.
OpenAI Codex was designed to translate natural language prompts into executable code in multiple programming languages, including JavaScript, Python, C#, TypeScript, and Go. It can understand comments, function names, and code structure to generate relevant code suggestions in real time.
When a developer writes client-side JavaScript, GitHub Copilot uses Codex to analyze the context of the file and generate intelligent suggestions, such as completing functions, writing boilerplate code, or suggesting improvements. Codex can also explain what specific code does and provide inline documentation, which enhances developer productivity.
Option A (GPT-4): While some newer versions of GitHub Copilot (Copilot X) may integrate GPT-4 for conversational explanations, the core code completion engine remains based on Codex, as per the AI-900-level content.
Option C (DALL-E): Used for image generation, not for programming tasks.
Option D (GPT-3): Codex was fine-tuned from GPT-3 but has been further trained specifically for code generation tasks.
Therefore, the verified and official answer from Microsoft’s AI-900 curriculum is B. Codex — the OpenAI model used by GitHub Copilot to make suggestions for client-side JavaScript and other programming languages.
Select the answer that correctly completes the sentence.



This question is drawn from the Microsoft Azure AI Fundamentals (AI-900) syllabus section “Describe features of natural language processing (NLP) workloads on Azure.” According to the Microsoft Learn materials, Natural Language Processing (NLP) is a branch of artificial intelligence that allows computers to analyze, understand, and generate human language. NLP enables machines to work with text or speech data in a way that extracts meaning, sentiment, and intent.
Microsoft defines NLP as enabling scenarios such as language detection, text classification, key phrase extraction, sentiment analysis, and named entity recognition. The example given—classifying emails as “work-related” or “personal”—is a text classification task, which falls under NLP capabilities. The AI model processes the textual content of emails, identifies linguistic patterns, and categorizes them based on the detected topic or context.
Let’s analyze the other options:
Predict the number of future car rentals → This is a forecasting task, handled by machine learning regression models, not NLP.
Predict which website visitors will make a transaction → This is a classification or prediction problem in machine learning, not NLP, since it deals with behavioral or numerical data rather than language.
Stop a process in a factory when extremely high temperatures are registered → This is an IoT or anomaly detection scenario, focusing on sensor data, not language understanding.
Therefore, only classifying email messages as work-related or personal correctly represents an NLP use case. It illustrates how NLP can analyze written text and make intelligent categorizations—a key capability covered in AI-900’s natural language workloads section.
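A deliberately naive sketch shows the shape of the email-labeling task; it uses keyword matching rather than a trained model, and the term list is invented for illustration:

```python
# Invented keyword list; a real NLP classifier would be trained on labeled examples
WORK_TERMS = {"meeting", "invoice", "deadline", "report", "client"}

def classify_email(subject):
    """Toy text classifier: label an email by keyword overlap with work terms."""
    words = {w.strip(".,!?").lower() for w in subject.split()}
    return "work-related" if words & WORK_TERMS else "personal"

print(classify_email("Quarterly report deadline moved to Friday"))  # work-related
print(classify_email("Dinner on Sunday?"))                          # personal
```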
Select the answer that correctly completes the sentence.



The correct answer is Azure AI Language, which includes the Question Answering capability (previously known as QnA Maker). According to the Microsoft Azure AI Fundamentals (AI-900) study guide and Microsoft Learn documentation, the Azure AI Language service can be used to create a knowledge base from frequently asked questions (FAQ) and other structured or semi-structured text sources.
This service allows developers to build intelligent applications that can understand and respond to user questions in natural language by referencing prebuilt or custom knowledge bases. The Question Answering feature extracts pairs of questions and answers from documents, websites, or manually entered data and uses them to construct a searchable knowledge base. This knowledge base can then be integrated with Azure Bot Service or other conversational platforms to create interactive, self-service chatbots.
Here’s how it works:
Developers upload FAQ documents, URLs, or structured content.
Azure AI Language processes the content and identifies logical question-answer pairs.
The model stores these pairs in a knowledge base that can be queried by user input.
When users ask questions, the model finds the best matching answer using natural language understanding techniques.
In contrast:
Azure AI Document Intelligence (Form Recognizer) is used to extract structured data from forms and documents, not to create FAQ knowledge bases.
Azure AI Bot Service is for managing and deploying conversational bots but does not generate knowledge bases.
Microsoft Bot Framework SDK provides tools for building conversational logic but still requires a knowledge source like Question Answering from Azure AI Language.
Therefore, the service that can create a knowledge base from FAQ content is Azure AI Language.
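A minimal sketch of the matching idea described in the steps above, assuming a hand-built list of question–answer pairs and simple word overlap in place of the service’s actual natural language understanding:

```python
def best_answer(question, kb):
    """Return the answer whose stored question shares the most words with the query."""
    q_words = set(question.lower().split())

    def overlap(entry):
        return len(q_words & set(entry["question"].lower().split()))

    return max(kb, key=overlap)["answer"]

# Invented knowledge base for the sketch
kb = [
    {"question": "how do I reset my password", "answer": "Use the reset link."},
    {"question": "what are your opening hours", "answer": "9am to 5pm."},
]
print(best_answer("how do I reset my password please", kb))  # Use the reset link.
```

The real Question Answering feature ranks candidate answers with language models and confidence scores rather than raw word overlap, but the query-to-best-match flow is the same.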
Extracting relationships between data from large volumes of unstructured data is an example of which type of AI workload?
computer vision
knowledge mining
natural language processing (NLP)
anomaly detection
Extracting relationships and insights from large volumes of unstructured data (such as documents, text files, or images) aligns with the Knowledge Mining workload in Microsoft Azure AI. According to the Microsoft AI Fundamentals (AI-900) study guide and Microsoft Learn module “Describe features of common AI workloads,” knowledge mining involves using AI to search, extract, and structure information from vast amounts of unstructured or semi-structured content.
In a typical knowledge mining solution, tools like Azure AI Search and Azure AI Document Intelligence work together to index data, apply cognitive skills (such as OCR, key phrase extraction, and entity recognition), and then enable users to discover relationships and patterns through intelligent search. The process transforms raw content into searchable knowledge.
The key characteristics of knowledge mining include:
Using AI to extract entities and relationships between data points.
Applying cognitive skills to text, images, and documents.
Creating searchable knowledge stores from unstructured data.
Hence, B. Knowledge Mining is correct.
The other options—computer vision, NLP, and anomaly detection—deal with image recognition, language understanding, and data irregularities, respectively, not large-scale information extraction.
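The “searchable knowledge store” idea at the heart of knowledge mining can be illustrated with a toy inverted index; the document ids and text below are invented:

```python
from collections import defaultdict

def build_index(documents):
    """Build an inverted index: term -> set of document ids containing it."""
    index = defaultdict(set)
    for doc_id, text in documents.items():
        for term in text.lower().split():
            index[term].add(doc_id)
    return index

docs = {
    "memo-1": "flood damage reported in riverside district",
    "memo-2": "insurance claim for flood damage approved",
}
index = build_index(docs)
print(sorted(index["flood"]))  # both memos mention 'flood'
```

Services like Azure AI Search layer cognitive skills (OCR, entity recognition, key phrases) on top of this kind of indexing so that relationships across documents become discoverable.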
You use natural language processing to process text from a Microsoft news story.
You receive the output shown in the following exhibit.

Which type of natural language processing was performed?
entity recognition
key phrase extraction
sentiment analysis
translation
https://docs.microsoft.com/en-us/azure/cognitive-services/text-analytics/overview
You can provide the Text Analytics service with unstructured text and it will return a list of entities in the text that it recognizes. The service can also provide links to more information about that entity on the web. An entity is essentially an item of a particular type or category, and in some cases a subtype, such as those shown in the following table.
https://docs.microsoft.com/en-us/learn/modules/analyze-text-with-text-analytics-service/2-get-started-azure
You have an Azure Machine Learning model that uses clinical data to predict whether a patient has a disease.
You clean and transform the clinical data.
You need to ensure that the accuracy of the model can be proven.
What should you do next?
Train the model by using the clinical data.
Split the clinical data into two datasets.
Train the model by using automated machine learning (automated ML).
Validate the model by using the clinical data.
According to the Microsoft Azure AI Fundamentals (AI-900) official study guide and Microsoft Learn modules on machine learning concepts, ensuring that the accuracy of a predictive model can be proven requires data partitioning—specifically splitting the available data into training and testing datasets. This is a foundational concept in supervised machine learning.
When you split the data, typically about 70–80% of the dataset is used for training the model, while the remaining 20–30% is used for testing (or validation). The reason behind this approach is to ensure that the model’s performance metrics—such as accuracy, precision, recall, and F1-score—are evaluated on data the model has never seen before. This prevents overfitting and allows you to demonstrate that the model generalizes well to new, unseen data.
In the AI-900 Microsoft Learn content under “Describe the machine learning process”, it is explained that after cleaning and transforming the data, the next essential step is data splitting to “evaluate model performance objectively.” By keeping training and testing data separate, you can prove the reliability and accuracy of the model’s predictions, which is particularly crucial in sensitive domains like clinical or healthcare analytics, where decision transparency and validation are vital.
Option A (Train the model by using the clinical data) is incorrect because you should not train and evaluate on the same data—it would lead to biased results.
Option C (Train the model using automated ML) is incorrect because automated ML is a method for training and tuning, but it doesn’t inherently prove accuracy.
Option D (Validate the model by using the clinical data) is also incorrect if you use the same dataset for validation and training—it would not prove true accuracy.
Therefore, per Microsoft’s official AI-900 study content, the verified correct answer is B. Split the clinical data into two datasets.
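A minimal sketch of the split step, assuming an 80/20 ratio and a fixed seed for reproducibility (the records here are stand-ins for real clinical rows):

```python
import random

def split_dataset(rows, train_fraction=0.8, seed=42):
    """Shuffle rows, then hold out a test set the model never sees during training."""
    rows = list(rows)
    random.Random(seed).shuffle(rows)
    cut = int(len(rows) * train_fraction)
    return rows[:cut], rows[cut:]

records = list(range(100))  # stand-in for 100 cleaned clinical records
train, test = split_dataset(records)
print(len(train), len(test))  # 80 20
```

Evaluating only on the held-out `test` portion is what makes the reported accuracy provable: the model is scored on data it never saw.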
For each of the following statements, select Yes if the statement is true. Otherwise, select No.
NOTE: Each correct selection is worth one point.



According to the Microsoft Azure AI Fundamentals (AI-900) official study guide and Microsoft Learn module “Describe core concepts of machine learning on Azure”, the Azure Machine Learning Designer is a drag-and-drop, no-code/low-code interface that allows users to build, test, and deploy machine learning models visually without needing to write extensive code.
Drag-and-drop visual canvas → Yes. The Azure Machine Learning Designer indeed provides a graphical interface where users can connect prebuilt modules for data preprocessing, training, evaluation, and deployment. Microsoft documentation describes it as a “drag-and-drop visual environment that simplifies machine learning model creation.” This allows beginners and business users to construct machine learning pipelines intuitively, confirming this statement as True.
Save progress as a pipeline draft → Yes. The designer lets users save their current work as a pipeline draft, enabling them to pause and return later. Microsoft Learn explicitly states that you can “save and publish pipeline drafts before running or deploying them.” This functionality ensures workflow continuity, collaboration, and version management, making this statement also True.
Include custom JavaScript functions → No. The Azure Machine Learning Designer allows the integration of Python scripts through the “Execute Python Script” module for custom logic, but it does not support JavaScript. Custom code in the designer environment is limited to Python, as the platform is built for data science and machine learning tasks typically handled in Python-based environments. Therefore, this statement is False.
Which parameter should you configure to produce more verbose responses from a chat solution that uses the Azure OpenAI GPT-3.5 model?
Presence penalty
Temperature
Stop sequence
Max response
In a chat solution using the Azure OpenAI GPT-3.5 model, the temperature parameter controls the creativity and variability of generated responses. According to the Microsoft Learn documentation for Azure OpenAI Service, temperature is a float value typically between 0 and 2, determining how deterministic or random the model’s output is. A lower temperature (e.g., 0–0.3) makes responses more focused and deterministic, while a higher temperature (e.g., 0.8–1.2) produces more verbose, creative, and diverse responses.
When you want the chat model to generate more detailed or expressive output, increasing the temperature encourages the model to explore a broader range of possible continuations, leading to longer and more varied text. This parameter directly affects how “verbose” or elaborate the model’s responses can be, which is why it is the correct answer.
The other options are not appropriate for this scenario:
A. Presence penalty reduces repetition by discouraging reuse of the same phrases but does not control verbosity.
C. Stop sequence defines tokens where generation should stop, limiting rather than extending response length.
D. Max response (max tokens) controls the maximum length of the response but does not inherently make answers more verbose or expressive.
Thus, to encourage more elaborate and detailed output from the Azure OpenAI GPT-3.5 model, the correct configuration parameter to adjust is Temperature (B).
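Temperature is typically applied by dividing the model’s logits before the softmax. This standalone sketch (with invented logits, not an Azure OpenAI call) shows how a low temperature concentrates probability on the top token, while a high temperature flattens the distribution:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Softmax over logits scaled by temperature; lower T sharpens, higher T flattens."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]  # invented scores for three candidate tokens
cold = softmax_with_temperature(logits, 0.2)
hot = softmax_with_temperature(logits, 2.0)
print(round(cold[0], 3), round(hot[0], 3))  # top token dominates at low temperature
```

In the Azure OpenAI playground, adjusting the temperature slider changes this scaling, which is why higher values yield more varied, expansive completions.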
Match the principles of responsible AI to appropriate requirements.
To answer, drag the appropriate principles from the column on the left to its requirement on the right. Each principle may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content.
NOTE: Each correct selection is worth one point.



According to the Microsoft Azure AI Fundamentals (AI-900) official study guide and the Microsoft Learn module “Identify guiding principles for responsible AI”, responsible AI is built upon six foundational principles: Fairness, Reliability and Safety, Privacy and Security, Inclusiveness, Transparency, and Accountability. Each principle serves to guide the ethical design, deployment, and management of artificial intelligence systems.
Fairness – This principle ensures that AI systems treat all people fairly and do not discriminate based on personal attributes such as gender, race, or age. The Microsoft Learn content emphasizes that “AI systems should treat everyone fairly” and that organizations must evaluate datasets and model outputs for bias. In this scenario, “The system must not discriminate based on gender, race” clearly aligns with Fairness because it directly addresses equitable treatment and unbiased decision-making.
Privacy and Security – Microsoft’s responsible AI framework stresses that “AI systems must be secure and respect privacy.” This means personal data should be safeguarded, processed lawfully, and visible only to authorized users. The statement “Personal data must be visible only to approved users” reflects the importance of protecting sensitive information and controlling access—precisely the intent of the Privacy and Security principle.
Transparency – Transparency refers to ensuring that users understand how AI systems operate and make decisions. Microsoft notes that “AI systems should be understandable and users should be able to know why decisions are made.” The requirement “Automated decision-making processes must be recorded so that approved users can identify why a decision was made” directly supports this principle. Transparency promotes trust and accountability by documenting the reasoning behind AI outputs.
Reliability and Safety, though another core principle, does not directly relate to any of the provided statements in this question.
Select the answer that correctly completes the sentence.



During model training, a portion of the dataset (commonly 70–80%) is used to teach the machine learning algorithm to identify patterns and relationships between input features and the output label. The remaining data (usually 20–30%) is held back to evaluate the model’s performance and verify its accuracy on unseen data. This ensures the model is not overfitted (too tightly fitted to training data) and can generalize well to new inputs.
Key steps highlighted in Microsoft Learn materials:
Model Training: Use the training data to fit the model — the algorithm learns relationships between input features and labels.
Model Evaluation: Use the test or validation data to assess the accuracy, precision, recall, or other metrics of the trained model.
Model Deployment: Once validated, the model is deployed to make real-world predictions.
Other options explained:
Feature engineering: Involves preparing and transforming input data, not splitting datasets for training and testing.
Time constraints: Not a machine learning process step.
Feature stripping: Not a recognized ML concept.
MLflow models: Refers to an open-source tool for tracking and managing models, not dataset splitting or training.
Thus, when you use a portion of the dataset to prepare and train a machine learning model, and retain the rest to verify results, the process is known as model training.
Which Azure OpenAI model should you use to summarize the text from a document?
Whisper
DALL-E
Codex
GPT
According to the Microsoft Learn documentation and the Azure AI Fundamentals (AI-900) study guide, the GPT (Generative Pre-trained Transformer) family of models within Azure OpenAI Service is used for text-based natural language tasks, including summarization, content generation, and text completion.
When you need to summarize text from a document, GPT models (such as GPT-3.5 or GPT-4) can process large sections of text, extract the most relevant details, and generate concise summaries that retain the key meaning. The summarization task uses the model’s natural language understanding capabilities to identify core concepts and generate human-like, coherent text.
Other options are incorrect:
A. Whisper → Used for speech-to-text transcription, not text summarization.
B. DALL-E → Generates images from text prompts, not text summaries.
C. Codex → Specializes in code generation and completion, not document summarization.
To complete the sentence, select the appropriate option in the answer area.



The correct answer is object detection. According to the Microsoft Azure AI Fundamentals (AI-900) official study materials and Microsoft Learn module “Explore computer vision”, object detection is the process of identifying and locating objects within an image or video. The primary characteristic of object detection, as emphasized in the study guide, is its ability to return a bounding box around each detected object along with a corresponding label or class.
In this question, the task involves returning a bounding box that indicates the location of a vehicle in an image. This is the exact definition of object detection — identifying that the object exists (a vehicle) and determining its position within the frame. Microsoft Learn clearly differentiates this from other computer vision tasks. Image classification, for example, only determines what an image contains as a whole (for instance, “this image contains a vehicle”), but it does not indicate where in the image the object is located. Optical character recognition (OCR) is specifically used for extracting printed or handwritten text from images, and semantic segmentation involves classifying every pixel in an image to understand boundaries in greater detail, often used in autonomous driving or medical imaging.
The official AI-900 guide highlights object detection as one of the key computer vision workloads supported by Azure Computer Vision, Custom Vision, and Azure Cognitive Services. These services are designed to detect multiple instances of various object types in a single image, outputting bounding boxes and confidence scores for each.
Therefore, based on the AI-900 official curriculum and Microsoft Learn concepts, returning a bounding box that shows the location of a vehicle is a textbook example of object detection, as it involves both recognition and localization of the object within the image frame.
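A sketch of the kind of structure an object detection result takes; the labels, confidence scores, and pixel coordinates below are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    """One detected object: class label, confidence score, and a bounding box
    given as (left, top, width, height) in pixels."""
    label: str
    confidence: float
    box: tuple

def high_confidence(detections, threshold=0.5):
    """Keep only detections whose confidence meets the threshold."""
    return [d for d in detections if d.confidence >= threshold]

results = [
    Detection("vehicle", 0.92, (34, 110, 210, 130)),
    Detection("vehicle", 0.31, (400, 95, 180, 120)),  # likely a false positive
]
print(len(high_confidence(results)))  # 1
```

Azure’s vision services return analogous records (label, confidence, bounding box) in JSON; filtering by confidence is a common post-processing step.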
Select the answer that correctly completes the sentence.



According to the Microsoft Azure AI Fundamentals (AI-900) Official Study Guide and the Microsoft Learn module “Explore computer vision in Microsoft Azure,” Optical Character Recognition (OCR) is a computer vision capability that detects and extracts printed or handwritten text from images or scanned documents and converts it into machine-readable digital text.
In this scenario, a historian wants to digitize newspaper articles — which means converting physical or scanned images of printed text into digital text for easier searching, archiving, and analysis. This is exactly the function of OCR. By using OCR, the historian can take photos or scans of old newspapers and extract the words into editable digital documents, preserving valuable historical information.
OCR is a key feature of the Azure Computer Vision service, which provides capabilities such as:
Extracting text from images or PDFs.
Reading both printed and handwritten text in multiple languages.
Converting physical documents into searchable digital files.
Let’s examine the incorrect options:
Facial analysis: Detects facial features, age, gender, and emotions — unrelated to text extraction.
Image classification: Identifies what an image contains (e.g., “dog,” “car,” or “building”) but doesn’t extract text.
Object detection: Identifies and locates objects within an image using bounding boxes, not suitable for text recognition.
Therefore, to digitize newspaper articles and convert printed words into editable digital text, the correct technology to use is Optical Character Recognition (OCR), provided by the Azure Computer Vision API.
✅ Final Answer: optical character recognition (OCR)
You are developing a model to predict events by using classification.
You have a confusion matrix for the model scored on test data as shown in the following exhibit.

Use the drop-down menus to select the answer choice that completes each statement based on the information presented in the graphic.
NOTE: Each correct selection is worth one point.



Box 1: 11

TP = True Positive.
The class labels in the training set can take on only two possible values, which we usually refer to as positive or negative. The positive and negative instances that a classifier predicts correctly are called true positives (TP) and true negatives (TN), respectively. Similarly, the incorrectly classified instances are called false positives (FP) and false negatives (FN).
Box 2: 1,033
FN = False Negative
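The arithmetic behind these counts can be sketched in plain Python. The numbers below are placeholders for illustration only (the exhibit itself is not reproduced here), showing how TP, TN, FP, and FN combine into familiar metrics:

```python
# Hypothetical counts for a 2x2 confusion matrix (placeholders, not the exhibit's values).
TP = 11    # true positives: predicted positive, actually positive
FN = 1033  # false negatives: predicted negative, actually positive
FP = 25    # false positives: predicted positive, actually negative
TN = 2500  # true negatives: predicted negative, actually negative

# Standard metrics derived from the four counts.
accuracy = (TP + TN) / (TP + TN + FP + FN)
precision = TP / (TP + FP)
recall = TP / (TP + FN)  # also called sensitivity or true positive rate

print(f"accuracy={accuracy:.3f} precision={precision:.3f} recall={recall:.3f}")
```

A large FN count like the one above drags recall down sharply even when accuracy still looks acceptable, which is why exam questions stress reading all four cells rather than accuracy alone.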
Which feature of the Azure AI Language service should you use to automate the masking of names and phone numbers in text data?
Personally Identifiable Information (PII) detection
entity linking
custom text classification
custom named entity recognition (NER)
The correct answer is A. Personally Identifiable Information (PII) detection.
In the Azure AI Language service, PII detection is a built-in feature designed to automatically identify and redact sensitive or confidential information from text data. According to the Microsoft Learn module “Identify capabilities of Azure AI Language” and the AI-900 study guide, this capability can detect personal data such as names, phone numbers, email addresses, credit card numbers, and other identifiers.
When applied, the service scans input text and either masks or removes these PII elements based on configurable parameters, ensuring compliance with data privacy regulations like GDPR or HIPAA.
For example, if a document contains “John Doe’s phone number is 555-123-4567,” PII detection can return “******’s phone number is ***********,” thereby preventing exposure of sensitive personal details.
Option analysis:
A. Personally Identifiable Information (PII) detection: ✅ Correct. It identifies and masks sensitive data in text.
B. Entity linking: Connects recognized entities to known data sources like Wikipedia; not used for redaction.
C. Custom text classification: Classifies text into predefined categories; not designed for masking personal data.
D. Custom named entity recognition (NER): Detects domain-specific entities you define but doesn’t automatically mask them.
Therefore, to automate masking of names and phone numbers, the appropriate Azure AI Language feature is PII detection.
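As a purely local illustration of the masking idea (this sketch uses a hand-written regex and a hard-coded name, both hypothetical simplifications; the real PII detection feature relies on trained models, not patterns like these):

```python
import re

def mask_pii(text):
    """Toy redaction: replace phone-number-like patterns and one
    hard-coded name with asterisks. The actual Azure AI Language PII
    feature detects entities contextually with trained models."""
    # Hypothetical phone pattern covering only 555-123-4567 style numbers.
    text = re.sub(r"\b\d{3}-\d{3}-\d{4}\b", "***********", text)
    # Hard-coded name stand-in; a real service finds names from context.
    text = text.replace("John Doe", "********")
    return text

masked = mask_pii("John Doe's phone number is 555-123-4567.")
print(masked)  # both the name and the number come back as asterisks
```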
Which two scenarios are examples of a conversational AI workload? Each correct answer presents a complete solution.
NOTE: Each correct selection is worth one point.
a telephone answering service that has a pre-recorded message
a chatbot that provides users with the ability to find answers on a website by themselves
telephone voice menus to reduce the load on human resources
a service that creates frequently asked questions (FAQ) documents by crawling public websites
According to the AI-900 official study guide and Microsoft Learn module “Describe features of conversational AI workloads on Azure”, conversational AI refers to artificial intelligence systems that interact with users through natural language via text or speech. These systems include chatbots, virtual assistants, and interactive voice response (IVR) systems that simulate human conversation.
B. Chatbot that provides users with the ability to find answers on a website by themselves: This is a classic example of conversational AI. Chatbots use natural language understanding (LUIS) and Azure Bot Service to interpret user input, identify intent, and provide relevant responses automatically. They help users self-serve information without human support, such as retrieving account details or answering FAQs.
C. Telephone voice menus to reduce the load on human resources: Automated telephone systems or IVRs use conversational AI to interpret spoken commands and route calls intelligently. This is often implemented using Azure Cognitive Services Speech (for speech-to-text and text-to-speech) combined with Azure Bot Service for managing dialogue flow.
You need to create a training dataset and validation dataset from an existing dataset.
Which module in the Azure Machine Learning designer should you use?
Select Columns in Dataset
Add Rows
Split Data
Join Data
In Azure Machine Learning designer, the Split Data module is specifically designed to divide a dataset into training and validation (or testing) subsets. The AI-900 study guide and the Microsoft Learn module “Split data for training and evaluation” explain that this module allows users to control how data is partitioned, ensuring that models are trained on one portion of the data and tested on unseen data to assess performance.
By default, the Split Data module uses a 70/30 or 80/20 ratio, meaning 70–80% of the data is used for training and the remaining 20–30% for validation or testing. This ensures the model’s generalizability and prevents overfitting.
The other options serve different purposes:
A. Select Columns in Dataset: Used to choose specific columns or features from a dataset.
B. Add Rows: Combines multiple datasets vertically.
D. Join Data: Combines datasets horizontally based on a common key.
Only Split Data performs the function of dividing data into training and validation subsets.
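Outside the designer, the same 70/30 idea can be sketched in plain Python. This toy split is only an analogy for what the module does, not its implementation:

```python
import random

def split_data(rows, train_fraction=0.7, seed=42):
    """Shuffle and split rows into training and validation subsets,
    mirroring the idea behind the designer's Split Data module."""
    rows = list(rows)
    random.Random(seed).shuffle(rows)       # deterministic shuffle for repeatability
    cut = int(len(rows) * train_fraction)   # index where the split happens
    return rows[:cut], rows[cut:]

records = list(range(100))                  # stand-in for 100 dataset rows
train, validation = split_data(records)
print(len(train), len(validation))          # 70 30
```

Shuffling before splitting matters: without it, any ordering in the source data (for example, records sorted by date) would leak systematic differences between the training and validation subsets.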
What can be used to complete a paragraph based on a sentence provided by a user?
Azure AI Language
Azure OpenAI
Azure Machine Learning
Azure AI Vision
The service that can complete a paragraph based on a sentence is Azure OpenAI. According to Microsoft Learn’s AI-900 study guide, Azure OpenAI provides access to advanced language models like GPT-3.5 and GPT-4, which can generate and continue text, summarize, or create content based on prompts. The task described—text completion—is precisely what GPT models are designed for.
Azure AI Language performs language understanding and analysis (sentiment, key phrases, translation), Azure Machine Learning trains custom models, and Azure AI Vision handles images. Hence, Azure OpenAI is the correct choice.
You need to build an app that will identify celebrities in images.
Which service should you use?
Azure OpenAI Service
Azure Machine Learning
conversational language understanding (CLU)
Azure AI Vision
According to the Microsoft Azure AI Fundamentals (AI-900) official learning path, the appropriate service for recognizing celebrities in images is Azure AI Vision (formerly Computer Vision). This service is part of Azure’s Cognitive Services suite and specializes in analyzing visual content using pretrained deep learning models. One of its built-in capabilities, as documented in Microsoft Learn: “Analyze images with Azure AI Vision”, includes object detection, face detection, and celebrity recognition.
The Azure AI Vision Analyze API can detect and identify thousands of objects, brands, and celebrities. When an image is submitted to the service, the model compares detected faces to a known database of public figures and returns metadata including celebrity names, confidence scores, and bounding box coordinates. This makes it ideal for applications that need to recognize well-known individuals automatically—such as media cataloging, content tagging, or entertainment apps.
The other options are incorrect:
A. Azure OpenAI Service provides generative AI and language models (like GPT-4), but it cannot analyze image content directly in the context of AI-900 fundamentals.
B. Azure Machine Learning is for custom model training and deployment, not a prebuilt vision recognition service.
C. Conversational Language Understanding (CLU) processes natural language input, not images.
Therefore, the correct service for identifying celebrities in images is D. Azure AI Vision.
Match the Al workload to the appropriate task.
To answer, drag the appropriate Al workload from the column on the left to its task on the right. Each workload may be used once, more than once, or not at all
NOTE: Each correct match is worth one point.


You need to convert receipts into transactions in a spreadsheet. The spreadsheet must include the date of the transaction, the merchant, the total spent, and any taxes paid.
Which Azure AI service should you use?
Face
Azure AI Language
Azure AI Document Intelligence
Azure AI Custom Vision
To extract structured data such as transaction date, merchant name, total amount, and taxes from receipts, the best service is Azure AI Document Intelligence (formerly known as Form Recognizer). As described in the Microsoft Learn module: “Extract data from documents with Azure AI Document Intelligence”, this service applies optical character recognition (OCR) combined with machine learning models to identify and extract key-value pairs and tabular data from semi-structured documents like invoices, receipts, and forms.
The prebuilt receipt model of Document Intelligence can automatically recognize common receipt fields, including:
Merchant Name
Transaction Date
Total Amount
Taxes
Items Purchased
It outputs structured JSON that can easily be converted into spreadsheet or database entries. This capability eliminates the need for manual data entry, ensuring accuracy and efficiency in digitizing financial documents.
The other options are incorrect:
A. Face detects and verifies human faces but does not extract text or numerical data.
B. Azure AI Language analyzes text sentiment, key phrases, and entities but does not interpret scanned documents.
D. Azure AI Custom Vision is for training image classification or object detection models, not document data extraction.
Therefore, the most accurate and Microsoft-verified service for converting receipts into structured transactions in a spreadsheet is C. Azure AI Document Intelligence.
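A minimal sketch of the JSON-to-spreadsheet step, assuming hypothetical field names and values (not the prebuilt receipt model's actual output schema):

```python
import csv
import io

# Hypothetical extraction results, shaped like the key/value pairs a
# receipt model might return. Field names here are illustrative only.
receipts = [
    {"MerchantName": "Contoso Cafe", "TransactionDate": "2024-03-01",
     "Total": 18.75, "Tax": 1.50},
    {"MerchantName": "Fabrikam Fuel", "TransactionDate": "2024-03-02",
     "Total": 42.10, "Tax": 3.20},
]

# Write the structured fields into CSV rows that any spreadsheet can open.
buffer = io.StringIO()
writer = csv.DictWriter(buffer, fieldnames=["TransactionDate", "MerchantName", "Total", "Tax"])
writer.writeheader()
writer.writerows(receipts)
print(buffer.getvalue())
```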
What is an advantage of using a custom model in Form Recognizer?
Only a custom model can be deployed on-premises.
A custom model can be trained to recognize a variety of form types.
A custom model is less expensive than a prebuilt model.
A custom model always provides higher accuracy.
Azure AI Form Recognizer extracts information from structured and semi-structured documents. A custom model in Form Recognizer allows an organization to train the system on its specific document layouts and data fields.
As per the AI-900 study guide, a key advantage of a custom model is its flexibility. It can be trained with a set of labeled examples (e.g., invoices, purchase orders, receipts) that match the company’s format. Once trained, the model learns where to locate and extract fields such as invoice numbers, dates, or totals—regardless of layout differences between form types.
Option B is correct because a custom model can be trained to recognize a variety of form types, making it adaptable for diverse business processes.
Options A, C, and D are incorrect:
A: Both prebuilt and custom models are cloud-based; on-premises deployment is not an exclusive feature.
C: Custom models are not cheaper; they may involve additional training costs.
D: Custom models do not always guarantee higher accuracy—accuracy depends on the training data quality.
You need to provide content for a business chatbot that will help answer simple user queries.
What are three ways to create question and answer text by using Azure AI Language Service's question answering? Each correct answer presents a complete solution.
NOTE: Each correct selection is worth one point.
Connect the bot to the Cortana channel and ask questions by using Cortana.
Import chit-chat content from a predefined data source.
Manually enter the questions and answers.
Use Azure Machine Learning Automated ML to train a model based on a file that contains question and answer pairs.
Generate the questions and answers from an existing webpage.
The correct answers are B. Import chit-chat content from a predefined data source, C. Manually enter the questions and answers, and E. Generate the questions and answers from an existing webpage.
According to Microsoft Learn and the Azure AI Fundamentals (AI-900) study guide, the Question Answering feature of the Azure AI Language Service (formerly part of QnA Maker) allows developers to create a knowledge base (KB) that enables a chatbot to answer common questions automatically. This knowledge base can be built in three main ways:
Import chit-chat content (B): Azure provides predefined chit-chat datasets that can be imported to make a bot more conversational and natural. This includes small talk such as greetings, acknowledgments, and polite responses (for example, “How are you?” → “I’m doing great, thanks!”). Importing this content enriches the bot’s personality and improves user engagement.
Manually enter questions and answers (C): Developers can manually add pairs of questions and answers directly into the question answering knowledge base. This approach is suitable for custom FAQs or domain-specific content. It gives complete control over how each question is phrased and what answer is returned, ensuring high precision and clarity.
Generate questions and answers from an existing webpage (E): Azure AI Language can automatically extract Q&A pairs from a website’s FAQ or support page. This is done by providing the webpage URL to the service, which scans the page and builds a knowledge base from the detected questions and corresponding answers.
The other options are incorrect:
A (Cortana channel) relates to bot deployment, not knowledge creation.
D (Automated ML) is used for predictive modeling, not for building Q&A datasets.
Thus, the verified correct answers are B, C, and E.
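The webpage-extraction idea (option E) can be sketched with a toy FAQ page. The real service crawls and parses actual webpages far more robustly than this illustrative regex:

```python
import re

# Toy FAQ markup; a real support page is messier and the service's
# parser is much more sophisticated than a single regex.
html = (
    "<h3>How do I reset my password?</h3><p>Use the account settings page.</p>"
    "<h3>Where is my order?</h3><p>Check the tracking link in your email.</p>"
)

# Each <h3>/<p> pair becomes one question/answer entry in the knowledge base.
pairs = re.findall(r"<h3>(.*?)</h3><p>(.*?)</p>", html)
for question, answer in pairs:
    print(question, "->", answer)
```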
When you design an AI system to assess whether loans should be approved, the factors used to make the decision should be explainable.
This is an example of which Microsoft guiding principle for responsible AI?
transparency
inclusiveness
fairness
privacy and security
Microsoft’s Responsible AI Principles, as outlined in the AI-900 certification materials and official Microsoft documentation, emphasize six guiding principles: fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability. The principle of transparency means that AI systems should be designed so their decisions and processes are understandable and explainable to users and stakeholders.
In this scenario, the AI system is being developed to decide whether a loan should be approved. Such a decision directly affects people’s lives and finances, so it is essential that the system can explain which factors influenced its decision—for example, credit score, income, or payment history. Microsoft’s Responsible AI framework stresses that transparency helps ensure trust between humans and AI systems. When decisions are explainable, users can understand and contest the reasoning if necessary.
The other options do not align precisely with this scenario:
B. Inclusiveness focuses on making AI accessible to all people, regardless of ability or background.
C. Fairness ensures that AI systems treat all individuals equally and do not discriminate. While fairness is important for loan assessment, the question specifically highlights the need for explainability, not equality.
D. Privacy and Security deals with safeguarding user data, which is separate from explaining decisions.
Therefore, the principle demonstrated here is transparency, as it ensures decision-making processes are clear, explainable, and traceable—directly aligning with Microsoft’s responsible AI guidance.
For each of the following statements, select Yes if the statement is true. Otherwise, select No.
NOTE: Each correct selection is worth one point.



The Translator service, part of Microsoft Azure Cognitive Services, is designed specifically for text translation between multiple languages. It is a cloud-based neural machine translation service that supports more than 100 languages. According to Microsoft Learn’s module “Translate text with the Translator service”, this service provides two main capabilities: text translation and automatic language detection.
“You can use the Translator service to translate text between languages.” → Yes. This statement is true. The primary purpose of the Translator service is to translate text accurately and efficiently between supported languages, such as English to Spanish or French to Japanese. It maintains contextual meaning using neural machine translation models.
“You can use the Translator service to detect the language of a given text.” → Yes. This statement is also true. The Translator service includes automatic language detection, which determines the source language before translation. For instance, if a user submits text in an unknown language, the service can identify it automatically before performing translation.
“You can use the Translator service to transcribe audible speech into text.” → No. This statement is false. Transcribing speech (audio) into text is a function of the Azure Speech service, specifically the Speech-to-Text API, not the Translator service.
Therefore, the Translator service is used for text translation and language detection, while speech transcription belongs to the Speech service.
You are building a Conversational Language Understanding model for an e-commerce business.
You need to ensure that the model detects when utterances are outside the intended scope of the model.
What should you do?
Export the model.
Create a new model.
Add utterances to the None intent.
Create a prebuilt task entity.
In Conversational Language Understanding (CLU), a core service within Azure AI Language, intents represent the goals or purposes behind user utterances (for example, “Track my order” or “Cancel my subscription”). However, in real-world scenarios, users often provide inputs that do not match any defined intent. To handle such cases gracefully, Microsoft recommends including a “None” intent that captures out-of-scope utterances — text that doesn’t belong to any other intent in your model.
According to the Microsoft Learn module: “Build a Conversational Language Understanding app”, the None intent serves as a catch-all or fallback category for utterances that the model should ignore or respond to with a default message (e.g., “I’m sorry, I don’t understand that.”). By training the model with multiple examples of irrelevant or unrelated utterances in this intent, you improve its ability to distinguish between valid and invalid user inputs.
The other options are incorrect:
A. Export the model: Exporting only saves or transfers the model; it does not influence how the model detects irrelevant utterances.
B. Create a new model: A new model would not inherently solve out-of-scope detection unless properly trained with a None intent.
D. Create a prebuilt task entity: Entities identify specific data (like dates or products) within valid intents, not irrelevant utterances.
Thus, the correct approach to ensure that your CLU model can detect utterances outside its intended scope is to add examples of unrelated or off-topic phrases to the None intent. This improves classification accuracy and prevents incorrect intent matches.
✅ Correct Answer: C. Add utterances to the None intent
You have an app that identifies birds in images. The app performs the following tasks:
* Identifies the location of the birds in the image
* Identifies the species of the birds in the image
Which type of computer vision does each task use? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.



According to the Microsoft Azure AI Fundamentals (AI-900) Official Study Guide and the Microsoft Learn module “Explore computer vision in Microsoft Azure,” there are multiple types of computer vision tasks, each designed for different goals such as recognizing, categorizing, or locating objects within an image.
In this question, the application performs two distinct tasks: locating birds within an image and identifying their species. Each of these corresponds to a different type of computer vision capability.
Locate the birds → Object detection
Object detection is used when an AI system needs to identify and locate multiple objects within a single image.
It not only recognizes what the object is but also provides bounding boxes that indicate the exact position of each object.
In this scenario, locating the birds (drawing rectangles around each bird) is achieved through object detection models, such as those available in the Azure Custom Vision Object Detection domain.
Identify the species of the birds → Image classification
Image classification focuses on identifying what is in the image rather than where it is.
It assigns a single label (or multiple labels in multilabel classification) to an entire image based on its contents.
In this case, determining the species of a bird (e.g., robin, eagle, parrot) is achieved through image classification, where the model compares visual features against learned patterns from training data.
Incorrect options:
Automated captioning generates descriptive sentences about an image, not object locations or classifications.
Optical character recognition (OCR) extracts text from images, irrelevant in this case.
You have an Azure Machine Learning model that predicts product quality. The model has a training dataset that contains 50,000 records. A sample of the data is shown in the following table.

For each of the following Statements, select Yes if the statement is true. Otherwise, select No.
NOTE: Each correct selection is worth one point.



This question tests the understanding of features and labels in machine learning, a core concept covered in the Microsoft Azure AI Fundamentals (AI-900) syllabus under “Describe fundamental principles of machine learning on Azure.”
In supervised machine learning, data is divided into features (inputs) and labels (outputs).
Features are the independent variables — measurable properties or characteristics used by the model to make predictions.
Labels are the dependent variables — the target outcome the model is trained to predict.
From the provided dataset, the goal of the Azure Machine Learning model is to predict product quality (Pass or Fail). Therefore:
Mass (kg) is a feature – Yes. “Mass (kg)” represents an input variable used by the model to learn patterns that influence product quality. It helps the algorithm understand how variations in mass might correlate with passing or failing the quality test. Thus, it is correctly classified as a feature.
Quality Test is a label – Yes. The “Quality Test” column indicates the outcome of the manufacturing process, marked as either Pass or Fail. This is the target the model tries to predict during training. In Azure ML terminology, this column is the label, as it represents the dependent variable.
Temperature (C) is a label – No. “Temperature (C)” is an input that helps the model determine quality outcomes, not the outcome itself. It influences the quality result but is not the value being predicted. Therefore, temperature is another feature, not a label.
In conclusion, per Microsoft Learn and AI-900 study materials, features are measurable inputs (like mass and temperature), while the label is the target output (like the quality test result).
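In code terms, the feature/label distinction is just a choice of columns: which ones feed the model and which one it predicts. A toy sketch with made-up rows shaped like the exhibit (the values are hypothetical, not from the table):

```python
# Toy rows shaped like the exhibit: two feature columns and one label column.
rows = [
    {"Mass (kg)": 1.2, "Temperature (C)": 41, "Quality Test": "Pass"},
    {"Mass (kg)": 1.9, "Temperature (C)": 66, "Quality Test": "Fail"},
    {"Mass (kg)": 1.4, "Temperature (C)": 44, "Quality Test": "Pass"},
]

FEATURES = ["Mass (kg)", "Temperature (C)"]  # inputs the model learns from
LABEL = "Quality Test"                       # output the model is trained to predict

X = [[row[f] for f in FEATURES] for row in rows]  # feature matrix
y = [row[LABEL] for row in rows]                  # label vector

print(X)  # [[1.2, 41], [1.9, 66], [1.4, 44]]
print(y)  # ['Pass', 'Fail', 'Pass']
```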
For each of the following statements, select Yes if the statement is true. Otherwise, select No.
NOTE: Each correct selection is worth one point.



“The Azure OpenAI GPT-3.5 Turbo model can transcribe speech to text.” — No. This statement is false. The GPT-3.5 Turbo model is a text-based large language model (LLM) designed for natural language understanding and generation, such as answering questions, summarizing text, or writing content. It does not process or transcribe audio input. Speech-to-text capabilities belong to Azure AI Speech Services, specifically the Speech-to-Text API, not Azure OpenAI.
“The Azure OpenAI DALL-E model generates images based on text prompts.” — Yes. This statement is true. The DALL-E model, available within Azure OpenAI Service, is a generative AI model that creates original images from natural language descriptions (text prompts). For example, given a prompt like “a futuristic city at sunset,” DALL-E generates a unique, high-quality image representing that concept. This aligns with generative AI workloads in the AI-900 study guide, where DALL-E is specifically mentioned as an image-generation model.
“The Azure OpenAI embeddings model can convert text into numerical vectors based on text similarities.” — Yes. This statement is also true. The embeddings model in Azure OpenAI converts text into multi-dimensional numeric vectors that represent semantic meaning. These embeddings enable tasks such as semantic search, recommendations, and text clustering by comparing similarity scores between vectors. Words or phrases with similar meanings have vectors close together in the embedding space.
In summary:
GPT-3.5 Turbo → Text generation (not speech-to-text)
DALL-E → Image generation from text prompts
Embeddings → Convert text into numerical semantic representations
Correct selections: No, Yes, Yes.
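The "vectors close together" idea reduces to a similarity score between lists of numbers. A minimal cosine-similarity sketch with made-up three-dimensional vectors (real embeddings have hundreds or thousands of dimensions):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: close to 1.0 means
    similar direction (similar meaning), near 0.0 means unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Made-up 3-dimensional "embeddings"; real models return far larger vectors.
cat = [0.9, 0.1, 0.0]
kitten = [0.85, 0.15, 0.05]
invoice = [0.0, 0.2, 0.95]

print(cosine_similarity(cat, kitten))   # close to 1.0: similar meaning
print(cosine_similarity(cat, invoice))  # much lower: unrelated meaning
```

Semantic search works by embedding the query, then ranking documents by exactly this kind of score against their stored vectors.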
You need to identify street names based on street signs in photographs.
Which type of computer vision should you use?
object detection
optical character recognition (OCR)
image classification
facial recognition
According to the Microsoft Azure AI Fundamentals (AI-900) official study guide and Microsoft Learn module “Describe features of computer vision workloads on Azure”, Optical Character Recognition (OCR) is a core computer vision workload that enables AI systems to detect and extract text from images or scanned documents.
In this scenario, the goal is to identify street names from street signs in photographs. Since the text is embedded within images, OCR is the correct technology to use. OCR works by analyzing the visual patterns of letters, numbers, and symbols, then converting them into machine-readable text. Azure’s Computer Vision API and Azure AI Vision Service provide OCR capabilities that can extract printed or handwritten text from pictures, documents, and even real-time camera feeds.
Let’s analyze the other options:
A. Object detection: Identifies and locates objects (like cars, people, or street signs) but not the text written on them.
C. Image classification: Classifies an entire image into categories (e.g., “street scene” or “traffic sign”) but doesn’t extract text content.
D. Facial recognition: Identifies or verifies people by analyzing facial features, unrelated to text extraction.
Therefore, identifying street names on street signs is a text extraction problem, making Optical Character Recognition (OCR) the most accurate and verified answer per Microsoft Learn content.
Which type of natural language processing (NLP) entity is used to identify a phone number?
regular expression
machine-learned
list
Pattern.any
In Natural Language Processing (NLP), entities are pieces of information extracted from text, such as names, locations, or phone numbers. According to the Microsoft Learn module “Explore natural language processing in Azure,” Azure’s Language Understanding (LUIS) supports several entity types:
Machine-learned entities – Automatically learned based on context in training data.
List entities – Used for predefined, limited sets of values (e.g., colors or product names).
Pattern.any entities – Capture flexible, unstructured phrases in user input.
Regular expression entities – Use regex patterns to match specific data formats such as phone numbers, postal codes, or dates.
A regular expression is ideal for recognizing phone numbers because phone numbers follow specific numeric or symbol-based patterns (e.g., (555)-123-4567 or +1 212 555 0199). By defining a regex pattern, the AI model can accurately extract phone numbers regardless of text context.
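The kind of regex such an entity wraps can be sketched directly. The pattern below is a simplified illustration covering only the two formats mentioned above, not a production phone matcher:

```python
import re

# Simplified pattern: matches (555)-123-4567 style and +1 212 555 0199
# style numbers only. Real phone regexes handle far more variants.
PHONE = re.compile(r"\(\d{3}\)-\d{3}-\d{4}|\+\d{1,2}(?: \d{3,4}){3}")

text = "Call (555)-123-4567 or +1 212 555 0199 before noon."
print(PHONE.findall(text))  # ['(555)-123-4567', '+1 212 555 0199']
```

Because the format is fixed and mechanical, no training examples are needed, which is exactly why a regular expression entity beats a machine-learned entity for this kind of data.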
You need to provide customers with the ability to query the status of orders by using phones, social media, or digital assistants.
What should you use?
Azure AI Bot Service
the Azure AI Translator service
an Azure AI Document Intelligence model
an Azure Machine Learning model
According to the Microsoft Azure AI Fundamentals (AI-900) official study guide and Microsoft Learn module “Identify Azure services for conversational AI,” the Azure AI Bot Service is specifically designed to create intelligent conversational agents (chatbots) that can interact with users across multiple communication channels, such as web chat, social media, phone calls, Microsoft Teams, and digital assistants.
In this scenario, customers need the ability to query the status of their orders through various interfaces — including voice and text platforms. Azure AI Bot Service enables this by integrating with Azure AI Language (for understanding natural language), Azure Speech (for speech-to-text and text-to-speech capabilities), and Azure Communication Services (for telephony or chat integration).
The bot can interpret user input like “Where is my order?” or “Check my delivery status,” call backend systems (such as an order database or API), and then respond appropriately to the user through the same communication channel.
Let’s analyze the incorrect options:
B. Azure AI Translator Service: Used for real-time text translation between languages; it doesn’t handle conversation logic or database queries.
C. Azure AI Document Intelligence model: Extracts data from structured and semi-structured documents (e.g., invoices, receipts), not user queries.
D. Azure Machine Learning model: Builds and deploys predictive models, but doesn’t provide conversational or multi-channel interaction capabilities.
Thus, for enabling multi-channel conversational experiences where customers can inquire about order statuses using voice, chat, or digital assistants, the most appropriate solution is Azure AI Bot Service, as outlined in Azure’s AI conversational workload documentation.
Which service should you use to extract text, key/value pairs, and table data automatically from scanned documents?
Azure AI Custom Vision
Azure AI Document Intelligence
Azure AI Language
Azure AI Face
Accelerate your business processes by automating information extraction. Form Recognizer (now Azure AI Document Intelligence) applies advanced machine learning to accurately extract text, key/value pairs, and tables from documents. With just a few samples, Form Recognizer tailors its understanding to your documents, both on-premises and in the cloud. Turn forms into usable data at a fraction of the time and cost, so you can focus more time acting on the information rather than compiling it.
Which two scenarios are examples of a conversational AI workload? Each correct answer presents a complete solution.
NOTE: Each correct selection is worth one point.
a smart device in the home that responds to questions such as “What will the weather be like today?”
a website that uses a knowledge base to interactively respond to users’ questions
assembly line machinery that autonomously inserts headlamps into cars
monitoring the temperature of machinery to turn on a fan when the temperature reaches a specific threshold
Conversational AI workloads involve human-like dialogue with AI systems.
A: A smart assistant (e.g., smart speaker) uses voice-based conversational AI.
B: A knowledge-based chatbot interacts with users via natural language.
Options C and D describe automation/IoT workloads, not conversational AI.
✅ Final Answer (Q110): A and B
Which two resources can you use to analyze code and generate explanations of code function and code comments? Each correct answer presents a complete solution.
NOTE: Each correct answer is worth one point.
the Azure OpenAI DALL-E model
the Azure OpenAI Whisper model
the Azure OpenAI GPT-4 model
the GitHub Copilot service
The correct answers are C. the Azure OpenAI GPT-4 model and D. the GitHub Copilot service.
According to the Microsoft Azure AI Fundamentals (AI-900) curriculum and Microsoft Learn documentation on Azure OpenAI and GitHub Copilot, both GPT-4 and GitHub Copilot can be used to analyze and generate explanations for code functionality, as well as produce or refine code comments.
Azure OpenAI GPT-4 model (C):The GPT-4 model is a large language model (LLM) developed by OpenAI and available through the Azure OpenAI Service. It is trained on vast amounts of text, including programming languages, documentation, and natural language instructions. This enables it to interpret source code, explain what it does, suggest optimizations, and automatically generate detailed code comments. When prompted with code snippets, GPT-4 can provide structured natural language explanations describing the logic and intent of the code. In enterprise scenarios, developers use Azure OpenAI GPT models for code understanding, review automation, and documentation generation.
GitHub Copilot service (D):GitHub Copilot, powered by OpenAI Codex, is an AI coding assistant integrated into IDEs such as Visual Studio Code. It can analyze code context and generate inline comments and explanations in real time. GitHub Copilot understands the syntax and intent of numerous programming languages and provides intelligent suggestions or explanations directly in the developer’s environment.
The other options are not suitable:
A. DALL-E is a generative image model for creating visual content, not text or code analysis.
B. Whisper is an automatic speech recognition (ASR) model used for converting speech to text, unrelated to code interpretation.
Therefore, based on the official Azure AI and GitHub documentation, the correct and verified answers are C. Azure OpenAI GPT-4 model and D. GitHub Copilot service.
Select the answer that correctly completes the sentence.


Azure Kubernetes Service (AKS).
According to the Microsoft Azure AI Fundamentals (AI-900) official study guide and Microsoft Learn documentation on Azure Machine Learning, the Azure Kubernetes Service is commonly used to host and deploy machine learning models, including Automated ML models, into production environments. Once a model is trained using Azure Machine Learning (Azure ML), it must be deployed as a web service endpoint so it can receive data and return predictions.
Azure ML offers two primary options for hosting and deploying models:
Azure Kubernetes Service (AKS) – for high-scale, production-grade deployments that require fast response times, high availability, and scalability.
Azure Container Instances (ACI) – for testing or low-scale workloads where cost and simplicity are more important than performance.
AKS provides a managed Kubernetes cluster that allows for automated container orchestration, load balancing, scaling, and monitoring of deployed machine learning models. When you use Automated ML in Azure ML Studio, the generated model can be containerized and deployed directly to AKS, making it accessible as a REST API endpoint. This enables applications, systems, or users to send data and receive predictions in real time.
The other options serve different purposes:
Azure Data Factory is used for data integration and pipeline orchestration, not model hosting.
Azure Automation focuses on automating administrative tasks and runbooks, not ML deployment.
Azure Logic Apps is used to automate workflows and integrate services, not to serve ML models.
Therefore, the correct service to host automated machine learning (AutoML) models in production is Azure Kubernetes Service (AKS), as it provides a reliable, scalable, and secure environment for real-time inference and enterprise AI workloads.
You need to develop a mobile app for employees to scan and store their expenses while travelling.
Which type of computer vision should you use?
semantic segmentation
image classification
object detection
optical character recognition (OCR)
According to the Microsoft Azure AI Fundamentals (AI-900) official study guide and Microsoft Learn module “Explore computer vision”, Optical Character Recognition (OCR) is a form of computer vision that enables a system to detect and extract printed or handwritten text from images or documents. OCR is particularly useful in scenarios where the goal is to digitize textual information from physical documents, such as receipts, invoices, or travel expense forms — exactly as described in this question.
In the given scenario, employees need a mobile application that allows them to scan and store expenses while traveling. The process involves taking photos of receipts that contain printed text, such as vendor names, totals, dates, and item descriptions. The OCR technology automatically detects the text areas within the image and converts them into machine-readable and searchable data that can be stored in a database or processed further for expense management.
Microsoft’s Azure Cognitive Services include the Computer Vision API and the Form Recognizer service, both of which use OCR technology. The Form Recognizer builds upon OCR by adding intelligent document understanding, enabling it to extract structured data from expense receipts automatically.
Other answer options are incorrect for the following reasons:
A. Semantic segmentation assigns labels to every pixel in an image, typically used in autonomous driving or medical imaging, not for text extraction.
B. Image classification identifies the overall category of an image (e.g., “This is a receipt”), but it does not extract the textual content.
C. Object detection identifies and locates objects in an image with bounding boxes but is not used for text reading or conversion.
Therefore, based on the official AI-900 training and Microsoft Learn content, the correct answer is D. Optical Character Recognition (OCR) — the technology that enables extracting textual information from scanned expense receipts.
Your company is exploring the use of voice recognition technologies in its smart home devices. The company wants to identify any barriers that might unintentionally leave out specific user groups.
This is an example of which Microsoft guiding principle for responsible AI?
accountability
fairness
inclusiveness
privacy and security
According to the Microsoft Azure AI Fundamentals (AI-900) Official Study Guide and the Microsoft Responsible AI Framework, Inclusiveness is one of the six guiding principles for responsible AI. The principle of inclusiveness ensures that AI systems are designed to empower everyone and engage people of all abilities. Microsoft emphasizes that inclusive AI systems must be developed with awareness of potential barriers that could unintentionally exclude certain user groups. This directly aligns with the scenario described—where the company is examining voice recognition technologies in smart home devices to identify barriers that might leave out users, such as those with speech impairments, accents, or language differences.
The official Microsoft Learn module “Identify guiding principles for responsible AI” explains that inclusiveness focuses on creating systems that can understand and serve users with diverse needs. For example, voice recognition models should account for variations in dialect, tone, accent, and speech patterns to ensure equitable access for all. A lack of inclusiveness could cause bias or misrecognition for underrepresented groups, leading to unintentional exclusion.
Microsoft’s guidance further stresses that designing for inclusiveness involves involving diverse users in the data collection and testing phases, conducting accessibility assessments, and continuously improving model performance across different demographic groups. In this way, inclusiveness promotes fairness, accessibility, and usability across cultural and physical differences.
In contrast:
A. Accountability is about ensuring humans are responsible for AI outcomes.
B. Fairness focuses on preventing bias and discrimination in data or algorithms.
D. Privacy and security ensure protection of personal data and secure handling of information.
Thus, evaluating potential barriers that could exclude specific user groups exemplifies Inclusiveness, as it demonstrates a proactive approach to making AI accessible and beneficial for all users.
What is an example of a regression model in machine learning?
dividing the student data in a dataset based on the age of the students and their educational achievements
identifying subtypes of spam email by examining a large collection of emails that were flagged by users
predicting the sale price of a house based on historical data, the size of the house, and the number of bedrooms in the house
identifying population counts of endangered animals by analyzing images
The correct answer is C. Predicting the sale price of a house based on historical data, the size of the house, and the number of bedrooms.
In machine learning, regression is a supervised learning technique used to predict continuous numeric values. Microsoft’s AI-900 study guide defines regression models as those that estimate relationships between variables—predicting a continuous outcome variable from one or more input features.
In this case, the house sale price is a continuous numeric value, and inputs such as size, location, and number of bedrooms are the features. Common regression algorithms include linear regression, decision tree regression, and boosted regression.
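The regression idea described above can be illustrated with a minimal least-squares fit; the house sizes and prices below are invented sample data for demonstration only, not taken from any real dataset.

```python
# Minimal illustration of regression: fit a line relating house size
# (square metres) to sale price using ordinary least squares.
# All numbers below are invented purely for demonstration.

def fit_linear(xs, ys):
    """Return slope and intercept minimising squared error."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    intercept = mean_y - slope * mean_x
    return slope, intercept

sizes = [50, 80, 100, 120, 150]        # input feature
prices = [150, 240, 300, 360, 450]     # continuous target (thousands)

slope, intercept = fit_linear(sizes, prices)
predicted = slope * 110 + intercept    # predict price for a 110 m² house
print(round(predicted))                # a continuous numeric prediction
```

The key point the example makes: the output is a continuous number on a scale, not a category label, which is what distinguishes regression from classification.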
Other options represent different ML workloads:
A involves segmentation by categories (classification or clustering).
B represents clustering, grouping similar items without predefined labels.
D represents computer vision, counting animals in images rather than predicting a numeric value.
Hence, the verified answer is C. Regression.
Match the types of natural language processing workloads to the appropriate scenarios.
To answer, drag the appropriate workload type from the column on the left to its scenario on the right. Each workload type may be used once, more than once, or not at all.
NOTE: Each correct selection is worth one point.


According to the Microsoft Azure AI Fundamentals (AI-900) official study guide and the Microsoft Learn module “Identify features of Natural Language Processing (NLP) workloads on Azure”, Azure Cognitive Services provides several text analytics and language understanding workloads that perform different types of language processing tasks. Each workload extracts specific information or performs distinct analysis operations on text data.
Entity Recognition → Extracts persons, locations, and organizations from the text. Entity recognition is a feature of Azure Cognitive Service for Language (formerly Text Analytics). It identifies and categorizes named entities in unstructured text, such as people, organizations, locations, dates, and more. The study guide defines this workload as: “Entity recognition locates and classifies named entities in text into predefined categories.” This allows applications to extract structured information from raw text data—for example, identifying “Microsoft” as an organization and “Seattle” as a location.
Sentiment Analysis → Evaluates text along a positive–negative scale. Sentiment analysis determines the emotional tone or opinion expressed in a piece of text. It classifies text as positive, negative, neutral, or mixed, which is widely used for social media monitoring, customer feedback, and product reviews. Microsoft’s official documentation describes it as: “Sentiment analysis evaluates text and returns a sentiment score indicating whether the sentiment is positive, negative, neutral, or mixed.”
Translation → Returns text translated to the specified target language. The Translator service, part of Azure Cognitive Services, automatically translates text from one language to another. It supports multiple languages and provides near real-time translation. The AI-900 content specifies that “translation workloads are used to automatically translate text between languages using machine translation models.”
In summary:
Entity Recognition → Extracts entities like names and locations.
Sentiment Analysis → Determines emotional tone.
Translation → Converts text between languages.
✅ Final Answers:
Extracts persons, locations, and organizations → Entity recognition
Evaluates text along a positive–negative scale → Sentiment analysis
Returns text translated to the specified target language → Translation
For each of the following statements, select Yes if the statement is true. Otherwise, select No.
NOTE: Each correct selection is worth one point.



The Azure AI Language service (part of Azure Cognitive Services) provides a set of natural language processing (NLP) capabilities designed to analyze and interpret text data. Its core features include language detection, key phrase extraction, sentiment analysis, and named entity recognition (NER).
Language Identification – YES. According to the Microsoft Learn module “Analyze text with Azure AI Language,” one of the service’s built-in capabilities is language detection, which determines the language of a given text string (e.g., English, Spanish, or French). This allows applications to automatically adapt to multilingual input.
Handwritten Signature Detection – NO. The Azure AI Language service only processes text-based data; it does not analyze images or handwriting. Detecting handwritten signatures requires computer vision capabilities, specifically Azure AI Vision or Azure AI Document Intelligence, which can extract and interpret visual content from scanned documents or images.
Identifying Companies and Organizations – YES. The Named Entity Recognition (NER) feature within Azure AI Language can identify entities such as people, locations, dates, organizations, and companies mentioned in text. It tags these entities with categories, enabling structured analysis of unstructured data.
✅ Summary:
Language detection → Yes (supported by AI Language).
Handwritten signatures → No (requires Computer Vision).
Entity recognition for companies/organizations → Yes (supported by AI Language NER).
Which three actions improve the quality of responses returned by a generative AI solution that uses GPT-3.5? Each correct answer presents a complete solution.
NOTE: Each correct answer is worth one point.
Add grounding data to prompts.
Provide additional examples to prompts.
Modify tokenization.
Add training data to prompts.
Modify system messages.
To improve the quality and relevance of responses generated by a generative AI solution using GPT-3.5, the following three actions are emphasized in the Microsoft Learn: Azure OpenAI Service best practices and AI-900/AI-102 training materials:
(A) Add grounding data to prompts: Grounding ensures that the model’s output is based on factual, domain-specific information. By adding context or external knowledge sources, responses become more accurate and aligned with the organization’s data rather than relying on the model’s general training corpus.
(B) Provide additional examples to prompts: Also known as few-shot prompting, this method demonstrates desired response patterns by including examples in the prompt. This significantly improves output quality, consistency, and adherence to desired formats.
(E) Modify system messages: In Azure OpenAI chat completions, system messages define the model’s behavior, style, and tone. Adjusting system messages allows fine-tuning of the model’s response quality, ensuring it follows context or persona guidelines.
The remaining options are incorrect:
(C) Modify tokenization is a low-level text-processing technique not used to improve model response quality directly.
(D) Add training data to prompts is not possible at runtime since GPT-3.5 models are pre-trained; only prompt engineering can influence output behavior.
Therefore, based on Azure OpenAI and AI-900 guidance, the three best ways to enhance generative AI response quality are adding grounding data to prompts, providing additional examples, and modifying system messages.
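As an illustrative sketch (not an official API sample), the three techniques map naturally onto the message list a chat-completions endpoint receives; the support-FAQ content and the helper name `build_messages` are invented for demonstration.

```python
# Sketch of the prompt-engineering techniques discussed above, shown as
# the message list a chat-completions call would receive. All content
# strings are invented for illustration.

def build_messages(grounding, examples, question):
    messages = [
        # (E) System message: sets the assistant's behaviour and constraints.
        {"role": "system",
         "content": "You are a support assistant. Answer only from the "
                    "provided context. " + grounding},  # (A) grounding data
    ]
    # (B) Few-shot examples demonstrating the desired answer format.
    for q, a in examples:
        messages.append({"role": "user", "content": q})
        messages.append({"role": "assistant", "content": a})
    messages.append({"role": "user", "content": question})
    return messages

msgs = build_messages(
    grounding="Context: returns are accepted within 30 days.",
    examples=[("Can I pay by card?", "Yes, all major cards are accepted.")],
    question="What is the return window?",
)
print(len(msgs))  # system + one example pair + final question = 4
```

Note that all three levers operate purely at inference time through the prompt; none of them changes the model's pre-trained weights, which is exactly why option D ("add training data to prompts") is not a real mechanism.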
A medical research project uses a large anonymized dataset of brain scan images that are categorized into predefined brain haemorrhage types.
You need to use machine learning to support early detection of the different brain haemorrhage types in the images before the images are reviewed by a person.
This is an example of which type of machine learning?
clustering
regression
classification
According to the Microsoft Azure AI Fundamentals (AI-900) official study guide and Microsoft Learn module “Identify features of classification machine learning”, classification is a supervised machine learning technique used when the output variable represents discrete categories or classes. In this case, the brain scan images are already labeled into predefined haemorrhage types, such as “subarachnoid,” “epidural,” or “intraventricular.” The model’s goal is to learn patterns from labeled examples and then predict the correct class for new, unseen images.
The use of categorized brain scan images clearly indicates a supervised learning setup because both the input (image data) and output (haemorrhage type) are known during training. This aligns with Microsoft’s definition: classification problems “predict which category or class an item belongs to,” often using algorithms such as logistic regression, decision trees, neural networks, or convolutional neural networks (CNNs) for image-based data.
In contrast:
A. Clustering is an unsupervised learning approach that groups data into clusters based on similarity when no predefined labels exist.
B. Regression predicts continuous numeric values (e.g., predicting age or temperature), not categories.
Because this project aims to automatically classify medical images into known diagnostic categories, it is a textbook example of classification.
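As a toy illustration of the supervised setup described above, a minimal nearest-centroid classifier learns from labeled examples and then assigns a class to a new input. The two-element feature vectors and labels below are invented stand-ins for real image-derived features, not actual scan data.

```python
# Toy supervised classification: learn from labeled examples, then
# predict the class of new input. Feature values are invented stand-ins
# for image-derived features; no real medical data is involved.

def centroids(training):
    """Average the feature vectors of each labeled class."""
    sums, counts = {}, {}
    for features, label in training:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, v in enumerate(features):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {lab: [v / counts[lab] for v in acc]
            for lab, acc in sums.items()}

def classify(features, cents):
    """Pick the class whose centroid is nearest (Euclidean distance)."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(features, c)) ** 0.5
    return min(cents, key=lambda lab: dist(cents[lab]))

labeled = [([0.9, 0.1], "epidural"), ([0.8, 0.2], "epidural"),
           ([0.1, 0.9], "subarachnoid"), ([0.2, 0.8], "subarachnoid")]
model = centroids(labeled)
print(classify([0.85, 0.15], model))  # predicted class for a new "scan"
```

The defining trait this sketch shows is that the labels exist before training, which is what makes the scenario classification rather than clustering.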
Match the machine learning models to the appropriate descriptions.
To answer, drag the appropriate model from the column on the left to its description on the right. Each model may be used once, more than once, or not at all.
NOTE: Each correct match is worth one point.



According to the Microsoft Azure AI Fundamentals (AI-900) official study guide and the Microsoft Learn module “Identify features of common machine learning types”, the three main machine learning model types differ by their purpose and the kind of data they use — whether supervised (using labeled data) or unsupervised (using unlabeled data).
Regression → A supervised machine learning model used to predict numeric values. Regression is a type of supervised learning that predicts continuous numerical outcomes. It learns the relationship between input features (independent variables) and a continuous target variable (dependent variable). Examples include predicting house prices, sales revenue, or temperature. The AI-900 curriculum emphasizes regression for “predicting numeric values based on known data,” using algorithms such as linear regression or decision tree regression.
Classification → A supervised machine learning model used to predict categories. Classification is also a supervised learning technique, but it predicts discrete outcomes (classes) instead of continuous values. It assigns input data to one or more categories based on learned patterns. Typical examples include spam detection (spam vs. not spam), sentiment analysis (positive, neutral, negative), or predicting loan approval (approved/denied). The AI-900 study materials describe classification as “predicting a category or label for new observations.”
Clustering → An unsupervised machine learning model used to group similar entities based on features. Clustering is an unsupervised learning approach, meaning it works on unlabeled data. It automatically identifies patterns and groups similar data points into clusters based on shared characteristics. Examples include customer segmentation (grouping customers by behavior) and grouping similar documents. The AI-900 learning module explains clustering as “discovering natural groupings in data without predefined labels.”
Thus, per Microsoft’s official AI-900 learning objectives:
Regression → Predicts numeric/continuous values.
Classification → Predicts categories/labels.
Clustering → Groups similar entities (unsupervised).
✅ Final Verified Configuration:
Regression → Predict numeric values
Classification → Predict categories
Clustering → Group similar entities based on features
To complete the sentence, select the appropriate option in the answer area.


facial analysis.
According to the Microsoft Azure AI Fundamentals (AI-900) official study guide and the Microsoft Learn module “Describe features of computer vision workloads on Azure,” facial analysis is a computer vision capability that detects faces and extracts attributes such as facial expressions, emotions, pose, occlusion, and image quality factors like exposure and noise. It does not identify or verify individual identities; rather, it interprets facial features and image characteristics to analyze conditions in an image.
In this question, the AI solution helps photographers take better portrait photos by providing feedback on exposure, noise, and occlusion — tasks directly linked to facial analysis. The model analyzes the detected face to determine if the image is well-lit, clear, and unobstructed, thereby improving photo quality. These capabilities are part of the Azure Face service in Azure Cognitive Services, which includes both facial detection and facial analysis functionalities.
Here’s how the other options differ:
Facial detection only identifies that a face exists in an image and provides its location using bounding boxes, without further interpretation.
Facial recognition goes a step further — it attempts to identify or verify a person’s identity by comparing the detected face with stored images. This is not what the scenario describes.
Thus, when an AI solution evaluates image quality aspects like exposure, noise, and occlusion, it’s performing facial analysis, which focuses on understanding image and facial characteristics rather than identification.
In summary, based on Microsoft’s AI-900 study material, this scenario demonstrates facial analysis, a subcategory of computer vision tasks within Azure Cognitive Services.
What is a form of unsupervised machine learning?
multiclass classification
clustering
binary classification
regression
As outlined in the AI-900 study guide and Microsoft Learn’s “Explore fundamental principles of machine learning” module, clustering is a core example of unsupervised machine learning.
In unsupervised learning, the model is trained on data without labeled outcomes. The goal is to discover patterns or groupings naturally present in the data. Clustering algorithms, such as K-means, DBSCAN, or Hierarchical clustering, analyze similarities among data points and group them into clusters. For example, clustering can group customers by purchasing behavior or segment products by shared characteristics — all without predefined labels.
Supervised learning, by contrast, uses labeled data (input-output pairs) to train a model that predicts outcomes. This includes:
A. Multiclass classification – Predicts more than two categories (e.g., classifying images as dog, cat, or bird).
C. Binary classification – Predicts two categories (e.g., spam vs. not spam).
D. Regression – Predicts continuous numeric values (e.g., price prediction).
Therefore, the only option representing unsupervised learning is clustering, which enables data discovery without predefined labels.
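The clustering idea can be sketched with a minimal K-means on one-dimensional data. This is a simplified illustration, not the implementation any Azure service uses, and the monthly-spend figures are invented.

```python
import random

def kmeans_1d(points, k, iters=20, seed=0):
    """Tiny K-means on 1-D data: group unlabeled values into k clusters."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        # Assign each point to its nearest centroid (no labels needed).
        clusters = [[] for _ in range(k)]
        for p in points:
            idx = min(range(k), key=lambda i: abs(p - centroids[i]))
            clusters[idx].append(p)
        # Move each centroid to the mean of its assigned points.
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return sorted(centroids)

# Invented monthly spend values with two obvious customer segments.
spend = [10, 12, 11, 9, 95, 100, 98, 102]
print(kmeans_1d(spend, k=2))  # centroids settle near the two natural groups
```

Notice that nothing in the input says which points belong together; the grouping emerges from the data alone, which is exactly what "unsupervised" means.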
You use drones to identify where weeds grow between rows of crops to send an instruction for the removal of the weeds. This is an example of which type of computer vision?
scene segmentation
optical character recognition (OCR)
object detection
Object detection is similar to tagging, but the API returns the bounding box coordinates for each tag applied. For example, if an image contains a dog, cat and person, the Detect operation will list those objects together with their coordinates in the image.
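To make the bounding-box point concrete for the drone scenario, the sketch below filters invented predictions shaped loosely like Custom Vision object detection output (normalized left/top/width/height values); it is an illustration, not a real API response.

```python
# Illustration of what object detection adds over tagging: each
# prediction carries bounding-box coordinates, not just a label.
# All values are invented; they only mimic the shape of a Custom Vision
# prediction (normalised left/top/width/height), not a real response.

predictions = [
    {"tag": "weed", "probability": 0.91,
     "box": {"left": 0.10, "top": 0.40, "width": 0.05, "height": 0.08}},
    {"tag": "crop", "probability": 0.97,
     "box": {"left": 0.30, "top": 0.35, "width": 0.12, "height": 0.20}},
    {"tag": "weed", "probability": 0.42,   # low confidence, filtered out
     "box": {"left": 0.55, "top": 0.60, "width": 0.04, "height": 0.06}},
]

def weed_targets(preds, threshold=0.5):
    """Return centre coordinates of confidently detected weeds."""
    targets = []
    for p in preds:
        if p["tag"] == "weed" and p["probability"] >= threshold:
            b = p["box"]
            targets.append((b["left"] + b["width"] / 2,
                            b["top"] + b["height"] / 2))
    return targets

print(weed_targets(predictions))  # one confident weed → one removal target
```

The coordinates are what let the drone act on the result; plain image classification or tagging would only say "this image contains weeds" without saying where.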
To complete the sentence, select the appropriate option in the answer area.



According to the Microsoft Azure AI Fundamentals (AI-900) Official Study Guide and the Microsoft Learn module “Explore natural language processing (NLP) in Azure”, Natural Language Processing (NLP) is a branch of artificial intelligence that focuses on enabling computers to understand, interpret, and generate human language. NLP is used to extract meaning and intent from text or speech, perform sentiment analysis, identify entities, and classify content based on context.
One of the primary applications of NLP is text classification, where an AI model automatically categorizes text documents or messages into predefined classes. Classifying emails as work-related or personal is a textbook example of this NLP capability. It involves analyzing the words, phrases, and structure of the text to determine the email’s category. Microsoft Learn highlights this type of problem as document classification, an essential NLP use case often implemented through Azure Cognitive Services such as Text Analytics or Language Studio.
Let’s examine why the other options are incorrect:
Predict the number of future car rentals – This is a time series forecasting or regression task, not NLP.
Predict which website visitors will make a transaction – This is a predictive analytics or machine learning classification problem based on behavioral data, not language understanding.
Stop a process in a factory when extremely high temperatures are registered – This relates to IoT automation or sensor-based anomaly detection, not NLP.
Therefore, based on Microsoft’s AI-900 materials, Natural Language Processing is best used for tasks involving understanding and classifying text, such as classifying email messages as work-related or personal. This example perfectly aligns with NLP’s goal—to enable machines to process and derive insights from human language inputs.
To complete the sentence, select the appropriate option in the answer area.



Accelerate your business processes by automating information extraction. Form Recognizer applies advanced machine learning to accurately extract text, key/value pairs, and tables from documents. With just a few samples, Form Recognizer tailors its understanding to your documents, both on-premises and in the cloud. Turn forms into usable data at a fraction of the time and cost, so you can focus more time acting on the information rather than compiling it.
You have an app that identifies the coordinates of a product in an image of a supermarket shelf.
Which service does the app use?
Azure AI Custom Vision object detection
Azure AI Computer Vision Read
Azure AI Computer Vision optical character recognition (OCR)
Azure AI Custom Vision classification
The described app identifies the coordinates of a product within an image of a supermarket shelf. This scenario directly corresponds to the object detection capability of Azure AI Custom Vision. As per the Microsoft Learn module “Train a Custom Vision model”, the object detection project type allows developers to train models that can both detect and locate objects within an image. It returns bounding box coordinates along with predicted labels for each detected item.
In this use case, the app doesn’t just classify what products are present—it needs the position of the product (coordinates). That function distinguishes object detection from classification. Classification simply assigns a label to the entire image, while object detection provides spatial information for multiple items in one image.
The other options are incorrect:
B. Azure AI Computer Vision Read and C. Azure AI Computer Vision OCR are used for extracting text from images, not locating objects.
D. Azure AI Custom Vision classification only categorizes images but cannot determine where objects appear.
Therefore, to build an app that finds and locates products in images, the correct choice is A. Azure AI Custom Vision object detection.
Match the principles of responsible AI to the appropriate descriptions.
To answer, drag the appropriate principle from the column on the left to its description on the right. Each principle may be used once, more than once, or not at all.
NOTE: Each correct match is worth one point.



The correct answers are derived from the Microsoft Azure AI Fundamentals (AI-900) Official Study Guide and the Microsoft Learn module “Identify guiding principles for responsible AI.”
Microsoft defines six core principles of Responsible AI:
Fairness
Reliability and safety
Privacy and security
Inclusiveness
Transparency
Accountability
Each principle addresses a key ethical and operational requirement for developing and deploying trustworthy AI systems.
Reliability and safety – “AI systems must consistently operate as intended, even under unexpected conditions.” This principle ensures that AI models are dependable, robust, and perform accurately under diverse circumstances. Microsoft emphasizes that systems should be thoroughly tested and monitored to guarantee predictable behavior, prevent harm, and maintain safety. A reliable AI solution should continue to function properly when faced with unusual or noisy inputs, and fail safely when issues arise. This principle focuses on stability, testing, and dependable performance.
Privacy and security – “AI systems must protect and secure personal and business information.” This principle ensures that AI systems comply with data privacy laws and ethical standards. It protects users’ sensitive data against unauthorized access and misuse. Microsoft highlights that organizations must implement strong encryption, data anonymization, and access control mechanisms to maintain confidentiality. Protecting user data is essential to building trust and compliance with global standards like GDPR.
Other principles such as fairness and inclusiveness apply to ensuring equitable and accessible AI, but they do not directly relate to system operation or information protection.
✅ Final Answers:
“Operate as intended” → Reliability and safety
“Protect and secure information” → Privacy and security
You are developing a solution that uses the Text Analytics service.
You need to identify the main talking points in a collection of documents.
Which type of natural language processing should you use?
entity recognition
key phrase extraction
sentiment analysis
language detection
According to the Microsoft Azure AI Fundamentals (AI-900) learning path and Azure Text Analytics service documentation, key phrase extraction is a natural language processing (NLP) technique used to automatically identify the main topics or talking points within a text document or a collection of documents. This feature is designed to summarize textual data by detecting the most relevant words or short phrases that capture the essence of the content.
For example, if a document discusses “renewable energy sources such as solar and wind power,” the key phrases extracted might include “renewable energy,” “solar power,” and “wind power.” This helps users quickly understand the primary focus areas of large volumes of text without manual review.
In Azure, the Text Analytics service provides several core NLP capabilities, including:
Key phrase extraction – identifies main concepts or topics.
Entity recognition – detects and categorizes proper names like people, locations, or organizations.
Sentiment analysis – determines the emotional tone (positive, neutral, or negative).
Language detection – identifies the language used in the text.
Since the question specifies identifying main talking points, the correct feature is key phrase extraction, as it focuses on summarizing themes rather than identifying entities or emotions.
Therefore, the verified answer is B. key phrase extraction.
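As a practical illustration (not part of the exam content), a key phrase extraction request body can be assembled as below. The `documents` array shape follows the documented Text Analytics v3.x format, but treat this helper as a sketch rather than a definitive client:

```python
import json

def build_key_phrase_request(texts, language="en"):
    """Assemble the JSON body for a key phrase extraction request.

    The v3.x REST API expects a `documents` array in which each entry
    carries an `id`, a `language`, and the `text` to analyze.
    """
    documents = [
        {"id": str(i), "language": language, "text": t}
        for i, t in enumerate(texts, start=1)
    ]
    return json.dumps({"documents": documents})

body = build_key_phrase_request([
    "Renewable energy sources such as solar and wind power are growing fast."
])
```

The service would respond with the key phrases it found per document id, such as “renewable energy” and “wind power” for the sample text above.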
Select the answer that correctly completes the sentence.



The correct completion of the sentence is:
“The interactive answering of questions entered by a user as part of an application is an example of natural language processing.”
According to the Microsoft Azure AI Fundamentals (AI-900) official study materials, Natural Language Processing (NLP) is a branch of Artificial Intelligence that focuses on enabling computers to understand, interpret, and respond to human language in a way that is both meaningful and useful. It is one of the key AI workloads described in the “Describe features of common AI workloads” module on Microsoft Learn.
When a user types a question into an application and the system responds interactively — such as in a chatbot, Q & A system, or virtual assistant — this process requires language understanding. NLP allows the system to process the input text, determine user intent, extract relevant entities, and generate an appropriate response. This is the foundational capability behind services such as Azure Cognitive Service for Language, Language Understanding (LUIS), and QnA Maker (now integrated as Question Answering in the Language service).
Microsoft’s study guide explains that NLP workloads include the following key scenarios:
Language understanding: Determining intent and context from text or speech.
Text analytics: Extracting meaning, key phrases, sentiment, or named entities.
Conversational AI: Powering bots and virtual agents to interact using natural language. These systems rely on NLP models to analyze user inputs and respond accordingly.
In contrast:
Anomaly detection identifies data irregularities.
Computer vision analyzes images or video.
Forecasting predicts future values based on historical data.
Therefore, based on the AI-900 official materials, the interactive answering of user questions through an application clearly falls under Natural Language Processing (NLP).
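The interactive question-answering idea can be sketched with a toy keyword-overlap matcher. This is a crude stand-in for the intent matching a real question-answering service performs, and every name below is hypothetical:

```python
import string

def _words(text):
    """Lowercase, strip punctuation, and return the set of words."""
    cleaned = text.lower().translate(str.maketrans("", "", string.punctuation))
    return set(cleaned.split())

def best_answer(question, faq):
    """Return the stored answer whose question shares the most words with
    the user's input. A real service does far more (intent models,
    ranking, entity extraction), but the interactive shape is the same."""
    q_words = _words(question)
    best = max(faq, key=lambda entry: len(q_words & _words(entry)))
    if not q_words & _words(best):
        return "Sorry, I don't know."
    return faq[best]

faq = {
    "what are your opening hours": "We are open 9am to 5pm, Monday to Friday.",
    "how do I reset my password": "Use the 'Forgot password' link on the sign-in page.",
}
reply = best_answer("When are your opening hours?", faq)
```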
What is an example of a Microsoft responsible AI principle?
AI systems should treat people fairly.
AI systems should NOT reveal the details of their design.
AI systems should use black-box models.
AI systems should protect the interests of developers.
The correct answer is A. AI systems should treat people fairly.
This statement aligns with one of Microsoft’s six Responsible AI principles, which are:
Fairness – AI systems should treat all people fairly and avoid bias.
Reliability and Safety
Privacy and Security
Inclusiveness
Transparency
Accountability
The principle of Fairness ensures that AI models do not discriminate based on factors such as race, gender, age, or socioeconomic background. For example, a loan approval or hiring model must provide equal opportunity to all qualified applicants regardless of demographic differences.
B (Not revealing design details) contradicts Transparency, which promotes openness about AI functionality.
C (Black-box models) goes against Microsoft’s push for Explainable AI.
D (Protect developers’ interests) is not part of Microsoft’s Responsible AI framework.
Therefore, the verified correct answer is A. AI systems should treat people fairly.
In which two scenarios can you use a speech synthesis solution? Each correct answer presents a complete solution.
NOTE: Each correct selection is worth one point.
an automated voice that reads back a credit card number entered into a telephone by using a numeric keypad
generating live captions for a news broadcast
extracting key phrases from the audio recording of a meeting
an AI character in a computer game that speaks audibly to a player
According to the Microsoft Learn module “Explore speech capabilities of Azure AI” and the AI-900 Official Study Guide, speech synthesis (also known as text-to-speech) is the process of converting written text into spoken audio output. Azure’s Speech service provides this functionality, allowing applications to produce human-like voices dynamically.
Let’s evaluate each scenario:
A. Automated voice that reads back a credit card number entered into a telephone keypad → Yes. This is a classic text-to-speech (TTS) use case. The application converts numeric or textual input (such as a credit card number) into audio output that the caller hears. Azure Speech service can handle such voice responses in automated phone systems or IVR (Interactive Voice Response) setups.
B. Generating live captions for a news broadcast → No. This is a speech-to-text scenario (speech recognition), not speech synthesis. It involves converting audio speech into written text.
C. Extracting key phrases from an audio recording of a meeting → No. This involves speech-to-text followed by text analytics, not speech synthesis.
D. An AI character in a computer game that speaks audibly to a player → Yes. This is a direct example of speech synthesis, where the character’s dialog text is converted into realistic spoken output for immersive interaction.
Therefore, based on Microsoft’s AI-900 curriculum, speech synthesis is used in applications that convert text into audible speech, such as automated voice systems or interactive digital characters.
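Text-to-speech input is commonly expressed as SSML (Speech Synthesis Markup Language). A minimal sketch of building such a document follows; the voice name `en-US-JennyNeural` is illustrative, and a real deployment would pick from the service’s published voice list:

```python
from xml.sax.saxutils import escape

def build_ssml(text, voice="en-US-JennyNeural", lang="en-US"):
    """Build a minimal SSML document for a text-to-speech request.

    SSML wraps the text to speak in <speak> and <voice> elements;
    escape() keeps characters like & and < well-formed in the XML.
    """
    return (
        f'<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" '
        f'xml:lang="{lang}">'
        f'<voice name="{voice}">{escape(text)}</voice>'
        f"</speak>"
    )

ssml = build_ssml("Your card number ends in 4 3 2 1.")
```

A synthesizer receiving this document would render the enclosed text in the requested voice, which is exactly the IVR and game-character pattern described above.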
For each of the following statements, select Yes if the statement is true. Otherwise, select No.
NOTE: Each correct selection is worth one point.



According to the Microsoft Azure AI Fundamentals (AI-900) study guide and Azure Machine Learning documentation, Automated Machine Learning (AutoML) is a feature designed to help users build, train, and tune machine learning models automatically without requiring deep knowledge of programming or data science.
First Statement: “Automated machine learning provides you with the ability to include custom Python scripts in a training pipeline.” This is False (No). AutoML automates the model selection and tuning process but does not allow the inclusion of custom Python scripts within its workflow. Custom Python integration is supported in Azure Machine Learning designer pipelines or SDK-based training, not in AutoML.
Second Statement: “Automated machine learning implements machine learning solutions without the need for programming experience.” This is True (Yes). One of AutoML’s core benefits is that it enables non-programmers to train and evaluate models by simply selecting data, choosing a target column, and letting Azure automatically test algorithms and hyperparameters. This aligns with Microsoft’s AI-900 objective to democratize AI development.
Third Statement: “Automated machine learning provides you with the ability to visually connect datasets and modules on an interactive canvas.” This is False (No). That feature belongs to Azure Machine Learning Designer, not AutoML. The designer offers a drag-and-drop visual interface for connecting datasets and modules, whereas AutoML provides a wizard-driven approach focused on automation.
You need to track multiple versions of a model that was trained by using Azure Machine Learning. What should you do?
Provision an inference cluster.
Explain the model.
Register the model.
Register the training data.
According to the Microsoft Azure AI Fundamentals (AI-900) Official Study Guide and the Microsoft Learn module “Explore Azure Machine Learning,” registering a model is the correct way to track multiple versions of models in Azure Machine Learning.
When you train models in Azure Machine Learning, each trained version can be registered in the workspace’s Model Registry. Registration stores the model’s metadata, including version, training environment, parameters, and lineage. Each registration automatically increments the version number, enabling you to manage, deploy, and compare multiple model iterations efficiently.
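The auto-incrementing version behavior can be illustrated with a toy registry class. This is a hypothetical stand-in for the concept, not the Azure ML SDK:

```python
class ToyModelRegistry:
    """Toy stand-in for a model registry: registering the same model
    name again yields the next version number, so every earlier version
    stays retrievable for comparison, deployment, or rollback."""

    def __init__(self):
        self._models = {}  # model name -> list of (version, metadata)

    def register(self, name, metadata):
        versions = self._models.setdefault(name, [])
        version = len(versions) + 1
        versions.append((version, metadata))
        return version

    def get(self, name, version):
        return dict(self._models[name])[version]

registry = ToyModelRegistry()
v1 = registry.register("widget-classifier", {"accuracy": 0.91})
v2 = registry.register("widget-classifier", {"accuracy": 0.94})
```

The real Model Registry works the same way at its core: each registration under an existing name increments the version and preserves the metadata of all prior versions.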
The other options are incorrect:
A. Provision an inference cluster – Used for model deployment, not version tracking.
B. Explain the model – Provides interpretability but does not track versions.
D. Register the training data – Registers data assets, not models.
To complete the sentence, select the appropriate option in the answer area.



According to Microsoft’s Responsible AI principles, one of the six core principles is fairness, which ensures that AI systems treat all individuals equitably and that their outcomes are not influenced by biases present in the training data or algorithms. The official Microsoft Learn module “Identify the guiding principles for responsible AI” clearly defines fairness as the requirement that AI systems should not amplify or perpetuate existing societal biases.
In this scenario, the statement emphasizes that AI systems should NOT reflect biases from the datasets used to train them, which directly aligns with the fairness principle. Bias in AI models can arise when the data used for training is unbalanced or not representative of the real-world population. For instance, if a facial recognition model is trained mostly on images of one demographic group, it may perform poorly on others—an example of unfair bias. Microsoft advocates building and testing AI systems with diverse, high-quality datasets to ensure fair performance across all groups.
The other principles listed—accountability, inclusiveness, and transparency—are also important but do not directly address bias mitigation:
Accountability ensures that people remain responsible for AI systems and their decisions.
Inclusiveness promotes accessibility and usability for all people, including those with disabilities.
Transparency focuses on explaining how AI systems make decisions.
However, Fairness explicitly deals with avoiding discrimination and bias in AI outcomes and training data.
Thus, in Microsoft’s Responsible AI framework, ensuring that systems do not reflect biases from datasets is part of the Fairness principle, which promotes equitable and unbiased treatment for all individuals in AI-driven decisions.
For each of the following statements, select Yes if the statement is true. Otherwise, select No.
NOTE: Each correct selection is worth one point.


Yes, No, Yes.
According to the Microsoft Azure AI Fundamentals (AI-900) official study guide and the Microsoft Learn module “Identify capabilities of Azure Cognitive Services for Language”, the Azure Translator service is a cloud-based machine translation service used to translate text or entire documents between languages in real time. It uses REST APIs or client libraries to translate text input, detect languages, and support multiple target languages in a single request.
“The following service call will accept English text as an input and output Italian and French text: /translate?from=en&to=it,fr” – Yes. This URL format is correct because the Translator service API allows multiple target languages to be specified in a single to parameter separated by commas. In this case, from=en defines the source language (English), and to=it,fr requests translations into Italian (it) and French (fr). The API would return results in both target languages simultaneously. This syntax is officially documented in Microsoft Learn as the valid format for multi-language translation.
“The following service call will accept English text as an input and output Italian and French text: /translate?from=en&to=fr&to=it” – No. This format is incorrect, as the Translator API does not support repeating the to parameter multiple times. Only one to parameter is valid, and multiple target languages must be provided as a comma-separated list within the same to parameter.
“The Translator service can be used to translate documents from English to French.” – Yes. This statement is true. The Translator service supports both text translation and document translation. The document translation capability allows the translation of whole files such as Word, PowerPoint, or PDF documents while preserving formatting and structure. This feature is included in the official Translator API under “Document Translation.”
In summary, the AI-900 study content clarifies that:
✅ /translate?from=en&to=it,fr → Valid syntax
❌ /translate?from=en&to=fr&to=it → Invalid syntax
✅ Translator can translate full documents between languages
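The valid query-string format can be demonstrated by building it programmatically. The `/translate` base path is taken from the statements above; the helper itself is a sketch, not an official client:

```python
from urllib.parse import urlencode

def build_translate_url(source, targets, base="/translate"):
    """Build a Translator-style request path: all target languages go
    into one comma-separated `to` parameter (safe="," keeps the comma
    from being percent-encoded)."""
    query = urlencode({"from": source, "to": ",".join(targets)}, safe=",")
    return f"{base}?{query}"

url = build_translate_url("en", ["it", "fr"])
```

Building the URL this way makes it impossible to emit the invalid repeated-`to` form, since every target is folded into the single comma-separated parameter.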
For each of the following statements, select Yes if the statement is true. Otherwise, select No.
NOTE: Each correct selection is worth one point.


 Location of a damaged product → Yes
 Multiple instances of the same product → Yes
 Multiple types of damaged products → Yes
All three statements are Yes, because they correctly describe the capabilities of object detection, one of the major workloads in computer vision, as defined in the Microsoft Azure AI Fundamentals (AI-900) study guide and Microsoft Learn module: “Describe features of computer vision workloads on Azure.”
Object detection is an advanced computer vision technique that allows AI systems not only to classify objects within an image but also to locate them by drawing bounding boxes around each detected object. This differentiates it from simple image classification, which only identifies what objects exist in an image without specifying their locations.
Identifying the location of a damaged product – Yes. According to Microsoft Learn, object detection can return the coordinates or bounding boxes for recognized objects. Therefore, if the model is trained to detect damaged products, it can pinpoint exactly where those defects appear within an image.
Identifying multiple instances of a damaged product – Yes. Object detection models can detect multiple objects of the same class in one image. For instance, if an image contains several damaged products, each will be identified and marked individually. This feature supports tasks such as automated quality inspection in manufacturing, where several defective units may appear simultaneously.
Identifying multiple types of damaged products – Yes. Object detection can also distinguish different classes of objects. When trained on multiple labels (e.g., cracked, scratched, or broken items), the model can detect and classify each type of damage in one image.
In Microsoft’s AI-900 framework, object detection is presented as a critical part of computer vision workloads capable of locating and classifying multiple objects and categories within visual content.
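A sketch of consuming object detection output: predictions typically carry a tag, a probability, and a normalized bounding box. The field names below (`tagName`, `boundingBox`, and so on) mirror common prediction payloads but should be treated as illustrative:

```python
def select_detections(predictions, image_w, image_h, threshold=0.5):
    """Keep predictions above the confidence threshold and convert their
    normalized bounding boxes (0-1 range) to pixel coordinates."""
    results = []
    for p in predictions:
        if p["probability"] < threshold:
            continue
        box = p["boundingBox"]
        results.append({
            "tag": p["tagName"],
            "probability": p["probability"],
            "left": int(box["left"] * image_w),
            "top": int(box["top"] * image_h),
            "width": int(box["width"] * image_w),
            "height": int(box["height"] * image_h),
        })
    return results

# Two mocked predictions: one confident "cracked" hit, one low-confidence miss.
preds = [
    {"tagName": "cracked", "probability": 0.92,
     "boundingBox": {"left": 0.1, "top": 0.2, "width": 0.3, "height": 0.25}},
    {"tagName": "scratched", "probability": 0.31,
     "boundingBox": {"left": 0.5, "top": 0.5, "width": 0.2, "height": 0.2}},
]
kept = select_detections(preds, image_w=640, image_h=480)
```

This shows all three Yes capabilities at once: each kept detection has a class (type of damage), a position (bounding box), and the list can hold any number of instances.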
You need to reduce the load on telephone operators by implementing a chatbot to answer simple questions with predefined answers.
Which two AI services should you use to achieve the goal? Each correct answer presents part of the solution.
NOTE: Each correct selection is worth one point.
Azure Bot Service
Azure Machine Learning
Translator
Language Service
According to the Microsoft Azure AI Fundamentals (AI-900) Official Study Guide and the Microsoft Learn module “Explore conversational AI in Microsoft Azure,” to create a chatbot that can automatically answer simple, predefined user questions, you need two main Azure AI components — one to handle the conversation interface and another to manage the knowledge and language understanding aspect.
Azure Bot Service (A): This service is used to create, manage, and deploy chatbots that interact with users through text or voice. The Bot Service provides the framework for conversation management, user interaction, and channel integration (e.g., webchat, Microsoft Teams, Skype). It serves as the backbone of conversational AI applications and supports integration with other cognitive services like the Language Service.
Language Service (D): The Azure AI Language Service (which now includes Question Answering, formerly QnA Maker) is used to build and manage the knowledge base of predefined questions and answers. This service enables the chatbot to understand user queries and return appropriate responses automatically. The QnA capability allows you to import documents, FAQs, or structured data to create a searchable database of responses for the bot.
Why the other options are incorrect:
B. Azure Machine Learning: This service is used for building, training, and deploying custom machine learning models, not for chatbot Q & A automation.
C. Translator: This service performs language translation, which is not required for answering predefined questions unless multilingual support is specifically needed.
Therefore, to implement a chatbot that can answer simple, repetitive user questions and reduce the load on human operators, you combine Azure Bot Service (for interaction) with the Language Service (for question-answering intelligence).
Match the types of AI workloads to the appropriate scenarios.
To answer, drag the appropriate workload type from the column on the left to its scenario on the right. Each workload type may be used once, more than once, or not at all.
NOTE: Each correct selection is worth one point.



This question tests understanding of AI workload types, a fundamental topic in the Microsoft Azure AI Fundamentals (AI-900) curriculum. Each workload type—Computer Vision, Natural Language Processing, Machine Learning (Regression), and Anomaly Detection—serves a specific function within the AI landscape, as explained in Microsoft Learn’s module “Describe features of common AI workloads.”
Computer Vision enables computers to “see†and interpret visual information such as images or videos. Identifying handwritten letters requires analyzing image patterns, shapes, and strokes, which is a classic image recognition task. Azure’s Computer Vision API and Custom Vision services are specifically designed for such tasks.
Natural Language Processing (NLP) involves interpreting human language, both written and spoken. Determining the sentiment of a social media post (positive, negative, or neutral) is a typical text analytics use case within NLP, often implemented using Azure’s Text Analytics for Sentiment Analysis.
Anomaly Detection focuses on identifying data points that deviate from normal patterns. Detecting fraudulent credit card payments requires finding transactions that are unusual compared to historical spending behavior. Azure’s Anomaly Detector API applies machine learning to identify such irregularities.
Machine Learning (Regression) is used for predicting continuous numerical outcomes based on historical data. Estimating next month’s toy sales is a regression problem—an example of supervised learning where the model predicts future sales values from past sales data.
Thus, based on Microsoft’s official AI-900 learning objectives, the correct mapping of workloads to scenarios is:
Computer Vision → Identify handwritten letters
NLP → Predict sentiment
Anomaly Detection → Fraud detection
Machine Learning (Regression) → Predict toy sales
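The regression scenario above (predicting next month’s toy sales from history) can be sketched as a plain least-squares line fit over month indices; the sales figures below are invented toy data:

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a*x + b for a single feature."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - a * mean_x
    return a, b

months = [1, 2, 3, 4]
sales = [100, 120, 140, 160]   # perfectly linear toy history
a, b = fit_line(months, sales)
next_month = a * 5 + b          # predict month 5 from the fitted line
```

A supervised regression model in Azure Machine Learning does the same job at scale: learn a mapping from historical features to a continuous target, then predict future values.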
What should you do to ensure that an Azure OpenAI model generates accurate responses that include recent events?
Modify the system message.
Add grounding data.
Add few-shot learning.
Add training data.
In Azure OpenAI, grounding refers to the process of connecting the model to external data sources (for example, a database, search index, or API) so that it can retrieve accurate and up-to-date information before generating a response. This is particularly important for scenarios requiring current facts or events, since OpenAI models like GPT-3.5 and GPT-4 are trained on data available only up to a certain cutoff date.
By adding grounding data, the model’s responses are “anchored†to factual sources retrieved at runtime, improving reliability and factual accuracy. Grounding is commonly implemented in Azure OpenAI + Azure Cognitive Search solutions (Retrieval-Augmented Generation or RAG).
Option review:
A. Modify the system message: Changes model tone or behavior but doesn’t supply real-time data.
B. Add grounding data: ✅ Correct — allows access to recent and domain-specific information.
C. Add few-shot learning: Provides examples in the prompt to improve context understanding but not factual accuracy.
D. Add training data: Refers to fine-tuning; this requires retraining and doesn’t update the model’s awareness of current events.
Hence, the best method to ensure accurate and current responses from an Azure OpenAI model is to add grounding data, enabling the model to reference real, updated sources dynamically.
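At its core, the grounding (RAG) pattern means placing retrieved passages into the prompt ahead of the user’s question. A minimal sketch with a mocked retrieval result follows; all names and strings are illustrative:

```python
def build_grounded_prompt(question, retrieved_passages):
    """Prepend retrieved source passages to the user's question so the
    model answers from supplied, current facts instead of relying only
    on its training-cutoff knowledge (the heart of RAG)."""
    sources = "\n".join(
        f"[{i}] {text}" for i, text in enumerate(retrieved_passages, start=1)
    )
    return (
        "Answer using only the sources below. Cite sources by number.\n\n"
        f"Sources:\n{sources}\n\n"
        f"Question: {question}"
    )

# Retrieval is mocked; a real solution would query a search index here.
prompt = build_grounded_prompt(
    "Who won the latest championship?",
    ["Team A won the 2026 championship on 12 May."],
)
```

In an Azure OpenAI + Azure Cognitive Search solution, the retrieval step runs against the search index at request time, so the passages (and therefore the answer) can reflect events after the model’s training cutoff.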
You need to create a clustering model and evaluate the model by using Azure Machine Learning designer. What should you do?
Split the original dataset into a dataset for features and a dataset for labels. Use the features dataset for evaluation.
Split the original dataset into a dataset for training and a dataset for testing. Use the training dataset for evaluation.
Split the original dataset into a dataset for training and a dataset for testing. Use the testing dataset for evaluation.
Use the original dataset for training and evaluation.
According to the Microsoft Learn module “Explore fundamental principles of machine learning” and the AI-900 Official Study Guide, when building and evaluating a model (such as a clustering model) in Azure Machine Learning designer, data must be divided into two subsets:
Training dataset: Used to train the model so it can learn patterns and relationships in the data.
Testing dataset: Used to evaluate the model’s performance on unseen data, ensuring that it generalizes well and does not overfit.
In Azure ML Designer, this is typically done using the Split Data module, which separates the dataset into training and testing portions (for example, 70% training and 30% testing). After training, you connect the testing dataset to an Evaluate Model module to assess metrics such as accuracy, precision, or silhouette score (for clustering).
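The Split Data step can be mimicked in plain Python with a seeded 70/30 shuffled split. This is a sketch of the concept, not the designer module itself:

```python
import random

def split_data(rows, train_fraction=0.7, seed=42):
    """Shuffle rows and cut them into training and testing subsets,
    mirroring what the Split Data module does on the designer canvas."""
    rng = random.Random(seed)
    shuffled = rows[:]          # copy so the caller's list is untouched
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_fraction)
    return shuffled[:cut], shuffled[cut:]

rows = list(range(100))
train, test = split_data(rows)
```

The fixed seed makes the split reproducible, which matters when comparing model runs; the testing subset is the one you would feed to the Evaluate Model step.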
Other options are incorrect:
A. Split into features and labels: Clustering is an unsupervised learning technique, so it doesn’t use labeled data.
B. Use training dataset for evaluation: This would cause overfitting, as the model is being tested on the same data it learned from.
D. Use the original dataset for training and evaluation: Also causes overfitting, offering no measure of generalization.
What are two common use cases for generative AI solutions? Each correct answer presents a complete solution.
NOTE: Each correct selection is worth one point.
generating draft responses for customer service agents
creating original artwork from textual descriptions
predicting sales revenue based on historical data
classifying email messages as spam or non-spam
Generative AI focuses on creating new content rather than just analyzing existing data. As per Microsoft’s AI-900 curriculum and Azure OpenAI documentation, typical use cases include generating text, images, code, or other creative outputs based on input prompts.
A. Generating draft responses for customer service agents — ✅ Correct. GPT-based models can automatically generate draft replies to customer queries, enabling agents to refine responses and increase efficiency.
B. Creating original artwork from textual descriptions — ✅ Correct. DALL-E, available through Azure OpenAI, can produce unique images based on natural language prompts.
Options C and D are incorrect because they involve predictive or classification models, not generative ones:
C. Predicting sales revenue → Regression (machine learning).
D. Classifying email messages → Classification (machine learning).
Correct answers: A and B.
For each of the following statements, select Yes if the statement is true. Otherwise, select No.
NOTE: Each correct selection is worth one point.


 You can communicate with a bot by using email → No
 You can communicate with a bot by using Microsoft Teams → Yes
 You can communicate with a bot by using a webchat interface → Yes
These answers are based on the Microsoft Azure AI Fundamentals (AI-900) Official Study Guide and the Microsoft Learn module “Explore conversational AI in Microsoft Azure.”
The Azure Bot Service allows developers to build, test, deploy, and manage intelligent chatbots that can interact with users through various channels. Channels are communication platforms or interfaces that connect users to bots. Once a bot is built and published through the Azure Bot Service, it can be connected to multiple channels such as Microsoft Teams, webchat, Skype, Facebook Messenger, Direct Line, Slack, and others.
Let’s evaluate each statement:
You can communicate with a bot by using email → No. Azure Bot Service does not support direct interaction via email as a channel. Bots are designed for real-time or conversational interactions through messaging or voice-based platforms, not asynchronous email communication.
You can communicate with a bot by using Microsoft Teams → Yes. Microsoft Teams is one of the primary channels supported by Azure Bot Service. Bots can be integrated directly into Teams to handle chat-based conversations, provide information, automate workflows, or assist users interactively within Teams.
You can communicate with a bot by using a webchat interface → Yes. The Web Chat channel is another core feature of Azure Bot Service. It allows embedding the bot into a website or web application using the Web Chat control or the Direct Line API, enabling users to chat directly from a browser interface.
In summary, Azure Bot Service supports real-time conversational interfaces like Teams and webchat, but not email.
Your company manufactures widgets.
You have 1,000 digital photos of the widgets.
You need to identify the location of the widgets within the photos.
What should you use?
Computer Vision Spatial Analysis
Custom Vision object detection
Custom Vision classification
Computer Vision Image Analysis
According to the Microsoft Azure AI Fundamentals (AI-900) Official Study Guide and the Microsoft Learn module “Explore computer vision in Microsoft Azure,” object detection is a computer vision technique used to locate and identify objects within an image. It not only determines what objects are present but also where they appear in the image by returning bounding box coordinates around each detected item.
In this scenario, the goal is to identify the location of widgets within digital photos. This requires both recognition (knowing that the object is a widget) and localization (determining its position). The Custom Vision service in Azure allows you to train a model specifically for your own images, making it ideal for recognizing company-specific products such as widgets. By selecting the Object Detection domain in Custom Vision, you can label regions of interest in your training images. The model then learns to detect and locate those objects in new photos.
Let’s examine the other options:
A. Computer Vision Spatial Analysis: Used for people tracking, movement detection, and occupancy analytics in video streams — not for locating products in still images.
C. Custom Vision classification: This model categorizes an image as a whole (e.g., “contains a widget” or “does not contain a widget”) but does not locate objects within the image.
D. Computer Vision Image Analysis: Provides general image tagging, description, and OCR capabilities but does not pinpoint object locations.
TESTED 23 Mar 2026
