Users are complaining about having to reconnect to shares when there are networking issues.
Which Files feature should the administrator enable to ensure that sessions will auto-reconnect in such events?
Durable File Handles
Multi-Protocol Shares
Connected Shares
Workload Optimization
The Files feature that the administrator should enable to ensure the sessions will auto-reconnect in such events is Durable File Handles. Durable File Handles is a feature that allows SMB clients to reconnect to a file server after a temporary network disruption or a client sleep state without losing the handle to the open file. Durable File Handles can improve the user experience and reduce the risk of data loss or corruption. Durable File Handles can be enabled for each share in the Files Console. References: Nutanix Files Administration Guide, page 76; Nutanix Files Solution Guide, page 10
After configuring Smart DR, an administrator is unable to see the policy in the Policies tab. The administrator has confirmed that all FSVMs are able to connect to Prism Central via port 9440 bidirectionally. What is the possible reason for this issue?
The primary and recovery file servers do not have the same version.
Port 7515 should be open for all External/Client IPs of FSVMs on the Source and Target cluster.
The primary and recovery file servers do not have the same protocols.
Port 7515 should be open for all Internal/Storage IPs of FSVMs on the Source and Target cluster.
Smart DR in Nutanix Files, part of Nutanix Unified Storage (NUS), is a disaster recovery (DR) solution that simplifies the setup of replication policies between file servers (e.g., using NearSync, as seen in Question 24). After configuring a Smart DR policy, the administrator expects to see it in the Policies tab in Prism Central, but it is not visible despite confirmed connectivity between FSVMs and Prism Central via port 9440 (used for Prism communication, as noted in Question 21). This indicates a potential mismatch or configuration issue.
Analysis of Options:
Option A (The primary and recovery file servers do not have the same version): Correct. Smart DR requires that the primary and recovery file servers (source and target) run the same version of Nutanix Files to ensure compatibility. If the versions differ (e.g., primary on Files 4.0, recovery on Files 3.8), the Smart DR policy may fail to register properly in Prism Central, resulting in it not appearing in the Policies tab. This is a common issue in mixed-version environments, as Smart DR relies on consistent features and APIs across both file servers.
Option B (Port 7515 should be open for all External/Client IPs of FSVMs on the Source and Target cluster): Incorrect. Port 7515 is not a standard port for Nutanix Files or Smart DR communication. The External/Client network of FSVMs (used for SMB/NFS traffic) communicates with clients, not between FSVMs or with Prism Central for policy management. Smart DR communication between FSVMs and Prism Central uses port 9440 (already confirmed open), and replication traffic between FSVMs typically uses other ports (e.g., 2009, 2020), but not 7515.
Option C (The primary and recovery file servers do not have the same protocols): Incorrect. Nutanix Files shares can support multiple protocols (e.g., SMB, NFS), but Smart DR operates at the file server level, not the protocol level. The replication policy in Smart DR replicates share data regardless of the protocol, and a protocol mismatch would not prevent the policy from appearing in the Policies tab; it might affect client access, but not policy visibility.
Option D (Port 7515 should be open for all Internal/Storage IPs of FSVMs on the Source and Target cluster): Incorrect. Similar to option B, port 7515 is not relevant for Smart DR or Nutanix Files communication. The Internal/Storage network of FSVMs is used for communication with the Nutanix cluster's storage pool, but Smart DR policy management and replication traffic do not rely on port 7515. The key ports for replication (e.g., 2009, 2020) are typically already open, and the issue here is policy visibility, not replication traffic.
Why Option A?
Smart DR requires compatibility between the primary and recovery file servers, including running the same version of Nutanix Files. A version mismatch can cause the Smart DR policy to fail registration in Prism Central, preventing it from appearing in the Policies tab. Since port 9440 connectivity is already confirmed, the most likely issue is a version mismatch, which is a common cause of such problems in Nutanix Files DR setups.
Exact Extract from Nutanix Documentation:
From the Nutanix Files Administration Guide (available on the Nutanix Portal):
"Smart DR requires that the primary and recovery file servers run the same version of Nutanix Files to ensure compatibility. A version mismatch between the source and target file servers can prevent the Smart DR policy from registering properly in Prism Central, resulting in the policy not appearing in the Policies tab."
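For administrators who want to script the connectivity portion of this troubleshooting, the following is a minimal Python sketch that tests TCP reachability of port 9440; the Prism Central and FSVM addresses are placeholders, and running it from both sides approximates the bidirectional requirement described above.

```python
#!/usr/bin/env python3
"""Minimal sketch: confirm TCP reachability of port 9440.

Run it once from an FSVM (or a host on the FSVM network) toward Prism Central,
and once from Prism Central toward the FSVM IPs, to approximate the
bidirectional requirement. Addresses below are placeholders.
"""
import socket

TARGETS = [
    ("prism-central.example.com", 9440),  # placeholder Prism Central address
    ("10.0.0.51", 9440),                  # placeholder FSVM external IP
]

for host, port in TARGETS:
    try:
        with socket.create_connection((host, port), timeout=5):
            print(f"OK   {host}:{port} reachable")
    except OSError as err:
        print(f"FAIL {host}:{port} -> {err}")
```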
An administrator needs to enable a Nutanix feature that will ensure automatic client reconnection to shares whenever there are intermittent server-side networking issues and FSVM HA events. Which Files feature should the administrator enable?
Multi-Protocol Shares
Connected Shares
Durable File Handles
Persistent File Handles
Nutanix Files, part of Nutanix Unified Storage (NUS), provides file shares (e.g., SMB, NFS) that clients access. Intermittent server-side networking issues or FSVM High Availability (HA) events (e.g., an FSVM failover, as discussed in Question 40) can disrupt client connections. The administrator needs a feature to ensure automatic reconnection to shares during such events, minimizing disruption for users.
Analysis of Options:
Option A (Multi-Protocol Shares): Incorrect. Multi-Protocol Shares allow a share to be accessed via both SMB and NFS (as in Questions 8 and 60), but this feature does not address client reconnection during networking issues or FSVM HA events; it focuses on protocol support, not connection resilience.
Option B (Connected Shares): Incorrect. "Connected Shares" is not a recognized feature in Nutanix Files. It appears to be a made-up term and does not apply to automatic client reconnection.
Option C (Durable File Handles): Correct. Durable File Handles is an SMB feature in Nutanix Files (as noted in Question 19) that ensures automatic client reconnection after temporary server-side disruptions, such as networking issues or FSVM HA events (e.g., failover when an FSVM's IP is reassigned, as in Question 40). When enabled, Durable File Handles allow SMB clients to maintain their session state and automatically reconnect without user intervention, meeting the requirement.
Option D (Persistent File Handles): Incorrect. "Persistent File Handles" is not a standard feature in Nutanix Files. It may be confused with Durable File Handles (option C), which is the correct term for this SMB capability. Persistent File Handles is not a recognized Nutanix feature.
Why Option C?
Durable File Handles is an SMB 2.1+ feature supported by Nutanix Files that ensures clients can automatically reconnect to shares after server-side disruptions, such as intermittent networking issues or FSVM HA events (e.g., failover). This feature maintains the client's session state, allowing seamless reconnection without manual intervention, directly addressing the administrator's requirement.
Exact Extract from Nutanix Documentation:
From the Nutanix Files Administration Guide (available on the Nutanix Portal):
"Durable File Handles is an SMB feature in Nutanix Files that ensures automatic client reconnection to shares during server-side disruptions, such as intermittent networking issues or FSVM HA events. Enable Durable File Handles to maintain client session state and allow seamless reconnection without user intervention."
An administrator needs to improve the performance for Volume Group storage connected to a group of VMs with intensive I/O. Which vg.update vg_name command parameter should be used to distribute the I/O across multiple CVMs?
flash_mode=enable
load_balance_vm_attachments=true
load_balance_vm_attachments=enable
flash_mode=true
Nutanix Volumes, part of Nutanix Unified Storage (NUS), provides block storage via iSCSI to VMs and external hosts. A Volume Group (VG) in Nutanix Volumes is a collection of volumes that can be attached to VMs. For VMs with intensive I/O, performance can be improved by distributing the I/O load across multiple Controller VMs (CVMs) in the Nutanix cluster. The vg.update command in the Acropolis CLI (acli) is used to modify Volume Group settings, including parameters that affect I/O distribution.
Analysis of Options:
Option A (flash_mode=enable): Incorrect. The flash_mode parameter enables flash mode for a Volume Group, which prioritizes SSDs for I/O operations to improve performance. While this can help with intensive I/O, it does not distribute I/O across multiple CVMs—it focuses on storage tiering, not load balancing.
Option B (load_balance_vm_attachments=true): Correct. The load_balance_vm_attachments=true parameter enables load balancing of VM attachments for a Volume Group. When enabled, this setting distributes the iSCSI connections from VMs to multiple CVMs in the cluster, balancing the I/O load across CVMs. This improves performance for VMs with intensive I/O by ensuring that no single CVM becomes a bottleneck.
Option C (load_balance_vm_attachments=enable): Incorrect. While this option is close to the correct parameter, the syntax is incorrect. The load_balance_vm_attachments parameter uses true or false as its value, not enable. The correct syntax is load_balance_vm_attachments=true (option B).
Option D (flash_mode=true): Incorrect. Similar to option A, flash_mode=true enables flash mode for the Volume Group, prioritizing SSDs for I/O. This does not distribute I/O across multiple CVMs, as it addresses storage tiering rather than load balancing.
Why Option B?
The load_balance_vm_attachments=true parameter in the vg.update command enables load balancing for VM attachments to a Volume Group, distributing iSCSI connections across multiple CVMs. This ensures that the I/O load from VMs with intensive I/O is balanced across the cluster's CVMs, improving performance by preventing any single CVM from becoming a bottleneck. This directly addresses the requirement to distribute I/O for better performance.
Exact Extract from Nutanix Documentation:
From the Nutanix Volumes Administration Guide (available on the Nutanix Portal):
"To improve performance for Volume Groups with intensive I/O, use the vg.update command to enable load balancing with the parameter load_balance_vm_attachments=true. This setting distributes iSCSI connections from VMs across multiple CVMs in the cluster, balancing the I/O load and preventing bottlenecks."
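As an illustration only, the sketch below shows one way the parameter from the extract could be applied from an admin workstation by running acli over SSH on a CVM; the CVM address, SSH access as the nutanix user, and the Volume Group name are assumptions, not values from the question.

```python
#!/usr/bin/env python3
"""Sketch: enable VG attachment load balancing via acli over SSH.

Assumes key-based SSH access to a CVM as the 'nutanix' user; the CVM IP and
Volume Group name are placeholders.
"""
import subprocess

CVM_IP = "10.10.50.21"       # placeholder CVM address
VG_NAME = "intensive-io-vg"  # placeholder Volume Group name

# acli vg.update <vg_name> load_balance_vm_attachments=true
# distributes iSCSI attachments across CVMs (see extract above).
cmd = ["ssh", f"nutanix@{CVM_IP}",
       f"acli vg.update {VG_NAME} load_balance_vm_attachments=true"]
result = subprocess.run(cmd, capture_output=True, text=True)
print(result.stdout or result.stderr)
```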
What process is initiated when a share is protected for the first time?
Share data movement is started to the recovery site.
A remote snapshot is created for the share.
The share is created on the recovery site with a similar configuration.
A local snapshot is created for the share.
Nutanix Files, part of Nutanix Unified Storage (NUS), supports data protection for shares through mechanisms like replication and snapshots. When a share is "protected for the first time," this typically refers to enabling a protection mechanism, such as a replication policy (e.g., NearSync, as seen in Question 24) or a snapshot schedule, to ensure the share's data can be recovered in case of failure.
Analysis of Options:
Option A (Share data movement is started to the recovery site): Incorrect. While data movement to a recovery site occurs during replication (e.g., with NearSync), this is not the first step when a share is protected. Before data can be replicated, a baseline snapshot is typically created to capture the share's initial state. Data movement follows the snapshot creation, not as the first step.
Option B (A remote snapshot is created for the share): Incorrect. A remote snapshot implies that a snapshot is created directly on the recovery site, which is not how Nutanix Files protection works initially. The first step is to create a local snapshot on the primary site, which is then replicated to the remote site as part of the protection process (e.g., via NearSync).
Option C (The share is created on the recovery site with a similar configuration): Incorrect. While this step may occur during replication setup (e.g., the remote site's file server is configured to host a read-only copy of the share, as seen in the exhibit for Question 24), it is not the first process initiated. The share on the recovery site is created as part of the replication process, which begins after a local snapshot is taken.
Option D (A local snapshot is created for the share): Correct. When a share is protected for the first time (e.g., by enabling a snapshot schedule or replication policy), the initial step is to create a local snapshot of the share on the primary site. This snapshot captures the share's current state and serves as the baseline for protection mechanisms like replication or recovery. For example, in a NearSync setup, a local snapshot is taken, and then the snapshot data is replicated to the remote site.
Why Option D?
Protecting a share in Nutanix Files typically involves snapshots as the foundation for data protection. The first step is to create a local snapshot of the share on the primary site, which captures the share's data and metadata. This snapshot can then be used for local recovery (e.g., via Self-Service Restore) or replicated to a remote site for DR (e.g., via NearSync). The question focuses on the initial process, making the creation of a local snapshot the correct answer.
Exact Extract from Nutanix Documentation:
From the Nutanix Files Administration Guide (available on the Nutanix Portal):
"When a share is protected for the first time, whether through a snapshot schedule or a replication policy, the initial step is to create a local snapshot of the share on the primary site. This snapshot captures the share's current state and serves as the baseline for subsequent protection operations, such as replication to a remote site or local recovery."
Which port is required between a CVM or Prism Central and insights.nutanix.com for Data Lens configuration?
80
443
8443
9440
Data Lens is a SaaS that provides file analytics and reporting, anomaly detection, audit trails, ransomware protection features, and tiering management for Nutanix Files. To configure Data Lens, one of the network requirements is to allow HTTPS (port 443) traffic from a CVM or Prism Central to insights.nutanix.com. This allows Data Lens to collect metadata and statistics from the FSVMs and display them in a graphical user interface. References: Nutanix Files Administration Guide, page 93; Nutanix Data Lens User Guide
Data Lens is a cloud-based service hosted at insights.nutanix.com, and Nutanix requires secure communication over HTTPS (port 443) for configuration and operation. The CVMs or Prism Central must have outbound access to insights.nutanix.com on port 443 to enable Data Lens, authenticate with the service, and send/receive analytics data.
Exact Extract from Nutanix Documentation:
From the Nutanix Data Lens Administration Guide (available on the Nutanix Portal):
"Data Lens requires outbound connectivity from the Nutanix cluster (CVMs or Prism Central) to insights.nutanix.com over port 443 (HTTPS). Ensure that this port is open for secure communication to enable Data Lens configuration and operation."
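The outbound requirement can be validated before enabling Data Lens with a simple TLS handshake test; the sketch below assumes nothing beyond the insights.nutanix.com endpoint and port 443 named in the extract.

```python
#!/usr/bin/env python3
"""Sketch: verify outbound HTTPS (TCP 443) to insights.nutanix.com."""
import socket
import ssl

HOST, PORT = "insights.nutanix.com", 443

context = ssl.create_default_context()
with socket.create_connection((HOST, PORT), timeout=10) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname=HOST) as tls_sock:
        # A successful handshake confirms the path and certificate chain work.
        print(f"Connected to {HOST}:{PORT} using {tls_sock.version()}")
```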
What are two network requirements for a four-node FSVM deployment? (Choose two.)
Four available IP addresses on the Client network
Four available IP addresses on the Storage network
Five available IP addresses on the Storage network
Five available IP addresses on the Client network
Nutanix Files, part of Nutanix Unified Storage (NUS), uses File Server Virtual Machines (FSVMs) to manage file services. A four-node FSVM deployment means four FSVMs are deployed, typically one per node in a four-node cluster. Nutanix Files requires two networks for FSVMs:
Client Network: Used for client-facing communication (e.g., SMB, NFS access).
Storage Network: Used for internal communication with the Nutanix cluster's storage pool.
Each FSVM requires one IP address on each network, as established in Question 1.
Analysis of Options:
Option A (Four available IP addresses on the Client network): Correct. In a four-node FSVM deployment, each FSVM requires one IP address on the Client network for client communication (e.g., SMB, NFS). With four FSVMs, this means four IP addresses are needed on the Client network, one for each FSVM.
Option B (Four available IP addresses on the Storage network): Correct. Each FSVM also requires one IP address on the Storage network for internal communication with the Nutanix cluster's storage pool. For four FSVMs, this means four IP addresses are needed on the Storage network, one for each FSVM.
Option C (Five available IP addresses on the Storage network): Incorrect. Only four IP addresses are needed on the Storage network for a four-node FSVM deployment, one per FSVM. A fifth IP address is not required, as there is no additional entity (e.g., a virtual IP) needed for the Storage network in this context.
Option D (Five available IP addresses on the Client network): Incorrect. Similarly, only four IP addresses are needed on the Client network for the four FSVMs. A fifth IP address might be needed in other scenarios (e.g., a virtual IP for load balancing in some configurations), but for a standard four-node FSVM deployment, four IPs suffice, as established in Question 1.
Selected Requirements:
A: Four IP addresses on the Client network are required, one for each of the four FSVMs.
B: Four IP addresses on the Storage network are required, one for each of the four FSVMs.
Why These Requirements?
Each FSVM in a Nutanix Files deployment requires one IP address on the Client network for client access and one on the Storage network for internal storage communication. For a four-node FSVM deployment, this translates to exactly four IP addresses on each network, matching the number of FSVMs.
Exact Extract from Nutanix Documentation:
From the Nutanix Files Deployment Guide (available on the Nutanix Portal):
"A Nutanix Files deployment with four FSVMs requires four available IP addresses on the Client network for client communication (SMB/NFS) and four available IP addresses on the Storage network for internal communication with the Nutanix cluster's storage pool."
Which prerequisite is required to deploy Objects on AHV or ESXi?
Prism Central version is 5.17.1 or later
Port 9440 is accessible on both PE and PC
Valid SSL Certificate
Nutanix STARTER License
Nutanix Objects, part of Nutanix Unified Storage (NUS), is an S3-compatible object storage solution that can be deployed on AHV or ESXi hypervisors. Deploying Objects has specific prerequisites to ensure successful installation and operation.
Analysis of Options:
Option A (Prism Central version is 5.17.1 or later): Incorrect. While Nutanix Objects requires Prism Central for deployment and management, the minimum version for Objects deployment is typically lower (e.g., Prism Central 5.15 or later, depending on the Objects version). Version 5.17.1 is not a specific requirement for Objects deployment on AHV or ESXi.
Option B (Port 9440 is accessible on both PE and PC): Correct. Port 9440 is used for communication between Prism Element (PE) and Prism Central (PC), as well as for internal Nutanix services. When deploying Objects, Prism Central communicates with the cluster (via Prism Element) to deploy Object Store Service VMs. This communication requires port 9440 to be open between PE and PC, making it a key prerequisite.
Option C (Valid SSL Certificate): Incorrect. While a valid SSL certificate is recommended for secure communication (e.g., for S3 API access), it is not a strict prerequisite for deploying Objects. Objects can be deployed with self-signed certificates, though Nutanix recommends replacing them with valid certificates for production use.
Option D (Nutanix STARTER License): Incorrect. The Nutanix STARTER license is an entry-level license for basic cluster functionality (e.g., VMs, storage). However, Nutanix Objects requires a separate license (e.g., Objects license or a higher-tier AOS license like Pro or Ultimate). The STARTER license alone does not support Objects deployment.
Why Option B?
Port 9440 is critical for communication between Prism Element and Prism Central during the deployment of Objects. If this port is blocked, the deployment will fail, as Prism Central cannot communicate with the cluster to deploy the Object Store Service VMs.
Exact Extract from Nutanix Documentation:
From the Nutanix Objects Deployment Guide (available on the Nutanix Portal):
"Before deploying Nutanix Objects on AHV or ESXi, ensure that port 9440 is accessible between Prism Element (PE) and Prism Central (PC). This port is required for communication during the deployment process, as Prism Central manages the deployment of Object Store Service VMs on the cluster."
An administrator is trying to create a Distributed Share, but the Use Distributed Share/Export type instead of Standard option is not present when creating the share.
What is most likely the cause for this?
The file server does not have the correct license
The cluster only has three nodes.
The file server resides on a single node cluster.
The cluster is configured with hybrid storage
The most likely cause for this issue is that the file server resides on a single node cluster. A distributed share is a type of SMB share or NFS export that distributes the hosting of top-level directories across multiple FSVMs, which improves load balancing and performance. A distributed share cannot be created on a single node cluster, because only one FSVM is available and there are no other FSVMs to distribute the top-level directories across. Therefore, the option to use the distributed share/export type instead of standard is not present when creating a share on a single node cluster. References: Nutanix Files Administration Guide, page 33; Nutanix Files Solution Guide, page 8
A single-node cluster cannot support a Distributed Share because it can only host one FSVM, whereas Distributed Shares require at least three FSVMs for distribution and high availability. This limitation causes the "Use Distributed Share/Export type instead of Standard" option to be absent when creating a share, as the cluster does not meet the minimum requirements.
Exact Extract from Nutanix Documentation:
From the Nutanix Files Administration Guide (available on the Nutanix Portal):
"Distributed Shares require a minimum of three FSVMs to ensure scalability and high availability, which typically requires a cluster with at least three nodes. On a single-node cluster, only Standard Shares are supported, and the option to create a Distributed Share will not be available in the Files Console."
Immediately after creation, the administrator is asked to change the name of an Objects store.
How can the administrator fulfill this request?
Enable versioning and then rename the Objects store, disable versioning
The Objects store can only be renamed if hosted on ESXi.
Delete and recreate a new Objects store with the updated name.
Update the name of the Objects stores by using a CORS XML file
The administrator can achieve this request by deleting and recreating a new Objects store with the updated name. Objects is a feature that allows users to create and manage object storage clusters on a Nutanix cluster. Objects clusters can provide S3-compatible access to buckets and objects for various applications and users. Objects clusters can be created and configured in Prism Central. However, once an Objects cluster is created, its name cannot be changed or edited. Therefore, the only way to change the name of an Objects cluster is to delete the existing cluster and create a new cluster with the updated name. References: Nutanix Objects User Guide, page 9; Nutanix Objects Solution Guide, page 8
Deploying Files instances requires which two minimum resources? (Choose two.)
8 vCPUs per host
8 GiB of memory per host
12 GiB of memory per host
4 vCPUs per host
Nutanix Files instances are deployed using File Server Virtual Machines (FSVMs) that run on the Nutanix cluster's hypervisor (AHV, ESXi, or Hyper-V). The minimum resource requirements for deploying FSVMs are specified in the Nutanix Files documentation to ensure proper performance. These requirements are typically defined per FSVM, not per host, as FSVMs are virtual machines distributed across the cluster's hosts.
According to the official Nutanix documentation:
vCPUs: Each FSVM requires a minimum of 4 vCPUs to operate effectively.
Memory: Each FSVM requires a minimum of 12 GiB of memory (RAM).
The question asks for the "minimum resources" required for deploying Files instances, and the options are framed as "per host." However, in the context of Nutanix Files, resource requirements are specified per FSVM, as FSVMs are the entities consuming these resources. The options likely reflect a misunderstanding in the original question phrasing, but based on the standard Nutanix Files deployment requirements:
4 vCPUs per FSVM (option D) is correct, as this is the minimum vCPU requirement.
12 GiB of memory per FSVM (option C) is correct, as this is the minimum memory requirement.
Options A (8 vCPUs per host) and B (8 GiB of memory per host) do not align with the documented minimum requirements for FSVMs:
8 vCPUs is higher than the minimum requirement of 4 vCPUs per FSVM.
8 GiB of memory is lower than the minimum requirement of 12 GiB per FSVM.
Exact Extract from Nutanix Documentation:
"For a Nutanix Files deployment, each File Server Virtual Machine (FSVM) requires the following minimum resources:
4 vCPUs
12 GiB of RAM
These resources ensure that the FSVM can handle file service operations efficiently." (Nutanix Files Deployment Guide, Version 4.0, Section: "System Requirements for Nutanix Files")
Which two platforms are currently supported for Smart Tiering? (Choose two.)
Google Cloud Storage
AWS Standard
Wasabi
Azure Blob
The two platforms that are currently supported for Smart Tiering are AWS Standard and Azure Blob. Smart Tiering is a feature that allows administrators to tier data from Files to cloud storage based on file age, file size, and file type. Smart Tiering can help reduce the storage cost and optimize the performance of Files. Smart Tiering currently supports AWS Standard and Azure Blob as the cloud storage platforms, and more platforms will be added in the future. References: Nutanix Files Administration Guide, page 99; Nutanix Files Solution Guide, page 11
Which configuration is required for an Objects deployment?
Configure Domain Controllers on both Prism Element and Prism Central.
Configure VPC on both Prism Element and Prism Central.
Configure a dedicated storage container on Prism Element or Prism Central.
Configure NTP servers on both Prism Element and Prism Central.
The configuration that is required for an Objects deployment is to configure NTP servers on both Prism Element and Prism Central. NTP (Network Time Protocol) is a protocol that synchronizes the clocks of devices on a network with a reliable time source. NTP servers are devices that provide accurate time information to other devices on a network. Configuring NTP servers on both Prism Element and Prism Central is required for an Objects deployment, because it ensures that the time settings are consistent and accurate across the Nutanix cluster and the Objects cluster, which can prevent any synchronization issues or errors. References: Nutanix Objects User Guide, page 9; Nutanix Objects Deployment Guide
An administrator is tasked with deploying a Microsoft Server Failover Cluster for a critical application that uses shared storage.
The failover cluster instance will consist of VMs running on an AHV-hosted cluster and bare metal servers for maximum resiliency.
What should the administrator do to satisfy this requirement?
Create a Bucket with Objects.
Provision a Volume Group with Volume.
Create an SMB Share with Files.
Provision a new Storage Container.
Nutanix Volumes allows administrators to provision a volume group with one or more volumes that can be attached to multiple VMs or physical servers via iSCSI. This enables the creation of a Microsoft Server Failover Cluster that uses shared storage for a critical application.
Microsoft Server Failover Cluster typically uses shared block storage for its quorum disk and application data. Nutanix Volumes provides this via iSCSI by provisioning a Volume Group, which can be accessed by both the AHV-hosted VMs and bare metal servers. This setup ensures maximum resiliency, as the shared storage is accessible to all nodes in the cluster, allowing failover between VMs and bare metal servers as needed.
Exact Extract from Nutanix Documentation:
From the Nutanix Volumes Administration Guide (available on the Nutanix Portal):
"Nutanix Volumes provides block storage via iSCSI, which is ideal for Microsoft Server Failover Clusters requiring shared storage. To deploy an MSFC with VMs and bare metal servers, provision a Volume Group in Nutanix Volumes and expose it via iSCSI to all cluster nodes, ensuring shared access to the storage for high availability and failover."
Which two methods can be used to upgrade Files? (Choose two.)
Prism Element - LCM
Prism Element - One-click
Prism Central - LCM
Prism Central - Files Manager
Nutanix Files, part of Nutanix Unified Storage (NUS), can be upgraded to newer versions to gain access to new features, bug fixes, and improvements. Upgrading Files involves updating the File Server Virtual Machines (FSVMs) and can be performed using Nutanix's management tools.
Analysis of Options:
Option A (Prism Element - LCM): Incorrect. Life Cycle Manager (LCM) in Prism Element is used to manage upgrades for AOS, hypervisors, and other cluster components, but it does not directly handle Nutanix Files upgrades. Files upgrades are managed through Prism Central, as Files is a distributed service that requires centralized management.
Option B (Prism Element - One-click): Incorrect. Prism Element does not have a "one-click" upgrade option for Nutanix Files. One-click upgrades are typically associated with hypervisor upgrades (e.g., ESXi, as in Question 47) or AOS upgrades, not Files. Files upgrades are performed via Prism Central.
Option C (Prism Central - LCM): Correct. Life Cycle Manager (LCM) in Prism Central can be used to upgrade Nutanix Files. LCM in Prism Central manages upgrades for Files by downloading the Files software bundle, distributing it to FSVMs, and performing a rolling upgrade to minimize downtime. This is a supported and recommended method for upgrading Files.
Option D (Prism Central - Files Manager): Correct. The Files Manager (or Files Console) in Prism Central provides a UI for managing Nutanix Files, including upgrades. The administrator can use the Files Manager to initiate an upgrade by uploading a Files software bundle or selecting an available version, and the upgrade process is managed through Prism Central, ensuring a coordinated update across all FSVMs.
Selected Methods:
C: LCM in Prism Central automates the Files upgrade process, making it a streamlined method.
D: The Files Manager in Prism Central provides a manual upgrade option through the UI, offering flexibility for administrators.
Exact Extract from Nutanix Documentation:
From the Nutanix Files Administration Guide (available on the Nutanix Portal):
"Nutanix Files can be upgraded using two methods in Prism Central: Life Cycle Manager (LCM) and the Files Manager. LCM in Prism Central automates the upgrade process by downloading and applying the Files software bundle, while the Files Manager allows administrators to manually initiate the upgrade by uploading a software bundle or selecting an available version."
An organization is implementing their first Nutanix cluster. In addition to hosting VMs, the cluster will be providing block storage services to existing physical servers, as well as CIFS shares and NFS exports to the end users. Security policies dictate that separate networks are used for different functions, which are already configured as:
Management - VLAN 500 - 10.10.50.0/24
iSCSI - VLAN 510 - 10.10.51.0/24
Files - VLAN 520 - 10.10.52.0/24
How should the administrator configure the cluster to ensure the iSCSI traffic is on the correct network and accessible by the existing physical servers?
Configure the Data Services IP in Prism Central with an IP on VLAN 510.
Configure the Data Services IP in Prism Element with an IP on VLAN 510.
Create a new internal interface on VLAN 510 in Network Configuration, enabling it for Volumes.
Create a new virtual switch on VLAN 510 in Network Configuration, enabling it for Volumes.
The organization is deploying a Nutanix cluster to provide block storage services (via iSCSI), CIFS shares, and NFS exports (via Nutanix Files). Nutanix Volumes, part of Nutanix Unified Storage (NUS), is used to provide block storage to physical servers via iSCSI. The security policy requires separate networks:
Management traffic on VLAN 500 (10.10.50.0/24).
iSCSI traffic on VLAN 510 (10.10.51.0/24).
Files traffic on VLAN 520 (10.10.52.0/24).
To ensure iSCSI traffic uses VLAN 510 and is accessible by physical servers, the cluster must be configured to route iSCSI traffic over the correct network.
The Data Services IP is the key configuration for iSCSI traffic in a Nutanix cluster. By setting this IP to an address on VLAN 510 (e.g., 10.10.51.x), the administrator ensures that iSCSI traffic is routed over the correct network. Physical servers can then connect to this IP to access block storage via iSCSI. This configuration is done in Prism Element under the cluster's iSCSI settings.
Exact Extract from Nutanix Documentation:
From the Nutanix Volumes Administration Guide (available on the Nutanix Portal):
"To enable iSCSI traffic for Nutanix Volumes, configure the Data Services IP in Prism Element. This IP address is used by external hosts (e.g., physical servers) to connect to the cluster for block storage access. Assign the Data Services IP to the appropriate VLAN for iSCSI traffic (e.g., VLAN 510) to ensure network isolation and accessibility."
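From one of the existing physical servers, reachability of the Data Services IP on the standard iSCSI port (TCP 3260) can be verified with a short sketch like the one below; the address shown is a placeholder in the 10.10.51.0/24 range, not a value from the scenario.

```python
#!/usr/bin/env python3
"""Sketch: check that the Data Services IP answers on the iSCSI port (TCP 3260)."""
import socket

DATA_SERVICES_IP = "10.10.51.10"  # placeholder address on VLAN 510
ISCSI_PORT = 3260                 # default iSCSI target port

try:
    with socket.create_connection((DATA_SERVICES_IP, ISCSI_PORT), timeout=5):
        print(f"iSCSI portal {DATA_SERVICES_IP}:{ISCSI_PORT} is reachable")
except OSError as err:
    print(f"Cannot reach {DATA_SERVICES_IP}:{ISCSI_PORT} -> {err}")
```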
An administrator has deployed a new Files cluster within a Windows Environment.
After some days, the Files environment is no longer able to synchronize users with the Active Directory server. The administrator observes a large time difference between the Files environment and the Active Directory server, which is responsible for this behavior.
How should the administrator prevent the Files environment and the AD Server from having such a time difference in future?
Use the same NTP Servers for the File environment and the AD Server.
Use 0.pool.ntp.org as the NTP Server for the AD Server.
Use 0.pool.ntp.org as the NTP Server for the Files environment.
Connect to every FSVM and edit the time manually.
The administrator should prevent the Files environment and the AD Server from having such a time difference in future by using the same NTP Servers for the File environment and the AD Server. NTP (Network Time Protocol) is a protocol that synchronizes the clocks of devices on a network with a reliable time source. NTP Servers are devices that provide accurate time information to other devices on a network. By using the same NTP Servers for the File environment and the AD Server, the administrator can ensure that they have consistent and accurate time settings and avoid any synchronization issues or errors. References: Nutanix Files Administration Guide, page 32; Nutanix Files Troubleshooting Guide
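Once both environments point at the same NTP servers, ongoing drift can be spot-checked with a small script such as the sketch below; it relies on the third-party ntplib package, and the NTP server name is a placeholder.

```python
#!/usr/bin/env python3
"""Sketch: report the local clock offset from a shared NTP server.

Requires the third-party 'ntplib' package (pip install ntplib). Run the same
check on hosts in both environments; a large difference indicates drift that
will eventually break AD (Kerberos) authentication.
"""
import ntplib

NTP_SERVER = "ntp.corp.example.com"  # placeholder: the NTP server used by Files and AD

response = ntplib.NTPClient().request(NTP_SERVER, version=3, timeout=5)
print(f"Offset from {NTP_SERVER}: {response.offset:+.3f} seconds")
```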
An administrator has created a volume and needs to attach it to a Windows host via iSCSI. The Data Services IP has been configured in the MS iSCSI Initiator, but no targets are visible.
What is most likely the cause of this issue?
The host's IP address is not authorized to access the volume.
The CHAP password configured on the client is incorrect.
The CHAP Authentication has not been configured on the client.
The host's IQN is not authorized to access the volume.
Nutanix Volumes uses IQN-based authorization to control access to volumes. The administrator must specify the IQN of the host that needs to access the volume when creating or editing the volume. If the host's IQN is not authorized, it will not be able to see the target in the MS iSCSI Initiator. References: Nutanix Volumes Administration Guide
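To collect the Windows host's initiator IQN so it can be added to the volume group's list of authorized clients, a hedged sketch like the following can be used; running Python on the Windows host and shelling out to the built-in PowerShell Storage module are assumptions for illustration.

```python
#!/usr/bin/env python3
"""Sketch: print the Windows host's iSCSI initiator IQN via PowerShell."""
import subprocess

result = subprocess.run(
    ["powershell", "-NoProfile", "-Command", "(Get-InitiatorPort).NodeAddress"],
    capture_output=True, text=True, check=True,
)
# The returned NodeAddress (e.g., iqn.1991-05.com.microsoft:hostname) is the
# value that must be authorized on the Volume Group.
print("Initiator IQN(s):")
print(result.stdout.strip())
```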
An administrator created a bucket for an upcoming project where internal users as well as an outside consultant …. Object Browser. The administrator needs to provide both internal and consultant access to the same bucket.
The organization would like to prevent the consultant from having internal access, based on their security policy.
Which two items are required to fulfill this requirement? (Choose two.)
Configure Directory Services under the Access Keys section
Generate access keys based on directory and email-based users.
Install third-party software for bucket access to all users.
Generate access keys using third-party software.
Nutanix Objects supports directory services integration, which allows administrators to configure access keys based on directory and email-based users. This enables granular access control and security for buckets and objects. The administrator can configure directory services under the Access Keys section in Prism Central, and then generate access keys for internal users from the directory service and for the consultant from an email address. References: Nutanix Objects Administration Guide
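Once access keys have been generated for the directory-based internal users and the email-based consultant, either party can reach the bucket with any S3-compatible client; the sketch below uses the third-party boto3 SDK, and the endpoint URL, key pair, and bucket name are all placeholders.

```python
#!/usr/bin/env python3
"""Sketch: list objects in an Objects bucket using generated access keys.

Requires the third-party 'boto3' package (pip install boto3). Endpoint, keys,
and bucket name are placeholders.
"""
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://objects.example.com",      # placeholder Objects endpoint
    aws_access_key_id="CONSULTANT_ACCESS_KEY",       # placeholder access key
    aws_secret_access_key="CONSULTANT_SECRET_KEY",   # placeholder secret key
)

response = s3.list_objects_v2(Bucket="project-bucket")  # placeholder bucket name
for obj in response.get("Contents", []):
    print(obj["Key"], obj["Size"])
```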
Nutanix Objects can use no more than how many vCPUs for each AHV or ESXi node?
12
16
8
10
Nutanix Objects, a component of Nutanix Unified Storage (NUS), provides an S3-compatible object storage solution. It is deployed as a set of virtual machines (Object Store Service VMs) running on the Nutanix cluster's hypervisor (AHV or ESXi). The resource allocation for these VMs, including the maximum number of vCPUs per node, is specified in the Nutanix Objects documentation to ensure optimal performance and resource utilization.
According to the official Nutanix documentation, each Object Store Service VM is limited to a maximum of 8 vCPUs per node (AHV or ESXi). This constraint ensures that the object storage service does not overburden the cluster's compute resources, maintaining balance with other workloads.
Option C: Correct. The maximum number of vCPUs for Nutanix Objects per node is 8.
Option A (12), Option B (16), and Option D (10): Incorrect, as they exceed or do not match the documented maximum of 8 vCPUs per node.
Exact Extract from Nutanix Documentation:
From the Nutanix Objects Administration Guide (available on the Nutanix Portal):
"Each Object Store Service VM deployed on an AHV or ESXi node is configured with a maximum of 8 vCPUs to ensure efficient resource utilization and performance. This limit applies per node hosting the Object Store Service."
Additional Notes:
The vCPU limit is per Object Store Service VM on a given node, not for the entire Objects deployment. Multiple VMs may run across different nodes, but each is capped at 8 vCPUs.
The documentation does not specify different limits for AHV versus ESXi, so the 8 vCPU maximum applies universally.
An organization deployed Files in multiple sites, including different geographical locations across the globe. The organization has the following requirements to improve their data management lifecycle:
• Provide a centralized management solution.
• Automate archiving tier policies for compliance purposes.
• Protect the data against ransomware.
Which solution will satisfy the organization's requirements?
Prism Central
Data Lens
Files Analytics
Data Lens can provide a centralized management solution for Files deployments in multiple sites, including different geographical locations. Data Lens can also automate archiving tier policies for compliance purposes, by allowing administrators to create policies based on file attributes, such as age, size, type, or owner, and move files to a lower-cost tier or delete them after a specified period. Data Lens can also protect the data against ransomware, by allowing administrators to block malicious file signatures from being written to the file system. References: Nutanix Data Lens Administration Guide
An administrator has received an alert AI303551 - VolumeGroupProtectionFailed, with the alert details as follows:
Which error logs should the administrator review to determine why the related service is down?
solver.log
arithmos.ERROR
The error log that the administrator should review to determine why the related service is down is arithmos.ERROR. Arithmos is a service that runs on each CVM and provides volume group protection functionality for Volumes. Volume group protection is a feature that allows administrators to create protection policies for volume groups, which define how often snapshots are taken, how long they are retained, and where they are replicated. If the arithmos.ERROR log shows any errors or exceptions related to volume group protection, it can indicate that the related service is down or not functioning properly. References: Nutanix Volumes Administration Guide, page 29; Nutanix Volumes Troubleshooting Guide
How many configurable snapshots are supported for SSR in a file server?
25
50
100
200
The number of configurable snapshots that are supported for SSR in a file server is 200. SSR (Self-Service Restore) is a feature that allows end users to recover previous versions of their files and folders directly from share-level snapshots (for example, through the Windows Previous Versions tab) without administrator intervention. SSR snapshot schedules can be configured with various parameters, such as frequency and retention, and a file server supports up to 200 configured snapshots for SSR. References: Nutanix Files Administration Guide, page 81; Nutanix Files Solution Guide, page 9
An administrator has planned to copy some large files to a Files share through the RoboCopy tool. While moving the data, the copy operation was interrupted due to a network bandwidth issue. Which command option resumes an interrupted copy operation?
robocopy with the /c option
robocopy with the /s option
robocopy with the /z option
robocopy with the /r option
Nutanix Files, part of Nutanix Unified Storage (NUS), provides CIFS (SMB) shares that can be accessed by Windows clients. RoboCopy (Robust File Copy) is a Windows command-line tool commonly used to copy files to SMB shares, such as those provided by Nutanix Files. The administrator is copying large files to a Files share using RoboCopy, but the operation was interrupted due to a network bandwidth issue. The goal is to resume the interrupted copy operation without restarting from scratch.
Analysis of Options:
Option A (robocopy with the /c option): Incorrect. The /c option is not a valid RoboCopy option. RoboCopy options typically start with a forward slash (e.g., /z, /s), and there is no /c option for resuming interrupted copies.
Option B (robocopy with the /s option): Incorrect. The /s option in RoboCopy copies subdirectories (excluding empty ones) but does not provide functionality to resume interrupted copy operations. It is used to define the scope of the copy, not to handle interruptions.
Option C (robocopy with the /z option): Correct. The /z option in RoboCopy enables "restartable mode," which allows the tool to resume a copy operation from where it left off if it is interrupted (e.g., due to a network issue). This mode is specifically designed for copying large files over unreliable networks, as it checkpoints the progress and can pick up where it stopped, ensuring the copy operation completes without restarting from the beginning.
Option D (robocopy with the /r option): Incorrect. The /r option in RoboCopy specifies the number of retries for failed copies (e.g., /r:3 retries 3 times). While this can help with transient errors, it does not resume an interrupted copy operation from the point of interruption; it retries the entire file copy, which is inefficient for large files.
Why Option C?
The /z option in RoboCopy enables restartable mode, which is ideal for copying large files to a Nutanix Files share over a network that may experience interruptions. This option ensures that if the copy operation is interrupted (e.g., due to a network bandwidth issue), RoboCopy can resume from the point of interruption, minimizing data retransmission and ensuring efficient completion of the copy.
Exact Extract from Microsoft Documentation (RoboCopy):
From the Microsoft RoboCopy Documentation (available on Microsoft Docs):
"/z : Copies files in restartable mode. In restartable mode, if a file copy is interrupted, RoboCopy can resume the copy operation from where it left off, which is particularly useful for large files or unreliable networks."
Additional Notes:
Since RoboCopy is a Microsoft tool interacting with Nutanix Files SMB shares, the behavior of RoboCopy options is standard and not specific to Nutanix. However, Nutanix documentation recommends using tools like RoboCopy with appropriate options (e.g., /z) for reliable data migration to Files shares.
Nutanix Files supports SMB features like Durable File Handles (as noted in Question 19), which complement tools like RoboCopy by maintaining session state during brief network interruptions, but the /z option directly addresses resuming the copy operation itself.
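As a concrete illustration of restartable mode, the sketch below wraps a RoboCopy invocation in Python; the source folder and share UNC path are placeholders, and only the standard /E, /Z, /R, and /W options are used.

```python
#!/usr/bin/env python3
"""Sketch: copy a large folder to a Files SMB share with RoboCopy in restartable mode.

Paths are placeholders. /E copies subdirectories, /Z enables restartable mode so an
interrupted copy resumes where it left off, /R and /W bound retries on transient errors.
"""
import subprocess

SOURCE = r"D:\data\archive"                              # placeholder source folder
DESTINATION = r"\\files.example.com\dept-share\archive"  # placeholder Files share path

cmd = ["robocopy", SOURCE, DESTINATION, "/E", "/Z", "/R:3", "/W:10"]
rc = subprocess.run(cmd).returncode
# RoboCopy exit codes 0-7 indicate varying degrees of success; 8 or higher means failure.
print("robocopy exit code:", rc)
```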
An administrator is attempting to create a share that will provide user access via SMB and NFS. However, the Enable multiprotocol accounts for NFS clients setting is not available.
What would cause this issue?
The connection to Active Directory has not been configured.
The file server instance was only configured with SMB.
The incorrect Files license has been applied.
NFS configured to use unmanaged authentication.
The cause of this issue is that the connection to Active Directory has not been configured. Active Directory is a service that provides centralized authentication and authorization for Windows-based clients and servers. To create a share that will provide user access via SMB and NFS, the administrator must first configure the connection to Active Directory in the Files Console. This will allow the administrator to enable multiprotocol accounts for NFS clients, which are accounts that map NFS users to SMB users and groups for consistent access control across both protocols. References: Nutanix Files Administration Guide, page 32; Nutanix Files Solution Guide, page 6
An administrator needs to add a signature to the ransomware block list. How should the administrator complete this task?
Open a support ticket to have the new signature added. Nutanix support will provide an updated Block List file.
Add the file signature to the Blocked Files Type in the Files Console.
Search the Block List for the file signature to be added, click Add to Block List when the signature is not found in File Analytics.
Download the Block List CSV file, add the new signature, then upload the CSV.
Nutanix Files, part of Nutanix Unified Storage (NUS), can protect against ransomware using integrated tools like File Analytics and Data Lens, or through integration with third-party solutions. In Question 56, we established that a third-party solution is best for signature-based ransomware prevention with a large list of malicious file signatures (300+). The administrator now needs to add a new signature to the ransomware block list, which refers to the list of malicious file signatures used for blocking.
Analysis of Options:
Option A (Open a support ticket to have the new signature added. Nutanix support will provide an updated Block List file): Correct. Nutanix Files does not natively manage a signature-based ransomware block list within its own tools (e.g., File Analytics, Data Lens), as these focus on behavioral detection (as noted in Question 56). For signature-based blocking, Nutanix integrates with third-party solutions, and the block list (signature database) is typically managed by Nutanix or the third-party provider. To add a new signature, the administrator must open a support ticket with Nutanix, who will coordinate with the third-party provider (if applicable) to update the Block List file and provide it to the customer.
Option B (Add the file signature to the Blocked Files Type in the Files Console): Incorrect. The "Blocked Files Type" setting in the Files Console allows administrators to blacklist specific file extensions (e.g., .exe, .bat) to prevent them from being stored on shares. This is not a ransomware block list based on signatures; it is a simple extension-based blacklist, and file signatures (e.g., hashes or patterns used for ransomware detection) cannot be added this way.
Option C (Search the Block List for the file signature to be added, click Add to Block List when the signature is not found in File Analytics): Incorrect. File Analytics provides ransomware detection through behavioral analysis (e.g., anomaly detection, as in Question 7), not signature-based blocking. There is no "Block List" in File Analytics for managing ransomware signatures, and it does not have an "Add to Block List" option for signatures.
Option D (Download the Block List CSV file, add the new signature, then upload the CSV): Incorrect. Nutanix Files does not provide a user-editable Block List CSV file for ransomware signatures. The block list for signature-based blocking is managed by Nutanix or a third-party integration, and updates are handled through support (option A), not by manually editing a CSV file.
Why Option A?
Signature-based ransomware prevention in Nutanix Files relies on third-party integrations, as established in Question 56. The block list of malicious file signatures is not user-editable within Nutanix tools like the Files Console or File Analytics. To add a new signature, the administrator must open a support ticket with Nutanix, who will provide an updated Block List file, ensuring the new signature is properly integrated with the third-party solution.
Exact Extract from Nutanix Documentation:
From the Nutanix Files Administration Guide (available on the Nutanix Portal):
"For signature-based ransomware prevention, Nutanix Files integrates with third-party solutions that maintain a block list of malicious file signatures. To add a new signature to the block list, open a support ticket with Nutanix. Support will coordinate with the third-party provider (if applicable) and provide an updated Block List file to include the new signature."
An organization currently has a Files cluster for their office data, including all department shares. Most of the data is considered Cold Data, and they are looking to migrate it to free up space for future growth or newer data.
The organization has recently added an additional node with more storage. In addition, the organization is using the Public Cloud for .. storage needs.
What will be the best way to achieve this requirement?
Migrate cold data from Files to tape storage.
Backup the data using a third-party software and replicate to the cloud.
Setup another cluster and replicate the data with Protection Domain.
Enable Smart Tiering in Files within the File Console.
The organization uses a Nutanix Files cluster, part of Nutanix Unified Storage (NUS), for back office data, with most data classified as Cold Data (infrequently accessed). They want to free up space on the Files cluster for future growth or newer data. They have added a new node with more storage to the cluster and are already using the Public Cloud for other storage needs. The goal is to migrate Cold Data to free up space while considering the best approach.
Analysis of Options:
Option A (Migrate cold data from Files to tape storage): Incorrect. Migrating data to tape storage is a manual and outdated archival process. Nutanix Files does not have native integration with tape storage, and this approach would require significant manual effort, making it less efficient than Smart Tiering. Additionally, tape storage is not as easily accessible as cloud storage for future retrieval.
Option B (Backup the data using third-party software and replicate to the cloud): Incorrect. While backing up data with third-party software and replicating it to the cloud is feasible, it is not the best approach for this scenario. This method would create a backup copy rather than freeing up space on the Files cluster, and it requires additional software and management overhead. Smart Tiering is a native feature that achieves the goal more efficiently by moving Cold Data to the cloud while keeping it accessible.
Option C (Setup another cluster and replicate the data with Protection Domain): Incorrect. Setting up another cluster and using a Protection Domain to replicate data is a disaster recovery (DR) strategy, not a solution for migrating Cold Data to free up space. Protection Domains are used to protect and replicate VMs or Volume Groups, not Files shares directly, and this approach would not address the goal of freeing up space on the existing Files cluster; it would simply create a copy on another cluster.
Option D (Enable Smart Tiering in Files within the Files Console): Correct. Nutanix Files supports Smart Tiering, a feature that allows data to be tiered to external storage, such as the Public Cloud (e.g., AWS S3, Azure Blob), based on access patterns. Cold Data (infrequently accessed) can be automatically tiered to the cloud, freeing up space on the Files cluster while keeping the data accessible through the same share. Since the organization is already using the Public Cloud, Smart Tiering aligns perfectly with their infrastructure and requirements.
Why Option D?
Smart Tiering in Nutanix Files is designed for exactly this use case: moving Cold Data to a lower-cost storage tier (e.g., Public Cloud) to free up space on the primary cluster while maintaining seamless access to the data. Since the organization is already using the Public Cloud and has added a new node (which increases local capacity but does not address Cold Data directly), Smart Tiering leverages their existing cloud infrastructure to offload Cold Data, freeing up space for future growth or newer data. This can be configured in the Files Console by enabling Smart Tiering and setting policies to tier Cold Data to the cloud.
Exact Extract from Nutanix Documentation:
From the Nutanix Files Administration Guide (available on the Nutanix Portal):
"Smart Tiering in Nutanix Files allows administrators to tier Cold Data to external storage, such as AWS S3 or Azure Blob, to free up space on the primary Files cluster. This feature can be enabled in the Files Console, where policies can be configured to identify and tier infrequently accessed data while keeping it accessible through the same share."
An administrator is looking for a tool that includes these features:
• Permission Denials
• Top 5 Active Users
• Top 5 Accessed Files
• File Distribution by Type
Which Nutanix tool should the administrator choose?
File Server Manager
Prism Central
File Analytics
Files Console
The tool that includes these features is File Analytics. File Analytics is a feature that provides insights into the usage and activity of file data stored on Files. File Analytics consists of a File Analytics VM (FAVM) that runs on a Nutanix cluster and communicates with the File Server VMs (FSVMs) that host the file shares. File Analytics can display various reports and dashboards that include these features:
Permission Denials: This report shows the number of permission denied events for file operations, such as read, write, delete, etc., along with the user, file, share, and server details.
Top 5 Active Users: This dashboard shows the top five users who performed the most file operations in a given time period, along with the number and type of operations.
Top 5 Accessed Files: This dashboard shows the top five files that were accessed the most in a given time period, along with the number of accesses and the file details.
File Distribution by Type: This dashboard shows the distribution of files by their type or extension, such as PDF, DOCX, JPG, etc., along with the number and size of files for each type. References: Nutanix Files Administration Guide, page 93; Nutanix File Analytics User Guide
An administrator is tasked with performing an upgrade to the latest Objects version.
What should the administrator do prior to upgrading Objects Manager?
Upgrade Lifecycle Manager
Upgrade MSP
Upgrade Objects service
Upgrade AOS
Before upgrading Objects Manager, the administrator must upgrade AOS to the latest version. AOS is the core operating system that runs on each node in a Nutanix cluster and provides the foundation for Objects Manager and Objects service. Upgrading AOS will ensure compatibility and stability for Objects components. References: Nutanix Objects Administration Guide, Acropolis Operating System Upgrade Guide
Life Cycle Manager must have compatible versions of which two components before installing or upgrading Files? (Choose two.)
Nutanix Cluster Check
Active Directory Services
File Server Module
Acropolis Operating System
Nutanix Files, part of Nutanix Unified Storage (NUS), can be installed or upgraded using Life Cycle Manager (LCM), a tool in Prism Central or Prism Element for managing software updates. Before installing or upgrading Files, LCM must ensure that the underlying components are compatible to avoid issues during the process.
Analysis of Options:
Option A (Nutanix Cluster Check): Correct. Nutanix Cluster Check (NCC) is a health and compatibility checking tool integrated with LCM. LCM requires a compatible version of NCC to perform pre-upgrade checks and validate the cluster's readiness for a Files installation or upgrade. NCC ensures that the cluster environment (e.g., hardware, firmware, software) is compatible with the Files version being installed or upgraded.
Option B (Active Directory Services): Incorrect. Active Directory (AD) Services are used by Nutanix Files for user authentication (e.g., for SMB shares or multiprotocol access, as in Question 60), but AD is not a component managed by LCM, nor is it a prerequisite for LCM compatibility. AD configuration is a separate requirement for Files functionality, not LCM operations.
Option C (File Server Module): Incorrect. There is no "File Server Module" component in Nutanix terminology. Nutanix Files itself consists of File Server Virtual Machines (FSVMs), but this is the component being upgraded, not a prerequisite for LCM. LCM manages the Files upgrade directly and does not require a separate "module" compatibility.
Option D (Acropolis Operating System): Correct. The Acropolis Operating System (AOS) is the core operating system of the Nutanix cluster, managing storage, compute, and virtualization. LCM requires a compatible AOS version to install or upgrade Files, as Files relies on AOS features (e.g., storage, networking) and APIs. LCM checks the AOS version to ensure it meets the minimum requirements for the target Files version.
Selected Components:
A: NCC ensures cluster compatibility and readiness, which LCM relies on for Files installation or upgrades.
D: AOS provides the underlying platform for Files, and LCM must ensure its version is compatible with the Files version being deployed.
Exact Extract from Nutanix Documentation:
From the Nutanix Files Administration Guide (available on the Nutanix Portal):
"Before installing or upgrading Nutanix Files using Life Cycle Manager (LCM), ensure that LCM has compatible versions of Nutanix Cluster Check (NCC) and Acropolis Operating System (AOS). NCC performs pre-upgrade checks to validate cluster readiness, while AOS must meet the minimum version requirements for the target Files version."
An administrator needs to generate a File Analytics report which lists the top owners with space consumed. Which two formats are available to the administrator for this task? (Choose two.)
XML
PDF
CSV
JSON
Nutanix File Analytics, part of Nutanix Unified Storage (NUS), provides reporting capabilities for monitoring file server activity, including space usage by owners. The administrator wants to generate a report listing the top owners by space consumed, which is a standard report in File Analytics. The available export formats for such reports determine how the data can be shared or analyzed.
Analysis of Options:
Option A (XML): Incorrect. File Analytics does not support exporting reports in XML format. While XML is a common data format, Nutanix File Analytics focuses on more user-friendly formats like PDF and CSV for report exports.
Option B (PDF): Correct. File Analytics allows reports, such as the top owners by space consumed, to be exported in PDF format. This format is useful for creating a formatted, printable report that can be shared with stakeholders or archived for documentation purposes.
Option C (CSV): Correct. File Analytics also supports exporting reports in CSV (Comma-Separated Values) format. This format is ideal for further analysis, as the data can be imported into tools like Excel or other data processing software to manipulate the list of top owners and their space consumption.
Option D (JSON): Incorrect. JSON is a data format often used for APIs or data interchange, but File Analytics does not support exporting reports in JSON format. The focus is on PDF for presentation and CSV for data analysis.
Selected Formats:
B: PDF format provides a formatted report suitable for sharing or printing.
C: CSV format allows for data export and further analysis in external tools.
Exact Extract from Nutanix Documentation:
From the Nutanix File Analytics Administration Guide (available on the Nutanix Portal):
"File Analytics reports, such as top owners by space consumed, can be exported in PDF format for presentation or CSV format for further analysis. These formats allow administrators to share reports with stakeholders or import the data into other tools for additional processing."
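If the report is exported as CSV, it can be post-processed with a short script such as the sketch below; the file name and the column headers are assumptions about the export layout, not documented values.

```python
#!/usr/bin/env python3
"""Sketch: summarize a File Analytics CSV export of top owners by space consumed.

The file name and the 'Owner' / 'Space Consumed (GiB)' column headers are assumed
for illustration and may differ in an actual export.
"""
import csv

with open("top_owners_by_space.csv", newline="") as handle:  # placeholder file name
    rows = list(csv.DictReader(handle))

rows.sort(key=lambda row: float(row["Space Consumed (GiB)"]), reverse=True)
for row in rows[:5]:
    print(f'{row["Owner"]}: {row["Space Consumed (GiB)"]} GiB')
```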