Risk within the Zero Trust Exchange is a dynamic value calculated to:
Be hashed, truncated, and stored in an obfuscated manner.
Give visibility of risky activity and allow enterprises to set acceptable thresholds of risk.
Provide access to the network.
Reduce processing load by enabling low-risk traffic to bypass less critical inspections.
The correct answer is B. In Zero Trust architecture, risk is calculated dynamically so that the organization can see risky behavior and make informed policy decisions based on its own business tolerance. A dynamic risk value helps determine whether a request should be allowed, restricted, isolated, deceived, or blocked. This supports one of the central principles of Zero Trust: trust is not static, and policy decisions should reflect current conditions rather than fixed assumptions.
The purpose of calculating risk is not to provide generic network access. Zero Trust is not about putting users onto a trusted network. It is about making precise decisions for each request. Dynamic risk is also not primarily about reducing system load by skipping controls. While organizations may prioritize resources intelligently, the main architectural reason for risk calculation is to support visibility and policy enforcement.
Enterprises can use this dynamic assessment to align security decisions with their own acceptable thresholds, application sensitivity, user context, device posture, and observed behavior. Therefore, the best answer is that risk is calculated to provide visibility into risky activity and allow enterprises to define acceptable risk thresholds.
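The idea of enterprise-defined risk thresholds can be made concrete with a minimal sketch. This is purely illustrative, not a Zscaler API: the function name, threshold values, and action labels are all hypothetical, showing only how a dynamic risk score might be mapped to an outcome the enterprise tunes to its own tolerance.

```python
# Illustrative sketch only: map a dynamic risk score against
# enterprise-defined thresholds to pick a policy action.
# All names and numbers here are hypothetical.

def select_action(risk_score, thresholds):
    """Return a policy action for a request based on its current risk score."""
    if risk_score < thresholds["allow"]:
        return "allow"
    if risk_score < thresholds["isolate"]:
        return "allow_and_isolate"  # permit, but in a contained session
    return "block"

# An enterprise tunes these numbers to its own risk tolerance.
enterprise_thresholds = {"allow": 30, "isolate": 70}
```

Because the score is recomputed per request, the same user can fall into different bands as conditions change, which is the "dynamic" part of the model.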
By definition, Zero Trust connections are:
Independent of any network for control or trust.
Highly dependent on the network type, including whether that network is IPv4 or IPv6.
Based purely on a network appliance, constrained by how much CPU may be available.
Hairpinned through service chaining by an SD-WAN appliance.
The correct answer is A. By definition, Zero Trust connections are independent of the network for control or trust. This is one of the most important distinctions between Zero Trust and legacy security models. In traditional architectures, trust is often inherited from network location. If a user is on the corporate network, or connected into it by VPN, that user may gain broad access based on network reachability. Zero Trust rejects that model. Instead, trust is established through identity, posture, context, and policy for each access request.
Because of this, the underlying transport network becomes less important from a trust perspective. Whether the user is on Wi-Fi, broadband, mobile internet, IPv4, or IPv6 is not the defining factor in the access decision. The connection can operate over many types of networks, but the network itself is not what grants trust. Options B, C, and D all describe legacy or infrastructure-specific dependencies that Zero Trust is designed to avoid. A Zero Trust connection is therefore defined by policy-controlled, context-aware access, not by dependence on a particular network type or appliance path.
Connections approved by the Zero Trust Exchange must then enable permanent network-level access for at least 30 days.
True
False
The correct answer is B. False. Zero Trust architecture is specifically designed to avoid giving users broad, lasting network-level access after a connection is approved. Zscaler’s Universal ZTNA guidance states that users connect directly to applications, not the network, which minimizes attack surface and eliminates lateral movement. This means approval is tied to the specific access request and the relevant context at that moment, not to an ongoing entitlement to the underlying network.
The idea of granting network-level access for 30 days is much closer to a legacy VPN model, where a user is placed onto a routable network and may retain broad reachability beyond the immediate business need. Zero Trust does the opposite. It verifies identity and context, evaluates policy, and then enforces a specific control outcome for that request. If the user’s context changes, the policy outcome can also change. That is why Zero Trust is often described as dynamic and per-access, rather than static and persistent. A connection approved by the Zero Trust Exchange does not imply a long-term network privilege; it enables only the necessary application access under current policy conditions.
How is policy enforcement in Zero Trust done?
As a binary decision of allow or block.
Without trust, for example Zero Trust.
Conditionally, in that an allow or a block will have additional controls assigned, for example Allow and Isolate, or Block and Deceive.
At the network level, by source IP.
In Zero Trust architecture, policy enforcement is conditional and context-based, not limited to a simple binary allow-or-block model. Zscaler’s reference architectures explain that policy is evaluated using the full user context, including identity, device posture, location, group membership, and other conditions. Access decisions are therefore based on whether specific policy conditions are true, rather than only on static network attributes such as source IP address. For example, the same authenticated user may be allowed access from a managed device at headquarters but denied from an airport, even with the same credentials.
Zscaler documentation also shows that Zero Trust policy can go beyond simple pass or deny outcomes by applying additional controls. In DNS Security and Control, requests can be allowed, blocked, or modified. In ZIA policy development, Cloud App controls allow more granular outcomes than standard allow/block, such as restricting specific actions, applying quotas, or controlling what a user can do inside an application. This reflects the Zero Trust principle that enforcement is adaptive, granular, and tied to business and security context rather than network location alone.
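The conditional-outcome idea can be sketched in a few lines. This is a hypothetical illustration, not a real Zscaler API: the field names and rules are invented, but the shape of the return value shows the key point, a verdict paired with an optional additional control such as Allow and Isolate or Block and Deceive.

```python
# Hypothetical sketch: policy outcomes are not binary. A verdict can
# carry an additional control, e.g. ("allow", "isolate") or
# ("block", "deceive"). Field names and rules are illustrative only.

def evaluate(user_context):
    """Return a (verdict, additional_control) pair for one request."""
    if not user_context["managed_device"]:
        return ("allow", "isolate")   # permit, but render in isolation
    if user_context["destination_risk"] == "high":
        return ("block", "deceive")   # deny and redirect to a decoy
    return ("allow", None)            # clean allow, no extra control
```

The pairing is what distinguishes conditional enforcement from a plain allow/block firewall rule: the second element refines what "allow" or "block" actually means for that request.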
The only way to deploy inspection is to inspect all traffic. Technically speaking, at an architectural level, there is no way to have exceptions, such as for certain websites or for certain types of applications.
True
False
This statement is false. In Zscaler’s Zero Trust architecture, the recommended design objective is to inspect as much encrypted traffic as possible, because inspection enables security controls such as malware protection, sandboxing, intrusion prevention system (IPS), browser isolation, Data Loss Prevention (DLP), cloud application controls, tenancy restrictions, and file type controls. The reference architecture states that inspecting all TLS/SSL traffic provides the fullest visibility and strongest protection across the Zero Trust Exchange.
However, the same document also clearly confirms that inspection bypasses are supported in specific circumstances. These documented exceptions include banking and finance destinations, healthcare destinations, business functions that require unencryptable traffic, certificate-pinned applications, and some Microsoft 365 application flows that may not function properly under inspection. Zscaler strongly recommends using bypasses only in extreme circumstances, but it does not say exceptions are architecturally impossible. Therefore, from a verified Zero Trust design standpoint, full inspection is the preferred security posture, while selective exceptions are still an allowed and documented deployment option.
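The inspect-by-default-with-documented-exceptions pattern can be sketched as a tiny decision function. This is not Zscaler code; the category labels loosely mirror the exception types listed above, and the function name is hypothetical.

```python
# Sketch of the documented pattern: inspect all TLS/SSL traffic by
# default, bypassing only narrowly scoped exception categories.
# Category names are illustrative, not a real product taxonomy.

BYPASS_CATEGORIES = {"banking_and_finance", "healthcare", "certificate_pinned"}

def should_inspect(destination_category):
    """Default to full inspection; bypass only documented exceptions."""
    return destination_category not in BYPASS_CATEGORIES
```

The important design choice is the default: everything is inspected unless it matches an explicit, enumerated exception, rather than the reverse.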
Why should an enterprise categorize applications as part of its secure digital transformation to a Zero Trust architecture?
To build structured naming conventions for applications, for example Country:City:Location:Function.
So that these can be stored in a CMDB (Configuration Management Database) system, which can be used as a policy enforcement plane for application traffic.
To differentiate destination applications from each other, thus enabling the deployment of granular control from valid initiator to valid destination application.
To know which ACLs to set on their firewall.
The correct answer is C. In Zero Trust architecture, applications must be identified, defined, and differentiated so that policy can be applied at a granular level. Zscaler’s Zero Trust User-to-App Segmentation guidance explains that organizations should identify, define, and characterize applications and application segments as part of the move from legacy network-based access to a user-based approach using application segments and access policies. That directly supports the idea that application categorization is necessary to distinguish one destination from another and apply the correct user-to-application policy.
This is important because Zero Trust does not grant broad network access and then rely on downstream controls. Instead, it gives access to the right application for the right initiator under the right conditions. Without meaningful application categorization, organizations cannot create granular segmentation or precise access policies. Naming conventions and CMDB storage may be useful operationally, but they are not the core reason. Likewise, ACL planning belongs to legacy firewall thinking rather than Zero Trust design. Therefore, the strongest architecture-aligned answer is that applications are categorized in order to differentiate destinations and enable granular control from valid initiator to valid destination application.
Data center applications are moving to:
The branch.
Castle and moat type architectures.
The DMZ.
The cloud.
The correct answer is D. The cloud. Zero Trust architecture assumes that applications are no longer confined to traditional on-premises data centers. Zscaler’s Universal Zero Trust Network Access (ZTNA) guidance reflects that private applications increasingly exist across public cloud, private cloud, and data center environments, and users must securely access them without being placed on the network. This shift is one of the main reasons legacy castle-and-moat models are no longer sufficient.
In older architectures, applications were commonly protected by network location, perimeter firewalls, and DMZ-based publishing patterns. But as applications move to cloud environments, those location-based controls become harder to manage and less effective. Zero Trust instead applies identity, device posture, context, and application-specific policy, regardless of where the workload is hosted. Zscaler specifically positions ZPA and Universal ZTNA to support access to applications in public cloud instances, private cloud environments, and internal data centers through the same policy-driven model.
Because the long-term trend is away from fixed perimeters and toward distributed application hosting, the most accurate answer is that data center applications are moving to the cloud.
The second part of a Zero Trust architecture after verifying identity and context is:
Controlling content and access.
Re-checking the SAML assertion.
Enforcing policy.
Microsegmentation.
The correct answer is A. Controlling content and access. In the Zero Trust architecture sequence used in Zscaler’s architectural model, the flow is first to verify identity and context, then to control content and access, and finally to enforce policy. This order is important because Zero Trust does not begin by trusting the network. Instead, it first determines who the user is and what the conditions of the request are, such as device posture, location, group membership, and other contextual factors. Once that context is established, the architecture then evaluates the application request and the content flowing through the connection so that appropriate controls can be applied.
This second stage is where Zero Trust moves beyond identity alone. It is not enough to know who the user is; the architecture must also assess what they are trying to access and whether the transaction itself should be restricted, inspected, isolated, or blocked. Re-checking a SAML assertion is too narrow, microsegmentation is a design technique rather than the named architecture stage, and enforcing policy is the third stage. Therefore, the second part is controlling content and access.
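The three-stage sequence can be sketched as a simple pipeline. The stage names come from the text; every function body here is a hypothetical placeholder, included only to show how the stages hand off to each other, not how any real product implements them.

```python
# Illustrative three-stage pipeline: verify identity and context,
# control content and access, then enforce policy.
# All logic inside the stages is invented placeholder behavior.

def verify_identity_and_context(request):
    # Stage 1: establish who is asking and under what conditions.
    return {"user": request["user"], "posture_ok": request["managed_device"]}

def control_content_and_access(identity, request):
    # Stage 2: decide how this access and its content should be treated.
    return "allow" if identity["posture_ok"] else "allow_and_isolate"

def enforce_policy(decision):
    # Stage 3: apply the chosen outcome to the transaction.
    return decision

def process_request(request):
    identity = verify_identity_and_context(request)
    decision = control_content_and_access(identity, request)
    return enforce_policy(decision)
```

Note that stage 2 consumes the context produced by stage 1, which is why the ordering matters: content and access decisions are conditioned on an already-verified identity.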
As a connection goes through, the Zero Trust Exchange:
Initiates the three sections of a Zero Trust architecture (Verify, Control, Enforce), which once completed, will allow the Zero Trust Exchange and the application to complete the transaction.
Sits as a ruggedized, hardened appliance in the data center of the enterprise, where the enterprise must establish private links to major peering hubs.
Acts as the opposite of a reverse proxy, inspecting every single packet that goes out, but strictly without the ability to provide controls such as firewalling, intrusion prevention system (IPS), or data loss prevention (DLP).
Forwards packets as a passthrough cloud security firewall.
The correct answer is A. In Zscaler’s architecture, the Zero Trust Exchange is not just a packet-forwarding firewall or a single appliance. It is the cloud-delivered policy and security fabric that evaluates access through the core Zero Trust sequence of verify, control, and enforce. The architecture documents describe Zero Trust access as depending on establishing identity, evaluating context, and then applying the appropriate control for that specific request. ZPA guidance explains that users are evaluated for context such as location, device posture, groups, and time of day, and access is granted only if the request matches the required policies.
Option B is incorrect because the Zero Trust Exchange is not limited to a hardened enterprise data center appliance. Option C is incorrect because Zscaler explicitly provides inline controls such as firewalling, DLP, and related inspection services. Option D is also incomplete because the Zero Trust Exchange does more than pass traffic through; it makes access and security decisions. Therefore, the best architecture-aligned answer is that the Zero Trust Exchange carries out the Zero Trust process of Verify, Control, and Enforce as part of completing the transaction.
Assessing risk is:
A non-recurring process to determine how to treat requests from a specific initiator for the next 30 days.
Universal control across the entire enterprise. Once assessed, risk applies to all traffic from that enterprise.
An ongoing process to verify publicly known bad actor IP addresses.
An assessment of all things related to the current connection, previous context, and considered on an ongoing basis for future requests, thus allowing for unique and dynamic changes in the consideration of risk.
The correct answer is D. In Zero Trust architecture, risk assessment is continuous and adaptive, not static. Zscaler documentation states that policy decisions consider far more than a one-time identity check. User access is evaluated using context such as user identity, device posture, location, group membership, and time of day, and those conditions can change between requests. ZPA guidance also states that organizations should use logs to determine which users are accessing which apps, and automatically adapt based on any changes in context.
This directly supports the idea that risk is based on the current connection, informed by previous context, and continually reconsidered for future access attempts. Option A is incorrect because Zero Trust does not create a long-lived 30-day trust decision. Option B is incorrect because risk is not universally applied to all enterprise traffic once assessed. Option C is too narrow, since risk is not limited to checking public bad-IP lists. Instead, Zero Trust risk is dynamic and contextual, enabling policy to change uniquely for each request as conditions evolve. That is why the best answer is D.
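A small sketch can show how current signals and previous context might combine into one recomputed score. The signal names, weights, and cap are all invented for illustration; no real scoring model is implied.

```python
# Hypothetical sketch: risk blends the current connection's signals
# with previously observed context, and is recomputed on every request.
# All field names and weights are illustrative.

def assess_risk(current, history):
    score = 0.0
    if not current["managed_device"]:
        score += 40            # current-connection signal
    if current["location"] not in current["usual_locations"]:
        score += 20            # deviation from prior context
    # prior blocked security events raise risk for future requests
    score += 10 * sum(1 for event in history if event["blocked"])
    return min(score, 100.0)
```

Because `history` grows over time, the same request parameters can yield a higher score after repeated bad behavior, which is the "ongoing" aspect the answer describes.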
Enterprises can deliver full security controls inline, without needing to decrypt traffic.
True
False
The correct answer is B. False. In Zero Trust architecture, full inline security depends on the ability to inspect what is actually inside the traffic flow, not just the fact that a connection exists. When traffic is encrypted, security services cannot fully evaluate malware, command-and-control traffic, sensitive data movement, risky application behavior, or policy violations unless the traffic is decrypted and inspected. Zscaler’s TLS/SSL inspection guidance makes this clear by positioning decryption as essential for complete visibility and enforcement across encrypted internet traffic.
Without decryption, an organization may still apply limited controls such as destination reputation, IP-based filtering, category decisions, or metadata-based enforcement. However, that is not the same as full security controls inline. Full Zero Trust protection requires deeper visibility into content and transactions so that threat prevention, Data Loss Prevention (DLP), cloud application controls, sandboxing, and other advanced protections can be applied accurately. Because modern traffic is heavily encrypted, failing to decrypt creates blind spots and weakens policy enforcement. Therefore, the statement is false: enterprises cannot deliver full inline security controls across encrypted traffic without decryption.
Connections to destination applications are the same, regardless of location or function.
True
False, each application, whether internal or external, trusted or untrusted, must be considered for connectivity based on the risk profile and risk acceptance of each enterprise.
The correct answer is B. In Zero Trust architecture, application connectivity is not treated as identical across all destinations. Each application must be evaluated according to its business purpose, sensitivity, exposure, trust level, data handled, user population, and enterprise risk tolerance. This is a core departure from legacy network-centric design, where many applications were reached through the same broad network access model once a user was connected.
Zero Trust instead applies application-specific and context-aware access control. An internal private application, a sanctioned Software as a Service (SaaS) platform, an unmanaged external website, and a high-risk destination should not all receive the same access treatment. Some may require direct allow, some may require isolation, some may require additional inspection, and some may need to be blocked entirely.
This is why Zero Trust policy is granular rather than uniform. The architecture assumes that connectivity decisions must reflect risk. Application location alone does not determine trust, and neither does function alone. The enterprise must decide how each destination is handled based on its overall risk profile and policy requirements. Therefore, the statement is false.
The first step of verifying identity is the “who.” And “who” is not just who the user is, but also, in addition:
The destination, who can also be a user.
The device, and understanding what levels of access that device has.
The type of bare-metal server that the packets traverse on their way to the destination.
The IaaS destination that the user is connecting to.
The correct answer is B. In Zero Trust architecture, the “who” is broader than just the username or authenticated person. It also includes the device context associated with that request. This is important because Zero Trust does not make access decisions based only on user identity. It also considers whether the device is trusted, managed, compliant, encrypted, protected by endpoint security, or otherwise suitable for the requested level of access.
That means the “who” can be understood as the user together with the device being used, since both contribute to the trust decision. A user on a managed endpoint with proper posture may receive a different access outcome than the same user on an unmanaged or risky device. This is a core Zero Trust principle because it prevents identity-only decisions from becoming overly permissive.
The other options do not best match this concept. The destination is part of access context, but it is not the added meaning of “who” in this question. Bare-metal server type and IaaS destination are unrelated to verifying the requesting identity. Therefore, the correct answer is the device, and understanding what levels of access that device has.
Content stored within a SaaS/PaaS/IaaS location can be:
100% trusted, as cloud providers make sure content is safe before it is uploaded.
Considered risky until inspected, either through inline SSL/TLS controls or through assessing the files “at rest” using an out-of-band assessment.
Partially trusted depending on whether you maintain a proper audit log for access.
Should never be trusted.
The correct answer is B. In Zero Trust architecture, content stored in Software as a Service (SaaS), Platform as a Service (PaaS), or Infrastructure as a Service (IaaS) environments should not be assumed safe simply because it resides in a cloud platform. Zscaler’s security model emphasizes that trust must be established through inspection and policy, not by location alone. The TLS/SSL inspection architecture shows that inline inspection is necessary to evaluate content moving through encrypted sessions, while Zscaler’s broader data protection model also includes out-of-band assessment for content already stored in cloud services.
This aligns with the Zero Trust principle that applications and content can exist anywhere, but they are not automatically trustworthy because of where they are hosted. Cloud providers secure the platform, but they do not guarantee that every uploaded file, shared object, or stored dataset is safe, compliant, or free from malware or data exposure risk. At the same time, saying content should never be trusted is too absolute; Zero Trust is about verification, not blanket denial. Therefore, the most accurate answer is that cloud-stored content should be treated as risky until inspected, whether inline during transfer or out of band while at rest.
A Zero Trust network can be:
Located anywhere.
Built on IPv4 or IPv6.
Built using VPN concentrators.
Located anywhere and built on IPv4 or IPv6.
The correct answer is D. Located anywhere and built on IPv4 or IPv6. In Zero Trust architecture, the network and application access model is not tied to a specific physical location, branch, or data center. Zscaler’s Zero Trust guidance emphasizes that users, devices, and applications can be securely connected in any location, which is a core shift away from legacy perimeter-based designs. The architecture is also described as IP independent, meaning policy and access decisions are not fundamentally anchored to traditional network constructs such as fixed addressing or trusted subnets. This is why Zero Trust can operate across modern environments regardless of where workloads reside.
The option about VPN concentrators is incorrect because VPN-based architecture is associated with legacy remote-access models that extend network trust and expose services differently from Zero Trust. In contrast, Zero Trust reduces implicit trust, avoids broad network-level access, and focuses on secure, application-aware connectivity. Therefore, the most complete and accurate answer is that a Zero Trust network can be located anywhere and built on IPv4 or IPv6, rather than being limited to a legacy transport or perimeter model.
Identity is a binary decision, not to be revisited. Once a decision is made about who, what, and where, that is final for at least 48 hours.
True
False
The correct answer is B. False. Zero Trust architecture does not treat identity and context as a one-time, fixed decision. Zscaler’s architecture guidance shows that access is based on ongoing context, including user identity, device posture, location, and other factors that can change over time. For ZIA, policy assignment evaluates the user, device, location, group, and more to determine which policies apply. For ZPA, user access is matched against current conditions such as location, device posture, user group, department, and time of day.
Zscaler documentation also describes reauthentication intervals and session timeout controls, which further shows that identity and authorization are not treated as permanently settled after one decision. In addition, device posture checks can be repeated over time, and a failed posture check can cause a different policy to be applied.
This is fundamental to Zero Trust: trust is continually evaluated, not granted once and assumed valid for an arbitrary period such as 48 hours. Therefore, the statement is false because identity and access context must be revisited as conditions change.
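Reauthentication intervals and repeated posture checks can be sketched as a single re-evaluation trigger. The interval value, field names, and function are all hypothetical, included only to illustrate why a decision never stays settled indefinitely.

```python
# Sketch: a session's trust decision is revisited whenever the
# reauthentication interval elapses OR device posture changes.
# The 3600-second interval and all field names are illustrative.

def needs_reevaluation(session, now, reauth_interval=3600.0):
    expired = (now - session["last_verified"]) >= reauth_interval
    posture_changed = session["posture"] != session["posture_at_verify"]
    return expired or posture_changed
```

Either condition alone forces a fresh evaluation, so a posture failure is acted on even if the timer has not yet expired.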
If an enterprise is protecting its services at a network level, such as using firewalls, what happens to that protection when a user leaves the network? (Select 2)
The initiator will not have access to the service.
Network access is maintained via TCP keepalive messages.
Users will continue to be able to access services via the internet.
A path from initiator to the network must be put in place, for example VPN.
The correct answers are A and D. In a legacy, network-based protection model, security controls such as firewalls are tied to the enterprise network perimeter. When a user leaves that network, the user typically loses direct access to internal services because the protection model assumes the user is on the trusted network or connected into it. To restore access, the organization usually has to establish a path back into the network, most commonly through a virtual private network (VPN) or another routable connection. Zscaler’s Zero Trust guidance contrasts directly with this legacy pattern by stating that users should access applications without sharing network context with them.
This is one of the reasons Zero Trust replaces legacy VPN-centric design. ZPA documentation explicitly contrasts Zero Trust with legacy VPNs and firewalls by emphasizing that users connect directly to applications, not the network, thereby minimizing attack surface and removing dependence on being “inside” the network. Therefore, in a network-level protection model, once the user leaves the network, access is not naturally preserved; instead, access is lost unless a path such as VPN is put in place. The TCP keepalive option is unrelated, and unrestricted internet access to services would contradict the private, firewall-protected network design.
What is a security limitation of traditional firewall/VPN products?
Their IP addresses are published on the internet.
SSL-encrypted VPN traffic bypasses security inspection.
They cannot be scaled to handle increased load.
They rely on easily tampered-with endpoint software.
The correct answer is B. A key limitation of many traditional firewall and virtual private network (VPN) architectures is that encrypted VPN traffic can bypass or reduce effective security inspection, especially when the architecture is designed mainly to provide network connectivity rather than full inline content inspection. Zscaler’s TLS/SSL inspection guidance explains that without decryption, organizations are limited in how well they can inspect content for malware, data exfiltration, and risky activity. It also notes that legacy platforms often struggle to inspect encrypted traffic at scale, which creates blind spots in protection.
This matters because Zero Trust is not satisfied by simply creating a secure tunnel. A tunnel can protect confidentiality in transit, but it does not guarantee that the content inside the connection is safe or compliant. Zscaler’s Zero Trust architecture shifts away from broad network access and toward inline, policy-driven inspection and enforcement. The issue is not merely internet publication of IPs or scalability in the abstract; the deeper security weakness is that encrypted traffic can traverse the legacy VPN model without full security visibility and control.
What types of attributes can be used to assess whether access is risky? (Select 2)
The endpoint operating system of the initiator.
An analysis of device posture to examine attributes such as domain joined status, a certificate, whether the device has AV/EDR installed, and whether the device is running disk encryption.
Leveraging APIs available on the Layer 3 devices on the network to scan for malicious services or hosts in the environment.
Seeing patterns in user behavior around things such as blocked malware downloads and blocked access to phishing sites.
The correct answers are B and D. In Zero Trust architecture, risk is determined from multiple contextual signals, not from a single static attribute. Zscaler’s architecture guidance states that policy decisions evaluate the user, machine, location, group, and more, which directly supports the use of device posture as a risk input. Device posture factors such as domain membership, certificate presence, endpoint protection tools like antivirus or endpoint detection and response (EDR), and disk encryption status are strong indicators of whether the device can be trusted for a given access request.
Behavioral patterns are also valid risk indicators. Zero Trust does not look only at who the user is; it also considers how that user and device are behaving over time. Repeated blocked malware downloads, blocked phishing attempts, and similar negative security events can indicate elevated risk and justify tighter policy enforcement on future requests. By contrast, the operating system alone is too narrow to be the best answer, and Layer 3 device API scanning is not the access-risk attribute model being tested here. Therefore, the strongest Zero Trust choices are device posture analysis and behavioral risk patterns.
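The two attribute families, device posture checks and behavioral signals, can be sketched as inputs to one risk view. Every field name, the passing score of 3, and the blocked-event cutoff are invented for illustration; real posture profiles and behavior analytics are far richer.

```python
# Hypothetical sketch combining the two risk-attribute families:
# device posture (answer B) and behavioral signals (answer D).
# Field names and thresholds are illustrative only.

def posture_score(device):
    """Count passing posture checks: 0 (worst) to 4 (best)."""
    checks = [device["domain_joined"], device["has_certificate"],
              device["edr_running"], device["disk_encrypted"]]
    return sum(1 for ok in checks if ok)

def is_risky(device, recent_blocked_events):
    """Flag access as risky on weak posture or a pattern of blocked events."""
    return posture_score(device) < 3 or recent_blocked_events > 2
```

Either family alone can trip the risk flag, mirroring the point that a well-postured device used by a user with a bad behavioral pattern is still a risky request.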
TESTED 15 Mar 2026