You want to assign resources to your blueprint during the deployment phase. In this scenario, which statement is correct?
A. To assign resources in the blueprint, you must have completed the device profile and device assignments.
B. To assign resources in the blueprint, you must have already created them under global resources.
C. All resources are created and assigned under the blueprint's Resources tab.
D. All resources are automatically assigned values from the available resource pools.
In Apstra 5.1, “resources” (such as ASNs, IP addressing, and VNIs) are allocated to blueprint elements using resource pools. The blueprint does not require you to manually craft every individual resource value; instead, Apstra’s workflow is to have you indicate which pool(s) should be used for the blueprint, and then Apstra automatically pulls and assigns the required values. This automation is fundamental to Apstra’s intent-based model: once the blueprint knows which pools to consume, it can deterministically allocate unique values across the fabric and generate consistent Junos configuration for the assigned devices.
Option D best matches this behavior because it reflects the documented mechanism: required resources are automatically pulled from the selected pool(s) and assigned in a fast, bulk transaction. This is what enables repeatable deployments—especially in EVPN-VXLAN data center fabrics—because resource collisions and manual tracking are avoided.
Option A is not the defining prerequisite for resource assignment; device profile and device assignment are important overall build steps, but the correctness of resource assignment is tied to pool selection and availability rather than being strictly gated by those tasks. Option B is incorrect because pools can be created and managed beyond only “global” contexts, and Apstra also supports creating additional pools from within the blueprint when needed. Option C is misleading because resources are governed by pools and allocation, not only by manual creation under a single tab.
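For illustration, the end result of pool-driven allocation is rendered Junos configuration. The following is a hedged sketch only: the ASN and loopback address are hypothetical values of the kind Apstra might pull from an ASN pool and an IP pool for one leaf, not output copied from an actual blueprint.

```
/* Hypothetical values drawn from an ASN pool and a loopback IP pool */
routing-options {
    router-id 10.0.0.11;
    autonomous-system 65011;
}
interfaces {
    lo0 {
        unit 0 {
            family inet {
                address 10.0.0.11/32;
            }
        }
    }
}
```

The point of the sketch is that the operator selects pools, and per-device values such as 65011 and 10.0.0.11 are chosen and tracked by Apstra, not typed by hand.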
Verified Juniper sources (URLs):
https://www.juniper.net/documentation/us/en/software/apstra5.1/apstra-user-guide/topics/concept/resources.html
https://www.juniper.net/documentation/us/en/software/apstra5.1/apstra-user-guide/topics/concept/freeform-resource-management.html
https://www.juniper.net/documentation/us/en/software/apstra5.1/apstra-user-guide/topics/ref/resource-pools-api.html
What are two agent processes that operate within the Juniper Apstra device agent? (Choose two.)
Routing agent
Authentication agent
Telemetry agent
Deployment agent
In Apstra deployments that use on-box device agents, the agent package installs multiple processes inside the switch’s NOS namespace to provide an isolated runtime environment for Apstra control and telemetry collection. Two of those processes are the Telemetry Agent and the Deployment Agent. The Telemetry Agent is responsible for collecting operational information from the device—such as LLDP neighbor details, routing-related state, and interface information—and sending that telemetry upstream to Apstra. This telemetry is a key input for closed-loop assurance in EVPN-VXLAN fabrics, where Apstra correlates underlay health (interfaces, neighbors, sessions) with overlay services.
The Deployment Agent is responsible for receiving configuration content pushed from Apstra and applying it on the device. In a Junos v24.4 fabric, this is the component that enables Apstra to converge device configuration to the blueprint’s intent (for example, BGP underlay, EVPN signaling, and VXLAN constructs) without requiring manual CLI workflows. Both agents are typically idle most of the time, becoming active when Apstra needs to apply configuration changes or when significant state changes trigger telemetry updates.
The other listed options, “routing agent” and “authentication agent,” are not the named Apstra device-agent processes described for the on-box agent package in Juniper documentation.
Verified Juniper sources (URLs):
https://www.juniper.net/documentation/us/en/software/apstra4.2/apstra-server-and-security-guide/topics/concept/apstra-device-agents.html
You are building a blueprint using Juniper Apstra and must change the cable map to match the physical environment. Where in the blueprint UI is this task accomplished?
Active → Physical → Links
Staged → Physical → Links
Active → Connectivity Templates
Staged → Connectivity Templates
In Apstra 5.1, the cabling map is part of the blueprint’s intended physical topology. Cable-map edits are performed in the Staged workspace because Staged is where you modify intent (what the fabric should look like) before committing those changes and deploying them. The Staged → Physical → Links view provides both a tabular and topology-oriented representation of spine-to-leaf and other physical connections. When Apstra auto-assigns interfaces during initial build, the logical mapping may not match the real patching in the data center. The cabling map editor allows you to override interface names (and where applicable, link addressing metadata) so the blueprint accurately reflects the actual patch panel and switchport usage.
This accuracy is critical in a Junos v24.4 leaf-spine fabric because underlay correctness depends on the real physical adjacencies: link membership, LAG expectations (where used), and the resulting BGP neighbor relationships that carry EVPN signaling for VXLAN overlays. By updating the cabling map in Staged, you ensure Apstra can correctly validate neighbor discovery, verify intent, and produce consistent device configuration aligned to the real-world wiring. After making the cabling corrections, you commit the staged changes and then deploy/apply so that Apstra’s intent and the running network converge. This work is not performed under Active (which reflects deployed state) and is not a function of Connectivity Templates (which are for endpoint/service attachment rather than fabric cabling).
Verified Juniper sources (URLs):
https://www.juniper.net/documentation/us/en/software/apstra5.0/apstra-user-guide/topics/topic-map/cabling-map-edit-datacenter.html
https://www.juniper.net/documentation/us/en/software/apstra6.0/apstra-user-guide/topics/topic-map/cabling-map-edit-datacenter.html
https://www.juniper.net/documentation/us/en/software/jvd/jvd-dcfabric-5-stage/configuration_walkthrough.html
In Juniper Apstra terminology, to which network operating system concept does a routing zone refer?
IRB
VRF
VLAN
Access list
In Apstra 5.1, a routing zone is the primary construct used to represent an L3 domain for multitenant isolation. In traditional network operating system terms, that maps to a VRF (Virtual Routing and Forwarding instance). Each routing zone is placed “in its own VRF,” which provides independent routing tables and isolates IP traffic so that different tenants can reuse overlapping IP subnets without conflict. This is central to modern EVPN-VXLAN data center design, where tenants typically require clean separation of routing and policy boundaries.
Within a routing zone, you can create one or more virtual networks (often mapped to VXLAN segments) that provide L2 extension across racks while still being contained by the tenant’s VRF. If L3 gateway services are enabled for those virtual networks, their gateway interfaces (for example, IRB interfaces on Junos v24.4 leaf switches) are associated with the routing zone’s VRF so that inter-subnet routing occurs within the tenant boundary.
This terminology distinction is important: an IRB is an interface construct used to provide L3 gateway functionality for a VLAN/VXLAN segment; a VLAN is a Layer 2 segmentation mechanism; and an access list is a policy enforcement tool. A routing zone, however, defines the tenant’s L3 routing context, which is precisely what a VRF provides on Junos.
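As a hedged sketch of the mapping, a routing zone corresponds to a Junos VRF routing instance like the following. The tenant name, route distinguisher, and route target are hypothetical examples, not values Apstra will necessarily render.

```
routing-instances {
    /* Hypothetical tenant (routing zone) name */
    TENANT-A {
        instance-type vrf;
        /* L3 gateway interface for a virtual network in this zone */
        interface irb.100;
        route-distinguisher 10.0.0.11:100;
        vrf-target target:65000:100;
        vrf-table-label;
    }
}
```

Each routing zone Apstra creates yields an isolated routing table of this kind on the participating leaves, which is why overlapping tenant subnets do not conflict.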
Verified Juniper sources (URLs):
https://www.juniper.net/documentation/us/en/software/apstra5.0/apstra-user-guide/topics/concept/routing-zones.html
https://www.juniper.net/documentation/us/en/software/apstra4.2/apstra-user-guide/topics/concept/routing-zones.html
You are allowed to assign tags for which three objects? (Choose three.)
Virtual networks
Interfaces
Generic systems
Property sets
Device profiles
In Juniper Apstra 5.1, tags are a lightweight metadata mechanism used to classify objects and enable conditional automation (for example, driving dynamic configlets or simplifying filtering/searching in the UI). Apstra supports tagging several blueprint-operational objects that commonly participate in day-1/day-2 workflows.
Virtual networks can be tagged so operators can group, search, and apply automation consistently across sets of segments. This is useful in EVPN-VXLAN fabrics where virtual networks represent VLAN- or VXLAN-backed broadcast domains and you may want policies or configlet logic to apply to all “finance” or “pci” segments as a group.
Interfaces can be tagged directly within a blueprint (for example, leaf access ports, uplinks, or specific border-facing ports). Interface tags are often used to drive template-based configuration behavior and to simplify operational actions across many ports without relying on fragile naming conventions.
Generic systems (internal or external) can also be tagged. Apstra documentation explicitly describes using tags to specify roles for internal generic systems, enabling you to differentiate server types or attachment roles and then apply the correct intent (connectivity templates, VN attachments, or policies) in a repeatable way.
By contrast, property sets are structured data objects used for parameterization (YAML/JSON values for templates/probes), and device profiles describe hardware/NOS capabilities; they are not the standard blueprint objects for tag assignment in this scenario.
Verified Juniper sources (URLs):
https://www.juniper.net/documentation/us/en/software/apstra5.1/apstra-user-guide/topics/task/tag-virtual-network-update.html
https://www.juniper.net/documentation/us/en/software/apstra5.1/apstra-user-guide/topics/topic-map/tag-interface-add-remove-datacenter.html
https://www.juniper.net/documentation/us/en/software/apstra5.1/apstra-user-guide/topics/topic-map/internal-generic-system-create.html
What is the purpose of an EVPN Ethernet segment identifier (ESI)?
To provide a hop count between devices
To identify Layer 2 frame types for filtering purposes
To specify a BGP community
To prevent loops within a LAG connection
In EVPN multihoming, the Ethernet Segment Identifier (ESI) is the mandatory identifier used to represent a multihomed Ethernet segment—for example, a server or downstream switch that is dual-homed to two leaf devices using a single logical LAG/port-channel. By assigning the same ESI to the participating leaf-facing interfaces, the fabric recognizes those links as belonging to one Ethernet segment and can apply EVPN multihoming procedures consistently across the pair.
A key outcome of EVPN multihoming is loop prevention for multi-attached Layer 2 domains. EVPN uses the Ethernet segment concept (identified by the ESI) along with Designated Forwarder (DF) election to ensure that only the appropriate device forwards BUM (broadcast, unknown unicast, multicast) traffic toward the multihomed segment, avoiding duplicate forwarding and L2 loops. In addition, ESI-based multihoming supports resilient forwarding behavior during failures (for example, link or node loss) while maintaining correct advertisement and convergence in the EVPN control plane.
Therefore, among the provided options, the purpose that best matches how ESI is used operationally is to prevent loops within a LAG/multihomed connection, which is fundamental to EVPN-VXLAN data center designs on Junos v24.4 leaf devices and is also explicitly supported by Apstra when modeling ESI-based dual-homing.
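A hedged Junos sketch of the concept: the same ESI is configured on the server-facing LAG of both leaves, which is what lets the fabric treat the two links as one Ethernet segment and run DF election. The ESI value, LACP system ID, and VLAN name below are hypothetical, and in an Apstra-managed fabric this configuration is rendered by Apstra rather than typed by hand.

```
interfaces {
    /* LAG toward the dual-homed server; identical on both leaves */
    ae0 {
        esi {
            /* Same 10-byte ESI on both leaves marks one Ethernet segment */
            00:01:02:03:04:05:06:07:08:09;
            all-active;
        }
        aggregated-ether-options {
            lacp {
                active;
                /* Same LACP system ID so the server sees one partner */
                system-id 00:00:51:00:00:01;
            }
        }
        unit 0 {
            family ethernet-switching {
                interface-mode trunk;
                vlan {
                    members V100;
                }
            }
        }
    }
}
```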
Verified Juniper sources (URLs):
https://www.juniper.net/documentation/us/en/software/nce/evpn-lag-multihoming-guide/topics/concept/evpn-lag-guide-introduction.html
https://www.juniper.net/documentation/us/en/software/nce/evpn-lag-multihoming-guide/topics/task/evpn-lag-guide-esi-types-lacp.html
https://www.juniper.net/documentation/us/en/software/junos/evpn/topics/topic-map/evpn-mh-df-election.html
You want to route between tenants in a multitenant environment in Juniper Apstra. What are two ways to accomplish this task? (Choose two.)
Route between VRFs on a VTEP-enabled device.
Use an external device to route between tenants.
Use iBGP to route within the same AS number.
Use virtual networks to route between VRFs.
In Apstra 5.1 multitenancy, tenants are modeled as routing zones, and each routing zone maps to a distinct VRF to provide strict Layer 3 isolation. Because each tenant’s VRF is separate, “routing between tenants” is effectively inter-VRF routing. Apstra’s routing-zone behavior emphasizes that inter-tenant routing is achieved via external systems: you connect each tenant/routing zone to an external router or firewall (often attached to border leafs), and that external device performs the policy-controlled inter-VRF routing between tenants. This approach is the most common because it centralizes security and compliance controls (stateful inspection, zone policies, NAT, logging) on the firewall/router while keeping the fabric clean and consistent.
A second method is to perform inter-VRF routing on a VTEP-capable border leaf that terminates the tenant VRFs. In EVPN-VXLAN designs, border leafs are frequently the demarcation where tenant VRFs connect to outside domains; when the same border leaf hosts multiple tenant VRFs and is designed to provide L3 services for them, it can act as the routing point between VRFs (subject to your design and security requirements). Junos v24.4 supports VRFs and policy constructs required for controlled route exchange and forwarding behavior, but Apstra’s intent model still expects routing-zone isolation by default—so any inter-tenant connectivity should be explicitly designed and governed, typically at the border.
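As a hedged sketch of what policy-controlled inter-VRF route exchange on a Junos border leaf can look like, the fragment below uses a route-target import policy. All names and values are hypothetical, and note that configuring vrf-import replaces the default import behavior, so the policy must also accept the VRF’s own route target.

```
policy-options {
    community TENANT-A-RT members target:65000:100;
    community TENANT-B-RT members target:65000:200;
    policy-statement TENANT-A-IMPORT {
        /* Must still accept the VRF's own routes */
        term own-routes {
            from community TENANT-A-RT;
            then accept;
        }
        /* Explicitly leak selected routes from tenant B */
        term leak-from-b {
            from community TENANT-B-RT;
            then accept;
        }
        term reject-rest {
            then reject;
        }
    }
}
routing-instances {
    TENANT-A {
        instance-type vrf;
        route-distinguisher 10.0.0.11:100;
        vrf-target target:65000:100;
        vrf-import TENANT-A-IMPORT;
    }
}
```

Whether this belongs on a border leaf or on an external firewall is a design decision; Apstra’s default posture keeps routing zones isolated, so any leaking of this kind should be explicit and governed.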
You have a configuration deviation in the Juniper Apstra dashboard. What does this anomaly indicate in this scenario?
A. A device’s configuration has been updated by the server.
B. A device is ready to be configured by the system.
C. A device’s configuration has been changed using a method outside of Apstra.
D. A device cannot support a configuration command sent by the system.
A configuration deviation (also called a configuration anomaly) in Apstra indicates that the device’s running configuration differs from Apstra’s intended (golden) configuration for that node. In day-to-day operations, this most commonly occurs when an operator makes a change outside of Apstra’s control, such as entering commands directly on the device CLI (for example, on a Junos v24.4 switch), using another automation system, or applying an out-of-band configuration method.
Apstra continuously compares the device’s operational configuration against what it expects based on blueprint intent. When it detects drift, it raises a deviation anomaly so operators can decide how to restore compliance. Typical remediations are either (1) remove/revert the out-of-band change so the device matches intent again, or (2) explicitly acknowledge the change in Apstra (for example, via an accept/suppress workflow, depending on the exact UI action and version), so the deviation is no longer treated as unexpected.
While a deviation can also be triggered by a device rejecting a rendered command (a capability mismatch), the question asks what the anomaly indicates in this scenario: the primary meaning of a “configuration deviation” is that the configuration was changed outside of Apstra, so the network is no longer aligned with the intended state. That corresponds to option C.
An operator is working on a capacity-planning exercise. The operator needs to examine the pre-built time-series information regarding link utilization. In the Juniper Apstra UI, which top-level tab would the operator have to access to find this information?
Active
Staged
Analytics
Dashboard
In Apstra 5.1, capacity planning based on pre-built time-series telemetry (such as link utilization trends) is part of Intent-Based Analytics (IBA). IBA is where Apstra ingests streaming telemetry from fabric devices, stores it as time-series data, and presents it through built-in analytics views (dashboards/widgets) and probes. Because the question specifically calls out “pre-built time-series information regarding link utilization,” the correct UI location is the Analytics top-level tab within the blueprint.
The Active tab is primarily oriented to operational state and day-2 workflows (for example, viewing live state, queries, and device-level operational views). The Staged tab is where you modify intent (physical/virtual design, policies, catalog items) prior to committing and deploying. The Dashboard provides a high-level blueprint overview and navigation, but the drill-down and time-series analytics views that support trending and capacity analysis are accessed via Analytics.
In an EVPN-VXLAN fabric using Junos v24.4, link utilization time-series is particularly valuable because underlay congestion can degrade overlay performance (BGP convergence behavior, ECMP distribution effectiveness, and endpoint experience). Apstra’s Analytics tab centralizes these metrics so operators can evaluate utilization baselines, identify sustained hot links, and support proactive actions (rebalancing, adding capacity, or adjusting design intent) without relying on ad-hoc per-device CLI polling.
Verified Juniper sources (URLs):
https://www.juniper.net/documentation/us/en/software/apstra5.1/apstra-custom-telemetry-collection-guide/topics/concept/apstra-telemetry-and-intent-based-analytics.html
What is the primary reason for creating an Apstra worker node?
To support more than one blueprint
To create a space for storing event logs
To run Zero Touch Provisioning (ZTP)
To offload off-box agents and Intent-Based Analytics (IBA)
In Apstra 5.1, the worker node’s primary purpose is to add scalable runtime capacity to an Apstra cluster by hosting off-box services that would otherwise consume resources on the controller. Specifically, worker nodes run containerized services such as off-box device agents (used to communicate with and manage devices) and Intent-Based Analytics (IBA) components (such as probes and analytics-related services). This design keeps the controller node focused on cluster management and control-plane functions (API handling, cluster-wide state, blueprint control workflows), while shifting resource-intensive operational services to worker nodes.
As your fabric grows—more switches, more telemetry, more devices requiring agent connectivity—CPU and memory demand increases notably, especially when IBA is enabled. Adding worker nodes allows you to scale those container workloads horizontally without redesigning the fabric or reducing analytics coverage. In a Juniper data center built on EVPN-VXLAN with Junos v24.4 leaf-spine roles, this separation helps ensure that Apstra can continuously validate intent, process streaming telemetry, and maintain device communications reliably at scale. Worker nodes therefore exist primarily to offload and scale operational agents and IBA services, improving performance and resilience for larger deployments.
Which type of generic system should you select when adding a new server inside an existing rack type?
Internal generic
Rack generic
External generic
Embedded generic
In Apstra 5.1, servers that connect to leaf switches are represented as generic systems so Apstra can model links, apply connectivity templates, attach virtual networks, and validate intent. The selection of generic system type depends on whether the endpoint is considered part of the rack’s internal topology or an external attachment. When you add a new server inside an existing rack type, that server is treated as a component of the rack topology (that is, it lives “within” the rack alongside leaf switches and any other rack-internal endpoints). Apstra documentation refers to such systems as internal generic systems.
Internal generic systems are not managed like switches (no full device management), but they are first-class topology objects: they occupy ports on leaf switches, can be tagged with roles, and can be associated with link definitions that drive correct interface intent (LAG vs single link, VLAN tagging, and virtual network association). This modeling is essential in EVPN-VXLAN fabrics because correct endpoint attachment on leaf ports determines VLAN/VNI mapping and the resulting Junos v24.4 configuration rendered by Apstra.
External generic systems, by contrast, represent devices outside the rack topology (often used for external routers, firewalls, or other non-rack-contained endpoints). Because the question explicitly places the server inside an existing rack type, the correct choice is Internal generic.
Verified Juniper sources (URLs):
https://www.juniper.net/documentation/us/en/software/apstra5.1/apstra-user-guide/topics/topic-map/internal-generic-system-create.html
You are assigning managed devices to a blueprint, for a fully functioning IP fabric. In the Juniper Apstra UI, which mode should you choose for this task?
Deploy
Ready
Not Set
Drain
In Apstra, Deploy mode is the state in which a device is intended to fully participate in the fabric. For a three-stage eBGP IP Clos (typical EVPN-VXLAN underlay), “fully functioning” means the switch receives the complete, intent-derived configuration required for production operation: underlay interface addressing, BGP peering, routing policy constructs, and any overlay-related prerequisites appropriate for its role (leaf, spine, border leaf). In Apstra’s device configuration lifecycle, Deploy is the mode that causes Apstra to render and apply the full set of intended services for that node so it becomes an active member of the IP fabric and contributes to ECMP pathing and control-plane adjacency.
By contrast, Ready is commonly used when you want the device discovered and prepared (for example, basic identity and interface readiness), but not actively routing in the fabric. Drain is a maintenance state used to gracefully withdraw an already-deployed device from forwarding to minimize impact (for example, for upgrades or repairs). Not Set indicates the deploy mode has not been chosen and therefore does not represent an operationally complete participation state.
Therefore, when your objective is an operational IP fabric where the assigned devices are actively routing and forwarding according to blueprint intent on Junos v24.4, the correct choice is Deploy.
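A hedged sketch of the kind of underlay eBGP configuration that Deploy mode causes Apstra to render on a leaf follows; the group name, neighbor addresses, and ASNs are hypothetical illustrations, not Apstra's exact rendering.

```
protocols {
    bgp {
        /* Hypothetical underlay group toward the spines */
        group UNDERLAY {
            type external;
            multipath {
                /* Allow ECMP across spines that use different ASNs */
                multiple-as;
            }
            neighbor 172.16.0.0 {
                peer-as 65001;
            }
            neighbor 172.16.0.2 {
                peer-as 65002;
            }
        }
    }
}
```

In Ready mode no such fabric-facing routing configuration is applied; moving the device to Deploy is what triggers rendering of this full intent.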
Which two statements are correct about a Juniper Apstra server? (Choose two.)
The Juniper Apstra server uses Layer 2 to communicate with managed devices.
The Juniper Apstra server requires one network adapter connection for each managed device.
The Juniper Apstra server uses Layer 3 to communicate with managed devices.
The Juniper Apstra server requires a single network adapter.
Apstra manages devices using IP connectivity over the management network, which is a Layer 3 relationship. Whether you are using on-box agents or off-box agents, the controller (or cluster) communicates with the fabric devices using IP reachability (for example, to exchange management traffic, retrieve discovery state, collect telemetry, and push configuration). This is why Layer 2 adjacency is not required between the Apstra server and the managed switches; the essential requirement is routable IP connectivity and appropriate access (credentials/agent connectivity) to the device management interfaces.
From a platform perspective, Apstra does not need a dedicated physical NIC per managed device. Instead, the server/VM requires connectivity to the management network through a single network adapter, and that interface can route to all managed devices. In a typical data center deployment, the Apstra controller VM sits on a management VLAN/subnet and reaches the entire fabric through routed management. This scales operationally: adding devices does not require adding additional server NICs; it only requires IP reachability and capacity planning for telemetry and agent workloads. Thus, the correct statements are that Apstra uses Layer 3 to communicate with managed devices and that it requires a single network adapter for that management connectivity model.
What are three port group roles that you are allowed to assign to a logical device? (Choose three.)
Leaf
Empty
Generic
Spine
Root
In Apstra, a logical device abstracts a physical switch’s front-panel layout into one or more panels containing port groups. Each port group has a defined speed and one or more roles that describe how those ports are expected to be used in the fabric. These roles are essential because they constrain where ports may be consumed during rack type and template construction (for example, spine-facing vs server-facing vs generic connectivity).
Apstra-supported port group roles include fabric roles such as Spine and Leaf, and endpoint-facing roles such as Generic (commonly used for ports that connect to servers or external generic systems). Assigning Leaf and Spine roles ensures Apstra can correctly validate and render intent for uplinks and interconnects in a three-stage Clos or larger topologies. Assigning Generic indicates ports that can be used for non-fabric connections (such as server links, external routers modeled as generic systems, or other non-managed endpoints).
The options Empty and Root are not valid Apstra port group roles in the logical device model; Apstra uses other explicit role names (for example, Access, Peer, Unused, Generic, Leaf, Spine, Superspine depending on design type and version). In Junos v24.4 EVPN-VXLAN fabrics, getting these roles correct is foundational because Apstra relies on them to place underlay and overlay configuration onto the right interfaces with predictable results.
Verified Juniper sources (URLs):
https://www.juniper.net/documentation/us/en/software/apstra4.2/apstra-user-guide/topics/concept/logical-devices.html
https://www.juniper.net/documentation/us/en/software/jvd/jvd-collapsed-dc-fabric-juniper-apstra-access-switches/configuration_walkthrough.html
Which Root Cause Identifier is currently supported in Juniper Apstra software?
Virtual network
Connectivity
ESI imbalance
BGP
In Juniper Apstra 5.1, the currently supported Root Cause Identification (RCI) model is connectivity. Practically, this means RCI takes telemetry and state learned from the fabric (for example, interface operational status, LLDP neighbor information, and routing session status) and correlates those signals to determine the most likely underlying cause of a connectivity-impacting event. Within an EVPN-VXLAN IP fabric, many operational symptoms can appear similar at the service layer (endpoints cannot reach each other, routes disappear, overlays degrade), but RCI narrows the problem by correlating evidence across the underlay and control plane.
The “connectivity” RCI model targets common failure scenarios that directly break device-to-device reachability, such as a broken link, a miscabled link (wrong LLDP neighbors), or an operator-disabled interface. These conditions often cascade into higher-level symptoms, including BGP sessions dropping over affected links. With Junos v24.4-based leaf-spine fabrics, maintaining stable underlay connectivity is foundational for EVPN signaling and VXLAN forwarding; therefore, Apstra’s connectivity-focused RCI helps operators rapidly isolate whether the primary fault lies in physical adjacency, cabling/neighbor correctness, or administrative shutdown—reducing mean time to repair by pointing to the most probable root cause rather than only listing alarms.
What does VXLAN use to uniquely label and identify broadcast domains?
VLAN ID
Agent Circuit Identifier (ACI)
Virtual Network Identifier (VNI)
End System Identifier (ESI)
In a VXLAN overlay, each Layer 2 broadcast domain (the logical equivalent of a VLAN/bridge domain) is identified by a 24-bit VXLAN Network Identifier (VNI) carried in the VXLAN header. This VNI is what allows the overlay to scale far beyond traditional VLAN space (12-bit VLAN IDs), enabling up to ~16 million distinct segments. In an EVPN-VXLAN data center fabric, Junos v24.4 leaf switches operate as VTEPs and map local bridge domains (often associated with VLANs on server-facing ports) to a VNI. When traffic is sent across the routed underlay, the leaf encapsulates Ethernet frames into VXLAN packets and inserts the VNI so the receiving VTEP can place the frame into the correct broadcast domain on decapsulation.
Apstra 5.1 abstracts this mapping through virtual networks and resource allocation: when you define a VXLAN-based virtual network, Apstra allocates a VNI from the appropriate pool and consistently programs the necessary constructs on all participating leaves. The key point is that VNI is the unique identifier in the VXLAN data plane used to label the broadcast domain across the IP fabric; VLAN IDs may exist locally at the edge for tagging, but the globally significant overlay identifier is the VNI.
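A hedged Junos sketch of the edge mapping: a locally significant 12-bit VLAN ID is bound to a globally significant 24-bit VNI (both values below are hypothetical).

```
vlans {
    /* Hypothetical server-facing broadcast domain */
    V100 {
        /* Locally significant 12-bit VLAN ID at the edge */
        vlan-id 100;
        vxlan {
            /* Globally significant 24-bit overlay identifier */
            vni 10100;
        }
    }
}
```

Different leaves could even use different local VLAN IDs for the same segment; it is the shared VNI that identifies the broadcast domain across the fabric.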
Verified Juniper sources (URLs):
https://www.juniper.net/documentation/us/en/software/junos/evpn/topics/topic-map/sdn-vxlan.html
TESTED 01 Mar 2026