Which of these is a valid container restart policy?
On login
On update
On start
On failure
The correct answer is D: On failure. In Kubernetes, restart behavior is controlled by the Pod-level field spec.restartPolicy, with valid values Always, OnFailure, and Never. The option presented here (“On failure”) maps to Kubernetes’ OnFailure policy. This setting determines what the kubelet should do when containers exit:
Always: restart containers whenever they exit (typical for long-running services)
OnFailure: restart containers only if they exit with a non-zero status (common for batch workloads)
Never: do not restart containers (fail and leave it terminated)
So “On failure” is a valid restart policy concept and the only one in the list that matches Kubernetes semantics.
The other options are not Kubernetes restart policies. “On login,” “On update,” and “On start” are not recognized values and don’t align with how Kubernetes models container lifecycle. Kubernetes is declarative and event-driven: it reacts to container exit codes and controller intent, not user “logins.”
Operationally, choosing the right restart policy is important. For example, Jobs typically use restartPolicy: OnFailure or Never because the goal is completion, not continuous uptime. Deployments imply “Always” because the workload should keep serving traffic, and a crashed container should be restarted. Also note that controllers interact with restarts: a Deployment’s rollout progress depends on new Pods becoming Ready, while a Job counts completions and failures based on Pod termination behavior.
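A minimal sketch (name, image, and command are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: batch-task            # hypothetical name
spec:
  restartPolicy: OnFailure    # valid values: Always (default), OnFailure, Never
  containers:
  - name: worker
    image: busybox:1.36       # placeholder image
    command: ["sh", "-c", "exit 1"]   # a non-zero exit triggers a restart under OnFailure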
Therefore, among the options, the only valid (Kubernetes-aligned) restart policy is D.
=========
What is an important consideration when choosing a base image for a container in a Kubernetes deployment?
It should be minimal and purpose-built for the application to reduce attack surface and improve performance.
It should always be the latest version to ensure access to the newest features.
It should be the largest available image to ensure all dependencies are included.
It can be any existing image from the public repository without consideration of its contents.
Choosing an appropriate base image is a critical decision in building containerized applications for Kubernetes, as it directly impacts security, performance, reliability, and operational efficiency. A key best practice is to select a minimal, purpose-built base image, making option A the correct answer.
Minimal base images—such as distroless images or slim variants of common distributions—contain only the essential components required to run the application. By excluding unnecessary packages, shells, and utilities, these images significantly reduce the attack surface. Fewer components mean fewer potential vulnerabilities, which is especially important in Kubernetes environments where containers are often deployed at scale and exposed to dynamic network traffic.
Smaller images also improve performance and efficiency. They reduce image size, leading to faster image pulls, quicker Pod startup times, and lower network and storage overhead. This is particularly beneficial in large clusters or during frequent deployments, scaling events, or rolling updates. Kubernetes’ design emphasizes fast, repeatable deployments, and lightweight images align well with these goals.
Option B is incorrect because always using the latest image version can introduce instability or unexpected breaking changes. Kubernetes best practices recommend using explicitly versioned and tested images to ensure predictable behavior and reproducibility. Option C is incorrect because large images increase the attack surface, slow down deployments, and often include unnecessary dependencies that are never used by the application. Option D is incorrect because blindly using public images without inspecting their contents or provenance introduces serious security and compliance risks.
Kubernetes documentation and cloud-native security guidance consistently emphasize the principle of least privilege and minimalism in container images. A well-chosen base image supports secure defaults, faster operations, and easier maintenance, all of which are essential for running reliable workloads in production Kubernetes environments.
Therefore, the correct and verified answer is Option A.
=========
What default level of protection is applied to the data in Secrets in the Kubernetes API?
The values use AES symmetric encryption
The values are stored in plain text
The values are encoded with SHA256 hashes
The values are base64 encoded
Kubernetes Secrets are designed to store sensitive data such as tokens, passwords, or certificates and make them available to Pods in controlled ways (as environment variables or mounted files). However, the default protection applied to Secret values in the Kubernetes API is base64 encoding, not encryption. That is why D is correct. Base64 is an encoding scheme that converts binary data into ASCII text; it is reversible and does not provide confidentiality.
By default, Secret objects are stored in the cluster’s backing datastore (commonly etcd) as base64-encoded strings inside the Secret manifest. Unless the cluster is configured for encryption at rest, those values are effectively stored unencrypted in etcd and may be visible to anyone who can read etcd directly or who has API permissions to read Secrets. This distinction is critical for security: base64 can prevent accidental issues with special characters in YAML/JSON, but it does not protect against attackers.
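A minimal sketch (the value is a throwaway example); the stored form is trivially reversible:

apiVersion: v1
kind: Secret
metadata:
  name: db-credentials        # hypothetical name
type: Opaque
data:
  password: cGFzc3dvcmQ=      # base64 of "password"; decodable by anyone who can read the Secret
                              # e.g. echo cGFzc3dvcmQ= | base64 -d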
Option A is only correct if encryption at rest is explicitly configured on the API server using an EncryptionConfiguration (for example, AES-CBC or AES-GCM providers). Many managed Kubernetes offerings enable encryption at rest for etcd as an option or by default, but that is a deployment choice, not the universal Kubernetes default. Option C is incorrect because hashing is used for verification, not for secret retrieval; you typically need to recover the original value, so hashing isn’t suitable for Secrets. Option B (“plain text”) is misleading: the stored representation is base64-encoded, but because base64 is reversible, the security outcome is close to plain text unless encryption at rest and strict RBAC are in place.
The correct operational stance is: treat Kubernetes Secrets as sensitive; lock down access with RBAC, enable encryption at rest, avoid broad Secret read permissions, and consider external secret managers when appropriate. But strictly for the question’s wording—default level of protection—base64 encoding is the right answer.
=========
What edge and service proxy tool is designed to be integrated with cloud native applications?
CoreDNS
CNI
gRPC
Envoy
The correct answer is D: Envoy. Envoy is a high-performance edge and service proxy designed for cloud-native environments. It is commonly used as the data plane in service meshes and modern API gateways because it provides consistent traffic management, observability, and security features across microservices without requiring every application to implement those capabilities directly.
Envoy operates at Layer 7 (application-aware) and supports protocols like HTTP/1.1, HTTP/2, gRPC, and more. It can handle routing, load balancing, retries, timeouts, circuit breaking, rate limiting, TLS termination, and mutual TLS (mTLS). Envoy also emits rich telemetry (metrics, access logs, tracing) that integrates well with cloud-native observability stacks.
Why the other options are incorrect:
CoreDNS (A) provides DNS-based service discovery within Kubernetes; it is not an edge/service proxy.
CNI (B) is a specification and plugin ecosystem for container networking (Pod networking), not a proxy.
gRPC (C) is an RPC protocol/framework used by applications; it’s not a proxy tool. (Envoy can proxy gRPC traffic, but gRPC itself isn’t the proxy.)
In Kubernetes architectures, Envoy often appears in two places: (1) at the edge as part of an ingress/gateway layer, and (2) sidecar proxies alongside Pods in a service mesh (like Istio) to standardize service-to-service communication controls and telemetry. This is why it is described as “designed to be integrated with cloud native applications”: it’s purpose-built for dynamic service discovery, resilient routing, and operational visibility in distributed systems.
So the verified correct choice is D (Envoy).
=========
Which group of container runtimes provides additional sandboxed isolation and elevated security?
rune, cgroups
docker, containerd
runsc, kata
crun, cri-o
The runtimes most associated with sandboxed isolation are gVisor’s runsc and Kata Containers, making C correct. Standard container runtimes (like containerd with runc) rely primarily on Linux namespaces and cgroups for isolation. That isolation is strong for many use cases, but it shares the host kernel, which can be a concern for multi-tenant or high-risk workloads.
gVisor (runsc) provides a user-space kernel-like layer that intercepts and mediates system calls, reducing the container’s direct interaction with the host kernel. Kata Containers takes a different approach: it runs containers inside lightweight virtual machines, providing hardware-virtualization boundaries (or VM-like isolation) while still integrating into container workflows. Both are used to increase isolation compared to traditional containers, and both can be integrated with Kubernetes through compatible CRI/runtime configurations.
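A hedged sketch of how such a runtime is selected in Kubernetes, assuming the node’s container runtime already defines a handler named runsc:

apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: gvisor                # hypothetical name
handler: runsc                # must match a handler configured in containerd/CRI-O
---
apiVersion: v1
kind: Pod
metadata:
  name: sandboxed-app         # hypothetical name
spec:
  runtimeClassName: gvisor    # schedule this Pod onto the sandboxed runtime
  containers:
  - name: app
    image: nginx:1.27         # placeholder image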
The other options are incorrect for the question’s intent. “rune, cgroups” is not a meaningful pairing here (cgroups is a Linux resource mechanism, not a runtime, and “rune” is not a standard runtime name). “docker, containerd” are commonly used container platforms/runtimes but are not specifically the “sandboxed isolation” category (containerd typically uses runc for standard isolation). “crun, cri-o” represents a low-level OCI runtime (crun) and a CRI implementation (CRI-O), again not specifically a sandboxed-isolation grouping.
So, when the question asks for the group that provides additional sandboxing and elevated security, the correct, well-established answer is runsc + Kata.
The Container Runtime Interface (CRI) defines the protocol for the communication between:
The kubelet and the container runtime.
The container runtime and etcd.
The kube-apiserver and the kubelet.
The container runtime and the image registry.
The CRI (Container Runtime Interface) defines how the kubelet talks to the container runtime, so A is correct. The kubelet is the node agent responsible for ensuring containers are running in Pods on that node. It needs a standardized way to request operations such as: create a Pod sandbox, pull an image, start/stop containers, execute commands, attach streams, and retrieve logs. CRI provides that contract so kubelet does not need runtime-specific integrations.
This interface is a key part of Kubernetes’ modular design. Different container runtimes implement the CRI, allowing Kubernetes to run with containerd, CRI-O, and other CRI-compliant runtimes. This separation of concerns lets Kubernetes focus on orchestration, while runtimes focus on executing containers according to the OCI runtime spec, managing images, and handling low-level container lifecycle.
Why the other options are incorrect:
etcd is the control plane datastore; container runtimes do not communicate with etcd via CRI.
kube-apiserver and kubelet communicate using Kubernetes APIs, but CRI is not their protocol; CRI is specifically kubelet ↔ runtime.
container runtime and image registry communicate using registry protocols (image pull/push APIs), but that is not CRI. CRI may trigger image pulls via runtime requests, yet the actual registry communication is separate.
Operationally, this distinction matters when debugging node issues. If Pods are stuck in “ContainerCreating” due to image pull failures or runtime errors, you often investigate kubelet logs and the runtime (containerd/CRI-O) logs. Kubernetes administrators also care about CRI streaming (exec/attach/logs streaming), runtime configuration, and compatibility across Kubernetes versions.
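As a hedged illustration (in recent Kubernetes versions this is a KubeletConfiguration field; older versions used the --container-runtime-endpoint kubelet flag, and the socket path depends on the runtime), the kubelet is simply pointed at the runtime’s CRI socket:

apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# containerd’s common default socket; CRI-O typically uses unix:///var/run/crio/crio.sock
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock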
So, the verified answer is A: the kubelet and the container runtime.
=========
At which layer would distributed tracing be implemented in a cloud native deployment?
Network
Application
Database
Infrastructure
Distributed tracing is implemented primarily at the application layer, so B is correct. The reason is simple: tracing is about capturing the end-to-end path of a request as it traverses services, libraries, queues, and databases. That “request context” (trace ID, span ID, baggage) must be created, propagated, and enriched as code executes. While infrastructure components (proxies, gateways, service meshes) can generate or augment trace spans, the fundamental unit of tracing is still tied to application operations (an HTTP handler, a gRPC call, a database query, a cache lookup).
In Kubernetes-based microservices, distributed tracing typically uses standards like OpenTelemetry for instrumentation and context propagation. Application frameworks emit spans for key operations, attach attributes (route, status code, tenant, retry count), and propagate context via headers (e.g., W3C Trace Context). This is what lets you reconstruct “Service A → Service B → Service C” for one user request and identify the slow or failing hop.
Why other layers are not the best answer:
Network focuses on packets/flows, but tracing is not a packet-capture problem; it’s a causal request-path problem across services.
Database spans are part of traces, but tracing is not “implemented in the database layer†overall; DB spans are one component.
Infrastructure provides the platform and can observe traffic, but without application context it can’t fully represent business operations (and many useful attributes live in app code).
So the correct layer for “where tracing is implemented” is the application layer—even when a mesh or proxy helps, it’s still describing application request execution across components.
=========
What is the telemetry component that represents a series of related distributed events that encode the end-to-end request flow through a distributed system?
Metrics
Logs
Spans
Traces
In observability, traces represent an end-to-end view of a request as it flows through multiple services, so D is correct. Tracing is particularly important in cloud-native microservices architectures because a single user action (like “checkout” or “search”) may traverse many services via HTTP/gRPC calls, message queues, and databases. Traces link those related events together so you can see where time is spent, where errors occur, and how dependencies behave.
A trace is typically composed of multiple spans (option C). A span is a single timed operation (e.g., “HTTP GET /orders”, “DB query”, “call payment service”). Spans include timing, attributes (tags), status/error information, and parent/child relationships. While spans are essential building blocks, the “series of related distributed events encoding end-to-end request flow” is the trace as a whole, not an individual span.
Metrics (option A) are numeric time series used for aggregation and alerting (rates, latency percentiles when derived, resource usage). Logs (option B) are discrete event records (text or structured) useful for forensic detail and debugging. Both are valuable, but neither inherently provides a stitched, causal, end-to-end request path across services. Traces do exactly that by propagating trace context (trace IDs/span IDs) across service boundaries (often via headers).
In Kubernetes environments, traces are commonly exported via OpenTelemetry instrumentation/collectors and visualized in tracing backends. Tracing enables faster incident resolution by pinpointing the slow hop, the failing downstream dependency, or unexpected fan-out. Therefore, the correct telemetry component for end-to-end distributed request flow is Traces (D).
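A hedged sketch of an OpenTelemetry Collector trace pipeline, assuming an OTLP-capable backend at a placeholder address:

receivers:
  otlp:
    protocols:
      grpc: {}
      http: {}
exporters:
  otlp:
    endpoint: tracing-backend.example:4317   # hypothetical backend address
    tls:
      insecure: true                         # assumption for a local/demo setup
service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [otlp]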
=========
What is the resource type used to package sets of containers for scheduling in a cluster?
Pod
ContainerSet
ReplicaSet
Deployment
The Kubernetes resource used to package one or more containers into a schedulable unit is the Pod, so A is correct. Kubernetes schedules Pods onto nodes; it does not schedule individual containers. A Pod represents a single “instance” of an application component and includes one or more containers that share key runtime properties, including the same network namespace (same IP and port space) and the ability to share volumes.
Pods enable common patterns beyond “one container per Pod.” For example, a Pod may include a main application container plus a sidecar container for logging, proxying, or configuration reload. Because these containers share localhost networking and volume mounts, they can coordinate efficiently without requiring external service calls. Kubernetes manages the Pod lifecycle as a unit: the containers in a Pod are started according to container lifecycle rules and are co-located on the same node.
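A minimal sketch of the sidecar pattern described above (names and images are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar
spec:
  volumes:
  - name: logs
    emptyDir: {}                 # shared scratch volume for both containers
  containers:
  - name: app
    image: nginx:1.27            # main application container (placeholder)
    volumeMounts:
    - name: logs
      mountPath: /var/log/app
  - name: log-shipper
    image: busybox:1.36          # sidecar reading the shared volume (placeholder)
    command: ["sh", "-c", "tail -F /var/log/app/access.log"]
    volumeMounts:
    - name: logs
      mountPath: /var/log/app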
Option B (ContainerSet) is not a standard Kubernetes workload resource. Option C (ReplicaSet) manages a set of Pod replicas, ensuring a desired count is running, but it is not the packaging unit itself. Option D (Deployment) is a higher-level controller that manages ReplicaSets and provides rollout/rollback behavior, again operating on Pods rather than being the container-packaging unit.
From the scheduling perspective, the PodSpec defines container images, commands, resources, volumes, security context, and placement constraints. The scheduler evaluates these constraints and assigns the Pod to a node. This “Pod as the atomic scheduling unit” is fundamental to Kubernetes architecture and explains why Kubernetes-native concepts (Services, selectors, readiness, autoscaling) all revolve around Pods.
=========
What Kubernetes component handles network communications inside and outside of a cluster, using operating system packet filtering if available?
kube-proxy
kubelet
etcd
kube-controller-manager
kube-proxy is the Kubernetes component responsible for implementing Service networking on nodes, commonly by programming operating system packet filtering / forwarding rules (like iptables or IPVS), which makes A correct.
Kubernetes Services provide stable virtual IPs and ports that route traffic to a dynamic set of Pod endpoints. kube-proxy watches the API server for Service and EndpointSlice/Endpoints updates and then configures the node’s networking so that traffic to a Service is correctly forwarded to one of the backend Pods. In iptables mode, kube-proxy installs NAT and forwarding rules; in IPVS mode, it programs kernel load-balancing tables. In both cases, it leverages OS-level packet handling to efficiently steer traffic. This is the “packet filtering if available” concept referenced in the question.
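As a hedged sketch (defaults vary by distribution and version), the proxy mode is chosen in kube-proxy’s own configuration:

apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"                    # alternatives include "iptables", a common Linux default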
kube-proxy’s work affects both “inside” and “outside” paths in typical setups. Internal cluster clients reach Services via ClusterIP and DNS, and kube-proxy rules forward that traffic to Pods. For external traffic, paths often involve NodePort or LoadBalancer Services or Ingress controllers that ultimately forward into Services/Pods—again relying on node-level service rules. While some modern CNI/eBPF dataplanes can replace or bypass kube-proxy, the classic Kubernetes architecture still defines kube-proxy as the component implementing Service connectivity.
The other options are not networking dataplane components: kubelet runs Pods and reports status; etcd stores cluster state; kube-controller-manager runs control loops for API objects. None of these handle node-level packet routing for Services. Therefore, the correct verified answer is A: kube-proxy.
=========
What is the purpose of the kube-proxy?
The kube-proxy balances network requests to Pods.
The kube-proxy maintains network rules on nodes.
The kube-proxy ensures the cluster connectivity with the internet.
The kube-proxy maintains the DNS rules of the cluster.
The correct answer is B: kube-proxy maintains network rules on nodes. kube-proxy is a node component that implements part of the Kubernetes Service abstraction. It watches the Kubernetes API for Service and EndpointSlice/Endpoints changes, and then programs the node’s dataplane rules (commonly iptables or IPVS, depending on configuration) so that traffic sent to a Service virtual IP and port is correctly forwarded to one of the backing Pod endpoints.
This is how Kubernetes provides stable Service addresses even though Pod IPs are ephemeral. When Pods scale up/down or are replaced during a rollout, endpoints change; kube-proxy updates the node rules accordingly. From the perspective of a client, the Service name and ClusterIP remain stable, while the actual backend endpoints are load-distributed.
Option A is a tempting phrasing but incomplete: load distribution is an outcome of the forwarding rules, but kube-proxy’s primary role is maintaining the network forwarding rules that make Services work. Option C is incorrect because internet connectivity depends on cluster networking, routing, NAT, and often CNI configuration—not kube-proxy’s job description. Option D is incorrect because DNS is typically handled by CoreDNS; kube-proxy does not “maintain DNS rules.”
Operationally, kube-proxy failures often manifest as Service connectivity issues: Pod-to-Service traffic fails, ClusterIP routing breaks, NodePort behavior becomes inconsistent, or endpoints aren’t updated correctly. Modern Kubernetes environments sometimes replace kube-proxy with eBPF-based dataplanes, but in the classic architecture the correct statement remains: kube-proxy runs on each node and maintains the rules needed for Service traffic steering.
=========
What is a Kubernetes Service Endpoint?
It is the API endpoint of our Kubernetes cluster.
It is a name of special Pod in kube-system namespace.
It is an IP address that we can access from the Internet.
It is an object that gets IP addresses of individual Pods assigned to it.
A Kubernetes Service routes traffic to a dynamic set of backends (usually Pods). The set of backend IPs and ports is represented by endpoint-tracking resources. Historically this was the Endpoints object; today Kubernetes commonly uses EndpointSlice for scalability, but the concept remains the same: endpoints represent the concrete network destinations behind a Service. That’s why D is correct: a Service endpoint is an object that contains the IP addresses (and ports) of the individual Pods (or other backends) associated with that Service.
When a Service has a selector, Kubernetes automatically maintains endpoints by watching which Pods match the selector and are Ready, then publishing those Pod IPs into Endpoints/EndpointSlices. Consumers don’t usually use endpoints directly; instead they call the Service DNS name, and kube-proxy (or an alternate dataplane) forwards traffic to one of the endpoints. Still, endpoints are critical because they are what make Service routing accurate and up to date during scaling events, rolling updates, and failures.
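A minimal sketch, assuming Pods labeled app: web exist in the namespace; Kubernetes then publishes the ready Pod IPs into the matching EndpointSlices:

apiVersion: v1
kind: Service
metadata:
  name: web                  # hypothetical name
spec:
  selector:
    app: web                 # Pods matching this label become the Service’s endpoints
  ports:
  - port: 80
    targetPort: 8080
# Inspect the resulting backends with, for example:
#   kubectl get endpointslices -l kubernetes.io/service-name=web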
Option A confuses this with the Kubernetes API server endpoint (the cluster API URL). Option B is incorrect; there’s no special “Service Endpoint Pod.” Option C describes an external/public IP concept, which may exist for LoadBalancer Services, but “Service endpoint” in Kubernetes vocabulary is about the backend destinations, not the public entrypoint.
Operationally, endpoints are useful for debugging: if a Service isn’t routing traffic, checking Endpoints/EndpointSlices shows whether the Service actually has backends and whether readiness is excluding Pods. This ties directly into Kubernetes service discovery and load balancing: the Service is the stable front door; endpoints are the actual backends.
=========
Which kubectl command is useful for collecting information about any type of resource that is active in a Kubernetes cluster?
describe
list
expose
explain
The correct answer is A (describe), used as kubectl describe <resource-type> <name>. It gathers detailed information about a specific resource of any type, combining its configuration, current status, and recent related events into one view.
kubectl get (not listed) is typically used for listing objects and their summary fields, but kubectl describe goes deeper: for a Pod it will show container images, resource requests/limits, probes, mounted volumes, node assignment, IPs, conditions, and recent scheduling/pulling/starting events. For a Node it shows capacity/allocatable resources, labels/taints, conditions, and node events. Those event details often explain why something is Pending, failing to pull images, failing readiness checks, or being evicted.
Option B (“list”) is not a standard kubectl subcommand for retrieving resource information (you would use get for listing). Option C (expose) is for creating a Service to expose a resource (like a Deployment). Option D (explain) is for viewing API schema/field documentation (e.g., kubectl explain deployment.spec.replicas) and does not report what is currently happening in the cluster.
So, for gathering detailed live diagnostics about a resource in the cluster, the best kubectl command is kubectl describe, which corresponds to option A.
=========
Which API object is the recommended way to run a scalable, stateless application on your cluster?
ReplicaSet
Deployment
DaemonSet
Pod
For a scalable, stateless application, Kubernetes recommends using a Deployment because it provides a higher-level, declarative management layer over Pods. A Deployment doesn’t just “run replicas”; it manages the entire lifecycle of rolling out new versions, scaling up/down, and recovering from failures by continuously reconciling the current cluster state to the desired state you define. Under the hood, a Deployment typically creates and manages a ReplicaSet, and that ReplicaSet ensures a specified number of Pod replicas are running at all times. This layering is the key: you get ReplicaSet’s self-healing replica maintenance plus Deployment’s rollout/rollback strategies and revision history.
Why not the other options? A Pod is the smallest deployable unit, but it’s not a scalable controller—if a Pod dies, nothing automatically replaces it unless a controller owns it. A ReplicaSet can maintain N replicas, but it does not provide the full rollout orchestration (rolling updates, pause/resume, rollbacks, and revision tracking) that you typically want for stateless apps that ship frequent releases. A DaemonSet is for node-scoped workloads (one Pod per node or subset of nodes), like log shippers or node agents, not for “scale by replicas.”
For stateless applications, the Deployment model is especially appropriate because individual replicas are interchangeable; the application does not require stable network identities or persistent storage per replica. Kubernetes can freely replace or reschedule Pods to maintain availability. Deployment strategies (like RollingUpdate) allow you to upgrade without downtime by gradually replacing old replicas with new ones while keeping the Service endpoints healthy. That combination—declarative desired state, self-healing, and controlled updates—makes Deployment the recommended object for scalable stateless workloads.
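A minimal sketch of such a Deployment (the name and image are placeholders; note the pinned image tag and the RollingUpdate strategy):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0
      maxSurge: 1
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.27.0      # placeholder; pin an explicit version rather than :latest
        ports:
        - containerPort: 80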
=========
What is the minimum number of etcd members that are required for a highly available Kubernetes cluster?
Two etcd members.
Five etcd members.
Six etcd members.
Three etcd members.
D (three etcd members) is correct. etcd is a distributed key-value store that uses the Raft consensus algorithm. High availability in consensus systems depends on maintaining a quorum (majority) of members to continue serving writes reliably. With 3 members, the cluster can tolerate 1 failure and still have 2/3 available—enough for quorum.
Two members is a common trap: with 2, a single failure leaves 1/2, which is not a majority, so the cluster cannot safely make progress. That means 2-member etcd is not HA; it is fragile and can be taken down by one node loss, network partition, or maintenance event. Five members can tolerate 2 failures and is a valid HA configuration, but it is not the minimum. Six is even-sized and generally discouraged for consensus because it doesn’t improve failure tolerance compared to five (quorum still requires 4), while increasing coordination overhead.
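The underlying arithmetic, where quorum is a strict majority: quorum(n) = floor(n/2) + 1, and tolerated failures = n - quorum(n).

members   quorum   tolerated failures
2         2        0
3         2        1
5         3        2
6         4        2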
In Kubernetes, etcd reliability directly affects the API server and the entire control plane because etcd stores cluster state: object specs, status, controller state, and more. If etcd loses quorum, the API server will be unable to persist or reliably read/write state, leading to cluster management outages. That’s why the minimum HA baseline is three etcd members, often across distinct failure domains (nodes/AZs), with strong disk performance and consistent low-latency networking.
So, the smallest etcd topology that provides true fault tolerance is 3 members, which corresponds to option D.
=========
What does “continuous” mean in the context of CI/CD?
Frequent releases, manual processes, repeatable, fast processing
Periodic releases, manual processes, repeatable, automated processing
Frequent releases, automated processes, repeatable, fast processing
Periodic releases, automated processes, repeatable, automated processing
The correct answer is C: in CI/CD, “continuous” implies frequent releases, automation, repeatability, and fast feedback/processing. The intent is to reduce batch size and latency between code change and validation/deployment. Instead of integrating or releasing in large, risky chunks, teams integrate changes continually and rely on automation to validate and deliver them safely.
“Continuous” does not mean “periodic” (which eliminates B and D). It also does not mean “manual processes” (which eliminates A and B). Automation is core: build, test, security checks, and deployment steps are consistently executed by pipeline systems, producing reliable outcomes and auditability.
In practice, CI means every merge triggers automated builds and tests so the main branch stays in a healthy state. CD means those validated artifacts are promoted through environments with minimal manual steps, often including progressive delivery controls (canary, blue/green), automated rollbacks on health signal failures, and policy checks. Kubernetes works well with CI/CD because it is declarative and supports rollout primitives: Deployments, readiness probes, and rollback revision history enable safer continuous delivery when paired with pipeline automation.
Repeatability is a major part of “continuous.” The same pipeline should run the same way every time, producing consistent artifacts and deployments. This reduces “works on my machine” issues and shortens incident resolution because changes are traceable and reproducible. Fast processing and frequent releases also mean smaller diffs, easier debugging, and quicker customer value delivery.
So, the combination that accurately reflects “continuous” in CI/CD is frequent + automated + repeatable + fast, which is option C.
=========
In a cloud native environment, how do containerization and virtualization differ in terms of resource management?
Containerization uses hypervisors to manage resources, while virtualization does not.
Containerization shares the host OS, while virtualization runs a full OS for each instance.
Containerization consumes more memory than virtualization by default.
Containerization allocates resources per container, virtualization does not isolate them.
The fundamental difference between containerization and virtualization in a cloud native environment lies in how they manage and isolate resources, particularly with respect to the operating system. The correct description is that containerization shares the host operating system, while virtualization runs a full operating system for each instance, making option B the correct answer.
In virtualization, each virtual machine (VM) includes its own complete guest operating system running on top of a hypervisor. The hypervisor virtualizes hardware resources—CPU, memory, storage, and networking—and allocates them to each VM. Because every VM runs a full OS, virtualization introduces significant overhead in terms of memory usage, disk space, and startup time. However, it provides strong isolation between workloads, which is useful for running different operating systems or untrusted workloads on the same physical hardware.
In contrast, containerization operates at the operating system level rather than the hardware level. Containers share the host OS kernel and isolate applications using kernel features such as namespaces and control groups (cgroups). This design makes containers much lighter weight than virtual machines. Containers start faster, consume fewer resources, and allow higher workload density on the same infrastructure. Resource limits and isolation are still enforced, but without duplicating the entire operating system for each application instance.
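A hedged sketch of how that OS-level enforcement surfaces in Kubernetes: per-container requests and limits (values are illustrative), which the kubelet translates into cgroup settings:

apiVersion: v1
kind: Pod
metadata:
  name: bounded-app            # hypothetical name
spec:
  containers:
  - name: app
    image: nginx:1.27          # placeholder image
    resources:
      requests:
        cpu: "250m"
        memory: "128Mi"
      limits:
        cpu: "500m"
        memory: "256Mi"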
Option A is incorrect because hypervisors are a core component of virtualization, not containerization. Option C is incorrect because containers generally consume less memory than virtual machines due to the absence of a full guest OS. Option D is incorrect because virtualization does isolate resources very strongly, while containers rely on OS-level isolation rather than hardware-level isolation.
In cloud native architectures, containerization is preferred for microservices and scalable workloads because of its efficiency and portability. Virtualization is still valuable for stronger isolation and heterogeneous operating systems. Therefore, Option B accurately captures the key resource management distinction between the two models.
=========
Imagine there is a requirement to run a database backup every day. Which Kubernetes resource could be used to achieve that?
kube-scheduler
CronJob
Task
Job
To run a workload on a repeating schedule (like “every day”), Kubernetes provides CronJob, making B correct. A CronJob creates Jobs according to a cron-formatted schedule, and then each Job creates one or more Pods that run to completion. This is the Kubernetes-native replacement for traditional cron scheduling, but implemented as a declarative resource managed by controllers in the cluster.
For a daily database backup, you’d define a CronJob with a schedule (e.g., "0 2 * * *" for 2:00 AM daily), and specify the Pod template that performs the backup (invokes backup scripts/tools, writes output to durable storage, uploads to object storage, etc.). Kubernetes will then create a Job at each scheduled time. CronJobs also support operational controls like concurrencyPolicy (Allow/Forbid/Replace) to decide what happens if a previous backup is still running, startingDeadlineSeconds to handle missed schedules, and history limits to retain recent successful/failed Job records for debugging.
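A hedged sketch matching that description (the image and command are placeholders for an actual backup tool):

apiVersion: batch/v1
kind: CronJob
metadata:
  name: db-backup                     # hypothetical name
spec:
  schedule: "0 2 * * *"               # 02:00 every day
  concurrencyPolicy: Forbid           # skip a run if the previous backup is still going
  successfulJobsHistoryLimit: 3
  failedJobsHistoryLimit: 3
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: backup
            image: registry.example.com/tools/db-backup:1.0   # placeholder image
            command: ["/bin/sh", "-c", "run-backup.sh"]       # placeholder command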
Option D (Job) is close but not sufficient for “every day.” A Job runs a workload until completion once; you would need an external scheduler to create a Job every day. Option A (kube-scheduler) is a control plane component responsible for placing Pods onto nodes and does not schedule recurring tasks. Option C (“Task”) is not a standard Kubernetes workload resource.
This question is fundamentally about mapping a recurring operational requirement (backup cadence) to Kubernetes primitives. The correct design is: CronJob triggers Job creation on a schedule; Job runs Pods to completion. Therefore, the correct answer is B.
=========
Which of the following capabilities are you allowed to add to a container using the Restricted policy?
CHOWN
SYS_CHROOT
SETUID
NET_BIND_SERVICE
Under the Kubernetes Pod Security Standards (PSS), the Restricted profile is the most restrictive of the standard profiles, intended to minimize container privilege and host attack surface. In that profile, adding Linux capabilities is generally prohibited except for very limited cases. Among the listed capabilities, NET_BIND_SERVICE is the one commonly permitted in restricted-like policies, so D is correct.
NET_BIND_SERVICE allows a process to bind to “privileged” ports below 1024 (like 80/443) without running as root. This aligns with restricted security guidance: applications should run as non-root, but still sometimes need to listen on standard ports. Allowing NET_BIND_SERVICE enables that pattern without granting broad privileges.
The other capabilities listed are more sensitive and typically not allowed in a restricted profile: CHOWN can be used to change file ownership, SETUID relates to privilege changes and can be abused, and SYS_CHROOT is a broader system-level capability associated with filesystem root changes. In hardened Kubernetes environments, these are normally disallowed because they increase the risk of privilege escalation or container breakout paths, especially if combined with other misconfigurations.
A practical note: exact enforcement depends on the cluster’s admission configuration (e.g., the built-in Pod Security Admission controller) and any additional policy engines (OPA/Gatekeeper). But the security intent of “Restricted” is consistent: run as non-root, disallow privilege escalation, restrict capabilities, and lock down host access. NET_BIND_SERVICE is a well-known exception used to support common application networking needs while staying non-root.
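A hedged sketch of a container spec that satisfies the Restricted profile’s capability rules (the name, image, and UID are placeholder assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: restricted-web              # hypothetical name
spec:
  securityContext:
    runAsNonRoot: true
    runAsUser: 10001                # assumption: the image works as an arbitrary non-root UID
    seccompProfile:
      type: RuntimeDefault
  containers:
  - name: web
    image: registry.example.com/team/web:1.0   # placeholder image
    securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop: ["ALL"]
        add: ["NET_BIND_SERVICE"]   # the only capability Restricted allows adding
    ports:
    - containerPort: 80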
So, the verified correct choice for an allowed capability in Restricted among these options is D: NET_BIND_SERVICE.
=========
A platform engineer wants to ensure that a new microservice is automatically deployed to every cluster registered in Argo CD. Which configuration best achieves this goal?
Set up a Kubernetes CronJob that redeploys the microservice to all registered clusters on a schedule.
Manually configure every registered cluster with the deployment YAML for installing the microservice.
Create an Argo CD ApplicationSet that uses a Git repository containing the microservice manifests.
Use a Helm chart to package the microservice and manage it with a single Application defined in Argo CD.
Argo CD is a declarative GitOps continuous delivery tool designed to manage Kubernetes applications across one or many clusters. When the requirement is to automatically deploy a microservice to every cluster registered in Argo CD, the most appropriate and scalable solution is to use an ApplicationSet.
The ApplicationSet controller extends Argo CD by enabling the dynamic generation of multiple Argo CD Applications from a single template. One of its most powerful features is the cluster generator, which automatically discovers all clusters registered with Argo CD and creates an Application for each of them. By combining this generator with a Git repository containing the microservice manifests, the platform engineer ensures that the microservice is consistently deployed to all existing clusters—and any new clusters added in the future—without manual intervention.
This approach aligns perfectly with GitOps principles. The desired state of the microservice is defined once in Git, and Argo CD continuously reconciles that state across all target clusters. Any updates to the microservice manifests are automatically rolled out everywhere in a controlled and auditable manner. This provides strong guarantees around consistency, scalability, and operational simplicity.
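A hedged sketch of that pattern using the cluster generator (the repository URL, path, and namespaces are placeholder assumptions):

apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: microservice
  namespace: argocd
spec:
  generators:
  - clusters: {}                      # one Application per cluster registered in Argo CD
  template:
    metadata:
      name: 'microservice-{{name}}'   # {{name}} is the registered cluster’s name
    spec:
      project: default
      source:
        repoURL: https://git.example.com/org/microservice.git   # placeholder repository
        targetRevision: main
        path: deploy/manifests                                  # placeholder path
      destination:
        server: '{{server}}'          # the generated cluster’s API server URL
        namespace: microservice
      syncPolicy:
        automated:
          prune: true
          selfHeal: true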
Option A is incorrect because a CronJob introduces imperative redeployment logic and does not integrate with Argo CD’s reconciliation model. Option B is not scalable or maintainable, as it requires manual configuration for each cluster and increases the risk of configuration drift. Option D, while useful for packaging applications, still results in a single Application object and does not natively handle multi-cluster fan-out by itself.
Therefore, the correct and verified answer is Option C: creating an Argo CD ApplicationSet backed by a Git repository, which is the recommended and documented solution for multi-cluster application delivery in Argo CD.
=========
What is the main purpose of the Ingress in Kubernetes?
Access HTTP and HTTPS services running in the cluster based on their IP address.
Access services different from HTTP or HTTPS running in the cluster based on their IP address.
Access services different from HTTP or HTTPS running in the cluster based on their path.
Access HTTP and HTTPS services running in the cluster based on their path.
D is correct. Ingress is a Kubernetes API object that defines rules for external access to HTTP/HTTPS services in a cluster. The defining capability is Layer 7 routing—commonly host-based and path-based routing—so you can route requests like example.com/app1 to one Service and example.com/app2 to another. While the question mentions “based on their path,” that’s a classic and correct Ingress use case (and host routing is also common).
Ingress itself is only the specification of routing rules. An Ingress controller (e.g., NGINX Ingress Controller, HAProxy, Traefik, cloud-provider controllers) is what actually implements those rules by configuring a reverse proxy/load balancer. Ingress typically terminates TLS (HTTPS) and forwards traffic to internal Services, giving a more expressive alternative to exposing every service via NodePort/LoadBalancer.
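A minimal sketch of path-based routing (the host, Service names, and ingressClassName are placeholders):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-routes               # hypothetical name
spec:
  ingressClassName: nginx        # assumes an NGINX ingress controller is installed
  rules:
  - host: example.com
    http:
      paths:
      - path: /app1
        pathType: Prefix
        backend:
          service:
            name: app1-svc
            port:
              number: 80
      - path: /app2
        pathType: Prefix
        backend:
          service:
            name: app2-svc
            port:
              number: 80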
Why the other options are wrong:
A suggests routing by IP address; Ingress is fundamentally about HTTP(S) routing rules (host/path), not direct Service IP access.
B and C describe non-HTTP protocols; Ingress is specifically for HTTP/HTTPS. For TCP/UDP or other protocols, you generally use Services of type LoadBalancer/NodePort, Gateway API implementations, or controller-specific TCP/UDP configuration.
Ingress is a foundational building block for cloud-native application delivery because it centralizes edge routing, enables TLS management, and supports gradual adoption patterns (multiple services under one domain). Therefore, the main purpose described here matches D.
=========
What is the primary purpose of a Horizontal Pod Autoscaler (HPA) in Kubernetes?
To automatically scale the number of Pod replicas based on resource utilization.
To track performance metrics and report health status for nodes and Pods.
To coordinate rolling updates of Pods when deploying new application versions.
To allocate and manage persistent volumes required by stateful applications.
The Horizontal Pod Autoscaler (HPA) is a core Kubernetes feature designed to automatically scale the number of Pod replicas in a workload based on observed metrics, making option A the correct answer. Its primary goal is to ensure that applications can handle varying levels of demand while maintaining performance and resource efficiency.
HPA works by continuously monitoring metrics such as CPU utilization, memory usage, or custom and external metrics provided through the Kubernetes metrics APIs. Based on target thresholds defined by the user, the HPA increases or decreases the number of replicas in a scalable resource like a Deployment, ReplicaSet, or StatefulSet. When demand increases, HPA adds more Pods to handle the load. When demand decreases, it scales down Pods to free resources and reduce costs.
Option B is incorrect because tracking performance metrics and reporting health status is handled by components such as the metrics-server, monitoring systems, and observability tools—not by the HPA itself. Option C is incorrect because rolling updates are managed by Deployment strategies, not by the HPA. Option D is incorrect because persistent volume management is handled by Kubernetes storage resources and CSI drivers, not by autoscalers.
HPA operates at the Pod replica level, which is why it is called “horizontal” scaling—scaling out or in by changing the number of Pods, rather than adjusting resource limits of individual Pods (which would be vertical scaling). This makes HPA particularly effective for stateless applications that can scale horizontally to meet demand.
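A hedged sketch of an HPA targeting a Deployment on CPU utilization (names and thresholds are illustrative):

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa                # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70   # scale out when average CPU usage exceeds 70% of requests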
In practice, HPA is commonly used in production Kubernetes environments to maintain application responsiveness under load while optimizing cluster resource usage. It integrates seamlessly with Kubernetes’ declarative model and self-healing mechanisms.
Therefore, the correct and verified answer is Option A, as the Horizontal Pod Autoscaler’s primary function is to automatically scale Pod replicas based on resource utilization and defined metrics.
=========
In the DevOps framework and culture, who builds, automates, and offers continuous delivery tools for developer teams?
Application Users
Application Developers
Platform Engineers
Cluster Operators
The correct answer is C (Platform Engineers). In modern DevOps and platform operating models, platform engineering teams build and maintain the shared delivery capabilities that product/application teams use to ship software safely and quickly. This includes CI/CD pipeline templates, standardized build and test automation, artifact management (registries), deployment tooling (Helm/Kustomize/GitOps), secrets management patterns, policy guardrails, and paved-road workflows that reduce cognitive load for developers.
While application developers (B) write the application code and often contribute pipeline steps for their service, the “build, automate, and offer tooling for developer teams” responsibility maps directly to platform engineering: they provide the internal platform that turns Kubernetes and cloud services into a consumable product. This is especially common in Kubernetes-based organizations where you want consistent deployment standards, repeatable security checks, and uniform observability.
Cluster operators (D) typically focus on the health and lifecycle of the Kubernetes clusters themselves: upgrades, node pools, networking, storage, cluster security posture, and control plane reliability. They may work closely with platform engineers, but “continuous delivery tools for developer teams” is broader than cluster operations. Application users (A) are consumers of the software, not builders of delivery tooling.
In cloud-native application delivery, this division of labor is important: platform engineers enable higher velocity with safety by automating the software supply chain—builds, tests, scans, deploys, progressive delivery, and rollback. Kubernetes provides the runtime substrate, but the platform team makes it easy and safe for developers to use it repeatedly and consistently across many services.
Therefore, Platform Engineers (C) is the verified correct choice.
=========
Which of the following is a good habit for cloud native cost efficiency?
Follow an automated approach to cost optimization, including visibility and forecasting.
Follow manual processes for cost analysis, including visibility and forecasting.
Use only one cloud provider to simplify the cost analysis.
Keep your legacy workloads unchanged, to avoid cloud costs.
The correct answer is A. In cloud-native environments, costs are highly dynamic: autoscaling changes compute footprint, ephemeral environments come and go, and usage-based billing applies to storage, network egress, load balancers, and observability tooling. Because of this variability, automation is the most sustainable way to achieve cost efficiency. Automated visibility (dashboards, chargeback/showback), anomaly detection, and forecasting help teams understand where spend is coming from and how it changes over time. Automated optimization actions can include right-sizing requests/limits, enforcing TTLs on preview environments, scaling down idle clusters, and cleaning unused resources.
Manual processes (B) don’t scale as complexity grows. By the time someone reviews a spreadsheet or dashboard weekly, cost spikes may have already occurred. Automation enables fast feedback loops and guardrails, which is essential for preventing runaway spend caused by misconfiguration (e.g., excessive log ingestion, unbounded autoscaling, oversized node pools).
Option C is not a cost-efficiency “habit.” Single-provider strategies may simplify some billing views, but they can also reduce leverage and may not be feasible for resilience/compliance; it’s a business choice, not a best practice for cloud-native cost management. Option D is counterproductive: keeping legacy workloads unchanged often wastes money because cloud efficiency typically requires adapting workloads—right-sizing, adopting autoscaling, and using managed services appropriately.
In Kubernetes specifically, cost efficiency is tightly linked to resource management: accurate CPU/memory requests, limits where appropriate, cluster autoscaler tuning, and avoiding overprovisioning. Observability also matters because you can’t optimize what you can’t measure. Therefore, the best habit is an automated cost optimization approach with strong visibility and forecasting—A.
=========
In a cloud native world, what does the IaC abbreviation stand for?
Infrastructure and Code
Infrastructure as Code
Infrastructure above Code
Infrastructure across Code
IaC stands for Infrastructure as Code, which is option B. In cloud native environments, IaC is a core operational practice: infrastructure (networks, clusters, load balancers, IAM roles, storage classes, DNS records, and more) is defined using code-like, declarative configuration rather than manual, click-driven changes. This approach mirrors Kubernetes’ own declarative model—where you define desired state in manifests and controllers reconcile the cluster to match.
IaC improves reliability and velocity because it makes infrastructure repeatable, version-controlled, reviewable, and testable. Teams can store infrastructure definitions in Git, use pull requests for change review, and run automated checks to validate formatting, policies, and safety constraints. If an environment must be recreated (disaster recovery, test environments, regional expansion), IaC enables consistent reproduction with fewer human errors.
In Kubernetes-centric workflows, IaC often covers both the base platform and the workloads layered on top. For example, provisioning might include the Kubernetes control plane, node pools, networking, and identity integration, while Kubernetes manifests (or Helm/Kustomize) define Deployments, Services, RBAC, Ingress, and storage resources. GitOps extends this further by continuously reconciling cluster configuration from a Git source of truth.
The incorrect options (Infrastructure and Code / above / across) are not standard terms. The key idea is “infrastructure treated like software”: changes are made through code commits, go through CI checks, and are rolled out in controlled ways. This aligns with cloud native goals: faster iteration, safer operations, and easier auditing. In short, IaC is the operational backbone that makes Kubernetes and cloud platforms manageable at scale, enabling consistent environments and reducing configuration drift.
=========
What feature must a CNI support to control specific traffic flows for workloads running in Kubernetes?
Border Gateway Protocol
IP Address Management
Pod Security Policy
Network Policies
To control which workloads can communicate with which other workloads in Kubernetes, you use NetworkPolicy resources—but enforcement depends on the cluster’s networking implementation. Therefore, for traffic-flow control, the CNI/plugin must support Network Policies, making D correct.
Kubernetes defines the NetworkPolicy API as a declarative way to specify allowed ingress and egress traffic based on selectors (Pod labels, namespaces, IP blocks) and ports/protocols. However, Kubernetes itself does not enforce NetworkPolicy rules; enforcement is provided by the network plugin (or associated dataplane components). If your CNI does not implement NetworkPolicy, the objects may exist in the API but have no effect—Pods will communicate freely by default.
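A minimal sketch, assuming Pods labeled app: api and app: web (labels and port are placeholders): once this policy selects the api Pods, only the listed ingress is allowed to reach them.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-web-to-api       # hypothetical name
spec:
  podSelector:
    matchLabels:
      app: api                 # the Pods this policy protects
  policyTypes: ["Ingress"]
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: web             # only traffic from these Pods is allowed
    ports:
    - protocol: TCP
      port: 8080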
Option B (IP Address Management) is often part of CNI responsibilities, but IPAM is about assigning addresses, not enforcing L3/L4 security policy. Option A (BGP) is used by some CNIs to advertise routes (for example, in certain Calico deployments), but BGP is not the general requirement for policy enforcement. Option C (Pod Security Policy) is a deprecated/removed Kubernetes admission feature related to Pod security settings, not network flow control.
From a Kubernetes security standpoint, NetworkPolicies are a key tool for implementing least privilege at the network layer—limiting lateral movement, reducing blast radius, and segmenting environments. But they only work when the chosen CNI supports them. Thus, the correct answer is D: Network Policies.
=========
Which of the following would fall under the responsibilities of an SRE?
Developing a new application feature.
Creating a monitoring baseline for an application.
Submitting a budget for running an application in a cloud.
Writing policy on how to submit a code change.
Site Reliability Engineering (SRE) focuses on reliability, availability, performance, and operational excellence using engineering approaches. Among the options, creating a monitoring baseline for an application is a classic SRE responsibility, so B is correct. A monitoring baseline typically includes defining key service-level signals (latency, traffic, errors, saturation), establishing dashboards, setting sensible alert thresholds, and ensuring telemetry is complete enough to support incident response and capacity planning.
In Kubernetes environments, SRE work often involves ensuring that workloads expose health endpoints for probes, that resource requests/limits are set to allow stable scheduling and autoscaling, and that observability pipelines (metrics, logs, traces) are consistent. Building a monitoring baseline also ties into SLO/SLI practices: SREs define what “good” looks like, measure it continuously, and create alerts that notify teams when the system deviates from those expectations.
Option A is primarily an application developer task—SREs may contribute to reliability features, but core product feature development is usually owned by engineering teams. Option C is more aligned with finance, FinOps, or management responsibilities, though SRE data can inform costs. Option D is closer to governance, platform policy, or developer experience/process ownership; SREs might influence processes, but “policy on how to submit a code change” is not the defining SRE duty compared to monitoring and reliability engineering.
Therefore, the best verified choice is B, because establishing monitoring baselines is central to operating reliable services on Kubernetes.
=========
In Kubernetes, what is the primary function of a RoleBinding?
To provide a user or group with permissions across all resources at the cluster level.
To assign the permissions of a Role to a user, group, or service account within a namespace.
To enforce namespace network rules by binding policies to Pods running in the namespace.
To create and define a new Role object that contains a specific set of permissions.
In Kubernetes, authorization is managed using Role-Based Access Control (RBAC), which defines what actions identities can perform on which resources. Within this model, a RoleBinding plays a crucial role by connecting permissions to identities, making option B the correct answer.
A Role defines a set of permissions—such as the ability to get, list, create, or delete specific resources—but by itself, a Role does not grant those permissions to anyone. A RoleBinding is required to bind that Role to a specific subject, such as a user, group, or service account. This binding is namespace-scoped, meaning it applies only within the namespace where the RoleBinding is created. As a result, RoleBindings enable fine-grained access control within individual namespaces, which is essential for multi-tenant and least-privilege environments.
When a RoleBinding is created, it references a Role (or a ClusterRole) and assigns its permissions to one or more subjects within that namespace. This allows administrators to reuse existing roles while precisely controlling who can perform certain actions and where. For example, a RoleBinding can grant a service account read-only access to ConfigMaps in a single namespace without affecting access elsewhere in the cluster.
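A minimal sketch of that ConfigMap read-only example (the namespace, Role name, and service account are placeholders):

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: configmap-reader
  namespace: team-a                 # hypothetical namespace
rules:
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-configmaps
  namespace: team-a
subjects:
- kind: ServiceAccount
  name: app-sa                      # hypothetical service account
  namespace: team-a
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: configmap-reader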
Option A is incorrect because cluster-wide permissions are granted using a ClusterRoleBinding, not a RoleBinding. Option C is incorrect because network rules are enforced using NetworkPolicies, not RBAC objects. Option D is incorrect because Roles are defined independently and only describe permissions; they do not assign them to identities.
In summary, a RoleBinding’s primary purpose is to assign the permissions defined in a Role to users, groups, or service accounts within a specific namespace. This separation of permission definition (Role) and permission assignment (RoleBinding) is a fundamental principle of Kubernetes RBAC and is clearly documented in Kubernetes authorization architecture.
=========
=========
Which of the following is a challenge derived from running cloud native applications?
The operational costs of maintaining the data center of the company.
Cost optimization is complex to maintain across different public cloud environments.
The lack of different container images available in public image repositories.
The lack of services provided by the most common public clouds.
The correct answer is B. Cloud-native applications often run across multiple environments—different cloud providers, regions, accounts/projects, and sometimes hybrid deployments. This introduces real cost-management complexity: pricing models differ (compute types, storage tiers, network egress), discount mechanisms vary (reserved capacity, savings plans), and telemetry/charge attribution can be inconsistent. When you add Kubernetes, the abstraction layer can further obscure cost drivers because costs are incurred at the infrastructure level (nodes, disks, load balancers) while consumption happens at the workload level (namespaces, Pods, services).
Option A is less relevant because cloud-native adoption often reduces dependence on maintaining a private datacenter; many organizations adopt cloud-native specifically to avoid datacenter CapEx/ops overhead. Option C is generally untrue—public registries and vendor registries contain vast numbers of images; the challenge is more about provenance, security, and supply chain than “lack of images.” Option D is incorrect because major clouds offer abundant services; the difficulty is choosing among them and controlling cost/complexity, not a lack of services.
Cost optimization being complex is a recognized challenge because cloud-native architectures include microservices sprawl, autoscaling, ephemeral environments, and pay-per-use dependencies (managed databases, message queues, observability). Small misconfigurations can cause big bills: noisy logs, over-requested resources, unbounded HPA scaling, and egress-heavy architectures. That’s why practices like FinOps, tagging/labeling for allocation, and automated guardrails are emphasized.
So the best answer describing a real, common cloud-native challenge is B.
=========
What fields must exist in any Kubernetes object (e.g. YAML) file?
apiVersion, kind, metadata
kind, namespace, data
apiVersion, metadata, namespace
kind, metadata, data
Any Kubernetes object manifest must include apiVersion, kind, and metadata, which makes A correct. This comes directly from how Kubernetes resources are represented and processed by the API server.
apiVersion tells Kubernetes which API group and version should be used to interpret the object (for example v1, apps/v1, batch/v1). This matters because schemas and available fields can change between versions.
kind specifies the type of object you are creating (for example Pod, Service, Deployment, ConfigMap). Kubernetes uses this to route the request to the correct API endpoint and schema.
metadata contains identifying and organizational information such as name, namespace (when namespaced), labels, and annotations. At minimum, most objects require a name; labels and annotations are optional but extremely common for selection and tooling.
A common point of confusion is spec. Many Kubernetes objects include spec because they define desired state (like a Deployment’s replica count, Pod template, update strategy). However, the question asks what fields must exist in any Kubernetes object file. Not all objects require a spec in the same way (and some objects include other top-level sections like data for ConfigMaps/Secrets or rules for RBAC objects). The truly universal top-level requirements are the trio in option A.
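For illustration, a minimal ConfigMap manifest shows the three universal fields plus its kind-specific data section; the name and contents are hypothetical:

apiVersion: v1            # API group/version used to interpret the object
kind: ConfigMap           # the object type
metadata:
  name: app-config        # a name is required for almost all objects
  namespace: dev          # optional for namespaced objects; defaults to the current namespace
data:                     # kind-specific section (ConfigMaps use data rather than spec)
  LOG_LEVEL: "info"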
Options B, C, and D include fields that are not universally required (namespace is not required for cluster-scoped objects, and data only applies to certain kinds like ConfigMaps/Secrets). Therefore, apiVersion + kind + metadata is the correct, general rule and matches Kubernetes object structure.
=========
What is the reference implementation of the OCI runtime specification?
lxc
CRI-O
runc
Docker
The verified correct answer is C (runc). The Open Container Initiative (OCI) defines standards for container image format and runtime behavior. The OCI runtime specification describes how to run a container (process execution, namespaces, cgroups, filesystem mounts, capabilities, etc.). runc is widely recognized as the reference implementation of that runtime spec and is used underneath many higher-level container runtimes.
In common container stacks, Kubernetes nodes typically run a CRI-compliant runtime such as containerd or CRI-O. Those runtimes handle image management, container lifecycle coordination, and CRI integration, but they usually invoke an OCI runtime to actually create and start containers. In many deployments, that OCI runtime is runc (or a compatible alternative). This layering helps keep responsibilities separated: CRI runtime manages orchestration-facing operations; OCI runtime performs the low-level container creation according to the standardized spec.
Option A (lxc) is an older Linux containers technology and tooling ecosystem, but it is not the OCI runtime reference implementation. Option B (CRI-O) is a Kubernetes-focused container runtime that implements CRI; it uses OCI runtimes (often runc) underneath, so it’s not the reference implementation itself. Option D (Docker) is a broader platform/tooling suite; while Docker historically used runc under the hood and helped popularize containers, the OCI reference runtime implementation is runc, not Docker.
Understanding this matters in container orchestration contexts because it clarifies what Kubernetes depends on: Kubernetes relies on CRI for runtime integration, and runtimes rely on OCI standards for interoperability. OCI standards ensure that images and runtime behavior are portable across tools and vendors, and runc is the canonical implementation that demonstrates those standards in practice.
Therefore, the correct answer is C: runc.
=========
What can be used to create a job that will run at specified times/dates or on a repeating schedule?
Job
CalendarJob
BatchJob
CronJob
The correct answer is D: CronJob. A Kubernetes CronJob is specifically designed for creating Jobs on a schedule—either at specified times/dates (expressed via cron syntax) or on a repeating interval (hourly, daily, weekly). When the schedule triggers, the CronJob controller creates a Job, and the Job controller creates the Pods that execute the workload to completion.
Option A (Job) is not inherently scheduled. A Job runs when you create it, and it continues until it completes successfully or fails according to its retry/backoff behavior. If you want it to run periodically, you need something else to create the Job each time. CronJob is the built-in mechanism for that scheduling.
Options B and C are not standard Kubernetes workload objects. Kubernetes does not include “CalendarJob” or “BatchJob” as official API kinds. The scheduling primitive is CronJob.
CronJobs also include important operational controls: concurrency policies prevent overlapping runs, deadlines control missed schedules, and history limits manage old Job retention. This makes CronJobs more robust than ad-hoc scheduling approaches and keeps the workload lifecycle visible in the Kubernetes API (status/events/logs). It also means you can apply standard Kubernetes patterns: use a service account with least privilege, mount Secrets/ConfigMaps, run in specific namespaces, and manage resource requests/limits so that scheduled workloads don’t destabilize the cluster.
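A minimal sketch of such a CronJob follows; the name, schedule, and image are hypothetical:

apiVersion: batch/v1
kind: CronJob
metadata:
  name: nightly-report                 # hypothetical name
spec:
  schedule: "0 2 * * *"                # cron syntax: every day at 02:00
  concurrencyPolicy: Forbid            # skip a run if the previous one is still active
  successfulJobsHistoryLimit: 3        # retain only the last few completed Jobs
  failedJobsHistoryLimit: 1
  jobTemplate:
    spec:
      backoffLimit: 2                  # retry a failing Pod up to twice
      template:
        spec:
          restartPolicy: OnFailure     # Jobs must use OnFailure or Never
          containers:
          - name: report
            image: registry.example.com/report:1.0   # hypothetical image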
So the correct Kubernetes resource for scheduled and repeating job execution is CronJob (D).
=========
In the Kubernetes platform, which component is responsible for running containers?
etcd
CRI-O
cloud-controller-manager
kube-controller-manager
In Kubernetes, the actual act of running containers on a node is performed by the container runtime. The kubelet instructs the runtime via CRI, and the runtime pulls images, creates containers, and manages their lifecycle. Among the options provided, CRI-O is the only container runtime, so B is correct.
It’s important to be precise: the component that “runs containers” is not the control plane and not etcd. etcd (option A) stores cluster state (API objects) as the backing datastore. It never runs containers. cloud-controller-manager (option C) integrates with cloud APIs for infrastructure like load balancers and nodes. kube-controller-manager (option D) runs controllers that reconcile Kubernetes objects (Deployments, Jobs, Nodes, etc.) but does not execute containers on worker nodes.
CRI-O is a CRI implementation that is optimized for Kubernetes and typically uses an OCI runtime (like runc) under the hood to start containers. Another widely used runtime is containerd. The runtime is installed on nodes and is a prerequisite for kubelet to start Pods. When a Pod is scheduled to a node, kubelet reads the PodSpec and asks the runtime to create a “pod sandbox” and then start the container processes. Runtime behavior also includes pulling images, setting up namespaces/cgroups, and exposing logs/stdout streams back to Kubernetes tooling.
So while “the container runtime” is the most general answer, the question’s option list makes CRI-O the correct selection because it is a container runtime responsible for running containers in Kubernetes.
=========
Which of the following resources helps in managing a stateless application workload on a Kubernetes cluster?
DaemonSet
StatefulSet
kubectl
Deployment
The correct answer is D: Deployment. A Deployment is the standard Kubernetes controller for managing stateless applications. It provides declarative updates, replica management, and rollout/rollback functionality. You define the desired state (container image, environment variables, ports, replica count) in the Deployment spec, and Kubernetes ensures the specified number of Pods are running and updated according to strategy (RollingUpdate by default).
Stateless workloads are ideal for Deployments because each replica is interchangeable. If a Pod dies, a new one can be created anywhere; if traffic increases, replicas can be increased; if you need to update the app, a new ReplicaSet is created and traffic shifts gradually to new Pods. Deployments integrate naturally with Services for stable networking and load balancing.
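A minimal Deployment sketch for such a stateless service is shown below; the names, image, and ports are hypothetical:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                                   # hypothetical name
spec:
  replicas: 3                                 # desired number of interchangeable Pods
  selector:
    matchLabels:
      app: web                                # must match the Pod template labels
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: registry.example.com/web:1.2.0 # pinned tag rather than latest
        ports:
        - containerPort: 8080
        resources:
          requests:
            cpu: 100m                         # requests help the scheduler place replicas
            memory: 128Mi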
Why the other options are incorrect:
A DaemonSet ensures one Pod per node (or selected nodes). It’s for node-level agents, not generic stateless service replicas.
A StatefulSet is for workloads needing stable identity, ordered rollout, and persistent storage per replica (databases, quorum systems). That’s not the typical stateless app case.
kubectl is a CLI tool; it doesn’t “manage†workloads as a controller resource.
In real cluster operations, almost every stateless microservice is represented as a Deployment plus a Service (and often an Ingress/Gateway for edge routing). Deployments also support advanced delivery patterns (maxSurge/maxUnavailable tuning) and easy integration with HPA for horizontal scaling. Because the question is specifically “managing a stateless application workload,” the Kubernetes resource designed for that is clearly the Deployment.
=========
What is the role of the ingressClassName field in a Kubernetes Ingress resource?
It defines the type of protocol (HTTP or HTTPS) that the Ingress Controller should process.
It specifies the backend Service used by the Ingress Controller to route external requests.
It determines how routing rules are prioritized when multiple Ingress objects are applied.
It indicates which Ingress Controller should implement the rules defined in the Ingress resource.
The ingressClassName field in a Kubernetes Ingress resource is used to explicitly specify which Ingress Controller is responsible for processing and enforcing the rules defined in that Ingress. This makes option D the correct answer.
In Kubernetes clusters, it is common to have multiple Ingress Controllers running at the same time. For example, a cluster might run an NGINX Ingress Controller, a cloud-provider-specific controller, and an internal-only controller simultaneously. Without a clear mechanism to select which controller should handle a given Ingress resource, multiple controllers could attempt to process the same rules, leading to conflicts or undefined behavior.
The ingressClassName field solves this problem by referencing an IngressClass object. The IngressClass defines the controller implementation (via the controller field), and the Ingress resource uses ingressClassName to declare which class—and therefore which controller—should act on it. This creates a clean and explicit binding between an Ingress and its controller.
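A sketch of that binding is shown below; the Ingress name, class name, host, and backend Service are hypothetical, and the class must match an IngressClass that actually exists in the cluster:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress                 # hypothetical name
spec:
  ingressClassName: nginx           # selects the controller via its IngressClass
  rules:
  - host: app.example.com           # hypothetical hostname
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web               # hypothetical backend Service
            port:
              number: 80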
Option A is incorrect because protocol handling (HTTP vs HTTPS) is defined through TLS configuration and service ports, not by ingressClassName. Option B is incorrect because backend Services are defined in the rules and backend sections of the Ingress specification. Option C is incorrect because routing priority is determined by path matching rules and controller-specific logic, not by ingressClassName.
Historically, annotations were used to select Ingress Controllers, but ingressClassName is now the recommended and standardized approach. It improves clarity, portability, and compatibility across different Kubernetes distributions and controllers.
In summary, the primary purpose of ingressClassName is to indicate which Ingress Controller should implement the routing rules for a given Ingress resource, making Option D the correct and verified answer.
=========
What is a best practice to minimize the container image size?
Use a DockerFile.
Use multistage builds.
Build images with different tags.
Add a build.sh script.
A proven best practice for minimizing container image size is to use multi-stage builds, so B is correct. Multi-stage builds allow you to separate the “build environment” from the “runtime environment.” In the first stage, you can use a full-featured base image (with compilers, package managers, and build tools) to compile your application or assemble artifacts. In the final stage, you copy only the resulting binaries or necessary runtime assets into a much smaller base image (for example, a distroless image or a slim OS image). This dramatically reduces the final image size because it excludes compilers, caches, and build dependencies that are not needed at runtime.
In cloud-native application delivery, smaller images matter for several reasons. They pull faster, which speeds up deployments, rollouts, and scaling events (Pods become Ready sooner). They also reduce attack surface by removing unnecessary packages, which helps security posture and scanning results. Smaller images tend to be simpler and more reproducible, improving reliability across environments.
Option A is not a size-minimization practice: using a Dockerfile is simply the standard way to define how to build an image; it doesn’t inherently reduce size. Option C (different tags) changes image identification but not size. Option D (a build script) may help automation, but it doesn’t guarantee smaller images; the image contents are determined by what ends up in the layers.
Multi-stage builds are commonly paired with other best practices: choosing minimal base images, cleaning package caches, avoiding copying unnecessary files (use .dockerignore), and reducing layer churn. But among the options, the clearest and most directly correct technique is multi-stage builds.
Therefore, the verified answer is B.
=========
How long should a stable API element in Kubernetes be supported (at minimum) after deprecation?
9 months
24 months
12 months
6 months
Kubernetes has a formal API deprecation policy to balance stability for users with the ability to evolve the platform. For a stable (GA) API element, Kubernetes commits to supporting that API for a minimum period after it is deprecated. The correct minimum in this question is 12 months, which corresponds to option C.
In practice, Kubernetes releases occur roughly every three to four months, and the deprecation policy is commonly communicated in terms of “releases” as well as time. A GA API that is deprecated in one release is typically kept available for multiple subsequent releases, giving cluster operators and application teams time to migrate manifests, client libraries, controllers, and automation. This matters because Kubernetes is often at the center of production delivery pipelines; abrupt API removals would break deployments, upgrades, and tooling. By guaranteeing a minimum support window, Kubernetes enables predictable upgrades and safer lifecycle management.
This policy also encourages teams to track API versions and plan migrations. For example, workloads might start on a beta API (which can change), but once an API reaches stable, users can expect a stronger compatibility promise. Deprecation warnings help surface risk early. In many clusters, you’ll see API server warnings and tooling hints when manifests use deprecated fields/versions, allowing proactive remediation before the removal release.
The 6-month and 9-month options would be too short for many enterprises to coordinate changes across multiple teams and environments. 24 months may be true for some ecosystems, but the Kubernetes stated minimum in this exam-style framing is 12 months. The key operational takeaway is: don’t ignore deprecation notices—they’re your clock for migration planning. Treat API version upgrades as part of routine cluster lifecycle hygiene to avoid being blocked during Kubernetes version upgrades when deprecated APIs are finally removed.
=========
Which of the following is a recommended security habit in Kubernetes?
Run the containers as the user with group ID 0 (root) and any user ID.
Disallow privilege escalation from within a container as the default option.
Run the containers as the user with user ID 0 (root) and any group ID.
Allow privilege escalation from within a container as the default option.
The correct answer is B. A widely recommended Kubernetes security best practice is to disallow privilege escalation inside containers by default. In Kubernetes Pod/Container security context, this is represented by allowPrivilegeEscalation: false. This setting prevents a process from gaining more privileges than its parent process—commonly via setuid/setgid binaries or other privilege-escalation mechanisms. Disallowing privilege escalation reduces the blast radius of a compromised container and aligns with least-privilege principles.
Options A and C are explicitly unsafe because they encourage running as root (UID 0 and/or GID 0). Running containers as root increases risk: if an attacker breaks out of the application process or exploits kernel/runtime vulnerabilities, having root inside the container can make privilege escalation and lateral movement easier. Modern Kubernetes security guidance strongly favors running as non-root (runAsNonRoot: true, explicit runAsUser), dropping Linux capabilities, using read-only root filesystems, and applying restrictive seccomp/AppArmor/SELinux profiles where possible.
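A hardened security context might look like the sketch below; the values reflect the restricted-style guidance described above, and the Pod name, UID, and image are hypothetical:

apiVersion: v1
kind: Pod
metadata:
  name: secure-app                          # hypothetical name
spec:
  securityContext:
    runAsNonRoot: true                      # refuse to start containers that run as UID 0
    runAsUser: 10001                        # explicit non-root UID (hypothetical value)
    seccompProfile:
      type: RuntimeDefault                  # default syscall filtering
  containers:
  - name: app
    image: registry.example.com/app:1.0     # hypothetical image
    securityContext:
      allowPrivilegeEscalation: false       # the recommended habit from this question
      readOnlyRootFilesystem: true
      capabilities:
        drop: ["ALL"]                       # drop all Linux capabilities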
Option D is the opposite of best practice. Allowing privilege escalation by default increases the attack surface and violates the idea of secure defaults.
Operationally, this habit is often enforced via admission controls and policies (e.g., Pod Security Admission in “restricted” mode, or policy engines like OPA Gatekeeper/Kyverno). It’s also important for compliance: many security baselines require containers to run as non-root and to prevent privilege escalation.
So, the recommended security habit among the choices is clearly B: Disallow privilege escalation.
=========
Which of the following is a valid PromQL query?
SELECT * from http_requests_total WHERE job=apiserver
http_requests_total WHERE (job="apiserver")
SELECT * from http_requests_total
http_requests_total(job="apiserver")
Prometheus Query Language (PromQL) uses a function-and-selector syntax, not SQL. A valid query typically starts with a metric name and optionally includes label matchers in curly braces. In the simplified quiz syntax given, the valid PromQL-style selector is best represented by D: http_requests_total(job="apiserver"), so D is correct.
Conceptually, what this query means is “select time series for the metric http_requests_total where the job label equals apiserver.” In standard PromQL formatting you most often see this as: http_requests_total{job="apiserver"}. Many training questions abbreviate braces and focus on the idea of filtering by labels; the key is that PromQL uses label matchers rather than SQL WHERE clauses.
Options A and C are invalid because they use SQL (SELECT * FROM ...) which is not PromQL. Option B is also invalid because PromQL does not use the keyword WHERE. PromQL filtering is done by applying label matchers directly to the metric selector.
In Kubernetes observability, PromQL is central to building dashboards and alerts from cluster metrics. For example, you might compute rates from counters: rate(http_requests_total{job="apiserver"}[5m]), aggregate by labels: sum by (code) (...), or alert on error ratios. Understanding the selector and label-matcher model is foundational because Prometheus metrics are multi-dimensional—labels define the slices you can filter and aggregate on.
So, within the provided options, D is the only one that follows PromQL’s metric+label-filter style and therefore is the verified correct answer.
=========
Which statement about Secrets is correct?
A Secret is part of a Pod specification.
Secret data is encrypted with the cluster private key by default.
Secret data is base64 encoded and stored unencrypted by default.
A Secret can only be used for confidential data.
The correct answer is C. By default, Kubernetes Secrets store their data as base64-encoded values in the API (backed by etcd). Base64 is an encoding mechanism, not encryption, so this does not provide confidentiality. Unless you explicitly configure encryption at rest for etcd (via the API server encryption provider configuration) and secure access controls, Secret contents should be treated as potentially readable by anyone with sufficient API access or access to etcd backups.
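For illustration, a Secret manifest stores values base64-encoded under data (values placed under stringData are encoded by the API server on write); the name and values are hypothetical:

apiVersion: v1
kind: Secret
metadata:
  name: db-credentials             # hypothetical name
type: Opaque
data:
  password: cGFzc3dvcmQxMjM=       # base64 of "password123": encoded, not encrypted
stringData:
  username: app-user               # plain text here, stored base64-encoded like data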
Option A is misleading: a Secret is its own Kubernetes resource (kind: Secret). While Pods can reference Secrets (as environment variables or mounted volumes), the Secret itself is not “part of the Pod spec” as an embedded object. Option B is incorrect because Kubernetes does not automatically encrypt Secret data with a cluster private key by default; encryption at rest is optional and must be enabled. Option D is incorrect because Secrets can store a range of sensitive or semi-sensitive data (tokens, certs, passwords), but Kubernetes does not enforce “only confidential data” semantics; it’s a storage mechanism with size and format constraints.
Operationally, best practices include: enabling encryption at rest, limiting access via RBAC, avoiding broad “list/get secrets” permissions, using dedicated service accounts, auditing access, and considering external secrets managers (Vault, cloud KMS-backed solutions) for higher assurance. Also, don’t confuse “Secret” with “secure by default.” The default protection is mainly about avoiding accidental plaintext exposure in manifests, not about cryptographic security.
So the only correct statement in the options is C.
=========
Which command will list the resource types that exist within a cluster?
kubectl api-resources
kubectl get namespaces
kubectl api-versions
curl https://kubectrl/namespaces
To list the resource types available in a Kubernetes cluster, you use kubectl api-resources, so A is correct. This command queries the API server’s discovery endpoints and prints a table of resources (kinds) that the cluster knows about, including their names, shortnames, API group/version, whether they are namespaced, and supported verbs. It’s extremely useful for learning what objects exist in a cluster—especially when CRDs are installed, because those custom resource types will also appear in the output.
Option C (kubectl api-versions) lists available API versions (group/version strings like v1, apps/v1, batch/v1) but does not directly list the resource kinds/types. It’s related discovery information but answers a different question. Option B (kubectl get namespaces) lists namespaces, not resource types. Option D is invalid (typo in URL and conceptually not the Kubernetes discovery mechanism).
Practically, kubectl api-resources is used during troubleshooting and exploration: you might use it to confirm whether a CRD is installed (e.g., certificates.cert-manager.io kinds), to check whether a resource is namespaced, or to find the correct kind name for kubectl get. It also helps understand what your cluster supports at the API layer (including aggregated APIs).
So, the verified correct command to list resource types that exist in the cluster is A: kubectl api-resources.
=========
What does the livenessProbe in Kubernetes help detect?
When a container is ready to serve traffic.
When a container has started successfully.
When a container exceeds resource limits.
When a container is unresponsive.
The liveness probe in Kubernetes is designed to detect whether a container is still running correctly or has entered a failed or unresponsive state. Its primary purpose is to determine whether a container should be restarted. When a liveness probe fails repeatedly, Kubernetes assumes the container is unhealthy and automatically restarts it to restore normal operation.
Option D correctly describes this behavior. Liveness probes are used to identify situations where an application is running but no longer functioning as expected—for example, a deadlock, infinite loop, or hung process that cannot recover on its own. In such cases, restarting the container is often the most effective remediation, and Kubernetes handles this automatically through the liveness probe mechanism.
Option A is incorrect because readiness probes—not liveness probes—determine whether a container is ready to receive traffic. A container can be alive but not ready, such as during startup or temporary maintenance. Option B is incorrect because startup success is handled by startup probes, which are specifically designed to manage slow-starting applications and delay liveness and readiness checks until initialization is complete. Option C is incorrect because exceeding resource limits is managed by the container runtime and kubelet (for example, OOMKills), not by probes.
Liveness probes can be implemented using HTTP requests, TCP socket checks, or command execution inside the container. If the probe fails beyond a configured threshold, Kubernetes restarts the container according to the Pod’s restart policy. This self-healing behavior is a core feature of Kubernetes and contributes significantly to application reliability.
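A sketch of an HTTP liveness probe follows; the endpoint path, port, and timings are hypothetical and should be tuned to the application:

apiVersion: v1
kind: Pod
metadata:
  name: web                                 # hypothetical name
spec:
  containers:
  - name: web
    image: registry.example.com/web:1.0     # hypothetical image
    livenessProbe:
      httpGet:
        path: /healthz                      # hypothetical health endpoint
        port: 8080
      initialDelaySeconds: 10               # wait before the first check
      periodSeconds: 10                     # probe interval
      failureThreshold: 3                   # restart after three consecutive failures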
Kubernetes documentation emphasizes using liveness probes carefully, as misconfiguration can cause unnecessary restarts. However, when used correctly, they provide a powerful way to automatically recover from application-level failures that Kubernetes cannot otherwise detect.
In summary, the liveness probe’s role is to detect when a container is unresponsive and needs to be restarted, making option D the correct and fully verified answer.
=========
Which are the two primary modes for Service discovery within a Kubernetes cluster?
Environment variables and DNS
API calls and LDAP
Labels and RADIUS
Selectors and DHCP
Kubernetes supports two primary built-in modes of Service discovery for workloads: environment variables and DNS, making A correct.
Environment variables: When a Pod is created, kubelet can inject environment variables for Services that exist in the same namespace at the time the Pod starts. These variables include the Service host and port (for example, for a Service named my-service: MY_SERVICE_SERVICE_HOST and MY_SERVICE_SERVICE_PORT). This approach is simple but has limitations: values are captured at Pod creation time and don’t automatically update if Services change, and it can become cluttered in namespaces with many Services.
DNS-based discovery: This is the most common and flexible method. Kubernetes cluster DNS (usually CoreDNS) provides names like service-name.namespace.svc.cluster.local. Clients resolve the name and connect to the Service, which then routes to backend Pods. DNS scales better, is dynamic with endpoint updates, supports headless Services for per-Pod discovery, and is the default pattern for microservice communication.
The other options are not Kubernetes service discovery modes. Labels and selectors are used internally to relate Services to Pods, but they are not what application code uses for discovery (apps typically don’t query selectors; they call DNS names). LDAP and RADIUS are identity/authentication protocols, not service discovery. DHCP is for IP assignment on networks, not for Kubernetes Service discovery.
Operationally, DNS is central: many applications assume name-based connectivity. If CoreDNS is misconfigured or overloaded, service-to-service calls may fail even if Pods and Services are otherwise healthy. Environment-variable discovery can still work for some legacy apps, but modern cloud-native practice strongly prefers DNS (and sometimes service meshes on top of it). The key exam concept is: Kubernetes provides service discovery via env vars and DNS.
=========
What is a Service?
A static network mapping from a Pod to a port.
A way to expose an application running on a set of Pods.
The network configuration for a group of Pods.
An NGINX load balancer that gets deployed for an application.
The correct answer is B: a Kubernetes Service is a stable way to expose an application running on a set of Pods. Pods are ephemeral—IPs can change when Pods are recreated, rescheduled, or scaled. A Service provides a consistent network identity (DNS name and usually a ClusterIP virtual IP) and a policy for routing traffic to the current healthy backends.
Typically, a Service uses a label selector to determine which Pods are part of the backend set. Kubernetes then maintains the corresponding endpoint data (Endpoints/EndpointSlice), and the cluster dataplane (kube-proxy or an eBPF-based implementation) forwards traffic from the Service IP/port to one of the Pod IPs. This enables reliable service discovery and load distribution across replicas, especially during rolling updates where Pods are constantly replaced.
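A minimal Service sketch illustrates the selector model; the names and ports are hypothetical:

apiVersion: v1
kind: Service
metadata:
  name: web                  # resolvable in-cluster as web.<namespace>.svc.cluster.local
spec:
  selector:
    app: web                 # traffic is routed to Pods carrying this label
  ports:
  - port: 80                 # port exposed on the Service's ClusterIP
    targetPort: 8080         # containerPort on the backend Pods

With no explicit type, this Service defaults to ClusterIP and is reachable only inside the cluster.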
Option A is incorrect because Service routing is not a “static mapping from a Pod to a port.” It’s dynamic and targets a set of Pods. Option C is too vague and misstates the concept; while Services relate to networking, they are not “the network configuration for a group of Pods” (that’s closer to NetworkPolicy/CNI configuration). Option D is incorrect because Kubernetes does not automatically deploy an NGINX load balancer when you create a Service. NGINX might be used as an Ingress controller or external load balancer in some setups, but a Service is a Kubernetes API abstraction, not a specific NGINX component.
Services come in several types (ClusterIP, NodePort, LoadBalancer, ExternalName), but the core definition remains the same: stable access to a dynamic set of Pods. This is foundational for microservices and for decoupling clients from the churn of Pod lifecycles.
So, the verified correct definition is B.
=========
What is the default deployment strategy in Kubernetes?
Rolling update
Blue/Green deployment
Canary deployment
Recreate deployment
For Kubernetes Deployments, the default update strategy is RollingUpdate, which corresponds to “Rolling update” in option A. Rolling updates replace old Pods with new Pods gradually, aiming to maintain availability during the rollout. Kubernetes does this by creating a new ReplicaSet for the updated Pod template and then scaling the new ReplicaSet up while scaling the old one down.
The pace and safety of a rolling update are controlled by parameters like maxUnavailable and maxSurge. maxUnavailable limits how many replicas can be unavailable during the update, protecting availability. maxSurge controls how many extra replicas can be created temporarily above the desired count, helping speed up rollouts while maintaining capacity. If readiness probes fail, Kubernetes will pause progression because new Pods aren’t becoming Ready, helping prevent a bad version from fully replacing a good one.
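These knobs live under the Deployment's strategy field; the excerpt below (values illustrative, not a complete manifest) shows where they sit:

spec:
  strategy:
    type: RollingUpdate          # the default strategy for Deployments
    rollingUpdate:
      maxUnavailable: 0          # never drop below the desired replica count
      maxSurge: 1                # allow one extra Pod above the desired count during the rollout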
Options B (Blue/Green) and C (Canary) are popular progressive delivery patterns, but they are not the default built-in Deployment strategy. They are typically implemented using additional tooling (service mesh routing, traffic splitting controllers, or specialized rollout controllers) or by operating multiple Deployments/Services. Option D (Recreate) is a valid strategy but not the default; it terminates all old Pods before creating new ones, causing downtime unless you have external buffering or multi-tier redundancy.
From an application delivery perspective, RollingUpdate aligns with Kubernetes’ declarative model: you update the desired Pod template and let the controller converge safely. kubectl rollout status is commonly used to monitor progress. Rollbacks are also supported because the Deployment tracks history. Therefore, the verified correct answer is A: Rolling update.
=========
What is the default value for authorization-mode in Kubernetes API server?
--authorization-mode=RBAC
--authorization-mode=AlwaysAllow
--authorization-mode=AlwaysDeny
--authorization-mode=ABAC
The Kubernetes API server supports multiple authorization modes that determine whether an authenticated request is allowed to perform an action (verb) on a resource. Historically, the API server’s default authorization mode was AlwaysAllow, meaning that once a request was authenticated, it would be authorized without further checks. That is why the correct answer here is B.
However, it’s crucial to distinguish “default flag value” from “recommended configuration.” In production clusters, running with AlwaysAllow is insecure because it effectively removes authorization controls—any authenticated user (or component credential) could do anything the API permits. Modern Kubernetes best practices strongly recommend enabling RBAC (Role-Based Access Control), often alongside Node and Webhook authorization, so that permissions are granted explicitly using Roles/ClusterRoles and RoleBindings/ClusterRoleBindings. Many managed Kubernetes distributions and kubeadm-based setups commonly enable RBAC by default as part of cluster bootstrap profiles, even if the API server’s historical default flag value is AlwaysAllow.
So, the exam-style interpretation of this question is about the API server flag default, not what most real clusters should run. With RBAC enabled, authorization becomes granular: you can control who can read Secrets, who can create Deployments, who can exec into Pods, and so on, scoped to namespaces or cluster-wide. ABAC (Attribute-Based Access Control) exists but is generally discouraged compared to RBAC because it relies on policy files and is less ergonomic and less commonly used. AlwaysDeny is useful for hard lockdown testing but not for normal clusters.
In short: AlwaysAllow is the API server’s default mode (answer B), but RBAC is the secure, recommended choice you should expect to see enabled in almost any serious Kubernetes environment.
=========
What is the main role of the Kubernetes DNS within a cluster?
Acts as a DNS server for virtual machines that are running outside the cluster.
Provides a DNS as a Service, allowing users to create zones and registries for domains that they own.
Allows Pods running in dual stack to convert IPv6 calls into IPv4 calls.
Provides consistent DNS names for Pods and Services for workloads that need to communicate with each other.
Kubernetes DNS (commonly implemented by CoreDNS) provides service discovery inside the cluster by assigning stable, consistent DNS names to Services and (optionally) Pods, which makes D correct. In a Kubernetes environment, Pods are ephemeral—IP addresses can change when Pods restart or move between nodes. DNS-based discovery allows applications to communicate using stable names rather than hardcoded IPs.
For Services, Kubernetes creates DNS records like service-name.namespace.svc.cluster.local, which resolve to the Service’s virtual IP (ClusterIP) or, for headless Services, to the set of Pod endpoints. This supports both load-balanced communication (standard Service) and per-Pod addressing (headless Service, commonly used with StatefulSets). Kubernetes DNS is therefore a core building block that enables microservices to locate each other reliably.
Option A is not Kubernetes DNS’s purpose; it serves cluster workloads rather than external VMs. Option B describes a managed DNS hosting product (creating zones/registries), which is outside the scope of cluster DNS. Option C describes protocol translation, which is not the role of DNS. Dual-stack support relates to IP families and networking configuration, not DNS translating IPv6 to IPv4.
In day-to-day Kubernetes operations, DNS reliability impacts everything: if DNS is unhealthy, Pods may fail to resolve Services, causing cascading outages. That’s why CoreDNS is typically deployed as a highly available add-on in kube-system, and why DNS caching and scaling are important for large clusters.
So the correct statement is D: Kubernetes DNS provides consistent DNS names so workloads can communicate reliably.
=========
In a cloud native environment, who is usually responsible for maintaining the workloads running across the different platforms?
The cloud provider.
The Site Reliability Engineering (SRE) team.
The team of developers.
The Support Engineering team (SE).
B (the Site Reliability Engineering team) is correct. In cloud-native organizations, SREs are commonly responsible for the reliability, availability, and operational health of workloads across platforms (multiple clusters, regions, clouds, and supporting services). While responsibilities vary by company, the classic SRE charter is to apply software engineering to operations: build automation, standardize runbooks, manage incident response, define SLOs/SLIs, and continuously improve system reliability.
Maintaining workloads “across different platforms” implies cross-cutting operational ownership: deployments need to behave consistently, rollouts must be safe, monitoring and alerting must be uniform, and incident practices must work across environments. SRE teams typically own or heavily influence the observability stack (metrics/logs/traces), operational readiness, capacity planning, and reliability guardrails (error budgets, progressive delivery, automated rollback triggers). They also collaborate closely with platform engineering and application teams, but SRE is often the group that ensures production workloads meet reliability targets.
Why other options are less correct:
The cloud provider (A) maintains the underlying cloud services, but not your application workloads’ correctness, SLOs, or operational processes.
Developers (C) do maintain application code and may own on-call in some models, but the question asks “usually” in cloud-native environments; SRE is the widely recognized function for workload reliability across platforms.
Support Engineering (D) typically focuses on customer support and troubleshooting from a user perspective, not maintaining platform workload reliability at scale.
So, the best and verified answer is B: SRE teams commonly maintain and ensure reliability of workloads across cloud-native platforms.
=========
Which Kubernetes Service type exposes a service only within the cluster?
ClusterIP
NodePort
LoadBalancer
ExternalName
In Kubernetes, a Service provides a stable network endpoint for a set of Pods and abstracts away their dynamic nature. Kubernetes offers several Service types, each designed for different exposure requirements. Among these, ClusterIP is the Service type that exposes an application only within the cluster, making it the correct answer.
When a Service is created with the ClusterIP type, Kubernetes assigns it a virtual IP address that is reachable exclusively from within the cluster’s network. This IP is used by other Pods and internal components to communicate with the Service through cluster DNS or environment variables. External traffic from outside the cluster cannot directly access a ClusterIP Service, which makes it ideal for internal APIs, backend services, and microservices that should not be publicly exposed.
Option B (NodePort) is incorrect because NodePort exposes the Service on a static port on each node’s IP address, allowing access from outside the cluster. Option C (LoadBalancer) is incorrect because it provisions an external load balancer—typically through a cloud provider—to expose the Service publicly. Option D (ExternalName) is incorrect because it does not create a proxy or internal endpoint at all; instead, it maps the Service name to an external DNS name outside the cluster.
ClusterIP is also the default Service type in Kubernetes. If no type is explicitly specified in a Service manifest, Kubernetes automatically assigns it as ClusterIP. This default behavior reflects the principle of least exposure, encouraging internal-only access unless external access is explicitly required.
From a cloud native architecture perspective, ClusterIP Services are fundamental to building secure, scalable microservices systems. They enable internal service-to-service communication while reducing the attack surface by preventing unintended external access.
According to Kubernetes documentation, ClusterIP Services are intended for internal communication within the cluster and are not reachable from outside the cluster network. Therefore, ClusterIP is the correct and fully verified answer, making option A the right choice.
=========
Which of the following characteristics is associated with container orchestration?
Application message distribution
Dynamic scheduling
Deploying application JAR files
Virtual machine distribution
A core capability of container orchestration is dynamic scheduling, so B is correct. Orchestration platforms (like Kubernetes) are responsible for deciding where containers (packaged as Pods in Kubernetes) should run, based on real-time cluster conditions and declared requirements. “Dynamic” means the system makes placement decisions continuously as workloads are created, updated, or fail, and as cluster capacity changes.
In Kubernetes, the scheduler evaluates Pods that have no assigned node, filters nodes that don’t meet requirements (resources, taints/tolerations, affinity/anti-affinity, topology constraints), and then scores remaining nodes to pick the best target. This scheduling happens at runtime and adapts to the current state of the cluster. If nodes go down or Pods crash, controllers create replacements and the scheduler places them again—another aspect of dynamic orchestration.
The other options don’t define container orchestration: “application message distribution” is more about messaging systems or service communication patterns, not orchestration. “Deploying application JAR files” is a packaging/deployment detail relevant to Java apps but not a defining orchestration capability. “Virtual machine distribution” refers to VM management rather than container orchestration; Kubernetes focuses on containers and Pods (even if those containers sometimes run in lightweight VMs via sandbox runtimes).
So, the defining trait here is that an orchestrator automatically and continuously schedules and reschedules workloads, rather than relying on static placement decisions.
=========
What is the core functionality of GitOps tools like Argo CD and Flux?
They track production changes made by a human in a Git repository and generate a human-readable audit trail.
They replace human operations with an agent that tracks Git commands.
They automatically create pull requests when dependencies are outdated.
They continuously compare the desired state in Git with the actual production state and notify or act upon differences.
The defining capability of GitOps controllers such as Argo CD and Flux is continuous reconciliation: they compare the desired state stored in Git to the actual state in the cluster and then alert and/or correct drift, making D correct. In GitOps, Git becomes the single source of truth for declarative configuration (Kubernetes manifests, Helm charts, Kustomize overlays). The controller watches Git for changes and applies them, and it also watches the cluster for divergence.
This is more than “auditing human changes” (option A). GitOps does provide auditability because changes are made via commits and pull requests, but the core functionality is the reconciliation loop that keeps cluster state aligned with Git, including optional automated sync/remediation. Option B is not accurate because GitOps is not about tracking user Git commands; it’s about reconciling desired state definitions. Option C (automatically creating pull requests for outdated dependencies) is a useful feature in some tooling ecosystems, but it is not the central defining behavior of GitOps controllers.
In Kubernetes delivery terms, this approach improves reliability: rollouts become repeatable, configuration drift is detected, and recovery is simpler (reapply known-good state from Git). It also supports separation of duties: platform teams can control policies and base layers, while app teams propose changes via PRs.
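As an illustration, this desired-state binding can be expressed with an Argo CD Application resource; the repository URL, path, and namespaces below are hypothetical, and the field names follow Argo CD's Application CRD, so verify them against the version you run:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: web                                             # hypothetical application name
  namespace: argocd                                     # namespace where Argo CD is typically installed
spec:
  project: default
  source:
    repoURL: https://github.com/example/app-config.git  # hypothetical Git repository (source of truth)
    targetRevision: main
    path: environments/prod                             # hypothetical path containing the manifests
  destination:
    server: https://kubernetes.default.svc              # the cluster Argo CD itself runs in
    namespace: web
  syncPolicy:
    automated:
      prune: true                                       # delete resources that were removed from Git
      selfHeal: true                                    # revert drift detected in the cluster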
So the verified statement is: GitOps tools continuously reconcile Git desired state with cluster actual state—exactly option D.
=========
If a Pod was waiting for container images to download on the scheduled node, what state would it be in?
Failed
Succeeded
Unknown
Pending
If a Pod is waiting for its container images to be pulled to the node, it remains in the Pending phase, so D is correct. Kubernetes Pod “phase” is a high-level summary of where the Pod is in its lifecycle. Pending means the Pod has been accepted by the cluster but one or more of its containers has not started yet. That can occur because the Pod is waiting to be scheduled, waiting on volume attachment/mount, or—very commonly—waiting for the container runtime to pull the image.
When image pulling is the blocker, kubectl describe pod <pod-name> typically shows events such as Pulling image, and the container status reports a waiting reason like ErrImagePull or ImagePullBackOff if the pull is slow or failing, while the Pod phase itself remains Pending.
Why the other phases don’t apply:
Succeeded is for run-to-completion Pods that have finished successfully (typical for Jobs).
Failed means the Pod terminated and at least one container terminated in failure (and won’t be restarted, depending on restartPolicy).
Unknown is used when the node can’t be contacted and the Pod’s state can’t be reliably determined (rare in healthy clusters).
A subtle but important Kubernetes detail: status “Waiting” reasons like ImagePullBackOff are container states inside .status.containerStatuses, while the Pod phase can still be Pending. So, “waiting for images to download” maps to Pod Pending, with container waiting reasons providing the deeper diagnosis.
Therefore, the verified correct answer is D: Pending.
=========
Which resource do you use to attach a volume in a Pod?
StorageVolume
PersistentVolume
StorageClass
PersistentVolumeClaim
In Kubernetes, Pods typically attach persistent storage by referencing a PersistentVolumeClaim (PVC), making D correct. A PVC is a user’s request for storage with specific requirements (size, access mode, storage class). Kubernetes then binds the PVC to a matching PersistentVolume (PV) (either pre-provisioned statically or created dynamically via a StorageClass and CSI provisioner). The Pod does not directly attach a PV; it references the PVC, and Kubernetes handles the binding and mounting.
This design separates responsibilities: administrators (or CSI drivers) manage PV provisioning and backend storage details, while developers consume storage via PVCs. In a Pod spec, you define a volume of type persistentVolumeClaim and set claimName to the name of the PVC; containers in the Pod then mount that volume at a path through volumeMounts.
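A sketch of a PVC and a Pod that mounts it follows; the names, size, and storage class are hypothetical:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-pvc                          # hypothetical claim name
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: standard              # hypothetical StorageClass for dynamic provisioning
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: registry.example.com/app:1.0   # hypothetical image
    volumeMounts:
    - name: data
      mountPath: /var/lib/app             # where the volume appears inside the container
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: data-pvc                 # references the PVC above; Kubernetes binds it to a PV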
Option B (PersistentVolume) is not directly referenced by Pods; PVs are cluster resources that represent actual storage. Pods don’t “pick” PVs; claims do. Option C (StorageClass) defines provisioning parameters (e.g., disk type, replication, binding mode) but is not what a Pod references to mount a volume. Option A is not a Kubernetes resource type.
Operationally, using PVCs enables dynamic provisioning and portability: the same Pod spec can be deployed across clusters where the StorageClass name maps to appropriate backend storage. It also supports lifecycle controls like reclaim policies (Delete/Retain) and snapshot/restore workflows depending on CSI capabilities.
So the Kubernetes resource you use in a Pod to attach a persistent volume is PersistentVolumeClaim, option D.
=========
Why do administrators need a container orchestration tool?
To manage the lifecycle of an elevated number of containers.
To assess the security risks of the container images used in production.
To learn how to transform monolithic applications into microservices.
Container orchestration tools such as Kubernetes are the future.
The correct answer is A. Container orchestration exists because running containers at scale is hard: you need to schedule workloads onto machines, keep them healthy, scale them up and down, roll out updates safely, and recover from failures automatically. Administrators (and platform teams) use orchestration tools like Kubernetes to manage the lifecycle of many containers across many nodes—handling placement, restart, rescheduling, networking/service discovery, and desired-state reconciliation.
At small scale, you can run containers manually or with basic scripts. But at “elevated” scale (many services, many replicas, many nodes), manual management becomes unreliable and brittle. Orchestration provides primitives and controllers that continuously converge actual state toward desired state: if a container crashes, it is restarted; if a node dies, replacement Pods are scheduled; if traffic increases, replicas can be increased via autoscaling; if configuration changes, rolling updates can be coordinated with readiness checks.
Option B (security risk assessment) is important, but it’s not why orchestration tools exist. Image scanning and supply-chain security are typically handled by CI/CD tooling and registries, not by orchestration as the primary purpose. Option C is a separate architectural modernization effort; orchestration can support microservices, but it isn’t required “to learn transformation.” Option D is an opinion statement rather than a functional need.
So the core administrator need is lifecycle management at scale: ensuring workloads run reliably, predictably, and efficiently across a fleet. That is exactly what option A states.
=========
How can you monitor the progress for an updated Deployment/DaemonSets/StatefulSets?
kubectl rollout watch
kubectl rollout progress
kubectl rollout state
kubectl rollout status
To monitor rollout progress for Kubernetes workload updates (most commonly Deployments, and also StatefulSets and DaemonSets where applicable), the standard kubectl command is kubectl rollout status, which makes D correct.
Kubernetes manages updates declaratively through controllers. For a Deployment, an update typically creates a new ReplicaSet and gradually shifts replicas from the old to the new according to the strategy (e.g., RollingUpdate with maxUnavailable and maxSurge). For StatefulSets, updates may be ordered and respect stable identities, and for DaemonSets, an update replaces node-level Pods according to update strategy. In all cases, you often want a single command that tells you whether the controller has completed the update and whether the new replicas are available. kubectl rollout status queries the resource status and prints a progress view until completion or timeout.
The other commands listed are not the canonical kubectl subcommands. kubectl rollout watch, kubectl rollout progress, and kubectl rollout state are not standard rollout verbs in kubectl. The supported rollout verbs typically include status, history, undo, pause, and resume (depending on kubectl version and resource type).
Operationally, kubectl rollout status deployment/<name> waits until the rollout finishes, fails, or times out, which makes it useful in scripts and CI/CD pipelines to confirm that an update actually succeeded before promoting further changes.
=========
Which field in a Pod or Deployment manifest ensures that Pods are scheduled only on nodes with specific labels?
resources:
disktype: ssd
labels:
disktype: ssd
nodeSelector:
disktype: ssd
annotations:
disktype: ssd
In Kubernetes, Pod scheduling is handled by the Kubernetes scheduler, which is responsible for assigning Pods to suitable nodes based on a set of constraints and policies. One of the simplest and most commonly used mechanisms to control where Pods are scheduled is the nodeSelector field. The nodeSelector field allows you to constrain a Pod so that it is only eligible to run on nodes that have specific labels.
Node labels are key–value pairs attached to nodes by cluster administrators or automation tools. These labels typically describe node characteristics such as hardware type, disk type, geographic zone, or environment. For example, a node might be labeled with disktype=ssd to indicate that it has SSD-backed storage. When a Pod specification includes a nodeSelector with the same key–value pair, the scheduler will only consider nodes that match this label when placing the Pod.
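A sketch of a Pod constrained this way is shown below; the label matches the disktype=ssd example above, while the Pod name and image are hypothetical:

apiVersion: v1
kind: Pod
metadata:
  name: fast-storage-app                  # hypothetical name
spec:
  nodeSelector:
    disktype: ssd                         # only nodes labeled disktype=ssd are eligible
  containers:
  - name: app
    image: registry.example.com/app:1.0   # hypothetical image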
Option A (resources) is incorrect because resource specifications are used to define CPU and memory requests and limits for containers, not to influence node selection based on labels. Option B (labels) is also incorrect because Pod labels are metadata used for identification, grouping, and selection by other Kubernetes objects such as Services and Deployments; they do not affect scheduling decisions. Option D (annotations) is incorrect because annotations are intended for storing non-identifying metadata and are not interpreted by the scheduler for placement decisions.
The nodeSelector field is evaluated during scheduling, and if no nodes match the specified labels, the Pod will remain in a Pending state. While nodeSelector is simple and effective, it is considered a basic scheduling mechanism. For more advanced scheduling requirements—such as expressing preferences, using set-based matching, or combining multiple conditions—Kubernetes also provides node affinity and anti-affinity. However, nodeSelector remains a foundational and widely used feature for enforcing strict node placement based on labels, making option C the correct and verified answer according to Kubernetes documentation.
=========
What best describes cloud native service discovery?
It's a mechanism for applications and microservices to locate each other on a network.
It's a procedure for discovering a MAC address, associated with a given IP address.
It's used for automatically assigning IP addresses to devices connected to the network.
It's a protocol that turns human-readable domain names into IP addresses on the Internet.
Cloud native service discovery is fundamentally about how services and microservices find and connect to each other reliably in a dynamic environment, so A is correct. In cloud native systems (especially Kubernetes), instances are ephemeral: Pods can be created, destroyed, rescheduled, and scaled at any time. Hardcoding IPs breaks quickly. Service discovery provides stable names and lookup mechanisms so that one component can locate another even as underlying endpoints change.
In Kubernetes, service discovery is commonly achieved through Services (stable virtual IP + DNS name) and cluster DNS (CoreDNS). A Service selects a group of Pods via labels, and Kubernetes maintains the set of endpoints behind that Service. Clients connect to the Service name (DNS) and Kubernetes routes traffic to the current healthy Pods. For some workloads, headless Services provide DNS records that map directly to Pod IPs for per-instance discovery.
The other options describe different networking concepts: B is ARP (MAC discovery), C is DHCP (IP assignment), and D is DNS in a general internet sense. DNS is often used as a mechanism for service discovery, but cloud native service discovery is broader: it’s the overall mechanism enabling dynamic location of services, often implemented via DNS and/or environment variables and sometimes enhanced by service meshes.
So the best description remains A: a mechanism that allows applications and microservices to locate each other on a network in a dynamic environment.
=========
What component enables end users, different parts of the Kubernetes cluster, and external components to communicate with one another?
kubectl
AWS Management Console
Kubernetes API
Google Cloud SDK
The Kubernetes API is the central interface that enables communication between users, controllers, nodes, and external integrations, so C is correct. Kubernetes is fundamentally an API-driven system: all cluster state is represented as API objects, and all operations—create, update, delete, watch—flow through the API server.
End users typically interact with the Kubernetes API using tools like kubectl, client libraries, or dashboards. But those tools are clients; the shared communication “hub” is the API itself. Inside the cluster, core control plane components (controllers, scheduler) continuously watch the API for desired state and write status updates back. Worker nodes (via kubelet) also communicate with the API server to receive Pod specs, report node health, and update Pod statuses. External systems—cloud provider integrations, CI/CD pipelines, GitOps controllers, monitoring and policy engines—also integrate primarily through the Kubernetes API.
Option A (kubectl) is a CLI that talks to the Kubernetes API; it is not the underlying component that all parts use to communicate. Options B and D are cloud-provider tools and are not universal to Kubernetes clusters. Kubernetes runs across many environments, and the consistent interoperability layer is the Kubernetes API.
This API-centric architecture is what enables Kubernetes’ declarative model: you submit desired state to the API, and controllers reconcile actual state to match. It also enables extensibility: CRDs and admission webhooks expand what the API can represent and enforce. Therefore, the correct answer is C: Kubernetes API.
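To illustrate the API-centric point (a sketch that assumes a working kubeconfig; output will vary by cluster), note that kubectl is simply issuing REST calls that any HTTP client could make through the API server:
kubectl get --raw /api/v1/namespaces        # raw REST call routed through the API server
kubectl proxy &                             # open a local, authenticated proxy to the API server
curl http://127.0.0.1:8001/api/v1/nodes     # the same node objects, fetched with a plain HTTP client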
=========
What sentence is true about CronJobs in Kubernetes?
A CronJob creates one or multiple Jobs on a repeating schedule.
A CronJob creates one container on a repeating schedule.
CronJobs are useful on Linux but are obsolete in Kubernetes.
The CronJob schedule format is different in Kubernetes and Linux.
The true statement is A: a Kubernetes CronJob creates Jobs on a repeating schedule. CronJob is a controller designed for time-based execution. You define a schedule using standard cron syntax (minute, hour, day-of-month, month, day-of-week), and when the schedule triggers, the CronJob controller creates a Job object. Then the Job controller creates one or more Pods to run the task to completion.
Option B is incorrect because CronJobs do not “create one container”; they create Jobs, and Jobs create Pods (which may contain one or multiple containers). Option C is wrong because CronJobs are a core Kubernetes workload primitive for recurring tasks and remain widely used for periodic work like backups, batch processing, and cleanup. Option D is wrong because Kubernetes CronJobs intentionally use cron-like scheduling expressions; the format aligns with the cron concept (with Kubernetes-specific controller behavior around missed runs, concurrency, and history).
CronJobs also provide operational controls you don’t get from plain Linux cron on a node:
concurrencyPolicy (Allow/Forbid/Replace) to manage overlapping runs
startingDeadlineSeconds to control how missed schedules are handled
history limits for successful/failed Jobs to avoid clutter
integration with Kubernetes RBAC, Secrets, ConfigMaps, and volumes for consistent runtime configuration
consistent execution environment via container images, not ad-hoc node scripts
Because the CronJob creates Jobs as first-class API objects, you get observability (events/status), predictable retries, and lifecycle management. That’s why the accurate statement is A.
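A minimal sketch of such a CronJob (the name, image, and schedule are placeholders) that ties these pieces together:
apiVersion: batch/v1
kind: CronJob
metadata:
  name: nightly-backup
spec:
  schedule: "0 2 * * *"            # standard cron syntax: 02:00 every day
  concurrencyPolicy: Forbid        # skip a run if the previous Job is still active
  successfulJobsHistoryLimit: 3
  failedJobsHistoryLimit: 1
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: backup
            image: backup-tool:1.0   # placeholder image for the batch task
Each time the schedule fires, you would see a new Job (kubectl get jobs) and its Pod(s), which is exactly the CronJob -> Job -> Pod chain described above.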
=========
Which Kubernetes feature would you use to guard against split brain scenarios with your distributed application?
Replication controllers
Consensus protocols
Rolling updates
StatefulSet
The exam-expected Kubernetes feature here is StatefulSet, so D is the correct answer. StatefulSets are designed for distributed/stateful applications that require stable network identities, stable storage, and ordered deployment/termination. Those properties are commonly required by systems that must avoid “split brain” behaviors—where multiple nodes believe they are the leader/primary due to partitions or identity confusion.
StatefulSets give each Pod a persistent identity (e.g., app-0, app-1) and stable DNS naming (typically via a headless Service), which supports consistent peer discovery and membership. They also commonly pair with PersistentVolumeClaims so that each replica keeps its own data across restarts and reschedules. The ordered rollout semantics help clustered systems bootstrap and expand in controlled sequences, reducing the chance of chaotic membership changes.
Important nuance: StatefulSet alone does not magically prevent split brain. Split brain prevention is primarily a property of the application’s own clustering/consensus design (e.g., leader election, quorum, fencing). That’s why option B (“consensus protocols”) is conceptually the true prevention mechanism—but it’s not a Kubernetes feature in the way the question frames it. Kubernetes provides primitives that make it feasible to run such systems safely (stable IDs, stable storage, predictable DNS), and StatefulSet is the Kubernetes workload API designed for that class of distributed stateful apps.
Replication controllers and rolling updates don’t address identity/quorum concerns. Therefore, within Kubernetes constructs, StatefulSet is the best verified choice for workloads needing stable identity patterns commonly used to reduce split-brain risk.
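For orientation only, a stripped-down sketch of the identity-related pieces (headless Service plus StatefulSet; all names and the image are placeholders, and real split-brain protection still depends on the application's own quorum and leader-election logic):
apiVersion: v1
kind: Service
metadata:
  name: db-headless
spec:
  clusterIP: None          # headless: DNS resolves to the individual Pod addresses
  selector:
    app: db
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db
spec:
  serviceName: db-headless # gives Pods stable identities and DNS names such as db-0, db-1, db-2
  replicas: 3
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
      - name: db
        image: example-db:1.0   # placeholder image for the clustered datastore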
=========
What is a key feature of a container network?
Proxying REST requests across a set of containers.
Allowing containers running on separate hosts to communicate.
Allowing containers on the same host to communicate.
Caching remote disk access.
A defining requirement of container networking in orchestrated environments is enabling workloads to communicate across hosts, not just within a single machine. That’s why B is correct: a key feature of a container network is allowing containers (Pods) running on separate hosts to communicate.
In Kubernetes, this idea becomes the Kubernetes network model: every Pod gets its own IP address, and the model requires that Pods can communicate with all other Pods across nodes without NAT. Achieving that across a cluster requires a networking layer (typically implemented by a CNI plugin) that can route traffic between nodes so that Pod-to-Pod communication works regardless of placement. This is crucial because schedulers dynamically place Pods; you cannot assume two communicating components will land on the same node.
Option C is true in a trivial sense—containers on the same host can communicate—but that capability alone is not the key feature that makes orchestration viable at scale. Cross-host connectivity is the harder and more essential property. Option A describes application-layer behavior (like API gateways or reverse proxies) rather than the foundational networking capability. Option D describes storage optimization, unrelated to container networking.
From a cloud native architecture perspective, reliable cross-host networking enables microservices patterns, service discovery, and distributed systems behavior. Kubernetes Services, DNS, and NetworkPolicies all depend on the underlying ability for Pods across the cluster to send traffic to each other. If your container network cannot provide cross-node routing and reachability, the cluster behaves like isolated islands and breaks the fundamental promise of orchestration: “schedule anywhere, communicate consistently.”
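A quick way to see this property in practice (a sketch that assumes two running Pods on different nodes and a container image that includes wget; the Pod names and the placeholder IP are hypothetical):
kubectl get pods -o wide                                # shows each Pod's IP and the node it was scheduled to
kubectl exec pod-a -- wget -qO- http://<pod-b-ip>:8080  # pod-a reaches pod-b directly by Pod IP, across nodes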
=========
What is a cloud native application?
It is a monolithic application that has been containerized and is now running in the cloud.
It is an application designed to be scalable and take advantage of services running on the cloud.
It is an application designed to run all its functions in separate containers.
It is any application that runs in a cloud provider and uses its services.
B is correct. A cloud native application is designed to be scalable, resilient, and adaptable, and to leverage cloud/platform capabilities rather than merely being “hosted” on a cloud VM. Cloud-native design emphasizes principles like elasticity (scale up/down), automation, fault tolerance, and rapid, reliable delivery. While containers and Kubernetes are common enablers, the key is the architectural intent: build applications that embrace distributed systems patterns and cloud-managed primitives.
Option A is not enough. Simply containerizing a monolith and running it in the cloud does not automatically make it cloud native; that may be “lift-and-shift” packaging. The application might still be tightly coupled, hard to scale, and operationally fragile. Option C is too narrow and prescriptive; cloud native does not require “all functions in separate containers” (microservices are common but not mandatory). Many cloud-native apps use a mix of services, and even monoliths can be made more cloud native by adopting statelessness, externalized state, and automated delivery. Option D is too broad; “any app running in a cloud provider” includes legacy apps that don’t benefit from elasticity or cloud-native operational models.
Cloud-native applications typically align with patterns: stateless service tiers, declarative configuration, health endpoints, horizontal scaling, graceful shutdown, and reliance on managed backing services (databases, queues, identity, observability). They are built to run reliably in dynamic environments where instances are replaced routinely—an assumption that matches Kubernetes’ reconciliation and self-healing model.
So, the best verified definition among these options is B.
=========
What is Flux constructed with?
GitLab Environment Toolkit
GitOps Toolkit
Helm Toolkit
GitHub Actions Toolkit
The correct answer is B: GitOps Toolkit. Flux is a GitOps solution for Kubernetes, and in Flux v2 the project is built as a set of Kubernetes controllers and supporting components collectively referred to as the GitOps Toolkit. This toolkit provides the building blocks for implementing GitOps reconciliation: sourcing artifacts (Git repositories, Helm repositories, OCI artifacts), applying manifests (Kustomize/Helm), and continuously reconciling cluster state to match the desired state declared in Git.
This construction matters because it reflects Flux’s modular architecture. Instead of being a single monolithic daemon, Flux is composed of controllers that each handle a part of the GitOps workflow: fetching sources, rendering configuration, and applying changes. This makes it more Kubernetes-native: everything is declarative, runs in the cluster, and can be managed like other workloads (RBAC, namespaces, upgrades, observability).
Why the other options are wrong:
“GitLab Environment Toolkit” and “GitHub Actions Toolkit” are not what Flux is built from. Flux can integrate with many SCM providers and CI systems, but it is not “constructed with” those.
“Helm Toolkit” is not the named foundational set Flux is built upon. Flux can deploy Helm charts, but that’s a capability, not its underlying construction.
In cloud-native delivery, Flux implements the key GitOps control loop: detect changes in Git (or other declared sources), compute desired Kubernetes state, and apply it while continuously checking for drift. The GitOps Toolkit is the set of controllers enabling that loop.
Therefore, the verified correct answer is B.
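As a rough sketch of GitOps Toolkit objects (assuming Flux v2 controllers are installed; the repository URL, path, and names are placeholders, and exact fields can vary between Flux versions):
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: app-config
  namespace: flux-system
spec:
  interval: 1m
  url: https://example.com/org/app-config    # placeholder Git repository
  ref:
    branch: main
---
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: app
  namespace: flux-system
spec:
  interval: 10m
  sourceRef:
    kind: GitRepository
    name: app-config
  path: ./clusters/prod
  prune: true                                 # delete cluster objects that were removed from Git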
=========
In Kubernetes, which command is the most efficient way to check the progress of a Deployment rollout and confirm if it has completed successfully?
kubectl get deployments --show-labels -o wide
kubectl describe deployment my-deployment --namespace=default
kubectl logs deployment/my-deployment --all-containers=true
kubectl rollout status deployment/my-deployment
When performing rolling updates in Kubernetes, it is important to have a clear and efficient way to track the progress of a Deployment rollout and determine whether it has completed successfully. The most direct and purpose-built command for this task is kubectl rollout status deployment/my-deployment, making option D the correct answer.
The kubectl rollout status command is specifically designed to monitor the state of rollouts for resources such as Deployments, StatefulSets, and DaemonSets. It provides real-time feedback on the rollout process, including whether new Pods have been created, old Pods are being terminated, and if the desired number of updated replicas has become available. The command blocks until the rollout either completes successfully or fails, which makes it especially useful in automation and CI/CD pipelines.
Option A is incorrect because kubectl get deployments only provides a snapshot view of deployment status fields and does not actively track rollout progress. Option B can provide detailed information and events, but it is verbose and not optimized for quickly confirming rollout completion. Option C is incorrect because Deployment objects themselves do not produce logs; logs are generated by Pods and containers, not higher-level workload resources.
The rollout status command tracks the Deployment’s observed generation and status conditions, so it accurately reflects whether the latest revision has finished rolling out. If a rollout is stuck due to failed Pods, readiness probe failures, or resource constraints, the command will indicate that the rollout is not progressing, helping operators quickly identify issues.
In summary, kubectl rollout status deployment/my-deployment is the most efficient and reliable way to check rollout progress and confirm success. It is purpose-built for rollout tracking, easy to interpret, and widely used in production Kubernetes workflows, making Option D the correct and verified answer.
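In a script or pipeline, a typical (illustrative) usage pattern looks like this; my-deployment and the timeout value are placeholders:
kubectl rollout status deployment/my-deployment --timeout=120s   # blocks until complete, failed, or timed out
echo $?                                                          # non-zero exit status signals a stalled or failed rollout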
=========
Which Kubernetes resource workload ensures that all (or some) nodes run a copy of a Pod?
DaemonSet
StatefulSet
kubectl
Deployment
A DaemonSet is the workload controller that ensures a Pod runs on all nodes or on a selected subset of nodes, so A is correct. DaemonSets are used for node-level agents and infrastructure components that must be present everywhere—examples include log collectors, monitoring agents, storage daemons, CNI components, and node security tools.
The DaemonSet controller watches for node additions/removals. When a new node joins the cluster, Kubernetes automatically schedules a new DaemonSet Pod onto that node (subject to constraints such as node selectors, affinities, and taints/tolerations). When a node is removed, its DaemonSet Pod naturally disappears with it. This creates the “one per node” behavior that differentiates DaemonSets from other workload types.
A Deployment manages a replica count across the cluster, not “one per node.” A StatefulSet manages stable identity and ordered operations for stateful replicas; it does not inherently map one Pod to every node. kubectl is a CLI tool and not a workload resource.
DaemonSets can also be scoped: by using node selectors, node affinity, and tolerations, you can ensure Pods run only on GPU nodes, only on Linux nodes, only in certain zones, or only on nodes with a particular label. That’s why the question says “all (or some) nodes.”
Therefore, the correct and verified answer is DaemonSet (A).
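A condensed sketch of such a DaemonSet (the name and image are placeholders; the toleration is optional and only needed if the agent should also land on control-plane nodes):
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-log-agent
spec:
  selector:
    matchLabels:
      app: node-log-agent
  template:
    metadata:
      labels:
        app: node-log-agent
    spec:
      tolerations:
      - key: node-role.kubernetes.io/control-plane
        operator: Exists
        effect: NoSchedule          # tolerate the control-plane taint so the agent runs there too
      containers:
      - name: agent
        image: log-agent:1.0        # placeholder image for a per-node agent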
=========
What is the main purpose of etcd in Kubernetes?
etcd stores all cluster data in a key value store.
etcd stores the containers running in the cluster for disaster recovery.
etcd stores copies of the Kubernetes config files that live in /etc/.
etcd stores the YAML definitions for all the cluster components.
The main purpose of etcd in Kubernetes is to store the cluster’s state as a distributed key-value store, so A is correct. Kubernetes is API-driven: objects like Pods, Deployments, Services, ConfigMaps, Secrets, Nodes, and RBAC rules are persisted by the API server into etcd. Controllers, schedulers, and other components then watch the API for changes and reconcile the cluster accordingly. This makes etcd the “source of truth” for desired and observed cluster state.
Options B, C, and D are misconceptions. etcd does not store the running containers; that’s the job of the kubelet/container runtime on each node, and container state is ephemeral. etcd does not store /etc configuration file copies. And while you may author objects as YAML manifests, Kubernetes stores them internally as API objects (serialized) in etcd—not as “YAML definitions for all components.” The data is structured key/value entries representing Kubernetes resources and metadata.
Because etcd is so critical, its performance and reliability directly affect the cluster. Slow disk I/O or poor network latency increases API request latency and can delay controller reconciliation, leading to cascading operational problems (slow rollouts, delayed scheduling, timeouts). That’s why etcd is typically run on fast, reliable storage and in an HA configuration (often 3 or 5 members) to maintain quorum and tolerate failures. Backups (snapshots) and restore procedures are also central to disaster recovery: if etcd is lost, the cluster loses its state.
Security is also important: etcd can contain sensitive information (especially Secrets unless encrypted at rest). Proper TLS, restricted access, and encryption-at-rest configuration are standard best practices.
So, the verified correct answer is A: etcd stores all cluster data/state in a key-value store.
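For example, a backup snapshot is commonly taken with etcdctl; the endpoint and certificate paths below are kubeadm-style defaults and should be treated as placeholders for your environment:
ETCDCTL_API=3 etcdctl snapshot save /backup/etcd-snapshot.db \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key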
=========
Which of the following is the correct command to run an nginx deployment with 2 replicas?
kubectl run deploy nginx --image=nginx --replicas=2
kubectl create deploy nginx --image=nginx --replicas=2
kubectl create nginx deployment --image=nginx --replicas=2
kubectl create deploy nginx --image=nginx --count=2
The correct answer is B: kubectl create deploy nginx --image=nginx --replicas=2. This uses kubectl create deployment (shorthand create deploy) to generate a Deployment resource named nginx with the specified container image. The --replicas=2 flag sets the desired replica count, so Kubernetes will create two Pod replicas (via a ReplicaSet) and keep that number stable.
Option A is incorrect because kubectl run is primarily intended to run a Pod (and in older versions could generate other resources, but it’s not the recommended/consistent way to create a Deployment in modern kubectl usage). Option C is invalid syntax: kubectl subcommand order is incorrect; you don’t say kubectl create nginx deployment. Option D uses a non-existent --count flag for Deployment replicas.
From a Kubernetes fundamentals perspective, this question tests two ideas: (1) Deployments are the standard controller for running stateless workloads with a desired number of replicas, and (2) kubectl create deployment is a common imperative shortcut for generating that resource. After running the command, you can confirm with kubectl get deploy nginx, kubectl get rs, and kubectl get pods -l app=nginx (label may vary depending on kubectl version). You’ll see a ReplicaSet created and two Pods brought up.
In production, teams typically use declarative manifests (kubectl apply -f) or GitOps, but knowing the imperative command is useful for quick labs and validation. The key is that replicas are managed by the controller, not by manually starting containers—Kubernetes reconciles the state continuously.
Therefore, B is the verified correct command.
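A quick verification sequence after running the command (output and the generated app=nginx label may vary slightly by kubectl version):
kubectl create deploy nginx --image=nginx --replicas=2
kubectl get deploy nginx               # READY should eventually show 2/2
kubectl get rs                         # the ReplicaSet created by the Deployment
kubectl get pods -l app=nginx -o wide  # the two Pods, assuming the default app=nginx label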
=========
Which of these events will cause the kube-scheduler to assign a Pod to a node?
When the Pod crashes because of an error.
When a new node is added to the Kubernetes cluster.
When the CPU load on the node becomes too high.
When a new Pod is created and has no assigned node.
The kube-scheduler assigns a node to a Pod when the Pod is unscheduled—meaning it exists in the API server but has no spec.nodeName set. The event that triggers scheduling is therefore: a new Pod is created and has no assigned node, which is option D.
Kubernetes scheduling is declarative and event-driven. The scheduler continuously watches for Pods that are in a “Pending” unscheduled state. When it sees one, it runs a scheduling cycle: filtering nodes that cannot run the Pod (insufficient resources based on requests, taints/tolerations, node selectors/affinity rules, topology spread constraints), then scoring the remaining feasible nodes to pick the best candidate. Once selected, the scheduler “binds” the Pod to that node by updating the Pod’s spec.nodeName. After that, kubelet on the chosen node takes over to pull images and start containers.
Option A (Pod crashes) does not directly cause scheduling. If a container crashes, kubelet may restart it on the same node according to restart policy. If the Pod itself is replaced (e.g., by a controller like a Deployment creating a new Pod), that new Pod will be scheduled because it’s unscheduled—but the crash event itself isn’t the scheduler’s trigger. Option B (new node added) might create more capacity and affect future scheduling decisions, but it does not by itself trigger assigning a particular Pod; scheduling still happens because there are unscheduled Pods. Option C (CPU load high) is not a scheduling trigger; scheduling is based on declared requests and constraints, not instantaneous node CPU load (that’s a common misconception).
So the correct, Kubernetes-architecture answer is D: kube-scheduler assigns nodes to Pods that are newly created (or otherwise pending) and have no assigned node.
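You can observe this binding yourself (my-pod is a placeholder name):
kubectl get pod my-pod -o wide                                # NODE shows <none> while the Pod is still unscheduled
kubectl get pod my-pod -o jsonpath='{.spec.nodeName}{"\n"}'   # populated once the scheduler binds the Pod
kubectl describe pod my-pod                                   # Events include the "Scheduled" entry, or the reason it is still Pending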
=========