Which of the following mechanisms are used by LXC and Docker to create containers? (Choose three.)
Linux Capabilities
Kernel Namespaces
Control Groups
POSIX ACLs
File System Permissions
 LXC and Docker are both container technologies that use Linux kernel features to create isolated environments for running applications. The main mechanisms that they use are:
Linux Capabilities: These are a set of privileges that can be assigned to processes to limit their access to certain system resources or operations. For example, a process with the CAP_NET_ADMIN capability can perform network administration tasks, such as creating or deleting network interfaces. Linux capabilities allow containers to run with reduced privileges, enhancing their security and isolation.
Kernel Namespaces: These are a way of creating separate views of the system resources for different processes. For example, a process in a mount namespace can have a different file system layout than the host or other namespaces. Kernel namespaces allow containers to have their own network interfaces, process IDs, user IDs, and other resources, without interfering with the host or other containers.
Control Groups: These are a way of grouping processes and applying resource limits and accounting to them. For example, a control group can limit the amount of CPU, memory, disk I/O, or network bandwidth that a process or a group of processes can use. Control groups allow containers to have a fair share of the system resources and prevent them from exhausting the host resources.
POSIX ACLs and file system permissions are not mechanisms used by LXC and Docker to create containers. They are methods of controlling the access to files and directories on a file system, which can be applied to any process, not just containers.
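Both namespaces and control groups can be observed directly from a shell on any modern Linux host, without any container runtime installed. A minimal sketch (procfs paths are standard; no container-specific tooling is assumed):

```shell
# Every process runs inside a set of kernel namespaces; the symlinks in
# /proc/<pid>/ns identify them. Two processes in the same namespace see
# the same inode number in the link target.
readlink /proc/self/ns/mnt /proc/self/ns/pid /proc/self/ns/net

# Control-group membership of the current shell (works for cgroup v1
# and v2); container runtimes place each container in its own group.
cat /proc/self/cgroup
```

On a host running containers, comparing `readlink /proc/<container-pid>/ns/pid` against the shell's own link makes the isolation visible.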
References:
LXC vs Docker: Which Container Platform Is Right for You?
LXC vs Docker: Why Docker is Better in 2023 | UpGuard
What is the Difference Between LXC, LXD and Docker Containers
lxc - Which container implementation docker is using - Unix & Linux Stack Exchange
Which of the following tasks are part of a hypervisor’s responsibility? (Choose two.)
Create filesystems during the installation of new virtual machine guest operating systems.
Provide host-wide unique PIDs to the processes running inside the virtual machines in order to ease inter-process communication between virtual machines.
Map the resources of virtual machines to the resources of the host system.
Manage authentication to network services running inside a virtual machine.
Isolate the virtual machines and prevent unauthorized access to resources of other virtual machines.
A hypervisor is software that creates and runs virtual machines (VMs) by separating the operating system and resources from the physical hardware. One of the main tasks of a hypervisor is to map the resources of VMs to the resources of the host system, such as CPU, memory, disk, and network. This allows the hypervisor to allocate and manage the resources among multiple VMs and ensure that they run efficiently and independently [1][2][3]. Another important task of a hypervisor is to isolate the VMs and prevent unauthorized access to resources of other VMs. This ensures the security and privacy of the VMs and their data, as well as the stability and performance of the host system. The hypervisor can use various techniques to isolate the VMs, such as virtual LANs, firewalls, encryption, and access control [1][4][5].
The other tasks listed are not part of a hypervisor's responsibility, but rather of the guest operating system or the application running inside the VM. A hypervisor does not create filesystems during the installation of new VMs, as this is done by the installer of the guest operating system [6]. A hypervisor does not provide host-wide unique PIDs to the processes running inside the VMs, as this is done by the kernel of each guest operating system [7]. A hypervisor does not manage authentication to network services running inside a VM, as this is done by the network service itself or by a directory service such as LDAP or Active Directory [8]. References: 1 (search for "What is a hypervisor?"), 2 (search for "How does a hypervisor work?"), 3 (search for "The hypervisor gives each virtual machine the resources that have been allocated"), 4 (search for "Benefits of hypervisors"), 5 (search for "Isolate the virtual machines and prevent unauthorized access"), 6 (search for "Create filesystems during the installation of new virtual machine guest operating systems"), 7 (search for "Provide host-wide unique PIDs to the processes running inside the virtual machines"), 8 (search for "Manage authentication to network services running inside a virtual machine").
What is the purpose of the kubelet service in Kubernetes?
Provide a command line interface to manage Kubernetes.
Build a container image as specified in a Dockerfile.
Manage permissions of users when interacting with the Kubernetes API.
Run containers on the worker nodes according to the Kubernetes configuration.
Store and replicate Kubernetes configuration data.
The purpose of the kubelet service in Kubernetes is to run containers on the worker nodes according to the Kubernetes configuration. The kubelet is an agent or program that runs on each node and communicates with the Kubernetes control plane. It receives a set of PodSpecs that describe the desired state of the pods that should be running on the node, and ensures that the containers described in those PodSpecs are running and healthy. The kubelet also reports the status of the node and the pods back to the control plane. The kubelet does not manage containers that were not created by Kubernetes. References:
Kubernetes Docs - kubelet
Learn Steps - What is kubelet and what it does: Basics on Kubernetes
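One way to see the kubelet's role directly is a static Pod: a manifest placed in the kubelet's manifest directory (commonly /etc/kubernetes/manifests, set via the kubelet's staticPodPath option) is picked up and run by the kubelet itself, without going through the API server. A minimal sketch, written to a temporary directory here since the real path requires a configured node; the name and image are illustrative:

```shell
# Hypothetical static Pod manifest; on a real worker node it would be
# placed in the kubelet's staticPodPath (often /etc/kubernetes/manifests)
# and the kubelet would start and supervise the container itself.
dir=$(mktemp -d)
cat > "$dir/web.yaml" <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: static-web
spec:
  containers:
    - name: web
      image: nginx:1.25
      ports:
        - containerPort: 80
EOF
grep 'image:' "$dir/web.yaml"
```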
Which directory is used by cloud-init to store status information and configuration information retrieved from external sources?
/var/lib/cloud/
/etc/cloud-init/cache/
/proc/sys/cloud/
/tmp/.cloud/
/opt/cloud/var/
cloud-init uses the /var/lib/cloud/ directory to store status information and configuration information retrieved from external sources, such as the cloud platform's metadata service or user data files. The directory contains subdirectories for different types of data, such as instance, data, handlers, scripts, and sem. The instance subdirectory contains information specific to the current instance, such as the instance ID, the user data, and the cloud-init configuration. The data subdirectory contains information about the data sources that cloud-init detected and used. The handlers subdirectory contains information about the handlers that cloud-init executed. The scripts subdirectory contains scripts that cloud-init runs at different stages of the boot process, such as per-instance, per-boot, per-once, and vendor. The sem subdirectory contains semaphore files that cloud-init uses to track the execution status of different modules and stages. References:
Configuring and managing cloud-init for RHEL 8 - Red Hat Customer Portal
vsphere - what is the linux file location where the cloud-init user …
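The layout described above can be probed with a short script; on a host that was never provisioned by cloud-init the subdirectories simply do not exist, so nothing is printed:

```shell
# Probe the cloud-init state directory for the documented subdirectories.
# Prints each one that exists; silent on hosts without cloud-init state.
base=/var/lib/cloud
for sub in instance data handlers scripts sem; do
  if [ -d "$base/$sub" ]; then
    echo "present: $base/$sub"
  fi
done
```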
Which of the following types of guest systems does Xen support? (Choose two.)
Foreign architecture guests (FA)
Paravirtualized guests (PV)
Emulated guests
Container virtualized guests
Fully virtualized guests
Xen supports two types of guest systems: paravirtualized guests (PV) and fully virtualized guests (HVM).
Paravirtualized guests (PV) are guests that have been modified to run on the Xen hypervisor. They use a special kernel that communicates with the hypervisor through hypercalls, and use paravirtualized drivers for I/O devices. PV guests can run faster and more efficiently than HVM guests, but they require the guest operating system to be ported to Xen and to support the Xen ABI [1][2].
Fully virtualized guests (HVM) are guests that run unmodified operating systems on the Xen hypervisor. They use hardware virtualization extensions, such as Intel VT-x or AMD-V, to create a virtual platform for the guest. HVM guests can run any operating system that supports the hardware architecture, but they incur more overhead and performance penalties than PV guests. HVM guests can also use paravirtualized drivers for I/O devices to improve their performance [1][2].
The other options are not correct. Xen does not support foreign architecture guests (FA), emulated guests, or container virtualized guests.
Foreign architecture guests (FA) are guests that run on a different hardware architecture than the host. For example, running an ARM guest on an x86 host. Xen does not support this type of virtualization, as it would require emulation or binary translation, which are very complex and slow techniques [3].
Emulated guests are guests that run on a software emulator that mimics the hardware of the host or another platform. For example, running a Windows guest on a QEMU emulator. Xen does not support this type of virtualization, as it relies on the emulator to provide the virtual platform, not the hypervisor. Xen can use QEMU to emulate some devices for HVM guests, but not the entire platform [1][4].
Container virtualized guests are guests that run on a shared kernel with the host and other guests, using namespaces and cgroups to isolate them. For example, running a Linux guest in a Docker container. Xen does not support this type of virtualization, as it requires the guest operating system to be compatible with the host kernel, and it does not provide the same level of isolation and security as hypervisor-based virtualization [5][6].
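The PV/HVM distinction is visible directly in a guest's xl configuration file. A minimal, illustrative sketch (the name, memory size, and disk path are assumptions; see xl.cfg(5) for the authoritative syntax):

```
# HVM guest: runs an unmodified OS, requires hardware virtualization
# extensions (Intel VT-x / AMD-V) on the host.
type   = "hvm"
name   = "guest-hvm"
memory = 2048
vcpus  = 2
disk   = ['file:/var/lib/xen/images/guest.img,xvda,w']

# A PV guest instead sets type = "pv" and boots a Xen-aware kernel:
# type   = "pv"
# kernel = "/boot/vmlinuz-xen-guest"
```

The guest is then created with `xl create <config-file>`.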
References:
Xen Project Software Overview - Xen
Xen ARM with Virtualization Extensions - Xen
Xen Project Beginners Guide - Xen
QEMU - Xen
Docker overview | Docker Documentation
What is a Container? | App Containerization | VMware
Which of the following commands lists all differences between the disk images vm1-snap.img and vm1.img?
virt-delta -a vm1-snap.img -A vm1.img
virt-cp-in -a vm1-snap.img -A vm1.img
virt-cmp -a vm1-snap.img -A vm1.img
virt-history -a vm1-snap.img -A vm1.img
virt-diff -a vm1-snap.img -A vm1.img
The virt-diff command-line tool can be used to list the differences between files in two virtual machines or disk images. The output shows the changes to a virtual machine's disk images after it has been running. The command can also be used to show the difference between overlays. To specify two guests, use the -a or -d option for the first guest, and the -A or -D option for the second guest, for example: virt-diff -a old.img -A new.img. Therefore, the correct command to list all differences between the disk images vm1-snap.img and vm1.img is: virt-diff -a vm1-snap.img -A vm1.img. None of the other commands (virt-delta, virt-cp-in, virt-cmp, virt-history) are part of the libguestfs tool suite; the closest real tool is virt-copy-in, which copies files and directories from the host into a virtual machine disk image rather than comparing images. References:
21.13. virt-diff: Listing the Differences between Virtual Machine Files …
What is true about containerd?
It is a text file format defining the build process of containers.
It runs in each Docker container and provides DHCP client functionality
It uses runc to start containers on a container host.
It is the initial process run at the start of any Docker container.
It requires the Docker engine and Docker CLI to be installed.
containerd is an industry-standard container runtime that uses runc (a low-level container runtime) by default, but can be configured to use others as well [1]. containerd manages the complete container lifecycle of its host system, from image transfer and storage to container execution and supervision [1]. It supports the standards established by the Open Container Initiative (OCI) [1]. containerd does not require the Docker engine and Docker CLI to be installed, as it can be used independently or with other container platforms [2]. containerd is not a text file format, nor does it run in each Docker container or provide DHCP client functionality. containerd is also not the initial process run at the start of any Docker container; the container's initial process is started by the low-level runtime, runc [3]. References: 1 (search for "containerd"), 2 (search for "Containerd is an open source"), 3 (search for "It uses runc to start containers").
Which of the following resources can be limited by libvirt for a KVM domain? (Choose two.)
Amount of CPU time
Size of available memory
File systems allowed in the domain
Number of running processes
Number of available files
 Libvirt is a toolkit that provides a common API for managing different virtualization technologies, such as KVM, Xen, LXC, and others. Libvirt allows users to configure and control various aspects of a virtual machine (also called a domain), such as its CPU, memory, disk, network, and other resources. Among the resources that can be limited by libvirt for a KVM domain are:
Amount of CPU time: Libvirt allows users to specify the number of virtual CPUs (vCPUs) that a domain can use, as well as the CPU mode, model, topology, and tuning parameters. Users can also set the CPU shares, quota, and period to control the relative or absolute amount of CPU time that a domain can consume. Additionally, users can pin vCPUs to physical CPUs or NUMA nodes to improve performance and isolation. These settings can be configured in the domain XML file under the <vcpu>, <cputune>, and <cpu> elements.
Size of available memory: Libvirt allows users to specify the amount of memory that a domain can use, as well as the memory backing, tuning, and NUMA node parameters. Users can also set the memory hard and soft limits, swap hard limit, and minimum guarantee to control the memory allocation and reclaim policies for a domain. These settings can be configured in the domain XML file under the <memory>, <memtune>, and <memoryBacking> elements.
The other resources listed in the question are not directly limited by libvirt for a KVM domain. File systems allowed in the domain are determined by the disk and filesystem devices that are attached to the domain, which can be configured in the domain XML file under the <devices> element. The number of running processes and the number of available files are controlled inside the guest by its own operating system (for example via ulimits), not by libvirt.
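A sketch of how these two limits appear in a domain's XML definition (element names follow the libvirt domain XML format; the domain name and the concrete values are illustrative):

```xml
<domain type='kvm'>
  <name>demo</name>
  <!-- CPU time: 2 vCPUs; quota/period = 50000/100000 caps each vCPU
       at half of one physical CPU; shares sets the relative weight -->
  <vcpu>2</vcpu>
  <cputune>
    <shares>1024</shares>
    <period>100000</period>
    <quota>50000</quota>
  </cputune>
  <!-- Memory: guest size plus a hard upper limit enforced via cgroups -->
  <memory unit='MiB'>2048</memory>
  <memtune>
    <hard_limit unit='MiB'>2304</hard_limit>
  </memtune>
</domain>
```

The same limits can be adjusted on a running domain with `virsh schedinfo` and `virsh memtune`.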
References:
libvirt: Domain XML format
CPU Allocation
Memory Allocation
Hard drives, floppy disks, CDROMs
Which of the following values would be valid in the FROM statement in a Dockerfile?
ubuntu:focal
docker://ubuntu:focal
registry:ubuntu:focal
file:/tmp/ubuntu/Dockerfile
http://docker.example.com/images/ubuntu-focal.iso
The FROM statement in a Dockerfile specifies the base image from which the subsequent instructions are executed [1]. The value of the FROM statement can be either an image name, an image name with a tag, or an image ID [1]. The image name can be either a repository name or a repository name with a registry prefix [2]. For example, ubuntu is a repository name, and docker.io/ubuntu is a repository name with a registry prefix [2]. The tag is an optional identifier that can be used to specify a particular version or variant of an image [1]. For example, ubuntu:focal refers to the image with the focal tag in the ubuntu repository [2]. The image ID is a unique identifier that is automatically generated when an image is built or pulled [1]. For example, sha256:9b0dafaadb1cd1d14e4db51bd0f4c0d56b6b551b2982b2b7c637ca143ad605d2 is an image ID [3].
Therefore, the only valid value in the FROM statement among the given options is ubuntu:focal, which is an image name with a tag. The other options are invalid because:
docker://ubuntu:focal is not a valid image name format. The docker:// prefix is used to specify a transport protocol, not a registry prefix [4].
registry:ubuntu:focal is not a valid image name format. The registry prefix should be a valid hostname or IP address, not a generic term [2].
file:/tmp/ubuntu/Dockerfile is not a valid image name format. The file: prefix is used to specify a local file path, not an image name [5].
http://docker.example.com/images/ubuntu-focal.iso is not a valid image name format. The http:// prefix is used to specify a web URL, not an image name [5].
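A minimal sketch of the valid form in context; the Dockerfile is written to a temporary directory here, and actually building it would require Docker:

```shell
# Hypothetical Dockerfile using the image:tag form discussed above.
dir=$(mktemp -d)
cat > "$dir/Dockerfile" <<'EOF'
FROM ubuntu:focal
RUN apt-get update
EOF
grep '^FROM' "$dir/Dockerfile"
```

With Docker installed, `docker build "$dir"` would pull ubuntu:focal as the base image and execute the RUN step on top of it.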
References:
1: Dockerfile reference | Docker Docs
2: docker - Using FROM statement in dockerfile - Stack Overflow
3: How to get the image id from a docker image - Stack Overflow
4: skopeo - Docker Registry v2 API tool - Linux Man Pages (1)
5: How to build a Docker image from a local Dockerfile? - Stack Overflow
What does IaaS stand for?
Information as a Service
Intelligence as a Service
Integration as a Service
Instances as a Service
Infrastructure as a Service
IaaS is a type of cloud computing service that offers essential compute, storage, and networking resources on demand, on a pay-as-you-go basis. IaaS is one of the four types of cloud services, along with software as a service (SaaS), platform as a service (PaaS), and serverless [1][2]. IaaS eliminates the need for enterprises to procure, configure, or manage infrastructure themselves, and they only pay for what they use [2][3]. Some examples of IaaS providers are Microsoft Azure, Google Cloud, and Amazon Web Services.
How does Packer interact with system images?
Packer has to be installed within the target image and is executed during the image's first boot in order to execute preparation tasks.
Packer installs a client within the image which has to be run periodically via cron in order to retrieve the latest template from the Packer server and apply it locally.
Packer periodically connects through the network to the Packer daemons of all running Packer images in order to re-apply the whole template to the running instance.
Packer downloads and extracts an image in order to make changes to the image's file system, repack the modified image and upload it again.
Packer creates an instance based on a source image, prepares the instance through a network connection and bundles the resulting instance as a new system image.
 Packer is a tool that automates the creation of identical machine images for multiple platforms from a single source configuration. Packer works by creating an instance based on a source image, which is a pre-existing image that serves as a starting point. Packer then connects to the instance through a network connection, such as SSH or WinRM, and runs various commands and scripts to install and configure software within the instance. Packer then shuts down the instance and creates a new system image from it, which can be used to launch new instances. Packer supports many platforms, such as AWS, Azure, VMware, Docker, and others. Packer does not install any software or run any daemon within the target image, nor does it periodically connect to the running instances to re-apply the template. Packer also does not modify the source image directly, but creates a new image from the modified instance. References:
Packer by HashiCorp
HashiCorp Packer - Build Automated Machine Images
Introduction | Packer | HashiCorp Developer
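The instance-based workflow described above can be sketched as a Packer HCL2 template. This example uses the Docker builder for brevity; the source image, provisioning commands, and file name are illustrative assumptions, and recent Packer versions additionally need a required_plugins block and `packer init`:

```hcl
# build.pkr.hcl -- Packer creates an instance from the source image,
# provisions it over the builder's communicator, and commits the
# result as a new image.
source "docker" "ubuntu" {
  image  = "ubuntu:focal"
  commit = true
}

build {
  sources = ["source.docker.ubuntu"]

  provisioner "shell" {
    inline = ["apt-get update", "apt-get install -y nginx"]
  }
}
```

Running `packer build build.pkr.hcl` performs the create/provision/bundle cycle and leaves a new image behind; the source image itself is never modified.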
What kind of virtualization is implemented by LXC?
System containers
Application containers
Hardware containers
CPU emulation
Paravirtualization
LXC implements system containers, which are a type of operating-system-level virtualization. System containers allow running multiple isolated Linux systems on a single Linux control host, using a single Linux kernel. System containers share the same kernel with the host and each other, but have their own file system, libraries, and processes. System containers are different from application containers, which are designed to run a single application or service in an isolated environment. Application containers are usually smaller and more portable than system containers, but also more dependent on the host kernel and libraries. Hardware containers, CPU emulation, and paravirtualization are not related to LXC, as they are different kinds of virtualization methods that involve hardware abstraction, instruction translation, or modification of the guest operating system. References:
1: LXC - Wikipedia
2: Linux Virtualization : Linux Containers (lxc) - GeeksforGeeks
3: Features - Proxmox Virtual Environment
FILL BLANK
What LXC command lists containers sorted by their CPU, block I/O or memory consumption? (Specify ONLY the command without any path or parameters.)
lxc-top
 LXD supports the following network interface types for containers: macvlan, bridged, physical, sriov, and ovn1. Macvlan creates a virtual interface on the host that is connected to the same network as the parent interface2. Bridged connects the container to a network bridge that acts as a virtual switch3. Physical attaches the container to a physical network interface on the host2. Ipsec and wifi are not valid network interface types for LXD containers. References:
1: Bridge network - Canonical LXD documentation
2: How to create a network - Canonical LXD documentation
4: LXD containers and networking with static IP - Super User
Which of the following statements are true regarding a Pod in Kubernetes? (Choose two.)
All containers of a Pod run on the same node.
Pods are always created automatically and cannot be explicitly configured.
A Pod is the smallest unit of workload Kubernetes can run.
When a Pod fails, Kubernetes restarts the Pod on another node by default.
systemd is used to manage individual Pods on the Kubernetes nodes.
A Pod in Kubernetes is a collection of one or more containers that share the same network and storage resources, together with a specification for how to run the containers. A Pod is the smallest unit of workload Kubernetes can run, meaning that it cannot be divided into smaller units. Therefore, option C is correct. All containers of a Pod run on the same node, which is a physical or virtual machine that hosts one or more Pods. Therefore, option A is also correct. Option B is incorrect: Pods can be created manually and configured explicitly using YAML or JSON files, or with commands like kubectl run or kubectl create; they can also be created automatically by higher-level controllers such as Deployment, ReplicaSet, or StatefulSet. Option D is incorrect: when a Pod fails, Kubernetes does not restart the Pod on another node by default. Pods are ephemeral by nature and can be terminated or deleted at any time; if a Pod is managed by a controller, the controller will create a new Pod to replace the failed one, but not necessarily on the same node. Option E is incorrect: systemd is a system and service manager for Linux operating systems that can start and stop services, such as docker or kubelet, but it does not interact with Pods directly. Pods are managed by the kubelet service, which is an agent that runs on each node and communicates with the Kubernetes control plane. References:
Pods | Kubernetes
What is a Kubernetes pod? - Red Hat
What’s the difference between a pod, a cluster, and a container?
What are Kubernetes Pods? | VMware Glossary
Kubernetes Node Vs. Pod Vs. Cluster: Key Differences - CloudZero
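A Pod with two containers illustrates options A and C: both containers are declared in one Pod spec, so they are scheduled together onto the same node and share the Pod's network namespace. A minimal sketch (manifest only, written to a temporary directory; applying it with kubectl apply -f would require a cluster, and the names and images are illustrative):

```shell
# Hypothetical two-container Pod manifest.
dir=$(mktemp -d)
cat > "$dir/pod.yaml" <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: two-containers
spec:
  containers:
    - name: app
      image: nginx:1.25
    - name: sidecar
      image: busybox:1.36
      command: ["sh", "-c", "sleep 3600"]
EOF
# Two containers, one Pod -- the Pod remains the scheduling unit:
grep -c 'image:' "$dir/pod.yaml"
```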
Which of the following values are valid in the firmware attribute of the <os> element in a libvirt domain configuration? (Choose two.)
scsi
virtio
efi
bios
pcie
The firmware attribute of the <os> element in a libvirt domain definition selects the firmware used to boot the guest. Its valid values are bios and efi: with firmware='efi', libvirt automatically selects a suitable UEFI firmware image (such as OVMF) for the guest, while firmware='bios' selects a traditional BIOS (such as SeaBIOS). scsi, virtio, and pcie are device and bus types used elsewhere in the domain XML, not firmware values. References:
libvirt: Domain XML format - BIOS bootloader
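In the domain XML the attribute appears on the <os> element. A minimal sketch requesting UEFI firmware auto-selection (the domain name and machine type are illustrative):

```xml
<domain type='kvm'>
  <name>uefi-guest</name>
  <!-- firmware='efi' makes libvirt pick a matching UEFI image; -->
  <!-- firmware='bios' would select a traditional BIOS instead. -->
  <os firmware='efi'>
    <type arch='x86_64' machine='q35'>hvm</type>
  </os>
</domain>
```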
TESTED 06 Feb 2026