102.6 Lesson 1

Certificate: LPIC-1
Version: 5.0
Topic: 102 Linux Installation and Package Management
Objective: 102.6 Linux as a virtualization guest
Lesson: 1 of 1
Introduction
One of the great strengths of Linux is its versatility. One aspect of this versatility is the ability to use Linux as a means of hosting other operating systems, or individual applications, in a completely isolated and secure environment. This lesson will focus on the concepts of virtualization and container technologies, along with some technical details that should be taken into consideration when deploying a virtual machine on a cloud platform.
Virtualization Overview
Virtualization is a technology that allows for a software platform, called a hypervisor, to run processes that contain a fully emulated computer system. The hypervisor is responsible for managing the physical hardware’s resources that can be used by individual virtual machines. These virtual machines are called guests of the hypervisor. A virtual machine has many aspects of a physical computer emulated in software, such as a system’s BIOS and hard drive disk controllers. A virtual machine will often use hard disk images that are stored as individual files, and will have access to the host machine’s RAM and CPU through the hypervisor software. The hypervisor separates the access to the host system’s hardware resources among the guests, thus allowing for multiple operating systems to run on a single host system.
Commonly used hypervisors for Linux include:
- Xen: Xen is an open source Type-1 hypervisor, meaning that it does not rely on an underlying operating system to function. A hypervisor of this sort is known as a bare-metal hypervisor, since the computer can boot directly into the hypervisor.
- KVM: The Kernel Virtual Machine is a Linux kernel module for virtualization. KVM functions as both a Type-1 and a Type-2 hypervisor because, although it needs a generic Linux operating system to work, it is able to perform perfectly well as a hypervisor by integrating with a running Linux installation. Virtual machines deployed with KVM are created and managed with the libvirt daemon and its associated software utilities (a quick check for the KVM kernel modules is shown after this list).
- VirtualBox: A popular desktop application that makes it easy to create and manage virtual machines. Oracle VM VirtualBox is cross-platform and will work on Linux, macOS, and Microsoft Windows. Since VirtualBox requires an underlying operating system to run, it is a Type-2 hypervisor.
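On a Linux host, one quick way to see whether the KVM kernel modules are loaded is with lsmod (the output below is illustrative, from an Intel-based system; an AMD system would show kvm_amd instead of kvm_intel):
$ lsmod | grep kvm
kvm_intel             303104  0
kvm                   864256  1 kvm_intel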
Some hypervisors allow for the dynamic relocation of a virtual machine. The process of moving a virtual machine from one hypervisor installation to another is called a migration, and the techniques involved differ between hypervisor implementations. Some migrations can only be performed when the guest system has been completely shut down, while others can be performed while the guest is running (called a live migration). Such techniques can be of aid during maintenance windows for hypervisors, or for system resiliency when a hypervisor becomes non-functional and the guest can be moved to a hypervisor that is working.
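As an illustration, on a KVM host managed by libvirt a live migration can be requested with the virsh command. This is only a sketch: the guest name rhel8.0 matches the example later in this lesson, dest.example.com is a placeholder destination host, and both hypervisors must have access to the guest's storage:
$ virsh migrate --live rhel8.0 qemu+ssh://dest.example.com/system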
Types of Virtual Machines
There are three main types of virtual machines: the fully virtualized guest, the paravirtualized guest, and the hybrid guest.

- Fully Virtualized: All instructions that a guest operating system is expected to execute must be able to run within a fully virtualized operating system installation. The reason for this is that no additional software drivers are installed within the guest to translate the instructions to either simulated or real hardware. A fully virtualized guest is one where the guest (or HardwareVM) is unaware that it is a running virtual machine instance. In order for this type of virtualization to take place on x86 based hardware, the Intel VT-x or AMD-V CPU extensions must be enabled on the system that has the hypervisor installed. This can be done from a BIOS or UEFI firmware configuration menu.
- Paravirtualized: A paravirtualized guest (or PVM) is one where the guest operating system is aware that it is a running virtual machine instance. These types of guests will make use of a modified kernel and special drivers (known as guest drivers) that help the guest operating system utilize software and hardware resources of the hypervisor. The performance of a paravirtualized guest is often better than that of the fully virtualized guest due to the advantage that these software drivers provide.
- Hybrid: Paravirtualization and full virtualization can be combined to allow unmodified operating systems to receive near native I/O performance by using paravirtualized drivers on fully virtualized operating systems. The paravirtualized drivers contain storage and network device drivers with enhanced disk and network I/O performance.
Virtualization platforms often provide packaged guest drivers for virtualized operating systems. KVM utilizes drivers from the Virtio project, whereas Oracle VM VirtualBox uses Guest Additions, available from a downloadable ISO CD-ROM image file.
Example libvirt Virtual Machine
We will look at an example virtual machine that is managed by libvirt and uses the KVM hypervisor. A virtual machine often consists of a group of files, primarily an XML file that defines the virtual machine (such as its hardware configuration, network connectivity, display capabilities, and more) and an associated hard disk image file that contains the installation of the operating system and its software.
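On a host running the libvirt daemon, the guests it manages can be listed with the virsh utility (the output below is illustrative, assuming the example guest described next):
$ virsh list --all
 Id   Name      State
------------------------
 1    rhel8.0   running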
First, let us examine an example XML configuration for a virtual machine and its network environment:
$ ls -l /etc/libvirt/qemu
total 24
drwxr-xr-x 3 root root 4096 Oct 29 17:48 networks
-rw------- 1 root root 5667 Jun 29 17:17 rhel8.0.xml
Note that there is a directory named networks. This directory contains definition files (also using XML) that create network configurations for the virtual machines to use. This hypervisor uses only one network, so there is only one definition file, containing the configuration for the virtual network segment that these systems will utilize.
$ ls -l /etc/libvirt/qemu/networks/
total 8
drwxr-xr-x 2 root root 4096 Jun 29 17:15 autostart
-rw------- 1 root root  576 Jun 28 16:39 default.xml
$ sudo cat /etc/libvirt/qemu/networks/default.xml
<!--
WARNING: THIS IS AN AUTO-GENERATED FILE. CHANGES TO IT ARE LIKELY TO BE
OVERWRITTEN AND LOST. Changes to this xml configuration should be made using:
  virsh net-edit default
or other application using the libvirt API.
-->
<network>
  <name>default</name>
  <uuid>55ab064f-62f8-49d3-8d25-8ef36a524344</uuid>
  <forward mode='nat'/>
  <bridge name='virbr0' stp='on' delay='0'/>
  <mac address='52:54:00:b8:e0:15'/>
  <ip address='192.168.122.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.122.2' end='192.168.122.254'/>
    </dhcp>
  </ip>
</network>
This definition includes a Class C private network and an emulated hardware device to act as a router for this network. There is also a range of IP addresses for the hypervisor to use with a DHCP server implementation that can be assigned to the virtual machines that use this network. This network configuration also utilizes network address translation (NAT) to forward packets to other networks, such as the hypervisor’s LAN.
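As the warning comment in the file indicates, the network definition should be viewed and changed through libvirt's own tools rather than by editing the file directly. For example, virsh net-list shows the defined networks and virsh net-edit opens the definition in an editor (the output below is illustrative):
$ virsh net-list
 Name      State    Autostart   Persistent
--------------------------------------------
 default   active   yes         yes
$ virsh net-edit default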
Now we turn our attention to the definition file of a Red Hat Enterprise Linux 8 virtual machine:
$ sudo cat /etc/libvirt/qemu/rhel8.0.xml
<!--
WARNING: THIS IS AN AUTO-GENERATED FILE. CHANGES TO IT ARE LIKELY TO BE
OVERWRITTEN AND LOST. Changes to this xml configuration should be made using:
  virsh edit rhel8.0
or other application using the libvirt API.
-->
<domain type='kvm'>
  <name>rhel8.0</name>
  <uuid>fadd8c5d-c5e1-410e-b425-30da7598d0f6</uuid>
  <metadata>
    <libosinfo:libosinfo xmlns:libosinfo="http://libosinfo.org/xmlns/libvirt/domain/1.0">
      <libosinfo:os id="http://redhat.com/rhel/8.0"/>
    </libosinfo:libosinfo>
  </metadata>
  <memory unit='KiB'>4194304</memory>
  <currentMemory unit='KiB'>4194304</currentMemory>
  <vcpu placement='static'>2</vcpu>
  <os>
    <type arch='x86_64' machine='pc-q35-3.1'>hvm</type>
    <boot dev='hd'/>
  </os>
  <features>
    <acpi/>
    <apic/>
    <vmport state='off'/>
  </features>
  <cpu mode='host-model' check='partial'>
    <model fallback='allow'/>
  </cpu>
  <clock offset='utc'>
    <timer name='rtc' tickpolicy='catchup'/>
    <timer name='pit' tickpolicy='delay'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <pm>
    <suspend-to-mem enabled='no'/>
    <suspend-to-disk enabled='no'/>
  </pm>
  <devices>
    <emulator>/usr/bin/qemu-system-x86_64</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/rhel8'/>
      <target dev='vda' bus='virtio'/>
      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
    </disk>
    <controller type='usb' index='0' model='qemu-xhci' ports='15'>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
    </controller>
    <controller type='sata' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pcie-root'/>
    <controller type='pci' index='1' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='1' port='0x10'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
    </controller>
    <controller type='pci' index='2' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='2' port='0x11'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
    </controller>
    <controller type='pci' index='3' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='3' port='0x12'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
    </controller>
    <controller type='pci' index='4' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='4' port='0x13'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
    </controller>
    <controller type='pci' index='5' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='5' port='0x14'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
    </controller>
    <controller type='pci' index='6' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='6' port='0x15'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
    </controller>
    <controller type='pci' index='7' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='7' port='0x16'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
    </controller>
    <interface type='network'>
      <mac address='52:54:00:50:a7:18'/>
      <source network='default'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
    </interface>
    <serial type='pty'>
      <target type='isa-serial' port='0'>
        <model name='isa-serial'/>
      </target>
    </serial>
    <console type='pty'>
      <target type='serial' port='0'/>
    </console>
    <channel type='unix'>
      <target type='virtio' name='org.qemu.guest_agent.0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <channel type='spicevmc'>
      <target type='virtio' name='com.redhat.spice.0'/>
      <address type='virtio-serial' controller='0' bus='0' port='2'/>
    </channel>
    <input type='tablet' bus='usb'>
      <address type='usb' bus='0' port='1'/>
    </input>
    <input type='mouse' bus='ps2'/>
    <input type='keyboard' bus='ps2'/>
    <graphics type='spice' autoport='yes'>
      <listen type='address'/>
      <image compression='off'/>
    </graphics>
    <sound model='ich9'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x1b' function='0x0'/>
    </sound>
    <video>
      <model type='virtio' heads='1' primary='yes'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
    </video>
    <redirdev bus='usb' type='spicevmc'>
      <address type='usb' bus='0' port='2'/>
    </redirdev>
    <redirdev bus='usb' type='spicevmc'>
      <address type='usb' bus='0' port='3'/>
    </redirdev>
    <memballoon model='virtio'>
      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
    </memballoon>
    <rng model='virtio'>
      <backend model='random'>/dev/urandom</backend>
      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
    </rng>
  </devices>
</domain>
This file defines a number of hardware settings that are used by this guest, such as the amount of RAM assigned to it, the number of CPU cores from the hypervisor that the guest has access to, the hard disk image file associated with this guest (under the disk stanza), its display capabilities (via the SPICE protocol), and the guest's access to USB devices as well as emulated keyboard and mouse input.
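As the warning at the top of the file states, changes to a guest's definition should be made with the libvirt tools rather than by editing the file directly. For example, the definition for the guest above could be opened in an editor with:
$ virsh edit rhel8.0
The virsh dumpxml rhel8.0 command displays the current configuration without opening an editor.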
Example Virtual Machine Disk Storage
This virtual machine’s hard disk image resides at /var/lib/libvirt/images/rhel8. Here is the disk image itself on this hypervisor:
$ sudo ls -lh /var/lib/libvirt/images/rhel8
-rw------- 1 root root 5.5G Oct 25 15:57 /var/lib/libvirt/images/rhel8
This disk image currently consumes only 5.5 GB of space on the hypervisor. However, the operating system within the guest sees a 23.3 GB disk, as evidenced by the output of the following command from within the running virtual machine:
$ lsblk
NAME          MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
vda           252:0    0 23.3G  0 disk
├─vda1        252:1    0    1G  0 part /boot
└─vda2        252:2    0 22.3G  0 part
  ├─rhel-root 253:0    0   20G  0 lvm  /
  └─rhel-swap 253:1    0  2.3G  0 lvm  [SWAP]
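The same discrepancy can be observed from the hypervisor side with the qemu-img utility. The output below is an illustrative sketch based on the example image:
$ sudo qemu-img info /var/lib/libvirt/images/rhel8
image: /var/lib/libvirt/images/rhel8
file format: qcow2
virtual size: 23.3 GiB
disk size: 5.5 GiB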
This is due to the type of disk provisioning used for this guest. There are multiple types of disk images that a virtual machine can use, but the two primary types are:

- COW: Copy-on-write (also referred to as thin provisioning or sparse images) is a method where a disk file is created with a pre-defined upper size limit. The disk image size only increases as new data is written to the disk. Just like the previous example, the guest operating system sees the pre-defined disk limit of 23.3 GB, but has only written 5.5 GB of data to the disk file. The disk image format used for the example virtual machine is qcow2, which is a QEMU COW file format (see the qemu-img sketch after this list).
- RAW: A raw or full disk type is a file that has all of its space pre-allocated. For example, a 10 GB raw disk image file consumes 10 GB of actual disk space on the hypervisor. There is a performance benefit to this style of disk, since all of the needed disk space already exists: the underlying hypervisor can just write data to the disk, without the performance hit of monitoring the disk image to ensure that it has not yet reached its limit and extending the size of the file as new data is written to it.
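To illustrate the difference, here is how each disk type might be created with the qemu-img utility (a sketch; the file names and the 10 GB size are arbitrary examples):
# A copy-on-write image: the file starts small and grows as data is written
$ qemu-img create -f qcow2 guest-disk.qcow2 10G
# A raw image with all 10 GB allocated up front
$ qemu-img create -f raw -o preallocation=full guest-disk.img 10G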
There are other virtualization management platforms such as Red Hat Enterprise Virtualization and oVirt that can make use of physical disks to act as backing storage locations for a virtual machine’s operating system. These systems can utilize storage area network (SAN) or network attached storage (NAS) devices to write their data to, and the hypervisor keeps track of what storage locations belong to which virtual machines. These storage systems can use technologies such as logical volume management (LVM) to grow or shrink the size of a virtual machine’s disk storage as needed, and to aid in the creation and management of storage snapshots.
Working with Virtual Machine Templates
Since virtual machines are typically just files running on a hypervisor, it is easy to create templates that can be customized for particular deployment scenarios. Often a virtual machine will have a basic operating system installation and some pre-configured authentication settings in place to ease future system launches. This cuts down on the amount of time it takes to build a new system by reducing the amount of work that is often repeated, such as base package installation and locale settings.
This virtual machine template could later be copied to a new guest system. In that case, the new guest would be renamed, a new MAC address would be generated for its network interface, and other modifications would be made depending on its intended use.
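On a libvirt-based hypervisor, this cloning step can be handled by the virt-clone utility, which takes care of generating a new name, MAC address, and disk image copy. This is only a sketch: the guest names are placeholders, and the source guest should be shut down first:
$ sudo virt-clone --original rhel8.0 --name rhel8-clone --auto-clone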
The D-Bus Machine ID
Many Linux installations will utilize a machine identification number generated at install time, called the D-Bus machine ID. However, if a virtual machine is cloned to be used as a template for other virtual machine installations, a new D-Bus machine ID would need to be created to ensure that system resources from the hypervisor get directed to the appropriate guest system.
The following command can be used to validate that a D-Bus machine ID exists for the running system:
$ dbus-uuidgen --ensure
If no error messages are displayed, then an ID exists for the system. To view the current D-Bus machine ID, run the following:
$ dbus-uuidgen --get
17f2e0698e844e31b12ccd3f9aa4d94a
The string of text that is displayed is the current ID number. No two Linux systems running on a hypervisor should have the same D-Bus machine ID.
The D-Bus machine ID is located at /var/lib/dbus/machine-id and is symbolically linked to /etc/machine-id. Changing this ID number on a running system is discouraged, as system instability and crashes are likely to occur. If two virtual machines do have the same D-Bus machine ID, follow the procedure below to generate a new one:
$ sudo rm -f /etc/machine-id
$ sudo dbus-uuidgen --ensure=/etc/machine-id
In the event that /var/lib/dbus/machine-id is not a symbolic link back to /etc/machine-id, then /var/lib/dbus/machine-id will need to be removed.
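Putting these steps together, regenerating the ID on a cloned guest might look like the following sketch. Note that recreating the symbolic link afterwards is an assumption of this sketch, and the last two steps apply only when /var/lib/dbus/machine-id was a regular file:
$ sudo rm -f /etc/machine-id
$ sudo dbus-uuidgen --ensure=/etc/machine-id
# Only if /var/lib/dbus/machine-id was not already a symbolic link:
$ sudo rm -f /var/lib/dbus/machine-id
$ sudo ln -s /etc/machine-id /var/lib/dbus/machine-id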
Deploying Virtual Machines to the Cloud
There are a multitude of IaaS (infrastructure as a service) providers available that run hypervisor systems and that can deploy virtual guest images for an organization. Practically all of these providers have tools in place that allow an administrator to build, deploy, and configure custom virtual machines based on a variety of Linux distributions. Many of these companies also have systems in place that allow for the deployment and migration of virtual machines built within a customer’s organization.
When assessing a deployment of a Linux system in an IaaS environment, there are some key elements that an administrator should be cognizant of:
- Computing Instances: Many cloud providers charge usage rates based on “computing instances”, that is, how much CPU time your cloud-based infrastructure uses. Careful planning of how much processing time applications will actually require will aid in keeping the costs of a cloud solution manageable.

  Computing instances also often refer to the number of virtual machines provisioned in a cloud environment. The more system instances running at one time, the more overall CPU time an organization will be charged for.

- Block Storage: Cloud providers also have various levels of block storage available for an organization to use. Some offerings are simply meant to be web-based network storage for files, while other offerings relate to external storage for a cloud-provisioned virtual machine to use for hosting files.

  The cost of such offerings varies based on the amount of storage used and the speed of the storage within the provider’s data centers. Faster storage access typically costs more; conversely, data “at rest” (as in archival storage) is often very inexpensive.

- Networking: One of the main components of working with a cloud solutions provider is how the virtual network will be configured. Many IaaS providers have some form of web-based utilities that can be used to design and implement network routes, subnetting, and firewall configurations. Some will even provide DNS solutions so that publicly accessible FQDNs (fully qualified domain names) can be assigned to your internet-facing systems. There are even “hybrid” solutions available that can connect an existing, on-premises network infrastructure to a cloud-based infrastructure by means of a VPN (virtual private network), thus tying the two infrastructures together.
Securely Accessing Guests in the Cloud
The most prevalent method for accessing a remote virtual guest on a cloud platform is through OpenSSH software. A Linux system residing in the cloud would have the OpenSSH server running, while an administrator would use an OpenSSH client with pre-shared keys for remote access.
An administrator would run the following command:

$ ssh-keygen

and follow the prompts to create a public and private SSH key pair. The private key remains on the administrator’s local system (stored in ~/.ssh/) and the public key gets copied to the remote cloud system, the exact same method one would use when working with networked machines on a corporate LAN.
The administrator would then run the following command:
$ ssh-copy-id -i <public_key> user@cloud_server
This will copy the public SSH key from the key pair just generated to the remote cloud server. The public key will be recorded in the ~/.ssh/authorized_keys file of the cloud server, and the appropriate permissions will be set on the file.
Note: If there is only one public key file in the ~/.ssh/ directory, the -i option can be omitted; ssh-copy-id will use that public key file by default.
Some cloud providers will automatically generate a key pair when a new Linux system is provisioned. The administrator will then need to download the private key for the new system from the cloud provider and store it on their local system. Note that the permissions for SSH keys must be 0600 for a private key and 0644 for a public key.
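For example, a private key downloaded from a provider could be secured and then used to log in as follows (a sketch; the file name cloud_key.pem, the user name, and the address 203.0.113.10 are placeholders):
$ chmod 600 ~/.ssh/cloud_key.pem
$ ssh -i ~/.ssh/cloud_key.pem user@203.0.113.10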
Preconfiguring Cloud Systems
A useful tool that simplifies the deployments of cloud-based virtual machine is the cloud-init
utility. This command, along with the associated configuration files and pre-defined virtual machine image, is a vendor-neutral method for deploying a Linux guest across a plethora of IaaS providers. Utilizing YAML (YAML Ain’t Markup Language) plain-text files an administrator can pre-configure network settings, software package selections, SSH key configuration, user account creations, locale settings, along with a myriad of other options to quickly build out new systems.
During a new system’s initial boot-up, cloud-init reads in the settings from the YAML configuration files and applies them. This process applies only to a system’s initial setup, and it makes deploying a fleet of new systems on a cloud provider’s platform easy.
The YAML file syntax used with cloud-init is called cloud-config. Here is a sample cloud-config file:
#cloud-config
timezone: Africa/Dar_es_Salaam
hostname: test-system

# Update the system when it first boots up
apt_update: true
apt_upgrade: true

# Install the Nginx web server
packages:
  - nginx
Note that on the top line there is no space between the hash symbol (#) and the term cloud-config.
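Depending on the version installed, the syntax of a cloud-config file can often be validated before use (a sketch: the subcommand name varies between cloud-init releases, with older releases using cloud-init devel schema instead, and user-data.yaml is a placeholder file name):
$ cloud-init schema --config-file user-data.yaml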
Containers
Container technology is similar in some aspects to a virtual machine, where you get an isolated environment to easily deploy an application. Whereas with a virtual machine an entire computer is emulated, a container uses just enough software to run an application. In this way, there is far less overhead.
Containers allow for greater flexibility than a virtual machine. An application container can be migrated from one host to another, just as a virtual machine can be migrated from one hypervisor to another. However, a virtual machine sometimes needs to be powered off before it can be migrated, whereas a containerized application can keep running while it is migrated. Containers also make it easy to deploy new versions of applications in tandem with an existing version. As users close their sessions with running containers, those containers can be automatically removed from the system by the container orchestration software and replaced with the new version, thus reducing downtime.
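As a brief illustration only (the exact tooling is beyond this lesson's scope, and this sketch assumes the Docker engine is installed), launching an application in a container can be as simple as:
$ sudo docker run -d --name web -p 8080:80 nginx
This starts an isolated nginx web server reachable on the host's port 8080, without emulating any hardware.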
Note: There are numerous container technologies available for Linux, such as Docker, Kubernetes, LXD/LXC, systemd-nspawn, OpenShift, and more. The exact implementation of a container software package is beyond the scope of the LPIC-1 exam.
Containers make use of the control groups (better known as cgroups) mechanism within the Linux kernel. A cgroup is a way to partition system resources such as memory, processor time, and disk and network bandwidth for an individual application. An administrator can use cgroups directly to set system resource limits on an application, or on a group of applications that exist within a single cgroup. In essence, this is what container software does for the administrator, along with providing tools that ease the management and deployment of cgroups.
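As a glimpse of this mechanism (a sketch assuming a systemd-based distribution, where systemd manages cgroups; long_running_command is a placeholder):
# Run a command inside a transient cgroup capped at 512 MB of RAM and half of one CPU
$ sudo systemd-run --scope -p MemoryMax=512M -p CPUQuota=50% long_running_command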
Note: Currently, knowledge of cgroups is not necessary for passing the LPIC-1 exam. The concept of the cgroup is mentioned here so that the candidate has at least some background knowledge of how an application is segregated for the sake of system resource utilization.
Guided Exercises
1. What CPU extensions are necessary on an x86 based hardware platform that will run fully virtualized guests?

2. A mission-critical server installation that will require the fastest performance will likely use what type of virtualization?

3. Two virtual machines that have been cloned from the same template and that utilize D-Bus are performing erratically. They both have separate hostnames and network configuration settings. What command would be used to determine if each of the virtual machines has a different D-Bus machine ID?
Explorational Exercises
1. Run the following command to see if your system already has CPU extensions enabled to run a virtual machine (your results may vary depending on your CPU):

   grep --color -E "vmx|svm" /proc/cpuinfo

   Depending on the output, you may have vmx highlighted (for Intel VT-x enabled CPUs) or svm highlighted (for AMD SVM enabled CPUs). Should you get no results, consult your BIOS or UEFI firmware instructions on how to enable virtualization for your processor.

2. If your processor supports virtualization, seek out your distribution’s documentation for running a KVM hypervisor.

3. Install the necessary packages to run a KVM hypervisor.

4. If you are using a graphical desktop environment, it is recommended to also install the virt-manager application, a graphical front-end for KVM installations. This will aid in virtual machine installation and management.

5. Download a Linux distribution ISO image of your choice and, following your distribution’s documentation, create a new virtual machine using this ISO.
Summary
In this lesson we covered the conceptual basics of virtual machines and containers, and how these technologies can be used with Linux.
We briefly described the following commands:
- dbus-uuidgen: Used to verify and view a system’s D-Bus machine ID.
- ssh-keygen: Used to generate a public and private SSH key pair for use when accessing remote, cloud-based systems.
- ssh-copy-id: Used to copy a system’s public SSH key to a remote system to facilitate remote authentication.
- cloud-init: Used to aid in the configuration and deployment of virtual machines and containers in a cloud environment.
Answers to Guided Exercises
1. What CPU extensions are necessary on an x86 based hardware platform that will run fully virtualized guests?

   VT-x for Intel CPUs or AMD-V for AMD CPUs.

2. A mission-critical server installation that will require the fastest performance will likely use what type of virtualization?

   Paravirtualization, such as with Xen: a paravirtualized guest operating system can make better use of the hardware resources available to it through software drivers designed to work with the hypervisor.

3. Two virtual machines that have been cloned from the same template and that utilize D-Bus are performing erratically. They both have separate hostnames and network configuration settings. What command would be used to determine if each of the virtual machines has a different D-Bus machine ID?

   dbus-uuidgen --get
Answers to Explorational Exercises
1. Run the following command to see if your system already has CPU extensions enabled to run a virtual machine (your results may vary depending on your CPU): grep --color -E "vmx|svm" /proc/cpuinfo. Depending on the output, you may have vmx highlighted (for Intel VT-x enabled CPUs) or svm highlighted (for AMD SVM enabled CPUs). Should you get no results, consult your BIOS or UEFI firmware instructions on how to enable virtualization for your processor.

   The results will vary depending on the CPU you have. Here is example output from a computer with an Intel CPU with virtualization extensions enabled in the UEFI firmware:

   $ grep --color -E "vmx|svm" /proc/cpuinfo
   flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb invpcid_single pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx rdseed adx smap clflushopt intel_pt xsaveopt xsavec xgetbv1 xsaves dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp md_clear flush_l1d

2. If your processor supports virtualization, seek out your distribution’s documentation for running a KVM hypervisor.

3. Install the necessary packages to run a KVM hypervisor.

   This will vary depending on your distribution, but here are some starting points:

   Arch Linux: https://wiki.archlinux.org/index.php/KVM

4. If you are using a graphical desktop environment, it is recommended to also install the virt-manager application, a graphical front-end for KVM installations that will aid in virtual machine installation and management.

   Again, this will vary by distribution. An example using Ubuntu looks like this:

   $ sudo apt install virt-manager

5. Download a Linux distribution ISO image of your choice and, following your distribution’s documentation, create a new virtual machine using this ISO.

   This task is easily handled by the virt-manager package. However, a virtual machine can also be created from the command line using the virt-install command. Try both methods to get an understanding of how virtual machines are deployed.