Building nested 64-bit QEMU images on GCE and AWS using Packer

I recently ran into a couple of issues while trying to use Packer to build machine images inside Travis CI, with VirtualBox included in my .travis.yml file.

It seemed I could only ever build a 32-bit image, even though the Travis instance that spun up was 64-bit Ubuntu Trusty.

I did a bit of digging around and at first thought Travis was hosting its infrastructure on an OpenVZ stack, but this is no longer the case; they appear to be using Google Compute Engine (GCE).

So I changed my approach and spun up my own GCE instance to try the Packer build myself.

To make my use case clear: I wanted to completely externalise the Packer build process, spinning up a compute node only for the duration of the build, shipping the build artifact off to another location, and then tearing the node down.

I wasn’t interested in completing this task locally for several different reasons, and I also didn’t want to use the vSphere post-processor provided by Packer due to other constraints.

The home-grown GCE instance I spun up had the same issue: it would only attempt a build of a 32-bit image. A 64-bit build would only yield an error like this one:

64-bit CPU failure message

Hmmm… What’s going on?

I looked around on Google for similar issues and found a forum post mentioning the same kind of problem on GCE.

The issue:

Hosting providers like Google and AWS do not currently expose the VT-x capability needed for a running instance to use hardware-assisted virtualisation. You can see this (or rather, the lack of it) by querying /proc/cpuinfo and noting the absence of ‘vmx’ in the ‘flags’ section.

This means that any attempt to try the nested 64-bit virtualisation in question will result in failure.
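You can confirm the situation from inside the instance itself. A minimal check (the echoed messages are my own wording, not from any tool):

```shell
# Count CPU entries in /proc/cpuinfo advertising hardware virtualisation
# (vmx on Intel, svm on AMD). '|| true' keeps count at "0" when grep
# finds nothing, since grep exits non-zero on no match.
count=$(grep -c -E 'vmx|svm' /proc/cpuinfo || true)
if [ "$count" -eq 0 ]; then
    echo "no vmx/svm flags: nested virtualisation needs full emulation"
else
    echo "vmx/svm present: KVM acceleration is possible"
fi
```

On a stock GCE or AWS instance of that era this takes the first branch.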

The solution:

I needed to find a way of emulating what the underlying hardware would normally provide, and decided that QEMU might work, as it offers full software emulation rather than relying on hardware assistance the way VirtualBox does. To begin with, I tested this with a manual QEMU run on GCE without Packer, just to confirm my theory.

Whilst QEmu is a lot slower than using something like KVM or VirtualBox, it still allows you to get the task done. So I successfully tested out my theory, now I needed to get it into Packer!

The purpose for me was to be able to completely reproduce a machine image, make it runnable on multiple infrastructures, and get a nice green pass in Travis. So I wanted the QEMU base image to run effectively ‘unedited’ on AWS, GCE, and VMware ESXi.

I will assume if you read this far you have more than a passing familiarity with Packer, so you know where these files need to go.

You will need to install Packer, QEMU, OVFTool, and the AWS CLI tools on your instance/local machine. There may be a few dependencies I haven’t noted, but Packer and the other tools mentioned here are pretty good at letting you know what’s missing.

So now for the code part:

My Packer Template


{
  "variables": {
    "ssh_name": "packerssh",
    "ssh_pass": "packerpass",
    "hostname": "packertest",
    "vmname": "qemu-test",
    "cpucount": "2",
    "RAM": "2048"
  },
  "builders": [
    {
      "type": "qemu",
      "qemuargs": [
        ["-m", "2048M"],
        ["-smp", "cpus={{user `cpucount`}}"]
      ],
      "iso_url": "",
      "iso_checksum": "3e1b9029a0cf188730646c379d15073f",
      "iso_checksum_type": "md5",
      "output_directory": "OVF-TEST",
      "shutdown_command": "echo {{user `ssh_pass`}} | sudo -S shutdown -P -h now",
      "disk_size": 5000,
      "format": "raw",
      "headless": true,
      "accelerator": "none",
      "http_directory": "files",
      "ssh_username": "{{user `ssh_name`}}",
      "ssh_password": "{{user `ssh_pass`}}",
      "ssh_port": 22,
      "ssh_wait_timeout": "120m",
      "vm_name": "{{user `vmname`}}",
      "net_device": "virtio-net",
      "disk_interface": "virtio",
      "boot_wait": "5s",
      "boot_command": [
        "<esc><wait>",
        "install ",
        "preseed/url=http://{{ .HTTPIP }}:{{ .HTTPPort }}/qemu_preseed.cfg ",
        "debian-installer=en_GB auto locale=en_GB ",
        "kbd-chooser/method=uk ",
        "netcfg/get_hostname={{user `hostname`}} ",
        "netcfg/get_domain=localdomain.local fb=false ",
        "debconf/frontend=noninteractive ",
        "console-setup/ask_detect=false ",
        "console-keymaps-at/keymaps=uk ",
        "keyboard-configuration/xkb-keymap=uk ",
        "<enter>"
      ]
    }
  ],
  "post-processors": [
    [
      {
        "type": "shell-local",
        "inline": ["echo 'config.version = 8\nvirtualHW.version = 10\nvmci0.present = TRUE\ndisplayName = {{user `vmname`}}\nfloppy0.present = FALSE\nnumvcpus = {{user `cpucount`}}\nscsi0.present = TRUE\nscsi0.sharedBus = none\nscsi0.virtualDev = lsilogic\nmemsize = {{user `RAM`}}\nscsi0:0.present = TRUE\nscsi0:0.fileName = {{user `vmname`}}.vmdk\nscsi0:0.deviceType = scsi-hardDisk\nide1:0.present = TRUE\nide1:0.fileName = emptyBackingString\nide1:0.deviceType = atapi-cdrom\npciBridge0.present = TRUE\npciBridge4.present = TRUE\npciBridge4.virtualDev = pcieRootPort\npciBridge4.functions = 8\npciBridge5.present = TRUE\npciBridge5.virtualDev = pcieRootPort\npciBridge5.functions = 8\npciBridge6.present = TRUE\npciBridge6.virtualDev = pcieRootPort\npciBridge6.functions = 8\npciBridge7.present = TRUE\npciBridge7.virtualDev = pcieRootPort\npciBridge7.functions = 8\nethernet0.pciSlotNumber = 32\nethernet0.present = TRUE\nethernet0.virtualDev = e1000\nethernet0.networkName = Inside\nethernet0.generatedAddressOffset = 0\nguestOS = other26xlinux-64' > OVF-TEST/{{user `vmname`}}.STG"]
      },
      {
        "type": "shell-local",
        "inline": ["/usr/bin/qemu-img convert -O vmdk OVF-TEST/{{user `vmname`}} OVF-TEST/{{user `vmname`}}.vmdk"]
      },
      {
        "type": "shell-local",
        "inline": ["echo 'STG=OVF-TEST/{{user `vmname`}}.STG\nSTG1=OVF-TEST/{{user `vmname`}}.STG1\nVMX=OVF-TEST/{{user `vmname`}}.vmx' > files/"]
      },
      {
        "type": "shell-local",
        "inline": ["sh files/"]
      },
      {
        "type": "shell-local",
        "inline": ["/usr/bin/ovftool --lax OVF-TEST/{{user `vmname`}}.vmx OVF-TEST/{{user `vmname`}}.ovf"]
      },
      {
        "type": "shell-local",
        "inline": ["rm -f OVF-TEST/{{user `vmname`}}.vmdk"]
      },
      {
        "type": "shell-local",
        "inline": ["aws s3 cp --recursive ./OVF-TEST/ s3://my-bucket-name/"]
      }
    ]
  ]
}


The Packer template file does the machine image build: the builder installs the required OS, and further installation configuration is customised via a preseed configuration file (included below).

Packer then launches a series of post-processor tasks to create additional config files, sanitise them, convert the QEMU base image to a VMDK, generate an OVF from the dynamically created VMX file and the VMDK, and ship everything up to a non-public S3 bucket.

This was the trickiest part, as the dynamic VMX config file required variable values to be inserted and enclosed in double quotes. So I thought, “OK, I can escape the quotes or wrap them in single quotes, and when I run packer validate all will be good.” How wrong I was :-)

I just couldn’t get to grips with it: everything I did that produced a valid JSON file also produced a mangled config file. I tried just about everything, including URL-encoding and decoding the file.

In the end I opted to create the config file without any quotes and run a post-processor that read through the config file and added the double quotes afterwards.
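To make that concrete, here is a toy reproduction of the two-pass quoting trick, using illustrative file names rather than the real template paths:

```shell
# Fake unquoted VMX fragment, like the one the template echoes out:
printf 'displayName = qemu-test\nmemsize = 2048\n' > demo.STG
sed 's/= /= "/g' demo.STG > demo.STG1   # open a quote after each '= '
sed 's/$/"/g' demo.STG1 > demo.vmx      # close the quote at end of line
cat demo.vmx
# displayName = "qemu-test"
# memsize = "2048"
```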

My Preseed Config file

# English plx
d-i debian-installer/language string en
d-i debian-installer/locale string en_GB.UTF-8
d-i localechooser/preferred-locale string en_GB.UTF-8
d-i localechooser/supported-locales multiselect en_GB.UTF-8

# Including keyboards
d-i console-setup/ask_detect boolean false
#d-i keyboard-configuration/layout select UK
#d-i keyboard-configuration/variant select UK
#d-i keyboard-configuration/modelcode string pc105
d-i keymap select uk

# Just roll with it
d-i netcfg/get_hostname string mattpackertest
d-i netcfg/get_domain string localdomain.local
d-i time/zone string UTC
d-i clock-setup/utc-auto boolean true
d-i clock-setup/utc boolean true

# Choices: Dialog, Readline, Gnome, Kde, Editor, Noninteractive
debconf debconf/frontend select Noninteractive

d-i pkgsel/install-language-support boolean false
tasksel tasksel/first multiselect standard

# Stuck between a rock and a HDD place
d-i partman-auto/method string lvm
d-i partman-lvm/confirm boolean true
d-i partman-lvm/device_remove_lvm boolean true
d-i partman-auto/choose_recipe select atomic

d-i partman/confirm_write_new_label boolean true
d-i partman/confirm_nooverwrite boolean true
d-i partman/choose_partition select finish
d-i partman/confirm boolean true

# Write the changes to disks and configure LVM?
d-i partman-lvm/confirm boolean true
d-i partman-lvm/confirm_nooverwrite boolean true
d-i partman-auto-lvm/guided_size string max

d-i mirror/country string enter information manually
d-i mirror/http/hostname string
d-i mirror/http/directory string /debian
d-i mirror/suite string testing
d-i mirror/http/proxy string

d-i cdrom-checker/start boolean false
# Debian archive mirror country:
# Choices: enter information manually, Algeria, Argentina, Australia, Austria, Bangladesh, Belarus, Belgium, Bosnia and Herzegovina, Brazil, Bulgaria, Canada, Chile, China, Colombia, Costa Rica, Croatia, Czech Republic, Denmark, El Salvador, Estonia, Finland, France, French Polynesia, Georgia, Germany, Greece, Hong Kong, Hungary, Iceland, India, Indonesia, Ireland, Israel, Italy, Japan, Kazakhstan, Kenya, Korea\, Republic of, Latvia, Lithuania, Luxembourg, Macedonia\, Republic of, Madagascar, Malaysia, Malta, Mexico, Moldova, Netherlands, New Caledonia, New Zealand, Nicaragua, Norway, Philippines, Poland, Portugal, Romania, Russian Federation, Serbia, Singapore, Slovakia, Slovenia, South Africa, Spain, Sweden, Switzerland, Taiwan, Tajikistan, Thailand, Turkey, Ukraine, United Kingdom, United States, Uzbekistan, Venezuela, Viet Nam
choose-mirror-bin mirror/http/countries select GB
# for internal use only
user-setup-udeb passwd/user-default-groups string audio cdrom dip floppy video plugdev netdev powerdev scanner bluetooth debian-tor sudo
# location
# Choices: Guayaquil, Galapagos
tzsetup-udeb tzsetup/country/EC select
# Not installing to unclean target
base-installer base-installer/unclean_target_cancel error
# No partitions to encrypt
partman-crypto partman-crypto/nothing_to_setup note
# New partition size:
partman-partitioning partman-partitioning/new_partition_size string some number
# for internal use; can be preseeded
# Choices: Network Manager, ifupdown (/etc/network/interfaces), No network configuration
netcfg netcfg/target_network_config select ifupdown

# Failed to retrieve the preconfiguration file
# No proxy, plx
d-i mirror/http/proxy string

d-i passwd/root-login boolean false
d-i passwd/make-user boolean true
#d-i passwd/root-password password ""
#d-i passwd/root-password-again password ""

# Default user, change
d-i passwd/user-fullname string packerssh
d-i passwd/username string packerssh
d-i passwd/user-password password packerpass
d-i passwd/user-password-again password packerpass
d-i user-setup/encrypt-home boolean false
d-i user-setup/allow-password-weak boolean true

# No language support packages.
d-i pkgsel/install-language-support boolean false

# Individual additional packages to install
d-i pkgsel/include string build-essential openssh-server ssh wget sudo linux-headers-`uname -r` make

#For the update
d-i pkgsel/update-policy select none

# Whether to upgrade packages after debootstrap.
# Allowed values: none, safe-upgrade, full-upgrade
d-i pkgsel/upgrade select safe-upgrade

popularity-contest popularity-contest/participate boolean false

# Go grub, go!
d-i grub-installer/only_debian boolean true
d-i grub-installer/bootdev string default

d-i finish-install/reboot_in_progress note

This is the sed file that processes the VMX file and adds double quotes to the config parameters. I had a couple of issues with the source command under the /bin/sh shell on the GCE instance I was spinning up, and had to replace the word ‘source’ with a period (‘.’).

Post Processor ‘sed’ sanitizing file

echo "This is the current directory `pwd`"
. files/
sed 's/= /= "/g' $STG > $STG1
sed 's/$/"/g' $STG1 > $VMX

This is the sed variables file, which is dynamically created inside the Packer template. I needed it because environment variables created inside the template had to be exported into the files that run as part of the post-processor tasks; this seemed to be the only way to preserve them.

Post Processor ‘sed’ shell output file

STG=OVF-TEST/qemu-test.STG
STG1=OVF-TEST/qemu-test.STG1
VMX=OVF-TEST/qemu-test.vmx

OK, so now everything was good, although the build took 1hr 23mins!

I had objects in my S3 bucket that I could use to pull the OVF down directly into vCenter and provision new VMs. I could do this manually from the GUI, via PowerShell/PSWA, or as additional build steps in Travis!
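As a sketch of that last step (the bucket name, datastore, and vCenter address here are all hypothetical), the pull-and-deploy commands could look like this. They are printed rather than executed, since running them needs AWS credentials and vCenter access:

```shell
# Hypothetical names throughout: bucket, datastore, VM name, vCenter path.
PULL_CMD="aws s3 cp --recursive s3://my-bucket-name/ ./OVF-TEST/"
DEPLOY_CMD="ovftool -ds=datastore1 -n=qemu-test \
 ./OVF-TEST/qemu-test.ovf \
 vi://administrator@vcenter.example.local/DC1/host/cluster1"
# Dry run: print the commands instead of running them.
echo "$PULL_CMD"
echo "$DEPLOY_CMD"
```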

© Matt Palmer 7th July 2016
