Using Packer to deploy VMs in a Nested ESXi environment

I thought I would write a post about an environment I put together which would allow me to use Packer to deploy VMs within my Nested ESXi installation.

There’s a great post here on creating the Nested ESXi environment. This is the one I followed to pull my lab together.

Packer is great: it lets you create virtual machine images for multiple virtualization platforms, from EC2 to VirtualBox to ESXi and so on. Check out all the supported environments here, in the builders section.

Firstly, I’ll describe a few of the problems I was having. I wanted to be able to go through a full system installation unattended, but also to customize it at will. For this I decided to go down the route of using preseed files in Debian. This worked great when I was using Packer to build my VMs on my local Mac with VirtualBox, but didn’t work so well when I moved to ESXi.

The process broke down in the final stages of the build, where Packer checks SSH connectivity to the VM it has just put together for you. I’m pretty sure this is something to do with Packer needing your environment to be on the same network. I also hit a few issues with the DHCP address of the VM changing between reboots, even though the lease had not expired and the host credentials had not changed.

What I did for the DHCP issue was to write the first IP address the VM claimed to a static file on the VM as it was created. Then, as the VM reboots and Packer runs its post-installation SSH check, a script on the VM rewrites the VM’s network configuration to use that first address. This allowed the SSH connectivity check to pass and the build process to complete.

The next question I had to cover off was: where would I put my preseed files?

I wanted to keep the environment as closed as possible and as easy to reproduce as possible. I could have used an external service; that would have been OK while I was only posting up test preseed documents, but no good if I needed privacy.

So I thought I would put my preseed files on the Nested ESXi host itself. This proved trickier than I first thought: the only thing I could find running on ESXi other than the hypervisor was Python, and I didn’t really want to try compiling extra software onto the Nested ESXi environment.

So I dug around the Net looking for examples of a very simple Python web server to save reinventing the wheel, which I will include here for continuity.

This worked really well, but I needed it to start up each time the Nested environment was rebooted or redeployed. That meant creating a set of custom firewall rules in ESXi to allow another HTTP service to run on a different port, and I used VIB Author to help out here. I was then able to put my preseed files on the host and reference them directly from within my own environment.


I downloaded VIB Author, which is available from the VMware Labs Flings site, and created a micro VM running just the VIB Author utility within the Nested ESXi environment. At the end of this process I would create a template of the Nested ESXi environment, so that I could redeploy all these changes without having to reproduce them manually.

Here is what my VIB Author layout looked like. On the VIB Author VM, I created the following directory structure:

I created a staging directory containing payloads/payload1, with etc/vmware underneath that, and then created descriptor.xml in the staging directory itself, which looked as follows (I got this layout from this site):

<vib version="5.0">
  <summary>Custom Firewall Rule VIB</summary>
  <description>Adds custom firewall rule to ESXi host</description>
  <payloads>
    <payload name="payload1" type="vgz"></payload>
  </payloads>
</vib>

I also added payloads/payload1/opt, and in here copied my Python web server executable and my two preseed files.

I then created etc/vmware/firewall under payloads/payload1 in the staging directory, and included customfwrules.xml, which looked like this (this ensures the firewall rule persists across reboots of the hypervisor):

<ConfigRoot>
  <service id="0033">
    <!-- the service name is illustrative; port 8081 matches the Python web server -->
    <id>pythonweb</id>
    <rule id='0000'><direction>inbound</direction><protocol>tcp</protocol><porttype>dst</porttype><port>8081</port></rule>
    <rule id='0001'><direction>outbound</direction><protocol>tcp</protocol><porttype>dst</porttype><port>8081</port></rule>
    <enabled>true</enabled>
  </service>
</ConfigRoot>


Using this approach saved having to edit /etc/vmware/firewall/service.xml every time the Nested ESXi installation was rebooted.

I then created an init script, which I put in staging/payloads/payload1/etc/rc.local.d, and changed its permissions:

chmod 755 <init-script>.sh   # script name is a placeholder
chmod +t <init-script>.sh
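
Pulling the steps above together, the staging tree looked roughly like this (reconstructed from the descriptions in this post; the script names are placeholders):

```
staging/
├── descriptor.xml
└── payloads/
    └── payload1/
        ├── etc/
        │   ├── rc.local.d/
        │   │   └── <init-script>.sh
        │   └── vmware/
        │       └── firewall/
        │           └── customfwrules.xml
        └── opt/
            ├── <python-www-server>.py
            ├── preseed.txt
            └── postpreseed.txt
```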

Once I had created this file structure, I ran the following command on the VM onto which I had downloaded the vibauthor tool:

vibauthor -C -t staging -v customfwrules.vib -O customfwrules-offline-bundle.zip

The contents of the shell script would start up the simple Python WWW server that I grabbed from the Web.

cd /opt
/bin/python <server-script>.py &   # server script name is a placeholder

The Python web server itself, capable of serving multiple file types, looked like this:


from BaseHTTPServer import BaseHTTPRequestHandler, HTTPServer
from os import curdir, sep

PORT_NUMBER = 8081

# This class handles any incoming request from the browser
class myHandler(BaseHTTPRequestHandler):

    # Handler for the GET requests
    def do_GET(self):
        if self.path == "/":
            self.path = "/index.html"

        try:
            # Check the file extension required and
            # set the right mime type
            sendReply = False
            if self.path.endswith(".txt"):
                mimetype = 'text/plain'
                sendReply = True
            if self.path.endswith(".html"):
                mimetype = 'text/html'
                sendReply = True
            if self.path.endswith(".jpg"):
                mimetype = 'image/jpg'
                sendReply = True
            if self.path.endswith(".gif"):
                mimetype = 'image/gif'
                sendReply = True
            if self.path.endswith(".js"):
                mimetype = 'application/javascript'
                sendReply = True
            if self.path.endswith(".css"):
                mimetype = 'text/css'
                sendReply = True

            if sendReply == True:
                # Open the static file requested and send it
                f = open(curdir + sep + self.path)
                self.send_response(200)
                self.send_header('Content-type', mimetype)
                self.end_headers()
                self.wfile.write(f.read())
                f.close()
            return

        except IOError:
            self.send_error(404, 'File Not Found: %s' % self.path)

try:
    # Create a web server and define the handler to manage the
    # incoming requests
    server = HTTPServer(('', PORT_NUMBER), myHandler)
    print 'Started httpserver on port ', PORT_NUMBER

    # Wait forever for incoming HTTP requests
    server.serve_forever()

except KeyboardInterrupt:
    print '^C received, shutting down the web server'
    server.socket.close()
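As an aside, the chain of endswith checks in the handler simply maps file extensions to mime types; the same dispatch can be expressed as a lookup table, which is easier to extend when you want to serve more file types. This is just a sketch of the idea, not part of the original script:

```python
# Extension -> mime type lookup, equivalent to the endswith() chain above.
MIME_TYPES = {
    ".txt": "text/plain",
    ".html": "text/html",
    ".jpg": "image/jpeg",
    ".gif": "image/gif",
    ".js": "application/javascript",
    ".css": "text/css",
}

def guess_mime(path):
    """Return the mime type for a served file, or None if we refuse to serve it."""
    for ext, mime in MIME_TYPES.items():
        if path.endswith(ext):
            return mime
    return None

print(guess_mime("/preseed.txt"))  # text/plain
```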


Once the VIB was built, I scp’d it to the Nested ESXi server for installation and ran the following commands:

esxcli software acceptance set --level=CommunitySupported
esxcli software vib install -v /vmfs/volumes/<Datastore-ID>/customfwrules.vib

Once this was complete, I rebooted the Nested ESXi environment.

From this point forward I would need to put my preseed documents in the same folder as the Python web server (the opt directory included in the VIB payload).

My Preseed documents looked like this:


# English plx
d-i debian-installer/language string en
d-i debian-installer/locale string en_GB.UTF-8
d-i localechooser/preferred-locale string en_GB.UTF-8
d-i localechooser/supported-locales multiselect en_GB.UTF-8

# Including keyboards
d-i console-setup/ask_detect boolean false
#d-i keyboard-configuration/layout select UK
#d-i keyboard-configuration/variant select UK
#d-i keyboard-configuration/modelcode string pc105
d-i keymap select uk

# Just roll with it
d-i netcfg/get_hostname string mattpackertest
d-i netcfg/get_domain string localdomain.local
d-i time/zone string UTC
d-i clock-setup/utc-auto boolean true
d-i clock-setup/utc boolean true

# Choices: Dialog, Readline, Gnome, Kde, Editor, Noninteractive
debconf debconf/frontend select noninteractive

d-i pkgsel/install-language-support boolean false
tasksel tasksel/first multiselect standard

# Stuck between a rock and a HDD place
d-i partman-auto/method string lvm
d-i partman-lvm/confirm boolean true
d-i partman-lvm/device_remove_lvm boolean true
d-i partman-auto/choose_recipe select atomic

d-i partman/confirm_write_new_label boolean true
d-i partman/confirm_nooverwrite boolean true
d-i partman/choose_partition select finish
d-i partman/confirm boolean true

# Write the changes to disks and configure LVM?
d-i partman-lvm/confirm boolean true
d-i partman-lvm/confirm_nooverwrite boolean true
d-i partman-auto-lvm/guided_size string max

d-i mirror/country string enter information manually
d-i mirror/http/hostname string
d-i mirror/http/directory string /debian
d-i mirror/suite string testing
d-i mirror/http/proxy string

d-i     cdrom-checker/start     boolean false
# Debian archive mirror country:
# Choices: enter information manually, Algeria, Argentina, Australia, Austria, Bangladesh, Belarus, Belgium, Bosnia and Herzegovina, Brazil, Bulgaria, Canada, Chile, China, Colombia, Costa Rica, Croatia, Czech Republic, Denmark, El Salvador, Estonia, Finland, France, French Polynesia, Georgia, Germany, Greece, Hong Kong, Hungary, Iceland, India, Indonesia, Ireland, Israel, Italy, Japan, Kazakhstan, Kenya, Korea\, Republic of, Latvia, Lithuania, Luxembourg, Macedonia\, Republic of, Madagascar, Malaysia, Malta, Mexico, Moldova, Netherlands, New Caledonia, New Zealand, Nicaragua, Norway, Philippines, Poland, Portugal, Romania, Russian Federation, Serbia, Singapore, Slovakia, Slovenia, South Africa, Spain, Sweden, Switzerland, Taiwan, Tajikistan, Thailand, Turkey, Ukraine, United Kingdom, United States, Uzbekistan, Venezuela, Viet Nam
choose-mirror-bin       mirror/http/countries   select GB
# for internal use only
user-setup-udeb passwd/user-default-groups      string  audio cdrom dip floppy video plugdev netdev powerdev scanner bluetooth debian-tor sudo
# location
# Choices: Guayaquil, Galapagos
tzsetup-udeb    tzsetup/country/EC      select
# Not installing to unclean target
base-installer  base-installer/unclean_target_cancel    error
# No partitions to encrypt
partman-crypto  partman-crypto/nothing_to_setup note
# New partition size:
partman-partitioning    partman-partitioning/new_partition_size string  some number
# for internal use; can be preseeded
# Choices: Network Manager, ifupdown (/etc/network/interfaces), No network configuration
netcfg  netcfg/target_network_config    select  ifupdown

# Failed to retrieve the preconfiguration file
# No proxy, plx
d-i mirror/http/proxy string

d-i passwd/root-login boolean false
d-i passwd/make-user boolean true
#d-i passwd/root-password password "" 
#d-i passwd/root-password-again password ""

# Default user, change
d-i passwd/user-fullname string packer
d-i passwd/username string packer
d-i passwd/user-password password packer
d-i passwd/user-password-again password packer
d-i user-setup/encrypt-home boolean false
d-i user-setup/allow-password-weak boolean true

# No language support packages.
d-i pkgsel/install-language-support boolean false

# Individual additional packages to install
d-i pkgsel/include string build-essential openssh-server ssh wget sudo linux-headers-`uname -r`  make

#For the update
d-i pkgsel/update-policy select none

# Whether to upgrade packages after debootstrap.
# Allowed values: none, safe-upgrade, full-upgrade
d-i pkgsel/upgrade select safe-upgrade

popularity-contest popularity-contest/participate boolean false

# Go grub, go!
d-i grub-installer/only_debian boolean true

d-i finish-install/reboot_in_progress note
d-i preseed/late_command string \
    in-target wget -O /postpreseed.txt http://<Nested_ESXI_IP>:8081/postpreseed.txt; \
    in-target chmod 777 /postpreseed.txt; \
    in-target /bin/bash -x /postpreseed.txt

The postpreseed.txt script that this downloads and runs looked like this:


echo "packer    ALL=(ALL) NOPASSWD: ALL">>/etc/sudoers;sync
sudo chmod 777 /etc/network/interfaces
sudo cp /etc/network/interfaces /home/packer/interfaces.tmp
sudo cp /etc/rc.local /home/packer/rc.local.tmp
sudo /sbin/ifconfig | grep "inet addr" | cut -d ' ' -f 12 | sed 's/addr://' | grep -v 127\.0\.0\.1 > /home/packer/static_nic.txt
sudo echo "# The loopback network interface" >> /home/packer/interfaces
sudo echo "auto lo" >> /home/packer/interfaces
sudo echo "iface lo inet loopback" >> /home/packer/interfaces
sudo echo ""  >> /home/packer/interfaces
sudo echo "auto eth0" >> /home/packer/interfaces
sudo echo "iface eth0 inet static" >> /home/packer/interfaces
sudo echo "address `cat /home/packer/static_nic.txt`" >> /home/packer/interfaces
sudo echo "netmask <netmask>" >> /home/packer/interfaces
sudo echo "gateway $GATEWAY" >> /home/packer/interfaces
sudo truncate -s 0 /etc/rc.local
sudo echo "#!/bin/sh -e" >> /etc/rc.local
sudo echo "#" >> /etc/rc.local
sudo echo "#rc.local" >> /etc/rc.local
sudo echo "#By default this script does nothing" >> /etc/rc.local
sudo echo "mv /home/packer/interfaces /etc/network/interfaces && sudo reboot" >> /etc/rc.local
sudo echo "exit 0" >> /etc/rc.local
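
As an aside, the ifconfig | grep | cut pipeline above is fragile, since the field positions shift between ifconfig versions. A small regex does the same extraction more robustly; this is a hypothetical alternative, not what my script used:

```python
import re

def first_non_loopback_ip(ifconfig_output):
    """Extract the first 'inet addr:' IPv4 address that is not 127.0.0.1."""
    for match in re.finditer(r"inet addr:(\d+\.\d+\.\d+\.\d+)", ifconfig_output):
        ip = match.group(1)
        if ip != "127.0.0.1":
            return ip
    return None

# Sample ifconfig output for illustration.
sample = """eth0      Link encap:Ethernet
          inet addr:192.168.1.50  Bcast:192.168.1.255  Mask:255.255.255.0
lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
"""
print(first_non_loopback_ip(sample))  # 192.168.1.50
```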

My Packer JSON template file looked like this:

{
    "variables": {
        "ssh_name": "packer",
        "ssh_pass": "packer",
        "hostname": "mattpackertest",
        "preseed_ip": "<NestedESXI_IP>",
        "preseed_port": "8081"
    },

    "builders": [{
        "type": "vmware-iso",
        "name": "test-VM",
        "remote_type": "esx5",
        "remote_host": "<NestedESXI_IP>",
        "remote_datastore": "Nestdatastore1",
        "remote_username": "root",
        "remote_password": "testing",
        "headless": true,
        "vnc_port_min": 5986,
        "vnc_port_max": 5988,
        "guest_os_type": "linux",
        "tools_upload_flavor": "linux",
        "vmdk_name": "test-VM",
        "vm_name": "test-VM",
        "output_directory": "test-VM",
        "vmx_data": {
            "ethernet0.networkName": "VM Network",
            "ethernet0.present": "TRUE",
            "ethernet0.startConnected": "TRUE",
            "ethernet0.virtualDev": "e1000",
            "ethernet0.addressType": "generated",
            "ethernet0.generatedAddressOffset": "0",
            "ethernet0.wakeOnPcktRcv": "FALSE"
        },
        "iso_url": "",
        "iso_checksum": "8fdb6715228ea90faba58cb84644d296",
        "iso_checksum_type": "md5",
        "ssh_username": "{{user `ssh_name`}}",
        "ssh_password": "{{user `ssh_pass`}}",
        "ssh_wait_timeout": "20m",
        "shutdown_command": "echo {{user `ssh_pass`}} | sudo -S shutdown -P -h now",
        "boot_command": [
            "install ",
            "preseed/url=http://{{user `preseed_ip`}}:{{user `preseed_port`}}/preseed.txt ",
            "debian-installer=en_GB auto locale=en_GB ",
            "hostname={{user `hostname`}} ",
            "kbd-chooser/method=uk ",
            "netcfg/get_hostname={{user `hostname`}} ",
            "netcfg/get_domain=localdomain.local fb=false ",
            "debconf/frontend=noninteractive ",
            "console-setup/ask_detect=false ",
            "console-keymaps-at/keymaps=uk ",
            "keyboard-configuration/xkb-keymap=uk ",
            "<enter>"
        ]
    }],

    "provisioners": [{
        "type": "shell",
        "inline": [
            "sudo chmod 000 /etc/rc.local",
            "sudo mv -f /home/packer/rc.local.tmp /etc/rc.local",
            "sudo mv -f /home/packer/interfaces.tmp /etc/network/interfaces",
            "sudo mount /dev/sr0 /mnt/",
            "sudo tar zxvf /mnt/VMwareTools* -C /usr/local/src",
            "sudo /usr/local/src/vmware-tools-distrib/vmware-install.pl -d",
            "sudo sleep 60",
            "sudo sed -i '/packer/d' /etc/sudoers"
        ]
    }]
}

The last section of the JSON file, the shell provisioner, moves back the temporary files I created to store the DHCP lease address (effectively making it static), installs VMware Tools in the guest VM, and cleans up the sudoers entry for the “packer” user.
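
Hand-edited JSON templates are easy to break with a stray comma, so it can be worth a quick syntax check before kicking off a long build. `packer validate` does the authoritative check; this is just a sketch, with an inline snippet standing in for the template file:

```python
import json

# Inline stand-in for the template; in practice you would read the
# .json file from disk with open() before passing it to json.loads().
template_text = """
{
    "variables": {"ssh_name": "packer"},
    "builders": [{"type": "vmware-iso", "remote_type": "esx5"}]
}
"""

template = json.loads(template_text)  # raises ValueError on malformed JSON
assert "builders" in template, "template needs a builders section"
print(template["builders"][0]["type"])  # vmware-iso
```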

After I had completed the above, I cloned the Nested ESXi environment to a template, which let me redeploy the entire nested test environment at the click of a button and deploy a Packer-built VM ready for production as follows:

packer build esxi-template.json

Then sit back and watch the VM get created in the Nested ESXi environment. It will be removed from the inventory on completion, but will be available in the specified datastore to be added as a new VM. This can be iterated over and over, or cloned to a template once in ESXi.

Give it a try!

Matt Palmer (c) 24 Sept 2014