Thursday, 12 October 2017

Unable to create new files and folders on NAS share from HPE StoreOnce device


Unable to create new files and folders on a NAS share from an HPE StoreOnce device. This may also affect other third-party (non-Windows) SMB server implementations.

You see errors like the following in Windows Explorer:

Could not find this item
This is no longer located in <%3 Null:OpText>. Verify the item's location and try again.

There is a problem accessing \\server\sharename
Make sure you are connected to the network and try again.

You see errors like the following in Veeam Agent for Windows Backup log:

Error: The specified network name is no longer available. Failed to process [isFileExists]


The 'Microsoft network client: Digitally sign communications (always)' policy setting is set to Enabled.


Set the following policy to Disabled and reboot the machine.

Computer Configuration > Windows Settings > Security Settings > Local Policies > Security Options > Microsoft network client: Digitally sign communications (always)

Thursday, 17 August 2017

Azure CPU Price to Performance Roundup

I have recently been running some benchmarks on Azure Virtual Machines... Lots of benchmarks!

Update: Script published on GitHub here! This is not a finished product and is a bit "hacky" - you have been warned! :)

In fact, I've written a script which will power up each VM type that is available to me on an MSDN subscription and make it run Cinebench R15. My MSDN subscription has the default cores-per-region limit, so the largest machine I was able to test was the 20-core D15_v2.

The script ran the benchmark 3 times to try to account for time based variance such as background processes. I tried to minimise background processes by disabling Windows Update and Defender on the machine.

The time taken to run through all of the machine types available to me serially and run Cinebench 3 times was not as long as I initially expected - about 24 hours for a full run. Due to this, I'm open to running other benchmarks that can be run from the command line and will output in some standard fashion. I might do a strawpoll if anyone is interested.

Keep in mind that Cinebench only tests CPU performance, so this will not be relevant for other machine uses such as GPU or Disk IO.

Price to Performance

Price to performance for each VM series and CPU type was calculated by dividing the Cinebench score (average of 3 runs) by the price per month of the Virtual Machine. The results are displayed as an average score of all the VM types in the series. 
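As a rough sketch, the calculation looks like this (Python; the scores and prices below are illustrative placeholders, not my measured results):

```python
# Price-to-performance: average Cinebench R15 score over the 3 runs,
# divided by the VM's monthly price. Numbers here are made up for
# illustration only.

def price_to_performance(run_scores, monthly_price):
    """Average the benchmark runs, then divide by monthly cost."""
    avg_score = sum(run_scores) / len(run_scores)
    return avg_score / monthly_price

# Example: a hypothetical F-series VM scoring ~500 cb at £100/month
ratio = price_to_performance([495, 502, 498], 100.0)
print(round(ratio, 2))  # cb points per pound per month
```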

From the results, you can see that the F series VM is by far the best performing per pound spent. The defunct G series VM is the most expensive for CPU.

Real world Cinebench R15 results vs Microsoft 'Azure Compute Unit' figures

I have normalised the Cinebench per-core, per-thread and ACU scores for each VM type and then averaged the normalised value to each VM series.
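The normalisation step can be sketched like this (Python; the series names and scores are illustrative, not my measured data):

```python
# Normalising scores so different metrics (Cinebench per-core,
# per-thread, ACU) can sit on one chart: divide each value by the
# best value, so the best series scores 1.0.

def normalise(values):
    """Scale a list of scores so the largest becomes 1.0."""
    top = max(values)
    return [v / top for v in values]

per_core = {"H": 150, "Dv2": 120, "G": 110}  # illustrative scores
normed = dict(zip(per_core, normalise(list(per_core.values()))))
print(normed["H"])            # 1.0 - the best per-core series
print(round(normed["Dv2"], 2))
```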

The best performing per-core virtual machine type is the H series, using the Intel Xeon E5-2667 v3 Haswell at 3.2 GHz. 

The results show that almost all of the CPUs (relative to the H series) perform in line with their respective ACU scores, which means the ACU is a good yardstick for gauging relative CPU performance.

The only outlier on this chart is the Dv3 series when looking at per-core values. Since this VM is using Hyper-Threading, there are 2 threads per core and therefore it performs significantly better relative to the other non-SMT virtual machine types. This is however reflected in the pricing of the VM and so the H series is still top dog on CPU price to performance.

A final note

I have looked at the variance on per-core and per-thread scores (divide the multi-core score by the number of cores or threads), and the variance across all the VM types in a series is very low. The only noticeable exception is the G series, where the Cinebench R15 multi-core score does not scale linearly: per-core scores go down as the VM size increases. It's probably worth investigating single-core benchmarks on these machines to see if there is some artificial limiting happening or if this is due to Intel Turbo Boost kicking in on the CPU.
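The per-core comparison above boils down to something like this (Python; the scores are illustrative, not my measured figures):

```python
import statistics

# Per-core score = multi-core score / core count. A small spread of
# per-core figures across a series means the score scales roughly
# linearly with VM size. Numbers below are made up for illustration.

def per_core_scores(results):
    """results: list of (multi_core_score, core_count) tuples."""
    return [score / cores for score, cores in results]

# A hypothetical series where scaling is close to linear
d_series = [(200, 2), (405, 4), (790, 8)]
scores = per_core_scores(d_series)
print([round(s, 2) for s in scores])
print(round(statistics.pstdev(scores), 2))  # low value = linear scaling
```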

Let me know if you can think of a good benchmark to run across the entire range of Azure VMs.

Here is a Google Sheet with all of my results. 

Here is the script



Saturday, 15 July 2017

Azure D-Series v3 Performance Comparison - Does Hyperthreading mean better price to performance?

Microsoft has just announced their new Dv3 and Ev3 series VMs, which take advantage of Hyper-Threading on their Intel Xeon Broadwell CPUs. They suggest up to a 28% price reduction compared with the Dv2 VMs with the same number of vCPUs at each VM size.

I wanted to see how the new VM sizes compare to the older ones and whether there are any price-to-performance benefits with the new machines. I've done some testing on these new VMs with Cinebench to see how they compare to the old VMs on this popular synthetic CPU benchmark.


While doing my testing, I noticed some interesting features of the new VMs.

Here are the Core and Thread counts for the comparative DS4 v2 and the D8s v3. Both VMs are shown as 8 "core" VMs in the marketplace. The new v3 VM has 8 Threads and only 4 cores, whereas the older v2 VM has 8 real cores.

D8s v3 - 8 virtual CPU VM
DS4 v2 - 8 virtual CPU VM

It's apparent from the above screen grabs from Cinebench that the hypervisor is presenting Hyperthreading up to the guest OS. 

While testing, I made note of the core and thread count of each VM and also the processor type on each VM. The new v3 VMs present their processors up to the operating system as hyperthreaded logical cores, while the v2 VMs present "full cores" to the OS.

Cores vs Threads

The new v3 VM types are all Hyper-Threading-enabled VMs, which means that for each pair of "virtual CPUs" there is a single underlying core on the physical processor in the server. A Hyper-Threaded core performs better than a plain core, but not twice as well.

Below is a table of the VM types that I tested with their core and thread counts, and also the CPU type that was detected in each machine.

Server Type   Cores   Threads   CPU Type
DS2 v2        2       2         Intel Xeon E5-2673 v4 (1 socket, 2 virtual processors)
D2s v3        1       2         Intel Xeon E5-2673 v4 (1 socket, 2 virtual processors)
DS3 v2        4       4         Intel Xeon E5-2673 v4 (1 socket, 4 virtual processors)
D4s v3        2       4         Intel Xeon E5-2673 v4 (1 socket, 4 virtual processors)
DS4 v2        8       8         Intel Xeon E5-2673 v4 (1 socket, 8 virtual processors)
D8s v3        4       8         Intel Xeon E5-2673 v4 (1 socket, 8 virtual processors)

Testing Setup

I created a new VM on one of the sizes, waited for initial setup to complete and for the machine to become idle, then ran the multi-core and single-core benchmarks in Cinebench. Once I had gathered the results and some screenshots, I resized the same VM into a different model and ran the tests again.

I only had time to run one single-threaded and one multi-threaded test per VM type, but I hope to address this soon with full, multi-run benchmarks.


The results were not surprising given the thread and core counts of the VMs. When testing normal desktop CPUs, I have seen similar results from Hyper-Threaded and non-Hyper-Threaded CPUs.

Below is the multi-core Cinebench score for each VM type that I tested. You can see that the equivalent VM type on the v3 VMs scores substantially lower than the v2 VM with the same number of virtual CPUs (threads).

The multi-core Cinebench score for each VM type tested, the top 2 machines are the 2-vCPU VMs, the second set are the 4 vCPU VMs and the last 2 machines are the 8-vCPU VMs.

You can see from the above chart that the equivalent v3 virtual machines score considerably lower than the v2 machines.

The next chart is something I put together to show the relative value of each machine type based on Cinebench score. It is calculated as (multi-core Cinebench score) ÷ (cost in GBP per day to run the VM).
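A minimal sketch of that calculation (Python; the scores and daily prices are illustrative placeholders, not my actual figures):

```python
# Relative value: multi-core Cinebench score divided by the daily
# running cost in GBP. The numbers below are made up to show the
# shape of the comparison, not real benchmark results or prices.

def value_score(cb_score, gbp_per_day):
    return cb_score / gbp_per_day

# Hypothetical 8-vCPU pair: v2 with 8 real cores vs v3 with 4 cores
v2 = value_score(900, 10.0)   # "DS4 v2" - faster, pricier
v3 = value_score(600, 7.2)    # "D8s v3" - cheaper, but slower
print(round(v2, 1), round(v3, 1))
print(v2 > v3)  # in this sketch the v2 VM wins on cb per pound
```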

Cinebench scores weighted against cost to run the VM.

You can see that the v2 VMs deliver considerably more Cinebench points per pound than the v3 VMs.

Here is my full spreadsheet of results.


Microsoft appears to be offering machines at a lower cost per vCPU; however, if you are looking for raw performance per £ spent, it's better to keep using the v2 D-series VMs at this time.

Monday, 10 July 2017

Convert Azure Windows virtual machine license to Hybrid Use Benefit

Microsoft has recently announced their Hybrid Use Benefit for Windows virtual machines. They claim a 40% lower price on Windows VMs in Azure, which is certainly a good thing.

If you license your on-premises hosts with Windows Server Datacenter and also run some Azure Windows VMs, then you can use the license for both - at the same time!

"Each two-processor licence or each set of 16-core licences are entitled to two instances of up to 8 cores, or one instance of up to 16 cores. The Azure Hybrid Use Benefit for Standard Edition licences can only be used once either on-premises or in Azure. Datacenter edition benefits allow for simultaneous usage both on-premises and in Azure."

The only issue with this offer is that you need to enable the hybrid use at deploy time on your VMs. This isn't a problem for ephemeral VMs, but if you have permanent VMs that you want to save some money on, then you need to re-provision them.

It is possible to delete your VMs and then recreate them using the original VHD, with the hybrid use benefit enabled. I've written a script which will do just this and published it on GitHub here.

As always, DO NOT use this in production and make sure you have backups of your machines before you use the script!

Be aware that this will only work on some VMs; I've not yet worked out the prerequisites. The script *should* detect if it hasn't worked and put your VM back with its original license if the ARM platform throws an error. To put this in perspective, I've run the script on 8 VMs, 1 of which didn't work.

Update: Should now work with managed disk VMs

Thursday, 6 July 2017

There is already a session running for the copy specification JOBNAME. The session will abort.

The job shows the following error in the log

There is already a session running for the copy specification <JOB NAME>. The session will abort.

The job cannot be seen in the monitor tab of Data Protector manager.

This is known to occur in HPE Data Protector 9.08 but it's likely to be the same for other versions.

The job has failed, but the csm.exe process that controls the job appears to have hung and been left running.

Find the running csm.exe for the job and kill it with Task Manager.

I found the right process by looking at the CPU Time column to see which one had been running the longest, since a second csm.exe was also running. The crashed csm.exe was using a large amount of memory (>5 GiB) and maxing out a single CPU core.

To be certain you have the right csm.exe process, you can check the DATALIST environment variable in the process properties using procexp.exe (Sysinternals Process Explorer).

Once the broken csm.exe has been killed, the job should run normally.

Friday, 26 May 2017

Automating VNX Snapshots with EMC Storage Integrator Powershell

I've been working on automating snapshot creation for a bunch of datastores for one of our applications. In the past I would have written a PowerShell wrapper for the old Navisphere CLI, but I thought I would check to see if the PowerShell module for EMC VNX had gotten any better.

It has! Very much better!

Some of the nice features I have noticed while working on this script:
  • You can connect to multiple storage arrays and search for LUNs across arrays.
  • Once you have the LUN objects you can work on them and not need to care which SAN they are on.
I have published the script here on my GitHub.

The script is intended to be scheduled to run on a daily basis and will automatically clear any snapshots older than the 'expireDays' parameter.

The script will detect which type of LUN is passed in and will create a pool based snapshot for pool LUNs and a SnapView snapshot for RAID based LUNs.
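The script's flow can be sketched roughly like this (Python pseudocode of the approach; the real script is PowerShell using the EMC Storage Integrator module, and the function and field names here are hypothetical):

```python
from datetime import datetime, timedelta

# Sketch of the two decisions the script makes: is a snapshot older
# than the retention window, and which snapshot type suits the LUN.

def expired(snapshot_created, expire_days, now=None):
    """True if a snapshot is older than the retention window."""
    now = now or datetime.utcnow()
    return now - snapshot_created > timedelta(days=expire_days)

def snapshot_type(lun):
    """Pool LUNs get pool snapshots; RAID-group LUNs get SnapView."""
    return "pool" if lun.get("is_pool") else "snapview"

now = datetime(2017, 5, 26)
old = datetime(2017, 5, 1)
print(expired(old, 7, now))               # True - older than 7 days
print(snapshot_type({"is_pool": True}))   # pool
```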

Comments and improvements always welcome.

An example for running the script follows: