My first steps in managing Hyper-V through PowerShell

I’ve dodged Linux for way too long, but let’s face it: Linux’s kernel and its derivatives have won the war over Windows everywhere but the desktop (and laptop).

And while I have been dabbling with Ubuntu over WSL for over a year now, the reality is that the current version of WSL has some serious limitations. The good thing is that WSL 2 is on its way, with support for a “real” Linux kernel in it.

Unfortunately, WSL 2 is still in preview, so a couple of months ago, when I decided to up my game on Linux and its different distros, I chose to do so running VMs on top of Windows 10 1809’s Hyper-V.

So I’m back to fiddling with virtual machines after several years of basically ignoring them. Back then, I had a quite elaborate setup that allowed me to spin up multiple VMs at once on a laptop. I remember using differential disks heavily to conserve disk space on the host’s hard drive. For some reason I really don’t recall now, the VMs’ configuration files and VHDs were stored at non-default locations. Maybe they were being copied to different hosts, which had different defaults.

Well, my current usage of virtual machines doesn’t justify the use of differential disks, and I’m not moving those VMs around, so, as a long-time K.I.S.S. proponent, I’m sticking to the defaults for the time being.

During the first couple of days playing around, I had set up a handful of VMs using Hyper-V Manager, but that is kind of tedious and error-prone, so again, in the spirit of “Always Be Automating”, I started using PowerShell where possible so I could learn the commands and eventually codify the tasks involved in a script.

So here are the guest virtual machines I had set up:

PS C:\WINDOWS\system32> Get-VM | Select-Object Name
 Name
 CentoOS20190725
 CentOS
 Suse
 Ubuntu
 Ubuntu1804
 Windows 10 dev environment

Unfortunately, all those virtual machines came at the cost of helping to exhaust the free space on the host’s local hard drive.

PS C:\Users> Get-PSDrive -PSProvider FileSystem
 Name           Used (GB)     Free (GB) Provider      Root
 ----           ---------     --------- --------      ----
 C                 879.83         36.93 FileSystem    C:\
 D                                      FileSystem    D:\
 E                                      FileSystem    E:\

To get a sense of how much space those VMs are taking, let’s take a look at the disks being used by them.

Just so I can save some typing, I’m going to be using some variables here and there.

First, I get the path to the directory where the virtual hard drives are located.

$vhdPath = (Get-VMHost).VirtualHardDiskPath

Then I get a collection of objects representing the files contained in there. Since I’m only interested in the file names and their sizes, I’ll be leaving out the other properties.

$files = Get-ChildItem $vhdPath | Select-Object Name, Length | Sort-Object Length -Descending
[Screenshot: contents of the $files variable]

There are a handful of relatively small files containing GUIDs in their names. Those files are Hyper-V checkpoints and enable us to go back to the point in time when those snapshots were made.

Here’s how we list the checkpoints available on the current host.

Get-VM | Get-VMSnapshot | Select-Object VMName, Name
[Screenshot: list of checkpoints for the virtual machines on the current Hyper-V host]

Since I really don’t need those checkpoints right now, and they are making it hard to see what’s going on, I’ll be removing them, which merges their contents back into the main VHDs.

But before that, let’s take note of the total number of files and how much disk space they consume.

PS C:\WINDOWS\system32> $baseline = $files | Measure-Object -Sum Length | Select-Object Count, Sum
 PS C:\WINDOWS\system32> $baseline
 Count         Sum
 -----         ---
    13 86536880128

To remove and merge the checkpoints of each VM:

Get-VM | Remove-VMSnapshot

An important note: Remove-VMSnapshot seems to execute asynchronously and will return before the underlying files are merged and removed, so you want to be careful if you immediately issue subsequent commands that depend on those files being gone.
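Not from the original commands above, but since I’m using the defaults (every disk lives under the host’s VirtualHardDiskPath), a rough way to wait for the merges to finish is to poll for leftover checkpoint differencing disks (*.avhdx):

# Sketch: block until the checkpoint differencing disks under the default
# virtual hard disk path have been merged away.
$vhdPath = (Get-VMHost).VirtualHardDiskPath
while (Get-ChildItem $vhdPath -Filter *.avhdx -ErrorAction SilentlyContinue) {
    Start-Sleep -Seconds 5
}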

To check out the result of removing those checkpoints, we basically repeat the commands issued previously.

PS C:\WINDOWS\system32> $files = Get-ChildItem $vhdPath | Select-Object Name, Length | Sort-Object Length -Descending
 PS C:\WINDOWS\system32> $files
 Name                                 Length
 ----                                 ------
 Windows 10 dev environment.vhdx 39195770880
 Ubuntu 18.04.1 LTS (1).vhdx     10661920768
 New Virtual Machine (2).vhdx     8728346624
 New Virtual Machine.vhdx         5943328768
 CentoOS20190725.vhdx             5540675584
 Ubuntu 18.04.1 LTS.vhdx          4756340736
 Ubuntu Server 18.04.vhdx         3762290688
 New Virtual Machine (1).vhdx        4194304
 PS C:\WINDOWS\system32> $current = $files | Measure-Object -Sum Length | Select-Object Count, Sum
 PS C:\WINDOWS\system32> $current
 Count         Sum
 -----         ---
     8 78592868352
PS C:\WINDOWS\system32> $baseline.Count - $current.Count
 5
PS C:\WINDOWS\system32> $baseline.Sum - $current.Sum
 7944011776

As can be seen, five checkpoints were removed resulting in saving a little over 7GB. It isn’t that much, but at least looking at the remaining files, it’s easier to see that there are two more virtual disks than virtual machines. Given those virtual machines are configured with only one virtual hard disk each, there are two orphaned virtual hard drives that should probably be deleted.

As I’m sticking to Hyper-V’s defaults and checkpoints already have been removed, deleting those orphaned virtual disks is quite easy: First you get a list containing each virtual disk attached to a virtual machine. Then you enumerate the files in the host’s default virtual hard disk directory and remove those that aren’t on the list.

 PS C:\WINDOWS\system32> $rootedVHDs = (Get-VM).HardDrives.Path

 PS C:\WINDOWS\system32> Get-ChildItem (Get-VMHost).VirtualHardDiskPath | Where-Object { $_.FullName -notin $rootedVHDs } | Remove-Item
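That’s not something I did here, but if you’d rather see which files would be deleted before committing to it, Remove-Item supports a -WhatIf dry run:

# Dry run: list the orphaned virtual disks that would be removed, without removing anything.
$rootedVHDs = (Get-VM).HardDrives.Path
Get-ChildItem (Get-VMHost).VirtualHardDiskPath | Where-Object { $_.FullName -notin $rootedVHDs } | Remove-Item -WhatIf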

gpg: keyserver receive failed: No dirmngr

As part of verifying the signature for a downloaded file using GnuPG running on Ubuntu 18.04 on WSL on Windows 10 1809, I tried to import the publisher’s signing key…

gpg --receive-keys 0x22C07BA534178CD02EFE22AAB88B2FD43DBDC284

… for which I received the following error message:

gpg: connecting dirmngr at '/home/foo/.gnupg/S.dirmngr' failed: IPC connect call failed
 gpg: keyserver receive failed: No dirmngr

According to Ben Hillis, a developer on the Windows Subsystem for Linux team, there was a bug in the version of GPG packaged into Ubuntu 18.04 that only manifests itself when running over WSL:

… this is a timing-related issue that is exposed because of a difference in how Windows and Linux handles connection attempts to localhost sockets. On Linux an attempt to connect to a localhost tcp socket on a port that is not active will return a failure immediately. On Windows there appears to be a 1 second timeout. This causes the following sequence to occur.
1. gpg spawns dirmngr
2. dirmngr attempt to connect to localhost port 9050 (this is attempted twice).
3. gpg attempts to connect to a unix socket that dirmngr creates after the localhost socket connection fails.
This one second timeout in step 2 is enough to cause gpg to think that dirmngr is not responding. There appears to be a retry loop in gpg, but it is not waiting long enough to account for the 1 second connect timeout (the connect is actually attempted twice).

His series of comments on the issue over at GitHub really seems to be an accurate description of the problem, as I was able to import the GPG key in Ubuntu 18.04 running in a Hyper-V VM without any problems whatsoever.

Back to WSL, note that although the command failed due to the timing issue described above, dirmngr is now running, so if you issue the command once again, it should work.

foo@bar:~$ gpg --receive-keys 0x22C07BA534178CD02EFE22AAB88B2FD43DBDC284
 gpg: connecting dirmngr at '/home/foo/.gnupg/S.dirmngr' failed: IPC connect call failed
 gpg: keyserver receive failed: No dirmngr

 foo@bar:~$ !!
 gpg --receive-keys 0x22C07BA534178CD02EFE22AAB88B2FD43DBDC284
 gpg: key B88B2FD43DBDC284: 22 signatures not checked due to missing keys
 gpg: key B88B2FD43DBDC284: public key "openSUSE Project Signing Key opensuse@opensuse.org" imported
 gpg: marginals needed: 3  completes needed: 1  trust model: pgp
 gpg: depth: 0  valid:   1  signed:   3  trust: 0-, 0q, 0n, 0m, 0f, 1u
 gpg: depth: 1  valid:   3  signed:   0  trust: 3-, 0q, 0n, 0m, 0f, 0u
 gpg: Total number processed: 1
 gpg:               imported: 1

Since I’m banging commands interactively against the shell, I’m OK with this workaround. On the other hand, if these commands were part of a script, I’d make sure dirmngr is running before issuing any commands that depend on it. In that case, you may want to take a look at the man pages.
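As a minimal sketch of that idea (not part of the original workaround, and the man pages have the details), gpgconf can launch dirmngr explicitly before the first command that needs it:

#!/bin/bash
# Make sure dirmngr is up before issuing commands that depend on it.
gpgconf --launch dirmngr
gpg --receive-keys 0x22C07BA534178CD02EFE22AAB88B2FD43DBDC284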

How to copy Visual Studio Code extensions to another machine

Earlier this month I was setting up an Ubuntu VM on Windows 10 for development and after installing Visual Studio Code it was time to install the extensions I’m used to having around.

There were 30 of them installed on my main development box. I probably don’t use most of these extensions, but I wasn’t in the mood to sort them out, so I searched for a way to export the settings so I could import them into Code running in the VM.

I wasn’t able to find what I was hoping for, the closest thing being the answers to “How can you export VS Code extension list” on StackOverflow.

# Based on the code snippet found at https://stackoverflow.com/a/49398449/151249
code --list-extensions | xargs -L 1 echo code --install-extension

The answers were a step in the right direction but not quite where I ultimately wanted the solution to be:

  1. Having the list of extensions checked into source control with the rest of the project. For instance, if the team agreed on using ESLint as a build step, having a script to automate installing the corresponding extension (possibly one of many) could help in ramping up new project members.
  2. Having a way to sync extensions between machines or a way to export a list of extensions (not tied to any specific project) I could easily import to anywhere needed, such as disposable VMs, a new machine provided by an employer, etc.

I didn’t have time to figure out how to implement either of those ideas to the full extent of how I think they should work, but in the spirit of “Always Be Automating” I did the next best thing, which was hacking together a couple of one-liners, a step closer to the solution I eventually want.

First I created a file containing the list of extensions:

code --list-extensions > vscode-extensions.txt

I think for most people the easiest way to access the list from other machines is putting the file somewhere online. I’m of the opinion that the best place to store anything development related is on GitHub, so I uploaded the list to a gist over there:

DavidAnson.vscode-markdownlint
docsmsft.docs-article-templates
docsmsft.docs-markdown
docsmsft.docs-preview
EditorConfig.EditorConfig
GitHub.vscode-pull-request-github
ms-azuretools.vscode-azureappservice
ms-azuretools.vscode-azurefunctions
ms-azuretools.vscode-azurestorage
ms-azuretools.vscode-cosmosdb
ms-mssql.mssql
ms-vscode.azure-account
ms-vscode.azurecli
ms-vscode.cpptools
ms-vscode.csharp
ms-vscode.mono-debug
ms-vscode.powershell
ms-vscode.vscode-node-azure-pack
ms-vsts.team
msazurermtools.azurerm-vscode-tools
msjsdiag.debugger-for-chrome
PeterJausovec.vscode-docker
redhat.java
VisualStudioExptTeam.vscodeintellicode
vsciot-vscode.azure-iot-toolkit
vscjava.vscode-java-debug
vscjava.vscode-java-dependency
vscjava.vscode-java-pack
vscjava.vscode-java-test
vscjava.vscode-maven

Then all I had to do to install those extensions was to CURL that file and pipe it into code:

#!/bin/bash
curl https://gist.githubusercontent.com/alfredmyers/336ed20410acee6688f7ba7c85b5826f/raw/84afcdbb919e9a9912c73914c7859746e862259a/vscode-extensions.txt | xargs -L 1 code --install-extension

That did the trick and I was able to continue working on whatever I was working on, although I wasn’t quite happy with the gist’s URL. See, I don’t know of any way of getting rid of that automatically generated GUID, which would make the URL more memorable.

Earlier today, after watching Amanda Silver and John Papa on Five Things, where she mentioned how to list Visual Studio Code extensions from the command line, I decided to fix that URL problem by putting the list in a GitHub repo with GitHub Pages turned on, so instead of a cryptic URL I have something more memorable.

#!/bin/bash
curl https://alfredmyers.github.io/codex/all.txt | xargs -L 1 code --install-extension

I decided to call the repo Codex, for Code Extensions. That gives me an easy to remember base URL: https://alfredmyers.github.io/codex/

For now, I only have a single list in “all.txt”, but there’s nothing stopping me from creating other lists containing extensions for specific purposes or projects. For instance:

  • https://alfredmyers.github.io/codex/dotnet.txt
  • https://alfredmyers.github.io/codex/nodejs.txt

Oh… And by the way, once you CURL that list, you can pipe it into any command you’d like. For instance, to uninstall all those extensions we got from all.txt:

#!/bin/bash
curl https://alfredmyers.github.io/codex/all.txt | xargs -L 1 code --uninstall-extension

Just make sure you have a list of the extensions you really need hanging around so you can use it to reset everything to a desired state.
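For instance (just a sketch, not something that’s part of the repo), resetting could be as simple as uninstalling whatever is currently installed and reinstalling from the curated list:

#!/bin/bash
# Reset the local Code install to the curated list in all.txt.
code --list-extensions | xargs -L 1 code --uninstall-extension
curl https://alfredmyers.github.io/codex/all.txt | xargs -L 1 code --install-extension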

If you find the idea interesting, feel free to fork the project (https://github.com/alfredmyers/codex) and hack it to your needs. And don’t forget to turn on GitHub Pages so you can access the lists using an easy-to-remember base URL such as https://{your-user-name}.github.io/codex/.

error: gpg failed to sign the data

A couple of months ago I noticed that commits I’d done through the GitHub web interface were receiving a “Verified” badge, while commits done through the Git command line in WSL (Windows Subsystem for Linux) on my local dev machine weren’t.

I’m all for badges, so I followed the steps found at About commit signature verification to set up GPG signing. The thing is, there was something still missing, and as a result, when trying to commit I was getting an error message as follows:

error: gpg failed to sign the data
fatal: failed to write commit object

Fortunately, the solution is simple. Export a variable named GPG_TTY as follows:

export GPG_TTY=$(tty)

I ended up appending it to ~/.bashrc so as to persist it between terminal sessions.
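In other words (assuming bash, and note the single quotes so $(tty) is evaluated when the shell starts, not when the line is appended):

echo 'export GPG_TTY=$(tty)' >> ~/.bashrc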

Cannot find runtime ‘node’ on PATH. Is ‘node’ installed?

On my current Windows box, I have Node.js installed only in WSL – not on Windows itself.

When debugging a Node.js application from within a Visual Studio Code instance started from WSL, you may receive the following message:

Cannot find runtime 'node' on PATH. Is 'node' installed?

While the dev experience certainly could be better, the solution is quite simple: click the “Open launch.json” button and add a “useWSL” property set to true.

{
  "version": "0.2.0",
  "configurations": [
    {
      "type": "node",
      "request": "launch",
      "name": "Launch Program",
      "program": "${workspaceFolder}\\entry-point.js",
      "useWSL": true
    }
  ]
}

The system does not support local kernel debugging

If you’re trying to do some local kernel debugging with one of the Windows Debugging Tools’ debuggers and Windows isn’t booted into debug mode, you’ll get a message like one of the following (all commands run from an elevated command prompt):

kd -kl

Microsoft (R) Windows Debugger Version 10.0.15063.400 AMD64
Copyright (c) Microsoft Corporation. All rights reserved.

The system does not support local kernel debugging.
Local kernel debugging requires Administrative privileges.
Only a single local kernel debugging session can run at a time.
Local kernel debugging is disabled by default. You must run “bcdedit -debug on” and reboot to enable it.
Debuggee initialization failed, HRESULT 0x80004001
Not implemented

windbg -kl

—————————
WinDbg:10.0.15063.400 AMD64
—————————
The system does not support local kernel debugging.

Local kernel debugging requires Administrative
privileges, and is not supported by WOW64.
Only a single local kernel debugging session can run at a time.
Local kernel debugging is disabled by default. You must run ‘bcdedit -debug on’ and reboot to enable it.
—————————
OK
—————————

You can enable Windows debug mode by using…

bcdedit.exe -debug on

or msconfig.exe, but if you have BitLocker enabled for your OS drive, you’ll have to enter the recovery key or recovery password.

Here’s the message from msconfig.exe:

—————————
System Configuration
—————————
BitLocker Drive Encryption is enabled on your OS drive. Because these changes modify the machine’s boot settings, the machine will enter recovery mode at next boot and you will need to provide a BitLocker recovery key or recovery password. Are you sure you want to proceed?
—————————
Yes No
—————————

Depending on what you’re up to, that’s just too much of a hassle. Fortunately, there’s a tool from Sysinternals that removes the need to boot Windows into debug mode: livekd.

By default, livekd will run kd.exe, but you can tell it to run WinDbg by passing the -w option:

livekd -w

Several other options can be set when running livekd (see the link above for details). The ones it doesn’t understand are passed on to the chosen debugger.

It’s important to note that the feature set of the debugger when running through livekd is not the same as when running without it. See the docs for more information.

warning CS0618: ‘Device.OS’ is obsolete: ‘TargetPlatform is obsolete as of version 2.3.4. Please use RuntimePlatform instead.’

Here are a couple of compiler-generated warnings I found in a project I was recently reviewing:

  1. warning CS0618: ‘Device.OS’ is obsolete: ‘TargetPlatform is obsolete as of version 2.3.4. Please use RuntimePlatform instead.’
  2. warning CS0612: ‘TargetPlatform’ is obsolete

A CS0612 is generated when the code references a type or member to which the parameterless ObsoleteAttribute was applied.

A CS0618 is generated when the code references a type or member to which a parameterized ObsoleteAttribute was applied.

In this case, the reference to Device.OS has to be replaced with a reference to Device.RuntimePlatform.

Since Device.OS was of the enum type TargetPlatform and Device.RuntimePlatform is a string, it is necessary to update the right-hand side of the expression as well.

For that you can use one of the string constants defined on the Device class:

  • iOS
  • Android
  • WinPhone
  • UWP
  • WinRT
  • macOS

Here’s the shape of the change, before and after the update.
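The snippet below is an illustrative sketch rather than the actual code from the project under review (the enclosing class, method, and ListView tweak are made up); the relevant part is swapping the Device.OS comparison for Device.RuntimePlatform:

using Xamarin.Forms;

public static class PlatformTweaks
{
    public static void Apply(ListView listView)
    {
        // Before (raises CS0618 -- Device.OS and TargetPlatform are obsolete as of 2.3.4):
        // if (Device.OS == TargetPlatform.iOS)
        //     listView.RowHeight = 80;

        // After -- Device.RuntimePlatform is a string, compared against the
        // string constants exposed on the Device class:
        if (Device.RuntimePlatform == Device.iOS)
            listView.RowHeight = 80;
    }
}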

Getting the latest version of Sysinternals’ tools

You can easily grab the latest version of any Sysinternals tool by pointing your browser to https://live.sysinternals.com

Say you want to run Process Monitor, point your browser to:

https://live.sysinternals.com/procmon.exe

and voila!

All browsers will download the file to your system, but Internet Explorer and Edge will give you the option to run the tool automatically as soon as it finishes downloading.

But here’s a neat trick I learned today while reading the first chapter of Troubleshooting with the Windows Sysinternals Tools (2nd Edition):

You can run any Sysinternals tool directly from a UNC path such as the following:

\\live.sysinternals.com\tools\procmon.exe

Being a UNC share, you can map it as a local drive and use it from Windows Explorer, from the command line or from PowerShell.
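For example (not from the book, just the usual way to map a drive letter from a command prompt), something like this does it, provided the WebClient service mentioned below is running:

net use * \\live.sysinternals.com\tools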

The book goes on to explain that for this to work, a Windows service called WebClient needs to be running.

On recent versions of Windows it is stopped by default. There are a handful of ways to start the service – both explicitly and implicitly. Some of them require user elevation, others don’t, but the neatest way is one I didn’t find in the book:

pushd \\live.sysinternals.com\tools

pushd will map the share to the first available drive starting from z: and change the current directory to it.

Although running this way is kind of slow (the files are being downloaded from the internet after all…), it is still useful for those situations where you wanna get in, do whatever you gotta do, and leave without having to worry about deleting any files later.

When you’re ready to remove the mapping, popd will remove it and change the directory back to the one that was current when you pushd in.

ASP.NET For The oWin

It’s been a while since my last post, but I’m eager to get back to blogging.

I’ve started at a new job back in May supporting software development teams and since then I’ve been getting back up to speed with the .NET Framework and studying like crazy all things ASP.NET.

The ASP.NET stack has changed a lot in the ten years since the last time I looked seriously into it as a whole while preparing for certification exams. Using just user management and role management as an example, in the span of ten years ASP.NET went from ASP.NET Membership to ASP.NET Simple Membership to ASP.NET Universal Providers to ASP.NET Identity (to ASP.NET Core Identity? – I haven’t yet taken a look into that to see how different it is from its classical ASP.NET counterpart).

The good news is that, for the most part, it has all gone open source. I feel that not many people appreciate how much the source code is worth as a learning tool.

Now let me get back into building an OAuth 2.0 authentication server using OWIN.

On the state of the written press

This morning I received Estadão’s newsletter.
Among the news items, one caught my attention: The 10 cheapest cities in the world for tourists. And the 10 most expensive.

I click the link and land on a page with nothing but pictures, all with the same caption:

“Travel planning and booking site TripAdvisor released this Tuesday (July 19) the TripIndex Cities ranking, which identifies the cheapest and most expensive cities in the world for tourists… … Check out the list”

The emphasis on “Check out the list” is mine. There is no list. There is no name for the city each photo refers to. It doesn’t say which cities are the cheapest, nor which are the most expensive.

Looking at the comments, I see several readers complaining about the lack of further information.

I do a Google search, and among the first results is a link to a “story” on Estadão itself: The cheapest cities in the world for tourists; see the list.

Emphasis on “see the list” mine. There is no list. Again.

I do another Google search, this time looking for the source, TripAdvisor itself. The search brings up links to several sites that reproduce the news in one way or another. I finally find a link to the source, which is in English.

I decide to post the link in the comments to make life easier for other readers.

I see a comment from another reader saying that the city names are indeed there: just hover the mouse over an image and a tooltip with the city name appears. (Un)fortunately, my phone doesn’t have a mouse.

My interest in the subject ends. I already have what I wanted: the information. Straight from the source. And life goes on.