The term ‘{0}’ is not recognized as the name of a cmdlet, function, script file, or operable program.

Applicable to: PowerShell Core global tool 6.2.2 and 6.2.3.
TL;DR: For the purposes of defining scope, I’m limiting the following analysis to SDKs for .NET Core versions currently in LTS and versions in between. As of this writing, that would be 2.1, 2.2, 3.0 and 3.1.

This should work well enough for most cases:

– If you can root or sudo, install whatever version of PowerShell you deem appropriate with an installer or package manager, depending on your operating system (see the example just after this list);
– Else, if you have .NET Core SDK 3.1, update to PowerShell Core tool 7.0.0;
– Else update to PowerShell Core global tool 6.2.4.
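For the first option, on Ubuntu for instance, a single command does the job (assuming snapd is available; package names and commands vary by distro and package manager):

$ sudo snap install powershell --classic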

For details, read on…

As written in my last post, installing the PowerShell global tool 7.0 solved the issue of initializing the PowerShell extension for Visual Studio Code (ms-vscode.powershell).

But there was one thing intriguing me: while trying to initialize the extension, some red text was flashing in Code’s integrated terminal.

After capturing it on video and pausing at precisely the split second the message appears, it is possible to read:

-NoProfile: The term '-NoProfile' is not recognized as the name of a cmdlet, function, script file, or operable program.
Check the spelling of the name, or if a path was included, verify that the path is correct and try again.
At line:1 char:1
+ -NoProfile -NonInteractive -EncodedCommand SQBtAHAAbwByAHQALQBNAG8AZA...
+ ~~~~~~~~~~
+ CategoryInfo          : ObjectNotFound: (-NoProfile:String) [], CommandNotFoundException
+ FullyQualifiedErrorId : CommandNotFoundException

Coincidentally, the parameters seen in the error message match those I had seen previously in the extension’s output log…

PowerShell Extension Output log

… from which I removed the timestamps and log levels, and truncated the log messages for better display below:

Visual Studio Code v1.43.0 64-bit
 PowerShell Extension v2020.3.0
 Operating System: Linux 64-bit
 Language server starting --
     PowerShell executable: /home/alfred/.dotnet/tools/pwsh
     PowerShell args: -NoProfile -NonInteractive -Encoded...
     PowerShell Editor Services args: Import-Module '/home/...
 pwsh started.
 Waiting for session file
 Error occurred retrieving session file
 Language server startup failed.
 The language service could not be started: 
 Timed out waiting for session file to appear.

It is unfortunate that the extension’s log doesn’t show the same information that, albeit briefly, is shown in the integrated terminal – showing instead a misleading error about some “session file”.

If we capture those parameters from the log and use them to run PowerShell directly, we receive the exact same error shown in the integrated terminal.
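For instance, using the executable path from the extension’s log (the placeholder below is mine; replace it with the full base64 string captured from the log):

$ ~/.dotnet/tools/pwsh -NoProfile -NonInteractive -EncodedCommand <base64-from-log>
 -NoProfile: The term '-NoProfile' is not recognized as the name of a cmdlet, function, script file, or operable program.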

With this we can rule out ms-vscode.powershell as the source of the error and instead focus on pwsh itself.

It’s nice to see in practice how the failure to initialize the extension links to argument parsing in the PowerShell Core global tool, but this doesn’t add much to the situation, given I had already learned this is a known bug:

Older versions of the PowerShell dotnet global tool have a bug in them where arguments are not processed properly, breaking our startup invocation.

Please update your PowerShell version to 7 to enable this, or install a non-global tool PowerShell installation and configure the PowerShell extension to use it…

Although the statement pointed me in the right direction, it is imprecise and may lead some people, as was my case, to incorrect conclusions, such as the only two alternatives being to install PowerShell global tool 7.0.0 or to use some means of installation other than as a .NET Core tool.

Given I had very little exposure to PowerShell running on operating systems other than Windows and hadn’t played at all with .NET Core tools, I took the opportunity to read through the documentation and do some testing. Here are some of my findings:

If you have root permissions or can sudo, you’re probably better off installing PowerShell by any means other than as a .NET global tool. The PowerShell Core global tool can be seen as a shim over the “real thing”™, an extra layer that in versions 6.2.2 and 6.2.3 had its option parsing broken.
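The shim arrangement is visible on disk: the pwsh entry point lives in ~/.dotnet/tools, while the actual package sits under the .store subdirectory (a quick sketch, assuming the default install location for .NET Core global tools):

$ ls ~/.dotnet/tools
 pwsh
$ ls ~/.dotnet/tools/.store/powershell
 6.2.3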

If you can’t sudo or elevate to root, you’ll have to resort to using .NET Core tools. First, check out which SDK versions are installed and confirm that PowerShell is installed as a .NET Core tool:

$ dotnet --list-sdks
 2.1.804 [/usr/share/dotnet/sdk]
 3.1.200 [/usr/share/dotnet/sdk]

$ dotnet tool list -g
 Package Id      Version      Commands
 powershell      6.2.3        pwsh   

Whichever SDK versions are installed, you can always uninstall a given version of the tool and then install another. For instance:

# dotnet tool uninstall -g powershell
 Tool 'powershell' (version '6.2.3') was successfully uninstalled.

# dotnet tool install -g --version 6.2.4 powershell
 You can invoke the tool using the following command: pwsh
 Tool 'powershell' (version '6.2.4') was successfully installed.

But upgrading can be done in a single step using dotnet tool update:

# dotnet tool update -g powershell
 Tool 'powershell' was successfully updated from version '6.2.3' to version '7.0.0'.

The command above will work as long as the latest version of the .NET Core tool on the package repository is compatible with the latest .NET Core SDK version installed locally.

Otherwise, it is necessary to specify the tool version, which can only be done starting with .NET Core SDK 3.0:

# dotnet tool update -g --version 6.2.4 powershell
 Tool 'powershell' was successfully updated from version '6.2.3' to version '6.2.4'.

The language service could not be started: Source: PowerShell (Extension)

Earlier today when opening a PowerShell script in Visual Studio Code I got the following error message:

The language service could not be started:

Source: PowerShell (Extension)

After some investigation, which included changing the Editor Services log level to Diagnostic and decoding the base64-encoded command being passed to pwsh, it was pointed out to me on GitHub that PowerShell versions prior to 7 contain a bug that, when they’re installed as a .NET Core global tool, prevents them from processing parameters passed to the pwsh command.
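For the curious: an -EncodedCommand payload is just base64 over UTF-16LE text, so on Linux it can be decoded with something along these lines (the placeholder stands for the full string captured from the log):

$ echo '<encoded-command-from-log>' | base64 -d | iconv -f UTF-16LE -t UTF-8

In my case, that yielded the Import-Module invocation the extension uses to start PowerShell Editor Services.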

So just to confirm the version I had installed, I ran…

$ dotnet tool list -g

 Package Id      Version      Commands
 powershell      6.2.3        pwsh  

… and then, after closing any open instances of Visual Studio Code, I updated PowerShell using .NET Core’s CLI:

$ dotnet tool update powershell -g

Tool 'powershell' was successfully updated from version '6.2.3' to version '7.0.0'.

In case you didn’t install PowerShell through .NET Core’s CLI, you may want to take a look at “Installing various versions of PowerShell” over on Microsoft Docs where you’ll find instructions for installing PowerShell on all the supported target platforms.

@id:ms-vscode.csharp – No extensions found.

There seems to have been a release coordination snafu between Visual Studio Code and the latest release of the C# for Visual Studio Code extension, and as a result, you may be getting…

The ‘C#’ extension is recommended for this file type.

… over and over again.

If you click on Install, Code will tell you it can’t find the extension.

@id:ms-vscode.csharp

No extensions found.

The issue is caused by the publisher of the extension being changed from ms-vscode to ms-dotnettools and how this change cascades to other parts of the extension, such as its ID.

From what I understood from the issue over on GitHub, the problem should go away with the next version of Visual Studio Code – the current version being 1.42.1.

$ code --version
 1.42.1
 c47d83b293181d9be64f27ff093689e8e7aed054
 x64

Meanwhile, since there are reports on GitHub about the change impacting dependent extensions, the workaround will depend on whether you use one of those extensions and how much the installation prompt bugs you.

A good start is finding out which extension, if any, you have installed.

$ code --list-extensions | grep -E 'ms-\w+\.csharp'

This should return ms-vscode.csharp if you have the older extension or ms-dotnettools.csharp if you have the newer one. It may return other extensions as well if they happen to match the given regular expression.
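For example, with only the older extension installed, the output would be:

$ code --list-extensions | grep -E 'ms-\w+\.csharp'
 ms-vscode.csharp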

If you have the newer version and are OK with being prompted to install it over and over again, you’re all set. Otherwise, you can still get the older version, but since it has been removed from the Marketplace, you’ll have to resort to getting it from GitHub.

$ code --uninstall-extension ms-dotnettools.csharp
 Uninstalling ms-dotnettools.csharp…
 Extension 'ms-dotnettools.csharp' was successfully uninstalled!

$ wget https://github.com/OmniSharp/omnisharp-vscode/releases/download/v1.21.12/csharp-1.21.12.vsix
...

$ code --install-extension csharp-1.21.12.vsix
Installing extensions…
 Extension 'csharp-1.21.12.vsix' was successfully installed.

ERROR for site owner: Invalid key type

I just found out that this blog was being used to send spam email to accounts over at qq.com, a Chinese instant messaging software service and web portal.

Apparently, this is due to a bug in Jetpack’s Sharing feature that, despite being known for at least three years, still hasn’t been fixed.

The link above points to a support thread over at WordPress.org describing the issue in more detail and contains a handful of workarounds for mitigating it.

Of the workarounds, I find two are worth mentioning here.

  1. Disable the email Sharing Button. Although a little extreme, it may be justified if you don’t have the inclination to mess around with configuration files or if the usage volume of the feature by legitimate users is low enough;
  2. Add reCAPTCHA to the Email Sharing Button. This one involves adding a couple of lines to wp-config.php.

The thing is, the instructions for adding reCAPTCHA were written before the release of reCAPTCHA v3, and Jetpack (version 7.8 as of this writing) isn’t currently compatible with it, so when setting up reCAPTCHA, you should choose reCAPTCHA v2 or else you will receive the message below:

ERROR for site owner: Invalid key type

I’ll be monitoring the logs for a couple of days to see if enabling reCAPTCHA will suffice. If not, I’ll just disable the email sharing button altogether.
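As far as I can tell, Jetpack’s email button appends a share=email query string to the post URL, so the abuse volume can be eyeballed straight from the web server’s access log. A rough sketch, assuming an Apache-style log at the usual location:

$ grep -c 'share=email' /var/log/apache2/access.log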

It’s kind of lame that the default settings of an email sharing button would open up a website to being used to send spam. But even lamer is seeing people telling others not to complain about it on the basis of the plugin being free.

Please remember that if you using this plugin for free, all requests future need to be in reasonable manners, as nobody paying for it. Consider that, they are doing very good job for users who are using this plugin for free and there is solution for it already.

Freemium ain’t free. It’s a marketing gimmick and has been working very well for Automattic, makers of Jetpack and WordPress – the latter of which by some accounts powers over a third of the top 10 million websites on the internet.

CentOS 8.0 installation boots into a blank screen

CentOS 8.0 was released earlier today with a bug inherited from RHEL 8.0 upstream.

The bug manifests itself when Server with GUI is selected as the Base Environment when installing on a Hyper-V virtual machine. After copying all the needed files onto the hard drive, Anaconda will boot into a blank screen.

The problem is made a little worse because Server with GUI is the default selection.

In yesterday’s post, I showed how the solution proposed in RHEL 8.0’s release notes didn’t quite work when the system wasn’t (yet) registered with Red Hat Subscription Management.

Since CentOS has no such thing, the solution described in the release notes can be applied as is. That is, as long as you have internet connectivity.

But what if the system at hand is isolated from the internet? Then we once again have to resort to installing from the installation media.

This time though, we can take advantage of the repo configuration files installed with the operating system that point to the installation media.

yum repolist --all

From yesterday, we already know that base-x is contained in AppStream, so from the list above we probably want to take a closer look at c8-media-AppStream.

[root@localhost ~]# yum repoinfo --all c8-media-AppStream
 Last metadata expiration check: 0:24:40 ago on Tue 24 Sep 2019 09:24:39 PM -03.
 Repo-id      : c8-media-AppStream
 Repo-name    : CentOS-AppStream-8 - Media
 Repo-status  : disabled
 Repo-baseurl : file:///media/CentOS/AppStream, file:///media/cdrom/AppStream, file:///media/cdrecorder/AppStream
 Repo-expire  : 172,800 second(s) (last: unknown)
 Repo-filename: /etc/yum.repos.d/CentOS-Media.repo
 [root@localhost ~]#

As can be seen, the repo is configured to search for its files at one of the URLs defined by the Repo-baseurl property. Let’s mount the installation media into /media/CentOS.

[root@localhost ~]# mkdir /media/CentOS
[root@localhost ~]# mount /dev/sr0 /media/CentOS/
 mount: /media/CentOS: WARNING: device write-protected, mounted read-only.
 

Now we can invoke yum groupinstall specifying c8-media-AppStream as the source repository.

[root@localhost ~]# yum --repo=c8-media-AppStream -y groupinstall base-x

As yesterday, rebooting now should load the graphical user interface for the final steps of the installation process.

Error: There are no enabled repos.

I was installing Red Hat Enterprise Linux 8.0 on a Hyper-V virtual machine. After copying all the packages from the installation media and installing them onto the virtual hard drive, Anaconda booted RHEL into a blank screen.

After some research I found that the problem was a known issue listed in RHEL 8.0’s release notes as well as on Red Hat’s knowledge base:

The xorg-x11-drv-fbdev, xorg-x11-drv-vesa, and xorg-x11-drv-vmware video drivers are not installed by default

In addition, virtual machines relying on EFI for graphics support, such as Hyper-V, are also affected. If you selected the Server with GUI base environment on Hyper-V, you might be unable to log in due to a black screen displayed on reboot. To work around this problem on Hyper-v, enable multi- or single-user mode using the following steps:
  1. Reboot the virtual machine.
  2. During the booting process, select the required kernel using the up and down arrow keys on your keyboard.
  3. Press the e key on your keyboard to edit the kernel command line.
  4. Add systemd.unit=multi-user.target to the kernel command line in GRUB.
  5. Press Ctrl-X to start the virtual machine.
  6. After logging in, run the yum -y groupinstall base-x command.
  7. Reboot the virtual machine to access the graphical mode.
(BZ#1687489)

So I started following the instructions, but on step #6 something unexpected happened:

[root@localhost ~]# yum -y groupinstall base-x
 Updating Subscription Management repositories.
 Unable to read consumer identity
 This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.
 Error: There are no enabled repos.

Well… Thing is, this is a short-lived, disposable virtual machine and I have no plans on registering it with Red Hat Subscription Management.

Luckily, the installation media contains the two repositories introduced with RHEL 8.0: BaseOS and AppStream.
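Once the media is mounted (which we do below), both show up side by side at its root. Listing trimmed for display:

[root@localhost ~]# ls /mnt
 AppStream  BaseOS  EFI  images  isolinux  ...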

Base-x is contained in the AppStream repo. We just have to:

  1. Make sure the repositories from the installation media are accessible from the file system;
  2. Import the public keys used to sign the packages into RPM;
  3. Direct yum to install the packages from the AppStream repository.
[root@localhost ~]# mount /dev/sr0 /mnt
 mount: /mnt: WARNING: device write-protected, mounted read-only.
 [root@localhost ~]# rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
 [root@localhost ~]# yum --repofrompath AppStream,file:///mnt/AppStream -y groupinstall base-x
Results of the package installation: base-x installed successfully

Please note the command importing Red Hat’s public key into RPM before installing the package; without it, the package wouldn’t have been installed and an error message would have been displayed, as seen below.

[root@localhost ~]# mount /dev/sr0 /mnt
 mount: /mnt: WARNING: device write-protected, mounted read-only.
 [root@localhost ~]# yum --repofrompath AppStream,file:///mnt/AppStream -y groupinstall base-x
 Updating Subscription Management repositories.
 Unable to read consumer identity
 This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.
 Added AppStream repo from file:///mnt/AppStream
 You have enabled checking of packages via GPG keys. This is a good thing.
 However, you do not have any GPG public keys installed. You need to download
 the keys for packages you wish to install and install them.
 You can do that by running the command:
     rpm --import public.gpg.key
 Alternatively you can specify the url to the key you would like to use
 for a repository in the 'gpgkey' option in a repository section and DNF
 will install it for you.

Once the package is installed, rebooting the system should load the graphical user interface for the final steps of the installation process.

RHEL 8.0 Initial Setup graphical user interface

Automating my way out of ancillary tasks with PowerShell

After creating a handful of virtual machines for ramping up on the different Linux distros, certain configuration patterns started to emerge. Repeating those same settings over and over again in Hyper-V Manager or even PowerShell’s command line interface is kind of tedious. Good thing is scripts are a great way to automate those repetitive configuration tasks away.

In my case, the settings I kept copying over and over again were:

  • Two cores;
  • Network adapter bound to a virtual switch with external connectivity;
  • Automatic Checkpoints turned off;

And then there were other settings that, although they varied between distros and installation options within those distros, had to be configured each and every time:

  • Virtual machine name;
  • RAM size;
  • Hard drive size;
  • Path to installation ISO;
  • Secure Boot;

So after learning and getting comfortable with the different PowerShell cmdlets, I started prototyping a few scripts. At first, the scripts were kind of lame and buggy, but as I learned more about the features of Hyper-V and how those features are exposed through PowerShell, the scripts improved little by little, much in the spirit of Kaizen – continuous improvement.

The important thing here is realizing this is a learning tool and, as such, starting small and improving as you go and as new requirements make themselves known.

Important as well is knowing when to stop. The purpose of this exercise isn’t creating a production-ready script, but to create a script that will allow me to get back as soon as possible to what I set out to do in the first place: create virtual machines so I could learn something else.

If you are curious enough, you can find the source code for one of the early versions of the scripts as a gist over on GitHub. If you are even more curious, you can see the entire history of how the code evolved also on GitHub.

Since that early prototype, the code has evolved quite a bit from a collection of discrete scripts to the current version, which is implemented as a PowerShell module and incorporates all the learnings of the last few weeks – including a best practice for creating virtual hard disks for use with Linux file systems. Good luck trying to remember that one every time you create a new VM for Linux!

function Get-OrphanedVHDs {
    # A VHD is orphaned when no VM and no checkpoint references it.
    $rootedVHDs = (Get-VM).HardDrives.Path + (Get-VM | Get-VMSnapshot).HardDrives.Path
    Return Get-ChildItem (Get-VMHost).VirtualHardDiskPath | Where-Object { $_.FullName -notin $rootedVHDs }
}

function New-VirtualMachine {
    Param(
        $Name,
        $MemoryBytes,
        $VHDSizeBytes,
        $IsoPath,
        $SecureBootTemplate,
        $SwitchName
    )
    # If a switch name hasn't been provided, it'll try to find a default.
    if (!$SwitchName) {
        $Switches = Get-VMSwitch | Where-Object SwitchType -eq 'External'
        # If there's only one switch with external connectivity, that's it.
        # Else, a switch name should have been provided.
        if ($Switches.Count -eq 1) {
            $SwitchName = $Switches.Name
        }
    }
    $VHDPath = Join-Path -Path (Get-VMHost).VirtualHardDiskPath -ChildPath ($Name + ".vhdx")
    if ($SecureBootTemplate -eq "MicrosoftWindows") {
        $null = New-VHD -Path $VHDPath -SizeBytes $VHDSizeBytes
    }
    else {
        # 1MB block size is the recommended setting for VHDXs holding Linux file systems.
        $null = New-VHD -Path $VHDPath -SizeBytes $VHDSizeBytes -BlockSizeBytes 1MB
    }
    $VM = New-VM -Name $Name -MemoryStartupBytes $MemoryBytes -Generation 2 -BootDevice VHD -SwitchName $SwitchName -VHDPath $VHDPath
    Set-VM $VM -ProcessorCount 2 -MemoryMaximumBytes $MemoryBytes -AutomaticCheckpointsEnabled $false
    Add-VMDvdDrive $VM -Path $IsoPath
    # Boot from the virtual hard drive, skipping the DVD drive just added.
    Set-VMFirmware $VM -BootOrder ((Get-VMFirmware $VM).BootOrder | ? BootType -eq 'Drive')
    if ($SecureBootTemplate) {
        Set-VMFirmware $VM -SecureBootTemplate $SecureBootTemplate
    }
    else {
        Set-VMFirmware $VM -EnableSecureBoot Off
    }
    Return $VM
}

function Remove-VirtualMachine {
    Param(
        $VMName
    )
    $vm = Get-VM $VMName
    # If checkpoints exist, the base VHDs are referenced by the root checkpoint.
    $snapshot = (Get-VMSnapshot $vm | ? ParentSnapshotName -eq $null)
    if ($snapshot) {
        $vhds = $snapshot.HardDrives.Path
    }
    else {
        $vhds = $vm.HardDrives.Path
    }
    Remove-VM $vm -Force
    Remove-Item $vhds
}

Certainly there are a lot of opportunities for improvement (documentation, error handling, and resilience in general, just to name a few), but those are left as an exercise for my future self. Meanwhile, let me get back to playing with those VMs.

My first steps on managing Hyper-V through PowerShell

I’ve dodged Linux for way too long, but let’s face it: Linux’s kernel and its derivatives have won the war over Windows everywhere but the desktop (and laptop).

And while I have been dabbling with Ubuntu on WSL for over a year now, the reality is the current version of WSL has some serious limitations. The good thing is WSL 2 is on its way, with support for a “real” Linux kernel in it.

Unfortunately, WSL 2 is still in preview, so a couple of months ago, when I decided to up my game on Linux and its different distros, I chose to do so running VMs on top of Windows 10 1809’s Hyper-V.

So I’m back to fiddling with virtual machines after several years of basically ignoring them. Back then, I had a quite elaborate setup that allowed me to spin up multiple VMs at once on a laptop. I remember using differential disks heavily to conserve disk space on the host’s hard drive. For some reason I really don’t recall now, the VMs’ configuration files and VHDs were stored at non-default locations. Maybe they were being copied to different hosts, which had different defaults.

Well, my current usage of virtual machines doesn’t justify the use of differential disks and I’m not moving those VMs around, so, as a long-time K.I.S.S. proponent, I’m sticking to the defaults for the time being.

During the first couple of days playing around, I had set up a handful of VMs using Hyper-V Manager, but that is kind of tedious and error-prone, so again, in the spirit of “Always Be Automating”, I started using PowerShell where possible so I could learn the commands and eventually codify the tasks involved in a script.

So here are the guest virtual machines I had set up:

PS C:\WINDOWS\system32> Get-VM | Select-Object Name
 Name
 CentoOS20190725
 CentOS
 Suse
 Ubuntu
 Ubuntu1804
 Windows 10 dev environment

Unfortunately, all those virtual machines came at the cost of helping to exhaust the free space on the host’s local hard drive.

PS C:\Users> Get-PSDrive -PSProvider FileSystem
 Name           Used (GB)     Free (GB) Provider      Root
 ----           ---------     --------- --------      ----
 C                 879.83         36.93 FileSystem    C:\
 D                                      FileSystem    D:\
 E                                      FileSystem    E:\

To get a sense of how much space those VMs are taking, let’s take a look at the disks being used by them.

Just so I can save some typing, I’m going to be using some variables here and there.

First, I get the path to the directory where the virtual hard drives are located.

$vhdPath = (Get-VMHost).VirtualHardDiskPath

Then I get a collection of objects representing the files contained in there. Since I’m only interested in the file names and their sizes, I’ll be leaving out the other properties.

$files = Get-ChildItem $vhdPath | Select-Object Name, Length | Sort-Object Length -Descending
Contents of the $files variable

There are a handful of relatively small files containing GUIDs in their names. Those files are Hyper-V checkpoints and enable us to go back to the point in time when those snapshots were made.

Here’s how we list the checkpoints available on the current host.

Get-VM | Get-VMSnapshot | Select-Object VMName, Name
List of checkpoints for the virtual machines contained on the current Hyper-V host

Since I really don’t need those checkpoints right now, and they are making it hard to see what’s going on, I’ll be removing and merging them into their main VHDs.

But before that, let’s take note of the total number of files and how much disk space they consume.

PS C:\WINDOWS\system32> $baseline = $files | Measure-Object -Sum Length | Select-Object Count, Sum
 PS C:\WINDOWS\system32> $baseline
 Count         Sum
 -----         ---
    13 86536880128

To remove and merge the checkpoints of each VM:

Get-VM | Remove-VMSnapshot

An important note: Remove-VMSnapshot seems to execute asynchronously and will return before removing the files, so you want to be careful if you’re immediately issuing subsequent commands that depend on those files being deleted.

To check out the result of removing those checkpoints, we basically repeat the commands issued previously.

PS C:\WINDOWS\system32> $files = Get-ChildItem $vhdPath | Select-Object Name, Length | Sort-Object Length -Descending
 PS C:\WINDOWS\system32> $files
 Name                                 Length
 ----                                 ------
 Windows 10 dev environment.vhdx 39195770880
 Ubuntu 18.04.1 LTS (1).vhdx     10661920768
 New Virtual Machine (2).vhdx     8728346624
 New Virtual Machine.vhdx         5943328768
 CentoOS20190725.vhdx             5540675584
 Ubuntu 18.04.1 LTS.vhdx          4756340736
 Ubuntu Server 18.04.vhdx         3762290688
 New Virtual Machine (1).vhdx        4194304
 PS C:\WINDOWS\system32> $current = $files | Measure-Object -Sum Length | Select-Object Count, Sum
 PS C:\WINDOWS\system32> $current
 Count         Sum
 -----         ---
     8 78592868352
PS C:\WINDOWS\system32> $baseline.Count - $current.Count
 5
PS C:\WINDOWS\system32> $baseline.Sum - $current.Sum
 7944011776

As can be seen, five checkpoints were removed, resulting in savings of a little over 7 GB. It isn’t that much, but at least, looking at the remaining files, it’s easier to see that there are two more virtual disks than virtual machines. Given those virtual machines are configured with only one virtual hard disk each, there are two orphaned virtual hard drives that should probably be deleted.

As I’m sticking to Hyper-V’s defaults and checkpoints already have been removed, deleting those orphaned virtual disks is quite easy: First you get a list containing each virtual disk attached to a virtual machine. Then you enumerate the files in the host’s default virtual hard disk directory and remove those that aren’t on the list.

 PS C:\WINDOWS\system32> $rootedVHDs = (Get-VM).HardDrives.Path

 PS C:\WINDOWS\system32> Get-ChildItem (Get-VMHost).VirtualHardDiskPath | Where-Object { $_.FullName -notin $rootedVHDs } | Remove-Item

gpg: keyserver receive failed: No dirmngr

As part of verifying the signature for a downloaded file using GnuPG running on Ubuntu 18.04 on WSL on Windows 10 1809, I tried to import the publisher’s signing key…

gpg --receive-keys 0x22C07BA534178CD02EFE22AAB88B2FD43DBDC284

… for which I received the following error message:

gpg: connecting dirmngr at '/home/foo/.gnupg/S.dirmngr' failed: IPC connect call failed
 gpg: keyserver receive failed: No dirmngr

According to Ben Hillis, a developer on the Windows Subsystem for Linux team, there was a bug in the version of GPG packaged into Ubuntu 18.04 that only manifests itself when running over WSL:

… this is a timing-related issue that is exposed because of a difference in how Windows and Linux handles connection attempts to localhost sockets. On Linux an attempt to connect to a localhost tcp socket on a port that is not active will return a failure immediately. On Windows there appears to be a 1 second timeout. This causes the following sequence to occur.
1. gpg spawns dirmngr
2. dirmngr attempt to connect to localhost port 9050 (this is attempted twice).
3. gpg attempts to connect to a unix socket that dirmngr creates after the localhost socket connection fails.
This one second timeout in step 2 is enough to cause gpg to think that dirmngr is not responding. There appears to be a retry loop in gpg, but it is not waiting long enough to account for the 1 second connect timeout (the connect is actually attempted twice).

His series of comments on the issue over at GitHub really seems to be an accurate description of the problem, as I was able to import the GPG key in Ubuntu 18.04 running in a Hyper-V VM without any problems whatsoever.

Back to WSL, note that although the command failed due to the timing issue described above, dirmngr is now running, so if you issue the command once again, it should work.

$ gpg --receive-keys 0x22C07BA534178CD02EFE22AAB88B2FD43DBDC284
 gpg: connecting dirmngr at '/home/foo/.gnupg/S.dirmngr' failed: IPC connect call failed
 gpg: keyserver receive failed: No dirmngr

$ !!
 gpg --receive-keys 0x22C07BA534178CD02EFE22AAB88B2FD43DBDC284
 gpg: key B88B2FD43DBDC284: 22 signatures not checked due to missing keys
 gpg: key B88B2FD43DBDC284: public key "openSUSE Project Signing Key <opensuse@opensuse.org>" imported
 gpg: marginals needed: 3  completes needed: 1  trust model: pgp
 gpg: depth: 0  valid:   1  signed:   3  trust: 0-, 0q, 0n, 0m, 0f, 1u
 gpg: depth: 1  valid:   3  signed:   0  trust: 3-, 0q, 0n, 0m, 0f, 0u
 gpg: Total number processed: 1
 gpg:               imported: 1

Since I’m banging commands interactively against the shell, I’m OK with this workaround. On the other hand, if these commands were part of a script, I’d make sure dirmngr is running before issuing any commands that depend on it. In that case, you may want to take a look at the man pages.
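In that scenario, a minimal sketch with stock GnuPG (gpgconf ships with GnuPG 2.1 and later) would be to launch the daemon explicitly before the first command that depends on it:

$ gpgconf --launch dirmngr
$ gpg --receive-keys 0x22C07BA534178CD02EFE22AAB88B2FD43DBDC284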

How to copy Visual Studio Code extensions to another machine

Earlier this month I was setting up an Ubuntu VM on Windows 10 for development and after installing Visual Studio Code it was time to install the extensions I’m used to having around.

There were 30 of them installed on my main development box. I probably don’t use most of these extensions, but I wasn’t in the mood to sort them out, so I searched for a way to export the settings so I could import them into Code running in the VM.

I wasn’t able to find what I was hoping for, the closest thing being the answers to “How can you export VS Code extension list” on StackOverflow.

# Based on the code snippet found at https://stackoverflow.com/a/49398449/151249
code --list-extensions | xargs -L 1 echo code --install-extension

The answers were a step in the right direction but not quite where I ultimately wanted the solution to be:

  1. Having the list of extensions checked into source control with the rest of the project. For instance, if the team agreed on using ESLint as a build step, having a script to automate installing the corresponding extension (possibly one of many) could help in ramping up new project members.
  2. Having a way to sync extensions between machines or a way to export a list of extensions (not tied to any specific project) I could easily import to anywhere needed, such as disposable VMs, a new machine provided by an employer, etc.

I didn’t have time to figure out how to implement any of those ideas to the full extent of how I think they should work, but in the spirit of “Always Be Automating”, I did the next best thing, which was hacking together a couple of one-liners, a step closer to the solution I eventually want it to be.

First I created a file containing the list of extensions:

code --list-extensions > vscode-extensions.txt

I think for most people the easiest way to access the list from other machines is putting the file somewhere online. I’m of the opinion that the best place to store anything development related is on GitHub, so I uploaded the list to a gist over there:

DavidAnson.vscode-markdownlint
docsmsft.docs-article-templates
docsmsft.docs-markdown
docsmsft.docs-preview
EditorConfig.EditorConfig
GitHub.vscode-pull-request-github
ms-azuretools.vscode-azureappservice
ms-azuretools.vscode-azurefunctions
ms-azuretools.vscode-azurestorage
ms-azuretools.vscode-cosmosdb
ms-mssql.mssql
ms-vscode.azure-account
ms-vscode.azurecli
ms-vscode.cpptools
ms-vscode.csharp
ms-vscode.mono-debug
ms-vscode.powershell
ms-vscode.vscode-node-azure-pack
ms-vsts.team
msazurermtools.azurerm-vscode-tools
msjsdiag.debugger-for-chrome
PeterJausovec.vscode-docker
redhat.java
VisualStudioExptTeam.vscodeintellicode
vsciot-vscode.azure-iot-toolkit
vscjava.vscode-java-debug
vscjava.vscode-java-dependency
vscjava.vscode-java-pack
vscjava.vscode-java-test
vscjava.vscode-maven

Then all I had to do to install those extensions was to curl that file and pipe it into code:

#!/bin/bash
curl https://gist.githubusercontent.com/alfredmyers/336ed20410acee6688f7ba7c85b5826f/raw/84afcdbb919e9a9912c73914c7859746e862259a/vscode-extensions.txt | xargs -L 1 code --install-extension

That did the trick and I was able to continue working on whatever I was working on, although I wasn’t quite happy with the gist’s URL. See, I don’t know of any way of getting rid of that automatically generated GUID, which would make the URL more memorable.

Earlier today, after watching Amanda Silver and John Papa on Five Things, where she mentioned how to list Visual Studio Code extensions from the command line, I decided to fix that URL problem by putting the list on a GitHub repo with GitHub Pages turned on, so instead of a cryptic URL I have something more memorable.

#!/bin/bash
curl https://alfredmyers.github.io/codex/all.txt | xargs -L 1 code --install-extension

I decided to call the repo Codex, for Code Extensions. That gives me an easy to remember base URL: https://alfredmyers.github.io/codex/

For now, I only have a single list in “all.txt”, but there’s nothing stopping me from creating other lists containing extensions for specific purposes or projects. For instance:

  • https://alfredmyers.github.io/codex/dotnet.txt
  • https://alfredmyers.github.io/codex/nodejs.txt

Oh… And by the way, once you curl that list, you can pipe it into any command you’d like. For instance, to uninstall all those extensions we got from all.txt:

#!/bin/bash
curl https://alfredmyers.github.io/codex/all.txt | xargs -L 1 code --uninstall-extension

Just make sure you have a list of the extensions you really need hanging around so you can use it to reset everything to a desired state.

If you find the idea interesting, feel free to fork the project (https://github.com/alfredmyers/codex) and hack it to your needs. And don’t forget to turn on GitHub Pages so you can access the lists using an easy-to-remember base URL such as https://{your-user-name}.github.io/codex/.