2020 Book Review

As in the book review for 2019, the list for 2020 includes only the books that I’ve read from cover to cover, or at least finished reading the parts I’d committed myself to read. There are a few books I’ve abandoned for one reason or another. Then there are a few others I haven’t finished but intend to resume reading. The latter group may end up in the review for next year.

Also as last time, the book cover images below are affiliate links to Amazon. If you click on them and buy anything over there, you won’t be charged anything more than their normal price, but I’ll get a little commission.

That being said, most of the books I buy these days aren’t bought from Amazon. I mostly buy books in electronic form, provided PDF is one of the available formats. That covers most of them, as most publishers nowadays have web stores and sell e-books in two or three formats: EPUB, PDF and MOBI – the last of which can be copied over to Kindle devices. Having a book in more than one downloadable format gives you flexibility and portability, and prevents you from getting locked into a particular ecosystem.

The exception would be books from O’Reilly. They’ve stopped selling e-books directly from their site, apparently in an effort to arm-twist their customers into subscribing to O’Reilly Learning. The only place I’ve been able to find their books in electronic format is when, from time to time, a themed selection of their books goes on sale on humblebundle.com.

Webpack for Beginners

by Mohamed Bouzid

As part of my interest in PWAs, I invested the early part of 2019 ramping up on modern web development technologies, and although I knew that sooner or later I’d have to take a look into bundlers, the topic was still some items down my to-do list.

But in January 2020 I volunteered to do technical reviews of books on topics I had an interest in: Linux, client-side web development, Azure, and .NET. I got an offer to review this book about Webpack and it was a nice match, as the book is laid out as a tutorial and I was a true beginner on the subject.

As the pesky reviewer I am, I followed each and every instruction contained in the book making sure each one worked as described and gave feedback where it occasionally didn’t.

Beyond that, I gave feedback on a few parts that I thought needed clarification, other parts that seemed repetitive, and other parts that seemed to get into too much detail on topics that, in my understanding, were prerequisites for someone to be interested in using Webpack in the first place. All in all, I hope my humble feedback resulted in a better book.

Sudo Mastery

by Michael W Lucas

Despite having bought my first Mac in 2010, it wasn’t until 2014 that I first tried to use one for development purposes. Alien to anything derived from UNIX, I followed closely any instructions I could find for installing the necessary tooling only to hit some error for which the solution proposed by forum users typically involved using sudo to run commands that in some cases the documentation or software packages themselves warned against because of the security implications of doing so.

Not taking the time to step back and learn the basics of the UNIX world was certainly sudomasochistic, as I probably spent more time than necessary troubleshooting issues caused by improper permissions and/or opening security holes that wouldn’t have been there if I had a clue what I was doing.

Fast-forward to April 2020: I’m pretty much comfortable using a Bash terminal to accomplish day-to-day tasks, and although I understand the differences between running as root and as an unprivileged user, I still don’t understand the specifics of how sudo does its magic.

Enter Sudo Mastery, a book that taught me everything I ever wanted to know about sudo and then some. It answers questions such as:

  • What accounts for the differences in sudo.conf between Ubuntu and CentOS?
  • On a system with shared administrative responsibilities, what would be gained by implementing sudo policies, instead of sharing the root password among different sysadmins?
  • How do you delegate tasks to users who need access to privileged commands while limiting which commands they can execute?

Answers to those and many more questions I had are found throughout the text, all in a writing style with just the right amount of snark, which I found really entertaining.

Ed Mastery

by Michael W Lucas

What could possibly be written about ed that hasn’t already? Why would anyone buy and then read a book about ed in 2020 when there are modern alternatives such as the extremely popular Visual Studio Code or the older but dependable Vim?

Well, for starters:

ed is the standard Unix text editor.


Ed Mastery is a short book published in 2018 whose subject is a piece of software first written in 1969 that can still be found on boxes running operating systems derived from UNIX to this day. The writing style follows the previous book, so there’s some geek entertainment right there, but is it useful in any way?

Surprisingly, at least for me: after forcing myself to use ed’s constrained feature set to edit several little programs as exercises from a programming language book, when I got back to Vim I immediately saw some of those techniques being put to use, as several commands essential to ed’s operation work just as well in Vim, sed, etc.

Firewalls Don’t Stop Dragons 3rd Edition

A Step-by-Step Guide to Computer Security and Privacy for Non-Techies

by Carey Parker

The following observations are based on the 3rd edition published in 2018. There’s a more recent version published in 2020 which I haven’t read.

The idea of a book on computer security for non-techies is a nice one, but I’m genuinely curious about the size of the potential audience of non-techies willing to buy and then read a 440-page book on the subject. I’m willing to bet that most would rather do a Google search, end up watching some random tutorial on YouTube, and stop there.

One could (correctly) argue that you don’t have to read all 440 pages, as the book devotes lengthy sections, full of screenshots, to instructions for three versions each of two operating systems: Windows 7, 8.1 and 10; Mac OS X 10.11, macOS 10.12 and 10.13.

But given the target audience…

It’s the book that’s going to save you countless hours explaining to Aunt May why she needs to have more than one password …

… or helping your mom remove ten different Internet Explorer toolbars so that she can actually see more web page than buttons.

From the preface, page xxi.

… if they were to read a book like this – and again, that’s a big IF right there – how much do you wanna bet that Uncle George will mix up the instructions for one of the Mac versions with a Windows box, or the other way around?

Don’t get me wrong. There are plenty of fundamentals and good guidance in the book for anyone who doesn’t pay much attention to security beyond making sure the antivirus and firewall are turned on. It’s just that I don’t see the target audience, as described in the book, taking the time to actually read it.

Then there’s Chapter 4. It’s devoted to passwords: what makes for good and bad ones, the importance of enabling 2FA where possible, and the importance – in the author’s view – of third-party password managers.

Mostly important advice, but this is the point where things start smelling funny to me. See… all current browsers offer some level of password management. Some will even synchronize between devices and generate strong random passwords for you.

The author introduces LastPass and lays out instructions on how to install it on a computer and a smartphone – which is nice given the target audience – then somehow manages to fit a mention of LastPass into each and every chapter from there to the end of the book (except chapter 10). If you don’t have MFA enabled, don’t use a browser with a built-in password manager, and are reusing the same password over and over, then you should totally get a password manager, but I couldn’t help getting the feeling that the whole thing was a lengthy advert for LastPass.

Sharing monitors between devices

I’ve been using a dual-monitor setup on my desktop computer at home, running Ubuntu, for a while now. Due to the pandemic, I’ve been working from home sharing one of those monitors with a Windows 10 laptop issued to me by the company I work for.

Simply setting “Input Source” to “Auto” on the shared monitor wasn’t enough: the monitor’s auto selection seemingly had a strong opinion – or found it funny – and kept selecting precisely the input source I wanted to switch away from.

After a bit of research I found a command that would let me change these configurations from a terminal: xrandr.

To get a list of monitors connected to the host:

$ xrandr --listmonitors 
Monitors: 2
 0: +*DP-1 3840/597x2160/336+0+0  DP-1
 1: +HDMI-1 1920/509x1080/286+3840+0  HDMI-1


$ xrandr --output DP-1 --off

… to turn off the monitor connected to DP-1.

$ xrandr --output DP-1 --auto --left-of HDMI-1

… turns it back on positioning it to the left of the monitor connected to the HDMI-1 in the virtual display.

When turning monitors on and off, GNOME automatically rearranges the desktop environment according to the new layout of the virtual display, moving UI elements such as desktop widgets, icons, etc.
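The two invocations above can be wrapped into a small toggle script. This is just a minimal sketch assuming the output names DP-1 (the shared monitor) and HDMI-1 from the listing above; the pick_action and toggle helpers are names of my own making, so adjust everything to match your setup.

```shell
#!/bin/sh
# pick_action reads an `xrandr --listmonitors` listing on stdin and prints
# "off" when the given output is currently active, "on" when it is not.
pick_action() {
    if grep -q -- "$1"; then echo off; else echo on; fi
}

# toggle turns the shared output off if it is active, or re-enables it to
# the left of the other output if it is not.
toggle() {
    shared=${1:-DP-1} other=${2:-HDMI-1}
    case "$(xrandr --listmonitors | pick_action "$shared")" in
        off) xrandr --output "$shared" --off ;;
        on)  xrandr --output "$shared" --auto --left-of "$other" ;;
    esac
}
```

Bind toggle to a keyboard shortcut or a shell alias and the monitor can change sides of the desk with a single keystroke.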

More information on the xrandr options most useful for multi-monitor setups can be found at the ArchWiki.

The term ‘{0}’ is not recognized as the name of a cmdlet, function, script file, or operable program.

Applicable to: PowerShell Core global tool 6.2.2 and 6.2.3.

TL;DR: For the purposes of defining scope, I’m limiting the following analysis to SDKs for .NET Core versions currently in LTS and the versions in between. As of this writing, that would be 2.1, 2.2, 3.0 and 3.1.

This should work well enough for most cases:

  • If you can elevate to root or sudo, install whatever version of PowerShell you deem appropriate with an installer or package manager, depending on your operating system;
  • Else, if you have .NET Core SDK 3.1, update to PowerShell Core global tool 7.0.0;
  • Else, update to PowerShell Core global tool 6.2.4.
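The decision above can be sketched as a couple of shell functions. This is only an illustration, not an official procedure; sdk_has_31 and update_powershell are names of my own invention.

```shell
#!/bin/sh
# sdk_has_31 reads `dotnet --list-sdks` output on stdin and succeeds when a
# 3.1.x SDK is present.
sdk_has_31() { grep -q '^3\.1\.'; }

# update_powershell picks the newest workable PowerShell global tool:
# 7.0.0 when SDK 3.1 is available, otherwise the fixed 6.2.4 release.
update_powershell() {
    if dotnet --list-sdks | sdk_has_31; then
        dotnet tool update -g powershell
    else
        # Pre-3.0 SDKs can't pass --version to `dotnet tool update`,
        # so uninstall and reinstall at the pinned version instead.
        dotnet tool uninstall -g powershell
        dotnet tool install -g --version 6.2.4 powershell
    fi
}
```

Source the file and call update_powershell when you actually want to perform the update.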

For details, read on…

As written in my last post, installing PowerShell global tool 7.0 solved the issue of initializing the PowerShell extension for Visual Studio Code (ms-vscode.powershell).

But there was one thing intriguing me: while the extension was trying to initialize, some text in red was flashing in Code’s integrated terminal.

After capturing it on video and pausing at precisely the split second the message appears, it is possible to read:

-NoProfile: The term '-NoProfile' is not recognized as the name of a cmdlet, function, script file, or operable program.
Check the spelling of the name, or if a path was included, verify that the path is correct and try again.
At line:1 char:1
+ -NoProfile -NonInteractive -EncodedCommand SQBtAHAAbwByAHQALQBNAG8AZA...
+ ~~~~~~~~~~
+ CategoryInfo          : ObjectNotFound: (-NoProfile:String) [], CommandNotFoundException
+ FullyQualifiedErrorId : CommandNotFoundException

Coincidentally, the parameters seen in the error message match those I had seen previously in the extension’s output log…

PowerShell Extension Output log

… from which I removed timestamps, log level, and truncated the log message for better display below:

Visual Studio Code v1.43.0 64-bit
 PowerShell Extension v2020.3.0
 Operating System: Linux 64-bit
 Language server starting --
     PowerShell executable: /home/alfred/.dotnet/tools/pwsh
     PowerShell args: -NoProfile -NonInteractive -Encoded...
     PowerShell Editor Services args: Import-Module '/home/...
 pwsh started.
 Waiting for session file
 Error occurred retrieving session file
 Language server startup failed.
 The language service could not be started: 
 Timed out waiting for session file to appear.

It is unfortunate that the extension’s log doesn’t show the same information that, albeit briefly, is shown in the integrated terminal – showing instead a misleading error about some “session file”.

If we capture those parameters from the log and use them for running PowerShell directly, we’ll receive the same exact error shown in the integrated terminal.

With this, we can rule out ms-vscode.powershell as the source of the error and instead focus on pwsh itself.

It’s nice to see in practice how the failure to initialize the extension links back to argument parsing in the PowerShell Core global tool, but this doesn’t add much to the situation, given I had already learned this is a known bug:

Older versions of the PowerShell dotnet global tool have a bug in them where arguments are not processed properly, breaking our startup invocation.

Please update your PowerShell version to 7 to enable this, or install a non-global tool PowerShell installation and configure the PowerShell extension to use it…

Although the statement pointed me in the right direction, it is imprecise and may lead some people, as it did me, to incorrect conclusions – such as the only two alternatives being to install PowerShell global tool 7.0.0 or to use some means of installation other than as a .NET Core tool.

Given I’d had very little exposure to PowerShell running on operating systems other than Windows and hadn’t played at all with .NET Core tools, I took the opportunity to read through the documentation and do some testing. Here are some of my findings:

If you have root permissions or can sudo, you’re probably better off installing PowerShell by any means other than as a .NET global tool. The PowerShell Core global tool can be seen as a shim over the “real thing”™ – an extra layer that, in versions 6.2.2 and 6.2.3, had its option parsing broken.

If you can’t sudo or elevate to root, you’ll have to resort to .NET Core tools. First, check out which SDK versions are installed and confirm that PowerShell is installed as a .NET Core tool:

$ dotnet --list-sdks
 2.1.804 [/usr/share/dotnet/sdk]
 3.1.200 [/usr/share/dotnet/sdk]

$ dotnet tool list -g
 Package Id      Version      Commands
 powershell      6.2.3        pwsh   

Whichever SDK versions are installed, you can always uninstall a given version, then install another version. For instance:

# dotnet tool uninstall -g powershell
 Tool 'powershell' (version '6.2.3') was successfully uninstalled.

# dotnet tool install -g --version 6.2.4 powershell
 You can invoke the tool using the following command: pwsh
 Tool 'powershell' (version '6.2.4') was successfully installed.

But upgrading can be done in a single step using dotnet tool update:

# dotnet tool update -g powershell
 Tool 'powershell' was successfully updated from version '6.2.3' to version '7.0.0'.

The command above will work as long as the latest version of the .NET Core tool in the package repository is compatible with the latest .NET Core SDK version installed locally.

Otherwise, it is necessary to specify the tool version, which can only be done starting with .NET Core SDK 3.0:

# dotnet tool update -g --version 6.2.4 powershell
 Tool 'powershell' was successfully updated from version '6.2.3' to version '6.2.4'.

The language service could not be started: Source: PowerShell (Extension)

Earlier today when opening a PowerShell script in Visual Studio Code I got the following error message:

The language service could not be started:

Source: PowerShell (Extension)

After some investigation that included changing the Editor Services log level to Diagnostic and decoding the base64-encoded command being passed to pwsh, it was pointed out to me on GitHub that PowerShell versions prior to 7, when installed as a .NET Core global tool, contain a bug that prevents them from processing parameters passed to the pwsh command.
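As an aside, the -EncodedCommand payload is nothing mysterious: it’s just the command text encoded as UTF-16LE and then base64. A round-trip sketch using GNU coreutils, with an arbitrary example command:

```shell
#!/bin/sh
# Encode a command the way PowerShell's -EncodedCommand expects it...
cmd='Get-ChildItem'
enc=$(printf %s "$cmd" | iconv -f UTF-8 -t UTF-16LE | base64 -w0)

# ...and decode a payload captured from a log back into readable text.
dec=$(printf %s "$enc" | base64 -d | iconv -f UTF-16LE -t UTF-8)
echo "$dec"    # prints: Get-ChildItem
```

The decode half is what let me read the truncated payload from the extension’s log.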

So just to confirm the version I had installed, I ran…

$ dotnet tool list -g

 Package Id      Version      Commands
 powershell      6.2.3        pwsh  

… and then, after closing any open instances of Visual Studio Code, I updated PowerShell using .NET Core’s CLI:

$ dotnet tool update powershell -g

Tool 'powershell' was successfully updated from version '6.2.3' to version '7.0.0'.

In case you didn’t install PowerShell through .NET Core’s CLI, you may want to take a look at “Installing various versions of PowerShell” over on Microsoft Docs, where you’ll find instructions for installing PowerShell on all the supported target platforms.

@id:ms-vscode.csharp – No extensions found.

There seems to have been a release-coordination snafu between Visual Studio Code and the latest release of the C# for Visual Studio Code extension, and as a result you may be getting…

The ‘C#’ extension is recommended for this file type.

… over and over again.

If you click on Install, Code will tell you it can’t find the extension.


No extensions found.

The issue is caused by the extension’s publisher changing from ms-vscode to ms-dotnettools and how this cascades to other parts of the extension, such as its Id.

From what I understood from the issue over on GitHub, the problem should go away with the next version of Visual Studio Code – the current version being 1.42.1, which you can check with:

$ code --version
Meanwhile, since there are reports on GitHub about the change impacting dependent extensions, the workaround will depend on whether you use one of those extensions and how much the installation prompt bugs you.

A good start is finding out which extension, if any, you have installed.

$ code --list-extensions | grep -E 'ms-\w+.csharp'

This should return ms-vscode.csharp if you have the older extension or ms-dotnettools.csharp if you have the newer one. It may return other extensions as well if they happen to match the given regular expression.

If you have the newer version and are OK with being prompted to install it over and over again, you’re all set. Otherwise, you can still get the older version, but since it has been removed from the Marketplace, you’ll have to resort to getting it from GitHub.

$ code --uninstall-extension ms-dotnettools.csharp
 Uninstalling ms-dotnettools.csharp…
 Extension 'ms-dotnettools.csharp' was successfully uninstalled!

$ wget https://github.com/OmniSharp/omnisharp-vscode/releases/download/v1.21.12/csharp-1.21.12.vsix

$ code --install-extension csharp-1.21.12.vsix
Installing extensions…
 Extension 'csharp-1.21.12.vsix' was successfully installed.

2019 Book Review

This post has been sitting for a long time in the drafts folder. As 2020 comes to an end, I thought that I’d finish and publish it now or delete it altogether.

As some would say, better late than later.

2019 was a year where I’ve invested a lot of time learning through video courses on Pluralsight and O’Reilly Learning. In that regard, O’Reilly had the extra benefit of online live video courses in addition to pre-recorded ones. Being live, it was possible to interact with the instructor and other course participants, which in my opinion is way better than pre-recorded material.

O’Reilly Learning also encompasses a vast online e-book library and although it’s very useful for quickly digging into a specific topic, there are a few issues that make me still prefer e-books in PDF format or physical books:

  • I prefer facsimile layouts to auto-flow layouts. The differences are more noticeable when there are images such as diagrams or illustrations;
  • O’Reilly offers an app for reading on the phone, but I don’t like the phone’s form factor;
  • More than once, I’ve found material that was present in the PDF or print version but was lacking from O’Reilly’s platform.

Although I’d started reading several books with the initial intent to read them from cover to cover, I almost always got distracted along the way, abandoning them in favor of the next shiny thing to cross my field of vision.

All that being said, that’s why the list is so meager. I don’t think I’d do a book justice reviewing it without reading most of it, or at least the parts I committed myself to.

Oh, and by the way, for the sake of transparency both book cover images below are affiliate links to Amazon. If you click on them and buy anything there, you won’t be charged anything more than their normal price, but I’ll get a little commission.

PGP: Pretty Good Privacy

by Simson Garfinkel

I’ve already written a little about GnuPG a couple of times, and following the rabbit hole of learning more about its concepts and history, I ended up buying a used copy of this book.

The book was published back in 1995 and is divided into two parts.

The first part goes over the history and motivations that led to the creation of PGP – the precursor to GnuPG and other OpenPGP implementations.

The second part is a reference on how to use the software package, and it is, unsurprisingly, totally outdated. So when I decided to buy the book, I knew that whatever I paid would be for the history part alone. And it was totally worth it!

The Cathedral & the Bazaar

by Eric S. Raymond

Back in the ’90s I read several books on Bill Gates, the history of Microsoft, and the economic principles behind its modus operandi.

This book, on the other hand, helped me in getting started on the culture and economic forces behind open source software development and why this way of producing software has for the most part won the war against closed source alternatives.

As with most of the print books I bought in 2019, this was a used copy as well, and it was totally worth the price I paid for it.

ERROR for site owner: Invalid key type

I just found out that this blog was being used to send spam email to accounts over at qq.com, a Chinese instant messaging software service and web portal.

Apparently, this is due to a bug on Jetpack’s Sharing feature that despite being known for at least three years, still hasn’t been fixed.

The link above points to a support thread over at WordPress.org describing the issue in more detail and contains a handful of workarounds for mitigating it.

Of the workarounds, I find two are worth mentioning here.

  1. Disable the email Sharing Button. Although a little extreme, it may be justified if you don’t have the inclination to mess around with configuration files, or if usage of the feature by legitimate users is low enough;
  2. Add reCAPTCHA to the Email Sharing Button. This one involves adding a couple of lines to wp-config.php.

The thing is, the instructions for adding reCAPTCHA were written before the release of reCAPTCHA v3, and Jetpack (version 7.8 as of this writing) isn’t currently compatible with it, so when setting up reCAPTCHA you should choose reCAPTCHA v2 – or else you will receive the message below:

ERROR for site owner: Invalid key type
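For reference, the wp-config.php change amounts to a couple of define lines. The sketch below is hypothetical – the constant names and placeholder values are my assumptions, so copy the exact lines from the support thread and substitute the v2 site and secret keys from your reCAPTCHA admin console:

```php
<?php
// Hypothetical sketch — double-check the constant names against the
// support thread. The values are placeholders for your *v2* keys.
define( 'RECAPTCHA_PUBLIC_KEY',  'your-recaptcha-v2-site-key' );
define( 'RECAPTCHA_PRIVATE_KEY', 'your-recaptcha-v2-secret-key' );
```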

I’ll be monitoring the logs for a couple of days to see if enabling reCAPTCHA will suffice. If not, I’ll just disable the email sharing button altogether.

It’s kind of lame that the default settings of an email sharing button would open up a website to being used to send spam. But even lamer is seeing people tell others not to complain about it on the basis that the plugin is free.

Please remember that if you using this plugin for free, all requests future need to be in reasonable manners, as nobody paying for it. Consider that, they are doing very good job for users who are using this plugin for free and there is solution for it already.

Freemium ain’t free. It’s a marketing gimmick, and it has been working very well for Automattic, makers of Jetpack and WordPress – the latter of which, by some accounts, powers over a third of the top 10 million websites on the internet.

CentOS 8.0 installation boots into a blank screen

CentOS 8.0 was released earlier today with a bug inherited from RHEL 8.0 upstream.

The bug manifests itself when Server with GUI is selected as the Base Environment when installing on a Hyper-V virtual machine. After copying all the needed files to the hard drive, Anaconda will boot into a blank screen.

The problem is made a little worse by the fact that Server with GUI is the default selection.

In yesterday’s post, I’ve shown how the solution proposed in RHEL 8.0’s release notes didn’t quite work when the system wasn’t (yet) registered with Red Hat Subscription Management.

Since CentOS has no such thing, the solution described in the release notes can be applied as is. That is, as long as you have internet connectivity.

But what if the system at hand is isolated from the internet? Then again we have to resort to installing from the installation media.

This time though, we can take advantage of the repo configuration files installed with the operating system that point to the installation media.

yum repolist --all

From yesterday, we already know that base-x is contained in AppStream, so from the list above we probably want to take a better look at c8-media-AppStream.

[root@centos-8-1905 ~]# yum repoinfo --all c8-media-AppStream
 Last metadata expiration check: 0:24:40 ago on Tue 24 Sep 2019 09:24:39 PM -03.
 Repo-id      : c8-media-AppStream
 Repo-name    : CentOS-AppStream-8 - Media
 Repo-status  : disabled
 Repo-baseurl : file:///media/CentOS/AppStream, file:///media/cdrom/AppStream, file:///media/cdrecorder/AppStream
 Repo-expire  : 172,800 second(s) (last: unknown)
 Repo-filename: /etc/yum.repos.d/CentOS-Media.repo
 [root@centos-8-1905 ~]#

As can be seen, the repo is configured to search for its files at one of the URLs defined by the Repo-baseurl property. Let’s mount the installation media at /media/CentOS.

[root@centos-8-1905 ~]# mkdir /media/CentOS
[root@centos-8-1905 ~]# mount /dev/sr0 /media/CentOS/
 mount: /media/CentOS: WARNING: device write-protected, mounted read-only.

Now we can invoke yum groupinstall specifying c8-media-AppStream as the source repository.

[root@centos-8-1905 ~]# yum --repo=c8-media-AppStream -y groupinstall base-x

As with yesterday’s fix, rebooting now should load the graphical user interface for the final steps of the installation process.

Error: There are no enabled repos.

I was installing Red Hat Enterprise Linux 8.0 on a Hyper-V virtual machine. After copying all the packages from the installation media and installing them onto the virtual hard drive, Anaconda booted RHEL into a blank screen.

After some research I found that the problem was a known issue listed in RHEL 8.0’s release notes as well as on Red Hat’s knowledge base:

The xorg-x11-drv-fbdev, xorg-x11-drv-vesa, and xorg-x11-drv-vmware video drivers are not installed by default

In addition, virtual machines relying on EFI for graphics support, such as Hyper-V, are also affected. If you selected the Server with GUI base environment on Hyper-V, you might be unable to log in due to a black screen displayed on reboot. To work around this problem on Hyper-v, enable multi- or single-user mode using the following steps:
  1. Reboot the virtual machine.
  2. During the booting process, select the required kernel using the up and down arrow keys on your keyboard.
  3. Press the e key on your keyboard to edit the kernel command line.
  4. Add systemd.unit=multi-user.target to the kernel command line in GRUB.
  5. Press Ctrl-X to start the virtual machine.
  6. After logging in, run the yum -y groupinstall base-x command.
  7. Reboot the virtual machine to access the graphical mode.

So I started following the instructions, but at step #6 something unexpected happened:

[root@localhost ~]# yum -y groupinstall base-x
 Updating Subscription Management repositories.
 Unable to read consumer identity
 This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.
 Error: There are no enabled repos.

Well… thing is, this is a short-lived, disposable virtual machine, and I have no plans to register it with Red Hat Subscription Management.

Luckily, the installation media contains the two repositories introduced with RHEL 8.0: BaseOS and AppStream.

Base-x is contained in the AppStream repo. We just have to:

  1. Make sure the repositories from the installation media are accessible from the file system;
  2. Import the public keys used to sign the packages into RPM;
  3. Direct yum to install the packages from the AppStream repository.

[root@localhost ~]# mount /dev/sr0 /mnt
 mount: /mnt: WARNING: device write-protected, mounted read-only.
 [root@localhost ~]# rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
 [root@localhost ~]# yum --repofrompath AppStream,file:///mnt/AppStream -y groupinstall base-x  
Results of the package installation: base-x installed successfully

Please note the command importing Red Hat’s public keys into RPM before installing the package; without it, the package wouldn’t have been installed and an error message would have been displayed, as seen below.

[root@localhost ~]# mount /dev/sr0 /mnt
 mount: /mnt: WARNING: device write-protected, mounted read-only.
 [root@localhost ~]# yum --repofrompath AppStream,file:///mnt/AppStream -y groupinstall base-x
 Updating Subscription Management repositories.
 Unable to read consumer identity
 This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.
 Added AppStream repo from file:///mnt/AppStream
 You have enabled checking of packages via GPG keys. This is a good thing.
 However, you do not have any GPG public keys installed. You need to download
 the keys for packages you wish to install and install them.
 You can do that by running the command:
     rpm --import public.gpg.key
 Alternatively you can specify the url to the key you would like to use
 for a repository in the 'gpgkey' option in a repository section and DNF
 will install it for you.

Once the package is installed, rebooting the system should load the graphical user interface for the final steps of the installation process.

RHEL 8.0 Initial Setup

Automating my way out of ancillary tasks with PowerShell

After creating a handful of virtual machines for ramping up on the different Linux distros, certain configuration patterns started to emerge. Repeating those same settings over and over again in Hyper-V Manager, or even in PowerShell’s command-line interface, is kind of tedious. The good thing is that scripts are a great way to automate those repetitive configuration tasks away.

In my case, the settings I kept copying over and over again were:

  • Two cores;
  • Network adapter bound to a virtual switch with external connectivity;
  • Automatic Checkpoints turned off;

And then there were other settings that, although they varied between distros and installation options within those distros, had to be configured each and every time:

  • Virtual machine name;
  • RAM size;
  • Hard drive size;
  • Path to installation ISO;
  • Secure Boot;

So after learning and getting comfortable with the different PowerShell cmdlets, I started prototyping a few scripts. At first, the scripts were kind of lame and buggy, but as I learned more about the features of Hyper-V and how those features are exposed through PowerShell, the scripts were improved little by little much in the spirit of Kaizen – continuous improvement.

The important thing here is to treat this as a learning tool and, as such, to start small and improve as you go and as new requirements make themselves known.

Important as well is knowing when to stop. The purpose of this exercise isn’t creating a production-ready script, but creating a script that lets me get back as soon as possible to what I set out to do in the first place: create virtual machines so I could learn something else.

If you are curious enough, you can find the source code for one of the early versions of the scripts as a gist over on GitHub. If you are even more curious, you can see the entire history of how the code evolved also on GitHub.

Since that early prototype, the code has evolved quite a bit, from a collection of discrete scripts to the current version, which is implemented as a PowerShell module and incorporates all the learnings of the last few weeks – including a best practice for creating virtual hard disks for use with Linux file systems. Good luck trying to remember that one every time you create a new VM for Linux!

function Get-OrphanedVHDs {
    # VHDs referenced by a VM, or by one of its checkpoints, are "rooted".
    $rootedVHDs = (Get-VM).HardDrives.Path + (Get-VM | Get-VMSnapshot).HardDrives.Path
    return Get-ChildItem (Get-VMHost).VirtualHardDiskPath |
        Where-Object { $_.FullName -notin $rootedVHDs }
}

function New-VirtualMachine {
    param($Name, $MemoryBytes, $VHDSizeBytes, $IsoPath, $SwitchName, $SecureBootTemplate)
    # If a switch name hasn't been provided, it'll try to find a default.
    if (!$SwitchName) {
        $Switches = Get-VMSwitch | Where-Object SwitchType -eq 'External'
        # If there's only one switch with external connectivity, that's it.
        # Else, a switch name should have been provided.
        if ($Switches.Count -eq 1) {
            $SwitchName = $Switches.Name
        }
    }
    $VHDPath = Join-Path -Path (Get-VMHost).VirtualHardDiskPath -ChildPath ($Name + ".vhdx")
    if ($SecureBootTemplate -eq "MicrosoftWindows") {
        $null = New-VHD -Path $VHDPath -SizeBytes $VHDSizeBytes
    }
    else {
        # Best practice for Linux file systems: 1 MB block size.
        $null = New-VHD -Path $VHDPath -SizeBytes $VHDSizeBytes -BlockSizeBytes 1MB
    }
    $VM = New-VM -Name $Name -MemoryStartupBytes $MemoryBytes -Generation 2 -BootDevice VHD -SwitchName $SwitchName -VHDPath $VHDPath
    Set-VM $VM -ProcessorCount 2 -MemoryMaximumBytes $MemoryBytes -AutomaticCheckpointsEnabled $false
    Add-VMDvdDrive $VM -Path $IsoPath
    Set-VMFirmware $VM -BootOrder ((Get-VMFirmware $VM).BootOrder | ? BootType -eq 'Drive')
    if ($SecureBootTemplate) {
        Set-VMFirmware $VM -SecureBootTemplate $SecureBootTemplate
    }
    else {
        Set-VMFirmware $VM -EnableSecureBoot Off
    }
    return $VM
}

function Remove-VirtualMachine {
    param($VMName)
    $vm = Get-VM $VMName
    # If the VM has checkpoints, the current VHDs belong to the root snapshot.
    $snapshot = (Get-VMSnapshot $vm | ? ParentSnapshotName -eq $null)
    if ($snapshot) {
        $vhds = $snapshot.HardDrives.Path
    }
    else {
        $vhds = $vm.HardDrives.Path
    }
    Remove-VM $vm -Force
    Remove-Item $vhds
}

Certainly there are a lot of opportunities for improvement (documentation, error handling, and resilience in general, just to name a few), but those are left as an exercise for my future self. Meanwhile, let me get back to playing with those VMs.