Before we get much further in this project, we should set up command line builds while things are still simple. It’s easier to enhance the builds step by step as required than to configure a complex build system in one go.
Introduction to psake
As a quick refresher, psake is a build orchestration tool. You break up your build into a set of tasks, each task carrying out one required step. Each step declares its dependencies - other tasks that must execute first. When you run a build, the psake engine reads in all the tasks, inspects the dependencies and runs all the required tasks in the right order.
For example, the Compile task listed below specifies a dependency on the task Requires.DotNetExe. This task locates the dotnet executable and aborts the build with an actionable error message if it can't be found. The task first looks on the PATH to find the dotnet command; if it's not found that way, it falls back to a hard-coded location that reflects the most common (i.e. default) installation folder. If that still doesn't work, an error is thrown to abort the build.
To learn more about psake, see the introduction to psake series that I wrote back in the second half of 2017.
Kicking off the build
To kick off the build, we need a simple triggering script, located in the root folder of our project.
As described in Bootstrapping a psake build, the script .\scripts\bootstrap.ps1 ensures that psake is available for use and will abort the build if it can't be found. Once psake is available, the invoke-psake command is used to run the actual build. All of the available tasks are defined in .\scripts\psake-build.ps1, including our initial target.
This breaks our build down into a series of steps as discussed below.
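Put together, the triggering script can be only a few lines long. A sketch, assuming the script is called build.ps1 and the initial target is named Build (both names are assumptions):

```powershell
# build.ps1 (assumed name), in the root folder of the project

# Ensure psake itself is available; bootstrap.ps1 aborts the build if it isn't
& .\scripts\bootstrap.ps1

# Run the actual build, defined by the tasks in psake-build.ps1
# "Build" is an assumed name for the initial target
invoke-psake -buildFile .\scripts\psake-build.ps1 -taskList Build
```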
To ensure that we don’t have any debris left around from prior builds, we first clean our build output folder.
This works by deleting the entire folder and then creating a fresh, empty one.
I’m not interested in our build scripts doing incremental builds. Instead, I want to ensure that everything is compiled from scratch each time. So, I want to ensure that all of the intermediate results generated by prior builds are removed.
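A sketch of such a Clean task, assuming the build output lands in a .\build folder (the folder location is an assumption):

```powershell
# Sketch of a Clean task; the output folder location is an assumption
task Clean {
    $outDir = ".\build"

    # Remove any debris from prior builds by deleting the whole folder
    remove-item $outDir -recurse -force -ErrorAction SilentlyContinue

    # Recreate it, empty and ready for use
    new-item -path $outDir -itemtype Directory -force | out-null
}
```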
To build the source code itself, we first need to find the dotnet executable. To keep our tasks small and easy to understand, we separate finding dotnet.exe into a task of its own, as shown above.
The actual compilation task is fairly straightforward:
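A sketch of what it might look like, assuming a single solution file in the project root and the $dotnetExe variable populated by Requires.DotNetExe:

```powershell
# Sketch of the compilation task; the solution file location is an assumption
task Compile -depends Requires.DotNetExe {
    $solution = resolve-path .\*.sln

    # psake's exec {} helper aborts the build if the command returns a non-zero exit code
    exec { & $dotnetExe build $solution }
}
```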
To run all the unit tests, we look recursively for all our test projects under src/tests and run each of them in turn.
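A sketch of such a test task, assuming test projects are .csproj files under src/tests (the folder layout and task names are assumptions):

```powershell
# Sketch of a unit-test task; the folder layout is an assumption
task Unit.Tests -depends Requires.DotNetExe, Compile {
    $testProjects = get-childitem -path .\src\tests -filter *.csproj -recurse

    foreach ($project in $testProjects) {
        # Run each test project in turn; exec {} aborts the build on failure
        exec {
            & $dotnetExe test $project.FullName `
                /p:CollectCoverage=true `
                /p:CoverletOutputFormat=opencover `
                /p:Exclude="[xunit*]*%2c[*.Tests]*"
        }
    }
}
```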
Notice the three additional parameters included at the end of the dotnet command. These configure coverlet to generate code coverage statistics. I've used (and blogged about) OpenCover in the past, but it doesn't support .NET Core, so I've switched to coverlet for this project.
The first parameter (/p:CollectCoverage=true) turns on collection of coverage data. Coverlet integrates directly into the MSBuild scripts, so it's really easy to use.
The second parameter (/p:CoverletOutputFormat=opencover) specifies that the output file should be in OpenCover format. We need this for the report generation tool discussed below.
The third parameter (/p:Exclude="[xunit*]*%2c[*.Tests]*") is currently needed as a workaround for a bug in the v2.60 release. I expect this bug will be fixed pretty quickly, given that coverlet is in active development; when it is, we should be able to delete this parameter.
To generate reports detailing code coverage, let's return to ReportGenerator. Since I last blogged about the use of ReportGenerator, things have changed in the world of NuGet: packages are no longer stored in a ./packages/ subfolder of the project.
For our build to find ReportGenerator.exe, we now need to scan NuGet's global package caches and search for the latest available package.
This is largely similar to the way the bootstrap.ps1 script searches for psake when we first trigger a build. We use dotnet to list all the package cache directories, and then search each one in turn for the executable we want. As a tiebreaker in the case of multiple matches, we choose the latest available version.
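A sketch of such a lookup task; sorting by full path is a rough stand-in for "latest version", since NuGet package folders are named by version number (the task and variable names are assumptions):

```powershell
# Sketch of a ReportGenerator-locating task; names are assumptions
task Requires.ReportGenerator -depends Requires.DotNetExe {
    # Ask NuGet for its package cache locations; output lines look like
    # "global-packages: C:\Users\me\.nuget\packages\"
    $cacheFolders = & $dotnetExe nuget locals all --list |
        foreach-object { ($_ -split ': ', 2)[1] } |
        where-object { $_ -and (test-path $_) }

    # Search each cache for the executable; keep the highest version found
    $script:reportGeneratorExe = $cacheFolders |
        foreach-object { get-childitem $_ -filter ReportGenerator.exe -recurse } |
        sort-object FullName |
        select-object -last 1 |
        foreach-object { $_.FullName }

    if (-not $script:reportGeneratorExe) {
        throw "Failed to find ReportGenerator.exe in any NuGet package cache"
    }
}
```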
With that complete, we do the actual report generation:
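A sketch of a report-generation task, assuming the coverage file and output folder locations shown (all file locations and task names are assumptions):

```powershell
# Sketch of a coverage-report task; file locations are assumptions
task Coverage.Report -depends Requires.ReportGenerator, Unit.Tests {
    $coverageFile = ".\build\coverage.opencover.xml"
    $reportFolder = ".\build\coverage-report"

    # Generate an HTML report from the OpenCover-format coverage file
    exec {
        & $reportGeneratorExe "-reports:$coverageFile" "-targetdir:$reportFolder"
    }

    # Open the generated report in the default browser
    invoke-item (join-path $reportFolder "index.htm")
}
```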
You’ll observe that we open the coverage report at the end; this makes sense for now, but will need to be changed when we start automating our builds. For now, however, it’s useful to see the coverage report every time we run the script: