Composable Scripts and the PowerShell Pipeline
Back in 2016 I wrote a small PowerShell library called posh-fluent-migrator. The goal was simple: automate the repetitive parts of managing a FluentMigrator project. Build the migration assembly, create the local database, run migrations, test rollbacks.
What I didn’t call attention to at the time was that the interesting part wasn’t the scripts themselves. It was how they were meant to be used together.
The Setup
The project had a handful of scripts, each doing exactly one thing:
Get-MigrationProject.ps1 - find and return the migration project config
Invoke-MsBuild.ps1 - build it
Create-LocalDatabase.ps1 - create the local SQL Server database
Migrate-LocalDatabase.ps1 - run migrations up (or down)
Restore-NuGetPackage.ps1 - restore dependencies
New-Migration.ps1 - scaffold a new migration file
Nothing special on their own. The interesting part was the usage:
# Full setup from scratch
.\scripts\Get-MigrationProject.ps1 |
.\scripts\Invoke-MsBuild.ps1 |
.\scripts\Create-LocalDatabase.ps1 -DropDatabaseIfExists |
.\scripts\Migrate-LocalDatabase.ps1
And for testing your latest migration and its rollback:
.\scripts\Get-MigrationProject.ps1 project-name |
.\scripts\Invoke-MsBuild.ps1 |
.\scripts\Migrate-LocalDatabase.ps1 |
.\scripts\Migrate-LocalDatabase.ps1 -Rollback |
.\scripts\Migrate-LocalDatabase.ps1
That last one runs migrations up, then down, then up again. If your rollback is broken, you know right then instead of in production.
What Made It Work
PowerShell pipelines pass objects, not text. Unlike Unix pipes where you’re parsing stdout strings, each script in this chain received a structured object from the previous one - the project name, paths, connection strings. No parsing, no fragile string splits.
Each script was designed with a single responsibility and a predictable input/output contract. That’s what made them composable. You could drop any script out of the chain, swap the order where it made sense, or add new scripts without touching existing ones.
How Object Passing Works
The mechanics are straightforward. A script outputs an object, and the next script declares a parameter that accepts pipeline input.
Outputting an object:
# Get-MigrationProject.ps1
param([string]$ProjectName)
[PSCustomObject]@{
    ProjectName      = $ProjectName
    AssemblyPath     = ".\bin\Release\Migrations.dll"
    ConnectionString = "Server=.;Database=$ProjectName;Trusted_Connection=True"
}
Receiving it in the next script:
# Invoke-MsBuild.ps1
param(
    [Parameter(ValueFromPipeline)]
    [PSCustomObject]$Config
)
process {
    # build using $Config.AssemblyPath, etc.
    msbuild $Config.AssemblyPath /p:Configuration=Release
    $Config   # pass the same object along
}
The process block runs once per object in the pipeline. Passing $Config through at the end lets the next script in the chain pick it up unchanged.
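To make the per-object behavior concrete, here’s a minimal standalone sketch. The function names are hypothetical, invented for illustration, not part of the library:

```powershell
# Two throwaway functions demonstrating that the process
# block runs once for each object flowing through the pipeline.
function Get-Number {
    1, 2, 3   # each value becomes a separate pipeline object
}

function Convert-Number {
    param(
        [Parameter(ValueFromPipeline)]
        [int]$Value
    )
    process {
        $Value * 2   # executes once for each of the three objects
    }
}

Get-Number | Convert-Number   # emits 2, 4, 6
```

The same mechanics apply whether the pipeline elements are functions or `.ps1` scripts; a script with a `param` block at the top can declare `begin`/`process`/`end` blocks just like a function.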
If you want to add fields as you go:
process {
    # do work, then emit an enriched object
    $Config | Add-Member -NotePropertyName BuildOutput -NotePropertyValue $output -PassThru
}
That’s the whole pattern. Each script reads what it needs, does its work, and emits either the same object or an enriched version for downstream scripts.
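Putting the pieces together, a self-contained sketch of the emit, enrich, consume cycle might look like this (again with hypothetical function and property names, not the library’s actual scripts):

```powershell
# Emit a config object, enrich it mid-pipeline, consume it downstream.
function Get-Config {
    [PSCustomObject]@{ ProjectName = 'Demo' }
}

function Add-BuildInfo {
    param(
        [Parameter(ValueFromPipeline)]
        [PSCustomObject]$Config
    )
    process {
        # same object, one extra property, forwarded via -PassThru
        $Config |
            Add-Member -NotePropertyName BuildOutput -NotePropertyValue 'bin\Release' -PassThru
    }
}

Get-Config | Add-BuildInfo |
    ForEach-Object { "$($_.ProjectName) built to $($_.BuildOutput)" }
# prints: Demo built to bin\Release
```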
Worth Revisiting?
If you’re still on FluentMigrator and doing manual migration runs, this pattern is worth considering. The scripts are short enough to read in a few minutes and easy to adapt.
More broadly, if you find yourself writing long one-off automation scripts, ask whether the logic could be split into smaller composable pieces. A pipeline of five short scripts is easier to debug, test, and reuse than one 200-line monster.
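Debuggability is a concrete payoff of the pipeline form: you can capture the in-flight object between any two stages without breaking the chain. A sketch using the built-in `Tee-Object` cmdlet against the chains shown earlier:

```powershell
# Capture the object into $config while the pipeline keeps running.
.\scripts\Get-MigrationProject.ps1 |
    Tee-Object -Variable config |
    .\scripts\Invoke-MsBuild.ps1

$config | Format-List   # inspect exactly what Invoke-MsBuild received
```

With a 200-line monolith, the equivalent inspection means sprinkling `Write-Host` calls through the middle of the script.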
The core idea of composable scripts and pipelines is timeless. The tools have evolved, but the pattern still holds up. It’s a reminder that sometimes the best solutions are the simplest ones, even if they don’t look flashy on a README.