
MoppleIT Tech Blog


Handle File Locks Gracefully in PowerShell: Tiny Retries, Exclusive Append, and Clean Disposes

Windows file systems enforce sharing rules that can briefly lock files when multiple processes touch them at once. In automation, scheduled tasks, and services, this often shows up as intermittent IOException errors (sharing or lock violations) when you append to a log or update a small state file. The solution isn't to hammer the disk; it's to be polite: retry a few times with short, bounded waits, open the file exclusively, append, and always dispose in a finally block. With a tiny loop and predictable backoff, you'll get fewer write errors, cleaner logs, and safer updates.

Why files lock and what "graceful" means

Locks commonly occur when:

  • Two or more processes write to the same file at the same time (e.g., multiple scheduled tasks, parallel jobs).
  • Security scanners or backup agents briefly open files with restrictive sharing.
  • Log rotation or a rollover task moves/renames the file while you append.
  • Cloud sync tools (OneDrive, Dropbox) touch the file during replication.

Graceful handling means you:

  • Retry only for transient conditions (sharing/lock violations).
  • Sleep for short, bounded durations (milliseconds), with a small backoff.
  • Stop after a strict budget or max attempts to avoid unbounded waits.
  • Open exclusively (FileShare.None), append reliably, then always dispose the stream in finally.
  • Log attempts so you can reason about behavior in production.

Implement a tiny, budgeted retry loop

Here's a minimal pattern you can drop into scripts to append a line to a log with exclusive access and a small, bounded retry:

$path = 'C:\logs\app.log'
$max = 5
$delay = 250  # milliseconds

for ($i = 1; $i -le $max; $i++) {
  try {
    $fs = [IO.File]::Open($path, [IO.FileMode]::OpenOrCreate, [IO.FileAccess]::Write, [IO.FileShare]::None)
    try {
      $text  = "Ping $(Get-Date -Format 'u')`r`n"
      $bytes = [Text.Encoding]::UTF8.GetBytes($text)
      $fs.Seek(0, [IO.SeekOrigin]::End) | Out-Null
      $fs.Write($bytes, 0, $bytes.Length)
      Write-Host ("OK on try {0}" -f $i)
      break
    } finally {
      $fs.Dispose()
    }
  } catch [IO.IOException] {
    Write-Warning ("Locked (try {0}). Retrying..." -f $i)
    if ($i -eq $max) { Write-Warning 'Giving up.'; break }
    Start-Sleep -Milliseconds $delay
  }
}

What this does well:

  • Exclusive open: FileShare.None ensures no other process can have the file open while you write.
  • Append semantics: Seek to end to safely append even if the file already exists.
  • Deterministic cleanup: Dispose in finally ensures the lock is released even if a write fails.
  • Bounded retries: A tiny fixed delay reduces contention without stalling your script.

For many scripts this is enough. For long-running services or shared infrastructure, wrap the pattern in a reusable function with a time budget, optional jitter, and better diagnostics.

Make it reusable: a budgeted append-with-retry helper

The function below appends a line using a small retry loop with millisecond backoff and an overall time budget. It retries only for sharing/lock violations, logs attempts, and guarantees disposal in every path.

function Add-LogLineWithRetry {
  [CmdletBinding()]
  param(
    [Parameter(Mandatory)] [string]$Path,
    [Parameter(Mandatory)] [string]$Text,
    [int]$MaxAttempts = 5,
    [int]$InitialDelayMs = 200,
    [int]$BudgetMs = 1500,
    [switch]$WriteThrough,   # safer flush; minor perf cost
    [switch]$ThrowOnFailure  # throw instead of returning $false
  )

  # Ensure directory exists before opening
  $dir = [IO.Path]::GetDirectoryName($Path)
  if ($dir) { [IO.Directory]::CreateDirectory($dir) | Out-Null }

  $bytes = [Text.Encoding]::UTF8.GetBytes($Text + "`r`n")
  $rand  = [Random]::new()
  $sw    = [Diagnostics.Stopwatch]::StartNew()

  for ($i = 1; $i -le $MaxAttempts; $i++) {
    try {
      # Choose options and open exclusively
      if ($WriteThrough) {
        $fs = New-Object System.IO.FileStream(
          $Path,
          [IO.FileMode]::OpenOrCreate,
          [IO.FileAccess]::Write,
          [IO.FileShare]::None,
          4096,
          [IO.FileOptions]::WriteThrough
        )
      } else {
        $fs = [IO.File]::Open($Path, [IO.FileMode]::OpenOrCreate, [IO.FileAccess]::Write, [IO.FileShare]::None)
      }

      try {
        # Append then flush
        $fs.Seek(0, [IO.SeekOrigin]::End) | Out-Null
        $fs.Write($bytes, 0, $bytes.Length)
        $fs.Flush()
        Write-Verbose ("Write OK on try {0} after ~{1} ms" -f $i, $sw.ElapsedMilliseconds)
        return $true
      } finally {
        $fs.Dispose()
      }
    } catch [IO.IOException] {
      # Win32 sharing/lock violations are 32/33 (low 16 bits of HResult)
      $code = ($_.Exception.HResult -band 0xFFFF)
      if ($code -ne 32 -and $code -ne 33) {
        Write-Warning ("Non-retryable I/O error (code {0}): {1}" -f $code, $_.Exception.Message)
        if ($ThrowOnFailure) { throw }
        return $false
      }

      Write-Warning ("Locked (try {0}). Retrying..." -f $i)

      # Budget and final-attempt checks
      if ($i -eq $MaxAttempts -or $sw.ElapsedMilliseconds -ge $BudgetMs) {
        Write-Warning ("Giving up after {0} tries (~{1} ms)." -f $i, $sw.ElapsedMilliseconds)
        if ($ThrowOnFailure) { throw }
        return $false
      }

      # Small exponential backoff + jitter, but keep waits tiny and bounded
      $base = [int]([double]$InitialDelayMs * [math]::Pow(1.2, $i - 1))
      $jitter = $rand.Next(0, [math]::Min(50, [math]::Max(5, [int]($InitialDelayMs / 2))))
      $sleep = [math]::Min($base + $jitter, [math]::Max(10, $BudgetMs - [int]$sw.ElapsedMilliseconds))
      Start-Sleep -Milliseconds $sleep
    }
  }

  # Should not get here, but stay safe
  Write-Warning ("Exhausted attempts without success after ~{0} ms." -f $sw.ElapsedMilliseconds)
  if ($ThrowOnFailure) { throw }
  return $false
}

# Example use
Add-LogLineWithRetry -Path "C:\logs\app.log" -Text ("Ping {0:u}" -f (Get-Date)) -MaxAttempts 5 -InitialDelayMs 250 -BudgetMs 1200 -Verbose

How it protects you

  • Retry scope: It only retries for sharing/lock violations (error codes 32/33). Other I/O errors surface immediately.
  • Bounded waits: It respects both a max attempt count and a total time budget.
  • Exclusive writes: FileShare.None prevents interleaved writes.
  • Deterministic cleanup: Every successful or failed open is disposed in a finally.
  • Safer durability: Optional -WriteThrough reduces the risk of data loss on power failure (at a minor performance cost); a short usage sketch follows below.
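
As a quick illustration of the failure path, a caller can opt into -WriteThrough and branch on the function's boolean result roughly like this (the path and message are illustrative, not from the original post):

if (-not (Add-LogLineWithRetry -Path 'C:\logs\app.log' -Text 'Checkpoint reached' -WriteThrough)) {
  # The write never landed within the attempt/budget limits; surface it to the caller
  Write-Error 'Could not record checkpoint; check for contention on app.log.'
}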

Performance, security, and reliability tips

  • Keep retries tiny: Millisecond sleeps smooth out brief contention without blocking your pipeline. Prefer 100–250 ms to start; cap the budget at roughly 1–1.5 seconds for routine logs.
  • Create the directory first: Call [IO.Directory]::CreateDirectory() before opening the file to avoid racy failures when the log folder is missing.
  • UTF-8 append: Use [Text.Encoding]::UTF8 and terminate each line with `r`n for Windows-friendly logs.
  • Log sparingly: Write compact attempt logs with Write-Verbose/Write-Warning. Excess console I/O can amplify contention in pipelines.
  • Don't busy-wait: Avoid spinning; always use Start-Sleep -Milliseconds for the backoff.
  • Consider STDOUT for containers: In Docker/Kubernetes, prefer stdout/stderr and let the runtime handle log rotation. Use file-based logging only when required.
  • Access control: Ensure only the service account can write the log directory. Use NTFS ACLs to prevent accidental modification by other users/processes.
  • Atomic updates for configs: For configuration/state files, write to a temp file and then Move-Item it into place to replace the target atomically. The same retry loop applies to the move if needed; see the sketch after this list.
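
To make that last tip concrete, here is a minimal sketch of the temp-file-then-move approach; the state file path and its fields are hypothetical, and the Move-Item call can be wrapped in the same retry loop if the destination is briefly locked:

$target = 'C:\state\job-state.json'
[IO.Directory]::CreateDirectory([IO.Path]::GetDirectoryName($target)) | Out-Null
$temp = Join-Path ([IO.Path]::GetDirectoryName($target)) ("{0}.tmp" -f [Guid]::NewGuid())

# Write the new content to a temp file in the same directory, then swap it in
$state = @{ LastRun = (Get-Date -Format 'u'); Status = 'OK' } | ConvertTo-Json
[IO.File]::WriteAllText($temp, $state, [Text.Encoding]::UTF8)

# -Force replaces the existing target in a single move
Move-Item -LiteralPath $temp -Destination $target -Force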

Testing and troubleshooting

  1. Simulate a lock: In a second PowerShell session, run a lock holder:
    $p = 'C:\logs\app.log'
    $fs = [IO.File]::Open($p, [IO.FileMode]::OpenOrCreate, [IO.FileAccess]::Write, [IO.FileShare]::None)
    # Keep it open to simulate a writer holding the lock
    
    Then run Add-LogLineWithRetry in your primary session to watch the retries, and the eventual success once you call $fs.Dispose() in the second session.
  2. Inspect error codes: If retries don't trigger, print ($_.Exception.HResult -band 0xFFFF) in the catch to confirm the code is 32 or 33; the probe after this list shows one way.
  3. Measure end-to-end: The function logs elapsed milliseconds. Track the number of attempts in your telemetry so you can detect contention before it becomes a problem.
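
If you need to confirm what a blocked open actually reports, a minimal probe like the one below (assuming the same app.log path) prints the HResult and its low 16 bits:

try {
  # Attempt an exclusive open against the file you expect to be locked
  $fs = [IO.File]::Open('C:\logs\app.log', [IO.FileMode]::OpenOrCreate, [IO.FileAccess]::Write, [IO.FileShare]::None)
  $fs.Dispose()
  Write-Host 'No lock: the open succeeded.'
} catch [IO.IOException] {
  # The low 16 bits of HResult hold the Win32 error code:
  # 32 = ERROR_SHARING_VIOLATION, 33 = ERROR_LOCK_VIOLATION
  $code = $_.Exception.HResult -band 0xFFFF
  Write-Host ("HResult 0x{0:X8} -> Win32 code {1}" -f $_.Exception.HResult, $code)
}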

Real-world uses

  • Service logs: Multiple background jobs append to a shared log without interleaving lines or throwing noisy errors.
  • Idempotent schedulers: A nightly job updates a state file; when antivirus briefly scans, retries kick in and the job proceeds without failure.
  • Build agents: CI tasks writing artifacts or summaries to a shared workspace recover from transient locks created by parallel steps.

The pattern is deliberately small: exclusive open, append, dispose, retry with tiny waits, and stop when the budget is spent. Adopt it anywhere you write files from PowerShell, and you'll see fewer write errors, more predictable behavior under contention, and simpler postmortems.
