
MoppleIT Tech Blog


Reuse HttpClient Safely in PowerShell: Avoid Socket Exhaustion with a Single SocketsHttpHandler

When you script against APIs in PowerShell, creating a new HttpClient for every call is a fast track to socket exhaustion, flaky latency, and unpredictable timeouts. The reliable pattern is to build one SocketsHttpHandler and one HttpClient, configure clear limits, and reuse them across requests. In this post, you'll learn the why and how, with production-ready snippets that deliver faster calls, predictable timeouts, and stable throughput.

Build a Single HttpClient with a Tuned SocketsHttpHandler

In PowerShell 7+ (on .NET), prefer SocketsHttpHandler to get modern HTTP/2 support, efficient connection pooling, transparent decompression, and fine-grained tuning. Then pass it to a single HttpClient you reuse across your script or module lifetime.

Key settings you should set explicitly

  • PooledConnectionLifetime: forces occasional connection refresh to respect DNS changes and avoid long-lived stale connections.
  • PooledConnectionIdleTimeout: trims idle connections to keep the pool healthy.
  • MaxConnectionsPerServer: caps concurrency per host to prevent self-induced overload.
  • AutomaticDecompression: reduces payload size with gzip/deflate automatically.
  • Client timeout and per-call cancellation: ensure calls don't hang indefinitely.
$handler = [System.Net.Http.SocketsHttpHandler]::new()
$handler.PooledConnectionLifetime = [TimeSpan]::FromMinutes(5)      # Refresh connections to pick up DNS changes
$handler.PooledConnectionIdleTimeout = [TimeSpan]::FromMinutes(2)  # Reap idle connections
$handler.MaxConnectionsPerServer = 8                               # Concurrency cap per host
$handler.AutomaticDecompression = [System.Net.DecompressionMethods]::GZip -bor `
    [System.Net.DecompressionMethods]::Deflate
$handler.AllowAutoRedirect = $false                                # Be explicit about redirects

$client = [System.Net.Http.HttpClient]::new($handler)
$client.Timeout = [TimeSpan]::FromSeconds(15)                      # Overall guardrail timeout
$client.DefaultRequestHeaders.UserAgent.ParseAdd("PowerShellClient/1.0 (+https://example.local)")

Why these limits matter:

  • Socket exhaustion: creating many short-lived clients leaves connections in TIME_WAIT and pressures ephemeral ports. A single client reuses connections safely.
  • DNS changes: PooledConnectionLifetime forces the pool to rotate connections periodically and re-resolve hosts.
  • Fairness: MaxConnectionsPerServer keeps your script from opening hundreds of parallel connections and overwhelming the API.
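
If you want to see that pressure for yourself, you can count TIME_WAIT connections while a churn-heavy script runs. This is a sketch: Get-NetTCPConnection is Windows-only, and the warning threshold is illustrative.

```powershell
# Count connections stuck in TIME_WAIT (Windows; NetTCPIP module)
$timeWait = (Get-NetTCPConnection -State TimeWait -ErrorAction SilentlyContinue |
    Measure-Object).Count
Write-Host "TIME_WAIT connections: $timeWait"
if ($timeWait -gt 1000) {
    Write-Warning "High TIME_WAIT count - check for per-call HttpClient creation."
}
```

Run it before and after switching to a single shared client; the count should drop sharply once connections are being reused.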

Use HTTP/2 where possible

SocketsHttpHandler and HttpClient negotiate HTTP/2 automatically if the server supports it, which can improve throughput via multiplexing. If you push many concurrent requests to the same host and still observe head-of-line blocking, consider tuning advanced HTTP/2 options (e.g., EnableMultipleHttp2Connections in newer runtimes). For most cases, the defaults are excellent.
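
If you do opt in to that tuning, a sketch looks like this (assumes .NET 5+ / PowerShell 7.1+, where EnableMultipleHttp2Connections and HttpVersionPolicy are available):

```powershell
$handler = [System.Net.Http.SocketsHttpHandler]::new()
$handler.EnableMultipleHttp2Connections = $true   # Allow extra HTTP/2 connections under heavy load

$client = [System.Net.Http.HttpClient]::new($handler)
# Prefer HTTP/2, but fall back if the server only speaks HTTP/1.1
$client.DefaultRequestVersion = [System.Net.HttpVersion]::Version20
$client.DefaultVersionPolicy = [System.Net.Http.HttpVersionPolicy]::RequestVersionOrLower
```

Leave these at their defaults unless profiling shows a single multiplexed connection is the bottleneck.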

Per-Call Deadlines, Error Handling, and Predictable Behavior

Relying solely on HttpClient.Timeout is rarely enough. Use a CancellationTokenSource per call to enforce a stricter deadline and guarantee that stalled responses are abandoned quickly. Always create a fresh HttpRequestMessage per call.

$uris = @('https://api.example.local/items', 'https://api.example.local/users')
foreach ($u in $uris) {
  $req = [System.Net.Http.HttpRequestMessage]::new('GET', $u)
  $req.Headers.Accept.ParseAdd('application/json')

  # Per-call deadline shorter than the client guardrail timeout
  $cts = [System.Threading.CancellationTokenSource]::new([TimeSpan]::FromSeconds(10))

  try {
    # ResponseHeadersRead avoids buffering the whole response in memory
    $res = $client.SendAsync(
      $req,
      [System.Net.Http.HttpCompletionOption]::ResponseHeadersRead,
      $cts.Token
    ).GetAwaiter().GetResult()

    $res.EnsureSuccessStatusCode()

    # Read and parse JSON
    $text = $res.Content.ReadAsStringAsync().GetAwaiter().GetResult()
    ($text | ConvertFrom-Json -Depth 10) | Select-Object -First 1 Name, Id
  }
  catch [System.OperationCanceledException] {
    Write-Warning ("Timeout: {0}" -f $u)
  }
  catch [System.Net.Http.HttpRequestException] {
    Write-Warning ("HTTP error: {0}" -f $_.Exception.Message)
  }
  finally {
    $req.Dispose(); $cts.Dispose()
  }
}

# Dispose once, when you are completely done
$client.Dispose()

Notes:

  • ResponseHeadersRead: starts processing as soon as headers arrive; useful for large payloads or streaming scenarios.
  • Per-call CTS: guarantees a hard deadline regardless of server behavior; set it a bit below the client's Timeout.
  • EnsureSuccessStatusCode(): throws on 4xx/5xx so you can handle or retry explicitly.

Pragmatic retry for idempotent GETs

Retries can smooth over transient failures, but only retry idempotent requests (GET/HEAD) unless you know your POST/PUT is safe to repeat. Use exponential backoff and preserve your per-call deadline.

function Invoke-GetWithRetry {
  param(
    [Parameter(Mandatory)] [string] $Uri,
    [int] $MaxRetries = 3
  )
  $attempt = 0
  $delayMs = 250
  while ($true) {
    $attempt++
    $req = [System.Net.Http.HttpRequestMessage]::new('GET', $Uri)
    $req.Headers.Accept.ParseAdd('application/json')
    $cts = [System.Threading.CancellationTokenSource]::new([TimeSpan]::FromSeconds(8))
    try {
      $res = $client.SendAsync($req, [System.Net.Http.HttpCompletionOption]::ResponseHeadersRead, $cts.Token).GetAwaiter().GetResult()
      if ([int]$res.StatusCode -ge 500) { throw [System.Net.Http.HttpRequestException]::new("Server error: $($res.StatusCode)") }
      $res.EnsureSuccessStatusCode()
      return $res.Content.ReadAsStringAsync().GetAwaiter().GetResult() | ConvertFrom-Json -Depth 10
    }
    catch [System.OperationCanceledException] {
      if ($attempt -ge $MaxRetries) { throw }
      Start-Sleep -Milliseconds $delayMs
      $delayMs = [Math]::Min($delayMs * 2, 8000)
    }
    catch [System.Net.Http.HttpRequestException] {
      if ($attempt -ge $MaxRetries) { throw }
      Start-Sleep -Milliseconds $delayMs
      $delayMs = [Math]::Min($delayMs * 2, 8000)
    }
    finally {
      $req.Dispose(); $cts.Dispose()
    }
  }
}

Tip: cap total time by combining retry backoffs with a hard maximum per-call CTS; fail fast when the overall budget is exceeded.
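
One way to enforce that overall budget is a stopwatch check around the retry loop. This is a hypothetical wrapper around the Invoke-GetWithRetry function above; the 30-second budget is illustrative.

```powershell
# Stop retrying once a total time budget is spent
$budget = [TimeSpan]::FromSeconds(30)
$sw = [System.Diagnostics.Stopwatch]::StartNew()
$result = $null
$attempt = 0
while (-not $result -and $sw.Elapsed -lt $budget) {
    $attempt++
    try {
        $result = Invoke-GetWithRetry -Uri 'https://api.example.local/items' -MaxRetries 1
    } catch {
        Write-Warning ("Attempt {0} failed after {1:n1}s" -f $attempt, $sw.Elapsed.TotalSeconds)
    }
}
if (-not $result) { throw "Overall budget of $($budget.TotalSeconds)s exceeded" }
```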

Packaging for Reuse, Cross-Version Notes, and Advanced Tuning

Make it a module-level singleton

For scripts that call multiple endpoints, keep the HttpClient and SocketsHttpHandler in module scope. Initialize once in Begin or module import, dispose on exit.

# In your module .psm1
if (-not $script:Handler) {
  $script:Handler = [System.Net.Http.SocketsHttpHandler]::new()
  $script:Handler.PooledConnectionLifetime = [TimeSpan]::FromMinutes(5)
  $script:Handler.PooledConnectionIdleTimeout = [TimeSpan]::FromMinutes(2)
  $script:Handler.MaxConnectionsPerServer = 8
  $script:Handler.AutomaticDecompression = [System.Net.DecompressionMethods]::GZip -bor `
      [System.Net.DecompressionMethods]::Deflate
  $script:Client = [System.Net.Http.HttpClient]::new($script:Handler)
  $script:Client.Timeout = [TimeSpan]::FromSeconds(15)
}

function Invoke-ApiGet {
  param([Parameter(Mandatory)][string]$Uri)
  $req = [System.Net.Http.HttpRequestMessage]::new('GET', $Uri)
  $req.Headers.Accept.ParseAdd('application/json')
  $cts = [System.Threading.CancellationTokenSource]::new([TimeSpan]::FromSeconds(10))
  try {
    $res = $script:Client.SendAsync($req, [System.Net.Http.HttpCompletionOption]::ResponseHeadersRead, $cts.Token).GetAwaiter().GetResult()
    $res.EnsureSuccessStatusCode()
    return $res.Content.ReadAsStringAsync().GetAwaiter().GetResult() | ConvertFrom-Json -Depth 10
  } finally {
    $req.Dispose(); $cts.Dispose()
  }
}

# Optional: teardown function
function Close-ApiClient { $script:Client.Dispose(); $script:Handler.Dispose(); Remove-Variable -Scope Script Client,Handler -ErrorAction SilentlyContinue }
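
You can also wire the teardown to module unload so callers don't have to remember it; OnRemove is the standard module hook for this.

```powershell
# In your module .psm1, after defining Close-ApiClient
$ExecutionContext.SessionState.Module.OnRemove = { Close-ApiClient }
```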

Windows PowerShell 5.1 fallback

On Windows PowerShell 5.1 (.NET Framework), SocketsHttpHandler isn't available. You should still reuse a single HttpClient. Tune ServicePoint settings to approximate connection leasing and concurrency limits.

# PS 5.1-compatible
[System.Net.ServicePointManager]::DefaultConnectionLimit = 8
[System.Net.ServicePointManager]::Expect100Continue = $false
$handler = New-Object System.Net.Http.HttpClientHandler
$handler.AutomaticDecompression = [System.Net.DecompressionMethods]::GZip -bor `
    [System.Net.DecompressionMethods]::Deflate
$client = New-Object System.Net.Http.HttpClient($handler)
$client.Timeout = [TimeSpan]::FromSeconds(15)

# Lease connections every 5 minutes for a specific host to pick up DNS changes
$sp = [System.Net.ServicePointManager]::FindServicePoint('https://api.example.local')
$sp.ConnectionLeaseTimeout = 300000   # ms
$sp.MaxIdleTime = 120000               # ms

Additional practical tuning

  • Default headers: set common Authorization, User-Agent, and Accept at client level to avoid repetition.
  • Proxies: set $handler.Proxy = [System.Net.WebProxy]::new('http://proxy:8080') and $handler.UseProxy = $true.
  • Large downloads: stream to file with ResponseHeadersRead and CopyToAsync, rather than buffering in memory.
  • TLS/certificates: keep default validation; avoid disabling checks. If you must customize, use SslOptions.RemoteCertificateValidationCallback on SocketsHttpHandler sparingly.
  • Observability: wrap calls with [System.Diagnostics.Stopwatch]::StartNew() and emit timings to logs; enable System.Net.Http event source when diagnosing connection pooling.
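
The large-download bullet above can be sketched like this, streaming the body straight to disk instead of buffering it; the URI and file path are placeholders, and $client is the shared client from earlier.

```powershell
$req = [System.Net.Http.HttpRequestMessage]::new('GET', 'https://api.example.local/export.zip')
$cts = [System.Threading.CancellationTokenSource]::new([TimeSpan]::FromMinutes(5))
try {
    # ResponseHeadersRead: start streaming as soon as headers arrive
    $res = $client.SendAsync($req, [System.Net.Http.HttpCompletionOption]::ResponseHeadersRead, $cts.Token).GetAwaiter().GetResult()
    $res.EnsureSuccessStatusCode()
    $file = [System.IO.File]::Create('C:\temp\export.zip')
    try {
        # Copy the response stream to the file without loading it into memory
        $res.Content.CopyToAsync($file).GetAwaiter().GetResult()
    } finally {
        $file.Dispose()
    }
} finally {
    $req.Dispose(); $cts.Dispose()
}
```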

End-to-end example

$handler = [System.Net.Http.SocketsHttpHandler]::new()
$handler.PooledConnectionLifetime = [TimeSpan]::FromMinutes(5)
$handler.PooledConnectionIdleTimeout = [TimeSpan]::FromMinutes(2)
$handler.MaxConnectionsPerServer = 8
$handler.AutomaticDecompression = [System.Net.DecompressionMethods]::GZip -bor [System.Net.DecompressionMethods]::Deflate
$client = [System.Net.Http.HttpClient]::new($handler)
$client.Timeout = [TimeSpan]::FromSeconds(15)

$uris = @('https://api.example.local/items', 'https://api.example.local/users')
foreach ($u in $uris) {
  $req = [System.Net.Http.HttpRequestMessage]::new('GET', $u)
  $req.Headers.Accept.ParseAdd('application/json')
  $cts = [System.Threading.CancellationTokenSource]::new([TimeSpan]::FromSeconds(10))
  try {
    $res = $client.SendAsync($req, [System.Net.Http.HttpCompletionOption]::ResponseHeadersRead, $cts.Token).GetAwaiter().GetResult()
    $res.EnsureSuccessStatusCode()
    $text = $res.Content.ReadAsStringAsync().GetAwaiter().GetResult()
    ($text | ConvertFrom-Json -Depth 10) | Select-Object -First 1 Name, Id
  } catch [System.OperationCanceledException] {
    Write-Warning ("Timeout: {0}" -f $u)
  } catch [System.Net.Http.HttpRequestException] {
    Write-Warning ("HTTP error: {0}" -f $_.Exception.Message)
  } finally {
    $req.Dispose(); $cts.Dispose()
  }
}
$client.Dispose()

What You Get

  • Fewer socket leaks: one client, one pool, predictable resource usage.
  • Faster calls: connection reuse, automatic decompression, and optional HTTP/2 multiplexing.
  • Predictable timeouts: per-call deadlines plus guardrail client timeout.
  • Stable throughput: fair per-host concurrency limits.

Strengthen your API reliability in PowerShell with these patterns. For more advanced recipes and production hardening, see resources like the PowerShell Advanced Cookbook. Keep shipping stable, fast automation scripts.
