Add configurable HTTP timeouts and error callback to get_prerendered_page_response #69

@rlafferty

Description

Problem

get_prerendered_page_response creates a Net::HTTP instance without setting open_timeout or read_timeout:

http = Net::HTTP.new(url.host, url.port)
http.use_ssl = true if url.scheme == 'https'
response = http.request(req)

This inherits Ruby's Net::HTTP defaults of 60 seconds for both connection and read timeouts. When the Prerender service is degraded (slow responses, gateway errors), each request holds the calling web server thread/process for up to 60 seconds before timing out.
In threaded application servers like Puma, this quickly saturates the worker pool — a single slow Prerender dependency can take down the entire application for all users, not just bot traffic.
The existing bare rescue, which swallows every StandardError and returns nil, also means timeout errors disappear silently, with no opportunity for the consuming application to log, alert, or take corrective action (e.g., circuit breaking).
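The 60-second defaults can be confirmed on a fresh client without opening any connection (a minimal sketch; the host is illustrative):

```ruby
require 'net/http'

# Constructing the client does not connect; it just exposes the defaults
# that get_prerendered_page_response currently inherits.
http = Net::HTTP.new('service.prerender.io', 443)
http.use_ssl = true

puts http.open_timeout  # 60 -- seconds allowed to establish the connection
puts http.read_timeout  # 60 -- seconds allowed per blocking read
```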

Proposed Solution

1. Configurable timeouts via the options hash

Allow consumers to pass open_timeout and read_timeout through the existing options mechanism:

config.middleware.use Rack::Prerender,
  prerender_token: 'YOUR_TOKEN',
  open_timeout: 5,
  read_timeout: 10

Implementation in get_prerendered_page_response:

http = Net::HTTP.new(url.host, url.port)
http.use_ssl = true if url.scheme == 'https'
http.open_timeout = @options[:open_timeout] if @options[:open_timeout]
http.read_timeout = @options[:read_timeout] if @options[:read_timeout]
response = http.request(req)

When these options are not provided, behavior is unchanged (Net::HTTP's 60-second defaults apply), so the change is fully backward-compatible.
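The opt-in behavior can be sketched as a self-contained helper (the build_http name is illustrative, not part of the gem; the real change lives inline in get_prerendered_page_response):

```ruby
require 'net/http'
require 'uri'

# Illustrative helper mirroring the proposed change: timeouts are applied
# only when the corresponding option is present, so defaults are untouched.
def build_http(url, options)
  http = Net::HTTP.new(url.host, url.port)
  http.use_ssl = true if url.scheme == 'https'
  http.open_timeout = options[:open_timeout] if options[:open_timeout]
  http.read_timeout = options[:read_timeout] if options[:read_timeout]
  http
end

url = URI.parse('https://service.prerender.io/https://example.com/')
configured = build_http(url, open_timeout: 5, read_timeout: 10)
default    = build_http(url, {})

puts configured.open_timeout  # 5
puts default.open_timeout     # 60 (Net::HTTP default preserved)
```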

2. Error callback via on_error option

Expose an optional on_error callback (consistent with the existing before_render / after_render pattern) so consuming applications have visibility into failures:

config.middleware.use Rack::Prerender,
  prerender_token: 'YOUR_TOKEN',
  open_timeout: 5,
  read_timeout: 10,
  on_error: ->(error, env) {
    Rails.logger.warn("Prerender request failed: #{error.class} - #{error.message}")
  }

Implementation: replace the current bare rescue with:

rescue => e
  @options[:on_error].call(e, env) if @options[:on_error]
  nil

The return value remains nil (falling through to @app.call), preserving existing behavior. The callback is purely for observability and consumer-side error handling.
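The wiring can be exercised in isolation (the fetch_with_error_callback name and the empty env hash are illustrative stand-ins for the middleware's internals):

```ruby
require 'net/http' # defines Net::ReadTimeout

# Illustrative stand-in for the middleware's rescue clause: the callback is
# invoked when present, and nil is still returned so @app.call runs next.
def fetch_with_error_callback(options, env)
  yield
rescue => e
  options[:on_error].call(e, env) if options[:on_error]
  nil
end

seen = []
result = fetch_with_error_callback({ on_error: ->(error, _env) { seen << error.class } }, {}) do
  raise Net::ReadTimeout
end

puts result.inspect  # nil
puts seen.inspect    # [Net::ReadTimeout]
```

Because Net::OpenTimeout and Net::ReadTimeout are ordinary StandardError descendants, the existing bare rescue already catches them; the callback merely makes them visible.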

Context

We experienced a production incident where Prerender.io degradation caused 30-60 second response times. With the default 60-second Net::HTTP timeout, Puma workers were held for the full duration of each request. This saturated the worker pool within minutes, causing the upstream reverse proxy (NGINX) to return 503s to all traffic — including non-bot requests that don't use Prerender at all.
Reducing the timeout to 5-10 seconds and having visibility into errors would have limited the blast radius significantly.

Alternatives Considered

  • Wrapping Net::HTTP.new: patching or wrapping the constructor to inject timeouts globally is more invasive, and it breaks if the gem's HTTP implementation changes.
  • Using Timeout.timeout: Dangerous in Ruby — can interrupt code at unpredictable points. Native Net::HTTP timeouts are the correct approach.
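A minimal illustration of the hazard: Timeout.timeout raises its exception asynchronously from a watchdog thread, so it can land at an arbitrary point in the block (in real applications, potentially inside ensure blocks or other cleanup code):

```ruby
require 'timeout'

# Timeout.timeout injects Timeout::Error from another thread, interrupting
# the block at an arbitrary point rather than at a well-defined boundary.
interrupted = false
begin
  Timeout.timeout(0.05) do
    sleep 1 # interrupted mid-sleep, long before it completes
  end
rescue Timeout::Error
  interrupted = true
end

puts interrupted  # true
```

Net::HTTP's native open_timeout/read_timeout, by contrast, raise only at well-defined I/O boundaries.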
