Education · Last updated April 4, 2026

Build a Visual Regression Testing System with WebShot API in Python

Automate visual regression testing for web applications using WebShot API. Detect UI regressions, layout breaks, and CSS changes before they reach production.

Visual regression testing catches the bugs that unit tests and functional tests miss: a CSS change that breaks the navigation on mobile, a font that fails to load and reverts to system fallback, a button that renders correctly in Chrome but disappears in Safari. These are the bugs users see first and report most.

Traditional visual regression tools require you to spin up browsers, manage Selenium or Playwright infrastructure, and deal with flakiness from test environments. This guide builds a lightweight visual regression system using WebShot API — take screenshots programmatically, compare them pixel-by-pixel against baselines, and alert your team when significant visual changes are detected.

How Visual Regression Testing Works

The process has three phases:

  1. Baseline capture — Take screenshots of every important page in a known-good state
  2. Comparison run — After each deployment, take new screenshots and compare to baseline
  3. Diff analysis — Calculate pixel difference percentage; alert if above threshold

The key challenge is handling acceptable variation (timestamps, dynamic content, ad slots) without suppressing real regressions.
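One common way to handle acceptable variation is to zero out known-dynamic regions in both images before diffing. A minimal sketch (the region coordinates here are hypothetical, not tied to any real page):

```python
import numpy as np

def mask_regions(image_arr, regions):
    """Zero out rectangular (x, y, width, height) regions so dynamic
    content (timestamps, ads) never counts toward the diff."""
    masked = image_arr.copy()
    for x, y, w, h in regions:
        masked[y:y + h, x:x + w] = 0
    return masked

# Example: ignore a hypothetical 300x40 timestamp banner at the top-left
baseline = np.random.randint(0, 256, (600, 800, 3), dtype=np.uint8)
current = baseline.copy()
current[0:40, 0:300] = 255  # dynamic banner changed between runs

regions = [(0, 0, 300, 40)]  # (x, y, width, height)
a = mask_regions(baseline, regions)
b = mask_regions(current, regions)
print(np.array_equal(a, b))  # True: only the masked region differed
```

Masking in the comparison step like this complements API-side masking: it works even on screenshots you captured before you knew a region was dynamic.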

Architecture

CI/CD Pipeline (post-deploy trigger)
          │
          ▼
    WebShot API
    ├── Desktop (1440px)
    ├── Tablet (768px)
    └── Mobile (375px)
          │
          ▼
  Screenshot Storage (S3/local)
          │
          ▼
  Pixel Comparison Engine
          │
    ┌─────┴──────┐
    │             │
  PASS          FAIL
  (≤2% diff)   (>2% diff)
                  │
                  ▼
          Alert + Diff Image
          (Slack / GitHub PR comment)

Setup

Install the dependencies:

pip install requests Pillow numpy python-dotenv boto3

Then create a .env file with your API key and storage settings:

APIVULT_API_KEY=YOUR_API_KEY
S3_BUCKET=your-screenshots-bucket
AWS_REGION=us-east-1

Step 1: Screenshot Capture Module

import os
import requests
import base64
from pathlib import Path
from typing import List, Dict
from dotenv import load_dotenv
 
load_dotenv()
 
API_KEY = os.getenv("APIVULT_API_KEY")
BASE_URL = "https://apivult.com/api/webshot"
 
VIEWPORTS = {
    "desktop": {"width": 1440, "height": 900},
    "tablet": {"width": 768, "height": 1024},
    "mobile": {"width": 375, "height": 812}
}
 
def capture_screenshot(
    url: str,
    viewport: str = "desktop",
    wait_for: str | None = None,
    full_page: bool = True,
    mask_selectors: List[str] | None = None
) -> bytes:
    """
    Capture a screenshot of a URL.
 
    Args:
        url: Page URL to capture
        viewport: "desktop", "tablet", or "mobile"
        wait_for: CSS selector to wait for before screenshot
        full_page: Capture full scrollable page
        mask_selectors: CSS selectors for dynamic elements to mask (timestamps, ads)
 
    Returns:
        PNG image bytes
    """
    headers = {
        "X-RapidAPI-Key": API_KEY,
        "Content-Type": "application/json"
    }
 
    viewport_config = VIEWPORTS.get(viewport, VIEWPORTS["desktop"])
 
    payload = {
        "url": url,
        "viewport_width": viewport_config["width"],
        "viewport_height": viewport_config["height"],
        "full_page": full_page,
        "format": "png",
        "quality": 95,
        "wait_for_network": "idle",  # wait for all network requests to complete
        "wait_timeout_ms": 8000
    }
 
    if wait_for:
        payload["wait_for_selector"] = wait_for
 
    if mask_selectors:
        payload["mask_selectors"] = mask_selectors  # fill these with solid color
 
    response = requests.post(
        f"{BASE_URL}/capture",
        json=payload,
        headers=headers,
        timeout=30
    )
    response.raise_for_status()
    result = response.json()
 
    # Decode base64 screenshot
    return base64.b64decode(result["screenshot"])
 
def capture_page_responsive(
    url: str,
    output_dir: str,
    page_name: str,
    mask_selectors: List[str] | None = None
) -> Dict[str, str]:
    """
    Capture a page at all three viewport sizes.
    Returns dict of viewport -> file path.
    """
    Path(output_dir).mkdir(parents=True, exist_ok=True)
    paths = {}
 
    for viewport_name in VIEWPORTS:
        screenshot = capture_screenshot(
            url,
            viewport=viewport_name,
            mask_selectors=mask_selectors
        )
        file_path = f"{output_dir}/{page_name}_{viewport_name}.png"
        with open(file_path, "wb") as f:
            f.write(screenshot)
 
        paths[viewport_name] = file_path
        print(f"  Captured {viewport_name}: {file_path}")
 
    return paths
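Screenshot APIs occasionally time out or return transient 5xx errors, and a single flaky capture shouldn't fail the whole run. A small generic retry helper (a sketch of this guide's own convention, not part of any WebShot client) can wrap capture_screenshot:

```python
import time

def with_retries(fn, attempts=3, base_delay=1.0, retry_on=(Exception,)):
    """Call fn(); on failure, retry with exponential backoff.
    Re-raises the last error once attempts are exhausted."""
    last_error = None
    for attempt in range(attempts):
        try:
            return fn()
        except retry_on as exc:
            last_error = exc
            if attempt < attempts - 1:
                time.sleep(base_delay * (2 ** attempt))
    raise last_error

# Usage with the capture module above:
# png = with_retries(lambda: capture_screenshot(url, viewport="mobile"))

# Self-contained demo with a stub that fails twice, then succeeds:
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient")
    return "ok"

print(with_retries(flaky, attempts=3, base_delay=0))  # ok
```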

Step 2: Pixel Comparison Engine

import numpy as np
from PIL import Image, ImageChops, ImageEnhance
import io
 
def compare_screenshots(
    baseline_path: str,
    current_path: str,
    diff_output_path: str | None = None,
    threshold_pct: float = 2.0
) -> dict:
    """
    Compare two screenshots pixel-by-pixel.
 
    Args:
        baseline_path: Path to baseline screenshot
        current_path: Path to current screenshot
        diff_output_path: Where to save the diff image
        threshold_pct: Percentage of pixels that must differ to fail
 
    Returns:
        Comparison result with diff percentage and pass/fail status
    """
    baseline = Image.open(baseline_path).convert("RGB")
    current = Image.open(current_path).convert("RGB")
 
    # Resize current to match baseline if dimensions differ.
    # Note: on full-page captures a size mismatch usually means content
    # was added or removed, so expect the resized diff to flag large regions.
    if baseline.size != current.size:
        current = current.resize(baseline.size, Image.LANCZOS)
 
    # Convert to numpy arrays for fast comparison
    baseline_arr = np.array(baseline, dtype=np.float32)
    current_arr = np.array(current, dtype=np.float32)
 
    # Calculate pixel difference
    diff_arr = np.abs(baseline_arr - current_arr)
 
    # Pixels with >10 difference in any channel are "changed"
    changed_pixels = np.any(diff_arr > 10, axis=2)
    total_pixels = changed_pixels.size
    diff_pixels = np.sum(changed_pixels)
    diff_pct = (diff_pixels / total_pixels) * 100
 
    # Generate visual diff image if requested
    if diff_output_path:
        diff_image = Image.fromarray(
            (diff_arr * 3).clip(0, 255).astype(np.uint8)
        )
        # Enhance visibility of differences
        diff_enhanced = ImageEnhance.Contrast(diff_image).enhance(5.0)
 
        # Create side-by-side comparison
        comparison_width = baseline.width * 3
        comparison = Image.new("RGB", (comparison_width, baseline.height))
        comparison.paste(baseline, (0, 0))
        comparison.paste(current, (baseline.width, 0))
        comparison.paste(diff_enhanced, (baseline.width * 2, 0))
        comparison.save(diff_output_path)
 
    return {
        "diff_percentage": round(diff_pct, 3),
        "diff_pixels": int(diff_pixels),
        "total_pixels": int(total_pixels),
        "passed": diff_pct <= threshold_pct,
        "threshold": threshold_pct,
        "diff_image": diff_output_path
    }
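To sanity-check the threshold math in isolation, here is the same per-channel rule applied to synthetic arrays. The helper name pixel_diff_percentage is introduced here for illustration; it mirrors the logic of compare_screenshots above: a pixel counts as changed only if some channel moves by more than 10 units, so tiny anti-aliasing shifts are ignored while real repaints register:

```python
import numpy as np

def pixel_diff_percentage(baseline_arr, current_arr, channel_tolerance=10):
    """Percentage of pixels whose color moved more than the
    tolerance in at least one channel."""
    diff = np.abs(baseline_arr.astype(np.float32) - current_arr.astype(np.float32))
    changed = np.any(diff > channel_tolerance, axis=2)
    return changed.sum() * 100.0 / changed.size

baseline = np.zeros((100, 100, 3), dtype=np.uint8)

# A real change: top 5 rows repainted white -> exactly 5% of pixels
repainted = baseline.copy()
repainted[:5, :, :] = 255
print(pixel_diff_percentage(baseline, repainted))  # 5.0

# A sub-tolerance shift (slight rendering noise): ignored entirely
shifted = baseline.copy() + 5
print(pixel_diff_percentage(baseline, shifted))  # 0.0
```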

Step 3: Baseline Management

import json
from datetime import datetime
 
BASELINES_DIR = "visual_baselines"
RESULTS_DIR = "visual_results"
 
# Define the pages to test
PAGES_TO_TEST = [
    {
        "name": "homepage",
        "url": "https://your-app.com",
        "wait_for": ".hero-section",
        "mask_selectors": [".timestamp", ".ad-slot", "[data-dynamic]"]
    },
    {
        "name": "pricing",
        "url": "https://your-app.com/pricing",
        "wait_for": ".pricing-cards"
    },
    {
        "name": "dashboard",
        "url": "https://your-app.com/dashboard",
        "wait_for": ".dashboard-main",
        "mask_selectors": [".last-updated", ".live-counter"]
    },
    {
        "name": "signup",
        "url": "https://your-app.com/signup",
        "wait_for": "form"
    }
]
 
def create_baselines():
    """Capture baseline screenshots for all pages. Run once after a known-good deploy."""
    print("Creating visual regression baselines...")
    baseline_manifest = {
        "created_at": datetime.utcnow().isoformat(),
        "pages": {}
    }
 
    for page in PAGES_TO_TEST:
        print(f"\nCapturing: {page['name']}")
        paths = capture_page_responsive(
            url=page["url"],
            output_dir=f"{BASELINES_DIR}/{page['name']}",
            page_name=page["name"],
            mask_selectors=page.get("mask_selectors")
        )
        baseline_manifest["pages"][page["name"]] = paths
 
    with open(f"{BASELINES_DIR}/manifest.json", "w") as f:
        json.dump(baseline_manifest, f, indent=2)
 
    print(f"\nBaselines created for {len(PAGES_TO_TEST)} pages")
    return baseline_manifest
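When a failing diff turns out to be an intentional redesign, you need a way to accept the new screenshots as the baseline. A minimal promotion helper (stdlib only; it assumes the <page>/<page>_<viewport>.png layout used above) might look like:

```python
import shutil
import tempfile
from pathlib import Path

def promote_run_to_baseline(run_dir, baselines_dir):
    """Replace the current baselines with a run's screenshots,
    skipping the *_diff.png composites, which are artifacts."""
    run = Path(run_dir)
    baselines = Path(baselines_dir)
    for png in run.rglob("*.png"):
        if png.name.endswith("_diff.png"):
            continue
        target = baselines / png.relative_to(run)
        target.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(png, target)

# Demo on a temporary directory layout:
tmp = Path(tempfile.mkdtemp())
page_dir = tmp / "run" / "homepage"
page_dir.mkdir(parents=True)
(page_dir / "homepage_desktop.png").write_bytes(b"new")
(page_dir / "homepage_desktop_diff.png").write_bytes(b"diff")
promote_run_to_baseline(tmp / "run", tmp / "base")
print(sorted(p.name for p in (tmp / "base").rglob("*.png")))  # ['homepage_desktop.png']
```

Promote only after a human has reviewed the diff images; automatic promotion defeats the purpose of the baseline.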

Step 4: Regression Test Runner

def run_visual_regression_tests(deployment_id: str | None = None) -> dict:
    """
    Run visual regression tests against current baselines.
    Call this in CI/CD after deployment.
    """
    run_id = deployment_id or datetime.utcnow().strftime("%Y%m%d_%H%M%S")
    run_dir = f"{RESULTS_DIR}/{run_id}"
    Path(run_dir).mkdir(parents=True, exist_ok=True)
 
    # Load baseline manifest
    with open(f"{BASELINES_DIR}/manifest.json") as f:
        baselines = json.load(f)
 
    results = {
        "run_id": run_id,
        "run_at": datetime.utcnow().isoformat(),
        "pages": {},
        "summary": {"passed": 0, "failed": 0, "total": 0}
    }
 
    for page in PAGES_TO_TEST:
        page_name = page["name"]
        print(f"\nTesting: {page_name}")
 
        page_results = {"viewports": {}}
 
        # Capture current screenshots
        current_paths = capture_page_responsive(
            url=page["url"],
            output_dir=f"{run_dir}/{page_name}",
            page_name=page_name,
            mask_selectors=page.get("mask_selectors")
        )
 
        # Compare each viewport
        for viewport in VIEWPORTS:
            baseline_path = baselines["pages"].get(page_name, {}).get(viewport)
            current_path = current_paths.get(viewport)
 
            if not baseline_path or not current_path:
                continue
 
            diff_path = f"{run_dir}/{page_name}/{page_name}_{viewport}_diff.png"
 
            comparison = compare_screenshots(
                baseline_path=baseline_path,
                current_path=current_path,
                diff_output_path=diff_path,
                threshold_pct=2.0
            )
 
            page_results["viewports"][viewport] = comparison
            results["summary"]["total"] += 1
 
            status = "PASS" if comparison["passed"] else "FAIL"
            print(f"  {viewport}: {status} ({comparison['diff_percentage']:.2f}% diff)")
 
            if comparison["passed"]:
                results["summary"]["passed"] += 1
            else:
                results["summary"]["failed"] += 1
 
        results["pages"][page_name] = page_results
 
    # Save results
    with open(f"{run_dir}/results.json", "w") as f:
        json.dump(results, f, indent=2)
 
    return results
 
def send_regression_alert(results: dict, slack_webhook: str):
    """Send Slack notification for failed visual regression tests."""
    failed_pages = [
        page_name
        for page_name, page_data in results["pages"].items()
        if any(
            not vp["passed"]
            for vp in page_data["viewports"].values()
        )
    ]
 
    if not failed_pages:
        return
 
    summary = results["summary"]
    message = {
        "text": f"🖼️ Visual Regression Failures Detected",
        "blocks": [
            {
                "type": "section",
                "text": {
                    "type": "mrkdwn",
                    "text": (
                        f"*Visual Regression Test Results* — Run {results['run_id']}\n"
                        f"✅ Passed: {summary['passed']} | "
                        f"❌ Failed: {summary['failed']} | "
                        f"Total: {summary['total']}\n\n"
                        f"*Failed pages:*\n"
                        + "\n".join(f"• {p}" for p in failed_pages)
                    )
                }
            }
        ]
    }
 
    requests.post(slack_webhook, json=message, timeout=10)
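The CI step in the next section assumes a small CLI wrapper around these functions. One way to wire it (the flag names are this guide's convention, not a WebShot requirement):

```python
import argparse
import sys

def build_parser() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser(description="Visual regression runner")
    parser.add_argument("--create-baselines", action="store_true",
                        help="Capture fresh baselines from a known-good deploy")
    parser.add_argument("--run-tests", action="store_true",
                        help="Compare the current deploy against baselines")
    parser.add_argument("--deployment-id", default=None,
                        help="Label for this run (e.g. the git SHA)")
    return parser

def exit_code(results: dict) -> int:
    """A nonzero exit fails the CI job when any comparison failed."""
    return 0 if results["summary"]["failed"] == 0 else 1

# Wiring it up (uses create_baselines / run_visual_regression_tests above):
# if __name__ == "__main__":
#     args = build_parser().parse_args()
#     if args.create_baselines:
#         create_baselines()
#     if args.run_tests:
#         results = run_visual_regression_tests(args.deployment_id)
#         sys.exit(exit_code(results))
```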

CI/CD Integration

Add to your GitHub Actions workflow:

- name: Visual Regression Tests
  run: |
    python visual_regression.py --run-tests --deployment-id ${{ github.sha }}
  env:
    APIVULT_API_KEY: ${{ secrets.APIVULT_API_KEY }}
 
- name: Upload diff images
  if: failure()
  uses: actions/upload-artifact@v4
  with:
    name: visual-diffs-${{ github.sha }}
    path: visual_results/
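
The architecture diagram also mentions GitHub PR comments. A small formatter can turn the runner's results dict into a Markdown comment body (posting it via the GitHub REST API is left out here; the dict shape matches run_visual_regression_tests above):

```python
def format_pr_comment(results: dict) -> str:
    """Render a Markdown summary of a regression run for a PR comment."""
    summary = results["summary"]
    lines = [
        "## Visual Regression Results",
        f"Run `{results['run_id']}`: {summary['passed']} passed, "
        f"{summary['failed']} failed of {summary['total']} comparisons.",
        "",
        "| Page | Viewport | Diff % | Status |",
        "| --- | --- | --- | --- |",
    ]
    for page_name, page_data in results["pages"].items():
        for viewport, cmp in page_data["viewports"].items():
            status = "✅" if cmp["passed"] else "❌"
            lines.append(
                f"| {page_name} | {viewport} | "
                f"{cmp['diff_percentage']:.2f} | {status} |"
            )
    return "\n".join(lines)

# Demo on a hand-built results dict:
example = {
    "run_id": "20260404_120000",
    "summary": {"passed": 1, "failed": 1, "total": 2},
    "pages": {
        "homepage": {"viewports": {
            "desktop": {"passed": True, "diff_percentage": 0.4},
            "mobile": {"passed": False, "diff_percentage": 6.1},
        }}
    },
}
print(format_pr_comment(example))
```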

Performance

Pages   Viewports   Total Screenshots   Run Time
10      3           30                  ~45 sec
25      3           75                  ~2 min
50      3           150                 ~4 min

Pixel comparison itself is fast (vectorized numpy, CPU-bound); most of a run is spent waiting on screenshot capture. Parallelize capture for large page sets.
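Capture parallelism can be as simple as a thread pool, since each call is network-bound (a sketch; the rate limits of your WebShot plan may cap safe concurrency):

```python
from concurrent.futures import ThreadPoolExecutor

def capture_many(capture_fn, jobs, max_workers=4):
    """Run capture jobs concurrently. Each job is a (page_name, url) pair;
    capture_fn(page_name, url) returns whatever your capture step produces."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = {
            pool.submit(capture_fn, name, url): name
            for name, url in jobs
        }
        return {futures[f]: f.result() for f in futures}

# Demo with a stub in place of the real capture_page_responsive:
jobs = [("homepage", "https://your-app.com"),
        ("pricing", "https://your-app.com/pricing")]
results = capture_many(lambda name, url: f"{name}.png", jobs, max_workers=2)
print(results)  # {'homepage': 'homepage.png', 'pricing': 'pricing.png'}
```

Since f.result() re-raises any capture exception, a single failed page still fails loudly instead of silently producing an empty baseline.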

Conclusion

Visual regression testing catches an entire category of bugs that standard test suites miss. With WebShot API handling the browser infrastructure, you get reliable cross-viewport screenshots without managing headless browser clusters.

Start with 5–10 of your most important pages, set a 2% pixel threshold, and run after every production deployment. You'll catch regressions in minutes instead of hearing about them from users.

See the WebShot API documentation for advanced options including element-level screenshots, PDF capture, and authentication header support.