Websites change. It's not a bug — it's the nature of the web. Designs get refreshed. APIs get versioned. Selectors shift. Components get renamed.
The question isn't "how do I stop websites from changing?" It's: "how do I keep my automations running when they do?"
"I built Site Spy after missing a visa appointment slot because a government page changed and I didn't notice for two weeks."
— vkuprin, Hacker News
"I'd use it to track when state wildlife agencies update their regulation pages — those change once a year with no announcement and I always miss it."
— Hauk307, Hacker News
"I currently use Wachete, but for over a year now it has triggered rate limits on a specific website, and I just can't monitor German laws anymore."
— nicbou, Hacker News
Three different users, three different needs, the same problem: websites change, and there's no reliable way to know when or adapt automatically.
Not all website changes are the same:
1. Data changes — new content, updated prices, new posts. This is what you want to track. Your automation should detect and report these.
2. Structure changes — redesigned layout, renamed CSS classes, new API version. This is what breaks your automation. Your automation should survive these.
Most tools conflate the two. tap watch and tap doctor handle them separately.
```
$ tap watch hackernews hot --every 10m
2026-04-04T10:00  +added    "Show HN: Tap"        score=342
2026-04-04T10:10  +added    "Rust 2.0 announced"  score=128
2026-04-04T10:10  -removed  "Old post fell off"   score=12
2026-04-04T10:20  ~changed  "Show HN: Tap"        score: 342→487
```
tap watch runs your program on an interval, diffs the results, and outputs only what changed. It's built on Unix primitives — while + sleep + diff. No database. No scheduler service.
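Those primitives are enough to sketch the whole loop. This is a minimal illustration of the idea, not tap's actual source; `diff_snapshots` and `watch_loop` are made-up names.

```shell
# Sketch of a watch loop built from while + sleep + diff.
# `diff_snapshots` and `watch_loop` are hypothetical names, not tap internals.

diff_snapshots() {
  # Print only the lines that were added or removed between two snapshots.
  diff "$1" "$2" | grep '^[<>]' || true
}

watch_loop() {
  # Usage: watch_loop <command...>, e.g. watch_loop tap hackernews hot
  prev=$(mktemp)
  "$@" > "$prev"
  while sleep 600; do            # --every 10m
    curr=$(mktemp)
    "$@" > "$curr"
    diff_snapshots "$prev" "$curr"
    mv "$curr" "$prev"
  done
}
```

Lines prefixed `<` fell out of the results; `>` lines are new. The real tool adds timestamps and the +added/-removed/~changed labels, but the core idea is the same diff.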
Pipe it anywhere:
```
# Append to a log file
$ tap watch github trending --every 1h >> ~/trending.log

# Send to Slack via webhook
$ tap watch reddit hot --every 30m | curl -X POST -d @- $SLACK_WEBHOOK

# Feed into another program
$ tap watch competitor prices --every 6h | tap filter --where "change > 10%"
```
When a website redesigns and your selectors break, watch won't show changes — it'll show nothing. That's where health contracts come in.
```
# Every tap has a health contract
health: {
  min_rows: 5,           // must return ≥5 results
  non_empty: ["title"]   // title must never be empty
}
```
```
$ tap doctor
hackernews/hot    ✔ ok    30 rows (245ms)
bbc/news          ✘ fail  0 rows
                          min_rows: expected ≥5, got 0
github/trending   ✔ ok    25 rows (1.2s)
```
bbc/news is broken. The site changed its layout. Before your data went bad, doctor caught it.
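The min_rows check is simple to picture. Here is a hedged sketch, assuming results arrive one row per line of output; `check_min_rows` is an illustrative name, not part of tap.

```shell
# Illustrative sketch of a min_rows health check, assuming one result row
# per line of output. `check_min_rows` is a made-up name, not tap internals.
check_min_rows() {
  min="$1"; file="$2"
  rows=$(wc -l < "$file" | tr -d ' ')
  if [ "$rows" -ge "$min" ]; then
    echo "✔ ok    $rows rows"
  else
    echo "✘ fail  $rows rows (min_rows: expected ≥$min, got $rows)"
    return 1
  fi
}
```

The point of the contract: a redesigned page usually doesn't error, it silently yields zero rows. The check turns that silence into a visible failure.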
```
$ tap doctor --auto
bbc/news: 0 rows (min_rows: 5)
☉ Re-inspecting https://www.bbc.com/news...
☉ AI analyzing new page structure...
✔ Healed: bbc/news.tap.js updated
✔ Verified: 12 rows, score=1.0
```
AI re-inspects the page, writes a new program, verifies it works, and saves. One AI call. Then $0 per run again.
forge → run → watch → doctor → heal → run → ...
```
# Set it up once
$ tap forge "track competitor pricing"     # AI writes program
$ tap watch competitor prices --every 6h   # monitor changes
$ tap doctor --schedule "0 6 * * *"        # daily health check
```
You configure it once. The loop runs itself:
1. Data changes? watch reports them in real time.
2. Structure changes? doctor detects them via health contracts.
3. Breakage? doctor --auto re-forges with one AI call.
You sleep. Your automations don't stop.
Read more: Your Scraper Is Broken Right Now · Your Automation Costs $1 Per Run
```
$ curl -fsSL https://taprun.dev/install.sh | sh
$ tap update && tap hackernews hot
$ tap watch hackernews hot --every 10m
$ tap doctor
```
Getting started · GitHub · 195+ community taps