
16 Comments, 6 Insights: Using HN and Reddit as a Positioning Lab

April 6, 2026 · Leon Ting · 7 min read

I spent an afternoon writing 16 comments across Hacker News and Reddit. Not to promote anything — to test which pain points actually resonate with developers.

The result: 6 content principles I now use to decide what to build, what to write about, and how to position my product. Here's the method.

The Method: Comments as Micro-Experiments

The premise is simple: a comment is the cheapest possible A/B test.

Writing a blog post takes hours. A landing page rewrite takes days. A comment takes 2 minutes. If it gets upvoted, the angle works. If it's ignored, you saved yourself a blog post nobody would read.

The process:

  1. Find hot posts in your domain (automation, scraping, developer tools, AI)
  2. Write a comment that tests a specific angle — one pain point, one insight
  3. Track which angles get traction
  4. Turn validated angles into blog posts and landing page copy

I posted across 10 subreddits (r/webscraping, r/automation, r/selfhosted, r/commandline, r/webdev, r/opensource, r/programming, r/MachineLearning, r/LocalLLaMA, r/devops) and HN front-page posts spanning AI, infrastructure, open source, and developer tools.

The 6 Insights

#1 Silent failure is the universal pain

I commented on Gallery-dl's DMCA move (HN front page) and r/webscraping's "endgame for scraping" (104 upvotes). Both times, the angle that resonated was: the hard part isn't writing a scraper — it's knowing when it breaks.

"Most scrapers fail silently — they return empty arrays for days before anyone notices."

This wasn't a hypothesis. It was an observation I kept hearing in different words across communities. The phrase "silent failure" connected immediately.

Principle: Lead with the maintenance problem, not the creation problem. Everyone can build a scraper. Nobody can keep it running.

Solution: Health contracts. Every program defines what "working" means: minimum rows, required fields. tap doctor checks all programs in one command. When something breaks, you know in seconds — not weeks.

health: { min_rows: 5, non_empty: ["title", "url"] }

$ tap doctor
hackernews/hot    ✔ ok     30 rows
reddit/hot        ✘ fail   0 rows — selector changed
  ↳ auto-healing...
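The contract check itself is simple enough to sketch. Here is a minimal version; `checkHealth` and its result shape are illustrative names, not tap's actual internals:

```javascript
// Minimal sketch of a health-contract check in the spirit of `tap doctor`.
// `checkHealth` and the result shape are hypothetical, not tap's real API.
function checkHealth(rows, contract) {
  const issues = [];
  if (rows.length < (contract.min_rows ?? 0)) {
    issues.push(`expected >= ${contract.min_rows} rows, got ${rows.length}`);
  }
  for (const field of contract.non_empty ?? []) {
    if (rows.some((row) => !row[field])) {
      issues.push(`field "${field}" is empty in some rows`);
    }
  }
  return { ok: issues.length === 0, issues };
}
```

A failing check (zero rows, a blank required field) is exactly what turns a silent failure into a loud one: the program still runs, but the doctor flags it immediately.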

#2 Cost anxiety is real and specific

Caveman hit 727 points — a post about reducing LLM token usage. Nanocode (177 points) was about self-hosting Claude Code to understand the real cost. Developers aren't just curious about AI costs — they're anxious about them.

When I commented about token compression paying for itself at scale, it clicked because developers already feel this pain daily.

Principle: Use exact numbers. "$1.05 per run" and "300x cheaper" land. "More affordable" doesn't. Developers think in math, not adjectives.

Solution: The compiler model. AI runs once at authoring time (~$0.15), produces a deterministic program, and every subsequent execution is $0. Fifty daily automations: $18,000/year with AI agents vs ~$60/year with compiled programs.
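The arithmetic behind those numbers is worth making explicit. The $1/run agent price comes from the comparison above; the assumption that each compiled program needs roughly eight $0.15 re-compiles a year (when a site changes) is mine, not a tap figure:

```javascript
// Back-of-the-envelope yearly cost for 50 daily automations.
const automations = 50;

// AI agent: every single run pays for tokens (~$1/run).
const agentYearly = automations * 365 * 1; // $18,250

// Compiler model: only authoring/healing pays. Assume each program is
// re-compiled ~8 times a year at 15 cents (an assumption, not a tap figure).
const recompileCents = 15;
const compiledYearly = (automations * 8 * recompileCents) / 100; // $60

console.log(agentYearly, compiledYearly); // 18250 60
```

That ratio, roughly 300x, is why exact numbers beat adjectives: the math writes the copy for you.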

#3 "Open-source alternative" is not a value prop

The Modo post (“open-source alternative to Cursor and Windsurf”) had only 2 comments despite being on the front page. My feedback: users don't switch tools for ideology — they switch for workflow improvements.

"The README should lead with a concrete before/after: 'In Cursor you do X in 5 steps, in Modo you do it in 1.' That's what converts users."

This insight immediately changed how I position my own tool. Instead of "deterministic alternative to Browser Use," I now say "Browser Use costs $1.05/run. This costs $0."

Principle: Show the delta, not the category. "I'm like X but open source" tells users nothing about why they should switch.

Solution: Concrete comparison. Browser Use: $0.50–$2.00/run, 60–95% reliability, 30–120s. Tap: $0/run, 100% deterministic, 1–5s. Same task, measurable difference.

#4 Local-first is having a moment

Three unrelated posts all trended around the same theme: developers are increasingly allergic to tools that phone home. Privacy, latency, reliability — the reasons vary, but the trend is clear: if it can run locally, it should.

Principle: "Runs on your machine, works offline" is a feature worth highlighting, not an implementation detail to bury.

Solution: Tap programs are plain .tap.js files that execute locally. No API calls at runtime, no data leaving your device, no cloud dependency. They work on a plane, in a cabin, wherever your laptop goes.

#5 Legal pressure on scraping is accelerating

Gallery-dl's DMCA notice trended on both HN and r/programming simultaneously. The discussion wasn't about the specific tool — it was about the pattern: open-source scraping tools face increasing legal pressure.

My comment about Codeberg as "pragmatic infrastructure for tools that platforms want to suppress" resonated because developers see this pattern repeating (youtube-dl, yt-dlp, now gallery-dl).

Principle: API-first data access is both technically superior (structured data, stable endpoints) and legally safer (no "circumvention" arguments). Position accordingly.

Solution: tap.fetch() calls site APIs directly — structured JSON, stable endpoints, no DOM parsing. Only falls back to browser rendering when no API exists. Less breakage, less legal surface area.
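The fallback pattern is easy to sketch. In this hedged version, `fetchItems` and its parameters are illustrative; `tap.fetch()`'s real signature isn't shown here:

```javascript
// API-first data access with a browser fallback, as a sketch.
// `fetchItems` and `renderInBrowser` are illustrative, not tap's internals.
async function fetchItems(apiUrl, renderInBrowser) {
  try {
    const res = await fetch(apiUrl); // structured JSON from a stable endpoint
    if (!res.ok) throw new Error(`HTTP ${res.status}`);
    return await res.json();         // no DOM parsing, no brittle selectors
  } catch {
    // Only pay the cost (and fragility) of rendering when no API exists.
    return renderInBrowser();
  }
}
```

Fewer moving parts to break, and no DOM scraping unless there is genuinely no other way in.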

#6 Infrastructure beats features

Switzerland's 25 Gbit/s internet (315 points, 249 comments) wasn't about speed — it was about structural fairness. Open fiber access vs. local monopolies. Utility vs. marketplace.

The parallel to automation tooling: AI agents at $1/run create a cost barrier. Deterministic programs that run at $0 are infrastructure — accessible to anyone, owned by the user, no ongoing rent.

Principle: Frame your tool as infrastructure people own, not a service they rent. "Stop renting automation. Start owning it."

Solution: Every .tap.js is a file you own. Git-versionable, diffable, composable. Cancel your subscription and your programs keep running. No vendor lock-in, no API keys required at runtime. You own the automation like you own your code.

From Insights to Action

These 6 principles aren't abstract. I used them the same day:

Insight → Action taken

Silent failure → Already had a blog post. Validated it's the strongest hook.
Concrete numbers → Added "$0" and comparison tables to the landing page.
Not "alternative" → Rewrote the FAQ: "Browser Use costs $0.50–$2.00/run, Tap costs $0."
Local-first → Added a "Local-First" card to the landing page's Why section.
Legal clarity → Added an "API First" card and an FAQ about scraping legality.
Infrastructure → CTA changed to "Stop renting automation. Start owning it."

Total time: one afternoon of commenting + one evening of landing page updates. No focus groups. No surveys. No A/B testing infrastructure. Just conversations with developers in places they already hang out.

The Playbook

If you're building a developer tool and struggling with positioning:

  1. Don't start with a landing page. Start with 10 comments on relevant posts.
  2. Each comment tests one angle. One pain point, one insight, one framing.
  3. Upvotes = validation. High-scoring posts where your comment resonates = confirmed pain point.
  4. Silence = signal too. If nobody engages with your angle, it's not a pain point — it's a feature you think is important but users don't care about.
  5. Turn validated angles into content. Blog post from the best angle. Landing page copy from the specific phrases that worked.
  6. Never link your product in comments. Share expertise. Build credibility. The product link lives on your profile, not in your comments.

Comments are conversations. Conversations reveal what people actually care about. That's worth more than any amount of competitor analysis or market research.


The tool I used to validate these insights

Tap turns AI into a compiler for browser automation. AI writes a program once, then it runs forever at $0. The positioning came from the comments. The product came from the pain.
