RBV

Robots.txt Validator

Validate robots.txt directives and detect common SEO mistakes

SEO & Schema
πŸ”’ 100% client-side β€” your data never leaves this page
Page mode
robots.txt Input


Paste your robots.txt and review errors and warnings first; stricter checks, such as unknown directives, run in Deep mode.

Page reading mode

Quick mode keeps the summary, direct answers, and next steps so you can confirm the tool fits your needs and get a result faster.

About this tool

Robots.txt Validator checks crawler directive quality before you ship changes. It validates core structure, reports unknown directives, detects rules placed before user-agent groups, and verifies sitemap URL format and duplicates. This helps prevent accidental crawl blocking and indexing loss caused by subtle configuration mistakes. The tool also outputs a normalized robots.txt view for cleaner reviews and commits. It runs fully client-side, so your rule drafts never leave the browser.
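The checks described above can be sketched in a few lines. This is a minimal illustration, not the tool's actual implementation; the directive list, function name, and messages are assumptions for the example.

```python
from urllib.parse import urlparse

# Illustrative set of directives treated as "known"; a real validator
# may recognize more (e.g. vendor extensions).
KNOWN_DIRECTIVES = {"user-agent", "allow", "disallow", "sitemap", "crawl-delay"}

def validate_robots(text: str) -> list[str]:
    """Return a list of human-readable issues found in a robots.txt draft."""
    issues: list[str] = []
    seen_sitemaps: set[str] = set()
    in_group = False  # becomes True once the first User-agent line is seen
    for lineno, raw in enumerate(text.splitlines(), start=1):
        line = raw.split("#", 1)[0].strip()  # drop comments and whitespace
        if not line or ":" not in line:
            continue
        name, _, value = (part.strip() for part in line.partition(":"))
        name = name.lower()
        if name not in KNOWN_DIRECTIVES:
            issues.append(f"line {lineno}: unknown directive '{name}'")
        elif name == "user-agent":
            in_group = True
        elif name == "sitemap":
            url = urlparse(value)
            if url.scheme not in ("http", "https") or not url.netloc:
                issues.append(f"line {lineno}: sitemap is not an absolute URL")
            if value in seen_sitemaps:
                issues.append(f"line {lineno}: duplicate sitemap '{value}'")
            seen_sitemaps.add(value)
        elif not in_group:
            # Allow/Disallow/Crawl-delay must belong to a User-agent group
            issues.append(f"line {lineno}: '{name}' appears before any User-agent group")
    return issues
```

A valid file yields an empty list; each string in the result maps to one error or warning row in the tool's output.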

Direct Answers

Why does a robots.txt file still get warnings even when it looks readable?

Human-readable rules can still contain malformed groups, unknown directives, duplicate sitemap lines, or bad protocol usage.

Should sitemap lines inside robots.txt use HTTPS?

Yes, when your canonical site is served over HTTPS. Mixed-protocol sitemap declarations send crawlers conflicting signals and create unnecessary SEO confusion.
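A mixed-protocol check is straightforward to express. The helper below is a hypothetical sketch, assuming the canonical origin is known; the function name and message wording are not part of the tool's API.

```python
from urllib.parse import urlparse

def sitemap_protocol_warnings(sitemap_urls: list[str], canonical_origin: str) -> list[str]:
    """Warn about sitemap URLs whose scheme differs from the canonical site's."""
    canonical_scheme = urlparse(canonical_origin).scheme
    return [
        f"sitemap '{url}' uses {urlparse(url).scheme}, "
        f"but the canonical site is {canonical_scheme}"
        for url in sitemap_urls
        if urlparse(url).scheme != canonical_scheme
    ]
```

For an HTTPS canonical origin, any `http://` sitemap line is flagged while matching `https://` lines pass.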