What is robots.txt?
A plain-text file at the root of a site that tells crawlers which paths they can and can't crawl.
robots.txt is the simplest form of crawler control. It must live at the site root (`https://example.com/robots.txt`) and uses a tiny syntax: a `User-agent: *` line selects which crawlers the rules apply to, `Disallow: /admin/` blocks a path prefix, and `Allow: /` permits one. Important caveat: it's a *crawl* directive, not an indexing directive. A disallowed page can still be indexed if other sites link to it; for actual de-indexing, use a `noindex` meta tag, and note the page must remain crawlable so the crawler can see that tag. InBuild's robots.txt allows public marketing and published-site routes and disallows the authenticated app surface (`/dashboard`, `/editor`, `/api/`).
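A minimal sketch of what a file like InBuild's might look like, assuming the exact rule set and the optional `Sitemap` line (the disallowed routes come from the description above; everything else is illustrative):

```
# Apply to all crawlers
User-agent: *

# Block the authenticated app surface
Disallow: /dashboard
Disallow: /editor
Disallow: /api/

# Everything else (marketing pages, published sites) stays crawlable
Allow: /

# Hypothetical sitemap location — adjust to the real URL
Sitemap: https://example.com/sitemap.xml
```

Longest-match rules win in Google's implementation, so a more specific `Allow` can carve an exception out of a broader `Disallow` if that's ever needed.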