---
name: robots-txt
description: Publish a valid robots.txt at the site root
---

# Implement robots.txt

Publish `/robots.txt` declaring how crawlers and AI agents may access your site.

## Requirements

- Serve `/robots.txt` from the site root as `text/plain` (UTF-8, per RFC 9309) with HTTP 200
- Include at least one `User-agent:` block
- Add a `Sitemap:` directive pointing at your canonical sitemap URL
- Add explicit rules for major AI crawlers (see the ai-bot-rules skill)
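The file-level requirements above can be checked mechanically. Below is a minimal sketch of a hypothetical `check_robots_txt` helper that scans a robots.txt body for a `User-agent:` block and an absolute `Sitemap:` URL; it is illustrative, not a full RFC 9309 parser:

```python
def check_robots_txt(body: str) -> list[str]:
    """Return a list of problems found in a robots.txt body (empty list = OK)."""
    problems = []
    # Strip comments and surrounding whitespace, line by line.
    lines = [line.split("#", 1)[0].strip() for line in body.splitlines()]

    if not any(l.lower().startswith("user-agent:") for l in lines):
        problems.append("missing User-agent: block")

    sitemap_lines = [l for l in lines if l.lower().startswith("sitemap:")]
    if not sitemap_lines:
        problems.append("missing Sitemap: directive")
    for l in sitemap_lines:
        # Sitemap URLs must be absolute; split on the first colon only.
        url = l.split(":", 1)[1].strip()
        if not url.lower().startswith(("http://", "https://")):
            problems.append(f"Sitemap URL is not absolute: {url!r}")

    return problems
```

Run it against the fetched body of `/robots.txt` (after also verifying the HTTP status and `Content-Type` header, which this helper does not cover).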

## Example

```
User-agent: *
Allow: /

Sitemap: https://example.com/sitemap.xml
```
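For the AI-crawler requirement, one possible shape is a per-bot block alongside the wildcard rule. The tokens below (`GPTBot`, `ClaudeBot`, `CCBot`) are real, commonly cited AI crawler user agents, but treat this as an illustrative sketch; the ai-bot-rules skill is the authoritative source for which bots to list and what policy to apply:

```
User-agent: GPTBot
Disallow: /private/

User-agent: ClaudeBot
Disallow: /private/

User-agent: CCBot
Disallow: /

User-agent: *
Allow: /

Sitemap: https://example.com/sitemap.xml
```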

## References

- [RFC 9309 — Robots Exclusion Protocol](https://www.rfc-editor.org/rfc/rfc9309)
- [Google's robots.txt documentation](https://developers.google.com/search/docs/crawling-indexing/robots/intro)
