# As a condition of accessing this website, you agree to abide by the following
# content signals:
#
# (a) If a content-signal = yes, you may collect content for the corresponding
# use.
# (b) If a content-signal = no, you may not collect content for the
# corresponding use.
# (c) If the website operator does not include a content signal for a
# corresponding use, the website operator neither grants nor restricts
# permission via content signal with respect to the corresponding use.
#
# The content signals and their meanings are:
#
# search: building a search index and providing search results (e.g., returning
# hyperlinks and short excerpts from your website's contents). Search does not
# include providing AI-generated search summaries.
#
# ai-input: inputting content into one or more AI models (e.g., retrieval
# augmented generation, grounding, or other real-time taking of content for
# generative AI search answers).
#
# ai-train: training or fine-tuning AI models.
#
# ANY RESTRICTIONS EXPRESSED VIA CONTENT SIGNALS ARE EXPRESS RESERVATIONS OF
# RIGHTS UNDER ARTICLE 4 OF THE EUROPEAN UNION DIRECTIVE 2019/790 ON COPYRIGHT
# AND RELATED RIGHTS IN THE DIGITAL SINGLE MARKET.

# BEGIN Cloudflare Managed content

User-Agent: *
Content-signal: search=yes,ai-train=no
Allow: /

User-agent: Amazonbot
Disallow: /

User-agent: Applebot-Extended
Disallow: /

User-agent: Bytespider
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: ClaudeBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: GPTBot
Disallow: /

User-agent: meta-externalagent
Disallow: /

# END Cloudflare Managed Content

# Block all known AI crawlers and assistants
# from using content for training AI models.
# Source: https://robotstxt.com/ai

User-Agent: GPTBot
User-Agent: ClaudeBot
User-Agent: Claude-User
User-Agent: Claude-SearchBot
User-Agent: CCBot
User-Agent: Google-Extended
User-Agent: Applebot-Extended
User-Agent: Facebookbot
User-Agent: Meta-ExternalAgent
User-Agent: Meta-ExternalFetcher
User-Agent: diffbot
User-Agent: PerplexityBot
User-Agent: Perplexity-User
User-Agent: Omgili
User-Agent: Omgilibot
User-Agent: webzio-extended
User-Agent: ImagesiftBot
User-Agent: Bytespider
User-Agent: TikTokSpider
User-Agent: Amazonbot
User-Agent: Youbot
User-Agent: SemrushBot-OCOB
User-Agent: Petalbot
User-Agent: VelenPublicWebCrawler
User-Agent: TurnitinBot
User-Agent: Timpibot
User-Agent: OAI-SearchBot
User-Agent: ICC-Crawler
User-Agent: AI2Bot
User-Agent: AI2Bot-Dolma
User-Agent: DataForSeoBot
User-Agent: AwarioBot
User-Agent: AwarioSmartBot
User-Agent: AwarioRssBot
User-Agent: Google-CloudVertexBot
User-Agent: PanguBot
User-Agent: Kangaroo Bot
User-Agent: Sentibot
User-Agent: img2dataset
User-Agent: Meltwater
User-Agent: Seekr
User-Agent: peer39_crawler
User-Agent: cohere-ai
User-Agent: cohere-training-data-crawler
User-Agent: DuckAssistBot
User-Agent: Scrapy
User-Agent: Cotoyogi
User-Agent: aiHitBot
User-Agent: Factset_spyderbot
User-Agent: FirecrawlAgent
Disallow: /
DisallowAITraining: /

# Block any non-specified AI crawlers (e.g., new
# or unknown bots) from using content for training
# AI models, while allowing the website to be
# indexed and accessed by bots. These directives
# are still experimental and may not be supported
# by all AI crawlers.

User-Agent: *
DisallowAITraining: /
Content-Usage: ai=n
Allow: /
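
# Illustrative example only (commented out, not part of this site's policy):
# a Content-signal line may combine any of the three signals defined in the
# preamble above. A site that permits search and real-time AI input but not
# model training could, for instance, declare:
#
# User-Agent: *
# Content-signal: search=yes,ai-input=yes,ai-train=no
# Allow: /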