Googlebot handles noindex headers very elegantly. Google advises leaving as many routes as possible open and using headers for high-fidelity indexing rules.
Discourse adds a special `X-Robots-Tag: noindex` header to the user, badge, group, search and tag routes.
Following up on b52143feff8c32f2, Googlebot now gets special handling.
The rest of the crawlers get a far more aggressive disallow list to protect against excessive crawling.
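As a rough illustration of the header approach (not the actual Discourse code; the controller and filter names are assumptions), a Rails controller can stamp the response so crawlers skip indexing even though the route stays crawlable:

```ruby
# Minimal sketch, assuming a plain Rails controller.
class UsersController < ApplicationController
  before_action :add_noindex_header

  private

  # Crawlers that honour X-Robots-Tag (e.g. Googlebot) will not index this
  # response, even though robots.txt leaves the route open for crawling.
  def add_noindex_header
    response.headers["X-Robots-Tag"] = "noindex"
  end
end
```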
DEV: Replace instances of Discourse.base_uri with Discourse.base_path
This is clearer because `base_uri` is actually just a path prefix. It continues the work started in 555f467.
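For illustration only (the subfolder value below is an assumption), the renamed method reads as what it is: a prefix that gets prepended to application paths:

```ruby
# Illustration: the value is a path prefix, not a full URI.
# "/forum" is an assumed subfolder-install value; root installs would use "".
base = Discourse.base_path          # e.g. "/forum" or ""
robots_url = "#{base}/robots.txt"   # e.g. "/forum/robots.txt"
puts robots_url
```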
Google no longer supports the use of robots.txt to block indexing.
See https://support.google.com/webmasters/answer/6062608 and
https://support.google.com/webmasters/answer/93710
Previous commits have added the `noindex` header to the appropriate pages;
now we need to remove those paths from robots.txt so the pages can be
crawled.
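One way to sanity-check a route before dropping it from robots.txt is to confirm the response already carries the header. This is only a hedged spot-check with a placeholder URL:

```ruby
require "net/http"

# Hypothetical spot check (placeholder URL): if the response already carries
# X-Robots-Tag: noindex, the path can be removed from robots.txt and Google
# will still keep the page out of its index.
uri = URI("https://discourse.example.com/u/some-user")
response = Net::HTTP.get_response(uri)
puts response["X-Robots-Tag"] # expect it to include "noindex"
```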
Follow up to:
13f229808a22db9e1032832a313ab701b66614c8
b6765aac4b532c026418a7ffd9effd0741ab8a37
676be3a853454a33cf627c3d570feb37d3bb0bfd
07b728c5e557c9aae91c51f3eaac5c32d479f2a2
c94e6a9a66757ea48d99e3ee8d880523871cb6f4
* FEATURE: Allow customization of robots.txt
This allows admins to customize/override the content of the robots.txt
file at /admin/customize/robots. That page is not linked anywhere in the
UI; admins have to type the URL manually to access it. A rough sketch of
serving the override follows the notes below.
* use Ember.computed.not
* Jeff feedback
* Feedback
* Remove unused import
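A minimal sketch of how such an override could be served, assuming a hypothetical `overridden_robots_txt` site setting and `default_robots_txt` helper (not necessarily the actual implementation):

```ruby
# Hypothetical sketch: serve the admin-provided override when present,
# otherwise fall back to the normally generated robots.txt.
class RobotsTxtController < ApplicationController
  def index
    overridden = SiteSetting.overridden_robots_txt # assumed setting name
    if overridden.present?
      render plain: overridden
    else
      render plain: default_robots_txt # assumed helper building the default rules
    end
  end
end
```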
This reduces the chance of errors where consumers of strings mutate their
inputs, and it reduces the app's memory usage.
The test suite passes now, but there may be some loose ends, so we will
run a few sites on a branch prior to merging.
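This appears to be about Ruby's frozen string literals; a quick illustration of the mutation error they surface (behavior as of Ruby 2.5+, where the exception class is FrozenError):

```ruby
# frozen_string_literal: true
# With this magic comment, every string literal in the file is frozen, so a
# consumer that tries to mutate one in place raises instead of silently
# altering shared state. Identical literals can also be deduplicated,
# which is where the memory savings come from.
greeting = "hello"
greeting << " world" # raises FrozenError: can't modify frozen String
```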
Also moved Bing to a default crawl delay so that no more than one request every 5 seconds is allowed.
New site settings:
"slow_down_crawler_user_agents" - list of crawlers that will be slowed down
"slow_down_crawler_rate" - how many seconds to wait between requests
Not enforced server side yet
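Since the rate is not enforced server-side yet, the settings presumably feed the generated robots.txt. Here is a hedged sketch of emitting Crawl-delay blocks; only the setting names come from this change, while the `"|"`-separated list format and the output shape are assumptions:

```ruby
# Hedged sketch: turn the two site settings into Crawl-delay directives for
# the robots.txt output. List format and template are assumptions.
agents = SiteSetting.slow_down_crawler_user_agents.split("|").map(&:strip)
delay  = SiteSetting.slow_down_crawler_rate

fragment = agents.map { |agent| "User-agent: #{agent}\nCrawl-delay: #{delay}" }.join("\n\n")
puts fragment
# e.g. for "SomeBot" with a rate of 10:
# User-agent: SomeBot
# Crawl-delay: 10
```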