FIX: Remove paths from robots.txt in favor of noindex header

Google no longer supports using robots.txt to block indexing: a path that
is disallowed in robots.txt can still appear in search results if it is
linked from elsewhere. See https://support.google.com/webmasters/answer/6062608
and https://support.google.com/webmasters/answer/93710
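
For illustration only (example rules, not copied from Discourse's files): a
robots.txt `Disallow` rule only stops crawling, while a `noindex` response
header on the page itself is what keeps the page out of the index, and
Google can only see that header if it is allowed to crawl the page.

    # robots.txt: stops crawling, but the URL can still be indexed
    # if it is linked from other sites
    User-agent: *
    Disallow: /users/

    # HTTP response header on the page itself: the supported way to
    # block indexing; it requires the page to be crawlable
    X-Robots-Tag: noindex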

Previous commits added the `noindex` header to the appropriate pages; now we
need to remove those paths from robots.txt so the pages can be crawled and
the header can actually be seen.
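
As a minimal sketch of what adding the `noindex` header means in a Rails
controller (hypothetical names, not the code from the commits listed below):

    class UsersController < ApplicationController
      before_action :add_noindex_header

      private

      # Ask crawlers not to index the response. The page must remain
      # crawlable (not disallowed in robots.txt) for this to take effect.
      def add_noindex_header
        response.headers["X-Robots-Tag"] = "noindex"
      end
    end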

Follow-up to:
13f229808a
b6765aac4b
676be3a853
07b728c5e5
c94e6a9a66
Joshua Rosenfeld 2020-06-25 13:55:06 -04:00
parent 01b6349a67
commit b52143feff
GPG Key ID: BFD6217DEA2C0A95
1 changed file with 0 additions and 13 deletions

app/controllers/robots_txt_controller.rb

@@ -10,14 +10,6 @@ class RobotsTxtController < ApplicationController
   DISALLOWED_PATHS ||= %w{
     /auth/
     /assets/browser-update*.js
-    /users/
-    /u/
-    /my/
-    /badges/
-    /search
-    /search/
-    /tags
-    /tags/
     /email/
     /session
     /session/
@@ -27,11 +19,6 @@ class RobotsTxtController < ApplicationController
     /user-api-key/
     /*?api_key*
     /*?*api_key*
-    /groups
-    /groups/
-    /t/*/*.rss
-    /tags/*.rss
-    /c/*.rss
   }
 
   def index
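
The entries that remain in DISALLOWED_PATHS are still emitted as `Disallow`
rules for crawlers. A rough sketch of that rendering, assuming a plain-text
response (Discourse's real `index` action is more involved, so treat this as
illustrative only):

    def index
      body = +"User-agent: *\n"
      DISALLOWED_PATHS.each { |path| body << "Disallow: #{path}\n" }
      render plain: body, content_type: "text/plain"
    end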