Commit Graph

7 Commits

Author SHA1 Message Date
Sam 7b562d2f46 FEATURE: much improved and simplified crawler detection
- phase one: does it match 'trident|webkit|gecko|chrome|safari|msie|opera'?
    if yes, it is possibly a browser

- phase two: does it also match 'rss|bot|spider|crawler|facebook|archive|wayback|ping|monitor'?
    if yes, it is probably a crawler after all

Based off: https://gist.github.com/SamSaffron/6cfad7ea3e6df321ffb7a84f93720a53
2018-01-16 15:41:45 +11:00
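
A minimal Ruby sketch of the two-phase check this commit describes (the module, method, and constant names are illustrative, not Discourse's actual code):

```ruby
# Sketch of the two-phase user-agent check; names are illustrative only.
module CrawlerDetection
  # Phase one: tokens that suggest a real browser engine.
  POSSIBLY_BROWSER = /trident|webkit|gecko|chrome|safari|msie|opera/i
  # Phase two: tokens that mark the agent as a crawler after all.
  PROBABLY_CRAWLER = /rss|bot|spider|crawler|facebook|archive|wayback|ping|monitor/i

  def self.crawler?(user_agent)
    # No browser-engine token at all: assume a crawler.
    return true unless user_agent =~ POSSIBLY_BROWSER
    # Looks like a browser, but a crawler token overrides that.
    !!(user_agent =~ PROBABLY_CRAWLER)
  end
end

CrawlerDetection.crawler?("Mozilla/5.0 (compatible; Googlebot/2.1)")               # => true
CrawlerDetection.crawler?("Mozilla/5.0 AppleWebKit/537.36 Chrome/63.0 Safari/537.36") # => false
```

Splitting the check in two keeps each pattern short: the first pass flags anything that doesn't even claim a browser engine, and the second catches crawlers that impersonate one.
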
Sam f6fdc1ebe8 FEATURE: flexible crawler detection
You can use the crawler user agents site setting to amend which user agents
are considered crawlers, based on a string match against the user agent

Also improves performance of crawler detection slightly
2017-09-29 12:31:50 +10:00
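
A rough sketch of the configurable string-match approach this commit describes; the pipe-delimited setting value and all names here are assumptions for illustration, not the exact API:

```ruby
# Hypothetical value of the "crawler user agents" site setting.
crawler_user_agents = "googlebot|bingbot|yandexbot"

# Compile one case-insensitive regex up front instead of scanning token by
# token on every request -- the kind of small performance win the commit mentions.
tokens  = crawler_user_agents.split("|").map { |t| Regexp.escape(t) }
MATCHER = Regexp.new(tokens.join("|"), Regexp::IGNORECASE)

def crawler_user_agent?(user_agent)
  !!(user_agent =~ MATCHER)
end

crawler_user_agent?("Mozilla/5.0 (compatible; YandexBot/3.0)")  # => true
crawler_user_agent?("Mozilla/5.0 (Windows NT 10.0) Chrome/63.0") # => false
```
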
Robin Ward 2a4006fe0c Add `YandexBot` to our list of crawlers 2016-07-26 13:21:37 -04:00
Andy Waite 3e50313fdc Prepare for separation of RSpec helper files
Since rspec-rails 3, the default installation creates two helper files:
* `spec_helper.rb`
* `rails_helper.rb`

`spec_helper.rb` is intended as a way of running specs that do not
require Rails, whereas `rails_helper.rb` loads Rails (as Discourse's
current `spec_helper.rb` does).

For more information:

https://www.relishapp.com/rspec/rspec-rails/docs/upgrade#default-helper-files

In this commit, I've simply replaced all instances of `spec_helper` with
`rails_helper`, and renamed the original `spec_helper.rb`.

This brings the Discourse project closer to the standard usage of RSpec
in a Rails app.

At present, every spec relies on loading Rails, but there are likely
many that don't need to. In a future pull request, I hope to introduce a
separate, minimal `spec_helper.rb` which can be used in tests which
don't rely on Rails.
2015-12-01 20:39:42 +00:00
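
For reference, the two helpers that rspec-rails 3 generates look roughly like this (a sketch of the convention the commit links to, not Discourse's actual files):

```ruby
# spec/spec_helper.rb -- minimal helper, loads no Rails (for fast plain-Ruby specs)
RSpec.configure do |config|
  config.expect_with :rspec do |expectations|
    expectations.syntax = :expect
  end
end

# spec/rails_helper.rb -- boots the full Rails app on top of spec_helper
ENV['RAILS_ENV'] ||= 'test'
require 'spec_helper'
require File.expand_path('../../config/environment', __FILE__)
require 'rspec/rails'
```

Under this split, specs that need Rails `require 'rails_helper'`, while the rest can keep requiring only `spec_helper`.
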
Luciano Sousa 0fd98b56d8 few components with rspec3 syntax 2015-01-09 13:34:37 -03:00
Vikhyat Korrapati e3702ecb30 Improved crawler detection: add Twitterbot, Facebook, curl, Bing, Baidu. 2014-03-16 19:30:20 +05:30
Robin Ward c4b5455c21 REFACTOR: Rename `GooglebotDetection` to `CrawlerDetection` because we
will likely whitelist more crawlers in the future.
2014-02-20 16:07:02 -05:00