Often we need to amend our schema, and it is tempting to use
drop_table, rename_column and drop_column to do so.
The trouble is that existing code running in production
can depend on the previous schema, leaving the application
broken until the new code base is deployed.
This commit enforces new rules so that we never drop tables or
columns directly in migrations; instead we use Migration::ColumnDropper and
Migration::TableDropper to defer dropping the database objects.
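A minimal sketch of a deferred column drop is below. The table and column
names are illustrative, and the execute_drop call is an assumption loosely
based on later Discourse versions; check the current Migration::ColumnDropper
source for the exact API.

```ruby
# Hypothetical migration: we never call remove_column directly, because the
# currently deployed code may still read the column. The drop is delegated to
# Migration::ColumnDropper so it only happens once old code no longer uses it.
class DropLegacyExampleColumn < ActiveRecord::Migration[5.1]
  def up
    Migration::ColumnDropper.execute_drop(:topics, %i[legacy_example])
  end

  def down
    raise ActiveRecord::IrreversibleMigration
  end
end
```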
In the past we used suppress_from_homepage, which had mixed semantics:
it removed topics from the category list if the category list was on the
homepage, and unconditionally removed them from latest.
The new setting explicitly removes topics only from the latest list and
leaves the category list alone, as in the sketch below.
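A minimal sketch of the new behaviour, assuming the replacement is a
per-category flag named suppress_from_latest (the name here is an assumption):

```ruby
# Only the latest list filters on the flag; the category page is untouched.
def latest_topics(topics)
  topics.reject { |topic| topic.category&.suppress_from_latest }
end

def category_topics(category)
  category.topics # no suppression applied here
end
```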
* SPEC: PollFeedJob parsing an Atom feed
* add FeedItemAccessor to provide a consistent interface for accessing a
  feed item's tag content (see the sketch after this list)
* add FeedElementInstaller to install non-standard and non-namespaced feed elements
* FEATURE: replace SimpleRSS with the Ruby RSS module
* use FinalDestination to resolve the feed URL and Excon to download it
* support namespaced elements with FeedElementInstaller
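Below is a minimal sketch of parsing a feed with the Ruby RSS module and
reading an item's content through a FeedItemAccessor-style helper. The tag
list and helper shape are assumptions for illustration, not the actual
Discourse implementation.

```ruby
require "rss"

# Hypothetical accessor: try a few candidate tags and return the first one
# with non-empty content, so RSS 2.0 and Atom items can be read the same way.
class FeedItemAccessor
  CONTENT_TAGS = %i[content_encoded content description summary]

  def initialize(item)
    @item = item
  end

  def content
    CONTENT_TAGS.each do |tag|
      next unless @item.respond_to?(tag)
      value = @item.public_send(tag)
      # Atom text constructs wrap the string in an object with #content.
      text = value.respond_to?(:content) ? value.content : value
      return text if text && !text.to_s.strip.empty?
    end
    nil
  end
end

feed = RSS::Parser.parse(File.read("feed.xml"), false) # RSS 2.0 or Atom
feed.items.each { |item| puts FeedItemAccessor.new(item).content }
```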
This change-set allows setting different defaults for different locales
(a minimal sketch of the per-locale lookup follows the list below).
It also:
- adds extensive testing around site setting validation
- raises a deprecation error if a site setting's default is based on the environment
- relocates site settings for dev and test into the initializer
- deprecates client_setting in the site setting's loading process
- ensures it raises when an enum site setting is set to an invalid value
- promotes default_locale to the `required` category
- fixes incorrect default settings and validation
- fixes the type check for site settings
- creates a benchmark for site settings
- sets reasonable defaults for Chinese
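Below is a minimal sketch of how a per-locale default lookup can work. The
class and method names are illustrative, not the actual SiteSettings
implementation; only the idea of falling back from a locale-specific default
to the global default comes from this change-set.

```ruby
# Hypothetical defaults provider: locale-specific values win, otherwise the
# global default applies.
class DefaultsProvider
  def initialize(defaults, locale_defaults)
    @defaults = defaults                # { setting_name => value }
    @locale_defaults = locale_defaults  # { locale => { setting_name => value } }
  end

  def value_for(name, locale)
    @locale_defaults.dig(locale, name) || @defaults[name]
  end
end

provider = DefaultsProvider.new(
  { min_post_length: 20 },
  { zh_CN: { min_post_length: 10 } } # Chinese needs fewer characters per post
)
provider.value_for(:min_post_length, :zh_CN) # => 10
provider.value_for(:min_post_length, :en)    # => 20
```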
The mail class seems to handle mails sent with Content-Transfer-Encoding: 8bit
somewhat weirdly: it decodes them (to UTF-8), changes the raw source to base64,
and does not modify the charset in the Content-Type header.
This leads to Discourse trying the declared message encoding (in my example
ISO-8859-1) first and, if the result does not contain any unparseable
characters, using it.
Sadly, in ISO-8859-1, every byte sequence is valid.
Fix this by always trying to decode as UTF-8 first. The probability of someone
using another encoding that cleanly (but wrongly) decodes as UTF-8 should be
fairly low.
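A minimal sketch of the approach, using plain Ruby string encoding helpers
(the method name and shape are illustrative, not the actual Discourse code):

```ruby
# Try UTF-8 before the charset declared in the mail headers; fall back to
# scrubbing invalid bytes if neither decodes cleanly.
def try_utf8_first(raw, declared_charset)
  ["UTF-8", declared_charset].compact.uniq.each do |charset|
    candidate = raw.dup.force_encoding(charset)
    return candidate.encode("UTF-8") if candidate.valid_encoding?
  end
  raw.dup.force_encoding("UTF-8").scrub("")
end

try_utf8_first("héllo".b, "ISO-8859-1") # => "héllo" (UTF-8), not mangled Latin-1
```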