Prevent excessive disk consumption by log files

This commit enables management of the main Elasticsearch log files
out-of-the-box by the following changes:
 - compress rolled logs
 - roll logs every 128 MB
 - maintain a sliding window of logs
 - remove the oldest logs, maintaining no more than 2 GB of compressed
   logs on disk
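
A minimal sketch of the properties behind these bullets, excerpted from the
config diff below (the `128MB` and `2GB` values are the new out-of-the-box
limits):

--------------------------------------------------
# .gz in the file pattern compresses each rolled log
appender.rolling.filePattern = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}-%d{yyyy-MM-dd}-%i.log.gz
# roll when the active log reaches 128 MB
appender.rolling.policies.size.type = SizeBasedTriggeringPolicy
appender.rolling.policies.size.size = 128MB
# keep a sliding window of rolled logs (no fixed upper index)
appender.rolling.strategy.type = DefaultRolloverStrategy
appender.rolling.strategy.fileIndex = nomax
# delete the oldest rolled logs once the compressed logs exceed 2 GB
appender.rolling.strategy.action.type = Delete
appender.rolling.strategy.action.basepath = ${sys:es.logs.base_path}
appender.rolling.strategy.action.condition.type = IfFileName
appender.rolling.strategy.action.condition.glob = ${sys:es.logs.cluster_name}-*
appender.rolling.strategy.action.condition.nested_condition.type = IfAccumulatedFileSize
appender.rolling.strategy.action.condition.nested_condition.exceeds = 2GB
--------------------------------------------------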

Relates #25660
Jason Tedor 2017-07-12 15:52:00 -04:00 committed by GitHub
parent 8b846f9141
commit 86e9438d3c
2 changed files with 43 additions and 14 deletions


@@ -14,11 +14,21 @@ appender.rolling.name = rolling
appender.rolling.fileName = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}.log
appender.rolling.layout.type = PatternLayout
appender.rolling.layout.pattern = [%d{ISO8601}][%-5p][%-25c{1.}] %marker%.-10000m%n
appender.rolling.filePattern = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}-%d{yyyy-MM-dd}.log
appender.rolling.filePattern = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}-%d{yyyy-MM-dd}-%i.log.gz
appender.rolling.policies.type = Policies
appender.rolling.policies.time.type = TimeBasedTriggeringPolicy
appender.rolling.policies.time.interval = 1
appender.rolling.policies.time.modulate = true
appender.rolling.policies.size.type = SizeBasedTriggeringPolicy
appender.rolling.policies.size.size = 128MB
appender.rolling.strategy.type = DefaultRolloverStrategy
appender.rolling.strategy.fileIndex = nomax
appender.rolling.strategy.action.type = Delete
appender.rolling.strategy.action.basepath = ${sys:es.logs.base_path}
appender.rolling.strategy.action.condition.type = IfFileName
appender.rolling.strategy.action.condition.glob = ${sys:es.logs.cluster_name}-*
appender.rolling.strategy.action.condition.nested_condition.type = IfAccumulatedFileSize
appender.rolling.strategy.action.condition.nested_condition.exceeds = 2GB
rootLogger.level = info
rootLogger.appenderRef.console.ref = console


@@ -117,28 +117,47 @@ appender.rolling.type = RollingFile <1>
appender.rolling.name = rolling
appender.rolling.fileName = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}.log <2>
appender.rolling.layout.type = PatternLayout
appender.rolling.layout.pattern = [%d{ISO8601}][%-5p][%-25c] %.10000m%n
appender.rolling.filePattern = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}-%d{yyyy-MM-dd}.log <3>
appender.rolling.layout.pattern = [%d{ISO8601}][%-5p][%-25c{1.}] %marker%.-10000m%n
appender.rolling.filePattern = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}-%d{yyyy-MM-dd}-%i.log.gz <3>
appender.rolling.policies.type = Policies
appender.rolling.policies.time.type = TimeBasedTriggeringPolicy <4>
appender.rolling.policies.time.interval = 1 <5>
appender.rolling.policies.time.modulate = true <6>
appender.rolling.policies.size.type = SizeBasedTriggeringPolicy <7>
appender.rolling.policies.size.size = 256MB <8>
appender.rolling.strategy.type = DefaultRolloverStrategy
appender.rolling.strategy.fileIndex = nomax
appender.rolling.strategy.action.type = Delete <9>
appender.rolling.strategy.action.basepath = ${sys:es.logs.base_path}
appender.rolling.strategy.action.condition.type = IfFileName <10>
appender.rolling.strategy.action.condition.glob = ${sys:es.logs.cluster_name}-* <11>
appender.rolling.strategy.action.condition.nested_condition.type = IfAccumulatedFileSize <12>
appender.rolling.strategy.action.condition.nested_condition.exceeds = 2GB <13>
--------------------------------------------------
<1> Configure the `RollingFile` appender
<2> Log to `/var/log/elasticsearch/production.log`
<3> Roll logs to `/var/log/elasticsearch/production-yyyy-MM-dd.log`
<4> Using a time-based roll policy
<3> Roll logs to `/var/log/elasticsearch/production-yyyy-MM-dd-i.log.gz`; logs
will be compressed on each roll and `i` will be incremented
<4> Use a time-based roll policy
<5> Roll logs on a daily basis
<6> Align rolls on the day boundary (as opposed to rolling every twenty-four
hours)
<7> Use a size-based roll policy
<8> Roll logs after 256 MB
<9> Use a delete action when rolling logs
<10> Only delete logs matching a file pattern
<11> The glob matches only the main logs, so only they are deleted
<12> Only delete if we have accumulated too many compressed logs
<13> The size condition on the compressed logs is 2 GB
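
As a rough illustration, with the values assumed by the callouts above
(`es.logs.base_path` set to `/var/log/elasticsearch` and `es.logs.cluster_name`
set to `production`), the property substitutions resolve approximately as
follows; the date and index in the rolled file name are only examples:

--------------------------------------------------
# active log file
#   appender.rolling.fileName    -> /var/log/elasticsearch/production.log
# rollover pattern
#   appender.rolling.filePattern -> /var/log/elasticsearch/production-%d{yyyy-MM-dd}-%i.log.gz
# e.g. the first roll on 2017-07-12 would produce
#   /var/log/elasticsearch/production-2017-07-12-1.log.gz
--------------------------------------------------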
NOTE: Log4j's configuration parsing gets confused by any extraneous whitespace;
if you copy and paste any Log4j settings on this page, or enter any Log4j
configuration in general, be sure to trim any leading and trailing whitespace.
If you append `.gz` or `.zip` to `appender.rolling.filePattern`, then the logs
will be compressed as they are rolled.
Note that you can replace `.gz` with `.zip` in `appender.rolling.filePattern` to
compress the rolled logs using the zip format. If you remove the `.gz`
extension then the logs will not be compressed as they are rolled.
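
For example, to compress rolled logs with zip instead of gzip, only the
extension in the file pattern changes; the rest of the appender configuration
stays as above:

--------------------------------------------------
appender.rolling.filePattern = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}-%d{yyyy-MM-dd}-%i.log.zip
--------------------------------------------------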
If you want to retain log files for a specified period of time, you can use a
rollover strategy with a delete action.
@@ -148,22 +167,22 @@ rollover strategy with a delete action.
appender.rolling.strategy.type = DefaultRolloverStrategy <1>
appender.rolling.strategy.action.type = Delete <2>
appender.rolling.strategy.action.basepath = ${sys:es.logs.base_path} <3>
appender.rolling.strategy.action.condition.type = IfLastModified <4>
appender.rolling.strategy.action.condition.age = 7D <5>
appender.rolling.strategy.action.PathConditions.type = IfFileName <6>
appender.rolling.strategy.action.PathConditions.glob = ${sys:es.logs.cluster_name}-* <7>
appender.rolling.strategy.action.condition.type = IfFileName <4>
appender.rolling.strategy.action.condition.glob = ${sys:es.logs.cluster_name}-* <5>
appender.rolling.strategy.action.condition.nested_condition.type = IfLastModified <6>
appender.rolling.strategy.action.condition.nested_condition.age = 7D <7>
--------------------------------------------------
<1> Configure the `DefaultRolloverStrategy`
<2> Configure the `Delete` action for handling rollovers
<3> The base path to the Elasticsearch logs
<4> The condition to apply when handling rollovers
<5> Retain logs for seven days
<6> Only delete files older than seven days if they match the specified glob
<7> Delete files from the base path matching the glob
<5> Delete files from the base path matching the glob
`${sys:es.logs.cluster_name}-*`; this is the glob that log files are rolled
to, and it is needed so that only the rolled Elasticsearch logs are deleted,
not the deprecation and slow logs as well
<6> A nested condition to apply to files matching the glob
<7> Retain logs for seven days
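
For illustration, assuming the same `production` cluster and
`/var/log/elasticsearch` base path as in the earlier example (the file names
below are hypothetical), the delete action would behave roughly like this on
each rollover:

--------------------------------------------------
# scanned under basepath /var/log/elasticsearch:
#   production-2017-07-01-1.log.gz  matches production-*, older than 7D -> deleted
#   production-2017-07-12-1.log.gz  matches production-*, within 7D     -> kept
#   production_deprecation.log      does not match production-*         -> left alone
--------------------------------------------------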
Multiple configuration files can be loaded (in which case they will get merged)
as long as they are named `log4j2.properties` and have the Elasticsearch config