Service stdout log files, move logs to log/. (#12570)

* Service stdout log files, move logs to log/.

Two changes that make log behavior cleaner:

1) Redirect messages from the Java runtime to their own log files.
   Otherwise, they would get jumbled up in the output of the all-in-one
   start command.

2) Use log/ instead of bin/log/ for the default log directory. Makes them
   easier to find.

Additionally, add documentation about how to avoid the reflective
access warnings in Java 11.

* Spelling.

* See if code formatting affects spelling.
Gian Merlino 2022-06-02 22:14:29 -07:00 committed by GitHub
parent 9c8e6bb000
commit a27f4f5740
4 changed files with 66 additions and 20 deletions


@@ -27,7 +27,7 @@ Apache Druid processes will emit logs that are useful for debugging to log files
These processes also emit periodic [metrics](../configuration/index.md#enabling-metrics) about their state.
Metric info logs can be disabled with `-Ddruid.emitter.logging.logLevel=debug`.
Druid uses [log4j2](http://logging.apache.org/log4j/2.x/) for logging.
The default configuration file, `log4j2.xml`, ships with Druid under `conf/druid/{config}/_common/log4j2.xml`.
By default, Druid uses a `RollingRandomAccessFile` appender that rolls over daily and keeps up to 7 days of log files.
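For example, to quiet the periodic metric logs for a single service, the property above can be added to that service's `jvm.config` (one JVM argument per line). A minimal sketch; the micro-quickstart path is an assumption, so adjust it for your layout:

```bash
# Add the metric log-level property to the Historical service only
# (jvm.config takes one JVM argument per line). Path assumed.
echo '-Ddruid.emitter.logging.logLevel=debug' \
  >> conf/druid/single-server/micro-quickstart/historical/jvm.config
```
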
@@ -71,19 +71,49 @@ An example log4j2.xml file is shown below:
```
> NOTE:
> Although the log4j configuration file is shared with Druid's task peon processes,
> the appenders in this file DO NOT take effect for peon processes, which always output logs to standard output.
> Middle Managers redirect task logs from standard output to [long-term storage](index.md#log-long-term-storage).
>
> However, log level settings do take effect for these task peon processes,
> which means you can still configure loggers at different logging levels for task logs using `log4j2.xml`.

## Log directory

The included `log4j2.xml` configuration for Druid and ZooKeeper will output logs to the `log` directory at the root of the distribution.
If you want to change the log directory, set the environment variable `DRUID_LOG_DIR` to the desired directory before you start Druid.
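
For instance, a minimal sketch of overriding the log location before launch (the `/var/log/druid` path is only an illustration):

```bash
# Send all service logs to /var/log/druid instead of the distribution's log/ directory.
# DRUID_LOG_DIR is read at startup by the run and supervise scripts touched in this commit.
mkdir -p /var/log/druid
DRUID_LOG_DIR=/var/log/druid bin/start-micro-quickstart
```
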
## All-in-one start commands
If you use one of the all-in-one start commands, such as `bin/start-micro-quickstart`, then in the default configuration
each service has two kinds of log files. The main log file (for example, `log/historical.log`) is written by log4j2 and
is rotated periodically.
The secondary log file (for example, `log/historical.stdout.log`) contains anything that is written by the component
directly to standard output or standard error without going through log4j2. This consists mainly of messages from the
Java runtime itself. This file is not rotated, but it is generally small due to the low volume of messages. If
necessary, you can truncate it using the Linux command `truncate --size 0 log/historical.stdout.log`.
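
For instance, a small sketch of watching the main log and trimming the stdout log for the Historical service, using the file names described above:

```bash
# Follow the rotated, log4j2-managed log:
tail -f log/historical.log

# Check how much unrotated JVM output has accumulated, then empty the file in place:
du -h log/historical.stdout.log
truncate --size 0 log/historical.stdout.log
```
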
## Avoid reflective access warnings in logs
On Java 11, you may see warnings like this in log files:
```
WARNING: An illegal reflective access operation has occurred
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
```
These messages do not cause harm, but you can avoid them by adding the following lines to your `jvm.config` files. These
lines are not part of the default JVM configs that ship with Druid, because Java 8 will not recognize these options and
will fail to start up.
```
--add-exports=java.base/jdk.internal.ref=ALL-UNNAMED
--add-exports=java.base/jdk.internal.perf=ALL-UNNAMED
--add-opens=java.base/java.lang=ALL-UNNAMED
--add-opens=jdk.management/com.sun.management.internal=ALL-UNNAMED
```
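
As a sketch, these options could be appended to every service's `jvm.config` in one pass. The micro-quickstart path is an assumption; only do this when running on Java 11, since Java 8 fails to start with these options:

```bash
# Append the Java 11 --add-exports/--add-opens options to each service's jvm.config.
# Path assumed; adjust for your deployment layout.
for cfg in conf/druid/single-server/micro-quickstart/*/jvm.config; do
  cat >> "$cfg" <<'EOF'
--add-exports=java.base/jdk.internal.ref=ALL-UNNAMED
--add-exports=java.base/jdk.internal.perf=ALL-UNNAMED
--add-opens=java.base/java.lang=ALL-UNNAMED
--add-opens=jdk.management/com.sun.management.internal=ALL-UNNAMED
EOF
done
```
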
## My logs are really chatty, can I set them to asynchronously write?


@@ -42,7 +42,7 @@ if [ -z "$JAVA_BIN" ]; then
exit 1
fi
LOG_DIR="${DRUID_LOG_DIR:=${WHEREAMI}/log}"
LOG_DIR="${DRUID_LOG_DIR:=${WHEREAMI}/../log}"
# Remove possible ending slash
if [[ $LOG_DIR == */ ]];
then


@@ -41,7 +41,7 @@ if [ -z "$JAVA_BIN" ]; then
exit 1
fi
LOG_DIR="${DRUID_LOG_DIR:=${WHEREAMI}/log}"
LOG_DIR="${DRUID_LOG_DIR:=${WHEREAMI}/../log}"
# Remove possible ending slash
if [[ $LOG_DIR == */ ]];
then


@@ -192,8 +192,9 @@ if (defined $config->{'kill-timeout'}) {
$opt{'kill-timeout'} = $config->{'kill-timeout'};
}
# Remember where vardir, logdir, svdir are after chdiring
my $vardir = File::Spec->rel2abs($opt{vardir});
my $logdir = File::Spec->rel2abs(realpath($ENV{'DRUID_LOG_DIR'} || "$FindBin::Bin/../log"));
my $svdir = "$vardir/sv";
# chdir to the root of the distribution (or wherever)
@@ -209,6 +210,11 @@ if (! -e $svdir) {
system("mkdir -p \Q$svdir\E") == 0 or die "mkdir $svdir failed: $!\n";
}
# Create logdir, if needed
if (!defined $opt{svlogd} && ! -e "$logdir") {
system("mkdir -p \Q$logdir\E") == 0 or die "mkdir $logdir failed: $!\n";
}
# Lock svdir and keep it locked until we exit
my $lockfile = "$svdir/.lock";
open my $lockfh, ">", $lockfile or die "Cannot write to svdir, please check permissions: $svdir\n";
@@ -230,20 +236,27 @@ $SIG{TERM} = sub { if (!$killed) { $killed = 15; $killkill = time + $opt{'kill-t
# Build up control fifo command over multiple sysreads, potentially
my $fifobuffer = '';
if (defined $opt{svlogd}) {
logit "Starting services with log directory [$svdir].";
} else {
logit "Starting services with log directory [$logdir].";
}
while (1) {
# Spawn new procs
if (!$killed) {
for my $command (grep { !$_->{pid} } @commands) {
if ($command->{down} < time) {
my $logfile = sprintf("%s%s", "$svdir/$command->{name}", defined $opt{'svlogd'} ? "" : ".log");
if (my $pid = fork) {
$command->{pid} = $pid;
$command->{logfile} = $logfile;
} else {
setsid;
if (defined $opt{'svlogd'}) {
# If using svlogd, program output goes into the service directory. We do not use $logdir here.
my $logfile = "$svdir/$command->{name}";
logit "Running command[" . pretty($command->{name}, 'bold') . "]: $command->{command}";
if (! -e $logfile) {
system("mkdir -p \Q$logfile\E") == 0 or logdie "mkdir $logfile failed: $!\n";
}
@@ -258,9 +271,12 @@ while (1) {
open STDOUT, "|svlogd $logfile" or logdie "pipe to svlogd $logfile failed: $!\n";
} else {
# If not using svlogd, program output goes to $logdir. In the default configuration, this will be a small
# amount of logging from the JVM itself, because all of the Druid and ZooKeeper logs are written into
# separate files by log4j2.
logit "Running command[" . pretty($command->{name}, 'bold') . "]: $command->{command}";
my $logfile = "$logdir/$command->{name}.stdout.log";
open STDOUT, ">>", $logfile or logdie "open $logfile failed: $!\n";
}
open STDERR, ">&STDOUT" or logdie "redirecting stderr failed: $!\n";