DEV: Ignore `ls` errors when clearing FileStore cache (#8780)

A race condition is possible when multiple threads/processes call this method.
`ls` prints "cannot access '...': No such file or directory" to stderr if any of the files it is trying to list are simultaneously being removed by the `xargs rm -f` in another process. That doesn't affect the result, but it did raise an error before this change.
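
For illustration only (not part of the diff), here is a minimal sketch of the `Open3.pipeline` feature the fix relies on: passing a pipeline stage as an array lets you attach spawn options such as `err:`, so only that stage's stderr is redirected. The directory name below is hypothetical.

```ruby
require "open3"

# Sketch: silence stderr for the first stage only.
# "/tmp/example_cache" is a hypothetical path.
statuses = Open3.pipeline(
  ["ls -t /tmp/example_cache", err: "/dev/null"], # array form accepts spawn options
  "head -n 5"
)

# Open3.pipeline returns one Process::Status per stage.
statuses.each_with_index do |status, i|
  puts "stage #{i} exited with #{status.exitstatus}"
end
```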

Tested on a production instance where the original issue was observed.

Co-Authored-By: Régis Hanol <regis@hanol.fr>
Jarek Radosz 2020-01-27 02:59:54 +01:00 committed by GitHub
parent b843aa7b05
commit 63a4aa65ff
1 changed file with 12 additions and 3 deletions


@@ -150,14 +150,23 @@ module FileStore
       dir = File.dirname(path)
       FileUtils.mkdir_p(dir) unless Dir.exist?(dir)
       FileUtils.cp(file.path, path)
-      # keep latest 500 files
+
+      # Keep latest 500 files
       processes = Open3.pipeline(
-        "ls -t #{CACHE_DIR}",
+        ["ls -t #{CACHE_DIR}", err: "/dev/null"],
         "tail -n +#{CACHE_MAXIMUM_SIZE + 1}",
         "awk '$0=\"#{CACHE_DIR}\"$0'",
         "xargs rm -f"
       )
-      raise "Error clearing old cache" if !processes.all?(&:success?)
+
+      ls = processes.shift
+
+      # Exit status `1` in `ls` occurs when e.g. "listing a directory
+      # in which entries are actively being removed or renamed".
+      # It's safe to ignore it here.
+      if ![0, 1].include?(ls.exitstatus) || !processes.all?(&:success?)
+        raise "Error clearing old cache"
+      end
     end

     private
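
For context, a hedged reproduction of the race outside Discourse (the temp directory and file count are made up): deleting files while `ls -t` is stat()ing them can produce the "cannot access" message and exit status 1, even though the listing on stdout is still usable.

```ruby
require "open3"
require "fileutils"
require "tmpdir"

# Hypothetical reproduction sketch: delete files while `ls -t` runs.
# Under the race, ls may print
# "ls: cannot access '...': No such file or directory" to stderr and
# exit with status 1, yet still emit a usable listing on stdout.
Dir.mktmpdir do |dir|
  2000.times { |i| FileUtils.touch(File.join(dir, "file-#{i}")) }

  remover = Thread.new do
    Dir.glob(File.join(dir, "*")).each do |f|
      File.delete(f) rescue nil # file may already be gone
    end
  end

  out, err, status = Open3.capture3("ls", "-t", dir)
  remover.join

  puts "exit status: #{status.exitstatus}" # 0, or 1 when the race hits
  puts err.lines.first if status.exitstatus == 1
  puts "listed #{out.lines.size} entries"
end
```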