mirror of https://github.com/apache/druid.git
commit 93aeaf4801
Add a "guessAggregatorHeapFootprint" method to AggregatorFactory that mitigates #6743 by enabling heap footprint estimates based on a specific number of rows. The idea is that at ingestion time, the number of rows that go into an aggregator will be 1 (if rollup is off) or will likely be a small number (if rollup is on). It's a heuristic, because of course nothing guarantees that the rollup ratio is a small number. But it's a common case, and I expect this logic to go wrong much less often than the current logic. Also, when it does go wrong, users can fix it by lowering maxRowsInMemory or maxBytesInMemory. The current situation is unintuitive: when the estimation goes wrong, users get an OOME, but actually they need to *raise* these limits to fix it.
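The message above describes the heuristic only in prose. The following is a minimal, self-contained sketch of the idea, not the actual Druid implementation: the class name, constants, and per-row cost are hypothetical, and only the method name and `long rows` parameter come from the commit message.

```java
// Hypothetical sketch of a row-count-aware heap footprint guess.
// The real method lives on org.apache.druid.query.aggregation.AggregatorFactory;
// everything else here is illustrative.
public class HeapFootprintGuess
{
  /** Worst-case size of one fully grown aggregator, e.g. a saturated sketch. */
  private static final long MAX_INTERMEDIATE_SIZE = 64_000;

  /** Rough per-row cost while the aggregator's state is still growing. */
  private static final long BYTES_PER_ROW = 64;

  /** Fixed object overhead for an empty aggregator. */
  private static final long BASE_OVERHEAD = 48;

  /**
   * Guess the heap footprint of an aggregator that will receive {@code rows} rows.
   * With rollup off, rows == 1, so the guess stays near BASE_OVERHEAD instead of
   * the worst case; the cap preserves the old worst-case bound when the rollup
   * ratio turns out to be large.
   */
  public static long guessAggregatorHeapFootprint(final long rows)
  {
    final long grown = BASE_OVERHEAD + BYTES_PER_ROW * rows;
    return Math.min(grown, MAX_INTERMEDIATE_SIZE);
  }

  public static void main(String[] args)
  {
    System.out.println(guessAggregatorHeapFootprint(1));         // rollup off: 112 bytes
    System.out.println(guessAggregatorHeapFootprint(10));        // modest rollup ratio: 688 bytes
    System.out.println(guessAggregatorHeapFootprint(1_000_000)); // capped at the worst case: 64,000 bytes
  }
}
```

The design choice this illustrates: scaling the estimate with the expected row count keeps memory accounting tight in the common small-rollup case, while the worst-case cap means a misguess can always be corrected by lowering maxRowsInMemory or maxBytesInMemory.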
avro-extensions
azure-extensions
datasketches
druid-aws-rds-extensions
druid-basic-security
druid-bloom-filter
druid-kerberos
druid-pac4j
druid-ranger-security
ec2-extensions
google-extensions
hdfs-storage
histogram
kafka-extraction-namespace
kafka-indexing-service
kinesis-indexing-service
kubernetes-extensions
lookups-cached-global
lookups-cached-single
mysql-metadata-storage
orc-extensions
parquet-extensions
postgresql-metadata-storage
protobuf-extensions
s3-extensions
simple-client-sslcontext
stats
testing-tools