* array support for expression language for multi-value string columns
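The work above adds array-typed values and lambda-style functions such as `map` to the expression language for multi-value string columns. A minimal sketch of evaluating one such expression, assuming `Parser.parse`, `ExprMacroTable.nil()`, and the `Parser.withMap` binding helper from Druid's `org.apache.druid.math.expr` package of this era (the binding helper and result type are assumptions, not confirmed by these commits):

```java
import java.util.Arrays;

import com.google.common.collect.ImmutableMap;
import org.apache.druid.math.expr.Expr;
import org.apache.druid.math.expr.ExprEval;
import org.apache.druid.math.expr.ExprMacroTable;
import org.apache.druid.math.expr.Parser;

public class ArrayExprExample
{
  public static void main(String[] args)
  {
    // A multi-value string column binds as a String[]; map(...) applies a
    // lambda to each element. Parser.withMap is assumed to be the binding
    // helper in this version of the expression package.
    Expr expr = Parser.parse("map((x) -> concat(x, '-suffix'), tags)", ExprMacroTable.nil());
    Expr.ObjectBinding bindings = Parser.withMap(ImmutableMap.of("tags", new String[]{"a", "b"}));
    ExprEval result = expr.eval(bindings);
    // expected (assumed) output: [a-suffix, b-suffix]
    System.out.println(Arrays.toString((Object[]) result.value()));
  }
}
```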
* fix tests?
* fixes
* more tests
* fixes
* cleanup
* more improvements, more tests
* ignore inspection
* license
* license fix
* inspection
* remove dumb import
* more improvements
* some comments
* add expr rewrite for arrayfn args for more magic, tests
* test stuff
* more tests
* fix test
* fix test
* castfunc can deal with arrays
* needs more empty array
* more tests, make cast to long array more forgiving
* refactor
* simplify ExprMacro Expr implementations with base classes in core
* oops
* more test
* use Shuttle for Parser.flatten, javadoc, cleanup
* fixes and more tests
* unused import
* fixes
* javadocs, cleanup, refactors
* fix imports
* more javadoc
* more javadoc
* more
* more javadocs, nonnullbydefault, minor refactor
* markdown fix
* adjustments
* more doc
* move initial filter out
* docs
* map empty arg lambda, apply function argument validation
* check function args at parse time instead of eval time
* more immutable
* more more immutable
* clarify grammar
* fix docs
* test that an empty array is typed as a string array; we may need a better way to handle arrays in the future, or to define empty arrays as other types
* Add state and error tracking for seekable stream supervisors
* Fixed nits in docs
* Made inner class static and updated spec test with jackson inject
* Review changes
* Remove redundant config param in supervisor
* Style
* Applied some of Jon's recommendations
* Add transience field
* write test
* implement code review changes except for reconsidering logic of markRunFinishedAndEvaluateHealth()
* remove transience reporting and fix SeekableStreamSupervisorStateManager impl
* move call to stateManager.markRunFinished() from RunNotice to runInternal() for tests
* remove stateHistory because it wasn't adding much value, some fixes, and add more tests
* fix tests
* code review changes and add HTTP health check status
* fix test failure
* refactor to split into a generic SupervisorStateManager and a specific SeekableStreamSupervisorStateManager
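The split described above is roughly: a generic manager that records recent errors and derives an overall health state, with a seekable-stream subclass layering on stream-specific states. A simplified, illustrative sketch only; field names, thresholds, and methods are assumptions, not the actual Druid classes:

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class SupervisorStateManager
{
  public enum State { PENDING, RUNNING, UNHEALTHY_SUPERVISOR }

  private final int unhealthinessThreshold;
  private final Deque<Throwable> recentErrors = new ArrayDeque<>();
  private int consecutiveFailedRuns = 0;
  private State state = State.PENDING;

  public SupervisorStateManager(int unhealthinessThreshold)
  {
    this.unhealthinessThreshold = unhealthinessThreshold;
  }

  public void recordThrowableEvent(Throwable t)
  {
    recentErrors.addLast(t);
    if (recentErrors.size() > 10) {
      recentErrors.removeFirst(); // keep a bounded error history
    }
  }

  public void markRunFinished(boolean successful)
  {
    // a streak of failed runs past the threshold flips the supervisor unhealthy
    consecutiveFailedRuns = successful ? 0 : consecutiveFailedRuns + 1;
    state = consecutiveFailedRuns >= unhealthinessThreshold
            ? State.UNHEALTHY_SUPERVISOR
            : State.RUNNING;
  }

  public State getState()
  {
    return state;
  }
}
```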
* fixup after merge
* code review changes - add additional docs
* cleanup KafkaIndexTaskTest
* add additional documentation for Kinesis indexing
* remove unused throws class
* add S3 authentication method information
* add druid.s3.fileSessionCredentials related content
* remove authentication parameters to avoid confusion, as they are covered in more detail on the S3 Deep Storage page
* streamline s3 docs
* SQL: Allow NULLs in place of optional arguments in many functions.
Also adjust SQL docs to describe how to make time literals using
TIME_PARSE (which is now possible in a nicer way).
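In practice this means a time literal can be written by calling TIME_PARSE with a NULL pattern (the default ISO8601 parsing then applies). A sketch over Druid's Avatica JDBC endpoint, assuming a broker at the placeholder URL below:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class TimeParseExample
{
  public static void main(String[] args) throws Exception
  {
    // placeholder broker address; requires the Avatica JDBC driver on the classpath
    String url = "jdbc:avatica:remote:url=http://localhost:8082/druid/v2/sql/avatica/";
    try (Connection conn = DriverManager.getConnection(url);
         Statement stmt = conn.createStatement();
         // NULL in place of the optional pattern argument, per this change
         ResultSet rs = stmt.executeQuery(
             "SELECT TIME_PARSE('2000-01-01T00:00:00Z', NULL, 'Etc/UTC') AS t")) {
      while (rs.next()) {
        System.out.println(rs.getTimestamp("t"));
      }
    }
  }
}
```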
* Be less forbidden.
* Upgrade various build and doc links to https.
Where it wasn't possible to upgrade build-time dependencies to https,
I kept http in place but used hardcoded checksums or GPG keys to ensure
that artifacts fetched over http are verified properly.
* Switch to https://apache.org.
* update sys.servers table to show all servers
* update docs
* Fix integration test
* modify test query for batch integration test
* fix case in test queries
* make the server_type lowercase
* Apply suggestions from code review
Co-Authored-By: Himanshu <g.himanshu@gmail.com>
* Fix compilation error from GitHub review suggestion
* fix unit test
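For reference, the expanded sys.servers table can be inspected with a query like the following (again over the Avatica JDBC endpoint; the column list is taken from the docs of this era and should be treated as an assumption):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class SysServersExample
{
  public static void main(String[] args) throws Exception
  {
    String url = "jdbc:avatica:remote:url=http://localhost:8082/druid/v2/sql/avatica/";
    try (Connection conn = DriverManager.getConnection(url);
         Statement stmt = conn.createStatement();
         // server_type is lowercase after this change, e.g. 'historical', 'broker'
         ResultSet rs = stmt.executeQuery("SELECT server, server_type, tier FROM sys.servers")) {
      while (rs.next()) {
        System.out.println(rs.getString("server") + " " + rs.getString("server_type"));
      }
    }
  }
}
```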
* fix lookup editor to use lookup tiers instead of historical tiers
* use default tier if empty response, fix if configured lookups is null
* fixes
* fix typo
* use relative link to build instructions from top level readme
* add textfile to readme
* formatting
* make README.BINARY plaintext, move LABELS.md to LABELS, README.txt to README
* exclude README.BINARY still
* remove JDK links/recommendations
* add script to use DRUIDVERSION in textfile README instead of latest, add links to recommended JDK to build.md
* license
* better README template; link to latest if an Apache release version is not detected
* fix
* First set of changes for tDigest histogram
* Add license
* Address code review comments
* Add a doc page for new T-Digest sketch aggregators. Minor code cleanup and comments.
* Remove synchronization from BufferAggregators. Address code review comments
* Fix typo
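Not the Druid aggregator itself, but a sketch of what the underlying t-digest structure does, assuming the com.tdunning:t-digest library the extension builds on (a compression of 100 is a common default):

```java
import com.tdunning.math.stats.TDigest;

public class TDigestExample
{
  public static void main(String[] args)
  {
    // a mergeable quantile sketch: insert values, then query quantiles
    TDigest digest = TDigest.createMergingDigest(100);
    for (int i = 1; i <= 10_000; i++) {
      digest.add(i);
    }
    System.out.println(digest.quantile(0.5));  // ~5000: approximate median
    System.out.println(digest.quantile(0.95)); // ~9500: approximate p95
  }
}
```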
* add postgresql meta db table schema configuration property (#7137)
If the PostgreSQL db schema differs from the default schema 'public', you
must set the configuration value accordingly; otherwise it can be left
unset.
druid.metadata.postgres.dbTableSchema=public
* create postgresql metadb table schema configuration property (#7137)
check PostgreSQLTablesConfig.java
* modify postgresql readme file - metadb table schema (#7137)
* Remove SQL experimental banner and other doc adjustments.
Also,
- Adjust the ToC and other docs a bit so SQL and native queries are
presented on more equal footing.
- De-emphasize querying historicals and peons directly in the
native query docs. This is a really niche thing and may have been
confusing to include prominently in the very first paragraph.
- Remove DataSketches and Kafka indexing service from the experimental
features ToC. They are not experimental any longer and were there in
error.
* More notes.
* Slight tweak.
* Remove extra extra word.
* Remove RT node from ToC.
* V1 - improve parallelism of zookeeper based segment change processing
* Create zk nodes in batches. Address code review comments.
Introduce various configs.
* Add documentation for the newly added configs
* Fix test failures
* Fix more test failures
* Remove printStackTrace statements
* Address code review comments
* Use a single queue
* Address code review comments
Since we have a separate load peon for every historical, just having a single SegmentChangeProcessor
task per historical is enough. This commit also gets rid of the associated config druid.coordinator.loadqueuepeon.curator.numCreateThreads
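The design point above, sketched: one queue and one single-threaded drain loop per historical, since the per-server load peon already serializes work. Names here are hypothetical, not the actual coordinator classes:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;

public class SegmentChangeProcessor implements Runnable
{
  private final BlockingQueue<Runnable> changeRequests = new LinkedBlockingQueue<>();
  private final ExecutorService exec = Executors.newSingleThreadExecutor();

  public void start()
  {
    exec.submit(this); // one processor task per historical
  }

  public void enqueue(Runnable changeRequest)
  {
    changeRequests.add(changeRequest);
  }

  @Override
  public void run()
  {
    try {
      while (!Thread.currentThread().isInterrupted()) {
        changeRequests.take().run(); // process one change at a time, in order
      }
    }
    catch (InterruptedException e) {
      Thread.currentThread().interrupt();
    }
  }

  public void stop()
  {
    exec.shutdownNow();
  }
}
```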
* Resolve merge conflict
* Fix compilation failure
* Remove batching since we already have a dynamic config maxSegmentsInNodeLoadingQueue that provides that control
* Fix NPE in test
* Remove documentation for configs that are no longer needed
* Address code review comments
* Address more code review comments
* Fix checkstyle issue
* Address code review comments
* Code review comments
* Add back monitor node remove executor
* Cleanup code to isolate null checks and minor refactoring
* Change param name since it conflicts with member variable name
This feature allows Calcite's Bindable interpreter to be bolted on
top of Druid queries and table scans. I think it should be removed for
a few reasons:
1. It is not recommended for production anyway, because it generates
unscalable query plans (e.g. it will plan a join into two table scans
and then try to do the entire join in memory on the broker).
2. It doesn't work with Druid-specific SQL functions, like TIME_FLOOR,
REGEXP_EXTRACT, APPROX_COUNT_DISTINCT, etc.
3. It makes the SQL planning code needlessly complicated.
With SQL coming out of experimental status soon, it's a good opportunity
to remove this feature.
* Contributing Moving-Average Query to open source.
* Fix failing code inspections.
* See if explicit types will invoke the correct comparison function.
* Explicitly remove support for druid.generic.useDefaultValueForNull configuration parameter.
* Update styling and headers for compliance.
* Refresh code with latest master changes:
* Remove NullDimensionSelector.
* Apply changes of RequestLogger.
* Apply changes of TimelineServerView.
* Small checkstyle fix.
* Checkstyle fixes.
* Fixing rat errors; Teamcity errors.
* Removing theta sketch support; it will be added back in this PR or a follow-up once DI conflicts with datasketches are resolved.
* Implements some of the review fixes.
* More fixes for review.
* More fixes from review.
* MapBasedRow is Unmodifiable. Create new rows instead of modifying existing ones.
* Remove more changes related to datasketches support.
* Refactor BaseAverager startFrom field and add a comment.
* fakeEvents field: Refactor initialization and add comment.
* Rename parameters (tiny change).
* Fix variable name typo in test (JAN_4).
* Fix styling of non camelCase fields.
* Fix Preconditions.checkArgument for cycleSize.
* Add more documentation to RowBucketIterable and other classes.
* Add key/value comment in MovingAverageIterable.
* Fix anonymous makeColumnValueSelector returning null.
* Replace IdentityYieldingAccumolator with Yielders.each().
* internalNext() should return null instead of throwing an exception.
* Remove unused variables/parameters.
* Harden MovingAverageIterableTest (Switch anyOf to exact match).
* Change internalNext() from recursion to iteration; Simplify next() and hasNext().
* Remove unused imports.
* Address review comments.
* Rename fakeEvents to emptyEvents.
* Remove redundant parameter key from computeMovingAverage.
* Check yielder as well in RowBucketIterable#hasNext()
* Fix javadoc.
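For orientation, the core arithmetic a cycleSize-based averager performs is a sliding-window average; the actual contribution is the query engine around it (bucketing, grouping, iteration). A plain-Java sketch of just the window math:

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class MovingAverage
{
  private final int cycleSize;
  private final Deque<Double> window = new ArrayDeque<>();
  private double sum = 0;

  public MovingAverage(int cycleSize)
  {
    if (cycleSize <= 0) {
      throw new IllegalArgumentException("cycleSize must be positive");
    }
    this.cycleSize = cycleSize;
  }

  // feed one bucket value, get back the average over the last cycleSize buckets
  public double offer(double value)
  {
    window.addLast(value);
    sum += value;
    if (window.size() > cycleSize) {
      sum -= window.removeFirst();
    }
    return sum / window.size();
  }
}
```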
* Add reload by interval API
Implements the reload proposal of #7439
Added tests and updated docs
* PR updates
* Only build timeline with required segments
Use 404 with message when a segmentId is not found
Fix typo in doc
Return number of segments modified.
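A hypothetical sketch of the endpoint behavior just described (the real resource lives in Druid's coordinator; paths and helpers here are placeholders): 404 with a message for an unknown segment id, the count of modified segments otherwise:

```java
import javax.ws.rs.POST;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.core.Response;

@Path("/druid/coordinator/v1/datasources")
public class DataSourcesResourceSketch
{
  @POST
  @Path("/{dataSource}/segments/{segmentId}/reload")
  public Response reloadSegment(
      @PathParam("dataSource") String dataSource,
      @PathParam("segmentId") String segmentId
  )
  {
    Integer modified = reloadIfKnown(dataSource, segmentId); // hypothetical helper
    if (modified == null) {
      // 404 with a message when the segmentId is not found
      return Response.status(Response.Status.NOT_FOUND)
                     .entity("Segment [" + segmentId + "] not found")
                     .build();
    }
    // otherwise, return the number of segments modified
    return Response.ok(modified).build();
  }

  private Integer reloadIfKnown(String dataSource, String segmentId)
  {
    return null; // stub for illustration only
  }
}
```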
* Fix checkstyle errors
* Replace String.format with StringUtils.format
* Remove return value
* Expand timeline to segments that overlap for intervals
Restrict update call to only segments that need updating.
* Only add overlapping enabled segments to the timeline
* Some renames for clarity
Added comments
* Don't rely on cached poll data
Only fetch required information from DB
* Match error style
* Merge and cleanup doc
* Fix String.format call
* Add unit tests
* Fix unit tests that check for overshadowing
* Add api to drop data by interval
* update to address comments
* unused imports
* PR comments + add tests in SQLMetadataSegmentManagerTest
* update tests and docs
* Implement round function in druid-sql
* Return value according to the type of argument
* Fix code for abnormal inputs, update math-expr.md
* Fix assert text
* Fix error messages and refactor code
* Fix compile error, update sql.md, refactor code and format tests
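The ROUND(x, y) semantics described above, approximated in plain Java: round x to y decimal places, returning a value of the input's type. BigDecimal half-up is used here as a reasonable stand-in; the actual Druid operator may differ in edge cases:

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

public class RoundExample
{
  static double round(double x, int scale)
  {
    // HALF_UP rounds away from zero on ties, the conventional ROUND behavior
    return BigDecimal.valueOf(x).setScale(scale, RoundingMode.HALF_UP).doubleValue();
  }

  static long round(long x)
  {
    return x; // rounding an integral type is the identity
  }

  public static void main(String[] args)
  {
    System.out.println(round(3.14159, 2)); // 3.14
    System.out.println(round(-2.5, 0));    // -3.0 (ties round away from zero)
    System.out.println(round(42L));        // 42
  }
}
```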
* Enhance the HttpFirehose to work with both insecure URIs and URIs requiring basic authentication
* Improve security of enhanced HttpFirehoseFactory by not logging auth credentials
* Fix checkstyle failure in HttpFirehoseFactory.java
* Update docs and fix TeamCity build with required noinspection
* Indentation cleanup and logic modification for HttpFirehose object stream
* Remove default Empty string password provider in http firehose
* Add JavaDoc for MixIn describing its intended use
* Revert documentation notation for JSON code to be in line with the rest of the doc
* Improve instantiation of ObjectMappers that require MixIn for redacting password from task logs
* Add comment to clarify fully qualified references of Objects in SQLMetadataStorageActionHandler
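The mix-in technique referenced above, shown in isolation: Jackson's addMixIn lets a logging-oriented ObjectMapper suppress a password property without modifying the original class. Class and field names below are placeholders, not the actual Druid types:

```java
import com.fasterxml.jackson.annotation.JsonIgnore;
import com.fasterxml.jackson.databind.ObjectMapper;

public class RedactingMapperExample
{
  static class Credentials
  {
    public String username = "admin";
    public String password = "secret";
  }

  // the mix-in overlays @JsonIgnore onto the target class's password field
  abstract static class PasswordRedactingMixIn
  {
    @JsonIgnore
    public String password;
  }

  public static void main(String[] args) throws Exception
  {
    ObjectMapper logMapper = new ObjectMapper()
        .addMixIn(Credentials.class, PasswordRedactingMixIn.class);
    // prints {"username":"admin"} -- the password is redacted from the logged form
    System.out.println(logMapper.writeValueAsString(new Credentials()));
  }
}
```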
* Update scan query runner factory to accept SpecificSegmentSpec
* nit
* Sorry travis
* Improve logging and fix doc
* Bug fix
* Friendlier error messages and tests to cover the bug
* Address Gian's comments
* Fix doc
* Added tests for empty and null column list
* Style
* Fix checking wrong order (looking at query param when it should be
looking at the null-handled order)
* Add test case for null order
* Fix ScanQueryRunnerTest
* Forbidden APIs fixed
Removes the coordinator sanity check that prevents it from dropping all
segments. It's useful to get rid of this, since the behavior is
unintuitive for dev/testing clusters where users might regularly want
to drop all their data to get back to a clean slate.
But the sanity check was there for a reason: to prevent a race condition
where the coordinator might drop all segments if it ran before the
first metadata store poll finished. This patch addresses that concern
differently, by allowing methods in MetadataSegmentManager to return
null if a poll has not happened yet, and canceling coordinator runs
in that case.
This patch also makes the "dataSources" reference in
SQLMetadataSegmentManager volatile. I'm not sure why it wasn't volatile
before, but it seems necessary to me: it's not final, and it's dereferenced
from multiple threads without synchronization.
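The pattern this patch describes, reduced to a sketch (names are illustrative): null distinguishes "no poll yet" from "no segments", and volatile makes the cross-thread handoff safe:

```java
import java.util.Collection;

import javax.annotation.Nullable;

public class MetadataSegmentManagerSketch
{
  // written by the poll thread, read by coordinator threads without other
  // synchronization -- hence volatile
  @Nullable
  private volatile Collection<String> dataSources = null;

  public void pollFinished(Collection<String> polled)
  {
    dataSources = polled;
  }

  @Nullable
  public Collection<String> getDataSources()
  {
    return dataSources; // null means "no poll yet", not "no segments"
  }
}

// Coordinator side:
//   Collection<String> ds = manager.getDataSources();
//   if (ds == null) { return; } // cancel this run rather than "drop everything"
```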
* orc extension reworked to use apache orc map-reduce lib, moved to core extensions, support for flattenSpec, tests, docs
* change binary handling to be compatible with avro and parquet, Rows.objectToStrings now converts byte[] to base64, change date handling
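The byte[] handling mentioned above, in isolation: binary values become base64 strings rather than a lossy toString():

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class BinaryToStringExample
{
  public static void main(String[] args)
  {
    byte[] raw = "hello".getBytes(StandardCharsets.UTF_8);
    // base64 round-trips arbitrary bytes through a string column safely
    String asString = Base64.getEncoder().encodeToString(raw);
    System.out.println(asString); // aGVsbG8=
  }
}
```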
* better docs and tests
* fix it
* formatting
* doc fix
* fix it
* exclude redundant dependencies
* use latest orc-mapreduce, add hadoop jobProperties recommendations to docs
* doc fix
* review stuff and fix binaryAsString
* cache for root level fields
* more improvements
* Make IngestSegmentFirehoseFactory splittable for parallel ingestion
* Code review feedback
- Get rid of WindowedSegment
- Don't document 'segments' parameter or support splitting firehoses that use it
- Require 'intervals' in WindowedSegmentId (since it won't be written by hand)
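A hypothetical shape for the WindowedSegmentId value object described above; the real class presumably uses Joda Intervals, but strings keep this sketch self-contained:

```java
import java.util.List;
import java.util.Objects;

import com.fasterxml.jackson.annotation.JsonCreator;
import com.fasterxml.jackson.annotation.JsonProperty;

public class WindowedSegmentId
{
  private final String segmentId;
  private final List<String> intervals;

  @JsonCreator
  public WindowedSegmentId(
      @JsonProperty("segmentId") String segmentId,
      @JsonProperty("intervals") List<String> intervals
  )
  {
    this.segmentId = Objects.requireNonNull(segmentId, "segmentId");
    // intervals are required at construction, since the object is machine-written
    this.intervals = Objects.requireNonNull(intervals, "intervals are required");
  }

  @JsonProperty
  public String getSegmentId()
  {
    return segmentId;
  }

  @JsonProperty
  public List<String> getIntervals()
  {
    return intervals;
  }
}
```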
* Add missing @JsonProperty
* Integration test passes
* Add unit test
* Remove two FIXME comments from CompactionTask
I'd like to leave this PR in a potentially mergeable state, but I still would
appreciate reviewer eyes on the questions I'm removing here.
* Updates from code review
* Update init
Fix bin/init to source from proper directory.
* Fix for Proposal #6518: Shutdown druid processes upon complete loss of ZK connectivity
* Zookeeper Loss:
- Add feature documentation
- Cosmetic refactors
- Variable extractions
- Remove getter
* - Change config key name and reword documentation
- Switch from Function<Void,Void> to Runnable/Lambda
- try { … } finally { … }
* Fix line length too long
* - change to formatted string for logging
- use System.err.println after lifecycle stops
* add comment on terminating the Zookeeper created by makeEnsembleProvider()
* Add javadoc
* added javadoc reference back to the Apache discussion thread.
* move comment to other class
* favor two-slash comments instead of multiline comments
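The shutdown hook described above, sketched with Curator's connection-state listener. The lifecycle wiring is a placeholder; note the try/finally and the post-lifecycle System.err.println called out in the commits:

```java
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.state.ConnectionState;

public class ZkLossShutdownSketch
{
  public static void install(CuratorFramework curator, Runnable stopLifecycle)
  {
    curator.getConnectionStateListenable().addListener(
        (client, newState) -> {
          if (newState == ConnectionState.LOST) {
            try {
              stopLifecycle.run(); // shut the process down cleanly
            }
            finally {
              // the logger may already be stopped with the lifecycle,
              // so fall back to stderr
              System.err.println("Shutting down: ZooKeeper connectivity lost");
            }
          }
        }
    );
  }
}
```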
* Moved Scan Builder to Druids class and started on Scan Benchmark setup
* Need to form queries
* It runs.
* Stuff for time-ordered scan query
* Move ScanResultValue timestamp comparator to a separate class for testing
* Licensing stuff
* Change benchmark
* Remove todos
* Added TimestampComparator tests
* Change number of benchmark iterations
* Added time ordering to the scan benchmark
* Changed benchmark params
* More param changes
* Benchmark param change
* Made Jon's changes and removed TODOs
* Broke some long lines into two lines
* nit
* Decrease segment size for less memory usage
* Wrote tests for heapsort scan result values and fixed bug where iterator
wasn't returning elements in correct order
* Wrote more tests for scan result value sort
* Committing a param change to kick teamcity
* Fixed codestyle and forbidden API errors
* .
* Improved conciseness
* nit
* Created an error message for when someone tries to time-order a result set larger than the threshold limit
* Set to spaces over tabs
* Fixing tests WIP
* Fixed failing calcite tests
* Kicking travis with change to benchmark param
* added all query types to scan benchmark
* Fixed benchmark queries
* Renamed sort function
* Added javadoc on ScanResultValueTimestampComparator
* Unused import
* Added more javadoc
* improved doc
* Removed unused import to satisfy PMD check
* Small changes
* Changes based on Gian's comments
* Fixed failing test due to null resultFormat
* Added config and get # of segments
* Set up time ordering strategy decision tree
* Refactor and pQueue works
* Cleanup
* Ordering is correct on n-way merge -> still need to batch events into
ScanResultValues
* WIP
* Sequence stuff is so dirty :(
* Fixed bug introduced by replacing deque with list
* Wrote docs
* Multi-historical setup works
* WIP
* Change so batching only occurs on broker for time-ordered scans
Restricted batching to broker for time-ordered queries and adjusted
tests
Formatting
Cleanup
* Fixed mistakes in merge
* Fixed failing tests
* Reset config
* Wrote tests and added Javadoc
* Nit-change on javadoc
* Checkstyle fix
* Improved test and appeased TeamCity
* Sorry, checkstyle
* Applied Jon's recommended changes
* Checkstyle fix
* Optimization
* Fixed tests
* Updated error message
* Added error message for UOE
* Renaming
* Finish rename
* Smarter limiting for pQueue method
* Optimized n-way merge strategy
* Rename segment limit -> segment partitions limit
* Added a bit of docs
* More comments
* Fix checkstyle and test
* Nit comment
* Fixed failing tests -> allow usage of all types of segment spec
* Fixed failing tests -> allow usage of all types of segment spec
* Revert "Fixed failing tests -> allow usage of all types of segment spec"
This reverts commit ec470288c7.
* Revert "Merge branch '6088-Time-Ordering-On-Scans-N-Way-Merge' of github.com:justinborromeo/incubator-druid into 6088-Time-Ordering-On-Scans-N-Way-Merge"
This reverts commit 57033f36df, reversing
changes made to 8f01d8dd16.
* Check type of segment spec before using for time ordering
* Fix bug in numRowsScanned
* Fix bug messing up count of rows
* Fix docs and flipped boolean in ScanQueryLimitRowIterator
* Refactor n-way merge
* Added test for n-way merge
* Refixed regression
* Checkstyle and doc update
* Modified sequence limit to accept longs and added test for long limits
* doc fix
* Implemented Clint's recommendations
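The n-way merge at the heart of this work, reduced to a sketch: each per-segment stream is already sorted by timestamp, so a priority queue over the streams' current heads yields a globally time-ordered sequence. Rows are simplified to bare timestamps here; the real implementation merges ScanResultValues and batches on the broker:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.Iterator;
import java.util.List;
import java.util.PriorityQueue;

public class NWayMergeSketch
{
  // a stream's current head plus the rest of its (sorted) rows
  private static class Head
  {
    long value;
    final Iterator<Long> rest;

    Head(long value, Iterator<Long> rest)
    {
      this.value = value;
      this.rest = rest;
    }
  }

  public static List<Long> merge(List<Iterator<Long>> sortedStreams)
  {
    PriorityQueue<Head> pQueue = new PriorityQueue<>(Comparator.comparingLong(h -> h.value));
    for (Iterator<Long> stream : sortedStreams) {
      if (stream.hasNext()) {
        pQueue.add(new Head(stream.next(), stream));
      }
    }
    List<Long> merged = new ArrayList<>();
    while (!pQueue.isEmpty()) {
      Head head = pQueue.poll();
      merged.add(head.value);
      if (head.rest.hasNext()) {
        head.value = head.rest.next();
        pQueue.add(head); // reinsert with the stream's next row
      }
    }
    return merged;
  }
}
```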
* Move GCP to a core extension
* Don't provide druid-core >.<
* Keep AWS and GCP modules separate
* Move AWSModule to its own module
* Add aws ec2 extension and more modules in more places
* Fix bad imports
* Fix test jackson module
* Include AWS and GCP core in server
* Add simple empty method comment
* Update version to 15
* One more 0.13.0-->0.15.0 change
* Fix multi-binding problem
* Grep for s3-extensions and update docs
* Update extensions.md