* update static-checks GHA to run sequentially
remove static-checks from travis.yml
move docs, web-console, packaging checks from travis to GHA
* nit
* nit
* groups all checks, runs on JDKs 8, 11, and 17
* nit
* adds license info
* update permissions on scripts folder
* nit
* nit
* fix packaging check
* changes naming, cleans repo before license checks
* simulate failure
* bump up license checks
* test license checks failure
* verify gha script run exit code
* fail fast in case of shell script failure
* verified fail fast in case of shell script failure
Druid catalog basics
Catalog object model for tables, columns
Druid metadata DB storage (as an extension)
REST API to update the catalog (as an extension)
Integration tests
Model only: no planner integration yet
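To make the object model concrete, here is a minimal sketch of what a table/column catalog entry might look like; all class and field names below are hypothetical illustrations, not the actual classes added by this change.

```java
// Hypothetical sketch only: names and shapes are illustrative,
// not the actual catalog classes introduced here.
import java.util.List;

public class TableMetadata
{
  private final String name;
  private final List<ColumnMetadata> columns;

  public TableMetadata(String name, List<ColumnMetadata> columns)
  {
    this.name = name;
    this.columns = columns;
  }

  public String name()
  {
    return name;
  }

  public List<ColumnMetadata> columns()
  {
    return columns;
  }
}

class ColumnMetadata
{
  final String name;
  final String sqlType;  // e.g. "VARCHAR", "BIGINT"

  ColumnMetadata(String name, String sqlType)
  {
    this.name = name;
    this.sqlType = sqlType;
  }
}
```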
* Remove usage of method deleted in latest jackson-databind
* Revert "Remove usage of method deleted in latest jackson-databind"
This reverts commit 81cb5d41d9.
* Use get-pip to install pip
* Use default pyyaml version
* Upgrade pyyaml
* fix doc search
* upgrade website node to 16
* change website travis script
* move spellcheck notification
* explicit path to npm bin
* cd to the correct place
* Various documentation updates.
1) Split out "data management" from "ingestion". Break it into thematic pages.
2) Move "SQL-based ingestion" into the Ingestion category. Adjust content so
all conceptual content is in concepts.md and all syntax content is in reference.md.
Shorten the known issues page to the most interesting ones.
3) Add SQL-based ingestion to the ingestion method comparison page. Remove the
index task, since index_parallel is just as good when maxNumConcurrentSubTasks: 1.
4) Rename various mentions of "Druid console" to "web console".
5) Add additional information to ingestion/partitioning.md.
6) Remove a mention of Tranquility.
7) Remove a note about upgrading to Druid 0.10.1.
8) Remove no-longer-relevant task types from ingestion/tasks.md.
9) Move ingestion/native-batch-firehose.md to the hidden section. It was previously deprecated.
10) Move ingestion/native-batch-simple-task.md to the hidden section. It is still linked in some
places, but it isn't very useful compared to index_parallel, so it shouldn't take up space
in the sidebar.
11) Make all br tags self-closing.
12) Certain other cosmetic changes.
13) Update to node-sass 7.
* make travis use node12 for docs
Co-authored-by: Vadim Ogievetsky <vadim@ogievetsky.com>
* Building druid-it-tools and running the travis step in it.sh
* Addressing comments
* Updating druid-it-image pom to point to correct it-tools
* Updating all it-tools references to druid-it-tools
* Adding dist back to it.sh travis
* Trigger Build
* Disabling batchIndex tests and commenting out user-specific code
* Fixing checkstyle and intellij inspection errors
* Replacing tabs with spaces in it.sh
* Enabling old batch index tests with indexer
This commit is a first draft of the revised integration test framework which provides:
- A new directory, integration-tests-ex that holds the new integration test structure. (For now, the existing integration-tests is left unchanged.)
- Maven module druid-it-tools to hold code placed into the Docker image.
- Maven module druid-it-image to build the Druid-only test image from the tarball produced in distribution. (Dependencies live in their "official" image.)
- Maven module druid-it-cases that holds the revised tests and the framework itself. The framework includes file-based test configuration, test-specific clients, test initialization and updated versions of some of the common test support classes.
The integration test setup is primarily a huge mass of details. This approach refactors many of those details: from how the image is built and configured to how the Docker Compose scripts are structured to test configuration. An extensive set of "readme" files explains those details. Rather than repeat that material here, please consult those files for explanations.
* Improved Java 17 support and Java runtime docs.
1) Add a "Java runtime" doc page with information about supported
Java versions, garbage collection, and strong encapsulation.
2) Update asm and equalsverifier to versions that support Java 17.
3) Add additional "--add-opens" lines to surefire configuration, so
tests can pass successfully under Java 17.
4) Switch openjdk15 tests to openjdk17.
5) Update FrameFile to specifically mention Java runtime incompatibility
as the cause of not being able to use Memory.map.
6) Update SegmentLoadDropHandler to log an error for Errors too, not
just Exceptions. This is important because an IllegalAccessError is
encountered when the correct "--add-opens" line is not provided,
which would otherwise be silently ignored.
7) Update example configs to use druid.indexer.runner.javaOptsArray
instead of druid.indexer.runner.javaOpts. (The latter is deprecated.)
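A minimal sketch of the pattern behind item 6, assuming nothing about the actual SegmentLoadDropHandler code: catching Throwable instead of Exception ensures Errors such as IllegalAccessError (thrown on Java 17 when a required "--add-opens" flag is missing) get logged rather than silently swallowed.

```java
// Sketch only, not the actual SegmentLoadDropHandler code:
// catch Throwable so that Errors, not just Exceptions, are logged.
public class LoadErrorLoggingSketch
{
  public static void loadWithLogging(Runnable loader)
  {
    try {
      loader.run();
    }
    catch (Throwable t) {
      // An IllegalAccessError lands here too; catch (Exception e) would miss it.
      System.err.println("Segment load failed: " + t);
      throw t;  // precise rethrow: legal since Runnable.run() throws no checked exceptions
    }
  }
}
```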
* Adjustments.
* Use run-java in more places.
* Add run-java.
* Update .gitignore.
* Exclude hadoop-client-api.
Brought in when building on Java 17.
* Swap one more usage of java.
* Fix the run-java script.
* Fix flag.
* Include link to Temurin.
* Spelling.
* Update examples/bin/run-java
Co-authored-by: Xavier Léauté <xl+github@xvrl.net>
This commit contains the cleanup needed for the new integration test framework.
Changes:
- Fix log lines, misspellings, docs, etc.
- Allow the use of some of Druid's "JSON config" objects in tests
- Fix minor bug in `BaseNodeRoleWatcher`
* Ensure ByteBuffers allocated in tests get freed.
Many tests had problems where a direct ByteBuffer would be allocated
and then not freed. This is bad because it causes flaky tests.
To fix this:
1) Add ByteBufferUtils.allocateDirect(size), which returns a ResourceHolder.
This makes it easy to free the direct buffer. Currently, it's only used
in tests, because production code seems OK.
2) Update all usages of ByteBuffer.allocateDirect (off-heap) in tests either
to ByteBuffer.allocate (on-heap, which is garbage-collected), or to
ByteBufferUtils.allocateDirect (wherever it seemed like there was a good
reason for the buffer to be off-heap). Make sure to close all direct
holders when done.
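For illustration, test usage would look roughly like this, assuming ResourceHolder exposes the usual get()/close() pair and the package locations shown (both are my reading of the change rather than confirmed API):

```java
import java.nio.ByteBuffer;

import org.apache.druid.collections.ResourceHolder;        // assumed package
import org.apache.druid.java.util.common.ByteBufferUtils;  // assumed package

public class DirectBufferSketch
{
  public static void main(String[] args) throws Exception  // defensive, in case close() is checked
  {
    // try-with-resources frees the direct buffer even if an assertion fails mid-test
    try (ResourceHolder<ByteBuffer> holder = ByteBufferUtils.allocateDirect(1024)) {
      ByteBuffer buffer = holder.get();
      buffer.putInt(0, 42);  // use the off-heap buffer as usual
    }
  }
}
```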
* Changes based on CI results.
* A different approach.
* Roll back BitmapOperationTest stuff.
* Try additional surefire memory.
* Revert "Roll back BitmapOperationTest stuff."
This reverts commit 49f846d9e3.
* Add TestBufferPool.
* Revert Xmx change in tests.
* Better behaved NestedQueryPushDownTest. Exit tests on OOME.
* Fix TestBufferPool.
* Remove T1C from ARM tests.
* Somewhat safer.
* Fix tests.
* Fix style stuff.
* Additional debugging.
* Reset null / expr configs better.
* ExpressionLambdaAggregatorFactory thread-safety.
* Alter forkNode to try to get better info when a JVM crashes.
* Fix buffer retention in ExpressionLambdaAggregatorFactory.
* Remove unused import.
This PR enables ARM builds on Travis. I've ported over the changes from @martin-g on reducing heap requirements for some of the tests to ensure they run well on Travis arm instances.
The latest version of Error Prone now requires Java 11. Upgrading means we can
remove a lot of the maven profile complexity required to run checks with Java 8.
This also requires switching our strict build to use Java 11.
* update error-prone to 2.11
* remove need for specific maven profiles for Java 8 and Java 15
* fix additional Error Prone warnings with Java 11
* update strict build to use Java 11
* Use Druid's extension loading for integration test instead of maven
* fix maven command
* override config path
* load input format extensions and kafka by default; add prepopulated-data group
* all docker-composes are overridable
* fix s3 configs
* override config for all
* fix docker_compose_args
* fix security tests
* turn off debug logs for overlord api calls
* clean up stuff
* revert docker-compose.yml
* fix override config for query error test; fix circular dependency in docker compose
* add back some dependencies in docker compose
* new maven profile for integration test
* example file filter
Add support for Hadoop 3 profiles. Most of the details are captured in #11791.
We use a combination of maven profiles and resource filtering to achieve this. Hadoop 2 is supported by default, and a new maven profile named hadoop3 is created. This will allow the user to choose the profile best suited for the use case.
Fixes #11297.
Description
Description and design in the proposal #11297
Key changed/added classes in this PR
* DataSegmentPusher
* ShuffleClient
* PartitionStat
* PartitionLocation
* IntermediaryDataManager
The dependency-check cron job now purges any cached NVD data before performing the dependency check. Without this, a high-severity CVE kept being reported by this job for a few months after the NVD had been updated to address it.
* support using mariadb connector with mysql extensions
* cleanup and more tests
* fix test
* javadocs, more tests, etc
* style and more test
* more test more better
* missing pom
* more pom
* upgrade error-prone to 2.7.1 and support checks with Java 11+
- upgrade error-prone to 2.7.1
- support running error-prone with Java 11 and above using -Xplugin
instead of custom compiler
- add compiler arguments to ignore warnings/errors in Java 15/16
- introduce strictCompile property to enable strict profiles since we
now need multiple strict profiles for Java 8
- properly exclude all generated source files from error-prone
- fix druid-processing overriding annotation processors from parent pom
- fix druid-core disabling most non-default checks
- align plugin and annotation errorprone versions
- fix / suppress additional issues found by error-prone:
* fix bug in SeekableStreamSupervisor initializing ArrayList size with
the taskGroupId (see the sketch below)
* fix missing @Override annotations
- remove outdated compiler plugin in benchmarks
- remove deleted ParameterPackage error-prone rule
- re-enable checks on benchmark module as well
* fix IntelliJ inspections
* disable LongFloatConversion due to bug in error-prone with JDK 8
* add comment about InsecureCrypto
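The ArrayList bug noted above is the classic capacity/contents mix-up; a minimal sketch (names mine, not the actual supervisor code):

```java
import java.util.ArrayList;
import java.util.List;

public class CapacitySketch
{
  static List<String> buggy(int taskGroupId)
  {
    // Compiles fine, but the int is taken as an initial *capacity*,
    // so the allocation is misleadingly tied to the id rather than
    // to expected contents. Error Prone surfaced this.
    return new ArrayList<>(taskGroupId);
  }

  static List<String> fixed()
  {
    return new ArrayList<>();  // intent: a new empty list
  }
}
```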
* maybe make ci more selective in what it does
* maybe this
* skip self and travis config
* can it be in before_script? let's find out
* nope
* fixes
* oops
* only skip stuff for PR builds, add some tests
* revert unintended change
* more comments, more tests, more better
With this change, Druid will only support ZooKeeper 3.5.x and later.
In order to support Java 15 we need to switch to ZK 3.5.x client libraries and drop support for ZK 3.4.x
(see #10780 for the detailed reasons)
* remove ZooKeeper 3.4.x compatibility
* exclude additional ZK 3.5.x netty dependencies to ensure we use our version
* keep ZooKeeper version used for integration tests in sync with client library version
* remove the need to specify ZK version at runtime for docker
* add support to run integration tests with JDK 15
* build and run unit tests with Java 15 in travis
* Update some dev dependencies, prettify, tslint-fix
* Sort tsconfig keys for easy comparison
* Set noImplicitThis
* Slightly more accurate types
* Bump Jest and related
* Bump react to latest on v16
* Bump node-sass, sass-loader for node14 support
* Remove node-sass-chokidar (unused)
* More unused dependencies
* Fix blueprint imports
* Webpack 5
* Update webpack config for 'process' usage
* Update playwright-chromium
* Emit esnext modules for tree shaking
* Enable source maps in development
* Dedupe
* Bump babel and things
* npm audit fix
* Add .editorconfig file to match prettier settings
* Update licenses (tslib is 0BSD as of 1.11.2)
https://github.com/microsoft/tslib/pull/96
* Require node >= 10
* Use Node 10 to run e2e tests
* Use 'ws' transport mode for dev server (will be default in next version)
* Remove an 'any'
* No sourcemaps in prod
* Exclude .editorconfig from license checks
* Try nvm for setting node version
* Update integration-tests README
Updated the integration-tests README file to include instructions
for setting the `ZK_VERSION` property which is now required to be
set prior to executing any integration test. Also added a note
about the importance of setting the test group parameter when
running integration tests, even when running single tests.
* revert change made to DOCKER_IP doc
* Add default value for zk version
* update travis config to use new zk.version property when running
integration tests
* Remove doc about needing to set ZK_VERSION variable when running
integration tests
* do build and stop action in IT
* change base dir from druidHome to druidHome/integration-tests
* add env DRUID_HOME
* bug fix
* modify stop_sh
* ready to test
* bug fix
* modify dir
* tested on dev
* modify dir
* move DRUID_HOME env
* done
Co-authored-by: yuezhang <yuezhang@freewheel.tv>
* move integration tests from ZooKeeper 3.4.x to 3.5.x
* run a subset of our integration tests with ZK 3.4 for backwards compatibility testing.
* remove need to build separate docker-base image
- use multi-stage build for the base image
- use openjdk base image instead of building our own JDK base
- workaround Debian not including MySQL by using MariaDB
- download mysql connector directly instead of using distro version
* fix incorrect openssl command failing on Debian
* keep mysql connector version in sync with pom version
* ready to test
* tested on dev cluster
* tested
* code review
* add UTs
* ut passed
* optimize imports
* done
* fix checkstyle
* modify uts
* modify logs
* changing the package of SegmentLazyLoadFailCallback.java to org.apache.druid.segment
* merge from master
* modify import orders
* merge from master
* modify logs
* modify docs
* modify logs to rerun ci
Co-authored-by: yuezhang <yuezhang@freewheel.tv>