# Revised Integration Tests

This directory builds a Docker image for Druid, then uses that image, along with test configuration, to run tests. This version greatly evolves the integration tests from their earlier form. See the History section for details.
## Shortcuts

A list of the most common commands, once you're familiar with the framework. If you are new to the framework, see the Quickstart section for an explanation.
### Build Druid

```bash
./it.sh build
```
### Build the Test Image

```bash
./it.sh image
```
### Run an IT from the Command Line

```bash
./it.sh test <category>
```

Where `<category>` is one of the test categories.
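For example, to run a single category end to end (the category name `BatchIndex` below is just an illustrative example; use one of the categories defined in this directory):

```bash
# Run one test category; "BatchIndex" is an example category name.
./it.sh test BatchIndex
```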
### Run an IT from the IDE

Start the cluster:

```bash
./it.sh up <category>
```

Where `<category>` is one of the test categories. Then launch the test as a JUnit test.
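As a rough sketch, an IT is an ordinary JUnit test class tagged with its category. The runner and category class names below (`DruidTestRunner`, `BatchIndex`) are assumptions for illustration; see the Test structure documentation for the actual conventions.

```java
// Hypothetical sketch of an IT class. The runner and category classes shown
// here are assumed names for illustration; check the Test structure docs.
import org.junit.Test;
import org.junit.experimental.categories.Category;
import org.junit.runner.RunWith;

@RunWith(DruidTestRunner.class)   // assumed: framework-provided JUnit runner
@Category(BatchIndex.class)       // assumed: test category marker class
public class ITExampleTest
{
  @Test
  public void testAgainstRunningCluster()
  {
    // Exercise Druid via the cluster started with `./it.sh up <category>`.
  }
}
```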
## Contents
- Goals
- Quickstart
- Create a new test
- Maven configuration
- Docker image
- Druid configuration
- Docker Compose configuration
- Test configuration
- Test structure
- Test runtime semantics
- Scripts
- Dependencies
- Debugging
Background information:
- Next steps
- Test conversion - How to convert existing tests.
- History - Comparison with prior integration tests.
## Goals
The goal of the present version is to simplify development.
- Speed up the Druid test image build by avoiding download of dependencies. (Instead, any such dependencies are managed by Maven and reside in the local build cache.)
- Use official images for dependencies to avoid the need to download, install, and manage those dependencies.
- Make it easy to manually build the image, launch a cluster, and run a test against the cluster.
- Convert tests to JUnit so that they will easily run in your favorite IDE, just like other Druid tests.
- Use the actual Druid build from `distribution` so we know what is tested.
- Leverage, don't fight, Maven.
- Run the integration tests easily on a typical development machine.
By meeting these goals, you can quickly:
- Build the Druid distribution.
- Build the Druid image. (< 1 minute)
- Launch the cluster for the particular test. (a few seconds)
- Run the test any number of times in your debugger.
- Clean up the test artifacts.
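Put together, a typical cycle looks roughly like the following sketch. The `down` subcommand and the `BatchIndex` category name are assumptions here; see the Scripts section for the actual script usage.

```bash
# Sketch of one full development cycle; "BatchIndex" is an example category.
./it.sh build            # build the Druid distribution
./it.sh image            # build the test Docker image
./it.sh up BatchIndex    # launch the cluster for this test category
# ... run and debug the JUnit tests from your IDE as often as needed ...
./it.sh down BatchIndex  # tear down the cluster (subcommand assumed; see Scripts)
```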
As a result, the fastest path to develop a Druid patch or feature is:
- Create a normal unit test and run it to verify your code.
- Create an integration test that double-checks the code in a live cluster.