Add IT-related changes pulled out of PR #12368 (#12673)

This commit contains changes made to the existing ITs to support the new ITs.

Changes:
- Make the "custom node role" code usable by the new ITs. 
- Use the flag `-DskipITs` to skip the integration tests but run the unit tests.
- Use the flag `-DskipUTs` to skip the unit tests but run the "new" integration tests.
- Expand the existing Druid profile, `-P skip-tests`, to skip both ITs and UTs (see the sketch below).
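A minimal sketch of how these flags and the profile combine on the command line (the `install` goal here is just an illustration; use whatever goal your build needs):

```bash
# Run only the unit tests; skip the "new" integration tests.
mvn install -DskipITs=true

# Skip the unit tests; the "new" integration tests still require their profile to be enabled.
mvn install -DskipUTs=true

# Skip both the unit tests and the "new" integration tests.
mvn install -P skip-tests
```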
Paul Rogers 2022-06-25 13:43:59 -07:00 committed by GitHub
parent f7caee3b25
commit f83fab699e
10 changed files with 137 additions and 63 deletions


@ -17,26 +17,26 @@
~ under the License.
-->
Integration Testing
===================
To run integration tests, you have to specify the druid cluster the
tests should use.
Druid comes with the mvn profile integration-tests
for setting up druid running in docker containers, and using that
cluster to run the integration tests.
To use a druid cluster that is already running, use the
mvn profile int-tests-config-file, which uses a configuration file
describing the cluster.
Integration Testing Using Docker
-------------------
Before starting, if you don't already have docker on your machine, install it as described on
[Docker installation instructions](https://docs.docker.com/install/). Ensure that you
have at least 4GiB of memory allocated to the docker engine. (You can verify it
under Preferences > Resources > Advanced.)
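If you'd rather check the allocation from a shell, one way (assuming a reasonably recent Docker) is to ask the engine directly:

```bash
# Show the total memory available to the Docker engine; it should be at least 4GiB.
docker info | grep -i "total memory"
```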
Also set the `DOCKER_IP`
@ -52,49 +52,86 @@ Optionally, you can also set `APACHE_ARCHIVE_MIRROR_HOST` to override `https://a
export APACHE_ARCHIVE_MIRROR_HOST=https://example.com/remote-generic-repo
```
## Running tests against auto brought up Docker containers
This section describes how to start integration tests against Docker containers which will be brought up automatically by the following commands.
If you want to build Docker images and run tests separately, see the next section.
To run all tests from a test group using Docker and Maven, run the following command:
```bash
mvn verify -P integration-tests -Dgroups=<test_group>
```
The list of test groups can be found at
`integration-tests/src/test/java/org/apache/druid/tests/TestNGGroup.java`.
### Run a single test
To run only a single test using Maven:
```bash
mvn verify -P integration-tests -Dgroups=<test_group> -Dit.test=<test_name>
```
Parameters:
* Test Group: Required, as certain test tasks for setup and cleanup are based on the test group. You can find
the test group for a given test as an annotation in the respective test class. A list of test groups can be found at
`integration-tests/src/test/java/org/apache/druid/tests/TestNGGroup.java`. The annotation uses a string
constant defined in `TestNGGroup.java`; be sure to use the constant value, not the name. For example,
if your test has the annotation `@Test(groups = TestNGGroup.BATCH_INDEX)`, then use the argument
`-Dgroups=batch-index`.
* Test Name: Use the fully-qualified class name. For example, `org.apache.druid.tests.BATCH_INDEX`.
* Add `-pl :druid-integration-tests` when running integration tests for the second time or later without changing
the code of core modules in between to skip up-to-date checks for the whole module dependency tree.
* Integration tests can also be run with either Java 8 or Java 11 by adding `-Djvm.runtime=#` to the `mvn` command, where `#`
can either be 8 or 11.
* Druid's configuration (using Docker) can be overridden by providing `-Doverride.config.path=<PATH_TO_FILE>`.
The file must contain one property per line; the key must start with `druid_` and the format should be snake case.
Note that when bringing up Docker containers through Maven with `-Doverride.config.path` provided, the additional
Druid routers for the security group integration tests (permissive tls, no client auth tls, custom check tls) will not be started.
A combined invocation using several of these parameters is sketched after this list.
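For instance (the pairing of the `batch-index` group with `ITIndexerTest` is illustrative; substitute your own group and test class):

```bash
# Run one test class from the batch-index group on Java 11, skipping
# up-to-date checks for unchanged modules; add -Doverride.config.path=<PATH_TO_FILE>
# to override the Docker cluster's Druid configuration.
mvn verify -P integration-tests -pl :druid-integration-tests \
  -Dgroups=batch-index -Dit.test=ITIndexerTest -Djvm.runtime=11
```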
### Debugging test runs
The integration test process is fragile and can fail for many reasons when run on your machine.
Here are some suggestions.
#### Workaround for failed builds
Sometimes the command above may fail for reasons unrelated to the changes you wish to test.
In such cases, a workaround is to build the code first, then use the next section to run
individual tests. To build:
```bash
mvn clean package -P integration-tests -Pskip-static-checks -Pskip-tests -Dmaven.javadoc.skip=true -T1.0C -nsu
```
#### Keep the local Maven cache fresh
As you work with issues, you may be tempted to reuse already-built jars. That only works for about 24 hours,
after which Maven will helpfully start downloading snapshot jars from an upstream repository.
This is, unfortunately, a feature of the build scripts. The `-nsu` option above tries to force
Maven to only look locally for snapshot jars.
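As a concrete (assumed) example, a follow-up run that reuses the jars you just built rather than fetching newer snapshots might look like:

```bash
# -nsu keeps Maven from replacing your locally built snapshot jars with
# snapshots downloaded from the upstream repository.
mvn verify -P integration-tests -pl :druid-integration-tests -nsu -Dgroups=batch-index
```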
## Running tests against manually brought up Docker containers
1. Build docker images.
From the root module, run the following Maven command:
```bash
mvn clean install -pl integration-tests -P integration-tests -Ddocker.run.skip=true -Dmaven.test.skip=true -Ddocker.build.hadoop=true
```
> **NOTE**: `-Ddocker.build.hadoop=true` is optional if you don't run tests against Hadoop.
2. Choose a docker-compose file to start containers.
There are a few different Docker compose yamls located in the "docker" folder that can be used to start containers for different tests.
- To start a basic Druid cluster (skip this if running the Druid cluster with override configs):
```bash
@ -105,19 +142,19 @@ Druid routers for security group integration test (permissive tls, no client aut
```bash
OVERRIDE_ENV=<PATH_TO_ENV> docker-compose -f docker-compose.yml up
```
- To start tests against Hadoop
```bash
docker-compose -f docker-compose.druid-hadoop.yml up
```
- To start tests against security group
```bash
docker-compose -f docker-compose.yml -f docker-compose.security.yml up
```
3. Run tests.
Execute the following command from the root module, where `<test_name>` is the class name of a test, such as ITIndexerTest.
```bash
mvn verify -P integration-tests -pl integration-tests -Ddocker.build.skip=true -Ddocker.run.skip=true -Dit.test=<test_name>
@ -178,14 +215,13 @@ The values shown above are for the default docker compose cluster. For other clu
```
- docker-compose.druid-hadoop.yml
For starting an Apache Hadoop 2.8.5 cluster with the same setup as the Druid tutorial.
```bash
docker-compose -f docker-compose.druid-hadoop.yml up
```
## Tips & tricks for debugging and developing integration tests
### Useful mvn command flags
@ -262,7 +298,7 @@ Make sure that you have at least 6GiB of memory available before you run the tes
To run tests on any druid cluster that is already running, create a configuration file:
{
"broker_host": "<broker_ip>",
"broker_port": "<broker_port>",
"router_host": "<router_ip>",
@ -282,7 +318,7 @@ Set the environment variable `CONFIG_FILE` to the name of the configuration file
export CONFIG_FILE=<config file name>
```
To run all tests from a test group using mvn, run the following command:
(list of test groups can be found at integration-tests/src/test/java/org/apache/druid/tests/TestNGGroup.java)
```bash
mvn verify -P int-tests-config-file -Dgroups=<test_group>
@ -297,10 +333,10 @@ Running a Test That Uses Cloud
-------------------
The integration test that indexes from Cloud or uses Cloud as deep storage is not run as part
of the integration test run discussed above. Running these tests requires the user to provide
their own Cloud.
Currently, the integration test supports Amazon Kinesis, Google Cloud Storage, Amazon S3, and Microsoft Azure.
These can be run by providing "kinesis-index", "kinesis-data-format", "gcs-deep-storage", "s3-deep-storage", or "azure-deep-storage"
to -Dgroups for Amazon Kinesis, Google Cloud Storage, Amazon S3, and Microsoft Azure respectively. Note that only
one group should be run per mvn command.
@ -309,13 +345,13 @@ For all the Cloud Integration tests, the following will also need to be provided
integration-tests/docker/environment-configs/override-examples/ directory for env vars to provide for each Cloud.
For Amazon Kinesis, the following will also need to be provided:
1) Provide -Ddruid.test.config.streamEndpoint=<STREAM_ENDPOINT> with the endpoint of your stream set.
For example, kinesis.us-east-1.amazonaws.com. (A full invocation is sketched below.)
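Assuming the Kinesis credentials are supplied through an override file based on the override-examples directory, a run of the Kinesis indexing group might look roughly like this:

```bash
# Run the Kinesis indexing tests against your own Kinesis stream.
mvn verify -P integration-tests -Dgroups=kinesis-index \
  -Doverride.config.path=<PATH_TO_FILE> \
  -Ddruid.test.config.streamEndpoint=kinesis.us-east-1.amazonaws.com
```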
For Google Cloud Storage, Amazon S3, and Microsoft Azure, the following will also need to be provided:
1) Set the bucket and path for your test data. This can be done by setting -Ddruid.test.config.cloudBucket and
-Ddruid.test.config.cloudPath in the mvn command or setting "cloud_bucket" and "cloud_path" in the config file.
2) Copy wikipedia_index_data1.json, wikipedia_index_data2.json, and wikipedia_index_data3.json
located in integration-tests/src/test/resources/data/batch_index/json to your Cloud storage at the location set in step 1.
(One way to do the copy is sketched below.)
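For example, if S3 is the deep storage and the AWS CLI is already configured, staging the three files could look roughly like the sketch below; GCS and Azure have analogous copy tools. The bucket and path must match what you set in step 1.

```bash
# Copy the batch-index test data to the bucket/path the tests will read from.
for f in wikipedia_index_data1.json wikipedia_index_data2.json wikipedia_index_data3.json; do
  aws s3 cp "integration-tests/src/test/resources/data/batch_index/json/$f" \
    "s3://<cloud_bucket>/<cloud_path>/$f"
done
```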
For Google Cloud Storage, in addition to the above, you will also have to:
@ -326,21 +362,21 @@ For example, to run integration test for Google Cloud Storage:
mvn verify -P integration-tests -Dgroups=gcs-deep-storage -Doverride.config.path=<PATH_TO_FILE> -Dresource.file.dir.path=<PATH_TO_FOLDER> -Ddruid.test.config.cloudBucket=test-bucket -Ddruid.test.config.cloudPath=test-data-folder/
```
Running a Test That Uses Hadoop
-------------------
The integration test that indexes from hadoop is not run as part
of the integration test run discussed above. This is because druid
test clusters might not, in general, have access to hadoop.
This also applies to integration tests that use Hadoop HDFS as an inputSource or as deep storage.
To run integration test that uses Hadoop, you will have to run a Hadoop cluster. This can be done in two ways:
1) Run Druid Docker test clusters with Hadoop container by passing -Dstart.hadoop.docker=true to the mvn command. If you have not already built the hadoop image, you will also need to add -Ddocker.build.hadoop=true to the mvn command.
2) Run your own Druid + Hadoop cluster and specified Hadoop configs in the configuration file (CONFIG_FILE).
Currently, hdfs-deep-storage and other <cloud>-deep-storage integration test groups can only be run with
Druid Docker test clusters by passing -Dstart.hadoop.docker=true to start Hadoop container.
You will also have to provide -Doverride.config.path=<PATH_TO_FILE> with your Druid's Hadoop configs set.
See the integration-tests/docker/environment-configs/override-examples/hdfs directory for an example.
Note that if the integration test you are running also uses another cloud extension (S3, Azure, GCS), additional
credentials/configs may need to be set in the same file as your Druid Hadoop configs.
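Putting these pieces together, one way to run the HDFS deep-storage group against the Docker-based Hadoop container might be the following sketch (drop `-Ddocker.build.hadoop=true` if you have already built the Hadoop image):

```bash
# Start the Hadoop container alongside the Druid test cluster and run the HDFS tests.
mvn verify -P integration-tests -Dgroups=hdfs-deep-storage \
  -Dstart.hadoop.docker=true -Ddocker.build.hadoop=true \
  -Doverride.config.path=<PATH_TO_FILE>
```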
@ -383,9 +419,9 @@ do the following instead of running the tests using mvn:
On a machine that can do mvn builds:
```bash
cd druid
mvn clean package
cd integration-tests
mvn dependency:copy-dependencies package
```
@ -450,23 +486,23 @@ Refer ITIndexerTest as an example on how to use dependency Injection
### Running test methods in parallel
By default, test methods in a test class will be run in sequential order one at a time. Test methods for a given test
class can be set to run in parallel (multiple test methods of each class running at the same time) by excluding
the given class/package from the "AllSerializedTests" test tag section and including it in the "AllParallelizedTests"
test tag section in integration-tests/src/test/resources/testng.xml. TestNG uses two parameters, i.e.,
`thread-count` and `data-provider-thread-count`, for parallel test execution, which are both set to 2 for Druid integration tests.
For tests using parallel execution with a data provider, you will also need to set `@DataProvider(parallel = true)`
on your data provider method in your test class. Note that for tests using parallel execution with a data provider, the test
class does not need to be in the "AllParallelizedTests" test tag section, and if it is in the "AllParallelizedTests"
test tag section it will actually be run with `thread-count` times `data-provider-thread-count` threads.
You may want to modify those values for faster execution.
See https://testng.org/doc/documentation-main.html#parallel-running and https://testng.org/doc/documentation-main.html#parameters-dataproviders for details.
Please be mindful when adding tests to the "AllParallelizedTests" test tag that the tests can run in parallel with
other tests from the same class at the same time, i.e., a test must not modify/restart/stop the druid cluster or other dependency containers,
must not use excessive memory that starves other concurrent tasks, and must not modify and/or use a task,
supervisor, or datasource it did not create.
### Limitation of Druid cluster in Travis environment
@ -474,6 +510,6 @@ By default, integration tests are run in Travis environment on commits made to o
required to pass for a PR to be eligible to be merged. Here are known issues and limitations to the Druid docker cluster
running on the Travis machine that may cause the tests to fail:
- Number of concurrently running tasks. Although the default Druid cluster config sets the maximum number of tasks (druid.worker.capacity) to 10,
the actual maximum can be lower depending on the type of the tasks. For example, running 2 range partitioning compaction tasks with 2 subtasks each
(for a total of 6 tasks) concurrently can cause the cluster to intermittently fail. This can cause the Travis job to become stuck until it times out (50 minutes)
and/or terminates after 10 mins of not receiving new output.


@ -30,7 +30,7 @@ cp -R docker $SHARED_DIR/docker
pushd ../
rm -rf distribution/target/apache-druid-$DRUID_VERSION-integration-test-bin
mvn -P skip-static-checks,skip-tests -T1C -Danimal.sniffer.skip=true -Dcheckstyle.skip=true -Ddruid.console.skip=true -Denforcer.skip=true -Dforbiddenapis.skip=true -Dmaven.javadoc.skip=true -Dpmd.skip=true -Dspotbugs.skip=true install -Pintegration-test
mv distribution/target/apache-druid-$DRUID_VERSION-integration-test-bin/lib $SHARED_DIR/docker/lib
mv distribution/target/apache-druid-$DRUID_VERSION-integration-test-bin/extensions $SHARED_DIR/docker/extensions
popd


@ -66,6 +66,7 @@ public class CliCustomNodeRole extends ServerRunnable
public static final String SERVICE_NAME = "custom-node-role";
public static final int PORT = 9301;
public static final int TLS_PORT = 9501;
public static final NodeRole NODE_ROLE = new NodeRole(CliCustomNodeRole.SERVICE_NAME);
public CliCustomNodeRole()
{
@ -75,7 +76,7 @@ public class CliCustomNodeRole extends ServerRunnable
@Override
protected Set<NodeRole> getNodeRoles(Properties properties)
{
return ImmutableSet.of(NODE_ROLE);
}
@Override


@ -23,6 +23,19 @@ import javax.annotation.Nullable;
import java.util.Map;
/**
* Configuration for tests. Opinionated about the shape of the cluster:
* there are one or two Coordinators or Overlords, and zero or one of
* everything else.
* <p>
* To work in Docker (and K8s) there are two methods per host:
* {@code get<service>Host()} which returns the host as seen from
* the test machine (meaning the proxy host), and
* {@code get<service>InternalHost()} which returns the name of
* the host as seen by itself and other services: the host published
* in ZK, which is the host known to the Docker/K8s overlay network.
* <p>
* The {@code get<service>Url()} methods return URLs relative to
* the test, using the proxy host for Docker and K8s.
*/
public interface IntegrationTestingConfig
{


@ -30,7 +30,6 @@ import javax.ws.rs.core.MediaType;
public class SqlResourceTestClient extends AbstractQueryResourceTestClient<SqlQuery>
{
@Inject
SqlResourceTestClient(
ObjectMapper jsonMapper,


@ -41,7 +41,6 @@ import java.util.function.Function;
public abstract class AbstractTestQueryHelper<QueryResultType extends AbstractQueryWithResults>
{
public static final Logger LOG = new Logger(TestQueryHelper.class);
protected final AbstractQueryResourceTestClient queryClient;
@ -54,7 +53,7 @@ public abstract class AbstractTestQueryHelper<QueryResultType extends AbstractQu
@Inject
AbstractTestQueryHelper(
ObjectMapper jsonMapper,
AbstractQueryResourceTestClient<?> queryClient,
IntegrationTestingConfig config
)
{
@ -70,7 +69,7 @@ public abstract class AbstractTestQueryHelper<QueryResultType extends AbstractQu
AbstractTestQueryHelper(
ObjectMapper jsonMapper,
AbstractQueryResourceTestClient<?> queryClient,
String broker,
String brokerTLS,
String router,
@ -103,9 +102,13 @@ public abstract class AbstractTestQueryHelper<QueryResultType extends AbstractQu
public void testQueriesFromString(String str) throws Exception
{
testQueriesFromString(getQueryURL(broker), str);
if (!broker.equals(brokerTLS)) {
  testQueriesFromString(getQueryURL(brokerTLS), str);
}
testQueriesFromString(getQueryURL(router), str);
if (!router.equals(routerTLS)) {
  testQueriesFromString(getQueryURL(routerTLS), str);
}
}
public void testQueriesFromFile(String url, String filePath) throws Exception


@ -272,7 +272,7 @@ public class DruidClusterAdminClient
}
catch (Throwable e) {
//
// suppress stack trace logging for some specific exceptions
// to reduce excessive stack trace messages when waiting druid nodes to start up
//
if (e.getCause() instanceof ChannelException) {


@ -19,7 +19,6 @@
package /*CHECKSTYLE.OFF: PackageName*/org.testng/*CHECKSTYLE.ON: PackageName*/;
import org.apache.druid.java.util.common.logger.Logger;
import org.apache.druid.testing.utils.SuiteListener;
import org.testng.internal.IConfiguration;
import org.testng.internal.Systematiser;
@ -34,7 +33,6 @@ import java.util.List;
*/
public class DruidTestRunnerFactory implements ITestRunnerFactory
{
private static final Logger LOG = new Logger(DruidTestRunnerFactory.class);
private static final SuiteListener SUITE_LISTENER = new SuiteListener();
@Override


@ -40,7 +40,7 @@ import org.testng.annotations.Test;
* -Ddruid.test.config.s3AssumeRoleWithExternalId or setting "s3_assume_role_with_external_id" in the config file.
* -Ddruid.test.config.s3AssumeRoleExternalId or setting "s3_assume_role_external_id" in the config file.
* -Ddruid.test.config.s3AssumeRoleWithoutExternalId or setting "s3_assume_role_without_external_id" in the config file.
* The credentials provided in OVERRIDE_S3_ACCESS_KEY and OVERRIDE_S3_SECRET_KEY must be able to assume these roles.
* These roles must also have access to the bucket and path for your data in #1.
* (s3AssumeRoleExternalId is the external id for s3AssumeRoleWithExternalId, while s3AssumeRoleWithoutExternalId
* should not have external id set)

pom.xml

@ -127,6 +127,13 @@
<!-- Allow the handful of flaky tests with transient failures to pass. -->
<surefire.rerunFailingTestsCount>3</surefire.rerunFailingTestsCount>
<!-- Using -DskipTests or -P skip-tests will skip both the unit tests and
the "new" integration tests. To skip just the unit tests (but run the
new integration tests by setting the required profile), use -DskipUTs=true.
-->
<skipUTs>false</skipUTs>
<skipITs>false</skipITs>
</properties>
<modules>
@ -1169,6 +1176,7 @@
<excludes>
<!-- Ignore initialization classes, these are tested by the integration tests. -->
<exclude>org/apache/druid/cli/Cli*</exclude>
<exclude>org/apache/druid/cli/GuiceRunnable.class</exclude>
<exclude>org/apache/druid/cli/*JettyServerInitializer*</exclude>
<exclude>org/apache/druid/server/initialization/*Module*</exclude>
<exclude>org/apache/druid/server/initialization/jetty/*Module*</exclude>
@ -1183,6 +1191,13 @@
<exclude>org/apache/druid/benchmark/**/*</exclude> <!-- benchmarks -->
<exclude>org/apache/druid/**/*Benchmark*</exclude> <!-- benchmarks -->
<exclude>org/apache/druid/testing/**/*</exclude> <!-- integration-tests -->
<!-- The ITs have test code sprinkled through the module tree. Remove the following
once the old version is retired. -->
<exclude>org/apache/druid/server/coordination/ServerManagerForQueryErrorTest.class</exclude>
<exclude>org/apache/druid/guice/SleepModule.class</exclude>
<exclude>org/apache/druid/guice/CustomNodeRoleClientModule.class</exclude>
<exclude>org/apache/druid/cli/CustomNodeRoleCommandCreator.class</exclude>
<exclude>org/apache/druid/cli/QueryRetryTestCommandCreator.class</exclude>
<!-- Exceptions -->
<exclude>org/apache/druid/query/TruncatedResponseContextException.class</exclude>
@ -1513,6 +1528,9 @@
<!--@TODO After fixing https://github.com/apache/druid/issues/4964 remove this parameter-->
-Ddruid.indexing.doubleStorage=double
</argLine>
<!-- Skip the tests which Surefire runs. Surefire runs the unit tests,
while its sister plugin, Failsafe, runs the "new" ITs. -->
<skipTests>${skipUTs}</skipTests>
<trimStackTrace>false</trimStackTrace>
<!-- our tests are very verbose, let's keep the volume down -->
<redirectTestOutputToFile>true</redirectTestOutputToFile>
@ -1829,7 +1847,7 @@
<exclude>docker/service-supervisords/*.conf</exclude>
<exclude>target/**</exclude>
<exclude>licenses/**</exclude>
<exclude>**/test/resources/**</exclude>
<exclude>**/test/resources/**</exclude> <!-- test data for "old" ITs. -->
<exclude>**/derby.log</exclude>
<exclude>**/jvm.config</exclude>
<exclude>**/*.avsc</exclude>
@ -1837,6 +1855,8 @@
<exclude>**/*.json</exclude>
<exclude>**/*.parq</exclude>
<exclude>**/*.parquet</exclude>
<exclude>**/*.pmd</exclude> <!-- Artifact of maven-pmd-plugin -->
<exclude>**/*.pmdruleset.xml</exclude> <!-- Artifact of maven-pmd-plugin -->
<exclude>**/docker/schema-registry/*</exclude>
<exclude>LICENSE</exclude>
<exclude>LICENSE.BINARY</exclude>
@ -1911,8 +1931,12 @@
</profile>
<profile>
<id>skip-tests</id>
<activation>
<activeByDefault>false</activeByDefault>
</activation>
<properties>
<skipTests>true</skipTests>
<skipUTs>true</skipUTs> <!-- Skip only UTs -->
<skipITs>true</skipITs> <!-- ITs are also behind a profile -->
<jacoco.skip>true</jacoco.skip>
</properties>
</profile>