* Create properties for PublicKeyHandler to read existing keys from disk (see the sketch after this list)
* Move pregenerated keys from core/test-files to test-framework
* Update tests to use existing keys instead of new keys each run
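A rough sketch of how a test might point PublicKeyHandler at pregenerated keys shipped with the test framework; the system property names and file paths below are illustrative assumptions, not necessarily the exact ones added here.

    // Sketch only: property names and key file locations are hypothetical.
    import java.nio.file.Path;
    import java.nio.file.Paths;

    public class PregeneratedKeyUtil {

      /** Point PublicKeyHandler at an RSA key pair shipped with the test framework. */
      public static void usePregeneratedKeys(Path testFrameworkResources) {
        System.setProperty("pkiHandlerPrivateKeyPath",
            testFrameworkResources.resolve("cryptokeys/priv_key512_pkcs8.pem").toString());
        System.setProperty("pkiHandlerPublicKeyPath",
            testFrameworkResources.resolve("cryptokeys/pub_key512.der").toString());
      }

      public static void main(String[] args) {
        usePregeneratedKeys(Paths.get("solr/test-framework/src/resources"));
      }
    }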
SOLR-13996: Refactor HttpShardHandler.prepDistributed method into smaller pieces
This commit introduces an experimental interface named ReplicaSource. It has two sub-classes: CloudReplicaSource (for SolrCloud) and LegacyReplicaSource (for non-cloud clusters). The prepDistributed method now delegates to these sub-classes depending on whether the cluster is running in cloud mode.
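A simplified sketch of that split; the method names and constructor arguments here are assumptions for illustration, not the actual interface.

    // Illustration only: real method names/signatures may differ.
    import java.util.List;

    interface ReplicaSource {
      /** Number of shards this request fans out to. */
      int getSliceCount();

      /** Replica URLs for one shard, in the order they should be tried. */
      List<String> getReplicasBySlice(int sliceNumber);
    }

    /** Resolves replicas from the ZooKeeper cluster state (SolrCloud mode). */
    class CloudReplicaSource implements ReplicaSource {
      private final List<List<String>> replicas;

      CloudReplicaSource(List<List<String>> replicasFromClusterState) {
        this.replicas = replicasFromClusterState;
      }

      public int getSliceCount() { return replicas.size(); }
      public List<String> getReplicasBySlice(int sliceNumber) { return replicas.get(sliceNumber); }
    }

    /** Uses the shards listed explicitly in the request (non-cloud, legacy mode). */
    class LegacyReplicaSource implements ReplicaSource {
      private final List<List<String>> shards;

      LegacyReplicaSource(List<List<String>> shardsFromRequest) {
        this.shards = shardsFromRequest;
      }

      public int getSliceCount() { return shards.size(); }
      public List<String> getReplicasBySlice(int sliceNumber) { return shards.get(sliceNumber); }
    }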
* Use Caffeine impl and weak values (to the schema). Previously the cache never evicted! (See the sketch after this list.)
* Now populating the configSet name from ZK into CloudDescriptor when CloudDescriptor is loaded
* The actual schema name needs to be deterministic now; a fallback from a non-existent managed-schema to schema.xml would thwart this cache
* A test conf/core.properties wasn't actually used and became a problem in its weird location after I refactored some logic
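A minimal sketch of the caching approach from the first bullet, assuming Caffeine is on the classpath; the key and value types are simplified stand-ins for the real schema cache.

    // Weak values let entries be evicted once nothing else references the schema.
    import com.github.benmanes.caffeine.cache.Cache;
    import com.github.benmanes.caffeine.cache.Caffeine;

    public class SchemaCacheSketch {
      // Keyed by something deterministic, e.g. "configSetName/schemaName/version".
      private final Cache<String, Object> schemaCache = Caffeine.newBuilder()
          .weakValues()
          .build();

      public Object getOrLoad(String cacheKey) {
        return schemaCache.get(cacheKey, key -> loadSchema(key));
      }

      private Object loadSchema(String cacheKey) {
        // Placeholder for parsing managed-schema (or schema.xml).
        return new Object();
      }
    }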
Jetty 9.4.16.v20190411 and up introduced separate
client and server SslContextFactory implementations.
This split requires the proper use of
SslContextFactory in client and server configs.
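In code, the distinction looks roughly like this (a simplified sketch, not the actual Solr configuration; paths and passwords are placeholders):

    // Simplified sketch: Jetty 9.4.16+ requires SslContextFactory.Server on the
    // server side and SslContextFactory.Client on the client side.
    import org.eclipse.jetty.util.ssl.SslContextFactory;

    public class SslFactorySketch {
      static SslContextFactory.Server serverFactory(boolean needClientAuth) {
        SslContextFactory.Server factory = new SslContextFactory.Server();
        factory.setKeyStorePath("/path/to/keystore.jks");   // illustrative path
        factory.setKeyStorePassword("secret");
        factory.setNeedClientAuth(needClientAuth);          // e.g. SOLR_SSL_NEED_CLIENT_AUTH
        return factory;
      }

      static SslContextFactory.Client clientFactory() {
        SslContextFactory.Client factory = new SslContextFactory.Client();
        factory.setTrustStorePath("/path/to/truststore.jks");
        factory.setTrustStorePassword("secret");
        return factory;
      }
    }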
This fixes the following
* SSL with SOLR_SSL_NEED_CLIENT_AUTH not working since v8.2.0
* Http2SolrClient SSL not working in branch_8x
Signed-off-by: Kevin Risden <krisden@apache.org>
Solr tests now have a policy similar to Lucene's: loopback use only. If a
test tries to resolve or connect to the internet, it will get a SecurityException.
Some Solr tests explicitly try to talk to dead nodes with real networking.
This is not good and is asking for trouble, so they now use low loopback
port numbers instead of multicast addresses; the idea is that connections
fail faster. These addresses are moved to constants so that stuff isn't
copy-pasted everywhere, in case we have to do something different later.
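Something along these lines; the constant names and port values are illustrative, not the exact ones used in the tests.

    // Illustrative constants for "dead node" addresses: low loopback ports that
    // nothing listens on, so connection attempts fail fast instead of hanging
    // on an unroutable multicast address.
    public final class DeadHostConstants {
      public static final String DEAD_HOST_1 = "127.0.0.1:4";
      public static final String DEAD_HOST_2 = "127.0.0.1:6";
      public static final String DEAD_HOST_3 = "127.0.0.1:8";

      private DeadHostConstants() {}
    }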
This removes the Solr security manager hacks
for Hadoop. It does so by:
* Using a fake group mapping class instead of ShellGroupMapping (see the sketch below)
* Copying a few Hadoop classes and modifying them so tests need no Shell
* Nulling out some of the static variables in the tests
The Hadoop files were taken from Apache Hadoop 3.2.0
and copied into the test package so they are only picked up
during tests. They were modified to remove the need to
shell out for access. The assumption is that these
HDFS integration tests only run on Unix-based systems,
so Windows compatibility was removed from some
of the modified classes. The long-term goal is to remove
these custom Hadoop classes. All the copied classes are
in the org.apache.hadoop package.
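For the first bullet, a sketch of what a shell-free group mapping for tests might look like, assuming the Hadoop 3.2.x GroupMappingServiceProvider interface; the class name and the returned group are illustrative.

    // Sketch of a "fake" group mapping for tests: answers group lookups without
    // shelling out. Class name and group values are hypothetical.
    import java.io.IOException;
    import java.util.Collections;
    import java.util.List;

    import org.apache.hadoop.security.GroupMappingServiceProvider;

    public class FakeGroupMapping implements GroupMappingServiceProvider {

      @Override
      public List<String> getGroups(String user) throws IOException {
        return Collections.singletonList("supergroup"); // no shell call
      }

      @Override
      public void cacheGroupsRefresh() throws IOException {
        // no-op: nothing is cached
      }

      @Override
      public void cacheGroupsAdd(List<String> groups) throws IOException {
        // no-op
      }
    }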
Signed-off-by: Kevin Risden <krisden@apache.org>
This groundwork commit allows tests to randomize request content-type
more flexibly. This will be taken advantage of by subsequent commits.
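As a rough illustration of the idea (not the actual test utility; the helper name and the content-type list are assumptions):

    // Hypothetical helper a test could use to pick a random request content-type.
    import java.util.Random;

    public class ContentTypeRandomizer {
      private static final String[] CONTENT_TYPES = {
          "application/json", "application/xml", "application/javabin"
      };

      public static String randomContentType(Random random) {
        return CONTENT_TYPES[random.nextInt(CONTENT_TYPES.length)];
      }
    }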
Co-Authored-By: Thomas Woeckinger
Closes: #755
* Refactor the existing workaround in BasicAuthIntegrationTest up into SolrCloudAuthTestCase for reuse in JWTAuthPluginIntegrationTest
* Simplify BasicAuthOnSingleNodeTest and PKIAuthenticationIntegrationTest to use their existing (static) security settings on creation of MiniSolrCloud. Since they no longer modify security.json once the nodes are alive, the issue no longer affects them.
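A sketch of the second point, assuming the MiniSolrCloudCluster builder's withSecurityJson option; the test class name and the abbreviated security.json contents are illustrative.

    // Configure security.json when the cluster is built, instead of modifying it
    // after the nodes are already running.
    import org.apache.solr.cloud.SolrCloudTestCase;
    import org.junit.BeforeClass;

    public class StaticSecurityJsonTest extends SolrCloudTestCase {

      private static final String SECURITY_JSON =
          "{\"authentication\":{\"class\":\"solr.BasicAuthPlugin\",\"blockUnknown\":true,"
          + "\"credentials\":{\"solr\":\"...\"}}}"; // credentials abbreviated

      @BeforeClass
      public static void setupCluster() throws Exception {
        configureCluster(2)
            .withSecurityJson(SECURITY_JSON)   // security settings in place before nodes start
            .addConfig("conf", configset("cloud-minimal"))
            .configure();
      }
    }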