Note: Previously, the Maven JAR plugin configuration embedded OSGi metadata in each module's generated manifest file. That configuration was relocated to the project/bnd.bnd file in a previous commit and is now handled by the bnd plugin.
Replace the substituted GSON package names with the original ones provided by the vendor.
Reduce the OSGi metadata declarations of the core module, since the artificial package org.jclouds.json.gson.internal has been removed.
Remove the Gson module and its children, the Gson bundle and the shaded Gson.
Remove the duplication-conflict and checkstyle rules that became obsolete with the removal of the internal Gson module.
Add the Maven repository that hosts a custom version of the Gson library which exports all of its packages.
Remove the custom Gson repository
Remove the declaration of the repository that serves a custom GSON build. The build now uses GSON in its original form from the vendor, distributed through the standard distribution channel. The group, artifact, and version identifiers correspond to the latest stable GSON release.
Integrate the GSON library into the JClouds core bundle
The change contained in this commit puts the GSON library on the classpath of the JClouds core module.
After several tests with Karaf and the Karaf JClouds integration, it turned out that there are only a limited number of ways to keep the deployment effort low, especially when the Maven identifier matches that of the original GSON library.
Specifically, Karaf ships with a set of predefined Maven repositories that can easily be customized, but controlling the order in which a particular repository resolves to the customized GSON library is harder. In plain OSGi applications, which lack such a management function, I imagine this configuration would be even more complicated. A unique identifier would certainly help, but then we are back to step one.
Although I honestly don't like to see this kind of approach in a codebase I work on, there are not many alternatives given the deployment constraints described above.
The approach may at least ease the problem in the short term. In a further mid-term step, however, the problem must be addressed in general.
AWS response data is encoded in UTF-8. Creating a String from said data
using the JVM's default charset results in incorrect encoding on
environments in which the JVM's default charset is not UTF-8.
https://issues.apache.org/jira/browse/JCLOUDS-1509
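A minimal sketch of the difference, assuming the response payload arrives as a byte array (class and variable names are illustrative):

    import java.nio.charset.StandardCharsets;

    public class CharsetExample {
        public static void main(String[] args) {
            byte[] payload = "Größe".getBytes(StandardCharsets.UTF_8);

            // Broken: decodes with the JVM default charset, which may not be
            // UTF-8 (e.g. windows-1252), corrupting multi-byte characters.
            String wrong = new String(payload);

            // Fixed: decode explicitly as UTF-8, matching the AWS encoding.
            String right = new String(payload, StandardCharsets.UTF_8);

            System.out.println(wrong + " vs " + right);
        }
    }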
* JCLOUDS-1166: Relocate the gson internal package to be able to keep using it
* Fixes
* Fix import order and shaded jar
* More fixes
* Proper dependency configuration
* Fix typos
* Bring back duplicate exclusions
* JCLOUDS-1496: Update maven-compiler-plugin for increased JDK compatibility
* increase maven-compiler-plugin version
* A space change to trigger Jenkins again
* Another space change to trigger Jenkins again
These calls should provide a descriptive second argument rather than repeating the first argument, which is null in the failure case and therefore yields a useless message. This also uncovered a logic error in CreateVolumeResponseHandler.
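Assuming these are Guava checkNotNull calls, the pattern and its fix look roughly like this (field and class names are illustrative):

    import static com.google.common.base.Preconditions.checkNotNull;

    public class Volume {
        private final String region;

        Volume(String region) {
            // Broken: the message argument is the value itself, which is null
            // exactly when the check fails, so the exception just says "null".
            //   this.region = checkNotNull(region, region);

            // Fixed: a descriptive message naming the missing argument.
            this.region = checkNotNull(region, "region");
        }
    }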
Previously getBlobKeysInsideContainer returned all keys, and LocalBlobStore filtered them afterwards. Now getBlobKeysInsideContainer filters by prefix, which can dramatically decrease the number of keys returned, especially for the filesystem provider. Further optimizations are possible for the delimiter case.
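A sketch of the shape of the change, assuming a strategy interface like the one described above (the actual jclouds signatures may differ in detail):

    interface LocalStorageStrategy {
        // Before: returned every key; LocalBlobStore filtered by prefix
        // afterwards, touching the whole container.
        Iterable<String> getBlobKeysInsideContainer(String container);

        // After: the prefix is pushed down, so a provider can prune early;
        // the filesystem provider can skip directory subtrees that cannot
        // match the prefix.
        Iterable<String> getBlobKeysInsideContainer(String container, String prefix);
    }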
A prior commit introduced a bug in the computation of the MPU ETag value: it concatenated the hexadecimal ETag strings rather than operating on their underlying bytes.
S3 uses a different ETag for multipart uploads (MPUs) than for regular objects. The MPU ETag consists of the MD5 hash of the concatenated binary ETags of the individual parts, followed by the number of parts (separated by "-").
The patch changes LocalBlobStore's implementation of CompleteMultipartUpload to set the S3-style ETag before calling putBlob() and to return that ETag to the caller.
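A self-contained sketch of the computation described above, assuming each part ETag is a hex-encoded MD5 digest; note that it hashes the decoded bytes, not the concatenated hex strings (this is not jclouds' actual code):

    import java.security.MessageDigest;
    import java.security.NoSuchAlgorithmException;
    import java.util.List;

    public class MpuETag {
        /** Computes an S3-style multipart ETag from the parts' hex MD5 ETags. */
        static String mpuETag(List<String> partETags) throws NoSuchAlgorithmException {
            MessageDigest md5 = MessageDigest.getInstance("MD5");
            for (String partETag : partETags) {
                // Hash the raw digest bytes, not the hex string representation.
                md5.update(hexToBytes(partETag));
            }
            return bytesToHex(md5.digest()) + "-" + partETags.size();
        }

        static byte[] hexToBytes(String hex) {
            byte[] bytes = new byte[hex.length() / 2];
            for (int i = 0; i < bytes.length; i++) {
                bytes[i] = (byte) Integer.parseInt(hex.substring(2 * i, 2 * i + 2), 16);
            }
            return bytes;
        }

        static String bytesToHex(byte[] bytes) {
            StringBuilder sb = new StringBuilder();
            for (byte b : bytes) {
                sb.append(String.format("%02x", b));
            }
            return sb.toString();
        }
    }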
To simplify testing, a new protected method with a default no-op implementation is added to BaseBlobIntegrationTest. It allows providers to further verify MPUs (e.g., ensuring the correct ETag has been stored alongside the object).
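A sketch of such a hook; the method name below is illustrative, not necessarily the one added to BaseBlobIntegrationTest:

    import java.util.List;
    import org.jclouds.blobstore.domain.Blob;
    import org.jclouds.blobstore.domain.MultipartPart;

    public abstract class BaseBlobIntegrationTestSketch {
        // Default no-op hook; provider test subclasses override it to add
        // provider-specific MPU assertions, e.g. verifying that the stored
        // ETag ends in "-<partCount>".
        protected void checkMultipartBlob(Blob blob, List<MultipartPart> parts) {
            // no-op by default
        }
    }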
The x-amz-copy-source header on S3 CopyObject should be URL encoded (as a path). This is not universally true of all headers, though (for example, the "=" in x-amz-copy-source-range), so this change introduces a new parameter on @Headers to indicate whether URL encoding should take place.
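A sketch of the extended annotation, assuming a boolean flag is added to jclouds' @Headers; the actual parameter name in the patch may differ:

    public @interface Headers {
        String[] keys();
        String[] values();

        // New: opt-in URL encoding of the header value (as a path). This
        // lets x-amz-copy-source be encoded while leaving headers such as
        // x-amz-copy-source-range, whose value contains "=", untouched.
        boolean urlEncode() default false;
    }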