It's not a real issue, just a risk of duplicate writes to the cache,
which is considered acceptable.
Change-Id: I11ef64d2bd0a303e678e114fb38317194d1b50cd
The build failed with two modules missing the shaded protobuf lib;
either an old hadoop-common was picked up or somehow Maven wasn't including
it. Added one more dependency declaration.
Change-Id: Ifb5b4f45d745a7ad7ab0636cb1f5d1262b4d98fa
...let's see if that helps things build, though it does look like a
Maven repo change.
key point: dependencies are the same as before, but protobuf can be excluded
by downstream projects and all RPC code will still work.
Change-Id: I681b75d3dfe5cd5d0e6852ee5d73a63e28a3b8d0
* new org.apache.hadoop.ipc.internal package for internal-only classes
* with a ShadedProtobufHelper in there which has shaded protobuf refs
only, so guaranteed not to need protobuf-2.5 on the classpath
* findbugs, protobuf source patch etc
* can specify new export policies for the protobuf jar in hadoop-common
and other places it is referenced. hadoop-common back at compile scope
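A rough sketch of the shape of that helper (illustrative only; the method
body below is an assumption, not the committed code):

  package org.apache.hadoop.ipc.internal;

  import java.io.IOException;

  import org.apache.hadoop.thirdparty.protobuf.ServiceException;

  /**
   * Sketch only: shaded protobuf references only, so nothing which links
   * against this class ever needs protobuf-2.5 on the classpath.
   */
  public final class ShadedProtobufHelper {

    private ShadedProtobufHelper() {
    }

    /** Unwrap the IOException (if any) inside a shaded ServiceException. */
    public static IOException getRemoteException(ServiceException se) {
      Throwable cause = se.getCause();
      return cause instanceof IOException
          ? (IOException) cause
          : new IOException(se);
    }
  }
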
Change-Id: I61a99d2fd673259ab50d000f28a29e8a38aaf1b1
* reflection + a shortcut to do the class instanceof check
* ProtobufWrapperLegacy pulled out for better isolation
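Sketch of the idea; the class and field names below are illustrative, not
the actual ProtobufWrapperLegacy API:

  import java.util.concurrent.atomic.AtomicBoolean;

  /** Sketch only: probe for the unshaded protobuf class reflectively. */
  final class UnshadedProtobufProbe {

    /** Shortcut: once the class is known to be absent, skip the lookup. */
    private static final AtomicBoolean PROTOBUF_ABSENT = new AtomicBoolean(false);

    static boolean isUnshadedProtobufMessage(Object payload) {
      if (payload == null || PROTOBUF_ABSENT.get()) {
        return false;
      }
      try {
        // Reflective equivalent of
        // "payload instanceof com.google.protobuf.Message", without needing
        // protobuf-2.5 on the compile or runtime classpath.
        Class<?> messageClass = Class.forName("com.google.protobuf.Message");
        return messageClass.isAssignableFrom(payload.getClass());
      } catch (ClassNotFoundException e) {
        PROTOBUF_ABSENT.set(true);
        return false;
      }
    }
  }
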
Change-Id: I6f67841d9649648a733f7d89835957faffbfd520
The methods which were overloaded with shaded/unshaded protobuf parameters
now have @Private ones with different names. The existing overloaded
methods are still there but deprecated, in case things use them.
* removed the extra imports needed before this change went in
* protobuf2.scope is the name for the scope var
* added a test for exception extraction, so yetus stops complaining
I am thinking of merging this into 3.3.5 with the scope set to "compile",
which is what we get today. That way
* the overloading changes are in
* anyone who wants to cut their own release without protobuf2.5 can do it
Change-Id: I3423720c7047c63f7c9797d6e386774bff10b21a
The option protobuf.scope defines whether the protobuf 2.5.0
dependency is marked as provided or not.
* all declarations except those in yarn-csi are updated
* those modules which don't compile without their own explicit
import: hadoop-hdfs-client and hadoop-hdfs-rbf
It's actually interesting to see where/how that compile fails
hadoop-hdfs-client: ClientNamenodeProtocolTranslatorPB
hadoop-hdfs-rbf: RouterAdminProtocolTranslatorPB
both with "class file for com.google.protobuf.ServiceException not found",
even though *neither class uses it*
What they do have is references to ProtobufHelper.getRemoteException(),
which is overloaded for both the shaded ServiceException and the original one.
Hypothesis: the javac overload resolution needs to look at the entire
class hierarchy before it can decide which one to use.
Proposed: add a new method
IOException extractException(org.apache.hadoop.thirdparty.protobuf.ServiceException)
and move our own code to it. Without the overloading, the unshaded classes
should not be needed.
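Sketched before/after shape of that proposal (class name and method bodies
are placeholders; only the signatures matter):

  import java.io.IOException;

  public final class OverloadSketch {

    // Before: two overloads. Per the hypothesis above, javac examines both
    // parameter types when resolving a call, so the unshaded
    // com.google.protobuf.ServiceException class file is wanted at compile
    // time even by callers which never use it.
    @Deprecated
    public static IOException getRemoteException(
        com.google.protobuf.ServiceException e) {
      return new IOException(e);
    }

    public static IOException getRemoteException(
        org.apache.hadoop.thirdparty.protobuf.ServiceException e) {
      return new IOException(e);
    }

    // After: a single, differently named, shaded-only method for our own
    // code to call. No overload resolution, so no unshaded class file is
    // needed at compile time.
    public static IOException extractException(
        org.apache.hadoop.thirdparty.protobuf.ServiceException e) {
      return new IOException(e);
    }
  }
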
Change-Id: I70354abfe3f1fdc03c418dac88e60f8cc4929a33
HDFS-16896. Clear the ignoredNodes list when we clear the deadnode list on refetchLocations.
The ignoredNodes list is only used on the hedged read codepath.
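Shape of the change as a sketch; the method and list names come from the
description above, everything else (types, class name) is illustrative:

  import java.util.Collection;

  final class RefetchSketch {

    // When the dead-node list is cleared so block locations can be
    // refetched, the hedged-read ignored-node list must be cleared too,
    // otherwise previously ignored datanodes stay excluded indefinitely.
    static <T> void refetchLocations(Collection<T> deadNodes,
                                     Collection<T> ignoredNodes) {
      deadNodes.clear();
      if (ignoredNodes != null) {
        ignoredNodes.clear();   // only populated on the hedged read codepath
      }
    }
  }
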
Co-authored-by: Tom McCormick <tmccormi@linkedin.com>
POM and LICENSE fixup of transitive dependencies
* Exclude hadoop-cloud-storage imports which come in with hadoop-common
* Add explicit import of hadoop's org.codehaus.jettison declaration
to hadoop-aliyun
* Tune the aliyun jar imports
* Update LICENSE-binary for the current set of libraries.
Contributed by Steve Loughran
Followup to the original HADOOP-18582.
Temporary path cleanup is re-enabled for -append jobs
as these will create temporary files when creating or overwriting files.
Contributed by Ayush Saxena
Even though DiskChecker.mkdirsWithExistsCheck() will create the directory tree,
it is only called *after* the enumeration of directories with available
space has completed.
Directories which don't exist are reported as having 0 space, therefore
the mkdirs code is never reached.
Adding a simple mkdirs(), without bothering to check the outcome,
ensures that if a dir has been deleted then it will be reconstructed
if possible. If it can't be, it will still have 0 bytes of space
reported and so be excluded from the allocation.
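A minimal sketch of the idea using plain java.io.File calls (illustrative
only, not the actual Hadoop code):

  import java.io.File;

  final class DirSpaceSketch {

    // Best-effort recreation before the space probe: the return value of
    // mkdirs() is deliberately ignored. If the directory cannot be
    // recreated it still reports 0 usable bytes and is excluded from
    // allocation, exactly as before.
    static long availableSpace(File dir) {
      dir.mkdirs();                  // recreate if deleted; outcome not checked
      return dir.getUsableSpace();   // 0 if the directory still does not exist
    }
  }
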
Contributed by Steve Loughran