Revert "Revert "HBASE-15789 PB related changes to work with offheap""

Restore this change but with a clean of the generated dirs first so
that we avoid trying to apply a patch on top of an already patched src.

This reverts commit 0f384158fc.
Michael Stack 2016-10-24 14:36:42 -07:00
parent 988d1f9bc9
commit 4533bb63cf
12 changed files with 2550 additions and 49 deletions


@@ -161,10 +161,6 @@
<groupId>org.apache.hbase</groupId>
<artifactId>hbase-server</artifactId>
</dependency>
<dependency>
<groupId>org.apache.hbase</groupId>
<artifactId>hbase-thrift</artifactId>
</dependency>
<!-- To dump tools in hbase-procedure into cached_classpath.txt. -->
<dependency>
<groupId>org.apache.hbase</groupId>


@@ -16,11 +16,16 @@ protobuf Message class is at
org.apache.hadoop.hbase.shaded.com.google.protobuf.Message
rather than at com.google.protobuf.Message.
Below we describe how to generate the java files for this
module. Run this step any time you change the proto files
in this module or if you change the protobuf version. If you
add a new file, be sure to add mention of the proto in the
pom.xml (scroll till you see the listing of protos to consider).
Finally, this module also includes patches applied on top of
protobuf to add functionality not yet in protobuf that we
need now.
The shaded generated java files, including the patched protobuf
source files, are all checked in.
If you make changes to protos, to the protobuf version or to
the patches you want to apply to protobuf, you must rerun this
step.
First ensure that the appropriate protobuf protoc tool is in
your $PATH as in:
@@ -33,16 +38,22 @@ build protoc first.
Run:
$ mvn install -Dgenerate-shaded-classes
$ mvn install -Dcompile-protobuf
or
$ mvn install -Pgenerate-shaded-classes
$ mvn install -Pcompile-protobuf
to build and trigger the special generate-shaded-classes
profile. When finished, the content of
src/main/java/org/apache/hadoop/hbase/shaded will have
been updated. Check in the changes.
been updated. Make sure everything builds and then carefully
check in the changes. Files may have been added or removed
by the steps above.
If you have patches for the protobuf, add them to
src/main/patches directory. They will be applied after
protobuf is shaded and unbundled into src/main/java.
See the pom.xml under the generate-shaded-classes profile
for more info on how this step works.


@@ -151,37 +151,32 @@
<surefire.skipFirstPart>true</surefire.skipFirstPart>
</properties>
</profile>
<!--
Generate shaded classes using proto files and
the protobuf lib we depend on. Drops generated
files under src/main/java. Check in the generated
files so they are available at build time. Run this
profile/step every time you change proto
files or update the protobuf version. If you add a
proto, be sure to add it to the list below in the
hadoop-maven-plugin else we won't 'see' it.
The below first generates java files from protos.
We then compile the generated files and make a jar
file. The jar file is then passed to the shade plugin
which makes a new fat jar that includes the protobuf
lib but with everything relocated under the
org.apache.hadoop.hbase.shaded prefix. As a by-product,
the shading step produces a jar with relocated
java source files in it. We then unpack this jar over
the src/main/java directory and we're done.
User is expected to check in the changes if they look
good.
TODO: Patch the protobuf lib using maven-patch-plugin
with changes we need.
-->
<profile>
<id>generate-shaded-classes</id>
<id>compile-protobuf</id>
<!--
Generate and shade proto files. Drops generated java files
under src/main/java. Check in the generated files so available
at build time. Run this profile/step everytime you change proto
files or update the protobuf version. If you add a proto, be
sure to add it to the list below in the hadoop-maven-plugin
else we won't 'see' it.
The below does a bunch of ugly stuff. It first purges the current
content of the generated and shaded com.google.protobuf java files.
It does this because we apply patches later and the patches
fail if they have already been applied. We also remove the files
because we overlay the shaded protobuf afterward, and if files
have been removed or added upstream, that will be plainer if we
have first done this delete.
Next up we generate protos, build a jar, shade it (which
includes the referenced protobuf), unpack it over the src/main/java
directory, and then apply patches.
The result needs to be checked in.
-->
<activation>
<property>
<name>generate-shaded-classes</name>
<name>compile-protobuf</name>
</property>
</activation>
<properties>
@@ -191,10 +186,35 @@
<!--When the compile for this profile runs, make sure it makes jars that
can be related back to this shading profile. Give them a shading prefix.
-->
<jar.finalName>${profile.id}.${artifactId}-${project.version}</jar.finalName>
<jar.finalName>${profile.id}.${project.artifactId}-${project.version}</jar.finalName>
</properties>
<build>
<plugins>
<plugin>
<artifactId>maven-clean-plugin</artifactId>
<executions>
<execution>
<id>pre-compile-protoc</id>
<phase>generate-sources</phase>
<goals>
<goal>clean</goal>
</goals>
<configuration>
<filesets>
<fileset>
<directory>${basedir}/src/main/java/org/apache/hadoop/hbase/shaded</directory>
<includes>
<include>ipc/protobuf/generated/**/*.java</include>
<include>protobuf/generated/**/*.java</include>
<include>com/google/protobuf/**/*.java</include>
</includes>
<followSymlinks>false</followSymlinks>
</fileset>
</filesets>
</configuration>
</execution>
</executions>
</plugin>
<plugin>
<groupId>org.apache.hadoop</groupId>
<artifactId>hadoop-maven-plugins</artifactId>
@@ -337,9 +357,32 @@
</execution>
</executions>
</plugin>
<!--Patch the files here!!!
Use maven-patch-plugin
-->
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-patch-plugin</artifactId>
<version>1.2</version>
<configuration>
<!--Patches are made at top-level-->
<targetDirectory>${basedir}/..</targetDirectory>
<skipApplication>false</skipApplication>
</configuration>
<executions>
<execution>
<id>patch</id>
<configuration>
<strip>1</strip>
<patchDirectory>src/main/patches</patchDirectory>
<patchTrackingFile>${project.build.directory}/patches-applied.txt</patchTrackingFile>
<naturalOrderProcessing>true</naturalOrderProcessing>
</configuration>
<phase>package</phase>
<goals>
<!--This should run after the above unpack phase-->
<goal>apply</goal>
</goals>
</execution>
</executions>
</plugin>
</plugins>
</build>
</profile>


@@ -112,7 +112,7 @@ final class ByteBufferWriter {
}
}
private static byte[] getOrCreateBuffer(int requestedSize) {
static byte[] getOrCreateBuffer(int requestedSize) {
requestedSize = max(requestedSize, MIN_CACHED_BUFFER_SIZE);
byte[] buffer = getBuffer();


@@ -0,0 +1,81 @@
// Protocol Buffers - Google's data interchange format
// Copyright 2008 Google Inc. All rights reserved.
// https://developers.google.com/protocol-buffers/
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
// * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
// * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
// * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
package org.apache.hadoop.hbase.shaded.com.google.protobuf;
import java.io.IOException;
import java.nio.ByteBuffer;
/**
* An input for raw bytes. This is similar to an InputStream but it is offset addressable. All the
* read APIs are relative.
*/
@ExperimentalApi
public abstract class ByteInput {
/**
* Reads a single byte from the given offset.
* @param offset The offset from which the byte is to be read
* @return The byte of data at the given offset
*/
public abstract byte read(int offset);
/**
* Reads bytes of data from the given offset into an array of bytes.
* @param offset The src offset within this ByteInput from which data is to be read.
* @param b Destination byte array to read data into.
* @return The number of bytes read from ByteInput
*/
public int read(int offset, byte[] b) throws IOException {
return read(offset, b, 0, b.length);
}
/**
* Reads up to <code>len</code> bytes of data from the given offset into an array of bytes.
* @param offset The src offset within this ByteInput from which data is to be read.
* @param out Destination byte array to read data into.
* @param outOffset Offset within the out byte[] into which data is to be read.
* @param len The number of bytes to read.
* @return The number of bytes read from ByteInput
*/
public abstract int read(int offset, byte[] out, int outOffset, int len);
/**
* Reads bytes of data from the given offset into the given {@link ByteBuffer}.
* @param offset The src offset within this ByteInput from which data is to be read.
* @param out Destination {@link ByteBuffer} to read data into.
* @return The number of bytes read from ByteInput
*/
public abstract int read(int offset, ByteBuffer out);
/**
* @return Total number of bytes in this ByteInput.
*/
public abstract int size();
}
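As a rough illustration of the contract above (all names here are hypothetical, not part of the patch), an array-backed implementation with absolute, offset-addressable reads might look like:

```java
// Hypothetical sketch: an array-backed implementation of the offset-addressable
// contract that ByteInput describes. Names are illustrative only.
public class ArrayByteInputDemo {
    interface OffsetInput {
        byte read(int offset);                                    // single byte at an absolute offset
        int read(int offset, byte[] out, int outOffset, int len); // bulk copy, returns bytes read
        int size();                                               // total bytes available
    }

    static OffsetInput wrap(final byte[] data) {
        return new OffsetInput() {
            @Override public byte read(int offset) { return data[offset]; }
            @Override public int read(int offset, byte[] out, int outOffset, int len) {
                System.arraycopy(data, offset, out, outOffset, len);
                return len;
            }
            @Override public int size() { return data.length; }
        };
    }
}
```

Unlike an InputStream, nothing here carries a cursor; every read names its offset, which is what makes the abstraction usable over off-heap buffers.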


@@ -0,0 +1,249 @@
// Protocol Buffers - Google's data interchange format
// Copyright 2008 Google Inc. All rights reserved.
// https://developers.google.com/protocol-buffers/
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
// * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
// * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
// * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
package org.apache.hadoop.hbase.shaded.com.google.protobuf;
import java.io.IOException;
import java.io.InputStream;
import java.io.InvalidObjectException;
import java.io.ObjectInputStream;
import java.io.OutputStream;
import java.nio.ByteBuffer;
import java.nio.charset.Charset;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;
/**
* A {@link ByteString} that wraps around a {@link ByteInput}.
*/
final class ByteInputByteString extends ByteString.LeafByteString {
private final ByteInput buffer;
private final int offset, length;
ByteInputByteString(ByteInput buffer, int offset, int length) {
if (buffer == null) {
throw new NullPointerException("buffer");
}
this.buffer = buffer;
this.offset = offset;
this.length = length;
}
// =================================================================
// Serializable
/**
* Magic method that lets us override serialization behavior.
*/
private Object writeReplace() {
return ByteString.wrap(toByteArray());
}
/**
* Magic method that lets us override deserialization behavior.
*/
private void readObject(@SuppressWarnings("unused") ObjectInputStream in) throws IOException {
throw new InvalidObjectException("ByteInputByteString instances are not to be serialized directly");// TODO check here
}
// =================================================================
@Override
public byte byteAt(int index) {
return buffer.read(getAbsoluteOffset(index));
}
private int getAbsoluteOffset(int relativeOffset) {
return this.offset + relativeOffset;
}
@Override
public int size() {
return length;
}
@Override
public ByteString substring(int beginIndex, int endIndex) {
if (beginIndex < 0 || endIndex > size() || beginIndex > endIndex) {
throw new IllegalArgumentException(
String.format("Invalid indices [%d, %d]", beginIndex, endIndex));
}
return new ByteInputByteString(this.buffer, getAbsoluteOffset(beginIndex), endIndex - beginIndex);
}
@Override
protected void copyToInternal(
byte[] target, int sourceOffset, int targetOffset, int numberToCopy) {
this.buffer.read(getAbsoluteOffset(sourceOffset), target, targetOffset, numberToCopy);
}
@Override
public void copyTo(ByteBuffer target) {
this.buffer.read(this.offset, target);
}
@Override
public void writeTo(OutputStream out) throws IOException {
out.write(toByteArray());// TODO
}
@Override
boolean equalsRange(ByteString other, int offset, int length) {
return substring(0, length).equals(other.substring(offset, offset + length));
}
@Override
void writeToInternal(OutputStream out, int sourceOffset, int numberToWrite) throws IOException {
byte[] buf = ByteBufferWriter.getOrCreateBuffer(numberToWrite);
this.buffer.read(getAbsoluteOffset(sourceOffset), buf, 0, numberToWrite);
out.write(buf, 0, numberToWrite);
}
@Override
void writeTo(ByteOutput output) throws IOException {
output.writeLazy(toByteArray(), 0, length);
}
@Override
public ByteBuffer asReadOnlyByteBuffer() {
return ByteBuffer.wrap(toByteArray()).asReadOnlyBuffer();
}
@Override
public List<ByteBuffer> asReadOnlyByteBufferList() {
return Collections.singletonList(asReadOnlyByteBuffer());
}
@Override
protected String toStringInternal(Charset charset) {
byte[] bytes = toByteArray();
return new String(bytes, 0, bytes.length, charset);
}
@Override
public boolean isValidUtf8() {
return Utf8.isValidUtf8(buffer, offset, offset + length);
}
@Override
protected int partialIsValidUtf8(int state, int offset, int length) {
int off = getAbsoluteOffset(offset);
return Utf8.partialIsValidUtf8(state, buffer, off, off + length);
}
@Override
public boolean equals(Object other) {
if (other == this) {
return true;
}
if (!(other instanceof ByteString)) {
return false;
}
ByteString otherString = ((ByteString) other);
if (size() != otherString.size()) {
return false;
}
if (size() == 0) {
return true;
}
if (other instanceof RopeByteString) {
return other.equals(this);
}
return Arrays.equals(this.toByteArray(), otherString.toByteArray());
}
@Override
protected int partialHash(int h, int offset, int length) {
offset = getAbsoluteOffset(offset);
int end = offset + length;
for (int i = offset; i < end; i++) {
h = h * 31 + buffer.read(i);
}
return h;
}
@Override
public InputStream newInput() {
return new InputStream() {
private final ByteInput buf = buffer;
private int pos = offset;
private int limit = pos + length;
private int mark = pos;
@Override
public void mark(int readlimit) {
// Per the InputStream contract, record the current position so reset()
// returns here; readlimit is ignored since the whole input is buffered.
this.mark = this.pos;
}
@Override
public boolean markSupported() {
return true;
}
@Override
public void reset() throws IOException {
this.pos = this.mark;
}
@Override
public int available() throws IOException {
return this.limit - this.pos;
}
@Override
public int read() throws IOException {
if (available() <= 0) {
return -1;
}
return this.buf.read(pos++) & 0xFF;
}
@Override
public int read(byte[] bytes, int off, int len) throws IOException {
int remain = available();
if (remain <= 0) {
return -1;
}
len = Math.min(len, remain);
buf.read(pos, bytes, off, len);
pos += len;
return len;
}
};
}
@Override
public CodedInputStream newCodedInput() {
// We trust CodedInputStream not to modify the bytes, or to give anyone
// else access to them.
return CodedInputStream.newInstance(buffer, offset, length, true);
}
}


@@ -322,6 +322,13 @@ public abstract class ByteString implements Iterable<Byte>, Serializable {
}
}
/**
* Wraps the given bytes into a {@code ByteString}. Intended for internal only usage.
*/
static ByteString wrap(ByteInput buffer, int offset, int length) {
return new ByteInputByteString(buffer, offset, length);
}
/**
* Wraps the given bytes into a {@code ByteString}. Intended for internal only
* usage to force a classload of ByteString before LiteralByteString.


@@ -149,6 +149,15 @@ public abstract class CodedInputStream {
return newInstance(buffer, 0, buffer.length, true);
}
/** Create a new CodedInputStream wrapping the given {@link ByteInput}. */
public static CodedInputStream newInstance(ByteInput buf, boolean bufferIsImmutable) {
return new ByteInputDecoder(buf, bufferIsImmutable);
}
public static CodedInputStream newInstance(ByteInput buf, int off, int len, boolean bufferIsImmutable) {
return new ByteInputDecoder(buf, off, len, bufferIsImmutable);
}
/** Disable construction/inheritance outside of this class. */
private CodedInputStream() {}
@@ -2892,4 +2901,652 @@ public abstract class CodedInputStream {
pos = size - tempPos;
}
}
private static final class ByteInputDecoder extends CodedInputStream {
private final ByteInput buffer;
private final boolean immutable;
private int limit;
private int bufferSizeAfterLimit;
private int pos;
private int startPos;
private int lastTag;
private boolean enableAliasing;
/** The absolute position of the end of the current message. */
private int currentLimit = Integer.MAX_VALUE;
private ByteInputDecoder(ByteInput buffer, boolean immutable) {
this(buffer, 0, buffer.size(), immutable);
}
private ByteInputDecoder(ByteInput buffer, int off, int len, boolean immutable) {
this.buffer = buffer;
pos = off;
limit = off + len;
startPos = pos;
this.immutable = immutable;
}
@Override
public int readTag() throws IOException {
if (isAtEnd()) {
lastTag = 0;
return 0;
}
lastTag = readRawVarint32();
if (WireFormat.getTagFieldNumber(lastTag) == 0) {
// If we actually read zero (or any tag number corresponding to field
// number zero), that's not a valid tag.
throw InvalidProtocolBufferException.invalidTag();
}
return lastTag;
}
@Override
public void checkLastTagWas(int value) throws InvalidProtocolBufferException {
if (lastTag != value) {
throw InvalidProtocolBufferException.invalidEndTag();
}
}
@Override
public int getLastTag() {
return lastTag;
}
@Override
public boolean skipField(int tag) throws IOException {
switch (WireFormat.getTagWireType(tag)) {
case WireFormat.WIRETYPE_VARINT:
skipRawVarint();
return true;
case WireFormat.WIRETYPE_FIXED64:
skipRawBytes(FIXED_64_SIZE);
return true;
case WireFormat.WIRETYPE_LENGTH_DELIMITED:
skipRawBytes(readRawVarint32());
return true;
case WireFormat.WIRETYPE_START_GROUP:
skipMessage();
checkLastTagWas(
WireFormat.makeTag(WireFormat.getTagFieldNumber(tag), WireFormat.WIRETYPE_END_GROUP));
return true;
case WireFormat.WIRETYPE_END_GROUP:
return false;
case WireFormat.WIRETYPE_FIXED32:
skipRawBytes(FIXED_32_SIZE);
return true;
default:
throw InvalidProtocolBufferException.invalidWireType();
}
}
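The switch above dispatches on the low three bits of the tag; the remaining bits carry the field number. A minimal sketch of that packing, following the standard protobuf tag layout (class name is illustrative):

```java
// Illustrative sketch of the standard protobuf tag layout used by the
// WireFormat helpers above: tag = (fieldNumber << 3) | wireType.
public class WireTagDemo {
    static final int TAG_TYPE_BITS = 3;
    static final int TAG_TYPE_MASK = (1 << TAG_TYPE_BITS) - 1;

    static int makeTag(int fieldNumber, int wireType) {
        return (fieldNumber << TAG_TYPE_BITS) | wireType;
    }
    static int getTagFieldNumber(int tag) { return tag >>> TAG_TYPE_BITS; }
    static int getTagWireType(int tag)   { return tag & TAG_TYPE_MASK; }
}
```

For example, field 1 with the length-delimited wire type (2) packs to the familiar 0x0A tag byte.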
@Override
public boolean skipField(int tag, CodedOutputStream output) throws IOException {
switch (WireFormat.getTagWireType(tag)) {
case WireFormat.WIRETYPE_VARINT:
{
long value = readInt64();
output.writeRawVarint32(tag);
output.writeUInt64NoTag(value);
return true;
}
case WireFormat.WIRETYPE_FIXED64:
{
long value = readRawLittleEndian64();
output.writeRawVarint32(tag);
output.writeFixed64NoTag(value);
return true;
}
case WireFormat.WIRETYPE_LENGTH_DELIMITED:
{
ByteString value = readBytes();
output.writeRawVarint32(tag);
output.writeBytesNoTag(value);
return true;
}
case WireFormat.WIRETYPE_START_GROUP:
{
output.writeRawVarint32(tag);
skipMessage(output);
int endtag =
WireFormat.makeTag(
WireFormat.getTagFieldNumber(tag), WireFormat.WIRETYPE_END_GROUP);
checkLastTagWas(endtag);
output.writeRawVarint32(endtag);
return true;
}
case WireFormat.WIRETYPE_END_GROUP:
{
return false;
}
case WireFormat.WIRETYPE_FIXED32:
{
int value = readRawLittleEndian32();
output.writeRawVarint32(tag);
output.writeFixed32NoTag(value);
return true;
}
default:
throw InvalidProtocolBufferException.invalidWireType();
}
}
@Override
public void skipMessage() throws IOException {
while (true) {
final int tag = readTag();
if (tag == 0 || !skipField(tag)) {
return;
}
}
}
@Override
public void skipMessage(CodedOutputStream output) throws IOException {
while (true) {
final int tag = readTag();
if (tag == 0 || !skipField(tag, output)) {
return;
}
}
}
@Override
public double readDouble() throws IOException {
return Double.longBitsToDouble(readRawLittleEndian64());
}
@Override
public float readFloat() throws IOException {
return Float.intBitsToFloat(readRawLittleEndian32());
}
@Override
public long readUInt64() throws IOException {
return readRawVarint64();
}
@Override
public long readInt64() throws IOException {
return readRawVarint64();
}
@Override
public int readInt32() throws IOException {
return readRawVarint32();
}
@Override
public long readFixed64() throws IOException {
return readRawLittleEndian64();
}
@Override
public int readFixed32() throws IOException {
return readRawLittleEndian32();
}
@Override
public boolean readBool() throws IOException {
return readRawVarint64() != 0;
}
@Override
public String readString() throws IOException {
final int size = readRawVarint32();
if (size > 0 && size <= remaining()) {
byte[] bytes = copyToArray(pos, size);
pos += size;
return new String(bytes, UTF_8);
}
if (size == 0) {
return "";
}
if (size < 0) {
throw InvalidProtocolBufferException.negativeSize();
}
throw InvalidProtocolBufferException.truncatedMessage();
}
@Override
public String readStringRequireUtf8() throws IOException {
final int size = readRawVarint32();
if (size > 0 && size <= remaining()) {
if (!Utf8.isValidUtf8(buffer, pos, pos + size)) {
throw InvalidProtocolBufferException.invalidUtf8();
}
byte[] bytes = copyToArray(pos, size);
pos += size;
return new String(bytes, UTF_8);
}
if (size == 0) {
return "";
}
if (size < 0) {
throw InvalidProtocolBufferException.negativeSize();
}
throw InvalidProtocolBufferException.truncatedMessage();
}
@Override
public void readGroup(int fieldNumber, MessageLite.Builder builder,
ExtensionRegistryLite extensionRegistry) throws IOException {
if (recursionDepth >= recursionLimit) {
throw InvalidProtocolBufferException.recursionLimitExceeded();
}
++recursionDepth;
builder.mergeFrom(this, extensionRegistry);
checkLastTagWas(WireFormat.makeTag(fieldNumber, WireFormat.WIRETYPE_END_GROUP));
--recursionDepth;
}
@Override
public <T extends MessageLite> T readGroup(int fieldNumber, Parser<T> parser,
ExtensionRegistryLite extensionRegistry) throws IOException {
if (recursionDepth >= recursionLimit) {
throw InvalidProtocolBufferException.recursionLimitExceeded();
}
++recursionDepth;
T result = parser.parsePartialFrom(this, extensionRegistry);
checkLastTagWas(WireFormat.makeTag(fieldNumber, WireFormat.WIRETYPE_END_GROUP));
--recursionDepth;
return result;
}
@Deprecated
@Override
public void readUnknownGroup(int fieldNumber, MessageLite.Builder builder) throws IOException {
readGroup(fieldNumber, builder, ExtensionRegistryLite.getEmptyRegistry());
}
@Override
public void readMessage(MessageLite.Builder builder, ExtensionRegistryLite extensionRegistry)
throws IOException {
final int length = readRawVarint32();
if (recursionDepth >= recursionLimit) {
throw InvalidProtocolBufferException.recursionLimitExceeded();
}
final int oldLimit = pushLimit(length);
++recursionDepth;
builder.mergeFrom(this, extensionRegistry);
checkLastTagWas(0);
--recursionDepth;
popLimit(oldLimit);
}
@Override
public <T extends MessageLite> T readMessage(Parser<T> parser,
ExtensionRegistryLite extensionRegistry) throws IOException {
int length = readRawVarint32();
if (recursionDepth >= recursionLimit) {
throw InvalidProtocolBufferException.recursionLimitExceeded();
}
final int oldLimit = pushLimit(length);
++recursionDepth;
T result = parser.parsePartialFrom(this, extensionRegistry);
checkLastTagWas(0);
--recursionDepth;
popLimit(oldLimit);
return result;
}
@Override
public ByteString readBytes() throws IOException {
final int size = readRawVarint32();
if (size > 0 && size <= (limit - pos)) {
// Fast path: We already have the bytes in a contiguous buffer, so
// just copy directly from it.
final ByteString result =
immutable && enableAliasing
? ByteString.wrap(buffer, pos, size)
: ByteString.wrap(copyToArray(pos, size));
pos += size;
return result;
}
if (size == 0) {
return ByteString.EMPTY;
}
// Slow path: Build a byte array first then copy it.
return ByteString.wrap(readRawBytes(size));
}
@Override
public byte[] readByteArray() throws IOException {
return readRawBytes(readRawVarint32());
}
@Override
public ByteBuffer readByteBuffer() throws IOException {
return ByteBuffer.wrap(readByteArray());
}
@Override
public int readUInt32() throws IOException {
return readRawVarint32();
}
@Override
public int readEnum() throws IOException {
return readRawVarint32();
}
@Override
public int readSFixed32() throws IOException {
return readRawLittleEndian32();
}
@Override
public long readSFixed64() throws IOException {
return readRawLittleEndian64();
}
@Override
public int readSInt32() throws IOException {
return decodeZigZag32(readRawVarint32());
}
@Override
public long readSInt64() throws IOException {
return decodeZigZag64(readRawVarint64());
}
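readSInt32/readSInt64 undo the zig-zag transform that sint fields use so small negative numbers stay small on the wire. A self-contained sketch of the standard transform (illustrative class name):

```java
// Standard protobuf zig-zag transform for sint32/sint64: maps
// 0, -1, 1, -2, 2, ... onto 0, 1, 2, 3, 4, ... so that small-magnitude
// negative values encode to short varints.
public class ZigZagDemo {
    static int encodeZigZag32(int n)   { return (n << 1) ^ (n >> 31); }   // arithmetic shift spreads the sign
    static int decodeZigZag32(int n)   { return (n >>> 1) ^ -(n & 1); }
    static long encodeZigZag64(long n) { return (n << 1) ^ (n >> 63); }
    static long decodeZigZag64(long n) { return (n >>> 1) ^ -(n & 1L); }
}
```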
@Override
public int readRawVarint32() throws IOException {
// See implementation notes for readRawVarint64
fastpath:
{
int tempPos = pos;
if (limit == tempPos) {
break fastpath;
}
int x;
if ((x = buffer.read(tempPos++)) >= 0) {
pos = tempPos;
return x;
} else if (limit - tempPos < 9) {
break fastpath;
} else if ((x ^= (buffer.read(tempPos++) << 7)) < 0) {
x ^= (~0 << 7);
} else if ((x ^= (buffer.read(tempPos++) << 14)) >= 0) {
x ^= (~0 << 7) ^ (~0 << 14);
} else if ((x ^= (buffer.read(tempPos++) << 21)) < 0) {
x ^= (~0 << 7) ^ (~0 << 14) ^ (~0 << 21);
} else {
int y = buffer.read(tempPos++);
x ^= y << 28;
x ^= (~0 << 7) ^ (~0 << 14) ^ (~0 << 21) ^ (~0 << 28);
if (y < 0
&& buffer.read(tempPos++) < 0
&& buffer.read(tempPos++) < 0
&& buffer.read(tempPos++) < 0
&& buffer.read(tempPos++) < 0
&& buffer.read(tempPos++) < 0) {
break fastpath; // Will throw malformedVarint()
}
}
pos = tempPos;
return x;
}
return (int) readRawVarint64SlowPath();
}
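The unrolled fast path above is equivalent to the simple loop below: seven payload bits per byte, least-significant group first, with the high bit acting as a continuation flag. A self-contained sketch (name and signature are illustrative):

```java
// Loop form of varint decoding, matching the semantics of the unrolled
// fast path: accumulate 7 bits per byte until a byte with the high bit
// clear terminates the value.
public class VarintDemo {
    static long decodeVarint(byte[] in, int pos) {
        long result = 0;
        for (int shift = 0; shift < 64; shift += 7) {
            byte b = in[pos++];
            result |= (long) (b & 0x7F) << shift;   // low 7 bits are payload
            if ((b & 0x80) == 0) {                  // high bit clear: last byte
                return result;
            }
        }
        throw new IllegalArgumentException("malformed varint");
    }
}
```

The branch-heavy unrolling in the decoder exists purely so the common one- and two-byte cases avoid the loop overhead.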
@Override
public long readRawVarint64() throws IOException {
fastpath:
{
int tempPos = pos;
if (limit == tempPos) {
break fastpath;
}
long x;
int y;
if ((y = buffer.read(tempPos++)) >= 0) {
pos = tempPos;
return y;
} else if (limit - tempPos < 9) {
break fastpath;
} else if ((y ^= (buffer.read(tempPos++) << 7)) < 0) {
x = y ^ (~0 << 7);
} else if ((y ^= (buffer.read(tempPos++) << 14)) >= 0) {
x = y ^ ((~0 << 7) ^ (~0 << 14));
} else if ((y ^= (buffer.read(tempPos++) << 21)) < 0) {
x = y ^ ((~0 << 7) ^ (~0 << 14) ^ (~0 << 21));
} else if ((x = y ^ ((long) buffer.read(tempPos++) << 28)) >= 0L) {
x ^= (~0L << 7) ^ (~0L << 14) ^ (~0L << 21) ^ (~0L << 28);
} else if ((x ^= ((long) buffer.read(tempPos++) << 35)) < 0L) {
x ^= (~0L << 7) ^ (~0L << 14) ^ (~0L << 21) ^ (~0L << 28) ^ (~0L << 35);
} else if ((x ^= ((long) buffer.read(tempPos++) << 42)) >= 0L) {
x ^= (~0L << 7) ^ (~0L << 14) ^ (~0L << 21) ^ (~0L << 28) ^ (~0L << 35) ^ (~0L << 42);
} else if ((x ^= ((long) buffer.read(tempPos++) << 49)) < 0L) {
x ^=
(~0L << 7)
^ (~0L << 14)
^ (~0L << 21)
^ (~0L << 28)
^ (~0L << 35)
^ (~0L << 42)
^ (~0L << 49);
} else {
x ^= ((long) buffer.read(tempPos++) << 56);
x ^=
(~0L << 7)
^ (~0L << 14)
^ (~0L << 21)
^ (~0L << 28)
^ (~0L << 35)
^ (~0L << 42)
^ (~0L << 49)
^ (~0L << 56);
if (x < 0L) {
if (buffer.read(tempPos++) < 0L) {
break fastpath; // Will throw malformedVarint()
}
}
}
pos = tempPos;
return x;
}
return readRawVarint64SlowPath();
}
@Override
long readRawVarint64SlowPath() throws IOException {
long result = 0;
for (int shift = 0; shift < 64; shift += 7) {
final byte b = readRawByte();
result |= (long) (b & 0x7F) << shift;
if ((b & 0x80) == 0) {
return result;
}
}
throw InvalidProtocolBufferException.malformedVarint();
}
@Override
public int readRawLittleEndian32() throws IOException {
int tempPos = pos;
if (limit - tempPos < FIXED_32_SIZE) {
throw InvalidProtocolBufferException.truncatedMessage();
}
pos = tempPos + FIXED_32_SIZE;
return (((buffer.read(tempPos) & 0xff))
| ((buffer.read(tempPos + 1) & 0xff) << 8)
| ((buffer.read(tempPos + 2) & 0xff) << 16)
| ((buffer.read(tempPos + 3) & 0xff) << 24));
}
@Override
public long readRawLittleEndian64() throws IOException {
int tempPos = pos;
if (limit - tempPos < FIXED_64_SIZE) {
throw InvalidProtocolBufferException.truncatedMessage();
}
pos = tempPos + FIXED_64_SIZE;
return (((buffer.read(tempPos) & 0xffL))
| ((buffer.read(tempPos + 1) & 0xffL) << 8)
| ((buffer.read(tempPos + 2) & 0xffL) << 16)
| ((buffer.read(tempPos + 3) & 0xffL) << 24)
| ((buffer.read(tempPos + 4) & 0xffL) << 32)
| ((buffer.read(tempPos + 5) & 0xffL) << 40)
| ((buffer.read(tempPos + 6) & 0xffL) << 48)
| ((buffer.read(tempPos + 7) & 0xffL) << 56));
}
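Both fixed-width readers assemble the value byte by byte in little-endian order, masking each byte before shifting so sign extension cannot corrupt the result. The 32-bit case over a plain array, as a sketch (illustrative name):

```java
// Little-endian assembly of a 32-bit value, mirroring the masking-and-shifting
// in readRawLittleEndian32 above but over a plain byte array.
public class LittleEndianDemo {
    static int readLE32(byte[] b, int pos) {
        return (b[pos] & 0xff)
            | ((b[pos + 1] & 0xff) << 8)
            | ((b[pos + 2] & 0xff) << 16)
            | ((b[pos + 3] & 0xff) << 24);
    }
}
```

The `& 0xff` mask matters: without it, a negative byte would sign-extend and set all the high bits of the int.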
@Override
public void enableAliasing(boolean enabled) {
this.enableAliasing = enabled;
}
@Override
public void resetSizeCounter() {
startPos = pos;
}
@Override
public int pushLimit(int byteLimit) throws InvalidProtocolBufferException {
if (byteLimit < 0) {
throw InvalidProtocolBufferException.negativeSize();
}
byteLimit += getTotalBytesRead();
final int oldLimit = currentLimit;
if (byteLimit > oldLimit) {
throw InvalidProtocolBufferException.truncatedMessage();
}
currentLimit = byteLimit;
recomputeBufferSizeAfterLimit();
return oldLimit;
}
@Override
public void popLimit(int oldLimit) {
currentLimit = oldLimit;
recomputeBufferSizeAfterLimit();
}
@Override
public int getBytesUntilLimit() {
if (currentLimit == Integer.MAX_VALUE) {
return -1;
}
return currentLimit - getTotalBytesRead();
}
@Override
public boolean isAtEnd() throws IOException {
return pos == limit;
}
@Override
public int getTotalBytesRead() {
return pos - startPos;
}
@Override
public byte readRawByte() throws IOException {
if (pos == limit) {
throw InvalidProtocolBufferException.truncatedMessage();
}
return buffer.read(pos++);
}
@Override
public byte[] readRawBytes(int length) throws IOException {
if (length > 0 && length <= (limit - pos)) {
byte[] bytes = copyToArray(pos, length);
pos += length;
return bytes;
}
if (length <= 0) {
if (length == 0) {
return Internal.EMPTY_BYTE_ARRAY;
} else {
throw InvalidProtocolBufferException.negativeSize();
}
}
throw InvalidProtocolBufferException.truncatedMessage();
}
@Override
public void skipRawBytes(int length) throws IOException {
if (length >= 0 && length <= (limit - pos)) {
// We have all the bytes we need already.
pos += length;
return;
}
if (length < 0) {
throw InvalidProtocolBufferException.negativeSize();
}
throw InvalidProtocolBufferException.truncatedMessage();
}
private void recomputeBufferSizeAfterLimit() {
limit += bufferSizeAfterLimit;
final int bufferEnd = limit - startPos;
if (bufferEnd > currentLimit) {
// Limit is in current buffer.
bufferSizeAfterLimit = bufferEnd - currentLimit;
limit -= bufferSizeAfterLimit;
} else {
bufferSizeAfterLimit = 0;
}
}
private int remaining() {
return (int) (limit - pos);
}
private byte[] copyToArray(int begin, int size) throws IOException {
try {
byte[] bytes = new byte[size];
buffer.read(begin, bytes);
return bytes;
} catch (IOException e) {
throw InvalidProtocolBufferException.truncatedMessage();
}
}
private void skipRawVarint() throws IOException {
if (limit - pos >= MAX_VARINT_SIZE) {
skipRawVarintFastPath();
} else {
skipRawVarintSlowPath();
}
}
private void skipRawVarintFastPath() throws IOException {
for (int i = 0; i < MAX_VARINT_SIZE; i++) {
if (buffer.read(pos++) >= 0) {
return;
}
}
throw InvalidProtocolBufferException.malformedVarint();
}
private void skipRawVarintSlowPath() throws IOException {
for (int i = 0; i < MAX_VARINT_SIZE; i++) {
if (readRawByte() >= 0) {
return;
}
}
throw InvalidProtocolBufferException.malformedVarint();
}
}
}
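The pushLimit/popLimit bookkeeping above can be illustrated with a standalone sketch. The class below (a hypothetical `LimitTracker`, not part of the patch) models only the limit arithmetic — converting a caller-relative byte limit into an absolute one, nesting limits, and restoring the outer limit — without any actual stream parsing:

```java
// Standalone sketch of the pushLimit/popLimit arithmetic in the
// ByteInput-backed CodedInputStream above. Field names (pos, startPos,
// currentLimit) mirror the stream, but this class only models the
// bookkeeping, not real protobuf parsing.
final class LimitTracker {
    private int pos;                              // absolute read position
    private int startPos;                         // position at last size-counter reset
    private int currentLimit = Integer.MAX_VALUE; // absolute limit, MAX = unlimited

    int getTotalBytesRead() { return pos - startPos; }

    // Converts a relative byte limit into an absolute one and returns the
    // previous limit so the caller can restore it with popLimit().
    int pushLimit(int byteLimit) {
        if (byteLimit < 0) throw new IllegalArgumentException("negative size");
        byteLimit += getTotalBytesRead();
        int oldLimit = currentLimit;
        if (byteLimit > oldLimit) throw new IllegalStateException("truncated message");
        currentLimit = byteLimit;
        return oldLimit;
    }

    void popLimit(int oldLimit) { currentLimit = oldLimit; }

    int getBytesUntilLimit() {
        return currentLimit == Integer.MAX_VALUE ? -1 : currentLimit - getTotalBytesRead();
    }

    void advance(int n) { pos += n; } // stands in for reading n bytes

    public static void main(String[] args) {
        LimitTracker t = new LimitTracker();
        int old = t.pushLimit(10);  // enter a 10-byte length-delimited field
        t.advance(4);
        System.out.println(t.getBytesUntilLimit()); // 6 bytes left inside it
        t.popLimit(old);                            // restore the outer scope
        System.out.println(t.getBytesUntilLimit()); // -1: unlimited again
    }
}
```

This is the pattern message parsers use for embedded messages: push the field's declared length, parse until getBytesUntilLimit() reaches zero, then pop.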


@@ -229,6 +229,16 @@ final class Utf8 {
}
}
private static int incompleteStateFor(ByteInput bytes, int index, int limit) {
int byte1 = bytes.read(index - 1);
switch (limit - index) {
case 0: return incompleteStateFor(byte1);
case 1: return incompleteStateFor(byte1, bytes.read(index));
case 2: return incompleteStateFor(byte1, bytes.read(index), bytes.read(index + 1));
default: throw new AssertionError();
}
}
// These UTF-8 handling methods are copied from Guava's Utf8 class with a modification to throw
// a protocol buffer local exception. This exception is then caught in CodedOutputStream so it can
// fall back to more lenient behavior.
@@ -331,6 +341,24 @@ final class Utf8 {
return processor.partialIsValidUtf8(state, buffer, index, limit);
}
/**
* Determines if the given {@link ByteInput} is a valid UTF-8 string.
*
* @param buffer the buffer to check.
*/
static boolean isValidUtf8(ByteInput buffer, int index, int limit) {
return processor.isValidUtf8(buffer, index, limit);
}
/**
* Determines if the given {@link ByteInput} is a partially valid UTF-8 string.
*
* @param buffer the buffer to check.
*/
static int partialIsValidUtf8(int state, ByteInput buffer, int index, int limit) {
return processor.partialIsValidUtf8(state, buffer, index, limit);
}
/**
* Encodes the given characters to the target {@link ByteBuffer} using UTF-8 encoding.
*
@@ -610,6 +638,169 @@ final class Utf8 {
}
}
public boolean isValidUtf8(ByteInput buffer, int index, int limit) {
return partialIsValidUtf8(COMPLETE, buffer, index, limit) == COMPLETE;
}
int partialIsValidUtf8(int state, ByteInput bytes, int index, int limit) {
if (state != COMPLETE) {
// The previous decoding operation was incomplete (or malformed).
// We look for a well-formed sequence consisting of bytes from
// the previous decoding operation (stored in state) together
// with bytes from the array slice.
//
// We expect such "straddler characters" to be rare.
if (index >= limit) { // No bytes? No progress.
return state;
}
int byte1 = (byte) state;
// byte1 is never ASCII.
if (byte1 < (byte) 0xE0) {
// two-byte form
// Simultaneously checks for illegal trailing-byte in
// leading position and overlong 2-byte form.
if (byte1 < (byte) 0xC2
// byte2 trailing-byte test
|| bytes.read(index++) > (byte) 0xBF) {
return MALFORMED;
}
} else if (byte1 < (byte) 0xF0) {
// three-byte form
// Get byte2 from saved state or array
int byte2 = (byte) ~(state >> 8);
if (byte2 == 0) {
byte2 = bytes.read(index++);
if (index >= limit) {
return incompleteStateFor(byte1, byte2);
}
}
if (byte2 > (byte) 0xBF
// overlong? 5 most significant bits must not all be zero
|| (byte1 == (byte) 0xE0 && byte2 < (byte) 0xA0)
// illegal surrogate codepoint?
|| (byte1 == (byte) 0xED && byte2 >= (byte) 0xA0)
// byte3 trailing-byte test
|| bytes.read(index++) > (byte) 0xBF) {
return MALFORMED;
}
} else {
// four-byte form
// Get byte2 and byte3 from saved state or array
int byte2 = (byte) ~(state >> 8);
int byte3 = 0;
if (byte2 == 0) {
byte2 = bytes.read(index++);
if (index >= limit) {
return incompleteStateFor(byte1, byte2);
}
} else {
byte3 = (byte) (state >> 16);
}
if (byte3 == 0) {
byte3 = bytes.read(index++);
if (index >= limit) {
return incompleteStateFor(byte1, byte2, byte3);
}
}
// If we were called with state == MALFORMED, then byte1 is 0xFF,
// which never occurs in well-formed UTF-8, and so we will return
// MALFORMED again below.
if (byte2 > (byte) 0xBF
// Check that 1 <= plane <= 16. Tricky optimized form of:
// if (byte1 > (byte) 0xF4 ||
// byte1 == (byte) 0xF0 && byte2 < (byte) 0x90 ||
// byte1 == (byte) 0xF4 && byte2 > (byte) 0x8F)
|| (((byte1 << 28) + (byte2 - (byte) 0x90)) >> 30) != 0
// byte3 trailing-byte test
|| byte3 > (byte) 0xBF
// byte4 trailing-byte test
|| bytes.read(index++) > (byte) 0xBF) {
return MALFORMED;
}
}
}
return partialIsValidUtf8(bytes, index, limit);
}
private static int partialIsValidUtf8(ByteInput bytes, int index, int limit) {
// Optimize for 100% ASCII (Hotspot loves small simple top-level loops like this).
// This simple loop stops when we encounter a byte >= 0x80 (i.e. non-ASCII).
while (index < limit && bytes.read(index) >= 0) {
index++;
}
return (index >= limit) ? COMPLETE : partialIsValidUtf8NonAscii(bytes, index, limit);
}
private static int partialIsValidUtf8NonAscii(ByteInput bytes, int index, int limit) {
for (;;) {
int byte1, byte2;
// Optimize for interior runs of ASCII bytes.
do {
if (index >= limit) {
return COMPLETE;
}
} while ((byte1 = bytes.read(index++)) >= 0);
if (byte1 < (byte) 0xE0) {
// two-byte form
if (index >= limit) {
// Incomplete sequence
return byte1;
}
// Simultaneously checks for illegal trailing-byte in
// leading position and overlong 2-byte form.
if (byte1 < (byte) 0xC2
|| bytes.read(index++) > (byte) 0xBF) {
return MALFORMED;
}
} else if (byte1 < (byte) 0xF0) {
// three-byte form
if (index >= limit - 1) { // incomplete sequence
return incompleteStateFor(bytes, index, limit);
}
if ((byte2 = bytes.read(index++)) > (byte) 0xBF
// overlong? 5 most significant bits must not all be zero
|| (byte1 == (byte) 0xE0 && byte2 < (byte) 0xA0)
// check for illegal surrogate codepoints
|| (byte1 == (byte) 0xED && byte2 >= (byte) 0xA0)
// byte3 trailing-byte test
|| bytes.read(index++) > (byte) 0xBF) {
return MALFORMED;
}
} else {
// four-byte form
if (index >= limit - 2) { // incomplete sequence
return incompleteStateFor(bytes, index, limit);
}
if ((byte2 = bytes.read(index++)) > (byte) 0xBF
// Check that 1 <= plane <= 16. Tricky optimized form of:
// if (byte1 > (byte) 0xF4 ||
// byte1 == (byte) 0xF0 && byte2 < (byte) 0x90 ||
// byte1 == (byte) 0xF4 && byte2 > (byte) 0x8F)
|| (((byte1 << 28) + (byte2 - (byte) 0x90)) >> 30) != 0
// byte3 trailing-byte test
|| bytes.read(index++) > (byte) 0xBF
// byte4 trailing-byte test
|| bytes.read(index++) > (byte) 0xBF) {
return MALFORMED;
}
}
}
}
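The "straddler character" state machine above can be illustrated with a deliberately simplified standalone checker. The hypothetical class below encodes the carried state simply as the number of continuation bytes still owed (0 = COMPLETE, -1 = MALFORMED); unlike the real code it does not reject overlong forms or surrogate codepoints, but it shows why a multi-byte character split across two chunks needs state carried between calls:

```java
// Simplified sketch of incremental UTF-8 validation across chunk
// boundaries, illustrating the state-carrying idea in partialIsValidUtf8.
// State = number of continuation bytes still expected; this sketch skips
// the overlong-form and surrogate checks the real implementation performs.
final class Utf8Sketch {
    static final int COMPLETE = 0, MALFORMED = -1;

    static int partial(int state, byte[] bytes, int index, int limit) {
        int expect = state;                     // continuation bytes still owed
        while (index < limit) {
            int b = bytes[index++] & 0xFF;
            if (expect > 0) {
                if ((b & 0xC0) != 0x80) return MALFORMED; // not 10xxxxxx
                expect--;
            } else if (b < 0x80) {
                // ASCII, nothing owed
            } else if (b >= 0xC2 && b <= 0xDF) {
                expect = 1;                     // two-byte form
            } else if (b >= 0xE0 && b <= 0xEF) {
                expect = 2;                     // three-byte form
            } else if (b >= 0xF0 && b <= 0xF4) {
                expect = 3;                     // four-byte form
            } else {
                return MALFORMED;               // 0x80..0xC1, 0xF5..0xFF leading byte
            }
        }
        return expect;                          // >0 means "incomplete, resume later"
    }

    public static void main(String[] args) {
        // "é" (0xC3 0xA9) split across two chunks: the state bridges the gap.
        byte[] c1 = {'a', (byte) 0xC3};
        byte[] c2 = {(byte) 0xA9, 'b'};
        int s = partial(COMPLETE, c1, 0, c1.length);
        System.out.println(s);                             // 1: one byte still owed
        System.out.println(partial(s, c2, 0, c2.length));  // 0: COMPLETE
    }
}
```

The real code packs the straddler bytes themselves (not just a count) into the int state via incompleteStateFor, so the resumed call can re-apply the overlong and surrogate checks against the saved leading bytes.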
/**
* Encodes an input character sequence ({@code in}) to UTF-8 in the target array ({@code out}).
* For a string, this method is similar to

File diff suppressed because it is too large


@@ -143,6 +143,14 @@
</profile>
<profile>
<id>compile-protobuf</id>
<!--
Generate proto files. Drops generated java files under
src/main/java. Check in the generated files so they are
available at build time. Run this profile/step every time
you change proto files. If you add a proto, be sure to add
it to the list below in the hadoop-maven-plugin else
we won't 'see' it.
-->
<activation>
<property>
<name>compile-protobuf</name>


@@ -393,10 +393,6 @@
<artifactId>hbase-resource-bundle</artifactId>
<version>${project.version}</version>
<optional>true</optional>
</dependency>
<dependency>
<groupId>org.apache.httpcomponents</groupId>
<artifactId>httpclient</artifactId>
</dependency>
<dependency>
<groupId>commons-codec</groupId>