diff --git a/distribution/pom.xml b/distribution/pom.xml
index f8b2999f7ba..5aa71130f77 100644
--- a/distribution/pom.xml
+++ b/distribution/pom.xml
@@ -458,6 +458,8 @@
                        <argument>org.apache.druid.extensions.contrib:druid-spectator-histogram</argument>
                        <argument>-c</argument>
                        <argument>org.apache.druid.extensions.contrib:druid-rabbit-indexing-service</argument>
+                        <argument>-c</argument>
+                        <argument>org.apache.druid.extensions.contrib:grpc-query</argument>
diff --git a/extensions-contrib/grpc-query/README.md b/extensions-contrib/grpc-query/README.md
new file mode 100644
index 00000000000..1edefbf350c
--- /dev/null
+++ b/extensions-contrib/grpc-query/README.md
@@ -0,0 +1,312 @@
+<!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one
+  ~ or more contributor license agreements.  See the NOTICE file
+  ~ distributed with this work for additional information
+  ~ regarding copyright ownership.  The ASF licenses this file
+  ~ to you under the Apache License, Version 2.0 (the
+  ~ "License"); you may not use this file except in compliance
+  ~ with the License.  You may obtain a copy of the License at
+  ~
+  ~   http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~ Unless required by applicable law or agreed to in writing,
+  ~ software distributed under the License is distributed on an
+  ~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  ~ KIND, either express or implied.  See the License for the
+  ~ specific language governing permissions and limitations
+  ~ under the License.
+  -->
+# gRPC Query Extension for Druid
+
+This extension provides a gRPC API for SQL and Native queries.
+
+Druid uses REST as its RPC protocol. Druid has a large variety of REST operations
+including query, ingest jobs, monitoring, configuration and many more. Although
+REST is a universally supported RPC format, it is not the only one in use. This
+extension allows gRPC-based clients to issue SQL queries.
+
+Druid is optimized for high-concurrency, low-complexity queries that return a
+small result set (a few thousand rows at most). The small-query focus allows
+Druid to offer a simple, stateless request/response REST API. This gRPC API
+follows that Druid pattern: it is optimized for simple queries and follows
+Druid's request/response model. APIs such as JDBC can handle larger results
+because they are stateful: a client can request pages of results using multiple
+API calls. This API does not support paging: the entire result set is returned
+in the response, resulting in an API which is fast for small queries, and not
+suitable for larger result sets.
+
+## Use Cases
+
+The gRPC query extension can be used in two ways, depending on the selected
+result format.
+
+### CSV or JSON Response Format
+
+The simplest way to use the gRPC extension is to send a query request that
+uses CSV or JSON as the return format. The client simply pulls the results
+from the response and does something useful with them. For the CSV format,
+headers can be created from the column metadata in the response message.
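+
+For illustration, here is a minimal sketch of how a client might rebuild a
+CSV header line from that metadata. It assumes the `QueryResponse` message
+generated from `query.proto` exposes the column list as `getColumnsList()`
+and the payload as `getData()`; see the Usage section below for obtaining
+the response.
+
+```
+// Sketch only: accessor names depend on the code generated from query.proto.
+List<String> names = new ArrayList<>();
+for (ColumnSchema col : response.getColumnsList()) {
+  names.add(col.getName()); // one header entry per result column
+}
+String csv = String.join(",", names) + "\n"
+    + response.getData().toStringUtf8(); // data rows returned by Druid
+```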
+
+### Protobuf Response Format
+
+Some applications want to use Protobuf as the result format. In this case,
+the extension encodes Protobuf-encoded rows as the binary payload of the query
+response. This works for an application which uses a fixed set of queries, each
+of which is carefully designed to power one application, say a dashboard. The
+(simplified) message flow is:
+
+```text
++-----------+  query ->   +-------+
+| Dashboard | -- gRPC --> | Druid |
++-----------+  <- data    +-------+
+```
+
+In practice, there may be multiple proxy layers: one on the application side, and
+the Router on the Druid side.
+
+The dashboard displays a fixed set of reports and charts. Each of those sends a
+well-defined query specified as part of the application. The returned data is thus
+both well-known and fixed for each query. The set of queries is fixed by the contents
+of the dashboard. That is, this is not an ad-hoc query use case.
+
+Because the queries are locked down, and are part of the application, the set of valid
+result sets is also well known and locked down. Given this well-controlled use case, it
+is possible to use a pre-defined Protobuf message to represent the results of each distinct
+query. (Protobuf is a compiled format: the solution works only because the set of messages
+are well known. It would not work for the ad-hoc case in which each query has a different
+result set schema.)
+
+To be very clear: the application has a fixed set of queries to be sent to Druid via gRPC.
+For each query, there is a fixed Protobuf response format defined by the application.
+No other queries, aside from this well-known set, will be sent to the gRPC endpoint using
+the Protobuf response format. If the set of queries is not well-defined, use the CSV
+or JSON response format instead.
+
+## Installation
+
+The gRPC query extension is a "contrib" extension and is not installed by default when
+you install Druid. Instead, you must install it manually.
+
+In development, you can build Druid with all the "contrib" extensions. When building
+Druid, include the `-P bundle-contrib-exts` option in addition to the `-P dist` option:
+
+```bash
+mvn package -Pdist,bundle-contrib-exts ...
+```
+
+In production, follow the [Druid documentation](https://druid.apache.org/docs/latest/development/extensions.html).
+
+To enable the extension, add the following to the load list in
+`_common/common.runtime.properties`:
+
+```text
+druid.extensions.loadList=[..., "grpc-query"]
+```
+
+Adding the extension to the load list automatically enables the extension,
+but only in the Broker.
+
+If you use the Protobuf response format, bundle up your Protobuf classes
+into a jar file, and place that jar file in the
+`$DRUID_HOME/extensions/grpc-query` directory. The Protobuf classes will
+appear on the class path and will be available from the `grpc-query`
+extension.
+
+### Configuration
+
+Enable and configure the extension in `broker/runtime.properties`:
+
+```text
+druid.grpcQuery.port=50051
+```
+
+The default port is 50051 (preliminary).
+
+## Usage
+
+See the `src/main/proto/query.proto` file in the `grpc-query` project for the request and
+response message formats. The request message format closely follows the REST JSON message
+format. The response is optimized for gRPC: it contains an error (if the request fails),
+or the result schema and result data as a binary payload. You can query the gRPC endpoint
+with any gRPC client.
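+
+For example, a Java client might open a channel and create a blocking stub as
+in the following sketch. `QueryGrpc` is the stub class that `protoc` generates
+for the query service in `query.proto`; the port is the one configured via
+`druid.grpcQuery.port`.
+
+```
+// Connect to the Broker's gRPC endpoint (plaintext, as for a quickstart setup).
+ManagedChannel channel = ManagedChannelBuilder
+    .forAddress("localhost", 50051)
+    .usePlaintext()
+    .build();
+QueryGrpc.QueryBlockingStub stub = QueryGrpc.newBlockingStub(channel);
+```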
+
+Although both Druid SQL and Druid itself support a `float` data type, that type is not
+usable in a Protobuf response object. Internally Druid converts all `float` values to
+`double`. As a result, the Protobuf response object supports only the `double` type.
+An attempt to use `float` will lead to a runtime error when processing the query.
+Use the `double` type instead.
+
+Sample request:
+
+```
+QueryRequest.newBuilder()
+    .setQuery("SELECT * FROM foo")
+    .setResultFormat(QueryResultFormat.CSV)
+    .setQueryType(QueryOuterClass.QueryType.SQL)
+    .build();
+```
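+
+Submitting the request and consuming the CSV payload might then look like the
+sketch below, assuming the unary query RPC and the accessors generated from
+`query.proto`:
+
+```
+// Sketch only: RPC and enum value names come from query.proto.
+QueryResponse response = stub.submitQuery(request);
+if (response.getStatus() == QueryStatus.OK) {
+  // The entire result set arrives in this one response (no paging).
+  System.out.println(response.getData().toStringUtf8());
+} else {
+  System.err.println(response.getErrorMessage());
+}
+```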
+
+When using the Protobuf response format, your Protobuf classes must be
+available to the extension, as described in the installation section above.
+Specify the response Protobuf message name in the request:
+
+```
+QueryRequest.newBuilder()
+    .setQuery("SELECT dim1, dim2, dim3, cnt, m1, m2, unique_dim1, __time AS \"date\" FROM foo")
+    .setQueryType(QueryOuterClass.QueryType.SQL)
+    .setProtobufMessageName(QueryResult.class.getName())
+    .setResultFormat(QueryResultFormat.PROTOBUF_INLINE)
+    .build();
+```
+
+Response message:
+
+```
+message QueryResult {
+  string dim1 = 1;
+  string dim2 = 2;
+  string dim3 = 3;
+  int64 cnt = 4;
+  float m1 = 5;
+  double m2 = 6;
+  bytes unique_dim1 = 7;
+  google.protobuf.Timestamp date = 8;
+}
+```
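+
+On the client side, the `GrpcResponseHandler` class included in this project
+can split the length-delimited binary payload back into typed messages. A
+sketch, again assuming the payload accessor is `getData()`:
+
+```
+// GrpcResponseHandler.of() and get(ByteString) are provided by this extension.
+GrpcResponseHandler<QueryResult> handler = GrpcResponseHandler.of(QueryResult.class);
+List<QueryResult> rows = handler.get(response.getData());
+for (QueryResult row : rows) {
+  // Field accessors are generated from the QueryResult message above.
+  System.out.println(row.getDim1() + ": " + row.getCnt());
+}
+```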
+
+## Security
+
+The extension supports both "anonymous" and Basic authentication. Anonymous is the
+mode for an out-of-the-box Druid: no authentication needed. The extension does not yet
+support other security extensions: each needs its own specific integration.
+
+Clients that use basic authentication must include a set of credentials. See
+`BasicCredentials` for a typical implementation and `BasicAuthTest` for how to
+configure the credentials in the client.
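+
+For illustration, a client can attach the credentials with the stock gRPC
+utilities; this sketch uses only standard `io.grpc` APIs, and the server side
+expects the `Basic` scheme handled by `BasicAuthServerInterceptor`:
+
+```
+// Encode "user:password" and attach it as an Authorization header.
+Metadata headers = new Metadata();
+Metadata.Key<String> authKey =
+    Metadata.Key.of("Authorization", Metadata.ASCII_STRING_MARSHALLER);
+String token = Base64.getEncoder()
+    .encodeToString("user:password".getBytes(StandardCharsets.UTF_8));
+headers.put(authKey, "Basic " + token);
+QueryGrpc.QueryBlockingStub stub = QueryGrpc.newBlockingStub(channel)
+    .withInterceptors(MetadataUtils.newAttachHeadersInterceptor(headers));
+```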
+
+## Implementation Notes
+
+This project contains several components:
+
+* Guice module and associated server initialization code.
+* Netty-based gRPC server.
+* A "driver" that performs the actual query and generates the results.
+
+## Debugging
+
+Debugging of the gRPC extension requires extra care due to the nuances of loading
+classes from an extension.
+
+### Running in a Server
+
+Druid extensions are designed to run in the Druid server. The gRPC extension is
+loaded only in the Druid Broker using the configuration described above. If something
+fails during startup, the Broker will crash. Consult the Broker logs to determine
+what went wrong. Startup failures are typically due to required jars not being installed
+as part of the extension. Check the `pom.xml` file to track down what's missing.
+
+Failures can also occur when running a query. Such failures will result in a failure
+response and should result in a log entry in the Broker log file. Use the log entry
+to sort out what went wrong.
+
+You can also attach a debugger to the running process. You'll have to enable the debugger
+in the server by adding the required parameters to the Broker's `jvm.config` file.
+
+### Debugging using Unit Tests
+
+To debug the functionality of the extension, your best bet is to debug in the context
+of a unit test. Druid provides a special test-only SQL stack with a few pre-defined
+datasources. See the various `CalciteQueryTest` classes to see what these are. You can
+also query Druid's various system tables. See `GrpcQueryTest` for a simple "starter"
+unit test that configures the server and uses an in-process client to send requests.
+
+Most unit testing can be done without the gRPC server, by calling the `QueryDriver`
+class directly. That is, if the goal is to work with the code that takes a request, runs
+a query, and produces a response, then the driver is the key and the server is just a
+bit of extra complexity. See the `DriverTest` class for an example unit test.
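+
+In outline, such a test drives the query path directly, using the
+`QueryDriver` constructor and `submitQuery()` signatures defined in this
+project; the mapper, statement factory, lifecycle factory, and auth result
+all come from the test fixtures:
+
+```
+// Sketch: the wiring objects are supplied by Druid's SQL test framework.
+QueryDriver driver = new QueryDriver(jsonMapper, sqlStatementFactory, queryLifecycleFactory);
+QueryRequest request = QueryRequest.newBuilder()
+    .setQuery("SELECT * FROM foo")
+    .setQueryType(QueryOuterClass.QueryType.SQL)
+    .setResultFormat(QueryResultFormat.CSV)
+    .build();
+QueryResponse response = driver.submitQuery(request, authResult);
+```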
+
+### Debugging in a Server in an IDE
+
+We would like to be able to debug the gRPC extension, within the Broker, in an IDE.
+As it turns out, doing so breaks Druid's class loader mechanisms in ways that are both
+hard to understand and hard to work around. When run in a server, Java creates an instance
+of `GrpcQueryModule` using the extension's class loader. Java then uses that same class
+loader to load other classes in the extension, including those here and those in the
+shaded gRPC jar file.
+
+However, when run in an IDE, if this project is on the class path, then the `GrpcQueryModule`
+class will be loaded from the "App" class loader. This works fine: it causes the other
+classes of this module to also be loaded from the class path. However, once execution
+calls into gRPC, Java will use the App class loader, not the extension class loader, and
+will fail to find some of the classes, resulting in Java exceptions. Worse, in some cases,
+Java may load the same class from both class loaders. To Java, these are not the same
+classes, and you will get mysterious errors as a result.
+
+For now, the lesson is: don't try to debug the extension in the Broker in the IDE. Use
+one of the above options instead.
+
+For reference (and in case we figure out a solution to the class loader conflict),
+the way to debug the Broker in an IDE is the following:
+
+* Build your branch. Use the `-P bundle-contrib-exts` flag in addition to `-P dist`, as described
+  above.
+* Create an install from the distribution produced above.
+* Use the `single-server/micro-quickstart` config for debugging.
+* Configure the installation using the steps above.
+* Modify the supervise config for your chosen configuration to comment out the line that
+  launches the Broker. Use the hash (`#`) character to comment out the line.
+* In your IDE, define a launch configuration for the Broker.
+ * The launch command is `server broker`
+ * Add the following JVM arguments:
+
+```text
+--add-exports java.base/jdk.internal.perf=ALL-UNNAMED
+--add-exports jdk.management/com.sun.management.internal=ALL-UNNAMED
+```
+
+ * Define `grpc-query` as a project dependency. (This is for Eclipse; IntelliJ may differ.)
+ * Configure the class path to include the common and Broker properties files.
+* Launch the micro-quickstart cluster.
+* Launch the Broker in your IDE.
+
+### gRPC Logging
+
+Debugging of the gRPC stack is difficult since the shaded jar loses source attachments.
+
+Logging helps. gRPC logging is not enabled via Druid's logging system. Instead, [create
+the following `logging.properties` file](https://stackoverflow.com/questions/50243717/grpc-logger-level):
+
+```text
+handlers=java.util.logging.ConsoleHandler
+io.grpc.level=FINE
+java.util.logging.ConsoleHandler.level=FINE
+java.util.logging.ConsoleHandler.formatter=java.util.logging.SimpleFormatter
+```
+
+Then, pass the following on the command line:
+
+```text
+-Djava.util.logging.config.file=logging.properties
+```
+
+Adjust the path to the file depending on where you put the file.
+
+## Acknowledgements
+
+This is not the first project to have created a gRPC API for Druid. Others include:
+
+* [[Proposal] define a RPC protocol for querying data, support apache Arrow as data
+ exchange interface](https://github.com/apache/druid/issues/3891)
+* [gRPC Druid extension PoC](https://github.com/ndolgov/gruid)
+* [Druid gRPC-json server extension](https://github.com/apache/druid/pull/6798)
+
+Full credit goes to those who have gone this way before.
+
+Note that the class loader solution used by the two code bases above turned out
+not to be needed. See the notes above about the class loader issues.
diff --git a/extensions-contrib/grpc-query/pom.xml b/extensions-contrib/grpc-query/pom.xml
new file mode 100644
index 00000000000..101e2f34b74
--- /dev/null
+++ b/extensions-contrib/grpc-query/pom.xml
@@ -0,0 +1,375 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one
+  ~ or more contributor license agreements.  See the NOTICE file
+  ~ distributed with this work for additional information
+  ~ regarding copyright ownership.  The ASF licenses this file
+  ~ to you under the Apache License, Version 2.0 (the
+  ~ "License"); you may not use this file except in compliance
+  ~ with the License.  You may obtain a copy of the License at
+  ~
+  ~   http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~ Unless required by applicable law or agreed to in writing,
+  ~ software distributed under the License is distributed on an
+  ~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  ~ KIND, either express or implied.  See the License for the
+  ~ specific language governing permissions and limitations
+  ~ under the License.
+  -->
+<project xmlns="http://maven.apache.org/POM/4.0.0"
+         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
+  <modelVersion>4.0.0</modelVersion>
+
+  <groupId>org.apache.druid.extensions.contrib</groupId>
+  <artifactId>grpc-query</artifactId>
+  <name>grpc-query</name>
+  <description>grpc-query</description>
+
+  <parent>
+    <groupId>org.apache.druid</groupId>
+    <artifactId>druid</artifactId>
+    <version>32.0.0-SNAPSHOT</version>
+    <relativePath>../../pom.xml</relativePath>
+  </parent>
+
+  <dependencyManagement>
+    <dependencies>
+      <dependency>
+        <groupId>io.grpc</groupId>
+        <artifactId>grpc-bom</artifactId>
+        <version>1.59.0</version>
+        <type>pom</type>
+        <scope>import</scope>
+      </dependency>
+    </dependencies>
+  </dependencyManagement>
+
+  <dependencies>
+    <dependency>
+      <groupId>org.apache.druid</groupId>
+      <artifactId>druid-server</artifactId>
+      <version>${project.parent.version}</version>
+      <scope>provided</scope>
+    </dependency>
+    <dependency>
+      <groupId>org.apache.druid</groupId>
+      <artifactId>druid-processing</artifactId>
+      <version>${project.parent.version}</version>
+      <scope>provided</scope>
+    </dependency>
+    <dependency>
+      <groupId>org.apache.druid</groupId>
+      <artifactId>druid-sql</artifactId>
+      <version>${project.parent.version}</version>
+      <scope>provided</scope>
+    </dependency>
+    <dependency>
+      <groupId>com.fasterxml.jackson.module</groupId>
+      <artifactId>jackson-module-guice</artifactId>
+      <scope>provided</scope>
+    </dependency>
+    <dependency>
+      <groupId>com.google.inject</groupId>
+      <artifactId>guice</artifactId>
+      <scope>provided</scope>
+    </dependency>
+    <dependency>
+      <groupId>com.fasterxml.jackson.core</groupId>
+      <artifactId>jackson-databind</artifactId>
+      <scope>provided</scope>
+    </dependency>
+    <dependency>
+      <groupId>com.fasterxml.jackson.core</groupId>
+      <artifactId>jackson-core</artifactId>
+      <scope>provided</scope>
+    </dependency>
+    <dependency>
+      <groupId>com.google.inject.extensions</groupId>
+      <artifactId>guice-multibindings</artifactId>
+      <scope>provided</scope>
+    </dependency>
+    <dependency>
+      <groupId>com.google.guava</groupId>
+      <artifactId>guava</artifactId>
+      <version>${guava.version}</version>
+      <scope>provided</scope>
+    </dependency>
+    <dependency>
+      <groupId>com.google.code.findbugs</groupId>
+      <artifactId>jsr305</artifactId>
+      <scope>provided</scope>
+    </dependency>
+
+    <dependency>
+      <groupId>io.netty</groupId>
+      <artifactId>netty-buffer</artifactId>
+      <scope>provided</scope>
+    </dependency>
+    <dependency>
+      <groupId>io.netty</groupId>
+      <artifactId>netty-codec-http</artifactId>
+      <scope>provided</scope>
+    </dependency>
+    <dependency>
+      <groupId>io.netty</groupId>
+      <artifactId>netty-common</artifactId>
+      <scope>provided</scope>
+    </dependency>
+    <dependency>
+      <groupId>io.netty</groupId>
+      <artifactId>netty-handler</artifactId>
+      <scope>provided</scope>
+    </dependency>
+    <dependency>
+      <groupId>io.netty</groupId>
+      <artifactId>netty-resolver</artifactId>
+      <scope>provided</scope>
+    </dependency>
+    <dependency>
+      <groupId>io.netty</groupId>
+      <artifactId>netty-transport</artifactId>
+      <scope>provided</scope>
+    </dependency>
+    <dependency>
+      <groupId>io.netty</groupId>
+      <artifactId>netty-codec-http2</artifactId>
+    </dependency>
+    <dependency>
+      <groupId>javax.ws.rs</groupId>
+      <artifactId>jsr311-api</artifactId>
+      <scope>provided</scope>
+    </dependency>
+    <dependency>
+      <groupId>io.grpc</groupId>
+      <artifactId>grpc-api</artifactId>
+    </dependency>
+    <dependency>
+      <groupId>io.grpc</groupId>
+      <artifactId>grpc-protobuf</artifactId>
+    </dependency>
+    <dependency>
+      <groupId>io.grpc</groupId>
+      <artifactId>grpc-stub</artifactId>
+    </dependency>
+    <dependency>
+      <groupId>io.grpc</groupId>
+      <artifactId>grpc-netty</artifactId>
+    </dependency>
+    <dependency>
+      <groupId>io.grpc</groupId>
+      <artifactId>grpc-core</artifactId>
+    </dependency>
+    <dependency>
+      <groupId>com.google.protobuf</groupId>
+      <artifactId>protobuf-java</artifactId>
+      <version>${protobuf.version}</version>
+    </dependency>
+    <dependency>
+      <groupId>jakarta.validation</groupId>
+      <artifactId>jakarta.validation-api</artifactId>
+      <scope>provided</scope>
+    </dependency>
+    <dependency>
+      <groupId>org.apache.calcite.avatica</groupId>
+      <artifactId>avatica-core</artifactId>
+      <scope>provided</scope>
+    </dependency>
+    <dependency>
+      <groupId>jakarta.inject</groupId>
+      <artifactId>jakarta.inject-api</artifactId>
+      <scope>provided</scope>
+    </dependency>
+    <dependency>
+      <groupId>com.fasterxml.jackson.core</groupId>
+      <artifactId>jackson-annotations</artifactId>
+      <scope>provided</scope>
+    </dependency>
+    <dependency>
+      <groupId>joda-time</groupId>
+      <artifactId>joda-time</artifactId>
+      <scope>provided</scope>
+    </dependency>
+    <dependency>
+      <groupId>org.apache.calcite</groupId>
+      <artifactId>calcite-core</artifactId>
+      <scope>provided</scope>
+    </dependency>
+    <dependency>
+      <groupId>javax.inject</groupId>
+      <artifactId>javax.inject</artifactId>
+      <version>1</version>
+      <scope>provided</scope>
+    </dependency>
+
+    <!-- Tests -->
+    <dependency>
+      <groupId>org.apache.druid</groupId>
+      <artifactId>druid-sql</artifactId>
+      <version>${project.parent.version}</version>
+      <type>test-jar</type>
+      <scope>test</scope>
+    </dependency>
+    <dependency>
+      <groupId>junit</groupId>
+      <artifactId>junit</artifactId>
+      <scope>test</scope>
+    </dependency>
+    <dependency>
+      <groupId>org.junit.jupiter</groupId>
+      <artifactId>junit-jupiter-api</artifactId>
+      <scope>test</scope>
+    </dependency>
+    <dependency>
+      <groupId>org.apache.druid</groupId>
+      <artifactId>druid-server</artifactId>
+      <version>${project.parent.version}</version>
+      <type>test-jar</type>
+      <scope>test</scope>
+    </dependency>
+    <dependency>
+      <groupId>org.apache.druid</groupId>
+      <artifactId>druid-processing</artifactId>
+      <version>${project.parent.version}</version>
+      <type>test-jar</type>
+      <scope>test</scope>
+    </dependency>
+    <dependency>
+      <groupId>org.easymock</groupId>
+      <artifactId>easymock</artifactId>
+      <scope>test</scope>
+    </dependency>
+    <dependency>
+      <groupId>org.apache.druid.extensions</groupId>
+      <artifactId>druid-basic-security</artifactId>
+      <version>${project.parent.version}</version>
+      <scope>test</scope>
+    </dependency>
+    <dependency>
+      <groupId>org.reflections</groupId>
+      <artifactId>reflections</artifactId>
+      <scope>test</scope>
+    </dependency>
+  </dependencies>
+
+  <build>
+    <extensions>
+      <extension>
+        <groupId>kr.motd.maven</groupId>
+        <artifactId>os-maven-plugin</artifactId>
+        <version>1.5.0.Final</version>
+      </extension>
+    </extensions>
+    <plugins>
+      <plugin>
+        <groupId>org.xolstice.maven.plugins</groupId>
+        <artifactId>protobuf-maven-plugin</artifactId>
+        <version>0.6.1</version>
+        <configuration>
+          <protocArtifact>com.google.protobuf:protoc:3.21.7:exe:${os.detected.classifier}</protocArtifact>
+          <pluginId>grpc-java</pluginId>
+          <pluginArtifact>io.grpc:protoc-gen-grpc-java:1.52.0:exe:${os.detected.classifier}</pluginArtifact>
+        </configuration>
+        <executions>
+          <execution>
+            <goals>
+              <goal>compile</goal>
+              <goal>compile-custom</goal>
+              <goal>test-compile</goal>
+            </goals>
+          </execution>
+        </executions>
+      </plugin>
+      <plugin>
+        <groupId>org.codehaus.mojo</groupId>
+        <artifactId>build-helper-maven-plugin</artifactId>
+        <executions>
+          <execution>
+            <id>add-test-source</id>
+            <phase>generate-sources</phase>
+            <goals>
+              <goal>add-source</goal>
+            </goals>
+            <configuration>
+              <sources>
+                <source>target/generated-test-sources/protobuf/java</source>
+                <source>target/generated-sources/protobuf/grpc-java</source>
+              </sources>
+            </configuration>
+          </execution>
+        </executions>
+      </plugin>
+      <plugin>
+        <groupId>org.apache.maven.plugins</groupId>
+        <artifactId>maven-jar-plugin</artifactId>
+        <executions>
+          <execution>
+            <id>test-jar</id>
+            <phase>package</phase>
+            <goals>
+              <goal>test-jar</goal>
+            </goals>
+            <configuration>
+              <classifier>tests</classifier>
+            </configuration>
+          </execution>
+          <execution>
+            <id>proto-jar</id>
+            <phase>package</phase>
+            <goals>
+              <goal>test-jar</goal>
+            </goals>
+            <configuration>
+              <classifier>test-proto</classifier>
+              <includes>
+                <include>**/proto/*</include>
+              </includes>
+            </configuration>
+          </execution>
+        </executions>
+      </plugin>
+      <plugin>
+        <groupId>org.apache.maven.plugins</groupId>
+        <artifactId>maven-dependency-plugin</artifactId>
+        <configuration>
+          <ignoredUnusedDeclaredDependencies>
+            <ignoredUnusedDeclaredDependency>io.netty:netty-codec-http2</ignoredUnusedDeclaredDependency>
+            <ignoredUnusedDeclaredDependency>io.grpc:grpc-core:jar</ignoredUnusedDeclaredDependency>
+            <ignoredUnusedDeclaredDependency>io.grpc:grpc-netty:jar</ignoredUnusedDeclaredDependency>
+          </ignoredUnusedDeclaredDependencies>
+        </configuration>
+      </plugin>
+    </plugins>
+  </build>
+</project>
diff --git a/extensions-contrib/grpc-query/src/main/java/org/apache/druid/grpc/client/GrpcResponseHandler.java b/extensions-contrib/grpc-query/src/main/java/org/apache/druid/grpc/client/GrpcResponseHandler.java
new file mode 100644
index 00000000000..37ac2cf2e85
--- /dev/null
+++ b/extensions-contrib/grpc-query/src/main/java/org/apache/druid/grpc/client/GrpcResponseHandler.java
@@ -0,0 +1,96 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.druid.grpc.client;
+
+import com.google.protobuf.AbstractMessageLite;
+import com.google.protobuf.ByteString;
+import com.google.protobuf.MessageLite;
+
+import java.io.ByteArrayInputStream;
+import java.io.IOException;
+import java.io.InputStream;
+import java.lang.reflect.Method;
+import java.util.ArrayList;
+import java.util.List;
+
+public class GrpcResponseHandler<T extends AbstractMessageLite>
+{
+ private final T message;
+
+ private GrpcResponseHandler(final Class<T> clazz)
+ {
+ this.message = get(clazz);
+ }
+
+ public static <T extends AbstractMessageLite> GrpcResponseHandler<T> of(Class<T> clazz)
+ {
+ return new GrpcResponseHandler<>(clazz);
+ }
+
+ public List<T> get(ByteString byteString)
+ {
+ return get(new ByteArrayInputStream(byteString.toByteArray()));
+ }
+
+ @SuppressWarnings("unchecked")
+ public List<T> get(InputStream inputStream)
+ {
+ try {
+ final List<T> data = new ArrayList<>();
+ while (true) {
+ try {
+ final MessageLite messageLite =
+ message
+ .getDefaultInstanceForType()
+ .getParserForType()
+ .parseDelimitedFrom(inputStream);
+ if (messageLite == null) {
+ break;
+ }
+ data.add((T) messageLite);
+ }
+ catch (IOException e) {
+ throw new RuntimeException(e);
+ }
+ }
+ return data;
+ }
+ finally {
+ try {
+ inputStream.close();
+ }
+ catch (IOException e) {
+ // ignore
+ }
+ }
+ }
+
+ @SuppressWarnings("unchecked")
+ private T get(Class<T> clazz)
+ {
+ try {
+ final Method method = clazz.getMethod("getDefaultInstance", new Class<?>[0]);
+ return (T) method.invoke(null);
+ }
+ catch (Exception e) {
+ throw new RuntimeException(e);
+ }
+ }
+}
diff --git a/extensions-contrib/grpc-query/src/main/java/org/apache/druid/grpc/guice/GrpcQueryModule.java b/extensions-contrib/grpc-query/src/main/java/org/apache/druid/grpc/guice/GrpcQueryModule.java
new file mode 100644
index 00000000000..6621a92ed95
--- /dev/null
+++ b/extensions-contrib/grpc-query/src/main/java/org/apache/druid/grpc/guice/GrpcQueryModule.java
@@ -0,0 +1,40 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.druid.grpc.guice;
+
+import com.google.inject.Binder;
+import org.apache.druid.discovery.NodeRole;
+import org.apache.druid.grpc.server.GrpcEndpointInitializer;
+import org.apache.druid.grpc.server.GrpcQueryConfig;
+import org.apache.druid.guice.JsonConfigProvider;
+import org.apache.druid.guice.LifecycleModule;
+import org.apache.druid.guice.annotations.LoadScope;
+import org.apache.druid.initialization.DruidModule;
+
+@LoadScope(roles = NodeRole.BROKER_JSON_NAME)
+public class GrpcQueryModule implements DruidModule
+{
+ @Override
+ public void configure(Binder binder)
+ {
+ JsonConfigProvider.bind(binder, GrpcQueryConfig.CONFIG_BASE, GrpcQueryConfig.class);
+ LifecycleModule.register(binder, GrpcEndpointInitializer.class);
+ }
+}
diff --git a/extensions-contrib/grpc-query/src/main/java/org/apache/druid/grpc/server/AnonymousAuthServerInterceptor.java b/extensions-contrib/grpc-query/src/main/java/org/apache/druid/grpc/server/AnonymousAuthServerInterceptor.java
new file mode 100644
index 00000000000..3059c603d47
--- /dev/null
+++ b/extensions-contrib/grpc-query/src/main/java/org/apache/druid/grpc/server/AnonymousAuthServerInterceptor.java
@@ -0,0 +1,68 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.druid.grpc.server;
+
+import com.google.common.collect.ImmutableMap;
+import io.grpc.Context;
+import io.grpc.Contexts;
+import io.grpc.Metadata;
+import io.grpc.ServerCall;
+import io.grpc.ServerCall.Listener;
+import io.grpc.ServerCallHandler;
+import io.grpc.ServerInterceptor;
+import org.apache.druid.server.security.Authenticator;
+
+import javax.inject.Inject;
+
+/**
+ * "Authorizes" an anonymous request, which just means adding an "allow all"
+ * authorization result in the context. Use this form for either of Druid's
+ * "allow all" authorizers.
+ *
+ * @see {@link BasicAuthServerInterceptor} for details
+ */
+public class AnonymousAuthServerInterceptor implements ServerInterceptor
+{
+ private final Authenticator authenticator;
+
+ @Inject
+ public AnonymousAuthServerInterceptor(Authenticator authenticator)
+ {
+ this.authenticator = authenticator;
+ }
+
+ @Override
+ public <ReqT, RespT> Listener<ReqT> interceptCall(
+ ServerCall<ReqT, RespT> call,
+ Metadata headers,
+ ServerCallHandler<ReqT, RespT> next
+ )
+ {
+ return Contexts.interceptCall(
+ Context.current().withValue(
+ QueryServer.AUTH_KEY,
+ authenticator.authenticateJDBCContext(ImmutableMap.of())
+ ),
+ call,
+ headers,
+ next
+ );
+ }
+}
diff --git a/extensions-contrib/grpc-query/src/main/java/org/apache/druid/grpc/server/BasicAuthServerInterceptor.java b/extensions-contrib/grpc-query/src/main/java/org/apache/druid/grpc/server/BasicAuthServerInterceptor.java
new file mode 100644
index 00000000000..15a4926e209
--- /dev/null
+++ b/extensions-contrib/grpc-query/src/main/java/org/apache/druid/grpc/server/BasicAuthServerInterceptor.java
@@ -0,0 +1,149 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.druid.grpc.server;
+
+import com.google.common.collect.ImmutableMap;
+import io.grpc.Context;
+import io.grpc.Contexts;
+import io.grpc.Metadata;
+import io.grpc.ServerCall;
+import io.grpc.ServerCall.Listener;
+import io.grpc.ServerCallHandler;
+import io.grpc.ServerInterceptor;
+import io.grpc.Status;
+import io.grpc.StatusRuntimeException;
+import org.apache.druid.java.util.common.StringUtils;
+import org.apache.druid.java.util.common.logger.Logger;
+import org.apache.druid.server.security.AuthenticationResult;
+import org.apache.druid.server.security.Authenticator;
+
+import javax.inject.Inject;
+
+/**
+ * Authorizes a Basic Auth user name and password and sets the resulting
+ * {@link AuthenticationResult} on the call context.
+ *
+ * Implements the gRPC {@link ServerInterceptor} to wrap the actual RPC
+ * call with a step which pulls the "Authorization" header from the request,
+ * decodes the user name and password, looks up the user using the
+ * BasicHTTPAuthenticator#authenticateJDBCContext(java.util.Map)
+ * method, and attaches the resulting {@link AuthenticationResult} to the call
+ * {@link Context}. The gRPC service will later retrieve the auth result to pass
+ * into the Driver for use in validating query resources.
+ *
+ * Note that gRPC documentation in this area is sparse. Examples are hard to
+ * find. gRPC provides exactly one (obscure) way to do things, as represented
+ * here.
+ *
+ * Auth failures can occur in many ways: missing or badly formed header, invalid
+ * user name or password, etc. In each case, the code throws a
+ * {@link StatusRuntimeException} with {@link Status#PERMISSION_DENIED}. No hint
+ * of the problem is provided to the user.
+ *
+ * This pattern can be replicated for other supported Druid authorizers.
+ */
+public class BasicAuthServerInterceptor implements ServerInterceptor
+{
+ public static final String AUTHORIZATION_HEADER = "Authorization";
+ private static final String BASIC_PREFIX = "Basic ";
+ private static final Metadata.Key<String> AUTHORIZATION_KEY =
+ Metadata.Key.of(AUTHORIZATION_HEADER, Metadata.ASCII_STRING_MARSHALLER);
+ private static final Logger LOG = new Logger(BasicAuthServerInterceptor.class);
+
+ // Want BasicHTTPAuthenticator, but it is not visible here.
+ private final Authenticator authenticator;
+
+ @Inject
+ public BasicAuthServerInterceptor(Authenticator authenticator)
+ {
+ this.authenticator = authenticator;
+ }
+
+ @Override
+ public <ReqT, RespT> Listener<ReqT> interceptCall(
+ ServerCall<ReqT, RespT> call,
+ Metadata headers,
+ ServerCallHandler<ReqT, RespT> next
+ )
+ {
+ // Use a gRPC method to wrap the actual call in a new context
+ // that includes the auth result.
+ return Contexts.interceptCall(
+ Context.current().withValue(
+ QueryServer.AUTH_KEY,
+ authenticate(headers.get(AUTHORIZATION_KEY))
+ ),
+ call,
+ headers,
+ next
+ );
+ }
+
+ // See BasicHTTPAuthenticator.Filter
+ public AuthenticationResult authenticate(String encodedUserSecret)
+ {
+ if (encodedUserSecret == null) {
+ throw new StatusRuntimeException(Status.PERMISSION_DENIED);
+ }
+
+ if (!encodedUserSecret.startsWith(BASIC_PREFIX)) {
+ throw new StatusRuntimeException(Status.PERMISSION_DENIED);
+ }
+ encodedUserSecret = encodedUserSecret.substring(BASIC_PREFIX.length());
+
+ // At this point, encodedUserSecret is not null, indicating that the request intends to perform
+ // Basic HTTP authentication.
+ // Copy of BasicAuthUtils.decodeUserSecret() which is not visible here.
+ String decodedUserSecret;
+ try {
+ decodedUserSecret = StringUtils.fromUtf8(StringUtils.decodeBase64String(encodedUserSecret));
+ }
+ catch (IllegalArgumentException iae) {
+ LOG.info("Malformed user secret.");
+ throw new StatusRuntimeException(Status.PERMISSION_DENIED);
+ }
+
+ String[] splits = decodedUserSecret.split(":");
+ if (splits.length != 2) {
+ // The decoded user secret is not of the right format
+ throw new StatusRuntimeException(Status.PERMISSION_DENIED);
+ }
+
+ final String user = splits[0];
+ final String password = splits[1];
+
+ // Fail fast for any authentication error. If the authentication result is null we also fail
+ // as this indicates a non-existent user.
+ try {
+ AuthenticationResult authenticationResult = authenticator.authenticateJDBCContext(
+ ImmutableMap.of("user", user, "password", password)
+ );
+ if (authenticationResult == null) {
+ throw new StatusRuntimeException(Status.PERMISSION_DENIED);
+ }
+ return authenticationResult;
+ }
+ // Want BasicSecurityAuthenticationException, but it is not visible here.
+ catch (IllegalArgumentException ex) {
+ LOG.info("Exception authenticating user [%s] - [%s]", user, ex.getMessage());
+ throw new StatusRuntimeException(Status.PERMISSION_DENIED);
+ }
+ }
+}
diff --git a/extensions-contrib/grpc-query/src/main/java/org/apache/druid/grpc/server/GrpcEndpointInitializer.java b/extensions-contrib/grpc-query/src/main/java/org/apache/druid/grpc/server/GrpcEndpointInitializer.java
new file mode 100644
index 00000000000..1cb3884884c
--- /dev/null
+++ b/extensions-contrib/grpc-query/src/main/java/org/apache/druid/grpc/server/GrpcEndpointInitializer.java
@@ -0,0 +1,116 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.druid.grpc.server;
+
+import com.fasterxml.jackson.databind.ObjectMapper;
+import org.apache.druid.guice.ManageLifecycleServer;
+import org.apache.druid.guice.annotations.Json;
+import org.apache.druid.guice.annotations.NativeQuery;
+import org.apache.druid.java.util.common.ISE;
+import org.apache.druid.java.util.common.lifecycle.LifecycleStart;
+import org.apache.druid.java.util.common.lifecycle.LifecycleStop;
+import org.apache.druid.java.util.common.logger.Logger;
+import org.apache.druid.server.QueryLifecycleFactory;
+import org.apache.druid.server.security.AuthenticatorMapper;
+import org.apache.druid.sql.SqlStatementFactory;
+
+import javax.inject.Inject;
+
+import java.io.IOException;
+
+/**
+ * Initializes the gRPC endpoint (server). This version uses a Netty-based server
+ * separate from Druid's primary Jetty-based server. We may want to consider a
+ * recent addition to the gRPC examples to run gRPC as a servlet. However, trying
+ * that turned out to incur many issues, including the fact that there was no way
+ * to pass the AuthenticationResult down through the many layers of gRPC into the
+ * query code. So, we use the gRPC server instead.
+ *
+ * An instance of this class is created by Guice and managed via Druid's
+ * lifecycle manager.
+ */
+@ManageLifecycleServer
+public class GrpcEndpointInitializer
+{
+ private static final Logger log = new Logger(GrpcEndpointInitializer.class);
+
+ private final GrpcQueryConfig config;
+ private final QueryDriver driver;
+ private final AuthenticatorMapper authMapper;
+
+ private QueryServer server;
+
+ @Inject
+ public GrpcEndpointInitializer(
+ GrpcQueryConfig config,
+ final @Json ObjectMapper jsonMapper,
+ final @NativeQuery SqlStatementFactory sqlStatementFactory,
+ final QueryLifecycleFactory queryLifecycleFactory,
+ final AuthenticatorMapper authMapper
+ )
+ {
+ this.config = config;
+ this.authMapper = authMapper;
+ this.driver = new QueryDriver(jsonMapper, sqlStatementFactory, queryLifecycleFactory);
+ }
+
+ @LifecycleStart
+ public void start()
+ {
+ server = new QueryServer(config, driver, authMapper);
+ try {
+ server.start();
+ }
+ catch (IOException e) {
+ // Indicates an error when gRPC tried to start the server
+ // (such as the port already being in use).
+ log.error(e, "Fatal error: gRPC query server startup failed");
+
+ // This exception will bring down the Broker as there is not much we can
+ // do if we can't start the gRPC endpoint.
+ throw new ISE(e, "Fatal error: grpc query server startup failed");
+ }
+ catch (Throwable t) {
+ // Catch-all for other errors. The most likely error is that some class was not found
+ // (that is, class loader issues in an IDE, or a jar missing in the extension).
+ log.error(t, "Fatal error: gRPC query server startup failed");
+
+ // This exception will bring down the Broker as there is not much we can
+ // do if we can't start the gRPC endpoint.
+ throw t;
+ }
+ }
+
+ @LifecycleStop
+ public void stop()
+ {
+ if (server != null) {
+ try {
+ server.blockUntilShutdown();
+ }
+ catch (InterruptedException e) {
+ // Just warn. We're shutting down anyway, so no need to throw an exception.
+ log.warn(e, "gRPC query server shutdown failed");
+ }
+ server = null;
+ }
+ }
+}
diff --git a/extensions-contrib/grpc-query/src/main/java/org/apache/druid/grpc/server/GrpcQueryConfig.java b/extensions-contrib/grpc-query/src/main/java/org/apache/druid/grpc/server/GrpcQueryConfig.java
new file mode 100644
index 00000000000..a9cdde23970
--- /dev/null
+++ b/extensions-contrib/grpc-query/src/main/java/org/apache/druid/grpc/server/GrpcQueryConfig.java
@@ -0,0 +1,53 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.druid.grpc.server;
+
+import com.fasterxml.jackson.annotation.JsonProperty;
+
+import javax.validation.constraints.Max;
+
+/**
+ * Grpc configs for the extension.
+ */
+public class GrpcQueryConfig
+{
+ public static final String CONFIG_BASE = "druid.grpcQuery";
+
+ @JsonProperty
+ @Max(0xffff)
+ private int port = 50051;
+
+ public GrpcQueryConfig()
+ {
+ }
+
+ public GrpcQueryConfig(int port)
+ {
+ this.port = port;
+ }
+
+ /**
+ * @return the port to accept gRPC client connections on
+ */
+ public int getPort()
+ {
+ return port;
+ }
+}
diff --git a/extensions-contrib/grpc-query/src/main/java/org/apache/druid/grpc/server/HealthService.java b/extensions-contrib/grpc-query/src/main/java/org/apache/druid/grpc/server/HealthService.java
new file mode 100644
index 00000000000..d40b0b68bd2
--- /dev/null
+++ b/extensions-contrib/grpc-query/src/main/java/org/apache/druid/grpc/server/HealthService.java
@@ -0,0 +1,170 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.druid.grpc.server;
+
+import com.google.common.util.concurrent.MoreExecutors;
+import io.grpc.Context;
+import io.grpc.Status;
+import io.grpc.stub.StreamObserver;
+import org.apache.druid.grpc.proto.HealthGrpc;
+import org.apache.druid.grpc.proto.HealthOuterClass.HealthCheckRequest;
+import org.apache.druid.grpc.proto.HealthOuterClass.HealthCheckResponse;
+import org.apache.druid.grpc.proto.HealthOuterClass.HealthCheckResponse.ServingStatus;
+
+import java.util.concurrent.ConcurrentHashMap;
+import java.util.concurrent.ConcurrentMap;
+import java.util.concurrent.CountDownLatch;
+
+/**
+ * Implementation of grpc health service. Provides {@code check(HealthCheckRequest, StreamObserver(HealthCheckResponse))}
+ * method to get health of a specific service or the overall server health.
+ *
+ * A client can call the {@code watch(HealthCheckRequest, StreamObserver(HealthCheckResponse))} method
+ * to perform a streaming health-check.
+ * The server will immediately send back a message indicating the current serving status.
+ * It will then subsequently send a new message whenever the service's serving status changes.
+ */
+class HealthService extends HealthGrpc.HealthImplBase
+{
+ private final ConcurrentMap<String, ServingStatus> serviceStatusMap;
+ private final ConcurrentMap<String, Context.CancellableContext> cancellationContexts;
+ private final ConcurrentMap<String, CountDownLatch> statusChangeLatchMap;
+
+ public HealthService()
+ {
+ this.serviceStatusMap = new ConcurrentHashMap<>();
+ this.cancellationContexts = new ConcurrentHashMap<>();
+ this.statusChangeLatchMap = new ConcurrentHashMap<>();
+ }
+
+ @Override
+ public void check(
+ HealthCheckRequest request,
+ StreamObserver<HealthCheckResponse> responseObserver
+ )
+ {
+ String serviceName = request.getService();
+ ServingStatus status = getServiceStatus(serviceName);
+ HealthCheckResponse response = buildHealthCheckResponse(status);
+ responseObserver.onNext(response);
+ responseObserver.onCompleted();
+ }
+
+ @Override
+ public void watch(
+ HealthCheckRequest request,
+ StreamObserver<HealthCheckResponse> responseObserver
+ )
+ {
+ String serviceName = request.getService();
+
+ Context.CancellableContext existingContext = cancellationContexts.get(serviceName);
+ if (existingContext != null) {
+ // Another request is already watching the same service
+ responseObserver.onError(Status.ALREADY_EXISTS.withDescription(
+ "Another watch request is already in progress for the same service").asRuntimeException());
+ return;
+ }
+
+ Context.CancellableContext cancellableContext = Context.current().withCancellation();
+ cancellationContexts.put(serviceName, cancellableContext);
+
+ // Attach a cancellation listener to the context
+ cancellableContext.addListener((context) -> {
+ // If the context is cancelled, remove the observer from the map
+ cancellationContexts.remove(serviceName);
+ }, MoreExecutors.directExecutor());
+
+
+ // Send an initial response with the current serving status
+ ServingStatus servingStatus = getServiceStatus(serviceName);
+ HealthCheckResponse initialResponse = buildHealthCheckResponse(servingStatus);
+ responseObserver.onNext(initialResponse);
+
+ // Continuously listen for service status changes
+ while (!cancellableContext.isCancelled()) {
+ // Wait for the service status to change
+ // Update the serving status and send a new response
+ servingStatus = waitForServiceStatusChange(serviceName);
+ HealthCheckResponse updatedResponse = buildHealthCheckResponse(servingStatus);
+ responseObserver.onNext(updatedResponse);
+ }
+
+ cancellationContexts.remove(serviceName);
+ responseObserver.onCompleted();
+ }
+
+ private HealthCheckResponse buildHealthCheckResponse(ServingStatus status)
+ {
+ return HealthCheckResponse
+ .newBuilder()
+ .setStatus(status)
+ .build();
+ }
+
+ // Method to register a new service with its initial serving status
+ public void registerService(String serviceName, ServingStatus servingStatus)
+ {
+ setServiceStatus(serviceName, servingStatus);
+ }
+
+ // Method to unregister a service
+ public void unregisterService(String serviceName)
+ {
+ setServiceStatus(serviceName, ServingStatus.NOT_SERVING);
+ }
+
+ private void setServiceStatus(String serviceName, ServingStatus newStatus)
+ {
+ ServingStatus currentStatus = getServiceStatus(serviceName);
+ if (currentStatus != newStatus) {
+ serviceStatusMap.put(serviceName, newStatus);
+
+ // Notify the waiting threads
+ CountDownLatch statusChangeLatch = statusChangeLatchMap.get(serviceName);
+ if (statusChangeLatch != null) {
+ statusChangeLatch.countDown();
+ }
+ }
+ }
+
+ public ServingStatus getServiceStatus(String serviceName)
+ {
+ return serviceStatusMap.getOrDefault(serviceName, ServingStatus.UNKNOWN);
+ }
+
+ public ServingStatus waitForServiceStatusChange(String serviceName)
+ {
+ CountDownLatch statusChangeLatch = new CountDownLatch(1);
+ statusChangeLatchMap.put(serviceName, statusChangeLatch);
+
+ // Wait for the status change or until the thread is interrupted
+ try {
+ statusChangeLatch.await();
+ }
+ catch (InterruptedException e) {
+ Thread.currentThread().interrupt();
+ }
+
+ statusChangeLatchMap.remove(serviceName);
+
+ return getServiceStatus(serviceName);
+ }
+}
diff --git a/extensions-contrib/grpc-query/src/main/java/org/apache/druid/grpc/server/ProtobufTransformer.java b/extensions-contrib/grpc-query/src/main/java/org/apache/druid/grpc/server/ProtobufTransformer.java
new file mode 100644
index 00000000000..795ea2b40af
--- /dev/null
+++ b/extensions-contrib/grpc-query/src/main/java/org/apache/druid/grpc/server/ProtobufTransformer.java
@@ -0,0 +1,167 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.druid.grpc.server;
+
+import com.google.protobuf.ByteString;
+import com.google.protobuf.Timestamp;
+import org.apache.calcite.rel.type.RelDataType;
+import org.apache.calcite.sql.type.SqlTypeName;
+import org.apache.druid.segment.column.ColumnType;
+import org.apache.druid.segment.column.RowSignature;
+import org.apache.druid.sql.SqlRowTransformer;
+import org.apache.druid.sql.calcite.planner.Calcites;
+import org.apache.druid.sql.calcite.table.RowSignatures;
+import org.joda.time.DateTime;
+import org.joda.time.DateTimeUtils;
+import org.joda.time.DateTimeZone;
+
+import javax.annotation.Nullable;
+import java.io.ByteArrayOutputStream;
+import java.io.IOException;
+import java.io.ObjectOutputStream;
+import java.util.Optional;
+import java.util.TimeZone;
+
+/**
+ * Transforms query result for protobuf format
+ */
+public class ProtobufTransformer
+{
+
+ /**
+ * Transform a sql query result into protobuf result format.
+ * For complex or missing column types the object is converted into a ByteString.
+ * Date and time column types are converted into proto timestamps.
+ * Remaining column types are not converted.
+ *
+ * @param rowTransformer row signature for sql query result
+ * @param row result row
+ * @param i index in the result row
+ * @return transformed query result in protobuf result format
+ */
+ @Nullable
+ public static Object transform(SqlRowTransformer rowTransformer, Object[] row, int i)
+ {
+ if (row[i] == null) {
+ return null;
+ }
+ final RelDataType rowType = rowTransformer.getRowType();
+ final SqlTypeName sqlTypeName = rowType.getFieldList().get(i).getType().getSqlTypeName();
+ final RowSignature signature = RowSignatures.fromRelDataType(rowType.getFieldNames(), rowType);
+ final Optional<ColumnType> columnType = signature.getColumnType(i);
+
+ if (sqlTypeName == SqlTypeName.TIMESTAMP
+ || sqlTypeName == SqlTypeName.DATE) {
+ if (sqlTypeName == SqlTypeName.TIMESTAMP) {
+ return convertEpochToProtoTimestamp((long) row[i]);
+ }
+ return convertDateToProtoTimestamp((int) row[i]);
+ }
+
+ if (!columnType.isPresent()) {
+ return convertComplexType(row[i]);
+ }
+
+ final ColumnType druidType = columnType.get();
+
+ if (druidType == ColumnType.STRING) {
+ return row[i];
+ } else if (druidType == ColumnType.LONG) {
+ return row[i];
+ } else if (druidType == ColumnType.FLOAT) {
+ return row[i];
+ } else if (druidType == ColumnType.DOUBLE) {
+ return row[i];
+ } else {
+ return convertComplexType(row[i]);
+ }
+ }
+
+ /**
+ * Transform a native query result into protobuf result format.
+ * For complex or missing column types the object is converted into a ByteString.
+ * Date and time column types are converted into proto timestamps.
+ * Remaining column types are not converted.
+ *
+ * @param rowSignature type signature for a query result row
+ * @param row result row
+ * @param i index in the result
+ * @param convertToTimestamp if the result should be converted to proto timestamp
+ * @return transformed query result in protobuf result format
+ */
+ @Nullable
+ public static Object transform(RowSignature rowSignature, Object[] row, int i, boolean convertToTimestamp)
+ {
+ if (row[i] == null) {
+ return null;
+ }
+
+ final Optional<ColumnType> columnType = rowSignature.getColumnType(i);
+
+ if (convertToTimestamp) {
+ return convertEpochToProtoTimestamp((long) row[i]);
+ }
+
+ if (!columnType.isPresent()) {
+ return convertComplexType(row[i]);
+ }
+
+ final ColumnType druidType = columnType.get();
+
+ if (druidType == ColumnType.STRING) {
+ return row[i];
+ } else if (druidType == ColumnType.LONG) {
+ return row[i];
+ } else if (druidType == ColumnType.FLOAT) {
+ return row[i];
+ } else if (druidType == ColumnType.DOUBLE) {
+ return row[i];
+ } else {
+ return convertComplexType(row[i]);
+ }
+ }
+
+ public static Timestamp convertEpochToProtoTimestamp(long value)
+ {
+ DateTime dateTime = Calcites.calciteTimestampToJoda(value, DateTimeZone.forTimeZone(TimeZone.getTimeZone("UTC")));
+ long seconds = DateTimeUtils.getInstantMillis(dateTime) / 1000;
+ return Timestamp.newBuilder().setSeconds(seconds).build();
+ }
+
+ public static Timestamp convertDateToProtoTimestamp(int value)
+ {
+ DateTime dateTime = Calcites.calciteDateToJoda(value, DateTimeZone.forTimeZone(TimeZone.getTimeZone("UTC")));
+ long seconds = DateTimeUtils.getInstantMillis(dateTime) / 1000;
+ return Timestamp.newBuilder().setSeconds(seconds).build();
+ }
+
+ private static ByteString convertComplexType(Object value)
+ {
+ try (ByteArrayOutputStream bos = new ByteArrayOutputStream();
+ ObjectOutputStream oos = new ObjectOutputStream(bos)) {
+ oos.writeObject(value);
+ oos.flush();
+ return ByteString.copyFrom(bos.toByteArray());
+ }
+ catch (IOException e) {
+ throw new RuntimeException(e);
+ }
+ }
+}
diff --git a/extensions-contrib/grpc-query/src/main/java/org/apache/druid/grpc/server/ProtobufWriter.java b/extensions-contrib/grpc-query/src/main/java/org/apache/druid/grpc/server/ProtobufWriter.java
new file mode 100644
index 00000000000..bf88c33d08a
--- /dev/null
+++ b/extensions-contrib/grpc-query/src/main/java/org/apache/druid/grpc/server/ProtobufWriter.java
@@ -0,0 +1,146 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.druid.grpc.server;
+
+import com.google.protobuf.Descriptors;
+import com.google.protobuf.GeneratedMessageV3;
+import com.google.protobuf.Message;
+import org.apache.calcite.rel.type.RelDataType;
+import org.apache.druid.segment.column.RowSignature;
+import org.apache.druid.sql.http.ResultFormat;
+
+import javax.annotation.Nullable;
+
+import java.io.IOException;
+import java.io.OutputStream;
+import java.lang.reflect.InvocationTargetException;
+import java.lang.reflect.Method;
+import java.util.HashMap;
+import java.util.Map;
+
+/**
+ * Implementation of {@code ResultFormat.Writer} for protobuf message.
+ */
+public class ProtobufWriter implements ResultFormat.Writer
+{
+ private final OutputStream outputStream;
+ private final GeneratedMessageV3 message;
+ private Message.Builder rowBuilder;
+ private final Map<String, Method> methods = new HashMap<>();
+
+ public ProtobufWriter(OutputStream outputStream, Class<? extends GeneratedMessageV3> clazz)
+ {
+ this.outputStream = outputStream;
+ this.message = get(clazz);
+ }
+
+ private GeneratedMessageV3 get(Class<? extends GeneratedMessageV3> clazz)
+ {
+ try {
+ final Method method = clazz.getMethod("getDefaultInstance", new Class<?>[0]);
+ return clazz.cast(method.invoke(null));
+ }
+ catch (Exception e) {
+ throw new RuntimeException(e);
+ }
+ }
+
+ @Override
+ public void writeResponseStart()
+ {
+ }
+
+ @Override
+ public void writeHeader(RelDataType rowType, boolean includeTypes, boolean includeSqlTypes)
+ {
+ }
+
+ @Override
+ public void writeHeaderFromRowSignature(RowSignature rowSignature, boolean b)
+ {
+
+ }
+
+ @Override
+ public void writeRowStart()
+ {
+ rowBuilder = message.getDefaultInstanceForType().newBuilderForType();
+ }
+
+ @Override
+ public void writeRowField(String name, @Nullable Object value)
+ {
+ if (value == null) {
+ return;
+ }
+ final Descriptors.FieldDescriptor fieldDescriptor =
+ message.getDescriptorForType().findFieldByName(name);
+ // we should throw an exception if fieldDescriptor is null
+ // this means the .proto fields don't match returned column names
+ if (fieldDescriptor == null) {
+ throw new QueryDriver.RequestError(
+ "Field [%s] not found in Protobuf [%s]",
+ name,
+ message.getClass()
+ );
+ }
+ final Method method = methods.computeIfAbsent("setField", k -> {
+ try {
+ return rowBuilder
+ .getClass()
+ .getMethod(
+ "setField", new Class>[]{Descriptors.FieldDescriptor.class, Object.class});
+ }
+ catch (NoSuchMethodException e) {
+ throw new RuntimeException(e);
+ }
+ });
+ try {
+ method.invoke(rowBuilder, fieldDescriptor, value);
+ }
+ catch (IllegalAccessException | InvocationTargetException e) {
+ throw new QueryDriver.RequestError(
+ "Could not write value [%s] to field [%s]",
+ value,
+ name
+ );
+ }
+ }
+
+ @Override
+ public void writeRowEnd() throws IOException
+ {
+ Message rowMessage = rowBuilder.build();
+ rowMessage.writeDelimitedTo(outputStream);
+ }
+
+ @Override
+ public void writeResponseEnd()
+ {
+ }
+
+ @Override
+ public void close() throws IOException
+ {
+ outputStream.flush();
+ outputStream.close();
+ }
+}
diff --git a/extensions-contrib/grpc-query/src/main/java/org/apache/druid/grpc/server/QueryDriver.java b/extensions-contrib/grpc-query/src/main/java/org/apache/druid/grpc/server/QueryDriver.java
new file mode 100644
index 00000000000..096a1439a4f
--- /dev/null
+++ b/extensions-contrib/grpc-query/src/main/java/org/apache/druid/grpc/server/QueryDriver.java
@@ -0,0 +1,720 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.druid.grpc.server;
+
+import com.fasterxml.jackson.core.JsonProcessingException;
+import com.fasterxml.jackson.databind.ObjectMapper;
+import com.google.common.base.Preconditions;
+import com.google.common.base.Strings;
+import com.google.common.collect.ImmutableMap;
+import com.google.protobuf.ByteString;
+import com.google.protobuf.GeneratedMessageV3;
+import org.apache.calcite.avatica.SqlType;
+import org.apache.calcite.rel.type.RelDataType;
+import org.apache.druid.grpc.proto.QueryOuterClass;
+import org.apache.druid.grpc.proto.QueryOuterClass.ColumnSchema;
+import org.apache.druid.grpc.proto.QueryOuterClass.DruidType;
+import org.apache.druid.grpc.proto.QueryOuterClass.QueryParameter;
+import org.apache.druid.grpc.proto.QueryOuterClass.QueryRequest;
+import org.apache.druid.grpc.proto.QueryOuterClass.QueryResponse;
+import org.apache.druid.grpc.proto.QueryOuterClass.QueryStatus;
+import org.apache.druid.java.util.common.RE;
+import org.apache.druid.java.util.common.StringUtils;
+import org.apache.druid.java.util.common.guava.Accumulator;
+import org.apache.druid.java.util.common.guava.Sequence;
+import org.apache.druid.java.util.common.logger.Logger;
+import org.apache.druid.query.Query;
+import org.apache.druid.query.QueryToolChest;
+import org.apache.druid.segment.column.ColumnHolder;
+import org.apache.druid.segment.column.ColumnType;
+import org.apache.druid.segment.column.RowSignature;
+import org.apache.druid.server.QueryLifecycle;
+import org.apache.druid.server.QueryLifecycleFactory;
+import org.apache.druid.server.security.Access;
+import org.apache.druid.server.security.AuthenticationResult;
+import org.apache.druid.server.security.ForbiddenException;
+import org.apache.druid.sql.DirectStatement;
+import org.apache.druid.sql.DirectStatement.ResultSet;
+import org.apache.druid.sql.SqlPlanningException;
+import org.apache.druid.sql.SqlQueryPlus;
+import org.apache.druid.sql.SqlRowTransformer;
+import org.apache.druid.sql.SqlStatementFactory;
+import org.apache.druid.sql.calcite.table.RowSignatures;
+import org.apache.druid.sql.http.ResultFormat;
+import org.apache.druid.sql.http.SqlParameter;
+import org.joda.time.format.ISODateTimeFormat;
+
+import java.io.ByteArrayOutputStream;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Map;
+import java.util.Optional;
+import java.util.UUID;
+
+
+/**
+ * "Driver" for the gRPC query endpoint. Handles translating the gRPC {@link QueryRequest}
+ * into Druid's internal formats, running the query, and translating the results into a
+ * gRPC {@link QueryResponse}. Allows for easier unit testing as we separate the machinery
+ * of running a query, given the request, from the gRPC server machinery.
+ */
+public class QueryDriver
+{
+ private static final Logger log = new Logger(QueryDriver.class);
+
+ private static final String TIME_FIELD_KEY = "timeFieldKey";
+
+ /**
+ * Internal runtime exception to report request errors.
+ */
+ protected static class RequestError extends RE
+ {
+ public RequestError(String msg, Object... args)
+ {
+ super(msg, args);
+ }
+ }
+
+ private final ObjectMapper jsonMapper;
+ private final SqlStatementFactory sqlStatementFactory;
+ private final QueryLifecycleFactory queryLifecycleFactory;
+
+ public QueryDriver(
+ final ObjectMapper jsonMapper,
+ final SqlStatementFactory sqlStatementFactory,
+ final QueryLifecycleFactory queryLifecycleFactory
+ )
+ {
+ this.jsonMapper = Preconditions.checkNotNull(jsonMapper, "jsonMapper");
+ this.sqlStatementFactory = Preconditions.checkNotNull(sqlStatementFactory, "sqlStatementFactory");
+ this.queryLifecycleFactory = queryLifecycleFactory;
+ }
+
+ /**
+ * First-cut synchronous query handler. Druid prefers to stream results, in
+ * part to avoid overly-short network timeouts. However, for now, we simply run
+ * the query within this call and prepare the Protobuf response. Async handling
+ * can come later.
+ */
+ public QueryResponse submitQuery(QueryRequest request, AuthenticationResult authResult)
+ {
+ if (request.getQueryType() == QueryOuterClass.QueryType.NATIVE) {
+ return runNativeQuery(request, authResult);
+ } else {
+ return runSqlQuery(request, authResult);
+ }
+ }
+
+ private QueryResponse runNativeQuery(QueryRequest request, AuthenticationResult authResult)
+ {
+ Query<?> query;
+ try {
+ query = jsonMapper.readValue(request.getQuery(), Query.class);
+ }
+ catch (JsonProcessingException e) {
+ return QueryResponse.newBuilder()
+ .setQueryId("")
+ .setStatus(QueryStatus.REQUEST_ERROR)
+ .setErrorMessage(e.getMessage())
+ .build();
+ }
+ if (Strings.isNullOrEmpty(query.getId())) {
+ query = query.withId(UUID.randomUUID().toString());
+ }
+
+ final QueryLifecycle queryLifecycle = queryLifecycleFactory.factorize();
+
+ final org.apache.druid.server.QueryResponse queryResponse;
+ final String currThreadName = Thread.currentThread().getName();
+ try {
+ queryLifecycle.initialize(query);
+ Access authorizationResult = queryLifecycle.authorize(authResult);
+ if (!authorizationResult.isAllowed()) {
+ throw new ForbiddenException(Access.DEFAULT_ERROR_MESSAGE);
+ }
+ queryResponse = queryLifecycle.execute();
+
+ QueryToolChest queryToolChest = queryLifecycle.getToolChest();
+
+ Sequence