Commit Graph

75 Commits

Author SHA1 Message Date
fjy d212b3fc43 [maven-release-plugin] prepare for next development iteration 2013-11-08 15:14:46 -08:00
fjy 4cd4578539 [maven-release-plugin] prepare release druid-0.6.9 2013-11-08 15:14:41 -08:00
fjy fd4fb78e1f [maven-release-plugin] prepare for next development iteration 2013-11-08 11:34:12 -08:00
fjy f8d5d127a4 [maven-release-plugin] prepare release druid-0.6.8 2013-11-08 11:34:09 -08:00
fjy 90240e6f1e [maven-release-plugin] prepare for next development iteration 2013-11-07 23:29:02 -08:00
fjy c3d6233bc4 [maven-release-plugin] prepare release druid-0.6.7 2013-11-07 23:28:58 -08:00
fjy d70bcc5657 repair broken pom 2013-11-07 23:27:06 -08:00
fjy fafd9c8d27 [maven-release-plugin] prepare for next development iteration 2013-11-07 18:20:19 -08:00
fjy f945416538 [maven-release-plugin] prepare release druid-0.6.5 2013-11-07 18:20:16 -08:00
fjy ed328aca6e [maven-release-plugin] prepare for next development iteration 2013-11-07 18:03:39 -08:00
fjy e34f1a829b [maven-release-plugin] prepare release druid-0.6.4 2013-11-07 18:03:36 -08:00
fjy 009646ed56 [maven-release-plugin] prepare for next development iteration 2013-11-07 17:40:12 -08:00
fjy 4cde47c9d1 [maven-release-plugin] prepare release druid-0.6.3 2013-11-07 17:40:06 -08:00
fjy e6a83c0339 [maven-release-plugin] prepare for next development iteration 2013-11-07 17:05:52 -08:00
fjy 2ac09b798d [maven-release-plugin] prepare release druid-0.6.2 2013-11-07 17:05:48 -08:00
fjy ee15fd9f85 [maven-release-plugin] prepare for next development iteration 2013-11-05 15:42:37 -08:00
fjy 713015199a [maven-release-plugin] prepare release druid-0.6.1 2013-11-05 15:42:33 -08:00
fjy a684885839 [maven-release-plugin] prepare for next development iteration 2013-10-18 15:48:16 -07:00
fjy 88dcfe9a94 [maven-release-plugin] prepare release druid-0.6.0 2013-10-18 15:48:12 -07:00
fjy 2dc716bf7e fix bug and make it actually possible to load extensions 2013-10-16 11:59:01 -07:00
fjy 9796a40b92 port docs over to 0.6 and a bunch of misc fixes 2013-10-11 18:38:53 -07:00
fjy a9a723bd11 clean up poms, add a new loading your own data tutorial, add new validation, clean up logs 2013-10-09 15:42:39 -07:00
cheddar c47fe202c7 Fix HadoopDruidIndexer to work with the new way of things
There are multiple and sundry changes in here.

First, "HadoopDruidIndexer" has been split into two pieces, (1) CliHadoop which pulls the hadoop version and builds up the right classpath with the proper hadoop version to run the indexer and (2) CliInternalHadoopIndexer which actually runs the indexer.

In order to work around a bunch of jets3t version conflicts between Hadoop and Druid, I needed to extract the S3 deep storage stuff into its own module.  I then also moved the HDFS stuff into its own module so that druid-server no longer depends on Hadoop.

In doing these changes, I wanted to make the extensions buildable with only the druid-api jar, so a few other things had to move out of Druid and into druid-api.  They are all API-level things, however, so they really belong in druid-api instead.

Lastly, I removed the druid-realtime module and put it all in druid-server.
2013-10-09 15:15:44 -05:00
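The split described in the commit above amounts to an outer command that assembles a Hadoop-version-specific classpath and an inner command that performs the indexing in a forked JVM. Below is a minimal, hypothetical sketch of that arrangement; the jar locations, the "hadoop-deps" layout, the spec file name, and the fully qualified class name are assumptions for illustration, not Druid's actual CLI wiring.

```java
import java.io.File;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

public class ForkingIndexerSketch
{
  public static void main(String[] args) throws IOException, InterruptedException
  {
    // Assumption: the desired Hadoop version is passed in, and per-version jars
    // live under a local "hadoop-deps" directory (illustrative layout only).
    String hadoopVersion = args.length > 0 ? args[0] : "1.0.3";
    File hadoopLib = new File("hadoop-deps", hadoopVersion);

    List<String> command = new ArrayList<String>();
    command.add(new File(System.getProperty("java.home"), "bin/java").getPath());
    command.add("-cp");
    // The outer process decides the classpath: Druid's jar plus the chosen Hadoop jars.
    command.add("druid-services.jar" + File.pathSeparator + new File(hadoopLib, "*").getPath());
    command.add("io.druid.cli.CliInternalHadoopIndexer");  // inner command named in the commit; package assumed
    command.add("hadoop-index-spec.json");                 // hypothetical indexing spec

    // Fork the child JVM with that classpath and wait for the indexing run to finish.
    Process child = new ProcessBuilder(command).inheritIO().start();
    System.exit(child.waitFor());
  }
}
```
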
fjy bc8db7daa5 1) make chat handler resource work again
2) add more default configs
3) make examples work again
2013-10-02 14:22:39 -07:00
cheddar 5712b29c8c Fix issues with bindings and handling extensions
The way the Guice bindings were set up previously, each process only had bindings
for the things it cared about.  This became problematic when adding extension modules
that bound everything they could possibly need, expecting that the processes would
only instantiate what they actually do need.  Guice tries to fail fast and verifies that all
bindings exist before it does anything, which is a problem because the extensions bind
some objects that don't necessarily have all of their dependencies bound in every process.

The fix for this is to build a single Injector with all bindings in it and let each of the
processes load only the things that they care about.  This also requires the use of
Module overrides and other such interesting things, which are now done.

In doing the fix, I also swapped out the way that the DataSegmentPusher/Puller stuff is bound, and made the Cassandra stuff fail if its settings are not provided.  That suddenly made every process require Cassandra's settings, so I migrated the Cassandra deep storage stuff into its own module.

In doing these changes, I also discovered that some properties weren't being properly converted by the ConvertProperties command (specifically, the properties related to data segment loading and pushing), so I fixed that.
2013-09-20 17:45:01 -05:00
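
For readers unfamiliar with the override mechanism mentioned in the commit above, here is a minimal sketch of the Guice pattern under assumed bindings (the DeepStorage interface and its implementations are hypothetical, not Druid's real modules): one Injector is built from all modules, Modules.override lets an extension module replace a default binding, and instances are only constructed when requested.

```java
import com.google.inject.AbstractModule;
import com.google.inject.Guice;
import com.google.inject.Injector;
import com.google.inject.util.Modules;

public class InjectorSketch
{
  // Hypothetical interface and implementations, purely for illustration.
  interface DeepStorage {}
  static class S3DeepStorage implements DeepStorage {}
  static class HdfsDeepStorage implements DeepStorage {}

  // "Core" module: binds the default implementation.
  static class DefaultsModule extends AbstractModule
  {
    @Override protected void configure()
    {
      bind(DeepStorage.class).to(S3DeepStorage.class);
    }
  }

  // Extension module: rebinds the same key with its own implementation.
  static class ExtensionModule extends AbstractModule
  {
    @Override protected void configure()
    {
      bind(DeepStorage.class).to(HdfsDeepStorage.class);
    }
  }

  public static void main(String[] args)
  {
    // One Injector holds all bindings; Modules.override lets the extension's
    // binding replace the default instead of triggering a duplicate-binding error.
    Injector injector = Guice.createInjector(
        Modules.override(new DefaultsModule()).with(new ExtensionModule())
    );

    // Nothing is constructed until it is actually requested here.
    DeepStorage storage = injector.getInstance(DeepStorage.class);
    System.out.println(storage.getClass().getSimpleName());  // prints HdfsDeepStorage
  }
}
```

Modules.override is the piece that lets an extension rebind a key a core module already bound without Guice raising a duplicate-binding error, while keeping everything inside a single Injector.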