mirror of https://github.com/apache/druid.git
Add a configurable bufferPeriod between when a segment is marked unused and deleted by KillUnusedSegments duty (#12599)
* Add new configurable buffer period to create gap between mark unused and kill of segment
* Changes after testing
* fixes and improvements
* changes after initial self review
* self review changes
* update sql statement that was lacking last_used
* shore up some code in SqlMetadataConnector after self review
* fix derby compatibility and improve testing/docs
* fix checkstyle violations
* Fixes post merge with master
* add some unit tests to improve coverage
* ignore test coverage on new UpdateTools cli tool
* another attempt to ignore UpdateTables in coverage check
* change column name to used_flag_last_updated
* fix a method signature after column name switch
* update docs spelling
* Update spelling dictionary
* Fixing up docs/spelling and integrating altering tasks table with my alteration code
* Update NULL values for used_flag_last_updated in the background
* Remove logic to allow segs with null used_flag_last_updated to be killed regardless of bufferPeriod
* remove unneeded things now that the new column is automatically updated
* Test new background row updater method
* fix broken tests
* fix create table statement
* cleanup DDL formatting
* Revert adding columns to entry table by default
* fix compilation issues after merge with master
* discovered and fixed metastore inserts that were breaking integration tests
* fixup forgotten insert by using pattern of sharing now timestamp across columns
* fix issue introduced by merge
* fixup after merge with master
* add some directions to docs in the case of segment table validation issues
parent 7e147ee905
commit 9c124f2cde
@@ -858,6 +858,7 @@ These Coordinator static configurations can be defined in the `coordinator/runtime.properties` file.

|`druid.coordinator.kill.period`|How often to send kill tasks to the indexing service. Value must be greater than `druid.coordinator.period.indexingPeriod`. Only applies if kill is turned on.|P1D (1 Day)|
|`druid.coordinator.kill.durationToRetain`|Only applies if you set `druid.coordinator.kill.on` to `true`. This value is ignored if `druid.coordinator.kill.ignoreDurationToRetain` is `true`. Valid configurations must be an ISO8601 period. Druid will not kill unused segments whose interval end date is beyond `now - durationToRetain`. `durationToRetain` can be a negative ISO8601 period, which would result in `now - durationToRetain` being in the future.<br /><br />Note that the `durationToRetain` parameter applies to the segment interval, not the time that the segment was last marked unused. For example, if `durationToRetain` is set to `P90D`, then a segment for a time chunk 90 days in the past is eligible for permanent deletion immediately after being marked unused.|`P90D`|
|`druid.coordinator.kill.ignoreDurationToRetain`|A way to override `druid.coordinator.kill.durationToRetain` and tell the coordinator to ignore the end date of unused segment intervals when deciding what to kill. If true, the coordinator considers all unused segments as eligible to be killed.|false|
|`druid.coordinator.kill.bufferPeriod`|The amount of time that a segment must be unused before it can be permanently removed from metadata and deep storage. This serves as a buffer period to prevent data loss if the data turns out to be needed after the segment was marked unused.|`P30D`|
|`druid.coordinator.kill.maxSegments`|The number of unused segments to kill per kill task. This number must be greater than 0. This only applies when `druid.coordinator.kill.on=true`.|100|
|`druid.coordinator.balancer.strategy`|Specify the type of balancing strategy for the coordinator to use to distribute segments among the historicals. `cachingCost` is logically equivalent to `cost` but is more CPU-efficient on large clusters. `diskNormalized` weights the costs according to the servers' disk usage ratios; this strategy has known issues and can distribute segments unevenly across the cluster. `random` distributes segments among services randomly.|`cost`|
|`druid.coordinator.balancer.cachingCost.awaitInitialization`|Whether to wait for segment view initialization before creating the `cachingCost` balancing strategy. This property applies only when `druid.coordinator.balancer.strategy` is `cachingCost`. If set to `true`, the Coordinator will not start assigning segments until the segment view is initialized. If set to `false`, the Coordinator falls back to the `cost` balancing strategy while the segment view is not yet initialized. Note that initialization may take a long time, since building the `cachingCost` balancing strategy is computationally expensive.|false|
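As a quick orientation, the following sketch shows how these kill-related settings might sit together in `coordinator/runtime.properties`. The property names all appear in the table above; the values are purely illustrative, not recommendations.

```properties
# Illustrative values only; tune retention to your own requirements.
druid.coordinator.kill.on=true
druid.coordinator.kill.period=P1D
druid.coordinator.kill.durationToRetain=P90D
# New in this change: a segment must have been unused for at least this long before it can be killed.
druid.coordinator.kill.bufferPeriod=P30D
druid.coordinator.kill.maxSegments=100
```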
@@ -103,7 +103,9 @@ system. The table has two main functional columns, the other columns are for indexing purposes.

Value 1 in the `used` column means that the segment should be "used" by the cluster (i.e., it should be loaded and
available for requests). Value 0 means that the segment should not be loaded into the cluster. We do this as a means of
unloading segments from the cluster without actually removing their metadata (which allows for simpler rolling back if
that is ever an issue).
that is ever an issue). The `used` column has a corresponding `used_flag_last_updated` column that indicates when
the `used` status of the segment was last updated. This information can be used by the coordinator to determine if
a segment is a candidate for deletion (if automated segment killing is enabled).

The `payload` column stores a JSON blob that has all of the metadata for the segment.
Some of the data in the `payload` column intentionally duplicates data from other columns in the segments table.
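To make the relationship between `used` and `used_flag_last_updated` concrete, here is a rough sketch of the kind of check the kill duty conceptually performs. It is not the coordinator's actual SQL; it assumes the default `druid_segments` table name and a 30-day `bufferPeriod`, with the literal timestamp standing in for "now minus the buffer period".

```sql
-- Illustrative only: unused segments whose used flag was last changed
-- before the cutoff are candidates for a kill task.
SELECT id, dataSource, used_flag_last_updated
FROM druid_segments
WHERE used = 0
  AND used_flag_last_updated < '2023-06-01T00:00:00.000Z';
```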
@@ -0,0 +1,71 @@
---
id: upgrade-prep
title: "Upgrade Prep"
---

<!--
~ Licensed to the Apache Software Foundation (ASF) under one
~ or more contributor license agreements. See the NOTICE file
~ distributed with this work for additional information
~ regarding copyright ownership. The ASF licenses this file
~ to you under the Apache License, Version 2.0 (the
~ "License"); you may not use this file except in compliance
~ with the License. You may obtain a copy of the License at
~
~ http://www.apache.org/licenses/LICENSE-2.0
~
~ Unless required by applicable law or agreed to in writing,
~ software distributed under the License is distributed on an
~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
~ KIND, either express or implied. See the License for the
~ specific language governing permissions and limitations
~ under the License.
-->

## Upgrade to `0.24+` from `0.23` and earlier

### Altering segments table

**If you have set `druid.metadata.storage.connector.createTables` to `true` (which is the default), and your metadata connector user has DDL privileges, you can disregard this section.**

**The coordinator and overlord services will fail if you do not execute this change prior to the upgrade.**

A new column, `used_flag_last_updated`, is needed in the segments table to support new
segment killing functionality. You can manually alter the table, or you can use
a CLI tool to perform the update.

#### CLI tool

Druid provides a `metadata-update` tool for updating Druid's metadata tables.

In the example commands below:

- `lib` is the Druid lib directory
- `extensions` is the Druid extensions directory
- `base` corresponds to the value of `druid.metadata.storage.tables.base` in the configuration, `druid` by default.
- The `--connectURI` parameter corresponds to the value of `druid.metadata.storage.connector.connectURI`.
- The `--user` parameter corresponds to the value of `druid.metadata.storage.connector.user`.
- The `--password` parameter corresponds to the value of `druid.metadata.storage.connector.password`.
- The `--action` parameter corresponds to the update action you are executing. In this case it is `add-used-flag-last-updated-to-segments`, as used in the commands below.

##### MySQL

```bash
cd ${DRUID_ROOT}
java -classpath "lib/*" -Dlog4j.configurationFile=conf/druid/cluster/_common/log4j2.xml -Ddruid.extensions.directory="extensions" -Ddruid.extensions.loadList=[\"mysql-metadata-storage\"] -Ddruid.metadata.storage.type=mysql org.apache.druid.cli.Main tools metadata-update --connectURI="<mysql-uri>" --user <user> --password <pass> --base druid --action add-used-flag-last-updated-to-segments
```

##### PostgreSQL

```bash
cd ${DRUID_ROOT}
java -classpath "lib/*" -Dlog4j.configurationFile=conf/druid/cluster/_common/log4j2.xml -Ddruid.extensions.directory="extensions" -Ddruid.extensions.loadList=[\"postgresql-metadata-storage\"] -Ddruid.metadata.storage.type=postgresql org.apache.druid.cli.Main tools metadata-update --connectURI="<postgresql-uri>" --user <user> --password <pass> --base druid --action add-used-flag-last-updated-to-segments
```

#### Manual ALTER TABLE

```sql
ALTER TABLE druid_segments
ADD used_flag_last_updated varchar(255);
```
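If you run the ALTER manually, a quick spot check such as the following sketch (assuming the default `druid_segments` table name) can confirm the column is present before restarting the coordinator and overlord. Per the commit notes, existing rows may show NULL here at first; Druid updates NULL `used_flag_last_updated` values in the background.

```sql
-- Sanity check only: confirm the new column exists and is readable.
SELECT id, used, used_flag_last_updated
FROM druid_segments
LIMIT 5;
```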

@@ -13,8 +13,8 @@
-- See the License for the specific language governing permissions and
-- limitations under the License.

INSERT INTO druid_segments (id,dataSource,created_date,start,end,partitioned,version,used,payload) VALUES ('twitterstream_2013-01-01T00:00:00.000Z_2013-01-02T00:00:00.000Z_2013-01-02T04:13:41.980Z_v9','twitterstream','2013-05-13T01:08:18.192Z','2013-01-01T00:00:00.000Z','2013-01-02T00:00:00.000Z',0,'2013-01-02T04:13:41.980Z_v9',1,'{\"dataSource\":\"twitterstream\",\"interval\":\"2013-01-01T00:00:00.000Z/2013-01-02T00:00:00.000Z\",\"version\":\"2013-01-02T04:13:41.980Z_v9\",\"loadSpec\":{\"type\":\"s3_zip\",\"bucket\":\"static.druid.io\",\"key\":\"data/segments/twitterstream/2013-01-01T00:00:00.000Z_2013-01-02T00:00:00.000Z/2013-01-02T04:13:41.980Z_v9/0/index.zip\"},\"dimensions\":\"has_links,first_hashtag,user_time_zone,user_location,has_mention,user_lang,rt_name,user_name,is_retweet,is_viral,has_geo,url_domain,user_mention_name,reply_to_name\",\"metrics\":\"count,tweet_length,num_followers,num_links,num_mentions,num_hashtags,num_favorites,user_total_tweets\",\"shardSpec\":{\"type\":\"none\"},\"binaryVersion\":9,\"size\":445235220,\"identifier\":\"twitterstream_2013-01-01T00:00:00.000Z_2013-01-02T00:00:00.000Z_2013-01-02T04:13:41.980Z_v9\"}');
INSERT INTO druid_segments (id,dataSource,created_date,start,end,partitioned,version,used,payload) VALUES ('twitterstream_2013-01-02T00:00:00.000Z_2013-01-03T00:00:00.000Z_2013-01-03T03:44:58.791Z_v9','twitterstream','2013-05-13T00:03:28.640Z','2013-01-02T00:00:00.000Z','2013-01-03T00:00:00.000Z',0,'2013-01-03T03:44:58.791Z_v9',1,'{\"dataSource\":\"twitterstream\",\"interval\":\"2013-01-02T00:00:00.000Z/2013-01-03T00:00:00.000Z\",\"version\":\"2013-01-03T03:44:58.791Z_v9\",\"loadSpec\":{\"type\":\"s3_zip\",\"bucket\":\"static.druid.io\",\"key\":\"data/segments/twitterstream/2013-01-02T00:00:00.000Z_2013-01-03T00:00:00.000Z/2013-01-03T03:44:58.791Z_v9/0/index.zip\"},\"dimensions\":\"has_links,first_hashtag,user_time_zone,user_location,has_mention,user_lang,rt_name,user_name,is_retweet,is_viral,has_geo,url_domain,user_mention_name,reply_to_name\",\"metrics\":\"count,tweet_length,num_followers,num_links,num_mentions,num_hashtags,num_favorites,user_total_tweets\",\"shardSpec\":{\"type\":\"none\"},\"binaryVersion\":9,\"size\":435325540,\"identifier\":\"twitterstream_2013-01-02T00:00:00.000Z_2013-01-03T00:00:00.000Z_2013-01-03T03:44:58.791Z_v9\"}');
INSERT INTO druid_segments (id,dataSource,created_date,start,end,partitioned,version,used,payload) VALUES ('twitterstream_2013-01-03T00:00:00.000Z_2013-01-04T00:00:00.000Z_2013-01-04T04:09:13.590Z_v9','twitterstream','2013-05-13T00:03:48.807Z','2013-01-03T00:00:00.000Z','2013-01-04T00:00:00.000Z',0,'2013-01-04T04:09:13.590Z_v9',1,'{\"dataSource\":\"twitterstream\",\"interval\":\"2013-01-03T00:00:00.000Z/2013-01-04T00:00:00.000Z\",\"version\":\"2013-01-04T04:09:13.590Z_v9\",\"loadSpec\":{\"type\":\"s3_zip\",\"bucket\":\"static.druid.io\",\"key\":\"data/segments/twitterstream/2013-01-03T00:00:00.000Z_2013-01-04T00:00:00.000Z/2013-01-04T04:09:13.590Z_v9/0/index.zip\"},\"dimensions\":\"has_links,first_hashtag,user_time_zone,user_location,has_mention,user_lang,rt_name,user_name,is_retweet,is_viral,has_geo,url_domain,user_mention_name,reply_to_name\",\"metrics\":\"count,tweet_length,num_followers,num_links,num_mentions,num_hashtags,num_favorites,user_total_tweets\",\"shardSpec\":{\"type\":\"none\"},\"binaryVersion\":9,\"size\":411651320,\"identifier\":\"twitterstream_2013-01-03T00:00:00.000Z_2013-01-04T00:00:00.000Z_2013-01-04T04:09:13.590Z_v9\"}');
INSERT INTO druid_segments (id,dataSource,created_date,start,end,partitioned,version,used,payload) VALUES ('wikipedia_editstream_2012-12-29T00:00:00.000Z_2013-01-10T08:00:00.000Z_2013-01-10T08:13:47.830Z_v9','wikipedia_editstream','2013-03-15T20:49:52.348Z','2012-12-29T00:00:00.000Z','2013-01-10T08:00:00.000Z',0,'2013-01-10T08:13:47.830Z_v9',1,'{\"dataSource\":\"wikipedia_editstream\",\"interval\":\"2012-12-29T00:00:00.000Z/2013-01-10T08:00:00.000Z\",\"version\":\"2013-01-10T08:13:47.830Z_v9\",\"loadSpec\":{\"type\":\"s3_zip\",\"bucket\":\"static.druid.io\",\"key\":\"data/segments/wikipedia_editstream/2012-12-29T00:00:00.000Z_2013-01-10T08:00:00.000Z/2013-01-10T08:13:47.830Z_v9/0/index.zip\"},\"dimensions\":\"anonymous,area_code,city,continent_code,country_name,dma_code,geo,language,namespace,network,newpage,page,postal_code,region_lookup,robot,unpatrolled,user\",\"metrics\":\"added,count,deleted,delta,delta_hist,unique_users,variation\",\"shardSpec\":{\"type\":\"none\"},\"binaryVersion\":9,\"size\":446027801,\"identifier\":\"wikipedia_editstream_2012-12-29T00:00:00.000Z_2013-01-10T08:00:00.000Z_2013-01-10T08:13:47.830Z_v9\"}');
INSERT INTO druid_segments (id, dataSource, created_date, start, end, partitioned, version, used, payload) VALUES ('wikipedia_2013-08-01T00:00:00.000Z_2013-08-02T00:00:00.000Z_2013-08-08T21:22:48.989Z', 'wikipedia', '2013-08-08T21:26:23.799Z', '2013-08-01T00:00:00.000Z', '2013-08-02T00:00:00.000Z', '0', '2013-08-08T21:22:48.989Z', '1', '{\"dataSource\":\"wikipedia\",\"interval\":\"2013-08-01T00:00:00.000Z/2013-08-02T00:00:00.000Z\",\"version\":\"2013-08-08T21:22:48.989Z\",\"loadSpec\":{\"type\":\"s3_zip\",\"bucket\":\"static.druid.io\",\"key\":\"data/segments/wikipedia/20130801T000000.000Z_20130802T000000.000Z/2013-08-08T21_22_48.989Z/0/index.zip\"},\"dimensions\":\"dma_code,continent_code,geo,area_code,robot,country_name,network,city,namespace,anonymous,unpatrolled,page,postal_code,language,newpage,user,region_lookup\",\"metrics\":\"count,delta,variation,added,deleted\",\"shardSpec\":{\"type\":\"none\"},\"binaryVersion\":9,\"size\":24664730,\"identifier\":\"wikipedia_2013-08-01T00:00:00.000Z_2013-08-02T00:00:00.000Z_2013-08-08T21:22:48.989Z\"}');
INSERT INTO druid_segments (id,dataSource,created_date,start,end,partitioned,version,used,payload,used_flag_last_updated) VALUES ('twitterstream_2013-01-01T00:00:00.000Z_2013-01-02T00:00:00.000Z_2013-01-02T04:13:41.980Z_v9','twitterstream','2013-05-13T01:08:18.192Z','2013-01-01T00:00:00.000Z','2013-01-02T00:00:00.000Z',0,'2013-01-02T04:13:41.980Z_v9',1,'{\"dataSource\":\"twitterstream\",\"interval\":\"2013-01-01T00:00:00.000Z/2013-01-02T00:00:00.000Z\",\"version\":\"2013-01-02T04:13:41.980Z_v9\",\"loadSpec\":{\"type\":\"s3_zip\",\"bucket\":\"static.druid.io\",\"key\":\"data/segments/twitterstream/2013-01-01T00:00:00.000Z_2013-01-02T00:00:00.000Z/2013-01-02T04:13:41.980Z_v9/0/index.zip\"},\"dimensions\":\"has_links,first_hashtag,user_time_zone,user_location,has_mention,user_lang,rt_name,user_name,is_retweet,is_viral,has_geo,url_domain,user_mention_name,reply_to_name\",\"metrics\":\"count,tweet_length,num_followers,num_links,num_mentions,num_hashtags,num_favorites,user_total_tweets\",\"shardSpec\":{\"type\":\"none\"},\"binaryVersion\":9,\"size\":445235220,\"identifier\":\"twitterstream_2013-01-01T00:00:00.000Z_2013-01-02T00:00:00.000Z_2013-01-02T04:13:41.980Z_v9\"}','1970-01-01T00:00:00.000Z');
INSERT INTO druid_segments (id,dataSource,created_date,start,end,partitioned,version,used,payload,used_flag_last_updated) VALUES ('twitterstream_2013-01-02T00:00:00.000Z_2013-01-03T00:00:00.000Z_2013-01-03T03:44:58.791Z_v9','twitterstream','2013-05-13T00:03:28.640Z','2013-01-02T00:00:00.000Z','2013-01-03T00:00:00.000Z',0,'2013-01-03T03:44:58.791Z_v9',1,'{\"dataSource\":\"twitterstream\",\"interval\":\"2013-01-02T00:00:00.000Z/2013-01-03T00:00:00.000Z\",\"version\":\"2013-01-03T03:44:58.791Z_v9\",\"loadSpec\":{\"type\":\"s3_zip\",\"bucket\":\"static.druid.io\",\"key\":\"data/segments/twitterstream/2013-01-02T00:00:00.000Z_2013-01-03T00:00:00.000Z/2013-01-03T03:44:58.791Z_v9/0/index.zip\"},\"dimensions\":\"has_links,first_hashtag,user_time_zone,user_location,has_mention,user_lang,rt_name,user_name,is_retweet,is_viral,has_geo,url_domain,user_mention_name,reply_to_name\",\"metrics\":\"count,tweet_length,num_followers,num_links,num_mentions,num_hashtags,num_favorites,user_total_tweets\",\"shardSpec\":{\"type\":\"none\"},\"binaryVersion\":9,\"size\":435325540,\"identifier\":\"twitterstream_2013-01-02T00:00:00.000Z_2013-01-03T00:00:00.000Z_2013-01-03T03:44:58.791Z_v9\"}','1970-01-01T00:00:00.000Z');
INSERT INTO druid_segments (id,dataSource,created_date,start,end,partitioned,version,used,payload,used_flag_last_updated) VALUES ('twitterstream_2013-01-03T00:00:00.000Z_2013-01-04T00:00:00.000Z_2013-01-04T04:09:13.590Z_v9','twitterstream','2013-05-13T00:03:48.807Z','2013-01-03T00:00:00.000Z','2013-01-04T00:00:00.000Z',0,'2013-01-04T04:09:13.590Z_v9',1,'{\"dataSource\":\"twitterstream\",\"interval\":\"2013-01-03T00:00:00.000Z/2013-01-04T00:00:00.000Z\",\"version\":\"2013-01-04T04:09:13.590Z_v9\",\"loadSpec\":{\"type\":\"s3_zip\",\"bucket\":\"static.druid.io\",\"key\":\"data/segments/twitterstream/2013-01-03T00:00:00.000Z_2013-01-04T00:00:00.000Z/2013-01-04T04:09:13.590Z_v9/0/index.zip\"},\"dimensions\":\"has_links,first_hashtag,user_time_zone,user_location,has_mention,user_lang,rt_name,user_name,is_retweet,is_viral,has_geo,url_domain,user_mention_name,reply_to_name\",\"metrics\":\"count,tweet_length,num_followers,num_links,num_mentions,num_hashtags,num_favorites,user_total_tweets\",\"shardSpec\":{\"type\":\"none\"},\"binaryVersion\":9,\"size\":411651320,\"identifier\":\"twitterstream_2013-01-03T00:00:00.000Z_2013-01-04T00:00:00.000Z_2013-01-04T04:09:13.590Z_v9\"}','1970-01-01T00:00:00.000Z');
INSERT INTO druid_segments (id,dataSource,created_date,start,end,partitioned,version,used,payload,used_flag_last_updated) VALUES ('wikipedia_editstream_2012-12-29T00:00:00.000Z_2013-01-10T08:00:00.000Z_2013-01-10T08:13:47.830Z_v9','wikipedia_editstream','2013-03-15T20:49:52.348Z','2012-12-29T00:00:00.000Z','2013-01-10T08:00:00.000Z',0,'2013-01-10T08:13:47.830Z_v9',1,'{\"dataSource\":\"wikipedia_editstream\",\"interval\":\"2012-12-29T00:00:00.000Z/2013-01-10T08:00:00.000Z\",\"version\":\"2013-01-10T08:13:47.830Z_v9\",\"loadSpec\":{\"type\":\"s3_zip\",\"bucket\":\"static.druid.io\",\"key\":\"data/segments/wikipedia_editstream/2012-12-29T00:00:00.000Z_2013-01-10T08:00:00.000Z/2013-01-10T08:13:47.830Z_v9/0/index.zip\"},\"dimensions\":\"anonymous,area_code,city,continent_code,country_name,dma_code,geo,language,namespace,network,newpage,page,postal_code,region_lookup,robot,unpatrolled,user\",\"metrics\":\"added,count,deleted,delta,delta_hist,unique_users,variation\",\"shardSpec\":{\"type\":\"none\"},\"binaryVersion\":9,\"size\":446027801,\"identifier\":\"wikipedia_editstream_2012-12-29T00:00:00.000Z_2013-01-10T08:00:00.000Z_2013-01-10T08:13:47.830Z_v9\"}','1970-01-01T00:00:00.000Z');
INSERT INTO druid_segments (id, dataSource, created_date, start, end, partitioned, version, used, payload,used_flag_last_updated) VALUES ('wikipedia_2013-08-01T00:00:00.000Z_2013-08-02T00:00:00.000Z_2013-08-08T21:22:48.989Z', 'wikipedia', '2013-08-08T21:26:23.799Z', '2013-08-01T00:00:00.000Z', '2013-08-02T00:00:00.000Z', '0', '2013-08-08T21:22:48.989Z', '1', '{\"dataSource\":\"wikipedia\",\"interval\":\"2013-08-01T00:00:00.000Z/2013-08-02T00:00:00.000Z\",\"version\":\"2013-08-08T21:22:48.989Z\",\"loadSpec\":{\"type\":\"s3_zip\",\"bucket\":\"static.druid.io\",\"key\":\"data/segments/wikipedia/20130801T000000.000Z_20130802T000000.000Z/2013-08-08T21_22_48.989Z/0/index.zip\"},\"dimensions\":\"dma_code,continent_code,geo,area_code,robot,country_name,network,city,namespace,anonymous,unpatrolled,page,postal_code,language,newpage,user,region_lookup\",\"metrics\":\"count,delta,variation,added,deleted\",\"shardSpec\":{\"type\":\"none\"},\"binaryVersion\":9,\"size\":24664730,\"identifier\":\"wikipedia_2013-08-01T00:00:00.000Z_2013-08-02T00:00:00.000Z_2013-08-08T21:22:48.989Z\"}','1970-01-01T00:00:00.000Z');

@@ -14,4 +14,4 @@
-- limitations under the License.

INSERT INTO druid_tasks (id, created_date, datasource, payload, status_payload, active) VALUES ('index_auth_test_2030-04-30T01:13:31.893Z', '2030-04-30T01:13:31.893Z', 'auth_test', '{\"id\":\"index_auth_test_2030-04-30T01:13:31.893Z\",\"created_date\":\"2030-04-30T01:13:31.893Z\",\"datasource\":\"auth_test\",\"active\":0}', '{\"id\":\"index_auth_test_2030-04-30T01:13:31.893Z\",\"status\":\"SUCCESS\",\"duration\":1}', 0);
INSERT INTO druid_segments (id,dataSource,created_date,start,end,partitioned,version,used,payload) VALUES ('auth_test_2012-12-29T00:00:00.000Z_2013-01-10T08:00:00.000Z_2013-01-10T08:13:47.830Z_v9','auth_test','2013-03-15T20:49:52.348Z','2012-12-29T00:00:00.000Z','2013-01-10T08:00:00.000Z',0,'2013-01-10T08:13:47.830Z_v9',1,'{\"dataSource\":\"auth_test\",\"interval\":\"2012-12-29T00:00:00.000Z/2013-01-10T08:00:00.000Z\",\"version\":\"2013-01-10T08:13:47.830Z_v9\",\"loadSpec\":{\"type\":\"s3_zip\",\"bucket\":\"static.druid.io\",\"key\":\"data/segments/wikipedia_editstream/2012-12-29T00:00:00.000Z_2013-01-10T08:00:00.000Z/2013-01-10T08:13:47.830Z_v9/0/index.zip\"},\"dimensions\":\"anonymous,area_code,city,continent_code,country_name,dma_code,geo,language,namespace,network,newpage,page,postal_code,region_lookup,robot,unpatrolled,user\",\"metrics\":\"added,count,deleted,delta,delta_hist,unique_users,variation\",\"shardSpec\":{\"type\":\"none\"},\"binaryVersion\":9,\"size\":446027801,\"identifier\":\"auth_test_2012-12-29T00:00:00.000Z_2013-01-10T08:00:00.000Z_2013-01-10T08:13:47.830Z_v9\"}');
INSERT INTO druid_segments (id,dataSource,created_date,start,end,partitioned,version,used,payload,used_flag_last_updated) VALUES ('auth_test_2012-12-29T00:00:00.000Z_2013-01-10T08:00:00.000Z_2013-01-10T08:13:47.830Z_v9','auth_test','2013-03-15T20:49:52.348Z','2012-12-29T00:00:00.000Z','2013-01-10T08:00:00.000Z',0,'2013-01-10T08:13:47.830Z_v9',1,'{\"dataSource\":\"auth_test\",\"interval\":\"2012-12-29T00:00:00.000Z/2013-01-10T08:00:00.000Z\",\"version\":\"2013-01-10T08:13:47.830Z_v9\",\"loadSpec\":{\"type\":\"s3_zip\",\"bucket\":\"static.druid.io\",\"key\":\"data/segments/wikipedia_editstream/2012-12-29T00:00:00.000Z_2013-01-10T08:00:00.000Z/2013-01-10T08:13:47.830Z_v9/0/index.zip\"},\"dimensions\":\"anonymous,area_code,city,continent_code,country_name,dma_code,geo,language,namespace,network,newpage,page,postal_code,region_lookup,robot,unpatrolled,user\",\"metrics\":\"added,count,deleted,delta,delta_hist,unique_users,variation\",\"shardSpec\":{\"type\":\"none\"},\"binaryVersion\":9,\"size\":446027801,\"identifier\":\"auth_test_2012-12-29T00:00:00.000Z_2013-01-10T08:00:00.000Z_2013-01-10T08:13:47.830Z_v9\"}','1970-01-01T00:00:00.000Z');

@@ -13,8 +13,8 @@
-- See the License for the specific language governing permissions and
-- limitations under the License.

INSERT INTO druid_segments (id,dataSource,created_date,start,end,partitioned,version,used,payload) VALUES ('twitterstream_2013-01-01T00:00:00.000Z_2013-01-02T00:00:00.000Z_2013-01-02T04:13:41.980Z_v9','twitterstream','2013-05-13T01:08:18.192Z','2013-01-01T00:00:00.000Z','2013-01-02T00:00:00.000Z',0,'2013-01-02T04:13:41.980Z_v9',1,'{\"dataSource\":\"twitterstream\",\"interval\":\"2013-01-01T00:00:00.000Z/2013-01-02T00:00:00.000Z\",\"version\":\"2013-01-02T04:13:41.980Z_v9\",\"loadSpec\":{\"type\":\"s3_zip\",\"bucket\":\"static.druid.io\",\"key\":\"data/segments/twitterstream/2013-01-01T00:00:00.000Z_2013-01-02T00:00:00.000Z/2013-01-02T04:13:41.980Z_v9/0/index.zip\"},\"dimensions\":\"has_links,first_hashtag,user_time_zone,user_location,has_mention,user_lang,rt_name,user_name,is_retweet,is_viral,has_geo,url_domain,user_mention_name,reply_to_name\",\"metrics\":\"count,tweet_length,num_followers,num_links,num_mentions,num_hashtags,num_favorites,user_total_tweets\",\"shardSpec\":{\"type\":\"none\"},\"binaryVersion\":9,\"size\":445235220,\"identifier\":\"twitterstream_2013-01-01T00:00:00.000Z_2013-01-02T00:00:00.000Z_2013-01-02T04:13:41.980Z_v9\"}');
INSERT INTO druid_segments (id,dataSource,created_date,start,end,partitioned,version,used,payload) VALUES ('twitterstream_2013-01-02T00:00:00.000Z_2013-01-03T00:00:00.000Z_2013-01-03T03:44:58.791Z_v9','twitterstream','2013-05-13T00:03:28.640Z','2013-01-02T00:00:00.000Z','2013-01-03T00:00:00.000Z',0,'2013-01-03T03:44:58.791Z_v9',1,'{\"dataSource\":\"twitterstream\",\"interval\":\"2013-01-02T00:00:00.000Z/2013-01-03T00:00:00.000Z\",\"version\":\"2013-01-03T03:44:58.791Z_v9\",\"loadSpec\":{\"type\":\"s3_zip\",\"bucket\":\"static.druid.io\",\"key\":\"data/segments/twitterstream/2013-01-02T00:00:00.000Z_2013-01-03T00:00:00.000Z/2013-01-03T03:44:58.791Z_v9/0/index.zip\"},\"dimensions\":\"has_links,first_hashtag,user_time_zone,user_location,has_mention,user_lang,rt_name,user_name,is_retweet,is_viral,has_geo,url_domain,user_mention_name,reply_to_name\",\"metrics\":\"count,tweet_length,num_followers,num_links,num_mentions,num_hashtags,num_favorites,user_total_tweets\",\"shardSpec\":{\"type\":\"none\"},\"binaryVersion\":9,\"size\":435325540,\"identifier\":\"twitterstream_2013-01-02T00:00:00.000Z_2013-01-03T00:00:00.000Z_2013-01-03T03:44:58.791Z_v9\"}');
INSERT INTO druid_segments (id,dataSource,created_date,start,end,partitioned,version,used,payload) VALUES ('twitterstream_2013-01-03T00:00:00.000Z_2013-01-04T00:00:00.000Z_2013-01-04T04:09:13.590Z_v9','twitterstream','2013-05-13T00:03:48.807Z','2013-01-03T00:00:00.000Z','2013-01-04T00:00:00.000Z',0,'2013-01-04T04:09:13.590Z_v9',1,'{\"dataSource\":\"twitterstream\",\"interval\":\"2013-01-03T00:00:00.000Z/2013-01-04T00:00:00.000Z\",\"version\":\"2013-01-04T04:09:13.590Z_v9\",\"loadSpec\":{\"type\":\"s3_zip\",\"bucket\":\"static.druid.io\",\"key\":\"data/segments/twitterstream/2013-01-03T00:00:00.000Z_2013-01-04T00:00:00.000Z/2013-01-04T04:09:13.590Z_v9/0/index.zip\"},\"dimensions\":\"has_links,first_hashtag,user_time_zone,user_location,has_mention,user_lang,rt_name,user_name,is_retweet,is_viral,has_geo,url_domain,user_mention_name,reply_to_name\",\"metrics\":\"count,tweet_length,num_followers,num_links,num_mentions,num_hashtags,num_favorites,user_total_tweets\",\"shardSpec\":{\"type\":\"none\"},\"binaryVersion\":9,\"size\":411651320,\"identifier\":\"twitterstream_2013-01-03T00:00:00.000Z_2013-01-04T00:00:00.000Z_2013-01-04T04:09:13.590Z_v9\"}');
INSERT INTO druid_segments (id,dataSource,created_date,start,end,partitioned,version,used,payload) VALUES ('wikipedia_editstream_2012-12-29T00:00:00.000Z_2013-01-10T08:00:00.000Z_2013-01-10T08:13:47.830Z_v9','wikipedia_editstream','2013-03-15T20:49:52.348Z','2012-12-29T00:00:00.000Z','2013-01-10T08:00:00.000Z',0,'2013-01-10T08:13:47.830Z_v9',1,'{\"dataSource\":\"wikipedia_editstream\",\"interval\":\"2012-12-29T00:00:00.000Z/2013-01-10T08:00:00.000Z\",\"version\":\"2013-01-10T08:13:47.830Z_v9\",\"loadSpec\":{\"type\":\"s3_zip\",\"bucket\":\"static.druid.io\",\"key\":\"data/segments/wikipedia_editstream/2012-12-29T00:00:00.000Z_2013-01-10T08:00:00.000Z/2013-01-10T08:13:47.830Z_v9/0/index.zip\"},\"dimensions\":\"anonymous,area_code,city,continent_code,country_name,dma_code,geo,language,namespace,network,newpage,page,postal_code,region_lookup,robot,unpatrolled,user\",\"metrics\":\"added,count,deleted,delta,delta_hist,unique_users,variation\",\"shardSpec\":{\"type\":\"none\"},\"binaryVersion\":9,\"size\":446027801,\"identifier\":\"wikipedia_editstream_2012-12-29T00:00:00.000Z_2013-01-10T08:00:00.000Z_2013-01-10T08:13:47.830Z_v9\"}');
INSERT INTO druid_segments (id, dataSource, created_date, start, end, partitioned, version, used, payload) VALUES ('wikipedia_2013-08-01T00:00:00.000Z_2013-08-02T00:00:00.000Z_2013-08-08T21:22:48.989Z', 'wikipedia', '2013-08-08T21:26:23.799Z', '2013-08-01T00:00:00.000Z', '2013-08-02T00:00:00.000Z', '0', '2013-08-08T21:22:48.989Z', '1', '{\"dataSource\":\"wikipedia\",\"interval\":\"2013-08-01T00:00:00.000Z/2013-08-02T00:00:00.000Z\",\"version\":\"2013-08-08T21:22:48.989Z\",\"loadSpec\":{\"type\":\"s3_zip\",\"bucket\":\"static.druid.io\",\"key\":\"data/segments/wikipedia/20130801T000000.000Z_20130802T000000.000Z/2013-08-08T21_22_48.989Z/0/index.zip\"},\"dimensions\":\"dma_code,continent_code,geo,area_code,robot,country_name,network,city,namespace,anonymous,unpatrolled,page,postal_code,language,newpage,user,region_lookup\",\"metrics\":\"count,delta,variation,added,deleted\",\"shardSpec\":{\"type\":\"none\"},\"binaryVersion\":9,\"size\":24664730,\"identifier\":\"wikipedia_2013-08-01T00:00:00.000Z_2013-08-02T00:00:00.000Z_2013-08-08T21:22:48.989Z\"}');
INSERT INTO druid_segments (id,dataSource,created_date,start,end,partitioned,version,used,payload,used_flag_last_updated) VALUES ('twitterstream_2013-01-01T00:00:00.000Z_2013-01-02T00:00:00.000Z_2013-01-02T04:13:41.980Z_v9','twitterstream','2013-05-13T01:08:18.192Z','2013-01-01T00:00:00.000Z','2013-01-02T00:00:00.000Z',0,'2013-01-02T04:13:41.980Z_v9',1,'{\"dataSource\":\"twitterstream\",\"interval\":\"2013-01-01T00:00:00.000Z/2013-01-02T00:00:00.000Z\",\"version\":\"2013-01-02T04:13:41.980Z_v9\",\"loadSpec\":{\"type\":\"s3_zip\",\"bucket\":\"static.druid.io\",\"key\":\"data/segments/twitterstream/2013-01-01T00:00:00.000Z_2013-01-02T00:00:00.000Z/2013-01-02T04:13:41.980Z_v9/0/index.zip\"},\"dimensions\":\"has_links,first_hashtag,user_time_zone,user_location,has_mention,user_lang,rt_name,user_name,is_retweet,is_viral,has_geo,url_domain,user_mention_name,reply_to_name\",\"metrics\":\"count,tweet_length,num_followers,num_links,num_mentions,num_hashtags,num_favorites,user_total_tweets\",\"shardSpec\":{\"type\":\"none\"},\"binaryVersion\":9,\"size\":445235220,\"identifier\":\"twitterstream_2013-01-01T00:00:00.000Z_2013-01-02T00:00:00.000Z_2013-01-02T04:13:41.980Z_v9\"}','1970-01-01T00:00:00.000Z');
INSERT INTO druid_segments (id,dataSource,created_date,start,end,partitioned,version,used,payload,used_flag_last_updated) VALUES ('twitterstream_2013-01-02T00:00:00.000Z_2013-01-03T00:00:00.000Z_2013-01-03T03:44:58.791Z_v9','twitterstream','2013-05-13T00:03:28.640Z','2013-01-02T00:00:00.000Z','2013-01-03T00:00:00.000Z',0,'2013-01-03T03:44:58.791Z_v9',1,'{\"dataSource\":\"twitterstream\",\"interval\":\"2013-01-02T00:00:00.000Z/2013-01-03T00:00:00.000Z\",\"version\":\"2013-01-03T03:44:58.791Z_v9\",\"loadSpec\":{\"type\":\"s3_zip\",\"bucket\":\"static.druid.io\",\"key\":\"data/segments/twitterstream/2013-01-02T00:00:00.000Z_2013-01-03T00:00:00.000Z/2013-01-03T03:44:58.791Z_v9/0/index.zip\"},\"dimensions\":\"has_links,first_hashtag,user_time_zone,user_location,has_mention,user_lang,rt_name,user_name,is_retweet,is_viral,has_geo,url_domain,user_mention_name,reply_to_name\",\"metrics\":\"count,tweet_length,num_followers,num_links,num_mentions,num_hashtags,num_favorites,user_total_tweets\",\"shardSpec\":{\"type\":\"none\"},\"binaryVersion\":9,\"size\":435325540,\"identifier\":\"twitterstream_2013-01-02T00:00:00.000Z_2013-01-03T00:00:00.000Z_2013-01-03T03:44:58.791Z_v9\"}','1970-01-01T00:00:00.000Z');
INSERT INTO druid_segments (id,dataSource,created_date,start,end,partitioned,version,used,payload,used_flag_last_updated) VALUES ('twitterstream_2013-01-03T00:00:00.000Z_2013-01-04T00:00:00.000Z_2013-01-04T04:09:13.590Z_v9','twitterstream','2013-05-13T00:03:48.807Z','2013-01-03T00:00:00.000Z','2013-01-04T00:00:00.000Z',0,'2013-01-04T04:09:13.590Z_v9',1,'{\"dataSource\":\"twitterstream\",\"interval\":\"2013-01-03T00:00:00.000Z/2013-01-04T00:00:00.000Z\",\"version\":\"2013-01-04T04:09:13.590Z_v9\",\"loadSpec\":{\"type\":\"s3_zip\",\"bucket\":\"static.druid.io\",\"key\":\"data/segments/twitterstream/2013-01-03T00:00:00.000Z_2013-01-04T00:00:00.000Z/2013-01-04T04:09:13.590Z_v9/0/index.zip\"},\"dimensions\":\"has_links,first_hashtag,user_time_zone,user_location,has_mention,user_lang,rt_name,user_name,is_retweet,is_viral,has_geo,url_domain,user_mention_name,reply_to_name\",\"metrics\":\"count,tweet_length,num_followers,num_links,num_mentions,num_hashtags,num_favorites,user_total_tweets\",\"shardSpec\":{\"type\":\"none\"},\"binaryVersion\":9,\"size\":411651320,\"identifier\":\"twitterstream_2013-01-03T00:00:00.000Z_2013-01-04T00:00:00.000Z_2013-01-04T04:09:13.590Z_v9\"}','1970-01-01T00:00:00.000Z');
INSERT INTO druid_segments (id,dataSource,created_date,start,end,partitioned,version,used,payload,used_flag_last_updated) VALUES ('wikipedia_editstream_2012-12-29T00:00:00.000Z_2013-01-10T08:00:00.000Z_2013-01-10T08:13:47.830Z_v9','wikipedia_editstream','2013-03-15T20:49:52.348Z','2012-12-29T00:00:00.000Z','2013-01-10T08:00:00.000Z',0,'2013-01-10T08:13:47.830Z_v9',1,'{\"dataSource\":\"wikipedia_editstream\",\"interval\":\"2012-12-29T00:00:00.000Z/2013-01-10T08:00:00.000Z\",\"version\":\"2013-01-10T08:13:47.830Z_v9\",\"loadSpec\":{\"type\":\"s3_zip\",\"bucket\":\"static.druid.io\",\"key\":\"data/segments/wikipedia_editstream/2012-12-29T00:00:00.000Z_2013-01-10T08:00:00.000Z/2013-01-10T08:13:47.830Z_v9/0/index.zip\"},\"dimensions\":\"anonymous,area_code,city,continent_code,country_name,dma_code,geo,language,namespace,network,newpage,page,postal_code,region_lookup,robot,unpatrolled,user\",\"metrics\":\"added,count,deleted,delta,delta_hist,unique_users,variation\",\"shardSpec\":{\"type\":\"none\"},\"binaryVersion\":9,\"size\":446027801,\"identifier\":\"wikipedia_editstream_2012-12-29T00:00:00.000Z_2013-01-10T08:00:00.000Z_2013-01-10T08:13:47.830Z_v9\"}','1970-01-01T00:00:00.000Z');
INSERT INTO druid_segments (id, dataSource, created_date, start, end, partitioned, version, used, payload,used_flag_last_updated) VALUES ('wikipedia_2013-08-01T00:00:00.000Z_2013-08-02T00:00:00.000Z_2013-08-08T21:22:48.989Z', 'wikipedia', '2013-08-08T21:26:23.799Z', '2013-08-01T00:00:00.000Z', '2013-08-02T00:00:00.000Z', '0', '2013-08-08T21:22:48.989Z', '1', '{\"dataSource\":\"wikipedia\",\"interval\":\"2013-08-01T00:00:00.000Z/2013-08-02T00:00:00.000Z\",\"version\":\"2013-08-08T21:22:48.989Z\",\"loadSpec\":{\"type\":\"s3_zip\",\"bucket\":\"static.druid.io\",\"key\":\"data/segments/wikipedia/20130801T000000.000Z_20130802T000000.000Z/2013-08-08T21_22_48.989Z/0/index.zip\"},\"dimensions\":\"dma_code,continent_code,geo,area_code,robot,country_name,network,city,namespace,anonymous,unpatrolled,page,postal_code,language,newpage,user,region_lookup\",\"metrics\":\"count,delta,variation,added,deleted\",\"shardSpec\":{\"type\":\"none\"},\"binaryVersion\":9,\"size\":24664730,\"identifier\":\"wikipedia_2013-08-01T00:00:00.000Z_2013-08-02T00:00:00.000Z_2013-08-08T21:22:48.989Z\"}','1970-01-01T00:00:00.000Z');

@@ -13,8 +13,8 @@
-- See the License for the specific language governing permissions and
-- limitations under the License.

INSERT INTO druid_segments (id,dataSource,created_date,start,end,partitioned,version,used,payload) VALUES ('twitterstream_2013-01-01T00:00:00.000Z_2013-01-02T00:00:00.000Z_2013-01-02T04:13:41.980Z_v9','twitterstream','2013-05-13T01:08:18.192Z','2013-01-01T00:00:00.000Z','2013-01-02T00:00:00.000Z',0,'2013-01-02T04:13:41.980Z_v9',1,'{\"dataSource\":\"twitterstream\",\"interval\":\"2013-01-01T00:00:00.000Z/2013-01-02T00:00:00.000Z\",\"version\":\"2013-01-02T04:13:41.980Z_v9\",\"loadSpec\":{\"type\":\"s3_zip\",\"bucket\":\"static.druid.io\",\"key\":\"data/segments/twitterstream/2013-01-01T00:00:00.000Z_2013-01-02T00:00:00.000Z/2013-01-02T04:13:41.980Z_v9/0/index.zip\"},\"dimensions\":\"has_links,first_hashtag,user_time_zone,user_location,has_mention,user_lang,rt_name,user_name,is_retweet,is_viral,has_geo,url_domain,user_mention_name,reply_to_name\",\"metrics\":\"count,tweet_length,num_followers,num_links,num_mentions,num_hashtags,num_favorites,user_total_tweets\",\"shardSpec\":{\"type\":\"none\"},\"binaryVersion\":9,\"size\":445235220,\"identifier\":\"twitterstream_2013-01-01T00:00:00.000Z_2013-01-02T00:00:00.000Z_2013-01-02T04:13:41.980Z_v9\"}');
INSERT INTO druid_segments (id,dataSource,created_date,start,end,partitioned,version,used,payload) VALUES ('twitterstream_2013-01-02T00:00:00.000Z_2013-01-03T00:00:00.000Z_2013-01-03T03:44:58.791Z_v9','twitterstream','2013-05-13T00:03:28.640Z','2013-01-02T00:00:00.000Z','2013-01-03T00:00:00.000Z',0,'2013-01-03T03:44:58.791Z_v9',1,'{\"dataSource\":\"twitterstream\",\"interval\":\"2013-01-02T00:00:00.000Z/2013-01-03T00:00:00.000Z\",\"version\":\"2013-01-03T03:44:58.791Z_v9\",\"loadSpec\":{\"type\":\"s3_zip\",\"bucket\":\"static.druid.io\",\"key\":\"data/segments/twitterstream/2013-01-02T00:00:00.000Z_2013-01-03T00:00:00.000Z/2013-01-03T03:44:58.791Z_v9/0/index.zip\"},\"dimensions\":\"has_links,first_hashtag,user_time_zone,user_location,has_mention,user_lang,rt_name,user_name,is_retweet,is_viral,has_geo,url_domain,user_mention_name,reply_to_name\",\"metrics\":\"count,tweet_length,num_followers,num_links,num_mentions,num_hashtags,num_favorites,user_total_tweets\",\"shardSpec\":{\"type\":\"none\"},\"binaryVersion\":9,\"size\":435325540,\"identifier\":\"twitterstream_2013-01-02T00:00:00.000Z_2013-01-03T00:00:00.000Z_2013-01-03T03:44:58.791Z_v9\"}');
INSERT INTO druid_segments (id,dataSource,created_date,start,end,partitioned,version,used,payload) VALUES ('twitterstream_2013-01-03T00:00:00.000Z_2013-01-04T00:00:00.000Z_2013-01-04T04:09:13.590Z_v9','twitterstream','2013-05-13T00:03:48.807Z','2013-01-03T00:00:00.000Z','2013-01-04T00:00:00.000Z',0,'2013-01-04T04:09:13.590Z_v9',1,'{\"dataSource\":\"twitterstream\",\"interval\":\"2013-01-03T00:00:00.000Z/2013-01-04T00:00:00.000Z\",\"version\":\"2013-01-04T04:09:13.590Z_v9\",\"loadSpec\":{\"type\":\"s3_zip\",\"bucket\":\"static.druid.io\",\"key\":\"data/segments/twitterstream/2013-01-03T00:00:00.000Z_2013-01-04T00:00:00.000Z/2013-01-04T04:09:13.590Z_v9/0/index.zip\"},\"dimensions\":\"has_links,first_hashtag,user_time_zone,user_location,has_mention,user_lang,rt_name,user_name,is_retweet,is_viral,has_geo,url_domain,user_mention_name,reply_to_name\",\"metrics\":\"count,tweet_length,num_followers,num_links,num_mentions,num_hashtags,num_favorites,user_total_tweets\",\"shardSpec\":{\"type\":\"none\"},\"binaryVersion\":9,\"size\":411651320,\"identifier\":\"twitterstream_2013-01-03T00:00:00.000Z_2013-01-04T00:00:00.000Z_2013-01-04T04:09:13.590Z_v9\"}');
INSERT INTO druid_segments (id,dataSource,created_date,start,end,partitioned,version,used,payload) VALUES ('wikipedia_editstream_2012-12-29T00:00:00.000Z_2013-01-10T08:00:00.000Z_2013-01-10T08:13:47.830Z_v9','wikipedia_editstream','2013-03-15T20:49:52.348Z','2012-12-29T00:00:00.000Z','2013-01-10T08:00:00.000Z',0,'2013-01-10T08:13:47.830Z_v9',1,'{\"dataSource\":\"wikipedia_editstream\",\"interval\":\"2012-12-29T00:00:00.000Z/2013-01-10T08:00:00.000Z\",\"version\":\"2013-01-10T08:13:47.830Z_v9\",\"loadSpec\":{\"type\":\"s3_zip\",\"bucket\":\"static.druid.io\",\"key\":\"data/segments/wikipedia_editstream/2012-12-29T00:00:00.000Z_2013-01-10T08:00:00.000Z/2013-01-10T08:13:47.830Z_v9/0/index.zip\"},\"dimensions\":\"anonymous,area_code,city,continent_code,country_name,dma_code,geo,language,namespace,network,newpage,page,postal_code,region_lookup,robot,unpatrolled,user\",\"metrics\":\"added,count,deleted,delta,delta_hist,unique_users,variation\",\"shardSpec\":{\"type\":\"none\"},\"binaryVersion\":9,\"size\":446027801,\"identifier\":\"wikipedia_editstream_2012-12-29T00:00:00.000Z_2013-01-10T08:00:00.000Z_2013-01-10T08:13:47.830Z_v9\"}');
INSERT INTO druid_segments (id, dataSource, created_date, start, end, partitioned, version, used, payload) VALUES ('wikipedia_2013-08-01T00:00:00.000Z_2013-08-02T00:00:00.000Z_2013-08-08T21:22:48.989Z', 'wikipedia', '2013-08-08T21:26:23.799Z', '2013-08-01T00:00:00.000Z', '2013-08-02T00:00:00.000Z', '0', '2013-08-08T21:22:48.989Z', '1', '{\"dataSource\":\"wikipedia\",\"interval\":\"2013-08-01T00:00:00.000Z/2013-08-02T00:00:00.000Z\",\"version\":\"2013-08-08T21:22:48.989Z\",\"loadSpec\":{\"type\":\"s3_zip\",\"bucket\":\"static.druid.io\",\"key\":\"data/segments/wikipedia/20130801T000000.000Z_20130802T000000.000Z/2013-08-08T21_22_48.989Z/0/index.zip\"},\"dimensions\":\"dma_code,continent_code,geo,area_code,robot,country_name,network,city,namespace,anonymous,unpatrolled,page,postal_code,language,newpage,user,region_lookup\",\"metrics\":\"count,delta,variation,added,deleted\",\"shardSpec\":{\"type\":\"none\"},\"binaryVersion\":9,\"size\":24664730,\"identifier\":\"wikipedia_2013-08-01T00:00:00.000Z_2013-08-02T00:00:00.000Z_2013-08-08T21:22:48.989Z\"}');
INSERT INTO druid_segments (id,dataSource,created_date,start,end,partitioned,version,used,payload,used_flag_last_updated) VALUES ('twitterstream_2013-01-01T00:00:00.000Z_2013-01-02T00:00:00.000Z_2013-01-02T04:13:41.980Z_v9','twitterstream','2013-05-13T01:08:18.192Z','2013-01-01T00:00:00.000Z','2013-01-02T00:00:00.000Z',0,'2013-01-02T04:13:41.980Z_v9',1,'{\"dataSource\":\"twitterstream\",\"interval\":\"2013-01-01T00:00:00.000Z/2013-01-02T00:00:00.000Z\",\"version\":\"2013-01-02T04:13:41.980Z_v9\",\"loadSpec\":{\"type\":\"s3_zip\",\"bucket\":\"static.druid.io\",\"key\":\"data/segments/twitterstream/2013-01-01T00:00:00.000Z_2013-01-02T00:00:00.000Z/2013-01-02T04:13:41.980Z_v9/0/index.zip\"},\"dimensions\":\"has_links,first_hashtag,user_time_zone,user_location,has_mention,user_lang,rt_name,user_name,is_retweet,is_viral,has_geo,url_domain,user_mention_name,reply_to_name\",\"metrics\":\"count,tweet_length,num_followers,num_links,num_mentions,num_hashtags,num_favorites,user_total_tweets\",\"shardSpec\":{\"type\":\"none\"},\"binaryVersion\":9,\"size\":445235220,\"identifier\":\"twitterstream_2013-01-01T00:00:00.000Z_2013-01-02T00:00:00.000Z_2013-01-02T04:13:41.980Z_v9\"}','1970-01-01T00:00:00.000Z');
INSERT INTO druid_segments (id,dataSource,created_date,start,end,partitioned,version,used,payload,used_flag_last_updated) VALUES ('twitterstream_2013-01-02T00:00:00.000Z_2013-01-03T00:00:00.000Z_2013-01-03T03:44:58.791Z_v9','twitterstream','2013-05-13T00:03:28.640Z','2013-01-02T00:00:00.000Z','2013-01-03T00:00:00.000Z',0,'2013-01-03T03:44:58.791Z_v9',1,'{\"dataSource\":\"twitterstream\",\"interval\":\"2013-01-02T00:00:00.000Z/2013-01-03T00:00:00.000Z\",\"version\":\"2013-01-03T03:44:58.791Z_v9\",\"loadSpec\":{\"type\":\"s3_zip\",\"bucket\":\"static.druid.io\",\"key\":\"data/segments/twitterstream/2013-01-02T00:00:00.000Z_2013-01-03T00:00:00.000Z/2013-01-03T03:44:58.791Z_v9/0/index.zip\"},\"dimensions\":\"has_links,first_hashtag,user_time_zone,user_location,has_mention,user_lang,rt_name,user_name,is_retweet,is_viral,has_geo,url_domain,user_mention_name,reply_to_name\",\"metrics\":\"count,tweet_length,num_followers,num_links,num_mentions,num_hashtags,num_favorites,user_total_tweets\",\"shardSpec\":{\"type\":\"none\"},\"binaryVersion\":9,\"size\":435325540,\"identifier\":\"twitterstream_2013-01-02T00:00:00.000Z_2013-01-03T00:00:00.000Z_2013-01-03T03:44:58.791Z_v9\"}','1970-01-01T00:00:00.000Z');
INSERT INTO druid_segments (id,dataSource,created_date,start,end,partitioned,version,used,payload,used_flag_last_updated) VALUES ('twitterstream_2013-01-03T00:00:00.000Z_2013-01-04T00:00:00.000Z_2013-01-04T04:09:13.590Z_v9','twitterstream','2013-05-13T00:03:48.807Z','2013-01-03T00:00:00.000Z','2013-01-04T00:00:00.000Z',0,'2013-01-04T04:09:13.590Z_v9',1,'{\"dataSource\":\"twitterstream\",\"interval\":\"2013-01-03T00:00:00.000Z/2013-01-04T00:00:00.000Z\",\"version\":\"2013-01-04T04:09:13.590Z_v9\",\"loadSpec\":{\"type\":\"s3_zip\",\"bucket\":\"static.druid.io\",\"key\":\"data/segments/twitterstream/2013-01-03T00:00:00.000Z_2013-01-04T00:00:00.000Z/2013-01-04T04:09:13.590Z_v9/0/index.zip\"},\"dimensions\":\"has_links,first_hashtag,user_time_zone,user_location,has_mention,user_lang,rt_name,user_name,is_retweet,is_viral,has_geo,url_domain,user_mention_name,reply_to_name\",\"metrics\":\"count,tweet_length,num_followers,num_links,num_mentions,num_hashtags,num_favorites,user_total_tweets\",\"shardSpec\":{\"type\":\"none\"},\"binaryVersion\":9,\"size\":411651320,\"identifier\":\"twitterstream_2013-01-03T00:00:00.000Z_2013-01-04T00:00:00.000Z_2013-01-04T04:09:13.590Z_v9\"}','1970-01-01T00:00:00.000Z');
INSERT INTO druid_segments (id,dataSource,created_date,start,end,partitioned,version,used,payload,used_flag_last_updated) VALUES ('wikipedia_editstream_2012-12-29T00:00:00.000Z_2013-01-10T08:00:00.000Z_2013-01-10T08:13:47.830Z_v9','wikipedia_editstream','2013-03-15T20:49:52.348Z','2012-12-29T00:00:00.000Z','2013-01-10T08:00:00.000Z',0,'2013-01-10T08:13:47.830Z_v9',1,'{\"dataSource\":\"wikipedia_editstream\",\"interval\":\"2012-12-29T00:00:00.000Z/2013-01-10T08:00:00.000Z\",\"version\":\"2013-01-10T08:13:47.830Z_v9\",\"loadSpec\":{\"type\":\"s3_zip\",\"bucket\":\"static.druid.io\",\"key\":\"data/segments/wikipedia_editstream/2012-12-29T00:00:00.000Z_2013-01-10T08:00:00.000Z/2013-01-10T08:13:47.830Z_v9/0/index.zip\"},\"dimensions\":\"anonymous,area_code,city,continent_code,country_name,dma_code,geo,language,namespace,network,newpage,page,postal_code,region_lookup,robot,unpatrolled,user\",\"metrics\":\"added,count,deleted,delta,delta_hist,unique_users,variation\",\"shardSpec\":{\"type\":\"none\"},\"binaryVersion\":9,\"size\":446027801,\"identifier\":\"wikipedia_editstream_2012-12-29T00:00:00.000Z_2013-01-10T08:00:00.000Z_2013-01-10T08:13:47.830Z_v9\"}','1970-01-01T00:00:00.000Z');
INSERT INTO druid_segments (id, dataSource, created_date, start, end, partitioned, version, used, payload,used_flag_last_updated) VALUES ('wikipedia_2013-08-01T00:00:00.000Z_2013-08-02T00:00:00.000Z_2013-08-08T21:22:48.989Z', 'wikipedia', '2013-08-08T21:26:23.799Z', '2013-08-01T00:00:00.000Z', '2013-08-02T00:00:00.000Z', '0', '2013-08-08T21:22:48.989Z', '1', '{\"dataSource\":\"wikipedia\",\"interval\":\"2013-08-01T00:00:00.000Z/2013-08-02T00:00:00.000Z\",\"version\":\"2013-08-08T21:22:48.989Z\",\"loadSpec\":{\"type\":\"s3_zip\",\"bucket\":\"static.druid.io\",\"key\":\"data/segments/wikipedia/20130801T000000.000Z_20130802T000000.000Z/2013-08-08T21_22_48.989Z/0/index.zip\"},\"dimensions\":\"dma_code,continent_code,geo,area_code,robot,country_name,network,city,namespace,anonymous,unpatrolled,page,postal_code,language,newpage,user,region_lookup\",\"metrics\":\"count,delta,variation,added,deleted\",\"shardSpec\":{\"type\":\"none\"},\"binaryVersion\":9,\"size\":24664730,\"identifier\":\"wikipedia_2013-08-01T00:00:00.000Z_2013-08-02T00:00:00.000Z_2013-08-08T21:22:48.989Z\"}','1970-01-01T00:00:00.000Z');

@@ -13,8 +13,8 @@
-- See the License for the specific language governing permissions and
-- limitations under the License.

INSERT INTO druid_segments (id,dataSource,created_date,start,end,partitioned,version,used,payload) VALUES ('twitterstream_2013-01-01T00:00:00.000Z_2013-01-02T00:00:00.000Z_2013-01-02T04:13:41.980Z_v9','twitterstream','2013-05-13T01:08:18.192Z','2013-01-01T00:00:00.000Z','2013-01-02T00:00:00.000Z',0,'2013-01-02T04:13:41.980Z_v9',1,'{\"dataSource\":\"twitterstream\",\"interval\":\"2013-01-01T00:00:00.000Z/2013-01-02T00:00:00.000Z\",\"version\":\"2013-01-02T04:13:41.980Z_v9\",\"loadSpec\":{\"type\":\"s3_zip\",\"bucket\":\"static.druid.io\",\"key\":\"data/segments/twitterstream/2013-01-01T00:00:00.000Z_2013-01-02T00:00:00.000Z/2013-01-02T04:13:41.980Z_v9/0/index.zip\"},\"dimensions\":\"has_links,first_hashtag,user_time_zone,user_location,has_mention,user_lang,rt_name,user_name,is_retweet,is_viral,has_geo,url_domain,user_mention_name,reply_to_name\",\"metrics\":\"count,tweet_length,num_followers,num_links,num_mentions,num_hashtags,num_favorites,user_total_tweets\",\"shardSpec\":{\"type\":\"none\"},\"binaryVersion\":9,\"size\":445235220,\"identifier\":\"twitterstream_2013-01-01T00:00:00.000Z_2013-01-02T00:00:00.000Z_2013-01-02T04:13:41.980Z_v9\"}');
INSERT INTO druid_segments (id,dataSource,created_date,start,end,partitioned,version,used,payload) VALUES ('twitterstream_2013-01-02T00:00:00.000Z_2013-01-03T00:00:00.000Z_2013-01-03T03:44:58.791Z_v9','twitterstream','2013-05-13T00:03:28.640Z','2013-01-02T00:00:00.000Z','2013-01-03T00:00:00.000Z',0,'2013-01-03T03:44:58.791Z_v9',1,'{\"dataSource\":\"twitterstream\",\"interval\":\"2013-01-02T00:00:00.000Z/2013-01-03T00:00:00.000Z\",\"version\":\"2013-01-03T03:44:58.791Z_v9\",\"loadSpec\":{\"type\":\"s3_zip\",\"bucket\":\"static.druid.io\",\"key\":\"data/segments/twitterstream/2013-01-02T00:00:00.000Z_2013-01-03T00:00:00.000Z/2013-01-03T03:44:58.791Z_v9/0/index.zip\"},\"dimensions\":\"has_links,first_hashtag,user_time_zone,user_location,has_mention,user_lang,rt_name,user_name,is_retweet,is_viral,has_geo,url_domain,user_mention_name,reply_to_name\",\"metrics\":\"count,tweet_length,num_followers,num_links,num_mentions,num_hashtags,num_favorites,user_total_tweets\",\"shardSpec\":{\"type\":\"none\"},\"binaryVersion\":9,\"size\":435325540,\"identifier\":\"twitterstream_2013-01-02T00:00:00.000Z_2013-01-03T00:00:00.000Z_2013-01-03T03:44:58.791Z_v9\"}');
INSERT INTO druid_segments (id,dataSource,created_date,start,end,partitioned,version,used,payload) VALUES ('twitterstream_2013-01-03T00:00:00.000Z_2013-01-04T00:00:00.000Z_2013-01-04T04:09:13.590Z_v9','twitterstream','2013-05-13T00:03:48.807Z','2013-01-03T00:00:00.000Z','2013-01-04T00:00:00.000Z',0,'2013-01-04T04:09:13.590Z_v9',1,'{\"dataSource\":\"twitterstream\",\"interval\":\"2013-01-03T00:00:00.000Z/2013-01-04T00:00:00.000Z\",\"version\":\"2013-01-04T04:09:13.590Z_v9\",\"loadSpec\":{\"type\":\"s3_zip\",\"bucket\":\"static.druid.io\",\"key\":\"data/segments/twitterstream/2013-01-03T00:00:00.000Z_2013-01-04T00:00:00.000Z/2013-01-04T04:09:13.590Z_v9/0/index.zip\"},\"dimensions\":\"has_links,first_hashtag,user_time_zone,user_location,has_mention,user_lang,rt_name,user_name,is_retweet,is_viral,has_geo,url_domain,user_mention_name,reply_to_name\",\"metrics\":\"count,tweet_length,num_followers,num_links,num_mentions,num_hashtags,num_favorites,user_total_tweets\",\"shardSpec\":{\"type\":\"none\"},\"binaryVersion\":9,\"size\":411651320,\"identifier\":\"twitterstream_2013-01-03T00:00:00.000Z_2013-01-04T00:00:00.000Z_2013-01-04T04:09:13.590Z_v9\"}');
INSERT INTO druid_segments (id,dataSource,created_date,start,end,partitioned,version,used,payload) VALUES ('wikipedia_editstream_2012-12-29T00:00:00.000Z_2013-01-10T08:00:00.000Z_2013-01-10T08:13:47.830Z_v9','wikipedia_editstream','2013-03-15T20:49:52.348Z','2012-12-29T00:00:00.000Z','2013-01-10T08:00:00.000Z',0,'2013-01-10T08:13:47.830Z_v9',1,'{\"dataSource\":\"wikipedia_editstream\",\"interval\":\"2012-12-29T00:00:00.000Z/2013-01-10T08:00:00.000Z\",\"version\":\"2013-01-10T08:13:47.830Z_v9\",\"loadSpec\":{\"type\":\"s3_zip\",\"bucket\":\"static.druid.io\",\"key\":\"data/segments/wikipedia_editstream/2012-12-29T00:00:00.000Z_2013-01-10T08:00:00.000Z/2013-01-10T08:13:47.830Z_v9/0/index.zip\"},\"dimensions\":\"anonymous,area_code,city,continent_code,country_name,dma_code,geo,language,namespace,network,newpage,page,postal_code,region_lookup,robot,unpatrolled,user\",\"metrics\":\"added,count,deleted,delta,delta_hist,unique_users,variation\",\"shardSpec\":{\"type\":\"none\"},\"binaryVersion\":9,\"size\":446027801,\"identifier\":\"wikipedia_editstream_2012-12-29T00:00:00.000Z_2013-01-10T08:00:00.000Z_2013-01-10T08:13:47.830Z_v9\"}');
INSERT INTO druid_segments (id, dataSource, created_date, start, end, partitioned, version, used, payload) VALUES ('wikipedia_2013-08-01T00:00:00.000Z_2013-08-02T00:00:00.000Z_2013-08-08T21:22:48.989Z', 'wikipedia', '2013-08-08T21:26:23.799Z', '2013-08-01T00:00:00.000Z', '2013-08-02T00:00:00.000Z', '0', '2013-08-08T21:22:48.989Z', '1', '{\"dataSource\":\"wikipedia\",\"interval\":\"2013-08-01T00:00:00.000Z/2013-08-02T00:00:00.000Z\",\"version\":\"2013-08-08T21:22:48.989Z\",\"loadSpec\":{\"type\":\"s3_zip\",\"bucket\":\"static.druid.io\",\"key\":\"data/segments/wikipedia/20130801T000000.000Z_20130802T000000.000Z/2013-08-08T21_22_48.989Z/0/index.zip\"},\"dimensions\":\"dma_code,continent_code,geo,area_code,robot,country_name,network,city,namespace,anonymous,unpatrolled,page,postal_code,language,newpage,user,region_lookup\",\"metrics\":\"count,delta,variation,added,deleted\",\"shardSpec\":{\"type\":\"none\"},\"binaryVersion\":9,\"size\":24664730,\"identifier\":\"wikipedia_2013-08-01T00:00:00.000Z_2013-08-02T00:00:00.000Z_2013-08-08T21:22:48.989Z\"}');
INSERT INTO druid_segments (id,dataSource,created_date,start,end,partitioned,version,used,payload,used_flag_last_updated) VALUES ('twitterstream_2013-01-01T00:00:00.000Z_2013-01-02T00:00:00.000Z_2013-01-02T04:13:41.980Z_v9','twitterstream','2013-05-13T01:08:18.192Z','2013-01-01T00:00:00.000Z','2013-01-02T00:00:00.000Z',0,'2013-01-02T04:13:41.980Z_v9',1,'{\"dataSource\":\"twitterstream\",\"interval\":\"2013-01-01T00:00:00.000Z/2013-01-02T00:00:00.000Z\",\"version\":\"2013-01-02T04:13:41.980Z_v9\",\"loadSpec\":{\"type\":\"s3_zip\",\"bucket\":\"static.druid.io\",\"key\":\"data/segments/twitterstream/2013-01-01T00:00:00.000Z_2013-01-02T00:00:00.000Z/2013-01-02T04:13:41.980Z_v9/0/index.zip\"},\"dimensions\":\"has_links,first_hashtag,user_time_zone,user_location,has_mention,user_lang,rt_name,user_name,is_retweet,is_viral,has_geo,url_domain,user_mention_name,reply_to_name\",\"metrics\":\"count,tweet_length,num_followers,num_links,num_mentions,num_hashtags,num_favorites,user_total_tweets\",\"shardSpec\":{\"type\":\"none\"},\"binaryVersion\":9,\"size\":445235220,\"identifier\":\"twitterstream_2013-01-01T00:00:00.000Z_2013-01-02T00:00:00.000Z_2013-01-02T04:13:41.980Z_v9\"}','1970-01-01T00:00:00.000Z');
INSERT INTO druid_segments (id,dataSource,created_date,start,end,partitioned,version,used,payload,used_flag_last_updated) VALUES ('twitterstream_2013-01-02T00:00:00.000Z_2013-01-03T00:00:00.000Z_2013-01-03T03:44:58.791Z_v9','twitterstream','2013-05-13T00:03:28.640Z','2013-01-02T00:00:00.000Z','2013-01-03T00:00:00.000Z',0,'2013-01-03T03:44:58.791Z_v9',1,'{\"dataSource\":\"twitterstream\",\"interval\":\"2013-01-02T00:00:00.000Z/2013-01-03T00:00:00.000Z\",\"version\":\"2013-01-03T03:44:58.791Z_v9\",\"loadSpec\":{\"type\":\"s3_zip\",\"bucket\":\"static.druid.io\",\"key\":\"data/segments/twitterstream/2013-01-02T00:00:00.000Z_2013-01-03T00:00:00.000Z/2013-01-03T03:44:58.791Z_v9/0/index.zip\"},\"dimensions\":\"has_links,first_hashtag,user_time_zone,user_location,has_mention,user_lang,rt_name,user_name,is_retweet,is_viral,has_geo,url_domain,user_mention_name,reply_to_name\",\"metrics\":\"count,tweet_length,num_followers,num_links,num_mentions,num_hashtags,num_favorites,user_total_tweets\",\"shardSpec\":{\"type\":\"none\"},\"binaryVersion\":9,\"size\":435325540,\"identifier\":\"twitterstream_2013-01-02T00:00:00.000Z_2013-01-03T00:00:00.000Z_2013-01-03T03:44:58.791Z_v9\"}','1970-01-01T00:00:00.000Z');
INSERT INTO druid_segments (id,dataSource,created_date,start,end,partitioned,version,used,payload,used_flag_last_updated) VALUES ('twitterstream_2013-01-03T00:00:00.000Z_2013-01-04T00:00:00.000Z_2013-01-04T04:09:13.590Z_v9','twitterstream','2013-05-13T00:03:48.807Z','2013-01-03T00:00:00.000Z','2013-01-04T00:00:00.000Z',0,'2013-01-04T04:09:13.590Z_v9',1,'{\"dataSource\":\"twitterstream\",\"interval\":\"2013-01-03T00:00:00.000Z/2013-01-04T00:00:00.000Z\",\"version\":\"2013-01-04T04:09:13.590Z_v9\",\"loadSpec\":{\"type\":\"s3_zip\",\"bucket\":\"static.druid.io\",\"key\":\"data/segments/twitterstream/2013-01-03T00:00:00.000Z_2013-01-04T00:00:00.000Z/2013-01-04T04:09:13.590Z_v9/0/index.zip\"},\"dimensions\":\"has_links,first_hashtag,user_time_zone,user_location,has_mention,user_lang,rt_name,user_name,is_retweet,is_viral,has_geo,url_domain,user_mention_name,reply_to_name\",\"metrics\":\"count,tweet_length,num_followers,num_links,num_mentions,num_hashtags,num_favorites,user_total_tweets\",\"shardSpec\":{\"type\":\"none\"},\"binaryVersion\":9,\"size\":411651320,\"identifier\":\"twitterstream_2013-01-03T00:00:00.000Z_2013-01-04T00:00:00.000Z_2013-01-04T04:09:13.590Z_v9\"}','1970-01-01T00:00:00.000Z');
|
||||
INSERT INTO druid_segments (id,dataSource,created_date,start,end,partitioned,version,used,payload,used_flag_last_updated) VALUES ('wikipedia_editstream_2012-12-29T00:00:00.000Z_2013-01-10T08:00:00.000Z_2013-01-10T08:13:47.830Z_v9','wikipedia_editstream','2013-03-15T20:49:52.348Z','2012-12-29T00:00:00.000Z','2013-01-10T08:00:00.000Z',0,'2013-01-10T08:13:47.830Z_v9',1,'{\"dataSource\":\"wikipedia_editstream\",\"interval\":\"2012-12-29T00:00:00.000Z/2013-01-10T08:00:00.000Z\",\"version\":\"2013-01-10T08:13:47.830Z_v9\",\"loadSpec\":{\"type\":\"s3_zip\",\"bucket\":\"static.druid.io\",\"key\":\"data/segments/wikipedia_editstream/2012-12-29T00:00:00.000Z_2013-01-10T08:00:00.000Z/2013-01-10T08:13:47.830Z_v9/0/index.zip\"},\"dimensions\":\"anonymous,area_code,city,continent_code,country_name,dma_code,geo,language,namespace,network,newpage,page,postal_code,region_lookup,robot,unpatrolled,user\",\"metrics\":\"added,count,deleted,delta,delta_hist,unique_users,variation\",\"shardSpec\":{\"type\":\"none\"},\"binaryVersion\":9,\"size\":446027801,\"identifier\":\"wikipedia_editstream_2012-12-29T00:00:00.000Z_2013-01-10T08:00:00.000Z_2013-01-10T08:13:47.830Z_v9\"}','1970-01-01T00:00:00.000Z');
|
||||
INSERT INTO druid_segments (id, dataSource, created_date, start, end, partitioned, version, used, payload,used_flag_last_updated) VALUES ('wikipedia_2013-08-01T00:00:00.000Z_2013-08-02T00:00:00.000Z_2013-08-08T21:22:48.989Z', 'wikipedia', '2013-08-08T21:26:23.799Z', '2013-08-01T00:00:00.000Z', '2013-08-02T00:00:00.000Z', '0', '2013-08-08T21:22:48.989Z', '1', '{\"dataSource\":\"wikipedia\",\"interval\":\"2013-08-01T00:00:00.000Z/2013-08-02T00:00:00.000Z\",\"version\":\"2013-08-08T21:22:48.989Z\",\"loadSpec\":{\"type\":\"s3_zip\",\"bucket\":\"static.druid.io\",\"key\":\"data/segments/wikipedia/20130801T000000.000Z_20130802T000000.000Z/2013-08-08T21_22_48.989Z/0/index.zip\"},\"dimensions\":\"dma_code,continent_code,geo,area_code,robot,country_name,network,city,namespace,anonymous,unpatrolled,page,postal_code,language,newpage,user,region_lookup\",\"metrics\":\"count,delta,variation,added,deleted\",\"shardSpec\":{\"type\":\"none\"},\"binaryVersion\":9,\"size\":24664730,\"identifier\":\"wikipedia_2013-08-01T00:00:00.000Z_2013-08-02T00:00:00.000Z_2013-08-08T21:22:48.989Z\"}','1970-01-01T00:00:00.000Z');
|
||||
|
|
|
@ -14,4 +14,4 @@
|
|||
-- limitations under the License.
|
||||
|
||||
INSERT INTO druid_tasks (id, created_date, datasource, payload, status_payload, active) VALUES ('index_auth_test_2030-04-30T01:13:31.893Z', '2030-04-30T01:13:31.893Z', 'auth_test', '{\"id\":\"index_auth_test_2030-04-30T01:13:31.893Z\",\"created_date\":\"2030-04-30T01:13:31.893Z\",\"datasource\":\"auth_test\",\"active\":0}', '{\"id\":\"index_auth_test_2030-04-30T01:13:31.893Z\",\"status\":\"SUCCESS\",\"duration\":1}', 0);
|
||||
INSERT INTO druid_segments (id,dataSource,created_date,start,end,partitioned,version,used,payload) VALUES ('auth_test_2012-12-29T00:00:00.000Z_2013-01-10T08:00:00.000Z_2013-01-10T08:13:47.830Z_v9','auth_test','2013-03-15T20:49:52.348Z','2012-12-29T00:00:00.000Z','2013-01-10T08:00:00.000Z',0,'2013-01-10T08:13:47.830Z_v9',1,'{\"dataSource\":\"auth_test\",\"interval\":\"2012-12-29T00:00:00.000Z/2013-01-10T08:00:00.000Z\",\"version\":\"2013-01-10T08:13:47.830Z_v9\",\"loadSpec\":{\"type\":\"s3_zip\",\"bucket\":\"static.druid.io\",\"key\":\"data/segments/wikipedia_editstream/2012-12-29T00:00:00.000Z_2013-01-10T08:00:00.000Z/2013-01-10T08:13:47.830Z_v9/0/index.zip\"},\"dimensions\":\"anonymous,area_code,city,continent_code,country_name,dma_code,geo,language,namespace,network,newpage,page,postal_code,region_lookup,robot,unpatrolled,user\",\"metrics\":\"added,count,deleted,delta,delta_hist,unique_users,variation\",\"shardSpec\":{\"type\":\"none\"},\"binaryVersion\":9,\"size\":446027801,\"identifier\":\"auth_test_2012-12-29T00:00:00.000Z_2013-01-10T08:00:00.000Z_2013-01-10T08:13:47.830Z_v9\"}');
|
||||
INSERT INTO druid_segments (id,dataSource,created_date,start,end,partitioned,version,used,payload,used_flag_last_updated) VALUES ('auth_test_2012-12-29T00:00:00.000Z_2013-01-10T08:00:00.000Z_2013-01-10T08:13:47.830Z_v9','auth_test','2013-03-15T20:49:52.348Z','2012-12-29T00:00:00.000Z','2013-01-10T08:00:00.000Z',0,'2013-01-10T08:13:47.830Z_v9',1,'{\"dataSource\":\"auth_test\",\"interval\":\"2012-12-29T00:00:00.000Z/2013-01-10T08:00:00.000Z\",\"version\":\"2013-01-10T08:13:47.830Z_v9\",\"loadSpec\":{\"type\":\"s3_zip\",\"bucket\":\"static.druid.io\",\"key\":\"data/segments/wikipedia_editstream/2012-12-29T00:00:00.000Z_2013-01-10T08:00:00.000Z/2013-01-10T08:13:47.830Z_v9/0/index.zip\"},\"dimensions\":\"anonymous,area_code,city,continent_code,country_name,dma_code,geo,language,namespace,network,newpage,page,postal_code,region_lookup,robot,unpatrolled,user\",\"metrics\":\"added,count,deleted,delta,delta_hist,unique_users,variation\",\"shardSpec\":{\"type\":\"none\"},\"binaryVersion\":9,\"size\":446027801,\"identifier\":\"auth_test_2012-12-29T00:00:00.000Z_2013-01-10T08:00:00.000Z_2013-01-10T08:13:47.830Z_v9\"}','1970-01-01T00:00:00.000Z');
|
pom.xml
@@ -1301,6 +1301,9 @@
<exclude>org/apache/druid/server/initialization/jetty/*Module*</exclude>
<exclude>org/apache/druid/guice/http/*</exclude>

<!-- Ignore cli tools -->
<exclude>org/apache/druid/cli/UpdateTables*</exclude>

<!-- Ignore generated code -->
<exclude>org/apache/druid/math/expr/antlr/Expr*</exclude> <!-- core -->
<exclude>org/apache/druid/**/generated/*Benchmark*</exclude> <!-- benchmarks -->

@@ -88,4 +88,12 @@ public interface MetadataStorageConnector
void createSupervisorsTable();

void deleteAllRecords(String tableName);

/**
* Upgrade compatibility method.
*
* A new column, used_flag_last_updated, is added to the druid_segments table. This method alters the table to add
* the column, making a cluster's metadata store tables compatible with the updated Druid codebase in 0.24.x+.
*/
void alterSegmentTableAddUsedFlagLastUpdated();
}
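Note: the new method above is meant to be idempotent. Below is a minimal, illustrative sketch of the pattern that the SQL-based connector later in this change follows (check whether the column already exists, then issue a single ALTER TABLE). The helpers tablesConfigSupplier, tableHasColumn, alterTable, ImmutableList and StringUtils are assumed to be available as they are in SQLMetadataConnector; this block is a sketch, not part of the patch.

// Illustrative sketch only, not part of the patch.
@Override
public void alterSegmentTableAddUsedFlagLastUpdated()
{
  final String tableName = tablesConfigSupplier.get().getSegmentsTable();
  // Guard the ALTER so repeated calls (e.g. from the UpdateTables CLI tool and again at startup) are safe.
  if (tableHasColumn(tableName, "used_flag_last_updated")) {
    return;
  }
  alterTable(
      tableName,
      ImmutableList.of(
          StringUtils.format("ALTER TABLE %1$s ADD used_flag_last_updated varchar(255)", tableName)
      )
  );
}
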
@@ -89,4 +89,10 @@ public class TestMetadataStorageConnector implements MetadataStorageConnector
{
throw new UnsupportedOperationException();
}

@Override
public void alterSegmentTableAddUsedFlagLastUpdated()
{
throw new UnsupportedOperationException();
}
}

|
|
@ -59,29 +59,28 @@ public class SQLMetadataStorageUpdaterJobHandler implements MetadataStorageUpdat
|
|||
{
|
||||
final PreparedBatch batch = handle.prepareBatch(
|
||||
StringUtils.format(
|
||||
"INSERT INTO %1$s (id, dataSource, created_date, start, %2$send%2$s, partitioned, version, used, payload) "
|
||||
+ "VALUES (:id, :dataSource, :created_date, :start, :end, :partitioned, :version, :used, :payload)",
|
||||
"INSERT INTO %1$s (id, dataSource, created_date, start, %2$send%2$s, partitioned, version, used, payload, used_flag_last_updated) "
|
||||
+ "VALUES (:id, :dataSource, :created_date, :start, :end, :partitioned, :version, :used, :payload, :used_flag_last_updated)",
|
||||
tableName, connector.getQuoteString()
|
||||
)
|
||||
);
|
||||
for (final DataSegment segment : segments) {
|
||||
|
||||
String now = DateTimes.nowUtc().toString();
|
||||
batch.add(
|
||||
new ImmutableMap.Builder<String, Object>()
|
||||
.put("id", segment.getId().toString())
|
||||
.put("dataSource", segment.getDataSource())
|
||||
.put("created_date", DateTimes.nowUtc().toString())
|
||||
.put("created_date", now)
|
||||
.put("start", segment.getInterval().getStart().toString())
|
||||
.put("end", segment.getInterval().getEnd().toString())
|
||||
.put("partitioned", (segment.getShardSpec() instanceof NoneShardSpec) ? false : true)
|
||||
.put("version", segment.getVersion())
|
||||
.put("used", true)
|
||||
.put("payload", mapper.writeValueAsBytes(segment))
|
||||
.put("used_flag_last_updated", now)
|
||||
.build()
|
||||
);
|
||||
|
||||
log.info("Published %s", segment.getId());
|
||||
|
||||
}
|
||||
batch.execute();
|
||||
|
||||
|
|
|
@ -1419,8 +1419,8 @@ public class IndexerSQLMetadataStorageCoordinator implements IndexerMetadataStor
|
|||
|
||||
PreparedBatch preparedBatch = handle.prepareBatch(
|
||||
StringUtils.format(
|
||||
"INSERT INTO %1$s (id, dataSource, created_date, start, %2$send%2$s, partitioned, version, used, payload) "
|
||||
+ "VALUES (:id, :dataSource, :created_date, :start, :end, :partitioned, :version, :used, :payload)",
|
||||
"INSERT INTO %1$s (id, dataSource, created_date, start, %2$send%2$s, partitioned, version, used, payload, used_flag_last_updated) "
|
||||
+ "VALUES (:id, :dataSource, :created_date, :start, :end, :partitioned, :version, :used, :payload, :used_flag_last_updated)",
|
||||
dbTables.getSegmentsTable(),
|
||||
connector.getQuoteString()
|
||||
)
|
||||
|
@ -1428,16 +1428,18 @@ public class IndexerSQLMetadataStorageCoordinator implements IndexerMetadataStor
|
|||
|
||||
for (List<DataSegment> partition : partitionedSegments) {
|
||||
for (DataSegment segment : partition) {
|
||||
String now = DateTimes.nowUtc().toString();
|
||||
preparedBatch.add()
|
||||
.bind("id", segment.getId().toString())
|
||||
.bind("dataSource", segment.getDataSource())
|
||||
.bind("created_date", DateTimes.nowUtc().toString())
|
||||
.bind("created_date", now)
|
||||
.bind("start", segment.getInterval().getStart().toString())
|
||||
.bind("end", segment.getInterval().getEnd().toString())
|
||||
.bind("partitioned", (segment.getShardSpec() instanceof NoneShardSpec) ? false : true)
|
||||
.bind("version", segment.getVersion())
|
||||
.bind("used", usedSegments.contains(segment))
|
||||
.bind("payload", jsonMapper.writeValueAsBytes(segment));
|
||||
.bind("payload", jsonMapper.writeValueAsBytes(segment))
|
||||
.bind("used_flag_last_updated", now);
|
||||
}
|
||||
final int[] affectedRows = preparedBatch.execute();
|
||||
final boolean succeeded = Arrays.stream(affectedRows).allMatch(eachAffectedRows -> eachAffectedRows == 1);
|
||||
|
|
|
@ -227,6 +227,41 @@ public abstract class SQLMetadataConnector implements MetadataStorageConnector
|
|||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Execute the desired ALTER statement(s) on the desired table
*
* @param tableName The name of the table being altered
* @param sql       ALTER statements to be executed
|
||||
*/
|
||||
private void alterTable(final String tableName, final Iterable<String> sql)
|
||||
{
|
||||
try {
|
||||
retryWithHandle(
|
||||
new HandleCallback<Void>()
|
||||
{
|
||||
@Override
|
||||
public Void withHandle(Handle handle)
|
||||
{
|
||||
if (tableExists(handle, tableName)) {
|
||||
final Batch batch = handle.createBatch();
|
||||
for (String s : sql) {
|
||||
log.info("Altering table[%s], with command: %s", tableName, s);
|
||||
batch.add(s);
|
||||
}
|
||||
batch.execute();
|
||||
} else {
|
||||
log.info("Table[%s] doesn't exist", tableName);
|
||||
}
|
||||
return null;
|
||||
}
|
||||
}
|
||||
);
|
||||
}
|
||||
catch (Exception e) {
|
||||
log.warn(e, "Exception Altering table[%s]", tableName);
|
||||
}
|
||||
}
|
||||
|
||||
public void createPendingSegmentsTable(final String tableName)
|
||||
{
|
||||
createTable(
|
||||
|
@ -296,6 +331,7 @@ public abstract class SQLMetadataConnector implements MetadataStorageConnector
|
|||
+ " version VARCHAR(255) NOT NULL,\n"
|
||||
+ " used BOOLEAN NOT NULL,\n"
|
||||
+ " payload %2$s NOT NULL,\n"
|
||||
+ " used_flag_last_updated VARCHAR(255) NOT NULL,\n"
|
||||
+ " PRIMARY KEY (id)\n"
|
||||
+ ")",
|
||||
tableName, getPayloadType(), getQuoteString(), getCollation()
|
||||
|
@ -346,23 +382,11 @@ public abstract class SQLMetadataConnector implements MetadataStorageConnector
|
|||
)
|
||||
);
|
||||
}
|
||||
|
||||
public boolean tableContainsColumn(Handle handle, String table, String column)
|
||||
{
|
||||
try {
|
||||
DatabaseMetaData databaseMetaData = handle.getConnection().getMetaData();
|
||||
ResultSet columns = databaseMetaData.getColumns(null, null, table, column);
|
||||
return columns.next();
|
||||
}
|
||||
catch (SQLException e) {
|
||||
return false;
|
||||
}
|
||||
}
|
||||
|
||||
|
||||
public void prepareTaskEntryTable(final String tableName)
|
||||
{
|
||||
createEntryTable(tableName);
|
||||
alterEntryTable(tableName);
|
||||
alterEntryTableAddTypeAndGroupId(tableName);
|
||||
}
|
||||
|
||||
public void createEntryTable(final String tableName)
|
||||
|
@ -399,32 +423,23 @@ public abstract class SQLMetadataConnector implements MetadataStorageConnector
|
|||
);
|
||||
}
|
||||
|
||||
private void alterEntryTable(final String tableName)
|
||||
private void alterEntryTableAddTypeAndGroupId(final String tableName)
|
||||
{
|
||||
try {
|
||||
retryWithHandle(
|
||||
new HandleCallback<Void>()
|
||||
{
|
||||
@Override
|
||||
public Void withHandle(Handle handle)
|
||||
{
|
||||
final Batch batch = handle.createBatch();
|
||||
if (!tableContainsColumn(handle, tableName, "type")) {
|
||||
log.info("Adding column: type to table[%s]", tableName);
|
||||
batch.add(StringUtils.format("ALTER TABLE %1$s ADD COLUMN type VARCHAR(255)", tableName));
|
||||
}
|
||||
if (!tableContainsColumn(handle, tableName, "group_id")) {
|
||||
log.info("Adding column: group_id to table[%s]", tableName);
|
||||
batch.add(StringUtils.format("ALTER TABLE %1$s ADD COLUMN group_id VARCHAR(255)", tableName));
|
||||
}
|
||||
batch.execute();
|
||||
return null;
|
||||
}
|
||||
}
|
||||
);
|
||||
ArrayList<String> statements = new ArrayList<>();
|
||||
if (!tableHasColumn(tableName, "type")) {
|
||||
log.info("Adding 'type' column to %s", tableName);
|
||||
statements.add(StringUtils.format("ALTER TABLE %1$s ADD COLUMN type VARCHAR(255)", tableName));
|
||||
} else {
|
||||
log.info("%s already has 'type' column", tableName);
|
||||
}
|
||||
catch (Exception e) {
|
||||
log.warn(e, "Exception altering table");
|
||||
if (!tableHasColumn(tableName, "group_id")) {
|
||||
log.info("Adding 'group_id' column to %s", tableName);
|
||||
statements.add(StringUtils.format("ALTER TABLE %1$s ADD COLUMN group_id VARCHAR(255)", tableName));
|
||||
} else {
|
||||
log.info("%s already has 'group_id' column", tableName);
|
||||
}
|
||||
if (!statements.isEmpty()) {
|
||||
alterTable(tableName, statements);
|
||||
}
|
||||
}
|
||||
|
||||
|
@ -486,6 +501,32 @@ public abstract class SQLMetadataConnector implements MetadataStorageConnector
|
|||
);
|
||||
}
|
||||
|
||||
/**
|
||||
* Adds the used_flag_last_updated column to the Druid segment table.
|
||||
*
|
||||
* This is public to allow the UpdateTables CLI tool to use it for upgrade preparation.
|
||||
*/
|
||||
@Override
|
||||
public void alterSegmentTableAddUsedFlagLastUpdated()
|
||||
{
|
||||
String tableName = tablesConfigSupplier.get().getSegmentsTable();
|
||||
if (!tableHasColumn(tableName, "used_flag_last_updated")) {
|
||||
log.info("Adding 'used_flag_last_updated' column to %s", tableName);
|
||||
alterTable(
|
||||
tableName,
|
||||
ImmutableList.of(
|
||||
StringUtils.format(
|
||||
"ALTER TABLE %1$s \n"
|
||||
+ "ADD used_flag_last_updated varchar(255)",
|
||||
tableName
|
||||
)
|
||||
)
|
||||
);
|
||||
} else {
|
||||
log.info("%s already has used_flag_last_updated column", tableName);
|
||||
}
|
||||
}
|
||||
|
||||
@Override
|
||||
public Void insertOrUpdate(
|
||||
final String tableName,
|
||||
|
@ -631,7 +672,11 @@ public abstract class SQLMetadataConnector implements MetadataStorageConnector
|
|||
{
|
||||
if (config.get().isCreateTables()) {
|
||||
createSegmentTable(tablesConfigSupplier.get().getSegmentsTable());
|
||||
alterSegmentTableAddUsedFlagLastUpdated();
|
||||
}
|
||||
// Called outside of the above conditional because we want to validate the table
|
||||
// regardless of cluster configuration for creating tables.
|
||||
validateSegmentTable();
|
||||
}
|
||||
|
||||
@Override
|
||||
|
@ -942,4 +987,63 @@ public abstract class SQLMetadataConnector implements MetadataStorageConnector
|
|||
log.error(e, StringUtils.format("Exception while creating index on table [%s]", tableName));
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Interrogate table metadata and return true or false depending on the existence of the indicated column
|
||||
*
|
||||
* Public visibility because DerbyConnector needs to override it: Derby's uppercase table and column names would
* otherwise invalidate this implementation.
|
||||
*
|
||||
* @param tableName The table being interrogated
|
||||
* @param columnName The column being looked for
|
||||
* @return boolean indicating the existence of the column in question
|
||||
*/
|
||||
public boolean tableHasColumn(String tableName, String columnName)
|
||||
{
|
||||
return getDBI().withHandle(
|
||||
new HandleCallback<Boolean>()
|
||||
{
|
||||
@Override
|
||||
public Boolean withHandle(Handle handle)
|
||||
{
|
||||
try {
|
||||
if (tableExists(handle, tableName)) {
|
||||
DatabaseMetaData dbMetaData = handle.getConnection().getMetaData();
|
||||
ResultSet columns = dbMetaData.getColumns(
|
||||
null,
|
||||
null,
|
||||
tableName,
|
||||
columnName
|
||||
);
|
||||
return columns.next();
|
||||
} else {
|
||||
return false;
|
||||
}
|
||||
}
|
||||
catch (SQLException e) {
|
||||
return false;
|
||||
}
|
||||
}
|
||||
}
|
||||
);
|
||||
}
|
||||
|
||||
/**
|
||||
* Ensure that the segment table has the proper schema required to run Druid properly.
|
||||
*
|
||||
* Throws RuntimeException if the column does not exist. There is no recovering from an invalid schema,
* so the program should crash.
|
||||
*
|
||||
* See <a href="https://druid.apache.org/docs/latest/operations/upgrade-prep.html">upgrade-prep docs</a> for info
|
||||
* on manually preparing your segment table.
|
||||
*/
|
||||
private void validateSegmentTable()
|
||||
{
|
||||
if (tableHasColumn(tablesConfigSupplier.get().getSegmentsTable(), "used_flag_last_updated")) {
|
||||
return;
|
||||
} else {
|
||||
throw new RuntimeException("Invalid Segment Table Schema! No used_flag_last_updated column!" +
|
||||
" See https://druid.apache.org/docs/latest/operations/upgrade-prep.html for more info on remediation");
|
||||
}
|
||||
}
|
||||
}
|
||||
|
|
|
@ -55,8 +55,8 @@ public class SQLMetadataSegmentPublisher implements MetadataSegmentPublisher
|
|||
this.config = config;
|
||||
this.connector = connector;
|
||||
this.statement = StringUtils.format(
|
||||
"INSERT INTO %1$s (id, dataSource, created_date, start, %2$send%2$s, partitioned, version, used, payload) "
|
||||
+ "VALUES (:id, :dataSource, :created_date, :start, :end, :partitioned, :version, :used, :payload)",
|
||||
"INSERT INTO %1$s (id, dataSource, created_date, start, %2$send%2$s, partitioned, version, used, payload, used_flag_last_updated) "
|
||||
+ "VALUES (:id, :dataSource, :created_date, :start, :end, :partitioned, :version, :used, :payload, :used_flag_last_updated)",
|
||||
config.getSegmentsTable(), connector.getQuoteString()
|
||||
);
|
||||
}
|
||||
|
@ -64,16 +64,18 @@ public class SQLMetadataSegmentPublisher implements MetadataSegmentPublisher
|
|||
@Override
|
||||
public void publishSegment(final DataSegment segment) throws IOException
|
||||
{
|
||||
String now = DateTimes.nowUtc().toString();
|
||||
publishSegment(
|
||||
segment.getId().toString(),
|
||||
segment.getDataSource(),
|
||||
DateTimes.nowUtc().toString(),
|
||||
now,
|
||||
segment.getInterval().getStart().toString(),
|
||||
segment.getInterval().getEnd().toString(),
|
||||
(segment.getShardSpec() instanceof NoneShardSpec) ? false : true,
|
||||
segment.getVersion(),
|
||||
true,
|
||||
jsonMapper.writeValueAsBytes(segment)
|
||||
jsonMapper.writeValueAsBytes(segment),
|
||||
now
|
||||
);
|
||||
}
|
||||
|
||||
|
@ -87,7 +89,8 @@ public class SQLMetadataSegmentPublisher implements MetadataSegmentPublisher
|
|||
final boolean partitioned,
|
||||
final String version,
|
||||
final boolean used,
|
||||
final byte[] payload
|
||||
final byte[] payload,
|
||||
final String usedFlagLastUpdated
|
||||
)
|
||||
{
|
||||
try {
|
||||
|
@ -128,6 +131,7 @@ public class SQLMetadataSegmentPublisher implements MetadataSegmentPublisher
|
|||
.bind("version", version)
|
||||
.bind("used", used)
|
||||
.bind("payload", payload)
|
||||
.bind("used_flag_last_updated", usedFlagLastUpdated)
|
||||
.execute();
|
||||
|
||||
return null;
|
||||
|
|
|
@ -139,11 +139,25 @@ public interface SegmentsMetadataManager
|
|||
Set<String> retrieveAllDataSourceNames();
|
||||
|
||||
/**
|
||||
* Returns top N unused segment intervals with the end time no later than the specified maxEndTime when ordered by
|
||||
* segment start time, end time.
|
||||
* Returns top N unused segment intervals with the end time no later than the specified maxEndTime and
* used_flag_last_updated time no later than maxUsedFlagLastUpdatedTime, ordered by segment start time, then end time.
* Segments with no used_flag_last_updated time (left over from an upgrade from a legacy Druid version) are never
* returned, regardless of maxUsedFlagLastUpdatedTime.
|
||||
*/
|
||||
List<Interval> getUnusedSegmentIntervals(String dataSource, DateTime maxEndTime, int limit);
|
||||
List<Interval> getUnusedSegmentIntervals(
|
||||
String dataSource,
|
||||
DateTime maxEndTime,
|
||||
int limit,
|
||||
DateTime maxUsedFlagLastUpdatedTime
|
||||
);
|
||||
|
||||
@VisibleForTesting
|
||||
void poll();
|
||||
|
||||
/**
|
||||
* Populates used_flag_last_updated column in the segments table iteratively until there are no segments with a NULL
|
||||
* value for that column.
|
||||
*/
|
||||
void populateUsedFlagLastUpdatedAsync();
|
||||
|
||||
void stopAsyncUsedFlagLastUpdatedUpdate();
|
||||
}
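A short, illustrative sketch of how a caller is expected to use the widened getUnusedSegmentIntervals signature, deriving maxUsedFlagLastUpdatedTime from the kill buffer period the way the KillUnusedSegments duty does further down in this change. The class and method names below are hypothetical; the block is a sketch, not part of the patch.

import org.apache.druid.java.util.common.DateTimes;
import org.joda.time.DateTime;
import org.joda.time.Interval;

import java.util.List;

// Hypothetical helper; assumed to live alongside SegmentsMetadataManager so no extra import is needed.
class UnusedSegmentIntervalLookupExample
{
  static List<Interval> findKillableIntervals(
      SegmentsMetadataManager manager,
      String dataSource,
      long retainDurationMillis,
      long bufferPeriodMillis,
      int limit
  )
  {
    // Segment intervals must end no later than now - retainDuration ...
    final DateTime maxEndTime = DateTimes.nowUtc().minus(retainDurationMillis);
    // ... and the segment must have been unused for at least bufferPeriod, i.e. its
    // used_flag_last_updated must be no later than now - bufferPeriod.
    final DateTime maxUsedFlagLastUpdatedTime = DateTimes.nowUtc().minus(bufferPeriodMillis);
    return manager.getUnusedSegmentIntervals(dataSource, maxEndTime, limit, maxUsedFlagLastUpdatedTime);
  }
}
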
@ -55,11 +55,13 @@ import org.joda.time.DateTime;
|
|||
import org.joda.time.Duration;
|
||||
import org.joda.time.Interval;
|
||||
import org.skife.jdbi.v2.BaseResultSetMapper;
|
||||
import org.skife.jdbi.v2.Batch;
|
||||
import org.skife.jdbi.v2.FoldController;
|
||||
import org.skife.jdbi.v2.Handle;
|
||||
import org.skife.jdbi.v2.StatementContext;
|
||||
import org.skife.jdbi.v2.TransactionCallback;
|
||||
import org.skife.jdbi.v2.TransactionStatus;
|
||||
import org.skife.jdbi.v2.tweak.HandleCallback;
|
||||
import org.skife.jdbi.v2.tweak.ResultSetMapper;
|
||||
|
||||
import javax.annotation.Nullable;
|
||||
|
@ -76,6 +78,8 @@ import java.util.Map;
|
|||
import java.util.Objects;
|
||||
import java.util.Set;
|
||||
import java.util.concurrent.CompletableFuture;
|
||||
import java.util.concurrent.ExecutorService;
|
||||
import java.util.concurrent.Executors;
|
||||
import java.util.concurrent.Future;
|
||||
import java.util.concurrent.ScheduledExecutorService;
|
||||
import java.util.concurrent.TimeUnit;
|
||||
|
@ -229,6 +233,8 @@ public class SqlSegmentsMetadataManager implements SegmentsMetadataManager
|
|||
@GuardedBy("startStopPollLock")
|
||||
private @Nullable ScheduledExecutorService exec = null;
|
||||
|
||||
private Future<?> usedFlagLastUpdatedPopulationFuture;
|
||||
|
||||
@Inject
|
||||
public SqlSegmentsMetadataManager(
|
||||
ObjectMapper jsonMapper,
|
||||
|
@ -313,6 +319,110 @@ public class SqlSegmentsMetadataManager implements SegmentsMetadataManager
|
|||
}
|
||||
}
|
||||
|
||||
@Override
|
||||
public void stopAsyncUsedFlagLastUpdatedUpdate()
|
||||
{
|
||||
if (!usedFlagLastUpdatedPopulationFuture.isDone() && !usedFlagLastUpdatedPopulationFuture.isCancelled()) {
|
||||
usedFlagLastUpdatedPopulationFuture.cancel(true);
|
||||
}
|
||||
}
|
||||
|
||||
@Override
|
||||
public void populateUsedFlagLastUpdatedAsync()
|
||||
{
|
||||
ExecutorService executorService = Executors.newSingleThreadExecutor();
|
||||
usedFlagLastUpdatedPopulationFuture = executorService.submit(
|
||||
() -> populateUsedFlagLastUpdated()
|
||||
);
|
||||
}
|
||||
|
||||
/**
|
||||
* Populate used_flag_last_updated for unused segments whose current value for said column is NULL
|
||||
*
|
||||
* The updates are made incrementally.
|
||||
*/
|
||||
@VisibleForTesting
|
||||
void populateUsedFlagLastUpdated()
|
||||
{
|
||||
String segmentsTable = getSegmentsTable();
|
||||
log.info(
|
||||
"Populating used_flag_last_updated with non-NULL values for unused segments in [%s]",
|
||||
segmentsTable
|
||||
);
|
||||
|
||||
int limit = 100;
|
||||
int totalUpdatedEntries = 0;
|
||||
|
||||
while (true) {
|
||||
List<String> segmentsToUpdate = new ArrayList<>(100);
|
||||
try {
|
||||
connector.retryWithHandle(
|
||||
new HandleCallback<Void>()
|
||||
{
|
||||
@Override
|
||||
public Void withHandle(Handle handle)
|
||||
{
|
||||
segmentsToUpdate.addAll(handle.createQuery(
|
||||
StringUtils.format(
|
||||
"SELECT id FROM %1$s WHERE used_flag_last_updated IS NULL and used = :used %2$s",
|
||||
segmentsTable,
|
||||
connector.limitClause(limit)
|
||||
)
|
||||
).bind("used", false).mapTo(String.class).list());
|
||||
return null;
|
||||
}
|
||||
}
|
||||
);
|
||||
|
||||
if (segmentsToUpdate.isEmpty()) {
|
||||
// We have no segments to process
|
||||
break;
|
||||
}
|
||||
|
||||
connector.retryWithHandle(
|
||||
new HandleCallback<Void>()
|
||||
{
|
||||
@Override
|
||||
public Void withHandle(Handle handle)
|
||||
{
|
||||
Batch updateBatch = handle.createBatch();
|
||||
String sql = "UPDATE %1$s SET used_flag_last_updated = '%2$s' WHERE id = '%3$s'";
|
||||
String now = DateTimes.nowUtc().toString();
|
||||
for (String id : segmentsToUpdate) {
|
||||
updateBatch.add(StringUtils.format(sql, segmentsTable, now, id));
|
||||
}
|
||||
updateBatch.execute();
|
||||
return null;
|
||||
}
|
||||
}
|
||||
);
|
||||
}
|
||||
catch (Exception e) {
|
||||
log.warn(e, "Population of used_flag_last_updated in [%s] has failed. There may be unused segments with"
|
||||
+ " NULL values for used_flag_last_updated that won't be killed!", segmentsTable);
|
||||
return;
|
||||
}
|
||||
|
||||
totalUpdatedEntries += segmentsToUpdate.size();
|
||||
log.info("Updated a batch of %d rows in [%s] with a valid used_flag_last_updated date",
|
||||
segmentsToUpdate.size(),
|
||||
segmentsTable
|
||||
);
|
||||
try {
|
||||
Thread.sleep(10000);
|
||||
}
|
||||
catch (InterruptedException e) {
|
||||
log.info("Interrupted, exiting!");
|
||||
Thread.currentThread().interrupt();
|
||||
}
|
||||
}
|
||||
log.info(
|
||||
"Finished updating [%s] with a valid used_flag_last_updated date. %d rows updated",
|
||||
segmentsTable,
|
||||
totalUpdatedEntries
|
||||
);
|
||||
}
|
||||
|
||||
private Runnable createPollTaskForStartOrder(long startOrder, PeriodicDatabasePoll periodicDatabasePoll)
|
||||
{
|
||||
return () -> {
|
||||
|
@ -520,8 +630,9 @@ public class SqlSegmentsMetadataManager implements SegmentsMetadataManager
|
|||
try {
|
||||
int numUpdatedDatabaseEntries = connector.getDBI().withHandle(
|
||||
(Handle handle) -> handle
|
||||
.createStatement(StringUtils.format("UPDATE %s SET used=true WHERE id = :id", getSegmentsTable()))
|
||||
.createStatement(StringUtils.format("UPDATE %s SET used=true, used_flag_last_updated = :used_flag_last_updated WHERE id = :id", getSegmentsTable()))
|
||||
.bind("id", segmentId)
|
||||
.bind("used_flag_last_updated", DateTimes.nowUtc().toString())
|
||||
.execute()
|
||||
);
|
||||
// Unlike bulk markAsUsed methods: markAsUsedAllNonOvershadowedSegmentsInDataSource(),
|
||||
|
@ -975,8 +1086,14 @@ public class SqlSegmentsMetadataManager implements SegmentsMetadataManager
|
|||
}
|
||||
|
||||
@Override
|
||||
public List<Interval> getUnusedSegmentIntervals(final String dataSource, final DateTime maxEndTime, final int limit)
|
||||
public List<Interval> getUnusedSegmentIntervals(
|
||||
final String dataSource,
|
||||
final DateTime maxEndTime,
|
||||
final int limit,
|
||||
DateTime maxUsedFlagLastUpdatedTime
|
||||
)
|
||||
{
|
||||
// Note: segments whose used_flag_last_updated IS NULL are explicitly excluded here, to allow a smooth transition to a Druid version that uses the used_flag_last_updated column
|
||||
return connector.inReadOnlyTransaction(
|
||||
new TransactionCallback<List<Interval>>()
|
||||
{
|
||||
|
@ -987,7 +1104,7 @@ public class SqlSegmentsMetadataManager implements SegmentsMetadataManager
|
|||
.createQuery(
|
||||
StringUtils.format(
|
||||
"SELECT start, %2$send%2$s FROM %1$s WHERE dataSource = :dataSource AND "
|
||||
+ "%2$send%2$s <= :end AND used = false ORDER BY start, %2$send%2$s",
|
||||
+ "%2$send%2$s <= :end AND used = false AND used_flag_last_updated IS NOT NULL AND used_flag_last_updated <= :used_flag_last_updated ORDER BY start, %2$send%2$s",
|
||||
getSegmentsTable(),
|
||||
connector.getQuoteString()
|
||||
)
|
||||
|
@ -996,6 +1113,7 @@ public class SqlSegmentsMetadataManager implements SegmentsMetadataManager
|
|||
.setMaxRows(limit)
|
||||
.bind("dataSource", dataSource)
|
||||
.bind("end", maxEndTime.toString())
|
||||
.bind("used_flag_last_updated", maxUsedFlagLastUpdatedTime.toString())
|
||||
.map(
|
||||
new BaseResultSetMapper<Interval>()
|
||||
{
|
||||
|
|
|
@ -23,6 +23,7 @@ import com.fasterxml.jackson.databind.ObjectMapper;
|
|||
import com.google.common.collect.ImmutableList;
|
||||
import com.google.common.collect.Iterators;
|
||||
import org.apache.druid.java.util.common.CloseableIterators;
|
||||
import org.apache.druid.java.util.common.DateTimes;
|
||||
import org.apache.druid.java.util.common.IAE;
|
||||
import org.apache.druid.java.util.common.Intervals;
|
||||
import org.apache.druid.java.util.common.StringUtils;
|
||||
|
@ -148,13 +149,13 @@ public class SqlSegmentsMetadataQuery
|
|||
final PreparedBatch batch =
|
||||
handle.prepareBatch(
|
||||
StringUtils.format(
|
||||
"UPDATE %s SET used = ? WHERE datasource = ? AND id = ?",
|
||||
"UPDATE %s SET used = ?, used_flag_last_updated = ? WHERE datasource = ? AND id = ?",
|
||||
dbTables.getSegmentsTable()
|
||||
)
|
||||
);
|
||||
|
||||
for (SegmentId segmentId : segmentIds) {
|
||||
batch.add(used, dataSource, segmentId.toString());
|
||||
batch.add(used, DateTimes.nowUtc().toString(), dataSource, segmentId.toString());
|
||||
}
|
||||
|
||||
final int[] segmentChanges = batch.execute();
|
||||
|
@ -175,12 +176,13 @@ public class SqlSegmentsMetadataQuery
|
|||
return handle
|
||||
.createStatement(
|
||||
StringUtils.format(
|
||||
"UPDATE %s SET used=:used WHERE dataSource = :dataSource",
|
||||
"UPDATE %s SET used=:used, used_flag_last_updated = :used_flag_last_updated WHERE dataSource = :dataSource",
|
||||
dbTables.getSegmentsTable()
|
||||
)
|
||||
)
|
||||
.bind("dataSource", dataSource)
|
||||
.bind("used", false)
|
||||
.bind("used_flag_last_updated", DateTimes.nowUtc().toString())
|
||||
.execute();
|
||||
} else if (Intervals.canCompareEndpointsAsStrings(interval)
|
||||
&& interval.getStart().getYear() == interval.getEnd().getYear()) {
|
||||
|
@ -190,7 +192,7 @@ public class SqlSegmentsMetadataQuery
|
|||
return handle
|
||||
.createStatement(
|
||||
StringUtils.format(
|
||||
"UPDATE %s SET used=:used WHERE dataSource = :dataSource AND %s",
|
||||
"UPDATE %s SET used=:used, used_flag_last_updated = :used_flag_last_updated WHERE dataSource = :dataSource AND %s",
|
||||
dbTables.getSegmentsTable(),
|
||||
IntervalMode.CONTAINS.makeSqlCondition(connector.getQuoteString(), ":start", ":end")
|
||||
)
|
||||
|
@ -199,6 +201,7 @@ public class SqlSegmentsMetadataQuery
|
|||
.bind("used", false)
|
||||
.bind("start", interval.getStart().toString())
|
||||
.bind("end", interval.getEnd().toString())
|
||||
.bind("used_flag_last_updated", DateTimes.nowUtc().toString())
|
||||
.execute();
|
||||
} else {
|
||||
// Retrieve, then drop, since we can't write a WHERE clause directly.
|
||||
|
|
|
@ -88,24 +88,6 @@ public class DerbyConnector extends SQLMetadataConnector
|
|||
.isEmpty();
|
||||
}
|
||||
|
||||
@Override
|
||||
public boolean tableContainsColumn(Handle handle, String table, String column)
|
||||
{
|
||||
try {
|
||||
DatabaseMetaData databaseMetaData = handle.getConnection().getMetaData();
|
||||
ResultSet columns = databaseMetaData.getColumns(
|
||||
null,
|
||||
null,
|
||||
table.toUpperCase(Locale.ENGLISH),
|
||||
column.toUpperCase(Locale.ENGLISH)
|
||||
);
|
||||
return columns.next();
|
||||
}
|
||||
catch (SQLException e) {
|
||||
return false;
|
||||
}
|
||||
}
|
||||
|
||||
@Override
|
||||
public String getSerialType()
|
||||
{
|
||||
|
@ -186,6 +168,46 @@ public class DerbyConnector extends SQLMetadataConnector
|
|||
false
|
||||
);
|
||||
}
|
||||
/**
|
||||
* Interrogate table metadata and return true or false depending on the existence of the indicated column.
*
* Overrides the base implementation because Derby stores table and column names in uppercase.
|
||||
*
|
||||
* @param tableName The table being interrogated
|
||||
* @param columnName The column being looked for
|
||||
* @return boolean indicating the existence of the column in question
|
||||
*/
|
||||
@Override
|
||||
public boolean tableHasColumn(String tableName, String columnName)
|
||||
{
|
||||
return getDBI().withHandle(
|
||||
new HandleCallback<Boolean>()
|
||||
{
|
||||
@Override
|
||||
public Boolean withHandle(Handle handle)
|
||||
{
|
||||
try {
|
||||
if (tableExists(handle, tableName)) {
|
||||
DatabaseMetaData dbMetaData = handle.getConnection().getMetaData();
|
||||
ResultSet columns = dbMetaData.getColumns(
|
||||
null,
|
||||
null,
|
||||
tableName.toUpperCase(Locale.ENGLISH),
|
||||
columnName.toUpperCase(Locale.ENGLISH)
|
||||
);
|
||||
return columns.next();
|
||||
} else {
|
||||
return false;
|
||||
}
|
||||
}
|
||||
catch (SQLException e) {
|
||||
return false;
|
||||
}
|
||||
}
|
||||
}
|
||||
);
|
||||
}
|
||||
|
||||
@LifecycleStart
|
||||
public void start()
|
||||
{
|
||||
|
|
|
@@ -440,6 +440,7 @@ public class DruidCoordinator
);

segmentsMetadataManager.startPollingDatabasePeriodically();
segmentsMetadataManager.populateUsedFlagLastUpdatedAsync();
metadataRuleManager.start();
lookupCoordinatorManager.start();
serviceAnnouncer.announce(self);
@@ -534,6 +535,7 @@
lookupCoordinatorManager.stop();
metadataRuleManager.stop();
segmentsMetadataManager.stopPollingDatabasePeriodically();
segmentsMetadataManager.stopAsyncUsedFlagLastUpdatedUpdate();

if (balancerExec != null) {
balancerExec.shutdownNow();

@@ -55,6 +55,10 @@ public abstract class DruidCoordinatorConfig
@Default("false")
public abstract boolean getCoordinatorKillIgnoreDurationToRetain();

@Config("druid.coordinator.kill.bufferPeriod")
@Default("P30D")
public abstract Duration getCoordinatorKillBufferPeriod();

@Config("druid.coordinator.kill.maxSegments")
@Default("100")
public abstract int getCoordinatorKillMaxSegments();
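Concretely, a segment only becomes eligible for a kill task once its used_flag_last_updated timestamp is older than now minus the buffer period, and the existing durationToRetain check on the segment interval still applies. A small illustrative calculation follows, assuming the default P30D buffer; the values are hypothetical and the block is not part of the patch.

import org.apache.druid.java.util.common.DateTimes;
import org.joda.time.DateTime;
import org.joda.time.Duration;

class BufferPeriodExample
{
  public static void main(String[] args)
  {
    // The default bufferPeriod of P30D, written out in seconds the way the tests in this change express durations.
    final Duration bufferPeriod = new Duration("PT2592000S");
    // Hypothetical moment at which some segment was marked unused (its used_flag_last_updated value).
    final DateTime markedUnusedAt = DateTimes.of("2023-08-01T00:00:00Z");
    // The kill duty compares used_flag_last_updated against now - bufferPeriod, so this
    // segment cannot be picked up by KillUnusedSegments before this instant.
    final DateTime earliestKillTime = markedUnusedAt.plus(bufferPeriod);
    System.out.println("Eligible for kill no earlier than: " + earliestKillTime);
  }
}
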
@ -68,6 +68,7 @@ public class KillUnusedSegments implements CoordinatorDuty
|
|||
private final boolean ignoreRetainDuration;
|
||||
private final int maxSegmentsToKill;
|
||||
private long lastKillTime = 0;
|
||||
private final long bufferPeriod;
|
||||
|
||||
private final SegmentsMetadataManager segmentsMetadataManager;
|
||||
private final OverlordClient overlordClient;
|
||||
|
@ -94,14 +95,16 @@ public class KillUnusedSegments implements CoordinatorDuty
|
|||
this.retainDuration
|
||||
);
|
||||
}
|
||||
this.bufferPeriod = config.getCoordinatorKillBufferPeriod().getMillis();
|
||||
|
||||
this.maxSegmentsToKill = config.getCoordinatorKillMaxSegments();
|
||||
Preconditions.checkArgument(this.maxSegmentsToKill > 0, "coordinator kill maxSegments must be > 0");
|
||||
|
||||
log.info(
|
||||
"Kill Task scheduling enabled with period [%s], retainDuration [%s], maxSegmentsToKill [%s]",
|
||||
"Kill Task scheduling enabled with period [%s], retainDuration [%s], bufferPeriod [%s], maxSegmentsToKill [%s]",
|
||||
this.period,
|
||||
this.ignoreRetainDuration ? "IGNORING" : this.retainDuration,
|
||||
this.bufferPeriod,
|
||||
this.maxSegmentsToKill
|
||||
);
|
||||
|
||||
|
@ -230,7 +233,7 @@ public class KillUnusedSegments implements CoordinatorDuty
|
|||
: DateTimes.nowUtc().minus(retainDuration);
|
||||
|
||||
List<Interval> unusedSegmentIntervals = segmentsMetadataManager
|
||||
.getUnusedSegmentIntervals(dataSource, maxEndTime, maxSegmentsToKill);
|
||||
.getUnusedSegmentIntervals(dataSource, maxEndTime, maxSegmentsToKill, DateTimes.nowUtc().minus(bufferPeriod));
|
||||
|
||||
if (CollectionUtils.isNullOrEmpty(unusedSegmentIntervals)) {
|
||||
return null;
|
||||
|
|
|
@ -385,10 +385,10 @@ public class IndexerSQLMetadataStorageCoordinatorTest
|
|||
(int) derbyConnector.getDBI().<Integer>withHandle(
|
||||
handle -> {
|
||||
String request = StringUtils.format(
|
||||
"UPDATE %s SET used = false WHERE id = :id",
|
||||
"UPDATE %s SET used = false, used_flag_last_updated = :used_flag_last_updated WHERE id = :id",
|
||||
derbyConnectorRule.metadataTablesConfigSupplier().get().getSegmentsTable()
|
||||
);
|
||||
return handle.createStatement(request).bind("id", segment.getId().toString()).execute();
|
||||
return handle.createStatement(request).bind("id", segment.getId().toString()).bind("used_flag_last_updated", DateTimes.nowUtc().toString()).execute();
|
||||
}
|
||||
)
|
||||
);
|
||||
|
@ -433,8 +433,8 @@ public class IndexerSQLMetadataStorageCoordinatorTest
|
|||
handle -> {
|
||||
PreparedBatch preparedBatch = handle.prepareBatch(
|
||||
StringUtils.format(
|
||||
"INSERT INTO %1$s (id, dataSource, created_date, start, %2$send%2$s, partitioned, version, used, payload) "
|
||||
+ "VALUES (:id, :dataSource, :created_date, :start, :end, :partitioned, :version, :used, :payload)",
|
||||
"INSERT INTO %1$s (id, dataSource, created_date, start, %2$send%2$s, partitioned, version, used, payload, used_flag_last_updated) "
|
||||
+ "VALUES (:id, :dataSource, :created_date, :start, :end, :partitioned, :version, :used, :payload, :used_flag_last_updated)",
|
||||
table,
|
||||
derbyConnector.getQuoteString()
|
||||
)
|
||||
|
@ -449,7 +449,8 @@ public class IndexerSQLMetadataStorageCoordinatorTest
|
|||
.bind("partitioned", !(segment.getShardSpec() instanceof NoneShardSpec))
|
||||
.bind("version", segment.getVersion())
|
||||
.bind("used", true)
|
||||
.bind("payload", mapper.writeValueAsBytes(segment));
|
||||
.bind("payload", mapper.writeValueAsBytes(segment))
|
||||
.bind("used_flag_last_updated", DateTimes.nowUtc().toString());
|
||||
}
|
||||
|
||||
final int[] affectedRows = preparedBatch.execute();
|
||||
|
|
|
@ -30,11 +30,13 @@ import org.junit.Assert;
|
|||
import org.junit.Before;
|
||||
import org.junit.Rule;
|
||||
import org.junit.Test;
|
||||
import org.skife.jdbi.v2.Batch;
|
||||
import org.skife.jdbi.v2.DBI;
|
||||
import org.skife.jdbi.v2.Handle;
|
||||
import org.skife.jdbi.v2.exceptions.CallbackFailedException;
|
||||
import org.skife.jdbi.v2.exceptions.UnableToExecuteStatementException;
|
||||
import org.skife.jdbi.v2.exceptions.UnableToObtainConnectionException;
|
||||
import org.skife.jdbi.v2.tweak.HandleCallback;
|
||||
|
||||
import java.sql.SQLException;
|
||||
import java.sql.SQLRecoverableException;
|
||||
|
@ -45,6 +47,7 @@ import java.util.Arrays;
|
|||
import java.util.Collections;
|
||||
import java.util.HashSet;
|
||||
import java.util.List;
|
||||
import java.util.Locale;
|
||||
import java.util.Map;
|
||||
import java.util.Set;
|
||||
import java.util.stream.Collectors;
|
||||
|
@ -99,7 +102,7 @@ public class SQLMetadataConnectorTest
|
|||
for (String column : Arrays.asList("type", "group_id")) {
|
||||
Assert.assertTrue(
|
||||
StringUtils.format("Tasks table column %s was not created!", column),
|
||||
connector.tableContainsColumn(handle, taskTable, column)
|
||||
connector.tableHasColumn(taskTable, column)
|
||||
);
|
||||
}
|
||||
|
||||
|
@ -164,6 +167,45 @@ public class SQLMetadataConnectorTest
|
|||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* This is a test for the upgrade path where a cluster is upgrading from a version that did not have used_flag_last_updated
|
||||
* in the segments table.
|
||||
*/
|
||||
@Test
|
||||
public void testAlterSegmentTableAddLastUsed()
|
||||
{
|
||||
connector.createSegmentTable();
|
||||
|
||||
// Drop column used_flag_last_updated to bring us in line with pre-upgrade state
|
||||
derbyConnectorRule.getConnector().retryWithHandle(
|
||||
new HandleCallback<Void>()
|
||||
{
|
||||
@Override
|
||||
public Void withHandle(Handle handle)
|
||||
{
|
||||
final Batch batch = handle.createBatch();
|
||||
batch.add(
|
||||
StringUtils.format(
|
||||
"ALTER TABLE %1$s DROP COLUMN USED_FLAG_LAST_UPDATED",
|
||||
derbyConnectorRule.metadataTablesConfigSupplier()
|
||||
.get()
|
||||
.getSegmentsTable()
|
||||
.toUpperCase(Locale.ENGLISH)
|
||||
)
|
||||
);
|
||||
batch.execute();
|
||||
return null;
|
||||
}
|
||||
}
|
||||
);
|
||||
|
||||
connector.alterSegmentTableAddUsedFlagLastUpdated();
|
||||
connector.tableHasColumn(
|
||||
derbyConnectorRule.metadataTablesConfigSupplier().get().getSegmentsTable(),
|
||||
"USED_FLAG_LAST_UPDATED"
|
||||
);
|
||||
}
|
||||
|
||||
@Test
|
||||
public void testInsertOrUpdate()
|
||||
{
|
||||
|
|
|
@ -37,6 +37,8 @@ import org.apache.druid.server.metrics.NoopServiceEmitter;
|
|||
import org.apache.druid.timeline.DataSegment;
|
||||
import org.apache.druid.timeline.SegmentId;
|
||||
import org.apache.druid.timeline.partition.NoneShardSpec;
|
||||
import org.joda.time.DateTime;
|
||||
import org.joda.time.Duration;
|
||||
import org.joda.time.Interval;
|
||||
import org.joda.time.Period;
|
||||
import org.junit.After;
|
||||
|
@ -44,8 +46,13 @@ import org.junit.Assert;
|
|||
import org.junit.Before;
|
||||
import org.junit.Rule;
|
||||
import org.junit.Test;
|
||||
import org.skife.jdbi.v2.Handle;
|
||||
import org.skife.jdbi.v2.tweak.HandleCallback;
|
||||
|
||||
import java.io.IOException;
|
||||
import java.util.List;
|
||||
import java.util.Locale;
|
||||
import java.util.Map;
|
||||
import java.util.Set;
|
||||
import java.util.stream.Collectors;
|
||||
|
||||
|
@ -101,8 +108,18 @@ public class SqlSegmentsMetadataManagerTest
|
|||
);
|
||||
|
||||
private void publish(DataSegment segment, boolean used) throws IOException
|
||||
{
|
||||
publish(segment, used, DateTimes.nowUtc());
|
||||
}
|
||||
|
||||
private void publish(DataSegment segment, boolean used, DateTime usedFlagLastUpdated) throws IOException
|
||||
{
|
||||
boolean partitioned = !(segment.getShardSpec() instanceof NoneShardSpec);
|
||||
|
||||
String usedFlagLastUpdatedStr = null;
|
||||
if (null != usedFlagLastUpdated) {
|
||||
usedFlagLastUpdatedStr = usedFlagLastUpdated.toString();
|
||||
}
|
||||
publisher.publishSegment(
|
||||
segment.getId().toString(),
|
||||
segment.getDataSource(),
|
||||
|
@ -112,7 +129,8 @@ public class SqlSegmentsMetadataManagerTest
|
|||
partitioned,
|
||||
segment.getVersion(),
|
||||
used,
|
||||
jsonMapper.writeValueAsBytes(segment)
|
||||
jsonMapper.writeValueAsBytes(segment),
|
||||
usedFlagLastUpdatedStr
|
||||
);
|
||||
}
|
||||
|
||||
|
@ -350,7 +368,8 @@ public class SqlSegmentsMetadataManagerTest
|
|||
true,
|
||||
"corrupt-version",
|
||||
true,
|
||||
StringUtils.toUtf8("corrupt-payload")
|
||||
StringUtils.toUtf8("corrupt-payload"),
|
||||
"corrupt-last-used-date"
|
||||
);
|
||||
|
||||
EmittingLogger.registerEmitter(new NoopServiceEmitter());
|
||||
|
@ -364,28 +383,76 @@ public class SqlSegmentsMetadataManagerTest
|
|||
}
|
||||
|
||||
@Test
|
||||
public void testGetUnusedSegmentIntervals()
|
||||
public void testGetUnusedSegmentIntervals() throws IOException
|
||||
{
|
||||
sqlSegmentsMetadataManager.startPollingDatabasePeriodically();
|
||||
sqlSegmentsMetadataManager.poll();
|
||||
|
||||
// We alter the segment table to allow a nullable used_flag_last_updated in order to test compatibility during a Druid upgrade from a version without used_flag_last_updated.
|
||||
derbyConnectorRule.allowUsedFlagLastUpdatedToBeNullable();
|
||||
|
||||
Assert.assertTrue(sqlSegmentsMetadataManager.isPollingDatabasePeriodically());
|
||||
int numChangedSegments = sqlSegmentsMetadataManager.markAsUnusedAllSegmentsInDataSource("wikipedia");
|
||||
Assert.assertEquals(2, numChangedSegments);
|
||||
|
||||
String newDs = "newDataSource";
|
||||
final DataSegment newSegment = createSegment(
|
||||
newDs,
|
||||
"2017-10-15T00:00:00.000/2017-10-16T00:00:00.000",
|
||||
"2017-10-15T20:19:12.565Z",
|
||||
"wikipedia2/index/y=2017/m=10/d=15/2017-10-16T20:19:12.565Z/0/index.zip",
|
||||
0
|
||||
);
|
||||
publish(newSegment, false, DateTimes.nowUtc().minus(Duration.parse("PT7200S").getMillis()));
|
||||
|
||||
final DataSegment newSegment2 = createSegment(
|
||||
newDs,
|
||||
"2017-10-16T00:00:00.000/2017-10-17T00:00:00.000",
|
||||
"2017-10-15T20:19:12.565Z",
|
||||
"wikipedia2/index/y=2017/m=10/d=15/2017-10-16T20:19:12.565Z/0/index.zip",
|
||||
0
|
||||
);
|
||||
publish(newSegment2, false, DateTimes.nowUtc().minus(Duration.parse("PT172800S").getMillis()));
|
||||
|
||||
final DataSegment newSegment3 = createSegment(
|
||||
newDs,
|
||||
"2017-10-17T00:00:00.000/2017-10-18T00:00:00.000",
|
||||
"2017-10-15T20:19:12.565Z",
|
||||
"wikipedia2/index/y=2017/m=10/d=15/2017-10-16T20:19:12.565Z/0/index.zip",
|
||||
0
|
||||
);
|
||||
publish(newSegment3, false, null);
|
||||
|
||||
Assert.assertEquals(
|
||||
ImmutableList.of(segment2.getInterval()),
|
||||
sqlSegmentsMetadataManager.getUnusedSegmentIntervals("wikipedia", DateTimes.of("3000"), 1)
|
||||
sqlSegmentsMetadataManager.getUnusedSegmentIntervals("wikipedia", DateTimes.of("3000"), 1, DateTimes.COMPARE_DATE_AS_STRING_MAX)
|
||||
);
|
||||
|
||||
// Test the DateTime maxEndTime argument of getUnusedSegmentIntervals
|
||||
Assert.assertEquals(
|
||||
ImmutableList.of(segment2.getInterval()),
|
||||
sqlSegmentsMetadataManager.getUnusedSegmentIntervals("wikipedia", DateTimes.of(2012, 1, 7, 0, 0), 1)
|
||||
sqlSegmentsMetadataManager.getUnusedSegmentIntervals("wikipedia", DateTimes.of(2012, 1, 7, 0, 0), 1, DateTimes.COMPARE_DATE_AS_STRING_MAX)
|
||||
);
|
||||
|
||||
Assert.assertEquals(
|
||||
ImmutableList.of(segment2.getInterval(), segment1.getInterval()),
|
||||
sqlSegmentsMetadataManager.getUnusedSegmentIntervals("wikipedia", DateTimes.of("3000"), 5)
|
||||
sqlSegmentsMetadataManager.getUnusedSegmentIntervals("wikipedia", DateTimes.of("3000"), 5, DateTimes.COMPARE_DATE_AS_STRING_MAX)
|
||||
);
|
||||
|
||||
// Test a buffer period that should exclude some segments
|
||||
|
||||
// The wikipedia datasource has segments generated with a last used time roughly equal to the time of the test run. None of these segments should be selected with a buffer period of 1 day
|
||||
Assert.assertEquals(
|
||||
ImmutableList.of(),
|
||||
sqlSegmentsMetadataManager.getUnusedSegmentIntervals("wikipedia", DateTimes.of("3000"), 5, DateTimes.nowUtc().minus(Duration.parse("PT86400S")))
|
||||
);
|
||||
|
||||
// One of the 3 segments in newDs has a NULL used_flag_last_updated, which means getUnusedSegmentIntervals never returns it
// Another of the 3 segments in newDs has a used_flag_last_updated older than 1 day, which means it should be returned
// The last of the 3 segments in newDs has a used_flag_last_updated less than one day old and should not be returned
|
||||
Assert.assertEquals(
|
||||
ImmutableList.of(newSegment2.getInterval()),
|
||||
sqlSegmentsMetadataManager.getUnusedSegmentIntervals(newDs, DateTimes.of("3000"), 5, DateTimes.nowUtc().minus(Duration.parse("PT86400S")))
|
||||
);
|
||||
}
|
||||
|
||||
|
@ -870,4 +937,43 @@ public class SqlSegmentsMetadataManagerTest
|
|||
Assert.assertTrue(dataSegmentSet.contains(newSegment2));
|
||||
}
|
||||
|
||||
@Test
|
||||
public void testPopulateUsedFlagLastUpdated() throws IOException
|
||||
{
|
||||
derbyConnectorRule.allowUsedFlagLastUpdatedToBeNullable();
|
||||
final DataSegment newSegment = createSegment(
|
||||
"dummyDS",
|
||||
"2017-10-17T00:00:00.000/2017-10-18T00:00:00.000",
|
||||
"2017-10-15T20:19:12.565Z",
|
||||
"wikipedia2/index/y=2017/m=10/d=15/2017-10-16T20:19:12.565Z/0/index.zip",
|
||||
0
|
||||
);
|
||||
publish(newSegment, false, null);
|
||||
Assert.assertTrue(getCountOfRowsWithLastUsedNull() > 0);
|
||||
sqlSegmentsMetadataManager.populateUsedFlagLastUpdated();
|
||||
Assert.assertTrue(getCountOfRowsWithLastUsedNull() == 0);
|
||||
}
|
||||
|
||||
private int getCountOfRowsWithLastUsedNull()
|
||||
{
|
||||
return derbyConnectorRule.getConnector().retryWithHandle(
|
||||
new HandleCallback<Integer>()
|
||||
{
|
||||
@Override
|
||||
public Integer withHandle(Handle handle)
|
||||
{
|
||||
List<Map<String, Object>> lst = handle.select(
|
||||
StringUtils.format(
|
||||
"SELECT * FROM %1$s WHERE USED_FLAG_LAST_UPDATED IS NULL",
|
||||
derbyConnectorRule.metadataTablesConfigSupplier()
|
||||
.get()
|
||||
.getSegmentsTable()
|
||||
.toUpperCase(Locale.ENGLISH)
|
||||
)
|
||||
);
|
||||
return lst.size();
|
||||
}
|
||||
}
|
||||
);
|
||||
}
|
||||
}
|
||||
|
|
|
@ -25,10 +25,14 @@ import org.apache.druid.java.util.common.StringUtils;
|
|||
import org.apache.druid.metadata.storage.derby.DerbyConnector;
|
||||
import org.junit.Assert;
|
||||
import org.junit.rules.ExternalResource;
|
||||
import org.skife.jdbi.v2.Batch;
|
||||
import org.skife.jdbi.v2.DBI;
|
||||
import org.skife.jdbi.v2.Handle;
|
||||
import org.skife.jdbi.v2.exceptions.UnableToObtainConnectionException;
|
||||
import org.skife.jdbi.v2.tweak.HandleCallback;
|
||||
|
||||
import java.sql.SQLException;
|
||||
import java.util.Locale;
|
||||
import java.util.UUID;
|
||||
|
||||
public class TestDerbyConnector extends DerbyConnector
|
||||
|
@ -135,5 +139,27 @@ public class TestDerbyConnector extends DerbyConnector
|
|||
{
|
||||
return dbTables;
|
||||
}
|
||||
|
||||
public void allowUsedFlagLastUpdatedToBeNullable()
|
||||
{
|
||||
connector.retryWithHandle(
|
||||
new HandleCallback<Void>()
|
||||
{
|
||||
@Override
|
||||
public Void withHandle(Handle handle)
|
||||
{
|
||||
final Batch batch = handle.createBatch();
|
||||
batch.add(
|
||||
StringUtils.format(
|
||||
"ALTER TABLE %1$s ALTER COLUMN USED_FLAG_LAST_UPDATED NULL",
|
||||
dbTables.get().getSegmentsTable().toUpperCase(Locale.ENGLISH)
|
||||
)
|
||||
);
|
||||
batch.execute();
|
||||
return null;
|
||||
}
|
||||
}
|
||||
);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
|
|
@ -46,6 +46,7 @@ public class TestDruidCoordinatorConfig extends DruidCoordinatorConfig
|
|||
private final int curatorLoadQueuePeonNumCallbackThreads;
|
||||
private final Duration httpLoadQueuePeonHostTimeout;
|
||||
private final int httpLoadQueuePeonBatchSize;
|
||||
private final Duration coordinatorKillBufferPeriod;
|
||||
|
||||
public TestDruidCoordinatorConfig(
|
||||
Duration coordinatorStartDelay,
|
||||
|
@ -70,7 +71,8 @@ public class TestDruidCoordinatorConfig extends DruidCoordinatorConfig
|
|||
Duration httpLoadQueuePeonRepeatDelay,
|
||||
Duration httpLoadQueuePeonHostTimeout,
|
||||
int httpLoadQueuePeonBatchSize,
|
||||
int curatorLoadQueuePeonNumCallbackThreads
|
||||
int curatorLoadQueuePeonNumCallbackThreads,
|
||||
Duration coordinatorKillBufferPeriod
|
||||
)
|
||||
{
|
||||
this.coordinatorStartDelay = coordinatorStartDelay;
|
||||
|
@ -96,6 +98,7 @@ public class TestDruidCoordinatorConfig extends DruidCoordinatorConfig
|
|||
this.httpLoadQueuePeonHostTimeout = httpLoadQueuePeonHostTimeout;
|
||||
this.httpLoadQueuePeonBatchSize = httpLoadQueuePeonBatchSize;
|
||||
this.curatorLoadQueuePeonNumCallbackThreads = curatorLoadQueuePeonNumCallbackThreads;
|
||||
this.coordinatorKillBufferPeriod = coordinatorKillBufferPeriod;
|
||||
}
|
||||
|
||||
@Override
|
||||
|
@ -236,6 +239,12 @@ public class TestDruidCoordinatorConfig extends DruidCoordinatorConfig
|
|||
return httpLoadQueuePeonBatchSize;
|
||||
}
|
||||
|
||||
@Override
|
||||
public Duration getCoordinatorKillBufferPeriod()
|
||||
{
|
||||
return coordinatorKillBufferPeriod;
|
||||
}
|
||||
|
||||
public static class Builder
|
||||
{
|
||||
private static final Duration DEFAULT_COORDINATOR_START_DELAY = new Duration("PT300s");
|
||||
|
@ -261,6 +270,7 @@ public class TestDruidCoordinatorConfig extends DruidCoordinatorConfig
|
|||
private static final int DEFAULT_HTTP_LOAD_QUEUE_PEON_BATCH_SIZE = 1;
|
||||
private static final Duration DEFAULT_COORDINATOR_AUDIT_KILL_PERIOD = new Duration("PT86400s");
|
||||
private static final Duration DEFAULT_COORDINATOR_AUTIT_KILL_DURATION_TO_RETAIN = new Duration("PT7776000s");
|
||||
private static final Duration DEFAULT_COORDINATOR_KILL_BUFFER_PERIOD = new Duration("PT86400s");
|
||||
|
||||
|
||||
private Duration coordinatorStartDelay;
|
||||
|
@ -286,6 +296,7 @@ public class TestDruidCoordinatorConfig extends DruidCoordinatorConfig
|
|||
private Integer httpLoadQueuePeonBatchSize;
|
||||
private Duration coordinatorAuditKillPeriod;
|
||||
private Duration coordinatorAuditKillDurationToRetain;
|
||||
private Duration coordinatorKillBufferPeriod;
|
||||
|
||||
public Builder()
|
||||
{
|
||||
|
@ -429,6 +440,12 @@ public class TestDruidCoordinatorConfig extends DruidCoordinatorConfig
|
|||
return this;
|
||||
}
|
||||
|
||||
public Builder withCoordinatorKillBufferPeriod(Duration coordinatorKillBufferPeriod)
|
||||
{
|
||||
this.coordinatorKillBufferPeriod = coordinatorKillBufferPeriod;
|
||||
return this;
|
||||
}
|
||||
|
||||
public TestDruidCoordinatorConfig build()
|
||||
{
|
||||
return new TestDruidCoordinatorConfig(
|
||||
|
@ -455,7 +472,8 @@ public class TestDruidCoordinatorConfig extends DruidCoordinatorConfig
|
|||
httpLoadQueuePeonHostTimeout == null ? DEFAULT_HTTP_LOAD_QUEUE_PEON_HOST_TIMEOUT : httpLoadQueuePeonHostTimeout,
|
||||
httpLoadQueuePeonBatchSize == null ? DEFAULT_HTTP_LOAD_QUEUE_PEON_BATCH_SIZE : httpLoadQueuePeonBatchSize,
|
||||
curatorLoadQueuePeonNumCallbackThreads == null ? DEFAULT_CURATOR_LOAD_QUEUE_PEON_NUM_CALLBACK_THREADS
|
||||
: curatorLoadQueuePeonNumCallbackThreads
|
||||
: curatorLoadQueuePeonNumCallbackThreads,
|
||||
coordinatorKillBufferPeriod == null ? DEFAULT_COORDINATOR_KILL_BUFFER_PERIOD : coordinatorKillBufferPeriod
|
||||
);
|
||||
}
|
||||
|
||||
|
|
|
@ -103,6 +103,7 @@ public class KillUnusedSegmentsTest
|
|||
Mockito.doReturn(DURATION_TO_RETAIN).when(config).getCoordinatorKillDurationToRetain();
|
||||
Mockito.doReturn(INDEXING_PERIOD).when(config).getCoordinatorIndexingPeriod();
|
||||
Mockito.doReturn(MAX_SEGMENTS_TO_KILL).when(config).getCoordinatorKillMaxSegments();
|
||||
Mockito.doReturn(Duration.parse("PT3154000000S")).when(config).getCoordinatorKillBufferPeriod();
|
||||
|
||||
Mockito.doReturn(Collections.singleton("DS1"))
|
||||
.when(coordinatorDynamicConfig).getSpecificDataSourcesToKillUnusedSegmentsIn();
|
||||
|
@ -129,7 +130,8 @@ public class KillUnusedSegmentsTest
|
|||
segmentsMetadataManager.getUnusedSegmentIntervals(
|
||||
ArgumentMatchers.anyString(),
|
||||
ArgumentMatchers.any(),
|
||||
ArgumentMatchers.anyInt()
|
||||
ArgumentMatchers.anyInt(),
|
||||
ArgumentMatchers.any()
|
||||
)
|
||||
).thenAnswer(invocation -> {
|
||||
DateTime maxEndTime = invocation.getArgument(1);
|
||||
|
@ -153,7 +155,8 @@ public class KillUnusedSegmentsTest
|
|||
Mockito.doReturn(null).when(segmentsMetadataManager).getUnusedSegmentIntervals(
|
||||
ArgumentMatchers.anyString(),
|
||||
ArgumentMatchers.any(),
|
||||
ArgumentMatchers.anyInt()
|
||||
ArgumentMatchers.anyInt(),
|
||||
ArgumentMatchers.any()
|
||||
);
|
||||
|
||||
mockTaskSlotUsage(1.0, Integer.MAX_VALUE, 1, 10);
|
||||
|
|
|
@ -199,7 +199,7 @@ public class TestSegmentsMetadataManager implements SegmentsMetadataManager
|
|||
}
|
||||
|
||||
@Override
|
||||
public List<Interval> getUnusedSegmentIntervals(String dataSource, DateTime maxEndTime, int limit)
|
||||
public List<Interval> getUnusedSegmentIntervals(String dataSource, DateTime maxEndTime, int limit, DateTime maxUsedFlagLastUpdatedTime)
|
||||
{
|
||||
return null;
|
||||
}
|
||||
|
@ -209,4 +209,14 @@ public class TestSegmentsMetadataManager implements SegmentsMetadataManager
|
|||
{
|
||||
|
||||
}
|
||||
|
||||
@Override
|
||||
public void populateUsedFlagLastUpdatedAsync()
|
||||
{
|
||||
}
|
||||
|
||||
@Override
|
||||
public void stopAsyncUsedFlagLastUpdatedUpdate()
|
||||
{
|
||||
}
|
||||
}
|
||||
|
|
|
@ -76,7 +76,8 @@ public class Main
|
|||
DumpSegment.class,
|
||||
ResetCluster.class,
|
||||
ValidateSegments.class,
|
||||
ExportMetadata.class
|
||||
ExportMetadata.class,
|
||||
UpdateTables.class
|
||||
);
|
||||
builder.withGroup("tools")
|
||||
.withDescription("Various tools for working with Druid")
|
||||
|
|
|
@@ -0,0 +1,134 @@
/*
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements. See the NOTICE file
 * distributed with this work for additional information
 * regarding copyright ownership. The ASF licenses this file
 * to you under the Apache License, Version 2.0 (the
 * "License"); you may not use this file except in compliance
 * with the License. You may obtain a copy of the License at
 *
 *   http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing,
 * software distributed under the License is distributed on an
 * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
 * KIND, either express or implied. See the License for the
 * specific language governing permissions and limitations
 * under the License.
 */

package org.apache.druid.cli;

import com.github.rvesse.airline.annotations.Command;
import com.github.rvesse.airline.annotations.Option;
import com.github.rvesse.airline.annotations.restrictions.Required;
import com.google.common.collect.ImmutableList;
import com.google.inject.Injector;
import com.google.inject.Key;
import com.google.inject.Module;
import org.apache.druid.guice.DruidProcessingModule;
import org.apache.druid.guice.JsonConfigProvider;
import org.apache.druid.guice.QueryRunnerFactoryModule;
import org.apache.druid.guice.QueryableModule;
import org.apache.druid.guice.annotations.Self;
import org.apache.druid.java.util.common.logger.Logger;
import org.apache.druid.metadata.MetadataStorageConnector;
import org.apache.druid.metadata.MetadataStorageConnectorConfig;
import org.apache.druid.metadata.MetadataStorageTablesConfig;
import org.apache.druid.server.DruidNode;

import java.util.List;

@Command(
    name = "metadata-update",
    description = "Controlled update of metadata storage"
)

public class UpdateTables extends GuiceRunnable
{
  private static final String SEGMENTS_TABLE_ADD_USED_FLAG_LAST_UPDATED = "add-used-flag-last-updated-to-segments";

  @Option(name = "--connectURI", description = "Database JDBC connection string")
  @Required
  private String connectURI;

  @Option(name = "--user", description = "Database username")
  @Required
  private String user;

  @Option(name = "--password", description = "Database password")
  @Required
  private String password;

  @Option(name = "--base", description = "Base table name")
  private String base;

  @Option(name = "--action", description = "Action Name")
  private String action_name;

  private static final Logger log = new Logger(CreateTables.class);

  public UpdateTables()
  {
    super(log);
  }

  @Override
  protected List<? extends Module> getModules()
  {
    return ImmutableList.of(
        // It's unknown why those modules are required in CreateTables, and if all of those modules are required or not.
        // Maybe some of those modules could be removed.
        // See https://github.com/apache/druid/pull/4429#discussion_r123602930
        new DruidProcessingModule(),
        new QueryableModule(),
        new QueryRunnerFactoryModule(),
        binder -> {
          JsonConfigProvider.bindInstance(
              binder,
              Key.get(MetadataStorageConnectorConfig.class),
              new MetadataStorageConnectorConfig()
              {
                @Override
                public String getConnectURI()
                {
                  return connectURI;
                }

                @Override
                public String getUser()
                {
                  return user;
                }

                @Override
                public String getPassword()
                {
                  return password;
                }
              }
          );
          JsonConfigProvider.bindInstance(
              binder,
              Key.get(MetadataStorageTablesConfig.class),
              MetadataStorageTablesConfig.fromBase(base)
          );
          JsonConfigProvider.bindInstance(
              binder,
              Key.get(DruidNode.class, Self.class),
              new DruidNode("tools", "localhost", false, -1, null, true, false)
          );
        }
    );
  }

  @Override
  public void run()
  {
    final Injector injector = makeInjector();
    MetadataStorageConnector dbConnector = injector.getInstance(MetadataStorageConnector.class);
    if (SEGMENTS_TABLE_ADD_USED_FLAG_LAST_UPDATED.equals(action_name)) {
      dbConnector.alterSegmentTableAddUsedFlagLastUpdated();
    }
  }
}

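With the command registered under the `tools` group in Main.java (see the hunk above), the single action wired up here is `add-used-flag-last-updated-to-segments`, which adds the `used_flag_last_updated` column to an existing segments table. It would presumably be invoked like the other Druid CLI tools; the following line is a hypothetical example only, with placeholder classpath, connection string, and credentials that must be adapted to the deployment:

  java -classpath "lib/*" org.apache.druid.cli.Main tools metadata-update --connectURI "jdbc:mysql://localhost:3306/druid" --user druid --password diurd --base druid --action add-used-flag-last-updated-to-segments
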
@@ -53,6 +53,7 @@ Ceph
CloudWatch
ColumnDescriptor
Corretto
CLI
DDL
DML
DNS