mirror of https://github.com/apache/druid.git
Clean up after add kill bufferPeriod (#14868)
Follow up changes to #12599

Changes:
- Rename column `used_flag_last_updated` to `used_status_last_updated`
- Remove new CLI tool `UpdateTables`.
  - We already have a `CreateTables` with similar functionality, which should be able to handle update cases too.
  - Any user running the cluster for the first time should either just have `connector.createTables` enabled or run `CreateTables`, which should create tables at the latest version.
  - For instance, the `UpdateTables` tool would be inadequate when a new metadata table has been added to Druid, and users would have to run `CreateTables` anyway.
- Remove `upgrade-prep.md` and include that info in `metadata-init.md`.
- Fix log messages to adhere to Druid style
- Use lambdas
Parent: 1e14df4c49
Commit: 097b645005
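For readers following the schema change itself, the rename amounts to DDL of this shape on an existing metadata store. This is a hypothetical sketch only: the commit routes schema updates through `CreateTables` / `connector.createTables` rather than hand-run SQL, and the `RENAME COLUMN` syntax shown is PostgreSQL / MySQL 8+.

```sql
-- Hypothetical equivalent of the rename on an existing segments table;
-- actual upgrades are handled by the CreateTables tool on startup.
ALTER TABLE druid_segments
RENAME COLUMN used_flag_last_updated TO used_status_last_updated;
```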
@@ -103,8 +103,8 @@ system. The table has two main functional columns, the other columns are for ind
 Value 1 in the `used` column means that the segment should be "used" by the cluster (i.e., it should be loaded and
 available for requests). Value 0 means that the segment should not be loaded into the cluster. We do this as a means of
 unloading segments from the cluster without actually removing their metadata (which allows for simpler rolling back if
-that is ever an issue). The `used` column has a corresponding `used_flag_last_updated` column that indicates the date at the instant
-that the `used` status of the segment was last updated. This information can be used by the coordinator to determine if
+that is ever an issue). The `used` column has a corresponding `used_status_last_updated` column which denotes the time
+when the `used` status of the segment was last updated. This information can be used by the Coordinator to determine if
 a segment is a candidate for deletion (if automated segment killing is enabled).
 
 The `payload` column stores a JSON blob that has all of the metadata for the segment.
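To make the kill-eligibility rule concrete, here is a minimal inspection query against the renamed column. It is a sketch only: it assumes the default `druid_segments` table name, the ISO-8601 string timestamps visible in the sample data later in this diff (which compare correctly as strings), and an arbitrary cutoff; the Coordinator applies its own buffer-period logic internally.

```sql
-- Hypothetical inspection query: unused segments whose used status last
-- changed before the cutoff, i.e. candidates for automated kill.
SELECT id, dataSource, used_status_last_updated
FROM druid_segments
WHERE used = 0
  AND used_status_last_updated < '2023-08-01T00:00:00.000Z';
```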
@@ -57,6 +57,8 @@ Update your Druid runtime properties with the new metadata configuration.
 
 ### Create Druid tables
 
+**If you have set `druid.metadata.storage.connector.createTables` to `true` (which is the default), and your metadata connect user has DDL privileges, you can disregard this section as Druid will create metadata tables automatically on start up.**
+
 Druid provides a `metadata-init` tool for creating Druid's metadata tables. After initializing the Druid database, you can run the commands shown below from the root of the Druid package to initialize the tables.
 
 In the example commands below:
@@ -82,6 +84,10 @@ cd ${DRUID_ROOT}
 java -classpath "lib/*" -Dlog4j.configurationFile=conf/druid/cluster/_common/log4j2.xml -Ddruid.extensions.directory="extensions" -Ddruid.extensions.loadList="[\"postgresql-metadata-storage\"]" -Ddruid.metadata.storage.type=postgresql -Ddruid.node.type=metadata-init org.apache.druid.cli.Main tools metadata-init --connectURI="<postgresql-uri>" --user <user> --password <pass> --base druid
 ```
 
+### Update Druid tables to latest compatible schema
+
+The same command as above can be used to update Druid metadata tables to the latest version. If any table already exists, it is not created again but any ALTER statements that may be required are still executed.
+
 ### Import metadata
 
 After initializing the tables, please refer to the [import commands](../operations/export-metadata.md#importing-metadata) for your target database.
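As an illustration of what that update step can entail, a segments table created before this change would be brought up to date with DDL along these lines. This is a sketch only; the tool generates the exact dialect-specific statements, and the `varchar(255)` type is assumed from the old column's definition in the removed `upgrade-prep.md` below.

```sql
-- Illustrative only: the kind of ALTER the update step issues when the
-- segments table lacks the used_status_last_updated column.
ALTER TABLE druid_segments
ADD used_status_last_updated varchar(255);
```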
@@ -1,71 +0,0 @@
----
-id: upgrade-prep
-title: "Upgrade Prep"
----
-
-<!--
-~ Licensed to the Apache Software Foundation (ASF) under one
-~ or more contributor license agreements. See the NOTICE file
-~ distributed with this work for additional information
-~ regarding copyright ownership. The ASF licenses this file
-~ to you under the Apache License, Version 2.0 (the
-~ "License"); you may not use this file except in compliance
-~ with the License. You may obtain a copy of the License at
-~
-~ http://www.apache.org/licenses/LICENSE-2.0
-~
-~ Unless required by applicable law or agreed to in writing,
-~ software distributed under the License is distributed on an
-~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-~ KIND, either express or implied. See the License for the
-~ specific language governing permissions and limitations
-~ under the License.
--->
-
-## Upgrade to `0.24+` from `0.23` and earlier
-
-### Altering segments table
-
-**If you have set `druid.metadata.storage.connector.createTables` to `true` (which is the default), and your metadata connect user has DDL privileges, you can disregard this section.**
-
-**The coordinator and overlord services will fail if you do not execute this change prior to the upgrade**
-
-A new column, `used_flag_last_updated`, is needed in the segments table to support new
-segment killing functionality. You can manually alter the table, or you can use
-a CLI tool to perform the update.
-
-#### CLI tool
-
-Druid provides a `metadata-update` tool for updating Druid's metadata tables.
-
-In the example commands below:
-
-- `lib` is the Druid lib directory
-- `extensions` is the Druid extensions directory
-- `base` corresponds to the value of `druid.metadata.storage.tables.base` in the configuration, `druid` by default.
-- The `--connectURI` parameter corresponds to the value of `druid.metadata.storage.connector.connectURI`.
-- The `--user` parameter corresponds to the value of `druid.metadata.storage.connector.user`.
-- The `--password` parameter corresponds to the value of `druid.metadata.storage.connector.password`.
-- The `--action` parameter corresponds to the update action you are executing. In this case it is: `add-last-used-to-segments`
-
-##### MySQL
-
-```bash
-cd ${DRUID_ROOT}
-java -classpath "lib/*" -Dlog4j.configurationFile=conf/druid/cluster/_common/log4j2.xml -Ddruid.extensions.directory="extensions" -Ddruid.extensions.loadList=[\"mysql-metadata-storage\"] -Ddruid.metadata.storage.type=mysql org.apache.druid.cli.Main tools metadata-update --connectURI="<mysql-uri>" --user <user> --password <pass> --base druid --action add-used-flag-last-updated-to-segments
-```
-
-##### PostgreSQL
-
-```bash
-cd ${DRUID_ROOT}
-java -classpath "lib/*" -Dlog4j.configurationFile=conf/druid/cluster/_common/log4j2.xml -Ddruid.extensions.directory="extensions" -Ddruid.extensions.loadList=[\"postgresql-metadata-storage\"] -Ddruid.metadata.storage.type=postgresql org.apache.druid.cli.Main tools metadata-update --connectURI="<postgresql-uri>" --user <user> --password <pass> --base druid --action add-used-flag-last-updated-to-segments
-```
-
-
-#### Manual ALTER TABLE
-
-```SQL
-ALTER TABLE druid_segments
-ADD used_flag_last_updated varchar(255);
-```
@@ -13,8 +13,8 @@
 -- See the License for the specific language governing permissions and
 -- limitations under the License.
 
-INSERT INTO druid_segments (id,dataSource,created_date,start,end,partitioned,version,used,payload,used_flag_last_updated) VALUES ('twitterstream_2013-01-01T00:00:00.000Z_2013-01-02T00:00:00.000Z_2013-01-02T04:13:41.980Z_v9','twitterstream','2013-05-13T01:08:18.192Z','2013-01-01T00:00:00.000Z','2013-01-02T00:00:00.000Z',0,'2013-01-02T04:13:41.980Z_v9',1,'{\"dataSource\":\"twitterstream\",\"interval\":\"2013-01-01T00:00:00.000Z/2013-01-02T00:00:00.000Z\",\"version\":\"2013-01-02T04:13:41.980Z_v9\",\"loadSpec\":{\"type\":\"s3_zip\",\"bucket\":\"static.druid.io\",\"key\":\"data/segments/twitterstream/2013-01-01T00:00:00.000Z_2013-01-02T00:00:00.000Z/2013-01-02T04:13:41.980Z_v9/0/index.zip\"},\"dimensions\":\"has_links,first_hashtag,user_time_zone,user_location,has_mention,user_lang,rt_name,user_name,is_retweet,is_viral,has_geo,url_domain,user_mention_name,reply_to_name\",\"metrics\":\"count,tweet_length,num_followers,num_links,num_mentions,num_hashtags,num_favorites,user_total_tweets\",\"shardSpec\":{\"type\":\"none\"},\"binaryVersion\":9,\"size\":445235220,\"identifier\":\"twitterstream_2013-01-01T00:00:00.000Z_2013-01-02T00:00:00.000Z_2013-01-02T04:13:41.980Z_v9\"}','1970-01-01T00:00:00.000Z');
-INSERT INTO druid_segments (id,dataSource,created_date,start,end,partitioned,version,used,payload,used_flag_last_updated) VALUES ('twitterstream_2013-01-02T00:00:00.000Z_2013-01-03T00:00:00.000Z_2013-01-03T03:44:58.791Z_v9','twitterstream','2013-05-13T00:03:28.640Z','2013-01-02T00:00:00.000Z','2013-01-03T00:00:00.000Z',0,'2013-01-03T03:44:58.791Z_v9',1,'{\"dataSource\":\"twitterstream\",\"interval\":\"2013-01-02T00:00:00.000Z/2013-01-03T00:00:00.000Z\",\"version\":\"2013-01-03T03:44:58.791Z_v9\",\"loadSpec\":{\"type\":\"s3_zip\",\"bucket\":\"static.druid.io\",\"key\":\"data/segments/twitterstream/2013-01-02T00:00:00.000Z_2013-01-03T00:00:00.000Z/2013-01-03T03:44:58.791Z_v9/0/index.zip\"},\"dimensions\":\"has_links,first_hashtag,user_time_zone,user_location,has_mention,user_lang,rt_name,user_name,is_retweet,is_viral,has_geo,url_domain,user_mention_name,reply_to_name\",\"metrics\":\"count,tweet_length,num_followers,num_links,num_mentions,num_hashtags,num_favorites,user_total_tweets\",\"shardSpec\":{\"type\":\"none\"},\"binaryVersion\":9,\"size\":435325540,\"identifier\":\"twitterstream_2013-01-02T00:00:00.000Z_2013-01-03T00:00:00.000Z_2013-01-03T03:44:58.791Z_v9\"}','1970-01-01T00:00:00.000Z');
-INSERT INTO druid_segments (id,dataSource,created_date,start,end,partitioned,version,used,payload,used_flag_last_updated) VALUES ('twitterstream_2013-01-03T00:00:00.000Z_2013-01-04T00:00:00.000Z_2013-01-04T04:09:13.590Z_v9','twitterstream','2013-05-13T00:03:48.807Z','2013-01-03T00:00:00.000Z','2013-01-04T00:00:00.000Z',0,'2013-01-04T04:09:13.590Z_v9',1,'{\"dataSource\":\"twitterstream\",\"interval\":\"2013-01-03T00:00:00.000Z/2013-01-04T00:00:00.000Z\",\"version\":\"2013-01-04T04:09:13.590Z_v9\",\"loadSpec\":{\"type\":\"s3_zip\",\"bucket\":\"static.druid.io\",\"key\":\"data/segments/twitterstream/2013-01-03T00:00:00.000Z_2013-01-04T00:00:00.000Z/2013-01-04T04:09:13.590Z_v9/0/index.zip\"},\"dimensions\":\"has_links,first_hashtag,user_time_zone,user_location,has_mention,user_lang,rt_name,user_name,is_retweet,is_viral,has_geo,url_domain,user_mention_name,reply_to_name\",\"metrics\":\"count,tweet_length,num_followers,num_links,num_mentions,num_hashtags,num_favorites,user_total_tweets\",\"shardSpec\":{\"type\":\"none\"},\"binaryVersion\":9,\"size\":411651320,\"identifier\":\"twitterstream_2013-01-03T00:00:00.000Z_2013-01-04T00:00:00.000Z_2013-01-04T04:09:13.590Z_v9\"}','1970-01-01T00:00:00.000Z');
-INSERT INTO druid_segments (id,dataSource,created_date,start,end,partitioned,version,used,payload,used_flag_last_updated) VALUES ('wikipedia_editstream_2012-12-29T00:00:00.000Z_2013-01-10T08:00:00.000Z_2013-01-10T08:13:47.830Z_v9','wikipedia_editstream','2013-03-15T20:49:52.348Z','2012-12-29T00:00:00.000Z','2013-01-10T08:00:00.000Z',0,'2013-01-10T08:13:47.830Z_v9',1,'{\"dataSource\":\"wikipedia_editstream\",\"interval\":\"2012-12-29T00:00:00.000Z/2013-01-10T08:00:00.000Z\",\"version\":\"2013-01-10T08:13:47.830Z_v9\",\"loadSpec\":{\"type\":\"s3_zip\",\"bucket\":\"static.druid.io\",\"key\":\"data/segments/wikipedia_editstream/2012-12-29T00:00:00.000Z_2013-01-10T08:00:00.000Z/2013-01-10T08:13:47.830Z_v9/0/index.zip\"},\"dimensions\":\"anonymous,area_code,city,continent_code,country_name,dma_code,geo,language,namespace,network,newpage,page,postal_code,region_lookup,robot,unpatrolled,user\",\"metrics\":\"added,count,deleted,delta,delta_hist,unique_users,variation\",\"shardSpec\":{\"type\":\"none\"},\"binaryVersion\":9,\"size\":446027801,\"identifier\":\"wikipedia_editstream_2012-12-29T00:00:00.000Z_2013-01-10T08:00:00.000Z_2013-01-10T08:13:47.830Z_v9\"}','1970-01-01T00:00:00.000Z');
-INSERT INTO druid_segments (id, dataSource, created_date, start, end, partitioned, version, used, payload,used_flag_last_updated) VALUES ('wikipedia_2013-08-01T00:00:00.000Z_2013-08-02T00:00:00.000Z_2013-08-08T21:22:48.989Z', 'wikipedia', '2013-08-08T21:26:23.799Z', '2013-08-01T00:00:00.000Z', '2013-08-02T00:00:00.000Z', '0', '2013-08-08T21:22:48.989Z', '1', '{\"dataSource\":\"wikipedia\",\"interval\":\"2013-08-01T00:00:00.000Z/2013-08-02T00:00:00.000Z\",\"version\":\"2013-08-08T21:22:48.989Z\",\"loadSpec\":{\"type\":\"s3_zip\",\"bucket\":\"static.druid.io\",\"key\":\"data/segments/wikipedia/20130801T000000.000Z_20130802T000000.000Z/2013-08-08T21_22_48.989Z/0/index.zip\"},\"dimensions\":\"dma_code,continent_code,geo,area_code,robot,country_name,network,city,namespace,anonymous,unpatrolled,page,postal_code,language,newpage,user,region_lookup\",\"metrics\":\"count,delta,variation,added,deleted\",\"shardSpec\":{\"type\":\"none\"},\"binaryVersion\":9,\"size\":24664730,\"identifier\":\"wikipedia_2013-08-01T00:00:00.000Z_2013-08-02T00:00:00.000Z_2013-08-08T21:22:48.989Z\"}','1970-01-01T00:00:00.000Z');
+INSERT INTO druid_segments (id,dataSource,created_date,start,end,partitioned,version,used,payload,used_status_last_updated) VALUES ('twitterstream_2013-01-01T00:00:00.000Z_2013-01-02T00:00:00.000Z_2013-01-02T04:13:41.980Z_v9','twitterstream','2013-05-13T01:08:18.192Z','2013-01-01T00:00:00.000Z','2013-01-02T00:00:00.000Z',0,'2013-01-02T04:13:41.980Z_v9',1,'{\"dataSource\":\"twitterstream\",\"interval\":\"2013-01-01T00:00:00.000Z/2013-01-02T00:00:00.000Z\",\"version\":\"2013-01-02T04:13:41.980Z_v9\",\"loadSpec\":{\"type\":\"s3_zip\",\"bucket\":\"static.druid.io\",\"key\":\"data/segments/twitterstream/2013-01-01T00:00:00.000Z_2013-01-02T00:00:00.000Z/2013-01-02T04:13:41.980Z_v9/0/index.zip\"},\"dimensions\":\"has_links,first_hashtag,user_time_zone,user_location,has_mention,user_lang,rt_name,user_name,is_retweet,is_viral,has_geo,url_domain,user_mention_name,reply_to_name\",\"metrics\":\"count,tweet_length,num_followers,num_links,num_mentions,num_hashtags,num_favorites,user_total_tweets\",\"shardSpec\":{\"type\":\"none\"},\"binaryVersion\":9,\"size\":445235220,\"identifier\":\"twitterstream_2013-01-01T00:00:00.000Z_2013-01-02T00:00:00.000Z_2013-01-02T04:13:41.980Z_v9\"}','1970-01-01T00:00:00.000Z');
+INSERT INTO druid_segments (id,dataSource,created_date,start,end,partitioned,version,used,payload,used_status_last_updated) VALUES ('twitterstream_2013-01-02T00:00:00.000Z_2013-01-03T00:00:00.000Z_2013-01-03T03:44:58.791Z_v9','twitterstream','2013-05-13T00:03:28.640Z','2013-01-02T00:00:00.000Z','2013-01-03T00:00:00.000Z',0,'2013-01-03T03:44:58.791Z_v9',1,'{\"dataSource\":\"twitterstream\",\"interval\":\"2013-01-02T00:00:00.000Z/2013-01-03T00:00:00.000Z\",\"version\":\"2013-01-03T03:44:58.791Z_v9\",\"loadSpec\":{\"type\":\"s3_zip\",\"bucket\":\"static.druid.io\",\"key\":\"data/segments/twitterstream/2013-01-02T00:00:00.000Z_2013-01-03T00:00:00.000Z/2013-01-03T03:44:58.791Z_v9/0/index.zip\"},\"dimensions\":\"has_links,first_hashtag,user_time_zone,user_location,has_mention,user_lang,rt_name,user_name,is_retweet,is_viral,has_geo,url_domain,user_mention_name,reply_to_name\",\"metrics\":\"count,tweet_length,num_followers,num_links,num_mentions,num_hashtags,num_favorites,user_total_tweets\",\"shardSpec\":{\"type\":\"none\"},\"binaryVersion\":9,\"size\":435325540,\"identifier\":\"twitterstream_2013-01-02T00:00:00.000Z_2013-01-03T00:00:00.000Z_2013-01-03T03:44:58.791Z_v9\"}','1970-01-01T00:00:00.000Z');
+INSERT INTO druid_segments (id,dataSource,created_date,start,end,partitioned,version,used,payload,used_status_last_updated) VALUES ('twitterstream_2013-01-03T00:00:00.000Z_2013-01-04T00:00:00.000Z_2013-01-04T04:09:13.590Z_v9','twitterstream','2013-05-13T00:03:48.807Z','2013-01-03T00:00:00.000Z','2013-01-04T00:00:00.000Z',0,'2013-01-04T04:09:13.590Z_v9',1,'{\"dataSource\":\"twitterstream\",\"interval\":\"2013-01-03T00:00:00.000Z/2013-01-04T00:00:00.000Z\",\"version\":\"2013-01-04T04:09:13.590Z_v9\",\"loadSpec\":{\"type\":\"s3_zip\",\"bucket\":\"static.druid.io\",\"key\":\"data/segments/twitterstream/2013-01-03T00:00:00.000Z_2013-01-04T00:00:00.000Z/2013-01-04T04:09:13.590Z_v9/0/index.zip\"},\"dimensions\":\"has_links,first_hashtag,user_time_zone,user_location,has_mention,user_lang,rt_name,user_name,is_retweet,is_viral,has_geo,url_domain,user_mention_name,reply_to_name\",\"metrics\":\"count,tweet_length,num_followers,num_links,num_mentions,num_hashtags,num_favorites,user_total_tweets\",\"shardSpec\":{\"type\":\"none\"},\"binaryVersion\":9,\"size\":411651320,\"identifier\":\"twitterstream_2013-01-03T00:00:00.000Z_2013-01-04T00:00:00.000Z_2013-01-04T04:09:13.590Z_v9\"}','1970-01-01T00:00:00.000Z');
+INSERT INTO druid_segments (id,dataSource,created_date,start,end,partitioned,version,used,payload,used_status_last_updated) VALUES ('wikipedia_editstream_2012-12-29T00:00:00.000Z_2013-01-10T08:00:00.000Z_2013-01-10T08:13:47.830Z_v9','wikipedia_editstream','2013-03-15T20:49:52.348Z','2012-12-29T00:00:00.000Z','2013-01-10T08:00:00.000Z',0,'2013-01-10T08:13:47.830Z_v9',1,'{\"dataSource\":\"wikipedia_editstream\",\"interval\":\"2012-12-29T00:00:00.000Z/2013-01-10T08:00:00.000Z\",\"version\":\"2013-01-10T08:13:47.830Z_v9\",\"loadSpec\":{\"type\":\"s3_zip\",\"bucket\":\"static.druid.io\",\"key\":\"data/segments/wikipedia_editstream/2012-12-29T00:00:00.000Z_2013-01-10T08:00:00.000Z/2013-01-10T08:13:47.830Z_v9/0/index.zip\"},\"dimensions\":\"anonymous,area_code,city,continent_code,country_name,dma_code,geo,language,namespace,network,newpage,page,postal_code,region_lookup,robot,unpatrolled,user\",\"metrics\":\"added,count,deleted,delta,delta_hist,unique_users,variation\",\"shardSpec\":{\"type\":\"none\"},\"binaryVersion\":9,\"size\":446027801,\"identifier\":\"wikipedia_editstream_2012-12-29T00:00:00.000Z_2013-01-10T08:00:00.000Z_2013-01-10T08:13:47.830Z_v9\"}','1970-01-01T00:00:00.000Z');
+INSERT INTO druid_segments (id, dataSource, created_date, start, end, partitioned, version, used, payload,used_status_last_updated) VALUES ('wikipedia_2013-08-01T00:00:00.000Z_2013-08-02T00:00:00.000Z_2013-08-08T21:22:48.989Z', 'wikipedia', '2013-08-08T21:26:23.799Z', '2013-08-01T00:00:00.000Z', '2013-08-02T00:00:00.000Z', '0', '2013-08-08T21:22:48.989Z', '1', '{\"dataSource\":\"wikipedia\",\"interval\":\"2013-08-01T00:00:00.000Z/2013-08-02T00:00:00.000Z\",\"version\":\"2013-08-08T21:22:48.989Z\",\"loadSpec\":{\"type\":\"s3_zip\",\"bucket\":\"static.druid.io\",\"key\":\"data/segments/wikipedia/20130801T000000.000Z_20130802T000000.000Z/2013-08-08T21_22_48.989Z/0/index.zip\"},\"dimensions\":\"dma_code,continent_code,geo,area_code,robot,country_name,network,city,namespace,anonymous,unpatrolled,page,postal_code,language,newpage,user,region_lookup\",\"metrics\":\"count,delta,variation,added,deleted\",\"shardSpec\":{\"type\":\"none\"},\"binaryVersion\":9,\"size\":24664730,\"identifier\":\"wikipedia_2013-08-01T00:00:00.000Z_2013-08-02T00:00:00.000Z_2013-08-08T21:22:48.989Z\"}','1970-01-01T00:00:00.000Z');
@@ -14,4 +14,4 @@
 -- limitations under the License.
 
 INSERT INTO druid_tasks (id, created_date, datasource, payload, status_payload, active) VALUES ('index_auth_test_2030-04-30T01:13:31.893Z', '2030-04-30T01:13:31.893Z', 'auth_test', '{\"id\":\"index_auth_test_2030-04-30T01:13:31.893Z\",\"created_date\":\"2030-04-30T01:13:31.893Z\",\"datasource\":\"auth_test\",\"active\":0}', '{\"id\":\"index_auth_test_2030-04-30T01:13:31.893Z\",\"status\":\"SUCCESS\",\"duration\":1}', 0);
-INSERT INTO druid_segments (id,dataSource,created_date,start,end,partitioned,version,used,payload,used_flag_last_updated) VALUES ('auth_test_2012-12-29T00:00:00.000Z_2013-01-10T08:00:00.000Z_2013-01-10T08:13:47.830Z_v9','auth_test','2013-03-15T20:49:52.348Z','2012-12-29T00:00:00.000Z','2013-01-10T08:00:00.000Z',0,'2013-01-10T08:13:47.830Z_v9',1,'{\"dataSource\":\"auth_test\",\"interval\":\"2012-12-29T00:00:00.000Z/2013-01-10T08:00:00.000Z\",\"version\":\"2013-01-10T08:13:47.830Z_v9\",\"loadSpec\":{\"type\":\"s3_zip\",\"bucket\":\"static.druid.io\",\"key\":\"data/segments/wikipedia_editstream/2012-12-29T00:00:00.000Z_2013-01-10T08:00:00.000Z/2013-01-10T08:13:47.830Z_v9/0/index.zip\"},\"dimensions\":\"anonymous,area_code,city,continent_code,country_name,dma_code,geo,language,namespace,network,newpage,page,postal_code,region_lookup,robot,unpatrolled,user\",\"metrics\":\"added,count,deleted,delta,delta_hist,unique_users,variation\",\"shardSpec\":{\"type\":\"none\"},\"binaryVersion\":9,\"size\":446027801,\"identifier\":\"auth_test_2012-12-29T00:00:00.000Z_2013-01-10T08:00:00.000Z_2013-01-10T08:13:47.830Z_v9\"}','1970-01-01T00:00:00.000Z');
+INSERT INTO druid_segments (id,dataSource,created_date,start,end,partitioned,version,used,payload,used_status_last_updated) VALUES ('auth_test_2012-12-29T00:00:00.000Z_2013-01-10T08:00:00.000Z_2013-01-10T08:13:47.830Z_v9','auth_test','2013-03-15T20:49:52.348Z','2012-12-29T00:00:00.000Z','2013-01-10T08:00:00.000Z',0,'2013-01-10T08:13:47.830Z_v9',1,'{\"dataSource\":\"auth_test\",\"interval\":\"2012-12-29T00:00:00.000Z/2013-01-10T08:00:00.000Z\",\"version\":\"2013-01-10T08:13:47.830Z_v9\",\"loadSpec\":{\"type\":\"s3_zip\",\"bucket\":\"static.druid.io\",\"key\":\"data/segments/wikipedia_editstream/2012-12-29T00:00:00.000Z_2013-01-10T08:00:00.000Z/2013-01-10T08:13:47.830Z_v9/0/index.zip\"},\"dimensions\":\"anonymous,area_code,city,continent_code,country_name,dma_code,geo,language,namespace,network,newpage,page,postal_code,region_lookup,robot,unpatrolled,user\",\"metrics\":\"added,count,deleted,delta,delta_hist,unique_users,variation\",\"shardSpec\":{\"type\":\"none\"},\"binaryVersion\":9,\"size\":446027801,\"identifier\":\"auth_test_2012-12-29T00:00:00.000Z_2013-01-10T08:00:00.000Z_2013-01-10T08:13:47.830Z_v9\"}','1970-01-01T00:00:00.000Z');
@@ -13,8 +13,8 @@
 -- See the License for the specific language governing permissions and
 -- limitations under the License.
 
-INSERT INTO druid_segments (id,dataSource,created_date,start,end,partitioned,version,used,payload,used_flag_last_updated) VALUES ('twitterstream_2013-01-01T00:00:00.000Z_2013-01-02T00:00:00.000Z_2013-01-02T04:13:41.980Z_v9','twitterstream','2013-05-13T01:08:18.192Z','2013-01-01T00:00:00.000Z','2013-01-02T00:00:00.000Z',0,'2013-01-02T04:13:41.980Z_v9',1,'{\"dataSource\":\"twitterstream\",\"interval\":\"2013-01-01T00:00:00.000Z/2013-01-02T00:00:00.000Z\",\"version\":\"2013-01-02T04:13:41.980Z_v9\",\"loadSpec\":{\"type\":\"s3_zip\",\"bucket\":\"static.druid.io\",\"key\":\"data/segments/twitterstream/2013-01-01T00:00:00.000Z_2013-01-02T00:00:00.000Z/2013-01-02T04:13:41.980Z_v9/0/index.zip\"},\"dimensions\":\"has_links,first_hashtag,user_time_zone,user_location,has_mention,user_lang,rt_name,user_name,is_retweet,is_viral,has_geo,url_domain,user_mention_name,reply_to_name\",\"metrics\":\"count,tweet_length,num_followers,num_links,num_mentions,num_hashtags,num_favorites,user_total_tweets\",\"shardSpec\":{\"type\":\"none\"},\"binaryVersion\":9,\"size\":445235220,\"identifier\":\"twitterstream_2013-01-01T00:00:00.000Z_2013-01-02T00:00:00.000Z_2013-01-02T04:13:41.980Z_v9\"}','1970-01-01T00:00:00.000Z');
-INSERT INTO druid_segments (id,dataSource,created_date,start,end,partitioned,version,used,payload,used_flag_last_updated) VALUES ('twitterstream_2013-01-02T00:00:00.000Z_2013-01-03T00:00:00.000Z_2013-01-03T03:44:58.791Z_v9','twitterstream','2013-05-13T00:03:28.640Z','2013-01-02T00:00:00.000Z','2013-01-03T00:00:00.000Z',0,'2013-01-03T03:44:58.791Z_v9',1,'{\"dataSource\":\"twitterstream\",\"interval\":\"2013-01-02T00:00:00.000Z/2013-01-03T00:00:00.000Z\",\"version\":\"2013-01-03T03:44:58.791Z_v9\",\"loadSpec\":{\"type\":\"s3_zip\",\"bucket\":\"static.druid.io\",\"key\":\"data/segments/twitterstream/2013-01-02T00:00:00.000Z_2013-01-03T00:00:00.000Z/2013-01-03T03:44:58.791Z_v9/0/index.zip\"},\"dimensions\":\"has_links,first_hashtag,user_time_zone,user_location,has_mention,user_lang,rt_name,user_name,is_retweet,is_viral,has_geo,url_domain,user_mention_name,reply_to_name\",\"metrics\":\"count,tweet_length,num_followers,num_links,num_mentions,num_hashtags,num_favorites,user_total_tweets\",\"shardSpec\":{\"type\":\"none\"},\"binaryVersion\":9,\"size\":435325540,\"identifier\":\"twitterstream_2013-01-02T00:00:00.000Z_2013-01-03T00:00:00.000Z_2013-01-03T03:44:58.791Z_v9\"}','1970-01-01T00:00:00.000Z');
-INSERT INTO druid_segments (id,dataSource,created_date,start,end,partitioned,version,used,payload,used_flag_last_updated) VALUES ('twitterstream_2013-01-03T00:00:00.000Z_2013-01-04T00:00:00.000Z_2013-01-04T04:09:13.590Z_v9','twitterstream','2013-05-13T00:03:48.807Z','2013-01-03T00:00:00.000Z','2013-01-04T00:00:00.000Z',0,'2013-01-04T04:09:13.590Z_v9',1,'{\"dataSource\":\"twitterstream\",\"interval\":\"2013-01-03T00:00:00.000Z/2013-01-04T00:00:00.000Z\",\"version\":\"2013-01-04T04:09:13.590Z_v9\",\"loadSpec\":{\"type\":\"s3_zip\",\"bucket\":\"static.druid.io\",\"key\":\"data/segments/twitterstream/2013-01-03T00:00:00.000Z_2013-01-04T00:00:00.000Z/2013-01-04T04:09:13.590Z_v9/0/index.zip\"},\"dimensions\":\"has_links,first_hashtag,user_time_zone,user_location,has_mention,user_lang,rt_name,user_name,is_retweet,is_viral,has_geo,url_domain,user_mention_name,reply_to_name\",\"metrics\":\"count,tweet_length,num_followers,num_links,num_mentions,num_hashtags,num_favorites,user_total_tweets\",\"shardSpec\":{\"type\":\"none\"},\"binaryVersion\":9,\"size\":411651320,\"identifier\":\"twitterstream_2013-01-03T00:00:00.000Z_2013-01-04T00:00:00.000Z_2013-01-04T04:09:13.590Z_v9\"}','1970-01-01T00:00:00.000Z');
-INSERT INTO druid_segments (id,dataSource,created_date,start,end,partitioned,version,used,payload,used_flag_last_updated) VALUES ('wikipedia_editstream_2012-12-29T00:00:00.000Z_2013-01-10T08:00:00.000Z_2013-01-10T08:13:47.830Z_v9','wikipedia_editstream','2013-03-15T20:49:52.348Z','2012-12-29T00:00:00.000Z','2013-01-10T08:00:00.000Z',0,'2013-01-10T08:13:47.830Z_v9',1,'{\"dataSource\":\"wikipedia_editstream\",\"interval\":\"2012-12-29T00:00:00.000Z/2013-01-10T08:00:00.000Z\",\"version\":\"2013-01-10T08:13:47.830Z_v9\",\"loadSpec\":{\"type\":\"s3_zip\",\"bucket\":\"static.druid.io\",\"key\":\"data/segments/wikipedia_editstream/2012-12-29T00:00:00.000Z_2013-01-10T08:00:00.000Z/2013-01-10T08:13:47.830Z_v9/0/index.zip\"},\"dimensions\":\"anonymous,area_code,city,continent_code,country_name,dma_code,geo,language,namespace,network,newpage,page,postal_code,region_lookup,robot,unpatrolled,user\",\"metrics\":\"added,count,deleted,delta,delta_hist,unique_users,variation\",\"shardSpec\":{\"type\":\"none\"},\"binaryVersion\":9,\"size\":446027801,\"identifier\":\"wikipedia_editstream_2012-12-29T00:00:00.000Z_2013-01-10T08:00:00.000Z_2013-01-10T08:13:47.830Z_v9\"}','1970-01-01T00:00:00.000Z');
-INSERT INTO druid_segments (id, dataSource, created_date, start, end, partitioned, version, used, payload,used_flag_last_updated) VALUES ('wikipedia_2013-08-01T00:00:00.000Z_2013-08-02T00:00:00.000Z_2013-08-08T21:22:48.989Z', 'wikipedia', '2013-08-08T21:26:23.799Z', '2013-08-01T00:00:00.000Z', '2013-08-02T00:00:00.000Z', '0', '2013-08-08T21:22:48.989Z', '1', '{\"dataSource\":\"wikipedia\",\"interval\":\"2013-08-01T00:00:00.000Z/2013-08-02T00:00:00.000Z\",\"version\":\"2013-08-08T21:22:48.989Z\",\"loadSpec\":{\"type\":\"s3_zip\",\"bucket\":\"static.druid.io\",\"key\":\"data/segments/wikipedia/20130801T000000.000Z_20130802T000000.000Z/2013-08-08T21_22_48.989Z/0/index.zip\"},\"dimensions\":\"dma_code,continent_code,geo,area_code,robot,country_name,network,city,namespace,anonymous,unpatrolled,page,postal_code,language,newpage,user,region_lookup\",\"metrics\":\"count,delta,variation,added,deleted\",\"shardSpec\":{\"type\":\"none\"},\"binaryVersion\":9,\"size\":24664730,\"identifier\":\"wikipedia_2013-08-01T00:00:00.000Z_2013-08-02T00:00:00.000Z_2013-08-08T21:22:48.989Z\"}','1970-01-01T00:00:00.000Z');
+INSERT INTO druid_segments (id,dataSource,created_date,start,end,partitioned,version,used,payload,used_status_last_updated) VALUES ('twitterstream_2013-01-01T00:00:00.000Z_2013-01-02T00:00:00.000Z_2013-01-02T04:13:41.980Z_v9','twitterstream','2013-05-13T01:08:18.192Z','2013-01-01T00:00:00.000Z','2013-01-02T00:00:00.000Z',0,'2013-01-02T04:13:41.980Z_v9',1,'{\"dataSource\":\"twitterstream\",\"interval\":\"2013-01-01T00:00:00.000Z/2013-01-02T00:00:00.000Z\",\"version\":\"2013-01-02T04:13:41.980Z_v9\",\"loadSpec\":{\"type\":\"s3_zip\",\"bucket\":\"static.druid.io\",\"key\":\"data/segments/twitterstream/2013-01-01T00:00:00.000Z_2013-01-02T00:00:00.000Z/2013-01-02T04:13:41.980Z_v9/0/index.zip\"},\"dimensions\":\"has_links,first_hashtag,user_time_zone,user_location,has_mention,user_lang,rt_name,user_name,is_retweet,is_viral,has_geo,url_domain,user_mention_name,reply_to_name\",\"metrics\":\"count,tweet_length,num_followers,num_links,num_mentions,num_hashtags,num_favorites,user_total_tweets\",\"shardSpec\":{\"type\":\"none\"},\"binaryVersion\":9,\"size\":445235220,\"identifier\":\"twitterstream_2013-01-01T00:00:00.000Z_2013-01-02T00:00:00.000Z_2013-01-02T04:13:41.980Z_v9\"}','1970-01-01T00:00:00.000Z');
+INSERT INTO druid_segments (id,dataSource,created_date,start,end,partitioned,version,used,payload,used_status_last_updated) VALUES ('twitterstream_2013-01-02T00:00:00.000Z_2013-01-03T00:00:00.000Z_2013-01-03T03:44:58.791Z_v9','twitterstream','2013-05-13T00:03:28.640Z','2013-01-02T00:00:00.000Z','2013-01-03T00:00:00.000Z',0,'2013-01-03T03:44:58.791Z_v9',1,'{\"dataSource\":\"twitterstream\",\"interval\":\"2013-01-02T00:00:00.000Z/2013-01-03T00:00:00.000Z\",\"version\":\"2013-01-03T03:44:58.791Z_v9\",\"loadSpec\":{\"type\":\"s3_zip\",\"bucket\":\"static.druid.io\",\"key\":\"data/segments/twitterstream/2013-01-02T00:00:00.000Z_2013-01-03T00:00:00.000Z/2013-01-03T03:44:58.791Z_v9/0/index.zip\"},\"dimensions\":\"has_links,first_hashtag,user_time_zone,user_location,has_mention,user_lang,rt_name,user_name,is_retweet,is_viral,has_geo,url_domain,user_mention_name,reply_to_name\",\"metrics\":\"count,tweet_length,num_followers,num_links,num_mentions,num_hashtags,num_favorites,user_total_tweets\",\"shardSpec\":{\"type\":\"none\"},\"binaryVersion\":9,\"size\":435325540,\"identifier\":\"twitterstream_2013-01-02T00:00:00.000Z_2013-01-03T00:00:00.000Z_2013-01-03T03:44:58.791Z_v9\"}','1970-01-01T00:00:00.000Z');
+INSERT INTO druid_segments (id,dataSource,created_date,start,end,partitioned,version,used,payload,used_status_last_updated) VALUES ('twitterstream_2013-01-03T00:00:00.000Z_2013-01-04T00:00:00.000Z_2013-01-04T04:09:13.590Z_v9','twitterstream','2013-05-13T00:03:48.807Z','2013-01-03T00:00:00.000Z','2013-01-04T00:00:00.000Z',0,'2013-01-04T04:09:13.590Z_v9',1,'{\"dataSource\":\"twitterstream\",\"interval\":\"2013-01-03T00:00:00.000Z/2013-01-04T00:00:00.000Z\",\"version\":\"2013-01-04T04:09:13.590Z_v9\",\"loadSpec\":{\"type\":\"s3_zip\",\"bucket\":\"static.druid.io\",\"key\":\"data/segments/twitterstream/2013-01-03T00:00:00.000Z_2013-01-04T00:00:00.000Z/2013-01-04T04:09:13.590Z_v9/0/index.zip\"},\"dimensions\":\"has_links,first_hashtag,user_time_zone,user_location,has_mention,user_lang,rt_name,user_name,is_retweet,is_viral,has_geo,url_domain,user_mention_name,reply_to_name\",\"metrics\":\"count,tweet_length,num_followers,num_links,num_mentions,num_hashtags,num_favorites,user_total_tweets\",\"shardSpec\":{\"type\":\"none\"},\"binaryVersion\":9,\"size\":411651320,\"identifier\":\"twitterstream_2013-01-03T00:00:00.000Z_2013-01-04T00:00:00.000Z_2013-01-04T04:09:13.590Z_v9\"}','1970-01-01T00:00:00.000Z');
+INSERT INTO druid_segments (id,dataSource,created_date,start,end,partitioned,version,used,payload,used_status_last_updated) VALUES ('wikipedia_editstream_2012-12-29T00:00:00.000Z_2013-01-10T08:00:00.000Z_2013-01-10T08:13:47.830Z_v9','wikipedia_editstream','2013-03-15T20:49:52.348Z','2012-12-29T00:00:00.000Z','2013-01-10T08:00:00.000Z',0,'2013-01-10T08:13:47.830Z_v9',1,'{\"dataSource\":\"wikipedia_editstream\",\"interval\":\"2012-12-29T00:00:00.000Z/2013-01-10T08:00:00.000Z\",\"version\":\"2013-01-10T08:13:47.830Z_v9\",\"loadSpec\":{\"type\":\"s3_zip\",\"bucket\":\"static.druid.io\",\"key\":\"data/segments/wikipedia_editstream/2012-12-29T00:00:00.000Z_2013-01-10T08:00:00.000Z/2013-01-10T08:13:47.830Z_v9/0/index.zip\"},\"dimensions\":\"anonymous,area_code,city,continent_code,country_name,dma_code,geo,language,namespace,network,newpage,page,postal_code,region_lookup,robot,unpatrolled,user\",\"metrics\":\"added,count,deleted,delta,delta_hist,unique_users,variation\",\"shardSpec\":{\"type\":\"none\"},\"binaryVersion\":9,\"size\":446027801,\"identifier\":\"wikipedia_editstream_2012-12-29T00:00:00.000Z_2013-01-10T08:00:00.000Z_2013-01-10T08:13:47.830Z_v9\"}','1970-01-01T00:00:00.000Z');
+INSERT INTO druid_segments (id, dataSource, created_date, start, end, partitioned, version, used, payload,used_status_last_updated) VALUES ('wikipedia_2013-08-01T00:00:00.000Z_2013-08-02T00:00:00.000Z_2013-08-08T21:22:48.989Z', 'wikipedia', '2013-08-08T21:26:23.799Z', '2013-08-01T00:00:00.000Z', '2013-08-02T00:00:00.000Z', '0', '2013-08-08T21:22:48.989Z', '1', '{\"dataSource\":\"wikipedia\",\"interval\":\"2013-08-01T00:00:00.000Z/2013-08-02T00:00:00.000Z\",\"version\":\"2013-08-08T21:22:48.989Z\",\"loadSpec\":{\"type\":\"s3_zip\",\"bucket\":\"static.druid.io\",\"key\":\"data/segments/wikipedia/20130801T000000.000Z_20130802T000000.000Z/2013-08-08T21_22_48.989Z/0/index.zip\"},\"dimensions\":\"dma_code,continent_code,geo,area_code,robot,country_name,network,city,namespace,anonymous,unpatrolled,page,postal_code,language,newpage,user,region_lookup\",\"metrics\":\"count,delta,variation,added,deleted\",\"shardSpec\":{\"type\":\"none\"},\"binaryVersion\":9,\"size\":24664730,\"identifier\":\"wikipedia_2013-08-01T00:00:00.000Z_2013-08-02T00:00:00.000Z_2013-08-08T21:22:48.989Z\"}','1970-01-01T00:00:00.000Z');
@@ -13,8 +13,8 @@
 -- See the License for the specific language governing permissions and
 -- limitations under the License.
 
-INSERT INTO druid_segments (id,dataSource,created_date,start,end,partitioned,version,used,payload,used_flag_last_updated) VALUES ('twitterstream_2013-01-01T00:00:00.000Z_2013-01-02T00:00:00.000Z_2013-01-02T04:13:41.980Z_v9','twitterstream','2013-05-13T01:08:18.192Z','2013-01-01T00:00:00.000Z','2013-01-02T00:00:00.000Z',0,'2013-01-02T04:13:41.980Z_v9',1,'{\"dataSource\":\"twitterstream\",\"interval\":\"2013-01-01T00:00:00.000Z/2013-01-02T00:00:00.000Z\",\"version\":\"2013-01-02T04:13:41.980Z_v9\",\"loadSpec\":{\"type\":\"s3_zip\",\"bucket\":\"static.druid.io\",\"key\":\"data/segments/twitterstream/2013-01-01T00:00:00.000Z_2013-01-02T00:00:00.000Z/2013-01-02T04:13:41.980Z_v9/0/index.zip\"},\"dimensions\":\"has_links,first_hashtag,user_time_zone,user_location,has_mention,user_lang,rt_name,user_name,is_retweet,is_viral,has_geo,url_domain,user_mention_name,reply_to_name\",\"metrics\":\"count,tweet_length,num_followers,num_links,num_mentions,num_hashtags,num_favorites,user_total_tweets\",\"shardSpec\":{\"type\":\"none\"},\"binaryVersion\":9,\"size\":445235220,\"identifier\":\"twitterstream_2013-01-01T00:00:00.000Z_2013-01-02T00:00:00.000Z_2013-01-02T04:13:41.980Z_v9\"}','1970-01-01T00:00:00.000Z');
-INSERT INTO druid_segments (id,dataSource,created_date,start,end,partitioned,version,used,payload,used_flag_last_updated) VALUES ('twitterstream_2013-01-02T00:00:00.000Z_2013-01-03T00:00:00.000Z_2013-01-03T03:44:58.791Z_v9','twitterstream','2013-05-13T00:03:28.640Z','2013-01-02T00:00:00.000Z','2013-01-03T00:00:00.000Z',0,'2013-01-03T03:44:58.791Z_v9',1,'{\"dataSource\":\"twitterstream\",\"interval\":\"2013-01-02T00:00:00.000Z/2013-01-03T00:00:00.000Z\",\"version\":\"2013-01-03T03:44:58.791Z_v9\",\"loadSpec\":{\"type\":\"s3_zip\",\"bucket\":\"static.druid.io\",\"key\":\"data/segments/twitterstream/2013-01-02T00:00:00.000Z_2013-01-03T00:00:00.000Z/2013-01-03T03:44:58.791Z_v9/0/index.zip\"},\"dimensions\":\"has_links,first_hashtag,user_time_zone,user_location,has_mention,user_lang,rt_name,user_name,is_retweet,is_viral,has_geo,url_domain,user_mention_name,reply_to_name\",\"metrics\":\"count,tweet_length,num_followers,num_links,num_mentions,num_hashtags,num_favorites,user_total_tweets\",\"shardSpec\":{\"type\":\"none\"},\"binaryVersion\":9,\"size\":435325540,\"identifier\":\"twitterstream_2013-01-02T00:00:00.000Z_2013-01-03T00:00:00.000Z_2013-01-03T03:44:58.791Z_v9\"}','1970-01-01T00:00:00.000Z');
-INSERT INTO druid_segments (id,dataSource,created_date,start,end,partitioned,version,used,payload,used_flag_last_updated) VALUES ('twitterstream_2013-01-03T00:00:00.000Z_2013-01-04T00:00:00.000Z_2013-01-04T04:09:13.590Z_v9','twitterstream','2013-05-13T00:03:48.807Z','2013-01-03T00:00:00.000Z','2013-01-04T00:00:00.000Z',0,'2013-01-04T04:09:13.590Z_v9',1,'{\"dataSource\":\"twitterstream\",\"interval\":\"2013-01-03T00:00:00.000Z/2013-01-04T00:00:00.000Z\",\"version\":\"2013-01-04T04:09:13.590Z_v9\",\"loadSpec\":{\"type\":\"s3_zip\",\"bucket\":\"static.druid.io\",\"key\":\"data/segments/twitterstream/2013-01-03T00:00:00.000Z_2013-01-04T00:00:00.000Z/2013-01-04T04:09:13.590Z_v9/0/index.zip\"},\"dimensions\":\"has_links,first_hashtag,user_time_zone,user_location,has_mention,user_lang,rt_name,user_name,is_retweet,is_viral,has_geo,url_domain,user_mention_name,reply_to_name\",\"metrics\":\"count,tweet_length,num_followers,num_links,num_mentions,num_hashtags,num_favorites,user_total_tweets\",\"shardSpec\":{\"type\":\"none\"},\"binaryVersion\":9,\"size\":411651320,\"identifier\":\"twitterstream_2013-01-03T00:00:00.000Z_2013-01-04T00:00:00.000Z_2013-01-04T04:09:13.590Z_v9\"}','1970-01-01T00:00:00.000Z');
-INSERT INTO druid_segments (id,dataSource,created_date,start,end,partitioned,version,used,payload,used_flag_last_updated) VALUES ('wikipedia_editstream_2012-12-29T00:00:00.000Z_2013-01-10T08:00:00.000Z_2013-01-10T08:13:47.830Z_v9','wikipedia_editstream','2013-03-15T20:49:52.348Z','2012-12-29T00:00:00.000Z','2013-01-10T08:00:00.000Z',0,'2013-01-10T08:13:47.830Z_v9',1,'{\"dataSource\":\"wikipedia_editstream\",\"interval\":\"2012-12-29T00:00:00.000Z/2013-01-10T08:00:00.000Z\",\"version\":\"2013-01-10T08:13:47.830Z_v9\",\"loadSpec\":{\"type\":\"s3_zip\",\"bucket\":\"static.druid.io\",\"key\":\"data/segments/wikipedia_editstream/2012-12-29T00:00:00.000Z_2013-01-10T08:00:00.000Z/2013-01-10T08:13:47.830Z_v9/0/index.zip\"},\"dimensions\":\"anonymous,area_code,city,continent_code,country_name,dma_code,geo,language,namespace,network,newpage,page,postal_code,region_lookup,robot,unpatrolled,user\",\"metrics\":\"added,count,deleted,delta,delta_hist,unique_users,variation\",\"shardSpec\":{\"type\":\"none\"},\"binaryVersion\":9,\"size\":446027801,\"identifier\":\"wikipedia_editstream_2012-12-29T00:00:00.000Z_2013-01-10T08:00:00.000Z_2013-01-10T08:13:47.830Z_v9\"}','1970-01-01T00:00:00.000Z');
-INSERT INTO druid_segments (id, dataSource, created_date, start, end, partitioned, version, used, payload,used_flag_last_updated) VALUES ('wikipedia_2013-08-01T00:00:00.000Z_2013-08-02T00:00:00.000Z_2013-08-08T21:22:48.989Z', 'wikipedia', '2013-08-08T21:26:23.799Z', '2013-08-01T00:00:00.000Z', '2013-08-02T00:00:00.000Z', '0', '2013-08-08T21:22:48.989Z', '1', '{\"dataSource\":\"wikipedia\",\"interval\":\"2013-08-01T00:00:00.000Z/2013-08-02T00:00:00.000Z\",\"version\":\"2013-08-08T21:22:48.989Z\",\"loadSpec\":{\"type\":\"s3_zip\",\"bucket\":\"static.druid.io\",\"key\":\"data/segments/wikipedia/20130801T000000.000Z_20130802T000000.000Z/2013-08-08T21_22_48.989Z/0/index.zip\"},\"dimensions\":\"dma_code,continent_code,geo,area_code,robot,country_name,network,city,namespace,anonymous,unpatrolled,page,postal_code,language,newpage,user,region_lookup\",\"metrics\":\"count,delta,variation,added,deleted\",\"shardSpec\":{\"type\":\"none\"},\"binaryVersion\":9,\"size\":24664730,\"identifier\":\"wikipedia_2013-08-01T00:00:00.000Z_2013-08-02T00:00:00.000Z_2013-08-08T21:22:48.989Z\"}','1970-01-01T00:00:00.000Z');
+INSERT INTO druid_segments (id,dataSource,created_date,start,end,partitioned,version,used,payload,used_status_last_updated) VALUES ('twitterstream_2013-01-01T00:00:00.000Z_2013-01-02T00:00:00.000Z_2013-01-02T04:13:41.980Z_v9','twitterstream','2013-05-13T01:08:18.192Z','2013-01-01T00:00:00.000Z','2013-01-02T00:00:00.000Z',0,'2013-01-02T04:13:41.980Z_v9',1,'{\"dataSource\":\"twitterstream\",\"interval\":\"2013-01-01T00:00:00.000Z/2013-01-02T00:00:00.000Z\",\"version\":\"2013-01-02T04:13:41.980Z_v9\",\"loadSpec\":{\"type\":\"s3_zip\",\"bucket\":\"static.druid.io\",\"key\":\"data/segments/twitterstream/2013-01-01T00:00:00.000Z_2013-01-02T00:00:00.000Z/2013-01-02T04:13:41.980Z_v9/0/index.zip\"},\"dimensions\":\"has_links,first_hashtag,user_time_zone,user_location,has_mention,user_lang,rt_name,user_name,is_retweet,is_viral,has_geo,url_domain,user_mention_name,reply_to_name\",\"metrics\":\"count,tweet_length,num_followers,num_links,num_mentions,num_hashtags,num_favorites,user_total_tweets\",\"shardSpec\":{\"type\":\"none\"},\"binaryVersion\":9,\"size\":445235220,\"identifier\":\"twitterstream_2013-01-01T00:00:00.000Z_2013-01-02T00:00:00.000Z_2013-01-02T04:13:41.980Z_v9\"}','1970-01-01T00:00:00.000Z');
+INSERT INTO druid_segments (id,dataSource,created_date,start,end,partitioned,version,used,payload,used_status_last_updated) VALUES ('twitterstream_2013-01-02T00:00:00.000Z_2013-01-03T00:00:00.000Z_2013-01-03T03:44:58.791Z_v9','twitterstream','2013-05-13T00:03:28.640Z','2013-01-02T00:00:00.000Z','2013-01-03T00:00:00.000Z',0,'2013-01-03T03:44:58.791Z_v9',1,'{\"dataSource\":\"twitterstream\",\"interval\":\"2013-01-02T00:00:00.000Z/2013-01-03T00:00:00.000Z\",\"version\":\"2013-01-03T03:44:58.791Z_v9\",\"loadSpec\":{\"type\":\"s3_zip\",\"bucket\":\"static.druid.io\",\"key\":\"data/segments/twitterstream/2013-01-02T00:00:00.000Z_2013-01-03T00:00:00.000Z/2013-01-03T03:44:58.791Z_v9/0/index.zip\"},\"dimensions\":\"has_links,first_hashtag,user_time_zone,user_location,has_mention,user_lang,rt_name,user_name,is_retweet,is_viral,has_geo,url_domain,user_mention_name,reply_to_name\",\"metrics\":\"count,tweet_length,num_followers,num_links,num_mentions,num_hashtags,num_favorites,user_total_tweets\",\"shardSpec\":{\"type\":\"none\"},\"binaryVersion\":9,\"size\":435325540,\"identifier\":\"twitterstream_2013-01-02T00:00:00.000Z_2013-01-03T00:00:00.000Z_2013-01-03T03:44:58.791Z_v9\"}','1970-01-01T00:00:00.000Z');
+INSERT INTO druid_segments (id,dataSource,created_date,start,end,partitioned,version,used,payload,used_status_last_updated) VALUES ('twitterstream_2013-01-03T00:00:00.000Z_2013-01-04T00:00:00.000Z_2013-01-04T04:09:13.590Z_v9','twitterstream','2013-05-13T00:03:48.807Z','2013-01-03T00:00:00.000Z','2013-01-04T00:00:00.000Z',0,'2013-01-04T04:09:13.590Z_v9',1,'{\"dataSource\":\"twitterstream\",\"interval\":\"2013-01-03T00:00:00.000Z/2013-01-04T00:00:00.000Z\",\"version\":\"2013-01-04T04:09:13.590Z_v9\",\"loadSpec\":{\"type\":\"s3_zip\",\"bucket\":\"static.druid.io\",\"key\":\"data/segments/twitterstream/2013-01-03T00:00:00.000Z_2013-01-04T00:00:00.000Z/2013-01-04T04:09:13.590Z_v9/0/index.zip\"},\"dimensions\":\"has_links,first_hashtag,user_time_zone,user_location,has_mention,user_lang,rt_name,user_name,is_retweet,is_viral,has_geo,url_domain,user_mention_name,reply_to_name\",\"metrics\":\"count,tweet_length,num_followers,num_links,num_mentions,num_hashtags,num_favorites,user_total_tweets\",\"shardSpec\":{\"type\":\"none\"},\"binaryVersion\":9,\"size\":411651320,\"identifier\":\"twitterstream_2013-01-03T00:00:00.000Z_2013-01-04T00:00:00.000Z_2013-01-04T04:09:13.590Z_v9\"}','1970-01-01T00:00:00.000Z');
+INSERT INTO druid_segments (id,dataSource,created_date,start,end,partitioned,version,used,payload,used_status_last_updated) VALUES ('wikipedia_editstream_2012-12-29T00:00:00.000Z_2013-01-10T08:00:00.000Z_2013-01-10T08:13:47.830Z_v9','wikipedia_editstream','2013-03-15T20:49:52.348Z','2012-12-29T00:00:00.000Z','2013-01-10T08:00:00.000Z',0,'2013-01-10T08:13:47.830Z_v9',1,'{\"dataSource\":\"wikipedia_editstream\",\"interval\":\"2012-12-29T00:00:00.000Z/2013-01-10T08:00:00.000Z\",\"version\":\"2013-01-10T08:13:47.830Z_v9\",\"loadSpec\":{\"type\":\"s3_zip\",\"bucket\":\"static.druid.io\",\"key\":\"data/segments/wikipedia_editstream/2012-12-29T00:00:00.000Z_2013-01-10T08:00:00.000Z/2013-01-10T08:13:47.830Z_v9/0/index.zip\"},\"dimensions\":\"anonymous,area_code,city,continent_code,country_name,dma_code,geo,language,namespace,network,newpage,page,postal_code,region_lookup,robot,unpatrolled,user\",\"metrics\":\"added,count,deleted,delta,delta_hist,unique_users,variation\",\"shardSpec\":{\"type\":\"none\"},\"binaryVersion\":9,\"size\":446027801,\"identifier\":\"wikipedia_editstream_2012-12-29T00:00:00.000Z_2013-01-10T08:00:00.000Z_2013-01-10T08:13:47.830Z_v9\"}','1970-01-01T00:00:00.000Z');
+INSERT INTO druid_segments (id, dataSource, created_date, start, end, partitioned, version, used, payload,used_status_last_updated) VALUES ('wikipedia_2013-08-01T00:00:00.000Z_2013-08-02T00:00:00.000Z_2013-08-08T21:22:48.989Z', 'wikipedia', '2013-08-08T21:26:23.799Z', '2013-08-01T00:00:00.000Z', '2013-08-02T00:00:00.000Z', '0', '2013-08-08T21:22:48.989Z', '1', '{\"dataSource\":\"wikipedia\",\"interval\":\"2013-08-01T00:00:00.000Z/2013-08-02T00:00:00.000Z\",\"version\":\"2013-08-08T21:22:48.989Z\",\"loadSpec\":{\"type\":\"s3_zip\",\"bucket\":\"static.druid.io\",\"key\":\"data/segments/wikipedia/20130801T000000.000Z_20130802T000000.000Z/2013-08-08T21_22_48.989Z/0/index.zip\"},\"dimensions\":\"dma_code,continent_code,geo,area_code,robot,country_name,network,city,namespace,anonymous,unpatrolled,page,postal_code,language,newpage,user,region_lookup\",\"metrics\":\"count,delta,variation,added,deleted\",\"shardSpec\":{\"type\":\"none\"},\"binaryVersion\":9,\"size\":24664730,\"identifier\":\"wikipedia_2013-08-01T00:00:00.000Z_2013-08-02T00:00:00.000Z_2013-08-08T21:22:48.989Z\"}','1970-01-01T00:00:00.000Z');
@ -13,8 +13,8 @@
|
||||||
-- See the License for the specific language governing permissions and
|
-- See the License for the specific language governing permissions and
|
||||||
-- limitations under the License.
|
-- limitations under the License.
|
||||||
|
|
||||||
INSERT INTO druid_segments (id,dataSource,created_date,start,end,partitioned,version,used,payload,used_flag_last_updated) VALUES ('twitterstream_2013-01-01T00:00:00.000Z_2013-01-02T00:00:00.000Z_2013-01-02T04:13:41.980Z_v9','twitterstream','2013-05-13T01:08:18.192Z','2013-01-01T00:00:00.000Z','2013-01-02T00:00:00.000Z',0,'2013-01-02T04:13:41.980Z_v9',1,'{\"dataSource\":\"twitterstream\",\"interval\":\"2013-01-01T00:00:00.000Z/2013-01-02T00:00:00.000Z\",\"version\":\"2013-01-02T04:13:41.980Z_v9\",\"loadSpec\":{\"type\":\"s3_zip\",\"bucket\":\"static.druid.io\",\"key\":\"data/segments/twitterstream/2013-01-01T00:00:00.000Z_2013-01-02T00:00:00.000Z/2013-01-02T04:13:41.980Z_v9/0/index.zip\"},\"dimensions\":\"has_links,first_hashtag,user_time_zone,user_location,has_mention,user_lang,rt_name,user_name,is_retweet,is_viral,has_geo,url_domain,user_mention_name,reply_to_name\",\"metrics\":\"count,tweet_length,num_followers,num_links,num_mentions,num_hashtags,num_favorites,user_total_tweets\",\"shardSpec\":{\"type\":\"none\"},\"binaryVersion\":9,\"size\":445235220,\"identifier\":\"twitterstream_2013-01-01T00:00:00.000Z_2013-01-02T00:00:00.000Z_2013-01-02T04:13:41.980Z_v9\"}','1970-01-01T00:00:00.000Z');
|
INSERT INTO druid_segments (id,dataSource,created_date,start,end,partitioned,version,used,payload,used_status_last_updated) VALUES ('twitterstream_2013-01-01T00:00:00.000Z_2013-01-02T00:00:00.000Z_2013-01-02T04:13:41.980Z_v9','twitterstream','2013-05-13T01:08:18.192Z','2013-01-01T00:00:00.000Z','2013-01-02T00:00:00.000Z',0,'2013-01-02T04:13:41.980Z_v9',1,'{\"dataSource\":\"twitterstream\",\"interval\":\"2013-01-01T00:00:00.000Z/2013-01-02T00:00:00.000Z\",\"version\":\"2013-01-02T04:13:41.980Z_v9\",\"loadSpec\":{\"type\":\"s3_zip\",\"bucket\":\"static.druid.io\",\"key\":\"data/segments/twitterstream/2013-01-01T00:00:00.000Z_2013-01-02T00:00:00.000Z/2013-01-02T04:13:41.980Z_v9/0/index.zip\"},\"dimensions\":\"has_links,first_hashtag,user_time_zone,user_location,has_mention,user_lang,rt_name,user_name,is_retweet,is_viral,has_geo,url_domain,user_mention_name,reply_to_name\",\"metrics\":\"count,tweet_length,num_followers,num_links,num_mentions,num_hashtags,num_favorites,user_total_tweets\",\"shardSpec\":{\"type\":\"none\"},\"binaryVersion\":9,\"size\":445235220,\"identifier\":\"twitterstream_2013-01-01T00:00:00.000Z_2013-01-02T00:00:00.000Z_2013-01-02T04:13:41.980Z_v9\"}','1970-01-01T00:00:00.000Z');
|
||||||
INSERT INTO druid_segments (id,dataSource,created_date,start,end,partitioned,version,used,payload,used_flag_last_updated) VALUES ('twitterstream_2013-01-02T00:00:00.000Z_2013-01-03T00:00:00.000Z_2013-01-03T03:44:58.791Z_v9','twitterstream','2013-05-13T00:03:28.640Z','2013-01-02T00:00:00.000Z','2013-01-03T00:00:00.000Z',0,'2013-01-03T03:44:58.791Z_v9',1,'{\"dataSource\":\"twitterstream\",\"interval\":\"2013-01-02T00:00:00.000Z/2013-01-03T00:00:00.000Z\",\"version\":\"2013-01-03T03:44:58.791Z_v9\",\"loadSpec\":{\"type\":\"s3_zip\",\"bucket\":\"static.druid.io\",\"key\":\"data/segments/twitterstream/2013-01-02T00:00:00.000Z_2013-01-03T00:00:00.000Z/2013-01-03T03:44:58.791Z_v9/0/index.zip\"},\"dimensions\":\"has_links,first_hashtag,user_time_zone,user_location,has_mention,user_lang,rt_name,user_name,is_retweet,is_viral,has_geo,url_domain,user_mention_name,reply_to_name\",\"metrics\":\"count,tweet_length,num_followers,num_links,num_mentions,num_hashtags,num_favorites,user_total_tweets\",\"shardSpec\":{\"type\":\"none\"},\"binaryVersion\":9,\"size\":435325540,\"identifier\":\"twitterstream_2013-01-02T00:00:00.000Z_2013-01-03T00:00:00.000Z_2013-01-03T03:44:58.791Z_v9\"}','1970-01-01T00:00:00.000Z');
|
INSERT INTO druid_segments (id,dataSource,created_date,start,end,partitioned,version,used,payload,used_status_last_updated) VALUES ('twitterstream_2013-01-02T00:00:00.000Z_2013-01-03T00:00:00.000Z_2013-01-03T03:44:58.791Z_v9','twitterstream','2013-05-13T00:03:28.640Z','2013-01-02T00:00:00.000Z','2013-01-03T00:00:00.000Z',0,'2013-01-03T03:44:58.791Z_v9',1,'{\"dataSource\":\"twitterstream\",\"interval\":\"2013-01-02T00:00:00.000Z/2013-01-03T00:00:00.000Z\",\"version\":\"2013-01-03T03:44:58.791Z_v9\",\"loadSpec\":{\"type\":\"s3_zip\",\"bucket\":\"static.druid.io\",\"key\":\"data/segments/twitterstream/2013-01-02T00:00:00.000Z_2013-01-03T00:00:00.000Z/2013-01-03T03:44:58.791Z_v9/0/index.zip\"},\"dimensions\":\"has_links,first_hashtag,user_time_zone,user_location,has_mention,user_lang,rt_name,user_name,is_retweet,is_viral,has_geo,url_domain,user_mention_name,reply_to_name\",\"metrics\":\"count,tweet_length,num_followers,num_links,num_mentions,num_hashtags,num_favorites,user_total_tweets\",\"shardSpec\":{\"type\":\"none\"},\"binaryVersion\":9,\"size\":435325540,\"identifier\":\"twitterstream_2013-01-02T00:00:00.000Z_2013-01-03T00:00:00.000Z_2013-01-03T03:44:58.791Z_v9\"}','1970-01-01T00:00:00.000Z');
|
||||||
-INSERT INTO druid_segments (id,dataSource,created_date,start,end,partitioned,version,used,payload,used_flag_last_updated) VALUES ('twitterstream_2013-01-03T00:00:00.000Z_2013-01-04T00:00:00.000Z_2013-01-04T04:09:13.590Z_v9','twitterstream','2013-05-13T00:03:48.807Z','2013-01-03T00:00:00.000Z','2013-01-04T00:00:00.000Z',0,'2013-01-04T04:09:13.590Z_v9',1,'{\"dataSource\":\"twitterstream\",\"interval\":\"2013-01-03T00:00:00.000Z/2013-01-04T00:00:00.000Z\",\"version\":\"2013-01-04T04:09:13.590Z_v9\",\"loadSpec\":{\"type\":\"s3_zip\",\"bucket\":\"static.druid.io\",\"key\":\"data/segments/twitterstream/2013-01-03T00:00:00.000Z_2013-01-04T00:00:00.000Z/2013-01-04T04:09:13.590Z_v9/0/index.zip\"},\"dimensions\":\"has_links,first_hashtag,user_time_zone,user_location,has_mention,user_lang,rt_name,user_name,is_retweet,is_viral,has_geo,url_domain,user_mention_name,reply_to_name\",\"metrics\":\"count,tweet_length,num_followers,num_links,num_mentions,num_hashtags,num_favorites,user_total_tweets\",\"shardSpec\":{\"type\":\"none\"},\"binaryVersion\":9,\"size\":411651320,\"identifier\":\"twitterstream_2013-01-03T00:00:00.000Z_2013-01-04T00:00:00.000Z_2013-01-04T04:09:13.590Z_v9\"}','1970-01-01T00:00:00.000Z');
+INSERT INTO druid_segments (id,dataSource,created_date,start,end,partitioned,version,used,payload,used_status_last_updated) VALUES ('twitterstream_2013-01-03T00:00:00.000Z_2013-01-04T00:00:00.000Z_2013-01-04T04:09:13.590Z_v9','twitterstream','2013-05-13T00:03:48.807Z','2013-01-03T00:00:00.000Z','2013-01-04T00:00:00.000Z',0,'2013-01-04T04:09:13.590Z_v9',1,'{\"dataSource\":\"twitterstream\",\"interval\":\"2013-01-03T00:00:00.000Z/2013-01-04T00:00:00.000Z\",\"version\":\"2013-01-04T04:09:13.590Z_v9\",\"loadSpec\":{\"type\":\"s3_zip\",\"bucket\":\"static.druid.io\",\"key\":\"data/segments/twitterstream/2013-01-03T00:00:00.000Z_2013-01-04T00:00:00.000Z/2013-01-04T04:09:13.590Z_v9/0/index.zip\"},\"dimensions\":\"has_links,first_hashtag,user_time_zone,user_location,has_mention,user_lang,rt_name,user_name,is_retweet,is_viral,has_geo,url_domain,user_mention_name,reply_to_name\",\"metrics\":\"count,tweet_length,num_followers,num_links,num_mentions,num_hashtags,num_favorites,user_total_tweets\",\"shardSpec\":{\"type\":\"none\"},\"binaryVersion\":9,\"size\":411651320,\"identifier\":\"twitterstream_2013-01-03T00:00:00.000Z_2013-01-04T00:00:00.000Z_2013-01-04T04:09:13.590Z_v9\"}','1970-01-01T00:00:00.000Z');
-INSERT INTO druid_segments (id,dataSource,created_date,start,end,partitioned,version,used,payload,used_flag_last_updated) VALUES ('wikipedia_editstream_2012-12-29T00:00:00.000Z_2013-01-10T08:00:00.000Z_2013-01-10T08:13:47.830Z_v9','wikipedia_editstream','2013-03-15T20:49:52.348Z','2012-12-29T00:00:00.000Z','2013-01-10T08:00:00.000Z',0,'2013-01-10T08:13:47.830Z_v9',1,'{\"dataSource\":\"wikipedia_editstream\",\"interval\":\"2012-12-29T00:00:00.000Z/2013-01-10T08:00:00.000Z\",\"version\":\"2013-01-10T08:13:47.830Z_v9\",\"loadSpec\":{\"type\":\"s3_zip\",\"bucket\":\"static.druid.io\",\"key\":\"data/segments/wikipedia_editstream/2012-12-29T00:00:00.000Z_2013-01-10T08:00:00.000Z/2013-01-10T08:13:47.830Z_v9/0/index.zip\"},\"dimensions\":\"anonymous,area_code,city,continent_code,country_name,dma_code,geo,language,namespace,network,newpage,page,postal_code,region_lookup,robot,unpatrolled,user\",\"metrics\":\"added,count,deleted,delta,delta_hist,unique_users,variation\",\"shardSpec\":{\"type\":\"none\"},\"binaryVersion\":9,\"size\":446027801,\"identifier\":\"wikipedia_editstream_2012-12-29T00:00:00.000Z_2013-01-10T08:00:00.000Z_2013-01-10T08:13:47.830Z_v9\"}','1970-01-01T00:00:00.000Z');
+INSERT INTO druid_segments (id,dataSource,created_date,start,end,partitioned,version,used,payload,used_status_last_updated) VALUES ('wikipedia_editstream_2012-12-29T00:00:00.000Z_2013-01-10T08:00:00.000Z_2013-01-10T08:13:47.830Z_v9','wikipedia_editstream','2013-03-15T20:49:52.348Z','2012-12-29T00:00:00.000Z','2013-01-10T08:00:00.000Z',0,'2013-01-10T08:13:47.830Z_v9',1,'{\"dataSource\":\"wikipedia_editstream\",\"interval\":\"2012-12-29T00:00:00.000Z/2013-01-10T08:00:00.000Z\",\"version\":\"2013-01-10T08:13:47.830Z_v9\",\"loadSpec\":{\"type\":\"s3_zip\",\"bucket\":\"static.druid.io\",\"key\":\"data/segments/wikipedia_editstream/2012-12-29T00:00:00.000Z_2013-01-10T08:00:00.000Z/2013-01-10T08:13:47.830Z_v9/0/index.zip\"},\"dimensions\":\"anonymous,area_code,city,continent_code,country_name,dma_code,geo,language,namespace,network,newpage,page,postal_code,region_lookup,robot,unpatrolled,user\",\"metrics\":\"added,count,deleted,delta,delta_hist,unique_users,variation\",\"shardSpec\":{\"type\":\"none\"},\"binaryVersion\":9,\"size\":446027801,\"identifier\":\"wikipedia_editstream_2012-12-29T00:00:00.000Z_2013-01-10T08:00:00.000Z_2013-01-10T08:13:47.830Z_v9\"}','1970-01-01T00:00:00.000Z');
-INSERT INTO druid_segments (id, dataSource, created_date, start, end, partitioned, version, used, payload,used_flag_last_updated) VALUES ('wikipedia_2013-08-01T00:00:00.000Z_2013-08-02T00:00:00.000Z_2013-08-08T21:22:48.989Z', 'wikipedia', '2013-08-08T21:26:23.799Z', '2013-08-01T00:00:00.000Z', '2013-08-02T00:00:00.000Z', '0', '2013-08-08T21:22:48.989Z', '1', '{\"dataSource\":\"wikipedia\",\"interval\":\"2013-08-01T00:00:00.000Z/2013-08-02T00:00:00.000Z\",\"version\":\"2013-08-08T21:22:48.989Z\",\"loadSpec\":{\"type\":\"s3_zip\",\"bucket\":\"static.druid.io\",\"key\":\"data/segments/wikipedia/20130801T000000.000Z_20130802T000000.000Z/2013-08-08T21_22_48.989Z/0/index.zip\"},\"dimensions\":\"dma_code,continent_code,geo,area_code,robot,country_name,network,city,namespace,anonymous,unpatrolled,page,postal_code,language,newpage,user,region_lookup\",\"metrics\":\"count,delta,variation,added,deleted\",\"shardSpec\":{\"type\":\"none\"},\"binaryVersion\":9,\"size\":24664730,\"identifier\":\"wikipedia_2013-08-01T00:00:00.000Z_2013-08-02T00:00:00.000Z_2013-08-08T21:22:48.989Z\"}','1970-01-01T00:00:00.000Z');
+INSERT INTO druid_segments (id, dataSource, created_date, start, end, partitioned, version, used, payload,used_status_last_updated) VALUES ('wikipedia_2013-08-01T00:00:00.000Z_2013-08-02T00:00:00.000Z_2013-08-08T21:22:48.989Z', 'wikipedia', '2013-08-08T21:26:23.799Z', '2013-08-01T00:00:00.000Z', '2013-08-02T00:00:00.000Z', '0', '2013-08-08T21:22:48.989Z', '1', '{\"dataSource\":\"wikipedia\",\"interval\":\"2013-08-01T00:00:00.000Z/2013-08-02T00:00:00.000Z\",\"version\":\"2013-08-08T21:22:48.989Z\",\"loadSpec\":{\"type\":\"s3_zip\",\"bucket\":\"static.druid.io\",\"key\":\"data/segments/wikipedia/20130801T000000.000Z_20130802T000000.000Z/2013-08-08T21_22_48.989Z/0/index.zip\"},\"dimensions\":\"dma_code,continent_code,geo,area_code,robot,country_name,network,city,namespace,anonymous,unpatrolled,page,postal_code,language,newpage,user,region_lookup\",\"metrics\":\"count,delta,variation,added,deleted\",\"shardSpec\":{\"type\":\"none\"},\"binaryVersion\":9,\"size\":24664730,\"identifier\":\"wikipedia_2013-08-01T00:00:00.000Z_2013-08-02T00:00:00.000Z_2013-08-08T21:22:48.989Z\"}','1970-01-01T00:00:00.000Z');

@@ -14,4 +14,4 @@
 -- limitations under the License.

 INSERT INTO druid_tasks (id, created_date, datasource, payload, status_payload, active) VALUES ('index_auth_test_2030-04-30T01:13:31.893Z', '2030-04-30T01:13:31.893Z', 'auth_test', '{\"id\":\"index_auth_test_2030-04-30T01:13:31.893Z\",\"created_date\":\"2030-04-30T01:13:31.893Z\",\"datasource\":\"auth_test\",\"active\":0}', '{\"id\":\"index_auth_test_2030-04-30T01:13:31.893Z\",\"status\":\"SUCCESS\",\"duration\":1}', 0);
-INSERT INTO druid_segments (id,dataSource,created_date,start,end,partitioned,version,used,payload,used_flag_last_updated) VALUES ('auth_test_2012-12-29T00:00:00.000Z_2013-01-10T08:00:00.000Z_2013-01-10T08:13:47.830Z_v9','auth_test','2013-03-15T20:49:52.348Z','2012-12-29T00:00:00.000Z','2013-01-10T08:00:00.000Z',0,'2013-01-10T08:13:47.830Z_v9',1,'{\"dataSource\":\"auth_test\",\"interval\":\"2012-12-29T00:00:00.000Z/2013-01-10T08:00:00.000Z\",\"version\":\"2013-01-10T08:13:47.830Z_v9\",\"loadSpec\":{\"type\":\"s3_zip\",\"bucket\":\"static.druid.io\",\"key\":\"data/segments/wikipedia_editstream/2012-12-29T00:00:00.000Z_2013-01-10T08:00:00.000Z/2013-01-10T08:13:47.830Z_v9/0/index.zip\"},\"dimensions\":\"anonymous,area_code,city,continent_code,country_name,dma_code,geo,language,namespace,network,newpage,page,postal_code,region_lookup,robot,unpatrolled,user\",\"metrics\":\"added,count,deleted,delta,delta_hist,unique_users,variation\",\"shardSpec\":{\"type\":\"none\"},\"binaryVersion\":9,\"size\":446027801,\"identifier\":\"auth_test_2012-12-29T00:00:00.000Z_2013-01-10T08:00:00.000Z_2013-01-10T08:13:47.830Z_v9\"}','1970-01-01T00:00:00.000Z');
+INSERT INTO druid_segments (id,dataSource,created_date,start,end,partitioned,version,used,payload,used_status_last_updated) VALUES ('auth_test_2012-12-29T00:00:00.000Z_2013-01-10T08:00:00.000Z_2013-01-10T08:13:47.830Z_v9','auth_test','2013-03-15T20:49:52.348Z','2012-12-29T00:00:00.000Z','2013-01-10T08:00:00.000Z',0,'2013-01-10T08:13:47.830Z_v9',1,'{\"dataSource\":\"auth_test\",\"interval\":\"2012-12-29T00:00:00.000Z/2013-01-10T08:00:00.000Z\",\"version\":\"2013-01-10T08:13:47.830Z_v9\",\"loadSpec\":{\"type\":\"s3_zip\",\"bucket\":\"static.druid.io\",\"key\":\"data/segments/wikipedia_editstream/2012-12-29T00:00:00.000Z_2013-01-10T08:00:00.000Z/2013-01-10T08:13:47.830Z_v9/0/index.zip\"},\"dimensions\":\"anonymous,area_code,city,continent_code,country_name,dma_code,geo,language,namespace,network,newpage,page,postal_code,region_lookup,robot,unpatrolled,user\",\"metrics\":\"added,count,deleted,delta,delta_hist,unique_users,variation\",\"shardSpec\":{\"type\":\"none\"},\"binaryVersion\":9,\"size\":446027801,\"identifier\":\"auth_test_2012-12-29T00:00:00.000Z_2013-01-10T08:00:00.000Z_2013-01-10T08:13:47.830Z_v9\"}','1970-01-01T00:00:00.000Z');
@@ -88,12 +88,4 @@ public interface MetadataStorageConnector
   void createSupervisorsTable();

   void deleteAllRecords(String tableName);

-  /**
-   * Upgrade Compatibility Method.
-   *
-   * A new column, used_flag_last_updated, is added to druid_segments table. This method alters the table to add the column to make
-   * a cluster's metastore tables compatible with the updated Druid codebase in 0.24.x+
-   */
-  void alterSegmentTableAddUsedFlagLastUpdated();
 }

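With the compatibility method removed from the public interface, operators who cannot run `CreateTables` can still apply the same migration by hand. A minimal sketch, assuming the default `druid_segments` table name (a non-default `druid.metadata.storage.tables.base` changes the prefix):

```sql
-- Manual equivalent of the removed upgrade-compatibility method: add the
-- tracking column so the metadata schema matches Druid 0.24.x+.
ALTER TABLE druid_segments ADD used_status_last_updated VARCHAR(255);
```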
@@ -89,10 +89,4 @@ public class TestMetadataStorageConnector implements MetadataStorageConnector
   {
     throw new UnsupportedOperationException();
   }

-  @Override
-  public void alterSegmentTableAddUsedFlagLastUpdated()
-  {
-    throw new UnsupportedOperationException();
-  }
 }

@@ -59,8 +59,8 @@ public class SQLMetadataStorageUpdaterJobHandler implements MetadataStorageUpdat
   {
     final PreparedBatch batch = handle.prepareBatch(
         StringUtils.format(
-            "INSERT INTO %1$s (id, dataSource, created_date, start, %2$send%2$s, partitioned, version, used, payload, used_flag_last_updated) "
-            + "VALUES (:id, :dataSource, :created_date, :start, :end, :partitioned, :version, :used, :payload, :used_flag_last_updated)",
+            "INSERT INTO %1$s (id, dataSource, created_date, start, %2$send%2$s, partitioned, version, used, payload, used_status_last_updated) "
+            + "VALUES (:id, :dataSource, :created_date, :start, :end, :partitioned, :version, :used, :payload, :used_status_last_updated)",
             tableName, connector.getQuoteString()
         )
     );
@@ -77,7 +77,7 @@ public class SQLMetadataStorageUpdaterJobHandler implements MetadataStorageUpdat
             .put("version", segment.getVersion())
             .put("used", true)
             .put("payload", mapper.writeValueAsBytes(segment))
-            .put("used_flag_last_updated", now)
+            .put("used_status_last_updated", now)
             .build()
     );
     log.info("Published %s", segment.getId());

@@ -1419,8 +1419,8 @@ public class IndexerSQLMetadataStorageCoordinator implements IndexerMetadataStor

     PreparedBatch preparedBatch = handle.prepareBatch(
         StringUtils.format(
-            "INSERT INTO %1$s (id, dataSource, created_date, start, %2$send%2$s, partitioned, version, used, payload, used_flag_last_updated) "
-            + "VALUES (:id, :dataSource, :created_date, :start, :end, :partitioned, :version, :used, :payload, :used_flag_last_updated)",
+            "INSERT INTO %1$s (id, dataSource, created_date, start, %2$send%2$s, partitioned, version, used, payload, used_status_last_updated) "
+            + "VALUES (:id, :dataSource, :created_date, :start, :end, :partitioned, :version, :used, :payload, :used_status_last_updated)",
             dbTables.getSegmentsTable(),
             connector.getQuoteString()
         )
@@ -1439,7 +1439,7 @@ public class IndexerSQLMetadataStorageCoordinator implements IndexerMetadataStor
           .bind("version", segment.getVersion())
           .bind("used", usedSegments.contains(segment))
           .bind("payload", jsonMapper.writeValueAsBytes(segment))
-          .bind("used_flag_last_updated", now);
+          .bind("used_status_last_updated", now);
     }
     final int[] affectedRows = preparedBatch.execute();
     final boolean succeeded = Arrays.stream(affectedRows).allMatch(eachAffectedRows -> eachAffectedRows == 1);

@@ -198,29 +198,25 @@ public abstract class SQLMetadataConnector implements MetadataStorageConnector
     return false;
   }

+  /**
+   * Creates the given table and indexes if the table doesn't already exist.
+   */
   public void createTable(final String tableName, final Iterable<String> sql)
   {
     try {
-      retryWithHandle(
-          new HandleCallback<Void>()
-          {
-            @Override
-            public Void withHandle(Handle handle)
-            {
-              if (!tableExists(handle, tableName)) {
-                log.info("Creating table [%s]", tableName);
-                final Batch batch = handle.createBatch();
-                for (String s : sql) {
-                  batch.add(s);
-                }
-                batch.execute();
-              } else {
-                log.info("Table [%s] already exists", tableName);
-              }
-              return null;
-            }
-          }
-      );
+      retryWithHandle(handle -> {
+        if (tableExists(handle, tableName)) {
+          log.info("Table[%s] already exists", tableName);
+        } else {
+          log.info("Creating table[%s]", tableName);
+          final Batch batch = handle.createBatch();
+          for (String s : sql) {
+            batch.add(s);
+          }
+          batch.execute();
+        }
+        return null;
+      });
     }
     catch (Exception e) {
       log.warn(e, "Exception creating table");
@@ -236,26 +232,19 @@ public abstract class SQLMetadataConnector implements MetadataStorageConnector
   private void alterTable(final String tableName, final Iterable<String> sql)
   {
     try {
-      retryWithHandle(
-          new HandleCallback<Void>()
-          {
-            @Override
-            public Void withHandle(Handle handle)
-            {
-              if (tableExists(handle, tableName)) {
-                final Batch batch = handle.createBatch();
-                for (String s : sql) {
-                  log.info("Altering table[%s], with command: %s", tableName, s);
-                  batch.add(s);
-                }
-                batch.execute();
-              } else {
-                log.info("Table[%s] doesn't exist", tableName);
-              }
-              return null;
-            }
-          }
-      );
+      retryWithHandle(handle -> {
+        if (tableExists(handle, tableName)) {
+          final Batch batch = handle.createBatch();
+          for (String s : sql) {
+            log.info("Altering table[%s], with command: %s", tableName, s);
+            batch.add(s);
+          }
+          batch.execute();
+        } else {
+          log.info("Table[%s] doesn't exist.", tableName);
+        }
+        return null;
+      });
     }
     catch (Exception e) {
       log.warn(e, "Exception Altering table[%s]", tableName);
@@ -331,7 +320,7 @@ public abstract class SQLMetadataConnector implements MetadataStorageConnector
             + " version VARCHAR(255) NOT NULL,\n"
             + " used BOOLEAN NOT NULL,\n"
             + " payload %2$s NOT NULL,\n"
-            + " used_flag_last_updated VARCHAR(255) NOT NULL,\n"
+            + " used_status_last_updated VARCHAR(255) NOT NULL,\n"
             + " PRIMARY KEY (id)\n"
             + ")",
             tableName, getPayloadType(), getQuoteString(), getCollation()
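Put together with the column list used by the INSERT statements above, the DDL built here yields a segments table along these lines. This is a sketch only: the payload type, collation, quoting of the reserved word `end`, and table name all vary with the configured metadata store (MySQL flavor assumed here):

```sql
-- Approximate post-migration shape of the segments table (MySQL-flavored sketch;
-- payload type and quoting are database-specific in the real DDL).
CREATE TABLE druid_segments (
  id VARCHAR(255) NOT NULL,
  dataSource VARCHAR(255) NOT NULL,
  created_date VARCHAR(255) NOT NULL,
  start VARCHAR(255) NOT NULL,
  `end` VARCHAR(255) NOT NULL,
  partitioned BOOLEAN NOT NULL,
  version VARCHAR(255) NOT NULL,
  used BOOLEAN NOT NULL,
  payload LONGBLOB NOT NULL,
  used_status_last_updated VARCHAR(255) NOT NULL,
  PRIMARY KEY (id)
);
```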
@@ -425,18 +414,18 @@ public abstract class SQLMetadataConnector implements MetadataStorageConnector

   private void alterEntryTableAddTypeAndGroupId(final String tableName)
   {
-    ArrayList<String> statements = new ArrayList<>();
-    if (!tableHasColumn(tableName, "type")) {
-      log.info("Adding 'type' column to %s", tableName);
-      statements.add(StringUtils.format("ALTER TABLE %1$s ADD COLUMN type VARCHAR(255)", tableName));
-    } else {
-      log.info("%s already has 'type' column", tableName);
+    List<String> statements = new ArrayList<>();
+    if (tableHasColumn(tableName, "type")) {
+      log.info("Table[%s] already has column[type].", tableName);
+    } else {
+      log.info("Adding column[type] to table[%s].", tableName);
+      statements.add(StringUtils.format("ALTER TABLE %1$s ADD COLUMN type VARCHAR(255)", tableName));
     }
-    if (!tableHasColumn(tableName, "group_id")) {
-      log.info("Adding 'group_id' column to %s", tableName);
-      statements.add(StringUtils.format("ALTER TABLE %1$s ADD COLUMN group_id VARCHAR(255)", tableName));
-    } else {
-      log.info("%s already has 'group_id' column", tableName);
+    if (tableHasColumn(tableName, "group_id")) {
+      log.info("Table[%s] already has column[group_id].", tableName);
+    } else {
+      log.info("Adding column[group_id] to table[%s].", tableName);
+      statements.add(StringUtils.format("ALTER TABLE %1$s ADD COLUMN group_id VARCHAR(255)", tableName));
     }
     if (!statements.isEmpty()) {
       alterTable(tableName, statements);
@@ -502,28 +491,24 @@ public abstract class SQLMetadataConnector implements MetadataStorageConnector
   }

   /**
-   * Adds the used_flag_last_updated column to the Druid segment table.
-   *
-   * This is public due to allow the UpdateTables cli tool to use for upgrade prep.
+   * Adds the used_status_last_updated column to the "segments" table.
    */
-  @Override
-  public void alterSegmentTableAddUsedFlagLastUpdated()
+  protected void alterSegmentTableAddUsedFlagLastUpdated()
   {
-    String tableName = tablesConfigSupplier.get().getSegmentsTable();
-    if (!tableHasColumn(tableName, "used_flag_last_updated")) {
-      log.info("Adding 'used_flag_last_updated' column to %s", tableName);
+    final String tableName = tablesConfigSupplier.get().getSegmentsTable();
+    if (tableHasColumn(tableName, "used_status_last_updated")) {
+      log.info("Table[%s] already has column[used_status_last_updated].", tableName);
+    } else {
+      log.info("Adding column[used_status_last_updated] to table[%s].", tableName);
       alterTable(
           tableName,
           ImmutableList.of(
               StringUtils.format(
-                  "ALTER TABLE %1$s \n"
-                  + "ADD used_flag_last_updated varchar(255)",
+                  "ALTER TABLE %1$s ADD used_status_last_updated varchar(255)",
                   tableName
               )
           )
       );
-    } else {
-      log.info("%s already has used_flag_last_updated column", tableName);
     }
   }

@@ -676,7 +661,7 @@ public abstract class SQLMetadataConnector implements MetadataStorageConnector
     }
     // Called outside of the above conditional because we want to validate the table
     // regardless of cluster configuration for creating tables.
-    validateSegmentTable();
+    validateSegmentsTable();
   }

   @Override
@@ -724,14 +709,7 @@ public abstract class SQLMetadataConnector implements MetadataStorageConnector
   )
   {
     return getDBI().withHandle(
-        new HandleCallback<byte[]>()
-        {
-          @Override
-          public byte[] withHandle(Handle handle)
-          {
-            return lookupWithHandle(handle, tableName, keyColumn, valueColumn, key);
-          }
-        }
+        handle -> lookupWithHandle(handle, tableName, keyColumn, valueColumn, key)
     );
   }

@@ -989,61 +967,47 @@ public abstract class SQLMetadataConnector implements MetadataStorageConnector
   }

   /**
-   * Interrogate table metadata and return true or false depending on the existance of the indicated column
+   * Checks table metadata to determine if the given column exists in the table.
    *
-   * public visibility because DerbyConnector needs to override thanks to uppercase table and column names invalidating
-   * this implementation.
-   *
-   * @param tableName The table being interrogated
-   * @param columnName The column being looked for
-   * @return boolean indicating the existence of the column in question
+   * @return true if the column exists in the table, false otherwise
    */
-  public boolean tableHasColumn(String tableName, String columnName)
+  protected boolean tableHasColumn(String tableName, String columnName)
   {
-    return getDBI().withHandle(
-        new HandleCallback<Boolean>()
-        {
-          @Override
-          public Boolean withHandle(Handle handle)
-          {
-            try {
-              if (tableExists(handle, tableName)) {
-                DatabaseMetaData dbMetaData = handle.getConnection().getMetaData();
-                ResultSet columns = dbMetaData.getColumns(
-                    null,
-                    null,
-                    tableName,
-                    columnName
-                );
-                return columns.next();
-              } else {
-                return false;
-              }
-            }
-            catch (SQLException e) {
-              return false;
-            }
-          }
-        }
-    );
+    return getDBI().withHandle(handle -> {
+      try {
+        if (tableExists(handle, tableName)) {
+          DatabaseMetaData dbMetaData = handle.getConnection().getMetaData();
+          ResultSet columns = dbMetaData.getColumns(null, null, tableName, columnName);
+          return columns.next();
+        } else {
+          return false;
+        }
+      }
+      catch (SQLException e) {
+        return false;
+      }
+    });
   }

   /**
-   * Ensure that the segment table has the proper schema required to run Druid properly.
+   * Ensures that the "segments" table has a schema compatible with the current version of Druid.
    *
-   * Throws RuntimeException if the column does not exist. There is no recovering from an invalid schema,
-   * the program should crash.
+   * @throws RuntimeException if the "segments" table has an incompatible schema.
+   * There is no recovering from an invalid schema, the program should crash.
    *
-   * See <a href="https://druid.apache.org/docs/latest/operations/upgrade-prep.html">upgrade-prep docs</a> for info
-   * on manually preparing your segment table.
+   * @see <a href="https://druid.apache.org/docs/latest/operations/metadata-migration/">Metadata migration</a> for info
+   * on manually preparing the "segments" table.
    */
-  private void validateSegmentTable()
+  private void validateSegmentsTable()
   {
-    if (tableHasColumn(tablesConfigSupplier.get().getSegmentsTable(), "used_flag_last_updated")) {
-      return;
+    if (tableHasColumn(tablesConfigSupplier.get().getSegmentsTable(), "used_status_last_updated")) {
+      // do nothing
     } else {
-      throw new RuntimeException("Invalid Segment Table Schema! No used_flag_last_updated column!" +
-          " See https://druid.apache.org/docs/latest/operations/upgrade-prep.html for more info on remediation");
+      throw new ISE(
+          "Cannot start Druid as table[%s] has an incompatible schema."
+          + " Reason: Column [used_status_last_updated] does not exist in table."
+          + " See https://druid.apache.org/docs/latest/operations/upgrade-prep.html for more info on remediation.",
+          tablesConfigSupplier.get().getSegmentsTable()
+      );
     }
   }
 }

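Operators can run the same schema check by hand before restarting nodes on the new version. A minimal sketch, assuming the default segments table name: the statement errors out with an "unknown column" message on a pre-migration table and returns zero rows on a correctly migrated one.

```sql
-- Quick manual equivalent of the startup validation: fails if the column is
-- missing, returns an empty result set (WHERE 1 = 0) if the schema is current.
SELECT used_status_last_updated
FROM druid_segments
WHERE 1 = 0;
```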
@@ -55,8 +55,8 @@ public class SQLMetadataSegmentPublisher implements MetadataSegmentPublisher
     this.config = config;
     this.connector = connector;
     this.statement = StringUtils.format(
-        "INSERT INTO %1$s (id, dataSource, created_date, start, %2$send%2$s, partitioned, version, used, payload, used_flag_last_updated) "
-        + "VALUES (:id, :dataSource, :created_date, :start, :end, :partitioned, :version, :used, :payload, :used_flag_last_updated)",
+        "INSERT INTO %1$s (id, dataSource, created_date, start, %2$send%2$s, partitioned, version, used, payload, used_status_last_updated) "
+        + "VALUES (:id, :dataSource, :created_date, :start, :end, :partitioned, :version, :used, :payload, :used_status_last_updated)",
         config.getSegmentsTable(), connector.getQuoteString()
     );
   }
@@ -131,7 +131,7 @@ public class SQLMetadataSegmentPublisher implements MetadataSegmentPublisher
           .bind("version", version)
           .bind("used", used)
           .bind("payload", payload)
-          .bind("used_flag_last_updated", usedFlagLastUpdated)
+          .bind("used_status_last_updated", usedFlagLastUpdated)
           .execute();

       return null;

@@ -140,8 +140,8 @@ public interface SegmentsMetadataManager

   /**
    * Returns top N unused segment intervals with the end time no later than the specified maxEndTime and
-   * used_flag_last_updated time no later than maxLastUsedTime when ordered by segment start time, end time. Any segment having no
-   * used_flag_last_updated time due to upgrade from legacy Druid means maxUsedFlagLastUpdatedTime is ignored for that segment.
+   * used_status_last_updated time no later than maxLastUsedTime when ordered by segment start time, end time. Any segment having no
+   * used_status_last_updated time due to upgrade from legacy Druid means maxUsedFlagLastUpdatedTime is ignored for that segment.
    */
   List<Interval> getUnusedSegmentIntervals(
       String dataSource,
@@ -154,7 +154,7 @@ public interface SegmentsMetadataManager
   void poll();

   /**
-   * Populates used_flag_last_updated column in the segments table iteratively until there are no segments with a NULL
+   * Populates used_status_last_updated column in the segments table iteratively until there are no segments with a NULL
    * value for that column.
    */
   void populateUsedFlagLastUpdatedAsync();

@@ -337,7 +337,7 @@ public class SqlSegmentsMetadataManager implements SegmentsMetadataManager
   }

   /**
-   * Populate used_flag_last_updated for unused segments whose current value for said column is NULL
+   * Populate used_status_last_updated for unused segments whose current value for said column is NULL
    *
    * The updates are made incrementally.
    */
@@ -346,7 +346,7 @@ public class SqlSegmentsMetadataManager implements SegmentsMetadataManager
   {
     String segmentsTable = getSegmentsTable();
     log.info(
-        "Populating used_flag_last_updated with non-NULL values for unused segments in [%s]",
+        "Populating used_status_last_updated with non-NULL values for unused segments in [%s]",
         segmentsTable
     );

@@ -364,7 +364,7 @@ public class SqlSegmentsMetadataManager implements SegmentsMetadataManager
   {
     segmentsToUpdate.addAll(handle.createQuery(
         StringUtils.format(
-            "SELECT id FROM %1$s WHERE used_flag_last_updated IS NULL and used = :used %2$s",
+            "SELECT id FROM %1$s WHERE used_status_last_updated IS NULL and used = :used %2$s",
            segmentsTable,
            connector.limitClause(limit)
        )
@@ -386,7 +386,7 @@ public class SqlSegmentsMetadataManager implements SegmentsMetadataManager
   public Void withHandle(Handle handle)
   {
     Batch updateBatch = handle.createBatch();
-    String sql = "UPDATE %1$s SET used_flag_last_updated = '%2$s' WHERE id = '%3$s'";
+    String sql = "UPDATE %1$s SET used_status_last_updated = '%2$s' WHERE id = '%3$s'";
     String now = DateTimes.nowUtc().toString();
     for (String id : segmentsToUpdate) {
       updateBatch.add(StringUtils.format(sql, segmentsTable, now, id));
@@ -398,13 +398,13 @@ public class SqlSegmentsMetadataManager implements SegmentsMetadataManager
     );
   }
   catch (Exception e) {
-    log.warn(e, "Population of used_flag_last_updated in [%s] has failed. There may be unused segments with"
-        + " NULL values for used_flag_last_updated that won't be killed!", segmentsTable);
+    log.warn(e, "Population of used_status_last_updated in [%s] has failed. There may be unused segments with"
+        + " NULL values for used_status_last_updated that won't be killed!", segmentsTable);
     return;
   }

   totalUpdatedEntries += segmentsToUpdate.size();
-  log.info("Updated a batch of %d rows in [%s] with a valid used_flag_last_updated date",
+  log.info("Updated a batch of %d rows in [%s] with a valid used_status_last_updated date",
       segmentsToUpdate.size(),
       segmentsTable
   );
@@ -417,7 +417,7 @@ public class SqlSegmentsMetadataManager implements SegmentsMetadataManager
     }
   }
   log.info(
-      "Finished updating [%s] with a valid used_flag_last_updated date. %d rows updated",
+      "Finished updating [%s] with a valid used_status_last_updated date. %d rows updated",
      segmentsTable,
      totalUpdatedEntries
  );

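In SQL terms, each round of this incremental backfill is one bounded SELECT followed by one UPDATE per returned id. A hedged sketch of a single round with example values; the real code binds `used = false`, uses a database-specific limit clause, and stamps the current UTC time:

```sql
-- One backfill round, sketched with example values (limit syntax and boolean
-- literals vary by metadata store).
SELECT id FROM druid_segments
WHERE used_status_last_updated IS NULL AND used = false
LIMIT 100;

-- Then, for each id returned above:
UPDATE druid_segments
SET used_status_last_updated = '2023-08-21T00:00:00.000Z'
WHERE id = 'twitterstream_2013-01-03T00:00:00.000Z_2013-01-04T00:00:00.000Z_2013-01-04T04:09:13.590Z_v9';
```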
@@ -630,9 +630,9 @@ public class SqlSegmentsMetadataManager implements SegmentsMetadataManager
     try {
       int numUpdatedDatabaseEntries = connector.getDBI().withHandle(
           (Handle handle) -> handle
-              .createStatement(StringUtils.format("UPDATE %s SET used=true, used_flag_last_updated = :used_flag_last_updated WHERE id = :id", getSegmentsTable()))
+              .createStatement(StringUtils.format("UPDATE %s SET used=true, used_status_last_updated = :used_status_last_updated WHERE id = :id", getSegmentsTable()))
              .bind("id", segmentId)
-              .bind("used_flag_last_updated", DateTimes.nowUtc().toString())
+              .bind("used_status_last_updated", DateTimes.nowUtc().toString())
              .execute()
      );
      // Unlike bulk markAsUsed methods: markAsUsedAllNonOvershadowedSegmentsInDataSource(),
@@ -1093,7 +1093,7 @@ public class SqlSegmentsMetadataManager implements SegmentsMetadataManager
       DateTime maxUsedFlagLastUpdatedTime
   )
   {
-    // Note that we handle the case where used_flag_last_updated IS NULL here to allow smooth transition to Druid version that uses used_flag_last_updated column
+    // Note that we handle the case where used_status_last_updated IS NULL here to allow smooth transition to Druid version that uses used_status_last_updated column
     return connector.inReadOnlyTransaction(
         new TransactionCallback<List<Interval>>()
         {
@@ -1104,7 +1104,7 @@ public class SqlSegmentsMetadataManager implements SegmentsMetadataManager
         .createQuery(
             StringUtils.format(
                 "SELECT start, %2$send%2$s FROM %1$s WHERE dataSource = :dataSource AND "
-                + "%2$send%2$s <= :end AND used = false AND used_flag_last_updated IS NOT NULL AND used_flag_last_updated <= :used_flag_last_updated ORDER BY start, %2$send%2$s",
+                + "%2$send%2$s <= :end AND used = false AND used_status_last_updated IS NOT NULL AND used_status_last_updated <= :used_status_last_updated ORDER BY start, %2$send%2$s",
                getSegmentsTable(),
                connector.getQuoteString()
            )
@@ -1113,7 +1113,7 @@ public class SqlSegmentsMetadataManager implements SegmentsMetadataManager
         .setMaxRows(limit)
         .bind("dataSource", dataSource)
         .bind("end", maxEndTime.toString())
-        .bind("used_flag_last_updated", maxUsedFlagLastUpdatedTime.toString())
+        .bind("used_status_last_updated", maxUsedFlagLastUpdatedTime.toString())
         .map(
             new BaseResultSetMapper<Interval>()
             {

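After parameter substitution, the kill-eligibility query looks roughly like this. A sketch with example bindings taken from the tests below; `end` is quoted because it is a reserved word in several databases, and the actual quote character comes from `connector.getQuoteString()`:

```sql
-- Example rendering of the unused-interval query with placeholder values bound.
SELECT start, "end"
FROM druid_segments
WHERE dataSource = 'wikipedia'
  AND "end" <= '3000-01-01T00:00:00.000Z'
  AND used = false
  AND used_status_last_updated IS NOT NULL
  AND used_status_last_updated <= '2023-08-20T00:00:00.000Z'
ORDER BY start, "end";
```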
@@ -149,7 +149,7 @@ public class SqlSegmentsMetadataQuery
     final PreparedBatch batch =
         handle.prepareBatch(
             StringUtils.format(
-                "UPDATE %s SET used = ?, used_flag_last_updated = ? WHERE datasource = ? AND id = ?",
+                "UPDATE %s SET used = ?, used_status_last_updated = ? WHERE datasource = ? AND id = ?",
                 dbTables.getSegmentsTable()
             )
         );
@@ -176,13 +176,13 @@ public class SqlSegmentsMetadataQuery
     return handle
         .createStatement(
             StringUtils.format(
-                "UPDATE %s SET used=:used, used_flag_last_updated = :used_flag_last_updated WHERE dataSource = :dataSource",
+                "UPDATE %s SET used=:used, used_status_last_updated = :used_status_last_updated WHERE dataSource = :dataSource",
                dbTables.getSegmentsTable()
            )
        )
        .bind("dataSource", dataSource)
        .bind("used", false)
-        .bind("used_flag_last_updated", DateTimes.nowUtc().toString())
+        .bind("used_status_last_updated", DateTimes.nowUtc().toString())
        .execute();
   } else if (Intervals.canCompareEndpointsAsStrings(interval)
              && interval.getStart().getYear() == interval.getEnd().getYear()) {
@@ -192,7 +192,7 @@ public class SqlSegmentsMetadataQuery
     return handle
         .createStatement(
             StringUtils.format(
-                "UPDATE %s SET used=:used, used_flag_last_updated = :used_flag_last_updated WHERE dataSource = :dataSource AND %s",
+                "UPDATE %s SET used=:used, used_status_last_updated = :used_status_last_updated WHERE dataSource = :dataSource AND %s",
                dbTables.getSegmentsTable(),
                IntervalMode.CONTAINS.makeSqlCondition(connector.getQuoteString(), ":start", ":end")
            )
@@ -201,7 +201,7 @@ public class SqlSegmentsMetadataQuery
         .bind("used", false)
         .bind("start", interval.getStart().toString())
         .bind("end", interval.getEnd().toString())
-        .bind("used_flag_last_updated", DateTimes.nowUtc().toString())
+        .bind("used_status_last_updated", DateTimes.nowUtc().toString())
        .execute();
   } else {
     // Retrieve, then drop, since we can't write a WHERE clause directly.

@@ -385,10 +385,10 @@ public class IndexerSQLMetadataStorageCoordinatorTest
         (int) derbyConnector.getDBI().<Integer>withHandle(
             handle -> {
               String request = StringUtils.format(
-                  "UPDATE %s SET used = false, used_flag_last_updated = :used_flag_last_updated WHERE id = :id",
+                  "UPDATE %s SET used = false, used_status_last_updated = :used_status_last_updated WHERE id = :id",
                  derbyConnectorRule.metadataTablesConfigSupplier().get().getSegmentsTable()
              );
-              return handle.createStatement(request).bind("id", segment.getId().toString()).bind("used_flag_last_updated", DateTimes.nowUtc().toString()).execute();
+              return handle.createStatement(request).bind("id", segment.getId().toString()).bind("used_status_last_updated", DateTimes.nowUtc().toString()).execute();
            }
        )
    );
@@ -433,8 +433,8 @@ public class IndexerSQLMetadataStorageCoordinatorTest
         handle -> {
           PreparedBatch preparedBatch = handle.prepareBatch(
               StringUtils.format(
-                  "INSERT INTO %1$s (id, dataSource, created_date, start, %2$send%2$s, partitioned, version, used, payload, used_flag_last_updated) "
-                  + "VALUES (:id, :dataSource, :created_date, :start, :end, :partitioned, :version, :used, :payload, :used_flag_last_updated)",
+                  "INSERT INTO %1$s (id, dataSource, created_date, start, %2$send%2$s, partitioned, version, used, payload, used_status_last_updated) "
+                  + "VALUES (:id, :dataSource, :created_date, :start, :end, :partitioned, :version, :used, :payload, :used_status_last_updated)",
                  table,
                  derbyConnector.getQuoteString()
              )
@@ -450,7 +450,7 @@ public class IndexerSQLMetadataStorageCoordinatorTest
               .bind("version", segment.getVersion())
               .bind("used", true)
               .bind("payload", mapper.writeValueAsBytes(segment))
-              .bind("used_flag_last_updated", DateTimes.nowUtc().toString());
+              .bind("used_status_last_updated", DateTimes.nowUtc().toString());
           }

           final int[] affectedRows = preparedBatch.execute();

@@ -168,7 +168,7 @@ public class SQLMetadataConnectorTest
   }

   /**
-   * This is a test for the upgrade path where a cluster is upgrading from a version that did not have used_flag_last_updated
+   * This is a test for the upgrade path where a cluster is upgrading from a version that did not have used_status_last_updated
    * in the segments table.
    */
   @Test
@@ -176,7 +176,7 @@ public class SQLMetadataConnectorTest
   {
     connector.createSegmentTable();

-    // Drop column used_flag_last_updated to bring us in line with pre-upgrade state
+    // Drop column used_status_last_updated to bring us in line with pre-upgrade state
     derbyConnectorRule.getConnector().retryWithHandle(
         new HandleCallback<Void>()
         {
@@ -186,7 +186,7 @@ public class SQLMetadataConnectorTest
             final Batch batch = handle.createBatch();
             batch.add(
                 StringUtils.format(
-                    "ALTER TABLE %1$s DROP COLUMN USED_FLAG_LAST_UPDATED",
+                    "ALTER TABLE %1$s DROP COLUMN USED_STATUS_LAST_UPDATED",
                    derbyConnectorRule.metadataTablesConfigSupplier()
                                      .get()
                                      .getSegmentsTable()
@@ -202,7 +202,7 @@ public class SQLMetadataConnectorTest
     connector.alterSegmentTableAddUsedFlagLastUpdated();
     connector.tableHasColumn(
         derbyConnectorRule.metadataTablesConfigSupplier().get().getSegmentsTable(),
-        "USED_FLAG_LAST_UPDATED"
+        "USED_STATUS_LAST_UPDATED"
    );
  }

@@ -388,7 +388,7 @@ public class SqlSegmentsMetadataManagerTest
     sqlSegmentsMetadataManager.startPollingDatabasePeriodically();
     sqlSegmentsMetadataManager.poll();

-    // We alter the segment table to allow nullable used_flag_last_updated in order to test compatibility during druid upgrade from version without used_flag_last_updated.
+    // We alter the segment table to allow nullable used_status_last_updated in order to test compatibility during druid upgrade from version without used_status_last_updated.
     derbyConnectorRule.allowUsedFlagLastUpdatedToBeNullable();

     Assert.assertTrue(sqlSegmentsMetadataManager.isPollingDatabasePeriodically());
@@ -447,9 +447,9 @@ public class SqlSegmentsMetadataManagerTest
         sqlSegmentsMetadataManager.getUnusedSegmentIntervals("wikipedia", DateTimes.of("3000"), 5, DateTimes.nowUtc().minus(Duration.parse("PT86400S")))
     );

-    // One of the 3 segments in newDs has a null used_flag_last_updated which should mean getUnusedSegmentIntervals never returns it
-    // One of the 3 segments in newDs has a used_flag_last_updated older than 1 day which means it should also be returned
-    // The last of the 3 segemns in newDs has a used_flag_last_updated date less than one day and should not be returned
+    // One of the 3 segments in newDs has a null used_status_last_updated which should mean getUnusedSegmentIntervals never returns it
+    // One of the 3 segments in newDs has a used_status_last_updated older than 1 day which means it should also be returned
+    // The last of the 3 segments in newDs has a used_status_last_updated date less than one day and should not be returned
     Assert.assertEquals(
         ImmutableList.of(newSegment2.getInterval()),
         sqlSegmentsMetadataManager.getUnusedSegmentIntervals(newDs, DateTimes.of("3000"), 5, DateTimes.nowUtc().minus(Duration.parse("PT86400S")))
@@ -964,7 +964,7 @@ public class SqlSegmentsMetadataManagerTest
   {
     List<Map<String, Object>> lst = handle.select(
         StringUtils.format(
-            "SELECT * FROM %1$s WHERE USED_FLAG_LAST_UPDATED IS NULL",
+            "SELECT * FROM %1$s WHERE USED_STATUS_LAST_UPDATED IS NULL",
            derbyConnectorRule.metadataTablesConfigSupplier()
                              .get()
                              .getSegmentsTable()

@@ -151,7 +151,7 @@ public class TestDerbyConnector extends DerbyConnector
     final Batch batch = handle.createBatch();
     batch.add(
         StringUtils.format(
-            "ALTER TABLE %1$s ALTER COLUMN USED_FLAG_LAST_UPDATED NULL",
+            "ALTER TABLE %1$s ALTER COLUMN USED_STATUS_LAST_UPDATED NULL",
            dbTables.get().getSegmentsTable().toUpperCase(Locale.ENGLISH)
        )
    );

@@ -76,8 +76,7 @@ public class Main
         DumpSegment.class,
         ResetCluster.class,
         ValidateSegments.class,
-        ExportMetadata.class,
-        UpdateTables.class
+        ExportMetadata.class
     );
     builder.withGroup("tools")
            .withDescription("Various tools for working with Druid")

@@ -1,134 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements. See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership. The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License. You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing,
- * software distributed under the License is distributed on an
- * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
- * KIND, either express or implied. See the License for the
- * specific language governing permissions and limitations
- * under the License.
- */
-
-package org.apache.druid.cli;
-
-import com.github.rvesse.airline.annotations.Command;
-import com.github.rvesse.airline.annotations.Option;
-import com.github.rvesse.airline.annotations.restrictions.Required;
-import com.google.common.collect.ImmutableList;
-import com.google.inject.Injector;
-import com.google.inject.Key;
-import com.google.inject.Module;
-import org.apache.druid.guice.DruidProcessingModule;
-import org.apache.druid.guice.JsonConfigProvider;
-import org.apache.druid.guice.QueryRunnerFactoryModule;
-import org.apache.druid.guice.QueryableModule;
-import org.apache.druid.guice.annotations.Self;
-import org.apache.druid.java.util.common.logger.Logger;
-import org.apache.druid.metadata.MetadataStorageConnector;
-import org.apache.druid.metadata.MetadataStorageConnectorConfig;
-import org.apache.druid.metadata.MetadataStorageTablesConfig;
-import org.apache.druid.server.DruidNode;
-
-import java.util.List;
-
-@Command(
-    name = "metadata-update",
-    description = "Controlled update of metadata storage"
-)
-public class UpdateTables extends GuiceRunnable
-{
-  private static final String SEGMENTS_TABLE_ADD_USED_FLAG_LAST_UPDATED = "add-used-flag-last-updated-to-segments";
-
-  @Option(name = "--connectURI", description = "Database JDBC connection string")
-  @Required
-  private String connectURI;
-
-  @Option(name = "--user", description = "Database username")
-  @Required
-  private String user;
-
-  @Option(name = "--password", description = "Database password")
-  @Required
-  private String password;
-
-  @Option(name = "--base", description = "Base table name")
-  private String base;
-
-  @Option(name = "--action", description = "Action Name")
-  private String action_name;
-
-  private static final Logger log = new Logger(CreateTables.class);
-
-  public UpdateTables()
-  {
-    super(log);
-  }
-
-  @Override
-  protected List<? extends Module> getModules()
-  {
-    return ImmutableList.of(
-        // It's unknown why those modules are required in CreateTables, and if all of those modules are required or not.
-        // Maybe some of those modules could be removed.
-        // See https://github.com/apache/druid/pull/4429#discussion_r123602930
-        new DruidProcessingModule(),
-        new QueryableModule(),
-        new QueryRunnerFactoryModule(),
-        binder -> {
-          JsonConfigProvider.bindInstance(
-              binder,
-              Key.get(MetadataStorageConnectorConfig.class),
-              new MetadataStorageConnectorConfig()
-              {
-                @Override
-                public String getConnectURI()
-                {
-                  return connectURI;
-                }
-
-                @Override
-                public String getUser()
-                {
-                  return user;
-                }
-
-                @Override
-                public String getPassword()
-                {
-                  return password;
-                }
-              }
-          );
-          JsonConfigProvider.bindInstance(
-              binder,
-              Key.get(MetadataStorageTablesConfig.class),
-              MetadataStorageTablesConfig.fromBase(base)
-          );
-          JsonConfigProvider.bindInstance(
-              binder,
-              Key.get(DruidNode.class, Self.class),
-              new DruidNode("tools", "localhost", false, -1, null, true, false)
-          );
-        }
-    );
-  }
-
-  @Override
-  public void run()
-  {
-    final Injector injector = makeInjector();
-    MetadataStorageConnector dbConnector = injector.getInstance(MetadataStorageConnector.class);
-    if (SEGMENTS_TABLE_ADD_USED_FLAG_LAST_UPDATED.equals(action_name)) {
-      dbConnector.alterSegmentTableAddUsedFlagLastUpdated();
-    }
-  }
-}
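For reference, judging from the annotations in the deleted class, the tool registered itself under the `tools` command group as `metadata-update` and would have been invoked along the lines of `Main tools metadata-update --connectURI <jdbc-uri> --user <user> --password <password> --base <table-prefix> --action add-used-flag-last-updated-to-segments` (invocation reconstructed from the `@Command` and `@Option` declarations above; the exact launcher varies by deployment). Its single supported action delegated to the same column-adding logic that the connector now applies automatically when tables are created or validated.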