First and last aggregators cannot be used in an ingestion spec and should only be specified as part of queries.
Note that queries with first/last aggregators on a segment created with rollup enabled return the rolled-up value, not the first/last value within the raw ingested data.
#### `doubleFirst` aggregator
`doubleFirst` computes the metric value with the minimum timestamp, or 0 if no rows exist.
```json
{
"type" : "doubleFirst",
"name" : <output_name>,
"fieldName" : <metric_name>
}
```
#### `doubleLast` aggregator
`doubleLast` computes the metric value with the maximum timestamp, or 0 if no rows exist.
```json
{
"type" : "doubleLast",
"name" : <output_name>,
"fieldName" : <metric_name>
}
```
#### `longFirst` aggregator
`longFirst` computes the metric value with the minimum timestamp, or 0 if no rows exist.
```json
{
"type" : "longFirst",
"name" : <output_name>,
"fieldName" : <metric_name>
}
```
#### `longLast` aggregator
`longLast` computes the metric value with the maximum timestamp, or 0 if no rows exist.
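Its spec follows the same pattern as the other first/last aggregators above:
```json
{
"type" : "longLast",
"name" : <output_name>,
"fieldName" : <metric_name>
}
```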
#### `javascript` aggregator
JavaScript-based functionality is disabled by default. Please refer to the Druid <a href="../development/javascript.html">JavaScript programming guide</a> for guidelines about using Druid's JavaScript functionality, including instructions on how to enable it.
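For orientation, a JavaScript aggregator spec generally takes the shape sketched below; treat this as an illustrative example rather than a normative reference, and note that the field names `x` and `y` and the aggregation logic are made up for the example:
```json
{
"type": "javascript",
"name": "sum(log(x)*y) + 10",
"fieldNames": ["x", "y"],
"fnAggregate" : "function(current, a, b)      { return current + (Math.log(a) * b); }",
"fnCombine"   : "function(partialA, partialB) { return partialA + partialB; }",
"fnReset"     : "function()                   { return 10; }"
}
```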
#### `cardinality` aggregator
Each individual element of the "fields" list can be a String or [DimensionSpec](../querying/dimensionspecs.html). A String dimension in the fields list is equivalent to a DefaultDimensionSpec (no transformations).
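As an illustration, a cardinality aggregator spec has roughly the following shape; the placeholders stand in for your own output name and dimensions, and `byRow` is optional (defaulting to `false`):
```json
{
"type": "cardinality",
"name": <output_name>,
"fields": [ <dimension1>, <dimension2>, ... ],
"byRow": <false | true>
}
```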
When `byRow` is set to `false` (the default), the aggregator computes the cardinality of the set composed of the union of all dimension values for all the given dimensions.
* For a single dimension, this is equivalent to
```sql
SELECT COUNT(DISTINCT(dimension)) FROM <datasource>
```
* For multiple dimensions, this is equivalent to something akin to the SQL sketched after this list.
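A sketch of that multi-dimension equivalent, assuming two illustrative dimensions `dim_1` and `dim_2` (the names are placeholders) and the union-of-values semantics described above:
```sql
SELECT COUNT(DISTINCT(value)) FROM (
  SELECT dim_1 AS value FROM <datasource>
  UNION
  SELECT dim_2 AS value FROM <datasource>
)
```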
#### `hyperUnique` aggregator
Uses [HyperLogLog](http://algo.inria.fr/flajolet/Publications/FlFuGaMe07.pdf) to compute the estimated cardinality of a dimension that has been aggregated as a "hyperUnique" metric at indexing time.
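Its spec can be sketched following the same single-field pattern as the aggregators above (placeholders as before):
```json
{
"type" : "hyperUnique",
"name" : <output_name>,
"fieldName" : <metric_name>
}
```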
#### `filtered` aggregator
A filtered aggregator wraps any given aggregator, but only aggregates the values for which the given dimension filter matches.
This makes it possible to compute the results of a filtered and an unfiltered aggregation simultaneously, without having to issue multiple queries, and use both results as part of post-aggregations.
*Note:* If only the filtered results are required, consider putting the filter on the query itself, which will be much faster since it does not require scanning all the data.
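A sketch of what a filtered aggregator spec looks like, wrapping an inner aggregation behind a selector filter; the dimension, value, and inner aggregation are placeholders for this illustration:
```json
{
"type" : "filtered",
"filter" : {
  "type" : "selector",
  "dimension" : <dimension>,
  "value" : <dimension value>
},
"aggregator" : <aggregation>
}
```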