Ref Guide: rename Streaming docs to fix broken intra-doc links in the PDF due to mismatching anchor names

Cassandra Targett 2017-09-25 14:35:52 -05:00
parent 0e5c3aa3dc
commit 9beafb612f
5 changed files with 19 additions and 18 deletions


@@ -63,7 +63,7 @@ See the section <<solrcloud-autoscaling.adoc#solrcloud-autoscaling,SolrCloud Aut
 ** <<highlighting.adoc#the-unified-highlighter,Unified Highlighter>>
 ** <<metrics-reporting.adoc#metrics-reporting,Metrics API>>. See also information about related deprecations in the section <<JMX Support and MBeans>> below.
 ** <<other-parsers.adoc#payload-query-parsers,Payload queries>>
-** <<stream-evaluators.adoc#stream-evaluators,Streaming Evaluators>>
+** <<stream-evaluator-reference.adoc#stream-evaluator-reference,Streaming Evaluators>>
 ** <<v2-api.adoc#v2-api,/v2 API>>
 ** <<graph-traversal.adoc#graph-traversal,Graph streaming expressions>>


@@ -1,6 +1,6 @@
 = Stream Decorator Reference
-:page-shortname: stream-decorators
-:page-permalink: stream-decorators.html
+:page-shortname: stream-decorator-reference
+:page-permalink: stream-decorator-reference.html
 :page-tocclass: right
 :page-toclevels: 1
 // Licensed to the Apache Software Foundation (ASF) under one
@@ -386,7 +386,7 @@ As you can see in the examples above, the `cartesianProduct` function does suppo
 == classify
-The `classify` function classifies tuples using a logistic regression text classification model. It was designed specifically to work with models trained using the <<stream-sources.adoc#train,train function>>. The `classify` function uses the <<stream-sources.adoc#model,model function>> to retrieve a stored model and then scores a stream of tuples using the model. The tuples read by the classifier must contain a text field that can be used for classification. The classify function uses a Lucene analyzer to extract the features from the text so the model can be applied. By default the `classify` function looks for the analyzer using the name of text field in the tuple. If the Solr schema on the worker node does not contain this field, the analyzer can be looked up in another field by specifying the `analyzerField` parameter.
+The `classify` function classifies tuples using a logistic regression text classification model. It was designed specifically to work with models trained using the <<stream-source-reference.adoc#train,train function>>. The `classify` function uses the <<stream-source-reference.adoc#model,model function>> to retrieve a stored model and then scores a stream of tuples using the model. The tuples read by the classifier must contain a text field that can be used for classification. The classify function uses a Lucene analyzer to extract the features from the text so the model can be applied. By default the `classify` function looks for the analyzer using the name of text field in the tuple. If the Solr schema on the worker node does not contain this field, the analyzer can be looked up in another field by specifying the `analyzerField` parameter.
 Each tuple that is classified is assigned two scores:
@@ -500,7 +500,7 @@ daemon(id="uniqueId",
 )
 ----
-The sample code above shows a `daemon` function wrapping an `update` function, which is wrapping a `topic` function. When this expression is sent to the `/stream` handler, the `/stream` hander sees the `daemon` function and keeps it in memory where it will run at intervals. In this particular example, the `daemon` function will run the `update` function every second. The `update` function is wrapping a <<stream-sources.adoc#topic,`topic` function>>, which will stream tuples that match the `topic` function query in batches. Each subsequent call to the topic will return the next batch of tuples for the topic. The `update` function will send all the tuples matching the topic to another collection to be indexed. The `terminate` parameter tells the daemon to terminate when the `topic` function stops sending tuples.
+The sample code above shows a `daemon` function wrapping an `update` function, which is wrapping a `topic` function. When this expression is sent to the `/stream` handler, the `/stream` hander sees the `daemon` function and keeps it in memory where it will run at intervals. In this particular example, the `daemon` function will run the `update` function every second. The `update` function is wrapping a <<stream-source-reference.adoc#topic,`topic` function>>, which will stream tuples that match the `topic` function query in batches. Each subsequent call to the topic will return the next batch of tuples for the topic. The `update` function will send all the tuples matching the topic to another collection to be indexed. The `terminate` parameter tells the daemon to terminate when the `topic` function stops sending tuples.
 The effect of this is to push documents that match a specific query into another collection. Custom push functions can be plugged in that push documents out of Solr and into other systems, such as Kafka or an email system.
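For readers following this hunk, the daemon-wrapping-update-wrapping-topic pattern it describes looks roughly like the following sketch; the collection names, query, and fields here are illustrative assumptions, not part of the commit:

```text
daemon(id="uniqueId",
       runInterval="1000",
       terminate="true",
       update(destinationCollection,         // illustrative target collection
              batchSize=100,
              topic(checkpointCollection,    // illustrative checkpoint collection
                    sourceCollection,
                    q="topic query",
                    fl="id,title,abstract",
                    id="myTopic")))
```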
@@ -643,7 +643,7 @@ daemon(id="myDaemon",
 id="myTopic")))
 ----
-In the example above a <<daemon,daemon>> wraps an executor, which wraps a <<stream-sources.adoc#topic,topic>> that is returning tuples with expressions to execute. When sent to the stream handler, the daemon will call the executor at intervals which will cause the executor to read from the topic and execute the expressions found in the `expr_s` field. The daemon will repeatedly call the executor until all the tuples that match the topic have been iterated, then it will terminate. This is the approach for executing batches of streaming expressions from a `topic` queue.
+In the example above a <<daemon,daemon>> wraps an executor, which wraps a <<stream-source-reference.adoc#topic,topic>> that is returning tuples with expressions to execute. When sent to the stream handler, the daemon will call the executor at intervals which will cause the executor to read from the topic and execute the expressions found in the `expr_s` field. The daemon will repeatedly call the executor until all the tuples that match the topic have been iterated, then it will terminate. This is the approach for executing batches of streaming expressions from a `topic` queue.
 == fetch
@@ -1001,7 +1001,7 @@ The expression above shows a `parallel` function wrapping a `reduce` function. T
 The `priority` function is a simple priority scheduler for the <<executor>> function. The `executor` function doesn't directly have a concept of task prioritization; instead it simply executes tasks in the order that they are read from it's underlying stream. The `priority` function provides the ability to schedule a higher priority task ahead of lower priority tasks that were submitted earlier.
-The `priority` function wraps two <<stream-sources.adoc#topic,topics>> that are both emitting tuples that contain streaming expressions to execute. The first topic is considered the higher priority task queue.
+The `priority` function wraps two <<stream-source-reference.adoc#topic,topics>> that are both emitting tuples that contain streaming expressions to execute. The first topic is considered the higher priority task queue.
 Each time the `priority` function is called, it checks the higher priority task queue to see if there are any tasks to execute. If tasks are waiting in the higher priority queue then the priority function will emit the higher priority tasks. If there are no high priority tasks to run, the lower priority queue tasks are emitted.
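As a sketch of the two-queue scheduling pattern this hunk describes (collection names, queries, and topic ids are hypothetical, not from the commit), a `priority` function wrapping two `topic` queues inside an `executor` might look like:

```text
daemon(id="myDaemon",
       executor(threads=10,
                priority(topic(checkpointCollection, storedExpressions,   // high-priority queue, read first
                               q="priority:high", fl="id,expr_s", id="highTasks"),
                         topic(checkpointCollection, storedExpressions,   // low-priority queue, drained only when the first is empty
                               q="priority:low", fl="id,expr_s", id="lowTasks"))))
```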


@@ -1,6 +1,6 @@
 = Stream Evaluator Reference
-:page-shortname: stream-evaluators
-:page-permalink: stream-evaluators.html
+:page-shortname: stream-evaluator-reference
+:page-permalink: stream-evaluator-reference.html
 :page-tocclass: right
 :page-toclevels: 1
 // Licensed to the Apache Software Foundation (ASF) under one


@@ -1,6 +1,6 @@
 = Stream Source Reference
-:page-shortname: stream-sources
-:page-permalink: stream-sources.html
+:page-shortname: stream-source-reference
+:page-permalink: stream-source-reference.html
 :page-tocclass: right
 :page-toclevels: 1
 // Licensed to the Apache Software Foundation (ASF) under one
@@ -20,6 +20,7 @@
 // specific language governing permissions and limitations
 // under the License.
+Put something here to see if it fixes things.
 == search
@@ -36,7 +37,7 @@ This expression allows you to specify a request hander using the `qt` parameter.
 * `zkHost`: Only needs to be defined if the collection being searched is found in a different zkHost than the local stream handler.
 * `qt`: Specifies the query type, or request handler, to use. Set this to `/export` to work with large result sets. The default is `/select`.
 * `rows`: (Mandatory with the `/select` handler) The rows parameter specifies how many rows to return. This parameter is only needed with the `/select` handler (which is the default) since the `/export` handler always returns all rows.
-* `partitionKeys`: Comma delimited list of keys to partition the search results by. To be used with the parallel function for parallelizing operations across worker nodes. See the <<stream-decorators.adoc#parallel,parallel>> function for details.
+* `partitionKeys`: Comma delimited list of keys to partition the search results by. To be used with the parallel function for parallelizing operations across worker nodes. See the <<stream-decorator-reference.adoc#parallel,parallel>> function for details.
 === search Syntax
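The parameters listed in this hunk fit together roughly as follows; the collection and field names are illustrative assumptions, not part of the commit:

```text
search(collection1,
       q="*:*",
       qt="/export",            // export handler: streams the full result set, no rows needed
       fl="id,fieldA_s,fieldB_i",
       sort="fieldA_s asc")
```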
@@ -251,7 +252,7 @@ knn(collection1,
 == model
-The `model` function retrieves and caches logistic regression text classification models that are stored in a SolrCloud collection. The `model` function is designed to work with models that are created by the <<train,train function>>, but can also be used to retrieve text classification models trained outside of Solr, as long as they conform to the specified format. After the model is retrieved it can be used by the <<stream-decorators.adoc#classify,classify function>> to classify documents.
+The `model` function retrieves and caches logistic regression text classification models that are stored in a SolrCloud collection. The `model` function is designed to work with models that are created by the <<train,train function>>, but can also be used to retrieve text classification models trained outside of Solr, as long as they conform to the specified format. After the model is retrieved it can be used by the <<stream-decorator-reference.adoc#classify,classify function>> to classify documents.
 A single model tuple is fetched and returned based on the *id* parameter. The model is retrieved by matching the *id* parameter with a model name in the index. If more then one iteration of the named model is stored in the index, the highest iteration is selected.
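The retrieve-then-classify flow this hunk describes can be sketched as follows; the collection, model id, and field names are hypothetical, not from the commit:

```text
classify(model(modelCollection, id="myModel"),            // fetch the stored model by id
         search(collection1,                              // stream the tuples to score
                q="*:*", fl="id,body_t", sort="id asc"),
         field="body_t")                                  // text field the analyzer features come from
```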
@@ -384,7 +385,7 @@ stream decorator to perform parallel relational algebra. When used in parallel m
 * `fl`: (Mandatory) The list of fields to return.
 * `sort`: (Mandatory) The sort criteria.
 * `zkHost`: Only needs to be defined if the collection being searched is found in a different zkHost than the local stream handler.
-* `partitionKeys`: Comma delimited list of keys to partition the search results by. To be used with the parallel function for parallelizing operations across worker nodes. See the <<stream-decorators.adoc#parallel,parallel>> function for details.
+* `partitionKeys`: Comma delimited list of keys to partition the search results by. To be used with the parallel function for parallelizing operations across worker nodes. See the <<stream-decorator-reference.adoc#parallel,parallel>> function for details.
 === shuffle Syntax


@@ -1,7 +1,7 @@
 = Streaming Expressions
 :page-shortname: streaming-expressions
 :page-permalink: streaming-expressions.html
-:page-children: stream-sources, stream-decorators, stream-evaluators, statistical-programming, graph-traversal
+:page-children: stream-source-reference, stream-decorator-reference, stream-evaluator-reference, statistical-programming, graph-traversal
 // Licensed to the Apache Software Foundation (ASF) under one
 // or more contributor license agreements. See the NOTICE file
 // distributed with this work for additional information
@@ -121,12 +121,12 @@ Because streaming expressions relies on the `/export` handler, many of the field
 Stream sources originate streams. The most commonly used one of these is `search`, which does a query.
-A full reference to all available source expressions is available in <<stream-sources.adoc#stream-sources,Stream Sources>>.
+A full reference to all available source expressions is available in <<stream-source-reference.adoc#stream-source-reference,Stream Source Reference>>.
 === About Stream Decorators
 Stream decorators wrap other stream functions or perform operations on a stream.
-A full reference to all available decorator expressions is available in <<stream-decorators.adoc#stream-decorators,Stream Decorators>>.
+A full reference to all available decorator expressions is available in <<stream-decorator-reference.adoc#stream-decorator-reference,Stream Decorator Reference>>.
 === About Stream Evaluators
@@ -141,4 +141,4 @@ In cases where you want to use raw values as part of an evaluation you will need
 If you wish to use a raw string as part of an evaluation, you will want to consider using the `raw(string)` evaluator. This will always return the raw value, no matter what is entered.
-A full reference to all available evaluator expressions is available in <<stream-evaluators.adoc#stream-evaluators,Stream Evaluators>>.
+A full reference to all available evaluator expressions is available in <<stream-evaluator-reference.adoc#stream-evaluator-reference,Stream Evaluator Reference>>.