Merge branch 'migrate_watcher'

Original commit: elastic/x-pack-elasticsearch@8cec5e872a
This commit is contained in:
uboness 2015-07-13 12:24:03 +02:00
commit 09dd30f582
540 changed files with 61209 additions and 0 deletions

412
watcher/LICENSE.txt Normal file

@ -0,0 +1,412 @@
WATCHER SOFTWARE LICENSE AGREEMENT
READ THIS AGREEMENT CAREFULLY, WHICH CONSTITUTES A LEGALLY BINDING AGREEMENT AND GOVERNS YOUR USE OF ELASTICSEARCH'S
WATCHER SOFTWARE. BY INSTALLING AND/OR USING THE WATCHER SOFTWARE, YOU ARE INDICATING THAT YOU AGREE TO THE TERMS AND
CONDITIONS SET FORTH IN THIS AGREEMENT. IF YOU DO NOT AGREE WITH SUCH TERMS AND CONDITIONS, YOU MAY NOT INSTALL OR USE
THE WATCHER SOFTWARE.
This WATCHER SOFTWARE LICENSE AGREEMENT (this "Agreement") is entered into by and between the applicable Elasticsearch
entity referred to in Attachment 1 below ("Elasticsearch") and the person or entity ("You") that has downloaded
Elasticsearch's Watcher software to which this Agreement is attached ("Watcher Software"). This Agreement is effective as
of the date an applicable ordering document ("Order Form") is entered into by Elasticsearch and You (the "Effective
Date").
1. SOFTWARE LICENSE AND RESTRICTIONS
1.1 License Grants.
(a) 30 Day Free Trial License. Subject to the terms and conditions of this Agreement, Elasticsearch agrees to grant,
and does hereby grant to You for a period of thirty (30) days from the Effective Date (the "Trial Term"), solely for
Your internal business operations, a limited, non-exclusive, non-transferable, fully paid up, right and license
(without the right to grant or authorize sublicenses) to: (i) install and use the object code version of the Watcher
Software; (ii) use, and distribute internally a reasonable number of copies of the documentation, if any, provided with
the Watcher Software ("Documentation"), provided that You must include on such copies all Elasticsearch trademarks, trade
names, logos and notices present on the Documentation as originally provided to You by Elasticsearch; (iii) permit third
party contractors performing services on Your behalf to use the Watcher Software and Documentation as set forth in (i)
and (ii) above, provided that such use must be solely for Your benefit, and You shall be responsible for all acts and
omissions of such contractors in connection with their use of the Watcher Software. For the avoidance of doubt, You
understand and agree that upon the expiration of the Trial Term, Your license to use the Watcher Software will terminate,
unless you purchase a Qualifying Subscription (as defined below) for Elasticsearch support services.
(b) Fee-Bearing Production License. Subject to the terms and conditions of this Agreement and complete payment of any
and all applicable fees for a Gold or Platinum production subscription for support services for Elasticsearch open
source software (in each case, a "Qualifying Subscription"), Elasticsearch agrees to grant, and does hereby grant to You
during the term of the applicable Qualifying Subscription, and for the restricted scope of this Agreement, solely for
Your internal business operations, a limited, non-exclusive, non-transferable right and license (without the right to
grant or authorize sublicenses) to: (i) install and use the object code version of the Watcher Software, subject to any
applicable quantitative limitations set forth in the applicable Order Form; (ii) use, and distribute internally a
reasonable number of copies of the Documentation, if any, provided with the Watcher Software, provided that You must
include on such copies all Elasticsearch trademarks, trade names, logos and notices present on the Documentation as
originally provided to You by Elasticsearch; (iii) permit third party contractors performing services on Your behalf to
use the Watcher Software and Documentation as set forth in (i) and (ii) above, provided that such use must be solely for
Your benefit, and You shall be responsible for all acts and omissions of such contractors in connection with their use
of the Watcher Software.
1.2 Reservation of Rights; Restrictions. As between Elasticsearch and You, Elasticsearch owns all right title and
interest in and to the Watcher Software and any derivative works thereof, and except as expressly set forth in Section
1.1 above, no other license to the Watcher Software is granted to You by implication, estoppel or otherwise. You agree
not to: (i) prepare derivative works from, modify, copy or use the Watcher Software in any manner except as expressly
permitted in this Agreement or applicable law; (ii) transfer, sell, rent, lease, distribute, sublicense, loan or
otherwise transfer the Watcher Software in whole or in part to any third party; (iii) use the Watcher Software for
providing time-sharing services, any software-as-a-service offering ("SaaS"), service bureau services or as part of an
application services provider or other service offering; (iv) alter or remove any proprietary notices in the Watcher
Software; or (v) make available to any third party any analysis of the results of operation of the Watcher Software,
including benchmarking results, without the prior written consent of Elasticsearch. The Watcher Software may contain or
be provided with open source libraries, components, utilities and other open source software (collectively, "Open Source
Software"), which Open Source Software may have applicable license terms as identified on a website designated by
Elasticsearch or otherwise provided with the Watcher Software or Documentation. Notwithstanding anything to the contrary
herein, use of the Open Source Software shall be subject to the license terms and conditions applicable to such Open
Source Software, to the extent required by the applicable licensor (which terms shall not restrict the license rights
granted to You hereunder, but may contain additional rights).
1.3 Open Source. The Watcher Software may contain or be provided with open source libraries, components, utilities and
other open source software (collectively, "Open Source"), which Open Source may have applicable license terms as
identified on a website designated by Elasticsearch or otherwise provided with the applicable Software or Documentation.
Notwithstanding anything to the contrary herein, use of the Open Source shall be subject to the applicable Open Source
license terms and conditions to the extent required by the applicable licensor (which terms shall not restrict the
license rights granted to You hereunder but may contain additional rights).
1.4 Audit Rights. You agree that Elasticsearch shall have the right, upon five (5) business days' notice to You, to
audit Your use of the Watcher Software for compliance with any quantitative limitations on Your use of the Watcher
Software that are set forth in the applicable Order Form. You agree to provide Elasticsearch with the necessary access
to the Watcher Software to conduct such an audit either (i) remotely, or (ii) if remote performance is not possible, at
Your facilities, during normal business hours and no more than one (1) time in any twelve (12) month period. In the
event any such audit reveals that You have used the Watcher Software in excess of the applicable quantitative
limitations, You agree to promptly pay to Elasticsearch an amount equal to the difference between the fees actually paid
and the fees that You should have paid to remain in compliance with such quantitative limitations. This Section 1.3
shall survive for a period of two (2) years from the termination or expiration of this Agreement.
2. TERM AND TERMINATION
2.1 Term. This Agreement shall commence on the Effective Date, and shall continue in force for the license term set
forth in the applicable Order Form, unless earlier terminated under Section 2.2 below, provided, however, that if You do
not purchase a Qualifying Subscription prior to the expiration of the Trial Term, this Agreement will expire at the end
of the Trial Term.
2.2 Termination. Either party may, upon written notice to the other party, terminate this Agreement for material
breach by the other party automatically and without any other formality, if such party has failed to cure such material
breach within thirty (30) days of receiving written notice of such material breach from the non-breaching party.
Notwithstanding the foregoing, this Agreement shall automatically terminate in the event that You intentionally breach
the scope of the license granted in Section 1.1 of this Agreement.
2.3 Post Termination or Expiration. Upon termination or expiration of this Agreement, for any reason, You shall
promptly cease the use of the Watcher Software and Documentation and destroy (and certify to Elasticsearch in writing the
fact of such destruction), or return to Elasticsearch, all copies of the Watcher Software and Documentation then in Your
possession or under Your control.
2.4 Survival. Sections 2.3, 2.4, 3, 4 and 5 shall survive any termination or expiration of this Agreement.
3. DISCLAIMER OF WARRANTIES
TO THE MAXIMUM EXTENT PERMITTED UNDER APPLICABLE LAW, THE WATCHER SOFTWARE IS PROVIDED "AS IS" WITHOUT WARRANTY OF ANY
KIND, AND ELASTICSEARCH AND ITS LICENSORS MAKE NO WARRANTIES WHETHER EXPRESSED, IMPLIED OR STATUTORY REGARDING OR
RELATING TO THE WATCHER SOFTWARE OR DOCUMENTATION. TO THE MAXIMUM EXTENT PERMITTED UNDER APPLICABLE LAW, ELASTICSEARCH
AND ITS LICENSORS SPECIFICALLY DISCLAIM ALL IMPLIED WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
NON-INFRINGEMENT WITH RESPECT TO THE WATCHER SOFTWARE AND DOCUMENTATION, AND WITH RESPECT TO THE USE OF THE FOREGOING.
FURTHER, ELASTICSEARCH DOES NOT WARRANT RESULTS OF USE OR THAT THE WATCHER SOFTWARE WILL BE ERROR FREE OR THAT THE USE OF
THE WATCHER SOFTWARE WILL BE UNINTERRUPTED.
4. LIMITATION OF LIABILITY
4.1 Disclaimer of Certain Damages. IN NO EVENT SHALL YOU OR ELASTICSEARCH OR ITS LICENSORS BE LIABLE FOR ANY LOSS OF
PROFITS, LOSS OF USE, BUSINESS INTERRUPTION, LOSS OF DATA, COST OF SUBSTITUTE GOODS OR SERVICES, OR FOR ANY INDIRECT,
SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES OF ANY KIND IN CONNECTION WITH OR ARISING OUT OF THE USE OR INABILITY TO
USE THE WATCHER SOFTWARE, OR THE PERFORMANCE OF OR FAILURE TO PERFORM THIS AGREEMENT, WHETHER ALLEGED AS A BREACH OF
CONTRACT OR TORTIOUS CONDUCT, INCLUDING NEGLIGENCE, EVEN IF THE RESPONSIBLE PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
SUCH DAMAGES. THE LIMITATIONS OF LIABILITY SET FORTH IN THIS SECTION 4.1 SHALL NOT APPLY TO A BREACH THROUGH GROSS
NEGLIGENCE OR INTENTIONAL MISCONDUCT BY YOU OF THE SCOPE OF THE LICENSE GRANTED IN SECTION 1.1 OR TO ANY OTHER LIABILITY
THAT CANNOT BE EXCLUDED OR LIMITED UNDER APPLICABLE LAW.
4.2 Damages Cap. IN NO EVENT SHALL ELASTICSEARCH'S OR ITS LICENSORS' AGGREGATE, CUMULATIVE LIABILITY UNDER THIS
AGREEMENT EXCEED THE AMOUNT YOU PAID, IN THE TWELVE (12) MONTHS IMMEDIATELY PRIOR TO THE EVENT GIVING RISE TO LIABILITY,
UNDER THE ELASTICSEARCH SUPPORT SERVICES AGREEMENT PURSUANT TO WHICH YOU PURCHASED THE QUALIFYING SUBSCRIPTION, PROVIDED
THAT IF YOU ARE USING THE WATCHER SOFTWARE UNDER A TRIAL LICENSE PURSUANT TO SECTION 1.1(a), IN NO EVENT SHALL
ELASTICSEARCH'S AGGREGATE, CUMULATIVE LIABILITY UNDER THIS AGREEMENT EXCEED ONE THOUSAND DOLLARS ($1,000).
4.3 YOU AGREE THAT THE FOREGOING LIMITATIONS, EXCLUSIONS AND DISCLAIMERS ARE A REASONABLE ALLOCATION OF THE RISK
BETWEEN THE PARTIES AND WILL APPLY TO THE MAXIMUM EXTENT PERMITTED BY APPLICABLE LAW, EVEN IF ANY REMEDY FAILS IN ITS
ESSENTIAL PURPOSE.
5. MISCELLANEOUS
This Agreement, including Attachment 1 hereto, which is hereby incorporated herein by this reference, completely and
exclusively states the entire agreement of the parties regarding the subject matter herein, and it supersedes, and its
terms govern, all prior proposals, agreements, or other communications between the parties, oral or written, regarding
such subject matter. For the avoidance of doubt, the parties hereby expressly acknowledge and agree that if You issue
any purchase order or similar document in connection with its purchase of a license to the Watcher Software, You will do
so only for Your internal, administrative purposes and not with the intent to provide any contractual terms. This
Agreement may not be modified except by a subsequently dated, written amendment that expressly amends this Agreement and
which is signed on behalf of Elasticsearch and You, by duly authorized representatives. If any provision(s) hereof is
held unenforceable, this Agreement will continue without said provision and be interpreted to reflect the original
intent of the parties.
ATTACHMENT 1
ADDITIONAL TERMS AND CONDITIONS
A. The following additional terms and conditions apply to all Customers with principal offices in the United States of
America:
(1) Applicable Elasticsearch Entity. The entity providing the license is Elasticsearch, Inc., a Delaware corporation.
(2) Government Rights. The Watcher Software product is "Commercial Computer Software," as that term is defined in 48
C.F.R. 2.101, and as the term is used in 48 C.F.R. Part 12, and is a Commercial Item comprised of "commercial computer
software" and "commercial computer software documentation". If acquired by or on behalf of a civilian agency, the U.S.
Government acquires this commercial computer software and/or commercial computer software documentation subject to the
terms of this Agreement, as specified in 48 C.F.R. 12.212 (Computer Software) and 12.211 (Technical Data) of the Federal
Acquisition Regulation ("FAR") and its successors. If acquired by or on behalf of any agency within the Department of
Defense ("DOD"), the U.S. Government acquires this commercial computer software and/or commercial computer software
documentation subject to the terms of the Elasticsearch Software License Agreement as specified in 48 C.F.R. 227.7202-3
and 48 C.F.R. 227.7202-4 of the DOD FAR Supplement ("DFARS") and its successors, and consistent with 48 C.F.R. 227.7202.
This U.S. Government Rights clause, consistent with 48 C.F.R. 12.212 and 48 C.F.R. 227.7202 is in lieu of, and
supersedes, any other FAR, DFARS, or other clause or provision that addresses Government rights in computer software,
computer software documentation or technical data related to the Watcher Software under this Agreement and in any
Subcontract under which this commercial computer software and commercial computer software documentation is acquired or
licensed.
(3) Export Control. You acknowledge that the goods, software and technology acquired from Elasticsearch are subject to
U.S. export control laws and regulations, including but not limited to the International Traffic In Arms Regulations
("ITAR") (22 C.F.R. Parts 120-130 (2010)); the Export Administration Regulations ("EAR") (15 C.F.R. Parts 730-774 (2010));
the U.S. antiboycott regulations in the EAR and U.S. Department of the Treasury regulations; the economic sanctions
regulations and guidelines of the U.S. Department of the Treasury, Office of Foreign Assets Control, and the USA
Patriot Act (Title III of Pub. L. 107-56, signed into law October 26, 2001), as amended. You are now and will remain in
the future compliant with all such export control laws and regulations, and will not export, re-export, otherwise
transfer any Elasticsearch goods, software or technology or disclose any Elasticsearch software or technology to any
person contrary to such laws or regulations. You acknowledge that remote access to the Watcher Software may in certain
circumstances be considered a re-export of Watcher Software, and accordingly, may not be granted in contravention of
U.S. export control laws and regulations.
(4) Governing Law. This Agreement will be governed by the laws of the State of California, without regard to its
conflict of laws principles. This Agreement shall not be governed by the 1980 UN Convention on Contracts for the
International Sale of Goods. All suits hereunder will be brought solely in Federal Court for the Northern District of
California, or if that court lacks subject matter jurisdiction, in any California State Court located in Santa Clara
County. The parties hereby irrevocably waive any and all claims and defenses either might otherwise have in any such
action or proceeding in any of such courts based upon any alleged lack of personal jurisdiction, improper venue, forum
non conveniens or any similar claim or defense.
B. The following additional terms and conditions apply to all Customers with principal offices in Canada:
(1) Applicable Elasticsearch Entity. The entity providing the license is Elasticsearch B.C. Ltd., a corporation
incorporated under laws of the Province of British Columbia.
(2) Export Control. You acknowledge that the goods, software and technology acquired from Elasticsearch are subject to
the restrictions and controls set out in Section A(3) above as well as those imposed by the Export and Import Permits
Act (Canada) and the regulations thereunder and that you will comply with all applicable laws and regulations. Without
limitation, You acknowledge that the Marvel Software, or any portion thereof, will not be exported: (a) to any country
on Canada's Area Control List; (b) to any country subject to UN Security Council embargo or action; or (c) contrary to
Canada's Export Control List Item 5505. You are now and will remain in the future compliant with all such export control
laws and regulations, and will not export, re-export, otherwise transfer any Elasticsearch goods, software or technology
or disclose any Elasticsearch software or technology to any person contrary to such laws or regulations. You will not
export or re-export the Marvel Software, or any portion thereof, directly or indirectly, in violation of the Canadian
export administration laws and regulations to any country or end user, or to any end user who you know or have reason to
know will utilize them in the design, development or production of nuclear, chemical or biological weapons. You further
acknowledge that the Marvel Software product may include technical data subject to such Canadian export regulations.
Elasticsearch does not represent that the Marvel Software is appropriate or available for use in all countries.
Elasticsearch prohibits accessing materials from countries or states where contents are illegal. You are using the
Marvel Software on your own initiative and you are responsible for compliance with all applicable laws. You hereby agree
to indemnify Elasticsearch and its affiliates from any claims, actions, liability or expenses (including reasonable
lawyers' fees) resulting from Your failure to act in accordance with the acknowledgements, agreements, and
representations in this Section B(2).
(3) Governing Law and Dispute Resolution. This Agreement shall be governed by the Province of Ontario and the federal
laws of Canada applicable therein without regard to conflict of laws provisions. The parties hereby irrevocably waive
any and all claims and defenses either might otherwise have in any such action or proceeding in any of such courts based
upon any alleged lack of personal jurisdiction, improper venue, forum non conveniens or any similar claim or defense.
Any dispute, claim or controversy arising out of or relating to this Agreement or the existence, breach, termination,
enforcement, interpretation or validity thereof, including the determination of the scope or applicability of this
agreement to arbitrate, (each, a "Dispute"), which the parties are unable to resolve after good faith negotiations,
shall be submitted first to the upper management level of the parties. The parties, through their upper management level
representatives shall meet within thirty (30) days of the Dispute being referred to them and if the parties are unable
to resolve such Dispute within thirty (30) days of meeting, the parties agree to seek to resolve the Dispute through
mediation with ADR Chambers in the City of Toronto, Ontario, Canada before pursuing any other proceedings. The costs of
the mediator shall be shared equally by the parties. If the Dispute has not been resolved within thirty (30) days of the
notice to desire to mediate, any party may terminate the mediation and proceed to arbitration and the matter shall be
referred to and finally resolved by arbitration at ADR Chambers pursuant to the general ADR Chambers Rules for
Arbitration in the City of Toronto, Ontario, Canada. The arbitration shall proceed in accordance with the provisions of
the Arbitration Act (Ontario). The arbitral panel shall consist of three (3) arbitrators, selected as follows: each
party shall appoint one (1) arbitrator; and those two (2) arbitrators shall discuss and select a chairman. If the two
(2) party-appointed arbitrators are unable to agree on the chairman, the chairman shall be selected in accordance with
the applicable rules of the arbitration body. Each arbitrator shall be independent of each of the parties. The
arbitrators shall have the authority to grant specific performance and to allocate between the parties the costs of
arbitration (including service fees, arbitrator fees and all other fees related to the arbitration) in such equitable
manner as the arbitrators may determine. The prevailing party in any arbitration shall be entitled to receive
reimbursement of its reasonable expenses incurred in connection therewith. Judgment upon the award so rendered may be
entered in a court having jurisdiction or application may be made to such court for judicial acceptance of any award and
an order of enforcement, as the case may be. Notwithstanding the foregoing, Elasticsearch shall have the right to
institute an action in a court of proper jurisdiction for preliminary injunctive relief pending a final decision by the
arbitrator, provided that a permanent injunction and damages shall only be awarded by the arbitrator. The language to
be used in the arbitral proceedings shall be English.
(4) Language. Any translation of this Agreement is done for local requirements and in the event of a dispute between
the English and any non-English version, the English version of this Agreement shall govern. At the request of the
parties, the official language of this Agreement and all communications and documents relating hereto is the English
language, and the English-language version shall govern all interpretation of the Agreement. À la demande des parties,
la langue officielle de la présente convention ainsi que toutes communications et tous documents s'y rapportant est la
langue anglaise, et la version anglaise est celle qui régit toute interprétation de la présente convention.
(5) Disclaimer of Warranties. For Customers with principal offices in the Province of Québec, the following new
sentence is to be added to the end of Section 3: "SOME JURISDICTIONS DO NOT ALLOW LIMITATIONS OR EXCLUSIONS OF CERTAIN
TYPES OF DAMAGES AND/OR WARRANTIES AND CONDITIONS. THE LIMITATIONS, EXCLUSIONS AND DISCLAIMERS SET FORTH IN THIS
AGREEMENT SHALL NOT APPLY IF AND ONLY IF AND TO THE EXTENT THAT THE LAWS OF A COMPETENT JURISDICTION REQUIRE
LIABILITIES BEYOND AND DESPITE THESE LIMITATIONS, EXCLUSIONS AND DISCLAIMERS."
(6) Limitation of Liability. For Customers with principal offices in the Province of Québec, the following new
sentence is to be added to the end of Section 4.1: "SOME JURISDICTIONS DO NOT ALLOW LIMITATIONS OR EXCLUSIONS OF
CERTAIN TYPES OF DAMAGES AND/OR WARRANTIES AND CONDITIONS. THE LIMITATIONS, EXCLUSIONS AND DISCLAIMERS SET FORTH IN
THIS AGREEMENT SHALL NOT APPLY IF AND ONLY IF AND TO THE EXTENT THAT THE LAWS OF A COMPETENT JURISDICTION REQUIRE
LIABILITIES BEYOND AND DESPITE THESE LIMITATIONS, EXCLUSIONS AND DISCLAIMERS."
C. The following additional terms and conditions apply to all Customers with principal offices outside of the United
States of America and Canada:
(1) Applicable Elasticsearch Entity. The entity providing the license in Germany is Elasticsearch Gmbh; in France is
Elasticsearch SARL, in the United Kingdom is Elasticsearch Ltd, in Australia is Elasticsearch Pty Ltd., in Japan is
Elasticsearch KK, and in all other countries is Elasticsearch BV.
(2) Choice of Law. This Agreement shall be governed by and construed in accordance with the laws of the State of New
York, without reference to or application of choice of law rules or principles. Notwithstanding any choice of law
provision or otherwise, the Uniform Computer Information Transactions Act (UCITA) and the United Nations Convention on
the International Sale of Goods shall not apply.
(3) Arbitration. Any dispute, claim or controversy arising out of or relating to this Agreement or the existence,
breach, termination, enforcement, interpretation or validity thereof, including the determination of the scope or
applicability of this agreement to arbitrate, (each, a "Dispute") shall be referred to and finally resolved by
arbitration under the rules and at the location identified below. The arbitral panel shall consist of three (3)
arbitrators, selected as follows: each party shall appoint one (1) arbitrator; and those two (2) arbitrators shall
discuss and select a chairman. If the two party-appointed arbitrators are unable to agree on the chairman, the chairman
shall be selected in accordance with the applicable rules of the arbitration body. Each arbitrator shall be independent
of each of the parties. The arbitrators shall have the authority to grant specific performance and to allocate between
the parties the costs of arbitration (including service fees, arbitrator fees and all other fees related to the
arbitration) in such equitable manner as the arbitrators may determine. The prevailing party in any arbitration shall
be entitled to receive reimbursement of its reasonable expenses incurred in connection therewith. Judgment upon the
award so rendered may be entered in a court having jurisdiction or application may be made to such court for judicial
acceptance of any award and an order of enforcement, as the case may be. Notwithstanding the foregoing, Elasticsearch
shall have the right to institute an action in a court of proper jurisdiction for preliminary injunctive relief pending
a final decision by the arbitrator, provided that a permanent injunction and damages shall only be awarded by the
arbitrator. The language to be used in the arbitral proceedings shall be English.
(a) In addition, the following terms only apply to Customers with principal offices within Europe, the Middle East or
Africa (EMEA):
Arbitration Rules and Location. Any Dispute shall be referred to and finally resolved by arbitration under the London
Court of International Arbitration ("LCIA") Rules (which Rules are deemed to be incorporated by reference into this
clause) on the basis that the governing law is the law of the State of New York, USA. The seat, or legal place, of
arbitration shall be London, England.
(b) In addition, the following terms only apply to Customers with principal offices within Asia Pacific, Australia &
New Zealand:
Arbitration Rules and Location. Any Dispute shall be referred to and finally resolved by arbitration under the Rules of
Conciliation and Arbitration of the International Chamber of Commerce ("ICC") in force on the date when the notice of
arbitration is submitted in accordance with such Rules (which Rules are deemed to be incorporated by reference into this
clause) on the basis that the governing law is the law of the State of New York, USA. The seat, or legal place, of
arbitration shall be Singapore.
(c) In addition, the following terms only apply to Customers with principal offices within the Americas (excluding
North America):
Arbitration Rules and Location. Any Dispute shall be referred to and finally resolved by arbitration under
International Dispute Resolution Procedures of the American Arbitration Association ("AAA") in force on the date when
the notice of arbitration is submitted in accordance with such Procedures (which Procedures are deemed to be
incorporated by reference into this clause) on the basis that the governing law is the law of the State of New York,
USA. The seat, or legal place, of arbitration shall be New York, New York, USA.
(4) In addition, for Customers with principal offices within the UK, the following new sentence is added to the end of
Section 4.1:
Nothing in this Agreement shall have effect so as to limit or exclude a party's liability for death or personal injury
caused by negligence or for fraud including fraudulent misrepresentation and this Section 4.1 shall take effect subject
to this provision.
(5) In addition, for Customers with principal offices within France, Sections 1.2, 3 and 4.1 of the Agreement are
deleted and replaced with the following new Sections 1.2, 3 and 4.1:
1.2 Reservation of Rights; Restrictions. Elasticsearch owns all right title and interest in and to the Watcher Software
and any derivative works thereof, and except as expressly set forth in Section 1.1 above, no other license to the Watcher
Software is granted to You by implication, or otherwise. You agree not to prepare derivative works from, modify, copy or
use the Watcher Software in any manner except as expressly permitted in this Agreement; provided that You may copy the
Watcher Software for archival purposes, only where such software is provided on a non-durable medium; and You may
decompile the Watcher Software, where necessary for interoperability purposes and where necessary for the correction of
errors making the software unfit for its intended purpose, if such right is not reserved by Elasticsearch as editor of
the Watcher Software. Pursuant to article L122-6-1 of the French intellectual property code, Elasticsearch reserves the
right to correct any bugs as necessary for the Watcher Software to serve its intended purpose. You agree not to: (i)
transfer, sell, rent, lease, distribute, sublicense, loan or otherwise transfer the Watcher Software in whole or in part
to any third party; (ii) use the Watcher Software for providing time-sharing services, any software-as-a-service
offering ("SaaS"), service bureau services or as part of an application services provider or other service offering;
(iii) alter or remove any proprietary notices in the Watcher Software; or (iv) make available to any third party any
analysis of the results of operation of the Watcher Software, including benchmarking results, without the prior written
consent of Elasticsearch.
3. DISCLAIMER OF WARRANTIES
TO THE MAXIMUM EXTENT PERMITTED UNDER APPLICABLE LAW, THE WATCHER SOFTWARE IS PROVIDED "AS IS" WITHOUT WARRANTY OF ANY
KIND, AND ELASTICSEARCH AND ITS LICENSORS MAKE NO WARRANTIES WHETHER EXPRESSED, IMPLIED OR STATUTORY REGARDING OR
RELATING TO THE WATCHER SOFTWARE OR DOCUMENTATION. TO THE MAXIMUM EXTENT PERMITTED UNDER APPLICABLE LAW, ELASTICSEARCH
AND ITS LICENSORS SPECIFICALLY DISCLAIM ALL IMPLIED WARRANTIES OF FITNESS FOR A PARTICULAR PURPOSE WITH RESPECT TO THE
WATCHER SOFTWARE AND DOCUMENTATION, AND WITH RESPECT TO THE USE OF THE FOREGOING. FURTHER, ELASTICSEARCH DOES NOT
WARRANT RESULTS OF USE OR THAT THE WATCHER SOFTWARE WILL BE ERROR FREE OR THAT THE USE OF THE WATCHER SOFTWARE WILL BE
UNINTERRUPTED.
4.1 Disclaimer of Certain Damages. IN NO EVENT SHALL YOU OR ELASTICSEARCH OR ITS LICENSORS BE LIABLE FOR ANY LOSS OF
PROFITS, LOSS OF USE, BUSINESS INTERRUPTION, LOSS OF DATA, COST OF SUBSTITUTE GOODS OR SERVICES, OR FOR ANY INDIRECT OR
UNFORESEEABLE DAMAGES OF ANY KIND IN CONNECTION WITH OR ARISING OUT OF THE USE OR INABILITY TO USE THE WATCHER SOFTWARE,
OR THE PERFORMANCE OF OR FAILURE TO PERFORM THIS AGREEMENT, WHETHER ALLEGED AS A BREACH OF CONTRACT OR TORTIOUS CONDUCT,
INCLUDING NEGLIGENCE. THE LIMITATIONS OF LIABILITY SET FORTH IN THIS SECTION 4.1 SHALL NOT APPLY TO A BREACH, THROUGH
GROSS NEGLIGENCE OR INTENTIONAL MISCONDUCT BY YOU, OF THE SCOPE OF THE LICENSE GRANTED IN SECTION 1.1, OR IN CASE OF
DEATH OR PERSONAL INJURY.
(6) In addition, for Customers with principal offices within Australia, Sections 4.1, 4.2 and 4.3 of the Agreement are
deleted and replaced with the following new Sections 4.1, 4.2 and 4.3:
4.1 Disclaimer of Certain Damages. Subject to clause 4.3, a party is not liable for Consequential Loss however caused
(including by the negligence of that party) suffered or incurred by the other party in connection with this agreement.
"Consequential Loss" means loss of revenues, loss of reputation, indirect loss, loss of profits, consequential loss,
loss of actual or anticipated savings, indirect loss, lost opportunities, including opportunities to enter into
arrangements with third parties, loss or damage in connection with claims against by third parties, or loss or
corruption or data.
4.2 Damages Cap. SUBJECT TO CLAUSES 4.1 AND 4.3, ANY LIABILITY OF ELASTICSEARCH FOR ANY LOSS OR DAMAGE, HOWEVER CAUSED
(INCLUDING BY THE NEGLIGENCE OF ELASTICSEARCH), SUFFERED BY YOU IN CONNECTION WITH THIS AGREEMENT IS LIMITED TO THE
AMOUNT YOU PAID, IN THE TWELVE (12) MONTHS IMMEDIATELY PRIOR TO THE EVENT GIVING RISE TO LIABILITY, UNDER THE
ELASTICSEARCH SUPPORT SERVICES AGREEMENT IN CONNECTION WITH WHICH YOU OBTAINED THE LICENSE TO USE THE WATCHER SOFTWARE.
THE LIMITATION SET OUT IN THIS SECTION 4.2 IS AN AGGREGATE LIMIT FOR ALL CLAIMS, WHENEVER MADE.
4.3 Limitation and Disclaimer Exceptions. If the Competition and Consumer Act 2010 (Cth) or any other legislation
states that there is a guarantee in relation to any good or service supplied by Elasticsearch in
connection with this agreement, and Elasticsearch's liability for failing to comply with that guarantee cannot be
excluded but may be limited, Sections 4.1 and 4.2 do not apply to that liability and instead Elasticsearch's liability
for such failure is limited (at Elasticsearch's election) to, in the case of a supply of goods, Elasticsearch
replacing the goods or supplying equivalent goods or repairing the goods, or in the case of a supply of services,
Elasticsearch supplying the services again or paying the cost of having the services supplied again.
(7) In addition, for Customers with principal offices within Japan, Sections 1.2, 3 and 4.1 of the Agreement are
deleted and replaced with the following new Sections 1.2, 3 and 4.1:
1.2 Reservation of Rights; Restrictions. As between Elasticsearch and You, Elasticsearch owns all right title and
interest in and to the Watcher Software and any derivative works thereof, and except as expressly set forth in Section
1.1 above, no other license to the Watcher Software is granted to You by implication or otherwise. You agree not to: (i)
prepare derivative works from, modify, copy or use the Watcher Software in any manner except as expressly permitted in
this Agreement or applicable law; (ii) transfer, sell, rent, lease, distribute, sublicense, loan or otherwise transfer
the Watcher Software in whole or in part to any third party; (iii) use the Watcher Software for providing time-sharing
services, any software-as-a-service offering ("SaaS"), service bureau services or as part of an application services
provider or other service offering; (iv) alter or remove any proprietary notices in the Watcher Software; or (v) make
available to any third party any analysis of the results of operation of the Watcher Software, including benchmarking
results, without the prior written consent of Elasticsearch.
3. DISCLAIMER OF WARRANTIES TO THE MAXIMUM EXTENT PERMITTED UNDER APPLICABLE LAW, THE WATCHER SOFTWARE IS PROVIDED "AS
IS" WITHOUT WARRANTY OF ANY KIND, AND ELASTICSEARCH AND ITS LICENSORS MAKE NO WARRANTIES WHETHER EXPRESSED, IMPLIED OR
STATUTORY REGARDING OR RELATING TO THE WATCHER SOFTWARE OR DOCUMENTATION. TO THE MAXIMUM EXTENT PERMITTED UNDER
APPLICABLE LAW, ELASTICSEARCH AND ITS LICENSORS SPECIFICALLY DISCLAIM ALL IMPLIED WARRANTIES OF MERCHANTABILITY, FITNESS
FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT WITH RESPECT TO THE WATCHER SOFTWARE AND DOCUMENTATION, AND WITH RESPECT TO
THE USE OF THE FOREGOING. FURTHER, ELASTICSEARCH DOES NOT WARRANT RESULTS OF USE OR THAT THE WATCHER SOFTWARE WILL BE
ERROR FREE OR THAT THE USE OF THE WATCHER SOFTWARE WILL BE UNINTERRUPTED.
4.1 Disclaimer of Certain Damages. IN NO EVENT SHALL YOU OR ELASTICSEARCH OR ITS LICENSORS BE LIABLE FOR ANY LOSS OF
PROFITS, LOSS OF USE, BUSINESS INTERRUPTION, LOSS OF DATA, COST OF SUBSTITUTE GOODS OR SERVICES, OR FOR ANY
INDIRECT, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES OF ANY KIND IN CONNECTION WITH OR ARISING OUT OF THE USE
OR INABILITY TO USE THE WATCHER SOFTWARE, OR THE PERFORMANCE OF OR FAILURE TO PERFORM THIS AGREEMENT, WHETHER ALLEGED AS
A BREACH OF CONTRACT OR TORTIOUS CONDUCT, INCLUDING NEGLIGENCE, EVEN IF THE RESPONSIBLE PARTY HAS BEEN ADVISED OF THE
POSSIBILITY OF SUCH DAMAGES. THE LIMITATIONS OF LIABILITY SET FORTH IN THIS SECTION 4.1 SHALL NOT APPLY TO A BREACH
THROUGH GROSS NEGLIGENCE OR INTENTIONAL MISCONDUCT BY YOU OF THE SCOPE OF THE LICENSE GRANTED IN SECTION 1.1 OR TO ANY
OTHER LIABILITY THAT CANNOT BE EXCLUDED OR LIMITED UNDER APPLICABLE LAW.

52
watcher/NOTICE.txt Normal file

@ -0,0 +1,52 @@
Elasticsearch Watcher
Copyright 2009-2015 Elastic
---
This product includes software developed by The Apache Software
Foundation (http://www.apache.org/).
---
This product contains software distributed under
Common Development and Distribution License 1.0 (CDDL)
JavaMail API 1.5.3
https://java.net/projects/javamail/pages/Home
https://java.net/projects/javamail/pages/License
JavaBeans Activation Framework 1.1.1
http://www.oracle.com/technetwork/articles/java/index-135046.html
---
This product contains software developed by Mike Samuel. The
following is the copyright and notice text for this software:
Copyright (c) 2011, Mike Samuel
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions
are met:
Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
Neither the name of the OWASP nor the names of its contributors may
be used to endorse or promote products derived from this software
without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
POSSIBILITY OF SUCH DAMAGE.

7
watcher/README.asciidoc Normal file

@ -0,0 +1,7 @@
= Elasticsearch Watcher Plugin
This plugin adds conditional, scheduled task features to Elasticsearch - such a task is called a `Watch`.
You can build the plugin with `mvn package`.
The documentation is located in the `docs/` directory.


@ -0,0 +1,97 @@
@echo off
rem Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
rem or more contributor license agreements. Licensed under the Elastic License;
rem you may not use this file except in compliance with the Elastic License.
REM .in.bat <java main class> [args,..]
SETLOCAL
if NOT DEFINED JAVA_HOME goto err
set JAVA_CMD=%1
if "%JAVA_CMD%" == "" goto err_java_cmd
REM fix args
for /f "usebackq tokens=1*" %%i in (`echo %*`) DO @ set params=%%j
SHIFT
set SCRIPT_DIR=%~dp0
for %%I in ("%SCRIPT_DIR%..\..") do set ES_HOME=%%~dpfI
REM ***** JAVA options *****
if "%ES_MIN_MEM%" == "" (
set ES_MIN_MEM=256m
)
if "%ES_MAX_MEM%" == "" (
set ES_MAX_MEM=1g
)
if NOT "%ES_HEAP_SIZE%" == "" (
set ES_MIN_MEM=%ES_HEAP_SIZE%
set ES_MAX_MEM=%ES_HEAP_SIZE%
)
set JAVA_OPTS=%JAVA_OPTS% -Xms%ES_MIN_MEM% -Xmx%ES_MAX_MEM%
if NOT "%ES_HEAP_NEWSIZE%" == "" (
set JAVA_OPTS=%JAVA_OPTS% -Xmn%ES_HEAP_NEWSIZE%
)
if NOT "%ES_DIRECT_SIZE%" == "" (
set JAVA_OPTS=%JAVA_OPTS% -XX:MaxDirectMemorySize=%ES_DIRECT_SIZE%
)
set JAVA_OPTS=%JAVA_OPTS% -Xss256k
REM Enable aggressive optimizations in the JVM
REM - Disabled by default as it might cause the JVM to crash
REM set JAVA_OPTS=%JAVA_OPTS% -XX:+AggressiveOpts
set JAVA_OPTS=%JAVA_OPTS% -XX:+UseParNewGC
set JAVA_OPTS=%JAVA_OPTS% -XX:+UseConcMarkSweepGC
set JAVA_OPTS=%JAVA_OPTS% -XX:CMSInitiatingOccupancyFraction=75
set JAVA_OPTS=%JAVA_OPTS% -XX:+UseCMSInitiatingOccupancyOnly
REM When running under Java 7
REM JAVA_OPTS=%JAVA_OPTS% -XX:+UseCondCardMark
REM GC logging options -- uncomment to enable
REM JAVA_OPTS=%JAVA_OPTS% -XX:+PrintGCDetails
REM JAVA_OPTS=%JAVA_OPTS% -XX:+PrintGCTimeStamps
REM JAVA_OPTS=%JAVA_OPTS% -XX:+PrintClassHistogram
REM JAVA_OPTS=%JAVA_OPTS% -XX:+PrintTenuringDistribution
REM JAVA_OPTS=%JAVA_OPTS% -XX:+PrintGCApplicationStoppedTime
REM JAVA_OPTS=%JAVA_OPTS% -Xloggc:/var/log/elasticsearch/gc.log
REM Causes the JVM to dump its heap on OutOfMemory.
set JAVA_OPTS=%JAVA_OPTS% -XX:+HeapDumpOnOutOfMemoryError
REM The path to the heap dump location; note the directory must exist and have enough
REM space for a full heap dump.
REM JAVA_OPTS=%JAVA_OPTS% -XX:HeapDumpPath=$ES_HOME/logs/heapdump.hprof
REM Disables explicit GC
set JAVA_OPTS=%JAVA_OPTS% -XX:+DisableExplicitGC
set ES_CLASSPATH=%ES_CLASSPATH%;%ES_HOME%/lib/elasticsearch-1.4.0-SNAPSHOT.jar;%ES_HOME%/lib/*;%ES_HOME%/lib/sigar/*;%ES_HOME%/plugins/watcher/*
set ES_PARAMS=-Des.path.home="%ES_HOME%"
SET HOSTNAME=%COMPUTERNAME%
"%JAVA_HOME%\bin\java" %JAVA_OPTS% %ES_JAVA_OPTS% %ES_PARAMS% -cp "%ES_CLASSPATH%" %JAVA_CMD% %PARAMS%
goto finally
:err
echo JAVA_HOME environment variable must be set!
ENDLOCAL
EXIT /B 1
:err_java_cmd
echo Can not call .in.bat without specifying a main java class
ENDLOCAL
EXIT /B 1
:finally
ENDLOCAL

123
watcher/bin/watcher/croneval Executable file

@ -0,0 +1,123 @@
#!/bin/sh
# Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
# or more contributor license agreements. Licensed under the Elastic License;
# you may not use this file except in compliance with the Elastic License.
SCRIPT="$0"
# SCRIPT may be an arbitrarily deep series of symlinks. Loop until we have the concrete path.
while [ -h "$SCRIPT" ] ; do
ls=`ls -ld "$SCRIPT"`
# Drop everything prior to ->
link=`expr "$ls" : '.*-> \(.*\)$'`
if expr "$link" : '/.*' > /dev/null; then
SCRIPT="$link"
else
SCRIPT=`dirname "$SCRIPT"`/"$link"
fi
done
# determine elasticsearch home
ES_HOME=`dirname "$SCRIPT"`/../..
# make ES_HOME absolute
ES_HOME=`cd "$ES_HOME"; pwd`
# If an include wasn't specified in the environment, then search for one...
if [ "x$ES_INCLUDE" = "x" ]; then
# Locations (in order) to use when searching for an include file.
for include in /usr/share/elasticsearch/elasticsearch.in.sh \
/usr/local/share/elasticsearch/elasticsearch.in.sh \
/opt/elasticsearch/elasticsearch.in.sh \
~/.elasticsearch.in.sh \
"`dirname "$0"`"/../elasticsearch.in.sh \
$ES_HOME/bin/elasticsearch.in.sh; do
if [ -r "$include" ]; then
. "$include"
break
fi
done
# ...otherwise, source the specified include.
elif [ -r "$ES_INCLUDE" ]; then
. "$ES_INCLUDE"
fi
if [ -x "$JAVA_HOME/bin/java" ]; then
JAVA="$JAVA_HOME/bin/java"
else
JAVA=`which java`
fi
if [ ! -x "$JAVA" ]; then
echo "Could not find any executable java binary. Please install java in your PATH or set JAVA_HOME"
exit 1
fi
if [ -z "$ES_CLASSPATH" ]; then
echo "You must set the ES_CLASSPATH var" >&2
exit 1
fi
# Special-case path variables.
case `uname` in
CYGWIN*)
ES_CLASSPATH=`cygpath -p -w "$ES_CLASSPATH"`
ES_HOME=`cygpath -p -w "$ES_HOME"`
;;
esac
# Try to read package config files
if [ -f "/etc/sysconfig/elasticsearch" ]; then
CONF_DIR=/etc/elasticsearch
CONF_FILE=$CONF_DIR/elasticsearch.yml
. "/etc/sysconfig/elasticsearch"
elif [ -f "/etc/default/elasticsearch" ]; then
CONF_DIR=/etc/elasticsearch
CONF_FILE=$CONF_DIR/elasticsearch.yml
. "/etc/default/elasticsearch"
fi
# Parse any long getopt options and put them into properties
ARGCOUNT=$#
COUNT=0
while [ $COUNT -lt $ARGCOUNT ]
do
case $1 in
--*) properties="$properties $1 $2"
shift ; shift; COUNT=$(($COUNT+2))
;;
*) set -- "$@" "$1"; shift; COUNT=$(($COUNT+1))
esac
done
# check if properties already has a config file or config dir
if [ -e "$CONF_DIR" ]; then
case "$properties" in
*-Des.default.path.conf=*) ;;
*)
properties="$properties -Des.default.path.conf=$CONF_DIR"
;;
esac
fi
if [ -e "$CONF_FILE" ]; then
case "$properties" in
*-Des.default.config=*) ;;
*)
properties="$properties -Des.default.config=$CONF_FILE"
;;
esac
fi
export HOSTNAME=`hostname -s`
# include watcher jars in classpath
ES_CLASSPATH="$ES_CLASSPATH:$ES_HOME/plugins/watcher/*"
cd $ES_HOME > /dev/null
$JAVA $ES_JAVA_OPTS -cp "$ES_CLASSPATH" org.elasticsearch.watcher.trigger.schedule.tool.CronEvalTool "$@" $properties
status=$?
cd - > /dev/null
exit $status


@ -0,0 +1,9 @@
@echo off
rem Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
rem or more contributor license agreements. Licensed under the Elastic License;
rem you may not use this file except in compliance with the Elastic License.
PUSHD %~dp0
CALL %~dp0.in.bat org.elasticsearch.watcher.trigger.schedule.tool.CronEvalTool %*
POPD


@ -0,0 +1,14 @@
ELASTICSEARCH CONFIDENTIAL
__________________
[2014] Elasticsearch Incorporated. All Rights Reserved.
NOTICE: All information contained herein is, and remains
the property of Elasticsearch Incorporated and its suppliers,
if any. The intellectual and technical concepts contained
herein are proprietary to Elasticsearch Incorporated
and its suppliers and may be covered by U.S. and Foreign Patents,
patents in process, and are protected by trade secret or copyright law.
Dissemination of this information or reproduction of this material
is strictly forbidden unless prior written permission is obtained
from Elasticsearch Incorporated.


@ -0,0 +1,46 @@
<?xml version="1.0"?>
<project name="commercial-integration-tests">
<import file="${elasticsearch.integ.antfile.default}"/>
<!-- unzip core release artifact, install license plugin, install plugin, then start ES -->
<target name="start-external-cluster-with-plugin" depends="stop-external-cluster" unless="${shouldskip}">
<local name="integ.home"/>
<local name="integ.repo.home"/>
<local name="integ.plugin.url"/>
<local name="integ.pid"/>
<delete dir="${integ.scratch}"/>
<unzip src="${org.elasticsearch:elasticsearch:zip}"
dest="${integ.scratch}"/>
<property name="integ.home" location="${integ.scratch}/elasticsearch-${elasticsearch.version}"/>
<property name="integ.repo.home" location="${integ.home}/repo"/>
<!-- begin commercial plugin mods -->
<local name="integ.license.plugin.url"/>
<makeurl property="integ.license.plugin.url" file="${org.elasticsearch:elasticsearch-license-plugin:zip}"/>
<echo>Installing license plugin...</echo>
<run-script dir="${integ.home}" script="bin/plugin"
args="-u ${integ.license.plugin.url} -i elasticsearch-license-plugin"/>
<!-- end commercial plugin mods -->
<makeurl property="integ.plugin.url" file="${project.build.directory}/releases/${project.artifactId}-${project.version}.zip"/>
<echo>Installing plugin ${project.artifactId}...</echo>
<run-script dir="${integ.home}" script="bin/plugin"
args="-u ${integ.plugin.url} -i ${project.artifactId}"/>
<!-- execute -->
<echo>Starting up external cluster...</echo>
<run-script dir="${integ.home}" script="bin/elasticsearch" spawn="true"
args="${integ.args} -Des.path.repo=${integ.repo.home}"/>
<waitfor maxwait="3" maxwaitunit="minute" checkevery="500">
<http url="http://127.0.0.1:9200"/>
</waitfor>
<extract-pid property="integ.pid"/>
<echo>External cluster started PID ${integ.pid}</echo>
</target>
</project>


@ -0,0 +1,13 @@
<?xml version="1.0" encoding="UTF-8"?>
<additionalHeaders>
<javadoc_style>
<firstLine>/*</firstLine>
<beforeEachLine> * </beforeEachLine>
<endLine> */EOL</endLine>
<!--skipLine></skipLine-->
<firstLineDetectionPattern>(\s|\t)*/\*.*$</firstLineDetectionPattern>
<lastLineDetectionPattern>.*\*/(\s|\t)*$</lastLineDetectionPattern>
<allowBlankLines>false</allowBlankLines>
<isMultiline>true</isMultiline>
</javadoc_style>
</additionalHeaders>


@ -0,0 +1,20 @@
<?xml version="1.0"?>
<ruleset name="Custom ruleset"
xmlns="http://pmd.sourceforge.net/ruleset/2.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://pmd.sourceforge.net/ruleset/2.0.0 http://pmd.sourceforge.net/ruleset_2_0_0.xsd">
<description>
Default ruleset for elasticsearch server project
</description>
<rule ref="rulesets/java/basic.xml"/>
<rule ref="rulesets/java/braces.xml"/>
<rule ref="rulesets/java/clone.xml"/>
<rule ref="rulesets/java/codesize.xml"/>
<rule ref="rulesets/java/coupling.xml">
<exclude name="LawOfDemeter" />
</rule>
<rule ref="rulesets/java/design.xml"/>
<rule ref="rulesets/java/unnecessary.xml">
<exclude name="UselessParentheses" />
</rule>
</ruleset>


@ -0,0 +1,22 @@
randomization:
  elasticsearch:
    es150:
      version: 1.5.0
      branch: tags/v1.5.0
      lucene.version: 4.10.4
    es151:
      version: 1.5.1
      branch: tags/v1.5.1
      lucene.version: 4.10.4
    es152:
      version: 1.5.2
      branch: tags/v1.5.2
      lucene.version: 4.10.4
    es153:
      version: 1.5.3-SNAPSHOT
      branch: origin/1.5
      lucene.version: 4.10.4
    es160:
      version: 1.6.0-SNAPSHOT
      branch: origin/1.x
      lucene.version: 4.10.4


@ -0,0 +1,21 @@
[[administering-watcher]]
== Administering Watcher
This section describes how to configure options for Watcher, use Shield to secure access to the
Watcher APIs, get information about Watcher, and monitor watch execution.
include::administering-watcher/configuring-email.asciidoc[]
include::administering-watcher/configuring-default-throttle-period.asciidoc[]
include::administering-watcher/configuring-default-http-timeouts.asciidoc[]
include::administering-watcher/configuring-default-internal-ops-timeouts.asciidoc[]
include::administering-watcher/integrating-with-shield.asciidoc[]
include::administering-watcher/integrating-with-logstash.asciidoc[]
include::administering-watcher/getting-watcher-statistics.asciidoc[]
include::administering-watcher/monitoring-watch-execution.asciidoc[]


@ -0,0 +1,21 @@
[[configuring-default-http-timeouts]]
=== Configuring the Default HTTP Timeouts
All HTTP requests in Watcher (e.g. those made by the <<input-http, HTTP Input>> and <<actions-webhook, Webhook Action>>)
are associated with two timeouts:
Connection Timeout :: Determines how long the request should wait for the HTTP
connection to be established before failing the request.
Read Timeout :: Assuming the connection was established, this timeout
determines how long the request should wait for a
response before failing the request.
By default, both timeouts are set to 10 seconds. It is possible to change this
default using the following settings in `elasticsearch.yml`:
[source,yaml]
--------------------------------------------------
watcher.http.default_connection_timeout: 5s
watcher.http.default_read_timeout: 20s
--------------------------------------------------


@ -0,0 +1,22 @@
[[configuring-default-internal-ops-timeouts]]
=== Configuring the Default Internal Operations Timeouts
While Watcher is active, it often accesses different indices in Elasticsearch.
These can be internal indices used for its ongoing operation (such as the `.watches`
index where all the watches are stored) or indices accessed as part of a watch execution via the
<<input-search, `search` input>>, <<transform-search, `search` transform>> or the
<<actions-index, `index` action>>.
To ensure that Watcher's workflow doesn't hang on long-running search or
indexing operations, these operations time out after a set period of time. You can
change the default timeouts in `elasticsearch.yml`. The timeouts you can configure
are shown in the following table.
[[default-internal-ops-timeouts]]
[options="header"]
|======
| Name | Default | Description
| `watcher.internal.ops.search.default_timeout` | 30s | The default timeout for all internal search operations.
| `watcher.internal.ops.index.default_timeout` | 60s | The default timeout for all internal index operations.
| `watcher.internal.ops.bulk.default_timeout` | 120s | The default timeout for all internal bulk operations.
|======
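For example, to override all three defaults, you could add the following entries to
`elasticsearch.yml` (the values shown here are only illustrative, not recommended settings):
[source,yaml]
--------------------------------------------------
watcher.internal.ops.search.default_timeout: 45s
watcher.internal.ops.index.default_timeout: 90s
watcher.internal.ops.bulk.default_timeout: 180s
--------------------------------------------------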


@ -0,0 +1,19 @@
[[configuring-default-throttle-period]]
=== Configuring the Default Throttle Period
Watcher uses a default throttle period of 5 seconds. You can override this
for particular actions by setting the throttle period in the action. You can also
define a throttle period at the watch level that serves as the default for
all actions that don't specify a throttle period themselves.
To change the default throttle period for all actions that have no throttle period
configured at either the action level or the watch level, set the
`watcher.execution.default_throttle_period` setting in `elasticsearch.yml`.
For example, to set the default throttle period to 15 minutes, add the following entry
to your `elasticsearch.yml` file and restart Elasticsearch:
[source,yaml]
--------------------------------------------------
watcher.execution.default_throttle_period: 15m
--------------------------------------------------


@ -0,0 +1,270 @@
[[email-services]]
=== Configuring Watcher to Send Email
You can configure Watcher to send email from any SMTP email service. Email messages can contain
basic HTML tags. You can control which tags are allowed by
<<email-html-sanitization, Configuring HTML Sanitization Options>>.
[[email-account]]
==== Configuring Email Accounts
You configure the accounts Watcher can use to send email in your `elasticsearch.yml` configuration file.
Each account configuration has a unique name and specifies all of the SMTP information needed
to send email from that account. You can also specify defaults for all emails that are sent through
the account. For example, you can set defaults for the `from` and `bcc` fields to ensure that all
emails are sent from the same address and always blind copied to the same address.
IMPORTANT: If your email account is configured to require two-step verification,
you need to generate and use a unique App Password to send email from
Watcher. Authentication will fail if you use your primary password.
If you configure multiple email accounts, you specify which account to send the email
with in the <<actions-email, email>> action. If there is only one account configured, you
do not have to specify the `account` attribute in the action definition. However, if you configure
multiple accounts and omit the `account` attribute, there is no guarantee which account will be
used to send the email.
To add an email account, set the `watcher.actions.email.service.account` property in
`elasticsearch.yml`. See <<email-account-attributes, Email Account Attributes>> for the
supported attributes.
For example, the following snippet configures a single Gmail account named `work`.
[source,yaml]
--------------------------------------------------
watcher.actions.email.service.account:
    work:
        profile: gmail
        email_defaults:
            from: 'John Doe <john.doe@host.domain>'
            bcc: archive@host.domain
        smtp:
            auth: true
            starttls.enable: true
            host: smtp.gmail.com
            port: 587
            user: <username>
            password: <password>
--------------------------------------------------
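If you configure more than one account, each account is defined as its own named entry under
the same `watcher.actions.email.service.account` property. The following sketch shows what a
two-account configuration might look like; the `ops_alerts` account name and `smtp.example.org`
host are placeholders, not defaults:
[source,yaml]
--------------------------------------------------
watcher.actions.email.service.account:
    work:
        profile: gmail
        smtp:
            auth: true
            starttls.enable: true
            host: smtp.gmail.com
            port: 587
            user: <username>
            password: <password>
    ops_alerts: # hypothetical second account
        profile: standard
        smtp:
            auth: true
            host: smtp.example.org # placeholder SMTP host
            port: 587
            user: <username>
            password: <password>
--------------------------------------------------
An <<actions-email, email>> action can then select either account by name through its `account`
attribute.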
[[email-profile]]
The _email profile_ defines a strategy for building a MIME message. As with almost every standard
out there, different email systems interpret the MIME standard differently and have slightly
different ways of structuring MIME messages. Watcher provides three email profiles: `standard`
(default), `gmail`, and `outlook`.
If you are using Gmail or Outlook, we recommend using the corresponding profile. Use the `standard`
profile if you are using some other email system. For more information about configuring Watcher
to work with different email systems, see:
* <<gmail, Sending Email from Gmail>>
* <<outlook, Sending Email from Outlook>>
* <<exchange, Sending Email from Exchange>>
* <<amazon-ses, Sending Email from Amazon SES>>
[[email-account-attributes]]
.Email Account Attributes
[options="header"]
|======
| Name | Required | Default | Description
| `profile` | no | standard | The <<email-profile, profile>> to use to
build the MIME messages that are sent from
the account. Valid values: `standard`
(default), `gmail` and `outlook`.
| `email_defaults.*` | no | - | An optional set of email attributes to use
as defaults for the emails sent from the
account. See <<email-action-attributes,
Email Action Attributes>> for the supported
attributes.
| `smtp.auth` | no | false | When `true`, attempt to authenticate the
user using the AUTH command.
| `smtp.host` | yes | - | The SMTP server to connect to.
| `smtp.port` | no | 25 | The SMTP server port to connect to.
| `smtp.user` | yes | - | The user name for SMTP.
| `smtp.password` | no | - | The password for the specified SMTP user.
| `smtp.starttls.enable` | no | false | When `true`, enables the use of the
`STARTTLS` command (if supported by
the server) to switch the connection to a
TLS-protected connection before issuing any
login commands. Note that an appropriate
trust store must be configured so that the
client will trust the server's certificate.
Defaults to `false`.
| `smtp.*` | no | - | SMTP attributes that enable fine control
over the SMTP protocol when sending messages.
See https://javamail.java.net/nonav/docs/api/com/sun/mail/smtp/package-summary.html[com.sun.mail.smtp]
for the full list of SMTP properties you can
set.
|======
[[gmail]]
===== Sending Email From Gmail
Use the following email account settings to send email from the https://mail.google.com[Gmail]
SMTP service:
[source,yaml]
--------------------------------------------------
watcher.actions.email.service.account:
gmail_account:
profile: gmail
smtp:
auth: true
starttls.enable: true
host: smtp.gmail.com
port: 587
user: <username>
password: <password>
--------------------------------------------------
If you get an authentication error that indicates that you need to continue the
sign-in process from a web browser when Watcher attempts to send email, you need
to configure Gmail to https://support.google.com/accounts/answer/6010255?hl=en[Allow Less
Secure Apps to access your account].
If two-step verification is enabled for your account, you must generate and use
a unique App Password to send email from Watcher. See
https://support.google.com/accounts/answer/185833?hl=en[Sign in using App Passwords]
for more information.
[[outlook]]
===== Sending Email from Outlook.com
Use the following email account settings to send email from the
https://www.outlook.com/[Outlook.com] SMTP service:
[source,yaml]
--------------------------------------------------
watcher.actions.email.service.account:
outlook_account:
profile: outlook
smtp:
auth: true
starttls.enable: true
host: smtp-mail.outlook.com
port: 587
user: <username>
password: <password>
--------------------------------------------------
NOTE: You need to use a unique App Password if two-step verification is enabled.
See http://windows.microsoft.com/en-us/windows/app-passwords-two-step-verification[App
passwords and two-step verification] for more information.
[[amazon-ses]]
===== Sending Email from Amazon SES (Simple Email Service)
Use the following email account settings to send email from the
http://aws.amazon.com/ses[Amazon Simple Email Service] (SES) SMTP service:
[source,yaml]
--------------------------------------------------
watcher.actions.email.service.account:
ses_account:
smtp:
auth: true
starttls.enable: true
starttls.required: true
host: email-smtp.us-east-1.amazonaws.com <1>
port: 587
user: <username>
password: <password>
--------------------------------------------------
<1> `smtp.host` varies depending on the AWS region.
NOTE: You need to use your Amazon SES SMTP credentials to send email through
Amazon SES. For more information, see http://docs.aws.amazon.com/ses/latest/DeveloperGuide/smtp-credentials.html[Obtaining Your Amazon SES SMTP Credentials].
[[exchange]]
===== Sending Email from Microsoft Exchange
Use the following email account settings to send email from Microsoft Exchange:
[source,yaml]
--------------------------------------------------
watcher.actions.email.service.account:
exchange_account:
profile: outlook
email_defaults:
from: <email address of service account> <1>
smtp:
auth: true
starttls.enable: true
host: <your exchange server>
port: 587
user: <email address of service account> <2>
password: <password>
--------------------------------------------------
<1> Some organizations configure Exchange to validate that the `from` field is a
valid local email account.
<2> Many organizations support use of your email address as your username, though
it is a good idea to check with your system administrator if you receive
authentication-related failures.
// [[postfix]]
// ===== Sending Email from Postfix
// Use the following email account settings to send email from the http://www.postfix.org[Postfix] SMTP service:
// [source,yaml]
// --------------------------------------------------
// TODO
// --------------------------------------------------
[[email-html-sanitization]]
==== Configuring HTML Sanitization Options
The `email` action supports sending messages with an HTML body. However, for security reasons,
Watcher https://en.wikipedia.org/wiki/HTML_sanitization[sanitizes] the HTML.
You can control which HTML features are allowed or disallowed by configuring the
`watcher.actions.email.html.sanitization.allow` and
`watcher.actions.email.html.sanitization.disallow` settings in `elasticsearch.yml`. You can specify
individual HTML elements and the feature groups described in the following table. By default,
Watcher allows the following features: `body`, `head`, `_tables`, `_links`, `_blocks`, `_formatting`
and `img:embedded`.
[options="header"]
|======
| Name | Description
| `_tables` | All table related elements: `<table>`, `<th>`, `<tr>` and `<td>`.
| `_blocks` | The following block elements: `<p>`, `<div>`, `<h1>`, `<h2>`, `<h3>`,
`<h4>`, `<h5>`, `<h6>`, `<ul>`, `<ol>`, `<li>` and `<blockquote>`.
| `_formatting` | The following inline formatting elements: `<b>`, `<i>`, `<s>`, `<u>`,
`<o>`, `<sup>`, `<sub>`, `<ins>`, `<del>`, `<strong>`, `<strike>`,
`<tt>`, `<code>`, `<big>`, `<small>`, `<br>`, `<span>` and `<em>`.
| `_links` | The `<a>` element with an `href` attribute that points to a URL using
the following protocols: `http`, `https` and `mailto`.
| `_styles` | The `style` attribute on all elements. Note that CSS attributes
are also sanitized to prevent XSS attacks.
| `img` or `img:all` | All images (external and embedded).
| `img:embedded` | Only embedded images. Embedded images can only use the `cid:` URL
protocol in their `src` attribute.
|======
For example, the following settings allow the HTML to contain tables and block elements, but
disallow `<h4>`, `<h5>` and `<h6>` tags.
[source,yaml]
--------------------------------------------------
watcher.actions.email.html.sanitization:
allow: _tables, _blocks
disallow: h4, h5, h6
--------------------------------------------------
To disable sanitization entirely, add the following setting to `elasticsearch.yml`:
[source,yaml]
--------------------------------------------------
watcher.actions.email.html.sanitization.enabled: false
--------------------------------------------------

[[getting-watcher-statistics]]
=== Getting Watcher Statistics
You use the Watcher <<api-rest-stats, `stats`>> API to get information about Watcher, such
as the current state, number of watches, size of the execution queue, and the watches that
are currently queued or executing.
For example:
[source,js]
--------------------------------------------------
GET _watcher/stats/_all
--------------------------------------------------
// AUTOSENSE
The response looks like this:
[source,js]
--------------------------------------------------
{
"watcher_state": "started",
"watch_count": 2,
"execution_thread_pool": {
"queue_size": 1,
"max_size": 40
},
"current_watches": [
{
"watch_id": "my_watch",
"watch_record_id": "my_watch4_223-2015-05-21T11:59:59.811Z",
"triggered_time": "2015-05-21T11:59:59.811Z",
"execution_time": "2015-05-21T11:59:59.811Z"
}
],
"queued_watches": [
{
"watch_id": "my_other_watch",
"watch_record_id": "my_other_watch4_223-2015-05-21T11:59:59.812Z",
"triggered_time": "2015-05-21T11:59:59.812Z",
"execution_time": "2015-05-21T11:59:59.812Z"
}
]
}
--------------------------------------------------
NOTE: To get the version of the Watcher plugin you have installed, call `GET _watcher`.

[[logstash-integration]]
=== Integrating Watcher with Logstash
By default, Logstash uses the `node` protocol setting to ship data to Elasticsearch. When you use
the node protocol, the Logstash instance joins the Elasticsearch cluster and shares the cluster
state.
Watcher requires the License plugin to be installed on all instances in the cluster, including
the Logstash instance. To use Watcher in combination with the Logstash node protocol, you
must install the License plugin on top of Logstash. To do this, we've created a special
Logstash plugin called `logstash-output-elasticsearch-license`. This plugin simply pulls in the
License jar file (elasticsearch-license-1.0.0.jar) and adds it to the classpath.
NOTE: If you're using the Logstash `transport` or `http` protocol, you do not need to install the
License plugin. The License plugin is only required if you're using the `node` protocol.
To install the Logstash License plugin:
. Shut down the Logstash instance(s) that are shipping data to Elasticsearch.
. Run `bin/plugin install` to install the Logstash license plugin:
+
[source,js]
--------------------------------------------------
bin/plugin install logstash-output-elasticsearch-license
--------------------------------------------------
+
. Restart the Logstash instance(s).
==== Using Logstash for Watch Actions
Integrating Watcher with Logstash provides a powerful pipeline for further transforming and enriching watch payloads. Integrating with Logstash also enables you to send watch payloads to the rich collection of outputs supported by Logstash.
For Logstash to receive data from Watcher, you need to enable the `http` input. The `http` input
launches a webserver and listens for incoming requests. The
Logstash `http` input supports basic auth and HTTPS.
Once the Logstash `http` input is enabled, you post data to Logstash with the
<<actions-webhook, `webhook`>> action.
NOTE: The `http` input is built in to Logstash 1.5.2 and above. To use the `http` input with
earlier versions of Logstash, install the `logstash-input-http` plugin by
running `bin/plugin install logstash-input-http`.
To configure Logstash to listen for incoming HTTP requests, add an `http` input definition to
your Logstash configuration file:
[source,yml]
--------------------------------------------------
input {
http {
host => "mylogstashhost" <1>
port => "8080" <2>
}
}
--------------------------------------------------
<1> The name of your Logstash HTTP host.
<2> The port the HTTP host listens on.
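A watch can then post data to this endpoint with a `webhook` action along the following lines (a
sketch; the host, port, and message body are illustrative and assume the input configuration
shown above):
[source,js]
--------------------------------------------------
"actions" : {
  "notify_logstash" : {
    "webhook" : {
      "method" : "POST",
      "host" : "mylogstashhost",
      "port" : 8080,
      "path" : "/",
      "body" : "{{ctx.payload.hits.total}} error events found"
    }
  }
}
--------------------------------------------------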
For more information about using a `webhook` action to send data to Logstash, see
<<configuring-webook-actions, Configuring Webhook Actions>>.

[[shield-integration]]
=== Integrating Watcher with Shield
Watcher can work alongside https://www.elastic.co/products/shield[Shield] and integrates with it.
This integration is expected to be extended in future releases.
IMPORTANT: Watcher 1.0.x requires Shield 1.2.2 or above.
When the Watcher plugin is installed alongside Shield, it automatically registers an internal
user - `__watcher_user`. All actions taken as part of a watch execution will be executed on behalf
of this user.
NOTE: The `__watcher_user` is internal to Watcher. Executing APIs on behalf of this user
      outside of Watcher will fail (unless you specifically add such a user to one of the
      supported realms).
In addition, Watcher registers two additional cluster-level privileges with Shield:
* `monitor_watcher` - grants access to the Watcher <<api-rest-stats, stats>> and
  <<api-rest-get-watch, get>> APIs
* `manage_watcher` - grants access to all Watcher APIs
You can use the privileges above in Shield's {shield-ref}/authorization.html#roles-file[`roles.yml`]
file to grant roles access to the Watcher APIs. The following snippet shows an example of such a
role definition:
[source,yaml]
--------------------------------------------------
watcher_admin:
cluster: manage_watcher
--------------------------------------------------
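Similarly, a role that only needs to monitor Watcher could be granted the `monitor_watcher`
privilege (the role name here is illustrative):
[source,yaml]
--------------------------------------------------
watcher_monitor:
  cluster: monitor_watcher
--------------------------------------------------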
Once the relevant role is defined, adding the Watcher administrator user follows the exact same
process as adding any other user to Shield. For example, if you are using the
{shield-ref}/esusers.html[`esusers`] realm, use the `esusers`
{shield-ref}/esusers.html#_the_literal_esusers_literal_command_line_tool[command-line tool] to add
the user:
[source,js]
--------------------------------------------------
bin/shield/esusers useradd john -r watcher_admin
--------------------------------------------------
Once added, this user can call all the Watcher APIs and thereby manage all watches.
[float]
=== Privileges On Watcher Internal Indices
Watcher stores its data (watches and watch history records) in its own internal indices:
* `.watches` - an index that stores all the added watches
* `.watch_history-<timestamp>` - time based indices that store all the watch records
All write operations on these indices are performed internally by Watcher itself; external users
should not write to them directly. As a best practice, do not grant any write privileges on these
indices to any of the Shield users.
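For example, if a user needs to search the watch history, a read-only role along these lines
should suffice (a sketch; the role name is illustrative and assumes Shield's standard `read`
index privilege):
[source,yaml]
--------------------------------------------------
history_reader:
  indices:
    '.watch_history-*': read
--------------------------------------------------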
[float]
=== Handling Sensitive Information
Sometimes a watch may hold sensitive information. For example, the user password that is configured
as part of the basic authentication in the <<input-http-auth-basic-example, `http` input>>. In
addition, some of the Watcher configuration may also hold sensitive data. When Shield is installed,
Watcher can use the security services Shield provides to better protect this type of
sensitive information.
[float]
[[shield-watch-data-encryption]]
==== Watch Data Encryption
By default, Watcher simply stores this sensitive data as part of the watch document in the
`.watches` index. This means that the password can be retrieved in plain text by executing a
document GET or any of the available search operations in Elasticsearch against that index.
NOTE: The <<api-rest-get-watch, Get Watch API>> will automatically filter out this sensitive data
from its response.
When Shield is installed, you can configure Watcher to use Shield to encrypt this sensitive
data prior to indexing the watch. To do this:
* Ensure Shield's {shield-ref}/getting-started.html#message-authentication[System Key] is set up
  and used.
* Add the following setting to the `elasticsearch.yml` file:
+
[source,yaml]
--------------------------------------------------
watcher.shield.encrypt_sensitive_data: true
--------------------------------------------------
+
By default (when this setting is not set), the sensitive data is indexed in plain text (the same
behavior as when Shield is not installed).
[float]
[[shield-sensitive-data-in-conf]]
==== Sensitive Data in Configuration Files
The `elasticsearch.yml` file may also hold sensitive data, such as the SMTP credentials that
are configured as part of the <<email-account, email accounts>>.
Currently, neither Watcher nor Shield provides a mechanism to encrypt settings in this file. As a
best practice, ensure that access to this file is limited to the user under which the
Elasticsearch instance is running.
In addition, when Shield is installed, these settings are filtered out of the REST
{ref}/cluster-nodes-info.html[Nodes Info API] responses.

[[monitoring-watch-execution]]
[[watch-history]]
=== Monitoring Watch Execution
Whenever a watch is triggered, a `watch_record` document is created and added to the watch history
index. A new history index is created daily with a name of the form `.watch_history-YYYY.MM.dd`.
You can search the watch history like any other Elasticsearch index or use Kibana to monitor and
visualize watch execution.
A watch record's `_source` field contains all of the information about the watch execution:
`watch_id` :: The name of the watch that was triggered.
`trigger_event` :: How the watch was triggered (`manual` or `schedule`) and the watch's scheduled
time and actual trigger time.
`input` :: The input type (`http`, `search`, or `simple`) and definition.
`condition` :: The `condition` type (`always`, `never`, or `script`) and definition.
`state` :: The state of the watch execution (`execution_not_needed`, `executed`,
`throttled`).
`result` :: The results of each phase of the watch execution. Shows the input payload,
condition status, transform status (if defined), and actions status.
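For example, an abbreviated watch record might look something like this (the exact structure
varies by watch; this sketch only illustrates the top-level fields listed above):
[source,js]
--------------------------------------------------
{
  "watch_id": "my_watch",
  "state": "executed",
  "trigger_event": {
    "type": "schedule",
    "triggered_time": "2015-05-21T11:59:59.811Z"
  },
  "input": { "search": { "request": { "indices": [ "logs" ] } } },
  "condition": { "compare": { "ctx.payload.hits.total": { "gt": 0 } } }
}
--------------------------------------------------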
NOTE: While you can perform read operations on the watch history and manage the daily indices as
needed, you should never perform write operations on a watch history index. If you have
Shield installed, we recommend only allowing users read access to the watch history index.
[float]
[[monitoring-watches]]
==== Monitoring Watches with Kibana
You can use Kibana to monitor the watch history and create visualizations of the watches that have
executed over time.
To monitor watches with Kibana:
. Go to the Kibana **Settings > Indices** tab. For example,
`http://localhost:5601/#/settings/indices`.
. Enter `.watch_history*` in the **Index name or pattern** field.
. Click in the **Time field name** field and select `trigger_event.triggered_time`.
. Go to the **Discover** tab to see the most recently executed watches.
You can create visualizations and add them to a Kibana dashboard to track what
watches are being triggered and identify trends.
For example you could create a dashboard to:
* Track triggered watches over time, broken down by top watch.
* Identify top senders, priorities, and keywords for email actions.
* Identify top webhook targets and status codes.
image:images/watcher-kibana-dashboard.png[]
[float]
[[searching-watch-history]]
==== Searching the Watch History
To get the watch history for a particular day, search that day's watch history index:
[source,js]
--------------------------------------------------
GET .watch_history-2015.05.11/_search
{
"query" : { "match_all" : {}}
}
--------------------------------------------------
// AUTOSENSE
To get all of the watch records that reference a particular watch, search the
`watch_id` field:
[source,js]
--------------------------------------------------
GET .watch_history*/_search
{
"query" : { "match" : { "watch_id": "rss_watch" }}
}
--------------------------------------------------
// AUTOSENSE
To get all of the watch records for watches that were throttled, search the
`state` field:
[source,js]
--------------------------------------------------
GET .watch_history*/_search
{
"query" : { "match" : { "state": "throttled" }}
}
--------------------------------------------------
// AUTOSENSE
To get a date histogram of all the watches triggered within a particular time range:
[source,js]
--------------------------------------------------
GET .watch_history*/_search?search_type=count
{
"query": {
"filtered": {
"query": {
"match_all": {}
},
"filter": {
"range": {
"trigger_event.triggered_time": {
"gte": 1430438400000,
"lte": 1431820800000
}
}
}
}
},
"aggs": {
"records_per_minute": {
"date_histogram": {
"field": "trigger_event.triggered_time",
"interval": "1m",
"min_doc_count": 0,
"extended_bounds": {
"min": 1430438400000,
"max": 1431820800000
}
}
}
}
}
--------------------------------------------------
// AUTOSENSE
[float]
[[managing-watch-history]]
==== Managing Watch History Indexes
You should establish a policy for how long you need to keep your watch history indexes. For
example, you might simply delete the daily history indexes after 30 days. If you need to preserve
the history but don't need to maintain immediate access to it, you can close the index or take a
snapshot and then delete it.
http://www.elastic.co/guide/en/elasticsearch/client/curator/current/index.html[Elasticsearch Curator]
provides a convenient CLI for managing time-series indices.
You can also set up a watch to manage your watch history indexes. For example, the following watch
runs daily and uses a webhook action to delete history indexes older than seven days:
[source,js]
--------------------------------------------------
PUT _watcher/watch/manage_history
{
"metadata": {
"keep_history_days": 7
},
"trigger": {
"schedule": { "daily": { "at" : "00:01" }}
},
"input": {
"simple": {}
},
"condition": {
"always": {}
},
"transform": {
"script" : "return [ indexToDelete : '/.watch_history-' + ctx.execution_time.minusDays(ctx.metadata.keep_history_days + 1).toString('yyyy.MM.dd') ]"
},
"actions": {
"delete_old_index": {
"webhook": {
"method": "DELETE",
"host": "localhost",
"port": 9200,
"path": "{{ctx.payload.indexToDelete}}"
}
}
}
}
--------------------------------------------------
// AUTOSENSE

[[customizing-watches]]
== Customizing Watches
Now that you've seen how to set up simple watches to <<watch-log-data, watch your log data>>
and <<watch-cluster-status, monitor your cluster health>>, let's take a closer look at how
you can customize a watch by modifying its <<changing-inputs, inputs>>,
<<changing-conditions, conditions>>, <<using-transforms, transforms>>, and
<<customizing-actions, actions>>.
[[changing-inputs]]
=== Changing Inputs
Watcher supports three types of inputs: <<loading-static-data, simple>>,
<<loading-search-results, search>>, and <<loading-http-data, http>>.
[[loading-static-data]]
==== Loading Static Data with the Simple Input
To load static data into the watch payload for testing purposes, you can use the
<<input-simple, simple>> input. For example, the following input stores three fields in the
payload:
[source,js]
--------------------------------------------------
"input" : {
"simple" : {
"color" : "red",
"status" : "error",
"count" : 3
}
}
--------------------------------------------------
[[loading-search-results]]
==== Loading Search Results with the Search Input
To load search results into the watch payload, you use the `search` input. In addition to simple
match queries like the one shown in the <<watch-log-data, Getting Started>> guide, you can use the
full Elasticsearch query language.
A <<input-search, search>> input contains a `request` object that specifies the indices you want to
search, the {ref}/search-request-search-type.html[search type], and the search request body. The
`body` field of a search input is the same as the body of an Elasticsearch `_search` request.
NOTE: The default search type is {ref}/search-request-search-type.html#count[`count`], which
differs from the Elasticsearch default of `query_then_fetch`.
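For example, a `search` input along the following lines loads all log events that mention an
error into the payload (the index name and query are illustrative):
[source,js]
--------------------------------------------------
"input" : {
  "search" : {
    "request" : {
      "indices" : [ "logs" ],
      "body" : {
        "query" : {
          "match" : { "message" : "error" }
        }
      }
    }
  }
}
--------------------------------------------------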
////////////
For example, the following search input searches the watch history indices for watch records whose execution_duration exceeded 2.5 seconds.
[source,js]
--------------------------------------------------
"input" : {
"search": {
"request": {
"indices": [
".watch_history*"
],
"search_type": "count",
"body": {
"query" : {
"filtered": {
"query" : {
"match_all" : { }
},
"filter": {
"range": {
"result.execution_duration": {
"gt": 2500
}
}
}
}
}
}
}
}
},
--------------------------------------------------
////////////
[[loading-http-data]]
==== Loading a Webserver Response with the HTTP Input
To query a webserver and load the response into the watch payload, you use the `http` input. In
addition to calling Elasticsearch APIs as shown in the <<watch-cluster-status, Getting Started>>
guide, you can submit requests to any webserver that returns a response in JSON.
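For example, an `http` input along these lines loads the cluster health status into the payload
(the host and port are assumptions about a local cluster):
[source,js]
--------------------------------------------------
"input" : {
  "http" : {
    "request" : {
      "host" : "localhost",
      "port" : 9200,
      "path" : "/_cluster/health"
    }
  }
}
--------------------------------------------------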
////////////
For example, the following input gets excerpts for all of the questions posted to Stack Overflow
during the month of May, 2015 that were tagged with `elasticsearch`.
[source,js]
--------------------------------------------------
"input" : {
"http" : {
"request" : {
"host" : "api.stackexchange.com",
"port" : 80,
"path" : "https://api.stackexchange.com/2.2/search/excerpts",
"params" : { <1>
"fromdate" : 1430438400,
"todate" : 1433030400,
"order" : "desc",
"sort" : "activity",
"tagged" : "elasticsearch",
"site" : "stackoverflow"
}
}
}
}
--------------------------------------------------
<1> The query string parameters are passed to the server using a `params` field, they are not
included as part of the path.
////////////
[[changing-conditions]]
=== Changing Conditions
Watcher supports four types of conditions: <<condition-always, always>>, <<condition-never, never>>,
<<condition-compare, compare>>, and <<condition-script, script>>.
The first two are pretty self-explanatory--they are shortcuts for setting a watch's condition to
`true` or `false`.
The `compare` condition enables you to perform simple comparisons against values in the Watch
payload. While you can also do this with a `script` condition, with `compare` you can define
inline comparisons without having to enable dynamic scripting. You can use the `script` condition
to perform more complex evaluations of the data in the watch payload.
For example, the following compare condition checks to see if the 'search' input returned any
hits:
[source,js]
--------------------------------------------------
"condition" : {
"compare" : { "ctx.payload.hits.total" : { "gt" : 0 }}
},
--------------------------------------------------
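The equivalent check expressed as a `script` condition might look like the following (a minimal
sketch; inline scripts require dynamic scripting to be enabled, or you can use a file script
instead):
[source,js]
--------------------------------------------------
"condition" : {
  "script" : "return ctx.payload.hits.total > 0"
}
--------------------------------------------------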
////////////
The following script condition checks Stack Overflow excerpts loaded by an 'http' input to see if
there are unanswered questions that have a question_score of 3 or higher.
[source,js]
--------------------------------------------------
"condition" : {
"script" : "def items = ctx.payload.items; def createResult = {if (it.question_score.value >= 3 && it.has_accepted_answer.value == false) {return true}}; items.each(createResult)"
}
--------------------------------------------------
////////////
[[using-transforms]]
=== Using Transforms
Watcher supports three types of transforms: <<transform-search, search>>,
<<transform-script, script>> and <<transform-chain, chain>>. A `search` transform replaces the
existing payload with the results of a new search request. You can use `script` transforms to
modify the existing payload. A `chain` transform enables you to perform a series of `search` and
`script` transforms.
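For example, a simple `script` transform could replace the payload with just the total hit count
(a minimal sketch; the inline script requires dynamic scripting to be enabled):
[source,js]
--------------------------------------------------
"transform" : {
  "script" : "return [ total_hits : ctx.payload.hits.total ]"
}
--------------------------------------------------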
////////////
For example, the following chain transform performs a 'query_then_fetch' search to load the source
of the watch records that have an 'execution_duration' of more than 2.5 seconds. A script transform
then extracts selected information from the search results and updates the watch payload.
[source,js]
--------------------------------------------------
"transform" : {
"chain" : [
{
"search" : {
"search_type" : "query_then_fetch",
"indices" : [ ".watch_history*" ],
"body" : {
"query" : {
"filtered": {
"query" : {
"match_all" : { }
},
"filter": {
"range": {
"result.execution_duration": {
"gt": 2500
}
}
}
}
}
}
}
},
{
"script" : "def records = ctx.payload.hits.hits; def result = [ ]; def createResult = {if (!it) { result = '0'} else {result << it._source.result.execution_duration.value}}; records.each(createResult); return result"
}
]
},
--------------------------------------------------
////////////
[[customizing-actions]]
=== Customizing Actions
Watcher supports four types of actions: <<actions-email, email>>,
<<actions-index, index>>, <<actions-logging, logging>>, and <<actions-webhook, webhook>>.
To use the `email` action, you need to <<email-services, configure an email account>> in
`elasticsearch.yml` that Watcher can use to send email. Your custom email messages can be
plain text or styled using HTML. You can include information from the watch payload using
<<templates, templates>>, as well as attach the entire watch payload to the message. For example,
the following email action uses a template in the subject line and attaches the payload data to the
message.
[source,js]
--------------------------------------------------
"actions" : {
"send_email" : {
"email" : {
"to" : "<username>@<domainname>",
"subject" : "Watcher Notification",
"body" : "{{ctx.payload.hits.total}} watches took more than 2.5 seconds to execute.",
"attach_data" : true
}
}
}
--------------------------------------------------
The `index` action enables you to load data from the watch payload into an Elasticsearch index. The
entire payload can be indexed as a single document, or you can use a transform to populate a
`_doc` field with an array of objects that are indexed as separate documents.
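For example, an `index` action along these lines indexes the entire watch payload as a single
document (the index and type names are illustrative):
[source,js]
--------------------------------------------------
"actions" : {
  "index_payload" : {
    "index" : {
      "index" : "watch-results",
      "doc_type" : "watch-result"
    }
  }
}
--------------------------------------------------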
////////////
For example,
the following index action indexes each of the excerpts extracted from Stack Overflow as a separate
document.
[source,js]
--------------------------------------------------
"actions" : {
"index_payload" : {
"transform": {
...
},
"index" : {
"index" : "questions",
"doc_type" : "stackoverflow-excerpt"
}
}
}
--------------------------------------------------
////////////
The `logging` action enables you to add entries to the Elasticsearch logs, which is useful
during development and testing. For example, the following logging action logs the number
of watches that took longer than 2.5 seconds to run.
[source,js]
--------------------------------------------------
"actions" : {
"log" : {
"logging" : {
"text" : "{{ctx.payload.hits.total}} watches took more than 2.5 seconds to execute"
}
}
}
--------------------------------------------------
The `webhook` action enables you to submit a request to any external webservice. For example,
the following webhook action creates a Pagerduty trigger event.
[source,js]
--------------------------------------------------
"actions" : {
"send_trigger" : {
"throttle_period" : "5m",
"webhook" : {
"method" : "POST",
"host" : "https://events.pagerduty.com",
"port" : 443,
"path": ":/generic/2010-04-15/create_event.json}",
"body" : "{
\"service_key\": \"e93facc04764012d7bfb002500d5d1a6\",
\"incident_key\": \"long_watches\",
\"event_type\": \"trigger\",
\"description\": \"{{ctx.payload.hits.total}} watches took more than 2.5 seconds to execute\",
\"client\": \"Watcher\"
}"
"headers": {"Content-type": "application/json"}
}
}
}
--------------------------------------------------

[[example-watches]]
== Example Watches
The example watches in this section demonstrate two key Watcher use cases: watching Marvel data
and watching time series data.
include::example-watches/watching-marvel-data.asciidoc[]
include::example-watches/watching-time-series-data.asciidoc[]

[[watching-marvel-data]]
=== Watching Marvel Data
If you use Marvel to monitor your Elasticsearch deployment, you can set up
watches to take action when something out of the ordinary occurs. For example,
you could set up watches to alert on:
- <<watching-cluster-health, Cluster health changes>>
- <<watching-memory-usage, High memory usage>>
- <<watching-cpu-usage, High CPU usage>>
- <<watching-open-file-descriptors, High file descriptor usage>>
- <<watching-fielddata, High fielddata cache usage>>
- <<watching-nodes, Nodes joining or leaving the cluster>>
NOTE: These watches query the index where your cluster's Marvel data is stored.
If you don't have Marvel installed, the queries won't return any results, the conditions
evaluate to false, and no actions are performed.
[float]
[[watching-cluster-health]]
==== Watching Cluster Health
This watch checks the cluster health once a minute and takes action if the cluster state has
been red for the last 60 seconds:
- The watch schedule is set to execute the watch every minute.
- The watch input gets the most recent cluster status from the `.marvel-*` indices.
- The watch condition checks the cluster status to see if it's been red for the last 60 seconds.
- The watch action is to send an email. (You could also call a `webhook` or store the event.)
[source,json]
--------------------------------------------------
PUT _watcher/watch/cluster_red_alert
{
"trigger": {
"schedule": {
"interval": "1m"
}
},
"input": {
"search": {
"request": {
"indices": ".marvel-*",
"types": "cluster_stats",
"body": {
"query": {
"filtered": {
"filter": {
"bool": {
"must": [
{
"range": {
"@timestamp": {
"gte": "now-2m",
"lte": "now"
}
}
}
],
"should": [
{
"term": {
"status.raw": "red"
}
},
{
"term": {
"status.raw": "green"
}
},
{
"term": {
"status.raw": "yellow"
}
}
]
}
}
}
},
"fields": ["@timestamp","status"],
"sort": [
{
"@timestamp": {
"order": "desc"
}
}
],
"size": 1,
"aggs": {
"minutes": {
"date_histogram": {
"field": "@timestamp",
"interval": "5s"
},
"aggs": {
"status": {
"terms": {
"field": "status.raw",
"size": 3
}
}
}
}
}
}
}
}
},
"throttle_period": "30m", <1>
"condition": {
"script": {
"inline": "if (ctx.payload.hits.total < 1) return false; def rows = ctx.payload.hits.hits; if (rows[0].fields.status[0] != 'red') return false; if (ctx.payload.aggregations.minutes.buckets.size() < 12) return false; def last60Seconds = ctx.payload.aggregations.minutes.buckets[-12..-1]; return last60Seconds.every { it.status.buckets.every { s -> s.key == 'red' } }"
}
},
"actions": {
"send_email": { <2>
"email": {
"to": "<username>@<domainname>", <3>
"subject": "Watcher Notification - Cluster has been RED for the last 60 seconds",
"body": "Your cluster has been red for the last 60 seconds."
}
}
}
}
--------------------------------------------------
// AUTOSENSE
<1> The throttle period prevents notifications from being sent more than once every 30 minutes.
You can change the throttle period to receive notifications more or less frequently.
<2> To send email notifications, you must configure at least one email account in `elasticsearch.yml`.
See <<email-services, Configuring Email Services>> for more information.
<3> Specify the email address you want to notify.
NOTE: This example uses an inline script, which requires you to enable dynamic scripting in
Elasticsearch. While this is convenient when you're experimenting with Watcher, in a
production environment we recommend disabling dynamic scripting and using file scripts.
[float]
[[watching-memory-usage]]
==== Watching Memory Usage
This watch runs every minute and takes action if a node in the cluster has averaged 75% or greater
heap usage for the past 60 seconds.
- The watch schedule is set to execute the watch every minute.
- The watch input gets the average `jvm.mem.heap_used_percent` for each node from the `.marvel-*` indices.
- The watch condition checks to see if any node's average heap usage is 75% or greater.
- The watch action is to send an email. (You could also call a `webhook` or store the event.)
[source,json]
--------------------------------------------------
PUT _watcher/watch/mem_watch
{
"trigger": {
"schedule": {
"interval": "1m"
}
},
"input": {
"search": {
"request": {
"indices": [
".marvel-*"
],
"search_type": "count",
"body": {
"query": {
"filtered": {
"filter": {
"range": {
"@timestamp": {
"gte": "now-2m",
"lte": "now"
}
}
}
}
},
"aggs": {
"minutes": {
"date_histogram": {
"field": "@timestamp",
"interval": "minute"
},
"aggs": {
"nodes": {
"terms": {
"field": "node.name.raw",
"size": 10,
"order": {
"memory": "desc"
}
},
"aggs": {
"memory": {
"avg": {
"field": "jvm.mem.heap_used_percent"
}
}
}
}
}
}
}
}
}
}
},
"throttle_period": "30m", <1>
"condition": {
"script": "if (ctx.payload.aggregations.minutes.buckets.size() == 0) return false; def latest = ctx.payload.aggregations.minutes.buckets[-1]; def node = latest.nodes.buckets[0]; return node && node.memory && node.memory.value >= 75;"
},
"actions": {
"send_email": {
"transform": {
"script": "def latest = ctx.payload.aggregations.minutes.buckets[-1]; return latest.nodes.buckets.findAll { return it.memory && it.memory.value >= 75 };"
},
"email": { <2>
"to": "<username>@<domainname>", <3>
"subject": "Watcher Notification - HIGH MEMORY USAGE",
"body": "Nodes with HIGH MEMORY Usage (above 75%):\n\n{{#ctx.payload._value}}\"{{key}}\" - Memory Usage is at {{memory.value}}%\n{{/ctx.payload._value}}"
}
}
}
}
--------------------------------------------------
// AUTOSENSE
<1> The throttle period prevents notifications from being sent more than once every 30 minutes.
You can change the throttle period to receive notifications more or less frequently.
<2> To send email notifications, you must configure at least one email account in `elasticsearch.yml`.
See <<email-services, Configuring Email Services>> for more information.
<3> Specify the email address you want to notify.
NOTE: This example uses an inline script, which requires you to enable dynamic scripting in Elasticsearch.
While this is convenient when you're experimenting with Watcher, in a production
environment we recommend disabling dynamic scripting and using file scripts.
[float]
[[watching-cpu-usage]]
==== Watching CPU Usage
This watch runs every minute and takes action if a node in the cluster has averaged 75% or greater CPU
usage for the past 60 seconds.
- The watch schedule is set to execute the watch every minute.
- The watch input gets the average CPU usage for each node from the `.marvel-*` indices.
- The watch condition checks to see if any node's average CPU usage is 75% or greater.
- The watch action is to send an email. (You could also call a `webhook` or store the event.)
[source,json]
--------------------------------------------------
PUT _watcher/watch/cpu_usage
{
"trigger": {
"schedule": {
"interval": "1m"
}
},
"input": {
"search": {
"request": {
"indices": [
".marvel-*"
],
"search_type": "count",
"body": {
"query": {
"filtered": {
"filter": {
"range": {
"@timestamp": {
"gte": "now-2m",
"lte": "now"
}
}
}
}
},
"aggs": {
"minutes": {
"date_histogram": {
"field": "@timestamp",
"interval": "minute"
},
"aggs": {
"nodes": {
"terms": {
"field": "node.name.raw",
"size": 10,
"order": {
"cpu": "desc"
}
},
"aggs": {
"cpu": {
"avg": {
"field": "os.cpu.user"
}
}
}
}
}
}
}
}
}
}
},
"throttle_period": "30m", <1>
"condition": {
"script": "if (ctx.payload.aggregations.minutes.buckets.size() == 0) return false; def latest = ctx.payload.aggregations.minutes.buckets[-1]; def node = latest.nodes.buckets[0]; return node && node.cpu && node.cpu.value >= 75;"
},
"actions": {
"send_email": { <2>
"transform": {
"script": "def latest = ctx.payload.aggregations.minutes.buckets[-1]; return latest.nodes.buckets.findAll { return it.cpu && it.cpu.value >= 75 };"
},
"email": {
"to": "user@example.com", <3>
"subject": "Watcher Notification - HIGH CPU USAGE",
"body": "Nodes with HIGH CPU Usage (above 75%):\n\n{{#ctx.payload._value}}\"{{key}}\" - CPU Usage is at {{cpu.value}}%\n{{/ctx.payload._value}}"
}
}
}
}
--------------------------------------------------
// AUTOSENSE
<1> The throttle period prevents notifications from being sent more than once every 30 minutes.
You can change the throttle period to receive notifications more or less frequently.
<2> To send email notifications, you must configure at least one email account in `elasticsearch.yml`.
See <<email-services, Configuring Email Services>> for more information.
<3> Specify the email address you want to notify.
NOTE: This example uses an inline script, which requires you to enable dynamic scripting in Elasticsearch.
While this is convenient when you're experimenting with Watcher, in a production
environment we recommend disabling dynamic scripting and using file scripts.
[float]
[[watching-open-file-descriptors]]
==== Watching Open File Descriptors
This watch runs once a minute and takes action if there are nodes that
are using 80% or more of the available file descriptors.
- The watch schedule is set to execute the watch every minute.
- The watch input gets the average number of open file descriptors on each node from the `.marvel-*`
indices. The input search returns the top ten nodes with the highest average number of open file
descriptors.
- The watch condition checks the cluster status to see if any node's average number of open file
descriptors is 80% or greater.
- The watch action is to send an email. (You could also call a `webhook` or store the event.)
[source,json]
--------------------------------------------------
PUT _watcher/watch/open_file_descriptors
{
"metadata": {
"system_fd": 65535,
"threshold": 0.8
},
"trigger": {
"schedule": {
"interval": "1m"
}
},
"input": {
"search": {
"request": {
"indices": [
".marvel-*"
],
"types": "node_stats",
"search_type": "count",
"body": {
"query": {
"filtered": {
"filter": {
"range": {
"@timestamp": {
"gte": "now-1m",
"lte": "now"
}
}
}
}
},
"aggs": {
"minutes": {
"date_histogram": {
"field": "@timestamp",
"interval": "5s"
},
"aggs": {
"nodes": {
"terms": {
"field": "node.name.raw",
"size": 10,
"order": {
"fd": "desc"
}
},
"aggs": {
"fd": {
"avg": {
"field": "process.open_file_descriptors"
}
}
}
}
}
}
}
}
}
}
},
"throttle_period": "30m", <1>
"condition": {
"script": "if (ctx.payload.aggregations.minutes.buckets.size() == 0) return false; def latest = ctx.payload.aggregations.minutes.buckets[-1]; def node = latest.nodes.buckets[0]; return node && node.fd && node.fd.value >= (ctx.metadata.system_fd * ctx.metadata.threshold);"
},
"actions": {
"send_email": { <2>
"transform": {
"script": "def latest = ctx.payload.aggregations.minutes.buckets[-1]; return latest.nodes.buckets.findAll({ return it.fd && it.fd.value >= (ctx.metadata.system_fd * ctx.metadata.threshold) }).collect({ it.fd.percent = Math.round((it.fd.value/ctx.metadata.system_fd)*100); it });"
},
"email": {
"to": "<username>@<domainname>", <3>
"subject": "Watcher Notification - NODES WITH 80% FILE DESCRIPTORS USED",
"body": "Nodes with 80% FILE DESCRIPTORS USED (above 80%):\n\n{{#ctx.payload._value}}\"{{key}}\" - File Descriptors is at {{fd.value}} ({{fd.percent}}%)\n{{/ctx.payload._value}}"
}
}
}
}
--------------------------------------------------
// AUTOSENSE
<1> The throttle period prevents notifications from being sent more than once a minute.
You can change the throttle period to receive notifications more or less frequently.
<2> To send email notifications, you must configure at least one email account in
`elasticsearch.yml`. See <<email-services, Configuring Email Services>> for more
information.
<3> Specify the email address you want to notify.
NOTE: This example uses an inline script, which requires you to enable dynamic scripting in
Elasticsearch. While this is convenient when you're experimenting with Watcher, in a
production environment we recommend disabling dynamic scripting and using file scripts.
[float]
[[watching-fielddata]]
==== Watching Field Data Utilization
This watch runs once a minute and takes action if there are nodes that
are using 80% or more of their field data cache.
- The watch schedule is set to execute the watch every minute.
- The watch input gets the average field data memory usage on each node from the `.marvel-*` indices.
The input search returns the top ten nodes with the highest average field data usage.
- The watch condition checks the cluster status to see if any node's average field data usage is 80%
or more of the field data cache size.
- The watch action is to send an email. (You could also call a `webhook` or store the event.)
[source,json]
--------------------------------------------------
PUT _watcher/watch/fielddata_utilization
{
"metadata": {
"fielddata_cache_size": 100000, <1>
"threshold": 0.8
},
"trigger": {
"schedule": {
"interval": "1m"
}
},
"input": {
"search": {
"request": {
"indices": [
".marvel-*"
],
"types": "node_stats",
"search_type": "count",
"body": {
"query": {
"filtered": {
"filter": {
"range": {
"@timestamp": {
"gte": "now-1m",
"lte": "now"
}
}
}
}
},
"aggs": {
"minutes": {
"date_histogram": {
"field": "@timestamp",
"interval": "5s"
},
"aggs": {
"nodes": {
"terms": {
"field": "node.name.raw",
"size": 10,
"order": {
"fielddata": "desc"
}
},
"aggs": {
"fielddata": {
"avg": {
"field": "indices.fielddata.memory_size_in_bytes"
}
}
}
}
}
}
}
}
}
}
},
"throttle_period": "30m", <2>
"condition": {
"script": "if (ctx.payload.aggregations.minutes.buckets.size() == 0) return false; def latest = ctx.payload.aggregations.minutes.buckets[-1]; def node = latest.nodes.buckets[0]; return node && node.fielddata && node.fielddata.value >= (ctx.metadata.fielddata_cache_size * ctx.metadata.threshold);"
},
"actions": {
"send_email": { <3>
"transform": {
"script": "def latest = ctx.payload.aggregations.minutes.buckets[-1]; return latest.nodes.buckets.findAll({ return it.fielddata && it.fielddata.value >= (ctx.metadata.fielddata_cache_size * ctx.metadata.threshold) }).collect({ it.fielddata.percent = Math.round((it.fielddata.value/ctx.metadata.fielddata_cache_size)*100); it });"
},
"email": {
"to": "<username>@<domainname>", <4>
"subject": "Watcher Notification - NODES WITH 80% FIELDDATA UTILIZATION",
"body": "Nodes with 80% FIELDDATA UTILIZATION (above 80%):\n\n{{#ctx.payload._value}}\"{{key}}\" - Fielddata utilization is at {{fielddata.value}} bytes ({{fielddata.percent}}%)\n{{/ctx.payload._value}}"
}
}
}
}
--------------------------------------------------
// AUTOSENSE
<1> The size of the field data cache. Set to the actual cache size configured for your nodes.
<2> The throttle period prevents notifications from being sent more than once a minute.
You can change the throttle period to receive notifications more or less frequently.
<3> To send email notifications, you must configure at least one email account in
`elasticsearch.yml`. See <<email-services, Configuring Email Services>> for more
information.
<4> Specify the email address you want to notify.
NOTE: This example uses an inline script, which requires you to enable dynamic scripting in
Elasticsearch. While this is convenient when you're experimenting with Watcher, in a
production environment we recommend disabling dynamic scripting and using file scripts.
[[watching-nodes]]
[float]
==== Watching for Nodes Joining or Leaving a Cluster
This watch checks every minute to see if a node has joined or left the cluster:
- The watch schedule is set to execute the watch every minute.
- The watch input searches for `node_left` and `node_joined` events in the past 60 seconds.
- The watch condition checks to see if there are any search results in the payload. If so,
the watch actions are performed.
- The watch action is to send an email. (You could also call a `webhook` or store the event.)
[source,json]
--------------------------------------------------
PUT _watcher/watch/node_event
{
"trigger": {
"schedule": {
"interval": "60s"
}
},
"input": {
"search": {
"request": {
"indices": [
".marvel-*"
],
"search_type": "query_then_fetch",
"body": {
"query": {
"filtered": {
"query": {
"bool": {
"should": [
{
"match": {
"event": "node_left"
}
},
{
"match": {
"event": "node_joined"
}
}
]
}
},
"filter": {
"range": {
"@timestamp": {
"from": "{{ctx.trigger.scheduled_time}}||-60s",
"to": "{{ctx.trigger.triggered_time}}"
}
}
}
}
},
"fields": [
"event",
"message",
"cluster_name"
],
"sort": [
{
"@timestamp": {
"order": "desc"
}
}
]
}
}
}
},
"throttle_period": "60s", <1>
"condition": {
"script": {
"inline": "ctx.payload.hits.size() > 0 "
}
},
"actions": {
"send_email": { <2>
"email": {
"to": "<username>@<domainname>", <3>
"subject": "{{ctx.payload.hits.hits.0.fields.event}} the cluster",
"body": "{{ctx.payload.hits.hits.0.fields.message}} the cluster {{ctx.payload.hits.hits.0.fields.cluster_name}} "
}
}
}
}
--------------------------------------------------
// AUTOSENSE
<1> The throttle period prevents notifications from being sent more than once a minute.
You can change the throttle period to receive notifications more or less frequently.
<2> To send email notifications, you must configure at least one email account in
`elasticsearch.yml`. See <<email-services, Configuring Email Services>> for more
information.
<3> Specify the email address you want to notify.
NOTE: This example uses an inline script, which requires you to enable dynamic scripting in
Elasticsearch. While this is convenient when you're experimenting with Watcher, in a
production environment we recommend disabling dynamic scripting and using file scripts.

[[watching-time-series-data]]
=== Watching Time Series Data
If you are indexing time-series data such as logs, RSS feeds, or network traffic,
you can use Watcher to send notifications when certain events occur.
For example, you could index an RSS feed of posts on Stack Overflow that are tagged with Elasticsearch, Logstash, or Kibana, set up a watch to check daily for new posts about a problem or failure, and send an email if any are found.
The simplest way to index an RSS feed is to use https://www.elastic.co/products/logstash[Logstash].
To install Logstash and set up the RSS input plugin:
. https://www.elastic.co/downloads/logstash[Download Logstash 1.5.0 RC4+] and unpack the archive file.
. Go to the `logstash-<logstash_version>` directory and install the
http://www.elastic.co/guide/en/logstash/current/plugins-inputs-rss.html[RSS input]
plugin:
+
[source,shell]
----------------------------------------------------------
cd logstash-<logstash_version>
bin/plugin install logstash-input-rss
----------------------------------------------------------
. Create a Logstash configuration file that uses the RSS input plugin
to get data from an RSS/atom feed and outputs the data to Elasticsearch. For example, the following `rss.conf` file gets events from the Stack Overflow feed that are tagged with `elasticsearch`, `logstash`, or `kibana`.
+
[source,text]
----------------------------------------------------------
input {
rss {
url => "http://stackoverflow.com/feeds/tag/elasticsearch+or+logstash+or+kibana"
interval => 3600 <1>
}
}
output {
elasticsearch {
protocol => "http"
host => "localhost" <2>
}
stdout { }
}
----------------------------------------------------------
<1> Checks the feed every hour.
<2> The hostname or IP address of the host to use to connect to your Elasticsearch cluster.
For more information see {logstash-ref}/plugins-outputs-elasticsearch.html#plugins-outputs-elasticsearch-host[Elasticsearch output] in the Logstash Reference.
. Run Logstash with the `rss.conf` config file to start indexing the feed:
+
[source,shell]
----------------------------------------------------------
bin/logstash -f rss.conf
----------------------------------------------------------
Once you have Logstash set up to input data from the RSS feed into
Elasticsearch, you can set up a watch that runs at noon each day to check for new posts that contain the words "error" or "problem".
To set up the watch, define a trigger, input, condition, and an action:
. Define the watch trigger--a daily schedule that runs at 12:00 UTC time every day:
+
[source,json]
--------------------------------------------------
"trigger" : {
"schedule" : {
"daily" : { "at" : "12:00" }
}
}
--------------------------------------------------
+
NOTE: In Watcher, you specify times in UTC time. Don't forget to do the conversion from your local time so the schedule triggers at the time you intend.
. Define the watch input--a search that uses a filter to constrain the results to
the past day.
+
[source,json]
--------------------------------------------------
"input" : {
"search" : {
"request" : {
"indices" : [ "logstash*" ],
"body" : {
"query" : {
"filtered" : {
"query" : {"match" : {"message": "error problem"}},
"filter" : {
"range" : {"@timestamp" : {"gte" : "now-1d"}}
}
}
}
}
}
}
}
--------------------------------------------------
. Define a watch condition to check the payload to see if the input search returned any hits. If it did, the condition resolves to `true` and the watch actions will be performed.
+
You define the condition with the following script:
+
[source,text]
--------------------------------------------------
return ctx.payload.hits.total > threshold
--------------------------------------------------
+
If you store the script in a file at `$ES_HOME/config/scripts/threshold_hits.groovy`, you can then reference it by name in the watch condition. Using file-based Groovy scripts enables you to avoid using dynamic scripting. For more information see {blog-ref}running-groovy-scripts-without-dynamic-scripting[Running Groovy Scripts without Dynamic Scripting].
+
[source,json]
--------------------------------------------------
"condition" : {
"script" : {
"file" : "threshold_hits",
"params" : {
"threshold" : 0 <1>
}
}
},
--------------------------------------------------
+
<1> The threshold parameter value you want to pass to the script.
+
NOTE: We recommend using file scripts when possible. To use inline or indexed scripts, you must {ref}/modules-scripting.html[enable dynamic scripting] in Elasticsearch.
. Define a watch action to send an email that contains the relevant messages from the past day as an attachment.
+
[source,json]
--------------------------------------------------
"actions" : {
"send_email" : {
"email" : {
"to" : "<username>@<domainname>",
"subject" : "Somebody needs help with ELK",
"body" : "The attached Stack Overflow posts were tagged with Elasticsearch, Logstash, or Kibana and mentioned an error or problem.",
"attach_data" : true
}
}
}
--------------------------------------------------
+
NOTE: To use the email action, you must configure at least one email account in
`elasticsearch.yml`. If you configure multiple email accounts, you need to specify which one you want to use in the email action. For more information, see <<email-services, Working with Various Email Services>>.
The complete watch looks like this:
[source,json]
--------------------------------------------------
PUT _watcher/watch/rss_watch
{
"trigger" : {
"schedule" : {
"daily" : { "at" : "12:00" }
}
},
"input" : {
"search" : {
"request" : {
"indices" : [ "logstash*" ],
"body" : {
"query" : {
"filtered" : {
"query" : {"match" : {"message": "error problem"}},
"filter" : {"range" : {"@timestamp" : {"gte" : "now-1d"}}}
}
}
}
}
}
},
"condition" : {
"script" : {
"file" : "threshold_hits",
"params" : {
"threshold" : 0
}
}
},
"actions" : {
"send_email" : {
"email" : {
"to" : "<username>@<domainname>", <1>
"subject" : "Somebody needs help with ELK",
"body" : "The attached Stack Overflow posts were tagged with Elasticsearch, Logstash, or Kibana and mentioned an error or problem.",
"attach_data" : true
}
}
}
}
--------------------------------------------------
// AUTOSENSE
<1> Replace `<username>@<domainname>` with your email address to receive notifications.
[TIP]
=================================================
To execute a watch immediately (without waiting for the schedule to trigger), use the <<api-rest-execute-watch, `_execute`>> API:
[source,json]
--------------------------------------------------
POST _watcher/watch/rss_watch/_execute
{
"ignore_condition" : true,
"action_modes" : {
"_all" : "force_execute"
},
"record_execution" : true
}
--------------------------------------------------
// AUTOSENSE
==================================================

[[getting-started]]
== Getting Started
This getting started guide walks you through installing Watcher and creating your first watches,
and introduces the building blocks you'll use to create custom watches. You can install Watcher
on nodes running Elasticsearch 1.5 or later.
To install and run Watcher:
. Run `bin/plugin -i` from `ES_HOME` to install the License plugin:
+
[source,shell]
----------------------------------------------------------
bin/plugin -i elasticsearch/license/latest
----------------------------------------------------------
+
NOTE: You need to install the License and Watcher plugins on each node in your cluster.
. Run `bin/plugin -i` to install the Watcher plugin.
+
[source,shell]
----------------------------------------------------------
bin/plugin -i elasticsearch/watcher/latest
----------------------------------------------------------
+
NOTE: If you are using a <<package-installation, DEB/RPM distribution>> of Elasticsearch,
run the installation with superuser permissions. To perform an offline installation,
<<offline-installation, download the Watcher binaries>>.
. Start Elasticsearch.
+
[source,shell]
----------------------------------------------------------
bin/elasticsearch
----------------------------------------------------------
. To verify that Watcher is set up, call the Watcher `_stats` API:
+
[source,js]
--------------------------------------------------
curl -XGET 'http://localhost:9200/_watcher/stats?pretty'
--------------------------------------------------
+
You haven't set up any watches yet, so the `watch_count` is zero and the `execution_thread_pool` queue
is empty:
+
[source,js]
--------------------------------------------------
{
"watcher_state": "started",
"watch_count": 0,
"execution_thread_pool": {
"queue_size": 0,
"max_size": 0
}
}
--------------------------------------------------
Ready to start building watches? Choose one of the following scenarios:
* <<watch-log-data, Watch Log Data for Errors>>
* <<watch-cluster-status, Watch Your Cluster Health>>
[[watch-log-data]]
=== Watch Log Data for Errors
You can easily configure a watch that periodically checks your log data for error conditions:
* <<log-add-input, Schedule the watch and define an input>> to search your log data for error events.
* <<log-add-condition, Add a condition>> that checks to see if any errors were found.
* <<log-take-action, Take action>> if there are any errors.
[float]
[[log-add-input]]
==== Schedule the Watch and Add an Input
A watch <<trigger-schedule, schedule>> controls how often a watch is triggered. The watch
<<input, input>> gets the data that you want to evaluate.
To periodically search your log data and load the results into the watch, you use an
<<schedule-interval, interval>> schedule and a <<input-search, search>> input. For example, the
following watch searches the `logs` index for errors every 10 seconds:
[source,js]
--------------------------------------------------
curl -XPUT 'http://localhost:9200/_watcher/watch/log_error_watch' -d '{
"trigger" : {
"schedule" : { "interval" : "10s" } <1>
},
"input" : {
"search" : {
"request" : {
"indices" : [ "logs" ],
"body" : {
"query" : {
"match" : { "message": "error" }
}
}
}
}
}
}'
--------------------------------------------------
<1> Schedules are typically configured to run less frequently. This example sets the interval to
10 seconds so you can easily see the watches being triggered. Since this watch runs so frequently,
don't forget to <<log-delete, delete the watch>> when you're done experimenting.
If you check the watch history, you'll see that the watch is being triggered every 10 seconds.
However, the search isn't returning any results, so nothing is loaded into the watch payload.
For example, the following snippet gets the last ten watch executions (a.k.a. watch records) from
the watch history:
[source,js]
--------------------------------------------------------------------------------
curl -XGET 'http://localhost:9200/.watch_history*/_search?pretty' -d '{
"sort" : [
{ "result.execution_time" : "desc" }
]
}'
--------------------------------------------------------------------------------
[float]
[[log-add-condition]]
==== Add a Condition
A <<condition, condition>> evaluates the data you've loaded into the watch and determines if any
action is required. Since you've defined an input that loads log errors into the watch, you can
define a condition that checks to see if any errors were found.
For example, you could add a condition that simply checks to see if the search input returned
any hits.
[source,js]
--------------------------------------------------
curl -XPUT 'http://localhost:9200/_watcher/watch/log_error_watch' -d '{
"trigger" : { "schedule" : { "interval" : "10s" } },
"input" : {
"search" : {
"request" : {
"indices" : [ "logs" ],
"body" : {
"query" : {
"match" : { "message": "error" }
}
}
}
}
},
"condition" : {
"compare" : { "ctx.payload.hits.total" : { "gt" : 0 }} <1>
}
}'
--------------------------------------------------
<1> The <<condition-compare, compare>> condition lets you easily compare against values in the
execution context without enabling dynamic scripting.
The condition result is recorded as part of the `watch_record` each time the watch executes. Since
there are currently no log events in the `logs` index, the watch condition will not be met. If you
search the history for watch executions where the condition was met during the last 10 seconds,
there are no hits:
[source,js]
--------------------------------------------------------------------------------
curl -XGET 'http://localhost:9200/.watch_history*/_search?pretty' -d '{
"query" : {
"bool" : {
"must" : [
{ "match" : { "result.condition.met" : true }},
{ "range" : { "result.execution_time" : { "from" : "now-10s"}}}
]
}
}
}'
--------------------------------------------------------------------------------
For the condition in the example above to evaluate to `true`, you need to add an event to the
`logs` index that contains an error.
For example, the following snippet adds a 404 error to the `logs` index:
[source,js]
--------------------------------------------------
curl -XPOST 'http://localhost:9200/logs/event' -d '{
"timestamp" : "2015-05-17T18:12:07.613Z",
"request" : "GET index.html",
"status_code" : 404,
"message" : "Error: File not found"
}'
--------------------------------------------------
Once you add this event, the next time the watch executes its condition will evaluate to `true`.
You can verify this by searching the watch history:
[source,js]
--------------------------------------------------------------------------------
curl -XGET 'http://localhost:9200/.watch_history*/_search?pretty' -d '{
"query" : {
"bool" : {
"must" : [
{ "match" : { "result.condition.met" : true }},
{ "range" : { "result.execution_time" : { "from" : "now-10s"}}}
]
}
}
}'
--------------------------------------------------------------------------------
[float]
[[log-take-action]]
==== Take Action
Recording `watch_records` in the watch history is nice, but the real power of Watcher is being able
to do something when the watch condition is met. The watch's <<actions, actions>> define what to
do when the watch condition evaluates to `true`--you can send emails, call third-party webhooks,
write documents to an Elasticsearch index, or log messages to the standard Elasticsearch log files.
For example, you could add an action to write a message to the Elasticsearch log when an error is
detected.
[source,js]
--------------------------------------------------
curl -XPUT 'http://localhost:9200/_watcher/watch/log_error_watch' -d '{
"trigger" : { "schedule" : { "interval" : "10s" } },
"input" : {
"search" : {
"request" : {
"indices" : [ "logs" ],
"body" : {
"query" : {
"match" : { "message": "error" }
}
}
}
}
},
"condition" : {
"compare" : { "ctx.payload.hits.total" : { "gt" : 0 }}
},
"actions" : {
"log_error" : {
"logging" : {
"text" : "Found {{ctx.payload.hits.total}} errors in the logs"
}
}
}
}'
--------------------------------------------------
[float]
[[log-delete]]
==== Delete the Watch
Since the `log_error_watch` is configured to run every 10 seconds, make sure you delete it when
you're done experimenting. Otherwise, the noise from this sample watch will make it hard to see
what else is going on in your watch history and log file.
To remove the watch, use the <<api-rest-delete-watch, DELETE watch>> API:
[source,js]
--------------------------------------------------
curl -XDELETE 'http://localhost:9200/_watcher/watch/log_error_watch'
--------------------------------------------------
[[watch-cluster-status]]
=== Watch Your Cluster Health
You can easily configure a basic watch to monitor the health of your Elasticsearch cluster:
* <<health-add-input, Schedule the watch and define an input>> that gets the cluster health status.
* <<health-add-condition, Add a condition>> that evaluates the health status to determine if action
is required.
* <<health-take-action, Take action>> if the cluster is RED.
[float]
[[health-add-input]]
==== Schedule the Watch and Add an Input
A watch <<trigger-schedule, schedule>> controls how often a watch is triggered. The watch
<<input, input>> gets the data that you want to evaluate.
The simplest way to define a schedule is to specify an interval. For example, the following
schedule runs every 10 seconds:
[source,js]
--------------------------------------------------
curl -XPUT 'http://localhost:9200/_watcher/watch/cluster_health_watch' -d '{
"trigger" : {
"schedule" : { "interval" : "10s" } <1>
}
}'
--------------------------------------------------
<1> Schedules are typically configured to run less frequently. This example sets the interval to
10 seconds so you can easily see the watches being triggered. Since this watch runs so frequently,
don't forget to <<health-delete, delete the watch>> when you're done experimenting.
To get the status of your cluster, you can call the Elasticsearch
{ref}//cluster-health.html[cluster health] API:
[source,js]
--------------------------------------------------
curl -XGET 'http://localhost:9200/_cluster/health?pretty'
--------------------------------------------------
To load the health status into your watch, you simply add an <<input-http, HTTP input>> that calls
the cluster health API:
[source,js]
--------------------------------------------------
curl -XPUT 'http://localhost:9200/_watcher/watch/cluster_health_watch' -d '{
"trigger" : {
"schedule" : { "interval" : "10s" }
},
"input" : {
"http" : {
"request" : {
"host" : "localhost",
"port" : 9200,
"path" : "/_cluster/health"
}
}
}
}'
--------------------------------------------------
If you check the watch history, you'll see that the cluster status is recorded as part of the
`watch_record` each time the watch executes.
For example, the following snippet gets the last ten watch records from the watch history:
[source,js]
--------------------------------------------------------------------------------
curl -XGET 'http://localhost:9200/.watch_history*/_search' -d '{
"sort" : [
{ "result.execution_time" : "desc" }
]
}'
--------------------------------------------------------------------------------
[float]
[[health-add-condition]]
==== Add a Condition
A <<condition, condition>> evaluates the data you've loaded into the watch and determines if any
action is required. Since you've defined an input that loads the cluster status into the watch,
you can define a condition that checks that status.
For example, you could add a condition to check to see if the status is RED.
[source,js]
--------------------------------------------------
curl -XPUT 'http://localhost:9200/_watcher/watch/cluster_health_watch' -d '{
"trigger" : {
"schedule" : { "interval" : "10s" } <1>
},
"input" : {
"http" : {
"request" : {
"host" : "localhost",
"port" : 9200,
"path" : "/_cluster/health"
}
}
},
"condition" : {
"compare" : {
"ctx.payload.status" : { "eq" : "red" }
}
}
}'
--------------------------------------------------
<1> Schedules are typically configured to run less frequently. This example sets the interval to
10 seconds so you can easily see the watches being triggered.
If you check the watch history, you'll see that the condition result is recorded as part of the
`watch_record` each time the watch executes.
To check to see if the condition was met, you can run the following query.
[source,js]
--------------------------------------------------------------------------------
curl -XGET 'http://localhost:9200/.watch_history*/_search?pretty' -d '{
"query" : {
"match" : { "result.condition.met" : true }
}
}'
--------------------------------------------------------------------------------
[float]
[[health-take-action]]
==== Take Action
Recording `watch_records` in the watch history is nice, but the real power of Watcher is being able
to do something in response to an alert. A watch's <<actions, actions>> define what to do when the
watch condition is true--you can send emails, call third-party webhooks, or write documents to an
Elasticsearch index or log when the watch condition is met.
For example, you could add an action to send an email notification when the cluster status is RED.
[source,js]
--------------------------------------------------
curl -XPUT 'http://localhost:9200/_watcher/watch/cluster_health_watch' -d '{
"trigger" : {
"schedule" : { "interval" : "10s" }
},
"input" : {
"http" : {
"request" : {
"host" : "localhost",
"port" : 9200,
"path" : "/_cluster/health"
}
}
},
"condition" : {
"compare" : {
"ctx.payload.status" : { "eq" : "red" }
}
},
"actions" : {
"send_email" : {
"email" : {
"to" : "<username>@<domainname>",
"subject" : "Cluster Status Warning",
"body" : "Cluster status is RED"
}
}
}
}'
--------------------------------------------------
For Watcher to send email, you must configure an email account in your `elasticsearch.yml`
configuration file and restart Elasticsearch. To add an email account, set the
`watcher.actions.email.service.account` property.
For example, the following snippet configures a single Gmail account named `work`.
[source,shell]
----------------------------------------------------------
watcher.actions.email.service.account:
work:
profile: gmail
email_defaults:
from: <email> <1>
smtp:
auth: true
starttls.enable: true
host: smtp.gmail.com
port: 587
user: <username> <2>
password: <password> <3>
----------------------------------------------------------
<1> Replace `<email>` with the email address from which you want to send notifications.
<2> Replace `<username>` with your Gmail user name (typically your Gmail address).
<3> Replace `<password>` with your Gmail password.
NOTE: If you have advanced security options enabled for your email account, you need to take
additional steps to send email from Watcher. For more information, see
<<email-services, Working with Various Email Services>>.
You can check the watch history or the `status_index` to see that the action was performed.
[source,js]
--------------------------------------------------------------------------------
curl -XGET 'http://localhost:9200/.watch_history*/_search?pretty' -d '{
"query" : {
"match" : { "result.condition.met" : true }
}
}'
--------------------------------------------------------------------------------
[float]
[[health-delete]]
==== Delete the Watch
Since the `cluster_health_watch` is configured to run every 10 seconds, make sure you delete it
when you're done experimenting. Otherwise, you'll spam yourself indefinitely.
To remove the watch, use the <<api-rest-delete-watch, DELETE watch>> API:
[source,js]
--------------------------------------------------------------------------------
curl -XDELETE 'http://localhost:9200/_watcher/watch/cluster_health_watch'
--------------------------------------------------------------------------------

[[how-watcher-works]]
== How Watcher Works
Once you have <<getting-started, installed watcher>>, you can <<watch-definition, add watches>>
to automatically perform an action when certain conditions are met. The conditions are generally
based on data you've loaded into the watch by querying an Elasticsearch index or submitting an
HTTP request to a web service. For example, you could send an email to the sysadmin when a
search of your log data indicates that there are errors.
This topic describes the elements of a watch and how watches operate.
[[watch-definition]]
=== Watch Definition
A watch consists of a trigger, input, condition, and the actions you want to perform when the
watch condition is met. In addition, you can define transforms to process the watch payload
before executing the actions.
<<trigger,Trigger>> :: Determines when the watch is checked.
A watch must have a trigger.
<<input,Input>> :: Loads data into the watch payload.
If no input is specified, an empty payload is loaded.
<<condition,Condition>> :: Controls whether the watch actions are executed.
If no condition is specified, the condition defaults to `always`.
<<transform,Transform>> :: Processes the watch payload to prepare it for the watch actions.
You can define transforms at the watch level or define action-specific
transforms. Optional.
<<actions,Actions>> :: Specify what happens when the watch
condition is met.
[[watch-definition-example]]
For example, the following snippet shows a <<api-rest-put-watch, Put Watch>> request that defines
a watch that looks for log error events:
[source,json]
--------------------------------------------------
PUT _watcher/watch/log_event_watch
{
"metadata" : { <1>
"color" : "red"
},
"trigger" : { <2>
"schedule" : {
"interval" : "5m"
}
},
"input" : { <3>
"search" : {
"request" : {
"search_type" : "count",
"indices" : "log-events",
"body" : {
"query" : { "match" : { "status" : "error" } }
}
}
}
},
"condition" : { <4>
"script" : "return ctx.payload.hits.total > 5"
},
"transform" : { <5>
"search" : {
"request" : {
"indices" : "log-events",
"body" : {
"query" : { "match" : { "status" : "error" } }
}
}
}
},
"actions" : { <6>
"my_webhook" : {
"webhook" : {
"method" : "POST",
"host" : "mylisteninghost",
"port" : 9200,
"path" : "/{{watch_id}}",
"body" : "Encountered {{ctx.payload.hits.total}} errors"
}
},
"email_administrator" : {
"email" : {
"to" : "sys.admino@host.domain",
"subject" : "Encountered {{ctx.payload.hits.total}} errors",
"body" : "Too many error in the system, see attached data",
"attach_data" : true,
"priority" : "high"
}
}
}
}
--------------------------------------------------
// AUTOSENSE
<1> Metadata - You can attach optional static metadata to a watch.
<2> Trigger - This schedule trigger executes the watch every 5 minutes.
<3> Input - This input searches for errors in the `log-events` index and loads the response
into the watch payload.
<4> Condition - This condition checks to see if there are more than 5 error events (hits in the
search response). If there are, execution continues.
<5> Transform - If the watch condition is met, this transform loads all of the errors into
the watch payload by searching for the errors using the default search type,
`query_then_fetch`. All of the watch actions have access to this payload.
<6> Actions - This watch has two actions. The `my_webhook` action notifies a 3rd party system
about the problem. The `email_administrator` action sends a high priority
email to the system administrator. The watch payload
that contains the errors is attached to the email.
[[watch-execution]]
=== Watch Execution
When you add a watch, Watcher immediately registers its trigger with the appropriate trigger
engine. Watches that have a `schedule` trigger are registered with the `scheduler` trigger engine.
The trigger engine is responsible for triggering execution of the watch. Trigger engines run on
the master node and use a separate thread pool from the one used to execute watches.
When a watch is triggered, Watcher queues it up for execution. A `watch_record` document is
created and added to the <<watch-history, watch history>> index and the initial status is set to
`awaits_execution`.
When execution starts, Watcher creates a watch execution context for the watch. The execution
context provides scripts and templates access to the watch metadata, payload, watch ID, execution
time, and trigger information. For more information, see
<<watch-execution-context, Watch Execution Context>>.
During the execution process, Watcher:
. Loads the input data into the payload in the watch execution context. This makes the data
available to all subsequent steps in the execution process. This step is controlled by the
input configured for the watch.
. Evaluates the watch condition to determine whether or not to continue processing the watch.
If the condition is met (evaluates to `true`), processing advances to the next step. If it
is not met (evaluates to `false`), execution of the watch stops.
. Applies transforms to the watch payload (if defined).
. Executes the watch actions if the condition is met and the watch is not being
<<watch-acknowledgment-throttling, throttled>>.
When watch execution finishes, Watcher updates the watch record with the execution results.
The watch record includes the execution time and duration, whether the watch condition was met,
and the status of each action that was performed. For more information, see
<<watch-history, Watch History>>.
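For example, a minimal sketch of a query (assuming the `.watch_history*` index pattern used
elsewhere in this guide, and that each record stores the id of its watch in a `watch_id` field)
that retrieves the most recent records for the `log_event_watch` defined above:
[source,js]
--------------------------------------------------
GET .watch_history*/_search
{
  "query" : {
    "match" : { "watch_id" : "log_event_watch" }
  },
  "sort" : [
    { "result.execution_time" : "desc" }
  ]
}
--------------------------------------------------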
The following diagram shows the watch execution process:
image::images/watch-execution.jpg[align="center"]
[[watch-acknowledgment-throttling]]
=== Watch Acknowledgment and Throttling
Watcher supports both time-based and acknowledgment-based throttling. This enables you to prevent
actions from being repeatedly executed for the same event.
By default, Watcher uses time-based throttling with a throttle period of 5 minutes. This means that
if a watch is executed every minute, its actions are performed a maximum of once every 5 minutes,
even when the condition is met. You can configure the throttle period on a per-action basis, at the
watch level, or change the <<configuring-default-throttle-period, default throttle period>> in
`elasticsearch.yml`.
Acknowledgment-based throttling enables you to tell Watcher not to send any more notifications
about a watch as long as its condition remains true. Once the condition evaluates to `false`, the
acknowledgment is cleared and Watcher resumes executing the watch's actions normally.
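For example, assuming you have stored the `log_event_watch` shown above, the following sketch
acknowledges all of its actions using the <<api-rest-ack-watch, Ack Watch>> API:
[source,js]
--------------------------------------------------
PUT _watcher/watch/log_event_watch/_ack
--------------------------------------------------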
For more information, see <<actions-ack-throttle, action throttling>>.
[[scripts-templates]]
=== Scripts and Templates
You can use scripts and templates when defining a watch. Scripts and templates can reference
elements in the watch execution context, including the watch payload. The execution context defines
variables you can use in a script and parameter placeholders you can use in a template. Transforms
also update the contents of the watch payload.
Watcher uses the Elasticsearch script infrastructure, which supports <<inline-templates-scripts,inline>>,
<<indexed-templates-scripts, indexed>>, and <<file-templates-scripts, file-based scripts>>. Scripts
and templates are compiled and cached by Elasticsearch to optimize recurring execution.
{ref}/modules-scripting.html#_automatic_script_reloading[Autoloading] is also supported. For more
information, see {ref}/modules-scripting.html[Scripting] in the Elasticsearch Reference.
[[watch-execution-context]]
==== Watch Execution Context
The following snippet shows the basic elements in a watch's execution context:
[source,js]
----------------------------------------------------------------------
{
"ctx" : {
"metadata" : { ... }, <1>
"payload" : { ... }, <2>
"watch_id" : "<id>", <3>
"execution_time" : "20150220T00:00:10Z", <4>
"trigger" : { <5>
"triggered_time" : "20150220T00:00:10Z",
"scheduled_time" : "20150220T00:00:00Z"
},
"vars" : { ... } <6>
  }
}
----------------------------------------------------------------------
<1> Any static metadata specified in the watch definition.
<2> The current watch payload.
<3> The id of the executing watch.
<4> A timestamp that shows when the watch execution started.
<5> Information about the trigger event. For a `schedule` trigger, this
consists of the `triggered_time` (when the watch was triggered)
and the `scheduled_time` (when the watch was scheduled to be triggered).
<6> Dynamic variables that can be set and accessed by different constructs
    during the execution. These variables are scoped to a single execution
    (i.e. they're not persisted and can't be used between different executions
    of the same watch).
[[scripts]]
==== Using Scripts
You can use scripts to define <<condition-script, conditions>> and <<transform-script, transforms>>.
The default scripting language is Groovy.
Scripts can reference any of the values in the watch execution context or values explicitly passed
through script parameters.
For example, if the context metadata contains a `color` field, `"metadata" : {"color": "red"}`, you
can access its value with the variable `ctx.metadata.color`. If you pass in a `color` parameter as
part of the condition or transform definition, `"params" : {"color": "red"}`, you access its value
with the variable `color`.
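For example, a minimal sketch of an inline condition script that compares the number of hits in
the payload to a `threshold` value passed in through the script parameters (`threshold` is an
illustrative parameter name, not a required one):
[source,js]
--------------------------------------------------
"condition" : {
  "script" : {
    "inline" : "return ctx.payload.hits.total > threshold",
    "params" : {
      "threshold" : 0
    }
  }
}
--------------------------------------------------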
[[templates]]
==== Using Templates
You use templates to define dynamic content for a watch. At execution time, templates pull in data
from the watch's execution context. For example, you could use a template to populate the `subject`
field for an `email` action with data stored in the watch payload. Templates can also access values
explicitly passed through template parameters.
Watcher supports templates in a variety of places:
* The <<input-http, `http`>> input supports templates in the `path`, `params`, `headers` and
`body` fields.
* The <<actions-email, `email`>> action supports templates in the `from`, `reply_to`, `priority`,
`to`, `cc`, `bcc`, `subject`, `body.text` and `body.html` fields.
* The <<actions-webhook, `webhook`>> action supports templates in the `path`, `params`, `headers`
and `body` fields.
You specify templates using the https://mustache.github.io[Mustache] scripting language.
[NOTE]
===============================
While Elasticsearch supports Mustache out of the box, Watcher ships with its own version registered
under `xmustache`. This is because the default Mustache implementation in Elasticsearch 1.5 lacks
array/list access support. `xmustache` adds this support to enable easy array access. For example,
to refer to the source of the third search hit in the payload use
`{{ctx.payload.hits.hits.2._source}}`.
When this feature is available in Elasticsearch, we expect to remove `xmustache` from Watcher and
use the version that ships with Elasticsearch.
===============================
For example, if the context metadata contains a `color` field, you can access its value with the
expression `{{ctx.metadata.color}}`. If the context payload contains the results of a search, you
could access the source of the 3rd search hit in the payload with the following expression
`{{ctx.payload.hits.hits.2._source}}`.
If you pass in a parameter as part of the input or action definition, you can reference the
parameter by name. For example, the following snippet defines and references the `color` parameter.
[source,js]
----------------------------------------------------------------------
{
"actions" : {
"email_notification" : {
"email" : {
"subject" : {
"inline" : "{{color}} alert",
"params" : {
"color" : "red"
}
}
}
}
}
}
----------------------------------------------------------------------
[[inline-templates-scripts]]
==== Inline Templates and Scripts
To define an inline template or script, you simply specify it directly in the value of a field.
For example, the following snippet configures the subject of the `email` action using an inline
template that references the `color` value in the context metadata.
[source,js]
----------------------------------------------------------------------
"actions" : {
"email_notification" : {
"email" : {
"subject" : "{{ctx.metadata.color}} alert"
}
}
}
----------------------------------------------------------------------
For a script, you simply specify the inline script as the value of the `script` field.
For example:
[source,js]
----------------------------------------------------------------------
"condition" : {
"script" : "return true"
}
----------------------------------------------------------------------
You can also explicitly specify the inline type by using a formal object definition as the field
value. For example:
[source,js]
----------------------------------------------------------------------
"actions" : {
"email_notification" : {
"email" : {
"subject" : {
"inline" : "{{ctx.metadata.color}} alert"
}
}
}
}
----------------------------------------------------------------------
The formal object definition for a script would be:
[source,js]
----------------------------------------------------------------------
"condition" : {
"script" : {
"inline": "return true"
}
}
----------------------------------------------------------------------
[[indexed-templates-scripts]]
==== Indexed Templates and Scripts
If you {ref}/modules-scripting.html#_indexed_scripts[index] your templates and scripts, you can
reference them by id.
To reference an indexed script or template, you use the formal object definition and specify its
id in the `id` field. For example, the following snippet references the `email_notification_subject`
template.
[source,js]
----------------------------------------------------------------------
{
...
"actions" : {
"email_notification" : {
"email" : {
"subject" : {
"id" : "email_notification_subject",
"params" : {
"color" : "red"
}
}
}
}
}
}
----------------------------------------------------------------------
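An indexed script is referenced the same way. For example, assuming a condition script has been
indexed under the id `threshold_hits` (a hypothetical id), the condition could be defined as:
[source,js]
----------------------------------------------------------------------
"condition" : {
  "script" : {
    "id" : "threshold_hits",
    "params" : {
      "threshold" : 0
    }
  }
}
----------------------------------------------------------------------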
[[file-templates-scripts]]
==== File-based Templates and Scripts
If you store templates or scripts in the `$ES_HOME/config/scripts` directory, you can reference
them by name. Template files must be saved with the extension `.mustache`. Script files must be
saved with the appropriate file extension, such as `.groovy`.
NOTE: The `config/scripts` directory is scanned periodically for changes. New and changed
templates and scripts are reloaded and deleted templates and scripts are removed from
the preloaded scripts cache. For more information, see
{ref}/modules-scripting.html#_automatic_script_reloading[Automatic Script Reloading]
in the Elasticsearch Reference.
To reference a file-based template or script, you use the formal object definition and specify its
name in the `file` field. For example, the following snippet references the script file
`threshold_hits.groovy`.
[source,js]
--------------------------------------------------
"condition" : {
"script" : {
"file" : "threshold_hits",
"params" : {
"threshold" : 0
}
}
}
--------------------------------------------------
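A file-based template is referenced the same way. For example, the following snippet (the same
pattern shown in <<file-templates, File Templates>>) references a template file
`email_notification_subject.mustache` stored in `config/scripts`:
[source,js]
--------------------------------------------------
"actions" : {
  "email_notification" : {
    "email" : {
      "subject" : {
        "file" : "email_notification_subject",
        "params" : {
          "color" : "red"
        }
      }
    }
  }
}
--------------------------------------------------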
include::how-watcher-works/dynamic-index-names.asciidoc[]

[[dynamic-index-names]]
=== Dynamic Index Names
Several watch constructs deal with indices, including <<actions-index, `index` action>>,
the <<transform-search, `search` transform>> and the <<input-search, `search` input>>.
When configuring these constructs you can set the index names to static values. In addition
to specifying static index names, Watcher enables you to specify indexes using dynamic
time-aware templates. These templates resolve to specific index names during the watch
execution according to the execution time.
Dynamic index name resolution enables you to search a range of time-series indices, rather
than searching all of your time-series indices and filtering the results. Limiting the
number of indices that are searched reduces the load on the cluster and improves watch
execution performance. For example, if you are using a watch to monitor errors in your
daily logs, you can use a dynamic index name template to restrict the search to the past
two days.
A dynamic index name takes the following form:
[source,txt]
----------------------------------------------------------------------
<static_name{date_math_expr{date_format}}>
----------------------------------------------------------------------
Where:
* `static_name` is the static text part of the name
* `date_math_expr` is a dynamic date math expression that computes the date dynamically
* `date_format` is the format in which the computed date should be rendered
NOTE: You must enclose dynamic index name templates within angle brackets. For example,
`<logstash-{now/d-2d}>`
The following example shows different forms of dynamic index names and the final index names
they resolve to, given an execution date of 22nd March 2024.
[options="header"]
|======
| Expression |Resolves to
| `<logstash-{now/d}>` | `logstash-2024.03.22`
| `<logstash-{now/M}>` | `logstash-2024.03.01`
| `<logstash-{now/M{YYYY.MM}}>` | `logstash-2024.03`
| `<logstash-{now/M-1M{YYYY.MM}}>` | `logstash-2024.02`
|======
To use the characters `{` and `}` in the static part of an index name template, escape them
with a backslash, `\`:
* `<elastic\\{ON\\}-{now/M}>` resolves to `elastic{ON}-2024.03.01`
The following example shows a search input that searches the Logstash indices for the past
three days, assuming the indices use the default Logstash index name format,
`logstash-YYYY.MM.dd`.
[source,json]
----------------------------------------------------------------------
{
...
"input" : {
"search" : {
"request" : {
"indices" : [
"<logstash-{now/d-2d}>",
"<logstash-{now/d-1d}>",
"<logstash-{now/d}>"
],
...
}
}
}
...
}
----------------------------------------------------------------------
[[dynamic-index-name-timezone]]
By default, the index names are resolved based on the `UTC` time zone. You can change this default
at multiple levels:
Configuring the following setting sets the default dynamic index name time zone for Watcher:
[source,yaml]
--------------------------------------------------
watcher.dynamic_indices.time_zone: '+01:00'
--------------------------------------------------
You can also configure the default time zone separately for each of the constructs that make
use of it (the `search` input/transform and the `index` action):
[source,yaml]
--------------------------------------------------
watcher.input.search.dynamic_indices.time_zone: '+01:00'
--------------------------------------------------
[source,yaml]
--------------------------------------------------
watcher.transform.search.dynamic_indices.time_zone: '+01:00'
--------------------------------------------------
[source,yaml]
--------------------------------------------------
watcher.actions.index.dynamic_indices.time_zone: '+01:00'
--------------------------------------------------
Alternatively, each of these constructs can define its own time zone within the watch
definition.

[[scripts-templates]]
=== Scripts and Templates
[float]
[[scripts]]
==== Using Scripts
[float]
[[templates]]
==== Using Templates
You can use templates to define dynamic content for a watch. At execution time, a template
can pull in data from the watch's execution context. For example, you could use a template to populate
the `subject` field for an `email` action with data stored in the watch payload.
You can use templates in a variety of places:
* The <<input-http, `http`>> input supports templates in the `path`, `params`, `headers` and `body` fields.
* The <<actions-email, `email`>> action supports templates in the `from`, `reply_to`, `priority`, `to`,
  `cc`, `bcc`, `subject`, `body.text` and `body.html` fields.
* The <<actions-webhook, `webhook`>> action supports templates in the `path`, `params`, `headers` and `body` fields.
You specify templates using the https://mustache.github.io[Mustache] scripting language. The
Watcher template engine uses Elasticsearch's script engine infrastructure, which supports:
* Caching - templates are compiled and cached by Elasticsearch to optimize recurring execution.
* Indexed Templates - like other scripts, you can {ref}/modules-scripting.html#_indexed_scripts[index]
your templates and refer to them by id.
* Template Files - you can store template files in `config/scripts` and refer to them by name.
{ref}/modules-scripting.html#_automatic_script_reloading[Autoloading] is also supported.
[NOTE]
===============================
While Elasticsearch supports Mustache out of the box, Watcher ships with its own version registered under `xmustache`. This is because the default Mustache implementation in Elasticsearch 1.5 lacks array/list access support. `xmustache` adds this support to enable easy array access. For example, to refer to the source of the third search hit in the
payload use `{{ctx.payload.hits.hits.2._source}}`.
When this feature is available in Elasticsearch, we expect to remove `xmustache` from Watcher and use the
version that ships with Elasticsearch.
===============================
[float]
[[accessing-template-values]]
==== Accessing the Watch Context and Template Parameters
A template can reference any of the values in the watch execution context or values explicitly passed through
template parameters.
The <<watch-execution-context, Standard Watch Execution Context Model>> is shown in the following snippet:
[source,js]
----------------------------------------------------------------------
{
"ctx" : {
"metadata" : { ... },
"payload" : { ... },
"watch_id" : "<id>",
"execution_time" : "20150220T00:00:10Z",
"trigger" {
"triggered_time" : "20150220T00:00:10Z",
"scheduled_time" : "20150220T00:00:00Z"
}
}
}
----------------------------------------------------------------------
For example, if the context metadata contains a `color` field, you can access its
value with the expression `{{ctx.metadata.color}}`. If the context payload
contains the results of a search, you could access the source of the 3rd search hit in the
payload with the following expression `{{ctx.payload.hits.hits.2._source}}`.
You can also pass arbitrary template parameters for a field by specifying the `params` attribute.
Templates can then reference these parameters by name. For example, the following
snippet defines and references the `color` parameter.
[source,js]
----------------------------------------------------------------------
{
"actions" : {
"email_notification" : {
"email" : {
"subject" : {
"inline" : "{{color}} alert",
"params" : {
"color" : "red"
}
}
}
}
}
}
----------------------------------------------------------------------
[float]
[[inline-templates]]
==== Inline Templates
The default template type is `inline`, where you specify the template directly
in the value of a field. For example, the following snippet configures the subject
of the `email` action using an inline template that references the `color` value
in the metadata defined in the watch context.
[source,js]
----------------------------------------------------------------------
{
"metadata" : {
"color" : "red"
},
...
"actions" : {
"email_notification" : {
"email" : {
"subject" : "{{ctx.metadata.color}} alert"
}
}
}
}
----------------------------------------------------------------------
You can also explicitly indicate that the template is inlined using the formal
object definition of the template:
[source,js]
----------------------------------------------------------------------
"actions" : {
"email_notification" : {
"email" : {
"subject" : {
"inline" : "{{ctx.metadata.color}} alert"
}
}
}
}
----------------------------------------------------------------------
[float]
[[index-templates]]
==== Indexed Templates
If you choose to {ref}/modules-scripting.html#_indexed_scripts[index your templates],
you can reference them by id. For this, you'll need to use the formal object definition of the
template and refer to the template id using the `id` field. For example, the following
snippet references the indexed template with the id `email_notification_subject`.
[source,js]
----------------------------------------------------------------------
{
...
"actions" : {
"email_notification" : {
"email" : {
"subject" : {
"id" : "email_notification_subject",
"params" : {
"color" : "red"
}
}
}
}
}
}
----------------------------------------------------------------------
[float]
[[file-templates]]
==== File Templates
If you store templates in files in the `config/scripts` directory, you can
reference them by name. For this, you'll need to use the formal object definition
of the template and refer to the template file by its name using the `file` field.
For example, the following snippet references the template file
`email_notification_subject.mustache`.
[source,js]
----------------------------------------------------------------------
{
...
"actions" : {
"email_notification" : {
"email" : {
"subject" : {
"file" : "email_notification_subject",
"params" : {
"color" : "red"
}
}
}
}
}
}
----------------------------------------------------------------------
NOTE: The `config/scripts` directory is scanned periodically for changes.
New and changed templates are reloaded and deleted templates are removed
from the preloaded scripts cache. For more information, see
{ref}/modules-scripting.html#_automatic_script_reloading[Automatic Script Reloading]
in the Elasticsearch Reference.

[[watcher]]
= Elasticsearch Watcher
:ref: http://www.elastic.co/guide/en/elasticsearch/reference/current
:shield-ref: http://www.elastic.co/guide/en/shield/current
:logstash-ref: http://www.elastic.co/guide/en/logstash/current
:java-client-ref: http://www.elastic.co/guide/en/elasticsearch/client/java-api/current
:blog-ref: https://www.elastic.co/blog/
:forum: https://discuss.elastic.co/c/watcher
include::introduction.asciidoc[]
include::getting-started.asciidoc[]
include::customizing-watches.asciidoc[]
include::how-watcher-works.asciidoc[]
include::installing-watcher.asciidoc[]
include::administering-watcher.asciidoc[]
include::managing-watches.asciidoc[]
include::example-watches.asciidoc[]
include::reference.asciidoc[]
include::troubleshooting.asciidoc[]
include::release-notes.asciidoc[]

[[installing-watcher]]
== Installing Watcher
The <<getting-started, Getting Started Guide>> steps through a basic Watcher installation. This
section provides some additional information about the installation prerequisites, deployment
options, and the installation process for RPM/DEB package installations.
[float]
[[installation-prerequisites]]
=== Watcher Installation Prerequisites
All you need to use Watcher is:
* Java 7 or later
* Elasticsearch 1.5 or later
* Elasticsearch License plugin
For information about installing the latest Oracle JDK, see
http://www.oracle.com/technetwork/java/javase/downloads/index-jsp-138363.html[Java SE Downloads].
For information about installing Elasticsearch, see {ref}/_installation.html[Installation] in the
Elasticsearch Reference.
If you are using Shield, you've already installed the License plugin. If you haven't, installing
the License plugin is part of the basic installation instructions in the <<getting-started, Getting Started>>
guide.
[float]
[[deploying-existing-cluster]]
=== Deploying Watcher on an Existing Cluster
Deploying Watcher directly to the nodes of an existing cluster is generally the easiest way to get
started with Watcher. Keep in mind, however, that this requires stopping and starting all of the
nodes in your cluster. For larger clusters, we recommend
<<deploying-separate-cluster, deploying Watcher to a separate cluster>>.
When you deploy Watcher on an existing cluster, you use the <<input-search, search input>> to
search the cluster's indexes and load the results into a watch's payload.
To deploy to an existing cluster, you need to install the License and Watcher plugins on every
node in the cluster. For general installation instructions, see the
<<getting-started, Getting Started>> guide. If you are using the Elasticsearch DEB/RPM packages,
see <<package-installation, Installing Watcher on a DEB/RPM Package Installation>> for more
information.
[float]
[[deploying-separate-cluster]]
=== Deploying Watcher as a Separate Cluster
If you have a larger cluster, we recommend running Watcher on a separate monitoring cluster. If
you're using a separate cluster for Marvel, you can install Watcher on the nodes you're using to
store your Marvel data.
When you deploy Watcher on a separate cluster, you use the <<input-http, HTTP input>> to send
search requests to the cluster you are monitoring and load the results into a watch's payload.
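For example, a sketch of an `http` input that runs a search against a monitored cluster (the host
name and index are placeholders for your own environment, and the `body` field holds the search
request as a string):
[source,js]
--------------------------------------------------
"input" : {
  "http" : {
    "request" : {
      "host" : "monitored-cluster.example.com",
      "port" : 9200,
      "path" : "/logs/_search",
      "body" : "{ \"query\" : { \"match\" : { \"message\" : \"error\" } } }"
    }
  }
}
--------------------------------------------------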
To deploy to a separate monitoring cluster, you need to install the License and Watcher plugins
on every node in the monitoring cluster. For general installation instructions, see the
<<getting-started, Getting Started>> guide. If you are using the Elasticsearch DEB/RPM packages,
see <<package-installation, Installing Watcher on a DEB/RPM Package Installation>> for more
information.
[float]
[[package-installation]]
=== Installing Watcher on a DEB/RPM Package Installation
If you use the DEB/RPM packages to install Elasticsearch, the installation process for Watcher
is slightly different. You need to install the License and Watcher plugins from the
`/usr/share/elasticsearch` directory using superuser permissions:
[source,shell]
----------------------------------------------------------
cd /usr/share/elasticsearch
sudo bin/plugin -i elasticsearch/license/latest
sudo bin/plugin -i elasticsearch/watcher/latest
----------------------------------------------------------
[float]
[[offline-installation]]
=== Installing Watcher on Offline Machines
To install Watcher on a machine that doesn't have Internet access:
. Manually download the Watcher binaries from:
https://download.elastic.co/elasticsearch/watcher/watcher-1.0.0.zip[
https://download.elastic.co/elasticsearch/watcher/watcher-1.0.0.zip].
. Transfer the Watcher zip file to the offline machine.
. Run `bin/plugin` with the `-u` option:
+
[source,shell]
----------------------------------------------------------
bin/plugin -i watcher -u file://<path_to_zip_file>
----------------------------------------------------------

[[introduction]]
== Introduction
_Watcher_ is a plugin for Elasticsearch that provides alerting and notification based on changes
in your data. This guide describes how to install, manage, and use Watcher.
[float]
== Alerting and Notification
With simple REST APIs, Elasticsearch is a platform that encourages integration and automation for
a wide range of use-cases. Increasingly, these use-cases require taking action based on changes or
anomalies in the data. For example, you might want to:
* Monitor social media as another way to detect failures in user-facing automated systems like ATMs
or ticketing systems. When the number of tweets and posts in an area exceeds a threshold of
significance, notify a service technician.
* Monitor your infrastructure, tracking disk usage over time. Open a helpdesk ticket when any
servers are likely to run out of free space in the next few days.
* Track network activity to detect malicious activity, and proactively change firewall
configuration to reject the malicious user.
* Monitor Elasticsearch, and send immediate notification to the system administrator if nodes leave
the cluster or query throughput exceeds an expected range.
* Track application response times and if page-load time exceeds SLAs for more than 5 minutes, open
a helpdesk ticket. If SLAs are exceeded for an hour, page the administrator on duty.
All of these use-cases share a few key properties:
* The relevant data or changes in data can be identified with a periodic Elasticsearch query.
* The results of the query can be checked against a condition.
* One or more actions are taken if the condition is true -- an email is sent, a 3rd party system is
notified, or the query results are stored.
[float]
== Watcher Concepts
Watcher provides an API for creating, managing and testing _watches_. A watch describes a single
alert in Watcher, which can contain multiple notification actions.
At a high-level, a typical watch is built from four simple building blocks:
Schedule :: Define the schedule on which to trigger the query and check the condition.
Query :: Specify the query to run as input to the condition. Watcher supports the full
Elasticsearch query language, including aggregations.
Condition :: Define your condition to determine whether to execute the actions. You can use simple
conditions (always true), or use scripting for more sophisticated scenarios.
Actions :: Define one or more actions, such as sending email, pushing data to 3rd party systems
via webhook, or indexing the results of your query.
A full history of all watches is maintained in an Elasticsearch index. This history keeps track of
each time a watch is triggered and records the results from the query, whether the condition was
met, and what actions were taken.
[float]
== Where to Go Next
<<customizing-watches,Customizing Watches>> :: Learn more about how watches are configured and how
you create custom watches.
<<example-watches, Example Watches>> :: See complete example watches for common scenarios.
<<reference, Reference:>> :: Full documentation of the watch constructs and
the Watcher REST and Java APIs.
We designed Watcher to address a wide range of alerting and notification needs. We hope you
like it.
[float]
== Have Comments, Questions, or Feedback?
Head over to our {forum}[Watcher Discussion Forum] to share your experience, questions, and
suggestions.

[[managing-watches]]
== Managing Watches
This section describes how to:
* <<listing-watches, List Configured Watches>>
* <<deleting-watches, Delete Watches>>
For information about configuring watches, see <<customizing-watches, Customizing Watches>>.
For information about managing the watch history, see
<<managing-watch-history, Managing Watch History Indexes>>.
include::managing-watches/listing-watches.asciidoc[]
include::managing-watches/deleting-watches.asciidoc[]

[[deleting-watches]]
=== Deleting Watches
You use the Watcher <<api-rest-delete-watch, `delete`>> API to permanently remove a watch.
For example:
[source,js]
--------------------------------------------------
DELETE _watcher/watch/my_watch
--------------------------------------------------
// AUTOSENSE
A successful response looks like this:
[source,js]
--------------------------------------------------
{
"_id": "my-watch",
"_version": 8,
"found": true
}
--------------------------------------------------

[[listing-watches]]
=== Listing Watches
Watcher stores watches in the `.watches` index. You can search this index to see what watches are
configured.
IMPORTANT: You can only perform read actions on the `.watches` index. You must use the Watcher
APIs to create, update, and delete watches. If you are using Shield, we recommend only granting
users `read` privileges for the `.watches` index.
To get the ids of all configured watches, run the following search query:
[source,js]
--------------------------------------------------
GET .watches/_search
{
"fields" : [], <1>
"query" : {"match_all" : { } }
}
--------------------------------------------------
// AUTOSENSE
<1> If you omit the `fields` option, the search returns the full watch definition and status
of each watch.

[[reference]]
== Reference
include::reference/input.asciidoc[]
include::reference/trigger.asciidoc[]
include::reference/condition.asciidoc[]
include::reference/actions.asciidoc[]
include::reference/transform.asciidoc[]
include::reference/java.asciidoc[]
include::reference/rest.asciidoc[]

[[actions]]
=== Actions
The actions associated with a watch are executed whenever the watch is executed, its condition
is met, and the watch is not throttled. The actions are executed one at a time and each action
executes independently from the others. Any failures encountered while executing an action are
recorded in the action result and in the watch history.
NOTE: If no actions are defined for a watch, no actions are executed. However, a `watch_record`
is still written to the <<watch-history, Watch History>>.
Actions have access to the payload in the execution context. They can use it to support their
execution in any way they need. For example, the payload might serve as a model for a templated
email body.
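For example, a minimal `logging` action (like the one used in the <<getting-started, Getting Started>> guide)
that pulls the hit count from the payload:
[source,json]
--------------------------------------------------
"actions" : {
  "log_hits" : {
    "logging" : {
      "text" : "Found {{ctx.payload.hits.total}} hits"
    }
  }
}
--------------------------------------------------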
[float]
[[actions-ack-throttle]]
=== Acknowledgement and Throttling
During the watch execution, once the condition is met, a decision is made per configured action
as to whether it should be throttled. The main purpose of action throttling is to prevent too
many executions of the same action for the same watch.
For example, suppose you have a watch that detects errors in an application's log entries. The
watch is triggered every five minutes and searches for errors during the last hour. In this case,
if there are errors, there is a period of time where the watch is checked and its actions are
executed multiple times based on the same errors. As a result, the system administrator receives
multiple notifications about the same issue, which can be annoying.
To address this issue, Watcher supports time-based throttling. You can define a throttling
period as part of the action configuration to limit how often the action is executed. When you
set a throttling period, Watcher prevents repeated execution of the action if it has already
executed within the throttling period time frame (`now - throttling period`).
The following snippet shows a watch for the scenario described above - associating a throttle
period with the `email_administrator` action:
[source,json]
.Watch Definition Example
--------------------------------------------------
PUT _watcher/watch/log_event_watch
{
"metadata" : {
"color" : "red"
},
"trigger" : {
"schedule" : {
"interval" : "5m"
}
},
"input" : {
"search" : {
"request" : {
"search_type" : "count",
"indices" : "log-events",
"body" : {
"query" : { "match" : { "status" : "error" } }
}
}
}
},
"condition" : {
"script" : "return ctx.payload.hits.total > 5"
},
"actions" : {
"email_administrator" : {
"throttle_period": "15m", <1>
"email" : { <2>
"to" : "sys.admino@host.domain",
"subject" : "Encountered {{ctx.payload.hits.total}} errors",
"body" : "Too many error in the system, see attached data",
"attach_data" : true,
"priority" : "high"
}
}
}
}
--------------------------------------------------
// AUTOSENSE
<1> There will be at least 15 minutes between subsequent `email_administrator` action executions.
<2> See <<actions-email, Email Action>> for more information.
You can also define a throttle period at the watch level. The watch-level throttle period serves
as the default throttle period for all of the actions defined in the watch:
[source,json]
.Watch Definition Example
--------------------------------------------------
PUT _watcher/watch/log_event_watch
{
"trigger" : {
...
},
"input" : {
...
},
"condition" : {
...
},
"throttle_period" : "15m", <1>
"actions" : {
"email_administrator" : {
"email" : {
"to" : "sys.admino@host.domain",
"subject" : "Encountered {{ctx.payload.hits.total}} errors",
"body" : "Too many error in the system, see attached data",
"attach_data" : true,
"priority" : "high"
}
},
"notify_pager" : {
"webhook" : {
"method" : "POST",
"host" : "pager.service.domain",
"port" : 1234,
"path" : "/{{watch_id}}",
"body" : "Encountered {{ctx.payload.hits.total}} errors"
}
}
}
}
--------------------------------------------------
<1> There will be at least 15 minutes between subsequent action executions (applies to both
    `email_administrator` and `notify_pager` actions).
If you do not define a throttle period at the action or watch level, the global default
throttle period is applied. Initially, this is set to 5 seconds. To change the global default,
configure the `watcher.execution.default_throttle_period` setting in `elasticsearch.yml`:
[source,yaml]
--------------------------------------------------
watcher.execution.default_throttle_period: 15m
--------------------------------------------------
Watcher also supports acknowledgement-based throttling. You can acknowledge a watch using the
<<api-rest-ack-watch, Ack Watch API>> to prevent the watch actions from being executed again
while the watch condition remains `true`. This essentially tells Watcher "I received the
notification and I'm handling it, please do not notify me about this error again".
An acknowledged watch action remains in the `acked` state until the watch's condition evaluates
to `false`. When that happens, the action's state changes to `awaits_successful_execution`.
To acknowledge an action, you use the `ack` API:
[source,js]
----------------------------------------------------------------------
PUT _watcher/watch/<id>/_ack?actions=<action_ids>
----------------------------------------------------------------------
// AUTOSENSE
Where `<id>` is the id of the watch and `<action_ids>` is a comma-separated list of the action
ids you want to acknowledge. To acknowledge all actions, omit the `actions` parameter.
The following diagram illustrates the throttling decisions made for each action of a watch
during its execution:
image::images/action-throttling.jpg[align="center"]
Watcher supports four action types: <<actions-email, Email>>,
<<actions-webhook, Webhook>>, <<actions-index, Index>> and
<<actions-logging, Logging>>.
include::actions/email.asciidoc[]
include::actions/webhook.asciidoc[]
include::actions/index.asciidoc[]
include::actions/logging.asciidoc[]

[[actions-email]]
==== Email Action
A watch <<actions, action>> that sends email notifications. To use the `email` action, you must configure at least one email account. For instructions, see <<email-services, Configuring Email Accounts>>.
See <<email-action-attributes>> for the supported attributes. Any attributes that are missing from the email action definition are looked up in the configuration of the account from which the email is being sent. The required attributes must either be set in the email action definition or the account's `email_defaults`.
[[configuring-email-actions]]
===== Configuring Email Actions
You configure email actions in a watch's `actions` array. Action-specific attributes are
specified using the `email` keyword.
The following snippet shows a basic email action definition:
[source,json]
--------------------------------------------------
"actions" : {
"email_admin" : { <1>
"transform" : { ... }, <2>
"email": {
"to": "'John Doe <john.doe@example.com>'", <3>
"subject": "{{ctx.watch_id}} executed", <4>
"body": "{{ctx.watch_id}} executed with {{ctx.payload.hits.total}} hits" <5>
}
}
}
--------------------------------------------------
<1> The id of the action.
<2> An optional <<transform, transform>> to transform the payload before processing the email.
<3> One or more addresses to send the email to. If not specified, the `to` address is read from the
account's `email_defaults`.
<4> The subject of the email (static text or a Mustache <<templates, template>>).
<5> The body of the email (static text or a Mustache <<templates, template>>).
[[email-action-attributes]]
.Email Action Attributes
[options="header"]
|======
| Name |Required | Default | Description
| `account` | no | the default account | The <<email-account, account>> to use to send the email.
| `from` | no | - | The email <<email-address,address>> from which the email will be sent. The `from` field can contain Mustache <<templates, templates>> as long as it resolves to a valid email address.
| `to` | yes | - | The email <<email-address,addresses>> of the `to` recipients. The `to` field can contain Mustache <<templates, templates>> as long as it resolves to a valid email address.
| `cc` | no | - | The email <<email-address,addresses>> of the `cc` recipients. The `cc` field can contain Mustache <<templates, templates>> as long as it resolves to a valid email address.
| `bcc` | no | - | The email <<email-address,addresses>> of the `bcc` recipients. The `bcc` field can contain Mustache <<templates, templates>> as long as it resolves to a valid email address.
| `reply_to` | no | - | The email <<email-address,addresses>> that will be set on the message's `Reply-To` header. The `reply_to` field can contain Mustache <<templates, templates>> as long as it resolves to a valid email address.
| `subject` | no | - | The subject of the email. The subject can be static text or contain Mustache <<templates, templates>>.
| `body` | no | - | The body of the email. When this field holds a string, it is used as the text body of the email. Set it as an object to specify the text body, the html body, or both (using the fields below).
| `body.text` | yes* | - | The plain text body of the email. The body can be static text or contain Mustache <<templates, templates>>.
| `body.html` | yes* | - | The html body of the email. The body can be static text or contain Mustache <<templates, templates>>. This body will be sanitized to remove dangerous content such as scripts. This behavior can be disabled by setting `watcher.actions.email.sanitize_html: false` in `elasticsearch.yml`.
| `priority` | no | - | The priority of this email. Valid values are: `lowest`, `low`, `normal`, `high` and `highest`. The priority can contain a Mustache <<templates, template>> as long as it resolves to one of the valid values.
| `attach_data` | no | false | Indicates whether the watch execution data should be attached to the email. You can specify a Boolean value or an object. If `attach_data` is set to `true`, the data is attached as a YAML file called `data.yml`. If it's set to `false`, no data is attached. To control the format of the attached data, specify an object that contains a `format` field.
| `attach_data.format` | no | yaml | When `attach_data` is specified as an object, this field controls the format of the attached data. The supported formats are `json` and `yaml`.
|======
* When setting the `body` object, at least one of its `text` or `html` fields must be defined.
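For example, the following sketch (reusing the `email_admin` action from the snippet above) attaches the execution data as a JSON file instead of the default YAML:
[source,json]
--------------------------------------------------
"actions" : {
  "email_admin" : {
    "email": {
      "to": "'John Doe <john.doe@example.com>'",
      "subject": "{{ctx.watch_id}} executed",
      "body": "{{ctx.watch_id}} executed with {{ctx.payload.hits.total}} hits",
      "attach_data": {
        "format": "json"
      }
    }
  }
}
--------------------------------------------------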
[[email-address]]
Email Address::
An email address can contain two possible parts--the address itself and an optional personal name as described in http://www.ietf.org/rfc/rfc822.txt[RFC 822]. The address can be represented either as a string of the form `user@host.domain` or `Personal Name <user@host.domain>`. You can also specify an email address as an object that contains `name` and `address` fields.
[[address-list]]
Address List::
A list of addresses can either be specified as a comma-delimited string or as an array:
+
`'Personal Name <user1@host.domain>, user2@host.domain'` or
`[ 'Personal Name <user1@host.domain>', 'user2@host.domain' ]`
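For example, a sketch of an email action that specifies the `to` recipient as an object and the `cc` recipients as an array (all addresses are illustrative):
[source,json]
--------------------------------------------------
"email" : {
  "to" : {
    "name" : "John Doe",
    "address" : "john.doe@example.com"
  },
  "cc" : [ "'Jane Doe <jane.doe@example.com>'", "ops@example.com" ],
  "subject" : "{{ctx.watch_id}} executed"
}
--------------------------------------------------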

@ -0,0 +1,78 @@
[[actions-index]]
==== Index Action
A watch <<actions, action>> that enables you to index data into Elasticsearch.
See <<index-action-attributes>> for the supported attributes.
===== Configuring Index Actions
The following snippet shows a simple `index` action definition:
[source,json]
--------------------------------------------------
"actions" : {
"index_payload" : { <1>
"transform": { ... },<2>
"index" : {
"index" : "my-index", <3>
"doc_type" : "my-type" <4>
}
}
}
--------------------------------------------------
<1> The id of the action
<2> An optional <<transform, transform>> to transform the payload and prepare the data that should be indexed
<3> The Elasticsearch index to store the data in
<4> The document type to store the data as
[[index-action-attributes]]
.Index Action Attributes
[options="header"]
|======
|Name                     |Required | Default      | Description
| `index`                 | yes     | -            | The Elasticsearch index to index into. <<dynamic-index-names, Dynamic index names>> are supported.
| `doc_type`              | yes     | -            | The type of the document the data will be indexed as.
| `execution_time_field`  | no      | _timestamp   | The field that will store/index the watch execution time. When not set or when set to `_timestamp`, the execution time will serve as the document's {ref}/mapping-timestamp-field.html[timestamp].
| `timeout`               | no      | 60s          | The timeout for waiting for the index API call to return. If no response is returned within this time, the index action times out and fails. This setting overrides the default internal index/bulk operations <<default-internal-ops-timeouts, timeouts>>.
| `dynamic_name_timezone` | no      | -            | The time zone to use for resolving the index name based on <<dynamic-index-names, Dynamic Index Names>>. The default time zone can also be <<dynamic-index-name-timezone, configured>> globally.
|======
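The optional attributes can be combined in the same definition. For example, the following sketch stores the execution time in a dedicated field and shortens the index call timeout (the field name and timeout value are illustrative):
[source,json]
--------------------------------------------------
"actions" : {
  "index_payload" : {
    "index" : {
      "index" : "my-index",
      "doc_type" : "my-type",
      "execution_time_field" : "executed_at",
      "timeout" : "10s"
    }
  }
}
--------------------------------------------------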
[[anatomy-actions-index-multi-doc-support]]
===== Multi-Document Support
Like all other actions, the index action can use a <<transform, transform>> to replace
the current execution context payload with another one, thereby changing the document
that ends up being indexed.
The index action works well with transforms because it supports the special `_doc`
payload field.
When resolving the document to be indexed, the index action first looks for a
`_doc` field in the payload. If it is not found, the payload is indexed as a single
document.
If a `_doc` field exists and holds an object, that object is extracted and indexed
as a single document. If the field holds an array of objects, each object is treated as
a document and the index action indexes all of them in a single bulk request.
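For example, a transform could produce a payload like the following sketch, in which case each object in the `_doc` array is indexed as a separate document (the field names are illustrative):
[source,json]
--------------------------------------------------
{
  "_doc" : [
    { "user" : "alice", "errors" : 3 },
    { "user" : "bob", "errors" : 7 }
  ]
}
--------------------------------------------------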

@ -0,0 +1,43 @@
[[actions-logging]]
==== Logging Action
A watch <<actions, action>> that simply logs text to the standard Elasticsearch logs.
See <<logging-action-attributes>> for the supported attributes.
This action is primarily used during development and debugging.
[[configuring-logging-actions]]
===== Configuring Logging Actions
You configure logging actions in a watch's `actions` array. Action-specific attributes are
specified using the `logging` keyword.
The following snippet shows a simple logging action definition:
[source,json]
--------------------------------------------------
"actions" : {
"log" : { <1>
"transform" : { ... }, <2>
"logging" : {
"text" : "executed at {{ctx.execution_time}}" <3>
}
}
}
--------------------------------------------------
<1> The id of the action.
<2> An optional <<transform, transform>> to transform the payload before executing the `logging` action.
<3> The text to be logged.
[[logging-action-attributes]]
.Logging Action Attributes
[options="header"]
|======
| Name |Required | Default | Description
| `text` | yes | - | The text that should be logged. Can be static text or include Mustache <<templates, templates>>.
| `category` | no | watcher.actions.logging | The category under which the text will be logged.
| `level` | no | info | The logging level. Valid values are: `error`, `warn`, `info`, `debug` and `trace`.
|======
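For example, the following sketch logs the hit count at the `debug` level under a custom category (the category name is illustrative):
[source,json]
--------------------------------------------------
"actions" : {
  "log_errors" : {
    "logging" : {
      "category" : "watcher.actions.logging.errors",
      "level" : "debug",
      "text" : "{{ctx.watch_id}} found {{ctx.payload.hits.total}} errors"
    }
  }
}
--------------------------------------------------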

@ -0,0 +1,143 @@
[[actions-webhook]]
==== Webhook Action
A watch <<actions, action>> that sends an HTTP request to a web server listening on a specific host and port.
The webhook action supports both HTTP and HTTPS connections. See <<webhook-action-attributes>> for
the supported attributes.
[[configuring-webook-actions]]
===== Configuring Webhook Actions
You configure webhook actions in a watch's `actions` array. Action-specific attributes are
specified using the `webhook` keyword.
The following snippet shows a simple webhook action definition:
[source,json]
--------------------------------------------------
"actions" : {
"my_webhook" : { <1>
"transform" : { ... }, <2>
"throttle_period" : "5m", <3>
"webhook" : {
"method" : "POST", <4>
"host" : "mylisteningserver", <5>
"port" : 9200, <6>
"path": ":/{{ctx.watch_id}", <7>
"body" : "{{ctx.watch_id}}:{{ctx.payload.hits.total}}" <8>
}
}
}
--------------------------------------------------
<1> The id of the action
<2> An optional <<transform, transform>> to transform the payload before executing the `webhook` action
<3> An optional <<actions-ack-throttle, throttle period>> for the action (5 minutes in this example)
<4> The HTTP method to use when connecting to the host
<5> The host to connect to
<6> The port to connect to
<7> The path (URI) to use in the HTTP request
<8> The body to send with the request
You can use basic authentication when sending a request to a secured webservice. For example:
[source,json]
--------------------------------------------------
"actions" : {
"my_webhook" : {
"webhook" : {
"auth" : {
"basic" : {
"username" : "<username>", <1>
"password" : "<password>" <2>
}
      },
"method" : "POST",
"host" : "mylisteningserver",
"port" : 9200,
"path": ":/{{ctx.watch_id}",
"body" : "{{ctx.watch_id}}:{{ctx.payload.hits.total}}"
}
}
}
--------------------------------------------------
<1> The username
<2> The corresponding password
NOTE: By default, both the username and the password are stored in the `.watches` index in plain text. When
Shield is installed, Watcher can be <<shield-watch-data-encryption, configured>> to encrypt the password before
storing it.
[[webhook-query-parameters]]
===== Query Parameters
You can specify query parameters to send with the request with the `params` field. This field simply
holds an object where the keys serve as the parameter names and the values serve as the parameter values:
[source,json]
--------------------------------------------------
"actions" : {
"my_webhook" : {
"webhook" : {
"method" : "POST",
"host" : "mylisteningserver",
"port" : 9200,
"path": ":/alert",
"params" : {
"watch_id" : "{{ctx.watch_id}}" <1>
}
}
}
}
--------------------------------------------------
<1> The parameter values can contain templated strings.
[[webhook-custom-request-headers]]
===== Custom Request Headers
You can specify request headers to send with the request with the `headers` field. This field simply
holds an object where the keys serve as the header names and the values serve as the header values:
[source,json]
--------------------------------------------------
"actions" : {
"my_webhook" : {
"webhook" : {
"method" : "POST",
"host" : "mylisteningserver",
"port" : 9200,
"path": ":/alert/{{ctx.watch_id}}",
"headers" : {
"Content-Type" : "application/yaml" <1>
},
"body" : "count: {{ctx.payload.hits.total}}"
}
}
}
--------------------------------------------------
<1> The header values can contain templated strings.
[[webhook-action-attributes]]
.Webhook Action Attributes
[options="header"]
|======
| Name |Required | Default | Description
| `request.scheme` | no | http | The connection scheme. Valid values are: `http` or `https`.
| `request.host` | yes | - | The host to connect to.
| `request.port` | yes | - | The port the HTTP service is listening on.
| `request.path` | no | - | The URL path. The path can be static text or include Mustache <<templates, templates>>.
| `request.method` | no | get | The HTTP method. Valid values are: `head`, `get`, `post`, `put` and `delete`.
| `request.headers` | no | - | The HTTP request headers. The header values can be static text or include Mustache <<templates, templates>>.
| `request.params` | no | - | The URL query string parameters. The parameter values can be static text or include Mustache <<templates, templates>>.
| `request.auth` | no | - | Authentication related HTTP headers. Currently, only basic authentication is supported.
| `request.body` | no | - | The HTTP request body. The body can be static text or include Mustache <<templates, templates>>. When not specified, an empty body is sent.
| `request.connection_timeout` | no | 10s | The timeout for setting up the http connection. If the connection could not be set up within this time, the action will timeout and fail. It is
also possible to <<configuring-default-http-timeouts, configure>> the default connection timeout for all http connection timeouts.
| `request.read_timeout` | no | 10s | The timeout for reading data from http connection. If no response was received within this time, the action will timeout and fail. It is
also possible to <<configuring-default-http-timeouts, configure>> the default read timeout for all http connection timeouts.
|======

@ -0,0 +1,20 @@
[[condition]]
=== Condition
When a watch is triggered, its condition determines whether or not to execute its actions.
Watcher supports four condition types: <<condition-always, `always`>>, <<condition-never, `never`>>,
<<condition-script, `script`>> and <<condition-compare, `compare`>>.
NOTE: If you omit the condition definition from a watch, the condition defaults to `always`.
When a condition is evaluated, it has full access to the watch execution context, including the watch payload (`ctx.payload.*`).
The <<condition-script, script>> and <<condition-compare, compare>> conditions can use the data in
the payload to determine whether or not the necessary conditions have been met.
include::condition/always.asciidoc[]
include::condition/never.asciidoc[]
include::condition/script.asciidoc[]
include::condition/compare.asciidoc[]

@ -0,0 +1,32 @@
[[condition-always]]
==== Always Condition
A watch <<condition, condition>> that always evaluates to `true`. When you use the `always`
condition, the watch's actions are always executed when the watch is triggered, unless the action
is <<actions-ack-throttle, throttled>>.
NOTE: If you omit the condition definition from a watch, the condition defaults to `always`.
You can use the `always` condition to configure watches that should run on a set schedule, such as:
[source,text]
--------------------------------------------------
"At noon every Friday, send a status report email to sys.admin@example.com"
--------------------------------------------------
To configure this watch, you define an input that loads the status data, set a schedule that
triggers every Friday, set the condition to `always`, and configure an email action to send the
status data.
===== Using the Always Condition
There are no attributes to specify for the `always` condition. To use the `always` condition,
you simply specify the condition type and associate it with an empty object:
[source,json]
--------------------------------------------------
"condition" : {
"always" : {}
}
--------------------------------------------------

@ -0,0 +1,97 @@
[[condition-compare]]
==== Compare Condition
A watch <<condition, condition>> that simply compares a value in the <<watch-execution-context, Watch Execution Context Model>>
to a given value. The value in the model is identified by a path within that model.
While limited in its functionality, the advantage of this condition over the <<condition-script, Script Condition>>
is that you do not have to enable dynamic scripting to use compare conditions.
===== Using a Compare Condition
The following snippet configures a `compare` condition that returns `true` if the number of
the total hits in the search result (typically loaded by the <<input-search, Search Input>>) is
greater than or equal to 5:
[source,json]
--------------------------------------------------
{
...
"condition" : {
"compare" : {
"ctx.payload.hits.total" : { <1>
"gte" : 5 <2>
}
    }
  },
  ...
}
--------------------------------------------------
<1> The field name is the path to the execution context model
<2> The field name (here `gte`) is the comparison operator, and the value is the value to compare to.
The path is a "dot-notation" expression that can reference the following variables in the watch context:
[options="header"]
|======
| Name | Description
| `ctx.watch_id` | The id of the watch that is currently executing.
| `ctx.execution_time` | The time execution of this watch started.
| `ctx.trigger.triggered_time` | The time this watch was triggered.
| `ctx.trigger.scheduled_time` | The time this watch was supposed to be triggered.
| `ctx.metadata.*` | Any metadata associated with the watch.
| `ctx.payload.*` | The payload data loaded by the watch's input.
|======
TIP: You can reference entries in arrays using their zero-based array indices. For example, to access the third
element of the `ctx.payload.hits.hits` array, use `ctx.payload.hits.hits.2`.
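For example, a sketch of a condition that checks a field of the first hit returned by a <<input-search, search input>> (the `status` field is illustrative and assumes the hit's `_source` contains it):
[source,json]
--------------------------------------------------
"condition" : {
  "compare" : {
    "ctx.payload.hits.hits.0._source.status" : {
      "eq" : "error"
    }
  }
}
--------------------------------------------------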
The comparison operator can be any one of the following:
[options="header"]
|======
| Name | Description
| `eq` | Returns `true` when the resolved value equals the given one (applies to numeric, string, list, object and null values)
| `not_eq` | Returns `true` when the resolved value does not equal the given one (applies to numeric, string, list, object and null values)
| `gt` | Returns `true` when the resolved value is greater than the given one (applies to numeric and string values)
| `gte` | Returns `true` when the resolved value is greater/equal than/to the given one (applies to numeric and string values)
| `lt` | Returns `true` when the resolved value is less than the given one (applies to numeric and string values)
| `lte` | Returns `true` when the resolved value is less/equal than/to the given one (applies to numeric and string values)
|======
When dealing with dates/times, the specified value can hold a date math expression in the form of `<{expression}>`. For example, one
can compare the watch execution time as follows:
[source,json]
--------------------------------------------------
{
...
"condition" : {
"compare" : {
"ctx.execution_time" : {
"gte" : "<{now-5m}>"
}
    }
  },
  ...
}
--------------------------------------------------
It is also possible to compare one value in the context model to another value in the same model. This can be done by
specifying the compared value as a path in the form of `{{path}}`. The following snippet shows a condition that compares
two values in the payload:
[source,json]
--------------------------------------------------
{
...
"condition" : {
"compare" : {
"ctx.payload.aggregations.status.buckets.error.doc_count" : {
"not_eq" : "{{ctx.payload.aggregations.handled.buckets.true.doc_count}}"
}
    }
  },
  ...
}
--------------------------------------------------

@ -0,0 +1,24 @@
[[condition-never]]
==== Never Condition
A watch <<condition, condition>> that always evaluates to `false`. If you use this condition,
the watch's actions are never executed. The watch's input is executed, a record is added to the watch history,
and processing stops. This condition is generally only used for testing.
===== Using the Never Condition
There are no attributes to specify for the `never` condition. To use the `never` condition,
you simply specify the condition type and associate it with an empty object:
[source,json]
--------------------------------------------------
PUT _watcher/watch/my-watch
{
...
"condition" : {
"never" : {}
}
...
}
--------------------------------------------------

@ -0,0 +1,177 @@
[[condition-script]]
==== Script Condition
A watch <<condition, condition>> that evaluates a script. The default scripting language is
`groovy`. You can use any of the scripting languages supported by Elasticsearch as long as the
language supports evaluating expressions to Boolean values. Note that the `mustache` and
`expression` languages are too limited to be used by this condition. For more information,
see {ref}/modules-scripting.html[Scripting] in the Elasticsearch Reference.
IMPORTANT: You must explicitly {ref}/modules-scripting.html#enable-dynamic-scripting[enable
dynamic scripts] in `elasticsearch.yml` to use `inline` or `indexed` scripts.
===== Using a Script Condition
The following snippet configures an inline `script` condition that always returns `true`:
[source,json]
--------------------------------------------------
"condition" : {
"script" : "return true"
}
--------------------------------------------------
This example defines a script as a simple string. This format is actually a shortcut for defining an
<<condition-script-inline, inline>> groovy script. The formal definition of a script is an object
that specifies the script type and optional language and parameter values. If the `lang` attribute
is omitted, the language defaults to groovy. Elasticsearch supports three script types:
<<condition-script-inline, Inline>>, <<condition-script-file, File>>, and
<<condition-script-indexed, Indexed>>.
For example, the following snippet shows a formal definition of an `inline` script that explicitly
specifies the language and defines a single script parameter, `result`.
[source,json]
--------------------------------------------------
"condition" : {
"script" : {
"inline" : "return result",
"lang" : "groovy",
"params" : {
"result" : true
}
}
}
--------------------------------------------------
[[condition-script-inline]]
===== Inline Scripts
Inline scripts are scripts that are defined in the condition itself. The following snippet shows the
formal configuration of a simple groovy script that always returns `true`.
[source,json]
--------------------------------------------------
"condition" : {
"script" : {
"inline" : "return true"
}
}
--------------------------------------------------
[[condition-script-file]]
===== File Scripts
File scripts are scripts that are defined in files stored in the `config/scripts` directory. The
following snippet shows how to refer to the `my_script.groovy` file:
[source,json]
--------------------------------------------------
"condition" : {
"script" : {
"file" : "my_script"
}
}
--------------------------------------------------
As with <<condition-script-inline, Inline>> scripts, you can also specify the script language and
parameters:
[source,json]
--------------------------------------------------
"condition" : {
"script" : {
"file" : "my_script",
"lang" : "javascript",
"params" : {
"result" : true
}
}
}
--------------------------------------------------
[[condition-script-indexed]]
===== Indexed Scripts
Indexed scripts refer to scripts that were {ref}/modules-scripting.html#_indexed_scripts[indexed]
in Elasticsearch. The following snippet shows how to refer to a script by its `id`:
[source,json]
--------------------------------------------------
"condition" : {
"script" : {
"id" : "my_script"
}
}
--------------------------------------------------
As with <<condition-script-file, File>> and <<condition-script-inline, Inline>> scripts, you can
also specify the script language and parameters:
[source,json]
--------------------------------------------------
"condition" : {
"script" : {
"id" : "my_script",
"lang" : "javascript",
"params" : { "color" : "red" }
}
}
--------------------------------------------------
[[accessing-watch-payload]]
===== Accessing the Watch Payload
A script can access the current watch execution context, including the payload data, as well as
any parameters passed in through the condition definition.
For example, the following snippet defines a watch that uses a <<input-search, `search` input>>
and uses a `script` condition to check if the number of hits is above a specified threshold:
[source,json]
--------------------------------------------------
{
"input" : {
"search" : {
"search_type" : "count",
"indices" : "log-events",
"body" : {
"query" : { "match" : { "status" : "error" } }
}
}
},
"condition" : {
"script" : {
"script" : "return ctx.payload.hits.total > threshold",
"params" : {
"threshold" : 5
}
}
}
...
}
--------------------------------------------------
When you're using a scripted condition to evaluate an Elasticsearch response, keep in mind that
the fields in the response are no longer in their native data types. For example, the
`@timestamp` in the response is a string, rather than a `DateTime`. To compare the response
`@timestamp` against the `ctx.execution_time`, you need to parse the `@timestamp` string into
a `DateTime`. For example:
[source,groovy]
--------------------------------------------------
org.elasticsearch.common.joda.time.DateTime.parse(@timestamp)
--------------------------------------------------
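Putting this together, the following sketch shows a condition that checks whether the first hit was indexed within the last five minutes. It assumes the first hit's `_source` contains an `@timestamp` field and that the default `groovy` language is used:
[source,json]
--------------------------------------------------
"condition" : {
  "script" : {
    "inline" : "def ts = org.elasticsearch.common.joda.time.DateTime.parse(ctx.payload.hits.hits[0]._source['@timestamp']); return ts.isAfter(ctx.execution_time.minusMinutes(5))"
  }
}
--------------------------------------------------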
You can reference the following variables in the watch context:
[options="header"]
|======
| Name | Description
| `ctx.watch_id` | The id of the watch that is currently executing.
| `ctx.execution_time` | The time execution of this watch started.
| `ctx.trigger.triggered_time` | The time this watch was triggered.
| `ctx.trigger.scheduled_time` | The time this watch was supposed to be triggered.
| `ctx.metadata.*` | Any metadata associated with the watch.
| `ctx.payload.*` | The payload data loaded by the watch's input.
|======

@ -0,0 +1,15 @@
[[input]]
=== Input
A watch _input_ loads data into a watch's execution context as the initial payload.
Watcher supports three input types: <<input-simple, `simple`>> , <<input-search, `search`>>,
and <<input-http, `http`>>
NOTE: If you don't define an input for a watch, an empty payload is loaded into the
execution context.
include::input/simple.asciidoc[]
include::input/search.asciidoc[]
include::input/http.asciidoc[]

@ -0,0 +1,164 @@
[[input-http]]
==== HTTP Input
An <<input, input>> that enables you to query an HTTP endpoint and load the response into a watch's execution context as the initial payload. See <<http-input-attributes>> for the supported attributes.
For example, you can use the `http` input to:
* Query an external Elasticsearch cluster. Any query that can be defined in the <<input-search,`search`>> input can also be defined
  in the `http` input. This is particularly interesting because it lets you query clusters whose versions are incompatible with the
  cluster _Watcher_ is running on. With the `search` input, such clusters would otherwise not be accessible. The `http` input also
  makes it straightforward to run a dedicated _Watcher_ cluster that queries other clusters.
* Query Elasticsearch APIs other than the search API. For example, the {ref}/cluster-nodes-stats.html[Nodes Stats],
{ref}/cluster-health.html[Cluster Health] or {ref}/cluster-state.html[Cluster State] APIs.
* Query an external webservice. Any service that exposes an HTTP endpoint can be queried by the `http` input. This can be very useful when you need to bridge between an Elasticsearch cluster and other systems.
Conditions, transforms, and actions can access the JSON response through the watch execution context. For example, if
the response contains a `message` object, you could use `ctx.payload.message` to get the message from the payload.
NOTE: If the body of the response from the HTTP endpoint is in JSON or YAML format, it is parsed and used as the initial payload. Any
other response body is stored in the `_value` field of the payload.
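For example, the following sketch loads a plain-text response into the payload's `_value` field by setting `response_content_type` (described in the attributes table below); the host and path are illustrative:
[source,json]
--------------------------------------------------
"input" : {
  "http" : {
    "request" : {
      "host" : "host.domain",
      "port" : 80,
      "path" : "/health"
    },
    "response_content_type" : "text"
  }
}
--------------------------------------------------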
[[http-input-attributes]]
.HTTP Input Attributes
[options="header"]
|======
| Name |Required | Default | Description
| `request.scheme` | no | http | Url scheme. Valid values are: `http` or `https`.
| `request.host` | yes | - | The host to connect to.
| `request.port` | yes | - | The port the http service is listening on.
| `request.path` | no | - | The URL path. The path can be static text or contain `mustache` <<templates, templates>>.
| `request.method` | no | get | The HTTP method. Supported values are: `head`, `get`, `post`, `put` and `delete`.
| `request.headers` | no | - | The HTTP request headers. The header values can be static text or include `mustache` <<templates, templates>>.
| `request.params` | no | - | The URL query string parameters. The parameter values can be static text or contain `mustache` <<templates, templates>>.
| `request.auth` | no | - | Authentication related HTTP headers. Currently, only basic authentication is supported.
| `request.connection_timeout` | no | 10s | The timeout for setting up the http connection. If the connection could not be set up within this time, the input will timeout and fail. It is
also possible to <<configuring-default-http-timeouts, configure>> the default connection timeout for all http connection timeouts.
| `request.read_timeout` | no | 10s | The timeout for reading data from http connection. If no response was received within this time, the input will timeout and fail. It is
also possible to <<configuring-default-http-timeouts, configure>> the default read timeout for all http connection timeouts.
| `request.body` | no | - | The HTTP request body. The body can be static text or include `mustache` <<templates, templates>>.
| `extract` | no | - | An array of JSON keys to extract from the input response and use as the payload. When an input generates a large response, this can be used to filter out the relevant piece of the response to use as the payload.
| `response_content_type` | no | json | The expected content type of the response body. Supported values are `json`, `yaml` and `text`. If the format is `text`, the `extract` attribute cannot be used. Note that any content type set by the HTTP headers in the response overrides this setting. If this is set to `text`, the body of the response is stored in the `_value` field of the payload.
|======
You can reference the following variables in the execution context when specifying the `path`, `params`, `headers` and `body` values:
[options="header"]
|======
| Name | Description
| `ctx.watch_id` | The id of the watch that is currently executing.
| `ctx.execution_time` | The time execution of this watch started.
| `ctx.trigger.triggered_time` | The time this watch was triggered.
| `ctx.trigger.scheduled_time` | The time this watch was supposed to be triggered.
| `ctx.metadata.*` | Any metadata associated with the watch.
|======
===== Querying External Elasticsearch Clusters
The following snippet shows a basic `http` input that searches for all documents in the `idx` index in
an external cluster:
[source,json]
--------------------------------------------------
"input" : {
"http" : {
"request" : {
"host" : "example.com",
"port" : 9200,
"path" : "/idx/_search"
}
}
}
--------------------------------------------------
You can use the full Elasticsearch {ref}/query-dsl.html[Query DSL] to perform more sophisticated searches. For example, the following snippet retrieves all documents that contain `event` in the `category` field.
[source,json]
--------------------------------------------------
"input" : {
"http" : {
"request" : {
"host" : "host.domain",
"port" : 9200,
"path" : "/idx/_search",
"body" : "\"query\" : { \"match\" : { \"category\" : \"event\"}"
}
}
}
--------------------------------------------------
===== Using Templates
The `http` input supports templating. You can use <<templates, templates>> when specifying
the `path`, `body`, header values, and parameter values.
For example, the following snippet uses templates to specify what
index to query and restrict the results to documents added
within the last five minutes.
[source,json]
--------------------------------------------------
"input" : {
"http" : {
"request" : {
"host" : "host.domain",
"port" : 9200,
"path" : "/{{ctx.watch_id}}/_search",
"body" : "\"query\" : {\"range\": {\"@timestamp\" : {\"from\": \"{{ctx.trigger.triggered_time}}||-5m\",\"to\": \"{{ctx.trigger.triggered_time}}\"}}}"
}
}
}
--------------------------------------------------
===== Calling Elasticsearch APIs
You can use the `http` input to load the data returned by any Elasticsearch API. For example, the following snippet calls the
{ref}/cluster-stats.html[Cluster Stats] API and passes in the `human` query string argument.
[source,json]
.Http Input
--------------------------------------------------
"input" : {
"http" : {
"request" : {
"host" : "host.domain",
"port" : "9200",
"path" : "/_cluster/stats",
"params" : {
"human" : "true" <1>
}
}
}
}
--------------------------------------------------
<1> Enabling this attribute returns the `bytes` values in the response in human-readable format.
===== Calling External Webservices
You can use the `http` input to get data from any external web service. The `http` input
supports basic authentication. For example, the following snippet calls `myservice` and uses basic authentication:
[[input-http-auth-basic-example]]
[source,json]
.Http Input
--------------------------------------------------
"input" : {
"http" : {
"request" : {
"host" : "host.domain",
"port" : "9200",
"path" : "/myservice",
"auth" : {
"basic" : {
"username" : "user",
"password" : "pass"
}
}
}
}
}
--------------------------------------------------

@ -0,0 +1,139 @@
[[input-search]]
==== Search Input
An <<input, input>> that enables you to search the Elasticsearch cluster that Watcher is running on and load the
response into a watch's execution context as the initial payload. See <<search-input-attributes>> for the supported attributes.
Conditions, transforms, and actions can access the search results through the watch execution context. For example:
* To load all of the search hits into an email body, use `ctx.payload.hits`.
* To reference the total number of hits, use `ctx.payload.hits.total`.
* To access a particular hit, use its zero-based array index. For example, to
get the third hit, use `ctx.payload.hits.hits.2`.
* To get a field value from a particular hit, use `ctx.payload.hits.hits.<index>.fields.<fieldname>`. For
example, to get the message field from the first hit, use `ctx.payload.hits.hits.0.fields.message`.
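For example, a sketch of an <<actions-email, email action>> body that references these values (it assumes the search request returns a `message` field for each hit):
[source,json]
--------------------------------------------------
"email" : {
  "subject" : "Found {{ctx.payload.hits.total}} error events",
  "body" : "First event: {{ctx.payload.hits.hits.0.fields.message}}"
}
--------------------------------------------------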
[[search-input-attributes]]
.Search Input Attributes
[options="header"]
|======
| Name |Required | Default | Description
| `request.search_type` | no | count | The {ref}/search-request-search-type.html#search-request-search-type[type] of search request to perform. Valid values are: `count`, `dfs_query_and_fetch`, `dfs_query_then_fetch`, `query_and_fetch`, `query_then_fetch`, and `scan`. The Elasticsearch default is `query_then_fetch`.
| `request.indices` | no | - | The indices to search. If omitted, all indices are searched, which is the default behaviour in Elasticsearch. <<dynamic-index-names, Dynamic index names>> are supported.
| `request.types` | no | - | The document types to search for. If omitted, all document types are searched, which is the default behaviour in Elasticsearch.
| `request.body` | no | - | The body of the request. The {ref}/search-request-body.html[request body] follows the same structure you normally send in the body of a REST `_search` request. The body can be static text or include `mustache` <<templates, templates>>.
| `request.template` | no | - | The body of the search template. See <<templates, configure templates>> for more information.
| `request.indices_options.expand_wildcards` | no | `open` | How to expand wildcards. Valid values are: `all`, `open`, `closed`, and `none`. See {ref}/multi-index.html#multi-index[`expand_wildcards`] for more information.
| `request.indices_options.ignore_unavailable` | no | `true` | Whether the search should ignore unavailable indices. See {ref}/multi-index.html#multi-index[`ignore_unavailable`] for more information.
| `request.indices_options.allow_no_indices` | no | `true` | Whether to allow a search where a wildcard indices expression results in no concrete indices. See {ref}/multi-index.html#multi-index[allow_no_indices] for more information.
| `extract` | no | - | An array of JSON keys to extract from the search response and load as the payload. When a search generates a large response, you can use `extract` to select the relevant fields instead of loading the entire response.
| `timeout` | no | 30s | The timeout for waiting for the search api call to return. If no response is returned within this time, the search input times out and fails.
This setting overrides the default internal search operations <<default-internal-ops-timeouts, timeouts>>.
| `dynamic_name_timezone` | no | - | The time zone to use for resolving the index name based on <<dynamic-index-names, Dynamic Index Names>>. The default time zone also can be <<dynamic-index-name-timezone, configured>> globally.
|======
You can reference the following variables in the execution context when specifying the request `body`:
[options="header"]
|======
| Name | Description
| `ctx.watch_id` | The id of the watch that is currently executing.
| `ctx.execution_time` | The time execution of this watch started.
| `ctx.trigger.triggered_time` | The time this watch was triggered.
| `ctx.trigger.scheduled_time` | The time this watch was supposed to be triggered.
| `ctx.metadata.*` | Any metadata associated with the watch.
|======
===== Submitting Searches
You can use the search input to submit any valid search request to your Elasticsearch cluster.
For example, the following snippet returns all `event` documents in the `logs` index.
[source,json]
--------------------------------------------------
"input" : {
"search" : {
"request" : {
"indices" : [ "logs" ],
"types" : [ "event" ],
"body" : {
"query" : { "match_all" : {}}
}
}
}
}
--------------------------------------------------
===== Extracting Specific Fields
You can specify which fields in the search response you want to load into the watch payload with
the `extract` attribute. This is useful when a search generates a large response and you are only
interested in particular fields.
For example, the following input loads only the total number of hits into the watch payload:
[source,json]
--------------------------------------------------
"input": {
"search": {
"request": {
"indices": [".watch_history*"]
},
"extract": ["hits.total"]
}
},
--------------------------------------------------
===== Using Templates
The `search` input supports {ref}/search-template.html[search templates]. For example, the following snippet
references the indexed template called `my_template` and passes a value of 23 to fill in the template's
`value` parameter.
[source,json]
--------------------------------------------------
{
"input" : {
"search" : {
"request" : {
"indices" : [ "logs" ],
"template" : {
"id" : "my_template",
"params" : {
"value" : 23
}
}
}
}
}
...
}
--------------------------------------------------
===== Applying Conditions
The `search` input is often used in conjunction with the <<condition-script, `script`>> condition. For example,
the following snippet adds a condition to check if the search returned more than five hits:
[source,json]
--------------------------------------------------
{
"input" : {
"search" : {
"request" : {
"indices" : [ "logs" ],
"body" : {
"query" : { "match_all" : {} }
}
}
}
},
"condition" : {
"script" : "return ctx.payload.hits.total > 5"
}
...
}
--------------------------------------------------

@ -0,0 +1,52 @@
[[input-simple]]
==== Simple Input
An <<input, input>> that enables you to load static data into a watch's execution context as the initial payload.
The `simple` input is useful when the data you want to work with doesn't need to be loaded dynamically, but for
maintainability you want to store the data centrally and reference it with templates.
You can define the static data as a string (`str`), numeric value (`num`), or an object (`obj`):
[source,json]
--------------------------------------------------
{
"input" : {
"simple" : {
"str" : "val1",
"num" : 23,
"obj" : {
"str" : "val2"
}
}
}
...
}
--------------------------------------------------
For example, the following watch uses the `simple` input to set the recipient name
for a reminder email that's sent every day at noon.
[source,json]
--------------------------------------------------
{
"trigger" : {
"schedule" : {
"daily" : { "at" : "noon" }
}
},
"input" : {
"simple" : {
"name" : "John"
}
},
"actions" : {
"reminder_email" : {
"email" : {
"to" : "to@host.domain",
"subject" : "Reminder",
"body" : "Dear {{ctx.payload.name}}, by the time you read these lines, I'll be gone"
}
}
}
}
--------------------------------------------------

@ -0,0 +1,111 @@
[[api-java]]
=== Java API
Watcher provides a Java client called WatcherClient that adds support for the Watcher APIs to the standard Java clients
that ship with Elasticsearch ({java-client-ref}/transport-client.html[Transport Client] or
the {java-client-ref}/node-client.html[Node Client]).
==== Installing WatcherClient
To use the `WatcherClient` you will need to make sure the `elasticsearch-watcher` JAR file is in the classpath. You can
extract the jar from the downloaded watcher plugin itself.
If you use Maven to manage dependencies, add the following to the `pom.xml`:
[source,xml]
--------------------------------------------------
<project ...>
<repositories>
<!-- add the elasticsearch repo -->
<repository>
<id>elasticsearch-releases</id>
<url>http://maven.elasticsearch.org/releases</url>
<releases>
<enabled>true</enabled>
</releases>
<snapshots>
<enabled>false</enabled>
</snapshots>
</repository>
...
</repositories>
...
<dependencies>
<!-- add the Watcher jar as a dependency -->
<dependency>
<groupId>org.elasticsearch</groupId>
<artifactId>elasticsearch-watcher</artifactId>
<version>1.0.0-Beta1</version>
</dependency>
...
</dependencies>
...
</project>
--------------------------------------------------
If you use Gradle, add the dependencies to `build.gradle`:
[source,groovy]
--------------------------------------------------------------
repositories {
/* ... Any other repositories ... */
// Add the Elasticsearch Maven Repository
maven {
url "http://maven.elasticsearch.org/releases"
}
}
dependencies {
// Provide the Watcher jar on the classpath for compilation and at runtime
compile "org.elasticsearch:elasticsearch-watcher:1.0.0-Beta1"
/* ... */
}
--------------------------------------------------------------
You can manually download the http://maven.elasticsearch.org/releases/org/elasticsearch/elasticsearch-watcher/1.0.0-Beta1/elasticsearch-watcher-1.0.0-Beta1.jar[Watcher JAR]
directly from our Maven repository.
==== Creating the WatcherClient
You can create the `WatcherClient` as the following snippet shows:
[source,java]
--------------------------------------------------
import org.elasticsearch.watcher.client.WatcherClient;
...
Client client = ... // create and initialize either the transport or the node client
WatcherClient watcherClient = new WatcherClient(client);
--------------------------------------------------
include::java/put-watch.asciidoc[]
include::java/get-watch.asciidoc[]
include::java/delete-watch.asciidoc[]
include::java/execute-watch.asciidoc[]
include::java/ack-watch.asciidoc[]
include::java/stats.asciidoc[]
include::java/service.asciidoc[]

@ -0,0 +1,65 @@
[[api-java-ack-watch]]
==== Ack Watch API
<<actions-ack-throttle, Acknowledging>> a watch enables you to manually throttle
execution of the watch's actions. An action's _acknowledgement state_ is stored in the
`_status.actions.<id>.ack.state` structure.
The current status of a watch and the state of its actions is returned with the watch
definition when you call the <<api-java-get-watch, Get Watch API>>:
[source,java]
--------------------------------------------------
GetWatchResponse getWatchResponse = watcherClient.prepareGetWatch("my-watch").get();
String state = getWatchResponse.getStatus().actionStatus("my-action").ackStatus().state();
--------------------------------------------------
The action state of a newly-created watch is `awaits_successful_execution`. When the watch
runs and its condition is met, the value changes to `ackable`. Acknowledging the action
(using the ACK API) sets this value to `acked`.
When an action state is set to `acked`, further executions of that action are throttled
until its state is reset to `awaits_successful_execution`. This happens when the watch's
condition is checked and is not met (the condition evaluates to `false`).
The following snippet shows how to acknowledge a particular action. You specify the IDs of
the watch and the action you want to acknowledge--in this example `my-watch` and `my-action`.
[source,java]
--------------------------------------------------
AckWatchResponse ackResponse = watcherClient.prepareAckWatch("my-watch", "my-action").get();
--------------------------------------------------
As a response to this request, the status of the watch and the state of the action will be
returned in the `AckWatchResponse` object:
[source,java]
--------------------------------------------------
Watch.Status status = ackResponse.getStatus();
ActionStatus actionStatus = status.actionStatus("my-action");
ActionStatus.AckStatus ackStatus = actionStatus.ackStatus();
ActionStatus.AckStatus.State ackState = ackStatus.state();
--------------------------------------------------
You can acknowledge multiple actions:
[source,java]
--------------------------------------------------
AckWatchResponse ackResponse = watcherClient.prepareAckWatch("my-watch")
.setActionIds("action1", "action2")
.get();
--------------------------------------------------
To acknowledge all of a watch's actions, specify `_all` as the action ID or simply omit the
actions altogether.
[source,java]
--------------------------------------------------
AckWatchResponse ackResponse = watcherClient.prepareAckWatch("my-watch").get();
--------------------------------------------------
[source,java]
--------------------------------------------------
AckWatchResponse ackResponse = watcherClient.prepareAckWatch("my-watch", "_all").get();
--------------------------------------------------

@ -0,0 +1,19 @@
[[api-java-delete-watch]]
==== Delete Watch API
The DELETE watch API removes a specific watch (identified by its `id`) from Watcher. Once removed, the document
representing the watch in the `.watches` index will be gone and it will never be executed again.
Please note that deleting a watch **does not** delete any watch execution records related to this watch from
the <<watch-history, Watch History>>.
IMPORTANT: Deleting a watch must be done via this API only. Do not delete the watch directly from the `.watches` index
using Elasticsearch's DELETE Document API. When integrating with Shield, a best practice is to make sure
no `write` privileges are granted to anyone over the `.watches` index.
The following example deletes a watch with the `my-watch` id:
[source,java]
--------------------------------------------------
DeleteWatchResponse deleteWatchResponse = watcherClient.prepareDeleteWatch("my-watch").get();
--------------------------------------------------

@ -0,0 +1,54 @@
[[api-java-execute-watch]]
==== Execute Watch API
This API forces the execution of a watch stored in the `.watches` index.
It can be used to test a watch without executing all of its actions, or while ignoring its condition.
The response contains a `BytesReference` that represents the record that would be written to the `.watch_history` index.
The following example executes a watch with the name `my-watch`:
[source,java]
--------------------------------------------------
ExecuteWatchResponse executeWatchResponse = watcherClient.prepareExecuteWatch("my-watch")
// Will execute the actions no matter what the condition returns
.setIgnoreCondition(true)
// A map containing alternative input to use instead of the input result from the watch's input
.setAlternativeInput(new HashMap<String, Object>())
// Trigger data to use (Note that "scheduled_time" is not provided to the ctx.trigger by this
// execution method so you may want to include it here)
.setTriggerData(new HashMap<String, Object>())
// Simulating the "email_admin" action while ignoring its throttle state. Use
// "_all" to set the action execution mode to all actions
.setActionMode("_all", ActionExecutionMode.FORCE_SIMULATE)
// If the execution of this watch should be written to the `.watch_history` index and reflected
// in the persisted Watch
.setRecordExecution(false)
    .get();
--------------------------------------------------
Once the response is returned, you can explore it by getting the execution record source:
[source,java]
--------------------------------------------------
XContentSource source = executeWatchResponse.getRecordSource();
--------------------------------------------------
The `XContentSource` provides methods to explore the source:
[source,java]
--------------------------------------------------
Map<String, Object> map = source.getAsMap();
--------------------------------------------------
Or get a specific value associated with a known key:
[source,java]
--------------------------------------------------
String actionId = source.getValue("result.actions.0.id");
--------------------------------------------------

@ -0,0 +1,32 @@
[[api-java-get-watch]]
==== Get Watch API
This API retrieves a watch by its id.
The following example gets a watch with `my-watch` id:
[source,java]
--------------------------------------------------
GetWatchResponse getWatchResponse = watcherClient.prepareGetWatch("my-watch").get();
--------------------------------------------------
You can access the watch definition by accessing the source of the response:
[source,java]
--------------------------------------------------
XContentSource source = getWatchResponse.getSource();
--------------------------------------------------
The `XContentSource` provides methods to explore the source:
[source,java]
--------------------------------------------------
Map<String, Object> map = source.getAsMap();
--------------------------------------------------
Or get a specific value associated with a known key:
[source,java]
--------------------------------------------------
String host = source.getValue("input.http.request.host");
--------------------------------------------------

@ -0,0 +1,76 @@
[[api-java-put-watch]]
==== PUT Watch API
The PUT watch API either registers a new watch in Watcher or updates an existing one. Once registered, a new document
will be added to the `.watches` index, representing the watch, and the watch's trigger will immediately be registered
with the relevant trigger engine (typically the scheduler, for the `schedule` trigger).
IMPORTANT: Putting a watch must be done via this API only. Do not put a watch directly to the `.watches` index
using Elasticsearch's Index API. When integrating with Shield, a best practice is to make sure
no `write` privileges are granted to anyone over the `.watches` index.
The following example adds a watch with the `my-watch` id that has the following qualities:
* The watch schedule triggers every minute.
* The watch search input finds any 404 HTTP responses that occurred in the past five minutes.
* The watch condition checks the search results for 404s.
* The watch action sends an email if there are any 404s.
[source,java]
--------------------------------------------------
WatchSourceBuilder watchSourceBuilder = WatchSourceBuilders.watchBuilder();
// Set the trigger
watchSourceBuilder.trigger(TriggerBuilders.schedule(Schedules.cron("0 0/1 * * * ?")));
// Create the search request to use for the input
SearchRequest request = Requests.searchRequest("idx").source(searchSource()
.query(filteredQuery(matchQuery("response", 404), boolFilter()
.must(rangeFilter("date").gt("{{ctx.trigger.scheduled_time}}"))
.must(rangeFilter("date").lt("{{ctx.execution_time}}")))));
// Set the input
watchSourceBuilder.input(new SearchInput(request, null));
// Set the condition
watchSourceBuilder.condition(new ScriptCondition(Script.inline("ctx.payload.hits.total > 1").build()));
// Create the email template to use for the action
EmailTemplate.Builder emailBuilder = EmailTemplate.builder();
emailBuilder.to("someone@domain.host.com");
emailBuilder.subject("404 recently encountered");
EmailAction.Builder emailActionBuilder = EmailAction.builder(emailBuilder.build());
// Add the action
watchSourceBuilder.addAction("email_someone", emailActionBuilder.build());
PutWatchResponse putWatchResponse = watcherClient.preparePutWatch("my-watch")
.setSource(watchSourceBuilder)
.get();
--------------------------------------------------
While the above snippet spells out all the concrete classes that make up the watch, using the
available builder classes along with static imports can significantly simplify and compact
your code:
[source,java]
--------------------------------------------------
PutWatchResponse putWatchResponse = watcherClient.preparePutWatch("my-watch")
.setSource(watchBuilder()
.trigger(schedule(cron("0 0/1 * * * ?")))
.input(searchInput(searchRequest("idx").source(searchSource()
.query(filteredQuery(matchQuery("response", 404), boolFilter()
.must(rangeFilter("date").gt("{{ctx.trigger.scheduled_time}}"))
.must(rangeFilter("date").lt("{{ctx.execution_time}}")))))))
.condition(scriptCondition("ctx.payload.hits.total > 1"))
.addAction("email_someone", emailAction(EmailTemplate.builder()
.to("someone@domain.host.com")
.subject("404 recently encountered"))))
.get();
--------------------------------------------------
* Use `TriggerBuilders` and `Schedules` classes to define the trigger
* Use `InputBuilders` class to define the input
* Use `ConditionBuilders` class to define the condition
* Use `ActionBuilders` to define the actions

@ -0,0 +1,24 @@
[[api-java-service]]
==== Service API
The Watcher `service` API allows you to start and stop the Watcher service.
The following example starts the watcher service:
[source,java]
--------------------------------------------------
WatcherServiceResponse watcherServiceResponse = watcherClient.prepareWatchService().start().get();
--------------------------------------------------
The following example stops the watcher service:
[source,java]
--------------------------------------------------
WatcherServiceResponse watcherServiceResponse = watcherClient.prepareWatchService().stop().get();
--------------------------------------------------
The following example restarts the watcher service:
[source,java]
--------------------------------------------------
WatcherServiceResponse watcherServiceResponse = watcherClient.prepareWatchService().restart().get();
--------------------------------------------------

@ -0,0 +1,33 @@
[[api-java-stats]]
==== Stats API
The Watcher `stats` API returns information about the Watcher service running on your cluster.
The following example queries the `stats` API:
[source,java]
--------------------------------------------------
WatcherStatsResponse watcherStatsResponse = watcherClient.prepareWatcherStats().get();
--------------------------------------------------
A successful call returns a response structure that can be accessed as shown:
[source,java]
--------------------------------------------------
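// The build of Watcher currently running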
WatcherBuild build = watcherStatsResponse.getBuild();
// The Version of watcher currently running
WatcherVersion version = watcherStatsResponse.getVersion();
// The current size of the watcher execution queue
long executionQueueSize = watcherStatsResponse.getExecutionQueueSize();
// The maximum size the watch execution queue has grown to
long executionQueueMaxSize = watcherStatsResponse.getWatchExecutionQueueMaxSize();
// The total number of watches registered in the system
long totalNumberOfWatches = watcherStatsResponse.getWatchesCount();
// Watcher state (STARTING,STOPPED or STARTED)
WatcherState watcherState = watcherStatsResponse.getWatcherState();
--------------------------------------------------

@ -0,0 +1,28 @@
[[api-rest]]
=== REST API
include::rest/put-watch.asciidoc[]
include::rest/get-watch.asciidoc[]
include::rest/delete-watch.asciidoc[]
include::rest/execute-watch.asciidoc[]
include::rest/ack-watch.asciidoc[]
include::rest/info.asciidoc[]
include::rest/stats.asciidoc[]
include::rest/stop.asciidoc[]
include::rest/start.asciidoc[]
include::rest/restart.asciidoc[]

@ -0,0 +1,148 @@
[[api-rest-ack-watch]]
==== Ack Watch API
<<actions-ack-throttle, Acknowledging>> a watch enables you to manually throttle
execution of the watch's actions. An action's _acknowledgement state_ is stored in the
`_status.actions.<id>.ack.state` structure.
The current status of a watch and the state of its actions is returned with the watch
definition when you call the <<api-rest-get-watch, Get Watch API>>:
[source,json]
--------------------------------------------------
GET _watcher/watch/<watch_id>
--------------------------------------------------
// AUTOSENSE
The action state of a newly-created watch is `awaits_successful_execution`.
[source,js]
--------------------------------------------------
"_status": {
...
"actions": {
"action_id": {
"ack": {
"timestamp": "2015-05-26T18:04:27.723Z",
"state": "awaits_successful_execution"
},
...
}
}
}
--------------------------------------------------
When the watch runs and the condition matches, the value of the `ack.state` changes
to `ackable`:
[source,js]
--------------------------------------------------
"_status": {
...
"actions": {
"action_id": {
"ack": {
"timestamp": "2015-05-26T18:19:08.758Z",
"state": "ackable"
},
...
}
}
}
--------------------------------------------------
Acknowledging the watch action (using the ACK API) sets the value of the `ack.state`
to `acked`:
[source,js]
--------------------------------------------------
"_status": {
...
"actions": {
"action_id": {
"ack": {
"timestamp": "2015-05-26T18:21:09.982Z",
"state": "acked"
},
...
}
}
}
--------------------------------------------------
Acknowledging an action throttles further executions of that action until its
`ack.state` is reset to `awaits_successful_execution`. This happens when the watch's condition
is checked and is not met (the condition evaluates to `false`).
The following snippet shows how to ack a watch action identified by its id. In this example, the
watch id is `my-watch` and the id of the action being acknowledged is `my-action`:
[source,js]
--------------------------------------------------
PUT _watcher/watch/my-watch/my-action/_ack
--------------------------------------------------
// AUTOSENSE
As a response to this request, the full watch status is returned:
[source,js]
--------------------------------------------------
{
"_status": {
"last_checked": "2015-05-26T18:21:08.630Z",
"last_met_condition": "2015-05-26T18:21:08.630Z",
"actions": {
"my-action": {
"ack_status": {
"timestamp": "2015-05-26T18:21:09.982Z",
"state": "acked"
},
"last_execution": {
"timestamp": "2015-05-26T18:21:04.106Z",
"successful": true
},
"last_successful_execution": {
"timestamp": "2015-05-26T18:21:04.106Z",
"successful": true
},
"last_throttle": {
"timestamp": "2015-05-26T18:21:08.630Z",
"reason": "throttling interval is set to [5 seconds] but time elapsed since last execution is [4 seconds and 530 milliseconds]"
}
}
}
}
}
--------------------------------------------------
You can acknowledge multiple actions by assigning the `actions` parameter a
comma-separated list of action ids:
[source,js]
--------------------------------------------------
PUT _watcher/watch/my-watch/action1,action2/_ack
--------------------------------------------------
// AUTOSENSE
To acknowledge all of a watch's actions, simply omit the `actions` parameter:
[source,js]
--------------------------------------------------
PUT _watcher/watch/my-watch/_ack
--------------------------------------------------
// AUTOSENSE
===== Timeouts
If you acknowledge a watch while it is executing, the ack action blocks and waits for the watch
execution to finish. For some watches, this can take a significant amount of time. By default,
the ack watch action has a timeout of 10 seconds. You can change the timeout setting by
specifying the `master_timeout` parameter.
The following snippet shows how to change the default timeout of the ack action to 30 seconds:
[source,js]
--------------------------------------------------
PUT _watcher/watch/my-watch/_ack?master_timeout=30s
--------------------------------------------------
// AUTOSENSE

View File

@ -0,0 +1,46 @@
[[api-rest-delete-watch]]
==== Delete Watch API
The DELETE watch API removes a specific watch (identified by its `id`) from watcher. Once removed, the document
representing the watch in the `.watches` index will be gone and it will never be executed again.
Please note that deleting a watch **does not** delete any watch execution records related to this watch from
the <<watch-history, Watch History>>.
IMPORTANT: Deleting a watch must be done via this API only. Do not delete the watch directly from the `.watches` index
using Elasticsearch's DELETE Document API. When integrating with Shield, a best practice is to make sure
no `write` privileges are granted to anyone over the `.watches` index.
The following example deletes a watch with the `my-watch` id:
[source,js]
--------------------------------------------------
DELETE _watcher/watch/my-watch
--------------------------------------------------
// AUTOSENSE
This is a sample output:
[source,js]
--------------------------------------------------
{
"found": true,
"_id": "my_watch",
"_version": 10
}
--------------------------------------------------
===== Timeouts
When deleting a watch while it is executing, the delete action will block and wait for the watch execution
to finish. Depending on the nature of the watch, in some situations this can take a while. For this reason,
the delete watch action is associated with a timeout that is set to 10 seconds by default. You can control this
timeout by passing in the `master_timeout` parameter.
The following snippet shows how to change the default timeout of the delete action to 30 seconds:
[source,js]
--------------------------------------------------
DELETE _watcher/watch/my-watch?master_timeout=30s
--------------------------------------------------
// AUTOSENSE

View File

@ -0,0 +1,316 @@
[[api-rest-execute-watch]]
==== Execute Watch API
The execute watch API forces the execution of a stored watch. It can be used to force
execution of the watch outside of its triggering logic, or to test the watch for
debugging purposes.
The following example executes the `my-watch` watch:
[source,js]
--------------------------------------------------
POST _watcher/watch/my-watch/_execute
--------------------------------------------------
// AUTOSENSE
For testing and debugging purposes, you also have fine-grained control over how the
watch is executed--you can execute the watch without executing all of its actions, or simply
simulate them. You can also force execution by ignoring the watch's condition, and control
whether a watch record is written to the watch history after execution.
This API supports the following fields:
[options="header"]
|======
| Name | Required | Default | Description
| trigger_data | no | | This structure is parsed as the data of
the trigger event that will be used during
the watch execution
| ignore_condition | no | false | When set to `true`, the watch execution
uses the <<condition-always, Always>>
Condition.
| alternative_input | no | null | When present, the watch uses this object as a payload
instead of executing its own input.
| action_modes | no | null | Determines how to handle the watch actions as part
of the watch execution. See
<<api-rest-execute-watch-action-mode, Action Execution Modes>>
for more information.
| record_execution | no | false | When set to `true`, the watch record representing
the watch execution result is persisted to
the `.watch_history` index for the current time.
In addition, the status of the watch is
updated, possibly throttling subsequent executions.
| watch | no | null | When present, this <<watch-definition, watch>> is
used instead of the one specified in the request. This
watch is not persisted to the index and record_execution
cannot be set.
|======
The following example shows a comprehensive example of executing the `my-watch` watch:
[source,js]
--------------------------------------------------
POST _watcher/watch/my-watch/_execute
{
"trigger_data" : { <1>
"triggered_time" : "now",
"scheduled_time" : "now"
},
"alternative_input" : { <2>
"foo" : "bar"
},
"ignore_condition" : true, <3>
"action_modes" : {
"my-action" : "force_simulate" <4>
},
"record_execution" : true <5>
}
--------------------------------------------------
// AUTOSENSE
<1> The triggered and scheduled times are provided.
<2> The input as defined by the watch is ignored and instead the
provided input will be used as the execution payload.
<3> The condition as defined by the watch will be ignored and will
be assumed to evaluate to `true`.
<4> Forces the simulation of `my-action`. Forcing the simulation
means that throttling is ignored and the watch is simulated by
Watcher instead of being executed normally.
<5> The execution of the watch will create a watch record in the
watch history, and the throttling state of the watch will
potentially be updated accordingly.
This is an example of the output:
[source,js]
--------------------------------------------------
{
"_id": "my-watch_0-2015-06-02T23:17:55.124Z", <1>
"watch_record": { <2>
"watch_id": "my-watch",
"trigger_event": {
"type": "manual",
"triggered_time": "2015-06-02T23:17:55.124Z",
"manual": {
"schedule": {
"scheduled_time": "2015-06-02T23:17:55.124Z"
}
}
},
"state": "executed",
"input": {
"search": {
"request": {
"search_type": "query_then_fetch",
"indices": [
"logstash*"
],
"types": [],
"body": {
"query": {
"filtered": {
"query": {
"match": {
"response": 404
}
},
"filter": {
"range": {
"@timestamp": {
"from": "{{ctx.trigger.scheduled_time}}||-5m",
"to": "{{ctx.trigger.triggered_time}}"
}
}
}
}
}
}
}
}
},
"condition": {
"script": "ctx.payload.hits.total > 1"
},
"result": { <3>
"execution_time": "2015-06-02T23:17:55.124Z",
"execution_duration": 12608,
"input": {
"type": "simple",
"payload": {
"foo": "bar"
}
},
"condition": {
"type": "always",
"met": true
},
"actions": [
{
"id": "email_admin",
"type" : "email"
"status" : "success"
"email": {
"account": "gmail",
"email": {
"id": "my-watch_0-2015-05-30T01:14:05.319Z",
"from": "watcher@example.com",
"sent_date": "2015-05-30T01:14:05.319Z",
"to": [
"admin@domain.host.com"
],
"subject": "404 recently encountered"
}
}
}
]
}
}
}
--------------------------------------------------
<1> The id of the watch record as it would be stored in the `.watch_history` index.
<2> The watch record document as it would be stored in the `.watch_history` index.
<3> The watch execution results.
[[api-rest-execute-watch-action-mode]]
===== Action Execution Modes
Action modes define how actions will be handled during the watch execution. There are five
possible modes an action can be associated with:
[options="header"]
|======
| Name |Description
| simulate          | The action execution will be simulated. Each action type defines its own
                      simulation mode. For example, the <<actions-email, email>> action
will create the email that would have been sent but will not actually
send it. In this mode, the action may be throttled if the current state
of the watch indicates it should be.
| force_simulate    | Similar to the `simulate` mode, except the action will not be
throttled even if the current state of the watch indicates it should be.
| execute           | Executes the action as it would have been executed if the watch had
been triggered by its own trigger. The execution may be throttled if the
current state of the watch indicates it should be.
| force_execute     | Similar to the `execute` mode, except the action will not be throttled
even if the current state of the watch indicates it should be.
| skip | The action will be skipped and won't be executed or simulated.
                      This effectively forces the action to be throttled.
|======
You can set a different execution mode for every action by simply associating the mode name
with the action id:
[source,js]
--------------------------------------------------
POST _watcher/watch/my-watch/_execute
{
"action_modes" : {
"action1" : "force_simulate",
"action2" : "skip"
}
}
--------------------------------------------------
// AUTOSENSE
You can also associate a single execution mode with all the watch's actions using `_all`
as the action id:
[source,js]
--------------------------------------------------
POST _watcher/watch/my-watch/_execute
{
"action_modes" : {
"_all" : "force_execute"
}
}
--------------------------------------------------
// AUTOSENSE
[[api-rest-execute-inline-watch]]
===== Inline Watch Execution
You can use the Execute API to execute watches that are not yet registered in Watcher by
specifying the watch definition inline. This serves as a great tool for testing and debugging
your watches prior to adding them to Watcher.
The following example demonstrates how you can test a watch definition:
[source,js]
--------------------------------------------------
POST _watcher/watch/_execute
{
"watch" : {
"trigger" : { "schedule" : { "interval" : "10s" } },
"input" : {
"search" : {
"request" : {
"indices" : [ "logs" ],
"body" : {
"query" : {
"match" : { "message": "error" }
}
}
}
}
},
"condition" : {
"compare" : { "ctx.payload.hits.total" : { "gt" : 0 }}
},
"actions" : {
"log_error" : {
"logging" : {
"text" : "Found {{ctx.payload.hits.total}} errors in the logs"
}
}
}
}
}
--------------------------------------------------
All other settings for this API still apply when inlining a watch.
In the following snippet, while the watch is defined with a `compare` condition,
during execution this condition will be ignored:
[source,js]
--------------------------------------------------
POST _watcher/watch/_execute
{
"ignore_condition" : true,
"watch" : {
"trigger" : { "schedule" : { "interval" : "10s" } },
"input" : {
"search" : {
"request" : {
"indices" : [ "logs" ],
"body" : {
"query" : {
"match" : { "message": "error" }
}
}
}
}
},
"condition" : {
"compare" : { "ctx.payload.hits.total" : { "gt" : 0 }}
},
"actions" : {
"log_error" : {
"logging" : {
"text" : "Found {{ctx.payload.hits.total}} errors in the logs"
}
}
}
}
}
--------------------------------------------------

View File

@ -0,0 +1,109 @@
[[api-rest-get-watch]]
==== Get Watch API
This API retrieves a watch by its id.
The following example gets a watch with `my-watch` id:
[source,js]
--------------------------------------------------
GET _watcher/watch/my-watch
--------------------------------------------------
// AUTOSENSE
This is an example of the output:
[source,js]
--------------------------------------------------
{
"found": true,
"_id": "my_watch",
"_status": { <1>
"last_checked": "2015-05-26T18:21:08.630Z",
"last_met_condition": "2015-05-26T18:21:08.630Z",
"actions": {
"email_admin": {
"ack_status": {
"timestamp": "2015-05-26T18:21:09.982Z",
"state": "acked"
},
"last_execution": {
"timestamp": "2015-05-26T18:21:04.106Z",
"successful": true
},
"last_successful_execution": {
"timestamp": "2015-05-26T18:21:04.106Z",
"successful": true
},
"last_throttle": {
"timestamp": "2015-05-26T18:21:08.630Z",
"reason": "throttling interval is set to [5 seconds] but time elapsed since last execution is [4 seconds and 530 milliseconds]"
}
}
}
},
"watch": {
"input": {
"search": {
"request": {
"search_type": "query_then_fetch",
"indices": [
"logstash*"
],
"types": [],
"body": {
"query": {
"filtered": {
"filter": {
"range": {
"@timestamp": {
"from": "{{ctx.trigger.scheduled_time}}||-5m",
"to": "{{ctx.trigger.triggered_time}}"
}
}
},
"query": {
"match": {
"response": 404
}
}
}
}
}
}
}
},
"condition": {
"script": {
"type": "inline",
"lang": "groovy",
"params": {},
"script": "ctx.payload.hits.total > 1"
}
},
"trigger": {
"schedule": {
"cron": "0 0/1 * * * ?"
}
},
"actions": {
"email_admin": {
"email": {
"subject": {
"type": "inline",
"lang": "mustache",
"params": {},
"script": "404 recently encountered"
},
"attach_data": false,
"to": [
"someone@domain.host.com"
]
}
}
}
}
}
--------------------------------------------------
<1> The retrieved watch is returned along with its current status

View File

@ -0,0 +1,33 @@
[[api-rest-info]]
==== Info API
The watcher info API gives basic version information about the watcher plugin that is installed.
The following example queries the `info` API.
[source,js]
--------------------------------------------------
GET _watcher
--------------------------------------------------
// AUTOSENSE
A successful call returns a JSON structure similar to the following example:
[source,js]
--------------------------------------------------
{
"version": {
"name": "2.0.0",
"number": "2.0.0", <1>
"build_hash": "41f64213d2d370bf66f0e9b839a30a19", <2>
"build_timestamp": "2015-04-07T13:34:42Z", <3>
"build_snapshot": true <4>
}
}
--------------------------------------------------
<1> The version number of the plugin
<2> The build hash of the plugin
<3> The time the plugin was built
<4> Whether or not this plugin was a development snapshot build

View File

@ -0,0 +1,95 @@
[[api-rest-put-watch]]
==== PUT Watch API
The PUT watch API either registers a new watch in watcher or updates an existing one. Once registered, a new document
will be added to the `.watches` index, representing the watch, and the watch's trigger will immediately be registered
with the relevant trigger engine (typically the scheduler, for the `schedule` trigger).
IMPORTANT: Putting a watch must be done via this API only. Do not put a watch directly to the `.watches` index
using Elasticsearch's Index API. When integrating with Shield, a best practice is to make sure
no `write` privileges are granted to anyone over the `.watches` index.
The following example adds a watch with the `my-watch` id that has the following qualities:
* The watch schedule triggers every minute.
* The watch search input finds any 404 HTTP responses that occurred in the past five minutes.
* The watch condition checks the search results for 404s.
* The watch action sends an email if there are any 404s.
[source,js]
--------------------------------------------------
PUT _watcher/watch/my-watch
{
"trigger" : {
"schedule" : { "cron" : "0 0/1 * * * ?" }
},
"input" : {
"search" : {
"request" : {
"indices" : [
"logstash*"
],
"body" : {
"query" : {
"filtered": {
"query": {
"match": { "response": 404 }
},
"filter": {
"range": {
"@timestamp" : {
"from": "{{ctx.trigger.scheduled_time}}||-5m",
"to": "{{ctx.trigger.triggered_time}}"
}
}
}
}
}
}
}
}
},
"condition" : {
"script" : "ctx.payload.hits.total > 1"
},
"actions" : {
"email_admin" : {
"email" : {
"to" : "admin@domain.host.com",
"subject" : "404 recently encountered"
}
}
}
}
--------------------------------------------------
// AUTOSENSE
A watch has the following fields:
[options="header"]
|======
| Name | Description
| `trigger` | The <<trigger, trigger>> that defines when the watch should run
| `input` | The <<input, input>> that defines the input that loads the data for the watch
| `condition` | The <<condition, condition>> that defines if the actions should be run
| `actions` | The list of <<actions, actions>> that will be run if the condition matches
| `meta` | Metadata json that will be copied into the history entries.
| `throttle_period` | The minimum time between actions being run. The default is 5 seconds and can be changed in the config file with the setting `watcher.throttle.period.default_period`.
|======
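For example, the following snippet (an illustrative sketch; the `meta` content is arbitrary) sets a custom `throttle_period` of one minute and attaches `meta` data that will be copied into the history entries:
[source,json]
--------------------------------------------------
{
  "trigger" : { ... },
  "input" : { ... },
  "condition" : { ... },
  "actions" : { ... },
  "throttle_period" : "1m",
  "meta" : {
    "team" : "operations"
  }
}
--------------------------------------------------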
===== Timeouts
When updating a watch while it is executing, the put action will block and wait for the watch execution
to finish. Depending on the nature of the watch, in some situations this can take a while. For this reason,
the put watch action is associated with a timeout that is set to 10 seconds by default. You can control this
timeout by passing in the `master_timeout` parameter.
The following snippet shows how to change the default timeout of the put action to 30 seconds:
[source,js]
--------------------------------------------------
PUT _watcher/watch/my-watch?master_timeout=30s
--------------------------------------------------
// AUTOSENSE

View File

@ -0,0 +1,19 @@
[[api-rest-restart]]
==== Restart API
The `restart` watcher API stops, then starts the watcher service, as in the following example:
[source,js]
--------------------------------------------------
PUT _watcher/_restart
--------------------------------------------------
// AUTOSENSE
Watcher returns the following response if the request is successful:
[source,js]
--------------------------------------------------
{
"acknowledged": true
}
--------------------------------------------------

View File

@ -0,0 +1,19 @@
[[api-rest-start]]
==== Start API
The `start` watcher API starts the watcher service if the service is not already running, as in the following example:
[source,js]
--------------------------------------------------
PUT _watcher/_start
--------------------------------------------------
// AUTOSENSE
Watcher returns the following response if the request is successful:
[source,js]
--------------------------------------------------
{
"acknowledged": true
}
--------------------------------------------------

View File

@ -0,0 +1,154 @@
[[api-rest-stats]]
==== Stats API
The watcher `stats` API returns information about the Watcher service running on your cluster.
The watcher `stats` API supports the following request options:
[options="header"]
|======
| Name | Required | Default | Description
| metric | no | null | What metric should be returned.
|======
The supported metric values:
[options="header"]
|======
| Metric | Description
| executing_watches | Include the current executing watches in the response.
| queued_watches | Include the watches queued for execution in the response.
| _all | Include all metrics in the response.
|======
The watcher `stats` API always returns basic metrics regardless of the `metric` option.
The following example queries the `stats` API including the basic metrics:
[source,js]
--------------------------------------------------
GET _watcher/stats
--------------------------------------------------
// AUTOSENSE
A successful call returns a JSON structure similar to the following example:
[source,js]
--------------------------------------------------
{
"watcher_state": "started", <1>
"watch_count": 1, <2>
"execution_thread_pool": {
"size": 1000, <3>
"max_size": 1 <4>
}
}
--------------------------------------------------
<1> The current state of watcher. May be either `started`, `starting` or `stopped`.
<2> The number of watches currently registered in watcher.
<3> The number of watches that were triggered and currently queued for execution.
<4> The largest size of the execution thread pool, indicating the largest number of concurrently executing watches.
===== Current executing watches metric
The current executing watches metric gives insight into the watches that are currently being executed by Watcher.
For each executing watch, information is returned such as the `watch_id`, the time execution started and the phase
the execution is currently in.
To include this metric, the `metric` option should be set to `executing_watches` or `_all`.
The following example specifies the `metric` option as a query string argument and will include the basic metrics and
metrics about the current watches being executed:
[source,js]
--------------------------------------------------
GET _watcher/stats?metric=executing_watches
--------------------------------------------------
// AUTOSENSE
The following example specifies the `metric` option as part of the URL path:
[source,js]
--------------------------------------------------
GET _watcher/stats/current_watches
--------------------------------------------------
// AUTOSENSE
An example of a successful JSON response that captures a watch in execution:
[source,js]
--------------------------------------------------
{
"watcher_state": "started",
"watch_count": 2,
"execution_thread_pool": {
"queue_size": 1000,
"max_size": 20
},
"current_watches": [ <1>
{
"watch_id": "slow_condition", <2>
"watch_record_id": "slow_condition_3-2015-05-13T07:42:32.179Z", <3>
"triggered_time": "2015-05-12T11:53:51.800Z", <4>
"execution_time": "2015-05-13T07:42:32.179Z", <5>
"execution_phase": "condition" <6>
}
]
}
--------------------------------------------------
<1> A list of all the watches that are currently being executed by Watcher. If the array is empty, no watches were
executing when the request was made. The captured watches are sorted by execution time in descending order, so the
longest running watch is always at the top.
<2> The id of the watch being executed.
<3> The id of the watch record.
<4> The time the watch was triggered by the trigger engine.
<5> The time the watch was executed. This is just before the input is executed.
<6> The current execution phase the watch is in. Can be `input`, `condition` or `action`.
===== Queued watches metric
When a watch is triggered, it is prepared for execution and executed as soon as there is capacity to do so. If many
watches trigger concurrently and there is not enough capacity to execute them all, the remaining watches are queued.
The queued watches metric gives insight into which watches are queued for execution.
To include this metric, the `metric` option should include `queued_watches` or `_all`.
The following example specifies the `queued_watches` metric option and will include the basic metrics and
the watches queued for execution:
[source,js]
--------------------------------------------------
GET _watcher/stats/queued_watches
--------------------------------------------------
// AUTOSENSE
An example of a successful JSON response that captures watches queued for execution:
[source,js]
--------------------------------------------------
{
"watcher_state": "started",
"watch_count": 10,
"execution_thread_pool": {
"queue_size": 1000,
"max_size": 20
},
"queued_watches": [ <1>
{
"watch_id": "slow_condition4", <2>
"watch_record_id": "slow_condition4_223-2015-05-21T11:59:59.811Z", <3>
"triggered_time": "2015-05-21T11:59:59.811Z", <4>
"execution_time": "2015-05-21T11:59:59.811Z" <5>
},
...
]
}
--------------------------------------------------
<1> A list of all the watches that are queued for execution. If the array is empty, no watches are queued for execution.
<2> The id of the watch queued for execution.
<3> The id of the watch record.
<4> The time the watch was triggered by the trigger engine.
<5> The time the watch went into a queued state.

View File

@ -0,0 +1,19 @@
[[api-rest-stop]]
==== Stop API
The `stop` watcher API stops the watcher service if the service is running, as in the following example:
[source,js]
--------------------------------------------------
PUT _watcher/_stop
--------------------------------------------------
// AUTOSENSE
Watcher returns the following response if the request is successful:
[source,js]
--------------------------------------------------
{
"acknowledged": true
}
--------------------------------------------------

View File

@ -0,0 +1,60 @@
[[transform]]
=== Transform
A _transform_ processes the payload in the watch execution context to prepare the payload for watch actions.
NOTE: If no transforms are defined, the actions have access to the payload as loaded by the watch input.
You can define transforms in two places:
1. As a top level construct in the watch definition. In this case, the payload is
transformed before any of the watch actions are executed.
2. As part of the definition of a particular action. In this case, the payload is
transformed before that action is executed. The transformation is only applied to the payload for that specific action.
If all actions require the same view of the payload, define a transform as part of the watch definition. If each action requires a different view of the payload, define different
transforms as part of the action definitions so each action has the payload prepared by its own dedicated transform.
The following example defines two transforms, one at the watch level and one as part of the definition of the `my_webhook` action.
[source,json]
.Watch Transform Constructs
--------------------------------------------------
{
"trigger" : { ...}
"input" : { ... },
"condition" : { ... },
"transform" : { <1>
"search" : {
"body" : { "query" : { "match_all" : {} } }
}
  },
"actions" : {
"my_webhook": {
"transform" : { <2>
"script" : "return ctx.payload.hits"
      },
"webhook" : {
"host" : "host.domain",
"port" : 8089,
"path" : "/notify/{{ctx.watch_id}}"
}
}
  }
...
}
--------------------------------------------------
<1> A watch level `transform`
<2> An action level `transform`
Watcher supports three types of transforms: <<transform-search, `search`>>, <<transform-script, `script`>>
and <<transform-chain, `chain`>>.
include::transform/search.asciidoc[]
include::transform/script.asciidoc[]
include::transform/chain.asciidoc[]

View File

@ -0,0 +1,43 @@
[[transform-chain]]
==== Chain Transform
A <<transform, Transform>> that executes an ordered list of configured transforms in a chain, where
the output of one transform serves as the input of the next transform in the chain. The payload that is
accepted by this transform serves as the input of the first transform in the chain and the output of the last
transform in the chain is the output of the `chain` transform as a whole.
You can use chain transforms to build more complex transforms out of the other available transforms. For example,
you can combine a <<transform-search, `search`>> transform and a <<transform-script, `script`>> transform,
as shown in the following snippet:
[source,json]
--------------------------------------------------
"transform" : {
"chain" : [ <1>
{
"search" : { <2>
"search_type" : "count",
"indices" : [ "logstash-*" ],
"body" : {
"query" : {
"match" : { "priority" : "error" }
}
}
}
},
{
"script" : "return [ error_count : ctx.payload.hits.total ]" <3>
}
]
}
--------------------------------------------------
<1> The `chain` transform definition
<2> The first transform in the chain (in this case, a `search` transform)
<3> The second and final transform in the chain (in this case, a `script` transform)
This example executes a `count` search on the cluster to look for `error` events. The
search results are then passed to the second `script` transform. The `script` transform
extracts the total hit count and assigns it to the `error_count` field in a newly-generated payload.
This newly-generated payload is the output of the `chain` transform and replaces the
payload in the watch execution context.

View File

@ -0,0 +1,62 @@
[[transform-script]]
==== Script Transform
A <<transform, Transform>> that executes a script on the current payload in the watch execution context
and replaces it with a newly generated one. The following snippet shows how a simple script transform can be defined on the watch level:
[source,json]
.Simple Script Transform
--------------------------------------------------
{
...
"transform" : {
"script" : "return [ time : ctx.trigger.scheduled_time ]" <1>
}
...
}
--------------------------------------------------
<1> A simple `groovy` script that creates a new payload with a single `time` field holding the scheduled time.
NOTE: The executed script may either return a valid model that is the equivalent of a Java(TM) Map or a JSON object (you
will need to consult the documentation of the specific scripting language to find out what this construct is). Any
other value that is returned will be assigned to, and accessible via, the `_value` variable.
As seen above, the `script` may hold a string value, in which case it will be treated as the script itself and the default
elasticsearch script language will be assumed (as described {ref}/modules-scripting.html#modules-scripting[here]). It
is possible to have more control over the scripting language and also to utilize pre-registered/pre-configured scripts
in elasticsearch. For this, the `script` field is defined as an object, and the following table lists the possible
settings that can be configured:
[[transform-script-settings]]
.Script Transform Settings
[options="header"]
|======
| Name |Required | Default | Description
| `inline` | yes* | - | When using an inline script, this field holds the script itself.
| `file`    | yes*     | -        | When referring to a script file, this field holds the name of the file.
| `id`      | yes*     | -        | When referring to an indexed script, this field holds the id of the script.
| `lang` | no | `groovy` | The script language
| `params` | no | - | Additional parameters/variables that are accessible by the script
|======
* When using the object notation of the script, one (and only one) of `inline`, `file` or `id` fields must be defined
NOTE: In addition to the provided `params`, the scripts also have access to the <<watch-execution-context, Standard Watch Execution Context Parameters>>
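For example, the following sketch uses the object notation to reference a script file (assuming a groovy script named `transform_payload` has been placed in the Elasticsearch `config/scripts` directory) and passes it an additional parameter:
[source,json]
--------------------------------------------------
{
  ...
  "transform" : {
    "script" : {
      "file" : "transform_payload",
      "lang" : "groovy",
      "params" : {
        "threshold" : 100
      }
    }
  }
  ...
}
--------------------------------------------------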
===== Script Type
IMPORTANT: When using `inline` scripts, if you're running Elasticsearch 1.3.8 or above, or 1.4.3 or above,
you will need to explicitly {ref}/modules-scripting.html#enable-dynamic-scripting[enable dynamic scripts]
in `elasticsearch.yml`.
As indicated by the table above, it is possible to utilize the full scripting support in elasticsearch and to base the script
on pre-registered indexed scripts or scripts pre-defined in files. Please note that, for security reasons, starting from elasticsearch
`v1.4.3`, inline groovy scripts are disabled by default. Furthermore, it is considered a best practice to pre-define the script
in stored files. To read more about elasticsearch scripting support and possible related vulnerabilities, please
see {ref}/modules-scripting.html[here].
TIP: The `script` transform is often useful when used in combination with the <<transform-search, `search`>>
transform, where the script can extract only the significant data from a search result, and thereby keep the payload
minimal. This can be achieved with the <<transform-chain, `chain`>> transform.

View File

@ -0,0 +1,165 @@
[[transform-search]]
==== Search Transform
A <<transform, Transform>> that executes a search on the cluster and replaces the current payload in
the watch execution context with the returned search results. The following snippet shows how a simple search
transform can be defined on the watch level:
[source,json]
.Simple Search Transform
--------------------------------------------------
{
...
"transform" : {
"search" : {
"request" : {
"body" : { "query" : { "match_all" : {} }}
}
}
}
...
}
--------------------------------------------------
Like every other search based construct, one can make use of elasticsearch's full search API by providing
additional parameters:
[source,json]
.Simple Search Transform
--------------------------------------------------
{
"transform" : {
"search" : {
"request" : {
"search_type" : "count",
"indices" : [ "logstash-*" ],
"body" : {
"query" : {
"match" : { "priority" : "error"}
}
}
}
}
}
}
--------------------------------------------------
The above example executes a {ref}/search-request-search-type.html#count[count] search over all the logstash indices, matching all
the events with `error` priority.
The following table lists all available settings for the search transform:
[[transform-search-settings]]
.Search Transform Settings
[options="header"]
|======
| Name |Required | Default | Description
| `request.search_type` | no | {ref}/search-request-search-type.html#query-then-fetch[query_then_fetch] | The search {ref}/search-request-search-type.html[search type]
| `request.indices` | no | all indices | One or more indices to search on (may be a comma-delimited string or an array of index names). <<dynamic-index-names, Dynamic index names>> are supported.
| `request.types` | no | all types | One or more document types to search on (may be a comma-delimited string or an array of document type names)
| `request.body` | no | `match_all` query | The body of the request. The {ref}/search-request-body.html[request body] follows the same structure you normally send in the body of a REST `_search` request. The body can be static text or include `mustache` <<templates, templates>>.
| `request.indices_options.expand_wildcards` | no | `open` | Determines how to expand indices wildcards. Can be one of `open`, `closed`, `none` or `all` (see {ref}/multi-index.html[multi-index support])
| `request.indices_options.ignore_unavailable` | no | `true` | A boolean value that determines whether the search should leniently ignore unavailable indices (see {ref}/multi-index.html[multi-index support])
| `request.indices_options.allow_no_indices` | no | `true` | A boolean value that determines whether the search should leniently return no results when no indices are resolved (see {ref}/multi-index.html[multi-index support])
| `request.template` | no | - | The body of the search template. See <<templates, configure templates>> for more information.
| `timeout` | no | 30s | The timeout for waiting for the search api call to return. If no response is returned within this time, the search transform times out and fails.
This setting overrides the default internal search operations <<default-internal-ops-timeouts, timeouts>>.
| `dynamic_name_timezone` | no | - | The time zone to use for resolving the index name based on <<dynamic-index-names, Dynamic Index Names>>. The default time zone also can be <<dynamic-index-name-timezone, configured>> globally.
|======
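For example, the following sketch (based on the settings listed above; the index pattern and values are illustrative) sets the wildcard expansion behavior and a custom timeout for the search:
[source,json]
--------------------------------------------------
{
  "transform" : {
    "search" : {
      "request" : {
        "indices" : [ "logstash-*" ],
        "indices_options" : {
          "expand_wildcards" : "open",
          "ignore_unavailable" : true
        },
        "body" : { "query" : { "match_all" : {} } }
      },
      "timeout" : "10s"
    }
  }
}
--------------------------------------------------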
[[transform-search-template]]
===== Template Support
As can be seen in the <<transform-search-settings, table>> above, the search transform supports mustache templates.
The template can either be defined as part of the body, or alternatively, can point to a pre-defined/registered template (either
defined in a file or {ref}/search-template.html#pre-registered-templates[registered] as a script in elasticsearch).
The following snippet shows an example of a search that refers to the scheduled time of the watch:
[source,json]
.Simple Search Transform using body template support
--------------------------------------------------
{
"transform" : {
"search" : {
"search_type" : "count",
"index" : [ "logstash-*" ],
"type" : "event",
"body" : {
"query" : {
"filtered" : {
"filter" : {
"bool" : {
"must" : [
{
"range" : {
"@timestamp" : {
"from" : "{{ctx.trigger.scheduled_time}}||-30s",
"to" : "{{ctx.trigger.triggered_time}}"
}
}
},
{
"query" : {
"match" : { "priority" : "error"}
}
}
]
}
}
}
}
}
}
}
}
--------------------------------------------------
The model of the template (based on which the mustache template will be evaluated) is a union between the provided
`template.params` settings and the <<watch-execution-context, standard watch execution context model>>.
[source,json]
.Simple Search Transform using an inline template
--------------------------------------------------
{
"transform" : {
"search" : {
"search_type" : "count",
"index" : [ "logstash-*" ],
"type" : "event",
"body" : {
"template" {
"inline" : {
"query" : {
"filtered" : {
"filter" : {
"bool" : {
"must" : [
{
"range" : {
"@timestamp" : {
"from" : "{{ctx.trigger.scheduled_time}}||-30s",
"to" : "{{ctx.trigger.triggered_time}}"
}
}
},
{
"query" : {
"match" : { "priority" : "{{priority}}"}
}
}
]
}
}
}
},
"params" : {
"priority" : "error"
}
}
}
}
}
}
}
--------------------------------------------------

View File

@ -0,0 +1,12 @@
[[trigger]]
=== Trigger
Every watch must have a `trigger` that defines when the watch execution process should start.
When you create a watch, its trigger is registered with the appropriate _trigger engine_.
The trigger engine is responsible for evaluating the trigger and triggering the watch
when needed.
Watcher is designed to support different types of triggers, but
only time-based <<trigger-schedule, `schedule`>> triggers are currently available.
include::trigger/schedule.asciidoc[]

View File

@ -0,0 +1,52 @@
[[trigger-schedule]]
[float]
=== Schedule Trigger
Schedule <<trigger, triggers>> define when the watch execution should start based on
date and time. All times are specified in UTC time.
NOTE: Be careful when setting trigger times between midnight and 1:00 AM as daylight savings
time changes can cause a watch to skip or repeat depending on whether the time moves
back or jumps forward.
Watcher uses the system clock to determine the current time. To ensure schedules are triggered
when expected, you should synchronize the clocks of all nodes in the cluster using a time service
such as http://www.ntp.org/[NTP].
Keep in mind that the throttle period can affect when a watch is actually executed. The default
throttle period is five seconds (5000 ms). If you configure a schedule that's more frequent than
the throttle period, the throttle period overrides the schedule. For example, if you set the
throttle period to one minute (60000 ms) and set the schedule to every 10 seconds, the watch is
executed no more than once per minute. For more information about throttling,
see <<actions-ack-throttle, Acknowledgement and Throttling>>.
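For example, the following sketch (using the watch-level `throttle_period` field described in the <<api-rest-put-watch, PUT Watch API>>) combines a 10-second schedule with a one-minute throttle period, so the actions run at most once per minute:
[source,json]
--------------------------------------------------
{
  ...
  "trigger" : {
    "schedule" : {
      "interval" : "10s"
    }
  },
  "throttle_period" : "1m"
  ...
}
--------------------------------------------------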
Watcher provides several types of schedule triggers:
* <<schedule-hourly, `hourly`>>
* <<schedule-daily, `daily`>>
* <<schedule-weekly, `weekly`>>
* <<schedule-monthly, `monthly`>>
* <<schedule-yearly, `yearly`>>
* <<schedule-cron, `cron`>>
* <<schedule-interval, `interval`>>
[[schedule-scheduler]]
==== Scheduler
When you create a scheduled watch, its schedule is registered with the _scheduler_ trigger engine. The scheduler tracks time and triggers the execution of watches according to their schedules. The scheduler runs on the master node and is bound to the lifecycle of the Watcher service. When the Watcher service is stopped, the scheduler stops with it.
IMPORTANT: The scheduler operates on UTC time. All schedules are relative to UTC.
include::schedule/hourly.asciidoc[]
include::schedule/daily.asciidoc[]
include::schedule/weekly.asciidoc[]
include::schedule/monthly.asciidoc[]
include::schedule/yearly.asciidoc[]
include::schedule/cron.asciidoc[]
include::schedule/interval.asciidoc[]

View File

@ -0,0 +1,142 @@
[[schedule-cron]]
==== `cron` Schedule
A <<trigger-schedule, `schedule`>> trigger that enables you to use a http://unixhelp.ed.ac.uk/CGI/man-cgi?crontab+5[cron] style expression to specify when you want the scheduler to start the watch execution. Watcher uses the cron parser from the http://www.quartz-scheduler.org[Quartz Job Scheduler]. For more information about writing Quartz cron expressions, see the http://www.quartz-scheduler.org/documentation/quartz-1.x/tutorials/crontrigger[Quartz CronTrigger Tutorial].
WARNING: While `cron` triggers are super powerful, we recommend using one of the other schedule types if you can, as they
are much more straightforward to configure. If you use `cron`, construct your `cron` expressions with care to be sure you
are actually setting the schedule you want. You can use the <<croneval, `croneval`>> tool to validate your cron expressions and see what the resulting trigger times will be.
===== Cron Expressions
A cron expression is a string of the following form:
<seconds> <minutes> <hours> <day_of_month> <month> <day_of_week> [year]
All elements are required except for `year`. <<schedule-cron-elements>> shows the valid values for each
element in a cron expression.
[[schedule-cron-elements]]
.Cron Expression Elements
[options="header"]
|======
| Name | Required | Valid Values | Valid Special Characters
| `seconds` | yes | `0`-`59` | `,` `-` `*` `/`
| `minutes` | yes | `0`-`59` | `,` `-` `*` `/`
| `hours` | yes | `0`-`23` | `,` `-` `*` `/`
| `day_of_month` | yes | `1`-`31` | `,` `-` `*` `/` `?` `L` `W`
| `month` | yes | `1`-`12` or `JAN`-`DEC` | `,` `-` `*` `/`
| `day_of_week` | yes | `1`-`7` or `SUN`-`SAT` | `,` `-` `*` `/` `?` `L` `#`
| `year` | no | empty or `1970`-`2099` | `,` `-` `*` `/`
|======
The special characters you can use in a cron expression are described in <<schedule-cron-special-characters>>.
The names of months and days of the week are not case sensitive. For example, `MON` and `mon` are equivalent.
Be careful when setting trigger times between midnight and 1:00 AM as daylight savings time changes can
cause a watch to skip or repeat depending on whether the time moves back or jumps forward.
NOTE: Currently, you must specify `?` for either the `day_of_week` or `day_of_month`. Explicitly specifying
both values is not supported.
[[schedule-cron-special-characters]]
.Cron Special Characters
[options="header"]
|======
| Special Character | Description
| * | All values. Selects every possible value for a field. For example, `*` in the `hours` field means "every hour".
| ? | No specific value. Use when you don't care what the value is. For example, if you want the schedule to trigger on a particular day of the month, but don't care what day of the week that happens to be, you can specify `?` in the `day_of_week` field.
| - | A range of values (inclusive). Use to separate a minimum and maximum value. For example, if you want
the schedule to trigger every hour between 9:00 AM and 5:00 PM, you could specify `9-17` in the `hours` field.
| , | Multiple values. Use to separate multiple values for a field. For example, if you want the schedule to trigger every Tuesday and Thursday, you could specify `TUE,THU` in the `day_of_week` field.
| / | Increment. Use to separate values when specifying a time increment. The first value represents the starting point, and the second value represents the interval. For example, if you want the schedule to trigger every 20 minutes starting at the top of the hour, you could specify `0/20` in the `minutes` field. Similarly, specifying `1/5` in day_of_month field will trigger every 5 days starting on the first day of the month.
| L | Last. Use in the `day_of_month` field to mean the last day of the month--day 31 for January, day 28 for February in non-leap years, day 30 for April, and so on. Use alone in the `day_of_week` field in place of `7` or `SAT`, or after a particular day of the week to select the last day of that type in the month. For example `6L` means the last Friday of the month. You can specify
`LW` in the `day_of_month` field to specify the last weekday of the month. Avoid using the `L` option when specifying lists or ranges of values, as the results likely won't be what you expect.
| W | Weekday. Use to specify the weekday (Monday-Friday) nearest the given day. As an example, if you specify `15W` in the `day_of_month` field and the 15th is a Saturday, the schedule will trigger on the 14th. If the 15th is a Sunday, the schedule will trigger on Monday the 16th. If the 15th is a Tuesday, the schedule will trigger on Tuesday the 15th. However if you specify `1W` as the value for `day_of_month`, and the 1st is a Saturday, the schedule will trigger on Monday the 3rd--it won't jump over the month boundary. You can specify `LW` in the `day_of_month` field to specify the last weekday of the month. You can only use the `W` option when the `day_of_month` is a single day--it is not valid when specifying a range or list of days.
| # | Nth XXX day in a month. Use in the `day_of_week` field to specify the nth XXX day of the month. For example, if you specify `6#1`, the schedule will trigger on the first Friday of the month. Note that if you specify `3#5` and there are not 5 Tuesdays in a particular month, the schedule won't trigger that month.
|======
.Setting Daily Triggers
[options="header"]
|======
| Cron Expression | Description
| `0 5 9 * * ?` | Trigger at 9:05 AM every day.
| `0 5 9 * * ? 2015` | Trigger at 9:05 AM every day during the year 2015.
|======
.Restricting Triggers to a Range of Days or Times
[options="header"]
|======
| Cron Expression | Description
| `0 5 9 ? * MON-FRI` | Trigger at 9:05 AM Monday through Friday.
| `0 0-5 9 * * ?` | Trigger every minute starting at 9:00 AM and ending at 9:05 AM every day.
|======
.Setting Interval Triggers
[options="header"]
|======
| Cron&nbsp;Expression&nbsp; | Description
| `0 0/15 9 * * ?` | Trigger every 15 minutes starting at 9:00 AM and ending at 9:45 AM every day.
| `0 5 9 1/3 * ?` | Trigger at 9:05 AM every 3 days every month, starting on the first day of the month.
|======
.Setting Schedules that Trigger on a Particular Day
[options="header"]
|======
| Cron Expression | Description
| `0 1 4 1 4 ?` | Trigger every April 1st at 4:01 AM.
| `0 0,30 9 ? 4 WED` | Trigger at 9:00 AM and at 9:30 AM every Wednesday in the month of April.
| `0 5 9 15 * ?` | Trigger at 9:05 AM on the 15th day of every month.
| `0 5 9 15W * ?` | Trigger at 9:05 AM on the nearest weekday to the 15th of every month.
| `0 5 9 ? * 6#1` | Trigger at 9:05 AM on the first Friday of every month.
|======
.Setting Triggers Using Last
[options="header"]
|======
| Cron Expression | Description
| `0 5 9 L * ?` | Trigger at 9:05 AM on the last day of every month.
| `0 5 9 ? * 2L` | Trigger at 9:05 AM on the last Monday of every month.
| `0 5 9 LW * ?` | Trigger at 9:05 AM on the last weekday of every month.
|======
===== Configuring a Cron Schedule
To configure a `cron` schedule, you simply specify the cron expression as a string value.
For example, the following snippet configures a `cron` schedule that triggers every day at noon:
[source,json]
--------------------------------------------------
{
...
"trigger" : {
"schedule" : {
"cron" : "0 0 12 * * ?"
}
}
...
}
--------------------------------------------------
[[croneval]]
===== Verifying Cron Expressions
Watcher ships with a `croneval` command line tool that you can use to verify that your cron expressions are
valid and produce the expected results. This tool is
provided in the `$ES_HOME/bin/watcher` directory.
To verify a cron expression, simply pass it in as a string to `croneval`:
[source,bash]
--------------------------------------------------
bin/watcher/croneval "0 0/1 * * * ?"
--------------------------------------------------
If the cron expression is valid, `croneval` displays the next 10 times that the schedule will be triggered.
You can specify the `-c` option to control how many future trigger times are displayed. For example,
the following command displays the next 20 trigger times.
[source,bash]
--------------------------------------------------
bin/watcher/croneval "0 0/1 * * * ?" -c 20
--------------------------------------------------

View File

@ -0,0 +1,98 @@
[[schedule-daily]]
==== Daily Schedule
A <<trigger-schedule, `schedule`>> that triggers at a particular time
every day. To use the `daily` schedule, you specify the time of day (or times)
when you want the scheduler to start the watch execution with the `at` attribute.
Times are specified in the
form `HH:mm` on a 24-hour clock. You can also use the reserved values
`midnight` and `noon` for `00:00` and `12:00`, and <<specifying-times-using-objects, specify times using objects>>.
NOTE: If you don't specify the `at` attribute for a `daily` schedule, it defaults
to firing once daily at midnight, `00:00`.
===== Configuring a Daily Schedule
To configure a once a day schedule, you specify a single time with the `at`
attribute. For example, the following `daily` schedule triggers once every day at 5:00 PM.
[source,json]
--------------------------------------------------
{
...
"trigger" : {
"schedule" : {
"daily" : { "at" : "17:00" }
}
}
...
}
--------------------------------------------------
===== Configuring a Multiple Times Daily Schedule
To configure a `daily` schedule that triggers at multiple times during the day, you specify
an array of times. For example, the following `daily` schedule triggers at `00:00`, `12:00`, and
`17:00` every day.
[source,json]
--------------------------------------------------
{
...
"trigger" : {
"schedule" : {
"daily" : { "at" : [ "midnight", "noon", "17:00" ] }
}
}
...
}
--------------------------------------------------
[[specifying-times-using-objects]]
===== Specifying Times Using Objects
In addition to using the `HH:mm` string syntax to specify times, you
can specify a time as an object that has `hour` and `minute` attributes.
For example, the following `daily` schedule triggers once every day at 5:00 PM.
[source,json]
--------------------------------------------------
{
...
"trigger" : {
"schedule" : {
"daily" : {
"at" {
"hour" : 17,
"minute" : 0
}
}
}
}
...
}
--------------------------------------------------
To specify multiple times using the object notation, you specify multiple hours or minutes as an array.
For example, the following `daily` schedule triggers at `00:00`, `00:30`, `12:00`, `12:30`, `17:00` and `17:30`
every day.
[source,json]
--------------------------------------------------
{
...
"trigger" : {
"schedule" : {
"daily" : {
"at" {
"hour" : [ 0, 12, 17 ],
"minute" : [0, 30]
}
}
}
}
...
}
--------------------------------------------------

View File

@ -0,0 +1,49 @@
[[schedule-hourly]]
==== Hourly Schedule
A <<trigger-schedule, `schedule`>> that triggers at a particular minute
every hour of the day. To use the `hourly` schedule, you specify the minute (or minutes)
when you want the scheduler to start the watch execution with the `minute` attribute.
NOTE: If you don't specify the `minute` attribute for an `hourly` schedule, it defaults to `0` and the
schedule triggers on the hour every hour--`12:00`, `13:00`, `14:00`, and so on.
===== Configuring a Once an Hour Schedule
To configure a once an hour schedule, you specify a single time with the `minute`
attribute.
For example, the following `hourly` schedule triggers at minute 30 every hour--
`12:30`, `13:30`, `14:30`, and so on.
[source,json]
--------------------------------------------------
{
...
"trigger" : {
"schedule" : {
"hourly" : { "minute" : 30 }
}
}
...
}
--------------------------------------------------
===== Configuring a Multiple Times Hourly Schedule
To configure an `hourly` schedule that triggers at multiple times during the hour, you specify
an array of minutes. For example, the following schedule triggers every 15
minutes every hour--`12:00`, `12:15`, `12:30`, `12:45`, `13:00`, `13:15`, and so on.
[source,json]
--------------------------------------------------
{
...
"trigger" : {
"schedule" : {
"hourly" : { "minute" : [ 0, 15, 30, 45 ] }
}
}
...
}
--------------------------------------------------

View File

@ -0,0 +1,35 @@
[[schedule-interval]]
==== Interval Schedule
A <<trigger-schedule, `schedule`>> that triggers at a fixed time interval.
The interval can be set in seconds, minutes, hours, days, or weeks:
* `"Xs"` - trigger every `X` seconds. For example, `"30s"` means every 30 seconds.
* `"Xm"` - trigger every `X` minutes. For example, `"5m"` means every 5 minutes.
* `"Xh"` - trigger every `X` hours. For example, `"12h"` means every 12 hours.
* `"Xd"` - trigger every `X` days. For example, `"3d"` means every 3 days.
* `"Xw"` - trigger every `X` weeks. For example, `"2w"` means every 2 weeks.
If you don't specify a time unit, it defaults to seconds.
NOTE: The interval value differs from the standard _time value_ used in Elasticsearch.
You cannot configure intervals in milliseconds or nanoseconds.
===== Configuring an Interval Schedule
To configure an `interval` schedule, you simply specify a string value that represents the interval.
If you omit the unit of time (`s`,`m`, `h`, `d`, or `w`), it defaults to seconds.
For example, the following `interval` schedule triggers every five minutes.
[source,json]
.Interval Schedule
--------------------------------------------------
{
...
"trigger" : {
"schedule" : {
"interval" : "5m"
}
}
...
}
--------------------------------------------------
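Since the unit defaults to seconds when omitted, a plain numeric value should be equivalent. For example, the following sketch configures an interval of 30 seconds:
[source,json]
--------------------------------------------------
{
  ...
  "trigger" : {
    "schedule" : {
      "interval" : "30"
    }
  }
  ...
}
--------------------------------------------------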

View File

@ -0,0 +1,72 @@
[[schedule-monthly]]
==== Monthly Schedule
A <<trigger-schedule, `schedule`>> that triggers at a specific day and time
every month. To use the `monthly` schedule, you specify the day of the month and time (or days and times)
when you want the scheduler to start the watch execution with the `on` and `at` attributes.
You specify the day of month as a numeric value between `1` and `31` (inclusive). Times are specified in the
form `HH:mm` on a 24-hour clock. You can also use the reserved values
`midnight` and `noon` for `00:00` and `12:00`.
===== Configuring a Monthly Schedule
To configure a once a month schedule, you specify a single day and time with the `on`
and `at` attributes. For example, the following `monthly` schedule triggers on the 10th of each
month at noon.
[source,json]
--------------------------------------------------
{
...
"trigger" : {
"schedule" : {
"monthly" : { "on" : 10, "at" : "noon" }
}
}
...
}
--------------------------------------------------
NOTE: You can also specify the day and time with the `day` and `time` attributes; they are
interchangeable with `on` and `at`.
===== Configuring a Multiple Times Monthly Schedule
To configure a `monthly` schedule that triggers multiple times a month, you can specify
an array of day and time values. For example, the following `monthly` schedule
triggers at 12:00 PM on the 10th of each month and at 5:00 PM on the 20th of each month.
[source,json]
--------------------------------------------------
{
...
"trigger" : {
"schedule" : {
"monthly" : [
{ "on" : 10, "at" : "noon" },
{ "on" : 20, "at" : "17:00" }
]
}
}
...
}
--------------------------------------------------
Alternatively, you can specify days and times in an object that has `on` and `at` attributes
that contain an array of values. For example, the following `monthly` schedule triggers at 12:00 AM and 12:00 PM on the
10th and 20th of each month.
[source,json]
--------------------------------------------------
{
...
"trigger" : {
"schedule" : {
"monthly" : {
"on" : [ 10, 20 ],
"at" : [ "midnight", "noon" ]
}
}
}
...
}
--------------------------------------------------

View File

@ -0,0 +1,77 @@
[[schedule-weekly]]
==== Weekly Schedule
A <<trigger-schedule, `schedule`>> that triggers at a specific day and time
every week. To use the `weekly` schedule, you specify the day and time (or days and times)
when you want the scheduler to start the watch execution with the `on` and `at` attributes.
You can specify the day of the week by name, abbreviation, or number (with Sunday being the first day of the week):
* `sunday`, `monday`, `tuesday`, `wednesday`, `thursday`, `friday` and `saturday`
* `sun`, `mon`, `tue`, `wed`, `thu`, `fri` and `sat`
* `1`, `2`, `3`, `4`, `5`, `6` and `7`
Times are specified in the form `HH:mm` on a 24-hour clock. You can also use the reserved values
`midnight` and `noon` for `00:00` and `12:00`.
===== Configuring a Weekly Schedule
To configure a once-a-week schedule, you specify the day with the `on` attribute
and the time with the `at` attribute.
For example, the following `weekly` schedule triggers once a week
on Friday at 5:00 PM.
[source,json]
--------------------------------------------------
{
...
"trigger" : {
"schedule" : {
"weekly" : { "on" : "friday", "at" : "17:00" }
}
}
...
}
--------------------------------------------------
NOTE: You can also specify the day and time with the `day` and `time` attributes; they are
interchangeable with `on` and `at`.
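For reference, the following sketch is equivalent to the schedule above, using the `day` and
`time` aliases.
[source,json]
--------------------------------------------------
{
  ...
  "trigger" : {
    "schedule" : {
      "weekly" : { "day" : "friday", "time" : "17:00" }
    }
  }
  ...
}
--------------------------------------------------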
===== Configuring a Multiple Times Weekly Schedule
To configure a `weekly` schedule that triggers multiple times a week, you can specify
an array of day and time values. For example, the following `weekly` schedule
triggers every Tuesday at 12:00 PM and every Friday at 5:00 PM.
[source,json]
--------------------------------------------------
{
...
"trigger" : {
"schedule" : {
"weekly" : [
{ "on" : "tuesday", "at" : "noon" },
{ "on" : "friday", "at" : "17:00" }
]
}
}
...
}
--------------------------------------------------
Alternatively, you can specify days and times in an object that has `on` and `at` attributes
that contain an array of values. For example, the following `weekly` schedule triggers every Tuesday and Friday
at 12:00 PM and 5:00 PM.
[source,json]
--------------------------------------------------
{
...
"trigger" : {
"schedule" : {
"weekly" : {
"on" : [ "tuesday", "friday" ],
"at" : [ "noon", "17:00" ]
}
}
}
...
}
--------------------------------------------------
@ -0,0 +1,80 @@
[[schedule-yearly]]
==== Yearly Schedule
A <<trigger-schedule, `schedule`>> that triggers on a specific day and time
every year. To use the `yearly` schedule, you specify the month, day, and time (or months, days, and times)
when you want the watch execution to start, using the `in`, `on`, and `at` attributes.
You can specify the month by name, abbreviation, or number:
* `january`, `february`, `march`, `april`, `may`, `june`, `july`,
`august`, `september`, `october`, `november` and `december`
* `jan`, `feb`, `mar`, `apr`, `may`, `jun`, `jul`, `aug`,
`sep`, `oct`, `nov` and `dec`
* `1`, `2`, `3`, `4`, `5`, `6`, `7`, `8`, `9`, `10`, `11` and `12`
You specify the day of the month as a numeric value between `1` and `31` (inclusive).
Times are specified in the
form `HH:mm` on a 24-hour clock. You can also use the reserved values
`midnight` and `noon` for `00:00` and `12:00`.
===== Configuring a Yearly Schedule
To configure a once-a-year schedule, you specify the month with the `in` attribute, the day with the `on` attribute,
and the time with the `at` attribute.
For example, the following `yearly` schedule triggers once a year at noon on January 10th.
[source,json]
--------------------------------------------------
{
...
"trigger" : {
"schedule" : {
"yearly" : { "in" : "january", "on" : 10, "at" : "noon" }
}
}
...
}
--------------------------------------------------
NOTE: You can also specify the month, day, and time with the `month`, `day`, and `time` attributes; they are
interchangeable with `in`, `on`, and `at`.
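For reference, the following sketch is equivalent to the schedule above, using the `month`, `day`,
and `time` aliases.
[source,json]
--------------------------------------------------
{
  ...
  "trigger" : {
    "schedule" : {
      "yearly" : { "month" : "january", "day" : 10, "time" : "noon" }
    }
  }
  ...
}
--------------------------------------------------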
===== Configuring a Multiple Times Yearly Schedule
To configure a `yearly` schedule that triggers multiple times a year, you can specify
an array of month, day, and time values. For example, the following `yearly` schedule
triggers twice a year: at noon on January 10th, and at 5:00 PM on July 20th.
[source,json]
--------------------------------------------------
{
...
"trigger" : {
"schedule" : {
"yearly" : [
{ "in" : "january", "on" : 10, "at" : "noon" },
{ "in" : "july", "on" : 20, "at" : "17:00" }
]
}
}
...
}
--------------------------------------------------
Alternatively, you can specify the months, days, and times in an object that has `in`, `on`, and `at` attributes
that contain an array of values. For example, the following `yearly` schedule triggers at 12:00 AM and 12:00 PM on January 10th, January 20th, December 10th, and December 20th.
[source,json]
--------------------------------------------------
{
...
"trigger" : {
"schedule" : {
"yearly" : {
"in: : [ "jan", "dec" ],
"on" : [ 10, 20 ],
"at" : [ "midnight", "noon" ]
}
}
}
...
}
--------------------------------------------------
@ -0,0 +1,108 @@
[[release-notes]]
== Release Notes
[float]
[[version-compatibility]]
=== Version Compatibility
Watcher 1.0.0 is compatible with:
* Elasticsearch: 1.5.2+
* License: 1.0
* Shield: 1.2.2
[float]
[[upgrade-instructions]]
=== Upgrading Watcher
Watcher 1.0.0 is not backward compatible with Watcher 1.0.0-rc1. Follow these steps to
upgrade:
1. Back up all of the watches you've defined. You can search or scan the `.watches` index and save the
returned watches somewhere safe (an example request is shown after these steps).
2. Stop Elasticsearch on all nodes in your cluster.
3. Uninstall the Watcher plugin from each node:
+
[source,shell]
--------------------------------------------------
bin/plugin -r watcher
--------------------------------------------------
4. Restart Elasticsearch on each node.
5. Delete the `.watches` index and all of the existing `.watch_history-*` indices:
+
[source,js]
--------------------------------------------------
DELETE .watches
--------------------------------------------------
+
[source,js]
--------------------------------------------------
DELETE .watch_history*
--------------------------------------------------
6. Stop Elasticsearch on all nodes in your cluster.
7. From here on you can simply follow the <<getting-started, Getting Started>> guide. If you are
upgrading from Beta1, you can skip the license installation as both Beta1 and Beta2 are
compatible with the same license version (1.0.0). Once Watcher is installed, you can use the
<<api-rest-put-watch, PUT Watch API>> to restore your backed up watches.
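For step 1, a minimal way to back up your watches is to search the `.watches` index and save the
returned watch sources somewhere safe. The request below is only a sketch; it assumes you have
fewer than 100 watches (increase `size`, or use a scan search, if you have more).
[source,js]
--------------------------------------------------
GET .watches/_search?size=100
--------------------------------------------------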
[float]
[[change-list]]
=== Change List
[float]
==== 1.0.0
.Enhancements
* Added execution time aware <<dynamic-index-names, dynamic index names>> support to `index`
action, `search` input, and `search` transform.
* You must now explicitly specify the unit when configuring any time value. (Numeric-only
values are no longer supported.)
* Cleaned up the <<api-rest-get-watch, Get Watch API>> response.
* Cleaned up the <<api-rest-stats, Stats API>> response.
[float]
==== 1.0.0-rc1
.New Features
* Added <<api-rest-execute-inline-watch, inline watch>> support to the Execute API
.Enhancements
* Added execution context <<watch-execution-context, variables>> support.
* Email HTML body sanitization is now <<email-html-sanitization, configurable>>.
* It is now possible to configure timeouts for http requests in
<<http-input-attributes, HTTP input>> and <<webhook-action-attributes, webhook actions>>.
[float]
==== 1.0.0-Beta2
.New Features
* <<actions-ack-throttle, Acking and Throttling>> are now applied at the action level rather than
the watch level.
* Added support for <<anatomy-actions-index-multi-doc-support, multi-doc>> indexing to the index
action.
* Added a queued watches metric that's accessible via the <<api-rest-stats, Stats API>>.
* Added a currently-executing watches metric that's accessible via the <<api-rest-stats, Stats API>>.
.Enhancements
* The <<condition-compare, compare condition>> result now includes the value of each field that
was referenced in the comparison.
* The <<api-rest-execute-watch, Execute API>> now supports a default trigger event
(**breaking change**)
* The `watch_record` document structure in the `.watch_history-*` indices has changed significantly
(**breaking change**)
* A new internal index was introduced: `.triggered_watches`
* Added support for headers in the <<actions-webhook, Webhook Action>> result and the
<<input-http, HTTP Input>> result.
* Added plain text response body support for the <<input-http, HTTP Input>>.
.Bug Fixes
* Disallowed negative time value settings for <<actions-ack-throttle, `throttle_period`>>.
* Added support for separate keystore and truststore in <<actions-webhook, Webhook Action>>
and <<input-http, HTTP Input>>.
@ -0,0 +1,79 @@
[[troubleshooting]]
== Troubleshooting
Here are some common issues you might encounter while using Watcher. If you don't see a solution
to your problem here, post a question to the {forum}[Watcher Discussion Forum].
[float]
=== Troubleshooting a New Watcher Installation
If Watcher or Elasticsearch fails to start up properly after installation:
* Make sure you are running Elasticsearch 1.5 or later.
* Make sure the License plugin is installed on every node in the cluster.
* If you are using Shield, make sure you are running Shield 1.2.2 or later.
* Make sure Watcher is installed on every node in the cluster (see below for one way to check).
* Make sure all plugin versions are compatible with the Elasticsearch version.
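One way to verify which plugins are installed on each node is to query the nodes info API. This
request is only a sketch; the response format varies by Elasticsearch version.
[source,js]
--------------------------------------------------
GET /_nodes/plugins
--------------------------------------------------
// AUTOSENSE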
[float]
=== Logstash Can't Connect to Elasticsearch after Installing Watcher
By default, Logstash uses the `node` protocol. When you use the node protocol, the Logstash
instance joins the Elasticsearch cluster. Because Watcher requires all instances in the cluster
to have the License plugin, Logstash cannot join the cluster unless it has the License plugin.
You can <<logstash-integration, install the Logstash License plugin>> or use the `transport` or
`http` protocol to ship data to Elasticsearch.
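For example, here is a sketch of a Logstash `elasticsearch` output that ships data over HTTP
instead of joining the cluster as a node. The option names shown (`host`, `protocol`) depend on
your Logstash version, so treat them as assumptions and check the Logstash documentation for your release.
[source,ruby]
--------------------------------------------------
output {
  elasticsearch {
    host     => "localhost"   # Elasticsearch host to send events to
    protocol => "http"        # ship over HTTP instead of joining the cluster
  }
}
--------------------------------------------------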
[float]
=== Dynamic Mapping Error When Trying to Add a Watch
If you get the error _Dynamic Mapping is Disabled_ when you try to add a watch, verify that the
index mappings for the `.watches` index are available. You can do that by submitting the following
request:
[source,js]
--------------------------------------------------
GET .watches/_mapping
--------------------------------------------------
// AUTOSENSE
If the index mappings are missing, follow these steps to restore the correct mappings:
. Stop the node.
. Add the configuration setting `watcher.index.rest.direct_access : true` to `elasticsearch.yml`.
. Restart the node.
. Delete the `.watches` index:
+
[source,js]
--------------------------------------------------
DELETE .watches
--------------------------------------------------
+
. Disable direct access to the `.watches` index:
.. Stop the node.
.. Remove `watcher.index.rest.direct_access : true` from `elasticsearch.yml`.
.. Restart the node.
[float]
=== Unable to Send Email
If you get an authentication error that indicates that you need to continue the sign-in process
from a web browser when Watcher attempts to send email, you need to configure Gmail to
https://support.google.com/accounts/answer/6010255?hl=en[Allow Less Secure Apps to access your account].
If you have two-step verification enabled for your email account, you must generate and use an App
Specific password to send email from Watcher. For more information, see:
* Gmail: https://support.google.com/accounts/answer/185833?hl=en[Sign in using App Passwords]
* Outlook.com: http://windows.microsoft.com/en-us/windows/app-passwords-two-step-verification[App passwords and two-step verification]
[float]
=== Watcher Not Responsive
Keep in mind that there's no built-in validation of scripts that you add to a watch. Buggy or
deliberately malicious scripts can negatively impact Watcher performance. For example, if you
add multiple watches with buggy script conditions in a short period of time, Watcher might be
temporarily unable to process watches until the bad watches time out.
watcher/pom.xml Normal file
@ -0,0 +1,364 @@
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>org.elasticsearch</groupId>
<artifactId>elasticsearch-watcher</artifactId>
<version>2.0.0.beta1-SNAPSHOT</version>
<scm>
<connection>scm:git:git@github.com:elastic/elasticsearch-watcher.git</connection>
<developerConnection>scm:git:git@github.com:elastic/elasticsearch-watcher.git</developerConnection>
<url>http://github.com/elastic/elasticsearch-watcher</url>
</scm>
<parent>
<groupId>org.elasticsearch.plugin</groupId>
<artifactId>elasticsearch-plugin</artifactId>
<version>2.0.0.beta1-SNAPSHOT</version>
</parent>
<properties>
<elasticsearch.license.header>dev-tools/elasticsearch_license_header.txt</elasticsearch.license.header>
<elasticsearch.license.headerDefinition>dev-tools/license_header_definition.xml</elasticsearch.license.headerDefinition>
<elasticsearch.integ.antfile>dev-tools/integration-tests.xml</elasticsearch.integ.antfile>
<license.plugin.version>2.0.0.beta1-SNAPSHOT</license.plugin.version>
<shield.plugin.version>2.0.0.beta1-SNAPSHOT</shield.plugin.version>
<tests.rest.load_packaged>false</tests.rest.load_packaged>
<tests.timewarp>true</tests.timewarp>
</properties>
<dependencies>
<!-- Test dependencies -->
<dependency>
<groupId>org.subethamail</groupId>
<artifactId>subethasmtp</artifactId>
<version>3.1.7</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.hamcrest</groupId>
<artifactId>hamcrest-all</artifactId>
<version>1.3</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.elasticsearch</groupId>
<artifactId>securemock</artifactId>
<version>1.0-SNAPSHOT</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>com.squareup.okhttp</groupId>
<artifactId>mockwebserver</artifactId>
<version>2.3.0</version>
<scope>test</scope>
</dependency>
<dependency> <!-- required for rest tests -->
<groupId>org.apache.httpcomponents</groupId>
<artifactId>httpclient</artifactId>
<version>4.3.5</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.apache.lucene</groupId>
<artifactId>lucene-expressions</artifactId>
<version>${lucene.maven.version}</version>
<scope>compile</scope>
<optional>true</optional>
</dependency>
<dependency>
<groupId>log4j</groupId>
<artifactId>log4j</artifactId>
<version>1.2.17</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-log4j12</artifactId>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.elasticsearch</groupId>
<artifactId>elasticsearch-license-plugin</artifactId>
<version>${license.plugin.version}</version>
<type>zip</type>
<scope>test</scope>
</dependency>
<!-- needed for tests that use templating -->
<dependency>
<groupId>com.github.spullara.mustache.java</groupId>
<artifactId>compiler</artifactId>
<version>0.8.13</version>
<optional>true</optional>
</dependency>
<!-- Regular dependencies -->
<dependency>
<groupId>org.elasticsearch</groupId>
<artifactId>elasticsearch</artifactId>
<version>${elasticsearch.version}</version>
<scope>provided</scope>
</dependency>
<dependency>
<groupId>org.elasticsearch</groupId>
<artifactId>elasticsearch-license-plugin</artifactId>
<version>${license.plugin.version}</version>
<optional>true</optional>
</dependency>
<dependency>
<groupId>org.elasticsearch</groupId>
<artifactId>elasticsearch-shield</artifactId>
<version>${shield.plugin.version}</version>
<optional>true</optional>
</dependency>
<dependency>
<groupId>com.googlecode.owasp-java-html-sanitizer</groupId>
<artifactId>owasp-java-html-sanitizer</artifactId>
<version>r239</version>
</dependency>
<dependency>
<groupId>com.sun.mail</groupId>
<artifactId>javax.mail</artifactId>
<version>1.5.3</version>
</dependency>
<dependency>
<groupId>javax.activation</groupId>
<artifactId>activation</artifactId>
<version>1.1.1</version>
</dependency>
</dependencies>
<dependencyManagement>
<dependencies>
<dependency>
<groupId>org.elasticsearch</groupId>
<artifactId>elasticsearch-shield</artifactId>
<version>${shield.plugin.version}</version>
<exclusions>
<exclusion>
<groupId>org.elasticsearch</groupId>
<artifactId>elasticsearch</artifactId>
</exclusion>
</exclusions>
</dependency>
<dependency>
<groupId>org.elasticsearch</groupId>
<artifactId>elasticsearch-license-plugin</artifactId>
<version>${license.plugin.version}</version>
<exclusions>
<exclusion>
<groupId>org.elasticsearch</groupId>
<artifactId>elasticsearch</artifactId>
</exclusion>
<exclusion>
<groupId>com.spatial4j</groupId>
<artifactId>spatial4j</artifactId>
</exclusion>
</exclusions>
</dependency>
<dependency>
<groupId>org.subethamail</groupId>
<artifactId>subethasmtp</artifactId>
<version>3.1.7</version>
<exclusions>
<exclusion>
<groupId>javax.mail</groupId>
<artifactId>mail</artifactId>
</exclusion>
</exclusions>
</dependency>
</dependencies>
</dependencyManagement>
<repositories>
<repository>
<id>elasticsearch-releases</id>
<url>http://maven.elasticsearch.org/releases</url>
<releases>
<enabled>true</enabled>
<updatePolicy>daily</updatePolicy>
</releases>
<snapshots>
<enabled>false</enabled>
</snapshots>
</repository>
<repository>
<id>elasticsearch-snapshots</id>
<url>http://maven.elasticsearch.org/snapshots</url>
<releases>
<enabled>false</enabled>
</releases>
<snapshots>
<enabled>true</enabled>
<updatePolicy>always</updatePolicy>
</snapshots>
</repository>
<repository>
<id>maven2-repository.dev.java.net</id>
<name>Java.net Repository for Maven</name>
<url>http://download.java.net/maven/2/</url>
<layout>default</layout>
</repository>
<repository>
<id>oss-snapshots</id>
<name>Sonatype OSS Snapshots</name>
<url>https://oss.sonatype.org/content/repositories/snapshots/</url>
</repository>
</repositories>
<build>
<resources>
<resource>
<directory>src/main/resources</directory>
<filtering>true</filtering>
</resource>
</resources>
<testResources>
<testResource>
<directory>${basedir}/src/test/resources</directory>
<includes>
<include>**/*.*</include>
</includes>
</testResource>
<testResource>
<directory>${basedir}/rest-api-spec</directory>
<targetPath>rest-api-spec</targetPath>
<includes>
<include>api/*.json</include>
<include>test/**/*.yaml</include>
</includes>
</testResource>
</testResources>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-enforcer-plugin</artifactId>
</plugin>
<plugin>
<groupId>org.codehaus.mojo</groupId>
<artifactId>buildnumber-maven-plugin</artifactId>
</plugin>
<plugin>
<artifactId>maven-assembly-plugin</artifactId>
<version>2.4</version>
<configuration>
<attach>false</attach>
</configuration>
</plugin>
<plugin>
<groupId>com.carrotsearch.randomizedtesting</groupId>
<artifactId>junit4-maven-plugin</artifactId>
<configuration>
<systemProperties>
<tests.timewarp>${tests.timewarp}</tests.timewarp>
</systemProperties>
</configuration>
</plugin>
</plugins>
</build>
<profiles>
<profile>
<id>deploy-internal</id>
<distributionManagement>
<repository>
<id>elasticsearch-internal-releases</id>
<name>Elasticsearch Internal Releases</name>
<url>http://maven.elasticsearch.org/artifactory/internal-releases</url>
</repository>
<snapshotRepository>
<id>elasticsearch-internal-snapshots</id>
<name>Elasticsearch Internal Snapshots</name>
<url>http://maven.elasticsearch.org/artifactory/internal-snapshots</url>
</snapshotRepository>
</distributionManagement>
</profile>
<profile>
<id>deploy-public</id>
<distributionManagement>
<repository>
<id>elasticsearch-public-releases</id>
<name>Elasticsearch Public Releases</name>
<url>http://maven.elasticsearch.org/artifactory/public-releases</url>
</repository>
</distributionManagement>
</profile>
<profile>
<id>default</id>
<activation>
<activeByDefault>true</activeByDefault>
</activation>
</profile>
<profile>
<id>coverage</id>
<activation>
<property>
<name>tests.coverage</name>
<value>true</value>
</property>
</activation>
<dependencies>
<dependency>
<!-- must be on the classpath -->
<groupId>org.jacoco</groupId>
<artifactId>org.jacoco.agent</artifactId>
<classifier>runtime</classifier>
<version>0.6.4.201312101107</version>
<scope>test</scope>
</dependency>
</dependencies>
<build>
<plugins>
<plugin>
<groupId>org.jacoco</groupId>
<artifactId>jacoco-maven-plugin</artifactId>
<version>0.6.4.201312101107</version>
<executions>
<execution>
<id>default-prepare-agent</id>
<goals>
<goal>prepare-agent</goal>
</goals>
</execution>
<execution>
<id>default-report</id>
<phase>prepare-package</phase>
<goals>
<goal>report</goal>
</goals>
</execution>
<execution>
<id>default-check</id>
<goals>
<goal>check</goal>
</goals>
</execution>
</executions>
<configuration>
<excludes/>
</configuration>
</plugin>
</plugins>
</build>
</profile>
</profiles>
</project>
@ -0,0 +1,54 @@
{
"bulk": {
"documentation": "This file is copied from es core just to verify that the .watches api hijacking works",
"methods": ["POST", "PUT"],
"url": {
"path": "/_bulk",
"paths": ["/_bulk", "/{index}/_bulk", "/{index}/{type}/_bulk"],
"parts": {
"index": {
"type" : "string",
"description" : "Default index for items which don't provide one"
},
"type": {
"type" : "string",
"description" : "Default document type for items which don't provide one"
}
},
"params": {
"consistency": {
"type" : "enum",
"options" : ["one", "quorum", "all"],
"description" : "Explicit write consistency setting for the operation"
},
"refresh": {
"type" : "boolean",
"description" : "Refresh the index after performing the operation"
},
"replication": {
"type" : "enum",
"options" : ["sync","async"],
"default" : "sync",
"description" : "Explicitely set the replication type"
},
"routing": {
"type" : "string",
"description" : "Specific routing value"
},
"timeout": {
"type" : "time",
"description" : "Explicit operation timeout"
},
"type": {
"type" : "string",
"description" : "Default document type for items which don't provide one"
}
}
},
"body": {
"description" : "The operation definition and data (action-data pairs), separated by newlines",
"required" : true,
"serialize" : "bulk"
}
}
}
@ -0,0 +1,55 @@
{
"cluster.health": {
"documentation": "This file is copied from es core because the REST test framework requires it",
"methods": ["GET"],
"url": {
"path": "/_cluster/health",
"paths": ["/_cluster/health", "/_cluster/health/{index}"],
"parts": {
"index": {
"type" : "string",
"description" : "Limit the information returned to a specific index"
}
},
"params": {
"level": {
"type" : "enum",
"options" : ["cluster","indices","shards"],
"default" : "cluster",
"description" : "Specify the level of detail for returned information"
},
"local": {
"type" : "boolean",
"description" : "Return local information, do not retrieve the state from master node (default: false)"
},
"master_timeout": {
"type" : "time",
"description" : "Explicit operation timeout for connection to master node"
},
"timeout": {
"type" : "time",
"description" : "Explicit operation timeout"
},
"wait_for_active_shards": {
"type" : "number",
"description" : "Wait until the specified number of shards is active"
},
"wait_for_nodes": {
"type" : "string",
"description" : "Wait until the specified number of nodes is available"
},
"wait_for_relocating_shards": {
"type" : "number",
"description" : "Wait until the specified number of relocating shards is finished"
},
"wait_for_status": {
"type" : "enum",
"options" : ["green","yellow","red"],
"default" : null,
"description" : "Wait until cluster is in a specific state"
}
}
},
"body": null
}
}
@ -0,0 +1,66 @@
{
"delete": {
"documentation": "This file is copied from es core just to verify that the .watches api hijacking works",
"methods": ["DELETE"],
"url": {
"path": "/{index}/{type}/{id}",
"paths": ["/{index}/{type}/{id}"],
"parts": {
"id": {
"type" : "string",
"required" : true,
"description" : "The document ID"
},
"index": {
"type" : "string",
"required" : true,
"description" : "The name of the index"
},
"type": {
"type" : "string",
"required" : true,
"description" : "The type of the document"
}
},
"params": {
"consistency": {
"type" : "enum",
"options" : ["one", "quorum", "all"],
"description" : "Specific write consistency setting for the operation"
},
"parent": {
"type" : "string",
"description" : "ID of parent document"
},
"refresh": {
"type" : "boolean",
"description" : "Refresh the index after performing the operation"
},
"replication": {
"type" : "enum",
"options" : ["sync","async"],
"default" : "sync",
"description" : "Specific replication type"
},
"routing": {
"type" : "string",
"description" : "Specific routing value"
},
"timeout": {
"type" : "time",
"description" : "Explicit operation timeout"
},
"version" : {
"type" : "number",
"description" : "Explicit version number for concurrency control"
},
"version_type": {
"type" : "enum",
"options" : ["internal", "external", "external_gte", "force"],
"description" : "Specific version type"
}
}
},
"body": null
}
}
@ -0,0 +1,81 @@
{
"delete_by_query": {
"documentation": "This file is copied from es core just to verify that the .watches api hijacking works",
"methods": ["DELETE"],
"url": {
"path": "/{index}/_query",
"paths": ["/{index}/_query", "/{index}/{type}/_query"],
"parts": {
"index": {
"type" : "list",
"required": true,
"description" : "A comma-separated list of indices to restrict the operation; use `_all` to perform the operation on all indices"
},
"type": {
"type" : "list",
"description" : "A comma-separated list of types to restrict the operation"
}
},
"params": {
"analyzer": {
"type" : "string",
"description" : "The analyzer to use for the query string"
},
"consistency": {
"type" : "enum",
"options" : ["one", "quorum", "all"],
"description" : "Specific write consistency setting for the operation"
},
"default_operator": {
"type" : "enum",
"options" : ["AND","OR"],
"default" : "OR",
"description" : "The default operator for query string query (AND or OR)"
},
"df": {
"type" : "string",
"description" : "The field to use as default where no field prefix is given in the query string"
},
"ignore_unavailable": {
"type" : "boolean",
"description" : "Whether specified concrete indices should be ignored when unavailable (missing or closed)"
},
"allow_no_indices": {
"type" : "boolean",
"description" : "Whether to ignore if a wildcard indices expression resolves into no concrete indices. (This includes `_all` string or when no indices have been specified)"
},
"expand_wildcards": {
"type" : "enum",
"options" : ["open","closed","none","all"],
"default" : "open",
"description" : "Whether to expand wildcard expression to concrete indices that are open, closed or both."
},
"replication": {
"type" : "enum",
"options" : ["sync","async"],
"default" : "sync",
"description" : "Specific replication type"
},
"q": {
"type" : "string",
"description" : "Query in the Lucene query string syntax"
},
"routing": {
"type" : "string",
"description" : "Specific routing value"
},
"source": {
"type" : "string",
"description" : "The URL-encoded query definition (instead of using the request body)"
},
"timeout": {
"type" : "time",
"description" : "Explicit operation timeout"
}
}
},
"body": {
"description" : "A query to restrict the operation specified with the Query DSL"
}
}
}
@ -0,0 +1,75 @@
{
"get": {
"documentation": "This file is copied from es core just to verify that the .watches api hijacking works",
"methods": ["GET"],
"url": {
"path": "/{index}/{type}/{id}",
"paths": ["/{index}/{type}/{id}"],
"parts": {
"id": {
"type" : "string",
"required" : true,
"description" : "The document ID"
},
"index": {
"type" : "string",
"required" : true,
"description" : "The name of the index"
},
"type": {
"type" : "string",
"required" : true,
"description" : "The type of the document (use `_all` to fetch the first document matching the ID across all types)"
}
},
"params": {
"fields": {
"type": "list",
"description" : "A comma-separated list of fields to return in the response"
},
"parent": {
"type" : "string",
"description" : "The ID of the parent document"
},
"preference": {
"type" : "string",
"description" : "Specify the node or shard the operation should be performed on (default: random)"
},
"realtime": {
"type" : "boolean",
"description" : "Specify whether to perform the operation in realtime or search mode"
},
"refresh": {
"type" : "boolean",
"description" : "Refresh the shard containing the document before performing the operation"
},
"routing": {
"type" : "string",
"description" : "Specific routing value"
},
"_source": {
"type" : "list",
"description" : "True or false to return the _source field or not, or a list of fields to return"
},
"_source_exclude": {
"type" : "list",
"description" : "A list of fields to exclude from the returned _source field"
},
"_source_include": {
"type" : "list",
"description" : "A list of fields to extract and return from the _source field"
},
"version" : {
"type" : "number",
"description" : "Explicit version number for concurrency control"
},
"version_type": {
"type" : "enum",
"options" : ["internal", "external", "external_gte", "force"],
"description" : "Specific version type"
}
}
},
"body": null
}
}
@ -0,0 +1,82 @@
{
"index": {
"documentation": "This file is copied from es core just to verify that the .watches api hijacking works",
"methods": ["POST", "PUT"],
"url": {
"path": "/{index}/{type}",
"paths": ["/{index}/{type}", "/{index}/{type}/{id}"],
"parts": {
"id": {
"type" : "string",
"description" : "Document ID"
},
"index": {
"type" : "string",
"required" : true,
"description" : "The name of the index"
},
"type": {
"type" : "string",
"required" : true,
"description" : "The type of the document"
}
},
"params": {
"consistency": {
"type" : "enum",
"options" : ["one", "quorum", "all"],
"description" : "Explicit write consistency setting for the operation"
},
"op_type": {
"type" : "enum",
"options" : ["index", "create"],
"default" : "index",
"description" : "Explicit operation type"
},
"parent": {
"type" : "string",
"description" : "ID of the parent document"
},
"refresh": {
"type" : "boolean",
"description" : "Refresh the index after performing the operation"
},
"replication": {
"type" : "enum",
"options" : ["sync","async"],
"default" : "sync",
"description" : "Specific replication type"
},
"routing": {
"type" : "string",
"description" : "Specific routing value"
},
"timeout": {
"type" : "time",
"description" : "Explicit operation timeout"
},
"timestamp": {
"type" : "time",
"description" : "Explicit timestamp for the document"
},
"ttl": {
"type" : "duration",
"description" : "Expiration time for the document"
},
"version" : {
"type" : "number",
"description" : "Explicit version number for concurrency control"
},
"version_type": {
"type" : "enum",
"options" : ["internal", "external", "external_gte", "force"],
"description" : "Specific version type"
}
}
},
"body": {
"description" : "The document",
"required" : true
}
}
}
@ -0,0 +1,28 @@
{
"indices.delete": {
"documentation": "This file is copied from es core just to verify that the .watches api hijacking works",
"methods": ["DELETE"],
"url": {
"path": "/{index}",
"paths": ["/{index}"],
"parts": {
"index": {
"type" : "list",
"required" : true,
"description" : "A comma-separated list of indices to delete; use `_all` or `*` string to delete all indices"
}
},
"params": {
"timeout": {
"type" : "time",
"description" : "Explicit operation timeout"
},
"master_timeout": {
"type" : "time",
"description" : "Specify timeout for connection to master"
}
}
},
"body": null
}
}
@ -0,0 +1,15 @@
{
"info": {
"documentation": "This file is copied from es core because the REST test framework requires it",
"methods": ["GET"],
"url": {
"path": "/",
"paths": ["/"],
"parts": {
},
"params": {
}
},
"body": null
}
}
@ -0,0 +1,98 @@
{
"update": {
"documentation": "This file is copied from es core just to verify that the .watches api hijacking works",
"methods": ["POST"],
"url": {
"path": "/{index}/{type}/{id}/_update",
"paths": ["/{index}/{type}/{id}/_update"],
"parts": {
"id": {
"type": "string",
"required": true,
"description": "Document ID"
},
"index": {
"type": "string",
"required": true,
"description": "The name of the index"
},
"type": {
"type": "string",
"required": true,
"description": "The type of the document"
}
},
"params": {
"consistency": {
"type": "enum",
"options": ["one", "quorum", "all"],
"description": "Explicit write consistency setting for the operation"
},
"fields": {
"type": "list",
"description": "A comma-separated list of fields to return in the response"
},
"lang": {
"type": "string",
"description": "The script language (default: groovy)"
},
"parent": {
"type": "string",
"description": "ID of the parent document"
},
"refresh": {
"type": "boolean",
"description": "Refresh the index after performing the operation"
},
"replication": {
"type": "enum",
"options": ["sync", "async"],
"default": "sync",
"description": "Specific replication type"
},
"retry_on_conflict": {
"type": "number",
"description": "Specify how many times should the operation be retried when a conflict occurs (default: 0)"
},
"routing": {
"type": "string",
"description": "Specific routing value"
},
"script": {
"description": "The URL-encoded script definition (instead of using request body)"
},
"script_id": {
"description": "The id of a stored script"
},
"scripted_upsert": {
"type": "boolean",
"description": "True if the script referenced in script or script_id should be called to perform inserts - defaults to false"
},
"timeout": {
"type": "time",
"description": "Explicit operation timeout"
},
"timestamp": {
"type": "time",
"description": "Explicit timestamp for the document"
},
"ttl": {
"type": "duration",
"description": "Expiration time for the document"
},
"version": {
"type": "number",
"description": "Explicit version number for concurrency control"
},
"version_type": {
"type": "enum",
"options": ["internal", "force"],
"description": "Specific version type"
}
}
},
"body": {
"description": "The request definition using either `script` or partial `doc`"
}
}
}
@ -0,0 +1,28 @@
{
"watcher.ack_watch": {
"documentation": "http://www.elastic.co/guide/en/watcher/current/appendix-api-ack-watch.html",
"methods": [ "PUT", "POST" ],
"url": {
"path": "/_watcher/watch/{watch_id}/_ack",
"paths": [ "/_watcher/watch/{watch_id}/_ack", "/_watcher/watch/{watch_id}/{action_id}/_ack"],
"parts": {
"watch_id": {
"type" : "string",
"description" : "Watch ID",
"required" : true
},
"action_id": {
"type" : "list",
"description" : "A comma-separated list of the action ids to be acked"
}
},
"params": {
"master_timeout": {
"type": "duration",
"description": "Specify timeout for watch write operation"
}
}
},
"body": null
}
}
@ -0,0 +1,28 @@
{
"watcher.delete_watch": {
"documentation": "http://www.elastic.co/guide/en/watcher/current/appendix-api-delete-watch.html",
"methods": [ "DELETE" ],
"url": {
"path": "/_watcher/watch/{id}",
"paths": [ "/_watcher/watch/{id}" ],
"parts": {
"id": {
"type" : "string",
"description" : "Watch ID",
"required" : true
}
},
"params": {
"master_timeout": {
"type": "duration",
"description": "Specify timeout for watch write operation"
},
"force": {
"type": "boolean",
"description": "Specify if this request should be forced and ignore locks"
}
}
},
"body": null
}
}
@ -0,0 +1,27 @@
{
"watcher.execute_watch": {
"documentation": "http://www.elastic.co/guide/en/watcher/current/appendix-api-execute-watch.html",
"methods": [ "PUT", "POST" ],
"url": {
"path": "/_watcher/watch/{id}/_execute",
"paths": [ "/_watcher/watch/{id}/_execute", "/_watcher/watch/_execute" ],
"parts": {
"id": {
"type" : "string",
"description" : "Watch ID"
}
},
"params": {
"debug" : {
"type" : "boolean",
"description" : "indicates whether the watch should execute in debug mode",
"required" : false
}
}
},
"body": {
"description" : "Execution control",
"required" : false
}
}
}