initial migration of shield

Original commit: elastic/x-pack-elasticsearch@2bf095d3cb
This commit is contained in:
uboness 2015-07-13 12:31:34 +02:00
parent f71ba1f025
commit 8babe1c456
333 changed files with 6090 additions and 0 deletions

417
shield/LICENSE.txt Normal file
View File

@ -0,0 +1,417 @@
SHIELD SOFTWARE LICENSE AGREEMENT
READ THIS AGREEMENT CAREFULLY, WHICH CONSTITUTES A LEGALLY BINDING AGREEMENT AND GOVERNS YOUR USE OF ELASTICSEARCH'S
SHIELD SOFTWARE. BY INSTALLING AND/OR USING THE SHIELD SOFTWARE, YOU ARE INDICATING THAT YOU AGREE TO THE TERMS AND
CONDITIONS SET FORTH IN THIS AGREEMENT. IF YOU DO NOT AGREE WITH SUCH TERMS AND CONDITIONS, YOU MAY NOT INSTALL OR USE
THE SHIELD SOFTWARE.
This SHIELD SOFTWARE LICENSE AGREEMENT (this "Agreement") is entered into by and between the applicable Elasticsearch
entity referred to in Attachment 1 below ("Elasticsearch") and the person or entity ("You") that has downloaded
Elasticsearch's Shield software to which this Agreement is attached ("Shield Software"). This Agreement is effective as
of the date an applicable ordering document ("Order Form") is entered into by Elasticsearch and You (the "Effective
Date").
1. SOFTWARE LICENSE AND RESTRICTIONS
1.1 License Grants.
(a) 30 Day Free Trial License. Subject to the terms and conditions of this Agreement, Elasticsearch agrees to grant,
and does hereby grant to You for a period of thirty (30) days from the Effective Date (the "Trial Term"), solely for
Your internal business operations, a limited, non-exclusive, non-transferable, fully paid up, right and license
(without the right to grant or authorize sublicenses) to: (i) install and use the object code version of the Shield
Software; (ii) use, and distribute internally a reasonable number of copies of the documentation, if any, provided with
the Shield Software ("Documentation"), provided that You must include on such copies all Elasticsearch trademarks, trade
names, logos and notices present on the Documentation as originally provided to You by Elasticsearch; (iii) permit third
party contractors performing services on Your behalf to use the Shield Software and Documentation as set forth in (i)
and (ii) above, provided that such use must be solely for Your benefit, and You shall be responsible for all acts and
omissions of such contractors in connection with their use of the Shield Software. For the avoidance of doubt, You
understand and agree that upon the expiration of the Trial Term, Your license to use the Shield Software will terminate,
unless you purchase a Qualifying Subscription (as defined below) for Elasticsearch support services.
(b) Fee-Bearing Production License. Subject to the terms and conditions of this Agreement and complete payment of any
and all applicable fees for a Gold or Platinum production subscription for support services for Elasticsearch open
source software (in each case, a "Qualifying Subscription"), Elasticsearch agrees to grant, and does hereby grant to You
during the term of the applicable Qualifying Subscription, and for the restricted scope of this Agreement, solely for
Your internal business operations, a limited, non-exclusive, non-transferable right and license (without the right to
grant or authorize sublicenses) to: (i) install and use the object code version of the Shield Software, subject to any
applicable quantitative limitations set forth in the applicable Order Form; (ii) use, and distribute internally a
reasonable number of copies of the Documentation, if any, provided with the Shield Software, provided that You must
include on such copies all Elasticsearch trademarks, trade names, logos and notices present on the Documentation as
originally provided to You by Elasticsearch; (iii) permit third party contractors performing services on Your behalf to
use the Shield Software and Documentation as set forth in (i) and (ii) above, provided that such use must be solely for
Your benefit, and You shall be responsible for all acts and omissions of such contractors in connection with their use
of the Shield Software.
1.2 Reservation of Rights; Restrictions. As between Elasticsearch and You, Elasticsearch owns all right title and
interest in and to the Shield Software and any derivative works thereof, and except as expressly set forth in Section
1.1 above, no other license to the Shield Software is granted to You by implication, estoppel or otherwise. You agree
not to: (i) prepare derivative works from, modify, copy or use the Shield Software in any manner except as expressly
permitted in this Agreement or applicable law; (ii) transfer, sell, rent, lease, distribute, sublicense, loan or
otherwise transfer the Shield Software in whole or in part to any third party; (iii) use the Shield Software for
providing time-sharing services, any software-as-a-service offering ("SaaS"), service bureau services or as part of an
application services provider or other service offering; (iv) alter or remove any proprietary notices in the Shield
Software; or (v) make available to any third party any analysis of the results of operation of the Shield Software,
including benchmarking results, without the prior written consent of Elasticsearch. The Shield Software may contain or
be provided with open source libraries, components, utilities and other open source software (collectively, "Open Source
Software"), which Open Source Software may have applicable license terms as identified on a website designated by
Elasticsearch or otherwise provided with the Shield Software or Documentation. Notwithstanding anything to the contrary
herein, use of the Open Source Software shall be subject to the license terms and conditions applicable to such Open
Source Software, to the extent required by the applicable licensor (which terms shall not restrict the license rights
granted to You hereunder, but may contain additional rights).
1.3 Open Source. The Shield Software may contain or be provided with open source libraries, components, utilities and
other open source software (collectively, "Open Source"), which Open Source may have applicable license terms as
identified on a website designated by Elasticsearch or otherwise provided with the applicable Software or Documentation.
Notwithstanding anything to the contrary herein, use of the Open Source shall be subject to the applicable Open Source
license terms and conditions to the extent required by the applicable licensor (which terms shall not restrict the
license rights granted to You hereunder but may contain additional rights).
1.4 Audit Rights. You agree that Elasticsearch shall have the right, upon five (5) business days' notice to You, to
audit Your use of the Shield Software for compliance with any quantitative limitations on Your use of the Shield
Software that are set forth in the applicable Order Form. You agree to provide Elasticsearch with the necessary access
to the Shield Software to conduct such an audit either (i) remotely, or (ii) if remote performance is not possible, at
Your facilities, during normal business hours and no more than one (1) time in any twelve (12) month period. In the
event any such audit reveals that You have used the Shield Software in excess of the applicable quantitative
limitations, You agree to promptly pay to Elasticsearch an amount equal to the difference between the fees actually paid
and the fees that You should have paid to remain in compliance with such quantitative limitations. This Section 1.4
shall survive for a period of two (2) years from the termination or expiration of this Agreement.
2. TERM AND TERMINATION
2.1 Term. This Agreement shall commence on the Effective Date, and shall continue in force for the license term set
forth in the applicable Order Form, unless earlier terminated under Section 2.2 below, provided, however, that if You do
not purchase a Qualifying Subscription prior to the expiration of the Trial Term, this Agreement will expire at the end
of the Trial Term.
2.2 Termination. Either party may, upon written notice to the other party, terminate this Agreement for material
breach by the other party automatically and without any other formality, if such party has failed to cure such material
breach within thirty (30) days of receiving written notice of such material breach from the non-breaching party.
Notwithstanding the foregoing, this Agreement shall automatically terminate in the event that You intentionally breach
the scope of the license granted in Section 1.1 of this Agreement.
2.3 Post Termination or Expiration. Upon termination or expiration of this Agreement, for any reason, You shall
promptly cease the use of the Shield Software and Documentation and destroy (and certify to Elasticsearch in writing the
fact of such destruction), or return to Elasticsearch, all copies of the Shield Software and Documentation then in Your
possession or under Your control.
2.4 Survival. Sections 2.3, 2.4, 3, 4 and 5 shall survive any termination or expiration of this Agreement.
3. DISCLAIMER OF WARRANTIES
TO THE MAXIMUM EXTENT PERMITTED UNDER APPLICABLE LAW, THE SHIELD SOFTWARE IS PROVIDED "AS IS" WITHOUT WARRANTY OF ANY
KIND, AND ELASTICSEARCH AND ITS LICENSORS MAKE NO WARRANTIES WHETHER EXPRESSED, IMPLIED OR STATUTORY REGARDING OR
RELATING TO THE SHIELD SOFTWARE OR DOCUMENTATION. TO THE MAXIMUM EXTENT PERMITTED UNDER APPLICABLE LAW, ELASTICSEARCH
AND ITS LICENSORS SPECIFICALLY DISCLAIM ALL IMPLIED WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
NON-INFRINGEMENT WITH RESPECT TO THE SHIELD SOFTWARE AND DOCUMENTATION, AND WITH RESPECT TO THE USE OF THE FOREGOING.
FURTHER, ELASTICSEARCH DOES NOT WARRANT RESULTS OF USE OR THAT THE SHIELD SOFTWARE WILL BE ERROR FREE OR THAT THE USE OF
THE SHIELD SOFTWARE WILL BE UNINTERRUPTED.
4. LIMITATION OF LIABILITY
4.1 Disclaimer of Certain Damages. IN NO EVENT SHALL YOU OR ELASTICSEARCH OR ITS LICENSORS BE LIABLE FOR ANY LOSS OF
PROFITS, LOSS OF USE, BUSINESS INTERRUPTION, LOSS OF DATA, COST OF SUBSTITUTE GOODS OR SERVICES, OR FOR ANY INDIRECT,
SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES OF ANY KIND IN CONNECTION WITH OR ARISING OUT OF THE USE OR INABILITY TO
USE THE SHIELD SOFTWARE, OR THE PERFORMANCE OF OR FAILURE TO PERFORM THIS AGREEMENT, WHETHER ALLEGED AS A BREACH OF
CONTRACT OR TORTIOUS CONDUCT, INCLUDING NEGLIGENCE, EVEN IF THE RESPONSIBLE PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
SUCH DAMAGES. THE LIMITATIONS OF LIABILITY SET FORTH IN THIS SECTION 4.1 SHALL NOT APPLY TO A BREACH THROUGH GROSS
NEGLIGENCE OR INTENTIONAL MISCONDUCT BY YOU OF THE SCOPE OF THE LICENSE GRANTED IN SECTION 1.1 OR TO ANY OTHER LIABILITY
THAT CANNOT BE EXCLUDED OR LIMITED UNDER APPLICABLE LAW.
4.2 Damages Cap. IN NO EVENT SHALL ELASTICSEARCH'S OR ITS LICENSORS' AGGREGATE, CUMULATIVE LIABILITY UNDER THIS
AGREEMENT EXCEED THE AMOUNT YOU PAID, IN THE TWELVE (12) MONTHS IMMEDIATELY PRIOR TO THE EVENT GIVING RISE TO LIABILITY,
UNDER THE ELASTICSEARCH SUPPORT SERVICES AGREEMENT PURSUANT TO WHICH YOU PURCHASED THE QUALIFYING SUBSCRIPTION, PROVIDED
THAT IF YOU ARE USING THE SHIELD SOFTWARE UNDER A TRIAL LICENSE PURSUANT TO SECTION 1.1(a), IN NO EVENT SHALL
ELASTICSEARCH'S AGGREGATE, CUMULATIVE LIABILITY UNDER THIS AGREEMENT EXCEED ONE THOUSAND DOLLARS ($1,000).
4.3 YOU AGREE THAT THE FOREGOING LIMITATIONS, EXCLUSIONS AND DISCLAIMERS ARE A REASONABLE ALLOCATION OF THE RISK
BETWEEN THE PARTIES AND WILL APPLY TO THE MAXIMUM EXTENT PERMITTED BY APPLICABLE LAW, EVEN IF ANY REMEDY FAILS IN ITS
ESSENTIAL PURPOSE.
5. MISCELLANEOUS
This Agreement, including Attachment 1 hereto, which is hereby incorporated herein by this reference, completely and
exclusively states the entire agreement of the parties regarding the subject matter herein, and it supersedes, and its
terms govern, all prior proposals, agreements, or other communications between the parties, oral or written, regarding
such subject matter. For the avoidance of doubt, the parties hereby expressly acknowledge and agree that if You issue
any purchase order or similar document in connection with its purchase of a license to the Shield Software, You will do
so only for Your internal, administrative purposes and not with the intent to provide any contractual terms. This
Agreement may not be modified except by a subsequently dated, written amendment that expressly amends this Agreement and
which is signed on behalf of Elasticsearch and You, by duly authorized representatives. If any provision(s) hereof is
held unenforceable, this Agreement will continue without said provision and be interpreted to reflect the original
intent of the parties.
ATTACHMENT 1
ADDITIONAL TERMS AND CONDITIONS
A. The following additional terms and conditions apply to all Customers with principal offices in the United States of
America:
(1) Applicable Elasticsearch Entity. The entity providing the license is Elasticsearch, Inc., a Delaware corporation.
(2) Government Rights. The Shield Software product is "Commercial Computer Software," as that term is defined in 48
C.F.R. 2.101, and as the term is used in 48 C.F.R. Part 12, and is a Commercial Item comprised of "commercial computer
software" and "commercial computer software documentation". If acquired by or on behalf of a civilian agency, the U.S.
Government acquires this commercial computer software and/or commercial computer software documentation subject to the
terms of this Agreement, as specified in 48 C.F.R. 12.212 (Computer Software) and 12.211 (Technical Data) of the Federal
Acquisition Regulation ("FAR") and its successors. If acquired by or on behalf of any agency within the Department of
Defense ("DOD"), the U.S. Government acquires this commercial computer software and/or commercial computer software
documentation subject to the terms of the Elasticsearch Software License Agreement as specified in 48 C.F.R. 227.7202-3
and 48 C.F.R. 227.7202-4 of the DOD FAR Supplement ("DFARS") and its successors, and consistent with 48 C.F.R. 227.7202.
This U.S. Government Rights clause, consistent with 48 C.F.R. 12.212 and 48 C.F.R. 227.7202 is in lieu of, and
supersedes, any other FAR, DFARS, or other clause or provision that addresses Government rights in computer software,
computer software documentation or technical data related to the Shield Software under this Agreement and in any
Subcontract under which this commercial computer software and commercial computer software documentation is acquired or
licensed.
(3) Export Control. You acknowledge that the goods, software and technology acquired from Elasticsearch are subject to
U.S. export control laws and regulations, including but not limited to the International Traffic In Arms Regulations
("ITAR") (22 C.F.R. Parts 120-130 (2010)); the Export Administration Regulations ("EAR") (15 C.F.R. Parts 730-774
(2010)); the U.S. antiboycott regulations in the EAR and U.S. Department of the Treasury regulations; the economic
sanctions regulations and guidelines of the U.S. Department of the Treasury, Office of Foreign Assets Control, and the
USA Patriot Act (Title III of Pub. L. 107-56, signed into law October 26, 2001), as amended. You are now and will remain
in the future compliant with all such export control laws and regulations, and will not export, re-export, otherwise
transfer any Elasticsearch goods, software or technology or disclose any Elasticsearch software or technology to any
person contrary to such laws or regulations. You acknowledge that remote access to the Shield Software may in certain
circumstances be considered a re-export of Shield Software, and accordingly, may not be granted in contravention of
U.S. export control laws and regulations.
(4) Governing Law. This Agreement will be governed by the laws of the State of California, without regard to its
conflict of laws principles. This Agreement shall not be governed by the 1980 UN Convention on Contracts for the
International Sale of Goods. All suits hereunder will be brought solely in Federal Court for the Northern District of
California, or if that court lacks subject matter jurisdiction, in any California State Court located in Santa Clara
County. The parties hereby irrevocably waive any and all claims and defenses either might otherwise have in any such
action or proceeding in any of such courts based upon any alleged lack of personal jurisdiction, improper venue, forum
non conveniens or any similar claim or defense.
B. The following additional terms and conditions apply to all Customers with principal offices in Canada:
(1) Applicable Elasticsearch Entity. The entity providing the license is Elasticsearch B.C. Ltd., a corporation
incorporated under laws of the Province of British Columbia.
(2) Export Control. You acknowledge that the goods, software and technology acquired from Elasticsearch are subject to
the restrictions and controls set out in Section A(3) above as well as those imposed by the Export and Import Permits
Act (Canada) and the regulations thereunder and that you will comply with all applicable laws and regulations. Without
limitation, You acknowledge that the Shield Software, or any portion thereof, will not be exported: (a) to any country
on Canada's Area Control List; (b) to any country subject to UN Security Council embargo or action; or (c) contrary to
Canada's Export Control List Item 5505. You are now and will remain in the future compliant with all such export control
laws and regulations, and will not export, re-export, otherwise transfer any Elasticsearch goods, software or technology
or disclose any Elasticsearch software or technology to any person contrary to such laws or regulations. You will not
export or re-export the Shield Software, or any portion thereof, directly or indirectly, in violation of the Canadian
export administration laws and regulations to any country or end user, or to any end user who you know or have reason to
know will utilize them in the design, development or production of nuclear, chemical or biological weapons. You further
acknowledge that the Shield Software product may include technical data subject to such Canadian export regulations.
Elasticsearch does not represent that the Shield Software is appropriate or available for use in all countries.
Elasticsearch prohibits accessing materials from countries or states where contents are illegal. You are using the
Shield Software on your own initiative and you are responsible for compliance with all applicable laws. You hereby agree
to indemnify Elasticsearch and its affiliates from any claims, actions, liability or expenses (including reasonable
lawyers' fees) resulting from Your failure to act in accordance with the acknowledgements, agreements, and
representations in this Section B(2).
(3) Governing Law and Dispute Resolution. This Agreement shall be governed by the laws of the Province of Ontario and the federal
laws of Canada applicable therein without regard to conflict of laws provisions. The parties hereby irrevocably waive
any and all claims and defenses either might otherwise have in any such action or proceeding in any of such courts based
upon any alleged lack of personal jurisdiction, improper venue, forum non conveniens or any similar claim or defense.
Any dispute, claim or controversy arising out of or relating to this Agreement or the existence, breach, termination,
enforcement, interpretation or validity thereof, including the determination of the scope or applicability of this
agreement to arbitrate, (each, a "Dispute"), which the parties are unable to resolve after good faith negotiations,
shall be submitted first to the upper management level of the parties. The parties, through their upper management level
representatives shall meet within thirty (30) days of the Dispute being referred to them and if the parties are unable
to resolve such Dispute within thirty (30) days of meeting, the parties agree to seek to resolve the Dispute through
mediation with ADR Chambers in the City of Toronto, Ontario, Canada before pursuing any other proceedings. The costs of
the mediator shall be shared equally by the parties. If the Dispute has not been resolved within thirty (30) days of the
notice to desire to mediate, any party may terminate the mediation and proceed to arbitration and the matter shall be
referred to and finally resolved by arbitration at ADR Chambers pursuant to the general ADR Chambers Rules for
Arbitration in the City of Toronto, Ontario, Canada. The arbitration shall proceed in accordance with the provisions of
the Arbitration Act (Ontario). The arbitral panel shall consist of three (3) arbitrators, selected as follows: each
party shall appoint one (1) arbitrator; and those two (2) arbitrators shall discuss and select a chairman. If the two
(2) party-appointed arbitrators are unable to agree on the chairman, the chairman shall be selected in accordance with
the applicable rules of the arbitration body. Each arbitrator shall be independent of each of the parties. The
arbitrators shall have the authority to grant specific performance and to allocate between the parties the costs of
arbitration (including service fees, arbitrator fees and all other fees related to the arbitration) in such equitable
manner as the arbitrators may determine. The prevailing party in any arbitration shall be entitled to receive
reimbursement of its reasonable expenses incurred in connection therewith. Judgment upon the award so rendered may be
entered in a court having jurisdiction or application may be made to such court for judicial acceptance of any award and
an order of enforcement, as the case may be. Notwithstanding the foregoing, Elasticsearch shall have the right to
institute an action in a court of proper jurisdiction for preliminary injunctive relief pending a final decision by the
arbitrator, provided that a permanent injunction and damages shall only be awarded by the arbitrator. The language to
be used in the arbitral proceedings shall be English.
(4) Language. Any translation of this Agreement is done for local requirements and in the event of a dispute between
the English and any non-English version, the English version of this Agreement shall govern. At the request of the
parties, the official language of this Agreement and all communications and documents relating hereto is the English
language, and the English-language version shall govern all interpretation of the Agreement. À la demande des parties,
la langue officielle de la présente convention ainsi que toutes communications et tous documents s'y rapportant est la
langue anglaise, et la version anglaise est celle qui régit toute interprétation de la présente convention.
(5) Disclaimer of Warranties. For Customers with principal offices in the Province of Québec, the following new
sentence is to be added to the end of Section 3: "SOME JURISDICTIONS DO NOT ALLOW LIMITATIONS OR EXCLUSIONS OF CERTAIN
TYPES OF DAMAGES AND/OR WARRANTIES AND CONDITIONS. THE LIMITATIONS, EXCLUSIONS AND DISCLAIMERS SET FORTH IN THIS
AGREEMENT SHALL NOT APPLY IF AND ONLY IF AND TO THE EXTENT THAT THE LAWS OF A COMPETENT JURISDICTION REQUIRE
LIABILITIES BEYOND AND DESPITE THESE LIMITATIONS, EXCLUSIONS AND DISCLAIMERS."
(6) Limitation of Liability. For Customers with principal offices in the Province of Québec, the following new
sentence is to be added to the end of Section 4.1: "SOME JURISDICTIONS DO NOT ALLOW LIMITATIONS OR EXCLUSIONS OF
CERTAIN TYPES OF DAMAGES AND/OR WARRANTIES AND CONDITIONS. THE LIMITATIONS, EXCLUSIONS AND DISCLAIMERS SET FORTH IN
THIS AGREEMENT SHALL NOT APPLY IF AND ONLY IF AND TO THE EXTENT THAT THE LAWS OF A COMPETENT JURISDICTION REQUIRE
LIABILITIES BEYOND AND DESPITE THESE LIMITATIONS, EXCLUSIONS AND DISCLAIMERS."
C. The following additional terms and conditions apply to all Customers with principal offices outside of the United
States of America and Canada:
(1) Applicable Elasticsearch Entity. The entity providing the license in Germany is Elasticsearch GmbH; in France is
Elasticsearch SARL, in the United Kingdom is Elasticsearch Ltd, in Australia is Elasticsearch Pty Ltd., in Japan is
Elasticsearch KK, and in all other countries is Elasticsearch BV.
(2) Choice of Law. This Agreement shall be governed by and construed in accordance with the laws of the State of New
York, without reference to or application of choice of law rules or principles. Notwithstanding any choice of law
provision or otherwise, the Uniform Computer Information Transactions Act (UCITA) and the United Nations Convention on
the International Sale of Goods shall not apply.
(3) Arbitration. Any dispute, claim or controversy arising out of or relating to this Agreement or the existence,
breach, termination, enforcement, interpretation or validity thereof, including the determination of the scope or
applicability of this agreement to arbitrate, (each, a "Dispute") shall be referred to and finally resolved by
arbitration under the rules and at the location identified below. The arbitral panel shall consist of three (3)
arbitrators, selected as follows: each party shall appoint one (1) arbitrator; and those two (2) arbitrators shall
discuss and select a chairman. If the two party-appointed arbitrators are unable to agree on the chairman, the chairman
shall be selected in accordance with the applicable rules of the arbitration body. Each arbitrator shall be independent
of each of the parties. The arbitrators shall have the authority to grant specific performance and to allocate between
the parties the costs of arbitration (including service fees, arbitrator fees and all other fees related to the
arbitration) in such equitable manner as the arbitrators may determine. The prevailing party in any arbitration shall
be entitled to receive reimbursement of its reasonable expenses incurred in connection therewith. Judgment upon the
award so rendered may be entered in a court having jurisdiction or application may be made to such court for judicial
acceptance of any award and an order of enforcement, as the case may be. Notwithstanding the foregoing, Elasticsearch
shall have the right to institute an action in a court of proper jurisdiction for preliminary injunctive relief pending
a final decision by the arbitrator, provided that a permanent injunction and damages shall only be awarded by the
arbitrator. The language to be used in the arbitral proceedings shall be English.
(a) In addition, the following terms only apply to Customers with principal offices within Europe, the Middle East or
Africa (EMEA):
Arbitration Rules and Location. Any Dispute shall be referred to and finally resolved by arbitration under the London
Court of International Arbitration ("LCIA") Rules (which Rules are deemed to be incorporated by reference into this
clause) on the basis that the governing law is the law of the State of New York, USA. The seat, or legal place, of
arbitration shall be London, England.
(b) In addition, the following terms only apply to Customers with principal offices within Asia Pacific, Australia &
New Zealand:
Arbitration Rules and Location. Any Dispute shall be referred to and finally resolved by arbitration under the Rules of
Conciliation and Arbitration of the International Chamber of Commerce ("ICC") in force on the date when the notice of
arbitration is submitted in accordance with such Rules (which Rules are deemed to be incorporated by reference into this
clause) on the basis that the governing law is the law of the State of New York, USA. The seat, or legal place, of
arbitration shall be Singapore.
(c) In addition, the following terms only apply to Customers with principal offices within the Americas (excluding
North America):
Arbitration Rules and Location. Any Dispute shall be referred to and finally resolved by arbitration under
International Dispute Resolution Procedures of the American Arbitration Association ("AAA") in force on the date when
the notice of arbitration is submitted in accordance with such Procedures (which Procedures are deemed to be
incorporated by reference into this clause) on the basis that the governing law is the law of the State of New York,
USA. The seat, or legal place, of arbitration shall be New York, New York, USA.
(4) In addition, for Customers with principal offices within the UK, the following new sentence is added to the end of
Section 4.1:
Nothing in this Agreement shall have effect so as to limit or exclude a party's liability for death or personal injury
caused by negligence or for fraud including fraudulent misrepresentation and this Section 4.1 shall take effect subject
to this provision.
(5) In addition, for Customers with principal offices within France, Sections 1.2, 3 and 4.1 of the Agreement are
deleted and replaced with the following new Sections 1.2, 3 and 4.1:
1.2 Reservation of Rights; Restrictions. Elasticsearch owns all right title and interest in and to the Shield Software
and any derivative works thereof, and except as expressly set forth in Section 1.1 above, no other license to the Shield
Software is granted to You by implication, or otherwise. You agree not to prepare derivative works from, modify, copy or
use the Shield Software in any manner except as expressly permitted in this Agreement; provided that You may copy the
Shield Software for archival purposes, only where such software is provided on a non-durable medium; and You may
decompile the Shield Software, where necessary for interoperability purposes and where necessary for the correction of
errors making the software unfit for its intended purpose, if such right is not reserved by Elasticsearch as editor of
the Shield Software. Pursuant to article L122-6-1 of the French intellectual property code, Elasticsearch reserves the
right to correct any bugs as necessary for the Shield Software to serve its intended purpose. You agree not to: (i)
transfer, sell, rent, lease, distribute, sublicense, loan or otherwise transfer the Shield Software in whole or in part
to any third party; (ii) use the Shield Software for providing time-sharing services, any software-as-a-service
offering ("SaaS"), service bureau services or as part of an application services provider or other service offering;
(iii) alter or remove any proprietary notices in the Shield Software; or (iv) make available to any third party any
analysis of the results of operation of the Shield Software, including benchmarking results, without the prior written
consent of Elasticsearch.
3. DISCLAIMER OF WARRANTIES
TO THE MAXIMUM EXTENT PERMITTED UNDER APPLICABLE LAW, THE SHIELD SOFTWARE IS PROVIDED "AS IS" WITHOUT WARRANTY OF ANY
KIND, AND ELASTICSEARCH AND ITS LICENSORS MAKE NO WARRANTIES WHETHER EXPRESSED, IMPLIED OR STATUTORY REGARDING OR
RELATING TO THE SHIELD SOFTWARE OR DOCUMENTATION. TO THE MAXIMUM EXTENT PERMITTED UNDER APPLICABLE LAW, ELASTICSEARCH
AND ITS LICENSORS SPECIFICALLY DISCLAIM ALL IMPLIED WARRANTIES OF FITNESS FOR A PARTICULAR PURPOSE WITH RESPECT TO THE
SHIELD SOFTWARE AND DOCUMENTATION, AND WITH RESPECT TO THE USE OF THE FOREGOING. FURTHER, ELASTICSEARCH DOES NOT
WARRANT RESULTS OF USE OR THAT THE SHIELD SOFTWARE WILL BE ERROR FREE OR THAT THE USE OF THE SHIELD SOFTWARE WILL BE
UNINTERRUPTED.
4.1 Disclaimer of Certain Damages. IN NO EVENT SHALL YOU OR ELASTICSEARCH OR ITS LICENSORS BE LIABLE FOR ANY LOSS OF
PROFITS, LOSS OF USE, BUSINESS INTERRUPTION, LOSS OF DATA, COST OF SUBSTITUTE GOODS OR SERVICES, OR FOR ANY INDIRECT OR
UNFORESEEABLE DAMAGES OF ANY KIND IN CONNECTION WITH OR ARISING OUT OF THE USE OR INABILITY TO USE THE SHIELD SOFTWARE,
OR THE PERFORMANCE OF OR FAILURE TO PERFORM THIS AGREEMENT, WHETHER ALLEGED AS A BREACH OF CONTRACT OR TORTIOUS CONDUCT,
INCLUDING NEGLIGENCE. THE LIMITATIONS OF LIABILITY SET FORTH IN THIS SECTION 4.1 SHALL NOT APPLY TO A BREACH, THROUGH
GROSS NEGLIGENCE OR INTENTIONAL MISCONDUCT BY YOU, OF THE SCOPE OF THE LICENSE GRANTED IN SECTION 1.1, OR IN CASE OF
DEATH OR PERSONAL INJURY.
(6) In addition, for Customers with principal offices within Australia, Sections 4.1, 4.2 and 4.3 of the Agreement are
deleted and replaced with the following new Sections 4.1, 4.2 and 4.3:
4.1 Disclaimer of Certain Damages. Subject to clause 4.3, a party is not liable for Consequential Loss however caused
(including by the negligence of that party) suffered or incurred by the other party in connection with this agreement.
"Consequential Loss" means loss of revenues, loss of reputation, indirect loss, loss of profits, consequential loss,
loss of actual or anticipated savings, indirect loss, lost opportunities, including opportunities to enter into
arrangements with third parties, loss or damage in connection with claims against by third parties, or loss or
corruption or data.
4.2 Damages Cap. SUBJECT TO CLAUSES 4.1 AND 4.3, ANY LIABILITY OF ELASTICSEARCH FOR ANY LOSS OR DAMAGE, HOWEVER CAUSED
(INCLUDING BY THE NEGLIGENCE OF ELASTICSEARCH), SUFFERED BY YOU IN CONNECTION WITH THIS AGREEMENT IS LIMITED TO THE
AMOUNT YOU PAID, IN THE TWELVE (12) MONTHS IMMEDIATELY PRIOR TO THE EVENT GIVING RISE TO LIABILITY, UNDER THE
ELASTICSEARCH SUPPORT SERVICES AGREEMENT IN CONNECTION WITH WHICH YOU OBTAINED THE LICENSE TO USE THE SHIELD SOFTWARE.
THE LIMITATION SET OUT IN THIS SECTION 4.2 IS AN AGGREGATE LIMIT FOR ALL CLAIMS, WHENEVER MADE.
4.3 Limitation and Disclaimer Exceptions. If the Competition and Consumer Act 2010 (Cth) or any other legislation
states that there is a guarantee in relation to any good or service supplied by Elasticsearch in
connection with this agreement, and Elasticsearch's liability for failing to comply with that guarantee cannot be
excluded but may be limited, Sections 4.1 and 4.2 do not apply to that liability and instead Elasticsearch's liability
for such failure is limited (at Elasticsearch's election) to, in the case of a supply of goods, Elasticsearch
replacing the goods or supplying equivalent goods or repairing the goods, or in the case of a supply of services,
Elasticsearch supplying the services again or paying the cost of having the services supplied again.
(7) In addition, for Customers with principal offices within Japan, Sections 1.2, 3 and 4.1 of the Agreement are
deleted and replaced with the following new Sections 1.2, 3 and 4.1:
1.2 Reservation of Rights; Restrictions. As between Elasticsearch and You, Elasticsearch owns all right title and
interest in and to the Shield Software and any derivative works thereof, and except as expressly set forth in Section
1.1 above, no other license to the Shield Software is granted to You by implication or otherwise. You agree not to: (i)
prepare derivative works from, modify, copy or use the Shield Software in any manner except as expressly permitted in
this Agreement or applicable law; (ii) transfer, sell, rent, lease, distribute, sublicense, loan or otherwise transfer
the Shield Software in whole or in part to any third party; (iii) use the Shield Software for providing time-sharing
services, any software-as-a-service offering ("SaaS"), service bureau services or as part of an application services
provider or other service offering; (iv) alter or remove any proprietary notices in the Shield Software; or (v) make
available to any third party any analysis of the results of operation of the Shield Software, including benchmarking
results, without the prior written consent of Elasticsearch.
3. DISCLAIMER OF WARRANTIES TO THE MAXIMUM EXTENT PERMITTED UNDER APPLICABLE LAW, THE SHIELD SOFTWARE IS PROVIDED "AS
IS" WITHOUT WARRANTY OF ANY KIND, AND ELASTICSEARCH AND ITS LICENSORS MAKE NO WARRANTIES WHETHER EXPRESSED, IMPLIED OR
STATUTORY REGARDING OR RELATING TO THE SHIELD SOFTWARE OR DOCUMENTATION. TO THE MAXIMUM EXTENT PERMITTED UNDER
APPLICABLE LAW, ELASTICSEARCH AND ITS LICENSORS SPECIFICALLY DISCLAIM ALL IMPLIED WARRANTIES OF MERCHANTABILITY, FITNESS
FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT WITH RESPECT TO THE SHIELD SOFTWARE AND DOCUMENTATION, AND WITH RESPECT TO
THE USE OF THE FOREGOING. FURTHER, ELASTICSEARCH DOES NOT WARRANT RESULTS OF USE OR THAT THE SHIELD SOFTWARE WILL BE
ERROR FREE OR THAT THE USE OF THE SHIELD SOFTWARE WILL BE UNINTERRUPTED.
4.1 Disclaimer of Certain Damages. IN NO EVENT SHALL YOU OR ELASTICSEARCH OR ITS LICENSORS BE LIABLE FOR ANY LOSS OF
PROFITS, LOSS OF USE, BUSINESS INTERRUPTION, LOSS OF DATA, COST OF SUBSTITUTE GOODS OR SERVICES, OR FOR ANY
INDIRECT, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES OF ANY KIND IN CONNECTION WITH OR ARISING OUT OF THE USE
OR INABILITY TO USE THE SHIELD SOFTWARE, OR THE PERFORMANCE OF OR FAILURE TO PERFORM THIS AGREEMENT, WHETHER ALLEGED AS
A BREACH OF CONTRACT OR TORTIOUS CONDUCT, INCLUDING NEGLIGENCE, EVEN IF THE RESPONSIBLE PARTY HAS BEEN ADVISED OF THE
POSSIBILITY OF SUCH DAMAGES. THE LIMITATIONS OF LIABILITY SET FORTH IN THIS SECTION 4.1 SHALL NOT APPLY TO A BREACH
THROUGH GROSS NEGLIGENCE OR INTENTIONAL MISCONDUCT BY YOU OF THE SCOPE OF THE LICENSE GRANTED IN SECTION 1.1 OR TO ANY
OTHER LIABILITY THAT CANNOT BE EXCLUDED OR LIMITED UNDER APPLICABLE LAW.

97
shield/bin/shield/.in.bat Normal file
View File

@ -0,0 +1,97 @@
@echo off
rem Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
rem or more contributor license agreements. Licensed under the Elastic License;
rem you may not use this file except in compliance with the Elastic License.
REM .in.bat <java main class> [args,..]
SETLOCAL
if NOT DEFINED JAVA_HOME goto err
set JAVA_CMD=%1
if "%JAVA_CMD%" == "" goto err_java_cmd
REM fix args
for /f "usebackq tokens=1*" %%i in (`echo %*`) DO @ set params=%%j
SHIFT
set SCRIPT_DIR=%~dp0
for %%I in ("%SCRIPT_DIR%..\..") do set ES_HOME=%%~dpfI
REM ***** JAVA options *****
if "%ES_MIN_MEM%" == "" (
set ES_MIN_MEM=256m
)
if "%ES_MAX_MEM%" == "" (
set ES_MAX_MEM=1g
)
if NOT "%ES_HEAP_SIZE%" == "" (
set ES_MIN_MEM=%ES_HEAP_SIZE%
set ES_MAX_MEM=%ES_HEAP_SIZE%
)
set JAVA_OPTS=%JAVA_OPTS% -Xms%ES_MIN_MEM% -Xmx%ES_MAX_MEM%
if NOT "%ES_HEAP_NEWSIZE%" == "" (
set JAVA_OPTS=%JAVA_OPTS% -Xmn%ES_HEAP_NEWSIZE%
)
if NOT "%ES_DIRECT_SIZE%" == "" (
set JAVA_OPTS=%JAVA_OPTS% -XX:MaxDirectMemorySize=%ES_DIRECT_SIZE%
)
set JAVA_OPTS=%JAVA_OPTS% -Xss256k
REM Enable aggressive optimizations in the JVM
REM - Disabled by default as it might cause the JVM to crash
REM set JAVA_OPTS=%JAVA_OPTS% -XX:+AggressiveOpts
set JAVA_OPTS=%JAVA_OPTS% -XX:+UseParNewGC
set JAVA_OPTS=%JAVA_OPTS% -XX:+UseConcMarkSweepGC
set JAVA_OPTS=%JAVA_OPTS% -XX:CMSInitiatingOccupancyFraction=75
set JAVA_OPTS=%JAVA_OPTS% -XX:+UseCMSInitiatingOccupancyOnly
REM When running under Java 7
REM JAVA_OPTS=%JAVA_OPTS% -XX:+UseCondCardMark
REM GC logging options -- uncomment to enable
REM JAVA_OPTS=%JAVA_OPTS% -XX:+PrintGCDetails
REM JAVA_OPTS=%JAVA_OPTS% -XX:+PrintGCTimeStamps
REM JAVA_OPTS=%JAVA_OPTS% -XX:+PrintClassHistogram
REM JAVA_OPTS=%JAVA_OPTS% -XX:+PrintTenuringDistribution
REM JAVA_OPTS=%JAVA_OPTS% -XX:+PrintGCApplicationStoppedTime
REM JAVA_OPTS=%JAVA_OPTS% -Xloggc:/var/log/elasticsearch/gc.log
REM Causes the JVM to dump its heap on OutOfMemory.
set JAVA_OPTS=%JAVA_OPTS% -XX:+HeapDumpOnOutOfMemoryError
REM The path to the heap dump location, note the directory must exist and have enough
REM space for a full heap dump.
REM JAVA_OPTS=%JAVA_OPTS% -XX:HeapDumpPath=%ES_HOME%/logs/heapdump.hprof
REM Disables explicit GC
set JAVA_OPTS=%JAVA_OPTS% -XX:+DisableExplicitGC
set ES_CLASSPATH=%ES_CLASSPATH%;%ES_HOME%/lib/elasticsearch-1.4.0-SNAPSHOT.jar;%ES_HOME%/lib/*;%ES_HOME%/lib/sigar/*;%ES_HOME%/plugins/shield/*
set ES_PARAMS=-Des.path.home="%ES_HOME%"
SET HOSTNAME=%COMPUTERNAME%
"%JAVA_HOME%\bin\java" %JAVA_OPTS% %ES_JAVA_OPTS% %ES_PARAMS% -cp "%ES_CLASSPATH%" %JAVA_CMD% %PARAMS%
goto finally
:err
echo JAVA_HOME environment variable must be set!
ENDLOCAL
EXIT /B 1
:err_java_cmd
echo Can not call .in.bat without specifying a main java class
ENDLOCAL
EXIT /B 1
:finally
ENDLOCAL

132
shield/bin/shield/esusers Executable file
View File

@ -0,0 +1,132 @@
#!/bin/sh
# Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
# or more contributor license agreements. Licensed under the Elastic License;
# you may not use this file except in compliance with the Elastic License.
SCRIPT="$0"
# SCRIPT may be an arbitrarily deep series of symlinks. Loop until we have the concrete path.
while [ -h "$SCRIPT" ] ; do
ls=`ls -ld "$SCRIPT"`
# Drop everything prior to ->
link=`expr "$ls" : '.*-> \(.*\)$'`
if expr "$link" : '/.*' > /dev/null; then
SCRIPT="$link"
else
SCRIPT=`dirname "$SCRIPT"`/"$link"
fi
done
# determine elasticsearch home
ES_HOME=`dirname "$SCRIPT"`/../..
# make ES_HOME absolute
ES_HOME=`cd "$ES_HOME"; pwd`
# If an include wasn't specified in the environment, then search for one...
if [ "x$ES_INCLUDE" = "x" ]; then
# Locations (in order) to use when searching for an include file.
for include in /usr/share/elasticsearch/elasticsearch.in.sh \
/usr/local/share/elasticsearch/elasticsearch.in.sh \
/opt/elasticsearch/elasticsearch.in.sh \
~/.elasticsearch.in.sh \
"`dirname "$0"`"/../elasticsearch.in.sh \
$ES_HOME/bin/elasticsearch.in.sh; do
if [ -r "$include" ]; then
. "$include"
break
fi
done
# ...otherwise, source the specified include.
elif [ -r "$ES_INCLUDE" ]; then
. "$ES_INCLUDE"
fi
if [ -x "$JAVA_HOME/bin/java" ]; then
JAVA="$JAVA_HOME/bin/java"
else
JAVA=`which java`
fi
if [ ! -x "$JAVA" ]; then
echo "Could not find any executable java binary. Please install java in your PATH or set JAVA_HOME"
exit 1
fi
if [ -z "$ES_CLASSPATH" ]; then
echo "You must set the ES_CLASSPATH var" >&2
exit 1
fi
# Special-case path variables.
case `uname` in
CYGWIN*)
ES_CLASSPATH=`cygpath -p -w "$ES_CLASSPATH"`
ES_HOME=`cygpath -p -w "$ES_HOME"`
;;
esac
# Try to read package config files
if [ -f "/etc/sysconfig/elasticsearch" ]; then
CONF_DIR=/etc/elasticsearch
CONF_FILE=$CONF_DIR/elasticsearch.yml
. "/etc/sysconfig/elasticsearch"
elif [ -f "/etc/default/elasticsearch" ]; then
CONF_DIR=/etc/elasticsearch
CONF_FILE=$CONF_DIR/elasticsearch.yml
. "/etc/default/elasticsearch"
fi
# Parse any long getopt options and put them into properties before calling getopt below
# Be dash compatible to make sure running under ubuntu works
ARGCOUNT=$#
COUNT=0
while [ $COUNT -lt $ARGCOUNT ]
do
case $1 in
--*=*) properties="$properties -Des.${1#--}"
shift 1; COUNT=$(($COUNT+1))
;;
--*) properties="$properties -Des.${1#--}=$2"
shift ; shift; COUNT=$(($COUNT+2))
;;
*) set -- "$@" "$1"; shift; COUNT=$(($COUNT+1))
esac
done
# check if properties already has a config file or config dir
if [ -e "$CONF_DIR" ]; then
case "$properties" in
*-Des.default.path.conf=*) ;;
*)
if [ ! -d "$CONF_DIR/shield" ]; then
echo "ERROR: The configuration directory [$CONF_DIR/shield] does not exist. The esusers tool expects Shield configuration files in that location."
echo "The plugin may not have been installed with the correct configuration path. If [$ES_HOME/config/shield] exists, please copy the shield directory to [$CONF_DIR]"
exit 1
fi
properties="$properties -Des.default.path.conf=$CONF_DIR"
;;
esac
fi
if [ -e "$CONF_FILE" ]; then
case "$properties" in
*-Des.default.config=*) ;;
*)
properties="$properties -Des.default.config=$CONF_FILE"
;;
esac
fi
export HOSTNAME=`hostname -s`
# include shield jars in classpath
ES_CLASSPATH="$ES_CLASSPATH:$ES_HOME/plugins/shield/*"
cd "$ES_HOME" > /dev/null
"$JAVA" $ES_JAVA_OPTS -cp "$ES_CLASSPATH" -Des.path.home="$ES_HOME" $properties org.elasticsearch.shield.authc.esusers.tool.ESUsersTool "$@"
status=$?
cd - > /dev/null
exit $status

View File

@ -0,0 +1,9 @@
@echo off
rem Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
rem or more contributor license agreements. Licensed under the Elastic License;
rem you may not use this file except in compliance with the Elastic License.
PUSHD %~dp0
CALL %~dp0.in.bat org.elasticsearch.shield.authc.esusers.tool.ESUsersTool %*
POPD

132
shield/bin/shield/syskeygen Executable file
View File

@ -0,0 +1,132 @@
#!/bin/sh
# Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
# or more contributor license agreements. Licensed under the Elastic License;
# you may not use this file except in compliance with the Elastic License.
SCRIPT="$0"
# SCRIPT may be an arbitrarily deep series of symlinks. Loop until we have the concrete path.
while [ -h "$SCRIPT" ] ; do
ls=`ls -ld "$SCRIPT"`
# Drop everything prior to ->
link=`expr "$ls" : '.*-> \(.*\)$'`
if expr "$link" : '/.*' > /dev/null; then
SCRIPT="$link"
else
SCRIPT=`dirname "$SCRIPT"`/"$link"
fi
done
# determine elasticsearch home
ES_HOME=`dirname "$SCRIPT"`/../..
# make ES_HOME absolute
ES_HOME=`cd "$ES_HOME"; pwd`
# If an include wasn't specified in the environment, then search for one...
if [ "x$ES_INCLUDE" = "x" ]; then
# Locations (in order) to use when searching for an include file.
for include in /usr/share/elasticsearch/elasticsearch.in.sh \
/usr/local/share/elasticsearch/elasticsearch.in.sh \
/opt/elasticsearch/elasticsearch.in.sh \
~/.elasticsearch.in.sh \
"`dirname "$0"`"/../elasticsearch.in.sh \
$ES_HOME/bin/elasticsearch.in.sh; do
if [ -r "$include" ]; then
. "$include"
break
fi
done
# ...otherwise, source the specified include.
elif [ -r "$ES_INCLUDE" ]; then
. "$ES_INCLUDE"
fi
if [ -x "$JAVA_HOME/bin/java" ]; then
JAVA="$JAVA_HOME/bin/java"
else
JAVA=`which java`
fi
if [ ! -x "$JAVA" ]; then
echo "Could not find any executable java binary. Please install java in your PATH or set JAVA_HOME"
exit 1
fi
if [ -z "$ES_CLASSPATH" ]; then
echo "You must set the ES_CLASSPATH var" >&2
exit 1
fi
# Special-case path variables.
case `uname` in
CYGWIN*)
ES_CLASSPATH=`cygpath -p -w "$ES_CLASSPATH"`
ES_HOME=`cygpath -p -w "$ES_HOME"`
;;
esac
# Try to read package config files
if [ -f "/etc/sysconfig/elasticsearch" ]; then
CONF_DIR=/etc/elasticsearch
CONF_FILE=$CONF_DIR/elasticsearch.yml
. "/etc/sysconfig/elasticsearch"
elif [ -f "/etc/default/elasticsearch" ]; then
CONF_DIR=/etc/elasticsearch
CONF_FILE=$CONF_DIR/elasticsearch.yml
. "/etc/default/elasticsearch"
fi
# Parse any long getopt options and put them into properties before calling getopt below
# Be dash compatible to make sure running under ubuntu works
ARGCOUNT=$#
COUNT=0
while [ $COUNT -lt $ARGCOUNT ]
do
case $1 in
--*=*) properties="$properties -Des.${1#--}"
shift 1; COUNT=$(($COUNT+1))
;;
--*) properties="$properties -Des.${1#--}=$2"
shift ; shift; COUNT=$(($COUNT+2))
;;
*) set -- "$@" "$1"; shift; COUNT=$(($COUNT+1))
esac
done
# check if properties already has a config file or config dir
if [ -e "$CONF_DIR" ]; then
case "$properties" in
*-Des.default.path.conf=*) ;;
*)
if [ ! -d "$CONF_DIR/shield" ]; then
echo "ERROR: The configuration directory [$CONF_DIR/shield] does not exist. The syskeygen tool expects Shield configuration files in that location."
echo "The plugin may not have been installed with the correct configuration path. If [$ES_HOME/config/shield] exists, please copy the shield directory to [$CONF_DIR]"
exit 1
fi
properties="$properties -Des.default.path.conf=$CONF_DIR"
;;
esac
fi
if [ -e "$CONF_FILE" ]; then
case "$properties" in
*-Des.default.config=*) ;;
*)
properties="$properties -Des.default.config=$CONF_FILE"
;;
esac
fi
export HOSTNAME=`hostname -s`
# include shield jars in classpath
ES_CLASSPATH="$ES_CLASSPATH:$ES_HOME/plugins/shield/*"
cd "$ES_HOME" > /dev/null
"$JAVA" $ES_JAVA_OPTS -cp "$ES_CLASSPATH" -Des.path.home="$ES_HOME" $properties org.elasticsearch.shield.crypto.tool.SystemKeyTool "$@"
status=$?
cd - > /dev/null
exit $status

View File

@ -0,0 +1,9 @@
@echo off
rem Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
rem or more contributor license agreements. Licensed under the Elastic License;
rem you may not use this file except in compliance with the Elastic License.
PUSHD %~dp0
CALL %~dp0.in.bat org.elasticsearch.shield.crypto.tool.SystemKeyTool %*
POPD

View File

@ -0,0 +1,15 @@
logger:
shield.audit.logfile: INFO, access_log
additivity:
shield.audit.logfile: false
appender:
access_log:
type: dailyRollingFile
file: ${path.logs}/${cluster.name}-access.log
datePattern: "'.'yyyy-MM-dd"
layout:
type: pattern
conversionPattern: "[%d{ISO8601}] %m%n"

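The logging configuration above only defines where audit records are written; the audit trail itself has to be switched on in
elasticsearch.yml. A minimal sketch, assuming the `shield.audit.enabled` setting name used by Shield's audit module (verify
against the shipped documentation):

shield.audit.enabled: true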
View File

View File

@ -0,0 +1,94 @@
admin:
cluster: all
indices:
'*': all
# monitoring cluster privileges
# All operations on all indices
power_user:
cluster: monitor
indices:
'*': all
# Read-only operations on indices
user:
indices:
'*': read
# Defines the required permissions for transport clients
transport_client:
cluster:
- cluster:monitor/nodes/info
#uncomment the following for sniffing
#- cluster:monitor/state
# The required role for kibana 3 users
kibana3:
cluster: cluster:monitor/nodes/info
indices:
'*': indices:data/read/search, indices:data/read/get, indices:admin/get
'kibana-int': indices:data/read/search, indices:data/read/get, indices:data/write/delete, indices:data/write/index, create_index
# The required permissions for kibana 4 users.
kibana4:
cluster:
- cluster:monitor/nodes/info
- cluster:monitor/health
indices:
'*':
- indices:admin/mappings/fields/get
- indices:admin/validate/query
- indices:data/read/search
- indices:data/read/msearch
- indices:admin/get
'.kibana':
- indices:admin/exists
- indices:admin/mapping/put
- indices:admin/mappings/fields/get
- indices:admin/refresh
- indices:admin/validate/query
- indices:data/read/get
- indices:data/read/mget
- indices:data/read/search
- indices:data/write/delete
- indices:data/write/index
- indices:data/write/update
- indices:admin/create
# The required permissions for the kibana 4 server
kibana4_server:
cluster:
- cluster:monitor/nodes/info
- cluster:monitor/health
indices:
'.kibana':
- indices:admin/exists
- indices:admin/mapping/put
- indices:admin/mappings/fields/get
- indices:admin/refresh
- indices:admin/validate/query
- indices:data/read/get
- indices:data/read/mget
- indices:data/read/search
- indices:data/write/delete
- indices:data/write/index
- indices:data/write/update
# The required role for logstash users
logstash:
cluster: indices:admin/template/get, indices:admin/template/put
indices:
'logstash-*': indices:data/write/bulk, indices:data/write/delete, indices:data/write/update, indices:data/read/search, indices:data/read/scroll, create_index
# Marvel role, allowing all operations
# on the marvel indices
marvel_user:
cluster: cluster:monitor/nodes/info, cluster:admin/plugin/license/get
indices:
'.marvel-*': all
# Marvel Agent users
marvel_agent:
cluster: indices:admin/template/get, indices:admin/template/put
indices:
'.marvel-*': indices:data/write/bulk, create_index

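Custom roles follow the same format as the entries above: a role name, an optional cluster privilege list, and an indices map
of index patterns to privileges. As an illustrative sketch (the `logs_reader` role name and `logs-*` pattern are hypothetical,
not part of this file):

logs_reader:
  indices:
    'logs-*': read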
View File

View File

View File

@ -0,0 +1,19 @@
All the following scenarios are run as a user authorized for `test.*`: read
[horizontal]
*Existing Indices*::*Action*::*Outcome (executed indices)*
`test1` `test2` `test3` `index1`::`GET _search`::`test1` `test2` `test3`
`test1` `test2` `test3` `index1`::`GET _search/*`::`test1` `test2` `test3`
`test1` `test2` `index1` `index2`::`GET _search/index*`::AuthorizationException
- empty cluster-::`GET _search`::IndexMissingException
- empty cluster-::`GET _search/*`::IndexMissingException
`index1` `index2`::`GET _search`::IndexMissingException
`index1` `index2`::`GET _search/*`::IndexMissingException
`test1` `test2` `index1`::`GET _search/test*,index1`::AuthorizationException
`test1` `test2` `index1`::`GET _search/missing`::AuthorizationException
`test1` `test2` `test3` `index1`::`GET _search/-test2`::`test1` `test3`
`test1` `test2` `test21` `test3` `index1`:: `GET _search/-test2*`::`test1` `test3`
`test1` `test2` `test3` `index1`::`GET msearch first item: all, second item: index1`:: AuthorizationException
`test1` `test2` `test3` `index1`::`GET msearch first item: all, second item: missing`:: AuthorizationException
`test1` `test2` `test3` `index1`::`GET msearch first item: all, second item: test4`:: 1st item:`test1` `test2` `test3`, 2nd item: IndexMissingException
`test1` `test2` `test3` `index1`::`GET msearch first item: all, second item: index*`:: IndexMissingException
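For reference, the authorization assumed above could be expressed as a `roles.yml` entry along these lines (a sketch; the
`test_reader` role name is illustrative):

[source, yaml]
------------------------------------------------------------
test_reader:
  indices:
    'test.*': read
------------------------------------------------------------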

View File

@ -0,0 +1,93 @@
== LDAP Configuration for INTERNAL only Test Servers
We've two LDAP servers for testing:
* Active Directory on Windows Server 2012
* OpenLdap on Suse Enterprise Linux 10.x
=== Configuration for OpenLdap
Here is a configuration that works for openldap. This is using OpenSuse's method for creating ldap users that can
authenticate to the box. So it is probably close to a real-world scenario. For SSL the following truststore has both
public certificates in it: elasticsearch-shield/src/test/resources/org/elasticsearch/shield/transport/ssl/certs/simple/testnode.jks
[source, yaml]
------------------------------------------------------------
shield:
ssl.keystore:
path: "/path/to/elasticsearch-shield/src/test/resources/org/elasticsearch/shield/transport/ssl/certs/simple/testnode.jks"
password: testnode
authc.realms.openldap:
type: ldap
order: 0
url: "ldaps://54.200.235.244:636"
user_dn_templates: [ "uid={0},ou=people,dc=oldap,dc=test,dc=elasticsearch,dc=com" ]
group_search:
base_dn: "ou=people,dc=oldap,dc=test,dc=elasticsearch,dc=com"
hostname_verification: false
------------------------------------------------------------
=== Configuration for Active Directory
You could configure Active Directory the same way (with type ldap and user_dn_templates). But where is the fun in that!
Active directory has a simplified (non-standard) authentication workflow that helps us eliminate the templates.
BUT this technique requires you use a DNS name for your active directory server. Do this by adding the following to /etc/hosts:
`54.213.145.20 ad.test.elasticsearch.com ForestDnsZones.ad.test.elasticsearch.com DomainDnsZones.ad.test.elasticsearch.com`
[source, yaml]
------------------------------------------------------------
shield:
authc.realms.ad:
type: active_directory
order: 0
domain_name: ad.test.elasticsearch.com
------------------------------------------------------------
The above configuration results in a plaintext LDAP connection. For SSL the following configuration is required:
[source, yaml]
------------------------------------------------------------
shield:
ssl.keystore:
path: "/path/to/elasticsearch-shield/src/test/resources/org/elasticsearch/shield/transport/ssl/certs/simple/testnode.jks"
password: testnode
authc.realms.ad:
type: active_directory
order: 0
domain_name: ad.test.elasticsearch.com
url: "ldaps://ad.test.elasticsearch.com:636"
hostname_verification: false
------------------------------------------------------------
=== Users & Groups
Isn't LDAP fun?! No? Well that's why we've created super heroes!
|=======================
| CN (common name) | uid | group memberships
| Commander Kraken | kraken | Hydra
| Bruce Banner | hulk | Geniuses, SHIELD, Philanthropists, Avengers
| Clint Barton | hawkeye | SHIELD, Avengers
| Jarvis | jarvis |
| Natasha Romanoff | blackwidow | SHIELD, Avengers
| Nick Fury | fury | SHIELD, Avengers
| Phil Colson | phil | SHIELD
| Steve Rogers | cap | SHIELD, Avengers
| Thor | thor | SHIELD, Avengers, Gods, Philanthropists
| Tony Stark | ironman | Geniuses, Billionaries, Playboys, Philanthropists, SHIELD, Avengers
| Odin | odin | Gods
|=======================
They aren't very good super-heroes because they all share the same password: `NickFuryHeartsES`. You'll use the uid
for the username.
=== Groups
If you want to map group names to ES roles, you'll use the full distinguished names (DNs) of the groups. The DN for groups in AD is
`CN={group name},CN=Users,DC=ad,DC=test,DC=elasticsearch,DC=com`
and the DN for groups in OpenLDAP is
`cn={group name},ou=people,dc=oldap,dc=test,dc=elasticsearch,dc=com`
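A role mapping file ties these group DNs to Shield roles. A minimal sketch, assuming the role-mapping file format used by the
LDAP/AD realms (a role name mapped to a list of group DNs; the `avengers` role name is illustrative):

[source, yaml]
------------------------------------------------------------
avengers:
  - "CN=Avengers,CN=Users,DC=ad,DC=test,DC=elasticsearch,DC=com"
  - "cn=Avengers,ou=people,dc=oldap,dc=test,dc=elasticsearch,dc=com"
------------------------------------------------------------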
Ping Cam Morris or Bill Hwang for more questions.

View File

@ -0,0 +1,25 @@
[[shield]]
= Shield - Elasticsearch Security Plugin
:ref: http://www.elastic.co/guide/en/elasticsearch/reference/current
include::01-introduction.asciidoc[]
include::02-architecture.asciidoc[]
include::03-quick-getting-started.asciidoc[]
include::04-getting-started.asciidoc[]
include::05-authorization.asciidoc[]
include::06-authentication.asciidoc[]
include::07-securing-nodes.asciidoc[]
include::08-auditing.asciidoc[]
include::09-clients.asciidoc[]
include::10-appendices.asciidoc[]

View File

@ -0,0 +1,60 @@
[[introduction]]
== Introduction
This document discusses securing your Elasticsearch deployment, from initial installation to configuration.
[float]
=== Why Security?
An Elasticsearch cluster benefits from properly implemented security in the following ways:
* <<roles,Role-based>> access control at the index level and <<ldap,LDAP>> authentication integration to _prevent
unauthorized access_
* <<ssl-tls,Encryption>> to _preserve the integrity of your data_, keeping confidential data confidential.
* An _<<auditing,Audit>> trail_ to analyze access patterns.
[float]
==== Prevent Unauthorized Access
The term 'unauthorized access' properly covers two distinct security concepts: _Authentication_ and _Authorization_.
Authentication validates that a user is who they claim to be. A proper authentication setup enforces that only the
person named, for example, Kelsey Andorra can authenticate to Elasticsearch as the user `kandorra`. Shield ships with an
out-of-the-box internal authentication mechanism and also integrates with LDAP and Active Directory to provide
user authentication. Authorization enforces a set of privileges that are available to a specific user. To continue the
example, an authorization framework enforces that the user `kandorra` has the ability to perform specific actions on the
Elasticsearch cluster. These specific actions are called _privileges_. See the <<reference,Reference>> section for a
complete list of privileges. Privileges are bundled into sets, and a set of privileges is called a _role_.
Shield also provides for authorization based on the client's IP address. You may whitelist and blacklist subnets to
control network-level access to a server.
[float]
==== Preserve Data Integrity
A standard Elasticsearch cluster provides redundancy to protect against _accidental_ data
loss and corruption. By providing <<ssl-tls,_encryption_>> for data transmitted from node to node within
the cluster, Elasticsearch security protects data from _deliberate_ tampering or unauthorized access.
[float]
==== Provide an Audit Trail
Knowing who requested which actions on your data, and when, is an important part of security. Keeping an auditable log
of the activity in your cluster can not only help diagnose performance issues, but provide insight into attacks and
attempted breaches.
[float]
=== Security as a Plugin
Security features for Elasticsearch are implemented in a plugin that you <<getting-started,install>> on each node in
your cluster.
[float]
=== What's In This Document
The information in this document covers the following broad categories:
* To learn about the architecture of the Elasticsearch security plugin and how the various elements of security
interact, see the <<architecture, Architecture Overview>> section.
* To get started with Elasticsearch security, from installation to initial configuration, see the
<<getting-started,Getting Started>> section.
* To answer specific questions about configuration elements and privileges in Elasticsearch security, see the
<<reference,Reference>> section.


@ -0,0 +1,84 @@
[[architecture]]
== Architecture Overview
Shield installs as a plugin into Elasticsearch. Once installed, the plugin intercepts inbound API calls in order to
enforce authentication and authorization. The plugin can also provide encryption using Secure Sockets Layer/Transport
Layer Security (SSL/TLS) for the network traffic to and from the Elasticsearch node. The same API interception layer
that enables authentication and authorization also provides the audit logging capability.
[float]
=== User Authentication
Shield defines a known set of users in order to authenticate users that make requests. These sets of users are defined
with an abstraction called a _realm_. A realm is a user database configured for use by the Shield plugin. The
supported realm types are _esusers_, _LDAP_, _Active Directory_ and _PKI_ (see <<realms, Realms>>); the first two are
described below.
In the _esusers_ realm, users exist exclusively within the Elasticsearch cluster. With the _esusers_ realm, the
administrator manages users with <<esusers,tools provided by Elasticsearch>>, and all the user operations occur within
the Elasticsearch cluster. Users authenticate with a username and password pair.
In the _LDAP_ realm, the administrator manages users with the tools provided by the LDAP vendor. Elasticsearch
authenticates users by accessing the configured LDAP server. Users authenticate with a username and password pair. Shield
also enables mapping LDAP groups to roles in Shield (more on roles below).
Your application can be a user in a Shield realm. Elasticsearch clients authenticate to the cluster by providing a
username and password pair (a.k.a _Authentication Token_) with each request. To learn more on how different clients
can authenticate, see <<clients, Clients>>.
[float]
=== Authorization
Shield's data model for action authorization consists of these elements:
* _Secured Resource_, a resource against which security permissions are defined, including the cluster, an index/alias,
or a set of indices/aliases in the cluster
* _Privilege_, one or more actions that a user may execute against a secured resource. This includes named groups of
actions (e.g. _read_), or a set of specific actions (e.g. `indices:data/read/percolate`)
* _Permissions_, one or more privileges against a secured resource (e.g. _read on the "products" index_)
* _Role_, named sets of permissions
* _Users_, entities which may be assigned zero or more roles, authorizing them to perform the actions on the secure
resources described in the union of their roles
A secure Elasticsearch cluster manages the privileges of users through <<roles, _roles_>>. A role has a unique name and identifies
a set of permissions that translate to privileges on resources. A user can have an arbitrary number of roles. There are
two types of permissions: _cluster_ and _index_. The total set of permissions that a user has is defined by the union of the
permissions in all its roles.
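For example (a minimal sketch using the `roles.yml` format shown later in this guide), a user assigned both of the
roles below ends up with the union of the two: monitoring access on the cluster plus read access to the `logs-*` indices.
[source,yaml]
------------------------------------------------------------
monitoring_only:
  cluster: monitor
logs_reader:
  indices:
    'logs-*': read
------------------------------------------------------------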
Depending on the realm used, Shield provides the appropriate means to assign roles to users.
[float]
=== Node Authentication and Channel Encryption
Nodes communicate to other nodes over port 9300. With Shield, you can use SSL/TLS to wrap this communication. When
SSL/TLS is enabled, the nodes validate each other's certificates, establishing trust between the nodes. This validation
prevents unauthenticated nodes from joining the cluster. Communications between nodes in the cluster are also encrypted
when SSL/TLS is in use.
Users are responsible for generating and installing their own certificates.
You can choose a variety of ciphers for encryption. See the <<ciphers,_Adding Ciphers to Java for Stronger Encryption_>>
section for details.
For more information on how to secure nodes see <<securing-nodes, Securing Nodes>>.
[float]
=== IP Filtering
Shield provides IP-based access control for Elasticsearch nodes. This access control allows you to restrict which
other servers, via their IP address, can connect to your Elasticsearch nodes and make requests. For example, you can
configure Shield to allow access to the cluster only from your application servers. The configuration provides for
whitelisting and blacklisting of subnets, specific IP addresses, and DNS domains. To read more about IP filtering see
<<ip-filtering, IP filtering>>.
[float]
=== Auditing
The <<auditing,audit functionality>> in a secure Elasticsearch cluster logs particular events and activity on that
cluster. The events logged include authentication attempts, including granted and denied access.


@ -0,0 +1,75 @@
[[quick-getting-started]]
== Getting Started (Short Version)
The following tutorial will get you up and running with Shield in 2 minutes.
[float]
=== Assumptions
* You have Java(TM) 7 or above installed.
* You have downloaded Elasticsearch 1.5.0+ and extracted it (from now on, we'll refer to the Elasticsearch directory as `ES_HOME`).
If you haven't done so, you can download it https://download.elastic.co/elasticsearch/elasticsearch/elasticsearch-1.5.1.tar.gz[here].
* You are *not* using a package installation (RPM/DEB) or a custom configuration directory. If you are, please see the full <<getting-started,getting started>> guide.
[float]
=== Installation
1. `cd` to `ES_HOME`
2. Install the license plugin
+
[source,shell]
----------------------------------------------------------
bin/plugin -i elasticsearch/license/latest
----------------------------------------------------------
3. Next, install the Shield plugin
+
[source,shell]
----------------------------------------------------------
bin/plugin -i elasticsearch/shield/latest
----------------------------------------------------------
4. Start Elasticsearch
+
[source,shell]
----------------------------------------------------------
bin/elasticsearch
----------------------------------------------------------
5. Add an `es_admin` user with administrative permissions
+
[source,shell]
----------------------------------------------------------
bin/shield/esusers useradd es_admin -r admin
----------------------------------------------------------
6. Try it out - without username/password, the request should be rejected:
+
[source,shell]
----------------------------------------------------------
curl -XGET 'http://localhost:9200/'
----------------------------------------------------------
7. Now try with username and password
+
[source,shell]
----------------------------------------------------------
curl -u es_admin -XGET 'http://localhost:9200/'
----------------------------------------------------------
8. Optionally, verify the Shield version
+
[source,shell]
----------------------------------------------------------
curl -u es_admin -XGET 'http://localhost:9200/_shield'
----------------------------------------------------------
[float]
=== Next Steps
* For a more in-depth look into the meaning of each step above, please proceed to the full <<getting-started,getting started>> guide.
* For a better understanding of the authentication mechanisms we just used, please refer to <<esusers, esusers - internal file based authentication>>.
* To learn about how to create roles and customize the permissions for users, please refer to the <<authorization, authorization>> section.
* To enable secure SSL/TLS encryption of cluster and client communication, please refer to the <<securing-nodes, securing nodes>> section.
* If you are new to Shield, we suggest following the guide's natural path and reading each section in order. To continue, <<getting-started, proceed to the next section>>


@ -0,0 +1,322 @@
[[getting-started]]
== Getting Started (Long Version)
Security is installed as an Elasticsearch plugin. The plugin must be installed on every node in the cluster, and every
node must be restarted after installation. Plan for a complete cluster restart before beginning the installation
process.
IMPORTANT: Shield 2.0.x is compatible with Elasticsearch 1.5.0 and above.
[float]
=== Configuring your environment
If you install Elasticsearch as a package or you specify a custom configuration directory, the command line
tools require you to specify the configuration directory. On Linux systems, add the following line to your
`.profile` file:
[source,shell]
----------------------------------------------------------
export ES_JAVA_OPTS="-Des.path.conf=/etc/elasticsearch"
----------------------------------------------------------
NOTE: When using `sudo` to run commands as a different user, the `ES_JAVA_OPTS` setting from your profile will not be
available in the other user's environment. You can manually pass the environment variables to the command or you can
make the environment variable available by adding the following line to the `/etc/sudoers` file:
[source,shell]
----------------------------------------------------------
Defaults env_keep += "ES_JAVA_OPTS"
----------------------------------------------------------
On Windows systems, the `setx` command can be used to specify a custom configuration directory:
[source,shell]
----------------------------------------------------------
setx ES_JAVA_OPTS "-Des.path.conf=C:\config"
----------------------------------------------------------
[float]
=== Shield And Licensing
Shield requires a license to operate and the licensing is managed by a separate plugin. For this reason,
the License plugin must be installed (without the license plugin Shield will prevent the node from starting up).
For instructions on how to install the License plugin, please refer to <<license-management, License Management>>.
Once you have the licensing plugin installed, you may begin working with Shield immediately. When Elasticsearch starts for the
first time with Shield and the licensing plugin installed, a 30-day trial license for Shield will automatically be generated.
If you have a license for Shield that you would like to install, please refer to <<installing-license, installing a license>>.
IMPORTANT: With a valid license, Shield will be fully operational. Upon license expiry, Shield will operate in a
degraded mode, where cluster health, cluster stats, and index stats APIs will be blocked. All other operations will
continue operating normally. Additional information can be found at the <<license-expiration, Shield license expiration>>
section.
[float]
=== Installing the Shield plugin
Follow these steps on every node in the cluster:
. From the Elasticsearch home directory, run:
+
[source,sh]
------------------------------------------
bin/plugin -i elasticsearch/shield/latest
------------------------------------------
. Restart your Elasticsearch node.
+
Before restarting your cluster, consider temporarily {ref}/modules-cluster.html[disabling shard allocation].
If your server doesn't have direct Internet access, see <<manual_download,manual download>> for an alternative way to
get the Security binaries.
[[manual_download]]
[float]
==== Manual Download
Elasticsearch's `bin/plugin` script requires direct Internet access for downloading and installing the security plugin.
If your server doesn't have Internet access, you can download the required binaries from the following link:
[source,sh]
----------------------------------------------------
https://download.elastic.co/elasticsearch/shield/shield-2.0.0.zip
----------------------------------------------------
Transfer the compressed file to your server, then install the plugin with the `bin/plugin` script:
[source,shell]
----------------------------------------------------
bin/plugin -i shield -u file://PATH_TO_ZIP_FILE <1>
----------------------------------------------------
<1> Absolute path to Shield plugin zip distribution file (e.g. `file:///path/to/file/shield-2.0.0.zip`,
note the three slashes at the beginning)
[[install-layout]]
[float]
=== Shield Installation Layout
Shield comes with its own set of configuration files and executable tools. These include:
[horizontal]
[[shield-bin]] *Executables*::
Shield's bin directory is located at `$ES_HOME/bin/shield`. Consider adding this directory to
your `PATH` environment variable.
[[shield-config]] *Configuration*::
Shield's config directory is located at `<elasticsearch_config>/shield` (where
`<elasticsearch_config>` refers to the standard config directory of
Elasticsearch - typically at `$ES_HOME/config`).
Unless otherwise stated, Shield's settings are placed in the main
`elasticsearch.yml` configuration file.
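For orientation, Shield's config directory after completing the steps in this guide looks roughly like the sketch
below (a sketch only; the exact contents depend on your installation and on which steps you have completed):
[source,shell]
------------------------------------------------------------
ls config/shield
# roles.yml    <- role definitions
# users        <- users managed by the esusers tool
# users_roles  <- role assignments managed by the esusers tool
# system_key   <- created by bin/shield/syskeygen (see below)
------------------------------------------------------------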
[[message-authentication]]
[float]
=== Message Authentication
Message authentication verifies that a message has not been tampered with or corrupted in transit. To enable message
authentication, run the `syskeygen` tool without any options:
[source, shell]
----------------
bin/shield/syskeygen
----------------
This creates the system key file in Shield's <<shield-config,config>> directory, e.g. `config/shield/system_key`. You
can customize this file's location by changing the value of the `shield.system_key.file` setting in the
`elasticsearch.yml` file.
IMPORTANT: Because the system key is a symmetric key, the same key must be on every node in the cluster. Copy the key to
every node in the cluster after generating it.
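For example, a minimal sketch of copying the key to a second node over SSH (the host name, user and remote path are
placeholders):
[source,shell]
------------------------------------------------------------
scp config/shield/system_key esadmin@node02:/path/to/elasticsearch/config/shield/system_key
ssh esadmin@node02 'chmod 600 /path/to/elasticsearch/config/shield/system_key'
------------------------------------------------------------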
[float]
=== Enabling Role-based Access Control
Now that we have Shield installed, we'll move to configuring the users (and their roles) with which we'll be able to execute
various APIs on Elasticsearch.
[float]
==== Defining Roles
A _role_ encompasses a set of permissions over the cluster and/or the indices in it. Roles are defined in the
`$ES_HOME/config/shield/roles.yml` file.
.Example role definition
[source,yaml]
--------------------------------------------------
# All cluster rights
# All operations on all indices
admin: <1>
cluster: all
indices:
'*': all
# monitoring cluster privileges
# All operations on all indices
power_user: <2>
cluster: monitor
indices:
'*': all
# Read-only operations on indices
user: <3>
indices:
'*': read
--------------------------------------------------
<1> The `admin` role enables full access to the cluster and all its indices.
<2> The `power_user` role enables monitoring-only access on the cluster and full access on all its indices.
<3> The `user` role has no cluster-wide permissions and only has data read access on all indices.
For this quick getting started guide, we won't need to change anything in the `roles.yml` file that comes out-of-the-box
with Shield, as it already defines the roles listed in the snippet above. To learn more on roles and how one can configure
them, please see <<roles, Roles>>.
[float]
==== Defining Users
Shield supports different authentication realms that authenticate users from different sources. In this example, we'll
use the internal <<esusers,`esusers`>> realm that comes with Shield. The `esusers` realm supports user management using
the `esusers` command line tool from Shield's `bin` directory.
NOTE: The `esusers` realm is enabled by default when no realms are explicitly configured in `elasticsearch.yml`. For more
information on realms configuration please see <<realms, Realms>>.
[source,shell]
--------------------------------------------------
bin/shield/esusers useradd rdeniro -p taxidriver -r admin
--------------------------------------------------
[source,shell]
--------------------------------------------------
bin/shield/esusers useradd alpacino -p godfather -r user
--------------------------------------------------
The example above adds two users:
* The `rdeniro` user with password `taxidriver`, with the `admin` role in the cluster
* The `alpacino` user with password `godfather`, with the `user` role in the cluster
NOTE: To ensure that Elasticsearch can read the user and role information at startup, run `esusers useradd` as the
same user you use to run Elasticsearch. Running the command as root or some other user will update the permissions
for the `users` and `users_roles` files and prevent Elasticsearch from accessing them.
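If the files were already created with the wrong owner, one possible fix is sketched below (the `elasticsearch`
user/group and the relative paths are assumptions; adjust them to your installation):
[source,shell]
------------------------------------------------------------
sudo chown elasticsearch:elasticsearch config/shield/users config/shield/users_roles
------------------------------------------------------------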
Now that we've defined the roles and the users of the cluster, you can start the Elasticsearch node and we'll verify that
the Shield plugin has been loaded.
[float]
==== Verifying Shield Installation
Once your Elasticsearch node is running, you can issue a `curl` command to verify that Shield has been loaded and is the
expected version.
[source,shell]
-------------------------------------------------------------------------------
curl --user rdeniro:taxidriver 'localhost:9200/_shield'
-------------------------------------------------------------------------------
[source,json]
-------------------------------------------------------------------------------
{
"status" : "enabled",
"name" : "Mimic",
"cluster_name" : "elasticsearch",
"version" : {
"number" : "2.0.0",
"build_hash" : "",
"build_timestamp" : "NA",
"build_snapshot" : true
},
"tagline" : "You know, for security"
}
-------------------------------------------------------------------------------
You can also check the startup logs to verify that the Shield plugin has loaded and the network transports are using Shield.
A successful installation will show lines similar to the following:
[source,shell]
----------------
[2014-10-09 13:47:38,841][INFO ][transport ] [Ezekiel Stane] Using [org.elasticsearch.shield.transport.ShieldServerTransportService] as transport service, overridden by [shield]
[2014-10-09 13:47:38,841][INFO ][transport ] [Ezekiel Stane] Using [org.elasticsearch.shield.transport.netty.ShieldNettyTransport] as transport, overridden by [shield]
[2014-10-09 13:47:38,842][INFO ][http ] [Ezekiel Stane] Using [org.elasticsearch.shield.transport.netty.ShieldNettyHttpServerTransport] as http transport, overridden by [shield]
----------------
In the next section, we'll use a simple HTTP client to interact with Elasticsearch protected by Shield.
[[clientauth]]
[float]
=== Configuring HTTP REST Clients
Elasticsearch works with standard HTTP http://en.wikipedia.org/wiki/Basic_access_authentication[basic authentication]
headers to identify the requester. Since Elasticsearch is stateless, this header must be sent with every request:
[source,shell]
--------------------------------------------------
Authorization: Basic <TOKEN> <1>
--------------------------------------------------
<1> The `<TOKEN>` is computed as `base64(USERNAME:PASSWORD)`
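For example, a quick sketch of building the header by hand with standard shell tools, using the `rdeniro` user created
earlier:
[source,shell]
------------------------------------------------------------
echo -n "rdeniro:taxidriver" | base64
# cmRlbmlybzp0YXhpZHJpdmVy
curl -H "Authorization: Basic cmRlbmlybzp0YXhpZHJpdmVy" -XGET 'localhost:9200/'
------------------------------------------------------------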
[float]
==== Client examples
Using `curl` without basic authentication to create an index has the following result:
[source,shell]
-------------------------------------------------------------------------------
curl -XPUT 'localhost:9200/idx'
-------------------------------------------------------------------------------
[source,json]
-------------------------------------------------------------------------------
{
"error": "AuthenticationException[Missing authentication token]",
"status": 401
}
-------------------------------------------------------------------------------
Since no user is associated with the request above, the request returns an authentication error. Next, use `curl`
with basic auth to create an index as the `rdeniro` user:
[source,shell]
---------------------------------------------------------
curl --user rdeniro:taxidriver -XPUT 'localhost:9200/idx'
---------------------------------------------------------
[source,json]
---------------------------------------------------------
{
"acknowledged": true
}
---------------------------------------------------------
Since the request is executed on behalf of administrative user `rdeniro`, the create index request authenticates and
authorizes successfully, resulting in normal execution of the request. Creating another index as the `alpacino` user
results in the following error:
[source,shell]
------------------------------------------------------------------------------------------------------------------
curl --user alpacino:godfather -XPUT 'localhost:9200/idx2'
------------------------------------------------------------------------------------------------------------------
[source,json]
------------------------------------------------------------------------------------------------------------------
{
"error": "AuthorizationException[Action [indices:admin/create] is unauthorized for user [alpacino]]",
"status": 403
}
------------------------------------------------------------------------------------------------------------------
As user `alpacino` does not have any index administration rights, the request is rejected with an authorization error.
[float]
=== Next Steps
Now you have a working cluster with authentication and access control enabled.
In the <<authorization, _Authorization_>> section, we explain how to manage users and their roles. The
<<authentication, _Authentication_>> section explains how to use Shield's authentication realms and LDAP integration. The
<<securing-nodes, _Securing Nodes_>> section discusses enabling SSL/TLS encryption for nodes and clients.


@ -0,0 +1,132 @@
[[authorization]]
== Authorization
Shield introduces the concept of _action authorization_ to Elasticsearch. Action authorization restricts the actions
users can execute on the cluster. Shield implements authorization as Role Based Access Control (RBAC), where all
actions are restricted by default. Users are associated with roles that define a set of actions that are allowed
for those users.
[[roles]]
[float]
=== Roles, Permissions and Privileges
Privileges are actions or a set of actions that users may execute in Elasticsearch. For example, the ability to run a
query is a privilege.
A permission is a set of privileges associated with one or more secured objects. For example, a permission could allow
querying or reading all documents of index `i1`. There are two types of secured objects in Elasticsearch -
cluster and indices. Cluster permissions grant access to cluster-wide administrative and monitoring actions. Index
permissions grant data access, including administrative and monitoring actions on specific indices in the cluster.
A role is a named set of permissions. For example, you could define a role as a logging administrator. The logging
administrator is allowed to take all actions on indices named `logs-*`.
As an administrator, you will need to define the roles that you want to use, then assign users to the roles.
[[roles-file]]
[float]
==== The Role Definition File `roles.yml`
Roles are defined in the role definition file `roles.yml` located under Shield's <<shield-config,config>> directory.
This is a YAML file where each entry defines the unique role name and the cluster and indices permissions associated
with it.
[IMPORTANT]
==============================
The `roles.yml` file is managed locally by the node and is not managed globally by the cluster. This means that
with a typical multi-node cluster, the exact same changes need to be applied on each and every node in the cluster.
A safer approach would be to apply the change on one of the nodes and have the `roles.yml` distributed/copied to
all other nodes in the cluster (either manually or using a configuration management system such as Puppet or Chef).
==============================
The following snippet shows an example configuration:
[source,yaml]
-----------------------------------
# All cluster rights
# All operations on all indices
admin:
cluster: all
indices:
'*': all
# Monitoring cluster privileges
# All operations on all indices
power_user:
cluster: monitor
indices:
'*': all
# Only read operations on indices
user:
indices:
'*': read
# Only read operations on indices named events_*
events_user:
indices:
'events_*': read
-----------------------------------
[[valid-role-name]]
NOTE: A valid role name must be at least 1 character and no longer than 30 characters. It must begin with a letter
(`a-z`) or an underscore (`_`). Subsequent characters can be letters, underscores (`_`), digits (`0-9`) or any
of the following symbols `@`, `-`, `.` or `$`
The above example defines these roles:
|=======================
| `admin` | Has full access (all privileges) on the cluster and full access on all indices in the cluster.
| `power_user` | Has monitoring-only access on the cluster, enabling the user to request cluster metrics, information,
and settings, without the ability to update settings. This user also has full access on all indices in
the cluster.
| `user` | Cannot update or monitor the cluster. Has read-only access to all indices in the cluster.
| `events_user` | Has read-only access to all indices with the `events_` prefix.
|=======================
See the complete list of available <<privileges-list, cluster and indices privileges>>.
[float]
==== Action Level Access Control
The Shield security plugin enables access to specific actions in Elasticsearch. Access control using specific actions
provides a finer level of granularity than roles based on named privileges.
The role in the following example allows access to document `GET` actions for a specific index and nothing else:
.Example Role Using Action-level Access Control
[source,yaml]
---------------------------------------------------
# Only GET read action on index named events_index
get_user:
indices:
'events_index': 'indices:data/read/get'
---------------------------------------------------
See the complete list of available <<ref-actions-list, cluster and indices actions>>.
TIP: When specifying index names, you can use the full names of indices and aliases, or patterns that refer to multiple
indices. Two pattern styles are supported:
* Wildcard (default) - simple wildcard matching where `*` is a placeholder for zero or more characters, `?` is a
placeholder for a single character and `\` may be used as an escape character.
* Regular Expressions - A more powerful syntax for matching more complex patterns. This regular expression is based on
Lucene's regexp automaton syntax. To enable this syntax, it must be wrapped within a pair of forward slashes (`/`).
Any pattern starting with `/` and not ending with `/` is considered to be malformed.
.Example Regular Expressions
[source,yaml]
------------------------------------------------------------------------------------
"foo-bar": all # match the literal `foo-bar`
"foo-*": all # match anything beginning with "foo-"
"logstash-201?-*": all # ? matches any one character
"/.*-201[0-9]-.*/": all # use a regex to match anything containing 2010-2019
"/foo": all # syntax error - missing final /
------------------------------------------------------------------------------------
TIP: Once the roles are defined, users can then be associated with any number of these roles. In the
<<authentication,next section>> we'll learn more about authentication and see how users can be associated with the
configured roles.


@ -0,0 +1,142 @@
[[authentication]]
== Authentication
Authentication identifies an individual. To gain access to restricted resources, a user must prove their identity, via
passwords, credentials, or some other means (typically referred to as authentication tokens).
[[realms]]
[float]
=== Realms
A _realm_ is an authentication mechanism, which Shield uses to resolve and authenticate users and their roles. Shield
currently provides four realm types:
[horizontal]
_esusers_:: A native authentication system built into Shield and available by default. See <<esusers>>.
_LDAP_:: Authentication via an external Lightweight Directory Access Protocol (LDAP) server. See <<ldap>>.
_Active Directory_:: Authentication via an external Active Directory service. See <<active_directory>>.
_PKI_:: Authentication through the use of trusted X.509 certificates. See <<pki>>.
NOTE: _esusers_, _LDAP_, and _Active Directory_ realms authenticate using the username and password authentication tokens.
Realms live within a _realm chain_. It is essentially a prioritized list of configured realms (typically of various types).
The order of the list determines the order in which the realms will be consulted. During the authentication process,
Shield will consult and try to authenticate the request one realm at a time. Once one of the realms successfully
authenticates the request, the authentication is considered to be successful and the authenticated user will be associated
with the request (which will then proceed to the authorization phase). If a realm cannot authenticate the request, the
next realm in the chain will be consulted. If none of the realms in the chain can authenticate the request, the
authentication is considered unsuccessful and an authentication error will be returned (as HTTP status code `401`).
NOTE: Shield attempts to authenticate to each configured realm sequentially. Some systems (e.g. Active Directory) have a
temporary lock-out period after several successive failed login attempts. If the same username exists in multiple realms,
unintentional account lockouts are possible. For more information, please see <<trouble-shoot-active-directory, here>>.
For example, if `UserA` exists in both Active Directory and esusers, and the Active Directory realm is checked first and
esusers is checked second, an attempt to authenticate as `UserA` in the esusers realm would first attempt to authenticate
against Active Directory and fail, before successfully authenticating against the esusers realm. Because authentication is
verified on each request, the Active Directory realm would be checked - and fail - on each request for `UserA` in the esusers
realm. In this case, while the Shield request completed successfully, the account on Active Directory would have received
several failed login attempts, and that account may become temporarily locked out. Plan the order of your realms accordingly.
The realm chain can be configured in the `elasticsearch.yml` file. When not explicitly configured, a default chain will be
created that only holds the `esusers` realm. When explicitly configured, the created chain will be an exact reflection
of the configuration (i.e. the only realms in the chain will be the configured realms that are enabled).
The following snippet shows an example of realms configuration:
[source,yaml]
----------------------------------------
shield.authc:
realms:
esusers:
type: esusers
order: 0
ldap1:
type: ldap
order: 1
enabled: false
url: 'url_to_ldap1'
...
ldap2:
type: ldap
order: 2
url: 'url_to_ldap2'
...
ad1:
type: active_directory
order: 3
url: 'url_to_ad'
----------------------------------------
As can be seen above, each realm has a unique name that identifies it. There are three settings that are common to all
realms:
* `type` (required) - Identifies the type of the realm (currently `esusers`, `ldap`, `active_directory` or `pki`). The realm
                     type determines what other settings the realm should be configured with.
* `order` (optional) - Defines the priority/index of the realm within the realm chain. This will determine when the realm
will be consulted during authentication.
* `enabled` (optional) - When set to `false` the realm will be disabled and will not be added to the realm chain. This is
useful for debugging purposes, where one can remove a realm from the chain without deleting and
losing its configuration.
The realm types fall roughly into two categories:
* `internal` - Internal realm types are realms that are internal to Elasticsearch and don't require any communication with
                external parties - they are fully managed by Shield. There can be at most one configured realm per internal
                realm type. (Currently, only one internal realm type exists - `esusers`.)
* `external` - External realm types are realms that require interaction with parties/components external to Elasticsearch,
                typically enterprise-level identity management systems. Unlike the `internal` realms, there can be as many
                `external` realms as one would like - each with a unique name and different settings. (The `external` realm
                types are `ldap`, `active_directory` and `pki`.)
[[anonymous-access]]
[float]
=== Anonymous Access added[1.1.0]
The authentication process can be split into two phases - token extraction and user authentication. During the first
phase (token extraction phase), the configured realms are requested to try and extract/resolve an authentication token
from the incoming request. The first realm that finds an authentication token in the request "wins", meaning that the found
authentication token will be used for authentication (moving to the second phase - user authentication - where each realm
that supports this authentication token type will try to authenticate the user).
In the event where no authentication token was resolved by any of the active realms, the incoming request is considered
to be anonymous.
By default, anonymous requests are rejected and an authentication error is returned (status code `401`). It is possible
to change this behaviour and instruct Shield to associate a default (anonymous) user with the anonymous request. This can
be done by configuring the following settings in the `elasticsearch.yml` file:
[source,yaml]
----------------------------------------
shield.authc:
anonymous:
username: anonymous_user <1>
roles: role1, role2 <2>
authz_exception: true <3>
----------------------------------------
<1> The username/principal of the anonymous user. This setting is optional and will be set to `_es_anonymous_user` by default
when not configured.
<2> The roles that will be associated with the anonymous user. This setting is mandatory - without it, anonymous access
will be disabled (i.e. anonymous requests will be rejected and return an authentication error)
<3> When `true`, an HTTP 403 response will be returned when the anonymous user does not have the appropriate permissions
    for the requested action. The web browser will not prompt the user to provide credentials to access the requested
    resource. When set to `false`, an HTTP 401 will be returned, allowing credentials to be provided for a user with
the appropriate permissions. If you are using anonymous access in combination with HTTP, setting this to `false` may
be necessary if your client does not support preemptive basic authentication. This setting is optional and will be
set to `true` by default.
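With a configuration like the one above, an unauthenticated request is no longer rejected outright; it is executed as
the anonymous user instead (a sketch, assuming `role1`/`role2` grant the requested action):
[source,shell]
------------------------------------------------------------
# No credentials supplied - the request runs as anonymous_user
curl -XGET 'localhost:9200/'
------------------------------------------------------------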
include::realms/01-esusers.asciidoc[]
include::realms/02-ldap.asciidoc[]
include::realms/03-active-directory.asciidoc[]
include::realms/04-pki.asciidoc[]


@ -0,0 +1,549 @@
[[securing-nodes]]
== Securing Nodes
Elasticsearch nodes store data that may be confidential. Attacks on the data may come from the network. These attacks
could include sniffing of the data, manipulation of the data, and attempts to gain access to the server and thus the
files storing the data. Securing your nodes with the procedures below helps to reduce risk from network-based attacks.
This section shows how to:
* encrypt traffic to and from Elasticsearch nodes using SSL/TLS,
* require that nodes authenticate new nodes that join the cluster using SSL certificates, and
* make it more difficult for remote attackers to issue any commands to Elasticsearch.
The authentication of new nodes will help prevent a rogue node from joining the cluster and receiving data through
replication.
[[ssl-tls]]
=== Encryption and Certificates
Shield allows for the installation of X.509 certificates that establish trust between nodes. When a client connects to a
node using SSL or TLS, the node will present its certificate to the client, and then as part of the handshake process the
node will prove that it owns the private key linked with the certificate. The client will then determine if the node's
certificate is valid, trusted, and matches the hostname or IP address it is trying to connect to. A node also acts as a
client when connecting to other nodes in the cluster, which means that every node must trust all of the other nodes in
the cluster.
The certificates used for SSL and TLS can be signed by a certificate authority (CA) or self-signed. The type of signing
affects how a client will trust these certificates. Self-signed certificates must be trusted individually, which means
that each node must have every other node's certificate installed. Certificates signed by a CA can be trusted by
validating that the CA signed the certificate. This means that every node will only need the signing CA certificate
installed to trust the other nodes in the cluster.
The best practice with Shield is to use certificates signed by a CA. Self-signed certificates introduce a lot of
overhead as they require each client to trust every self-signed certificate. Self-signed certificates also limit
the elasticity of Elasticsearch, as adding a new node to the cluster requires a restart of every node after
installing the new node's certificate. This overhead is not present when using a CA as a new node only needs a
certificate signed by the CA to establish trust with the other nodes in the cluster.
Many organizations have a CA to sign certificates for each node. If not, see
<<certificate-authority, Appendix - Certificate Authority>> for instructions on setting up a CA.
The following steps will need to be repeated on each node to set up SSL/TLS:
* Install the CA certificate in the node's keystore
* Generate a private key and certificate for the node
* Create a signing request for the new node certificate
* Send the signing request to the CA
* Install the newly signed certificate in the node keystore
The steps in this procedure use the <<keytool,`keytool`>> command-line utility.
WARNING: Nodes that do not have SSL/TLS encryption enabled send passwords in plain text.
=== Set up a keystore
These instructions show how to place a CA certificate and a certificate for the node in a single keystore.
You can optionally store the CA certificate in a separate truststore. The configuration for this is
discussed later in this section.
First obtain the root CA certificate from your certificate authority. This certificate is used to verify that
any node certificate has been signed by the CA. Store this certificate in a keystore as a *trusted certificate*. With
the simplest configuration, Shield uses a keystore with a trusted certificate as a truststore.
The following shows how to create a keystore from a PEM encoded certificate. A _JKS file_ is a Java Key Store file.
It securely stores certificates.
[source,shell]
--------------------------------------------------
keytool -importcert \
-keystore /home/es/config/node01.jks \
-file /Users/Download/cacert.pem <1>
--------------------------------------------------
<1> The Certificate Authority's own certificate.
The keytool command will prompt you for a password, which will be used to protect the integrity of the keystore. You
will need to remember this password as it will be needed for all further interactions with the keystore.
The keystore will need to be updated when the CA certificate expires.
[[private-key]]
=== Generate a node private key and certificate
This step creates a private key and certificate that the node will use to identify itself. This step must
be done for every node.
`keytool -genkey` can generate a private key and certificate for your node. The following is a typical usage:
[source,shell]
--------------------------------------------------
keytool -genkey \
-alias node01 \ <1>
-keystore node01.jks \ <2>
-keyalg RSA \
-keysize 2048 \
-validity 712 \
-ext san=dns:node01.example.com,ip:192.168.1.1 <3>
--------------------------------------------------
<1> An alias for this public/private key-pair.
<2> The keystore for this node -- will be created.
<3> The `SubjectAlternativeName` list for this host. The `-ext` parameter is optional and can be used to specify
additional DNS names and IP Addresses that the certificate will be valid for. Multiple DNS and IP entries can
be specified by separating each entry with a comma. If this option is used, *all* names and ip addresses must
be specified in this list.
This will create an RSA public/private key-pair with a key size of 2048 bits and store it in the `node01.jks` file.
The keystore is protected with the keystore password (`myPass` in the examples that follow). The `-validity 712` argument
specifies the number of days that the certificate is valid for -- roughly two years, in this example.
The tool will prompt you for information to include in the certificate.
[IMPORTANT]
.Specifying the Node Identity
==========================
An Elasticsearch node with Shield will verify the hostname contained
in the certificate of each node it connects to. Therefore it is important
that each node's certificate contains the hostname or IP address used to connect
to the node. Hostname verification can be disabled; for more information, see
the <<ref-ssl-tls-settings, Configuration Parameters for TLS/SSL>> section.
The recommended way to specify the node identity is by providing all names and
IP addresses of a node as a `SubjectAlternativeName` list using the `-ext` option.
When using a commercial CA, internal DNS names and private IP addresses will not
be accepted as a `SubjectAlternativeName` due to https://cabforum.org/internal-names/[security concerns];
only publicly resolvable DNS names and IP addresses will be accepted. The use of an
internal CA is the most secure option for using private DNS names and IP addresses,
as it allows for node identity to be specified and verified. If you must use a commercial
CA and private DNS names or IP addresses, you will not be able to include the node
identity in the certificate and will need to disable <<ref-ssl-tls-settings, hostname verification>>.
Another way to specify node identity is by using the `CommonName` attribute
of the certificate. The first prompt from keytool, `What is your first and last name?`,
is asking for the `CommonName` attribute of the certificate. When using the `CommonName` attribute
for node identity, a DNS name must be used. The rest of the prompts by keytool are for information only.
==========================
At the end, you will be prompted to optionally enter a key password. The keystore itself is already protected by the
keystore password; this prompt is asking if you want to set a different password that is specific to this key.
Doing so may provide some incremental improvement to security.
Here is a sample interaction with `keytool -genkey`:
[source, shell]
--------------------------------------------------
What is your first and last name?
[Unknown]: node01.example.com <1>
What is the name of your organizational unit?
[Unknown]: test
What is the name of your organization?
[Unknown]: Elasticsearch
What is the name of your City or Locality?
[Unknown]: Amsterdam
What is the name of your State or Province?
[Unknown]: Amsterdam
What is the two-letter country code for this unit?
[Unknown]: NL
Is CN=node01.example.com, OU=test, O=elasticsearch, L=Amsterdam, ST=Amsterdam, C=NL correct?
[no]: yes
Enter key password for <mydomain>
(RETURN if same as keystore password):
--------------------------------------------------
<1> The DNS name or hostname of the node must be used here if you do not specify a `SubjectAlternativeName` list using the
`-ext` option.
Now you have a certificate and private key stored in `node01.jks`.
[[generate-csr]]
=== Create a certificate signing request
The next step is to get the node certificate signed by your CA. To do this you must generate a _Certificate Signing
Request_ (CSR) with the `keytool -certreq` command:
[source, shell]
--------------------------------------------------
keytool -certreq \
-alias node01 \ <1>
-keystore node01.jks \
-file node01.csr \
-keyalg rsa \
-ext san=dns:node01.example.com,ip:192.168.1.1 <2>
--------------------------------------------------
<1> The same `alias` that you specified when creating the public/private key-pair in <<private-key>>.
<2> The `SubjectAlternativeName` list for this host. The `-ext` parameter is optional and can be used to specify
additional DNS names and IP Addresses that the certificate will be valid for. Multiple DNS and IP entries can
be specified by separating each entry with a comma. If this option is used, *all* names and ip addresses must
be specified in this list.
The resulting file -- `node01.csr` -- is your _Certificate Signing Request_, or _CSR file_.
==== Send the signing request
Send the CSR file to the Certificate Authority for signing. The Certificate Authority will sign the certificate and
return a signed version of the certificate. See <<sign-csr>> if you are running your own Certificate Authority.
NOTE: When running multiple nodes on the same host, the same signed certificate can be used on each node or a unique
certificate can be requested per node if your CA supports multiple certificates with the same common name.
=== Install the newly signed certificate
Replace the existing unsigned certificate by importing the new signed certificate from your CA into the node keystore:
[source, shell]
--------------------------------------------------
keytool -importcert \
-keystore node01.jks \
-file node01-signed.crt \ <1>
-alias node01 <2>
--------------------------------------------------
<1> The name of the signed certificate file that you received from the CA.
<2> The `alias` must be the same as the alias that you used in <<private-key>>.
NOTE: keytool mistakes some PEM-encoded certificates that contain extra text headers for DER-encoded certificates, giving
this error: `java.security.cert.CertificateParsingException: invalid DER-encoded certificate data`. The text information
can be deleted from the certificate. The following openssl command will remove the text headers:
[source, shell]
--------------------------------------------------
openssl x509 -in node01-signed.crt -out node01-signed-noheaders.crt
--------------------------------------------------
=== Configure the keystores and enable SSL
NOTE: All SSL-related node settings are considered highly sensitive and therefore are not exposed via the
{ref}/cluster-nodes-info.html#cluster-nodes-info[nodes info API].
The next step is to configure the node to enable SSL, to identify itself using
its signed certificate, and to verify the identity of incoming connections.
The settings below should be added to the main `elasticsearch.yml` config file.
==== Node identity
The `node01.jks` keystore contains the certificate that `node01` will use to identify
itself to other nodes in the cluster, to transport clients, and to HTTPS
clients. Add the following to `elasticsearch.yml`:
[source, yaml]
--------------------------------------------------
shield.ssl.keystore.path: /home/es/config/node01.jks <1>
shield.ssl.keystore.password: myPass <2>
--------------------------------------------------
<1> The full path to the node keystore file.
<2> The password used to decrypt the `node01.jks` keystore.
If you specified a different password than the keystore password when executing the `keytool -genkey` command, you will
need to specify that password in the `elasticsearch.yml` configuration file:
[source, yaml]
--------------------------------------------------
shield.ssl.keystore.key_password: myKeyPass <1>
--------------------------------------------------
<1> The password entered at the end of the `keytool -genkey` command
[[create-truststore]]
==== Optional truststore configuration
The truststore holds the trusted CA certificates. Shield will use the keystore as the truststore
by default. You can optionally provide a separate path for the truststore. In this case, Shield
will use the keystore for the node's private key and the configured truststore for trusted certificates.
First obtain the CA certificates that will be trusted. Each of these certificates needs to be imported into a truststore
by running the following command for each CA certificate:
[source,shell]
--------------------------------------------------
keytool -importcert \
-keystore /home/es/config/truststore.jks \ <1>
-file /Users/Download/cacert.pem <2>
--------------------------------------------------
<1> The full path to the truststore file. If the file does not exist it will be created.
<2> A trusted CA certificate.
The keytool command will prompt you for a password, which will be used to protect the integrity of the truststore. You
will need to remember this password as it will be needed for all further interactions with the truststore.
Add the following to `elasticsearch.yml`:
[source, yaml]
--------------------------------------------------
shield.ssl.truststore.path: /home/es/config/truststore.jks <1>
shield.ssl.truststore.password: myPass <2>
--------------------------------------------------
<1> The full path to the truststore file.
<2> The password used to decrypt the `truststore.jks` keystore.
[[ssl-transport]]
==== Enable SSL on the transport layer
Enable SSL on the transport networking layer to ensure that communication between nodes is encrypted. Add the following
value to the `elasticsearch.yml` configuration file:
[source, yaml]
--------------------------------------------------
shield.transport.ssl: true
--------------------------------------------------
Regardless of this setting, transport clients can only connect to the cluster with a valid username and password.
[[disable-multicast]]
==== Disable multicast
Multicast {ref}/modules-discovery.html[discovery] is
not supported with Shield. To properly secure node communications, disable multicast by setting the following values
in the `elasticsearch.yml` configuration file:
[source, yaml]
--------------------------------------------------
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["node01:9300", "node02:9301"]
--------------------------------------------------
You can learn more about unicast configuration in the {ref}/modules-discovery.html[Zen Discovery] documentation.
[[ssl-http]]
==== Enable SSL on the HTTP layer
SSL should be enabled on the HTTP networking layer to ensure that communication between HTTP clients and the cluster is
encrypted:
[source, yaml]
--------------------------------------------------
shield.http.ssl: true
--------------------------------------------------
Regardless of this setting, HTTP clients can only connect to the cluster with a valid username and password.
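For example, once `shield.http.ssl` is enabled, an HTTPS request might look like the sketch below (this reuses the CA
certificate and node certificate created earlier; the host name must match the certificate's CN or
`SubjectAlternativeName`, `node01.example.com` in this example):
[source,shell]
------------------------------------------------------------
curl --cacert /Users/Download/cacert.pem --user rdeniro:taxidriver 'https://node01.example.com:9200/'
------------------------------------------------------------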
Congratulations! At this point, you have a node with encryption enabled for both the HTTP and transport layers.
Your node will correctly present its certificate to other nodes or clients when connecting. There are optional,
more advanced features you may use to further configure or protect your node. They are described in the following
paragraphs.
[[ciphers]]
=== Enabling Cipher Suites for Stronger Encryption
The SSL/TLS protocols use a cipher suite that determines the strength of encryption used to protect the data. You may
want to increase the strength of encryption used when using an Oracle JVM; the IcedTea OpenJDK ships without these
restrictions in place. This step is not required to successfully use Shield.
The Java Cryptography Extension (JCE) Unlimited Strength Jurisdiction Policy Files enable additional cipher suites for
Java in a separate JAR file that you need to add to your Java installation. You can download this JAR file from
Oracle's http://www.oracle.com/technetwork/java/javase/downloads/index.html[download page]. The JCE Unlimited Strength
Jurisdiction Policy Files are required for encryption with key lengths greater than 128 bits, such as 256-bit AES
encryption.
After installation, all cipher suites in the JCE are available for use. To enable the use of stronger cipher suites with
Shield, configure the `ciphers` parameter. See the <<ref-ssl-tls-settings, Configuration Parameters for TLS/SSL>> section
of this document for specific parameter information.
NOTE: The JCE Unlimited Strength Jurisdiction Policy Files must be installed on all nodes to establish an improved level
of encryption strength.
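As a sketch of what this might look like in `elasticsearch.yml` (the `shield.ssl.ciphers` setting name and the cipher
suite names below are assumptions for illustration; consult the <<ref-ssl-tls-settings, Configuration Parameters for
TLS/SSL>> section for the authoritative names and defaults):
[source,yaml]
------------------------------------------------------------
shield.ssl.ciphers: [ "TLS_RSA_WITH_AES_256_CBC_SHA256", "TLS_RSA_WITH_AES_128_CBC_SHA256" ]
------------------------------------------------------------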
[[separating-node-client-traffic]]
=== Separating node to node and client traffic
Elasticsearch supports so-called {ref}/modules-transport.html#_tcp_transport_profiles[TCP transport profiles], which
allow Elasticsearch to bind to several ports and addresses. Shield extends this functionality to enhance the security
of the cluster by enabling the separation of node-to-node transport traffic from client transport traffic. This
is important if the client transport traffic is not trusted and could potentially be malicious. To separate the node-to-node
traffic from the client traffic, add the following to `elasticsearch.yml`:
[source, yaml]
--------------------------------------------------
transport.profiles.client<1>:
port: 9500-9600 <2>
shield:
type: client <3>
--------------------------------------------------
<1> `client` is the name of this example profile
<2> The port range that will be used by transport clients to communicate with this cluster
<3> A type of `client` enables additional filters for added security by denying internal cluster operations (e.g. shard-level
    actions and ping requests)
If supported by your environment, an internal network can be used for node-to-node traffic and a public network can be
used for client traffic by adding the following to `elasticsearch.yml`:
[source, yaml]
--------------------------------------------------
transport.profiles.default.bind_host: 10.0.0.1 <1>
transport.profiles.client.bind_host: 1.1.1.1 <2>
--------------------------------------------------
<1> The bind address for the network that will be used for node to node communication
<2> The bind address for the network used for client communication
If separate networks are not available, then <<ip-filtering, IP Filtering>> can be enabled to limit access to the profiles.
The TCP transport profiles also allow for enabling SSL on a per-profile basis. This is useful if you have a secured network
for the node-to-node communication, but the client is on an unsecured network. To enable SSL on a client profile when SSL is
disabled for node-to-node communication, add the following to `elasticsearch.yml`:
[source, yaml]
--------------------------------------------------
transport.profiles.client.ssl: true <1>
--------------------------------------------------
<1> This enables SSL on the client profile. The default value for this setting is the value of `shield.transport.ssl`.
When using SSL for transport, a different set of certificates can also be used for the client traffic by adding the
following to `elasticsearch.yml`:
[source, yaml]
--------------------------------------------------
transport.profiles.client.shield.truststore:
path: /path/to/another/truststore
password: changeme
transport.profiles.client.shield.keystore:
path: /path/to/another/keystore
password: changeme
--------------------------------------------------
To change the default behavior that requires certificates for transport clients, set the following value in the `elasticsearch.yml`
file:
[source, yaml]
--------------------------------------------------
transport.profiles.client.shield.ssl.client.auth: no
--------------------------------------------------
This setting keeps certificate authentication active for node-to-node traffic, but removes the requirement to distribute
a signed certificate to transport clients. Please see the <<transport-client, Transport Client>> section.
[[ip-filtering]]
=== IP filtering
You can apply IP filtering to application clients, node clients, or transport clients, in addition to other nodes that
are attempting to join the cluster.
If a node's IP address is on the blacklist, Shield will disallow the connection to Elasticsearch. The connection will
be dropped immediately, and no requests will be processed.
NOTE: Elasticsearch installations are not designed to be publicly accessible over the Internet. IP Filtering and the
other security capabilities of Shield do not change this condition.
==== Node filtering
Shield provides an access control feature that allows or rejects hosts, domains, or subnets.
===== Configuration setting
IP filtering is configured in the `elasticsearch.yml` file.
===== Configuration Syntax
IP filtering is configured with one `allow` and one `deny` setting, each of which accepts either a single value or an array of values. The `allow` rule takes precedence over the `deny` rule.
.Example 1. Allow/Deny Statement Priority
[source,yaml]
--------------------------------------------------
shield.transport.filter.allow: "192.168.0.1"
shield.transport.filter.deny: "192.168.0.0/24"
--------------------------------------------------
The `_all` keyword denies all connections that are not explicitly allowed earlier in the file.
.Example 2. `_all` Keyword Usage
[source,yaml]
--------------------------------------------------
shield.transport.filter.allow: [ "192.168.0.1", "192.168.0.2", "192.168.0.3", "192.168.0.4" ]
shield.transport.filter.deny: _all
--------------------------------------------------
The IP filtering configuration supports IPv6 addresses.
.Example 3. IPv6 Filtering
[source,yaml]
--------------------------------------------------
shield.transport.filter.allow: "2001:0db8:1234::/48"
shield.transport.filter.deny: "1234:0db8:85a3:0000:0000:8a2e:0370:7334"
--------------------------------------------------
Shield supports hostname filtering when DNS lookups are available.
.Example 4. Hostname Filtering
[source,yaml]
--------------------------------------------------
shield.transport.filter.allow: localhost
shield.transport.filter.deny: '*.google.com'
--------------------------------------------------
==== Disabling IP Filtering
Disabling IP filtering can slightly improve performance under some conditions. To disable IP filtering entirely, set the
value of the `shield.transport.filter.enabled` attribute in the `elasticsearch.yml` configuration file to `false`.
.Example 5. Disabled IP Filtering
[source,yaml]
--------------------------------------------------
shield.transport.filter.enabled: false
--------------------------------------------------
You can also disable IP filtering for the transport protocol but enable it for HTTP only, like this:
.Example 6. Enable HTTP based IP Filtering
[source,yaml]
--------------------------------------------------
shield.transport.filter.enabled: false
shield.http.filter.enabled: true
--------------------------------------------------
==== Support for TCP transport profiles
In order to support bindings on multiple hosts, you can specify the profile name as a prefix to allow or deny based on profiles:
.Example 7. Profile based filtering
[source,yaml]
--------------------------------------------------
shield.transport.filter.allow: 172.16.0.0/24
shield.transport.filter.deny: _all
transport.profiles.client.shield.filter.allow: 192.168.0.0/24
transport.profiles.client.shield.filter.deny: _all
--------------------------------------------------
Note: When you do not specify a profile, `default` is used automatically.
==== Support for HTTP
You may want to apply different filtering to the transport and HTTP protocols:
.Example 8. HTTP only filtering
[source,yaml]
--------------------------------------------------
shield.transport.filter.allow: localhost
shield.transport.filter.deny: '*.google.com'
shield.http.filter.allow: 172.16.0.0/16
shield.http.filter.deny: _all
--------------------------------------------------
[[dynamic-ip-filtering]]
==== Dynamically updating IP filter settings added[1.1.0]
In environments with highly dynamic IP addresses, such as cloud-based hosting, it is very hard to know the IP addresses up front when provisioning a machine. Instead of changing the configuration file and restarting the node, you can use the Cluster Update Settings API, like this:
[source,json]
--------------------------------------------------
curl -XPUT localhost:9200/_cluster/settings -d '{
"persistent" : {
"shield.transport.filter.allow" : "172.16.0.0/24"
}
}'
--------------------------------------------------
You can also disable filtering completely by setting `shield.transport.filter.enabled`, like this:
[source,json]
--------------------------------------------------
curl -XPUT localhost:9200/_cluster/settings -d '{
"persistent" : {
"shield.transport.filter.enabled" : false
}
}'
--------------------------------------------------
Note: To avoid locking yourself out, the default bound transport address will never be denied. This means you can always SSH into the system and use curl to apply changes.

[[auditing]]
== Auditing
[IMPORTANT]
====
Audit logs are **disabled** by default. To enable this functionality the following setting should be added to the
`elasticsearch.yml` file:
[source,yaml]
----------------------------
shield.audit.enabled: true
----------------------------
====
The audit functionality was added to keep track of important events occurring in elasticsearch, primarily those related to security
concerns. Keeping track of and persisting these events is essential for any secured environment and potentially provides
evidence of suspicious or malicious activity on the elasticsearch cluster.
Shield provides two ways to output these events: in a dedicated `access.log` file stored on the host's file system, or
in an elasticsearch index on the same or separate cluster. These options are not mutually exclusive. For example, both
options can be enabled through an entry in the `elasticsearch.yml` file:
[source,yaml]
----------------------------
shield.audit.outputs: [index, logfile]
----------------------------
It is expected that the `index` output type will be used in conjunction with the `logfile` output type. This is
because the `index` output type can lose messages if the target index is unavailable. For this reason, it is recommended
that, if auditing is enabled, then the `logfile` output type should be used as an official record of events. The `index`
output type can be enabled as a convenience to allow historical browsing of events.
Please also note that, because audit events are batched together before being indexed, they may not appear immediately.
Please refer to the `shield.audit.index.flush_interval` setting below for instructions on how to modify the frequency
with which batched events are flushed.
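For example, if near real-time indexing of audit events is not a requirement, you could batch events for longer by raising
the flush interval in `elasticsearch.yml`. This is a small sketch based on the `shield.audit.index.flush_interval` setting
described below; the `30s` value is only illustrative:
[source,yaml]
----------------------------
# Buffer audit events for 30 seconds before flushing them to the audit index
shield.audit.index.flush_interval: 30s
----------------------------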
[float]
=== Log Entry Types
Each audit-related event that occurs is represented by a single log entry whose type corresponds to the type of event
that occurred. Here are the possible log entry types:
* `anonymous_access_denied` is logged when a request is denied due to a missing authentication token.
* `authentication_failed` is logged when the authentication token cannot be matched to a known user.
* `authentication_failed [<realm>]` is logged for every realm that fails to present a valid authentication token.
  The value of _<realm>_ is the realm type.
* `access_denied` is logged when an authenticated user attempts an action the user does not have the
  <<reference,privilege>> to perform.
* `access_granted` is logged when an authenticated user attempts an action the user has the correct
  privilege to perform. At the TRACE level all system (internal) actions are logged as
  well (at all other levels they are not logged, to avoid cluttering the logs).
* `tampered_request` is logged when a request is detected to have been tampered with (typically relates to `search/scroll` requests when the scroll ID is believed to have been tampered with)
* `connection_granted` is logged when an incoming TCP connection has passed the IP filtering for a specific profile
* `connection_denied` is logged when an incoming TCP connection did not pass the IP filtering for a specific profile
To avoid needless proliferation of log entries, Shield enables you to control what entry types should be logged. This can
be done by setting the logging level. The following table lists the log entry types that will be logged for each of the
possible log levels:
.Log Entry Types and Levels
[options="header"]
|======
| Log Level | Entry Type
| `ERROR` | `authentication_failed`, `access_denied`, `tampered_request`, `connection_denied`
| `WARN` | `authentication_failed`, `access_denied`, `tampered_request`, `connection_denied`, `anonymous_access_denied`
| `INFO` | `authentication_failed`, `access_denied`, `tampered_request`, `connection_denied`, `anonymous_access_denied`, `access_granted`
| `DEBUG`     | Doesn't output additional entry types beyond `INFO`, but extends the information emitted for each entry (see <<audit-log-entry-format, Log Entry Format>> below)
| `TRACE` | `authentication_failed`, `access_denied`, `tampered_request`, `connection_denied`, `anonymous_access_denied`, `access_granted`, `connection_granted`, `authentication_failed [<realm>]`. In addition, internal system requests (self-management requests triggered by elasticsearch itself) will also be logged for `access_granted` entry type.
|======
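For example, to emit the additional per-entry detail associated with the `DEBUG` level, you could raise the level assigned
to the audit logger in `logging.yml`. This is a sketch that assumes the default `access_log` appender from the
<<logging-file, default `logging.yml` file>> shown later in this section:
[source,yaml]
----------------------------
logger:
  # DEBUG keeps the same entry types as INFO but adds more attributes per entry
  shield.audit.logfile: DEBUG, access_log
----------------------------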
[float]
[[audit-log-entry-format]]
=== Log Entry Format
As mentioned above, every log entry represents an event that occurred in the system. As such, each entry is associated with
a timestamp (at which the event occurred), the component/layer the event is associated with, and the entry/event type. In
addition, every log entry (depending on its type) carries additional information about the event.
The format of a log entry is shown below:
[source,txt]
----------------------------------------------------------------------------
[<timestamp>] [<local_node_info>] [<layer>] [<entry_type>] <attribute_list>
----------------------------------------------------------------------------
Where:
* `<timestamp>` - the timestamp of the entry (in the format configured in `logging.yml`)
* `<local_node_info>` - additional information about the local node that this log entry is printed from (the <<audit-log-entry-local-node-info, table below>> shows how this information can be controlled via settings)
* `<layer>` - the layer this entry relates to. Can be either `rest`, `transport` or `ip_filter`
* `<entry_type>` - the type of the entry as discussed above. Can be either `anonymous_access_denied`, `authentication_failed`,
  `access_denied`, `access_granted`, `connection_granted` or `connection_denied`.
* `<attribute_list>` - a comma-separated list of attributes carrying data relevant to the event (formatted as `attr1=[val1], attr2=[val2],...`)
[[audit-log-entry-local-node-info]]
.Local Node Info Settings
[options="header"]
|======
| Name | Default | Description
| `shield.audit.logfile.prefix.emit_node_name` | true | When set to `true`, the local node's name will be emitted
| `shield.audit.logfile.prefix.emit_node_host_address` | false | When set to `true`, the local node's IP address will be emitted
| `shield.audit.logfile.prefix.emit_node_host_name` | false | When set to `true`, the local node's host name will be emitted
|======
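For example, to prefix every audit log entry with the local node's IP address in addition to its name, you could add the
following to `elasticsearch.yml` (a sketch based on the settings above):
[source,yaml]
----------------------------
shield.audit.logfile.prefix.emit_node_name: true
shield.audit.logfile.prefix.emit_node_host_address: true
----------------------------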
The following tables describe the possible attributes each entry type can carry (the attributes that will be available depend on the configured log level):
.`[rest] [anonymous_access_denied]` attributes
[options="header"]
|======
| Attribute | Minimum Log Level | Description
| `origin_address`    | WARN                | The address the REST request originated from
| `uri` | WARN | The REST endpoint URI
| `request_body` | DEBUG | The body of the request
|======
.`[rest] [authentication_failed]` attributes
[options="header"]
|======
| Attribute | Minimum Log Level | Description
| `origin_address`    | ERROR               | The address the REST request originated from
| `principal` | ERROR | The principal (username) that failed to authenticate
| `uri` | ERROR | The REST endpoint URI
| `request_body` | DEBUG | The body of the request
| `realm` | TRACE | The realm that failed to authenticate the user. NOTE: A separate entry will be printed for each of the consulted realms
|======
.`[transport] [anonymous_access_denied]` attributes
[options="header"]
|======
| Attribute | Minimum Log Level | Description
| `origin_type`       | WARN                | The type of origin the request originated from. Can be `rest` (the request originated from a REST API request), `transport` (the request was received on the transport channel) or `local_node` (the local node issued the request)
| `origin_address`    | WARN                | The address the request originated from
| `action` | WARN | The name of the action that was executed
| `request` | DEBUG | The type of the request that was executed
| `indices` | WARN | A comma-separated list of indices this request relates to (when applicable)
|======
.`[transport] [authentication_failed]` attributes
[options="header"]
|======
| Attribute | Minimum Log Level | Description
| `origin_type`       | ERROR               | The type of origin the request originated from. Can be `rest` (the request originated from a REST API request), `transport` (the request was received on the transport channel) or `local_node` (the local node issued the request)
| `origin_address`    | ERROR               | The address the request originated from
| `principal` | ERROR | The principal (username) that failed to authenticate
| `action` | ERROR | The name of the action that was executed
| `request` | DEBUG | The type of the request that was executed
| `indices` | ERROR | A comma-separated list of indices this request relates to (when applicable)
| `realm` | TRACE | The realm that failed to authenticate the user. NOTE: A separate entry will be printed for each of the consulted realms
|======
.`[transport] [access_granted]` attributes
[options="header"]
|======
| Attribute | Minimum Log Level | Description
| `origin_type`       | INFO                | The type of origin the request originated from. Can be `rest` (the request originated from a REST API request), `transport` (the request was received on the transport channel) or `local_node` (the local node issued the request)
| `origin_address`    | INFO                | The address the request originated from
| `principal`         | INFO                | The principal (username) of the authenticated user that executed the action
| `action` | INFO | The name of the action that was executed
| `request` | DEBUG | The type of the request that was executed
| `indices` | INFO | A comma-separated list of indices this request relates to (when applicable)
|======
.`[transport] [access_denied]` attributes
[options="header"]
|======
| Attribute | Minimum Log Level | Description
| `origin_type`       | ERROR               | The type of origin the request originated from. Can be `rest` (the request originated from a REST API request), `transport` (the request was received on the transport channel) or `local_node` (the local node issued the request)
| `origin_address`    | ERROR               | The address the request originated from
| `principal`         | ERROR               | The principal (username) that was denied the action
| `action` | ERROR | The name of the action that was executed
| `request` | DEBUG | The type of the request that was executed
| `indices` | ERROR | A comma-separated list of indices this request relates to (when applicable)
|======
.`[transport] [tampered_request]` attributes
[options="header"]
|======
| Attribute | Minimum Log Level | Description
| `origin_type`       | ERROR               | The type of origin the request originated from. Can be `rest` (the request originated from a REST API request), `transport` (the request was received on the transport channel) or `local_node` (the local node issued the request)
| `origin_address`    | ERROR               | The address the request originated from
| `principal`         | ERROR               | The principal (username) associated with the tampered request
| `action` | ERROR | The name of the action that was executed
| `request` | DEBUG | The type of the request that was executed
| `indices` | ERROR | A comma-separated list of indices this request relates to (when applicable)
|======
.`[ip_filter] [connection_granted]` attributes
[options="header"]
|======
| Attribute | Minimum Log Level | Description
| `origin_address`    | TRACE               | The address the connection originated from
| `transport_profile` | TRACE               | The name of the transport profile the connection matched
| `rule` | TRACE | The IP filtering rule that granted the request
|======
.`[ip_filter] [connection_denied]` attributes
[options="header"]
|======
| Attribute | Minimum Log Level | Description
| `origin_address`    | ERROR               | The address the connection originated from
| `transport_profile` | ERROR               | The name of the transport profile the connection matched
| `rule` | ERROR | The IP filtering rule that denied the request
|======
[float]
=== Audit Logs Settings
As mentioned above, the audit logs are configured in the `logging.yml` file located in Shield's <<shield-config, config>>
directory. The following snippet shows the default logging configuration:
[[logging-file]]
.Default `logging.yml` File
[source,yaml]
----
logger:
shield.audit.logfile: INFO, access_log
additivity:
shield.audit.logfile: false
appender:
access_log:
type: dailyRollingFile
file: ${path.logs}/${cluster.name}-access.log
datePattern: "'.'yyyy-MM-dd"
layout:
type: pattern
conversionPattern: "[%d{ISO8601}] %m%n"
----
As can be seen above, by default audit information is appended to the `access.log` file located in the
standard elasticsearch `logs` directory (typically located at `$ES_HOME/logs`).
[float]
[[audit-index]]
=== Storing Audit Logs in an Elasticsearch Index
It is possible to store audit logs in an elasticsearch index. This index can be either on the same cluster, or on
a different cluster (see below). Several settings in `elasticsearch.yml` control this behavior.
.`audit log indexing configuration`
[options="header"]
|======
| Attribute | Default Setting | Description
| `shield.audit.outputs` | `logfile` | Must be set to *index* or *[index, logfile]* to enable
| `shield.audit.index.bulk_size` | `1000` | Controls how many audit events will be batched into a single write
| `shield.audit.index.flush_interval` | `1s` | Controls how often to flush buffered events into the index
| `shield.audit.index.rollover` | `daily` | Controls how often to roll over to a new index: hourly, daily, weekly, monthly.
| `shield.audit.index.events.include` | `anonymous_access_denied, authentication_failed, access_granted, access_denied, tampered_request, connection_granted, connection_denied`| The audit events to be indexed. Valid values are `anonymous_access_denied, authentication_failed, access_granted, access_denied, tampered_request, connection_granted, connection_denied`, `system_access_granted`. `_all` is a special value that includes all types.
| `shield.audit.index.events.exclude` | `system_access_granted` | The audit events to exclude from indexing. By default, `system_access_granted` events are excluded; enabling these events results in every internal node communication being indexed, which will make the index size much larger.
|======
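Putting several of these settings together, the following sketch enables both outputs, rolls the audit index over weekly,
and keeps `access_granted` events (in addition to the default `system_access_granted`) out of the index. The values are
only illustrative:
[source,yaml]
----------------------------
shield.audit.enabled: true
shield.audit.outputs: [index, logfile]
shield.audit.index.rollover: weekly
shield.audit.index.events.exclude: [access_granted, system_access_granted]
----------------------------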
.audit index settings
The settings for the index that the events are stored in can also be configured. The index settings should be placed under
the `shield.audit.index.settings` namespace. For example, the following sets the number of shards and replicas to 1 for
the audit indices:
[source,yaml]
----------------------------
shield.audit.index.settings:
index:
number_of_shards: 1
number_of_replicas: 1
----------------------------
[float]
=== Forwarding Audit Logs to a Remote Cluster
To store audit events in a remote Elasticsearch cluster, the following additional options are available.
.`remote audit log indexing configuration`
[options="header"]
|======
| Attribute | Default Setting | Description
| `shield.audit.index.client.hosts` | None | Comma separated list of host:port pairs. These hosts should be nodes in the cluster to which you want to index.
| `shield.audit.index.client.cluster.name` | None | The name of the remote cluster.
| `shield.audit.index.client.shield.user` | None | The username:password pair used to authenticate with the remote cluster.
|======
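Putting these options together, a minimal sketch of a remote audit indexing configuration in `elasticsearch.yml` might
look like the following; the hosts, cluster name, and credentials are placeholders for your own environment:
[source,yaml]
----------------------------
shield.audit.outputs: [index, logfile]
shield.audit.index.client.hosts: remote-node1:9300, remote-node2:9300
shield.audit.index.client.cluster.name: logging-cluster
shield.audit.index.client.shield.user: auditor:changeme
----------------------------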
Additional settings may be passed to the remote client by placing them under the `shield.audit.index.client` namespace.
For example, to allow the remote client to discover all of the nodes in the remote cluster you could set
the *client.transport.sniff* option.
[source,yaml]
----------------------------
shield.audit.index.client.transport.sniff: true
----------------------------

[[clients]]
== Integrating Shield with clients
You will need to update the configuration of several clients so that they work with the Shield security plugin. The jump list in
the right sidebar lists the configuration information for the clients that support Shield.
include::clients/java.asciidoc[]
include::clients/http.asciidoc[]
include::clients/logstash.asciidoc[]
include::clients/marvel.asciidoc[]
include::clients/kibana.asciidoc[]
include::clients/hadoop.asciidoc[]

include::appendices/01-certificate-authority.asciidoc[]
include::appendices/02-license-management.asciidoc[]
include::appendices/03-limitations.asciidoc[]
include::appendices/04-securing-aliases.asciidoc[]
include::appendices/05-tribe-node.asciidoc[]
include::appendices/06-example.asciidoc[]
include::appendices/07-trouble-shooting.asciidoc[]
include::appendices/08-reference.asciidoc[]
include::appendices/09-release-notes.asciidoc[]

[[certificate-authority]]
== Appendix 1. Running a Certificate Authority
A Certificate Authority (CA) can greatly simplify managing trust. Instead of trusting hundreds of certificates
individually, a client only needs to trust the certificate from the CA. When the CA signs other node certificates,
nodes that trust the CA also trust other nodes with certificates signed by the CA.
NOTE: This procedure is an example of how to set up a CA and cannot universally address a wide array of security needs.
To properly secure a production site, consult your organization's security experts to discuss requirements.
To run a CA, generate a public and private key, and wrap the public key in a certificate that clients will trust.
Node certificates are sent in a _Certificate Signing Request_ (CSR). Your CA signs the CSR, producing a newly
signed certificate that you install on the node.
IMPORTANT: Because a Certificate Authority is a central point for trust, the private keys to the CA must be protected
from compromise.
=== Setting up a CA
To set up a CA, generate a private and public key pair and build a certificate from the public key. This procedure
uses OpenSSL to create the CA certificate and sign CSRs. First, set up a file structure and configuration template for
the CA.
==== Creating the Certificate Authority
Create the `ca` directory along with the `private`, `certs`, and `conf` subdirectories, then populate the required
`serial` and `index.txt` files.
[source,shell]
--------------------------------------------------
mkdir -p ca/private ca/certs ca/conf
cd ca
echo '01' > serial
touch index.txt
--------------------------------------------------
A configuration template file specifies several configuration settings that cannot be passed from the command line.
The following sample configuration file highlights fields of particular interest.
Create the `ca/conf/caconfig.cnf` file with contents similar to the following:
[source,shell]
-------------------------------------------------------------------------------------
#..................................
[ ca ]
default_ca = CA_default
[ CA_default ]
copy_extensions = copy <1>
dir = /PATH/TO/YOUR/DIR/ca <2>
serial = $dir/serial
database = $dir/index.txt
new_certs_dir = $dir/certs
certificate = $dir/certs/cacert.pem
private_key = $dir/private/cakey.pem
default_days = 712 <3>
default_md = sha256
preserve = no
email_in_dn = no
x509_extensions = v3_ca
name_opt = ca_default
cert_opt = ca_default
policy = policy_anything
[ policy_anything ]
countryName = optional
stateOrProvinceName = optional
localityName = optional
organizationName = optional
organizationalUnitName = optional
commonName = supplied
emailAddress = optional
[ req ]
default_bits = 2048 # Size of keys
default_keyfile = key.pem # name of generated keys
default_md = sha256 # message digest algorithm
string_mask = nombstr # permitted characters
distinguished_name = req_distinguished_name
req_extensions = v3_req
[ req_distinguished_name ]
# Variable name Prompt string
#------------------------- ----------------------------------
0.organizationName = Organization Name (company)
organizationalUnitName = Organizational Unit Name (department, division)
emailAddress = Email Address
emailAddress_max = 40
localityName = Locality Name (city, district)
stateOrProvinceName = State or Province Name (full name)
countryName = Country Name (2 letter code)
countryName_min = 2
countryName_max = 2
commonName = Common Name (hostname, IP, or your name)
commonName_max = 64
# Default values for the above, for consistency and less typing.
# Variable name Value
#------------------------ ------------------------------
0.organizationName_default = Elasticsearch Test Org <4>
localityName_default = Amsterdam
stateOrProvinceName_default = Amsterdam
countryName_default = NL
emailAddress_default = cacerttest@YOUR.COMPANY.TLD
[ v3_ca ]
basicConstraints = CA:TRUE
subjectKeyIdentifier = hash
authorityKeyIdentifier = keyid:always,issuer:always
[ v3_req ]
basicConstraints = CA:FALSE
subjectKeyIdentifier = hash
---------------------------------------------------------------------------------------
<1> Copy extensions: Copies all X509 V3 extensions from a Certificate Signing Request into the signed certificate.
With the value set to `copy`, you need to ensure the extensions and their values are valid for the certificate
being requested prior to signing the certificate.
<2> CA directory: Add the full path to this newly created CA
<3> Certificate validity period: The default number of days that a certificate signed by this CA is valid for. Note that
certificates signed by a CA must expire before the CA certificate expires.
<4> Certificate Defaults: The `OrganizationName`, `localityName`, `stateOrProvinceName`, `countryName`, and
`emailAddress` fields are informational. The settings in the above example are the defaults for these values.
=== Create a CA Certificate
In the `ca` directory, create the CA certificate and export the certificate. The following command creates and signs
the CA certificate, resulting in a _self-signed_ certificate that establishes the CA as an authority.
[source,shell]
------------------------------------------------------------------------------
openssl req -new -x509 -extensions v3_ca \
-keyout private/cakey.pem \ <1>
-out certs/cacert.pem \ <2>
-days 1460 \ <3>
-config conf/caconfig.cnf
------------------------------------------------------------------------------
<1> The path to the file where the private key is stored.
<2> The path to the file where the CA certificate is stored.
<3> The duration, in days, that the CA certificate is valid. After the expiration, trust in the CA is revoked and
requires generation of a new CA certificate and re-signing of certificates.
The command prompts you to supply information to place in the certificate. You will have to pick a PEM passphrase to
encrypt the private key for your CA.
WARNING: You cannot recover the CA without this passphrase.
The following shows a sample interaction with the command above:
[source,shell]
------------------------------------------------------------------------------------------------------------------------
openssl req -new -x509 -extensions v3_ca -keyout private/cakey.pem -out certs/cacert.pem -days 1460 -config \
conf/caconfig.cnf
Generating a 2048 bit RSA private key
.....................++++++
.......++++++
writing new private key to 'private/cakey.pem'
Enter PEM pass phrase:
Verifying - Enter PEM pass phrase:
#-----
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
#-----
Organization Name (company) [Elasticsearch Test Org]:
Organizational Unit Name (department, division) []:.
Email Address [cacerttest@YOUR.COMPANY.TLD]:.
Locality Name (city, district) [Amsterdam]:.
State or Province Name (full name) [Amsterdam]:.
Country Name (2 letter code) [NL]:.
Common Name (hostname, IP, or your name) []:Elasticsearch Test CA
------------------------------------------------------------------------------------------------------------------------
You now have a CA private key and a CA certificate (which includes the public key). You can now distribute the CA
certificate and sign CSRs.
[[sign-csr]]
==== Signing a CSR
Signing a certificate with the CA means that the CA vouches for the owner of the certificate. The private key that is
linked to the certificate proves certificate ownership. The CSR includes the certificate. Signing a CSR results in a
new certificate that includes the old certificate, the CA certificate, and a signature. This resulting certificate is
a _certificate chain_. Send the certificate chain back to the private key's holder for use on the node.
TIP: If you do not yet have a CSR, you need to follow the steps described in <<private-key>> and <<generate-csr>>
before continuing.
The following commands sign the CSR with the CA:
[source,shell]
-----------------------------------------------------------------------------
openssl ca -in node01.csr -notext -out node01-signed.crt -config conf/caconfig.cnf -extensions v3_req
-----------------------------------------------------------------------------
The newly signed certificate chain `node01-signed.crt` can now be sent to the node to be imported back into its
keystore.
NOTE: If you plan on allowing more than one certificate per common name, OpenSSL must be configured to allow non-unique
subjects. This is necessary when running multiple nodes on a single host and requesting unique certificates per node.
Edit the `ca/index.txt.attr` file and ensure the `unique_subject` line matches the following:
[source, shell]
-----------------------
unique_subject = no
-----------------------
These steps provide you with a basic CA that can sign certificates for your Shield nodes.
OpenSSL is an extremely powerful tool and there are many more options available for your certification strategy,
such as intermediate authorities and restrictions on the use of certificates. There are many tutorials on the internet
for these advanced options, and the OpenSSL website details all the intricacies.

[[license-management]]
== Appendix 2. License Management
[float]
==== Installing The License Plugin
To install the license plugin, you'll need to run the following command:
[source,shell]
----------------------------------------------------------
bin/plugin -i elasticsearch/license/latest
----------------------------------------------------------
If your server doesn't have direct Internet access, it is also possible to download the plugin separately and install
it manually by following these steps:
1. Download the plugin package from https://download.elastic.co/elasticsearch/license/license-latest.zip
2. Transfer the compressed file to your server, then install the plugin using the `bin/plugin` script:
[source,shell]
----------------------------------------------------
bin/plugin -i license -u file://PATH_TO_ZIP_FILE <1>
----------------------------------------------------
<1> URI to license plugin zip distribution file (e.g. `file:///path/to/file/license-latest.zip`,
note the three slashes at the beginning)
[[installing-license]]
[float]
==== Installing A License
When installing Shield for the first time, having the license plugin installed is the minimum requirement for Shield to work.
You can simply start up the node and everything will work as expected. The first time you start up the node, a 30-day
trial license will automatically be created, enabling Shield to be fully operational. Within these 30 days, you
will be able to replace the trial license with the one provided to you upon purchase. Updating the
license can be done at runtime (there is no need to shut down the nodes) using a dedicated API.
IMPORTANT: With a valid license, Shield will be fully operational. Upon license expiry, Shield will operate in a
degraded mode, where cluster health, cluster stats, and index stats APIs will be blocked. All other operations will
continue operating normally. Find out more about <<license-expiration, Shield license expiration>>.
The license itself is a _JSON_ file containing all information about the license (e.g. feature name, expiry date, etc...).
To install or update the license use the following REST API:
[source,shell]
-----------------------------------------------------------------------
curl -XPUT -u admin 'http://<host>:<port>/_licenses' -d @license.json
-----------------------------------------------------------------------
Where:
* `<host>` is the hostname of the elasticsearch node (`localhost` if executing locally)
* `<port>` is the http port (defaults to `9200`)
* `license.json` is the license json file
NOTE: The put license API is protected under the cluster admin privilege, therefore it has to be executed
by a user with the appropriate permissions.
[float]
=== Listing Currently Installed Licenses
You can list all currently installed licenses by executing the following REST API:
[source,shell]
-----------------------------------------------------
curl -XGET -u admin:password 'http://<host>:<port>/_licenses'
-----------------------------------------------------
The response to this command is a JSON document listing all installed licenses. In the case of Shield, an entry similar to the following
will be shown:
[source,json]
--------------------------------------------
{
"licenses": [
...
{
"status" : "active",
"uid" : "sample_uid",
"type" : "sample_type",
"subscription_type" : "sample_subscription_type",
"issue_date" : "2015-01-26T00:00:00.000Z",
"issue_date_in_millis" : 1422230400000,
"feature" : "shield",
"expiry_date" : "2015-04-26T23:59:59.999Z",
"expiry_date_in_millis" : 1430092799999,
"max_nodes" : 1,
"issued_to" : "sample customer",
"issuer" : "elasticsearch"
}
...
]
}
--------------------------------------------
NOTE: The get license API is protected under the cluster admin privilege, therefore it has to be executed
by a user with the appropriate permissions.
[[license-expiration]]
[float]
=== License Expiration
License expiration should never be a surprise. Beginning 30 days from license expiration, Shield will begin logging daily messages
containing the license expiration date and a brief description of unlicensed behavior. Beginning 7 days from license expiration,
Shield will begin logging error messages every 10 minutes with the same information. After expiration, Shield will continue to
log error messages informing you that the license has expired. These messages will also be generated at node startup, to ensure
that there are no surprises. Here is an example message:
[source,sh]
---------------------------------------------------------------------------------------------------------------------------------
[ERROR][shield.license] Shield license will expire on 1/1/1970. Cluster health, cluster stats and indices stats operations are
blocked on Shield license expiration. All data operations (read and write) continue to work. If you have a new license, please
update it. Otherwise, please reach out to your support contact.
---------------------------------------------------------------------------------------------------------------------------------
When the license for Shield is expired, Shield will block requests to the cluster health, cluster stats, and index stats APIs.
Calls to these APIs will fail with a LicenseExpiredException, and will return HTTP status code 401. By disabling only these APIs,
any automated cluster monitoring should detect the license failure, while users of the cluster should not be immediately impacted.
It is not recommended to run for any length of time with a disabled Shield license; cluster health and stats APIs are critical
for monitoring and management of an Elasticsearch cluster.
The following is an example of the error response clients will receive when the license is expired and the cluster health, cluster stats, or index stats APIs are called:
[source,json]
----------------------------------------------------------------------------------------------------------------------------------------------
{"error":"LicenseExpiredException[license expired for feature [shield]]","status":401}
----------------------------------------------------------------------------------------------------------------------------------------------
If you receive a new license file and <<installing-license, install it>>, it will take effect immediately and the health and
stats APIs will be available.

[[limitations]]
== Appendix 3. Limitations
[float]
=== Plugins
Elasticsearch's plugin infrastructure is extremely flexible in terms of what can be extended. While it opens up Elasticsearch
to a wide variety of (often custom) additional functionality, when it comes to security, this high extensibility level
comes at a cost. We have no control over the third-party plugins' code (open source or not) and therefore we cannot
guarantee their compliance with Shield. For this reason, third-party plugins are not officially supported on clusters
with the Shield security plugin installed.
[float]
=== Changes in Index Wildcard Behavior
Elasticsearch clusters with the Shield security plugin installed apply the `/_all` wildcard, and all other wildcards,
to the indices that the current user has privileges for, not the set of all indices on the cluster. There are two
notable results of this behavior:
* Elasticsearch clusters with the Shield security plugin installed do not honor the `ignore_unavailable` option.
This behavior means that requests involving indices that the current user lacks authorization for throw an
`AuthorizationException` error, regardless of the option's setting.
* The `allow_no_indices` option is ignored, resulting in the following behavior: when the final set of indices after
wildcard expansion and replacement is empty, the request throws an `IndexMissingException` error.
As a general principle, core Elasticsearch will return empty results in scenarios where wildcard expansion returns no
indices, while Elasticsearch with Shield returns exceptions. Note that this behavior means that operations with
multiple items will fail the entire set of operations if any one operation throws an exception due to wildcard
expansion resulting in an empty set of authorized indices.
[[limitations-filtered-aliases]]
[float]
=== Filtered Index Aliases
You can combine a secured index alias with a {ref}/query-dsl-filters.html[filter]
to approximate document-level security. By manipulating the specific filtering, you can control the set of documents
that users with privileges on that index alias can access.
WARNING: Filtering secured index aliases does not provide security for documents retrieved through the
{ref}/docs-get.html[get api]. Read
https://github.com/elasticsearch/elasticsearch/issues/3861[elasticsearch issue #3861] to learn more about this limitation.
Users can obtain secure near-real-time get under this restriction with searches by document ID, using the
{ref}/search-search.html[search api] instead. Restrict get operations when you use this approach by granting the `search`
privilege and disallowing `get`.
WARNING: In Elasticsearch, issuing a delete operation on an alias also deletes all of the indices that the alias
points to, regardless of the filter that the alias might hold. Keep this behavior in mind when granting users
administrative privileges to filtered index aliases. Read
https://github.com/elasticsearch/elasticsearch/issues/2318[elasticsearch issue #2318] to learn more about this limitation.
[float]
=== Queries and Filters
[[limitations-disable-cache]]
[float]
==== Elasticsearch 1.6+
Elasticsearch 1.6 removes all of the limitations below with queries and filters, *but* there is the possibility of
authorization being bypassed when using a terms filter with the
{ref}/query-dsl-terms-filter.html#_terms_lookup_mechanism[terms lookup mechanism]. The authorization that could be
bypassed is for the index containing the terms. In order to ensure that all requests are properly authorized when using
Shield 1.2.0 and 1.2.1, add the following setting to your `elasticsearch.yml` file:
[source,yaml]
--------------------------------------------------
indices.cache.filter.terms.size: 0
--------------------------------------------------
[float]
==== Elasticsearch pre-1.6.0
Certain Elasticsearch requests execute other requests as part of their implementation. Some of these requests do not
maintain the security context that the original request was made with. This causes an `AuthorizationException` even when
the user has authorization to make the subsequent requests. The following requests have this behavior:
* {ref}/query-dsl-mlt-query.html[More Like This Query]
* {ref}/query-dsl-geo-shape-query.html[GeoShape Query] and {ref}/query-dsl-geo-shape-filter.html[GeoShape Filter] when
used with an {ref}/query-dsl-geo-shape-filter.html#_pre_indexed_shape[indexed shape]
* {ref}/query-dsl-terms-filter.html[Terms Filter] when using the {ref}/query-dsl-terms-filter.html#_terms_lookup_mechanism[terms lookup mechanism]
* {ref}/search-suggesters-phrase.html[Phrase Suggester] when specifying the `collate` field
* Any query using {ref}/modules-scripting.html#_indexed_scripts[indexed scripts]
* Queries using a {ref}/search-template.html[search template]
[float]
=== Document Expiration (_ttl)
Document expiration handled using the built-in {ref}/mapping-ttl-field.html#mapping-ttl-field[`_ttl` (time to live) mechanism]
does not work with Shield. The document deletions will fail and the documents continue to live past their expiration.
[float]
=== LDAP Realm
The <<ldap, LDAP Realm>> does not currently support the discovery of nested LDAP Groups. For example, if a user is a member
of GroupA and GroupA is a member of GroupB, only GroupA will be discovered. However, the <<active_directory, Active Directory Realm>> _does_
support transitive group membership.

[[securing-aliases]]
== Appendix 4. Securing Indices & Aliases
Elasticsearch allows you to execute operations against {ref}/indices-aliases.html[index aliases],
which are effectively virtual indices. An alias points to one or more indices, holds metadata and potentially a filter.
Shield treats aliases and indices the same. Privileges for indices actions are granted on specific indices or aliases.
In order for an indices action to be authorized by Shield, the user that executes it needs to have permissions for that
action on all the specific indices or aliases that the request relates to.
Let's look at an example. Assuming we have an index called `2015`, an alias that points to it called `current_year`,
and a user with the following role:
[source,yaml]
--------------------------------------------------
current_year_read:
indices:
'2015': read
--------------------------------------------------
The user attempts to retrieve a document from `current_year`:
[source,shell]
-------------------------------------------------------------------------------
curl -XGET 'localhost:9200/current_year/logs/1'
-------------------------------------------------------------------------------
The above request gets rejected, although the user has read permissions on the concrete index that the `current_year`
alias points to. The correct permission would be as follows:
[source,yaml]
--------------------------------------------------
current_year_read:
indices:
'current_year': read
--------------------------------------------------
[float]
=== Managing aliases
Unlike creating indices, which requires `create_index` privilege, adding/removing/retrieving aliases requires
`manage_aliases` permission. Aliases can be added to an index directly as part of the index creation:
[source,shell]
-------------------------------------------------------------------------------
curl -XPUT localhost:9200/2015 -d '{
"aliases" : {
"current_year" : {}
}
}'
-------------------------------------------------------------------------------
or via the dedicated aliases api if the index already exists:
[source,shell]
-------------------------------------------------------------------------------
curl -XPOST 'http://localhost:9200/_aliases' -d '
{
"actions" : [
{ "add" : { "index" : "2015", "alias" : "current_year" } }
]
}'
-------------------------------------------------------------------------------
The above requests both require `manage_aliases` privilege on the alias name as well as the targeted index, as follows:
[source,yaml]
--------------------------------------------------
admin:
indices:
'20*,current_year': create_index,manage_aliases
--------------------------------------------------
Note also that the `manage` privilege includes both `create_index` and `manage_aliases` in addition to all of the other
management related privileges:
[source,yaml]
--------------------------------------------------
admin:
indices:
'20*,current_year': manage
--------------------------------------------------
The index aliases API also allows you to delete aliases from existing indices, as follows. The privileges required for such
a request are the same as above: both the index and the alias need the `manage_aliases` permission.
[source,shell]
-------------------------------------------------------------------------------
curl -XPOST 'http://localhost:9200/_aliases' -d '
{
"actions" : [
{ "delete" : { "index" : "2015", "alias" : "current_year" } }
]
}'
-------------------------------------------------------------------------------
[float]
=== Filtered aliases
Aliases can hold a filter, which allows you to select a subset of the documents in the physical index that can be accessed.
Filtered aliases allow you to mimic document level security, but they have limitations. Please read
the <<limitations-filtered-aliases,limitations>> section to learn more.

[[tribe-node]]
== Appendix 5. Tribe Node
Shield supports the {ref}/modules-tribe.html[Tribe Node], which acts as a federated client across multiple clusters.
When using Tribe Node with Shield, you must have the same Shield configurations (users, roles, user-role mappings, SSL/TLS CA)
on each cluster, and on the Tribe Node itself, where security checking is primarily done. This, of course, also means
that all clusters must be running Shield. The following are the current limitations to keep in mind when using the
Tribe Node in combination with Shield.
[float]
=== Same privileges on all connected clusters
The Tribe Node has its own configuration and privileges, which need to grant access to actions and indices on all of the
connected clusters. Also, each cluster needs to grant access to indices belonging to other connected clusters as well.
Let's look at an example: assume we have two clusters, `cluster1` and `cluster2`, each holding an index, `index1`
and `index2` respectively. A search request that targets multiple clusters, as follows,
[source,shell]
-----------------------------------------------------------
curl -XGET tribe_node:9200/index1,index2/_search -u tribe_user:tribe_user
-----------------------------------------------------------
requires `search` privileges for both `index1` and `index2` on the Tribe Node:
[source,yaml]
-----------------------------------------------------------
tribe_user:
indices:
'index*': search
-----------------------------------------------------------
Also, the same privileges need to be granted on the connected clusters, meaning that `cluster1` has to grant access to
`index2` even though `index2` only exists on `cluster2`; the same requirement applies for `index1` on `cluster2`. This
applies to any indices action. As for cluster state read operations (e.g. cluster state api, get mapping api etc.),
they always get executed locally on the Tribe Node, to make sure that the merged cluster state gets returned; their
privileges are then required on the Tribe Node only.
[float]
=== Same system key on all clusters
In order for <<message-authentication,message authentication>> to properly work across multiple clusters, the Tribe Node
and all of the connected clusters need to share the same system key.
[float]
=== Encrypted communication
Encrypted communication via SSL can only be enabled globally, meaning that either all of the connected clusters and the
Tribe Node have SSL enabled, or none of them have.
[float]
=== Same certification authority on all clusters
When using encrypted communication, for simplicity, we recommend all of the connected clusters and the Tribe Node use
the same certification authority to generate their certificates.
[float]
=== Example
Let's see a complete example of how to use the Tribe Node with Shield and the configuration required. First of all, the
Shield and License plugins need to be installed and enabled on all clusters and on the Tribe Node.
The system key needs to be generated on one node, as described in the <<message-authentication, Getting Started section>>,
and then copied over to all of the other nodes in each cluster and the Tribe Node itself.
Each cluster can have its own users with `admin` privileges that don't need to be present on the Tribe Node too. In fact,
administration tasks (e.g. create index) cannot be performed through the Tribe Node but need to be sent directly to the
corresponding cluster. The users that need to be created on the Tribe Node are those used to retrieve data merged from
the different clusters through the Tribe Node itself. For instance, let's create a `tribe_user` user, with
role `user`, that has `read` privileges on any index.
[source,shell]
-----------------------------------------------------------
./bin/shield/esusers useradd tribe_user -p tribe_user -r user
-----------------------------------------------------------
The above command needs to be executed on each cluster, since the same user needs to be present on the Tribe Node as well
as on every connected cluster.
The following is the configuration required on the Tribe Node, which needs to be added to `elasticsearch.yml`.
Elasticsearch allows you to list specific settings per cluster. We disable multicast discovery as described in the
<<disable-multicast, Disable Multicast section>> and configure the proper unicast discovery hosts for each cluster,
as well as their cluster names:
[source,yaml]
-----------------------------------------------------------
tribe:
t1:
cluster.name: tribe1
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["tribe1:9300"]
t2:
cluster.name: tribe2
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["tribe2:9300"]
-----------------------------------------------------------
The Tribe Node can then be started and once initialized it will be ready to accept requests like the following search,
which will return documents coming from the different connected clusters:
[source,shell]
-----------------------------------------------------------
curl -XGET localhost:9200/_search -u tribe_user:tribe_user
-----------------------------------------------------------
As for encrypted communication, the required settings are the same as described in <<securing-nodes, Securing Nodes>>,
but need to be specified per tribe as we did for discovery settings above.

[[example]]
== Appendix 6. Full `esusers` Example
[float]
=== Putting it all together: Ecommerce Store Example
The e-commerce store site in this example store has the following components:
* A webshop application, which executes queries
* A nightly bulk import process, which reindexes the documents to ensure correct pricing for the following day
* An update mechanism that writes data concurrently during business hours on a per-document basis
* A sales representative that needs to read sales-specific indices
[float]
=== Defining the roles
[source,yaml]
--------------------------------------------------
bulk:
indices:
'products_*': write, manage, read
updater:
indices:
'products': index, delete, indices:admin/optimize
webshop:
indices:
'products': search, get
monitoring:
cluster: monitor
indices:
'*': monitor
sales_rep :
cluster : none
indices:
'sales_*' : all
'social_events' : data_access, monitor
--------------------------------------------------
Let's step through each of the role definitions:
* The `bulk` role definition has the privileges to create/delete all indices starting with `products_`, as well as to
index data into them. This set of privileges enables the user with this role to delete and repopulate a particular
index.
* The `updater` role does not require any information about concrete indices. The only privileges required for updating
the `products` index are the `index` and `delete` privileges, as well as index optimization.
* The `webshop` role is a read-only role that solely executes queries and GET requests.
* The `monitoring` role extracts monitoring data for display on an internal screen of the web application.
* The `sales_rep` role has write access on all indices starting with `sales` and read access to the `social_events`
index.
[float]
=== Creating Users and Their Roles
After creating the `roles.yml` file, you can use the `esusers` tool to create the needed users and the respective
user-to-role mapping.
[source,shell]
-----------------------------------------------------------
bin/shield/esusers useradd webshop -r webshop,monitoring
-----------------------------------------------------------
[source,shell]
-----------------------------------------------------------
bin/shield/esusers useradd bulk -r bulk
-----------------------------------------------------------
[source,shell]
-----------------------------------------------------------
bin/shield/esusers useradd updater -r updater
-----------------------------------------------------------
[source,shell]
--------------------------------------------------------------------
bin/shield/esusers useradd best_sales_guy_of_the_world -r sales_rep
--------------------------------------------------------------------
[source,shell]
----------------------------------------------------------------------------
bin/shield/esusers useradd second_best_sales_guy_of_the_world -r sales_rep
----------------------------------------------------------------------------
[float]
=== Modifying Your Application
With the users and roles defined, you now need to modify your application. Each part of the application must
authenticate to Elasticsearch using the username and password you gave it in the previous steps.

[[trouble-shooting]]
== Appendix 7. Trouble Shooting
[float]
=== `settings`
Some settings are not returned via the nodes settings API::
+
--
This is intentional. Some of the settings are considered to be highly sensitive (e.g. all `ssl` settings, ldap `bind_dn`,
`bind_password` and `hostname_verification`). For this reason, we filter these settings out and do not expose them via the
nodes info REST API endpoint. It is also possible to define additional sensitive settings that should be hidden using
the `shield.hide_settings` setting:
[source, yaml]
------------------------------------------
shield.hide_settings: shield.authc.realms.ldap1.url, shield.authc.realms.ad1.*
------------------------------------------
The snippet above will also hide the `url` settings of the `ldap1` realm and all settings of the `ad1` realm.
--
[float]
=== `esusers`
I configured the appropriate roles and the users, but I still get an authorization exception::
+
--
Verify that the role names associated with the users match the roles defined in the `roles.yml` file. You
can use the `esusers` tool to list all the users. Any unknown roles are marked with `*`.
[source, shell]
------------------------------------------
esusers list
rdeniro : admin
alpacino : power_user
jacknich : marvel,unknown_role* <1>
------------------------------------------
<1> `unknown_role` was not found in `roles.yml`
--
ERROR: extra arguments [...] were provided::
+
--
This error occurs when the `esusers` tool parses the input and finds unexpected arguments. This can happen when
special characters are used in some of the arguments. For example, on Windows systems the `,` character is considered
a parameter separator; in other words, `-r role1,role2` is translated to `-r role1 role2` and the `esusers` tool only recognizes
`role1` as an expected parameter. The solution is to quote the parameter: `-r "role1,role2"`.
--
[[trouble-shoot-active-directory]]
[float]
=== Active Directory
Certain users are being frequently locked out of Active Directory::
+
--
Check your realm configuration; realms are checked serially, one after another. If your Active Directory realm is being checked before other realms and there are usernames
that appear in both Active Directory and another realm, a valid login for one realm may be causing failed login attempts in another realm.
For example, if `UserA` exists in both Active Directory and esusers, and the Active Directory realm is checked first and
esusers is checked second, an attempt to authenticate as `UserA` in the esusers realm would first attempt to authenticate
against Active Directory and fail, before successfully authenticating against the esusers realm. Because authentication is
verified on each request, the Active Directory realm would be checked - and fail - on each request for `UserA` in the esusers
realm. In this case, while the Shield request completed successfully, the account on Active Directory would have received
several failed login attempts, and that account may become temporarily locked out. Plan the order of your realms accordingly.
Also note that it is not typically necessary to define multiple Active Directory realms to handle domain controller failures. When using Microsoft DNS, the DNS entry for
the domain should always point to an available domain controller.
--
[float]
=== LDAP
I can authenticate to LDAP, but I still get an authorization exception::
+
--
A number of configuration options can cause this error.
|======================
|_group identification_ |
Groups are located either by an LDAP search or by the `memberOf` attribute on
the user. Also, if subtree search is turned off, the search will only go one
level deep. See the <<ldap-settings, LDAP Settings>> for all the options.
There are many options here and sticking to the defaults will not work for all
scenarios.
| _group to role mapping_|
Either the `role_mapping.yml` file or the location for this file could be
misconfigured. See <<ref-shield-files, Shield Files>> for more.
|_role definition_|
Either the `roles.yml` file or the location for this file could be
misconfigured. See <<ref-shield-files, Shield Files>> for more.
|======================
To help track down these possibilities, add `shield.authc: DEBUG` to the `logging.yml` <<shield-config, config file>>. A successful
authentication should produce debug statements that list groups and role mappings.
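For example, assuming the standard `logging.yml` layout with a `logger` section, the debug logger could be enabled like this (a minimal sketch; adjust it to your existing `logging.yml`):
[source,yaml]
------------------------------------------
# minimal sketch - place under the existing `logger` section of logging.yml
logger:
  shield.authc: DEBUG
------------------------------------------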
--
[float]
=== Encryption & Certificates
`curl` on the Mac returns a certificate verification error even when the `--cacert` option is used::
+
--
Apple's integration of `curl` with their keychain technology disables the `--cacert` option.
See http://curl.haxx.se/mail/archive-2013-10/0036.html for more information.
You can use another tool, such as `wget`, to test certificates. Alternatively, you can add the certificate for the
signing certificate authority to the MacOS system keychain, using a procedure similar to the one detailed in the
http://support.apple.com/kb/PH14003[Apple knowledge base]. Be sure to add the signing CA's certificate and not the server's certificate.
--
[float]
==== SSLHandshakeException causing connections to fail
An `SSLHandshakeException` causes a connection to a node to fail and indicates that there is a configuration issue. Some of the
common exceptions are shown below with tips on how to resolve these issues.
`java.security.cert.CertificateException: No name matching node01.example.com found`::
+
--
Indicates that a client connection was made to `node01.example.com` but the certificate returned did not contain the name `node01.example.com`.
In most cases, the issue can be resolved by ensuring the name is specified as a `SubjectAlternativeName` during <<private-key, certificate creation>>.
Another scenario is an environment that does not wish to use DNS names in certificates at all. In this scenario, all settings
in `elasticsearch.yml` should only use IP addresses, and the following setting needs to be set in `elasticsearch.yml`:
[source, yaml]
--------------------------------------------------
shield.ssl.hostname_verification.resolve_name: false
--------------------------------------------------
--
`java.security.cert.CertificateException: No subject alternative names present`::
+
--
Indicates that a client connection was made to an IP address but the returned certificate did not contain any `SubjectAlternativeName` entries.
IP addresses are only used for hostname verification if they are specified as a `SubjectAlternativeName` during
<<private-key, certificate creation>>. If the intent was to use IP addresses for hostname verification, then the certificate
will need to be regenerated. Also verify that `shield.ssl.hostname_verification.resolve_name: false` is *not* set in
`elasticsearch.yml`.
--
`javax.net.ssl.SSLHandshakeException: null cert chain` and `javax.net.ssl.SSLException: Received fatal alert: bad_certificate`::
+
--
The `SSLHandshakeException` above indicates that a self-signed certificate was returned by the client that is not trusted
as it cannot be found in the `truststore` or `keystore`. The `SSLException` above is seen on the client side of the connection.
--
`sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target` and `javax.net.ssl.SSLException: Received fatal alert: certificate_unknown`::
+
--
The `SunCertPathBuilderException` above indicates that a certificate was returned during the handshake that is not trusted.
This message is seen on the client side of the connection. The `SSLException` above is seen on the server side of the
connection. The CA certificate that signed the returned certificate was not found in the `keystore` or `truststore` and
needs to be added to trust this certificate.
--
[float]
==== Other SSL/TLS related exceptions
There are other SSL-related exceptions that may be seen in the logs. Below you will find some common exceptions and their
meaning.
WARN: received plaintext http traffic on a https channel, closing connection::
+
--
Indicates that there was an incoming plaintext HTTP request. This typically occurs when an external application attempts
to make an unencrypted call to the REST interface. Please ensure that all applications are using `https` when calling the
REST interface with SSL enabled.
--
`org.jboss.netty.handler.ssl.NotSslRecordException: not an SSL/TLS record:`::
+
--
Indicates that there was incoming plaintext traffic on an SSL connection. This typically occurs when a node is not
configured to use encrypted communication and tries to connect to nodes that are using encrypted communication. Please
verify that all nodes are using the same setting for `shield.transport.ssl`.
--
`java.io.StreamCorruptedException: invalid internal transport message format, got`::
+
--
Indicates an issue with data received on the transport interface in an unknown format. This can happen when a node with
encrypted communication enabled connects to a node that has encrypted communication disabled. Please verify that all
nodes are using the same setting for `shield.transport.ssl`.
--
`java.lang.IllegalArgumentException: empty text`::
+
--
This exception is typically seen when an `https` request is made to a node that is not using `https`. If `https` is desired,
please ensure the following setting is in `elasticsearch.yml`:
[source,yaml]
----------------
shield.http.ssl: true
----------------
--
ERROR: unsupported ciphers [...] were requested but cannot be used in this JVM::
+
--
This error occurs when an SSL/TLS cipher suite is specified that is not supported by the JVM that Elasticsearch is running
in. Shield will try to use only the specified cipher suites that are supported by this JVM. This error can occur when using
the Shield defaults, as some distributions of OpenJDK do not enable the PKCS11 provider by default. In this case, we
recommend consulting your JVM documentation for details on how to enable the PKCS11 provider.
Another common source of this error is requesting cipher suites that use encryption with a key length greater than 128 bits
when running on an Oracle JDK. In this case, you will need to install the <<ciphers, JCE Unlimited Strength Jurisdiction Policy Files>>.
--
[float]
=== Exceptions when unlicensed
WARN: Failed to execute IndicesStatsAction for ClusterInfoUpdateJob::
+
--
This warning occurs in the logs every 30 seconds when the Shield license is expired or invalid. It is caused by a periodic
internal request to gather disk usage information from the nodes and indices, to enable {ref}/index-modules-allocation.html#disk[disk-based shard allocation].
Disk-based shard allocation is not required, though it is enabled by default.
If you are using elasticsearch 1.4.3 or higher with disk-based shard allocation enabled, it will be automatically disabled when the Shield
license is expired or invalid, and will be automatically re-enabled when a valid Shield license is installed.
If you are using elasticsearch 1.4.2 with disk-based shard allocation enabled, we recommend manually disabling disk-based shard
allocation while your Shield license is expired, and re-enabling it after installing a valid Shield license. Instructions for
disabling disk-based shard allocation are {ref}/index-modules-allocation.html#disk[here].
--

View File

@ -0,0 +1,409 @@
[[reference]]
== Appendix 8. Reference
[[privileges-list]]
[float]
=== Privileges
[[privileges-list-cluster]]
[float]
==== Cluster
[horizontal]
`all`:: All cluster administration operations, like snapshotting, node shutdown/restart, settings update or rerouting
`monitor`:: All cluster read-only operations, like cluster health & state, hot threads, node info, node & cluster
stats, snapshot/restore status, pending cluster tasks
`manage_shield`:: All Shield related operations (currently only exposing an API for clearing the realm caches)
[[privileges-list-indices]]
[float]
==== Indices
[horizontal]
`all`:: Any action on an index
`manage`:: All `monitor` privileges plus index administration (aliases, analyze, cache clear, close, delete, exists,
flush, mapping, open, optimize, refresh, settings, search shards, templates, validate, warmers)
`monitor`:: All read-only actions that are required for monitoring (recovery, segments info, index stats & status)
`data_access`:: A shortcut for all of the privileges below
`crud`:: A shortcut for the `read` and `write` privileges
`read`:: Read-only access to actions (count, explain, get, exists, mget, get indexed scripts, more like this, multi
percolate/search/termvector, percolate, scroll, clear_scroll, search, suggest, tv)
`search`:: The `suggest` privilege plus the ability to execute an arbitrary search request (including the multi-search API)
`get`:: Allows executing a GET request for a single document, or for multiple documents via the multi-get API
`suggest`:: Allows executing the `_suggest` API
`index`:: Privilege to index and update documents
`create_index`:: Privilege to create an index. A create index request may contain aliases to be added to the index once
created. In that case the request requires the `manage_aliases` privilege as well, on both the index and the alias names.
`manage_aliases`:: Privilege to add and remove aliases, as well as retrieve alias information. Note that in order
to add an alias to an existing index, the `manage_aliases` privilege is required on the existing index as well as on the
alias name.
`delete`:: Privilege to delete documents (includes delete by query)
`write`:: Privilege to index, update, delete, delete by query and bulk operations on documents, in addition to delete
and put indexed scripts
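To illustrate how the privilege names above are used, here is a hypothetical `roles.yml` entry, following the role file format used throughout this guide (the role and index names are examples only):
[source,yaml]
-----------------------------------------------------------
# hypothetical example - role and index names are placeholders
ops_monitor:
  cluster: monitor
  indices:
    'logs_*': read
    'ops_dashboard': manage, read, write
-----------------------------------------------------------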
[[ref-actions-list]]
[float]
==== Action level privileges
Although rarely needed, it is also possible to define privileges on specific actions that are available in
Elasticsearch. This only applies to publicly available indices and cluster actions; an example follows the action lists below.
[[ref-actions-list-cluster]]
[float]
===== Cluster actions privileges
* `cluster:admin/nodes/restart`
* `cluster:admin/nodes/shutdown`
* `cluster:admin/repository/delete`
* `cluster:admin/repository/get`
* `cluster:admin/repository/put`
* `cluster:admin/repository/verify`
* `cluster:admin/reroute`
* `cluster:admin/settings/update`
* `cluster:admin/snapshot/create`
* `cluster:admin/snapshot/delete`
* `cluster:admin/snapshot/get`
* `cluster:admin/snapshot/restore`
* `cluster:admin/snapshot/status`
* `cluster:admin/plugin/license/get`
* `cluster:admin/plugin/license/delete`
* `cluster:admin/plugin/license/put`
* `cluster:admin/indices/scroll/clear_all`
* `cluster:admin/analyze`
* `cluster:admin/shield/realm/cache/clear`
* `cluster:monitor/health`
* `cluster:monitor/nodes/hot_threads`
* `cluster:monitor/nodes/info`
* `cluster:monitor/nodes/stats`
* `cluster:monitor/state`
* `cluster:monitor/stats`
* `cluster:monitor/task`
* `indices:admin/template/delete`
* `indices:admin/template/get`
* `indices:admin/template/put`
NOTE: While indices template actions typically relate to indices, they are categorized under cluster actions to avoid
potential security leaks (e.g. having one user define a template that may match another user's index and then be
applied).
[[ref-actions-list-indices]]
[float]
===== Indices actions privileges
* `indices:admin/aliases`
* `indices:admin/aliases/exists`
* `indices:admin/aliases/get`
* `indices:admin/analyze`
* `indices:admin/cache/clear`
* `indices:admin/close`
* `indices:admin/create`
* `indices:admin/delete`
* `indices:admin/exists`
* `indices:admin/flush`
* `indices:admin/get`
* `indices:admin/mapping/delete`
* `indices:admin/mapping/put`
* `indices:admin/mappings/fields/get`
* `indices:admin/mappings/get`
* `indices:admin/open`
* `indices:admin/optimize`
* `indices:admin/refresh`
* `indices:admin/settings/update`
* `indices:admin/shards/search_shards`
* `indices:admin/types/exists`
* `indices:admin/validate/query`
* `indices:admin/warmers/delete`
* `indices:admin/warmers/get`
* `indices:admin/warmers/put`
* `indices:monitor/recovery`
* `indices:monitor/segments`
* `indices:monitor/settings/get`
* `indices:monitor/stats`
* `indices:monitor/status`
* `indices:data/read/count`
* `indices:data/read/exists`
* `indices:data/read/explain`
* `indices:data/read/get`
* `indices:data/read/mget`
* `indices:data/read/mlt`
* `indices:data/read/mpercolate`
* `indices:data/read/msearch`
* `indices:data/read/mtv`
* `indices:data/read/percolate`
* `indices:data/read/script/get`
* `indices:data/read/scroll`
* `indices:data/read/scroll/clear`
* `indices:data/read/search`
* `indices:data/read/suggest`
* `indices:data/read/tv`
* `indices:data/write/bulk`
* `indices:data/write/delete`
* `indices:data/write/delete/by_query`
* `indices:data/write/index`
* `indices:data/write/script/delete`
* `indices:data/write/script/put`
* `indices:data/write/update`
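As referenced above, action names can be used in `roles.yml` in the same positions as the named privileges. A hypothetical role limited to health checks and scroll-based reads might look like this (role and index names are placeholders):
[source,yaml]
-----------------------------------------------------------
# hypothetical example - role and index names are placeholders
scroll_reader:
  cluster:
    - cluster:monitor/health
  indices:
    'logs_*':
      - indices:data/read/search
      - indices:data/read/scroll
      - indices:data/read/scroll/clear
-----------------------------------------------------------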
[[ref-shield-settings]]
[float]
=== Shield Settings
The parameters listed in this section are configured in the `config/elasticsearch.yml` configuration file.
[[message-auth-settings]]
.Shield Message Authentication Settings
[options="header"]
|======
| Name | Default | Description
| `shield.system_key.file` | `system_key` under Shield's <<shield-config,config>> | Sets the location of the `system_key` file (read more <<message-authentication,here>>)
|======
[[ref-anonymous-access]]
.Shield Anonymous Access Settings added[1.1.0]
[options="header"]
|======
| Name | Default | Description
| `shield.authc.anonymous.username` | `_es_anonymous_user` | The username/principal of the anonymous user (this setting is optional)
| `shield.authc.anonymous.roles` | - | The roles that will be associated with the anonymous user. This setting must be set to enable anonymous access.
| `shield.authc.anonymous.authz_exception` | `true` | When `true`, an HTTP 403 response will be returned when the anonymous user does not have the appropriate permissions for the requested action. The user will not be prompted to provide credentials to access the requested resource. When set to `false`, an HTTP 401 will be returned, allowing credentials to be provided for a user with the appropriate permissions.
|======
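For example, a minimal `elasticsearch.yml` snippet that enables anonymous access might look like the following (the `anonymous_user` role name is an illustration only and must be defined in `roles.yml`):
[source,yaml]
------------------------------------------
# hypothetical example - the role name must exist in roles.yml
shield.authc.anonymous.roles: anonymous_user
shield.authc.anonymous.authz_exception: true
------------------------------------------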
[[ref-realm-settings]]
[float]
==== Realm Settings
All realms are configured under the `shield.authc.realms` settings, keyed by their names as follows:
[source,yaml]
----------------------------------------
shield.authc.realms:
realm1:
type: esusers
order: 0
...
realm2:
type: ldap
order: 1
...
realm3:
type: active_directory
order: 2
...
...
----------------------------------------
.Common Settings to All Realms
[options="header"]
|======
| Name | Required | Default | Description
| `type` | yes | - | The type of the realm (currently `esusers`, `ldap`, `active_directory` or `pki`)
| `order` | no | Integer.MAX_VALUE | The priority of the realm within the realm chain
| `enabled` | no | true | Enable/disable the realm
|======
[[ref-esusers-settings]]
._esusers_ Realm
[options="header"]
|======
| Name | Required | Default | Description
| `files.users` | no | `users` under Shield's <<shield-config,config>> | The location of <<users-file, _users_>> file
| `files.users_roles` | no | `users_roles` under Shield's <<shield-config,config>> | The location of <<users_roles-file, _users_roles_>> file
| `cache.ttl` | no | `20m` | Specifies the time-to-live for cached user entries (a user and its credentials will be cached for this period of time). Defaults to `20m` (use the standard elasticsearch {ref}/common-options.html#time-units[time units]).
| `cache.max_users` | no | 100000 | Specifies the maximum number of user entries that can live in the cache at a given time. Defaults to 100,000.
| `cache.hash_algo` | no | `ssha256` | (Expert Setting) Specifies the hashing algorithm that will be used for the in-memory cached user credentials (see <<ref-cache-hash-algo,Cache hash algorithms>> table for all possible values).
|======
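As an illustration, an `esusers` realm that overrides the default cache behaviour might be configured as follows (the realm name and values are examples only):
[source,yaml]
------------------------------------------
# hypothetical example - realm name and values are placeholders
shield.authc.realms:
  esusers1:
    type: esusers
    order: 0
    cache.ttl: 10m
    cache.max_users: 50000
------------------------------------------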
[[ref-ldap-settings]]
.Shield LDAP Settings
[options="header"]
|======
| Name | Required | Default | Description
| `url` | yes | - | An LDAP URL in the format `ldap[s]://<server>:<port>`.
| `bind_dn` | no | Empty | The DN of the user that will be used to bind to the LDAP and perform searches. If this is not specified, an anonymous bind will be attempted.
| `bind_password` | no | Empty | The password for the user that will be used to bind to the LDAP.
| `user_dn_templates` | yes * | - | The DN template; the string `{0}` in the template is replaced with the username at authentication time. This element is multivalued, allowing for multiple user contexts.
| `user_search.base_dn` | yes * | - | Specifies a container DN to search for users.
| `user_search.scope` | no | `sub_tree` | The scope of the user search. Valid values are `sub_tree`, `one_level` or `base`. `one_level` only searches objects directly contained within the `base_dn`. `sub_tree` searches all objects contained under `base_dn`. `base` specifies that the `base_dn` is the user object, and that it is the only user considered.
| `user_search.attribute` | no | `uid` | The attribute to match with the username presented to Shield.
| `user_search.pool.size` | no | `20` | The maximum number of connections to the LDAP server to allow in the connection pool.
| `user_search.pool.initial_size` | no | `5` | The initial number of connections to create to the LDAP server on startup.
| `user_search.pool.health_check.enabled` | no | `true` | Flag to enable or disable a health check on LDAP connections in the connection pool. Connections will be checked in the background at the specified interval.
| `user_search.pool.health_check.dn` | no | Value of `bind_dn` | The distinguished name to be retrieved as part of the health check. If `bind_dn` is not specified, a value must be specified.
| `user_search.pool.health_check.interval` | no | `60s` | The interval to perform background checks of connections in the pool.
| `group_search.base_dn` | yes | - | The container DN to search for groups in which the user has membership. When this element is absent, Shield searches for a `memberOf` attribute set on the user in order to determine group membership.
| `group_search.scope` | no | `sub_tree` | Specifies whether the group search should be `sub_tree`, `one_level` or `base`. `one_level` only searches objects directly contained within the `base_dn`. `sub_tree` searches all objects contained under `base_dn`. `base` specifies that the `base_dn` is a group object, and that it is the only group considered.
| `group_search.filter` | no | See description | When not set, the realm will search for `group`, `groupOfNames`, or `groupOfUniqueNames`, with the attributes `member` or `memberOf`. Any instance of `{0}` in the filter will be replaced by the user attribute defined in `group_search.user_attribute`
| `group_search.user_attribute` | no | Empty | Specifies the user attribute that will be fetched and provided as a parameter to the filter. If not set, the user DN is passed into the filter.
| `unmapped_groups_as_roles` | no | false | Takes a boolean value. When this element is set to `true`, the names of any unmapped LDAP groups are used as role names and assigned to the user. The default value is `false`.
| `files.role_mapping` | no | `role_mapping.yml` under Shield's <<shield-config,config>> | The path and file name for the <<ldap-role-mapping, YAML role mapping configuration file>>. The default file name is `role_mapping.yml` in the <<shield-config,Shield config directory>>.
| `follow_referrals` | no | `true` | Boolean value that specifies whether Shield should follow referrals returned by the LDAP server. Referrals are URLs returned by the server that are to be used to continue the LDAP operation (e.g. search).
| `connect_timeout` | no | "5s" - for 5 seconds | The timeout period for establishing an LDAP connection. An `s` at the end indicates seconds, or `ms` indicates milliseconds.
| `read_timeout` | no | "5s" - for 5 seconds | The timeout period for an LDAP operation. An `s` at the end indicates seconds, or `ms` indicates milliseconds.
| `hostname_verification` | no | true | Performs hostname verification when using `ldaps` to protect against man in the middle attacks.
| `cache.ttl` | no | `20m` | Specifies the time-to-live for cached user entries (a user and its credentials will be cached for this period of time). (Use the standard elasticsearch {ref}/common-options.html#time-units[time units].)
| `cache.max_users` | no | 100000 | Specifies the maximum number of user entries that can live in the cache at a given time.
| `cache.hash_algo` | no | `ssha256` |(Expert Setting) Specifies the hashing algorithm that will be used for the in-memory cached user credentials (see <<ref-cache-hash-algo,Cache hash algorithms>> table for all possible values).
|======
NOTE: `user_dn_templates` is required to operate in user template mode and `user_search.base_dn` is required to operate in user search mode. Only one is required for a given realm configuration. For more information on the different modes, see <<ldap-realms, ldap realms>>.
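To put the settings above together, a hypothetical `ldap` realm operating in user search mode might be configured like this (the server URL, DNs and password are placeholders):
[source,yaml]
------------------------------------------
# hypothetical example - URL, DNs and password are placeholders
shield.authc.realms:
  ldap1:
    type: ldap
    order: 1
    url: "ldaps://ldap.example.com:636"
    bind_dn: "cn=shield,ou=services,dc=example,dc=com"
    bind_password: changeme
    user_search:
      base_dn: "ou=people,dc=example,dc=com"
      attribute: uid
    group_search:
      base_dn: "ou=groups,dc=example,dc=com"
    files:
      role_mapping: "config/shield/role_mapping.yml"
------------------------------------------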
[[ref-ad-settings]]
.Shield Active Directory Settings
[options="header"]
|======
| Name | Required | Default | Description
| `url` | no | `ldap://<domain_name>:389` | A URL in the format `ldap[s]://<server>:<port>`. If not specified, the URL will be derived from `domain_name`, assuming clear-text `ldap` and port `389` (e.g. `ldap://<domain_name>:389`).
| `domain_name` | yes | - | The domain name of Active Directory. The cluster can derive the URL and `user_search_dn` fields from values in this element if those fields are not otherwise specified.
| `unmapped_groups_as_roles` | no | false | Takes a boolean value. When this element is set to `true`, the names of any unmapped groups and the user's relative distinguished name are used as role names and assigned to the user. The default value is `false`.
| `files.role_mapping` | no | `role_mapping.yml` under Shield's <<shield-config,config>> | The path and file name for the <<ad-role-mapping, YAML role mapping configuration file>>. The default file name is `role_mapping.yml` in the <<shield-config,Shield config directory>>.
| `user_search.base_dn` | no | Root of Active Directory | The context to search for a user. The default value for this element is the root of the Active Directory domain.
| `user_search.scope` | no | `sub_tree` | Specifies whether the user search should be `sub_tree`, `one_level` or `base`. `one_level` only searches users directly contained within the `base_dn`. `sub_tree` searches all objects contained under `base_dn`. `base` specifies that the `base_dn` is a user object, and that it is the only user considered.
| `user_search.filter` | no | See description | Specifies a filter to use to look up a user given a username. The default filter looks up `user` objects with either `sAMAccountName` or `userPrincipalName`.
| `group_search.base_dn` | no | Root of Active Directory | The context to search for groups in which the user has membership. The default value for this element is the root of the Active Directory domain.
| `group_search.scope` | no | `sub_tree` | Specifies whether the group search should be `sub_tree`, `one_level` or `base`. `one_level` searches for groups directly contained within the `base_dn`. `sub_tree` searches all objects contained under `base_dn`. `base` specifies that the `base_dn` is a group object, and that it is the only group considered.
| `timeout.tcp_connect` | no | `5s` - for 5 seconds | The TCP connect timeout period for establishing an LDAP connection. An `s` at the end indicates seconds, or `ms` indicates milliseconds.
| `timeout.tcp_read` | no | `5s` - for 5 seconds | The TCP read timeout period after establishing an LDAP connection. An `s` at the end indicates seconds, or `ms` indicates milliseconds.
| `timeout.ldap_search` | no | `5s` - for 5 seconds | The LDAP Server enforced timeout period for an LDAP search. An `s` at the end indicates seconds, or `ms` indicates milliseconds.
| `hostname_verification` | no | true | Performs hostname verification when using `ldaps` to protect against man in the middle attacks.
| `cache.ttl` | no | `20m` | Specifies the time-to-live for cached user entries (a user and its credentials will be cached for this period of time). (Use the standard elasticsearch {ref}/common-options.html#time-units[time units].)
| `cache.max_users` | no | 100000 | Specifies the maximum number of user entries that can live in the cache at a given time.
| `cache.hash_algo` | no | `ssha256` |(Expert Setting) Specifies the hashing algorithm that will be used for the in-memory cached user credentials (see <<ref-cache-hash-algo,Cache hash algorithms>> table for all possible values).
|======
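Similarly, a hypothetical `active_directory` realm relying on the required `domain_name` setting, plus a non-default group search context, might look like this (the domain and DN are placeholders):
[source,yaml]
------------------------------------------
# hypothetical example - domain and DN are placeholders
shield.authc.realms:
  ad1:
    type: active_directory
    order: 2
    domain_name: ad.example.com
    group_search:
      base_dn: "ou=groups,dc=ad,dc=example,dc=com"
------------------------------------------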
[[ref-pki-settings]]
.Shield PKI Settings
[options="header"]
|======
| Name | Required | Default | Description
| `username_pattern` | no | `CN=(.*?)(?:,\|$)` | The regular expression pattern used to extract the username from the certificate DN. The first match group is used as the username. Default is `CN=(.*?)(?:,\|$)`
| `truststore.path` | no | `shield.ssl.keystore` | The path of a truststore to use. The default truststore is the one defined by <<ref-ssl-tls-settings,SSL/TLS settings>>
| `truststore.password` | no | - | The password to the truststore. Must be provided if `truststore.path` is set.
| `truststore.algorithm` | no | SunX509 | Algorithm for the truststore. Default is `SunX509`
| `files.role_mapping` | no | `role_mapping.yml` under Shield's <<shield-config,config>> | Specifies the path and file name for the <<pki-role-mapping, YAML role mapping configuration file>>. The default file name is `role_mapping.yml` in the <<shield-config,Shield config directory>>.
|======
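For illustration, and assuming the realm type identifier `pki`, a PKI realm that trusts certificates from a dedicated truststore could be sketched as follows (paths and password are placeholders):
[source,yaml]
------------------------------------------
# hypothetical sketch - realm type, paths and password are assumptions/placeholders
shield.authc.realms:
  pki1:
    type: pki
    truststore:
      path: "/path/to/pki_truststore.jks"
      password: changeme
    files:
      role_mapping: "config/shield/role_mapping.yml"
------------------------------------------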
[[ref-cache-hash-algo]]
.Cache hash algorithms
|=======================
| Algorithm | Description
| `ssha256` | Uses a salted `SHA-256` algorithm (default).
| `md5` | Uses `MD5` algorithm.
| `sha1` | Uses `SHA1` algorithm.
| `bcrypt` | Uses `bcrypt` algorithm with salt generated in 10 rounds.
| `bcrypt4` | Uses `bcrypt` algorithm with salt generated in 4 rounds.
| `bcrypt5` | Uses `bcrypt` algorithm with salt generated in 5 rounds.
| `bcrypt6` | Uses `bcrypt` algorithm with salt generated in 6 rounds.
| `bcrypt7` | Uses `bcrypt` algorithm with salt generated in 7 rounds.
| `bcrypt8` | Uses `bcrypt` algorithm with salt generated in 8 rounds.
| `bcrypt9` | Uses `bcrypt` algorithm with salt generated in 9 rounds.
| `noop`,`clear_text` | Doesn't hash the credentials and keeps them in clear text in memory. CAUTION:
keeping clear text credentials in memory is considered insecure and can be compromised at the OS
level (e.g. memory dumps and `ptrace`).
|=======================
[[ref-roles-settings]]
.Shield Roles Settings
[options="header"]
|======
| Name | Default | Description
| `shield.authz.store.file.roles` | `roles.yml` under Shield's <<shield-config,config>> | The location of the roles definition file
|======
[[ref-ssl-tls-settings]]
[float]
==== TLS/SSL Settings
.Shield TLS/SSL Settings
[options="header"]
|======
| Name | Default | Description
| `shield.ssl.keystore.path` | None | Absolute path to the keystore that holds the private keys
| `shield.ssl.keystore.password` | None | Password to the keystore
| `shield.ssl.keystore.key_password` | Same value as `shield.ssl.keystore.password` | Password for the private key in the keystore
| `shield.ssl.keystore.algorithm` | SunX509 | Format for the keystore
| `shield.ssl.truststore.path` | `shield.ssl.keystore.path` | Absolute path to the truststore. If not set, this setting defaults to the keystore defined by `shield.ssl.keystore.path`
| `shield.ssl.truststore.password` | `shield.ssl.keystore.password` | Password to the truststore
| `shield.ssl.truststore.algorithm` | SunX509 | Format for the truststore
| `shield.ssl.protocol` | `TLSv1.2` | Protocol for security: `SSL`, `SSLv2`, `SSLv3`, `TLS`, `TLSv1`, `TLSv1.1`, `TLSv1.2`
| `shield.ssl.supported_protocols` | `TLSv1`, `TLSv1.1`, `TLSv1.2` | Supported protocols with versions. Valid protocols: `SSLv2Hello`, `SSLv3`, `TLSv1`, `TLSv1.1`, `TLSv1.2`
| `shield.ssl.ciphers` | `TLS_RSA_WITH_AES_128_CBC_SHA256`, `TLS_RSA_WITH_AES_128_CBC_SHA`, `TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA` | Supported cipher suites can be found in Oracle's http://docs.oracle.com/javase/7/docs/technotes/guides/security/SunProviders.html[Java Cryptography Architecture documentation]. Cipher suites using key lengths greater than 128 bits require the <<ciphers,JCE Unlimited Strength Jurisdiction Policy Files>>.
| `shield.ssl.hostname_verification` | `true` | Performs hostname verification on transport connections. This is enabled by default to protect against man in the middle attacks.
| `shield.ssl.hostname_verification.resolve_name` | `true` | A reverse DNS lookup is necessary to find the hostname when connecting to a node via an IP Address. If this is disabled and IP addresses are used to connect to a node, the IP address must be specified as a `SubjectAlternativeName` when <<private-key,creating the certificate>> or hostname verification will fail. IP addresses will be used to connect to a node if they are used in following settings: `network.host`, `network.publish_host`, `transport.publish_host`, `transport.profiles.$PROFILE.publish_host`, `discovery.zen.ping.unicast.hosts`
| `shield.ssl.session.cache_size` | `1000` | Number of SSL Sessions to cache in order to support session resumption. Setting the value to `0` means there is no size limit.
| `shield.ssl.session.cache_timeout` | `24h` | The time after the creation of a SSL session before it times out. (uses the standard elasticsearch {ref}/common-options.html#time-units[time units]).
| `shield.transport.ssl` | `false` | Set this parameter to `true` to enable SSL/TLS
| `shield.transport.ssl.client.auth` | `required` | Require client side certificates for transport protocol. Valid values are `required`, `optional`, and `no`. `required` forces a client to present a certificate, while `optional` requests a client certificate but the client is not required to present one.
| `shield.transport.filter.allow` | None | List of IP addresses to allow
| `shield.transport.filter.deny` | None | List of IP addresses to deny
| `shield.http.ssl` | `false` | Set this parameter to `true` to enable SSL/TLS
| `shield.http.ssl.client.auth` | `no` | Require client side certificates for HTTP. Valid values are `required`, `optional`, and `no`. `required` forces a client to present a certificate, while `optional` requests a client certificate but the client is not required to present one.
| `shield.http.filter.allow` | None | List of IP addresses to allow just for HTTP
| `shield.http.filter.deny` | None | List of IP addresses to deny just for HTTP
|======
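As a sketch of how these settings combine, the following `elasticsearch.yml` snippet enables SSL/TLS on both the transport and HTTP layers using a single keystore (the path and passwords are placeholders):
[source,yaml]
------------------------------------------
# hypothetical sketch - keystore path and passwords are placeholders
shield.ssl.keystore.path: "/path/to/node01.jks"
shield.ssl.keystore.password: changeme
shield.transport.ssl: true
shield.http.ssl: true
shield.http.ssl.client.auth: optional
------------------------------------------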
[[ref-ssl-tls-profile-settings]]
.Shield TLS/SSL settings per profile
[options="header"]
|======
| Name | Default | Description
| `transport.profiles.$PROFILE.shield.ssl` | Same as `shield.transport.ssl`| Setting this parameter to true will enable SSL/TLS for this profile; false will disable SSL/TLS for this profile.
| `transport.profiles.$PROFILE.shield.truststore.path` | None | Absolute path to the truststore of this profile
| `transport.profiles.$PROFILE.shield.truststore.password` | None | Password to the truststore
| `transport.profiles.$PROFILE.shield.truststore.algorithm` | SunX509 | Format for the truststore
| `transport.profiles.$PROFILE.shield.keystore.path` | None | Absolute path to the keystore of this profile
| `transport.profiles.$PROFILE.shield.keystore.password` | None | Password to the keystore
| `transport.profiles.$PROFILE.shield.keystore.key_password` | Same value as `transport.profiles.$PROFILE.shield.keystore.password` | Password for the private key in the keystore
| `transport.profiles.$PROFILE.shield.keystore.algorithm` | SunX509 | Format for the keystore
| `transport.profiles.$PROFILE.shield.session.cache_size` | `1000` | Number of SSL Sessions to cache in order to support session resumption. Setting the value to `0` means there is no size limit.
| `transport.profiles.$PROFILE.shield.session.cache_timeout` | `24h` | The time after the creation of a SSL session before it times out. (uses the standard elasticsearch {ref}/common-options.html#time-units[time units]).
| `transport.profiles.$PROFILE.shield.filter.allow` | None | List of IP addresses to allow for this profile
| `transport.profiles.$PROFILE.shield.filter.deny` | None | List of IP addresses to deny for this profile
| `transport.profiles.$PROFILE.shield.ssl.client.auth` | `required` | Require client side certificates. Valid values are `required`, `optional`, and `no`. `required` forces a client to present a certificate, while `optional` requests a client certificate but the client is not required to present one.
| `transport.profiles.$PROFILE.shield.type` | `node` | Defines allowed actions on this profile, allowed values: `node` and `client`
| `transport.profiles.$PROFILE.shield.ciphers` | `TLS_RSA_WITH_AES_128_CBC_SHA256`, `TLS_RSA_WITH_AES_128_CBC_SHA`, `TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA` | Supported cipher suites can be found in Oracle's http://docs.oracle.com/javase/7/docs/technotes/guides/security/SunProviders.html[Java Cryptography Architecture documentation]. Cipher suites using key lengths greater than 128 bits require the <<ciphers,JCE Unlimited Strength Jurisdiction Policy Files>>.
| `transport.profiles.$PROFILE.shield.protocol` | `TLSv1.2` | Protocol for security: `SSL`, `SSLv2`, `SSLv3`, `TLS`, `TLSv1`, `TLSv1.1`, `TLSv1.2`
| `transport.profiles.$PROFILE.shield.supported_protocols` | `TLSv1`, `TLSv1.1`, `TLSv1.2` | Supported protocols with versions. Valid protocols: `SSLv2Hello`, `SSLv3`, `TLSv1`, `TLSv1.1`, `TLSv1.2`
|======
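For example, a hypothetical transport profile named `client` that uses its own keystore and only allows client actions might be configured like this (the profile name, path and password are placeholders):
[source,yaml]
------------------------------------------
# hypothetical example - profile name, path and password are placeholders
transport.profiles.client.shield.type: client
transport.profiles.client.shield.ssl: true
transport.profiles.client.shield.ssl.client.auth: optional
transport.profiles.client.shield.keystore.path: "/path/to/client_profile.jks"
transport.profiles.client.shield.keystore.password: changeme
------------------------------------------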
[[ref-shield-files]]
[float]
=== Files used by Shield
The Shield security plugin uses the following files:
* `config/shield/roles.yml` defines the roles in use on the cluster (read more <<roles-file,here>>).
* `config/shield/users` defines the hashed passwords for users on the cluster (read more <<users-file,here>>).
* `config/shield/users_roles` defines the role assignments for users on the cluster (read more <<users_roles-file,here>>).
* `config/shield/role_mapping.yml` maps a Distinguished Name (DN) to one or more roles. This allows LDAP and Active
Directory groups and users, as well as PKI users, to be mapped to roles (read more <<ldap-role-mapping,here>>).
* `config/shield/logging.yml` contains audit information (read more <<logging-file,here>>).
* `config/shield/system_key` holds a cluster secret key used for message authentication (read more <<message-authentication,here>>).
Several of these files are in the YAML format. When you edit these files, be aware that YAML is indentation-level
sensitive and indentation errors can lead to configuration errors. Avoid the tab character to set indentation levels,
or use an editor that automatically expands tabs to spaces.
Be careful to properly escape YAML constructs such as `:` or leading exclamation points within quoted strings. Using
the `|` or `>` characters to define block literals instead of escaping the problematic characters can help avoid
problems.

View File

@ -0,0 +1,137 @@
[[release-notes]]
== Appendix 9. Release Notes
[[version-compatibility]]
[float]
=== Version Compatibility
Shield 2.x is compatible with:
* elasticsearch: 1.5.0+
* license: 1.0
[[upgrade-instructions]]
=== Upgrading Shield
To upgrade Shield, uninstall the current Shield plugin and install the new version of Shield. Your configuration
will be preserved, and you can do this as a rolling upgrade of Elasticsearch. On each node, after you have stopped it, run:
[source,shell]
---------------------------------------------------
bin/plugin -r shield
bin/plugin -i elasticsearch/shield/latest <1>
---------------------------------------------------
<1> `latest` will install the latest version of Shield compatible with your version of elasticsearch. A specific version,
such as `1.1.0`, can also be specified.
Then start the node. Larger sites should follow the steps in the {ref}/setup-upgrade.html#_1_0_and_later[rolling upgrade section]
in order to ensure recovery is as quick as possible.
On upgrade, your current configuration files will remain untouched. The configuration files provided by the new version
of Shield will be added with a `.new` extension.
==== Updated Role Definitions
The default role definitions in the `roles.yml` file may need to be changed to ensure proper functionality with other
applications such as Marvel and Kibana. Any role changes will be found in `roles.yml.new` after upgrading to the new
version of Shield. We recommend copying the changes listed below to your `roles.yml` file.
* added[1.1.0] `kibana4_server` role added that defines the minimum set of permissions necessary for the Kibana 4 server.
* added[1.0.1] `kibana4` role updated to work with new features in Kibana 4 RC1
[[changelist]]
=== Change List
[float]
==== 1.3.0
.new features
* <<pki,PKI Realm>>: Adds Public Key Infrastructure (PKI) authentication through the use of X.509 certificates in place of
username and password credentials.
* <<auditing, Index Output for Audit Events>>: An index based output has been added for storing audit events in an Elasticsearch index.
.breaking changes
* The `sha2` and `apr1` hashing algorithms have been removed as options for the <<ref-cache-hash-algo,`cache.hash_algo` setting>>.
If your existing Shield installation uses either of these options, remove the setting and use the default `ssha256`
algorithm.
* The `users` file now only supports `bcrypt` password hashing. All existing passwords stored using the `esusers` tool
have been hashed with `bcrypt` and are not affected.
.enhancements
* TLS 1.2 is now the default protocol.
* Clients that do not support pre-emptive basic authentication can now support both anonymous and authenticated access
by specifying the `shield.authc.anonymous.authz_exception` <<anonymous-access,setting>> with a value of `false`.
* Reduced logging for common SSL exceptions, such as a client closing the connection during a handshake.
.bug fixes
* The `esusers` and `syskeygen` tools now work correctly with environment variables in the RPM and DEB installation
environment files `/etc/sysconfig/elasticsearch` and `/etc/default/elasticsearch`.
* Default ciphers no longer include `TLS_DHE_RSA_WITH_AES_128_CBC_SHA`.
[float]
==== 1.2.2
* The `esusers` tool no longer warns about missing roles that are properly defined in the `roles.yml` file.
* The period character, `.`, is now allowed in usernames and role names.
* The {ref}/query-dsl-terms-filter.html#_caching_19[terms filter lookup cache] has been disabled to ensure all requests
are properly authorized. This removes the need to <<limitations-disable-cache,manually disable>> the terms filter
cache.
* For LDAP client connections, only the protocols and ciphers specified in the `shield.ssl.supported_protocols` and
`shield.ssl.ciphers` <<ref-ssl-tls-settings,settings>> will be used.
* The auditing mechanism now logs authentication failed events when a request contains an invalid authentication token.
[float]
==== 1.2.1
* Several bug fixes including a fix to ensure that {ref}/index-modules-allocation.html#disk[Disk-based Shard Allocation]
works properly with Shield
[float]
==== 1.2.0
* Adds support for elasticsearch 1.5
[float]
==== 1.1.1
* Several bug fixes including a fix to ensure that {ref}/index-modules-allocation.html#disk[Disk-based Shard Allocation]
works properly with Shield
[float]
==== 1.1.0
.new features
* LDAP:
** Add the ability to bind as a specific user for LDAP searches, which removes the need to specify `user_dn_templates`.
This mode of operation also makes use of connection pooling for better performance. Please see <<ldap-user-search, ldap user search>>
for more information.
** User distinguished names (DNs) can now be used for <<ldap-role-mapping, role mapping>>.
* Authentication:
** <<anonymous-access, Anonymous access>> is now supported (disabled by default).
* IP Filtering:
** IP Filtering settings can now be <<dynamic-ip-filtering,dynamically updated>> using the {ref}/cluster-update-settings.html[Cluster Update Settings API].
.enhancements
* Significant memory footprint reduction of internal data structures
* Test if SSL/TLS ciphers are supported and warn if any of the specified ciphers are not supported
* Reduce the amount of logging when a non-encrypted connection is opened and `https` is being used
* Added the <<kibana4-roles, `kibana4_server` role>>, which is a role that contains the minimum set of permissions required for the Kibana 4 server.
* The in-memory user credential cache hash algorithm now defaults to salted SHA-256 (see <<ref-cache-hash-algo, Cache hash algorithms>>)
.bug fixes
* Filter out sensitive settings from the settings APIs
[float]
==== 1.0.2
* Filter out sensitive settings from the settings APIs
* Significant memory footprint reduction of internal data structures
[float]
==== 1.0.1
* Fixed dependency issues with Elasticsearch 1.4.3 (and the Lucene 4.10.3 release that comes with it)
* Fixed bug in how user roles were handled. When multiple roles were defined for a user, and one of the
roles only had cluster permissions, not all privileges were properly evaluated.
* Updated `kibana4` permissions to be compatible with Kibana 4 RC1
* Ensure the mandatory `base_dn` setting is set in the `ldap` realm configuration

View File

@ -0,0 +1,8 @@
[[hadoop]]
=== Shield with Elasticsearch for Apache Hadoop
Elasticsearch for Apache Hadoop ("ES-Hadoop") is capable of using HTTP basic and PKI authentication and/or TLS/SSL when accessing an Elasticsearch cluster. For full details please refer to the ES-Hadoop documentation, in particular the `Security` section.
For authentication purposes, select the user for your ES-Hadoop client (for maintenance purposes it is best to create a dedicated user). Then, assign that user to a role with the privileges required by your Hadoop/Spark/Storm job. Configure ES-Hadoop to use the user name and password through the `es.net.http.auth.user` and `es.net.http.auth.pass` properties. If PKI authentication is enabled, set up the appropriate `keystore` and `truststore` instead through `es.net.ssl.keystore.location` and `es.net.truststore.location` (and their respective `.pass` properties to specify the password).
For secured transport, enable SSL/TLS through the `es.net.ssl` property by setting it to `true`. Depending on your SSL configuration (keystore, truststore, etc.) you might need to set other parameters as well - please refer to the http://www.elastic.co/guide/en/elasticsearch/hadoop/current/configuration.html[ES-Hadoop] documentation, specifically the `Configuration` and `Security` chapters.

View File

@ -0,0 +1,57 @@
=== HTTP/REST Clients
Elasticsearch works with standard HTTP http://en.wikipedia.org/wiki/Basic_access_authentication[basic authentication]
headers to identify the requester. Since Elasticsearch is stateless, this header must be sent with every request:
[source,shell]
--------------------------------------------------
Authorization: Basic <TOKEN> <1>
--------------------------------------------------
<1> The `<TOKEN>` is computed as `base64(USERNAME:PASSWORD)`
==== Client Examples
This example uses `curl` without basic auth to create an index:
[source,shell]
-------------------------------------------------------------------------------
curl -XPUT 'localhost:9200/idx'
-------------------------------------------------------------------------------
[source,json]
-------------------------------------------------------------------------------
{
"error": "AuthenticationException[Missing authentication token]",
"status": 401
}
-------------------------------------------------------------------------------
Since no user is associated with the request above, an authentication error is returned. Now we'll use `curl` with
basic auth to create an index as the `rdeniro` user:
[source,shell]
---------------------------------------------------------
curl --user rdeniro:taxidriver -XPUT 'localhost:9200/idx'
---------------------------------------------------------
[source,json]
---------------------------------------------------------
{
"acknowledged": true
}
---------------------------------------------------------
==== Client Libraries over HTTP
For more information about how to use Shield with the language specific clients please refer to
https://github.com/elasticsearch/elasticsearch-ruby/tree/master/elasticsearch-transport#authentication[Ruby],
http://elasticsearch-py.readthedocs.org/en/master/#ssl-and-authentication[Python],
https://metacpan.org/pod/Search::Elasticsearch::Role::Cxn::HTTP#CONFIGURATION[Perl],
http://www.elastic.co/guide/en/elasticsearch/client/php-api/current/_security.html[PHP],
http://nest.azurewebsites.net/elasticsearch-net/security.html[.NET],
http://www.elastic.co/guide/en/elasticsearch/client/javascript-api/current/auth-reference.html[Javascript]
////
Groovy - TODO link
////

View File

@ -0,0 +1,437 @@
=== Java clients
Elasticsearch supports two types of Java clients: _Node Client_ and _Transport Client_.
The _Node Client_ is a cluster node that joins the cluster and receives all the cluster events, in the same manner as
any other cluster node. Node clients cannot be allocated shards, and therefore cannot hold data. Node clients are not
eligible for election as a master node in the cluster. For more information about node clients, see the
http://www.elastic.co/guide/en/elasticsearch/client/java-api/current/node-client.html[following section].
Unlike the _Node Client_, the _Transport Client_ is not a node in the cluster. Yet it uses the same transport protocol
the cluster nodes use for inter-node communication, and is therefore considered to be very efficient as it bypasses the
marshalling and unmarshalling of requests to and from JSON that you typically have in REST-based clients (read more about the
http://www.elastic.co/guide/en/elasticsearch/client/java-api/current/transport-client.html[_Transport Client_]).
Shield supports both clients. This section provides configuration instructions for these clients.
==== Node Client
WARNING: While _Node Clients_ may work with Shield, since these are actual nodes in the cluster, they require access
to a breadth of cluster management internal APIs. Additionally, just like all other nodes in the cluster,
_Node Clients_ require the License plugin to be installed and access to Shield configuration files that contain
sensitive data. For this reason, _Node Clients_ should be considered as unsafe clients. If you choose to use
these clients, make sure you treat them in the same way you treat any other node in your cluster. Your
application should sit next to the cluster within the same security zone.
There are several steps for setting up this client:
. Set the appropriate dependencies for your project
. Duplicate <<ref-shield-files, configuration files>> for authentication
. Configure the authentication token
. (Optional) If SSL/TLS is enabled, set up the keystore, then create and import the appropriate certificates.
===== Java project dependencies
If you plan on using the Node Client, you first need to make sure the Shield jar files (`elasticsearch-shield-2.0.0.jar`,
`automaton-1.11-8.jar`, `unboundid-ldapsdk-2.3.8.jar`) and the License jar file (`elasticsearch-license-2.0.0.jar`) are
in the classpath. You can either download the distributions, extract the jar files manually and include them in your
classpath, or you can pull them out of the Elasticsearch Maven repository.
===== Maven Example
The following snippet shows the configuration you will need to include in your project's `pom.xml` file:
[source,xml]
--------------------------------------------------------------
<project ...>
<repositories>
<!-- add the elasticsearch repo -->
<repository>
<id>elasticsearch-releases</id>
<url>http://maven.elasticsearch.org/releases</url>
<releases>
<enabled>true</enabled>
</releases>
<snapshots>
<enabled>false</enabled>
</snapshots>
</repository>
...
</repositories>
...
<dependencies>
<!-- add the Shield jar as a dependency -->
<dependency>
<groupId>org.elasticsearch</groupId>
<artifactId>elasticsearch-shield</artifactId>
<version>2.0.0</version>
</dependency>
<!-- add the License jar as a dependency -->
<dependency>
<groupId>org.elasticsearch</groupId>
<artifactId>elasticsearch-license-plugin</artifactId>
<version>2.0.0</version>
<scope>runtime</scope>
</dependency>
...
</dependencies>
...
</project>
--------------------------------------------------------------
===== Gradle Example
If you are using Gradle, then you will need to add the dependencies to your `build.gradle` file:
[source,groovy]
--------------------------------------------------------------
repositories {
/* ... Any other repositories ... */
// Add the Elasticsearch Maven Repository
maven {
url "http://maven.elasticsearch.org/releases"
}
}
dependencies {
// Provide the Shield jar on the classpath for compilation and at runtime
// Note: Many projects can use the Shield jar as a runtime dependency
compile "org.elasticsearch:elasticsearch-shield:2.0.0"
/* ... */
// Provide the License jar on the classpath at runtime (not needed for compilation)
runtime "org.elasticsearch:elasticsearch-license-plugin:2.0.0"
}
--------------------------------------------------------------
It is also possible to manually download the http://maven.elasticsearch.org/releases/org/elasticsearch/elasticsearch-shield/2.0.0/elasticsearch-shield-2.0.0.jar[Shield jar]
and the http://maven.elasticsearch.org/releases/org/elasticsearch/elasticsearch-license-plugin/2.0.0/elasticsearch-license-plugin-2.0.0.jar[License jar]
files from our Maven repository.
===== Duplicate Shield Configuration Files
The _Node Client_ will authenticate requests before sending the requests to the cluster. To do this, copy the `users`,
`users_roles`, `roles.yml`, and `system_key` files from the <<ref-shield-files,Shield configuration files>> to a place
accessible to the node client. These files should be stored on the filesystem in a folder with restricted access as they
contain sensitive data. This can be configured with the following settings:
* `shield.authc.realms.esusers.files.users`
* `shield.authc.realms.esusers.files.users_roles`
* `shield.authz.store.files.roles`
* `shield.system_key.file`
[source, java]
------------------------------------------------------------------------------------------------------
import static org.elasticsearch.node.NodeBuilder.*;
...
Node node = nodeBuilder().client(true).settings(ImmutableSettings.builder()
.put("cluster.name", "myClusterName")
.put("discovery.zen.ping.multicast.enabled", false)
.putArray("discovery.zen.ping.unicast.hosts", "localhost:9300", "localhost:9301")
.put("shield.authc.realms.esusers.type", "esusers")
.put("shield.authc.realms.esusers.files.users", "/Users/es/config/shield/users")
.put("shield.authc.realms.esusers.files.users_roles", "/Users/es/config/shield/users_roles")
.put("shield.authz.store.files.roles", "/Users/es/config/shield/roles.yml")
.put("shield.system_key.file", "/Users/es/config/shield/system_key"))
...
.node();
------------------------------------------------------------------------------------------------------
Additionally, if you are using LDAP or Active Directory authentication, you will need to specify that configuration
in the settings when configuring the node, or provide an `elasticsearch.yml` on the classpath with the appropriate settings.
===== Configuring Authentication Token
The authentication token can be configured in two ways - globally or per-request. When setting it up globally, the
values of the username and password are configured in the client's settings:
[source,java]
------------------------------------------------------------------------------------------------------
import static org.elasticsearch.node.NodeBuilder.*;
import static org.elasticsearch.shield.authc.support.UsernamePasswordToken.basicAuthHeaderValue;
...
Node node = nodeBuilder().client(true).settings(ImmutableSettings.builder()
...
.put("shield.user", "test_user:changeme"))
...
.node();
Client client = node.client();
------------------------------------------------------------------------------------------------------
Once the client is created as above, the `shield.user` setting is translated to a request header in the standard HTTP
basic authentication form, `Authorization: Basic base64("test_user:changeme")`, which will be sent with every request executed.
To skip the global configuration of the token, manually set the authentication token header on every request:
[source,java]
------------------------------------------------------------------------------------------------------
import org.elasticsearch.shield.authc.support.SecuredString;
import static org.elasticsearch.node.NodeBuilder.*;
import static org.elasticsearch.shield.authc.support.UsernamePasswordToken.basicAuthHeaderValue;
...
String token = basicAuthHeaderValue("test_user", new SecuredString("changeme".toCharArray()));
Node node = nodeBuilder().client(true).settings(ImmutableSettings.builder()
...
.node();
Client client = node.client();
client.prepareSearch().putHeader("Authorization", token).get();
------------------------------------------------------------------------------------------------------
The example above executes a search request and manually adds the authentication token as a header on it.
===== Setting up SSL
Authenticating to the cluster requires proof that a node client is trusted as part of the cluster. This is done through
standard PKI and SSL. A client node creates a private key and an associated certificate. The cluster Certificate
Authority signs the certificate. A Client node authenticates during the SSL connection setup by presenting the signed
certificate, and proving ownership of the private key. All of these setup steps are described in
<<private-key, Securing Nodes>>.
In addition, the node client acts like a node, authenticating locally any request made. Copies of the files `users`,
`users_roles`, `roles.yml` , and `system_key` need to be made available to the node client.
After following the steps in <<private-key, Securing Nodes>>, configuration for a node client with Shield might look
like this:
[source, java]
------------------------------------------------------------------------------------------------------
import static org.elasticsearch.node.NodeBuilder.*;
...
Node node = nodeBuilder().client(true).settings(ImmutableSettings.builder()
.put("cluster.name", "myClusterName")
.put("discovery.zen.ping.multicast.enabled", false)
.putArray("discovery.zen.ping.unicast.hosts", "localhost:9300", "localhost:9301")
.put("shield.ssl.keystore.path", "/Users/es/node_client/node_client.jks")
.put("shield.ssl.keystore.password", "password")
.put("shield.transport.ssl", "true")
.put("shield.authc.realms.esusers.type", "esusers")
.put("shield.authc.realms.esusers.files.users", "/Users/es/config/shield/users")
.put("shield.authc.realms.esusers.files.users_roles", "/Users/es/config/shield/users_roles")
.put("shield.authz.store.files.roles", "/Users/es/config/shield/roles.yml")
.put("shield.system_key.file", "/Users/es/config/shield/system_key"))
...
.node();
------------------------------------------------------------------------------------------------------
[[transport-client]]
==== Transport Client
If you plan on using the Transport Client over SSL/TLS you first need to make sure the Shield jar file
(`elasticsearch-shield-2.0.0.jar`) is in the classpath. You can either download the Shield distribution, extract the jar
files manually and include them in your classpath, or you can pull them out of the Elasticsearch Maven repository.
NOTE: Unlike the _Node Client_, the _Transport Client_ is not acting as a node in the cluster, and therefore
**does not** require the License plugin to be installed.
===== Maven Example
The following snippet shows the configuration you will need to include in your project's `pom.xml` file:
[source,xml]
--------------------------------------------------------------
<project ...>
<repositories>
<!-- add the elasticsearch repo -->
<repository>
<id>elasticsearch-releases</id>
<url>http://maven.elasticsearch.org/releases</url>
<releases>
<enabled>true</enabled>
</releases>
<snapshots>
<enabled>false</enabled>
</snapshots>
</repository>
...
</repositories>
...
<dependencies>
<!-- add the shield jar as a dependency -->
<dependency>
<groupId>org.elasticsearch</groupId>
<artifactId>elasticsearch-shield</artifactId>
<version>2.0.0</version>
</dependency>
...
</dependencies>
...
</project>
--------------------------------------------------------------
===== Gradle Example
If you are using Gradle, then you will need to add the dependencies to your `build.gradle` file:
[source,groovy]
--------------------------------------------------------------
repositories {
/* ... Any other repositories ... */
// Add the Elasticsearch Maven Repository
maven {
url "http://maven.elasticsearch.org/releases"
}
}
dependencies {
// Provide the Shield jar on the classpath for compilation and at runtime
// Note: Many projects can use the Shield jar as a runtime dependency
compile "org.elasticsearch:elasticsearch-shield:2.0.0"
/* ... */
}
--------------------------------------------------------------
It is also possible to manually download the http://maven.elasticsearch.org/releases/org/elasticsearch/elasticsearch-shield/2.0.0/elasticsearch-shield-2.0.0.jar[Shield jar]
file from our Maven repository.
TIP: Even if you are not planning on using the client over SSL/TLS, it is still worth having the Shield jar file in
the classpath as it provides various helpful utilities, such as the `UsernamePasswordToken` class for generating
basic-auth tokens and the `ShieldClient` that <<shield-client,exposes an API>> to clear realm caches.
[[java-transport-client-role]]
Before setting up the client itself, you need to make sure you have a user with sufficient privileges to start
the transport client. The transport client uses Elasticsearch's node info API to fetch information about the
nodes in the cluster. For this reason, the authenticated user of the transport client must have the
`cluster:monitor/nodes/info` cluster permission. Furthermore, if the client is configured to use sniffing, the
`cluster:monitor/state` cluster permission is required.
TIP: `roles.yml` ships with a predefined `transport_client` role. By default it is configured to only grant the
`cluster:monitor/nodes/info` cluster permission. You can use this role and assign it to any user
that will be attached to a transport client.
Setting up the transport client is similar to the node client, except that the authentication files do not need to be
configured. Without SSL, it is as easy as setting the authentication token, just as it is set up with
the _Node Client_:
[source,java]
------------------------------------------------------------------------------------------------------
import org.elasticsearch.client.transport.TransportClient;
...
TransportClient client = new TransportClient(ImmutableSettings.builder()
.put("cluster.name", "myClusterName")
.put("shield.user", "test_user:changeme")
.addTransportAddress(new InetSocketTransportAddress("localhost", 9300))
.addTransportAddress(new InetSocketTransportAddress("localhost", 9301));
------------------------------------------------------------------------------------------------------
WARNING: Configuring a Transport Client without SSL will send passwords in plaintext.
When using SSL for transport client communication, a few more settings are required. By default, Shield requires client
authentication for secured transport communication. This means that every client would need to have a certificate signed
by a trusted CA. The client authentication can be disabled through the use of a <<separating-node-client-traffic, client
specific transport profile>>.
Configuration required for SSL when using client authentication:
[source,java]
------------------------------------------------------------------------------------------------------
import org.elasticsearch.client.transport.TransportClient;
...
TransportClient client = new TransportClient(ImmutableSettings.builder()
.put("cluster.name", "myClusterName")
.put("shield.user", "test_user:changeme")
.put("shield.ssl.keystore.path", "/path/to/client.jks")
.put("shield.ssl.keystore.password", "password")
.put("shield.transport.ssl", "true"))
.addTransportAddress(new InetSocketTransportAddress("localhost", 9300))
.addTransportAddress(new InetSocketTransportAddress("localhost", 9301));
------------------------------------------------------------------------------------------------------
NOTE: The `client.jks` keystore needs to contain the client's CA-signed certificate and the CA certificate.
Configuration required for SSL without client authentication:
[source,java]
------------------------------------------------------------------------------------------------------
import org.elasticsearch.client.transport.TransportClient;
...
TransportClient client = new TransportClient(ImmutableSettings.builder()
.put("cluster.name", "myClusterName")
.put("shield.user", "test_user:changeme")
.put("shield.ssl.truststore.path", "/path/to/truststore.jks")
.put("shield.ssl.truststore.password", "password")
.put("shield.transport.ssl", "true"))
.addTransportAddress(new InetSocketTransportAddress("localhost", 9300))
.addTransportAddress(new InetSocketTransportAddress("localhost", 9301));
------------------------------------------------------------------------------------------------------
NOTE: The `truststore.jks` truststore needs to contain the certificate of the CA that has signed the Elasticsearch nodes'
certificates. If you are using a public CA that is already trusted by the Java runtime, then you can omit
`shield.ssl.truststore.path` and `shield.ssl.truststore.password`.
In the above code snippets, we set up a _Transport Client_ and configured the authentication token globally, meaning
that every request executed with this client will include the token in its headers.
The globally configured token *must* belong to a user with the privileges of the default `transport_client`
role, as described earlier. The global authentication token may also be overridden by adding an `Authorization` header on
each request. This is useful when an application uses multiple users to access Elasticsearch via the same client. When
operating in this mode, it is best to set the global token to a user that only has the `transport_client` role. The
following example directly sets the authentication token on the request when executing a search.
[source,java]
------------------------------------------------------------------------------------------------------
import org.elasticsearch.shield.authc.support.SecuredString;
import org.elasticsearch.client.transport.TransportClient;
import static org.elasticsearch.shield.authc.support.UsernamePasswordToken.basicAuthHeaderValue;
...
String token = basicAuthHeaderValue("test_user", new SecuredString("changeme".toCharArray()));
TransportClient client = new TransportClient(ImmutableSettings.builder()
.put("shield.user", "transport_client_user:changeme")
...
.addTransportAddress(new InetSocketTransportAddress("localhost", 9300))
.addTransportAddress(new InetSocketTransportAddress("localhost", 9301));
client.prepareSearch().putHeader("Authorization", token).get();
------------------------------------------------------------------------------------------------------
===== Anonymous Access
added[1.1.0]
If <<anonymous-access,anonymous access>> is enabled in Shield, the `shield.user` setting may be dropped and all requests will
be executed as the anonymous user (with the exception of requests on which the `Authorization` header is explicitly
set, as shown above). For this to work, make sure the anonymous user is assigned roles that grant
the privileges described <<java-transport-client-role,above>>.
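For illustration, such a client might omit the `shield.user` setting entirely and attach an `Authorization` header only to the requests that should run as a specific user. This is a minimal sketch based on the snippets above, and it assumes anonymous access has already been enabled with suitable roles:
[source,java]
------------------------------------------------------------------------------------------------------
import org.elasticsearch.client.transport.TransportClient;
import org.elasticsearch.shield.authc.support.SecuredString;
import static org.elasticsearch.shield.authc.support.UsernamePasswordToken.basicAuthHeaderValue;
...
// no shield.user configured - requests run as the anonymous user by default
TransportClient client = new TransportClient(ImmutableSettings.builder()
        .put("cluster.name", "myClusterName"))
        .addTransportAddress(new InetSocketTransportAddress("localhost", 9300));
// executed as the anonymous user
client.prepareSearch().get();
// executed as test_user by explicitly setting the Authorization header on this request only
String token = basicAuthHeaderValue("test_user", new SecuredString("changeme".toCharArray()));
client.prepareSearch().putHeader("Authorization", token).get();
------------------------------------------------------------------------------------------------------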
[[shield-client]]
==== Shield Client
Shield exposes its own API, accessible through the `ShieldClient` class. The purpose of this API
is to manage all Shield related aspects. While at the moment it only exposes an operation for clearing the
realm caches, the plan is to extend this API in the future.
`ShieldClient` is a wrapper around an existing client (any client class implementing `org.elasticsearch.client.Client`).
The following example shows how to clear Shield's realm caches using the `ShieldClient`:
[source,java]
------------------------------------------------------------------------------------------------------
import static org.elasticsearch.node.NodeBuilder.*;
...
Client client = ... // create the client (either transport or node)
ShieldClient shieldClient = new ShieldClient(client);
ClearRealmCacheResponse response = shieldClient.authc().prepareClearRealmCache()
.realms("ldap1", "ad1")
.usernames("rdeniro")
.get();
------------------------------------------------------------------------------------------------------
In the example above, we clear the caches of two realms - `ldap1` and `ad1` - for the `rdeniro` user.

[[kibana]]
=== Kibana
Shield supports both Kibana 3 and Kibana 4.0+ releases. The configuration required differs
between Kibana 3 and 4. Please follow the instructions below for the version of Kibana you are working with.
=== Shield with Kibana 3
Shield and Kibana 3 have been tested together for recent versions of Chrome, Safari, and IE. This section describes
configuration changes and general information to ensure that the two products work together successfully for you.
Kibana 3 uses the `kibana-int` index to store saved dashboards. All users store dashboards in this index. Enable all
users to save and load dashboards from this index. When the Shield plugin is installed, users may be able to load
dashboards that access data in indices that they are not authorized to view. A user that loads such a dashboard
will receive a Kibana error stating that the disallowed index does not exist.
At the moment, there is no way to control which users can load which dashboards. We expect to address this
limitation with future versions of Shield and Kibana.
==== Kibana configuration
Kibana needs to be informed that you wish to use credentials. In Kibana's `config.js`, set the `elasticsearch` property:
[source,yaml]
------------------------------------
elasticsearch: {server: "http://YOUR_ELASTICSEARCH_SERVER:9200", withCredentials: true}
------------------------------------
[[cors]]
==== Elasticsearch configuration
HTTP authentication interacts with cross-origin resource sharing (CORS). Clusters that use CORS must allow credentialed
requests so that the browser can send authentication headers with them.
In the `elasticsearch.yml` file on all nodes, add the following configuration entries:
[source,yaml]
------------------------------------
http.cors.enabled: true
http.cors.allow-origin: "https://MYHOST:MYPORT"
http.cors.allow-credentials: true
------------------------------------
Note that in `http.cors.allow-origin`, `*` is disallowed for credentialed requests. You must enter the correct
protocol, hostname and port that would normally be entered into your browser.
Restart the nodes after modifying the configuration file. This change enables Elasticsearch to send the required
`Access-Control-Allow-Credentials` header.
NOTE: To learn more about enabling CORS, see http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/modules-http.html[elasticsearch documentation].
==== Shield configuration
Shield includes a default <<roles,role>> for use with Kibana 3:
[source,yaml]
------------------------------------------------------------------------------------------------------------------------
kibana3:
cluster: cluster:monitor/nodes/info
indices:
'*': indices:data/read/search, indices:data/read/get, indices:admin/get <1>
'kibana-int': indices:data/read/get, indices:data/read/search, indices:data/write/delete, indices:data/write/index,
create_index
------------------------------------------------------------------------------------------------------------------------
<1> This line gives the Kibana 3 user read access to indices in order to search and display the data in them. To
constrain this role's access to specific indices, alter the wildcard.
Kibana 3 uses the `kibana-int` index to save and load dashboards. This role definition allows the user to manage and
use the dashboards in the `kibana-int` index.
Kibana 3 uses the cluster permission to access the `/_nodes` endpoint in order to check the node version.
Elasticsearch recommends that you create one or more roles derived from this role. These new roles will include access to
indices specified by your organization's goals and policies.
==== SSL/TLS and browsers
===== Trusting certificates
As discussed in <<securing-nodes, Securing Nodes>>, Shield supports adding SSL to the Elasticsearch HTTP interface.
When using Kibana, your browser verifies that the certificate received from the Elasticsearch node is trusted
before sending a request to the node. Establishing this trust requires that either your browser or operating
system trust the Certificate Authority (CA) that signed the node's certificate. To use SSL with Shield and
Kibana 3, ensure that the browser or operating system has been configured to trust this CA.
The process to ensure this trust varies per organization. Some organizations will have pre-installed these CA
certificates into the operating system or the browser's local certificate store. If this is the case, you will
not need to take any further action.
Other organizations will not have pre-installed the CA certificate. Or you may have created your own CA as discussed
in <<certificate-authority, Appendix 1>>. In these cases, we recommend that you consult your local IT professional to
determine the recommended procedure for adding trusted CAs in your organization.
===== Working with source builds of Kibana 3
Some developers use Kibana 3 by pulling the software from our GitHub repository, and not using a built package
from our download site. If you do this, be sure to clear your browser's cache after deploying Shield and
configuring the `http.cors.allow-credentials` parameter to avoid authentication errors with most browsers.
=== Shield with Kibana 4
Kibana 4 adds a server-side component that changes the integration with Shield and the steps required to configure Shield, Elasticsearch, and Kibana to work together. With Kibana 4, the browser makes requests to the Kibana 4 server, and not to Elasticsearch directly. The Kibana 4 server then makes requests to Elasticsearch on behalf of the browser. We recommend using separate roles for your users who log into Kibana and for the Kibana 4 server itself.
[[kibana4-roles]]
==== Configuring Roles for Kibana 4 Users
Kibana users need access to the indices that they will be working with and the `.kibana` index where their
saved searches, visualizations, and dashboards are stored. Shield includes a default `kibana4` role that grants
read access to all indices and full access to the `.kibana` index.
IMPORTANT: The default Kibana 4 user role grants read access to all indices. We strongly recommend deriving
custom roles for your Kibana users that limit access to specific indices according to your organization's goals and policies.
[source,yaml]
------------------------------------------------------------------------------------------------------------------------
kibana4:
cluster:
- cluster:monitor/nodes/info
- cluster:monitor/health
indices:
'*':
- indices:admin/mappings/fields/get
- indices:admin/validate/query
- indices:data/read/search
- indices:data/read/msearch
'.kibana':
- indices:admin/create
- indices:admin/exists
- indices:admin/mapping/put
- indices:admin/mappings/fields/get
- indices:admin/refresh
- indices:admin/validate/query
- indices:data/read/get
- indices:data/read/mget
- indices:data/read/search
- indices:data/write/delete
- indices:data/write/index
- indices:data/write/update
------------------------------------------------------------------------------------------------------------------------
To constrain Kibana's access to specific indices, explicitly specify the index names in your role. When configuring a role for a Kibana user and granting access to a specific index, at a minimum the user needs the following privileges on the index:
* `indices:admin/mappings/fields/get`
* `indices:admin/validate/query`
* `indices:data/read/search`
* `indices:data/read/msearch`
* `indices:admin/get`
[[kibana4-server-role]]
==== Configuring a Role for the Kibana 4 Server
The Kibana 4 server needs access to the cluster monitoring APIs and the `.kibana` index. However, the server
does not need access to user indices. The following `kibana4_server` role shows the privileges required
by the Kibana 4 server.
NOTE: This role is included in `roles.yml` by default as of Shield 1.1. If you are running an older version of Shield,
you need to add it yourself.
[source,yaml]
------------------------------------------------------------------------------------------------------------------------
kibana4_server:
cluster:
- cluster:monitor/nodes/info
- cluster:monitor/health
indices:
'.kibana':
- indices:admin/exists
- indices:admin/mapping/put
- indices:admin/mappings/fields/get
- indices:admin/refresh
- indices:admin/validate/query
- indices:data/read/get
- indices:data/read/mget
- indices:data/read/search
- indices:data/write/delete
- indices:data/write/index
- indices:data/write/update
------------------------------------------------------------------------------------------------------------------------
To configure the Kibana 4 server, you must assign this role to a user and add the user credentials to `kibana.yml`.
For more information, see http://www.elastic.co/guide/en/kibana/current/production.html#configuring-kibana-shield[Configuring Kibana to Work with Shield] in the Kibana 4 User Guide.
==== Configuring Kibana 4 to Use SSL
You should also configure Kibana 4 to use SSL encryption for both client requests and the requests the Kibana server sends to Elasticsearch. For more information, see http://www.elastic.co/guide/en/kibana/current/production.html#enabling-ssl[Enabling SSL] in the Kibana 4 User Guide.

[[logstash]]
=== Shield with Logstash
IMPORTANT: Shield 2.0.x is compatible with Logstash 1.5 and above.
Logstash provides Elasticsearch https://www.elastic.co/guide/en/logstash/current/plugins-outputs-elasticsearch.html[output], https://www.elastic.co/guide/en/logstash/current/plugins-inputs-elasticsearch.html[input] and https://www.elastic.co/guide/en/logstash/current/plugins-filters-elasticsearch.html[filter] plugins
used to index and retrieve documents through HTTP, transport or client node protocols.
All plugins support authentication and encryption over HTTP, while the output plugin additionally supports these
features over the transport protocol.
NOTE: When using the elasticsearch output, only the `transport` and `http` protocols are supported (the `node` protocol is unsupported).
For information on setting up authentication and authorization on the Elasticsearch side, check the corresponding
documentation sections: <<authorization,_Authorization_>> and <<authentication,_Authentication_>>.
To configure the certificates and other SSL related options, see <<securing-nodes,_Securing Nodes_>>.
[[ls-user]]
==== Creating a user
By default, the Shield plugin provides a dedicated <<roles,role>> for Logstash that enables the creation of indices with names
that match the `logstash-*` pattern, along with privileges to read, scroll, index, update, and delete
documents on those indices:
[source,yaml]
--------------------------------------------------------------------------------------------
logstash:
cluster: indices:admin/template/get, indices:admin/template/put
indices:
'logstash-*': indices:data/write/bulk, indices:data/write/delete, indices:data/write/update, indices:data/read/search, indices:data/read/scroll, create_index
--------------------------------------------------------------------------------------------
See the <<roles-file,_Role Definition File_>> section for information on modifying roles.
Create a user associated with the `logstash` role on the Elasticsearch cluster, using the <<esusers,`esusers` tool>>:
[source,shell]
--------------------------------------------------
esusers useradd <username> -p <password> -r logstash
--------------------------------------------------
NOTE: When using the transport protocol, the logstash user requires the predefined `transport_client` role in addition to the `logstash` role shown above (`-r logstash,transport_client`).
Once you've created the user, you are ready to configure Logstash.
[[ls-http]]
==== Connecting with HTTP/HTTPS
The input, filter, and output plugins all support HTTP basic authentication as well as SSL/TLS.
The sections below demonstrate the output plugin's configuration parameters; the input and filter plugins use the same parameters.
[[ls-http-auth]]
===== Basic Authentication
To connect to an instance of Elasticsearch with Shield, set up the username and password credentials with the following
configuration parameters:
[source, shell]
--------------------------------------------------
input { ... }
output {
elasticsearch {
protocol => "http"
...
user => ... # string
password => ... # string
}
}
--------------------------------------------------
[[ls-http-ssl]]
===== SSL/TLS Configuration for HTTPS
To enable SSL/TLS encryption for HTTPS, use the following configuration block:
[source, shell]
--------------------------------------------------
input { ... }
output {
elasticsearch {
protocol => "http"
...
ssl => true
cacert => '/path/to/cert.pem' <1>
}
}
--------------------------------------------------
<1> The path to the `.pem` file in your filesystem that contains the Certificate Authority's certificate.
[[ls-transport]]
==== Connecting with Transport protocol
By setting the "protocol" option to "transport", Logstash communicates with the Elasticsearch cluster through the same
protocol nodes use between each other. This avoids JSON un/marshalling and is therefore more efficient.
In order to unlock this option, it's necessary to install an additional plugin in Logstash using the following command:
[source, shell]
--------------------------------------------------
bin/plugin install logstash-output-elasticsearch-shield
--------------------------------------------------
[[ls-transport-auth]]
===== Authentication for Transport protocol
Transport protocol supports both basic auth and client-certificate authentication through the use of Public Key Infrastructure (PKI).
[[ls-transport-auth-basic]]
===== Basic Authentication
To connect to an instance of Elasticsearch with Shield using basic auth, set up the username and password credentials with the following configuration parameters:
[source, shell]
--------------------------------------------------
input { ... }
output {
elasticsearch {
protocol => "transport"
...
user => ... # string
password => ... # string
}
}
--------------------------------------------------
[[ls-transport-auth-pki]]
===== PKI Authentication
To connect to an instance of Elasticsearch with Shield using client-certificate authentication, you need to configure the path to the keystore that contains the client's certificate, as well as the keystore password:
[source, shell]
--------------------------------------------------
input { ... }
output {
elasticsearch {
protocol => "transport"
...
ssl => true
keystore => ... # string
keystore_password => ... # string
}
}
--------------------------------------------------
[[ls-transport-conf]]
===== SSL Configuration for Transport or Node protocols
Specify the paths to the keystore and truststore `.jks` files with the following configuration parameters:
[source, shell]
--------------------------------------------------
input { ... }
output {
elasticsearch {
protocol => "transport"
host => ... # string (optional)
cluster => ... # string (optional)
...
ssl => true
keystore => ... # string
keystore_password => ... # string
truststore => ... # string
truststore_password => ... # string
}
}
--------------------------------------------------
For more information on encryption and certificates, see the <<ssl-tls,Securing Nodes>> section.
[[ls-failure]]
==== Failures
Logstash raises an exception that halts the processing pipeline when the server's certificate does not validate over SSL
on any of the protocols discussed in this section. The same applies to invalid user credentials.

[[marvel]]
=== Shield with Marvel
Marvel consists of a user interface over a data exporter known as the _agent_. The agent runs on each node and accesses
that node's monitoring API. The agent can store this collected data locally, on the cluster, or send the data to an
external monitoring cluster. Users can view and analyze the collected data with the Marvel UI.
To work with the Shield plugin, Marvel's configuration needs to be adapted for the _production_ cluster, which is the
cluster being monitored, as well as the _monitoring_ cluster, where the monitoring data is stored. For clusters that
store their own monitoring data, apply both sets of settings to the single, production cluster.
You will configure at least two users to work with Marvel. These users have to exist on the monitoring cluster.
* The Agent needs to be assigned a user with the correct <<roles,privileges>> to write data to the Marvel indices
named `.marvel-*`, check the Marvel index template, and upload the Marvel index template. You need only one agent user.
* Marvel UI users must authenticate and have privileges to read data from the Marvel indices. These users also
need to be able to call the Nodes Info API in order to get the monitoring cluster's Elasticsearch version.
This version check allows Marvel to be compatible with many versions of Elasticsearch. You can have as many of
these end users configured as you would like.
The default `roles.yml` file includes definitions for these two roles. The steps below show you how to create these
users on the monitoring cluster.
[[monitoring-cluster]]
==== Monitoring Cluster Settings
The monitoring cluster is used to both store and view the Marvel data. When configuring Shield, you need to perform the
following actions:
* Make sure there is a user created with the `marvel_agent` role. Marvel uses this to export the data.
* Make sure there is a user created with the `marvel_user` role. You use this to view the Marvel UI and get license information.
* When using Marvel on a production server, you must enter your Marvel license. This license is stored in the
monitoring cluster. This step needs to be performed once, by a user with permission to write to the `.marvel-kibana`
index. The `.marvel-kibana` index is used to store Marvel UI settings (for example, custom warning levels), and
write permission on `.marvel-kibana` is therefore also required for UI customizations. Both storing the license and storing
settings can be done by any user with the `marvel_user` role.
These two roles are defined in the default `roles.yml`:
[source,yaml]
--------------------------------------------------
marvel_agent:
cluster: indices:admin/template/get, indices:admin/template/put
indices:
'.marvel-*': indices:data/write/bulk, create_index
marvel_user:
cluster: cluster:monitor/nodes/info, cluster:admin/plugin/license/get
indices:
'.marvel-*': all
--------------------------------------------------
Once the roles are configured, create a user for the agent:
[source,shell]
--------------------------------------------------
bin/shield/esusers useradd marvel_export -p strongpassword -r marvel_agent
--------------------------------------------------
Then create one or more users for the Marvel UI:
[source,shell]
--------------------------------------------------
bin/shield/esusers useradd USER -p strongerpassword -r marvel_user
--------------------------------------------------
==== Production Cluster Settings
The Marvel agent is installed on every node in the production cluster. The agent collects monitoring data from the
production cluster and stores the data on the monitoring cluster. The agent's configuration specifies a list of
hostname and port combinations for access to the monitoring cluster.
When the monitoring cluster uses the Shield plugin and is configured to accept only HTTPS requests, you must configure the agent
on the production cluster to use HTTPS instead of the default HTTP protocol.
Authentication and protocol configuration are both controlled by the `marvel.agent.exporter.es.hosts` setting in the
node's `elasticsearch.yml` file. The setting accepts a list of monitoring cluster servers to serve as a fallback
in case a server is unavailable. Each of these servers must be properly configured, as in the following example:
.Example `marvel.agent.exporter.es.hosts` setting
[source,yaml]
-------------------------------------------------------------------------------------------------------------------
marvel.agent.exporter.es.hosts: [ "https<1>://USER:PASSWORD<2>@node01:9200", "https://USER:PASSWORD@node02:9200"]
-------------------------------------------------------------------------------------------------------------------
<1> Indicates to use HTTPS.
<2> Username and password. The user needs to be configured on the Monitoring Cluster as described in the next section.
When the monitoring cluster uses HTTPS, the Marvel agent will attempt to validate the certificate of the Elasticsearch
node in the monitoring cluster. If you are using your own CA, you should specify a truststore that contains the signing
certificate of the CA. Here is an example config for the `marvel.agent.exporter.es.ssl.truststore.*` settings:
[source,yaml]
-------------------------------------------------------------------------------------------------------------
marvel.agent.exporter.es.hosts: [ "https://USER:PASSWORD@node01:9200", "https://USER:PASSWORD@node02:9200"]
marvel.agent.exporter.es.ssl.truststore.path: FULL_FILE_PATH
marvel.agent.exporter.es.ssl.truststore.password: PASSWORD
-------------------------------------------------------------------------------------------------------------
See the http://www.elastic.co/guide/en/marvel/current/configuration.html[Marvel documentation] for more details about
other SSL related settings.
NOTE: The 1.3.0 release of Marvel adds HTTPS support.
==== Marvel user interface & Sense
The Marvel UI supports SSL without the need for any additional configuration. You can change the URL access scheme for Marvel to
HTTPS.
Users attempting to access the Marvel UI with the URL `https://HOST:9200/_plugin/marvel` must provide valid
credentials. See <<monitoring-cluster,Monitoring Cluster settings>> for information on the required user configuration.
Sense also supports HTTPS access. Users that access Sense over URLs of the form
`https://host:9200/_plugin/marvel/sense/index.html` must provide valid credentials if they have not already
authenticated to a dashboard.
Users connecting to the production cluster with Sense must provide valid credentials. Clusters must be configured to
allow cross-origin requests so that users can connect with Sense. See the <<cors, CORS>> documentation for details.
NOTE: Providing user credentials to Sense in order to access another cluster is only supported in releases 1.3.0 and
later of Marvel.

[[esusers]]
=== esusers - Internal File Based Authentication
The _esusers_ realm is the default Shield realm. It enables you to register users and their passwords, and to
associate those users with roles. The `esusers` command-line tool assists with the registration and
administration of users.
==== `esusers` Realm Settings
Like all other realms, the `esusers` realm is configured under the `shield.authc.realms` settings namespace in the
`elasticsearch.yml` file. The following snippet shows an example of such configuration:
.Example `esusers` Realm Configuration
[source, yaml]
------------------------------------------------------------
shield:
authc:
realms:
default:
type: esusers
order: 0
------------------------------------------------------------
[[esusers-settings]]
.`esusers` Realm Settings
|=======================
| Setting | Required | Description
| `type` | yes | Indicates the realm type and must be set to `esusers`.
| `order` | no | Indicates the priority of this realm within the realm chain. Realms with lower order will be consulted first. Although not required, it is highly recommended to explicitly set this value when multiple realms are configured. Defaults to `Integer.MAX_VALUE`.
| `enabled` | no | Indicates whether this realm is enabled/disabled. Provides an easy way to disable realms in the chain without removing their configuration. Defaults to `true`.
| `files.users` | no | Points to the location of the `users` file where the users and their passwords are stored. Defaults to `users` file under shield's <<shield-config, config directory>>.
| `files.users_roles` | no | Points to the location of the `users_roles` file where the users and their roles are stored. Defaults to `users_roles` file under shield's <<shield-config, config directory>>.
| `cache.ttl` | no | Specifies the time-to-live for cached user entries (a user and its credentials will be cached for this configured period of time). Defaults to `20m` (use the standard elasticsearch {ref}/common-options.html#time-units[time units]).
| `cache.max_users` | no | Specifies the maximum number of user entries that can live in the cache at a given time. Defaults to 100,000.
| `cache.hash_algo` | no | (Expert Setting) Specifies the hashing algorithm that will be used for the in-memory cached user credentials (see <<esusers-cache-hash-algo,here>> for possible values).
|=======================
NOTE: When no realms are explicitly configured in `elasticsearch.yml`, a default realm chain will be created that holds
a single `esusers` realm. If you only want to use the `esusers` realm and are satisfied with the default
file paths, there is no need to add the above configuration.
==== The `esusers` Command Line Tool
The `esusers` command line tool is located under Shield's <<shield-bin, bin>> directory and enables several
administrative tasks for managing users:
* <<esusers-add,Adding users>>
* <<esusers-list,Listing users and roles>>
* <<esusers-pass,Managing user passwords>>
* <<esusers-roles,Managing users' roles>>
* <<esusers-del,Removing users>>
[[esusers-add]]
===== Adding Users
The `esusers useradd` command adds a user to your cluster.
NOTE: To ensure that Elasticsearch can read the user and role information at startup, run `esusers useradd` as the
same user you use to run Elasticsearch. Running the command as root or some other user will update the permissions
for the `users` and `users_roles` files and prevent Elasticsearch from accessing them.
[source,shell]
----------------------------------------
esusers useradd <username>
----------------------------------------
A username must be at least 1 character and no longer than 30 characters. The first character must be a letter
(`a-z` or `A-Z`) or an underscore (`_`). Subsequent characters can be letters, underscores (`_`), digits (`0-9`), or any
of the following symbols: `@`, `-`, `.` or `$`.
You can specify the user's password at the command line with the `-p` option. When this option is absent, the
`esusers` command prompts you for the password. Omit the `-p` option to keep plaintext passwords out of the terminal
session's command history.
[source,shell]
----------------------------------------------------
esusers useradd <username> -p <secret>
----------------------------------------------------
Passwords must be at least 6 characters long.
You can define a user's roles with the `-r` parameter. This parameter accepts a comma-separated list of role names to
associate with the user.
[source,shell]
-------------------------------------------------------------------
esusers useradd <username> -r <comma-separated list of role names>
-------------------------------------------------------------------
The following example adds a new user named `jacknich` to the _esusers_ realm. The password for this user is
`theshining`, and this user is associated with the `logstash` and `marvel` roles.
[source,shell]
---------------------------------------------------------
esusers useradd jacknich -p theshining -r logstash,marvel
---------------------------------------------------------
For valid role names please see <<valid-role-name, Role Definitions>>.
[[esusers-list]]
===== Listing Users
The `esusers list` command lists the users registered in the _esusers_ realm, as in the following example:
[source, shell]
----------------------------------
esusers list
rdeniro : admin
alpacino : power_user
jacknich : marvel,logstash
----------------------------------
Users are in the left-hand column and their corresponding roles are listed in the right-hand column.
===== Listing Specific Users
The `esusers list <username>` command lists a specific user. Use this command to verify that a user has been
successfully added to the cluster.
[source,shell]
-----------------------------------
esusers list jacknich
jacknich : marvel,logstash
-----------------------------------
[[esusers-pass]]
===== Changing Users' Passwords
The `esusers passwd` command enables you to reset a user's password. You can specify the new password directly with the
`-p` option. When the `-p` option is omitted, the tool prompts you to enter and confirm a password interactively.
[source,shell]
--------------------------------------------------
esusers passwd <username>
--------------------------------------------------
[source,shell]
--------------------------------------------------
esusers passwd <username> -p <password>
--------------------------------------------------
[[esusers-roles]]
===== Changing Users' Roles
The `esusers roles` command manages the roles associated with a particular user. The `-a` option adds a comma-separated
list of roles to a user. The `-r` option removes a comma-separated list of roles from a user. You can combine adding and
removing roles within the same command to change a user's roles.
[source,shell]
------------------------------------------------------------------------------------------------------------
esusers roles <username> -a <comma-separated list of roles> -r <comma-separated list of roles>
------------------------------------------------------------------------------------------------------------
The following command removes the `logstash` and `marvel` roles from user `jacknich`, as well as adding the `user` role:
[source,shell]
---------------------------------------------------------------
esusers roles jacknich -r logstash,marvel -a user
---------------------------------------------------------------
Listing the user displays the new role assignment:
[source,shell]
---------------------------------
esusers list jacknich
jacknich : user
---------------------------------
[[esusers-del]]
===== Deleting Users
The `esusers userdel` command deletes a user.
[source,shell]
--------------------------------------------------
esusers userdel <username>
--------------------------------------------------
==== How `esusers` Works
The `esusers` tool manipulates two files, `users` and `users_roles`, in Shield's
<<shield-config,config>> directory. These two files store all user data for the _esusers_ realm and are read by Shield
on startup.
By default, Shield checks these files for changes every 5 seconds. You can change this default behavior by changing the
value of the `resource.reload.interval.high` setting in the `elasticsearch.yml` file.
[IMPORTANT]
==============================
These files are managed locally by the node and are **not** managed
globally by the cluster. This means that with a typical multi-node cluster,
the exact same changes need to be applied on each and every node in the
cluster.
A safer approach would be to apply the change on one of the nodes and have the
`users` and `users_roles` files distributed/copied to all other nodes in the
cluster (either manually or using a configuration management system such as
Puppet or Chef).
==============================
While it is possible to modify these files directly using any standard text
editor, we strongly recommend using the `esusers` command-line tool to apply
the required changes.
[[users-file]]
===== The `users` File
The `users` file stores all the users and their passwords. Each line in the `users` file represents a single user entry
consisting of the username and **hashed** password.
[source,bash]
----------------------------------------------------------------------
rdeniro:$2a$10$BBJ/ILiyJ1eBTYoRKxkqbuDEdYECplvxnqQ47uiowE7yGqvCEgj9W
alpacino:$2a$10$cNwHnElYiMYZ/T3K4PvzGeJ1KbpXZp2PfoQD.gfaVdImnHOwIuBKS
jacknich:$2a$10$GYUNWyABV/Ols/.bcwxuBuuaQzV6WIauW6RdboojxcixBq3LtI3ni
----------------------------------------------------------------------
NOTE: The `esusers` command-line tool uses `bcrypt` to hash the password by default.
[[users_roles-file]]
===== The `users_roles` File
The `users_roles` file stores the roles associated with the users, as in the following example:
[source,shell]
--------------------------------------------------
admin:rdeniro
power_user:alpacino,jacknich
user:jacknich
--------------------------------------------------
Each row maps a role to a comma-separated list of all the users that are associated with that role.
==== User Cache
The user credentials are not stored on disk in clear text. The `esusers` tool creates `bcrypt` hashes of the passwords and
stores those. `bcrypt` is considered a highly secure hash and by default uses 10 rounds to generate the salts
it hashes with. While highly secure, it is also relatively slow. For this reason, Shield also introduces an in-memory
cache over the `esusers` store. This cache can use a different hashing algorithm for storing the passwords in memory.
The default hashing algorithm is `ssha256` - a salted SHA-256 algorithm.
As shown in the table <<esusers-settings,above>>, the cache characteristics can be configured. The following table
describes the different hash algorithms that can be set:
[[esusers-cache-hash-algo]]
.Cache hash algorithms
|=======================
| Algorithm | Description
| `ssha256` | Uses a salted `SHA-256` algorithm (default).
| `md5` | Uses `MD5` algorithm.
| `sha1` | Uses `SHA1` algorithm.
| `bcrypt` | Uses `bcrypt` algorithm with salt generated in 10 rounds.
| `bcrypt4` | Uses `bcrypt` algorithm with salt generated in 4 rounds.
| `bcrypt5` | Uses `bcrypt` algorithm with salt generated in 5 rounds.
| `bcrypt6` | Uses `bcrypt` algorithm with salt generated in 6 rounds.
| `bcrypt7` | Uses `bcrypt` algorithm with salt generated in 7 rounds.
| `bcrypt8` | Uses `bcrypt` algorithm with salt generated in 8 rounds.
| `bcrypt9` | Uses `bcrypt` algorithm with salt generated in 9 rounds.
| `noop`,`clear_text` | Doesn't hash the credentials and keeps them in clear text in memory. CAUTION:
                        keeping clear text is considered insecure and can be compromised at the OS
                        level (e.g. memory dumps and `ptrace`).
|=======================
===== Cache Eviction API
Shield exposes an API to force cached user eviction. The following example evicts all cached users of the `esusers`
realm:
[source,shell]
------------------------------------------------------------
$ curl -XPOST 'http://localhost:9200/_shield/realm/esusers/_cache/clear'
------------------------------------------------------------
It is also possible to evict specific users:
[source,shell]
------------------------------------------------------------
$ curl -XPOST 'http://localhost:9200/_shield/realm/esusers/_cache/clear?usernames=rdeniro,alpacino'
------------------------------------------------------------
Multiple realms can also be specified using a comma-delimited list:
[source,shell]
------------------------------------------------------------
$ curl -XPOST 'http://localhost:9200/_shield/realm/esusers,ldap1/_cache/clear'
------------------------------------------------------------
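The same eviction can also be performed from Java through the `ShieldClient` described in the <<shield-client, Shield Client>> section. The following is a minimal sketch that mirrors the last request above and assumes `client` is an existing node or transport client:
[source,java]
------------------------------------------------------------
// client is an existing node or transport client (see the Shield Client section)
ShieldClient shieldClient = new ShieldClient(client);
ClearRealmCacheResponse response = shieldClient.authc().prepareClearRealmCache()
    .realms("esusers", "ldap1")   // no usernames specified - evicts all cached users of both realms
    .get();
------------------------------------------------------------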

[[ldap]]
=== LDAP Authentication
A secure Elasticsearch cluster can authenticate users from a Lightweight Directory Access Protocol (LDAP) directory.
With LDAP Authentication, you can assign roles to LDAP groups. When a user authenticates with LDAP, the privileges for
that user are the union of all privileges defined by the roles assigned to the set of groups that the user belongs to.
This section discusses configuration for an LDAP Realm.
==== LDAP Overview
LDAP stores users and groups hierarchically, similar to the way folders are grouped in a file system. The path to any
entry is a _Distinguished Name_, or DN. A DN uniquely identifies a user or group. User and group names typically use
attributes such as _common name_ (`cn`) or _unique ID_ (`uid`). An LDAP directory's hierarchy is built from containers
such as the _organizational unit_ (`ou`), _organization_ (`o`), or _domain controller_ (`dc`).
LDAP ignores white space in a DN definition. The following two DNs are equivalent:
[source,shell]
---------------------------------
"cn=admin,dc=example,dc=com"
"cn =admin ,dc= example , dc = com"
---------------------------------
Although optional, connections to the LDAP server should use the Secure Sockets Layer (SSL/TLS) protocol to protect
passwords. Clients and nodes that connect via SSL/TLS to the LDAP server require the certificate or the root CA for the
server. These certificates should be put into each node's keystore/truststore.
[[ldap-realms]]
==== LDAP Realm Settings
Like all realms, the `ldap` realm is configured under the `shield.authc.realms` settings namespace in the
`elasticsearch.yml` file. The LDAP realm supports two modes of operation, a user search mode and a mode with specific
templates for user DNs.
[[ldap-user-search]]
===== LDAP Realm with User Search added[1.1.0]
An LDAP user search is the most common mode of operation. In this mode, a specific user with permission to search the LDAP
directory is used to search for the DN of the authenticating user, based on the supplied username and an LDAP attribute.
The following snippet shows an example of such a configuration:
.Example LDAP Realm Configuration with User Search
[source, yaml]
------------------------------------------------------------
shield:
authc:
realms:
ldap1:
type: ldap
order: 0
url: "ldaps://ldap.example.com:636"
bind_dn: "cn=ldapuser, ou=users, o=services, dc=example, dc=com"
bind_password: changeme
user_search:
base_dn: "dc=example,dc=com"
attribute: cn
group_search:
base_dn: "dc=example,dc=com"
files:
role_mapping: "/mnt/elasticsearch/group_to_role_mapping.yml"
unmapped_groups_as_roles: false
------------------------------------------------------------
===== LDAP Realm with User DN Templates
User DN templates can be specified if your LDAP environment uses a few specific, standard naming conventions for users. The
advantage of this method is that a search is not needed to find the user DN; the disadvantage is that multiple bind
operations may be needed to find the right user DN. The following snippet shows an example of such a configuration:
.Example LDAP Realm Configuration with User DN Templates
[source, yaml]
------------------------------------------------------------
shield:
authc:
realms:
ldap1:
type: ldap
order: 0
url: "ldaps://ldap.example.com:636"
user_dn_templates:
- "cn={0}, ou=users, o=marketing, dc=example, dc=com"
- "cn={0}, ou=users, o=engineering, dc=example, dc=com"
group_search:
base_dn: "dc=example,dc=com"
files:
role_mapping: "/mnt/elasticsearch/group_to_role_mapping.yml"
unmapped_groups_as_roles: false
------------------------------------------------------------
[[ldap-settings]]
.Common LDAP Realm Settings
|=======================
| Setting | Required | Description
| `type` | yes | Indicates the realm type and must be set to `ldap`.
| `order` | no | Indicates the priority of this realm within the realm chain. Realms with lower order will be consulted first. Although not required, it is highly recommended to explicitly set this value when multiple realms are configured. Defaults to `Integer.MAX_VALUE`.
| `enabled` | no | Indicates whether this realm is enabled/disabled. Provides an easy way to disable realms in the chain without removing their configuration. Defaults to `true`.
| `url` | yes | Specifies the LDAP URL in the form of `ldap[s]://<server>:<port>`. Shield attempts to authenticate against this URL.
| `group_search.base_dn` | no | Specifies a container DN to search for groups in which the user has membership. When this element is absent, Shield searches for a `memberOf` attribute set on the user in order to determine group membership.
| `group_search.scope` | no | Specifies whether the group search should be `sub_tree`, `one_level` or `base`. `one_level` only searches objects directly contained within the `base_dn`. The default `sub_tree` searches all objects contained under `base_dn`. `base` specifies that the `base_dn` is a group object, and that it is the only group considered.
| `group_search.filter` | no | When not set, the realm will search for `group`, `groupOfNames`, or `groupOfUniqueNames`, with the attributes `member` or `memberOf`. Any instance of `{0}` in the filter will be replaced by the user attribute defined in `group_search.user_attribute`
| `group_search.user_attribute` | no | Specifies the user attribute that will be fetched and provided as a parameter to the filter. If not set, the user DN is passed into the filter.
| `unmapped_groups_as_roles` | no | When set to `true`, the names of any unmapped LDAP groups are used as role names and assigned to the user. The default value is `false`.
| `connect_timeout` | no | The timeout period for establishing an LDAP connection. An `s` at the end indicates seconds, `ms` indicates milliseconds. Defaults to `5s` (5 seconds).
| `read_timeout` | no | The timeout period for an LDAP operation. An `s` at the end indicates seconds, `ms` indicates milliseconds. Defaults to `5s` (5 seconds).
| `files.role_mapping` | no | Specifies the path and file name for the <<ldap-role-mapping, YAML role mapping configuration file>>. The default file name is `role_mapping.yml` in the <<shield-config,Shield config directory>>.
| `follow_referrals` | no | Boolean value that specifies whether Shield should follow referrals returned by the LDAP server. Referrals are URLs returned by the server that are to be used to continue the LDAP operation (e.g. search). Default is `true`.
| `hostname_verification` | no | When set to `true`, hostname verification will be performed when connecting to a LDAP server. The hostname or IP address used in the `url` must match one of the names in the certificate or the connection will not be allowed. Defaults to `true`.
| `cache.ttl` | no | Specifies the time-to-live for cached user entries (a user and its credentials will be cached for this configured period of time). Defaults to `20m` (use the standard elasticsearch {ref}/common-options.html#time-units[time units]).
| `cache.max_users` | no | Specifies the maximum number of user entries that can live in the cache at a given time. Defaults to 100,000.
| `cache.hash_algo` | no | (Expert Setting) Specifies the hashing algorithm that will be used for the in-memory cached user credentials (see <<ldap-cache-hash-algo,here>> for possible values).
|=======================
.User Template LDAP Realm Settings
|=======================
| Setting | Required | Description
| `user_dn_templates` | yes | Specifies the DN template that replaces the user name with the string `{0}`. This element is multivalued, allowing for multiple user contexts.
|=======================
.User Search LDAP Realm Settings added[1.1.0]
|=======================
| Setting | Required | Description
| `bind_dn` | no | The DN of the user that will be used to bind to the LDAP and perform searches. If this is not specified, an anonymous bind will be attempted.
| `bind_password` | no | The password for the user that will be used to bind to the LDAP.
| `user_search.base_dn` | yes | Specifies a container DN to search for users.
| `user_search.scope` | no | The scope of the user search. Valid values are `sub_tree`, `one_level` or `base`. `one_level` only searches objects directly contained within the `base_dn`. The default `sub_tree` searches all objects contained under `base_dn`. `base` specifies that the `base_dn` is the user object, and that it is the only user considered.
| `user_search.attribute` | no | The attribute to match with the username presented to Shield. The default attribute is `uid`
| `user_search.pool.size` | no | The maximum number of connections to the LDAP server to allow in the connection pool. Default is `20`.
| `user_search.pool.initial_size` | no | The initial number of connections to create to the LDAP server on startup. Default is `5`.
| `user_search.pool.health_check.enabled` | no | Flag to enable or disable a health check on LDAP connections in the connection pool. Connections will be checked in the background at the specified interval. Default is `true`
| `user_search.pool.health_check.dn` | no | The distinguished name to be retrieved as part of the health check. Default is the value of `bind_dn`. If `bind_dn` is not specified, a value must be specified.
| `user_search.pool.health_check.interval` | no | The interval to perform background checks of connections in the pool. Default is `60s`.
|=======================
NOTE: If any settings starting with `user_search` are specified, the `user_dn_templates` setting is ignored.
NOTE: `bind_dn`, `bind_password` and `hostname_verification` are considered to be sensitive settings and therefore are not exposed via the
{ref}/cluster-nodes-info.html#cluster-nodes-info[nodes info API].
[[ldap-role-mapping]]
==== Mapping Users and Groups to Roles
By default, the file that maps users and groups to roles is `config/shield/role_mapping.yml`. You can configure
the path and name of the mapping file with the realm's `files.role_mapping` setting (see the
<<ldap-settings,table above>>). When you map roles to groups, the roles of a user in that group are the combination of the
roles assigned to that group and the roles assigned to that user.
The `role_mapping.yml` file uses the YAML format. Within a mapping file, Elasticsearch roles are keys and LDAP groups
and users are values. The mapping can have a many-to-many relationship.
.Example Role Mapping File
[source, yaml]
------------------------------------------------------------
# Example LDAP group mapping configuration:
# roleA: <1>
# - groupA-DN <2>
# - groupB-DN
# - user1-DN <3>
monitoring:
- "cn=admins,dc=example,dc=com"
user:
- "cn=users,dc=example,dc=com"
- "cn=admins,dc=example,dc=com"
- "cn=John Doe,cn=contractors,dc=example,dc=com"
------------------------------------------------------------
<1> The name of the elasticsearch role found in the <<roles-file, roles file>>
<2> Example specifying the distinguished name of a LDAP group
<3> Example specifying the distinguished name of a LDAP user added[1.1.0]
After setting up role mappings, copy this file to each node. Tools like Puppet or Chef can help with this.
==== Adding an LDAP server certificate
To use SSL/TLS to access your LDAP server over a URL with the `ldaps` protocol, make sure the LDAP client used by
Shield can access the certificate of the CA that signed the LDAP server's certificate. This enables Shield's LDAP
client to authenticate the LDAP server before sending any passwords to it.
To do this, first obtain the certificate of the LDAP server, or the certificate of the CA that signed the LDAP server's certificate.
You can use the `openssl` command to fetch the certificate and add the certificate to the `ldap.crt` file, as in
the following Unix example:
[source, shell]
----------------------------------------------------------------------------------------------
echo | openssl s_client -connect ldap.example.com:636 2>/dev/null | openssl x509 > ldap.crt
----------------------------------------------------------------------------------------------
NOTE: Older versions of openssl might not have the `-connect` option. Instead use the `-host` and `-port` options.
[[keytool]]
This certificate needs to be stored in the node keystore/truststore. Import the certificate into the truststore with the
following command, providing the password for the keystore when prompted.
[source,shell]
----------------------------------------------------------------------------------------------------
keytool -import -keystore node01.jks -file ldap.crt
----------------------------------------------------------------------------------------------------
If not already configured, add the path of the keystore/truststore to `elasticsearch.yml` as described in <<securing-nodes>>.
By default, Shield will attempt to verify the hostname or IP address used in the `url` against the values in the
certificate. If the values in the certificate do not match, Shield will not allow a connection to the LDAP server. This
behavior can be disabled by setting the `hostname_verification` property to `false`.
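As an illustration only (the realm name, server address and DN template below are placeholders), a realm that connects over `ldaps` but skips hostname verification, for example against a test server whose certificate does not list the address used in `url`, could look like this. Leaving verification enabled is strongly recommended in production.
[source, yaml]
------------------------------------------------------------
shield:
  authc:
    realms:
      ldap1:
        type: ldap
        url: "ldaps://10.0.0.5:636"                 # address not listed in the server certificate
        hostname_verification: false                # disable certificate hostname/IP checking
        user_dn_templates:
          - "cn={0},ou=users,dc=example,dc=com"     # {0} is replaced with the username
------------------------------------------------------------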
Restart Elasticsearch to pick up the changes to `elasticsearch.yml`.
NOTE: `hostname_verification` is considered to be a sensitive setting and therefore is not exposed via the
{ref}/cluster-nodes-info.html#cluster-nodes-info[nodes info API].
[[ldap-user-cache]]
==== User Cache
To avoid connecting to the LDAP server for every incoming request, the users and their credentials are cached
locally on each node. This is a common practice when authenticating against remote servers and, as can be seen
in the table <<ldap-settings,above>>, the characteristics of this cache are configurable.
The cached user credentials are hashed in memory, and there are several hash algorithms to choose from:
[[ldap-cache-hash-algo]]
.Cache hash algorithms
|=======================
| Algorithm | Description
| `ssha256` | Uses a salted `SHA-256` algorithm (default).
| `md5` | Uses `MD5` algorithm.
| `sha1` | Uses `SHA1` algorithm.
| `bcrypt` | Uses `bcrypt` algorithm with salt generated in 10 rounds.
| `bcrypt4` | Uses `bcrypt` algorithm with salt generated in 4 rounds.
| `bcrypt5` | Uses `bcrypt` algorithm with salt generated in 5 rounds.
| `bcrypt6` | Uses `bcrypt` algorithm with salt generated in 6 rounds.
| `bcrypt7` | Uses `bcrypt` algorithm with salt generated in 7 rounds.
| `bcrypt8` | Uses `bcrypt` algorithm with salt generated in 8 rounds.
| `bcrypt9` | Uses `bcrypt` algorithm with salt generated in 9 rounds.
| `sha2` | Uses `SHA2` algorithm.
| `apr1` | Uses `apr1` algorithm (md5 crypt).
| `noop`,`clear_text` | Doesn't hash the credentials and keeps them in clear text in memory. CAUTION:
                        keeping clear text is considered insecure and can be compromised at the OS
                        level (e.g. memory dumps and `ptrace`).
|=======================
===== Cache Eviction API
Shield exposes an API to force cached user eviction. The following example evicts all users from the `ldap1`
realm:
[source, shell]
------------------------------------------------------------
$ curl -XPOST 'http://localhost:9200/_shield/realm/ldap1/_cache/clear'
------------------------------------------------------------
It is also possible to evict specific users:
[source, shell]
------------------------------------------------------------
$ curl -XPOST 'http://localhost:9200/_shield/realm/ldap1/_cache/clear?usernames=rdeniro,alpacino'
------------------------------------------------------------
Multiple realms can also be specified using a comma-delimited list:
[source, shell]
------------------------------------------------------------
$ curl -XPOST 'http://localhost:9200/_shield/realm/ldap1,ldap2/_cache/clear'
------------------------------------------------------------
[[active_directory]]
=== Active Directory Authentication
A secure Elasticsearch cluster can authenticate users against Active Directory using the LDAP protocol.
With Active Directory realm authentication, you can assign roles to Active Directory groups. When a user
authenticates with Active Directory, the privileges for that user are the union of all privileges defined by the roles
assigned to the set of groups that the user belongs to.
==== Active Directory and LDAP
The Active Directory Realm uses LDAP to communicate with Active Directory. The Active Directory Realm is similar to the
LDAP realm but takes advantage of extra features and streamlines configuration.
A general overview of LDAP will help with the configuration. LDAP databases, like Active Directory, store users and
groups hierarchically, similar to the way folders are grouped in a file system. The path to any
entry is a _Distinguished Name_, or DN. A DN uniquely identifies a user or group. User and group names typically use
attributes such as _common name_ (`cn`) or _unique ID_ (`uid`). An LDAP directory's hierarchy is built from containers
such as the _organizational unit_ (`ou`), _organization_ (`o`), or _domain component_ (`dc`).
LDAP ignores white space in a DN definition. The following two DNs are equivalent:
[source,shell]
---------------------------------
"cn=admin,dc=example,dc=com"
"cn =admin ,dc= example , dc = com"
---------------------------------
Although optional, connections to the Active Directory server should use the Secure Sockets Layer (SSL/TLS) protocol to protect
passwords. Clients and nodes that connect via SSL/TLS to the LDAP server require the certificate or the root CA for the
server. These certificates should be put into each node's keystore/truststore.
==== Active Directory Realm Configuration
Like all realms, the `active_directory` realm is configured under the `shield.authc.realms` settings namespace in the
`elasticsearch.yml` file. The following snippet shows an example of such configuration:
.Example Active Directory Configuration
[source, yaml]
------------------------------------------------------------
shield:
authc:
realms:
active_directory:
type: active_directory
order: 0
domain_name: example.com
unmapped_groups_as_roles: true
...
------------------------------------------------------------
[[ad-settings]]
.Active Directory Realm Settings
|=======================
| Setting | Required | Description
| `type` | yes | Indicates the realm type and must be set to `active_directory`
| `order` | no | Indicates the priority of this realm within the realm chain. Realms with lower order will be consulted first. Although not required, it is highly recommended to explicitly set this value when multiple realms are configured. Defaults to `Integer.MAX_VALUE`.
| `enabled` | no | Indicates whether this realm is enabled/disabled. Provides an easy way to disable realms in the chain without removing their configuration. Defaults to `true`.
| `domain_name` | yes | Specifies the domain name of the Active Directory. The cluster can derive the LDAP URL and `user_search_dn` fields from values in this element if those fields are not otherwise specified.
| `url` | no | Specifies an LDAP URL in the form of `ldap[s]://<server>:<port>`. Shield attempts to authenticate against this URL. If not specified, the URL will be derived from the `domain_name`, assuming clear-text `ldap` and port `389` (e.g. `ldap://<domain_name>:389`).
| `user_search.base_dn` | no | Specifies the context to search for the user. The default value for this element is the root of the Active Directory domain.
| `user_search.scope` | no | Specifies whether the user search should be `sub_tree` (default), `one_level` or `base`. `one_level` only searches users directly contained within the `base_dn`. `sub_tree` searches all objects contained under `base_dn`. `base` specifies that the `base_dn` is a user object, and that it is the only user considered.
| `user_search.filter` | no | Specifies a filter to use to look up a user given a username. The default filter looks up `user` objects with either `sAMAccountName` or `userPrincipalName`.
| `group_search.base_dn` | no | Specifies the context to search for groups in which the user has membership. The default value for this element is the root of the Active Directory domain.
| `group_search.scope` | no | Specifies whether the group search should be `sub_tree` (default), `one_level` or `base`. `one_level` searches for groups directly contained within the `base_dn`. `sub_tree` searches all objects contained under `base_dn`. `base` specifies that the `base_dn` is a group object, and that it is the only group considered.
| `unmapped_groups_as_roles` | no | When set to `true`, the names of any unmapped LDAP groups are used as role names and assigned to the user. The default value is `false`.
| `files.role_mapping` | no | Specifies the path and file name for the <<ad-role-mapping, YAML role mapping configuration file>>. The default file name is `role_mapping.yml` in the <<shield-config,Shield config directory>>.
| `follow_referrals` | no | Boolean value that specifies whether Shield should follow referrals returned by the LDAP server. Referrals are URLs returned by the server that are to be used to continue the LDAP operation (e.g. search). Default is `true`.
| `hostname_verification` | no | When set to `true`, hostname verification will be performed when connecting to an LDAP server. The hostname or IP address used in the `url` must match one of the names in the certificate or the connection will not be allowed. Defaults to `true`.
| `cache.ttl` | no | Specifies the time-to-live for cached user entries (a user and its credentials will be cached for this configured period of time). Defaults to `20m` (use the standard Elasticsearch {ref}/common-options.html#time-units[time units]).
| `cache.max_users` | no | Specifies the maximum number of user entries that can live in the cache at a given time. Defaults to 100,000.
| `cache.hash_algo` | no | (Expert Setting) Specifies the hashing algorithm that will be used for the in-memory cached user credentials (see <<ad-cache-hash-algo,here>> for possible values).
|=======================
NOTE: `hostname_verification` is considered to be a sensitive setting and therefore is not exposed via the
{ref}/cluster-nodes-info.html#cluster-nodes-info[nodes info API].
Active Directory authentication expects the username entered to match the `sAMAccountName` or `userPrincipalName`, not the
`CommonName` (CN). The URL is optional, but allows the use of custom ports.
NOTE: Binding to Active Directory fails when the domain name is not mapped in DNS. If DNS is not being provided
by a Windows DNS server, add a mapping for the domain in the local `/etc/hosts` file.
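Pulling several of these settings together, the following sketch shows an Active Directory realm that connects over `ldaps` on a custom port, restricts user and group searches to specific containers, and tunes the user cache. The host name, DNs and cache values are examples only and need to be adapted to your environment.
[source, yaml]
------------------------------------------------------------
shield:
  authc:
    realms:
      active_directory:
        type: active_directory
        order: 0
        domain_name: example.com
        url: "ldaps://ad.example.com:3269"          # explicit LDAPS URL instead of the derived default
        user_search:
          base_dn: "ou=staff,dc=example,dc=com"     # container to search for users
        group_search:
          base_dn: "ou=groups,dc=example,dc=com"    # container to search for group memberships
        cache:
          ttl: 10m                                  # keep cached credentials for 10 minutes
          max_users: 50000                          # cap the number of cached user entries
------------------------------------------------------------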
[[ad-role-mapping]]
==== Mapping Users and Groups to Roles
By default, the file that maps users and groups to roles is `config/shield/role_mapping.yml`. You can configure
the path and name of the mapping file by setting the appropriate value for the `shield.authc.active_directory.files.role_mapping`
configuration parameter. When you map roles to groups, the roles of a user in that group are the combination of the
roles assigned to that group and the roles assigned to that user.
The `role_mapping.yml` file uses the YAML format. Within a mapping file, Elasticsearch roles are keys and Active
Directory groups and users are values. The mapping can have a many-to-many relationship.
.Example Group and Role Mapping File
[source, yaml]
------------------------------------------------------------
# Example LDAP group mapping configuration:
# roleA: <1>
# - groupA-DN <2>
# - groupB-DN
# - user1-DN <3>
monitoring:
- "cn=admins,dc=example,dc=com"
user:
- "cn=users,dc=example,dc=com"
- "cn=admins,dc=example,dc=com"
- "cn=John Doe,cn=contractors,dc=example,dc=com"
------------------------------------------------------------
<1> The name of the Elasticsearch role found in the <<roles-file, roles file>>
<2> Example specifying the distinguished name of an Active Directory group
<3> Example specifying the distinguished name of an Active Directory user
After setting up role mappings, copy this file to each node. Tools like Puppet or Chef can help with this.
==== Adding a Server Certificate
To use SSL/TLS to access your Active Directory server over a URL with the `ldaps` protocol, make sure the client
used by Shield can access the certificate of the CA that signed the LDAP server's certificate. This will enable
Shield's client to authenticate the Active Directory server before sending any passwords to it.
To do this, first obtain the certificate of the Active Directory server, or the certificate of the CA that signed it.
You can use the `openssl` command to fetch the certificate and add the certificate to the `ldap.crt` file, as in
the following Unix example:
[source, shell]
----------------------------------------------------------------------------------------------
echo | openssl s_client -connect ldap.example.com:636 2>/dev/null | openssl x509 > ldap.crt
----------------------------------------------------------------------------------------------
This certificate needs to be stored in the node keystore/truststore. Import the certificate into the truststore with the
following command, providing the password for the keystore when prompted.
[source,shell]
----------------------------------------------------------------------------------------------------
keytool -import -keystore node01.jks -file ldap.crt
----------------------------------------------------------------------------------------------------
If not already configured, add the path of the keystore/truststore to `elasticsearch.yml` as described in <<securing-nodes>>.
By default, Shield will attempt to verify the hostname or IP address used in the `url` against the values in the
certificate. If the values in the certificate do not match, Shield will not allow a connection to the Active Directory server.
This behavior can be disabled by setting the `hostname_verification` property to `false`.
Finally, restart Elasticsearch to pick up the changes to `elasticsearch.yml`.
==== User Cache
To avoid connecting to the Active Directory server for every incoming request, the users and their credentials
are cached locally on each node. This is a common practice when authenticating against remote servers and, as
can be seen in the table <<ad-settings, above>>, the characteristics of this cache are configurable.
The cached user credentials are hashed in memory, and there are several hash algorithms to choose from:
[[ad-cache-hash-algo]]
.Cache hash algorithms
|=======================
| Algorithm | Description
| `ssha256` | Uses a salted `SHA-256` algorithm (default).
| `md5` | Uses `MD5` algorithm.
| `sha1` | Uses `SHA1` algorithm.
| `bcrypt` | Uses `bcrypt` algorithm with salt generated in 10 rounds.
| `bcrypt4` | Uses `bcrypt` algorithm with salt generated in 4 rounds.
| `bcrypt5` | Uses `bcrypt` algorithm with salt generated in 5 rounds.
| `bcrypt6` | Uses `bcrypt` algorithm with salt generated in 6 rounds.
| `bcrypt7` | Uses `bcrypt` algorithm with salt generated in 7 rounds.
| `bcrypt8` | Uses `bcrypt` algorithm with salt generated in 8 rounds.
| `bcrypt9` | Uses `bcrypt` algorithm with salt generated in 9 rounds.
| `sha2` | Uses `SHA2` algorithm.
| `apr1` | Uses `apr1` algorithm (md5 crypt).
| `noop`,`clear_text` | Doesn't hash the credentials and keeps them in clear text in memory. CAUTION:
                        keeping clear text is considered insecure and can be compromised at the OS
                        level (e.g. memory dumps and `ptrace`).
|=======================
===== Cache Eviction API
Shield exposes an API to force cached user eviction. The following example evicts all users from the `ad1`
realm:
[source, shell]
------------------------------------------------------------
$ curl -XPOST 'http://localhost:9200/_shield/realm/ad1/_cache/clear'
------------------------------------------------------------
It is also possible to evict specific users:
[source, shell]
------------------------------------------------------------
$ curl -XPOST 'http://localhost:9200/_shield/realm/ad1/_cache/clear?usernames=rdeniro,alpacino'
------------------------------------------------------------
Multiple realms can also be specified using a comma-delimited list:
[source, shell]
------------------------------------------------------------
$ curl -XPOST 'http://localhost:9200/_shield/realm/ad1,ad2/_cache/clear'
------------------------------------------------------------
[[pki]]
=== PKI Authentication
added[1.3.0] Shield allows for authentication through the use of a Public Key Infrastructure (PKI). This works by requiring
clients to present X.509 certificates, which are used for authentication; authorization is performed by mapping the
distinguished name (DN) from the certificate to roles.
==== SSL/TLS setup
The PKI realm requires that SSL/TLS and client authentication be enabled on the desired network layers
(http and/or transport). It is possible to enable SSL/TLS and client authentication on only one network layer and use PKI
authentication for that layer; for example, enabling SSL/TLS and client authentication on the transport layer with a PKI
realm defined allows transport clients to authenticate with X.509 certificates, while HTTP traffic still
authenticates using username and password authentication. The PKI realm supports a client authentication setting of either
`required` or `optional`: `required` forces all clients to present a certificate, while `optional` enables clients
without certificates to authenticate with other credentials. For SSL/TLS configuration details, see
<<ref-ssl-tls-settings, SSL/TLS settings>>.
==== PKI Realm Configuration
Like all realms, the `pki` realm is configured under the `shield.authc.realms` settings namespace in the
`elasticsearch.yml` file. The following snippet shows an example of the most basic configuration:
[source, yaml]
------------------------------------------------------------
shield:
authc:
realms:
pki1:
type: pki
------------------------------------------------------------
In the above configuration, any certificate trusted by the SSL/TLS layer will be accepted for authentication. The username
will be the common name (CN) extracted from the DN of the certificate. If the desired username is something other than the
CN, a regular expression can be provided to extract the value to use as the username. The following example
extracts the email address from the DN:
[source, yaml]
------------------------------------------------------------
shield:
authc:
realms:
pki1:
type: pki
username_pattern: "EMAILADDRESS=(.*?)(?:,|$)"
------------------------------------------------------------
The PKI realm also provides configuration options to specify a specific truststore for authentication, which is useful
when the SSL/TLS layer trusts clients with certificates that are signed by a different CA than the one that signs the
certificates for client authentication. The following example shows such a configuration:
[source, yaml]
------------------------------------------------------------
shield:
authc:
realms:
pki1:
type: pki
truststore:
path: "/path/to/pki_truststore.jks"
password: "changeme"
------------------------------------------------------------
[[pki-settings]]
.PKI Realm Settings
|=======================
| Setting | Required | Description
| `type` | yes | Indicates the realm type and must be set to `pki`
| `order` | no | Indicates the priority of this realm within the realm chain. Realms with lower order will be consulted first. Although not required, it is highly recommended to explicitly set this value when multiple realms are configured. Defaults to `Integer.MAX_VALUE`.
| `enabled` | no | Indicates whether this realm is enabled/disabled. Provides an easy way to disable realms in the chain without removing their configuration. Defaults to `true`.
| `username_pattern` | no | The regular expression pattern used to extract the username from the certificate DN. The first match group is used as the username. Default is `CN=(.*?)(?:,\|$)`
| `truststore.path` | no | The path of a truststore to use. The default truststore is the one defined by <<ref-ssl-tls-settings,SSL/TLS settings>>
| `truststore.password` | no | The password to the truststore. Must be provided if `truststore.path` is set.
| `truststore.algorithm` | no | Algorithm for the truststore. Default is `SunX509`.
| `files.role_mapping` | no | Specifies the path and file name for the <<pki-role-mapping, YAML role mapping configuration file>>. The default file name is `role_mapping.yml` in the <<shield-config,Shield config directory>>.
|=======================
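As a final sketch combining several of these settings (the file path below is a placeholder), the realm shown here extracts the email address from the certificate DN for the username, reads its role mappings from a custom file, and is placed first in the realm chain:
[source, yaml]
------------------------------------------------------------
shield:
  authc:
    realms:
      pki1:
        type: pki
        order: 0                                          # consult this realm first
        username_pattern: "EMAILADDRESS=(.*?)(?:,|$)"     # use the email address from the DN
        files:
          role_mapping: "/path/to/pki_role_mapping.yml"   # custom role mapping file
------------------------------------------------------------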
[[pki-role-mapping]]
==== Mapping Users and Groups to Roles
By default, the file that maps users to roles is `config/shield/role_mapping.yml`. You can configure
the path and name of the mapping file by setting the appropriate value for the `files.role_mapping` configuration
parameter for a specific realm.
The `role_mapping.yml` file uses the YAML format. Within a mapping file, Elasticsearch roles are keys and distinguished
names (DNs) are values. The mapping can have a many-to-many relationship.
.Example Role Mapping File
[source, yaml]
------------------------------------------------------------
# Example group mapping configuration:
# roleA: <1>
# - user1-DN <2>
monitoring:
- "cn=Admin,ou=example,o=com"
user:
- "cn=John Doe,ou=example,o=com"
------------------------------------------------------------
<1> The name of the Elasticsearch role found in the <<roles-file, roles file>>
<2> Example specifying the distinguished name of a PKI user
NOTE: For the PKI realm, only the DN of a user can be mapped, as there is no concept of a group in PKI.
After setting up role mappings, copy this file to each node. Tools like Puppet or Chef can help with this.