Merge branch 'migrate_shield'

Original commit: elastic/x-pack-elasticsearch@01cfbf98de
uboness 2015-07-13 12:32:03 +02:00
commit 3f31855ded
333 changed files with 42148 additions and 0 deletions

shield/LICENSE.txt

SHIELD SOFTWARE LICENSE AGREEMENT
READ THIS AGREEMENT CAREFULLY, WHICH CONSTITUTES A LEGALLY BINDING AGREEMENT AND GOVERNS YOUR USE OF ELASTICSEARCH'S
SHIELD SOFTWARE. BY INSTALLING AND/OR USING THE SHIELD SOFTWARE, YOU ARE INDICATING THAT YOU AGREE TO THE TERMS AND
CONDITIONS SET FORTH IN THIS AGREEMENT. IF YOU DO NOT AGREE WITH SUCH TERMS AND CONDITIONS, YOU MAY NOT INSTALL OR USE
THE SHIELD SOFTWARE.
This SHIELD SOFTWARE LICENSE AGREEMENT (this "Agreement") is entered into by and between the applicable Elasticsearch
entity referred to in Attachment 1 below ("Elasticsearch") and the person or entity ("You") that has downloaded
Elasticsearch's Shield software to which this Agreement is attached ("Shield Software"). This Agreement is effective as
of the date an applicable ordering document ("Order Form") is entered into by Elasticsearch and You (the "Effective
Date").
1. SOFTWARE LICENSE AND RESTRICTIONS
1.1 License Grants.
(a) 30 Day Free Trial License. Subject to the terms and conditions of this Agreement, Elasticsearch agrees to grant,
and does hereby grant to You for a period of thirty (30) days from the Effective Date (the "Trial Term"), solely for
Your internal business operations, a limited, non-exclusive, non-transferable, fully paid up, right and license
(without the right to grant or authorize sublicenses) to: (i) install and use the object code version of the Shield
Software; (ii) use, and distribute internally a reasonable number of copies of the documentation, if any, provided with
the Shield Software ("Documentation"), provided that You must include on such copies all Elasticsearch trademarks, trade
names, logos and notices present on the Documentation as originally provided to You by Elasticsearch; (iii) permit third
party contractors performing services on Your behalf to use the Shield Software and Documentation as set forth in (i)
and (ii) above, provided that such use must be solely for Your benefit, and You shall be responsible for all acts and
omissions of such contractors in connection with their use of the Shield Software. For the avoidance of doubt, You
understand and agree that upon the expiration of the Trial Term, Your license to use the Shield Software will terminate,
unless you purchase a Qualifying Subscription (as defined below) for Elasticsearch support services.
(b) Fee-Bearing Production License. Subject to the terms and conditions of this Agreement and complete payment of any
and all applicable fees for a Gold or Platinum production subscription for support services for Elasticsearch open
source software (in each case, a "Qualifying Subscription"), Elasticsearch agrees to grant, and does hereby grant to You
during the term of the applicable Qualifying Subscription, and for the restricted scope of this Agreement, solely for
Your internal business operations, a limited, non-exclusive, non-transferable right and license (without the right to
grant or authorize sublicenses) to: (i) install and use the object code version of the Shield Software, subject to any
applicable quantitative limitations set forth in the applicable Order Form; (ii) use, and distribute internally a
reasonable number of copies of the Documentation, if any, provided with the Shield Software, provided that You must
include on such copies all Elasticsearch trademarks, trade names, logos and notices present on the Documentation as
originally provided to You by Elasticsearch; (iii) permit third party contractors performing services on Your behalf to
use the Shield Software and Documentation as set forth in (i) and (ii) above, provided that such use must be solely for
Your benefit, and You shall be responsible for all acts and omissions of such contractors in connection with their use
of the Shield Software.
1.2 Reservation of Rights; Restrictions. As between Elasticsearch and You, Elasticsearch owns all right, title and
interest in and to the Shield Software and any derivative works thereof, and except as expressly set forth in Section
1.1 above, no other license to the Shield Software is granted to You by implication, estoppel or otherwise. You agree
not to: (i) prepare derivative works from, modify, copy or use the Shield Software in any manner except as expressly
permitted in this Agreement or applicable law; (ii) transfer, sell, rent, lease, distribute, sublicense, loan or
otherwise transfer the Shield Software in whole or in part to any third party; (iii) use the Shield Software for
providing time-sharing services, any software-as-a-service offering ("SaaS"), service bureau services or as part of an
application services provider or other service offering; (iv) alter or remove any proprietary notices in the Shield
Software; or (v) make available to any third party any analysis of the results of operation of the Shield Software,
including benchmarking results, without the prior written consent of Elasticsearch.
1.3 Open Source. The Shield Software may contain or be provided with open source libraries, components, utilities and
other open source software (collectively, "Open Source"), which Open Source may have applicable license terms as
identified on a website designated by Elasticsearch or otherwise provided with the applicable Software or Documentation.
Notwithstanding anything to the contrary herein, use of the Open Source shall be subject to the applicable Open Source
license terms and conditions to the extent required by the applicable licensor (which terms shall not restrict the
license rights granted to You hereunder but may contain additional rights).
1.4 Audit Rights. You agree that Elasticsearch shall have the right, upon five (5) business days' notice to You, to
audit Your use of the Shield Software for compliance with any quantitative limitations on Your use of the Shield
Software that are set forth in the applicable Order Form. You agree to provide Elasticsearch with the necessary access
to the Shield Software to conduct such an audit either (i) remotely, or (ii) if remote performance is not possible, at
Your facilities, during normal business hours and no more than one (1) time in any twelve (12) month period. In the
event any such audit reveals that You have used the Shield Software in excess of the applicable quantitative
limitations, You agree to promptly pay to Elasticsearch an amount equal to the difference between the fees actually paid
and the fees that You should have paid to remain in compliance with such quantitative limitations. This Section 1.4
shall survive for a period of two (2) years from the termination or expiration of this Agreement.
2. TERM AND TERMINATION
2.1 Term. This Agreement shall commence on the Effective Date, and shall continue in force for the license term set
forth in the applicable Order Form, unless earlier terminated under Section 2.2 below, provided, however, that if You do
not purchase a Qualifying Subscription prior to the expiration of the Trial Term, this Agreement will expire at the end
of the Trial Term.
2.2 Termination. Either party may, upon written notice to the other party, terminate this Agreement for material
breach by the other party automatically and without any other formality, if such party has failed to cure such material
breach within thirty (30) days of receiving written notice of such material breach from the non-breaching party.
Notwithstanding the foregoing, this Agreement shall automatically terminate in the event that You intentionally breach
the scope of the license granted in Section 1.1 of this Agreement.
2.3 Post Termination or Expiration. Upon termination or expiration of this Agreement, for any reason, You shall
promptly cease the use of the Shield Software and Documentation and destroy (and certify to Elasticsearch in writing the
fact of such destruction), or return to Elasticsearch, all copies of the Shield Software and Documentation then in Your
possession or under Your control.
2.4 Survival. Sections 2.3, 2.4, 3, 4 and 5 shall survive any termination or expiration of this Agreement.
3. DISCLAIMER OF WARRANTIES
TO THE MAXIMUM EXTENT PERMITTED UNDER APPLICABLE LAW, THE SHIELD SOFTWARE IS PROVIDED "AS IS" WITHOUT WARRANTY OF ANY
KIND, AND ELASTICSEARCH AND ITS LICENSORS MAKE NO WARRANTIES WHETHER EXPRESSED, IMPLIED OR STATUTORY REGARDING OR
RELATING TO THE SHIELD SOFTWARE OR DOCUMENTATION. TO THE MAXIMUM EXTENT PERMITTED UNDER APPLICABLE LAW, ELASTICSEARCH
AND ITS LICENSORS SPECIFICALLY DISCLAIM ALL IMPLIED WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
NON-INFRINGEMENT WITH RESPECT TO THE SHIELD SOFTWARE AND DOCUMENTATION, AND WITH RESPECT TO THE USE OF THE FOREGOING.
FURTHER, ELASTICSEARCH DOES NOT WARRANT RESULTS OF USE OR THAT THE SHIELD SOFTWARE WILL BE ERROR FREE OR THAT THE USE OF
THE SHIELD SOFTWARE WILL BE UNINTERRUPTED.
4. LIMITATION OF LIABILITY
4.1 Disclaimer of Certain Damages. IN NO EVENT SHALL YOU OR ELASTICSEARCH OR ITS LICENSORS BE LIABLE FOR ANY LOSS OF
PROFITS, LOSS OF USE, BUSINESS INTERRUPTION, LOSS OF DATA, COST OF SUBSTITUTE GOODS OR SERVICES, OR FOR ANY INDIRECT,
SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES OF ANY KIND IN CONNECTION WITH OR ARISING OUT OF THE USE OR INABILITY TO
USE THE SHIELD SOFTWARE, OR THE PERFORMANCE OF OR FAILURE TO PERFORM THIS AGREEMENT, WHETHER ALLEGED AS A BREACH OF
CONTRACT OR TORTIOUS CONDUCT, INCLUDING NEGLIGENCE, EVEN IF THE RESPONSIBLE PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
SUCH DAMAGES. THE LIMITATIONS OF LIABILITY SET FORTH IN THIS SECTION 4.1 SHALL NOT APPLY TO A BREACH THROUGH GROSS
NEGLIGENCE OR INTENTIONAL MISCONDUCT BY YOU OF THE SCOPE OF THE LICENSE GRANTED IN SECTION 1.1 OR TO ANY OTHER LIABILITY
THAT CANNOT BE EXCLUDED OR LIMITED UNDER APPLICABLE LAW.
4.2 Damages Cap. IN NO EVENT SHALL ELASTICSEARCH'S OR ITS LICENSORS' AGGREGATE, CUMULATIVE LIABILITY UNDER THIS
AGREEMENT EXCEED THE AMOUNT YOU PAID, IN THE TWELVE (12) MONTHS IMMEDIATELY PRIOR TO THE EVENT GIVING RISE TO LIABILITY,
UNDER THE ELASTICSEARCH SUPPORT SERVICES AGREEMENT PURSUANT TO WHICH YOU PURCHASED THE QUALIFYING SUBSCRIPTION, PROVIDED
THAT IF YOU ARE USING THE SHIELD SOFTWARE UNDER A TRIAL LICENSE PURSUANT TO SECTION 1.1(a), IN NO EVENT SHALL
ELASTICSEARCH'S AGGREGATE, CUMULATIVE LIABILITY UNDER THIS AGREEMENT EXCEED ONE THOUSAND DOLLARS ($1,000).
4.3 YOU AGREE THAT THE FOREGOING LIMITATIONS, EXCLUSIONS AND DISCLAIMERS ARE A REASONABLE ALLOCATION OF THE RISK
BETWEEN THE PARTIES AND WILL APPLY TO THE MAXIMUM EXTENT PERMITTED BY APPLICABLE LAW, EVEN IF ANY REMEDY FAILS IN ITS
ESSENTIAL PURPOSE.
5. MISCELLANEOUS
This Agreement, including Attachment 1 hereto, which is hereby incorporated herein by this reference, completely and
exclusively states the entire agreement of the parties regarding the subject matter herein, and it supersedes, and its
terms govern, all prior proposals, agreements, or other communications between the parties, oral or written, regarding
such subject matter. For the avoidance of doubt, the parties hereby expressly acknowledge and agree that if You issue
any purchase order or similar document in connection with its purchase of a license to the Shield Software, You will do
so only for Your internal, administrative purposes and not with the intent to provide any contractual terms. This
Agreement may not be modified except by a subsequently dated, written amendment that expressly amends this Agreement and
which is signed on behalf of Elasticsearch and You, by duly authorized representatives. If any provision(s) hereof is
held unenforceable, this Agreement will continue without said provision and be interpreted to reflect the original
intent of the parties.
ATTACHMENT 1
ADDITIONAL TERMS AND CONDITIONS
A. The following additional terms and conditions apply to all Customers with principal offices in the United States of
America:
(1) Applicable Elasticsearch Entity. The entity providing the license is Elasticsearch, Inc., a Delaware corporation.
(2) Government Rights. The Shield Software product is "Commercial Computer Software," as that term is defined in 48
C.F.R. 2.101, and as the term is used in 48 C.F.R. Part 12, and is a Commercial Item comprised of "commercial computer
software" and "commercial computer software documentation". If acquired by or on behalf of a civilian agency, the U.S.
Government acquires this commercial computer software and/or commercial computer software documentation subject to the
terms of this Agreement, as specified in 48 C.F.R. 12.212 (Computer Software) and 12.211 (Technical Data) of the Federal
Acquisition Regulation ("FAR") and its successors. If acquired by or on behalf of any agency within the Department of
Defense ("DOD"), the U.S. Government acquires this commercial computer software and/or commercial computer software
documentation subject to the terms of the Elasticsearch Software License Agreement as specified in 48 C.F.R. 227.7202-3
and 48 C.F.R. 227.7202-4 of the DOD FAR Supplement ("DFARS") and its successors, and consistent with 48 C.F.R. 227.7202.
This U.S. Government Rights clause, consistent with 48 C.F.R. 12.212 and 48 C.F.R. 227.7202, is in lieu of, and
supersedes, any other FAR, DFARS, or other clause or provision that addresses Government rights in computer software,
computer software documentation or technical data related to the Shield Software under this Agreement and in any
subcontract under which this commercial computer software and commercial computer software documentation is acquired or
licensed.
(3) Export Control. You acknowledge that the goods, software and technology acquired from Elasticsearch are subject to
U.S. export control laws and regulations, including but not limited to the International Traffic In Arms Regulations
("ITAR") (22 C.F.R. Parts 120-130 (2010)); the Export Administration Regulations ("EAR") (15 C.F.R. Parts 730-774
(2010)); the U.S. antiboycott regulations in the EAR and U.S. Department of the Treasury regulations; the economic
sanctions regulations and guidelines of the U.S. Department of the Treasury, Office of Foreign Assets Control, and the
USA Patriot Act (Title III of Pub. L. 107-56, signed into law October 26, 2001), as amended. You are now and will remain
in the future compliant with all such export control laws and regulations, and will not export, re-export, otherwise
transfer any Elasticsearch goods, software or technology or disclose any Elasticsearch software or technology to any
person contrary to such laws or regulations. You acknowledge that remote access to the Shield Software may in certain
circumstances be considered a re-export of Shield Software, and accordingly, may not be granted in contravention of
U.S. export control laws and regulations.
(4) Governing Law. This Agreement will be governed by the laws of the State of California, without regard to its
conflict of laws principles. This Agreement shall not be governed by the 1980 UN Convention on Contracts for the
International Sale of Goods. All suits hereunder will be brought solely in Federal Court for the Northern District of
California, or if that court lacks subject matter jurisdiction, in any California State Court located in Santa Clara
County. The parties hereby irrevocably waive any and all claims and defenses either might otherwise have in any such
action or proceeding in any of such courts based upon any alleged lack of personal jurisdiction, improper venue, forum
non conveniens or any similar claim or defense.
B. The following additional terms and conditions apply to all Customers with principal offices in Canada:
(1) Applicable Elasticsearch Entity. The entity providing the license is Elasticsearch B.C. Ltd., a corporation
incorporated under the laws of the Province of British Columbia.
(2) Export Control. You acknowledge that the goods, software and technology acquired from Elasticsearch are subject to
the restrictions and controls set out in Section A(3) above as well as those imposed by the Export and Import Permits
Act (Canada) and the regulations thereunder and that you will comply with all applicable laws and regulations. Without
limitation, You acknowledge that the Shield Software, or any portion thereof, will not be exported: (a) to any country
on Canada's Area Control List; (b) to any country subject to UN Security Council embargo or action; or (c) contrary to
Canada's Export Control List Item 5505. You are now and will remain in the future compliant with all such export control
laws and regulations, and will not export, re-export, otherwise transfer any Elasticsearch goods, software or technology
or disclose any Elasticsearch software or technology to any person contrary to such laws or regulations. You will not
export or re-export the Shield Software, or any portion thereof, directly or indirectly, in violation of the Canadian
export administration laws and regulations to any country or end user, or to any end user who you know or have reason to
know will utilize them in the design, development or production of nuclear, chemical or biological weapons. You further
acknowledge that the Shield Software product may include technical data subject to such Canadian export regulations.
Elasticsearch does not represent that the Shield Software is appropriate or available for use in all countries.
Elasticsearch prohibits accessing materials from countries or states where contents are illegal. You are using the
Shield Software on your own initiative and you are responsible for compliance with all applicable laws. You hereby agree
to indemnify Elasticsearch and its affiliates from any claims, actions, liability or expenses (including reasonable
lawyers' fees) resulting from Your failure to act in accordance with the acknowledgements, agreements, and
representations in this Section B(2).
(3) Governing Law and Dispute Resolution. This Agreement shall be governed by the laws of the Province of Ontario and
the federal laws of Canada applicable therein without regard to conflict of laws provisions. The parties hereby irrevocably waive
any and all claims and defenses either might otherwise have in any such action or proceeding in any of such courts based
upon any alleged lack of personal jurisdiction, improper venue, forum non conveniens or any similar claim or defense.
Any dispute, claim or controversy arising out of or relating to this Agreement or the existence, breach, termination,
enforcement, interpretation or validity thereof, including the determination of the scope or applicability of this
agreement to arbitrate, (each, a "Dispute"), which the parties are unable to resolve after good faith negotiations,
shall be submitted first to the upper management level of the parties. The parties, through their upper management level
representatives shall meet within thirty (30) days of the Dispute being referred to them and if the parties are unable
to resolve such Dispute within thirty (30) days of meeting, the parties agree to seek to resolve the Dispute through
mediation with ADR Chambers in the City of Toronto, Ontario, Canada before pursuing any other proceedings. The costs of
the mediator shall be shared equally by the parties. If the Dispute has not been resolved within thirty (30) days of the
notice to desire to mediate, any party may terminate the mediation and proceed to arbitration and the matter shall be
referred to and finally resolved by arbitration at ADR Chambers pursuant to the general ADR Chambers Rules for
Arbitration in the City of Toronto, Ontario, Canada. The arbitration shall proceed in accordance with the provisions of
the Arbitration Act (Ontario). The arbitral panel shall consist of three (3) arbitrators, selected as follows: each
party shall appoint one (1) arbitrator; and those two (2) arbitrators shall discuss and select a chairman. If the two
(2) party-appointed arbitrators are unable to agree on the chairman, the chairman shall be selected in accordance with
the applicable rules of the arbitration body. Each arbitrator shall be independent of each of the parties. The
arbitrators shall have the authority to grant specific performance and to allocate between the parties the costs of
arbitration (including service fees, arbitrator fees and all other fees related to the arbitration) in such equitable
manner as the arbitrators may determine. The prevailing party in any arbitration shall be entitled to receive
reimbursement of its reasonable expenses incurred in connection therewith. Judgment upon the award so rendered may be
entered in a court having jurisdiction or application may be made to such court for judicial acceptance of any award and
an order of enforcement, as the case may be. Notwithstanding the foregoing, Elasticsearch shall have the right to
institute an action in a court of proper jurisdiction for preliminary injunctive relief pending a final decision by the
arbitrator, provided that a permanent injunction and damages shall only be awarded by the arbitrator. The language to
be used in the arbitral proceedings shall be English.
(4) Language. Any translation of this Agreement is done for local requirements and in the event of a dispute between
the English and any non-English version, the English version of this Agreement shall govern. At the request of the
parties, the official language of this Agreement and all communications and documents relating hereto is the English
language, and the English-language version shall govern all interpretation of the Agreement. À la demande des parties,
la langue officielle de la présente convention ainsi que toutes communications et tous documents s'y rapportant est la
langue anglaise, et la version anglaise est celle qui régit toute interprétation de la présente convention.
(5) Disclaimer of Warranties. For Customers with principal offices in the Province of Québec, the following new
sentence is to be added to the end of Section 3: "SOME JURISDICTIONS DO NOT ALLOW LIMITATIONS OR EXCLUSIONS OF CERTAIN
TYPES OF DAMAGES AND/OR WARRANTIES AND CONDITIONS. THE LIMITATIONS, EXCLUSIONS AND DISCLAIMERS SET FORTH IN THIS
AGREEMENT SHALL NOT APPLY IF AND ONLY IF AND TO THE EXTENT THAT THE LAWS OF A COMPETENT JURISDICTION REQUIRE
LIABILITIES BEYOND AND DESPITE THESE LIMITATIONS, EXCLUSIONS AND DISCLAIMERS."
(6) Limitation of Liability. For Customers with principal offices in the Province of Québec, the following new
sentence is to be added to the end of Section 4.1: "SOME JURISDICTIONS DO NOT ALLOW LIMITATIONS OR EXCLUSIONS OF
CERTAIN TYPES OF DAMAGES AND/OR WARRANTIES AND CONDITIONS. THE LIMITATIONS, EXCLUSIONS AND DISCLAIMERS SET FORTH IN
THIS AGREEMENT SHALL NOT APPLY IF AND ONLY IF AND TO THE EXTENT THAT THE LAWS OF A COMPETENT JURISDICTION REQUIRE
LIABILITIES BEYOND AND DESPITE THESE LIMITATIONS, EXCLUSIONS AND DISCLAIMERS."
C. The following additional terms and conditions apply to all Customers with principal offices outside of the United
States of America and Canada:
(1) Applicable Elasticsearch Entity. The entity providing the license in Germany is Elasticsearch GmbH; in France is
Elasticsearch SARL; in the United Kingdom is Elasticsearch Ltd; in Australia is Elasticsearch Pty Ltd.; in Japan is
Elasticsearch KK; and in all other countries is Elasticsearch BV.
(2) Choice of Law. This Agreement shall be governed by and construed in accordance with the laws of the State of New
York, without reference to or application of choice of law rules or principles. Notwithstanding any choice of law
provision or otherwise, the Uniform Computer Information Transactions Act (UCITA) and the United Nations Convention on
the International Sale of Goods shall not apply.
(3) Arbitration. Any dispute, claim or controversy arising out of or relating to this Agreement or the existence,
breach, termination, enforcement, interpretation or validity thereof, including the determination of the scope or
applicability of this agreement to arbitrate (each, a "Dispute"), shall be referred to and finally resolved by
arbitration under the rules and at the location identified below. The arbitral panel shall consist of three (3)
arbitrators, selected as follows: each party shall appoint one (1) arbitrator; and those two (2) arbitrators shall
discuss and select a chairman. If the two party-appointed arbitrators are unable to agree on the chairman, the chairman
shall be selected in accordance with the applicable rules of the arbitration body. Each arbitrator shall be independent
of each of the parties. The arbitrators shall have the authority to grant specific performance and to allocate between
the parties the costs of arbitration (including service fees, arbitrator fees and all other fees related to the
arbitration) in such equitable manner as the arbitrators may determine. The prevailing party in any arbitration shall
be entitled to receive reimbursement of its reasonable expenses incurred in connection therewith. Judgment upon the
award so rendered may be entered in a court having jurisdiction or application may be made to such court for judicial
acceptance of any award and an order of enforcement, as the case may be. Notwithstanding the foregoing, Elasticsearch
shall have the right to institute an action in a court of proper jurisdiction for preliminary injunctive relief pending
a final decision by the arbitrator, provided that a permanent injunction and damages shall only be awarded by the
arbitrator. The language to be used in the arbitral proceedings shall be English.
(a) In addition, the following terms only apply to Customers with principal offices within Europe, the Middle East or
Africa ("EMEA"):
Arbitration Rules and Location. Any Dispute shall be referred to and finally resolved by arbitration under the London
Court of International Arbitration ("LCIA") Rules (which Rules are deemed to be incorporated by reference into this
clause) on the basis that the governing law is the law of the State of New York, USA. The seat, or legal place, of
arbitration shall be London, England.
(b) In addition, the following terms only apply to Customers with principal offices within Asia Pacific, Australia &
New Zealand:
Arbitration Rules and Location. Any Dispute shall be referred to and finally resolved by arbitration under the Rules of
Conciliation and Arbitration of the International Chamber of Commerce ("ICC") in force on the date when the notice of
arbitration is submitted in accordance with such Rules (which Rules are deemed to be incorporated by reference into this
clause) on the basis that the governing law is the law of the State of New York, USA. The seat, or legal place, of
arbitration shall be Singapore.
(c) In addition, the following terms only apply to Customers with principal offices within the Americas (excluding
North America):
Arbitration Rules and Location. Any Dispute shall be referred to and finally resolved by arbitration under
International Dispute Resolution Procedures of the American Arbitration Association ("AAA") in force on the date when
the notice of arbitration is submitted in accordance with such Procedures (which Procedures are deemed to be
incorporated by reference into this clause) on the basis that the governing law is the law of the State of New York,
USA. The seat, or legal place, of arbitration shall be New York, New York, USA.
(4) In addition, for Customers with principal offices within the UK, the following new sentence is added to the end of
Section 4.1:
Nothing in this Agreement shall have effect so as to limit or exclude a party's liability for death or personal injury
caused by negligence or for fraud including fraudulent misrepresentation and this Section 4.1 shall take effect subject
to this provision.
(5) In addition, for Customers with principal offices within France, Sections 1.2, 3 and 4.1 of the Agreement are
deleted and replaced with the following new Sections 1.2, 3 and 4.1:
1.2 Reservation of Rights; Restrictions. Elasticsearch owns all right, title and interest in and to the Shield Software
and any derivative works thereof, and except as expressly set forth in Section 1.1 above, no other license to the Shield
Software is granted to You by implication, or otherwise. You agree not to prepare derivative works from, modify, copy or
use the Shield Software in any manner except as expressly permitted in this Agreement; provided that You may copy the
Shield Software for archival purposes, only where such software is provided on a non-durable medium; and You may
decompile the Shield Software, where necessary for interoperability purposes and where necessary for the correction of
errors making the software unfit for its intended purpose, if such right is not reserved by Elasticsearch as editor of
the Shield Software. Pursuant to article L122-6-1 of the French intellectual property code, Elasticsearch reserves the
right to correct any bugs as necessary for the Shield Software to serve its intended purpose. You agree not to: (i)
transfer, sell, rent, lease, distribute, sublicense, loan or otherwise transfer the Shield Software in whole or in part
to any third party; (ii) use the Shield Software for providing time-sharing services, any software-as-a-service
offering ("SaaS"), service bureau services or as part of an application services provider or other service offering;
(iii) alter or remove any proprietary notices in the Shield Software; or (iv) make available to any third party any
analysis of the results of operation of the Shield Software, including benchmarking results, without the prior written
consent of Elasticsearch.
3. DISCLAIMER OF WARRANTIES
TO THE MAXIMUM EXTENT PERMITTED UNDER APPLICABLE LAW, THE SHIELD SOFTWARE IS PROVIDED "AS IS" WITHOUT WARRANTY OF ANY
KIND, AND ELASTICSEARCH AND ITS LICENSORS MAKE NO WARRANTIES WHETHER EXPRESSED, IMPLIED OR STATUTORY REGARDING OR
RELATING TO THE SHIELD SOFTWARE OR DOCUMENTATION. TO THE MAXIMUM EXTENT PERMITTED UNDER APPLICABLE LAW, ELASTICSEARCH
AND ITS LICENSORS SPECIFICALLY DISCLAIM ALL IMPLIED WARRANTIES OF FITNESS FOR A PARTICULAR PURPOSE WITH RESPECT TO THE
SHIELD SOFTWARE AND DOCUMENTATION, AND WITH RESPECT TO THE USE OF THE FOREGOING. FURTHER, ELASTICSEARCH DOES NOT
WARRANT RESULTS OF USE OR THAT THE SHIELD SOFTWARE WILL BE ERROR FREE OR THAT THE USE OF THE SHIELD SOFTWARE WILL BE
UNINTERRUPTED.
4.1 Disclaimer of Certain Damages. IN NO EVENT SHALL YOU OR ELASTICSEARCH OR ITS LICENSORS BE LIABLE FOR ANY LOSS OF
PROFITS, LOSS OF USE, BUSINESS INTERRUPTION, LOSS OF DATA, COST OF SUBSTITUTE GOODS OR SERVICES, OR FOR ANY INDIRECT OR
UNFORESEEABLE DAMAGES OF ANY KIND IN CONNECTION WITH OR ARISING OUT OF THE USE OR INABILITY TO USE THE SHIELD SOFTWARE,
OR THE PERFORMANCE OF OR FAILURE TO PERFORM THIS AGREEMENT, WHETHER ALLEGED AS A BREACH OF CONTRACT OR TORTIOUS CONDUCT,
INCLUDING NEGLIGENCE. THE LIMITATIONS OF LIABILITY SET FORTH IN THIS SECTION 4.1 SHALL NOT APPLY TO A BREACH, THROUGH
GROSS NEGLIGENCE OR INTENTIONAL MISCONDUCT BY YOU, OF THE SCOPE OF THE LICENSE GRANTED IN SECTION 1.1, OR IN CASE OF
DEATH OR PERSONAL INJURY.
(6) In addition, for Customers with principal offices within Australia, Sections 4.1, 4.2 and 4.3 of the Agreement are
deleted and replaced with the following new Sections 4.1, 4.2 and 4.3:
4.1 Disclaimer of Certain Damages. Subject to clause 4.3, a party is not liable for Consequential Loss however caused
(including by the negligence of that party) suffered or incurred by the other party in connection with this agreement.
"Consequential Loss" means loss of revenues, loss of reputation, indirect loss, loss of profits, consequential loss,
loss of actual or anticipated savings, lost opportunities, including opportunities to enter into
arrangements with third parties, loss or damage in connection with claims by third parties, or loss or
corruption of data.
4.2 Damages Cap. SUBJECT TO CLAUSES 4.1 AND 4.3, ANY LIABILITY OF ELASTICSEARCH FOR ANY LOSS OR DAMAGE, HOWEVER CAUSED
(INCLUDING BY THE NEGLIGENCE OF ELASTICSEARCH), SUFFERED BY YOU IN CONNECTION WITH THIS AGREEMENT IS LIMITED TO THE
AMOUNT YOU PAID, IN THE TWELVE (12) MONTHS IMMEDIATELY PRIOR TO THE EVENT GIVING RISE TO LIABILITY, UNDER THE
ELASTICSEARCH SUPPORT SERVICES AGREEMENT IN CONNECTION WITH WHICH YOU OBTAINED THE LICENSE TO USE THE SHIELD SOFTWARE.
THE LIMITATION SET OUT IN THIS SECTION 4.2 IS AN AGGREGATE LIMIT FOR ALL CLAIMS, WHENEVER MADE.
4.3 Limitation and Disclaimer Exceptions. If the Competition and Consumer Act 2010 (Cth) or any other legislation
states that there is a guarantee in relation to any good or service supplied by Elasticsearch in
connection with this agreement, and Elasticsearch's liability for failing to comply with that guarantee cannot be
excluded but may be limited, Sections 4.1 and 4.2 do not apply to that liability and instead Elasticsearch's liability
for such failure is limited (at Elasticsearch's election) to, in the case of a supply of goods, Elasticsearch
replacing the goods or supplying equivalent goods or repairing the goods, or in the case of a supply of services,
Elasticsearch supplying the services again or paying the cost of having the services supplied again.
(7) In addition, for Customers with principal offices within Japan, Sections 1.2, 3 and 4.1 of the Agreement are
deleted and replaced with the following new Sections 1.2, 3 and 4.1:
1.2 Reservation of Rights; Restrictions. As between Elasticsearch and You, Elasticsearch owns all right title and
interest in and to the Shield Software and any derivative works thereof, and except as expressly set forth in Section
1.1 above, no other license to the Shield Software is granted to You by implication or otherwise. You agree not to: (i)
prepare derivative works from, modify, copy or use the Shield Software in any manner except as expressly permitted in
this Agreement or applicable law; (ii) transfer, sell, rent, lease, distribute, sublicense, loan or otherwise transfer
the Shield Software in whole or in part to any third party; (iii) use the Shield Software for providing time-sharing
services, any software-as-a-service offering ("SaaS"), service bureau services or as part of an application services
provider or other service offering; (iv) alter or remove any proprietary notices in the Shield Software; or (v) make
available to any third party any analysis of the results of operation of the Shield Software, including benchmarking
results, without the prior written consent of Elasticsearch.
3. DISCLAIMER OF WARRANTIES
TO THE MAXIMUM EXTENT PERMITTED UNDER APPLICABLE LAW, THE SHIELD SOFTWARE IS PROVIDED "AS
IS" WITHOUT WARRANTY OF ANY KIND, AND ELASTICSEARCH AND ITS LICENSORS MAKE NO WARRANTIES WHETHER EXPRESSED, IMPLIED OR
STATUTORY REGARDING OR RELATING TO THE SHIELD SOFTWARE OR DOCUMENTATION. TO THE MAXIMUM EXTENT PERMITTED UNDER
APPLICABLE LAW, ELASTICSEARCH AND ITS LICENSORS SPECIFICALLY DISCLAIM ALL IMPLIED WARRANTIES OF MERCHANTABILITY, FITNESS
FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT WITH RESPECT TO THE SHIELD SOFTWARE AND DOCUMENTATION, AND WITH RESPECT TO
THE USE OF THE FOREGOING. FURTHER, ELASTICSEARCH DOES NOT WARRANT RESULTS OF USE OR THAT THE SHIELD SOFTWARE WILL BE
ERROR FREE OR THAT THE USE OF THE SHIELD SOFTWARE WILL BE UNINTERRUPTED.
4.1 Disclaimer of Certain Damages. IN NO EVENT SHALL YOU OR ELASTICSEARCH OR ITS LICENSORS BE LIABLE FOR ANY LOSS OF
PROFITS, LOSS OF USE, BUSINESS INTERRUPTION, LOSS OF DATA, COST OF SUBSTITUTE GOODS OR SERVICES, OR FOR ANY
INDIRECT, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES OF ANY KIND IN CONNECTION WITH OR ARISING OUT OF THE USE
OR INABILITY TO USE THE SHIELD SOFTWARE, OR THE PERFORMANCE OF OR FAILURE TO PERFORM THIS AGREEMENT, WHETHER ALLEGED AS
A BREACH OF CONTRACT OR TORTIOUS CONDUCT, INCLUDING NEGLIGENCE, EVEN IF THE RESPONSIBLE PARTY HAS BEEN ADVISED OF THE
POSSIBILITY OF SUCH DAMAGES. THE LIMITATIONS OF LIABILITY SET FORTH IN THIS SECTION 4.1 SHALL NOT APPLY TO A BREACH
THROUGH GROSS NEGLIGENCE OR INTENTIONAL MISCONDUCT BY YOU OF THE SCOPE OF THE LICENSE GRANTED IN SECTION 1.1 OR TO ANY
OTHER LIABILITY THAT CANNOT BE EXCLUDED OR LIMITED UNDER APPLICABLE LAW.

shield/NOTICE.txt Normal file

@ -0,0 +1,134 @@
Elasticsearch Shield
Copyright 2009-2015 Elasticsearch
---
This product includes software developed by The Apache Software
Foundation (http://www.apache.org/).
---
This product contains software developed by Anders Moeller. The
following is the copyright and notice text for this software:
Copyright (c) 2001-2011 Anders Moeller
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions
are met:
1. Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
3. The name of the author may not be used to endorse or promote products
derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
---
This product contains software developed by UNBOUNDID CORP
(https://www.unboundid.com/). The following is the copyright and notice
text for this software:
UnboundID LDAP SDK Free Use License
THIS IS AN AGREEMENT BETWEEN YOU ("YOU") AND UNBOUNDID CORP. ("UNBOUNDID")
REGARDING YOUR USE OF UNBOUNDID LDAP SDK FOR JAVA AND ANY ASSOCIATED
DOCUMENTATION, OBJECT CODE, COMPILED LIBRARIES, SOURCE CODE AND SOURCE FILES OR
OTHER MATERIALS MADE AVAILABLE BY UNBOUNDID (COLLECTIVELY REFERRED TO IN THIS
AGREEMENT AS THE "SDK").
BY INSTALLING, ACCESSING OR OTHERWISE USING THE SDK, YOU ACCEPT THE TERMS OF
THIS AGREEMENT. IF YOU DO NOT AGREE TO THE TERMS OF THIS AGREEMENT, DO NOT
INSTALL, ACCESS OR USE THE SDK.
USE OF THE SDK. Subject to your compliance with this Agreement, UnboundID
grants to You a non-exclusive, royalty-free license, under UnboundID's
intellectual property rights in the SDK, to use, reproduce, modify and
distribute this release of the SDK; provided that no license is granted herein
under any patents that may be infringed by your modifications, derivative works
or by other works in which the SDK may be incorporated (collectively, your
"Applications"). You may reproduce and redistribute the SDK with your
Applications provided that you (i) include this license file and an
unmodified copy of the unboundid-ldapsdk-se.jar file; and (ii) such
redistribution is subject to a license whose terms do not conflict with or
contradict the terms of this Agreement. You may also reproduce and redistribute
the SDK without your Applications provided that you redistribute the SDK
complete and unmodified (i.e., with all "read me" files, copyright notices, and
other legal notices and terms that UnboundID has included in the SDK).
SCOPE OF LICENSES. This Agreement does not grant You the right to use any
UnboundID intellectual property which is not included as part of the SDK. The
SDK is licensed, not sold. This Agreement only gives You some rights to use
the SDK. UnboundID reserves all other rights. Unless applicable law gives You
more rights despite this limitation, You may use the SDK only as expressly
permitted in this Agreement.
SUPPORT. UnboundID is not obligated to provide any technical or other support
("Support Services") for the SDK to You under this Agreement. However, if
UnboundID chooses to provide any Support Services to You, Your use of such
Support Services will be governed by then-current UnboundID support policies.
TERMINATION. UnboundID reserves the right to discontinue offering the SDK and
to modify the SDK at any time in its sole discretion. Notwithstanding anything
contained in this Agreement to the contrary, UnboundID may also, in its sole
discretion, terminate or suspend access to the SDK to You or any end user at
any time. In addition, if you fail to comply with the terms of this Agreement,
then any rights granted herein will be automatically terminated if such failure
is not corrected within 30 days of the initial notification of such failure.
You acknowledge that termination and/or monetary damages may not be a
sufficient remedy if You breach this Agreement and that UnboundID will be
entitled, without waiving any other rights or remedies, to injunctive or
equitable relief as may be deemed proper by a court of competent jurisdiction
in the event of a breach. UnboundID may also terminate this Agreement if the
SDK becomes, or in UnboundID's reasonable opinion is likely to become, the
subject of a claim of intellectual property infringement or trade secret
misappropriation. All rights and licenses granted herein will simultaneously
and automatically terminate upon termination of this Agreement for any reason.
DISCLAIMER OF WARRANTY. THE SDK IS PROVIDED "AS IS" AND UNBOUNDID DOES NOT
WARRANT THAT THE SDK WILL BE ERROR-FREE, VIRUS-FREE, WILL PERFORM IN AN
UNINTERRUPTED, SECURE OR TIMELY MANNER, OR WILL INTEROPERATE WITH OTHER
HARDWARE, SOFTWARE, SYSTEMS OR DATA. TO THE MAXIMUM EXTENT ALLOWED BY LAW, ALL
CONDITIONS, REPRESENTATIONS AND WARRANTIES, WHETHER EXPRESS, IMPLIED, STATUTORY
OR OTHERWISE INCLUDING, WITHOUT LIMITATION, ANY IMPLIED WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE (EVEN IF UNBOUNDID HAD BEEN
INFORMED OF SUCH PURPOSE), OR NON-INFRINGEMENT OF THIRD PARTY RIGHTS ARE HEREBY
DISCLAIMED.
LIMITATION OF LIABILITY. IN NO EVENT WILL UNBOUNDID OR ITS SUPPLIERS BE LIABLE
FOR ANY DAMAGES WHATSOEVER (INCLUDING, WITHOUT LIMITATION, LOST PROFITS,
REVENUE, DATA OR DATA USE, BUSINESS INTERRUPTION, COST OF COVER, DIRECT,
INDIRECT, SPECIAL, PUNITIVE, INCIDENTAL OR CONSEQUENTIAL DAMAGES OF ANY KIND)
ARISING OUT OF THE USE OF OR INABILITY TO USE THE SDK OR IN ANY WAY RELATED TO
THIS AGREEMENT, EVEN IF UNBOUNDID HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH
DAMAGES.
ADDITIONAL RIGHTS. Certain states do not allow the exclusion of implied
warranties or limitation of liability for certain kinds of damages, so the
exclusion of limited warranties and limitation of liability set forth above may
not apply to You.
EXPORT RESTRICTIONS. The SDK is subject to United States export control laws.
You acknowledge and agree that You are responsible for compliance with all
domestic and international export laws and regulations that apply to the SDK.
MISCELLANEOUS. This Agreement constitutes the entire agreement with respect to
the SDK. If any provision of this Agreement shall be held to be invalid,
illegal or unenforceable, the validity, legality and enforceability of the
remaining provisions shall in no way be affected or impaired thereby. This
Agreement and performance hereunder shall be governed by and construed in
accordance with the laws of the State of Texas without regard to its conflict
of laws rules. Any disputes related to this Agreement shall be exclusively
litigated in the state or federal courts located in Travis County, Texas.

shield/README.asciidoc Normal file

@ -0,0 +1,8 @@
= Elasticsearch Security Plugin
This plugin adds security features to Elasticsearch.
You can build the plugin with `mvn package`.
The documentation is in the `docs/` directory.

shield/TESTING.asciidoc Normal file

@ -0,0 +1,32 @@
[[Testing Framework Cheatsheet]]
= Testing
[partintro]
Elasticsearch and Shield use JUnit for testing. The tests also use
randomness, which can be controlled with a seed; refer to the core
Elasticsearch TESTING.asciidoc cheatsheet for the details.
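For instance, a failing randomized run can be reproduced by pinning the seed the framework reports; a minimal sketch (the seed value below is a placeholder, and the full flag reference lives in the core TESTING.asciidoc):

```shell
# Sketch: rebuild the Maven invocation that reproduces a randomized
# failure. The seed here is a placeholder, not a real failure seed.
SEED="D1E5B458A0A5BDB0"
CMD="mvn test -Dtests.seed=$SEED"
echo "$CMD"
```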
Tests are executed with network transport and unicast discovery, as this is
the configuration that's secured by shield.
== Testing the REST layer
The available integration tests are specific to Shield functionality
and make use of the Java API to communicate with the elasticsearch nodes,
using the internal binary transport (port 9300 by default).
Shield is also tested using the REST tests provided by Elasticsearch core,
just by running those same tests against a cluster with Shield installed.
The REST tests are not yet run automatically when executing the maven
test command (they are run with the regular suite when -Dtests.slow=true
is supplied). Some tests are blacklisted because they are known to fail
against Shield due to behavioural differences introduced by the security
plugin. To run only the REST tests:
---------------------------------------------------------------------------
mvn test -Dtests.filter="@Rest"
---------------------------------------------------------------------------
`ShieldRestTests` is the executable test class that runs all the
yaml suites available within the `rest-api-spec` folder.

shield/bin/shield/.in.bat Normal file

@ -0,0 +1,97 @@
@echo off
rem Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
rem or more contributor license agreements. Licensed under the Elastic License;
rem you may not use this file except in compliance with the Elastic License.
REM .in.bat <java main class> [args,..]
SETLOCAL
if NOT DEFINED JAVA_HOME goto err
set JAVA_CMD=%1
if "%JAVA_CMD%" == "" goto err_java_cmd
REM fix args
for /f "usebackq tokens=1*" %%i in (`echo %*`) DO @ set params=%%j
SHIFT
set SCRIPT_DIR=%~dp0
for %%I in ("%SCRIPT_DIR%..\..") do set ES_HOME=%%~dpfI
REM ***** JAVA options *****
if "%ES_MIN_MEM%" == "" (
set ES_MIN_MEM=256m
)
if "%ES_MAX_MEM%" == "" (
set ES_MAX_MEM=1g
)
if NOT "%ES_HEAP_SIZE%" == "" (
set ES_MIN_MEM=%ES_HEAP_SIZE%
set ES_MAX_MEM=%ES_HEAP_SIZE%
)
set JAVA_OPTS=%JAVA_OPTS% -Xms%ES_MIN_MEM% -Xmx%ES_MAX_MEM%
if NOT "%ES_HEAP_NEWSIZE%" == "" (
set JAVA_OPTS=%JAVA_OPTS% -Xmn%ES_HEAP_NEWSIZE%
)
if NOT "%ES_DIRECT_SIZE%" == "" (
set JAVA_OPTS=%JAVA_OPTS% -XX:MaxDirectMemorySize=%ES_DIRECT_SIZE%
)
set JAVA_OPTS=%JAVA_OPTS% -Xss256k
REM Enable aggressive optimizations in the JVM
REM - Disabled by default as it might cause the JVM to crash
REM set JAVA_OPTS=%JAVA_OPTS% -XX:+AggressiveOpts
set JAVA_OPTS=%JAVA_OPTS% -XX:+UseParNewGC
set JAVA_OPTS=%JAVA_OPTS% -XX:+UseConcMarkSweepGC
set JAVA_OPTS=%JAVA_OPTS% -XX:CMSInitiatingOccupancyFraction=75
set JAVA_OPTS=%JAVA_OPTS% -XX:+UseCMSInitiatingOccupancyOnly
REM When running under Java 7
REM JAVA_OPTS=%JAVA_OPTS% -XX:+UseCondCardMark
REM GC logging options -- uncomment to enable
REM JAVA_OPTS=%JAVA_OPTS% -XX:+PrintGCDetails
REM JAVA_OPTS=%JAVA_OPTS% -XX:+PrintGCTimeStamps
REM JAVA_OPTS=%JAVA_OPTS% -XX:+PrintClassHistogram
REM JAVA_OPTS=%JAVA_OPTS% -XX:+PrintTenuringDistribution
REM JAVA_OPTS=%JAVA_OPTS% -XX:+PrintGCApplicationStoppedTime
REM JAVA_OPTS=%JAVA_OPTS% -Xloggc:/var/log/elasticsearch/gc.log
REM Causes the JVM to dump its heap on OutOfMemory.
set JAVA_OPTS=%JAVA_OPTS% -XX:+HeapDumpOnOutOfMemoryError
REM The path to the heap dump location. Note the directory must exist and have enough
REM space for a full heap dump.
REM JAVA_OPTS=%JAVA_OPTS% -XX:HeapDumpPath=$ES_HOME/logs/heapdump.hprof
REM Disables explicit GC
set JAVA_OPTS=%JAVA_OPTS% -XX:+DisableExplicitGC
set ES_CLASSPATH=%ES_CLASSPATH%;%ES_HOME%/lib/elasticsearch-1.4.0-SNAPSHOT.jar;%ES_HOME%/lib/*;%ES_HOME%/lib/sigar/*;%ES_HOME%/plugins/shield/*
set ES_PARAMS=-Des.path.home="%ES_HOME%"
SET HOSTNAME=%COMPUTERNAME%
"%JAVA_HOME%\bin\java" %JAVA_OPTS% %ES_JAVA_OPTS% %ES_PARAMS% -cp "%ES_CLASSPATH%" %JAVA_CMD% %PARAMS%
goto finally
:err
echo JAVA_HOME environment variable must be set!
ENDLOCAL
EXIT /B 1
:err_java_cmd
echo Can not call .in.bat without specifying a main java class
ENDLOCAL
EXIT /B 1
:finally
ENDLOCAL
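For readers more at home with the Unix scripts in this plugin, the heap precedence implemented above (an explicit ES_HEAP_SIZE overriding the ES_MIN_MEM/ES_MAX_MEM defaults) can be sketched in POSIX sh; this is an illustrative translation, not a file shipped with Shield:

```shell
# Illustrative sh translation of the batch file's heap sizing:
# defaults first, then ES_HEAP_SIZE overrides both min and max.
ES_MIN_MEM="${ES_MIN_MEM:-256m}"
ES_MAX_MEM="${ES_MAX_MEM:-1g}"
if [ -n "$ES_HEAP_SIZE" ]; then
  ES_MIN_MEM="$ES_HEAP_SIZE"
  ES_MAX_MEM="$ES_HEAP_SIZE"
fi
JAVA_OPTS="$JAVA_OPTS -Xms$ES_MIN_MEM -Xmx$ES_MAX_MEM"
echo "$JAVA_OPTS"
```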

shield/bin/shield/esusers Executable file

@ -0,0 +1,132 @@
#!/bin/sh
# Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
# or more contributor license agreements. Licensed under the Elastic License;
# you may not use this file except in compliance with the Elastic License.
SCRIPT="$0"
# SCRIPT may be an arbitrarily deep series of symlinks. Loop until we have the concrete path.
while [ -h "$SCRIPT" ] ; do
ls=`ls -ld "$SCRIPT"`
# Drop everything prior to ->
link=`expr "$ls" : '.*-> \(.*\)$'`
if expr "$link" : '/.*' > /dev/null; then
SCRIPT="$link"
else
SCRIPT=`dirname "$SCRIPT"`/"$link"
fi
done
# determine elasticsearch home
ES_HOME=`dirname "$SCRIPT"`/../..
# make ELASTICSEARCH_HOME absolute
ES_HOME=`cd "$ES_HOME"; pwd`
# If an include wasn't specified in the environment, then search for one...
if [ "x$ES_INCLUDE" = "x" ]; then
# Locations (in order) to use when searching for an include file.
for include in /usr/share/elasticsearch/elasticsearch.in.sh \
/usr/local/share/elasticsearch/elasticsearch.in.sh \
/opt/elasticsearch/elasticsearch.in.sh \
~/.elasticsearch.in.sh \
"`dirname "$0"`"/../elasticsearch.in.sh \
$ES_HOME/bin/elasticsearch.in.sh; do
if [ -r "$include" ]; then
. "$include"
break
fi
done
# ...otherwise, source the specified include.
elif [ -r "$ES_INCLUDE" ]; then
. "$ES_INCLUDE"
fi
if [ -x "$JAVA_HOME/bin/java" ]; then
JAVA="$JAVA_HOME/bin/java"
else
JAVA=`which java`
fi
if [ ! -x "$JAVA" ]; then
echo "Could not find any executable java binary. Please install java in your PATH or set JAVA_HOME"
exit 1
fi
if [ -z "$ES_CLASSPATH" ]; then
echo "You must set the ES_CLASSPATH var" >&2
exit 1
fi
# Special-case path variables.
case `uname` in
CYGWIN*)
ES_CLASSPATH=`cygpath -p -w "$ES_CLASSPATH"`
ES_HOME=`cygpath -p -w "$ES_HOME"`
;;
esac
# Try to read package config files
if [ -f "/etc/sysconfig/elasticsearch" ]; then
CONF_DIR=/etc/elasticsearch
CONF_FILE=$CONF_DIR/elasticsearch.yml
. "/etc/sysconfig/elasticsearch"
elif [ -f "/etc/default/elasticsearch" ]; then
CONF_DIR=/etc/elasticsearch
CONF_FILE=$CONF_DIR/elasticsearch.yml
. "/etc/default/elasticsearch"
fi
# Parse any long getopt options and put them into properties before calling getopt below
# Be dash compatible to make sure running under ubuntu works
ARGCOUNT=$#
COUNT=0
while [ $COUNT -lt $ARGCOUNT ]
do
case $1 in
--*=*) properties="$properties -Des.${1#--}"
shift 1; COUNT=$(($COUNT+1))
;;
--*) properties="$properties -Des.${1#--}=$2"
shift ; shift; COUNT=$(($COUNT+2))
;;
*) set -- "$@" "$1"; shift; COUNT=$(($COUNT+1))
esac
done
# check if properties already has a config file or config dir
if [ -e "$CONF_DIR" ]; then
case "$properties" in
*-Des.default.path.conf=*) ;;
*)
if [ ! -d "$CONF_DIR/shield" ]; then
echo "ERROR: The configuration directory [$CONF_DIR/shield] does not exist. The esusers tool expects Shield configuration files in that location."
echo "The plugin may not have been installed with the correct configuration path. If [$ES_HOME/config/shield] exists, please copy the shield directory to [$CONF_DIR]"
exit 1
fi
properties="$properties -Des.default.path.conf=$CONF_DIR"
;;
esac
fi
if [ -e "$CONF_FILE" ]; then
case "$properties" in
*-Des.default.config=*) ;;
*)
properties="$properties -Des.default.config=$CONF_FILE"
;;
esac
fi
export HOSTNAME=`hostname -s`
# include shield jars in classpath
ES_CLASSPATH="$ES_CLASSPATH:$ES_HOME/plugins/shield/*"
cd "$ES_HOME" > /dev/null
"$JAVA" $ES_JAVA_OPTS -cp "$ES_CLASSPATH" -Des.path.home="$ES_HOME" $properties org.elasticsearch.shield.authc.esusers.tool.ESUsersTool "$@"
status=$?
cd - > /dev/null
exit $status
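The long-option loop above relies on the POSIX `${1#--}` expansion to turn `--key=value` flags into `-Des.key=value` system properties; a self-contained sketch of that rewriting (the flag values are made up for illustration):

```shell
# Same rewriting as the esusers/syskeygen option loop: strip the
# leading "--" and prefix the result with -Des. (sample flags only;
# non-flag arguments are simply skipped in this sketch).
properties=""
set -- --cluster.name=shield --shield.audit.enabled true
while [ $# -gt 0 ]; do
  case $1 in
    --*=*) properties="$properties -Des.${1#--}"; shift ;;
    --*)   properties="$properties -Des.${1#--}=$2"; shift 2 ;;
    *)     shift ;;
  esac
done
echo "$properties"
```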


@ -0,0 +1,9 @@
@echo off
rem Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
rem or more contributor license agreements. Licensed under the Elastic License;
rem you may not use this file except in compliance with the Elastic License.
PUSHD %~dp0
CALL %~dp0.in.bat org.elasticsearch.shield.authc.esusers.tool.ESUsersTool %*
POPD

shield/bin/shield/syskeygen Executable file

@ -0,0 +1,132 @@
#!/bin/sh
# Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
# or more contributor license agreements. Licensed under the Elastic License;
# you may not use this file except in compliance with the Elastic License.
SCRIPT="$0"
# SCRIPT may be an arbitrarily deep series of symlinks. Loop until we have the concrete path.
while [ -h "$SCRIPT" ] ; do
ls=`ls -ld "$SCRIPT"`
# Drop everything prior to ->
link=`expr "$ls" : '.*-> \(.*\)$'`
if expr "$link" : '/.*' > /dev/null; then
SCRIPT="$link"
else
SCRIPT=`dirname "$SCRIPT"`/"$link"
fi
done
# determine elasticsearch home
ES_HOME=`dirname "$SCRIPT"`/../..
# make ELASTICSEARCH_HOME absolute
ES_HOME=`cd "$ES_HOME"; pwd`
# If an include wasn't specified in the environment, then search for one...
if [ "x$ES_INCLUDE" = "x" ]; then
# Locations (in order) to use when searching for an include file.
for include in /usr/share/elasticsearch/elasticsearch.in.sh \
/usr/local/share/elasticsearch/elasticsearch.in.sh \
/opt/elasticsearch/elasticsearch.in.sh \
~/.elasticsearch.in.sh \
"`dirname "$0"`"/../elasticsearch.in.sh \
$ES_HOME/bin/elasticsearch.in.sh; do
if [ -r "$include" ]; then
. "$include"
break
fi
done
# ...otherwise, source the specified include.
elif [ -r "$ES_INCLUDE" ]; then
. "$ES_INCLUDE"
fi
if [ -x "$JAVA_HOME/bin/java" ]; then
JAVA="$JAVA_HOME/bin/java"
else
JAVA=`which java`
fi
if [ ! -x "$JAVA" ]; then
echo "Could not find any executable java binary. Please install java in your PATH or set JAVA_HOME"
exit 1
fi
if [ -z "$ES_CLASSPATH" ]; then
echo "You must set the ES_CLASSPATH var" >&2
exit 1
fi
# Special-case path variables.
case `uname` in
CYGWIN*)
ES_CLASSPATH=`cygpath -p -w "$ES_CLASSPATH"`
ES_HOME=`cygpath -p -w "$ES_HOME"`
;;
esac
# Try to read package config files
if [ -f "/etc/sysconfig/elasticsearch" ]; then
CONF_DIR=/etc/elasticsearch
CONF_FILE=$CONF_DIR/elasticsearch.yml
. "/etc/sysconfig/elasticsearch"
elif [ -f "/etc/default/elasticsearch" ]; then
CONF_DIR=/etc/elasticsearch
CONF_FILE=$CONF_DIR/elasticsearch.yml
. "/etc/default/elasticsearch"
fi
# Parse any long getopt options and put them into properties before calling getopt below
# Be dash compatible to make sure running under ubuntu works
ARGCOUNT=$#
COUNT=0
while [ $COUNT -lt $ARGCOUNT ]
do
case $1 in
--*=*) properties="$properties -Des.${1#--}"
shift 1; COUNT=$(($COUNT+1))
;;
--*) properties="$properties -Des.${1#--}=$2"
shift ; shift; COUNT=$(($COUNT+2))
;;
*) set -- "$@" "$1"; shift; COUNT=$(($COUNT+1))
esac
done
# check if properties already has a config file or config dir
if [ -e "$CONF_DIR" ]; then
case "$properties" in
*-Des.default.path.conf=*) ;;
*)
if [ ! -d "$CONF_DIR/shield" ]; then
echo "ERROR: The configuration directory [$CONF_DIR/shield] does not exist. The syskeygen tool expects Shield configuration files in that location."
echo "The plugin may not have been installed with the correct configuration path. If [$ES_HOME/config/shield] exists, please copy the shield directory to [$CONF_DIR]"
exit 1
fi
properties="$properties -Des.default.path.conf=$CONF_DIR"
;;
esac
fi
if [ -e "$CONF_FILE" ]; then
case "$properties" in
*-Des.default.config=*) ;;
*)
properties="$properties -Des.default.config=$CONF_FILE"
;;
esac
fi
export HOSTNAME=`hostname -s`
# include shield jars in classpath
ES_CLASSPATH="$ES_CLASSPATH:$ES_HOME/plugins/shield/*"
cd "$ES_HOME" > /dev/null
"$JAVA" $ES_JAVA_OPTS -cp "$ES_CLASSPATH" -Des.path.home="$ES_HOME" $properties org.elasticsearch.shield.crypto.tool.SystemKeyTool "$@"
status=$?
cd - > /dev/null
exit $status


@ -0,0 +1,9 @@
@echo off
rem Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
rem or more contributor license agreements. Licensed under the Elastic License;
rem you may not use this file except in compliance with the Elastic License.
PUSHD %~dp0
CALL %~dp0.in.bat org.elasticsearch.shield.crypto.tool.SystemKeyTool %*
POPD


@ -0,0 +1,15 @@
logger:
shield.audit.logfile: INFO, access_log
additivity:
shield.audit.logfile: false
appender:
access_log:
type: dailyRollingFile
file: ${path.logs}/${cluster.name}-access.log
datePattern: "'.'yyyy-MM-dd"
layout:
type: pattern
conversionPattern: "[%d{ISO8601}] %m%n"



@ -0,0 +1,94 @@
admin:
cluster: all
indices:
'*': all
# monitoring cluster privileges
# All operations on all indices
power_user:
cluster: monitor
indices:
'*': all
# Read-only operations on indices
user:
indices:
'*': read
# Defines the required permissions for transport clients
transport_client:
cluster:
- cluster:monitor/nodes/info
#uncomment the following for sniffing
#- cluster:monitor/state
# The required role for kibana 3 users
kibana3:
cluster: cluster:monitor/nodes/info
indices:
'*': indices:data/read/search, indices:data/read/get, indices:admin/get
'kibana-int': indices:data/read/search, indices:data/read/get, indices:data/write/delete, indices:data/write/index, create_index
# The required permissions for kibana 4 users.
kibana4:
cluster:
- cluster:monitor/nodes/info
- cluster:monitor/health
indices:
'*':
- indices:admin/mappings/fields/get
- indices:admin/validate/query
- indices:data/read/search
- indices:data/read/msearch
- indices:admin/get
'.kibana':
- indices:admin/exists
- indices:admin/mapping/put
- indices:admin/mappings/fields/get
- indices:admin/refresh
- indices:admin/validate/query
- indices:data/read/get
- indices:data/read/mget
- indices:data/read/search
- indices:data/write/delete
- indices:data/write/index
- indices:data/write/update
- indices:admin/create
# The required permissions for the kibana 4 server
kibana4_server:
cluster:
- cluster:monitor/nodes/info
- cluster:monitor/health
indices:
'.kibana':
- indices:admin/exists
- indices:admin/mapping/put
- indices:admin/mappings/fields/get
- indices:admin/refresh
- indices:admin/validate/query
- indices:data/read/get
- indices:data/read/mget
- indices:data/read/search
- indices:data/write/delete
- indices:data/write/index
- indices:data/write/update
# The required role for logstash users
logstash:
cluster: indices:admin/template/get, indices:admin/template/put
indices:
'logstash-*': indices:data/write/bulk, indices:data/write/delete, indices:data/write/update, indices:data/read/search, indices:data/read/scroll, create_index
# Marvel role, allowing all operations
# on the marvel indices
marvel_user:
cluster: cluster:monitor/nodes/info, cluster:admin/plugin/license/get
indices:
'.marvel-*': all
# Marvel Agent users
marvel_agent:
cluster: indices:admin/template/get, indices:admin/template/put
indices:
'.marvel-*': indices:data/write/bulk, create_index




@ -0,0 +1,14 @@
ELASTICSEARCH CONFIDENTIAL
__________________
[2014] Elasticsearch Incorporated. All Rights Reserved.
NOTICE: All information contained herein is, and remains
the property of Elasticsearch Incorporated and its suppliers,
if any. The intellectual and technical concepts contained
herein are proprietary to Elasticsearch Incorporated
and its suppliers and may be covered by U.S. and Foreign Patents,
patents in process, and are protected by trade secret or copyright law.
Dissemination of this information or reproduction of this material
is strictly forbidden unless prior written permission is obtained
from Elasticsearch Incorporated.


@ -0,0 +1,3 @@
admin:
- "CN=SHIELD,CN=Users,DC=ad,DC=test,DC=elasticsearch,DC=com"
- "cn=SHIELD,ou=people,dc=oldap,dc=test,dc=elasticsearch,dc=com"


@ -0,0 +1,13 @@
admin:
cluster: all
indices:
'*': all
power_user:
cluster: monitor
indices:
'*': all
user:
indices:
'*': read


@ -0,0 +1 @@
*43QåÆ]Ûùð/÷ô<14>>eû.²¾g^lçH¶ûgu«•±Ê O/Gaoƒ˜ Ⱥâ•rr³ø´èk_ËÐ2û*¹©m•?д,”]‡<>Ƥå¦p¶I婳ò¼£¸sOYwu†¹äŸK•¨°+_¹0


@ -0,0 +1,5 @@
admin-bcrypt:$2a$10$5uCJHPn3p0ZPQp6rIIgcDO0VZ3urZZmA.egHiy/WknxIkAyZXPGpy
admin-plain:{plain}changeme
admin-sha:{SHA}+pvrmeQCmtWmYVOZ57uuITVghrM=
admin-apr:$apr1$fCQ4kkwA$ETvNx37ooOcdau5a61S/s.
admin-sha2:$5$mw0LEbLr$s57Rbo0wfH8Z690Dc0..VgC1qn/a5h73bbpt8kql8B4


@ -0,0 +1 @@
admin:admin-bcrypt,admin-sha,admin-plain,admin-apr,admin-sha2


@ -0,0 +1,89 @@
{
"defaults": {
"plugins": [
"lmenezes/elasticsearch-kopf",
"elasticsearch/license/latest",
"elasticsearch/marvel/latest",
{ "name": "shield", "path" : "file:../../target/releases/elasticsearch-shield-1.0.0-SNAPSHOT.zip" }
],
"config" : {
"cluster.name": "shield",
"indices.store.throttle.max_bytes_per_sec": "100mb",
"discovery": {
"type": "zen",
"zen.ping" : {
"multicast.enabled": false,
"unicast.hosts": [ "127.0.0.1:9300", "127.0.0.1:9301" ]
}
},
"network": {
"bind_host": "127.0.0.1",
"publish_host": "127.0.0.1"
},
"marvel.agent.exporter.es.hosts": [ "https://admin-plain:changeme@127.0.0.1:9200"],
"marvel.agent.exporter.es.ssl.truststore.path": "../../src/test/resources/org/elasticsearch/shield/transport/ssl/certs/simple/testnode.jks",
"marvel.agent.exporter.es.ssl.truststore.password": "testnode",
"http.cors": {
"enabled": true,
"allow-origin": "/http:\/\/www.elasticsearch.(org|com)/"
},
"shield": {
"enabled": true,
"system_key.file": ".esvm-shield-config/system_key",
"audit.enabled": true,
"transport.ssl": true,
"http.ssl": true,
"ssl.hostname_verification": true,
"ssl.keystore": {
"path": "../../src/test/resources/org/elasticsearch/shield/transport/ssl/certs/simple/testnode.jks",
"password": "testnode"
},
"authc.realms" : {
"esusers": {
"type" : "esusers",
"order" : 0,
"files" : {
"users" : ".esvm-shield-config/users",
"users_roles" : ".esvm-shield-config/users_roles"
}
}
},
"authz.store.files.roles" : ".esvm-shield-config/roles.yml"
}
}
},
"clusters": {
"shield": {
"version": "1.4"
},
"oldap": {
"version": "1.4",
"config": {
"shield.authc.realms.oldap": {
"type": "ldap",
"order": 1,
"url": "ldaps://54.200.235.244:636",
"user_dn_templates": ["uid={0},ou=people,dc=oldap,dc=test,dc=elasticsearch,dc=com"],
"group_search.base_dn": "ou=people,dc=oldap,dc=test,dc=elasticsearch,dc=com",
"unmapped_groups_as_roles": false,
"hostname_verification": false,
"files.role_mapping": ".esvm-shield-config/role_mapping.yml"
}
}
},
"ad": {
"version": "1.4",
"config": {
"shield.authc.realms.ad": {
"type": "active_directory",
"order": 1,
"domain_name": "ad.test.elasticsearch.com",
"url": "ldaps://ad.test.elasticsearch.com:636",
"unmapped_groups_as_roles": false,
"hostname_verification": false,
"files.role_mapping": ".esvm-shield-config/role_mapping.yml"
}
}
}
}
}


@ -0,0 +1,26 @@
Running ESVM with Shield

Upgrade/Install:

    npm install esvm -g

Running:

1) cd to elasticsearch-shield/dev-tools/esvm
2) run esvm
   a) For native users:            esvm
   b) For openldap users:          esvm oldap
   c) For active directory users:  esvm ad

Users and roles are stored in .esvm-shield-config

Troubleshooting:

- elasticsearch is installed under ~/.esvm/<version>
- turn on debug logging in ~/.esvm/1.4.1/config/logging.yml
- esvm --fresh will reinstall elasticsearch
- plugins are not re-installed automatically; remove them manually with ~/.esvm/1.4.1/bin/plugin --remove shield
- errors during startup are not displayed; if esvm fails to start, look in ~/.esvm/1.4.1/logs/*

@@ -0,0 +1,61 @@
<?xml version="1.0"?>
<project name="commercial-integration-tests">
<import file="${elasticsearch.integ.antfile.default}"/>
<!-- unzip core release artifact, install license plugin, install plugin, then start ES -->
<target name="start-external-cluster-with-plugin" depends="stop-external-cluster" unless="${shouldskip}">
<local name="integ.home"/>
<local name="integ.repo.home"/>
<local name="integ.plugin.url"/>
<local name="integ.pid"/>
<delete dir="${integ.scratch}"/>
<unzip src="${org.elasticsearch:elasticsearch:zip}"
dest="${integ.scratch}"/>
<property name="integ.home" location="${integ.scratch}/elasticsearch-${elasticsearch.version}"/>
<property name="integ.repo.home" location="${integ.home}/repo"/>
<!-- begin commercial plugin mods -->
<local name="integ.license.plugin.url"/>
<makeurl property="integ.license.plugin.url" file="${org.elasticsearch:elasticsearch-license-plugin:zip}"/>
<echo>Installing license plugin...</echo>
<run-script dir="${integ.home}" script="bin/plugin"
args="-u ${integ.license.plugin.url} -i elasticsearch-license-plugin"/>
<!-- end commercial plugin mods -->
<makeurl property="integ.plugin.url" file="${project.build.directory}/releases/${project.artifactId}-${project.version}.zip"/>
<echo>Installing plugin ${project.artifactId}...</echo>
<run-script dir="${integ.home}" script="bin/plugin"
args="-u ${integ.plugin.url} -i ${project.artifactId}"/>
<!-- execute -->
<echo>Starting up external cluster...</echo>
<run-script dir="${integ.home}" script="bin/elasticsearch" spawn="true"
args="${integ.args} -Des.path.repo=${integ.repo.home}"/>
<!-- begin shield plugin mods -->
<run-script dir="${integ.home}" script="bin/shield/esusers"
args="useradd test_user -p changeme -r admin"/>
<!-- it seems the waitfor task doesn't support basic auth, so
         we do the next best thing: wait for the socket, then verify with a get
<waitfor maxwait="3" maxwaitunit="minute" checkevery="500">
<http url="http://test_user:changeme@127.0.0.1:9200"/>
</waitfor>
-->
<waitfor maxwait="3" maxwaitunit="minute" checkevery="500">
<socket server="127.0.0.1" port="9200"/>
</waitfor>
<local name="temp.file"/>
<tempfile property="temp.file" destdir="${java.io.tmpdir}"/>
<get src="http://127.0.0.1:9200" dest="${temp.file}" username="test_user" password="changeme" verbose="true" retries="10"/>
<!-- end shield plugin mods -->
<extract-pid property="integ.pid"/>
<echo>External cluster started PID ${integ.pid}</echo>
</target>
</project>

@@ -0,0 +1,13 @@
<?xml version="1.0" encoding="UTF-8"?>
<additionalHeaders>
<javadoc_style>
<firstLine>/*</firstLine>
<beforeEachLine> * </beforeEachLine>
<endLine> */EOL</endLine>
<!--skipLine></skipLine-->
<firstLineDetectionPattern>(\s|\t)*/\*.*$</firstLineDetectionPattern>
<lastLineDetectionPattern>.*\*/(\s|\t)*$</lastLineDetectionPattern>
<allowBlankLines>false</allowBlankLines>
<isMultiline>true</isMultiline>
</javadoc_style>
</additionalHeaders>

@@ -0,0 +1,22 @@
randomization:
elasticsearch:
es150:
version: 1.5.0
branch: tags/v1.5.0
lucene.version: 4.10.4
es151:
version: 1.5.1
branch: tags/v1.5.1
lucene.version: 4.10.4
es152:
version: 1.5.2
branch: tags/v1.5.2
lucene.version: 4.10.4
es153:
version: 1.5.3-SNAPSHOT
branch: origin/1.5
lucene.version: 4.10.4
es160:
version: 1.6.0-SNAPSHOT
branch: origin/1.x
lucene.version: 4.10.4

@@ -0,0 +1,19 @@
All the following scenarios are run as a user authorized for `test.*`: read
[horizontal]
*Existing Indices*::*Action*::*Outcome (executed indices)*
`test1` `test2` `test3` `index1`::`GET _search`::`test1` `test2` `test3`
`test1` `test2` `test3` `index1`::`GET _search/*`::`test1` `test2` `test3`
`test1` `test2` `index1` `index2`::`GET _search/index*`::AuthorizationException
- empty cluster-::`GET _search`::IndexMissingException
- empty cluster-::`GET _search/*`::IndexMissingException
`index1` `index2`::`GET _search`::IndexMissingException
`index1` `index2`::`GET _search/*`::IndexMissingException
`test1` `test2` `index1`::`GET _search/test*,index1`::AuthorizationException
`test1` `test2` `index1`::`GET _search/missing`::AuthorizationException
`test1` `test2` `test3` `index1`::`GET _search/-test2`::`test1` `test3`
`test1` `test2` `test21` `test3` `index1`:: `GET _search/-test2*`::`test1` `test3`
`test1` `test2` `test3` `index1`::`GET msearch first item: all, second item: index1`:: AuthorizationException
`test1` `test2` `test3` `index1`::`GET msearch first item: all, second item: missing`:: AuthorizationException
`test1` `test2` `test3` `index1`::`GET msearch first item: all, second item: test4`:: 1st item:`test1` `test2` `test3`, 2nd item: IndexMissingException
`test1` `test2` `test3` `index1`::`GET msearch first item: all, second item: index*`:: IndexMissingException
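The rules the table encodes can be sketched as a small resolution routine. This is a hypothetical illustration (assuming a user authorized to read indices matching `test.*`), not Shield's actual resolution code:

```python
import re

class AuthorizationException(Exception):
    pass

class IndexMissingException(Exception):
    pass

def resolve(requested, existing, authorized_pattern=r"test.*"):
    """Sketch of wildcard resolution for a read-authorized index pattern."""
    authz = re.compile(authorized_pattern)
    allowed = [i for i in existing if authz.fullmatch(i)]
    if requested in ("_all", "*"):
        # the catch-all wildcard is replaced by the authorized existing indices
        if not allowed:
            raise IndexMissingException(requested)
        return allowed
    exprs = requested.split(",")
    # a leading exclusion implicitly starts from everything the user may read
    result = list(allowed) if exprs[0].startswith("-") else []
    for expr in exprs:
        if expr.startswith("-"):
            result = [i for i in result
                      if not re.fullmatch(expr[1:].replace("*", ".*"), i)]
            continue
        if "*" in expr:
            matches = [i for i in existing
                       if re.fullmatch(expr.replace("*", ".*"), i)]
        else:
            matches = [expr]
        for name in matches:
            # explicit names (and non-catch-all wildcard hits) outside the
            # authorized set fail hard, as in the AuthorizationException rows
            if not authz.fullmatch(name):
                raise AuthorizationException(name)
            result.append(name)
    return result
```

Running it against a few rows of the table reproduces the listed outcomes.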

@@ -0,0 +1,93 @@
== LDAP Configuration for INTERNAL only Test Servers
We have two LDAP servers for testing:
* Active Directory on Windows Server 2012
* OpenLdap on Suse Enterprise Linux 10.x
=== Configuration for OpenLdap
Here is a configuration that works for OpenLDAP. It uses OpenSuse's method for creating LDAP users that can
authenticate to the box, so it is probably close to a real-world scenario. For SSL, the following truststore contains
both public certificates: elasticsearch-shield/src/test/resources/org/elasticsearch/shield/transport/ssl/certs/simple/testnode.jks
[source, yaml]
------------------------------------------------------------
shield:
ssl.keystore:
path: "/path/to/elasticsearch-shield/src/test/resources/org/elasticsearch/shield/transport/ssl/certs/simple/testnode.jks"
password: testnode
authc.realms.openldap:
type: ldap
order: 0
url: "ldaps://54.200.235.244:636"
user_dn_templates: [ "uid={0},ou=people,dc=oldap,dc=test,dc=elasticsearch,dc=com" ]
group_search:
base_dn: "ou=people,dc=oldap,dc=test,dc=elasticsearch,dc=com"
hostname_verification: false
------------------------------------------------------------
=== Configuration for Active Directory
You could configure Active Directory the same way (with type ldap and user_dn_templates). But where is the fun in that!
Active Directory has a simplified (non-standard) authentication workflow that lets us eliminate the templates.
This technique does, however, require that you use a DNS name for your Active Directory server. Do this by adding the
following to /etc/hosts:
[source, yaml]
------------------------------------------------------------
shield:
authc.realms.ad:
type: active_directory
order: 0
domain_name: ad.test.elasticsearch.com
------------------------------------------------------------
The above configuration results in a plaintext LDAP connection. For SSL the following configuration is required:
[source, yaml]
------------------------------------------------------------
shield:
ssl.keystore:
path: "/path/to/elasticsearch-shield/src/test/resources/org/elasticsearch/shield/transport/ssl/certs/simple/testnode.jks"
password: testnode
authc.realms.ad:
type: active_directory
order: 0
domain_name: ad.test.elasticsearch.com
url: "ldaps://ad.test.elasticsearch.com:636"
hostname_verification: false
------------------------------------------------------------
=== Users & Groups
Isn't LDAP fun?! No? Well that's why we've created super heroes!
|=======================
| CN (common name) | uid | group memberships
| Commander Kraken | kraken | Hydra
| Bruce Banner | hulk | Geniuses, SHIELD, Philanthropists, Avengers
| Clint Barton | hawkeye | SHIELD, Avengers
| Jarvis | jarvis |
| Natasha Romanoff | blackwidow | SHIELD, Avengers
| Nick Fury | fury | SHIELD, Avengers
| Phil Colson | phil | SHIELD
| Steve Rogers | cap | SHIELD, Avengers
| Thor | thor | SHIELD, Avengers, Gods, Philanthropists
| Tony Stark | ironman | Geniuses, Billionaires, Playboys, Philanthropists, SHIELD, Avengers
| Odin | odin | Gods
|=======================
They aren't very good super-heroes because they all share the same password: `NickFuryHeartsES`. You'll use the uid
as the username.
=== Groups
If you want to map group names to es roles, you'll use the full distinguished names of the groups. The DN format for
groups in AD is
`CN={group name},CN=Users,DC=ad,DC=test,DC=elasticsearch,DC=com`
and the DN format for groups in OpenLDAP is
`cn={group name},ou=people,dc=oldap,dc=test,dc=elasticsearch,dc=com`
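For example, a `role_mapping.yml` along the following lines associates Shield roles with those group DNs (the role names here are hypothetical; use the roles defined in your own `roles.yml`):

```yaml
# role name -> list of LDAP/AD group DNs whose members get that role
admin:
  - "CN=Avengers,CN=Users,DC=ad,DC=test,DC=elasticsearch,DC=com"
user:
  - "cn=SHIELD,ou=people,dc=oldap,dc=test,dc=elasticsearch,dc=com"
```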
Ping Cam Morris or Bill Hwang with any questions.

@@ -0,0 +1,25 @@
[[shield]]
= Shield - Elasticsearch Security Plugin
:ref: http://www.elastic.co/guide/en/elasticsearch/reference/current
include::01-introduction.asciidoc[]
include::02-architecture.asciidoc[]
include::03-quick-getting-started.asciidoc[]
include::04-getting-started.asciidoc[]
include::05-authorization.asciidoc[]
include::06-authentication.asciidoc[]
include::07-securing-nodes.asciidoc[]
include::08-auditing.asciidoc[]
include::09-clients.asciidoc[]
include::10-appendices.asciidoc[]

@@ -0,0 +1,60 @@
[[introduction]]
== Introduction
This document discusses securing your Elasticsearch deployment, from initial installation to configuration.
[float]
=== Why Security?
An Elasticsearch cluster benefits from properly implemented security in the following ways:
* <<roles,Role-based>> access control at the index level and <<ldap,LDAP>> authentication integration to _prevent
unauthorized access_
* <<ssl-tls,Encryption>> to _preserve the integrity of your data_, keeping confidential data confidential.
* An _<<auditing,Audit>> trail_ to analyze access patterns.
[float]
==== Prevent Unauthorized Access
The term 'unauthorized access' covers two distinct security concepts: _Authentication_ and _Authorization_.
Authentication validates that a user is who they claim to be. A proper authentication setup enforces that only the
person named, for example, Kelsey Andorra can authenticate to Elasticsearch as the user `kandorra`. Shield ships with
an out-of-the-box internal authentication mechanism and also integrates with LDAP and Active Directory to provide
user authentication. Authorization enforces the set of privileges that are available to a specific user. To continue the
example, an authorization framework enforces that the user `kandorra` has the ability to perform specific actions on the
Elasticsearch cluster. These specific actions are called _privileges_. See the <<reference,Reference>> section for a
complete list of privileges. Privileges are bundled into sets, and a set of privileges is called a _role_.
Shield also provides for authorization based on the client's IP address. You may whitelist and blacklist subnets to
control network-level access to a server.
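As a sketch of what that could look like in `elasticsearch.yml` (the exact setting names here are an assumption; verify them against the IP filtering reference before use):

```yaml
# allow the application subnet, deny everyone else (setting names assumed)
shield.transport.filter.allow: ["192.168.1.0/24", "10.0.0.5"]
shield.transport.filter.deny: _all
```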
[float]
==== Preserve Data Integrity
A standard Elasticsearch cluster provides redundancy to protect against _accidental_ data loss and corruption. By
providing <<ssl-tls,_encryption_>> for data transmitted from node to node within the cluster, Elasticsearch security
protects data from _deliberate_ tampering or unauthorized access.
[float]
==== Provide an Audit Trail
Knowing who requested which actions on your data, and when, is an important part of security. Keeping an auditable log
of the activity in your cluster can not only help diagnose performance issues, but provide insight into attacks and
attempted breaches.
[float]
=== Security as a Plugin
Security features for Elasticsearch are implemented in a plugin that you <<getting-started,install>> on each node in
your cluster.
[float]
=== What's In This Document
The information in this document covers the following broad categories:
* To learn about the architecture of the Elasticsearch security plugin and how the various elements of security
interact, see the <<architecture, Architecture Overview>> section.
* To get started with Elasticsearch security, from installation to initial configuration, see the
<<getting-started,Getting Started>> section.
* To answer specific questions about configuration elements and privileges in Elasticsearch security, see the
<<reference,Reference>> section.

@@ -0,0 +1,84 @@
[[architecture]]
== Architecture Overview
Shield installs as a plugin into Elasticsearch. Once installed, the plugin intercepts inbound API calls in order to
enforce authentication and authorization. The plugin can also provide encryption using Secure Sockets Layer/Transport
Layer Security (SSL/TLS) for the network traffic to and from the Elasticsearch node. The same API interception layer
that enables authentication and authorization also provides the audit logging capability.
[float]
=== User Authentication
Shield defines a known set of users in order to authenticate users that make requests. These sets of users are defined
with an abstraction called a _realm_. A realm is a user database configured for the use of the Shield plugin. The
supported realms are _esusers_ and _LDAP_.
In the _esusers_ realm, users exist exclusively within the Elasticsearch cluster. With the _esusers_ realm, the
administrator manages users with <<esusers,tools provided by Elasticsearch>>, and all the user operations occur within
the Elasticsearch cluster. Users authenticate with a username and password pair.
In the _LDAP_ realm, the administrator manages users with the tools provided by the LDAP vendor. Elasticsearch
authenticates users by accessing the configured LDAP server. Users authenticate with a username and password pair. Shield
also enables mapping LDAP groups to roles in Shield (more on roles below).
Your application can be a user in a Shield realm. Elasticsearch Clients authenticate to the cluster by providing a
username and password pair (a.k.a _Authentication Token_) with each request. To learn more on how different clients
can authenticate, see <<clients, Clients>>.
[float]
=== Authorization
Shield's data model for action authorization consists of these elements:
* _Secured Resource_, a resource against which security permissions are defined, including the cluster, an index/alias,
or a set of indices/aliases in the cluster
* _Privilege_, one or more actions that a user may execute against a secured resource. This includes named groups of
actions (e.g. _read_), or a set of specific actions (e.g. `indices:data/read/percolate`)
* _Permissions_, one or more privileges against a secured resource (e.g. _read on the "products" index_)
* _Role_, named sets of permissions
* _Users_, entities which may be assigned zero or more roles, authorizing them to perform the actions on the secure
resources described in the union of their roles
A secure Elasticsearch cluster manages the privileges of users through <<roles, _roles_>>. A role has a unique name and identifies
a set of permissions that translate to privileges on resources. A user can have an arbitrary number of roles. There are
two types of permissions: _cluster_ and _index_. The total set of permissions that a user has is defined by the union of the
permissions in all its roles.
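The union semantics can be sketched in a few lines. The role definitions below mirror the `admin`/`power_user`/`user` examples used elsewhere in this guide; the data structures are illustrative, not Shield's internals:

```python
# role -> cluster privileges and index-pattern -> index privileges
ROLES = {
    "admin":      {"cluster": {"all"},     "indices": {"*": {"all"}}},
    "power_user": {"cluster": {"monitor"}, "indices": {"*": {"all"}}},
    "user":       {"cluster": set(),       "indices": {"*": {"read"}}},
}

def effective_permissions(user_roles):
    """A user's total permissions are the union over all assigned roles."""
    cluster, indices = set(), {}
    for role in user_roles:
        perm = ROLES[role]
        cluster |= perm["cluster"]
        for pattern, privs in perm["indices"].items():
            indices.setdefault(pattern, set()).update(privs)
    return cluster, indices
```

A user assigned both `power_user` and `user` ends up with `monitor` cluster access and the combined index privileges.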
Depending on the realm used, Shield provides the appropriate means to assign roles to users.
[float]
=== Node Authentication and Channel Encryption
Nodes communicate to other nodes over port 9300. With Shield, you can use SSL/TLS to wrap this communication. When
SSL/TLS is enabled, the nodes validate each other's certificates, establishing trust between the nodes. This validation
prevents unauthenticated nodes from joining the cluster. Communications between nodes in the cluster are also encrypted
when SSL/TLS is in use.
Users are responsible for generating and installing their own certificates.
You can choose a variety of ciphers for encryption. See the <<ciphers,_Adding Ciphers to Java for Stronger Encryption_>>
section for details.
For more information on how to secure nodes see <<securing-nodes, Securing Nodes>>.
[float]
=== IP Filtering
Shield provides IP-based access control for Elasticsearch nodes. This access control allows you to restrict which
other servers, via their IP address, can connect to your Elasticsearch nodes and make requests. For example, you can
configure Shield to allow access to the cluster only from your application servers. The configuration provides for
whitelisting and blacklisting of subnets, specific IP addresses, and DNS domains. To read more about IP filtering see
<<ip-filtering, IP filtering>>.
[float]
=== Auditing
The <<auditing,audit functionality>> in a secure Elasticsearch cluster logs particular events and activity on that
cluster. The events logged include authentication attempts, including granted and denied access.

@@ -0,0 +1,75 @@
[[quick-getting-started]]
== Getting Started (Short Version)
The following tutorial will get you up and running with Shield in 2 minutes.
[float]
=== Assumptions
* You have Java(TM) 7 or above installed.
* You have downloaded elasticsearch 1.5.0+ and extracted it (from now on, we'll refer to the elasticsearch directory as `ES_HOME`).
If you haven't done so, you can download it https://download.elastic.co/elasticsearch/elasticsearch/elasticsearch-1.5.1.tar.gz[here].
* You are *not* using a package installation (RPM/DEB) or a custom configuration directory. If you are, please see the full <<getting-started,getting started>> guide.
[float]
=== Installation
1. `cd` to `ES_HOME`
2. Install the license plugin
+
[source,shell]
----------------------------------------------------------
bin/plugin -i elasticsearch/license/latest
----------------------------------------------------------
3. Next, install the shield plugin
+
[source,shell]
----------------------------------------------------------
bin/plugin -i elasticsearch/shield/latest
----------------------------------------------------------
4. Start Elasticsearch
+
[source,shell]
----------------------------------------------------------
bin/elasticsearch
----------------------------------------------------------
5. Add an `es_admin` user with administrative permissions
+
[source,shell]
----------------------------------------------------------
bin/shield/esusers useradd es_admin -r admin
----------------------------------------------------------
6. Try it out - without username/password, the request should be rejected:
+
[source,shell]
----------------------------------------------------------
curl -XGET 'http://localhost:9200/'
----------------------------------------------------------
7. Now try with username and password
+
[source,shell]
----------------------------------------------------------
curl -u es_admin -XGET 'http://localhost:9200/'
----------------------------------------------------------
8. Optionally, verify the Shield version
+
[source,shell]
----------------------------------------------------------
curl -u es_admin -XGET 'http://localhost:9200/_shield'
----------------------------------------------------------
[float]
=== Next Steps
* For a more in-depth look into the meaning of each step above, please proceed to the full <<getting-started,getting started>> guide.
* For a better understanding of the authentication mechanism we just used, please refer to <<esusers, esusers - internal file based authentication>>
* To learn about how to create roles and customize the permissions for users, please refer to the <<authorization, authorization>> section.
* To enable secure SSL/TLS encryption of cluster and client communication, please refer to the <<securing-nodes, securing nodes>> section.
* If you are new to Shield, we suggest following the guide's natural path and reading each section in order. To continue, <<getting-started, proceed to the next section>>

@@ -0,0 +1,322 @@
[[getting-started]]
== Getting Started (Long Version)
Security is installed as an Elasticsearch plugin. The plugin must be installed on every node in the cluster, and every
node must be restarted after installation. Plan for a complete cluster restart before beginning the installation
process.
IMPORTANT: Shield 2.0.x is compatible with Elasticsearch 1.5.0 and above.
[float]
=== Configuring your environment
If you install Elasticsearch as a package or you specify a custom configuration directory, the command line
tools require you to specify the configuration directory. On Linux systems, add the following line to your
`.profile` file:
[source,shell]
----------------------------------------------------------
export ES_JAVA_OPTS="-Des.path.conf=/etc/elasticsearch"
----------------------------------------------------------
NOTE: When using `sudo` to run commands as a different user, the `ES_JAVA_OPTS` setting from your profile will not be
available in the other user's environment. You can manually pass the environment variables to the command or you can
make the environment variable available by adding the following line to the `/etc/sudoers` file:
[source,shell]
----------------------------------------------------------
Defaults env_keep += "ES_JAVA_OPTS"
----------------------------------------------------------
On Windows systems, the `setx` command can be used to specify a custom configuration directory:
[source,shell]
----------------------------------------------------------
setx ES_JAVA_OPTS "-Des.path.conf=C:\config"
----------------------------------------------------------
[float]
=== Shield And Licensing
Shield requires a license to operate and the licensing is managed by a separate plugin. For this reason,
the License plugin must be installed (without the license plugin Shield will prevent the node from starting up).
For instructions on how to install the License plugin, please refer to <<license-management, License Management>>.
Once you have the licensing plugin installed, you may begin working with Shield immediately. When elasticsearch starts for the
first time with Shield and the licensing plugin installed, a 30-day trial license for Shield will automatically be generated.
If you have a license for Shield that you would like to install, please refer to <<installing-license, installing a license>>.
IMPORTANT: With a valid license, Shield will be fully operational. Upon license expiry, Shield will operate in a
degraded mode, where cluster health, cluster stats, and index stats APIs will be blocked. All other operations will
continue operating normally. Additional information can be found at the <<license-expiration, Shield license expiration>>
section.
[float]
=== Installing the Shield plugin
Follow these steps on every node in the cluster:
. From the Elasticsearch home directory, run:
+
[source,sh]
------------------------------------------
bin/plugin -i elasticsearch/shield/latest
------------------------------------------
. Restart your Elasticsearch node.
+
Before restarting your cluster, consider temporarily {ref}/modules-cluster.html[disabling shard allocation].
If your server doesn't have direct Internet access, see <<manual_download,manual download>> for an alternative way to
get the Security binaries.
[[manual_download]]
[float]
==== Manual Download
Elasticsearch's `bin/plugin` script requires direct Internet access for downloading and installing the security plugin.
If your server doesn't have Internet access, you can download the required binaries from the following link:
[source,sh]
----------------------------------------------------
https://download.elastic.co/elasticsearch/shield/shield-2.0.0.zip
----------------------------------------------------
Transfer the compressed file to your server, then install the plugin with the `bin/plugin` script:
[source,shell]
----------------------------------------------------
bin/plugin -i shield -u file://PATH_TO_ZIP_FILE <1>
----------------------------------------------------
<1> Absolute path to Shield plugin zip distribution file (e.g. `file:///path/to/file/shield-2.0.0.zip`,
note the three slashes at the beginning)
[[install-layout]]
[float]
=== Shield Installation Layout
Shield comes with its own set of configuration files and executable tools. These include:
[horizontal]
[[shield-bin]] *Executables*::
Shield's bin directory is located at `$ES_HOME/bin/shield`. Consider adding this directory to
your `PATH` environment variable.
[[shield-config]] *Configuration*::
Shield's config directory is located at `<elasticsearch_config>/shield` (where
`<elasticsearch_config>` refers to the standard config directory of
Elasticsearch - typically at `$ES_HOME/config`).
Unless otherwise stated, Shield's settings are placed in the main
`elasticsearch.yml` configuration file.
[[message-authentication]]
[float]
=== Message Authentication
Message authentication verifies that a message has not been tampered with or corrupted in transit. To enable message
authentication, run the `syskeygen` tool without any options:
[source, shell]
----------------
bin/shield/syskeygen
----------------
This creates the system key file in Shield's <<shield-config,config>> directory, e.g. `config/shield/system_key`. You
can customize this file's location by changing the value of the `shield.system_key.file` setting in the
`elasticsearch.yml` file.
IMPORTANT: Because the system key is a symmetric key, the same key must be on every node in the cluster. Copy the key to
every node in the cluster after generating it.
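Conceptually, a shared symmetric key lets any node verify that a message was produced by a holder of the same key. The sketch below illustrates the idea with an HMAC (the choice of SHA-1 and the message contents are assumptions for illustration, not Shield's actual wire format):

```python
import hashlib
import hmac
import os

def sign(system_key: bytes, message: bytes) -> bytes:
    # any node holding the same system key can recompute this MAC
    return hmac.new(system_key, message, hashlib.sha1).digest()

def verify(system_key: bytes, message: bytes, mac: bytes) -> bool:
    # constant-time comparison avoids leaking information via timing
    return hmac.compare_digest(sign(system_key, message), mac)

key = os.urandom(32)  # stands in for the generated system_key file
mac = sign(key, b"some inter-node message")
assert verify(key, b"some inter-node message", mac)
assert not verify(key, b"tampered message", mac)
```

This is also why the same key must be copied to every node: a node with a different key cannot verify (or produce) valid MACs.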
[float]
=== Enabling Role-based Access Control
Now that Shield is installed, we'll move on to configuring the users (and their roles) that we'll use to execute
various APIs on Elasticsearch.
[float]
==== Defining Roles
A _role_ encompasses a set of permissions over the cluster and/or the indices in it. Roles are defined in the
`$ES_HOME/config/shield/roles.yml` file.
.Example role definition
[source,yaml]
--------------------------------------------------
# All cluster rights
# All operations on all indices
admin: <1>
cluster: all
indices:
'*': all
# monitoring cluster privileges
# All operations on all indices
power_user: <2>
cluster: monitor
indices:
'*': all
# Read-only operations on indices
user: <3>
indices:
'*': read
--------------------------------------------------
<1> The `admin` role enables full access to the cluster and all its indices.
<2> The `power_user` role enables monitoring only access on the cluster and full access on all its indices.
<3> The `user` role has no cluster wide permissions and only has data read access on all its indices.
For this quick getting started guide, we won't need to change anything in the `roles.yml` file that comes out-of-the-box
with Shield, as it already defines the roles listed in the snippet above. To learn more about roles and how to configure
them, please see <<roles, Roles>>.
[float]
==== Defining Users
Shield supports different authentication realms that authenticate users from different sources. In this example, we'll
use the internal <<esusers,`esusers`>> realm that comes with Shield. The `esusers` realm supports user management using
the `esusers` command line tool from Shield's `bin` directory.
NOTE: The `esusers` realm is enabled by default when no realms are explicitly configured in `elasticsearch.yml`. For more
information on realms configuration please see <<realms, Realms>>.
[source,shell]
--------------------------------------------------
bin/shield/esusers useradd rdeniro -p taxidriver -r admin
--------------------------------------------------
[source,shell]
--------------------------------------------------
bin/shield/esusers useradd alpacino -p godfather -r user
--------------------------------------------------
The example above adds two users:
* The `rdeniro` user with password `taxidriver`, with the `admin` role in the cluster
* The `alpacino` user with password `godfather`, with the `user` role in the cluster
NOTE: To ensure that Elasticsearch can read the user and role information at startup, run `esusers useradd` as the
same user you use to run Elasticsearch. Running the command as root or some other user will update the permissions
for the `users` and `users_roles` files and prevent Elasticsearch from accessing them.
Now that we've defined the roles and the users of the cluster, you can start the Elasticsearch node and verify that
the Shield plugin has been loaded.
[float]
==== Verifying Shield Installation
Once your Elasticsearch node is running, you can issue a `curl` command to verify that Shield has been loaded and is the
expected version.
[source,shell]
-------------------------------------------------------------------------------
curl --user rdeniro:taxidriver 'localhost:9200/_shield'
-------------------------------------------------------------------------------
[source,json]
-------------------------------------------------------------------------------
{
"status" : "enabled",
"name" : "Mimic",
"cluster_name" : "elasticsearch",
"version" : {
"number" : "2.0.0",
"build_hash" : "",
"build_timestamp" : "NA",
"build_snapshot" : true
},
"tagline" : "You know, for security"
}
-------------------------------------------------------------------------------
You can also check the startup logs to verify that the Shield plugin has loaded and the network transports are using Shield.
A successful installation will show lines similar to the following:
[source,shell]
----------------
[2014-10-09 13:47:38,841][INFO ][transport ] [Ezekiel Stane] Using [org.elasticsearch.shield.transport.ShieldServerTransportService] as transport service, overridden by [shield]
[2014-10-09 13:47:38,841][INFO ][transport ] [Ezekiel Stane] Using [org.elasticsearch.shield.transport.netty.ShieldNettyTransport] as transport, overridden by [shield]
[2014-10-09 13:47:38,842][INFO ][http ] [Ezekiel Stane] Using [org.elasticsearch.shield.transport.netty.ShieldNettyHttpServerTransport] as http transport, overridden by [shield]
----------------
In the next section, we'll use a simple HTTP client to interact with Elasticsearch protected by Shield.
[[clientauth]]
[float]
=== Configuring HTTP REST Clients
Elasticsearch works with standard HTTP http://en.wikipedia.org/wiki/Basic_access_authentication[basic authentication]
headers to identify the requester. Since Elasticsearch is stateless, this header must be sent with every request:
[source,shell]
--------------------------------------------------
Authorization: Basic <TOKEN> <1>
--------------------------------------------------
<1> The `<TOKEN>` is computed as `base64(USERNAME:PASSWORD)`
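For example, computing the header for the `rdeniro` user from the previous section (a sketch in Python; any language with a Base64 encoder works the same way):

```python
import base64

def basic_auth_header(username: str, password: str) -> str:
    # base64(USERNAME:PASSWORD), as described above
    token = base64.b64encode(f"{username}:{password}".encode("utf-8")).decode("ascii")
    return f"Authorization: Basic {token}"

print(basic_auth_header("rdeniro", "taxidriver"))
# Authorization: Basic cmRlbmlybzp0YXhpZHJpdmVy
```

Most HTTP client libraries build this header for you when given a username and password, as `curl --user` does below.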
[float]
==== Client examples
Using `curl` without basic authentication to create an index has the following result:
[source,shell]
-------------------------------------------------------------------------------
curl -XPUT 'localhost:9200/idx'
-------------------------------------------------------------------------------
[source,json]
-------------------------------------------------------------------------------
{
"error": "AuthenticationException[Missing authentication token]",
"status": 401
}
-------------------------------------------------------------------------------
Since no user is associated with the request above, the request returns an authentication error. Next, use `curl`
with basic auth to create an index as the `rdeniro` user:
[source,shell]
---------------------------------------------------------
curl --user rdeniro:taxidriver -XPUT 'localhost:9200/idx'
---------------------------------------------------------
[source,json]
---------------------------------------------------------
{
"acknowledged": true
}
---------------------------------------------------------
Since the request is executed on behalf of the administrative user `rdeniro`, the create index request authenticates and
authorizes successfully, resulting in normal execution of the request. Creating another index as the `alpacino` user
results in the following error:
[source,shell]
------------------------------------------------------------------------------------------------------------------
curl --user alpacino:godfather -XPUT 'localhost:9200/idx2'
------------------------------------------------------------------------------------------------------------------
[source,json]
------------------------------------------------------------------------------------------------------------------
{
"error": "AuthorizationException[Action [indices:admin/create] is unauthorized for user [alpacino]]",
"status": 403
}
------------------------------------------------------------------------------------------------------------------
As user `alpacino` does not have any index administration rights, the request is rejected with an authorization error.
[float]
=== Next Steps
Now you have a working cluster with authentication and access control enabled.
In the <<authorization, _Authorization_>> section, we explain how to manage users and their roles. The
<<authentication, _Authentication_>> section explains how to use Shield's authentication realms and LDAP integration. The
<<securing-nodes, _Securing Nodes_>> section discusses enabling SSL/TLS encryption for nodes and clients.

[[authorization]]
== Authorization
Shield introduces the concept of _action authorization_ to Elasticsearch. Action authorization restricts the actions
users can execute on the cluster. Shield implements authorization as Role Based Access Control (RBAC), where all
actions are restricted by default. Users are associated with roles that define a set of actions that are allowed
for those users.
[[roles]]
[float]
=== Roles, Permissions and Privileges
Privileges are actions or a set of actions that users may execute in Elasticsearch. For example, the ability to run a
query is a privilege.
A permission is a set of privileges associated with one or more secured objects. For example, a permission could allow
querying or reading all documents of index `i1`. There are two types of secured objects in Elasticsearch -
cluster and indices. Cluster permissions grant access to cluster-wide administrative and monitoring actions. Index
permissions grant data access, including administrative and monitoring actions on specific indices in the cluster.
A role is a named set of permissions. For example, you could define a role as a logging administrator. The logging
administrator is allowed to take all actions on indices named `logs-*`.
As an administrator, you will need to define the roles that you want to use, then assign users to the roles.
[[roles-file]]
[float]
==== The Role Definition File `roles.yml`
Roles are defined in the role definition file `roles.yml` located under Shield's <<shield-config,config>> directory.
This is a YAML file where each entry defines the unique role name and the cluster and indices permissions associated
with it.
[IMPORTANT]
==============================
The `roles.yml` file is managed locally by the node and is not managed globally by the cluster. This means that
with a typical multi-node cluster, the exact same changes need to be applied on each and every node in the cluster.
A safer approach would be to apply the change on one of the nodes and have the `roles.yml` distributed/copied to
all other nodes in the cluster (either manually or using a configuration management system such as Puppet or Chef).
==============================
The following snippet shows an example configuration:
[source,yaml]
-----------------------------------
# All cluster rights
# All operations on all indices
admin:
cluster: all
indices:
'*': all
# Monitoring cluster privileges
# All operations on all indices
power_user:
cluster: monitor
indices:
'*': all
# Only read operations on indices
user:
indices:
'*': read
# Only read operations on indices named events_*
events_user:
indices:
'events_*': read
-----------------------------------
[[valid-role-name]]
NOTE: A valid role name must be at least 1 character and no longer than 30 characters. It must begin with a letter
(`a-z`) or an underscore (`_`). Subsequent characters can be letters, underscores (`_`), digits (`0-9`) or any
of the following symbols: `@`, `-`, `.` or `$`.
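The role name rules above can be expressed as a single regular expression. The `valid_role_name` helper below is an illustrative sketch, not part of Shield:

```shell
# Illustrative check of the role name rules: 1-30 characters, starting with
# a letter (a-z) or underscore, followed by letters, digits, _, @, -, . or $.
valid_role_name() {
  printf '%s' "$1" | grep -Eq '^[a-z_][a-z0-9_@.$-]{0,29}$' \
    && echo valid || echo invalid
}

valid_role_name 'events_user'   # → valid
valid_role_name '9admin'        # → invalid (must not start with a digit)
```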
The above example defines these roles:
|=======================
| `admin` | Has full access (all privileges) on the cluster and full access on all indices in the cluster.
| `power_user` | Has monitoring-only access on the cluster, enabling the user to request cluster metrics, information,
and settings, without the ability to update settings. This user also has full access on all indices in
the cluster.
| `user` | Cannot update or monitor the cluster. Has read-only access to all indices in the cluster.
| `events_user` | Has read-only access to all indices with the `events_` prefix.
|=======================
See the complete list of available <<privileges-list, cluster and indices privileges>>.
[float]
==== Action Level Access Control
In addition to roles based on named privileges, Shield can grant access to specific actions in Elasticsearch. Action-level
access control provides a finer level of granularity than the named privileges.
The role in the following example allows access to document `GET` actions for a specific index and nothing else:
.Example Role Using Action-level Access Control
[source,yaml]
---------------------------------------------------
# Only GET read action on index named events_index
get_user:
indices:
'events_index': 'indices:data/read/get'
---------------------------------------------------
See the complete list of available <<ref-actions-list, cluster and indices actions>>.
TIP: When specifying index names, you can use indices and aliases with their full names or regular expressions that
refer to multiple indices.
* Wildcard (default) - simple wildcard matching where `*` is a placeholder for zero or more characters, `?` is a
placeholder for a single character and `\` may be used as an escape character.
* Regular Expressions - A more powerful syntax for matching more complex patterns. This regular expression is based on
Lucene's regexp automaton syntax. To enable this syntax, it must be wrapped within a pair of forward slashes (`/`).
Any pattern starting with `/` and not ending with `/` is considered to be malformed.
.Example Regular Expressions
[source,yaml]
------------------------------------------------------------------------------------
"foo-bar": all # match the literal `foo-bar`
"foo-*": all # match anything beginning with "foo-"
"logstash-201?-*": all # ? matches any one character
"/.*-201[0-9]-.*/": all # use a regex to match anything containing 2010-2019
"/foo": all # syntax error - missing final /
------------------------------------------------------------------------------------
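The wildcard semantics are the same as the shell's own glob matching, so they can be sketched directly with a `case` statement (the `match` helper below is illustrative, not part of Shield):

```shell
# Sketch of the default wildcard matching: * matches zero or more characters,
# ? matches exactly one character.
match() {
  case "$2" in
    $1) echo match ;;
    *)  echo no-match ;;
  esac
}

match 'logstash-201?-*' 'logstash-2014-05.01'   # → match
match 'foo-*'           'foobar'                # → no-match: the literal "-" is required
```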
TIP: Once the roles are defined, users can then be associated with any number of these roles. In the
<<authentication,next section>> we'll learn more about authentication and see how users can be associated with the
configured roles.

[[authentication]]
== Authentication
Authentication identifies an individual. To gain access to restricted resources, a user must prove their identity, via
passwords, credentials, or some other means (typically referred to as authentication tokens).
[[realms]]
[float]
=== Realms
A _realm_ is an authentication mechanism, which Shield uses to resolve and authenticate users and their roles. Shield
currently provides four realm types:
[horizontal]
_esusers_:: A native authentication system built into Shield and available by default. See <<esusers>>.
_LDAP_:: Authentication via an external Lightweight Directory Access Protocol (LDAP) server. See <<ldap>>.
_Active Directory_:: Authentication via an external Active Directory service. See <<active_directory>>.
_PKI_:: Authentication through the use of trusted X.509 certificates. See <<pki>>.
NOTE: _esusers_, _LDAP_, and _Active Directory_ realms authenticate using the username and password authentication tokens.
Realms live within a _realm chain_ - a prioritized list of configured realms (typically of various types).
The order of the list determines the order in which the realms are consulted. During the authentication process,
Shield consults the realms one at a time. Once one of the realms successfully authenticates the request, the
authentication is considered successful and the authenticated user is associated with the request (which then
proceeds to the authorization phase). If a realm cannot authenticate the request, the next realm in the chain is
consulted. If no realm in the chain can authenticate the request, the authentication is considered unsuccessful and
an authentication error is returned (as HTTP status code `401`).
NOTE: Shield attempts to authenticate to each configured realm sequentially. Some systems (e.g. Active Directory) have a
temporary lock-out period after several successive failed login attempts. If the same username exists in multiple realms,
unintentional account lockouts are possible. For more information, please see <<trouble-shoot-active-directory, here>>.
For example, if `UserA` exists in both Active Directory and esusers, and the Active Directory realm is checked first and
esusers is checked second, an attempt to authenticate as `UserA` in the esusers realm would first attempt to authenticate
against Active Directory and fail, before successfully authenticating against the esusers realm. Because authentication is
verified on each request, the Active Directory realm would be checked - and fail - on each request for `UserA` in the esusers
realm. In this case, while the Shield request completed successfully, the account on Active Directory would have received
several failed login attempts, and that account may become temporarily locked out. Plan the order of your realms accordingly.
The realm chain can be configured in the `elasticsearch.yml` file. When not explicitly configured, a default chain is
created that holds only the `esusers` realm. When explicitly configured, the created chain is an exact reflection
of the configuration (i.e. the chain contains only those configured realms that are enabled).
The following snippet shows an example of realms configuration:
[source,yaml]
----------------------------------------
shield.authc:
realms:
esusers:
type: esusers
order: 0
ldap1:
type: ldap
order: 1
enabled: false
url: 'url_to_ldap1'
...
ldap2:
type: ldap
order: 2
url: 'url_to_ldap2'
...
ad1:
type: active_directory
order: 3
url: 'url_to_ad'
----------------------------------------
As can be seen above, each realm has a unique name that identifies it. There are three settings that are common to all
realms:
* `type` (required) - Identifies the type of the realm (currently `esusers`, `ldap` or `active_directory`). The realm
type determines what other settings the realm should be configured with.
* `order` (optional) - Defines the priority/index of the realm within the realm chain. This will determine when the realm
will be consulted during authentication.
* `enabled` (optional) - When set to `false` the realm will be disabled and will not be added to the realm chain. This is
useful for debugging purposes, where one can remove a realm from the chain without deleting and
losing its configuration.
The realm types can roughly be categorized into two groups:
* `internal` - Realms that are internal to Elasticsearch and don't require any communication with
               external parties - they are fully managed by Shield. There can be at most one configured realm
               per internal realm type. (Currently, only one internal realm type exists - `esusers`).
* `external` - Realms that require interaction with parties/components external to Elasticsearch,
               typically enterprise-grade identity management systems. Unlike `internal` realms, there can be
               as many `external` realms as needed - each with a unique name and its own settings. (Currently
               the only `external` realm types are `ldap` and `active_directory`).
[[anonymous-access]]
[float]
=== Anonymous Access added[1.1.0]
The authentication process can be split into two phases - token extraction and user authentication. During the first
phase (token extraction), the configured realms are asked to extract/resolve an authentication token
from the incoming request. The first realm that finds an authentication token in the request "wins", meaning the found
authentication token is used for authentication (moving to the second phase - user authentication - where each realm
that supports this authentication token type tries to authenticate the user).
If no authentication token is resolved by any of the active realms, the incoming request is considered
to be anonymous.
By default, anonymous requests are rejected and an authentication error is returned (status code `401`). It is possible
to change this behaviour and instruct Shield to associate a default (anonymous) user with the anonymous request. This can
be done by configuring the following settings in the `elasticsearch.yml` file:
[source,yaml]
----------------------------------------
shield.authc:
anonymous:
username: anonymous_user <1>
roles: role1, role2 <2>
authz_exception: true <3>
----------------------------------------
<1> The username/principal of the anonymous user. This setting is optional and will be set to `_es_anonymous_user` by default
when not configured.
<2> The roles that will be associated with the anonymous user. This setting is mandatory - without it, anonymous access
    is disabled (i.e. anonymous requests are rejected and return an authentication error).
<3> When `true`, an HTTP 403 response is returned when the anonymous user does not have the appropriate permissions
    for the requested action. The web browser will not prompt the user to provide credentials to access the requested
    resource. When set to `false`, an HTTP 401 is returned, allowing credentials to be provided for a user with
    the appropriate permissions. If you are using anonymous access in combination with HTTP, setting this to `false` may
    be necessary if your client does not support preemptive basic authentication. This setting is optional and defaults
    to `true`.
include::realms/01-esusers.asciidoc[]
include::realms/02-ldap.asciidoc[]
include::realms/03-active-directory.asciidoc[]
include::realms/04-pki.asciidoc[]

[[securing-nodes]]
== Securing Nodes
Elasticsearch nodes store data that may be confidential. Attacks on the data may come from the network. These attacks
could include sniffing of the data, manipulation of the data, and attempts to gain access to the server and thus the
files storing the data. Securing your nodes with the procedures below helps to reduce risk from network-based attacks.
This section shows how to:
* encrypt traffic to and from Elasticsearch nodes using SSL/TLS,
* require that nodes authenticate new nodes that join the cluster using SSL certificates, and
* make it more difficult for remote attackers to issue any commands to Elasticsearch.
The authentication of new nodes will help prevent a rogue node from joining the cluster and receiving data through
replication.
[[ssl-tls]]
=== Encryption and Certificates
Shield allows for the installation of X.509 certificates that establish trust between nodes. When a client connects to a
node using SSL or TLS, the node will present its certificate to the client, and then as part of the handshake process the
node will prove that it owns the private key linked with the certificate. The client will then determine if the node's
certificate is valid, trusted, and matches the hostname or IP address it is trying to connect to. A node also acts as a
client when connecting to other nodes in the cluster, which means that every node must trust all of the other nodes in
the cluster.
The certificates used for SSL and TLS can be signed by a certificate authority (CA) or self-signed. The type of signing
affects how a client will trust these certificates. Self-signed certificates must be trusted individually, which means
that each node must have every other node's certificate installed. Certificates signed by a CA, can be trusted through
validation that the CA signed the certificate. This means that every node will only need the signing CA certificate
installed to trust the other nodes in the cluster.
The best practice with Shield is to use certificates signed by a CA. Self-signed certificates introduce significant
overhead, as they require each client to trust every self-signed certificate. Self-signed certificates also limit
the elasticity of Elasticsearch, as adding a new node to the cluster requires a restart of every node after
installing the new node's certificate. This overhead is not present when using a CA: a new node only needs a
certificate signed by the CA to establish trust with the other nodes in the cluster.
Many organizations have a CA to sign certificates for each node. If not, see
<<certificate-authority, Appendix - Certificate Authority>> for instructions on setting up a CA.
The following steps will need to be repeated on each node to setup SSL/TLS:
* Install the CA certificate in the node's keystore
* Generate a private key and certificate for the node
* Create a signing request for the new node certificate
* Send the signing request to the CA
* Install the newly signed certificate in the node keystore
The steps in this procedure use the <<keytool,`keytool`>> command-line utility.
WARNING: Nodes that do not have SSL/TLS encryption enabled send passwords in plain text.
=== Set up a keystore
These instructions show how to place a CA certificate and a certificate for the node in a single keystore.
You can optionally store the CA certificate in a separate truststore. The configuration for this is
discussed later in this section.
First obtain the root CA certificate from your certificate authority. This certificate is used to verify that
any node certificate has been signed by the CA. Store this certificate in a keystore as a *trusted certificate*. With
the simplest configuration, Shield uses a keystore with a trusted certificate as a truststore.
The following shows how to create a keystore from a PEM encoded certificate. A _JKS file_ is a Java Key Store file.
It securely stores certificates.
[source,shell]
--------------------------------------------------
keytool -importcert \
-keystore /home/es/config/node01.jks \
-file /Users/Download/cacert.pem <1>
--------------------------------------------------
<1> The Certificate Authority's own certificate.
The keytool command will prompt you for a password, which will be used to protect the integrity of the keystore. You
will need to remember this password as it will be needed for all further interactions with the keystore.
The keystore needs an update when the CA expires.
[[private-key]]
=== Generate a node private key and certificate
This step creates a private key and certificate that the node will use to identify itself. This step must
be done for every node.
`keytool -genkey` can generate a private key and certificate for your node. The following is a typical usage:
[source,shell]
--------------------------------------------------
keytool -genkey \
-alias node01 \ <1>
-keystore node01.jks \ <2>
-keyalg RSA \
-keysize 2048 \
-validity 712 \
-ext san=dns:node01.example.com,ip:192.168.1.1 <3>
--------------------------------------------------
<1> An alias for this public/private key-pair.
<2> The keystore for this node -- will be created.
<3> The `SubjectAlternativeName` list for this host. The `-ext` parameter is optional and can be used to specify
    additional DNS names and IP addresses that the certificate will be valid for. Multiple DNS and IP entries can
    be specified by separating each entry with a comma. If this option is used, *all* names and IP addresses must
    be specified in this list.
This will create an RSA public/private key-pair with a key size of 2048 bits and store it in the `node01.jks` file.
The keystore is protected with the password you enter when prompted (`myPass` in the configuration examples below).
The `-validity 712` argument specifies the number of days that the certificate is valid for - roughly two years, in
this example.
The tool will prompt you for information to include in the certificate.
[IMPORTANT]
.Specifying the Node Identity
==========================
An Elasticsearch node with Shield will verify the hostname contained
in the certificate of each node it connects to. Therefore it is important
that each node's certificate contains the hostname or IP address used to connect
to the node. Hostname verification can be disabled, for more information see
the <<ref-ssl-tls-settings, Configuration Parameters for TLS/SSL>> section.
The recommended way to specify the node identity is by providing all names and
IP addresses of a node as a `SubjectAlternativeName` list using the `-ext` option.
When using a commercial CA, internal DNS names and private IP addresses will not
be accepted as a `SubjectAlternativeName` due to https://cabforum.org/internal-names/[security concerns];
only publicly resolvable DNS names and IP addresses will be accepted. The use of an
internal CA is the most secure option for using private DNS names and IP addresses,
as it allows for node identity to be specified and verified. If you must use a commercial
CA and private DNS names or IP addresses, you will not be able to include the node
identity in the certificate and will need to disable <<ref-ssl-tls-settings, hostname verification>>.
Another way to specify node identity is by using the `CommonName` attribute
of the certificate. The first prompt from keytool, `What is your first and last name?`,
is asking for the `CommonName` attribute of certificate. When using the `CommonName` attribute
for node identity, a DNS name must be used. The rest of the prompts by keytool are for information only.
==========================
At the end, you will be prompted to optionally enter a password. The command line argument specifies the password for
the keystore. This prompt is asking if you want to set a different password that is specific to this certificate.
Doing so may provide some incremental improvement to security.
Here is a sample interaction with `keytool -genkey`
[source, shell]
--------------------------------------------------
What is your first and last name?
[Unknown]: node01.example.com <1>
What is the name of your organizational unit?
[Unknown]: test
What is the name of your organization?
[Unknown]: Elasticsearch
What is the name of your City or Locality?
[Unknown]: Amsterdam
What is the name of your State or Province?
[Unknown]: Amsterdam
What is the two-letter country code for this unit?
[Unknown]: NL
Is CN=node01.example.com, OU=test, O=Elasticsearch, L=Amsterdam, ST=Amsterdam, C=NL correct?
[no]: yes
Enter key password for <mydomain>
(RETURN if same as keystore password):
--------------------------------------------------
<1> The DNS name or hostname of the node must be used here if you do not specify a `SubjectAlternativeName` list using the
`-ext` option.
Now you have a certificate and private key stored in `node01.jks`.
[[generate-csr]]
=== Create a certificate signing request
The next step is to get the node certificate signed by your CA. To do this you must generate a _Certificate Signing
Request_ (CSR) with the `keytool -certreq` command:
[source, shell]
--------------------------------------------------
keytool -certreq \
-alias node01 \ <1>
-keystore node01.jks \
-file node01.csr \
-keyalg rsa \
-ext san=dns:node01.example.com,ip:192.168.1.1 <2>
--------------------------------------------------
<1> The same `alias` that you specified when creating the public/private key-pair in <<private-key>>.
<2> The `SubjectAlternativeName` list for this host. The `-ext` parameter is optional and can be used to specify
additional DNS names and IP Addresses that the certificate will be valid for. Multiple DNS and IP entries can
be specified by separating each entry with a comma. If this option is used, *all* names and ip addresses must
be specified in this list.
The resulting file -- `node01.csr` -- is your _Certificate Signing Request_, or _CSR file_.
==== Send the signing request
Send the CSR file to the Certificate Authority for signing. The Certificate Authority will sign the certificate and
return a signed version of the certificate. See <<sign-csr>> if you are running your own Certificate Authority.
NOTE: When running multiple nodes on the same host, the same signed certificate can be used on each node or a unique
certificate can be requested per node if your CA supports multiple certificates with the same common name.
=== Install the newly signed certificate
Replace the existing unsigned certificate by importing the new signed certificate from your CA into the node keystore:
[source, shell]
--------------------------------------------------
keytool -importcert \
-keystore node01.jks \
-file node01-signed.crt \ <1>
-alias node01 <2>
--------------------------------------------------
<1> The name of the signed certificate file that you received from the CA.
<2> The `alias` must be the same as the alias that you used in <<private-key>>.
NOTE: keytool may misinterpret PEM-encoded certificates that contain extra text headers as DER-encoded certificates,
producing this error: `java.security.cert.CertificateParsingException: invalid DER-encoded certificate data`. The text
headers can be removed from the certificate. The following openssl command strips the text headers:
[source, shell]
--------------------------------------------------
openssl x509 -in node01-signed.crt -out node01-signed-noheaders.crt
--------------------------------------------------
=== Configure the keystores and enable SSL
NOTE: All SSL-related node settings are considered highly sensitive and are therefore not exposed via the
{ref}/cluster-nodes-info.html#cluster-nodes-info[nodes info API].
The next step is to configure the node to enable SSL, to identify itself using
its signed certificate, and to verify the identity of incoming connections.
The settings below should be added to the main `elasticsearch.yml` config file.
==== Node identity
The `node01.jks` keystore contains the certificate that `node01` uses to identify
itself to other nodes in the cluster, to transport clients, and to HTTPS
clients. Add the following to `elasticsearch.yml`:
[source, yaml]
--------------------------------------------------
shield.ssl.keystore.path: /home/es/config/node01.jks <1>
shield.ssl.keystore.password: myPass <2>
--------------------------------------------------
<1> The full path to the node keystore file.
<2> The password used to decrypt the `node01.jks` keystore.
If you specified a different password than the keystore password when executing the `keytool -genkey` command, you will
need to specify that password in the `elasticsearch.yml` configuration file:
[source, yaml]
--------------------------------------------------
shield.ssl.keystore.key_password: myKeyPass <1>
--------------------------------------------------
<1> The password entered at the end of the `keytool -genkey` command
[[create-truststore]]
==== Optional truststore configuration
The truststore holds the trusted CA certificates. Shield will use the keystore as the truststore
by default. You can optionally provide a separate path for the truststore. In this case, Shield
will use the keystore for the node's private key and the configured truststore for trusted certificates.
First obtain the CA certificates that will be trusted. Each of these certificates needs to be imported into the
truststore by running the following command for each CA certificate:
[source,shell]
--------------------------------------------------
keytool -importcert \
-keystore /home/es/config/truststore.jks \ <1>
-file /Users/Download/cacert.pem <2>
--------------------------------------------------
<1> The full path to the truststore file. If the file does not exist it will be created.
<2> A trusted CA certificate.
The keytool command will prompt you for a password, which will be used to protect the integrity of the truststore. You
will need to remember this password as it will be needed for all further interactions with the truststore.
Add the following to `elasticsearch.yml`:
[source, yaml]
--------------------------------------------------
shield.ssl.truststore.path: /home/es/config/truststore.jks <1>
shield.ssl.truststore.password: myPass <2>
--------------------------------------------------
<1> The full path to the truststore file.
<2> The password used to decrypt the `truststore.jks` keystore.
[[ssl-transport]]
==== Enable SSL on the transport layer
Enable SSL on the transport networking layer to ensure that communication between nodes is encrypted. Add the following
value to the `elasticsearch.yml` configuration file:
[source, yaml]
--------------------------------------------------
shield.transport.ssl: true
--------------------------------------------------
Regardless of this setting, transport clients can only connect to the cluster with a valid username and password.
[[disable-multicast]]
==== Disable multicast
Multicast {ref}/modules-discovery.html[discovery] is
not supported with Shield. To properly secure node communications, disable multicast by setting the following values
in the `elasticsearch.yml` configuration file:
[source, yaml]
--------------------------------------------------
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["node01:9300", "node02:9301"]
--------------------------------------------------
You can learn more about unicast configuration in the {ref}/modules-discovery.html[Zen Discovery] documentation.
[[ssl-http]]
==== Enable SSL on the HTTP layer
SSL should be enabled on the HTTP networking layer to ensure that communication between HTTP clients and the cluster is
encrypted:
[source, yaml]
--------------------------------------------------
shield.http.ssl: true
--------------------------------------------------
Regardless of this setting, HTTP clients can only connect to the cluster with a valid username and password.
Congratulations! At this point, you have a node with encryption enabled for both HTTPS and the transport layers.
Your node will correctly present its certificate to other nodes or clients when connecting. There are optional,
more advanced features you may use to further configure or protect your node. They are described in the following
paragraphs.
[[ciphers]]
=== Enabling Cipher Suites for Stronger Encryption
The SSL/TLS protocols use a cipher suite that determines the strength of encryption used to protect the data. You may
want to increase the strength of encryption when using an Oracle JVM; the IcedTea OpenJDK ships without these
restrictions in place. This step is not required to successfully use Shield.
The Java Cryptography Extension (JCE) Unlimited Strength Jurisdiction Policy Files enable additional cipher suites for
Java in a separate JAR file that you need to add to your Java installation. You can download this JAR file from
Oracle's http://www.oracle.com/technetwork/java/javase/downloads/index.html[download page]. The JCE Unlimited Strength
Jurisdiction Policy Files are required for encryption with key lengths greater than 128 bits, such as 256-bit AES
encryption.
After installation, all cipher suites in the JCE are available for use. To enable the use of stronger cipher suites with
Shield, configure the `ciphers` parameter. See the <<ref-ssl-tls-settings, Configuration Parameters for TLS/SSL>> section
of this document for specific parameter information.
NOTE: The JCE Unlimited Strength Jurisdiction Policy Files must be installed on all nodes to establish an improved level
of encryption strength.
[[separating-node-client-traffic]]
=== Separating node to node and client traffic
Elasticsearch supports so-called {ref}/modules-transport.html#_tcp_transport_profiles[tcp transport profiles],
which allow Elasticsearch to bind to several ports and addresses. Shield extends this functionality to enhance the
security of the cluster by enabling the separation of node-to-node transport traffic from client transport traffic. This
is important if the client transport traffic is not trusted and could potentially be malicious. To separate the
node-to-node traffic from the client traffic, add the following to `elasticsearch.yml`:
[source, yaml]
--------------------------------------------------
transport.profiles.client<1>:
port: 9500-9600 <2>
shield:
type: client <3>
--------------------------------------------------
<1> `client` is the name of this example profile
<2> The port range that transport clients will use to communicate with this cluster
<3> A type of `client` enables additional filters for added security by denying internal cluster operations (e.g. shard-level
actions and ping requests)
If your environment supports it, you can use an internal network for node-to-node traffic and a public network for
client traffic by adding the following to `elasticsearch.yml`:
[source, yaml]
--------------------------------------------------
transport.profiles.default.bind_host: 10.0.0.1 <1>
transport.profiles.client.bind_host: 1.1.1.1 <2>
--------------------------------------------------
<1> The bind address for the network that will be used for node to node communication
<2> The bind address for the network used for client communication
If separate networks are not available, then <<ip-filtering, IP Filtering>> can be enabled to limit access to the profiles.
TCP transport profiles also allow SSL to be enabled on a per-profile basis. This is useful if you have a secured network
for node-to-node communication, but the client is on an unsecured network. To enable SSL on a client profile when SSL is
disabled for node-to-node communication, add the following to `elasticsearch.yml`:
[source, yaml]
--------------------------------------------------
transport.profiles.client.ssl: true <1>
--------------------------------------------------
<1> This enables SSL on the client profile. The default value for this setting is the value of `shield.transport.ssl`.
When using SSL for transport, a different set of certificates can also be used for the client traffic by adding the
following to `elasticsearch.yml`:
[source, yaml]
--------------------------------------------------
transport.profiles.client.shield.truststore:
path: /path/to/another/truststore
password: changeme
transport.profiles.client.shield.keystore:
path: /path/to/another/keystore
password: changeme
--------------------------------------------------
To change the default behavior that requires certificates for transport clients, set the following value in the `elasticsearch.yml`
file:
[source, yaml]
--------------------------------------------------
transport.profiles.client.shield.ssl.client.auth: no
--------------------------------------------------
This setting keeps certificate authentication active for node-to-node traffic, but removes the requirement to distribute
a signed certificate to transport clients. Please see the <<transport-client, Transport Client>> section.
[[ip-filtering]]
=== IP filtering
You can apply IP filtering to application clients, node clients, or transport clients, in addition to other nodes that
are attempting to join the cluster.
If a node's IP address is on the blacklist, Shield still accepts the TCP connection to Elasticsearch, but drops it
immediately; no requests are processed.
NOTE: Elasticsearch installations are not designed to be publicly accessible over the Internet. IP Filtering and the
other security capabilities of Shield do not change this condition.
==== Node filtering
Shield provides an access control feature that allows or rejects hosts, domains, or subnets.
===== Configuration setting
IP filtering is configured in the `elasticsearch.yml` file.
===== Configuration Syntax
The IP filtering configuration consists of one `allow` and one `deny` statement, each of which may contain an array of values. The `allow` rule takes priority over the `deny` rule.
.Example 1. Allow/Deny Statement Priority
[source,yaml]
--------------------------------------------------
shield.transport.filter.allow: "192.168.0.1"
shield.transport.filter.deny: "192.168.0.0/24"
--------------------------------------------------
The `_all` keyword denies all connections that are not explicitly allowed earlier in the file.
.Example 2. `_all` Keyword Usage
[source,yaml]
--------------------------------------------------
shield.transport.filter.allow: [ "192.168.0.1", "192.168.0.2", "192.168.0.3", "192.168.0.4" ]
shield.transport.filter.deny: _all
--------------------------------------------------
IP Filtering configuration files support IPv6 addresses.
.Example 3. IPv6 Filtering
[source,yaml]
--------------------------------------------------
shield.transport.filter.allow: "2001:0db8:1234::/48"
shield.transport.filter.deny: "1234:0db8:85a3:0000:0000:8a2e:0370:7334"
--------------------------------------------------
Shield supports hostname filtering when DNS lookups are available.
.Example 4. Hostname Filtering
[source,yaml]
--------------------------------------------------
shield.transport.filter.allow: localhost
shield.transport.filter.deny: '*.google.com'
--------------------------------------------------
==== Disabling IP Filtering
Disabling IP filtering can slightly improve performance under some conditions. To disable IP filtering entirely, set the
value of the `shield.transport.filter.enabled` attribute in the `elasticsearch.yml` configuration file to `false`.
.Example 5. Disabled IP Filtering
[source,yaml]
--------------------------------------------------
shield.transport.filter.enabled: false
--------------------------------------------------
You can also disable IP filtering for the transport protocol and enable it for HTTP only:
.Example 6. Enable HTTP based IP Filtering
[source,yaml]
--------------------------------------------------
shield.transport.filter.enabled: false
shield.http.filter.enabled: true
--------------------------------------------------
==== Support for TCP transport profiles
To support bindings on multiple hosts, you can specify the profile name as a prefix in order to allow or deny connections based on profiles:
.Example 7. Profile based filtering
[source,yaml]
--------------------------------------------------
shield.transport.filter.allow: 172.16.0.0/24
shield.transport.filter.deny: _all
transport.profiles.client.shield.filter.allow: 192.168.0.0/24
transport.profiles.client.shield.filter.deny: _all
--------------------------------------------------
NOTE: When you do not specify a profile, `default` is used automatically.
==== Support for HTTP
You may want to apply different filtering to the transport and HTTP protocols:
.Example 8. HTTP only filtering
[source,yaml]
--------------------------------------------------
shield.transport.filter.allow: localhost
shield.transport.filter.deny: '*.google.com'
shield.http.filter.allow: 172.16.0.0/16
shield.http.filter.deny: _all
--------------------------------------------------
[[dynamic-ip-filtering]]
==== Dynamically updating IP filter settings added[1.1.0]
In environments with highly dynamic IP addresses, such as cloud-based hosting, it is very hard to know the IP addresses up front when provisioning a machine. Instead of changing the configuration file and restarting the node, you can use the Cluster Update Settings API:
[source,json]
--------------------------------------------------
curl -XPUT localhost:9200/_cluster/settings -d '{
"persistent" : {
"shield.transport.filter.allow" : "172.16.0.0/24"
}
}'
--------------------------------------------------
You can also disable filtering completely by setting `shield.transport.filter.enabled`:
[source,json]
--------------------------------------------------
curl -XPUT localhost:9200/_cluster/settings -d '{
"persistent" : {
"shield.transport.filter.enabled" : false
}
}'
--------------------------------------------------
NOTE: To prevent you from locking yourself out, the default bound transport address is never denied. This means you can always SSH into a system and use curl to apply changes.

[[auditing]]
== Auditing
[IMPORTANT]
====
Audit logs are **disabled** by default. To enable this functionality the following setting should be added to the
`elasticsearch.yml` file:
[source,yaml]
----------------------------
shield.audit.enabled: true
----------------------------
====
The audit functionality keeps track of important events occurring in Elasticsearch, primarily around security
concerns. Keeping track of and persisting these events is essential for any secured environment and potentially provides
evidence of suspicious or malicious activity on the Elasticsearch cluster.
Shield provides two ways to output these events: to a dedicated `access.log` file stored on the host's file system, or
to an Elasticsearch index on the same or a separate cluster. These options are not mutually exclusive. For example, both
options can be enabled through an entry in the `elasticsearch.yml` file:
[source,yaml]
----------------------------
shield.audit.outputs: [index, logfile]
----------------------------
It is expected that the `index` output type will be used in conjunction with the `logfile` output type. This is
because the `index` output type can lose messages if the target index is unavailable. For this reason, it is recommended
that, if auditing is enabled, then the `logfile` output type should be used as an official record of events. The `index`
output type can be enabled as a convenience to allow historical browsing of events.
Note that because audit events are batched together before being indexed, they may not appear immediately.
Refer to the `shield.audit.index.flush_interval` setting below for how to modify the frequency with which
batched events are flushed.
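For example, to flush batched events more frequently than the default, you could lower the interval in `elasticsearch.yml` (the value below is illustrative):

[source,yaml]
----------------------------
shield.audit.index.flush_interval: 500ms
----------------------------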
[float]
=== Log Entry Types
Each audit-related event that occurs is represented by a single log entry of a specific type (the type represents the
type of event that occurred). These are the possible log entry types:
* `anonymous_access_denied` is logged when a request is denied due to a missing authentication token.
* `authentication_failed` is logged when the authentication token cannot be matched to a known user.
* `authentication_failed [<realm>]` is logged for every realm that fails to present a valid authentication token.
The value of _<realm>_ is the realm type.
* `access_denied` is logged when an authenticated user attempts an action the user does not have the
<<reference,privilege>> to perform.
* `access_granted` is logged when an authenticated user attempts an action the user has the correct
privilege to perform. At the TRACE level all system (internal) actions are logged as
well (at all other levels they are not logged, to avoid cluttering the logs).
* `tampered_request` is logged when a request is detected to have been tampered with (typically relating to `search/scroll` requests when the scroll ID is believed to have been tampered with).
* `connection_granted` is logged when an incoming TCP connection passes the IP filtering for a specific profile.
* `connection_denied` is logged when an incoming TCP connection does not pass the IP filtering for a specific profile.
To avoid needless proliferation of log entries, Shield enables you to control which entry types are logged. This can
be done by setting the logging level. The following table lists the log entry types that are logged at each of the
possible log levels:
.Log Entry Types and Levels
[options="header"]
|======
| Log Level | Entry Type
| `ERROR` | `authentication_failed`, `access_denied`, `tampered_request`, `connection_denied`
| `WARN` | `authentication_failed`, `access_denied`, `tampered_request`, `connection_denied`, `anonymous_access_denied`
| `INFO` | `authentication_failed`, `access_denied`, `tampered_request`, `connection_denied`, `anonymous_access_denied`, `access_granted`
| `DEBUG` | (doesn't output additional entry types beyond `INFO`, but extends the information emitted for each entry; see <<audit-log-entry-format, Log Entry Format>> below)
| `TRACE` | `authentication_failed`, `access_denied`, `tampered_request`, `connection_denied`, `anonymous_access_denied`, `access_granted`, `connection_granted`, `authentication_failed [<realm>]`. In addition, internal system requests (self-management requests triggered by Elasticsearch itself) are also logged for the `access_granted` entry type.
|======
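The level itself is controlled by the `shield.audit.logfile` logger in `logging.yml` (its default configuration is shown in full further below). For example, to restrict the audit log to the `WARN`-level entry types from the table above:

[source,yaml]
----------------------------
logger:
  shield.audit.logfile: WARN, access_log
----------------------------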
[float]
[[audit-log-entry-format]]
=== Log Entry Format
As mentioned above, every log entry represents an event that occurred in the system. As such, each entry is associated with
a timestamp (at which the event occurred), the component/layer the event is associated with, and the entry/event type. In
addition, every log entry (depending on its type) carries additional information about the event.
The format of a log entry is shown below:
[source,txt]
----------------------------------------------------------------------------
[<timestamp>] [<local_node_info>] [<layer>] [<entry_type>] <attribute_list>
----------------------------------------------------------------------------
Where:
* `<timestamp>` - the timestamp of the entry (in the format configured in `logging.yml` as shown above)
* `<local_node_info>` - additional information about the local node that this log entry is printed from (the <<audit-log-entry-local-node-info, table below>> shows how this information can be controlled via settings)
* `<layer>` - the layer this entry relates to. Can be `rest`, `transport` or `ip_filter`
* `<entry_type>` - the type of the entry as discussed above. Can be `anonymous_access_denied`, `authentication_failed`,
`access_denied`, `access_granted`, `tampered_request`, `connection_granted` or `connection_denied`.
* `<attribute_list>` - A comma-separated list of attributes carrying data relevant to the event (formatted as `attr1=[val1], attr2=[val2],...`)
[[audit-log-entry-local-node-info]]
.Local Node Info Settings
[options="header"]
|======
| Name | Default | Description
| `shield.audit.logfile.prefix.emit_node_name` | true | When set to `true`, the local node's name will be emitted
| `shield.audit.logfile.prefix.emit_node_host_address` | false | When set to `true`, the local node's IP address will be emitted
| `shield.audit.logfile.prefix.emit_node_host_name` | false | When set to `true`, the local node's host name will be emitted
|======
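For example, to include the node's IP address in the log entry prefix in addition to its name, add the following to `elasticsearch.yml` (a minimal sketch using the settings from the table above):

[source,yaml]
----------------------------
shield.audit.logfile.prefix.emit_node_name: true
shield.audit.logfile.prefix.emit_node_host_address: true
----------------------------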
The following tables describe the possible attributes each entry type can carry (the attributes that will be available depend on the configured log level):
.`[rest] [anonymous_access_denied]` attributes
[options="header"]
|======
| Attribute | Minimum Log Level | Description
| `origin_address`     | WARN               | The address the REST request originated from
| `uri` | WARN | The REST endpoint URI
| `request_body` | DEBUG | The body of the request
|======
.`[rest] [authentication_failed]` attributes
[options="header"]
|======
| Attribute | Minimum Log Level | Description
| `origin_address`     | ERROR              | The address the REST request originated from
| `principal` | ERROR | The principal (username) that failed to authenticate
| `uri` | ERROR | The REST endpoint URI
| `request_body` | DEBUG | The body of the request
| `realm` | TRACE | The realm that failed to authenticate the user. NOTE: A separate entry will be printed for each of the consulted realms
|======
.`[transport] [anonymous_access_denied]` attributes
[options="header"]
|======
| Attribute | Minimum Log Level | Description
| `origin_type`        | WARN               | The type of origin the request came from. Can be `rest` (the request originated from a REST API request), `transport` (the request was received on the transport channel), or `local_node` (the local node issued the request)
| `origin_address`     | WARN               | The address the request originated from
| `action` | WARN | The name of the action that was executed
| `request` | DEBUG | The type of the request that was executed
| `indices` | WARN | A comma-separated list of indices this request relates to (when applicable)
|======
.`[transport] [authentication_failed]` attributes
[options="header"]
|======
| Attribute | Minimum Log Level | Description
| `origin_type`        | ERROR              | The type of origin the request came from. Can be `rest` (the request originated from a REST API request), `transport` (the request was received on the transport channel), or `local_node` (the local node issued the request)
| `origin_address`     | ERROR              | The address the request originated from
| `principal` | ERROR | The principal (username) that failed to authenticate
| `action` | ERROR | The name of the action that was executed
| `request` | DEBUG | The type of the request that was executed
| `indices` | ERROR | A comma-separated list of indices this request relates to (when applicable)
| `realm` | TRACE | The realm that failed to authenticate the user. NOTE: A separate entry will be printed for each of the consulted realms
|======
.`[transport] [access_granted]` attributes
[options="header"]
|======
| Attribute | Minimum Log Level | Description
| `origin_type`        | INFO               | The type of origin the request came from. Can be `rest` (the request originated from a REST API request), `transport` (the request was received on the transport channel), or `local_node` (the local node issued the request)
| `origin_address`     | INFO               | The address the request originated from
| `principal`          | INFO               | The principal (username) that was granted access
| `action` | INFO | The name of the action that was executed
| `request` | DEBUG | The type of the request that was executed
| `indices` | INFO | A comma-separated list of indices this request relates to (when applicable)
|======
.`[transport] [access_denied]` attributes
[options="header"]
|======
| Attribute | Minimum Log Level | Description
| `origin_type`        | ERROR              | The type of origin the request came from. Can be `rest` (the request originated from a REST API request), `transport` (the request was received on the transport channel), or `local_node` (the local node issued the request)
| `origin_address`     | ERROR              | The address the request originated from
| `principal`          | ERROR              | The principal (username) that was denied the action
| `action` | ERROR | The name of the action that was executed
| `request` | DEBUG | The type of the request that was executed
| `indices` | ERROR | A comma-separated list of indices this request relates to (when applicable)
|======
.`[transport] [tampered_request]` attributes
[options="header"]
|======
| Attribute | Minimum Log Level | Description
| `origin_type`        | ERROR              | The type of origin the request came from. Can be `rest` (the request originated from a REST API request), `transport` (the request was received on the transport channel), or `local_node` (the local node issued the request)
| `origin_address`     | ERROR              | The address the request originated from
| `principal`          | ERROR              | The principal (username) that issued the request
| `action` | ERROR | The name of the action that was executed
| `request` | DEBUG | The type of the request that was executed
| `indices` | ERROR | A comma-separated list of indices this request relates to (when applicable)
|======
.`[ip_filter] [connection_granted]` attributes
[options="header"]
|======
| Attribute | Minimum Log Level | Description
| `origin_address`     | TRACE              | The address the connection originated from
| `transport_profile`  | TRACE              | The name of the transport profile the connection was made against
| `rule` | TRACE | The IP filtering rule that granted the request
|======
.`[ip_filter] [connection_denied]` attributes
[options="header"]
|======
| Attribute | Minimum Log Level | Description
| `origin_address`     | ERROR              | The address the connection originated from
| `transport_profile`  | ERROR              | The name of the transport profile the connection was made against
| `rule` | ERROR | The IP filtering rule that denied the request
|======
[float]
=== Audit Logs Settings
As mentioned above, the audit logs are configured in the `logging.yml` file located in Shield's <<shield-config, config>>
directory. The following snippet shows the default logging configuration:
[[logging-file]]
.Default `logging.yml` File
[source,yaml]
----
logger:
shield.audit.logfile: INFO, access_log
additivity:
shield.audit.logfile: false
appender:
access_log:
type: dailyRollingFile
file: ${path.logs}/${cluster.name}-access.log
datePattern: "'.'yyyy-MM-dd"
layout:
type: pattern
conversionPattern: "[%d{ISO8601}] %m%n"
----
As shown above, by default audit information is appended to the `access.log` file located in the
standard Elasticsearch `logs` directory (typically `$ES_HOME/logs`).
[float]
[[audit-index]]
=== Storing Audit Logs in an Elasticsearch Index
It is possible to store audit logs in an Elasticsearch index. This index can be either on the same cluster, or on
a different cluster (see below). Several settings in `elasticsearch.yml` control this behavior.
.`audit log indexing configuration`
[options="header"]
|======
| Attribute | Default Setting | Description
| `shield.audit.outputs` | `logfile` | Must be set to *index* or *[index, logfile]* to enable
| `shield.audit.index.bulk_size` | `1000` | Controls how many audit events will be batched into a single write
| `shield.audit.index.flush_interval` | `1s` | Controls how often to flush buffered events into the index
| `shield.audit.index.rollover` | `daily` | Controls how often to roll over to a new index: `hourly`, `daily`, `weekly`, or `monthly`.
| `shield.audit.index.events.include` | `anonymous_access_denied, authentication_failed, access_granted, access_denied, tampered_request, connection_granted, connection_denied`| The audit events to be indexed. Valid values are `anonymous_access_denied, authentication_failed, access_granted, access_denied, tampered_request, connection_granted, connection_denied`, `system_access_granted`. `_all` is a special value that includes all types.
| `shield.audit.index.events.exclude` | `system_access_granted` | The audit events to exclude from indexing. By default, `system_access_granted` events are excluded; enabling these events results in every internal node communication being indexed, which will make the index size much larger.
|======
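Putting a few of these settings together, an `elasticsearch.yml` entry that enables both outputs, batches up to 5000 events per write, and rolls the index over weekly could look like this (the values are illustrative, not recommendations):

[source,yaml]
----------------------------
shield.audit.outputs: [index, logfile]
shield.audit.index.bulk_size: 5000
shield.audit.index.rollover: weekly
----------------------------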
.audit index settings
The settings for the index that the events are stored in can also be configured. The index settings should be placed under
the `shield.audit.index.settings` namespace. For example, the following sets the number of shards and replicas to 1 for
the audit indices:
[source,yaml]
----------------------------
shield.audit.index.settings:
index:
number_of_shards: 1
number_of_replicas: 1
----------------------------
[float]
=== Forwarding Audit Logs to a Remote Cluster
To have audit events stored in a remote Elasticsearch cluster, the following additional options are available.
.`remote audit log indexing configuration`
[options="header"]
|======
| Attribute | Default Setting | Description
| `shield.audit.index.client.hosts` | None | Comma-separated list of `host:port` pairs. These hosts should be nodes in the cluster to which you want to index.
| `shield.audit.index.client.cluster.name` | None | The name of the remote cluster.
| `shield.audit.index.client.shield.user` | None | The `username:password` pair used to authenticate with the remote cluster.
|======
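For example, to ship audit events to a separate monitoring cluster (the addresses, cluster name, and credentials below are placeholders):

[source,yaml]
----------------------------
shield.audit.index.client.hosts: 10.1.1.10:9300, 10.1.1.11:9300
shield.audit.index.client.cluster.name: monitoring-cluster
shield.audit.index.client.shield.user: auditor:s3cr3tpassword
----------------------------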
Additional settings may be passed to the remote client by placing them under the `shield.audit.index.client` namespace.
For example, to allow the remote client to discover all of the nodes in the remote cluster you could set
the *client.transport.sniff* option.
[source,yaml]
----------------------------
shield.audit.index.client.transport.sniff: true
----------------------------

[[clients]]
== Integrating Shield with clients
You will need to update the configuration of several clients so that they work with the Shield security plugin. The jump list in
the right sidebar links to configuration information for the clients that support Shield.
include::clients/java.asciidoc[]
include::clients/http.asciidoc[]
include::clients/logstash.asciidoc[]
include::clients/marvel.asciidoc[]
include::clients/kibana.asciidoc[]
include::clients/hadoop.asciidoc[]

include::appendices/01-certificate-authority.asciidoc[]
include::appendices/02-license-management.asciidoc[]
include::appendices/03-limitations.asciidoc[]
include::appendices/04-securing-aliases.asciidoc[]
include::appendices/05-tribe-node.asciidoc[]
include::appendices/06-example.asciidoc[]
include::appendices/07-trouble-shooting.asciidoc[]
include::appendices/08-reference.asciidoc[]
include::appendices/09-release-notes.asciidoc[]

[[certificate-authority]]
== Appendix 1. Running a Certificate Authority
A Certificate Authority (CA) can greatly simplify managing trust. Instead of trusting hundreds of certificates
individually, a client only needs to trust the certificate from the CA. When the CA signs other node certificates,
nodes that trust the CA also trust other nodes with certificates signed by the CA.
NOTE: This procedure is an example of how to set up a CA and cannot universally address a wide array of security needs.
To properly secure a production site, consult your organization's security experts to discuss requirements.
To run a CA, generate a public and private key, and wrap the public key in a certificate that clients will trust.
A node requesting a signed certificate sends a _Certificate Signing Request_ (CSR). Your CA signs the CSR, producing a newly
signed certificate that you install on the node.
IMPORTANT: Because a Certificate Authority is a central point for trust, the private keys to the CA must be protected
from compromise.
=== Setting up a CA
To set up a CA, generate a private and public key pair and build a certificate from the public key. This procedure
uses OpenSSL to create the CA certificate and sign CSRs. First, set up a file structure and configuration template for
the CA.
==== Creating the Certificate Authority
Create the `ca` directory along with the `private`, `certs`, and `conf` subdirectories, then populate the required
`serial` and `index.txt` files.
[source,shell]
--------------------------------------------------
mkdir -p ca/private ca/certs ca/conf
cd ca
echo '01' > serial
touch index.txt
--------------------------------------------------
A configuration template file specifies several configuration settings that cannot be passed on the command line.
The following sample configuration file highlights fields of particular interest.
Create the `ca/conf/caconfig.cnf` file with contents similar to the following:
[source,shell]
-------------------------------------------------------------------------------------
#..................................
[ ca ]
default_ca = CA_default
[ CA_default ]
copy_extensions = copy <1>
dir = /PATH/TO/YOUR/DIR/ca <2>
serial = $dir/serial
database = $dir/index.txt
new_certs_dir = $dir/certs
certificate = $dir/certs/cacert.pem
private_key = $dir/private/cakey.pem
default_days = 712 <3>
default_md = sha256
preserve = no
email_in_dn = no
x509_extensions = v3_ca
name_opt = ca_default
cert_opt = ca_default
policy = policy_anything
[ policy_anything ]
countryName = optional
stateOrProvinceName = optional
localityName = optional
organizationName = optional
organizationalUnitName = optional
commonName = supplied
emailAddress = optional
[ req ]
default_bits = 2048 # Size of keys
default_keyfile = key.pem # name of generated keys
default_md = sha256 # message digest algorithm
string_mask = nombstr # permitted characters
distinguished_name = req_distinguished_name
req_extensions = v3_req
[ req_distinguished_name ]
# Variable name Prompt string
#------------------------- ----------------------------------
0.organizationName = Organization Name (company)
organizationalUnitName = Organizational Unit Name (department, division)
emailAddress = Email Address
emailAddress_max = 40
localityName = Locality Name (city, district)
stateOrProvinceName = State or Province Name (full name)
countryName = Country Name (2 letter code)
countryName_min = 2
countryName_max = 2
commonName = Common Name (hostname, IP, or your name)
commonName_max = 64
# Default values for the above, for consistency and less typing.
# Variable name Value
#------------------------ ------------------------------
0.organizationName_default = Elasticsearch Test Org <4>
localityName_default = Amsterdam
stateOrProvinceName_default = Amsterdam
countryName_default = NL
emailAddress_default = cacerttest@YOUR.COMPANY.TLD
[ v3_ca ]
basicConstraints = CA:TRUE
subjectKeyIdentifier = hash
authorityKeyIdentifier = keyid:always,issuer:always
[ v3_req ]
basicConstraints = CA:FALSE
subjectKeyIdentifier = hash
---------------------------------------------------------------------------------------
<1> Copy extensions: Copies all X509 V3 extensions from a Certificate Signing Request into the signed certificate.
With the value set to `copy`, you need to ensure the extensions and their values are valid for the certificate
being requested prior to signing the certificate.
<2> CA directory: Add the full path to this newly created CA
<3> Certificate validity period: The default number of days that a certificate signed by this CA is valid for. Note that
certificates signed by a CA must expire before the CA certificate expires.
<4> Certificate Defaults: The `OrganizationName`, `localityName`, `stateOrProvinceName`, `countryName`, and
`emailAddress` fields are informational. The settings in the above example are the defaults for these values.
=== Create a CA Certificate
In the `ca` directory, create the CA certificate and export the certificate. The following command creates and signs
the CA certificate, resulting in a _self-signed_ certificate that establishes the CA as an authority.
[source,shell]
------------------------------------------------------------------------------
openssl req -new -x509 -extensions v3_ca \
-keyout private/cakey.pem \ <1>
-out certs/cacert.pem \ <2>
-days 1460 \ <3>
-config conf/caconfig.cnf
------------------------------------------------------------------------------
<1> The path to the file where the private key is stored.
<2> The path to the file where the CA certificate is stored.
<3> The duration, in days, that the CA certificate is valid. After the expiration, trust in the CA is revoked and
requires generation of a new CA certificate and re-signing of certificates.
The command prompts you to supply information to place in the certificate. You will have to pick a PEM passphrase to
encrypt the private key for your CA.
WARNING: You cannot recover the CA without this passphrase.
The following shows a sample interaction with the command above:
[source,shell]
------------------------------------------------------------------------------------------------------------------------
openssl req -new -x509 -extensions v3_ca -keyout private/cakey.pem -out certs/cacert.pem -days 1460 -config \
conf/caconfig.cnf
Generating a 2048 bit RSA private key
.....................++++++
.......++++++
writing new private key to 'private/cakey.pem'
Enter PEM pass phrase:
Verifying - Enter PEM pass phrase:
#-----
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
#-----
Organization Name (company) [Elasticsearch Test Org]:
Organizational Unit Name (department, division) []:.
Email Address [cacerttest@YOUR.COMPANY.TLD]:.
Locality Name (city, district) [Amsterdam]:.
State or Province Name (full name) [Amsterdam]:.
Country Name (2 letter code) [NL]:.
Common Name (hostname, IP, or your name) []:Elasticsearch Test CA
------------------------------------------------------------------------------------------------------------------------
You now have a CA private key and a CA certificate (which includes the public key). You can now distribute the CA
certificate and sign CSRs.
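Before distributing the CA certificate, you can inspect it with `openssl x509` to confirm the subject and validity
period (assuming the output paths used in the command above):

[source,shell]
------------------------------------------------------------------------------
openssl x509 -in certs/cacert.pem -noout -subject -dates
------------------------------------------------------------------------------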
[[sign-csr]]
==== Signing a CSR
Signing a certificate with the CA means that the CA vouches for the owner of the certificate. The private key that is
linked to the certificate proves certificate ownership; the CSR itself contains the public key and identity information.
Signing a CSR results in a new certificate that, together with the CA certificate and the signature, forms a
_certificate chain_. Send the certificate chain back to the private key's holder for use on the node.
TIP: If you do not yet have a CSR, you need to follow the steps described in <<private-key>> and <<generate-csr>>
before continuing.
The following commands sign the CSR with the CA:
[source,shell]
-----------------------------------------------------------------------------
openssl ca -in node01.csr -notext -out node01-signed.crt -config conf/caconfig.cnf -extensions v3_req
-----------------------------------------------------------------------------
The newly signed certificate chain `node01-signed.crt` can now be sent to the node to be imported back into its
keystore.
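Before sending the certificate back, you can optionally check that it verifies against the CA (assuming the file names
used above):

[source,shell]
-----------------------------------------------------------------------------
openssl verify -CAfile certs/cacert.pem node01-signed.crt
-----------------------------------------------------------------------------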
NOTE: If you plan on allowing more than one certificate per common name, OpenSSL must be configured to allow non-unique
subjects. This is necessary when running multiple nodes on a single host and requesting unique certificates per node.
Edit the `ca/index.txt.attr` file and ensure the `unique_subject` line matches below:
[source, shell]
-----------------------
unique_subject = no
-----------------------
These steps provide you with a basic CA that can sign certificates for your Shield nodes.
OpenSSL is an extremely powerful tool and there are many more options available for your certification strategy,
such as intermediate authorities and restrictions on the use of certificates. There are many tutorials on the internet
for these advanced options, and the OpenSSL website details all the intricacies.
[[license-management]]
== Appendix 2. License Management
[float]
==== Installing The License Plugin
To install the license plugin, you'll need to run the following command:
[source,shell]
----------------------------------------------------------
bin/plugin -i elasticsearch/license/latest
----------------------------------------------------------
If your server doesn't have direct Internet access, you can also download the plugin separately and install
it manually by following these steps:
1. Download the plugin package from https://download.elastic.co/elasticsearch/license/license-latest.zip
2. Transfer the compressed file to your server, then install the plugin using the `bin/plugin` script:
[source,shell]
----------------------------------------------------
bin/plugin -i license -u file://PATH_TO_ZIP_FILE <1>
----------------------------------------------------
<1> URI to license plugin zip distribution file (e.g. `file:///path/to/file/license-latest.zip`,
note the three slashes at the beginning)
[[installing-license]]
[float]
==== Installing A License
When you install Shield for the first time, the license plugin is the minimum requirement for Shield to work:
you can simply start the node and everything works as expected. The first time the node starts, a 30-day
trial license is created automatically, enabling Shield to be fully operational. Within these 30 days, you
can replace the trial license with the license provided to you upon purchase. The license can be updated at
runtime (no need to shut down the nodes) using a dedicated API.
IMPORTANT: With a valid license, Shield will be fully operational. Upon license expiry, Shield will operate in a
degraded mode, where cluster health, cluster stats, and index stats APIs will be blocked. All other operations will
continue operating normally. Find out more about <<license-expiration, Shield license expiration>>.
The license itself is a _JSON_ file containing all of the license information (e.g. feature name, expiry date, etc.).
To install or update the license, use the following REST API:
[source,shell]
-----------------------------------------------------------------------
curl -XPUT -u admin 'http://<host>:<port>/_licenses' -d @license.json
-----------------------------------------------------------------------
Where:
* `<host>` is the hostname of the Elasticsearch node (`localhost` if executing locally)
* `<port>` is the HTTP port (defaults to `9200`)
* `license.json` is the license JSON file
NOTE: The put license API is protected under the cluster admin privilege, therefore it has to be executed
by a user with the appropriate permissions.
[float]
=== Listing Currently Installed Licenses
You can list all currently installed licenses by executing the following REST API:
[source,shell]
-----------------------------------------------------
curl -XGET -u admin:password 'http://<host>:<port>/_licenses'
-----------------------------------------------------
The response is a JSON document listing all installed licenses. In the case of Shield, an entry like the following is
shown:
[source,json]
--------------------------------------------
{
  "licenses" : [
    ...
    {
      "status" : "active",
      "uid" : "sample_uid",
      "type" : "sample_type",
      "subscription_type" : "sample_subscription_type",
      "issue_date" : "2015-01-26T00:00:00.000Z",
      "issue_date_in_millis" : 1422230400000,
      "feature" : "shield",
      "expiry_date" : "2015-04-26T23:59:59.999Z",
      "expiry_date_in_millis" : 1430092799999,
      "max_nodes" : 1,
      "issued_to" : "sample customer",
      "issuer" : "elasticsearch"
    }
    ...
  ]
}
--------------------------------------------
NOTE: The get license API is protected under the cluster admin privilege, therefore it has to be executed
by a user with the appropriate permissions.
[[license-expiration]]
[float]
=== License Expiration
License expiration should never be a surprise. Starting 30 days before the license expires, Shield logs a daily message
containing the license expiration date and a brief description of unlicensed behavior. Starting 7 days before expiration,
Shield logs an error message every 10 minutes with the same information. After expiration, Shield continues to
log error messages informing you that the license has expired. These messages are also generated at node startup, to ensure
that there are no surprises. Here is an example message:
[source,sh]
---------------------------------------------------------------------------------------------------------------------------------
[ERROR][shield.license] Shield license will expire on 1/1/1970. Cluster health, cluster stats and indices stats operations are
blocked on Shield license expiration. All data operations (read and write) continue to work. If you have a new license, please
update it. Otherwise, please reach out to your support contact.
---------------------------------------------------------------------------------------------------------------------------------
When the Shield license has expired, Shield blocks requests to the cluster health, cluster stats, and index stats APIs.
Calls to these APIs fail with a `LicenseExpiredException` and return HTTP status code 401. Because only these APIs are
disabled, automated cluster monitoring should detect the license failure, while users of the cluster are not immediately
impacted. It is not recommended to run for any length of time with an expired Shield license; the cluster health and
stats APIs are critical for monitoring and managing an Elasticsearch cluster.
Here is an example of the error response clients receive when the license has expired and the cluster health, cluster
stats, or index stats APIs are called:
[source,json]
----------------------------------------------------------------------------------------------------------------------------------------------
{"error":"LicenseExpiredException[license expired for feature [shield]]","status":401}
----------------------------------------------------------------------------------------------------------------------------------------------
If you receive a new license file and <<installing-license, install it>>, it will take effect immediately and the health and
stats APIs will be available.
[[limitations]]
== Appendix 3. Limitations
[float]
=== Plugins
Elasticsearch's plugin infrastructure is extremely flexible in terms of what can be extended. While it opens up Elasticsearch
to a wide variety of (often custom) additional functionality, when it comes to security, this high extensibility level
comes at a cost. We have no control over the third-party plugins' code (open source or not) and therefore we cannot
guarantee their compliance with Shield. For this reason, third-party plugins are not officially supported on clusters
with the Shield security plugin installed.
[float]
=== Changes in Index Wildcard Behavior
Elasticsearch clusters with the Shield security plugin installed apply the `/_all` wildcard, and all other wildcards,
to the indices that the current user has privileges for, not the set of all indices on the cluster. There are two
notable results of this behavior:
* Elasticsearch clusters with the Shield security plugin installed do not honor the `ignore_unavailable` option.
This behavior means that requests involving indices that the current user lacks authorization for throw an
`AuthorizationException` error, regardless of the option's setting.
* The `allow_no_indices` option is ignored, resulting in the following behavior: when the final set of indices after
wildcard expansion and replacement is empty, the request throws an `IndexMissingException` error.
As a general principle, core Elasticsearch will return empty results in scenarios where wildcard expansion returns no
indices, while Elasticsearch with Shield returns exceptions. Note that this behavior means that operations with
multiple items will fail the entire set of operations if any one operation throws an exception due to wildcard
expansion resulting in an empty set of authorized indices.
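For instance, assuming a hypothetical user whose role only grants privileges on `logs-*` indices, a request such as the
following expands `_all` to the authorized `logs-*` indices only, and fails with an exception if that expansion is empty:

[source,shell]
-----------------------------------------------------------
curl -XGET 'http://localhost:9200/_all/_search' -u logs_user:password
-----------------------------------------------------------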
[[limitations-filtered-aliases]]
[float]
=== Filtered Index Aliases
You can combine a secured index alias with a {ref}/query-dsl-filters.html[filter]
to approximate document-level security. By manipulating the specific filtering, you can control the set of documents
that users with privileges on that index alias can access.
WARNING: Filtering secured index aliases does not provide security for documents retrieved through the
{ref}/docs-get.html[get api]. Read
https://github.com/elasticsearch/elasticsearch/issues/3861[elasticsearch issue #3861] to learn more about this limitation.
Under this restriction, users can still perform secure near-real-time reads by searching for documents by ID with the
{ref}/search-search.html[search api] instead. When you use this approach, restrict get operations by granting the `search`
privilege and disallowing `get`.
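For illustration, a role along these lines grants the `search` privilege but not `get` on a secured alias (the role and
alias names are hypothetical):

[source,yaml]
--------------------------------------------------
restricted_reader:
  indices:
    'restricted_alias': search
--------------------------------------------------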
WARNING: In Elasticsearch, issuing a delete operation on an alias also deletes all of the indices that the alias
points to, regardless of the filter that the alias might hold. Keep this behavior in mind when granting users
administrative privileges to filtered index aliases. Read
https://github.com/elasticsearch/elasticsearch/issues/2318[elasticsearch issue #2318] to learn more about this limitation.
[float]
=== Queries and Filters
[[limitations-disable-cache]]
[float]
==== Elasticsearch 1.6+
Elasticsearch 1.6 removes all of the limitations below with queries and filters, *but* there is the possibility of
authorization being bypassed when using a terms filter with the
{ref}/query-dsl-terms-filter.html#_terms_lookup_mechanism[terms lookup mechanism]. The authorization that could be
bypassed is for the index containing the terms. In order to ensure that all requests are properly authorized when using
Shield 1.2.0 and 1.2.1, add the following setting to your `elasticsearch.yml` file:
[source,yaml]
--------------------------------------------------
indices.cache.filter.terms.size: 0
--------------------------------------------------
[float]
==== Elasticsearch pre-1.6.0
Certain Elasticsearch requests execute other requests as part of their implementation. Some of these requests do not
maintain the security context that the original request was made with. This causes an `AuthorizationException` even when
the user has authorization to make the subsequent requests. The following requests have this behavior:
* {ref}/query-dsl-mlt-query.html[More Like This Query]
* {ref}/query-dsl-geo-shape-query.html[GeoShape Query] and {ref}/query-dsl-geo-shape-filter.html[GeoShape Filter] when
used with an {ref}/query-dsl-geo-shape-filter.html#_pre_indexed_shape[indexed shape]
* {ref}/query-dsl-terms-filter.html[Terms Filter] when using the {ref}/query-dsl-terms-filter.html#_terms_lookup_mechanism[terms lookup mechanism]
* {ref}/search-suggesters-phrase.html[Phrase Suggester] when specifying the `collate` field
* Any query using {ref}/modules-scripting.html#_indexed_scripts[indexed scripts]
* Queries using a {ref}/search-template.html[search template]
[float]
=== Document Expiration (_ttl)
Document expiration handled using the built-in {ref}/mapping-ttl-field.html#mapping-ttl-field[`_ttl` (time to live) mechanism]
does not work with Shield. The document deletions will fail and the documents continue to live past their expiration.
[float]
=== LDAP Realm
The <<ldap, LDAP Realm>> does not currently support the discovery of nested LDAP Groups. For example, if a user is a member
of GroupA and GroupA is a member of GroupB, only GroupA will be discovered. However, the <<active_directory, Active Directory Realm>> _does_
support transitive group membership.
[[securing-aliases]]
== Appendix 4. Securing Indices & Aliases
Elasticsearch allows you to execute operations against {ref}/indices-aliases.html[index aliases],
which are effectively virtual indices. An alias points to one or more indices, holds metadata, and can potentially hold
a filter. Shield treats aliases and indices the same: privileges for indices actions are granted on specific indices or
aliases. For an indices action to be authorized by Shield, the user that executes it needs to have permissions for that
action on all the specific indices or aliases that the request relates to.
Let's look at an example. Assuming we have an index called `2015`, an alias that points to it called `current_year`,
and a user with the following role:
[source,yaml]
--------------------------------------------------
current_year_read:
indices:
'2015': read
--------------------------------------------------
The user attempts to retrieve a document from `current_year`:
[source,shell]
-------------------------------------------------------------------------------
curl -XGET 'localhost:9200/current_year/logs/1'
-------------------------------------------------------------------------------
The above request is rejected even though the user has read permissions on the concrete index that the `current_year`
alias points to. The correct permission would be as follows:
[source,yaml]
--------------------------------------------------
current_year_read:
indices:
'current_year': read
--------------------------------------------------
[float]
=== Managing aliases
Unlike creating indices, which requires the `create_index` privilege, adding, removing, and retrieving aliases requires
the `manage_aliases` permission. Aliases can be added to an index directly as part of index creation:
[source,shell]
-------------------------------------------------------------------------------
curl -XPUT localhost:9200/2015 -d '{
"aliases" : {
"current_year" : {}
}
}'
-------------------------------------------------------------------------------
or via the dedicated aliases API if the index already exists:
[source,shell]
-------------------------------------------------------------------------------
curl -XPOST 'http://localhost:9200/_aliases' -d '
{
"actions" : [
{ "add" : { "index" : "2015", "alias" : "current_year" } }
]
}'
-------------------------------------------------------------------------------
Both of the above requests require the `manage_aliases` privilege on the alias name as well as on the targeted index, which can be granted as follows:
[source,yaml]
--------------------------------------------------
admin:
indices:
'20*,current_year': create_index,manage_aliases
--------------------------------------------------
Note also that the `manage` privilege includes both `create_index` and `manage_aliases` in addition to all of the other
management related privileges:
[source,yaml]
--------------------------------------------------
admin:
indices:
'20*,current_year': manage
--------------------------------------------------
The index aliases API also allows you to delete aliases from existing indices, as follows. The privileges required for
such a request are the same as above: both the index and the alias need the `manage_aliases` permission.
[source,shell]
-------------------------------------------------------------------------------
curl -XPOST 'http://localhost:9200/_aliases' -d '
{
"actions" : [
{ "delete" : { "index" : "2015", "alias" : "current_year" } }
]
}'
-------------------------------------------------------------------------------
[float]
=== Filtered aliases
Aliases can hold a filter, which selects a subset of the documents that the physical index contains. Filtered aliases
make it possible to approximate document-level security, but they have limitations. Please read
the <<limitations-filtered-aliases,limitations>> section to learn more.
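For example, a filter can be attached when adding an alias through the aliases API (the `department` field here is
purely illustrative):

[source,shell]
-------------------------------------------------------------------------------
curl -XPOST 'http://localhost:9200/_aliases' -d '
{
  "actions" : [
    { "add" : { "index" : "2015", "alias" : "current_year_engineering",
                "filter" : { "term" : { "department" : "engineering" } } } }
  ]
}'
-------------------------------------------------------------------------------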
[[tribe-node]]
== Appendix 5. Tribe Node
Shield supports the {ref}/modules-tribe.html[Tribe Node], which acts as a federated client across multiple clusters.
When using Tribe Node with Shield, you must have the same Shield configurations (users, roles, user-role mappings, SSL/TLS CA)
on each cluster, and on the Tribe Node itself, where security checking is primarily done. This, of course, also means
that all clusters must be running Shield. The following are the current limitations to keep in mind when using the
Tribe Node in combination with Shield.
[float]
=== Same privileges on all connected clusters
The Tribe Node has its own configuration and privileges, which need to grant access to actions and indices on all of the
connected clusters. Each cluster also needs to grant access to indices belonging to the other connected clusters.
Let's look at an example: assume we have two clusters, `cluster1` and `cluster2`, each holding one index, `index1` and
`index2` respectively. A search request that targets multiple clusters, as follows,
[source,shell]
-----------------------------------------------------------
curl -XGET tribe_node:9200/index1,index2/_search -u tribe_user:tribe_user
-----------------------------------------------------------
requires `search` privileges for both `index1` and `index2` on the Tribe Node:
[source,yaml]
-----------------------------------------------------------
tribe_user:
indices:
'index*': search
-----------------------------------------------------------
Also, the same privileges need to be granted on the connected clusters, meaning that `cluster1` has to grant access to
`index2` even though `index2` only exists on `cluster2`; the same requirement applies for `index1` on `cluster2`. This
applies to any indices action. As for cluster state read operations (e.g. cluster state api, get mapping api etc.),
they always get executed locally on the Tribe Node, to make sure that the merged cluster state gets returned; their
privileges are then required on the Tribe Node only.
[float]
=== Same system key on all clusters
In order for <<message-authentication,message authentication>> to properly work across multiple clusters, the Tribe Node
and all of the connected clusters need to share the same system key.
[float]
=== Encrypted communication
Encrypted communication via SSL can only be enabled globally, meaning that either all of the connected clusters and the
Tribe Node have SSL enabled, or none of them have.
[float]
=== Same certification authority on all clusters
When using encrypted communication, for simplicity, we recommend all of the connected clusters and the Tribe Node use
the same certification authority to generate their certificates.
[float]
=== Example
Let's walk through a complete example of how to use the Tribe Node with Shield and the configuration required. First of
all, the Shield and License plugins need to be installed and enabled on all clusters and on the Tribe Node.
The system key needs to be generated on one node, as described in the <<message-authentication, Getting Started section>>,
and then copied over to all of the other nodes in each cluster and the Tribe Node itself.
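With Shield's `syskeygen` tool, the key can be generated as follows (it writes the key to the default
`config/shield/system_key` location):

[source,shell]
-----------------------------------------------------------
bin/shield/syskeygen
-----------------------------------------------------------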
Each cluster can have its own users with `admin` privileges that don't need to be present on the Tribe Node too. In fact,
administration tasks (e.g. creating an index) cannot be performed through the Tribe Node; they need to be sent directly
to the corresponding cluster. The users that need to be created on the Tribe Node are those used to retrieve merged data
from the different clusters through the Tribe Node itself. For instance, let's create a `tribe_user` user, with
role `user`, that has `read` privileges on any index.
[source,shell]
-----------------------------------------------------------
./bin/shield/esusers useradd tribe_user -p tribe_user -r user
-----------------------------------------------------------
The above command needs to be executed on each cluster, since the same user needs to be present on the Tribe Node as well
as on every connected cluster.
The following configuration needs to be added to `elasticsearch.yml` on the Tribe Node.
Elasticsearch allows you to specify settings per cluster. We disable multicast discovery as described in the
<<disable-multicast, Disable Multicast section>> and configure the proper unicast discovery hosts for each cluster,
as well as their cluster names:
[source,yaml]
-----------------------------------------------------------
tribe:
t1:
cluster.name: tribe1
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["tribe1:9300"]
t2:
cluster.name: tribe2
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["tribe2:9300"]
-----------------------------------------------------------
The Tribe Node can then be started. Once initialized, it is ready to accept requests like the following search,
which returns documents from the different connected clusters:
[source,shell]
-----------------------------------------------------------
curl -XGET localhost:9200/_search -u tribe_user:tribe_user
-----------------------------------------------------------
As for encrypted communication, the required settings are the same as described in <<securing-nodes, Securing Nodes>>,
but need to be specified per tribe as we did for discovery settings above.
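As a sketch only: assuming the SSL settings follow the same `tribe.<name>.` prefix convention used for the discovery
settings above, the configuration would look along these lines (keystore paths and passwords are placeholders):

[source,yaml]
-----------------------------------------------------------
shield.transport.ssl: true
shield.ssl.keystore.path: /path/to/tribe-node-keystore.jks
shield.ssl.keystore.password: changeme
tribe:
  t1:
    shield.transport.ssl: true
  t2:
    shield.transport.ssl: true
-----------------------------------------------------------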
[[example]]
== Appendix 6. Full `esusers` Example
[float]
=== Putting it all together: Ecommerce Store Example
The e-commerce site in this example has the following components:
* A webshop application, which executes queries
* A nightly bulk import process, which reindexes the documents to ensure correct pricing for the following day
* An update mechanism that writes data concurrently during business hours on a per-document basis
* A sales representative who needs to read sales-specific indices
[float]
=== Defining the roles
[source,yaml]
--------------------------------------------------
bulk:
indices:
'products_*': write, manage, read
updater:
indices:
'products': index, delete, indices:admin/optimize
webshop:
indices:
'products': search, get
monitoring:
cluster: monitor
indices:
'*': monitor
sales_rep :
cluster : none
indices:
'sales_*' : all
'social_events' : data_access, monitor
--------------------------------------------------
Let's step through each of the role definitions:
* The `bulk` role definition has the privileges to create and delete all indices starting with `products_`, as well as
to index data into them. This set of privileges enables the user with this role to delete and repopulate a particular
index.
* The `updater` role is restricted to the concrete `products` index. The only privileges required for updating it are
`index` and `delete`, plus the `indices:admin/optimize` privilege for index optimization.
* The `webshop` role is a read-only role that solely executes queries and GET requests.
* The `monitoring` role extracts monitoring data for display on an internal screen of the web application.
* The `sales_rep` role has full access to all indices starting with `sales_`, plus data access and monitoring
privileges on the `social_events` index.
[float]
=== Creating Users and Their Roles
After creating the `roles.yml` file, you can use the `esusers` tool to create the needed users and the respective
user-to-role mapping.
[source,shell]
-----------------------------------------------------------
bin/shield/esusers useradd webshop -r webshop,monitoring
-----------------------------------------------------------
[source,shell]
-----------------------------------------------------------
bin/shield/esusers useradd bulk -r bulk
-----------------------------------------------------------
[source,shell]
-----------------------------------------------------------
bin/shield/esusers useradd updater -r updater
-----------------------------------------------------------
[source,shell]
--------------------------------------------------------------------
bin/shield/esusers useradd best_sales_guy_of_the_world -r sales_rep
--------------------------------------------------------------------
[source,shell]
----------------------------------------------------------------------------
bin/shield/esusers useradd second_best_sales_guy_of_the_world -r sales_rep
----------------------------------------------------------------------------
[float]
=== Modifying Your Application
With the users and roles defined, you now need to modify your application. Each part of the application must
authenticate to Elasticsearch using the username and password you gave it in the previous steps.
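For example, the webshop application would now issue its search requests with the credentials created above (the
password shown is a placeholder):

[source,shell]
-----------------------------------------------------------
curl -XGET 'http://localhost:9200/products/_search' -u webshop:webshop_password -d '
{
  "query" : { "match_all" : {} }
}'
-----------------------------------------------------------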
[[trouble-shooting]]
== Appendix 7. Troubleshooting
[float]
=== `settings`
Some settings are not returned via the nodes settings API::
+
--
This is intentional. Some settings are considered highly sensitive (e.g. all `ssl` settings, and the LDAP `bind_dn`,
`bind_password`, and `hostname_verification` settings). For this reason, we filter these settings and do not expose
them via the nodes info REST endpoint. You can also define additional sensitive settings that should be hidden using
the `shield.hide_settings` setting:
[source, yaml]
------------------------------------------
shield.hide_settings: shield.authc.realms.ldap1.url, shield.authc.realms.ad1.*
------------------------------------------
In addition to the defaults, the snippet above hides the `url` setting of the `ldap1` realm and all settings of the `ad1` realm.
--
[float]
=== `esusers`
I configured the appropriate roles and the users, but I still get an authorization exception::
+
--
Verify that the role names associated with the users match the roles defined in the `roles.yml` file. You
can use the `esusers` tool to list all the users. Any unknown roles are marked with `*`.
[source, shell]
------------------------------------------
esusers list
rdeniro : admin
alpacino : power_user
jacknich : marvel,unknown_role* <1>
------------------------------------------
<1> `unknown_role` was not found in `roles.yml`
--
ERROR: extra arguments [...] were provided::
+
--
This error occurs when the `esusers` tool parses the input and finds unexpected arguments, which can happen when
special characters are used in some of the arguments. For example, on Windows systems the `,` character is considered
a parameter separator; in other words, `-r role1,role2` is translated to `-r role1 role2`, and the `esusers` tool only
recognizes `role1` as an expected parameter. The solution is to quote the parameter: `-r "role1,role2"`.
--
[[trouble-shoot-active-directory]]
[float]
=== Active Directory
Certain users are being frequently locked out of Active Directory::
+
--
Check your realm configuration; realms are checked serially, one after another. If your Active Directory realm is
checked before other realms and there are usernames that appear in both Active Directory and another realm, a valid
login for one realm may cause failed login attempts in another realm.
For example, if `UserA` exists in both Active Directory and esusers, and the Active Directory realm is checked first and
esusers is checked second, an attempt to authenticate as `UserA` in the esusers realm would first attempt to authenticate
against Active Directory and fail, before successfully authenticating against the esusers realm. Because authentication is
verified on each request, the Active Directory realm would be checked - and fail - on each request for `UserA` in the esusers
realm. In this case, while the Shield request completed successfully, the account on Active Directory would have received
several failed login attempts, and that account may become temporarily locked out. Plan the order of your realms accordingly.
Also note that it is not typically necessary to define multiple Active Directory realms to handle domain controller failures. When using Microsoft DNS, the DNS entry for
the domain should always point to an available domain controller.
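Realm order is controlled through each realm's `order` setting in `elasticsearch.yml`; for example, checking an
`esusers` realm before Active Directory avoids the failed logins described above (realm names here are illustrative):

[source,yaml]
------------------------------------------
shield:
  authc:
    realms:
      esusers1:
        type: esusers
        order: 0
      ad1:
        type: active_directory
        order: 1
        domain_name: example.com
------------------------------------------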
--
[float]
=== LDAP
I can authenticate to LDAP, but I still get an authorization exception::
+
--
A number of configuration options can cause this error.
|======================
|_group identification_ |
Groups are located either by an LDAP search or by the `memberOf` attribute on
the user. Also, if subtree search is turned off, the search will only go one
level deep. See <<ldap-settings, LDAP Settings>> for all the options.
There are many options here, and sticking to the defaults will not work for all
scenarios.
| _group to role mapping_|
Either the `role_mapping.yml` file or the location for this file could be
misconfigured. See <<ref-shield-files, Shield Files>> for more.
|_role definition_|
Either the `roles.yml` file or the location for this file could be
misconfigured. See <<ref-shield-files, Shield Files>> for more.
|======================
To help track down these possibilities, add `shield.authc: DEBUG` to the `logging.yml` <<shield-config, config file>>. A successful
authentication should produce debug statements that list groups and role mappings.
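In the `logging.yml` format used by Elasticsearch 1.x, that amounts to the following minimal snippet:

[source,yaml]
------------------------------------------
logger:
  shield.authc: DEBUG
------------------------------------------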
--
[float]
=== Encryption & Certificates
`curl` on the Mac returns a certificate verification error even when the `--cacert` option is used::
+
--
Apple's integration of `curl` with their keychain technology disables the `--cacert` option.
See http://curl.haxx.se/mail/archive-2013-10/0036.html for more information.
You can use another tool, such as `wget`, to test certificates. Alternatively, you can add the certificate for the
signing certificate authority to the MacOS system keychain, using a procedure similar to the one detailed in the
http://support.apple.com/kb/PH14003[Apple knowledge base]. Be sure to add the signing CA's certificate and not the server's certificate.
--
[float]
==== SSLHandshakeException causing connections to fail
An `SSLHandshakeException` causes a connection to a node to fail and indicates a configuration issue. Some of the
common exceptions are shown below with tips on how to resolve them.
`java.security.cert.CertificateException: No name matching node01.example.com found`::
+
--
Indicates that a client connection was made to `node01.example.com` but the certificate returned did not contain the name `node01.example.com`.
In most cases, the issue can be resolved by ensuring the name is specified as a `SubjectAlternativeName` during <<private-key, certificate creation>>.
Another scenario is an environment that does not use DNS names in certificates at all. In this case, all settings
in `elasticsearch.yml` should use only IP addresses, and the following setting needs to be set in `elasticsearch.yml`:
[source, yaml]
--------------------------------------------------
shield.ssl.hostname_verification.resolve_name: false
--------------------------------------------------
--
`java.security.cert.CertificateException: No subject alternative names present`::
+
--
Indicates that a client connection was made to an IP address but the returned certificate did not contain any `SubjectAlternativeName` entries.
IP addresses are only used for hostname verification if they are specified as a `SubjectAlternativeName` during
<<private-key, certificate creation>>. If the intent was to use IP addresses for hostname verification, then the certificate
will need to be regenerated. Also verify that `shield.ssl.hostname_verification.resolve_name: false` is *not* set in
`elasticsearch.yml`.
--
`javax.net.ssl.SSLHandshakeException: null cert chain` and `javax.net.ssl.SSLException: Received fatal alert: bad_certificate`::
+
--
The `SSLHandshakeException` above indicates that a self-signed certificate was returned by the client that is not trusted,
as it cannot be found in the `truststore` or `keystore`. The `SSLException` above is seen on the client side of the connection.
--
`sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target` and `javax.net.ssl.SSLException: Received fatal alert: certificate_unknown`::
+
--
The `SunCertPathBuilderException` above indicates that a certificate was returned during the handshake that is not trusted.
This message is seen on the client side of the connection. The `SSLException` above is seen on the server side of the
connection. The CA certificate that signed the returned certificate was not found in the `keystore` or `truststore` and
needs to be added to trust this certificate.
--
[float]
==== Other SSL/TLS related exceptions
There are other SSL-related exceptions that may appear in the logs. Some common exceptions and their meanings are
shown below.
WARN: received plaintext http traffic on a https channel, closing connection::
+
--
Indicates that there was an incoming plaintext HTTP request. This typically occurs when an external application attempts
to make an unencrypted call to the REST interface. Ensure that all applications use `https` when calling the
REST interface with SSL enabled.
--
`org.jboss.netty.handler.ssl.NotSslRecordException: not an SSL/TLS record:`::
+
--
Indicates that there was incoming plaintext traffic on an SSL connection. This typically occurs when a node is not
configured to use encrypted communication and tries to connect to nodes that are using encrypted communication. Please
verify that all nodes are using the same setting for `shield.transport.ssl`.
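For example, every node in an encrypted cluster should carry the same value in `elasticsearch.yml`:

[source,yaml]
--------------------------------------------------
shield.transport.ssl: true
--------------------------------------------------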
--
`java.io.StreamCorruptedException: invalid internal transport message format, got`::
+
--
Indicates an issue with data received on the transport interface in an unknown format. This can happen when a node with
encrypted communication enabled connects to a node that has encrypted communication disabled. Please verify that all
nodes are using the same setting for `shield.transport.ssl`.
--
`java.lang.IllegalArgumentException: empty text`::
+
--
This exception is typically seen when an `https` request is made to a node that does not have `https` enabled. If `https` is desired,
ensure the following setting is in `elasticsearch.yml`:
[source,yaml]
----------------
shield.http.ssl: true
----------------
--
ERROR: unsupported ciphers [...] were requested but cannot be used in this JVM::
+
--
This error occurs when an SSL/TLS cipher suite is specified that is not supported by the JVM that Elasticsearch is running
on. Shield will use only those of the specified cipher suites that are supported by the JVM. This error can occur when using
the Shield defaults, as some distributions of OpenJDK do not enable the PKCS11 provider by default. In this case, we
recommend consulting your JVM documentation for details on how to enable the PKCS11 provider.
Another common source of this error is requesting cipher suites that use encryption with a key length greater than 128 bits
when running on an Oracle JDK. In this case, you will need to install the <<ciphers, JCE Unlimited Strength Jurisdiction Policy Files>>.
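Alternatively, if the stronger policy files cannot be installed, the configured cipher suites can be restricted to 128-bit suites. A sketch in `elasticsearch.yml`:

[source,yaml]
--------------------------------------------------
shield.ssl.ciphers:
  - TLS_RSA_WITH_AES_128_CBC_SHA
  - TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA
--------------------------------------------------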
--
[float]
=== Exceptions when unlicensed
WARN: Failed to execute IndicesStatsAction for ClusterInfoUpdateJob::
+
--
This warning occurs in the logs every 30 seconds when the Shield license is expired or invalid. It is caused by a periodic
internal request to gather disk usage information from the nodes and indices, to enable {ref}/index-modules-allocation.html#disk[disk-based shard allocation].
Disk-based shard allocation is not required, though it is enabled by default.
If you are using elasticsearch 1.4.3 or higher with disk-based shard allocation enabled, it will be automatically disabled when the Shield
license is expired or invalid, and will be automatically re-enabled when a valid Shield license is installed.
If you are using elasticsearch 1.4.2 with disk-based shard allocation enabled, we recommend manually disabling disk-based shard
allocation while your Shield license is expired, and re-enabling it after installing a valid Shield license. Instructions for
disabling disk-based shard allocation are {ref}/index-modules-allocation.html#disk[here].
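As a sketch, disk-based shard allocation can be switched off statically in `elasticsearch.yml` (it can also be changed dynamically through the cluster settings API, as described in the linked instructions):

[source,yaml]
--------------------------------------------------
cluster.routing.allocation.disk.threshold_enabled: false
--------------------------------------------------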
--
[[reference]]
== Appendix 8. Reference
[[privileges-list]]
[float]
=== Privileges
[[privileges-list-cluster]]
[float]
==== Cluster
[horizontal]
`all`:: All cluster administration operations, like snapshotting, node shutdown/restart, settings update or rerouting
`monitor`:: All cluster read-only operations, like cluster health & state, hot threads, node info, node & cluster
stats, snapshot/restore status, and pending cluster tasks
`manage_shield`:: All Shield related operations (currently only exposing an API for clearing the realm caches)
[[privileges-list-indices]]
[float]
==== Indices
[horizontal]
`all`:: Any action on an index
`manage`:: All `monitor` privileges plus index administration (aliases, analyze, cache clear, close, delete, exists,
flush, mapping, open, optimize, refresh, settings, search shards, templates, validate, warmers)
`monitor`:: All read-only actions that are required for monitoring (recovery, segments info, index stats & status)
`data_access`:: A shortcut for all of the privileges below
`crud`:: A shortcut for the `read` and `write` privileges
`read`:: Read-only access to actions (count, explain, get, exists, mget, get indexed scripts, more like this, multi
percolate/search/termvector, percolate, scroll, clear_scroll, search, suggest, tv)
`search`:: All of `suggest` plus the ability to execute an arbitrary search request (including the multi-search API)
`get`:: Allows executing a GET request for a single document, or for multiple documents via the multi-get API
`suggest`:: Allows executing the `_suggest` API
`index`:: Privilege to index and update documents
`create_index`:: Privilege to create an index. A create index request may contain aliases to be added to the index once
created. In that case the request requires `manage_aliases` privilege as well, on both the index and the aliases names.
`manage_aliases`:: Privilege to add and remove aliases, as well as retrieve aliases information. Note that in order
to add an alias to an existing index, the `manage_aliases` privilege is required on the existing index as well as on the
alias name
`delete`:: Privilege to delete documents (includes delete by query)
`write`:: Privilege to index, update, delete, delete by query and bulk operations on documents, in addition to delete
and put indexed scripts
[[ref-actions-list]]
[float]
==== Action level privileges
Although rarely needed, it is also possible to define privileges on specific actions that are available in
Elasticsearch. This only applies to publicly available indices and cluster actions.
[[ref-actions-list-cluster]]
[float]
===== Cluster actions privileges
* `cluster:admin/nodes/restart`
* `cluster:admin/nodes/shutdown`
* `cluster:admin/repository/delete`
* `cluster:admin/repository/get`
* `cluster:admin/repository/put`
* `cluster:admin/repository/verify`
* `cluster:admin/reroute`
* `cluster:admin/settings/update`
* `cluster:admin/snapshot/create`
* `cluster:admin/snapshot/delete`
* `cluster:admin/snapshot/get`
* `cluster:admin/snapshot/restore`
* `cluster:admin/snapshot/status`
* `cluster:admin/plugin/license/get`
* `cluster:admin/plugin/license/delete`
* `cluster:admin/plugin/license/put`
* `cluster:admin/indices/scroll/clear_all`
* `cluster:admin/analyze`
* `cluster:admin/shield/realm/cache/clear`
* `cluster:monitor/health`
* `cluster:monitor/nodes/hot_threads`
* `cluster:monitor/nodes/info`
* `cluster:monitor/nodes/stats`
* `cluster:monitor/state`
* `cluster:monitor/stats`
* `cluster:monitor/task`
* `indices:admin/template/delete`
* `indices:admin/template/get`
* `indices:admin/template/put`
NOTE: While indices template actions typically relate to indices, they are categorized under cluster actions to avoid
potential security leaks (e.g. having one user define a template that may match another user's index and then be
applied).
[[ref-actions-list-indices]]
[float]
===== Indices actions privileges
* `indices:admin/aliases`
* `indices:admin/aliases/exists`
* `indices:admin/aliases/get`
* `indices:admin/analyze`
* `indices:admin/cache/clear`
* `indices:admin/close`
* `indices:admin/create`
* `indices:admin/delete`
* `indices:admin/exists`
* `indices:admin/flush`
* `indices:admin/get`
* `indices:admin/mapping/delete`
* `indices:admin/mapping/put`
* `indices:admin/mappings/fields/get`
* `indices:admin/mappings/get`
* `indices:admin/open`
* `indices:admin/optimize`
* `indices:admin/refresh`
* `indices:admin/settings/update`
* `indices:admin/shards/search_shards`
* `indices:admin/types/exists`
* `indices:admin/validate/query`
* `indices:admin/warmers/delete`
* `indices:admin/warmers/get`
* `indices:admin/warmers/put`
* `indices:monitor/recovery`
* `indices:monitor/segments`
* `indices:monitor/settings/get`
* `indices:monitor/stats`
* `indices:monitor/status`
* `indices:data/read/count`
* `indices:data/read/exists`
* `indices:data/read/explain`
* `indices:data/read/get`
* `indices:data/read/mget`
* `indices:data/read/mlt`
* `indices:data/read/mpercolate`
* `indices:data/read/msearch`
* `indices:data/read/mtv`
* `indices:data/read/percolate`
* `indices:data/read/script/get`
* `indices:data/read/scroll`
* `indices:data/read/scroll/clear`
* `indices:data/read/search`
* `indices:data/read/suggest`
* `indices:data/read/tv`
* `indices:data/write/bulk`
* `indices:data/write/delete`
* `indices:data/write/delete/by_query`
* `indices:data/write/index`
* `indices:data/write/script/delete`
* `indices:data/write/script/put`
* `indices:data/write/update`
[[ref-shield-settings]]
[float]
=== Shield Settings
The parameters listed in this section are configured in the `config/elasticsearch.yml` configuration file.
[[message-auth-settings]]
.Shield Message Authentication Settings
[options="header"]
|======
| Name | Default | Description
| `shield.system_key.file` | `system_key` under Shield's <<shield-config,config>> | Sets the location of the `system_key` file (read more <<message-authentication,here>>)
|======
[[ref-anonymous-access]]
.Shield Anonymous Access Settings added[1.1.0]
[options="header"]
|======
| Name | Default | Description
| `shield.authc.anonymous.username` | `_es_anonymous_user` | The username/principal of the anonymous user (this setting is optional)
| `shield.authc.anonymous.roles` | - | The roles that will be associated with the anonymous user. This setting must be set to enable anonymous access.
| `shield.authc.anonymous.authz_exception` | `true` | When `true`, an HTTP 403 response will be returned when the anonymous user does not have the appropriate permissions for the requested action. The user will not be prompted to provide credentials to access the requested resource. When set to `false`, an HTTP 401 will be returned, allowing credentials to be provided for a user with the appropriate permissions.
|======
[[ref-realm-settings]]
[float]
==== Realm Settings
All realms are configured under the `shield.authc.realms` settings, keyed by their names as follows:
[source,yaml]
----------------------------------------
shield.authc.realms:
realm1:
type: esusers
order: 0
...
realm2:
type: ldap
order: 1
...
realm3:
type: active_directory
order: 2
...
...
----------------------------------------
.Common Settings to All Realms
[options="header"]
|======
| Name | Required | Default | Description
| `type` | yes | - | The type of the realm (currently `esusers`, `ldap` or `active_directory`)
| `order` | no | Integer.MAX_VALUE | The priority of the realm within the realm chain
| `enabled` | no | true | Enable/disable the realm
|======
[[ref-esusers-settings]]
._esusers_ Realm
[options="header"]
|======
| Name | Required | Default | Description
| `files.users` | no | `users` under Shield's <<shield-config,config>> | The location of <<users-file, _users_>> file
| `files.users_roles` | no | `users_roles` under Shield's <<shield-config,config>> | The location of <<users_roles-file, _users_roles_>> file
| `cache.ttl` | no | `20m` | Specifies the time-to-live for cached user entries (a user and its credentials will be cached for this configured period of time). Defaults to `20m` (use the standard elasticsearch {ref}/common-options.html#time-units[time units]).
| `cache.max_users` | no | 100000 | Specifies the maximum number of user entries that can live in the cache at a given time. Defaults to 100,000.
| `cache.hash_algo` | no | `ssha256` | (Expert Setting) Specifies the hashing algorithm that will be used for the in-memory cached user credentials (see <<ref-cache-hash-algo,Cache hash algorithms>> table for all possible values).
|======
[[ref-ldap-settings]]
.Shield LDAP Settings
[options="header"]
|======
| Name | Required | Default | Description
| `url` | yes | - | An LDAP URL in the format `ldap[s]://<server>:<port>`.
| `bind_dn` | no | Empty | The DN of the user that will be used to bind to the LDAP and perform searches. If this is not specified, an anonymous bind will be attempted.
| `bind_password` | no | Empty | The password for the user that will be used to bind to the LDAP.
| `user_dn_templates` | yes * | - | The DN template that replaces the user name with the string `{0}`. This element is multivalued, allowing for multiple user contexts.
| `user_search.base_dn` | yes * | - | Specifies a container DN to search for users.
| `user_search.scope` | no | `sub_tree` | The scope of the user search. Valid values are `sub_tree`, `one_level` or `base`. `one_level` only searches objects directly contained within the `base_dn`. `sub_tree` searches all objects contained under `base_dn`. `base` specifies that the `base_dn` is the user object, and that it is the only user considered.
| `user_search.attribute` | no | `uid` | The attribute to match with the username presented to Shield.
| `user_search.pool.size` | no | `20` | The maximum number of connections to the LDAP server to allow in the connection pool.
| `user_search.pool.initial_size` | no | `5` | The initial number of connections to create to the LDAP server on startup.
| `user_search.pool.health_check.enabled` | no | `true` | Flag to enable or disable a health check on LDAP connections in the connection pool. Connections will be checked in the background at the specified interval.
| `user_search.pool.health_check.dn` | no | Value of `bind_dn` | The distinguished name to be retrieved as part of the health check. If `bind_dn` is not specified, a value must be specified.
| `user_search.pool.health_check.interval` | no | `60s` | The interval to perform background checks of connections in the pool.
| `group_search.base_dn` | yes | - | The container DN to search for groups in which the user has membership. When this element is absent, Shield searches for a `memberOf` attribute set on the user in order to determine group membership.
| `group_search.scope` | no | `sub_tree` | Specifies whether the group search should be `sub_tree`, `one_level` or `base`. `one_level` only searches objects directly contained within the `base_dn`. `sub_tree` searches all objects contained under `base_dn`. `base` specifies that the `base_dn` is a group object, and that it is the only group considered.
| `group_search.filter` | no | See description | When not set, the realm will search for `group`, `groupOfNames`, or `groupOfUniqueNames`, with the attributes `member` or `memberOf`. Any instance of `{0}` in the filter will be replaced by the user attribute defined in `group_search.user_attribute`
| `group_search.user_attribute` | no | Empty | Specifies the user attribute that will be fetched and provided as a parameter to the filter. If not set, the user DN is passed into the filter.
| `unmapped_groups_as_roles` | no | false | Takes a boolean value. When this element is set to `true`, the names of any unmapped LDAP groups are used as role names and assigned to the user. The default value is `false`.
| `files.role_mapping` | no | `role_mapping.yml` under Shield's <<shield-config,config>> | The path and file name for the <<ldap-role-mapping, YAML role mapping configuration file>>. The default file name is `role_mapping.yml` in the <<shield-config,Shield config directory>>.
| `follow_referrals` | no | `true` | Boolean value that specifies whether Shield should follow referrals returned by the LDAP server. Referrals are URLs returned by the server that are to be used to continue the LDAP operation (e.g. search).
| `connect_timeout` | no | `5s` - for 5 seconds | The timeout period for establishing an LDAP connection. An `s` at the end indicates seconds, or `ms` indicates milliseconds.
| `read_timeout` | no | `5s` - for 5 seconds | The timeout period for an LDAP operation. An `s` at the end indicates seconds, or `ms` indicates milliseconds.
| `hostname_verification` | no | true | Performs hostname verification when using `ldaps` to protect against man in the middle attacks.
| `cache.ttl` | no | `20m` | Specifies the time-to-live for cached user entries (a user and its credentials will be cached for this configured period of time). (use the standard elasticsearch {ref}/common-options.html#time-units[time units]).
| `cache.max_users` | no | 100000 | Specifies the maximum number of user entries that can live in the cache at a given time.
| `cache.hash_algo` | no | `ssha256` |(Expert Setting) Specifies the hashing algorithm that will be used for the in-memory cached user credentials (see <<ref-cache-hash-algo,Cache hash algorithms>> table for all possible values).
|======
NOTE: `user_dn_templates` is required to operate in user template mode, and `user_search.base_dn` is required to operate in user search mode. Only one is required for a given realm configuration. For more information on the different modes, see <<ldap-realms, ldap realms>>.
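Putting these settings together, a minimal `ldap` realm in user template mode might look like this in `elasticsearch.yml` (the server URL and DN template are illustrative):

[source,yaml]
--------------------------------------------------
shield.authc.realms:
  ldap1:
    type: ldap
    order: 0
    url: "ldaps://ldap.example.com:636"
    user_dn_templates:
      - "uid={0},ou=users,dc=example,dc=com"
--------------------------------------------------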
[[ref-ad-settings]]
.Shield Active Directory Settings
[options="header"]
|======
| Name | Required | Default | Description
| `url` | no | `ldap://<domain_name>:389` | A URL in the format `ldap[s]://<server>:<port>`. If not specified, the URL will be derived from `domain_name`, assuming clear-text `ldap` and port `389` (e.g. `ldap://<domain_name>:389`).
| `domain_name` | yes | - | The domain name of Active Directory. The cluster can derive the URL and `user_search_dn` fields from values in this element if those fields are not otherwise specified.
| `unmapped_groups_as_roles` | no | false | Takes a boolean value. When this element is set to `true`, the names of any unmapped groups and the user's relative distinguished name are used as role names and assigned to the user. The default value is `false`.
| `files.role_mapping` | no | `role_mapping.yml` under Shield's <<shield-config,config>> | The path and file name for the <<ad-role-mapping, YAML role mapping configuration file>>. The default file name is `role_mapping.yml` in the <<shield-config,Shield config directory>>.
| `user_search.base_dn` | no | Root of Active Directory | The context to search for a user. The default value for this element is the root of the Active Directory domain.
| `user_search.scope` | no | `sub_tree` | Specifies whether the user search should be `sub_tree`, `one_level` or `base`. `one_level` only searches users directly contained within the `base_dn`. `sub_tree` searches all objects contained under `base_dn`. `base` specifies that the `base_dn` is a user object, and that it is the only user considered.
| `user_search.filter` | no | See description | Specifies a filter used to look up a user given a username. The default filter looks up `user` objects with either `sAMAccountName` or `userPrincipalName`.
| `group_search.base_dn` | no | Root of Active Directory | The context to search for groups in which the user has membership. The default value for this element is the root of the Active Directory domain.
| `group_search.scope` | no | `sub_tree` | Specifies whether the group search should be `sub_tree`, `one_level` or `base`. `one_level` searches for groups directly contained within the `base_dn`. `sub_tree` searches all objects contained under `base_dn`. `base` specifies that the `base_dn` is a group object, and that it is the only group considered.
| `timeout.tcp_connect` | no | `5s` - for 5 seconds | The TCP connect timeout period for establishing an LDAP connection. An `s` at the end indicates seconds, or `ms` indicates milliseconds.
| `timeout.tcp_read` | no | `5s` - for 5 seconds | The TCP read timeout period after establishing an LDAP connection. An `s` at the end indicates seconds, or `ms` indicates milliseconds.
| `timeout.ldap_search` | no | `5s` - for 5 seconds | The LDAP Server enforced timeout period for an LDAP search. An `s` at the end indicates seconds, or `ms` indicates milliseconds.
| `hostname_verification` | no | true | Performs hostname verification when using `ldaps` to protect against man in the middle attacks.
| `cache.ttl` | no | `20m` | Specifies the time-to-live for cached user entries (a user and its credentials will be cached for this configured period of time). (use the standard elasticsearch {ref}/common-options.html#time-units[time units]).
| `cache.max_users` | no | 100000 | Specifies the maximum number of user entries that can live in the cache at a given time.
| `cache.hash_algo` | no | `ssha256` |(Expert Setting) Specifies the hashing algorithm that will be used for the in-memory cached user credentials (see <<ref-cache-hash-algo,Cache hash algorithms>> table for all possible values).
|======
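Because most values can be derived from `domain_name`, a minimal `active_directory` realm can be quite short (the domain below is illustrative):

[source,yaml]
--------------------------------------------------
shield.authc.realms:
  ad1:
    type: active_directory
    order: 0
    domain_name: example.com
--------------------------------------------------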
[[ref-pki-settings]]
.Shield PKI Settings
[options="header"]
|======
| Name | Required | Default | Description
| `username_pattern` | no | `CN=(.*?)(?:,\|$)` | The regular expression pattern used to extract the username from the certificate DN. The first match group is used as the username. Defaults to `CN=(.*?)(?:,\|$)`
| `truststore.path` | no | `shield.ssl.keystore` | The path of a truststore to use. The default truststore is the one defined by <<ref-ssl-tls-settings,SSL/TLS settings>>
| `truststore.password` | no | - | The password to the truststore. Must be provided if `truststore.path` is set.
| `truststore.algorithm` | no | SunX509 | Algorithm for the truststore. Default is `SunX509`
| `files.role_mapping` | no | `role_mapping.yml` under Shield's <<shield-config,config>> | Specifies the path and file name for the <<pki-role-mapping, YAML role mapping configuration file>>. The default file name is `role_mapping.yml`, in the <<shield-config,Shield config directory>>.
|======
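A sketch of a `pki` realm that relies on the defaults above, trusting the certificates in the SSL keystore and extracting the username from the `CN`:

[source,yaml]
--------------------------------------------------
shield.authc.realms:
  pki1:
    type: pki
    order: 0
--------------------------------------------------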
[[ref-cache-hash-algo]]
.Cache hash algorithms
|=======================
| Algorithm | Description
| `ssha256` | Uses a salted `SHA-256` algorithm (default).
| `md5` | Uses `MD5` algorithm.
| `sha1` | Uses `SHA1` algorithm.
| `bcrypt` | Uses `bcrypt` algorithm with salt generated in 10 rounds.
| `bcrypt4` | Uses `bcrypt` algorithm with salt generated in 4 rounds.
| `bcrypt5` | Uses `bcrypt` algorithm with salt generated in 5 rounds.
| `bcrypt6` | Uses `bcrypt` algorithm with salt generated in 6 rounds.
| `bcrypt7` | Uses `bcrypt` algorithm with salt generated in 7 rounds.
| `bcrypt8` | Uses `bcrypt` algorithm with salt generated in 8 rounds.
| `bcrypt9` | Uses `bcrypt` algorithm with salt generated in 9 rounds.
| `noop`,`clear_text` | Does not hash the credentials and keeps them in clear text in memory. CAUTION:
keeping clear text is considered insecure and can be compromised at the OS
level (e.g. via memory dumps and `ptrace`).
|=======================
[[ref-roles-settings]]
.Shield Roles Settings
[options="header"]
|======
| Name | Default | Description
| `shield.authz.store.file.roles` | `roles.yml` under Shield's <<shield-config,config>> | The location of the roles definition file
|======
[[ref-ssl-tls-settings]]
[float]
==== TLS/SSL Settings
.Shield TLS/SSL Settings
[options="header"]
|======
| Name | Default | Description
| `shield.ssl.keystore.path` | None | Absolute path to the keystore that holds the private keys
| `shield.ssl.keystore.password` | None | Password to the keystore
| `shield.ssl.keystore.key_password` | Same value as `shield.ssl.keystore.password` | Password for the private key in the keystore
| `shield.ssl.keystore.algorithm` | SunX509 | Format for the keystore
| `shield.ssl.truststore.path` | `shield.ssl.keystore.path` | Absolute path to the truststore. If not set, defaults to the keystore defined by `shield.ssl.keystore.path`
| `shield.ssl.truststore.password` | `shield.ssl.keystore.password` | Password to the truststore
| `shield.ssl.truststore.algorithm` | SunX509 | Format for the truststore
| `shield.ssl.protocol` | `TLSv1.2` | Protocol for security: `SSL`, `SSLv2`, `SSLv3`, `TLS`, `TLSv1`, `TLSv1.1`, `TLSv1.2`
| `shield.ssl.supported_protocols` | `TLSv1`, `TLSv1.1`, `TLSv1.2` | Supported protocols with versions. Valid protocols: `SSLv2Hello`, `SSLv3`, `TLSv1`, `TLSv1.1`, `TLSv1.2`
| `shield.ssl.ciphers` | `TLS_RSA_WITH_AES_128_CBC_SHA256`, `TLS_RSA_WITH_AES_128_CBC_SHA`, `TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA` | Supported cipher suites can be found in Oracle's http://docs.oracle.com/javase/7/docs/technotes/guides/security/SunProviders.html[Java Cryptography Architecture documentation]. Cipher suites using key lengths greater than 128 bits require the <<ciphers,JCE Unlimited Strength Jurisdiction Policy Files>>.
| `shield.ssl.hostname_verification` | `true` | Performs hostname verification on transport connections. This is enabled by default to protect against man in the middle attacks.
| `shield.ssl.hostname_verification.resolve_name` | `true` | A reverse DNS lookup is necessary to find the hostname when connecting to a node via an IP Address. If this is disabled and IP addresses are used to connect to a node, the IP address must be specified as a `SubjectAlternativeName` when <<private-key,creating the certificate>> or hostname verification will fail. IP addresses will be used to connect to a node if they are used in following settings: `network.host`, `network.publish_host`, `transport.publish_host`, `transport.profiles.$PROFILE.publish_host`, `discovery.zen.ping.unicast.hosts`
| `shield.ssl.session.cache_size` | `1000` | Number of SSL Sessions to cache in order to support session resumption. Setting the value to `0` means there is no size limit.
| `shield.ssl.session.cache_timeout` | `24h` | The time after the creation of a SSL session before it times out. (uses the standard elasticsearch {ref}/common-options.html#time-units[time units]).
| `shield.transport.ssl` | `false` | Set this parameter to `true` to enable SSL/TLS
| `shield.transport.ssl.client.auth` | `required` | Require client side certificates for transport protocol. Valid values are `required`, `optional`, and `no`. `required` forces a client to present a certificate, while `optional` requests a client certificate but the client is not required to present one.
| `shield.transport.filter.allow` | None | List of IP addresses to allow
| `shield.transport.filter.deny` | None | List of IP addresses to deny
| `shield.http.ssl` | `false` | Set this parameter to `true` to enable SSL/TLS
| `shield.http.ssl.client.auth` | `no` | Require client side certificates for HTTP. Valid values are `required`, `optional`, and `no`. `required` forces a client to present a certificate, while `optional` requests a client certificate but the client is not required to present one.
| `shield.http.filter.allow` | None | List of IP addresses to allow just for HTTP
| `shield.http.filter.deny` | None | List of IP addresses to deny just for HTTP
|======
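For example, a minimal transport-encryption configuration in `elasticsearch.yml` using the settings above (the keystore path and password are illustrative):

[source,yaml]
--------------------------------------------------
shield.transport.ssl: true
shield.ssl.keystore.path: /path/to/node01.jks
shield.ssl.keystore.password: changeit
--------------------------------------------------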
[[ref-ssl-tls-profile-settings]]
.Shield TLS/SSL settings per profile
[options="header"]
|======
| Name | Default | Description
| `transport.profiles.$PROFILE.shield.ssl` | Same as `shield.transport.ssl`| Setting this parameter to true will enable SSL/TLS for this profile; false will disable SSL/TLS for this profile.
| `transport.profiles.$PROFILE.shield.truststore.path` | None | Absolute path to the truststore of this profile
| `transport.profiles.$PROFILE.shield.truststore.password` | None | Password to the truststore
| `transport.profiles.$PROFILE.shield.truststore.algorithm` | SunX509 | Format for the truststore
| `transport.profiles.$PROFILE.shield.keystore.path` | None | Absolute path to the keystore of this profile
| `transport.profiles.$PROFILE.shield.keystore.password` | None | Password to the keystore
| `transport.profiles.$PROFILE.shield.keystore.key_password` | Same value as `transport.profiles.$PROFILE.shield.keystore.password` | Password for the private key in the keystore
| `transport.profiles.$PROFILE.shield.keystore.algorithm` | SunX509 | Format for the keystore
| `transport.profiles.$PROFILE.shield.session.cache_size` | `1000` | Number of SSL Sessions to cache in order to support session resumption. Setting the value to `0` means there is no size limit.
| `transport.profiles.$PROFILE.shield.session.cache_timeout` | `24h` | The time after the creation of a SSL session before it times out. (uses the standard elasticsearch {ref}/common-options.html#time-units[time units]).
| `transport.profiles.$PROFILE.shield.filter.allow` | None | List of IP addresses to allow for this profile
| `transport.profiles.$PROFILE.shield.filter.deny` | None | List of IP addresses to deny for this profile
| `transport.profiles.$PROFILE.shield.ssl.client.auth` | `required` | Require client side certificates. Valid values are `required`, `optional`, and `no`. `required` forces a client to present a certificate, while `optional` requests a client certificate but the client is not required to present one.
| `transport.profiles.$PROFILE.shield.type` | `node` | Defines allowed actions on this profile, allowed values: `node` and `client`
| `transport.profiles.$PROFILE.shield.ciphers` | `TLS_RSA_WITH_AES_128_CBC_SHA256`, `TLS_RSA_WITH_AES_128_CBC_SHA`, `TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA` | Supported cipher suites can be found in Oracle's http://docs.oracle.com/javase/7/docs/technotes/guides/security/SunProviders.html[Java Cryptography Architecture documentation]. Cipher suites using key lengths greater than 128 bits require the <<ciphers,JCE Unlimited Strength Jurisdiction Policy Files>>.
| `transport.profiles.$PROFILE.shield.protocol` | `TLSv1.2` | Protocol for security: `SSL`, `SSLv2`, `SSLv3`, `TLS`, `TLSv1`, `TLSv1.1`, `TLSv1.2`
| `transport.profiles.$PROFILE.shield.supported_protocols` | `TLSv1`, `TLSv1.1`, `TLSv1.2` | Supported protocols with versions. Valid protocols: `SSLv2Hello`, `SSLv3`, `TLSv1`, `TLSv1.1`, `TLSv1.2`
|======
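For example, a dedicated client profile with its own keystore might be sketched as follows (the profile name, port range, and paths are illustrative; the flat `transport.profiles.$PROFILE.shield.*` settings can equivalently be written nested in YAML):

[source,yaml]
--------------------------------------------------
transport.profiles:
  client:
    port: 9700-9800
    shield:
      type: client
      ssl: true
      keystore:
        path: /path/to/client-profile.jks
        password: changeit
--------------------------------------------------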
[[ref-shield-files]]
[float]
=== Files used by Shield
The Shield security plugin uses the following files:
* `config/shield/roles.yml` defines the roles in use on the cluster (read more <<roles-file,here>>).
* `config/shield/users` defines the hashed passwords for users on the cluster (read more <<users-file,here>>).
* `config/shield/users_roles` defines the role assignments for users on the cluster (read more <<users_roles-file,here>>).
* `config/shield/role_mapping.yml` maps Distinguished Names (DNs) to roles. This allows LDAP and Active Directory
users and groups, as well as PKI users, to be mapped to roles (read more <<ldap-role-mapping,here>>).
* `config/shield/logging.yml` contains audit information (read more <<logging-file,here>>).
* `config/shield/system_key` holds a cluster secret key used for message authentication (read more <<message-authentication,here>>).
Several of these files are in the YAML format. When you edit these files, be aware that YAML is indentation-level
sensitive and indentation errors can lead to configuration errors. Avoid the tab character to set indentation levels,
or use an editor that automatically expands tabs to spaces.
Be careful to properly escape YAML constructs such as `:` or leading exclamation points within quoted strings. Using
the `|` or `>` characters to define block literals instead of escaping the problematic characters can help avoid
problems.
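For example, a value containing YAML-significant characters can be written as a block literal instead of a quoted, escaped string (the key below is illustrative, not a Shield setting):

```yaml
# Quoted style - special characters must be escaped correctly:
description: "beware: leading '!' and ':' need care"
# Block literal style - the content is taken verbatim:
description: |
  beware: leading '!' and ':' need care
```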

[[release-notes]]
== Appendix 9. Release Notes
[[version-compatibility]]
[float]
=== Version Compatibility
Shield 2.x is compatible with:
* elasticsearch: 1.5.0+
* license: 1.0
[[upgrade-instructions]]
=== Upgrading Shield
To upgrade Shield, uninstall the current Shield plugin and install the new version of Shield. Your configuration
will be preserved, and you can do this with a rolling upgrade of Elasticsearch. On each node, after you have stopped it, run:
[source,shell]
---------------------------------------------------
bin/plugin -r shield
bin/plugin -i elasticsearch/shield/latest <1>
---------------------------------------------------
<1> `latest` will install the latest version of Shield compatible with your version of elasticsearch. A specific version,
such as `1.1.0`, can also be specified.
Then start the node. Larger sites should follow the steps in the {ref}/setup-upgrade.html#_1_0_and_later[rolling upgrade section]
in order to ensure recovery is as quick as possible.
On upgrade, your current configuration files will remain untouched. The configuration files provided by the new version
of Shield will be added with a `.new` extension.
==== Updated Role Definitions
The default role definitions in the `roles.yml` file may need to be changed to ensure proper functionality with other
applications such as Marvel and Kibana. Any role changes will be found in `roles.yml.new` after upgrading to the new
version of Shield. We recommend copying the changes listed below to your `roles.yml` file.
* added[1.1.0] `kibana4_server` role added that defines the minimum set of permissions necessary for the Kibana 4 server.
* added[1.0.1] `kibana4` role updated to work with new features in Kibana 4 RC1
[[changelist]]
=== Change List
[float]
==== 1.3.0
.new features
* <<pki,PKI Realm>>: Adds Public Key Infrastructure (PKI) authentication through the use of X.509 certificates in place of
username and password credentials.
* <<auditing, Index Output for Audit Events>>: An index based output has been added for storing audit events in an Elasticsearch index.
.breaking changes
* The `sha2` and `apr1` hashing algorithms have been removed as options for the <<ref-cache-hash-algo,`cache.hash_algo` setting>>.
If your existing Shield installation uses either of these options, remove the setting and use the default `ssha256`
algorithm.
* The `users` file now only supports `bcrypt` password hashing. All existing passwords stored using the `esusers` tool
have been hashed with `bcrypt` and are not affected.
.enhancements
* TLS 1.2 is now the default protocol.
* Clients that do not support pre-emptive basic authentication can now support both anonymous and authenticated access
by specifying the `shield.authc.anonymous.authz_exception` <<anonymous-access,setting>> with a value of `false`.
* Reduced logging for common SSL exceptions, such as a client closing the connection during a handshake.
.bug fixes
* The `esusers` and `syskeygen` tools now work correctly with environment variables in the RPM and DEB installation
environment files `/etc/sysconfig/elasticsearch` and `/etc/default/elasticsearch`.
* Default ciphers no longer include `TLS_DHE_RSA_WITH_AES_128_CBC_SHA`.
[float]
==== 1.2.2
* The `esusers` tool no longer warns about missing roles that are properly defined in the `roles.yml` file.
* The period character, `.`, is now allowed in usernames and role names.
* The {ref}/query-dsl-terms-filter.html#_caching_19[terms filter lookup cache] has been disabled to ensure all requests
are properly authorized. This removes the need to <<limitations-disable-cache,manually disable>> the terms filter
cache.
* For LDAP client connections, only the protocols and ciphers specified in the `shield.ssl.supported_protocols` and
`shield.ssl.ciphers` <<ref-ssl-tls-settings,settings>> will be used.
* The auditing mechanism now logs authentication failed events when a request contains an invalid authentication token.
[float]
==== 1.2.1
* Several bug fixes including a fix to ensure that {ref}/index-modules-allocation.html#disk[Disk-based Shard Allocation]
works properly with Shield
[float]
==== 1.2.0
* Adds support for elasticsearch 1.5
[float]
==== 1.1.1
* Several bug fixes including a fix to ensure that {ref}/index-modules-allocation.html#disk[Disk-based Shard Allocation]
works properly with Shield
[float]
==== 1.1.0
.new features
* LDAP:
** Add the ability to bind as a specific user for LDAP searches, which removes the need to specify `user_dn_templates`.
This mode of operation also makes use of connection pooling for better performance. Please see <<ldap-user-search, ldap user search>>
for more information.
** User distinguished names (DNs) can now be used for <<ldap-role-mapping, role mapping>>.
* Authentication:
** <<anonymous-access, Anonymous access>> is now supported (disabled by default).
* IP Filtering:
** IP Filtering settings can now be <<dynamic-ip-filtering,dynamically updated>> using the {ref}/cluster-update-settings.html[Cluster Update Settings API].
.enhancements
* Significant memory footprint reduction of internal data structures
* Test if SSL/TLS ciphers are supported and warn if any of the specified ciphers are not supported
* Reduce the amount of logging when a non-encrypted connection is opened and `https` is being used
* Added the <<kibana4-roles, `kibana4_server` role>>, which is a role that contains the minimum set of permissions required for the Kibana 4 server.
* The in-memory user credential cache hash algorithm now defaults to salted SHA-256 (see <<ref-cache-hash-algo, Cache hash algorithms>>)
.bug fixes
* Filter out sensitive settings from the settings APIs
[float]
==== 1.0.2
* Filter out sensitive settings from the settings APIs
* Significant memory footprint reduction of internal data structures
[float]
==== 1.0.1
* Fixed dependency issues with Elasticsearch 1.4.3 (and the Lucene 4.10.3 release that comes with it)
* Fixed a bug in how user roles were handled. When multiple roles were defined for a user, and one of the
roles only had cluster permissions, not all privileges were properly evaluated.
* Updated `kibana4` permissions to be compatible with Kibana 4 RC1
* Ensured the mandatory `base_dn` setting is set in the `ldap` realm configuration

[[hadoop]]
=== Shield with Elasticsearch for Apache Hadoop
Elasticsearch for Apache Hadoop ("ES-Hadoop") is capable of using HTTP basic and PKI authentication and/or TLS/SSL when accessing an Elasticsearch cluster. For full details, please refer to the ES-Hadoop documentation, in particular the `Security` section.

For authentication purposes, select the user for your ES-Hadoop client (for maintenance purposes it is best to create a dedicated user). Then, assign that user a role with the privileges required by your Hadoop/Spark/Storm job. Configure ES-Hadoop to use the user name and password through the `es.net.http.auth.user` and `es.net.http.auth.pass` properties. If PKI authentication is enabled, set up the appropriate `keystore` and `truststore` instead, through `es.net.ssl.keystore.location` and `es.net.ssl.truststore.location` (and their respective `.pass` properties to specify the password).

For secured transport, enable SSL/TLS through the `es.net.ssl` property by setting it to `true`. Depending on your SSL configuration (keystore, truststore, etc.) you might need to set other parameters as well - please refer to the http://www.elastic.co/guide/en/elasticsearch/hadoop/current/configuration.html[ES-Hadoop] documentation, specifically the `Configuration` and `Security` chapters.

=== HTTP/REST Clients
Elasticsearch works with standard HTTP http://en.wikipedia.org/wiki/Basic_access_authentication[basic authentication]
headers to identify the requester. Since Elasticsearch is stateless, this header must be sent with every request:
[source,shell]
--------------------------------------------------
Authorization: Basic <TOKEN> <1>
--------------------------------------------------
<1> The `<TOKEN>` is computed as `base64(USERNAME:PASSWORD)`
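For example, the token for the `rdeniro` user used later in this section can be computed on the command line (`printf` is used instead of `echo` so that no trailing newline is encoded):

```shell
printf 'rdeniro:taxidriver' | base64
```

This prints `cmRlbmlybzp0YXhpZHJpdmVy`, giving the header `Authorization: Basic cmRlbmlybzp0YXhpZHJpdmVy`.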
==== Client examples
This example uses `curl` without basic auth to create an index:
[source,shell]
-------------------------------------------------------------------------------
curl -XPUT 'localhost:9200/idx'
-------------------------------------------------------------------------------
[source,json]
-------------------------------------------------------------------------------
{
  "error": "AuthenticationException[Missing authentication token]",
  "status": 401
}
-------------------------------------------------------------------------------
Since no user is associated with the request above, an authentication error is returned. Now we'll use `curl` with
basic auth to create an index as the `rdeniro` user:
[source,shell]
---------------------------------------------------------
curl --user rdeniro:taxidriver -XPUT 'localhost:9200/idx'
---------------------------------------------------------
[source,json]
---------------------------------------------------------
{
  "acknowledged": true
}
---------------------------------------------------------
==== Client Libraries over HTTP
For more information about how to use Shield with the language specific clients please refer to
https://github.com/elasticsearch/elasticsearch-ruby/tree/master/elasticsearch-transport#authentication[Ruby],
http://elasticsearch-py.readthedocs.org/en/master/#ssl-and-authentication[Python],
https://metacpan.org/pod/Search::Elasticsearch::Role::Cxn::HTTP#CONFIGURATION[Perl],
http://www.elastic.co/guide/en/elasticsearch/client/php-api/current/_security.html[PHP],
http://nest.azurewebsites.net/elasticsearch-net/security.html[.NET],
http://www.elastic.co/guide/en/elasticsearch/client/javascript-api/current/auth-reference.html[Javascript]
////
Groovy - TODO link
////

=== Java clients
Elasticsearch supports two types of Java clients: _Node Client_ and _Transport Client_.
The _Node Client_ is a cluster node that joins the cluster and receives all the cluster events, in the same manner as
any other cluster node. Node clients cannot be allocated shards, and therefore cannot hold data. Node clients are not
eligible for election as a master node in the cluster. For more information about node clients, see the
http://www.elastic.co/guide/en/elasticsearch/client/java-api/current/node-client.html[following section].
Unlike the _Node Client_, the _Transport Client_ is not a node in the cluster. It uses the same transport protocol
that the cluster nodes use for inter-node communication, and is therefore very efficient: it bypasses the
marshalling and unmarshalling of requests to and from JSON that REST-based clients typically incur (read more about the
http://www.elastic.co/guide/en/elasticsearch/client/java-api/current/transport-client.html[_Transport Client_]).
Shield supports both clients. This section provides configuration instructions for these clients.
==== Node Client
WARNING: While _Node Clients_ may work with Shield, since these are actual nodes in the cluster, they require access
to a breadth of cluster management internal APIs. Additionally, just like all other nodes in the cluster,
_Node Clients_ require the License plugin to be installed and access to Shield configuration files that contain
sensitive data. For this reason, _Node Clients_ should be considered as unsafe clients. If you choose to use
these clients, make sure you treat them in the same way you treat any other node in your cluster. Your
application should sit next to the cluster within the same security zone.
There are several steps for setting up this client:
. Set the appropriate dependencies for your project
. Duplicate <<ref-shield-files, configuration files>> for authentication
. Configure the authentication token
. (Optional) If SSL/TLS is enabled, set up the keystore, then create and import the appropriate certificates.
===== Java project dependencies
If you plan on using the Node Client, you first need to make sure the Shield jar files (`elasticsearch-shield-2.0.0.jar`,
`automaton-1.11-8.jar`, `unboundid-ldapsdk-2.3.8.jar`) and the License jar file (`elasticsearch-license-2.0.0.jar`) are
in the classpath. You can either download the distributions, extract the jar files manually and include them in your
classpath, or you can pull them out of the Elasticsearch Maven repository.
===== Maven Example
The following snippet shows the configuration you will need to include in your project's `pom.xml` file:
[source,xml]
--------------------------------------------------------------
<project ...>
  <repositories>
    <!-- add the elasticsearch repo -->
    <repository>
      <id>elasticsearch-releases</id>
      <url>http://maven.elasticsearch.org/releases</url>
      <releases>
        <enabled>true</enabled>
      </releases>
      <snapshots>
        <enabled>false</enabled>
      </snapshots>
    </repository>
    ...
  </repositories>
  ...
  <dependencies>
    <!-- add the Shield jar as a dependency -->
    <dependency>
      <groupId>org.elasticsearch</groupId>
      <artifactId>elasticsearch-shield</artifactId>
      <version>2.0.0</version>
    </dependency>
    <!-- add the License jar as a dependency -->
    <dependency>
      <groupId>org.elasticsearch</groupId>
      <artifactId>elasticsearch-license-plugin</artifactId>
      <version>2.0.0</version>
      <scope>runtime</scope>
    </dependency>
    ...
  </dependencies>
  ...
</project>
--------------------------------------------------------------
===== Gradle Example
If you are using Gradle, then you will need to add the dependencies to your `build.gradle` file:
[source,groovy]
--------------------------------------------------------------
repositories {
  /* ... Any other repositories ... */

  // Add the Elasticsearch Maven Repository
  maven {
    url "http://maven.elasticsearch.org/releases"
  }
}

dependencies {
  // Provide the Shield jar on the classpath for compilation and at runtime
  // Note: Many projects can use the Shield jar as a runtime dependency
  compile "org.elasticsearch:elasticsearch-shield:2.0.0"

  /* ... */

  // Provide the License jar on the classpath at runtime (not needed for compilation)
  runtime "org.elasticsearch:elasticsearch-license-plugin:2.0.0"
}
--------------------------------------------------------------
It is also possible to manually download the http://maven.elasticsearch.org/releases/org/elasticsearch/elasticsearch-shield/2.0.0/elasticsearch-shield-2.0.0.jar[Shield jar]
and the http://maven.elasticsearch.org/releases/org/elasticsearch/elasticsearch-license-plugin/2.0.0/elasticsearch-license-plugin-2.0.0.jar[License jar]
files from our Maven repository.
===== Duplicate Shield Configuration Files
The _Node Client_ will authenticate requests before sending them to the cluster. To do this, copy the `users`,
`users_roles`, `roles.yml`, and `system_key` files from the <<ref-shield-files,Shield configuration files>> to a place
accessible to the node client. These files should be stored on the filesystem in a folder with restricted access, as they
contain sensitive data. The file locations can be configured with the following settings:
* `shield.authc.realms.esusers.files.users`
* `shield.authc.realms.esusers.files.users_roles`
* `shield.authz.store.files.roles`
* `shield.system_key.file`
[source, java]
------------------------------------------------------------------------------------------------------
import static org.elasticsearch.node.NodeBuilder.*;
...

Node node = nodeBuilder().client(true).settings(ImmutableSettings.builder()
        .put("cluster.name", "myClusterName")
        .put("discovery.zen.ping.multicast.enabled", false)
        .putArray("discovery.zen.ping.unicast.hosts", "localhost:9300", "localhost:9301")
        .put("shield.authc.realms.esusers.type", "esusers")
        .put("shield.authc.realms.esusers.files.users", "/Users/es/config/shield/users")
        .put("shield.authc.realms.esusers.files.users_roles", "/Users/es/config/shield/users_roles")
        .put("shield.authz.store.files.roles", "/Users/es/config/shield/roles.yml")
        .put("shield.system_key.file", "/Users/es/config/shield/system_key"))
        ...
        .node();
------------------------------------------------------------------------------------------------------
Additionally, if you are using LDAP or Active Directory authentication, you will need to specify that configuration
in the settings when configuring the node, or provide an `elasticsearch.yml` on the classpath with the appropriate settings.
===== Configuring Authentication Token
The authentication token can be configured in two ways: globally or per request. When setting it up globally, the
values of the username and password are configured in the client's settings:
[source,java]
------------------------------------------------------------------------------------------------------
import static org.elasticsearch.node.NodeBuilder.*;
import static org.elasticsearch.shield.authc.support.UsernamePasswordToken.basicAuthHeaderValue;
...

Node node = nodeBuilder().client(true).settings(ImmutableSettings.builder()
        ...
        .put("shield.user", "test_user:changeme"))
        ...
        .node();

Client client = node.client();
------------------------------------------------------------------------------------------------------
Once the client is created as above, the `shield.user` setting is translated to a request header in the standard HTTP
basic authentication form, `Authorization: Basic base64("test_user:changeme")`, which is sent with every request executed.
To skip the global configuration of the token, manually set the authentication token header on every request:
[source,java]
------------------------------------------------------------------------------------------------------
import org.elasticsearch.shield.authc.support.SecuredString;

import static org.elasticsearch.node.NodeBuilder.*;
import static org.elasticsearch.shield.authc.support.UsernamePasswordToken.basicAuthHeaderValue;
...

String token = basicAuthHeaderValue("test_user", new SecuredString("changeme".toCharArray()));

Node node = nodeBuilder().client(true).settings(ImmutableSettings.builder()
        ...
        .node();

Client client = node.client();
client.prepareSearch().putHeader("Authorization", token).get();
------------------------------------------------------------------------------------------------------
The example above executes a search request and manually adds the authentication token as a header on it.
===== Setting up SSL
Authenticating to the cluster requires proof that a node client is trusted as part of the cluster. This is done through
standard PKI and SSL: a client node creates a private key and an associated certificate, the cluster Certificate
Authority signs the certificate, and the client node authenticates during SSL connection setup by presenting the signed
certificate and proving ownership of the private key. All of these setup steps are described in
<<private-key, Securing Nodes>>.

In addition, the node client acts like a node, locally authenticating any request made. Copies of the `users`,
`users_roles`, `roles.yml`, and `system_key` files need to be made available to the node client.
After following the steps in <<private-key, Securing Nodes>>, configuration for a node client with Shield might look
like this:
[source, java]
------------------------------------------------------------------------------------------------------
import static org.elasticsearch.node.NodeBuilder.*;
...

Node node = nodeBuilder().client(true).settings(ImmutableSettings.builder()
        .put("cluster.name", "myClusterName")
        .put("discovery.zen.ping.multicast.enabled", false)
        .putArray("discovery.zen.ping.unicast.hosts", "localhost:9300", "localhost:9301")
        .put("shield.ssl.keystore.path", "/Users/es/node_client/node_client.jks")
        .put("shield.ssl.keystore.password", "password")
        .put("shield.transport.ssl", "true")
        .put("shield.authc.realms.esusers.type", "esusers")
        .put("shield.authc.realms.esusers.files.users", "/Users/es/config/shield/users")
        .put("shield.authc.realms.esusers.files.users_roles", "/Users/es/config/shield/users_roles")
        .put("shield.authz.store.files.roles", "/Users/es/config/shield/roles.yml")
        .put("shield.system_key.file", "/Users/es/config/shield/system_key"))
        ...
        .node();
------------------------------------------------------------------------------------------------------
[[transport-client]]
==== Transport Client
If you plan on using the Transport Client over SSL/TLS, you first need to make sure the Shield jar file
(`elasticsearch-shield-2.0.0.jar`) is in the classpath. You can either download the Shield distribution, extract the jar
file manually and include it in your classpath, or you can pull it out of the Elasticsearch Maven repository.
NOTE: Unlike the _Node Client_, the _Transport Client_ is not acting as a node in the cluster, and therefore
**does not** require the License plugin to be installed.
===== Maven Example
The following snippet shows the configuration you will need to include in your project's `pom.xml` file:
[source,xml]
--------------------------------------------------------------
<project ...>
  <repositories>
    <!-- add the elasticsearch repo -->
    <repository>
      <id>elasticsearch-releases</id>
      <url>http://maven.elasticsearch.org/releases</url>
      <releases>
        <enabled>true</enabled>
      </releases>
      <snapshots>
        <enabled>false</enabled>
      </snapshots>
    </repository>
    ...
  </repositories>
  ...
  <dependencies>
    <!-- add the shield jar as a dependency -->
    <dependency>
      <groupId>org.elasticsearch</groupId>
      <artifactId>elasticsearch-shield</artifactId>
      <version>2.0.0</version>
    </dependency>
    ...
  </dependencies>
  ...
</project>
--------------------------------------------------------------
===== Gradle Example
If you are using Gradle, then you will need to add the dependencies to your `build.gradle` file:
[source,groovy]
--------------------------------------------------------------
repositories {
  /* ... Any other repositories ... */

  // Add the Elasticsearch Maven Repository
  maven {
    url "http://maven.elasticsearch.org/releases"
  }
}

dependencies {
  // Provide the Shield jar on the classpath for compilation and at runtime
  // Note: Many projects can use the Shield jar as a runtime dependency
  compile "org.elasticsearch:elasticsearch-shield:2.0.0"

  /* ... */
}
--------------------------------------------------------------
It is also possible to manually download the http://maven.elasticsearch.org/releases/org/elasticsearch/elasticsearch-shield/2.0.0/elasticsearch-shield-2.0.0.jar[Shield jar]
file from our Maven repository.
TIP: Even if you are not planning on using the client over SSL/TLS, it is still worth having the Shield jar file in
the classpath as it provides various helpful utilities, such as the `UsernamePasswordToken` class for generating
basic-auth tokens and the `ShieldClient` that <<shield-client,exposes an API>> to clear realm caches.
[[java-transport-client-role]]
Before setting up the client itself, you need to make sure you have a user with sufficient privileges to start
the transport client. The transport client uses Elasticsearch's node info API to fetch information about the
nodes in the cluster. For this reason, the authenticated user of the transport client must have the
`cluster:monitor/nodes/info` cluster permission. Furthermore, if the client is configured to use sniffing, the
`cluster:monitor/state` cluster permission is required.
TIP: `roles.yml` ships with a predefined `transport_client` role. By default it is configured to only grant the
`cluster:monitor/nodes/info` cluster permission. You can use this role and assign it to any user
that will be attached to a transport client.
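If you prefer to define your own role instead of using the shipped one, a sketch of such a role in `roles.yml` might look like the following (the role name is arbitrary; the second permission is only needed for sniffing clients):

```yaml
# Hypothetical role for transport clients that use sniffing;
# drop cluster:monitor/state if sniffing is disabled.
my_transport_client:
  cluster: cluster:monitor/nodes/info, cluster:monitor/state
```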
Setting up the transport client is similar to setting up the node client, except that the authentication files do not
need to be configured. Without SSL, it is as simple as setting up the authentication token, in the same way as for
the _Node Client_:
[source,java]
------------------------------------------------------------------------------------------------------
import org.elasticsearch.client.transport.TransportClient;
...

TransportClient client = new TransportClient(ImmutableSettings.builder()
        .put("cluster.name", "myClusterName")
        .put("shield.user", "test_user:changeme"))
    .addTransportAddress(new InetSocketTransportAddress("localhost", 9300))
    .addTransportAddress(new InetSocketTransportAddress("localhost", 9301));
------------------------------------------------------------------------------------------------------
WARNING: Configuring a Transport Client without SSL will send passwords in plaintext.
When using SSL for transport client communication, a few more settings are required. By default, Shield requires client
authentication for secured transport communication, which means that every client needs a certificate signed
by a trusted CA. Client authentication can be disabled through the use of a <<separating-node-client-traffic, client-specific
transport profile>>.
Configuration required for SSL when using client authentication:
[source,java]
------------------------------------------------------------------------------------------------------
import org.elasticsearch.client.transport.TransportClient;
...

TransportClient client = new TransportClient(ImmutableSettings.builder()
        .put("cluster.name", "myClusterName")
        .put("shield.user", "test_user:changeme")
        .put("shield.ssl.keystore.path", "/path/to/client.jks")
        .put("shield.ssl.keystore.password", "password")
        .put("shield.transport.ssl", "true"))
    .addTransportAddress(new InetSocketTransportAddress("localhost", 9300))
    .addTransportAddress(new InetSocketTransportAddress("localhost", 9301));
------------------------------------------------------------------------------------------------------
NOTE: The `client.jks` keystore needs to contain the client's CA-signed certificate and the CA certificate.
Configuration required for SSL without client authentication:
[source,java]
------------------------------------------------------------------------------------------------------
import org.elasticsearch.client.transport.TransportClient;
...

TransportClient client = new TransportClient(ImmutableSettings.builder()
        .put("cluster.name", "myClusterName")
        .put("shield.user", "test_user:changeme")
        .put("shield.ssl.truststore.path", "/path/to/truststore.jks")
        .put("shield.ssl.truststore.password", "password")
        .put("shield.transport.ssl", "true"))
    .addTransportAddress(new InetSocketTransportAddress("localhost", 9300))
    .addTransportAddress(new InetSocketTransportAddress("localhost", 9301));
------------------------------------------------------------------------------------------------------
NOTE: The `truststore.jks` truststore needs to contain the certificate of the CA that has signed the Elasticsearch nodes'
certificates. If you are using a public CA that is already trusted by the Java runtime, then you can omit
`shield.ssl.truststore.path` and `shield.ssl.truststore.password`.
In the above code snippets, we set up a _Transport Client_ and configured the authentication token globally, meaning
that every request executed with this client will include this token in its headers.
The globally configured token *must* belong to a user with the privileges described earlier, such as those in the default
`transport_client` role. The global authentication token may also be overridden by adding an `Authorization` header on
each request. This is useful when an application uses multiple users to access Elasticsearch via the same client. When
operating in this mode, it is best to set the global token to a user that only has the `transport_client` role. The
following example directly sets the authentication token on the request when executing a search.
[source,java]
------------------------------------------------------------------------------------------------------
import org.elasticsearch.shield.authc.support.SecuredString;
import org.elasticsearch.client.transport.TransportClient;

import static org.elasticsearch.shield.authc.support.UsernamePasswordToken.basicAuthHeaderValue;
...

String token = basicAuthHeaderValue("test_user", new SecuredString("changeme".toCharArray()));

TransportClient client = new TransportClient(ImmutableSettings.builder()
        .put("shield.user", "transport_client_user:changeme")
        ...)
    .addTransportAddress(new InetSocketTransportAddress("localhost", 9300))
    .addTransportAddress(new InetSocketTransportAddress("localhost", 9301));

client.prepareSearch().putHeader("Authorization", token).get();
------------------------------------------------------------------------------------------------------
===== Anonymous Access
added[1.1.0]
If <<anonymous-access,anonymous access>> is enabled in Shield, the `shield.user` setting may be dropped and all requests
will be executed as the anonymous user (with the exception of requests on which the `Authorization` header is explicitly
set, as shown above). For this to work, make sure the anonymous user is assigned roles that grant
the privileges described <<java-transport-client-role,above>>.
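A minimal sketch of the corresponding node-side configuration in `elasticsearch.yml` might look like the following; the key names should be verified against the <<anonymous-access,anonymous access>> section, and the assigned role is illustrative:

```yaml
shield.authc.anonymous:
  roles: transport_client   # roles granted to unauthenticated requests
  authz_exception: true     # report authorization failures instead of prompting for credentials
```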
[[shield-client]]
==== Shield Client
Shield exposes its own API to the user, accessible through the `ShieldClient` class. The purpose of this API
is to manage all Shield-related aspects. While at the moment it only exposes an operation for clearing the
realm caches, the plan is to extend this API in the future.
`ShieldClient` is a wrapper around an existing client (any class implementing `org.elasticsearch.client.Client`).
The following example shows how to clear Shield's realm caches using the `ShieldClient`:
[source,java]
------------------------------------------------------------------------------------------------------
import static org.elasticsearch.node.NodeBuilder.*;
...

Client client = ... // create the client (either transport or node)

ShieldClient shieldClient = new ShieldClient(client);

ClearRealmCacheResponse response = shieldClient.authc().prepareClearRealmCache()
        .realms("ldap1", "ad1")
        .usernames("rdeniro")
        .get();
------------------------------------------------------------------------------------------------------
In the example above, we clear the caches of two realms - `ldap1` and `ad1` - for the `rdeniro` user.

[[kibana]]
=== Kibana
Shield supports both Kibana 3 and Kibana 4.0+ releases. The configuration required differs
between Kibana 3 and 4. Please follow the instructions below for the version of Kibana you are working with.
=== Shield with Kibana 3
Shield and Kibana 3 have been tested together for recent versions of Chrome, Safari, and IE. This section describes
configuration changes and general information to ensure that the two products work together successfully for you.
Kibana 3 stores saved dashboards in the `kibana-int` index, which is shared by all users, so all users must be able
to save and load dashboards from this index. When the Shield plugin is installed, users may be able to load
dashboards that access data in indices they are not authorized to view. A user who loads such a dashboard
will receive a Kibana error stating that the disallowed index does not exist.
At the moment, there is no way to control which users can load which dashboards. We expect to address this
limitation in future versions of Shield and Kibana.
==== Kibana configuration
Kibana needs to be informed that you wish to use credentials. In Kibana's `config.js`, set the `elasticsearch` property:
[source,yaml]
------------------------------------
elasticsearch: {server: "http://YOUR_ELASTICSEARCH_SERVER:9200", withCredentials: true}
------------------------------------
[[cors]]
==== Elasticsearch configuration
HTTP authentication interacts with cross-origin resource sharing (CORS). Clusters that use CORS must send authentication
headers to the browser.
In the `elasticsearch.yml` file on all nodes, add the following configuration entries:
[source,yaml]
------------------------------------
http.cors.enabled: true
http.cors.allow-origin: "https://MYHOST:MYPORT"
http.cors.allow-credentials: true
------------------------------------
Note that in `http.cors.allow-origin`, `*` is disallowed for credentialed requests. You must enter the correct
protocol, hostname and port that would normally be entered into your browser.
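If you serve Kibana from more than one origin, Elasticsearch also accepts a regular expression for this setting when the value is wrapped in `/` delimiters. The following is a sketch; the host names are illustrative, and you should verify regex support against your Elasticsearch version:

[source,yaml]
------------------------------------
http.cors.allow-origin: /https:\/\/(kibana1|kibana2)\.example\.com(:[0-9]+)?/
------------------------------------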
Restart the nodes after modifying the configuration file. This change enables Elasticsearch to send the required
`Access-Control-Allow-Credentials` header.
NOTE: To learn more about enabling CORS, see http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/modules-http.html[elasticsearch documentation].
==== Shield configuration
Shield includes a default <<roles,role>> for use with Kibana 3:
[source,yaml]
------------------------------------------------------------------------------------------------------------------------
kibana3:
cluster: cluster:monitor/nodes/info
indices:
'*': indices:data/read/search, indices:data/read/get, indices:admin/get <1>
'kibana-int': indices:data/read/get, indices:data/read/search, indices:data/write/delete, indices:data/write/index,
create_index
------------------------------------------------------------------------------------------------------------------------
<1> This line gives the Kibana 3 user read access to indices in order to search and display the data in them. To
constrain this role's access to specific indices, alter the wildcard.
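For example, a derived role that limits Kibana 3 users to a hypothetical `logs-*` set of indices might look like the following sketch (the role and index names are illustrative):

[source,yaml]
------------------------------------------------------------------------------------------------------------------------
kibana3_logs:
  cluster: cluster:monitor/nodes/info
  indices:
    'logs-*': indices:data/read/search, indices:data/read/get, indices:admin/get
    'kibana-int': indices:data/read/get, indices:data/read/search, indices:data/write/delete, indices:data/write/index,
                  create_index
------------------------------------------------------------------------------------------------------------------------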
Kibana 3 uses the `kibana-int` index to save and load dashboards. This role definition allows the user to manage and
use the dashboards in the `kibana-int` index.
Kibana 3 uses the cluster permission to access the `/_nodes` endpoint in order to check the node version.
We recommend that you create one or more roles derived from this role, granting access to the indices specified
by your organization's goals and policies.
==== SSL/TLS and browsers
===== Trusting certificates
As discussed in <<securing-nodes, Securing Nodes>>, Shield supports adding SSL to the Elasticsearch HTTP interface.
When using Kibana, your browser verifies that the certificate received from the Elasticsearch node is trusted
before sending a request to the node. Establishing this trust requires that either your browser or operating
system trust the Certificate Authority (CA) that signed the node's certificate. To use SSL with Shield and
Kibana 3, ensure that the browser or operating system has been configured to trust this CA.
The process to ensure this trust varies per organization. Some organizations will have pre-installed these CA
certificates into the operating system or the browser's local certificate store. If this is the case, you will
not need to take any further action.
Other organizations will not have pre-installed the CA certificate. Or you may have created your own CA as discussed
in <<certificate-authority, Appendix 1>>. In these cases, we recommend that you consult your local IT professional to
determine the recommended procedure for adding trusted CAs in your organization.
===== Working with source builds of Kibana 3
Some developers use Kibana 3 by pulling the software from our GitHub repository, rather than using a built package
from our download site. If you do this, be sure to clear your browser's cache after deploying Shield and
configuring the `http.cors.allow-credentials` parameter, to avoid authentication errors in most browsers.
=== Shield with Kibana 4
Kibana 4 adds a server-side component that changes the integration with Shield and the steps required to configure Shield, Elasticsearch, and Kibana to work together. With Kibana 4, the browser makes requests to the Kibana 4 server, and not to Elasticsearch directly. The Kibana 4 server then makes requests to Elasticsearch on behalf of the browser. We recommend using separate roles for your users who log into Kibana and for the Kibana 4 server itself.
[[kibana4-roles]]
==== Configuring Roles for Kibana 4 Users
Kibana users need access to the indices that they will be working with and the `.kibana` index where their
saved searches, visualizations, and dashboards are stored. Shield includes a default `kibana4` role that grants
read access to all indices and full access to the `.kibana` index.
IMPORTANT: The default Kibana 4 user role grants read access to all indices. We strongly recommend deriving
custom roles for your Kibana users that limit access to specific indices according to your organization's goals and policies.
[source,yaml]
------------------------------------------------------------------------------------------------------------------------
kibana4:
cluster:
- cluster:monitor/nodes/info
- cluster:monitor/health
indices:
'*':
- indices:admin/mappings/fields/get
- indices:admin/validate/query
- indices:data/read/search
- indices:data/read/msearch
'.kibana':
- indices:admin/create
- indices:admin/exists
- indices:admin/mapping/put
- indices:admin/mappings/fields/get
- indices:admin/refresh
- indices:admin/validate/query
- indices:data/read/get
- indices:data/read/mget
- indices:data/read/search
- indices:data/write/delete
- indices:data/write/index
- indices:data/write/update
------------------------------------------------------------------------------------------------------------------------
To constrain Kibana's access to specific indices, explicitly specify the index names in your role. When configuring a role for a Kibana user and granting access to a specific index, at a minimum the user needs the following privileges on the index:
* `indices:admin/mappings/fields/get`
* `indices:admin/validate/query`
* `indices:data/read/search`
* `indices:data/read/msearch`
* `indices:admin/get`
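Putting these together, a derived role for Kibana users who should only see a hypothetical `logs-*` set of indices might look like the following sketch (the role and index names are illustrative):

[source,yaml]
------------------------------------------------------------------------------------------------------------------------
kibana4_logs_user:
  cluster:
    - cluster:monitor/nodes/info
    - cluster:monitor/health
  indices:
    'logs-*':
      - indices:admin/mappings/fields/get
      - indices:admin/validate/query
      - indices:data/read/search
      - indices:data/read/msearch
      - indices:admin/get
    '.kibana':
      - indices:admin/exists
      - indices:admin/mappings/fields/get
      - indices:admin/validate/query
      - indices:data/read/get
      - indices:data/read/mget
      - indices:data/read/search
      - indices:data/write/delete
      - indices:data/write/index
      - indices:data/write/update
------------------------------------------------------------------------------------------------------------------------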
[[kibana4-server-role]]
==== Configuring a Role for the Kibana 4 Server
The Kibana 4 server needs access to the cluster monitoring APIs and the `.kibana` index. However, the server
does not need access to user indexes. The following `kibana4_server` role shows the privileges required
by the Kibana 4 server.
NOTE: This role is included in `roles.yml` by default as of Shield 1.1. If you are running an older version of Shield,
you need to add it yourself.
[source,yaml]
------------------------------------------------------------------------------------------------------------------------
kibana4_server:
cluster:
- cluster:monitor/nodes/info
- cluster:monitor/health
indices:
'.kibana':
- indices:admin/exists
- indices:admin/mapping/put
- indices:admin/mappings/fields/get
- indices:admin/refresh
- indices:admin/validate/query
- indices:data/read/get
- indices:data/read/mget
- indices:data/read/search
- indices:data/write/delete
- indices:data/write/index
- indices:data/write/update
------------------------------------------------------------------------------------------------------------------------
To configure the Kibana 4 server, you must assign this role to a user and add the user credentials to `kibana.yml`.
For more information, see http://www.elastic.co/guide/en/kibana/current/production.html#configuring-kibana-shield[Configuring Kibana to Work with Shield] in the Kibana 4 User Guide.
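At the time of writing, the relevant `kibana.yml` settings look like the following sketch. The user name and password are illustrative, and setting names may differ between Kibana releases, so consult the linked guide for your version:

[source,yaml]
------------------------------------------------------------------------------------------------------------------------
elasticsearch_url: "http://localhost:9200"
kibana_elasticsearch_username: kibana4-server
kibana_elasticsearch_password: changeme
------------------------------------------------------------------------------------------------------------------------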
==== Configuring Kibana 4 to Use SSL
You should also configure Kibana 4 to use SSL encryption for both client requests and the requests the Kibana server sends to Elasticsearch. For more information, see http://www.elastic.co/guide/en/kibana/current/production.html#enabling-ssl[Enabling SSL] in the Kibana 4 User Guide.

[[logstash]]
=== Shield with Logstash
IMPORTANT: Shield 2.0.x is compatible with Logstash 1.5 and above.
Logstash provides Elasticsearch https://www.elastic.co/guide/en/logstash/current/plugins-outputs-elasticsearch.html[output], https://www.elastic.co/guide/en/logstash/current/plugins-inputs-elasticsearch.html[input] and https://www.elastic.co/guide/en/logstash/current/plugins-filters-elasticsearch.html[filter] plugins
used to index and retrieve documents through HTTP, transport or client node protocols.
All plugins support authentication and encryption over HTTP, while the output plugin additionally supports these
features over the transport protocol.
NOTE: When using the elasticsearch output, only the `transport` and `http` protocols are supported (the `node` protocol is not supported).
For information on setting up authentication and authorization on the Elasticsearch side, check the corresponding
documentation sections: <<authorization,_Authorization_>> and <<authentication,_Authentication_>>.
To configure the certificates and other SSL related options, see <<securing-nodes,_Securing Nodes_>>.
[[ls-user]]
==== Creating a user
By default, the Shield plugin includes a dedicated `logstash` <<roles,role>> that grants the privileges to create
indices whose names match the `logstash-*` pattern, along with privileges to read, scroll, index, update, and delete
documents in those indices:
[source,yaml]
--------------------------------------------------------------------------------------------
logstash:
cluster: indices:admin/template/get, indices:admin/template/put
indices:
'logstash-*': indices:data/write/bulk, indices:data/write/delete, indices:data/write/update, indices:data/read/search, indices:data/read/scroll, create_index
--------------------------------------------------------------------------------------------
See the <<roles-file,_Role Definition File_>> section for information on modifying roles.
Create a user associated with the `logstash` role on the Elasticsearch cluster, using the <<esusers,`esusers` tool>>:
[source,shell]
--------------------------------------------------
esusers useradd <username> -p <password> -r logstash
--------------------------------------------------
NOTE: When using the transport protocol, the logstash user requires the predefined `transport_client` role in addition to the `logstash` role shown above (`-r logstash,transport_client`).
Once you've created the user, you are ready to configure Logstash.
[[ls-http]]
==== Connecting with HTTP/HTTPS
The input, filter, and output plugins all support HTTP Basic Authentication as well as SSL/TLS.
The sections below demonstrate the output plugin's configuration parameters; the input and filter plugins use the same parameters.
[[ls-http-auth]]
===== Basic Authentication
To connect to an instance of Elasticsearch with Shield, set up the username and password credentials with the following
configuration parameters:
[source, shell]
--------------------------------------------------
input { ... }
output {
elasticsearch {
protocol => "http"
...
user => ... # string
password => ... # string
}
}
--------------------------------------------------
[[ls-http-ssl]]
===== SSL/TLS Configuration for HTTPS
To enable SSL/TLS encryption for HTTPS, use the following configuration block:
[source, shell]
--------------------------------------------------
input { ... }
output {
elasticsearch {
protocol => "http"
...
ssl => true
cacert => '/path/to/cert.pem' <1>
}
}
--------------------------------------------------
<1> The path to the `.pem` file in your filesystem that contains the Certificate Authority's certificate.
[[ls-transport]]
==== Connecting with Transport protocol
By setting the `protocol` option to `transport`, Logstash communicates with the Elasticsearch cluster through the same
protocol the nodes use between each other. This avoids JSON marshalling and unmarshalling and is therefore more efficient.
To use this option, install an additional plugin in Logstash using the following command:
[source, shell]
--------------------------------------------------
bin/plugin install logstash-output-elasticsearch-shield
--------------------------------------------------
[[ls-transport-auth]]
===== Authentication for Transport protocol
The transport protocol supports both basic authentication and client-certificate authentication through a Public Key Infrastructure (PKI).
[[ls-transport-auth-basic]]
===== Basic Authentication
To connect to an instance of Elasticsearch with Shield using basic auth, set up the username and password credentials with the following configuration parameters:
[source, shell]
--------------------------------------------------
input { ... }
output {
elasticsearch {
protocol => "transport"
...
user => ... # string
password => ... # string
}
}
--------------------------------------------------
[[ls-transport-auth-pki]]
===== PKI Authentication
To connect to an instance of Elasticsearch with Shield using client-certificate authentication, you need to set the path to the keystore that contains the client's certificate, along with the keystore password, in the configuration:
[source, shell]
--------------------------------------------------
input { ... }
output {
elasticsearch {
protocol => "transport"
...
ssl => true
keystore => ... # string
keystore_password => ... # string
}
}
--------------------------------------------------
[[ls-transport-conf]]
===== SSL Configuration for Transport or Node protocols
Specify the paths to the keystore and truststore `.jks` files with the following configuration parameters:
[source, shell]
--------------------------------------------------
input { ... }
output {
elasticsearch {
protocol => "transport"
host => ... # string (optional)
cluster => ... # string (optional)
...
ssl => true
keystore => ... # string
keystore_password => ... # string
truststore => ... # string
truststore_password => ... # string
}
}
--------------------------------------------------
For more information on encryption and certificates, see the <<ssl-tls,Securing Nodes>> section.
[[ls-failure]]
==== Failures
Logstash raises an exception that halts the processing pipeline when the server's certificate does not validate over SSL
on any of the protocols discussed in this section. The same happens when invalid user credentials are provided.

[[marvel]]
=== Shield with Marvel
Marvel consists of a user interface over a data exporter known as the _agent_. The agent runs on each node and accesses
that node's monitoring API. The agent can store this collected data locally, on the cluster, or send the data to an
external monitoring cluster. Users can view and analyze the collected data with the Marvel UI.
To work with the Shield plugin, Marvel's configuration needs to be adapted for the _production_ cluster, which is the
cluster being monitored, as well as the _monitoring_ cluster, where the monitoring data is stored. For clusters that
store their own monitoring data, apply both sets of settings to the single, production cluster.
You will configure at least two users to work with Marvel. These users have to exist on the monitoring cluster.
* The Agent needs to be assigned a user with the correct <<roles,privileges>> to write data to the Marvel indices
named `.marvel-*`, check the Marvel index template, and upload the Marvel index template. You need only one agent user.
* Marvel UI users must authenticate and have privileges to read data from the Marvel indices. These users also
need to be able to call the Nodes Info API in order to get the monitoring cluster's Elasticsearch version.
This version check allows Marvel to be compatible with many versions of Elasticsearch. You can have as many of
these end users configured as you would like.
The default `roles.yml` file includes definitions for these two roles. The steps below show you how to create these
users on the monitoring cluster.
[[monitoring-cluster]]
==== Monitoring Cluster Settings
The monitoring cluster is used to both store and view the Marvel data. When configuring Shield, you need to perform the
following actions:
* Make sure there is a user created with the `marvel_agent` role. Marvel uses this to export the data.
* Make sure there is a user created with the `marvel_user` role. You use this to view the Marvel UI and get license information.
* When using Marvel on a production server, you must enter your Marvel license. This license is stored in the
monitoring cluster. This step needs to be performed once, by a user with permission to write to the `.marvel-kibana`
index. The `.marvel-kibana` index is used to store Marvel UI settings (for example, custom warning levels), so
write permission for `.marvel-kibana` is also required for UI customizations. Both storing the license and storing
settings can be done by any user with the `marvel_user` role.
These roles are defined in the default `roles.yml`:
[source,yaml]
--------------------------------------------------
marvel_agent:
cluster: indices:admin/template/get, indices:admin/template/put
indices:
'.marvel-*': indices:data/write/bulk, create_index
marvel_user:
cluster: cluster:monitor/nodes/info, cluster:admin/plugin/license/get
indices:
'.marvel-*': all
--------------------------------------------------
Once the roles are configured, create a user for the agent:
[source,shell]
--------------------------------------------------
bin/shield/esusers useradd marvel_export -p strongpassword -r marvel_agent
--------------------------------------------------
Then create one or more users for the Marvel UI:
[source,shell]
--------------------------------------------------
bin/shield/esusers useradd USER -p strongerpassword -r marvel_user
--------------------------------------------------
==== Production Cluster Settings
The Marvel agent is installed on every node in the production cluster. The agent collects monitoring data from the
production cluster and stores the data on the monitoring cluster. The agent's configuration specifies a list of
hostname and port combinations for access to the monitoring cluster.
When the monitoring cluster uses the Shield plugin and is configured to accept only HTTPS requests, you must configure the agent
on the production cluster to use HTTPS instead of the default HTTP protocol.
Authentication and protocol configuration are both controlled by the `marvel.agent.exporter.es.hosts` setting in the
node's `elasticsearch.yml` file. The setting accepts a list of monitoring cluster servers to serve as a fallback
in case a server is unavailable. Each of these servers must be properly configured, as in the following example:
.Example `marvel.agent.exporter.es.hosts` setting
[source,yaml]
-------------------------------------------------------------------------------------------------------------------
marvel.agent.exporter.es.hosts: [ "https<1>://USER:PASSWORD<2>@node01:9200", "https://USER:PASSWORD@node02:9200"]
-------------------------------------------------------------------------------------------------------------------
<1> Indicates to use HTTPS.
<2> Username and password. The user needs to be configured on the Monitoring Cluster as described in the next section.
When the monitoring cluster uses HTTPS, the Marvel agent will attempt to validate the certificate of the Elasticsearch
node in the monitoring cluster. If you are using your own CA, you should specify a truststore that contains the signing
certificate of the CA. Here is an example config for the `marvel.agent.exporter.es.ssl.truststore.*` settings:
[source,yaml]
-------------------------------------------------------------------------------------------------------------
marvel.agent.exporter.es.hosts: [ "https://USER:PASSWORD@node01:9200", "https://USER:PASSWORD@node02:9200"]
marvel.agent.exporter.es.ssl.truststore.path: FULL_FILE_PATH
marvel.agent.exporter.es.ssl.truststore.password: PASSWORD
-------------------------------------------------------------------------------------------------------------
See the http://www.elastic.co/guide/en/marvel/current/configuration.html[Marvel documentation] for more details about
other SSL related settings.
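If you need to build such a truststore from a CA certificate, `keytool` (shipped with the JDK) can create it. The following sketch generates a throwaway self-signed certificate purely to illustrate the import step; in practice you would import your real CA's certificate, and all names and passwords here are illustrative:

[source,shell]
-------------------------------------------------------------------------------------------------------------
# Generate a throwaway self-signed certificate (illustrative only; in
# practice you would start from your real CA's certificate file)
keytool -genkeypair -alias demo_ca -keyalg RSA -keysize 2048 \
  -dname "CN=Demo CA" -keystore demo_ca.jks -storepass changeit \
  -keypass changeit -validity 365

# Export its certificate to a file
keytool -exportcert -alias demo_ca -keystore demo_ca.jks \
  -storepass changeit -file demo_ca.crt

# Import the CA certificate into the truststore the Marvel agent will use
keytool -importcert -noprompt -alias demo_ca -file demo_ca.crt \
  -keystore truststore.jks -storepass changeit
-------------------------------------------------------------------------------------------------------------

The resulting `truststore.jks` is what the `marvel.agent.exporter.es.ssl.truststore.path` setting would point to.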
NOTE: The 1.3.0 release of Marvel adds HTTPS support.
==== Marvel user interface & Sense
The Marvel UI supports SSL without the need for any additional configuration. You can change the URL access scheme for Marvel to
HTTPS.
Users attempting to access the Marvel UI with the URL `https://HOST:9200/_plugin/marvel` must provide valid
credentials. See <<monitoring-cluster,Monitoring Cluster settings>> for information on the required user configuration.
Sense also supports HTTPS access. Users that access Sense over URLs of the form
`https://host:9200/_plugin/marvel/sense/index.html` must provide valid credentials if they have not already
authenticated to a dashboard.
Users connecting to the production cluster with Sense must provide valid credentials. Clusters must be configured to
enable cross-origin requests to enable users to connect with Sense. See the <<cors, CORS>> documentation for details.
NOTE: Providing user credentials to Sense in order to access another cluster is only supported in releases 1.3.0 and
later of Marvel.

[[esusers]]
=== esusers - Internal File Based Authentication
The _esusers_ realm is the default Shield realm. It enables the registration of users and their passwords, and
associates those users with roles. The `esusers` command-line tool assists with the registration and
administration of users.
==== `esusers` Realm Settings
Like all other realms, the `esusers` realm is configured under the `shield.authc.realms` settings namespace in the
`elasticsearch.yml` file. The following snippet shows an example of such configuration:
.Example `esusers` Realm Configuration
[source, yaml]
------------------------------------------------------------
shield:
authc:
realms:
default:
type: esusers
order: 0
------------------------------------------------------------
[[esusers-settings]]
.`esusers` Realm Settings
|=======================
| Setting | Required | Description
| `type` | yes | Indicates the realm type and must be set to `esusers`.
| `order` | no | Indicates the priority of this realm within the realm chain. Realms with lower order will be consulted first. Although not required, it is highly recommended to explicitly set this value when multiple realms are configured. Defaults to `Integer.MAX_VALUE`.
| `enabled` | no | Indicates whether this realm is enabled/disabled. Provides an easy way to disable realms in the chain without removing their configuration. Defaults to `true`.
| `files.users` | no | Points to the location of the `users` file where the users and their passwords are stored. Defaults to the `users` file under Shield's <<shield-config, config directory>>.
| `files.users_roles` | no | Points to the location of the `users_roles` file where the users and their roles are stored. Defaults to the `users_roles` file under Shield's <<shield-config, config directory>>.
| `cache.ttl` | no | Specifies the time-to-live for cached user entries (a user and its credentials will be cached for this period of time). Defaults to `20m` (use the standard elasticsearch {ref}/common-options.html#time-units[time units]).
| `cache.max_users` | no | Specifies the maximum number of user entries that can live in the cache at a given time. Defaults to 100,000.
| `cache.hash_algo` | no | (Expert Setting) Specifies the hashing algorithm that will be used for the in-memory cached user credentials (see <<esusers-cache-hash-algo,here>> for possible values).
|=======================
NOTE: When no realms are explicitly configured in `elasticsearch.yml`, a default realm chain is created that holds
a single `esusers` realm. If you only wish to work with the `esusers` realm and are satisfied with the default
file paths, there is no need to add the above configuration.
==== The `esusers` Command Line Tool
The `esusers` command line tool is located under Shield's <<shield-bin, bin>> directory and enables several
administrative tasks for managing users:
* <<esusers-add,Adding users>>
* <<esusers-list,Listing users and roles>>
* <<esusers-pass,Managing user passwords>>
* <<esusers-roles,Managing users' roles>>
* <<esusers-del,Removing users>>
[[esusers-add]]
===== Adding Users
The `esusers useradd` command adds a user to your cluster.
NOTE: To ensure that Elasticsearch can read the user and role information at startup, run `esusers useradd` as the
same user you use to run Elasticsearch. Running the command as root or some other user will update the permissions
for the `users` and `users_roles` files and prevent Elasticsearch from accessing them.
[source,shell]
----------------------------------------
esusers useradd <username>
----------------------------------------
A username must be at least 1 character and no longer than 30 characters. The first character must be a letter
(`a-z` or `A-Z`) or an underscore (`_`). Subsequent characters can be letters, underscores (`_`), digits (`0-9`), or any
of the following symbols: `@`, `-`, `.` or `$`.
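As a sketch, the rules above correspond to a pattern like the following. The regular expression and helper function are an illustration only, not the tool's actual implementation:

[source,shell]
----------------------------------------
# Hypothetical helper: succeeds when a username satisfies the rules above
valid_username() {
  printf '%s' "$1" | grep -Eq '^[A-Za-z_][A-Za-z0-9_@.$-]{0,29}$'
}

valid_username "jacknich" && echo "jacknich: valid"
valid_username "9lives" || echo "9lives: invalid (must start with a letter or underscore)"
----------------------------------------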
You can specify the user's password at the command line with the `-p` option. When this option is absent, the
`esusers` command prompts you for the password. Omit the `-p` option to keep plaintext passwords out of the terminal
session's command history.
[source,shell]
----------------------------------------------------
esusers useradd <username> -p <secret>
----------------------------------------------------
Passwords must be at least 6 characters long.
You can define a user's roles with the `-r` parameter. This parameter accepts a comma-separated list of role names to
associate with the user.
[source,shell]
-------------------------------------------------------------------
esusers useradd <username> -r <comma-separated list of role names>
-------------------------------------------------------------------
The following example adds a new user named `jacknich` to the _esusers_ realm. The password for this user is
`theshining`, and this user is associated with the `logstash` and `marvel` roles.
[source,shell]
---------------------------------------------------------
esusers useradd jacknich -p theshining -r logstash,marvel
---------------------------------------------------------
For valid role names please see <<valid-role-name, Role Definitions>>.
[[esusers-list]]
===== Listing Users
The `esusers list` command lists the users registered in the _esusers_ realm, as in the following example:
[source, shell]
----------------------------------
esusers list
rdeniro : admin
alpacino : power_user
jacknich : marvel,logstash
----------------------------------
Users are in the left-hand column and their corresponding roles are listed in the right-hand column.
===== Listing Specific Users
The `esusers list <username>` command lists a specific user. Use this command to verify that a user has been
successfully added to the cluster.
[source,shell]
-----------------------------------
esusers list jacknich
jacknich : marvel,logstash
-----------------------------------
[[esusers-pass]]
===== Changing Users' Passwords
The `esusers passwd` command enables you to reset a user's password. You can specify the new password directly with the
`-p` option. When `-p` option is omitted, the tool will prompt you to enter and confirm a password in interactive mode.
[source,shell]
--------------------------------------------------
esusers passwd <username>
--------------------------------------------------
[source,shell]
--------------------------------------------------
esusers passwd <username> -p <password>
--------------------------------------------------
[[esusers-roles]]
===== Changing Users' Roles
The `esusers roles` command manages the roles associated with a particular user. The `-a` option adds a comma-separated
list of roles to a user. The `-r` option removes a comma-separated list of roles from a user. You can combine adding and
removing roles within the same command to change a user's roles.
[source,shell]
------------------------------------------------------------------------------------------------------------
esusers roles <username> -a <comma-separated list of roles> -r <comma-separated list of roles>
------------------------------------------------------------------------------------------------------------
The following command removes the `logstash` and `marvel` roles from user `jacknich`, as well as adding the `user` role:
[source,shell]
---------------------------------------------------------------
esusers roles jacknich -r logstash,marvel -a user
---------------------------------------------------------------
Listing the user displays the new role assignment:
[source,shell]
---------------------------------
esusers list jacknich
jacknich : user
---------------------------------
[[esusers-del]]
===== Deleting Users
The `esusers userdel` command deletes a user.
[source,shell]
--------------------------------------------------
esusers userdel <username>
--------------------------------------------------
==== How `esusers` Works
The `esusers` tool manipulates two files, `users` and `users_roles`, in Shield's
<<shield-config,config>> directory. These two files store all user data for the _esusers_ realm and are read by Shield
on startup.
By default, Shield checks these files for changes every 5 seconds. You can change this default behavior by changing the
value of the `resource.reload.interval.high` setting in the `elasticsearch.yml` file.
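For example, to have Shield pick up changes every two seconds instead (a sketch; tune the interval to your needs):

[source,yaml]
--------------------------------------------------
resource.reload.interval.high: 2s
--------------------------------------------------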
[IMPORTANT]
==============================
These files are managed locally by the node and are **not** managed
globally by the cluster. This means that with a typical multi-node cluster,
the exact same changes need to be applied on each and every node in the
cluster.
A safer approach would be to apply the change on one of the nodes and have the
`users` and `users_roles` files distributed/copied to all other nodes in the
cluster (either manually or using a configuration management system such as
Puppet or Chef).
==============================
While it is possible to modify these files directly using any standard text
editor, we strongly recommend using the `esusers` command-line tool to apply
the required changes.
[[users-file]]
===== The `users` File
The `users` file stores all the users and their passwords. Each line in the `users` file represents a single user entry
consisting of the username and **hashed** password.
[source,bash]
----------------------------------------------------------------------
rdeniro:$2a$10$BBJ/ILiyJ1eBTYoRKxkqbuDEdYECplvxnqQ47uiowE7yGqvCEgj9W
alpacino:$2a$10$cNwHnElYiMYZ/T3K4PvzGeJ1KbpXZp2PfoQD.gfaVdImnHOwIuBKS
jacknich:$2a$10$GYUNWyABV/Ols/.bcwxuBuuaQzV6WIauW6RdboojxcixBq3LtI3ni
----------------------------------------------------------------------
NOTE: The `esusers` command-line tool uses `bcrypt` to hash the password by default.
[[users_roles-file]]
===== The `users_roles` File
The `users_roles` file stores the roles associated with the users, as in the following example:
[source,shell]
--------------------------------------------------
admin:rdeniro
power_user:alpacino,jacknich
user:jacknich
--------------------------------------------------
Each row maps a role to a comma-separated list of all the users that are associated with that role.
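Because each row is simply `role:user1,user2,…`, the mapping is easy to inspect with standard tools. The following sketch uses a hypothetical `role_users` helper over a sample file; prefer the `esusers` tool for any actual changes:

[source,shell]
--------------------------------------------------
# Create a sample users_roles file (content taken from the example above)
cat > users_roles <<'EOF'
admin:rdeniro
power_user:alpacino,jacknich
user:jacknich
EOF

# Hypothetical helper: print the comma-separated users for a given role
role_users() {
  grep "^$1:" users_roles | cut -d: -f2
}

role_users power_user   # prints "alpacino,jacknich"
--------------------------------------------------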
==== User Cache
User credentials are not stored on disk in clear text. The `esusers` tool creates `bcrypt` hashes of the passwords and
stores those. `bcrypt` is considered a highly secure hash and by default uses 10 rounds to generate the salt
it hashes with. While highly secure, it is also relatively slow. For this reason, Shield also introduces an in-memory
cache over the `esusers` store. This cache can use a different hashing algorithm for storing the passwords in memory.
The default hashing algorithm used is `ssha256`, a salted SHA-256 algorithm.
We've seen in the table <<esusers-settings,above>> that the cache characteristics can be configured. The following
table describes the different hash algorithms that can be set:
[[esusers-cache-hash-algo]]
.Cache hash algorithms
|=======================
| Algorithm | Description
| `ssha256` | Uses a salted `SHA-256` algorithm (default).
| `md5` | Uses `MD5` algorithm.
| `sha1` | Uses `SHA1` algorithm.
| `bcrypt` | Uses `bcrypt` algorithm with salt generated in 10 rounds.
| `bcrypt4` | Uses `bcrypt` algorithm with salt generated in 4 rounds.
| `bcrypt5` | Uses `bcrypt` algorithm with salt generated in 5 rounds.
| `bcrypt6` | Uses `bcrypt` algorithm with salt generated in 6 rounds.
| `bcrypt7` | Uses `bcrypt` algorithm with salt generated in 7 rounds.
| `bcrypt8` | Uses `bcrypt` algorithm with salt generated in 8 rounds.
| `bcrypt9` | Uses `bcrypt` algorithm with salt generated in 9 rounds.
| `noop`,`clear_text` | Doesn't hash the credentials and keeps them in clear text in memory. CAUTION:
keeping clear text is considered insecure and can be compromised at the OS
level (e.g. memory dumps and `ptrace`).
|=======================
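To illustrate the default `ssha256` scheme, here is a minimal Python sketch of salted SHA-256 hashing and verification. The digest-plus-salt encoding shown is an assumption for illustration only; Shield's internal cache representation is not documented here:

[source,python]
------------------------------------------------------------
import base64
import hashlib
import os

def ssha256_hash(password, salt=None):
    # Hash the password together with a random salt, then store
    # digest + salt base64-encoded (illustrative format, not Shield's).
    salt = salt if salt is not None else os.urandom(8)
    digest = hashlib.sha256(password.encode("utf-8") + salt).digest()
    return base64.b64encode(digest + salt).decode("ascii")

def ssha256_verify(password, stored):
    raw = base64.b64decode(stored)
    digest, salt = raw[:32], raw[32:]  # SHA-256 digests are 32 bytes
    return hashlib.sha256(password.encode("utf-8") + salt).digest() == digest
------------------------------------------------------------

Because the salt is random, hashing the same password twice yields different stored values, yet both verify correctly.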
===== Cache Eviction API
Shield exposes an API to force cached user eviction. The following example evicts all users from the `esusers`
realm:
[source,shell]
------------------------------------------------------------
$ curl -XPOST 'http://localhost:9200/_shield/realm/esusers/_cache/clear'
------------------------------------------------------------
It is also possible to evict specific users:
[source,shell]
------------------------------------------------------------
$ curl -XPOST 'http://localhost:9200/_shield/realm/esusers/_cache/clear?usernames=rdeniro,alpacino'
------------------------------------------------------------
Multiple realms can also be specified using a comma-delimited list:
[source,shell]
------------------------------------------------------------
$ curl -XPOST 'http://localhost:9200/_shield/realm/esusers,ldap1/_cache/clear'
------------------------------------------------------------

[[ldap]]
=== LDAP Authentication
A secure Elasticsearch cluster can authenticate users from a Lightweight Directory Access Protocol (LDAP) directory.
With LDAP Authentication, you can assign roles to LDAP groups. When a user authenticates with LDAP, the privileges for
that user are the union of all privileges defined by the roles assigned to the set of groups that the user belongs to.
This section discusses configuration for an LDAP Realm.
==== LDAP Overview
LDAP stores users and groups hierarchically, similar to the way folders are grouped in a file system. The path to any
entry is a _Distinguished Name_, or DN. A DN uniquely identifies a user or group. User and group names typically use
attributes such as _common name_ (`cn`) or _unique ID_ (`uid`). An LDAP directory's hierarchy is built from containers
such as the _organizational unit_ (`ou`), _organization_ (`o`), or _domain controller_ (`dc`).
LDAP ignores white space in a DN definition. The following two DNs are equivalent:
[source,shell]
---------------------------------
"cn=admin,dc=example,dc=com"
"cn =admin ,dc= example , dc = com"
---------------------------------
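The equivalence can be sketched by normalizing whitespace around the DN separators. This simplified Python sketch ignores the escaping rules of RFC 4514, so it is an illustration rather than a full DN parser:

[source,python]
------------------------------------------------------------
import re

def normalize_dn(dn):
    # Collapse whitespace around '=' and ',' separators; whitespace
    # inside attribute values (e.g. "John Doe") is preserved.
    return re.sub(r"\s*([=,])\s*", r"\1", dn.strip())
------------------------------------------------------------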
Although optional, connections to the LDAP server should use the Secure Sockets Layer (SSL/TLS) protocol to protect
passwords. Clients and nodes that connect via SSL/TLS to the LDAP server require the certificate or the root CA for the
server. These certificates should be put into each node's keystore/truststore.
[[ldap-realms]]
==== LDAP Realm Settings
Like all realms, the `ldap` realm is configured under the `shield.authc.realms` settings namespace in the
`elasticsearch.yml` file. The LDAP realm supports two modes of operation, a user search mode and a mode with specific
templates for user DNs.
[[ldap-user-search]]
===== LDAP Realm with User Search added[1.1.0]
An LDAP user search is the most common mode of operation. In this mode, a specific user with permission to search the
LDAP directory is used to search for the user DN based on the username and an LDAP attribute. The following snippet
shows an example of such a configuration:
.Example LDAP Realm Configuration with User Search
[source, yaml]
------------------------------------------------------------
shield:
authc:
realms:
ldap1:
type: ldap
order: 0
url: "ldaps://ldap.example.com:636"
bind_dn: "cn=ldapuser, ou=users, o=services, dc=example, dc=com"
bind_password: changeme
user_search:
base_dn: "dc=example,dc=com"
attribute: cn
group_search:
base_dn: "dc=example,dc=com"
files:
role_mapping: "/mnt/elasticsearch/group_to_role_mapping.yml"
unmapped_groups_as_roles: false
------------------------------------------------------------
===== LDAP Realm with User DN Templates
User DN templates can be specified if your LDAP environment uses a few standard naming conventions for users. The
advantage of this method is that no search is needed to find the user DN; the disadvantage is that multiple bind
operations may be needed to find the right user DN. The following snippet shows an example of such a configuration:
.Example LDAP Realm Configuration with User DN Templates
[source, yaml]
------------------------------------------------------------
shield:
authc:
realms:
ldap1:
type: ldap
order: 0
url: "ldaps://ldap.example.com:636"
user_dn_templates:
- "cn={0}, ou=users, o=marketing, dc=example, dc=com"
- "cn={0}, ou=users, o=engineering, dc=example, dc=com"
group_search:
base_dn: "dc=example,dc=com"
files:
role_mapping: "/mnt/elasticsearch/group_to_role_mapping.yml"
unmapped_groups_as_roles: false
------------------------------------------------------------
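The template mechanism can be sketched in a few lines of Python: the username replaces `{0}` in each template, and each resulting DN becomes a bind candidate. This illustrates only the substitution, not Shield's bind logic:

[source,python]
------------------------------------------------------------
def candidate_dns(username, templates):
    # Substitute the username for {0} in each template; Shield then
    # attempts a bind with each candidate DN until one succeeds.
    return [t.replace("{0}", username) for t in templates]

templates = [
    "cn={0}, ou=users, o=marketing, dc=example, dc=com",
    "cn={0}, ou=users, o=engineering, dc=example, dc=com",
]
------------------------------------------------------------

With two templates configured, authenticating a user may require up to two bind attempts, which is the trade-off noted above.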
[[ldap-settings]]
.Common LDAP Realm Settings
|=======================
| Setting | Required | Description
| `type` | yes | Indicates the realm type and must be set to `ldap`.
| `order` | no | Indicates the priority of this realm within the realm chain. Realms with lower order will be consulted first. Although not required, it is highly recommended to explicitly set this value when multiple realms are configured. Defaults to `Integer.MAX_VALUE`.
| `enabled` | no | Indicates whether this realm is enabled/disabled. Provides an easy way to disable realms in the chain without removing their configuration. Defaults to `true`.
| `url` | yes | Specifies the LDAP URL in the form of `ldap[s]://<server>:<port>`. Shield attempts to authenticate against this URL.
| `group_search.base_dn` | no | Specifies a container DN to search for groups in which the user has membership. When this element is absent, Shield searches for a `memberOf` attribute set on the user in order to determine group membership.
| `group_search.scope` | no | Specifies whether the group search should be `sub_tree`, `one_level` or `base`. `one_level` only searches objects directly contained within the `base_dn`. The default `sub_tree` searches all objects contained under `base_dn`. `base` specifies that the `base_dn` is a group object, and that it is the only group considered.
| `group_search.filter` | no | When not set, the realm will search for `group`, `groupOfNames`, or `groupOfUniqueNames`, with the attributes `member` or `memberOf`. Any instance of `{0}` in the filter will be replaced by the user attribute defined in `group_search.user_attribute`
| `group_search.user_attribute` | no | Specifies the user attribute that will be fetched and provided as a parameter to the filter. If not set, the user DN is passed into the filter.
| `unmapped_groups_as_roles` | no | When set to `true`, the names of any unmapped LDAP groups are used as role names and assigned to the user. The default value is `false`.
| `connect_timeout` | no | The timeout period for establishing an LDAP connection. An `s` at the end indicates seconds; `ms` indicates milliseconds. Defaults to `5s` (5 seconds).
| `read_timeout` | no | The timeout period for an LDAP operation. An `s` at the end indicates seconds; `ms` indicates milliseconds. Defaults to `5s` (5 seconds).
| `files.role_mapping` | no | Specifies the path and file name for the <<ldap-role-mapping, YAML role mapping configuration file>>. The default file name is `role_mapping.yml` in the <<shield-config,Shield config directory>>.
| `follow_referrals` | no | Boolean value that specifies whether Shield should follow referrals returned by the LDAP server. Referrals are URLs returned by the server that are to be used to continue the LDAP operation (e.g. search). Default is `true`.
| `hostname_verification` | no | When set to `true`, hostname verification will be performed when connecting to a LDAP server. The hostname or IP address used in the `url` must match one of the names in the certificate or the connection will not be allowed. Defaults to `true`.
| `cache.ttl` | no | Specifies the time-to-live for cached user entries (a user and its credentials will be cached for this configured period of time). Defaults to `20m` (use the standard Elasticsearch {ref}/common-options.html#time-units[time units]).
| `cache.max_users` | no | Specifies the maximum number of user entries that can live in the cache at a given time. Defaults to 100,000.
| `cache.hash_algo` | no | (Expert Setting) Specifies the hashing algorithm that will be used for the in-memory cached user credentials (see <<ldap-cache-hash-algo,here>> for possible values).
|=======================
.User Template LDAP Realm Settings
|=======================
| Setting | Required | Description
| `user_dn_templates` | yes | Specifies the DN template that replaces the user name with the string `{0}`. This element is multivalued, allowing for multiple user contexts.
|=======================
.User Search LDAP Realm Settings added[1.1.0]
|=======================
| Setting | Required | Description
| `bind_dn` | no | The DN of the user that will be used to bind to the LDAP and perform searches. If this is not specified, an anonymous bind will be attempted.
| `bind_password` | no | The password for the user that will be used to bind to the LDAP.
| `user_search.base_dn` | yes | Specifies a container DN to search for users.
| `user_search.scope` | no | The scope of the user search. Valid values are `sub_tree`, `one_level` or `base`. `one_level` only searches objects directly contained within the `base_dn`. The default `sub_tree` searches all objects contained under `base_dn`. `base` specifies that the `base_dn` is the user object, and that it is the only user considered.
| `user_search.attribute` | no | The attribute to match with the username presented to Shield. The default attribute is `uid`
| `user_search.pool.size` | no | The maximum number of connections to the LDAP server to allow in the connection pool. Default is `20`.
| `user_search.pool.initial_size` | no | The initial number of connections to create to the LDAP server on startup. Default is `5`.
| `user_search.pool.health_check.enabled` | no | Flag to enable or disable a health check on LDAP connections in the connection pool. Connections will be checked in the background at the specified interval. Default is `true`
| `user_search.pool.health_check.dn` | no | The distinguished name to be retrieved as part of the health check. Default is the value of `bind_dn`. If `bind_dn` is not specified, a value must be specified.
| `user_search.pool.health_check.interval` | no | The interval to perform background checks of connections in the pool. Default is `60s`.
|=======================
NOTE: If any settings starting with `user_search` are specified, the `user_dn_templates` setting is ignored.
NOTE: `bind_dn`, `bind_password` and `hostname_verification` are considered to be sensitive settings and therefore are not exposed via
{ref}/cluster-nodes-info.html#cluster-nodes-info[nodes info API].
[[ldap-role-mapping]]
==== Mapping Users and Groups to Roles
By default, the file that maps users and groups to roles is `config/shield/role_mapping.yml`. You can configure
the path and name of the mapping file by setting the appropriate value for the `shield.authc.ldap.files.role_mapping`
configuration parameter. When you map roles to groups, the roles of a user in that group are the combination of the
roles assigned to that group and the roles assigned to that user.
The `role_mapping.yml` file uses the YAML format. Within a mapping file, Elasticsearch roles are keys and LDAP groups
and users are values. The mapping can have a many-to-many relationship.
.Example Role Mapping File
[source, yaml]
------------------------------------------------------------
# Example LDAP group mapping configuration:
# roleA: <1>
# - groupA-DN <2>
# - groupB-DN
# - user1-DN <3>
monitoring:
- "cn=admins,dc=example,dc=com"
user:
- "cn=users,dc=example,dc=com"
- "cn=admins,dc=example,dc=com"
- "cn=John Doe,cn=contractors,dc=example,dc=com"
------------------------------------------------------------
<1> The name of the Elasticsearch role found in the <<roles-file, roles file>>
<2> Example specifying the distinguished name of an LDAP group
<3> Example specifying the distinguished name of an LDAP user added[1.1.0]
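The resolution rule described above (roles as keys, group and user DNs as values) can be sketched in Python. The `resolve_roles` helper and the in-memory `mapping` dict are illustrative stand-ins for Shield's handling of the YAML file:

[source,python]
------------------------------------------------------------
def resolve_roles(role_mapping, user_dn, group_dns):
    # A user receives every role mapped either to their own DN or to
    # any group DN they belong to (union semantics, as described above).
    dns = {user_dn, *group_dns}
    return {role for role, members in role_mapping.items()
            if dns.intersection(members)}

# Mirrors the example role_mapping.yml content above.
mapping = {
    "monitoring": ["cn=admins,dc=example,dc=com"],
    "user": [
        "cn=users,dc=example,dc=com",
        "cn=admins,dc=example,dc=com",
        "cn=John Doe,cn=contractors,dc=example,dc=com",
    ],
}
------------------------------------------------------------

A member of the `admins` group thus receives both `monitoring` and `user`, while the individually mapped contractor receives only `user`.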
After setting up role mappings, copy this file to each node. Tools like Puppet or Chef can help with this.
==== Adding an LDAP server certificate
To use SSL/TLS to access your LDAP server over a URL with the `ldaps` protocol, make sure the LDAP client used by
Shield can access the certificate of the CA that signed the LDAP server's certificate. This enables Shield's LDAP
client to authenticate the LDAP server before sending any passwords to it.
To do this, first obtain a certificate for the LDAP servers or a CA certificate that has signed the LDAP certificate.
You can use the `openssl` command to fetch the certificate and add the certificate to the `ldap.crt` file, as in
the following Unix example:
[source, shell]
----------------------------------------------------------------------------------------------
echo | openssl s_client -connect ldap.example.com:636 2>/dev/null | openssl x509 > ldap.crt
----------------------------------------------------------------------------------------------
NOTE: Older versions of openssl might not have the `-connect` option. Instead use the `-host` and `-port` options.
[[keytool]]
This certificate needs to be stored in the node keystore/truststore. Import the certificate into the truststore with the
following command, providing the password for the keystore when prompted.
[source,shell]
----------------------------------------------------------------------------------------------------
keytool -import -keystore node01.jks -file ldap.crt
----------------------------------------------------------------------------------------------------
If not already configured, add the path of the keystore/truststore to `elasticsearch.yml` as described in <<securing-nodes>>.
By default, Shield will attempt to verify the hostname or IP address used in the `url` with the values in the
certificate. If the values in the certificate do not match, Shield will not allow a connection to the LDAP server. This
behavior can be disabled by setting the `hostname_verification` property.
Restart Elasticsearch to pick up the changes to `elasticsearch.yml`.
NOTE: `hostname_verification` is considered to be a sensitive setting and therefore is not exposed via
{ref}/cluster-nodes-info.html#cluster-nodes-info[nodes info API].
[[ldap-user-cache]]
==== User Cache
To avoid connecting to the LDAP server for every incoming request, the users and their credentials are cached
locally on each node. This is a common practice when authenticating against remote servers and, as shown in the
table <<ldap-settings,above>>, the characteristics of this cache are configurable.
The cached user credentials are hashed in memory, and there are several hash algorithms to choose from:
[[ldap-cache-hash-algo]]
.Cache hash algorithms
|=======================
| Algorithm | Description
| `ssha256` | Uses a salted `SHA-256` algorithm (default).
| `md5` | Uses `MD5` algorithm.
| `sha1` | Uses `SHA1` algorithm.
| `bcrypt` | Uses `bcrypt` algorithm with salt generated in 10 rounds.
| `bcrypt4` | Uses `bcrypt` algorithm with salt generated in 4 rounds.
| `bcrypt5` | Uses `bcrypt` algorithm with salt generated in 5 rounds.
| `bcrypt6` | Uses `bcrypt` algorithm with salt generated in 6 rounds.
| `bcrypt7` | Uses `bcrypt` algorithm with salt generated in 7 rounds.
| `bcrypt8` | Uses `bcrypt` algorithm with salt generated in 8 rounds.
| `bcrypt9` | Uses `bcrypt` algorithm with salt generated in 9 rounds.
| `sha2` | Uses `SHA2` algorithm.
| `apr1` | Uses `apr1` algorithm (md5 crypt).
| `noop`,`clear_text` | Doesn't hash the credentials and keeps them in clear text in memory. CAUTION:
keeping clear text is considered insecure and can be compromised at the OS
level (e.g. memory dumps and `ptrace`).
|=======================
===== Cache Eviction API
Shield exposes an API to force cached user eviction. The following example evicts all users from the `ldap1`
realm:
[source,shell]
------------------------------------------------------------
$ curl -XPOST 'http://localhost:9200/_shield/realm/ldap1/_cache/clear'
------------------------------------------------------------
It is also possible to evict specific users:
[source,shell]
------------------------------------------------------------
$ curl -XPOST 'http://localhost:9200/_shield/realm/ldap1/_cache/clear?usernames=rdeniro,alpacino'
------------------------------------------------------------
Multiple realms can also be specified using a comma-delimited list:
[source,shell]
------------------------------------------------------------
$ curl -XPOST 'http://localhost:9200/_shield/realm/ldap1,ldap2/_cache/clear'
------------------------------------------------------------

[[active_directory]]
=== Active Directory Authentication
A secure Elasticsearch cluster can authenticate users from an Active Directory domain using the LDAP protocol.
With Active Directory authentication, you can assign roles to Active Directory groups. When a user
authenticates with Active Directory, the privileges for that user are the union of all privileges defined by the roles
assigned to the set of groups that the user belongs to.
==== Active Directory and LDAP
The Active Directory Realm uses LDAP to communicate with Active Directory. The Active Directory Realm is similar to the
LDAP realm but takes advantage of extra features and streamlines configuration.
A general overview of LDAP will help with the configuration. LDAP databases, like Active Directory, store users and
groups hierarchically, similar to the way folders are grouped in a file system. The path to any
entry is a _Distinguished Name_, or DN. A DN uniquely identifies a user or group. User and group names typically use
attributes such as _common name_ (`cn`) or _unique ID_ (`uid`). An LDAP directory's hierarchy is built from containers
such as the _organizational unit_ (`ou`), _organization_ (`o`), or _domain controller_ (`dc`).
LDAP ignores white space in a DN definition. The following two DNs are equivalent:
[source,shell]
---------------------------------
"cn=admin,dc=example,dc=com"
"cn =admin ,dc= example , dc = com"
---------------------------------
Although optional, connections to the Active Directory server should use the Secure Sockets Layer (SSL/TLS) protocol to protect
passwords. Clients and nodes that connect via SSL/TLS to the LDAP server require the certificate or the root CA for the
server. These certificates should be put into each node's keystore/truststore.
==== Active Directory Realm Configuration
Like all realms, the `active_directory` realm is configured under the `shield.authc.realms` settings namespace in the
`elasticsearch.yml` file. The following snippet shows an example of such configuration:
.Example Active Directory Configuration
[source, yaml]
------------------------------------------------------------
shield:
authc:
realms:
active_directory:
type: active_directory
order: 0
domain_name: example.com
unmapped_groups_as_roles: true
...
------------------------------------------------------------
[[ad-settings]]
.Active Directory Realm Settings
|=======================
| Setting | Required | Description
| `type` | yes | Indicates the realm type and must be set to `active_directory`
| `order` | no | Indicates the priority of this realm within the realm chain. Realms with lower order will be consulted first. Although not required, it is highly recommended to explicitly set this value when multiple realms are configured. Defaults to `Integer.MAX_VALUE`.
| `enabled` | no | Indicates whether this realm is enabled/disabled. Provides an easy way to disable realms in the chain without removing their configuration. Defaults to `true`.
| `domain_name` | yes | Specifies the domain name of the Active Directory. The cluster can derive the LDAP URL and `user_search_dn` fields from values in this element if those fields are not otherwise specified.
| `url` | no | Specifies a LDAP URL in the form of `ldap[s]://<server>:<port>`. Shield attempts to authenticate against this URL. If not specified, the URL will be derived from the `domain_name`, assuming clear-text `ldap` and port `389` (e.g. `ldap://<domain_name>:389`).
| `user_search.base_dn` | no | Specifies the context to search for the user. The default value for this element is the root of the Active Directory domain.
| `user_search.scope` | no | Specifies whether the user search should be `sub_tree` (default), `one_level` or `base`. `one_level` only searches users directly contained within the `base_dn`. `sub_tree` searches all objects contained under `base_dn`. `base` specifies that the `base_dn` is a user object, and that it is the only user considered.
| `user_search.filter` | no | Specifies a filter to use to look up a user given a username. The default filter looks up `user` objects with either `sAMAccountName` or `userPrincipalName`.
| `group_search.base_dn` | no | Specifies the context to search for groups in which the user has membership. The default value for this element is the root of the Active Directory domain.
| `group_search.scope` | no | Specifies whether the group search should be `sub_tree` (default), `one_level` or `base`. `one_level` searches for groups directly contained within the `base_dn`. `sub_tree` searches all objects contained under `base_dn`. `base` specifies that the `base_dn` is a group object, and that it is the only group considered.
| `unmapped_groups_as_roles` | no | When set to `true`, the names of any unmapped LDAP groups are used as role names and assigned to the user. The default value is `false`.
| `files.role_mapping` | no | Specifies the path and file name for the <<ad-role-mapping, YAML role mapping configuration file>>. The default file name is `role_mapping.yml` in the <<shield-config,Shield config directory>>.
| `follow_referrals` | no | Boolean value that specifies whether Shield should follow referrals returned by the LDAP server. Referrals are URLs returned by the server that are to be used to continue the LDAP operation (e.g. search). Default is `true`.
| `hostname_verification` | no | When set to `true`, hostname verification will be performed when connecting to a LDAP server. The hostname or IP address used in the `url` must match one of the names in the certificate or the connection will not be allowed. Defaults to `true`.
| `cache.ttl` | no | Specifies the time-to-live for cached user entries (a user and its credentials will be cached for this configured period of time). Defaults to `20m` (use the standard Elasticsearch {ref}/common-options.html#time-units[time units]).
| `cache.max_users` | no | Specifies the maximum number of user entries that can live in the cache at a given time. Defaults to 100,000.
| `cache.hash_algo` | no | (Expert Setting) Specifies the hashing algorithm that will be used for the in-memory cached user credentials (see <<ad-cache-hash-algo,here>> for possible values).
|=======================
NOTE: `hostname_verification` is considered to be a sensitive setting and therefore is not exposed via
{ref}/cluster-nodes-info.html#cluster-nodes-info[nodes info API].
Active Directory authentication expects the username entered to match the `sAMAccountName` or `userPrincipalName`,
not the `CommonName` (CN). The URL is optional, but allows the use of custom ports.
NOTE: Binding to Active Directory fails when the domain name is not mapped in DNS. If DNS is not being provided
by a Windows DNS server, add a mapping for the domain in the local `/etc/hosts` file.
[[ad-role-mapping]]
==== Mapping Users and Groups to Roles
By default, the file that maps users and groups to roles is `config/shield/role_mapping.yml`. You can configure
the path and name of the mapping file by setting the appropriate value for the `shield.authc.active_directory.files.role_mapping`
configuration parameter. When you map roles to groups, the roles of a user in that group are the combination of the
roles assigned to that group and the roles assigned to that user.
The `role_mapping.yml` file uses the YAML format. Within a mapping file, Elasticsearch roles are keys and Active
Directory groups and users are values. The mapping can have a many-to-many relationship.
.Example Group and Role Mapping File
[source, yaml]
------------------------------------------------------------
# Example LDAP group mapping configuration:
# roleA: <1>
# - groupA-DN <2>
# - groupB-DN
# - user1-DN <3>
monitoring:
- "cn=admins,dc=example,dc=com"
user:
- "cn=users,dc=example,dc=com"
- "cn=admins,dc=example,dc=com"
- "cn=John Doe,cn=contractors,dc=example,dc=com"
------------------------------------------------------------
<1> The name of the Elasticsearch role found in the <<roles-file, roles file>>
<2> Example specifying the distinguished name of an Active Directory group
<3> Example specifying the distinguished name of an Active Directory user
After setting up role mappings, copy this file to each node. Tools like Puppet or Chef can help with this.
==== Adding a Server Certificate
To use SSL/TLS to access your Active Directory server over a URL with the `ldaps` protocol, make sure the client
used by Shield can access the certificate of the CA that signed the LDAP server's certificate. This will enable
Shield's client to authenticate the Active Directory server before sending any passwords to it.
To do this, first obtain a certificate for the Active Directory servers or a CA certificate that has signed the certificate.
You can use the `openssl` command to fetch the certificate and add the certificate to the `ldap.crt` file, as in
the following Unix example:
[source, shell]
----------------------------------------------------------------------------------------------
echo | openssl s_client -connect ldap.example.com:636 2>/dev/null | openssl x509 > ldap.crt
----------------------------------------------------------------------------------------------
This certificate needs to be stored in the node keystore/truststore. Import the certificate into the truststore with the
following command, providing the password for the keystore when prompted.
[source,shell]
----------------------------------------------------------------------------------------------------
keytool -import -keystore node01.jks -file ldap.crt
----------------------------------------------------------------------------------------------------
If not already configured, add the path of the keystore/truststore to `elasticsearch.yml` as described in <<securing-nodes>>.
By default, Shield will attempt to verify the hostname or IP address used in the `url` with the values in the
certificate. If the values in the certificate do not match, Shield will not allow a connection to the Active Directory server.
This behavior can be disabled by setting the `hostname_verification` property.
Finally, restart Elasticsearch to pick up the changes to `elasticsearch.yml`.
==== User Cache
To avoid connecting to the Active Directory server for every incoming request, the users and their credentials
are cached locally on each node. This is a common practice when authenticating against remote servers and, as shown
in the table <<ad-settings, above>>, the characteristics of this cache are configurable.
The cached user credentials are hashed in memory, and there are several hash algorithms to choose from:
[[ad-cache-hash-algo]]
.Cache hash algorithms
|=======================
| Algorithm | Description
| `ssha256` | Uses a salted `SHA-256` algorithm (default).
| `md5` | Uses `MD5` algorithm.
| `sha1` | Uses `SHA1` algorithm.
| `bcrypt` | Uses `bcrypt` algorithm with salt generated in 10 rounds.
| `bcrypt4` | Uses `bcrypt` algorithm with salt generated in 4 rounds.
| `bcrypt5` | Uses `bcrypt` algorithm with salt generated in 5 rounds.
| `bcrypt6` | Uses `bcrypt` algorithm with salt generated in 6 rounds.
| `bcrypt7` | Uses `bcrypt` algorithm with salt generated in 7 rounds.
| `bcrypt8` | Uses `bcrypt` algorithm with salt generated in 8 rounds.
| `bcrypt9` | Uses `bcrypt` algorithm with salt generated in 9 rounds.
| `sha2` | Uses `SHA2` algorithm.
| `apr1` | Uses `apr1` algorithm (md5 crypt).
| `noop`,`clear_text` | Doesn't hash the credentials and keeps them in clear text in memory. CAUTION:
keeping clear text is considered insecure and can be compromised at the OS
level (e.g. memory dumps and `ptrace`).
|=======================
===== Cache Eviction API
Shield exposes an API to force cached user eviction. The following example evicts all users from the `ad1`
realm:
[source,shell]
------------------------------------------------------------
$ curl -XPOST 'http://localhost:9200/_shield/realm/ad1/_cache/clear'
------------------------------------------------------------
It is also possible to evict specific users:
[source,shell]
------------------------------------------------------------
$ curl -XPOST 'http://localhost:9200/_shield/realm/ad1/_cache/clear?usernames=rdeniro,alpacino'
------------------------------------------------------------
Multiple realms can also be specified using a comma-delimited list:
[source,shell]
------------------------------------------------------------
$ curl -XPOST 'http://localhost:9200/_shield/realm/ad1,ad2/_cache/clear'
------------------------------------------------------------

[[pki]]
=== PKI Authentication
added[1.3.0] Shield allows for authentication through the use of Public Key Infrastructure (PKI). Clients present
X.509 certificates that are used for authentication, and authorization is performed by mapping the distinguished
name (DN) from the certificate to roles.
==== SSL/TLS setup
The PKI realm requires that SSL/TLS be enabled and client authentication also be enabled on the desired network layers
(http and/or transport). It is possible to enable SSL/TLS and client authentication on only one network layer and use PKI
authentication for that layer; for example, enabling SSL/TLS and client authentication on the transport layer with a PKI
realm defined would allow for transport clients to authenticate with X.509 certificates while HTTP traffic would still
authenticate using username and password authentication. The PKI realm supports a client authentication setting of either
`required` or `optional`; `required` forces all clients to present a certificate, while `optional` enables clients
without certificates to authenticate with other credentials. For SSL/TLS configuration details, please see
<<ref-ssl-tls-settings, SSL/TLS settings>>.
==== PKI Realm Configuration
Like all realms, the `pki` realm is configured under the `shield.authc.realms` settings namespace in the
`elasticsearch.yml` file. The following snippet shows an example of the most basic configuration:
[source, yaml]
------------------------------------------------------------
shield:
authc:
realms:
pki1:
type: pki
------------------------------------------------------------
In the above configuration, any certificate trusted by the SSL/TLS layer is accepted for authentication. The username
is the common name (CN) extracted from the DN of the certificate. If the username should be something other than the
CN, a regular expression can be provided to extract the desired value. The following example extracts the email address
from the DN:
[source, yaml]
------------------------------------------------------------
shield:
authc:
realms:
pki1:
type: pki
username_pattern: "EMAILADDRESS=(.*?)(?:,|$)"
------------------------------------------------------------
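The extraction itself is ordinary regular-expression matching against the DN string, with the first capture group
becoming the username. A standalone sketch of the mechanism (the class and method names are illustrative, not part of
Shield):
[source, java]
------------------------------------------------------------
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class UsernameFromDn {

    // Apply a username_pattern-style regex to a DN; the first capture
    // group is the extracted username, or null if nothing matches.
    static String extract(String dn, String pattern) {
        Matcher m = Pattern.compile(pattern).matcher(dn);
        return m.find() ? m.group(1) : null;
    }

    public static void main(String[] args) {
        String dn = "EMAILADDRESS=john@example.com,CN=John Doe,OU=example,O=com";
        // default pattern extracts the CN
        System.out.println(extract(dn, "CN=(.*?)(?:,|$)"));
        // the email pattern from the example above extracts the address
        System.out.println(extract(dn, "EMAILADDRESS=(.*?)(?:,|$)"));
    }
}
------------------------------------------------------------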
The PKI realm also provides options to specify a dedicated truststore for authentication. This is useful when the
certificates trusted by the SSL/TLS layer are signed by a different CA than the certificates used for client
authentication. The following example shows such a configuration:
[source, yaml]
------------------------------------------------------------
shield:
authc:
realms:
pki1:
type: pki
truststore:
path: "/path/to/pki_truststore.jks"
password: "changeme"
------------------------------------------------------------
[[pki-settings]]
.PKI Realm Settings
|=======================
| Setting | Required | Description
| `type` | yes | Indicates the realm type and must be set to `pki`
| `order` | no | Indicates the priority of this realm within the realm chain. Realms with lower order will be consulted first. Although not required, it is highly recommended to explicitly set this value when multiple realms are configured. Defaults to `Integer.MAX_VALUE`.
| `enabled` | no | Indicates whether this realm is enabled/disabled. Provides an easy way to disable realms in the chain without removing their configuration. Defaults to `true`.
| `username_pattern` | no | The regular expression pattern used to extract the username from the certificate DN. The first match group is used as the username. Default is `CN=(.*?)(?:,\|$)`
| `truststore.path` | no | The path of a truststore to use. The default truststore is the one defined by <<ref-ssl-tls-settings,SSL/TLS settings>>
| `truststore.password` | no | The password to the truststore. Must be provided if `truststore.path` is set.
| `truststore.algorithm` | no | Algorithm for the truststore. Default is `SunX509`
| `files.role_mapping` | no | Specifies the path and file name for the <<pki-role-mapping, YAML role mapping configuration file>>. The default file name is `role_mapping.yml` in the <<shield-config,Shield config directory>>.
|=======================
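Pulling several of these settings together, a more fully specified realm might look like the following; all values are
illustrative placeholders:
[source, yaml]
------------------------------------------------------------
shield:
  authc:
    realms:
      pki1:
        type: pki
        order: 1
        username_pattern: "CN=(.*?)(?:,|$)"
        truststore:
          path: "/path/to/pki_truststore.jks"
          password: "changeme"
        files:
          role_mapping: "/path/to/role_mapping.yml"
------------------------------------------------------------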
[[pki-role-mapping]]
==== Mapping Users and Groups to Roles
By default, the file that maps users to roles is `config/shield/role_mapping.yml`. You can configure
the path and name of the mapping file by setting the `files.role_mapping` configuration
parameter for a specific realm.
The `role_mapping.yml` file uses the YAML format. Within a mapping file, Elasticsearch roles are keys and distinguished
names (DNs) are values. The mapping can have a many-to-many relationship.
.Example Role Mapping File
[source, yaml]
------------------------------------------------------------
# Example group mapping configuration:
# roleA: <1>
# - user1-DN <2>
monitoring:
- "cn=Admin,ou=example,o=com"
user:
- "cn=John Doe,ou=example,o=com"
------------------------------------------------------------
<1> The name of the Elasticsearch role found in the <<roles-file, roles file>>
<2> The distinguished name (DN) of a PKI user
NOTE: For the PKI realm, only the DN of a user can be mapped, as there is no concept of a group in PKI.
After setting up role mappings, copy this file to each node. Tools like Puppet or Chef can help with this.
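To verify the setup end to end, you can make an HTTPS request that presents a client certificate. This sketch assumes
SSL/TLS with PKI is enabled on the http layer, and the certificate file names are placeholders for your own PKI
material:
[source, shell]
------------------------------------------------------------
curl --cacert ca.pem --cert client.pem --key client.key \
     'https://localhost:9200/'
------------------------------------------------------------
If the certificate is trusted and its DN is mapped to a role, the request is authenticated without any username or
password.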
shield/pom.xml Normal file
@ -0,0 +1,251 @@
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>org.elasticsearch</groupId>
<artifactId>elasticsearch-shield</artifactId>
<version>2.0.0.beta1-SNAPSHOT</version>
<scm>
<connection>scm:git:git@github.com:elasticsearch/elasticsearch-shield.git</connection>
<developerConnection>scm:git:git@github.com:elasticsearch/elasticsearch-shield.git</developerConnection>
<url>http://github.com/elasticsearch/elasticsearch-shield</url>
</scm>
<parent>
<groupId>org.elasticsearch.plugin</groupId>
<artifactId>elasticsearch-plugin</artifactId>
<version>2.0.0.beta1-SNAPSHOT</version>
</parent>
<repositories>
<repository>
<id>oss-snapshots</id>
<name>Sonatype OSS Snapshots</name>
<url>https://oss.sonatype.org/content/repositories/snapshots/</url>
<releases>
<enabled>false</enabled>
</releases>
<snapshots>
<enabled>true</enabled>
<updatePolicy>always</updatePolicy>
</snapshots>
</repository>
<repository>
<id>elasticsearch-internal-snapshots</id>
<url>http://maven.elasticsearch.org/artifactory/internal-snapshots</url>
<releases>
<enabled>false</enabled>
</releases>
<snapshots>
<enabled>true</enabled>
<updatePolicy>always</updatePolicy>
</snapshots>
</repository>
</repositories>
<properties>
<elasticsearch.license.header>dev-tools/elasticsearch_license_header.txt</elasticsearch.license.header>
<elasticsearch.license.headerDefinition>dev-tools/license_header_definition.xml</elasticsearch.license.headerDefinition>
<elasticsearch.integ.antfile>dev-tools/integration-tests.xml</elasticsearch.integ.antfile>
<license.plugin.version>2.0.0.beta1-SNAPSHOT</license.plugin.version>
<tests.rest.blacklist>indices.get/10_basic/*allow_no_indices*,cat.count/10_basic/Test cat count output,cat.aliases/10_basic/Empty cluster,indices.segments/10_basic/no segments test,indices.clear_cache/10_basic/clear_cache test,indices.status/10_basic/Indices status test,cat.indices/10_basic/Test cat indices output,cat.recovery/10_basic/Test cat recovery output,cat.shards/10_basic/Test cat shards output,termvector/20_issue7121/*,index/10_with_id/Index with ID,indices.get_alias/20_emtpy/*,cat.segments/10_basic/Test cat segments output,indices.put_settings/10_basic/Test indices settings allow_no_indices,indices.put_settings/10_basic/Test indices settings ignore_unavailable,indices.refresh/10_basic/Indices refresh test no-match wildcard,indices.stats/10_index/Index - star*,indices.recovery/10_basic/Indices recovery test*,template/30_render_search_template/*</tests.rest.blacklist>
</properties>
<dependencies>
<!-- test deps -->
<dependency>
<groupId>org.apache.lucene</groupId>
<artifactId>lucene-expressions</artifactId>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-log4j12</artifactId>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.apache.commons</groupId>
<artifactId>commons-lang3</artifactId>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.elasticsearch</groupId>
<artifactId>elasticsearch-securemock</artifactId>
<version>1.0-SNAPSHOT</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.apache.httpcomponents</groupId>
<artifactId>httpclient</artifactId>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.codehaus.groovy</groupId>
<artifactId>groovy-all</artifactId>
<version>2.4.0</version>
<classifier>indy</classifier>
<scope>test</scope>
</dependency>
<dependency>
<groupId>com.google.jimfs</groupId>
<artifactId>jimfs</artifactId>
<version>1.0</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.elasticsearch</groupId>
<artifactId>elasticsearch-license-plugin</artifactId>
<version>${license.plugin.version}</version>
<type>zip</type>
<scope>test</scope>
</dependency>
<!-- needed for tests that use templating -->
<dependency>
<groupId>com.github.spullara.mustache.java</groupId>
<artifactId>compiler</artifactId>
<scope>test</scope>
</dependency>
<!-- real dependencies -->
<dependency>
<groupId>org.elasticsearch</groupId>
<artifactId>elasticsearch-license-plugin-api</artifactId>
<version>${license.plugin.version}</version>
<scope>provided</scope>
</dependency>
<dependency>
<groupId>org.elasticsearch</groupId>
<artifactId>elasticsearch-license-plugin</artifactId>
<version>${license.plugin.version}</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>com.google.guava</groupId>
<artifactId>guava</artifactId>
<scope>provided</scope>
</dependency>
<dependency>
<groupId>dk.brics.automaton</groupId>
<artifactId>automaton</artifactId>
<version>1.11-8</version>
</dependency>
<dependency>
<groupId>com.unboundid</groupId>
<artifactId>unboundid-ldapsdk</artifactId>
<version>2.3.8</version>
</dependency>
</dependencies>
<build>
<resources>
<resource>
<directory>src/main/resources</directory>
<filtering>true</filtering>
</resource>
</resources>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-enforcer-plugin</artifactId>
</plugin>
<plugin>
<artifactId>maven-assembly-plugin</artifactId>
<configuration>
<attach>false</attach>
</configuration>
</plugin>
<plugin>
<groupId>org.codehaus.mojo</groupId>
<artifactId>buildnumber-maven-plugin</artifactId>
</plugin>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-eclipse-plugin</artifactId>
</plugin>
</plugins>
<pluginManagement>
<plugins>
<plugin>
<groupId>com.mycila</groupId>
<artifactId>license-maven-plugin</artifactId>
<configuration>
<excludes>
<!-- BCrypt -->
<exclude>src/main/java/org/elasticsearch/shield/authc/support/BCrypt.java</exclude>
</excludes>
</configuration>
</plugin>
<plugin>
<groupId>de.thetaphi</groupId>
<artifactId>forbiddenapis</artifactId>
<executions>
<execution>
<id>check-forbidden-test-apis</id>
<configuration>
<signaturesFiles combine.children="append">
<signaturesFile>test-signatures.txt</signaturesFile>
</signaturesFiles>
</configuration>
<phase>test-compile</phase>
<goals>
<goal>testCheck</goal>
</goals>
</execution>
</executions>
</plugin>
</plugins>
</pluginManagement>
</build>
<profiles>
<profile>
<id>deploy-internal</id>
<distributionManagement>
<repository>
<id>elasticsearch-internal-releases</id>
<name>Elasticsearch Internal Releases</name>
<url>http://maven.elasticsearch.org/artifactory/internal-releases</url>
</repository>
<snapshotRepository>
<id>elasticsearch-internal-snapshots</id>
<name>Elasticsearch Internal Snapshots</name>
<url>http://maven.elasticsearch.org/artifactory/internal-snapshots</url>
</snapshotRepository>
</distributionManagement>
<build>
<plugins>
<plugin>
<artifactId>maven-assembly-plugin</artifactId>
<configuration>
<attach>true</attach>
</configuration>
</plugin>
</plugins>
</build>
</profile>
<profile>
<id>deploy-public</id>
<distributionManagement>
<repository>
<id>elasticsearch-public-releases</id>
<name>Elasticsearch Public Releases</name>
<url>http://maven.elasticsearch.org/artifactory/public-releases</url>
</repository>
</distributionManagement>
</profile>
<profile>
<id>default</id>
<activation>
<activeByDefault>true</activeByDefault>
</activation>
</profile>
</profiles>
</project>
@ -0,0 +1,41 @@
<?xml version="1.0"?>
<assembly>
<id>plugin</id>
<formats>
<format>zip</format>
</formats>
<includeBaseDirectory>false</includeBaseDirectory>
<fileSets>
<fileSet>
<filtered>false</filtered>
<directory>bin/shield</directory>
<outputDirectory>bin</outputDirectory>
</fileSet>
<fileSet>
<directory>config/shield</directory>
<outputDirectory>config</outputDirectory>
</fileSet>
</fileSets>
<dependencySets>
<dependencySet>
<outputDirectory>/</outputDirectory>
<useProjectArtifact>true</useProjectArtifact>
<useTransitiveDependencies>false</useTransitiveDependencies>
<includes>
<include>org.elasticsearch:elasticsearch-shield</include>
<include>dk.brics.automaton:automaton</include>
<include>com.unboundid:unboundid-ldapsdk</include>
</includes>
</dependencySet>
</dependencySets>
<files>
<file>
<source>LICENSE.txt</source>
<outputDirectory>/</outputDirectory>
</file>
<file>
<source>NOTICE.txt</source>
<outputDirectory>/</outputDirectory>
</file>
</files>
</assembly>
@ -0,0 +1,83 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License;
* you may not use this file except in compliance with the Elastic License.
*/
package org.elasticsearch.shield;
import org.elasticsearch.common.io.FastStringReader;
import org.elasticsearch.common.io.Streams;
import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;
import org.joda.time.DateTimeZone;
import org.joda.time.format.ISODateTimeFormat;
import java.io.IOException;
import java.util.Properties;
/**
*
*/
public class ShieldBuild {
public static final ShieldBuild CURRENT;
static {
String hash = "NA";
String hashShort = "NA";
String timestamp = "NA";
try {
String properties = Streams.copyToStringFromClasspath("/shield-build.properties");
Properties props = new Properties();
props.load(new FastStringReader(properties));
hash = props.getProperty("hash", hash);
if (!hash.equals("NA")) {
hashShort = hash.substring(0, 7);
}
String gitTimestampRaw = props.getProperty("timestamp");
if (gitTimestampRaw != null) {
timestamp = ISODateTimeFormat.dateTimeNoMillis().withZone(DateTimeZone.UTC).print(Long.parseLong(gitTimestampRaw));
}
} catch (Exception e) {
// just ignore...
}
CURRENT = new ShieldBuild(hash, hashShort, timestamp);
}
private String hash;
private String hashShort;
private String timestamp;
ShieldBuild(String hash, String hashShort, String timestamp) {
this.hash = hash;
this.hashShort = hashShort;
this.timestamp = timestamp;
}
public String hash() {
return hash;
}
public String hashShort() {
return hashShort;
}
public String timestamp() {
return timestamp;
}
public static ShieldBuild readBuild(StreamInput in) throws IOException {
String hash = in.readString();
String hashShort = in.readString();
String timestamp = in.readString();
return new ShieldBuild(hash, hashShort, timestamp);
}
public static void writeBuild(ShieldBuild build, StreamOutput out) throws IOException {
out.writeString(build.hash());
out.writeString(build.hashShort());
out.writeString(build.timestamp());
}
}
@ -0,0 +1,39 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License;
* you may not use this file except in compliance with the Elastic License.
*/
package org.elasticsearch.shield;
import org.elasticsearch.common.inject.Module;
import org.elasticsearch.common.inject.PreProcessModule;
import org.elasticsearch.common.inject.util.Providers;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.rest.RestModule;
import org.elasticsearch.shield.license.LicenseService;
import org.elasticsearch.shield.rest.action.RestShieldInfoAction;
import org.elasticsearch.shield.support.AbstractShieldModule;
public class ShieldDisabledModule extends AbstractShieldModule implements PreProcessModule {
public ShieldDisabledModule(Settings settings) {
super(settings);
}
@Override
protected void configure(boolean clientMode) {
assert !shieldEnabled : "shield disabled module should only get loaded with shield disabled";
if (!clientMode) {
// required by the shield info rest action (when shield is disabled)
bind(LicenseService.class).toProvider(Providers.<LicenseService>of(null));
}
}
@Override
public void processModule(Module module) {
if (module instanceof RestModule) {
//we want to expose the shield rest action even when the plugin is disabled
((RestModule) module).addRestAction(RestShieldInfoAction.class);
}
}
}
@ -0,0 +1,85 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License;
* you may not use this file except in compliance with the Elastic License.
*/
package org.elasticsearch.shield;
import org.elasticsearch.cluster.ClusterChangedEvent;
import org.elasticsearch.cluster.ClusterService;
import org.elasticsearch.cluster.ClusterStateListener;
import org.elasticsearch.common.component.AbstractComponent;
import org.elasticsearch.common.component.LifecycleListener;
import org.elasticsearch.common.inject.Inject;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.util.concurrent.AbstractRunnable;
import org.elasticsearch.shield.audit.index.IndexAuditTrail;
import org.elasticsearch.threadpool.ThreadPool;
/**
* This class is used to provide a lifecycle for services that is based on the cluster's state
* rather than the typical lifecycle that is used to start services as part of the node startup.
*
* This type of lifecycle is necessary for services that need to perform actions that require the cluster to be in a
* certain state; some examples are storing index templates and creating indices. These actions would most likely fail
* from within a plugin if executed in the {@link org.elasticsearch.common.component.AbstractLifecycleComponent#doStart()}
* method. However, if the startup of these services waits for the cluster to form and recover indices then it will be
* successful. This lifecycle service allows for this to happen by listening for {@link ClusterChangedEvent} and checking
* if the services can start. Additionally, the service also provides hooks for stop and close functionality.
*/
public class ShieldLifecycleService extends AbstractComponent implements ClusterStateListener {
private final ThreadPool threadPool;
private final IndexAuditTrail indexAuditTrail;
@Inject
public ShieldLifecycleService(Settings settings, ClusterService clusterService, ThreadPool threadPool, IndexAuditTrail indexAuditTrail) {
super(settings);
this.threadPool = threadPool;
this.indexAuditTrail = indexAuditTrail;
clusterService.add(this);
clusterService.addLifecycleListener(new LifecycleListener() {
@Override
public void beforeStop() {
stop();
}
@Override
public void beforeClose() {
close();
}
});
}
@Override
public void clusterChanged(ClusterChangedEvent event) {
// TODO if/when we have more services this should not be checking the audit trail
if (indexAuditTrail.state() == IndexAuditTrail.State.INITIALIZED) {
final boolean master = event.localNodeMaster();
if (indexAuditTrail.canStart(event, master)) {
threadPool.generic().execute(new AbstractRunnable() {
@Override
public void onFailure(Throwable throwable) {
logger.error("failed to start shield lifecycle services", throwable);
assert false : "shield lifecycle services startup failed";
}
@Override
public void doRun() {
indexAuditTrail.start(master);
}
});
}
}
}
public void stop() {
indexAuditTrail.stop();
}
public void close() {
indexAuditTrail.close();
}
}
@ -0,0 +1,61 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License;
* you may not use this file except in compliance with the Elastic License.
*/
package org.elasticsearch.shield;
import com.google.common.collect.ImmutableList;
import org.elasticsearch.common.inject.Module;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.shield.action.ShieldActionModule;
import org.elasticsearch.shield.audit.AuditTrailModule;
import org.elasticsearch.shield.authc.AuthenticationModule;
import org.elasticsearch.shield.authz.AuthorizationModule;
import org.elasticsearch.shield.crypto.CryptoModule;
import org.elasticsearch.shield.license.LicenseModule;
import org.elasticsearch.shield.rest.ShieldRestModule;
import org.elasticsearch.shield.ssl.SSLModule;
import org.elasticsearch.shield.support.AbstractShieldModule;
import org.elasticsearch.shield.transport.ShieldTransportModule;
/**
*
*/
public class ShieldModule extends AbstractShieldModule.Spawn {
public ShieldModule(Settings settings) {
super(settings);
}
@Override
public Iterable<? extends Module> spawnModules(boolean clientMode) {
assert shieldEnabled : "this module should get loaded only when shield is enabled";
// spawn needed parts in client mode
if (clientMode) {
return ImmutableList.<Module>of(
new ShieldActionModule(settings),
new ShieldTransportModule(settings),
new SSLModule(settings));
}
return ImmutableList.<Module>of(
new LicenseModule(settings),
new CryptoModule(settings),
new AuthenticationModule(settings),
new AuthorizationModule(settings),
new AuditTrailModule(settings),
new ShieldRestModule(settings),
new ShieldActionModule(settings),
new ShieldTransportModule(settings),
new SSLModule(settings));
}
@Override
protected void configure(boolean clientMode) {
if (!clientMode) {
bind(ShieldSettingsFilter.class).asEagerSingleton();
}
}
}
@ -0,0 +1,167 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License;
* you may not use this file except in compliance with the Elastic License.
*/
package org.elasticsearch.shield;
import com.google.common.collect.ImmutableList;
import org.elasticsearch.client.Client;
import org.elasticsearch.client.support.Headers;
import org.elasticsearch.cluster.settings.ClusterDynamicSettingsModule;
import org.elasticsearch.common.component.LifecycleComponent;
import org.elasticsearch.common.inject.Module;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.env.Environment;
import org.elasticsearch.plugins.AbstractPlugin;
import org.elasticsearch.shield.authc.Realms;
import org.elasticsearch.shield.authc.support.SecuredString;
import org.elasticsearch.shield.authc.support.UsernamePasswordToken;
import org.elasticsearch.shield.authz.store.FileRolesStore;
import org.elasticsearch.shield.crypto.InternalCryptoService;
import org.elasticsearch.shield.license.LicenseService;
import org.elasticsearch.shield.transport.filter.IPFilter;
import java.nio.file.Path;
import java.util.Collection;
import java.util.Map;
/**
*
*/
public class ShieldPlugin extends AbstractPlugin {
public static final String NAME = "shield";
public static final String ENABLED_SETTING_NAME = NAME + ".enabled";
private final Settings settings;
private final boolean enabled;
private final boolean clientMode;
public ShieldPlugin(Settings settings) {
this.settings = settings;
this.enabled = shieldEnabled(settings);
this.clientMode = clientMode(settings);
}
@Override
public String name() {
return NAME;
}
@Override
public String description() {
return "Elasticsearch Shield (security)";
}
@Override
public Collection<Class<? extends Module>> modules() {
return enabled ?
ImmutableList.<Class<? extends Module>>of(ShieldModule.class) :
ImmutableList.<Class<? extends Module>>of(ShieldDisabledModule.class);
}
@Override
public Collection<Class<? extends LifecycleComponent>> services() {
ImmutableList.Builder<Class<? extends LifecycleComponent>> builder = ImmutableList.builder();
if (enabled && !clientMode) {
builder.add(LicenseService.class).add(InternalCryptoService.class).add(FileRolesStore.class).add(Realms.class).add(IPFilter.class);
}
return builder.build();
}
@Override
public Settings additionalSettings() {
if (!enabled) {
return Settings.EMPTY;
}
Settings.Builder settingsBuilder = Settings.settingsBuilder();
addUserSettings(settingsBuilder);
addTribeSettings(settingsBuilder);
return settingsBuilder.build();
}
public void onModule(ClusterDynamicSettingsModule clusterDynamicSettingsModule) {
clusterDynamicSettingsModule.addDynamicSettings("shield.transport.filter.*", "shield.http.filter.*", "transport.profiles.*", IPFilter.IP_FILTER_ENABLED_SETTING, IPFilter.IP_FILTER_ENABLED_HTTP_SETTING);
}
private void addUserSettings(Settings.Builder settingsBuilder) {
String authHeaderSettingName = Headers.PREFIX + "." + UsernamePasswordToken.BASIC_AUTH_HEADER;
if (settings.get(authHeaderSettingName) != null) {
return;
}
String userSetting = settings.get("shield.user");
if (userSetting == null) {
return;
}
int i = userSetting.indexOf(":");
if (i < 0 || i == userSetting.length() - 1) {
throw new IllegalArgumentException("invalid [shield.user] setting. must be in the form of \"<username>:<password>\"");
}
String username = userSetting.substring(0, i);
String password = userSetting.substring(i + 1);
settingsBuilder.put(authHeaderSettingName, UsernamePasswordToken.basicAuthHeaderValue(username, new SecuredString(password.toCharArray())));
}
/*
We inject additional settings on each tribe client if the current node is a tribe node, to make sure that every tribe has shield installed and enabled too:
- if shield is loaded on the tribe node we make sure it is also loaded on every tribe, by making it mandatory there
(this means that the tribe node will fail at startup if shield is not loaded on any tribe due to missing mandatory plugin)
- if shield is loaded and enabled on the tribe node, we make sure it is also enabled on every tribe, by forcibly enabling it
(that means it's not possible to disable shield on the tribe clients)
*/
private void addTribeSettings(Settings.Builder settingsBuilder) {
Map<String, Settings> tribesSettings = settings.getGroups("tribe", true);
if (tribesSettings.isEmpty()) {
return;
}
for (Map.Entry<String, Settings> tribeSettings : tribesSettings.entrySet()) {
String tribePrefix = "tribe." + tribeSettings.getKey() + ".";
//we copy over existing mandatory plugins under additional settings, as they would get overridden otherwise (arrays don't get merged)
String[] existingMandatoryPlugins = tribeSettings.getValue().getAsArray("plugin.mandatory", null);
if (existingMandatoryPlugins == null) {
//shield is mandatory on every tribe if installed and enabled on the tribe node
settingsBuilder.putArray(tribePrefix + "plugin.mandatory", NAME);
} else {
if (!isShieldMandatory(existingMandatoryPlugins)) {
String[] updatedMandatoryPlugins = new String[existingMandatoryPlugins.length + 1];
System.arraycopy(existingMandatoryPlugins, 0, updatedMandatoryPlugins, 0, existingMandatoryPlugins.length);
updatedMandatoryPlugins[updatedMandatoryPlugins.length - 1] = NAME;
//shield is mandatory on every tribe if installed and enabled on the tribe node
settingsBuilder.putArray(tribePrefix + "plugin.mandatory", updatedMandatoryPlugins);
}
}
//shield must be enabled on every tribe if it's enabled on the tribe node
settingsBuilder.put(tribePrefix + ENABLED_SETTING_NAME, true);
}
}
private static boolean isShieldMandatory(String[] existingMandatoryPlugins) {
for (String existingMandatoryPlugin : existingMandatoryPlugins) {
if (NAME.equals(existingMandatoryPlugin)) {
return true;
}
}
return false;
}
public static Path configDir(Environment env) {
return env.configFile().resolve(NAME);
}
public static Path resolveConfigFile(Environment env, String name) {
return configDir(env).resolve(name);
}
public static boolean clientMode(Settings settings) {
return !"node".equals(settings.get(Client.CLIENT_TYPE_SETTING));
}
public static boolean shieldEnabled(Settings settings) {
return settings.getAsBoolean(ENABLED_SETTING_NAME, true);
}
}
@ -0,0 +1,33 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License;
* you may not use this file except in compliance with the Elastic License.
*/
package org.elasticsearch.shield;
import org.elasticsearch.common.inject.Inject;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.settings.SettingsFilter;
/**
*
*/
public class ShieldSettingsFilter {
static final String HIDE_SETTINGS_SETTING = "shield.hide_settings";
private final SettingsFilter filter;
@Inject
public ShieldSettingsFilter(Settings settings, SettingsFilter settingsFilter) {
this.filter = settingsFilter;
filter.addFilter(HIDE_SETTINGS_SETTING);
filterOut(settings.getAsArray(HIDE_SETTINGS_SETTING));
}
public void filterOut(String... patterns) {
for (String pattern : patterns) {
filter.addFilter(pattern);
}
}
}
@ -0,0 +1,233 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License;
* you may not use this file except in compliance with the Elastic License.
*/
package org.elasticsearch.shield;
import org.elasticsearch.Version;
import org.elasticsearch.common.Nullable;
import org.elasticsearch.common.Strings;
import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;
import org.elasticsearch.license.plugin.LicenseVersion;
import java.io.IOException;
import java.io.Serializable;
/**
*/
@SuppressWarnings("deprecation")
public class ShieldVersion implements Serializable {
// The logic for ID is: XXYYZZAA, where XX is major version, YY is minor version, ZZ is revision, and AA is Beta/RC indicator
// AA values below 50 are beta builds, and below 99 are RC builds, with 99 indicating a release
// the (internal) format of the id is there so we can easily do after/before checks on the id
public static final int V_1_0_0_ID = /*00*/1000099;
public static final ShieldVersion V_1_0_0 = new ShieldVersion(V_1_0_0_ID, false, Version.V_1_4_2, LicenseVersion.V_1_0_0);
public static final int V_1_0_1_ID = /*00*/1000199;
public static final ShieldVersion V_1_0_1 = new ShieldVersion(V_1_0_1_ID, false, Version.V_1_4_2, LicenseVersion.V_1_0_0);
public static final int V_1_0_2_ID = /*00*/1000299;
public static final ShieldVersion V_1_0_2 = new ShieldVersion(V_1_0_2_ID, false, Version.V_1_4_2, LicenseVersion.V_1_0_0);
public static final int V_1_1_0_ID = /*00*/1010099;
public static final ShieldVersion V_1_1_0 = new ShieldVersion(V_1_1_0_ID, false, Version.V_1_4_2, LicenseVersion.V_1_0_0);
public static final int V_1_1_1_ID = /*00*/1010199;
public static final ShieldVersion V_1_1_1 = new ShieldVersion(V_1_1_1_ID, false, Version.V_1_4_2, LicenseVersion.V_1_0_0);
public static final int V_1_2_0_ID = /*00*/1020099;
public static final ShieldVersion V_1_2_0 = new ShieldVersion(V_1_2_0_ID, false, Version.V_1_5_0, LicenseVersion.V_1_0_0);
public static final int V_1_2_1_ID = /*00*/1020199;
public static final ShieldVersion V_1_2_1 = new ShieldVersion(V_1_2_1_ID, false, Version.V_1_5_0, LicenseVersion.V_1_0_0);
public static final int V_1_2_2_ID = /*00*/1020299;
public static final ShieldVersion V_1_2_2 = new ShieldVersion(V_1_2_2_ID, false, Version.V_1_5_0, LicenseVersion.V_1_0_0);
public static final int V_1_3_0_ID = /*00*/1030099;
public static final ShieldVersion V_1_3_0 = new ShieldVersion(V_1_3_0_ID, false, Version.V_1_5_0, LicenseVersion.V_1_0_0);
public static final int V_2_0_0_ID = /*00*/2000099;
public static final ShieldVersion V_2_0_0 = new ShieldVersion(V_2_0_0_ID, true, Version.V_1_5_0, LicenseVersion.V_1_0_0);
public static final ShieldVersion CURRENT = V_2_0_0;
public static ShieldVersion readVersion(StreamInput in) throws IOException {
return fromId(in.readVInt());
}
public static ShieldVersion fromId(int id) {
switch (id) {
case V_1_0_0_ID: return V_1_0_0;
case V_1_0_1_ID: return V_1_0_1;
case V_1_0_2_ID: return V_1_0_2;
case V_1_1_0_ID: return V_1_1_0;
case V_1_1_1_ID: return V_1_1_1;
case V_1_2_0_ID: return V_1_2_0;
case V_1_2_1_ID: return V_1_2_1;
case V_1_2_2_ID: return V_1_2_2;
case V_1_3_0_ID: return V_1_3_0;
case V_2_0_0_ID: return V_2_0_0;
default:
return new ShieldVersion(id, null, Version.CURRENT, LicenseVersion.CURRENT);
}
}
public static void writeVersion(ShieldVersion version, StreamOutput out) throws IOException {
out.writeVInt(version.id);
}
/**
* Returns the smallest version between the 2.
*/
public static ShieldVersion smallest(ShieldVersion version1, ShieldVersion version2) {
return version1.id < version2.id ? version1 : version2;
}
/**
* Returns the version given its string representation, current version if the argument is null or empty
*/
public static ShieldVersion fromString(String version) {
if (!Strings.hasLength(version)) {
return ShieldVersion.CURRENT;
}
String[] parts = version.split("\\.|\\-");
if (parts.length < 3 || parts.length > 4) {
throw new IllegalArgumentException("the version needs to contain major, minor and revision, and optionally the build");
}
try {
//we reverse the version id calculation based on some assumptions, as we can't reliably reverse the modulo
int major = Integer.parseInt(parts[0]) * 1000000;
int minor = Integer.parseInt(parts[1]) * 10000;
int revision = Integer.parseInt(parts[2]) * 100;
int build = 99;
if (parts.length == 4) {
String buildStr = parts[3];
if (buildStr.startsWith("beta")) {
build = Integer.parseInt(buildStr.substring(4));
} else if (buildStr.startsWith("rc")) {
build = Integer.parseInt(buildStr.substring(2)) + 50;
}
}
return fromId(major + minor + revision + build);
} catch(NumberFormatException e) {
throw new IllegalArgumentException("unable to parse version " + version, e);
}
}
public final int id;
public final byte major;
public final byte minor;
public final byte revision;
public final byte build;
public final Boolean snapshot;
public final Version minEsCompatibilityVersion;
public final LicenseVersion minLicenseCompatibilityVersion;
ShieldVersion(int id, @Nullable Boolean snapshot, Version minEsCompatibilityVersion, LicenseVersion minLicenseCompatibilityVersion) {
this.id = id;
this.major = (byte) ((id / 1000000) % 100);
this.minor = (byte) ((id / 10000) % 100);
this.revision = (byte) ((id / 100) % 100);
this.build = (byte) (id % 100);
this.snapshot = snapshot;
this.minEsCompatibilityVersion = minEsCompatibilityVersion;
this.minLicenseCompatibilityVersion = minLicenseCompatibilityVersion;
}
public boolean snapshot() {
return snapshot != null && snapshot;
}
public boolean after(ShieldVersion version) {
return version.id < id;
}
public boolean onOrAfter(ShieldVersion version) {
return version.id <= id;
}
public boolean before(ShieldVersion version) {
return version.id > id;
}
public boolean onOrBefore(ShieldVersion version) {
return version.id >= id;
}
public boolean compatibleWith(ShieldVersion version) {
return version.onOrAfter(minimumCompatibilityVersion());
}
public boolean compatibleWith(Version esVersion) {
return esVersion.onOrAfter(minEsCompatibilityVersion);
}
/**
* Returns the minimum compatible version based on the current
* version. I.e. a node needs to be on at least the returned version in order
* to communicate with a node running the current version. The returned version
* is in most cases the first release of the current major version, unless the current
* version is a beta or RC release, in which case the version itself is returned.
*/
public ShieldVersion minimumCompatibilityVersion() {
return ShieldVersion.smallest(this, fromId(major * 1000000 + 99));
}
/**
* @return The minimum elasticsearch version this shield version is compatible with.
*/
public Version minimumEsCompatiblityVersion() {
return minEsCompatibilityVersion;
}
/**
* @return The minimum license plugin version this shield version is compatible with.
*/
public LicenseVersion minimumLicenseCompatibilityVersion() {
return minLicenseCompatibilityVersion;
}
/**
* Just the version number (without the -SNAPSHOT qualifier for snapshot builds).
*/
public String number() {
StringBuilder sb = new StringBuilder();
sb.append(major).append('.').append(minor).append('.').append(revision);
if (build < 50) {
sb.append("-beta").append(build);
} else if (build < 99) {
sb.append("-rc").append(build - 50);
}
return sb.toString();
}
@Override
public String toString() {
StringBuilder sb = new StringBuilder();
sb.append(number());
if (snapshot()) {
sb.append("-SNAPSHOT");
}
return sb.toString();
}
@Override
public boolean equals(Object o) {
if (this == o) return true;
if (o == null || getClass() != o.getClass()) return false;
ShieldVersion that = (ShieldVersion) o;
return id == that.id;
}
@Override
public int hashCode() {
return id;
}
}
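The version-id arithmetic above packs major/minor/revision/build into a single int: GA releases use build 99, betas use 0-49, and RCs use 50-98. A standalone sketch of that encoding (the names `toId` and `describe` are illustrative, not part of the actual class):

```java
public class ShieldVersionIdSketch {

    // id = major * 1_000_000 + minor * 10_000 + revision * 100 + build
    // build: 99 = GA release, 0-49 = beta N, 50-98 = rc (N - 50)
    public static int toId(int major, int minor, int revision, int build) {
        return major * 1_000_000 + minor * 10_000 + revision * 100 + build;
    }

    // mirrors ShieldVersion.number(): decode an id back into a display string
    public static String describe(int id) {
        int major = (id / 1_000_000) % 100;
        int minor = (id / 10_000) % 100;
        int revision = (id / 100) % 100;
        int build = id % 100;
        StringBuilder sb = new StringBuilder();
        sb.append(major).append('.').append(minor).append('.').append(revision);
        if (build < 50) {
            sb.append("-beta").append(build);
        } else if (build < 99) {
            sb.append("-rc").append(build - 50);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(describe(toId(1, 2, 2, 99))); // 1.2.2
        System.out.println(describe(toId(2, 0, 0, 51))); // 2.0.0-rc1
    }
}
```

Note how `V_1_2_2_ID = 1020299` above follows directly from this scheme: 1\*1,000,000 + 2\*10,000 + 2\*100 + 99.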

@@ -0,0 +1,121 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License;
* you may not use this file except in compliance with the Elastic License.
*/
package org.elasticsearch.shield;
import org.elasticsearch.common.Strings;
import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;
import org.elasticsearch.shield.authz.SystemRole;
import java.io.IOException;
import java.util.Arrays;
/**
* An authenticated user
*/
public abstract class User {
public static final User SYSTEM = new System();
/**
* @return The principal of this user - effectively serving as the unique identity of the user.
*/
public abstract String principal();
/**
* @return The roles this user is associated with. The roles are identified by their unique names
* and each represents a set of permissions
*/
public abstract String[] roles();
public final boolean isSystem() {
return this == SYSTEM;
}
public static User readFrom(StreamInput input) throws IOException {
if (input.readBoolean()) {
String name = input.readString();
if (!System.NAME.equals(name)) {
throw new IllegalStateException("invalid system user");
}
return SYSTEM;
}
return new Simple(input.readString(), input.readStringArray());
}
public static void writeTo(User user, StreamOutput output) throws IOException {
if (user.isSystem()) {
output.writeBoolean(true);
output.writeString(System.NAME);
return;
}
output.writeBoolean(false);
Simple simple = (Simple) user;
output.writeString(simple.username);
output.writeStringArray(simple.roles);
}
public static class Simple extends User {
private final String username;
private final String[] roles;
public Simple(String username, String... roles) {
this.username = username;
this.roles = roles == null ? Strings.EMPTY_ARRAY : roles;
}
@Override
public String principal() {
return username;
}
@Override
public String[] roles() {
return roles;
}
@Override
public boolean equals(Object o) {
if (this == o) return true;
if (o == null || getClass() != o.getClass()) return false;
Simple simple = (Simple) o;
if (!Arrays.equals(roles, simple.roles)) return false;
if (!username.equals(simple.username)) return false;
return true;
}
@Override
public int hashCode() {
int result = username.hashCode();
result = 31 * result + Arrays.hashCode(roles);
return result;
}
}
private static class System extends User {
private static final String NAME = "__es_system_user";
private static final String[] ROLES = new String[] { SystemRole.NAME };
private System() {
}
@Override
public String principal() {
return NAME;
}
@Override
public String[] roles() {
return ROLES;
}
}
}
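The wire framing in `writeTo`/`readFrom` above leads with a boolean flag: `true` means the reserved system user (name only), `false` means a simple user followed by its roles. A hedged round-trip sketch using plain `java.io` data streams in place of elasticsearch's StreamOutput/StreamInput (the helper names and the `name:role` rendering below are illustrative):

```java
import java.io.*;

public class UserWireSketch {

    // true flag = system user (name only); false flag = simple user, name + roles
    public static byte[] write(boolean system, String name, String... roles) {
        try {
            ByteArrayOutputStream buf = new ByteArrayOutputStream();
            DataOutputStream out = new DataOutputStream(buf);
            out.writeBoolean(system);
            out.writeUTF(name);
            if (!system) {
                out.writeInt(roles.length);
                for (String role : roles) {
                    out.writeUTF(role);
                }
            }
            return buf.toByteArray();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    // mirrors readFrom: a true flag must carry the reserved system principal
    public static String read(byte[] data) {
        try {
            DataInputStream in = new DataInputStream(new ByteArrayInputStream(data));
            if (in.readBoolean()) {
                String name = in.readUTF();
                if (!"__es_system_user".equals(name)) {
                    throw new IllegalStateException("invalid system user");
                }
                return name;
            }
            StringBuilder sb = new StringBuilder(in.readUTF());
            int count = in.readInt();
            for (int i = 0; i < count; i++) {
                sb.append(':').append(in.readUTF());
            }
            return sb.toString();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(read(write(false, "alice", "admin"))); // alice:admin
    }
}
```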

@@ -0,0 +1,186 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License;
* you may not use this file except in compliance with the Elastic License.
*/
package org.elasticsearch.shield.action;
import com.google.common.base.Predicate;
import org.elasticsearch.action.ActionListener;
import org.elasticsearch.action.ActionRequest;
import org.elasticsearch.action.ActionResponse;
import org.elasticsearch.action.search.ClearScrollRequest;
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.action.search.SearchScrollRequest;
import org.elasticsearch.action.support.ActionFilter;
import org.elasticsearch.action.support.ActionFilterChain;
import org.elasticsearch.common.component.AbstractComponent;
import org.elasticsearch.common.inject.Inject;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.license.plugin.core.LicenseUtils;
import org.elasticsearch.shield.User;
import org.elasticsearch.shield.audit.AuditTrail;
import org.elasticsearch.shield.authc.AuthenticationService;
import org.elasticsearch.shield.authz.AuthorizationService;
import org.elasticsearch.shield.authz.Privilege;
import org.elasticsearch.shield.crypto.CryptoService;
import org.elasticsearch.shield.license.LicenseEventsNotifier;
import org.elasticsearch.shield.license.LicenseService;
import java.util.ArrayList;
import java.util.List;
import static org.elasticsearch.shield.support.Exceptions.authorizationError;
/**
*
*/
public class ShieldActionFilter extends AbstractComponent implements ActionFilter {
private static final Predicate<String> LICENSE_EXPIRATION_ACTION_MATCHER = Privilege.HEALTH_AND_STATS.predicate();
private final AuthenticationService authcService;
private final AuthorizationService authzService;
private final CryptoService cryptoService;
private final AuditTrail auditTrail;
private final ShieldActionMapper actionMapper;
private volatile boolean licenseEnabled = true;
@Inject
public ShieldActionFilter(Settings settings, AuthenticationService authcService, AuthorizationService authzService, CryptoService cryptoService,
AuditTrail auditTrail, LicenseEventsNotifier licenseEventsNotifier, ShieldActionMapper actionMapper) {
super(settings);
this.authcService = authcService;
this.authzService = authzService;
this.cryptoService = cryptoService;
this.auditTrail = auditTrail;
this.actionMapper = actionMapper;
licenseEventsNotifier.register(new LicenseEventsNotifier.Listener() {
@Override
public void enabled() {
licenseEnabled = true;
}
@Override
public void disabled() {
licenseEnabled = false;
}
});
}
@Override
public void apply(String action, ActionRequest request, ActionListener listener, ActionFilterChain chain) {
/*
A functional requirement - when the shield license is disabled (invalid/expired), shield continues
to operate normally, except that cluster health, cluster stats and indices stats operations are blocked.
*/
if (!licenseEnabled && LICENSE_EXPIRATION_ACTION_MATCHER.apply(action)) {
logger.error("blocking [{}] operation due to expired license. Cluster health, cluster stats and indices stats \n" +
"operations are blocked on shield license expiration. All data operations (read and write) continue to work. \n" +
"If you have a new license, please update it. Otherwise, please reach out to your support contact.", action);
throw LicenseUtils.newExpirationException(LicenseService.FEATURE_NAME);
}
try {
/*
here we fall back on the system user. Internal system requests are requests that are triggered by
the system itself (e.g. pings, update mappings, shard relocation, etc...) and did not originate
from user interaction. Since these requests are triggered by es core modules, they are security
agnostic and therefore not associated with any user. When these requests execute locally, they
are executed directly on their relevant action. Since there is no other way a request can make
it to the action without an associated user (not via REST or transport - this is taken care of by
the {@link Rest} filter and the {@link ServerTransport} filter respectively), it's safe to assume a system user
here if a request is not associated with any other user.
*/
String shieldAction = actionMapper.action(action, request);
User user = authcService.authenticate(shieldAction, request, User.SYSTEM);
authzService.authorize(user, shieldAction, request);
request = unsign(user, shieldAction, request);
chain.proceed(action, request, new SigningListener(this, listener));
} catch (Throwable t) {
listener.onFailure(t);
}
}
@Override
public void apply(String action, ActionResponse response, ActionListener listener, ActionFilterChain chain) {
chain.proceed(action, response, listener);
}
@Override
public int order() {
return Integer.MIN_VALUE;
}
<Request extends ActionRequest> Request unsign(User user, String action, Request request) {
try {
if (request instanceof SearchScrollRequest) {
SearchScrollRequest scrollRequest = (SearchScrollRequest) request;
String scrollId = scrollRequest.scrollId();
scrollRequest.scrollId(cryptoService.unsignAndVerify(scrollId));
return request;
}
if (request instanceof ClearScrollRequest) {
ClearScrollRequest clearScrollRequest = (ClearScrollRequest) request;
boolean isClearAllScrollRequest = clearScrollRequest.scrollIds().contains("_all");
if (!isClearAllScrollRequest) {
List<String> signedIds = clearScrollRequest.scrollIds();
List<String> unsignedIds = new ArrayList<>(signedIds.size());
for (String signedId : signedIds) {
unsignedIds.add(cryptoService.unsignAndVerify(signedId));
}
clearScrollRequest.scrollIds(unsignedIds);
}
return request;
}
return request;
} catch (IllegalArgumentException | IllegalStateException e) {
auditTrail.tamperedRequest(user, action, request);
throw authorizationError("invalid request. {}", e.getMessage());
}
}
<Response extends ActionResponse> Response sign(Response response) {
if (response instanceof SearchResponse) {
SearchResponse searchResponse = (SearchResponse) response;
String scrollId = searchResponse.getScrollId();
if (scrollId != null && !cryptoService.signed(scrollId)) {
searchResponse.scrollId(cryptoService.sign(scrollId));
}
return response;
}
return response;
}
static class SigningListener<Response extends ActionResponse> implements ActionListener<Response> {
private final ShieldActionFilter filter;
private final ActionListener innerListener;
private SigningListener(ShieldActionFilter filter, ActionListener innerListener) {
this.filter = filter;
this.innerListener = innerListener;
}
@Override @SuppressWarnings("unchecked")
public void onResponse(Response response) {
response = this.filter.sign(response);
innerListener.onResponse(response);
}
@Override
public void onFailure(Throwable e) {
innerListener.onFailure(e);
}
}
}
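CryptoService's actual signing format and key handling are not shown in this change. As a rough illustration of the sign / unsign-and-verify round trip the filter relies on for scroll ids, here is a hypothetical HMAC-based sketch; the `$$` framing, the algorithm, and the method names are assumptions, not the real implementation:

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.security.GeneralSecurityException;
import java.util.Base64;

public class ScrollIdSigningSketch {

    private final SecretKeySpec key;

    public ScrollIdSigningSketch(byte[] keyBytes) {
        this.key = new SecretKeySpec(keyBytes, "HmacSHA1");
    }

    // prepend a "$$<base64 hmac>$$" header so the payload stays readable
    public String sign(String scrollId) {
        return "$$" + hmac(scrollId) + "$$" + scrollId;
    }

    // strip the header and re-compute the hmac; any mismatch means tampering
    public String unsignAndVerify(String signed) {
        if (!signed.startsWith("$$")) {
            throw new IllegalArgumentException("message not signed");
        }
        int end = signed.indexOf("$$", 2);
        String scrollId = signed.substring(end + 2);
        if (!signed.equals(sign(scrollId))) {
            throw new IllegalStateException("tampered signed message");
        }
        return scrollId;
    }

    private String hmac(String text) {
        try {
            Mac mac = Mac.getInstance("HmacSHA1");
            mac.init(key);
            return Base64.getEncoder().encodeToString(mac.doFinal(text.getBytes(StandardCharsets.UTF_8)));
        } catch (GeneralSecurityException e) {
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        ScrollIdSigningSketch svc = new ScrollIdSigningSketch("a-shared-secret".getBytes(StandardCharsets.UTF_8));
        String signed = svc.sign("c2Nhbjs1OzE6");
        System.out.println(svc.unsignAndVerify(signed)); // c2Nhbjs1OzE6
    }
}
```

Re-signing the extracted payload and comparing against the input is what lets `unsign` map a forged or altered scroll id to the tampered-request audit path above.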

@@ -0,0 +1,47 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License;
* you may not use this file except in compliance with the Elastic License.
*/
package org.elasticsearch.shield.action;
import org.elasticsearch.action.admin.indices.analyze.AnalyzeAction;
import org.elasticsearch.action.admin.indices.analyze.AnalyzeRequest;
import org.elasticsearch.action.search.ClearScrollAction;
import org.elasticsearch.action.search.ClearScrollRequest;
import org.elasticsearch.transport.TransportRequest;
/**
* This class analyzes an incoming request and its action name, and returns the shield action name for it.
* In many cases the action name is the same as the original one used in es core, but in some exceptional cases it might need
* to be converted. For instance, a clear_scroll that targets all open scrolls gets converted to a different action that requires
* cluster privileges instead of the default indices privileges, which remain valid for clear_scroll requests that target specific scroll ids.
*/
public class ShieldActionMapper {
static final String CLUSTER_PERMISSION_SCROLL_CLEAR_ALL_NAME = "cluster:admin/indices/scroll/clear_all";
static final String CLUSTER_PERMISSION_ANALYZE = "cluster:admin/analyze";
/**
* Returns the shield specific action name given the incoming action name and request
*/
public String action(String action, TransportRequest request) {
switch (action) {
case ClearScrollAction.NAME:
assert request instanceof ClearScrollRequest;
boolean isClearAllScrollRequest = ((ClearScrollRequest) request).scrollIds().contains("_all");
if (isClearAllScrollRequest) {
return CLUSTER_PERMISSION_SCROLL_CLEAR_ALL_NAME;
}
break;
case AnalyzeAction.NAME:
assert request instanceof AnalyzeRequest;
String[] indices = ((AnalyzeRequest) request).indices();
if (indices == null || indices.length == 0) {
return CLUSTER_PERMISSION_ANALYZE;
}
break;
}
return action;
}
}
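The mapping rule can be restated with plain strings: escalate to a cluster-level action name exactly when the request has a cluster-wide effect. In this sketch the `CLEAR_SCROLL` and `ANALYZE_ACTION` constants are assumed stand-ins for `ClearScrollAction.NAME` and `AnalyzeAction.NAME`, not values taken from this change:

```java
import java.util.Arrays;
import java.util.List;

public class ActionMappingSketch {

    // shield-specific action names, copied from ShieldActionMapper above
    public static final String CLEAR_ALL = "cluster:admin/indices/scroll/clear_all";
    public static final String ANALYZE = "cluster:admin/analyze";

    // assumed stand-ins for ClearScrollAction.NAME / AnalyzeAction.NAME
    public static final String CLEAR_SCROLL = "indices:data/read/scroll/clear";
    public static final String ANALYZE_ACTION = "indices:admin/analyze";

    // mirrors ShieldActionMapper.action(): the request payload decides whether
    // the indices-level action escalates to a cluster-level one
    public static String map(String action, List<String> scrollIds, String[] indices) {
        switch (action) {
            case CLEAR_SCROLL:
                if (scrollIds != null && scrollIds.contains("_all")) {
                    return CLEAR_ALL;
                }
                break;
            case ANALYZE_ACTION:
                if (indices == null || indices.length == 0) {
                    return ANALYZE;
                }
                break;
        }
        return action;
    }

    public static void main(String[] args) {
        System.out.println(map(CLEAR_SCROLL, Arrays.asList("_all"), null));  // cluster:admin/indices/scroll/clear_all
        System.out.println(map(CLEAR_SCROLL, Arrays.asList("abc"), null));   // indices:data/read/scroll/clear
    }
}
```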

@@ -0,0 +1,47 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License;
* you may not use this file except in compliance with the Elastic License.
*/
package org.elasticsearch.shield.action;
import org.elasticsearch.action.ActionModule;
import org.elasticsearch.common.inject.Module;
import org.elasticsearch.common.inject.PreProcessModule;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.shield.action.authc.cache.ClearRealmCacheAction;
import org.elasticsearch.shield.action.authc.cache.TransportClearRealmCacheAction;
import org.elasticsearch.shield.support.AbstractShieldModule;
/**
*
*/
public class ShieldActionModule extends AbstractShieldModule implements PreProcessModule {
public ShieldActionModule(Settings settings) {
super(settings);
}
@Override
public void processModule(Module module) {
if (module instanceof ActionModule) {
// registering the security filter only for nodes
if (!clientMode) {
((ActionModule) module).registerFilter(ShieldActionFilter.class);
}
// registering all shield actions
((ActionModule) module).registerAction(ClearRealmCacheAction.INSTANCE, TransportClearRealmCacheAction.class);
}
}
@Override
protected void configure(boolean clientMode) {
if (!clientMode) {
bind(ShieldActionMapper.class).asEagerSingleton();
// we need to ensure that there's only a single instance of this filter.
bind(ShieldActionFilter.class).asEagerSingleton();
}
}
}

@@ -0,0 +1,32 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License;
* you may not use this file except in compliance with the Elastic License.
*/
package org.elasticsearch.shield.action.authc.cache;
import org.elasticsearch.action.Action;
import org.elasticsearch.client.ElasticsearchClient;
/**
*
*/
public class ClearRealmCacheAction extends Action<ClearRealmCacheRequest, ClearRealmCacheResponse, ClearRealmCacheRequestBuilder> {
public static final ClearRealmCacheAction INSTANCE = new ClearRealmCacheAction();
public static final String NAME = "cluster:admin/shield/realm/cache/clear";
protected ClearRealmCacheAction() {
super(NAME);
}
@Override
public ClearRealmCacheRequestBuilder newRequestBuilder(ElasticsearchClient client) {
return new ClearRealmCacheRequestBuilder(client, this);
}
@Override
public ClearRealmCacheResponse newResponse() {
return new ClearRealmCacheResponse();
}
}

@@ -0,0 +1,115 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License;
* you may not use this file except in compliance with the Elastic License.
*/
package org.elasticsearch.shield.action.authc.cache;
import org.elasticsearch.action.support.nodes.BaseNodeRequest;
import org.elasticsearch.action.support.nodes.BaseNodesRequest;
import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;
import java.io.IOException;
/**
*
*/
public class ClearRealmCacheRequest extends BaseNodesRequest<ClearRealmCacheRequest> {
String[] realms;
String[] usernames;
/**
* @return {@code true} if this request targets all realms, {@code false} if it targets specific realms.
*/
public boolean allRealms() {
return realms == null || realms.length == 0;
}
/**
* @return The realms whose caches should be evicted. An empty array indicates all realms.
*/
public String[] realms() {
return realms;
}
/**
* Sets the realms whose caches will be evicted. When not set, the caches of all realms will be
* evicted.
*
* @param realms The realm names
*/
public ClearRealmCacheRequest realms(String... realms) {
this.realms = realms;
return this;
}
/**
* @return {@code true} if this request targets all users, {@code false} if it targets specific users.
*/
public boolean allUsernames() {
return usernames == null || usernames.length == 0;
}
/**
* @return The usernames of the users whose cache entries should be evicted. An empty array indicates all users.
*/
public String[] usernames() {
return usernames;
}
/**
* Sets the usernames of the users that should be evicted from the caches. When not set, all users
* will be evicted.
*
* @param usernames The usernames
*/
public ClearRealmCacheRequest usernames(String... usernames) {
this.usernames = usernames;
return this;
}
@Override
public void readFrom(StreamInput in) throws IOException {
super.readFrom(in);
realms = in.readStringArray();
usernames = in.readStringArray();
}
@Override
public void writeTo(StreamOutput out) throws IOException {
super.writeTo(out);
out.writeStringArrayNullable(realms);
out.writeStringArrayNullable(usernames);
}
static class Node extends BaseNodeRequest {
String[] realms;
String[] usernames;
Node() {
}
Node(ClearRealmCacheRequest request, String nodeId) {
super(request, nodeId);
this.realms = request.realms;
this.usernames = request.usernames;
}
@Override
public void readFrom(StreamInput in) throws IOException {
super.readFrom(in);
realms = in.readStringArray();
usernames = in.readStringArray();
}
@Override
public void writeTo(StreamOutput out) throws IOException {
super.writeTo(out);
out.writeStringArrayNullable(realms);
out.writeStringArrayNullable(usernames);
}
}
}

@@ -0,0 +1,50 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License;
* you may not use this file except in compliance with the Elastic License.
*/
package org.elasticsearch.shield.action.authc.cache;
import org.elasticsearch.action.support.nodes.NodesOperationRequestBuilder;
import org.elasticsearch.client.ElasticsearchClient;
import org.elasticsearch.shield.client.ShieldAuthcClient;
import org.elasticsearch.shield.client.ShieldClient;
/**
*
*/
public class ClearRealmCacheRequestBuilder extends NodesOperationRequestBuilder<ClearRealmCacheRequest, ClearRealmCacheResponse, ClearRealmCacheRequestBuilder> {
private final ShieldAuthcClient authcClient;
public ClearRealmCacheRequestBuilder(ElasticsearchClient client) {
this(client, ClearRealmCacheAction.INSTANCE);
}
public ClearRealmCacheRequestBuilder(ElasticsearchClient client, ClearRealmCacheAction action) {
super(client, action, new ClearRealmCacheRequest());
authcClient = new ShieldClient(client).authc();
}
/**
* Sets the realms whose caches will be evicted. When not set, the caches of all realms will be
* evicted.
*
* @param realms The realm names
*/
public ClearRealmCacheRequestBuilder realms(String... realms) {
request.realms(realms);
return this;
}
/**
* Sets the usernames of the users that should be evicted from the caches. When not set, all users
* will be evicted.
*
* @param usernames The usernames
*/
public ClearRealmCacheRequestBuilder usernames(String... usernames) {
request.usernames(usernames);
return this;
}
}

@@ -0,0 +1,91 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License;
* you may not use this file except in compliance with the Elastic License.
*/
package org.elasticsearch.shield.action.authc.cache;
import org.elasticsearch.action.support.nodes.BaseNodeResponse;
import org.elasticsearch.action.support.nodes.BaseNodesResponse;
import org.elasticsearch.cluster.ClusterName;
import org.elasticsearch.cluster.node.DiscoveryNode;
import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;
import org.elasticsearch.common.xcontent.ToXContent;
import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.common.xcontent.XContentFactory;
import java.io.IOException;
/**
*
*/
public class ClearRealmCacheResponse extends BaseNodesResponse<ClearRealmCacheResponse.Node> implements ToXContent {
public ClearRealmCacheResponse() {
}
public ClearRealmCacheResponse(ClusterName clusterName, Node[] nodes) {
super(clusterName, nodes);
}
@Override
public void readFrom(StreamInput in) throws IOException {
super.readFrom(in);
nodes = new Node[in.readVInt()];
for (int i = 0; i < nodes.length; i++) {
nodes[i] = Node.readNodeResponse(in);
}
}
@Override
public void writeTo(StreamOutput out) throws IOException {
super.writeTo(out);
out.writeVInt(nodes.length);
for (Node node : nodes) {
node.writeTo(out);
}
}
@Override
public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException {
builder.field("cluster_name", getClusterName().value());
builder.startObject("nodes");
for (ClearRealmCacheResponse.Node node: getNodes()) {
builder.startObject(node.getNode().id());
builder.field("name", node.getNode().name());
builder.endObject();
}
return builder.endObject();
}
@Override
public String toString() {
try {
XContentBuilder builder = XContentFactory.jsonBuilder().prettyPrint();
builder.startObject();
toXContent(builder, EMPTY_PARAMS);
builder.endObject();
return builder.string();
} catch (IOException e) {
return "{ \"error\" : \"" + e.getMessage() + "\"}";
}
}
public static class Node extends BaseNodeResponse {
Node() {
}
Node(DiscoveryNode node) {
super(node);
}
public static Node readNodeResponse(StreamInput in) throws IOException {
Node node = new Node();
node.readFrom(in);
return node;
}
}
}

@@ -0,0 +1,104 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License;
* you may not use this file except in compliance with the Elastic License.
*/
package org.elasticsearch.shield.action.authc.cache;
import com.google.common.collect.Lists;
import org.elasticsearch.ElasticsearchException;
import org.elasticsearch.action.support.ActionFilters;
import org.elasticsearch.action.support.nodes.TransportNodesAction;
import org.elasticsearch.cluster.ClusterName;
import org.elasticsearch.cluster.ClusterService;
import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver;
import org.elasticsearch.common.inject.Inject;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.shield.authc.Realm;
import org.elasticsearch.shield.authc.Realms;
import org.elasticsearch.shield.authc.support.CachingUsernamePasswordRealm;
import org.elasticsearch.threadpool.ThreadPool;
import org.elasticsearch.transport.TransportService;
import java.util.List;
import java.util.concurrent.atomic.AtomicReferenceArray;
/**
*
*/
public class TransportClearRealmCacheAction extends TransportNodesAction<ClearRealmCacheRequest, ClearRealmCacheResponse, ClearRealmCacheRequest.Node, ClearRealmCacheResponse.Node> {
private final Realms realms;
@Inject
public TransportClearRealmCacheAction(Settings settings, ClusterName clusterName, ThreadPool threadPool,
ClusterService clusterService, TransportService transportService,
ActionFilters actionFilters, Realms realms,
IndexNameExpressionResolver indexNameExpressionResolver) {
super(settings, ClearRealmCacheAction.NAME, clusterName, threadPool, clusterService, transportService, actionFilters,
indexNameExpressionResolver, ClearRealmCacheRequest.class, ClearRealmCacheRequest.Node.class, ThreadPool.Names.MANAGEMENT);
this.realms = realms;
}
@Override
protected ClearRealmCacheResponse newResponse(ClearRealmCacheRequest request, AtomicReferenceArray responses) {
final List<ClearRealmCacheResponse.Node> nodes = Lists.newArrayList();
for (int i = 0; i < responses.length(); i++) {
Object resp = responses.get(i);
if (resp instanceof ClearRealmCacheResponse.Node) {
nodes.add((ClearRealmCacheResponse.Node) resp);
}
}
return new ClearRealmCacheResponse(clusterName, nodes.toArray(new ClearRealmCacheResponse.Node[nodes.size()]));
}
@Override
protected ClearRealmCacheRequest.Node newNodeRequest(String nodeId, ClearRealmCacheRequest request) {
return new ClearRealmCacheRequest.Node(request, nodeId);
}
@Override
protected ClearRealmCacheResponse.Node newNodeResponse() {
return new ClearRealmCacheResponse.Node();
}
@Override
protected ClearRealmCacheResponse.Node nodeOperation(ClearRealmCacheRequest.Node nodeRequest) throws ElasticsearchException {
if (nodeRequest.realms == null || nodeRequest.realms.length == 0) {
for (Realm realm : realms) {
clearCache(realm, nodeRequest.usernames);
}
return new ClearRealmCacheResponse.Node(clusterService.localNode());
}
for (String realmName : nodeRequest.realms) {
Realm realm = realms.realm(realmName);
if (realm == null) {
throw new IllegalArgumentException("could not find active realm [" + realmName + "]");
}
clearCache(realm, nodeRequest.usernames);
}
return new ClearRealmCacheResponse.Node(clusterService.localNode());
}
private void clearCache(Realm realm, String[] usernames) {
if (!(realm instanceof CachingUsernamePasswordRealm)) {
return;
}
CachingUsernamePasswordRealm cachingRealm = (CachingUsernamePasswordRealm) realm;
if (usernames != null && usernames.length != 0) {
for (String username : usernames) {
cachingRealm.expire(username);
}
} else {
cachingRealm.expireAll();
}
}
@Override
protected boolean accumulateExceptions() {
return false;
}
}
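The eviction semantics of `clearCache` above (evict the named users, or everything when no usernames are given) can be sketched with a toy map-backed cache standing in for CachingUsernamePasswordRealm; all names here are illustrative:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class RealmCacheSketch {

    private final Map<String, Object> cache = new ConcurrentHashMap<>();

    public void put(String username, Object user) {
        cache.put(username, user);
    }

    // mirrors CachingUsernamePasswordRealm.expire / expireAll
    public void expire(String username) {
        cache.remove(username);
    }

    public void expireAll() {
        cache.clear();
    }

    // mirrors TransportClearRealmCacheAction.clearCache: evict the named
    // users, or everything when no usernames are given
    public void clear(String[] usernames) {
        if (usernames != null && usernames.length != 0) {
            for (String username : usernames) {
                expire(username);
            }
        } else {
            expireAll();
        }
    }

    public int size() {
        return cache.size();
    }

    public static void main(String[] args) {
        RealmCacheSketch realm = new RealmCacheSketch();
        realm.put("alice", new Object());
        realm.put("bob", new Object());
        realm.clear(new String[] { "alice" });
        System.out.println(realm.size()); // 1
        realm.clear(null);
        System.out.println(realm.size()); // 0
    }
}
```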

@@ -0,0 +1,111 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License;
* you may not use this file except in compliance with the Elastic License.
*/
package org.elasticsearch.shield.audit;
import org.elasticsearch.rest.RestRequest;
import org.elasticsearch.shield.User;
import org.elasticsearch.shield.authc.AuthenticationToken;
import org.elasticsearch.shield.transport.filter.ShieldIpFilterRule;
import org.elasticsearch.transport.TransportMessage;
import org.elasticsearch.transport.TransportRequest;
import java.net.InetAddress;
/**
*
*/
public interface AuditTrail {
static final AuditTrail NOOP = new AuditTrail() {
static final String NAME = "noop";
@Override
public String name() {
return NAME;
}
@Override
public void anonymousAccessDenied(String action, TransportMessage<?> message) {
}
@Override
public void anonymousAccessDenied(RestRequest request) {
}
@Override
public void authenticationFailed(RestRequest request) {
}
@Override
public void authenticationFailed(String action, TransportMessage<?> message) {
}
@Override
public void authenticationFailed(AuthenticationToken token, String action, TransportMessage<?> message) {
}
@Override
public void authenticationFailed(AuthenticationToken token, RestRequest request) {
}
@Override
public void authenticationFailed(String realm, AuthenticationToken token, String action, TransportMessage<?> message) {
}
@Override
public void authenticationFailed(String realm, AuthenticationToken token, RestRequest request) {
}
@Override
public void accessGranted(User user, String action, TransportMessage<?> message) {
}
@Override
public void accessDenied(User user, String action, TransportMessage<?> message) {
}
@Override
public void tamperedRequest(User user, String action, TransportRequest request) {
}
@Override
public void connectionGranted(InetAddress inetAddress, String profile, ShieldIpFilterRule rule) {
}
@Override
public void connectionDenied(InetAddress inetAddress, String profile, ShieldIpFilterRule rule) {
}
};
String name();
void anonymousAccessDenied(String action, TransportMessage<?> message);
void anonymousAccessDenied(RestRequest request);
void authenticationFailed(RestRequest request);
void authenticationFailed(String action, TransportMessage<?> message);
void authenticationFailed(AuthenticationToken token, String action, TransportMessage<?> message);
void authenticationFailed(AuthenticationToken token, RestRequest request);
void authenticationFailed(String realm, AuthenticationToken token, String action, TransportMessage<?> message);
void authenticationFailed(String realm, AuthenticationToken token, RestRequest request);
void accessGranted(User user, String action, TransportMessage<?> message);
void accessDenied(User user, String action, TransportMessage<?> message);
void tamperedRequest(User user, String action, TransportRequest request);
void connectionGranted(InetAddress inetAddress, String profile, ShieldIpFilterRule rule);
void connectionDenied(InetAddress inetAddress, String profile, ShieldIpFilterRule rule);
}

@@ -0,0 +1,96 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License;
* you may not use this file except in compliance with the Elastic License.
*/
package org.elasticsearch.shield.audit;
import com.google.common.collect.Sets;
import org.elasticsearch.ElasticsearchException;
import org.elasticsearch.common.inject.Module;
import org.elasticsearch.common.inject.PreProcessModule;
import org.elasticsearch.common.inject.multibindings.Multibinder;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.shield.ShieldLifecycleService;
import org.elasticsearch.shield.audit.index.IndexAuditTrail;
import org.elasticsearch.shield.audit.index.IndexAuditUserHolder;
import org.elasticsearch.shield.audit.logfile.LoggingAuditTrail;
import org.elasticsearch.shield.authz.AuthorizationModule;
import org.elasticsearch.shield.support.AbstractShieldModule;
import java.util.Set;
/**
 * Module that binds the configured audit trail outputs based on the {@code shield.audit.*} settings.
 */
public class AuditTrailModule extends AbstractShieldModule.Node implements PreProcessModule {
private final boolean enabled;
private IndexAuditUserHolder indexAuditUser;
public AuditTrailModule(Settings settings) {
super(settings);
enabled = auditingEnabled(settings);
}
@Override
protected void configureNode() {
if (!enabled) {
bind(AuditTrail.class).toInstance(AuditTrail.NOOP);
return;
}
String[] outputs = settings.getAsArray("shield.audit.outputs", new String[] { LoggingAuditTrail.NAME });
if (outputs.length == 0) {
bind(AuditTrail.class).toInstance(AuditTrail.NOOP);
return;
}
bind(AuditTrail.class).to(AuditTrailService.class).asEagerSingleton();
Multibinder<AuditTrail> binder = Multibinder.newSetBinder(binder(), AuditTrail.class);
Set<String> uniqueOutputs = Sets.newHashSet(outputs);
for (String output : uniqueOutputs) {
switch (output) {
case LoggingAuditTrail.NAME:
binder.addBinding().to(LoggingAuditTrail.class);
bind(LoggingAuditTrail.class).asEagerSingleton();
break;
case IndexAuditTrail.NAME:
// TODO should bind the lifecycle service in ShieldModule if we use it other places...
bind(ShieldLifecycleService.class).asEagerSingleton();
bind(IndexAuditUserHolder.class).toInstance(indexAuditUser);
binder.addBinding().to(IndexAuditTrail.class);
bind(IndexAuditTrail.class).asEagerSingleton();
break;
default:
throw new ElasticsearchException("unknown audit trail output [" + output + "]");
}
}
}
@Override
public void processModule(Module module) {
if (enabled && module instanceof AuthorizationModule) {
if (indexAuditLoggingEnabled(settings)) {
indexAuditUser = new IndexAuditUserHolder(IndexAuditTrail.INDEX_NAME_PREFIX);
((AuthorizationModule) module).registerReservedRole(indexAuditUser.role());
}
}
}
static boolean auditingEnabled(Settings settings) {
return settings.getAsBoolean("shield.audit.enabled", false);
}
public static boolean indexAuditLoggingEnabled(Settings settings) {
if (auditingEnabled(settings)) {
String[] outputs = settings.getAsArray("shield.audit.outputs");
for (String output : outputs) {
if (output.equals(IndexAuditTrail.NAME)) {
return true;
}
}
}
return false;
}
}
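The binding logic in `configureNode` deduplicates the configured outputs with a set and fails fast on unknown names. A standalone sketch of that selection step, with the output names simplified ("logfile" here stands in for `LoggingAuditTrail.NAME`, whose actual value is not shown in this diff):

```java
import java.util.LinkedHashSet;
import java.util.Set;

public class OutputSelection {

    // Mirrors AuditTrailModule.configureNode(): dedupe the outputs, validate each name.
    static Set<String> select(String... outputs) {
        Set<String> unique = new LinkedHashSet<>();
        for (String output : outputs) {
            switch (output) {
                case "logfile":   // stands in for LoggingAuditTrail.NAME
                case "index":     // IndexAuditTrail.NAME
                    unique.add(output);
                    break;
                default:
                    // same fail-fast behavior as the ElasticsearchException above
                    throw new IllegalArgumentException("unknown audit trail output [" + output + "]");
            }
        }
        return unique;
    }

    public static void main(String[] args) {
        System.out.println(select("index", "logfile", "index")); // duplicates collapse
    }
}
```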


@@ -0,0 +1,129 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License;
* you may not use this file except in compliance with the Elastic License.
*/
package org.elasticsearch.shield.audit;
import org.elasticsearch.common.component.AbstractComponent;
import org.elasticsearch.common.inject.Inject;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.rest.RestRequest;
import org.elasticsearch.shield.User;
import org.elasticsearch.shield.authc.AuthenticationToken;
import org.elasticsearch.shield.transport.filter.ShieldIpFilterRule;
import org.elasticsearch.transport.TransportMessage;
import org.elasticsearch.transport.TransportRequest;
import java.net.InetAddress;
import java.util.Set;
/**
 * Composite {@link AuditTrail} that forwards every audit event to each configured audit trail.
 */
public class AuditTrailService extends AbstractComponent implements AuditTrail {
final AuditTrail[] auditTrails;
@Override
public String name() {
return "service";
}
@Inject
public AuditTrailService(Settings settings, Set<AuditTrail> auditTrails) {
super(settings);
this.auditTrails = auditTrails.toArray(new AuditTrail[auditTrails.size()]);
}
@Override
public void anonymousAccessDenied(String action, TransportMessage<?> message) {
for (AuditTrail auditTrail : auditTrails) {
auditTrail.anonymousAccessDenied(action, message);
}
}
@Override
public void anonymousAccessDenied(RestRequest request) {
for (AuditTrail auditTrail : auditTrails) {
auditTrail.anonymousAccessDenied(request);
}
}
@Override
public void authenticationFailed(RestRequest request) {
for (AuditTrail auditTrail : auditTrails) {
auditTrail.authenticationFailed(request);
}
}
@Override
public void authenticationFailed(String action, TransportMessage<?> message) {
for (AuditTrail auditTrail : auditTrails) {
auditTrail.authenticationFailed(action, message);
}
}
@Override
public void authenticationFailed(AuthenticationToken token, String action, TransportMessage<?> message) {
for (AuditTrail auditTrail : auditTrails) {
auditTrail.authenticationFailed(token, action, message);
}
}
@Override
public void authenticationFailed(String realm, AuthenticationToken token, String action, TransportMessage<?> message) {
for (AuditTrail auditTrail : auditTrails) {
auditTrail.authenticationFailed(realm, token, action, message);
}
}
@Override
public void authenticationFailed(AuthenticationToken token, RestRequest request) {
for (AuditTrail auditTrail : auditTrails) {
auditTrail.authenticationFailed(token, request);
}
}
@Override
public void authenticationFailed(String realm, AuthenticationToken token, RestRequest request) {
for (AuditTrail auditTrail : auditTrails) {
auditTrail.authenticationFailed(realm, token, request);
}
}
@Override
public void accessGranted(User user, String action, TransportMessage<?> message) {
for (AuditTrail auditTrail : auditTrails) {
auditTrail.accessGranted(user, action, message);
}
}
@Override
public void accessDenied(User user, String action, TransportMessage<?> message) {
for (AuditTrail auditTrail : auditTrails) {
auditTrail.accessDenied(user, action, message);
}
}
@Override
public void tamperedRequest(User user, String action, TransportRequest request) {
for (AuditTrail auditTrail : auditTrails) {
auditTrail.tamperedRequest(user, action, request);
}
}
@Override
public void connectionGranted(InetAddress inetAddress, String profile, ShieldIpFilterRule rule) {
for (AuditTrail auditTrail : auditTrails) {
auditTrail.connectionGranted(inetAddress, profile, rule);
}
}
@Override
public void connectionDenied(InetAddress inetAddress, String profile, ShieldIpFilterRule rule) {
for (AuditTrail auditTrail : auditTrails) {
auditTrail.connectionDenied(inetAddress, profile, rule);
}
}
}
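`AuditTrailService` is a plain composite: every audit method fans the call out to each registered trail in turn. The shape, reduced to a single method with illustrative names:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class FanOutDemo {

    interface Trail {
        void accessDenied(String user, String action);
    }

    /** Composite that forwards each event to every configured trail, in order. */
    static class TrailService implements Trail {
        final Trail[] trails;

        TrailService(List<Trail> trails) {
            this.trails = trails.toArray(new Trail[0]);
        }

        @Override
        public void accessDenied(String user, String action) {
            for (Trail trail : trails) {
                trail.accessDenied(user, action);
            }
        }
    }

    static final List<String> seen = new ArrayList<>();

    public static void main(String[] args) {
        Trail logfile = (user, action) -> seen.add("logfile:" + user);
        Trail index = (user, action) -> seen.add("index:" + user);
        new TrailService(Arrays.asList(logfile, index)).accessDenied("alice", "indices:data/read/search");
        System.out.println(seen); // both sinks received the same event
    }
}
```

Copying the injected set into an array up front, as the constructor above does, keeps the hot delegation loop free of iterator allocation, which matches what `AuditTrailService` does with its `auditTrails` field.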


@@ -0,0 +1,37 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License;
* you may not use this file except in compliance with the Elastic License.
*/
package org.elasticsearch.shield.audit;
import org.elasticsearch.action.IndicesRequest;
import org.elasticsearch.common.xcontent.XContentHelper;
import org.elasticsearch.rest.RestRequest;
import org.elasticsearch.transport.TransportMessage;
import java.io.IOException;
/**
 * Helper methods shared by the audit trail implementations.
 */
public class AuditUtil {
public static String restRequestContent(RestRequest request) {
if (request.hasContent()) {
try {
return XContentHelper.convertToJson(request.content(), false, false);
} catch (IOException ioe) {
return "Invalid Format: " + request.content().toUtf8();
}
}
return "";
}
public static String[] indices(TransportMessage message) {
if (message instanceof IndicesRequest) {
return ((IndicesRequest) message).indices();
}
return null;
}
}


@@ -0,0 +1,68 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License;
* you may not use this file except in compliance with the Elastic License.
*/
package org.elasticsearch.shield.audit.index;
import java.util.Arrays;
import java.util.EnumSet;
import java.util.Locale;
public enum IndexAuditLevel {
ANONYMOUS_ACCESS_DENIED,
AUTHENTICATION_FAILED,
ACCESS_GRANTED,
ACCESS_DENIED,
TAMPERED_REQUEST,
CONNECTION_GRANTED,
CONNECTION_DENIED,
SYSTEM_ACCESS_GRANTED;
static EnumSet<IndexAuditLevel> parse(String[] levels) {
EnumSet<IndexAuditLevel> enumSet = EnumSet.noneOf(IndexAuditLevel.class);
for (String level : levels) {
String lowerCaseLevel = level.trim().toLowerCase(Locale.ROOT);
switch (lowerCaseLevel) {
case "_all":
enumSet.addAll(Arrays.asList(IndexAuditLevel.values()));
break;
case "anonymous_access_denied":
enumSet.add(ANONYMOUS_ACCESS_DENIED);
break;
case "authentication_failed":
enumSet.add(AUTHENTICATION_FAILED);
break;
case "access_granted":
enumSet.add(ACCESS_GRANTED);
break;
case "access_denied":
enumSet.add(ACCESS_DENIED);
break;
case "tampered_request":
enumSet.add(TAMPERED_REQUEST);
break;
case "connection_granted":
enumSet.add(CONNECTION_GRANTED);
break;
case "connection_denied":
enumSet.add(CONNECTION_DENIED);
break;
case "system_access_granted":
enumSet.add(SYSTEM_ACCESS_GRANTED);
break;
default:
throw new IllegalArgumentException("invalid event name specified [" + level + "]");
}
}
return enumSet;
}
public static EnumSet<IndexAuditLevel> parse(String[] includeLevels, String[] excludeLevels) {
EnumSet<IndexAuditLevel> included = parse(includeLevels);
EnumSet<IndexAuditLevel> excluded = parse(excludeLevels);
included.removeAll(excluded);
return included;
}
}
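`parse(includeLevels, excludeLevels)` builds the effective level set by subtracting the excluded levels from the included ones, with `_all` expanding to every level and unknown names failing fast. A trimmed, standalone sketch of that set arithmetic (using a three-value enum rather than Shield's full `IndexAuditLevel`):

```java
import java.util.EnumSet;
import java.util.Locale;

public class LevelParse {

    enum Level { ACCESS_GRANTED, ACCESS_DENIED, AUTHENTICATION_FAILED }

    // Same shape as IndexAuditLevel.parse(String[], String[]): include minus exclude.
    static EnumSet<Level> parse(String[] include, String[] exclude) {
        EnumSet<Level> result = toSet(include);
        result.removeAll(toSet(exclude));
        return result;
    }

    static EnumSet<Level> toSet(String[] names) {
        EnumSet<Level> set = EnumSet.noneOf(Level.class);
        for (String name : names) {
            String normalized = name.trim().toLowerCase(Locale.ROOT);
            if (normalized.equals("_all")) {
                set.addAll(EnumSet.allOf(Level.class));
            } else {
                // valueOf throws IllegalArgumentException for unknown names,
                // mirroring the fail-fast default branch above.
                set.add(Level.valueOf(normalized.toUpperCase(Locale.ROOT)));
            }
        }
        return set;
    }

    public static void main(String[] args) {
        // "_all" minus one exclusion leaves every other level in the set.
        System.out.println(parse(new String[] { "_all" }, new String[] { "access_denied" }));
    }
}
```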


@@ -0,0 +1,763 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License;
* you may not use this file except in compliance with the Elastic License.
*/
package org.elasticsearch.shield.audit.index;
import com.google.common.base.Splitter;
import com.google.common.collect.ImmutableSet;
import org.elasticsearch.ElasticsearchException;
import org.elasticsearch.action.admin.indices.template.put.PutIndexTemplateRequest;
import org.elasticsearch.action.admin.indices.template.put.PutIndexTemplateResponse;
import org.elasticsearch.action.bulk.BulkProcessor;
import org.elasticsearch.action.bulk.BulkRequest;
import org.elasticsearch.action.bulk.BulkResponse;
import org.elasticsearch.action.index.IndexRequest;
import org.elasticsearch.client.Client;
import org.elasticsearch.client.transport.TransportClient;
import org.elasticsearch.cluster.ClusterChangedEvent;
import org.elasticsearch.cluster.ClusterState;
import org.elasticsearch.cluster.metadata.IndexMetaData;
import org.elasticsearch.common.Nullable;
import org.elasticsearch.common.Strings;
import org.elasticsearch.common.collect.Tuple;
import org.elasticsearch.common.component.AbstractComponent;
import org.elasticsearch.common.inject.Inject;
import org.elasticsearch.common.inject.Provider;
import org.elasticsearch.common.io.Streams;
import org.elasticsearch.common.network.NetworkUtils;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.transport.InetSocketTransportAddress;
import org.elasticsearch.common.transport.TransportAddress;
import org.elasticsearch.common.unit.TimeValue;
import org.elasticsearch.common.util.concurrent.EsExecutors;
import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.common.xcontent.XContentBuilderString;
import org.elasticsearch.common.xcontent.XContentFactory;
import org.elasticsearch.env.Environment;
import org.elasticsearch.gateway.GatewayService;
import org.elasticsearch.plugins.PluginsService;
import org.elasticsearch.rest.RestRequest;
import org.elasticsearch.shield.ShieldPlugin;
import org.elasticsearch.shield.User;
import org.elasticsearch.shield.audit.AuditTrail;
import org.elasticsearch.shield.authc.AuthenticationService;
import org.elasticsearch.shield.authc.AuthenticationToken;
import org.elasticsearch.shield.authz.Privilege;
import org.elasticsearch.shield.rest.RemoteHostHeader;
import org.elasticsearch.shield.transport.filter.ShieldIpFilterRule;
import org.elasticsearch.transport.TransportMessage;
import org.elasticsearch.transport.TransportRequest;
import org.joda.time.DateTime;
import org.joda.time.DateTimeZone;
import java.io.IOException;
import java.net.InetAddress;
import java.net.InetSocketAddress;
import java.net.UnknownHostException;
import java.util.*;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.atomic.AtomicReference;
import static org.elasticsearch.shield.audit.AuditUtil.indices;
import static org.elasticsearch.shield.audit.AuditUtil.restRequestContent;
import static org.elasticsearch.shield.audit.index.IndexAuditLevel.*;
import static org.elasticsearch.shield.audit.index.IndexNameResolver.resolve;
/**
* Audit trail implementation that writes events into an index.
*/
public class IndexAuditTrail extends AbstractComponent implements AuditTrail {
public static final int DEFAULT_BULK_SIZE = 1000;
public static final int MAX_BULK_SIZE = 10000;
public static final int DEFAULT_MAX_QUEUE_SIZE = 1000;
public static final TimeValue DEFAULT_FLUSH_INTERVAL = TimeValue.timeValueSeconds(1);
public static final IndexNameResolver.Rollover DEFAULT_ROLLOVER = IndexNameResolver.Rollover.DAILY;
public static final String NAME = "index";
public static final String INDEX_NAME_PREFIX = ".shield_audit_log";
public static final String DOC_TYPE = "event";
public static final String ROLLOVER_SETTING = "shield.audit.index.rollover";
public static final String QUEUE_SIZE_SETTING = "shield.audit.index.queue_max_size";
public static final String INDEX_TEMPLATE_NAME = "shield_audit_log";
public static final String DEFAULT_CLIENT_NAME = "shield-audit-client";
static final String[] DEFAULT_EVENT_INCLUDES = new String[] {
ACCESS_DENIED.toString(),
ACCESS_GRANTED.toString(),
ANONYMOUS_ACCESS_DENIED.toString(),
AUTHENTICATION_FAILED.toString(),
CONNECTION_DENIED.toString(),
CONNECTION_GRANTED.toString(),
TAMPERED_REQUEST.toString()
};
private static final ImmutableSet<String> forbiddenIndexSettings = ImmutableSet.of("index.mapper.dynamic");
private final AtomicReference<State> state = new AtomicReference<>(State.INITIALIZED);
private final String nodeName;
private final IndexAuditUserHolder auditUser;
private final Provider<Client> clientProvider;
private final AuthenticationService authenticationService;
private final Environment environment;
private final LinkedBlockingQueue<Message> eventQueue;
private final QueueConsumer queueConsumer;
private BulkProcessor bulkProcessor;
private Client client;
private boolean indexToRemoteCluster;
private IndexNameResolver.Rollover rollover;
private String nodeHostName;
private String nodeHostAddress;
private EnumSet<IndexAuditLevel> events;
@Override
public String name() {
return NAME;
}
@Inject
public IndexAuditTrail(Settings settings, IndexAuditUserHolder indexingAuditUser,
Environment environment, AuthenticationService authenticationService,
Provider<Client> clientProvider) {
super(settings);
this.auditUser = indexingAuditUser;
this.authenticationService = authenticationService;
this.clientProvider = clientProvider;
this.environment = environment;
this.nodeName = settings.get("name");
this.queueConsumer = new QueueConsumer(EsExecutors.threadName(settings, "audit-queue-consumer"));
int maxQueueSize = settings.getAsInt(QUEUE_SIZE_SETTING, DEFAULT_MAX_QUEUE_SIZE);
if (maxQueueSize <= 0) {
logger.warn("invalid value [{}] for setting [{}]. using default value [{}]", maxQueueSize, QUEUE_SIZE_SETTING, DEFAULT_MAX_QUEUE_SIZE);
maxQueueSize = DEFAULT_MAX_QUEUE_SIZE;
}
this.eventQueue = new LinkedBlockingQueue<>(maxQueueSize);
// we have to initialize this here since we use rollover in determining if we can start...
try {
rollover = IndexNameResolver.Rollover.valueOf(
settings.get(ROLLOVER_SETTING, DEFAULT_ROLLOVER.name()).toUpperCase(Locale.ENGLISH));
} catch (IllegalArgumentException e) {
logger.warn("invalid value for setting [shield.audit.index.rollover]; falling back to default [{}]",
DEFAULT_ROLLOVER.name());
rollover = DEFAULT_ROLLOVER;
}
// we have to initialize the events here since we can receive events before starting...
String[] includedEvents = settings.getAsArray("shield.audit.index.events.include", DEFAULT_EVENT_INCLUDES);
String[] excludedEvents = settings.getAsArray("shield.audit.index.events.exclude");
try {
events = parse(includedEvents, excludedEvents);
} catch (IllegalArgumentException e) {
logger.warn("invalid event type specified, using default for audit index output. include events [{}], exclude events [{}]", e, includedEvents, excludedEvents);
events = parse(DEFAULT_EVENT_INCLUDES, Strings.EMPTY_ARRAY);
}
}
public State state() {
return state.get();
}
/**
* This method determines if this service can be started based on the state in the {@link ClusterChangedEvent} and
* if the node is the master or not. In order for the service to start, the following must be true:
*
* <ol>
* <li>The cluster must not have a {@link GatewayService#STATE_NOT_RECOVERED_BLOCK}; in other words the gateway
* must have recovered from disk already.</li>
* <li>The current node must be the master OR the <code>shield_audit_log</code> index template must exist</li>
* <li>The current audit index must not exist or have all primary shards active. The current audit index name
* is determined by the rollover settings and current time</li>
* </ol>
*
* @param event the {@link ClusterChangedEvent} containing the up to date cluster state
* @param master flag indicating if the current node is the master
* @return true if all requirements are met and the service can be started
*/
public boolean canStart(ClusterChangedEvent event, boolean master) {
if (event.state().blocks().hasGlobalBlock(GatewayService.STATE_NOT_RECOVERED_BLOCK)) {
// wait until the gateway has recovered from disk; otherwise we may decide the audit indices do not exist
// when in fact they simply have not been restored from the cluster state on disk yet
logger.debug("index audit trail waiting until gateway has recovered from disk");
return false;
}
final ClusterState clusterState = event.state();
if (!master && clusterState.metaData().templates().get(INDEX_TEMPLATE_NAME) == null) {
logger.debug("shield audit index template [{}] does not exist, so service cannot start", INDEX_TEMPLATE_NAME);
return false;
}
String index = resolve(INDEX_NAME_PREFIX, DateTime.now(DateTimeZone.UTC), rollover);
IndexMetaData metaData = clusterState.metaData().index(index);
if (metaData == null) {
logger.debug("shield audit index [{}] does not exist, so service can start", index);
return true;
}
if (clusterState.routingTable().index(index).allPrimaryShardsActive()) {
logger.debug("shield audit index [{}] all primary shards started, so service can start", index);
return true;
}
logger.debug("shield audit index [{}] does not have all primary shards started, so service cannot start", index);
return false;
}
/**
* Starts the service. The state is moved to {@link org.elasticsearch.shield.audit.index.IndexAuditTrail.State#STARTING}
* at the beginning of the method. The service's components are initialized and if the current node is the master, the index
* template will be stored. The state is moved {@link org.elasticsearch.shield.audit.index.IndexAuditTrail.State#STARTED}
* and before returning the queue of messages that came before the service started is drained.
*
* @param master flag indicating if the current node is master
*/
public void start(boolean master) {
if (state.compareAndSet(State.INITIALIZED, State.STARTING)) {
String hostname = "n/a";
String hostaddr = "n/a";
try {
hostname = InetAddress.getLocalHost().getHostName();
hostaddr = InetAddress.getLocalHost().getHostAddress();
} catch (UnknownHostException e) {
logger.warn("unable to resolve local host name", e);
}
this.nodeHostName = hostname;
this.nodeHostAddress = hostaddr;
initializeClient();
if (master) {
putTemplate(customAuditIndexSettings(settings));
}
initializeBulkProcessor();
queueConsumer.start();
state.set(State.STARTED);
}
}
public void stop() {
if (state.compareAndSet(State.STARTED, State.STOPPING)) {
try {
queueConsumer.interrupt();
if (bulkProcessor != null) {
bulkProcessor.flush();
}
} finally {
state.set(State.STOPPED);
}
}
}
public void close() {
if (state.get() != State.STOPPED) {
stop();
}
try {
if (bulkProcessor != null) {
bulkProcessor.close();
}
} finally {
if (indexToRemoteCluster) {
if (client != null) {
client.close();
}
}
}
}
@Override
public void anonymousAccessDenied(String action, TransportMessage<?> message) {
if (events.contains(ANONYMOUS_ACCESS_DENIED)) {
try {
enqueue(message("anonymous_access_denied", action, null, null, indices(message), message));
} catch (Exception e) {
logger.warn("failed to index audit event: [anonymous_access_denied]", e);
}
}
}
@Override
public void anonymousAccessDenied(RestRequest request) {
if (events.contains(ANONYMOUS_ACCESS_DENIED)) {
try {
enqueue(message("anonymous_access_denied", null, null, null, null, request));
} catch (Exception e) {
logger.warn("failed to index audit event: [anonymous_access_denied]", e);
}
}
}
@Override
public void authenticationFailed(String action, TransportMessage<?> message) {
if (events.contains(AUTHENTICATION_FAILED)) {
try {
enqueue(message("authentication_failed", action, null, null, indices(message), message));
} catch (Exception e) {
logger.warn("failed to index audit event: [authentication_failed]", e);
}
}
}
@Override
public void authenticationFailed(RestRequest request) {
if (events.contains(AUTHENTICATION_FAILED)) {
try {
enqueue(message("authentication_failed", null, null, null, null, request));
} catch (Exception e) {
logger.warn("failed to index audit event: [authentication_failed]", e);
}
}
}
@Override
public void authenticationFailed(AuthenticationToken token, String action, TransportMessage<?> message) {
if (events.contains(AUTHENTICATION_FAILED)) {
if (!principalIsAuditor(token.principal())) {
try {
enqueue(message("authentication_failed", action, token.principal(), null, indices(message), message));
} catch (Exception e) {
logger.warn("failed to index audit event: [authentication_failed]", e);
}
}
}
}
@Override
public void authenticationFailed(AuthenticationToken token, RestRequest request) {
if (events.contains(AUTHENTICATION_FAILED)) {
if (!principalIsAuditor(token.principal())) {
try {
enqueue(message("authentication_failed", null, token.principal(), null, null, request));
} catch (Exception e) {
logger.warn("failed to index audit event: [authentication_failed]", e);
}
}
}
}
@Override
public void authenticationFailed(String realm, AuthenticationToken token, String action, TransportMessage<?> message) {
if (events.contains(AUTHENTICATION_FAILED)) {
if (!principalIsAuditor(token.principal())) {
try {
enqueue(message("authentication_failed", action, token.principal(), realm, indices(message), message));
} catch (Exception e) {
logger.warn("failed to index audit event: [authentication_failed]", e);
}
}
}
}
@Override
public void authenticationFailed(String realm, AuthenticationToken token, RestRequest request) {
if (events.contains(AUTHENTICATION_FAILED)) {
if (!principalIsAuditor(token.principal())) {
try {
enqueue(message("authentication_failed", null, token.principal(), realm, null, request));
} catch (Exception e) {
logger.warn("failed to index audit event: [authentication_failed]", e);
}
}
}
}
@Override
public void accessGranted(User user, String action, TransportMessage<?> message) {
if (!principalIsAuditor(user.principal())) {
// special treatment for internal system actions - only log if explicitly told to
if (user.isSystem() && Privilege.SYSTEM.predicate().apply(action)) {
if (events.contains(SYSTEM_ACCESS_GRANTED)) {
try {
enqueue(message("access_granted", action, user.principal(), null, indices(message), message));
} catch (Exception e) {
logger.warn("failed to index audit event: [access_granted]", e);
}
}
} else if (events.contains(ACCESS_GRANTED)) {
try {
enqueue(message("access_granted", action, user.principal(), null, indices(message), message));
} catch (Exception e) {
logger.warn("failed to index audit event: [access_granted]", e);
}
}
}
}
@Override
public void accessDenied(User user, String action, TransportMessage<?> message) {
if (events.contains(ACCESS_DENIED)) {
if (!principalIsAuditor(user.principal())) {
try {
enqueue(message("access_denied", action, user.principal(), null, indices(message), message));
} catch (Exception e) {
logger.warn("failed to index audit event: [access_denied]", e);
}
}
}
}
@Override
public void tamperedRequest(User user, String action, TransportRequest request) {
if (events.contains(TAMPERED_REQUEST)) {
if (!principalIsAuditor(user.principal())) {
try {
enqueue(message("tampered_request", action, user.principal(), null, indices(request), request));
} catch (Exception e) {
logger.warn("failed to index audit event: [tampered_request]", e);
}
}
}
}
@Override
public void connectionGranted(InetAddress inetAddress, String profile, ShieldIpFilterRule rule) {
if (events.contains(CONNECTION_GRANTED)) {
try {
enqueue(message("ip_filter", "connection_granted", inetAddress, profile, rule));
} catch (Exception e) {
logger.warn("failed to index audit event: [connection_granted]", e);
}
}
}
@Override
public void connectionDenied(InetAddress inetAddress, String profile, ShieldIpFilterRule rule) {
if (events.contains(CONNECTION_DENIED)) {
try {
enqueue(message("ip_filter", "connection_denied", inetAddress, profile, rule));
} catch (Exception e) {
logger.warn("failed to index audit event: [connection_denied]", e);
}
}
}
private boolean principalIsAuditor(String principal) {
return principal.equals(auditUser.user().principal());
}
private Message message(String type, @Nullable String action, @Nullable String principal,
@Nullable String realm, @Nullable String[] indices, TransportMessage message) throws Exception {
Message msg = new Message().start();
common("transport", type, msg.builder);
originAttributes(message, msg.builder);
if (action != null) {
msg.builder.field(Field.ACTION, action);
}
if (principal != null) {
msg.builder.field(Field.PRINCIPAL, principal);
}
if (realm != null) {
msg.builder.field(Field.REALM, realm);
}
if (indices != null) {
msg.builder.array(Field.INDICES, indices);
}
if (logger.isDebugEnabled()) {
msg.builder.field(Field.REQUEST, message.getClass().getSimpleName());
}
return msg.end();
}
private Message message(String type, @Nullable String action, @Nullable String principal,
@Nullable String realm, @Nullable String[] indices, RestRequest request) throws Exception {
Message msg = new Message().start();
common("rest", type, msg.builder);
if (action != null) {
msg.builder.field(Field.ACTION, action);
}
if (principal != null) {
msg.builder.field(Field.PRINCIPAL, principal);
}
if (realm != null) {
msg.builder.field(Field.REALM, realm);
}
if (indices != null) {
msg.builder.array(Field.INDICES, indices);
}
msg.builder.field(Field.REQUEST_BODY, restRequestContent(request));
msg.builder.field(Field.ORIGIN_TYPE, "rest");
msg.builder.field(Field.ORIGIN_ADDRESS, request.getRemoteAddress());
msg.builder.field(Field.URI, request.uri());
return msg.end();
}
private Message message(String layer, String type, InetAddress originAddress, String profile,
ShieldIpFilterRule rule) throws IOException {
Message msg = new Message().start();
common(layer, type, msg.builder);
msg.builder.field(Field.ORIGIN_ADDRESS, originAddress.getHostAddress());
msg.builder.field(Field.TRANSPORT_PROFILE, profile);
msg.builder.field(Field.RULE, rule);
return msg.end();
}
private XContentBuilder common(String layer, String type, XContentBuilder builder) throws IOException {
builder.field(Field.NODE_NAME, nodeName);
builder.field(Field.NODE_HOST_NAME, nodeHostName);
builder.field(Field.NODE_HOST_ADDRESS, nodeHostAddress);
builder.field(Field.LAYER, layer);
builder.field(Field.TYPE, type);
return builder;
}
private static XContentBuilder originAttributes(TransportMessage message, XContentBuilder builder) throws IOException {
// first checking if the message originated in a rest call
InetSocketAddress restAddress = RemoteHostHeader.restRemoteAddress(message);
if (restAddress != null) {
builder.field(Field.ORIGIN_TYPE, "rest");
builder.field(Field.ORIGIN_ADDRESS, restAddress.getAddress().getHostAddress());
return builder;
}
// we'll see if was originated in a remote node
TransportAddress address = message.remoteAddress();
if (address != null) {
builder.field(Field.ORIGIN_TYPE, "transport");
if (address instanceof InetSocketTransportAddress) {
builder.field(Field.ORIGIN_ADDRESS, ((InetSocketTransportAddress) address).address().getAddress().getHostAddress());
} else {
builder.field(Field.ORIGIN_ADDRESS, address);
}
return builder;
}
// the call was originated locally on this node
builder.field(Field.ORIGIN_TYPE, "local_node");
builder.field(Field.ORIGIN_ADDRESS, NetworkUtils.getLocalHostAddress("_local"));
return builder;
}
void enqueue(Message message) {
State currentState = state();
if (currentState != State.STOPPING && currentState != State.STOPPED) {
boolean accepted = eventQueue.offer(message);
if (!accepted) {
throw new IllegalStateException("queue is full, bulk processor may have stopped indexing");
}
}
}
private void initializeClient() {
Settings clientSettings = settings.getByPrefix("shield.audit.index.client.");
if (clientSettings.names().size() == 0) {
// in the absence of client settings for remote indexing, fall back to the client that was passed in.
this.client = clientProvider.get();
indexToRemoteCluster = false;
} else {
String[] hosts = clientSettings.getAsArray("hosts");
if (hosts.length == 0) {
throw new ElasticsearchException("missing required setting " +
"[shield.audit.index.client.hosts] for remote audit log indexing");
}
if (clientSettings.get("cluster.name", "").isEmpty()) {
throw new ElasticsearchException("missing required setting " +
"[shield.audit.index.client.cluster.name] for remote audit log indexing");
}
List<Tuple<String, Integer>> hostPortPairs = new ArrayList<>();
for (String host : hosts) {
List<String> hostPort = Splitter.on(":").splitToList(host.trim());
if (hostPort.size() != 1 && hostPort.size() != 2) {
logger.warn("invalid host:port specified: [{}] for setting [shield.audit.index.client.hosts]", host);
continue;
}
hostPortPairs.add(new Tuple<>(hostPort.get(0), hostPort.size() == 2 ? Integer.valueOf(hostPort.get(1)) : 9300));
}
if (hostPortPairs.size() == 0) {
throw new ElasticsearchException("no valid host:port pairs specified for setting [shield.audit.index.client.hosts]");
}
final TransportClient transportClient = TransportClient.builder()
.settings(Settings.builder()
.put("name", DEFAULT_CLIENT_NAME)
.put("path.home", environment.homeFile())
.put(PluginsService.LOAD_PLUGIN_FROM_CLASSPATH, false)
.putArray("plugin.types", ShieldPlugin.class.getName())
.put(clientSettings))
.build();
for (Tuple<String, Integer> pair : hostPortPairs) {
transportClient.addTransportAddress(new InetSocketTransportAddress(pair.v1(), pair.v2()));
}
this.client = transportClient;
indexToRemoteCluster = true;
logger.info("forwarding audit events to remote cluster [{}] using hosts [{}]",
clientSettings.get("cluster.name", ""), hostPortPairs.toString());
}
}
Settings customAuditIndexSettings(Settings nodeSettings) {
Settings newSettings = Settings.builder()
.put(nodeSettings.getAsSettings("shield.audit.index.settings.index"))
.build();
if (newSettings.names().isEmpty()) {
return Settings.EMPTY;
}
// Filter out forbidden settings:
Settings.Builder builder = Settings.builder();
for (Map.Entry<String, String> entry : newSettings.getAsMap().entrySet()) {
String name = "index." + entry.getKey();
if (forbiddenIndexSettings.contains(name)) {
logger.warn("overriding the default [{}] setting is forbidden. ignoring...", name);
continue;
}
builder.put(name, entry.getValue());
}
return builder.build();
}
void putTemplate(Settings customSettings) {
try {
final byte[] template = Streams.copyToBytesFromClasspath("/" + INDEX_TEMPLATE_NAME + ".json");
PutIndexTemplateRequest request = new PutIndexTemplateRequest(INDEX_TEMPLATE_NAME).source(template);
if (customSettings != null && customSettings.names().size() > 0) {
Settings updatedSettings = Settings.builder()
.put(request.settings())
.put(customSettings)
.build();
request.settings(updatedSettings);
}
authenticationService.attachUserHeaderIfMissing(request, auditUser.user());
assert !Thread.currentThread().isInterrupted() : "current thread has been interrupted before putting index template!!!";
PutIndexTemplateResponse response = client.admin().indices().putTemplate(request).actionGet();
if (!response.isAcknowledged()) {
throw new IllegalStateException("failed to put index template for audit logging");
}
} catch (Exception e) {
logger.debug("unexpected exception while putting index template", e);
throw new IllegalStateException("failed to load [" + INDEX_TEMPLATE_NAME + ".json]", e);
}
}
private void initializeBulkProcessor() {
int bulkSize = Math.min(settings.getAsInt("shield.audit.index.bulk_size", DEFAULT_BULK_SIZE), MAX_BULK_SIZE);
bulkSize = (bulkSize < 1) ? DEFAULT_BULK_SIZE : bulkSize;
TimeValue interval = settings.getAsTime("shield.audit.index.flush_interval", DEFAULT_FLUSH_INTERVAL);
interval = (interval.millis() < 1) ? DEFAULT_FLUSH_INTERVAL : interval;
bulkProcessor = BulkProcessor.builder(client, new BulkProcessor.Listener() {
@Override
public void beforeBulk(long executionId, BulkRequest request) {
authenticationService.attachUserHeaderIfMissing(request, auditUser.user());
}
@Override
public void afterBulk(long executionId, BulkRequest request, BulkResponse response) {
if (response.hasFailures()) {
logger.info("failed to bulk index audit events: [{}]", response.buildFailureMessage());
}
}
@Override
public void afterBulk(long executionId, BulkRequest request, Throwable failure) {
logger.error("failed to bulk index audit events: [{}]", failure, failure.getMessage());
}
}).setBulkActions(bulkSize)
.setFlushInterval(interval)
.setConcurrentRequests(1)
.build();
}
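The bulk-size handling above caps the configured value at `MAX_BULK_SIZE` and falls back to the default for non-positive values. A minimal standalone restatement of that clamping (class name, method name, and the constants below are illustrative, not part of Shield):

```java
// Standalone sketch of the clamping performed in initializeBulkProcessor().
public class BulkSizeClamp {

    // Cap the configured size at the maximum, then fall back to the
    // default when the result is not a positive number.
    static int clampBulkSize(int configured, int defaultSize, int maxSize) {
        int size = Math.min(configured, maxSize);
        return (size < 1) ? defaultSize : size;
    }

    public static void main(String[] args) {
        // above the cap -> reduced to the cap
        if (clampBulkSize(50_000, 1_000, 10_000) != 10_000) throw new AssertionError();
        // non-positive -> default
        if (clampBulkSize(0, 1_000, 10_000) != 1_000) throw new AssertionError();
        // in range -> unchanged
        if (clampBulkSize(500, 1_000, 10_000) != 500) throw new AssertionError();
        System.out.println("ok");
    }
}
```

The same clamp is applied to the flush interval: values under one millisecond fall back to `DEFAULT_FLUSH_INTERVAL`.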
private class QueueConsumer extends Thread {
volatile boolean running = true;
QueueConsumer(String name) {
super(name);
setDaemon(true);
}
@Override
public void run() {
while (running) {
try {
Message message = eventQueue.take();
IndexRequest indexRequest = client.prepareIndex()
.setIndex(resolve(INDEX_NAME_PREFIX, message.timestamp, rollover))
.setType(DOC_TYPE).setSource(message.builder).request();
authenticationService.attachUserHeaderIfMissing(indexRequest, auditUser.user());
bulkProcessor.add(indexRequest);
} catch (InterruptedException e) {
logger.debug("index audit queue consumer interrupted", e);
running = false;
return;
} catch (Exception e) {
// log the exception and keep going
logger.warn("failed to index audit message from queue", e);
}
}
}
}
static class Message {
final DateTime timestamp;
final XContentBuilder builder;
Message() throws IOException {
this.timestamp = DateTime.now(DateTimeZone.UTC);
this.builder = XContentFactory.jsonBuilder();
}
Message start() throws IOException {
builder.startObject();
builder.field(Field.TIMESTAMP, timestamp);
return this;
}
Message end() throws IOException {
builder.endObject();
return this;
}
}
interface Field {
XContentBuilderString TIMESTAMP = new XContentBuilderString("@timestamp");
XContentBuilderString NODE_NAME = new XContentBuilderString("node_name");
XContentBuilderString NODE_HOST_NAME = new XContentBuilderString("node_host_name");
XContentBuilderString NODE_HOST_ADDRESS = new XContentBuilderString("node_host_address");
XContentBuilderString LAYER = new XContentBuilderString("layer");
XContentBuilderString TYPE = new XContentBuilderString("event_type");
XContentBuilderString ORIGIN_ADDRESS = new XContentBuilderString("origin_address");
XContentBuilderString ORIGIN_TYPE = new XContentBuilderString("origin_type");
XContentBuilderString PRINCIPAL = new XContentBuilderString("principal");
XContentBuilderString ACTION = new XContentBuilderString("action");
XContentBuilderString INDICES = new XContentBuilderString("indices");
XContentBuilderString REQUEST = new XContentBuilderString("request");
XContentBuilderString REQUEST_BODY = new XContentBuilderString("request_body");
XContentBuilderString URI = new XContentBuilderString("uri");
XContentBuilderString REALM = new XContentBuilderString("realm");
XContentBuilderString TRANSPORT_PROFILE = new XContentBuilderString("transport_profile");
XContentBuilderString RULE = new XContentBuilderString("rule");
}
public enum State {
INITIALIZED,
STARTING,
STARTED,
STOPPING,
STOPPED
}
}
@@ -0,0 +1,49 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License;
* you may not use this file except in compliance with the Elastic License.
*/
package org.elasticsearch.shield.audit.index;
import org.elasticsearch.action.admin.indices.template.put.PutIndexTemplateAction;
import org.elasticsearch.action.bulk.BulkAction;
import org.elasticsearch.shield.User;
import org.elasticsearch.shield.authz.Permission;
import org.elasticsearch.shield.authz.Privilege;
/**
*
*/
public class IndexAuditUserHolder {
private static final String NAME = "__indexing_audit_user";
private static final String[] ROLE_NAMES = new String[] { "__indexing_audit_role" };
private final User user;
private final Permission.Global.Role role;
public IndexAuditUserHolder(String indexName) {
// append the '*' wildcard to the index name so that the principal can write to
// any index that starts with the given name. this allows us to roll over
// audit indices hourly, daily, weekly, etc.
String indexPattern = indexName + "*";
this.role = Permission.Global.Role.builder(ROLE_NAMES[0])
.set(Privilege.Cluster.action(PutIndexTemplateAction.NAME))
.add(Privilege.Index.CREATE_INDEX, indexPattern)
.add(Privilege.Index.INDEX, indexPattern)
.add(Privilege.Index.action(BulkAction.NAME), indexPattern)
.build();
this.user = new User.Simple(NAME, ROLE_NAMES);
}
public User user() {
return user;
}
public Permission.Global.Role role() {
return role;
}
}
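The `indexName + "*"` pattern built above is what lets a single role cover every rolled-over audit index. A minimal sketch of the idea (the `matches` helper and the index names are illustrative; the real check is performed by Shield's index privileges, not by this helper):

```java
// Simplified stand-in for the privilege pattern check: a prefix plus "*"
// matches any index whose name starts with that prefix.
public class WildcardPatternDemo {

    static boolean matches(String pattern, String indexName) {
        if (!pattern.endsWith("*")) {
            return pattern.equals(indexName);
        }
        return indexName.startsWith(pattern.substring(0, pattern.length() - 1));
    }

    public static void main(String[] args) {
        String pattern = ".shield_audit_log" + "*";
        // date-suffixed rollover indices all match the single pattern
        if (!matches(pattern, ".shield_audit_log-2015.07.13")) throw new AssertionError();
        if (!matches(pattern, ".shield_audit_log-2015.07")) throw new AssertionError();
        // unrelated indices do not
        if (matches(pattern, "other_index")) throw new AssertionError();
        System.out.println("ok");
    }
}
```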
@@ -0,0 +1,40 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License;
* you may not use this file except in compliance with the Elastic License.
*/
package org.elasticsearch.shield.audit.index;
import org.joda.time.DateTime;
import org.joda.time.format.DateTimeFormat;
import org.joda.time.format.DateTimeFormatter;
public class IndexNameResolver {
public enum Rollover {
HOURLY ("-yyyy.MM.dd.HH"),
DAILY ("-yyyy.MM.dd"),
WEEKLY ("-yyyy.w"),
MONTHLY ("-yyyy.MM");
private final DateTimeFormatter formatter;
Rollover(String format) {
this.formatter = DateTimeFormat.forPattern(format);
}
DateTimeFormatter formatter() {
return formatter;
}
}
private IndexNameResolver() {}
public static String resolve(DateTime timestamp, Rollover rollover) {
return rollover.formatter().print(timestamp);
}
public static String resolve(String indexNamePrefix, DateTime timestamp, Rollover rollover) {
return indexNamePrefix + resolve(timestamp, rollover);
}
}
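To illustrate `IndexNameResolver`, the sketch below reproduces the suffix formatting with `java.time` instead of the Joda-Time the class above uses; the prefix `"audit_log"` is just an example. Only the HOURLY, DAILY, and MONTHLY patterns are shown, since those pattern letters behave the same in both libraries (the weekly `w` field differs between them):

```java
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;

// Illustrative rewrite of IndexNameResolver.resolve using java.time.
public class RolloverDemo {

    static String resolve(String prefix, LocalDateTime ts, String pattern) {
        return prefix + DateTimeFormatter.ofPattern(pattern).format(ts);
    }

    public static void main(String[] args) {
        LocalDateTime ts = LocalDateTime.of(2015, 7, 13, 12, 32);
        // HOURLY
        System.out.println(resolve("audit_log", ts, "-yyyy.MM.dd.HH")); // audit_log-2015.07.13.12
        // DAILY
        System.out.println(resolve("audit_log", ts, "-yyyy.MM.dd"));    // audit_log-2015.07.13
        // MONTHLY
        System.out.println(resolve("audit_log", ts, "-yyyy.MM"));       // audit_log-2015.07
    }
}
```

Each suffix begins with `-`, so the resolved names are exactly what the audit role's `indexName + "*"` pattern covers.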
@@ -0,0 +1,305 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License;
* you may not use this file except in compliance with the Elastic License.
*/
package org.elasticsearch.shield.audit.logfile;
import org.elasticsearch.common.inject.Inject;
import org.elasticsearch.common.logging.ESLogger;
import org.elasticsearch.common.logging.Loggers;
import org.elasticsearch.common.network.NetworkUtils;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.transport.InetSocketTransportAddress;
import org.elasticsearch.common.transport.TransportAddress;
import org.elasticsearch.rest.RestRequest;
import org.elasticsearch.shield.User;
import org.elasticsearch.shield.audit.AuditTrail;
import org.elasticsearch.shield.authc.AuthenticationToken;
import org.elasticsearch.shield.authz.Privilege;
import org.elasticsearch.shield.rest.RemoteHostHeader;
import org.elasticsearch.shield.transport.filter.ShieldIpFilterRule;
import org.elasticsearch.transport.TransportMessage;
import org.elasticsearch.transport.TransportRequest;
import java.net.InetAddress;
import java.net.InetSocketAddress;
import java.net.UnknownHostException;
import static org.elasticsearch.common.Strings.arrayToCommaDelimitedString;
import static org.elasticsearch.shield.audit.AuditUtil.indices;
import static org.elasticsearch.shield.audit.AuditUtil.restRequestContent;
/**
*
*/
public class LoggingAuditTrail implements AuditTrail {
public static final String NAME = "logfile";
private final String prefix;
private final ESLogger logger;
@Override
public String name() {
return NAME;
}
@Inject
public LoggingAuditTrail(Settings settings) {
this(resolvePrefix(settings), Loggers.getLogger(LoggingAuditTrail.class));
}
LoggingAuditTrail(Settings settings, ESLogger logger) {
this(resolvePrefix(settings), logger);
}
LoggingAuditTrail(String prefix, ESLogger logger) {
this.logger = logger;
this.prefix = prefix;
}
@Override
public void anonymousAccessDenied(String action, TransportMessage<?> message) {
String indices = indicesString(message);
if (indices != null) {
if (logger.isDebugEnabled()) {
logger.debug("{}[transport] [anonymous_access_denied]\t{}, action=[{}], indices=[{}], request=[{}]", prefix, originAttributes(message), action, indices, message.getClass().getSimpleName());
} else {
logger.warn("{}[transport] [anonymous_access_denied]\t{}, action=[{}], indices=[{}]", prefix, originAttributes(message), action, indices);
}
} else {
if (logger.isDebugEnabled()) {
logger.debug("{}[transport] [anonymous_access_denied]\t{}, action=[{}], request=[{}]", prefix, originAttributes(message), action, message.getClass().getSimpleName());
} else {
logger.warn("{}[transport] [anonymous_access_denied]\t{}, action=[{}]", prefix, originAttributes(message), action);
}
}
}
@Override
public void anonymousAccessDenied(RestRequest request) {
if (logger.isDebugEnabled()) {
logger.debug("{}[rest] [anonymous_access_denied]\t{}, uri=[{}], request_body=[{}]", prefix, hostAttributes(request), request.uri(), restRequestContent(request));
} else {
logger.warn("{}[rest] [anonymous_access_denied]\t{}, uri=[{}]", prefix, hostAttributes(request), request.uri());
}
}
@Override
public void authenticationFailed(AuthenticationToken token, String action, TransportMessage<?> message) {
String indices = indicesString(message);
if (indices != null) {
if (logger.isDebugEnabled()) {
logger.debug("{}[transport] [authentication_failed]\t{}, principal=[{}], action=[{}], indices=[{}], request=[{}]", prefix, originAttributes(message), token.principal(), action, indices, message.getClass().getSimpleName());
} else {
logger.error("{}[transport] [authentication_failed]\t{}, principal=[{}], action=[{}], indices=[{}]", prefix, originAttributes(message), token.principal(), action, indices);
}
} else {
if (logger.isDebugEnabled()) {
logger.debug("{}[transport] [authentication_failed]\t{}, principal=[{}], action=[{}], request=[{}]", prefix, originAttributes(message), token.principal(), action, message.getClass().getSimpleName());
} else {
logger.error("{}[transport] [authentication_failed]\t{}, principal=[{}], action=[{}]", prefix, originAttributes(message), token.principal(), action);
}
}
}
@Override
public void authenticationFailed(RestRequest request) {
if (logger.isDebugEnabled()) {
logger.debug("{}[rest] [authentication_failed]\t{}, uri=[{}], request_body=[{}]", prefix, hostAttributes(request), request.uri(), restRequestContent(request));
} else {
logger.error("{}[rest] [authentication_failed]\t{}, uri=[{}]", prefix, hostAttributes(request), request.uri());
}
}
@Override
public void authenticationFailed(String action, TransportMessage<?> message) {
String indices = indicesString(message);
if (indices != null) {
if (logger.isDebugEnabled()) {
logger.debug("{}[transport] [authentication_failed]\t{}, action=[{}], indices=[{}], request=[{}]", prefix, originAttributes(message), action, indices, message.getClass().getSimpleName());
} else {
logger.error("{}[transport] [authentication_failed]\t{}, action=[{}], indices=[{}]", prefix, originAttributes(message), action, indices);
}
} else {
if (logger.isDebugEnabled()) {
logger.debug("{}[transport] [authentication_failed]\t{}, action=[{}], request=[{}]", prefix, originAttributes(message), action, message.getClass().getSimpleName());
} else {
logger.error("{}[transport] [authentication_failed]\t{}, action=[{}]", prefix, originAttributes(message), action);
}
}
}
@Override
public void authenticationFailed(AuthenticationToken token, RestRequest request) {
if (logger.isDebugEnabled()) {
logger.debug("{}[rest] [authentication_failed]\t{}, principal=[{}], uri=[{}], request_body=[{}]", prefix, hostAttributes(request), token.principal(), request.uri(), restRequestContent(request));
} else {
logger.error("{}[rest] [authentication_failed]\t{}, principal=[{}], uri=[{}]", prefix, hostAttributes(request), token.principal(), request.uri());
}
}
@Override
public void authenticationFailed(String realm, AuthenticationToken token, String action, TransportMessage<?> message) {
if (logger.isTraceEnabled()) {
String indices = indicesString(message);
if (indices != null) {
logger.trace("{}[transport] [authentication_failed]\trealm=[{}], {}, principal=[{}], action=[{}], indices=[{}], request=[{}]", prefix, realm, originAttributes(message), token.principal(), action, indices, message.getClass().getSimpleName());
} else {
logger.trace("{}[transport] [authentication_failed]\trealm=[{}], {}, principal=[{}], action=[{}], request=[{}]", prefix, realm, originAttributes(message), token.principal(), action, message.getClass().getSimpleName());
}
}
}
@Override
public void authenticationFailed(String realm, AuthenticationToken token, RestRequest request) {
if (logger.isTraceEnabled()) {
logger.trace("{}[rest] [authentication_failed]\trealm=[{}], {}, principal=[{}], uri=[{}], request_body=[{}]", prefix, realm, hostAttributes(request), token.principal(), request.uri(), restRequestContent(request));
}
}
@Override
public void accessGranted(User user, String action, TransportMessage<?> message) {
String indices = indicesString(message);
// special treatment for internal system actions - only log on trace
if (user.isSystem() && Privilege.SYSTEM.predicate().apply(action)) {
if (logger.isTraceEnabled()) {
if (indices != null) {
logger.trace("{}[transport] [access_granted]\t{}, principal=[{}], action=[{}], indices=[{}], request=[{}]", prefix, originAttributes(message), user.principal(), action, indices, message.getClass().getSimpleName());
} else {
logger.trace("{}[transport] [access_granted]\t{}, principal=[{}], action=[{}], request=[{}]", prefix, originAttributes(message), user.principal(), action, message.getClass().getSimpleName());
}
}
return;
}
if (indices != null) {
if (logger.isDebugEnabled()) {
logger.debug("{}[transport] [access_granted]\t{}, principal=[{}], action=[{}], indices=[{}], request=[{}]", prefix, originAttributes(message), user.principal(), action, indices, message.getClass().getSimpleName());
} else {
logger.info("{}[transport] [access_granted]\t{}, principal=[{}], action=[{}], indices=[{}]", prefix, originAttributes(message), user.principal(), action, indices);
}
} else {
if (logger.isDebugEnabled()) {
logger.debug("{}[transport] [access_granted]\t{}, principal=[{}], action=[{}], request=[{}]", prefix, originAttributes(message), user.principal(), action, message.getClass().getSimpleName());
} else {
logger.info("{}[transport] [access_granted]\t{}, principal=[{}], action=[{}]", prefix, originAttributes(message), user.principal(), action);
}
}
}
@Override
public void accessDenied(User user, String action, TransportMessage<?> message) {
String indices = indicesString(message);
if (indices != null) {
if (logger.isDebugEnabled()) {
logger.debug("{}[transport] [access_denied]\t{}, principal=[{}], action=[{}], indices=[{}], request=[{}]", prefix, originAttributes(message), user.principal(), action, indices, message.getClass().getSimpleName());
} else {
logger.error("{}[transport] [access_denied]\t{}, principal=[{}], action=[{}], indices=[{}]", prefix, originAttributes(message), user.principal(), action, indices);
}
} else {
if (logger.isDebugEnabled()) {
logger.debug("{}[transport] [access_denied]\t{}, principal=[{}], action=[{}], request=[{}]", prefix, originAttributes(message), user.principal(), action, message.getClass().getSimpleName());
} else {
logger.error("{}[transport] [access_denied]\t{}, principal=[{}], action=[{}]", prefix, originAttributes(message), user.principal(), action);
}
}
}
@Override
public void tamperedRequest(User user, String action, TransportRequest request) {
String indices = indicesString(request);
if (indices != null) {
if (logger.isDebugEnabled()) {
logger.debug("{}[transport] [tampered_request]\t{}, principal=[{}], action=[{}], indices=[{}], request=[{}]", prefix, request.remoteAddress(), user.principal(), action, indices, request.getClass().getSimpleName());
} else {
logger.error("{}[transport] [tampered_request]\t{}, principal=[{}], action=[{}], indices=[{}]", prefix, request.remoteAddress(), user.principal(), action, indices);
}
} else {
if (logger.isDebugEnabled()) {
logger.debug("{}[transport] [tampered_request]\t{}, principal=[{}], action=[{}], request=[{}]", prefix, request.remoteAddress(), user.principal(), action, request.getClass().getSimpleName());
} else {
logger.error("{}[transport] [tampered_request]\t{}, principal=[{}], action=[{}]", prefix, request.remoteAddress(), user.principal(), action);
}
}
}
@Override
public void connectionGranted(InetAddress inetAddress, String profile, ShieldIpFilterRule rule) {
if (logger.isTraceEnabled()) {
logger.trace("{}[ip_filter] [connection_granted]\torigin_address=[{}], transport_profile=[{}], rule=[{}]", prefix, inetAddress.getHostAddress(), profile, rule);
}
}
@Override
public void connectionDenied(InetAddress inetAddress, String profile, ShieldIpFilterRule rule) {
logger.error("{}[ip_filter] [connection_denied]\torigin_address=[{}], transport_profile=[{}], rule=[{}]", prefix, inetAddress.getHostAddress(), profile, rule);
}
private static String hostAttributes(RestRequest request) {
return "origin_address=[" + request.getRemoteAddress() + "]";
}
static String originAttributes(TransportMessage message) {
StringBuilder builder = new StringBuilder();
// first, check whether the message originated from a rest call
InetSocketAddress restAddress = RemoteHostHeader.restRemoteAddress(message);
if (restAddress != null) {
builder.append("origin_type=[rest], origin_address=[").append(restAddress).append("]");
return builder.toString();
}
// next, check whether it originated from a remote node
TransportAddress address = message.remoteAddress();
if (address != null) {
builder.append("origin_type=[transport], ");
if (address instanceof InetSocketTransportAddress) {
builder.append("origin_address=[").append(((InetSocketTransportAddress) address).address()).append("]");
} else {
builder.append("origin_address=[").append(address).append("]");
}
return builder.toString();
}
// otherwise, the call originated locally on this node
return builder.append("origin_type=[local_node], origin_address=[")
.append(NetworkUtils.getLocalHostAddress("_local"))
.append("]")
.toString();
}
static String resolvePrefix(Settings settings) {
StringBuilder builder = new StringBuilder();
if (settings.getAsBoolean("shield.audit.logfile.prefix.emit_node_host_address", false)) {
try {
String address = InetAddress.getLocalHost().getHostAddress();
builder.append("[").append(address).append("] ");
} catch (UnknownHostException e) {
// ignore
}
}
if (settings.getAsBoolean("shield.audit.logfile.prefix.emit_node_host_name", false)) {
try {
String hostName = InetAddress.getLocalHost().getHostName();
builder.append("[").append(hostName).append("] ");
} catch (UnknownHostException e) {
// ignore
}
}
if (settings.getAsBoolean("shield.audit.logfile.prefix.emit_node_name", true)) {
String name = settings.get("name");
if (name != null) {
builder.append("[").append(name).append("] ");
}
}
return builder.toString();
}
static String indicesString(TransportMessage<?> message) {
String[] indices = indices(message);
return indices == null ? null : arrayToCommaDelimitedString(indices);
}
}
@@ -0,0 +1,56 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License;
* you may not use this file except in compliance with the Elastic License.
*/
package org.elasticsearch.shield.authc;
import org.elasticsearch.common.Nullable;
import org.elasticsearch.common.inject.Inject;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.shield.User;
public class AnonymousService {
public static final String SETTING_AUTHORIZATION_EXCEPTION_ENABLED = "shield.authc.anonymous.authz_exception";
static final String ANONYMOUS_USERNAME = "_es_anonymous_user";
@Nullable
private final User anonymousUser;
private final boolean authzExceptionEnabled;
@Inject
public AnonymousService(Settings settings) {
anonymousUser = resolveAnonymousUser(settings);
authzExceptionEnabled = settings.getAsBoolean(SETTING_AUTHORIZATION_EXCEPTION_ENABLED, true);
}
public boolean enabled() {
return anonymousUser != null;
}
public boolean isAnonymous(User user) {
if (enabled()) {
return anonymousUser.equals(user);
}
return false;
}
public User anonymousUser() {
return anonymousUser;
}
public boolean authorizationExceptionsEnabled() {
return authzExceptionEnabled;
}
static User resolveAnonymousUser(Settings settings) {
String[] roles = settings.getAsArray("shield.authc.anonymous.roles", null);
if (roles == null) {
return null;
}
String username = settings.get("shield.authc.anonymous.username", ANONYMOUS_USERNAME);
return new User.Simple(username, roles);
}
}
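The resolution logic in `AnonymousService` boils down to: no configured roles means anonymous access is disabled, and the username defaults to `_es_anonymous_user` when roles are set but no name is given. A standalone sketch of that decision (class and method names are illustrative):

```java
// Mirrors the shape of AnonymousService.resolveAnonymousUser without
// the Settings and User types.
public class AnonymousResolutionDemo {

    static String resolveUsername(String[] roles, String configuredName) {
        if (roles == null) {
            return null; // anonymous access disabled
        }
        return configuredName != null ? configuredName : "_es_anonymous_user";
    }

    public static void main(String[] args) {
        // no roles -> no anonymous user
        if (resolveUsername(null, null) != null) throw new AssertionError();
        // roles but no name -> default username
        if (!"_es_anonymous_user".equals(resolveUsername(new String[]{"guest"}, null))) throw new AssertionError();
        // explicit name wins
        if (!"anon".equals(resolveUsername(new String[]{"guest"}, "anon"))) throw new AssertionError();
        System.out.println("ok");
    }
}
```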
@@ -0,0 +1,37 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License;
* you may not use this file except in compliance with the Elastic License.
*/
package org.elasticsearch.shield.authc;
import org.elasticsearch.common.inject.multibindings.MapBinder;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.shield.authc.activedirectory.ActiveDirectoryRealm;
import org.elasticsearch.shield.authc.esusers.ESUsersRealm;
import org.elasticsearch.shield.authc.ldap.LdapRealm;
import org.elasticsearch.shield.authc.pki.PkiRealm;
import org.elasticsearch.shield.support.AbstractShieldModule;
/**
*
*/
public class AuthenticationModule extends AbstractShieldModule.Node {
public AuthenticationModule(Settings settings) {
super(settings);
}
@Override
protected void configureNode() {
MapBinder<String, Realm.Factory> mapBinder = MapBinder.newMapBinder(binder(), String.class, Realm.Factory.class);
mapBinder.addBinding(ESUsersRealm.TYPE).to(ESUsersRealm.Factory.class).asEagerSingleton();
mapBinder.addBinding(ActiveDirectoryRealm.TYPE).to(ActiveDirectoryRealm.Factory.class).asEagerSingleton();
mapBinder.addBinding(LdapRealm.TYPE).to(LdapRealm.Factory.class).asEagerSingleton();
mapBinder.addBinding(PkiRealm.TYPE).to(PkiRealm.Factory.class).asEagerSingleton();
bind(Realms.class).asEagerSingleton();
bind(AnonymousService.class).asEagerSingleton();
bind(AuthenticationService.class).to(InternalAuthenticationService.class).asEagerSingleton();
}
}
@@ -0,0 +1,60 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License;
* you may not use this file except in compliance with the Elastic License.
*/
package org.elasticsearch.shield.authc;
import org.elasticsearch.ElasticsearchSecurityException;
import org.elasticsearch.rest.RestRequest;
import org.elasticsearch.shield.User;
import org.elasticsearch.transport.TransportMessage;
/**
* Responsible for authenticating the Users behind requests
*/
public interface AuthenticationService {
/**
* Authenticates the user that is associated with the given request. If the user was authenticated successfully (i.e.
* a user was indeed associated with the request and the credentials were verified to be valid), the method returns
* the user and that user is then "attached" to the request's context.
*
* @param request The request to be authenticated
* @return The authenticated user
* @throws ElasticsearchSecurityException If no user was associated with the request or if the associated
* user credentials were found to be invalid
*/
User authenticate(RestRequest request) throws ElasticsearchSecurityException;
/**
* Authenticates the user that is associated with the given message. If the user was authenticated successfully (i.e.
* a user was indeed associated with the request and the credentials were verified to be valid), the method returns
* the user and that user is then "attached" to the message's context. If no user was found to be attached to the given
* message, the given fallback user will be returned instead.
*
* @param action The action of the message
* @param message The message to be authenticated
* @param fallbackUser The default user that will be assumed if no other user is attached to the message. Can be
* {@code null}, in which case there will be no fallback user and the success/failure of the
* authentication will be based on whether there's a user attached to the message and,
* if there is, whether its credentials are valid.
*
* @return The authenticated user (either the attached one or, if none is attached, the fallback one if provided)
*
* @throws ElasticsearchSecurityException If the associated user credentials were found to be invalid or in the
* case where there was no user associated with the request, if the default
* token could not be authenticated.
*/
User authenticate(String action, TransportMessage message, User fallbackUser);
/**
* Checks if there's already a user header attached to the given message. If missing, a new header is
* set on the message with the given user (encoded).
*
* @param message The message
* @param user The user to be attached if the header is missing
*/
void attachUserHeaderIfMissing(TransportMessage message, User user);
}
@@ -0,0 +1,18 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License;
* you may not use this file except in compliance with the Elastic License.
*/
package org.elasticsearch.shield.authc;
/**
*
*/
public interface AuthenticationToken {
String principal();
Object credentials();
void clearCredentials();
}
@@ -0,0 +1,306 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License;
* you may not use this file except in compliance with the Elastic License.
*/
package org.elasticsearch.shield.authc;
import org.elasticsearch.ElasticsearchSecurityException;
import org.elasticsearch.common.Base64;
import org.elasticsearch.common.component.AbstractComponent;
import org.elasticsearch.common.inject.Inject;
import org.elasticsearch.common.io.stream.BytesStreamOutput;
import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.logging.ESLogger;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.rest.RestRequest;
import org.elasticsearch.shield.User;
import org.elasticsearch.shield.audit.AuditTrail;
import org.elasticsearch.shield.crypto.CryptoService;
import org.elasticsearch.transport.TransportMessage;
import java.io.IOException;
import static org.elasticsearch.shield.support.Exceptions.authenticationError;
/**
* An authentication service that delegates the authentication process to its configured {@link Realm realms}.
* This service also supports request-level caching of authenticated users (i.e. once a user is authenticated
* successfully, it is set on the request context to avoid redundant authentication on subsequent processing)
*/
public class InternalAuthenticationService extends AbstractComponent implements AuthenticationService {
public static final String SETTING_SIGN_USER_HEADER = "shield.authc.sign_user_header";
static final String TOKEN_KEY = "_shield_token";
static final String USER_KEY = "_shield_user";
private final Realms realms;
private final AuditTrail auditTrail;
private final CryptoService cryptoService;
private final AnonymousService anonymousService;
private final boolean signUserHeader;
@Inject
public InternalAuthenticationService(Settings settings, Realms realms, AuditTrail auditTrail, CryptoService cryptoService, AnonymousService anonymousService) {
super(settings);
this.realms = realms;
this.auditTrail = auditTrail;
this.cryptoService = cryptoService;
this.anonymousService = anonymousService;
this.signUserHeader = settings.getAsBoolean(SETTING_SIGN_USER_HEADER, true);
}
@Override
public User authenticate(RestRequest request) throws ElasticsearchSecurityException {
AuthenticationToken token;
try {
token = token(request);
} catch (Exception e) {
if (logger.isDebugEnabled()) {
logger.debug("failed to extract token from request", e);
} else {
logger.warn("failed to extract token from request: {}", e.getMessage());
}
auditTrail.authenticationFailed(request);
if (e instanceof ElasticsearchSecurityException) {
throw (ElasticsearchSecurityException) e;
}
throw authenticationError("error attempting to authenticate request", e);
}
if (token == null) {
if (anonymousService.enabled()) {
// we must put the user in the request context, so it'll be copied to the
// transport request - without it, the transport will assume system user
request.putInContext(USER_KEY, anonymousService.anonymousUser());
return anonymousService.anonymousUser();
}
auditTrail.anonymousAccessDenied(request);
throw authenticationError("missing authentication token for REST request [{}]", request.uri());
}
User user;
try {
user = authenticate(request, token);
} catch (Exception e) {
if (logger.isDebugEnabled()) {
logger.debug("authentication of request failed for principal [{}], uri [{}]", e, token.principal(), request.uri());
}
auditTrail.authenticationFailed(token, request);
if (e instanceof ElasticsearchSecurityException) {
throw (ElasticsearchSecurityException) e;
}
throw authenticationError("error attempting to authenticate request", e);
}
if (user == null) {
throw authenticationError("unable to authenticate user [{}] for REST request [{}]", token.principal(), request.uri());
}
// we must put the user in the request context, so it'll be copied to the
// transport request - without it, the transport will assume system user
request.putInContext(USER_KEY, user);
return user;
}
@Override
public User authenticate(String action, TransportMessage message, User fallbackUser) {
User user = message.getFromContext(USER_KEY);
if (user != null) {
return user;
}
String header = message.getHeader(USER_KEY);
if (header != null) {
if (signUserHeader) {
header = cryptoService.unsignAndVerify(header);
}
user = decodeUser(header);
}
if (user == null) {
user = authenticateWithRealms(action, message, fallbackUser);
header = signUserHeader ? cryptoService.sign(encodeUser(user, logger)) : encodeUser(user, logger);
message.putHeader(USER_KEY, header);
}
message.putInContext(USER_KEY, user);
return user;
}
@Override
public void attachUserHeaderIfMissing(TransportMessage message, User user) {
if (message.hasHeader(USER_KEY)) {
return;
}
User userFromContext = message.getFromContext(USER_KEY);
if (userFromContext != null) {
String userHeader = signUserHeader ? cryptoService.sign(encodeUser(userFromContext, logger)) : encodeUser(userFromContext, logger);
message.putHeader(USER_KEY, userHeader);
return;
}
message.putInContext(USER_KEY, user);
String userHeader = signUserHeader ? cryptoService.sign(encodeUser(user, logger)) : encodeUser(user, logger);
message.putHeader(USER_KEY, userHeader);
}
static User decodeUser(String text) {
try {
byte[] bytes = Base64.decode(text);
StreamInput input = StreamInput.wrap(bytes);
return User.readFrom(input);
} catch (IOException ioe) {
throw authenticationError("could not read authenticated user", ioe);
}
}
static String encodeUser(User user, ESLogger logger) {
try {
BytesStreamOutput output = new BytesStreamOutput();
User.writeTo(user, output);
byte[] bytes = output.bytes().toBytes();
return Base64.encodeBytes(bytes);
} catch (IOException ioe) {
if (logger != null) {
logger.error("could not encode authenticated user in message header... falling back to token headers", ioe);
}
return null;
}
}
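The encode/decode pair above serializes a user with Elasticsearch's stream wire format and base64-encodes the bytes so the user can travel as a transport message header. A simplified standalone sketch of that round trip (it substitutes a plain delimited string for the real `User.writeTo` wire format, so the encoding here is purely illustrative):

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Simplified sketch of the user-header round trip: encode a user as a
// base64 header value, then decode it back. The real implementation uses
// Elasticsearch's StreamOutput wire format; this stand-in just joins the
// principal and roles with a separator for illustration.
public class UserHeaderSketch {

    static String encodeUser(String principal, String... roles) {
        String raw = principal + "|" + String.join(",", roles);
        return Base64.getEncoder().encodeToString(raw.getBytes(StandardCharsets.UTF_8));
    }

    static String[] decodeUser(String header) {
        String raw = new String(Base64.getDecoder().decode(header), StandardCharsets.UTF_8);
        return raw.split("\\|", 2); // [principal, comma-joined roles]
    }

    public static void main(String[] args) {
        String header = encodeUser("kibana_user", "monitoring", "reporting");
        String[] decoded = decodeUser(header);
        System.out.println(decoded[0]); // prints kibana_user
    }
}
```

Either side failing to decode falls back to re-authenticating via the realms, which mirrors the `user == null` branch above.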
/**
* Authenticates the user associated with the given request by delegating the authentication to
* the configured realms. Each realm that supports the given token will be asked to perform authentication;
* the first realm that successfully authenticates "wins" and its authenticated user will be returned.
* If none of the configured realms successfully authenticates the request, an {@link ElasticsearchSecurityException}
* will be thrown.
* <p/>
* The order by which the realms are checked is defined in {@link Realms}.
*
* @param action The executed action
* @param message The executed request
* @param fallbackUser The user to assume if there is no other user attached to the message
*
* @return The authenticated user
*
* @throws ElasticsearchSecurityException If none of the configured realms successfully authenticated the
* request
*/
@SuppressWarnings("unchecked")
User authenticateWithRealms(String action, TransportMessage<?> message, User fallbackUser) throws ElasticsearchSecurityException {
AuthenticationToken token;
try {
token = token(action, message);
} catch (Exception e) {
if (logger.isDebugEnabled()) {
logger.debug("failed to extract token from transport message", e);
} else {
logger.warn("failed to extract token from transport message: {}", e.getMessage());
}
auditTrail.authenticationFailed(action, message);
if (e instanceof ElasticsearchSecurityException) {
throw (ElasticsearchSecurityException) e;
}
throw authenticationError("error attempting to authenticate request", e);
}
if (token == null) {
if (fallbackUser != null) {
return fallbackUser;
}
if (anonymousService.enabled()) {
return anonymousService.anonymousUser();
}
auditTrail.anonymousAccessDenied(action, message);
throw authenticationError("missing authentication token for action [{}]", action);
}
User user;
try {
user = authenticate(message, token, action);
} catch (Exception e) {
if (logger.isDebugEnabled()) {
logger.debug("authentication of transport message failed for principal [{}], action [{}]", e, token.principal(), action);
}
auditTrail.authenticationFailed(token, action, message);
if (e instanceof ElasticsearchSecurityException) {
throw (ElasticsearchSecurityException) e;
}
throw authenticationError("error attempting to authenticate request", e);
}
if (user == null) {
throw authenticationError("unable to authenticate user [{}] for action [{}]", token.principal(), action);
}
return user;
}
User authenticate(TransportMessage<?> message, AuthenticationToken token, String action) throws ElasticsearchSecurityException {
assert token != null : "cannot authenticate null tokens";
try {
for (Realm realm : realms) {
if (realm.supports(token)) {
User user = realm.authenticate(token);
if (user != null) {
return user;
}
auditTrail.authenticationFailed(realm.type(), token, action, message);
}
}
auditTrail.authenticationFailed(token, action, message);
return null;
} finally {
token.clearCredentials();
}
}
User authenticate(RestRequest request, AuthenticationToken token) throws ElasticsearchSecurityException {
assert token != null : "cannot authenticate null tokens";
try {
for (Realm realm : realms) {
if (realm.supports(token)) {
User user = realm.authenticate(token);
if (user != null) {
return user;
}
auditTrail.authenticationFailed(realm.type(), token, request);
}
}
auditTrail.authenticationFailed(token, request);
return null;
} finally {
token.clearCredentials();
}
}
AuthenticationToken token(RestRequest request) throws ElasticsearchSecurityException {
for (Realm realm : realms) {
AuthenticationToken token = realm.token(request);
if (token != null) {
request.putInContext(TOKEN_KEY, token);
return token;
}
}
return null;
}
@SuppressWarnings("unchecked")
AuthenticationToken token(String action, TransportMessage<?> message) {
AuthenticationToken token = message.getFromContext(TOKEN_KEY);
if (token != null) {
return token;
}
for (Realm realm : realms) {
token = realm.token(message);
if (token != null) {
if (logger.isTraceEnabled()) {
logger.trace("realm [{}] resolved authentication token [{}] from transport request with action [{}]", realm, token.principal(), action);
}
message.putInContext(TOKEN_KEY, token);
return token;
}
}
return null;
}
}


@ -0,0 +1,137 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License;
* you may not use this file except in compliance with the Elastic License.
*/
package org.elasticsearch.shield.authc;
import org.elasticsearch.common.logging.ESLogger;
import org.elasticsearch.rest.RestRequest;
import org.elasticsearch.shield.ShieldSettingsFilter;
import org.elasticsearch.shield.User;
import org.elasticsearch.transport.TransportMessage;
/**
* An authentication mechanism to which the default authentication {@link org.elasticsearch.shield.authc.AuthenticationService service}
* delegates the authentication process. Different realms may be defined, each may be based on different
* authentication mechanism supporting its own specific authentication token type.
*/
public abstract class Realm<T extends AuthenticationToken> implements Comparable<Realm> {
protected final ESLogger logger;
protected final String type;
protected RealmConfig config;
public Realm(String type, RealmConfig config) {
this.type = type;
this.config = config;
this.logger = config.logger(getClass());
}
/**
* @return The type of this realm
*/
public String type() {
return type;
}
/**
* @return The name of this realm.
*/
public String name() {
return config.name;
}
/**
* @return The order of this realm within the executing realm chain.
*/
public int order() {
return config.order;
}
@Override
public int compareTo(Realm other) {
return Integer.compare(config.order, other.config.order);
}
/**
* @return {@code true} if this realm supports the given authentication token, {@code false} otherwise.
*/
public abstract boolean supports(AuthenticationToken token);
/**
* Attempts to extract an authentication token from the given rest request. If an appropriate token
* is found it's returned, otherwise {@code null} is returned.
*
* @param request The rest request
* @return The authentication token or {@code null} if not found
*/
public abstract T token(RestRequest request);
/**
* Attempts to extract an authentication token from the given transport message. If an appropriate token
* is found it's returned, otherwise {@code null} is returned.
*
* @param message The transport message
* @return The authentication token or {@code null} if not found
*/
public abstract T token(TransportMessage<?> message);
/**
* Authenticates the given token. A successful authentication will return the User associated
* with the given token. An unsuccessful authentication returns {@code null}.
*
* @param token The authentication token
* @return The authenticated user or {@code null} if authentication failed.
*/
public abstract User authenticate(T token);
@Override
public String toString() {
return type + "/" + config.name;
}
/**
* A factory for a specific realm type. Knows how to create a new realm given the appropriate
* settings
*/
public static abstract class Factory<R extends Realm> {
private final String type;
private final boolean internal;
public Factory(String type, boolean internal) {
this.type = type;
this.internal = internal;
}
/**
* @return The type of the realm this factory creates
*/
public String type() {
return type;
}
public boolean internal() {
return internal;
}
public void filterOutSensitiveSettings(String realmName, ShieldSettingsFilter filter) {
}
/**
* Creates a new realm based on the given settings.
*
* @param config The configuration for the realm
* @return The new realm (this method never returns {@code null}).
*/
public abstract R create(RealmConfig config);
/**
* Creates a default realm, one that has no custom settings. Some realms might require a minimal
* set of settings, in which case this method will return {@code null}.
*/
public abstract R createDefault(String name);
}
}
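The `Realm` contract above (a `supports` check plus an `authenticate` that returns the user or `null`) is what drives the first-match-wins chain in `AuthenticationService`. A simplified standalone model of that chain, not the actual Shield API, with realms reduced to token-to-user functions:

```java
import java.util.Arrays;
import java.util.List;
import java.util.function.Function;

// Simplified standalone model of the first-match-wins realm chain: each
// realm maps a token to a user or to null, and the first non-null result
// ends the iteration.
public class RealmChainSketch {

    static String authenticate(List<Function<String, String>> realms, String token) {
        for (Function<String, String> realm : realms) {
            String user = realm.apply(token);
            if (user != null) {
                return user; // first successful realm wins
            }
        }
        return null; // no realm authenticated the token; the caller raises a security exception
    }

    public static void main(String[] args) {
        List<Function<String, String>> chain = Arrays.asList(
                t -> t.equals("file-token") ? "alice" : null,  // e.g. an esusers-style realm
                t -> t.equals("ldap-token") ? "bob" : null);   // e.g. an LDAP-style realm
        System.out.println(authenticate(chain, "ldap-token")); // prints bob
    }
}
```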


@ -0,0 +1,66 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License;
* you may not use this file except in compliance with the Elastic License.
*/
package org.elasticsearch.shield.authc;
import org.elasticsearch.common.logging.ESLogger;
import org.elasticsearch.common.logging.Loggers;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.env.Environment;
/**
*
*/
public class RealmConfig {
final String name;
final boolean enabled;
final int order;
final Settings settings;
private final Environment env;
private final Settings globalSettings;
public RealmConfig(String name, Settings settings, Settings globalSettings) {
this(name, settings, globalSettings, new Environment(globalSettings));
}
public RealmConfig(String name, Settings settings, Settings globalSettings, Environment env) {
this.name = name;
this.settings = settings;
this.globalSettings = globalSettings;
this.env = env;
enabled = settings.getAsBoolean("enabled", true);
order = settings.getAsInt("order", Integer.MAX_VALUE);
}
public String name() {
return name;
}
public boolean enabled() {
return enabled;
}
public int order() {
return order;
}
public Settings settings() {
return settings;
}
public Settings globalSettings() {
return globalSettings;
}
public ESLogger logger(Class clazz) {
return Loggers.getLogger(clazz, globalSettings);
}
public Environment env() {
return env;
}
}
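Since `order` defaults to `Integer.MAX_VALUE`, a realm configured without an explicit order sorts after every realm that has one. A small sketch of the resulting chain order (realm names are made up):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of realm ordering: realms sort by their "order" setting via
// Integer.compare, and a realm without an explicit order defaults to
// Integer.MAX_VALUE, so it sorts last.
public class RealmOrderSketch {

    static List<String> sortByOrder(Map<String, Integer> realmOrders) {
        List<String> names = new ArrayList<>(realmOrders.keySet());
        names.sort((a, b) -> Integer.compare(realmOrders.get(a), realmOrders.get(b)));
        return names;
    }

    public static void main(String[] args) {
        Map<String, Integer> orders = new HashMap<>();
        orders.put("ldap1", 0);
        orders.put("esusers1", Integer.MAX_VALUE); // no explicit order configured
        orders.put("ad1", 1);
        System.out.println(sortByOrder(orders)); // prints [ldap1, ad1, esusers1]
    }
}
```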


@ -0,0 +1,138 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License;
* you may not use this file except in compliance with the Elastic License.
*/
package org.elasticsearch.shield.authc;
import com.google.common.collect.Lists;
import com.google.common.collect.Sets;
import org.elasticsearch.ElasticsearchException;
import org.elasticsearch.common.component.AbstractLifecycleComponent;
import org.elasticsearch.common.inject.Inject;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.env.Environment;
import org.elasticsearch.shield.ShieldSettingsFilter;
import org.elasticsearch.shield.authc.esusers.ESUsersRealm;
import java.util.*;
import java.util.concurrent.CopyOnWriteArrayList;
/**
* Serves as a realms registry (also responsible for ordering the realms appropriately)
*/
public class Realms extends AbstractLifecycleComponent<Realms> implements Iterable<Realm> {
private final Environment env;
private final Map<String, Realm.Factory> factories;
private final ShieldSettingsFilter settingsFilter;
private List<Realm> realms = Collections.emptyList();
@Inject
public Realms(Settings settings, Environment env, Map<String, Realm.Factory> factories, ShieldSettingsFilter settingsFilter) {
super(settings);
this.env = env;
this.factories = factories;
this.settingsFilter = settingsFilter;
}
@Override
protected void doStart() throws ElasticsearchException {
realms = new CopyOnWriteArrayList<>(initRealms());
}
@Override
protected void doStop() throws ElasticsearchException {}
@Override
protected void doClose() throws ElasticsearchException {}
@Override
public Iterator<Realm> iterator() {
return realms.iterator();
}
public Realm realm(String name) {
for (Realm realm : realms) {
if (name.equals(realm.config.name)) {
return realm;
}
}
return null;
}
public Realm.Factory realmFactory(String type) {
return factories.get(type);
}
protected List<Realm> initRealms() {
Settings realmsSettings = settings.getAsSettings("shield.authc.realms");
Set<String> internalTypes = Sets.newHashSet();
List<Realm> realms = Lists.newArrayList();
for (String name : realmsSettings.names()) {
Settings realmSettings = realmsSettings.getAsSettings(name);
String type = realmSettings.get("type");
if (type == null) {
throw new IllegalArgumentException("missing realm type for [" + name + "] realm");
}
Realm.Factory factory = factories.get(type);
if (factory == null) {
throw new IllegalArgumentException("unknown realm type [" + type + "] set for realm [" + name + "]");
}
factory.filterOutSensitiveSettings(name, settingsFilter);
RealmConfig config = new RealmConfig(name, realmSettings, settings, env);
if (!config.enabled()) {
if (logger.isDebugEnabled()) {
logger.debug("realm [{}/{}] is disabled", type, name);
}
continue;
}
if (factory.internal()) {
// this is an internal realm factory, let's make sure we haven't already registered one
// (there can only be one instance of an internal realm)
if (internalTypes.contains(type)) {
throw new IllegalArgumentException("multiple [" + type + "] realms are configured. [" + type +
"] is an internal realm and therefore there can only be one such realm configured");
}
internalTypes.add(type);
}
realms.add(factory.create(config));
}
if (!realms.isEmpty()) {
Collections.sort(realms);
return realms;
}
// there is no "realms" configuration, go over all the factories and try to create defaults
// for all the internal realms
realms.add(factories.get(ESUsersRealm.TYPE).createDefault("default_" + ESUsersRealm.TYPE));
return realms;
}
/**
* Returns the settings for the internal realm of the given type. Internal realms may or may
* not be configured. If they are not configured, they work OOTB using default settings. If they
* are configured, only one realm of a given internal type may exist.
*/
public static Settings internalRealmSettings(Settings settings, String realmType) {
Settings realmsSettings = settings.getAsSettings("shield.authc.realms");
Settings result = null;
for (String name : realmsSettings.names()) {
Settings realmSettings = realmsSettings.getAsSettings(name);
String type = realmSettings.get("type");
if (type == null) {
throw new IllegalArgumentException("missing realm type for [" + name + "] realm");
}
if (type.equals(realmType)) {
if (result != null) {
throw new IllegalArgumentException("multiple [" + realmType + "] realms are configured. only one [" + realmType + "] may be configured");
}
result = realmSettings;
}
}
return result != null ? result : Settings.EMPTY;
}
}
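The uniqueness rule enforced in `initRealms` — at most one configured realm per internal type, while custom types may repeat — can be sketched on its own (class and method names here are illustrative, not Shield's):

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Sketch of the duplicate-internal-realm check: internal realm types may
// appear at most once in the configuration; other types may repeat.
public class InternalRealmCheck {

    static void checkInternalTypes(List<String> configuredTypes, Set<String> internalTypes) {
        Set<String> seen = new HashSet<>();
        for (String type : configuredTypes) {
            if (internalTypes.contains(type) && !seen.add(type)) {
                throw new IllegalArgumentException("multiple [" + type + "] realms are configured");
            }
        }
    }

    public static void main(String[] args) {
        Set<String> internal = new HashSet<>(Arrays.asList("esusers"));
        // fine: "ldap" repeats, but it is not an internal type
        checkInternalTypes(Arrays.asList("esusers", "ldap", "ldap"), internal);
        try {
            checkInternalTypes(Arrays.asList("esusers", "esusers"), internal);
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage()); // prints multiple [esusers] realms are configured
        }
    }
}
```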


@ -0,0 +1,119 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License;
* you may not use this file except in compliance with the Elastic License.
*/
package org.elasticsearch.shield.authc.activedirectory;
import com.google.common.collect.ImmutableList;
import com.google.common.primitives.Ints;
import com.unboundid.ldap.sdk.*;
import org.elasticsearch.common.Strings;
import org.elasticsearch.common.logging.ESLogger;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.unit.TimeValue;
import org.elasticsearch.shield.authc.ldap.support.LdapSearchScope;
import org.elasticsearch.shield.authc.ldap.support.LdapSession.GroupsResolver;
import org.elasticsearch.shield.support.Exceptions;
import java.util.ArrayList;
import java.util.List;
import static org.elasticsearch.shield.authc.ldap.support.LdapUtils.*;
/**
*
*/
public class ActiveDirectoryGroupsResolver implements GroupsResolver {
private final String baseDn;
private final LdapSearchScope scope;
public ActiveDirectoryGroupsResolver(Settings settings, String baseDnDefault) {
this.baseDn = settings.get("base_dn", baseDnDefault);
this.scope = LdapSearchScope.resolve(settings.get("scope"), LdapSearchScope.SUB_TREE);
}
public List<String> resolve(LDAPInterface connection, String userDn, TimeValue timeout, ESLogger logger) {
Filter groupSearchFilter = buildGroupQuery(connection, userDn, timeout, logger);
logger.debug("group SID to DN search filter: [{}]", groupSearchFilter);
SearchRequest searchRequest = new SearchRequest(baseDn, scope.scope(), groupSearchFilter, Strings.EMPTY_ARRAY);
searchRequest.setTimeLimitSeconds(Ints.checkedCast(timeout.seconds()));
SearchResult results;
try {
results = search(connection, searchRequest, logger);
} catch (LDAPException e) {
throw Exceptions.authenticationError("failed to fetch AD groups for DN [{}]", e, userDn);
}
ImmutableList.Builder<String> groups = ImmutableList.builder();
for (SearchResultEntry entry : results.getSearchEntries()) {
groups.add(entry.getDN());
}
List<String> groupList = groups.build();
if (logger.isDebugEnabled()) {
logger.debug("found these groups [{}] for userDN [{}]", groupList, userDn);
}
return groupList;
}
static Filter buildGroupQuery(LDAPInterface connection, String userDn, TimeValue timeout, ESLogger logger) {
try {
SearchRequest request = new SearchRequest(userDn, SearchScope.BASE, OBJECT_CLASS_PRESENCE_FILTER, "tokenGroups");
request.setTimeLimitSeconds(Ints.checkedCast(timeout.seconds()));
SearchResultEntry entry = searchForEntry(connection, request, logger);
Attribute attribute = entry.getAttribute("tokenGroups");
byte[][] tokenGroupSIDBytes = attribute.getValueByteArrays();
List<Filter> orFilters = new ArrayList<>(tokenGroupSIDBytes.length);
for (byte[] SID : tokenGroupSIDBytes) {
orFilters.add(Filter.createEqualityFilter("objectSid", binarySidToStringSid(SID)));
}
return Filter.createORFilter(orFilters);
} catch (LDAPException e) {
throw Exceptions.authenticationError("failed to fetch AD groups for DN [{}]", e, userDn);
}
}
/**
* To better understand what the SID is and what its string representation looks like, see
* http://blogs.msdn.com/b/alextch/archive/2007/06/18/sample-java-application-that-retrieves-group-membership-of-an-active-directory-user-account.aspx
*
* @param SID byte encoded security ID
*/
public static String binarySidToStringSid(byte[] SID) {
String strSID;
//convert the SID into string format
long version;
long authority;
long count;
long rid;
strSID = "S";
version = SID[0];
strSID = strSID + "-" + Long.toString(version);
authority = SID[4];
for (int i = 0; i < 4; i++) {
authority <<= 8;
authority += SID[4 + i] & 0xFF;
}
strSID = strSID + "-" + Long.toString(authority);
count = SID[2];
count <<= 8;
count += SID[1] & 0xFF;
for (int j = 0; j < count; j++) {
rid = SID[11 + (j * 4)] & 0xFF;
for (int k = 1; k < 4; k++) {
rid <<= 8;
rid += SID[11 - k + (j * 4)] & 0xFF;
}
strSID = strSID + "-" + Long.toString(rid);
}
return strSID;
}
}
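The binary SID layout behind `binarySidToStringSid` is: one revision byte, one sub-authority count byte, a 48-bit big-endian identifier authority, then `count` little-endian 32-bit sub-authorities. A cleaned-up standalone version of the same conversion (this sketch reads all six authority bytes, whereas the sample-derived code above effectively handles authorities that fit in bytes 4-7), checked against a well-known SID:

```java
// Standalone sketch of binary-SID-to-string conversion: revision byte,
// sub-authority count byte, 48-bit big-endian identifier authority, then
// little-endian 32-bit sub-authorities.
public class SidCodec {

    public static String toStringSid(byte[] sid) {
        StringBuilder sb = new StringBuilder("S-").append(sid[0] & 0xFF);
        long authority = 0;
        for (int i = 2; i < 8; i++) {           // bytes 2..7, big-endian
            authority = (authority << 8) | (sid[i] & 0xFF);
        }
        sb.append('-').append(authority);
        int count = sid[1] & 0xFF;
        for (int j = 0; j < count; j++) {
            long rid = 0;
            for (int k = 3; k >= 0; k--) {      // each sub-authority is little-endian
                rid = (rid << 8) | (sid[8 + j * 4 + k] & 0xFF);
            }
            sb.append('-').append(rid);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // BUILTIN\Administrators is the well-known SID S-1-5-32-544
        byte[] admins = {1, 2, 0, 0, 0, 0, 0, 5, 32, 0, 0, 0, 32, 2, 0, 0};
        System.out.println(toStringSid(admins)); // prints S-1-5-32-544
    }
}
```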


@ -0,0 +1,55 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License;
* you may not use this file except in compliance with the Elastic License.
*/
package org.elasticsearch.shield.authc.activedirectory;
import org.elasticsearch.common.inject.Inject;
import org.elasticsearch.rest.RestController;
import org.elasticsearch.shield.ShieldSettingsFilter;
import org.elasticsearch.shield.authc.RealmConfig;
import org.elasticsearch.shield.authc.ldap.support.AbstractLdapRealm;
import org.elasticsearch.shield.authc.support.DnRoleMapper;
import org.elasticsearch.shield.ssl.ClientSSLService;
import org.elasticsearch.watcher.ResourceWatcherService;
/**
*
*/
public class ActiveDirectoryRealm extends AbstractLdapRealm {
public static final String TYPE = "active_directory";
public ActiveDirectoryRealm(RealmConfig config,
ActiveDirectorySessionFactory connectionFactory,
DnRoleMapper roleMapper) {
super(TYPE, config, connectionFactory, roleMapper);
}
public static class Factory extends AbstractLdapRealm.Factory<ActiveDirectoryRealm> {
private final ResourceWatcherService watcherService;
private final ClientSSLService clientSSLService;
@Inject
public Factory(ResourceWatcherService watcherService, RestController restController, ClientSSLService clientSSLService) {
super(ActiveDirectoryRealm.TYPE, restController);
this.watcherService = watcherService;
this.clientSSLService = clientSSLService;
}
@Override
public void filterOutSensitiveSettings(String realmName, ShieldSettingsFilter filter) {
ActiveDirectorySessionFactory.filterOutSensitiveSettings(realmName, filter);
}
@Override
public ActiveDirectoryRealm create(RealmConfig config) {
ActiveDirectorySessionFactory connectionFactory = new ActiveDirectorySessionFactory(config, clientSSLService);
DnRoleMapper roleMapper = new DnRoleMapper(TYPE, config, watcherService, null);
return new ActiveDirectoryRealm(config, connectionFactory, roleMapper);
}
}
}


@ -0,0 +1,135 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License;
* you may not use this file except in compliance with the Elastic License.
*/
package org.elasticsearch.shield.authc.activedirectory;
import com.google.common.primitives.Ints;
import com.unboundid.ldap.sdk.*;
import org.elasticsearch.common.Strings;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.shield.ShieldSettingsFilter;
import org.elasticsearch.shield.authc.RealmConfig;
import org.elasticsearch.shield.authc.ldap.support.LdapSearchScope;
import org.elasticsearch.shield.authc.ldap.support.LdapSession;
import org.elasticsearch.shield.authc.ldap.support.LdapSession.GroupsResolver;
import org.elasticsearch.shield.authc.ldap.support.SessionFactory;
import org.elasticsearch.shield.authc.support.SecuredString;
import org.elasticsearch.shield.ssl.ClientSSLService;
import javax.net.SocketFactory;
import java.io.IOException;
import static org.elasticsearch.shield.authc.ldap.support.LdapUtils.createFilter;
import static org.elasticsearch.shield.authc.ldap.support.LdapUtils.search;
import static org.elasticsearch.shield.support.Exceptions.authenticationError;
/**
* This class creates LdapSessions that authenticate via the Active Directory-specific flow: binding with a
* principal name ("username@domain"), then searching the directory for the user entry in Active Directory
* that matches that user name. This eliminates the need for user templates and simplifies the
* configuration for Windows admins who may not be familiar with LDAP concepts.
*/
public class ActiveDirectorySessionFactory extends SessionFactory {
public static final String AD_DOMAIN_NAME_SETTING = "domain_name";
public static final String AD_GROUP_SEARCH_BASEDN_SETTING = "group_search.base_dn";
public static final String AD_GROUP_SEARCH_SCOPE_SETTING = "group_search.scope";
public static final String AD_USER_SEARCH_BASEDN_SETTING = "user_search.base_dn";
public static final String AD_USER_SEARCH_FILTER_SETTING = "user_search.filter";
public static final String AD_USER_SEARCH_SCOPE_SETTING = "user_search.scope";
private final String userSearchDN;
private final String domainName;
private final String userSearchFilter;
private final LdapSearchScope userSearchScope;
private final GroupsResolver groupResolver;
private final ServerSet ldapServerSet;
public ActiveDirectorySessionFactory(RealmConfig config, ClientSSLService sslService) {
super(config);
Settings settings = config.settings();
domainName = settings.get(AD_DOMAIN_NAME_SETTING);
if (domainName == null) {
throw new IllegalArgumentException("missing [" + AD_DOMAIN_NAME_SETTING + "] setting for active directory");
}
String domainDN = buildDnFromDomain(domainName);
userSearchDN = settings.get(AD_USER_SEARCH_BASEDN_SETTING, domainDN);
userSearchScope = LdapSearchScope.resolve(settings.get(AD_USER_SEARCH_SCOPE_SETTING), LdapSearchScope.SUB_TREE);
userSearchFilter = settings.get(AD_USER_SEARCH_FILTER_SETTING, "(&(objectClass=user)(|(sAMAccountName={0})(userPrincipalName={0}@" + domainName + ")))");
ldapServerSet = serverSet(config.settings(), sslService);
groupResolver = new ActiveDirectoryGroupsResolver(settings.getAsSettings("group_search"), domainDN);
}
static void filterOutSensitiveSettings(String realmName, ShieldSettingsFilter filter) {
filter.filterOut("shield.authc.realms." + realmName + "." + HOSTNAME_VERIFICATION_SETTING);
}
ServerSet serverSet(Settings settings, ClientSSLService clientSSLService) {
String[] ldapUrls = settings.getAsArray(URLS_SETTING, new String[] { "ldap://" + domainName + ":389" });
LDAPServers servers = new LDAPServers(ldapUrls);
LDAPConnectionOptions options = connectionOptions(settings);
SocketFactory socketFactory;
if (servers.ssl()) {
socketFactory = clientSSLService.sslSocketFactory();
if (settings.getAsBoolean(HOSTNAME_VERIFICATION_SETTING, true)) {
logger.debug("using encryption for LDAP connections with hostname verification");
} else {
logger.debug("using encryption for LDAP connections without hostname verification");
}
} else {
socketFactory = null;
}
FailoverServerSet serverSet = new FailoverServerSet(servers.addresses(), servers.ports(), socketFactory, options);
serverSet.setReOrderOnFailover(true);
return serverSet;
}
/**
* This is an Active Directory bind that looks up the user DN after binding with a Windows principal.
*
* @param userName name of the Windows user, without the domain
* @return An authenticated {@link LdapSession} for the given user
*/
@Override
public LdapSession session(String userName, SecuredString password) throws Exception {
LDAPConnection connection;
try {
connection = ldapServerSet.getConnection();
} catch (LDAPException e) {
throw new IOException("failed to connect to any active directory servers", e);
}
String userPrincipal = userName + "@" + domainName;
try {
connection.bind(userPrincipal, new String(password.internalChars()));
SearchRequest searchRequest = new SearchRequest(userSearchDN, userSearchScope.scope(), createFilter(userSearchFilter, userName), Strings.EMPTY_ARRAY);
searchRequest.setTimeLimitSeconds(Ints.checkedCast(timeout.seconds()));
SearchResult results = search(connection, searchRequest, logger);
int numResults = results.getEntryCount();
if (numResults > 1) {
throw new IllegalStateException("search for user [" + userName + "] by principal name yielded multiple results");
} else if (numResults < 1) {
throw new IllegalStateException("search for user [" + userName + "] by principal name yielded no results");
}
String dn = results.getSearchEntries().get(0).getDN();
return new LdapSession(connectionLogger, connection, dn, groupResolver, timeout);
} catch (LDAPException e) {
connection.close();
// TODO think more about this exception...
throw authenticationError("unable to authenticate user [{}] to active directory domain [{}]", e, userName, domainName);
}
}
/**
* @param domain active directory domain name
* @return LDAP DN, distinguished name, of the root of the domain
*/
String buildDnFromDomain(String domain) {
return "DC=" + domain.replace(".", ",DC=");
}
}
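`buildDnFromDomain` is a pure string transform, so it is easy to exercise standalone: each dot-separated domain label becomes a `DC=` component of the base DN.

```java
// Standalone copy of the domain-to-base-DN derivation: "ad.example.com"
// maps to the root DN "DC=ad,DC=example,DC=com".
public class DomainDn {

    static String buildDnFromDomain(String domain) {
        return "DC=" + domain.replace(".", ",DC=");
    }

    public static void main(String[] args) {
        System.out.println(buildDnFromDomain("ad.example.com")); // prints DC=ad,DC=example,DC=com
    }
}
```

This derived DN is what serves as the default user-search and group-search base when no explicit `base_dn` is configured.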


@ -0,0 +1,83 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License;
* you may not use this file except in compliance with the Elastic License.
*/
package org.elasticsearch.shield.authc.esusers;
import org.elasticsearch.common.inject.Inject;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.env.Environment;
import org.elasticsearch.rest.RestController;
import org.elasticsearch.shield.User;
import org.elasticsearch.shield.authc.Realm;
import org.elasticsearch.shield.authc.RealmConfig;
import org.elasticsearch.shield.authc.support.CachingUsernamePasswordRealm;
import org.elasticsearch.shield.authc.support.RefreshListener;
import org.elasticsearch.shield.authc.support.UsernamePasswordToken;
import org.elasticsearch.watcher.ResourceWatcherService;
/**
*
*/
public class ESUsersRealm extends CachingUsernamePasswordRealm {
public static final String TYPE = "esusers";
final FileUserPasswdStore userPasswdStore;
final FileUserRolesStore userRolesStore;
public ESUsersRealm(RealmConfig config, FileUserPasswdStore userPasswdStore, FileUserRolesStore userRolesStore) {
super(TYPE, config);
Listener listener = new Listener();
this.userPasswdStore = userPasswdStore;
userPasswdStore.addListener(listener);
this.userRolesStore = userRolesStore;
userRolesStore.addListener(listener);
}
@Override
protected User doAuthenticate(UsernamePasswordToken token) {
if (!userPasswdStore.verifyPassword(token.principal(), token.credentials())) {
return null;
}
String[] roles = userRolesStore.roles(token.principal());
return new User.Simple(token.principal(), roles);
}
class Listener implements RefreshListener {
@Override
public void onRefresh() {
expireAll();
}
}
public static class Factory extends Realm.Factory<ESUsersRealm> {
private final Settings settings;
private final Environment env;
private final ResourceWatcherService watcherService;
@Inject
public Factory(Settings settings, Environment env, ResourceWatcherService watcherService, RestController restController) {
super(TYPE, true);
this.settings = settings;
this.env = env;
this.watcherService = watcherService;
restController.registerRelevantHeaders(UsernamePasswordToken.BASIC_AUTH_HEADER);
}
@Override
public ESUsersRealm create(RealmConfig config) {
FileUserPasswdStore userPasswdStore = new FileUserPasswdStore(config, watcherService);
FileUserRolesStore userRolesStore = new FileUserRolesStore(config, watcherService);
return new ESUsersRealm(config, userPasswdStore, userRolesStore);
}
@Override
public ESUsersRealm createDefault(String name) {
RealmConfig config = new RealmConfig(name, Settings.EMPTY, settings, env);
return create(config);
}
}
}
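The esusers realm backs `doAuthenticate` with a users file of `username:bcrypt-hash` lines, parsed by `FileUserPasswdStore` below. A minimal standalone sketch of that line format, mirroring the comment-skipping and invalid-entry handling (the sketch returns hashes as strings instead of `char[]` for brevity):

```java
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Sketch of the users-file line format: one "username:hash" entry per line,
// '#' lines are comments, and malformed entries are skipped rather than
// failing the whole file.
public class UsersFileSketch {

    static Map<String, String> parse(List<String> lines) {
        Map<String, String> users = new LinkedHashMap<>();
        for (String line : lines) {
            if (line.startsWith("#")) {
                continue; // comment
            }
            int i = line.indexOf(':');
            if (i <= 0 || i == line.length() - 1) {
                continue; // invalid entry: missing username or missing hash
            }
            users.put(line.substring(0, i).trim(), line.substring(i + 1).trim());
        }
        return users;
    }

    public static void main(String[] args) {
        List<String> lines = Arrays.asList("# users file", "alice:$2a$10$abc", "broken-line");
        System.out.println(parse(lines).keySet()); // prints [alice]
    }
}
```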


@ -0,0 +1,200 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License;
* you may not use this file except in compliance with the Elastic License.
*/
package org.elasticsearch.shield.authc.esusers;
import com.google.common.base.Charsets;
import com.google.common.collect.ImmutableMap;
import org.elasticsearch.ElasticsearchException;
import org.elasticsearch.common.inject.internal.Nullable;
import org.elasticsearch.common.logging.ESLogger;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.env.Environment;
import org.elasticsearch.shield.ShieldPlugin;
import org.elasticsearch.shield.authc.RealmConfig;
import org.elasticsearch.shield.authc.support.Hasher;
import org.elasticsearch.shield.authc.support.RefreshListener;
import org.elasticsearch.shield.authc.support.SecuredString;
import org.elasticsearch.shield.support.NoOpLogger;
import org.elasticsearch.shield.support.Validation;
import org.elasticsearch.watcher.FileChangesListener;
import org.elasticsearch.watcher.FileWatcher;
import org.elasticsearch.watcher.ResourceWatcherService;
import java.io.IOException;
import java.io.PrintWriter;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.Locale;
import java.util.Map;
import java.util.concurrent.CopyOnWriteArrayList;
import static org.elasticsearch.shield.support.ShieldFiles.openAtomicMoveWriter;
/**
*
*/
public class FileUserPasswdStore {
private final ESLogger logger;
private final Path file;
final Hasher hasher = Hasher.BCRYPT;
private volatile ImmutableMap<String, char[]> users;
private CopyOnWriteArrayList<RefreshListener> listeners;
public FileUserPasswdStore(RealmConfig config, ResourceWatcherService watcherService) {
this(config, watcherService, null);
}
FileUserPasswdStore(RealmConfig config, ResourceWatcherService watcherService, RefreshListener listener) {
logger = config.logger(FileUserPasswdStore.class);
file = resolveFile(config.settings(), config.env());
users = parseFileLenient(file, logger);
if (users.isEmpty() && logger.isDebugEnabled()) {
logger.debug("realm [esusers] has no users");
}
FileWatcher watcher = new FileWatcher(file.getParent());
watcher.addListener(new FileListener());
try {
watcherService.add(watcher, ResourceWatcherService.Frequency.HIGH);
} catch (IOException e) {
throw new ElasticsearchException("failed to start watching users file [{}]", e, file.toAbsolutePath());
}
listeners = new CopyOnWriteArrayList<>();
if (listener != null) {
listeners.add(listener);
}
}
void addListener(RefreshListener listener) {
listeners.add(listener);
}
int usersCount() {
return users.size();
}
public boolean verifyPassword(String username, SecuredString password) {
if (users == null) {
return false;
}
char[] hash = users.get(username);
return hash != null && hasher.verify(password, hash);
}
public static Path resolveFile(Settings settings, Environment env) {
String location = settings.get("files.users");
if (location == null) {
return ShieldPlugin.resolveConfigFile(env, "users");
}
return env.homeFile().resolve(location);
}
/**
* Internally in this class, we try to load the file, but if for some reason we can't, we're being more lenient by
* logging the error and skipping all users. This is aligned with how we handle other auto-loaded files in shield.
*/
static ImmutableMap<String, char[]> parseFileLenient(Path path, ESLogger logger) {
try {
return parseFile(path, logger);
} catch (Throwable t) {
logger.error("failed to parse users file [{}]. skipping/removing all users...", t, path.toAbsolutePath());
return ImmutableMap.of();
}
}
/**
* parses the esusers file. Should never return {@code null}, if the file doesn't exist an
* empty map is returned
*/
public static ImmutableMap<String, char[]> parseFile(Path path, @Nullable ESLogger logger) {
if (logger == null) {
logger = NoOpLogger.INSTANCE;
}
logger.trace("reading users file [{}]...", path.toAbsolutePath());
if (!Files.exists(path)) {
return ImmutableMap.of();
}
List<String> lines;
try {
lines = Files.readAllLines(path, Charsets.UTF_8);
} catch (IOException ioe) {
throw new IllegalStateException("could not read users file [" + path.toAbsolutePath() + "]", ioe);
}
ImmutableMap.Builder<String, char[]> users = ImmutableMap.builder();
int lineNr = 0;
for (String line : lines) {
lineNr++;
if (line.startsWith("#")) { // comment
continue;
}
int i = line.indexOf(":");
if (i <= 0 || i == line.length() - 1) {
logger.error("invalid entry in users file [{}], line [{}]. skipping...", path.toAbsolutePath(), lineNr);
continue;
}
String username = line.substring(0, i).trim();
Validation.Error validationError = Validation.ESUsers.validateUsername(username);
if (validationError != null) {
logger.error("invalid username [{}] in users file [{}], skipping... ({})", username, path.toAbsolutePath(), validationError);
continue;
}
String hash = line.substring(i + 1).trim();
users.put(username, hash.toCharArray());
}
ImmutableMap<String, char[]> usersMap = users.build();
if (usersMap.isEmpty()){
logger.warn("no users found in users file [{}]. use bin/shield/esusers to add users and role mappings", path.toAbsolutePath());
}
return usersMap;
}
public static void writeFile(Map<String, char[]> esUsers, Path path) {
try (PrintWriter writer = new PrintWriter(openAtomicMoveWriter(path))) {
for (Map.Entry<String, char[]> entry : esUsers.entrySet()) {
writer.printf(Locale.ROOT, "%s:%s%s", entry.getKey(), new String(entry.getValue()), System.lineSeparator());
}
} catch (IOException ioe) {
throw new ElasticsearchException("could not write file [{}], please check file permissions", ioe, path.toAbsolutePath());
}
}
protected void notifyRefresh() {
for (RefreshListener listener : listeners) {
listener.onRefresh();
}
}
private class FileListener extends FileChangesListener {
@Override
public void onFileCreated(Path file) {
onFileChanged(file);
}
@Override
public void onFileDeleted(Path file) {
onFileChanged(file);
}
@Override
public void onFileChanged(Path file) {
if (file.equals(FileUserPasswdStore.this.file)) {
logger.info("users file [{}] changed. updating users...", file.toAbsolutePath());
users = parseFileLenient(file, logger);
notifyRefresh();
}
}
}
}
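The users file read by `parseFile` above holds one `username:bcrypt-hash` entry per line; `#` starts a comment and malformed lines are skipped. A minimal, dependency-free sketch of that line format (class name and sample hash are illustrative, not part of Shield):

```java
import java.util.*;

// Standalone sketch of the users-file line format parsed above:
// one "username:hash" entry per line, '#' starts a comment,
// and malformed lines are silently skipped.
public class UsersFileSketch {
    static Map<String, String> parse(List<String> lines) {
        Map<String, String> users = new LinkedHashMap<>();
        for (String line : lines) {
            if (line.startsWith("#")) {
                continue; // comment line
            }
            int i = line.indexOf(':');
            if (i <= 0 || i == line.length() - 1) {
                continue; // no separator, empty username, or empty hash
            }
            users.put(line.substring(0, i).trim(), line.substring(i + 1).trim());
        }
        return users;
    }

    public static void main(String[] args) {
        Map<String, String> users = parse(Arrays.asList(
                "# managed by bin/shield/esusers",
                "alice:$2a$10$examplebcrypthash",
                "broken-line-without-separator"));
        System.out.println(users.keySet()); // [alice]
    }
}
```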
@ -0,0 +1,234 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License;
* you may not use this file except in compliance with the Elastic License.
*/
package org.elasticsearch.shield.authc.esusers;
import com.google.common.base.Charsets;
import com.google.common.collect.ImmutableMap;
import org.elasticsearch.ElasticsearchException;
import org.elasticsearch.common.Strings;
import org.elasticsearch.common.inject.internal.Nullable;
import org.elasticsearch.common.logging.ESLogger;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.env.Environment;
import org.elasticsearch.shield.ShieldPlugin;
import org.elasticsearch.shield.authc.RealmConfig;
import org.elasticsearch.shield.authc.support.RefreshListener;
import org.elasticsearch.shield.support.NoOpLogger;
import org.elasticsearch.shield.support.Validation;
import org.elasticsearch.watcher.FileChangesListener;
import org.elasticsearch.watcher.FileWatcher;
import org.elasticsearch.watcher.ResourceWatcherService;
import java.io.IOException;
import java.io.PrintWriter;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.*;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.regex.Pattern;
import static org.elasticsearch.shield.support.ShieldFiles.openAtomicMoveWriter;
/**
*
*/
public class FileUserRolesStore {
private static final Pattern USERS_DELIM = Pattern.compile("\\s*,\\s*");
private final ESLogger logger;
private final Path file;
private CopyOnWriteArrayList<RefreshListener> listeners;
private volatile ImmutableMap<String, String[]> userRoles;
public FileUserRolesStore(RealmConfig config, ResourceWatcherService watcherService) {
this(config, watcherService, null);
}
FileUserRolesStore(RealmConfig config, ResourceWatcherService watcherService, RefreshListener listener) {
logger = config.logger(FileUserRolesStore.class);
file = resolveFile(config.settings(), config.env());
userRoles = parseFileLenient(file, logger);
FileWatcher watcher = new FileWatcher(file.getParent());
watcher.addListener(new FileListener());
try {
watcherService.add(watcher, ResourceWatcherService.Frequency.HIGH);
} catch (IOException e) {
throw new ElasticsearchException("failed to start watching the user roles file [" + file.toAbsolutePath() + "]", e);
}
listeners = new CopyOnWriteArrayList<>();
if (listener != null) {
listeners.add(listener);
}
}
synchronized void addListener(RefreshListener listener) {
listeners.add(listener);
}
int entriesCount() {
return userRoles.size();
}
public String[] roles(String username) {
if (userRoles == null) {
return Strings.EMPTY_ARRAY;
}
String[] roles = userRoles.get(username);
return roles == null ? Strings.EMPTY_ARRAY : roles;
}
public static Path resolveFile(Settings settings, Environment env) {
String location = settings.get("files.users_roles");
if (location == null) {
return ShieldPlugin.resolveConfigFile(env, "users_roles");
}
return env.homeFile().resolve(location);
}
/**
* Internally in this class, we try to load the file, but if for some reason we can't, we're being more lenient by
 * logging the error and skipping all entries. This is aligned with how we handle other auto-loaded files in shield.
*/
static ImmutableMap<String, String[]> parseFileLenient(Path path, ESLogger logger) {
try {
return parseFile(path, logger);
} catch (Throwable t) {
logger.error("failed to parse users_roles file [{}]. skipping/removing all entries...", t, path.toAbsolutePath());
return ImmutableMap.of();
}
}
/**
 * parses the users_roles file. Should never return {@code null}, if the file doesn't exist
* an empty map is returned. The read file holds a mapping per line of the form "role -> users" while the returned
* map holds entries of the form "user -> roles".
*/
public static ImmutableMap<String, String[]> parseFile(Path path, @Nullable ESLogger logger) {
if (logger == null) {
logger = NoOpLogger.INSTANCE;
}
logger.trace("reading users_roles file [{}]...", path.toAbsolutePath());
if (!Files.exists(path)) {
return ImmutableMap.of();
}
List<String> lines;
try {
lines = Files.readAllLines(path, Charsets.UTF_8);
} catch (IOException ioe) {
throw new ElasticsearchException("could not read users file [" + path.toAbsolutePath() + "]", ioe);
}
Map<String, List<String>> userToRoles = new HashMap<>();
int lineNr = 0;
for (String line : lines) {
lineNr++;
if (line.startsWith("#")) { //comment
continue;
}
int i = line.indexOf(":");
if (i <= 0 || i == line.length() - 1) {
logger.error("invalid entry in users_roles file [{}], line [{}]. skipping...", path.toAbsolutePath(), lineNr);
continue;
}
String role = line.substring(0, i).trim();
Validation.Error validationError = Validation.Roles.validateRoleName(role);
if (validationError != null) {
logger.error("invalid role entry in users_roles file [{}], line [{}] - {}. skipping...", path.toAbsolutePath(), lineNr, validationError);
continue;
}
String usersStr = line.substring(i + 1).trim();
if (Strings.isEmpty(usersStr)) {
logger.error("invalid entry for role [{}] in users_roles file [{}], line [{}]. no users found. skipping...", role, path.toAbsolutePath(), lineNr);
continue;
}
String[] roleUsers = USERS_DELIM.split(usersStr);
if (roleUsers.length == 0) {
logger.error("invalid entry for role [{}] in users_roles file [{}], line [{}]. no users found. skipping...", role, path.toAbsolutePath(), lineNr);
continue;
}
for (String user : roleUsers) {
List<String> roles = userToRoles.get(user);
if (roles == null) {
roles = new ArrayList<>();
userToRoles.put(user, roles);
}
roles.add(role);
}
}
ImmutableMap.Builder<String, String[]> builder = ImmutableMap.builder();
for (Map.Entry<String, List<String>> entry : userToRoles.entrySet()) {
builder.put(entry.getKey(), entry.getValue().toArray(new String[entry.getValue().size()]));
}
ImmutableMap<String, String[]> usersRoles = builder.build();
if (usersRoles.isEmpty()){
logger.warn("no entries found in users_roles file [{}]. use bin/shield/esusers to add users and role mappings", path.toAbsolutePath());
}
return usersRoles;
}
/**
* Accepts a mapping of user -> list of roles
*/
public static void writeFile(Map<String, String[]> userToRoles, Path path) {
HashMap<String, List<String>> roleToUsers = new HashMap<>();
for (Map.Entry<String, String[]> entry : userToRoles.entrySet()) {
for (String role : entry.getValue()) {
List<String> users = roleToUsers.get(role);
if (users == null) {
users = new ArrayList<>();
roleToUsers.put(role, users);
}
users.add(entry.getKey());
}
}
try (PrintWriter writer = new PrintWriter(openAtomicMoveWriter(path))) {
for (Map.Entry<String, List<String>> entry : roleToUsers.entrySet()) {
writer.printf(Locale.ROOT, "%s:%s%s", entry.getKey(), Strings.collectionToCommaDelimitedString(entry.getValue()), System.lineSeparator());
}
} catch (IOException ioe) {
throw new ElasticsearchException("could not write file [" + path.toAbsolutePath() + "], please check file permissions", ioe);
}
}
public void notifyRefresh() {
for (RefreshListener listener : listeners) {
listener.onRefresh();
}
}
private class FileListener extends FileChangesListener {
@Override
public void onFileCreated(Path file) {
onFileChanged(file);
}
@Override
public void onFileDeleted(Path file) {
onFileChanged(file);
}
@Override
public void onFileChanged(Path file) {
if (file.equals(FileUserRolesStore.this.file)) {
logger.info("users_roles file [{}] changed. updating users roles...", file.toAbsolutePath());
userRoles = parseFileLenient(file, logger);
notifyRefresh();
}
}
}
}
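`parseFile` above inverts the on-disk `role:user1,user2` mapping into a user-to-roles map. A standalone sketch of that inversion (class name and sample lines are illustrative, not part of Shield):

```java
import java.util.*;
import java.util.regex.Pattern;

// Standalone sketch of the users_roles inversion done above:
// each input line maps "role:user1,user2"; the result maps user -> roles.
public class UsersRolesSketch {
    private static final Pattern USERS_DELIM = Pattern.compile("\\s*,\\s*");

    static Map<String, List<String>> invert(List<String> lines) {
        Map<String, List<String>> userToRoles = new LinkedHashMap<>();
        for (String line : lines) {
            if (line.startsWith("#")) {
                continue; // comment line
            }
            int i = line.indexOf(':');
            if (i <= 0 || i == line.length() - 1) {
                continue; // malformed entry
            }
            String role = line.substring(0, i).trim();
            for (String user : USERS_DELIM.split(line.substring(i + 1).trim())) {
                userToRoles.computeIfAbsent(user, k -> new ArrayList<>()).add(role);
            }
        }
        return userToRoles;
    }

    public static void main(String[] args) {
        Map<String, List<String>> m = invert(Arrays.asList(
                "admin:alice",
                "monitoring:alice, bob"));
        System.out.println(m); // {alice=[admin, monitoring], bob=[monitoring]}
    }
}
```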
@ -0,0 +1,530 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License;
* you may not use this file except in compliance with the Elastic License.
*/
package org.elasticsearch.shield.authc.esusers.tool;
import com.google.common.base.Joiner;
import com.google.common.collect.*;
import org.apache.commons.cli.CommandLine;
import org.elasticsearch.common.Strings;
import org.elasticsearch.common.cli.CheckFileCommand;
import org.elasticsearch.common.cli.CliTool;
import org.elasticsearch.common.cli.CliToolConfig;
import org.elasticsearch.common.cli.Terminal;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.env.Environment;
import org.elasticsearch.shield.authc.Realms;
import org.elasticsearch.shield.authc.esusers.ESUsersRealm;
import org.elasticsearch.shield.authc.esusers.FileUserPasswdStore;
import org.elasticsearch.shield.authc.esusers.FileUserRolesStore;
import org.elasticsearch.shield.authc.support.Hasher;
import org.elasticsearch.shield.authc.support.SecuredString;
import org.elasticsearch.shield.authz.store.FileRolesStore;
import org.elasticsearch.shield.support.Validation;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.*;
import java.util.regex.Pattern;
import static org.elasticsearch.common.cli.CliToolConfig.Builder.cmd;
import static org.elasticsearch.common.cli.CliToolConfig.Builder.option;
/**
*
*/
public class ESUsersTool extends CliTool {
private static final CliToolConfig CONFIG = CliToolConfig.config("esusers", ESUsersTool.class)
.cmds(Useradd.CMD, Userdel.CMD, Passwd.CMD, Roles.CMD, ListUsersAndRoles.CMD)
.build();
public static void main(String[] args) {
int status = new ESUsersTool().execute(args);
System.exit(status);
}
public ESUsersTool() {
super(CONFIG);
}
public ESUsersTool(Terminal terminal) {
super(CONFIG, terminal);
}
@Override
protected Command parse(String cmdName, CommandLine cli) throws Exception {
switch (cmdName.toLowerCase(Locale.ROOT)) {
case Useradd.NAME:
return Useradd.parse(terminal, cli);
case Userdel.NAME:
return Userdel.parse(terminal, cli);
case Passwd.NAME:
return Passwd.parse(terminal, cli);
case ListUsersAndRoles.NAME:
return ListUsersAndRoles.parse(terminal, cli);
case Roles.NAME:
return Roles.parse(terminal, cli);
default:
assert false : "should never get here, if the user enters an unknown command, an error message should be shown before parse is called";
return null;
}
}
static class Useradd extends CheckFileCommand {
private static final String NAME = "useradd";
private static final CliToolConfig.Cmd CMD = cmd(NAME, Useradd.class)
.options(
option("p", "password").hasArg(true).required(false),
option("r", "roles").hasArg(true).required(false))
.build();
public static Command parse(Terminal terminal, CommandLine cli) {
if (cli.getArgs().length == 0) {
return exitCmd(ExitStatus.USAGE, terminal, "username is missing");
} else if (cli.getArgs().length != 1) {
String[] extra = Arrays.copyOfRange(cli.getArgs(), 1, cli.getArgs().length);
return exitCmd(ExitStatus.USAGE, terminal, "extra arguments " + Arrays.toString(extra) + " were provided. please ensure all special characters are escaped");
}
String username = cli.getArgs()[0];
Validation.Error validationError = Validation.ESUsers.validateUsername(username);
if (validationError != null) {
return exitCmd(ExitStatus.DATA_ERROR, terminal, "Invalid username [" + username + "]... " + validationError);
}
char[] password;
String passwordStr = cli.getOptionValue("password");
if (passwordStr != null) {
password = passwordStr.toCharArray();
validationError = Validation.ESUsers.validatePassword(password);
if (validationError != null) {
return exitCmd(ExitStatus.DATA_ERROR, terminal, "Invalid password..." + validationError);
}
} else {
password = terminal.readSecret("Enter new password: ");
validationError = Validation.ESUsers.validatePassword(password);
if (validationError != null) {
return exitCmd(ExitStatus.DATA_ERROR, terminal, "Invalid password..." + validationError);
}
char[] retyped = terminal.readSecret("Retype new password: ");
if (!Arrays.equals(password, retyped)) {
return exitCmd(ExitStatus.USAGE, terminal, "Password mismatch");
}
}
String rolesCsv = cli.getOptionValue("roles");
String[] roles = (rolesCsv != null) ? rolesCsv.split(",") : Strings.EMPTY_ARRAY;
for (String role : roles) {
validationError = Validation.Roles.validateRoleName(role);
if (validationError != null) {
return exitCmd(ExitStatus.DATA_ERROR, terminal, "Invalid role [" + role + "]... " + validationError);
}
}
return new Useradd(terminal, username, new SecuredString(password), roles);
}
final String username;
final SecuredString passwd;
final String[] roles;
Useradd(Terminal terminal, String username, SecuredString passwd, String... roles) {
super(terminal);
this.username = username;
this.passwd = passwd;
this.roles = roles;
}
@Override
public ExitStatus doExecute(Settings settings, Environment env) throws Exception {
Settings esusersSettings = Realms.internalRealmSettings(settings, ESUsersRealm.TYPE);
verifyRoles(terminal, settings, env, roles);
Path file = FileUserPasswdStore.resolveFile(esusersSettings, env);
Map<String, char[]> users = new HashMap<>(FileUserPasswdStore.parseFile(file, null));
if (users.containsKey(username)) {
terminal.println("User [%s] already exists", username);
return ExitStatus.CODE_ERROR;
}
Hasher hasher = Hasher.BCRYPT;
users.put(username, hasher.hash(passwd));
FileUserPasswdStore.writeFile(users, file);
if (roles != null && roles.length > 0) {
file = FileUserRolesStore.resolveFile(esusersSettings, env);
Map<String, String[]> userRoles = new HashMap<>(FileUserRolesStore.parseFile(file, null));
userRoles.put(username, roles);
FileUserRolesStore.writeFile(userRoles, file);
}
return ExitStatus.OK;
}
@Override
protected Path[] pathsForPermissionsCheck(Settings settings, Environment env) {
Settings esusersSettings = Realms.internalRealmSettings(settings, ESUsersRealm.TYPE);
Path userPath = FileUserPasswdStore.resolveFile(esusersSettings, env);
Path userRolesPath = FileUserRolesStore.resolveFile(esusersSettings, env);
return new Path[] {userPath, userRolesPath};
}
}
static class Userdel extends CheckFileCommand {
private static final String NAME = "userdel";
private static final CliToolConfig.Cmd CMD = cmd(NAME, Userdel.class).build();
public static Command parse(Terminal terminal, CommandLine cli) {
if (cli.getArgs().length == 0) {
return exitCmd(ExitStatus.USAGE, terminal, "username is missing");
} else if (cli.getArgs().length != 1) {
String[] extra = Arrays.copyOfRange(cli.getArgs(), 1, cli.getArgs().length);
return exitCmd(ExitStatus.USAGE, terminal, "extra arguments " + Arrays.toString(extra) + " were provided. userdel only supports deleting one user at a time");
}
String username = cli.getArgs()[0];
return new Userdel(terminal, username);
}
final String username;
Userdel(Terminal terminal, String username) {
super(terminal);
this.username = username;
}
@Override
protected Path[] pathsForPermissionsCheck(Settings settings, Environment env) {
Settings esusersSettings = Realms.internalRealmSettings(settings, ESUsersRealm.TYPE);
Path userPath = FileUserPasswdStore.resolveFile(esusersSettings, env);
Path userRolesPath = FileUserRolesStore.resolveFile(esusersSettings, env);
if (Files.exists(userRolesPath)) {
return new Path[] { userPath, userRolesPath };
}
return new Path[] { userPath };
}
@Override
public ExitStatus doExecute(Settings settings, Environment env) throws Exception {
Settings esusersSettings = Realms.internalRealmSettings(settings, ESUsersRealm.TYPE);
Path file = FileUserPasswdStore.resolveFile(esusersSettings, env);
Map<String, char[]> users = new HashMap<>(FileUserPasswdStore.parseFile(file, null));
if (!users.containsKey(username)) {
terminal.println("User [%s] doesn't exist", username);
return ExitStatus.NO_USER;
}
if (Files.exists(file)) {
char[] passwd = users.remove(username);
if (passwd != null) {
FileUserPasswdStore.writeFile(users, file);
}
}
file = FileUserRolesStore.resolveFile(esusersSettings, env);
Map<String, String[]> userRoles = new HashMap<>(FileUserRolesStore.parseFile(file, null));
if (Files.exists(file)) {
String[] roles = userRoles.remove(username);
if (roles != null) {
FileUserRolesStore.writeFile(userRoles, file);
}
}
return ExitStatus.OK;
}
}
static class Passwd extends CheckFileCommand {
private static final String NAME = "passwd";
private static final CliToolConfig.Cmd CMD = cmd(NAME, Passwd.class)
.options(option("p", "password").hasArg(true).required(false))
.build();
public static Command parse(Terminal terminal, CommandLine cli) {
if (cli.getArgs().length == 0) {
return exitCmd(ExitStatus.USAGE, terminal, "username is missing");
} else if (cli.getArgs().length != 1) {
String[] extra = Arrays.copyOfRange(cli.getArgs(), 1, cli.getArgs().length);
return exitCmd(ExitStatus.USAGE, terminal, "extra arguments " + Arrays.toString(extra) + " were provided");
}
String username = cli.getArgs()[0];
char[] password;
String passwordStr = cli.getOptionValue("password");
if (passwordStr != null) {
password = passwordStr.toCharArray();
} else {
password = terminal.readSecret("Enter new password: ");
char[] retyped = terminal.readSecret("Retype new password: ");
if (!Arrays.equals(password, retyped)) {
return exitCmd(ExitStatus.USAGE, terminal, "Password mismatch");
}
}
return new Passwd(terminal, username, password);
}
final String username;
final SecuredString passwd;
Passwd(Terminal terminal, String username, char[] passwd) {
super(terminal);
this.username = username;
this.passwd = new SecuredString(passwd);
Arrays.fill(passwd, (char) 0);
}
@Override
protected Path[] pathsForPermissionsCheck(Settings settings, Environment env) {
Settings esusersSettings = Realms.internalRealmSettings(settings, ESUsersRealm.TYPE);
Path path = FileUserPasswdStore.resolveFile(esusersSettings, env);
return new Path[] { path };
}
@Override
public ExitStatus doExecute(Settings settings, Environment env) throws Exception {
Settings esusersSettings = Realms.internalRealmSettings(settings, ESUsersRealm.TYPE);
Path file = FileUserPasswdStore.resolveFile(esusersSettings, env);
Map<String, char[]> users = new HashMap<>(FileUserPasswdStore.parseFile(file, null));
if (!users.containsKey(username)) {
terminal.println("User [%s] doesn't exist", username);
return ExitStatus.NO_USER;
}
Hasher hasher = Hasher.BCRYPT;
users.put(username, hasher.hash(passwd));
FileUserPasswdStore.writeFile(users, file);
return ExitStatus.OK;
}
}
static class Roles extends CheckFileCommand {
private static final String NAME = "roles";
private static final CliToolConfig.Cmd CMD = cmd(NAME, Roles.class)
.options(
option("a", "add").hasArg(true).required(false),
option("r", "remove").hasArg(true).required(false))
.build();
public static Command parse(Terminal terminal, CommandLine cli) {
if (cli.getArgs().length == 0) {
return exitCmd(ExitStatus.USAGE, terminal, "username is missing");
} else if (cli.getArgs().length != 1) {
String[] extra = Arrays.copyOfRange(cli.getArgs(), 1, cli.getArgs().length);
return exitCmd(ExitStatus.USAGE, terminal, "extra arguments " + Arrays.toString(extra) + " were provided. please ensure all special characters are escaped");
}
String username = cli.getArgs()[0];
String addRolesCsv = cli.getOptionValue("add");
String[] addRoles = (addRolesCsv != null) ? addRolesCsv.split(",") : Strings.EMPTY_ARRAY;
String removeRolesCsv = cli.getOptionValue("remove");
String[] removeRoles = (removeRolesCsv != null) ? removeRolesCsv.split(",") : Strings.EMPTY_ARRAY;
return new Roles(terminal, username, addRoles, removeRoles);
}
public static final Pattern ROLE_PATTERN = Pattern.compile("[\\w@-]+");
final String username;
final String[] addRoles;
final String[] removeRoles;
public Roles(Terminal terminal, String username, String[] addRoles, String[] removeRoles) {
super(terminal);
this.username = username;
this.addRoles = addRoles;
this.removeRoles = removeRoles;
}
@Override
protected Path[] pathsForPermissionsCheck(Settings settings, Environment env) {
Settings esusersSettings = Realms.internalRealmSettings(settings, ESUsersRealm.TYPE);
// the roles command may also rewrite the users_roles file, so check both
Path usersPath = FileUserPasswdStore.resolveFile(esusersSettings, env);
Path userRolesPath = FileUserRolesStore.resolveFile(esusersSettings, env);
return new Path[] { usersPath, userRolesPath };
}
@Override
public ExitStatus doExecute(Settings settings, Environment env) throws Exception {
// nothing to add or remove - just list the existing roles for the username
boolean readOnlyUserListing = removeRoles.length == 0 && addRoles.length == 0;
if (readOnlyUserListing) {
return new ListUsersAndRoles(terminal, username).execute(settings, env);
}
// check for roles if they match
String[] allRoles = ObjectArrays.concat(addRoles, removeRoles, String.class);
for (String role : allRoles) {
if (!ROLE_PATTERN.matcher(role).matches()) {
terminal.println("Role name [%s] is not valid. Valid characters are letters, digits, '_', '@' and '-'", role);
return ExitStatus.DATA_ERROR;
}
}
Settings esusersSettings = Realms.internalRealmSettings(settings, ESUsersRealm.TYPE);
Path path = FileUserPasswdStore.resolveFile(esusersSettings, env);
Map<String, char[]> usersMap = FileUserPasswdStore.parseFile(path, null);
if (!usersMap.containsKey(username)) {
terminal.println("User [%s] doesn't exist", username);
return ExitStatus.NO_USER;
}
Path file = FileUserRolesStore.resolveFile(esusersSettings, env);
Map<String, String[]> userRoles = FileUserRolesStore.parseFile(file, null);
List<String> roles = Lists.newArrayList();
if (userRoles.get(username) != null) {
roles.addAll(Arrays.asList(userRoles.get(username)));
}
verifyRoles(terminal, settings, env, addRoles);
roles.addAll(Arrays.asList(addRoles));
roles.removeAll(Arrays.asList(removeRoles));
Map<String, String[]> userRolesToWrite = Maps.newHashMapWithExpectedSize(userRoles.size());
userRolesToWrite.putAll(userRoles);
if (roles.size() == 0) {
userRolesToWrite.remove(username);
} else {
userRolesToWrite.put(username, Sets.newLinkedHashSet(roles).toArray(new String[]{}));
}
FileUserRolesStore.writeFile(userRolesToWrite, file);
return ExitStatus.OK;
}
}
static class ListUsersAndRoles extends CliTool.Command {
private static final String NAME = "list";
private static final CliToolConfig.Cmd CMD = cmd(NAME, ListUsersAndRoles.class).build();
public static Command parse(Terminal terminal, CommandLine cli) {
String username = null;
if (cli.getArgs().length == 1) {
username = cli.getArgs()[0];
} else if (cli.getArgs().length > 1) {
String[] extra = Arrays.copyOfRange(cli.getArgs(), 1, cli.getArgs().length);
return exitCmd(ExitStatus.USAGE, terminal, "extra arguments " + Arrays.toString(extra) + " were provided. list can be used without a user or with a single user");
}
return new ListUsersAndRoles(terminal, username);
}
String username;
public ListUsersAndRoles(Terminal terminal, String username) {
super(terminal);
this.username = username;
}
@Override
public ExitStatus execute(Settings settings, Environment env) throws Exception {
Settings esusersSettings = Realms.internalRealmSettings(settings, ESUsersRealm.TYPE);
ImmutableSet<String> knownRoles = loadRoleNames(terminal, settings, env);
Path userRolesFilePath = FileUserRolesStore.resolveFile(esusersSettings, env);
Map<String, String[]> userRoles = FileUserRolesStore.parseFile(userRolesFilePath, null);
Path userFilePath = FileUserPasswdStore.resolveFile(esusersSettings, env);
Set<String> users = FileUserPasswdStore.parseFile(userFilePath, null).keySet();
if (username != null) {
if (!users.contains(username)) {
terminal.println("User [%s] doesn't exist", username);
return ExitStatus.NO_USER;
}
if (userRoles.containsKey(username)) {
String[] roles = userRoles.get(username);
Set<String> unknownRoles = Sets.difference(Sets.newHashSet(roles), knownRoles);
String[] markedRoles = markUnknownRoles(roles, unknownRoles);
terminal.println("%-15s: %s", username, Joiner.on(",").useForNull("-").join(markedRoles));
if (!unknownRoles.isEmpty()) {
// at least one role is marked... so printing the legend
Path rolesFile = FileRolesStore.resolveFile(esusersSettings, env).toAbsolutePath();
terminal.println();
terminal.println(" [*] An unknown role. Please check [%s] to see available roles", rolesFile.toAbsolutePath());
}
} else {
terminal.println("%-15s: -", username);
}
} else {
boolean unknownRolesFound = false;
boolean usersExist = false;
for (Map.Entry<String, String[]> entry : userRoles.entrySet()) {
String[] roles = entry.getValue();
Set<String> unknownRoles = Sets.difference(Sets.newHashSet(roles), knownRoles);
String[] markedRoles = markUnknownRoles(roles, unknownRoles);
terminal.println("%-15s: %s", entry.getKey(), Joiner.on(",").join(markedRoles));
unknownRolesFound = unknownRolesFound || !unknownRoles.isEmpty();
usersExist = true;
}
// list users without roles
Set<String> usersWithoutRoles = Sets.newHashSet(users);
usersWithoutRoles.removeAll(userRoles.keySet());
for (String user : usersWithoutRoles) {
terminal.println("%-15s: -", user);
usersExist = true;
}
if (!usersExist) {
terminal.println("No users found");
return ExitStatus.OK;
}
if (unknownRolesFound) {
// at least one role is marked... so printing the legend
Path rolesFile = FileRolesStore.resolveFile(esusersSettings, env).toAbsolutePath();
terminal.println();
terminal.println(" [*] An unknown role. Please check [%s] to see available roles", rolesFile.toAbsolutePath());
}
}
return ExitStatus.OK;
}
}
private static ImmutableSet<String> loadRoleNames(Terminal terminal, Settings settings, Environment env) {
Path rolesFile = FileRolesStore.resolveFile(settings, env);
try {
return FileRolesStore.parseFileForRoleNames(rolesFile, null);
} catch (Throwable t) {
// if for some reason parsing fails (e.g. the file is malformed) we just warn
terminal.println("Warning: Could not parse [%s] for roles verification. Please revise and fix it. Nonetheless, the user will still be associated with all specified roles", rolesFile.toAbsolutePath());
}
return ImmutableSet.of(); // never null, so callers can safely compute set differences
}
private static String[] markUnknownRoles(String[] roles, Set<String> unknownRoles) {
if (unknownRoles.isEmpty()) {
return roles;
}
String[] marked = new String[roles.length];
for (int i = 0; i < roles.length; i++) {
if (unknownRoles.contains(roles[i])) {
marked[i] = roles[i] + "*";
} else {
marked[i] = roles[i];
}
}
return marked;
}
private static void verifyRoles(Terminal terminal, Settings settings, Environment env, String[] roles) {
ImmutableSet<String> knownRoles = loadRoleNames(terminal, settings, env);
Set<String> unknownRoles = Sets.difference(Sets.newHashSet(roles), knownRoles);
if (!unknownRoles.isEmpty()) {
Path rolesFile = FileRolesStore.resolveFile(settings, env);
terminal.println("Warning: The following roles [%s] are unknown. Make sure to add them to the [%s] file. " +
"Nonetheless the user will still be associated with all specified roles",
Strings.collectionToCommaDelimitedString(unknownRoles), rolesFile.toAbsolutePath());
}
}
}
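The roles command above merges the user's existing roles with the additions, drops the removals, and de-duplicates before writing the users_roles file. A standalone sketch of that merge (class name is illustrative, not part of Shield):

```java
import java.util.*;

// Standalone sketch of the role update performed by the "roles" command
// above: existing roles plus additions, minus removals, de-duplicated
// while preserving insertion order.
public class RolesUpdateSketch {
    static String[] apply(String[] current, String[] add, String[] remove) {
        LinkedHashSet<String> roles = new LinkedHashSet<>(Arrays.asList(current));
        roles.addAll(Arrays.asList(add));
        roles.removeAll(Arrays.asList(remove));
        return roles.toArray(new String[0]);
    }

    public static void main(String[] args) {
        String[] out = apply(new String[]{"admin", "user"},
                new String[]{"monitoring"},
                new String[]{"admin"});
        System.out.println(Arrays.toString(out)); // [user, monitoring]
    }
}
```

An empty result would correspond to the command removing the user's entry from the file entirely, as `doExecute` does above.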
@ -0,0 +1,66 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License;
* you may not use this file except in compliance with the Elastic License.
*/
package org.elasticsearch.shield.authc.ldap;
import org.elasticsearch.common.inject.Inject;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.rest.RestController;
import org.elasticsearch.shield.ShieldSettingsFilter;
import org.elasticsearch.shield.authc.RealmConfig;
import org.elasticsearch.shield.authc.ldap.support.AbstractLdapRealm;
import org.elasticsearch.shield.authc.ldap.support.SessionFactory;
import org.elasticsearch.shield.authc.support.DnRoleMapper;
import org.elasticsearch.shield.ssl.ClientSSLService;
import org.elasticsearch.watcher.ResourceWatcherService;
/**
* Authenticates username/password tokens against ldap, locates groups and maps them to roles.
*/
public class LdapRealm extends AbstractLdapRealm {
public static final String TYPE = "ldap";
public LdapRealm(RealmConfig config, SessionFactory ldap, DnRoleMapper roleMapper) {
super(TYPE, config, ldap, roleMapper);
}
public static class Factory extends AbstractLdapRealm.Factory<LdapRealm> {
private final ResourceWatcherService watcherService;
private final ClientSSLService clientSSLService;
@Inject
public Factory(ResourceWatcherService watcherService, RestController restController, ClientSSLService clientSSLService) {
super(TYPE, restController);
this.watcherService = watcherService;
this.clientSSLService = clientSSLService;
}
@Override
public void filterOutSensitiveSettings(String realmName, ShieldSettingsFilter filter) {
LdapUserSearchSessionFactory.filterOutSensitiveSettings(realmName, filter);
}
@Override
public LdapRealm create(RealmConfig config) {
SessionFactory sessionFactory = sessionFactory(config, clientSSLService);
DnRoleMapper roleMapper = new DnRoleMapper(TYPE, config, watcherService, null);
return new LdapRealm(config, sessionFactory, roleMapper);
}
static SessionFactory sessionFactory(RealmConfig config, ClientSSLService clientSSLService) {
Settings searchSettings = config.settings().getAsSettings("user_search");
if (!searchSettings.names().isEmpty()) {
if (config.settings().getAsArray(LdapSessionFactory.USER_DN_TEMPLATES_SETTING).length > 0) {
throw new IllegalArgumentException("settings were found for both user search and user template modes of operation. " +
"Please remove the settings for the mode you do not wish to use. For more details refer to the ldap authentication section of the Shield guide.");
}
return new LdapUserSearchSessionFactory(config, clientSSLService);
}
return new LdapSessionFactory(config, clientSSLService);
}
}
}
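`sessionFactory` above treats user-search settings and user-DN templates as mutually exclusive modes of operation. A standalone sketch of that selection logic, with a plain map standing in for the real `RealmConfig` settings (all names are illustrative, not part of Shield):

```java
import java.util.*;

// Standalone sketch of the mode selection in LdapRealm.Factory.sessionFactory
// above: user-search settings and user-DN templates are mutually exclusive.
public class LdapModeSketch {
    enum Mode { USER_SEARCH, USER_TEMPLATES }

    static Mode selectMode(Map<String, String> userSearchSettings, String[] userDnTemplates) {
        if (!userSearchSettings.isEmpty()) {
            if (userDnTemplates.length > 0) {
                // both modes configured - fail fast, as the realm does above
                throw new IllegalArgumentException(
                        "settings were found for both user search and user template modes of operation");
            }
            return Mode.USER_SEARCH;
        }
        return Mode.USER_TEMPLATES;
    }

    public static void main(String[] args) {
        Mode m = selectMode(Collections.<String, String>emptyMap(),
                new String[]{"cn={0},ou=people,o=acme"});
        System.out.println(m); // USER_TEMPLATES
    }
}
```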

Some files were not shown because too many files have changed in this diff