HBASE-15298 Fix missing or wrong asciidoc anchors in the reference guide
parent: e58c0385a7
commit: 2966eee602
@@ -66,6 +66,7 @@ the issue there. When you have developed a potential fix, submit it for review.
 If it addresses the issue and is seen as an improvement, one of the HBase committers
 will commit it to one or more branches, as appropriate.
 
+[[submit_doc_patch_procedure]]
 .Procedure: Suggested Work flow for Submitting Patches
 This procedure goes into more detail than Git pros will need, but is included
 in this appendix so that people unfamiliar with Git can feel confident contributing
@@ -501,6 +501,7 @@ It is generally a better idea to use the startRow/stopRow methods on Scan for ro
 This is primarily used for rowcount jobs.
 See link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/filter/FirstKeyOnlyFilter.html[FirstKeyOnlyFilter].
 
+[[architecture.master]]
 == Master
 
 `HMaster` is the implementation of the Master Server.
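
For context, the rowcount job referred to above is the bundled RowCounter MapReduce job, which uses FirstKeyOnlyFilter internally; a minimal invocation, with the table name as a placeholder, would be:

[source,bourne]
----
$ bin/hbase org.apache.hadoop.hbase.mapreduce.RowCounter <tablename>
----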
@@ -1490,6 +1491,7 @@ It's an asynchronous operation and call returns immediately without waiting merg
 Passing `true` as the optional third parameter will force a merge. Normally only adjacent regions can be merged.
 The `force` parameter overrides this behaviour and is for expert use only.
 
+[[store]]
 === Store
 
 A Store hosts a MemStore and 0 or more StoreFiles (HFiles). A Store corresponds to a column family for a table for a given region.
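
The merge described above is issued from the HBase shell; a sketch, with the encoded region names as placeholders:

[source,bourne]
----
hbase> merge_region 'ENCODED_REGIONNAME','ENCODED_REGIONNAME'
hbase> merge_region 'ENCODED_REGIONNAME','ENCODED_REGIONNAME', true
----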
@@ -1552,6 +1554,7 @@ Matteo Bertozzi has also put up a helpful description, link:http://th30z.blogspo
 For more information, see the link:http://hbase.apache.org/xref/org/apache/hadoop/hbase/io/hfile/HFile.html[HFile source code].
 Also see <<hfilev2>> for information about the HFile v2 format that was included in 0.92.
 
+[[hfile_tool]]
 ===== HFile Tool
 
 To view a textualized version of HFile content, you can use the `org.apache.hadoop.hbase.io.hfile.HFile` tool.
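
A sketch of invoking the tool; the HDFS path to the HFile is illustrative:

[source,bourne]
----
$ ${HBASE_HOME}/bin/hbase org.apache.hadoop.hbase.io.hfile.HFile -v -f hdfs://namenode:8020/hbase/data/default/testtable/<region>/<cf>/<hfile>
----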
@@ -1585,6 +1588,7 @@ For more information on compression, see <<compression>>.
 
 For more information on blocks, see the link:http://hbase.apache.org/xref/org/apache/hadoop/hbase/io/hfile/HFileBlock.html[HFileBlock source code].
 
+[[keyvalue]]
 ==== KeyValue
 
 The KeyValue class is the heart of data storage in HBase.
@@ -1670,6 +1674,7 @@ The end result of a _major compaction_ is a single StoreFile per Store.
 Major compactions also process delete markers and max versions.
 See <<compaction.and.deletes>> and <<compaction.and.versions>> for information on how deletes and versions are handled in relation to compactions.
 
+[[compaction.and.deletes]]
 .Compaction and Deletions
 When an explicit deletion occurs in HBase, the data is not actually deleted.
 Instead, a _tombstone_ marker is written.
@@ -1678,6 +1683,7 @@ During a major compaction, the data is actually deleted, and the tombstone marke
 If the deletion happens because of an expired TTL, no tombstone is created.
 Instead, the expired data is filtered out and is not written back to the compacted StoreFile.
 
+[[compaction.and.versions]]
 .Compaction and Versions
 When you create a Column Family, you can specify the maximum number of versions to keep, by specifying `HColumnDescriptor.setMaxVersions(int versions)`.
 The default value is `3`.
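
The HBase shell equivalent, for reference; table and family names are placeholders:

[source,bourne]
----
hbase> create 'test', {NAME => 'cf', VERSIONS => 5}
----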
@@ -1885,7 +1891,7 @@ For a full list of all configuration parameters available, see <<config.files,co
 you are balancing write costs with read costs. Raising the value (to something like
 1.4) will have more write costs, because you will compact larger StoreFiles.
 However, during reads, HBase will need to seek through fewer StoreFiles to
-accomplish the read. Consider this approach if you cannot take advantage of <<bloom>>.
+accomplish the read. Consider this approach if you cannot take advantage of <<blooms>>.
 * Alternatively, you can lower this value to something like 1.0 to reduce the
 background cost of writes, and use it to limit the number of StoreFiles touched
 during reads. For most cases, the default value is appropriate.
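
The value being tuned here appears to be `hbase.hstore.compaction.ratio` (the parameter is not named in this excerpt, so treat that as an assumption); set in _hbase-site.xml_, it would look like:

[source,xml]
----
<property>
  <name>hbase.hstore.compaction.ratio</name>
  <value>1.4</value>
</property>
----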
@@ -2052,7 +2058,7 @@ Why?
 [[compaction.config.impact]]
 .Impact of Key Configuration Options
 
-NOTE: This information is now included in the configuration parameter table in <<compaction.configuration.parameters>>.
+NOTE: This information is now included in the configuration parameter table in <<compaction.parameters>>.
 
 [[ops.stripe]]
 ===== Experimental: Stripe Compactions
@@ -2190,7 +2196,7 @@ When at least `hbase.store.stripe.compaction.minFilesL0` such files (by default,
 [[ops.stripe.config.compact]]
 .Normal Compaction Configuration and Stripe Compaction
 
-All the settings that apply to normal compactions (see <<compaction.configuration.parameters>>) apply to stripe compactions.
+All the settings that apply to normal compactions (see <<compaction.parameters>>) apply to stripe compactions.
 The exceptions are the minimum and maximum number of files, which are set to higher values by default because the files in stripes are smaller.
 To control these for stripe compactions, use `hbase.store.stripe.compaction.minFiles` and `hbase.store.stripe.compaction.maxFiles`, rather than `hbase.hstore.compaction.min` and `hbase.hstore.compaction.max`.
 
@@ -122,6 +122,7 @@ For more details about Prefix Tree encoding, see link:https://issues.apache.org/
 +
 It is difficult to graphically illustrate a prefix tree, so no image is included. See the Wikipedia article for link:http://en.wikipedia.org/wiki/Trie[Trie] for more general information about this data structure.
 
+[[data.block.encoding.types]]
 === Which Compressor or Data Block Encoder To Use
 
 The compression or codec type to use depends on the characteristics of your data. Choosing the wrong type could cause your data to take more space rather than less, and can have performance implications.
@@ -277,7 +278,7 @@ See <<hbase.regionserver.codecs,hbase.regionserver.codecs>>.
 
 LZ4 support is bundled with Hadoop.
 Make sure the hadoop shared library (libhadoop.so) is accessible when you start HBase.
-After configuring your platform (see <<hbase.native.platform,hbase.native.platform>>), you can make a symbolic link from HBase to the native Hadoop libraries.
+After configuring your platform (see <<hadoop.native.lib,hadoop.native.lib>>), you can make a symbolic link from HBase to the native Hadoop libraries.
 This assumes the two software installs are colocated.
 For example, if my 'platform' is Linux-amd64-64:
 [source,bourne]
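
The example itself is cut off by the hunk boundary; a sketch of what the symlink step looks like, assuming colocated installs:

[source,bourne]
----
$ cd $HBASE_HOME
$ mkdir lib/native
$ ln -s $HADOOP_HOME/lib/native lib/native/Linux-amd64-64
----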
@@ -131,6 +131,7 @@ support.
 
 NOTE: In HBase 0.98.5 and newer, you must set `JAVA_HOME` on each node of your cluster. _hbase-env.sh_ provides a handy mechanism to do this.
 
+[[os]]
 .Operating System Utilities
 ssh::
 HBase uses the Secure Shell (ssh) command and utilities extensively to communicate between cluster nodes. Each server in the cluster must be running `ssh` so that the Hadoop and HBase daemons can be managed. You must be able to connect to all nodes via SSH, including the local node, from the Master as well as any backup Master, using a shared key rather than a password. You can see the basic methodology for such a set-up in Linux or Unix systems at "<<passwordless.ssh.quickstart>>". If your cluster nodes use OS X, see the section, link:http://wiki.apache.org/hadoop/Running_Hadoop_On_OS_X_10.5_64-bit_%28Single-Node_Cluster%29[SSH: Setting up Remote Desktop and Enabling Self-Login] on the Hadoop wiki.
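
Setting `JAVA_HOME` in _hbase-env.sh_ amounts to a single line; the JDK path below is illustrative:

[source,bourne]
----
# The java implementation to use.
export JAVA_HOME=/usr/java/jdk1.7.0/
----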
@@ -145,6 +146,7 @@ Loopback IP::
 NTP::
 The clocks on cluster nodes should be synchronized. A small amount of variation is acceptable, but larger amounts of skew can cause erratic and unexpected behavior. Time synchronization is one of the first things to check if you see unexplained problems in your cluster. It is recommended that you run a Network Time Protocol (NTP) service, or another time-synchronization mechanism, on your cluster, and that all nodes look to the same service for time synchronization. See the link:http://www.tldp.org/LDP/sag/html/basic-ntp-config.html[Basic NTP Configuration] at [citetitle]_The Linux Documentation Project (TLDP)_ to set up NTP.
 
+[[ulimit]]
 Limits on Number of Files and Processes (ulimit)::
 Apache HBase is a database. It requires the ability to open a large number of files at once. Many Linux distributions limit the number of files a single user is allowed to open to `1024` (or `256` on older versions of OS X). You can check this limit on your servers by running the command `ulimit -n` when logged in as the user which runs HBase. See <<trouble.rs.runtime.filehandles,the Troubleshooting section>> for some of the problems you may experience if the limit is too low. You may also notice errors such as the following:
 +
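
Raising the limit is typically done in _/etc/security/limits.conf_; a sketch, assuming the daemons run as a user named `hadoop`:

[source,bourne]
----
hadoop  -       nofile  32768
hadoop  -       nproc   32000
----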
@@ -411,6 +413,7 @@ Standalone mode is what is described in the <<quickstart,quickstart>> section.
 In standalone mode, HBase does not use HDFS -- it uses the local filesystem instead -- and it runs all HBase daemons and a local ZooKeeper all up in the same JVM.
 Zookeeper binds to a well known port so clients may talk to HBase.
 
+[[distributed]]
 === Distributed
 
 Distributed mode can be subdivided into distributed but all daemons run on a single node -- a.k.a. _pseudo-distributed_ -- and _fully-distributed_ where the daemons are spread across all nodes in the cluster.
@@ -769,6 +772,7 @@ Disable this functionality if you are running more than one Master: i.e. a backu
 Failing to do so, the dying Master may continue to receive RPCs though another Master has assumed the role of primary.
 See the configuration <<fail.fast.expired.active.master,fail.fast.expired.active.master>>.
 
+[[recommended_configurations]]
 === Recommended Configurations
 
 [[recommended_configurations.zk]]
@@ -294,6 +294,7 @@ dependencies.
 `hdfs://<namenode>:<port>/user/<hadoop-user>/coprocessor.jar`.
 ====
 
+[[load_coprocessor_in_shell]]
 ==== Using HBase Shell
 
 . Disable the table using HBase Shell:
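
The full shell sequence runs disable, alter, enable; a sketch, where the table name and observer class are placeholders:

[source,bourne]
----
hbase> disable 'users'
hbase> alter 'users', METHOD => 'table_att', 'Coprocessor' =>
  'hdfs://<namenode>:<port>/user/<hadoop-user>/coprocessor.jar|org.myname.hbase.RegionObserverExample|1073741823|'
hbase> enable 'users'
----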
@@ -542,6 +542,7 @@ Thus, while HBase can support not only a wide number of columns per row, but a h
 The only way to get a complete set of columns that exist for a ColumnFamily is to process all the rows.
 For more information about how HBase stores data internally, see <<keyvalue,keyvalue>>.
 
+[[joins]]
 == Joins
 
 Whether HBase supports joins is a common question on the dist-list, and there is a simple answer: it doesn't, at least not in the way that RDBMS' support them (e.g., with equi-joins or outer-joins in SQL). As has been illustrated in this chapter, the read data model operations in HBase are Get and Scan.
@@ -94,6 +94,7 @@ See link:http://hbase.apache.org/source-repository.html[Source Code
 
 == IDEs
 
+[[eclipse]]
 === Eclipse
 
 [[eclipse.code.formatting]]
@@ -1759,6 +1760,7 @@ Please understand that not every patch may get committed, and that feedback will
 However, at times it is easier to refer to different versions of a patch if you add `-vX`, where the [replaceable]_X_ is the version (starting with 2).
 * If you need to submit your patch against multiple branches, rather than just master, name each version of the patch with the branch it is for, following the naming conventions in <<submitting.patches.create,submitting.patches.create>>.
 
+[[patching.methods]]
 .Methods to Create Patches
 Eclipse::
 Select the menu item.
@@ -1790,6 +1792,7 @@ See <<hbase.tests,hbase.tests>> for more on how the annotations work.
 
 Significant new features should provide an integration test in addition to unit tests, suitable for exercising the new feature at different points in its configuration space.
 
+[[reviewboard]]
 ==== ReviewBoard
 
 Patches larger than one screen, or patches that will be tricky to review, should go through link:http://reviews.apache.org[ReviewBoard].
@@ -105,7 +105,7 @@ Can I change a table's rowkeys?::
 This is a very common question. You can't. See <<changing.rowkeys>>.
 
 What APIs does HBase support?::
-See <<datamodel>>, <<architecture.client>>, and <<nonjava.jvm>>.
+See <<datamodel>>, <<architecture.client>>, and <<external_apis>>.
 
 === MapReduce
 
@@ -375,6 +375,7 @@ In those versions, you can print the contents of a WAL using the same configurat
 
 See <<compression.test,compression.test>>.
 
+[[copy.table]]
 === CopyTable
 
 CopyTable is a utility that can copy part of or all of a table, either to the same cluster or another cluster.
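
Representative invocations; the table names and the destination ZooKeeper quorum are placeholders:

[source,bourne]
----
$ bin/hbase org.apache.hadoop.hbase.mapreduce.CopyTable --new.name=NewTestTable TestTable
$ bin/hbase org.apache.hadoop.hbase.mapreduce.CopyTable --peer.adr=server1,server2,server3:2181:/hbase TestTable
----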
@@ -436,6 +437,7 @@ By default, CopyTable utility only copies the latest version of row cells unless
 See Jonathan Hsieh's link:http://www.cloudera.com/blog/2012/06/online-hbase-backups-with-copytable-2/[Online
 HBase Backups with CopyTable] blog post for more on `CopyTable`.
 
+[[export]]
 === Export
 
 Export is a utility that will dump the contents of a table to HDFS in a sequence file.
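
Its usage follows the same pattern as the other MapReduce utilities; the bracketed arguments are optional:

[source,bourne]
----
$ bin/hbase org.apache.hadoop.hbase.mapreduce.Export <tablename> <outputdir> [<versions> [<starttime> [<endtime>]]]
----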
@@ -452,6 +454,7 @@ By default, the `Export` tool only exports the newest version of a given cell, r
 
 Note: caching for the input Scan is configured via `hbase.client.scanner.caching` in the job configuration.
 
+[[import]]
 === Import
 
 Import is a utility that will load data that has been exported back into HBase.
@@ -469,6 +472,7 @@ To import 0.94 exported files in a 0.96 cluster or onwards, you need to set syst
 $ bin/hbase -Dhbase.import.version=0.94 org.apache.hadoop.hbase.mapreduce.Import <tablename> <inputdir>
 ----
 
+[[importtsv]]
 === ImportTsv
 
 ImportTsv is a utility that will load data in TSV format into HBase.
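
A sketch of a direct (non-bulk-load) ImportTsv run; the column mapping must include `HBASE_ROW_KEY`, and the family and qualifier names are placeholders:

[source,bourne]
----
$ bin/hbase org.apache.hadoop.hbase.mapreduce.ImportTsv -Dimporttsv.columns=HBASE_ROW_KEY,d:c1,d:c2 <tablename> <hdfs-inputdir>
----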
@@ -560,6 +564,7 @@ If you are preparing a lot of data for bulk loading, make sure the target HBase
 
 For more information about bulk-loading HFiles into HBase, see <<arch.bulk.load,arch.bulk.load>>
 
+[[completebulkload]]
 === CompleteBulkLoad
 
 The `completebulkload` utility will move generated StoreFiles into an HBase table.
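
A sketch of the invocation; the output directory is whatever the bulk-load preparation step wrote:

[source,bourne]
----
$ bin/hbase org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles <hdfs://storefileoutput> <tablename>
----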
@@ -808,6 +813,7 @@ It will verify the region deployed in the new location before it moves the
 At this point, the _graceful_stop.sh_ tells the RegionServer `stop`.
 The master will at this point notice the RegionServer gone but all regions will have already been redeployed and because the RegionServer went down cleanly, there will be no WAL logs to split.
 
+[[lb]]
 .Load Balancer
 [NOTE]
 ====
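
For reference, a graceful decommission is kicked off with the node's hostname as argument:

[source,bourne]
----
$ ./bin/graceful_stop.sh HOSTNAME
----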
@@ -991,6 +997,7 @@ Apart from resulting in higher latency, it may also be able to use all of your n
 For practical purposes, consider that a standard 1GigE NIC won't be able to read much more than _100MB/s_.
 In this case, or if you are in an OLAP environment and require having locality, then it is recommended to major compact the moved regions.
 
+[[hbase_metrics]]
 == HBase Metrics
 
 HBase emits metrics which adhere to the link:http://hadoop.apache.org/core/docs/current/api/org/apache/hadoop/metrics/package-summary.html[Hadoop metrics] API.
@@ -1414,6 +1421,7 @@ The following configuration settings are recommended for maintaining an even dis
 * Set `replication.source.sleepforretries` to `1` (1 second). This value, combined with the value of `replication.source.maxretriesmultiplier`, causes the retry cycle to last about 5 minutes.
 * Set `replication.sleep.before.failover` to `30000` (30 seconds) in the source cluster site configuration.
 
+[[cluster.replication.preserving.tags]]
 .Preserving Tags During Replication
 By default, the codec used for replication between clusters strips tags, such as cell-level ACLs, from cells.
 To prevent the tags from being stripped, you can use a different codec which does not strip them.
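
Expressed as _hbase-site.xml_ properties, those two recommendations would look like this (values taken from the text above):

[source,xml]
----
<property>
  <name>replication.source.sleepforretries</name>
  <value>1</value>
</property>
<property>
  <name>replication.sleep.before.failover</name>
  <value>30000</value>
</property>
----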
@@ -1657,7 +1665,7 @@ You can use the HBase Shell command `status 'replication'` to monitor the replic
 HBase provides the following mechanisms for managing the performance of a cluster
 handling multiple workloads:
 . <<quota>>
-. <<request-queues>>
+. <<request_queues>>
 . <<multiple-typed-queues>>
 
 [[quota]]
@@ -1666,7 +1674,7 @@ HBASE-11598 introduces quotas, which allow you to throttle requests based on
 the following limits:
 
 . <<request-quotas,The number or size of requests(read, write, or read+write) in a given timeframe>>
-. <<namespace-quotas,The number of tables allowed in a namespace>>
+. <<namespace_quotas,The number of tables allowed in a namespace>>
 
 These limits can be enforced for a specified user, table, or namespace.
 
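
For context, the HBASE-11598 shell syntax for a request throttle looks roughly like this; the user and limit are illustrative:

[source,bourne]
----
hbase> set_quota TYPE => THROTTLE, USER => 'u1', LIMIT => '10req/sec'
----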
@@ -1888,7 +1896,7 @@ See the HBase page on link:http://hbase.apache.org/book.html#replication[replica
 [[ops.backup.live.copytable]]
 === Live Cluster Backup - CopyTable
 
-The <<copytable,copytable>> utility could either be used to copy data from one table to another on the same cluster, or to copy data to another table on another cluster.
+The <<copy.table,copytable>> utility could either be used to copy data from one table to another on the same cluster, or to copy data to another table on another cluster.
 
 Since the cluster is up, there is a risk that edits could be missed in the copy process.
 
@@ -2191,7 +2199,7 @@ See <<compaction,compaction>> for some details.
 
 When provisioning for large data sizes, however, it's good to keep in mind that compactions can affect write throughput.
 Thus, for write-intensive workloads, you may opt for less frequent compactions and more store files per region.
-Minimum number of files for compactions (`hbase.hstore.compaction.min`) can be set to higher value; <<hbase.hstore.blockingstorefiles,hbase.hstore.blockingStoreFiles>> should also be increased, as more files might accumulate in such case.
+Minimum number of files for compactions (`hbase.hstore.compaction.min`) can be set to higher value; <<hbase.hstore.blockingStoreFiles,hbase.hstore.blockingStoreFiles>> should also be increased, as more files might accumulate in such case.
 You may also consider manually managing compactions: <<managed.compactions,managed.compactions>>
 
 [[ops.capacity.config.presplit]]
@@ -222,7 +222,7 @@ This memory setting is often adjusted for the RegionServer process depending on
 [[perf.hstore.blockingstorefiles]]
 === `hbase.hstore.blockingStoreFiles`
 
-See <<hbase.hstore.blockingstorefiles>>.
+See <<hbase.hstore.blockingStoreFiles>>.
 If there is blocking in the RegionServer logs, increasing this can help.
 
 [[perf.hregion.memstore.block.multiplier]]
@@ -70,7 +70,7 @@ Please use link:https://issues.apache.org/jira/browse/hbase[JIRA] to report non-
 
 To protect existing HBase installations from new vulnerabilities, please *do not* use JIRA to report security-related bugs. Instead, send your report to the mailing list private@apache.org, which allows anyone to send messages, but restricts who can read them. Someone on that list will contact you to follow up on your report.
 
-[hbase_supported_tested_definitions]
+[[hbase_supported_tested_definitions]]
 .Support and Testing Expectations
 
 The phrases /supported/, /not supported/, /tested/, and /not tested/ occur several
@@ -84,7 +84,7 @@ expectations. Therefore, these rules of thumb are only an overview. Read the res
 of this chapter to get more details after you have gone through this list.
 
 * Aim to have regions sized between 10 and 50 GB.
-* Aim to have cells no larger than 10 MB, or 50 MB if you use <<mob>>. Otherwise,
+* Aim to have cells no larger than 10 MB, or 50 MB if you use <<hbase_mob,mob>>. Otherwise,
 consider storing your cell data in HDFS and store a pointer to the data in HBase.
 * A typical schema has between 1 and 3 column families per table. HBase tables should
 not be designed to mimic RDBMS tables.
@@ -671,7 +671,7 @@ See <<mapreduce.example.summary,mapreduce.example.summary>> for more information
 === Coprocessor Secondary Index
 
 Coprocessors act like RDBMS triggers. These were added in 0.92.
-For more information, see <<coprocessors,coprocessors>>
+For more information, see <<cp,coprocessors>>
 
 == Constraints
 
@@ -572,6 +572,7 @@ Several procedures in this section require you to copy files between cluster nod
 When copying keys, configuration files, or other files containing sensitive strings, use a secure method, such as `ssh`, to avoid leaking sensitive data.
 ====
 
+[[security.data.basic.server.side]]
 .Procedure: Basic Server-Side Configuration
 . Enable HFile v3, by setting `hfile.format.version` to 3 in _hbase-site.xml_.
 This is the default for HBase 1.0 and newer.
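
That first step, spelled out as an _hbase-site.xml_ property:

[source,xml]
----
<property>
  <name>hfile.format.version</name>
  <value>3</value>
</property>
----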
@@ -1068,7 +1069,7 @@ public static void verifyAllowed(User user, AccessTestAction action, int count)
 ----
 ====
 
-
+[[hbase.visibility.labels]]
 === Visibility Labels
 
 Visibility labels control can be used to only permit users or principals associated with a given label to read or access cells with that label.
@@ -557,7 +557,7 @@ You can also tail all the logs at the same time, edit files, etc.
 [[trouble.client]]
 == Client
 
-For more information on the HBase client, see <<client,client>>.
+For more information on the HBase client, see <<architecture.client,client>>.
 
 === Missed Scan Results Due To Mismatch Of `hbase.client.scanner.max.result.size` Between Client and Server
 If either the client or server version is lower than 0.98.11/1.0.0 and the server
@@ -1115,7 +1115,7 @@ to use. Was=myhost-1234, Now=ip-10-55-88-99.ec2.internal
 [[trouble.master]]
 == Master
 
-For more information on the Master, see <<master,master>>.
+For more information on the Master, see <<architecture.master,master>>.
 
 [[trouble.master.startup]]
 === Startup Errors
@@ -98,6 +98,7 @@ These tests ensure that your `createPut` method creates, populates, and returns
 Of course, JUnit can do much more than this.
 For an introduction to JUnit, see https://github.com/junit-team/junit/wiki/Getting-started.
 
+[[mockito]]
 == Mockito
 
 Mockito is a mocking framework.
@@ -20,6 +20,7 @@
 ////
 
 [appendix]
+[[ycsb]]
 == YCSB
 :doctype: book
 :numbered:
@@ -102,7 +102,7 @@ In the example below we have ZooKeeper persist to _/user/local/zookeeper_.
 ====
 The newer the version, the better.
 For example, some folks have been bitten by link:https://issues.apache.org/jira/browse/ZOOKEEPER-1277[ZOOKEEPER-1277].
-If running zookeeper 3.5+, you can ask hbase to make use of the new multi operation by enabling <<hbase.zookeeper.usemulti,hbase.zookeeper.useMulti>> in your _hbase-site.xml_.
+If running zookeeper 3.5+, you can ask hbase to make use of the new multi operation by enabling <<hbase.zookeeper.useMulti,hbase.zookeeper.useMulti>> in your _hbase-site.xml_.
 ====
 
 .ZooKeeper Maintenance
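
Enabling it in _hbase-site.xml_:

[source,xml]
----
<property>
  <name>hbase.zookeeper.useMulti</name>
  <value>true</value>
</property>
----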