HBASE-17121 Undo the building of xref as part of site build; Remove links to xref

parent a8ee83c092
commit be519ca1a5

pom.xml: 19 changed lines
pom.xml
@@ -2942,25 +2942,6 @@
       </configuration>
     </plugin>
-
-    <!-- This seems to be needed by the surefire plugin.
-      The Javadoc below provide code as well -->
-    <plugin>
-      <groupId>org.apache.maven.plugins</groupId>
-      <artifactId>maven-jxr-plugin</artifactId>
-      <version>2.3</version>
-      <configuration>
-        <aggregate>true</aggregate>
-        <test-aggregate>true</test-aggregate>
-        <linkJavadoc>true</linkJavadoc>
-        <javadocDir>${project.reporting.outputDirectory}/devapidocs</javadocDir>
-        <testJavadocDir>${project.reporting.outputDirectory}/testdevapidocs</testJavadocDir>
-        <destDir>${project.reporting.outputDirectory}/xref</destDir>
-        <excludes>
-          <exclude>**/generated/*</exclude>
-        </excludes>
-      </configuration>
-    </plugin>
     <plugin>
       <groupId>org.apache.maven.plugins</groupId>
       <artifactId>maven-javadoc-plugin</artifactId>

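With this plugin block gone, `mvn site` no longer generates the source cross-reference. A developer who still wants a local xref can, in principle, invoke the plugin goals directly (for example `mvn jxr:jxr jxr:test-jxr`); that is only a local workaround and is not something this commit adds.
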
@@ -145,7 +145,7 @@ artifacts using `mvn clean site site:stage`, check out the `asf-site` repository
 . Remove previously-generated content using the following command:
 +
 ----
-rm -rf rm -rf *apidocs* *xref* *book* *.html *.pdf* css js
+rm -rf rm -rf *apidocs* *book* *.html *.pdf* css js
 ----
 +
 WARNING: Do not remove the `0.94/` directory. To regenerate them, you must check out

@@ -670,7 +670,7 @@ if creating a table from java, or set `IN_MEMORY => true` when creating or alter
 hbase(main):003:0> create 't', {NAME => 'f', IN_MEMORY => 'true'}
 ----

-For more information, see the link:http://hbase.apache.org/xref/org/apache/hadoop/hbase/io/hfile/LruBlockCache.html[LruBlockCache source]
+For more information, see the LruBlockCache source

 [[block.cache.usage]]
 ==== LruBlockCache Usage

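The guide text quoted in this hunk mentions flagging a column family IN_MEMORY from Java as well as from the shell. A minimal sketch of the Java side, assuming the HBase 1.x client API (`HColumnDescriptor#setInMemory`); the table name `t` and family `f` simply mirror the shell example above:

[source,java]
----
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class InMemoryTableExample {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      // Mark family 'f' IN_MEMORY so its blocks get the in-memory priority
      // bucket of the LruBlockCache, matching the shell example above.
      HColumnDescriptor family = new HColumnDescriptor("f");
      family.setInMemory(true);
      HTableDescriptor table = new HTableDescriptor(TableName.valueOf("t"));
      table.addFamily(family);
      admin.createTable(table);
    }
  }
}
----
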
@@ -1551,7 +1551,7 @@ StoreFiles are where your data lives.
 The _HFile_ file format is based on the SSTable file described in the link:http://research.google.com/archive/bigtable.html[BigTable [2006]] paper and on Hadoop's link:http://hadoop.apache.org/common/docs/current/api/org/apache/hadoop/io/file/tfile/TFile.html[TFile] (The unit test suite and the compression harness were taken directly from TFile). Schubert Zhang's blog post on link:http://cloudepr.blogspot.com/2009/09/hfile-block-indexed-file-format-to.html[HFile: A Block-Indexed File Format to Store Sorted Key-Value Pairs] makes for a thorough introduction to HBase's HFile.
 Matteo Bertozzi has also put up a helpful description, link:http://th30z.blogspot.com/2011/02/hbase-io-hfile.html?spref=tw[HBase I/O: HFile].

-For more information, see the link:http://hbase.apache.org/xref/org/apache/hadoop/hbase/io/hfile/HFile.html[HFile source code].
+For more information, see the HFile source code.
 Also see <<hfilev2>> for information about the HFile v2 format that was included in 0.92.

 [[hfile_tool]]

@@ -1586,7 +1586,7 @@ The blocksize is configured on a per-ColumnFamily basis.
 Compression happens at the block level within StoreFiles.
 For more information on compression, see <<compression>>.

-For more information on blocks, see the link:http://hbase.apache.org/xref/org/apache/hadoop/hbase/io/hfile/HFileBlock.html[HFileBlock source code].
+For more information on blocks, see the HFileBlock source code.

 [[keyvalue]]
 ==== KeyValue

@@ -1613,7 +1613,7 @@ The Key is further decomposed as:

 KeyValue instances are _not_ split across blocks.
 For example, if there is an 8 MB KeyValue, even if the block-size is 64kb this KeyValue will be read in as a coherent block.
-For more information, see the link:http://hbase.apache.org/xref/org/apache/hadoop/hbase/KeyValue.html[KeyValue source code].
+For more information, see the KeyValue source code.

 [[keyvalue.example]]
 ===== Example

@@ -1741,7 +1741,7 @@ With the ExploringCompactionPolicy, major compactions happen much less frequentl
 In general, ExploringCompactionPolicy is the right choice for most situations, and thus is the default compaction policy.
 You can also use ExploringCompactionPolicy along with <<ops.stripe>>.

-The logic of this policy can be examined in _link:http://hbase.apache.org/xref/org/apache/hadoop/hbase/regionserver/compactions/ExploringCompactionPolicy.html[hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/compactions/ExploringCompactionPolicy.java]_.
+The logic of this policy can be examined in hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/compactions/ExploringCompactionPolicy.java.
 The following is a walk-through of the logic of the ExploringCompactionPolicy.

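The walk-through that the quoted text introduces enumerates contiguous sets of candidate StoreFiles and keeps the best one. A greatly simplified sketch of that selection idea, assuming only the file-count bounds and the size-ratio check (the helper `select`, the plain `long[]` of file sizes, and the hard-coded parameters are illustrative, not the actual HBase API):

[source,java]
----
import java.util.ArrayList;
import java.util.List;

public class ExploringSketch {
  /** Pick the contiguous window of file sizes, within the count bounds, that passes the ratio check. */
  static List<Long> select(long[] sizes, int minFiles, int maxFiles, double ratio) {
    List<Long> best = new ArrayList<>();
    long bestSize = Long.MAX_VALUE;
    for (int start = 0; start < sizes.length; start++) {
      for (int end = start + minFiles - 1; end < sizes.length && end - start + 1 <= maxFiles; end++) {
        long total = 0;
        for (int i = start; i <= end; i++) total += sizes[i];
        // Ratio check: every file must be smaller than the sum of the others times the ratio.
        boolean viable = true;
        for (int i = start; i <= end; i++) {
          if (sizes[i] > (total - sizes[i]) * ratio) { viable = false; break; }
        }
        if (!viable) continue;
        int count = end - start + 1;
        // Prefer more files; break ties by smaller total size (less IO for the same progress).
        if (count > best.size() || (count == best.size() && total < bestSize)) {
          best = new ArrayList<>();
          for (int i = start; i <= end; i++) best.add(sizes[i]);
          bestSize = total;
        }
      }
    }
    return best;
  }

  public static void main(String[] args) {
    // Illustrative sizes in MB: one big old file plus several small flushes.
    // The big file fails the ratio check, so only the small files are selected.
    System.out.println(select(new long[] {512, 16, 14, 12, 11, 10}, 3, 10, 1.2));
  }
}
----

The real policy additionally handles off-peak ratios and the case where too many StoreFiles have piled up; see the walk-through in the reference guide for those details.
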
@@ -1957,7 +1957,7 @@ This section has been preserved for historical reasons and refers to the way com
 You can still use this behavior if you enable <<compaction.ratiobasedcompactionpolicy.algorithm>>. For information on the way that compactions work in HBase 0.96.x and later, see <<compaction>>.
 ====

-To understand the core algorithm for StoreFile selection, there is some ASCII-art in the link:http://hbase.apache.org/xref/org/apache/hadoop/hbase/regionserver/Store.html#836[Store source code] that will serve as useful reference.
+To understand the core algorithm for StoreFile selection, there is some ASCII-art in the Store source code that will serve as useful reference.

 It has been copied below:
 [source]

@@ -475,10 +475,7 @@ In HBase 0.96 and newer, you can instead use the `removeCoprocessor()` method of

 [[cp_example]]
 == Examples
-HBase ships examples for Observer Coprocessor in
-link:http://hbase.apache.org/xref/org/apache/hadoop/hbase/coprocessor/example/ZooKeeperScanPolicyObserver.html[ZooKeeperScanPolicyObserver]
-and for Endpoint Coprocessor in
-link:http://hbase.apache.org/xref/org/apache/hadoop/hbase/coprocessor/example/RowCountEndpoint.html[RowCountEndpoint]
+HBase ships examples for Observer Coprocessor.

 A more detailed example is given below.

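With the inline cross-reference links to ZooKeeperScanPolicyObserver and RowCountEndpoint gone, readers are left with only the sentence above. For orientation, a minimal Observer sketch in the spirit of those shipped examples, assuming the HBase 1.x coprocessor API; the class name and log line are illustrative:

[source,java]
----
import java.io.IOException;
import java.util.List;

import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.coprocessor.BaseRegionObserver;
import org.apache.hadoop.hbase.coprocessor.ObserverContext;
import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
import org.apache.hadoop.hbase.util.Bytes;

/** Logs every Get before the region serves it. */
public class LoggingObserver extends BaseRegionObserver {
  @Override
  public void preGetOp(ObserverContext<RegionCoprocessorEnvironment> ctx,
      Get get, List<Cell> results) throws IOException {
    // Runs on the RegionServer before the Get is executed.
    System.out.println("preGetOp for row " + Bytes.toStringBinary(get.getRow()));
  }
}
----

Such a class would be wired in via the `hbase.coprocessor.region.classes` property or a per-table coprocessor attribute, as described earlier in that chapter.
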
@@ -97,8 +97,6 @@
 <item name="User API (Test)" href="testapidocs/index.html" target="_blank" />
 <item name="Developer API" href="devapidocs/index.html" target="_blank" />
 <item name="Developer API (Test)" href="testdevapidocs/index.html" target="_blank" />
-<item name="X-Ref" href="xref/index.html" />
-<item name="X-Ref (Test)" href="xref-test/index.html" />
 <item name="中文参考指南(单页)" href="http://abloz.com/hbase/book.html" target="_blank" />
 <item name="FAQ" href="book.html#faq" target="_blank" />
 <item name="Videos/Presentations" href="book.html#other.info" target="_blank" />