Edit of the making a release candidate/snapshotting section

git-svn-id: https://svn.apache.org/repos/asf/hbase/trunk@1529487 13f79535-47bb-0310-9956-ffa450edef68
This commit is contained in:
Michael Stack 2013-10-05 17:55:45 +00:00
parent bb7d2fdc21
commit 1fee106d6c
1 changed file with 53 additions and 39 deletions


@ -330,24 +330,10 @@ This time we use the <varname>apache-release</varname> profile instead of just <
it will invoke the apache pom referenced by our poms. It will also sign your artifacts published to mvn as long as your settings.xml in your local <filename>.m2</filename>
repository is configured correctly (your <filename>settings.xml</filename> adds your gpg password property to the apache profile).
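For reference, such a <filename>settings.xml</filename> fragment might look like the following (the profile id and property names here are a sketch based on apache parent pom and maven-gpg-plugin conventions; adjust to your setup):
<programlisting>&lt;settings&gt;
  &lt;profiles&gt;
    &lt;profile&gt;
      &lt;id&gt;apache-release&lt;/id&gt;
      &lt;properties&gt;
        &lt;gpg.keyname&gt;your.gpg.key.id&lt;/gpg.keyname&gt;
        &lt;gpg.passphrase&gt;your.gpg.passphrase&lt;/gpg.passphrase&gt;
      &lt;/properties&gt;
    &lt;/profile&gt;
  &lt;/profiles&gt;
&lt;/settings&gt;</programlisting>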
<programlisting>$ MAVEN_OPTS="-Xmx3g" mvn -f pom.xml.hadoop1 deploy -DskipTests -Papache-release</programlisting>
The last command above copies all artifacts for hadoop1 up to a temporary staging apache mvn repository in an 'open' state.
We'll need to do more work on these maven artifacts to make them generally available, but before we do that,
let's get the hadoop2 build to the same stage as this hadoop1 build.
</para>
<para>Let's do the hadoop2 artifacts (read the hadoop1 section above closely before coming here because we don't repeat the explanation below).
<programlisting># Generate the hadoop2 poms.
@ -359,23 +345,28 @@ $ MAVEN_OPTS="-Xmx3g" mvn -f pom.xml.hadoop2 install -DskipTests site assembly:s
$ MAVEN_OPTS="-Xmx3g" mvn -f pom.xml.hadoop2 deploy -DskipTests -Papache-release
</programlisting>
</para>
<para>Now let's get back to what is up in maven. We should now have two sets of artifacts up in the apache
maven staging area, both in the 'open' state (they may both be under the one staging repository if they were pushed to maven around the same time).
While in this 'open' state you can check out what you've published to make sure all is good. To do this, login at repository.apache.org
using your apache id. Find your artifacts in the staging repository. Browse the content. Make sure all artifacts made it up
and that the poms look generally good. If it all checks out, 'close' the repo. This will make the artifacts publicly available.
You will receive an email with the URL of the temporary staging repository to give out so others can try out this new
release candidate. Include it in the email that announces the release candidate. Folks will need to add this repo URL to their
local poms or to their local settings.xml file to pull the published release candidate artifacts. If the published artifacts are incomplete
or broken, just delete the 'open' staged artifacts.
<note>
<title>hbase-downstreamer</title>
<para>
See the <link xlink:href="https://github.com/saintstack/hbase-downstreamer">hbase-downstreamer</link> test for a simple
example of a project that is downstream of hbase and depends on it.
Check it out and run its simple test to make sure maven hbase-hadoop1 and hbase-hadoop2 are properly deployed to the maven repository.
Be sure to edit its pom to point at the proper staging repo. Make sure you are pulling from the staging repo when tests run and that you are not
getting artifacts from your local repo (pass -U to force remote resolution, or delete your local repo content and check that maven is pulling from the staging repo).
</para>
</note>
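To try out the candidate, a downstream project might add the temporary staging repository to its pom along these lines (the id and URL below are placeholders; use the URL from the close notification email):
<programlisting>&lt;repositories&gt;
  &lt;repository&gt;
    &lt;id&gt;hbase-rc-staging&lt;/id&gt;
    &lt;url&gt;https://repository.apache.org/content/repositories/orgapachehbase-NNNN&lt;/url&gt;
  &lt;/repository&gt;
&lt;/repositories&gt;</programlisting>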
See <link xlink:href="http://www.apache.org/dev/publishing-maven-artifacts.html">Publishing Maven Artifacts</link> for
some pointers on this maven staging process.
<note>
<para>We no longer publish using the maven release plugin. Instead we do mvn deploy. It seems to give
us a backdoor to maven release publishing. If no <emphasis>-SNAPSHOT</emphasis> on the version
@ -385,15 +376,33 @@ $ rsync -av 0.96.0RC0 people.apache.org:public_html
apache snapshot repos).
</para>
</note>
</para>
<para>If the hbase version ends in <varname>-SNAPSHOT</varname>, the artifacts go elsewhere. They are put directly into the apache snapshots repository
and are immediately available. If you are making a SNAPSHOT release, this is what you want to happen.</para>
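<para>The routing rule just described can be sketched as a tiny shell function. This is illustrative only, not actual maven code; it just restates how the <varname>-SNAPSHOT</varname> suffix decides where mvn deploy sends artifacts:

```shell
# Illustrative sketch: where `mvn deploy` routes artifacts, keyed on the
# version suffix (not real maven internals; the rule comes from the text above).
deploy_target() {
  case "$1" in
    *-SNAPSHOT) echo "apache snapshots repository" ;;
    *)          echo "staging repository (release candidate)" ;;
  esac
}

deploy_target "0.96.0-SNAPSHOT"   # apache snapshots repository
deploy_target "0.96.0"            # staging repository (release candidate)
```
</para>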
<para>
At this stage we have three tarballs in our 'version directory' and two sets of artifacts up in the maven staging area, both in the
'closed' state, publicly available in a temporary staging repository whose URL you should have received in an email.
The above-mentioned script, <filename>make_rc.sh</filename>, does all of the above for you, minus the checking of the built artifacts,
the closing of the staging repository up in maven, and the tagging of the release. If you run the script, do your checks at this
stage, verifying the src and bin tarballs and checking what is up in staging using the hbase-downstreamer project. Tag before you start
the build. You can always delete the tag if the build goes haywire.
</para>
<para>
If all checks out, next put the <emphasis>version directory</emphasis> up on people.apache.org. You will need to sign and fingerprint the tarballs before you
push them up. In the <emphasis>version directory</emphasis> do this:
<programlisting>$ for i in *.tar.gz; do echo $i; gpg --print-mds $i > $i.mds ; done
$ for i in *.tar.gz; do echo $i; gpg --armor --output $i.asc --detach-sig $i ; done
$ cd ..
# Presuming our 'version directory' is named 0.96.0RC0, now copy it up to people.apache.org.
$ rsync -av 0.96.0RC0 people.apache.org:public_html
</programlisting>
</para>
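<para>On the receiving end, a voter might check a tarball against these files like so (filenames are examples; this sketch assumes the signer's public key has already been imported into the voter's keyring):
<programlisting>$ gpg --verify hbase-0.96.0-src.tar.gz.asc hbase-0.96.0-src.tar.gz
$ gpg --print-mds hbase-0.96.0-src.tar.gz | diff - hbase-0.96.0-src.tar.gz.mds</programlisting>
</para>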
<para>Make sure the people.apache.org directory is showing and that the
mvn repo urls are good.
Announce the release candidate on the mailing list and call a vote.
</para>
<para>A strange issue I ran into was one where the upload into the apache
repository was sprayed across multiple apache machines, making it so I could
not release. See <link xlink:href="https://issues.apache.org/jira/browse/INFRA-4482">INFRA-4482 Why is my upload to mvn spread across multiple repositories?</link>.</para>
</section>
<section xml:id="maven.snapshot">
<title>Publishing a SNAPSHOT to maven</title>
@ -412,6 +421,11 @@ Next, do the same to publish the hadoop2 artifacts.
<programlisting>$ ./dev-support/generate-hadoopX-poms.sh 0.96.0 0.96.0-hadoop2-SNAPSHOT
$ MAVEN_OPTS="-Xmx3g" mvn -f pom.xml.hadoop2 clean install -DskipTests javadoc:aggregate site assembly:single -Prelease
$ MAVEN_OPTS="-Xmx3g" mvn -f pom.xml.hadoop2 deploy -DskipTests -Papache-release</programlisting>
</para>
<para>The <filename>make_rc.sh</filename> script mentioned above
(see <xref linkend="maven.release"/>) can help you publish <varname>SNAPSHOTS</varname>.
Make sure your hbase.version has a <varname>-SNAPSHOT</varname> suffix and then run
the script. It will put a snapshot up into the apache snapshot repository for you.
</para>
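<para>A quick pre-flight check before running the script might look like the following. This is a hypothetical sketch (it creates a stand-in pom so the example is self-contained); the point is just to confirm the version carries the <varname>-SNAPSHOT</varname> suffix:

```shell
# Hypothetical pre-flight check: confirm the pom version ends in -SNAPSHOT
# before publishing a snapshot. A stand-in pom.xml is created here so the
# example runs on its own; point the grep at your real pom instead.
cat > /tmp/pom-example.xml <<'EOF'
<project><version>0.96.0-hadoop2-SNAPSHOT</version></project>
EOF
grep -q -- '-SNAPSHOT</version>' /tmp/pom-example.xml && echo "ok: snapshot version"
```
</para>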
</section>