ACTIVEMQ6-2 Update to HQ master

This commit is contained in:
Martyn Taylor 2014-11-10 16:14:12 +00:00 committed by Andy Taylor
parent 8ecd255f98
commit 177e6820b5
1022 changed files with 58725 additions and 20756 deletions


@@ -156,16 +156,30 @@ Do not use the [maven-eclipse-plugin] to copy the files as it conflicts with [m2
[maven-eclipse-plugin]: https://maven.apache.org/plugins/maven-eclipse-plugin/
[m2e]: http://eclipse.org/m2e/
## GitHub procedures
## Committing Changes
The best way to submit changes to HornetQ is through pull requests on
GitHub. After review, a pull request should either be merged or
rejected.
### GitHub
When a pull request needs to be reworked (say you have missed
something), it is closed. Once you have addressed the required
changes, reopen your original pull request and it will be
re-evaluated. If it is approved at that point, we will merge it.
We follow the GitHub workflow for all code changes in HornetQ. For information on the GitHub workflow please see:
https://guides.github.com/introduction/flow/index.html
Make sure you always rebase your branch on master before submitting pull requests.
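The rebase step can be sketched as follows. This is a minimal, self-contained illustration using a throwaway repository; the branch name and JIRA keys are made up:

```shell
# Illustrative only: create a scratch repo, let a feature branch and the
# default branch diverge, then rebase the feature branch before it would
# be submitted as a pull request.
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.email dev@example.com
git config user.name Dev
main=$(git symbolic-ref --short HEAD)   # default branch name (master or main)
echo base > file.txt
git add file.txt && git commit -qm "HORNETQ-0001 Add base file"
git checkout -qb my-feature
echo feature > feature.txt
git add feature.txt && git commit -qm "HORNETQ-0002 Add feature file"
git checkout -q "$main"
echo update > update.txt
git add update.txt && git commit -qm "HORNETQ-0003 Add upstream update"
git checkout -q my-feature
git rebase -q "$main"                   # replay the feature commit on top
git log --oneline
```

After the rebase, the feature commit sits on top of the latest upstream commit, so the eventual pull request applies cleanly.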
### Commit Messages
We follow the 50/72 git commit message format. A HornetQ commit message should be formatted in the following manner:
* Add the HornetQ JIRA or Bugzilla reference (if one exists), followed by a brief description of the change, in the first line.
* Insert a single blank line after the first line.
* Provide a detailed description of the change in the following lines, breaking paragraphs where needed.
* Limit the first line to 50 characters.
* Wrap subsequent lines at 72 characters.
An example of a correctly formatted commit message:
```
HORNETQ-1234 Add new commit msg format to README

Adds a description of the new commit message format as well as examples
of well formatted commit messages to the README.md. This is required
to enable developers to quickly identify what the commit is intended to
do and why the commit was added.
```
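The 50/72 limits are easy to check mechanically. The helper below is purely illustrative (it is not part of HornetQ): it prints one line per violation and prints nothing for a well-formed message:

```shell
# check_50_72 FILE: report violations of the 50/72 commit message format.
check_50_72() {
    subject=$(sed -n '1p' "$1")
    second=$(sed -n '2p' "$1")
    [ "${#subject}" -gt 50 ] && echo "subject exceeds 50 characters"
    [ -n "$second" ] && echo "missing blank line after subject"
    # Body lines (line 3 onwards) should wrap at 72 characters.
    awk 'NR > 2 && length($0) > 72 { print "line " NR " exceeds 72 characters" }' "$1"
}

printf '%s\n' \
    'HORNETQ-1234 Add new commit msg format to README' \
    '' \
    'Adds a description of the new commit message format to the README.' \
    > /tmp/msg.txt
check_50_72 /tmp/msg.txt    # prints nothing: the message is well formed
```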


@@ -28,11 +28,86 @@
<properties>
<schemaLocation>${project.build.directory}/${project.artifactId}-${project.version}-bin/${project.artifactId}-${project.version}/schema</schemaLocation>
<standalone>src/main/resources/config/stand-alone</standalone>
<as6>src/main/resources/config/jboss-as-6</as6>
<configLocation>src/main/resources/config</configLocation>
</properties>
<dependencies>
<!-- HornetQ artifacts -->
<dependency>
<groupId>org.hornetq</groupId>
<artifactId>hornetq-server</artifactId>
<version>${project.version}</version>
</dependency>
<dependency>
<groupId>org.hornetq</groupId>
<artifactId>hornetq-dto</artifactId>
<version>${project.version}</version>
</dependency>
<dependency>
<groupId>org.hornetq</groupId>
<artifactId>hornetq-bootstrap</artifactId>
<version>${project.version}</version>
</dependency>
<dependency>
<groupId>org.hornetq</groupId>
<artifactId>hornetq-jms-server</artifactId>
<version>${project.version}</version>
</dependency>
<dependency>
<groupId>org.hornetq</groupId>
<artifactId>hornetq-jms-client</artifactId>
<version>${project.version}</version>
</dependency>
<dependency>
<groupId>org.hornetq</groupId>
<artifactId>hornetq-tools</artifactId>
<version>${project.version}</version>
</dependency>
<dependency>
<groupId>org.hornetq</groupId>
<artifactId>hornetq-jboss-as-integration</artifactId>
<version>${project.version}</version>
</dependency>
<dependency>
<groupId>org.hornetq</groupId>
<artifactId>hornetq-ra</artifactId>
<version>${project.version}</version>
</dependency>
<dependency>
<groupId>org.hornetq</groupId>
<artifactId>hornetq-service-sar</artifactId>
<version>${project.version}</version>
</dependency>
<dependency>
<groupId>org.hornetq</groupId>
<artifactId>hornetq-spring-integration</artifactId>
<version>${project.version}</version>
</dependency>
<dependency>
<groupId>org.hornetq</groupId>
<artifactId>hornetq-twitter-integration</artifactId>
<version>${project.version}</version>
</dependency>
<dependency>
<groupId>org.hornetq</groupId>
<artifactId>hornetq-vertx-integration</artifactId>
<version>${project.version}</version>
</dependency>
<dependency>
<groupId>org.hornetq.rest</groupId>
<artifactId>hornetq-rest</artifactId>
<version>${project.version}</version>
</dependency>
<dependency>
<groupId>org.hornetq</groupId>
<artifactId>hornetq-aerogear-integration</artifactId>
<version>${project.version}</version>
</dependency>
<dependency>
<groupId>org.hornetq</groupId>
<artifactId>jnp-client</artifactId>
<version>${project.version}</version>
</dependency>
<dependency>
<groupId>org.hornetq</groupId>
<artifactId>hornetq-core-client</artifactId>
@@ -49,17 +124,64 @@
<version>${project.version}</version>
</dependency>
<dependency>
<groupId>org.apache.qpid</groupId>
<artifactId>proton-api</artifactId>
<groupId>org.hornetq</groupId>
<artifactId>hornetq-openwire-protocol</artifactId>
<version>${project.version}</version>
</dependency>
<dependency>
<groupId>org.apache.qpid</groupId>
<artifactId>proton-j-impl</artifactId>
<groupId>org.hornetq</groupId>
<artifactId>hornetq-native</artifactId>
<version>${project.version}</version>
</dependency>
<!-- dependencies -->
<dependency>
<groupId>org.apache.qpid</groupId>
<artifactId>proton-jms</artifactId>
</dependency>
<dependency>
<groupId>io.airlift</groupId>
<artifactId>airline</artifactId>
</dependency>
<dependency>
<groupId>org.apache.activemq</groupId>
<artifactId>activemq-client</artifactId>
</dependency>
<dependency>
<groupId>com.fasterxml.jackson.core</groupId>
<artifactId>jackson-databind</artifactId>
<version>${jackson-databind.version}</version>
</dependency>
<!-- javadoc -->
<dependency>
<groupId>org.hornetq</groupId>
<artifactId>hornetq-core-client</artifactId>
<version>${project.version}</version>
<classifier>javadoc</classifier>
</dependency>
<dependency>
<groupId>org.hornetq</groupId>
<artifactId>hornetq-server</artifactId>
<version>${project.version}</version>
<classifier>javadoc</classifier>
</dependency>
<dependency>
<groupId>org.hornetq</groupId>
<artifactId>hornetq-jms-server</artifactId>
<version>${project.version}</version>
<classifier>javadoc</classifier>
</dependency>
<dependency>
<groupId>org.hornetq</groupId>
<artifactId>hornetq-jms-client</artifactId>
<version>${project.version}</version>
<classifier>javadoc</classifier>
</dependency>
</dependencies>
<build>
@@ -107,70 +229,84 @@
<configuration>
<validationSets>
<validationSet>
<dir>${standalone}/clustered</dir>
<dir>${configLocation}/clustered</dir>
<systemId>${schemaLocation}/hornetq-configuration.xsd</systemId>
<includes>
<include>hornetq-configuration.xml</include>
</includes>
</validationSet>
<validationSet>
<dir>${standalone}/non-clustered</dir>
<dir>${configLocation}/non-clustered</dir>
<systemId>${schemaLocation}/hornetq-configuration.xsd</systemId>
<includes>
<include>hornetq-configuration.xml</include>
</includes>
</validationSet>
<validationSet>
<dir>${as6}/clustered</dir>
<dir>${configLocation}/replicated</dir>
<systemId>${schemaLocation}/hornetq-configuration.xsd</systemId>
<includes>
<include>hornetq-configuration.xml</include>
</includes>
</validationSet>
<validationSet>
<dir>${as6}/non-clustered</dir>
<dir>${configLocation}/shared-store</dir>
<systemId>${schemaLocation}/hornetq-configuration.xsd</systemId>
<includes>
<include>hornetq-configuration.xml</include>
</includes>
</validationSet>
<validationSet>
<dir>${standalone}/clustered</dir>
<dir>${configLocation}/clustered</dir>
<systemId>${schemaLocation}/hornetq-jms.xsd</systemId>
<includes>
<include>hornetq-jms.xml</include>
</includes>
</validationSet>
<validationSet>
<dir>${standalone}/non-clustered</dir>
<dir>${configLocation}/non-clustered</dir>
<systemId>${schemaLocation}/hornetq-jms.xsd</systemId>
<includes>
<include>hornetq-jms.xml</include>
</includes>
</validationSet>
<validationSet>
<dir>${as6}/clustered</dir>
<dir>${configLocation}/replicated</dir>
<systemId>${schemaLocation}/hornetq-jms.xsd</systemId>
<includes>
<include>hornetq-jms.xml</include>
</includes>
</validationSet>
<validationSet>
<dir>${as6}/non-clustered</dir>
<dir>${configLocation}/shared-store</dir>
<systemId>${schemaLocation}/hornetq-jms.xsd</systemId>
<includes>
<include>hornetq-jms.xml</include>
</includes>
</validationSet>
<validationSet>
<dir>${standalone}/clustered</dir>
<dir>${configLocation}/clustered</dir>
<systemId>${schemaLocation}/hornetq-users.xsd</systemId>
<includes>
<include>hornetq-users.xml</include>
</includes>
</validationSet>
<validationSet>
<dir>${standalone}/non-clustered</dir>
<dir>${configLocation}/non-clustered</dir>
<systemId>${schemaLocation}/hornetq-users.xsd</systemId>
<includes>
<include>hornetq-users.xml</include>
</includes>
</validationSet>
<validationSet>
<dir>${configLocation}/replicated</dir>
<systemId>${schemaLocation}/hornetq-users.xsd</systemId>
<includes>
<include>hornetq-users.xml</include>
</includes>
</validationSet>
<validationSet>
<dir>${configLocation}/shared-store</dir>
<systemId>${schemaLocation}/hornetq-users.xsd</systemId>
<includes>
<include>hornetq-users.xml</include>
@@ -179,7 +315,6 @@
</validationSets>
</configuration>
</plugin>
</plugins>
</build>


@@ -21,473 +21,92 @@
<format>tar.gz</format>
</formats>
<includeBaseDirectory>true</includeBaseDirectory>
<moduleSets>
<moduleSet>
<useAllReactorProjects>true</useAllReactorProjects>
<includes>
<include>org.hornetq:hornetq-commons</include>
</includes>
<binaries>
<includeDependencies>false</includeDependencies>
<outputDirectory>lib</outputDirectory>
<unpack>false</unpack>
<outputFileNameMapping>${module.artifactId}.${module.extension}</outputFileNameMapping>
</binaries>
</moduleSet>
<moduleSet>
<useAllReactorProjects>true</useAllReactorProjects>
<includes>
<include>org.hornetq:hornetq-journal</include>
</includes>
<binaries>
<includeDependencies>false</includeDependencies>
<outputDirectory>lib</outputDirectory>
<unpack>false</unpack>
<outputFileNameMapping>${module.artifactId}.${module.extension}</outputFileNameMapping>
</binaries>
</moduleSet>
<moduleSet>
<useAllReactorProjects>true</useAllReactorProjects>
<includes>
<include>org.hornetq:hornetq-native</include>
</includes>
<binaries>
<includeDependencies>false</includeDependencies>
<outputDirectory>lib</outputDirectory>
<unpack>false</unpack>
<outputFileNameMapping>${module.artifactId}.${module.extension}</outputFileNameMapping>
</binaries>
</moduleSet>
<moduleSet>
<useAllReactorProjects>true</useAllReactorProjects>
<includes>
<include>org.hornetq:hornetq-tools</include>
</includes>
<binaries>
<includeDependencies>false</includeDependencies>
<outputDirectory>lib</outputDirectory>
<unpack>false</unpack>
<outputFileNameMapping>${module.artifactId}.${module.extension}</outputFileNameMapping>
</binaries>
</moduleSet>
<moduleSet>
<useAllReactorProjects>true</useAllReactorProjects>
<includes>
<include>org.hornetq:hornetq-bootstrap</include>
</includes>
<binaries>
<includeDependencies>false</includeDependencies>
<outputDirectory>lib</outputDirectory>
<unpack>false</unpack>
<outputFileNameMapping>${module.artifactId}.${module.extension}</outputFileNameMapping>
</binaries>
</moduleSet>
<moduleSet>
<useAllReactorProjects>true</useAllReactorProjects>
<includes>
<include>org.hornetq:hornetq-server</include>
</includes>
<binaries>
<includeDependencies>false</includeDependencies>
<outputDirectory></outputDirectory>
<unpack>true</unpack>
<unpackOptions>
<includes>
<include>**/*.xsd</include>
</includes>
</unpackOptions>
</binaries>
</moduleSet>
<moduleSet>
<useAllReactorProjects>true</useAllReactorProjects>
<includes>
<include>org.hornetq:hornetq-native</include>
</includes>
<binaries>
<includeDependencies>false</includeDependencies>
<outputDirectory>bin</outputDirectory>
<unpack>true</unpack>
<unpackOptions>
<includes>
<include>**/*.so</include>
</includes>
</unpackOptions>
</binaries>
</moduleSet>
<moduleSet>
<useAllReactorProjects>true</useAllReactorProjects>
<includes>
<include>org.hornetq:hornetq-core-client</include>
</includes>
<binaries>
<includeDependencies>false</includeDependencies>
<outputDirectory>lib</outputDirectory>
<unpack>false</unpack>
<outputFileNameMapping>${module.artifactId}.${module.extension}</outputFileNameMapping>
</binaries>
</moduleSet>
<moduleSet>
<useAllReactorProjects>true</useAllReactorProjects>
<includes>
<include>org.hornetq:hornetq-server</include>
</includes>
<binaries>
<includeDependencies>false</includeDependencies>
<outputDirectory>lib</outputDirectory>
<unpack>false</unpack>
<outputFileNameMapping>${module.artifactId}.${module.extension}</outputFileNameMapping>
</binaries>
</moduleSet>
<moduleSet>
<useAllReactorProjects>true</useAllReactorProjects>
<includes>
<include>org.hornetq:hornetq-jboss-as-integration</include>
</includes>
<binaries>
<includeDependencies>false</includeDependencies>
<outputDirectory>lib</outputDirectory>
<unpack>false</unpack>
<outputFileNameMapping>${module.artifactId}.${module.extension}</outputFileNameMapping>
</binaries>
</moduleSet>
<moduleSet>
<useAllReactorProjects>true</useAllReactorProjects>
<includes>
<include>org.hornetq:hornetq-jms-client</include>
</includes>
<binaries>
<includeDependencies>false</includeDependencies>
<outputDirectory>lib</outputDirectory>
<unpack>false</unpack>
<outputFileNameMapping>${module.artifactId}.${module.extension}</outputFileNameMapping>
</binaries>
</moduleSet>
<moduleSet>
<useAllReactorProjects>true</useAllReactorProjects>
<includes>
<include>org.hornetq:hornetq-jms-server</include>
</includes>
<binaries>
<includeDependencies>false</includeDependencies>
<outputDirectory>lib</outputDirectory>
<unpack>false</unpack>
<outputFileNameMapping>${module.artifactId}.${module.extension}</outputFileNameMapping>
</binaries>
</moduleSet>
<moduleSet>
<useAllReactorProjects>true</useAllReactorProjects>
<includes>
<include>org.hornetq:hornetq-jms-server</include>
</includes>
<binaries>
<includeDependencies>false</includeDependencies>
<outputDirectory></outputDirectory>
<unpack>true</unpack>
<unpackOptions>
<includes>
<include>**/*.xsd</include>
</includes>
</unpackOptions>
</binaries>
</moduleSet>
<moduleSet>
<useAllReactorProjects>true</useAllReactorProjects>
<includes>
<include>org.hornetq:hornetq-jms-client</include>
</includes>
<binaries>
<includeDependencies>false</includeDependencies>
<outputDirectory>lib</outputDirectory>
<unpack>false</unpack>
<outputFileNameMapping>${module.artifactId}.${module.extension}</outputFileNameMapping>
</binaries>
</moduleSet>
<moduleSet>
<useAllReactorProjects>true</useAllReactorProjects>
<includes>
<include>org.hornetq:hornetq-ra</include>
</includes>
<binaries>
<includeDependencies>false</includeDependencies>
<outputDirectory>lib</outputDirectory>
<unpack>false</unpack>
<outputFileNameMapping>${module.artifactId}.${module.extension}</outputFileNameMapping>
</binaries>
</moduleSet>
<moduleSet>
<useAllReactorProjects>true</useAllReactorProjects>
<includes>
<include>org.hornetq:hornetq-service-sar</include>
</includes>
<binaries>
<includeDependencies>false</includeDependencies>
<outputDirectory>lib</outputDirectory>
<unpack>false</unpack>
<outputFileNameMapping>${module.artifactId}.${module.extension}</outputFileNameMapping>
</binaries>
</moduleSet>
<moduleSet>
<useAllReactorProjects>true</useAllReactorProjects>
<includes>
<include>org.hornetq:hornetq-spring-integration</include>
</includes>
<binaries>
<includeDependencies>false</includeDependencies>
<outputDirectory>lib</outputDirectory>
<unpack>false</unpack>
<outputFileNameMapping>${module.artifactId}.${module.extension}</outputFileNameMapping>
</binaries>
</moduleSet>
<moduleSet>
<useAllReactorProjects>true</useAllReactorProjects>
<includes>
<include>org.hornetq:hornetq-twitter-integration</include>
</includes>
<binaries>
<includeDependencies>false</includeDependencies>
<outputDirectory>lib</outputDirectory>
<unpack>false</unpack>
<outputFileNameMapping>${module.artifactId}.${module.extension}</outputFileNameMapping>
</binaries>
</moduleSet>
<moduleSet>
<useAllReactorProjects>true</useAllReactorProjects>
<includes>
<include>org.hornetq:hornetq-vertx-integration</include>
</includes>
<binaries>
<includeDependencies>false</includeDependencies>
<outputDirectory>lib</outputDirectory>
<unpack>false</unpack>
<outputFileNameMapping>${module.artifactId}.${module.extension}</outputFileNameMapping>
</binaries>
</moduleSet>
<moduleSet>
<useAllReactorProjects>true</useAllReactorProjects>
<includes>
<include>org.hornetq.rest:hornetq-rest</include>
</includes>
<binaries>
<includeDependencies>false</includeDependencies>
<outputDirectory>lib</outputDirectory>
<unpack>false</unpack>
<outputFileNameMapping>${module.artifactId}.${module.extension}</outputFileNameMapping>
</binaries>
</moduleSet>
<moduleSet>
<useAllReactorProjects>true</useAllReactorProjects>
<includes>
<include>org.hornetq:hornetq-amqp-protocol</include>
</includes>
<binaries>
<includeDependencies>false</includeDependencies>
<outputDirectory>lib</outputDirectory>
<unpack>false</unpack>
<outputFileNameMapping>${module.artifactId}.${module.extension}</outputFileNameMapping>
</binaries>
</moduleSet>
<moduleSet>
<useAllReactorProjects>true</useAllReactorProjects>
<includes>
<include>org.hornetq:hornetq-aerogear-integration</include>
</includes>
<binaries>
<includeDependencies>false</includeDependencies>
<outputDirectory>lib</outputDirectory>
<unpack>false</unpack>
<outputFileNameMapping>${module.artifactId}.${module.extension}</outputFileNameMapping>
</binaries>
</moduleSet>
<moduleSet>
<useAllReactorProjects>true</useAllReactorProjects>
<includes>
<include>org.hornetq:hornetq-stomp-protocol</include>
</includes>
<binaries>
<includeDependencies>false</includeDependencies>
<outputDirectory>lib</outputDirectory>
<unpack>false</unpack>
<outputFileNameMapping>${module.artifactId}.${module.extension}</outputFileNameMapping>
</binaries>
</moduleSet>
<moduleSet>
<useAllReactorProjects>true</useAllReactorProjects>
<includes>
<include>org.hornetq:hornetq-selector</include>
</includes>
<binaries>
<includeDependencies>false</includeDependencies>
<outputDirectory>lib</outputDirectory>
<unpack>false</unpack>
<outputFileNameMapping>${module.artifactId}.${module.extension}</outputFileNameMapping>
</binaries>
</moduleSet>
<moduleSet>
<useAllReactorProjects>true</useAllReactorProjects>
<includes>
<include>org.hornetq:jnp-client</include>
</includes>
<binaries>
<includeDependencies>false</includeDependencies>
<outputDirectory>lib</outputDirectory>
<unpack>false</unpack>
<outputFileNameMapping>${module.artifactId}.${module.extension}</outputFileNameMapping>
</binaries>
</moduleSet>
<moduleSet>
<useAllReactorProjects>true</useAllReactorProjects>
<includes>
<include>org.hornetq:jboss-mc</include>
</includes>
<binaries>
<includeDependencies>false</includeDependencies>
<outputDirectory>lib</outputDirectory>
<unpack>false</unpack>
<outputFileNameMapping>${module.artifactId}.${module.extension}</outputFileNameMapping>
</binaries>
</moduleSet>
<moduleSet>
<useAllReactorProjects>true</useAllReactorProjects>
<includes>
<include>org.hornetq:hornetq-core-client</include>
</includes>
<binaries>
<attachmentClassifier>javadoc</attachmentClassifier>
<includeDependencies>false</includeDependencies>
<outputDirectory>docs/api/hornetq-client</outputDirectory>
</binaries>
</moduleSet>
<moduleSet>
<useAllReactorProjects>true</useAllReactorProjects>
<includes>
<include>org.hornetq:hornetq-server</include>
</includes>
<binaries>
<attachmentClassifier>javadoc</attachmentClassifier>
<includeDependencies>false</includeDependencies>
<outputDirectory>docs/api/hornetq-server</outputDirectory>
</binaries>
</moduleSet>
<moduleSet>
<useAllReactorProjects>true</useAllReactorProjects>
<includes>
<include>org.hornetq:hornetq-jms-client</include>
</includes>
<binaries>
<attachmentClassifier>javadoc</attachmentClassifier>
<includeDependencies>false</includeDependencies>
<outputDirectory>docs/api/hornetq-jms-client</outputDirectory>
</binaries>
</moduleSet>
<moduleSet>
<useAllReactorProjects>true</useAllReactorProjects>
<includes>
<include>org.hornetq:hornetq-jms-server</include>
</includes>
<binaries>
<attachmentClassifier>javadoc</attachmentClassifier>
<includeDependencies>false</includeDependencies>
<outputDirectory>docs/api/hornetq-jms-server</outputDirectory>
</binaries>
</moduleSet>
</moduleSets>
<dependencySets>
<dependencySet>
<includes>
<!-- modules -->
<include>org.hornetq:*</include>
<include>org.hornetq.rest:hornetq-rest</include>
<!-- dependencies -->
<include>org.jboss.spec.javax.jms:jboss-jms-api_2.0_spec</include>
</includes>
<outputDirectory>lib</outputDirectory>
<unpack>false</unpack>
<outputFileNameMapping>jboss-jms-api.jar</outputFileNameMapping>
</dependencySet>
<dependencySet>
<includes>
<include>org.jboss.naming:jnpserver</include>
<include>org.jboss.logmanager:jboss-logmanager</include>
<include>org.jboss:jboss-common-core</include>
<include>io.netty:netty-all</include>
<include>org.apache.qpid:proton-j</include>
<include>org.apache.qpid:proton-jms</include>
<include>org.apache.activemq:activemq-client</include>
<include>org.slf4j:slf4j-api</include>
<include>io.airlift:airline</include>
<include>com.google.guava:guava</include>
<include>javax.inject:javax.inject</include>
<include>com.fasterxml.jackson.core:jackson-*</include>
</includes>
<excludes>
<exclude>*:javadoc</exclude>
</excludes>
<outputDirectory>lib</outputDirectory>
<unpack>false</unpack>
<outputFileNameMapping>jnpserver.jar</outputFileNameMapping>
</dependencySet>
<!-- native -->
<dependencySet>
<includes>
<include>io.netty:netty-all</include>
<include>org.hornetq:hornetq-native</include>
</includes>
<outputDirectory>lib</outputDirectory>
<unpack>false</unpack>
<outputFileNameMapping>netty.jar</outputFileNameMapping>
<outputDirectory>bin</outputDirectory>
<unpack>true</unpack>
<unpackOptions>
<includes>
<include>**/*.so</include>
</includes>
</unpackOptions>
</dependencySet>
<!-- javadoc -->
<dependencySet>
<includes>
<include>org.hornetq:hornetq-core-client:*:javadoc</include>
<include>org.hornetq:hornetq-server:*:javadoc</include>
<include>org.hornetq:hornetq-jms-server:*:javadoc</include>
<include>org.hornetq:hornetq-jms-client:*:javadoc</include>
</includes>
<outputDirectory>docs/api/${artifact.artifactId}</outputDirectory>
<unpack>true</unpack>
</dependencySet>
<dependencySet>
<includes>
<include>org.apache.qpid:proton-api</include>
</includes>
<outputDirectory>lib</outputDirectory>
<unpack>false</unpack>
<outputFileNameMapping>proton-api.jar</outputFileNameMapping>
</dependencySet>
<dependencySet>
<includes>
<include>org.apache.qpid:proton-jms</include>
</includes>
<outputDirectory>lib</outputDirectory>
<unpack>false</unpack>
<outputFileNameMapping>proton-jms.jar</outputFileNameMapping>
</dependencySet>
<dependencySet>
<includes>
<include>org.apache.qpid:proton-j-impl</include>
</includes>
<outputDirectory>lib</outputDirectory>
<unpack>false</unpack>
<outputFileNameMapping>proton-j-impl.jar</outputFileNameMapping>
</dependencySet>
</dependencySets>
<fileSets>
<fileSet>
<directory>src/main/resources/config</directory>
<outputDirectory>config</outputDirectory>
<lineEnding>keep</lineEnding>
<excludes>
<exclude>**/trunk/**</exclude>
<exclude>*.properties</exclude>
</excludes>
</fileSet>
<fileSet>
<directory>src/main/resources/bin</directory>
<outputDirectory>bin</outputDirectory>
<lineEnding>keep</lineEnding>
</fileSet>
<fileSet>
<directory>src/main/resources/licenses</directory>
<outputDirectory>licenses</outputDirectory>
<lineEnding>keep</lineEnding>
</fileSet>
<fileSet>
<directory>src/main/resources/examples</directory>
<outputDirectory>examples</outputDirectory>
<lineEnding>keep</lineEnding>
</fileSet>
<fileSet>
<directory>../../examples</directory>
<outputDirectory>examples</outputDirectory>
<lineEnding>keep</lineEnding>
<excludes>
<fileSets>
<!-- schema -->
<fileSet>
<directory>../../hornetq-server/src/main/resources/schema/</directory>
<outputDirectory>schema</outputDirectory>
<lineEnding>keep</lineEnding>
</fileSet>
<fileSet>
<directory>../../hornetq-jms-server/src/main/resources/schema/</directory>
<outputDirectory>schema</outputDirectory>
<lineEnding>keep</lineEnding>
</fileSet>
<!-- resources -->
<fileSet>
<directory>src/main/resources</directory>
<outputDirectory>/</outputDirectory>
<lineEnding>keep</lineEnding>
<includes>
<include>bin/*</include>
<include>config/**</include>
<include>licenses/*</include>
</includes>
</fileSet>
<fileSet>
<directory>../../examples</directory>
<outputDirectory>examples</outputDirectory>
<lineEnding>keep</lineEnding>
<excludes>
<exclude>**/target/**</exclude>
<exclude>**/**/*.iml</exclude>
<exclude>**/**/*.dat</exclude>
</excludes>
</fileSet>
</excludes>
</fileSet>
<!-- docs -->
<!--todo, this is crap, there must be better jdocbook assembly integration-->
<fileSet>
<directory>../../docs/user-manual/target/docbook/publish/en</directory>


@@ -0,0 +1,87 @@
#!/bin/sh
if [ -z "$HORNETQ_HOME" ] ; then
## resolve links - $0 may be a link to hornetq's home
PRG="$0"
progname=`basename "$0"`
saveddir=`pwd`
# need this for relative symlinks
dirname_prg=`dirname "$PRG"`
cd "$dirname_prg"
while [ -h "$PRG" ] ; do
ls=`ls -ld "$PRG"`
link=`expr "$ls" : '.*-> \(.*\)$'`
if expr "$link" : '.*/.*' > /dev/null; then
PRG="$link"
else
PRG=`dirname "$PRG"`"/$link"
fi
done
HORNETQ_HOME=`dirname "$PRG"`
cd "$saveddir"
# make it fully qualified
HORNETQ_HOME=`cd "$HORNETQ_HOME/.." && pwd`
fi
# OS specific support.
cygwin=false;
darwin=false;
case "`uname`" in
CYGWIN*) cygwin=true
OSTYPE=cygwin
export OSTYPE
;;
Darwin*) darwin=true
if [ -z "$JAVA_HOME" ] ; then
JAVA_HOME=/System/Library/Frameworks/JavaVM.framework/Home
fi
;;
esac
# For Cygwin, ensure paths are in UNIX format before anything is touched
if $cygwin ; then
[ -n "$HORNETQ_HOME" ] &&
HORNETQ_HOME=`cygpath --unix "$HORNETQ_HOME"`
[ -n "$JAVA_HOME" ] &&
JAVA_HOME=`cygpath --unix "$JAVA_HOME"`
[ -n "$CLASSPATH" ] &&
CLASSPATH=`cygpath --path --unix "$CLASSPATH"`
fi
if [ -z "$JAVACMD" ] ; then
if [ -n "$JAVA_HOME" ] ; then
if [ -x "$JAVA_HOME/jre/sh/java" ] ; then
# IBM's JDK on AIX uses strange locations for the executables
JAVACMD="$JAVA_HOME/jre/sh/java"
else
JAVACMD="$JAVA_HOME/bin/java"
fi
else
JAVACMD=`which java 2> /dev/null `
if [ -z "$JAVACMD" ] ; then
JAVACMD=java
fi
fi
fi
if [ ! -x "$JAVACMD" ] ; then
echo "Error: JAVA_HOME is not defined correctly."
echo " We cannot execute $JAVACMD"
exit 1
fi
for i in "$HORNETQ_HOME"/lib/*.jar; do
CLASSPATH=$i:$CLASSPATH
done
JAVA_ARGS="-XX:+UseParallelGC -XX:+AggressiveOpts -XX:+UseFastAccessorMethods -Xms512M -Xmx1024M -Djava.naming.factory.initial=org.jnp.interfaces.NamingContextFactory -Djava.naming.factory.url.pkgs=org.jboss.naming:org.jnp.interfaces -Dhornetq.home=$HORNETQ_HOME -Ddata.dir=$HORNETQ_HOME/data -Djava.util.logging.manager=org.jboss.logmanager.LogManager -Dlogging.configuration=file:$HORNETQ_HOME/config/logging.properties -Djava.library.path=$HORNETQ_HOME/bin/lib/linux-i686:$HORNETQ_HOME/bin/lib/linux-x86_64"
#JAVA_ARGS="-agentlib:jdwp=transport=dt_socket,server=y,suspend=y,address=5005 -Djava.naming.factory.initial=org.jnp.interfaces.NamingContextFactory -Djava.naming.factory.url.pkgs=org.jboss.naming:org.jnp.interfaces"
exec "$JAVACMD" $JAVA_ARGS -classpath "$CLASSPATH" org.hornetq.cli.HornetQ "$@"


@@ -0,0 +1,138 @@
#!/bin/sh
service=`basename "$0"`
#
# Discover the HORNETQ_BASE from the location of this script.
#
if [ -z "$HORNETQ_BASE" ] ; then
## resolve links - $0 may be a link to HORNETQ's home
PRG="$0"
saveddir=`pwd`
# need this for relative symlinks
dirname_prg=`dirname "$PRG"`
cd "$dirname_prg"
while [ -h "$PRG" ] ; do
ls=`ls -ld "$PRG"`
link=`expr "$ls" : '.*-> \(.*\)$'`
if expr "$link" : '.*/.*' > /dev/null; then
PRG="$link"
else
PRG=`dirname "$PRG"`"/$link"
fi
done
HORNETQ_BASE=`dirname "$PRG"`
cd "$saveddir"
# make it fully qualified
HORNETQ_BASE=`cd "$HORNETQ_BASE/.." && pwd`
export HORNETQ_BASE
fi
PID_FILE="${HORNETQ_BASE}/data/hornetq.pid"
if [ ! -d "${HORNETQ_BASE}/data/" ]; then
mkdir "${HORNETQ_BASE}/data/"
fi
status() {
if [ -f "${PID_FILE}" ] ; then
pid=`cat "${PID_FILE}"`
# check to see if it's gone...
ps -p ${pid} > /dev/null
if [ $? -eq 0 ] ; then
return 0
else
rm "${PID_FILE}"
return 3
fi
fi
return 3
}
stop() {
if [ -f "${PID_FILE}" ] ; then
pid=`cat "${PID_FILE}"`
kill $@ ${pid} > /dev/null
fi
for i in 1 2 3 4 5 ; do
status
if [ $? -ne 0 ] ; then
return 0
fi
sleep 1
done
echo "Could not stop process ${pid}"
return 1
}
start() {
status
if [ $? -eq 0 ] ; then
echo "Already running."
return 1
fi
nohup ${HORNETQ_BASE}/bin/hornetq run > /dev/null 2> /dev/null &
echo $! > "${PID_FILE}"
# check to see if stays up...
sleep 1
status
if [ $? -ne 0 ] ; then
echo "Could not start ${service}"
return 1
fi
echo "${service} is now running (${pid})"
return 0
}
case $1 in
start)
echo "Starting ${service}"
start
exit $?
;;
force-stop)
echo "Forcibly Stopping ${service}"
stop -9
exit $?
;;
stop)
echo "Gracefully Stopping ${service}"
stop
exit $?
;;
restart)
echo "Restarting ${service}"
stop
start
exit $?
;;
status)
status
rc=$?
if [ $rc -eq 0 ] ; then
echo "${service} is running (${pid})"
else
echo "${service} is stopped"
fi
exit $rc
;;
*)
echo "Usage: $0 {start|stop|restart|force-stop|status}" >&2
exit 2
;;
esac


@@ -0,0 +1,58 @@
@echo off
setlocal
if NOT "%HORNETQ_HOME%"=="" goto CHECK_HORNETQ_HOME
PUSHD .
CD %~dp0..
set HORNETQ_HOME=%CD%
POPD
:CHECK_HORNETQ_HOME
if exist "%HORNETQ_HOME%\bin\hornetq.cmd" goto CHECK_JAVA
:NO_HOME
echo HORNETQ_HOME environment variable is set incorrectly. Please set HORNETQ_HOME.
goto END
:CHECK_JAVA
set _JAVACMD=%JAVACMD%
if "%JAVA_HOME%" == "" goto NO_JAVA_HOME
if not exist "%JAVA_HOME%\bin\java.exe" goto NO_JAVA_HOME
if "%_JAVACMD%" == "" set _JAVACMD=%JAVA_HOME%\bin\java.exe
goto RUN_JAVA
:NO_JAVA_HOME
if "%_JAVACMD%" == "" set _JAVACMD=java.exe
echo.
echo Warning: JAVA_HOME environment variable is not set.
echo.
:RUN_JAVA
if "%JVM_FLAGS%" == "" set JVM_FLAGS=-XX:+UseParallelGC -XX:+AggressiveOpts -XX:+UseFastAccessorMethods -Xms512M -Xmx1024M -Djava.naming.factory.initial=org.jnp.interfaces.NamingContextFactory -Djava.naming.factory.url.pkgs=org.jboss.naming:org.jnp.interfaces -Dhornetq.home="%HORNETQ_HOME%" -Ddata.dir="%HORNETQ_HOME%\data" -Djava.util.logging.manager=org.jboss.logmanager.LogManager -Dlogging.configuration="file:%HORNETQ_HOME%\config\logging.properties" -Djava.library.path="%HORNETQ_HOME%\bin\lib\linux-i686;%HORNETQ_HOME%\bin\lib\linux-x86_64"
if "x%HORNETQ_OPTS%" == "x" goto noHORNETQ_OPTS
set JVM_FLAGS=%JVM_FLAGS% %HORNETQ_OPTS%
:noHORNETQ_OPTS
if "x%HORNETQ_DEBUG%" == "x" goto noDEBUG
set JVM_FLAGS=%JVM_FLAGS% -Xdebug -Xnoagent -Djava.compiler=NONE -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=5005
:noDEBUG
if "x%HORNETQ_PROFILE%" == "x" goto noPROFILE
set JVM_FLAGS=-agentlib:yjpagent %JVM_FLAGS%
:noPROFILE
rem set JMX_OPTS=-Dcom.sun.management.jmxremote.port=1099 -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false
set JVM_FLAGS=%JVM_FLAGS% %JMX_OPTS% -Dhornetq.home="%HORNETQ_HOME%" -classpath "%HORNETQ_HOME%\lib\*"
"%_JAVACMD%" %JVM_FLAGS% org.hornetq.cli.HornetQ %*
:END
endlocal
GOTO :EOF
:EOF

distribution/hornetq/src/main/resources/bin/run.bat Normal file → Executable file

@@ -1,21 +1 @@
@ echo off
setlocal ENABLEDELAYEDEXPANSION
set HORNETQ_HOME=..
IF "a%1"== "a" (
set CONFIG_DIR=%HORNETQ_HOME%\config\stand-alone\non-clustered
) ELSE (
SET CONFIG_DIR=%1
)
set CLASSPATH=%CONFIG_DIR%;%HORNETQ_HOME%\schemas\
REM you can use the following line if you want to run with different ports
REM set CLUSTER_PROPS="-Djnp.port=1099 -Djnp.rmiPort=1098 -Djnp.host=localhost -Dhornetq.remoting.netty.host=localhost -Dhornetq.remoting.netty.port=5445"
set JVM_ARGS=%CLUSTER_PROPS% -XX:+UseParallelGC -XX:+AggressiveOpts -XX:+UseFastAccessorMethods -Xms512M -Xmx1024M -Dhornetq.config.dir=%CONFIG_DIR% -Djava.util.logging.manager=org.jboss.logmanager.LogManager -Djava.util.logging.config.file=%CONFIG_DIR%\logging.properties -Djava.library.path=.
REM export JVM_ARGS="-Xmx512M -Djava.util.logging.manager=org.jboss.logmanager.LogManager -Djava.util.logging.config.file=%CONFIG_DIR%\logging.properties -Dhornetq.config.dir=$CONFIG_DIR -Djava.library.path=. -Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=y,address=5005"
for /R ..\lib %%A in (*.jar) do (
SET CLASSPATH=!CLASSPATH!;%%A
)
mkdir ..\logs
echo ***********************************************************************************
echo "java %JVM_ARGS% -classpath %CLASSPATH% org.hornetq.integration.bootstrap.HornetQBootstrapServer hornetq-beans.xml"
echo ***********************************************************************************
java %JVM_ARGS% -classpath "%CLASSPATH%" org.hornetq.integration.bootstrap.HornetQBootstrapServer hornetq-beans.xml
hornetq.cmd run %*


@@ -1,35 +1,3 @@
#!/bin/sh
#------------------------------------------------
# Simple shell-script to run HornetQ standalone
#------------------------------------------------
export HORNETQ_HOME=..
mkdir -p ../logs
# By default, the server is started in the non-clustered standalone configuration
if [ a"$1" = a ]; then CONFIG_DIR=$HORNETQ_HOME/config/stand-alone/non-clustered; else CONFIG_DIR="$1"; fi
if [ a"$2" = a ]; then FILENAME=hornetq-beans.xml; else FILENAME="$2"; fi
if [ ! -d $CONFIG_DIR ]; then
echo script needs to be run from the HORNETQ_HOME/bin directory >&2
exit 1
fi
RESOLVED_CONFIG_DIR=`cd "$CONFIG_DIR"; pwd`
export CLASSPATH=$RESOLVED_CONFIG_DIR:$HORNETQ_HOME/schemas/
# Use the following line to run with different ports
#export CLUSTER_PROPS="-Djnp.port=1099 -Djnp.rmiPort=1098 -Djnp.host=localhost -Dhornetq.remoting.netty.host=localhost -Dhornetq.remoting.netty.port=5445"
export JVM_ARGS="$CLUSTER_PROPS -XX:+UseParallelGC -XX:+AggressiveOpts -XX:+UseFastAccessorMethods -Xms512M -Xmx1024M -Dhornetq.config.dir=$RESOLVED_CONFIG_DIR -Djava.util.logging.manager=org.jboss.logmanager.LogManager -Dlogging.configuration=file://$RESOLVED_CONFIG_DIR/logging.properties -Djava.library.path=./lib/linux-i686:./lib/linux-x86_64"
#export JVM_ARGS="-Xmx512M -Djava.util.logging.manager=org.jboss.logmanager.LogManager -Dlogging.configuration=$CONFIG_DIR/logging.properties -Dhornetq.config.dir=$CONFIG_DIR -Djava.library.path=. -Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=y,address=5005"
for i in `ls $HORNETQ_HOME/lib/*.jar`; do
CLASSPATH=$i:$CLASSPATH
done
echo "***********************************************************************************"
echo "java $JVM_ARGS -classpath $CLASSPATH org.hornetq.integration.bootstrap.HornetQBootstrapServer $FILENAME"
echo "***********************************************************************************"
java $JVM_ARGS -classpath $CLASSPATH -Dcom.sun.management.jmxremote org.hornetq.integration.bootstrap.HornetQBootstrapServer $FILENAME
./hornetq run "$@"

distribution/hornetq/src/main/resources/bin/stop.bat Normal file → Executable file

@@ -1,9 +1 @@
@echo off
setlocal ENABLEDELAYEDEXPANSION
set HORNETQ_HOME=..
IF "a%1"== "a" (
set CONFIG_DIR=%HORNETQ_HOME%\config\stand-alone\non-clustered
) ELSE (
SET CONFIG_DIR=%1
)
dir >> %CONFIG_DIR%\STOP_ME
hornetq.cmd stop %*


@@ -1,5 +1,3 @@
#!/bin/sh
export HORNETQ_HOME=..
if [ a"$1" = a ]; then CONFIG_DIR=$HORNETQ_HOME/config/stand-alone/non-clustered; else CONFIG_DIR="$1"; fi
touch $CONFIG_DIR/STOP_ME;
./hornetq stop "$@"


@@ -0,0 +1,25 @@
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<!--
Copyright 2005-2014 Red Hat, Inc.
Red Hat licenses this file to you under the Apache License, version
2.0 (the "License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied. See the License for the specific language governing
permissions and limitations under the License.
-->
<broker xmlns="http://hornetq.org/schema">
<core configuration="file:${hornetq.home}/config/clustered/hornetq-configuration.xml"></core>
<jms configuration="file:${hornetq.home}/config/clustered/hornetq-jms.xml"></jms>
<basic-security/>
<naming bindAddress="localhost" port="1099" rmiBindAddress="localhost" rmiPort="1098"/>
</broker>


@@ -1,122 +0,0 @@
<!--
~ Copyright 2009 Red Hat, Inc.
~ Red Hat licenses this file to you under the Apache License, version
~ 2.0 (the "License"); you may not use this file except in compliance
~ with the License. You may obtain a copy of the License at
~ http://www.apache.org/licenses/LICENSE-2.0
~ Unless required by applicable law or agreed to in writing, software
~ distributed under the License is distributed on an "AS IS" BASIS,
~ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
~ implied. See the License for the specific language governing
~ permissions and limitations under the License.
-->
<configuration xmlns="urn:hornetq"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="urn:hornetq /schema/hornetq-configuration.xsd">
<!-- Don't change this name.
This is used by the dependency framework on the deployers,
to make sure this deployment is done before any other deployment -->
<name>HornetQ.main.config</name>
<bindings-directory>${jboss.server.data.dir}/${hornetq.data.dir:hornetq}/bindings</bindings-directory>
<journal-directory>${jboss.server.data.dir}/${hornetq.data.dir:hornetq}/journal</journal-directory>
<journal-min-files>10</journal-min-files>
<large-messages-directory>${jboss.server.data.dir}/${hornetq.data.dir:hornetq}/largemessages</large-messages-directory>
<paging-directory>${jboss.server.data.dir}/${hornetq.data.dir:hornetq}/paging</paging-directory>
<connectors>
<connector name="netty">
<factory-class>org.hornetq.core.remoting.impl.netty.NettyConnectorFactory</factory-class>
<param key="host" value="${jboss.bind.address:localhost}"/>
<param key="port" value="${hornetq.remoting.netty.port:5445}"/>
</connector>
<connector name="netty-throughput">
<factory-class>org.hornetq.core.remoting.impl.netty.NettyConnectorFactory</factory-class>
<param key="host" value="${jboss.bind.address:localhost}"/>
<param key="port" value="${hornetq.remoting.netty.batch.port:5455}"/>
<param key="batch-delay" value="50"/>
</connector>
<connector name="in-vm">
<factory-class>org.hornetq.core.remoting.impl.invm.InVMConnectorFactory</factory-class>
<param key="server-id" value="${hornetq.server-id:0}"/>
</connector>
</connectors>
<acceptors>
<acceptor name="netty">
<factory-class>org.hornetq.core.remoting.impl.netty.NettyAcceptorFactory</factory-class>
<param key="host" value="${jboss.bind.address:localhost}"/>
<param key="port" value="${hornetq.remoting.netty.port:5445}"/>
</acceptor>
<acceptor name="netty-throughput">
<factory-class>org.hornetq.core.remoting.impl.netty.NettyAcceptorFactory</factory-class>
<param key="host" value="${jboss.bind.address:localhost}"/>
<param key="port" value="${hornetq.remoting.netty.batch.port:5455}"/>
<param key="batch-delay" value="50"/>
<param key="direct-deliver" value="false"/>
</acceptor>
<acceptor name="in-vm">
<factory-class>org.hornetq.core.remoting.impl.invm.InVMAcceptorFactory</factory-class>
<param key="server-id" value="0"/>
</acceptor>
</acceptors>
<broadcast-groups>
<broadcast-group name="bg-group1">
<group-address>231.7.7.7</group-address>
<group-port>9876</group-port>
<broadcast-period>5000</broadcast-period>
<connector-ref>netty</connector-ref>
</broadcast-group>
</broadcast-groups>
<discovery-groups>
<discovery-group name="dg-group1">
<group-address>231.7.7.7</group-address>
<group-port>9876</group-port>
<refresh-timeout>10000</refresh-timeout>
</discovery-group>
</discovery-groups>
<cluster-connections>
<cluster-connection name="my-cluster">
<address>jms</address>
<connector-ref>netty</connector-ref>
<discovery-group-ref discovery-group-name="dg-group1"/>
</cluster-connection>
</cluster-connections>
<security-settings>
<security-setting match="#">
<permission type="createNonDurableQueue" roles="guest"/>
<permission type="deleteNonDurableQueue" roles="guest"/>
<permission type="consume" roles="guest"/>
<permission type="send" roles="guest"/>
</security-setting>
</security-settings>
<address-settings>
<!--default for catch all-->
<address-setting match="#">
<dead-letter-address>jms.queue.DLQ</dead-letter-address>
<expiry-address>jms.queue.ExpiryQueue</expiry-address>
<redelivery-delay>0</redelivery-delay>
<max-size-bytes>10485760</max-size-bytes>
<message-counter-history-day-limit>10</message-counter-history-day-limit>
<address-full-policy>BLOCK</address-full-policy>
</address-setting>
</address-settings>
</configuration>


@@ -1,62 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<deployment xmlns="urn:jboss:bean-deployer:2.0">
<!-- MBean server -->
<bean name="MBeanServer" class="javax.management.MBeanServer">
<constructor factoryClass="org.jboss.mx.util.MBeanServerLocator"
factoryMethod="locateJBoss"/>
</bean>
<!-- The core configuration -->
<bean name="Configuration" class="org.hornetq.core.config.impl.FileConfiguration">
<property name="configurationUrl">${jboss.server.home.url}/deploy/hornetq/hornetq-configuration.xml</property>
</bean>
<!-- The security manager -->
<bean name="HornetQSecurityManager" class="org.hornetq.integration.jboss.security.JBossASSecurityManager">
<start ignored="true"/>
<stop ignored="true"/>
<depends>JBossSecurityJNDIContextEstablishment</depends>
<property name="allowClientLogin">false</property>
<property name="authoriseOnClientLogin">false</property>
</bean>
<!-- The core server -->
<bean name="HornetQServer" class="org.hornetq.core.server.impl.HornetQServerImpl">
<constructor>
<parameter>
<inject bean="Configuration"/>
</parameter>
<parameter>
<inject bean="MBeanServer"/>
</parameter>
<parameter>
<inject bean="HornetQSecurityManager"/>
</parameter>
</constructor>
<start ignored="true"/>
<stop ignored="true"/>
</bean>
<!-- The JMS server -->
<bean name="JMSServerManager" class="org.hornetq.jms.server.impl.JMSServerManagerImpl">
<constructor>
<parameter>
<inject bean="HornetQServer"/>
</parameter>
</constructor>
</bean>
<!-- POJO which ensures HornetQ Resource Adapter is stopped before HornetQServer -->
<bean name="HornetQRAService" class="org.hornetq.ra.HornetQRAService">
<constructor>
<parameter>
<inject bean="MBeanServer"/>
</parameter>
<parameter>jboss.jca:name='jms-ra.rar',service=RARDeployment</parameter>
</constructor>
<depends>HornetQServer</depends>
</bean>
</deployment>


@@ -1,47 +0,0 @@
<configuration xmlns="urn:hornetq"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="urn:hornetq /schema/hornetq-jms.xsd">
<connection-factory name="NettyConnectionFactory">
<xa>true</xa>
<connectors>
<connector-ref connector-name="netty"/>
</connectors>
<entries>
<entry name="/ConnectionFactory"/>
<entry name="/XAConnectionFactory"/>
</entries>
</connection-factory>
<connection-factory name="NettyThroughputConnectionFactory">
<xa>true</xa>
<connectors>
<connector-ref connector-name="netty-throughput"/>
</connectors>
<entries>
<entry name="/ThroughputConnectionFactory"/>
<entry name="/XAThroughputConnectionFactory"/>
</entries>
</connection-factory>
<connection-factory name="InVMConnectionFactory">
<xa>true</xa>
<connectors>
<connector-ref connector-name="in-vm"/>
</connectors>
<entries>
<entry name="java:/ConnectionFactory"/>
<entry name="java:/XAConnectionFactory"/>
</entries>
</connection-factory>
<queue name="DLQ">
<entry name="/queue/DLQ"/>
</queue>
<queue name="ExpiryQueue">
<entry name="/queue/ExpiryQueue"/>
</queue>
</configuration>


@@ -1,26 +0,0 @@
<connection-factories>
<!--
JMS Stuff
-->
<mbean code="org.jboss.jms.jndi.JMSProviderLoader" name="hornetq:service=JMSProviderLoader,name=JMSProvider">
<attribute name="ProviderName">DefaultJMSProvider</attribute>
<attribute name="ProviderAdapterClass">org.jboss.jms.jndi.JNDIProviderAdapter</attribute>
<attribute name="FactoryRef">java:/XAConnectionFactory</attribute>
<attribute name="QueueFactoryRef">java:/XAConnectionFactory</attribute>
<attribute name="TopicFactoryRef">java:/XAConnectionFactory</attribute>
</mbean>
<!--
JMS XA Resource adapter, use this to get transacted JMS in beans
-->
<tx-connection-factory>
<jndi-name>JmsXA</jndi-name>
<xa-transaction/>
<rar-name>jms-ra.rar</rar-name>
<connection-definition>org.hornetq.ra.HornetQRAConnectionFactory</connection-definition>
<config-property name="SessionDefaultType" type="java.lang.String">javax.jms.Topic</config-property>
<config-property name="JmsProviderAdapterJNDI" type="java.lang.String">java:/DefaultJMSProvider</config-property>
<max-pool-size>20</max-pool-size>
<security-domain-and-application>JmsXARealm</security-domain-and-application>
</tx-connection-factory>
</connection-factories>


@@ -1,97 +0,0 @@
<!--
~ Copyright 2009 Red Hat, Inc.
~ Red Hat licenses this file to you under the Apache License, version
~ 2.0 (the "License"); you may not use this file except in compliance
~ with the License. You may obtain a copy of the License at
~ http://www.apache.org/licenses/LICENSE-2.0
~ Unless required by applicable law or agreed to in writing, software
~ distributed under the License is distributed on an "AS IS" BASIS,
~ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
~ implied. See the License for the specific language governing
~ permissions and limitations under the License.
-->
<configuration xmlns="urn:hornetq"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="urn:hornetq /schema/hornetq-configuration.xsd">
<!-- Don't change this name.
This is used by the dependency framework on the deployers,
to make sure this deployment is done before any other deployment -->
<name>HornetQ.main.config</name>
<bindings-directory>${jboss.server.data.dir}/hornetq/bindings</bindings-directory>
<journal-directory>${jboss.server.data.dir}/hornetq/journal</journal-directory>
<journal-min-files>10</journal-min-files>
<large-messages-directory>${jboss.server.data.dir}/hornetq/largemessages</large-messages-directory>
<paging-directory>${jboss.server.data.dir}/hornetq/paging</paging-directory>
<connectors>
<connector name="netty">
<factory-class>org.hornetq.core.remoting.impl.netty.NettyConnectorFactory</factory-class>
<param key="host" value="${jboss.bind.address:localhost}"/>
<param key="port" value="${hornetq.remoting.netty.port:5445}"/>
</connector>
<connector name="netty-throughput">
<factory-class>org.hornetq.core.remoting.impl.netty.NettyConnectorFactory</factory-class>
<param key="host" value="${jboss.bind.address:localhost}"/>
<param key="port" value="${hornetq.remoting.netty.batch.port:5455}"/>
<param key="batch-delay" value="50"/>
</connector>
<connector name="in-vm">
<factory-class>org.hornetq.core.remoting.impl.invm.InVMConnectorFactory</factory-class>
<param key="server-id" value="${hornetq.server-id:0}"/>
</connector>
</connectors>
<acceptors>
<acceptor name="netty">
<factory-class>org.hornetq.core.remoting.impl.netty.NettyAcceptorFactory</factory-class>
<param key="host" value="${jboss.bind.address:localhost}"/>
<param key="port" value="${hornetq.remoting.netty.port:5445}"/>
</acceptor>
<acceptor name="netty-throughput">
<factory-class>org.hornetq.core.remoting.impl.netty.NettyAcceptorFactory</factory-class>
<param key="host" value="${jboss.bind.address:localhost}"/>
<param key="port" value="${hornetq.remoting.netty.batch.port:5455}"/>
<param key="batch-delay" value="50"/>
<param key="direct-deliver" value="false"/>
</acceptor>
<acceptor name="in-vm">
<factory-class>org.hornetq.core.remoting.impl.invm.InVMAcceptorFactory</factory-class>
<param key="server-id" value="0"/>
</acceptor>
</acceptors>
<security-settings>
<security-setting match="#">
<permission type="createNonDurableQueue" roles="guest"/>
<permission type="deleteNonDurableQueue" roles="guest"/>
<permission type="consume" roles="guest"/>
<permission type="send" roles="guest"/>
</security-setting>
</security-settings>
<address-settings>
<!--default for catch all-->
<address-setting match="#">
<dead-letter-address>jms.queue.DLQ</dead-letter-address>
<expiry-address>jms.queue.ExpiryQueue</expiry-address>
<redelivery-delay>0</redelivery-delay>
<max-size-bytes>10485760</max-size-bytes>
<message-counter-history-day-limit>10</message-counter-history-day-limit>
<address-full-policy>BLOCK</address-full-policy>
</address-setting>
</address-settings>
</configuration>


@@ -1,61 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<deployment xmlns="urn:jboss:bean-deployer:2.0">
<!-- MBean server -->
<bean name="MBeanServer" class="javax.management.MBeanServer">
<constructor factoryClass="org.jboss.mx.util.MBeanServerLocator"
factoryMethod="locateJBoss"/>
</bean>
<!-- The core configuration -->
<bean name="Configuration" class="org.hornetq.core.config.impl.FileConfiguration">
<property name="configurationUrl">${jboss.server.home.url}/deploy/hornetq/hornetq-configuration.xml</property>
</bean>
<!-- The security manager -->
<bean name="HornetQSecurityManager" class="org.hornetq.integration.jboss.security.JBossASSecurityManager">
<start ignored="true"/>
<stop ignored="true"/>
<depends>JBossSecurityJNDIContextEstablishment</depends>
<property name="allowClientLogin">false</property>
<property name="authoriseOnClientLogin">false</property>
</bean>
<!-- The core server -->
<bean name="HornetQServer" class="org.hornetq.core.server.impl.HornetQServerImpl">
<constructor>
<parameter>
<inject bean="Configuration"/>
</parameter>
<parameter>
<inject bean="MBeanServer"/>
</parameter>
<parameter>
<inject bean="HornetQSecurityManager"/>
</parameter>
</constructor>
<start ignored="true"/>
<stop ignored="true"/>
</bean>
<!-- The JMS server -->
<bean name="JMSServerManager" class="org.hornetq.jms.server.impl.JMSServerManagerImpl">
<constructor>
<parameter>
<inject bean="HornetQServer"/>
</parameter>
</constructor>
</bean>
<!-- POJO which ensures HornetQ Resource Adapter is stopped before HornetQServer -->
<bean name="HornetQRAService" class="org.hornetq.ra.HornetQRAService">
<constructor>
<parameter>
<inject bean="MBeanServer"/>
</parameter>
<parameter>jboss.jca:name='jms-ra.rar',service=RARDeployment</parameter>
</constructor>
<depends>HornetQServer</depends>
</bean>
</deployment>


@@ -1,46 +0,0 @@
<configuration xmlns="urn:hornetq"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="urn:hornetq /schema/hornetq-jms.xsd">
<connection-factory name="NettyConnectionFactory">
<xa>true</xa>
<connectors>
<connector-ref connector-name="netty"/>
</connectors>
<entries>
<entry name="/ConnectionFactory"/>
<entry name="/XAConnectionFactory"/>
</entries>
</connection-factory>
<connection-factory name="NettyThroughputConnectionFactory">
<xa>true</xa>
<connectors>
<connector-ref connector-name="netty-throughput"/>
</connectors>
<entries>
<entry name="/ThroughputConnectionFactory"/>
<entry name="/XAThroughputConnectionFactory"/>
</entries>
</connection-factory>
<connection-factory name="InVMConnectionFactory">
<xa>true</xa>
<connectors>
<connector-ref connector-name="in-vm"/>
</connectors>
<entries>
<entry name="java:/ConnectionFactory"/>
<entry name="java:/XAConnectionFactory"/>
</entries>
</connection-factory>
<queue name="DLQ">
<entry name="/queue/DLQ"/>
</queue>
<queue name="ExpiryQueue">
<entry name="/queue/ExpiryQueue"/>
</queue>
</configuration>


@@ -1,26 +0,0 @@
<connection-factories>
<!--
JMS Stuff
-->
<mbean code="org.jboss.jms.jndi.JMSProviderLoader" name="hornetq:service=JMSProviderLoader,name=JMSProvider">
<attribute name="ProviderName">DefaultJMSProvider</attribute>
<attribute name="ProviderAdapterClass">org.jboss.jms.jndi.JNDIProviderAdapter</attribute>
<attribute name="FactoryRef">java:/XAConnectionFactory</attribute>
<attribute name="QueueFactoryRef">java:/XAConnectionFactory</attribute>
<attribute name="TopicFactoryRef">java:/XAConnectionFactory</attribute>
</mbean>
<!--
JMS XA Resource adapter, use this to get transacted JMS in beans
-->
<tx-connection-factory>
<jndi-name>JmsXA</jndi-name>
<xa-transaction/>
<rar-name>jms-ra.rar</rar-name>
<connection-definition>org.hornetq.ra.HornetQRAConnectionFactory</connection-definition>
<config-property name="SessionDefaultType" type="java.lang.String">javax.jms.Topic</config-property>
<config-property name="JmsProviderAdapterJNDI" type="java.lang.String">java:/DefaultJMSProvider</config-property>
<max-pool-size>20</max-pool-size>
<security-domain-and-application>JmsXARealm</security-domain-and-application>
</tx-connection-factory>
</connection-factories>


@@ -22,7 +22,7 @@
# Additional logger names to configure (root logger is always configured)
# Root logger option
loggers=org.jboss.logging,org.hornetq.core.server,org.hornetq.utils,org.hornetq.journal,org.hornetq.jms,org.hornetq.integration.bootstrap
loggers=org.jboss.logging,org.hornetq.core.server,org.hornetq.utils,org.hornetq.journal,org.hornetq.jms.server,org.hornetq.integration.bootstrap
# Root logger level
logger.level=INFO
@@ -47,7 +47,7 @@ handler.FILE=org.jboss.logmanager.handlers.FileHandler
handler.FILE.level=DEBUG
handler.FILE.properties=autoFlush,fileName
handler.FILE.autoFlush=true
handler.FILE.fileName=logs/hornetq.log
handler.FILE.fileName=${hornetq.home}/logs/hornetq.log
handler.FILE.formatter=PATTERN
# Formatter pattern configuration


@@ -0,0 +1,25 @@
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<!--
Copyright 2005-2014 Red Hat, Inc.
Red Hat licenses this file to you under the Apache License, version
2.0 (the "License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied. See the License for the specific language governing
permissions and limitations under the License.
-->
<broker xmlns="http://hornetq.org/schema">
<core configuration="file:${hornetq.home}/config/non-clustered/hornetq-configuration.xml"></core>
<jms configuration="file:${hornetq.home}/config/non-clustered/hornetq-jms.xml"></jms>
<basic-security/>
<naming bindAddress="localhost" port="1099" rmiBindAddress="localhost" rmiPort="1098"/>
</broker>


@@ -0,0 +1,25 @@
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<!--
Copyright 2005-2014 Red Hat, Inc.
Red Hat licenses this file to you under the Apache License, version
2.0 (the "License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied. See the License for the specific language governing
permissions and limitations under the License.
-->
<broker xmlns="http://hornetq.org/schema">
<core configuration="file:${hornetq.home}/config/replicated/hornetq-configuration.xml"></core>
<jms configuration="file:${hornetq.home}/config/replicated/hornetq-jms.xml"></jms>
<basic-security/>
<naming bindAddress="localhost" port="1099" rmiBindAddress="localhost" rmiPort="1098"/>
</broker>


@@ -5,9 +5,6 @@
if you want to run this as a backup on different ports you would need to set the following variable
export CLUSTER_PROPS="-Djnp.port=1199 -Djnp.rmiPort=1198 -Djnp.host=localhost -Dhornetq.remoting.netty.host=localhost -Dhornetq.remoting.netty.port=5545 -Dhornetq.remoting.netty.batch.port=5555 -Dhornetq.backup=true"
-->
<shared-store>true</shared-store>
<backup>${hornetq.backup:false}</backup>
<paging-directory>${data.dir:../data}/paging</paging-directory>
@@ -74,6 +71,12 @@
<discovery-group-ref discovery-group-name="dg-group1"/>
</cluster-connection>
</cluster-connections>
<ha-policy>
<replication>
<master/>
</replication>
</ha-policy>
<security-settings>
<security-setting match="#">


@@ -0,0 +1,25 @@
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<!--
Copyright 2005-2014 Red Hat, Inc.
Red Hat licenses this file to you under the Apache License, version
2.0 (the "License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied. See the License for the specific language governing
permissions and limitations under the License.
-->
<broker xmlns="http://hornetq.org/schema">
<core configuration="file:${hornetq.home}/config/shared-store/hornetq-configuration.xml"></core>
<jms configuration="file:${hornetq.home}/config/shared-store/hornetq-jms.xml"></jms>
<basic-security/>
<naming bindAddress="localhost" port="1099" rmiBindAddress="localhost" rmiPort="1098"/>
</broker>


@@ -5,9 +5,6 @@
if you want to run this as a backup on different ports you would need to set the following variable
export CLUSTER_PROPS="-Djnp.port=1199 -Djnp.rmiPort=1198 -Djnp.host=localhost -Dhornetq.remoting.netty.host=localhost -Dhornetq.remoting.netty.port=5545 -Dhornetq.remoting.netty.batch.port=5555 -Dhornetq.backup=true"
-->
<shared-store>false</shared-store>
<backup>${hornetq.backup:false}</backup>
<paging-directory>${data.dir:../data}/paging</paging-directory>
@@ -74,6 +71,12 @@
<discovery-group-ref discovery-group-name="dg-group1"/>
</cluster-connection>
</cluster-connections>
<ha-policy>
<shared-store>
<master/>
</shared-store>
</ha-policy>
<security-settings>
<security-setting match="#">


@@ -1,61 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<deployment xmlns="urn:jboss:bean-deployer:2.0">
<!-- MBean server -->
<bean name="MBeanServer" class="javax.management.MBeanServer">
<constructor factoryClass="java.lang.management.ManagementFactory"
factoryMethod="getPlatformMBeanServer"/>
</bean>
<!-- The core configuration -->
<bean name="Configuration" class="org.hornetq.core.config.impl.FileConfiguration">
</bean>
<!-- The security manager -->
<bean name="HornetQSecurityManager" class="org.hornetq.spi.core.security.HornetQSecurityManagerImpl">
<start ignored="true"/>
<stop ignored="true"/>
</bean>
<!-- The core server -->
<bean name="HornetQServer" class="org.hornetq.core.server.impl.HornetQServerImpl">
<constructor>
<parameter>
<inject bean="Configuration"/>
</parameter>
<parameter>
<inject bean="MBeanServer"/>
</parameter>
<parameter>
<inject bean="HornetQSecurityManager"/>
</parameter>
</constructor>
<start ignored="true"/>
<stop ignored="true"/>
</bean>
<!-- The Stand alone server that controls the jndi server-->
<bean name="StandaloneServer" class="org.hornetq.jms.server.impl.StandaloneNamingServer">
<constructor>
<parameter>
<inject bean="HornetQServer"/>
</parameter>
</constructor>
<property name="port">${jnp.port:1099}</property>
<property name="bindAddress">${jnp.host:localhost}</property>
<property name="rmiPort">${jnp.rmiPort:1098}</property>
<property name="rmiBindAddress">${jnp.host:localhost}</property>
</bean>
<!-- The JMS server -->
<bean name="JMSServerManager" class="org.hornetq.jms.server.impl.JMSServerManagerImpl">
<constructor>
<parameter>
<inject bean="HornetQServer"/>
</parameter>
</constructor>
</bean>
</deployment>


@@ -1,56 +0,0 @@
#
# JBoss, Home of Professional Open Source.
# Copyright 2010, Red Hat, Inc., and individual contributors
# as indicated by the @author tags. See the copyright.txt file in the
# distribution for a full listing of individual contributors.
#
# This is free software; you can redistribute it and/or modify it
# under the terms of the GNU Lesser General Public License as
# published by the Free Software Foundation; either version 2.1 of
# the License, or (at your option) any later version.
#
# This software is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public
# License along with this software; if not, write to the Free
# Software Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA
# 02110-1301 USA, or see the FSF site: http://www.fsf.org.
#
# Additional logger names to configure (root logger is always configured)
# Root logger option
loggers=org.jboss.logging,org.hornetq.core.server,org.hornetq.utils,org.hornetq.journal,org.hornetq.jms,org.hornetq.integration.bootstrap
# Root logger level
logger.level=INFO
# HornetQ logger levels
logger.org.hornetq.core.server.level=INFO
logger.org.hornetq.journal.level=INFO
logger.org.hornetq.utils.level=INFO
logger.org.hornetq.jms.level=INFO
logger.org.hornetq.integration.bootstrap.level=INFO
# Root logger handlers
logger.handlers=FILE,CONSOLE
# Console handler configuration
handler.CONSOLE=org.jboss.logmanager.handlers.ConsoleHandler
handler.CONSOLE.properties=autoFlush
handler.CONSOLE.level=DEBUG
handler.CONSOLE.autoFlush=true
handler.CONSOLE.formatter=PATTERN
# File handler configuration
handler.FILE=org.jboss.logmanager.handlers.FileHandler
handler.FILE.level=DEBUG
handler.FILE.properties=autoFlush,fileName
handler.FILE.autoFlush=true
handler.FILE.fileName=logs/hornetq.log
handler.FILE.formatter=PATTERN
# Formatter pattern configuration
formatter.PATTERN=org.jboss.logmanager.formatters.PatternFormatter
formatter.PATTERN.properties=pattern
formatter.PATTERN.pattern=%d{HH:mm:ss,SSS} %-5p [%c] %s%E%n


@@ -1,61 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<deployment xmlns="urn:jboss:bean-deployer:2.0">
<!-- MBean server -->
<bean name="MBeanServer" class="javax.management.MBeanServer">
<constructor factoryClass="java.lang.management.ManagementFactory"
factoryMethod="getPlatformMBeanServer"/>
</bean>
<!-- The core configuration -->
<bean name="Configuration" class="org.hornetq.core.config.impl.FileConfiguration">
</bean>
<!-- The security manager -->
<bean name="HornetQSecurityManager" class="org.hornetq.spi.core.security.HornetQSecurityManagerImpl">
<start ignored="true"/>
<stop ignored="true"/>
</bean>
<!-- The core server -->
<bean name="HornetQServer" class="org.hornetq.core.server.impl.HornetQServerImpl">
<constructor>
<parameter>
<inject bean="Configuration"/>
</parameter>
<parameter>
<inject bean="MBeanServer"/>
</parameter>
<parameter>
<inject bean="HornetQSecurityManager"/>
</parameter>
</constructor>
<start ignored="true"/>
<stop ignored="true"/>
</bean>
<!-- The Stand alone server that controls the jndi server-->
<bean name="StandaloneServer" class="org.hornetq.jms.server.impl.StandaloneNamingServer">
<constructor>
<parameter>
<inject bean="HornetQServer"/>
</parameter>
</constructor>
<property name="port">${jnp.port:1099}</property>
<property name="bindAddress">${jnp.host:localhost}</property>
<property name="rmiPort">${jnp.rmiPort:1098}</property>
<property name="rmiBindAddress">${jnp.host:localhost}</property>
</bean>
<!-- The JMS server -->
<bean name="JMSServerManager" class="org.hornetq.jms.server.impl.JMSServerManagerImpl">
<constructor>
<parameter>
<inject bean="HornetQServer"/>
</parameter>
</constructor>
</bean>
</deployment>

@@ -1,61 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<deployment xmlns="urn:jboss:bean-deployer:2.0">
<!-- MBean server -->
<bean name="MBeanServer" class="javax.management.MBeanServer">
<constructor factoryClass="java.lang.management.ManagementFactory"
factoryMethod="getPlatformMBeanServer"/>
</bean>
<!-- The core configuration -->
<bean name="Configuration" class="org.hornetq.core.config.impl.FileConfiguration">
</bean>
<!-- The security manager -->
<bean name="HornetQSecurityManager" class="org.hornetq.spi.core.security.HornetQSecurityManagerImpl">
<start ignored="true"/>
<stop ignored="true"/>
</bean>
<!-- The core server -->
<bean name="HornetQServer" class="org.hornetq.core.server.impl.HornetQServerImpl">
<constructor>
<parameter>
<inject bean="Configuration"/>
</parameter>
<parameter>
<inject bean="MBeanServer"/>
</parameter>
<parameter>
<inject bean="HornetQSecurityManager"/>
</parameter>
</constructor>
<start ignored="true"/>
<stop ignored="true"/>
</bean>
<!-- The Stand alone server that controls the jndi server-->
<bean name="StandaloneServer" class="org.hornetq.jms.server.impl.StandaloneNamingServer">
<constructor>
<parameter>
<inject bean="HornetQServer"/>
</parameter>
</constructor>
<property name="port">${jnp.port:1099}</property>
<property name="bindAddress">${jnp.host:localhost}</property>
<property name="rmiPort">${jnp.rmiPort:1098}</property>
<property name="rmiBindAddress">${jnp.host:localhost}</property>
</bean>
<!-- The JMS server -->
<bean name="JMSServerManager" class="org.hornetq.jms.server.impl.JMSServerManagerImpl">
<constructor>
<parameter>
<inject bean="HornetQServer"/>
</parameter>
</constructor>
</bean>
</deployment>

@@ -1,61 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<deployment xmlns="urn:jboss:bean-deployer:2.0">
<!-- MBean server -->
<bean name="MBeanServer" class="javax.management.MBeanServer">
<constructor factoryClass="java.lang.management.ManagementFactory"
factoryMethod="getPlatformMBeanServer"/>
</bean>
<!-- The core configuration -->
<bean name="Configuration" class="org.hornetq.core.config.impl.FileConfiguration">
</bean>
<!-- The security manager -->
<bean name="HornetQSecurityManager" class="org.hornetq.spi.core.security.HornetQSecurityManagerImpl">
<start ignored="true"/>
<stop ignored="true"/>
</bean>
<!-- The core server -->
<bean name="HornetQServer" class="org.hornetq.core.server.impl.HornetQServerImpl">
<constructor>
<parameter>
<inject bean="Configuration"/>
</parameter>
<parameter>
<inject bean="MBeanServer"/>
</parameter>
<parameter>
<inject bean="HornetQSecurityManager"/>
</parameter>
</constructor>
<start ignored="true"/>
<stop ignored="true"/>
</bean>
<!-- The Stand alone server that controls the jndi server-->
<bean name="StandaloneServer" class="org.hornetq.jms.server.impl.StandaloneNamingServer">
<constructor>
<parameter>
<inject bean="HornetQServer"/>
</parameter>
</constructor>
<property name="port">${jnp.port:1099}</property>
<property name="bindAddress">${jnp.host:localhost}</property>
<property name="rmiPort">${jnp.rmiPort:1098}</property>
<property name="rmiBindAddress">${jnp.host:localhost}</property>
</bean>
<!-- The JMS server -->
<bean name="JMSServerManager" class="org.hornetq.jms.server.impl.JMSServerManagerImpl">
<constructor>
<parameter>
<inject bean="HornetQServer"/>
</parameter>
</constructor>
</bean>
</deployment>

@@ -1,56 +0,0 @@
#
# JBoss, Home of Professional Open Source.
# Copyright 2010, Red Hat, Inc., and individual contributors
# as indicated by the @author tags. See the copyright.txt file in the
# distribution for a full listing of individual contributors.
#
# This is free software; you can redistribute it and/or modify it
# under the terms of the GNU Lesser General Public License as
# published by the Free Software Foundation; either version 2.1 of
# the License, or (at your option) any later version.
#
# This software is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public
# License along with this software; if not, write to the Free
# Software Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA
# 02110-1301 USA, or see the FSF site: http://www.fsf.org.
#
# Additional logger names to configure (root logger is always configured)
# Root logger option
loggers=org.jboss.logging,org.hornetq.core.server,org.hornetq.utils,org.hornetq.journal,org.hornetq.jms,org.hornetq.integration.bootstrap
# Root logger level
logger.level=INFO
# HornetQ logger levels
logger.org.hornetq.core.server.level=INFO
logger.org.hornetq.journal.level=INFO
logger.org.hornetq.utils.level=INFO
logger.org.hornetq.jms.level=INFO
logger.org.hornetq.integration.bootstrap.level=INFO
# Root logger handlers
logger.handlers=FILE,CONSOLE
# Console handler configuration
handler.CONSOLE=org.jboss.logmanager.handlers.ConsoleHandler
handler.CONSOLE.properties=autoFlush
handler.CONSOLE.level=DEBUG
handler.CONSOLE.autoFlush=true
handler.CONSOLE.formatter=PATTERN
# File handler configuration
handler.FILE=org.jboss.logmanager.handlers.FileHandler
handler.FILE.level=DEBUG
handler.FILE.properties=autoFlush,fileName
handler.FILE.autoFlush=true
handler.FILE.fileName=logs/hornetq.log
handler.FILE.formatter=PATTERN
# Formatter pattern configuration
formatter.PATTERN=org.jboss.logmanager.formatters.PatternFormatter
formatter.PATTERN.properties=pattern
formatter.PATTERN.pattern=%d{HH:mm:ss,SSS} %-5p [%c] %s%E%n

@@ -1,74 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<!--
~ Copyright 2009 Red Hat, Inc.
~ Red Hat licenses this file to you under the Apache License, version
~ 2.0 (the "License"); you may not use this file except in compliance
~ with the License. You may obtain a copy of the License at
~ http://www.apache.org/licenses/LICENSE-2.0
~ Unless required by applicable law or agreed to in writing, software
~ distributed under the License is distributed on an "AS IS" BASIS,
~ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
~ implied. See the License for the specific language governing
~ permissions and limitations under the License.
-->
<deployment xmlns="urn:jboss:bean-deployer:2.0">
<bean name="Naming" class="org.jnp.server.NamingBeanImpl"/>
<!-- JNDI server. Disable this if you don't want JNDI -->
<bean name="JNDIServer" class="org.jnp.server.Main">
<property name="namingInfo">
<inject bean="Naming"/>
</property>
<property name="port">${jnp.port:1199}</property>
<property name="bindAddress">${jnp.host:localhost}</property>
<property name="rmiPort">${jnp.rmiPort:1198}</property>
<property name="rmiBindAddress">${jnp.host:localhost}</property>
</bean>
<!-- MBean server -->
<bean name="MBeanServer" class="javax.management.MBeanServer">
<constructor factoryClass="java.lang.management.ManagementFactory"
factoryMethod="getPlatformMBeanServer"/>
</bean>
<!-- The core configuration -->
<bean name="Configuration" class="org.hornetq.core.config.impl.FileConfiguration">
</bean>
<!-- The security manager -->
<bean name="HornetQSecurityManager" class="org.hornetq.spi.core.security.HornetQSecurityManagerImpl">
<start ignored="true"/>
<stop ignored="true"/>
</bean>
<!-- The core server -->
<bean name="HornetQServer" class="org.hornetq.core.server.impl.HornetQServerImpl">
<constructor>
<parameter>
<inject bean="Configuration"/>
</parameter>
<parameter>
<inject bean="MBeanServer"/>
</parameter>
<parameter>
<inject bean="HornetQSecurityManager"/>
</parameter>
</constructor>
<start ignored="true"/>
<stop ignored="true"/>
</bean>
<!-- The JMS server -->
<bean name="JMSServerManager" class="org.hornetq.jms.server.impl.JMSServerManagerImpl">
<constructor>
<parameter>
<inject bean="HornetQServer"/>
</parameter>
</constructor>
</bean>
</deployment>

@@ -1,88 +0,0 @@
<!--
~ Copyright 2009 Red Hat, Inc.
~ Red Hat licenses this file to you under the Apache License, version
~ 2.0 (the "License"); you may not use this file except in compliance
~ with the License. You may obtain a copy of the License at
~ http://www.apache.org/licenses/LICENSE-2.0
~ Unless required by applicable law or agreed to in writing, software
~ distributed under the License is distributed on an "AS IS" BASIS,
~ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
~ implied. See the License for the specific language governing
~ permissions and limitations under the License.
-->
<configuration xmlns="urn:hornetq"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="urn:hornetq /schema/hornetq-configuration.xsd">
<backup>true</backup>
<allow-failback>true</allow-failback>
<shared-store>true</shared-store>
<journal-min-files>10</journal-min-files>
<connectors>
<connector name="netty">
<factory-class>org.hornetq.core.remoting.impl.netty.NettyConnectorFactory</factory-class>
<param key="host" value="${hornetq.remoting.netty.host:localhost}"/>
<param key="port" value="${hornetq.remoting.netty.port:5446}"/>
</connector>
</connectors>
<acceptors>
<acceptor name="netty">
<factory-class>org.hornetq.core.remoting.impl.netty.NettyAcceptorFactory</factory-class>
<param key="host" value="${hornetq.remoting.netty.host:localhost}"/>
<param key="port" value="${hornetq.remoting.netty.port:5446}"/>
</acceptor>
</acceptors>
<broadcast-groups>
<broadcast-group name="bg-group1">
<group-address>231.7.7.7</group-address>
<group-port>9876</group-port>
<broadcast-period>5000</broadcast-period>
<connector-ref>netty</connector-ref>
</broadcast-group>
</broadcast-groups>
<discovery-groups>
<discovery-group name="dg-group1">
<group-address>231.7.7.7</group-address>
<group-port>9876</group-port>
<refresh-timeout>60000</refresh-timeout>
</discovery-group>
</discovery-groups>
<cluster-connections>
<cluster-connection name="my-cluster">
<address>jms</address>
<connector-ref>netty</connector-ref>
<discovery-group-ref discovery-group-name="dg-group1"/>
</cluster-connection>
</cluster-connections>
<security-settings>
<security-setting match="#">
<permission type="createNonDurableQueue" roles="guest"/>
<permission type="deleteNonDurableQueue" roles="guest"/>
<permission type="consume" roles="guest"/>
<permission type="send" roles="guest"/>
</security-setting>
</security-settings>
<address-settings>
<!--default for catch all-->
<address-setting match="#">
<dead-letter-address>jms.queue.DLQ</dead-letter-address>
<expiry-address>jms.queue.ExpiryQueue</expiry-address>
<redelivery-delay>0</redelivery-delay>
<max-size-bytes>10485760</max-size-bytes>
<message-counter-history-day-limit>10</message-counter-history-day-limit>
<address-full-policy>BLOCK</address-full-policy>
</address-setting>
</address-settings>
</configuration>

@@ -1,28 +0,0 @@
<configuration xmlns="urn:hornetq"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="urn:hornetq /schema/hornetq-jms.xsd">
<connection-factory name="ConnectionFactory">
<connectors>
<connector-ref connector-name="netty"/>
</connectors>
<entries>
<entry name="/ConnectionFactory"/>
<entry name="/XAConnectionFactory"/>
</entries>
</connection-factory>
<queue name="DLQ">
<entry name="/queue/DLQ"/>
</queue>
<queue name="ExpiryQueue">
<entry name="/queue/ExpiryQueue"/>
</queue>
<queue name="ExampleQueue">
<entry name="/queue/ExampleQueue"/>
</queue>
<topic name="ExampleTopic">
<entry name="/topic/ExampleTopic"/>
</topic>
</configuration>

@@ -1,15 +0,0 @@
#
# Copyright 2009 Red Hat, Inc.
# Red Hat licenses this file to you under the Apache License, version
# 2.0 (the "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied. See the License for the specific language governing
# permissions and limitations under the License.
#
java.naming.factory.initial=org.jnp.interfaces.NamingContextFactory
java.naming.factory.url.pkgs=org.jboss.naming:org.jnp.interfaces

@@ -1,34 +0,0 @@
############################################################
# Default Logging Configuration File
#
# You can use a different file by specifying a filename
# with the java.util.logging.config.file system property.
# For example java -Djava.util.logging.config.file=myfile
############################################################
############################################################
# Global properties
############################################################
# "handlers" specifies a comma separated list of log Handler
# classes. These handlers will be installed during VM startup.
# Note that these classes must be on the system classpath.
# By default we only configure a ConsoleHandler, which will only
# show messages at the INFO and above levels.
handlers=java.util.logging.ConsoleHandler,java.util.logging.FileHandler
java.util.logging.ConsoleHandler.formatter=org.hornetq.integration.logging.HornetQLoggerFormatter
java.util.logging.FileHandler.level=INFO
java.util.logging.FileHandler.pattern=logs/hornetq.log
java.util.logging.FileHandler.formatter=org.hornetq.integration.logging.HornetQLoggerFormatter
# Default global logging level.
# This specifies which kinds of events are logged across
# all loggers. For any given facility this global level
# can be overridden by a facility-specific level
# Note that the ConsoleHandler also has a separate level
# setting to limit messages printed to the console.
.level= INFO
############################################################
# Handler specific properties.
# Describes specific configuration info for Handlers.
############################################################

@@ -1,61 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<deployment xmlns="urn:jboss:bean-deployer:2.0">
<bean name="Naming" class="org.jnp.server.NamingBeanImpl"/>
<!-- JNDI server. Disable this if you don't want JNDI -->
<bean name="JNDIServer" class="org.jnp.server.Main">
<property name="namingInfo">
<inject bean="Naming"/>
</property>
<property name="port">${jnp.port:1099}</property>
<property name="bindAddress">${jnp.host:localhost}</property>
<property name="rmiPort">${jnp.rmiPort:1098}</property>
<property name="rmiBindAddress">${jnp.host:localhost}</property>
</bean>
<!-- MBean server -->
<bean name="MBeanServer" class="javax.management.MBeanServer">
<constructor factoryClass="java.lang.management.ManagementFactory"
factoryMethod="getPlatformMBeanServer"/>
</bean>
<!-- The core configuration -->
<bean name="Configuration" class="org.hornetq.core.config.impl.FileConfiguration">
</bean>
<!-- The security manager -->
<bean name="HornetQSecurityManager" class="org.hornetq.spi.core.security.HornetQSecurityManagerImpl">
<start ignored="true"/>
<stop ignored="true"/>
</bean>
<!-- The core server -->
<bean name="HornetQServer" class="org.hornetq.core.server.impl.HornetQServerImpl">
<constructor>
<parameter>
<inject bean="Configuration"/>
</parameter>
<parameter>
<inject bean="MBeanServer"/>
</parameter>
<parameter>
<inject bean="HornetQSecurityManager"/>
</parameter>
</constructor>
<start ignored="true"/>
<stop ignored="true"/>
</bean>
<!-- The JMS server -->
<bean name="JMSServerManager" class="org.hornetq.jms.server.impl.JMSServerManagerImpl">
<constructor>
<parameter>
<inject bean="HornetQServer"/>
</parameter>
</constructor>
</bean>
</deployment>

@@ -1,73 +0,0 @@
<configuration xmlns="urn:hornetq"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="urn:hornetq /schema/hornetq-configuration.xsd">
<failover-on-shutdown>false</failover-on-shutdown>
<shared-store>true</shared-store>
<journal-min-files>10</journal-min-files>
<connectors>
<connector name="netty">
<factory-class>org.hornetq.core.remoting.impl.netty.NettyConnectorFactory</factory-class>
<param key="host" value="${hornetq.remoting.netty.host:localhost}"/>
<param key="port" value="${hornetq.remoting.netty.port:5445}"/>
</connector>
</connectors>
<acceptors>
<acceptor name="netty">
<factory-class>org.hornetq.core.remoting.impl.netty.NettyAcceptorFactory</factory-class>
<param key="host" value="${hornetq.remoting.netty.host:localhost}"/>
<param key="port" value="${hornetq.remoting.netty.port:5445}"/>
</acceptor>
</acceptors>
<broadcast-groups>
<broadcast-group name="bg-group1">
<group-address>231.7.7.7</group-address>
<group-port>9876</group-port>
<broadcast-period>5000</broadcast-period>
<connector-ref>netty</connector-ref>
</broadcast-group>
</broadcast-groups>
<discovery-groups>
<discovery-group name="dg-group1">
<group-address>231.7.7.7</group-address>
<group-port>9876</group-port>
<refresh-timeout>60000</refresh-timeout>
</discovery-group>
</discovery-groups>
<cluster-connections>
<cluster-connection name="my-cluster">
<address>jms</address>
<connector-ref>netty</connector-ref>
<discovery-group-ref discovery-group-name="dg-group1"/>
</cluster-connection>
</cluster-connections>
<security-settings>
<security-setting match="#">
<permission type="createNonDurableQueue" roles="guest"/>
<permission type="deleteNonDurableQueue" roles="guest"/>
<permission type="consume" roles="guest"/>
<permission type="send" roles="guest"/>
</security-setting>
</security-settings>
<address-settings>
<!--default for catch all-->
<address-setting match="#">
<dead-letter-address>jms.queue.DLQ</dead-letter-address>
<expiry-address>jms.queue.ExpiryQueue</expiry-address>
<redelivery-delay>0</redelivery-delay>
<max-size-bytes>10485760</max-size-bytes>
<message-counter-history-day-limit>10</message-counter-history-day-limit>
<address-full-policy>BLOCK</address-full-policy>
</address-setting>
</address-settings>
</configuration>

@@ -1,28 +0,0 @@
<configuration xmlns="urn:hornetq"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="urn:hornetq /schema/hornetq-jms.xsd">
<connection-factory name="ConnectionFactory">
<connectors>
<connector-ref connector-name="netty"/>
</connectors>
<entries>
<entry name="/ConnectionFactory"/>
<entry name="/XAConnectionFactory"/>
</entries>
</connection-factory>
<queue name="DLQ">
<entry name="/queue/DLQ"/>
</queue>
<queue name="ExpiryQueue">
<entry name="/queue/ExpiryQueue"/>
</queue>
<queue name="ExampleQueue">
<entry name="/queue/ExampleQueue"/>
</queue>
<topic name="ExampleTopic">
<entry name="/topic/ExampleTopic"/>
</topic>
</configuration>

@@ -1,2 +0,0 @@
java.naming.factory.initial=org.jnp.interfaces.NamingContextFactory
java.naming.factory.url.pkgs=org.jboss.naming:org.jnp.interfaces

@@ -1,34 +0,0 @@
############################################################
# Default Logging Configuration File
#
# You can use a different file by specifying a filename
# with the java.util.logging.config.file system property.
# For example java -Djava.util.logging.config.file=myfile
############################################################
############################################################
# Global properties
############################################################
# "handlers" specifies a comma separated list of log Handler
# classes. These handlers will be installed during VM startup.
# Note that these classes must be on the system classpath.
# By default we only configure a ConsoleHandler, which will only
# show messages at the INFO and above levels.
handlers=java.util.logging.ConsoleHandler,java.util.logging.FileHandler
java.util.logging.ConsoleHandler.formatter=org.hornetq.integration.logging.HornetQLoggerFormatter
java.util.logging.FileHandler.level=INFO
java.util.logging.FileHandler.pattern=logs/hornetq.log
java.util.logging.FileHandler.formatter=org.hornetq.integration.logging.HornetQLoggerFormatter
# Default global logging level.
# This specifies which kinds of events are logged across
# all loggers. For any given facility this global level
# can be overridden by a facility-specific level
# Note that the ConsoleHandler also has a separate level
# setting to limit messages printed to the console.
.level= INFO
############################################################
# Handler specific properties.
# Describes specific configuration info for Handlers.
############################################################

@@ -1,60 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<deployment xmlns="urn:jboss:bean-deployer:2.0">
<bean name="Naming" class="org.jnp.server.NamingBeanImpl"/>
<!-- JNDI server. Disable this if you don't want JNDI -->
<bean name="JNDIServer" class="org.jnp.server.Main">
<property name="namingInfo">
<inject bean="Naming"/>
</property>
<property name="port">1099</property>
<property name="bindAddress">localhost</property>
<property name="rmiPort">1098</property>
<property name="rmiBindAddress">localhost</property>
</bean>
<!-- MBean server -->
<bean name="MBeanServer" class="javax.management.MBeanServer">
<constructor factoryClass="java.lang.management.ManagementFactory"
factoryMethod="getPlatformMBeanServer"/>
</bean>
<!-- The core configuration -->
<bean name="Configuration" class="org.hornetq.core.config.impl.FileConfiguration">
</bean>
<!-- The security manager -->
<bean name="HornetQSecurityManager" class="org.hornetq.spi.core.security.HornetQSecurityManagerImpl">
<start ignored="true"/>
<stop ignored="true"/>
</bean>
<!-- The core server -->
<bean name="HornetQServer" class="org.hornetq.core.server.impl.HornetQServerImpl">
<constructor>
<parameter>
<inject bean="Configuration"/>
</parameter>
<parameter>
<inject bean="MBeanServer"/>
</parameter>
<parameter>
<inject bean="HornetQSecurityManager"/>
</parameter>
</constructor>
<start ignored="true"/>
<stop ignored="true"/>
</bean>
<!-- The JMS server -->
<bean name="JMSServerManager" class="org.hornetq.jms.server.impl.JMSServerManagerImpl">
<constructor>
<parameter>
<inject bean="HornetQServer"/>
</parameter>
</constructor>
</bean>
</deployment>

@@ -1,59 +0,0 @@
<configuration xmlns="urn:hornetq"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="urn:hornetq /schema/hornetq-configuration.xsd">
<journal-min-files>10</journal-min-files>
<connectors>
<connector name="netty">
<factory-class>org.hornetq.core.remoting.impl.netty.NettyConnectorFactory</factory-class>
<param key="host" value="${hornetq.remoting.netty.host:localhost}"/>
<param key="port" value="${hornetq.remoting.netty.port:5445}"/>
</connector>
<connector name="netty-throughput">
<factory-class>org.hornetq.core.remoting.impl.netty.NettyConnectorFactory</factory-class>
<param key="host" value="${hornetq.remoting.netty.host:localhost}"/>
<param key="port" value="${hornetq.remoting.netty.port:5455}"/>
<param key="batch-delay" value="50"/>
</connector>
</connectors>
<acceptors>
<acceptor name="netty">
<factory-class>org.hornetq.core.remoting.impl.netty.NettyAcceptorFactory</factory-class>
<param key="host" value="${hornetq.remoting.netty.host:localhost}"/>
<param key="port" value="${hornetq.remoting.netty.port:5445}"/>
</acceptor>
<acceptor name="netty-throughput">
<factory-class>org.hornetq.core.remoting.impl.netty.NettyAcceptorFactory</factory-class>
<param key="host" value="${jboss.bind.address:localhost}"/>
<param key="port" value="${hornetq.remoting.netty.port:5455}"/>
<param key="batch-delay" value="50"/>
<param key="direct-deliver" value="false"/>
</acceptor>
</acceptors>
<security-settings>
<security-setting match="#">
<permission type="createNonDurableQueue" roles="guest"/>
<permission type="deleteNonDurableQueue" roles="guest"/>
<permission type="consume" roles="guest"/>
<permission type="send" roles="guest"/>
</security-setting>
</security-settings>
<address-settings>
<!--default for catch all-->
<address-setting match="#">
<dead-letter-address>jms.queue.DLQ</dead-letter-address>
<expiry-address>jms.queue.ExpiryQueue</expiry-address>
<redelivery-delay>0</redelivery-delay>
<max-size-bytes>10485760</max-size-bytes>
<message-counter-history-day-limit>10</message-counter-history-day-limit>
<address-full-policy>BLOCK</address-full-policy>
</address-setting>
</address-settings>
</configuration>

@@ -1,40 +0,0 @@
<configuration xmlns="urn:hornetq"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="urn:hornetq /schema/hornetq-jms.xsd">
<connection-factory name="ConnectionFactory">
<connectors>
<connector-ref connector-name="netty"/>
</connectors>
<entries>
<entry name="/ConnectionFactory"/>
<entry name="/XAConnectionFactory"/>
</entries>
</connection-factory>
<!--
<connection-factory name="NettyThroughputConnectionFactory">
<connectors>
<connector-ref connector-name="netty-throughput"/>
</connectors>
<entries>
<entry name="/ThroughputConnectionFactory"/>
<entry name="/XAThroughputConnectionFactory"/>
</entries>
</connection-factory>
-->
<queue name="DLQ">
<entry name="/queue/DLQ"/>
</queue>
<queue name="ExpiryQueue">
<entry name="/queue/ExpiryQueue"/>
</queue>
<queue name="ExampleQueue">
<entry name="/queue/ExampleQueue"/>
</queue>
<topic name="ExampleTopic">
<entry name="/topic/ExampleTopic"/>
</topic>
</configuration>

@@ -1,7 +0,0 @@
<configuration xmlns="urn:hornetq" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="urn:hornetq /schema/hornetq-users.xsd">
<!-- the default user. this is used where username is null-->
<defaultuser name="guest" password="guest">
<role name="guest"/>
</defaultuser>
</configuration>

@@ -1,2 +0,0 @@
java.naming.factory.initial=org.jnp.interfaces.NamingContextFactory
java.naming.factory.url.pkgs=org.jboss.naming:org.jnp.interfaces

@@ -1,38 +0,0 @@
############################################################
# Default Logging Configuration File
#
# You can use a different file by specifying a filename
# with the java.util.logging.config.file system property.
# For example java -Djava.util.logging.config.file=myfile
############################################################
############################################################
# Global properties
############################################################
# "handlers" specifies a comma separated list of log Handler
# classes. These handlers will be installed during VM startup.
# Note that these classes must be on the system classpath.
# By default we only configure a ConsoleHandler, which will only
# show messages at the INFO and above levels.
handlers=java.util.logging.ConsoleHandler,java.util.logging.FileHandler
java.util.logging.ConsoleHandler.formatter=org.hornetq.integration.logging.HornetQLoggerFormatter
java.util.logging.FileHandler.level=INFO
java.util.logging.FileHandler.formatter=org.hornetq.integration.logging.HornetQLoggerFormatter
# cycle through 10 files of 20MiB max which append logs
java.util.logging.FileHandler.count=10
java.util.logging.FileHandler.limit=20971520
java.util.logging.FileHandler.append=true
java.util.logging.FileHandler.pattern=logs/hornetq.%g.log
# Default global logging level.
# This specifies which kinds of events are logged across
# all loggers. For any given facility this global level
# can be overridden by a facility-specific level
# Note that the ConsoleHandler also has a separate level
# setting to limit messages printed to the console.
.level= INFO
############################################################
# Handler specific properties.
# Describes specific configuration info for Handlers.
############################################################
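As a quick sanity check on the rotation settings above: the `limit` value 20971520 is exactly 20 MiB per file, so with `count=10` the handler retains at most about 200 MiB of logs (the figures here are simple arithmetic on the values in the file, not additional HornetQ behavior):

```python
# Verify the FileHandler rotation numbers used in logging.properties
limit_bytes = 20 * 1024 * 1024   # 20 MiB per log file
file_count = 10                  # logs/hornetq.%g.log, g = 0..9

assert limit_bytes == 20971520   # matches java.util.logging.FileHandler.limit
print(file_count * limit_bytes)  # total retained: 209715200 bytes (200 MiB)
```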

@@ -1,4 +0,0 @@
hornetq.example.logserveroutput=true
hornetq.jars.dir=${imported.basedir}/../../lib
jars.dir=${imported.basedir}/../../lib
aio.library.path=${imported.basedir}/../../bin

@@ -1,123 +0,0 @@
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
<modelVersion>4.0.0</modelVersion>
<parent>
<groupId>org.hornetq</groupId>
<artifactId>hornetq-distribution</artifactId>
<version>2.5.0-SNAPSHOT</version>
</parent>
<artifactId>jboss-mc</artifactId>
<packaging>jar</packaging>
<name>JBoss Microcontainer jar</name>
<dependencies>
<!--<dependency>
<groupId>org.jboss.logging</groupId>
<artifactId>jboss-logging-spi</artifactId>
</dependency>-->
<dependency>
<groupId>org.jboss.microcontainer</groupId>
<artifactId>jboss-kernel</artifactId>
</dependency>
<dependency>
<groupId>org.jboss.microcontainer</groupId>
<artifactId>jboss-dependency</artifactId>
</dependency>
<dependency>
<groupId>org.jboss</groupId>
<artifactId>jboss-reflect</artifactId>
</dependency>
<dependency>
<groupId>org.jboss</groupId>
<artifactId>jboss-common-core</artifactId>
</dependency>
<dependency>
<groupId>org.jboss</groupId>
<artifactId>jboss-mdr</artifactId>
</dependency>
<dependency>
<groupId>org.jboss</groupId>
<artifactId>jbossxb</artifactId>
</dependency>
<dependency>
<groupId>sun-jaxb</groupId>
<artifactId>jaxb-api</artifactId>
</dependency>
<dependency>
<groupId>org.jboss.logging</groupId>
<artifactId>jboss-logging</artifactId>
</dependency>
<dependency>
<groupId>org.jboss.logmanager</groupId>
<artifactId>jboss-logmanager</artifactId>
</dependency>
</dependencies>
<build>
<resources>
<resource>
<directory>src/main/resources</directory>
<filtering>true</filtering>
</resource>
</resources>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-shade-plugin</artifactId>
<executions>
<execution>
<phase>package</phase>
<goals>
<goal>shade</goal>
</goals>
<configuration>
<artifactSet>
<excludes>
<exclude>org.jboss.netty:netty</exclude>
<exclude>org.jboss.logging:jboss-logging-spi</exclude>
</excludes>
</artifactSet>
<filters>
<!--<filter>
<artifact>org.jboss.logging:jboss-logging-spi</artifact>
</filter>-->
<filter>
<artifact>org.jboss.microcontainer:jboss-kernel</artifact>
</filter>
<filter>
<artifact>org.jboss.microcontainer:jboss-dependency</artifact>
</filter>
<filter>
<artifact>org.jboss:jboss-reflect</artifact>
</filter>
<filter>
<artifact>org.jboss:jboss-common-core</artifact>
</filter>
<filter>
<artifact>org.jboss:jboss-mdr</artifact>
</filter>
<filter>
<artifact>org.jboss:jbossxb</artifact>
</filter>
<filter>
<artifact>sun-jaxb:jaxb-api</artifact>
</filter>
<filter>
<artifact>org.jboss.logging:jboss-logging</artifact>
</filter>
<filter>
<artifact>org.jboss.logmanager:jboss-logmanager</artifact>
</filter>
</filters>
</configuration>
</execution>
</executions>
</plugin>
</plugins>
</build>
</project>

@@ -34,7 +34,6 @@
<modules>
<module>jnp-client</module>
<module>jboss-mc</module>
<module>hornetq</module>
</modules>

@@ -26,21 +26,21 @@
<section id="running.standalone">
<title>Standalone HornetQ</title>
<para>To run a stand-alone server, open up a shell or command prompt and navigate into the
<literal>bin</literal> directory. Then execute <literal>./run.sh</literal> (or <literal
>run.bat</literal> on Windows) and you should see the following output </para>
<literal>bin</literal> directory. Then execute <literal>./hornetq run</literal> (or <literal
>./hornetq.cmd run</literal> on Windows) and you should see the following output </para>
<programlisting>
bin$ ./run.sh
15:05:54,108 INFO @main [HornetQBootstrapServer] Starting HornetQ server
bin$ ./hornetq run
11:05:06,589 INFO [org.hornetq.integration.bootstrap] HQ101000: Starting HornetQ Server
...
15:06:02,566 INFO @main [HornetQServerImpl] HornetQ Server version
2.0.0.CR3 (yellowjacket, 111) started
11:05:10,848 INFO [org.hornetq.core.server] HQ221001: HornetQ Server version 2.5.0.SNAPSHOT (Wild Hornet, 125) [e32ae252-52ee-11e4-a716-7785dc3013a3]
</programlisting>
<para>HornetQ is now running.</para>
<para>Both the run and the stop scripts use the config under <literal
>config/stand-alone/non-clustered</literal> by default. The configuration can be changed
by running <literal>./run.sh ../config/stand-alone/clustered</literal> or another config of
your choosing. This is the same for the stop script and the windows bat files.</para>
>config/non-clustered</literal> by default. The configuration can be changed
by running <literal>./hornetq run xml:../config/non-clustered/bootstrap.xml</literal> or another config of
your choosing.</para>
<para>The server can be stopped by running <literal>./hornetq stop</literal></para>
</section>
<section id="running.jboss.Wildfly">
<title>HornetQ In Wildfly</title>

View File

@ -36,6 +36,7 @@
<xi:include xmlns:xi="http://www.w3.org/2001/XInclude" href="persistence.xml"/>
<xi:include xmlns:xi="http://www.w3.org/2001/XInclude" href="configuring-transports.xml"/>
<xi:include xmlns:xi="http://www.w3.org/2001/XInclude" href="connection-ttl.xml"/>
<xi:include xmlns:xi="http://www.w3.org/2001/XInclude" href="slow-consumers.xml"/>
<xi:include xmlns:xi="http://www.w3.org/2001/XInclude" href="transaction-config.xml"/>
<xi:include xmlns:xi="http://www.w3.org/2001/XInclude" href="flow-control.xml"/>
<xi:include xmlns:xi="http://www.w3.org/2001/XInclude" href="send-guarantees.xml"/>

View File

@ -594,11 +594,13 @@ ClientSession session = factory.createSession();</programlisting>
shows all the available configuration options</para>
<itemizedlist>
<listitem id="clusters.address">
<para><literal>address</literal>. Each cluster connection only applies to
messages sent to an address that starts with this value. Note: this does
not use wild-card matching.</para>
<para>In this case, this cluster connection will load balance messages sent to
address that start with <literal>jms</literal>. This cluster connection,
<para><literal>address</literal>. Each cluster connection only applies to addresses that match the
specified address field. An address is matched on the cluster connection when it begins with the
string specified in this field. The address field on a cluster connection also supports comma
separated lists and an exclude syntax '!'. To prevent an address from being matched on this
cluster connection, prepend a cluster connection address string with '!'.</para>
<para>In the case shown above the cluster connection will load balance messages sent to
addresses that start with <literal>jms</literal>. This cluster connection,
will, in effect apply to all JMS queues and topics since they map to core
queues that start with the substring "jms".</para>
<para>The address can be any value and you can have many cluster connections
@ -611,6 +613,24 @@ ClientSession session = factory.createSession();</programlisting>
values of <literal>address</literal>, e.g. "europe" and "europe.news" since
this could result in the same messages being distributed between more than
one cluster connection, possibly resulting in duplicate deliveries.</para>
<para>
Examples:
<itemizedlist>
<listitem><literal>'jms.eu'</literal> matches all addresses starting with 'jms.eu'</listitem>
<listitem><literal>'!jms.eu'</literal> matches all address except for those starting with
'jms.eu'</listitem>
<listitem><literal>'jms.eu.uk,jms.eu.de'</literal> matches all addresses starting with either
'jms.eu.uk' or 'jms.eu.de'</listitem>
<listitem><literal>'jms.eu,!jms.eu.uk'</literal> matches all addresses starting with 'jms.eu'
but not those starting with 'jms.eu.uk'</listitem>
</itemizedlist>
Notes:
<itemizedlist>
<listitem>Address exclusion always takes precedence over address inclusion.</listitem>
<listitem>Address matching on cluster connections does not support wild-card matching.
</listitem>
</itemizedlist>
</para>
<para>This parameter is mandatory.</para>
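<para>As an illustrative sketch (values hypothetical), the last example above would be declared
on a cluster connection as:</para>
<programlisting>
&lt;cluster-connection name="my-cluster">
   &lt;address>jms.eu,!jms.eu.uk&lt;/address>
   ...
&lt;/cluster-connection>
</programlisting>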
</listitem>
<listitem>

View File

@ -177,10 +177,10 @@ etc</programlisting>
Java IO, or NIO (non-blocking), also to use straightforward TCP sockets, SSL, or to
tunnel over HTTP or HTTPS.</para>
<para>We believe this caters for the vast majority of transport requirements.</para>
<section>
<section id="configuring-transports.single-port">
<title>Single Port Support</title>
<para>As of version 2.4 HornetQ now supports using a single port for all protocols, HornetQ will automatically
detect which protocol is being used CORE, AMQP or STOMP and use the appropriate HornetQ handler. It will also detect
detect which protocol is being used: CORE, AMQP, STOMP or OPENWIRE, and use the appropriate HornetQ handler. It will also detect
whether protocols such as HTTP or Web Sockets are being used and also use the appropriate decoders.</para>
<para>It is possible to limit which protocols are supported by using the <literal>protocols</literal> parameter
on the Acceptor like so:</para>
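<programlisting>
&lt;!-- illustrative sketch; the acceptor name and protocol list are example values -->
&lt;acceptor name="netty">
   &lt;factory-class>org.hornetq.core.remoting.impl.netty.NettyAcceptorFactory&lt;/factory-class>
   &lt;param key="protocols" value="CORE,AMQP"/>
&lt;/acceptor>
</programlisting>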

View File

@ -197,11 +197,13 @@
connection used to forward messages to the target node. This attribute is
described in section <xref linkend="client-reconnection"/></para>
<warning><para>When using the bridge to forward messages from a queue which has a
max-size-bytes set it's important that confirmation-window-size is less than
or equal to <literal>max-size-bytes</literal> to prevent the flow of
messages from ceasing.</para>
</warning>
<warning><para>When using the bridge to forward messages to an address which uses
the <literal>BLOCK</literal> <literal>address-full-policy</literal> from a
queue which has a <literal>max-size-bytes</literal> set it's important that
<literal>confirmation-window-size</literal> is less than or equal to
<literal>max-size-bytes</literal> to prevent the flow of messages from
ceasing.</para>
</warning>
</listitem>
<listitem>

Binary file not shown.

Binary file not shown.

View File

@ -401,6 +401,11 @@
sessions are used, once and only once message delivery is not guaranteed and it is possible
that some messages will be lost or delivered twice.</para>
</section>
<section id="examples.openwire">
<title>OpenWire</title>
<para>The <literal>Openwire</literal> example shows how to configure a HornetQ
server to communicate with an ActiveMQ JMS client that uses the OpenWire protocol.</para>
</section>
<section id="examples.paging">
<title>Paging</title>
<para>The <literal>paging</literal> example shows how HornetQ can support huge queues

View File

@ -30,7 +30,6 @@
<para>A part of high availability is <emphasis>failover</emphasis> which we define as the
<emphasis>ability for client connections to migrate from one server to another in event of
server failure so client applications can continue to operate</emphasis>.</para>
<section>
<title>Live - Backup Groups</title>
@ -48,14 +47,71 @@
live server goes down, if the current live server is configured to allow automatic failback
then it will detect the live server coming back up and automatically stop.</para>
<section id="ha.mode">
<title>HA modes</title>
<section id="ha.policies">
<title>HA Policies</title>
<para>HornetQ supports two different strategies for backing up a server <emphasis>shared
store</emphasis> and <emphasis>replication</emphasis>.</para>
store</emphasis> and <emphasis>replication</emphasis>, which are configured via the
<literal>ha-policy</literal> configuration element.</para>
<programlisting>
&lt;ha-policy>
&lt;replication/>
&lt;/ha-policy>
</programlisting>
<para>
or
</para>
<programlisting>
&lt;ha-policy>
&lt;shared-store/>
&lt;/ha-policy>
</programlisting>
<para>
As well as these two strategies there is a third, <literal>live-only</literal>. This means there
will be no backup strategy; it is the default if none is provided, and it is used to configure
<literal>scale-down</literal>, which is covered in a later chapter.
</para>
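<para>A minimal sketch of this policy (the scale-down child elements are covered later) would be:</para>
<programlisting>
&lt;ha-policy>
   &lt;live-only/>
&lt;/ha-policy>
</programlisting>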
<note>
<para>
The <literal>ha-policy</literal> configuration replaces any HA configuration in the root of the
<literal>hornetq-configuration.xml</literal> file. All old configuration is now deprecated, although
best efforts will be made to honour it if configured this way.
</para>
</note>
<note>
<para>Only persistent message data will survive failover. Any non persistent message
data will not be available after failover.</para>
</note>
<para>The <literal>ha-policy</literal> type configures which strategy a cluster should use to provide the
backing up of a server's data. Within this configuration element you configure how a server should behave
within the cluster, either as a master (live), slave (backup) or colocated (both live and backup). This
would look something like: </para>
<programlisting>
&lt;ha-policy>
&lt;replication>
&lt;master/>
&lt;/replication>
&lt;/ha-policy>
</programlisting>
<para>
or
</para>
<programlisting>
&lt;ha-policy>
&lt;shared-store>
&lt;slave/>
&lt;/shared-store>
&lt;/ha-policy>
</programlisting>
<para>
or
</para>
<programlisting>
&lt;ha-policy>
&lt;replication>
&lt;colocated/>
&lt;/replication>
&lt;/ha-policy>
</programlisting>
</section>
<section id="ha.mode.replicated">
@ -81,7 +137,7 @@
the one at the live's storage. If you configure your live server to perform a
<xref linkend="ha.allow-fail-back">'fail-back'</xref> when restarted, it will synchronize
its data with the backup's. If both servers are shutdown, the administrator will have
to determine which one has the lastest data.</para>
to determine which one has the latest data.</para>
<para>The replicating live and backup pair must be part of a cluster. The Cluster
Connection also defines how backup servers will find the remote live servers to pair
@ -104,39 +160,40 @@
<itemizedlist>
<listitem>
<para><literal>specifying a node group</literal>. You can specify a group of live servers that a backup
server can connect to. This is done by configuring <literal>backup-group-name</literal> in the main
server can connect to. This is done by configuring <literal>group-name</literal> in either the <literal>master</literal>
or the <literal>slave</literal> element of the
<literal>hornetq-configuration.xml</literal>. A Backup server will only connect to a live server that
shares the same node group name</para>
</listitem>
<listitem>
<para><literal>connecting to any live</literal>. Simply put not configuring <literal>backup-group-name</literal>
will allow a backup server to connect to any live server</para>
<para><literal>connecting to any live</literal>. This will be the behaviour if <literal>group-name</literal>
is not configured, allowing a backup server to connect to any live server
</listitem>
</itemizedlist>
<note>
<para>A <literal>backup-group-name</literal> example: suppose you have 5 live servers and 6 backup
<para>A <literal>group-name</literal> example: suppose you have 5 live servers and 6 backup
servers:</para>
<itemizedlist>
<listitem>
<para><literal>live1</literal>, <literal>live2</literal>, <literal>live3</literal>: with
<literal>backup-group-name=fish</literal></para>
<literal>group-name=fish</literal></para>
</listitem>
<listitem>
<para><literal>live4</literal>, <literal>live5</literal>: with <literal>backup-group-name=bird</literal></para>
<para><literal>live4</literal>, <literal>live5</literal>: with <literal>group-name=bird</literal></para>
</listitem>
<listitem>
<para><literal>backup1</literal>, <literal>backup2</literal>, <literal>backup3</literal>,
<literal>backup4</literal>: with <literal>backup-group-name=fish</literal></para>
<literal>backup4</literal>: with <literal>group-name=fish</literal></para>
</listitem>
<listitem>
<para><literal>backup5</literal>, <literal>backup6</literal>: with
<literal>backup-group-name=bird</literal></para>
<literal>group-name=bird</literal></para>
</listitem>
</itemizedlist>
<para>After joining the cluster the backups with <literal>backup-group-name=fish</literal> will
search for live servers with <literal>backup-group-name=fish</literal> to pair with. Since there
<para>After joining the cluster the backups with <literal>group-name=fish</literal> will
search for live servers with <literal>group-name=fish</literal> to pair with. Since there
is one backup too many, the <literal>fish</literal> will remain with one spare backup.</para>
<para>The 2 backups with <literal>backup-group-name=bird</literal> (<literal>backup5</literal> and
<para>The 2 backups with <literal>group-name=bird</literal> (<literal>backup5</literal> and
<literal>backup6</literal>) will pair with live servers <literal>live4</literal> and
<literal>live5</literal>.</para>
</note>
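<para>As a sketch, the <literal>fish</literal> group above would be declared on a live server like so
(the <literal>slave</literal> element takes the same child element):</para>
<programlisting>
&lt;ha-policy>
   &lt;replication>
      &lt;master>
         &lt;group-name>fish&lt;/group-name>
      &lt;/master>
   &lt;/replication>
&lt;/ha-policy>
</programlisting>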
@ -145,14 +202,14 @@
configured. If no live server is available it will wait until the cluster topology changes and
repeats the process.</para>
<note>
<para>This is an important distinction from a shared-store backup, as in that case if
the backup starts and does not find its live server, the server will just activate
and start to serve client requests. In the replication case, the backup just keeps
waiting for a live server to pair with. Notice that in replication the backup server
<para>This is an important distinction from a shared-store backup: in that case, if a backup starts and does not find
a live server, the server will just activate and start to serve client requests.
In the replication case, the backup just keeps
waiting for a live server to pair with. Note that in replication the backup server
does not know whether any data it might have is up to date, so it really cannot
decide to activate automatically. To activate a replicating backup server using the data
it has, the administrator must change its configuration to make a live server of it,
that change <literal>backup=true</literal> to <literal>backup=false</literal>.</para>
it has, the administrator must change its configuration to make it a live server by changing
<literal>slave</literal> to <literal>master</literal>.</para>
</note>
<para>Much like in the shared-store case, when the live server stops or crashes,
@ -169,12 +226,14 @@
<title>Configuration</title>
<para>To configure the live and backup servers to be a replicating pair, configure
both servers' <literal>hornetq-configuration.xml</literal> to have:</para>
the live server in <literal>hornetq-configuration.xml</literal> to have:</para>
<programlisting>
&lt;!-- FOR BOTH LIVE AND BACKUP SERVERS' -->
&lt;shared-store>false&lt;/shared-store>
.
&lt;ha-policy>
&lt;replication>
&lt;master/>
&lt;/replication>
&lt;/ha-policy>
.
&lt;cluster-connections>
&lt;cluster-connection name="my-cluster">
@ -183,12 +242,95 @@
&lt;/cluster-connections>
</programlisting>
<para>The backup server must also be configured as a backup.</para>
<para>The backup server must be similarly configured but as a <literal>slave</literal></para>
<programlisting>
&lt;backup>true&lt;/backup>
</programlisting>
&lt;ha-policy>
&lt;replication>
&lt;slave/>
&lt;/replication>
&lt;/ha-policy></programlisting>
</section>
<section>
<title>All Replication Configuration</title>
<para>The following table lists all the <literal>ha-policy</literal> configuration elements for HA strategy
Replication for <literal>master</literal>:</para>
<table>
<tgroup cols="2">
<colspec colname="c1" colnum="1"/>
<colspec colname="c2" colnum="2"/>
<thead>
<row>
<entry>name</entry>
<entry>Description</entry>
</row>
</thead>
<tbody>
<row>
<entry><literal>check-for-live-server</literal></entry>
<entry>Whether to check the cluster for a (live) server using our own server ID when starting
up. This option is only necessary for performing 'fail-back' on replicating servers.</entry>
</row>
<row>
<entry><literal>cluster-name</literal></entry>
<entry>Name of the cluster configuration to use for replication. This setting is only necessary if you
configure multiple cluster connections. If configured then the connector configuration of the
cluster configuration with this name will be used when connecting to the cluster to discover
if a live server is already running, see <literal>check-for-live-server</literal>. If unset then
the default cluster connections configuration is used (the first one configured)</entry>
</row>
<row>
<entry><literal>group-name</literal></entry>
<entry>If set, backup servers will only pair with live servers with matching group-name</entry>
</row>
</tbody>
</tgroup>
</table>
<para>The following table lists all the <literal>ha-policy</literal> configuration elements for HA strategy
Replication for <literal>slave</literal>:</para>
<table>
<tgroup cols="2">
<colspec colname="c1" colnum="1"/>
<colspec colname="c2" colnum="2"/>
<thead>
<row>
<entry>name</entry>
<entry>Description</entry>
</row>
</thead>
<tbody>
<row>
<entry><literal>cluster-name</literal></entry>
<entry>Name of the cluster configuration to use for replication. This setting is only necessary if you
configure multiple cluster connections. If configured then the connector configuration of the
cluster configuration with this name will be used when connecting to the cluster to discover
if a live server is already running, see <literal>check-for-live-server</literal>. If unset then
the default cluster connections configuration is used (the first one configured)</entry>
</row>
<row>
<entry><literal>group-name</literal></entry>
<entry>If set, backup servers will only pair with live servers with matching group-name</entry>
</row>
<row>
<entry><literal>max-saved-replicated-journals-size</literal></entry>
<entry>This specifies how many times a replicated backup server can restart after moving its files on start.
Once this number of backup journal files has been reached the server will stop permanently after it fails
back.</entry>
</row>
<row>
<entry><literal>allow-failback</literal></entry>
<entry>Whether a server will automatically stop when another server places a request to take over
its place. The use case is when the backup has failed over.</entry>
</row>
<row>
<entry><literal>failback-delay</literal></entry>
<entry>delay to wait before fail-back occurs on (failed over live's) restart</entry>
</row>
</tbody>
</tgroup>
</table>
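<para>Putting several of these elements together, a <literal>slave</literal> configuration sketch
(values are illustrative) might look like:</para>
<programlisting>
&lt;ha-policy>
   &lt;replication>
      &lt;slave>
         &lt;group-name>fish&lt;/group-name>
         &lt;allow-failback>true&lt;/allow-failback>
         &lt;failback-delay>5000&lt;/failback-delay>
         &lt;max-saved-replicated-journals-size>2&lt;/max-saved-replicated-journals-size>
      &lt;/slave>
   &lt;/replication>
&lt;/ha-policy>
</programlisting>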
</section>
</section>
<section id="ha.mode.shared">
@ -213,20 +355,37 @@
the shared store which can take some time depending on the amount of data in the
store.</para>
<para>If you require the highest performance during normal operation, have access to
a fast SAN, and can live with a slightly slower failover (depending on amount of
data), we recommend shared store high availability</para>
a fast SAN, and can live with a slightly slower failover (depending on the amount of
data), shared store high availability is recommended.</para>
<graphic fileref="images/ha-shared-store.png" align="center"/>
<section id="ha/mode.shared.configuration">
<title>Configuration</title>
<para>To configure the live and backup servers to share their store, configure
all <literal>hornetq-configuration.xml</literal>:</para>
<programlisting>
&lt;shared-store>true&lt;/shared-store>
</programlisting>
<para>Additionally, each backup server must be flagged explicitly as a backup:</para>
<programlisting>
&lt;backup>true&lt;/backup></programlisting>
it via the <literal>ha-policy</literal> configuration in <literal>hornetq-configuration.xml</literal>:</para>
<programlisting>
&lt;ha-policy>
&lt;shared-store>
&lt;master/>
&lt;/shared-store>
&lt;/ha-policy>
.
&lt;cluster-connections>
&lt;cluster-connection name="my-cluster">
...
&lt;/cluster-connection>
&lt;/cluster-connections>
</programlisting>
<para>The backup server must be similarly configured but as a <literal>slave</literal>:</para>
<programlisting>
&lt;ha-policy>
&lt;shared-store>
&lt;slave/>
&lt;/shared-store>
&lt;/ha-policy>
</programlisting>
<para>In order for live - backup groups to operate properly with a shared store,
both servers must have configured the location of journal directory to point
to the <emphasis>same shared location</emphasis> (as explained in
@ -244,14 +403,57 @@
<title>Failing Back to live Server</title>
<para>After a live server has failed and a backup has taken over its duties, you may want to
restart the live server and have clients fail back.</para>
<para>In case of "shared disk", simply restart the original live
server and kill the new live server. You can do this by killing the process itself or just waiting for the server to crash naturally.</para>
<para>In case of a replicating live server that has been replaced by a remote backup you will need to also set <link linkend="hq.check-for-live-server">check-for-live-server</link>. This option is necessary because a starting server cannot know whether there is a (remote) server running in its place, so with this option set, the server will check the cluster for another server using its node-ID and if it finds one it will try initiate a fail-back. This option only applies to live servers that are restarting, it is ignored by backup servers.</para>
<para>It is also possible to cause failover to occur on normal server shutdown, to enable
this set the following property to true in the <literal>hornetq-configuration.xml</literal>
configuration file like so:</para>
<para>In the case of "shared disk", simply restart the original live server and kill the new live server. You can
do this by killing the process itself. Alternatively you can set <literal>allow-fail-back</literal> to
<literal>true</literal> on the slave config, which will force the backup that has become live to automatically
stop. This configuration would look like:</para>
<programlisting>
&lt;ha-policy>
&lt;shared-store>
&lt;slave>
&lt;allow-failback>true&lt;/allow-failback>
&lt;failback-delay>5000&lt;/failback-delay>
&lt;/slave>
&lt;/shared-store>
&lt;/ha-policy>
</programlisting>
<para>The <literal>failback-delay</literal> configures how long the backup must wait after automatically
stopping before it restarts. This gives the live server time to start and obtain its lock.</para>
<para id="hq.check-for-live-server">In replication HA mode you need to set an extra property <literal>check-for-live-server</literal>
to <literal>true</literal> in the <literal>master</literal> configuration. If set to true, during start-up
a live server will first search the cluster for another server using its nodeID. If it finds one, it will
contact this server and try to "fail-back". Since this is a remote replication scenario, the "starting live"
will have to synchronize its data with the server running with its ID. Once they are in sync, it will
request the other server (which it assumes is a backup that has taken over its duties) to shut down so it can
take over. This is necessary because otherwise the live server has no means of knowing whether there was a
fail-over, and if there was, whether the server that took over its duties is still running.
Configure this option in your <literal>hornetq-configuration.xml</literal> configuration file as follows:</para>
<programlisting>
&lt;ha-policy>
&lt;replication>
&lt;master>
&lt;check-for-live-server>true&lt;/check-for-live-server>
&lt;/master>
&lt;/replication>
&lt;/ha-policy></programlisting>
<warning>
<para>
Be aware that if you restart a live server after failover has occurred then this value must be
set to <literal><emphasis role="bold">true</emphasis></literal>. If not, the live server will restart and serve the same
messages that the backup has already handled, causing duplicates.
</para>
</warning>
<para>It is also possible, in the case of shared store, to cause failover to occur on normal server shutdown,
to enable this set the following property to true in the <literal>ha-policy</literal> configuration on either
the <literal>master</literal> or <literal>slave</literal> like so:</para>
<programlisting>
&lt;failover-on-shutdown>true&lt;/failover-on-shutdown></programlisting>
&lt;ha-policy>
&lt;shared-store>
&lt;master>
&lt;failover-on-shutdown>true&lt;/failover-on-shutdown>
&lt;/master>
&lt;/shared-store>
&lt;/ha-policy></programlisting>
<para>By default this is set to false, if by some chance you have set this to false but still
want to stop the server normally and cause failover then you can do this by using the management
API as explained at <xref linkend="management.core.server"/></para>
@ -259,39 +461,284 @@
the original live server to take over automatically by setting the following property in the
<literal>hornetq-configuration.xml</literal> configuration file as follows:</para>
<programlisting>
&lt;allow-failback>true&lt;/allow-failback></programlisting>
<para id="hq.check-for-live-server">In replication HA mode you need to set an extra property <literal>check-for-live-server</literal>
to <literal>true</literal>. If set to true, during start-up a live server will first search the cluster for another server using its nodeID. If it finds one, it will contact this server and try to "fail-back". Since this is a remote replication scenario, the "starting live" will have to synchronize its data with the server running with its ID, once they are in sync, it will request the other server (which it assumes it is a back that has assumed its duties) to shutdown for it to take over. This is necessary because otherwise the live server has no means to know whether there was a fail-over or not, and if there was if the server that took its duties is still running or not. To configure this option at your <literal>hornetq-configuration.xml</literal> configuration file as follows:</para>
<programlisting>
&lt;check-for-live-server>true&lt;/check-for-live-server></programlisting>
&lt;ha-policy>
&lt;shared-store>
&lt;slave>
&lt;allow-failback>true&lt;/allow-failback>
&lt;/slave>
&lt;/shared-store>
&lt;/ha-policy></programlisting>
<section>
<title>All Shared Store Configuration</title>
<para>The following table lists all the <literal>ha-policy</literal> configuration elements for HA strategy
shared store for <literal>master</literal>:</para>
<table>
<tgroup cols="2">
<colspec colname="c1" colnum="1"/>
<colspec colname="c2" colnum="2"/>
<thead>
<row>
<entry>name</entry>
<entry>Description</entry>
</row>
</thead>
<tbody>
<row>
<entry><literal>failback-delay</literal></entry>
<entry>If a backup server is detected as being live, via the lock file, then the live server
will announce itself as a backup and wait this amount of time (in ms) before starting as
a live server</entry>
</row>
<row>
<entry><literal>failover-on-server-shutdown</literal></entry>
<entry>If set to true then when this server is stopped normally the backup will become live,
assuming failover. If false then the backup server will remain passive. Note that if this is false and you
want failover to occur, you can use the management API as explained at <xref linkend="management.core.server"/></entry>
</row>
</tbody>
</tgroup>
</table>
<para>The following table lists all the <literal>ha-policy</literal> configuration elements for HA strategy
Shared Store for <literal>slave</literal>:</para>
<table>
<tgroup cols="2">
<colspec colname="c1" colnum="1"/>
<colspec colname="c2" colnum="2"/>
<thead>
<row>
<entry>name</entry>
<entry>Description</entry>
</row>
</thead>
<tbody>
<row>
<entry><literal>failover-on-server-shutdown</literal></entry>
<entry>In the case of a backup that has become live, if set to true then when this server
is stopped normally the backup will become live, assuming failover. If false then the backup
server will remain passive. Note that if this is false and you want failover to occur, you can use
the management API as explained at <xref linkend="management.core.server"/></entry>
</row>
<row>
<entry><literal>allow-failback</literal></entry>
<entry>Whether a server will automatically stop when another server places a request to take over
its place. The use case is when the backup has failed over.</entry>
</row>
<row>
<entry><literal>failback-delay</literal></entry>
<entry>After failover, when the slave has become live, this is set on the new live server.
On starting, if a backup server is detected as being live via the lock file, then the live server
will announce itself as a backup and wait this amount of time (in ms) before starting as
a live server; however this is unlikely since this backup has just stopped anyway. It is also used
as the delay after failback before this backup will restart (if <literal>allow-failback</literal>
is set to true).</entry>
</row>
</tbody>
</tgroup>
</table>
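<para>As a sketch, a shared store <literal>slave</literal> combining these elements (values
illustrative) might be configured as:</para>
<programlisting>
&lt;ha-policy>
   &lt;shared-store>
      &lt;slave>
         &lt;allow-failback>true&lt;/allow-failback>
         &lt;failback-delay>5000&lt;/failback-delay>
      &lt;/slave>
   &lt;/shared-store>
&lt;/ha-policy>
</programlisting>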
</section>
</section>
<section id="ha.colocated">
<title>Colocated Backup Servers</title>
<para>It is also possible when running standalone to colocate backup servers in the same
JVM as another live server.The colocated backup will become a backup for another live
server in the cluster but not the one it shares the vm with. To configure a colocated
backup server simply add the following to the <literal>hornetq-configuration.xml</literal> file</para>
JVM as another live server. Live Servers can be configured to request another live server in the cluster
to start a backup server in the same JVM either using shared store or replication. The new backup server
will inherit its configuration from the live server creating it apart from its name, which will be set to
<literal>colocated_backup_n</literal> where n is the number of backups the server has created, and any directories
and its Connectors and Acceptors, which are discussed later on in this chapter. A live server can also
be configured to allow requests from backups and to limit how many backups it can start. This way
you can evenly distribute backups around the cluster. This is configured via the <literal>ha-policy</literal>
element in the <literal>hornetq-configuration.xml</literal> file like so:</para>
<programlisting>
&lt;backup-servers>
&lt;backup-server name="backup2" inherit-configuration="true" port-offset="1000">
&lt;configuration>
&lt;bindings-directory>target/server1/data/messaging/bindings&lt;/bindings-directory>
&lt;journal-directory>target/server1/data/messaging/journal&lt;/journal-directory>
&lt;large-messages-directory>target/server1/data/messaging/largemessages&lt;/large-messages-directory>
&lt;paging-directory>target/server1/data/messaging/paging&lt;/paging-directory>
&lt;/configuration>
&lt;/backup-server>
&lt;/backup-servers>
&lt;ha-policy>
&lt;replication>
&lt;colocated>
&lt;request-backup>true&lt;/request-backup>
&lt;max-backups>1&lt;/max-backups>
&lt;backup-request-retries>-1&lt;/backup-request-retries>
&lt;backup-request-retry-interval>5000&lt;/backup-request-retry-interval>
&lt;master/>
&lt;slave/>
&lt;/colocated>
&lt;/replication>
&lt;/ha-policy>
</programlisting>
<para> you will notice 3 attributes on the <literal>backup-server</literal>, <literal>name</literal>
which is a unique name used to identify the backup server, <literal>inherit-configuration</literal>
which if set to true means the server will inherit the configuration of its parent server
and <literal>port-offset</literal> which is what the port for any netty connectors or
acceptors will be increased by if the configuration is inherited.</para>
<para>it is also possible to configure the backup server in the normal way, in this example you will
notice we have changed the journal directories.</para>
<para>The above example is configured to use replication, in which case the <literal>master</literal> and
<literal>slave</literal> configurations must match those for normal replication, as in the previous section.
<literal>shared-store</literal> is also supported.</para>
<graphic fileref="images/ha-colocated.png" align="center"/>
<section id="ha.colocated.connectorsandacceptors">
<title>Configuring Connectors and Acceptors</title>
<para>If the HA Policy is colocated then connectors and acceptors will be inherited from the live server
creating it and offset depending on the setting of <literal>backup-port-offset</literal> configuration element.
If this is set to say 100 (which is the default) and a connector is using port 5445 then this will be
set to 5545 for the first server created, 5645 for the second and so on.</para>
<note><para>For INVM Connectors and Acceptors the id will have <literal>colocated_backup_n</literal> appended,
where n is the backup server number.</para></note>
<section id="ha.colocated.connectorsandacceptors.remote">
<title>Remote Connectors</title>
<para>It may be that some of the Connectors configured are for external servers and hence should be excluded from the offset;
for instance, a Connector used by the cluster connection to do quorum voting for a replicated backup server.
These can be omitted from being offset by adding them to the <literal>ha-policy</literal> configuration like so:</para>
<programlisting>
&lt;ha-policy>
&lt;replication>
&lt;colocated>
&lt;excludes>
&lt;connector-ref>remote-connector&lt;/connector-ref>
&lt;/excludes>
.........
&lt;/ha-policy>
</programlisting>
</section>
</section>
<section id="ha.colocated.directories">
<title>Configuring Directories</title>
<para>Directories for the journal, large messages and paging will be set according to what the HA strategy is.
If shared store is used, the requesting server will notify the target server of which directories to use. If replication
is configured then directories will be inherited from the creating server but will have the new backup's name
appended.</para>
</section>
<para>The following table lists all the <literal>ha-policy</literal> configuration elements:</para>
<table>
<tgroup cols="2">
<colspec colname="c1" colnum="1"/>
<colspec colname="c2" colnum="2"/>
<thead>
<row>
<entry>Name</entry>
<entry>Description</entry>
</row>
</thead>
<tbody>
<row>
<entry><literal>request-backup</literal></entry>
<entry>If true then the server will request a backup on another node</entry>
</row>
<row>
<entry><literal>backup-request-retries</literal></entry>
<entry>How many times the live server will try to request a backup; -1 means forever.</entry>
</row>
<row>
<entry><literal>backup-request-retry-interval</literal></entry>
<entry>How long to wait between retries when requesting a backup server.</entry>
</row>
<row>
<entry><literal>max-backups</literal></entry>
<entry>How many backups a live server can create for requests from other live servers.</entry>
</row>
<row>
<entry><literal>backup-port-offset</literal></entry>
<entry>The offset to use for the Connectors and Acceptors when creating a new backup server.</entry>
</row>
</tbody>
</tgroup>
</table>
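Putting these elements together, a colocated replication policy that requests a single backup might look like the following sketch (all values illustrative):

```xml
<ha-policy>
   <replication>
      <colocated>
         <request-backup>true</request-backup>
         <backup-request-retries>-1</backup-request-retries>
         <backup-request-retry-interval>5000</backup-request-retry-interval>
         <max-backups>1</max-backups>
         <backup-port-offset>100</backup-port-offset>
         <master/>
         <slave/>
      </colocated>
   </replication>
</ha-policy>
```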
</section>
</section>
<section id="ha.scaledown">
<title>Scaling Down</title>
<para>An alternative to using live/backup groups is to configure scale down. When configured for scale down, a server
can copy all its messages and transaction state to another live server. The advantage of this is that you don't need
full backups to provide some form of HA; however there are disadvantages with this approach, the first being that it
only deals with a server being stopped and not a server crash. The caveat here is if you configure a backup to scale down.</para>
<para>Another disadvantage is that it is possible to lose message ordering. This happens in the following scenario:
say you have two live servers and messages are distributed evenly between the servers from a single producer. If one
of the servers scales down, the messages sent back to the other server will be in the queue after the ones
already there; so server 1 could have messages 1,3,5,7,9 and server 2 would have 2,4,6,8,10, and if server 2 scales
down, the order in server 1 would be 1,3,5,7,9,2,4,6,8,10.</para>
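The loss of ordering can be illustrated with a small sketch, where the lists stand in for the two servers' queues:

```java
import java.util.ArrayList;
import java.util.List;

public class ScaleDownOrdering {
    // Appends the scaling-down server's messages behind the survivor's,
    // which is why global ordering is lost.
    static List<Integer> scaleDown(List<Integer> survivor, List<Integer> scalingDown) {
        List<Integer> merged = new ArrayList<>(survivor);
        merged.addAll(scalingDown);
        return merged;
    }

    public static void main(String[] args) {
        List<Integer> server1 = List.of(1, 3, 5, 7, 9);
        List<Integer> server2 = List.of(2, 4, 6, 8, 10);
        System.out.println(scaleDown(server1, server2)); // [1, 3, 5, 7, 9, 2, 4, 6, 8, 10]
    }
}
```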
<graphic fileref="images/ha-scaledown.png" align="center"/>
<para>The configuration for a live server to scale down would be something like:</para>
<programlisting>
&lt;ha-policy>
&lt;live-only>
&lt;scale-down>
&lt;connectors>
&lt;connector-ref>server1-connector&lt;/connector-ref>
&lt;/connectors>
&lt;/scale-down>
&lt;/live-only>
&lt;/ha-policy>
</programlisting>
<para>In this instance the server is configured to use a specific connector to scale down. If a connector is not
specified then the first INVM connector is chosen; this is to make scale down from a backup server easy to configure.
It is also possible to use discovery to scale down, which would look like:</para>
<programlisting>
&lt;ha-policy>
&lt;live-only>
&lt;scale-down>
&lt;discovery-group>my-discovery-group&lt;/discovery-group>
&lt;/scale-down>
&lt;/live-only>
&lt;/ha-policy>
</programlisting>
<section id="ha.scaledown.group">
<title>Scale Down with groups</title>
<para>It is also possible to configure servers to only scale down to servers that belong in the same group. This
is done by configuring the group like so:</para>
<programlisting>
&lt;ha-policy>
&lt;live-only>
&lt;scale-down>
...
&lt;group-name>my-group&lt;/group-name>
&lt;/scale-down>
&lt;/live-only>
&lt;/ha-policy>
</programlisting>
<para>In this scenario only servers that belong to the group <literal>my-group</literal> will be scaled down to.</para>
</section>
<section>
<title>Scale Down and Backups</title>
<para>It is also possible to mix scale down with HA via backup servers. If a slave is configured to scale down,
then after failover has occurred, instead of starting fully, the backup server will immediately scale down to
another live server. The most appropriate configuration for this is the <literal>colocated</literal> approach:
it means that as you bring up live servers they will automatically be backed up, and as live servers are
shut down, their messages are made available on another live server. A typical configuration would look like:</para>
<programlisting>
&lt;ha-policy>
&lt;replication>
&lt;colocated>
&lt;backup-request-retries>44&lt;/backup-request-retries>
&lt;backup-request-retry-interval>33&lt;/backup-request-retry-interval>
&lt;max-backups>3&lt;/max-backups>
&lt;request-backup>false&lt;/request-backup>
&lt;backup-port-offset>33&lt;/backup-port-offset>
&lt;master>
&lt;group-name>purple&lt;/group-name>
&lt;check-for-live-server>true&lt;/check-for-live-server>
&lt;cluster-name>abcdefg&lt;/cluster-name>
&lt;/master>
&lt;slave>
&lt;group-name>tiddles&lt;/group-name>
&lt;max-saved-replicated-journals-size>22&lt;/max-saved-replicated-journals-size>
&lt;cluster-name>33rrrrr&lt;/cluster-name>
&lt;restart-backup>false&lt;/restart-backup>
&lt;scale-down>
&lt;!--a grouping of servers that can be scaled down to-->
&lt;group-name>boo!&lt;/group-name>
&lt;!--either a discovery group-->
&lt;discovery-group>wahey&lt;/discovery-group>
&lt;/scale-down>
&lt;/slave>
&lt;/colocated>
&lt;/replication>
&lt;/ha-policy>
</programlisting>
</section>
<section id="ha.scaledown.client">
<title>Scale Down and Clients</title>
<para>When a server is stopping and preparing to scale down it will send a message to all its clients informing them
which server it is scaling down to before disconnecting them. At this point the client will reconnect; however this
will only succeed once the server has completed the scale down. This is to ensure that any state, such as queues or transactions,
is there for the client when it reconnects. The normal reconnect settings apply when the client is reconnecting, so
these should be high enough to deal with the time needed to scale down.</para>
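As a sketch, the reconnect settings for a JMS connection factory could be raised in hornetq-jms.xml along these lines (connector name, JNDI entry and values illustrative):

```xml
<connection-factory name="ConnectionFactory">
   <connectors>
      <connector-ref connector-name="netty"/>
   </connectors>
   <entries>
      <entry name="/ConnectionFactory"/>
   </entries>
   <retry-interval>2000</retry-interval>
   <retry-interval-multiplier>1.0</retry-interval-multiplier>
   <reconnect-attempts>300</reconnect-attempts>
</connection-factory>
```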
</section>
</section>
<section id="failover">
<title>Failover Modes</title>
<para>HornetQ defines two types of client failover:</para>

Binary file not shown.


Binary file not shown.


View File

@ -285,4 +285,23 @@ java.naming.factory.url.pkgs=org.jboss.naming:org.jnp.interfaces</programlisting
however in this version HornetQ will only support single transactions per session</para></note>
</section>
</section>
<section>
<title>OpenWire</title>
<para>HornetQ now supports the <ulink url="http://activemq.apache.org/openwire.html">OpenWire</ulink>
protocol so that an ActiveMQ JMS client can talk directly to a HornetQ server. To enable OpenWire support
you must configure a Netty Acceptor, like so:</para>
<programlisting>
&lt;acceptor name="openwire-acceptor">
&lt;factory-class>org.hornetq.core.remoting.impl.netty.NettyAcceptorFactory&lt;/factory-class>
&lt;param key="protocols" value="OPENWIRE"/>
&lt;param key="port" value="61616"/>
&lt;/acceptor>
</programlisting>
<para>The HornetQ server will then listen on port 61616 for incoming OpenWire commands. Please note that the "protocols" parameter is not mandatory here.
The OpenWire configuration conforms to HornetQ's "Single Port" feature. Please refer to
<link linkend="configuring-transports.single-port">Configuring Single Port</link> for details.</para>
<para>Please refer to the OpenWire example for more coding details.</para>
<para>Currently we support ActiveMQ clients that use standard JMS APIs. In the future we will add support
for some advanced, ActiveMQ-specific features to HornetQ.</para>
</section>
</chapter>

View File

@ -148,15 +148,16 @@
>core.server</literal>).</para>
</listitem>
<listitem>
<para>It is possible to stop the server and force failover to occur with any currently attached clients.</para>
<para>To do this use the <literal>forceFailover()</literal> on the <literal
>HornetQServerControl</literal> (with the ObjectName <literal
>org.hornetq:module=Core,type=Server</literal> or the resource name <literal
>core.server</literal>) </para>
<note>
<para>Since this method actually stops the server you will probably receive some sort of error
depending on which management service you use to call it.
</para>
</note>
</listitem>
</itemizedlist>
</section>
@ -834,7 +835,7 @@ notificationConsumer.setMessageListener(new MessageListener()
how to use a JMS <literal>MessageListener</literal> to receive management notifications
from HornetQ server.</para>
</section>
<section>
<section id="notification.types.and.headers">
<title>Notification Types and Headers</title>
<para>Below is a list of all the different kinds of notifications as well as which headers are
on the messages. Every notification has a <literal>_HQ_NotifType</literal> (value noted in parentheses)
@ -966,6 +967,14 @@ notificationConsumer.setMessageListener(new MessageListener()
<literal>_HQ_Address</literal>, <literal>_HQ_Distance</literal></para>
</listitem>
</itemizedlist>
<itemizedlist>
<listitem>
<para><literal>CONSUMER_SLOW</literal> (21)</para>
<para><literal>_HQ_Address</literal>, <literal>_HQ_ConsumerCount</literal>,
<literal>_HQ_RemoteAddress</literal>, <literal>_HQ_ConnectionName</literal>,
<literal>_HQ_ConsumerName</literal>, <literal>_HQ_SessionName</literal></para>
</listitem>
</itemizedlist>
</section>
</section>
<section id="management.message-counters">

View File

@ -109,6 +109,9 @@
&lt;redistribution-delay>0&lt;/redistribution-delay>
&lt;send-to-dla-on-no-route>true&lt;/send-to-dla-on-no-route>
&lt;address-full-policy>PAGE&lt;/address-full-policy>
&lt;slow-consumer-threshold>-1&lt;/slow-consumer-threshold>
&lt;slow-consumer-policy>NOTIFY&lt;/slow-consumer-policy>
&lt;slow-consumer-check-period>5&lt;/slow-consumer-check-period>
&lt;/address-setting>
&lt;/address-settings></programlisting>
<para>The idea with address settings is that you can provide a block of settings which will be
@ -154,7 +157,16 @@
See the following chapters for more info <xref linkend="flow-control"/>, <xref linkend="paging"/>.
</para>
<para><literal>slow-consumer-threshold</literal>. The minimum rate of message consumption allowed before a
consumer is considered "slow." Measured in messages-per-second. Default is -1 (i.e. disabled); any other valid
value must be greater than 0.</para>
<para><literal>slow-consumer-policy</literal>. What should happen when a slow consumer is detected.
<literal>KILL</literal> will kill the consumer's connection (which will obviously impact any other client
threads using that same connection). <literal>NOTIFY</literal> will send a CONSUMER_SLOW management
notification which an application could receive and take action with. See
<xref linkend="notification.types.and.headers"/> for more details on this notification.</para>
<para><literal>slow-consumer-check-period</literal>. How often to check for slow consumers on a particular queue.
Measured in minutes. Default is 5. See <xref linkend="slow-consumers"/> for more information about slow
consumer detection.</para>
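For instance, to flag consumers on JMS queues that acknowledge fewer than 10 messages per second, checking every minute, the settings might look like this (match pattern and values illustrative):

```xml
<address-setting match="jms.queue.#">
   <slow-consumer-threshold>10</slow-consumer-threshold>
   <slow-consumer-policy>NOTIFY</slow-consumer-policy>
   <slow-consumer-check-period>1</slow-consumer-check-period>
</address-setting>
```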
</section>
</chapter>

View File

@ -0,0 +1,53 @@
<?xml version="1.0" encoding="UTF-8"?>
<!-- ============================================================================= -->
<!-- Copyright © 2009 Red Hat, Inc. and others. -->
<!-- -->
<!-- The text of and illustrations in this document are licensed by Red Hat under -->
<!-- a Creative Commons Attribution-Share Alike 3.0 Unported license ("CC-BY-SA"). -->
<!-- -->
<!-- An explanation of CC-BY-SA is available at -->
<!-- -->
<!-- http://creativecommons.org/licenses/by-sa/3.0/. -->
<!-- -->
<!-- In accordance with CC-BY-SA, if you distribute this document or an adaptation -->
<!-- of it, you must provide the URL for the original version. -->
<!-- -->
<!-- Red Hat, as the licensor of this document, waives the right to enforce, -->
<!-- and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent -->
<!-- permitted by applicable law. -->
<!-- ============================================================================= -->
<!DOCTYPE chapter PUBLIC "-//OASIS//DTD DocBook XML V4.5//EN" "http://www.oasis-open.org/docbook/xml/4.5/docbookx.dtd" [
<!ENTITY % BOOK_ENTITIES SYSTEM "HornetQ_User_Manual.ent">
%BOOK_ENTITIES;
]>
<chapter id="slow-consumers">
<title>Detecting Slow Consumers</title>
<para>In this section we will discuss how HornetQ can be configured to deal with slow consumers. A slow consumer with
a server-side queue (e.g. JMS topic subscriber) can pose a significant problem for broker performance. If messages
build up in the consumer's server-side queue then memory will begin filling up and the broker may enter paging
mode which would impact performance negatively. However, criteria can be set so that consumers which don't
acknowledge messages quickly enough can potentially be disconnected from the broker which in the case of a
non-durable JMS subscriber would allow the broker to remove the subscription and all of its messages freeing up
valuable server resources.
</para>
<section id="slow.consumer.configuration">
<title>Configuration required for detecting slow consumers</title>
<para>By default the server will not detect slow consumers. If slow consumer detection is desired then see
<xref linkend="queue-attributes.address-settings"/>
for more details.
</para>
<para>The calculation to determine whether or not a consumer is slow only inspects the number of messages a
particular consumer has <emphasis>acknowledged</emphasis>. It doesn't take into account whether or not flow
control has been enabled on the consumer, whether or not the consumer is streaming a large message, etc. Keep
this in mind when configuring slow consumer detection.
</para>
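A rough sketch of such a check, assuming only acknowledge counts are considered (this is illustrative, not the broker's actual implementation):

```java
// Illustrative only -- not HornetQ's actual implementation. A consumer is
// considered slow when its acknowledge rate over the check period falls
// below slow-consumer-threshold (messages per second).
public class SlowConsumerCheck {
    static boolean isSlow(long messagesAcked, long periodSeconds, long thresholdPerSecond) {
        if (thresholdPerSecond < 0) {
            return false; // -1 disables slow consumer detection
        }
        return messagesAcked < thresholdPerSecond * periodSeconds;
    }

    public static void main(String[] args) {
        // 10 acks over a 5 minute period against a 1 msg/sec threshold
        System.out.println(isSlow(10, 300, 1));  // true
        System.out.println(isSlow(400, 300, 1)); // false
    }
}
```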
<para>Please note that slow consumer checks are performed using the scheduled thread pool and that each queue on
the broker with slow consumer detection enabled will cause a new entry in the internal
<literal>java.util.concurrent.ScheduledThreadPoolExecutor</literal> instance. If there are a high number of
queues and the <literal>slow-consumer-check-period</literal> is relatively low then there may be delays in
executing some of the checks. However, this will not impact the accuracy of the calculations used by the
detection algorithm. See <xref linkend="server.scheduled.thread.pool"/> for more details about this pool.
</para>
</section>
</chapter>

View File

@ -181,29 +181,40 @@
<section>
<title>A simple example of using Core</title>
<para>Here's a very simple program using the core messaging API to send and receive a
message. Logically it's comprised of two sections: firstly setting up the producer to
write a message to an <emphasis>address</emphasis>, and secondly, creating a
<emphasis>queue</emphasis> for the consumer, creating the consumer and
<emphasis>starting</emphasis> it.</para>
<programlisting>
ServerLocator locator = HornetQClient.createServerLocatorWithoutHA(new TransportConfiguration(
InVMConnectorFactory.class.getName()));
// In this simple example, we just use one session for both producing and receiving
ClientSessionFactory factory = locator.createClientSessionFactory();
ClientSession session = factory.createSession();
// A producer is associated with an address ...
// We need a queue attached to the address ...
session.createQueue("example", "example", true);
ClientProducer producer = session.createProducer("example");
ClientMessage message = session.createMessage(true);
message.getBodyBuffer().writeString("Hello");
// And a consumer attached to the queue ...
ClientConsumer consumer = session.createConsumer("example");
// Once we have a queue, we can send the message ...
producer.send(message);
// We need to start the session before we can -receive- messages ...
session.start();
ClientMessage msgReceived = consumer.receive();
System.out.println("message = " + msgReceived.getBodyBuffer().readString());

View File

@ -35,30 +35,26 @@
<title>Starting and Stopping the standalone server</title>
<para>In the distribution you will find a directory called <literal>bin</literal>.</para>
<para><literal>cd</literal> into that directory and you will find a Unix/Linux script called
<literal>run.sh</literal> and a windows batch file called <literal
>run.bat</literal></para>
<para>To run on Unix/Linux type <literal>./run.sh</literal></para>
<para>To run on Windows type <literal>run.bat</literal></para>
<literal>hornetq</literal> and a Windows script called <literal>hornetq.cmd</literal>.</para>
<para>To start the HornetQ instance on Unix/Linux type <literal>./hornetq run</literal></para>
<para>To start the HornetQ instance on Windows type <literal>hornetq.cmd run</literal></para>
<para>These scripts are very simple and basically just set-up the classpath and some JVM
parameters and start the JBoss Microcontainer. The Microcontainer is a light weight
container used to deploy the HornetQ POJO's</para>
<para>To stop the server you will also find a Unix/Linux script <literal>stop.sh</literal> and
a windows batch file <literal>stop.bat</literal></para>
<para>To run on Unix/Linux type <literal>./stop.sh</literal></para>
<para>To run on Windows type <literal>stop.bat</literal></para>
parameters and bootstrap the server using <ulink
url="https://github.com/airlift/airline">Airline</ulink>.</para>
<para>To stop the HornetQ instance you will use the same <literal>hornetq</literal> script.</para>
<para>To run on Unix/Linux type <literal>./hornetq stop</literal></para>
<para>To run on Windows type <literal>hornetq.cmd stop</literal></para>
<para>Please note that HornetQ requires a Java 6 or later runtime to run.</para>
<para>Both the run and the stop scripts use the config under <literal
>config/stand-alone/non-clustered</literal> by default. The configuration can be
changed by running <literal>./run.sh ../config/stand-alone/clustered</literal> or
another config of your choosing. This is the same for the stop script and the windows
bat files.</para>
<para>By default the <literal>config/non-clustered/bootstrap.xml</literal> configuration is used. The
configuration can be changed e.g. by running
<literal>./hornetq run -- xml:../config/clustered/bootstrap.xml</literal> or another config of
your choosing.</para>
</section>
<section>
<title>Server JVM settings</title>
<para>The run scripts <literal>run.sh</literal> and <literal>run.bat</literal> set some JVM
settings for tuning running on Java 6 and choosing the garbage collection policy. We
recommend using a parallel garbage collection algorithm to smooth out latency and
minimise large GC pauses.</para>
<para>The run scripts set some JVM settings for tuning the garbage collection policy
and heap size. We recommend using a parallel garbage collection algorithm to smooth
out latency and minimise large GC pauses.</para>
<para>By default HornetQ runs in a maximum of 1GiB of RAM. To increase the memory settings
change the <literal>-Xms</literal> and <literal>-Xmx</literal> memory settings as you
would for any Java program.</para>
@ -66,15 +62,7 @@
are the place to do it.</para>
</section>
<section>
<title>Server classpath</title>
<para>HornetQ looks for its configuration files on the Java classpath.</para>
<para>The scripts <literal>run.sh</literal> and <literal>run.bat</literal> specify the
classpath when calling Java to run the server.</para>
<para>In the distribution, the run scripts will add the non clustered configuration
directory to the classpath. This is a directory which contains a set of configuration
files for running the HornetQ server in a basic non-clustered configuration. In the
distribution this directory is <literal>config/stand-alone/non-clustered/</literal> from
the root of the distribution.</para>
<title>Pre-configured Options</title>
<para>The distribution contains several standard configuration sets for running:</para>
<itemizedlist>
<listitem>
@ -84,22 +72,20 @@
<para>Clustered stand-alone</para>
</listitem>
<listitem>
<para>Non clustered in JBoss Application Server</para>
<para>Replicated stand-alone</para>
</listitem>
<listitem>
<para>Clustered in JBoss Application Server</para>
<para>Shared-store stand-alone</para>
</listitem>
</itemizedlist>
<para>You can of course create your own configuration and specify any configuration
directory when running the run script.</para>
<para>Just make sure the directory is on the classpath and HornetQ will search there when
starting up.</para>
when running the run script.</para>
</section>
<section id="using-server.library.path">
<title>Library Path</title>
<para>If you're using the <link linkend="aio-journal">Asynchronous IO Journal</link> on
Linux, you need to specify <literal>java.library.path</literal> as a property on your
Java options. This is done automatically in the <literal>run.sh</literal> script.</para>
Java options. This is done automatically in the scripts.</para>
<para>If you don't specify <literal>java.library.path</literal> at your Java options then
the JVM will use the environment variable <literal>LD_LIBRARY_PATH</literal>.</para>
</section>
@ -111,19 +97,9 @@
</section>
<section id="using-server.configuration">
<title>Configuration files</title>
<para>The configuration directory is specified on the classpath in the run scripts <literal
>run.sh</literal> and <literal>run.bat</literal> This directory can contain the
following files.</para>
<para>The configuration file used to bootstrap the server (e.g. <literal>bootstrap.xml</literal>
by default) references the specific broker configuration files.</para>
<itemizedlist>
<listitem>
<para><literal>hornetq-beans.xml</literal> (or <literal
>hornetq-jboss-beans.xml</literal> if you're running inside JBoss
Application Server). This is the JBoss Microcontainer beans file which defines
what beans the Microcontainer should create and what dependencies to enforce
between them. Remember that HornetQ is just a set of POJOs. In the stand-alone
server, it's the JBoss Microcontainer which instantiates these POJOs and
enforces dependencies between them and other beans. </para>
</listitem>
<listitem>
<para><literal>hornetq-configuration.xml</literal>. This is the main HornetQ
configuration file. All the parameters in this file are described in <xref
@ -156,11 +132,6 @@
file. For more information on using JMS, please see <xref linkend="using-jms"
/>.</para>
</listitem>
<listitem>
<para><literal>logging.properties</literal> This is used to configure the logging
handlers used by the Java logger. For more information on configuring logging,
please see <xref linkend="logging"/>.</para>
</listitem>
</itemizedlist>
<note>
<para>The property <literal>file-deployment-enabled</literal> in the <literal
@ -184,190 +155,43 @@
>${hornetq.remoting.netty.host}</literal>, however the system property
<emphasis>must</emphasis> be supplied in that case.</para>
</section>
<section id="server.microcontainer.configuration">
<title>JBoss Microcontainer Beans File</title>
<para>The stand-alone server is basically a set of POJOs which are instantiated by the light
weight<ulink url="http://www.jboss.org/jbossmc/"> JBoss Microcontainer
</ulink>engine.</para>
<note>
<para>A beans file is also needed when the server is deployed in the JBoss Application
Server but this will deploy a slightly different set of objects since the
Application Server will already have things like security etc deployed.</para>
</note>
<para>Let's take a look at an example beans file from the stand-alone server:</para>
<section id="server.bootstrap.configuration">
<title>Bootstrap File</title>
<para>The stand-alone server is basically a set of POJOs which are instantiated by Airline commands.</para>
<para>The bootstrap file is very simple. Let's take a look at an example:</para>
<para>
<programlisting>
&lt;?xml version="1.0" encoding="UTF-8"?>
&lt;broker xmlns="http://hornetq.org/schema">
&lt;deployment xmlns="urn:jboss:bean-deployer:2.0">
&lt;file:core configuration="${hornetq.home}/config/stand-alone/non-clustered/hornetq-configuration.xml">&lt;/core>
&lt;file:jms configuration="${hornetq.home}/config/stand-alone/non-clustered/hornetq-jms.xml">&lt;/jms>
&lt;!-- MBean server -->
&lt;bean name="MBeanServer" class="javax.management.MBeanServer">
&lt;constructor factoryClass="java.lang.management.ManagementFactory"
factoryMethod="getPlatformMBeanServer"/>
&lt;/bean>
&lt;basic-security/>
&lt;!-- The core configuration -->
&lt;bean name="Configuration" class="org.hornetq.core.config.impl.FileConfiguration">
&lt;/bean>
&lt;naming bindAddress="localhost" port="1099" rmiBindAddress="localhost" rmiPort="1098"/>
&lt;!-- The security manager -->
&lt;bean name="HornetQSecurityManager" class="org.hornetq.spi.core.security.HornetQSecurityManagerImpl">
&lt;start ignored="true"/>
&lt;stop ignored="true"/>
&lt;/bean>
&lt;!-- The core server -->
&lt;bean name="HornetQServer" class="org.hornetq.core.server.impl.HornetQServerImpl">
&lt;constructor>
&lt;parameter>
&lt;inject bean="Configuration"/>
&lt;/parameter>
&lt;parameter>
&lt;inject bean="MBeanServer"/>
&lt;/parameter>
&lt;parameter>
&lt;inject bean="HornetQSecurityManager"/>
&lt;/parameter>
&lt;/constructor>
&lt;start ignored="true"/>
&lt;stop ignored="true"/>
&lt;/bean>
&lt;!-- The Stand alone server that controls the jndi server-->
&lt;bean name="StandaloneServer" class="org.hornetq.jms.server.impl.StandaloneNamingServer">
&lt;constructor>
&lt;parameter>
&lt;inject bean="HornetQServer"/>
&lt;/parameter>
&lt;/constructor>
&lt;property name="port">${jnp.port:1099}&lt;/property>
&lt;property name="bindAddress">${jnp.host:localhost}&lt;/property>
&lt;property name="rmiPort">${jnp.rmiPort:1098}&lt;/property>
&lt;property name="rmiBindAddress">${jnp.host:localhost}&lt;/property>
&lt;/bean>
&lt;!-- The JMS server -->
&lt;bean name="JMSServerManager" class="org.hornetq.jms.server.impl.JMSServerManagerImpl">
&lt;constructor>
&lt;parameter>
&lt;inject bean="HornetQServer"/>
&lt;/parameter>
&lt;/constructor>
&lt;/bean>
&lt;/deployment></programlisting>
&lt;/broker></programlisting>
</para>
<para>We can see that, as well as the core HornetQ server, the stand-alone server
instantiates various different POJOs; let's look at them in turn:</para>
<itemizedlist>
<listitem>
<para>MBeanServer</para>
<para>In order to provide a JMX management interface a JMS MBean server is necessary
in which to register the management objects. Normally this is just the default
platform MBean server available in the JVM instance. If you don't want to
provide a JMX management interface this can be commented out or removed.</para>
<para>core</para>
<para>Instantiates a core server using the configuration file from the
<literal>configuration</literal> attribute. This is the main broker POJO necessary
to do all the real messaging work.</para>
</listitem>
<listitem>
<para>Configuration</para>
<para>The HornetQ server is configured with a Configuration object. In the default
stand-alone set-up it uses a FileConfiguration object which knows to read
configuration information from the file system. In different configurations such
as embedded you might want to provide configuration information from somewhere
else.</para>
</listitem>
<listitem>
<para>Security Manager. The security manager used by the messaging server is
pluggable. The default one used just reads user-role information from the
<literal>hornetq-users.xml</literal> file on disk. However it can be
replaced by a JAAS security manager, or when running inside JBoss Application
Server it can be configured to use the JBoss AS security manager for tight
integration with JBoss AS security. If you've disabled security altogether you
can remove this too.</para>
</listitem>
<listitem>
<para>HornetQServer</para>
<para>This is the core server. It's where 99% of the magic happens</para>
</listitem>
<listitem>
<para>StandaloneServer</para>
<para>Many clients like to look up JMS Objects from JNDI so we provide a JNDI server
for them to do that. This class is a wrapper around the JBoss naming server.
If you don't need JNDI this can be commented out or removed.</para>
</listitem>
<listitem id="bean-jmsservermanager">
<para>JMSServerManager</para>
<listitem id="jms">
<para>jms</para>
<para>This deploys any JMS Objects such as JMS Queues, Topics and ConnectionFactory
instances from <literal>hornetq-jms.xml</literal> files on the disk. It also
instances from the <literal>hornetq-jms.xml</literal> file specified. It also
provides a simple management API for manipulating JMS Objects. On the whole it
just translates and delegates its work to the core server. If you don't need to
deploy JMS Queues, Topics and ConnectionFactorys from server side configuration
deploy JMS Queues, Topics and ConnectionFactories from server side configuration
and don't require the JMS management interface this can be disabled.</para>
</listitem>
</itemizedlist>
</section>
<section id="server.microkernel.configuration">
<title>JBoss AS4 MBean Service.</title>
<note>
<para>The section is only to configure HornetQ on JBoss AS4. The service functionality is
similar to Microcontainer Beans</para>
</note>
<para>
<programlisting>
&lt;?xml version="1.0" encoding="UTF-8"?>
&lt;server>
&lt;mbean code="org.hornetq.service.HornetQFileConfigurationService"
name="org.hornetq:service=HornetQFileConfigurationService">
&lt;/mbean>
&lt;mbean code="org.hornetq.service.JBossASSecurityManagerService"
name="org.hornetq:service=JBossASSecurityManagerService">
&lt;/mbean>
&lt;mbean code="org.hornetq.service.HornetQStarterService"
name="org.hornetq:service=HornetQStarterService">
&lt;!--let's let the JMS Server start us-->
&lt;attribute name="Start">false&lt;/attribute>
&lt;depends optional-attribute-name="SecurityManagerService"
proxy-type="attribute">org.hornetq:service=JBossASSecurityManagerService&lt;/depends>
&lt;depends optional-attribute-name="ConfigurationService"
proxy-type="attribute">org.hornetq:service=HornetQFileConfigurationService&lt;/depends>
&lt;/mbean>
&lt;mbean code="org.hornetq.service.HornetQJMSStarterService"
name="org.hornetq:service=HornetQJMSStarterService">
&lt;depends optional-attribute-name="HornetQServer"
proxy-type="attribute">org.hornetq:service=HornetQStarterService&lt;/depends>
&lt;/mbean>
&lt;/server></programlisting>
</para>
<para>This jboss-service.xml configuration file is included inside the hornetq-service.sar
on AS4 with embedded HornetQ. As you can see, on this configuration file we are starting
various services:</para>
<itemizedlist>
<listitem>
<para>HornetQFileConfigurationService</para>
<para>This is an MBean Service that takes care of the life cycle of the <literal>FileConfiguration POJO</literal></para>
</listitem>
<listitem>
<para>JBossASSecurityManagerService</para>
<para>This is an MBean Service that takes care of the lifecycle of the <literal>JBossASSecurityManager</literal> POJO</para>
</listitem>
<listitem>
<para>HornetQStarterService</para>
<para>This is an MBean Service that controls the main <literal>HornetQServer</literal> POJO.
It has a dependency on the JBossASSecurityManagerService and HornetQFileConfigurationService MBeans</para>
</listitem>
<listitem>
<para>HornetQJMSStarterService</para>
<para>This is an MBean Service that controls the <literal>JMSServerManagerImpl</literal> POJO.
If you aren't using JMS this can be removed.</para>
</listitem>
<listitem>
<para>JMSServerManager</para>
<para>This has the responsibility of starting the JMSServerManager and has the same behaviour as the JMSServerManager Bean</para>
</listitem>
<listitem>
<para>naming</para>
<para>Instantiates a naming server which implements JNDI. This is used by JMS clients</para>
</listitem>
</itemizedlist>
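<para>For comparison, here is a minimal sketch of the equivalent Microcontainer bean declaration for the JMS server, matching the hornetq-jms-beans.xml shipped with the examples; it injects the core server into the <literal>JMSServerManagerImpl</literal> constructor rather than using an MBean dependency:</para>
<para>
<programlisting>
&lt;bean name="JMSServerManager" class="org.hornetq.jms.server.impl.JMSServerManagerImpl">
   &lt;constructor>
      &lt;parameter>
         &lt;inject bean="HornetQServer"/>
      &lt;/parameter>
   &lt;/constructor>
&lt;/bean></programlisting>
</para>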
</section>

View File

@ -21,6 +21,7 @@
<file name="en/configuration-index.xml"/>
<file name="en/configuring-transports.xml"/>
<file name="en/connection-ttl.xml"/>
<file name="en/slow-consumers.xml"/>
<file name="en/core-bridges.xml"/>
<file name="en/diverts.xml"/>
<file name="en/duplicate-detection.xml"/>

View File

@ -71,5 +71,16 @@
<property name="caseIndent" value="3"/>
<property name="throwsIndent" value="3"/>
</module>
<module name="IllegalImport">
<property name="illegalPkgs" value="junit.framework"/>
</module>
<!-- developed at https://github.com/hornetq/hornetq-checkstyle-checks -->
<module name="org.hornetq.checks.annotation.RequiredAnnotation">
<property name="annotationName" value="Parameters"/>
<property name="requiredParameters" value="name"/>
</module>
</module>
</module>

View File

@ -1,84 +0,0 @@
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
<modelVersion>4.0.0</modelVersion>
<parent>
<groupId>org.hornetq.examples.core</groupId>
<artifactId>core-examples</artifactId>
<version>2.5.0-SNAPSHOT</version>
</parent>
<artifactId>hornetq-core-microcontainer-example</artifactId>
<packaging>jar</packaging>
<name>HornetQ Core Microcontainer Example</name>
<dependencies>
<dependency>
<groupId>org.hornetq</groupId>
<artifactId>hornetq-bootstrap</artifactId>
<version>${project.version}</version>
</dependency>
<dependency>
<groupId>org.hornetq</groupId>
<artifactId>hornetq-server</artifactId>
<version>${project.version}</version>
</dependency>
<dependency>
<groupId>org.hornetq</groupId>
<artifactId>hornetq-core-client</artifactId>
<version>${project.version}</version>
</dependency>
<dependency>
<groupId>org.hornetq</groupId>
<artifactId>hornetq-commons</artifactId>
<version>${project.version}</version>
</dependency>
<dependency>
<groupId>io.netty</groupId>
<artifactId>netty-all</artifactId>
<version>${netty.version}</version>
</dependency>
<dependency>
<groupId>org.jboss.javaee</groupId>
<artifactId>jboss-jms-api</artifactId>
<version>1.1.0.GA</version>
</dependency>
<dependency>
<groupId>org.jboss.naming</groupId>
<artifactId>jnp-client</artifactId>
<version>5.0.5.Final</version>
</dependency>
<dependency>
<groupId>org.jboss.spec.javax.jms</groupId>
<artifactId>jboss-jms-api_2.0_spec</artifactId>
</dependency>
</dependencies>
<profiles>
<profile>
<id>example</id>
<build>
<plugins>
<plugin>
<groupId>org.codehaus.mojo</groupId>
<artifactId>exec-maven-plugin</artifactId>
<version>1.1</version>
<executions>
<execution>
<phase>package</phase>
<goals>
<goal>java</goal>
</goals>
</execution>
</executions>
<configuration>
<mainClass>org.hornetq.core.example.EmbeddedMicroContainerExample</mainClass>
</configuration>
</plugin>
</plugins>
</build>
</profile>
</profiles>
</project>

View File

@ -1,82 +0,0 @@
<html>
<head>
<title>HornetQ Embedded Example</title>
<link rel="stylesheet" type="text/css" href="../../common/common.css" />
<link rel="stylesheet" type="text/css" href="../../common/prettify.css" />
<script type="text/javascript" src="../../common/prettify.js"></script>
</head>
<body onload="prettyPrint()">
<h1>Micro Container Example</h1>
<p>This example shows how to set up and run HornetQ through the Micro Container.</p>
<p>Refer to the user's manual for the list of required jars, since the JBoss Micro Container requires a few extra jars.</p>
<h2>Example step-by-step</h2>
<p><i>To run the example, simply type <code>mvn verify</code> from this directory</i></p>
<p>In this example the server is embedded: it is bootstrapped through the Micro Container from <code>hornetq-beans.xml</code>, started, and then operated on directly through the core API.</p>
<ol>
<li>Start the server</li>
<pre class="prettyprint">
hornetq = new HornetQBootstrapServer("./server0/hornetq-beans.xml");
hornetq.run();
</pre>
<li>As we are not using a JNDI environment we instantiate the objects directly</li>
<pre class="prettyprint">
ServerLocator serverLocator = HornetQClient.createServerLocatorWithoutHA(new TransportConfiguration(NettyConnectorFactory.class.getName()));
ClientSessionFactory sf = serverLocator.createSessionFactory();
</pre>
<li>Create a Core Queue</li>
<pre class="prettyprint">
ClientSession coreSession = sf.createSession(false, false, false);
final String queueName = "queue.exampleQueue";
coreSession.createQueue(queueName, queueName, true);
coreSession.close();
</pre>
<li>Create the session and producer</li>
<pre class="prettyprint">
session = sf.createSession();
ClientProducer producer = session.createProducer(queueName);
</pre>
<li>Create and send a Message</li>
<pre class="prettyprint">
ClientMessage message = session.createMessage(false);
message.putStringProperty(propName, "Hello sent at " + new Date());
System.out.println("Sending the message.");
producer.send(message);
</pre>
<li>Create the message consumer and start the connection</li>
<pre class="prettyprint">
ClientConsumer messageConsumer = session.createConsumer(queueName);
session.start();
</pre>
<li>Receive the message</li>
<pre class="prettyprint">
ClientMessage messageReceived = messageConsumer.receive(1000);
System.out.println("Received TextMessage:" + messageReceived.getStringProperty(propName));
</pre>
<li>Be sure to close our resources!</li>
<pre class="prettyprint">
if (sf != null)
{
sf.close();
}
</pre>
<li>Stop the server</li>
<pre class="prettyprint">
hornetq.shutDown();
</pre>
</ol>
</body>
</html>

View File

@ -1,108 +0,0 @@
/*
* Copyright 2005-2014 Red Hat, Inc.
* Red Hat licenses this file to you under the Apache License, version
* 2.0 (the "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
* http://www.apache.org/licenses/LICENSE-2.0
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
* implied. See the License for the specific language governing
* permissions and limitations under the License.
*/
package org.hornetq.core.example;
import java.util.Date;
import org.hornetq.api.core.TransportConfiguration;
import org.hornetq.api.core.client.*;
import org.hornetq.core.remoting.impl.netty.NettyConnectorFactory;
import org.hornetq.integration.bootstrap.HornetQBootstrapServer;
/**
*
* This example shows how to run a HornetQ core client and server embedded in your
* own application
*
* @author <a href="mailto:tim.fox@jboss.com">Tim Fox</a>
*
*/
public class EmbeddedMicroContainerExample
{
public static void main(final String[] args) throws Exception
{
HornetQBootstrapServer hornetQ = null;
try
{
// Step 1. Start the server
hornetQ = new HornetQBootstrapServer("hornetq-beans.xml");
hornetQ.run();
// Step 2. As we are not using a JNDI environment we instantiate the objects directly
ServerLocator serverLocator = HornetQClient.createServerLocatorWithoutHA(new TransportConfiguration(NettyConnectorFactory.class.getName()));
ClientSessionFactory sf = serverLocator.createSessionFactory();
// Step 3. Create a core queue
ClientSession coreSession = sf.createSession(false, false, false);
final String queueName = "queue.exampleQueue";
coreSession.createQueue(queueName, queueName, true);
coreSession.close();
ClientSession session = null;
try
{
// Step 4. Create the session, and producer
session = sf.createSession();
ClientProducer producer = session.createProducer(queueName);
// Step 5. Create and send a message
ClientMessage message = session.createMessage(false);
final String propName = "myprop";
message.putStringProperty(propName, "Hello sent at " + new Date());
System.out.println("Sending the message.");
producer.send(message);
// Step 6. Create the message consumer and start the connection
ClientConsumer messageConsumer = session.createConsumer(queueName);
session.start();
// Step 7. Receive the message.
ClientMessage messageReceived = messageConsumer.receive(1000);
System.out.println("Received TextMessage:" + messageReceived.getStringProperty(propName));
}
finally
{
// Step 8. Be sure to close our resources!
if (sf != null)
{
sf.close();
}
// Step 9. Shutdown the container
if (hornetQ != null)
{
hornetQ.shutDown();
}
}
}
catch (Exception e)
{
e.printStackTrace();
throw e;
}
}
}

View File

@ -1,36 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<deployment xmlns="urn:jboss:bean-deployer:2.0">
<!-- MBean server -->
<bean name="MBeanServer" class="javax.management.MBeanServer">
<constructor factoryClass="java.lang.management.ManagementFactory"
factoryMethod="getPlatformMBeanServer"/>
</bean>
<!-- The core configuration -->
<bean name="Configuration" class="org.hornetq.core.config.impl.FileConfiguration"/>
<!-- The security manager -->
<bean name="HornetQSecurityManager" class="org.hornetq.spi.core.security.HornetQSecurityManagerImpl">
<start ignored="true"/>
<stop ignored="true"/>
</bean>
<!-- The core server -->
<bean name="HornetQServer" class="org.hornetq.core.server.impl.HornetQServerImpl">
<constructor>
<parameter>
<inject bean="Configuration"/>
</parameter>
<parameter>
<inject bean="MBeanServer"/>
</parameter>
<parameter>
<inject bean="HornetQSecurityManager"/>
</parameter>
</constructor>
</bean>
</deployment>

View File

@ -1,29 +0,0 @@
<configuration xmlns="urn:hornetq"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="urn:hornetq /schema/hornetq-configuration.xsd">
<bindings-directory>target/data/messaging/bindings</bindings-directory>
<journal-directory>target/data/messaging/journal</journal-directory>
<large-messages-directory>target/data/messaging/largemessages</large-messages-directory>
<paging-directory>target/data/messaging/paging</paging-directory>
<!-- Acceptors -->
<acceptors>
<acceptor name="netty-acceptor">
<factory-class>org.hornetq.core.remoting.impl.netty.NettyAcceptorFactory</factory-class>
<param key="tcp-no-delay" value="false"/>
<param key="tcp-send-buffer-size" value="1048576"/>
<param key="tcp-receive-buffer-size" value="1048576"/>
</acceptor>
</acceptors>
<security-enabled>false</security-enabled>
<persistence-enabled>false</persistence-enabled>
</configuration>

View File

@ -1,17 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<deployment xmlns="urn:jboss:bean-deployer:2.0">
<!-- The core configuration -->
<bean name="Configuration" class="org.hornetq.core.config.impl.FileConfiguration"/>
<!-- The core server -->
<bean name="HornetQServer" class="org.hornetq.core.server.impl.HornetQServerImpl">
<constructor>
<parameter>
<inject bean="Configuration"/>
</parameter>
</constructor>
</bean>
</deployment>

View File

@ -1,59 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<deployment xmlns="urn:jboss:bean-deployer:2.0">
<bean name="Naming" class="org.jnp.server.NamingBeanImpl"/>
<!-- JNDI server. Disable this if you don't want JNDI -->
<bean name="JNDIServer" class="org.jnp.server.Main">
<property name="namingInfo">
<inject bean="Naming"/>
</property>
<property name="port">1099</property>
<property name="bindAddress">localhost</property>
<property name="rmiPort">1098</property>
<property name="rmiBindAddress">localhost</property>
</bean>
<!-- MBean server -->
<bean name="MBeanServer" class="javax.management.MBeanServer">
<constructor factoryClass="java.lang.management.ManagementFactory"
factoryMethod="getPlatformMBeanServer"/>
</bean>
<!-- The core configuration -->
<bean name="Configuration" class="org.hornetq.core.config.impl.FileConfiguration"/>
<!-- The security manager -->
<bean name="HornetQSecurityManager" class="org.hornetq.spi.core.security.HornetQSecurityManagerImpl">
<start ignored="true"/>
<stop ignored="true"/>
</bean>
<!-- The core server -->
<bean name="HornetQServer" class="org.hornetq.core.server.impl.HornetQServerImpl">
<constructor>
<parameter>
<inject bean="Configuration"/>
</parameter>
<parameter>
<inject bean="MBeanServer"/>
</parameter>
<parameter>
<inject bean="HornetQSecurityManager"/>
</parameter>
</constructor>
<start ignored="true"/>
<stop ignored="true"/>
</bean>
<!-- The JMS server -->
<bean name="JMSServerManager" class="org.hornetq.jms.server.impl.JMSServerManagerImpl">
<constructor>
<parameter>
<inject bean="HornetQServer"/>
</parameter>
</constructor>
</bean>
</deployment>

View File

@ -13,7 +13,7 @@
<name>HornetQ Vert.x Example</name>
<properties>
<vertx.version>2.1RC1</vertx.version>
<vertx.version>2.1.2</vertx.version>
</properties>
<dependencies>
<dependency>

View File

@ -1,59 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<deployment xmlns="urn:jboss:bean-deployer:2.0">
<bean name="Naming" class="org.jnp.server.NamingBeanImpl"/>
<!-- JNDI server. Disable this if you don't want JNDI -->
<bean name="JNDIServer" class="org.jnp.server.Main">
<property name="namingInfo">
<inject bean="Naming"/>
</property>
<property name="port">1099</property>
<property name="bindAddress">localhost</property>
<property name="rmiPort">1098</property>
<property name="rmiBindAddress">localhost</property>
</bean>
<!-- MBean server -->
<bean name="MBeanServer" class="javax.management.MBeanServer">
<constructor factoryClass="java.lang.management.ManagementFactory"
factoryMethod="getPlatformMBeanServer"/>
</bean>
<!-- The core configuration -->
<bean name="Configuration" class="org.hornetq.core.config.impl.FileConfiguration"/>
<!-- The security manager -->
<bean name="HornetQSecurityManager" class="org.hornetq.spi.core.security.HornetQSecurityManagerImpl">
<start ignored="true"/>
<stop ignored="true"/>
</bean>
<!-- The core server -->
<bean name="HornetQServer" class="org.hornetq.core.server.impl.HornetQServerImpl">
<constructor>
<parameter>
<inject bean="Configuration"/>
</parameter>
<parameter>
<inject bean="MBeanServer"/>
</parameter>
<parameter>
<inject bean="HornetQSecurityManager"/>
</parameter>
</constructor>
<start ignored="true"/>
<stop ignored="true"/>
</bean>
<!-- The JMS server -->
<bean name="JMSServerManager" class="org.hornetq.jms.server.impl.JMSServerManagerImpl">
<constructor>
<parameter>
<inject bean="HornetQServer"/>
</parameter>
</constructor>
</bean>
</deployment>

View File

@ -44,4 +44,4 @@ This feature can also be controlled using the system property arquillian.deploym
</container>
-->
</arquillian>
</arquillian>

View File

@ -10,7 +10,11 @@
<paging-directory>${build.directory}/server0/data/messaging/paging</paging-directory>
<ha-policy template="SHARED_STORE"/>
<ha-policy>
<shared-store>
<master/>
</shared-store>
</ha-policy>
<!-- Connectors -->
<connectors>

View File

@ -10,7 +10,11 @@
<paging-directory>${build.directory}/server0/data/messaging/paging</paging-directory>
<ha-policy template="BACKUP_SHARED_STORE"/>
<ha-policy>
<shared-store>
<slave/>
</shared-store>
</ha-policy>
<!-- Connectors -->
<connectors>

View File

@ -80,37 +80,37 @@ public class ClusteredGroupingExample extends HornetQExample
// Step 7. We create a JMS Connection connection1 which is a connection to server 1
connection1 = cf1.createConnection();
// Step 7. We create a JMS Connection connection1 which is a connection to server 1
// Step 8. We create a JMS Connection connection2 which is a connection to server 2
connection2 = cf2.createConnection();
// Step 8. We create a JMS Session on server 0
// Step 9. We create a JMS Session on server 0
Session session0 = connection0.createSession(false, Session.AUTO_ACKNOWLEDGE);
// Step 9. We create a JMS Session on server 1
// Step 10. We create a JMS Session on server 1
Session session1 = connection1.createSession(false, Session.AUTO_ACKNOWLEDGE);
// Step 10. We create a JMS Session on server 1
// Step 11. We create a JMS Session on server 1
Session session2 = connection1.createSession(false, Session.AUTO_ACKNOWLEDGE);
// Step 11. We start the connections to ensure delivery occurs on them
// Step 12. We start the connections to ensure delivery occurs on them
connection0.start();
connection1.start();
connection2.start();
// Step 12. We create JMS MessageConsumer objects on server 0
// Step 13. We create JMS MessageConsumer objects on server 0
MessageConsumer consumer = session0.createConsumer(queue);
// Step 13. We create a JMS MessageProducer object on server 0, 1 and 2
// Step 14. We create a JMS MessageProducer object on server 0, 1 and 2
MessageProducer producer0 = session0.createProducer(queue);
MessageProducer producer1 = session1.createProducer(queue);
MessageProducer producer2 = session2.createProducer(queue);
// Step 14. We send some messages to server 0, 1 and 2 with the same groupid set
// Step 15. We send some messages to server 0, 1 and 2 with the same groupid set
final int numMessages = 10;
@ -148,7 +148,7 @@ public class ClusteredGroupingExample extends HornetQExample
System.out.println("Sent messages: " + message.getText() + " to node 2");
}
// Step 15. We now consume those messages from server 0
// Step 16. We now consume those messages from server 0
// We note the messages have all been sent to the same consumer on the same node
for (int i = 0; i < numMessages * 3; i++)
@ -163,7 +163,7 @@ public class ClusteredGroupingExample extends HornetQExample
}
finally
{
// Step 16. Be sure to close our resources!
// Step 17. Be sure to close our resources!
if (connection0 != null)
{
@ -197,4 +197,4 @@ public class ClusteredGroupingExample extends HornetQExample
}
}
}
}

View File

@ -1,60 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<deployment xmlns="urn:jboss:bean-deployer:2.0">
<bean name="Naming" class="org.jnp.server.NamingBeanImpl"/>
<!-- JNDI server. Disable this if you don't want JNDI -->
<bean name="JNDIServer" class="org.jnp.server.Main">
<property name="namingInfo">
<inject bean="Naming"/>
</property>
<property name="port">1099</property>
<property name="bindAddress">localhost</property>
<property name="rmiPort">1098</property>
<property name="rmiBindAddress">localhost</property>
</bean>
<!-- MBean server -->
<bean name="MBeanServer" class="javax.management.MBeanServer">
<constructor factoryClass="java.lang.management.ManagementFactory"
factoryMethod="getPlatformMBeanServer"/>
</bean>
<!-- The core configuration -->
<bean name="Configuration" class="org.hornetq.core.config.impl.FileConfiguration"/>
<!-- The security manager -->
<bean name="HornetQSecurityManager" class="org.hornetq.spi.core.security.HornetQSecurityManagerImpl">
<start ignored="true"/>
<stop ignored="true"/>
</bean>
<!-- The core server -->
<bean name="HornetQServer" class="org.hornetq.core.server.impl.HornetQServerImpl">
<constructor>
<parameter>
<inject bean="Configuration"/>
</parameter>
<parameter>
<inject bean="MBeanServer"/>
</parameter>
<parameter>
<inject bean="HornetQSecurityManager"/>
</parameter>
</constructor>
<start ignored="true"/>
<stop ignored="true"/>
</bean>
<!-- The JMS server -->
<bean name="JMSServerManager" class="org.hornetq.jms.server.impl.JMSServerManagerImpl">
<constructor>
<parameter>
<inject bean="HornetQServer"/>
</parameter>
</constructor>
</bean>
</deployment>

View File

@ -1,59 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<deployment xmlns="urn:jboss:bean-deployer:2.0">
<bean name="Naming" class="org.jnp.server.NamingBeanImpl"/>
<!-- JNDI server. Disable this if you don't want JNDI -->
<bean name="JNDIServer" class="org.jnp.server.Main">
<property name="namingInfo">
<inject bean="Naming"/>
</property>
<property name="port">2099</property>
<property name="bindAddress">localhost</property>
<property name="rmiPort">2098</property>
<property name="rmiBindAddress">localhost</property>
</bean>
<!-- MBean server -->
<bean name="MBeanServer" class="javax.management.MBeanServer">
<constructor factoryClass="java.lang.management.ManagementFactory"
factoryMethod="getPlatformMBeanServer"/>
</bean>
<!-- The core configuration -->
<bean name="Configuration" class="org.hornetq.core.config.impl.FileConfiguration"/>
<!-- The security manager -->
<bean name="HornetQSecurityManager" class="org.hornetq.spi.core.security.HornetQSecurityManagerImpl">
<start ignored="true"/>
<stop ignored="true"/>
</bean>
<!-- The core server -->
<bean name="HornetQServer" class="org.hornetq.core.server.impl.HornetQServerImpl">
<constructor>
<parameter>
<inject bean="Configuration"/>
</parameter>
<parameter>
<inject bean="MBeanServer"/>
</parameter>
<parameter>
<inject bean="HornetQSecurityManager"/>
</parameter>
</constructor>
<start ignored="true"/>
<stop ignored="true"/>
</bean>
<!-- The JMS server -->
<bean name="JMSServerManager" class="org.hornetq.jms.server.impl.JMSServerManagerImpl">
<constructor>
<parameter>
<inject bean="HornetQServer"/>
</parameter>
</constructor>
</bean>
</deployment>

View File

@ -104,7 +104,7 @@
<dependencies>
<dependency>
<groupId>org.hornetq.examples.jms</groupId>
<artifactId>colocated-failover-recover-only</artifactId>
<artifactId>colocated-failover-scale-down</artifactId>
<version>${project.version}</version>
</dependency>
<dependency>

View File

@ -12,19 +12,31 @@
HA Policy that is colocated. Colocated means that backup servers can be created and maintained by live servers on behalf
of other requesting live servers. In this example we create a colocated shared store server that will scale down.
That is, it will not become live but will instead scale down the journal to the colocated live server.
<p>This example starts 2 live servers each with a backup server that backs up the other live server.</p>
<p>This example starts 2 live servers, each of which will request the other to create a backup.</p>
<p>The first live server will be killed and the backup in the second will recover the journal and recreate its state
in the live server it shares its VM with.</p>
<p>The following shows how to configure the backup, the backup strategy is set to <b>SCALE_DOWN</b> which means
<p>The following shows how to configure the backup; the slave is configured with <b>&lt;scale-down/></b>, which means
that the backup server will not fully start on fail over, instead it will just recover the journal and write it
to its parent live server. Also notice we have over ridden some of the configuration since we want it to use the same
journal as server1 since it is using shared store.</p>
to its parent live server.</p>
<pre class="prettyprint">
<code>&lt;ha-policy template="COLOCATED_SHARED_STORE"/>
<code>&lt;ha-policy>
&lt;shared-store>
&lt;colocated>
&lt;backup-port-offset>100&lt;/backup-port-offset>
&lt;backup-request-retries>-1&lt;/backup-request-retries>
&lt;backup-request-retry-interval>2000&lt;/backup-request-retry-interval>
&lt;max-backups>1&lt;/max-backups>
&lt;request-backup>true&lt;/request-backup>
&lt;master/>
&lt;slave>
&lt;scale-down/>
&lt;/slave>
&lt;/colocated>
&lt;/shared-store>
&lt;/ha-policy>
</code>
</pre>
<p>note that for this HA policy we use a template that will use some sensibe settings, in this case this includes
setting scale down to true. Also note that since we dont specify a scale down connector it will use most appropriate
<p>Notice that we don't need to specify a scale-down connector, as the most appropriate one will be chosen
from the list of available connectors, which in this case is the first INVM connector</p>
<p>One other thing to notice is that the cluster connection has its reconnect attempts set to 5; this is so it will
disconnect instead of trying to reconnect to a backup that doesn't exist.</p>
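<p>For reference, a sketch of that cluster connection setting as it would appear in hornetq-configuration.xml; the connection name and the elided siblings here are illustrative:</p>
<pre class="prettyprint">
<code>&lt;cluster-connection name="my-cluster">
   ...
   &lt;reconnect-attempts>5&lt;/reconnect-attempts>
   ...
&lt;/cluster-connection>
</code>
</pre>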

Some files were not shown because too many files have changed in this diff