Building on HBASE-15958.
Provided a ReplicationQueuesClientHBaseImpl that relies on the HBase Replication Table to track WAL queues.
Refactored out a large section of ReplicationQueuesHBaseImpl into a ReplicationTableClient class that handles Replication Table operations.
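For orientation, a client-side read of queue state from the Replication Table can look roughly like the sketch below; the table name "hbase:replication", the family "f", and the one-row-per-queue layout are assumptions for illustration, not the actual ReplicationTableClient schema.

    // Illustrative sketch only: the table name "hbase:replication", the family "f",
    // and the one-row-per-queue layout are assumptions, not the actual schema.
    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.*;
    import org.apache.hadoop.hbase.util.Bytes;

    public class ReplicationTableScanSketch {
      public static void main(String[] args) throws IOException {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Table replTable = conn.getTable(TableName.valueOf("hbase:replication"))) {
          // Scan only the rows whose keys start with the owning server's name,
          // i.e. the WAL queues that server currently owns.
          Scan scan = new Scan().setRowPrefixFilter(Bytes.toBytes("server1.example.com"));
          try (ResultScanner scanner = replTable.getScanner(scan)) {
            for (Result queueRow : scanner) {
              // Each cell under family "f" is assumed to map a WAL name to its offset.
              queueRow.getFamilyMap(Bytes.toBytes("f")).forEach((wal, offset) ->
                  System.out.println(Bytes.toString(queueRow.getRow()) + " "
                      + Bytes.toString(wal) + " -> " + Bytes.toString(offset)));
            }
          }
        }
      }
    }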
Signed-off-by: Elliott Clark <eclark@apache.org>
M hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/RpcExecutor.java
Refactor that introduces a Handler type and puts all 'handler' logic inside this
new type. Also makes it so a subclass can provide its own Handler type.
M hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/SimpleRpcScheduler.java
Name the handler threads for their type so we can tell whether configs are
having an effect.
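The shape of the refactor plus the thread naming can be sketched as follows; class and method names here are made up for illustration, not the actual RpcExecutor code.

    // Condensed sketch; names are illustrative, not the actual RpcExecutor code.
    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;

    abstract class SketchRpcExecutor {
      protected final BlockingQueue<Runnable> callQueue = new LinkedBlockingQueue<>();

      /** Handler: a daemon thread that pulls calls off the queue and runs them. */
      class Handler extends Thread {
        Handler(String name) {
          super(name);
          setDaemon(true);
        }
        @Override
        public void run() {
          while (!isInterrupted()) {
            try {
              callQueue.take().run();
            } catch (InterruptedException e) {
              Thread.currentThread().interrupt();
            }
          }
        }
      }

      /** Subclasses override this to provide their own Handler type. */
      protected Handler getHandler(String name) {
        return new Handler(name);
      }

      void startHandlers(String type, int count) {
        for (int i = 0; i < count; i++) {
          // Thread names like "priority.handler=2" make the configured counts
          // visible in jstack output.
          getHandler(type + ".handler=" + i).start();
        }
      }
    }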
Signed-off-by: stack <stack@apache.org>
Invoking 'hbase hfile' inside a servlet raises several concerns. This
patch avoids invoking a separate process, and also adds validation that
the file being read is at least inside the HBase root directory.
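A minimal sketch of this kind of validation, assuming the root directory comes from the "hbase.rootdir" config key; the helper class and method names are hypothetical.

    // Minimal sketch of the check, not the patch itself; the helper name is made up.
    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    final class HFilePathCheckerSketch {
      static Path validateUnderRootDir(Configuration conf, String requested) throws IOException {
        Path root = new Path(conf.get("hbase.rootdir"));
        FileSystem fs = root.getFileSystem(conf);
        // Qualify both paths against the same filesystem before comparing prefixes.
        Path qualifiedRoot = root.makeQualified(fs.getUri(), fs.getWorkingDirectory());
        Path candidate = new Path(requested).makeQualified(fs.getUri(), fs.getWorkingDirectory());
        if (!candidate.toUri().getPath().startsWith(qualifiedRoot.toUri().getPath() + Path.SEPARATOR)) {
          throw new IOException(requested + " is not under the HBase root directory");
        }
        return candidate;
      }
    }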
Signed-off-by: Mikhail Antonov <antonov@apache.org>
Building on HBASE-15883.
Now implementing the claim queues procedure within an HBase table.
Also added unit tests for claimQueue.
Peer tracking will still be performed by ZooKeeper, though.
Also modified the queueId tracking procedure so we no longer have to perform scans over the Replication Table.
This does make our queue naming scheme slightly different from ReplicationQueuesZKImpl's, though.
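The claim step can be pictured as an atomic ownership swap on the queue's row, roughly as in the sketch below; the table name, column family, and "owner" qualifier are hypothetical, but the checkAndPut pattern shows why two servers cannot both claim the same queue.

    // Sketch only: table, family, and "owner" qualifier are hypothetical.
    import java.io.IOException;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    final class ClaimQueueSketch {
      static final byte[] CF = Bytes.toBytes("f");
      static final byte[] OWNER = Bytes.toBytes("owner");

      /** Returns true if this server won the claim for the given queue row. */
      static boolean claim(Connection conn, byte[] queueRow, String deadServer, String me)
          throws IOException {
        try (Table replTable = conn.getTable(TableName.valueOf("hbase:replication"))) {
          Put takeOver = new Put(queueRow).addColumn(CF, OWNER, Bytes.toBytes(me));
          // The put succeeds only if the owner cell still names the dead server,
          // so at most one claimant can win.
          return replTable.checkAndPut(queueRow, CF, OWNER, Bytes.toBytes(deadServer), takeOver);
        }
      }
    }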
Signed-off-by: Elliott Clark <eclark@apache.org>
Changes how we do accounting of Connections to match how it is done in Hadoop.
Adds a ConnectionManager class. Adds new configurations for this new class.
"hbase.ipc.client.idlethreshold" 4000
"hbase.ipc.client.connection.idle-scan-interval.ms" 10000
"hbase.ipc.client.connection.maxidletime" 10000
"hbase.ipc.client.kill.max", 10
"hbase.ipc.server.handler.queue.size", 100
The new scheme does away with synchronization that purportedly would freeze out
reads while we were cleaning up stale connections (according to HADOOP-9955).
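Roughly, the idle scan amounts to the sketch below, wired to the configuration values listed above; the class and field names are invented, and the real ConnectionManager differs in detail.

    // Rough sketch; names are invented and the real ConnectionManager differs.
    import java.util.Set;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    final class IdleScanSketch {
      static final class Conn {
        volatile long lastContact = System.currentTimeMillis();
        void close() { /* drop the socket */ }
      }

      final Set<Conn> connections = ConcurrentHashMap.newKeySet();
      final int idleThreshold = 4000;    // hbase.ipc.client.idlethreshold
      final long maxIdleTime = 10000L;   // hbase.ipc.client.connection.maxidletime
      final int killMax = 10;            // hbase.ipc.client.kill.max
      final long scanInterval = 10000L;  // hbase.ipc.client.connection.idle-scan-interval.ms

      void start(ScheduledExecutorService scheduler) {
        scheduler.scheduleAtFixedRate(this::closeIdle, scanInterval, scanInterval,
            TimeUnit.MILLISECONDS);
      }

      void closeIdle() {
        if (connections.size() <= idleThreshold) {
          return;  // below the threshold, idle connections are left alone
        }
        long cutoff = System.currentTimeMillis() - maxIdleTime;
        int closed = 0;
        for (Conn c : connections) {
          if (closed >= killMax) {
            break;  // cap the work done per scan
          }
          if (c.lastContact < cutoff) {
            c.close();
            connections.remove(c);
            closed++;
          }
        }
      }
    }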
Also adds a new mechanism for accepting connections: pull in as many as we can
at a time and add them to a queue, instead of accepting one at a time. This can
help with bursty traffic, according to HADOOP-9956. Removes blocking while a
Reader is busy parsing a request. Adds the configuration
"hbase.ipc.server.read.connection-queue.size", with a default of 100, for the
queue size.
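The accept-and-queue mechanism (removed again by the follow-up noted below) can be pictured like this; the names are hypothetical and the server channel is assumed to be in non-blocking mode.

    // Hypothetical sketch; the server channel is assumed to be non-blocking.
    import java.io.IOException;
    import java.nio.channels.ServerSocketChannel;
    import java.nio.channels.SocketChannel;
    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    final class AcceptQueueSketch {
      // hbase.ipc.server.read.connection-queue.size (default 100)
      final BlockingQueue<SocketChannel> pendingConnections = new ArrayBlockingQueue<>(100);

      void doAccept(ServerSocketChannel server) throws IOException, InterruptedException {
        SocketChannel channel;
        // Drain every connection currently pending instead of accepting one per wakeup.
        while ((channel = server.accept()) != null) {
          channel.configureBlocking(false);
          // Readers take channels from this queue and parse requests; a busy Reader
          // no longer blocks the accept loop.
          pendingConnections.put(channel);
        }
      }
    }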
Signed-off-by: stack <stack@apache.org>
Adds HADOOP-9955 (RPC idle connection closing is extremely inefficient).
Then removes the queue added by HADOOP-9956, at Enis's suggestion.
- Testing by executing a command covers the exact path users will trigger, so it's better than directly calling library functions in tests. Changing the tests to use @shell.command(:<command>, args) to execute them like a command coming from the shell.
Norm change:
Commands should print the output the user would like to see, but, in the end, should also return the relevant value. This way:
- Tests can use the returned value to check that the functionality works.
- Tests can capture stdout to assert the particular kind of output the user should see.
- We do not print the return value in interactive mode, keeping the output clean. See the Shell.command() function.
Bugs found due to this change:
- Uncovered a bug in major_compact.rb with this approach: it was calling admin.majorCompact(), which doesn't exist, but our tests didn't catch it since they directly tested admin.major_compact().
- Enabled TestReplicationShell. If it's bad, flaky infra will take care of it.
Change-Id: I5d8af16bf477a79a2f526a5bf11c245b02b7d276
Functions format_simple_command and format_and_return_simple_command are currently used to print runtimes. They are called from within every single command and use Ruby's 'yield' magic. Instead, we can simplify this using the 'command_safe' function: since command_safe wraps all commands, we can simply record the time before and after calling the individual command.
If a command only wants to time part of its logic, it can set the instance variables start_time and end_time accordingly, which is far simpler to understand and work with than 'yield'.
Change-Id: Ibfacf3593175af22fc4f7d80896dd2f6d7c5dde3