Bracketing utility for univariate root solvers now returns a tighter
interval than before. It also allows choosing the search interval
expansion rate, supporting both linear and asymptotically exponential
rates.
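For illustration, a minimal sketch of the expansion scheme (a hypothetical
helper, not the actual utility's signature): with rate r = 1 the interval
grows linearly in the step q, while r > 1 gives asymptotically exponential
growth.

    import java.util.function.DoubleUnaryOperator;

    final class BracketSketch {
        /** Expand [a, b] around start until f changes sign. */
        static double[] bracket(DoubleUnaryOperator f, double start,
                                double q, double r, int maxIterations) {
            double delta = q;
            double a = start;
            double b = start;
            double fa = f.applyAsDouble(a);
            double fb = fa;
            for (int i = 0; i < maxIterations && fa * fb > 0; i++) {
                a -= delta;
                b += delta;
                fa = f.applyAsDouble(a);
                fb = f.applyAsDouble(b);
                delta = r * delta + q; // r = 1: linear; r > 1: exponential
            }
            if (fa * fb > 0) {
                throw new IllegalStateException("no bracket found");
            }
            return new double[] { a, b };
        }
    }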
git-svn-id: https://svn.apache.org/repos/asf/commons/proper/math/trunk@1578428 13f79535-47bb-0310-9956-ffa450edef68
As function evaluations were not counted properly (this was fixed a few
days ago), the multi-start test did not in fact perform all the expected
restarts. This commit fixes that by increasing the maximum number of
evaluations, and fixes the result checks accordingly.
git-svn-id: https://svn.apache.org/repos/asf/commons/proper/math/trunk@1573523 13f79535-47bb-0310-9956-ffa450edef68
Allow using the SVD decomposition, which has excellent stability and can
provide a "solution" even for singular problems.
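As a quick sketch of that behavior on hypothetical 2x2 data (the
SingularValueDecomposition solver is the one from the linear package):

    import org.apache.commons.math3.linear.Array2DRowRealMatrix;
    import org.apache.commons.math3.linear.ArrayRealVector;
    import org.apache.commons.math3.linear.DecompositionSolver;
    import org.apache.commons.math3.linear.RealMatrix;
    import org.apache.commons.math3.linear.RealVector;
    import org.apache.commons.math3.linear.SingularValueDecomposition;

    public class SvdSolverSketch {
        public static void main(String[] args) {
            // Rank-deficient system: the second row duplicates the first.
            RealMatrix a = new Array2DRowRealMatrix(
                new double[][] { { 1, 2 }, { 1, 2 } });
            RealVector b = new ArrayRealVector(new double[] { 3, 3 });

            // An LU or QR based solver reports a singular matrix here; the
            // SVD solver returns the minimum-norm least-squares solution.
            DecompositionSolver solver =
                new SingularValueDecomposition(a).getSolver();
            System.out.println(solver.solve(b));
        }
    }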
Figured out why one of the least squares tests did not check two of the
states: there was only one equation for the two states. The test now
verifies that the states satisfy the constraint.
Patch provided by Evan Ward.
JIRA: MATH-1104
git-svn-id: https://svn.apache.org/repos/asf/commons/proper/math/trunk@1573351 13f79535-47bb-0310-9956-ffa450edef68
In LevenbergMarquardtOptimizer the number of iterations and the number of
evaluations were swapped in two of the return statements.
Patch provided by Evan Ward.
JIRA: MATH-1106
git-svn-id: https://svn.apache.org/repos/asf/commons/proper/math/trunk@1573308 13f79535-47bb-0310-9956-ffa450edef68
ConvergenceCheckers always saw previous.getPoint() as equal to
current.getPoint(), because the optimizers reused the same array and did
not make a copy of the previous point. Fixed by using a new array for
each model evaluation; LSP.evaluate() now makes its own copy of the
state vector as well.
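A minimal sketch of the aliasing issue, with simplified hypothetical names:

    public class AliasingSketch {
        public static void main(String[] args) {
            double[] point = { 1.0, 2.0 };
            double[] previous = point;        // no copy: same array
            point[0] = 42.0;                  // compute the "new" iterate
            System.out.println(previous[0]);  // 42.0 -- previous moved too,
                                              // so the checker compared a
                                              // point with itself

            // Fix: give each iterate its own storage.
            double[] prevCopy = point.clone();
            point[0] = 7.0;
            System.out.println(prevCopy[0]);  // still 42.0
        }
    }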
Patch provided by Evan Ward.
JIRA: MATH-1103
git-svn-id: https://svn.apache.org/repos/asf/commons/proper/math/trunk@1573307 13f79535-47bb-0310-9956-ffa450edef68
Extracted class "LineSearch" from "PowellOptimizer".
Made method "computeObjectiveValue" public in "MultivariateOptimizer".
Modified "PowellOptimizer" to use the extracted class.
"NonLinearConjugateGradientOptimizer" uses the new "LineSearch".
Added constructors to set the line search tolerances and deprecated the
obsolete constructors and inner classes ("BracketingStep" and
"LineSearchFunction").
Removed method "findUpperBound".
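A hedged sketch of constructing the optimizers with explicit line search
tolerances; the argument lists follow the 3.3-era javadoc and should be
treated as assumptions, since the API was still moving at this revision:

    import org.apache.commons.math3.optim.SimpleValueChecker;
    import org.apache.commons.math3.optim.nonlinear.scalar.gradient.NonLinearConjugateGradientOptimizer;
    import org.apache.commons.math3.optim.nonlinear.scalar.noderiv.PowellOptimizer;

    public class LineSearchToleranceSketch {
        public static void main(String[] args) {
            // Conjugate gradient with explicit line search tolerances.
            NonLinearConjugateGradientOptimizer cg =
                new NonLinearConjugateGradientOptimizer(
                    NonLinearConjugateGradientOptimizer.Formula.POLAK_RIBIERE,
                    new SimpleValueChecker(1e-10, 1e-10),
                    1e-8,   // line search relative tolerance
                    1e-8,   // line search absolute tolerance
                    1.0);   // initial bracketing range

            // PowellOptimizer already took line search tolerances directly.
            PowellOptimizer powell =
                new PowellOptimizer(1e-10, 1e-30, 1e-8, 1e-24);
        }
    }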
git-svn-id: https://svn.apache.org/repos/asf/commons/proper/math/trunk@1572988 13f79535-47bb-0310-9956-ffa450edef68
- The residuals really are the values of the objective function
  that is being minimized.
- Forcing the value to be a real vector, with the residual computed
  through subtraction, is an unnecessary constraint. For example,
  Rotation.distance() will compute a better residual than subtracting
  the quaternion elements of two rotations (see the sketch below).
- The method was not used by any optimizer.
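A small sketch of the Rotation.distance() point, with hypothetical numbers:

    import org.apache.commons.math3.geometry.euclidean.threed.Rotation;
    import org.apache.commons.math3.geometry.euclidean.threed.Vector3D;

    public class RotationResidualSketch {
        public static void main(String[] args) {
            Rotation estimate = new Rotation(Vector3D.PLUS_K, 0.10);
            Rotation observed = new Rotation(Vector3D.PLUS_K, 0.12);

            // Geometrically meaningful residual: the angle of the rotation
            // taking one orientation to the other.
            double residual = Rotation.distance(estimate, observed);
            System.out.println(residual);

            // By contrast, the quaternions q and -q represent the same
            // rotation, so an element-wise difference of the quaternion
            // components can be large for two identical rotations.
        }
    }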
JIRA: MATH-1102
git-svn-id: https://svn.apache.org/repos/asf/commons/proper/math/trunk@1571306 13f79535-47bb-0310-9956-ffa450edef68
The documentation now explicitly states that the entries not visited by
the iteration are the zero ones.
This is part of Apache Commons Math reconsidering support for sparse
linear algebra.
JIRA: MATH-875
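The documented behavior, sketched on a hypothetical sparse vector:

    import java.util.Iterator;
    import org.apache.commons.math3.linear.OpenMapRealVector;
    import org.apache.commons.math3.linear.RealVector;

    public class SparseIteratorSketch {
        public static void main(String[] args) {
            RealVector v = new OpenMapRealVector(1000);
            v.setEntry(3, 2.5);
            v.setEntry(999, -1.0);

            // Only stored entries are visited; per the clarified javadoc,
            // every entry the iterator skips is exactly zero.
            for (Iterator<RealVector.Entry> it = v.sparseIterator();
                 it.hasNext();) {
                RealVector.Entry e = it.next();
                System.out.println(e.getIndex() + " -> " + e.getValue());
            }
        }
    }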
git-svn-id: https://svn.apache.org/repos/asf/commons/proper/math/trunk@1570510 13f79535-47bb-0310-9956-ffa450edef68
Theoretically QR offers the best blend of speed and numerical stability.
Empirically the QR implementation is slightly faster than the Cholesky
implementation.
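The two routes, sketched on a toy 3x2 problem (hypothetical data; the
conditioning remark is the standard argument for preferring QR):

    import org.apache.commons.math3.linear.Array2DRowRealMatrix;
    import org.apache.commons.math3.linear.ArrayRealVector;
    import org.apache.commons.math3.linear.CholeskyDecomposition;
    import org.apache.commons.math3.linear.QRDecomposition;
    import org.apache.commons.math3.linear.RealMatrix;
    import org.apache.commons.math3.linear.RealVector;

    public class QrVsCholeskySketch {
        public static void main(String[] args) {
            RealMatrix j = new Array2DRowRealMatrix(new double[][] {
                { 1, 1 }, { 1, 2 }, { 1, 3 } });
            RealVector r = new ArrayRealVector(new double[] { 1, 2, 2 });

            // QR solves the overdetermined system on J directly, so the
            // effective condition number is cond(J).
            RealVector viaQr = new QRDecomposition(j).getSolver().solve(r);

            // Cholesky needs the normal equations JtJ x = Jt r, squaring
            // the condition number to cond(J)^2.
            RealMatrix jtj = j.transpose().multiply(j);
            RealVector jtr = j.transpose().operate(r);
            RealVector viaChol =
                new CholeskyDecomposition(jtj).getSolver().solve(jtr);

            System.out.println(viaQr + " " + viaChol);
        }
    }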
git-svn-id: https://svn.apache.org/repos/asf/commons/proper/math/trunk@1569907 13f79535-47bb-0310-9956-ffa450edef68
Re-factored the code in GaussNewtonOptimizer so that the decomposition
algorithm sees the Jacobian and residuals instead of the normal
equation. This lets the QR algorithm operate directly on the Jacobian
matrix, which is faster and less sensitive to numerical errors. As a
result, one test case that threw a singular matrix exception now passes
with the QR decomposition.
The refactoring also includes a speed improvement when computing the
normal matrix for the LU decomposition. Since the normal matrix is
symmetric, only half of it is computed, which results in a factor-of-2
speed-up in computing the normal matrix for problems with many more
measurements than states.
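The symmetry trick, sketched on plain arrays (simplified relative to the
actual implementation):

    final class NormalMatrixSketch {
        /** N = Jt * J, computing only the upper triangle and mirroring. */
        static double[][] normalMatrix(double[][] j) {
            final int m = j.length;      // measurements (rows)
            final int n = j[0].length;   // states (columns)
            double[][] normal = new double[n][n];
            for (int row = 0; row < n; row++) {
                for (int col = row; col < n; col++) { // upper triangle only
                    double sum = 0;
                    for (int k = 0; k < m; k++) {
                        sum += j[k][row] * j[k][col];
                    }
                    normal[row][col] = sum;
                    normal[col][row] = sum;           // mirror by symmetry
                }
            }
            return normal;
        }
    }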
git-svn-id: https://svn.apache.org/repos/asf/commons/proper/math/trunk@1569905 13f79535-47bb-0310-9956-ffa450edef68
This is a reversal of a former decision, as we now think we should adopt
a generally accepted behavior which is ... to ignore the problems of
NaNs and infinities in sparse linear algebra entities.
JIRA: MATH-870 (which is therefore NOT fixed)
git-svn-id: https://svn.apache.org/repos/asf/commons/proper/math/trunk@1569825 13f79535-47bb-0310-9956-ffa450edef68
Added more test cases to EvaluationTest to test the other methods of
DenseWeightedEvaluation/UnweightedEvaluation.
Fixed a bug in DenseWeightedEvaluation.computeValue() where the weights
were not applied to the return value.
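The intended weighting, sketched with hypothetical helper names: the
weighted value is sqrtW * value, mirroring the sqrtW * J weighting already
applied to the Jacobian; the bug was returning the unweighted value.

    import org.apache.commons.math3.linear.RealMatrix;
    import org.apache.commons.math3.linear.RealVector;

    final class WeightedValueSketch {
        /** Weighted model value: sqrtW * value, matching sqrtW * J. */
        static RealVector weightedValue(RealMatrix sqrtWeight,
                                        RealVector value) {
            return sqrtWeight.operate(value);
        }
    }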
git-svn-id: https://svn.apache.org/repos/asf/commons/proper/math/trunk@1569361 13f79535-47bb-0310-9956-ffa450edef68
Previously JUnit would make the call to test a specific optimizer, and
then that method would call all of the individual test cases relevant to
that optimizer. Now JUnit will directly call each individual test case.
The same test coverage is preserved. The GaussNewtonOptimizerTest is
split into two classes, one for each decomposition algorithm it can use.
There is a significant amount of duplicated code between
GaussNewtonOptimizerWithLUTest and GaussNewtonOptimizerWithQRTest.
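The restructuring, sketched with hypothetical test names:

    import org.junit.Test;

    public class OptimizerTestSketch {
        // Before: one entry point ran every case, so JUnit reported a
        // single result and the first failure hid the remaining cases:
        //   @Test public void testOptimizer() { checkA(); checkB(); }

        // After: each case is its own @Test, run and reported separately.
        @Test
        public void testA() { checkA(); }

        @Test
        public void testB() { checkB(); }

        private void checkA() { /* exercise the optimizer on problem A */ }
        private void checkB() { /* exercise the optimizer on problem B */ }
    }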
git-svn-id: https://svn.apache.org/repos/asf/commons/proper/math/trunk@1569359 13f79535-47bb-0310-9956-ffa450edef68
* There are now 3 factory methods: one using the previous interfaces, one using
the new interfaces with weights, and one using the new interfaces without
weights (sketched below).
* Make the model(...) method public.
* Fix a javadoc typo.
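Roughly the three overload shapes, modeled on the 3.3-era
LeastSquaresFactory; treat the exact parameter lists as assumptions, since
the API was still moving at this revision:

    import org.apache.commons.math3.analysis.MultivariateMatrixFunction;
    import org.apache.commons.math3.analysis.MultivariateVectorFunction;
    import org.apache.commons.math3.fitting.leastsquares.LeastSquaresProblem;
    import org.apache.commons.math3.fitting.leastsquares.MultivariateJacobianFunction;
    import org.apache.commons.math3.linear.RealMatrix;
    import org.apache.commons.math3.linear.RealVector;
    import org.apache.commons.math3.optim.ConvergenceChecker;
    import org.apache.commons.math3.optim.PointVectorValuePair;

    interface FactorySketch {
        // New interfaces, unweighted.
        LeastSquaresProblem create(
            MultivariateJacobianFunction model,
            RealVector observed, RealVector start,
            ConvergenceChecker<LeastSquaresProblem.Evaluation> checker,
            int maxEvaluations, int maxIterations);

        // New interfaces, with a weight matrix.
        LeastSquaresProblem create(
            MultivariateJacobianFunction model,
            RealVector observed, RealVector start,
            RealMatrix weight,
            ConvergenceChecker<LeastSquaresProblem.Evaluation> checker,
            int maxEvaluations, int maxIterations);

        // Previous interfaces (value and Jacobian as separate functions).
        LeastSquaresProblem create(
            MultivariateVectorFunction model,
            MultivariateMatrixFunction jacobian,
            double[] observed, double[] start,
            RealMatrix weight,
            ConvergenceChecker<PointVectorValuePair> checker,
            int maxEvaluations, int maxIterations);
    }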
git-svn-id: https://svn.apache.org/repos/asf/commons/proper/math/trunk@1569355 13f79535-47bb-0310-9956-ffa450edef68
Converted all of the interfaces in the leastsquares package to use RealVector and
RealMatrix instead of double[] and double[][]. This removed some duplicated code.
For example, Evaluation.computeResiduals() was a complete duplication of
RealVector.subtract(). It also presents a consistent interface and allows data
encapsulation.
Lastly, this change enables [math] to "eat our own dog food": the linear
package can now be used in the implementation of the optimization algorithms.
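For instance (hypothetical data):

    import org.apache.commons.math3.linear.ArrayRealVector;
    import org.apache.commons.math3.linear.RealVector;

    public class ResidualSketch {
        public static void main(String[] args) {
            RealVector observed = new ArrayRealVector(new double[] { 1.0, 2.0 });
            RealVector modelValue = new ArrayRealVector(new double[] { 0.9, 2.2 });

            // With RealVector in the interfaces, the residual computation
            // collapses to one library call instead of a hand-rolled loop.
            RealVector residuals = observed.subtract(modelValue);
            System.out.println(residuals);
        }
    }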
git-svn-id: https://svn.apache.org/repos/asf/commons/proper/math/trunk@1569354 13f79535-47bb-0310-9956-ffa450edef68
Use Evaluation instead of PointVectorValuePair in the ConvergenceChecker. This
gives the checkers access to more information, such as the RMS and the
covariances. The change also simplified the optimizer implementations, since
they no longer have to keep track of the current function value.
A method was added to LeastSquaresFactory to convert between the two types of
checkers, and a method was added to LeastSquaresBuilder so that it can accept
either type. I would have preferred to do this through method overloading, but
overloading doesn't play well with generics.
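The shape of that conversion method, sketched with accessor names as in the
released 3.3 API; treat the details as assumptions:

    import org.apache.commons.math3.fitting.leastsquares.LeastSquaresProblem.Evaluation;
    import org.apache.commons.math3.optim.ConvergenceChecker;
    import org.apache.commons.math3.optim.PointVectorValuePair;

    final class CheckerAdapterSketch {
        /** Wrap an old-style checker for use where Evaluation is expected. */
        static ConvergenceChecker<Evaluation> evaluationChecker(
                final ConvergenceChecker<PointVectorValuePair> checker) {
            return new ConvergenceChecker<Evaluation>() {
                public boolean converged(int iteration,
                                         Evaluation previous,
                                         Evaluation current) {
                    return checker.converged(
                        iteration,
                        new PointVectorValuePair(
                            previous.getPoint().toArray(),
                            previous.getResiduals().toArray()),
                        new PointVectorValuePair(
                            current.getPoint().toArray(),
                            current.getResiduals().toArray()));
                }
            };
        }
    }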
git-svn-id: https://svn.apache.org/repos/asf/commons/proper/math/trunk@1569353 13f79535-47bb-0310-9956-ffa450edef68