This is a very minor optimization, but it is trivial to implement, so we might as well.

```
Benchmark                                (nGramStrs)  Mode  Cnt        Score        Error  Units
NGramProcessorBenchmark.ngramInnerLoop   1,2,3        avgt   20  4415092.443 ±  31302.115  ns/op
NGramProcessorBenchmark.ngramOuterLoop   1,2,3        avgt   20  4235550.340 ± 103393.465  ns/op
```

These measurements are in nanoseconds; the overall cost of inference is dominated by other factors (e.g. `map#put`). But this optimization adds up over time and is simple.
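To illustrate why the loop order matters, here is a minimal standalone sketch (not the actual `NGram` class; the method names `ngramsOuterLoop`/`ngramsInnerLoop` and the extraction via `substring` are assumptions for illustration). With the n-gram sizes on the outer loop, the `break` skips all remaining positions for that size at once; with the positions on the outer loop, the `break` only skips the remaining sizes for the current position, so the position loop always runs to the end.

```java
import java.util.ArrayList;
import java.util.List;

public class NGramLoopOrder {
    // New order from this commit: n-gram sizes outside, positions inside.
    // Once a position is too close to the end for the current size, every
    // later position is too, so the break exits the position loop early.
    static List<String> ngramsOuterLoop(String value, int[] nGrams, int startPos, int len) {
        List<String> out = new ArrayList<>();
        for (int nGram : nGrams) {
            for (int i = 0; i < len; i++) {
                if (startPos + i + nGram > len) {
                    break; // all remaining positions are too short for this size
                }
                out.add(value.substring(startPos + i, startPos + i + nGram));
            }
        }
        return out;
    }

    // Previous order: positions outside, sizes inside. The break only skips
    // the remaining sizes at this position; the outer loop still visits
    // every position.
    static List<String> ngramsInnerLoop(String value, int[] nGrams, int startPos, int len) {
        List<String> out = new ArrayList<>();
        for (int i = 0; i < len; i++) {
            for (int nGram : nGrams) {
                if (startPos + i + nGram > len) {
                    break; // only skips larger sizes at this position
                }
                out.add(value.substring(startPos + i, startPos + i + nGram));
            }
        }
        return out;
    }

    public static void main(String[] args) {
        int[] sizes = {1, 2, 3};
        List<String> a = ngramsOuterLoop("example", sizes, 0, 7);
        List<String> b = ngramsInnerLoop("example", sizes, 0, 7);
        // Both orders yield the same n-grams; only the emission order differs.
        System.out.println(a.size() + " " + b.size());
    }
}
```

Both variants produce the same set of n-grams, so the swap is behavior-preserving; the win is purely the earlier loop exit (and slightly cheaper loop bookkeeping), which is why the measured difference is small.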
This commit is contained in:
parent b7c47b1717
commit 0860746bf2
```diff
@@ -184,8 +184,8 @@ public class NGram implements LenientlyParsedPreProcessor, StrictlyParsedPreProc
         }
         final int startPos = start < 0 ? (stringValue.length() + start) : start;
         final int len = Math.min(startPos + length, stringValue.length());
-        for (int i = 0; i < len; i++) {
-            for (int nGram : nGrams) {
+        for (int nGram : nGrams) {
+            for (int i = 0; i < len; i++) {
                 if (startPos + i + nGram > len) {
                     break;
                 }
```