logd: remove start filtration from flushTo

We have already searched for the start point, so the start filter
check is paranoia that removes out-of-order entries we are undoubtedly
interested in.  Out-of-order entries occur under reader pressure, when
the writer is pushed back from its in-place sorted position and the
entry lands at the end for the reader to pick up.  If this occurs
during a batch run or a logger thread wakeup, the entry could be
filtered out and never delivered to the reader.
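As an illustration (a toy sketch only; Entry, log and start below are
hypothetical stand-ins for LogBufferElement, the log list and the
flushTo() start argument), an out-of-order entry that lands after the
found start point but carries a timestamp at or before start is
silently dropped by the removed check:

#include <cstdint>
#include <iostream>
#include <list>

// Toy stand-ins; real logd uses LogBufferElement and a far richer flushTo().
struct Entry { uint64_t realtime; };

int main() {
    // Writer pushed back under reader pressure: the '99' entry lands at the
    // end of the list instead of in timestamp-sorted position.
    std::list<Entry> log = {{97}, {98}, {100}, {101}, {99}};
    uint64_t start = 99;

    // The start point has already been searched for (first entry newer than
    // start), as flushTo() does before its main loop.
    auto it = log.begin();
    while (it != log.end() && it->realtime <= start) ++it;

    // Flush from there.  The removed check re-applies the time filter and
    // silently drops the out-of-order '99' entry at the tail.
    for (; it != log.end(); ++it) {
        if (it->realtime <= start) continue;  // the removed paranoia check
        std::cout << it->realtime << '\n';    // prints 100, 101; 99 is lost
    }
    return 0;
}

Dropping the check lets that tail entry through; positioning is already
handled by the start point search.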

Found one case where the logcat.tail_time* tests failed; it is fixed
by this adjustment.

Test: gTest logd-unit-tests, liblog-unit-tests and logcat-unit-tests
Bug: 38046067
Bug: 37791296
Change-Id: Icbde6b33dca7ab98348c3a872793aeef3997d460
Author: Mark Salyzyn
Date:   2017-05-10 15:50:39 -07:00
parent 3d0186b97e
commit 982ad208b5

@@ -1142,10 +1142,6 @@ log_time LogBuffer::flushTo(SocketClient* reader, const log_time& start,
             continue;
         }
-        if (element->getRealTime() <= start) {
-            continue;
-        }
         // NB: calling out to another object with wrlock() held (safe)
         if (filter) {
             int ret = (*filter)(element, arg);
@@ -1172,11 +1168,10 @@ log_time LogBuffer::flushTo(SocketClient* reader, const log_time& start,
         unlock();
         // range locking in LastLogTimes looks after us
-        max = element->flushTo(reader, this, privileged, sameTid);
-        if (max == element->FLUSH_ERROR) {
-            return max;
-        }
+        log_time next = element->flushTo(reader, this, privileged, sameTid);
+        if (next == element->FLUSH_ERROR) return next;
+        if (next > max) max = next;
         skip = maxSkip;
         rdlock();
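
The second hunk changes how max is advanced.  A minimal sketch of the
ratcheting pattern (toy values and names, not logd code): assigning each
flush return value directly can move the reported watermark backwards when
an out-of-order entry is flushed last, while taking the maximum keeps it
monotonic.

#include <algorithm>
#include <cstdint>
#include <iostream>
#include <vector>

// Toy model only: each element "flush" returns its timestamp, and the
// caller reports the high-water mark it has flushed so far.
int main() {
    std::vector<uint64_t> flushed = {100, 101, 102, 99};  // 99 arrived late
    uint64_t clobbered = 1;  // old behaviour: max = next
    uint64_t ratcheted = 1;  // new behaviour: if (next > max) max = next
    for (uint64_t next : flushed) {
        clobbered = next;
        ratcheted = std::max(ratcheted, next);
    }
    std::cout << clobbered << ' ' << ratcheted << '\n';  // prints "99 102"
    return 0;
}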