If you want to get this data and you're running outside the router, just add
"-Dstat.logFilters=* -Dstat.logFile=proxy.stats" to the java command line. The resulting
file can be parsed with "java -cp lib/i2p.jar net.i2p.stat.StatLogSplitter proxy.stats", then fed
into gnuplot or whatever (see the sketch below).
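The same thing can be done from code if you're embedding the router; a minimal
sketch, assuming only that these are read as ordinary JVM system properties
(which is what the -D flags set):

    // minimal sketch: set the stat logging properties before startup,
    // equivalent to the -D flags above. only the property names come
    // from this note; the main() scaffolding is illustrative.
    public class EnableStatLog {
        public static void main(String[] args) {
            System.setProperty("stat.logFilters", "*");        // log all stats
            System.setProperty("stat.logFile", "proxy.stats"); // output file
            // ... then launch the router / client app as usual ...
        }
    }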
further and cancel all of the tags we're using for that peer so that we can react
more quickly to their potential restart / tag loss.
* use the minimum resend delay as the base to be exponentiated if our RTT is too low
  (so we resend less; see the sketch below)
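roughly, something like this (all names invented; only the floor-the-base-then-
exponentiate idea comes from the entry above):

    // hypothetical sketch: exponentiate off max(RTT, minimum resend delay),
    // so connections with a very low measured RTT don't resend aggressively.
    // numSends is how many times the packet has been sent already (>= 1).
    static long nextResendDelay(long rttMs, long minResendDelayMs, int numSends) {
        long base = Math.max(rttMs, minResendDelayMs);
        // double for each send already made; cap the shift to avoid overflow
        return base << Math.min(numSends - 1, 6);
    }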
* don't be such a wuss when flushing a closed stream
* add new back-off logic to reduce payload resends during transient
lag - only let one packet be resent at a time, even if the window size
allows it (and the packet timers request it). this should make
congestion less painful, and reduce the overall number of messages
resent (as the SACKs for the one packet actively resent should clarify
what made it through; see the sketch below)
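the gate itself can be as simple as this (class and method names are invented
for illustration):

    // illustrative "one resend at a time" gate: a packet's retransmission
    // timer must acquire this before resending, and it is released when
    // that packet is finally ACKed (or the connection dies)
    class ResendGate {
        private boolean activeResend = false;
        synchronized boolean tryAcquire() {
            if (activeResend) return false; // another packet is being resent
            activeResend = true;
            return true;
        }
        synchronized void release() {
            activeResend = false;
        }
    }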
* make sure we ack duplicate messages received (if we aren't already doing so)
* implement a choke on the local buffer, in case we receive data faster than it's
  removed from the i2psocket's MessageInputStream (handle via packet drop and
  explicit congestion notification; see the sketch below)
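a hedged sketch of what that choke could look like (all names here are
invented; only the drop + ECN behavior is from the entry above):

    // illustrative inbound choke: if the app hasn't drained the
    // MessageInputStream and the buffer is full, the packet is dropped
    // and the next ACK carries an explicit congestion notification
    class InboundChoke {
        private final int maxBufferedBytes;
        private int buffered = 0;
        InboundChoke(int maxBufferedBytes) { this.maxBufferedBytes = maxBufferedBytes; }
        // returns false if the payload must be dropped (and ECN flagged)
        synchronized boolean offer(int payloadSize) {
            if (buffered + payloadSize > maxBufferedBytes) return false;
            buffered += payloadSize;
            return true;
        }
        synchronized void drained(int bytesRead) { buffered -= bytesRead; }
    }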
* Fix for a long-standing synchronization bug in the JobQueue (and added
  some kooky flags to make sure it stays dead)
* Update the ministreaming lib to force mode=guaranteed if the default
lib is used, and mode=best_effort for all other libs.
* Fixed up the configuration overrides for the streaming socket lib
integration so that it properly honors env settings.
* More memory usage streamlining (last major revamp for now, I promise)
* Increase the tunnel test timeout rapidly if our tunnels are failing.
* Honor message expirations for some tunnel jobs that were prematurely
expired.
* Streamline memory usage with temporary object caches and more efficient
serialization for SHA256 calculation, logging, and both I2CP and I2NP
message handling.
* Fix some situations where we forward messages too eagerly. For a
request at the tunnel endpoint, if the tunnel is inbound and the target
is remote, honor the message by tunnel routing the data rather than
sending it directly to the requested location.
* Fix a strange race condition on i2cp client disconnect.
* win98 startup fixes (thanks tester-1 and ardvark!)
* include build scripts for the new streaming lib (which is NOT ready
for use yet, but you can hack around with it)
(enjoy, duck)
packets through that point have been ACKed, throwing an
InterruptedIOException if there was a writeTimeout or an IOException
if the connection failed
* revamped the ack/nack field settings to ack as much as possible
* handle some strange timeout/resend errors on connection
* pass 1/2rtt as the packet 'optional delay' field, and use that to
schedule the ack time (the 'last' messages in a window set the
optional delay to 0, asking for immediate ack of all received)
* increase the optional delay field to 2 bytes (#ms to delay; see the sketch below)
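taken together, the sender-side choice could look like this (function name
invented; the rtt/2 value, the 0-means-immediate rule, and the 2-byte field
are from the entries above):

    // sketch: stamp each outbound packet's optional delay field. the last
    // packet in a window asks for an immediate ACK of everything received;
    // the others let the receiver delay the ACK by half an RTT.
    static int optionalDelay(long rttMs, boolean lastInWindow) {
        if (lastInWindow) return 0;               // 0 = ack immediately
        return (int) Math.min(rttMs / 2, 0xFFFF); // clamp to the 2-byte field
    }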
* inject random failures and delays if configured to do so in
PacketHandler.choke
* fix up the window size adjustment (increment on ack, /= 2 on resend; sketched below)
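i.e. plain AIMD; a sketch (names invented, floor of 1 assumed):

    // additive increase / multiplicative decrease, per the entry above
    class WindowSize {
        private int window = 1;
        void onAck()    { window++; }                         // +1 per ack
        void onResend() { window = Math.max(1, window / 2); } // halve on resend
        int  current()  { return window; }
    }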
* use the highest RTT in the new RTT calculation so that we fit more
in (via SACK)
* fix up the SACK handling (duh)
* revise the resend time calculation
* properly close the source file in StreamSinkSend
* always adjust the rtt on ack, not just for packets with 1 send
* handle dup SYN gracefully
* revamp the default connection options
* logging
* immediately send an ack on receiving a duplicate payload message
(unless we've sent one within the last RTT; see the sketch below)
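sketched, the rule is just a timestamp check (names invented):

    // ack a duplicate payload immediately, unless an ack already went
    // out within the last RTT
    class DupAckThrottle {
        private long lastAckAt = 0;
        synchronized boolean shouldAck(long nowMs, long rttMs) {
            if (nowMs - lastAckAt < rttMs) return false;
            lastAckAt = nowMs;
            return true;
        }
    }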
* only adjust the RTT when there have been no resends (sketched below)
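this is essentially Karn's algorithm; a sketch (the smoothing weight is an
assumption, not from this entry):

    // only fold a sample into the smoothed RTT if the acked packet was
    // sent exactly once, so retransmission ambiguity can't skew it
    static long updateRtt(long smoothedRtt, long sampleMs, int numSends) {
        if (numSends > 1) return smoothedRtt;    // ambiguous sample: skip it
        return (7 * smoothedRtt + sampleMs) / 8; // assumed EWMA, alpha = 1/8
    }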
* added some (disabled) throttles - randomly injecting delays on
received packets, as well as randomly dropping them
* logging
has session tags within it, send an additional ping to the peer,
bundling those tags a second time, ACKing those tags on the pong.
* handle packets transferred during a race after the receiver ACKs the
connection but before the establisher receives the ACK.
* notify the messageInputStream reader on close (duh)
* new stream sink test, shoving lots and lots of data down a stream
with the existing StreamSinkServer and StreamSinkClient apps
* logging