this will give a much smoother traffic pattern, as instead of waiting 6 seconds to write a 32KB message under a 6KBps rate, it'll write 6KB in each of the first 5 seconds and 2KB in the next
this also allows people to have small buckets (but again, bucket sizes smaller than the rate just don't make sense)
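to picture it, the effect is roughly what this hypothetical chunked-write loop does (ChunkedWriter, the method signature, and the fixed 1000ms sleep are all made up for illustration - this is not the router's actual code):

    import java.io.IOException;
    import java.io.OutputStream;

    class ChunkedWriter {
        /** write data in rate-sized chunks so it goes out smoothly instead of in one burst */
        static void write(OutputStream out, byte[] data, int bytesPerSecond)
                throws IOException, InterruptedException {
            int offset = 0;
            while (offset < data.length) {
                int chunk = Math.min(bytesPerSecond, data.length - offset);
                out.write(data, offset, chunk);   // e.g. 6KB now, rather than 32KB after 6 seconds
                offset += chunk;
                if (offset < data.length)
                    Thread.sleep(1000);           // wait for the next refill
            }
        }
    }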
(this is what caused the runtime errors on sun jvms but not on kaffe)
((aka i slacked and didn't test sufficiently. off with my head))
this now builds and runs fine in sun 1.3-1.5 jvms, as well as kaffe
the simple RouterThrottleImpl bases its decision entirely on how congested the jobQueue is - if there are jobs that have been waiting 5+ seconds, reject everything and stop reading from the network
(when throttled, each i2npMessageReader randomly waits 0.5-1s before rechecking the throttle)
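roughly, that check amounts to something like this (ThrottleSketch and its method names are purely illustrative, not the actual RouterThrottleImpl internals):

    import java.util.Random;

    class ThrottleSketch {
        private static final long MAX_JOB_LAG_MS = 5 * 1000; // jobs waiting 5+ seconds => congested
        private final Random _rand = new Random();

        /** accept more reads from the network only if the jobQueue isn't backed up */
        public boolean acceptNetworkMessage(long worstJobLagMs) {
            return worstJobLagMs < MAX_JOB_LAG_MS;
        }

        /** when rejected, a reader sleeps 0.5-1s before asking again */
        public void waitBeforeRecheck() throws InterruptedException {
            Thread.sleep(500 + _rand.nextInt(501));
        }
    }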
minor adjustments in the stats published - removing a few useless ones and adding the router.throttleNetworkCause (which is the average ms lag in the jobQueue when an I2NP reader is throttled)
these had been broken out into separate jobs before to reduce thread and lock contention, but that isn't as serious an issue anymore (in these cases) and the non-contention-related delays of these mini-jobs are trivial
current 0.3.2 throws an NPE which causes the submitMessageHistory functionality to fail, which isn't really a loss since i send that data to /dev/null at the moment ;)
(but you'll want to set router.keepHistory=false and router.submitHistory=false)
this'll go into the next rev, whenever it comes out
(thanks ugha!)
don't time out the connection just because too many messages are pending (just time out the individual messages)
however, if any of the messages that time out have been there for a minute, kill the con (since it's hung)
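a hedged sketch of that timeout policy (the class, the 30s per-message timeout, and the PendingMessage interface are all hypothetical - only the one-minute "hung con" threshold comes from the note above):

    import java.util.Iterator;
    import java.util.List;

    class PendingMessageSweeper {
        private static final long MSG_TIMEOUT_MS = 30 * 1000; // hypothetical per-message timeout
        private static final long HUNG_CON_MS = 60 * 1000;    // pending for a minute => con is hung

        /** @return true if the connection itself should be killed */
        boolean sweep(List pending, long now) {
            boolean killCon = false;
            for (Iterator iter = pending.iterator(); iter.hasNext(); ) {
                PendingMessage msg = (PendingMessage) iter.next();
                long age = now - msg.getQueuedOn();
                if (age >= HUNG_CON_MS)
                    killCon = true;   // don't just drop the message - the con is hung
                if (age >= MSG_TIMEOUT_MS)
                    iter.remove();    // time out this individual message only
            }
            return killCon;
        }

        interface PendingMessage { long getQueuedOn(); }
    }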
kaffe workaround for fast closing sockets
don't consider a connection valid until it has been up for 30 seconds (so people who are simply establishing connections but whose nats are still messed up get the error)
when dealing with messages that expired after being accepted, don't drop them unless they expired outside the fudge factor
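in other words, something along these lines (the 2s fudge factor value here is made up - the real constant may differ):

    class ExpirationCheck {
        private static final long FUDGE_FACTOR_MS = 2 * 1000; // illustrative clock-skew allowance

        /** drop an already-accepted message only if it expired beyond the fudge factor */
        static boolean shouldDrop(long expiration, long now) {
            return now > expiration + FUDGE_FACTOR_MS;
        }
    }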
increase the default maxWaitingJobs to 100, since we can get lots at once (and we don't gobble as much memory as we used to)
also, don't wake up the jobQueueRunner in getNext once a second - instead, just let the threads updating the queue notify it
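a minimal sketch of that handoff, assuming a made-up NotifyingJobQueue (the real jobQueue does a lot more bookkeeping, but the wait/notify idea is the same):

    import java.util.LinkedList;

    class NotifyingJobQueue {
        private final LinkedList _jobs = new LinkedList();

        void addJob(Runnable job) {
            synchronized (_jobs) {
                _jobs.addLast(job);
                _jobs.notifyAll();       // wake a waiting runner right away
            }
        }

        Runnable getNext() throws InterruptedException {
            synchronized (_jobs) {
                while (_jobs.isEmpty())
                    _jobs.wait();        // no 1s poll - only woken when a job arrives
                return (Runnable) _jobs.removeFirst();
            }
        }
    }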
The new bandwidth limiter blocks read and write operations if they would exceed the
currently available # of tokens, letting those operations through in the order they are
called. Like the old trivial bandwidth limiter, this uses a token bucket approach
(keeping a pool of 'available' bytes, decrementing on their use and periodically
refilling it [up to a max limit, to prevent absurd bursts]). On the other hand, it
doesn't have the starvation issues the old one had, which would continue to let small
operations go through (e.g. an 8 byte write) while potentially blocking large operations
(e.g. a 32KB write) indefinitely. However, this new version is, how shall I put it,
context switch heavy? :) We'll revise with a scheduling / queueing algorithm once we're
away from transports that require a thread per connection.
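For the curious, the core token accounting looks roughly like this (TokenBucketSketch is made up; the real limiter additionally keeps per-direction FIFO queues of pending requests so large writes aren't starved and so the backlog can be shown on the router console - this sketch skips that part):

    class TokenBucketSketch {
        private final long _ratePerSecond;  // bytes added per refill
        private final long _maxBurst;       // ceiling on accumulated tokens
        private long _available;            // bytes currently allowed through

        TokenBucketSketch(long ratePerSecond, long maxBurst) {
            _ratePerSecond = ratePerSecond;
            _maxBurst = maxBurst;
            _available = ratePerSecond;
        }

        /** called by a refiller thread once per replenish period */
        synchronized void refill() {
            _available = Math.min(_maxBurst, _available + _ratePerSecond);
            notifyAll();   // wake anyone blocked waiting for tokens
        }

        /** block until 'bytes' tokens have been drained for this operation */
        synchronized void request(long bytes) throws InterruptedException {
            long remaining = bytes;
            while (remaining > 0) {
                if (_available <= 0) {
                    wait();              // hence the context switching mentioned above :)
                    continue;
                }
                long taken = Math.min(_available, remaining);
                _available -= taken;
                remaining -= taken;
            }
        }
    }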
The two directions (input and output) are managed on their own queues, and if/when things
are backed up, you can see the details of what operations have been requested on the
router console.
Since we still need better router throttling code (to reject tunnels and back off more
accurately), I've included a minimum KBps on the limiter, currently set to 6KBps both
ways. Once there is good throttling code, we can drop that to 1-2KBps, and maybe even
less after we do some bandwidth usage tuning.
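One plausible way to picture that floor (purely illustrative - exactly how it interacts with the "unlimited" setting may differ):

    class RateFloorSketch {
        static final int MIN_KBPS = 6;   // the current floor, to be lowered once throttling improves

        /** clamp a configured positive rate up to the floor; <= 0 still means unlimited */
        static int effectiveKBps(int configuredKBps) {
            if (configuredKBps <= 0)
                return Integer.MAX_VALUE;    // unlimited
            return Math.max(configuredKBps, MIN_KBPS);
        }
    }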
There were also a few minor touch-ups to handle message data being discarded earlier
than it had been before (since write/read operations can now take a long period of time
in the face of contention).
The five config properties for the bandwidth limiter are:
* i2np.bandwidth.inboundKBytesPerSecond
* i2np.bandwidth.outboundKBytesPerSecond
(you can guess what those are)
* i2np.bandwidth.inboundBurstKBytes
* i2np.bandwidth.outboundBurstKBytes
the burst KBytes specify how many bytes we'll let accumulate in the bucket, allowing
us to burst after a period of inactivity. excess tokens greater than this limit are
discarded.
* i2np.bandwidth.replenishFrequencyMs
this is an internal setting, used to specify how frequently to refill the buckets (min
value of 1s, which is the default)
You may want to hold off on using these parameters until the next release though,
leaving them at the default of unlimited. They are read periodically from the config
file however, so you can update them without a restart / etc. (if you want to have no
limit on the bandwidth, set the KBytesPerSecond values to <= 0)
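For illustration, setting them in the config file might look like this (the numbers here are just examples, not recommendations):

    i2np.bandwidth.inboundKBytesPerSecond=16
    i2np.bandwidth.outboundKBytesPerSecond=16
    i2np.bandwidth.inboundBurstKBytes=64
    i2np.bandwidth.outboundBurstKBytes=64
    i2np.bandwidth.replenishFrequencyMs=1000
    # or, to leave it unlimited:
    # i2np.bandwidth.inboundKBytesPerSecond=-1
    # i2np.bandwidth.outboundKBytesPerSecond=-1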