specify a "router.shutdownPassword" value in router.config (or in the environment, a la -D),
then enter that password on the shutdown form in the web console and hit submit. After 30 seconds, it'll kill the router (and unless you're running the sim, it'll kill the JVM too, including clientApp.* started tunnels, etc).
if we had some sort of ACL for accessing the web console, we could avoid the password field altogether, but we don't, so we can't.
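for illustration, a minimal sketch of that flow - the ShutdownCheck class and shutdownRouter hook here are made up for the example, not the actual console code:

    import java.util.Timer;
    import java.util.TimerTask;

    // Illustrative sketch only: compare the submitted password against the
    // configured one, and if it matches, schedule the shutdown 30s out.
    public class ShutdownCheck {
        public static void handleShutdownForm(String submitted, final Runnable shutdownRouter) {
            String expected = System.getProperty("router.shutdownPassword");
            if (expected == null || !expected.equals(submitted))
                return; // no password configured, or wrong password: ignore
            new Timer().schedule(new TimerTask() {
                public void run() { shutdownRouter.run(); } // kills the JVM unless in the sim
            }, 30*1000);
        }
    }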
* rather than have every job created hook into the clock for offset updates, have the jobQueue stay hooked up and update any active jobs accordingly (killing a memory leak of JobTiming objects - one per job)
* don't go totally insane during shutdown and log like mad (though the clientApp things still log like mad, since they don't know the router is going down)
* adjust memory buffer sizes based on real world values so we don't have to expand/contract a lot
* don't display things that are completely useless (who cares what the first 32 bytes of a public key are?)
* reduce temporary object creation
* use more efficient collections at times
* on shutdown, log some state information (ready/timed jobs, pending messages, etc)
* explicit GC every 10 jobs. yeah, not efficient, but just for now we'll keep 'er in there
* only reread the router config file if it changes (duh - see the sketch below)
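the config reread check is just a lastModified comparison, roughly like this (the ConfigWatcher name is illustrative):

    import java.io.File;

    // Track the config file's modification time and only reread when it moves.
    public class ConfigWatcher {
        private final File _config = new File("router.config");
        private long _lastRead = -1;

        public boolean needsReread() {
            long mod = _config.lastModified(); // 0 if the file doesn't exist
            if (mod <= _lastRead) return false;
            _lastRead = mod;
            return true;
        }
    }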
the threshold can be tweaked via the router.config var "profileOrganizer.reliabilityThresholdFactor=0.75" (or the environment property -DprofileOrganizer.reliabilityThresholdFactor=0.5, etc).
"burn it in" some more, but it's looking good.
* test all tunnels we manage every period or two. later we'll want to include some randomization to help fight traffic analysis, but that falls into the i2p 3.0 tunnel chaff / mixing / etc
* test inbound tunnels correctly (use an outbound tunnel, not direct)
* only give the tunnels 30 seconds to succeed
* mark the tunnel as tested on both the inbound and outbound side and adjust the profiles for all participants accordingly (a sketch of the bookkeeping follows this list)
* keep track of the 'last test time' on a tunnel
* new tunnel test response time profile stat, as well as overall router stat (published in the netDb as "tunnel.testSuccessTime")
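a rough sketch of that test bookkeeping - the Tunnel and Profile types here are stand-ins, not the router's actual classes:

    // Illustrative sketch: record a tunnel test result on both sides and
    // credit every participant's profile with the response time.
    public class TunnelTester {
        static final long TEST_TIMEOUT = 30*1000; // only give the test 30 seconds

        interface Tunnel { void setLastTested(long when); }
        interface Profile { void tunnelTestSucceeded(long responseTimeMs); }

        /** @param roundTripMs measured response time, or -1 on timeout */
        public void recordResult(Tunnel outbound, Tunnel inbound,
                                 Profile[] participants, long roundTripMs) {
            if (roundTripMs < 0 || roundTripMs > TEST_TIMEOUT)
                return; // failed test - the failure path is handled elsewhere
            long now = System.currentTimeMillis();
            outbound.setLastTested(now); // mark both sides as tested
            inbound.setLastTested(now);
            for (int i = 0; i < participants.length; i++)
                participants[i].tunnelTestSucceeded(roundTripMs);
        }
    }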
* rewrite of the speed calculator - the value it generates now is essentially "how many round trip messages can this router pass in a minute".
it also allows a few runtime configurable options:
= speedCalculator.eventThreshold:
we use the smallest measurement period that has at least this many events in it (10m, 60m, 24h, lifetime - see the sketch after this list)
= speedCalculator.useInstantaneousRates:
when we use the estimated round trip time, do we use instantaneous or period averages?
= speedCalculator.useTunnelTestOnly:
do we only use the tunnel test time (no db response or tunnel create time, or even estimated round trip times)?
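the eventThreshold selection boils down to something like this (a sketch, not the calculator's actual code):

    // Pick the smallest measurement period (10m, 60m, 24h, lifetime) that
    // has at least `threshold` events, falling back to lifetime.
    public class PeriodPicker {
        /** @param eventCounts events per period, ordered smallest period first */
        public static int pickPeriod(long[] eventCounts, long threshold) {
            for (int i = 0; i < eventCounts.length; i++)
                if (eventCounts[i] >= threshold)
                    return i; // index of the smallest qualifying period
            return eventCounts.length - 1; // nothing qualified: use lifetime
        }
    }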
* fix the reliability calculator to use the 10 minute tunnel create successes, not the (almost always 0) 1 minute rate.
* persist the tunnel create response time and send success time correctly (duh)
* add a new main() to PeerProfile - "PeerProfile [filename]*" will calculate the values of the peer profiles specified. useful for tweaking the calculator and/or the configurable options, e.g.:
java -DspeedCalculator.useInstantaneousRates PeerProfile peerProfiles/profile-*.dat
* netDb.failedPeers (how many peers didn't reply to a lookup in time)
* netDb.searchCount (how many searches we send out in a 3 hour period)
Probabilistically include the leaseSet of the sender in the garlic sent
to a peer if the client requests it to be included (aka if they want
replies). By default, this is enabled (disable by setting the I2CP
prop "shouldBundleReplyInfo=false"). Also, by default the probability is
30% (w00 h00, arbitrary values!), which can be overridden via the I2CP
prop "bundleReplyInfoProbability=80" (or =10, or =100, etc). The tradeoff
here is quicker replies in exchange for bandwidth (the dbStore).
Yeah, it'd be nice if there were something keeping track of which leaseSet
each client sent to each peer so that it could explicitly include it only
when necessary, but for now that's probably overkill.
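the bundling decision itself is tiny - roughly the following (a sketch; the real garlic-building code is more involved and presumably uses the router's own RNG rather than java.util.Random):

    import java.util.Random;

    // Illustrative sketch: bundle the sender's leaseSet with the configured
    // probability, but only if the client wants replies at all.
    public class BundleDecision {
        private static final Random _rand = new Random();

        public static boolean shouldBundle(boolean wantsReplies, int probabilityPct) {
            if (!wantsReplies) return false;            // shouldBundleReplyInfo=false
            return _rand.nextInt(100) < probabilityPct; // defaults to 30
        }
    }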
fire the LoadClientAppsJob right after the admin listener is booted, which now includes support for the onBoot property (causing the client to run immediately, instead of waiting 2+ minutes)
(yeah, it'd suck if all routers started up, tried to connect to people, got shitlisted, and then got the right NTP time 2 minutes later, eh?)
this stat is published in the netDb, but the quantity field (how many acks the stat is averaged over) is h4x0red
(it always reads "666", since otherwise it'd be fairly easy to identify which routers run servers, and I can live without knowing the quantity)
(min/max frequency=1m/30m, doubling every time we don't find something new)
reduce the breadth and duration of explorations
republish less often (every 60s send out one random one [or however many explicit ones])
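the doubling schedule is just an exponential backoff clamped to those bounds, e.g.:

    // Sketch: double the exploration delay each time nothing new turns up,
    // clamped between the 1 minute and 30 minute bounds; reset on success.
    public class ExploreBackoff {
        static final long MIN_DELAY = 60*1000;
        static final long MAX_DELAY = 30*60*1000;
        private long _delay = MIN_DELAY;

        public long nextDelay(boolean foundSomethingNew) {
            _delay = foundSomethingNew ? MIN_DELAY
                                       : Math.min(MAX_DELAY, _delay * 2);
            return _delay;
        }
    }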
logging
new VMCommSystem (useful for running large multirouter instances)
new MultiRouterBuilder (helper app for setting up a MultiRouter simulator)
updates to the router to handle multiple routers in the same VM, as well as
deal with the multiple OOM listener stuff
see the javadocs for info on the MultiRouter and MultiRouterBuilder
(yeah, it's not ready for prime time, and it's really just for the simulator,
so I'm not sure if anyone else is going to use it anyway ;)
a rooted app context. The core itself has its own I2PAppContext
(see its javadoc for, uh, docs), and the router extends that to
expose the router's singletons. The main point of this is to
make it so that we can run multiple routers in the same JVM, even
to allow different apps in the same JVM to switch singleton
implementations (e.g. run some routers with one set of profile
calculators, and other routers with a different one).
There is still some work to be done regarding the actual boot up
of multiple routers in a JVM, as well as their configuration,
though the plan is to have the RouterContext override the
I2PAppContext's getProperty/getPropertyNames methods to read from
a config file (separate ones per context) instead of using the
System.getProperty that the base I2PAppContext uses.
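i.e. roughly the following (a sketch of the plan, not final code - the
real RouterContext would extend I2PAppContext rather than stand alone):

    import java.util.Properties;

    // Sketch: each router context reads from its own per-context config,
    // falling back to System.getProperty (the base I2PAppContext behavior).
    public class RouterContextSketch /* extends I2PAppContext */ {
        private final Properties _config;
        public RouterContextSketch(Properties config) { _config = config; }

        public String getProperty(String name) {
            String val = _config.getProperty(name);
            return (val != null) ? val : System.getProperty(name);
        }
    }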
Once the multi-router is working, I'll shim in a VMCommSystem
that doesn't depend upon sockets or threads to read/write (and
that uses configurable message send delays / disconnects / etc,
perhaps using data from the routerContext.getProperty to drive it).
I could hold off until the sim is all working, but there's a
truckload of changes in here and I hate dealing with conflicts ;)
Everything works - I've been running 'er for a while and kicked
the tires a bit, but if you see something amiss, please let me
know.
if a client only wants 0 hop tunnels, give them 0 hop tunnels (rather than wasting a 2+ hop tunnel on it)
make inNetPool.dropped and inNetPool.duplicate rate stats, not frequency stats
formatting, minor refactoring