Hi Carl-Daniel,
since last week I've been shopping around for options with people who know a thing or two about storing and processing large amounts of data.
For the problem of submitters having to download the repo, I was thinking about setting up a relatively simple web service frontend that accepts pushed files, which are then integrated into the git repo on the server side. It should be possible to implement that with little effort and without affecting other parts of the existing infrastructure. The status submission scripts could then use curl to PUT files, with no download necessary, and authenticate through gerrit's HTTP auth tokens (which can easily be verified against the actual gerrit instance).
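To make that concrete, the submission could boil down to something like the following sketch (the endpoint URL and path scheme are just placeholders for a service that doesn't exist yet; the token would be the user's gerrit HTTP password):

    # Hypothetical sketch: upload one report file with a single PUT.
    # --upload-file makes curl issue a PUT; -u sends HTTP basic auth,
    # which the service can verify against gerrit's per-user HTTP passwords.
    curl -u "$GERRIT_USER:$GERRIT_HTTP_PASSWORD" \
         --upload-file coreboot-build.log \
         "https://status.example.org/upload/$VENDOR/$BOARD/$TIMESTAMP/coreboot-build.log"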
Later steps would be moving the data into a more suitable data store (that's what I'm currently looking into), which could be done transparently to the status submission scripts (as long as the web service's endpoint remains the same), and providing some way to build server-side and cached queries. That should also help with the bisection issue, by not downloading the entire data set in the first place. Of course, a complete download still needs to be possible; it just shouldn't be necessary for every single query.
Regarding server load, gitweb/cgit only touch repos as they're accessed by users, and they scale reasonably. Since the wiki page mostly cares about the newest entry per board, the script that creates it can surely be reworked to scale primarily with the number of boards present, making the number of reports a minor factor.
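For illustration, the reworked per-board loop could look roughly like this (assuming report directories whose names sort chronologically; the exact repo layout is from memory, so treat the paths as placeholders):

    # Hypothetical sketch: visit each board once and read only its newest report.
    for board in */*/ ; do
        # Report directories are assumed to sort chronologically by name,
        # so the lexically last one is the newest.
        newest=$(ls -d "$board"*/ 2>/dev/null | sort | tail -n 1)
        [ -n "$newest" ] || continue
        # ... generate this board's wiki row from "$newest" alone ...
    done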
Patrick