So, it's the 21st century: why doesn't the database build its own indexes to match the observed load? I realize there's a space/time tradeoff that humans *sometimes* want to make. But why not let me set a space budget and ask the optimizer to do its best within it? Or at least ship a recommendation engine that's straightforward to use.
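To make the budget idea concrete, here's the kind of knob I have in mind. This is entirely hypothetical syntax; no engine I know of accepts it today:

```sql
-- Purely hypothetical DDL, for illustration only: hand the engine a
-- storage budget and let it create, drop, and rebalance indexes to fit
-- the observed workload within that budget.
ALTER DATABASE sales SET AUTOMATIC_INDEXING (BUDGET = 20 GB);
```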
A little searching did turn up a set of scripts released by Microsoft in 2006, and a StackOverflow question on exactly this topic: http://stackoverflow.com/questions/231528/do-any-databases-support-automatic-index-creation The answer seems to be "it's a hard problem, and it's hard to gather enough information about the relative value of various queries and updates." That is, frequency != value. But precisely because it is a hard problem, there should be *more* tool support, not less. Some of the StackOverflow comments have the flavor of "well, you *could* have the runtime manage memory reclamation for you, but it'll always be an inferior solution."
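For what it's worth, those Microsoft scripts presumably target SQL Server, which has exposed "missing index" DMVs since the 2005 release. A query along these lines surfaces candidates ranked by a rough benefit estimate; the weighting here is a folk heuristic, not an official metric:

```sql
-- Rank missing-index candidates recorded by the optimizer since the
-- last server restart. The benefit formula (cost * impact * seeks)
-- is a common rule of thumb, not anything Microsoft endorses.
SELECT TOP 20
    d.statement          AS table_name,
    d.equality_columns,
    d.inequality_columns,
    d.included_columns,
    s.user_seeks,
    s.avg_total_user_cost * (s.avg_user_impact / 100.0) * s.user_seeks
                         AS estimated_benefit
FROM sys.dm_db_missing_index_details        AS d
JOIN sys.dm_db_missing_index_groups         AS g
    ON g.index_handle = d.index_handle
JOIN sys.dm_db_missing_index_group_stats    AS s
    ON s.group_handle = g.index_group_handle
ORDER BY estimated_benefit DESC;
```

It's a long way from automatic, but it at least ranks the candidates instead of leaving you to guess.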
On a related note, I am constantly amazed at what people (developers included!) are willing to put up with in terms of response time. To my mind, if a reasonably sized operation takes more than 30 seconds, that's a sign of a problem that needs fixing, not a reason to bump the timeout. And "reasonably sized" these days should mean thousands of entities.