Mark Gritter (markgritter) wrote,

Some Algorithms

I have been reading up on suffix trees (and tries). The Wikipedia article is poorly written and organized (I don't even know where to begin fixing it), but this 1996 blog post by Mark Nelson is a much better introduction.
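For reference, the uncompressed structure underlying a suffix tree is easy to sketch. Here is a minimal suffix trie in Java (my own illustration, not from Nelson's article); a suffix tree is the same structure with unary chains collapsed into edges labeled by substrings:

```java
import java.util.HashMap;
import java.util.Map;

// A plain (uncompressed) suffix trie: insert every suffix of the text.
// O(n^2) space, which is exactly what the suffix-tree construction
// algorithms exist to avoid.
class SuffixTrie {
    private final Map<Character, SuffixTrie> children = new HashMap<>();

    static SuffixTrie build(String text) {
        SuffixTrie root = new SuffixTrie();
        for (int i = 0; i < text.length(); i++) {
            SuffixTrie node = root;
            for (int j = i; j < text.length(); j++) {
                node = node.children.computeIfAbsent(text.charAt(j),
                                                     c -> new SuffixTrie());
            }
        }
        return root;
    }

    // A string is a substring of the text iff it traces a path from the root.
    boolean containsSubstring(String s) {
        SuffixTrie node = this;
        for (char c : s.toCharArray()) {
            node = node.children.get(c);
            if (node == null) return false;
        }
        return true;
    }
}
```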

I have also been searching for a good answer to the question "how do I estimate the memory usage of a Java HashMap?" Half the answers are "it depends; why do you want to know, anyway?" and the other half are pointers to tools that measure real-life examples with varying accuracy. Don't Java programmers believe in back-of-the-envelope answers? (Yes, it gets complicated with JVM optimizations: even on a 64-bit machine, some object references can be just 4 bytes.)

My estimate: 4 or 8 bytes per bucket slot. Each entry holds a key reference (4-8 bytes), a value reference (4-8), a cached hash value (4), and a next pointer (4-8). Giving the JIT compiler credit, that's 24 bytes per entry (rounded up to a multiple of 8); pessimistically, 40. If your keys or values are Strings, add about 32 + 2 * length bytes apiece. So, supposing your keys are 16 characters long on average, that's between 92 and 112 bytes per entry (not counting the stored value object).
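The arithmetic above is mechanical enough to put in a helper. This sketch just encodes my estimate; the constants are assumptions (reference width, per-object header cost, String overhead), not measurements:

```java
// Back-of-the-envelope memory estimate for a HashMap with String keys,
// encoding the numbers from the paragraph above. refBytes is 4 (compressed
// references) or 8; headerBytes is a rough guess at the per-object header
// cost folded into each Entry.
class HashMapEstimate {
    static long bytes(long entries, long buckets, int avgKeyChars,
                      int refBytes, int headerBytes) {
        long table = buckets * refBytes;                       // one slot per bucket
        long entry = roundUp(3L * refBytes + 4 + headerBytes); // key, value, next refs + int hash
        long key   = roundUp(32 + 2L * avgKeyChars);           // String: ~32 bytes + 2 bytes/char
        return table + entries * (entry + key);
    }

    private static long roundUp(long n) { return (n + 7) / 8 * 8; } // 8-byte alignment
}
```

With the optimistic constants (4-byte references, 8-byte header) this reproduces the 92-bytes-per-entry figure; with the pessimistic ones (8-byte references, 12-byte header), 112.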

Finally, I have been doing a bit of thinking about how to computationally solve deck-building games. Even solving fixed deck vs. fixed deck games is nontrivial because of the potentially large number of states (number of cards remaining in the draw pile, contents of the discard pile, side effects in play, tokens, etc.), but it seems amenable to fictitious play over enough random games. But then how do you move beyond that to constructing the deck, without running into a massive combinatorial search space?
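To make the fictitious-play idea concrete, here is the textbook version on a small zero-sum matrix game; a "deck vs. deck" matchup would replace the payoff matrix with simulated game outcomes. Each iteration, both players best-respond to the opponent's empirical mix of past plays, and the averaged strategies approach an equilibrium:

```java
import java.util.Arrays;

// Fictitious play on a zero-sum matrix game: payoff[i][j] is the row
// player's payoff. Each round, both players best-respond to the other's
// empirical play counts; returns the row player's averaged mixed strategy.
class FictitiousPlay {
    static double[] rowMix(double[][] payoff, int iterations) {
        int m = payoff.length, n = payoff[0].length;
        int[] rowCounts = new int[m], colCounts = new int[n];
        rowCounts[0] = 1; colCounts[0] = 1;  // arbitrary opening moves
        for (int t = 0; t < iterations; t++) {
            int bestRow = 0, bestCol = 0;
            double bestRowVal = Double.NEGATIVE_INFINITY;
            double bestColVal = Double.POSITIVE_INFINITY;
            for (int i = 0; i < m; i++) {          // row maximizes vs. column's mix
                double v = 0;
                for (int j = 0; j < n; j++) v += payoff[i][j] * colCounts[j];
                if (v > bestRowVal) { bestRowVal = v; bestRow = i; }
            }
            for (int j = 0; j < n; j++) {          // column minimizes vs. row's mix
                double v = 0;
                for (int i = 0; i < m; i++) v += payoff[i][j] * rowCounts[i];
                if (v < bestColVal) { bestColVal = v; bestCol = j; }
            }
            rowCounts[bestRow]++; colCounts[bestCol]++;
        }
        double total = Arrays.stream(rowCounts).sum();
        double[] mix = new double[m];
        for (int i = 0; i < m; i++) mix[i] = rowCounts[i] / total;
        return mix;
    }
}
```

On matching pennies this drifts toward the 50/50 equilibrium. The hard part for a real deck-builder is that each "payoff lookup" is itself a noisy simulation over a huge state space, not a table entry.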

This seems like an area where genetic algorithms might be appropriate. One of the search features you'd like is for a good "bundle" of cards to get tried out in lots of different combinations. Hill-climbing might not add or remove cards in units that make sense. A GA also gives you a diversity of good opponents for every deck, without recursively having to solve the problem of "how do I construct the deck which the fictitious-play opponent is using."
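A crude sketch of what the GA operators might look like, treating a deck as a fixed-size array of card ids. The fitness function is deliberately omitted; in practice it would be win rate from simulated games against the rest of the population. Splicing a contiguous slice is only a rough proxy for a "bundle" (the array ordering is arbitrary); a smarter crossover would track which cards actually co-occur in winning decks:

```java
import java.util.Random;

// GA operators for deck search. A deck is a fixed-size array of card ids.
// Crossover splices a contiguous slice from one parent into a copy of the
// other, so groups of cards tend to travel together; mutation swaps one card.
class DeckGA {
    static final Random RNG = new Random(42);

    static int[] crossover(int[] a, int[] b) {
        int n = a.length;
        int start = RNG.nextInt(n);
        int len = 1 + RNG.nextInt(n - 1);      // at least one card from each parent
        int[] child = b.clone();
        for (int k = 0; k < len; k++) {
            child[(start + k) % n] = a[(start + k) % n]; // copy a "bundle" from a
        }
        return child;
    }

    static void mutate(int[] deck, int cardPoolSize) {
        deck[RNG.nextInt(deck.length)] = RNG.nextInt(cardPoolSize);
    }
}
```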

A real-life example that might well be tractable is Nathan Yourchuck's Wordfield. "Locally" optimal play given your existing tile set is just tree search, on a scale that is reasonable to do exhaustively. This could then be expanded to look at multi-round play (what do you want to leave in your draw pile for next round) with a limited window. From there expanding the tree search to look at the different purchase options, again with a limited number of rounds of lookahead, seems like it should suffice. You want to look far enough ahead to know that buying both a big-point tile and a multiplier is a good idea.
Tags: algorithm, games, geek, programming