Mark Gritter's Journal

Below are 20 journal entries, after skipping the 20 most recent ones recorded in Mark Gritter's LiveJournal:

    Saturday, November 30th, 2013
    1:00 am
    Court Dress and Diplomatic Uniforms
    One of the things I love about reading the Economist is little historical tidbits that get brought to my attention. For example, a few weeks ago I learned about the British Honors Forfeiture Committee. And, of course, Wikipedia also has a category for persons stripped of their honors.

    Today's gem is that in 1853, the United States asked its diplomats not to wear court dress any longer. Wikipedia's explanation is a bit more involved:
    ...In 1853, Secretary of State William L. Marcy issued a circular recommending that U.S. diplomats wear “the simple dress of an American citizen.”

    In response to what was perceived as the excessive ostentatiousness of some of these individualized uniforms, Congress banned diplomatic uniforms altogether in 1867, by passing a resolution forbidding diplomatic officials to wear "any uniform or official costume not previously authorized by Congress". This caused some discomfort to American diplomats, who now had to appear "underdressed", in evening dress, to official functions. In 1910, Theodore Roosevelt attracted considerable attention when he was the only foreign official at the funeral of King Edward VII who was not in uniform.
    It goes on to state that modified Navy uniforms were in use for a while, but the practice was stopped by FDR in 1937, and codified in law in 1946.

    Now... it's pretty clear the intent of the law is being followed. But a quick search of pictures of diplomatic staff suggests "black suit and tie" is a de facto uniform for males. (A few grey suits, I admit.) How much uniformity is too much? Are members of other departments also forbidden to wear uniforms unless authorized by Congress? Who would have standing to sue if the State Department violated this law?
    Friday, November 29th, 2013
    6:57 pm
    Starfleet makes no sense (again)
    In our Deep Space Nine viewing we have made it through Sacrifice of Angels and the big battle.

    One of my big annoyances is that Captain Sisko is running the show. Why does Starfleet even *have* admirals and commodores, if not to coordinate major fleet movements? (I guess they're occasionally antagonists and scene-setters.) Would it be too much to have Admiral Ross, who's been a recurring character, direct the fight? Is this "The Main Characters Do Everything" or were they just too cheap to build a flagship set?

    Anyway, that leads to my Star Trek trivia question: which officer (in canon) is responsible for the greatest loss, by number of starships, in Starfleet history?

    I think it must be Sisko, given that the engagement was described as having about 600 ships and there were significant losses (though at least 200 survived.) Probably not all are capital ships. Wolf 359 was described as just 40 ships vs. the Borg Cube, so Admiral Hanson is probably off the hook.
    2:20 pm
    Kev introduced me to "Boomtown" at Thanksgiving, and we played it with my godkids. (We learned that Rob has an unconventional bidding strategy.)

    In each round you bid to get first choice of the available mines. Each mine has a production value (1 through 6 gold) and a number that specifies when it triggers, based on the roll of two dice each turn. Further, collecting a set of mines from the same town gives you the mayorship, which lets you collect additional gold from players who build mines in that town (or are forced to by the available options.)

    Position is important because choice of mines proceeds clockwise from the winner, and bid payments are paid counterclockwise. (The player to the winner's right gets half, the next player gets 1/4, etc., with any remainder going back to the bank.)

    So, each mine has value in a variety of ways:

    * Its "equity" value: the production value translates to points at the end of the game
    * The expected stream of future payments from production (more important in a small game than a big one). Usually the mines with rarer numbers (2 and 12) do not have big enough production value to compensate for the low probability. This can be calculated exactly.
    * Its contribution towards winning a mayor, or increasing the value of the mayor's office

    The mayor has the same components:

    * A 5-point value at the end of the game
    * The expected stream of future payments from mines in the matching city. This is somewhat dependent on future bidding but the number of undiscovered mines in that city is known.

    Both these need to be risk-weighted against the possibility of losing the mine (to a special card) or the mayor's office (due to somebody outbidding you.) So valuation is complex enough to be interesting. I wonder whether tools from conventional finance are worth using here: does it make sense to apply a discount rate to future returns? I think not, because there is no risk-free reward.
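    The expected production stream can be made concrete. Here's a minimal sketch (my own illustration; the specific mine values and the `rounds_left` horizon are assumptions, and it ignores the risk of losing the mine):

```python
# Expected per-round income from a mine that triggers on a two-dice
# total t (2..12) and produces g gold. Illustrative numbers only.
from fractions import Fraction

def trigger_prob(t):
    # number of ways to roll total t with two dice, out of 36
    return Fraction(6 - abs(t - 7), 36)

def expected_income(g, t, rounds_left):
    return g * trigger_prob(t) * rounds_left

# A production-6 mine on the rare "2" vs. a production-1 mine on the
# common "7", over a 12-round horizon: they come out exactly equal.
print(expected_income(6, 2, 12))  # 2
print(expected_income(1, 7, 12))  # 2
```

    That quantifies the point above: a mine on a 2 or 12 needs six times the production value of a 7-mine just to break even.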

    However, the bid payout mechanism (and the fact that you get one card every round no matter what) makes bidding nontrivial as well. Let's ignore coalitions for a moment--- they're hard anyway--- and just look at two players, A and B. Let's also ignore any + or - value to position (the winning bidder bids first the next round.)

    mine X: value 3 to A, value 0 to B (a production-3 mine in A's city, assuming A has three mines in that city)
    mine Y: value 3 to A, value 6 to B (a production-6 mine in B's city, mirrored assumption)

    The global optimum is that B gets Y and A gets X. But because this is a competitive game, A prefers (AY,BX). So we can recast this in terms of A's utility:

    X to B: +3
    X to A: -3

    B's utility is the opposite (in games with more players we can't make this simplification). But A can't bid 3, because 2 of those gold would go to B. Writing things out:

    A bids 3 and wins: +3 -3 -2 = -2
    A bids 2 and wins: +3 -2 -1 = 0
    A bids 1 and wins: +3 -1 -1 = +1

    Is A's win enough? Well, from B's perspective (more negative is better) he can bid 2 and get an improved result--- the payoffs are all reversed:

    B bids 3 and wins: -3 +3 +2 = +2
    B bids 2 and wins: -3 +2 +1 = 0
    B bids 1 and wins: -3 +1 +1 = -1

    So if A goes first he should bid 1, forcing B to bid 2--- or he can bid 2 himself and achieve the same payoff (but B might make a mistake either way.)
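    The payoff tables above are easy to reproduce mechanically. A sketch, assuming (as the arithmetic implies) that the losing player receives half the winning bid rounded up:

```python
# Differential utility from A's point of view in the X/Y example.
# Assumption: the other player pockets ceil(bid/2) of the payment,
# which reproduces the hand-computed tables.
from math import ceil

def a_wins(bid):
    # A takes mine Y (+3), pays the bid, and B receives ceil(bid/2)
    return 3 - bid - ceil(bid / 2)

def b_wins(bid):
    # mirror image: X goes to A (-3), B pays, A receives ceil(bid/2)
    return -3 + bid + ceil(bid / 2)

for bid in (3, 2, 1):
    print(f"A bids {bid} and wins: {a_wins(bid):+d}")
for bid in (3, 2, 1):
    print(f"B bids {bid} and wins: {b_wins(bid):+d}")
```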

    In a two-player game, is there always a way to force no net gain? No, because the mine values may be fractional due to the future revenue stream, and only whole-value bids are accepted. In that case, the winning strategy is to immediately bid the amount which produces a small (<1) positive result for the first bidder; the second player cannot improve his bid without going negative, since zero is not possible.

    But this strategy suggests there is an advantage to bidding first, equal to the fractional payouts in future rounds. So, confusingly, it might be worth overpaying in round 1 if you could go first on all subsequent rounds. But the other player could compensate by overbidding in round 2. I don't know what the end effect of this line of reasoning would be (it might not even be feasible with limited bankrolls)--- it might make an interesting toy game to study all by itself.
    Saturday, November 23rd, 2013
    12:18 am
    Build Me This Game
    So, I was reading the reviews for Rise of Venice, since it was on sale on Steam, but decided not to buy it at its current price. The campaign sounds like it's a lot of drudgery. But I like trading and economic games--- I'm looking forward to Elite: Dangerous coming out in March 2014.

    I have also been poking at a shopkeeping game on Kongregate (more grindy than I'm happy with but there are some time-and-motion optimizations!), and thinking of either getting back into Dwarf Fortress (also big on time-and-motion) or finally learning how Towns works.

    Not that I have time. Especially this month. However!

    What I'd really like to see is a mashup of the two genres. Instead of playing the dwarves tunneling into the earth below, play the nearby human town they trade with. When the dwarves discover a good seam of iron, you can trade it with visiting caravans, or build a forge to sell the iron back to the dwarves as pickaxes and armor. But if you don't take the latter route, the dwarves may decide to invest in their own ironworks, reducing the supply of raw materials--- but potentially opening the door to higher-value goods. I'm imagining both tactical decisions on what facilities to build and what goods to stock--- but also a strategic game about developing trade and competitive advantage, exploring a tree of different decision points.

    Most shopkeeper-type games tend to be more about collecting everything rather than specializing, alas. And most trading games are mostly about established patterns rather than industrial development. I'd like something a little closer to SimCity or Civ, with multiple paths to success.
    Monday, November 11th, 2013
    1:54 am
    Things I hate in adventure games
    Trying a new point-and-click adventure game that came in a Humble Bundle: "Broken Sword: Shadow of the Templars." OK so far except for some voice acting that sounds like it was recorded in a cardboard box.

    But: say you're a modern-era journalist who's planning on doing some exploring. Even pre-cellphone, you'd bring a flashlight, right? Or your camera? I'm willing to put up with Sam and Max scraping by on used chewing gum they pick up off the sidewalk, but anybody with a steady job has at least *some* resources.

    Or, say you need to pry open a stuck door. What's more logical: go home and get a crowbar, or use the secret sliding door to smash the end of a shell casing you found lying around? What the heck?!?

    The Batman (Arkham) games did somewhat better at making puzzles that admitted more than one solution (among the gadgets Batman had equipped), although the tool upgrade mechanism annoyed me there too as something not really in character. But I'm wondering if there's any "game" left if you gave adventurers access to real resources.
    1:26 am
    More stunning ignorance in the Strib opinion section
    I shouldn't even be surprised by this point, but The middle class: An American tragedy, in numbers contained this blatantly racist passage:

    America, uniquely, has always been a ­middle-class country — no aristocracy, no peasantry, and later, under industrialization, to the great frustration of Marxists, no proletariat. Americans were always of the middling social orders. This was the thesis of Harvard Prof. Louis Hartz as he explained in the middle of the last century why we had always had centrist politics.

    The Puritans of New England and the Quakers of Pennsylvania came from the rising middle class and early capitalists of England, Scotland and Wales. The more socially exclusive Cavaliers, who led the founding of Virginia, Georgia and the Carolinas, were not aristocrats in English terms but merely squires. Succeeding waves of immigrants — even down to today’s Vietnamese and Hispanics — have been thoroughly middle-class in their aspirations.

    In 1860, the U.S. Census recorded 3,953,760 slaves out of a total U.S. population of 31,443,321. A country where one out of every eight people lives in slavery doesn't count as a "middle-class country".

    The Chinese laborers who built the transcontinental railroad and worked the California gold mines? Whose wives were turned back by anti-Chinese immigration laws? Totally not allowed into the middle class. But they never amounted to even a single percent of the U.S. population until this century, so their experience must not count?

    While that's the article's greatest sin, the author does himself no favors by bringing up the increase of tattoos as an example of how the middle class is abandoning its values.
    Wednesday, October 30th, 2013
    10:07 pm
    Mark's Autumnal Salsa
    Peel two tomatoes. (If you abrade the skin of the tomato with the dull side of a knife first, the skin will come off more easily. Various other methods are discussed on the internet--- I don't remember where I learned this one.)

    Peel three pearl onions.

    Remove the tops from all the jalapeno peppers remaining from the garden. (In my case about 15 small-sized peppers, in a mixture of red and green, that had been sitting in the fridge for a week or two. Most recipes recommend two or three peppers.)

    Dice everything to an even consistency. You're aiming for something like a relish, not gazpacho. Mix with 1 tbsp lemon juice and a few shakes of salt.

    After warning onlookers to stand clear of the resulting fireball, sample and adjust to taste.
    Tuesday, October 15th, 2013
    9:09 pm
    More Tintri News and Quotes
    Tintri announced that Ken Klein, who was previously our independent board member, will take over as Chairman and CEO. Kieran Harty, my co-founder, will take the CTO role. Press release here.

    We also have a great collection of quotes from analysts, bloggers, and partners about our recent product launches, collected on the Tintri blog: Rave Reviews for Tintri VMstore T600 Series and Tintri Global Center.
    Tuesday, October 8th, 2013
    9:58 am
    Tintri Product Launch Links
    Tintri announced an expansion of our storage products today: The T620 and T650 are the first platforms based on our new chassis, and fill out our product line by providing a lower-scale and higher-scale option than the existing T540. The T620 has about the same capacity, 13.5 TB, but supports a smaller number of VMs and a lower maximum IOPS. The T650 provides 33.5 TB of capacity and supports 2000 VMs.

    We're also rolling out Tintri Global Center, which was previewed at VMworld: TGC is our scale-out solution for controlling and pooling multiple VMstores, either within the same data center or across the world. You can see me walking through a demo via the link at the bottom of the blog page. (Demos on Demand does not provide an easy link to share.)

    Press coverage so far:

    Computer Weekly:

    Virtualization Practice:

    LeMagIT (French language):

    PC World:
    Monday, September 23rd, 2013
    5:46 pm
    Error Messages Are Hard, a continuing series
    Poor little compiler.

        [javac] Compiling 95 source files to /hg/tc/out/classes
        [javac] /hg/tc/src/java/com/tintri/platform/ '(' expected
        [javac]         } else if if ( result == 1 ) {
        [javac]                  ^
        [javac] /hg/tc/src/java/com/tintri/platform/YYY:64: illegal start of expression
        [javac]         } else if if ( result == 1 ) {
        [javac]                   ^
        [javac] /hg/tc/src/java/com/tintri/platform/ ')' expected
        [javac]         } else if if ( result == 1 ) {
        [javac]                     ^
        [javac] /hg/tc/src/java/com/tintri/platform/ not a statement
        [javac]         } else if if ( result == 1 ) {
        [javac]                               ^
        [javac] /hg/tc/src/java/com/tintri/platform/ ';' expected
        [javac]         } else if if ( result == 1 ) {
        [javac]                                   ^
        [javac] 5 errors

    However, I think it could do better. That's only 5 errors out of 8 tokens remaining on the line. Why not one error message per token?
    Friday, September 13th, 2013
    9:03 am
    Lance Fortnow's "The Golden Ticket"
    I'm being driven kind of nuts by this book because it confuses perfect machine learning with near-perfect predictive ability. The examples given for the consequences of P=NP are simply outlandish. That's not to say they're all wrong--- machine-generated proofs really could demolish long-held problems if that were the situation. But in other cases he vastly overestimates our ability to predict even with perfect knowledge. There is no way that a year of weather or an entire baseball season can be accurately predicted no matter how much "big data" you have to learn from; that's just a limitation of chaotic systems.

    I am also a bit less than impressed that he doesn't much acknowledge that there are problems harder than NP where even verifying an answer takes non-polynomial time, or the answer is uncomputable. (This does get a mention later in the book.) But this is not a rigorous introduction, it's a popular science book.

    It's a good introduction for all that (including some interesting history even for those already familiar with the topic). But even a complexity theorist should understand the difference between modelling and prediction.
    Thursday, September 12th, 2013
    2:34 am
    I can't find the earlier version of this rant, though I'm sure I made it
    Nearly every kid (and, heck, probably adults learning mathematics for the first time) who is introduced to the concept of infinity asks questions like "what's zero times infinity" or "what's the square root of infinity" or "can you divide one by infinity?"

    Which is the most helpful response?

    1. Snotty lectures about how infinity isn't really a number, so you can't even ask that sort of question.

    2. Reasonable explanations about how these terms aren't always well-defined, and the problems you run into trying to give them a definition.

    3. Here are some ways in which mathematicians have tried to answer those questions in ways that still make sense!

    Unfortunately, as the links above show, #1 is by far the most popular approach. #2 is not bad, but if you actually want somebody to stay interested in math, I think #3 is far superior.

    Mathematics is a game. It's not a set of rules, it's about making up rules and seeing where they lead. And concepts like surreal numbers, projective geometry, and Hilbert spaces all use infinities in ways that are mathematically consistent but "bizarre". Surreal numbers in particular are a great example of taking seriously such ideas as infinitesimals and "infinity plus one" and giving them a concrete meaning instead of blowing such obvious ideas off as stupid. For that matter, the whole field of complex analysis results from taking seriously something that was previously just a "hack" to make the cubic function come out right: "what is the square root of -1?"

    Some games don't work out, but that's probably just because nobody's been clever enough yet. Or in some cases, the game is provably not very interesting (which is sort of meta-interesting). But the next time somebody tries to get pseudo-sophisticated with you by explaining how your math question can't even be asked, treat them as you would any other bully.
    Saturday, September 7th, 2013
    1:17 pm
    Beef Kebab Marinade
    Here's what I threw together for kebabs yesterday:

    1/3 cup soy sauce
    1/3 cup red wine
    1/4 cup honey
    1 tsp ground ginger
    4 cloves garlic, smashed

    (mainly for future reference.)
    Thursday, September 5th, 2013
    10:24 pm
    Transpositions are Easy
    Building on the toy games introduced here.

    I figured the easiest tile-game move to analyze is transpositions. Only two tiles are affected by any move, and they have to be opposite colors in any minimal solution. My earlier code showed, for example, that it takes 8 tile swaps (orthogonal moves only) to convert:

    and it's pretty easy to find such a sequence of moves. To do so you sort of eyeball where the "gaps" are and which other pieces are closest to those gaps. Can this idea be formalized into a (non-brute-force) algorithm for finding minimal move counts? I think so.

    Let's start with the algorithm, then argue that its results make sense. Transform the problem into a minimum-cost bipartite matching problem. The two sets are the goal positions and the pieces, and the edge costs are the distance (in moves) between each pair. So the matrix representation of the transformed problem looks like this:
             pieces -->
    goals  0 1 3 2 3 3 3 5   
      |    1 0 2 1 2 2 4 4
      V    2 1 1 2 1 3 5 3
           3 2 0 3 2 4 6 4
           1 2 4 1 2 2 2 4
           2 1 3 0 1 1 3 3
           3 2 2 1 0 2 4 2
           4 3 1 2 1 3 5 3

    The minimum selection of 8 elements from this matrix, which share no row or column in common, gives the number of moves in the solution. It's obvious that no smaller number of moves will work. But can we show that this number is achievable?
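    The matching itself can be checked directly: for an 8x8 cost matrix, brute force over all 8! assignments is fast enough (for larger instances you'd want a real Hungarian-algorithm implementation, e.g. scipy.optimize.linear_sum_assignment). A sketch against the matrix above:

```python
# Minimum-cost assignment of pieces (columns) to goals (rows),
# brute-forced over all 8! = 40320 permutations.
from itertools import permutations

cost = [
    [0, 1, 3, 2, 3, 3, 3, 5],
    [1, 0, 2, 1, 2, 2, 4, 4],
    [2, 1, 1, 2, 1, 3, 5, 3],
    [3, 2, 0, 3, 2, 4, 6, 4],
    [1, 2, 4, 1, 2, 2, 2, 4],
    [2, 1, 3, 0, 1, 1, 3, 3],
    [3, 2, 2, 1, 0, 2, 4, 2],
    [4, 3, 1, 2, 1, 3, 5, 3],
]

best = min(sum(cost[g][p] for g, p in enumerate(perm))
           for perm in permutations(range(8)))
print(best)  # 8, agreeing with the 8-swap solution
```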

    Now, alas, you can't just read off the matching to get the sequence of moves. For example, in the case
    moving "piece 8 to position 3" has cost 2, as does "piece 6 to position 3 and piece 8 to position 6", so both are solutions to the matching, but only the latter is feasible. If we tried to move piece 8 twice we'd need an additional move to put 6 back into place. We need to show that any infeasible plan can be transformed into one which is feasible with the same number of moves.

    Fortunately, this is pretty easy. Suppose the plan calls for moving piece A onto the square currently occupied by piece B. This move is a no-op since the pieces are identical in this puzzle. We can remove that transposition from the move sequence by "swapping the identities" of A and B instead of making the no-op move. That is, whenever we would move A to a non-empty space occupied by B, instead swap the labels and continue moving the new "A-prime" (previously "B"). This will leave "B-prime" one space away from its initial starting point, so after A-prime has moved on, use one move to put B-prime back into its starting point (or its goal point). This exactly balances out the move we removed because it was a no-op.
    Puzzle:                        *     *
                 A     0     0     B     0
    Plan:        A -->   -->   -->   -->    (4 transpositions)
                                   B        (0 transpositions)
    Execution:   A -->   --> rename
                                   A'-->    (3 transpositions)
                             B'-->          (1 transposition)

    The transformation can be applied multiple times to move through multiple pieces as well; just move the renamed pieces back in reverse order (first-in-last-out). As a check, I ran the minimum-matching algorithm across all 4x4 positions and it agreed with the previously computed values.

    Using this algorithm, we can both solve individual large instances and perform parallel counts of move distances without using a large amount of memory. The best minimal matching algorithms are O(N^3), though, so this is a significantly more compute-intensive method of finding all minimal word distances than the brute force approach.

    This example of a random 12x12 matrix

    requires 267 moves to restore to the sorted state, according to this algorithm, with one possible plan (goal, piece) = (0, 0), (1, 5), (2, 62), (3, 1), (4, 2), (5, 67), (6, 60), (7, 3), (8, 4), (9, 56), (10, 8), (11, 9), (12, 70), (13, 71), (14, 6), (15, 7), (16, 65), (17, 12), (18, 49), (19, 13), (20, 14), (21, 15), (22, 50), (23, 51), (24, 10), (25, 63), (26, 11), (27, 64), (28, 20), (29, 66), (30, 21), (31, 69), (32, 28), (33, 61), (34, 16), (35, 17), (36, 18), (37, 58), (38, 19), (39, 48), (40, 59), (41, 54), (42, 55), (43, 22), (44, 23), (45, 24), (46, 25), (47, 26), (48, 57), (49, 46), (50, 52), (51, 31), (52, 32), (53, 27), (54, 42), (55, 68), (56, 37), (57, 29), (58, 30), (59, 44), (60, 45), (61, 40), (62, 47), (63, 35), (64, 53), (65, 41), (66, 36), (67, 43), (68, 33), (69, 38), (70, 34), (71, 39). That is, piece 0 moves left twice into position 0, piece 5 moves left seven times into position 1, position 2 is filled from below, position 3 is filled with the piece "1" already in it. But following the procedure above we can see that piece "5" and "1" will be renamed if we carry out this sequence. (I haven't written the code to actually simulate playing all 267 moves to verify, so there may be an error lurking here but I'm pretty confident in the proof above.)

    There might be some way to tighten up the matching to prefer non-crossing moves, for example by increasing "1" to "1.001", "2" to "2.003", "3" to "3.006", etc., to introduce a bias for plans which perform two "real" moves instead of a longer one which will get broken up.

    Unfortunately this practical algorithm doesn't seem to provide any hints toward enumerating the distribution of minimal distances for various sizes of board.
    Thursday, August 22nd, 2013
    9:03 pm
    Federation Ethics Make No Sense
    We're up to Doctor Bashir, I Presume? in Deep Space 9. The episode is about how Doctor Bashir's parents had him genetically enhanced as a child, which is illegal within the Federation. As a result, Bashir may have to leave Starfleet and lose his medical license?!?

    This is bogus on multiple levels, and kind of troubling if you take it seriously.

    What would be interesting is to have some Star Trek in which this weird hang-up actually mattered. Are other civilizations which practice genetic engineering unwelcome within the Federation? It could be viewed, within the wider galaxy, as a pretty extreme political stance, a result of a particular historical accident within Human history, whose lessons are not more broadly applicable.

    Or which abnormalities count as "serious"? You don't see anybody in Star Trek wandering around with glasses. It's obviously easier to wave away cancer-causing genes once you've got a cure for cancer.

    In what ethical system is it OK to blame the child for this, anyway? I can't think of any better way to encourage potential Khans toward antisocial behavior than to close the door to all professional and political success. (What is with the obsession with Asians as genetic supermen?) This bit of boneheaded worldbuilding is profoundly pessimistic, suggesting the Federation couldn't find a place for a non-expansionary Borg (too inhuman) and wouldn't have let Data into Starfleet either (too artificial.) Heaven help them if they encountered a machine race that wanted to team up.

    This season of Deep Space Nine has really been bothering me by taking a reactionary approach to any politics that come up; the most painfully earnest TNG or TOS episode is far preferable. Chemical warfare, evidently, is just fine with the Federation. And the entrance of Bajor into the Federation is apparently something to be signed by admirals rather than diplomats (look at who's at the table in *that* episode.)
    Sunday, August 18th, 2013
    8:19 pm
    The size of the search space in the toy problem I described in the last post is basically (n^2) choose (n^2)/2 for an n x n board. My initial implementation is in-memory only, which works for 3x3 and 4x4. It ought to work for 5x5 too, but for some reason Python (on Windows) just stopped computation rather than giving me an out-of-memory error or finishing.

    9C5 = 126
    16C8 = 12,870
    25C13 = 5,200,300
    36C18 = 9,075,135,300
    49C25 =~ 6.3 * 10^13
    64C32 =~ 1.8 * 10^18
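    The table is just C(n^2, floor(n^2/2)), which the Python standard library can recompute directly:

```python
# Recompute the search-space sizes listed above.
from math import comb

for n in range(3, 9):
    cells = n * n
    print(f"{n}x{n}: C({cells},{cells // 2}) = {comb(cells, cells // 2):,}")
```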

    What you need for the brute-force algorithm is basically a bit per tile arrangement (permutation) to tell whether you've visited it already, plus a list of the "frontier" of newly-generated arrangements:

    next_frontier = []
    for x in frontier:
        for p in permutations:          # every legal tile-permuting move
            y = p.apply(x)              # the position p takes x to
            if y not in visited:
                visited.add(y)
                next_frontier.append(y)
    frontier = next_frontier

    So you only need sequential access to the queue of positions at distance D, in order to calculate D+1; that queue can safely be placed in a file.

    9 billion bits is just over a gigabyte, thus a "streaming" algorithm where frontier/next are files will scale at least that far. And could even be written in pure Python! The files for each distance can be stored as a list of numbers representing the positions (no more than 64 GB for this size problem) or we can include some additional metadata if we want to remember the actual word.
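    As a sketch of the "list of numbers representing the positions" idea (my own illustration; the encode/decode names are made up): each cell is one bit, so a position packs into a single integer that can be streamed to the frontier file.

```python
# One bit per tile: a board is a list of booleans, encoded as an integer.
def encode(board):
    num = 0
    for i, occupied in enumerate(board):
        if occupied:
            num |= 1 << i
    return num

def decode(num, size):
    return [bool(num >> i & 1) for i in range(size)]

board = [True, False, True, False, True, False, False, True, False]  # a 3x3 position
assert decode(encode(board), 9) == board
print(encode(board))  # 149
```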

    How much further could we go using the same basic algorithm? Well, 49C25 bits is about 7.2 terabytes, so in-memory on a single computer is out. But, we could certainly use an array of flash drives to store the visited set. (A Tintri T540 has 2.4 TB of raw flash, but you could get 1 TB SSDs if you really want them.) Assuming sufficient parallelism to keep the drives busy, >=100000 IOPS is feasible, but even if each position required just a single I/O we're talking 20 years. (We might be somewhat smarter about I/Os but on the other hand the brute-force approach will visit the same position many times. Even a million IOPS is still 2 years.)

    On the other hand, 7.2TB of memory is only about 31 Amazon EC2 cr1.8xlarge instances, so storing a distributed hash table would be feasible and would solve the problem in a much shorter period of time. You would do this in a sort of map/reduce style, distributing the multi-TB list of positions in generation D to each node and having each independently produce its slice of generation D+1. Each node would categorize each P.X as "visited", "not visited", or "some other node's problem", and then merge all the "not visited" to form the list for the next iteration. Of course, you'd need at least half a petabyte to store the results, which doesn't come cheaply either. But you could throw money at the problem and it would still be doable within a reasonable amount of time. We're talking about 4 days to stream all the data at 10Gbps, though, so "reasonable" is probably at least a couple weeks.

    So what about 8x8? Well, now the hash table itself is 200 petabytes. Even SETI-at-home style resources aren't adequate for this, as you'd need almost 900,000 computers with 244GB of memory to implement the straightforward approach. 200 petabytes is about as large as a storage cluster gets today, or larger, I think--- most examples I can find are no more than half that. (Internet Archive is at about 10 petabytes, Megaupload held about 30 petabytes, Facebook's Hadoop cluster is 100PB physical.) Instead--- assuming you can build out a multimillion dollar exabyte storage solution for the intermediate steps of the solution--- you'd need to find ways to make CPU-vs-space tradeoffs. The algorithm would have to change at this point, given today's technology, and would probably consume a nontrivial portion of the Internet's communication capability. (Partitioning a second time by giving each computer only part of D and part of the visited set doesn't scale because you require N^2 passes to make sure you found all the previously visited positions in D+1.)

    I asked earlier this year whether it's still true that computers can get just one "size" larger than humans--- in this example, I think computers can get 3 sizes larger, assuming the 4x4 grid is human-tractable and the 5x5 is not.
    Saturday, August 17th, 2013
    12:03 am
    Some preliminary goofing around on tile games
    My vacation project has been putting together some Python code to systematically study various tile-permuting operations found in match-3 or sliding puzzle games. What I'm interested in is: given a particular goal on a square grid (like making a group of N tiles, or arriving at some other arrangement in a given set), how many moves does it take to reach the goal? Are certain types of moves better suited to particular types of goals?

    Monday, July 29th, 2013
    5:35 pm
    Why are middle-aged monarchs preferable to old ones?
    I am trying to understand the point of this Economist leader about how George Alexander Louis will have to wait until 2070 (or later) to become king, and perhaps the British monarchy should take a page from Queen Beatrix and King Albert, and abdicate into retirement.

    What is the benefit of having a monarch who is merely middle-aged instead of one who is elderly? I mean, I don't see the point of a monarch at all. (The Netherlands' royal family is, in my opinion, a foreign imposition--- the Netherlands have been a monarchy for less time than a republic.) So why does it matter that the younger members of the family will be crown princes or princesses for a long time? They are hardly likely to stage a coup in order to ascend to the throne.

    If you want a young leader, elect one. If you insist on some hereditary ceremonial head of state, don't complain about the random and arbitrary nature of the succession. It's like complaining that the kings should have smaller ears or more hair or something.
    Thursday, July 25th, 2013
    2:02 am
    Solving stupid integer sequence problems
    Problems of the form "what is the next number in the sequence A1, A2, A3, A4, ..." are dumb. They don't rely upon mathematical ingenuity. They depend on some social construct about what the "right" answer ought to be, in terms of simplicity or "things you might be expected to know". And yet they are pretty popular puzzles.

    And, of course, mathematical pseudo-sophisticates make appeals to Kolmogorov complexity, which is unhelpfully uncomputable.

    What I would like instead is a collection of mechanisms for proving such questions meaningless--- that is, tools for constructing an arbitrary fit for an integer sequence. Here are a few to kick things off:

    1. Fit the K integers as successive values x=1, x=2, x=3, ..., x=K of a polynomial of degree N. There are N+1 coefficients, so when N+1 >= K we should be able to find an exact fit (except maybe in some degenerate cases not immediately obvious to me), and for N+1 > K we can find an arbitrary number of polynomials.

    2. Take N=ceil(log2(max(Ai))). Then we can view the successor map as N separate functions, each taking the N bits of one term to one bit of the next. Since there are 2^(2^N) such N-to-1-bit boolean functions and only K examples, the problem is not particularly constrained, even if we restrict ourselves to "simple" functions.

    3. Define A_K as a (K-1)-ary function of its previous arguments. Backfill A_0, A_-1, A_-2 to make the sequence look non-arbitrary.

    4. Take differences between terms enough times, until the resulting sequence is short enough to be matched to something trivial.

    5. Split the sequence into two or more unrelated sequences that have been interleaved (which can be done in a variety of ways.)

    6. Construct a rational number such that A1 A2 A3 ... AK (suitably zero padded) are the repeating digits (or initial digits, or a combination) of its decimal expansion.

    7. Construct a number such that A1, A2, A3, ... AK are the first terms in its continued fraction representation.
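    Mechanism 1 is easy to demonstrate concretely. Here's a sketch using exact rational arithmetic--- the example sequence and the "desired" next term are my own arbitrary choices:

    ```python
    # Mechanism 1 in action: fit a polynomial exactly through the K given
    # terms plus whatever "next" term you like, showing the puzzle has no
    # unique answer.  Uses the Lagrange form with exact fractions.
    from fractions import Fraction

    def lagrange_eval(points, x):
        """Evaluate the unique interpolating polynomial through `points`
        (a list of (x_i, y_i) pairs with distinct x_i) at x."""
        total = Fraction(0)
        for i, (xi, yi) in enumerate(points):
            term = Fraction(yi)
            for j, (xj, _) in enumerate(points):
                if j != i:
                    term *= Fraction(x - xj, xi - xj)
            total += term
        return total

    def justify_next(sequence, desired_next):
        """Return an evaluator for the polynomial fitting the sequence at
        x = 1..K and `desired_next` at x = K+1."""
        points = list(enumerate(sequence, start=1))
        points.append((len(sequence) + 1, desired_next))
        return lambda x: lagrange_eval(points, x)

    # "1, 2, 4, 8, ... therefore 16"?  No: 17 fits just as exactly.
    p = justify_next([1, 2, 4, 8], desired_next=17)
    print([int(p(x)) for x in range(1, 6)])   # → [1, 2, 4, 8, 17]
    ```

    With K+1 points this needs only degree K, matching the N+1 >= K count in mechanism 1; any integer at all can be "the" next term.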

    Any other suggestions?
    Monday, July 22nd, 2013
    12:25 pm
    Capital Raising Through Arm Twisting
    Today's "On Business" column by Neal St. Anthony discusses a plan to create a local fund-of-funds for early stage venture capital:

    Local venture capitalists and fund managers have been quietly urging Gov. Mark Dayton’s administration to embrace a privately financed and operated equity fund....

    The idea would be for flagship Minnesota institutions, such as U.S. Bancorp, Thrivent Financial and Ameriprise, to perhaps pledge several million dollars each, followed by foundations and perhaps affluent individuals. A fund-of-fund manager, operating under agreed-upon guidelines, would disburse the money to Minnesota and other private equity and venture capital managers.

    There's nothing wrong with a little cheerleading for local investment. That sort of boosterism is certainly the governor's job. But showing up at U.S. Bancorp's door and asking them to invest more money with local VCs? That goes a bit too far for me. Are the VCs going to give up their 2-and-20 on the deal, too? The article doesn't say, and hints at some desire for public-sector involvement.

    It would be one thing if Minnesota didn't have venture capitalists to provide early-stage funding--- but we do. Capital crosses borders (especially state borders!) easily, and if local VCs can't raise enough money for their investment funds, they're failing at their job. Why bail them out with cash from Minnesota's successful large companies?

    Now, don't get me wrong--- I'd much rather have had a $348m state venture capital fund than a new Vikings stadium. But it's not clear to me that this sort of deal is on the correct side of the line between "the governor provides a little bit of social lubricant for a good cause" and back-room cronyism. Maybe I'm hypocritical, but I'd rather have the state pony up some money if this is a social good, than have VCs propped up by private investments that otherwise wouldn't get made.