Mark Gritter's Journal
 
Thursday, December 1st, 2016
11:35 pm
Biting the Hand That Feeds Me
The National Venture Capital Association has some advice for Donald Trump: don't tax us. The letter is long on generalities except where tax issues are concerned. While most of the policy issues are ones to which I'm at least a little sympathetic, I find their presentation self-serving and overly optimistic, rather than really engaging with any complexity.

Since the Reagan Administration, our tax code has been relatively effective at encouraging patient, long-term investment, but on net has been hostile to entrepreneurial companies. For example, punitive loss limitation rules punish startups for hiring or investing in innovation, while benefits such as the R&D credit are inaccessible to startups. Unfortunately, tax reform conversations in Washington have ignored these challenges while at the same time proposing to raise taxes on long-term startup investment to pay for unrelated priorities.

For instance, carried interest has been an important feature of the tax code that has properly aligned the interests of entrepreneurs and venture investors since the creation of the modern venture capital industry. Increasing the tax rate on carried interest capital gains will have an outsized impact on entrepreneurship due to the venture industry’s longer holding periods, higher risk, smaller size, and less reliance on fees for compensation. These factors will magnify the negative impact of the tax increase for venture capital fund formation outside of the traditional venture regions on the coasts.


No startup founder cares about loss limitation rules. The tax code has never been a factor in whether we hire or invest in R&D. Why? Because startups lose money. It only matters later, when the startup starts making money, that the company doesn't get as big a deduction for its previous losses. (Those banked losses are an asset... but any startup for which they are material is failing or doing it wrong.)

That's just a leadup to the NVCA's real concern: Trump's stance against the "carried interest loophole". VCs get paid 2-and-20 like other fund managers: 2% of fund assets annually (sometimes negotiated downwards) and 20% of fund income. The fund manager's 20% is counted as long-term capital gains because it is treated as their "share" of the multi-year investment rather than just a management fee.
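To make the 2-and-20 arithmetic concrete, here's a toy calculation. Every number in it (fund size, returns, tax rates) is a made-up round figure for illustration, not anything from the NVCA letter:

```python
# Illustrative only: fund size, returns, and tax rates below are assumed
# round numbers, not figures from the letter.
fund_size = 100_000_000        # a $100M fund
management_fee_rate = 0.02     # the "2": 2% of assets per year
carry_rate = 0.20              # the "20": 20% of fund profits
fund_life_years = 10
total_gain = 150_000_000       # suppose the fund returns $250M on $100M

management_fees = fund_size * management_fee_rate * fund_life_years
carried_interest = total_gain * carry_rate

# Fees are ordinary income; carry is currently taxed as capital gains.
ordinary_rate, cap_gains_rate = 0.37, 0.20
carry_tax_as_gains = carried_interest * cap_gains_rate
carry_tax_as_income = carried_interest * ordinary_rate

print(management_fees)       # fees over the fund's life
print(carried_interest)      # the manager's 20% share
print(carry_tax_as_income - carry_tax_as_gains)  # extra tax if reclassified
```

Even with these toy numbers, the carry dwarfs the fees, which is why the classification question matters so much to the NVCA.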

There are some reasonable arguments that this arrangement is the proper way to think of VC compensation, but VCs often fall back on the "we won't do it if we're taxed higher than 15%" line, which is in fact what NVCA says in their letter. This is nonsense. What else are they going to do? Work for salary and pay the same tax rate they're complaining about, only for less money? Engage in less risky investments and let somebody else get the big payoffs instead? Become hedge fund managers and live off the 2% management fee? (Oh wait, that's taxed as normal income too.)

Complaints about the IPO process being burdensome are fair, but it's not clear that's what's driving companies to stay private longer. NVCA is not at all clear about what they'd like to do about it.

Debate during the election often focused on illegal immigration, but unfortunately not how legal immigration can create jobs for American citizens, including in underemployed areas. This can be accomplished by attracting the world’s best entrepreneurs to our country through creation of a Startup Visa and allowing more high-skill immigrants to help build startups.


Disingenuous. Startup-founding immigrants have little incentive to go to (and stay in) the Rust Belt.

Basic research: I'm in favor of that too, but the request here is also self-serving. The more applied research gets performed in universities, the less risk for venture-backed firms to develop it from zero. And in fact that's exactly what they want:


In addition to funding basic research, encouraging the commercialization of more technology will lead to increased job creation and economic growth. Many states with economically distressed areas can better utilize research universities to spread startup activity. High-growth startups frequently come out of university-funded research that is commercialized and become private companies.


This is a subsidy, and should be judged against other forms of public investment. (If that's the only way we can get more funding for public universities, I'm for it!)

Unfortunately, while our competitors have upped their game, American policymakers have taken our leadership in entrepreneurship for granted, as we detail below. Consequently, whereas twenty years ago U.S. entrepreneurs received approximately 90 percent of global venture capital investment, that share has fallen over time to only 54 percent in 2015. Further, in three of the last four years, at least half of the top ten largest venture investments in the world have occurred outside the U.S.


How much leadership in venture investments is "enough"? 75%? 90%? 95%? This relative comparison is meaningless. Venture funding in the U.S. has not dropped. The rest of the world is just growing faster. That's a good thing, and possibly inevitable. When China has a lot of cash, it's going to make a lot of investments.

Partner with states to spread startups in areas of economic distress. Your administration has an incredible opportunity to bolster public-private partnership economic development efforts to spur entrepreneurship. Ohio is one state that is leading the way to transform their economy and create opportunity. Third Frontier in Columbus is providing access to business expertise, mentorship, capital, and talent to turn great ideas into growing companies, and JumpStart in Cleveland provides equity-based capital to help startups grow, matches talented individuals with growing companies, and provides expertise to startups.


This is both hopelessly vague and disingenuous at once. These seed efforts mean little unless coastal VCs are willing to cut checks for mid-stage and late-stage capital. That's happening somewhat. But the language used here is often code for "investment tax credits", which I feel are of questionable value. The bigger issue is that the Rust Belt often lacks the worker base to benefit from the technology startups we're talking about here. Or to put it another way, the jobs that are created are ones where the workers are already in high demand, not substitutes for the displaced manufacturing jobs. If Columbus creates 1000 new programming jobs, they will probably do more to stop brain drain than to help rural Ohio.
Monday, November 28th, 2016
11:53 pm
More Sol LeWitt
A good article on Sol LeWitt which explains more about how the wall drawings are implemented in practice: http://www.greatwhatsit.com/archives/1379 You "buy" a wall drawing from its current owner; they paint it over and you get a draftsman to re-create it on your wall. It comes with a certificate of authenticity. (There is sometimes a diagram as well. A better picture of one can be found here: http://www.museoreinasofia.es/sites/default/files/salas/informacion/104.03_sol_lewitt_eng.pdf)

The important part of this process, of course, is that it preserves uniqueness. But it doesn't close the door on rogue implementations of the initial recipe. Here's Wall Drawing #47, although it is not on a wall and comes with no certificate, so of course it is not Wall Drawing #47 at all:



Here's my WaterColorBot working on a pen-and-paper version:

Thursday, November 24th, 2016
2:28 am
Two days with the WaterColorBot
I have a new toy; it's Super-Awesome Sylvia's WaterColorBot from Evil Mad Scientist Labs. This is basically a pen plotter that knows how to use a watercolor paint palette. It can be used with crayons or colored pencils or dip pens instead of watercolors, too.

My first attempt showed some limitations of the RoboDraw software, which takes a SVG and attempts to paint it:



I made heavy use of clip paths in trying to recreate this Sol LeWitt wall drawing, and RoboDraw 0.98 doesn't appear to obey them. The beta 2.0 software couldn't paint my SVG at all, but as far as it got, it looked like it had the same problem. Here's a rendering from the same program I used to create that SVG:



However, the Inkscape plugin for WaterColorBot has far better control (at the cost of some more complexity). You draw each color as a separate layer in Inkscape, and the paths you define there are exactly what the brush follows. That leaves only the complexity of judging fills, the appropriate amount of water, the distance between paint refills, the height of the brush, and the quality of the paper. Generating SVGs from Python code and then painting them in Inkscape works pretty well. Here are two crayon drawings I did via this method, both inspired by Sol LeWitt:




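For the curious, the SVG generation looks roughly like this. This is a fresh sketch rather than my actual script, emitting one <g> layer per color, each full of "not straight" lines:

```python
import random

# A sketch (not my actual script) of generating a layered SVG in the
# spirit of a LeWitt wall drawing: one <g> element per color, each
# containing wavy ("not straight") horizontal lines.
def wavy_line(y, width, segments=20, jitter=4.0, rng=random):
    xs = [i * width / segments for i in range(segments + 1)]
    pts = [(x, y + rng.uniform(-jitter, jitter)) for x in xs]
    return "M " + " L ".join(f"{x:.1f},{y:.1f}" for x, y in pts)

def lewitt_svg(width=400, height=300, colors=("red", "blue", "yellow"), seed=1):
    rng = random.Random(seed)
    layers = []
    for color in colors:
        paths = [
            f'<path d="{wavy_line(y, width, rng=rng)}" stroke="{color}" fill="none"/>'
            for y in range(10, height, 15)
        ]
        layers.append(f'<g id="{color}">' + "".join(paths) + "</g>")
    return (f'<svg xmlns="http://www.w3.org/2000/svg" '
            f'width="{width}" height="{height}">' + "".join(layers) + "</svg>")

print(lewitt_svg()[:80])
```

The per-color `<g>` groups map directly onto Inkscape layers, which is what makes the plugin workflow convenient.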
I was able to use Inkscape to convert a small piece of pixel art to a PNG, and then paint the PNG in RoboDraw. Prewatering the paint and using printer paper led to a rather soggy output, but the conversion process worked pretty well and the fills were more sensible here.



Converting a Tintri logo to an outline in Inkscape and plotting that with a colored pencil worked quite well. (But I didn't take a picture of that one.) The hardware seems rather inexact with brush strokes but quite accurate on pencil drawings--- not sure what's going on there. It's capable of drawing pretty well-formed circles though with a slight mismatch where the curve closes.

The control software makes choices that are a little surprising sometimes. It doesn't paint like a human, of course, who would refill at the end of a stroke. Instead it will quit halfway through when the distance threshold has been met, or start a stroke and then immediately go for more paint. The fills are also done in scan lines when a human would tend to break them up. There is a low-level API so if sufficiently motivated I could try my own hand at doing better.

A limitation of the simple hardware is that there is no feedback on the amount of pressure on the brush pencil. The brush mechanism has software-controlled height, and they even include software to draw spiral paintings where the height of the brush is used to print an image. But you have to calibrate the height by hand, and it's easy to lift the carriage up by putting a pen too low. On the other hand, simply resting on the paper doesn't provide good results with a pencil or crayon. The guide rails are fairly simple and open at the top, so the amount of pressure is limited, but it still would be nice to have a pressure sensor to tell if the crayon needs to be lowered a bit to keep drawing. (You can see in the first crayon drawing above that I paused and adjusted the height a couple times.)
Saturday, November 19th, 2016
9:32 pm
Sol LeWitt and Conventions
Some of my favorite pieces on a recent SFMOMA visit were Sol LeWitt's conceptual art. Unfortunately the wall drawings did not photograph well, so here's something that LeWitt actually executed himself, "Forms Derived from a Cube."



LeWitt's wall drawings are meant to be ephemeral, and the ones I saw on display were going to be painted over later. They are given as instructions for a draftsperson to execute. You can find examples at http://massmoca.org/sol-lewitt/ or http://www.ericdoeringer.com/ConArtRec/LeWitt/LeWitt.html

The instructions are somewhat terse. For example, Wall Drawing #104 is simply "Arcs from the midpoints of two sides of the wall." But it looks like most executions do not actually engage with the freedom this provides. They instead more or less follow the initial interpretations and the conventions implied by other drawing instructions. For example, the standard interpretation is that the drawing is black unless specified otherwise. The draftsperson ensures that lines or arcs are evenly spaced unless they are "not straight" or explicitly "random", and that they intersect the boundaries of the figure. This is an "originalist" reading that says we follow LeWitt's earliest examples if the text is ambiguous.

Unless, of course, you let the museum visitors try their hand. Then you get variety: http://risdmuseum.org/manual/45_variations_of_a_drawing_sol_lewitt_and_his_written_instructions Of course, some of this variety is "I want to draw Snoopy rather than following these stupid instructions." (Even the examples there assume that the basic topology of parallelogram-inside-circle is specified. But is it?) Which is more in the spirit of "conceptual art"? The RISD museum decided that the text was insufficient:

These differences made clear that despite LeWitt’s desire for objectivity, a great deal of subjectivity exists when others are involved in the making. LeWitt certainly was aware of the variations that might occur, and although he might have accepted that his instructions could be drawn a number of ways, he did have preferences as to how his works were installed. That’s why, rather than using the written instructions as the sole guide for installing the wall drawing at RISD, the Museum’s installation crew was assisted by draftspeople from the Sol LeWitt Foundation who provided a diagram to help LeWitt’s idea take form.
Tuesday, October 25th, 2016
8:42 pm
Principal Component Analysis
A similar technique to NNMF from my last entry is Principal Component Analysis. It still factors the input matrix, but now negative values are allowed. A big challenge for using PCA is what to do with those negative numbers, which have no standard interpretation. (This is one of the advantages of NNMF, that it can be applied in scenarios in which only positive values make sense.)
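For reference, PCA is easy to sketch with a plain SVD. This toy version (on random stand-in data, not the actual pixel art) shows where the negative values come from:

```python
import numpy as np

# A sketch of PCA via SVD, assuming the same samples-as-rows layout as
# the NNMF experiment. Unlike NNMF, components may contain negatives.
rng = np.random.default_rng(0)
X = rng.random((8, 25))          # 8 samples, 25 "pixels" (stand-in data)
mean = X.mean(axis=0)
U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)

k = 3                            # keep 3 principal components
coords = U[:, :k] * S[:k]        # low-dimensional representation
X_hat = coords @ Vt[:k] + mean   # reconstruction from 3 components

assert Vt[:k].min() < 0          # negative entries appear, unlike NNMF
capped = np.clip(X_hat, 0.0, None)  # the "cap negatives at 0" trick
```

The final line is the capping approach mentioned below for rendering: since negative pixel contributions have no interpretation, just clamp them before blending.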

PCA is better at identifying features in my sample of pixel art, and degrades more gracefully as the number of components is lowered. The graphs here show original, reconstructed, and the PCA components separated by palette color. (Blue is negative, red is positive--- sorry.)



Reduced dimension (10 components.) Using nearest-neighbor of the blended colors works pretty well at reconstructing the original images:



However, the blends between flowers are, if anything, worse. Nearest-neighbor doesn't work very well at all. The best results I got were from capping the negative contributions at 0 and blending:



Sunday, October 23rd, 2016
10:14 pm
NNMF project
Some notes on a procedural generation project, just for fun.

Non-Negative Matrix Factorization is a dimensionality reduction technique. It reduces a set of samples (say, books consisting of words or images consisting of pixels) to a set of lower-dimensional "components". Then we can do fun things like interpolate between two of the samples by interpolating their component representation. One of the ProcJam 2016 speakers used this for blending Zelda levels; his talk can be found here: https://www.youtube.com/watch?v=3wcpLwvBTYo&feature=youtu.be&t=2h6m22s
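As a sketch of what NNMF is doing (this is a bare-bones Lee-Seung multiplicative-update version on stand-in data, not the implementation I actually used):

```python
import numpy as np

# Bare-bones NNMF via Lee-Seung multiplicative updates: X ~= W @ H with
# all entries nonnegative. A real run would use a library implementation.
def nnmf(X, k, iters=200, eps=1e-9, seed=0):
    rng = np.random.default_rng(seed)
    n, m = X.shape
    W = rng.random((n, k))
    H = rng.random((k, m))
    for _ in range(iters):
        H *= (W.T @ X) / (W.T @ W @ H + eps)   # update components' weights
        W *= (X @ H.T) / (W @ H @ H.T + eps)   # update per-sample coordinates
    return W, H

X = np.random.default_rng(1).random((6, 20))   # 6 samples, 20 features
W, H = nnmf(X, k=3)

# The fun part: interpolate two samples in component space.
blend = (0.5 * W[0] + 0.5 * W[1]) @ H
```

Each row of `W` is a sample's low-dimensional representation, so blending rows of `W` and re-projecting through `H` gives the interpolated sample.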

I wanted to play around with this but lacked an appropriate data set that was easy to use. So I decided to try the flower pixel art from the ProcJam Garden Pack found here: http://www.procjam.com/art/tess.html

The first attempt was disastrous. Even with the number of components == the number of samples (which permits a trivial matrix factorization) I couldn't convince the NNMF implementation to learn the shapes. What I did here was build the matrix out of RGBA values.



So, my first thought was that we could make it simpler by using a matrix where there was one row per pixel per color in the palette (since the pictures are reduced-palette anyway.) This is actually a dimensional increase, but it means every training value is either zero or one. This produced somewhat more interesting results, not only successfully learning the shapes, but also managing to represent them with a little bit of dimensional reduction. Of course, subtleties between the similar shapes are lost quite quickly.

20 components:



16 components:



12 components:



In order to convert back from a vector of palette color indicator values (c0, c1, c2, c3, c4, c5) to RGBA, I first normalize (so the sum is 1) and then take a weighted average of the indicated colors. This is probably too linear to produce non-"muddy" results, as you can see in the last image above and in the mixed images below.
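The conversion back looks something like this sketch (hypothetical palette, not the actual garden-pack colors):

```python
# A sketch of the decode step: normalize the indicator weights, then
# take a weighted average of the palette colors, channel by channel.
def decode(weights, palette):
    total = sum(weights) or 1.0
    w = [v / total for v in weights]
    return tuple(
        round(sum(wi * color[ch] for wi, color in zip(w, palette)))
        for ch in range(4)  # R, G, B, A
    )

palette = [(0, 0, 0, 255), (255, 0, 0, 255), (0, 0, 255, 255)]
print(decode([0.0, 1.0, 1.0], palette))  # equal red/blue -> (128, 0, 128, 255)
```

Averaging in RGB space like this is exactly where the muddiness comes from; a nonlinear rule (e.g. snapping to the highest-weighted palette color) avoids it at the cost of losing gradations.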

So here's the first attempt at doing a 50% mixture between every sample, using the full dimension. Some of them are arguably a good mixture, but most just come out sort of grey or confusing.



The code for this iteration can be found in the Gist here: https://gist.github.com/mgritter/e11ce48beef1811feb6a2cab9b439f16

Probably the next thing to try is to switch the vector-to-RGB conversion to be nonlinear; perhaps just take the highest-ranked palette color. This would mean every output pixel uses the palette, which is a more faithful interpolation. But overall this doesn't look like a good data set for NNMF.
Sunday, September 25th, 2016
4:45 pm
Roguelike Celebration notes 3/3
Practical Low-Effort PCG: Tracery and data-oriented PCG authoring by Kate Compton (@galaxykate on Twitter). There were two parts to this talk, one on Tracery (https://tracery.io) and another on AI techniques in general.

Tracery is a tool for generating text from a grammar. This is a project near and dear to my heart, because I did the same thing for one of my first Java applets. Sadly, "The Technobabble Generator: a Recursive Random Grammar Unparser" has been lost to the mists of time, and not even the Internet Archive can attest to its existence. To me, this is sort of an object lesson in how much the world has changed. In 1996 we didn't have GitHub or even much of a Javascript ecosystem, and those are what have allowed Tracery to go from a personal project to a reusable facility.

Tracery's language is JSON-based and pretty easy to use. Kate pointed out that SVG is just text, and has some very cool procedurally-generated dresses and spaceships. JSON is just text too, so people have even created Tracery-generating Tracery schemas. A Twitter bot hosting service, CheapBotsDoneQuick, uses Tracery. And it's been ported to other languages from its original Javascript implementation. I put together "Your Next Roguelike" using this hosted service: https://twitter.com/NextRoguelike
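To give the flavor of the idea, here's a toy recursive-grammar expander in Python. To be clear, this is my own sketch mimicking the #symbol# convention, not the real Tracery API, and the grammar is invented:

```python
import random

# A toy recursive-grammar expander in the spirit of Tracery: symbols
# wrapped in #hashes# are replaced by a randomly chosen rule, which may
# itself contain more symbols.
grammar = {
    "origin": ["Your next roguelike is #setting# with #feature#."],
    "setting": ["a haunted space station", "an infinite library"],
    "feature": ["permadeath", "procedurally generated puns"],
}

def expand(symbol, grammar, rng):
    rule = rng.choice(grammar[symbol])
    out, i = "", 0
    while i < len(rule):
        if rule[i] == "#":
            j = rule.index("#", i + 1)           # find the closing hash
            out += expand(rule[i + 1:j], grammar, rng)
            i = j + 1
        else:
            out += rule[i]
            i += 1
    return out

print(expand("origin", grammar, random.Random(0)))
```

The whole generator is data plus a tiny interpreter, which is exactly the "turn code into data" point Kate made later in the talk.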

The larger point she made is that when each piece of content requires its own code, STOP AND THINK first. What can you do instead to turn code into data, or use a tool? "Don't whittle your own screws". An example she gave was how, in the Sims, furniture actions and effects were all controlled by attributes which made it very easy to launch expansions.

(Did I mention Kate did the planets in Spore? She's gone back to school for her PhD.)

Examples of finding artificial intelligence "parts" out in the world:
* Finite State Machines: build an interpreter, not a FSM in code
* Decision Trees: once we have the tree as a data object separate from its implementation, it can be manipulated on its own. Examples: log which paths are being taken, evolve/modify/apply machine learning to the tree
* Constraint Sets/Solvers: e.g., Clingo (https://github.com/potassco/clingo) can be used once you cast your problem in this form. see "A Map Generation Speedrun with Answer Set Programming" https://eis-blog.soe.ucsc.edu/2011/10/map-generation-speedrun/
* Visualization tools. She mentioned an interesting effort in visualizing game logic which can be found here: http://ice-bound.com/news/visualizing-the-combinatorial/ While I got a pitch and demo from the author of "The Ice-Bound Concordat" later in the day, it didn't really click with me.
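The "FSM as data" point is easy to illustrate: once the transitions are a plain dict, you can log them, mutate them, or generate them. A minimal sketch (all state and event names are invented):

```python
# "Build an interpreter, not an FSM in code": the machine is a data
# object, so it can be inspected and manipulated separately.
transitions = {
    ("idle", "see_player"): "chase",
    ("chase", "lost_player"): "idle",
    ("chase", "low_health"): "flee",
    ("flee", "healed"): "idle",
}

def run(start, events, transitions):
    state, trace = start, [start]
    for ev in events:
        state = transitions.get((state, ev), state)  # undefined events are ignored
        trace.append(state)
    return trace

print(run("idle", ["see_player", "low_health", "healed"], transitions))
# ['idle', 'chase', 'flee', 'idle']
```

The returned trace is exactly the kind of path log the decision-tree bullet describes: a byproduct you get for free once the machine is data.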

"Markov By Candelight" by Caves of Qud author Jason Grinblat. He described Markov text generators (like those behind Erowid Recruiter https://twitter.com/erowidrecruiter) and wanted to find a way to use them to generate in-game book content for his game, which actually tied back into the plot.

To do this sort of thing well you need to curate your training corpus well. Fortunately, Caves of Qud already has a lot of text in terms of in-game books, NPC dialogue, help files, quest text, object descriptions--- totaling about 45,000 words. So he mixed that with a couple of 19th century physics texts available on Project Gutenberg, about another 45K words. That produced a mix he was happy with, definitely game-relevant but also coming across like there had been centuries of linguistic drift, data corruption, and unknown referents.

Titles are hard, because they have extra structure. After trying a "mad lib" style templating approach he went back to Markov generation but chopped off undesirable words (about 30 of them.) Then he added some extra variation by sometimes including an author's or editor's note, appending "volume 1" to the title, making the book extra-long, etc. These were meant to combat the feeling after a while that all the Markov text looks "just about the same."

How to turn this into a game mechanic? He looked for common prefixes which turned out to be combinations like "in the", "of the", "to the". Then he can bury a secret in the books by making it one of the generated text options that follows these common prefixes (but as an indivisible unit.) So a book might have "... of the {something cool} which I hid six miles east of {place}." The frequency can be tuned; he aimed for about one such secret in every four books.
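A sketch of how such a scheme might work (this is my guess at the mechanics, not Grinblat's actual code; the corpus and the secret are invented):

```python
import random
from collections import defaultdict

# A word-level Markov chain on bigram prefixes, where a "secret" is an
# extra continuation of a common prefix, treated as one indivisible unit.
corpus = ("the energy of the field is equal to the force of the charge "
          "in the field and the force acts on the charge").split()

chain = defaultdict(list)
for a, b, c in zip(corpus, corpus[1:], corpus[2:]):
    chain[(a, b)].append(c)

# Bury a secret: one possible continuation of ("of", "the") is a whole phrase.
secret = "AMULET which I hid six miles east of Qud"
chain[("of", "the")].append(secret)

def generate(start, n, rng):
    words = list(start)
    while len(words) < n:
        nxt = rng.choice(chain.get((words[-2], words[-1]), corpus[:1]))
        words.append(nxt)
        if nxt == secret:
            break   # the secret is emitted as a single unit, ending the run
    return " ".join(words)

print(generate(("the", "energy"), 15, random.Random(3)))
```

Tuning the frequency is just a matter of how many copies of the secret (versus ordinary continuations) you append to the common prefixes.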

How Brogue Tells Stories by Brian Walker. What makes a situation in a game exciting? And how do you generate those experiences? The biggest reason roguelikes get stale is that it's no longer cognitively interesting to optimize (because the crowdsourced guide tells you how to do so). So how do you get the most bang from new content? Classes probably aren't balanced, and there is probably also a dominant build within each class, so enhancing the environment is probably a better payoff.

Brogue does "improvised item-based advancement" where your character gets better because of equipment, rather than experience. But you're essentially on a budget of reusable "skill items" (weapons, staves, charms) and disposable "skill point items" (scrolls of enchantment.) You only see a subset of the available skills per game, so you have to improvise--- and commit before you know the whole set. So advancement != combat, enabling different play styles. The game rewards trickiness, a *player* skill not a character skill. And each dungeon produces subtly different character types.

The fallout from this decision is that effects can be intensified, and indeed should be because you want different builds to really feel different. The way he balances power is with situationality. For example, autohit is much more fun than a "-4" bonus for webbing. But that really only matters if the monster was hard to hit anyway! Axes don't just attack three adjacent characters, they attack *all* surrounding characters.

One way to create "situationality" is by making spaces interesting. Brogue does this with combinatorially overlapping zones, each with its own advantage and disadvantage. Terrain is more interesting than monsters, and is more self-explanatory and easier to plan around. This leads to emergent narrative from the player! (I have played some Brogue since his talk and this really happens--- one of the most fun things is figuring out how to use a trap or bridge or chasm against the monsters.) The interaction between terrain types and between monsters and terrain creates different "zones": for example, line of sight, a monster constrained to a terrain type, area of effect, vertical/horizontal effects, light levels, monsters avoiding chokepoints, etc.

(I asked one of the questions here; I was trying to get at whether guides also "up level" by becoming interactive--- there's still some optimal decision path to be found, it's just no longer static. He talked about the evolutionary race in general terms, but there's no "solver" for Brogue out there yet. There are, however, interactive guides for other games, though I forget the example I learned at the conference.)

This really tied back to other talks I'd heard about managing and exploiting combinatorial explosions. In my day job, I try very hard to make sure that combinatorial effects *don't* happen. Using feature A and feature B together should not cause a surprising interaction. But in a game, that's what you want! That provides novelty, it provides the opportunity for "wit", it gives a game a depth of behavior that would be hard to build piece by piece.

I'm not likely to rush out and build a roguelike, but maybe I'll participate in the 7DRL challenge next year. (My teenage attempt to write something Nethack-like in C++ was buried under the weight of its own ambitions.) But the conference was very thought-provoking and gave me a bunch of new games to try: Brogue, Rimworld, Caves of Qud, Strange Adventures in Infinite Space, DungeonMans, Transcendence. All the talks I attended were great, and I really encourage you to watch the videos. Drew Streib's talk on running a public Nethack server in particular is worth a listen to get the "ops" perspective instead of the "dev": https://www.youtube.com/watch?v=z3jwPszSgwg
Saturday, September 24th, 2016
11:09 pm
Roguelike Celebration 2/3
Angband by Erik Osheim and Robert Au, two of the current dev team. This was an interesting insight into being a maintainer of a codebase not originally your own. In fact, the very first Angband authors passed it on nearly immediately--- because they were graduating from college. This system broke down around 2005 when the transition from one benevolent dictator to another failed. This led to the creation of an official dev team (and relicensing under GPLv2) with a more collaborative approach and better coverage for succession issues.

But the question of "what is Vanilla Angband" hasn't been answered! The sense I got is that this is something the dev team still struggles with--- what is their job? Some of their changes were obvious: prune the ports (old platforms no longer testable), induce less RSI on things like targeting missile attacks, and support UTF-8. But Angband circa 2005 was undergoing a revolution in play style of rapidly diving to lower levels. This increased the danger to the player--- but also provided better rewards, and trained the player in avoidance techniques that were crucial to the endgame.

They decided to embrace this strategy and made some gameplay changes to support and encourage it. One is "forced descent" to prevent the player from farming easy levels. Another is selling more staples in town, and removing selling of items. (They also removed Charisma and haggling, but that is more in the "obvious cleanup" category than a "rebalance the game" change.)

They briefly mentioned some simulations they did, but didn't go into detail--- this would be interesting to look into. They also said that it's hard to tell whether players are actually happier with the game, given that lots of players never post on the forums to complain. :) I wonder how many roguelike games would benefit from "phone-home" functionality that tells the dev team what players are actually spending their time on.

What didn't get fixed: documentation and testing, like any other open-source project. A Google search still turns up a guide from 1996 as the top result.

"The Infinite Dungeon" (originally titled "Difficulty in Roguelike Games" in the program) by John Harris. This was one of the most thought-provoking talks to me. John asks why, in a golden age of roguelikes, does he have difficulty building enthusiasm for some of the new offerings?

He goes back to some of the first procedurally-generated content. TSR's Strategic Review, Volume 1, Issue 1, has an article on randomly generating dungeons in D&D. But this sort of generation is not really interesting. It's ultimately just a linear sequence of challenges. It's literally gambling--- stop now or go on? Because the structure of the environment you've seen so far has no bearing whatsoever on what comes next. (If you back up and try a branch you passed over before, the next room will be the same as if you stayed where you are.) Rogue does better at giving an actual sense of place.

Contrast those, however, with an early D&D adventure like "Village of Hommlet" that has layered depth and backstory (and surprises!) I sometimes think the temptation is always to add too much--- I always wanted everything to tie together in my Shadowrun campaigns, but the real world does have things that are more or less as they seem. But the point John made is that this backstory and design is something players can exploit. They could figure out that the monsters are likely to have a midden, and go look for it. How could we do the same in roguelike games?

He proposed a trichotomy: knowledge, logic, and wit. (Though he doesn't much like "wit" for the third category.)

1. There are things about a game you could look up in a FAQ or guide, that are able to be written down. Like, here are all the possible potions in Nethack.

2. There are inferences you can make from the knowledge: if I know X, then Y must be true too. If I've fought a tiger, the other room must have a lady. If there is only one unknown potion, it must be Z.

3. Then there are things that constitute "game playing ability" or the ability to discern intent behind a game. It's the application of common sense or intuition or pattern-finding abilities to things not yet encoded in guides.

How can we make games where we get to exercise our "wit"?

(Brian Walker, whose talk I'll cover in the next entry, had some thoughts about how story emerges from place/space/environment in Brogue. But each level in Brogue still feels random, in that there's no geological reason why there's a pool of water one place and not another--- or maybe I just haven't figured it out yet.)

Dwarf Fortress Design Inspirations by Zach and Tarn Adams. This was basically a love letter to their wasted childhoods. :) But more than that, it was a fascinating look at how they put the things they loved in games into their own game. Even their earliest games (text-only scrolling narratives) incorporated the elements that they knew how to do at the time, like remembering what killed you.

Permadeath in roguelikes is often coupled to a high score list. They view Dwarf Fortress's legends mode as an evolution of that same high score list--- recording what you did for later use or comparison. The Hack "bones file" was a revelation for them and is translated almost literally into their initial vision for Dwarf Fortress: build a fort, then explore its ruins.

They also talked about games they have not yet been able to incorporate. The game Ragnarok features climactic final battles among the gods. They'd like Dwarf Fortress to have the same sort of arc where great powers arise and ultimately shape the world in large-scale ways. This seems like something that might also edge into John Harris's "wit" --- if you know the shape of the story, you can exploit it.
Friday, September 23rd, 2016
7:42 pm
Roguelike Celebration notes 1/N
Last weekend I attended the Roguelike Celebration (roguelike.club) at the Eventbrite headquarters. All the recordings are available online. Each session is short, only a half hour each, but the conference did have to split into two tracks. Here are my notes from ones I attended in person.

A Love Letter to Hack by George Moromisato, author of Transcendence, a space exploration and shoot-'em-up game. He talked about the ways he was inspired by Nethack and tried to apply the same lessons to his game. So what's so awesome about Nethack? Despite its crude ASCII nature, it came out in 1987, the same time as Ultima V.

He contrasts "graphical immersion" with "interactive immersion", "quality" vs "variety" and "experience" vs "mastery." The last axis is sometimes talked about as "replayability", but the distinction he's drawing is more about whether you are continuously learning the game, or just going back for another helping.

One of the key features he identified was the "illusion of winnability." Even though you almost always lose, there's a sense that "if only I had done X". Nethackers label this "Yet Another Stupid Death." But the combinatorial explosion of possibilities makes it hard to win even an "open book test" like Nethack where you have unlimited time to make up your mind. This combinatorial explosion and interaction was a common theme at the conference.

Because George is writing a space game, he found it hard to take advantage of another of Nethack's strengths, which is relying upon the player's knowledge of the world. Yes, Nethack is a fantasy game and has its own set of conventions. But it also has monsters that are familiar to fantasy fans, and interactions like "getting things wet" behave like they do in the real world. In a space game, this is harder, because the interactions between different technologies don't have the same sort of standard vocabulary.

A third feature he characterized as "quantity is quality." Interacting with hundreds of monsters is its own good experience even if each of them is just an ASCII letter and a small set of behaviors. It's the combinatorial explosion of attributes interacting which makes this variety possible. Roguelike developers provide the most benefit when they add "one more experience" to the game rather than higher-quality content. He characterized this as "building a city, not a resort."

"Accessibility in Roguelike Games" by Alexei Pepers talked about a project to make Nethack more accessible to visually impaired players. (Believe it or not, some do play it by using screen readers to pronounce the punctuation characters on the screen!) Her main idea was new commands that describe your surroundings in text, including exact offsets in terms of relative numbers of tiles. Backtracking and mapping are also hard, so they also added some shortcuts for navigation ("head back to the staircase.")

Alexei characterized three main lessons learned: (1) There is no substitute for feedback from visually impaired users. They did some of their experiments with sighted players with the map turned off, but that population is not expert in screen-reading software! (2) Give users options on how exactly to get info, for example NSEW vs up/down/left/right. (3) The complexity of games leads to a lot of special cases. For example, a common tactic in Nethack is stealing from shops by training your pet to pick up items and put them down in the store entrance where you can grab them. A large store may have tons of items, and having screen-reading software describe them all every clock tick, just so the player can figure out what their dog is doing, is a complete pain.

Corridors in Nethack, and some of the lower levels, are very difficult: conveying precise spatial information in text, while maintaining the sense of immersion that makes the game fun, is hard. (One idea that I'd like to see is building a tactile interface for showing the shapes of corridors--- but I don't have the time or skills to put one together.)

Concrete Poems by Nick Montfort (a poet.) There was a lengthy opening digression on why a tweet is limited to 140 characters, as an example of the "history of material texts" that Nick is interested in. But this left frustratingly little time to talk about roguelikes! (It's an entertaining enough exposition, so you should watch it.) Why were computer terminals a grid? Because of typewriters, which need to advance by the same amount on each letter. Previous typography was, sensibly, proportional.

But what really got my attention was his discussion of "concrete poetry"; some of his examples were stunning. This is poetry that uses the typewriter, but in a "dirty" form: moving the paper around, masking letters, changing the ribbon, etc. He presented this as an early exploration of how artists made "space" out of ASCII.

His challenge was what roguelike developers can do to make something as visually stunning as the examples he showed, like Steve McCaffery's "Carnival".

(I have notes on several more sessions which I'll post later, but each session had a lot of depth so my writeups tend to be long.)
11:24 am
A micro-study of NFS operation sizes
Over at the Tintri blog, I wrote a study of NFS operation sizes in Tintri's customer base. I looked at averages weighted by operation and by byte, and tried to apply K-means clustering to the data to discover patterns of usage.

Here's a visualization I didn't include in that blog entry. It shows a timeline for several production VMstores, with each time point colored based on the characteristics of its read sizes. Many VMstores have a stable workload, others less so. (White areas are missing data.)
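As a rough illustration of the clustering approach (not the actual methodology, which is in the blog post), here is a toy k-means over synthetic read-size histograms. The buckets, data, and farthest-point initialization are all my own invention for this sketch:

```python
# Toy sketch: each workload is a histogram over read-size buckets, and
# k-means groups similar mixes together.  All numbers here are invented.

def dist2(p, q):
    """Squared Euclidean distance between two histograms."""
    return sum((a - b) ** 2 for a, b in zip(p, q))

def kmeans(points, k, iters=20):
    # Deterministic farthest-point initialization keeps the sketch reproducible.
    centers = [points[0]]
    while len(centers) < k:
        centers.append(max(points, key=lambda p: min(dist2(p, c) for c in centers)))
    clusters = []
    for _ in range(iters):
        # Assign each point to its nearest center, then recompute the means.
        clusters = [[] for _ in range(k)]
        for p in points:
            clusters[min(range(k), key=lambda i: dist2(p, centers[i]))].append(p)
        for i, cl in enumerate(clusters):
            if cl:
                centers[i] = tuple(sum(dim) / len(cl) for dim in zip(*cl))
    return centers, clusters

# Synthetic workloads: fraction of reads at (4K, 8K, 64K) sizes.
small_io = [(0.80, 0.15, 0.05), (0.75, 0.20, 0.05), (0.85, 0.10, 0.05)]
large_io = [(0.10, 0.10, 0.80), (0.05, 0.15, 0.80), (0.10, 0.20, 0.70)]
centers, clusters = kmeans(small_io + large_io, k=2)
```

On this toy data the two workload types separate cleanly into one cluster each; on real telemetry the interesting part is picking k and the bucket boundaries.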

Tuesday, September 6th, 2016
9:46 am
Taxes
The Apple situation is not hard to explain, despite what many journalists seem to imply.

1. Apple told the EU that they were paying taxes in Ireland

2. Apple told Ireland that they were paying their taxes in the US

3. Apple told the US that they were holding the cash overseas instead of bringing it back and paying US corporate tax rate

Step #1 is recognized practice in the EU, Step #2 is acceptable to Ireland, and Step #3 is legal in the US. What the EU decided is that if the net effect of #2 and #3 looks like a special tax benefit specifically for Apple, it should be treated as one. That is, Ireland "should have" collected the taxes as the money went through, instead of participating in the shell game. Specifically, Ireland's tax guidance that it was OK for all the European profits to be allocated to a notional Ireland-based "head office" (step 2) was what triggered the commission's ruling.

Apple, of course, is waiting for the next tax holiday or a really good reason to spend the money. Then they can either bring it back to the US (paying a lower rate; politicians are talking about 15%) or spend it in Europe and finally pay their Ireland taxes.
Sunday, September 4th, 2016
12:26 am
Drugsploitation
If we are going to have price controls on out-of-patent prescription drugs, this seems like just about the worst way to do it: http://www.slate.com/blogs/moneybox/2016/09/02/hillary_clinton_has_a_quietly_bold_idea_to_stop_drug_price_spikes.html The proposal is to have a committee evaluate whether price increases have been "reasonable" and, if not, impose fines or rebates.

The theory, perhaps, is that fear of government action will cause drug manufacturers to play it safe. However, by the time the hypothetical committee makes its findings, a significant amount of damage will probably already have been done. The whole thing seems sort of arbitrary in that the decision is after-the-fact and literally punishes success in pricing (not that this is necessarily a socially useful form of success.)

We have a model for this: electricity pricing. The power company goes to a local authority and proposes a rate, and gets a yea or nay for the next year or so. An out-of-patent drug fits this model pretty well--- there are no R&D costs here, just the capital costs of maintaining the ability to produce it. It removes the uncertainty about what you're actually allowed to charge, and prevents patients and insurance companies from bearing the costs up-front.
Saturday, July 16th, 2016
12:15 am
Arkham Knight
The fourth in the Batman series by Rocksteady Games.

The good:

I found the game fairly narratively satisfying (yes, even the awful third ending) with a few exceptions. It was firmly tied to Arkham City and the events there, rather than ignoring the previous games.

Most of the Riddler's puzzles were genuinely interesting to figure out rather than being exercises in perfection.

I <3 Cash's comments about all the items in the evidence room.

The additions to melee combat and stealth mode were reasonable and added some challenge.

Not so good:

I know this isn't a Bioware game, so your choices don't really make any difference. But I hate it when it's pointed out to you that your choice doesn't make a difference by circling around again and making you pick the "right" choice. If you had to play the rest of the game as Robin instead, that wouldn't be so horrible, would it?

The villain's identity is not a secret to any informed Batman fan, despite attempts at misdirection (which don't work in the DC universe anyway). Similarly, Barbara Gordon's death is unconvincing rather than tragic.

Just how Scarecrow finances this huge army of men and drones is never really clarified, even in the audio files. With multiple billions of dollars he could probably create fear at a large scale somewhere other than Gotham, much more efficiently.

Arkham Asylum and Arkham City take place in small, bounded environments and that's OK--- it even adds to the atmosphere. Arkham Knight takes you into the city proper (three separate islands) and still feels tiny. This is particularly bad when you're chasing somebody through the streets and they keep circling the same few blocks because that's all there is.

The "stealth" tank battles were merely tedious. Even when I hadn't taken out all the watchtowers or roadblocks, the drones didn't seem to coordinate. The antagonists should have been using one of their flying drones (kept in reserve) to spot Batman.

Actively bad:

The running tank battles are fine (though I was annoyed because I didn't realize the Batmobile had a cannon for quite a while, which made puzzles that required the use of the big gun frustrating when I tried to solve them with the machine gun.) But the experience of driving the Batmobile made me feel like an incompetent driver, rather than experiencing what it would be like to be Batman racing through the streets. The hand-to-hand fights do a good job of putting you in the Batman fantasy. Tearing through the streets causing widespread mayhem, not so much. The amount of damage Batman does to the urban infrastructure is quite frankly unconscionable. It made me feel like I was playing Saints Row, not a Batman game.

The game's intro assuring you that everybody good has been evacuated from the city, thereby giving you permission to run rampant on the remaining citizens, is appalling. While the Batmobile has "non-lethal" features, I'm pretty sure some number of thugs I hit with the car would have died. Particularly the ones whose heads I parked on.

One of the things I liked about Arkham City was playing as Catwoman. Arkham Knight gives you two-hero battles in which you can switch back and forth between Batman and whoever's fighting alongside you (Catwoman, Robin, or Nightwing). Not only is this less satisfying, but you don't get enough time with an alternate character to really learn their combat moves. The melees are all "big", not giving a lot of room for experimentation. (It looks like the alternate-character content was moved into DLC, and the reviews of the DLC are not great.)
Tuesday, June 28th, 2016
12:46 am
Towards a Taxonomy of Crafting Games
Minecraft pretty much set the standard for survival sandbox crafting games and, judging by my Steam queue, we haven't really made much progress since. Here's an attempt at breaking down the core elements of the genre.

"Crafting" games are those in which the player combines multiple resources into higher-value items. The key challenge for the player is opportunity cost: a resource used to build X cannot be used to build Y.

Goal: survival (build shelter and fend off baddies), efficiency (produce lots of X, or the best-cost solution for X), wealth/achievement (make lots of X)

Progress: directed (fixed path or goals), sandbox (free placement and open decision-making)

Actors: first-person, character, directed agents, third-person omniscient ("god mode")

Environment: 2-D surface ("top down" or isometric view), 2-D plane ("side scroller" or "platformer"), 3-D

Example games I have played:

Don't Starve: Survival, Sandbox, Character, 2-D isometric

Played this a while back. Didn't get very far into it.

Infinifactory: Efficiency, Directed, First-person, 3-D

Very fun game about setting up production lines. Not a lot of choice in what you produce, but plenty of freedom on how to do it.

Triple Town: Wealth, Directed game with Sandbox metagame, Third-Person, 2-D surface, mashup with match-3

This one is a bit of a stretch. Unlike other crafting games, you can only combine multiples of the same element, and combinations are formed by placing groups of three or more.

Factorio: Survival (though I play with enemies disabled), Achievement (build a spaceship), character but with heavy automation, 2-D surface

This is all about setting up large-scale industrial infrastructure. Achievement is not enough to make me actually finish the endgame. Sandbox is pretty fun, though, since the goal is to build machines to build things, not just craft everything yourself.

Craft the World: Survival, Achievement, Sandbox, Directed Agents, 2-D plane

Currently playing; it's sort of a 2-D Dwarf Fortress, a genre with many options. I may be done soon: I've been through one endgame, and the different environments are not enough to keep me hooked.

Terraria: Survival, Sandbox, Character, 2-D plane

Meh.

Dwarf Fortress: Survival (doomed!), Sandbox, Directed Agents, 3-D world (with 2-D views only)

Really, I frequently ask myself why I'm not playing this more. It's the NetHack of this genre, with many object interactions.

Puzzle Craft: Wealth, Directed, Third-Person, 2-D plane, another match-3 mashup

Definitely a crafting game: the main game is used for collecting resources, while the metagame involves building the village, but in a very strictly controlled manner.

Swords and Potions: Wealth, Directed, Directed Agents + Third-Person, 2-D isometric

Create a shop, build up the town with items made there, to produce more raw materials. Sell to adventurers, or outfit them to send on quests. Kind of grind-y but I would like to find a game like this with more depth.
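For fun, the axes above can be encoded as plain data. A minimal sketch, using a few of this post's own classifications (the field names and the query at the end are my own invention):

```python
# A minimal data encoding of the crafting-game taxonomy.  The axis values and
# game tags come from the post; the field names are made up for illustration.
from dataclasses import dataclass

@dataclass
class CraftingGame:
    name: str
    goals: tuple        # survival / efficiency / wealth / achievement
    progress: str       # "directed" or "sandbox"
    actors: str         # first-person / character / directed agents / god mode
    environment: str    # "2-D surface", "2-D plane", or "3-D"

games = [
    CraftingGame("Don't Starve", ("survival",), "sandbox", "character", "2-D surface"),
    CraftingGame("Infinifactory", ("efficiency",), "directed", "first-person", "3-D"),
    CraftingGame("Factorio", ("survival", "achievement"), "sandbox", "character", "2-D surface"),
    CraftingGame("Terraria", ("survival",), "sandbox", "character", "2-D plane"),
]

# Once encoded, questions about the genre become one-liners:
sandboxes = [g.name for g in games if g.progress == "sandbox"]
```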

The problem is that when I look at the new games coming out in this category, they all seem to be concentrating on some axis I don't really care about. Some are science-fictional instead of fantasy. Many are trying for better 3-D graphics or incorporating more role-playing mechanics. But very few seem to have a unique goal, or a unique mechanic, that makes them at all interesting to me.

About the only one I considered was Blueprint Tycoon which is a stripped-down efficiency game sort of like Factorio.
Wednesday, May 11th, 2016
2:33 am
The Zuckerberg Master Plan
There was a lively discussion on this week's Slate Money podcast about nonvoting shares.

The theory Felix Salmon tries to expound upon is the following:

1. Mark Zuckerberg decides he wants to, say, control all media.
2. He issues a bunch of nonvoting Facebook stock and buys media companies with it.
3. Although common stock holders are diluted, Zuckerberg doesn't care and retains total control of Facebook.
4. Repeat until Mark Zuckerberg controls the world.

How far towards world domination can Zuckerberg proceed?

Facebook is worth $344 billion, with 2.3B shares. Disney has a market cap of $174 billion. If Zuckerberg wants to buy Disney, he needs to offer, say, a 30% premium, or $226 billion. That means the post-merger combined company should be worth about $52 billion less than the sum of the two, or $466 billion.

Can Zuckerberg then rinse and repeat? At what point does everybody sell their nonvoting shares, leading to a valuation of $0 instead? Or does Facebook stock retain enough value to literally buy everything?
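A back-of-the-envelope version of the arithmetic, using the rough 2016 market caps quoted above (the 30% premium is just an assumption):

```python
# Dilution arithmetic for the hypothetical all-nonvoting-stock acquisition.
fb_cap = 344e9       # Facebook market cap
disney_cap = 174e9   # Disney market cap
premium = 0.30       # assumed acquisition premium

offer = disney_cap * (1 + premium)   # nonvoting stock issued to Disney holders
overpayment = offer - disney_cap     # value destroyed for existing shareholders
combined = fb_cap + disney_cap - overpayment
```

The existing shareholders eat the premium, but Zuckerberg's voting control is untouched, which is the whole point of the maneuver.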
Monday, May 2nd, 2016
5:01 pm
Data! and Graphs!
Over at the Tintri corporate blog, I measured our customers' virtual machine sizes: https://www.tintri.com/blog/2016/05/data-dive-vm-sizes-real-world
Saturday, April 16th, 2016
11:06 pm
Structural problems in "Daredevil"
(The Netflix version.)

1. Severe head trauma and even broken bones can and do kill people. Particularly people who don't get great medical attention because they're criminals in a poor neighborhood.

So, having Daredevil and the Punisher argue about killing is more than a bit deliberately obtuse. Odds are at least one of Daredevil's victims has perished from a blood clot or infection or brain bleed or some other complication of the initial beating. I don't expect this to come up in the remainder of S2.

I realize this is not unique to Daredevil. ("Arrow" even gave Ray Palmer a potentially fatal blood clot as a result of being shot by an arrow. I don't expect the same concern to ever arise again there either.)

2. Daredevil can fight in the dark. He is shown taking out light sources in order to use this to his advantage. (Bet the residents and landlords love that, in addition to the bloodstains and drywall damage.) But because it's TV, they don't want to just show a completely black screen during a fight scene, so instead there's plenty of ambient light left to see the action. It's just dim, not dark.

Perhaps this could have been handled with some visual convention. (I don't buy "dim lighting" itself as the convention.)

Dim lighting also allows the use of the stunt double.

3. Matt and Foggy are present-day young lawyers, right? They're millennials. They would not make "Top Gun" references. They make other cultural references that are fine for me as a 40-year-old, but that I cannot believe two 20-somethings would make.

The design of Matt's childhood scenes attempts to make it look as if he grew up in an earlier period than he possibly could have. If he's under 30, he was born no earlier than 1986 and so grew up in the 90's.
Friday, April 15th, 2016
8:53 am
No thanks
This sounds like my personal version of hell:

What would you call the immediacy of Instagram; the impulsiveness of Tinder; and the exclusivity of a private Playboy Mansion event – all wrapped up into a new Super App? We call it a helluva lot of fun! Our client is about to fundamentally disrupt the traditional advertising/promotional model for event driven businesses with a crowdsourcing platform that encourages and incentivizes a "who's who" of ideal target participants to join the party.

...

POSITION: Chief Technology Officer Technology / Vice President Engineering & Product Development. This is a management position. You will be part of the core executive team that includes the CEO. Initially, this will also be an individual contributor position.


And it's in LA. It's really the last line that sells it.
Wednesday, April 13th, 2016
11:20 pm
Combinatorial Auctions and Implicit Collusion
I am rereading selected bits of "Combinatorial Auctions" in preparation for my session at Minnebar next week.

Here is a completely factual thing that happened in the U.S. government PCS spectrum auction (a simultaneous ascending auction). Bidders who wanted to discourage competition would engage in "code bids" against smaller carriers. Suppose A bid on a block which B wanted. B then bid on a block for which A was the current leader, and which B had not previously indicated interest in. The last three digits of the bid encoded the lot number for which the bid was "punishment."
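A hypothetical sketch of the encoding; the dollar amounts and lot numbers here are my own invention, and the real bids are documented in the Cramton-Schwartz paper:

```python
# "Code bid" sketch: choose a dollar amount whose last three digits name the
# lot being punished.  All numbers here are invented for illustration.

def make_code_bid(base_amount, punished_lot):
    """Round the bid down to a multiple of $1000, then add the lot number."""
    assert 0 <= punished_lot < 1000
    return (base_amount // 1000) * 1000 + punished_lot

def decode_lot(bid_amount):
    """The 'message' is just the bid amount modulo 1000."""
    return bid_amount % 1000

bid = make_code_bid(2_350_700, punished_lot=378)  # "retaliating" over lot 378
```

Since bid increments dwarf a few hundred dollars, the signal costs the bidder essentially nothing, which is what made the scheme attractive.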

Source: http://ftp.cramton.umd.edu/papers2000-2004/cramton-schwartz-collusive-bidding.pdf

This tactic appears to have been successful. There is also some evidence of "demand reduction" in which carriers strategically decided not to bid on additional spectrum allocations in order to keep overall price levels lower.
Wednesday, April 6th, 2016
1:48 am
Fantasy tournament design
Fantasy and science fiction tournaments have weird and terrible structures. There doesn't seem to be much reason for this other than to provide the author with the joy of explaining the tournament. Occasionally it may impact the plot. But nobody just runs single-elimination, double-elimination, round-robin (like chess or Scrabble), or World Cup-style tournaments.

The Element Games (Essen Tasch) in "A Gathering of Shadows": This otherwise lovely book has a 36-person tournament which proceeds in:
  • two single-elimination stages
  • a round-robin group of three which is decided by points scored (so you could lose two and still proceed, I think), and
  • a three-way championship match
Also the winner gets to host next year, like the America's Cup. Changing formats for the championship is a standard trope, but makes little sense in terms of design. (Are the fans bored already?) Particularly when the honor of hosting is on the line, the final match comes down to coalitions rather than magical skill.

Azad in "Player of Games": two 10-player matches, and several 2-player matches (including the championship), but the dreaded 3-player match is thrown in. Some political justification in letting the dominant third ("apex") gender kick out males and females in the first round, which is a 10-player match. The purpose of the second 10-player match is less clear; perhaps this could serve to demonstrate that top Azad players can engage in coalition-forming, a useful real-life skill. But from a practical point of view, it's better to run both the 10-person matches immediately--- otherwise you need to host a lot of two-player matches. There's no attempt at an excuse for three-player matches and it's a non-event in the book.

Triwizard Tournament in "Harry Potter and the Goblet of Fire": a purely arbitrary (and easily rigged) magical sorting artifact reduces the competitors to just three in a single stage. While this allows more challenging (and expensive) trials to be set up, there seems little justification for believing that the cup has really selected the best competitor from each school. There is little drama, either, and the fans would probably appreciate seeing a larger field to start.

The tournament itself resembles a decathlon, with competitors engaging in various tasks and challenges. But rather than an objective scale of point awards, or purely time-based scoring, the judges engage in further arbitrary decision-making based on "spirit" which allows them to promote their favorites. (The tasks are spaced out over the course of a year for no good reason.)

The blatant point-rigging is OK, because the points don't matter anyway, serving as only an insignificant handicap on the final task, a labyrinth.

(Quidditch, surprisingly, appears to have a fairly standard tournament structure--- all the misdesign was saved for the game itself.)

Assumption in "Last Call" (Tim Powers). This looks like a poker game. But the real object is to lose the "assumption" all-or-nothing side bet after merging hands with your victim. As a poker game this is a real mess, and virtually no spectators will understand who has actually won.

Hunger Games in "The Hunger Games": while theoretically designed for showmanship, the actual structure of this battle-royale tournament doesn't lend itself well to televised drama, requiring frequent intervention by the Gamemakers. The Cornucopia is an obvious "rules patch" to force early conflict and reduce the large number of contestants. (Otherwise, the equilibrium strategy is likely waiting for others to become injured.) Unwatchable in real-time, this blood sport probably gets cut down to just the 18 minutes of real action for most viewers (similar to a compressed baseball game.) Unfortunately, there's only one per year. It would be much improved by scheduling smaller weekly matches, but then the poorer Districts wouldn't feel quite so oppressed. Which, to be fair, seems to be the point of the exercise anyway--- the Capitol is nobly willing to forgo entertainment value to promote heavy-handed social allegory.