Mark Gritter's Journal
 
Saturday, February 18th, 2017
4:26 pm
The Puppy Functor
We introduce a finite category Pup. A standard presentation has the objects {inside, outside}, and in addition to the identities there are two morphisms which form an inverse pair:

door: inside -> outside
door^-1: outside -> inside

The literature conventionally refers to these operations as "let the dog out" and "let the dog in", respectively.

A "puppy functor" P is defined as follows, and is applicable only to categories in which products exist:

P(S) = S x ( inside or outside )

P( 1_S : S -> S ) = 1_S x 1

P( f : S -> T ) = f x ( door or door^-1 ) for all other morphisms

In the resulting category, any action must be accompanied by letting the dog outside or letting the dog back inside, since the image of every non-identity morphism uses door or door^-1 rather than an identity. Further, any bijective morphism involves letting the dog out and then back in again (or the reverse), because

P(1) = P( f f^-1 ) = P( f ) P( f^-1 )

Categories with endomorphisms (other than the identity, of course) cannot have a puppy functor. In fact, the only categories which have a puppy functor are those whose graphs are bipartite. The practical aspect of this restriction implies that dog owners cannot expect to perform any third task between letting the dog out and letting the dog back in.
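
As a sanity check, here's a minimal Python sketch (a toy under the interpretation above, not rigorous category theory): admitting a puppy functor amounts to 2-coloring the objects with inside/outside so that every non-identity morphism crosses between the two sides, i.e. the underlying graph is bipartite.

def admits_puppy_functor(objects, morphisms):
    """morphisms: (source, target) pairs for the non-identity morphisms."""
    flip = {"inside": "outside", "outside": "inside"}
    side = {}                         # object -> "inside" or "outside"
    for start in objects:
        if start in side:
            continue
        side[start] = "inside"
        stack = [start]
        while stack:
            obj = stack.pop()
            for (s, t) in morphisms:
                if obj in (s, t):
                    other = t if s == obj else s
                    if other not in side:
                        side[other] = flip[side[obj]]
                        stack.append(other)
                    elif side[other] == side[obj]:
                        return None   # endomorphism or odd cycle: no puppy functor
    return side

# A non-identity endomorphism rules out a puppy functor:
print(admits_puppy_functor(["S"], [("S", "S")]))        # None
# A single morphism f: S -> T is fine:
print(admits_puppy_functor(["S", "T"], [("S", "T")]))   # {'S': 'inside', 'T': 'outside'}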
Sunday, February 12th, 2017
2:56 am
Back from Tintri sales kickoff
Tintri has historically held its sales kickoff in the Bay Area to be close to headquarters. This year we went to Las Vegas instead, at the Hard Rock Hotel and Casino. I don't like going to Las Vegas; among the reasons is the cigarette smoke, and the Hard Rock is no exception.

The Hard Rock's theme meant guitars everywhere. Literally propped up on some of the food serving tables.

Tintri's outside speakers this year were a little different. We had a performance by Butterscotch (a beatboxer) and talks by Vancouver social entrepreneur Mark Brand and filmmaker Nirvan Mullick, of "Caine's Arcade" fame. We participated in a Cardboard Challenge where we made games out of cardboard (and other items, like leftover party favors from the previous night.) You can see my team's two-person maze in action here: https://goo.gl/photos/Z66EprrvTFrugjfA8

Mark Brand made us all promise to leave the world better than we found it. The only other thing I can clearly remember from his speech, unfortunately, is that people keep wanting to show up and cook food for the homeless despite not actually knowing how to cook. I think he also had a few anecdotes about testing one's assumptions, and about not becoming a reality TV star.

Hard Rock has three poker tables which are often empty. They have a nightly 8pm $70 poker tournament that usually doesn't run. But the Tintri federal team showed up en masse so we had a fun time losing to one of the Tintri sales engineers. (A few other tourists showed up as alternates, and one of them chopped with Ronnie.) I busted in fourth after hitting an ace out of position against a set.

Tintri had a good year and there is encouraging news for the year to come. We won a partner award from CDW and they hope to expand their Tintri business, which is potentially very big. I harangued the Central region sales team to make more use of me--- last year I visited Wisconsin and Seattle, and had calls with customers in Chicago and Florida, but didn't meet any Minnesota, Iowa, Nebraska, or Missouri customers.
Saturday, January 28th, 2017
9:14 pm
Peter Thiel
Before the election, I thought that Peter Thiel's support for Trump was just Silicon Valley "disruptive" thinking at its worst. If the system isn't working, let's try something different (without a lot of thought as to whether it's actually better.) After all, Peter Thiel would not be the one suffering, so from his perspective it would be a risk-free experiment. He even tried making the case that we should take Trump seriously but not literally.

Foolishly, I did not realize that billionaires always have a hedge in place. In Thiel's case, it's citizenship in New Zealand, which he somehow obtained in 2011 despite not fulfilling the five-year residency requirement.
Friday, January 27th, 2017
3:30 pm
Retiring another interview question
I've been talking to people who have built cloud services that communicate with on-premises equipment (like storage arrays or routers.) One of my questions has been: "can you give me an example of some difficulties, or customer requirements, that you've encountered about the device-to-cloud communication channel?"

The clearest sign of somebody who has not deployed to production is that they don't know of any, or the best they can come up with is "use a web proxy."

Here's my list, which I'm pretty sure is incomplete. (A minimal client-side sketch of a few of these follows the list.)

1. Some customers want to know the exact IP address and port you will connect to, and hard-code it in their firewall. Want to change cloud service providers? Forget it. Want to scale via DNS load balancing, or add a second server in Europe? Are you really going to go back and tell all those customers to open up more holes? Probably not.

2. Customers want you to use an HTTPS proxy. (No problem, everybody expects that.)

3. The HTTPS proxy is username and password protected. (No big deal.)

4. There are multiple ways to provide the username and password to the proxy. Are all of them supported? (Oops, we didn't turn on GSSAPI/Kerberos until a customer specifically requested it.)

5. Some customers install a proxy that spoofs HTTPS, so you get a certificate mismatch. Does your code give up at that point, or ignore the error and do application-level authentication and application-level encryption? (If so, what is the point of HTTPS? Well, some customers won't support HTTP going out.)

6. Some proxies support HTTPS but not WebSockets.

7. The customer wants you to use a web proxy, and also to configure that web proxy to only allow your box access to one server.

8. An isolated storage network is standard. Some customers put the admin interface on a separate network too. (This is not too different from having no access at all.)

9. Some proxies have maximum connection lifetimes or maximum request sizes configured.
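
For concreteness, here's the promised minimal client-side sketch covering items 2, 3, and 5, using Python's requests library. The proxy address, credentials, endpoint URL, and CA bundle path are all hypothetical, and GSSAPI/Kerberos proxy authentication (item 4) needs an extra library on top of this.

import requests

# Hypothetical proxy and credentials -- substitute whatever the customer provides.
proxies = {
    "https": "http://svc_account:s3cret@proxy.customer.example:3128",  # items 2 and 3
}

resp = requests.get(
    "https://phonehome.vendor.example/api/v1/heartbeat",   # hypothetical endpoint
    proxies=proxies,
    # Item 5: if the proxy re-signs TLS, point 'verify' at the customer's CA bundle
    # rather than turning verification off outright.
    verify="/etc/ssl/certs/customer-ca-bundle.pem",
    timeout=30,
)
resp.raise_for_status()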
Wednesday, January 25th, 2017
10:31 pm
Generalized Fermat Through Brute Force
The generalized Fermat equation Ax^a + By^b = Cz^c has many integer solutions if 1/a + 1/b + 1/c > 1. It has rather fewer in the opposite case. I collected a bunch of references in this Quora question. The case a = 3, b = 2, c= 7 has been completely solved and there are a few "large" solutions including 9262^3+15312283^2=113^7.

Suppose you're not deep into analytic number theory and want to do a brute-force search for some more examples. A natural question to ask is whether some cases can be eliminated by a simple criterion that reduces the search space.

Take x^3 + y^11 = z^2, for example. If we calculate mod 4, then working through the 16 possibilities gives us only 7 cases that work:


y = 0 : x in { 0, 1 }
y = 1 : x in { 0, 2, 3 }
y = 2 : x = 1
y = 3 : x = 1


That means we can partition the search space and do only 7/16ths of the work, excluding all cases shown to be impossible mod 4. How well can we do with this approach? (In some cases we can get all the way to zero and prove impossibility! But not here.)



The farther out I search, the incrementally better "filters" I find. Calculating the possibilities mod 720 reduces the search space to about 4.3% of the possibilities. Improvements come at moduli 8, 12, 16, 24, 48, 72, 96, 120, 144, 240, 336, 480, and 672.
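
Here's a minimal sketch of how that filter fraction can be computed for a given modulus. It checks only the plain congruence condition, so extra constraints (coprimality, excluding trivial cases, and so on) would shrink the admissible set further, and its counts won't necessarily match the ones quoted above exactly.

def admissible_fraction(m, a=3, b=11, c=2):
    # Residue pairs (x mod m, y mod m) for which x^a + y^b can be a c-th power mod m.
    cth_powers = {pow(z, c, m) for z in range(m)}
    ok = sum(1 for x in range(m) for y in range(m)
             if (pow(x, a, m) + pow(y, b, m)) % m in cth_powers)
    return ok, m * m

print(admissible_fraction(4))    # the mod-4 filter
print(admissible_fraction(720))  # the mod-720 filter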

Now, I'm sure if I actually knew any analytic number theory this would be the point to say "aha! L-functions" or something like that. But perhaps there's an algebraic number theory reason for this sequence which some thought could discover.

When it comes time to do the brute force search, my favorite technique is Daniel Bernstein's heap-based approach, which gives us the candidate x^3 + y^11 values in order. Without using any of the above, I can tell you that there are no nontrivial solutions with |x| <= 1000000 and 0 < y < 300.
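
Here's a sketch of that heap-based enumeration (positive x only, while the real search also covers negative x): keep one heap entry per y holding the smallest unexamined x^3 + y^11, pop the global minimum, and push that y's successor. Candidate sums come out in nondecreasing order, so they can be checked against squares as they stream past.

import heapq
from math import isqrt

def sums_in_order(y_max):
    # One heap entry per y: (current value of x^3 + y^11, x, y), starting at x = 1.
    heap = [(1 + y**11, 1, y) for y in range(1, y_max + 1)]
    heapq.heapify(heap)
    while True:
        value, x, y = heapq.heappop(heap)
        yield value, x, y
        heapq.heappush(heap, ((x + 1)**3 + y**11, x + 1, y))

for value, x, y in sums_in_order(50):
    if value > 10**9:
        break
    z = isqrt(value)
    if z * z == value:
        # Note this also reports "trivial" hits like 2^3 + 1^11 = 3^2.
        print("%d^3 + %d^11 = %d^2" % (x, y, z))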
Saturday, January 21st, 2017
6:05 pm
Punching
I don't fucking care what happens to Random Nazi Dude. He can piss off. We don't have to understand or dialogue with him. The media doesn't need to cover him. I don't care if people disrupt his interview. I fully support removing Nazis from your events, schools, organizations, and workplaces.

What I care about is that my Twitter feed is full of people who think political violence should be cheered and encouraged.

You know, just like Trump does. Remember that? From the campaign?

When we talk about "not normalizing Trump" it should mean "not normalizing his worldview." Not setting ourselves up as the mirror image of his fascist supporters.

There are times when it's correct and necessary to kill in defense of others. I'm willing to accept that even violent protest is necessary to correct injustice. Punching somebody whose views you find odious doesn't meet that standard.
Friday, January 20th, 2017
2:03 am
Wave Function Collapse for Automated Doodling
I've been playing around a little with https://github.com/mxgmn/WaveFunctionCollapse (actually with the C++ port here: https://github.com/emilk/wfc). The algorithm is pretty impressive at randomly generating large bitmaps from small examples. And the animated GIFs of it in action are an inspired presentation.

But... my first attempt to use it failed entirely, because I picked a very complicated bitmap which, even after simplification, caused the algorithm to engage in lengthy calculations that produced blobby output.

So I tried something simpler, with a pattern I doodle in many of my notebooks:



I was a little surprised to find that 3x3 tiles just couldn't seem to produce a successful example:



It works better with 4x4 tiles, though the success rate is still low:



 

2x2 works great but doesn't preserve the closed paths. What's going on here? Why is this pattern challenging for the WaveFunctionCollapse algorithm?

One thing to note is that WFC doesn't backtrack. It heuristically picks the most-constrained (lowest-entropy) tile to place next--- in other words, it's basically a greedy search. For many tile sets, greedy search is fine. But when there is a higher-order constraint like a closed loop, some choices might not work out at all. (Or, it might just be something dumb like I need to tell it that my tiles are symmetric. But I don't think so.)
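
To make that concrete, here's a stripped-down Python sketch of the observe/propagate loop: simple per-tile adjacency constraints instead of the overlapping-pattern model the real implementations use, the lowest-entropy cell chosen greedily, and no backtracking, so a contradiction just means the run failed.

import random

def collapse(width, height, tiles, compatible, rng=random.Random(0)):
    # compatible(a, b, dx, dy): may tile b sit at offset (dx, dy) from tile a?
    grid = {(x, y): set(tiles) for x in range(width) for y in range(height)}
    while True:
        open_cells = [c for c, opts in grid.items() if len(opts) > 1]
        if not open_cells:
            return grid                                      # fully collapsed: success
        cell = min(open_cells, key=lambda c: len(grid[c]))   # lowest "entropy"
        grid[cell] = {rng.choice(sorted(grid[cell]))}        # observe
        stack = [cell]
        while stack:                                         # propagate
            cx, cy = stack.pop()
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nb = (cx + dx, cy + dy)
                if nb not in grid:
                    continue
                allowed = {b for b in grid[nb]
                           if any(compatible(a, b, dx, dy) for a in grid[(cx, cy)])}
                if not allowed:
                    return None                              # contradiction: failure
                if allowed != grid[nb]:
                    grid[nb] = allowed
                    stack.append(nb)

# Example: a two-tile "checkerboard" constraint where neighbors must differ.
result = collapse(8, 8, ["black", "white"], lambda a, b, dx, dy: a != b)
print("failed" if result is None else "succeeded")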

Is there any good way to characterize tile sets for which greedy search always works? Mostly works?
Tuesday, January 3rd, 2017
9:25 pm
Mathematics in "The Just City"
Because I'm reading a history of mathematics with lots of Greeks in it (including Aristotle questioning whether parallel lines really exist) I was inspired to reread "The Just City" looking for math. Now, Jo is not a mathematician, but there are a few tantalizing hints.

The city residents certainly do math, although astronomy is the only subject specifically mentioned. Interestingly, they have a Keplerian orrery showing elliptical orbits, presumably in a heliocentric model. This seems the sort of thing that would fit in well, and doesn't require any fundamental innovations beyond the conic sections Hellenistic mathematics was well acquainted with. (You can use hyperbolas to trisect angles, for example.) But the practice of astronomy is tricky.

Would the Just City use Greek numerals? They are somewhat cumbersome for high-precision arithmetic, though of course the Greeks did pretty well (and had abaci.) But compared to place-value notation they are far from ideal, and so shouldn't a city that strives toward excellence adopt the ahistoric Indo-Arabic numerals?

Similarly, astronomy requires a lot of precomputed tables of trigonometric functions to do well. The ancients had such tables--- but we know that many of them were buggy. Excellence ought to require using a modern table, but would the masters be willing to accept one? (Particularly since it would require allowing modern numerals.)

But if you're going to allow a modern table, are you going to forbid the modern analysis which constructed it? Ancient techniques were very clever but again, cumbersome in practice. Even leaving aside the workers and the sciences involved in electricity, many of the masters would be conversant with Euclid's parallel postulate and the philosophical debate surrounding it. Would they really prefer that the library of the city not have the answer?

Jo is clear that philosophy in the City has a cut-off date. No moderns (and certainly no Christians.) History ends with the Battle of Lepanto in 1571. It's less clear to me where you draw that line in a principled way for mathematics. Of course, the Just City does not shy from unprincipled ways of making decisions, and practical concerns mean that much of the mathematics that the masters might allow is in practice inaccessible, just because nobody among the masters understands and can teach it. But Kepler, who was born in 1571, used mathematics in a way so different from the Hellenistic tradition that it would be difficult to include his work without including its foundation.

In fact, Simmea learns the calculus, which dates to the late 17th century. From a philosophical standpoint, this is particularly troubling as the logical problems of infinitesimals were well-understood at the time (and certainly could be apparent to anyone trained in the Socratic tradition.) They were only resolved in the 19th century with modern real analysis. I don't think this material could be included without bringing in aspects of modern philosophy as well, even though most practicing mathematicians default to a rough-and-ready Platonism.
Sunday, January 1st, 2017
12:54 pm
Learning Swift via Advent Of Code
This year I participated in the Advent of Code contest in a new programming language, Swift. New both to me and objectively! The language is still in active development. I posted my solutions on GitHub: https://github.com/mgritter/aoc2016 (Technically only 24 programs, because one day I used Wolfram Alpha instead of writing any Swift code.) Because I got started late and used a new language, I wasn't at all competitive for the leaderboard.

The online language guide and other online resources made it pretty easy to get up to speed and start programming, and the increasing complexity of AoC made a good fit, although I feel like I peaked in using new language features about halfway through. One of the downsides of a new language that is still willing to make changes is that many of my searches returned content about Swift 2, not Swift 3. Using Linux was also somewhat of a challenge because a lot of the "getting started" guides are more Xcode-specific. The interactive mode (REPL) helped try language features out, but I never did figure out how to import my modules in order to test my functions interactively.

What I liked about Swift:

Terse but clear syntax. The language removes a lot of cruft from C-style languages. "If" statements don't need parentheses, "switch" statements don't need breaks, closures are easy to write. Type inference means you're getting the benefit of type checking without spending a lot of time writing type names. The range operators ... and ..< beat any C++ or Python equivalent.

C#-style attributes work pretty well too, although I found their usage within the standard library inconsistent: stylistically what was an attribute and what should be a function was not always clear to me.

Swift makes it very easy to distinguish constants ("let") from variables ("var") and I found this pleasing to write, but I'm not sure how helpful it ultimately was. It allows a very powerful 'switch' for enumerating cases, not just value-matching, which I used pretty heavily but probably could have used more.

Type System. There are a lot of things right here. Enumerations are first-class types and can carry extra values. Interface definition via protocols works pretty well. Structs and Classes make an entity/value distinction you can't get in Python or Java.

The most obvious innovation here is that types can be decorated as "optional": Int becomes Int?, MyClass becomes MyClass?. Optional types permit a 'nil' value and must be explicitly unwrapped (using either an operator or a special conditional form.) The "flatMap" function can be used to simultaneously transform and filter a collection by giving it a function of the form X->Y?, where the 'nil's get discarded, for example:

let program = input.lines.flatMap { Instruction.parse( $0 ) }

(The second part is a closure, and parentheses on the flatMap function call are optional in this common case.)

Dislikes:

Buggy implementation, particularly on the Linux port. I filed a couple bugs. One basically prevented me from exploring the regular expression library, since it wasn't usable in the REPL. I also encountered a known issue in type inference that fortunately had a workaround. The CharacterSet class (ported from NSCharacterSet) has a ton of filed bugs and a comment to the effect of "there sure are a lot of these, what are we going to do about it?"

Getting started was hard because the helpful "read a file into a String" function was one that had not yet been ported, so I had to find an (itself buggy) Gist to help me understand how to use Glibc and convert the raw bytes into a String.

Poor tuple support. Tuples work OK at basic stuff, and the ability to do named tuples is promising. Switching on a tuple works pretty well. But compared to Python, forget it. There's no easy way to do list/tuple conversion. Tuples aren't hashable by default, and for AoC I kept wanting to put a tuple as a dictionary key. And the absence of a 3-way zip led to this monstrosity:

return zip( zip( left, center), right ).map { x,y in (x.0,x.1,y) }

There were some libraries I could have downloaded that offered zip3, zip4, zip5, etc. as separate functions.

Whitespace matters for operators. I tried to stay away from this but got bitten a couple of times by mistake. You can define new operators, which is cool. But "a + b" and "a +b" are not the same; the latter is a prefix "+". The rules are:


The whitespace around an operator is used to determine whether an operator is used as a prefix operator, a postfix operator, or a binary operator. This behavior is summarized in the following rules:

If an operator has whitespace around both sides or around neither side, it is treated as a binary operator. As an example, the +++ operator in a+++b and a +++ b is treated as a binary operator.

If an operator has whitespace on the left side only, it is treated as a prefix unary operator. As an example, the +++ operator in a +++b is treated as a prefix unary operator.

If an operator has whitespace on the right side only, it is treated as a postfix unary operator. As an example, the +++ operator in a+++ b is treated as a postfix unary operator.

If an operator has no whitespace on the left but is followed immediately by a dot (.), it is treated as a postfix unary operator. As an example, the +++ operator in a+++.b is treated as a postfix unary operator (a+++ .b rather than a +++ .b).


Love/Hate features

Unicode support. They have a very clear explanation of why Unicode-compliant string parsing is hard, and their String and Character classes are clearly written to encourage you to do things correctly. Plus you can use Unicode for identifier names (and not just the boring ones, you can use emoji too.) Unfortunately I couldn't figure out if Emacs really supported this.

The downside is that String manipulation is hard for Advent-of-Code style problems where the input is usually ASCII but you're doing a lot of parsing. Again, maybe if the regexp library were usable for me some of this could have been simpler. But by far the grossest thing I wrote was this code to rotate the letters a-z:

        // 'scalar_a' is the scalar value of "a" and 'by' is the rotation amount, both
        // defined earlier; the non-lowercase cases of the switch are omitted here.
        switch ch {
        case "a"..."z":
            let scalars = String( ch ).unicodeScalars
            let val = scalars[scalars.startIndex].value - scalar_a
            let valInc = scalar_a + ( val + by ) % 26
            return Character( UnicodeScalar( valInc )! )


Often I ended up working with arrays of characters ("[Character]") rather than String, which was a reasonable compromise, I think. A friend tried to argue on Twitter that this made you think about the cost of some of the operations you were doing that were expensive on a Unicode string.

Extensions. You can add methods and attributes to any existing class. Even the standard library. "import Foundation" adds a lot of functionality to String and Array. I wrote this which I used over and over:

extension String {
    public var lines : [String] {
        get {
            return self.components( separatedBy: "\n" ).map { $0.trimmingCharacters( in:.whitespaces ) }
        }
    }
}


".components" is already a String extension added by Foundation (which was confusing at first because I didn't realize I had to "import Foundation" to get all the documented String functions.)

This can also be used to break up your implementation into smaller pieces where a single class definition would be hairy, for example if you want to define some utility classes in a place that "makes sense". If you define a new protocol or want to make an existing type hashable, the mechanism works really well.

From a software engineering standpoint, I find this potentially disastrous. I don't entirely understand all the namespace semantics (namespacing is there, and usually unobtrusive) but there doesn't seem to be any way to resolve conflicts in extensions, like there is for class names. I used a hashing library that added a ".md5()" function to String. That's great if there's only one such library in use.

The idiom here is "don't pollute the global namespace" which is good but it seems like you end up polluting every other namespace instead.

Package manager

Mostly this was OK; it "just works". I was easily able to reference external packages. For internal modules, though, I felt like it could do more, and it was hard to figure out the basics of "how do I use a library module in my main program." I ended up with this:

    Target( name: "AocMain", dependencies: ["SimpleFile"] ),
    Target( name: "day1", dependencies: [ "AocMain" ] ),
    Target( name: "day2", dependencies: [ "AocMain" ] ),
    Target( name: "day4", dependencies: [ "AocMain" ] ),


The first target is a library that depends on another library. The remaining targets are executables. The difference? The day1, day2, day4, ... subdirectories contain a file named "main.swift". Easy to set up yet... I didn't feel very good about it.

It feels like there should be some automatic dependency analysis going on. If I "import Foo" and "Foo" is another library in the same package, shouldn't it just work? And is it linking the external packages with every target? I didn't investigate, but it doesn't feel right to me.

Named Parameters: Swift is showing its Objective-C roots here. If you have a function

func doSomething( to: Nail, with: Hammer ) -> Woodworking

then you call it like this:

let result = doSomething( to:myNail, with:borrowedHammer )

You can make the named parameters optional, but it's pretty clear this is the desired idiom and sometimes it's used to good effect. But often I end up writing what feels redundant, like:


parse( input:input )
Coordinate( x:x, y:y )


so I don't know if this is my problem and I should make better choices, or a mechanism that could be improved.

The compiler tries to be helpful if you misspell a named parameter, a variable name, or use an overloaded function that's not there. But often it's only helpy, particularly if you have imported Glibc with its ton of symbols.

I enjoyed learning the language but since I'm not doing any iOS programming I doubt I'll stick with it. Probably I will learn Go or Rust next, and keep Python as my prototyping language.
Thursday, December 1st, 2016
11:35 pm
Biting the Hand That Feeds Me
The National Venture Capital Association has some advice for Donald Trump: don't tax us. The letter is long on generalities except where tax issues are concerned. While most of the policy issues are ones to which I'm at least a little sympathetic, I find their presentation self-serving and overly optimistic, rather than really engaging with any complexity.

Since the Reagan Administration, our tax code has been relatively effective at encouraging patient, long-term investment, but on net has been hostile to entrepreneurial companies. For example, punitive loss limitation rules punish startups for hiring or investing in innovation, while benefits such as the R&D credit are inaccessible to startups. Unfortunately, tax reform conversations in Washington have ignored these challenges while at the same time proposing to raise taxes on long-term startup investment to pay for unrelated priorities.

For instance, carried interest has been an important feature of the tax code that has properly aligned the interests of entrepreneurs and venture investors since the creation of the modern venture capital industry. Increasing the tax rate on carried interest capital gains will have an outsized impact on entrepreneurship due to the venture industry’s longer holding periods, higher risk, smaller size, and less reliance on fees for compensation. These factors will magnify the negative impact of the tax increase for venture capital fund formation outside of the traditional venture regions on the coasts.


No startup founder cares about loss limitation rules. The tax code has never been a factor in whether we hire or invest in R&D. Why? Because startups lose money. It only matters later, when the startup starts making money, that the company doesn't get as big a deduction for its previous losses. (Those banked losses are an asset... but any startup for whom that is material is failing or doing it wrong.)

That's just a lead-up to the NVCA's real concern: Trump's stance against the "carried interest loophole". VCs get paid 2-and-20 like other fund managers: 2% of fund assets annually (sometimes negotiated downwards) and 20% of fund income. The fund manager's 20% is counted as long-term capital gains because it is treated as their "share" of the multi-year investment rather than just a management fee.

There are some reasonable arguments that this arrangement is the proper way to think of VC compensation, but VCs often fall back on the "we won't do it if we're taxed higher than 15%" line which is in fact what NVCA says in their letter. This is nonsense. What else are they going to do? Work for salary and pay the same tax rate they're complaining about, only for less money? Engage in less risky investments and let somebody else get the big payoffs instead? Become hedge fund managers and live off the 2% management fee? (oh, wait, that's taxed as normal income too.)

Complaints about the IPO process being burdensome are fair, but it's not clear that is what's driving companies to stay private longer. NVCA is not clear at all what they'd like to do about it.

Debate during the election often focused on illegal immigration, but unfortunately not how legal immigration can create jobs for American citizens, including in underemployed areas. This can be accomplished by attracting the world’s best entrepreneurs to our country through creation of a Startup Visa and allowing more high-skill immigrants to help build startups.


Disingenuous. Startup-founding immigrants have little incentive to go to (and stay in) the Rust Belt.

Basic research: I'm in favor of that too, but the request here is also self-serving. The more applied research gets performed in universities, the less risk for venture-backed firms to develop it from zero. And in fact that's exactly what they want:


In addition to funding basic research, encouraging the commercialization of more technology will lead to increased job creation and economic growth. Many states with economically distressed areas can better utilize research universities to spread startup activity. High-growth startups frequently come out of university-funded research that is commercialized and become private companies.


This is a subsidy, and should be judged against other forms of public investment. (If that's the only way we can get more funding for public universities, I'm for it!)

Unfortunately, while our competitors have upped their game, American policymakers have taken our leadership in entrepreneurship for granted, as we detail below. Consequently, whereas twenty years ago U.S. entrepreneurs received approximately 90 percent of global venture capital investment, that share has fallen over time to only 54 percent in 2015. Further, in three of the last four years, at least half of the top ten largest venture investments in the world have occurred outside the U.S.


How much leadership in venture investments is "enough"? 75%? 90%? 95%? This relative comparison is meaningless. Venture funding in the U.S. has not dropped. The rest of the world is just growing faster. That's a good thing, and possibly inevitable. When China has a lot of cash, it's going to make a lot of investments.

Partner with states to spread startups in areas of economic distress. Your administration has an incredible opportunity to bolster public-private partnership economic development efforts to spur entrepreneurship. Ohio is one state that is leading the way to transform their economy and create opportunity. Third Frontier in Columbus is providing access to business expertise, mentorship, capital, and talent to turn great ideas into growing companies, and JumpStart in Cleveland provides equity-based capital to help startups grow, matches talented individuals with growing companies, and provides expertise to startups.


This is both hopelessly vague and disingenuous at once. These seed efforts mean little unless coastal VCs are willing to cut checks for mid-stage and late-stage capital. That's happening somewhat. But the language used here is often code for "investment tax credits" which I feel are of questionable value. The bigger issue is that the Rust Belt often lacks the worker base to benefit from the technology startups we're talking about here. Or to put it another way, the jobs that are created are ones where the workers are already in high demand, not substitutes for the displaced manufacturing jobs. If Columbus creates 1,000 new programming jobs, they will probably do more to stop brain drain than to help rural Ohio.
Monday, November 28th, 2016
11:53 pm
More Sol LeWitt
A good article on Sol LeWitt which explains more about how the wall drawings are implemented in practice: http://www.greatwhatsit.com/archives/1379 You "buy" a wall drawing from its current owner; they paint it over and you get a draftsman to re-create it on your wall. It comes with a certificate of authenticity. (There is sometimes a diagram as well. A better picture of one can be found here: http://www.museoreinasofia.es/sites/default/files/salas/informacion/104.03_sol_lewitt_eng.pdf)

The important part of this process, of course, is that it preserves uniqueness. But it doesn't close the door on rogue implementations of the initial recipe. Here's Wall Drawing #47, although it is not on a wall and comes with no certificate, so of course it is not Wall Drawing #47 at all:



Here's my WaterColorBot working on a pen-and-paper version:

Thursday, November 24th, 2016
2:28 am
Two days with the WaterColorBot
I have a new toy; it's Super-Awesome Sylvia's WaterColorBot from Evil Mad Scientist Labs. This is basically a pen plotter that knows how to use a watercolor paint palette. It can be used with crayons or colored pencils or dip pens instead of watercolors, too.

My first attempt showed some limitations of the RoboDraw software, which takes an SVG and attempts to paint it:



I made heavy use of clip paths in trying to recreate this Sol LeWitt wall drawing, and RoboDraw 0.98 doesn't appear to obey them. The beta 2.0 software couldn't paint my SVG at all, but as far as it got, it looked like it had the same problem. Here's a rendering from the same program I used to create that SVG:



However, the Inkscape plugin for WaterColorBot gives far better control (at the cost of some more complexity.) You draw each color as a separate layer in Inkscape, and the paths you define there are exactly what the brush follows. That leaves only the complexity of judging fills, the appropriate amount of water, the distance between paint refills, the height of the brush, and the quality of the paper. Generating SVGs from Python code and then painting them in Inkscape works pretty well. Here are two crayon drawings I did via this method, both inspired by Sol LeWitt:
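
As a sketch of the "generate SVGs from Python" step (purely illustrative, not the code behind the drawings shown here): write raw SVG text for one layer of strokes, then import it into Inkscape and repeat per color layer.

import random

def vertical_strokes_svg(width=400, height=300, n=40, seed=1):
    # One layer of random-ish vertical strokes, emitted as raw SVG text.
    rng = random.Random(seed)
    lines = []
    for _ in range(n):
        x = rng.uniform(10, width - 10)
        y0 = rng.uniform(10, height / 2)
        y1 = rng.uniform(height / 2, height - 10)
        lines.append('<line x1="{:.1f}" y1="{:.1f}" x2="{:.1f}" y2="{:.1f}" '
                     'stroke="black" stroke-width="2"/>'.format(x, y0, x, y1))
    return ('<svg xmlns="http://www.w3.org/2000/svg" width="{}" height="{}">\n'
            '{}\n</svg>'.format(width, height, "\n".join(lines)))

with open("strokes.svg", "w") as f:
    f.write(vertical_strokes_svg())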




I was able to use Inkscape to convert a small piece of pixel art to a PNG, and then paint the PNG in RoboDraw. Prewatering the paint and using printer paper led to a rather soggy output, but the conversion process worked pretty well and the fills were more sensible here.



Converting a Tintri logo to an outline in Inkscape and plotting that with a colored pencil worked quite well. (But I didn't take a picture of that one.) The hardware seems rather inexact with brush strokes but quite accurate on pencil drawings--- not sure what's going on there. It's capable of drawing pretty well-formed circles though with a slight mismatch where the curve closes.

The control software makes choices that are a little surprising sometimes. It doesn't paint like a human, of course, who would refill at the end of a stroke. Instead it will quit halfway through when the distance threshold has been met, or start a stroke and then immediately go for more paint. The fills are also done in scan lines when a human would tend to break them up. There is a low-level API so if sufficiently motivated I could try my own hand at doing better.

A limitation of the simple hardware is that there is no feedback on the amount of pressure on the brush or pencil. The brush mechanism has software-controlled height, and they even include software to draw spiral paintings where the height of the brush is used to print an image. But you have to calibrate the height by hand, and it's easy to lift the carriage up by putting a pen too low. On the other hand, simply resting on the paper doesn't provide good results with a pencil or crayon. The guide rails are fairly simple and open at the top, so the amount of pressure is limited, but it still would be nice to have a pressure sensor to tell if the crayon needs to be lowered a bit to keep drawing. (You can see in the first crayon drawing above that I paused and adjusted the height a couple times.)
Saturday, November 19th, 2016
9:32 pm
Sol LeWitt and Conventions
Some of my favorite pieces from a recent SFMOMA visit were Sol LeWitt's conceptual works. Unfortunately the wall drawings did not photograph well, so here's something that LeWitt actually executed himself, "Forms Derived from a Cube."



LeWitt's wall drawings are meant to be ephemeral, and the ones I saw on display were going to be painted over later. They are given as instructions for a draftsperson to execute. You can find examples at http://massmoca.org/sol-lewitt/ or http://www.ericdoeringer.com/ConArtRec/LeWitt/LeWitt.html

The instructions are somewhat terse. For example, Wall Drawing #104 is simply "Arcs from the midpoints of two sides of the wall." But, it looks like most executions do not actually engage with the freedom this provides. They instead more or less follow the initial interpretations and the conventions implied by other drawing instructions. For example, the standard interpretation is that the drawing is black unless specified otherwise. The draftsman ensures that lines or arcs are evenly spaced unless they are "not straight" or explicitly "random", and that they intersect the boundaries of the figure. This is an "originalist" reading that says we follow LeWitt's earliest examples if the text is ambiguous.

Unless, of course, you let the museum visitors try their hand. Then you get variety: http://risdmuseum.org/manual/45_variations_of_a_drawing_sol_lewitt_and_his_written_instructions Of course, some of this variety is "I want to draw Snoopy rather than following these stupid instructions." (Even the examples there assume that the basic topology of parallelogram-inside-circle is specified. But is it?) Which is more in the spirit of "conceptual art"? The RISD museum decided that the text was insufficient:

These differences made clear that despite LeWitt’s desire for objectivity, a great deal of subjectivity exists when others are involved in the making. LeWitt certainly was aware of the variations that might occur, and although he might have accepted that his instructions could be drawn a number of ways, he did have preferences as to how his works were installed. That’s why, rather than using the written instructions as the sole guide for installing the wall drawing at RISD, the Museum’s installation crew was assisted by draftspeople from the Sol LeWitt Foundation who provided a diagram to help LeWitt’s idea take form.
Tuesday, October 25th, 2016
8:42 pm
Principal Component Analysis
A similar technique to NNMF from my last entry is Principal Component Analysis. It still factors the input matrix, but now negative values are allowed. A big challenge for using PCA is what to do with those negative numbers, which have no standard interpretation. (This is one of the advantages of NNMF, that it can be applied in scenarios in which only positive values make sense.)

PCA is better at identifying features in my sample of pixel art, and degrades more gracefully as the number of components is lowered. The graphs here show original, reconstructed, and the PCA components separated by palette color. (Blue is negative, red is positive--- sorry.)



Reduced dimension (10 components.) Using nearest-neighbor of the blended colors works pretty well at reconstructing the original images:



However, the blends between flowers are, if anything, worse. Nearest-neighbor doesn't work very well at all. The best results I got were from capping the negative contributions at 0 and blending:
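
A minimal sketch of that variant (array names and shapes are assumptions; it reuses the same kind of one-hot palette matrix as the NNMF entry below): fit PCA, blend two samples in component space, reconstruct, and cap the negative palette contributions at 0 before converting back to colors.

import numpy as np
from sklearn.decomposition import PCA

def blend_with_pca(onehot, i, j, n_components=10, alpha=0.5):
    model = PCA(n_components=n_components)
    codes = model.fit_transform(onehot)                  # per-sample component scores
    mixed = alpha * codes[i] + (1 - alpha) * codes[j]    # blend in component space
    recon = model.inverse_transform(mixed[np.newaxis, :])[0]
    return np.clip(recon, 0.0, None)                     # cap negative contributions at 0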



Sunday, October 23rd, 2016
10:14 pm
NNMF project
Some notes on a procedural generation project, just for fun.

Non-Negative Matrix Factorization is a dimensionality reduction technique. It reduces a set of samples (say, books consisting of words or images consisting of pixels) to a set of lower-dimensional "components". Then we can do fun things like interpolate between two of the samples by interpolating their component representation. One of the ProcJam 2016 speakers used this for blending Zelda levels; his talk can be found here: https://www.youtube.com/watch?v=3wcpLwvBTYo&feature=youtu.be&t=2h6m22s

I wanted to play around with this but lacked an appropriate data set that was easy to use. So I decided to try the flower pixel art from the ProcJam Garden Pack found here: http://www.procjam.com/art/tess.html

The first attempt was disastrous. Even with the number of components == the number of samples (which permits a trivial matrix factorization) I couldn't convince the NNMF implementation to learn the shapes. What I did here was build the matrix out of RGBA values.



So, my first thought was that we could make it simpler by using a matrix where there was one row per pixel per color in the palette (since the pictures are reduced-palette anyway.) This is actually a dimensional increase, but it means every training value is either zero or one. This produced somewhat more interesting results, not only successfully learning the shapes, but also managing to represent them with a little bit of dimensional reduction. Of course, subtleties between the similar shapes are lost quite quickly.
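
Roughly, the encoding and factorization look like this (a minimal sketch using scikit-learn's NMF, not the exact code in the Gist linked below; samples are rows here, the transpose of the "one row per pixel per color" layout, and the array names and shapes are assumptions).

import numpy as np
from sklearn.decomposition import NMF

def encode(images, n_colors):
    # images: (n_samples, n_pixels) array of palette indices -> 0/1 indicator matrix
    n_samples, n_pixels = images.shape
    onehot = np.zeros((n_samples, n_pixels * n_colors))
    for c in range(n_colors):
        onehot[:, c::n_colors] = (images == c)
    return onehot

def factor(onehot, n_components):
    model = NMF(n_components=n_components, init="nndsvda", max_iter=500)
    weights = model.fit_transform(onehot)    # per-sample component weights
    return model, weights

# A 50/50 blend of samples i and j in component space, back to indicator space:
#   blended = 0.5 * (weights[i] + weights[j]); recon = blended @ model.components_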

20 components:



16 components:



12 components:



In order to convert back from a vector of palette color indicator values (c0, c1, c2, c3, c4, c5) to RGBA, I first normalize (so the sum is 1) and then do a weighted average of the indicated colors. This is probably too linear to produce non-"muddy" results as you can see in the last image above and in the mixed images below.
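
Per pixel, that conversion looks roughly like this (the palette array shape is an assumption), together with the nonlinear "just take the highest-ranked palette color" variant suggested at the end of this entry.

import numpy as np

def indicators_to_rgba(indicators, palette):
    # indicators: length-n_colors vector for one pixel; palette: (n_colors, 4) RGBA rows.
    weights = np.clip(indicators, 0.0, None)
    weights = weights / max(weights.sum(), 1e-9)   # normalize so the weights sum to 1
    return weights @ palette                       # weighted average of palette colors

def indicators_to_rgba_nearest(indicators, palette):
    return palette[int(np.argmax(indicators))]     # the nonlinear alternative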

So here's the first attempt at doing a 50% mixture between every pair of samples, using the full dimension. Some of them are arguably a good mixture, but most just come out sort of grey or confusing.



The code for this iteration can be found in the Gist here: https://gist.github.com/mgritter/e11ce48beef1811feb6a2cab9b439f16

Probably the next thing to try is to switch the vector-to-RGB conversion to something nonlinear; perhaps just take the highest-ranked palette color. This would mean every output pixel uses the palette, which is a more faithful interpolation. But overall this doesn't look like a good data set for NNMF.
Sunday, September 25th, 2016
4:45 pm
Roguelike Celebration notes 3/3
Practical Low-Effort PCG: Tracery and data-oriented PCG authoring by Kate Compton (@galaxykate on Twitter). There were two parts to this talk, one on Tracery (https://tracery.io) and another on AI techniques in general.

Tracery is a tool for generating text from a grammar. This is a project near and dear to my heart, because I did the same thing for one of my first Java applets. Sadly, "The Technobabble Generator: a Recursive Random Grammar Unparser" has been lost to the mists of time, and not even the Internet Archive can attest to its existence. To me, this is sort of an object lesson in how much the world has changed. In 1996 we didn't have Github or even much of a Javascript ecosystem, and those are what have allowed Tracery to go from a personal project to a reusable facility.

Tracery's language is JSON-based and pretty easy to use. Kate pointed out that SVG is just text, and has some very cool procedurally-generated dresses and spaceships. JSON is just text, so people have even created tracery-generating tracery schemas. A Twitter bot hosting service, CheapBotsDoneQuick, uses tracery. And it's been ported to other languages from its original Javascript implementation. I put together "Your Next Roguelike" using this hosted service: https://twitter.com/NextRoguelike
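
The core idea fits in a few lines. Here's a toy Tracery-style expander in Python (not the actual library, and the grammar below is invented, not the real bot's rules): rules map a symbol to a list of expansions, and #symbol# references are replaced recursively until none remain.

import random

RULES = {
    "origin": ["Your next roguelike is #adjective# #setting#."],
    "adjective": ["a procedurally generated", "an unforgiving", "a cozy"],
    "setting": ["dungeon crawl", "space derelict", "haunted garden"],
}

def expand(symbol, rules, rng=random.Random()):
    text = rng.choice(rules[symbol])
    while "#" in text:
        pre, key, post = text.split("#", 2)      # expand the first #symbol# reference
        text = pre + expand(key, rules, rng) + post
    return text

print(expand("origin", RULES))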

The larger point she made is that when each piece of content requires its own code, STOP AND THINK first. What can you do instead to turn code into data, or use a tool? "Don't whittle your own screws". An example she gave was how, in the Sims, furniture actions and effects were all controlled by attributes which made it very easy to launch expansions.

(Did I mention Kate did the planets in Spore? She's gone back to school for her PhD.)

Examples of finding artificial intelligence "parts" out in the world:
* Finite State Machines: build an interpreter, not a FSM in code (a tiny sketch follows this list)
* Decision Trees: once we have the tree as a data object separate from its implementation, it can be manipulated on its own. Examples: log which paths are being taken, evolve/modify/apply machine learning to the tree
* Constraint Sets/Solvers: e.g., Clingo (https://github.com/potassco/clingo) can be used once you cast your problem in this form. see "A Map Generation Speedrun with Answer Set Programming" https://eis-blog.soe.ucsc.edu/2011/10/map-generation-speedrun/
* Visualization tools. She mentioned an interesting effort in visualizing game logic which can be found here: http://ice-bound.com/news/visualizing-the-combinatorial/ While I got a pitch and demo from the author of "The Ice-Bound Concordat" later in the day, it didn't really click with me.
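
Here's what the "interpreter, not code" idea from the first bullet can look like: a tiny table-driven FSM in Python, with invented monster-AI states and events. The behavior lives entirely in the transition table, so it can be loaded from data, logged, or mutated.

TRANSITIONS = {
    # (state, event) -> next state
    ("sleeping", "noise"):     "alert",
    ("alert",    "sees_hero"): "chasing",
    ("alert",    "quiet"):     "sleeping",
    ("chasing",  "hero_lost"): "alert",
    ("chasing",  "low_hp"):    "fleeing",
}

def step(state, event, table=TRANSITIONS):
    return table.get((state, event), state)   # unknown events leave the state unchanged

state = "sleeping"
for event in ["noise", "sees_hero", "low_hp"]:
    state = step(state, event)
    print(event, "->", state)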

"Markov By Candelight" by Caves of Qud author Jason Grinblat. He described Markov text generators (like those behind Erowid Recruiter https://twitter.com/erowidrecruiter) and wanted to find a way to use them to generate in-game book content for his game, which actually tied back into the plot.

To do this sort of thing well you need to curate your training corpus well. Fortunately, Caves of Qud already has a lot of text in terms of in-game books, NPC dialogue, help files, quest text, object descriptions--- totaling about 45,000 words. So he mixed that with a couple of 19th century physics texts available on Project Gutenberg, about another 45K words. That produced a mix he was happy with, definitely game-relevant but also coming across like there had been centuries of linguistic drift, data corruption, and unknown referents.

Titles are hard, because they have extra structure. After trying a "mad lib" style templating approach he went back to Markov generation but chopped off undesirable words (about 30 of them.) Then he added some extra variation by sometimes including an author's or editor's note, appending "volume 1" to the title, making the book extra-long, etc. These were meant to combat the feeling after a while that all the Markov text looks "just about the same."

How to turn this into a game mechanic? He looked for common prefixes which turned out to be combinations like "in the", "of the", "to the". Then he can bury a secret in the books by making it one of the generated text options that follows these common prefixes (but as an indivisible unit.) So a book might have "... of the {something cool} which I hid six miles east of {place}." The frequency can be tuned; he aimed for about one such secret in every four books.

How Brogue Tells Stories by Brian Walker. What makes a situation in a game exciting? And how do you generate those experiences? The biggest reason roguelikes get stale is that it's no longer cognitively interesting to optimize (because the crowdsourced guide tells you how to do so.) So how do you get the most bang from new content? Classes probably aren't balanced, and there is probably also a dominant build within each class, so probably enhancing the environment is a better payoff.

Brogue does "improvised item-based advancement" where your character gets better because of equipment, rather than experience. But you're essentially on a budget of reusable "skill items" (weapons, staves, charms) and disposable "skill point items" (scrolls of enchantment.) You only see a subset of the available skills per game, so you have to improvise--- and commit before you know the whole set. So advancement != combat, enabling different play styles. The game rewards trickiness, a *player* skill not a character skill. And each dungeon produces subtly different character types.

The fallout from this decision is that effects can be intensified, and indeed should be because you want different builds to really feel different. The way he balances power is with situationality. For example, autohit is much more fun than a "-4" bonus for webbing. But that really only matters if the monster was hard to hit anyway! Axes don't just attack three adjacent characters, they attack *all* surrounding characters.

One way to create "situationality" is by making spaces interesting. Brogue does this with combinatorially overlapping zones, each with its own advantage and disadvantage. Terrain is more interesting than monsters, and is more self-explanatory and easier to plan around. This leads to emergent narrative from the player! (I have played some Brogue since his talk and this really happens--- one of the most fun things is figuring out how to use a trap or bridge or chasm against the monsters.) The interaction between terrain types and between monsters and terrain creates different "zones": for example, line of sight, a monster constrained to a terrain type, area of effect, vertical/horizontal effects, light levels, monsters avoiding chokepoints, etc.

(I asked one of the questions here; I was trying to get at whether guides also "up level" by becoming interactive--- there's still some optimal decision path to be found, it's just no longer static. He talked about the evolutionary race in general terms, but there's no "solver" for Brogue out there yet. There are, however, interactive guides for other games, though I forget the example I learned at the conference.)

This really tied back to other talks I'd heard about managing and exploiting combinatorial explosions. In my day job, I try very hard to make sure that combinatorial effects *don't* happen. Using feature A and feature B together should not cause a surprising interaction. But in a game, that's what you want! That provides novelty, it provides the opportunity for "wit", it gives a game a depth of behavior that would be hard to build piece by piece.

I'm not likely to rush out and build a roguelike, but maybe I'll participate in the 7DRL challenge next year. (My teenage attempt to write something Nethack-like in C++ was buried under the weight of its own ambitions.) But the conference was very thought-provoking and gave me a bunch of new games to try: Brogue, Rimworld, Caves of Qud, Strange Adventures in Infinite Space, DungeonMans, Transcendence. All the talks I attended were great, and I really encourage you to watch the videos. Drew Streib's talk on running a public Nethack server in particular is worth a listen to get the "ops" perspective instead of the "dev": https://www.youtube.com/watch?v=z3jwPszSgwg
Saturday, September 24th, 2016
11:09 pm
Roguelike Celebration 2/3
Angband by Erik Osheim and Robert Au, two of the current dev team. This was an interesting insight into being a maintainer of a codebase not originally your own. In fact, the very first Angband authors passed it on nearly immediately--- because they were graduating from college. This system broke down around 2005 when the transition from one benevolent dictator to another failed. This led to the creation of an official dev team (and relicensing under GPLv2) with a more collaborative approach and better coverage for succession issues.

But the question of "what is Vanilla Angband" hasn't been answered! The sense I got is that this is something the dev team still struggles with--- what is their job? Some of their changes were obvious: prune the ports (old platforms no longer testable), induce less RSI on things like targeting missile attacks, and support UTF-8. But Angband circa 2005 was undergoing a revolution in play style of rapidly diving to lower levels. This increased the danger to the player--- but also provided better rewards, and trained the player in avoidance techniques that were crucial to the endgame.

They decided to embrace this strategy and made some gameplay changes to support and encourage it. One is "forced descent" to prevent the player from farming easy levels. Another is selling more staples in town and removing the selling of items. (They also removed Charisma and haggling, but that is more in the "obvious cleanup" category than a "rebalance the game" change.)

They briefly mentioned some simulations they did, but didn't go into detail--- this would be interesting to look into. They also said that it's hard to tell whether players are actually happier with the game, given that lots of players never post on the forums to complain. :) I wonder how many roguelike games would benefit from "phone-home" functionality that tells the dev team what players are actually spending their time on.

What didn't get fixed: documentation and testing, like any other open-source project. A Google search still turns up a guide from 1996 as the top result.

"The Infinite Dungeon" (originally titled "Difficulty in Roguelike Games" in the program) by John Harris. This was one of the most thought-provoking talks to me. John asks why, in a golden age of roguelikes, does he have difficulty building enthusiasm for some of the new offerings?

He goes back to some of the first procedurally-generated content. TSR's Strategic Review, Volume 1, Issue 1, has an article on randomly generating dungeons in D&D. But this sort of generation is not really interesting. It's ultimately just a linear sequence of challenges. It's literally gambling--- stop now or go on? Because the structure of the environment you've seen so far has no bearing whatsoever on what comes next. (If you back up and try a branch you passed over before, the next room will be the same as if you stayed where you are.) Rogue does better at giving an actual sense of place.

Contrast those, however, with an early D&D adventure like "Village of Hommlet" that has layered depth and backstory (and surprises!) I sometimes think the temptation is always to add too much--- I always wanted everything to tie together in my Shadowrun campaigns, but the real world does have things that are more or less as they seem. But the point John made is that this backstory and design is something players can exploit. They could figure out that the monsters are likely to have a midden, and go look for it. How could we do the same in roguelike games?

He proposed a trichotomy: knowledge, logic, and wit. (Though he doesn't much like "wit" for the third category.)

1. There are things about a game you could look up in a FAQ or guide, that are able to be written down. Like, here are all the possible potions in Nethack.

2. There are inferences you can make from the knowledge: if I know X, then Y must be true too. If I've fought a tiger, the other room must have a lady. If there is only one unknown potion, it must be Z.

3. Then there are things that constitute "game playing ability" or the ability to discern intent behind a game. It's the application of common sense or intuition or pattern-finding abilities to things not yet encoded in guides.

How can we make games where we get to exercise our "wit"?

(Brian Walker, whose talk I'll cover in the next entry, had some thoughts about how story emerges from place/space/environment in Brogue. But each level in Brogue still feels random, in that there's no geological reason why there's a pool of water one place and not another--- or maybe I just haven't figured it out yet.)

Dwarf Fortress Design Inspirations by Zach and Tarn Adams. This was basically a love letter to their wasted childhoods. :) But more than that, it was a fascinating look at how they put the things they loved in games into their own game. Even their earliest games (text-only scrolling narratives) incorporated the elements that they knew how to do at the time, like remembering what killed you.

Permadeath in roguelikes is often coupled to a high score list. They view Dwarf Fortress's legends mode as an evolution of that same high score list--- recording what you did for later use or comparison. The Hack "bones file" was a revelation for them and is translated almost literally into their initial vision for Dwarf Fortress: build a fort, then explore its ruins.

They also talked about games they have not yet been able to incorporate. The game Ragnarok features climactic final battles among the gods. They'd like Dwarf Fortress to have the same sort of arc where great powers arise and ultimately shape the world in large-scale ways. This seems like something that might also edge into John Harris's "wit" --- if you know the shape of the story, you can exploit it.
Friday, September 23rd, 2016
7:42 pm
Roguelike Celebration notes 1/N
Last weekend I attended the Roguelike Celebration (roguelike.club) at the Eventbrite headquarters. All the recordings are available online. Each session is short, only a half hour each, but the conference did have to split into two tracks. Here are my notes from ones I attended in person.

A Love Letter to Hack by George Moromisato, author of Transcendence, a space exploration and shoot-'em-up game. He talked about the ways he was inspired by Nethack and tried to apply the same lessons to his game. So what's so awesome about Nethack? Despite its crude ASCII nature, it came out in 1987, the same time as Ultima V.

He contrasts "graphical immersion" with "interactive immersion", "quality" vs "variety" and "experience" vs "mastery." The last axis is sometimes talked about as "replayability", but the distinction he's drawing is more about whether you are continuously learning the game, or just going back for another helping.

One of the key features he identified was the "illusion of winnability." Even though you almost always lose, there's a sense that "if only I had done X". Nethackers label this "Yet Another Stupid Death." But the combinatorial explosion of possibilities makes it hard to win even an "open book test" like Nethack where you have unlimited time to make up your mind. This combinatorial explosion and interaction was a common theme at the conference.

Because George is writing a space game, he found it hard to take advantage of another of Nethack's strengths, which is relying upon the player's knowledge of the world. Yes, Nethack is a fantasy game and has its own set of conventions. But it also has monsters that are familiar to fantasy fans, and interactions like "getting things wet" behave like they do in the real world. In a space game, this is harder, because the interactions between different technologies don't have the same sort of standard vocabulary.

A third feature he characterized as "quantity is quality." Interacting with 100's of monsters is its own good experience even if each of them is just an ASCII letter and a small set of behaviors. It's the combinatorial explosion of attributes interacting which makes this variety possible. Roguelike developers provide the most benefit when they add "one more experience" to the game rather than higher-quality content. He characterized this as "building a city, not a resort."

"Accessibility in Roguelike Games" by Alexei Pepers talked about a project to make Nethack more accessible to visually impaired players. (Believe it or not, some do play it by using screen readers pronouncing the punctuation characters on the screen!) Her main idea was new commands that describe your surroundings in text, including the exact offsets in terms of relative number of tiles. Backtracking and mapping are also hard, so they also added some shortcuts for navigation ("head back to the staircase.")

Alexei characterized three main lessons learned: (1) No substitute for feedback from visually impaired users. They did some of their experiments with sighted players with the map turned off. But that population are not experts in screen-reading software! (2) Give users options on how exactly to get info, for example NSEW vs up/down/left/right. (3) The complexity of games leads to a lot of special cases. For example, a common tactic in Nethack is stealing from shops by training your pet to pick up items and put them down in the store entrance where you can grab them. A large store may have tons of items and having screen-reading software describe them all every clock tick, in order for the player to figure out what their dog is doing, is a complete pain.

Corridors in Nethack, and some of the lower levels, are very difficult. Conveying precise spatial information in text while maintaining the sense of immersion that makes the game fun is hard. (One idea that I'd like to see is building a tactile interface for showing the shapes of corridors--- but I don't have the time or skills to put one together.)

Concrete Poems by Nick Montfort (a poet.) There was a lengthy opening discursion on why a tweet is limited to 140 characters, as an example of the "history of material texts" that Nick is interested in. But this left frustratingly little time to talk about roguelikes! (It's an entertaining enough exposition, so you should watch it.) Why were computer terminals a grid? Because of typewriters, which need to advance by the same amount on each letter. Previous typography was, sensibly, proportional.

But what really got my attention is talking about "Concrete poetry" and some of his examples were stunning. This is poetry that uses the typewriter, but in a "dirty" form: moving the paper around, masking letters, changing the ribbon, etc. He presented this as an early exploration of how artists made "space" out of ASCII.

His challenge was what roguelike developers can do to make something as visually stunning as the examples he showed, like Steve McCaffery's "Carnival".

(I have notes on several more sessions which I'll post later, but each session had a lot of depth so my writeups tend to be long.)
11:24 am
A micro-study of NFS operation sizes
Over at the Tintri blog, I wrote a study of NFS operation sizes in Tintri's customer base. I looked at averages weighted by operation and by byte, and tried to apply K-means clustering to the data to discover patterns of usage.
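
For flavor, here's roughly what such a clustering step can look like (a sketch only: the feature construction is simplified and the data is random stand-in data, not the customer measurements). Each row is one system's read-size histogram normalized to fractions, and K-means groups systems with similar mixes.

import numpy as np
from sklearn.cluster import KMeans

histograms = np.random.default_rng(0).random((200, 8))          # stand-in for real data
features = histograms / histograms.sum(axis=1, keepdims=True)   # per-system size mix
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(features)
print(np.bincount(labels))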

Here's a visualization I didn't include in that blog entry. It shows a timeline for several production VMstores, with each time point colored based on the characteristics of its read sizes. Many VMstores have a stable workload, others less so. (White areas are missing data.)

Tuesday, September 6th, 2016
9:46 am
Taxes
The Apple situation is not hard to explain, despite what many journalists seem to imply.

1. Apple told the EU that they were paying taxes in Ireland

2. Apple told Ireland that they were paying their taxes in the US

3. Apple told the US that they were holding the cash overseas instead of bringing it back and paying US corporate tax rate

Step #1 is recognized practice in the EU, Step #2 is acceptable to Ireland, and Step #3 is legal in the US. What the EU decided is that if the net effect of #2 and #3 looks like a special tax benefit specifically for Apple, it should be treated as one. That is, Ireland "should have" collected the taxes as the money goes through, instead of participating in the shell game. Specifically, Ireland's tax guidance that it was OK for all the European profits to be allocated to a notional Ireland-based "head office" (step 2) was what triggered the commission's ruling.

Apple, of course, is waiting for the next tax holiday or a really good reason to spend the money. Then they can either bring it back to the US (paying the lower rate; politicians are talking about 15%) or spend it in Europe and finally pay their Ireland taxes.