Mark Gritter's Journal
 
Tuesday, June 28th, 2016
12:46 am
Towards a Taxonomy of Crafting Games
Minecraft pretty much set the standard for survival sandbox crafting games, and judging by my Steam queue, we haven't really made much progress since. Here's an attempt at breaking down the core elements of the genre.

"Crafting" games are those in which the player combines multiple resources into higher-value items. The key challenges for the user are opportunity cost (a resource used to build X cannot be used to build Y.)

Goal: survival (build shelter and fend off baddies), efficiency (produce lots of X, or the best-cost solution for X), wealth/achievement (make lots of X)

Progress: directed (fixed path or goals), sandbox (free placement and open decision-making)

Actors: first-person, character, directed agents, third-person omniscient ("god mode")

Environment: 2-D surface ("top down" or isometric view), 2-D plane ("side scroller" or "platformer"), 3-D

Example games I have played:

Don't Starve: Survival, Sandbox, Character, 2-D isometric

Played this a while back. Didn't get very far into it.

Infinifactory: Efficiency, Directed, First-person, 3-D

Very fun game about setting up production lines. Not a lot of choice in what you produce, but plenty of freedom on how to do it.

Triple Town: Wealth, Directed game with Sandbox metagame, Third-Person, 2-D surface, mashup with match-3

This one is a bit of a stretch. Unlike other crafting games, you can only combine multiples of the same element, and combinations are formed by placing groups of three or more.

Factorio: Survival (though I play with enemies disabled), Achievement (build a spaceship), character but with heavy automation, 2-D surface

This is all about setting up large-scale industrial infrastructure. Achievement is not enough to make me actually finish the endgame. Sandbox is pretty fun, though, since the goal is to build machines to build things, not just craft everything yourself.

Craft the World: Survival, Achievement, Sandbox, Directed Agents, 2-D plane

Currently playing. It's sort of a 2-D Dwarf Fortress, of which there are many options. I may be done with it soon: I've been through one endgame, and the different environments are not enough to keep me hooked.

Terraria: Survival, Sandbox, Character, 2-D plane

Meh.

Dwarf Fortress: Survival (doomed!), Sandbox, Directed Agents, 3-D world (with 2-D views only)

Really, I frequently ask myself why I'm not playing this more. It's the NetHack of this genre, with many object interactions.

Puzzle Craft: Wealth, Directed, Third-Person, 2-D plane, another match-3 mashup

Definitely a crafting game: the main game is used for collecting resources, while the metagame involves building the village, but in a very strictly controlled manner.

Swords and Potions: Wealth, Directed, Directed Agents + Third-Person, 2-D isometric

Create a shop and build up the town with items made there, which in turn produces more raw materials. Sell to adventurers, or outfit them to send on quests. Kind of grind-y, but I would like to find a game like this with more depth.
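
To make the axes concrete, here is how the taxonomy might look as a data structure (a minimal Python sketch; the class and field names are my own invention, and the tags are just my classifications above):

    from dataclasses import dataclass

    @dataclass
    class CraftingGame:
        name: str
        goals: set        # survival, efficiency, wealth, achievement
        progress: str     # "directed" or "sandbox"
        actors: str       # first-person, character, directed agents, god mode
        environment: str  # "2-D surface", "2-D plane", or "3-D"

    games = [
        CraftingGame("Don't Starve", {"survival"}, "sandbox", "character", "2-D surface"),
        CraftingGame("Infinifactory", {"efficiency"}, "directed", "first-person", "3-D"),
        CraftingGame("Factorio", {"survival", "achievement"}, "sandbox", "character", "2-D surface"),
        CraftingGame("Terraria", {"survival"}, "sandbox", "character", "2-D plane"),
    ]

    # e.g., all the sandbox survival games in the list:
    print([g.name for g in games if "survival" in g.goals and g.progress == "sandbox"])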

The problem is that when I look at the new games coming out in this category, they all seem to be concentrating on some axis I don't really care about. Some are science fictional instead of fantasy. Many are trying for better 3-D graphics or incorporating more role-playing mechanics. But very few seem to have a unique goal, or a unique mechanic, that makes them at all interesting to me.

About the only one I considered was Blueprint Tycoon, which is a stripped-down efficiency game sort of like Factorio.
Wednesday, May 11th, 2016
2:33 am
The Zuckerberg Master Plan
There was a lively discussion on this week's Slate Money podcast about nonvoting shares.

The theory Felix Salmon tries to expound upon is the following:

1. Mark Zuckerberg decides he wants to, say, control all media.
2. He issues a bunch of nonvoting Facebook stock and buys media companies with it.
3. Although common stock holders are diluted, Zuckerberg doesn't care and retains total control of Facebook.
4. Repeat until Mark Zuckerberg controls the world.

How far towards world domination can Zuckerberg proceed?

Facebook is worth $344 billion, with 2.3B shares. Disney has a market cap of $174 billion. If Zuckerberg wants to buy Disney, he needs to offer, say, a 30% premium, or about $226 billion. That means the post-merger combined company should be worth about $52 billion less than the sum of the two market caps ($518 billion), or about $466 billion.

Can Zuckerberg then rinse and repeat? At what point does everybody sell their nonvoting shares, leading to a valuation of $0 instead? Or does Facebook stock retain enough value to literally buy everything?
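
Here's a toy model of the rinse-and-repeat loop (a Python sketch under the same back-of-envelope assumptions: deals are paid for in nonvoting stock at a 30% premium, and the premium is pure value destruction; the follow-on targets and their valuations are made up):

    def acquisition_spree(v_fb, targets, premium=0.30):
        # Pay for each target in nonvoting stock at a premium; assume the
        # combined company is worth the sum of the parts minus the overpayment.
        for name, v_target in targets:
            paid = (1 + premium) * v_target
            v_fb += (1 - premium) * v_target
            print(f"bought {name} for ${paid:.0f}B; combined company now ${v_fb:.0f}B")
        return v_fb

    # Disney's figure is from above; the other targets are hypothetical.
    acquisition_spree(344, [("Disney", 174), ("MediaCo B", 100), ("MediaCo C", 60)])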
Monday, May 2nd, 2016
5:01 pm
Data! and Graphs!
Over at the Tintri corporate blog, I measured our customers' virtual machine sizes: https://www.tintri.com/blog/2016/05/data-dive-vm-sizes-real-world
Saturday, April 16th, 2016
11:06 pm
Structural problems in "Daredevil"
(The Netflix version.)

1. Severe head trauma and even broken bones can and do kill people. Particularly people who don't get great medical attention because they're criminals in a poor neighborhood.

So having Daredevil and the Punisher argue about killing is more than a bit deliberately obtuse. Odds are at least one of Daredevil's victims has perished from a blood clot or infection or brain bleed or some other complication of the initial beating. I don't expect this to come up in the remainder of S2.

I realize this is not unique to Daredevil. ("Arrow" even gave Ray Palmer a potentially fatal blood clot as a result of being shot by an arrow. I don't expect the same concern to ever arise again there either.)

2. Daredevil can fight in the dark. He is shown taking out light sources in order to use this to his advantage. (Bet the residents and landlords love that, in addition to the bloodstains and drywall damage.) But because it's TV, they don't want to just show a completely black screen during a fight scene, so instead there's plenty of ambient light left to see the action. It's just dim, not dark.

Perhaps this could have been handled with some visual convention. (I don't buy "dim lighting" itself as the convention.)

Dim lighting also allows the use of the stunt double.

3. Matt and Foggy are contemporaneous young lawyers, right? They're millennials. They would not make "Top Gun" references. They make other cultural references that are fine for me as a 40-year-old, but that I cannot believe two 20-somethings would make.

The design of Matt's childhood scenes attempts to make it look as if he grew up in an earlier period than he possibly could have. If he's under 30, he was born no earlier than 1986 and so grew up in the 90's.
Friday, April 15th, 2016
8:53 am
No thanks
This sounds like my personal version of hell:

What would you call the immediacy of Instagram; the impulsiveness of Tinder; and the exclusivity of a private Playboy Mansion event – all wrapped up into a new Super App? We call it a helluva lot of fun! Our client is about to fundamentally disrupt the traditional advertising/promotional model for event driven businesses with a crowdsourcing platform that encourages and incentivizes a "who's who" of ideal target participants to join the party.

...

POSITION: Chief Technology Officer Technology / Vice President Engineering & Product Development. This is a management position. You will be part of the core executive team that includes the CEO. Initially, this will also be an individual contributor position.


And it's in LA. It's really the last line that sells it.
Wednesday, April 13th, 2016
11:20 pm
Combinatorial Auctions and Implicit Collusion
I am rereading selected bits of "Combinatorial Auctions" in preparation for my session at Minnebar next week.

Here is a completely factual thing that happened in the U.S. government PCS spectrum auction (a simultaneous ascending auction). Bidders who wanted to discourage competition would engage in "code bids" against smaller carriers. Suppose A bid on a block which B wanted. B would then bid on a block for which A was the current leader, and in which B had not previously indicated interest. The last three digits of the bid encoded the lot number for which the bid was "punishment."
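
The encoding itself is trivial (a hypothetical sketch with made-up dollar amounts; the mechanism is as described in the paper cited below):

    def code_bid(amount, lot):
        # Round the retaliatory bid down to the nearest thousand and put the
        # number of the lot being "punished" in the last three digits.
        return (amount // 1000) * 1000 + lot

    print(code_bid(2_000_000, 378))   # -> 2000378, i.e. "stop bidding on lot 378"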

Source: http://ftp.cramton.umd.edu/papers2000-2004/cramton-schwartz-collusive-bidding.pdf

This tactic appears to have been successful. There is also some evidence of "demand reduction" in which carriers strategically decided not to bid on additional spectrum allocations in order to keep overall price levels lower.
Wednesday, April 6th, 2016
1:48 am
Fantasy tournament design
Fantasy and science fiction tournaments have weird and terrible structures. There doesn't seem to be much reason for this other than to provide the author with the joy of explaining the tournament. Occasionally it may impact the plot. But nobody just runs single-elimination, double-elimination, round-robin (like chess or Scrabble), or World Cup-style tournaments.

The Element Games (Essen Tasch) in "A Gathering of Shadows": This otherwise lovely book has a 36-person tournament which proceeds in:
  • two single-elimination stages
  • a round-robin group of three which is decided by points scored (so you could lose two and still proceed, I think), and
  • a three-way championship match
Also the winner gets to host next year, like the America's Cup. Changing formats for the championship is a standard trope, but makes little sense in terms of design. (Are the fans bored already?) Particularly when the honor of hosting is on the line, the final match comes down to coalitions rather than magical skill.

Azad in "Player of Games": two 10-player matches, and several 2-player matches (including the championship), but the dreaded 3-player match is thrown in. Some political justification in letting the dominant third ("apex") gender kick out males and females in the first round, which is a 10-player match. The purpose of the second 10-player match is less clear; perhaps this could serve to demonstrate that top Azad players can engage in coalition-forming, a useful real-life skill. But from a practical point of view, it's better to run both the 10-person matches immediately--- otherwise you need host a lot of two-player matches. There's no attempt at an excuse for three-player matches and it's a non-event in the book.

Triwizard Tournament in "Harry Potter and the Goblet of Fire": a purely arbitrary (and easily rigged) magical sorting artifact reduces the competitors to just three in a single stage. While this allows more challenging (and expensive) trials to be set up, there seems little justification for believing that the cup has really selected the best competitor from each school. There is little drama, either, and the fans would probably appreciate seeing a larger field to start.

The tournament itself resembles a decathlon, with competitors engaging in various tasks and challenges. But rather than an objective scale of point awards, or purely time-based scoring, the judges engage in further arbitrary decision-making based on "spirit" which allows them to promote their favorites. (The tasks are spaced out over the course of a year for no good reason.)

The blatant point-rigging is OK, because the points don't matter anyway, serving only as an insignificant handicap in the final task, a labyrinth.

(Quidditch, surprisingly, appears to have a fairly standard tournament structure--- all the misdesign was saved for the game itself.)

Assumption in "Last Call" (Tim Powers). This looks like a poker game. But the real object is to lose the "assumption" all-or-nothing side bet after merging hands with your victim. As a poker game this is a real mess, and virtually no spectators will understand who has actually won.

Hunger Games in "The Hunger Games": while theoretically designed for showmanship, the actual structure of this battle-royale tournament doesn't lend itself well to televised drama, requiring frequent intervention by the Gamemakers. The Cornucopia is an obvious "rules patch" to force early conflict and reduce the large number of contestants. (Otherwise, the equilibrium strategy is likely waiting for others to become injured.) Unwatchable in real-time, this blood sport probably gets cut down to just the 18 minutes of real action for most viewers (similar to a compressed baseball game.) Unfortunately, there's only one per year. It would be much improved by scheduling smaller weekly matches, but then the poorer Districts wouldn't feel quite so oppressed. Which, to be fair, seems to be the point of the exercise anyway--- the Capitol is nobly willing to forgo entertainment value to promote heavy-handed social allegory.
Monday, February 15th, 2016
4:08 pm
New Game -- "Splendor"
Marissa bought me "Splendor" for Valentine's Day: https://boardgamegeek.com/boardgame/148228/splendor I got to play it this weekend with Mike and Lily (my god-daughter.) The rules are simple enough that she had no difficulty playing at age 9.

Splendor is a card-collecting game. You pick up tokens from a shared pool in order to purchase cards that earn you points, and bonuses towards future purchases. Mike won handily by building up a large bonus in one particular color, which allowed him to make several high-value purchases using that color in a row. It is not clear whether this is a consistent winning strategy, since there is not a lot of choice of cards available, and most cards require multiple colors (as do all the "noble" bonuses.) We found ourselves a little adrift as to what was the best way to play --- specialization or generalization. So there is enough depth to be interesting, I think, without being totally lost.

It doesn't have the same feel as many building games, in which an early lead cannot be overcome. It is definitely possible to interfere with the other players' strategies, both by sitting on the tokens they need and by reserving cards that would be cheap for them. Our initial game was not particularly cutthroat in this way, though.
Tuesday, January 26th, 2016
9:30 pm
Legal Go Positions and Nonsingular Matrixes in F_3
A Go position is legal if there is no completely surrounded group of stones in it. It can be represented as an NxN array with each entry filled in as 0 (blank), 1 (white), or 2 (black.) I don't want to use the term "dead" because a dead group is a legal position, but an already-captured one still on the board is not. Note also that while players must alternate turns, 'pass' is a legal move so for example Black could fill the board all by herself without a single White stone. (But not fill in the last square as that would kill her group.)

A matrix over the finite field F_3 is also an NxN array with each entry either 0, 1, or 2. It is singular if its determinant is zero. The nonsingular matrices form a group under multiplication (because they are invertible.)

Owen Maresh asks how these two sets compare: https://twitter.com/graveolens/status/691732255333548032

For the easily-tractable cases, here is the answer:

11,232 nonsingular 3x3 matrices over F_3
12,675 legal 3x3 Go positions
6,996 that are both

24,261,120 nonsingular 4x4 matrices over F_3
24,318,165 legal 4x4 Go positions
13,518,392 that are both
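
A brute-force sketch (Python) that reproduces the 3x3 numbers above; the 4x4 case is the same idea with 3^16 boards and a 4x4 determinant, just a lot slower:

    from itertools import product

    N = 3

    def legal(board):
        # A position is legal iff every maximal same-colored group of stones
        # has at least one liberty (an adjacent empty point).
        seen = set()
        for start in range(N * N):
            if board[start] == 0 or start in seen:
                continue
            color, group, stack = board[start], set(), [start]
            while stack:
                i = stack.pop()
                if i in group:
                    continue
                group.add(i)
                r, c = divmod(i, N)
                for nr, nc in ((r+1, c), (r-1, c), (r, c+1), (r, c-1)):
                    if 0 <= nr < N and 0 <= nc < N and board[nr*N+nc] == color:
                        stack.append(nr*N+nc)
            seen |= group
            if not any(board[nr*N+nc] == 0
                       for i in group
                       for r, c in (divmod(i, N),)
                       for nr, nc in ((r+1, c), (r-1, c), (r, c+1), (r, c-1))
                       if 0 <= nr < N and 0 <= nc < N):
                return False
        return True

    def nonsingular(m):
        # Determinant of the 3x3 matrix over F_3.
        a, b, c, d, e, f, g, h, i = m
        return (a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)) % 3 != 0

    counts = [0, 0, 0]
    for board in product((0, 1, 2), repeat=N*N):
        go, mat = legal(board), nonsingular(board)
        counts[0] += mat
        counts[1] += go
        counts[2] += go and mat
    print(counts)   # expect [11232, 12675, 6996]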
Sunday, January 24th, 2016
12:43 am
Fermat's near misses
Using the technique described in my previous post, I attempted to answer this question on Quora: "What is the smallest possible difference between the sum of two perfect nth powers and a different nth power?" http://qr.ae/RO86UX

That is, what is the smallest nontrivial 'd' (in absolute terms) in the equation a^n + b^n = c^n + d?

After excluding the all-1 cases, and the cases where a = c, here are the best solutions for a,b,c < 10000:

8^3 + 6^3 = 9^3 + -1
5^4 + 5^4 = 6^4 + -46
16^5 + 13^5 = 17^5 + 12
2^6 + 2^6 = 3^6 + -601
2^7 + 2^7 = 3^7 + -1931
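
As a sanity check, here is a naive O(limit^2) search (a Python sketch; the results above actually came from the heap technique, described in the next entry down):

    def near_misses(n, limit):
        # Smallest nontrivial |d| with a^n + b^n = c^n + d for a, b, c < limit,
        # excluding the all-1 cases and the trivial cases where c equals a or b.
        best = None
        for a in range(2, limit):
            for b in range(a, limit):
                s = a ** n + b ** n
                c = round(s ** (1.0 / n))          # candidate c with c^n near s
                for cc in (c - 1, c, c + 1):
                    if cc < 2 or cc == a or cc == b:
                        continue
                    d = s - cc ** n
                    if best is None or abs(d) < abs(best[0]):
                        best = (d, a, b, cc)
        return best

    print(near_misses(3, 100))   # -> (-1, 6, 8, 9), i.e. 8^3 + 6^3 = 9^3 + -1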

Here are some more computational number theory results on x^3 + y^2 = z^5: http://qr.ae/RO864s
Friday, January 22nd, 2016
1:10 am
Parallelizing
In 1991 Daniel Bernstein published a short paper on a computational technique for solving diophantine equations of the form p(a) + q(b) = r(c) + s(d): https://cr.yp.to/papers/sortedsums.pdf He describes how a heap can be used to visit all sums p(a) + q(b), with a,b < H, in increasing order, using only near-linear space and near-linear time (in H). You can then set up two heaps and "increment" whichever one has the lowest minimum value, outputting any cases of equality.
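
Here is a minimal sketch of the heap idea, specialized to the single-stream case of finding a^3 + b^3 = c^3 + d^3 (Python; Bernstein's paper covers the general two-heap version, and this is not his code):

    import heapq

    def sorted_sums(p, q, H):
        # One heap entry per value of a, each remembering the next b to visit.
        # This yields every p(a) + q(b) with 1 <= a, b < H in increasing order
        # while holding only O(H) entries at a time.
        heap = [(p(a) + q(1), a, 1) for a in range(1, H)]
        heapq.heapify(heap)
        while heap:
            s, a, b = heapq.heappop(heap)
            yield s, a, b
            if b + 1 < H:
                heapq.heappush(heap, (p(a) + q(b + 1), a, b + 1))

    def taxicab(H):
        # Equal consecutive sums in the sorted stream are two representations.
        cube = lambda x: x ** 3
        prev = None
        for s, a, b in sorted_sums(cube, cube, H):
            if a > b:
                continue                 # skip mirror-image pairs
            if prev and s == prev[0]:
                print(f"{s} = {prev[1]}^3 + {prev[2]}^3 = {a}^3 + {b}^3")
            prev = (s, a, b)

    taxicab(50)   # first hit: 1729 = 1^3 + 12^3 = 9^3 + 10^3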

Can this technique be efficiently parallelized, on a SIMD machine (such as GPU)?

A simple approach is to divide up the elements in one of the heaps. That is, instead of, say, r(d) = d^4, you could have a family of functions r_i(d) = (192d+i)^4, which partition the search space. Each thread gets a different function to work with. This reduces one of the heaps by a factor of 192, but the other is just as large. Where previously we did H^2 + H^2 heap operations, now we have to do H^2 + H(H/192) heap operations, and the operations on the smaller heap are more efficient in the worst case as well. Also, every thread needs its own copy of *both* heaps, which is somewhat memory-intensive, limiting either the size of the problem that can be tackled or the number of available threads.

We could also partition *all* of the inputs, as long as every partition is tested against every other combination of partitions. (This is a natural optimization when some restriction on the equations can be found for some moduli, thus eliminating a broad range of possible solutions.) For example, if we divided all the inputs in half by parity, then we would need to check 8 different equations: of the 16 combinations of parity classes, only the 8 in which the two sides agree mod 2 can contain solutions. This again requires a pair of heaps per thread.

We could perhaps continue to use a shared heap for the left-hand side of the equation, at the cost of threads becoming idle when their minimum value grows too large. I don't know how to evaluate what this does to the speedup. It also introduces some serialization as the heap operation itself can't be parallelized.

A variation on the shared heap would "unroll" the left-hand heap and have a shared array of the next K values. I can't decide whether this is helpful or not.

On a side note, the availability of cheap large-capacity flash devices may make it feasible to apply techniques previously dismissed because of their space requirements.
Saturday, January 16th, 2016
3:31 pm
Not recommended
I read Mark Lawrence's "Prince of Thorns" based on Amazon recommendations and I am afraid it is not the thing I was looking for. I am willing to read grimdark, but this was just (as Liz Bourke puts it) "misogynist creepy" rather than cathartic. I was particularly puzzled by the large mutant who didn't seem to mind all that much that the protagonist wiped out all his friends and relations. Also, this seems to be one entry in a trend in which grim fantasy settings are revealed to be postapocalyptic futures, which in this book just came across as lazy worldbuilding. You don't have to worry about anachronistic idioms if it's the future! (Except that you totally do.)

I also read the latest from Jack McDevitt, "Thunderbird", which is a sequel to "Ancient Shores". While the story has some of the discovery and sense-of-wonder writing that McDevitt can do well, his heart didn't really seem to be in it. Instead he wanted to tell the story of how people would react to alien technologies and other worlds. But he's not very good at it in this book. The arguments are just barely sketched out, and the conclusion feels like the author just got tired of writing rather than bringing matters to a close. It might have been a stronger book had he abandoned the viewpoint characters involved in the exploration and spent more time on the core of the emotional story.
Saturday, January 9th, 2016
11:44 pm
My party, right or wrong, I guess.
“If Trump or Cruz wins the White House, then my side of the party has to re-evaluate who we are, what we stand for, and I’d be willing to do that,” Mr. Graham said.
http://www.nytimes.com/2016/01/10/us/politics/for-republicans-mounting-fears-of-lasting-split.html?_r=0

So, Mr. Graham is willing to sign up for Trumpism if Trump wins the White House. But not otherwise. Character!
3:20 pm
"... you got to learn to feel good about it. Look at the way the whole economy is structured."
What can rich people possibly do with their money?

This is a serious question. I'm rereading "The First Modern Economy: Success, Failure, and Perseverance in the Dutch Economy, 1500-1815." The authors take some time to discuss the rentiers of the 18th century, who were criticized both by contemporaries and historians for "a tendency on the part of capital... to prefer speculative trading... and investment in money lending and insurance, to the toil and hazards of foreign trade." (quote from Violet Barbour). The authors make a case that in part this was a natural reaction to a less-friendly trade environment --- mercantilist barriers were going up all over Europe --- but also a consequence of having more money than they really knew what to do with:

The vast accumulations of capital in the hands of the eighteenth-century capitalists placed them (ironically) in a position of dependence vis-a-vis national states. Only the states, with their taxing power, could hope to absorb such sums and make regular interest payments. The fact that governments sought to borrow almost exclusively to wage war guaranteed that Dutch capital would do little to stimulate economic growth directly; but neither the immediate economic possibilities nor the grandiose development projects that we, in retrospect, may be capable of imagining could have absorbed a large portion of the late eighteenth-century stock of financial capital. In the circumstances of the times, governments held a monopsonistic position in the capital markets.


(I think this is not quite on point, because the Dutch did have some pretty grandiose empire-building designs of their own, which absorbed quite a bit of capital; they just didn't work out.)

The same criticism persists today, about too much money sloshing around in financial instruments instead of the "real economy." But, absent some grand redistributionary scheme, what exactly should the ultra-rich be doing with their money? (We have seen how even voluntary redistribution provokes acerbic backlash on the means and goals.)

Loaning it to the government is still a popular choice, and as in the 17th and 18th centuries, it goes mainly to fund wars rather than development projects.

Starting a business takes lots of money --- and that's how at least a couple billionaires I know choose to use their money --- but, if the business is successful, only compounds the problem. And presumably we don't want capital going to unsuccessful businesses.

There's arguably a limit to even how much the startup ecosystem can fruitfully absorb. Small companies just don't need all that much capital. Plus an excess of cash leads to self-fulfilling prophecies in which success is measured by the ability to raise more cash.

Plunging all your cash into a 2-billion-dollar iron mine increasingly seems like a bad idea. Buying somebody else's company just shifts the problem around.

So where *should* all that capital be going? Is there even an answer? Clean energy? Our best answers don't seem all that different from the 18th century Dutch:

1. War and interest service on war
2. Naked speculation
3. Charity
4. World domination (aka Elon Musk, although in his case "space" might be more apropos)
Saturday, January 2nd, 2016
1:52 pm
more CUDA
I managed to tune my CUDA (GPU programming) demo project to get about 4x better throughput: https://github.com/mgritter/advent-of-cuda/tree/master/day4

The first major piece of surgery was moving constant tables into "shared memory" on the GPU, instead of device memory. The GPU's L1 cache does not serve as a read cache for device memory! At least, not in the version of the architecture I'm using, 3.2. Later revisions have a "read only" cache and a special intrinsic to use it. The "shared memory" is data that is available to all threads in a "block", and lives in the L1 memory, along with per-thread allocations and register spills.

After this, profiling (with the NVIDIA visual profiler) still showed that L2 cache throughput was maxed out, with a mix of both reads and writes. So the next step was to find a way to move the actual data being hashed into registers instead of memory. Shared memory would not be appropriate as each thread is working on a different input.

After both these steps, profiling shows that the performance is now bounded by compute, and that the kernel occupancy is 97.3%, so the GPU is almost fully utilized. The device is doing slightly more than 100 million MD5 hashes per second. (For this particular problem, only the first word of the hash is relevant.) Further improvements would probably require finding ways to hash with less integer arithmetic--- which is about 80% of the instructions. But, I have not looked at other GPU-based hash calculations and figured out what techniques they are using.
Friday, January 1st, 2016
1:35 am
How I've been entertaining myself
I participated in Advent of Code, an Advent calendar of programming puzzles. Of the 25 days of puzzles, I solved one without doing any coding, one other by minor scripting of an existing application, and the rest in Python. It was a fun exercise. (Sadly, I did not submit a solution quickly enough on the last day to make the leaderboard.)

For Christmas I received an NVIDIA Jetson TK1, a development board for the Tegra K1 "system-on-a-chip". Most notably, it comes with a 192-core GPU built in, making it a 5"x5" supercomputer.

I thought a good project to get more familiar with programming it would be to reimplement some of the Advent of Code problems using CUDA (Nvidia's programming API for GPU computing.) I posted my first effort on github: https://github.com/mgritter/advent-of-cuda/tree/master/day4 (But, I haven't rerun my old Python solution to demonstrate the speedup!) The new code is capable of finding the solutions to the original problems in less than a second each, and of finding solutions to the "next-hardest" version of the problem in about 9 seconds.
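
For reference, the Python version is only a few lines (a sketch, assuming day 4 is the 2015 puzzle of finding MD5 hashes that start with five zeroes; the secret key here is the puzzle's worked example, not my input):

    import hashlib
    from itertools import count

    def mine(secret, zeros=5):
        # Smallest n such that md5(secret + str(n)) starts with `zeros` hex
        # zeros; zeros=6 is the "next-hardest" version mentioned above.
        prefix = "0" * zeros
        for n in count(1):
            digest = hashlib.md5((secret + str(n)).encode()).hexdigest()
            if digest.startswith(prefix):
                return n

    print(mine("abcdef"))   # -> 609043, per the puzzle statement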

I also have a bunch of new games to play: Mini Metro (a fun logistics game), Concrete Jungle (deck-building + city-building), Infinifactory (factory layout for evil aliens), and Offworld Trading Company (a logistic RTS.) Yes, even my games are really all about programming. :)
Sunday, December 13th, 2015
11:08 pm
Algorithms that only need to be executed once
Charles Babbage knew from personal experience that mathematical tables are hard to create. He and William Herschel discovered error after error in published books. Even if you get all the calculations right, the odds are pretty good that there will be typesetting errors.

Because his whole motivation for the Difference Engine was to perfect this process, the design reflects an appreciation for unexpected errors. The engine prints its own output, rather than relying upon manual transcription. A second copy of the output is printed in a log, which allows checking the final copy for agreement. The mechanisms of the calculation operate in such a way as to jam rather than produce an incorrect sum.

One can think of the difference engine as a mechanism designed for running its algorithm just once. After you build a table of logarithms, you don't need to do it again. Of course, the machine is configurable to produce a variety of tables. And it's restartable in case a page is damaged. But the key difference is that correctness takes precedence over throughput or flexibility.

What's striking to me is how few examples of such run-once algorithms are of practical use today. Mathematical models generally don't consult pre-computed tables, we just run an algorithm that calculates the answer on demand. A lot of the problems that are interesting to us are online problems where new data shows up all the time.

Verification tools for circuit design are usually run *successfully* only once. However, they are often run multiple times on nearly the same input, because the whole point is to find and fix bugs.

An example from number theory is verification of the largest Mersenne prime. In that case, rather than writing one really solid algorithm, the number was verified on multiple implementations of primality testers! I don't know what Babbage would think of that.

So, what sort of algorithms today are run only once on a given problem? Admittedly this is a fuzzy definition because, say, Google never indexes the same Internet twice. And the circuit-verification example is technically different each time.
Tuesday, December 8th, 2015
5:54 pm
Cisco rumors
The current rumor is that Cisco may purchase SpringPath, a recently-launched startup that makes a "hyperconverged" software solution. http://m.crn.com/news/data-center/300079048/cisco-partners-acquiring-hyper-converged-startup-springpath-would-make-ton-of-sense.htm

Cisco has been partnering with SimpliVity, a company with a similar offering.

What these companies offer is a software solution that combines local storage (flash drives attached to the hosts in a virtualized environment) into a distributed storage appliance. Cisco's previous attempt to sell a storage solution, by acquiring Whiptail, crashed and burned.

The only reason I can think of to pick SpringPath rather than one of the leaders in this area is that they're cheap. So it's not an expensive bet for Cisco to make. But Cisco has been burned before by trying to build a storage solution without sufficient commitment or maturity.
Saturday, November 21st, 2015
12:39 am
NMBL, you had one job
I've written before about how Tintri's best public comparable is Nimble, and NMBL's price has been pretty flat despite revenue growth.

Well, Nimble missed their revenue and profitability targets this quarter, and the market reacted viciously, as Tintri's CFO has warned us on multiple occasions that it would. The stock lost about half its value. Nimble also pulled down Pure Storage by about 15%. Incumbent storage companies (EMC, NTAP, etc.) went up. The dominant story today seems to be "gosh, selling storage is tough and there's a lot of competition."

BMO Capital analyst Keith Bachman wrote:
Our original thesis on Nimble is broken. We had previously assumed (incorrectly) that Nimble’s technology advantage would enable the company to take share in both the midmarket and enterprise, while also gradually growing profits and cash flows. However, we think the challenges of Nimble’s competitors are now engulfing Nimble, in terms of both revenue growth and profits. Moreover, while Nimble envisions improving profitability in 2H FY2017, we are less convinced.

Hence, we are moving to our back up thesis. We believe that Nimble could make an interesting take out target for the very same reasons mentioned above. The incumbent storage vendors are seeking growth and could meaningfully improve Nimble’s sales leverage. In addition, Nimble is a lot less expensive than it used to be. Nimble’s gross margins remain 67%, which would be accretive to Nimble’s competitors. Moreover, R&D is only about 22% of revenues, whereas S&M and G&A combined are about 58% of revenues. Further, we believe a large portion of non-R&D spending could be meaningfully reduced or eliminated.

Frankly, I *don't* think Nimble has a technology advantage. They lack an all-flash array (though one is coming), so they can't compete with Pure, EMC's XtremIO, NetApp, Tintri, and Tegile in that market. They have good analytics capability in InfoSight, but no real-time analytics on the box (which particularly hurts federal and financial customers) and no per-VM features like Tintri's. The area where Nimble has been strongest is the commercial sector, where they have affordable products and a very strong channel presence.

There is a tendency of market analysts to treat public companies as the only companies around. If somebody is going to get snapped up in an acquisition, I don't think it's going to be Nimble. (The fact that it's cheap hasn't helped Violin.)
Wednesday, November 18th, 2015
2:25 am
Competing narratives and yes, your wiseassery really will get brought up in court
It's interesting skimming through the competing trial briefs in the Optumsoft vs. Arista case. They're documents 216 and 218 in the electronic case filings here: https://www.scefiling.org/cases/docket/newdocket.jsp?caseId=956 Both lawyers are, unsurprisingly, very good at explaining why the other side's position is completely unreasonable and contradictory.

It bears repeating that you should treat company email very, very seriously. Ken's comment unearthed in discovery:

Moving forward, any project involving moving code out of //src/tacc will probably need Henk’s approval or something like that. All the more reason to not add any code there ever again.
is dissected at length and held up as an example of Arista's bad faith. (The Optumsoft narrative is that Arista is trying to pretend that only changes made in that directory are "improvements" to TACC, the software under dispute. Arista's actual claim is more subtle than that.)

Now, I know Ken from Stanford, and used to work for Henk. (In fact, I think I know everybody who testified in this case except the expert witnesses.) Ken almost certainly didn't mean "let's violate our agreement by not providing Optumsoft any of the required improvements." The email is in keeping with Ken's approach to technical issues --- "how can we fix the process to remove cause for future disputes?" --- but uses a hyperbolic tone that is easily misconstrued.

It is also a bit amusing, as the recipient of some jeremiads from David, to see them being brought up in the trial record.

I don't think it's a good idea to stress over how every email you write might look in court. But when you *know* there's a dispute, you should probably assume that any email discussions about that dispute can and will be dragged into the light.