Mark Gritter's Journal

Saturday, August 8th, 2015
12:31 am
Comic Oligopolies
I watched the documentary "Stripped" last night, and enjoyed it. But it was trying to do too many things to really be a good documentary. The part I enjoyed most was hearing the artists talk about their process for "being funny" every day.

Many of the interviewees have created web-based comics, and so there was a longer-than-necessary section on "how do webcomic authors make money." (Like I said, it tried to do too many things.) There was also a fair amount of incomprehension from the older artists: "that's the part I want somebody else to take care of", "how do these kids make money", "I just like it when a bag of money shows up regularly", etc.

Although the comics syndicates do compete with each other, they form an oligopoly. And the gatekeeper function of that oligopoly is, I think, a large part of what kept artists paid. We know people will make comics--- even surprisingly good comics--- for a pittance, and that there are thousands of hopeful cartoonists out there. (One stat was something like 1 in 3500 applications gets accepted by the syndicates.) A gatekeeper can cut off this supply curve, thereby keeping prices (wages) high. You either are the top 1% and earn a decent living, or eventually go do something else with your life.

The web changes this, although in a complicated way because the revenue stream isn't the same. But it means nobody is saying "no" to a comic artist--- they might be a failure in the market, but they aren't getting rejected by an editor. That suggests that the distribution of money will look different, too.
Friday, August 7th, 2015
11:51 am
More press
The Minneapolis-St. Paul Business Journal selected me as one of its 2015 "Titans of Technology": http://www.bizjournals.com/twincities/morning_roundup/2015/08/2015-titans-of-technology-honorees-announced.html

There will be a more in-depth profile (or at least a photo!) later, and a fancy awards lunch in September. It's an honor to appear on the awards list with local leaders such as Clay Collins (of LeadPages) and Sona Mehring (of CaringBridge.) My award is in the category of "people responsible for the creation of breakthrough ideas, processes or products" along with a professor at UMN, Lucy Dunne, who studies wearable computing.
Wednesday, August 5th, 2015
8:53 am
Funding!
Tintri announced our "Series F" funding round, led by Silver Lake. They do a lot of late-stage and private equity investment--- for example, Silver Lake also announced a $1b investment in Motorola today! It's great to get a vote of confidence from a sophisticated investor with deep pockets.

News coverage:

http://www.computerworld.com/article/2956586/cloud-storage/becuase-storage-needs-to-get-smarter-tintri-raises-125m.html

http://fortune.com/2015/08/05/tintri-snags-125-million-for-storage-push/

http://www.reuters.com/article/2015/08/05/tintri-ipo-funding-idUSL1N10F3KZ20150805

http://venturebeat.com/2015/08/05/hybrid-storage-maker-tintri-raises-125m/

http://www.zdnet.com/article/hybrid-storage-startup-tintri-raises-125-million/

http://www.crn.com/news/storage/300077669/tintri-gets-125m-in-new-funding-preps-for-ipo-new-channel-push.htm

http://www.storagereview.com/tintri_raises_an_additional_125_million_in_funding

$125m is a lot. If we were a Minnesota-based company, Tintri would have tripled tech venture funding in the state (about $50m total in Q1 of this year.) But Silicon Valley is having a major funding boom, with $9.1 billion estimated in Q2 this year. It's also a smaller round than some of our competitors--- for example, Pure has raised about $530m total, including a series F of $225m.

Congratulations to everybody at Tintri!
Monday, July 20th, 2015
12:22 am
Fun with Bad Methodology
Here's a Bill Gross talk at TED about why startups fail. Mr. Gross identifies five factors that might influence a startup's success, scores 250 startups on those factors, and analyzes the results.

This is possibly the worst methodology I have seen in a TED talk. Let's look at just the data presented in his slides. I could not find a fuller source for Mr. Gross's claims. (In fact, in his DLD conference talk he admits that he picked just 20 examples to work with.)

I did a multivariate linear regression--- this does not appear to be the analysis he performed. This produces a coefficient for each of the factors:

Idea	 0.021841942
Team	 0.033134009
Plan	 0.047012997
Funding	-0.03324002
Timing	 0.133669929


While this analysis agrees that "Timing" is the most important, it differs from Mr. Gross on what is second. It actually says that the business plan is a better predictor of success. That's strike one--- the same data admits multiple interpretations about importance. Note also that linear regression says more funding is actively harmful.

The second methodological fault is taking these numbers at face value in the first place. One might ask: how much of the variation in the dependent variable do these factors explain? For linear regression, the adjusted R^2 value is about 0.74. That means roughly a quarter of the variation in success is explained by factors not listed here. What checks did Mr. Gross do to validate that his identified factors were the only ones?

While we're still on problems with statistical analysis, consider that the coefficients also come with error bars.

	Lower 95.0%	Upper 95.0%
Idea	-0.082270273	0.125954157
Team	-0.082753104	0.149021121
Plan	-0.049741296	0.143767289
Funding	-0.126226643	0.059746603
Timing	 0.034672747	0.232667111
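
(For the curious, here's a minimal sketch of this kind of regression in Python using pandas and statsmodels. The factor scores below are invented placeholders for illustration, not Mr. Gross's numbers.)

# Multivariate linear regression of success on the five factors.
# The scores here are made-up placeholders, NOT Mr. Gross's data.
import pandas as pd
import statsmodels.api as sm

data = pd.DataFrame({
    "Idea":    [7, 5, 3, 6, 4, 8, 5, 6],
    "Team":    [6, 7, 4, 5, 4, 7, 6, 5],
    "Plan":    [5, 6, 3, 7, 4, 6, 5, 4],
    "Funding": [8, 4, 6, 5, 3, 7, 6, 5],
    "Timing":  [8, 7, 3, 6, 4, 9, 5, 4],
    "Success": [1, 1, 0, 1, 0, 1, 0, 0],   # 1 = success, 0 = failure
})

X = sm.add_constant(data[["Idea", "Team", "Plan", "Funding", "Timing"]])
fit = sm.OLS(data["Success"], X).fit()
print(fit.params)                 # per-factor coefficients
print(fit.conf_int(alpha=0.05))   # 95% confidence intervals
print(fit.rsquared_adj)           # adjusted R^2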


The error bars are so wide that few useful statements can be made about relative ordering of importance. (Obviously this might change with 230 more data points. But it seems like Mr. Gross was operating on only 20 samples.)

Next let's transition to the data-gathering faults. Mr. Gross's sample of companies is obviously nonrandom. But it is nonrandom in a way that is particularly prone to bias. He lists startup companies that actually got big enough to attract sufficient attention that he'd heard of them! The even mix of success and failure when he's looking to explain the high rate of failure should be a huge red flag.

Suppose we add 15 more failed companies to the list that have an average timing of 7 (slightly better than the sample) but an average of 5 on the other factors.

Oops, now timing has slipped to third!

Idea	 0.079811441
Team	 0.067029361
Plan	-0.007925792
Funding	 0.033260711
Timing	 0.064723176
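
(The augmentation itself is only a few lines if the scores live in a DataFrame; here's a sketch, assuming a frame shaped like the placeholder one in the earlier snippet.)

# Sensitivity check described above: append synthetic failures that score
# 7 on Timing and 5 on everything else, then re-fit the same regression.
import pandas as pd
import statsmodels.api as sm

def refit_with_synthetic_failures(data, n=15):
    extra = pd.DataFrame({
        "Idea": [5] * n, "Team": [5] * n, "Plan": [5] * n,
        "Funding": [5] * n, "Timing": [7] * n, "Success": [0] * n,
    })
    augmented = pd.concat([data, extra], ignore_index=True)
    X = sm.add_constant(augmented[["Idea", "Team", "Plan", "Funding", "Timing"]])
    return sm.OLS(augmented["Success"], X).fit().params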


Not surprising, because I deliberately set things up that way. But Mr. Gross doesn't know what the "next" set of failures looks like either. He has a complete list of IdeaLabs companies, but not of the outside world's, which is its own bias--- maybe it's Mr. Gross who is prone to timing failures, not the rest of the world!

Picking only large, well-known failures for your analysis is nearly the very definition of survivorship bias.

Finally, the inputs themselves are suspect even where they are complete. Mr. Gross already *knows* whether these companies are successes or failures when he's filling out his numbers. Does Pets.com really deserve a "3" in the idea category, or is that just retrospective thinking? Why are the successful IdeaLabs companies given 6's and 7's for Team, but the unsuccessful ones get two fours and a five? Did Mr. Gross really think he was giving those two ventures the "B" team at the time?

Even the Y axis is suspect! Pets.com had a successful IPO. Uber and AirBnB can't claim to have reached that stage--- they might yet implode. (Unlikely, admittedly, but possible. And their revenue numbers are better than Pets.com's ever were.) For an investor, "Did I receive any return on my investment?" is the measure of success.

To summarize,
* The data were generated from the opinions of the principal investigator and not subject to any cross-checks from other parties.
* The data exhibit survivorship bias and other selection bias.
* The analysis includes no confidence intervals or other measurements of the meaningfulness of the results.
* The results presented appear highly dependent upon the exact analysis performed, which was not disclosed.

Am I being unfair? Perhaps. This was not an academic conference talk. But if we are to take his advice seriously, then Mr. Gross should show that his analysis was serious too.
Sunday, July 12th, 2015
1:03 am
Hindsight
I'm reading "The Royalist Revolution" which tells a pretty good story about how the U.S. ended up with a strong executive, and how many of our founding fathers revived Royalist arguments. As in, James-and-Charles-sixteenth-century political thought. It's also a book about how ideas have consequences!

But the sheer amount of pie-in-the-sky idealization on both sides of the monarchist/parliamentarian debate beggars belief, and nobody seems to have called them on it. In some cases this might just be due to lack of relevant experience.

A common theme is that a strong executive (or royal prerogatives) is needed to combat the despotism of sectional and factional interests. To be fair, the Long Parliament was not exactly an inspiring example of how a single-body legislature would behave. But the notion that a king cannot diminish one part of his realm without diminishing himself is pure hogwash, and can only be written by somebody who has not really grasped the notion of "penal colony." Similarly, claims that a chief magistrate / governor would, by virtue of being selected by the whole state, be beholden "to no single faction or segment of society" can be made with a straight face only if you haven't experienced modern political parties. But these claims were made, apparently in good faith, by otherwise intelligent men.

On the other side, the idea that Parliament is able to represent the population "virtually" because it is large and closely resembles the voting public (so how could it harm itself?) has shortcomings that were obvious even at the time.

I am also somewhat surprised to learn, at this late date (considering my Christian-school education), that Paine's "Common Sense" spent a lot of time exploring Biblical condemnation of the office of kingship.
Thursday, July 9th, 2015
12:26 am
I shouldn't get upset by linkbait
Look, I respect skepticism. Thomas the Doubter is a saint not just in spite of his doubt.

But Salon has been having a lot of low-quality articles lately such as "5 good reasons to think that Jesus never existed" and it's beginning to bug me.

I feel like there's a cyclic pattern here, in which each generation that feels oppressed by the organized Christianity of its day comes up with lots of reasons (good and bad) why Christianity is probably false. Christianity has done bad things, therefore its roots are probably made up. (If they were not, how could Christianity be responsible for those bad things?)

Some of this leads to useful philosophy (Spinoza!) and historical criticism. Most just gets ignored and the next generation comes up with a different set of reasons. Questioning the foundations of Christianity convinces few Christians to mend their ways. And it's not like we're lacking in a reasonable intellectual foundation for atheism and agnosticism.

This article is particularly bad. The five reasons are:

"1. No first century secular evidence whatsoever exists to support the actuality of Yeshua ben Yosef": Not an argument against his nonexistence. Same can be said for Socrates, if you ignore all the writing influenced by Socrates. (Yes, I realize this point has been debated to death.)

"2. The earliest New Testament writers seem ignorant of the details of Jesus’ life, which become more crystalized [sic] in later texts." An argument for mythologization but not absence. Arguing that Paul didn't know very much about Jesus' life does not imply he placed him in the distant past.

"3. Even the New Testament stories don’t claim to be first-hand accounts." "4. The gospels, our only accounts of a historical Jesus, contradict each other." Certainly the Gospels were written to support already-existing Christianity, not spark its existence. They are not biographies and were not meant to be. If you accept them as texts meant to convey a point, it seems reasonable the authors (and the Holy Spirit, if one is a Christian) have organized events in a thematic manner. Does that imply there were not actual people who did the events portrayed?

"5. Modern scholars who claim to have uncovered the real historical Jesus depict wildly different persons." Of all the "good reasons", this is by far the worst. Attempts to find the historical Jesus often start by simply discarding everything that makes Jesus distinctive (his claims, words, and miracles) so it is not surprising what is left is a nonentity.
Saturday, July 4th, 2015
1:05 am
Stupid NFS Tricks
One of the Tintri SE's asked whether the advice here about tuning NFS was relevant to us: http://docstore.mik.ua/orelly/networking_2ndEd/nfs/ch18_01.htm (It is not.)

Back in the days when CPU cycles and network bits were considered precious, NFS (the Network File System) was often run over UDP. UDP is a connectionless, best-effort "datagram" service. So occasionally packets get lost along the way, or corrupted and dropped (if you didn't disable checksumming due to the aforesaid CPU costs.) But UDP doesn't provide any way to tell that this happened. Thus, the next layer up (SUN-RPC) put a transaction ID in each outgoing call and verified that the corresponding response had arrived. If you don't see a response within a given time window, you assume that the packet got lost and retry.

This causes problems of two sorts. First, NFS itself must be robust against retransmitted packets, because sometimes it was the response that got lost, or you just didn't wait long enough for the file server to do its thing. This is not so easy for operations like "create a file" or "delete a file" where a second attempt would normally result in an error (respectively "a file with that name already exists" and "there's no such file.") So NFS servers started using what's called a "duplicate request cache", which tracks recently-seen operations; if an incoming request matches one of them, the same response is echoed back.

The second problem is figuring out what the appropriate timeout is. You want to keep the average cost of operations low, but not spend a lot of resources retransmitting packets. The latter could be expensive even if you don't have a crappy network--- say the server is slow because it's overloaded. You don't want to bombard it with extra traffic.

Say you're operating at 0.1% packet loss. A read (to disk) normally takes about 10ms when the system is not under load. If you set your timeout to 100ms, then the average read takes about 0.999 * 10ms + 0.000999 * 110ms + 0.000000999 * 210ms and so on, which comes to about 10.1ms. But if the timeout is a second, that becomes 11ms, and if the timeout is 10 seconds then we're talking 20ms.
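
(Here's the same arithmetic as a quick sketch, assuming independent losses and that every lost attempt costs one full timeout; the geometric series collapses to a simple closed form.)

# Expected read latency when each attempt is lost with probability "loss"
# and every lost attempt costs one full timeout before the retry.
def expected_latency_ms(service_ms=10.0, timeout_ms=100.0, loss=0.001):
    # sum over k lost attempts: loss^k * (1 - loss) * (k*timeout + service),
    # which collapses to: service + timeout * loss / (1 - loss)
    return service_ms + timeout_ms * loss / (1.0 - loss)

for timeout in (100.0, 1000.0, 10000.0):
    print(timeout, expected_latency_ms(timeout_ms=timeout))
# roughly 10.1ms, 11ms, and 20ms -- matching the figures above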

So, at least in theory, this is worth tuning because a 2x difference between a bad setting and a good setting is worthwhile. Except that the whole setup is completely nuts.

In order to make a reasonable decision, the system administrator needs statistics on how long various NFS calls tend to take, and the client captures this information. But once you've done that, why does the system administrator need to get involved? Why shouldn't the NFS client automatically adapt to the observed latency, and dynamically calculate a timeout value in keeping with a higher-level policy? (For a concrete example, the timeout could be set to the 99th percentile of observed response times, for an expected 1% rate of unnecessary retransmits.) Why on earth is it better to provide a tuning guide rather than develop a protocol implementation which doesn't need tuning? This fix wouldn't require any agreement from the server; you could just do it right on your own!
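
(A back-of-the-envelope sketch of what such an adaptive policy could look like on the client; the class, the names, and the window size here are made up for illustration, not taken from any real NFS implementation.)

# Keep a sliding window of observed response times and set the retransmit
# timeout at a high percentile of what has actually been seen.
from collections import deque

class AdaptiveTimeout:
    def __init__(self, percentile=0.99, window=1000, initial_ms=1000.0):
        self.samples = deque(maxlen=window)
        self.percentile = percentile
        self.initial_ms = initial_ms

    def record(self, response_time_ms):
        self.samples.append(response_time_ms)

    def timeout_ms(self):
        if not self.samples:
            return self.initial_ms      # nothing observed yet: be conservative
        ordered = sorted(self.samples)
        idx = min(len(ordered) - 1, int(self.percentile * len(ordered)))
        return ordered[idx]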

Fortunately, in the modern world NFS mainly runs over TCP, which has mechanisms that can usually tell more quickly that a request or response has gone missing. (Not always, and some of its specified timeouts suffer the same problem of being hard-coded rather than adaptive.) This doesn't remove the first problem (you still need the duplicate request cache) but it makes an entire class of tuning pretty much obsolete.

Nothing upsets me more in systems research and development than a parameter that the user must set in order for the system to work correctly. If the correct value can be obtained observationally or experimentally, the system should do so. If the parameter must be set based on intuition, "workload type", expensive consultants, or the researcher's best guess, then the real problem hasn't been solved yet.
Monday, June 29th, 2015
12:03 am
Keeping this short, but longer than Twitter
I really should not engage with my Twitter followers and co-workers who are engaging in gloom-and-doom about the status of Christianity in the United States. But really, guys, grow a spine. Christians experience far worse persecution in the world today than the "threat" of having to treat a married couple like a married couple. Christianity grew and flourished in an environment where the dominant religion was the Hellenistic pantheon. I think it can handle getting a little less of its way from the state for a while. (I can't wait to see what sort of lame-ass civil disobedience the local rabble-rousers think up.)

It bothers me that this is treated as a Christian issue rather than a some-Christian issue. Guess what, Christians were on both sides of previous marriage issues too.

That gets to the essence of religious liberty. If religious liberty means anything, it's that the state won't always agree with your sect. Even on those matters you consider a threat to that state! (I am reading "Amsterdam: A History of the World's Most Liberal City", by the way, so this is something on my mind what with Spinoza and all that.)

It also bothers me whenever Christians let obsession with sin get in the way of proclaiming the Good News that sin is forgiven. (Not recorded, not obsessed over, not punished by natural disaster, not hair-split by doctrine into tiers, not encoded into civil law, but washed away as if it had never been. Permitted, even, for the Greek word for "forgive" has that connotation as well!)

I have also noticed a small number of critiques of the gay marriage movement coming from the queer community. I can accept their argument that the focus on gay marriage may have attracted attention and support at the expense of other queer causes. I am a little less sanguine about excluding the cisgendered community from those seeking alternatives to marriage (a tradition that extends all the way back to naked Anabaptists parading through the streets of Amsterdam, if not further.)
Tuesday, June 9th, 2015
10:38 pm
MinneAnalytics trip report
I attended the MinneAnalytics "Big Data Tech" conference on Monday. I think my session choices were poor. What I wanted was success stories about using various technologies. What I got was mainly pitches in disguise (despite pitches being forbidden by the conference organizer.)

Mary Poppendieck, "Cognitive Bias in a Cloud of Data". Mainly about System 1 and System 2 thinking. Talked about overcoming bias by keeping multiple options open and finding multiple opinions (dissenters.) Not very big-dataish, though she talked about how even big data requires some System 1 thinking (expertise) to design, analyze, and store data.

Dan McCreary (MarkLogic), "NoSQL and Cost Models". Some interesting points about how to get at the Total Cost of Ownership of your database. If a NoSQL database can scale to serve all your applications, that avoids the cost of ETL and duplicate data. Also talked about the need to agree on a standard format for data to avoid quadratic scaling costs as the number of applications increases. Made some remarks about the lower cost of parallel transforms.

Ravi Shanbhag (United Healthcare), "Apache Solr: Search is the new SQL". Basic intro. I found the need to define a schema sort of off-putting, and the idea of "dynamic fields" where you pattern-match on field names even worse. I think we can do better, as the next session showed.

Keys Botzum (MapR), "SQL for NoSQL", about Apache Drill. Pretty good slideware demo showing how to use Drill to run SQL queries across JSON files. Would have liked to see a multi-source demo as well but Drill can handle this. Considering whether to try it out with Tintri autosupports. Definitely the best talk of the day, speaker was enthusiastic and the technology is cool.

Frank Catrine and Mike Mulligan (Pentaho), "Internet of Things: Managing Unstructured Data." This talk made me less interested in the company than dropping by their booth did. Tedious explanation of the Internet of Things and lots of generalities about the solution. Gave customer use cases that architecturally all looked the same but repeated the architecture slide anyway. (Surprise--- they all use Pentaho!)




(I never did write up an entry on this year's MinneBar.)
Friday, May 29th, 2015
12:23 am
Internet arguments about math I have participated in, because I'm on vacation
1. Is there such a thing as an uncomputable integer? Or, even if a function F(x) is uncomputable, does, say, F(10^100) still count as computable because, hey, some Turing machine out there somewhere could output it! Or does a number count as computable only if we (humans) can distinguish that machine from some other machine that also outputs a big number?

Consider that if F() is uncomputable, it seems extremely dubious that humans have some mechanism that lets them create Turing machines that produce arbitrarily-large values of F.

I'm probably wrong on the strict technical merits but those technical grounds are unduly Platonist for my taste.

2. Is trial division the Sieve of Eratosthenes? Maybe it's the same algorithm, just a particularly bad implementation!

I'm with R. A. Fisher on this one. If the idea was just some way to laboriously generate primes, it wouldn't be famous, even among Eratosthenes' contemporaries. The whole thing that makes the sieve notable is that you can apply arithmetic sequences (which translate into straight lines on a grid!) to cut down your work.

The counterargument seems to be that Eratosthenes might well have performed "skip, skip, skip, skip, mark" instead of jumping ahead by five.
Saturday, May 9th, 2015
7:02 pm
The term "Hyper-Converged" is the biggest scam ever
"Hyper-Converged" infrastructure solutions, like Nutanix, combine compute, storage, and a hypervisor.

"Converged" infrastructure, like a NetApp FlexPod, combines compute, storage, networking, and optionally some sort of application stack (like a hypervisor or a database.)

Which is more converged? The one with networking or the one without?

Maybe we should call the former "hypo-converged" if you need to buy your own switch.
Sunday, May 3rd, 2015
9:24 pm
Big Omaha
On SwC, the 5-card Omaha variants seem more popular than the 4-card standard version.

Suppose you're up against a flop of 688 and you hold something like 89TQK. How much of a favorite is the made baby boat 68xxx?

The surprising answer: barely at all. You have 12 outs twice, with a 42 card stub. You'll make the bigger boat 1 - 30/42 * 29/41 = about 0.495 of the time.
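
(The count, spelled out: three each of the 9, T, Q, and K are live, and you have two cards to come.)

# 12 outs against a 42-card stub, two chances to pair the board bigger.
outs, stub = 12, 42
p_miss_both = (stub - outs) / stub * (stub - 1 - outs) / (stub - 1)
print(1 - p_miss_both)   # about 0.495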

Of course, Villain might be able to make a low (if you're playing O/8) or hold some of your overcards himself, so your equity is not really 49%. But against something like 668JJ you might even be the favorite. That extra card really makes a difference.
Saturday, May 2nd, 2015
1:21 am
Never Trust Any Published Algorithm
The slides from my Minnebar talk, "Never Trust Any Published Algorithm", can be found here.

I got a couple good stories in response.

A local person who works in computer vision talked about finding a new algorithm for shadow detection. It worked great on the graduate student's ten examples. It didn't work at all on his company's hundreds of examples in all different lighting and weather conditions.

A coworker shared:

A few years ago my brother was working at an aircraft instrument company. He was
looking for an algorithm that described max loading for a plane still able to take off
based on many variables.

He found what was supposed to be the definitive solution written by a grad student at
USC. He implemented the algorithm and started testing it against a large DB of data
from actual airframe tests. He quickly found that the algorithm worked just fine for
sea-level examples, but not as airport altitude rose. He looked through the algorithm
and finally found where the math was wrong. He fixed his code to match his new
algorithm and found that it now matched the data he had for testing.

He sent the updated algorithm to the grad student so he could update it on public
servers. He never heard back nor did he ever see the public code updated.


The example I put on a bonus slide, which wasn't previously mentioned in my series of blog posts, is a good one too. In "Clustering of Time Series Subsequences is Meaningless" the authors showed that an entire big data technique that had been used dozens of times in published papers produced essentially random results. (The technique was to perform cluster analysis over sliding window slices of time series.)
1:05 am
Tintri brag
When we ask our customers "Would you recommend Tintri to a friend or colleague?" an astonishingly large number say yes.

One metric associated with this question is the Net Promoter score. You ask the above question on a scale of 0 to 10 (least to most likely). 9's and 10's count as positive. 7's and 8's are neutral. Anything 6 and below counts as negative. Take the percentage of positive responses, and subtract the percentage of negative responses.
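
(In code the bookkeeping is just this; the example responses are made up.)

# Net Promoter score: percent promoters (9-10) minus percent detractors (0-6).
def net_promoter_score(responses):
    promoters = sum(1 for r in responses if r >= 9)
    detractors = sum(1 for r in responses if r <= 6)
    return 100.0 * (promoters - detractors) / len(responses)

print(net_promoter_score([10, 9, 9, 10, 8, 9, 10, 9, 9, 10]))   # 90.0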

Tintri scores a 94. Nearly every customer (who responds to surveys, anyway...) gives us a 9 or 10. Softmetrix Systems benchmarks Net Promoter scores by industry, and the leaders like USAA, Amazon, Apple, and Trader Joe's tend to have scores in the 70s and 80s.

It makes me incredibly happy that customers love our product so much. I almost feel like it's all downhill from here--- it'll be a big challenge as we grow to keep that level of customer satisfaction so high. Maybe it's even a sign that we're not selling as much as we should be! But it's great confirmation of the quality of the product and of the Tintri team, all of whom I'm very proud.
Friday, April 24th, 2015
6:08 pm
Infinitesimals in the family of asymptotics
I answered this question about the Master Theorem (which is used to calculate the asymptotic behavior of divide-and-conquer algorithms): Why is the recurrence T(n) = sqrt(2)*T(n/2) + log(n) a case 1 of the master method?
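
(For reference, the case 1 bookkeeping: a = sqrt(2) and b = 2, so n^(log_b a) = n^(1/2). Since log(n) = O(n^(1/2 - epsilon)) for some epsilon > 0 (epsilon = 1/4 works), case 1 applies and T(n) = Θ(n^(1/2)) = Θ(sqrt(n)).)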

The source of confusion (see the comments to my answer) seemed to be that the original poster did not really understand the asymptotic behavior of log(n) and n/log(n). In fact, he went so far as to post a graph "proving" that n^0.99 is larger than n/log(n). However, this is false for large enough numbers (large enough being somewhere around 10^100 in this case.) The logarithm grows more slowly than any positive power of n. As a consequence, n/log(n) grows asymptotically faster than any positive power of n less than 1.

What I realized is that this might be some student's first (and maybe only!) experience with infinitesimals! Normally we think of infinitesimals as numbers "smaller than any positive real number". For example, before epsilon-delta proofs and modern analysis, the differential and integral calculus informally treated dx as an infinitesimal value. But while students are told the "idea" is that dx is a very small value, they are repeatedly cautioned not to treat it as an actual number.

The surreal numbers (and thus the class of combinatorial Games too) have infinitesimals, but they are far outside the normal curriculum.

So what model is a student supposed to apply when confronted with O(log n) or Θ(log n)? It behaves exactly like an infinitesimal in the family of asymptotic limits: big-O = bounded above, Ω = bounded below, and Θ = bounded both above and below, thus:

log n = Ω( 1 )

log n = O( n^c ) for any c > 0, e.g., log n = O(n), log n = O(sqrt(n)), log n = O(n^0.0001)

n^k log n = O( n^c ) for any c > k.

n / log n = Ω( n^c ) for any c < 1

Nor does taking a power of the logarithm make any difference.

log^100 n = (log n)^100 = O( n^c ) for any c > 0

Crazy, right? But we can get Wolfram Alpha to calculate the crossover point, say, when does (log n)^100 = n^0.01? At about 4.3*10^50669.
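
(If you'd rather not trust Wolfram Alpha, a root-finder reproduces the figure; this sketch assumes "log" means the natural log, which is what that 4.3*10^50669 value appears to correspond to.)

# Where does (log n)^100 = n^0.01?  With x = ln(n) this is 100*ln(x) = 0.01*x.
from math import log
from scipy.optimize import brentq

x = brentq(lambda x: 100 * log(x) - 0.01 * x, 1e4, 1e6)   # x = ln(n)
print(x / log(10))   # log10(n), about 50669.6, i.e. n is around 4*10^50669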

That's what makes the logarithm an infinitesimal. No matter what power we raise it to (think "multiply"), it is still smaller in its asymptotic behavior than any positive power of n (think "any positive real number.") And there's a whole family of infinitesimals hiding here. log log n is infinitesimally smaller than log n. Don't even ask about the inverse Ackermann function or the iterated-log function.

So it's not surprising that students might have difficulty if this is their first journey outside the real numbers. Everybody can handle the fact that O(n^2) and O(n^0.99) and O(n^0.5) are different, but the fact that none of these examples will be O(log n) is kind of perplexing, because O(log n) is obviously bigger than O(n^0). (Jumping to exponentials like O(2^n) seems like an easier stretch.)

What was your first encounter with an infinitesimal?
Monday, April 6th, 2015
8:29 pm
More Hearthstone Decks
This Druid deck is not particularly successful but it's so much fun to play, and once in a while it works beautifully:

( cut for length )
Wednesday, March 18th, 2015
7:27 pm
Sums of Cubes
Somebody on Quora asked for an example of w^3 + x^3 + y^3 = z^3 in positive integers, all relatively prime. Turns out they were looking for a counterexample to a theorem they thought they had proved, which is sort of a passive-aggressive way to approach it.

Anyway, brute force saves the day. Noam Elkies came up with this monster set of equations to characterize solutions to the "Fermat Cubic Surface": http://www.math.harvard.edu/~elkies/4cubes.html, reducing the search space dramatically since we only need to look at triples (s, r, t) of both positive and negative integers, then filter the resulting cubes to those with exactly one negative value that also satisfy the relative-primeness condition. Here's Elkies' characterization in Python:

def elkies( s, r, t ):
    # Elkies' parametrization: maps integers (s, r, t) to (w, x, y, z)
    # satisfying w^3 + x^3 + y^3 + z^3 = 0.
    w = -(s+r)*t*t + (s*s+2*r*r)*t - s*s*s + r*s*s - 2*r*r*s - r*r*r
    x = t*t*t - (s+r)*t*t + (s*s+2*r*r)*t + r*s*s - 2*r*r*s + r*r*r
    y = -t*t*t + (s+r)*t*t - (s*s+2*r*r)*t + 2*r*s*s - r*r*s + 2*r*r*r
    z = (s-2*r)*t*t + (r*r-s*s)*t + s*s*s - r*s*s + 2*r*r*s - 2*r*r*r
    return (w, x, y, z)
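
And here is a sketch of the brute-force filter described above, using that function; the search box is small and purely illustrative, and the helper name is mine.

# Run elkies() over a small box of (s, r, t), keep results where exactly one
# of the four values is negative (so the three positive cubes sum to the
# negated fourth), and enforce the relative-primeness condition.
from math import gcd

def search(limit=30):
    found = set()
    for s in range(-limit, limit + 1):
        for r in range(-limit, limit + 1):
            for t in range(-limit, limit + 1):
                w, x, y, z = elkies(s, r, t)
                values = [w, x, y, z]
                negatives = [v for v in values if v < 0]
                positives = [v for v in values if v > 0]
                if len(negatives) != 1 or len(positives) != 3:
                    continue
                a, b, c = sorted(positives)
                d = -negatives[0]
                if a**3 + b**3 + c**3 != d**3:
                    continue   # shouldn't happen, but double-check
                if gcd(gcd(a, b), gcd(c, d)) != 1:
                    continue   # not relatively prime
                found.add((a, b, c, d))
    return sorted(found, key=lambda quad: quad[3])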


The smallest one I found was:

365^3 = 107^3 + 209^3 + 337^3


and the first with two different solutions:

67477^3 = 36919^3 + 46049^3 + 54205^3
          26627^3 + 46747^3 + 57103^3


and just for posterity, here are the first 1000:
( Read more... )
Tuesday, March 3rd, 2015
12:30 am
The hits keep on coming
Remember when I told you not to trust published algorithms, even famous ones bearing proofs? And how even functional programming experts screw up the Sieve of Eratosthenes?

Well, you should also not have been trusting TimSort, which got implemented (with the same bug) as Python's array sort and Java's collection sort: Proving Android, Java, and Python's Sorting Algorithm is Broken and How to Fix It.

I am dismayed that Java's response to the bug was to tune one of the algorithm's parameters to a higher value, so that it takes infeasibly large arrays to trigger the bug but wastes some of everybody's memory in the meantime. (This was already the case in Python.)
Saturday, February 28th, 2015
1:16 pm
More on not trusting published algorithms
If you haven't read this paper on how not to implement the Sieve of Eratosthenes in a functional language, it's well worth your time: "The Genuine Sieve of Eratosthenes" by Melissa O'Neill of Harvey Mudd. It's about how a common simple functional programming example is completely wrong. (Her talk at Stanford on a new family of random number generators is worth the time too. It's a cool hack to incorporate permutations into linear-congruential RNGs.)

I ran into a question on Quora about summing the first K prime numbers, and the person was asking whether there was a better way than trial division. When I pointed out that the Sieve was much better (even at relatively small scales!) his response was "then I need to fill all integers upto INTEGER.MAX_VALUE"--- i.e., how do we bridge the gap between "All the prime numbers up to N" and "The first K prime numbers".

There are three ways to tackle this and they're all interesting in their own way.

1) The Sieve is so much better than trial division that it almost doesn't matter. The prime number theorem tells us that there are about N/log(N) prime numbers less than N, and that the Kth prime is about K log K. So, trial division to find the Kth prime requires work that's around O( (K log K)*K ) = O( K^2 log K ). We might be generous and note that most composite numbers don't require testing all previous primes in order to find a divisor, and bound below by Omega(K^2).

The sieve takes O(N log log N). If you're looking for, say, the 1000th prime, you can easily sieve up to 1000^2 = 1,000,000 and still come out ahead. (Particularly since, in practical terms, the Sieve doesn't require any division.)

2) The prime number theorem lets us come up with a pretty-good bound on how large an N we need to find the K'th prime. Run the sieve on, say, N = 2 K log K and that's a solid win.
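
(A sketch of that approach: pick the bound from the prime number theorem, sieve once, and take the first K. The function name and the padding constants are mine, chosen as an arbitrary safety margin rather than anything principled.)

from math import log

def first_k_primes(k):
    # Sieve up to roughly 2*K*log(K), which overshoots the Kth prime.
    n = max(30, int(2 * k * log(k + 1)) + 10)
    is_prime = [True] * (n + 1)
    is_prime[0] = is_prime[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if is_prime[p]:
            for multiple in range(p * p, n + 1, p):
                is_prime[multiple] = False
    primes = [i for i in range(2, n + 1) if is_prime[i]]
    return primes[:k]

print(first_k_primes(10))   # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]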

3) Even if we could not predict where the Kth prime was likely to occur (or, say some subset of the primes of special interest), the Sieve can still be run to an indeterminate stopping point at only a constant-factor slowdown.

The paper linked above gives a sophisticated way of doing this, but brute force is fine too.

Say we sieve up to N=1000 and don't yet find the primes we're looking for. Then double N and start over from scratch. Do this until we find K primes; iteration J of the sieve has size N = (2^J)*1000. If we had an oracle to tell us J, we could have done one sieve pass. Instead we did sieves of size 1000, 2000, 4000, 8000, ... , 2^J*1000, but the sum of the sizes is just 2^(J+1)*1000 since it's a geometric series. So even if our answer was at 2^(J-1)*1000 + 1, the total amount of numbers sieved was at most 4x the minimum we needed to do.
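
(And a sketch of the doubling version, restarting from scratch each time exactly as described; a smarter implementation would keep the earlier work.)

def first_k_primes_doubling(k, start=1000):
    n = start
    while True:
        # Plain sieve up to n.
        is_prime = [True] * (n + 1)
        is_prime[0] = is_prime[1] = False
        for p in range(2, int(n ** 0.5) + 1):
            if is_prime[p]:
                for multiple in range(p * p, n + 1, p):
                    is_prime[multiple] = False
        primes = [i for i in range(2, n + 1) if is_prime[i]]
        if len(primes) >= k:
            return primes[:k]
        n *= 2   # not enough primes yet: double the bound and re-sieve

print(first_k_primes_doubling(1000)[-1])   # the 1000th prime, 7919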

And that's not the best we can do: if we re-use the first 1000 numbers rather than throwing them away and starting from scratch, it gets somewhat better--- but that's just quibbling about constant factors. Even treating the Sieve as a black box gives us a better result than trial division.
Friday, February 27th, 2015
10:43 am
How not to manage your first contact.
[Subject: Question?]

Mark,

Can I show you XXY ZZY?

You can access our marketplace of over 45,000 technical contractors tied directly to an amazing workflow management software system. I would like to see if we can demonstrate our service.

The following does a good job showing how we work:

https://www.XXXYZZY.com/

XXY ZZY was ranked #43 on Inc. 500 for growth from 2010-2013, #3 in Business Services. There is no charge to utilize our system.

Are you the right contact or could you point me in the right direction?

Thanks for the help,
REDACTED


Do I want "technical contractors"? I don't know.

What problems will this help me solve? What would bring me to your web site? How does your "workflow management software" benefit me--- doesn't sound like anything I use.

Why do I care how fast you're growing? (That's a question for later if I decide you have something I want.)

At Tintri's sales kickoff we had Chip Heath as one of the speakers, talking about making ideas "sticky". He talked about making your idea simple, unexpected, concrete, credible, and emotional. He talked about telling stories rather than stats. About casting your company as the supporting actor, not the hero.

At least the recruiter spam references positions I'm actually looking to fill (even if I'm not the hiring manager...) This example just seemed to fail on every possible count after getting my name correct.