mancalablog


math

mike (10 Sep 2004 16:14): Do there exist equilateral triangles where the vertices lie on lattice points?
m (10 Sep 2004 16:14): google doesn't seem to think so.
Paul (19 Sep 2004 12:27): As for equilateral triangles on lattice points, they can be shown not to exist:

Indeed, no triplet of lattice points A,O,B has angle AOB equal to 60° (or any angle r with tan(r) irrational). WLOG, O is the origin. If B=(1,0) on the horizontal axis, no lattice point A works, since the ray OA would have slope tan(r), which is irrational, while every lattice point has rational (or undefined) slope. But for any lattice points C,D there exists some A with B=(1,0) so that angle AOB equals angle COD.

Specifically, let A be the image of D under a reflection-rotation-dilation of the lattice points which preserves angles, sends O to O and B to C. Then angle AOB equals angle COB minus angle DOB, which equals angle COD.
Paul (19 Sep 2004 12:57): ...in coordinates, if C=(x,y) and D=(w,z) then let A=(xw+yz, yw-xz).
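In complex-number terms, A is just C·conj(D): with c = x+iy and d = w+iz, c·conj(d) = (xw+yz) + i(yw-xz), and arg(c·conj(d)) = arg(c) - arg(d). Here's a quick numeric sanity check of the identity (code and helper name are mine, nothing canonical):

import cmath
import math
import random

def arg(p):
    """Angle of the ray from the origin through point p."""
    return cmath.phase(complex(*p))

random.seed(1)
for _ in range(1000):
    x, y = random.randint(-9, 9), random.randint(1, 9)   # C, kept off the x-axis
    w, z = random.randint(-9, 9), random.randint(1, 9)   # D, likewise
    a = (x * w + y * z, y * w - x * z)                   # the A above = C*conj(D)
    # with B=(1,0), angle AOB is just arg(A); angle COD is arg(C)-arg(D)
    diff = arg(a) - (arg((x, y)) - arg((w, z)))
    assert min(abs(diff), abs(abs(diff) - 2 * math.pi)) < 1e-12
print("angle AOB matched angle COD in 1000 trials")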
m (20 Sep 2004 23:05): that's clever.
my solution was similar, but took tan(YOA) and tan(BOX) and showed that they couldn't both be rational, with Y and X lying on their respective axes, and A and B on the lattice wherever makes sense. not quite as cool.
m (20 Sep 2004 23:12): a friend posed this problem to me (as a result of which I started thinking about triangles and lattices):
let any point in a plane be assigned one of two colors, show minimally that an equilateral triangle of a single color exists.

so, it's pretty straightforward - by 'minimally' the point is just to try to use as few points as possible to force a contradiction under the assumption that no such triangle exists.

anyhow, I haven't yet thought of how to show that a square of a single color exists.
Paul (21 Sep 2004 11:23): I don't understand the question. Is the whole triangle (vertices, edges and innards) supposed to be one color, or just the vertices and edges, or just the vertices? I get confused thinking about coloration of uncountable points :(

Wait, I suppose it can't include edges, 'cause vertical stripes, red on rational X's and indigo on irrationals would prevent any non-vertical monochrome lines. So I guess we just want the 3 vertices to be the same color?
m (21 Sep 2004 12:07): right, just the vertices.
m (21 Sep 2004 12:11): I was thinking that if you could show that for any n, a regular n-gon of one color could be made, you'd have by the limit as n->inf that a circle would exist, but the vertical striping method shows that it can't - that's confusing. was the limiting argument wrong?
Paul (21 Sep 2004 18:03): Here's one way: Take any triplet of equilateral-vertices on the plane. WLOG, two of them are red. Call those D and E on the below diagram.

  A
 B C
D E F
 G

Suppose no equilateral triplet is monochrome. Then B and G must be blue (each forms an equilateral triangle with D and E). So F must be red (it's equilateral with B and G). So C must be blue (E and F are red). So A must be red (B and C are blue). But then A, D and F are all red--QED.
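And since it's a finite case analysis, here's a brute-force double check over all 2^7 colorings (the coordinates are just my rendering of the diagram on the unit triangular lattice):

from itertools import combinations, product
from math import dist, sqrt

pts = {
    "A": (1.0, sqrt(3)),
    "B": (0.5, sqrt(3) / 2), "C": (1.5, sqrt(3) / 2),
    "D": (0.0, 0.0), "E": (1.0, 0.0), "F": (2.0, 0.0),
    "G": (0.5, -sqrt(3) / 2),
}

# all equilateral triples among the seven points
triples = [t for t in combinations(pts, 3)
           if len({round(dist(pts[p], pts[q]), 9)
                   for p, q in combinations(t, 2)}) == 1]

# every coloring that gives D and E the same color (the WLOG step)
# must contain a monochrome equilateral triple
for colors in product([0, 1], repeat=len(pts)):
    coloring = dict(zip(pts, colors))
    if coloring["D"] != coloring["E"]:
        continue
    assert any(len({coloring[p] for p in t}) == 1 for t in triples)
print(len(triples), "equilateral triples; no D=E coloring escapes")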
m (22 Sep 2004 18:53): yeah, that's identical to my solution

let's take a similar problem on the one dimensional lattice: in the worst case (i.e. min over all point->color mappings), what is the largest number of equally spaced like-colored points you can force? Arbitrarily many?

For starters, show that three in a row must exist. It's pretty similar to the triangle problem. Can you force four? I haven't been able to. I don't have any intuition here as to whether or not arbitrarily long chains can be made - what do you think?
m (22 Sep 2004 23:28): Can you generate an infinite string (in our two letter alphabet) where no substring is repeated consecutively more than k times, for some finite k?
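(here's a dumb checker to experiment with - code's mine, not an answer:)

def max_consecutive_repeats(s):
    """Largest r such that some block appears r times in a row in s."""
    best = 1
    for block_len in range(1, len(s) // 2 + 1):
        for start in range(len(s) - block_len):
            block = s[start:start + block_len]
            reps = 1
            while s[start + reps * block_len:start + (reps + 1) * block_len] == block:
                reps += 1
            best = max(best, reps)
    return best

print(max_consecutive_repeats("0110100110010110"))  # prints 2 for this string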
m (5 Jan 2007 7:49): since, well, mathematicians no longer think that space filling curves are dangerous, I'm porting the discussion over here.

So, here's a sweet construction for a space filling curve (stolen from what I remember of C&P class). We take the cantor set and use it to make one completely differently from the limiting-sequence-of-functions method that Peano and Hilbert (turns out the xkcd one is Hilbert, not Peano -- my bad) used:

What we do is the following: One way to show that there are at least continuum many guys in the cantor set is to observe that if we were to write out everyone in the interval [0,1] in ternary, those points left in the set after the ith iteration are exactly those who don't use the digit 1 in the first i digits of their ternary representation (so just 0s and 2s). The Cantor Set, the intersection of all of these iterations, is exactly those points who never use a 1 at all in their ternary representation. Divide these 0,2 sequences by two, and now we've got all 0,1 sequences of digits. If we pretend we're in binary, this is everything in the [0,1] interval, we just stole it all back! (actually, some things twice, like 1.0000 and 0.111111..., but whatever)

We can make a function g that does this div-by-two-then-binary transformation, but only keeps every other digit (what?!), say the odd ones. (Example: if the number we're looking at is x=0.20202020..., g(x) will be 0.1111... = 1.) Similarly, we can define h(x) to do the same thing with the even digits and get h(x) = 0.

Let f(x) be our space filling curve, so f will map R to RxR. (or, since we're lazy, we just worry about mapping [0,1] to [0,1]x[0,1] and pretend this is good enough, which it is). We can define f on all of the points in the Cantor Set to be f(x) = (g(x),h(x)). Now we at least fill space, right? (I'm going to call I := [0,1] from now on for ease of notation) For every point in IxI, we can zip the digits together, multiply by 2 with our odd binary->ternary conversion and get back an element of the cantor set. Fantastic. But this is both not continuous, and not defined on all of those points not in the cantor set (convenient, this).

What do the things not in the cantor set look like? Well, they're (open) intervals that we excised during one of the iterations. The boundary points of these intervals are elements of the cantor set that are getting (g,h)'ed, so we can just define f on the interval by connecting the images of its two endpoints with a straight line.

f is now well defined. It fills space. Is it continuous? Sure! It's clearly continuous at the not-cantor-set points, and when we check for continuity at the points in the cantor set, we'll just make sure that other cantor set points are nearby and not worry about the non-cantor-set points, since those guys are just lines between nearby cantor set points (convexity of balls is what allows us to not worry about this). So, def of continuity: for every epsilon, there exists delta such that everything in the delta-ball around x gets mapped into the epsilon-ball around f(x). Run far enough down the digits to get your delta (I don't know, something like epsilon cubed) and you'll be just fine.
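If anyone wants to see the digit-juggling in action, here's a finite-precision sketch (function names are mine; real x's get truncated to their first 2n ternary digits):

def to_cantor(bits):
    """The Cantor-set point whose ternary digits are twice these bits."""
    return sum(2 * b * 3 ** -(i + 1) for i, b in enumerate(bits))

def f(bits):
    """(g(x), h(x)) for the Cantor point encoded by bits: split the
    halved ternary digits into odd and even positions, read as binary."""
    as_binary = lambda ds: sum(d * 2 ** -(i + 1) for i, d in enumerate(ds))
    return as_binary(bits[0::2]), as_binary(bits[1::2])

# the example above: x = 0.202020... (ternary)
bits = [1, 0] * 20          # halved digits of 0.2020...: 1,0,1,0,...
print(to_cantor(bits))      # ~0.75, an element of the Cantor set
print(f(bits))              # ~(1.0, 0.0), as promised

# surjectivity, in reverse: any target (u, v) in I x I comes from the
# Cantor point whose halved digits interleave u's and v's binary digits
u_bits, v_bits = [1, 0, 1, 0], [0, 1, 1, 1]   # u=0.1010, v=0.0111 (binary)
interleaved = [d for pair in zip(u_bits, v_bits) for d in pair]
print(f(interleaved))       # recovers (u, v) = (0.625, 0.4375)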
Paul (6 Jan 2007 12:16): Now that I see it, I believe this (and not Hilbert) was exactly what we did (or rather, our TA did) in college. He definitely used base-3 and base-2 forms of numbers. I later took the mapping he used, and had my computer graph it, so I could see it filling up space, which it did. Fun fun. Maybe someday I'll find my old Basic code for that...
m (6 Jan 2007 14:52): show: given any n+1 of the numbers [1,2,...,2n], we have a pair that are coprime.
Paul (12 Jan 2007 15:54): I've been thinking about the coprime problem, but I can't figure it out. It seems obvious at one level, but then it degenerates if I want to, like, prove it.
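At least brute force confirms it for small n, so it's not a wild goose chase:

from itertools import combinations
from math import gcd

for n in range(1, 9):
    for subset in combinations(range(1, 2 * n + 1), n + 1):
        assert any(gcd(a, b) == 1 for a, b in combinations(subset, 2))
print("n = 1..8: every choice of n+1 numbers from [1..2n] has a coprime pair")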
m (26 Jan 2007 0:49): more number theory:
show that 1+2+...+(p-1) = 0 (mod p) in a not-highschool-formula way
then do 1^2+2^2+...+(p-1)^2 = 0 (mod p)
Paul (26 Jan 2007 18:45): There must be some part I don't understand, since if p=4, then 1+2+3 = 6 != 0 (mod 4). And 1+4+9 = 14 != 0 (mod 4). Is this only supposed to work for p odd? (Also, are you dissing "high school formula" proofs?)
m (27 Jan 2007 11:46): gnh. p is a prime. in the first case, a prime larger than 2. In the second case > 3 (prime bigger than 3 is a funny requirement).
Paul (29 Jan 2007 18:35): I haven't figured the answer yet, but I just noticed that n^2 = (p-n)^2 (mod p) for all n in [0,p], which is surprising. I have no idea why.
Paul (31 Jan 2007 19:00): Today's xkcd is hilarious, possibly because I'm extra tired.
Paul (11 May 2007 19:34): So there was an 18 walking along, when all of a sudden a square root landed on it! (√18)
The 18 was so startled that it broke into its prime factors. (√(2·3·3))
They were all trying to get out, because who wants to be stuck under a square root?
But the square root would only let them out if they went with a friend.
Did the 3 have a friend? Yes: so the pair of 3s got out and joined up as a 3 in front of the square root. (3√2)
Did the 2 have a friend? No: the 2 couldn't get out and was stuck under the square root. And that's the final answer. [3√2]
m (30 Sep 2007 1:17): clevar
Paul (1 Oct 2007 3:42): but it can't display the value of the constant n, which is what actually encodes the picture...
m (1 Oct 2007 6:39): Yes, that is unfortunate.
G (17 Oct 2008 1:45): God Created the Integers
Paul (4 Mar 2009 10:49): Convenience link: java math exercises
Paul (5 Jun 2009 12:07): Pounder taught us to call (P or not P) a tautology because it's always true. What was the word he taught for an always-false statement? I think it was like ninneology, but not quite.
Paul (10 Feb 2011 4:43): I'm taking an econ course, which often has problems like "set a,b,c,d to maximize f(a,b,c,d) such that g(a,b,c,d) ≥ 0". Is there a general way to solve these? The professor always uses heuristic methods like assuming that some of the g_i constraints will equal 0, and others will not equal 0.
m (10 Feb 2011 9:19): Linear programming?
G (11 Feb 2011 19:08): Guess and Check?
Paul (12 Feb 2011 5:03): Interesting. Except f is nonlinear. Maybe the nonlinear problems could be turned into linear problems by taking logarithms...but then again, maybe not.
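If I just want numbers rather than a closed form, though, a nonlinear solver can do the which-constraints-bind guessing for me. A sketch with a made-up f and g (scipy's "ineq" convention is exactly our g(x) ≥ 0):

import numpy as np
from scipy.optimize import minimize

def f(x):                        # toy objective, a stand-in for the real one
    a, b = x
    return a ** 0.3 * b ** 0.7

budget = {"type": "ineq",        # constraint g(x) >= 0: here 10 - a - 2b >= 0
          "fun": lambda x: 10 - x[0] - 2 * x[1]}

result = minimize(lambda x: -f(x),               # maximize f = minimize -f
                  x0=np.array([1.0, 1.0]),
                  bounds=[(0, None), (0, None)],
                  constraints=[budget])
print(result.x, -result.fun)     # optimum for this toy f is a=3, b=3.5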
Paul (4 Jul 2011 15:50): I bought a bottle of Snapple, and its lid says "Real Fact #804: There are 293 ways to make change for a dollar. Find more Real Facts at snapple.com"

Is this correct? How could there be 293 ways to make change for a dollar? I totally forget how to do combinatorics like this, and I'm too lazy to brute force it.
m (4 Jul 2011 17:04): 293 ways uses both half dollars and silver dollars (though doesn't distinguish between Silver, Susan B, Sacajawea, etc).
I don't know whether this counts as brute-forcing or not:

To start, look at making change for 25 cents with only 10/5/1 pieces.
Counting as dimes/nickels/pennies, we have 2/1/0, 2/0/5, 1/3/0, 1/2/5, etc.: 2+4+6 = 12 ways.
This means we can make change for 100,75,50,25 cents using dimes, nickels and pennies in 121,72,36,12 ways.
So making change for a dollar using these coins plus quarters can be done in 121+72+36+12+1=242 ways.
Throw in a single half dollar, and we add another 36+12+1=49, for a total of 291.
Now all that's left are two half dollars, and one Susan B (a funny way to make change for a dollar), bringing the total to 293.
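Sanity check via the usual coin-change DP (code mine):

def ways_to_change(amount, coins):
    ways = [1] + [0] * amount        # ways[v] = ways to make v cents
    for c in coins:                  # process coin types one at a time
        for v in range(c, amount + 1):
            ways[v] += ways[v - c]
    return ways[amount]

print(ways_to_change(100, [1, 5, 10, 25]))           # 242
print(ways_to_change(100, [1, 5, 10, 25, 50]))       # 292
print(ways_to_change(100, [1, 5, 10, 25, 50, 100]))  # 293, Snapple's number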
m (8 Jul 2011 19:24): Tournament logistics are such a headache. Maybe you guys can shed some light on either of my two problems.

1: Running a double-elimination tournament, how do you move teams into the losers bracket? More specifically: how do you ensure that (a) no teams that played each other in the winners play again in the losers bracket (at least, until as close to the end of things as possible), and (b) teams with high seeds entering the losers bracket get paired with teams with low seeds already in the losers bracket (and teams with low seeds entering get paired against teams with high seeds already in). Some discussion of this can be found here.

2: How do you seed well within a Swiss system? Seeding takes place in the day(s) before the double elim tourney, and needs to handle as many as 128 teams. A round robin is out of the question, so Swiss rounds make the most sense. The trouble is, a Swiss is good at picking a first place, but not so good at distinguishing between everyone in the middle (my intuition here is that this is due to bad tie-break methods poorly determining how to rank teams with the same record). There's an additional problem in which approaching games from the bottom gives you a much easier path than from the top (i.e. if we've got five games to sort out 16 teams, I want to intentionally lose my first game and make my way to first seed that way, rather than try to win all of my games).

More Swiss articles. Nobody seems to talk about using a Swiss for seeding...
G (8 Jul 2011 22:42): I will get you a sample bracket when I'm at home. You'll get it.

Most people do not worry too much about these issues. In fact, being able to do seeding at all is a luxury only the very organized can regularly afford.

In five years of tournament fencing, Swiss tournaments were mentioned, but never used. I will read up on it when I get home, but I think most of the time it's completely ignored.

Consider your prize or payout structure when you think about this. If you have 6 people, you should just do a round robin (pool). If you've got ten, half from Portland, you might as well just seed based on city as 'strength'.

I mean, if you want a math problem, it's an interesting one. But if you just want to run a tournament, you usually don't have to worry about it.
G (9 Jul 2011 0:46): Here is a typical bracket (32 entries)

See how the loser of A gets sent to take on the loser of B in the first part of the losers bracket.
Then later the winner of A and B have their match. The loser of this match (Q) gets put into the bottom part of the losers bracket. So Mr. Q, who knocked out either loser A or loser B, will have to fight through all of the bracket before they get matched against each other again.
m (9 Jul 2011 9:29): This first question really is more of a math problem. Yes, that bracket is good enough, but it's far from perfect (secondary question: can perfect always be achieved?):
loser y and loser a/b could rematch in the quarterfinals of the losers bracket, well before they have to. Swap loser y and loser z (as well as loser aa / loser ab) to fix things for one round more. Then there's nothing you can do about loser ad and loser ac - they'll potentially get a rematch against one of loser u/v/w/x, loser q/r/s/t (respectively) that they faced not two rounds before. (but this too was avoidable, if we make the loser y/z/aa/ab instead read loser ab/aa/z/y - only, wait, I think that introduces its own problems)
m (9 Jul 2011 10:13): that secondary question was dumb.
m (9 Jul 2011 11:06): First, some notation: I'm sick of writing "winners bracket round 1", "losers bracket round 4" so those are going to be WB1 and LB4 respectively.

In that earlier post I called it the losers bracket quarterfinals, but that should have read LB4 (of 8 rounds, the pre-pre-quarterfinals, ugh), where the bracket linked has a few potential WB1 rematches (i.e. loser y, who is one of winners a/b/c/d, faces one team from losers a/b/c/d/u/v).

Reading down from the top, I'm guessing LB4 should read z/y/ab/aa. It means two potential LB5 rematches of a WB1 game (loser a-h facing loser y/z in the top half), then in LB6 you have two potential WB2 rematches (between loser u/v/w/x and loser ad in the top half).

I'd like to avoid this, but don't see any way to - it's either that or have LB6 feature a WB3 rematch, which makes even less sense.

Then of course it's rematch city for LB7 and LB8. But there's no choice for the structure of those last two rounds anyway.
z (9 Jul 2011 15:12): Hm.. never considered the tournament ranking problem before.

From first principles it seems we are assuming there exists "some" full ordering that is fair. This may not be the case, e.g. A always beats B, B always beats C, but somehow C always beats A. Let's say this doesn't happen, because then what can you do?

So if there is a full ordering, this becomes some kind of... fuzzy sorting problem. Fuzzy because each player has a "strength" that varies from time to time. So you try to get them to play as many games as possible to get a good strength "measurement." Now comes the most interesting part. m had mentioned "can perfect be achieved"... well one should be more precise. What is "perfect"?

O(n^2) round-robin gives you a full sort but it's inefficient bubble sort. O(n) single-elimination is only meant to give the max (and 2nd?), so never was there a guarantee for a good ranking below that. Double-elimination seems like some illogical compromise that makes little sense to me. It just "feels" more reassuring than single-elimination but still gives no guarantees, I think. (I say this because the "play same player twice in short time" issue is equivalent to the bad draw issue in single elimination, and my wild hunch says this can't be eliminated with any O(n) tournament.)

Maybe this is already answered but if you really want a full sort, why not use O(n log(n)) merge sort? They do this partially to determine 3rd place sometimes. Maybe Swiss and seeding etc. already add modifications along these lines to get a more accurate sort farther from the top.
z (9 Jul 2011 15:33): Sorry, single-elimination is just max, no guarantee even on 2nd. Double-elimination gives 1st and 2nd, and no guarantee below that. Is that right? Yeah I think that's right.

I'm sorry, I read the linked to article again on the Starcraft forum, under "why is this unfair". So if A plays B again in the losers bracket, and A again beats B, even if B can beat all the other teams*, is that so unfair? We've determined twice now that A is better than B, so A is a better candidate for 2nd place. Why should A "definitely win" under another system? *It seems like they are making the unlikely assumption that B might beat W, the eventual winner. Again, if the goal is to determine 1st and 2nd places and not a full sort, double-elimination seems fair.
z (9 Jul 2011 15:55): re: seeding. Why run Swiss rounds? Just do several random rounds to get your initial win-loss matrix. Then you can process it after the fact as a matrix completion problem. If you did Swiss rounds for seeding then you're giving too much weight to earlier performance and introduce "strategery" artifacts, as you say. In fact, now that I think about it there should be an iterative/interactive bracketing system, where your next opponents are determined by your past performance, as well as what would give the most information to complete the ranking in this particular instance. There must be papers on this.
m (9 Jul 2011 20:55): I know! There must be papers, but I haven't been able to uncover any. Swiss seems like a sort of imperfect attempt at iterative matching - I don't think they're aiming for seeding, but nor are they trying just to establish a first place. That's a good way to articulate our goal: during each round, pairings should expose as much information as possible (and for this random pairings should do much worse than swiss pairings). An iterative method may still be subject to strategery, but perhaps can be made to dampen the effect of artifacts.

Double elimination is just a nice way of running a bracket at the end to give a climactic finish to the tournament. Also it's nice to dole out prizes (see "unfair" if you think you should get 2nd place but end up seeing the obvious 1st place two times in a row early on), and double elim gives exact places 1-4, then ties at 5/5, 7/7, 9/9/9/9.
z (10 Jul 2011 7:10): Oh well I was talking about the idea of using Swiss rounds to do seeding, which .. isn't necessarily better than random pairing, because I'm thinking the very first round (which is random even under Swiss, right?) somebody may get a weak opponent and somebody else may get a tough opponent, and if they both win, Swiss wants to pair them up next based on the assumption that they are of similar strengths, but this is not true. You need to get a lot more information before you want to force convergence on that kind of informed pairing.
m (10 Jul 2011 11:12): But wasn't the iterative method also for seeding? Think of a Swiss as a semi-mergesort: First we have (random) pairs that we compare, then we clump together for fours, but instead of three comparisons, we make two and skip the middle one. Then make eights, but perform only 4 comparisons instead of 7, skipping all of the cross comparisons that would give you a full ordering. Randomizing would give us less meaningful comparisons, wouldn't it? We want to maximize our information gained each time - so perhaps we would benefit from different comparisons than the ones that a Swiss picks, but certainly not by just randomizing them!
m (11 Jul 2011 9:29): Nate Silver has interesting things to say about tournaments
z (11 Jul 2011 18:17): But we've already established that single-elimination only determines the first-place 'correctly' (assuming no uncertainty). So advancing to the next round by itself shouldn't be considered meritorious, albeit 'exciting' to watch... Perhaps we do want an additional feature that the better teams should go on farther in a tournament?

On that note:

The iterative/seedless thing we've been discussing - I thought about it some more, esp. your comment re: Swiss being a partial merge sort. If we really don't want to do an O(n log(n)) full merge sort, then we will end up with a poset. Swiss as you say gives good ranks top and bottom but leaves a lot of unresolvable partial ordering in the middle, which is mirrored at the microscopic level by a partial "merge" step that only compares winner-winner and loser-loser. Seems to me that a more ideal partial "merge" step should be "top heavy"; I'll give an example.

Teams: A B C D E F G H with true order A > B > ... > G > H

Round 0, random pairing
A-G, B-H, C-F, D-E, so A, B, C, D win

Round 1, the first partial merge, random subtree choices A G C F in one group, B H D E in another
Round 0 winners play: A-C, B-D, so A, B win
Losers play the cross: C-G, D-H, so C, D win
At this point, we have this poset
A->C->G, C->F
B->D->H, D->E
We can drop the bottom half, which are clearly E, F, G, H, but we don't and won't know how they rank among themselves.

Round 2, the second partial merge on the remaining (top) half
Round 1 winners play: A-B, so A wins
Loser plays the cross: B-C, B wins
The poset is now
A->B->C, B->D
Again drop the bottom half, which are C, D, but we don't know how they compare.

We're done, because we have the following binary-tree poset now:

A->B
   |->C
   |  |->F
   |  |->G
   |->D
      |->E
      |->H

which is precisely the kind of top-heavy structure we want. We know exactly who 1st and 2nd places are, then we know places 3,4 collectively, after that we know places 5,6,7,8 collectively.

One complaint may be that half the teams have to wait a round to play in each merge step, but this is not the case, since we can pipeline the winner-winner games in Round j together with the loser-cross games in Round j-1.

Even more, you never play the same team twice, and the better teams get to advance more rounds. Complexity is O(n) I believe. So I don't see any deficiency whatsoever in this kind of bracket. Why isn't it used?
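Here's a toy simulation of the scheme (my own code; I model games as stronger-always-wins). Each group carries a (champion, runner-up) pair, and merging two groups costs exactly two games:

import random

def play(a, b):
    """Toy model: the stronger team (bigger number) always wins."""
    return (a, b) if a > b else (b, a)          # (winner, loser)

def merge(g1, g2, results):
    """Winner-winner game, then the loser-cross game."""
    (c1, r1), (c2, r2) = g1, g2
    champ, fallen = play(c1, c2)                # champions meet
    results.append((champ, fallen))
    cross = r1 if champ == c1 else r2           # runner-up on champ's side
    runner, out = play(fallen, cross)           # the "cross" game
    results.append((runner, out))
    return (champ, runner)

random.seed(0)
teams = list(range(8))                          # strengths 0..7
random.shuffle(teams)
results = []                                    # the poset, as (winner, loser) pairs
groups = []
for i in range(0, 8, 2):                        # Round 0: random pairs
    w, l = play(teams[i], teams[i + 1])
    results.append((w, l))
    groups.append((w, l))
while len(groups) > 1:                          # the partial merges
    groups = [merge(groups[i], groups[i + 1], results)
              for i in range(0, len(groups), 2)]

print("1st:", groups[0][0], "2nd:", groups[0][1])   # 7 and 6, guaranteed
print("games:", len(results))                       # 3n/2 - 2 = 10 for n=8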
G (11 Jul 2011 19:34): I can't help but think you guys are working really hard at something that's not going to work out well. If you want to hold to the best possible, totally complete method, it seems pretty clear to me that you do a round-robin. Everybody plays everybody, voila. And if you don't have time for that, do swiss. Right?

I don't know. If you want to think about what's "practical," I think separating by club/locale is more important than by strength. Nobody wants to go to a tournament and get knocked out by two people they practiced with last week. And if your double-elim can 'guarantee' the top two finishers, well... that can represent a big chunk of the prize money. After that, it's just not as important. 'pretty close' is good enough for top 8.
G (11 Jul 2011 19:36): People don't even do the match to decide 3rd and 4th a lot of the time.
m (11 Jul 2011 21:57): that's one nice thing about double elim: the third/fourth place matches happen in the losers bracket semi and final, so they're sorted out for you.

I'm just looking for the best bang for my buck here in the seeding rounds. Swiss is quite good - much better than, say, splitting into a bunch of pools and doing intra-pool RR with some weird tie-break mechanism to get seeding for bracket construction (the way the world cup does it, I think) - but if we can take the same number of games and squeeze a bit more seeding info out than we get with Swiss pairings - I mean, why not try?
z (11 Jul 2011 22:49): m, how should one decide whether one poset has more 'information' than another? i mean what's the metric. i still think the tree structure mentioned above is a good proxy for the final tournament result, but if you want closer to 'uniform' information up and down the entire rank (for seeding, though i don't know why one should separate seeding and tournament play), then i'm beginning to think that the swiss structure (narrow at top and bottom, fat in the middle) is ok.
m (12 Jul 2011 9:32): But final tournament result is going to be decided in a double elimination bracket! The Swiss/whatever takes place beforehand to determine the _seeding_ of the elimination bracket, and that's what we want to economize.
G (12 Jul 2011 22:27): As M says. I think the practical consideration is that tournament organizers need to have a 'grand finals' match that determines the first place outcome decisively. Nobody wants to get final ranks based on only swiss, and have it turn out that nobody was watching the most important match because it was right at the beginning.
z (13 Jul 2011 7:59): @G: round-robin is not necessary for full sorting, as mergesort could do the same with fewer games. but yes, still too many games to play perhaps.

@m: i see now, you actually have a particular practical concern at hand... hm, since double elimination itself is O(n), your seeding phase presumably should also be O(n). if n is large, O(n) comparisons give you a negligible ~exp(n) permutations out of n! possible -- impossible to get position 'accuracy' at all but a few ranks (just like in swiss) but i'm trying to think how bad the overall position errors must be.
z (13 Jul 2011 8:46): @G, i was unclear... i'll clarify:

- if you asked me to design tournament from scratch to save on games, i'd use the one in the post a few posts back, the one with the tree. it gives a decisive 1st place game, no repeat pairing, no prior seeding assumption, ease of determining more places correctly if needed all the way to a full sort, among other features.

- if you asked me how to do seeding well in O(n) (a different problem) for making better pairing in double-elim. then i have less to say. maybe top heavy accuracy is good enough, maybe top and bottom heavy like swiss is needed, or maybe only uniform accuracy will do (m's presumption going in i think), but it is unclear to me now whether uniform accuracy is even possible in O(n).
z (13 Sep 2011 9:18): How the IRS would present the quadratic formula. Of course having to reverse-engineer what various tax authorities really wanted (for the purpose of optimization) is the perennial beef I have with tax forms.
m (22 May 2012 15:04): I'm back on double elimination brackets. Guys, help me out here.

Everyone seems to run double elimination brackets like this: (24 teams, kind of a bad-case scenario)

What I see is that with a 24 team double elim it's advantageous to lose in the first round if the alternative is an impossible second round game (i.e. lose the 16-17 game rather than beat 17 and then face the 1-seed, since you'll both end up in Round1-Losers-Bracket anyway). All the teams who lose in the first round get what amounts to a bye in R1-Loser (and R2-Loser takes R1-Loser's place).

One alternative I see is to make the first round losers face each other (no bye), then give some of the new losers from the second round a bye into round three of the losers bracket (this is different from the problem above: R3-Losers is still between teams from R1-Winners and R2-Winners, no teams who made it to R3-Winners are yet involved). This is because R1-Losers is like a half-round, where for the rest of the tournament there are 2 losers bracket rounds to every 1 winners bracket round.

Why doesn't anyone do it the way I describe?
G (23 May 2012 22:41): Is this a practical question or are you just intrigued by the problem?
m (24 May 2012 9:07): Practical question: I just got a complaint from a guy who went W1->W2->L1->out in the Midwest Quals and wants to know why he's ranked last alongside folks who went W1->L1->out. I started to explain to him "that guy's 24 team double elim is fucked up" when I checked challonge to see how it was supposed to be done and saw it was exactly the same.
m (24 May 2012 9:11): [actually Sven from MPLS didn't do quite that - his complaint was about W1->W2->L1->L2->out and why it is that he's ranked below teams who went W1->L1->L2->L3->out. But I feel like there's a kernel of misunderstanding for how double-elim ranking even works in Sven's question, and W1->W2->L1->out is a bit more dramatic anyway, so I gave that as the example]
m (24 May 2012 9:17): and we're doubly-practical now because I'm finally adding elimination tournaments to the tournament software
m (25 May 2012 11:05): no, no - don't worry about gnu health. Let's focus on double elimination tournaments. How are fighting game tournaments run - are they always single elim?
z (25 May 2012 23:21): didn't we conclude last time that precise lower ranks (i.e. besides the first two) are impossible with this many comparisons? if you insist on those, then you can make them play each other as you say. if you merely want to eliminate the seppuku strategizing, then maybe don't play seeds too far apart -- it can't be that serious a problem can it?
m (26 May 2012 9:02): @z: if you merely want to eliminate the seppuku strategizing, then maybe don't play seeds too far apart. -- This is exactly what I want to eliminate. What are you suggesting here with the seeds and not far apart?

And don't forget we're working here within the framework of a double elimination tournament. So the goal _isn't_ to seed folks anymore, but to make an exciting elimination-style trip to first place for which the better seeds have an easier route.

[i.e. first place plays last place in that first game, then 1/2 place in the second game, then 1/4 place for its third game, etc, assuming it wins every time. 1/4 place on the other hand plays 3/4 place for its first game, then 1/4+1 place for its second game, then 1st place for its third game -- the trickiness here isn't how to schedule games in the winners' bracket, though, but how to schedule games for the losers' bracket. I like the phrase "Seppuku Strategizing".]
G (26 May 2012 23:32): Fighting game tournaments are typically double elim. Fencing tournaments are usually pools, then single elim.
You could do the whole winners bracket, then assemble a losers bracket after it's done. It might not be inherently more fair, but it gives you more options, and perhaps most importantly could prevent a player from knowing that his loss could be advantageous. But then the question is what are you going to do with your losers pool that you wouldn't have done before. The 'natural'(?) choice is just to make a new bracket with everybody except the #1 seed in it.

But that adds a whole bunch of problems that will probably make you appreciate the standard double elim setup. Because you lose the double jeopardy protection (not being knocked out by two losses to the same person if it can be avoided), and players won't like having random numbers of matches. It also complicates the ranking.

Elimination tournaments will never really give you a good ranking, however. They just don't. No elimination-bracket-only tournament is going to pay out to more than the top four, though. So I think people just don't care. You have a top four, and otherwise, you made it to top 8, or whatever.

If you want to have a real ranking, pools are good, but obviously not suited to team sports where you can't just pair off everybody, then count up points and stuff half an hour later. That's why team sports have leagues - to allow many many matches before a tournament.
m (27 May 2012 19:26): Yeah, I hadn't thought of the question in that light: shifting the losers bracket byes around a little bit might make things more balanced for that one round of games, but it's a pretty minor thing; the fact that double elims are inherently bad at differentiating past third or fourth place is the real answer to this guy's complaint.
m (27 May 2012 19:29): the software's working well, btw, up in Vancouver this weekend
G (28 May 2012 12:14): CHALLONGE is taken, what about CHALLINJ.com ?
G (28 May 2012 20:42): Brackot.com?
Tornamont.com?
m (29 May 2012 19:53): I get it. You're making fun.
G (30 May 2012 2:36): Uh, sorry. More just thinking that the challonge website has a shitty domain.
Like, they couldn't just remove some vowels like a respectable website?
G (30 May 2012 2:36): I like the website.
Paul (17 Jul 2012 18:28): Here's a problem that shouldn't be too hard, but it's hard for me. In non-math terms, I'm making a stacked area graph of time allocated to different projects each week. The weeks have to be ordered naturally, but the projects can be stacked in any order I like. So, I would like to stack the projects in an order that minimizes the average slope of all line segments in the stacked area graph. How can I find such an order?

In mathy terms, I have a set S of m arrays A_a, each containing n elements A_a,1 ... A_a,n. I'm trying to optimize over all possible orderings of S. An ordering B of S looks like a map from 1...m to the arrays A_a, so we can call them B_1...B_m. WLOG, suppose n=2, that is, two time periods on the horizontal axis. The slope of the first line segment is 0, of course; then the next one is B_1,2 - B_1,1, then the next one is (B_1,2 - B_1,1)+(B_2,2 - B_2,1), on up to (B_1,2 - B_1,1)+...+(B_m,2 - B_m,1). But a negative slope in one case doesn't make up for a positive slope in another case, so we need to take the abs or ^2 of each slope before summing into a function to be minimized.

Ok, that's enough for now. Any suggestions?
G (17 Jul 2012 22:03): I assume that "How many projects do you have? Just brute force it." is not the desired response.
m (18 Jul 2012 13:48): it's not such a difficult computation - and the number of weeks isn't increasing the work significantly - it's just that since you've got m! permutations of the arrays it gets to be a lot of work pretty quickly.

But my intuition is that to find the optimum you'll have to check all orderings. So let's think about quicker algorithms for a decently good solution:

How about a divide-and-conquer type thing where you split the tasks into two groups with a stable sum over most of the weeks? Or actually, I think you only care about one of the two groups having a stable sum. Now how do we find this? Maybe take an arbitrary random group, look for weeks where you've got a big change, and swap out elements to adjust for it? I keep feeling like we've got a knapsack problem, but I can't quite make the fit.
m (18 Jul 2012 13:54): So I read over the start of what I just wrote and see a quick clarification to make: "it's not such a difficult computation" was meant to mean "for a given ordering, computing the (sum of slope^2, sum of abs(slope) - I don't know, whatever) isn't difficult [and so maybe brute-forcing over 10! or 12! permutations isn't going to take too long]"

not "the computation [of the solution to this problem] is easy"
Paul (19 Jul 2012 21:01): You're thinking along the same track as me when you talk about groups with stable sum. For example "work" and "work conference" tend to be in complementary distribution, so it would be useful for the graph to place them together. This was what got me thinking of the problem as minimizing the average slopes. Could this then be seen as a variant of a traveling-salesman problem, using covariance among projects as a sort of distance metric?
Paul (21 Jul 2012 6:33): I'm thinking a decently good algorithm might just be to start stacking with the one "flattest" project (maybe define flattest in terms of total abs(slope) or total (slope^2), or maybe ratio of total (slope^2) to area covered), then continue with the next project that minimizes total-flatness-so-far, etc. This feels like the nearest-neighbor solution to traveling salesman, which I think is considered mediocre, but might be decently good for my purpose.
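Concretely, the greedy looks something like this sketch (code mine; I score each candidate by the squared slopes of the running top line, which is the only part the new project changes):

def greedy_stack_order(projects):
    """projects: dict of name -> list of n weekly values.
    Greedily picks the next project that keeps the running top line
    of the stacked graph flattest (sum of squared slopes)."""
    n = len(next(iter(projects.values())))
    remaining = dict(projects)
    top = [0.0] * n                    # current top line of the stack
    order = []

    def roughness(line):
        return sum((line[i + 1] - line[i]) ** 2 for i in range(n - 1))

    while remaining:
        best = min(remaining, key=lambda name: roughness(
            [t + v for t, v in zip(top, remaining[name])]))
        top = [t + v for t, v in zip(top, remaining[best])]
        order.append(best)
        del remaining[best]
    return order

demo = {"work": [5, 1, 5, 1], "conference": [0, 4, 0, 4], "blog": [1, 1, 2, 1]}
print(greedy_stack_order(demo))        # ['blog', 'conference', 'work']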
Paul (21 Jul 2012 6:34): PS, I would be fine with brute force, but m~=80, and O(80!) won't run on my computer.
Paul (21 Jul 2012 6:36): Ok looks like m=55. O(55!) is still too much.
m (21 Jul 2012 13:06): It's definitely a greedy-algorithm solution, so in that sense is like TS's nearest-neighbor. But I can't shoehorn this problem onto traveling salesman.
g (21 Jul 2012 13:35): If the actual goal is to produce a graph, I imagine that if this were, like, a commercial product, the way they'd be ordered is alphabetically. Or maybe user choosable. So you could group related stuff.
m (22 Jul 2012 11:54): 55!, Too many to boil.
m (23 Jul 2012 16:18): Though this is a great problem - and I think the verdict is still out on whether finding an optimal solution is P, NP or Exp - it occurs to me that a stacked bar graph (which is what I've been imagining this whole time. is that right?) is a bad way to convey your 55 items worth of information over time. The stacking part doesn't give you anything (does it?), but it really does make it a lot harder to follow a given item over time.
Paul (24 Jul 2012 19:17): It's a stacked area chart, a cut below Randall's but in the same vein. The purpose of stacking is to fit all the projects vertically. If they were overlapping, some projects would inevitably be hidden; stacking keeps them all visible.
G (8 Aug 2016 8:04): Have you guys heard about statistics?
m (8 Aug 2016 20:18): ...no?