Gerrymandering Is Illegal, But Only Mathematicians Can Prove It

Partisan gerrymandering—the practice of drawing voting districts to give one political party an unfair advantage—is one of the few political issues that voters of all stripes find common cause in condemning. Voters should choose their elected officials, the reasoning goes, rather than elected officials choosing their voters. The Supreme Court agrees, at least in theory: In 1986 it ruled that partisan gerrymandering, if extreme enough, is unconstitutional.

Yet in that same ruling, the court declined to strike down the two Indiana maps under consideration, even though both “used every trick in the book,” according to a paper in the University of Chicago Law Review. And in the years since, the court has failed to throw out a single map as an unconstitutional partisan gerrymander.

“If you’re never going to declare a partisan gerrymander, what is it that’s unconstitutional?” said Wendy K. Tam Cho, a political scientist and statistician at the University of Illinois, Urbana-Champaign.

The problem is that there’s no such thing as a perfect map—every map will have some partisan effect. So how much is too much? In 2004, in a ruling that rejected every available test for partisan gerrymandering, the Supreme Court called this an “unanswerable question.” Meanwhile, as the court wrestles with this problem, maps are growing increasingly biased, many experts say.

Even so, the current moment is probably the most auspicious one in years for reining in partisan gerrymandering. New quantitative approaches—measures of how biased a map is, and algorithms that can generate millions of alternative maps—could help set a concrete standard for how much gerrymandering is too much.

Last November, some of these new approaches helped convince a United States district court to invalidate the Wisconsin state assembly district map—the first time in more than 30 years that any federal court has struck down a map for being unconstitutionally partisan. That case is now bound for the Supreme Court.

“Will the Supreme Court say, ‘Here is a fairness standard that we’re ready to stand by?’” Cho said. “If it does, that’s a big statement by the court.”

So far, political and social scientists and lawyers have been leading the charge to bring quantitative measures of gerrymandering into the legal arena. But mathematicians may soon enter the fray. A workshop being held this summer at Tufts University on the “Geometry of Redistricting” will, among other things, train mathematicians to serve as expert witnesses in gerrymandering cases. The workshop has drawn more than 1,000 applicants.

“We have just been floored by the response that we’ve gotten,” said Moon Duchin, a mathematician at Tufts who is one of the workshop’s organizers.


Gerrymanderers rig maps by “packing” and “cracking” their opponents. In packing, you cram many of the opposing party’s supporters into a small number of districts, where they’ll win by a much bigger margin than they need. In cracking, you spread your opponent’s remaining supporters across many districts, where they won’t muster enough votes to win.

For example, suppose you’re drawing a 10-district map for a state with 1,000 residents who are split evenly between Party A and Party B. You could create one district that Party A will win, 95 to 5, and nine districts it will lose, 45 to 55. Even though the parties have equal support, Party B will win 90 percent of the seats.
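To make the arithmetic concrete, here is a minimal sketch in Python; the vote totals are just the hypothetical ones from the example above.

```python
# Hypothetical 10-district plan: Party A is packed into one district (95-5)
# and cracked across the other nine (45-55).
districts = [(95, 5)] + [(45, 55)] * 9  # (Party A votes, Party B votes)

total_a = sum(a for a, b in districts)           # 500
total_b = sum(b for a, b in districts)           # 500
seats_b = sum(1 for a, b in districts if b > a)  # 9

print(f"Statewide vote: A = {total_a}, B = {total_b}")
print(f"Party B wins {seats_b} of {len(districts)} seats "
      f"({100 * seats_b // len(districts)} percent)")
```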

Such gerrymanders are sometimes easy to spot: To pick up the right mixture of voters, mapmakers may design districts that meander bizarrely. That was the case with the “salamander”-shaped district signed into law in 1812 by Massachusetts governor Elbridge Gerry—the incident that gave the practice its name. In a variety of racial gerrymandering cases, the Supreme Court has “said repeatedly … that crazy-looking shapes can be an indicator of bad intent,” Duchin said.

Yet it’s one thing to say that bizarre-looking districts are suspect, and another to say just what bizarre-looking means. Many states require that districts should be reasonably “compact” wherever possible, but there’s no single mathematical measure of compactness that fully captures what these shapes should look like. Instead, there are a number of measures; some focus on a shape’s perimeter, others on how close the shape’s area is to that of the smallest circle around it, and still others on things like the average distance between residents.
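The article doesn’t single out particular measures, but two standard scores that fit the categories it mentions are a perimeter-based one (commonly called Polsby-Popper) and one based on the smallest enclosing circle (commonly called Reock). Here is a minimal Python sketch; the test shapes are made up purely for illustration.

```python
import math

def polsby_popper(area, perimeter):
    """Perimeter-based compactness: 4*pi*A / P**2 (equals 1.0 for a circle)."""
    return 4 * math.pi * area / perimeter ** 2

def reock(area, enclosing_radius):
    """Shape's area divided by the area of its smallest enclosing circle."""
    return area / (math.pi * enclosing_radius ** 2)

# A square district with side 1 (its enclosing circle has radius sqrt(2)/2).
print(polsby_popper(1.0, 4.0))                # ~0.785
print(reock(1.0, math.sqrt(2) / 2))           # ~0.637

# The same area stretched into a meandering 0.01-by-100 strip.
print(polsby_popper(1.0, 2 * (0.01 + 100)))   # ~0.0003, flagged as non-compact
print(reock(1.0, math.hypot(100, 0.01) / 2))  # ~0.0001
```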

The Supreme Court justices have “thrown up their hands,” Duchin said. “They just don’t know how to decide what shapes are too bad.”

The compactness problem will be a main focus of the Tufts workshop. The goal is not to create a single compactness measure, but to bring order to the jostling crowd of contenders. The existing literature on compactness by nonmathematicians is filled with elementary errors and oversights, Duchin said, such as comparing two measures statistically without realizing that they are essentially the same measure in disguise.

Mathematicians may be able to help, but to really make a difference, they’ll have to go beyond the simple models they’ve used in previous papers and consider the full complexity of real-world constraints, Duchin said. The workshop’s organizers “are absolutely, fundamentally motivated by being useful to this problem,” she said. Because of the flood of interest, plans are afoot for several satellite workshops, to be held around the country over the coming year.

Ultimately, the workshop organizers hope to create a deep bench of mathematicians with expertise in gerrymandering, to “get persuasive, well-armed mathematicians into these court conversations,” Duchin said.

The Accidental Gerrymander

A compactness rule would limit the range of tactics available for drawing unfair maps, but it would be far from a panacea. For one thing, there are plenty of legitimate reasons why some districts aren’t compact: In many states, district maps are supposed to try to preserve natural boundaries such as rivers and county lines as well as “communities of interest,” and they must also comply with the Voting Rights Act’s protections for racial minorities. These requirements can lead to strange-looking districts—and can give mapmakers latitude to gerrymander under the cover of satisfying these other constraints.

More fundamentally, drawing compact districts offers no guarantee that the resulting map will be fair. On the contrary, a 2013 study suggests that even when districts are required to be compact, drawing biased maps is often easy, and sometimes nearly unavoidable.

The study’s authors—political scientists Jowei Chen of the University of Michigan and Jonathan Rodden of Stanford University—examined the 2000 presidential race in Florida, where George W. Bush and Al Gore received a nearly identical number of votes. Despite this near-perfect partisan balance, in the round of redistricting after the 2000 census, the Republican-controlled Florida legislature created a congressional district map in which Bush voters outnumbered Gore voters in 68 percent of the districts—a seemingly classic case of gerrymandering.

Yet when Chen and Rodden drew hundreds of random district maps using a nonpartisan computer algorithm, they found that their maps were biased in favor of Republicans too, sometimes as much as the official map. Democratic voters in the early 2000s, they found, were clustering into highly homogeneous neighborhoods in big cities like Miami and spreading out their remaining support across suburbs and small towns that got swallowed up into Republican-leaning districts. They were packing and cracking themselves.

This “unintentional gerrymandering” creates problems for Democrats in many of the large, urbanized states, Chen and Rodden found, although some states—such as New Jersey, where Democratic voters are evenly spread across a large urban corridor—have population distributions that favor Democrats.

Chen and Rodden’s work shows that biased maps can arise even in the absence of partisan intent, and that drawing fair maps under such circumstances requires considerable care. Maps can be drawn that break up the tight urban clusters, as in Illinois, where the Democratic-controlled legislature has created districts that unite slices of Chicago with suburbs and nearby rural areas.

But, Chen and Rodden write, Democratic mapmakers have a tougher task than Republican ones, who “can do strikingly well by literally choosing precincts randomly.”

Wasted Votes

Since drawing compact districts is no cure-all, solving the gerrymandering problem also requires ways to measure just how biased a given map is. In a 2006 ruling, the Supreme Court offered tantalizing hints about what kind of measure it might look kindly on: one that captures the notion of “partisan symmetry,” which requires that each party have an equal opportunity to convert its votes into seats.

The court’s interest in partisan symmetry, coming after its rejection of so many other possible gerrymandering principles, represents “the most promising development in this field in years,” wrote two researchers—Nicholas Stephanopoulos, a law professor at the University of Chicago, and Eric McGhee, a research fellow at the Public Policy Institute of California—in a 2015 paper.

In that paper, they proposed a simple measure of partisan symmetry, called the “efficiency gap,” which tries to capture just what it is that gerrymandering does. At its core, gerrymandering is about wasting your opponent’s votes: packing them where they aren’t needed and spreading them where they can’t win. So the efficiency gap computes the difference between the two parties’ wasted votes, as a percentage of the total vote—where a vote is considered wasted if it is cast in a losing district or if it exceeds the 50 percent threshold needed in a winning district.

For example, in our 10-district plan above, Party A wastes 45 votes in the one district it wins, and 45 votes in each of the nine districts it loses, for a total of 450 wasted votes. Party B wastes just 5 votes in the district it loses, and 5 votes in each of the districts it wins, for a total of 50. That makes a difference of 400, or 40 percent of the 1,000 voters. This percentage has a natural interpretation: It’s the percentage of seats Party B has won beyond what it would have gotten in a balanced plan with an efficiency gap of zero.
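As an illustration of the published definition (not Stephanopoulos and McGhee’s own code), here is a minimal Python sketch that reproduces those numbers for the hypothetical 10-district plan:

```python
def efficiency_gap(districts):
    """Efficiency gap for a two-party plan.

    districts: list of (votes_a, votes_b) pairs, one per district.
    A vote is wasted if it is cast in a losing district or if it exceeds
    the bare majority needed to win.  Returns (wasted_a - wasted_b) as a
    fraction of all votes cast; positive values favor Party B.
    """
    wasted_a = wasted_b = total = 0.0
    for a, b in districts:
        total += a + b
        needed = (a + b) / 2  # the 50 percent threshold
        if a > b:
            wasted_a += a - needed
            wasted_b += b
        else:
            wasted_a += a
            wasted_b += b - needed
    return (wasted_a - wasted_b) / total

# Party A wastes 45 + 9*45 = 450 votes; Party B wastes 5 + 9*5 = 50.
plan = [(95, 5)] + [(45, 55)] * 9
print(efficiency_gap(plan))  # 0.4, a 40 percent gap favoring Party B
```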

Stephanopoulos and McGhee have calculated the efficiency gaps for nearly all of the congressional and state legislative elections between 1972 and 2012. “The efficiency gaps of today’s most egregious plans dwarf those of their predecessors in earlier cycles,” they wrote.

The efficiency gap played a key role in the Wisconsin case, where the map in question, according to expert testimony by the political scientist Simon Jackman, had an efficiency gap of 13 percent in 2012 and 10 percent in 2014. By comparison, the average efficiency gap among state legislatures in 2012 was just over 6 percent, Stephanopoulos and McGhee have calculated.

The two have proposed the efficiency gap as the centerpiece of a simple standard the Supreme Court could adopt for partisan gerrymandering cases. To be considered an unconstitutional gerrymander, they suggest, a district plan must first be shown to exceed some chosen efficiency gap threshold, to be determined by the court. Second, since efficiency gaps tend to fluctuate over the decade that a district map is in force, the plaintiffs must show that the efficiency gap is likely to favor the same party over the entire decade, even if voter preferences shift somewhat.
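A rough sketch of that two-part screen, in Python. The 7 percent threshold is a placeholder chosen for illustration only, not a figure endorsed by the authors or any court, and the projected gaps would in practice come from modeling plausible vote swings.

```python
def flags_as_presumptive_gerrymander(observed_gap, projected_gaps, threshold=0.07):
    """Two-part screen: the observed efficiency gap exceeds a chosen threshold,
    and the gap is projected to favor the same party for the rest of the decade.

    Gaps are signed so that positive values favor the same party as observed_gap.
    The 0.07 threshold is an illustrative placeholder, not a legal standard.
    """
    exceeds_threshold = abs(observed_gap) >= threshold
    durable = all(g * observed_gap > 0 for g in projected_gaps)  # same sign all decade
    return exceeds_threshold and durable

# Wisconsin-style numbers from the article (13 percent in 2012, 10 percent in 2014),
# plus hypothetical projections for the remaining elections of the decade.
print(flags_as_presumptive_gerrymander(0.13, [0.10, 0.09, 0.11, 0.08]))  # True
```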

If those two requirements are met, Stephanopoulos and McGhee propose, the burden then falls on the state to explain why it created such a biased plan; perhaps, the state could argue, other factors such as compactness and preservation of boundaries tied its hands. The plaintiffs could then rebut that claim by producing a less biased plan that performs as well as the existing map on measures like compactness.

This approach, the pair wrote, “would neatly slice the Gordian knot the Court has tied for itself,” by explicitly establishing how much partisan effect is too much.

A Question of Intent

The efficiency gap can help identify plans with strong partisan bias, but it cannot say whether that bias was created intentionally. To disentangle the threads of intentional and unintentional gerrymandering, last year Cho—along with her colleagues at Urbana-Champaign, senior research programmer Yan Liu and geographer Shaowen Wang—unveiled a simulation algorithm that generates large numbers of maps to compare with any given districting map, to determine whether it is an outlier.

There’s an almost unfathomably large number of possible maps out there, far too many for any algorithm to enumerate completely. But by distributing their algorithm’s tasks across a huge number of processors, Cho’s team found a way to generate millions or even billions of what they call “reasonably imperfect” maps—ones that perform at least as well as the original map on whatever nonpartisan measures (such as compactness) a court might be interested in. “As long as a particular facet can be quantified, we can incorporate it into our algorithm,” Cho and Liu wrote in a second paper.

In that paper, Cho and Liu used their algorithm to draw 250 million imperfect but reasonable congressional district maps for Maryland, whose existing plan is being challenged in court. Most of their maps, they found, are biased in favor of Democrats. But the official plan is even more biased, favoring Democrats more strongly than 99.79 percent of the algorithm’s maps—a result extremely unlikely to arise in the absence of an intentional gerrymander.
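The outlier logic itself is simple, even though generating the maps is not. The sketch below compares an enacted plan’s bias score against an ensemble of scores from simulated plans; the ensemble here is synthetic, whereas the real analysis runs Cho and Liu’s massively parallel algorithm on actual precinct data.

```python
import random

def outlier_percentile(enacted_bias, simulated_biases):
    """Fraction of simulated plans whose bias is smaller than the enacted plan's.

    A value near 1.0 means the enacted plan is more biased than nearly every
    computer-drawn alternative (Cho and Liu's Maryland figure was 99.79 percent).
    """
    below = sum(1 for b in simulated_biases if b < enacted_bias)
    return below / len(simulated_biases)

# Toy ensemble of bias scores for hypothetical simulated plans (not real data).
random.seed(0)
ensemble = [random.gauss(0.05, 0.02) for _ in range(100_000)]
print(outlier_percentile(0.13, ensemble))  # close to 1.0 for this made-up ensemble
```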

In a similar vein, Chen and Rodden used simulations (though with many fewer maps) to argue that Florida’s 2012 congressional plan was almost certainly intentionally gerrymandered. Their expert testimony contributed to the Florida Supreme Court’s decision in 2015 to strike down eight of the plan’s 27 districts.

“We didn’t have this level of sophistication in simulation available a decade ago, which was the last major case on this topic before the [United States Supreme] Court,” said Bernard Grofman, a political scientist at the University of California, Irvine.

The Florida ruling was based on the state constitution, so its implications for other states are limited. But the Wisconsin case has “potential amazing precedent value,” Grofman said.

Grofman has developed a five-pronged gerrymandering test that distills the key elements of the Wisconsin case. Three prongs are similar to those Stephanopoulos and McGhee have proposed: evidence of partisan bias, indications that the bias would endure for the full decade, and the existence of at least one replacement plan that would remedy the existing plan’s bias. To these, Grofman adds two more requirements: simulations showing that the plan is an extreme outlier, suggesting that the gerrymander was intentional, and evidence that the people who drew the map knew they were drawing a far more biased plan than necessary.

Source: Wendy K. Tam Cho, using PEAR algorithm. Lucy Reading-Ikkanda/Quanta Magazine

If the Supreme Court does adopt a gerrymandering standard, it remains to be seen whether it will require evidence of intent, as Grofman’s standard does, or instead focus on outcomes, as Stephanopoulos and McGhee’s standard does.

“Do we really believe that districts should come as close as possible to fair representation of the parties?” Rodden said. “If so, we shouldn’t really care whether [gerrymandering is] intentional or unintentional.” But, he added, “I don’t know where the courts will end up coming down. I don’t think anyone knows.”

The choice has major implications. Last year, Chen and David Cottrell, a quantitative social scientist at Dartmouth College, used simulations to measure the extent of intentional gerrymandering in congressional district maps across all 50 states; they found a fair amount of it, but they also found that at the national level, it mostly canceled out. Banning only intentional gerrymandering, they concluded, would have little effect on the partisan balance of the United States House of Representatives (although it could have a substantial effect on individual state legislatures).

Banning unintentional gerrymandering as well would lead to a more radical redrawing of district maps, one that “could potentially make a very big difference in the balance of the House,” McGhee said.

That choice is up to the court. But there’s plenty of work left for gerrymandering researchers, from understanding the limitations of their measures (many of which produce odd results in lopsided elections, for example) to studying the trade-offs between ensuring partisan symmetry and, say, protecting the voting power of minorities or drawing compact districts. Collaboration between political and social scientists, mathematicians, and computer scientists may be the ideal way forward, Rodden and McGhee both say.

“We should be encouraging cross-pollination and bringing in outside ideas, and then debating those ideas robustly,” McGhee said.

Original story reprinted with permission from Quanta Magazine, an editorially independent publication of the Simons Foundation whose mission is to enhance public understanding of science by covering research developments and trends in mathematics and the physical and life sciences.


How to Build Beautiful 3-D Fractals Out of the Simplest Equations

If you came across an animal in the wild and wanted to learn more about it, there are a few things you might do: You might watch what it eats, poke it to see how it reacts, and even dissect it if you got the chance.

Mathematicians are not so different from naturalists. Rather than studying organisms, they study equations and shapes using their own techniques. They twist and stretch mathematical objects, translate them into new mathematical languages, and apply them to new problems. As they find new ways to look at familiar things, the possibilities for insight multiply.

That’s the promise of a new idea from two mathematicians: Laura DeMarco, a professor at Northwestern University, and Kathryn Lindsey, a postdoctoral fellow at the University of Chicago. They begin with a plain old polynomial equation, the kind grudgingly familiar to any high school math student: f(x) = x² – 1. Instead of graphing it or finding its roots, they take the unprecedented step of transforming it into a 3-D object.

With polynomials, “everything is defined in the two-dimensional plane,” Lindsey said. “There isn’t a natural place a third dimension would come into it until you start thinking about these shapes Laura and I are building.”

The 3-D shapes that they build look strange, with broad plains, subtle bends and a zigzag seam that hints at how the objects were formed. DeMarco and Lindsey introduce the shapes in a forthcoming paper in the Arnold Mathematical Journal, a new publication from the Institute for Mathematical Sciences at Stony Brook University. The paper presents what little is known about the objects, such as how they’re constructed and the measurements of their curvature. DeMarco and Lindsey also explain what they believe is a promising new method of inquiry: Using the shapes built from polynomial equations, they hope to come to understand more about the underlying equations—which is what mathematicians really care about.

Breaking Out of Two Dimensions

In mathematics, several motivating factors can spur new research. One is the quest to solve an open problem, such as the Riemann hypothesis. Another is the desire to build mathematical tools that can be used to do something else. A third—the one behind DeMarco and Lindsey’s work—is the equivalent of finding an unidentified species in the wild: One just wants to understand what it is. “These are fascinating and beautiful things that arise very naturally in our subject and should be understood!” DeMarco said by email, referring to the shapes.

Laura DeMarco, a professor at Northwestern University. Courtesy of Laura DeMarco

“It’s sort of been in the air for a couple of decades, but they’re the first people to try to do something with it,” said Curtis McMullen, a mathematician at Harvard University who won the Fields Medal, math’s highest honor, in 1998. McMullen and DeMarco started talking about these shapes in the early 2000s, while she was doing graduate work with him at Harvard. DeMarco then went off to do pioneering work applying techniques from dynamical systems to questions in number theory, for which she will receive the Satter Prize—awarded to a leading female researcher—from the American Mathematical Society on January 5.

Meanwhile, in 2010 William Thurston, the late Cornell University mathematician and Fields Medal winner, heard about the shapes from McMullen. Thurston suspected that it might be possible to take flat shapes computed from polynomials and bend them to create 3-D objects. To explore this idea, he and Lindsey, who was then a graduate student at Cornell, constructed the 3-D objects from construction paper, tape and a precision cutting device that Thurston had on hand from an earlier project. The result wouldn’t have been out of place at an elementary school arts and crafts fair, and Lindsey admits she was kind of mystified by the whole thing.

“I never understood why we were doing this, what the point was and what was going on in his mind that made him think this was really important,” said Lindsey. “Then unfortunately when he died, I couldn’t ask him anymore. There was this brilliant guy who suggested something and said he thought it was an important, neat thing, so it’s natural to wonder ‘What is it? What’s going on here?’”

In 2014 DeMarco and Lindsey decided to see if they could unwind the mathematical significance of the shapes.

A Fractal Link to Entropy

To get a 3-D shape from an ordinary polynomial takes a little doing. The first step is to run the polynomial dynamically—that is, to iterate it by feeding each output back into the polynomial as the next input. One of two things will happen: either the values will grow infinitely in size, or they’ll settle into a stable, bounded pattern. To keep track of which starting values lead to which of those two outcomes, mathematicians construct the Julia set of a polynomial. The Julia set is the boundary between starting values that go off to infinity and values that remain bounded below a given value. This boundary line—which differs for every polynomial—can be plotted on the complex plane, where it assumes all manner of highly intricate, swirling, symmetric fractal designs.
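A minimal escape-time sketch of this procedure, in Python, for the polynomial f(x) = x² – 1 from above; the grid resolution, iteration cap and escape radius are arbitrary choices for a crude text rendering.

```python
def escapes(c, z0, max_iter=200, bound=2.0):
    """Iterate f(z) = z*z + c from z0 and report whether the orbit escapes.

    Starting values whose orbits stay bounded form the filled Julia set;
    the Julia set itself is the boundary between the two behaviors.
    """
    z = z0
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > bound:
            return True
    return False

# Crude text rendering of the filled Julia set of f(x) = x^2 - 1.
c = -1
for im in (y / 10 for y in range(8, -9, -1)):
    print("".join(" " if escapes(c, complex(re / 20, im)) else "#"
                  for re in range(-36, 37)))
```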


If you shade the region bounded by the Julia set, you get the filled Julia set. If you use scissors and cut out the filled Julia set, you get the first piece of the surface of the eventual 3-D shape. To get the second, DeMarco and Lindsey wrote an algorithm. That algorithm analyzes features of the original polynomial, like its degree (the highest number that appears as an exponent) and its coefficients, and outputs another fractal shape that DeMarco and Lindsey call the “planar cap.”

“The Julia set is the base, like the southern hemisphere, and the cap is like the top half,” DeMarco said. “If you glue them together you get a shape that’s polyhedral.”

The algorithm was Thurston’s idea. When he suggested it to Lindsey in 2010, she wrote a rough version of the program. She and DeMarco improved on the algorithm in their work together and “proved it does what we think it does,” Lindsey said. That is, for every filled Julia set, the algorithm generates the correct complementary piece.

The filled Julia set and the planar cap are the raw material for constructing a 3-D shape, but by themselves they don’t give a sense of what the completed shape will look like. This creates a challenge. When presented with the six faces of a cube laid flat, one could intuitively know how to fold them to make the correct 3-D shape. But, with a less familiar two-dimensional surface, you’d be hard-pressed to anticipate the shape of the resulting 3-D object.

“There’s no general mathematical theory that tells you what the shape will be if you start with different types of polygons,” Lindsey said.

Mathematicians have precise ways of defining what makes a shape a shape. One is to know its curvature. Any 3-D object without holes has a total curvature of exactly 4π; it’s a fixed value in the same way any circular object has exactly 360 degrees of angle. The shape—or geometry—of a 3-D object is completely determined by the way that fixed amount of curvature is distributed, combined with information about distances between points. In a sphere, the curvature is distributed evenly over the entire surface; in a cube, it’s concentrated in equal amounts at the eight evenly spaced vertices.
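For a polyhedron, this total of 4π is Descartes’ theorem: the angle defects at the vertices (2π minus the face angles meeting there) always add up to 4π. A quick sanity check in Python for the cube and a regular tetrahedron:

```python
import math

def total_angle_defect(defects):
    """Sum of the angle defects at a polyhedron's vertices (should be 4*pi)."""
    return sum(defects)

# Cube: three right angles meet at each of its 8 vertices.
cube_defect = 2 * math.pi - 3 * (math.pi / 2)
print(total_angle_defect([cube_defect] * 8) / math.pi)   # 4.0

# Regular tetrahedron: three 60-degree angles meet at each of its 4 vertices.
tetra_defect = 2 * math.pi - 3 * (math.pi / 3)
print(total_angle_defect([tetra_defect] * 4) / math.pi)  # 4.0
```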

A unique attribute of Julia sets allows DeMarco and Lindsey to know the curvature of the shapes they’re building. All Julia sets have what’s known as a “measure of maximal entropy,” or MME. The MME is a complicated concept, but there is an intuitive (if slightly incomplete) way to think about it. First, picture a two-dimensional filled Julia set on the plane. Then picture a point on the same plane but very far outside the Julia set’s boundary (infinitely far, in fact). From that distant location the point is going to take a random walk across two-dimensional space, meandering until it strikes the Julia set. Wherever it first strikes the Julia set is where it comes to rest.

The MME is a way of quantifying the fact that the meandering point is more likely to strike certain parts of the Julia set than others. For example, the meandering point is more likely to strike a spike in the Julia set that juts out into the plane than it is to intersect with a crevice tucked into a region of the set. The more likely the meandering point is to hit a point on the Julia set, the higher the MME is at that point.
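One Julia set simple enough to write down explicitly is the segment [-2, 2] on the real line, which belongs to f(x) = x² – 2. For it, the meandering-point picture can be simulated directly. The sketch below is only an illustration of that picture (it is not DeMarco and Lindsey’s method), and it uses the standard walk-on-spheres shortcut in place of a step-by-step random walk; the walker lands near the tips of the segment far more often than near its middle.

```python
import math
import random

def dist_to_segment(z):
    """Distance from the complex point z to the segment [-2, 2] on the real
    axis, which is the Julia set of f(z) = z*z - 2."""
    x = min(2.0, max(-2.0, z.real))
    return abs(z - x)

def first_hit(start, eps=1e-2):
    """Walk-on-spheres approximation of where a meandering walk from 'start'
    first strikes the segment: repeatedly jump to a random point on the
    largest circle around the walker that avoids the segment."""
    z = start
    while True:
        r = dist_to_segment(z)
        if r <= eps:
            return min(2.0, max(-2.0, z.real))  # nearest point on the segment
        theta = random.uniform(0.0, 2.0 * math.pi)
        z += r * complex(math.cos(theta), math.sin(theta))

random.seed(1)
hits = [first_hit(complex(30.0, 30.0)) for _ in range(10_000)]
near_tips = sum(1 for x in hits if abs(x) > 1.8) / len(hits)
near_middle = sum(1 for x in hits if abs(x) < 0.2) / len(hits)
print(f"fraction landing near the tips:   {near_tips:.2f}")    # roughly 0.29
print(f"fraction landing near the middle: {near_middle:.2f}")  # roughly 0.06
```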

In their paper, DeMarco and Lindsey demonstrated that the 3-D objects they build from Julia sets have a curvature distribution that’s exactly proportional to the MME. That is, if there’s a 25 percent chance the meandering point will hit a particular place on the Julia set first, then 25 percent of the curvature should also be concentrated at that point when the Julia set is joined with the planar cap and folded into a 3-D shape.

“If it was really easy for the meandering point to hit some area on our Julia set we’d want to have a lot of curvature at the corresponding point on the 3-D object,” Lindsey said. “And if it was harder to hit some area on our Julia set, we’d want the corresponding area in the 3-D object to be kind of flat.”

This is useful information, but it doesn’t get you as far as you’d think. If given a two-dimensional polygon, and told exactly how its curvature should be distributed, there’s still no mathematical way to identify exactly where you need to fold the polygon to end up with the right 3-D shape. Because of this, there’s no way to completely anticipate what that 3-D shape will look like.

“We know how sharp and pointy the shape has to be, in an abstract, theoretical sense, and we know how far apart the crinkly regions are, again in an abstract, theoretical sense, but we have no idea how to visualize it in three dimensions,” DeMarco explained in an email.

She and Lindsey have evidence of the existence of a 3-D shape, and evidence of some of that shape’s properties, but no ability yet to see the shape. They are in a position similar to that of astronomers who detect an unexplained stellar wobble that hints at the existence of an exoplanet: The astronomers know there has to be something else out there and they can estimate its mass. Yet the object itself remains just out of view.

A Folding Strategy

Thus far, DeMarco and Lindsey have established basic details of the 3-D shape: They know that one 3-D object exists for every polynomial (by way of its Julia set), and they know the object has a curvature exactly given by the measure of maximal entropy. Everything else has yet to be figured out.

In particular, they’d like to develop a mathematical understanding of the “bending laminations,” or lines along which a flat surface can be folded to create a 3-D object. The question occurred early on to Thurston, too, who wrote to McMullen in 2010, “I wonder how hard it is to compute or characterize the pair of bending laminations, for the inside and the outside, and what they might tell us about the geometry of the Julia set.”

Kathryn Lindsey, a mathematician at the University of Chicago. Courtesy of Kathryn Lindsey

In this, DeMarco and Lindsey’s work is heavily influenced by the mid 20th-century mathematician Aleksandr Aleksandrov. Aleksandrov established that there is only one unique way of folding a given polygon to get a 3-D object. He lamented that it seemed impossible to mathematically calculate the correct folding lines. Today, the best strategy is often to make a best guess about where to fold the polygon—and then to get out scissors and tape to see if the estimate is right.

“Kathryn and I spent hours cutting out examples and gluing them ourselves,” DeMarco said.

DeMarco and Lindsey are currently trying to describe the folding lines on their particular class of 3-D objects, and they think they have a promising strategy. “Our working conjecture is that the folding lines, the bending laminations, can be completely described in terms of certain dynamical properties,” DeMarco said. Put another way, they hope that by iterating the underlying polynomial in the right way, they’ll be able to identify the set of points along which the folding line occurs.

From there, possibilities for exploration are numerous. If you know the folding lines associated to the polynomial f(x) = x² – 1, you might then ask what happens to the folding lines if you change the coefficients and consider f(x) = x² – 1.1. Do the folding lines of the two polynomials differ a little, a lot or not at all?

“Certain polynomials might have similar bending laminations, and that would tell us all these polynomials have something in common, even if on the surface they don’t look like they have anything in common,” Lindsey said.

It’s a bit early to think about all of this, however. DeMarco and Lindsey have found a systematic way to think about polynomials in 3-D terms, but whether that perspective will answer important questions about those polynomials is unclear.

“I would even characterize it as being sort of playful at this stage,” McMullen said, adding, “In a way that’s how some of the best mathematical research proceeds—you don’t know what something is going to be good for, but it seems to be a feature of the mathematical landscape.”

Original story reprinted with permission from Quanta Magazine, an editorially independent publication of the Simons Foundation whose mission is to enhance public understanding of science by covering research developments and trends in mathematics and the physical and life sciences.
