Calculus: The Geometry of Polynomials, II

The original post on The Geometry of Polynomials generated rather more interest than usual.  One reader, William Meisel, commented that he wondered if something similar worked for curves like the Folium of Descartes, given by the equation

x^3+y^3=3xy,

and whose graph looks like:

Day141Folium1

I replied that yes, I had success, and what I found out would make a nice follow-up post rather than just a reply to his comment.  So let’s go!

Just a brief refresher:  if, for example, we wanted to describe the behavior of y=2(x-4)(x-1)^2 near the root x=1, where the graph touches the x-axis, we simply retain the (x-1)^2 factor and substitute the root x=1 into the other terms, getting

y=2(1-4)(x-1)^2=-6(x-1)^2

as the best-fitting parabola at x=1.

Now consider another way to think about this:

\displaystyle\lim_{x\to1}\dfrac y{(x-1)^2}=-6.

For examples like the polynomial above, this limit is always trivial, and is essentially a simple substitution.

What happens when we try to evaluate a similar limit with the Folium of Descartes?  It seems that a good approximation to this curve at x=0 (the U-shaped piece, since the sideways U-shaped piece involves writing x as a function of y) is y=x^2/3, as shown below.

Day141Folium2

To see this, we need to find

\displaystyle\lim_{x\to0}\dfrac y{x^2}.

After a little trial and error, I found it was simplest to use the substitution z=y/x^2, and so rewrite the equation for the Folium of Descartes by using the substitution y=x^2z, which results in

1+x^3z^3=3z.

Now it is easy to see that as x\to0, we have z\to1/3, giving us a good quadratic approximation at the origin.
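If you want to verify this by machine, here is a minimal SymPy sketch of the substitution and the limit (a Python stand-in for the calculation above; any computer algebra system would do):

```python
from sympy import symbols, cancel, solve

x, z = symbols('x z')

# Folium of Descartes: x^3 + y^3 = 3xy, with the substitution y = x^2 * z
folium = x**3 + (x**2 * z)**3 - 3 * x * (x**2 * z)

# Dividing through by x^3 gives 1 + x^3 z^3 - 3z
reduced = cancel(folium / x**3)
print(reduced)

# Letting x -> 0 leaves 1 - 3z = 0, so z -> 1/3
print(solve(reduced.subs(x, 0), z))   # [1/3]
```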

Success!  So I thought I’d try some more examples, and see how they worked out.  I first just changed the exponent of x, looking at the curve

x^n+y^3=3xy,

shown below when n=6.

Day141Folium3.png

What would be a best approximation near the origin?  You can almost eyeball a fifth-degree approximation here, but let’s assume we don’t know the appropriate power and make the substitution y=x^kz, with k yet to be determined. This results in

x^{3k-n}z^3+1=3zx^{k+1-n}.

Now observe that when k=n-1, we have

x^{2n-3}z^3+1=3z,

so that \displaystyle\lim_{x\to0}z=1/3. Thus, in our case with n=6, we see that y=x^5/3 is a good approximation to the curve near the origin.  The graph below shows just how good an approximation it is.

Day141Folium4.png

OK, I thought to myself, maybe I just got lucky.  Maybe I should introduce a change which would really alter the nature of the curve, such as

x^3+y^3=3xy+1,

whose graph is shown below.

Day141Folium5

Here, the curve passes through the x-axis at x=1, with what appears to be a linear pass-through.  This suggests, given our previous work, the substitution y=(x-1)z, which results in

x^3+(x-1)^3z^3=3x(x-1)z+1.

We don’t have much luck with \displaystyle\lim_{x\to1}z here.  But if we move the 1 to the other side and factor, we get

(x-1)(x^2+x+1)+(x-1)^3z^3=3x(x-1)z.

Nice!  Just divide through by x-1 to obtain

x^2+x+1+(x-1)^2z^3=3xz.

Now a simple calculation reveals that \displaystyle\lim_{x\to1}z=1. And sure enough, the line y=x-1 does the trick:

Day141Folium6

Then I decided to change the exponent again by considering

x^n+y^3=3xy+1.

Here is the graph of the curve when n=6:

Day141Folium7

It seems we have two roots this time, with linear pass-throughs.  Let’s try the same idea again, making the substitution y=(x-1)z, moving the 1 over, factoring, and dividing through by x-1.  This results in

x^{n-1}+x^{n-2}+\cdots+1+(x-1)^2z^3=3xz.

It is not difficult to calculate that \displaystyle\lim_{x\to1}z=n/3.
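If you want to check this limit by machine for a particular exponent, here is a small SymPy sketch with n=6 (just an illustration of the calculation described above):

```python
from sympy import symbols, cancel, solve

x, z = symbols('x z')
n = 6

# x^n + y^3 = 3xy + 1, with the substitution y = (x - 1) z
curve = x**n + ((x - 1) * z)**3 - 3 * x * (x - 1) * z - 1

# Move the 1 over, factor, and divide through by x - 1
reduced = cancel(curve / (x - 1))

# At x = 1 the reduced equation becomes n - 3z = 0, so z = n/3
print(solve(reduced.subs(x, 1), z))   # [2], which is n/3 when n = 6
```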

Now things become a bit more interesting when n is even, since there is always a root at x=-1 in this case.  Here, we make the substitution y=(x+1)z, move the 1 over, and divide by x+1, resulting in

\dfrac{x^n-1}{x+1}+(x+1)^2z^3=3xz.

But since n is even, x^2-1 is a factor of x^n-1, so we have

(x-1)(x^{n-2}+x^{n-4}+\cdots+x^2+1)+(x+1)^2z^3=3xz.

Substituting x=-1 in this equation gives

-2\left(\dfrac n2\right)=3(-1)z,

which immediately gives \displaystyle\lim_{x\to-1}z=n/3 as well!  This is a curious coincidence, for which I have no nice geometrical explanation.  The case when n=6 is illustrated below.

Day141Folium8

This is where I stopped — but I was truly surprised that everything I tried actually worked.  I did a cursory online search for Taylor series of implicitly defined functions, but this seems to be much less popular than series for y=f(x).

Anyone more familiar with this topic care to chime in?  I really enjoyed this brief exploration, and I’m grateful that William Meisel asked his question about the Folium of Descartes.  These are certainly instances of a larger phenomenon, but I feel the statement and proof of any theorem will be somewhat more complicated than the analogous results for explicitly defined functions.

And if you find some neat examples, post a comment!  I’d enjoy writing another follow-up post if there is continued interest in this topic.

The Geometry of Polynomials

I recently needed to make a short demo lecture, and I thought I’d share it with you.  I’m sure I’m not the first one to notice this, but I hadn’t seen it before and I thought it was an interesting way to look at the behavior of polynomials where they cross the x-axis.

The idea is to give a geometrical meaning to an algebraic procedure:  factoring polynomials.  What is the geometry of the different factors of a polynomial?

Let’s look at an example in some detail:  f(x)=2(x-4)(x-1)^2.

poly0b

Now let’s start looking at the behavior near the roots of this polynomial.

poly0c

Near x=1, the graph of the cubic looks like a parabola — and that may not be so surprising given that the factor (x-1) occurs quadratically.

poly0d

And near x=4, the graph passes through the x-axis like a line — and we see a linear factor of (x-4) in our polynomial.

But which parabola, and which line?  It’s actually pretty easy to figure out.  Here is an annotated slide which illustrates the idea.

Day137poly1

All you need to do is set aside the quadratic factor of (x-1)^2, and substitute the root, x=1, in the remaining terms of the polynomial, then simplify.  In this example, we see that the cubic behaves like the parabola y=-6(x-1)^2 near the root x=1. Note the scales on the axes; if they were the same, the parabola would have appeared much narrower.

We perform a similar calculation at the root x=4.

Day137poly2

Just isolate the linear factor (x-4), substitute x=4 in the remaining terms of the polynomial, and then simplify.  Thus, the line y=18(x-4) best describes the behavior of the graph of the polynomial as it passes through the x-axis.  Again, note the scale on the axes.

We can actually use this idea to help us sketch graphs of polynomials when they’re in factored form.  Consider the polynomial f(x)=x(x+1)^2(x-2)^3.  Begin by sketching the three approximations near the roots of the polynomial.  This slide also shows the calculation for the cubic approximation.

Day137poly3.png
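For reference, here is how each local approximation works out, using the same procedure as before: set aside the repeated factor, then substitute the root into what remains.  Near x=-1, we get

y=(-1)(-1-2)^3(x+1)^2=27(x+1)^2;

near x=0,

y=(0+1)^2(0-2)^3x=-8x;

and near x=2,

y=2(2+1)^2(x-2)^3=18(x-2)^3.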

Now you can begin sketching the graph, starting from the left, being careful to closely follow the parabola as you bounce off the x-axis at x=-1.

poly1d

Continue, following the red line as you pass through the origin, and then the cubic as you pass through x=2.  Of course you’d need to plot a few points to know just where to start and end; this just shows how you would use the approximations near the roots to help you sketch a graph of a polynomial.

poly1f

Why does this work?  It is not difficult to see, but here we need a little calculus.  Let’s look, in general, at the behavior of f(x)=p(x)(x-a)^n near the root x=a.  Given what we’ve just been observing, we’d guess that the best approximation near x=a would just be y=p(a)(x-a)^n.

Just what does “best approximation” mean?  One way to think about approximating, calculuswise, is matching derivatives — just think of Maclaurin or Taylor series.  My claim is that the first n derivatives of f(x)=p(x)(x-a)^n and y=p(a)(x-a)^n match at x=a.

First, observe that the first n-1 derivatives of both of these functions at x=a must be 0.  This is because a factor of (x-a) always remains — since at most n-1 derivatives are taken, there is no way for the (x-a)^n factor to completely “disappear.”

But what happens when the nth derivative is taken?  Clearly, the nth derivative of p(a)(x-a)^n at x=a is just n!p(a).  What about the nth derivative of f(x)=p(x)(x-a)^n?

Thinking about the product rule in general, we see that the form of the nth derivative must be f^{(n)}(x)=n!p(x)+ (x-a)(\text{terms involving derivatives of } p(x)).  Whenever a derivative falls on p(x) rather than on (x-a)^n, fewer than n derivatives hit (x-a)^n, so at least one factor of (x-a) survives in that term.

So when we take f^{(n)}(a), we also get n!p(a).  This makes the nth derivatives match as well.  And since the first n derivatives of p(x)(x-a)^n and p(a)(x-a)^n match, we see that p(a)(x-a)^n is the best nth degree approximation near the root x=a.
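If you’d like to see this derivative-matching in action, here is a small SymPy sketch using the earlier example f(x)=2(x-4)(x-1)^2, with a=1 and n=2 (purely illustrative):

```python
from sympy import symbols, diff

x = symbols('x')
a, n = 1, 2

f = 2 * (x - 4) * (x - 1)**2    # the original cubic
p_a = 2 * (a - 4)               # p(a) = 2(1 - 4) = -6
g = p_a * (x - a)**n            # the claimed best approximation, -6(x - 1)^2

# The 0th through nth derivatives agree at x = a
for k in range(n + 1):
    print(k, diff(f, x, k).subs(x, a), diff(g, x, k).subs(x, a))
# prints 0 0 0, then 1 0 0, then 2 -12 -12
```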

I might call this observation the geometry of polynomials. Well, perhaps not the entire geometry of polynomials….  But I find that any time algebra can be illustrated graphically, students’ understanding gets just a little deeper.

Those who have been reading my blog for a while will be unsurprised at my geometrical approach to algebra (or my geometrical approach to anything, for that matter).  Of course a lot of algebra was invented just to describe geometry — take the Cartesian coordinate plane, for instance.  So it’s time for algebra to reclaim its geometrical heritage.  I shall continue to be part of this important endeavor, for however long it takes….

Geometrical Dissections III: Octagons and Dodecagons

It has been quite a while since I’ve written about geometrical dissections.  For a brief refresher, you might want to look at my first post on dissections.

But recently my interest has been rekindled — to the point that I started writing a paper a few weeks ago!  I often like to share mathematics I’m working on as it’s happening to give you some idea of the process of doing mathematics.  The paper has quite a bit more mathematics in it than I’ll include in this post, but I’ll try to give you the gist of what’s involved.

Recall (again, see my first post for a more thorough discussion) that I’m interested in finding dissections that you can draw on graph paper — I find these more enjoyable as well as being accessible to a wider audience.  So all the dissections I’ll talk about will be based on a grid.

The first example is a dissection of two octagons to one, such as shown below.

Day118Dissection2

In other words, you can take the pieces from the two smaller octagons, move them around, and build the larger octagon.  If you look carefully, you’ll notice that the larger octagon is rotated 45° relative to the smaller octagons.  Geometrically, this is because to get an octagon twice the area of a given octagon, you scale the side lengths by \sqrt2.  Imagine a square — a square with side length S has area S^2, while a square with side length \sqrt2S has area (\sqrt2S)^2=2S^2  — which is double the area.

Now think of a segment of unit length.  To get a segment of length \sqrt2, you need to take a diagonal of a unit square, which rotates the segment by 45° as well as scales it by a factor of \sqrt2.  This is why the larger octagon is rotated with respect to the smaller ones.

But what makes this dissection interesting is that there is an infinite family of very similar dissections.  By slightly varying the parameters, you get  a dissection which looks like this:

Day118Dissection3

Here, you can clearly see the 45° rotation of the larger octagon.  I always enjoy finding infinite families of dissections — it is very satisfying to discover that once you’ve found one dissection, you get infinitely many more for free!

The proof that this always works (which I will omit here!) involves using a geometrical argument — using arbitrary parameters — to show that the green triangles always have to be right isosceles triangles.  This is the essential feature of the dissection which makes it “work” all the time.

The second example I’m including in the paper is a dissection on a triangular grid, like the one below.

Day118Dissection4

Note that the figure on the left is an irregular dodecagon; that is, it has twelve sides.  Recall that the interior angles of a regular dodecagon have measure 150° — but so do the angles of this irregular dodecagon as it is drawn on a triangular grid.

If you look carefully, you’ll see how the sides in the pieces of the dissection on the right match up with sides of the same length from the other pieces.  Also, looking at the dissection on the right, you’ll see that around all the interior vertices, there are two angles with measure 150° and one with measure 60°, adding up to 360° — as we would expect if the pieces fit exactly together.

And — as you might have guessed from the first example — this is also one of an infinite family of dissections.  As long as the side lengths of the irregular dodecagon alternate and the vertices stay on the triangular lattice, there is a dissection to a rhombus.  Below is another example, and there are infinitely many more….

Day118Dissection5

My third and final example involves irregular dodecagons, but this time on a square grid.  And while I found the previous two dissections several years ago, I found this example just last week!  What inspired me was one of my favorite dissections — from an irregular dodecagon to a square — also found many years ago.

Day118Dissection1

In the spirit of the first two examples, I asked myself if this dissection could be one of an infinite family.  The difficulty here was that there were three parameters determining an irregular dodecagon, as shown below.

Day118Dodecagon

We do need b>c so that the dodecagon is convex and actually has 12 sides; if b=c, four pairs of sides are in perfect alignment and the figure becomes an octagon.

There is an additional constraint, however, which complicates things a bit.  The area of this dodecagon must also be the area of some tilted square with vertices on the grid, as illustrated below.  Note that the area of this square is just d^2+e^2.

Day118Square

It is not a difficult geometry exercise to show that the area of the dodecagon is a^2+4(a+b)(c+b).  So in order to create a dissection, we must find a solution to the equation

a^2+4(a+b)(c+b)=d^2+e^2.
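Just to get a feel for this equation, here is a naive Python search for small positive integer solutions; the search range and positivity assumptions are mine, and the infinite family mentioned below takes a=e, while this brute-force sketch simply looks for any small solutions.

```python
from itertools import product

# Naive search for small positive integer solutions of
#   a^2 + 4(a + b)(c + b) = d^2 + e^2,   with b > c,
# just to see that solutions exist.  The range 1..12 is arbitrary.
solutions = [(a, b, c, d, e)
             for a, b, c, d, e in product(range(1, 13), repeat=5)
             if b > c and a**2 + 4 * (a + b) * (c + b) == d**2 + e**2]

print(len(solutions))
print([s for s in solutions if s[0] == s[4]][:5])   # a few solutions with a = e
```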

Again, here is not the place to go into details.  But it is possible to find an infinite family of solutions when a=e.  You get a dissection which looks like this:

Day118Dissection6

I was particularly pleased to have found this eight-piece dissection since most of my attempts until this point had ten pieces or more.  And to give you a sense of this family of dissections, here is an example with different parameters within this family.

Day118Dissection7

You can definitely see the resemblance, but it is also clear that the dodecagon and square are not the same shape as those in the previous dissection.

So these are the three families of geometrical dissections I’ll be including in my paper.  I hope these examples might inspire you to pick up a pencil and graph paper and try to find some infinite families of your own!

 

Polygons

In working on a proposal for a book about three-dimensional polyhedra last week, I needed to write a brief section on polygons.  I found that there were so many different types of polygons with such interesting properties, I thought it worthwhile to spend a day talking about them.  If you’ve never thought a lot about polygons before, you might be surprised how much there is to say about them….

So start by imagining a polygon — make it a pentagon, to be specific.  Try to imagine as many different types of pentagons as you can.

How many did you come up with?  I stopped counting when I reached 20….

Likely one of the first to come to mind was the regular pentagon — five equal sides meeting at interior angles of 108°.  Question:  did you only think of the vertices and edges, or did you include the interior as well?

Day106Penta3

Why consider this question?  An important geometrical concept is that of convexity.  A convex polygon has the property that a line segment joining any two points in the polygon lies entirely within the polygon.

Day106Convexity

The two polygons on the left are convex, while the two on the right are not.  But note that for this definition to make any sense at all, a polygon must include all of its interior points.

Convex polygons have various properties.  For example, if you take the vertices of a convex polygon and imagine stretching a rubber band around them and letting it snap tight, the rubber band will trace out the edges of the polygon.  See this Wikipedia article on convex polygons for more properties of convex polygons.

Did the edges of any of your pentagons cross each other, like the one on the left below?

Day106Penta4

In this picture, we indicate vertices with dots to illustrate that this is in fact a pentagon.  The points where the edges cross are not considered vertices of the polygon.  The polygon on the right is actually a nonconvex decagon, even though it bears a resemblance to the pentagon on the left.

But not so fast!  If you ask Mathematica to draw the polygon on the left with the five vertices in the order they are traversed when drawing the edges, here is what you get:

Day106Penta1.png

So what’s going on here?  Why is the pentagon empty in the middle?  When I gave the same instructions using TikZ in LaTeX (which is how I created the light blue pentagram shown above), the middle pentagon was filled in.

Some computer graphics programs use the even-odd rule when drawing self-intersecting polygons.  This may be thought of in a few ways.  First, if you imagine drawing a segment from a point in the interior pentagon to a point outside, you have to cross two edges of the pentagon, as shown above.  If you draw a segment from a point in one of the light red regions to a point outside, you only need to cross one edge.  Points which require crossing an even number of edges are not considered as being interior to the polygon.

Said another way, if you imagine drawing the pentagram, you will notice that you are actually going around the interior pentagon twice.  Any region traversed twice (or an even number of times) is not considered interior to the polygon.
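To make the even-odd rule concrete, here is a minimal Python sketch of the standard parity test: cast a ray from the point and count how many edges it crosses, and an odd count means the point gets filled.  (The pentagram coordinates below are just an illustrative example.)

```python
import math

def inside_even_odd(point, vertices):
    """Even-odd rule: cast a ray to the right from the point and count
    how many polygon edges it crosses; an odd count means inside."""
    x, y = point
    crossings = 0
    for i in range(len(vertices)):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % len(vertices)]
        if (y1 > y) != (y2 > y):                       # edge straddles the ray's height
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > x:                            # crossing is to the right of the point
                crossings += 1
    return crossings % 2 == 1

# Vertices of a pentagram, listed in the order the edges are drawn
star = [(math.cos(math.radians(90 + 144 * k)),
         math.sin(math.radians(90 + 144 * k))) for k in range(5)]

print(inside_even_odd((0.0, 0.0), star))   # False -- the central pentagon is left empty
print(inside_even_odd((0.0, 0.8), star))   # True  -- a point inside one of the star's points
```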

Why would you want to color a polygon in this way?  There are mathematical reasons, but if you watch this video by Vi Hart all the way through, you’ll see some compelling visual evidence why you might want to do this.

We call polygons whose edges intersect each other self-intersecting or crossed polygons.  And as you’ve seen, including the interiors can be done in one of two different ways.

But wait!  What about this polygon?  Can you really have a polygon where a vertex actually lies on one of the edges?

Day106Penta2.png

Again, it all depends on the context.  I think you’re beginning to see that the question “What is a pentagon?” is actually a subtle question.  There are many features a pentagon might have which you likely would not have encountered in a typical high school geometry course, but which still merit some thought.

Up to now, we’ve just considered a polygon as a two-dimensional geometrical object.  What changes when you jump up to three dimensions?

Again, it all depends on your definition.  You might insist that a polygon must lie in a plane, but….

It is possible to specify a polygon by a list of points in three dimensions — just connect the points one by one, and you’ve got a polygon!  Of course with this definition, many things are possible — maybe you can repeat points, and maybe the points do not all lie in the same plane.

An interesting example of such a polygon is shown below, outlined in black.

Day106Cube

It is called a Petrie polygon after the mathematician who first described it.  In this case, it is a hexagon — think of holding a cube by two opposite corners, and form a hexagon by the six edges which your fingers are not touching.

There is a Petrie polygon for every Platonic solid, and it may be defined as follows:  it is a closed path of connected edges such that any two consecutive edges lie on a common face, but no three consecutive edges do.  If you look at the figure above, you’ll find this is an alternative way to define a Petrie hexagon on a cube.

And if that isn’t enough, it is possible to define a polygon with an infinite number of sides!  Just imagine the following jagged segment continuing infinitely in both directions.

Day106Apeirogon

This is called an apeirogon, and may be used to study the tiling of the plane by squares, four meeting at each vertex of the tiling.

And we haven’t even begun to look at polygons in other geometries — spherical geometry, projective geometry, inversive geometry….

Suffice it to say that the world of polygons is much more than just doodling a few triangles, squares or pentagons.  It always amazes me how such a simple idea — polygon — can be the source of such seemingly endless investigation!  And it serves as another illustration of the seemingly infinite diversity within the universe of Geometry….

Still more on: What is…Inversive Geometry?

Now for the final post on inversive geometry!  I’ve been generating some fascinating images, and I’d like to share a bit about how I make them.

2017-07-19Inversion3v2.png

In order to create such images in Mathematica, you need to go beyond the geometrical definition of inversion and use coordinate geometry.  Let’s take a moment to see how to do this.

Recall that P′, the inverse point of P, is that point on the ray drawn from the origin O through P such that

[OP]\cdot[OP']=1,

where [AB] denotes the distance from A to B.  Feel free to reread the previous two posts on inversive geometry for a refresher (here are links to the first post and the second post).

Now suppose that the point P has Cartesian coordinates (x,y).  Points on a ray drawn from the origin through P will then have coordinates (kx, ky), where k>0.  Thus, we just need to find the right k so that the point P'=(kx,ky) satisfies the definition of an inverse point.

This is just a matter of a little algebra:  since [OP]=\sqrt{x^2+y^2} and [OP']=k\sqrt{x^2+y^2}, the condition [OP]\cdot[OP']=1 forces k=1/(x^2+y^2).  The result is

P'=\left(\dfrac{x}{x^2+y^2},\dfrac{y}{x^2+y^2}\right).

What this means is that if you have an equation of a curve in terms of x and y, then substituting x/(x^2+y^2) everywhere you see x and y/(x^2+y^2) everywhere you see y gives an equation for the inverse curve.

Let’s illustrate with a simple example — in general, the computer will be doing all the work, so we won’t need to actually do the algebra in practice.  We’ll look at the line x=1.  From our previous work, we know that the inverse curve must be a circle going through the origin.

Making the substitution just discussed, we get the equation

\dfrac x{x^2+y^2}=1,

which may be written (after completing the square) in the form

\left(x-\dfrac12\right)^2+y^2=\dfrac14.

It is not hard to see that this is in fact a circle which passes through the point (0,0).
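Here is a quick SymPy sketch confirming that algebra (purely a check, since the computation above is already complete):

```python
from sympy import symbols, cancel, expand, Rational

x, y = symbols('x y')

# Invert the line x = 1: substitute x -> x/(x^2 + y^2) into x - 1 = 0
inverse_eq = x / (x**2 + y**2) - 1

# Clearing the denominator gives the polynomial x - x^2 - y^2
cleared = cancel(inverse_eq * (x**2 + y**2))

# ... which is exactly the circle (x - 1/2)^2 + y^2 = 1/4
circle = (x - Rational(1, 2))**2 + y**2 - Rational(1, 4)
print(expand(cleared + circle))   # 0, so the two equations agree
```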

Now we need to add one more step.  In the definition of an inverse point, we had the point O being the origin with coordinates (0,0).  What if O were some other point, say with coordinates (a,b)?

Let’s proceed incrementally.  Beginning with a point (x,y), translate to the point (x-a,y-b) so that the point (a,b) now acts like the origin.  Now use the previous formula to invert:

\left(\dfrac{x-a}{(x-a)^2+(y-b)^2},\dfrac{y-b}{(x-a)^2+(y-b)^2}\right).

Finally, translate back:

\left(a+\dfrac{x-a}{(x-a)^2+(y-b)^2},b+\dfrac{y-b}{(x-a)^2+(y-b)^2}\right).

This is now the inverse of the point (x,y) about the point (a,b).
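In code, the inversion step is only a few lines.  Here is a Python sketch of the formula above (the images in this post were made in Mathematica; the function name and sample points below are just for illustration):

```python
def invert(point, center=(0.0, 0.0)):
    """Invert the point in the unit circle centered at `center`,
    using the translate-invert-translate formula above."""
    x, y = point
    a, b = center
    dx, dy = x - a, y - b
    r2 = dx * dx + dy * dy        # squared distance to the center of inversion
    if r2 == 0:
        raise ValueError("the center of inversion has no inverse point")
    return (a + dx / r2, b + dy / r2)

# For example, inverting points of the parabola y = x^2 about (1/2, -1/2)
# produces a curve like the ones in the image above.
parabola = [(t / 100, (t / 100)**2) for t in range(-300, 301)]
curve = [invert(p, (0.5, -0.5)) for p in parabola]
print(curve[0])
```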

So what you see in the above image is several copies of the parabola y=x^2 inverted about a series of equally spaced points along the line segment with endpoints (1/2,-1/2) and (3/2,1/2).  This might seem a little arbitrary, but it takes quite a bit of experimentation to find a set of points to invert about in order to create an aesthetically pleasing image.

Of course there is another perspective on accomplishing the same task — just shift the parabolas first, invert about the origin, and then shift back.  This is geometrically equivalent (and the algebra is the same); it just depends on how you want to look at it.

Here is another image created by inverting the parabola y=x^2 about points which lie on a circle.

2017-07-18Inversion1v2

And while we’re on the subject of inverting parabolas, let’s take a moment to discuss the cardioid example we looked at in our last conversation about inversion:

Cardioid2

To prove that this construction of circles actually yields a cardioid, the trick is to take the inverse of a parabola about its focus.  If you do this, the tangent lines of the parabola will then invert to circles tangent to a cardioid.  I won’t go into all the details here, but I’ll outline how the proof goes using the following diagram.

Day104ParabolaProperty

Draw a line (shown in black) tangent to the blue parabola at its vertex; the inverse curves are shown in the same color, but dashed.  Note that the black circle must be tangent to the blue cardioid since the inverse black line is tangent to the inverse parabola.

The small red disk is the focus of the parabola.  Key to the proof is the property of the parabola that if you draw a line from the focus to a point on the black line and then bounce off at a right angle (the red lines), the resulting line is tangent to the parabola.  So the inverse of this line — the red dashed circle — must be tangent to the cardioid.

Since perpendicularity is preserved and the line from the focus inverts to itself (since we’re inverting about the focus), the red circle must be perpendicular to this line — meaning that the line from the focus in fact contains a diameter, and hence the center, of the red circle.  Then using properties of circles, you can show that all centers of circles formed in this way lie on a circle (shown dotted in purple) which is half the size of the black circle.  I’ll leave the details to you….

Finally, I’d like to show a few examples of using the other conic sections.  Here is an image with 80 inversions of an ellipse around centers which lie on a line segment.

2017-07-22Inversion1v2

And here is an example of 100 hyperbolas inverted around centers which lie on a line segment.  Since the tails of the branches of a hyperbola all go to infinity, they all meet at the same point when inverted.

2017-07-22Inversion2v2

So now you know how to work with geometrical inversion from an algebraic standpoint.  I hope seeing some of the fascinating images you can create will inspire you to try creating some yourself!

Imagifractalous! 7: Fractal Binary Trees IV

Bridges 2017 is next week in Waterloo, Ontario, Canada!  We’re all looking forward to it…Nick and his parents are also going, and I’ll get to see a lot of new friends I met last year in Finland.

In preparation, I’ve been creating a portfolio of binary and ternary trees.  I’ve been coloring and highlighting the branches to make them easier to understand, so I thought I’d take the opportunity to share several of them.  Mathematics just as it’s happening….

Here is an example of what I’m talking about.

2017-07-20-0is11v1

Recall that we’re generalizing the idea of “left” and “right” branches of a binary tree — rather than just rotating left or right (and possibly scaling), we’re allowing any affine transformation to correspond to “left” and “right.”  In order to avoid any confusion, let’s refer to the transformations as “0” and “1.”

In the tree above, the 0 branches are shown in black, and the 1 branches in red.  Notice that sometimes, the 0 branches seem to go to the right, other times, to the left.  This is a common occurrence when you allow the branching to be determined by more general transformations.

Here, you’ll notice that 0 and 11 take you to the same place — beginning with the trunk (the vertical black line at the right), if you follow one black branch, you’ll end up at the same blue node as you would if you followed two red branches.  In fact, no matter what blue node you start at, following one black branch will always take you to the same place as following two red branches.

Here is not the place to go into too many details, but let me give you the gist of what we’re studying.  If you denote the transformation which generates the black branches by B_0 and the one which generates the red branches by B_1, you will observe behavior like that in the above tree if

B_0=B_1+B_1^2.

Right now, most of the work I’m doing revolves around saying that I want to generate a tree with a certain property (like 0 and 11 going to the same node), and then solving the corresponding matrix equation to find interesting examples of trees with this property.  Usually the linear algebra involved is fairly complicated — feel free to comment if you’d like to know more.  As I said, this blog is not the place for all the details….
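To give a flavor of the kind of check involved, here is a small NumPy sketch.  It assumes a simple model of the branching: each branch vector is the corresponding transformation applied to the branch it grows from, and a node is reached by summing the branch vectors along its path from the top of the trunk.  Under that assumption, the string 0 reaches v+B_0v and the string 11 reaches v+B_1v+B_1^2v, so asking them to agree for every v forces the equation above.  The particular B_1 below is made up purely for illustration.

```python
import numpy as np

# A made-up "1" transformation: a rotation by 36 degrees, scaled by 0.6
t = np.pi / 5
B1 = 0.6 * np.array([[np.cos(t), -np.sin(t)],
                     [np.sin(t),  np.cos(t)]])

# Choose the "0" transformation so that the strings 0 and 11 reach the same node
B0 = B1 + B1 @ B1

v = np.array([0.0, 1.0])                 # the trunk vector

node_0  = v + B0 @ v                     # one black ("0") branch
node_11 = v + B1 @ v + B1 @ (B1 @ v)     # two red ("1") branches

print(np.allclose(node_0, node_11))      # True
```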

Below is another interesting example.

2017-07-20-0is00v2.png

In this tree, 0 and 00 go to the same node.  What this means in terms of branching is that there can never be two consecutive black branches (again, 0 is black and 1 is red) — taking the additional 0 branch leaves you in exactly the same place.

Note that for this property to hold, there is no restriction on what the 1 transformation can be.  Above, the transformation B_1 is a scaled rotation by 60°.  This is what produces the spirals of red branches emanating from the nodes of the tree.

2017-07-20-0is01v1.png

In this example, 0 and 01 go to the same node (0 is black, 1 is red).  Implications for the tree are that the red branches zigzag back and forth, getting shorter with each iteration.  Moreover, there aren’t any red branches drawn after a black branch (except the first time, off the end of the trunk of the tree).  This is because 01 — taking a red branch after a black branch — takes you to the same place as 0, which is taking a black branch.  In other words, the red branch doesn’t take you any further.

2017-06-27-01is10.png

Here, 01 and 10 take you to the same node (again, 0 is black and 1 is red).  This is easy to see — there are many quadrilaterals in the tree formed by alternating red and black branches.  In addition, the iterations of this tree move progressively higher each time — so, for example, all the nodes at the top are at a depth of 4.

If you look at the number of new nodes at each level, you get a sequence which begins

2, 3, 6, 12, 24, 48, 96, 192, 384, 768, ….

After the second level, it seems that the number of new nodes doubles with each iteration.  I say “seems” since it is generally very difficult to prove that such a relationship continues up arbitrarily many levels.  Although it is easy to find the sequence of numbers of nodes for any tree using Mathematica, I have rigorous proofs of correctness for only three cases so far.
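Here is a rough Python sketch of how such a count can be automated, using the same branching model as in the earlier sketch; the transformations below are stand-ins, not the ones behind the trees pictured.

```python
import numpy as np
from itertools import product

def node_counts(B0, B1, levels, tol=1e-6):
    """Count the new (previously unseen) nodes at each level of a tree,
    where the node for a string is the sum of the branch vectors along
    its path, each branch being the transform of the previous branch."""
    B = {'0': np.asarray(B0, dtype=float), '1': np.asarray(B1, dtype=float)}
    v = np.array([0.0, 1.0])                 # trunk vector; its tip is the starting node
    key = lambda p: tuple(np.round(p / tol).astype(int))   # merge nearly equal nodes
    seen = {key(v)}
    counts = []
    for k in range(1, levels + 1):
        new = 0
        for s in product('01', repeat=k):
            node, branch = v.copy(), v.copy()
            for c in s:
                branch = B[c] @ branch
                node = node + branch
            if key(node) not in seen:
                seen.add(key(node))
                new += 1
        counts.append(new)
    return counts

# Stand-in transformations: scaled rotations by -60 and +60 degrees
R = lambda t: np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])
print(node_counts(0.55 * R(-np.pi / 3), 0.55 * R(np.pi / 3), 6))
```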

What makes the proofs so difficult?  The following tree is another example of a case where 01 and 10 go to the same node.

2017-06-25-01is10v1

But the sequence of new nodes at each level is just 2, 3, 4, 5, 6, ….  (This is one case I’ve rigorously proved.)  The transformations B_0 and B_1 used to make this tree, in addition to forcing 01 and 10 to go to the same node, also have additional properties which force any two strings of the same length with the same number of 0’s (and therefore the same number of 1’s as well) to go to the same node.  So to prove the sequence of nodes in the previous case is 2, 3, 6, 12, 24, 48, …, you need to say which strings take you to the same nodes, and prove that this pattern continues no matter how far you go up — and that there are no “surprise” coincidences of other nodes as you proceed.

It is also possible to look at infinite strings going to the same node, as shown below.  The linear algebra gets quite a bit more involved in these cases.

2017-07-02infp1q1.png

Here, 0 and 1 do in fact correspond to going left and right, respectively.  The thicker black path corresponds to the string 011111…, that is, going left once, and then going to the right infinitely often, creating the spiral you see above.  The thicker green path corresponds to alternating 10101010… infinitely often, creating a zigzag path which approaches exactly the same point as the black spiral does.

Here is another example, where the spiral 011111… (in black) approaches the same point as the infinite zigzag 100100100100… (in purple).

2017-07-02infp1q2.png

These are some of my favorite examples.  But I should remark that once you know how to mathematically find transformations which produce trees with a desired property, it takes a lot of fiddling around to find aesthetically pleasing examples.

I hope these examples give you a better idea of the nature of our research.  I’ll update you on Bridges when I get back, and let you know about any interesting comments other participants make about our trees.  I leave on Wednesday, and will post pictures daily on my Twitter @cre8math if you’d like a daily dose of mathematical art!

More on: What is…Projective Geometry?

This week, I thought I’d go a little deeper into the subject of projective geometry (feel free to reread last week’s post for a refresher).  Why? I think this is a good opportunity to discuss the idea of an algebraic model of a geometry.  Certainly using Cartesian coordinates in the Euclidean plane makes many geometrical problems easier to approach.

So what would a coordinate system in projective geometry look like?  The most commonly used system is homogeneous coordinates.  Let’s see how they work.

The difficulty with adding the line at infinity is that we need infinitely many points in addition to those already in the plane.  Perhaps you might imagine that adding an “infinite” coordinate might do the trick, but there is an alternative — and algebraically simpler — approach.

First, think of the Euclidean plane as being the plane z = 1 in three-dimensional Cartesian coordinates, so that every Euclidean point (x, y) is given homogeneous coordinates (x, y, 1).  But we don’t want to make the z-coordinate special — we want all the homogeneous coordinates to have similar interpretations.  We accomplish this by giving each point infinitely many names — specifically, for any k ≠ 0, the coordinates (kx, ky, k) describe the same point as (x, y, 1).  Geometrically, you might think of this as saying that the line through the origin and (x, y, 1) (excluding the origin itself) is, in some sense, the point (x, y, 1).

So if z ≠ 0, the point (x, y, z) in homogeneous coordinates is another name for the Euclidean point (x/z, y/z).  But why would you want to do this?

The punch line is that we can now use z = 0 to indicate that a point is on the line at infinity!  We still keep the convention that if k ≠ 0, the homogeneous coordinates (kx, ky, 0) describe the same point as (x, y, 0), as long as x and y are not both 0.

So our system of homogeneous coordinates contains all points (x, y, z) such that not all three coordinates are 0.  Any point with z ≠ 0 corresponds to a Euclidean point, and any point with z = 0 corresponds to a point on the line at infinity.

Is it really worth all this trouble?  There are many interesting features of such a coordinate system — and I’d like to mention a few of them here, omitting the proofs.  There are many resources online that include all the details — one example is the classic Projective Geometry by Veblen and Young available free as an ebook.

Let’s start by looking at equations of lines in projective geometry.  In the Cartesian plane, we may represent a line in the form Ax + By + C = 0, where A, B, and C are not all zero.  In the projective plane, a line has the form Ax + By + Cz = 0, where A, B, and C are not all zero.  Nice, isn’t it?  This form is also consistent with our system of having many names for a point:  if Ax + By + Cz = 0 and k ≠ 0, then A(kx) + B(ky) + C(kz) = 0 as well.  So no problems there.

But the really neat aspect of this representation is how you can use linear algebra in three dimensions to solve many problems in projective geometry.  This isn’t a post about linear algebra — so I’ll limit myself to just one example.  But in case you do know something about linear algebra, I think you should see how it can be used in projective geometry when homogeneous coordinates are used.

We’ll consider the problem of finding the intersection of two lines, say Ax + By + Cz = 0 and Dx + Ey + Fz = 0.  The form of these expressions should remind you of taking the dot product, so that we can rewrite these expressions as

(A,B,C)\cdot(x,y,z)=(D,E,F)\cdot(x,y,z)=0.

Interpreting these coordinates and coefficients as vectors in three-dimensional space, we observe that the common point (x, y, z) is simultaneously perpendicular to both (A, B, C) and (D, E, F), since both dot products are zero.  So (x, y, z) can be found using the cross-product

(x,y,z)=(A,B,C)\times(D,E,F).
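Here is a minimal NumPy illustration of this recipe; the two lines in the example are arbitrary, and the second pair shows what happens for parallel lines.

```python
import numpy as np

def intersect(line1, line2):
    """Intersection of the projective lines [A, B, C] and [D, E, F]
    (that is, Ax + By + Cz = 0), returned in homogeneous coordinates."""
    return np.cross(line1, line2)

# Two ordinary lines: x + y - 1 = 0 and x - y - 3 = 0
p = intersect([1, 1, -1], [1, -1, -3])
print(p, (p[0] / p[2], p[1] / p[2]))   # Euclidean point (2, -1)

# Two parallel lines: x + y - 1 = 0 and x + y - 5 = 0
q = intersect([1, 1, -1], [1, 1, -5])
print(q)   # last coordinate 0: they meet on the line at infinity
```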

Very nice!  Again, there are many such examples, but this is not the place to discuss them….

This algebraic model also suggests another geometric model for the projective plane besides adding a line at infinity to the Euclidean plane.  Begin with the surface of a sphere centered at the origin in three-dimensional Cartesian space, and observe that opposite points on the sphere have homogeneous coordinates that are different names for the same point in the projective plane.

So, in some sense, we have exactly twice as many points as we need — so we identify opposite points on this sphere, so that they are in fact the same point.  (You might recall a similar discussion about points in the post on spherical geometry.)  Thinking about it in another way, we might just consider the top hemisphere of the sphere with only half of the equator, including just one of the endpoints of the semicircle.

And while this model is fairly simple geometrically, it is important to point out that this does not mean that the projective plane lies inside three-dimensional space.  Once we have our hemisphere and semicircle, we have to think about it without any of the surrounding space.  This is not easy to do, but this type of thinking is necessary all the time when studying differential geometry, a topic for another time….

One last benefit to using homogeneous coordinates:  it can easily be abstracted to any number of dimensions.  Do you want a projective space?  Just add a plane at infinity to three-dimensional Euclidean space!  Coordinates are easy — all points (x, y, z, w), with not all coordinates equal to 0.  When w ≠ 0, the point (x, y, z, w) corresponds to the Euclidean point (x/w, y/w, z/w), and when w = 0, the point is on the plane at infinity.

And clearly, there would be no need to stop at three dimensions — it is just as easy to create a projective space of four or more dimensions.

Finding a workable system of coordinates for a particular geometry is not always a simple matter — but finding a system that allows problems to be solved easily is often a key step to studying any type of geometry.  I hope this gives you some insight to yet another aspect of the diverse universe of so many different Geometries….