Like last semester, I had students create a movie in Processing. The main tool I had them use was linear interpolation to create animation effects. As before, I encouraged them to use linear interpolation with *any* numerical parameter in their movie — how much red there is in the background color, the position of objects on the screen, or the width of points and lines, just to name a few possibilities.
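The interpolation itself is a one-line formula; here is a minimal sketch in Python (Processing has a built-in `lerp` with the same behavior), animating one such parameter:

```python
def lerp(start, stop, t):
    """Linear interpolation: t = 0 gives start, t = 1 gives stop."""
    return start + t * (stop - start)

# Animate the red component of the background color over 100 frames.
frames = 100
reds = [lerp(0, 255, f / (frames - 1)) for f in range(frames)]
```

Any numerical parameter can be animated the same way: just feed it a `t` that advances with the frame count.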

Colette created some interesting visual effects. Some students (like her) took advantage of the fact that if you don’t set the background color in the draw function in Processing, successive calls to the draw function overlap the previous ones, giving a sense of movement.

Peyton was inspired when her friends invited her to go to the beach. Here is a screen shot of her movie, where the moon reflects off the rippling waves of the sea at Ocean Beach in San Francisco.

Next, I’d like to share a few pieces from students’ Final Projects. Recall that Final Projects are an important part of the course — during the second half of the semester, we spend one or two days a week working on them. This is the opportunity for students to explore any aspect of the course in more detail.

While Lainey focused on making images which reminded her of her dreams, she began with this collage including several motifs from our coursework throughout the semester. She was one of the few to incorporate L-systems into her Final Project.

Karla was interested in experimenting with color, and in particular in morphing images of the Buddha. She used the RGB values of the pixels in an original image, like the one shown below on the left, to determine the RGB values of the pointillistic image shown on the right.

Also like last semester, I asked students to write a final response paper about their experience with the digital course. The responses were similar to those of last semester, so I’ll only include a few excerpts here.

After having taken this course this semester I really have gained an interest in art and programming that I certainly didn’t have before, even after having taken some courses in programming. I think that actually being able to see the endless things that are able to be created using programming and math is really cool….

One student made a particularly interesting remark:

I wish that I had more knowledge on how to type my own code. There were points in the semester where the art I was creating felt a bit like a coloring page where we were given a page that was already drawn and told to fill it in and try to make it our own.

Of course this is what I want to read! I do want to inspire students to investigate programming further — but students taking this course receive a *mathematics* credit. There is certainly no doubt in your mind about how passionate I am about having students learn to write code, but I do need to emphasize the mathematics of digital art. Maybe next semester I’ll have Nick run some extra sessions on coding for those who are interested in learning more.

I also wanted to share some work from the other course I taught this semester, Linear Algebra and Probability. I have students work with affine transformations and iterated function systems in this course as well. Their first project is a still image using Sage, and their second project is a fractal animation using Processing. Here is Jay’s submission.

And finally, I want to remind you that Mathematics and Digital Art is becoming a university-wide course next year. I am working with the Fine Arts department here at USF to encourage incoming fine arts majors to earn their mathematics credit by creating digital art! I expect the course to be quite a bit larger this Fall, and will be connecting with the Fine Arts faculty to make sure the content meets their needs as well.

In addition, I will be giving a talk at Bridges this summer (in Waterloo, the Canadian one, that is) about my digital art course. I’m expecting that I’ll receive a wide range of comments and suggestions, so the course may be a little different in the fall. That’s one nice feature of this course — it’s not a prerequisite for any other course, so there is some flexibility as far as content is concerned.

Like this semester, I’ll provide monthly updates to let you know what changes I’ve implemented, and also to showcase student work. So stay tuned!


I haven’t repeated a post yet — but as I’m getting ready for Final Exams, I thought it appropriate. I posted this satire over a year ago, but of course it is still relevant. The traditional system of assigning grades as a measure of, well, *anything,* is woefully inadequate. Continue reading at your own peril….

A new semester is about to start! Finally, those students we have worked so hard to prepare are ready to learn The Calculus! (Note: We do not *teach* The Calculus, but rather *facilitate* its learning.)

With class Roster in hand, we make for the Cheesemonger. (Mr. Hadley has been here for decades.) “Mr. Hadley, twenty-one, if you please,” I say, handing him my Roster. That is all there is to it. The next morning, twenty-one perfectly crafted cubes of Preferred Cheese (each having a mass of precisely 100g) are in my class Cupboard — one for each student, clearly labelled. We are ready!

Of course the students’ initial enthusiasm is high, until the inevitable First Exam. Surely much of the material is review — so we use the Petite grater to shave off a small bit of cheese from a student’s Block when they make the inevitable First Mistake. (There is always the exception, naturally — the Micro is used for sign errors, as well as trivial arithmetic errors.) This is done to accurately track students’ progress.

As the semester progresses, the situation becomes more dire. More egregious errors require the use of a Usual grater, while blocks of students who neglect to turn in entire assignments surely warrant being shaved by an Ultra. After years of experience, we are able to gauge *precisely* how much cheese must be shaved for each particular type of error. (Truly, it is all in the wrist.)

The end draws near — and although well-acquainted with the system, students *still* worry about the mass of their Block. They must realize that what is done is done — and No!, it is not possible to replenish the cheese! Is there no justice? They have been warned aplenty, but it is not *our* fault they do not heed our omens.

We are *particularly* meticulous with the shaving of the Final Exam — as it is an important assessment. Certain errors now warrant more grievous penalties — the Micro is no longer relevant. Beware the sign error!

But the wonders of being Complete! As a result of our experience (and training, naturally — Mr. Hadley is *quite* the Cheesemonger), the mass of each student’s Block represents their understanding of The Calculus entirely! Moreover, this mass may be converted to any number of convenient units — a percentage, perhaps, or one of various letters meant to convey a State of Understanding. But the work of the Facilitator is done! We rest well, and enjoy our cuppa. (Yes, some decry the fairness of such a system. No matter. Surely it is the best possible.)


I had never really taken a step back to look at the larger picture, but that’s one of the purposes of this thread. I came up with seven major uses, in roughly chronological order from when I started using graphics in each way:

- Creating fractals.
- Graphic design.
- Rendering polyhedra in three dimensions.
- Mathematical illustration.
- Web design.
- Mathematical art.
- Research.

Today, I’ll give a brief overview, and follow up in future posts with more details about the various platforms I’ve used.

I don’t need to say a lot about the first two uses, since I’ve discussed them in some detail in my posts on Postscript and iterated function systems. My interest in fractals has of course continued, especially given my passion for Koch curves and binary trees. I work less with graphic design now — I used to design stationery and business cards for myself and friends, but now that communication is so largely electronic, there is less of a need. But I do still occasionally design business cards using my art logo, print them onto cardstock, and cut them myself when I need to.

I also spend quite a bit of time designing title and separator slides for presentations. (I frankly admit to *never* having created a PowerPoint presentation in my entire life. Yuck.) I now exclusively use TikZ in LaTeX since you have complete control over the graphics environment.

I haven’t mentioned polyhedra extensively on my blog so far, but my interest in three-dimensional geometry goes back to my undergraduate days. When I started teaching college for the first time, I designed (and eventually wrote a textbook for) a course on polyhedra based on spherical trigonometry.

The main tool I used early on was Mathematica, and when I redid all my graphics for my textbook about five years ago, I used Mathematica. The image you see above was rendered using POV-Ray, a ray-tracing package which allows the user to specify a wide range of parameters to create realistic effects. You can see shadows and a reflection as if the polyhedron were sitting on a shiny, black surface. And while the images generated by POV-Ray are quite a bit nicer than those created with Mathematica, they take more time and effort to create.

Mathematical illustration covers a wide range of uses. Drawing figures for a mathematical paper perhaps first comes to mind. I used a package called PSTricks for quite a while (since the “PS” stands for Postscript), but once I learned about TikZ, I changed over fairly quickly. This means I needed to re-render many older graphics (especially in my textbook) — but since for both packages you need precise coordinates, I already had all the mathematics worked out for the PSTricks function calls. So converting to TikZ wasn’t all that problematic. I will be talking in more detail about TikZ in a future post.

Of course there are also geometrical puzzles and problems, as I’ve discussed before on my blog. Nicely formatted graphics go a long way in making a puzzle appealing to the reader.

Web design is especially important, since I rely so much on my website for *everything.* I have a web page for my artwork, for my publications, talks, and anything I do professionally. I strive for functionality and aesthetics. I’m also a minimalist as far as design goes — “less is more.”

Also important is ease of use for *me.* It’s not hard to add something to my homepage. I don’t have the time (or interest) to redesign my homepage every time I want to add a new link. I’m not a fan of hiding everything in dropdown menus — I want the user to glance at my site, and immediately click on the relevant link without having to pore through menus.

Mathematical art has actually been a fairly recent use of computer graphics for me — I only really started getting into it about four years ago. I don’t feel I have to say a lot about it right here, since it is a common thread throughout so many of my posts. You can visit my art website, or look at my Twitter, where I try to update daily (as much as I can) with new and interesting pieces of mathematical art.

Finally, I’d like to say a word about the last use of computer graphics: research. I think this is *extremely* important, especially with the exploding world of data visualization. Really, this has been perhaps Nick’s and my most important tool when it comes to studying binary trees.

Without going into too much detail about this image, it consists of six scaled and rotated copies of one tree (the ones in color) and a related dual tree (in white). Now count the number of leaves (ends of paths) the white tree has in common with the colorful trees — and you’ll count 1, 5, 10, 10, 5, and 1. These are, of course, binomial coefficients: row 5 of Pascal’s triangle.

Why is this the case? And is there some general result to be proved? The point I’m making is that when Nick and I create routines to display different trees or combinations of trees, we look at *lots* of examples, and seek recurring themes.

We learn a lot by doing this, and the direction of our research is heavily influenced by what we observe. This might not be so surprising, since after all, fractal trees are such geometrical objects. I can honestly say the pace of our progress in making conjectures is directly linked to our ability to produce large numbers of trees and knowing what to look for.

So that’s what I use computer graphics for! In future posts, I’ll discuss the various software packages I use, and try to give some idea of the advantages and disadvantages of each. I’ll give special emphasis to those that are open-source — there is so much out there that is free to download, anyone interested in computer graphics will have no difficulty finding something interesting to experiment with!


We would like to consider the operation of *inverting* points, as illustrated above. Suppose the red circle is a unit circle, the white dot is the origin, and consider the blue point *P.* Now define *P*′ (the orange dot) to be the inverse point of *P* as follows: draw a ray from the origin through *P*, and let *P*′ be that point on the ray such that

|*OP*| · |*OP*′| = 1,

where |*AB*| denotes the distance from *A* to *B.* That is, when you multiply the distances from the origin of a point and its inverse point, you get 1.
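As a quick sketch (the function name and coordinates are my own), the inverse point is easy to compute: dividing by the squared distance rescales a point at distance *d* to distance 1/*d* along the same ray:

```python
def invert(x, y):
    """Inverse of the point (x, y) in the unit circle: same ray from the
    origin, with distances satisfying |OP| * |OP'| = 1."""
    d2 = x * x + y * y          # squared distance from the origin
    if d2 == 0:
        raise ValueError("the origin has no inverse point")
    return (x / d2, y / d2)

# A point at distance 2 from the origin inverts to distance 1/2 on the same ray.
print(invert(2, 0))   # (0.5, 0.0)
```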

Now why would you want to do this? It turns out this operation is actually very interesting, and has many unexpected consequences. Consider the circles in the figure below.

Start with the blue circle. Now for every point *P* on the blue circle, find its inverse point *P*′, and plot it in orange. The result is actually another circle! Yes, it works out exactly, although we won’t prove that here. Note that since every point on the blue circle is outside the unit circle (that is, greater than a distance of 1 from the origin), every inverse point on the orange circle must be inside the unit circle (that is, less than a distance of 1 from the origin).
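That the orange points really form a circle can be checked numerically; here is a sketch, with the particular blue circle (center (3, 0), radius 1) my own choice. Standard results on inversion put its image at center (3/8, 0) with radius 1/8:

```python
import math

# Sample points on a circle of center (3, 0) and radius 1, and invert
# each one in the unit circle centered at the origin.
inverted = []
for k in range(360):
    t = math.radians(k)
    x, y = 3.0 + math.cos(t), math.sin(t)
    d2 = x * x + y * y
    inverted.append((x / d2, y / d2))

# Every inverted point should lie on the circle with center (3/8, 0)
# and radius 1/8.
```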

This is just one geometrical property of inversion, and there are many others. I just want to suggest that the operation of inversion has many interesting properties.

If you pause a moment to think about this operation, you’ll notice there’s a little snag. Not *every* point has an inverse point. There’s just one point which is problematic here: the origin. According to the definition, we would need

|*OO*| · |*OO*′| = 1.

But the distance from the origin to itself is just 0, and it is not possible to multiply 0 by another number and get 1, since 0 times any real number is still 0.

Unless…could we somehow make the distance from *O* to *O*′ *infinite?* It should be clear that *O*′ cannot be a point in the plane different from the origin, since any point in the plane has a finite distance to the origin (just use the Pythagorean theorem).

So how do we solve this problem? We *add* another point to the usual Euclidean plane, called the *point at infinity,* which is usually denoted by the Greek letter ω.

You might be thinking, “Hey, wait a minute! You can’t just *add* a point because you need one. Where would you put it? The plane already extends out to infinity as it is!”

In a sense, that’s correct. But remember the idea of this thread — we’re exploring *what geometry is.* When we looked at taxicab geometry, we just changed the distance, not the points. And with spherical geometry, we looked at a very familiar geometrical object: the surface of a sphere. We can also change the points in our geometrical space.

How can we do this? Though it is perhaps a simplification: we may do this, as long as we do it *consistently* and as long as the result is *interesting.*

What does this mean? In some sense, I can mathematically define *lots* of things. Let’s suppose I want to define a new operation on numbers, like this:

Wow, we’ve just created something likely no one has thought of before! And yes, we probably have — and for good reason. This operation is not very interesting at all — no nice properties like commutativity or associativity, no applications that I can think of (what are you going to do with the 37?). But it is a perfectly legitimate arithmetical operation, defined for all real numbers *a* and *b.*

As I tried to suggest earlier, the operation of inversion is actually *quite* interesting. So it would be very nice to have a definition which allowed *every* point to have an inverse point.

But if we want to add ω, we must do so *consistently.* That is, the properties of ω cannot result in any contradictory statements or results.

It turns out that this is in fact possible — though we do have to be careful. Although we can’t go into *all* the details of adding the point at infinity to the plane, here are some important properties that ω has:

- The distance from ω to any other point in the plane is infinite;
- ω lies on any unbounded curve (like a line or a parabola, but not a circle, for example);
- The inverse point of ω is the origin; that is, *O*′ = ω and ω′ = *O.*

So we can create an entirely consistent system of geometry which contains all the points in the Euclidean plane *plus* a point at infinity which is infinitely far away from every other point. We usually call the Euclidean plane with ω added the *extended plane.*

With the new point, we can now say that *every* point in the extended plane has a unique inverse point.

This idea of adding a point at infinity is also important in other areas of mathematics. In considering the *complex plane* — that is, the set of points of the form *a* + *bi,* where *i* is a solution of the equation *x*² + 1 = 0 — it is often useful to add the number ∞, so that division by 0 is now possible. This results in the *extended complex plane,* and is very important in the study of complex analysis.

The point at infinity is also important in defining stereographic projection. Here, a sphere is placed on the plane, so that the South pole is the origin of the plane. The sphere is then projected onto the plane as follows: for any point on the sphere, draw a ray from the North pole through that point, and see where it intersects the plane.

Where does the North pole get mapped to under this projection? Take one guess: the point at infinity!
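A small sketch makes the projection concrete. I take the sphere to have radius 1 (my own choice), so it rests with its South pole at the origin and its North pole at N = (0, 0, 2):

```python
def stereographic(x, y, z):
    """Project a point (x, y, z) of the sphere of radius 1 resting on the
    plane (South pole at the origin, North pole N = (0, 0, 2)) along the
    ray from N onto the plane z = 0."""
    if z == 2:
        raise ValueError("the North pole maps to the point at infinity")
    t = 2 / (2 - z)   # parameter where the ray N + t*(P - N) meets z = 0
    return (t * x, t * y)

print(stereographic(0, 0, 0))   # South pole -> the origin: (0.0, 0.0)
print(stereographic(1, 0, 1))   # an "equator" point lands at (2.0, 0.0)
```

As a point moves toward the North pole, its image runs off toward infinity, which is exactly why ω is the natural image of N.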

So here is another new geometry! As we get introduced to more and more various geometrical systems, I hope you will continue to deepen your intuition about the question, *What is a geometry?*


At the end of my last update, I said I’d talk more about using L-systems in class. I decided to focus on the symmetrical Koch-like images I had been working on for the past few years. There are two reasons for this. First, it’s fresh — and demonstrates that creating new fractal images is an active research topic; not everything is known. Second, you need to know some elementary number theory in order to create symmetrical images. Since none of the mathematics we studied so far was closely related to number theory, this was a great opportunity to see yet another application of a different branch of mathematics.

I started with introducing the basics of modular arithmetic. This was new to most students, but the motivation was easy: the direction you’re pointing after any given move is relevant to deciding if your sequence of segments closes up. And any time you turn counterclockwise, you increment the direction by the angle you turn, but subtract 360° when you go over 360° since a turn of 360° doesn’t alter your direction. This is just using a modulus of 360 for the direction you’re pointing.
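The bookkeeping is a single modular operation; here is a minimal sketch in Python (Processing’s Python mode would look much the same):

```python
def new_direction(direction, turn):
    """Heading after a counterclockwise turn, reduced mod 360 degrees."""
    return (direction + turn) % 360

# Two counterclockwise turns of 240 degrees starting from heading 0:
d = new_direction(0, 240)   # 240
d = new_direction(d, 240)   # 480 mod 360 = 120
```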

Then, I reminded them how to find the prime factorization of numbers in order to create a 2-adic valuation. Recall that the 2-adic valuation of a number is the exponent of the highest power of 2 which divides that number. This is significant since the 2-adic valuation (mod 2) indicates how to turn when drawing a Koch-like curve: a 0 represents one angle (60° for the Koch curve), and a 1 represents the other (240° for the Koch curve). So we created charts like this:
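A chart like that takes only a few lines to generate; here is a sketch using the classical Koch curve’s angles of 60° and 240° mentioned above:

```python
def two_adic_valuation(n):
    """Exponent of the highest power of 2 dividing n (for n >= 1)."""
    v = 0
    while n % 2 == 0:
        n //= 2
        v += 1
    return v

# For the classical Koch curve, the valuation mod 2 selects the turn angle:
# 0 -> 60 degrees, 1 -> 240 degrees.
angles = {0: 60, 1: 240}
turns = [angles[two_adic_valuation(n) % 2] for n in range(1, 9)]
# n:          1    2    3    4    5    6    7    8
# valuation:  0    1    0    2    0    1    0    3
# turn:      60  240   60   60   60  240   60  240
```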

Then finally, I showed students how to find angle pairs which created symmetrical images using a theorem in a paper I’m working on for the *College Mathematics Journal*. As the proof involves significantly more mathematics at a level beyond what we could reasonably discuss in the course, I just showed them the result. I won’t go into details here, but I’d be glad to share a draft of the paper if you’re interested and adventurous….

For their project work, they had to create images using the results of the theorem in Processing. Karla created this image, which I find interesting since it exhibits six-fold symmetry, but the exterior elements have seven points on them. So rarely do you encounter 6 and 7 together in geometry.

Peyton created this image, which is suggestive of a complete image, but which doesn’t include all the line segments. But the overall symmetry of the image is clear; you can complete it in your mind’s eye.

I also asked students to create an image which did *not* close up, to experiment with parameters which generated a more chaotic image. Colette created this image, which reminded her of the top of a pine tree.

Some students did have difficulty using the theorem correctly to generate images with symmetry, so next semester I’ll spend a little extra time making sure everyone’s on track.

We also had another guest speaker visit the class since the last update. I met Gwen Fisher at the Art Exhibition in Santa Clara at the Regional MAA meeting last month, and thought she would be a great fit for our class. What I liked about her art is that she works with beads in very mathematical ways — and her work is *very* different from anything we had been doing in the class.

She brought in several examples of her beadwork to pass around. You can see many beautiful pieces on her website, including this Wisdom Mandala piece she designed.

What was wonderful about her presentation was that Gwen discussed both the design and the execution of her pieces. My students were very engaged, and asked lots of questions along the way.

It turns out, though, that I had seen a talk she gave two or three years ago at another conference! Of course you can’t remember every speaker you see at every conference you attend, especially out of context. But after seeing her talk, I realized some of the slides looked strangely familiar, and that is because I had actually seen them before….

One more bit of news. You might remember that Mathematics and Digital Art has been offered as a First-Year Seminar course this year, meaning that only first-year students may enroll, and the maximum number of students in the course is set at 16.

Being a faculty member at the University of San Francisco, I am also working on a project with colleagues in creating a Mathematics for Educators minor — a series of courses aimed at prospective middle-school teachers to broaden their knowledge of mathematics especially suited to middle-school students. And of course a digital art course would fit nicely into this framework.

But what if a student decides to opt for the minor *after* their first year? Well, they couldn’t take digital art. So now, the course is a regular offering in the Mathematics and Statistics Department, open to any student at USF. I’m very excited about this, and really hope to spread the word about the Imagifractalous world of mathematics and digital art!

I’ll keep you updated in the Fall, as I have more changes in store for the course. I plan to move completely to Processing, since now everything I used Sage for has been rewritten for Processing. And next semester, I’ll include a short unit on binary trees as well. Stay tuned….


About a month ago, a colleague who takes care of the departmental bulletin boards in the hallway approached me and asked if I’d like to create a bulletin board about mathematical art. There was no need to think it over — of course I would!

Well, of course *we* would, since I immediately recruited Nick to help out. We talked it over, and decided that I would describe Koch-like fractal images on the left third of the board, Nick would discuss fractal trees on the right third, and the middle of the bulletin board would highlight other mathematical art we had created.

I’ll talk more about the specifics in a future post — especially since we’re still working on it! But this weekend I worked on designing a banner for the bulletin board, which is what I want to share with you today.

I really had a lot of fun making this! I decided to create fractals for as many letters of Imagifractalous! as I could, and use isolated letters when I couldn’t. Although I did opt *not* to use a third fractal “A,” since I already had ideas for four fractal letters in the second line.

The “I”‘s came first. You can see that they’re just relatively ordinary binary trees with small left and right branching angles. I had already incorporated the ability to have the branches in a tree decrease in thickness by a common ratio with each successive level, so it was not difficult to get started.
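Such a tree is a short recursion. Here is a sketch in Python with parameter values of my own choosing (the post’s images were made in Mathematica): each branch spawns two children, scaled by a branching ratio and rotated by a small angle, with thickness decreasing by a common ratio per level.

```python
import math

def tree_segments(x, y, heading, length, width, depth,
                  angle=20.0, r=0.75, width_ratio=0.8):
    """Segments (x1, y1, x2, y2, width) of a binary tree: each branch spawns
    two children scaled by r, rotated +/- angle degrees, with thickness
    reduced by width_ratio at every successive level."""
    x2 = x + length * math.cos(math.radians(heading))
    y2 = y + length * math.sin(math.radians(heading))
    segments = [(x, y, x2, y2, width)]
    if depth > 0:
        for turn in (angle, -angle):
            segments += tree_segments(x2, y2, heading + turn, length * r,
                                      width * width_ratio, depth - 1,
                                      angle, r, width_ratio)
    return segments

branches = tree_segments(0, 0, 90, 1.0, 1.0, depth=7)   # 2**8 - 1 segments
```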

I did use Mathematica to help me out, though, with the spread of the branches. Instead of doing a lot of tweaking with the branching angles, I just adjusted the aspect ratio (the ratio of the height to the width of the image) of the displayed tree. For example, if the first “I” is displayed with an aspect ratio of 1, here is what it would look like:

I used an aspect ratio of 6 to get the “I” to look just like I wanted.

Next were the “A”‘s. The form of an “A” suggested an iterated function system to me, a type of transformed Sierpinski triangle. Being very familiar with the Sierpinski triangle, it wasn’t too difficult to modify the self-similarity ratios to produce something resembling an “A.” I also like how the first “A” is reminiscent of the Eiffel Tower, which is why I left it black.

I have to admit that discovering the “R” was serendipitous. I was reading a paper about trees with multiple branchings at each node, and decided to try a few random examples to make sure my code worked — it had been some time since I tried to make a tree with more than two branches at each node.

When I saw this, I immediately thought, “R”! I used this image in an earlier draft, but decided I needed to change the color scheme. Unfortunately, I had somehow overwritten the Mathematica notebook with an earlier version and lost the code for the original “R,” but luckily it wasn’t hard to reproduce since I had the original image. I knew I had created the branches only using simple scales and rotations, and could visually estimate the original parameters.

The “C” was a no-brainer — the fractal C-curve! This was fairly straightforward since I had already written the Mathematica code for basic L-systems when I was working with Thomas last year. This fractal is well-known, so it was an easy task to ask the internet for the appropriate recursive routine to generate the C-curve:

+45 F -90 F +45
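Turned into code, the rule expands each F into two Fs at headings ±45° relative to it (the trailing +45 just restores the heading, so tracking absolute headings suffices). A sketch, with function names of my own:

```python
import math

def c_curve_points(depth):
    """Points of the fractal C-curve from the rule  F -> +45 F -90 F +45,
    reading +/- as turns in degrees and F as a unit step forward."""
    def expand(heading, d):
        # Each F at a given heading expands into two Fs at heading +/- 45.
        if d == 0:
            return [heading]
        return expand(heading + 45, d - 1) + expand(heading - 45, d - 1)
    x, y = 0.0, 0.0
    points = [(x, y)]
    for h in expand(0, depth):
        x += math.cos(math.radians(h))
        y += math.sin(math.radians(h))
        points.append((x, y))
    return points

pts = c_curve_points(10)   # 2**10 unit segments, so 1025 points
```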

For the coloring, I used simple linear interpolation from the RGB values of the starting color to the RGB values of the ending color. Of course there are many ways to use color here, but I didn’t want to spend a lot of time playing around. I was pleased enough with the result of something fairly uncomplicated.

For the “T,” it seemed pretty obvious to use a binary tree with branching angles of 90° to the left and right. Notice that the ends of the branches aren’t rounded, like the “I”‘s; you can specify these differences in Mathematica. Here, the branches are emphasized, not the leaves — although I did decide to use small, bright red circles for the leaves for contrast.

The “L” is my favorite letter in the entire banner! Here’s an enlarged version:

This probably took the longest to generate, since I had never made anything quite like it before. My inspiration was the self-similarity of the L-tromino, which may be made up of four smaller copies of itself.

The problem was that this “L” looked too square — I wanted something with a larger aspect ratio, but keeping the same self-similarity as much as possible. Of course exact self-similarity isn’t possible in general, so it took a bit of work to approximate it as closely as I could. I admit the color scheme isn’t too creative, but I liked how the bold, primary colors emphasized the geometry of the fractal.

The “O” was the easiest of the letters — I recalled a Koch-like fractal image I created earlier which looked like a wheel with spokes and which had a lot of empty space in the interior. All I needed to do was change the color scheme from white-on-gray to black-on-white.

Finally, the “S.” This is the fractal S-curve, also known as Heighway’s dragon. It does help to have a working fractal vocabulary — I knew the S-curve existed, so I just asked the internet again…. There are many ways to generate it, but the easiest for me was to recursively produce a string of 0’s and 1’s which told me which way to turn at each step. Easy from there.
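One standard recursion for that string (a sketch; whether it matches the post’s exact routine is my guess) takes the previous string, appends a 1, then appends the reversed, complemented previous string:

```python
def dragon_turns(depth):
    """Turn sequence for Heighway's dragon: at each step take the previous
    sequence, append a 1, then append it reversed with 0s and 1s swapped."""
    turns = []
    for _ in range(depth):
        turns = turns + [1] + [1 - t for t in reversed(turns)]
    return turns

print(dragon_turns(3))   # [1, 1, 0, 1, 1, 0, 0]
```

Reading 1 as a right turn and 0 as a left turn, and stepping one unit between turns, traces out the dragon.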

So there it is! Took a lot of work, but it was worth it. I’ll take a photo when it’s actually displayed — and update you when the entire bulletin board is finally completed. We’ve only got until the end of the semester, so it won’t be too long….


But when you work with software like Mathematica, for example, and you create such a tree, you can specify the size of the displayed image in screen size.

So the trees above both have branching ratio 2 and branching angle of 70°. The left image is drawn to a depth of 7, and the right image is drawn to a depth of 12. I specified that both images be drawn the same size in Mathematica.

But even though they are visually the same size, if you start with a trunk 1 unit in length, the left image is about 200 units wide, while the second is 6000 units wide!
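The disparity is easy to reproduce. Here is a sketch (trunk length 1, and the bounding-box measure is my own) using the branching ratio 2 and 70° angle of the trees above:

```python
import math

def tree_width(depth, r=2.0, angle=70.0):
    """Width of the bounding box of a binary tree with trunk length 1,
    branching ratio r, and branching angle 'angle' degrees to each side."""
    xs = []
    def grow(x, y, heading, length, d):
        x2 = x + length * math.cos(math.radians(heading))
        y2 = y + length * math.sin(math.radians(heading))
        xs.extend([x, x2])
        if d > 0:
            grow(x2, y2, heading + angle, length * r, d - 1)
            grow(x2, y2, heading - angle, length * r, d - 1)
    grow(0.0, 0.0, 90.0, 1.0, depth)
    return max(xs) - min(xs)

# Rendered at the same screen size, the depth-12 tree is actually far wider.
w7, w12 = tree_width(7), tree_width(12)
```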

So this prompted us to look at scaling back trees with large branching ratios. In other words, as trees kept getting larger, scale them back even more. You saw why this was important last week: if the scale isn’t right, when you overlap trees with *r* less than one on top of the reciprocal tree with branching ratio 1/*r,* the leaves of the trees won’t overlap. The scale has to be *just* right.

So what should these scale factors be? This is such an interesting story about collaboration and creativity — and how new ideas are generated — that I want to share it with you.

For your usual binary tree with branching ratio less than one, you don’t have to scale at all. The tree remains bounded, which is easy to prove using convergent geometric series.

What about the case when *r* is exactly 1, as shown in the above figure? At depth *n,* if you start with a trunk of length 1, the path from the base of the trunk to the leaf is a path of exactly *n* + 1 segments of length 1, and so can’t be any longer than *n* + 1 in length. As the branching angle gets closer to 0°, you do approach this bound of *n* + 1. So we thought that scaling back by a factor of *n* + 1 would keep the tree bounded in the case when *r* is 1.

What about the case when *r* > 1? Let’s consider the case when *r* = 2 as an example. The segments in any path are of length 1, 2, 4, 8, 16, etc., getting longer each time by a power of 2. Going to a depth of *n,* the total length is proportional to 2ⁿ in this case. In general, the total length is about *r*ⁿ for arbitrary *r,* so scaling back by a factor of *r*ⁿ would keep the trees bounded as well.

So we knew how to keep the trees bounded, and started including these scaling factors when drawing our images. But there were two issues. First, we still had to do some fudging when drawing trees together with their reciprocal trees. We could still create very appealing images, but we couldn’t use the scale factor on its own.

And second — and perhaps more importantly — Nick had been doing extensive exploration on his computer generating binary trees. Right now, we had three different cases for scaling factors, depending on whether *r* < 1, *r* = 1, or *r* > 1. But in Nick’s experience, when he moved continuously through values of *r* less than 1 to values of *r* greater than 1, the transition looked very smooth to him. There didn’t seem to be any “jump” when passing through *r* = 1, as happened with the scale factors we had at the moment.

I wasn’t too bothered by it, though. There are lots of instances in mathematics where 1 is some sort of boundary point. Take geometric series, for example. Or perhaps there is another boundary point which separates three fundamentally different types of solutions. For example, consider the quadratic equation x^{2} = c.

The three fundamentally different solution sets correspond to *c* < 0, *c* = 0, and *c* > 0. There is a common example from differential equations, too, though I won’t go into that here. Suffice it to say, this type of trichotomy occurs rather frequently.

I tried explaining this to Nick, but he just wouldn’t budge. He had looked at *so* many binary trees, his intuition led him to firmly believe there just *had* to be a way to unify these scale factors.

I can still remember the afternoon — the moment — when I saw it. It was truly beautiful, and I’ll share it in just a moment. But my point is this: I was so used to seeing trichotomies in mathematics, I was just willing to live with these three scale factors. But Nick wasn’t. He was tenacious, and just insisted that there was further digging to do.

Don’t ask me to explain *how* I came up with it. It was like that feeling when you were *just* holding on to some small thing, and now you couldn’t find it. But you never left the room, so it just *had* to be there. So you just kept looking, not giving up until you found it.

And there it was: if the branching ratio was *r* and you were iterating to a depth of *n,* you scaled back by a factor of 1 + *r* + *r*^{2} + ⋯ + *r*^{n}.

This took care of all three cases at once! When *r* < 1, this sum is bounded (think geometric series), so the boundedness of the tree isn’t affected. When *r* = 1, you just get *n* + 1 — the same scaling factor we looked at before! And when *r* > 1, this sum is large enough to scale back your tree so it’s bounded.
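In Python (the language Nick was using for his explorations), this unified scale factor is a one-liner; here is a quick numerical sketch of all three cases — my own illustration, not code from our paper:

```python
def scale_factor(r, n):
    """Unified scale factor 1 + r + r^2 + ... + r^n for a tree of depth n."""
    return sum(r**k for k in range(n + 1))

print(scale_factor(0.5, 100))  # r < 1: bounded, near 1/(1 - r) = 2
print(scale_factor(1, 4))      # → 5, that is, n + 1
print(scale_factor(2, 4))      # → 31, growing like r^n
```

One formula, and all three regimes fall out of it.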

Not only that, this scale factor made proving the Dual Tree Theorem *so* nice. The scaling factors for a tree with *r* < 1 and its reciprocal tree with branching ratio 1/*r* matched *perfectly.* No need to fudge!

This isn’t the place to go into all the mathematics, but I’d be happy to share a copy of our paper if you’re interested. We go into a *lot* more detail than I ever could in a blog post.

This is how mathematics *happens,* incidentally. It isn’t just a matter of finding a right answer, or just solving an equation. It’s a give-and-take, an exploration, a discovery here and there, tenacity, persistence. A living, breathing endeavor.

But the saga isn’t over yet…. There is a lot more to say about binary trees. I’ll do just that in my next installment of Imagifractalous!

]]>

Recall that in that post, I discussed creating binary trees with branching ratios which were 1 or larger. Below are three examples of binary trees, with branching ratios less than 1, equal to 1, and larger than 1, respectively.

It was Nick’s insight to consider the following question: how are trees with branching ratio *r* related to those with branching ratio 1/*r*? He had done a lot of exploring with graphics in Python, and observed that there was definitely some relationship.

Let’s look at an example. The red tree is a binary tree with branching ratio *r* less than one, and the gray tree has a branching ratio which is the reciprocal 1/*r*. Both are drawn to the same depth.

Of course you notice what’s happening — the leaves of the trees are overlapping! This was happening so frequently, it just couldn’t be coincidence. Here is another example.

Notice how three copies of the trees with branching ratio less than one are covering some of the leaves of a tree with the reciprocal ratio.

Now if you’ve ever created your own binary trees, you’ll likely have noticed that I left out a particularly important piece of information: the size of the trunks of the trees. You can imagine that if the sizes of the trunks of the *r* trees and the 1/*r* trees were not precisely related, you wouldn’t have the nice overlap.

Here is a figure taken from our paper which explains just how to find the correct relationship between the trunk sizes. It illustrates the main idea which we used to rigorously prove just about everything we observed about these reciprocal trees.

Let’s take a look at what’s happening. The thick, black tree has a branching ratio of 5/8, and a branching angle of 25°. The thick, black path going from *O* to *P* is created by following the sequence of instructions *RRRLL* (and so the tree is rendered to a depth of 5).

Now make a symmetric path (thick, gray, dashed) starting at *P* and going to *O*. If we start at *P* with the same trunk length we started with at *O,* and follow the exact same instructions, we have to end up back at *O.*

The trick is to now look at this gray path *backwards,* starting from *O.* The branches now get *larger* each time, by a factor of 8/5 (since they were getting smaller by a factor of 5/8 when going in the opposite direction). The size of the trunk, you can readily see, is the length of the *last* branch drawn in following the black path from *O* to *P*. This must be (5/8)^{5} times the length of the trunk, since the tree is of depth 5.

The sequence of instructions needed to follow this gray path is *RRLLL.* It turns out this is easy to predict from the geometry. Recall that beginning at *P*, we followed the instructions *RRRLL* along the gray path to get to *O.* When we reverse this path and go from *O* to *P,* we follow the instructions in reverse — except that in going in the reverse direction, what was previously a left turn becomes a right turn, and vice versa.

So all we need to do to get the reverse instructions is to reverse the string *RRRLL* to get *LLRRR,* and then change the *L*‘s to *R*‘s and the *R*‘s to *L*‘s, yielding *RRLLL.*
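This reverse-and-swap rule is simple to express in code; here is a sketch in Python (the function name is mine, not from the paper):

```python
def reverse_instructions(path):
    """Reverse a turtle path: read the instructions backwards and swap L <-> R."""
    swap = {'L': 'R', 'R': 'L'}
    return ''.join(swap[c] for c in reversed(path))

print(reverse_instructions('RRRLL'))  # → RRLLL
```

Note that applying the rule twice gets you back where you started, just as reversing a path twice should.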

There’s one important detail to address: the fact that the black tree with branching ratio 5/8 is rotated by 25° to make everything work out. Again, this is easy to see from the geometry of the figure. Look at the thick gray path for a moment. Since following the instructions *RRLLL* means that in total, you make one more left turn than you do right turns, the last branch of the path must be oriented 25° to the left of your starting orientation (which was vertical). This tells you precisely how much you need to rotate the black tree to make the two paths have the same starting and ending points.

Of course one example does not make a proof — but in fact all the important ideas are contained in this one illustration. It is not difficult to make the argument more general, and we have successfully accomplished that (though this blog is not the place for it!).

If you look carefully at the diagram, you’ll count that there are exactly 10 leaves in common with these two trees with reciprocal branching ratios. There is some nice combinatorics going on here, which is again easy to explain from the geometry.

You can see that these common leaves (illustrated with small, black dots) are at the ends of gray branches which are oriented 25° from the vertical. Recall that this specific angle came from the fact that there was one more *L* than there were *R*‘s in the string *RRLLL.*

Now if you have a sequence of 5 instructions, the only way to have exactly one more *L* than *R*‘s is to have precisely three *L*‘s (and hence two *R*‘s). And the number of ways to have three *L*‘s in a string of length 5 is just the binomial coefficient C(5, 3) = 10.
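In Python, that count comes straight from the standard library — just a quick check:

```python
from math import comb

# number of length-5 instruction strings with exactly three L's (and two R's)
print(comb(5, 3))  # → 10
```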

Again, these observations are easy to generalize and prove rigorously.

And where does this take us?

On the right are 12 copies of a tree with a branching ratio of *r* less than one and a branching angle of 30°, and on the left are 12 copies of a tree with a reciprocal branching ratio of 1/*r*, also with a branching angle of 30°. All are drawn to depth 4, and the trunks are appropriately scaled as previously discussed.

These sets of trees produce *exactly* the same leaves! We called this the Dual Tree Theorem, which was the culmination of all these observations. Here is an illustration with both sets of trees on top of each other.

As intriguing as this discovery was, it was only the *beginning* of a much broader and deeper exploration into the fractal world of binary trees. I’ll continue a discussion of our adventures in the next installment of Imagifractalous!

]]>

Later we’ll look at some student work (like Colette’s iterated function system), but first, I’d like to talk about course content.

The main difference from last semester in terms of topics covered was including a unit on L-systems instead of polyhedra. You might recall the reasons for this: first, students didn’t really see a connection between the polyhedra unit and the rest of the course, and second, the little bit of exposure to L-systems (by way of project work) was well-received.

I’ve talked a lot about L-systems on my blog, but as a brief refresher, here is the prototypical L-system, the Koch curve. The scheme is to recursively follow the sequence of turtle graphics instructions

F +60 F +240 F +60 F.
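In Python — the language my L-system routines are written in — the recursive expansion of the instruction string can be sketched like this (a simplified illustration, not my actual routines; rendering with turtle graphics is a separate step):

```python
def koch_instructions(depth):
    """Recursively expand F using the Koch scheme F +60 F +240 F +60 F."""
    if depth == 0:
        return "F"
    s = koch_instructions(depth - 1)
    return s + " +60 " + s + " +240 " + s + " +60 " + s

print(koch_instructions(1))  # → F +60 F +240 F +60 F
```

Each level of recursion replaces every segment with four smaller ones, which is exactly where the fractal detail comes from.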

There is also an excellent pdf available online, *The Algorithmic Beauty of Plants.* This is where I first learned about L-systems. It is a beautifully illustrated book, and I am fortunate enough to own a physical copy which I bought several years ago.

Talking about L-systems is also a great way to introduce Processing, since I have routines for creating L-systems written in Python. Up to this point, we’ve just explored changing parameters in the usual algorithm, but there will be a deeper investigation later.

One main focus, however, was just *seeing* the fractal produced by the algorithm. When working in the Sage environment, the system automatically produced a graphic with axes labeled, enabling you to see what fractal image you created.

In Processing, though, you need to specify your screen space ahead of time. So if your image is drawn off-screen, well, you just won’t see it. You have to do your own scaling and translating, which is sometimes not a trivial undertaking.

I also decided to introduce both finite and infinite geometric series in conjunction with L-systems. This had two main applications.

First, we looked at the Sierpinski triangle. Begin with any triangle, and take out the triangle formed by joining the midpoints of the sides. Then repeat recursively, creating the Sierpinski triangle.

Now assume your original triangle had an area of 1, and calculate the area of *all* the triangles you removed. Since the process is repeated infinitely, this sum is just an infinite geometric series. Interestingly, the sum of this series is 1, meaning, in some sense, you’ve taken away *all* the area — but the Sierpinski triangle is still left over! This illustrates an idea not usually encountered by students before: infinite sets of points with no area. Makes for a nice discussion.
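As a quick sanity check (a sketch of my own, assuming a starting triangle of area 1): at stage *k* you remove 3^{k} triangles, each of area (1/4)^{k+1}, and the partial sums of the series approach 1.

```python
def removed_area(steps):
    """Total area removed after `steps` stages of the Sierpinski construction,
    starting from a triangle of area 1."""
    # at stage k, remove 3^k triangles, each of area (1/4)^(k+1)
    return sum(3**k * (1/4)**(k + 1) for k in range(steps))

print(removed_area(1))  # → 0.25
```

The partial sums climb toward 1 — all the area is removed, yet the Sierpinski triangle remains.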

Second, we looked at the Koch curve (and similarly defined curves). Using a geometric sequence, you can look at the length of any iteration of the polygonal path drawn by the recursive algorithm. And, as expected, these paths get *longer* each time, and their lengths tend to infinity as the number of iterations increases. This is another nice way to involve geometric sequences and series.
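For the Koch curve specifically (a quick check, assuming a starting segment of length 1): each iteration replaces every segment with 4 segments one-third as long, so the total length grows by a factor of 4/3 each time.

```python
def koch_length(n):
    """Length of the Koch path after n iterations of a unit segment."""
    # 4^n segments, each of length (1/3)^n, so the length is (4/3)^n
    return 4**n * (1/3)**n
```

Since 4/3 > 1, these lengths form a geometric sequence tending to infinity.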

We’ll be doing more with L-systems in the next few weeks, so I’ll finish this discussion on my next update.

A highlight of the past month was a visit by artist Stacy Speyer.

Having worked with weaving and textiles for some time, Stacy has moved on to an investigation of polyhedral forms.

Stacy’s talk provided a wonderful insight into integrating mathematics and art in ways we did not study in class. One of the goals of the Bridges papers presentations and the guest speakers is to do precisely this.

She writes:

I’m now on a mission to share the fun of making geometric forms with others; I designed Cubes and Things, a 3D coloring book. These easy-to-make paper constructions have patterns that can be colored which emphasize different kinds of symmetric properties of the polyhedra. I bring this fun activity to schools and other groups in the form of Polyhedra Parties. And whenever possible, I still work on making more geometric art and learning more about math.

Visit Stacy’s website to take a look at her book, and view many more examples of her stunning work!

Now we’ll take a look at a few more examples of student artwork. These pieces were submitted for the assignment on iterated function systems. Karla created a piece which reminded her of icicles or twinkling lights.

Lainey thought her piece looked like a bolt of lightning coming out of a wizard’s staff.

And Peyton’s piece reminded her of flowers.

Finally, as I did last semester, I asked students for some mid-semester comments on how the course was going. You can see the complete prompt on Day 19 of the course website. Here are a few of the comments:

I like how it takes a subject that we are all required to take and creates a real, palpable output. Rather than some types of math, where everything is theoretical, it creates a clear chain of events with an even clearer consequence.

[A]fter seeing the kinds of art works there are that involve the kind of math and programming we use, it opened up a new world of artistic possibilities.

What I enjoy most about this course aside from it being small and very interactive in terms of doing labs and having all of our questions answered, is the fact that I would never [have] thought I would be able to create images using programming or math let alone enjoying the satisfaction of the final product.

I was pleased to read these responses, as they suggest the course is fulfilling its intended purpose. But there were also suggestions for improvement — there was a consensus that the math moved a bit too quickly. When we start the discussion on number theory for analyzing the Koch curve next week, I’ll make sure to keep an eye on the pace. I’ll let you know how it goes in my next update in April!

]]>

Yet another wonderful thing about LaTeX is how many mathematicians and scientists use it — and therefore write packages for it. You can go to the Comprehensive TeX Archive Network and download packages which make Feynman diagrams for physics, molecular structures for chemistry, musical scores, and even crossword puzzles or chessboards! There are literally thousands of packages available. And like LaTeX, it’s *all* open source. That is a feature which cannot be overstated. Arguably the world’s best and most comprehensive computer typesetting platform is *absolutely free.*

The package I use most often is TikZ — it’s a really amazing graphics package written by Till Tantau. You can do absolutely *anything* in TikZ, really. One extremely important feature is that you can easily put mathematical symbols in any graphic.

This is nice because any labels in your diagram will be in the same font as your text. I always find it jarring when I’m reading a mathematics paper or book, and the diagrams are labelled in some other font.
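To give a flavor of this (a minimal sketch, not one of my actual figures), here is a complete TikZ document that draws a pair of axes and a unit circle, with labels set in ordinary LaTeX math:

```latex
\documentclass{standalone}
\usepackage{tikz}
\begin{document}
\begin{tikzpicture}
  % axes, labeled in the document's own math font
  \draw[->] (-0.5,0) -- (2,0) node[right] {$x$};
  \draw[->] (0,-0.5) -- (0,2) node[above] {$y$};
  % the unit circle, with a label placed using polar coordinates
  \draw[thick] (0,0) circle (1);
  \node at (45:1.45) {$x^2 + y^2 = 1$};
\end{tikzpicture}
\end{document}
```

The `$x$` and `$x^2 + y^2 = 1$` labels are typeset exactly as they would be in the running text.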

There is *so* much more to say about TikZ. I plan to talk about it in more detail in a future installment about computer graphics, so I’ll stop here and leave you with one more graphic made with TikZ.

Another package I use fairly often is the *hyperref* package. This is especially useful when you’re creating some type of report which relies on information found on the web. For example, when I request funding for a conference, I need to include a copy of the conference announcement. So I create a hyperlink (in blue, though you can customize this) in the document which takes you to the announcement online when you click on it.
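A minimal sketch of what that looks like in the preamble and body (the URL here is just a placeholder, not a real announcement):

```latex
\documentclass{article}
% colorlinks typesets links in color rather than drawing boxes around them
\usepackage[colorlinks=true, urlcolor=blue]{hyperref}
\begin{document}
Funding is requested for the conference; see the
\href{https://example.org/announcement}{conference announcement}.
\end{document}
```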

These hyperlinks can also be linked to other documents in the cloud, so you can have a “master” document which links to all the documents you need. Now that I’m approaching 100 blog entries, I plan on making an index this way. I’ll create a pdf (using LaTeX, of course) which lists posts by topic with brief descriptions as well as hyperlinks to the relevant blog posts.

On to the next LaTeX feature! I learned about this one from a colleague (thanks, Noah!) when I was writing some notes on Taylor series for calculus. I used it as a text when I taught calculus; the notes are about 100 pages long.

I wanted to share these notes with others, and the style of the notes was such that the exercises weren’t at the end of the sections, but interwoven with the text. Students are supposed to do the exercises as they encounter them.

But for other calculus teachers, it was helpful to include solutions to the exercises. The problem in creating a solutions manual was that if I ever edited the notes, I’d have to also edit the solutions manual in parallel. I knew this was going to happen, since when I gave exams on this material, I added those problems as supplementary exercises to the text.

Enter the *ifthen* package in LaTeX. I created an *exercise* environment, so that every time I included an exercise, I had a block which looked like this:

\begin{exercise}
{….the exercise….}
{….the solution….}
\end{exercise}

Think of this as an exercise function with two arguments: the text of the exercise, and the text of the solution.

Then I created a boolean variable called *teacheredition*. If this variable was true, the exercise function printed the solutions with each exercise. If false, the solutions were omitted. This control structure was made easy by some functions in the ifthen package.
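A sketch of how such a setup might look — written as a command rather than an environment for simplicity, and with macro and counter names that are illustrative, not the actual ones from my notes:

```latex
\usepackage{ifthen}
\newboolean{teacheredition}
\setboolean{teacheredition}{true} % set to false for the student edition

\newcounter{exno}
% #1 is the text of the exercise, #2 the text of the solution
\newcommand{\exercisewithsolution}[2]{%
  \stepcounter{exno}%
  \par\textbf{Exercise \theexno.} #1%
  \ifthenelse{\boolean{teacheredition}}%
    {\par\emph{Solution.} #2}{}%
}
```

Flipping one boolean regenerates either edition from the same source.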

And that’s all there was to it! So every time I created an exercise, I added the solution right after it. Of course the exercises were automatically numbered as well. No separate solutions manual. Everything was all in one place. If you have ever had to deal with this type of issue before, you’ll immediately recognize how unbelievably useful this is!

While not really features of LaTeX itself, there are now places in the cloud where you can work on LaTeX documents with others. I’d like to talk about the one Nick and I are currently using, called ShareLaTeX. This is an environment where you can create a project, and then share it with others so they can work on it, too.

So when Nick and I work on a paper together, we do it in ShareLaTeX. It’s *extremely* convenient. We can edit the paper on our own, but most often, we use ShareLaTeX when we’re working together. Usually, we’re working on different parts of the paper — but when one of us has something we want the other to see, it’s easy to just scroll down (or up) in the document and look at what’s been done.

Also nice is that it’s easy to copy projects — so as we’re about to make a big change (like use different notation, or alter a fundamental definition), our protocol is to make a copy of the current project to work on, and then download the older version of the project (just in case the internet dies).

It’s wonderful to use. And it actually *really* came in handy when Nick was working on his Bridges paper for last year. His computer hard drive seriously crashed. But since we were working on ShareLaTeX, the draft of his paper was unharmed.

I hope this is enough to convince you that it might be worthwhile to learn a little LaTeX! I seriously don’t know what I’d do without it. And — as it bears repeating — it’s all open source, available to anyone. So, really, why isn’t the whole world using LaTeX? That’s a mystery for another day….

]]>