Later we’ll look at some student work (like Collette’s iterated function system),

but first, I’d like to talk about course content.

The main difference from last semester in terms of topics covered was including a unit on L-systems instead of polyhedra. You might recall the reasons for this: first, students didn’t really see a connection between the polyhedra unit and the rest of the course, and second, the little bit of exposure to L-systems (by way of project work) was well-received.

I’ve talked a lot about L-systems on my blog, but as a brief refresher, here is the prototypical L-system, the Koch curve. The scheme is to recursively follow the sequence of turtle graphics instructions

F +60 F +240 F +60 F.

There is also an excellent pdf available online, *The Algorithmic Beauty of Plants.* This is where I first learned about L-systems. It is a beautifully illustrated book, and I am fortunate enough to own a physical copy which I bought several years ago.

Talking about L-systems is also a great way to introduce Processing, since I have routines for creating L-systems written in Python. Up to this point, we’ve just explored changing parameters in the usual algorithm, but there will be a deeper investigation later.
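As a sketch of the idea (this is my own minimal illustration, not the actual course routines), an L-system just rewrites a string over and over; for the Koch curve, every F is replaced by the instruction sequence above:

```python
def expand(axiom, rule, depth):
    """Recursively rewrite every 'F' in the axiom using the rule."""
    for _ in range(depth):
        axiom = axiom.replace("F", rule)
    return axiom

# One level of rewriting turns a single segment into four;
# a turtle interpreter would then draw the resulting string.
koch = expand("F", "F +60 F +240 F +60 F", 2)
```

Since each rewriting quadruples the number of F’s, the string (and the drawing) grows very quickly with depth.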

One main focus, however, was just *seeing* the fractal produced by the algorithm. When working in the Sage environment, the system automatically produced a graphic with axes labeled, enabling you to see what fractal image you created.

In Processing, though, you need to specify your screen space ahead of time. So if your image is drawn off-screen, well, you just won’t see it. You have to do your own scaling and translating, which is sometimes not a trivial undertaking.
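That scaling and translating step can be sketched as follows (my own illustrative helper with hypothetical names, not code from the course): find the bounding box of the points you intend to draw, then map it into the window with a margin.

```python
def fit_to_screen(points, width, height, margin=20):
    """Map (x, y) points into a width-by-height window, preserving
    the aspect ratio and leaving a margin on each side."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    xmin, xmax = min(xs), max(xs)
    ymin, ymax = min(ys), max(ys)
    # Choose the scale so the larger dimension still fits on screen.
    scale = min((width - 2 * margin) / (xmax - xmin),
                (height - 2 * margin) / (ymax - ymin))
    return [((x - xmin) * scale + margin, (y - ymin) * scale + margin)
            for x, y in points]
```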

I also decided to introduce both finite and infinite geometric series in conjunction with L-systems. This had two main applications.

First, we looked at the Sierpinski triangle. Begin with any triangle, and take out the triangle formed by joining the midpoints of the sides. Then repeat recursively, creating the Sierpinski triangle.

Now assume your original triangle had an area of 1, and calculate the area of *all* the triangles you removed. Since the process is repeated infinitely, this sum is just an infinite geometric series. Interestingly, the sum of this series is 1, meaning, in some sense, you’ve taken away *all* the area — but the Sierpinski triangle is still left over! This illustrates an idea not usually encountered by students before: infinite sets of points with no area. Makes for a nice discussion.
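You can verify the sum numerically with a few lines of code (a quick sketch of my own): at step k the construction removes 3^(k-1) triangles, each of area (1/4)^k.

```python
def removed_area(steps):
    """Total area removed after the given number of steps of the
    Sierpinski construction, starting from a triangle of area 1."""
    return sum(3 ** (k - 1) * (1 / 4) ** k for k in range(1, steps + 1))

# The partial sums 1/4, 7/16, 37/64, ... approach 1.
```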

Second, we looked at the Koch curve (and similarly defined curves). Using a geometric sequence, you can look at the length of any iteration of the polygonal path drawn by the recursive algorithm. And, as expected, these paths get *longer* each time, and their lengths tend to infinity as the number of iterations increases. This is another nice way to involve geometric sequences and series.
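Concretely (again, my own snippet rather than the course code): each iteration replaces every segment with four segments one-third as long, so the total length is multiplied by 4/3 each time.

```python
def koch_length(n):
    """Length of the nth iteration of the Koch curve, starting from a
    unit segment; this grows without bound as n increases."""
    return (4 / 3) ** n
```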

We’ll be doing more with L-systems in the next few weeks, so I’ll finish this discussion on my next update.

A highlight of the past month was a visit by artist Stacy Speyer.

Having worked with weaving and textiles for some time, Stacy has moved on to an investigation of polyhedral forms.

Stacy’s talk provided a wonderful insight into integrating mathematics and art in ways we did not study in class. One of the goals of the Bridges papers presentations and the guest speakers is to do precisely this.

She writes:

I’m now on a mission to share the fun of making geometric forms with others; I designed Cubes and Things, a 3D coloring book. These easy-to-make paper constructions have patterns that can be colored which emphasize different kinds of symmetric properties of the polyhedra. I bring this fun activity to schools and other groups in the form of Polyhedra Parties. And whenever possible, I still work on making more geometric art and learning more about math.

Visit Stacy’s website to take a look at her book, and view many more examples of her stunning work!

Now we’ll take a look at a few more examples of student artwork. These pieces were submitted for the assignment on iterated function systems. Karla created a piece which reminded her of icicles or twinkling lights.

Lainey thought her piece looked like a bolt of lightning coming out of a wizard’s staff.

And Peyton’s piece reminded her of flowers.

Finally, as I did last semester, I asked students for some mid-semester comments on how the course was going. You can see the complete prompt on Day 19 of the course website. Here are a few of the comments:

I like how it takes a subject that we are all required to take and creates a real, palpable output. Rather than some types of math, where everything is theoretical, it creates a clear chain of events with an even clearer consequence.

[A]fter seeing the kinds of art works there are that involve the kind of math and programming we use, it opened up a new world of artistic possibilities.

What I enjoy most about this course aside from it being small and very interactive in terms of doing labs and having all of our questions answered, is the fact that I would never thought I would be able to create images using programming or math let alone enjoying the satisfaction of the final product.

I was pleased to read these responses, as they suggest the course is fulfilling its intended purpose. But there were also suggestions for improvement — there was a consensus that the math moved a bit too quickly. When we start the discussion on number theory for analyzing the Koch curve next week, I’ll make sure to keep an eye on the pace. I’ll let you know how it goes in my next update in April!


Yet another wonderful thing about LaTeX is how many mathematicians and scientists use it — and therefore write packages for it. You can go to the Comprehensive TeX Archive Network and download packages which make Feynman diagrams for physics, molecular structures for chemistry, musical scores, and even crossword puzzles or chessboards! There are literally thousands of packages available. And like LaTeX, it’s *all* open source. That is a feature which cannot be overstated. Arguably the world’s best and most comprehensive computer typesetting platform is *absolutely free.*

The package I use most often is TikZ — it’s a really amazing graphics package written by Till Tantau. You can do absolutely *anything* in TikZ, really. One extremely important feature is that you can easily put mathematical symbols in any graphic.

This is nice because any labels in your diagram will be in the same font as your text. I always find it jarring when I’m reading a mathematics paper or book, and the diagrams are labelled in some other font.

There is *so* much more to say about TikZ. I plan to talk about it in more detail in a future installment about computer graphics, so I’ll stop here and leave you with one more graphic made with TikZ.

Another package I use fairly often is the *hyperref* package. This is especially useful when you’re creating some type of report which relies on information found on the web. For example, when I request funding for a conference, I need to include a copy of the conference announcement. So I create a hyperlink (in blue, though you can customize this) in the document which takes you to the announcement online when you click on it.
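A minimal version of this setup might look like the following (the URL here is just a placeholder, and the colors can of course be customized):

```latex
% In the preamble: load hyperref with blue-colored links.
\usepackage[colorlinks=true, urlcolor=blue]{hyperref}

% In the body: a clickable link to the announcement.
A copy of the \href{https://example.com/announcement}{conference
announcement} is available online.
```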

These hyperlinks can also be linked to other documents in the cloud, so you can have a “master” document which links to all the documents you need. Now that I’m approaching 100 blog entries, I plan on making an index this way. I’ll create a pdf (using LaTeX, of course) which lists posts by topic with brief descriptions as well as hyperlinks to the relevant blog posts.

On to the next LaTeX feature! I learned about this one from a colleague (thanks, Noah!) when I was writing some notes on Taylor series for calculus. I used it as a text when I taught calculus; the notes are about 100 pages long.

I wanted to share these notes with others, and the style of the notes was such that the exercises weren’t at the end of the sections, but interwoven with the text. Students are supposed to do the exercises as they encounter them.

But for other calculus teachers, it was helpful to include solutions to the exercises. The problem in creating a solutions manual was that if I ever edited the notes, I’d have to also edit the solutions manual in parallel. I knew this was going to happen, since when I gave exams on this material, I added those problems as supplementary exercises to the text.

Enter the *ifthen* package in LaTeX. I created an *exercise* environment, so that every time I included an exercise, I had a block which looked like this:

\begin{exercise}

{….the exercise….}

{….the solution….}

\end{exercise}

Think of this as an exercise function with two arguments: the text of the exercise, and the text of the solution.

Then I created a boolean variable called *teacheredition*. If this variable was true, the exercise function printed the solutions with each exercise. If false, the solutions were omitted. This control structure was made easy by some functions in the ifthen package.
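Here is a sketch of how such an environment can be set up with *ifthen* (the macro names are my illustration, not necessarily the ones in my notes):

```latex
\usepackage{ifthen}
\newboolean{teacheredition}
\setboolean{teacheredition}{true} % set to false for the student edition

\newcounter{exno}
% Argument #1 is the exercise text, #2 is the solution.
\newenvironment{exercise}[2]{%
  \refstepcounter{exno}%
  \par\noindent\textbf{Exercise \theexno.} #1%
  \ifthenelse{\boolean{teacheredition}}%
    {\par\noindent\textit{Solution.} #2}{}%
}{\par}
```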

And that’s all there was to it! So every time I created an exercise, I added the solution right after it. Of course the exercises were automatically numbered as well. No separate solutions manual. Everything was all in one place. If you have ever had to deal with this type of issue before, you’ll immediately recognize how unbelievably useful the ability to do this is!

While not really features of LaTeX itself, there are now places in the cloud where you can work on LaTeX documents with others. I’d like to talk about the one Nick and I are currently using, called ShareLaTeX. This is an environment where you can create a project, and then share it with others so they can work on it, too.

So when Nick and I work on a paper together, we do it in ShareLaTeX. It’s *extremely* convenient. We can edit the paper on our own, but most often, we use ShareLaTeX when we’re working together. Usually, we’re working on different parts of the paper — but when one of us has something we want the other to see, it’s easy to just scroll down (or up) in the document and look at what’s been done.

Also nice is that it’s easy to copy projects — so as we’re about to make a big change (like use different notation, or alter a fundamental definition), our protocol is to make a copy of the current project to work on, and then download the older version of the project (just in case the internet dies).

It’s wonderful to use. And it actually *really* came in handy when Nick was working on his Bridges paper for last year. His computer hard drive seriously crashed. But since we were working on ShareLaTeX, the draft of his paper was unharmed.

I hope this is enough to convince you that it might be worthwhile to learn a little LaTeX! I seriously don’t know what I’d do without it. And — as it bears repeating — it’s all open source, available to anyone. So, really, why isn’t the whole world using LaTeX? That’s a mystery for another day….


From a Euclidean standpoint, there are points on the sphere, but obviously no straight lines. In spherical geometry, we define a *Point* to be a pair of antipodal points on the sphere, and a *Line* to be a great circle on the sphere. This results in two nice theorems of spherical geometry: any two distinct Lines determine (intersect in) a single Point, and any two distinct Points determine a single Line.

This is a departure from Euclidean geometry, for this means that there are *no parallel Lines* in spherical geometry, since distinct Lines always intersect. But there is something more going on here.

Consider the statement “Any two distinct Lines determine a single Point.” Now perform the following simple replacement: change the occurrence of “Line” to “Point,” and vice versa. This gives the statement “Any two distinct Points determine a single Line,” and is called the *dual* of the original statement.

Thus, we have the situation that some statement and its dual are both true. Now if this is true of some set of statements in spherical geometry — that the dual of each statement is true — and we derive a *new* result from this set of statements, then the following remarkable thing happens. Since the dual of any statement we used is true, then the dual of the new result must also be true! Just replace each statement used to derive the new result with its dual, and you get the dual of the new result as a true statement as well.

This is the principle of *duality* in mathematics, and is a very important concept. We will encounter it again when we investigate projective geometry.

Triangles and trigonometry are different on the sphere, as well. A *spherical triangle* is composed of three arcs of great circles, as in the image below.

If this were Earth, you could imagine starting at the North Pole, following 0 degrees of longitude to the Equator (shown in yellow), following the Equator to 30 degrees east longitude, and then following this line of longitude back to the North Pole.

On the sphere, though, sides of a triangle are actually *angles.* Sure, you could measure the length of the arcs given the radius of the sphere, but that’s not as useful. Consider the choice of units on the Earth — kilometers or miles? We don’t want important geometrical results to depend on the choice of units. So a side is specified by the angle subtended by the arc at the center of the sphere. This makes sense since the sides are arcs of great circles, and the center of any great circle is the center of the sphere.

The angles between the sides are angles, too! A great circle is just the intersection of a plane passing through the origin and the sphere. So the angle between any two sides is defined to be the angle between the planes which contain them.

Really *nothing* from Euclidean trigonometry is valid on the sphere. For example, if you look at the triangle above, you can see that the angles between the sides are 30, 90, and 90 degrees. These angles add to 210! In fact, the three angles of a triangle *always* add up to more than 180 degrees on a sphere. (You may notice that the *sides* of this triangle are also 30, 90, and 90 degrees, but this is just a coincidence. The sides and angles are usually different.)

If *A,* *B,* and *C* are the angles of a spherical triangle, it turns out that the area of the triangle is proportional to *A* + *B* + *C* – 180. This means that the smaller a triangle is, the closer the angle sum is to 180 degrees.
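With angles measured in degrees on a sphere of radius R, the constant of proportionality can be made explicit (the formula below is the standard one, filling in what the proportionality statement leaves implicit):

```latex
\text{Area} = \frac{\pi R^2}{180}\,(A + B + C - 180)
```

As a check, a triangle with three 90-degree angles covers one eighth of the sphere, and the formula gives (pi R^2/180)(270 - 180) = pi R^2/2, which is indeed one eighth of the total surface area 4 pi R^2.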

There is no Pythagorean Theorem on the sphere, either. In addition, if *a,* *b,* and *c* are the sides opposite angles *A,* *B,* and *C,* respectively, then we have formulas like the spherical law of cosines,

$$\cos a = \cos b \cos c + \sin b \sin c \cos A,$$

and its dual,

$$\cos A = -\cos B \cos C + \sin B \sin C \cos a.$$

One interesting consequence of the second of these formulas is this. If you know the angles of a triangle, you can determine the sides. You can’t do this in Euclidean trigonometry, since the triangles may be similar, but of different sizes. In other words, there are *no* similar triangles on the sphere. Spherical triangles are just congruent, or not. You can’t have two different triangles with the same angles.

For now, we’ve considered our sphere as being embedded in a Euclidean space. The definition of this surface is easy: just choose a point as the center of your sphere, and then find all points which are a given, fixed distance — the radius of the sphere — from that point. Sounds easy enough.

But can you imagine a sphere *without* thinking of the three-dimensional space around it? Or put another way, imagine you were a tiny ant, on a sphere of radius 1,000,000 km. That’s over 150 times the radius of the Earth! How would you know you were actually on the surface of a sphere? If you were that small and the sphere were that large, it would seem awfully flat to you….

So how could you determine the sphere was curved? This is a question for *differential geometry,* which among other things, is about the geometry of a surface *without* any reference to a space it’s embedded in. This is called the surface’s *intrinsic* geometry.

As an example of looking at the intrinsic geometry of the sphere, consider Lines. Now you *can’t* say they’re great circles any more, since this relies on thinking of a sphere as being embedded in three-dimensional space; in other words, its *extrinsic* geometry. You need the concept of a *geodesic* — in other words, the idea of a “shortest path.”

So if you’re the ant crawling between two points on a sphere, and you wanted to take the shortest path, you would *have* to follow a great circle arc. So it is possible to define Lines only using properties of the surface itself. But the mathematics to do this is really extremely challenging.

Lots of new ideas here — but we’ve just scratched the surface of a study of spherical geometry. You can see how *very* different spherical geometry is from both Euclidean and taxicab geometry. Hopefully you’re well on your way to wrapping your head around our original question, *What is a Geometry?*….


Last year the event was held at the University of California, Davis, and Shirley Yap from California State University, East Bay organized a highly successful exhibit — what we believe to be the first art exhibition ever to be a part of a sectional MAA meeting. I asked Shirley to say a few words about what motivated her to take on this task.

I exhibited an art piece at the Joint Mathematics Meetings in 2016. It was an interactive piece and I wanted to see how people would experiment with it. So I just hung around the exhibit for a while and not only saw how people played with my piece, but how they observed other pieces. The kind of delight that came from people’s faces convinced me that the art was really drawing them to math in a way that was different from how I had seen before. Perhaps because one is expected to sit in front of art for a long time to contemplate it, people felt relaxed enough to enjoy it. Whatever it was I saw, I knew that I wanted to share the experience with others outside of the JMM.

When we put a call for artists out on our Golden Section website, we didn’t get any responses. So I went through years of JMM art exhibit catalogs and looked up each artist to see if they lived in our section. Then I just started emailing them individually to ask if they were interested in showing their work at a local exhibition.

This year, I offered to help Shirley with organizing the exhibition. Given what was involved in the second year, I have a new appreciation for Shirley’s dedication to spreading the word about mathematical art. Such events do not organize themselves — and we are all grateful Shirley took on this huge task to start a new Golden Section tradition.

We didn’t have as many artists participate this year — but that’s part of the ebb and flow of yearly events like these. But the quality was high, as was the enthusiasm of the artists. Two of the artists this year were undergraduates — Nick Mendler from the University of San Francisco, and Juli Odomo from Santa Clara University. I think of them as future organizers of sectional MAA art exhibits….

In the morning, we had the usual opening remarks and a series of excellent speakers. The art exhibit took place in parallel with the Student Poster Sessions, which took place after lunch from 1:00-2:30. This was followed by another series of talks. You can see the full program here.

I asked the artists to say a few words about their experience creating or exhibiting mathematical art. Here are a few remarks.

Frank Farris (see artwork above):

I love the idea that we’re entering a golden age of mathematical art. New tools become available all the time and a growing community is finding new creative ways to use them. Can’t wait to see what the next years will bring.

I believe the sentiment in Gwen’s quote resonates very strongly with many mathematical artists.

Gwen Fisher:

The thing that keeps bringing me back to bead weaving is mathematics. Of course, I love colors of glass beads and the way they sparkle, but mostly, I keep returning to my seed beads because I keep finding new ways to use and represent mathematical structures with them.

Nick Mendler (see artwork above):

Since my first sectional meeting last Spring, I’ve continued research into the questions that generated my first mathematical artwork over a year ago.

Recognizing that my projects and thoughts are the most rewarding when realized through an aesthetic process has been not only productive, but has been a fascinating source of guidance to new questions. That focusing on more elegant images brings about more elegant mathematics has been only too clear from the sessions I’ve attended so far; I’m looking forward to seeing and learning from more art pieces!

Interested in organizing an art exhibit in your section? Since I helped Shirley with the organizational details this year, I can say a bit about what’s involved in putting together an exhibition.

The first step is, clearly, finding artists who want to show their work. It would be easier to get a student worker to do the search Shirley undertook — but don’t forget about the exhibitions at the Bridges conferences! Here is a link to both JMM and Bridges galleries. You can also contact the SIGMAA-ARTS and request that an email blast be sent to members.

As far as the submission process goes, that’s pretty standard. While it is always nice to accept every submission, sometimes it just isn’t possible. The works should have some real mathematical content, and be of good quality.

Since not all artists necessarily have business cards (especially student artists), I had the idea of making nametags for those who wanted one. You can download this nametag template in LaTeX if you would like, then edit and print onto cardstock. (Note: WordPress would not let me upload a .tex document, so I saved it as an Open Office document.)

*A Fine Mesh We’re In,* © dan bach 2016.

It is a good idea to have an assistant or a student helper in the exhibition venue during the conference. Not all artists attended the entire meeting, and so work was brought in at various times during the day.

Shirley had the wonderful idea of arranging a dinner for contributing artists after the conference. Last year we went to an excellent Thai restaurant, and this year, Frank Farris generously offered to host a pot luck dinner (he provided the lasagna) at his house. These have been very wonderful events, and give artists the opportunity to get to know each other a little better.

Finally, I wanted to mention that I am in the middle of my second time teaching *Mathematics and Digital Art* at the University of San Francisco. I say this in the event you are interested in offering such a course at your university. I have written extensively about this experience on my blog, and also have all course materials as well as a day-by-day outline available on the Fall 2016 course website. I would be happy to help you get such a course off the ground if you’re interested.

If you would like more information, or want to get in touch with any of the artists whose work is shown above, please make a comment and I’ll get back to you. I hope this is just the beginning of a long tradition of having mathematical art exhibits at sectional MAA meetings!


LaTeX is a markup language, like HTML, but its purpose is completely different. TeX was invented when its creator, Donald Knuth, found the galley proofs for one of his computer science textbooks so awful that he decided something should be done about it. This was in the late 1970s: TeX was initially released in 1978, and in 1985, Leslie Lamport released LaTeX, a more user-friendly version of TeX. This Wikipedia article has a more complete history if you’re interested.

Basically, LaTeX makes math look *great.* Here’s a formula taken from the paper Nick and I submitted to Bridges 2017 recently.

It’s perfectly formatted, all the symbols and spacing nicely balanced. Without mentioning any names, try producing that same formula in some other well-known word-processing environment, and you’ll find out it doesn’t even come *close* to looking that good.

Before I go into more detail about all my favorite LaTeX features, I’d like to explain how it works. Typically, you download some TeX GUI — I use TeXworks at home. The environment looks like this:

On the left-hand side is the LaTeX markup, and on the right-hand side is a previewer which shows you how your compiled text would look as a pdf document. Just type in your text, compile, view, repeat. Much like HTML.

There are *many* things I like about LaTeX, and it’s hard to rank them in any particular order. But first — and foremost — mathematical formulas look *fantastic.*

A close second is the fact that it’s *fast.* By that, I mean that because a markup language is text-based, there’s no mouse involved. I’m a very fast typist, which means I can type LaTeX markup almost as fast as I can type ordinary text. If you’ve ever had to typeset a formula by means of drop-down menus, you know exactly what I mean.

The third feature is closely related to the second: it’s intuitive. For example, to get the trigonometric formula

$$\tan\frac{\theta}{2} = \frac{\sin\theta}{1+\cos\theta},$$

you would type

$$\tan\frac{\theta}{2}=\frac{\sin\theta}{1+\cos\theta}$$.

All commands in LaTeX are preceded by a backslash (“\”), so you can always distinguish them from text. And if you look at the text, you can almost figure out the formula just from reading the commands. It maps perfectly.

Most of LaTeX is that way — the commands describe what they do. For example,

$$x \rightarrow y$$

is created using

$$x\rightarrow y$$.

Now you might be thinking that’s a lot to type for a simple formula — surely there must be a shorter way! First, there is. You can just define your own macro by saying

\def\ra{\rightarrow}

so you could just type

$$x\ra y$$.

This might be a good thing to do if you use a right arrow all the time. But secondly, if you just use it occasionally, it’s really quite easy to remember. When you want a right arrow, you just type “\rightarrow”. If you want a longer arrow which points in both directions, like

$$x \longleftrightarrow y,$$

you just type

$$x\longleftrightarrow y$$.

Once you understand how the commands are named, it’s often easy to guess which one you’ll need just by thinking about it.

Next up, LaTeX makes your life a lot easier, especially when you’re working on big projects. There are a *lot* of ways this is done, so I’ll just mention one of my favorites — the “\label” command.

This equation (again from the Bridges paper) is typeset using the commands (broken up for reference):

1. \begin{equation}
2. \bigcup_{j\in{\mathbb Z}}{\bf R}^{2j}_{\theta}\,C_{r,\theta}^n={\bf R}^{n\,\rm{mod}\,2}_{\theta}\bigcup_{j\in{\mathbb Z}}{\bf R}^{2j}_{\theta}\,C_{1/r,\theta}^n
3. \label{theorem1}
4. \end{equation}

The equation (described by (2)) is sandwiched between a begin/end block ((1) and (4)). But the key command is the “\label” command on line (3). When you want to refer to this equation in LaTeX, you don’t use text like “equation (5)”, you say “equation (\ref{theorem1})”.

The \label command assigns the number of the equation — in this case, 5 — to the label “theorem1”. So when you use the “\ref” command (stands for “reference,” naturally), LaTeX will look for the number assigned to “theorem1”.

This might not seem like a big deal at first. But as you work on a paper, you’re always deleting equations, adding them, or moving them around. By assigning them labels, any references you make to equations in your text are automatically updated when you make changes.

And while we’re on the subject of equations — of course an extremely important topic when thinking about writing mathematics — there is the “\nonumber” command as well. Before you end the equation, you might add a \nonumber tag, as in

\begin{equation}<stuff>\label{eq1}\nonumber\end{equation}.

Why would you label an equation for easy reference, and then not even put the number next to the equation? It is good mathematical style to only number equations that are referenced in the text. If you just show them once and don’t refer to them later, they don’t need a number.

But as you rewrite a proof, for example, you might find you no longer need to reference a particular equation, and so you don’t need the number any more. So rather than having to format it as *not* an equation (deleting the begin/end block), you just add the \nonumber tag. It’s a lot easier.

So what I do is label every equation as I write, and then when I have a final draft, I just go through and unnumber all those equations which I never end up referencing. It’s so *nice*.

I know I went on a bit about equations, but similar conveniences are available for figures, tables, article sections, book chapters, bibliographic entries, etc. You *never* have to remember a number. Ever.

And yes, there’s more…. Stay tuned for the next On Coding installment, where I’ll give you more reasons for wanting to learn LaTeX!


There are no significant content changes yet — although I’ll be discussing L-systems rather than polyhedra this semester, and there will be more to say when we get to that point. But as far as the delivery is concerned, there have been some alterations.

First, I’m emphasizing the code more right from the start. You might recall that in their mid-semester comments last semester, students asked for more details about the actual coding. So I take more time in each lab explaining Python.

This change has already made an impact; I’ve noticed that students are getting more adventurous with coding earlier on. They really seem to enjoy experimenting with the geometry. The example I use for the Josef Albers assignment looks like this — just rectangles within rectangles.
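The idea behind such a piece can be sketched in a few lines (this is my illustration of the concept, not the actual example code from class): repeatedly inset a rectangle by a fixed fraction of its current width and height.

```python
def nested_rects(x, y, w, h, inset=0.15, n=5):
    """Return n rectangles (x, y, w, h), each inset inside the last;
    a sketch would then draw each one, perhaps in a different color."""
    rects = []
    for _ in range(n):
        rects.append((x, y, w, h))
        x, y = x + w * inset, y + h * inset
        w, h = w * (1 - 2 * inset), h * (1 - 2 * inset)
    return rects
```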

But Collette took the geometry quite a few steps further. In her narrative, she discussed working with figure and ground, trying to make each geometrically interesting.

I am pleased to see students playing so intently with the geometry. At first, after a detailed discussion of using two-dimensional coordinates in Python, some students just tried randomly changing numbers to see what would happen. But I encouraged them to be a little more intentional — that is, spend more time in the design stage — and they were largely successful.

The second change is that I spent an extra day on affine transformations at the beginning of our discussion, slowing down the pace a little. Last semester, I recall that I needed to go back and review ideas I thought I covered in sufficient detail. Hopefully, slowing down the pace will help.

In addition, I put together a summary of commonly used affine transformations, such as reflections:
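As one example of the kind of entry such a summary contains (this particular transformation is my illustration), reflection across the y-axis can be written as

```latex
\begin{pmatrix} x' \\ y' \end{pmatrix} =
\begin{pmatrix} -1 & 0 \\ 0 & 1 \end{pmatrix}
\begin{pmatrix} x \\ y \end{pmatrix}
```

which sends each point (x, y) to (-x, y).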

This seemed to be helpful — I used it for the linear algebra course I’m teaching as well, and students responded positively. Feel free to look at it; just go to Day 6 on the course website.

The third change involves using discussion boards more deliberately on Canvas (our university’s learning management system). For each digital art assignment, I have students post drafts of their work, and have their peers comment on them. Since I have a small class this semester (six students), it is not a problem to have each student comment on every other student’s work.

Students really seem to enjoy this, and I participate by writing comments as well. But because everyone works at a different pace, some students lagged behind. So now I’m being more formal about using the discussion board, and making it an assignment.

For example, the next assignment involves creating three pieces, and I have assigned students to upload drafts on Canvas by the beginning of class next Friday. We’ll use Friday’s class so students can write and read comments; the assignment isn’t due until a few days later, so there will be time to incorporate new ideas into their drafts.

These changes are making a positive impact, and are making the course even more enjoyable this semester. And I am also fortunate to have Nick Mendler as my course assistant again this semester, meaning there are two of us to work with students each day. Students are really getting individual attention with their work.

Now let’s look at some more examples of student work! For the assignment to create a color texture using randomness, Lainey worked to create an image which resembled a piece of fabric.

For the Josef Albers assignment, Peyton (like Collette) also experimented a lot with the geometry of the individual elements. She chose a color palette which reminded her of a succulent, and so created geometrical objects which represented spikes on a plant.

And for the assignment based on my *Evaporation* piece, Karla chose a pink palette. She looked at various values for the radius and the randomness in the radius so as to create a balance between overlapping circles and white space between the circles.

Stay tuned for the next update! In the next installment, I’ll let you know how the work with L-systems went. One of my favorite topics…..


So what is the Cantor set?

The usual geometrical definition is to begin with a line segment of length one (as shown at the top of the figure). Next, remove the middle third of that segment, so that two segments of length 1/3 are formed. Then remove the middle thirds from those segments, and continue recursively.

Even though — after this process is repeated infinitely, of course — you remove segments whose lengths sum to 1, there are still infinitely many points left! (Actually, uncountably infinitely many, although this observation is not necessary for our discussion.)

There are many ways to describe the remaining points, but one common way is to say that the Cantor set consists of all those numbers between 0 and 1 (inclusive) which, when written in base 3, may be written *only* with 0’s and 2’s.

Again, we’ll briefly review. We have 0.2_3 = 2/3 in base 10, for example — since the places after the ternary point (no “deci”mals here) represent powers of 1/3.

So if we wanted to find 3/4 in base 3, we note that we’d need 2/3 = 0.2_3, but there would be 1/12 left over. This is smaller than 1/9, so we’re at 0.20_3 so far. Next, we need 2/27, giving 0.202_3, with 1/108 left over. And so on. The result is that 3/4 is equal to 0.202020…_3, where the “20” repeats. This can also be shown using infinite geometric series, if desired:

2/3 + 2/27 + 2/243 + … = (2/3)/(1 – 1/9) = 3/4.
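If you’d like to play with these expansions yourself, here is a quick sketch of the digit-by-digit process just described, using Python’s fractions module for exact arithmetic (the helper name is mine, just for illustration):

```python
from fractions import Fraction

def ternary_digits(x, n):
    """First n digits of x (with 0 <= x < 1) after the ternary point."""
    digits = []
    for _ in range(n):
        x *= 3             # shift one ternary place to the left
        d = int(x)         # the next digit is the integer part
        digits.append(d)
        x -= d             # keep the fractional remainder
    return digits

print(ternary_digits(Fraction(3, 4), 8))  # [2, 0, 2, 0, 2, 0, 2, 0]
```

Running this on 3/4 produces the repeating “20” pattern, just as the geometric series predicts.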

Surprisingly, the idea for a possibly new type of “Cantor set” came from studying binary trees! I say *possibly* new, since I couldn’t find any reference to such a set online, but of course that doesn’t mean someone else hasn’t come across it. And I call it a type of Cantor set since it may also be formed by taking out thirds of segments, but in a slightly different way than described above.

Now I’ve talked a bit about binary trees before, so I won’t go into great detail. But here is the important idea: when you make a branch, you’re pointing in some particular direction, and then turn either left or right, but you *can’t* just keep going in the same direction.

So what if you looked at ternary expansions, and as you added digits, you had the option of adding 1 to the previous digit (like turning left), or subtracting 1 (like turning right), but you couldn’t use the same digit twice consecutively. So 0.21021201 would be OK (we’ll drop the 3 subscripts since we’ll be working exclusively in base 3 from now on), but 0.12002120 would not be allowed since there are consecutive 0’s.

Note that adding 1 to 2 gives 0 in base 3, and subtracting 1 from 0 gives 2. So essentially, starting with 0., you build ternary expansions with the property that each digit is different from the previous one. And, of course, the expansions must be infinite….

What do iterations of this scheme look like?

We start with a segment of length 1. Recall we begin with 0., so that means the ternary expansions may begin with 0.1 or 0.2. Expansions beginning with 0.0 are not allowed, so this precludes the first third of the segment.

Now here comes the interesting part! On the second iteration (the third line from the top), we remove *different* thirds from the first two segments. Since 0.1 may be continued as 0.10 or 0.12, but *not* 0.11, we remove the middle third from the 0.1 segment. Further, 0.2 may be continued as 0.20 or 0.21, but *not* 0.22, so we remove the last third of the 0.2 segment. The iteration process is not symmetrical.

We continue on…. Since 0.10 may be continued as 0.101 or 0.102, but *not* 0.100, we remove the first third of the 0.10 segment. You get the idea. Seven iterations of this procedure are shown in the figure above.
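The iteration scheme just described is easy to carry out by computer, representing each surviving segment by its ternary prefix. Here is a short sketch in Python (the helper names are mine):

```python
from fractions import Fraction

def iterate(prefixes):
    """Extend each ternary prefix by the two digits differing from its last digit."""
    return [p + d for p in prefixes for d in "012" if d != p[-1]]

def intervals(prefixes):
    """Each prefix d1 d2 ... dn keeps the interval [0.d1...dn, 0.d1...dn + 1/3**n]."""
    out = []
    for p in prefixes:
        left = sum(Fraction(int(d), 3 ** (i + 1)) for i, d in enumerate(p))
        out.append((left, left + Fraction(1, 3 ** len(p))))
    return out

level = ["1", "2"]        # first iteration: expansions may not begin with 0.0
print(intervals(level))   # the two surviving intervals, [1/3, 2/3] and [2/3, 1]
level = iterate(level)
print(sorted(level))      # ['10', '12', '20', '21']
```

The second iteration keeps exactly the prefixes 10, 12, 20, and 21 — the asymmetrical removal of thirds described above.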

Note that since the process for creating the original Cantor set is symmetrical, this imposes a self-similarity on the set itself. The Cantor set is *exactly* the union of two duplicate copies of the original, scaled by a factor of 1/3.

In other words, the Cantor set may also be created using an iterated function system with the following two transformations:

f(x) = x/3 and g(x) = x/3 + 2/3.

What about the self-similarity of the new Cantor set? To help in seeing this, here’s a slightly more detailed version of the iteration scheme.

Is there any self-similarity here? Yes, but the fewest number of transformations I’ve found to describe this self-similarity is *five.* The curious reader is welcome to find fewer!

It isn’t hard to see the five vertical bands in this figure — the first three look the same (although the second one appears to be reflected), and the last two also look the same, although reflections of each other.

The first band is all ternary expansions in this new set beginning with 0.10. How do these relate to the whole set? Well, 1/9 of the set consists of expansions beginning with 0.001… or 0.002…, and then adding digits different from those previous. Adding 1/3 therefore gives all expansions beginning with 0.101… or 0.102…, and then adding different digits. This implies that the self-similarity describing the first vertical band is

f1(x) = x/9 + 1/3.

The second band consists of those expansions in our set beginning with 0.12. But if *x* is an expansion in our set beginning with 0.10, then 1 – *x* must be an expansion in our set beginning with 0.12, since we may write 1 as 0.22222…, repeating. Therefore, the second band is represented by the transformation

f2(x) = 1 – (x/9 + 1/3) = 2/3 – x/9.

We can think of the third band just as we did the first — except that this band consists of numbers beginning with 0.20 (rather than 0.10). So this band is represented by the transformation

f3(x) = x/9 + 2/3.

The last two bands consist of those expansions beginning with 0.21. Here, we break into the two cases 0.210 and 0.212, and use our previous work. For example, those beginning with 0.210 can be found by taking those beginning with 0.10, dividing by 3 to get expansions beginning with 0.010, and then adding 2/3 to get expansions beginning with 0.210:

f4(x) = (x/9 + 1/3)/3 + 2/3 = x/27 + 7/9.

We describe the self-similarity of the last band — those expansions beginning with 0.212 — analogously:

f5(x) = (2/3 – x/9)/3 + 2/3 = 8/9 – x/27.

These are five transformations which describe the self-similarity of the new Cantor set. I haven’t rigorously proved yet that you can’t do it in fewer, but I think this is the case.
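As a sanity check, the five transformations can be tested symbolically on finite expansions. Labeling them f1 through f5 in the order derived above, each one acts on a ternary string by prepending digits, and possibly taking the digitwise complement. Here is a sketch in Python verifying that the five images tile the whole set:

```python
def valid(n):
    """Length-n ternary strings starting 1 or 2, no digit repeated consecutively."""
    strs = ["1", "2"]
    for _ in range(n - 1):
        strs = [s + d for s in strs for d in "012" if d != s[-1]]
    return set(strs)

def comp(s):
    """Digitwise complement d -> 2 - d, i.e. x -> 1 - x on expansions."""
    return "".join(str(2 - int(d)) for d in s)

# The five maps, written as operations on expansions:
#   f1(x) = x/9 + 1/3    -> prepend "10"
#   f2(x) = 2/3 - x/9    -> "12" followed by the complement
#   f3(x) = x/9 + 2/3    -> prepend "20"
#   f4(x) = x/27 + 7/9   -> prepend "210"
#   f5(x) = 8/9 - x/27   -> "212" followed by the complement
images = (
    {"10" + s for s in valid(5)} |
    {"12" + comp(s) for s in valid(5)} |
    {"20" + s for s in valid(5)} |
    {"210" + s for s in valid(4)} |
    {"212" + comp(s) for s in valid(4)}
)
print(images == valid(7))  # True
```

The five image sets are pairwise disjoint (no prefix among 10, 12, 20, 210, 212 begins another), and their union is exactly the set of valid strings — which is why fewer than five maps seems unlikely.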

Of course this is just the beginning — you can use the same rule in *any* base. That is, begin with 0., and either add or subtract one from the previous digit. In base 4, you get nice self-similarity, but it gets more involved in higher bases. In bases higher than 3, you can *also* use the rule that the next digit in the expansion is different from the previous — and this gives yet *another* class of Cantor sets. I’ll leave you to investigate, and perhaps I’ll write again about these Cantor sets when I find out more!


You can see from the figure that the distance between (-2,3) and (3,-1) is 9, since you’d have to drive nine blocks to get from one point to the other. This might seem simple enough, but the situation is quite a bit different from Euclidean geometry.

In Euclidean geometry, the shortest distance between two points is a straight line segment. If you deviate from this segment in any way in getting from one point to the other, your path will get longer.

In taxicab geometry, there is usually *no unique* shortest path. If you look at the figure below, you can see two *other* paths from (-2,3) to (3,-1) which also have a length of 9.

Strange! There are a few exceptions to this rule, however — when the segment between the points is parallel to one of the axes. For example, the distance from (0,0) to (4,0) is just 4 — and there’s only *one* way to get there along a path of length 4. You’ve got to travel along the line segment joining (0,0) and (4,0).

Well, perhaps “strange” is the wrong word. All of this is perfectly normal in taxicab geometry….

Now let’s take a look at triangles. We define them in the usual way — choose three points, and connect them in pairs to form three sides of a triangle. Just like in Euclidean geometry.

Because we’re so familiar with them, I’ve drawn what would be — if we were in the Euclidean realm! — two 3-4-5 right triangles. You’re welcome to verify that *OP’Q’* would indeed be a 3-4-5 triangle in Euclidean geometry.

OK, now let’s look at these triangles from the perspective of taxicab geometry. While we can just study these triangles by looking at the graph, it might be helpful to formally define the *taxicab distance* function. If two points have coordinates (x1, y1) and (x2, y2), the distance between them is defined to be

d((x1, y1), (x2, y2)) = |x2 – x1| + |y2 – y1|.

First, a word about saying that we’re *defining* the distance. That’s right — a *different* distance, like the one we’re used to, which comes from the Pythagorean theorem, would produce a *different* geometry. So how your geometry “works” depends upon how you define the distance.

Second, a word about the formula. Take a moment to convince yourself that |x2 – x1| is how far your taxicab would have to drive in an east-west direction, and |y2 – y1| is how far your taxicab would have to drive in a north-south direction. Absolute values are needed so that the distance between two points is always positive, and is the same in either direction.
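In code, the taxicab distance is just one line. A quick sketch in Python (the function name `taxi` is mine):

```python
def taxi(p, q):
    """Taxicab distance: east-west blocks plus north-south blocks."""
    return abs(q[0] - p[0]) + abs(q[1] - p[1])

print(taxi((-2, 3), (3, -1)))  # 9, as in the grid example above
print(taxi((3, -1), (-2, 3)))  # 9 again: the distance is symmetric
```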

Back to the triangles. In a taxicab world, *OPQ* is a 3-4-7 triangle! That might strike you as a bit unusual, since in the Euclidean world, the sum of the lengths of any two sides of a triangle is always *greater* than the length of the third. This is called the *triangle inequality,* and is usually written

d(X, Z) ≤ d(X, Y) + d(Y, Z)

for points *X*, *Y,* and *Z,* where we write d(X, Y) to mean the Euclidean distance between *X* and *Y.* In Euclidean geometry, the only way this inequality can actually be an *equality* is if *Y* is on the line segment whose endpoints are *X* and *Z* — and no other way. But as we’ve just seen, the taxicab distances satisfy

d(O, Q) = d(O, P) + d(P, Q),

where *P* is evidently *not* on the segment joining *O* and *Q.* Yet another difference between taxicab and Euclidean geometries.

Oh, but it gets better…. In the figure above — at least in a Euclidean world — when you rotate two triangles, the angles and lengths remain the same. In other words, the triangles are *congruent.* It may seem obvious, since clearly the two triangles *look* the same.

What happens when we “rotate” triangle *OPQ* in a taxicab world? I put “rotate” in quotes since we haven’t looked at whether or not the Euclidean idea of “angle” has a companion in the taxicab world.

If you do the calculations, you’ll see that *OP’Q’* is in fact a 21/5 – 5 – 28/5 triangle! Moreover, even though *OQ* was the longest side of triangle *OPQ,* *OQ’* is *not* the longest side of triangle *OP’Q’!* The longest side is actually *OP’.*
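You can check these lengths with exact rational arithmetic. The coordinates below are one placement consistent with the figures described — legs *OP* = 4 and *PQ* = 3 along the axes, rotated by the angle whose cosine is 3/5 — though the actual figure may use a different placement:

```python
from fractions import Fraction

def taxi(p, q):
    """Taxicab distance between two points."""
    return abs(q[0] - p[0]) + abs(q[1] - p[1])

def rotate(p, c, s):
    """Rotate p about the origin, where (c, s) = (cos t, sin t)."""
    x, y = p
    return (c * x - s * y, s * x + c * y)

# A 3-4-5 right triangle with legs along the axes: OP = 4, PQ = 3, OQ = 5.
O, P, Q = (0, 0), (4, 0), (4, 3)
print(taxi(O, P), taxi(P, Q), taxi(O, Q))  # 4 3 7: the taxicab 3-4-7 triangle

# Rotate by a rational rotation (cos = 3/5, sin = 4/5), so arithmetic stays exact.
c, s = Fraction(3, 5), Fraction(4, 5)
P2, Q2 = rotate(P, c, s), rotate(Q, c, s)
print(taxi(O, P2), taxi(P2, Q2), taxi(O, Q2))  # 28/5 21/5 5
```

After rotation the taxicab side lengths are 28/5, 21/5, and 5 — so *OP’* = 28/5 has indeed become the longest side, even though *OQ’* = 5 comes from the Euclidean hypotenuse.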

What’s going on here? We are *so* used to being able to draw a triangle on a piece of paper — and certainly, if we turn our piece of paper, the lengths of the sides of the triangle don’t change! The triangle stays *exactly* the same — this is the notion of congruence. But when you change the distance function, the *geometry* changes. In a very fundamental way, as you see.

Essentially, we need to give up the idea of “angle” in taxicab geometry. That might sound odd — how can you have geometry without angles? As it turns out, there are *lots* of ways to do this. We’ll be looking at this idea more as this thread continues.

And although there are differences, there are certainly things in common — such as the fact that there is a triangle inequality in both geometries. Is this just coincidence? Mathematically, a *distance function* is one which satisfies, for points *X,* *Y,* and *Z*:

1. d(X, Y) > 0 whenever X ≠ Y (and d(X, X) = 0);

2. d(X, Y) = d(Y, X);

3. d(X, Z) ≤ d(X, Y) + d(Y, Z).

Now we’ve already encountered properties 2 and 3 so far, and property 1 simply says that two different points have to be separated by some positive distance.

The concept of a distance function evolved over time — whenever it made sense to talk about distance, it seemed these properties were always in play. And in fact, the distance functions in *both* Euclidean and taxicab geometry have these three properties.

So different distance functions produce different geometries. Perhaps the hardest thing about encountering new geometries is putting aside ideas from Euclidean geometry — “forgetting” all about angles, for example, while we’re in a taxicab world. Hopefully, as you learn about more and more non-Euclidean geometries, it will become easier and easier to navigate this Universe of diverse geometrical worlds….


Technically speaking, using a markup language isn’t really coding. There isn’t any sort of algorithmic thinking going on when you type

<b>make this text bold!</b>

to make text bold in HTML. But it is easy to use coding constructs in both HTML and *LaTeX,* for example when embedding JavaScript in HTML, or by writing macros or using the graphics package TikZ in *LaTeX.*

I do think, though, that having students learn a markup language is a great introduction to programming. It gets students in the mindset that a particular set of instructions has a predictable, well-defined effect. Perhaps more importantly, getting the syntax correct when using a markup language is just as important as it is when coding.

I started using HTML in the early 90’s, when I made my first web pages. I don’t have those older designs any more — but they were still based on geometry, like my current website. I do have a copy of the version just before this, though.

Of course I have always loved pentominoes, polyominoes, and puzzles. But one reason I went with this theme was I could essentially build my home page by using a table in HTML.

Tables are currently out of fashion, though — the thought is that with all the new capabilities in CSS (cascading style sheets), tables shouldn’t be used for layout purposes. That may be so. But it took some effort to get everything to work initially, and creating new pages with similar functionality is easy now that I have a template created. So I am content to be out of fashion….

Being a programmer, though, made me decide to use HTML itself, rather than some web-designing software. The reason is simple. If I write my own HTML, I can do whatever I want. If I use some canned software, I can only do what the software lets me. And since I’m pretty particular about what my web pages look like, I don’t want to be in the position of having a design idea in my mind, but not being able to implement it because of the limitations of the software.

One of my favorite designs was a website I created when I did some educational consulting in Thailand.

The polyomino theme is pretty clear — in this case, I used all 35 hexominoes plus six 1 x 1 squares. If you look closely, you’ll see the six places where a square of the tiled background peeps through. I included these six squares so I could tile a 12 x 18 rectangle.

And why did I want to be so specific? Actually, the flag of Thailand has proportions of 2 : 3. Moreover, the stripes alternate red, white, blue, white, and red, with the blue stripe twice as wide as the others.

So I decided to design a web page for my trip based *precisely* on the dimensions of the Thai flag.

It was *difficult.* What made it so hard was that I wanted each hexomino to be a different color — but all the squares in any given hexomino had to be the *same* color. And I had to make the five colored stripes.

So I actually cut out the 35 hexominoes and set down to work — but was constrained by the fact that I basically had to use all the pieces lengthwise since the stripes needed to be horizontal. If you want a real polyomino challenge, try it yourself! You’ll see it isn’t all that easy.

Two overriding principles guide my use of HTML: functionality and minimalism. My websites are essentially a professional portfolio. So they have to be easy to navigate. The polyomino theme allows me to include all important links on one page — and they’re all easily visible. Moreover, it’s easy to add or take away a link without disturbing the overall design. Here’s a snapshot of my current homepage.

I also use websites when I teach, rather than an online platform like Moodle, for example. This is because I can archive them — and again, I have complete control over content. Further, when teaching a course like my Mathematics and Digital Art course, I can share all my course content with colleagues, who otherwise wouldn’t have access to it on a school content management platform.

Within each course web page, I also strive for ease of use. For example, links are always in bold face, so they’re easy to spot. I find that making them a different color is too distracting, so I alter the typeface instead. And I use an anchor

<a name="currentweek">

when I update course content — so when a student clicks on the course website, they don’t have to scroll down the page to find the new stuff. It’s automatically at the top. Students have commented that they like this feature of my course web pages.

I’m a minimalist when it comes to design. The difficulty with web design is that because you *can* do so much, some web designers think they *should.* The challenge is to select design elements that work together, achieve the web pages’ purpose, and have a strong aesthetic appeal. Less is often more, to use a cliché.

I feel especially strongly about this when it comes to using color. Yes, you typically have 16,777,216 possible color choices when designing web pages. But that doesn’t mean you have to use them all! When I look at some sites, I get the feeling that the web designers were trying to….

In fact, another favorite design — my digital art website — is entirely in gray scale.

I use a bold, black-and-white variant of the yin-yang symbol to attract attention. No need to splash countless colors indiscriminately across the page….

I have enjoyed designing my own web pages, and I’ll continue to do so — every so many years, I get tired of the existing design and need to create a new one. And if you already code, it’s easy to pick up a markup language like HTML. You can always view the source code of any website, and you can just look up anything else you need online. Give it a try!


Let’s start with a few examples of simple binary trees. If you want to see more, just do a quick online search — there are lots of fractal trees out there on the web! The construction is pretty straightforward. Start by drawing a vertical trunk of length 1. Then turn left and right by some specified angle, and draw a branch of some length *r* < 1. Recursively repeat from where you left off, always adding two more smaller branches at the tip of each branch you’ve already drawn.

If you look at these two examples for a moment, you’ll get the idea. Here, the angle used is 40 degrees, and the ratio is 5/8. On the left, there are 5 iterations of the recursive drawing, and there are 6 iterations on the right.
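This recursive construction is only a few lines of code. Here is a sketch in Python which just collects the line segments (the plotting step is left out); the default parameters match the example above, and the function name is mine:

```python
from math import cos, sin, radians

def tree(depth, angle=40.0, ratio=0.625):
    """Line segments of a binary tree, as ((x0, y0), (x1, y1)) pairs.

    Start with a vertical trunk of length 1; at each tip, branch left and
    right by `angle` degrees, scaling the branch length by `ratio`.
    """
    segments = []

    def branch(x, y, heading, length, level):
        x1 = x + length * cos(radians(heading))
        y1 = y + length * sin(radians(heading))
        segments.append(((x, y), (x1, y1)))
        if level < depth:
            branch(x1, y1, heading + angle, length * ratio, level + 1)
            branch(x1, y1, heading - angle, length * ratio, level + 1)

    branch(0.0, 0.0, 90.0, 1.0, 0)  # the trunk points straight up
    return segments

print(len(tree(5)))  # 63 segments: 1 + 2 + 4 + ... + 2**5
```

Since each tip sprouts two branches, n iterations produce 2^(n+1) – 1 segments in all — which is why the drawing slows down noticeably after a dozen or so iterations.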

Here’s another example with a lot more interaction among the branches.

This type of fractal binary tree has been studied quite a bit. There is a well-known paper by Mandelbrot and Frame which discusses these trees, but it’s not available without paying for it. So here is a paper by Pons which addresses the same issues, but is available online. It’s an interesting read, but be forewarned that there’s a lot of mathematics in it!

In trying to understand various properties of these fractal trees, it’s natural to write code which creates them. But here’s the interesting thing about writing programs like that — once they’re written, you can input anything you like! Who says that *r* has to be less than 1? The tree above is a nice example of a fractal tree with *r* = 1. All the branches are of the same length, and there is a lot of overlap. This helps create an interesting texture.

But here’s the catch. The more iterations you go, the bigger the tree gets. In a mathematical sense, the iterations are said to be *unbounded.* But when Mathematica outputs a graphic, it is automatically scaled to fit your viewing window. So in practice, you don’t really care how large the tree gets, since it will automatically be scaled down so the entire tree is visible.

It is important to note that when *r* < 1, the trees *are* bounded, so they are easier to study mathematically. The paper Nick and I are working on scales unbounded trees so they are more accessible, but as I said, I’ll talk more about this in a later post.

Here are a few examples with *r* > 1. Notice that as there are more and more iterations, the branches keep getting *larger.* This creates a very different type of binary tree, and again, a tree which keeps getting bigger (and unbounded) as the number of iterations increases. But as mentioned earlier, Mathematica will automatically scale an image, so these trees are easy to generate and look at.

Nick created the following image using copies of binary trees with *r* approximately equal to 1.04. The ever-expanding branches allow for the creation of interesting textures you really can’t achieve when *r* < 1.

Another of my favorites is the following tree, created with *r* = 1. The angle used, though, is 90.9 degrees. Making the angle just slightly larger than a right angle creates an interesting visual effect.

But the exploration didn’t stop with just varying *r* so it could take on values 1 or greater. I started thinking about other ways to alter the parameters used to create fractal binary trees.

For example, why does *r* have to stay the same at each iteration? Well, it doesn’t! The following image was created using values of *r* which alternate between iterations.

And the values of *r* can vary in other ways from iteration to iteration. There is a lot more to investigate, such as generating a binary tree from *any* sequence of *r* values. But studying these mathematically may be somewhat more difficult….
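Varying *r* between iterations needs only a small change to the tree-drawing code: pass a list of ratios, one per iteration. A sketch (the particular ratio values here are made up for illustration, not the ones used in the image):

```python
from math import cos, sin, radians

def tree_with_ratios(ratios, angle=40.0):
    """Binary tree whose branch ratio at iteration k is ratios[k].

    Passing e.g. ratios=[1.2, 0.6, 1.2, 0.6, ...] alternates the scaling
    between iterations; any sequence of r values works the same way.
    """
    segments = []

    def branch(x, y, heading, length, level):
        x1 = x + length * cos(radians(heading))
        y1 = y + length * sin(radians(heading))
        segments.append(((x, y), (x1, y1)))
        if level < len(ratios):
            new_length = length * ratios[level]
            branch(x1, y1, heading + angle, new_length, level + 1)
            branch(x1, y1, heading - angle, new_length, level + 1)

    branch(0.0, 0.0, 90.0, 1.0, 0)
    return segments

print(len(tree_with_ratios([1.2, 0.6] * 3)))  # 127 segments for 6 iterations
```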

Now in a typical binary tree, the angle you branch to the left is the *same* as the angle you branch to the right. Of course these two angles don’t *have* to be the same. What happens if the branching angle to the left is different from the branching angle to the right? Below is one possibility.

And for another possibility? What if you choose two different angles, but have the computer *randomly* decide which is used to branch left/right at each iteration? What then?

Here is one example, where the branching angles are 45 and 90 degrees, but which is left or right is chosen randomly (with equal probability) at each iteration. Gives the fractal tree a funky feel….

You might have noticed that none of these images are in color. One very practical reason is that for writing Bridges papers, you need to make sure your images look OK printed in black-and-white, since the book of conference papers is not printed in color.

But there’s another reason I didn’t include color images in this post. Yes, I’ve got plenty…and I will share them with you later. What I want to communicate is the amazing variety of textures available by using a simple algorithm to create binary trees. Nick and I never imagined there would be *such* a fantastic range of images we could create. But there are. You’ve just seen them.

Once the Bridges paper is submitted, accepted (hopefully!), and revised, I’ll continue the story of our arboreal adventure. There is a lot more to share, and it will certainly be worth the wait!
