## Bridges: Mathematics and Art I

Just registered for Bridges 2016 last week!

Simply put, Bridges is the best mathematics conference ever.  You meet people from all around the world who are interested in the interplay between mathematics and art.

Not just M. C. Escher, either (though many are interested in his work).  Some Bridges attendees (shall we call them Bridgers?) are artists by profession, but others are mathematicians, computer scientists, physicists — you name it.  All are artists by vocation.

Interests span not only art in a more usual sense — watercolor, acrylic, oil, pastel, drawing — but also digital art, sculpture in almost any medium you can think of, poetry, architecture, music, fiber arts, dance, digital animations and movies, fashion, origami, and likely there will be some new art form introduced this summer as well!

The art exhibition is amazing.  You can see a few examples above.  You see the wooden spiral?  Each inlaid rectangle is a different piece of wood!  The craftsmanship is really superb.

One neat aspect is that most of the artists also attend Bridges.  That means if you see something you really like, you can just look for the right name tag and start up a conversation.  As you would expect, all the artists are eager to discuss their work.

Be ready for some surprises, too.  I met my friend Phil Webster at the conference — we started talking because I was from San Francisco, and he also lives in the Bay Area.  So we’ve met up a few times since the conference to discuss mathematics, art, and programming.  He even gave a talk in our Mathematics Colloquium at the University of San Francisco.  Of course, his talk was great….

Even if you don’t go to the conference, you can still appreciate all the art.  You can visit the Bridges 2015 online gallery and see all the art that was exhibited.  Not only are there descriptions of all the works by the artists themselves, but there’s also contact information so you can get in touch if you’d like.  Please do!

The Bridges 2016 gallery is not online yet, but I’ve got two pieces accepted for this year’s exhibition.  This is my favorite.

Then there are the talks.  You learn so much just by going to them.  The range of topics is incredibly diverse — look back at the list above!  Last summer, I gave a talk about Random Walks on Vertices of Archimedean Tilings.  My favorite work discussed in the paper is Bear.  You can read the paper to learn how it was made, if you’re interested.  The first print of Bear is hanging in my friend Cory’s house in Florida.  Hi, Cory!

As you’ll see if you click on the link to my paper, there is an archive of all papers — over 1000! — given at Bridges conferences since 1998.  What’s nice is that you can actually search for specific topics, so it’s easy to use.  No shortage of reading material on mathematics and art….

In addition to the exhibition and all the presentations, there are also dance performances, poetry readings, theatre performances, movie showings, a music night — any number of interesting activities relating mathematics and art.  If you want to learn more, just go to the Bridges 2016 website.  There’s complete information on the upcoming conference there.

This year, the conference is being held at the University of Jyväskylä in Jyväskylä, Finland.  I’ve never been to Finland before, so I’m looking forward to an exciting trip!  What’s also nice about the conference is that in the evenings, you can just take a stroll with other Bridgers, looking for some interesting place to have dinner.  I always love exploring new countries, and especially like trying new cuisines!

But even though Bridges 2016 is in July, I actually started preparing last November.  Since there was a January deadline for submitting papers to the conference, and since I knew I’d be doing a lot of traveling over our Winter Break, I wanted to get an early start.  The papers are all reviewed by three referees, so your mathematics should be sound.  Usually they have comments to make, and so you often need to make some revisions a few months later before a final submission.

My paper is on fractals this year.  A lot of what I wrote in that paper you’ve already seen on my blog — but I’ll be sure to give a link when I write a follow-up post on Bridges 2016 later on in the summer.  Here’s one of my favorite images discussed in the paper.

There are deadlines to submit artwork as well, so it’s important to be organized.  For both papers and artwork, the online submission system is actually really easy to use.  I just wanted to let you know something about the process so you can submit something to next year’s conference….

Last Fall, I received an email about a new addition to the Bridges menu — student scholarships.  And in my calculus class, I had a student, Nick, who is a double major in mathematics and art.

Turns out Nick was really interested in trying to submit to Bridges, so we worked out a one-credit directed study course just for that purpose.  As of this moment, I’m happy to say that two of Nick’s artworks were accepted!  And we just submitted the final revisions to his paper, and are waiting to hear back.  We should know about the scholarship soon — I’ll update this post when I have more information.  One of my favorite images from Nick’s paper is this one.

You can read the paper to see how he creates it….link to follow.

So think about including Bridges in your future travel!  Many artists bring their families and make a summer vacation out of the conference.  It’s quite an experience.

And if you’re a student, consider submitting as well!  Maybe you’ll earn a scholarship to attend:  here’s more information on the Student Travel Scholarship.  Preference is given to those students who submit papers, artwork, or movies.

You will need a letter from one of your teachers or professors — so ask someone to be your mentor.  If you can’t find someone, well, just ask me.  I’ll be glad to help out (as long as I don’t get too many requests!).

Later on in the summer, I’ll tell you all about the experience.  Hope to see you at some Bridges conference soon!

P.S. (10 July 2016):  Nick did receive the travel scholarship.  Congratulations!

## Creating Fractals VIII: PostScript Programming

June 1998.

That’s when I wrote the code I’m going to talk about today.

When I became interested in computer graphics, the windowed environment you’re used to by now just didn’t exist.  For example, the computer I used (called a Zenith H19) had a screen of 80 x 25 characters.  Yes, characters.  Rather like a larger version of those still unimaginable TI calculator screens (couldn’t resist).

There was no graphical user interface (GUI), mouse, touch-screen.  If you wanted to draw a picture, you had to write a computer program to do it.

And if you wanted to see what it looked like, well, you had to print it out.

Yep.  That was the only way to see if your code was correct.  It made debugging very challenging.  But that’s all there was.  So I learned how to write PostScript code.

There are two features of PostScript I want to discuss today.  First, the syntax is written in postfix notation, like most HP calculators.  If you wanted to add 2 and 3 on a typical calculator, you’d type in “2 + 3 =.”  The result would be 5.  This is called infix notation, where the operator is written in between the arguments.

In PostScript, though, you’d write “2 3 add.”  In other words, the arguments come first.  Then they’re added together — the operator comes last, which is referred to as postfix notation.  So if you wanted to write

(2 + 3) x (10 - 4)

in PostScript, you’d write

2 3 add 10 4 sub mul.

Notice that the multiplication is done last.

When programming in Python, most functions are described using prefix notation.  In other words, you give the function name first, and then the arguments to the function.  For example, randint(1,10) would output a random number between 1 and 10, inclusive.  If we had to write the arithmetic expression above in prefix notation, it would look like

mul(add(2, 3), sub(10, 4)).

You probably encounter these ideas on a daily basis.  For example, if you want to delete a bunch of files, you first select them, and then do something like click “Move to trash” from a dropdown menu.  But if you’re writing an email, you usually click on the attach icon first, and then select the files you’d like to attach.  Postfix and prefix.

Of course this is just an introduction to using the different types of notation.  In general, you’d transform the above arithmetic expression into what is called an expression tree, like this:

Then prefix notation may be derived from a preorder traversal of the expression tree, infix notation from an inorder traversal of the tree, and finally, postfix notation comes from a postorder traversal of the tree.  Feel free to ask the internet all about binary trees and tree traversals.  Trees are very important data structures in computer science.
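To make the traversals concrete, here’s a small Python sketch (the `Node` class and traversal functions are my own, just for illustration) that builds the expression tree for (2 + 3) x (10 - 4) and reads off all three notations:

```python
# A tiny expression tree for (2 + 3) x (10 - 4), with the three
# traversals that produce prefix, infix, and postfix notation.

class Node:
    def __init__(self, value, left=None, right=None):
        self.value, self.left, self.right = value, left, right

def preorder(n):
    return [] if n is None else [n.value] + preorder(n.left) + preorder(n.right)

def inorder(n):
    return [] if n is None else inorder(n.left) + [n.value] + inorder(n.right)

def postorder(n):
    return [] if n is None else postorder(n.left) + postorder(n.right) + [n.value]

tree = Node("*", Node("+", Node("2"), Node("3")),
                 Node("-", Node("10"), Node("4")))

print(" ".join(preorder(tree)))    # * + 2 3 - 10 4    (prefix)
print(" ".join(inorder(tree)))     # 2 + 3 * 10 - 4    (infix, no parentheses)
print(" ".join(postorder(tree)))   # 2 3 + 10 4 - *    (postfix)
```

Notice that the postorder traversal gives exactly the PostScript expression from above, with `add`, `sub`, and `mul` written as symbols.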

The second feature of PostScript I’d like to discuss today is that it is an example of a stack-based language.  For example, if you typed

4  5  3  7  8

in a PostScript program, the number 4 would be pushed onto the stack — that is, the working memory of the program — and then 5, then 3, 7, and finally 8.  (For convenience, we’ll always think of the bottom of the stack as being on the left, and the top of the stack as being on the right.  Easier to write and read that way.)

If you want to follow along, do the following:  write 4 on an index card or small piece of paper, then write 5 on another and put it on top of the 4, and so on, until you put the 8 on top of the stack.  You really do have a stack of numbers.

Now when you type the “add” function, PostScript thinks, “OK, time to add!  What are the top two numbers on the stack?  Oh, 8 and 7!  I’ll add these and put the sum back on the stack for you!”

So you type

4  5  3  7  8  add,

and PostScript performs the appropriate action and the stack now becomes

4  5  3  15.

If you now want to divide (div in PostScript), the interpreter pops the top two numbers, divides the second number on the stack by the top number, and puts the result back on the stack.  So

4  5  3  7  8  add  div

would create the stack

4  5  0.2,

since 3 divided by 15 is 0.2.

You should be able to figure out that

4  5  3  7  8  add  div  mul  sub

would just leave the single number 3 on the stack.
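The whole stack discipline fits in a few lines of Python, if you’d like to experiment without a PostScript interpreter handy (the function name and token handling here are mine, just for illustration):

```python
# A few lines of Python mimicking the PostScript stack discipline.
# Numbers are pushed; an operator pops its two arguments and pushes
# the result, with the second-from-top number as the first operand.

def eval_postfix(expression):
    """Evaluate a whitespace-separated postfix expression."""
    ops = {
        "add": lambda a, b: a + b,
        "sub": lambda a, b: a - b,
        "mul": lambda a, b: a * b,
        "div": lambda a, b: a / b,
    }
    stack = []
    for token in expression.split():
        if token in ops:
            b = stack.pop()      # the top of the stack...
            a = stack.pop()      # ...and the number beneath it
            stack.append(ops[token](a, b))
        else:
            stack.append(float(token))
    return stack

print(eval_postfix("2 3 add 10 4 sub mul"))   # [30.0]
```

Try the longer expressions above and watch what’s left on the stack.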

Graphics are drawn this way, too.  For example, we would create a square with the commands

0 0 moveto  0 1 lineto  1 1 lineto  1 0 lineto  0 0 lineto  closepath  stroke.

Again, the arguments come first.  We move to the point (0,0), and then draw a line segment to (0,1), and so forth.  Closepath “ties” the beginning and end of our path together, and the command “stroke” actually draws the square on the page.

Commands like

1  0.5  0  setrgbcolor

set the current color to orange, and

5 setlinewidth

sets the width of lines to be drawn.  You can change any aspect of any drawn image — the only limit is your imagination!
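If you’d like to try these commands out, here’s a Python sketch that assembles them into a complete little PostScript file.  The scale, translate, and linewidth values are my own choices, added so the unit square shows up at a visible size on the page:

```python
# Assemble a complete, minimal PostScript program drawing the orange
# square from the commands above, and write it to a file you can open
# in a viewer like MacGhostView.

lines = [
    "%!PS",                  # header identifying the file as PostScript
    "72 72 scale",           # one PostScript unit = 72 points = 1 inch
    "4 4 translate",         # move the origin away from the page corner
    "1 0.5 0 setrgbcolor",   # orange, as above
    "0.05 setlinewidth",     # line width in the scaled (inch) units
    "0 0 moveto 0 1 lineto 1 1 lineto 1 0 lineto 0 0 lineto closepath stroke",
    "showpage",              # actually emit the page
]
program = "\n".join(lines)

with open("square.ps", "w") as f:
    f.write(program)

print(program.splitlines()[0])   # %!PS
```

Open square.ps in your PostScript viewer and you should see an orange square.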

Of course this is only the most basic introduction to the PostScript programming language.  But if you’re interested, there’s good news.  First, it shouldn’t be hard to download a PostScript interpreter for your computer.  I use MacGhostView — installing it will also require installing XQuartz, but not to worry.

Second, there are some great resources online.  Back when I was learning PostScript, there were three very helpful books — the Blue Book, the Red Book, and the Green Book.  Yep, that’s what they were called.  The Blue Book is the place you should start — it includes a tutorial on all the basics.  The Red Book is the Reference Manual — handy in that it lists PostScript commands, their syntax, and examples of how to use them.  The Green Book is for the serious programmer, and discusses at a high level how the PostScript language is organized.

Finally, here is the code I wrote all those years ago!  (Click here: Sierpinski.)    I did modify the original code so it produces a Sierpinski triangle.  It will be a little daunting to figure out if you’re just starting to learn PostScript, but I include it as it motivated me to talk about PostScript today.

And a final few words about motivation.  Try to learn programming languages which expose you to different ways of thinking.  This develops flexibility of mind for coding in general.  Whenever I learn a new language, I’m always thinking things like, “Oh, that’s the same as stacks in PostScript,” or “Yeah, like maps in LISP.”

As you learn more, your programming toolbox gets bigger, and you find that the time it takes from conceiving of an idea to successfully implementing it becomes shorter and shorter.  That’s when it starts really getting fun….

## Creating Fractals VII: Iterated Function Systems III

Today, we’re going to wrap up our discussion of iterated function systems by looking at an algorithm which may be used to generate fractal images.

Recall (look back at the first post if you need to!) the Sierpinski triangle.  No matter what initial shape we started with, the iterations of the function system eventually looked like the Sierpinski triangle.

But there’s a computational issue at play.  Since there are three different transformations, each iteration produces three times the number of objects.  So if we carried out 10 iterations, we’d have $3^{10} = 59{,}049$ objects to keep track of.  Not too difficult for a computer.

But let’s look at the Sierpinski carpet.  With eight different transformations, we’d have $8^{10} = 1{,}073{,}741{,}824$ objects to keep track of.  Keeping track of a billion objects just isn’t practical.

Of course you could use fewer iterations — but it turns out there’s a nice way out of this predicament. We can approximate the fractal using a random algorithm in the following way.

Begin with a single point (usually (0,0) is the easiest).  Then randomly choose a function in the system, and apply it to this point.  Then iterate:  keep randomly choosing a function from the system, then apply it to the last computed point.

What the theory says (again, read the Barnsley book for all the proofs!) is that these points keep getting closer and closer to the fractal image determined by the system.  Maybe the first few are a little off — but if we just get rid of the first 10 or 100, say, and plot the rest of the points, we can get a good approximation to the fractal image.
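Here’s a minimal Python sketch of this random algorithm for the Sierpinski triangle system (the function and variable names are mine; plotting the points is left to whatever graphics library you prefer):

```python
import random

# The random algorithm for the Sierpinski triangle.  F1, F2, F3 are
# the three transformations of the function system.

def F1(p): return (0.5 * p[0], 0.5 * p[1])
def F2(p): return (0.5 * p[0] + 1, 0.5 * p[1])
def F3(p): return (0.5 * p[0], 0.5 * p[1] + 1)

def chaos_game(n, skip=100, seed=0):
    """Return n points approximating the fractal, discarding the first skip."""
    random.seed(seed)
    p, points = (0.0, 0.0), []
    for i in range(n + skip):
        p = random.choice([F1, F2, F3])(p)   # each map equally likely
        if i >= skip:
            points.append(p)
    return points

pts = chaos_game(5000)
print(len(pts))   # 5000
```

Plot pts with your favorite library and the Sierpinski triangle emerges.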

Consider the Sierpinski triangle again.  Below is what the first 20 points look like (after (0,0)), numbered in the order in which they’re produced.

Let’s look at this in a bit more detail.  For reference, we’ll copy the function system which produces the Sierpinski triangle from the first post.

$F_1\left(\begin{matrix}x\\y\end{matrix}\right)=\left[\begin{matrix}0.5&0\\0&0.5\end{matrix}\right] \left(\begin{matrix}x\\y\end{matrix}\right)$

$F_2\left(\begin{matrix}x\\y\end{matrix}\right)=\left[\begin{matrix}0.5&0\\0&0.5\end{matrix}\right] \left(\begin{matrix}x\\y\end{matrix}\right)+\left(\begin{matrix}1\\0\end{matrix}\right)$

$F_3\left(\begin{matrix}x\\y\end{matrix}\right)=\left[\begin{matrix}0.5&0\\0&0.5\end{matrix}\right] \left(\begin{matrix}x\\y\end{matrix}\right)+\left(\begin{matrix}0\\1\end{matrix}\right)$

Now here’s how the color scheme works.  Any time $F_1$ is randomly chosen, the point is colored red.  For example, you can see that after Point 6 was drawn, $F_1$ was chosen, so Point 7 was simply halfway between Point 6 and the origin.

Any time $F_2$ is chosen, the point is colored blue.  For example, after Point 8 was drawn, $F_2$ was randomly chosen.  You can see that if you move Point 8 halfway to the origin and then over right one unit, you land up exactly at Point 9.

Finally, any time $F_3$ is chosen, the point is colored orange.  So for example, it should be clear that moving Point 7 halfway toward the origin and then up one unit results in Point 8.

Of course plotting more points results in a more accurate representation of the fractal.  Below is an image produced using 5000 points.

To get more accuracy, simply increase the number of points, but decrease the size of the points (so they don’t overlap).  The following image is the result of increasing the number of points to 50,000, but using points of half the radius.

There’s just one more consideration, and then we can move on to the Python code for the algorithm.  How do we randomly choose the next affine transformation?

Of course we use a random number generator to select a transformation.  In the case of the Sierpinski triangle, each of the three transformations had the same likelihood of being selected.

Now consider one of the fractals we looked at last week.

If the algorithm chose either transformation with equal probability, this is what our image would look like:

Of course there’s a huge difference!  What’s happening is that transformation $F_{{\rm orange}}$ actually corresponds to a much smaller portion of the fractal than $F_{{\rm green}}$ — and so to get a more realistic idea of what the fractal is really like, we need to choose it less often.  Otherwise, we overemphasize those parts of the fractal corresponding to the effect of $F_{{\rm orange}}.$

Again, according to the theory, it’s best to choose probabilities roughly in proportion to the portions of the fractal corresponding to the affine transformations.  So rather than a 50/50 split, I chose $F_{{\rm green}}$ 87.5% of the time, and $F_{{\rm orange}}$ just 12.5% of the time.  As you’ll see when you experiment, a few percentage points may really impact the appearance of a fractal image.
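In Python, the only change needed for unequal probabilities is the selection step; `random.choices` accepts weights directly.  Here’s a sketch using the spiral’s two transformations from the previous post and the 87.5/12.5 split:

```python
import math, random

# Weighted selection in the random algorithm.  F_green combines a
# scaling by 0.95 with a 20 degree rotation; F_orange scales by 0.4
# and translates by (1, 0), as in the spiral example.

c, s = math.cos(math.radians(20)), math.sin(math.radians(20))

def F_green(p):
    x, y = p
    return (0.95 * (c * x - s * y), 0.95 * (s * x + c * y))

def F_orange(p):
    return (0.4 * p[0] + 1, 0.4 * p[1])

random.seed(1)
p, points = (0.0, 0.0), []
for i in range(20000):
    F = random.choices([F_green, F_orange], weights=[0.875, 0.125])[0]
    p = F(p)
    if i >= 100:          # discard the first points, as before
        points.append(p)

print(len(points))        # 19900 points, ready to plot
```

Change the weights and replot to see how a few percentage points alter the image.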

From a theoretical perspective, it actually doesn’t matter what the probabilities are — if you let the number of points you draw go to infinity, you’ll always get the same fractal image!  But of course we are limited to a finite number of points, and so the probabilities do in fact strongly influence the final appearance of the image.

So once you’ve chosen some transformations, that’s just the beginning.  You’ve got to decide on a color scheme, the number of points and their size, as well as the probabilities that each transformation is chosen.  All these choices impact the result.

Now it’s your turn!  Here is the Sage link to the python code which you can use to generate your own fractal images.  (Remember, you’ve got to copy it into one of your own Projects first.)  Freely experiment — I’ve also added examples of non-affine transformations, as well as affine transformations in three dimensions!

And please comment with interesting images you create.  I’m interested to see what you can come up with!

## Creating Fractals VI: Iterated Function Systems II

What I find truly remarkable about iterated function systems is the astonishing variety of fractals you can create with just two affine transformations.  Today, we’ll explore some more examples so you can get a better feel for how the particular affine transformations used affect the final image.

Let’s begin with the example from last week, which I color coded for easy reference.

Below are the affine transformations which generated this fractal.

$F_{{\rm green}}\left(\begin{matrix}x\\y\end{matrix}\right)=\left[\begin{matrix}0.95&0\\0&0.95\end{matrix}\right]\left[\begin{matrix}\cos(20^\circ)&-\sin(20^\circ)\\\sin(20^\circ)&\cos(20^\circ)\end{matrix}\right]\left(\begin{matrix}x\\y\end{matrix}\right)$

$F_{{\rm orange}}\left(\begin{matrix}x\\y\end{matrix}\right)=\left[\begin{matrix}0.4&0\\0&0.4\end{matrix}\right]\left(\begin{matrix}x\\y\end{matrix}\right)+\left(\begin{matrix}1\\0\end{matrix}\right)$

Now compare the different colors in the image with the transformations.  The first transformation combines a scaling by 0.95 with a rotation by 20 degrees.  If you look closely, you’ll see that the green part of the image looks the same as a slightly smaller copy of the entire fractal, except rotated by 20 degrees from the center of the spiral (note that there is no translation involved in $F_{{\rm green}}$).

The orange part of the image is simply a scaled version of the entire fractal (by a factor of 0.4), but moved over 1 unit.  You can see this by looking at the form of the affine transformation $F_{{\rm orange}}.$

The pair of affine transformations used to generate the spiral exactly describes the self-similarity.  This is easy to see after the fact — but I really have no helpful suggestion as to how to predict the form of this wonderful spiral just by looking at the transformations.

Now let’s just slightly alter the second transformation so that in addition to scaling by a factor of 0.4, there is a shear as well.

$F_{{\rm orange}}\left(\begin{matrix}x\\y\end{matrix}\right)=\left[\begin{matrix}0.4&0.4\\0&0.4\end{matrix}\right]\left(\begin{matrix}x\\y\end{matrix}\right)+\left(\begin{matrix}1\\0\end{matrix}\right)$

Notice how the fractal changes.

The shear distorts each arm of the spiral — stretching it in the process and creating more interaction between the various arms.

Now let’s go back to the original spiral and change the first transformation instead, slightly altering the scale factor.

$F_{{\rm green}}\left(\begin{matrix}x\\y\end{matrix}\right)=\left[\begin{matrix}0.9&0\\0&0.9\end{matrix}\right]\left[\begin{matrix}\cos(20^\circ)&-\sin(20^\circ)\\\sin(20^\circ)&\cos(20^\circ)\end{matrix}\right]\left(\begin{matrix}x\\y\end{matrix}\right)$

Here is what happens:

I really don’t know how to predict that this is what would happen — but I find it fascinating how even small changes in parameters can really change the way a fractal looks.

I can’t resist one more image created by changing the second transformation to

$F_{{\rm orange}}\left(\begin{matrix}x\\y\end{matrix}\right)=\left[\begin{matrix}0.4&0\\0.4&0.15\end{matrix}\right]\left(\begin{matrix}x\\y\end{matrix}\right)+\left(\begin{matrix}1\\0\end{matrix}\right).$

Here is the result.

There’s so much you can do with just one pair of transformations!

While it may not be possible to guess exactly what a fractal will look like, it’s possible to know some features of a fractal based upon how you set up your transformations.  For example, consider this combination of a shear and a scaling:

$G_1\left(\begin{matrix}x\\y\end{matrix}\right)=\dfrac23\left[\begin{matrix}1&-1/2\\0&1\end{matrix}\right]\left(\begin{matrix}x\\y\end{matrix}\right)+\left(\begin{matrix}1\\1\end{matrix}\right).$

There will be some aspect of the fractal which can be described by a shear.  For the second transformation, just take the opposite of the first, as follows.

$G_2\left(\begin{matrix}x\\y\end{matrix}\right)=-\dfrac23\left[\begin{matrix}1&-1/2\\0&1\end{matrix}\right]\left(\begin{matrix}x\\y\end{matrix}\right)-\left(\begin{matrix}1\\1\end{matrix}\right).$

This must force the fractal to have 180 degree rotational symmetry!  Think about how the algorithm works — take the image under one transformation, then the other — but by their definitions, the images must be symmetric about the origin.
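A quick numerical check of this fact (the function definitions below just transcribe $G_1$ and $G_2$ into Python):

```python
# Check that G2, defined as the opposite of G1, satisfies
# G2(v) = -G1(v): negating every point maps the image of G1 onto
# the image of G2, which forces the 180 degree rotational symmetry.

def G1(p):
    x, y = p
    return ((2/3) * (x - y/2) + 1, (2/3) * y + 1)

def G2(p):
    x, y = p
    return (-(2/3) * (x - y/2) - 1, -(2/3) * y - 1)

for p in [(0.3, -1.2), (2.0, 0.5), (-0.7, 0.1)]:
    gx, gy = G1(p)
    hx, hy = G2(p)
    assert abs(hx + gx) < 1e-12 and abs(hy + gy) < 1e-12

print("G2(v) = -G1(v) at every test point")
```

The same trick works for any affine transformation: negate both the matrix and the translation, and the resulting fractal is symmetric about the origin.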

So it is possible to use our knowledge of affine transformations to design, even if to a limited extent, some features of the resulting fractal.

As another example, we can assemble a fractal similar to two smaller copies of itself, one piece of which is a 90 degree rotation of the other.  Use $G_1$ as above, and then use

$G_2=\left[\begin{matrix}0&-1\\1&0\end{matrix}\right]G_1.$

Here is the resulting fractal.

You can clearly see how the red piece is a 90 degree counterclockwise rotation of the purple piece.  This follows from how we created the second transformation from the first.

One reason for the diversity of fractals is the number of parameters needed to specify two affine transformations — twelve in all, six for each.  Now some sets of parameters may create the same fractal — but even so, the wealth of variations is still staggering.

But will just any affine transformations work?  There are some constraints — for example, you can’t describe a fractal in terms of larger copies of itself.  So the affine transformations cannot make a starting object bigger, or else each successive iteration will expand.  Of course you can still use such affine transformations to create interesting graphical effects — but the resulting image will not be an approximation to a fractal.

Next week, we’ll look at the algorithm which I used to create all these fractal images, and I’ll give you some Python code you can use yourself.  But before I close, I’d like to share a few favorite fractals which are generated by just two affine transformations.

First, the Beetle.  It looks like it’s scampering toward you.

And the second, well, it’s just yellow.

Until next week!

## Creating Fractals V: Iterated Function Systems I

So just what are fractals?

In this blog, we’ve looked at a few examples of fractals so far — the Koch snowflake, for example — and informally called images created by the same algorithm “fractal images.”   But what makes the Koch snowflake a fractal?

Regardless of which definition you find on the internet, you’ll find ideas about self-similarity: some feature of the image keeps being repeated at ever smaller scales.  Truly rigorous definitions require some truly advanced mathematics…but we’ll illustrate the idea with another classic fractal, the Sierpinski triangle.

Begin with any triangle, and then remove the triangle formed by connecting the midpoints of the sides, as shown below.

Now repeat — in each triangle, remove the triangle formed by connecting the midpoints of the sides.  Two more iterations result in the following image.

And five more iterations gives the Sierpinski triangle.

Of course, in a mathematical sense, this process goes on infinitely — you still keep getting triangles on each iteration, so you’re never done.  But as far as creating computer graphics is concerned, you can just stop when the iterations start looking the same.

The self-similarity of the Sierpinski triangle should now be fairly evident.  The way it’s constructed, you’ll notice that each of the three smaller triangles shown in the first iteration looks identical to the entire Sierpinski triangle, except that it’s shrunk by a factor of two along all dimensions.  This is self-similarity:  one can say that the Sierpinski triangle is, in a mathematical sense, built up of smaller copies of itself.

Now how can we describe this self-similarity?  Using affine transformations.  (If the notations and functions below aren’t familiar to you, study up on linear and affine transformations, and how you use matrices to represent them.)

First, we’ll color code parts of the Sierpinski triangle for easy reference.

Next, we’ll introduce a coordinate system.  Let the origin be the vertex of the right angle of the triangle, with the usual x– and y-axes.  If you apply the transformation

$F_1\left(\begin{matrix}x\\y\end{matrix}\right)=\left[\begin{matrix}0.5&0\\0&0.5\end{matrix}\right] \left(\begin{matrix}x\\y\end{matrix}\right)$

to the entire Sierpinski triangle, you get the red “shrunk by half” triangle at the right angle of the Sierpinski triangle.

Now if you apply the transformation

$F_2\left(\begin{matrix}x\\y\end{matrix}\right)=\left[\begin{matrix}0.5&0\\0&0.5\end{matrix}\right] \left(\begin{matrix}x\\y\end{matrix}\right)+\left(\begin{matrix}1\\0\end{matrix}\right)$

to the Sierpinski triangle, you’ll get the blue portion — shrunk by a factor of two, and moved over one unit.  Finally, if you apply the transformation

$F_3\left(\begin{matrix}x\\y\end{matrix}\right)=\left[\begin{matrix}0.5&0\\0&0.5\end{matrix}\right] \left(\begin{matrix}x\\y\end{matrix}\right)+\left(\begin{matrix}0\\1\end{matrix}\right),$

you’ll get the orange part, shrunk by a factor of two and moved up one unit.  (Note that the choice of moving one unit is arbitrary — you’ll get something that looks like the Sierpinski triangle as long as you move the same amount in each direction.)

What all this means is that the functions $F_1,$ $F_2,$ and $F_3$ mathematically describe the self-similarity of the Sierpinski triangle, and so they may be used to generate the Sierpinski triangle.  You can see where the term iterated function system comes from by thinking about using these functions iteratively to generate a fractal image.

Now here’s the amazing part.  If you look at the very first iteration we did, you’ll see three triangles, each representing one of these transformations.  So imagine beginning with a different object — say a unit square — and perform these three transformations; that is, shrink by half, shrink and move to the right, and shrink and move up.  You’ll get something like this:

After two more iterations, here is what you’ll see.

Notice the self-similarity.  If we keep going another six iterations, we get…

…the Sierpinski triangle again!

If you start out with a small circle, you get the following after five iterations.

What this means is that the Sierpinski triangle — in a very real, rigorously mathematical sense — is completely determined by the transformations which describe its self-similarity.  Yes, the final result is always a triangle, independent of what shape you start out with!  Filled squares, circles, whatever object you choose — the result will always be the Sierpinski triangle.
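Here’s what that deterministic iteration looks like in Python, starting from the corners of a unit square (a sketch; plotting is omitted):

```python
# The deterministic iteration described above: start with any shape
# (here, the four corners of a unit square) and repeatedly replace the
# point set by the union of its images under F1, F2, F3.

def F1(p): return (0.5 * p[0], 0.5 * p[1])
def F2(p): return (0.5 * p[0] + 1, 0.5 * p[1])
def F3(p): return (0.5 * p[0], 0.5 * p[1] + 1)

points = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
for _ in range(6):
    points = [F(p) for p in points for F in (F1, F2, F3)]

print(len(points))   # 4 * 3**6 = 2916 points approximating the triangle
```

Swap in any other starting points — a circle, a letter, anything — and after a few iterations the plot looks like the Sierpinski triangle.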

If you really want to know why, read Michael Barnsley’s Fractals Everywhere.  But be forewarned — you need to know a lot of mathematics to completely understand what’s going on.  But it really is amazing.

In order to create your own fractals, you don’t need to know all the mathematical details.  You just need to understand the nature of self-similarity, and how to represent it using affine transformations.

Another famous fractal is the Sierpinski carpet.

You can see the self-similarity here as well.  Note that there are eight smaller, identical copies of the Sierpinski carpet which make up the fractal.  So you would need eight affine transformations, one of which would be

$G\left(\begin{matrix}x\\y\end{matrix}\right)=\left[\begin{matrix}1/3&0\\0&1/3\end{matrix}\right] \left(\begin{matrix}x\\y\end{matrix}\right)+\left(\begin{matrix}1\\0\end{matrix}\right).$

Notice that the scaling is only by a factor of $1/3$ here, and that the choice of units to move is arbitrary (as with the Sierpinski triangle).  Further, it doesn’t matter what object you begin with — you’ll always end up with the Sierpinski carpet when you’re done.
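All eight carpet transformations can be written out in a few lines of Python; with these translation units the carpet fills the square $[0,3]\times[0,3]$ (the list comprehension is just one convenient way to generate them):

```python
# The eight affine transformations for the Sierpinski carpet: scale by
# 1/3, then translate to each of the nine grid cells except the center.

carpet_maps = [
    (lambda p, dx=dx, dy=dy: (p[0] / 3 + dx, p[1] / 3 + dy))
    for dx in (0, 1, 2) for dy in (0, 1, 2)
    if (dx, dy) != (1, 1)     # skip the removed middle square
]

print(len(carpet_maps))              # 8
print(carpet_maps[3]((0.0, 0.0)))    # (1.0, 0.0) -- the map G above
```

Feed these eight maps into either the deterministic or the random algorithm and you get the carpet.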

Where do we go from here?  Next week, we’ll look at how different choices of affine transformations affect the fractal image.  For example, suppose we return to the Sierpinski triangle example, but replace $F_1$ with the transformation

$F_1\left(\begin{matrix}x\\y\end{matrix}\right)=\left[\begin{matrix}0.25&0\\0&0.5\end{matrix}\right] \left(\begin{matrix}x\\y\end{matrix}\right).$

How should this affect the fractal?  Look at the result, and see if you can figure it out.

Can you see the two identical copies, each shrunk by half, generated by $F_2$ and $F_3$?  But note that the copy at the right angle is actually shrunk by one-fourth along the x direction — but this is expected as a result of the scaling by $0.25$ along the x direction in $F_1.$

In other words, when we change the description of the self-similarity (that is, when we change the affine transformations), we change what the fractal looks like — because, in a mathematical sense, the fractal is the set of affine transformations.  But here’s the difficult question — can you predict what the fractal will look like only by looking at the affine transformations?

Well, if you can, then you’re a lot better at this than I am!  It’s not an easy task — but it is what makes creating fractals fun.  Usually, you don’t know what the result will be.  That’s the adventure!  You often surprise yourself by what you discover.

In the next few posts, we’ll consider different types of affine transformations (like rotations, shears, etc.), and then discuss an algorithm which can generate approximations to fractals.  Stay tuned!

P.S.  This fractal was created using only two affine transformations.  Next week we’ll see how.

## New Koch Snowflakes

I was working with Thomas a few weeks ago in his quest to learn how to create fractal images.  He essentially asked the following question:  How can you locally change the appearance of a fractal?

For example, consider the well known Koch snowflake (which we discussed earlier on Day007), shown below.

Thomas asked if it might be possible to keep the overall shape of the snowflake, but change the details at the edges.  To discuss this possibility, let’s recall the algorithm for producing the snowflake (although this algorithm produces only one-third of the snowflake, so that three copies must be put together for the final image):

sf1(n):

• if n = 0, then draw a line segment,  otherwise:
• sf1(n-1); turn -60; sf1(n-1); turn 120; sf1(n-1); turn -60; sf1(n-1)

This is “pseudocode” for drawing part of the snowflake n levels deep.  The difficulty is that the segments are drawn only when n = 0.  Thomas wanted something different to happen, say, at level 3 and below in an algorithm which recursed 6 levels deep.
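For the curious, here is one way to make the pseudocode concrete in Python: sf1 builds a token list (“F” means draw a segment, a number means turn by that many degrees), and a small turtle-style interpreter traces it out.  The token representation is my own choice, not necessarily what the original code looked like.

```python
import cmath
import math

def sf1(n):
    """Token list for the pseudocode: 'F' draws a segment, numbers are turns in degrees."""
    if n == 0:
        return ["F"]
    s = sf1(n - 1)
    return s + [-60] + s + [120] + s + [-60] + s

def trace(tokens, step=1.0):
    """Interpret the tokens with turtle geometry, returning the visited points."""
    pos, heading = 0j, 0.0
    points = [pos]
    for t in tokens:
        if t == "F":
            pos += step * cmath.exp(1j * math.radians(heading))
            points.append(pos)
        else:
            heading += t
    return points
```

Since the net turn of each recursive call is zero, level n of the curve runs straight from $0$ to $3^n$ along the x-axis, with $4^n$ small segments in between.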

Then it occurred to me — there’s no reason the “otherwise” couldn’t be broken down into further cases:  make the recursive call depend on the value of n.

But in order to do this, I needed to use a different recursive call.  What to use?  Well, it happens that by slightly changing the angles in sf1, the snowflake undergoes a rotation by 30 degrees.  Here is the pseudocode, along with the image.

sf2(n):

• if n = 0, then draw a line segment,  otherwise:
• sf2(n-1); turn 120; sf2(n-1); turn -60; sf2(n-1); turn 120; sf2(n-1)

So how do we combine these fractals?  As suggested earlier, break down the original “otherwise” clause.  The following algorithm adds a “little” of the second snowflake to the first since the second algorithm is only called when n = 1.

new1(n):

• if n = 0, then draw a line segment, otherwise:
• if n = 1, sf2(n-1); turn 120; sf2(n-1); turn -60; sf2(n-1); turn 120; sf2(n-1)
• else new1(n-1); turn -60; new1(n-1); turn 120; new1(n-1); turn -60; new1(n-1)
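Here is a Python sketch of the idea, with the switchover level made a parameter k, so one function covers new1 (k = 1), new2 (k = 2), and so on.  The token-list representation (“F” = draw a segment, numbers = turns in degrees) is my own; only the turning angles come from the pseudocode.

```python
def sf1(n):
    """Koch snowflake piece: 'F' draws a segment, numbers are turns in degrees."""
    if n == 0:
        return ["F"]
    s = sf1(n - 1)
    return s + [-60] + s + [120] + s + [-60] + s

def sf2(n):
    """Rotated variant of the snowflake piece."""
    if n == 0:
        return ["F"]
    s = sf2(n - 1)
    return s + [120] + s + [-60] + s + [120] + s

def mixed(n, k):
    """Use the sf2 turning angles at levels n <= k, and sf1's angles above."""
    if n == 0:
        return ["F"]
    s = mixed(n - 1, k)
    if n <= k:
        return s + [120] + s + [-60] + s + [120] + s
    return s + [-60] + s + [120] + s + [-60] + s
```

With k = 0 this reproduces sf1 exactly, and with k = n it reproduces sf2, so the parameter interpolates between the two fractals one level at a time.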

Here is the resulting fractal image (recursing 6 levels):

You can already see a difference.  Now let’s try adding a little “more” of the second snowflake by using the second algorithm when n is 1 or 2, as shown below.

new2(n):

• if n = 0, then draw a line segment, otherwise:
• if n <= 2, sf2(n-1); turn 120; sf2(n-1); turn -60; sf2(n-1); turn 120; sf2(n-1)
• else new2(n-1); turn -60; new2(n-1); turn 120; new2(n-1); turn -60; new2(n-1)

Going one level further — using the second algorithm when n is 1, 2, or 3 — produces the following curve.

One level further yields:

And we’re done, since the next level will in fact give the rotated snowflake sf2(6) (although pieced together a little differently — but the resulting fractal is the same).

Now I’ll leave it up to you.  There’s really so much to explore here.  Here’s just one example.  Adding just one additional call to sf2(n-1) after the last one in the algorithm produces the following image:

Amazing, isn’t it?  Such a small change results in a very different feel to the final image.  And of course once small changes like these are made, there’s no reason to stop.  Maybe branch into more than two fractal routines?  Or maybe perform sf1 when n is odd, and sf2 when n is even.  Or….

The only limitation, it seems, is one’s imagination.  Comment if you create something interesting!

## Number Searches II

This week I’ll talk about the solution to the final puzzle from last week — so if you haven’t tried it yet and want to, now’s the time!

For reference, here’s the puzzle:

The hints were that all circled numbers in the grid were four digits long, and that every number in the grid was used at least once.  The object was to determine two bases:  the base used to write the numbers in the list, and the base used for the circled numbers in the grid.  Then the rest is easy!

To start, I chose the bases first.  My first attempt was to use 41 and 19, although this didn’t quite work out (for reasons we’ll soon see).  I didn’t want the numbers in the list to be too long, so I chose bases which weren’t too far apart.  The bases happened to be prime numbers — but later on I saw that this information really doesn’t help the solver at all.  No harm done, though.

Now the main issue was that I wanted the numbers both in the list and the grid to use only the digits 0–9.  So next I used a brute-force loop to check all numbers up to 10,000,000 which only used the digits 0–9 in base 41.  Of these, I checked which used only the digits 0–9 in base 19.  There were only 2640 numbers — not very many, actually, out of 10,000,000.  (I used Mathematica here, since using Python in the Sage platform would have been far too slow.  Also, it is very easy in Mathematica to convert numbers from one base to another using the functions “IntegerDigits” and “FromDigits.”)
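Here is a rough Python equivalent of that search (I’ve swapped Mathematica for Python, and the generation strategy, enumerating base-$b_1$ digit strings directly instead of scanning every integer, is my own shortcut, so the exact counts may differ at the margins):

```python
from itertools import product

def to_digits(n, b):
    """Digits of n in base b, most significant first (like Mathematica's IntegerDigits)."""
    if n == 0:
        return [0]
    digits = []
    while n:
        n, r = divmod(n, b)
        digits.append(r)
    return digits[::-1]

def shared_decimal_digits(limit, b1, b2):
    """Numbers below limit whose digits in base b1 AND in base b2 are all 0-9."""
    # How many base-b1 digits a number below limit can have.
    width = 1
    while b1 ** width < limit:
        width += 1
    found = set()
    # Enumerate base-b1 digit strings drawn from 0-9 directly.
    for digs in product(range(10), repeat=width):
        n = 0
        for d in digs:
            n = n * b1 + d
        if 0 < n < limit and max(to_digits(n, b2)) <= 9:
            found.add(n)
    return sorted(found)
```

Running it with a pair of bases like 43 and 19 takes well under a second, since only $10^5$ digit strings need checking rather than $10^7$ integers.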

But what was I going to do with these numbers?  I decided I wanted a solution strategy which didn’t use the ending 0 again — I had used this device twice already.  I thought that if I looked for numbers which were off by just 100 in base 41 in the grid — and made sure the solver had enough clues to determine the corresponding numbers in the list — well, I’d be making progress.

Let’s look at the puzzle for a moment, so you see what’s at play here.  Notice that the largest number is roughly three or four times the smallest (3 x 1 = 3 in any base larger than 3!).  But there are two small numbers close to each other — 14940 and 14965 — they can’t begin with 0, naturally, and there is only one number you can circle in the grid beginning with a 1.  So they must at least start with a 2.  But they can’t start with a 3 or greater, or else the largest numbers would have to start with at least a 9 — and 7 is the largest number in the grid.  Therefore the two smaller numbers must start with a 2, and the three largest must begin with a 7.

And you’ll notice that there are the two numbers 7034 and 7134 in the grid — off by 100.  But here was my problem — when I looked at my list of numbers which used only the digits 0–9 in both bases 19 and 41, I couldn’t find any which were 100 apart in base 41.

Then I realized why.  In base 41, 100 represents $41^2 = 1681.$  And the base 19 digits of 1681 are 4, 12, and 9.  See the problem?  There’s no way to be off by 12 in the base 19 list using only the digits 0–9!

So I needed to find a base whose square could be written in base 19 using single digits.  It turns out that 41 wasn’t too far off:  $43^2 = 526_{19}.$  And if you notice, 45311 and 45837 are off by exactly 526.  Recall that since these are the largest numbers, they must be the ones beginning with a 7.  (Note that the other four-digit number in the grid, 7042, is between 7034 and 7134, and so must correspond to 45334.)

What does this mean algebraically?  Let’s denote by L the base used in the number list, and let G be the base used in the grid.  To say that being off by $100_G$ in the grid is the same as being off by $526_L$ in the list is to say that

$G^2 = 5L^2 + 2L + 6.$

So this is one relationship between G and L.

Now there are two ways to proceed.  You might try to figure out another relationship between G and L using 7042 and 45334.  Note that 7042 is in fact G – 2 more than 7034 (observe that a carry is involved), and 45334 is $23_L = 2L + 3$ more than 45311.  This would mean that

$G - 2 = 2L + 3.$

Alternatively, note that the smallest two numbers must be represented by 2154 and 2164 in the grid (2407 is too big, since the numbers are only 25 apart in base L, which is 2L + 5).  But 2154 and 2164 are just 10 apart in base G (a difference of exactly G), so

$G = 2L + 5,$

which you’ll note is equivalent to the previous equation.

But we now have two equations in two unknowns — just substitute for G into the quadratic, rearrange terms and simplify, and you’ll end up with the quadratic equation

$L^2 - 18L - 19 = 0.$

It’s not hard to see (just factor:  $(L - 19)(L + 1) = 0$) that the only positive root of this equation is L = 19.  This means that

$G = 2 \times 19 + 5 = 43.$
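As a quick sanity check (my addition, not part of the original solution), the two relationships and the base conversions can all be verified numerically in Python:

```python
# Solved bases: L = 19 for the number list, G = 43 for the grid.
L, G = 19, 43

def from_digits(digits, b):
    """Value of a digit list in base b (like Mathematica's FromDigits)."""
    n = 0
    for d in digits:
        n = n * b + d
    return n

# The two relationships derived above, and the resulting quadratic.
assert G**2 == 5 * L**2 + 2 * L + 6    # 100 in base G equals 526 in base L
assert G == 2 * L + 5                  # 10 in base G equals 25 in base L
assert L**2 - 18 * L - 19 == 0         # (L - 19)(L + 1) = 0

# Grid entries (base G) match their list entries (base L).
assert from_digits([7, 0, 3, 4], G) == from_digits([4, 5, 3, 1, 1], L)  # 45311
assert from_digits([7, 1, 3, 4], G) == from_digits([4, 5, 8, 3, 7], L)  # 45837
assert from_digits([7, 0, 4, 2], G) == from_digits([4, 5, 3, 3, 4], L)  # 45334
assert from_digits([2, 1, 5, 4], G) == from_digits([1, 4, 9, 4, 0], L)  # 14940
assert from_digits([2, 1, 6, 4], G) == from_digits([1, 4, 9, 6, 5], L)  # 14965
```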

So we’re just about done!  I’ll leave it to you to do all the conversions from one base to the other.  But once we know the bases, the rest is easy.

Of course there was no way I could have created this puzzle without a computer — finding numbers which can be written with the digits 0–9 in two different bases is just too difficult to do by hand (at least for me).  But even when this was done, it  was not a trivial task.  I pored over the numbers, looking for ones with the “right” digits.  Since I wanted the largest numbers to begin with 7, I avoided 8’s and 9’s in the corners.  I thought the solver needed some entry point without having to go down too many dead ends.

And then, well, the numbers had to fit.  Since the corners are common to so many entries, I had to look for numbers which started and ended with suitable digits.  This took some creative juggling!

Finally, I wanted to make sure that there were numbers differing by 100 and 10 in the grid. Again, the solver needed an entry point into the puzzle.  Just writing a computer program to look at every single possibility could be done — but then why spend all the effort making a puzzle which can, in large part, be solved by hand?  Just randomly picking numbers and writing them in bases 19 and 43 would certainly have been possible, but far from interesting.

If you feel adventurous enough to create your own Number Search puzzle, please comment.  I’d love to see what others might do with this idea!