Imagifractalous! 6: Imagifractalous!

No, the title of today’s post is not a typo….

About a month ago, a colleague who takes care of the departmental bulletin boards in the hallway approached me and asked if I’d like to create a bulletin board about mathematical art.  There was no need to think it over — of course I would!

Well, of course we would, since I immediately recruited Nick to help out.  We talked it over, and decided that I would describe Koch-like fractal images on the left third of the board, Nick would discuss fractal trees on the right third, and the middle of the bulletin board would highlight other mathematical art we had created.

I’ll talk more about the specifics in a future post — especially since we’re still working on it!  But this weekend I worked on designing a banner for the bulletin board, which is what I want to share with you today.

I really had a lot of fun making this!  I decided to create fractals for as many letters of Imagifractalous! as I could, and use isolated letters when I couldn’t.  I did opt not to use a third fractal “A,” though, since I already had ideas for four fractal letters in the second line.

The “I”s came first.  You can see that they’re just relatively ordinary binary trees with small left and right branching angles.  I had already incorporated the ability to have the branches in a tree decrease in thickness by a common ratio with each successive level, so it was not difficult to get started.

I did use Mathematica to help me out, though, with the spread of the branches.  Instead of doing a lot of tweaking with the branching angles, I just adjusted the aspect ratio (the ratio of the height to the width of the image) of the displayed tree.  For example, if the first “I” is displayed with an aspect ratio of 1, here is what it would look like:

I used an aspect ratio of 6 to get the “I” to look just like I wanted.

Next were the “A”s.  The form of an “A” suggested an iterated function system to me, a type of transformed Sierpinski triangle.  Being very familiar with the Sierpinski triangle, it wasn’t too difficult to modify the self-similarity ratios to produce something resembling an “A.”  I also like how the first “A” is reminiscent of the Eiffel Tower, which is why I left it black.
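For readers who want to experiment, here is a minimal Python sketch of an iterated function system rendered by the chaos game.  These particular maps give the standard Sierpinski triangle; the “A” came from modifying self-similarity ratios like these, and the exact parameters are not reproduced here.

```python
import random

# Three fixed points of the IFS: the vertices of the Sierpinski triangle.
VERTICES = [(0.0, 0.0), (1.0, 0.0), (0.5, 1.0)]

def chaos_game(n, seed=1):
    """Iterate a randomly chosen map n times, each time contracting
    halfway toward one of the vertices, and collect the points."""
    random.seed(seed)
    x, y = 0.5, 0.5
    points = []
    for _ in range(n):
        vx, vy = random.choice(VERTICES)
        x, y = (x + vx) / 2, (y + vy) / 2
        points.append((x, y))
    return points

points = chaos_game(10000)
```

Plotting `points` reveals the triangle; distorting the vertex positions or the contraction ratio of each map distorts the attractor accordingly.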

I have to admit that discovering the “R” was serendipitous.  I was reading a paper about trees with multiple branchings at each node, and decided to try a few random examples to make sure my code worked — it had been some time since I tried to make a tree with more than two branches at each node.

When I saw this, I immediately thought, “R”!  I used this image in an earlier draft, but decided I needed to change the color scheme.  Unfortunately, I had somehow overwritten the Mathematica notebook with an earlier version and lost the code for the original “R,” but luckily it wasn’t hard to reproduce since I had the original image.  I knew I had created the branches only using simple scales and rotations, and could visually estimate the original parameters.

The “C” was a no-brainer — the fractal C-curve!  This was fairly straightforward since I had already written the Mathematica code for basic L-systems when I was working with Thomas last year.  This fractal is well-known, so it was an easy task to ask the internet for the appropriate recursive routine to generate the C-curve:

+45  F  -90  F  +45

For the coloring, I used simple linear interpolation from the RGB values of the starting color to the RGB values of the ending color.  Of course there are many ways to use color here, but I didn’t want to spend a lot of time playing around.  I was pleased enough with the result of something fairly uncomplicated.
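Here is a Python sketch of both pieces (the original code is in Mathematica): expanding the C-curve rule above into a polyline, plus the simple linear RGB interpolation used for the coloring.

```python
import cmath
import math

def c_curve(depth, z=0j, heading=0.0, length=1.0, points=None):
    """Expand the rule F -> +45 F -90 F +45, returning the final turtle
    state and the polyline vertices.  Each level scales lengths by 1/sqrt(2),
    so every replacement preserves the segment's endpoints."""
    if points is None:
        points = [z]
    if depth == 0:
        z += length * cmath.exp(1j * heading)   # draw one segment
        points.append(z)
        return z, heading, points
    heading += math.radians(45)
    z, heading, points = c_curve(depth - 1, z, heading, length / math.sqrt(2), points)
    heading -= math.radians(90)
    z, heading, points = c_curve(depth - 1, z, heading, length / math.sqrt(2), points)
    heading += math.radians(45)
    return z, heading, points

def lerp_color(start, end, t):
    """Linear interpolation between two RGB triples, 0 <= t <= 1."""
    return tuple((1 - t) * a + t * b for a, b in zip(start, end))

_, _, pts = c_curve(8)   # 2^8 = 256 segments
```

Coloring segment k of n with `lerp_color(start_rgb, end_rgb, k / (n - 1))` gives the gradient described above.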

For the “T,” it seemed pretty obvious to use a binary tree with branching angles of 90° to the left and right.  Notice that the ends of the branches aren’t rounded as they are in the “I”s; you can specify these differences in Mathematica.  Here, the branches are emphasized, not the leaves — although I did decide to use small, bright red circles for the leaves for contrast.

The “L” is my favorite letter in the entire banner!  Here’s an enlarged version:

This probably took the longest to generate, since I had never made anything quite like it before.  My inspiration was the self-similarity of the L-tromino, which may be made up of four smaller copies of itself.

The problem was that this “L” looked too square — I wanted something with a larger aspect ratio, but keeping the same self-similarity as much as possible.  Of course exact self-similarity isn’t possible in general, so it took a bit of work to approximate it as closely as I could.  I admit the color scheme isn’t too creative, but I liked how the bold, primary colors emphasized the geometry of the fractal.

The “O” was the easiest of the letters — I recalled a Koch-like fractal image I created earlier which looked like a wheel with spokes and which had a lot of empty space in the interior.  All I needed to do was change the color scheme from white-on-gray to black-on-white.

Finally, the “S.”  This is the fractal S-curve, also known as Heighway’s dragon.  It does help to have a working fractal vocabulary — I knew the S-curve existed, so I just asked the internet again….  There are many ways to generate it, but the easiest for me was to recursively produce a string of 0’s and 1’s which told me which way to turn at each step.  Easy from there.
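The blog doesn’t include that code, but one standard construction of the dragon’s 0/1 turn sequence is easy to sketch in Python: start with a single turn, then repeatedly append a 1 followed by the reversed complement of the sequence so far (with 1 meaning turn one way and 0 the other).

```python
def dragon_turns(order):
    """Turn sequence (0s and 1s) for Heighway's dragon of the given order.
    Each doubling step appends 1 and then the reversed, complemented
    sequence built so far."""
    turns = [1]
    for _ in range(order - 1):
        turns = turns + [1] + [1 - t for t in reversed(turns)]
    return turns
```

Walking a turtle forward one unit and turning 90° left or right according to each entry traces out the dragon curve.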

So there it is!  Took a lot of work, but it was worth it.  I’ll take a photo when it’s actually displayed — and update you when the entire bulletin board is finally completed.  We’ve only got until the end of the semester, so it won’t be too long….

In Imagifractalous! 1, I talked about varying parameters in the usual algorithm for creating the Koch curve to produce a variety of images, and casually mentioned p-adic valuations.  What happened was this:  about a year after I began exploring these interesting images, I did a Google search (as one periodically does when researching new and exciting topics) and stumbled upon the paper Arithmetic Self-Similarity of Infinite Sequences.

It was there that I recognized the sequence of 0’s and 1’s I’d been using all along, and this sequence was called a 2-adic valuation (mod 2).

Here’s the definition:  the p-adic valuation is the sequence whose nth term is the exponent of the highest power of p which divides n.  So the 2-adic valuation begins

0, 1, 0, 2, 0, 1, 0, 3, 0, 1, 0, 2, 0, 1, 0, 4, ….

Of course all terms with odd n are 0, and for even n, you just look at the exponent of the highest power of 2 dividing that even number.  And you can take this sequence (mod 2), or mod anything you like.
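The definition translates directly into code; here is a quick Python sketch.

```python
def valuation(n, p):
    """The p-adic valuation of n: the exponent of the highest
    power of p dividing n."""
    v = 0
    while n % p == 0:
        n //= p
        v += 1
    return v

seq = [valuation(n, 2) for n in range(1, 17)]   # the 2-adic valuation
seq_mod2 = [v % 2 for v in seq]                 # ...taken mod 2
```

Nothing here requires p to be prime, which is exactly why the experiments below with composite p are possible.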

I naturally did what any right-thinking mathematician would do — looked up p-adic sequences in The On-Line Encyclopedia of Integer Sequences.  Now it’s not that p-adic valuations are all that obscure, it’s just that I had never encountered them before.

Of course there it was:  A096268.  And if you read through the comments, you’ll see one which states that this sequence can be used to determine the angle you need to turn to create the Koch snowflake.

I wasn’t particularly discouraged — this sort of thing happens all the time when exploring new mathematical ideas.  You just get used to it.  But something unexpected happened.

I started experimenting with other p-adic valuations mod 2 (since I had two choices of angles), and found similar interesting behavior.  Not only that, p didn’t have to be prime — any integer would do.

But most papers about p-adic valuations assumed p was prime.  Why is that?  If $\nu_p$ is a p-adic valuation and p is prime, it’s not hard to show that

$\nu_p(mn)=\nu_p(m)+\nu_p(n),$

an often-used property in developing a theory of p-adic valuations.  But it only takes a moment to see that

$\nu_6(4)=0,\quad \nu_6(9)=0,\quad \nu_6(4\cdot9)\ne\nu_6(4)+\nu_6(9).$

Bad news for the p-adic theorists, but the fractal images didn’t seem to care whether p was prime or not….

So I didn’t plunge into researching p-adic valuations, since I needed a treatment which included p composite, which didn’t seem to be out there.

But here’s the neat part.  Most of the work I’d done to prove something already known — that a 2-adic valuation (mod 2) could be used to produce a Koch curve — could be used to study generic p.  So I was able to make fairly good progress in a relatively short amount of time, since I’d already thought about it a lot before.  I suspect it would have taken me quite a bit longer if I’d just casually read about the 2-adic result rather than prove it myself.

The progress made was similar to that of the 2-adic case — number of segments in each arm in the symmetric case, how many solutions given a particular order of symmetry, and so forth.

Now my fractal brain was truly revved up!  I enjoyed creating images using various p-adic valuations — the variety seemed endless in yet another dimension.  So I started brainstorming about ways to diversify my repertoire of fractal images even further.

The first two weeks of November were particularly fruitful.  Two ideas seemed to coalesce.  The first revolved around an old unexplored question:  what happened when you didn’t only change the angles in the algorithm to produce the Koch snowflake, but you also changed the lengths?

Of course this seemed to make the parameter space impossibly large, but I was in an adventurous mood, and the only thing at stake was a few moments with Mathematica generating uninteresting images.

But what I found was that if an image closed up with a particular symmetry, then as long as the sequence of edge lengths was appropriately periodic, the image with different edge lengths also closed up with the same order of symmetry!

This was truly mind-boggling at first.  But after looking at lots of images and diving into the algorithm, it’s not all that surprising.  You can observe that in the image above, segments of the same length occur in six different orientations which are 60 degrees apart, and so will ultimately “cancel out” in any final vector sum and take you right back to the origin.

Now I don’t have the precise characterization of “appropriately periodic” as yet, but I know it’s out there.  Just a matter of time.

The second big idea at the beginning of November involved skimming through the paper on arithmetic self-similarity mentioned above.  Some results discussed adding one sequence to another, and so I wondered:  what if you added two p-adic sequences and took the result (mod 2)?

Well, preliminary results were promising only when the p-adic valuations involved were of the same power of some p, like in the image above (which also involves different edge lengths).

These ideas are only in a very preliminary stage — it’s the perennial problem of, well, waiting….  It may look like adding 2-adic and 3-adic valuations doesn’t get you anywhere, but maybe that’s just because you need so many more iterations to see what’s actually happening.  So there’s lots more to explore here.

So as the parameter space gets larger — multiple different lengths, adding several different p-adic valuations — the variety becomes infinitely diverse, and the analysis becomes that much more involved.

But that makes it all the more intriguing.  And it will be all the more rewarding when I finally figure everything out….

Imagifractalous! 1: How it all began.

September 2, 2015 — that was the day Thomas sent an email to the faculty in the math department asking if anyone was willing to help him learn something about fractals.  And that was when it all began.

Now I’ve told this story in various forms over the past year or so on my blog, but I want to begin a thread along a different direction. I’d like to give a brief chronological history of my work with fractals, with an eye toward the fellow mathematician interested in really understanding what is going on.

I’ll be formal enough to state fairly precisely the mathematical nature of what I’m observing, so as to give the curious reader an idea of the nature of the mathematics involved.  But I’ll skip the proofs, so that the discussion doesn’t get bogged down in details.  And of course provide lots of pictures….

You might want to go back to read my post of Day007 — there is a more detailed discussion there.  But the gist of the post is the question posed by Thomas at one of our early meetings:  what happens when you change the angles in the algorithm which produces the Koch curve?

What sometimes happens is that the curve actually closes up and exhibits rotational symmetry.  As an example, when the recursive routine

F   +40   F   +60   F   +40   F

is implemented, the following sequence of segments is drawn.

Now this doesn’t look recursive at all, but rather iterative.  And you certainly will have noticed something curious — some sequences of eight segments, which I call arms, are traversed twice.

All of this behavior can be precisely described — all that’s involved is some fairly elementary number theory.  The crux of the analysis revolves around the following way of describing the algorithm:  after drawing the nth segment, turn +40 degrees if the highest power of 2 dividing n is even, and turn +60 degrees if it’s odd.  Then move forward.
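That drawing rule is easy to sketch in Python (the original is in Mathematica), tracking the turtle’s position and heading with complex numbers.

```python
import cmath
import math

def valuation2(n):
    """Exponent of the highest power of 2 dividing n."""
    v = 0
    while n % 2 == 0:
        n //= 2
        v += 1
    return v

def koch_vertices(steps, even_turn=40, odd_turn=60):
    """Draw unit segments; after drawing the nth segment, turn even_turn
    degrees if the 2-adic valuation of n is even, odd_turn degrees if odd."""
    z, heading = 0j, 0.0          # start at the origin, heading east
    vertices = [z]
    for n in range(1, steps + 1):
        z += cmath.exp(1j * heading)   # draw the nth segment
        vertices.append(z)
        turn = even_turn if valuation2(n) % 2 == 0 else odd_turn
        heading += math.radians(turn)
    return vertices
```

Plotting the vertices for the pair (+40, +60) reproduces the closed, symmetric figure discussed above; other angle pairs are a matter of changing the two arguments.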

I would learn later about p-adic valuations (this is just a 2-adic valuation), but that’s jumping a little ahead in the story.  I’ll just continue on with what I observed.

What is also true is that the kth arm (in this case, the kth sequence of eight segments) is retraced precisely when the exponent of the highest power of 2 dividing k is odd.  This implies the following curious fact:  the curve in the video is traced over and over again as the recursion deepens, but never periodically.  This is because the 2-adic valuation of the positive integers isn’t periodic.

So I’ve written up some results and submitted them to a math journal.  What I essentially do is find large families of angle pairs (like +40 and +60) which close up, and I can describe in detail how the curves are drawn, what the symmetry is, etc.  I can use this information to create fractal images with a desired symmetry, and I discuss several examples on a page of my mathematical art website.

As one example, for my talks in Europe this past summer, I wanted to create images with 64 segments in each arm and 42-fold symmetry.  I also chose to divide the circle into 336 parts — subdivision into degrees is arbitrary.  Or another way of looking at it is that I want my angles to be rational multiples of $\pi,$ and I’m specifying the denominator of that fraction.

Why 336?  First, I needed to make sure 42 divided evenly into 336, since each arm is then 8/336 away from the previous one (although they are not necessarily drawn that way).  And second, I wanted there to be enough angle pairs so I’d have some choices.  I knew there would be 96 distinct images from the work I’d already done, so it seemed a reasonable choice.  Below is one of my favorites.  The angles used here are 160 and 168 of the 336 parts the circle is divided into.

Now my drawing routine has the origin at the center of the squares in the previous two images, so that the rotational symmetry is with respect to the origin.  If you think about it for a moment, you’ll realize that if both angles are the same rational multiple of $\pi,$ you’ll get an image like the following, where one of the vertices of the figure is at the origin, but the center of symmetry is not at the origin.

Of course it could be (and usually is!) the case that the curve does not close — for example, if one angle is a rational multiple of $\pi,$ and the other is an irrational multiple of $\pi.$  Even when they do close, some appealing results are obtained by cropping the images.

So where does this bring me?  I’d like a Theorem like this:  For the recursive Koch algorithm described by

F   $\alpha_0$   F   $\alpha_1$   F   $\alpha_0$   F,

the curve closes up precisely when (insert condition on $\alpha_0$ and $\alpha_1$), and has the following symmetry (insert description of the symmetry here).

I’m fairly confident I can handle the cases where the center of symmetry is at the origin, but how to address the case when the center of symmetry is not the origin is still baffling.  The only cases I’ve found so far are when $\alpha_0=\alpha_1,$ but that does not preclude the possibility of there being others, of course.

At this point, I was really enjoying creating digital artwork with these Koch-like images, and was also busy working on understanding the underlying mathematics.  I was also happy that I could specify certain aspects of the images (like the number of segments in each arm and the symmetry) and find parameters which produced images with these features.  By stark contrast, when I began this process, I would just randomly choose angles and hope for the best!  I’d consider myself lucky to stumble on something nice….

So it took about a year to move from blindly wandering around in a two-dimensional parameter space (the two angles to be specified) to being able to precisely engineer certain features of fractal images.  The coveted “if-and-only-if” Theorem which says it all is still yet to be formulated, but significant progress toward that end has been made.

But then my fractal world explodes by moving from 2-adic to p-adic.  That’s for next time….

Koch Curves and Canopies

Time for another “math in the moment” post!  There have been a few interesting developments just this past week, so I thought I’d share them with you.

The first revolves around the Koch curve.  I’ve talked a lot about the algorithm used to generate the Koch curve when the angles are changed — but this is the first time I’ve experimented with three different angles.  For example, the recursive procedure

F   +45   F   +180   F   +315   F

generates the following spiral:

Looks simple, right?  Just 16 line segments.  But here’s the unexpected part — it takes 21,846 iterations to make!  That’s right — the horizontal segment in the upper left is the 21,846th segment to be drawn.

Why might this be?  Partly because one angle is 180, and partly because 45 + 315 = 360.  In other words, you are often just turning around and retracing your steps.  These arms are retraced multiple times, not just once or twice like in previous explorations.
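One way to see the retracing concretely is to unroll the recursion into an iterative turn rule.  This is a sketch of my own, by analogy with the two-angle case: after drawing segment n, turn by the angle indexed by the lowest nonzero base-4 digit of n.

```python
# The three turn angles of the routine F +45 F +180 F +315 F, indexed by
# their position in the rule.
ANGLES = {1: 45, 2: 180, 3: 315}

def turn(n):
    """Angle to turn after drawing the nth segment: strip trailing base-4
    zeros of n and read off its lowest nonzero base-4 digit."""
    while n % 4 == 0:
        n //= 4
    return ANGLES[n % 4]

def expand(depth):
    """Directly expand F -> F +45 F +180 F +315 F to the given depth,
    returning the sequence of turns between consecutive segments."""
    if depth == 0:
        return []
    inner = expand(depth - 1)
    return inner + [45] + inner + [180] + inner + [315] + inner
```

Comparing `expand(depth)` with `turn(n)` for n = 1, …, 4^depth − 1 confirms the two descriptions agree, so the spiral can be drawn iteratively without any recursion at all.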

I found this family of spirals by having Mathematica generate random angles and using the algorithm to draw the curves.  Without going into too many details, the “obvious” way to generalize the equation I came up with before didn’t work, so I resorted to trial and error — which is easy to do when the computer does all the work!

Notice the pattern in the angles:  45 (1/8 of a circle), 180 (1/2 of a circle), and 315 (7/8 of a circle).  Moving up to tenths, we get angles of 36 (1/10 of a circle), 180 (1/2 of a circle), and 324 (9/10 of a circle), which produce the image below.

Notice there are only five arms this time (not 10).  And it only takes 342 segments to draw!  There’s an alternating pattern here.  Moving up to twelfths gives 12 arms, and takes a staggering 5,592,406 segments to draw.  Yes, it really does take almost 6,000,000 iterations to draw the 24th and last segment!

With the help of Mathematica, though, I did find explicit formulas to calculate exactly how many iterations it will take to draw each type of image, depending on whether there are an even number of arms, or an odd number.  Now the hard part — prove mathematically why the formulas work!  That’s the next step.

I hope this “simple” example illustrates how much more challenging working with three different angles will be.  I think working out the proof in these cases will give me more insight into how the algorithm with three angles works in general, and might help me derive an equation analogous to the case when the first and third angles are the same.

The second development is part of an ongoing project with Nick to write a paper for the Bridges 2017 conference in Waterloo.  The project revolves around fractal trees generated by L-systems.  (We won’t be discussing L-systems in general, but I mention them because Wikipedia has a decent article on L-systems.)

Consider the examples above.  You first need to specify an angle, a, and a ratio, r.  In this example, a is 45 degrees, and r is 0.5.  Start off by drawing a segment of some arbitrary length (in this case the trunk of a tree).  Then turn left by the angle a, and recurse using the length scaled by a factor of r.  When this is done, do the same recursion after turning right by the angle a.

On the left, you should see three different branch lengths, each half the size of the one before.  You should also be able to easily see the branching to the left and right by 45 degrees, and how this is done at each level.

In the middle, there are six levels, with the smallest branches only 1/32 the length of the tree trunk.  Can you guess how many levels on the right?  There are twelve, actually.  At some point, the resolution of the screen is such that recursing to deeper levels doesn’t actually make the fractal look any different.
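The recursion just described can be sketched in a few lines of Python (the originals are Mathematica L-systems); here the segments are returned as endpoint pairs rather than drawn.

```python
import math

def tree_segments(levels, length=1.0, angle=45.0, ratio=0.5,
                  x=0.0, y=0.0, heading=90.0):
    """Draw the trunk, then recurse left and right with the length
    scaled by ratio and the heading turned by +/- angle degrees."""
    if levels == 0:
        return []
    x2 = x + length * math.cos(math.radians(heading))
    y2 = y + length * math.sin(math.radians(heading))
    segments = [((x, y), (x2, y2))]
    for turn in (angle, -angle):
        segments += tree_segments(levels - 1, length * ratio, angle, ratio,
                                  x2, y2, heading + turn)
    return segments

segs = tree_segments(6)   # six levels, as in the middle tree
```

With six levels there are 2^6 − 1 = 63 segments, and the smallest branches are (1/2)^5 = 1/32 the length of the trunk, matching the description above.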

What Nick is interested in is the following question.  Given an angle a, what is the ratio r such that all the branches of the tree just touch?  In the example below with a being 45 degrees, the leaves are just touching — increasing the ratio r would make them start to overlap.

What is r?  It turns out that there’s quite a bit of trigonometry involved to find the precise r given an angle a, but it’s not really necessary to go into all those details.  It’s just enough to know that Nick was able to work it out.

But what Nick is really interested in is just the canopies of these trees — in other words, just the outermost leaves, without the trunk or any of the branches.

Right now, he’s experimenting with creating movies which show the canopies changing as the angle changes — sometimes the ratio is fixed, other times it’s changing, too.

Two observations are worth making.  First, this was a real team effort!  Nick had done the programming and set up quite a bit of the math, and with the aid of Mathematica, I was able to help verify conjectures and get the expressions into a useable form.  We each had our own expertise, and so were each able to contribute in our own way to solving the problem.

Second, Nick is using mathematics to aid in the design process.  My first attempts to create symmetric curves using the algorithm for the Koch snowflake were fairly random.  But now that I’ve worked out the mathematics of what’s going on, I can design images with various interesting properties.

Likewise, Nick has in mind a very particular animation for his movies — using the just-touching canopies — and is using mathematics in a significant way to facilitate the design process.  Sure, you can let the computer crunch numbers until you get a good enough approximation — but the formula we derived gives the exact ratio needed for a given angle.  This is truly mathematical art.

I’ll keep you updated as more progress is made on these projects.  I’ll end with my favorite image of the week.  The idea came from Nick, but I added my own spin.  It’s actually canopies from many different trees, all superimposed on each other.  Enjoy!

Bridges: Mathematics and Art I

Just registered for Bridges 2016 last week!

Simply put, Bridges is the best mathematics conference ever.  You meet people from all around the world who are interested in the interplay between mathematics and art.

Not just M. C. Escher, either (though many are interested in his work).  Some Bridges attendees (shall we call them Bridgers?) are artists by profession, but others are mathematicians, computer scientists, physicists — you name it.  All are artists by vocation.

Interests span not only art in a more usual sense — watercolor, acrylic, oil, pastel, drawing — but also digital art, sculpture in almost any medium you can think of, poetry, architecture, music, fiber arts, dance, digital animations and movies, fashion, origami, and likely there will be some new art form introduced this summer as well!

The art exhibition is amazing.  You can see a few examples above.  You see the wooden spiral?  Each inlaid rectangle is a different piece of wood!  The craftsmanship is really superb.

One neat aspect is that most of the artists also attend Bridges.  That means if you see something you really like, you can just look for the right name tag and start up a conversation.  As you would expect, all the artists are eager to discuss their work.

Be ready for some surprises, too.  I met my friend Phil Webster at the conference – we started talking because I was from San Francisco, and he also lives in the Bay Area.  So we’ve met up a few times since the conference to discuss mathematics, art, and programming.  He even gave a talk in our Mathematics Colloquium at the University of San Francisco.  Of course, his talk was great….

Even if you don’t go to the conference, you can still appreciate all the art.  You can visit the Bridges 2015 online gallery and see all the art that was exhibited.  Not only are there descriptions of all the works by the artists themselves, but there’s also contact information so you can get in touch if you’d like.  Please do!

The Bridges 2016 gallery is not online yet, but I’ve got two pieces accepted for this year’s exhibition.  This is my favorite.

Then there are the talks.  You learn so much just by going to them.  The range of topics is incredibly diverse — look back at the list above!  Last summer, I gave a talk about Random Walks on Vertices of Archimedean Tilings.  My favorite work discussed in the paper is Bear.  You can read the paper to learn how it was made, if you’re interested.  The first print of Bear is hanging in my friend Cory’s house in Florida.  Hi, Cory!

As you’ll see if you click on the link to my paper, there is an archive of all papers — over 1000! — given at Bridges conferences since 1998.  What’s nice is that you can actually search for specific topics, so it’s easy to use.  No shortage of reading material on mathematics and art….

In addition to the exhibition and all the presentations, there are also dance performances, poetry readings, theatre performances, movie showings, a music night — any number of interesting activities relating mathematics and art.  If you want to learn more, just go to the Bridges 2016 website.  There’s complete information on the upcoming conference there.

This year, the conference is being held at the University of Jyväskylä in Jyväskylä, Finland.  I’ve never been to Finland before, so I’m looking forward to an exciting trip!  What’s also nice about the conference is that in the evenings, you can just take a stroll with other Bridgers, looking for some interesting place to have dinner.  I always love exploring new countries, and especially like trying new cuisines!

But even though Bridges 2016 is in July, I actually started preparing last November.  Since there was a January deadline for submitting papers to the conference, and since I knew I’d be doing a lot of traveling over our Winter Break, I wanted to get an early start.  The papers are all reviewed by three referees, so your mathematics should be sound.  Usually they have comments to make, and so you often need to make some revisions a few months later before a final submission.

My paper is on fractals this year.  A lot of what I wrote in that paper you’ve already seen on my blog — but I’ll be sure to give a link when I write a follow-up post on Bridges 2016 later on in the summer.  Here’s one of my favorite images discussed in the paper.

There are deadlines to submit artwork as well, so it’s important to be organized.  For both papers and artwork, the online submission system is actually really easy to use.  I just wanted to let you know something about the process so you can submit something to next year’s conference….

Last Fall, I received an email about a new addition to the Bridges menu — student scholarships.  And in my calculus class, I had a student Nick who is a double major in mathematics and art.

Turns out Nick was really interested in trying to submit to Bridges, so we worked out a one-credit directed study course just for that purpose.  As of this moment, I’m happy to say that two of Nick’s artworks were accepted!  And we just submitted the final revisions to his paper, and are waiting to hear back.  We should know about the scholarship soon — I’ll update this post when I have more information.  One of my favorite images from Nick’s paper is this one.

You can read the paper to see how he creates it….link to follow.

So think about including Bridges in your future travel!  Many artists bring their families and make a summer vacation out of the conference.  It’s quite an experience.

And if you’re a student, consider submitting as well!  Maybe you’ll earn a scholarship to attend:  here’s more information on the Student Travel Scholarship.  Preference is given to those students who submit papers, artwork, or movies.

You will need a letter from one of your teachers or professors — so ask someone to be your mentor.  If you can’t find someone, well, just ask me.  I’ll be glad to help out (as long as I don’t get too many requests!).

Later on in the summer, I’ll tell you all about the experience.  Hope to see you at some Bridges conference soon!

P.S. (10 July 2016):  Nick did receive the travel scholarship.  Congratulations!

Fractals VIII: PostScript Programming

June 1998.

That’s when I wrote the code I’m going to talk about today.

When I became interested in computer graphics, the windowed environment you’re used to by now just didn’t exist.  For example, the computer I used (called a Zenith H19) had a screen of 80 x 25 characters.  Yes, characters.  Rather like a larger version of those still unimaginable TI calculator screens (couldn’t resist).

There was no graphical user interface (GUI), mouse, or touch-screen.  If you wanted to draw a picture, you had to write a computer program to do it.

And if you wanted to see what it looked like, well, you had to print it out.

Yep.  That was the only way to see if your code was correct.  It made debugging very challenging.  But that’s all there was.  So I learned how to write PostScript code.

There are two features of PostScript I want to discuss today.  First, the syntax is written in postfix notation, like most HP calculators.  If you wanted to add 2 and 3 on a typical calculator, you’d type in “2 + 3 =.”  The result would be 5.  This is called infix notation, where the operator is written in between the arguments.

In PostScript, though, you’d write “2 3 add.”  In other words, the arguments come first.  Then they’re added together — the operator comes last, which is referred to as postfix notation.  So if you wanted to write

(2 + 3) x (10 – 4)

in PostScript, you’d write

2 3 add 10 4 sub mul.

Notice that the multiplication is done last.

When programming in Python, most functions are described using prefix notation.  In other words, you give the function name first, and then the arguments to the function.  For example, randint(1,10) would output a random number between 1 and 10, inclusive.  If we had to write the arithmetic expression above in prefix notation, it would look like

mul(add(2, 3), sub(10, 4)).

You probably encounter these ideas on a daily basis.  For example, if you want to delete a bunch of files, you first select them, and then do something like click “Move to trash” from a dropdown menu.  But if you’re writing an email, you usually click on the attach icon first, and then select the files you’d like to attach.  Postfix and prefix.

Of course this is just an introduction to using the different types of notation.  In general, you’d transform the above arithmetic expression into what is called an expression tree, like this:

Then prefix notation may be derived from a preorder traversal of the expression tree, infix notation from an inorder traversal of the tree, and finally, postfix notation comes from a postorder traversal of the tree.  Feel free to ask the internet all about binary trees and tree traversals.  Trees are very important data structures in computer science.
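Here is a sketch of those three traversals in Python, using the expression above (the tuple representation and the operator names add, sub, mul are my own choices, echoing the PostScript operators).

```python
# The expression (2 + 3) * (10 - 4) as an expression tree.
# Each node is (value, left, right); leaves have None children.
tree = ("mul",
        ("add", ("2", None, None), ("3", None, None)),
        ("sub", ("10", None, None), ("4", None, None)))

def preorder(node):
    """Root, then left subtree, then right subtree -> prefix notation."""
    if node is None:
        return []
    value, left, right = node
    return [value] + preorder(left) + preorder(right)

def inorder(node):
    """Left subtree, then root, then right subtree -> infix notation."""
    if node is None:
        return []
    value, left, right = node
    return inorder(left) + [value] + inorder(right)

def postorder(node):
    """Left subtree, then right subtree, then root -> postfix notation."""
    if node is None:
        return []
    value, left, right = node
    return postorder(left) + postorder(right) + [value]
```

The postorder traversal produces exactly the PostScript-style string "2 3 add 10 4 sub mul" from the earlier example.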

The second feature of PostScript I’d like to discuss today is that it is an example of a stack-based language.  For example, if you typed

4  5  3  7  8

in a PostScript program, the number 4 would be pushed onto the stack — that is, the working memory of the program — and then 5, then 3, 7, and finally 8.  (For convenience, we’ll always think of the bottom of the stack as being on the left, and the top of the stack as being on the right.  Easier to write and read that way.)

If you want to follow along, do the following:  write 4 on an index card or small piece of paper, then write 5 on another and put it on top of the 4, and so on, until you put the 8 on top of the stack.  You really do have a stack of numbers.

Now when you type the “add” function, PostScript thinks, “OK, time to add!  What are the top two numbers on the stack?  Oh, 8 and 7!  I’ll add these and put the sum back on the stack for you!”

So you type

4  5  3  7  8  add,

and PostScript performs the appropriate action and the stack now becomes

4  5  3  15.
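If you’d like to mirror the index-card exercise in code, a Python list works nicely as a stack — append pushes, and pop removes the top:

```python
# Simulate the PostScript stack: the start of the list is the bottom
# of the stack, and the end of the list is the top.
stack = []
for n in (4, 5, 3, 7, 8):
    stack.append(n)        # push each number in turn

# "add" pops the top two numbers and pushes their sum
b = stack.pop()            # 8
a = stack.pop()            # 7
stack.append(a + b)

print(stack)  # [4, 5, 3, 15]
```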

If you now want to divide (div in PostScript), the interpreter pops the top two numbers off the stack, divides the second number by the top number, and puts the quotient back on the stack.  So

4  5  3  7  8  add  div

would create the stack

4  5  0.2,

since after the add, the top two numbers are 3 and 15, and 3 divided by 15 is 0.2.

You should be able to figure out that

4  5  3  7  8  add  div  mul  sub

would just leave the single number 3 on the stack.

Graphics are drawn this way, too.  For example, we would create a square with the commands

0 0 moveto  0 1 lineto  1 1 lineto  1 0 lineto  closepath  stroke.

Again, the arguments come first.  We move to the point (0,0), then draw line segments to (0,1), (1,1), and (1,0).  The command “closepath” ties the beginning and end of our path together by drawing the final segment back to (0,0), and the command “stroke” actually draws the square on the page.

Commands like

1  0.5  0  setrgbcolor

set the current color to orange, and

5 setlinewidth

sets the width of lines to be drawn.  You can change any aspect of any drawn image — the only limit is your imagination!
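And because PostScript is just plain text, you can generate it from any other language.  Here is a small Python sketch that writes out a file drawing an orange square — the file name and coordinates are my own choices:

```python
# Emit a tiny PostScript program that strokes an orange square.
# Coordinates are in points; 72 points = 1 inch, so this square is
# an inch on a side, one inch up and over from the page corner.
ps = "\n".join([
    "%!PS",
    "1 0.5 0 setrgbcolor",   # orange
    "5 setlinewidth",
    "72 72 moveto  72 144 lineto  144 144 lineto  144 72 lineto",
    "closepath stroke",
    "showpage",
])

with open("square.ps", "w") as f:
    f.write(ps)
```

Open the resulting file in a PostScript viewer (like MacGhostView) and you should see the square.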

Of course this is only the most basic introduction to the PostScript programming language.  But if you’re interested, there’s good news.  First, it shouldn’t be hard to download a PostScript interpreter for your computer.  I use MacGhostView — installing it will also require you to install XQuartz, but not to worry.

Second, there are some great resources online.  Back when I was learning PostScript, there were three very helpful books — the Blue Book, the Red Book, and the Green Book.  Yep, that’s what they were called.  The Blue Book is the place you should start — it includes a tutorial on all the basics.  The Red Book is the Reference Manual — handy in that it lists PostScript commands, their syntax, and examples of how to use them.  The Green Book is for the serious programmer, and discusses at a high level how the PostScript language is organized.

Finally, here is the code I wrote all those years ago!  (Click here: Sierpinski.)    I did modify the original code so it produces a Sierpinski triangle.  It will be a little daunting to figure out if you’re just starting to learn PostScript, but I include it as it motivated me to talk about PostScript today.

And a final few words about motivation.  Try to learn programming languages which expose you to different ways of thinking.  This develops flexibility of mind for coding in general.  Whenever I learn a new language, I’m always thinking things like, “Oh, that’s the same as stacks in PostScript,” or “Yeah, like maps in LISP.”

As you learn more, your programming toolbox gets bigger, and you find that the time it takes from conceiving of an idea to successfully implementing it becomes shorter and shorter.  That’s when it starts really getting fun….

Fractals VI: Iterated Function Systems II

What I find truly remarkable about iterated function systems is the astonishing variety of fractals you can create with just two affine transformations.  Today, we’ll explore some more examples so you can get a better feel for how the particular affine transformations used affect the final image.

Let’s begin with the example from last week, which I color coded for easy reference.

Below are the affine transformations which generated this fractal.

$F_{{\rm green}}\left(\begin{matrix}x\\y\end{matrix}\right)=\left[\begin{matrix}0.95&0\\0&0.95\end{matrix}\right]\left[\begin{matrix}\cos(20^\circ)&-\sin(20^\circ)\\\sin(20^\circ)&\cos(20^\circ)\end{matrix}\right]\left(\begin{matrix}x\\y\end{matrix}\right)$

$F_{{\rm orange}}\left(\begin{matrix}x\\y\end{matrix}\right)=\left[\begin{matrix}0.4&0\\0&0.4\end{matrix}\right]\left(\begin{matrix}x\\y\end{matrix}\right)+\left(\begin{matrix}1\\0\end{matrix}\right)$

Now compare the different colors in the image with the transformations.  The first transformation combines a scaling by 0.95 with a rotation by 20 degrees.  If you look closely, you’ll see that the green part of the image looks the same as a slightly smaller copy of the entire fractal, except rotated by 20 degrees from the center of the spiral (note that there is no translation involved in $F_{{\rm green}}$).

The orange part of the image is simply a scaled version of the entire fractal (by a factor of 0.4), but moved over 1 unit.  You can see this by looking at the form of the affine transformation $F_{{\rm orange}}.$

The pair of affine transformations used to generate the spiral exactly describes the self-similarity.  This is easy to see after the fact — but I really have no helpful suggestion as to how to predict the form of this wonderful spiral just by looking at the transformations.
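If you’d like to experiment before then, here is a minimal Python sketch of the standard random-iteration algorithm for this pair of transformations — not necessarily the exact code used for these images:

```python
import math, random

# Random-iteration algorithm for the two-map IFS above:
# F_green scales by 0.95 and rotates by 20 degrees;
# F_orange scales by 0.4 and translates by (1, 0).
def f_green(x, y):
    c, s = math.cos(math.radians(20)), math.sin(math.radians(20))
    return 0.95 * (c * x - s * y), 0.95 * (s * x + c * y)

def f_orange(x, y):
    return 0.4 * x + 1, 0.4 * y

def ifs_points(n=20000, seed=1):
    random.seed(seed)
    x, y = 0.0, 0.0
    pts = []
    for i in range(n):
        # at each step, apply one of the two maps chosen at random
        x, y = random.choice((f_green, f_orange))(x, y)
        if i > 20:           # discard the first few transient points
            pts.append((x, y))
    return pts

pts = ifs_points()
```

Plot the points in pts with your favorite graphics library and the spiral should emerge.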

Now let’s just slightly alter the second transformation so that in addition to scaling by a factor of 0.4, there is a shear as well.

$F_{{\rm orange}}\left(\begin{matrix}x\\y\end{matrix}\right)=\left[\begin{matrix}0.4&0.4\\0&0.4\end{matrix}\right]\left(\begin{matrix}x\\y\end{matrix}\right)+\left(\begin{matrix}1\\0\end{matrix}\right)$

Notice how the fractal changes.

The shear distorts each arm of the spiral — stretching it in the process and creating more interaction between the various arms.

Now let’s go back to the original spiral and change the first transformation instead, slightly altering the scale factor.

$F_{{\rm green}}\left(\begin{matrix}x\\y\end{matrix}\right)=\left[\begin{matrix}0.9&0\\0&0.9\end{matrix}\right]\left[\begin{matrix}\cos(20^\circ)&-\sin(20^\circ)\\\sin(20^\circ)&\cos(20^\circ)\end{matrix}\right]\left(\begin{matrix}x\\y\end{matrix}\right)$

Here is what happens:

I really don’t know how to predict that this is what would happen — but I find it fascinating how even small changes in parameters can really change the way a fractal looks.

I can’t resist one more image created by changing the second transformation to

$F_{{\rm orange}}\left(\begin{matrix}x\\y\end{matrix}\right)=\left[\begin{matrix}0.4&0\\0.4&0.15\end{matrix}\right]\left(\begin{matrix}x\\y\end{matrix}\right)+\left(\begin{matrix}1\\0\end{matrix}\right).$

Here is the result.

There’s so much you can do with just one pair of transformations!

While it may not be possible to guess exactly what a fractal will look like, it is possible to know some features of a fractal based upon how you set up your transformations.  For example, consider this combination of a shear and a scaling:

$G_1\left(\begin{matrix}x\\y\end{matrix}\right)=\dfrac23\left[\begin{matrix}1&-1/2\\0&1\end{matrix}\right]\left(\begin{matrix}x\\y\end{matrix}\right)+\left(\begin{matrix}1\\1\end{matrix}\right).$

There will be some aspect of the fractal which can be described by a shear.  For the second transformation, just take the opposite of the first, as follows.

$G_2\left(\begin{matrix}x\\y\end{matrix}\right)=-\dfrac23\left[\begin{matrix}1&-1/2\\0&1\end{matrix}\right]\left(\begin{matrix}x\\y\end{matrix}\right)-\left(\begin{matrix}1\\1\end{matrix}\right).$

This must force the fractal to have 180 degree rotational symmetry!  Think about how the algorithm works — take the image under one transformation, then the other — but since $G_2(P)=-G_1(P)$ for every point $P,$ the two images must be symmetric about the origin.
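This symmetry is easy to spot-check numerically; here is a small Python sketch (the function names are mine):

```python
# Spot-check that G2(p) = -G1(p), so the images of the two
# transformations are symmetric about the origin.
def g1(x, y):
    # (2/3) * [[1, -1/2], [0, 1]] applied to (x, y), plus (1, 1)
    return (2/3) * (x - y/2) + 1, (2/3) * y + 1

def g2(x, y):
    # the opposite transformation: negate the matrix and the translation
    return -(2/3) * (x - y/2) - 1, -(2/3) * y - 1

for px, py in [(0.3, -1.2), (2.0, 0.5), (-0.7, 0.9)]:
    ax, ay = g1(px, py)
    bx, by = g2(px, py)
    assert abs(ax + bx) < 1e-12 and abs(ay + by) < 1e-12
```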

So it is possible to use our knowledge of affine transformations to design, even if to a limited extent, some features of the resulting fractal.

As another example, we can assemble a fractal similar to two smaller copies of itself, one piece of which is a 90 degree rotation of the other.  Use $G_1$ as above, and then use

$G_2=\left[\begin{matrix}0&-1\\1&0\end{matrix}\right]G_1.$

Here is the resulting fractal.

You can clearly see how the red piece is a 90 degree counterclockwise rotation of the purple piece.  This follows from how we created the second transformation from the first.
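We can make the same kind of numerical spot-check here; writing out the matrix product gives the following Python sketch (again with my own function names):

```python
# G1 as before; G2 is G1 followed by a 90 degree counterclockwise
# rotation.  Multiplying out: R*M = [[0, -1], [1, -1/2]] (scaled by
# 2/3), and R applied to the translation (1, 1) gives (-1, 1).
def g1(x, y):
    return (2/3) * (x - y/2) + 1, (2/3) * y + 1

def g2(x, y):
    return (2/3) * (-y) - 1, (2/3) * (x - y/2) + 1

for px, py in [(0.8, -0.3), (1.5, 2.0)]:
    gx, gy = g1(px, py)
    rx, ry = g2(px, py)
    # rotating (gx, gy) 90 degrees counterclockwise gives (-gy, gx)
    assert abs(rx + gy) < 1e-12 and abs(ry - gx) < 1e-12
```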

One reason for the diversity of fractals is the number of parameters needed to specify two affine transformations — six for each, so twelve in all.  Now some sets of parameters may create the same fractal — but even so, the wealth of variations is still staggering.

But will just any affine transformations work?  There are some constraints — for example, you can’t describe a fractal in terms of larger copies of itself.  So the affine transformations cannot make a starting object bigger, or else each successive iteration will expand.  Of course you can still use such affine transformations to create interesting graphical effects — but the resulting image will not be an approximation to a fractal.
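One way to make the “can’t get bigger” condition precise: the linear part of each transformation should shrink every vector, which means its largest singular value should be less than 1.  Here is a small Python check for the two matrices from the spiral example — a sketch using the closed form for 2×2 singular values:

```python
import math

# Largest singular value of a 2x2 matrix [[a, b], [c, d]]: if it's
# less than 1, the linear part shrinks every vector, so the affine
# map is a contraction.
def max_singular_value(a, b, c, d):
    # eigenvalues of M^T M via the quadratic formula
    t = a*a + b*b + c*c + d*d          # trace of M^T M
    det = (a*d - b*c) ** 2             # determinant of M^T M
    return math.sqrt((t + math.sqrt(max(t*t - 4*det, 0.0))) / 2)

# The scaled rotation in F_green: 0.95 times a rotation by 20 degrees
c20, s20 = math.cos(math.radians(20)), math.sin(math.radians(20))
print(max_singular_value(0.95*c20, -0.95*s20, 0.95*s20, 0.95*c20))  # approximately 0.95

# The scaling in F_orange
print(max_singular_value(0.4, 0, 0, 0.4))  # approximately 0.4
```

Both values are below 1, which is why the spiral settles down to a fractal instead of blowing up.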

Next week, we’ll look at the algorithm which I used to create all these fractal images, and I’ll give you some Python code you can use yourself.  But before I close, I’d like to share a few favorite fractals which are generated by just two affine transformations.

First, the Beetle.  It looks like it’s scampering toward you.

And the second, well, it’s just yellow.

Until next week!