Making Movies with Processing III

This week, we begin a discussion of creating movies consisting of animated fractals.  Last week’s post about the dot changing colors was at a beginning level as far as Processing goes.  This week’s post will be a little more involved, but will assume a knowledge of Iterated Function Systems.  I talked about IFS on Day034, Day035, and Day036.  Feel free to look back for a refresher….

Today, we’ll see how to create the following movie.  You’ll notice that both the beginning and final Sierpinski triangles are fractals discussed on Day034.

As a reminder, these are the three transformations which produce the initial Sierpinski triangle:

F_1\left(\begin{matrix}x\\y\end{matrix}\right)=\left[\begin{matrix}0.5&0\\0&0.5\end{matrix}\right] \left(\begin{matrix}x\\y\end{matrix}\right),

F_2\left(\begin{matrix}x\\y\end{matrix}\right)=\left[\begin{matrix}0.5&0\\0&0.5\end{matrix}\right] \left(\begin{matrix}x\\y\end{matrix}\right)+\left(\begin{matrix}1\\0\end{matrix}\right),

F_3\left(\begin{matrix}x\\y\end{matrix}\right)=\left[\begin{matrix}0.5&0\\0&0.5\end{matrix}\right] \left(\begin{matrix}x\\y\end{matrix}\right)+\left(\begin{matrix}0\\1\end{matrix}\right).

Also, recall that to get the modified Sierpinski triangle at the end of the video, all we did was change the first transformation to

F_1\left(\begin{matrix}x\\y\end{matrix}\right)=\left[\begin{matrix}0.25&0\\0&0.5\end{matrix}\right] \left(\begin{matrix}x\\y\end{matrix}\right).

We’ll see how to use linear interpolation to create the animation.  But first, let’s look at the Python code for creating a fractal using an iterated function system.

[ifscode: screenshot of the sierpinski function]
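Since the code appears only as a screenshot in the original post, here is a sketch of the sierpinski function reconstructed from the discussion below; the variable names, the dot size, and the exact screen-space offsets are my assumptions, not necessarily the original code.

from random import randint

def sierpinski(p, n):
    points = []
    last = (0, 0)                                # the most recently plotted point
    for i in range(n):
        x, y = last
        f = randint(1, 3)                        # each transformation equally likely
        if f == 1:
            last = ((0.5 - 0.25 * p) * x, 0.5 * y)   # the interpolated F1
        elif f == 2:
            last = (0.5 * x + 1, 0.5 * y)            # F2
        else:
            last = (0.5 * x, 0.5 * y + 1)            # F3
        points.append(last)
    background(0)                                # black background
    strokeWeight(2)                              # small dots
    stroke(255, 128, 0)                          # orange
    # scale by 225 and flip the y-coordinate for screen space, then plot
    [point(225 * x, height - 225 * y) for (x, y) in points]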

The parameter p is for the linear interpolation (which we’ll discuss later), and n is the number of points to plot.  First, import the library for generating random integers — since each transformation will be weighted equally, it’s simpler just to choose a random integer from 1, 2, and 3.  The variable points keeps track of all the points, while last keeps track of the most recently plotted point.  Recall from earlier posts that you only need the last point in order to get the next one.

Next, the for loop just creates new points, one at a time, and appends them to points.  Once an affine transformation is randomly chosen by selecting a randint in the range from 1 to 3, it is applied to the last point generated.  For the purpose of writing Python code, it’s easier to use the notation

F_2\left(\begin{matrix}x\\y\end{matrix}\right)=\left(\begin{matrix}0.5 \ast x+1\\0.5\ast y\end{matrix}\right)

rather than matrix notation.  In order to use vector and matrix notation, you’d need to indicate that (1,2) is a vector by writing

v = vector(1,2),

and similarly for matrices.  Since we’re doing some fairly simple calculations, just writing out the individual terms of the result is easier and requires less code.

Once the points are all created, it’s time to plot them.  You’ll recognize the background, stroke, and strokeWeight functions from last week.  Nothing fancy here, since we’re focusing on algorithms today.  Just a black background and small orange dots.

The last line plots the points, and is an example of what is called list comprehension in Python.  First, note that the iterated function system would create a fractal which would fit inside a triangle with vertices (0,2), (0,0), and (2,0).  So we need to suitably scale the fractal — in this case by a factor of 225 so it will be large enough to see.  Remember that units are in pixels in Processing.

Then we need to compensate for Processing’s coordinate system.  You’ll notice a similarity to what we did a few weeks ago.

What the last line does is essentially this:  for every point x in the list points, adjust the coordinates for screen space, and then plot x with the point function.  List comprehension is convenient because you don’t have to make a loop or other iterative construct — it’s done automatically for you.

Of course that doesn’t mean you never need a for loop.  It wouldn’t be easy to replace the for loop above with a list comprehension as each new point depends on the previous one.  But for plotting a bunch of points, for example, it doesn’t matter which one you plot first.

Now for the linear interpolation!  We want the first frame to be the usual Sierpinski triangle, and the last frame to be our modified triangle.  The only difference is that one of the constants in the first function changes from 0.5 to 0.25.

This is perfect for using linear interpolation.  We’d like the interpolated constant to be 0.5 when p = 0, and 0.25 when p = 1.  So we just need to create a linear function of p which passes through the points (0, 0.5) and (1, 0.25).  This isn’t hard to do; you should easily be able to get

0.5 - 0.25 \ast p.

The effect of using the parameter p in this way is to create a series of fractals, each one slightly different from the one before.  Since we’re taking 360 steps to vary from 0.5 to 0.25, there is very little difference from one fractal to the next, and so when strung together, the fractals make a convincing animation.

I should point out the dots look like they’re “dancing” because each time a fractal image is made, a different series of random affine transformations is chosen.  So while the points in each successive fractal will be close to each other, most of them will actually be different.

For completeness, here is the code which comes before the sierpinski function is defined.

[ifscodesetup2: screenshot of the setup code]
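Again, in place of the screenshot, here is a hedged reconstruction following the same pattern as last week’s program; the 500 x 500 screen size and the number of points passed to sierpinski are guesses on my part.

def setup():
    size(500, 500, P2D)

def draw():
    if frameCount <= 360:
        p = frameCount / 360.            # the interpolation parameter
        sierpinski(p, 20000)             # draw one frame's worth of points
        saveFrame("frames/####.tif")     # write this frame to disk
    else:
        noLoop()                         # stop after 360 frames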

It should look pretty familiar from last week.  Similar setup, creating the parameter p, writing out the frames, etc.  You’ll find this a general type of setup which you can use over and over again.

So that’s all there is to it!  Now that you’ve got a basic grasp of Processing’s screen space and a few different ways to use linear interpolation, you can start making movies on your own.

Of course there are lots of cool effects you can add by using linear interpolation in more creative ways.  We’ll start to take a look at some of those next week!

Making Movies with Processing II

Last week we learned a little about the history of Processing, as well as what the coordinate system is like in Processing (and many other graphics applications as well).  Today I’d like to discuss an idea which I find very helpful in making movies — linear interpolation.  We’ll only have time for one simple example today, but we’ll go through that example very thoroughly.

So here’s the movie we’ll explore today.  Not glamorous, but it’s a start.

You’ll notice what’s happening — the dot slowly turns from magenta to black, while the background does the opposite.  Let’s look at the dot first.  In Processing, RGB values go from 0 to 255.  Magenta is (255, 0, 255) in RGB, and black is (0, 0, 0).  We want the dot to go smoothly from magenta to black.

The way I like to do this is to introduce a parameter p.  I think of p = 0 as my starting point, and p = 1 as my ending point.  In this example, p = 0 corresponds to magenta, and p = 1 corresponds to black.  As p varies from 0 to 1, we want the dot to go from magenta to black.

This is where linear interpolation comes in.  For any start and end values, the expression

(1 - p) * start + p * end

will vary continuously from the start value to the end value as p goes from 0 to 1.  It should be clear that when p = 0, you get the start value, and when p = 1, you get the end value.  Any value of p in between 0 and 1 will be in between the start and end values — closer to the start value when p is near 0, and closer to the end value when p is near 1.

Because this expression is linear in p — no quadratic terms, no square roots, no sines or cosines — we say that this expression gives a linear interpolation between the start and end values.
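Here is a quick numerical check of the formula; the start and end values 100 and 200 are just for illustration.

def lerp(start, end, p):
    return (1 - p) * start + p * end

print(lerp(100, 200, 0))      # 100, the start value
print(lerp(100, 200, 0.25))   # 125.0, a quarter of the way along
print(lerp(100, 200, 1))      # 200, the end value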

Let’s see how this works in Processing.  Here’s a screen shot of the code that produced the video shown above.

[screenshot2: the code that produced the video above]
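Since the screenshot isn’t reproduced here, the sketch below reassembles the program from the line-by-line discussion that follows; treat it as a close approximation rather than the exact original.

def setup():
    size(500, 500, P2D)

def draw():
    if frameCount <= 360:
        makedot(frameCount / 360.)       # p runs from about 0 up to 1
        saveFrame("frames/####.tif")     # save each frame in the "frames" directory
    else:
        noLoop()                         # stop creating new frames

def makedot(p):
    background(p * 255, 0, p * 255)      # background goes black -> magenta
    strokeWeight(300)                    # a 300-pixel dot
    stroke((1 - p) * 255, 0, (1 - p) * 255)   # dot goes magenta -> black
    point(250, 250)                      # centered on the 500 x 500 screen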

The basic functions for making a simple movie are setup and draw.  The setup function is executed once before the movie starts.  Then the draw function is called repeatedly — each time the draw function is called, a new frame of the movie is created.

In a typical program, if you wanted to repeat something over and over, you’d need to make a loop.  But in Processing — since its purpose is to create frames for movies — this is built in to how the application works.  No need to do it yourself.

So in Processing, the variable frameCount automatically keeps track of which frame you’re creating.  And it is automatically incremented each time the draw function executes.  You’ll find that this is a very nice feature!  This way, you don’t have to keep track of frames yourself.

Note the size declaration in the setup function.  Last week, I mentioned that you’ve got to set the size of your coordinate system when you start your movie.  The “P2D” means you’re creating a two-dimensional image.  You can make 3D objects in Processing, but we won’t look at that today.

Typical applications which make movies (like Photoshop, or the Movie Maker found in Processing’s “Tools” menu) use about 30 frames per second.  So 360 frames (as above) will be a 12-second movie.  Enough to see the dot and background change colors.

The if/else clause determines the frames produced.  Note that when frameCount gets larger than 360, you skip to the “else” clause, which is “noLoop()”.  This essentially stops Processing from creating new frames.  The saveFrame command saves the individual frames to whatever directory you specify; if you forget the else clause, Processing will just keep generating more and more .tiff files and clutter up your directory.

So the if/else clause basically tells Processing this:  make 360 frames, one at a time, and save them in a directory called “frames.”  Once all 360 frames are made, it’s time to stop.

The makedot function is what actually creates the frame.  Now we’re getting to the linear interpolation!  The frameCount/360. is what creates the value of p.  When frameCount is 1, the value of p is 1/360 (close enough to 0 for the purposes of our movie), and when frameCount is 360, the value of p is 1.  As the value of frameCount increases, p increases from about 0 to 1.

It’s important to use a decimal point after the number 360.  If you don’t, Python will use integer division, and you’ll get 0 every time until frameCount reaches 360, and only then will you get 1.  (I learned this one the hard way…took a few minutes to figure out what went wrong.)
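To see the difference concretely (Processing’s Python mode uses a Python 2 dialect, where / on two integers truncates):

print(180 / 360)     # 0, since integer division truncates
print(180 / 360.)    # 0.5, since the decimal point forces real division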

So let’s go through the makedot function line by line.  First, we set the background color.  Note that when p is 0, the color is (0, 0, 0) — black.  As p moves closer to 1, the black turns to magenta.  It is also important to note that when you call the background function, the entire screen is set to the color you specify.  Processing will write over anything else that was previously displayed.  So be careful where you put this command!

Next, the width of lines/points drawn is set with strokeWeight.  Remember that the units are pixels — so the width of the dot is 300 pixels on a screen 500 pixels wide.  Pretty big dot.

The stroke command sets the color of lines/points on the screen.  Notice the (1 - p) here — we want to begin with magenta (when p = 0), and end with black (when p = 1).  So we go the “opposite” direction from the background.

Maybe you noticed that at one point, the dot looked like it “disappeared.”  Not really — note that when p = 1/2, then 1 – p = 1/2 as well!  At this time, the dot and background are exactly the same color.  That’s why it looked like the dot vanished.

And last but not least — the dot.  Remembering the coordinate system, the dot is centered on the screen at (250, 250).

That’s it.  Not a long program, but all the essentials are there.  Now you’ve got a basic idea of how simple movies can be made.  The next post will build on this week’s and look at some more examples of linear interpolation.  Stay tuned!


Making Movies with Processing I

If you’ve been following my Twitter (@cre8math), you’ll have noticed I posted quite a few videos last fall.  At the time I was writing a lot about creating fractal images, and I really wanted to learn how to make animations using these images.  I had heard about Processing from a lot of friends, and decided that it was time to dive in!  Here is one of my earlier movies, but still one of my favorites.

Processing has been around since 2001.  From the very beginning, Processing was used in the classroom as a tool to allow students to quickly create graphical images and movies.  It is now used widely by educators, artists, designers, architects, and researchers.  Many companies use Processing for data visualization.  Click on the link to read more!

One of the great things about Processing is that it’s open source — a specific intention of the developers.  Now, there’s even a Processing Foundation.  Its purpose is to “promote software literacy within the visual arts, and visual literacy within technology-related fields — and to make these fields accessible to diverse communities.”  The idea is to provide everyone access to software you can use to create a wide range of visual media.  Not just those who can afford it.

I support open-source initiatives as well.  Most of the digital art I’ve talked about on my blog was written in Mathematica, a powerful programming language, but expensive to buy.  I rewrote all the algorithms you’ve seen on the Sage worksheets in Python because I wanted them to be available to anyone who has access to the internet.  Not just those who can afford pricey software….

Now there won’t be time to go into everything about Processing — that’s what the internet is for.  I often google phrases like “How do you change the background color in Processing?”, and usually get the information I need pretty quickly.

What I want to focus on today is the coordinate system used in Processing.  If you’re new to computer graphics, it might be a little unfamiliar — the coordinate system is based on pixels.  For example, the screen on my Mac is 1440 x 900 pixels.  But the origin (0, 0) is in the upper left corner of the screen, so that  the lower right corner has coordinates (1440, 900).  In other words, while the positive x-axis is still to the right, the positive y-axis is now pointing down.  This is sometimes called a screen coordinate system, or screen space.

This convention has an interesting history.  Older television screens used cathode ray tubes, and literally updated the screen by refreshing one line at a time, from top to bottom and from left to right.  After updating the pixel at the lower right, refreshing began again at the upper left.  This process occurred several times each second to give the illusion of continuous motion.  Because the refresh was done from top to bottom, the y-coordinates increased from top to bottom as well.

Such a coordinate system is not unusual.  The PostScript programming language (which I told you about two weeks ago) has a coordinate system based on points.  The definition of a point has changed over time, but PostScript uses a coordinate system where there are 72 points per inch, and the lower left corner has coordinates (0,0).

PostScript is used for printing physical documents.  Because copier paper typically has dimensions 8.5″ x 11″ in the United States, the upper right corner of your document would have coordinates (612, 792).

In Processing, you need to specify the size of the screen you want for your movie.  In the example above, I chose 768 x 768 pixels.  This may sound like a strange number, but a commonly used screen size is 1024 x 768.  Because of what my image looked like, though, I wanted the screen to be square so there wouldn’t be extra space on the left and right.

Now the objects you see in the movie have rotational symmetry.  So it would make sense for the origin (0,0) to be at the center of the movie screen, not the upper left corner.  A convenient coordinate system for these objects might have the upper left corner be (-1, 1), and the lower right corner be (1, -1), just like the usual Cartesian coordinate system.  This system of coordinates is sometimes called local space or user space.  In other words, it’s the space the user is thinking in when designing various images.

So now the issue is converting from user space to screen space.  This isn’t difficult, so we’ll just work through this example to see how it’s done.  All you need to remember is how to find the equation of a line.

Why a line?  Let’s look at x-coordinates first.  In user space, x-coordinates range from -1 to 1.  In screen space, they range from 0 to 768.  If you went 1/4 of the way from -1 to 1 in user space — which would put you at -0.5 — then you’d expect the corresponding point in screen space to be 1/4 of the way from 0 to 768, which would be 192.  You’d expect this for any ratio, not just 1/4.  This is precisely what it means for the transformation of coordinates from user space to screen space to be linear.

Since -1 in user space corresponds to 0 in screen space, and 1 corresponds to 768, all we need to do is find the equation of the line between the points (-1,0) and (1,768).  In Cartesian coordinates, this would be y=384(x+1). Or we might alternatively say

x_{\rm screen} = 384(x_{\rm user}+1).

Once we know how to do this for the x-coordinates, we can proceed the same way for the y-coordinates.  You should be able to figure out that

y_{\rm screen} = 384(1-y_{\rm user}).

Note that the minus sign in front of y_{\rm user} is a result of the fact that the positive y direction is pointing down.
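As a quick illustration, here is the conversion written as a small Python function; the name to_screen is mine, and the 768 x 768 screen size is the one from the example above.

def to_screen(x_user, y_user):
    x_screen = 384 * (x_user + 1)
    y_screen = 384 * (1 - y_user)    # the minus sign flips the y-axis
    return (x_screen, y_screen)

print(to_screen(0, 0))     # (384, 384): the center of user space maps to the center of the screen
print(to_screen(-1, 1))    # (0, 0): the upper left corner
print(to_screen(1, -1))    # (768, 768): the lower right corner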

This is a very general idea which will allow you to convert from one space to another fairly easily.  And it’s definitely necessary in Processing to set up the screen for your movie.

Next week we’ll look at linear interpolation — another important concept which will help us make movies.  Until then!


Bridges: Mathematics and Art I

Just registered for Bridges 2016 last week!

Simply put, Bridges is the best mathematics conference ever.  You meet people from all around the world who are interested in the interplay between mathematics and art.

[Photo: sculpture at Bridges 2015 in Baltimore]

Not just M. C. Escher, either (though many are interested in his work).  Some Bridges attendees (shall we call them Bridgers?) are artists by profession, but others are mathematicians, computer scientists, physicists — you name it.  All are artists by vocation.

Interests span not only art in a more usual sense — watercolor, acrylic, oil, pastel, drawing — but also digital art, sculpture in almost any medium you can think of, poetry, architecture, music, fiber arts, dance, digital animations and movies, fashion, origami, and likely there will be some new art form introduced this summer as well!

The art exhibition is amazing.  You can see a few examples above.  You see the wooden spiral?  Each inlaid rectangle is a different piece of wood!  The craftsmanship is really superb.

One neat aspect is that most of the artists also attend Bridges.  That means if you see something you really like, you can just look for the right name tag and start up a conversation.  As you would expect, all the artists are eager to discuss their work.

Be ready for some surprises, too.  I met my friend Phil Webster at the conference – we started talking because I was from San Francisco, and he also lives in the Bay Area.  So we’ve met up a few times since the conference to discuss mathematics, art, and programming.  He even gave a talk in our Mathematics Colloquium at the University of San Francisco.  Of course, his talk was great….


Even if you don’t go to the conference, you can still appreciate all the art.  You can visit the Bridges 2015 online gallery and see all the art that was exhibited.  Not only are there descriptions of all the works by the artists themselves, but there’s also contact information so you can get in touch if you’d like.  Please do!

The Bridges 2016 gallery is not online yet, but I’ve got two pieces accepted for this year’s exhibition.  This is my favorite.

Spiral

Then there are the talks.  You learn so much just by going to them.  The range of topics is incredibly diverse — look back at the list above!  Last summer, I gave a talk about Random Walks on Vertices of Archimedean Tilings.  My favorite work discussed in the paper is Bear.  You can read the paper to learn how it was made, if you’re interested.  The first print of Bear is hanging in my friend Cory’s house in Florida.  Hi, Cory!

Bear4Web

As you’ll see if you click on the link to my paper, there is an archive of all papers — over 1000! — given at Bridges conferences since 1998.  What’s nice is that you can actually search for specific topics, so it’s easy to use.  No shortage of reading material on mathematics and art….

In addition to the exhibition and all the presentations, there are also dance performances, poetry readings, theatre performances, movie showings, a music night — any number of interesting activities relating mathematics and art.  If you want to learn more, just go to the Bridges 2016 website.  There’s complete information on the upcoming conference there.

This year, the conference is being held at the University of Jyväskylä in Jyväskylä, Finland.  I’ve never been to Finland before, so I’m looking forward to an exciting trip!  What’s also nice about the conference is that in the evenings, you can just take a stroll with other Bridgers, looking for some interesting place to have dinner.  I always love exploring new countries, and especially like trying new cuisines!

But even though Bridges 2016 is in July, I actually started preparing last November.  Since there was a January deadline for submitting papers to the conference, and since I knew I’d be doing a lot of traveling over our Winter Break, I wanted to get an early start.  The papers are all reviewed by three referees, so your mathematics should be sound.  Usually they have comments to make, and so you often need to make some revisions a few months later before a final submission.

My paper is on fractals this year.  A lot of what I wrote in that paper you’ve already seen on my blog — but I’ll be sure to give a link when I write a follow-up post on Bridges 2016 later on in the summer.  Here’s one of my favorite images discussed in the paper.

koch090-150tweak

There are deadlines to submit artwork as well, so it’s important to be organized.  For both papers and artwork, the online submission system is actually really easy to use.  I just wanted to let you know something about the process so you can submit something to next year’s conference….

Last fall, I received an email about a new addition to the Bridges menu — student scholarships.  And in my calculus class, I had a student, Nick, who is a double major in mathematics and art.

Turns out Nick was really interested in trying to submit to Bridges, so we worked out a one-credit directed study course just for that purpose.  As of this moment, I’m happy to say that two of Nick’s artworks were accepted!  And we just submitted the final revisions to his paper, and are waiting to hear back.  We should know about the scholarship soon — I’ll update this post when I have more information.  One of my favorite images from Nick’s paper is this one.

art4

You can read the paper to see how he creates it….link to follow.

So think about including Bridges in your future travel!  Many artists bring their families and make a summer vacation out of the conference.  It’s quite an experience.

And if you’re a student, consider submitting as well!  Maybe you’ll earn a scholarship to attend:  here’s more information on the Student Travel Scholarship.  Preference is given to those students who submit papers, artwork, or movies.

You will need a letter from one of your teachers or professors — so ask someone to be your mentor.  If you can’t find someone, well, just ask me.  I’ll be glad to help out (as long as I don’t get too many requests!).

Later on in the summer, I’ll tell you all about the experience.  Hope to see you at some Bridges conference soon!

P.S. (10 July 2016):  Nick did receive the travel scholarship.  Congratulations!


Creating Fractals VIII: PostScript Programming

June 1998.

That’s when I wrote the code I’m going to talk about today.

When I became interested in computer graphics, the windowed environment you’re used to by now just didn’t exist.  For example, the computer I used (called a Zenith H19) had a screen of 80 x 25 characters.  Yes, characters.  Rather like a larger version of those still unimaginable TI calculator screens (couldn’t resist).

There was no graphical user interface (GUI), mouse, touch-screen.  If you wanted to draw a picture, you had to write a computer program to do it.

And if you wanted to see what it looked like, well, you had to print it out.

Yep.  That was the only way to see if your code was correct.  It made debugging very challenging.  But that’s all there was.  So I learned how to write PostScript code.

There are two features of PostScript I want to discuss today.  First, the syntax is written in postfix notation, like most HP calculators.  If you wanted to add 2 and 3 on a typical calculator, you’d type in “2 + 3 =.”  The result would be 5.  This is called infix notation, where the operator is written in between the arguments.

In PostScript, though, you’d write “2 3 add.”  In other words, the arguments come first.  Then they’re added together — the operator comes last, which is referred to as postfix notation.  So if you wanted to write

(2 + 3) x (10 - 4)

in PostScript, you’d write

2 3 add 10 4 sub mul.

Notice that the multiplication is done last.

When programming in Python, most functions are described using prefix notation.  In other words, you give the function name first, and then the arguments to the function.  For example, randint(1,10) would output a random number between 1 and 10, inclusive.  If we had to write the arithmetic expression above in prefix notation, it would look like

times(add(2,3), subtract(10,4)).

You probably encounter these ideas on a daily basis.  For example, if you want to delete a bunch of files, you first select them, and then do something like click “Move to trash” from a dropdown menu.  But if you’re writing an email, you usually click on the attach icon first, and then select the files you’d like to attach.  Postfix and prefix.

Of course this is just an introduction to using the different types of notation.  In general, you’d transform the above arithmetic expression into what is called an expression tree, like this:

expressiontree

Then prefix notation may be derived from a preorder traversal of the expression tree, infix notation from an inorder traversal of the tree, and finally, postfix notation comes from a postorder traversal of the tree.  Feel free to ask the internet all about binary trees and tree traversals.  Trees are very important data structures in computer science.

The second feature of PostScript I’d like to discuss today is that it is an example of a stack-based language.  For example, if you typed

4  5  3  7  8

in a PostScript program, the number 4 would be pushed onto the stack — that is, the working memory of the program — and then 5, then 3, 7, and finally 8.  (For convenience, we’ll always think of the bottom of the stack as being on the left, and the top of the stack as being on the right.  Easier to write and read that way.)

If you want to follow along, do the following:  write 4 on an index card or small piece of paper, then write 5 on another and put it on top of the 4, and so on, until you put the 8 on top of the stack.  You really do have a stack of numbers.

Now when you type the “add” function, PostScript thinks, “OK, time to add!  What are the top two numbers on the stack?  Oh, 8 and 7!  I’ll add these and put the sum back on the stack for you!”

So you type

4  5  3  7  8  add,

and PostScript performs the appropriate action and the stack now becomes

4  5  3  15.

If you now want to divide (div in PostScript), the interpreter pops the top two numbers off the stack, divides the second number by the top number, and puts the result back on the stack.  So

4  5  3  7  8  add  div

would create the stack

4  5  0.2,

since 3 divided by 15 is 0.2.  Subtraction works the same way:  the top number is subtracted from the one beneath it.  You should be able to figure out that

4  5  3  7  8  add  div  mul  sub

would just leave the single number 3 on the stack.
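If you’d like to experiment without installing a PostScript interpreter, here is a minimal postfix evaluator in Python that mimics the stack behavior; it’s a sketch handling just these four operators.

def eval_postfix(tokens):
    ops = {"add": lambda a, b: a + b,
           "sub": lambda a, b: a - b,
           "mul": lambda a, b: a * b,
           "div": lambda a, b: a / b}
    stack = []
    for tok in tokens:
        if tok in ops:
            b = stack.pop()                 # the top number
            a = stack.pop()                 # the number beneath it
            stack.append(ops[tok](a, b))    # push the result back on the stack
        else:
            stack.append(float(tok))        # numbers are simply pushed
    return stack

print(eval_postfix("4 5 3 7 8 add div mul sub".split()))   # [3.0]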

Graphics are drawn this way, too.  For example, we would create a square with the commands

0 0 moveto  0 1 lineto  1 1 lineto  1 0 lineto  0 0 lineto  closepath  stroke.

Again, the arguments come first.  We move to the point (0,0), and then draw a line segment to (0,1), and so forth.  Closepath “ties” the beginning and end of our path together, and the command “stroke” actually draws the square on the page.

Commands like

1  0.5  0  setrgbcolor

set the current color to orange, and

5 setlinewidth

sets the width of lines to be drawn.  You can change any aspect of any drawn image — the only limit is your imagination!

Of course this is only the most basic introduction to the PostScript programming language.  But if you’re interested, there’s good news.  First, it shouldn’t be hard to download a PostScript interpreter for your computer.  I use MacGhostView — installing it will require you to install XQuartz as well, but not to worry.

Second, there are some great resources online.  Back when I was learning PostScript, there were three very helpful books — the Blue Book, the Red Book, and the Green Book.  Yep, that’s what they were called.  The Blue Book is the place you should start — it includes a tutorial on all the basics.  The Red Book is the Reference Manual — handy in that it lists PostScript commands, their syntax, and examples of how to use them.  The Green Book is for the serious programmer, and discusses at a high level how the PostScript language is organized.

Finally, here is the code I wrote all those years ago!  (Click here: Sierpinski.)    I did modify the original code so it produces a Sierpinski triangle.  It will be a little daunting to figure out if you’re just starting to learn PostScript, but I include it as it motivated me to talk about PostScript today.

And a final few words about motivation.  Try to learn programming languages which expose you to different ways of thinking.  This develops flexibility of mind for coding in general.  Whenever I learn a new language, I’m always thinking things like, “Oh, that’s the same as stacks in PostScript,” or “Yeah, like maps in LISP.”

As you learn more, your programming toolbox gets bigger, and you find that the time it takes from conceiving of an idea to successfully implementing it becomes shorter and shorter.  That’s when it starts really getting fun….

Creating Fractals VII: Iterated Function Systems III

Today, we’re going to wrap up our discussion of iterated function systems by looking at an algorithm which may be used to generate fractal images.

Recall (look back at the first post if you need to!) the Sierpinski triangle.  No matter what initial shape we started with, the iterations of the function system eventually looked like the Sierpinski triangle.

Sierp4

But there’s a computational issue at play.  Since there are three different transformations, each iteration produces three times the number of objects.  So if we carried out 10 iterations, we’d have 3^{10} = 59,049 objects to keep track of.  Not too difficult for a computer.

But let’s look at the Sierpinski carpet.  With eight different transformations, we’d have 8^{10} = 1,073,741,824 objects to keep track of.  Keeping track of a billion objects just isn’t practical.

Of course you could use fewer iterations — but it turns out there’s a nice way out of this predicament. We can approximate the fractal using a random algorithm in the following way.

Begin with a single point (usually (0,0) is the easiest).  Then randomly choose a function in the system, and apply it to this point.  Then iterate:  keep randomly choosing a function from the system, then apply it to the last computed point.

What the theory says (again, read the Barnsley book for all the proofs!) is that these points keep getting closer and closer to the fractal image determined by the system.  Maybe the first few are a little off — but if we just get rid of the first 10 or 100, say, and plot the rest of the points, we can get a good approximation to the fractal image.
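Here is a bare-bones Python sketch of this random algorithm for the Sierpinski triangle, with all three transformations equally likely; the function name and the choice to discard exactly the first 10 points are mine.

from random import randint

def chaos_game(n, discard=10):
    pts = []
    x, y = 0.0, 0.0                        # begin with a single point at the origin
    for i in range(n + discard):
        f = randint(1, 3)                  # randomly choose a transformation
        if f == 1:
            x, y = 0.5 * x, 0.5 * y              # F1
        elif f == 2:
            x, y = 0.5 * x + 1, 0.5 * y          # F2
        else:
            x, y = 0.5 * x, 0.5 * y + 1          # F3
        if i >= discard:                   # get rid of the first few points
            pts.append((x, y))
    return pts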

Consider the Sierpinski triangle again.  Below is what the first 20 points look like (after (0,0)), numbered in the order in which they’re produced.

Sierp20.png

Let’s look at this in a bit more detail.  For reference, we’ll copy the function system which produces the Sierpinski triangle from the first post.

F_1\left(\begin{matrix}x\\y\end{matrix}\right)=\left[\begin{matrix}0.5&0\\0&0.5\end{matrix}\right] \left(\begin{matrix}x\\y\end{matrix}\right)

F_2\left(\begin{matrix}x\\y\end{matrix}\right)=\left[\begin{matrix}0.5&0\\0&0.5\end{matrix}\right] \left(\begin{matrix}x\\y\end{matrix}\right)+\left(\begin{matrix}1\\0\end{matrix}\right)

F_3\left(\begin{matrix}x\\y\end{matrix}\right)=\left[\begin{matrix}0.5&0\\0&0.5\end{matrix}\right] \left(\begin{matrix}x\\y\end{matrix}\right)+\left(\begin{matrix}0\\1\end{matrix}\right)

Now here’s how the color scheme works.  Any time F_1 is randomly chosen, the point is colored red.  For example, you can see that after Point 6 was drawn, F_1 was chosen, so Point 7 was simply halfway between Point 6 and the origin.

Any time F_2 is chosen, the point is colored blue.  For example, after Point 8 was drawn, F_2 was randomly chosen.  You can see that if you move Point 8 halfway to the origin and then over right one unit, you end up exactly at Point 9.

Finally, any time F_3 is chosen, the point is colored orange.  So for example, it should be clear that moving Point 7 halfway toward the origin and then up one unit results in Point 8.

Of course plotting more points results in a more accurate representation of the fractal.  Below is an image produced using 5000 points.

Sierp5000.png

To get more accuracy, simply increase the number of points, but decrease the size of the points (so they don’t overlap).  The following image is the result of increasing the number of points to 50,000, but using points of half the radius.

Sierp50000

There’s just one more consideration, and then we can move on to the Python code for the algorithm.  How do we randomly choose the next affine transformation?

Of course we use a random number generator to select a transformation.  In the case of the Sierpinski triangle, each of the three transformations had the same likelihood of being selected.

Now consider one of the fractals we looked at last week.

Two4

If the algorithm chose either transformation with equal probability, this is what our image would look like:

Two450.png

Of course there’s a huge difference!  What’s happening is that transformation F_{{\rm orange}} actually corresponds to a much smaller portion of the fractal than F_{{\rm green}} — and so to get a more realistic idea of what the fractal is really like, we need to choose it less often.  Otherwise, we overemphasize those parts of the fractal corresponding to the effect of F_{{\rm orange}}.

Again, according to the theory, it’s best to choose probabilities roughly in proportion to the portions of the fractal corresponding to the affine transformations.  So rather than a 50/50 split, I chose F_{{\rm green}} 87.5% of the time, and F_{{\rm orange}} just 12.5% of the time.  As you’ll see when you experiment, a few percentage points may really impact the appearance of a fractal image.

From a theoretical perspective, it actually doesn’t matter what the probabilities are — if you let the number of points you draw go to infinity, you’ll always get the same fractal image!  But of course we are limited to a finite number of points, and so the probabilities do in fact strongly influence the final appearance of the image.
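In code, the weighting comes down to a single uniform random number; the 87.5/12.5 split below is the one described above.

from random import random

def pick_transformation():
    if random() < 0.875:
        return "green"       # F_green, chosen 87.5% of the time
    return "orange"          # F_orange, chosen 12.5% of the time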

So once you’ve chosen some transformations, that’s just the beginning.  You’ve got to decide on a color scheme, the number of points and their size, as well as the probabilities that each transformation is chosen.  All these choices impact the result.

Now it’s your turn!  Here is the Sage link to the Python code which you can use to generate your own fractal images.  (Remember, you’ve got to copy it into one of your own Projects first.)  Freely experiment — I’ve also added examples of non-affine transformations, as well as affine transformations in three dimensions!

And please comment with interesting images you create.  I’m interested to see what you can come up with!

Creating Fractals VI: Iterated Function Systems II

What I find truly remarkable about iterated function systems is the astonishing variety of fractals you can create with just two affine transformations.  Today, we’ll explore some more examples so you can get a better feel for how the particular affine transformations used affect the final image.

Let’s begin with the example from last week, which I color coded for easy reference.

Two2.png

Below are the affine transformations which generated this fractal.

F_{{\rm green}}\left(\begin{matrix}x\\y\end{matrix}\right)=\left[\begin{matrix}0.95&0\\0&0.95\end{matrix}\right]\left[\begin{matrix}\cos(20^\circ)&-\sin(20^\circ)\\\sin(20^\circ)&\cos(20^\circ)\end{matrix}\right]\left(\begin{matrix}x\\y\end{matrix}\right)

F_{{\rm orange}}\left(\begin{matrix}x\\y\end{matrix}\right)=\left[\begin{matrix}0.4&0\\0&0.4\end{matrix}\right]\left(\begin{matrix}x\\y\end{matrix}\right)+\left(\begin{matrix}1\\0\end{matrix}\right)

Now compare the different colors in the image with the transformations.  The first transformation combines a scaling by 0.95 with a rotation by 20 degrees.  If you look closely, you’ll see that the green part of the image looks the same as a slightly smaller copy of the entire fractal, except rotated by 20 degrees from the center of the spiral (note that there is no translation involved in F_{{\rm green}}).

The orange part of the image is simply a scaled version of the entire fractal (by a factor of 0.4), but moved over 1 unit.  You can see this by looking at the form of the affine transformation F_{{\rm orange}}.

The pair of affine transformations used to generate the spiral exactly describes the self-similarity.  This is easy to see after the fact — but I really have no helpful suggestion as to how to predict the form of this wonderful spiral just by looking at the transformations.

Now let’s just slightly alter the second transformation so that in addition to scaling by a factor of 0.4, there is a shear as well.

F_{{\rm orange}}\left(\begin{matrix}x\\y\end{matrix}\right)=\left[\begin{matrix}0.4&0.4\\0&0.4\end{matrix}\right]\left(\begin{matrix}x\\y\end{matrix}\right)+\left(\begin{matrix}1\\0\end{matrix}\right)

Notice how the fractal changes.

Two3.png

The shear distorts each arm of the spiral — stretching it in the process and creating more interaction between the various arms.

Now let’s go back to the original spiral and change the first transformation instead, slightly altering the scale factor.

F_{{\rm green}}\left(\begin{matrix}x\\y\end{matrix}\right)=\left[\begin{matrix}0.9&0\\0&0.9\end{matrix}\right]\left[\begin{matrix}\cos(20^\circ)&-\sin(20^\circ)\\\sin(20^\circ)&\cos(20^\circ)\end{matrix}\right]\left(\begin{matrix}x\\y\end{matrix}\right)

Here is what happens:

Two4

I really don’t know how to predict that this is what would happen — but I find it fascinating how even small changes in parameters can really change the way a fractal looks.

I can’t resist one more image created by changing the second transformation to

F_{{\rm orange}}\left(\begin{matrix}x\\y\end{matrix}\right)=\left[\begin{matrix}0.4&0\\0.4&0.15\end{matrix}\right]\left(\begin{matrix}x\\y\end{matrix}\right)+\left(\begin{matrix}1\\0\end{matrix}\right).

Here is the result.

Two5.png

There’s so much you can do with just one pair of transformations!

While it may not be possible to guess exactly what a fractal will look like, it’s possible to know some features of a fractal based upon how you set up your transformations.  For example, consider this combination of a shear and a scaling:

G_1\left(\begin{matrix}x\\y\end{matrix}\right)=\dfrac23\left[\begin{matrix}1&-1/2\\0&1\end{matrix}\right]\left(\begin{matrix}x\\y\end{matrix}\right)+\left(\begin{matrix}1\\1\end{matrix}\right).

There will be some aspect of the fractal which can be described by a shear.  For the second transformation, just take the opposite of the first, as follows.

G_2\left(\begin{matrix}x\\y\end{matrix}\right)=-\dfrac23\left[\begin{matrix}1&-1/2\\0&1\end{matrix}\right]\left(\begin{matrix}x\\y\end{matrix}\right)-\left(\begin{matrix}1\\1\end{matrix}\right).

This must force the fractal to have 180 degree rotational symmetry!  Think about how the algorithm works — take the image under one transformation, then the other — but by their definitions, the images must be symmetric about the origin.

Two21.png
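Here is a quick numerical check of the symmetry argument, assuming numpy:  since G_2(v) = -G_1(v) for every point v, the image under G_2 is always the reflection through the origin of the image under G_1.

import numpy as np

A = (2. / 3.) * np.array([[1., -0.5],
                          [0., 1.]])   # the shear combined with a scaling
b = np.array([1., 1.])

def G1(v):
    return A.dot(v) + b

def G2(v):
    return -(A.dot(v) + b)             # the opposite of G1

v = np.array([0.3, -0.7])
print(G1(v), G2(v))                    # the outputs are negatives of each other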

So it is possible to use our knowledge of affine transformations to design, even if to a limited extent, some features of the resulting fractal.

As another example, we can assemble a fractal similar to two smaller copies of itself, one piece of which is a 90 degree rotation of the other.  Use G_1 as above, and then use

G_2=\left[\begin{matrix}0&-1\\1&0\end{matrix}\right]G_1.

Here is the resulting fractal.

Two22.png

You can clearly see how the red piece is a 90 degree counterclockwise rotation of the purple piece.  This follows from how we created the second transformation from the first.

One reason for the diversity of fractals is the number of parameters needed to specify two affine transformations — twelve in all, counting four matrix entries and two translation components for each.  Now some sets of parameters may create the same fractal — but even so, the wealth of variations is still staggering.

But will just any affine transformations work?  There are some constraints — for example, you can’t describe a fractal in terms of larger copies of itself.  So the affine transformations cannot make a starting object bigger, or else each successive iteration will expand.  Of course you can still use such affine transformations to create interesting graphical effects — but the resulting image will not be an approximation to a fractal.

Next week, we’ll look at the algorithm which I used to create all these fractal images, and I’ll give you some Python code you can use yourself.  But before I close, I’d like to share a few favorite fractals which are generated by just two affine transformations.

First, the Beetle.  It looks like it’s scampering toward you.

Beetle2.png

And the second, well, it’s just yellow.

Yellow.png

Until next week!