Creating Animated GIFs in Processing

Last week at our Digital Art Club meeting, I mentioned that I had started making a few animated gifs using Processing.  Like this one.

[animated gif]

(I’m not sure exactly why the circles look like they have such jagged edges — it must have to do with the way WordPress uploads the gif.  But it was my first animated gif, so I thought I’d include it anyway.)

And, of course, my students wanted to learn how to make them.  A natural question.  So I thought I’d devote today’s post to showing you how to create a rather simple animated gif.

[animated gif: moving circles example]

Certainly not very exciting, but I wanted to use an example where I can include all the code and explain how it works.  For some truly amazing animated gifs, visit David Whyte’s Bees & Bombs page, or my friend Roger Antonsen’s Art page.

Here is the code that produces the moving circles.

[code screenshot]
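
In outline, the code looks like this (a reconstructed sketch: the window height and the particular colors here are stand-ins, but the motion and frame-saving logic follow the discussion below):

    R = [255, 255, 0, 128]
    G = [0, 255, 0, 0]
    B = [0, 0, 255, 128]

    def setup():
        size(600, 150)

    def draw():
        background(0, 0, 0)
        noStroke()
        for i in range(4):
            fill(R[i], G[i], B[i])
            x = (75 + 150 * i + 2 * frameCount) % 600
            ellipse(x, 75, 100, 100)
        if frameCount <= 300:
            saveFrame("frames/circles####.png")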

I’ll assume you’ve done some work with Processing, so you understand the setup and draw functions, you know that background(0, 0, 0) sets the background color to black, etc.

The idea behind an animated gif which seems to be a continuous loop is to create a sequence of frames whose last frame is essentially the same as the first.  That way, when the gif keeps repeating, it will seem as though the image is continually moving.

One way to do this is with the “mod” function — in other words, using modular arithmetic.  Recall that taking 25 mod 4 means asking “What is the remainder after dividing 25 by 4?”  So if you take a sequence of numbers, such as

1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, …

and take that sequence mod 4, you end up with

1, 2, 3, 0, 1, 2, 3, 0, 1, 2, 3, 0, 1, 2, ….

Do you see it already?  Since my screen is 600 pixels wide, I take the x-coordinate of the centers of the circles mod 600 (that’s what the “% 600” means in Python).  This makes the image wrap around horizontally — once you hit 600, you’re actually back at 0 again.  In other words, once you go off the right edge of the screen, you re-enter the screen on the left.
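
For example:

    print(595 % 600)   # 595: still on the screen
    print(605 % 600)   # 5: off the right edge, so it re-enters on the left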

That’s the easy part….  The geometry is a little trickier.  The line

x = (75 + 150 * i + 2 * frameCount) % 600

requires a little more explanation.

First, I wanted the circles to be 100 pixels in diameter.  With four circles, this makes a total of 400 pixels for the combined widths of the circles.  Now since I wanted the image to wrap around, I needed 50 pixels between each circle.  To begin with a centered image, that means I needed margins which are just 25 pixels.  Think about it — since the image is wrapping around, I have to add the 25-pixel margin on the right to the 25-pixel margin on the left to get 50 pixels between the rightmost and leftmost circles.

So the center of the leftmost circle is 75 pixels in from the left edge — 25 pixels for the margin plus 50 pixels for the radius.  Since the circles are 100 pixels in diameter and there are 50 pixels between them, there are 150 pixels between the centers of adjacent circles.  That’s the “150 * i.”  Recall that in for loops, the counters begin at 0, so the first circle has its center just 75 pixels in from the left.

Now here’s where the timing comes in.  I chose 300 frames so that by showing approximately 30 frames per second (many standard frame-per-second rates are near 30 fps) the gif would cycle in about 10 seconds.  But cycling means moving 600 pixels in the x direction — so the “2 * frameCount” will actually give me a movement of 600 pixels to the right after 300 frames.  You’ve got to carefully calculate so your gif moves at just the speed you want it to.

To make displaying the colors easier, I put the R, G, and B values in lists.  Of course there are many other ways to do this — using a series of if/else statements, etc.

One last comment:  according to my online research, .png files are better for making animated gifs, while .tif files (as I’ve also used in many previous posts) are better for making movies.  But .png files take longer to save, which is why your gif will look like it’s moving slowly when you use saveFrame, but will actually move faster once you make your gif.

So now we have our frames!  What’s next?  A few of my students mentioned using Giphy to make animated gifs, but I use GIMP.  It is open source, and can be downloaded for free here.  I’m a big fan of open source software, and I like that I can create gifs locally on my machine.

Once you’ve got GIMP open, select “Open as Layers…” from the File menu.  Then go to the folder with all your frames, select them all (using Ctrl-A or Cmd-A or whatever does the trick on your computer), and then click “Open.”  It may take a few minutes to open all the images, depending on how many you have.

Now all that’s left to do is export as an animated gif!  In the File menu, select “Export As…”, and make sure your filename ends in “.gif”.  Then click “Export.”  A dialog box should open up — be sure that the “As animation” and “Loop forever” boxes are checked so your animated gif actually cycles.  The only choice to make now is the delay between frames.  I chose 30 milliseconds, so my gif cycled in about 10 seconds.  Then click “Export.”  Exporting will take a few seconds as well — again, depending on how many frames you have.

Unfortunately, I don’t think there’s a one-size-fits-all answer here.  The delay you choose depends on how big your gif actually is — the width of your screen in Processing — since that will determine how many frames you need to avoid the gif looking too “jerky.”  The smaller the time interval between frames, the more frames you’ll need, the more space those frames will take up, and the longer it will take to load your images into GIMP and export them to an animated gif.  Trade-offs.

So that’s all there is to it!  Not too complicated, though it did take a bit longer for me the first time.  But now you can benefit from my experience.  I’d love to see any animated gifs you make!

Using Processing for the First Time

While I have discussed how to code in Processing in several previous posts, I realized I have not written about getting Processing working on your own computer.  Naturally I tell students how to do this in my Mathematics and Digital Art course.  But now that I have started a Digital Art Club at the University of San Francisco, it’s worth having the instructions readily accessible.

The file I will discuss may be used to create an image based on the work of Josef Albers, as shown below.

[image: piece based on the work of Josef Albers]

See Day002 of my blog,  Josef Albers and Interaction of Color, for more about how color is used in creating this piece.

As you would expect, the first step is to download Processing.  You can do that here.  It may take a few moments, so be patient.

The default language used in Processing is Java.  I won’t go into details of why I’m not a fan of Java — so I use Python mode.  When you open Processing, you’ll see a blank document like this:

[screenshot: blank Processing sketch]

Note the “Java” in the upper right corner.  Click on that button, and you should see a menu with the option “Add Mode…”  Select this option, and then you should see a variety of choices — select the one for Python and click “Install.”  This will definitely take a few minutes to download, so again, be patient.

Now you’re ready to go!  Next, find some Processing code written in Python (from my website, or any other example you want to play around with).  For convenience, here is the one I’ll be talking about today:  Day03JosefAlbers.pyde.  Note that it is an Open Office document; WordPress doesn’t let you upload a .pyde file.  So just open this document, copy, and paste into the blank sketch.  Be aware that indentation is important in Python, since it separates blocks of code.  When I copied and pasted the code from the Open Office document, it worked just fine.  But in case something goes awry, I always use four spaces for successive indents.

Now run the sketch (the button with the triangle pointing to the right).  You will be asked to create a new folder; just say yes.  When Processing runs, it often creates additional files (as we’ll see in a moment), and so keeping them all in one folder is helpful.  You should also see the image I showed above; that is the default image created by this Processing program.

Incidentally, the button with the square in the middle stops running your sketch.  Processing sometimes runs into a glitch or crashes, so you may occasionally need to stop and restart your sketch.  (I still haven’t figured out why it crashes at random times.)

Next, go to the folder that you just created.  You should see a directory called “frames.”  Inside, you should see some copies of the image.

[screenshot: images in the frames folder]

Inside the “draw” function, there is a function call to “saveFrame,” which saves copies of the frames you make.  You can call the folder whatever you want; this is convenient, since you might want to make two different images with the same program.  Just change the folder name you save the images to.

A word about the syntax.  The “####” means the frames will be numbered with four digits, as in 0001.png, 0002.png, etc.  If you need more than 9999 frames (likely you won’t when first starting), just add more hashtags.  The “.png” is the type of file.  You can use “.tif” as well.  I use “.tif” for making movies, and “.png” for making animated gifs.  There are other file types as well; see the documentation on saveFrame for more details.
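
A typical call inside the draw function therefore looks something like this (use whatever folder name you like):

    def draw():
        # ...all the drawing code goes here...
        saveFrame("frames/####.png")   # writes frames/0001.png, frames/0002.png, ...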

Now let’s take a look at making your own image using this program.

[code screenshot]

You’ll notice there are lines labelled “CHANGE 1” to “CHANGE 6” in the setup and draw functions.  These are the only lines you need to change in order to design your own piece.  You may tweak the other code later if you like.  But especially for beginning programmers, I like to make the first examples very user-friendly.

So let me talk you through changing these lines.  I won’t bother talking about the other code right now — that would take some time!  But you don’t need to know what’s under the hood in order to create some interesting artwork….

CHANGE 1:  The hashtags here, by the way, indicate a comment in your code:  when your program runs, anything after a hashtag is ignored.  This makes it easy to give hints and provide instructions in a program (like telling you what lines to change).  I created a window 800 x 600 pixels; you can make it any size you want by changing those numbers. The “P2D” just means you’re working with a two-dimensional geometry.  You can work in 3D in Processing, but we won’t discuss that today.
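
So the line in question looks like this:

    def setup():
        size(800, 600, P2D)   # CHANGE 1: window width and height, in pixels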

CHANGE 2:  The “sqSide” variable indicates how big the squares are, in units of pixels.  The default unit in Processing is always pixels, so if you want to use another geometry (like a Cartesian coordinate system), you have to convert from one coordinate system to another.  I do all this for you in the code; all you need to do is say how large each square is.  And if you didn’t go back and read the Josef Albers piece, by “square,” I mean a unit like this:

[image: a single square unit]

CHANGE 3, CHANGE 4:  The variables “sqRows” and “sqCols” indicate, as you would expect, how many rows and columns are in the final image.  Since I have 15 rows and the squares are 30 pixels on a side, the height of the image is 450 pixels.  Since my window is 600 pixels in height, this means there are margins of 75 pixels on the top and bottom.  If your image is too tall (or too wide) for the screen, it will be cropped to fit the screen.  Processing will not automatically resize — once you set the size of the screen, it’s fixed.

CHANGE 5:  The “background” function call sets the color of the background, using the usual RGB values from 0-255.

CHANGE 6:  The first three numbers are the RGB values of the central rectangles in a square unit.  The next three numbers indicate how the background colors of the surrounding rectangles change.  (I won’t go into that here, since I explain it in detail in the post on Josef Albers mentioned above.  The only difference is that in that post, I use RGB values from 0-1, but in the Processing code here, I use values from 0-255.  The underlying concept is the same.)

The last (and seventh) number is the random number seed.  Why is this important?  If you don’t say what the random number seed is (the code does involve choosing random numbers), every time you run the code you will get a different image — the computer just keeps generating more (and different) random numbers in a sequence.  So if you find an image you really like and don’t know what the seed is, you’ll likely never be able to reproduce it again!  And if you don’t quite like the texture created by using one random seed, you can try another.  It doesn’t matter so much if you have many rows and columns, but if you try a more minimalist approach with fewer and larger squares, the random number seed makes a big difference.
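
Putting these together, the lines you will edit look roughly like this.  The numbers are sample values, and the name of the function taking the seven numbers in CHANGE 6 is hypothetical; use whatever call appears in the .pyde file.

    sqSide = 30            # CHANGE 2: side length of each square, in pixels
    sqRows = 15            # CHANGE 3: number of rows
    sqCols = 20            # CHANGE 4: number of columns
    background(0, 0, 0)    # CHANGE 5: background color, RGB values from 0-255
    # CHANGE 6: central RGB values, the three color changes, and the seed
    # (hypothetical function name)
    makeSquares(255, 150, 50, 40, 40, 40, 2)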

OK, now you’re on your own….  This should be enough to get you started, and will hopefully inspire you to learn a lot more about Python and Processing!


To Processing I

I made a decision last week to abandon using Sage (now called CoCalc) as a platform in my Mathematics and Digital Art class.  It was not an easy decision to make, as there are some nice features (which I’ll get to in a moment).  But now any effective use of Sage comes with a cost — the free version uses servers, and you are given this pleasant message:  “This project runs on a free server (which may be unavailable during peak hours)….”

This means that to guarantee access to CoCalc, you need a subscription.  It would not be prohibitively expensive for my class — but as I am committed to being open source, I am reluctant to continue putting sample code on my web page which requires a cost in order to use.  Yes, there is the free version — as long as the server is available….

When I asked my students last semester about moving exclusively to Processing, they responded with comments to the effect that using Sage was a gentle introduction to coding, and that I should stick with it.  I fully intended to do this, and got started preparing for the first few days of classes.  I opened my Sage worksheet, and waited for it to load.  And waited….

That’s when I began thinking — I did have experiences last year where the class virtually came to a halt because Sage was too slow.  It’s a problem I no longer wanted to have.

So now I’m going to Processing right from the beginning.  But why was I reluctant in the past?

The issue is that of user space versus screen space.  (See Making Movies with Processing I for a discussion of these concepts.)  With Sage, students could work in user space — the usual Cartesian coordinate system.  And the programming was particularly easy, since to create geometrical objects, all you needed to do was specify the vertices of a polygon.

I felt this issue was important.  Recall the success I felt students achieved by being able to alter the geometry of the rectangles in the assignment about Josef Albers and color.  (See the post on Josef Albers and Interaction of Color for a refresher on the Albers assignment.)

[image]
Peyton’s piece on Josef Albers.

Most students experimented significantly with the geometry, so I wanted to make that feature readily accessible.  It was easy in Sage, the essential code looking something like this:

[code screenshot]
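
In outline, it worked something like this (the corner offsets and color are sample values, not the ones from my actual code):

    sqRows, sqCols = 15, 20
    p = Graphics()
    for i in range(sqCols):
        for j in range(sqRows):
            # one rectangle inside the unit square with lower-left corner (i, j)
            p += polygon([(i + 0.2, j + 0.2), (i + 0.8, j + 0.2),
                          (i + 0.8, j + 0.8), (i + 0.2, j + 0.8)],
                         rgbcolor=(0.8, 0.4, 0.2))
    p.show(axes=False, aspect_ratio=1)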

What is happening here is that the base piece is essentially an array of rectangles within unit squares, with lower-left corners of the squares at coordinates (i, j).  So it was easy for students to alter the polygons rendered by using graph paper to sketch some other polygon, approximate its coordinates, and then enter these coordinates into the nested loops.

Then Sage rendered whatever image you created on the screen, automatically sizing the image for you.

But here is the problem:  Processing doesn’t render images this way.  When you specify a polygon, the coordinates must be in screen space, whose units are pixels.  The pedagogical issue is this:  jumping into screen space right at the beginning of the semester, when we’re just learning about colors and hex codes, is just too big a leap.  I want the first assignment to focus on getting used to coding and thinking about color, not changing coordinate systems.

Moreover, creating polygons in Processing involves methods — object-oriented programming.  Another leap for students brand new to coding.

The solution?  I had to create a function in Processing which essentially mimicked the “polygon” function used in Sage.  In addition, I wanted to make the editing process easy for my students, since they needed to input more information this time.

[code screenshot]

In Processing — in addition to the number of rows and columns — students must specify the screen size and the length of the sides of the squares, both in pixels.  The margins — xoffset and yoffset — are automatically calculated.
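
In other words, something like this (a sketch; this assumes it runs after size has been called, so that width and height are available):

    sqSide = 30      # side length of each square, in pixels
    sqRows = 15
    sqCols = 20
    xoffset = (width - sqSide * sqCols) / 2    # margins, computed automatically
    yoffset = (height - sqSide * sqRows) / 2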

Here is the structure of the revised nested for loops:

[code screenshot]

Of course there are many more function calls in the loops — stroke weights, additional fill colors and polygons, etc.  But it looks very similar to the loop formerly written in Sage — even a bit simpler, since I moved the translations to arguments (instead of needing to include them in each vertex coordinate) and moved all the output routines to the “myshape” function.
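
Here is a sketch of that structure (the vertex list and fill color are placeholders):

    for i in range(sqCols):
        for j in range(sqRows):
            fill(150, 100, 220)
            myshape(xoffset + i * sqSide, yoffset + j * sqSide,
                    [(0.2, 0.2), (0.8, 0.2), (0.8, 0.8), (0.2, 0.8)])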

Again, the reason for this is that creating arbitrary shapes involves object-oriented concepts.  See this Processing documentation page for more details.

Here is what the myshape function looks like:

[code screenshot]

The structure is not complicated.  Start by invoking “createShape,” and then use the “beginShape” method.  Adding vertices to a shape involves using the “vertex” method, once for each vertex.  This seems a bit cumbersome to me; I am new to using shapes in Processing, so I’m hoping to learn more.  I had been able to get by with just creating points, lines, rectangles, and circles so far — but that doesn’t give students as much room to be creative as including arbitrary shapes does.
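
A sketch of myshape along these lines (the argument names and the scaling detail are my assumptions):

    def myshape(tx, ty, vertices):
        s = createShape()
        s.beginShape()
        for (x, y) in vertices:
            s.vertex(x * sqSide, y * sqSide)   # scale unit coordinates to pixels
        s.endShape(CLOSE)
        shape(s, tx, ty)   # draw the shape, translated to (tx, ty)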

I should point out that shapes can have subshapes (children) and other various attributes.  There is also a “fill” method for shapes, but I have students use the fill function call in the for loop to avoid having too many arguments to myshape.  I also think it helps in understanding the logical structure of Processing — the order in which function calls are invoked matters.  So you first set your fill color, then when you specify the vertices of your polygon, the most recently defined fill color is used.  That subtlety would get lost if I absorbed the fill into the myshape function.

As in previous semesters, I’ll let you know how it goes!  Like last semester, I’ll give updates approximately monthly, since the content was specified in detail in the first semester of the course (see Section L. of 100 Posts! for a complete listing of posts about the Mathematics and Digital Art course).

Throughout the semester, I’ll be continuously moving code from Sage to Processing.  It might not always warrant a post, but if I come across something interesting, I’ll certainly let you know!

Mathematics and Digital Art IV

This week will complete the series devoted to a new Mathematics and Digital Art (MDA) course I’ll be teaching for the first time this Fall.  During the semester, I’ll be posting regularly about the course progression for those interested in following along.

Continuing from the previous post, Weeks 7 and 8 will be devoted to polyhedra.  While not really a topic under “digital art,” so much of the art at Bridges and similar conferences is three-dimensional that I think it’s important that students are familiar with a basic three-dimensional geometric vocabulary.

Moreover, I’ve taught laboratory-based courses on polyhedra since the mid 1990’s, and I’ve also written a textbook for such a course.  So there will be no problem coming up with ideas.  Basic topics to cover are the Platonic solids (and proofs that there are only five), Euler’s formula, and building models with paper (including unit origami) and Zometools.

There are also over 50 papers in the Bridges archive on polyhedra.  One particularly interesting one is by Reza Sarhangi about putting patterns on polyhedra (link here).  Looking at this paper will allow an interested student to combine the creation of digital art and the construction of polyhedra.

At the end of Week 7, the proposal for the Final Project will be due.  During Week 8, I’ll have one of the days be devoted to a construction project, which will give me time to go around to students individually and comment on their proposals.

This paves the way for the second half of the semester, which is largely focused on Processing and work on Final Projects.

In Weeks 9 and 10, the first two class periods will be devoted to work on Processing.  I recently completed a six-part series on making movies with Processing (see Day039–Day044), beginning with a very simple example of morphing a dot from one color to another.

These blog posts were written especially for MDA, so we’ll begin our discussion of Processing by working through those posts.  You’ll notice the significant use of IFS, which is why they were given such an important emphasis during the first half of the course.  But as mentioned in the post on Day044, the students in my linear algebra course got so excited about the IFS movie project, I’m confident we’ll have a similar experience in MDA.

The third class in Weeks 9 and 10 will be devoted to work on the Final Project.  Not only does taking the class time to work on these projects emphasize their importance, but I get to monitor students’ progress.  Their proposals will include a very rough week-by-week outline of what they want to accomplish, so I’ll use that to help me gauge their progress.

What these work days also allow for is troubleshooting and possibly revising the proposals along the way.  This is an important aspect of any project, as it is not always possible to predict one’s progress — especially when writing code is involved!  But struggling with writing and debugging code is part of the learning process, so students should learn to be comfortable with the occasional bug or syntax error.  And recall that I’ll have my student Nick as an assistant in the classroom, so there will be two of us to help students on these work days.

Week 11 will be another Presentation Week, again largely based on the Bridges archives.  However, I’ll give students more latitude to look at other sources of interest if they want.  Again, we’re looking for breadth here, so students will present papers on topics not covered in class or the first round of presentations.

I wanted to have a week here to break up the second half of the semester a bit.  Students will still include this week in their outline — they will be expected to continue working on their project as well.  But I am hoping that they find these Presentation Weeks interesting and informative.  Rather like a mini-conference in the context of the usual course.

Weeks 12 and 13 will essentially be like Weeks 9 and 10.  Again, given that most students will not have written any code before this course, getting them to make their own movies in Processing will take time.  There is always the potential that we’ll get done with the basics early — but there is no shortage of topics to go into if needed.  But I do want to make sure all students experience some measure of success with making movies in Processing.

Week 14 will be the Final Project Presentation Week.  This is the culminating week of the entire semester, where students showcase what they’ve created during the previous five weeks.  Faculty from mathematics, computer science, art, and design will be invited to these presentations as well.  I plan to have videos made of the presentations so that I can show some highlights on my blog.

Week 15 is reserved for Special Topics.  There are just two days in this last week, which is right before Final Exams.  I want to have a low-key, fun week where we still learn some new ideas about mathematics and art after the hard work is already done.

So that’s Mathematics and Digital Art!  The planning process has been very exciting, and I’m really looking forward to teaching the course this fall.

Just keep the two “big picture” ideas in mind.  First, that students see a real application of mathematics and programming.  Second, students have a positive experience of mathematics — in other words, they have fun doing projects involving mathematics and programming.

I can only hope that the course I’ve designed really does give students such a positive experience.  It really is necessary to bolster the perception of mathematics and computer science in society, and ideally Mathematics and Digital Art will do just that!

Mathematics and Digital Art II

This week, I want to talk more about the overall structure of the Mathematics and Digital Art (MDA) course I’ll be teaching in the fall.  I won’t have time to address specifics about content today, but I’ll begin with that next week.

As I mentioned last week, because I can’t require students to bring a laptop to class, MDA will meet in a computer laboratory.  Here is my actual classroom:

[photo: the classroom computer lab]

Each day, there will be some time in class — usually at least half the 65-minute period — for students to work at their computers.  This is a typical 16-week course meeting three times a week.  (Though courses at USF are four credits, hence the longer class time each day.)

Because the course is project-based, there are homework assignments and projects due, but no exams.  There may be an occasional homework quiz on the mathematics, where I let students use their notes.  I prefer this method to collecting homework, since there are always issues of too much copying.  Because I typically change the numbers in homework quiz problems, it is difficult to do well on this type of quiz if all you do is copy your homework from someone else.

Instead of a Final Exam, there is a major project due at the end of the course.  So the first half of the semester — roughly eight weeks — covers a breadth of topics so that students have lots of options when writing a proposal for their Final Project.

Their proposals are due mid-semester, so I have time to evaluate and discuss them, as well as make suggestions.  I try to make sure each project is appropriate for each student — enough to challenge them, but not frustrate them.  Of course there is flexibility for projects to undergo changes along the way, but the proposal allows for a very concrete starting point.

In the second half of the semester, most weeks will include one day for working on Final Projects.  Not only does this emphasize the importance of the projects, but it also lets me see their progress and perhaps alter the direction they’re going if necessary.

The other main focus of the second half of the semester is the use of Processing to make movies.  Because most students will not have studied programming before, I need to make sure there is plenty of time for them to be successful.  We’ll need to take it slowly.

Of course this means that students will not be able to include the use of Processing in their course proposals, but that doesn’t mean they can’t adapt their project along the way to include the use of Processing if they want to.  This is a necessary trade-off, however, since front-loading the course with a discussion of Processing would mean sacrificing the breadth of topics covered.  I like the students to see as much as possible before they write their Final Project proposals.

This is the broad structure of the course.  There are a few other aspects of MDA which also deserve mention.  Three weeks of the course are devoted to presentations.  The idea here is twofold.  First, there is the clear benefit of developing students’ public speaking abilities.

Second, because students will be giving presentations on papers from the Bridges archive (the archive of all papers presented in the Bridges conferences since 1998), they will need to find a paper on a topic of interest to themselves at a level they can understand.  As there are over 1000 papers here, along with an ability to search using keywords, this should not pose a significant problem.  Of course should a student have another source about mathematics and art they are keen to share, this would be acceptable as well.

Because the class size is small (13 students), it will be feasible to have all students present in each of the three weeks.  The first Presentation Week on Bridges papers will be about the sixth week of the semester, and the second will be about the eleventh week.

The third Presentation Week will be at the fourteenth week of the semester, but this time will be focused on Final Projects.  I will invite mathematics, computer science, and art/design faculty to these presentations as well, and of course will let the students know this in advance.  All presentations will be both peer-evaluated and evaluated by me.

There is also a plan to bring guest speakers from the Bay Area into the classroom.  I know a handful of mathematical artists in the area, so bringing in two or three speakers over the course of a semester would be feasible.  This is one of the design features of the First-Year Seminar, incidentally — expose students to the larger San Francisco/Bay Area community.

In addition, I can have a student assistant in the classroom as well.  Nick, my student who is also going to the Bridges conference in Finland this year, will serve in that role.  We’ve spent a semester in a directed study to prepare for the Bridges 2016 conference, so he has unique qualifications.  I’ll talk more about Nick in a future post.

When teaching a programming course with a laboratory component, it is difficult to be able to get around to help all students in any given class period.  Certainly some questions students ask have simple answers (as in a syntax fix), but others will require sitting down with a student for several minutes.

So it will be great to have Nick as an assistant, since that will allow two of us to circulate around the classroom during the laboratory part of the class.  The benefit to students will be obvious, and with the small class size, I’m confident they’ll get the attention they need.

Finally, I left the last week (just two class periods) open for special topics.  Given all the demands of a first-semester student just before Final Exam week, I thought it would be nice for them to have a short breather.  I’ll take suggestions for topics from the students, with the Bridges papers they presented on as a good starting point.

So that’s what the course looks like, broadly.  Next week, I’ll begin a week-by-week discussion of the mathematical/artistic content of the course.  I also intend to post weekly or biweekly while the course is going on — course design is a lot easier in theory than in practice, and I’ll be able to share pitfalls and triumphs in real time!


Mathematics and Digital Art I

Last week I discussed a movie project I had my linear algebra students do which involved the animation of fractals generated by  iterated function systems.  This week, I’d like to discuss a new classroom project — a Mathematics and Digital Art course I’ll be teaching this Fall at the University of San Francisco!

The idea came to me during the Fall 2015 semester when we were asked to list courses we’d like to teach for the Fall 2016 semester.  I noticed that one of my colleagues had taught a First-Year Seminar course  — that is, a course with a small enrollment (capped at 16) focused on a topic of special interest to the faculty member teaching the course.  The idea is for each first-year student to get to know one faculty member fairly well, and get acclimated to university life.

So I thought, Why not teach a course on mathematics and art?  My department chair urged me to go for it, and so I drafted a syllabus and started the process going.  Here’s the course description:

What is digital art? It is easy to make a digital image, but what gives it artistic value? This question will be explored in a practical, hands-on way by having students learn how to create their own digital images and movies in a laboratory-style classroom. We will focus on the Sage/Python environment, and learn to use Processing as well. There will be an emphasis on using the computer to create various types of fractal images. No previous programming experience is necessary.

I have two “big picture” motivations in mind.  First, I want the course to show real applications of mathematics and programming.  Too many students have experienced mathematics as completing sets of problems in a textbook.  In this course, students will use mathematics to help design digital images.  I’ll say more about this in later posts.

And second, I want students to have a  positive experience of mathematics.  This course might be the only math course they have to take in college, and I want them to enjoy it!  Given prevailing attitudes about mathematics in general, I think it is completely legitimate to have “students will begin to enjoy mathematics” as a course goal.

I also think that every student should learn some programming during their college career.  Granted, students will start by tweaking Python code I give to them, just like with the movie project.  Some students won’t progress much beyond this, but I am hopeful that others will.  Given the type of course this is, I really can’t have any prerequisites, so I’m assuming I will have students who either haven’t taken a math course in a year or two, and/or have never written a line of code before.

I’ll go into greater detail in the next few posts about content and course flow, but today I’ll share three project ideas which will drive much of the mathematics and programming content.  The first revolves around the piece Evaporation, which I discuss on Day011 and Day012 of my blog.

[image: Evaporation]

Creating a piece like this involves learning the basics of representing colors digitally, as well as basic programming ideas like variables and loops.

The second project revolves around the algorithm which produces the Koch curve, which I discuss in some detail on Day007, Day008, Day009, and Day027 of my blog.

[image: a Koch curve variation that closes up]

By varying the usual angles in the Koch curve algorithm, a variety of interesting images may be produced.  Many exhibit chaotic behavior, but some, like the image above, actually “close up” and are beautifully symmetric.

It turns out that entire families of images which close up may be generated by choosing pairs of angles which are solutions to a particular linear Diophantine equation.  So I’ll introduce some elementary number theory so we can look at several families of solutions.

The third (and largest) project revolves around creating animated movies of iterated function systems, as I described in the last six posts.

This involves learning about linear and affine transformations in two dimensions, and how fractals may be described by iterated function systems.  The mathematics is at a somewhat higher level here, but students can still play with the algorithms to generate fractal images without having completely mastered the linear algebra.

But I think it’s worth it, so students can learn to create movies of fractals.  In addition, fractals are just cool.  I think using IFS is a good way not only to show students an interesting application of mathematics and programming, but also to foster an enjoyment of mathematics and programming as well.  I had great success with my linear algebra students in this regard.

I’d like to end this post with a few words on the process of creating a course like Mathematics and Digital Art at USF.  Some of these points might be obvious, others not — and some may not even be relevant at your particular school.

  • Start early!  In my case, the course needed to be first approved by the Dean, then next by a curriculum committee in order to receive a Core mathematics designation, and then finally by the First-Year Seminar committee.  The approval process took four months.
  • Consider having your course in a computer lab.  At USF, I could not require students to bring a laptop to class, since it could be the case that some students do not have their own personal computer.  I hadn’t anticipated this wrinkle.
  • Don’t reinvent the wheel!  One reason I’m writing about Mathematics and Digital Art on my blog is to make it easier for others to design a similar course.  I’ll be talking more about content and course flow in the next few posts, so feel free to use whatever might be useful.  And if it would help, here is my course syllabus.

As I mentioned, next week’s post will focus more on the actual content of the course.  Stay tuned!

Making Movies with Processing VI

The last post in this series will address how I used Processing in the classroom this past semester.  Although my experience has been limited so far, the response has been great!

In the Spring 2016 semester, I used Processing in my Linear Algebra and Probability course, which is a four-credit course specifically for CS majors.  The idea is to learn linear algebra first from a geometrical perspective, and then say that a matrix is simply a representation of a geometrical transformation.

I make it a point to generalize and include affine transformations as well, as they are the building blocks for creating fractals using iterated function systems (IFS).  The more intuition  students have about affine transformations, the easier it is for them to work with IFS.  (To learn more about IFS, see Day034, Day035, and Day036 of my blog.)

If you’ve noticed, many of the movies I’ve discussed deal with IFS.  Within the first three weeks of the class (here is the link to the course website), I assign a project to create a single fractal image using an IFS.  I use the Sage platform as it supports Python and is open source, and all of my students were supposed to have taken a Python course already.  All links, prompts, and handouts for this project may be found on Days 5 and 6 of the course website.

This sets up the class for a Processing project later on in the semester.  The timing is not critical; as it turns out the project was due during the Probability section of the course (the last third) as most students had other large projects due a bit earlier.  A sample movie provided to the students may be found on Day 22 of the course website, and the project prompt may be found on Day 33.

The basic idea was to use a parameter to vary the numbers in a series of affine transformations used to create a fractal using an IFS.  As the parameter varied, the fractal image varied as well.  This allowed  for an animation from an initial fractal image to a final fractal image.

My grading rubric was fairly simple:  each feature a student added beyond my bare-bones movie bumped their grade up.  Roughly speaking, four additional features resulted in an A for the assignment.  These might include use of color, background image, music, text, etc.

I was inspired by how eagerly students took on this assignment!  They really got into it.  They appreciated using mathematics — specifically linear algebra — in a hands-on application.  I do feel that it is important for CS majors to understand mathematics from an applied viewpoint as well as a theoretical one.  Very few CS majors will go on to become theoretical computer scientists.

We did take some class time to watch movies which students uploaded to my Google drive.  All had their interesting features, but some were particularly creative.  Let’s take a look at a few of them.

Here is Monica’s description of her movie:

My fractal movie consists of a Phoenix, morphing into a shell, morphing into the yin-yang symbol, and then morphing apart. I chose my background color to be pure black to create contrast between the orange and yellow colors of my fractal. The top fractal, which starts out as yellow, shifts downward the whole time. On the other hand, the bottom fractal, which starts out orange, shifts upward. In both cases, a small number of the points from each fractal get stuck in the opposite fractal and begin to shift with it. This leaves the two fractals at the end of the movie intertwined. I created text at the bottom of the fractal movie, which says “Phoenix.” I wanted to enhance the overall movie and give it a name. Lastly, I added music to my fractal movie. I picked the song “Sweet Caroline” by Neil Diamond.

Ethan says this about his movie:

The inspiration for this fractal came from a process of trial and error.  I knew I wanted to have symmetry and bright color, but everything else was undecided.  After creating the shape of the fractal, I decided to create a complete copy of the fractal and rotate one copy on top of the other.  After seeing what it looked like with a simple rotation, I decided something was missing so I had the copied image rotate and either shrink or grow, depending on a random variable.  In this movie the image shrinks.  I used transitional gradients because I wanted to add more color without it looking too busy or cluttered.

Finally, this is how Mohamed describes his video:

This video starts as a set of four squares scaled by 0.45, and each square has either x increased by 0.5, y increased by 0.5, both increased by 0.5, or neither increased by 0.5. The grays and blacks as the video starts show the random points plotted as the numbers fed into the function are increasing, while the blues and whites show the points as the numbers fed into the function are decreasing. I chose to do this because we often see growth of functions in videos, but we do not see the regression back to its original form too often….

I was very pleased how creative students got with the project, and how enthusiastic they were about their final videos.  I have another project underway where I use Processing — a Mathematics and Digital Art course I’ll be teaching this Fall semester.  I’ll be talking about this course soon, so be sure to follow along!

Making Movies with Processing V

Last week, we saw how to use linear interpolation to rotate a series of fractal images.  It was not unusually difficult, but it was important to call functions in the right sequence in order to make the code as simple as possible.

This week we’ll explore different ways to use color.  Using color in digital art is a broad and complex topic, so we’ll only scratch the surface today.

The first movie shows how the different parts of the Sierpinski triangles corresponding to different affine transformations of the iterated function system can be drawn in different colors.

Recall that the points variable stored all the points to be drawn in any given fractal image.  Since the points are drawn all at once, it is difficult to say which of the three transformations generated a particular point.  But this is important because we want the color of a point to correspond to the transformation used to generate it.

One idea is to use three variables to store the points, and have these variables correspond to the three affine transformations.  Here is the code — we’ll discuss it in detail in a moment.

[code screenshot]
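
Here is a reconstructed sketch of the idea; the transformations and scaling follow the earlier Sierpinski posts, and I have left out the rotation and morphing to keep the sketch short:

    from random import randint

    def sierpinski(n):
        points1, points2, points3 = [], [], []
        last = (0, 0)
        for i in range(n):
            r = randint(1, 3)
            if r == 1:
                last = (0.5 * last[0], 0.5 * last[1])
                points1.append(last)
            elif r == 2:
                last = (0.5 * last[0] + 1, 0.5 * last[1])
                points2.append(last)
            else:
                last = (0.5 * last[0], 0.5 * last[1] + 1)
                points3.append(last)
        stroke(255, 255, 0)    # yellow for the first transformation
        [point(225 * x, height - 225 * y) for (x, y) in points1]
        stroke(255, 0, 0)      # red for the second
        [point(225 * x, height - 225 * y) for (x, y) in points2]
        stroke(0, 0, 255)      # blue for the third
        [point(225 * x, height - 225 * y) for (x, y) in points3]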

We use variables points1, points2, and points3 to store the points generated by each of the three affine transformations.  Note that the use of append is now within each if statement, not after all the if statements.  This is because we want to remember which points are generated by which transformation, so we can plot them all the same color.

As a result, we now need three separate calls to the stroke and point routines.  Recall that in Processing, a call to the stroke command changes the color of everything drawn after that stroke command is called.  So if we want to draw using three different colors, we need three calls to the stroke command.

Of course it follows that we need three calls to the point routine, since once we change the color of what is drawn, we need to make sure the correct set of points is that color.  In this case, all the points in points1 are yellow, those in points2 are red, and those in points3 are blue.

Again, not unusually complicated.  You just have to make sure you know how each function in Processing works, and the appropriate order in which to call the functions you use.

On to the next color experiment!  It’s been a few weeks since we used linear interpolation with color.  You’ll see in the movie below that the yellow triangle gradually turns to red, the blue triangle changes to yellow, and the red triangle becomes blue.

Let’s see how we’d use linear interpolation to accomplish this.  Below is the only code which needs to be altered — the stroke and point commands.  Also, I left out the rotate function so the changing of the colors would be easier to follow.

[code screenshot]
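
A sketch of those altered calls, with p running from 0 to 1 (the scaling inside point follows the earlier posts):

    stroke(255, (1 - p) * 255, 0)             # yellow turning to red
    [point(225 * x, height - 225 * y) for (x, y) in points1]
    stroke((1 - p) * 255, 0, p * 255)         # red turning to blue
    [point(225 * x, height - 225 * y) for (x, y) in points2]
    stroke(p * 255, p * 255, (1 - p) * 255)   # blue turning to yellow
    [point(225 * x, height - 225 * y) for (x, y) in points3]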

We’ll focus on how to change the red triangle to blue in this example, which occurs for the points in the variable points2.  The other color changes are handled similarly.  All we need to do is use linear interpolation on each of the RGB values of the colors we are looking at.

For example, red has an R value of 255, but blue has an R value of 0.  Now when p = 0 the triangle is red, and when p = 1, the triangle is blue.  So we need to start (p, R) at (0, 255) and end at (1, 0).  Creating a line between these two points results in the equation

R = (1 – p) * 255.

You can see the right-hand side of this equation as the first argument to the stroke command used to change the color for points2.

Working with the G values is easy.  Since both red and blue have a G value of 0, we don’t need linear interpolation at all!  Just leave the G value 0, and everything will be fine.

Finally, we look at the B value.  For red, the B value is 0, but it’s 255 for blue.  So we need to start (p, B) at (0, 0) and end at (1, 255).  This is not difficult to do; we get the line

B = p * 255.

You’ll see the right-hand side of this equation as the third argument to the stroke command which changes the color for points2.

Just linear interpolation at work again!  It’s not too difficult, once you look at it the right way.

For our last example, we’ll let the triangles “fade out,” as shown in the following movie.

Can you figure out how this is done?  Linear interpolation again, but this time in the strokeWeight function.  Here are the changes:

[code screenshot]
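
The change amounts to this if-else clause:

    if p < 0.5:
        strokeWeight(2)
    else:
        strokeWeight((1 - p) * 4)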

Let’s see what this if-else clause does.  If the parameter p is less than 0.5, leave the stroke weight at 2.  Otherwise, calculate the stroke weight to be (1 – p)*4.

What does this accomplish?  Well, at p = 0.5, the stroke weight is (1 – 0.5)*4, which is 2.  And at p = 1, the stroke weight is 0.  This means that the stroke weight is 2 for the first half of the movie, then gradually diminishes to 0 by the end of the movie.  In other words, a fade out.

Of course when you write the code, you have to reverse engineer it.  If I call my stroke weight W, I want to start (p, W) at (0.5, 2) and end at (1, 0).  Creating a line between these two points gives the equation

W = (1 – p) * 4.

That’s all there is to it!

I hope you’ve seen how linear interpolation is a handy tool you can use to create all types of special effects.  The neat thing is that it can be applied to any function which takes numerical parameters — and those parameters can correspond to color values, angles of rotation, location, or stroke weight.  The only limit to how you can incorporate linear interpolation into your movies is your imagination!

Making Movies with Processing IV

Last week, we saw how using linear interpolation allowed us to create animations of fractals.  This week, we’ll explore how to create another special effect by using linear interpolation in a different way.  We’ll build on last week’s example, so you might want to take a minute to review it if you’ve forgotten the details.

Let’s suppose that in addition to morphing the Sierpinski triangle, we want to slowly rotate it as well.  So we insert a rotate command before plotting the points of the Sierpinski triangle, as shown here:

[code screenshot]
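
The inserted line is essentially this:

    rotate(2 * PI * p)   # p runs from 0 to 1, so this sweeps one full revolution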

First, it’s important to note that the rotate command takes angles in radian measure, not degrees.  Recall from your trigonometry classes that

360^\circ=2\pi{\rm \ radians.}

But different from your trigonometry classes is that the rotation is in a clockwise direction.  When you studied the unit circle, angles moved counter-clockwise around the origin as they increased in measure.  This is not a really complicated difference, but it illustrates again how not every platform is the same.  I googled “rotating in processing” to understand more, and I found what I needed right away.

Let’s recall that p is a parameter which is 0 at the beginning of the animation, and 1 at the end.  So when p = 0, there is a rotation of 0 radians (that is, no rotation), and when p = 1, there is a rotation of 2\pi radians, or one complete revolution.  And because we’re using linear interpolation, the rotation changes gradually and linearly as the parameter p varies from 0 to 1.

Let’s see what effect adding this line has.  (Note:  Watch the movie until the end.  You’ll see some blank space in the middle — we’ll explain that later!)

What just happened?  Most platforms which have a rotate command assume that the rotation is about the origin, (0,0).  We learned in the first post that the origin in Processing is in the upper left corner of the screen.  If you watch that last video again, you can clearly see that the Sierpinski triangle does in fact rotate about the upper left corner of the screen in a clockwise direction.

Of course this isn’t what we want — since most of the time the fractals are out of the view screen!  So we should pick a different point to rotate around.  You can pick any point you like, but I thought it looked good to rotate about the midpoint of the hypotenuse of the Sierpinski triangles.  When I did this, I produced the following video.

So how did I do this?  It’s not too complicated, but let’s take it one step at a time.  We’ve got to remember that before scaling, the fractal fit inside a triangle with vertices (0,0), (0,2), and (2,0).  I wanted to make it 450 pixels wide, so I scaled by a factor of 225.

This means that the scaled Sierpinski triangle fits inside a right triangle with vertices (0, 0), (0, 450), and (450, 0).  Using the usual formula, we see that the midpoint of the hypotenuse of this triangle has coordinates

\dfrac12\left((0,450)+(450,0)\right)=(225,225).

To make (225, 225) the new “origin,” we can just subtract 225 from the x– and y-coordinates of our points, like this:

[code screenshot]
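
The plotting line becomes something like this:

    [point(225 * x - 225, 225 - 225 * y) for (x, y) in points]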

Remember that the positive y-axis points down in Processing, which is why we use an expression like 225 – y rather than y – 225.  This produces the following video.

This isn’t quite what we want yet, but you’ll notice what’s happening.  The midpoint of the hypotenuse is now always at the upper left corner.  As the triangle rotates, most of it is outside the view screen.  But that’s not hard to fix.

All we have to do now is move the midpoint of the hypotenuse to the center of the screen.  We can easily do this using the translate function.  So here is the complete version of the sierpinski function, incorporating the translate function as well:

[code screenshot]
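
In outline, the drawing portion now reads like this (a sketch, in the correct order):

    translate(384, 312)   # move the rotation center to the center of the screen
    rotate(2 * PI * p)    # then rotate about that point
    [point(225 * x - 225, 225 - 225 * y) for (x, y) in points]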

So let’s briefly recap what we’ve learned.  Rotating an image is not difficult as long as you remember that the rotate function rotates about the point (0,0).  So first, we needed to decide what point in user space we wanted to rotate about — and we chose (225, 225) so the fractals would rotate around the midpoint of the hypotenuse of the enclosing right triangle.  This is indicated in how the x– and y-coordinates are changed in the point function.

Next, we needed to decide what point in screen space we wanted to rotate around.  The center of the screen seemed a natural choice, so we used (384, 312).  This is indicated in the arguments to the translate function.

And finally, we decided to have the triangles undergo one complete revolution, so that p = 0 corresponded to no rotation at all, and p = 1 corresponded to one complete revolution.  We accomplished this using linear interpolation, which was incorporated into the rotate function.

But most importantly — we made these changes in the correct order.  If you played around and switched the lines containing the translate and rotate functions, you’d get a different result.

It is worth remarking that it is possible to use the rotate function first.  But then the translate function would be much more complicated, since you would have to take into account where the point (384, 312) moved to.  And you’d have to review your trigonometry.  Here’s what the two lines would need to look like:

[code screenshot]
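
A sketch of that alternative; since the rotation happens first, the translation itself must be rotated back through the same angle:

    rotate(2 * PI * p)
    translate(384 * cos(2 * PI * p) + 312 * sin(2 * PI * p),
              -384 * sin(2 * PI * p) + 312 * cos(2 * PI * p))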

As you can see, there is a lot more involved here!  So when you’re thinking about designing algorithms to produce special effects, it’s worth thinking about the order in which you perform various tasks.  Often there is a way that is easier than all the others — but you don’t always hit upon it the first time you try.  That’s part of the adventure!

Next week we’ll look at a few more special effects you can incorporate into your movies.  Then we’ll look at actual movies made by students in my linear algebra class.  They’re really great!

Making Movies with Processing III

This week, we begin a discussion of creating movies consisting of animated fractals.  Last week’s post about the dot changing colors was at a beginning level as far as Processing goes.  This week’s post will be a little more involved, but will assume a knowledge of Iterated Function Systems.  I talked about IFS on Day034, Day035, and Day036.  Feel free to look back for a refresher….

Today, we’ll see how to create the following movie.  You’ll notice that both the beginning and final Sierpinski triangles are fractals discussed on Day034.

As a reminder, these are the three transformations which produce the initial Sierpinski triangle:

F_1\left(\begin{matrix}x\\y\end{matrix}\right)=\left[\begin{matrix}0.5&0\\0&0.5\end{matrix}\right] \left(\begin{matrix}x\\y\end{matrix}\right),

F_2\left(\begin{matrix}x\\y\end{matrix}\right)=\left[\begin{matrix}0.5&0\\0&0.5\end{matrix}\right] \left(\begin{matrix}x\\y\end{matrix}\right)+\left(\begin{matrix}1\\0\end{matrix}\right),

F_3\left(\begin{matrix}x\\y\end{matrix}\right)=\left[\begin{matrix}0.5&0\\0&0.5\end{matrix}\right] \left(\begin{matrix}x\\y\end{matrix}\right)+\left(\begin{matrix}0\\1\end{matrix}\right).

Also, recall that to get the modified Sierpinski triangle at the end of the video, all we did was change the first transformation to

F_1\left(\begin{matrix}x\\y\end{matrix}\right)=\left[\begin{matrix}0.25&0\\0&0.5\end{matrix}\right] \left(\begin{matrix}x\\y\end{matrix}\right).

We’ll see how to use linear interpolation to create the animation.  But first, let’s look at the Python code for creating a fractal using an iterated function system.

[code screenshot]
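
Here is a reconstructed sketch of that code (the variable names follow the discussion below):

    from random import randint

    def sierpinski(p, n):
        points = []
        last = (0, 0)
        for i in range(n):
            r = randint(1, 3)
            if r == 1:
                last = ((0.5 - 0.25 * p) * last[0], 0.5 * last[1])
            elif r == 2:
                last = (0.5 * last[0] + 1, 0.5 * last[1])
            else:
                last = (0.5 * last[0], 0.5 * last[1] + 1)
            points.append(last)
        background(0, 0, 0)
        stroke(255, 128, 0)    # small orange dots on a black background
        strokeWeight(2)
        [point(225 * x, height - 225 * y) for (x, y) in points]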

The parameter p is for the linear interpolation (which we’ll discuss later), and n is the number of points to plot.  First, import the library for generating random integers — since each transformation will be weighted equally, it’s simpler just to choose a random integer from 1, 2, and 3.  The variable points keeps track of all the points, while last keeps track of the most recently plotted point.  Recall from earlier posts that you only need the last point in order to get the next one.

Next, the for loop just creates new points, one at a time, and appends them to points.  Once an affine transformation is randomly chosen by selecting a randint in the range from 1 to 3, it is applied to the last point generated.  For the purpose of writing Python code, it’s easier to use the notation

F_2\left(\begin{matrix}x\\y\end{matrix}\right)=\left(\begin{matrix}0.5 \ast x+1\\0.5\ast y\end{matrix}\right)

rather than matrix notation.  In order to use vector and matrix notation, you’d need to indicate that (1,2) is a vector by writing

v = vector(1,2),

and similarly for matrices.  Since we’re doing some fairly simple calculations, just writing out the individual terms of the result is easier and requires less code.

Once the points are all created, it’s time to plot them.  You’ll recognize the background, stroke, and strokeWeight functions from last week.  Nothing fancy here, since we’re focusing on algorithms today.  Just a black background and small orange dots.

The last line plots the points, and is an example of what is called list comprehension in Python.  First, note that the iterated function system would create a fractal which would fit inside a triangle with vertices (0,2), (0,0), and (2,0).  So we need to suitably scale the fractal — in this case by a factor of 225 so it will be large enough to see.  Remember that units are in pixels in Processing.

Then we need to compensate for Processing’s coordinate system.  You’ll notice a similarity to what we did a few weeks ago.

What the last line does is essentially this:  for every point x in the list points, adjust the coordinates for screen space, and then plot x with the point function.  List comprehension is convenient because you don’t have to make a loop or other iterative construct — it’s done automatically for you.

Of course that doesn’t mean you never need a for loop.  It wouldn’t be easy to replace the for loop above with a list comprehension as each new point depends on the previous one.  But for plotting a bunch of points, for example, it doesn’t matter which one you plot first.

Now for the linear interpolation!  We want the first frame to be the usual Sierpinski triangle, and the last frame to be our modified triangle.  The only difference is that one of the constants in the first function changes from 0.5 to 0.25.

This is perfect for using linear interpolation.  We’d like this constant to be 0.5 when p = 0, and 0.25 when p = 1.  So we just need to create a linear function of p which passes through the points (0, 0.5) and (1, 0.25).  This isn’t hard to do; you should easily be able to get

0.5 - 0.25 \ast p.

The effect of using the parameter p in this way is to create a series of fractals, each one slightly different from the one before.  Since we’re taking 360 steps to vary from 0.5 to 0.25, there is very little difference from one fractal to the next, and so when strung together, the fractals make a convincing animation.

I should point out the dots look like they’re “dancing” because each time a fractal image is made, a different series of random affine transformations is chosen.  So while the points in each successive fractal will be close to each other, most of them will actually be different.

For completeness, here is the code which comes before the sierpinski function is defined.

[code screenshot]
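
A sketch of that setup (the screen size and the number of points are my stand-in values):

    def setup():
        size(768, 624)

    def draw():
        p = frameCount / 360.0             # 360 steps from p = 0 to p = 1
        sierpinski(p, 20000)
        saveFrame("frames/ifs####.png")    # write out each frame
        if frameCount == 360:
            noLoop()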

It should look pretty familiar from last week.  Similar setup, creating the parameter p, writing out the frames, etc.  You’ll find this a general type of setup which you can use over and over again.

So that’s all there is to it!  Now that you’ve got a basic grasp of Processing’s screen space and a few different ways to use linear interpolation, you can start making movies on your own.

Of course there are lots of cool effects you can add by using linear interpolation in more creative ways.  We’ll start to take a look at some of those next week!