Creating Animated GIFs in Processing

Last week at our Digital Art Club meeting, I mentioned that I had started making a few animated gifs using Processing.  Like this one.

GIF

(I’m not sure exactly why the circles look like they have such jagged edges — must have to do with the way WordPress uploads the gif.  But it was my first animated gif, so I thought I’d include it anyway.)

And, of course, my students wanted to learn how to make them.  A natural question.  So I thought I’d devote today’s post to showing you how to create a rather simple animated gif.

sample

Certainly not very exciting, but I wanted to use an example where I can include all the code and explain how it works.  For some truly amazing animated gifs, visit David Whyte’s Bees & Bombs page, or my friend Roger Antonsen’s Art page.

Here is the code that produces the moving circles.

gif3

I’ll assume you’ve done some work with Processing, so you understand the setup and draw functions, you know that background(0, 0, 0) sets the background color to black, etc.

The idea behind an animated gif which seems to be a continuous loop is to create a sequence of frames whose last frame is essentially the same as the first.  That way, when the gif keeps repeating, it will seem as though the image is continually moving.

One way to do this is with the “mod” function — in other words, using modular arithmetic.  Recall that taking 25 mod 4 means asking “What is the remainder after dividing 25 by 4?”  So if you take a sequence of numbers, such as

1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, …

and take that sequence mod 4, you end up with

1, 2, 3, 0, 1, 2, 3, 0, 1, 2, 3, 0, 1, 2, ….

Do you see it already?  Since my screen is 600 pixels wide, I take the x-coordinate of the centers of the circles mod 600 (that’s what the “% 600” means in Python).  This makes the image wrap around horizontally — once you hit 600, you’re actually back at 0 again.  In other words, once you go off the right edge of the screen, you re-enter the screen on the left.
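Here is the same idea in a few lines of plain Python (outside of Processing); the 600 matches my screen width, while the starting point and speed are just for illustration:

```python
WIDTH = 600  # screen width in pixels

def wrapped_x(start, speed, frame):
    # The mod operator makes the x-coordinate wrap around the screen:
    # once you hit 600, you're back at 0 again.
    return (start + speed * frame) % WIDTH

# Starting at x = 590 and moving 2 pixels per frame:
print([wrapped_x(590, 2, f) for f in range(8)])
# → [590, 592, 594, 596, 598, 0, 2, 4]
```

After five frames the coordinate passes 600 and re-enters on the left, exactly the wraparound effect described above.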

That’s the easy part….  The geometry is a little trickier.  The line

x = (75 + 150 * i + 2 * frameCount) % 600

requires a little more explanation.

First, I wanted the four circles to be 100 pixels in diameter.  This makes a total of 400 pixels for the width of the circles.  Now since I wanted the image to wrap around, I needed 50 pixels between each circle.  To begin with a centered image, that means I needed margins which are just 25 pixels.  Think about it — since the image is wrapping around, I have to add the 25-pixel margin on the right to the 25-pixel margin on the left to get 50 pixels between the rightmost and leftmost circles.

So the centers of the leftmost circles are 75 pixels in from the left edge — 25 pixels for the margin plus 50 pixels for the radius.  Since the circles are 100 pixels in diameter and there are 50 pixels between them, there are 150 pixels between the centers of adjacent circles.  That’s the “150 * i.”  Recall that in for loops, the counters begin at 0, so the first circle has its center just 75 pixels in from the left.

Now here’s where the timing comes in.  I chose 300 frames so that by showing approximately 30 frames per second (many standard frame-per-second rates are near 30 fps) the gif would cycle in about 10 seconds.  But cycling means moving 600 pixels in the x direction — so the “2 * frameCount” will actually give me a movement of 600 pixels to the right after 300 frames.  You’ve got to carefully calculate so your gif moves at just the speed you want it to.
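We can check this arithmetic outside of Processing with a short plain-Python sketch — the formula is the one above, while the function name and loop are mine:

```python
WIDTH = 600

def center_x(i, frame):
    # 75 = 25-pixel margin + 50-pixel radius; 150 = distance between
    # adjacent centers; 2 * frame moves everything 2 pixels per frame.
    return (75 + 150 * i + 2 * frame) % WIDTH

first_frame = [center_x(i, 0) for i in range(4)]
after_cycle = [center_x(i, 300) for i in range(4)]

print(first_frame)                 # → [75, 225, 375, 525]
print(first_frame == after_cycle)  # → True: frame 300 matches frame 0
```

Since 2 × 300 = 600 pixels is exactly one screen width, the 300th frame lines up with the first, which is what makes the loop seamless.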

To make displaying the colors easier, I put the R, G, and B values in lists.  Of course there are many other ways to do this — using a series of if/else statements, etc.
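Concretely, the idea is just to index the lists by the circle number; these particular RGB values are placeholders of my own, not the ones in the sketch:

```python
# One R, G, and B entry per circle (hypothetical palette).
R = [255, 200, 0, 64]
G = [0, 100, 200, 64]
B = [0, 50, 255, 192]

for i in range(4):
    # In the Processing sketch this would be fill(R[i], G[i], B[i])
    # just before drawing circle i.
    print((R[i], G[i], B[i]))
```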

One last comment:  according to my online research, .png files are better for making animated gifs, while .tif files (as I’ve also used in many previous posts) are better for making movies.  But .png files take longer to save, which is why your gif will look like it’s moving slowly when you use saveFrame, but will actually move faster once you make your gif.

So now we have our frames!  What’s next?  A few of my students mentioned using Giphy to make animated gifs, but I use GIMP.  It is open source, and can be downloaded for free here.  I’m a big fan of open source software, and I like that I can create gifs locally on my machine.

Once you’ve got GIMP open, select “Open as Layers…” from the File menu.  Then go to the folder with all your frames, select them all (using Ctrl-A or Cmd-A or whatever does the trick on your computer), and then click “Open.”  It may take a few minutes to open all the images, depending on how many you have.

Now all that’s left to do is export as an animated gif!  In the File menu, select “Export As…”, and make sure your filename ends in “.gif”.  Then click “Export.”  A dialog box should open up — be sure that the “As animation” and “Loop forever” boxes are checked so your animated gif actually cycles.  The only choice to make now is the delay between frames.  I chose 30 milliseconds, so my gif cycled in about 10 seconds.  Then click “Export.”  Exporting will also take a few seconds — again, depending on how many frames you have.

Unfortunately, I don’t think there’s a one-size-fits-all answer here.  The delay you choose depends on how big your gif actually is — the width of your screen in Processing — since that will determine how many frames you need to avoid the gif looking too “jerky.”  The smaller the time interval between frames, the more frames you’ll need, the more space those frames will take up, and the longer it will take to load your images into GIMP and export them to an animated gif.  Trade-offs.
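The trade-off boils down to two lines of arithmetic; the function below is my own summary, with the numbers from this post as the example:

```python
def gif_timing(width_px, step_px, delay_ms):
    # A full cycle must cover width_px pixels at step_px per frame;
    # the cycle time is the frame count times the per-frame delay.
    frames = width_px // step_px
    cycle_seconds = frames * delay_ms / 1000
    return frames, cycle_seconds

# 600-pixel screen, 2 pixels per frame, 30 ms delay in GIMP:
print(gif_timing(600, 2, 30))  # → (300, 9.0), about a 10-second cycle

# Halving the step smooths the motion but doubles the frame count
# (and the cycle time, unless you also halve the delay):
print(gif_timing(600, 1, 30))  # → (600, 18.0)
```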

So that’s all there is to it!  Not too complicated, though it did take a bit longer for me the first time.  But now you can benefit from my experience.  I’d love to see any animated gifs you make!

Mathematics and Digital Art: Update 1 (Fall 2017)

About a month has passed since beginning my third semester of Mathematics and Digital Art!  As with last semester, I plan on giving updates about once a month to discuss changes in the course and to showcase student work.

The main difference this semester (as I discussed a few weeks ago) was starting with Processing right from the beginning.  From my perspective, the course has run more smoothly than ever — and some of my students are already really getting into the coding aspect of creating digital art.

I do believe that beginning this way will pay off when we get to making movies.  Since we’ll already know the basics and understand the difference between user space and screen space, I can focus more on the interactive abilities of Processing — such as having features of the displayed image change by moving the mouse or pressing different keys on the keyboard.

The first two assignments were essentially the same as last semester.  We began with discussing color and the work of Josef Albers, emphasizing the fact that there is no such thing as “pure color” — colors are only perceived in relation to other colors.

Again, I was surprised by the diversity of the images students created.  Like last year, a few students experimented with a minimalist approach.  Here is what Alex generated using just a 2-by-2 grid of squares.

Day112Alex

I should point out that outlining the geometrical objects (using the strokeWeight function) is not “pure” Albers — you aren’t really seeing one color on top of another due to the black outlines.  But I did have students submit three pieces, insisting that one of the pieces was created only by changing the parameters in the original Albers routine, as shown in the following submission.

Day112Linh2

Here is Courtney’s submission on this theme, again created only by changing parameters to the drawing routine.

Day112Courtney


Most students — I think in part due to the fact that we started discussing code even earlier than previous semesters — really pushed the geometry far beyond the simple idea of rectangles within rectangles.

While toying with various geometrical motifs, Tera found something that reminded her of a rose.  This influenced her color palette:  reds and pinks for the flowers, with a green background, meant to suggest that the flowers were in a garden.

Day112Tera

Cissy explored the geometry as well.  Note how keeping the stroke weight at zero — so that the geometrical objects have no outline — creates a more subtle effect, especially since the randomness from the dominant color is not too pronounced.

Day112Cissy

The second art assignment, as in the previous semesters, was to explore creating textures using randomness in both color and shape.  As with the first assignment, I wanted students to submit one piece which involved only changing the parameters to a given function.  In this case, the function created a grid of gray circles, with both the intensity of the gray and the size of the circles having some degree of randomness.  I think it is important that students do some work within given constraints — it really challenges their creativity.  Here is Terry’s piece along these lines.

Day112Terry


The second piece was based on a function which created a grid of squares of the same size, but random colors.  Here, there were no constraints — students could modify the geometry in any way they wanted to.  Several were quite creative.  For example, Sepid approached this task by choosing both shape and color to create an image reminiscent of a stained glass window.

Day112Sepid

The third piece involved a color gradient (see my previous posts on Evaporation).  If you look back at these posts, you’ll recall that a color gradient can be created by increasing the randomness of the colors as you move from the top of the image to the bottom using a power function:  f(y)=y corresponds to a linear gradient, f(y)=y^2 corresponds to a quadratic gradient, etc.  Different effects can be created by varying the exponent.
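In code, the idea looks something like this (plain Python; 255 is the usual RGB ceiling, and the exact scaling in the Evaporation posts may differ):

```python
def max_noise(y, n):
    # Largest amount of randomness subtracted at height y, where y runs
    # from 0 (top of the image) to 1 (bottom), with exponent n.
    return 255 * y ** n

# A linear gradient (n = 1) halfway down the image:
print(max_noise(0.5, 1))  # → 127.5

# Near the top, the quadratic gradient (n = 2) subtracts much less
# than the linear one, so its texture develops lower in the image:
print(max_noise(0.1, 2) < max_noise(0.1, 1))  # → True
```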

As I was discussing this in class, one student asked what would happen if you used a negative exponent.  I had never thought about this before!  Jack used this idea in his piece, which he said reminds him of looking at a fire.

Day112Jack

It turns out that using a negative exponent creates a gradient beginning with black on the top.  Why is this?  As the image proceeds lower down the screen, the algorithm subtracts values from the RGB parameters proportional to y^n, where y=0 corresponds to the top of the image, and y=1 corresponds to the bottom of the image.

So if the exponent n is positive, there is very little randomness subtracted near the top.  But if the exponent is negative, a lot of randomness is subtracted, since now the numbers near 0 are in the denominator.  Because the RGB values only go up to 255, subtracting a large degree of randomness leaves nothing left — in other words, black.  Now some of the numbers near the top will end up being negative — but putting all negative numbers in a color specification in Processing does in fact give you black.
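A few lines of plain Python make this concrete; the clamping mimics how Processing treats out-of-range color values, and the base value of 200 is just an example:

```python
def max_noise(y, n):
    # Upper bound on the randomness subtracted at height y.
    return 255 * y ** n

def channel(base, amount):
    # Processing clamps color arguments to [0, 255]; anything
    # negative renders as 0 — i.e. black.
    return max(0, min(255, base - amount))

# Positive exponent: near the top (y = 0.01), almost nothing is
# subtracted, so the channel keeps essentially its full value.
print(channel(200, max_noise(0.01, 2)) > 199)  # → True

# Negative exponent: y ** n blows up near the top (here roughly
# 255 * 100 = 25500), driving the channel below zero — the black band.
print(channel(200, max_noise(0.01, -1)))  # → 0
```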

Another student also worked with yellows and reds to imitate fire in another way.  Instead of making small circles, he made larger circles with quite a bit of overlap, creating a rather different effect.

Day112Ali.png

And Rosalie found an interesting way to create stripes with the algorithm.  I had not seen this effect before.

Day112Rosalie

So that’s it for the first update of the Fall 2017 installment of Mathematics and Digital Art.  As you can see, my students are already being quite creative.  I look forward to seeing their work develop as the semester progresses!


Bay Area Mathematical Artists, I

Yesterday was the first meeting of the Bay Area Mathematical Artists at the University of San Francisco!

It all began one balmy Friday evening in Waterloo, Ontario, Canada at the Bridges 2017 conference….  The Mendlers and I hosted a potluck dinner at our AirBnB, and we realized how many of us were from the Bay Area.  In fact, nine of us were actually on the same flight from San Francisco to Toronto for the conference!

Bridges participants really do form a community.  There is a spirit of sharing and mutual appreciation for each other’s work.  We really do cherish those few days each year when we can all come together.  The only drawback is that Bridges comes around just once a year.

So throughout the evening, between chowing down on grilled fare and sipping a glass of beer or wine, the idea of informally gathering now and then kept cropping up.

But as we all know, ideas do not automatically become reality.  They have a tendency to wither if not watered and fertilized….so I decided to take up gardening.

I had the advantage of being associated with a University, so I could arrange a meeting space.  Location was also somewhat convenient — some of us were to the northeast in Oakland and Berkeley, and others were to the southwest in Santa Clara and Scotts Valley.  It might be nice to move around occasionally so not everyone has to drive as far all the time.  But since the meetings are on Saturdays, at least traffic is not so much of a bother.

And then came the emails!  Yes, lots of them….  The main decision was the format.  I thought informal was best — I sent out a call for speakers, and put them on the docket on a first-come, first-served basis.  I wanted to take away the stress of competing for time; if there were more speakers than we had time for, we’d just start where we left off the last time.

The other reason for this is that I wanted to encourage students from my Mathematics and Digital Art class, as well as members of the newly formed Digital Art Club, to participate as well.  I think it is important to let mathematical artists of all levels have a place to share ideas and get feedback on their work.

So for our inaugural meeting, we had three speakers:  Chamberlain Fong, Karl Schaffer,  and Dan Bach.

Chamberlain’s talk was entitled The Conformal Hyperbolic Square and Its Ilk.  He discussed different ways to transform circular hyperbolic tilings (particularly those of Escher) to square images.  Chamberlain did give a version of this talk at Bridges in 2016, but included more recent results as well.  For more information, you can contact him at chamberlain@yahoo.com.

001title


Karl Schaffer’s talk was entitled Dance’s Center of Attention Mass.  Inspired by Joseph Thie’s Rhythm and Dance Mathematics and Kasia Williams’ idea of “Center of Attention Mass,” Karl is interested in graphically showing where the center of attention is by weighting the position of each dancer on stage.  He went so far as to contact Thie — now in his 80s — and they are actively collaborating.

Apoll. Circles.png 

Karl is also giving the lecture/demonstration Calculated Movements at the Montalvo Art Center next March.  There is more information here.  You can reach Karl at karl_schaffer@yahoo.com.

Finally, Dan Bach’s talk was entitled 3D Math Art and iBooks Author.  Dan is keen on creating highly interactive math books which engage students of all ages.  He gave a practical talk demonstrating the software he uses, including examples of converting graphics to various different formats since it is not always a simple task to take a 3D image created by one software package and import it into another.  You can reach Dan at dan@dansmath.com.

DanBachSlide1


After the talks — which included ample room for Q&A — we had a brief discussion on the future of the group.  I wanted to make it clear that while I am willing to keep things going in the current format, it is really up to the group to decide how to run our meetings.  We opted to keep things going the same way for next month — but suggestions for the future included workshops, or perhaps themed sessions, like a series of talks on polyhedra.  Participants were encouraged to think of other ways to use our time together as a topic of discussion for the next meeting.  Keeping it informal means lessening the pressure of submitting talks/papers for conferences, etc.

Then dinner!  Most of us were available for a meal afterwards.  There were two nice options nearby — a cafe with sandwiches and salads, and an Indian restaurant with a buffet.  I went with the group who preferred Indian food — and truly, a good time was had by all!  We left for dinner at about 5:30, and I finally had to break things up shortly before 8:00, since some of us had a ways to drive home.  We could clearly have kept talking for quite a while….

So our first meeting of the (tentatively named) Bay Area Mathematical Artists was a success!  There were a total of 15 of us present, including three students from USF — a very respectable number for a first time event.  We plan to meet approximately monthly, modulo the University schedule of classes and holidays.

I’ll post summaries each month of our meetings, including a brief synopsis of the talks, workshop(s), or whatever other form the meetings take.  Feel free to contact the speakers for more information about the talks they gave this weekend, and don’t hesitate to spread the word to others who might be interested!


Using Processing for the First Time

While I have discussed how to code in Processing in several previous posts, I realized I have not written about getting Processing working on your own computer.  Naturally I tell students how to do this in my Mathematics and Digital Art course.  But now that I have started a Digital Art Club at the University of San Francisco, it’s worth having the instructions readily accessible.

The file I will discuss may be used to create an image based on the work of Josef Albers, as shown below.

0001

See Day002 of my blog,  Josef Albers and Interaction of Color, for more about how color is used in creating this piece.

As you would expect, the first step is to download Processing.  You can do that here.  It may take a few moments, so be patient.

The default language used in Processing is Java.  I won’t go into details of why I’m not a fan of Java — so I use Python mode.  When you open Processing, you’ll see a blank document like this:

Day110Screen1

Note the “Java” in the upper right corner.  Click on that button, and you should see a menu with the option “Add Mode…”  Select this option, and then you should see a variety of choices — select the one for Python and click “Install.”  This will definitely take a few minutes to download, so again, be patient.

Now you’re ready to go!  Next, find some Processing code written in Python (from my website, or any other example you want to play around with).  For convenience, here is the one I’ll be talking about today:  Day03JosefAlbers.pyde.  Note that it is an Open Office document; WordPress doesn’t let you upload a .pyde file.  So just open this document, copy, and paste into the blank sketch.  Be aware that indentation is important in Python, since it separates blocks of code.  When I copied and pasted the code from the Open Office document, it worked just fine.  But in case something goes awry, I always use four spaces for successive indents.

Now run the sketch (the button with the triangle pointing to the right).  You will be asked to create a new folder; just say yes.  When Processing runs, it often creates additional files (as we’ll see in a moment), and so keeping them all in one folder is helpful.  You should also see the image I showed above; that is the default image created by this Processing program.

Incidentally, the button with the square in the middle stops running your sketch.  Sometimes Processing runs into a glitch or crashes, so stopping and restarting your sketch is sometimes necessary.  (I still haven’t figured out why it crashes at random times.)

Next, go to the folder that you just created.  You should see a directory called “frames.”  Inside, you should see some copies of the image.

Day110Screen2

Inside the “draw” function, there is a function call to “saveFrame,” which saves copies of the frames you make.  You can call the folder whatever you want; this is convenient, since you might want to make two different images with the same program.  Just change the folder name you save the images to.

A word about the syntax.  The “####” means the frames will be numbered with four digits, as in 0001.png, 0002.png, etc.  If you need more than 10,000 frames (likely you won’t when first starting), just add more hashtags.  The “.png” is the type of file.  You can use “.tif” as well.  I use “.tif” for making movies, and “.png” for making animated gifs.  There are other file types as well; see the documentation on saveFrame for more details.
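For the curious, the zero-padded numbering that “####” produces is easy to mimic in plain Python:

```python
def frame_name(n, digits=4):
    # saveFrame("frames/####.png") pads the frame number to four
    # digits; str.zfill does the same padding here.
    return "frames/" + str(n).zfill(digits) + ".png"

print(frame_name(1))              # → frames/0001.png
print(frame_name(123))            # → frames/0123.png
print(frame_name(123, digits=5))  # → frames/00123.png
```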

Now let’s take a look at making your own image using this program.

Day110Screen3

If you notice, there are lines labelled “CHANGE 1” to “CHANGE 6” in the setup and draw functions.  These are the only lines you need to change in order to design your own piece.  You may tweak the other code later if you like.  But especially for beginning programmers, I like to make the first examples very user-friendly.

So let me talk you through changing these lines.  I won’t bother talking about the other code right now — that would take some time!  But you don’t need to know what’s under the hood in order to create some interesting artwork….

CHANGE 1:  The hashtags here, by the way, indicate a comment in your code:  when your program runs, anything after a hashtag is ignored.  This makes it easy to give hints and provide instructions in a program (like telling you what lines to change).  I created a window 800 x 600 pixels; you can make it any size you want by changing those numbers. The “P2D” just means you’re working with a two-dimensional geometry.  You can work in 3D in Processing, but we won’t discuss that today.

CHANGE 2:  The “sqSide” variable indicates how big the squares are, in units of pixels.  The default unit in Processing is always pixels, so if you want to use another geometry (like a Cartesian coordinate system), you have to convert from one coordinate system to another.  I do all this for you in the code; all you need to do is say how large each square is.  And if you didn’t go back and read the Josef Albers piece, by “square,” I mean a unit like this:

Day002Square

CHANGE 3, CHANGE 4:  The variables “sqRows” and “sqCols” indicate, as you would expect, how many rows and columns are in the final image.  Since I have 15 rows and the squares are 30 pixels on a side, the height of the image is 450 pixels.  Since my window is 600 pixels in height, this means there are margins of 75 pixels on the top and bottom.  If your image is too tall (or too wide) for the screen, it will be cropped to fit the screen.  Processing will not automatically resize — once you set the size of the screen, it’s fixed.
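The margin arithmetic can be checked directly.  The variable names below follow the sketch; the offset formula is my reconstruction of what the program computes:

```python
sqSide, sqRows = 30, 15
screen_height = 600

image_height = sqRows * sqSide                 # 15 rows of 30-pixel squares
yoffset = (screen_height - image_height) // 2  # leftover split top/bottom

print(image_height)  # → 450
print(yoffset)       # → 75
```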

CHANGE 5:  The “background” function call sets the color of the background, using the usual RGB values from 0-255.

CHANGE 6:  The first three numbers are the RGB values of the central rectangles in a square unit.  The next three numbers indicate how the background colors of the surrounding rectangles change.  (I won’t go into that here, since I explain it in detail in the post on Josef Albers mentioned above.  The only difference is that in that post, I use RGB values from 0-1, but in the Processing code here, I use values from 0-255.  The underlying concept is the same.)

The last (and seventh) number is the random number seed.  Why is this important?  If you don’t say what the random number seed is (the code does involve choosing random numbers), every time you run the code you will get a different image — the computer just keeps generating more (and different) random numbers in a sequence.  So if you find an image you really like and don’t know what the seed is, you’ll likely never be able to reproduce it again!  And if you don’t quite like the texture created by using one random seed, you can try another.  It doesn’t matter so much if you have many rows and columns, but if you try a more minimalist approach with fewer and larger squares, the random number seed makes a big difference.
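The standard Python random module shows the effect (Processing’s own random functions behave the same way once you fix the seed with randomSeed):

```python
import random

def gray_values(seed, count=5):
    # The same seed always reproduces the same "random" sequence.
    random.seed(seed)
    return [random.randint(0, 255) for _ in range(count)]

print(gray_values(42) == gray_values(42))  # → True: reproducible
print(gray_values(42) == gray_values(99))  # → False: a different texture
```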

OK, now you’re on your own….  This should be enough to get you started, and will hopefully inspire you to learn a lot more about Python and Processing!


Beguiling Games I: Nic-Nac-No

It has been some time since I’ve posted any puzzles or games.  In going through some boxes of folders in my office, I came across some fun puzzles I created for a class whose focus was proofs and written solutions to problems.  I’d like to share one this week.

For the assignments, I sometimes wrote stories around the puzzles.  So here is one such story.  The date on the assignment, if you’re interested in such things, is January 16, 2003.  (I assume that you are familiar with the game Tic-Tac-Toe, as well as the fact that if both players play intelligently, the game ends in a draw.)  I called the game “Nic-Nac-No.”

Betty and Clyde, after their favorite breakfast of blueberry pancakes one sunny Saturday morning, began a Tic-Tac-Toe tournament.  They were reasonably bright children — taking turns going first, the initial 73 games ended in a draw.

“Just once, Clyde, couldn’t you try putting your O first on a side instead of in a corner?” prodded Betty.  “That way, it wouldn’t be the same boring game every time.”

“Well, it’s my turn to go first this time,” said Clyde, putting an X in the center.  “OK, now you show me how you want me to play so I can do it that way next time.”

“Oh, shut up, Clyde,” sighed Betty, putting her O in the upper left corner.  And so game #74 ended in a draw.

“Hey, I’ve got an idea!” exclaimed Clyde.  “Let’s make up different rules.  How about this:  the first one who gets three-in-a-row loses.  Whaddya think, Betty?”

“That’s so random, Clyde,” said Betty, secretly excited by the suggestion.

“No, it’s not.  And besides,” reasoned Clyde, “it’s got to be better than playing another game of Tic-Tac-Toe since you won’t ever try anything different.”

“OK, potato brain.  Let’s try.  You go first.”

“Great!” exclaimed Clyde — until he realized Betty was trying to outmaneuver him: if you’re trying to avoid three-in-a-row, the fewer squares you own, the better.

Assuming Betty and Clyde play optimally, will the game be a win for Betty, a win for Clyde, or a draw?

I should remark that the idea of a misère game — where you turn the winning condition into a losing condition — is not original with me.  But most students have not considered this type of game, so misère versions of games often make for engaging problems.

Before I discuss the solution, you might want to try it out for yourself!  There are likely many strategies possible to produce the desired result; I’ll just show you the ones I thought were the most straightforward.

In my solution, Betty and Clyde use different strategies, but the end result is the same:  the game must end in a draw.  Let’s look at what strategies they might use.

It turns out that if Clyde starts in the center, he can use a strategy where he does not lose.  It’s fairly simple:  always play opposite Betty.  Thus, when Betty plays a corner/side, Clyde takes the opposite corner/side.

Why can’t Clyde lose?  First, it should be clear that Clyde can never make a three-in-a-row that passes through the center.  Since he always plays opposite Betty, any line of three passing through the center must contain two X’s and one O (recall Clyde started with an X in the center), and so is not a three-in-a-row.

What about a three-in-a-row along a side?  Since Clyde plays opposite Betty, if he ever placed an X to make three-in-a-row along a side, that would mean Betty already had three O’s in a row on the opposite side, and would have already lost!  So it’s impossible for Clyde to lose this way.

Since any three-in-a-row must pass through the center or be along a side, this means that Clyde — if he plays intelligently — can never lose Nic-Nac-No.

Now let’s look at a non-losing strategy for Betty.  There is no guarantee she will be able to take the center square on her first move, so we’ve got to consider something different.  And we can’t just rely on playing opposite Clyde, since there is no opposite move if he takes the center first.  Moreover, it may be the case that Clyde uses some other strategy than the one I mentioned, so we can’t even assume that he does take the center on his first move!

To see a strategy for Betty, consider the following diagram:

Day109NicNanNo

Betty’s strategy is simple:  place an O in one of the squares marked A, one marked B, one marked C, and one marked D.

Note that this is always possible.  Even if Clyde does not play in the center on his first move, he can only occupy one square labelled A, B, C, or D.  Then Betty places her O on the other square with the same letter.  If Clyde does begin in the center, then Betty has her choice of first move.

Since it is always possible — and since Betty only has four moves — these comprise all of Betty’s moves.  But note that since Betty never has an O on two of the same letter, she can never get three-in-a-row on a side.  Further, since Betty’s strategy never involves a move in the center, she can never get three-in-a-row in a line going through the center square.  This means that Betty can never lose!

So if the two players play their best games, then Nic-Nac-No ends up in a draw.  And while these strategies do indeed work, I would welcome someone to find simpler strategies.
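If you’d like the computer to confirm the result, a brute-force game-tree search — my own few lines of Python, not part of the original assignment — verifies that Nic-Nac-No is a draw with optimal play:

```python
from functools import lru_cache

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
         (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]

def completes_line(board, mark):
    return any(all(board[i] == mark for i in line) for line in LINES)

@lru_cache(maxsize=None)
def value(board, turn):
    # Outcome with optimal play, from X's perspective:
    # +1 = X wins, 0 = draw, -1 = O wins.  In Nic-Nac-No,
    # completing three-in-a-row LOSES the game for `turn`.
    other = 'O' if turn == 'X' else 'X'
    outcomes = []
    for i in range(9):
        if board[i] == '.':
            nb = board[:i] + turn + board[i + 1:]
            if completes_line(nb, turn):
                outcomes.append(-1 if turn == 'X' else 1)  # turn just lost
            elif '.' not in nb:
                outcomes.append(0)  # full board, no three-in-a-row: draw
            else:
                outcomes.append(value(nb, other))
    return max(outcomes) if turn == 'X' else min(outcomes)

print(value('.' * 9, 'X'))  # → 0, i.e. a draw with optimal play
```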

I’ll leave you with another version of Tic-Tac-Toe to think about.  Here are the rules:  if during the game either player gets three-in-a-row, then X wins.  If at the end, no one has three-in-a-row, then O wins.  Does X have a winning strategy?  Does O?  Note that in this game, there cannot be a draw!  I’ll give you the answer in my next installment of Beguiling Games….

To Processing I

I made a decision last week to abandon using Sage (now called CoCalc) as a platform in my Mathematics and Digital Art class.  It was not an easy decision to make, as there are some nice features (which I’ll get to in a moment).  But now any effective use of Sage comes with a cost — the free version uses servers, and you are given this pleasant message:  “This project runs on a free server (which may be unavailable during peak hours)….”

This means that to guarantee access to CoCalc, you need a subscription.  It would not be prohibitively expensive for my class — but as I am committed to being open source, I am reluctant to continue putting sample code on my web page which requires a cost in order to use.  Yes, there is the free version — as long as the server is available….

When I asked my students last semester about moving exclusively to Processing, they responded with comments to the effect that using Sage was a gentle introduction to coding, and that I should stick with it.  I fully intended to do this, and got started preparing for the first few days of classes.  I opened my Sage worksheet, and waited for it to load.  And waited….

That’s when I began thinking — I did have experiences last year where the class virtually came to a halt because Sage was too slow.  It’s a problem I no longer wanted to have.

So now I’m going to Processing right from the beginning.  But why was I reluctant in the past?

The issue is that of user space versus screen space.  (See Making Movies with Processing I for a discussion of these concepts.)  With Sage, students could work in user space — the usual Cartesian coordinate system.  And the programming was particularly easy, since to create geometrical objects, all you needed to do was specify the vertices of a polygon.

I felt this issue was important.  Recall the success I felt students achieved by being able to alter the geometry of the rectangles in the assignment about Josef Albers and color.  (See the post on Josef Albers and Interaction of Color for a refresher on the Albers assignment.)

peyton
Peyton’s piece on Josef Albers.

Most students experimented significantly with the geometry, so I wanted to make that feature readily accessible.  It was easy in Sage, the essential code looking something like this:

Screen Shot 2017-08-26 at 11.08.35 AM

What is happening here is that the base piece is essentially an array of rectangles within unit squares, with lower-left corners of the squares at coordinates (i, j).  So it was easy for students to alter the polygons rendered by using graph paper to sketch some other polygon, approximate its coordinates, and then enter these coordinates into the nested loops.
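In case the screenshot doesn’t reproduce well, here is the gist of those nested loops in plain Python.  The function names are mine, purely for illustration; in Sage, each vertex list would be handed to the polygon function to be drawn:

```python
# Illustrative sketch (not the actual Sage code from the screenshot):
# one rectangle inside each unit square, with the square's lower-left
# corner at (i, j).

def rectangle_vertices(i, j, w=0.8, h=0.8):
    """Corners of a rectangle sitting inside the unit square at (i, j)."""
    return [(i, j), (i + w, j), (i + w, j + h), (i, j + h)]

def make_grid(rows, cols):
    """One vertex list per unit square; in Sage each would become a polygon."""
    polygons = []
    for i in range(cols):          # i indexes columns (x-direction)
        for j in range(rows):      # j indexes rows (y-direction)
            polygons.append(rectangle_vertices(i, j))
    return polygons
```

A student could then sketch a different shape on graph paper and replace the vertex list in rectangle_vertices with their own coordinates.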

Then Sage rendered whatever image you created on the screen, automatically sizing the image for you.

But here is the problem:  Processing doesn’t render images this way.  When you specify a polygon, the coordinates must be in screen space, whose units are pixels.  The pedagogical issue is this:  jumping into screen space at the very beginning of the semester, when we’re still learning about colors and hex codes, is too big a leap.  I want the first assignment to focus on getting used to coding and thinking about color, not on changing coordinate systems.

Moreover, creating polygons in Processing involves methods — object-oriented programming.  Another leap for students brand new to coding.

The solution?  I had to create a function in Processing which essentially mimicked the “polygon” function used in Sage.  In addition, I wanted to make the editing process easy for my students, since they needed to input more information this time.

Day108pyde1

In Processing — in addition to the number of rows and columns — students must specify the screen size and the length of the sides of the squares, both in pixels.  The margins — xoffset and yoffset — are automatically calculated.
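The margin calculation is simple enough to sketch in a few lines.  The formula below — centering the grid in the window — is my assumption about how xoffset and yoffset might be computed, not necessarily the code from the screenshot:

```python
# Center a rows-by-cols grid of squares (each `side` pixels wide) in the
# window.  The names xoffset/yoffset follow the post; the centering formula
# itself is an assumption.

def offsets(screen_width, screen_height, rows, cols, side):
    xoffset = (screen_width - cols * side) / 2.0
    yoffset = (screen_height - rows * side) / 2.0
    return xoffset, yoffset

# e.g. a 600 x 600 window with a 4 x 4 grid of 100-pixel squares
# leaves a 100-pixel margin on every side.
```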

Here is the structure of the revised nested for loops:

Day108pyde2

Of course there are many more function calls in the loops — stroke weights, additional fill colors and polygons, etc.  But it looks very similar to the loop formerly written in Sage — even a bit simpler, since I moved the translations to arguments (instead of needing to include them in each vertex coordinate) and moved all the output routines to the “myshape” function.

Again, the reason for this is that creating arbitrary shapes involves object-oriented concepts.  See this Processing documentation page for more details.

Here is what the myshape function looks like:

Day108pyde3

The structure is not complicated.  Start by invoking “createShape,” and then use the “beginShape” method.  Adding vertices to a shape involves using the “vertex” method, once for each vertex.  This seems a bit cumbersome to me; I am new to using shapes in Processing, so I’m hoping to learn more.  I had been able to get by with just creating points, lines, rectangles, and circles so far — but that doesn’t give students as much room to be creative as including arbitrary shapes does.
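In case the screenshot is hard to read, here is roughly what such a function might look like in Processing’s Python mode.  The parameter names are mine, not necessarily the author’s; createShape, beginShape, vertex, endShape, and shape are the actual Processing calls described above:

```python
# A sketch of a myshape-style helper in Processing (Python mode).
# This must run inside a Processing sketch; parameter names are illustrative.
def myshape(vertices, xoffset, yoffset, side):
    s = createShape()
    s.beginShape()
    for (x, y) in vertices:
        # scale user-space coordinates to pixels and shift by the margins
        s.vertex(xoffset + side * x, yoffset + side * y)
    s.endShape(CLOSE)   # close the polygon
    shape(s)            # draw it to the screen
```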

I should point out that shapes can have subshapes (children) and various other attributes.  There is also a “fill” method for shapes, but I have students use the fill function call in the for loop to avoid having too many arguments to myshape.  I also think it helps in understanding the logical structure of Processing — the order in which function calls are invoked matters.  So you first set your fill color, and then when you specify the vertices of your polygon, the most recently defined fill color is used.  That subtlety would get lost if I absorbed the fill into the myshape function.

As in previous semesters, I’ll let you know how it goes!  Like last semester, I’ll give updates approximately monthly, since the content was specified in detail in the first semester of the course (see Section L. of 100 Posts! for a complete listing of posts about the Mathematics and Digital Art course).

Throughout the semester, I’ll be continuously moving code from Sage to Processing.  It might not always warrant a post, but if I come across something interesting, I’ll certainly let you know!

On Coding XI: Computer Graphics III, POV-Ray

It has been a while since the last installment of On Coding.  I realized there is still more to say about computer graphics; I mentioned the graphics package POV-Ray briefly in On Coding IX, but feel it deserves more than a mere mention.

I’d say I began using POV-Ray in the late 1990s, though I can’t be more precise.  This is one of the first images I recall creating, and the comments in the file reveal its creation date to be 19 September 1997.

petrie

Not very sophisticated, but I was just trying to get POV-Ray to work, as I (very vaguely) remember.  Since then, I’ve created some more polished images, like the polyhedron shown below.  I’ll talk more about polyhedra in a moment….

XT-18-19-28

First, a very brief introduction.  POV-Ray stands for Persistence of Vision Raytracer.  Ray tracing is a technique used in computer graphics to create amazingly realistic images.  Essentially, the color of each pixel in the image is determined by sending an imaginary light ray from the camera (as in the image below) through that pixel and tracing where it ends up in the scene.
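To make that concrete, here is the geometric kernel of the idea in a few lines of Python: finding where a single ray first hits a sphere.  This is only a sketch of the pixel-by-pixel computation — a real ray tracer like POV-Ray adds lights, materials, reflections, and much more:

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Distance along the (unit-length) direction to the nearest hit, or None."""
    # work relative to the sphere's center
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    dx, dy, dz = direction
    # coefficients of the quadratic |origin + t*direction - center|^2 = r^2
    b = 2 * (ox * dx + oy * dy + oz * dz)
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4 * c          # direction is unit length, so a = 1
    if disc < 0:
        return None               # the ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / 2
    return t if t > 0 else None   # only hits in front of the camera count

# A ray along the z-axis hits a unit sphere centered 5 units away at t = 4:
# ray_sphere_hit((0, 0, 0), (0, 0, 1), (0, 0, 5), 1) -> 4.0
```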

800px-Ray_trace_diagram.svg
Image by Henrik, Wikipedia Commons.

It is possible to create various effects by using different light sources, having objects reflect light, etc.  You are welcome to read the Wikipedia page on ray tracing for more details on how ray tracing actually works; my emphasis here will be on the code (of course!) I wrote to create various three-dimensional images.  And while the images you can produce using a ray tracing program are quite stunning at times, there is a trade-off: it takes longer to generate images because the color of each pixel in the image must be individually calculated.  But I do think the results are worth the wait!

My interest in POV-Ray stemmed from wanting to render polyhedra.  The images I found online indicated that you could create images substantially more sophisticated than those created with Mathematica.  And better yet, POV-Ray was (and still is!) open source.  That means all I needed to do was download the program, and start coding….

R_3-5_Dual

POV-Ray is a procedural programming language in its own right.  So to give you an idea of what a typical program looks like, I’ll show you the code which makes the following polyhedron:

tRp_52-3-5

Here’s how it begins.

POV4

The #include directive indicates that the given file is to be imported.  POV-Ray has many predefined colors and literally hundreds of different textures:  glass, wood, stone, gold and other metals, etc.  You just include those files you actually need.  To give you an idea, here is the logo I created for Dodecahedron Day using silver and stone textures.

logo3

Next are the global settings.  There are many global settings which do far more than I understand…but I just used the two which were included in the file I modeled my code after.  Of course you can play with all the different settings (there is extensive online documentation) and see what effects they have on your final image, but I didn’t feel the need to.  I was getting results I liked, so I didn’t go any further down this path.

Then you set the camera.  This is fairly self-explanatory — position the camera in space, point it, and set the viewing angle.  A fair bit of tweaking is necessary here.  Because of the way the image is rendered, you can’t zoom in, rotate, or otherwise interact with the created image.  So there is a bit of trial and error in these settings.

Lighting comes next.  You can use point light sources in any color, or have grids of light sources.  The online documentation suggested the area lighting here (a 5-by-5 grid of lights spanning 5 units on the x-axis and 5 units on the z-axis), and it worked well for me.  Since I wanted the contrast of the colors of the faces of the polyhedra on a black background, I needed a little more light than just a single point source.  You can read all about the “adaptive” and “jitter” parameters here, but I just used the defaults suggested in the documentation and my images came out fine.

There are spotlights and cylindrical lighting sources as well, and all may be given various parameters.  Lighting in Mathematica is quite a bit simpler with fewer options, so lighting was one of the most challenging features of POV-Ray to learn how to use effectively.
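Putting the setup together (includes, global settings, camera, and an area light), a scene header looks roughly like the sketch below.  The specific values are illustrative stand-ins, not the ones from my screenshots:

```pov
// Illustrative scene header -- the values are stand-ins, not the author's.

#include "colors.inc"      // predefined colors (White, Yellow, ...)
#include "textures.inc"    // predefined textures

global_settings {
  assumed_gamma 1.0        // two typical settings; which two the post's
  max_trace_level 5        // template file used isn't shown
}

camera {
  location <0, 2, -8>      // position the camera in space
  look_at  <0, 0, 0>       // point it
  angle 40                 // set the viewing angle
}

light_source {
  <5, 10, -5> color White
  // a 5-by-5 grid of lights spanning 5 units on the x- and z-axes
  area_light <5, 0, 0>, <0, 0, 5>, 5, 5
  adaptive 1
  jitter
}
```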

So that’s the basic setup.  Now for the geometry!  I can’t go into all the details here, but I can give you the gist of how I proceeded.

POV2

The nested while loops produce the 60 yellow faces of the polyhedron.  Because of the high icosahedral symmetry of the polyhedron, once you specify one yellow face, you can use matrices to move that face around and get the 59 others.

To get the 60 matrices, you can take each of 12 matrices representing the symmetries of a tetrahedron (tetra[Count]), and multiply each by the five matrices representing the rotations around the vertices of an icosahedron (fiverot[Count2]).  There is a lot of linear algebra going on here; the matrices are defined in the file “Mico.inc”, which is included.

The vertices of the yellow faces are given in “VF[16]”, where the “VF” stands for “vertex figure.”  These vertex figures are imported from the file “VFico.inc”.  Lots of geometry going on here, but what’s important is this:  the polygon with vertices listed in VF[16] is successively transformed by two symmetry matrices, and then the result is defined to be an “object” in our scene.  So the nested loops place 60 objects (yellow triangles) in the scene.  The red pentagrams are created similarly.
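In outline, the nested loops look something like the sketch below.  The arrays tetra and fiverot come from “Mico.inc”; the stand-in vertices A, B, C take the place of the data in VF[16], and I am assuming here that the matrices are stored as POV-Ray transforms:

```pov
// Outline only -- the vertices and transform storage are assumptions.
#declare A = <1, 0, 0>;   // stand-ins for the vertices in VF[16]
#declare B = <0, 1, 0>;
#declare C = <0, 0, 1>;

#declare Count = 0;
#while (Count < 12)                 // 12 tetrahedral symmetries...
  #declare Count2 = 0;
  #while (Count2 < 5)               // ...times 5 icosahedral rotations = 60 faces
    object {
      polygon { 4, A, B, C, A }     // first and last vertices repeated
      transform fiverot[Count2]     // apply the two symmetry matrices in turn
      transform tetra[Count]
      pigment { color Yellow }
    }
    #declare Count2 = Count2 + 1;
  #end
  #declare Count = Count + 1;
#end
```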

POV3

Finally, the glass tabletop!  I wanted the effect to be subtle, and found that the built-in texture “T_Glass2” did the trick — I created a square (note the first and last vertices are the same; a quirk of POV-Ray) of glass for the polyhedron to sit on.
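A minimal sketch of such a tabletop, with illustrative coordinates:

```pov
#include "glass.inc"       // defines the built-in texture T_Glass2

// A square of glass in the y = 0 plane for the polyhedron to sit on.
// Note the first and last vertices are the same -- the POV-Ray quirk
// mentioned above.
polygon {
  5,
  <-5, 0, -5>, <5, 0, -5>, <5, 0, 5>, <-5, 0, 5>, <-5, 0, -5>
  texture { T_Glass2 }
}
```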

POV-Ray does the rest!  The overall idea is this:  put whatever objects you want in a scene.  Then set up your camera, adjust the lighting, and let POV-Ray render the resulting image.

petrie10

Of course this introduction is necessarily brief — I just wanted to give you the flavor of coding with POV-Ray.  There are lots of examples online and within the documentation, so it should not be too difficult to get started if you want to experiment yourself!