Making Movies with Processing VI

The last post in this series will address how I used Processing in the classroom this past semester.  Although my experience has been limited so far, the response has been great!

In the Spring 2016 semester, I used Processing in my Linear Algebra and Probability course, which is a four-credit course specifically for CS majors.  The idea is to learn linear algebra first from a geometrical perspective, and then say that a matrix is simply a representation of a geometrical transformation.

I make it a point to generalize and include affine transformations as well, as they are the building blocks for creating fractals using iterated function systems (IFS).  The more intuition students have about affine transformations, the easier it is for them to work with IFS.  (To learn more about IFS, see Day034, Day035, and Day036 of my blog.)

As you may have noticed, many of the movies I’ve discussed deal with IFS.  Within the first three weeks of the class (here is the link to the course website), I assign a project to create a single fractal image using an IFS.  I use the Sage platform as it supports Python and is open source, and all of my students were supposed to have taken a Python course already.  All links, prompts, and handouts for this project may be found on Days 5 and 6 of the course website.

This sets up the class for a Processing project later on in the semester.  The timing is not critical; as it turns out, the project was due during the Probability section of the course (the last third), since most students had other large projects due a bit earlier.  A sample movie provided to the students may be found on Day 22 of the course website, and the project prompt may be found on Day 33.

The basic idea was to use a parameter to vary the numbers in a series of affine transformations used to create a fractal using an IFS.  As the parameter varied, the fractal image varied as well.  This allowed for an animation from an initial fractal image to a final fractal image.

My grading rubric was fairly simple:  each feature a student added beyond my bare-bones movie bumped their grade up.  Roughly speaking, four additional features resulted in an A for the assignment.  These might include use of color, background image, music, text, etc.

I was inspired by how eagerly students took on this assignment!  They really got into it.  They appreciated using mathematics — specifically linear algebra — in a hands-on application.  I do feel that it is important for CS majors to understand mathematics from an applied viewpoint as well as a theoretical one.  Very few CS majors will go on to become theoretical computer scientists.

We did take some class time to watch movies which students uploaded to my Google drive.  All had their interesting features, but some were particularly creative.  Let’s take a look at a few of them.

Here is Monica’s description of her movie:

My fractal movie consists of a Phoenix, morphing into a shell, morphing into the yin-yang symbol, and then morphing apart. I chose my background color to be pure black to create contrast between the orange and yellow colors of my fractal. The top fractal, which starts out as yellow, shifts downward the whole time. On the other hand, the bottom fractal, which starts out orange, shifts upward. In both cases, a small number of the points from each fractal get stuck in the opposite fractal and begin to shift with it. This leaves the two fractals at the end of the movie intertwined. I created text at the bottom of the fractal movie, which says “Phoenix.” I wanted to enhance the overall movie and give it a name. Lastly, I added music to my fractal movie. I picked the song “Sweet Caroline” by Neil Diamond.

Ethan says this about his movie:

The inspiration for this fractal came from a process of trial and error.  I knew I wanted to have symmetry and bright color, but everything else was undecided.  After creating the shape of the fractal, I decided to create a complete copy of the fractal and rotate one copy on top of the other.  After seeing what it looked like with a simple rotation, I decided something was missing so I had the copied image rotate and either shrink or grow, depending on a random variable.  In this movie the image shrinks.  I used transitional gradients because I wanted to add more color without it looking too busy or cluttered.

Finally, this is how Mohamed describes his video:

This video starts as a set of four squares scaled by 0.45, and each square has either x increased by 0.5, y increased by 0.5, both increased by 0.5, or neither increased by 0.5. The grays and blacks as the video starts show the random points plotted as the numbers fed into the function are increasing, while the blues and whites show the points as the numbers fed into the function are decreasing. I chose to do this because we often see growth of functions in videos, but we do not see the regression back to its original form too often….

I was very pleased by how creative the students were with this project, and how enthusiastic they were about their final videos.  I have another project underway where I use Processing — a Mathematics and Digital Art course I’ll be teaching this Fall semester.  I’ll be talking about this course soon, so be sure to follow along!

Making Movies with Processing V

Last week, we saw how to use linear interpolation to rotate a series of fractal images.  It was not unduly difficult, but it was important to call functions in the right sequence in order to make the code as simple as possible.

This week we’ll explore different ways to use color.  Using color in digital art is a broad and complex topic, so we’ll only scratch the surface today.

The first movie shows how the different parts of the Sierpinski triangles corresponding to different affine transformations of the iterated function system can be drawn in different colors.

Recall that the points variable stored all the points to be drawn in any given fractal image.  Since the points are drawn all at once, it is difficult to say which of the three transformations generated a particular point.  But this is important because we want the color of a point to correspond to the transformation used to generate it.

One idea is to use three variables to store the points, and have these variables correspond to the three affine transformations.  Here is the code — we’ll discuss it in detail in a moment.

CodeSnippet5
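
Since the code appears here as a screenshot, below is a minimal sketch of the same idea in Processing’s Python mode.  The coordinate conversion, the morphing coefficient 0.5 – 0.25*p, and the translate call follow the previous posts (the rotate call is omitted for clarity), so the details may differ slightly from my actual code.

    from random import randint

    def sierpinski(p, n):
        points1, points2, points3 = [], [], []
        last = (0, 0)
        for i in range(n):
            r = randint(1, 3)
            if r == 1:
                # append inside each branch, so we remember which
                # transformation produced the point
                last = ((0.5 - 0.25 * p) * last[0], 0.5 * last[1])
                points1.append(last)
            elif r == 2:
                last = (0.5 * last[0] + 1, 0.5 * last[1])
                points2.append(last)
            else:
                last = (0.5 * last[0], 0.5 * last[1] + 1)
                points3.append(last)

        background(0, 0, 0)
        strokeWeight(2)
        translate(384, 312)    # center the image, as in the last post

        # one stroke/point pass for each list
        stroke(255, 255, 0)    # yellow for points1
        [point(225 * x - 225, 225 - 225 * y) for (x, y) in points1]
        stroke(255, 0, 0)      # red for points2
        [point(225 * x - 225, 225 - 225 * y) for (x, y) in points2]
        stroke(0, 0, 255)      # blue for points3
        [point(225 * x - 225, 225 - 225 * y) for (x, y) in points3]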

We use variables points1, points2, and points3 to store the points generated by each of the three affine transformations.  Note that the use of append is now within each if statement, not after all the if statements.  This is because we want to remember which points are generated by which transformation, so we can plot them all the same color.

As a result, we now need three separate calls to the stroke and point routines.  Recall that in Processing, a call to the stroke command changes the color of everything drawn after that stroke command is called.  So if we want to draw using three different colors, we need three calls to the stroke command.

Of course it follows that we need three calls to the point routine, since once we change the color of what is drawn, we need to make sure the correct set of points is that color.  In this case, all the points in points1 are yellow, those in points2 are red, and those in points3 are blue.

Again, not unduly complicated.  You just have to make sure you know how each function in Processing works, and the appropriate order in which to call the functions you use.

On to the next color experiment!  It’s been a few weeks since we used linear interpolation with color.  You’ll see in the movie below that the yellow triangle gradually turns to red, the blue triangle changes to yellow, and the red triangle becomes blue.

Let’s see how we’d use linear interpolation to accomplish this.  Below is the only code which needs to be altered — the stroke and point commands.  Also, I left out the rotate function so the changing of the colors would be easier to follow.

CodeSnippet6
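
As a sketch, the altered stroke and point passes from the code above might look like this — only the RGB arguments to stroke change, each one a linear interpolation between the old and new color values:

    stroke(255, (1 - p) * 255, 0)            # yellow (255,255,0) to red (255,0,0)
    [point(225 * x - 225, 225 - 225 * y) for (x, y) in points1]
    stroke((1 - p) * 255, 0, p * 255)        # red (255,0,0) to blue (0,0,255)
    [point(225 * x - 225, 225 - 225 * y) for (x, y) in points2]
    stroke(p * 255, p * 255, (1 - p) * 255)  # blue (0,0,255) to yellow (255,255,0)
    [point(225 * x - 225, 225 - 225 * y) for (x, y) in points3]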

We’ll focus on how to change the red triangle to blue in this example, which occurs for the points in the variable points2.  The other color changes are handled similarly.  All we need to do is use linear interpolation on each of the RGB values of the colors we are looking at.

For example, red has an R value of 255, but blue has an R value of 0.  Now when p = 0 the triangle is red, and when p = 1, the triangle is blue.  So we need to start (p, R) at (0, 255) and end at (1, 0).  Creating a line between these two points results in the equation

R = (1 – p) * 255.

You can see the right-hand side of this equation as the first argument to the stroke command used to change the color for points2.

Working with the G values is easy.  Since both red and blue have a G value of 0, we don’t need linear interpolation at all!  Just leave the G value 0, and everything will be fine.

Finally, we look at the B value.  For red, the B value is 0, but it’s 255 for blue.  So we need to start (p, B) at (0, 0) and end at (1, 255).  This is not difficult to do; we get the line

B = p * 255.

You’ll see the right-hand side of this equation as the third argument to the stroke command which changes the color for points2.

Just linear interpolation at work again!  It’s not too difficult, once you look at it the right way.

For our last example, we’ll let the triangles “fade out,” as shown in the following movie.

Can you figure out how this is done?  Linear interpolation again, but this time in the strokeWeight function.  Here are the changes:

CodeSnippet7
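
In sketch form, the change is just this if-else clause in place of the old strokeWeight call (the rest of the code stays the same):

    if p < 0.5:
        strokeWeight(2)              # full weight for the first half
    else:
        strokeWeight((1 - p) * 4)    # then taper linearly down to 0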

Let’s see what this if-else clause does.  If the parameter p is less than 0.5, leave the stroke weight at 2.  Otherwise, calculate the stroke weight to be (1 – p)*4.

What does this accomplish?  Well, at p = 0.5, the stroke weight is (1 – 0.5)*4, which is 2.  And at p = 1, the stroke weight is 0.  This means that the stroke weight is 2 for the first half of the movie, then gradually diminishes to 0 by the end of the movie.  In other words, a fade out.

Of course when you write the code, you have to reverse engineer it.  If I call my stroke weight W, I want to start (p, W) at (0.5, 2) and end at (1, 0).  Creating a line between these two points gives the equation

W = (1 – p) * 4.

That’s all there is to it!

I hope you’ve seen how linear interpolation is a handy tool you can use to create all types of special effects.  The neat thing is that it can be applied to any function which takes numerical parameters — and those parameters can correspond to color values, angles of rotation, location, or stroke weight.  The only limit to how you can incorporate linear interpolation into your movies is your imagination!

Making Movies with Processing IV

Last week, we saw how using linear interpolation allowed us to create animations of fractals.  This week, we’ll explore how to create another special effect by using linear interpolation in a different way.  We’ll build on last week’s example, so you might want to take a minute to review it if you’ve forgotten the details.

Let’s suppose that in addition to morphing the Sierpinski triangle, we want to slowly rotate it as well.  So we insert a rotate command before plotting the points of the Sierpinski triangle, as shown here:

CodeSnippet1
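
In sketch form, the added line is just one call, using our parameter p (PI is Processing’s built-in constant):

    rotate(p * 2 * PI)    # p = 0: no rotation; p = 1: one full revolution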

First, it’s important to note that the rotate command takes angles in radian measure, not degrees.  Recall from your trigonometry classes that

360^\circ=2\pi{\rm \ radians.}

But different from your trigonometry classes is that the rotation is in a clockwise direction.  When you studied the unit circle, angles moved counter-clockwise around the origin as they increased in measure.  This is not a really complicated difference, but it illustrates again how not every platform is the same.  I googled “rotating in processing” to understand more, and I found what I needed right away.

Let’s recall that p is a parameter which is 0 at the beginning of the animation, and 1 at the end.  So when p = 0, there is a rotation of 0 radians (that is, no rotation), and when p = 1, there is a rotation of 2\pi radians, or one complete revolution.  And because we’re using linear interpolation, the rotation changes gradually and linearly as the parameter p varies from 0 to 1.

Let’s see what effect adding this line has.  (Note:  Watch the movie until the end.  You’ll see some blank space in the middle — we’ll explain that later!)

What just happened?  Most platforms which have a rotate command assume that the rotation is about the origin, (0,0).  We learned in the first post that the origin in Processing is in the upper left corner of the screen.  If you watch that last video again, you can clearly see that the Sierpinski triangle does in fact rotate about the upper left corner of the screen in a clockwise direction.

Of course this isn’t what we want — since most of the time the fractals are out of the view screen!  So we should pick a different point to rotate around.  You can pick any point you like, but I thought it looked good to rotate about the midpoint of the hypotenuse of the Sierpinski triangles.  When I did this, I produced the following video.

So how did I do this?  It’s not too complicated, but let’s take it one step at a time.  We’ve got to remember that before scaling, the fractal fit inside a triangle with vertices (0,0), (0,2), and (2,0).  I wanted to make it 450 pixels wide, so I scaled by a factor of 225.

This means that the scaled Sierpinski triangle fits inside a right triangle with vertices (0, 0), (0, 450), and (450, 0).  Using the usual formula, we see that the midpoint of the hypotenuse of this triangle has coordinates

\dfrac12\left((0,450)+(450,0)\right)=(225,225).

To make (225, 225) the new “origin,” we can just subtract 225 from the x– and y-coordinates of our points, like this:

CodeSnippet2
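
As a sketch, the point call becomes something like this (the exact form of the list comprehension is an assumption based on the previous post):

    # subtract 225 so that (225, 225) acts as the new origin;
    # 225 - 225 * y (rather than 225 * y - 225) flips the y-axis
    [point(225 * x - 225, 225 - 225 * y) for (x, y) in points]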

Remember that the positive y-axis points down in Processing, which is why we use an expression like 225 – y rather than y – 225.  This produces the following video.

This isn’t quite what we want yet, but you’ll notice what’s happening.  The midpoint of the hypotenuse is now always at the upper left corner.  As the triangle rotates, most of it is outside the view screen.  But that’s not hard to fix.

All we have to do now is move the midpoint of the hypotenuse to the center of the screen.  We can easily do this using the translate function.  So here is the complete version of the sierpinski function, incorporating the translate function as well:

CodeSnippet3
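
Here is a rough sketch of the whole function in Python mode.  The screen size implied by the center (384, 312), the colors, and the dot size are assumptions; the order of the calls is what matters.

    from random import randint

    def sierpinski(p, n):
        # generate the fractal points, as in the last post; the
        # coefficient 0.5 - 0.25 * p morphs F1 as p goes from 0 to 1
        points = []
        last = (0, 0)
        for i in range(n):
            r = randint(1, 3)
            if r == 1:
                last = ((0.5 - 0.25 * p) * last[0], 0.5 * last[1])
            elif r == 2:
                last = (0.5 * last[0] + 1, 0.5 * last[1])
            else:
                last = (0.5 * last[0], 0.5 * last[1] + 1)
            points.append(last)

        background(0, 0, 0)
        stroke(255, 128, 0)
        strokeWeight(2)

        translate(384, 312)    # first, move the pivot to the screen center
        rotate(p * 2 * PI)     # then rotate about it
        [point(225 * x - 225, 225 - 225 * y) for (x, y) in points]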

So let’s briefly recap what we’ve learned.  Rotating an image is not difficult as long as you remember that the rotate function rotates about the point (0,0).  So first, we needed to decide what point in user space we wanted to rotate about – and we chose (225, 225) so the fractals would rotate around the midpoint of the hypotenuse of the enclosing right triangle.  This is indicated in how the x– and y-coordinates are changed in the point function.

Next, we needed to decide what point in screen space we wanted to rotate around.  The center of the screen seemed a natural choice, so we used (384, 312).  This is indicated in the arguments to the translate function.

And finally, we decided to have the triangles undergo one complete revolution, so that p = 0 corresponded to no rotation at all, and p = 1 corresponded to one complete revolution.  We accomplished this using linear interpolation, which was incorporated into the rotate function.

But most importantly — we made these changes in the correct order.  If you played around and switched the lines containing the translate and rotate functions, you’d get a different result.

It is worth remarking that it is possible to use the rotate function first.  But then the translate function would be much more complicated, since you would have to take into account where the point (384, 312) moved to.  And you’d have to review your trigonometry.  Here’s what the two lines would need to look like:

CodeSnippet4
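
As a sketch: rotating first means we have to translate by the point which the rotation carries onto (384, 312).  Working out the inverse rotation gives something like the following, where a is the angle of rotation (this is my reconstruction of the idea, not necessarily the exact code in the screenshot):

    a = p * 2 * PI
    rotate(a)
    # translate by the inverse rotation applied to (384, 312)
    translate(384 * cos(a) + 312 * sin(a), -384 * sin(a) + 312 * cos(a))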

As you can see, there is a lot more involved here!  So when you’re thinking about designing algorithms to produce special effects, it’s worth thinking about the order in which you perform various tasks.  Often there is a way that is easier than all the others — but you don’t always hit upon it the first time you try.  That’s part of the adventure!

Next week we’ll look at a few more special effects you can incorporate into your movies.  Then we’ll look at actual movies made by students in my linear algebra class.  They’re really great!

Making Movies with Processing III

This week, we begin a discussion of creating movies consisting of animated fractals.  Last week’s post about the dot changing colors was at a beginning level as far as Processing goes.  This week’s post will be a little more involved, but will assume a knowledge of Iterated Function Systems.  I talked about IFS on Day034, Day035, and Day036.  Feel free to look back for a refresher….

Today, we’ll see how to create the following movie.  You’ll notice that both the beginning and final Sierpinski triangles are fractals discussed on Day034.

As a reminder, these are the three transformations which produce the initial Sierpinski triangle:

F_1\left(\begin{matrix}x\\y\end{matrix}\right)=\left[\begin{matrix}0.5&0\\0&0.5\end{matrix}\right] \left(\begin{matrix}x\\y\end{matrix}\right),

F_2\left(\begin{matrix}x\\y\end{matrix}\right)=\left[\begin{matrix}0.5&0\\0&0.5\end{matrix}\right] \left(\begin{matrix}x\\y\end{matrix}\right)+\left(\begin{matrix}1\\0\end{matrix}\right),

F_3\left(\begin{matrix}x\\y\end{matrix}\right)=\left[\begin{matrix}0.5&0\\0&0.5\end{matrix}\right] \left(\begin{matrix}x\\y\end{matrix}\right)+\left(\begin{matrix}0\\1\end{matrix}\right).

Also, recall that to get the modified Sierpinski triangle at the end of the video, all we did was change the first transformation to

F_1\left(\begin{matrix}x\\y\end{matrix}\right)=\left[\begin{matrix}0.25&0\\0&0.5\end{matrix}\right] \left(\begin{matrix}x\\y\end{matrix}\right).

We’ll see how to use linear interpolation to create the animation.  But first, let’s look at the Python code for creating a fractal using an iterated function system.

ifscode
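
The code is shown as a screenshot, so here is a sketch of the same function in Python mode.  The structure follows the discussion below; the exact screen-space conversion in the last line is an assumption.

    from random import randint

    def sierpinski(p, n):
        points = []
        last = (0, 0)
        for i in range(n):
            r = randint(1, 3)    # each transformation equally likely
            if r == 1:
                # the coefficient 0.5 - 0.25 * p is explained below
                last = ((0.5 - 0.25 * p) * last[0], 0.5 * last[1])
            elif r == 2:
                last = (0.5 * last[0] + 1, 0.5 * last[1])
            else:
                last = (0.5 * last[0], 0.5 * last[1] + 1)
            points.append(last)

        background(0, 0, 0)    # black background
        stroke(255, 128, 0)    # small orange dots
        strokeWeight(2)
        # scale by 225, and flip y for Processing's coordinate system
        [point(225 * x, 450 - 225 * y) for (x, y) in points]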

The parameter p is for the linear interpolation (which we’ll discuss later), and n is the number of points to plot.  First, import the library for generating random integers — since each transformation will be weighted equally, it’s simpler just to choose a random integer from 1, 2, and 3.  The variable points keeps track of all the points, while last keeps track of the most recently plotted point.  Recall from earlier posts that you only need the last point in order to get the next one.

Next, the for loop just creates new points, one at a time, and appends them to points.  Once an affine transformation is randomly chosen by selecting a randint in the range from 1 to 3, it is applied to the last point generated.  For the purpose of writing Python code, it’s easier to use the notation

F_2\left(\begin{matrix}x\\y\end{matrix}\right)=\left(\begin{matrix}0.5 \ast x+1\\0.5\ast y\end{matrix}\right)

rather than matrix notation.  In order to use vector and matrix notation, you’d need to indicate that (1,2) is a vector by writing

v = vector(1,2),

and similarly for matrices.  Since we’re doing some fairly simple calculations, just writing out the individual terms of the result is easier and requires less code.

Once the points are all created, it’s time to plot them.  You’ll recognize the background, stroke, and strokeWeight functions from last week.  Nothing fancy here, since we’re focusing on algorithms today.  Just a black background and small orange dots.

The last line plots the points, and is an example of what is called list comprehension in Python.  First, note that the iterated function system would create a fractal which would fit inside a triangle with vertices (0,2), (0,0), and (2,0).  So we need to suitably scale the fractal — in this case by a factor of 225 so it will be large enough to see.  Remember that units are in pixels in Processing.

Then we need to compensate for Processing’s coordinate system.  You’ll notice a similarity to what we did a few weeks ago.

What the last line does is essentially this:  for every point x in the list points, adjust the coordinates for screen space, and then plot x with the point function.  List comprehension is convenient because you don’t have to make a loop or other iterative construct — it’s done automatically for you.

Of course that doesn’t mean you never need a for loop.  It wouldn’t be easy to replace the for loop above with a list comprehension as each new point depends on the previous one.  But for plotting a bunch of points, for example, it doesn’t matter which one you plot first.

Now for the linear interpolation!  We want the first frame to be the usual Sierpinski triangle, and the last frame to be our modified triangle.  The only difference is that one of the constants in the first function changes from 0.5 to 0.25.

This is perfect for using linear interpolation.  We’d like the constant to be 0.5 when p = 0, and 0.25 when p = 1.  So we just need to create a linear function of p which passes through the points (0, 0.5) and (1, 0.25).  This isn’t hard to do; you should easily be able to get

0.5 - 0.25 \ast p.

The effect of using the parameter p in this way is to create a series of fractals, each one slightly different from the one before.  Since we’re taking 360 steps to vary from 0.5 to 0.25, there is very little difference from one fractal to the next, and so when strung together, the fractals make a convincing animation.

I should point out the dots look like they’re “dancing” because each time a fractal image is made, a different series of random affine transformations is chosen.  So while the points in each successive fractal will be close to each other, most of them will actually be different.

For completeness, here is the code which comes before the sierpinski function is defined.

ifscodesetup2
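
As a sketch (the screen size and the number of points are assumptions):

    def setup():
        size(768, 768, P2D)

    def draw():
        if frameCount <= 360:
            p = frameCount / 360.       # note the decimal point!
            sierpinski(p, 50000)        # n = 50000 points, for example
            saveFrame("frames/####.tif")
        else:
            noLoop()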

It should look pretty familiar from last week.  Similar setup, creating the parameter p, writing out the frames, etc.  You’ll find this a general type of setup which you can use over and over again.

So that’s all there is to it!  Now that you’ve got a basic grasp of Processing’s screen space and a few different ways to use linear interpolation, you can start making movies on your own.

Of course there are lots of cool effects you can add by using linear interpolation in more creative ways.  We’ll start to take a look at some of those next week!

Making Movies with Processing II

Last week we learned a little about the history of Processing, as well as what the coordinate system is like in Processing (and many other graphics applications as well).  Today I’d like to discuss an idea which I find very helpful in making movies — linear interpolation.  We’ll only have time for one simple example today, but we’ll go through that example very thoroughly.

So here’s the movie we’ll explore today.  Not glamorous, but it’s a start.

You’ll notice what’s happening — the dot slowly turns from magenta to black, while the background does the opposite.  Let’s look at the dot first.  In Processing, RGB values go from 0 to 255.  Magenta is (255, 0, 255) in RGB, and black is (0, 0, 0).  We want the dot to go smoothly from magenta to black.

The way I like to do this is to introduce a parameter p.  I think of p = 0 as my starting point, and p = 1 as my ending point.  In this example, p = 0 corresponds to magenta, and p = 1 corresponds to black.  As p varies from 0 to 1, we want the dot to go from magenta to black.

This is where linear interpolation comes in.  For any start and end values, the expression

(1 – p) * start + p * end

will vary continuously from the start value to the end value as p goes from 0 to 1.  It should be clear that when p = 0, you get the start value, and when p = 1, you get the end value.  Any value of p in between 0 and 1 will be in between the start and end values — closer to the start value when p is near 0, and closer to the end value when p is near 1.

Because this expression is linear in p — no quadratic terms, no square roots, no sines or cosines — we say that this expression gives a linear interpolation between the start and end values.
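
As a tiny code sketch, the interpolation is just this (Processing also has a built-in function lerp(start, stop, p) which computes the same thing):

    def interpolate(start, end, p):
        # p = 0 gives start, p = 1 gives end; values in between
        # give a weighted average of the two
        return (1 - p) * start + p * end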

Let’s see how this works in Processing.  Here’s a screen shot of the code that produced the video shown above.

screenshot2
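
If you’d rather type it in than read the screenshot, here is a sketch of the same program in Python mode.  The 500 x 500 screen size matches the 300-pixel dot centered at (250, 250) described below; the file names are illustrative.

    def setup():
        size(500, 500, P2D)

    def draw():
        if frameCount <= 360:
            makedot(frameCount / 360.)    # p goes from (nearly) 0 to 1
            saveFrame("frames/####.tif")
        else:
            noLoop()

    def makedot(p):
        background(p * 255, 0, p * 255)          # black to magenta
        strokeWeight(300)                        # a 300-pixel dot
        stroke((1 - p) * 255, 0, (1 - p) * 255)  # magenta to black
        point(250, 250)                          # centered on the screen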

The basic functions for making a simple movie are setup and draw.  The setup function is executed once before the movie starts.  Then the draw function is called repeatedly — each time the draw function is called, a new frame of the movie is created.

In a typical program, if you wanted to repeat something over and over, you’d need to make a loop.  But in Processing — since its purpose is to create frames for movies — this is built in to how the application works.  No need to do it yourself.

So in Processing, the variable frameCount automatically keeps track of which frame you’re creating.  And it is automatically incremented each time the draw function executes.  You’ll find that this is a very nice feature!  This way, you don’t have to keep track of frames yourself.

Note the size declaration in the setup function.  Last week, I mentioned that you’ve got to set the size of your coordinate system when you start your movie.  The “P2D” means you’re creating a two-dimensional image.  You can make 3D objects in Processing, but we won’t look at that today.

Typical applications which make movies (like Photoshop, or the Movie Maker found in Processing’s “Tools” menu) use about 30 frames per second.  So 360 frames (as above) will be a 12-second movie.  Enough to see the dot and background change colors.

The if/else clause determines the frames produced.  Note that when frameCount gets larger than 360, you skip to the “else” clause, which is “noLoop()”.  This essentially stops Processing from creating new frames.  The saveFrame command saves the individual frames to whatever directory you specify; if you forget the else clause, Processing will just keep generating more and more .tiff files and clutter up your directory.

So the if/else clause basically tells Processing this:  make 360 frames, one at a time, and save them in a directory called “frames.”  Once all 360 frames are made, it’s time to stop.

The makedot function is what actually creates the frame.  Now we’re getting to the linear interpolation!  The frameCount/360. is what creates the value of p.  When frameCount is 1, the value of p is 1/360 (close enough to 0 for the purposes of our movie), and when frameCount is 360, the value of p is 1.  As the value of frameCount increases, p increases from about 0 to 1.

It’s important to use a decimal point after the number 360.  If you don’t, Python will use integer division, and you’ll get 0 every time until frameCount reaches 360, and only then will you get 1.  (I learned this one the hard way…took a few minutes to figure out what went wrong.)

So let’s go through the makedot function line by line.  First, we set the background color.  Note that when p is 0, the color is (0, 0, 0) — black.  As p moves closer to 1, the black turns to magenta.  It is also important to note that when you call the background function, the entire screen is set to the color you specify.  Processing will write over anything else that was previously displayed.  So be careful where you put this command!

Next, the width of lines/points drawn is set with strokeWeight.  Remember that the units are pixels — so the width of the dot is 300 pixels on a screen 500 pixels wide.  Pretty big dot.

The stroke command sets the color of lines/points on the screen.  Notice the (1 – p) here — we want to begin with magenta (when p = 0), and end with black (when p = 1).  So we go the “opposite” direction from the background.

Maybe you noticed that at one point, the dot looked like it “disappeared.”  Not really — note that when p = 1/2, then 1 – p = 1/2 as well!  At this time, the dot and background are exactly the same color.  That’s why it looked like the dot vanished.

And last but not least — the dot.  Remembering the coordinate system, the dot is centered on the screen at (250, 250).

That’s it.  Not a long program, but all the essentials are there.  Now you’ve got a basic idea of how simple movies can be made.  The next post will build on this week’s and look at some more examples of linear interpolation.  Stay tuned!


Making Movies with Processing I

If you’ve been following my Twitter (@cre8math), you’ll have noticed I posted quite a few videos last fall.  At the time I was writing a lot about creating fractal images, and I really wanted to learn how to make animations using these images.  I had heard about Processing from a lot of friends, and decided that it was time to dive in!  Here is one of my earlier movies, but still one of my favorites.

Processing has been around since 2001.  From the very beginning, Processing was used in the classroom as a tool to allow students to quickly create graphical images and movies.  It is now used widely by educators, artists, designers, architects, and researchers.  Many companies use Processing for data visualization.  Click on the link to read more!

One of the great things about Processing is that it’s open source — a specific intention of the developers.  Now, there’s even a Processing Foundation.  Its purpose is to “promote software literacy within the visual arts, and visual literacy within technology-related fields — and to make these fields accessible to diverse communities.”  The idea is to provide everyone access to software you can use to create a wide range of visual media.  Not just those who can afford it.

I support open-source initiatives as well.  Most of the digital art I’ve talked about on my blog was created in Mathematica, a powerful but expensive programming language.  I rewrote all the algorithms you’ve seen on the Sage worksheets in Python because I wanted them to be available to anyone who has access to the internet.  Not just those who can afford pricey software….

Now there won’t be time to go into everything about Processing — that’s what the internet is for.  I often google phrases like “How do you change the background color in Processing?”, and usually get the information I need pretty quickly.

What I want to focus on today is the coordinate system used in Processing.  If you’re new to computer graphics, it might be a little unfamiliar — the coordinate system is based on pixels.  For example, the screen on my Mac is 1440 x 900 pixels.  But the origin (0, 0) is in the upper left corner of the screen, so that the lower right corner has coordinates (1440, 900).  In other words, while the positive x-axis is still to the right, the positive y-axis is now pointing down.  This is sometimes called a screen coordinate system, or screen space.

This convention has an interesting history.  Older television screens used cathode ray tubes, and literally updated the screen by refreshing one line at a time, from top to bottom and from left to right.  After updating the pixel at the lower right, refreshing began again at the upper left.  This process occurred many times each second to give the illusion of continuous motion.  Because the refresh was done from top to bottom, the y-coordinates increased from top to bottom as well.

Such a coordinate system is not unusual.  The PostScript programming language (which I told you about two weeks ago) has a coordinate system based on points.  The definition of a point has changed over time, but PostScript uses a coordinate system where there are 72 points per inch, and the lower left corner has coordinates (0,0).

PostScript is used for printing physical documents.  Because copier paper typically has dimensions 8.5″ x 11″ in the United States, the upper right corner of your document would have coordinates (612, 792).

In Processing, you need to specify the size of the screen you want for your movie.  In the example above, I chose 768 x 768 pixels.  This may sound like a strange number, but a commonly used screen size is 1024 x 768.  Because of what my image looked like, though, I wanted the screen to be square so there wouldn’t be extra space on the left and right.

Now the objects you see in the movie have rotational symmetry.  So it would make sense for the origin (0,0) to be at the center of the movie screen, not the upper left corner.  A convenient coordinate system for these objects might have the upper left corner be (-1, 1), and the lower right corner be (1, -1), just like the usual Cartesian coordinate system.  This system of coordinates is sometimes called local space or user space.  In other words, it’s the space the user is thinking in when designing various images.

So now the issue is converting from user space to screen space.  This isn’t difficult, so we’ll just work through this example to see how it’s done.  All you need to remember is how to find the equation of a line.

Why a line?  Let’s look at x-coordinates first.  In user space, x-coordinates range from -1 to 1.  In screen space, they range from 0 to 768.  If you went 1/4 of the way from -1 to 1 in user space — which would put you at -0.5 — then you’d expect the corresponding point in screen space to be 1/4 of the way from 0 to 768, which would be 192.  You’d expect this for any ratio, not just 1/4.  This is precisely what it means for the transformation of coordinates from user space to screen space to be linear.

Since -1 in user space corresponds to 0 in screen space, and 1 corresponds to 768, all we need to do is find the equation of the line between the points (-1,0) and (1,768).  In Cartesian coordinates, this would be y=384(x+1). Or we might alternatively say

x_{\rm screen} = 384(x_{\rm user}+1).

Once we know how to do this for the x-coordinates, we can proceed the same way for the y-coordinates.  You should be able to figure out that

y_{\rm screen} = 384(1-y_{\rm user}).

Note that the minus sign in front of y_{\rm user} is a result of the fact that the positive y direction is pointing down.
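
In code, the conversion is just a pair of linear functions; here is a small hypothetical helper for this example:

    def user_to_screen(x, y):
        # user space: [-1, 1] x [-1, 1]; screen space: 768 x 768 pixels
        return (384 * (x + 1), 384 * (1 - y))

For instance, user_to_screen(0, 0) returns (384, 384), the center of the screen.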

This is a very general idea which will allow you to convert from one space to another fairly easily.  And it’s definitely necessary in Processing to set up the screen for your movie.

Next week we’ll look at linear interpolation — another important concept which will help us make movies.  Until then!