On Coding XII: Python

It has been some time since I wrote an installment of On Coding.  It’s time to address one of my more recent programming adventures:  Python.  I started learning Python about two-and-a-half years ago when I began teaching at the University of San Francisco.

One of my colleagues introduced me to the Sage environment (now going by “CoCalc”) as a place to do Mathematica-like calculations, albeit at a smaller scale.  Four features were worthy of note to me:  1)  you could do graphics;  2)  you could write code (in Python);  3)  you could run the environment in your browser without downloading anything;  and 4)  it was open source.

For me, this was (at the time) the perfect environment to develop tools for creating digital art which I could freely share.  Yes, I had thousands of lines of Mathematica code, but Mathematica is fairly expensive.  I wanted an environment which would be easily accessible to students (and my blog followers!), and Sage fit the bill.

So that’s why I started learning Python — it was the language I needed to learn in order to use Sage.

For me, two things were a challenge.  The first was how strongly typed Python is.  In Mathematica, the essential data structure is a list, just like in LISP.  For example,

{1, 2, 3}

is a list.  But that list may also represent a vector in three-dimensional space — even though it would look exactly the same.  It may also represent a set of numbers, so you could calculate

Intersection[{1, 2, 3}, {3, 4, 5}].

In Python you can create a list, tuple, or set, as follows:

list([1, 2, 3]),  tuple([1, 2, 3]), set([1, 2, 3]).

And in Python, these are three different objects, none equal to any other.  I don’t necessarily want to start a discussion of typed vs. untyped languages, but when you’re so used to using an untyped language, like Mathematica, you are constantly wondering if the argument to some random Python function is a list, tuple, or….
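
For example, a quick check (in Python 2) shows that no two of these compare as equal:

a, b, c = list([1, 2, 3]), tuple([1, 2, 3]), set([1, 2, 3])
print type(a), type(b), type(c)   # <type 'list'> <type 'tuple'> <type 'set'>
print a == b, a == c, b == c      # False False False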

Second, Python has a “return” statement.  In languages like LISP and Mathematica, the value of the last statement executed is automatically returned.  In Python, you have to specify that you want a value returned by using a return statement.  I forget this all the time.
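
For example, here is a quick sketch of what forgetting looks like:  leaving out the return statement quietly gives you None.

def square(n):
    n ** 2             # computed, then thrown away!

def square_right(n):
    return n ** 2      # the return statement is required

print square(5)        # None
print square_right(5)  # 25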

And while not a huge obstacle, it does take a little while to get used to integer division.  In Python 2, 3/4 = 0, since when you divide one integer by another, only the integer quotient is kept.  But 3/4. = 0.75, since adding the decimal point after the 4 indicates the number is a floating point number, and so floating-point arithmetic is performed.  (In Python 3, the / operator always performs true division, so 3/4 gives 0.75, and // is used for integer division.)
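
A quick check, in Python 2:

print 3 / 4    # 0, integer division
print 3 / 4.   # 0.75, the decimal point forces floating-point division
print 3 // 4   # 0, explicit integer division (in both Python 2 and 3)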

Of course, if you’ve been reading recent posts, you know I’ve moved entirely from Sage to Processing in my Mathematics and Digital Art course.  You can read more about that decision here — but one key feature of Processing is that there’s a Python mode, so I was able to take work already done in Sage and adapt it for Processing.

It turns out that this was not as easy as I had hoped.  The essential difficulty is that in Sage, the bounding box of your image is computed for you, and your image is appropriately scaled and displayed on the screen.  In Processing, you’ve got to do that on your own, as well as work in a space where x- and y-coordinates are in units of pixels, which is definitely not how I am used to thinking about geometry.

I am finding out, however — much to my delight and surprise — that there are quite a few functional programming aspects built into Python.  I suspect there are many more than I’m familiar with, but I’m learning them a little at a time.

For example, I am very fond of using maps and function application in Mathematica to do some calculations efficiently.  Rather than use a loop to, say, add the squares of the numbers 1–10, in Mathematica, you would say

Plus @@ (#^2& /@ Range[10])

The “#^2&” is a pure function, and the “/@” applies the function to the numbers 1–10 and puts them in a list.  Then the function “Plus” is applied, which adds the numbers together.

There is a similar construct in Python.  The same sum of 385 can be computed by using

sum([(n + 1)**2 for n in range(10)])

OK, this looks a little different, but it’s just the syntax.  Rather than the “#” character for the variable in the pure function, you provide the variable name.  The bracketed expression with the “for” is called a list comprehension in Python, though it is essentially just a map.  Of course you need “(n + 1),” since Python always starts counting at 0, so that “range(10)” actually refers to the numbers 0–9.  And the “sum” function can take a list of numbers as well.  But at a conceptual level, the same thing is going on.
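
Incidentally, you could avoid the “(n + 1)” by starting the range at 1, or use map with a lambda (Python’s version of a pure function) to make the parallel with Mathematica even closer:

print sum([n ** 2 for n in range(1, 11)])         # 385
print sum(map(lambda n: n ** 2, range(1, 11)))    # 385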

The inherent “return” in any Mathematica function does find its way into Python as well. Let’s take a look at a simple example:  we’ll write a function which computes the maximum of two numbers.

Now you’d probably think to write:

Day117python1
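
In outline, the definition looks something like this:

def max(a, b):      # note:  this shadows Python's built-in max function
    if a >= b:
        return a
    else:
        return b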

This is the usual way of defining “max.”  But there’s another way to do this in Python.

If you ask Python to

print 3 > 2,

you’ll see “True.”  But you can also tell Python to

print (3 > 2) + 7

and get “8.”  What’s going on here is that depending on the context, “3 > 2” can take on the value “True” or “1.”  Likewise, “3 < 2” can take on either the value “False” or the value “0.”  (In fact, Python’s booleans are just integers in disguise:  True equals 1, and False equals 0.)

This allows you to completely sidestep making Boolean checks.  Consider the following definition.

Day117python2
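
That is, something along these lines:

def max(a, b):
    return (a >= b) * a + (a < b) * b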

This also works!  If in fact a >= b, you return the value 1 * a + 0 * b, which gives you a — the maximum value when a >= b.  And when a < b, you return b.  Note that when a = b, both are the maximum, so we could just as well have written

Day117python3
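
In other words, something like:

def max(a, b):
    return (a > b) * a + (a <= b) * b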

I think this is a neat feature of Python, which does not have a direct analogue in Mathematica.  I am hoping to learn many other intriguing features like this as I dive deeper into Python.

Python is my newest language, and I have yet to become “fluent.”  I still sometimes ask the internet how to do simple things which would be at my fingertips in Mathematica.  But I do love learning new languages!  Maybe in a year or so I’ll update my On Coding entry on Python with a flurry of new and interesting things I’ve learned….

Mathematics and Digital Art: Update 2 (Fall 2017)

We’ve just completed Week 8 of the Fall semester, so it’s time for the next update on my Mathematics and Digital Art class!  As I had mentioned before, the major difference this semester was starting with Processing right from the beginning of the semester.

It turns out this is making a really big difference in the way the class is progressing.  The first two times I taught the course, I had students work in the Sage environment for the first half of the semester.  The second half of the semester was devoted to Processing and student projects.

Because students only started to learn Processing at the same time they were diving into their projects, they were not able to start off with a Processing-based project.  As it happened, a few students actually incorporated Processing into their final projects as the second half of the semester progressed, but this was the exception, not the rule.

But last week, we already started making movies in Processing!  Starting simply, of course, with the dot changing colors.


This was a bit easier to present this time around, since we already had a discussion of user space vs. screen space earlier in the semester.  So this time, I could really focus on linear interpolation — the key mathematical concept behind making animations.
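
The idea is simple:  as a parameter t runs from 0 to 1, the interpolated value runs from a to b.  Processing has a built-in lerp function which does exactly this, but the sketch is just one line:

def lerp(a, b, t):
    # as t runs from 0 to 1, the result runs linearly from a to b
    return (1 - t) * a + t * b

print lerp(0, 255, 0.5)    # 127.5, halfway between the two values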

Next week will be a Processing-intense week.  I’ll delay some topics — like geometric series — to a little later in the course so we can get more Processing in right now.  The reason?  I really think many students will involve Processing in their final projects in a significant way.  I want to make sure they have enough exposure to feel confident about going in that direction for their final projects.  I’ll let you know what happens in this regard in my next update of Mathematics and Digital Art.

Now for some examples of student work!  For the assignment on iterated function systems, students had three different images to submit.  The first was a Sierpinski triangle — I asked students to create an image simultaneously as close to and as far away from a Sierpinski triangle as possible.  The idea was that a viewer should recognize the image as being based on a Sierpinski triangle, but perhaps only after staring at it for thirty seconds or so.

This is Sepid’s take on the assignment.  On many of her pieces, she experimented with different ways to crop the final image.  This has a significant effect on the image’s final appearance.

Day115Sepid1.png


This is Cissy’s submission for the Sierpinski triangle.  In this piece (and the others submitted for this assignment), Cissy remarked that she really enjoyed experimenting with color.  I commented that I thought color choices were among the most difficult decisions to make as far as elements of a work of digital art are concerned.

Day115Cissy1.png

The second piece was to involve only two affine transformations.  This is often a challenge for students, but there really is an enormous variety of images that may be created using just two transformations.  In addition, one of the transformations needed to involve a rotation by a non-trivial angle (that is, not a multiple of 45°), and students needed to submit a picture of their calculations as well.

One student was trying to create an image that looked like an animal footprint.  She remarked that she did consider a different color palette, but in the end, preferred to go with monochromaticity.
Day115L2.png

Interestingly, Terry also used a simple color palette.  She remarked that it was a challenge to use just two transformations — and because of this minimalist requirement, decided to go with a minimalist color palette.  In addition, her resulting fractal reminded her of birds, so she set the fractal against a white moon and gray sky.

Day115Terry2

For the third submission, there were no constraints whatsoever — in fact, I encouraged students to be as creative as possible.  There was a very wide range of submissions.  One student was fairly minimalistic, using a highly contrasting color palette.

Day115A

Jack’s piece was also fairly minimalistic.  I should remark that we took part of a lab one day for students to do some online peer commenting; Jack (and others as well) remarked that he used the advice of another student to improve an earlier draft of his piece.  In particular, he adjusted the stroke weight to increase the intensity of the colors.

Day115Jack.png

Tera based her work on the Sierpinski triangle,  but also included reflections of each of the three smaller components of her version of the Sierpinski triangle.  She remarked that the final image reminded her of a snowflake, or perhaps a Christmas sweater.

Day115Tera.png

Alex’s inspiration came from The Great Wave off Kanagawa, by Katsushika Hokusai.

The_Great_Wave_of_Kanagawa.jpg
Courtesy of Wikimedia Commons.


First, he created the fractal image, experimenting with various color combinations.  When he was satisfied with his palette, he added the boat and the white circle to suggest a black moon.  A rather interesting approach!

Day115Alex.png

As you can see, I’ve got quite a creative class of students who are willing to experiment in many different ways.  It’s interesting for me, since there is no way to predict what they’ll create next!  I look forward to seeing what they create when they really dive deeply into Processing and begin making animations.

In the next update, I’ll report on how students involve Processing in their project proposals.  In addition, they will have submitted their fractal movie projects by then, so there will undoubtedly be many interesting examples of student work to exhibit.  Stay tuned!

Mathematics and Digital Art: Update 1 (Fall 2017)

About a month has passed since beginning my third semester of Mathematics and Digital Art!  As with last semester, I plan on giving updates about once a month to discuss changes in the course and to showcase student work.

The main difference this semester (as I discussed a few weeks ago) was starting with Processing right from the beginning.  From my perspective, the course has run more smoothly than ever — and some of my students are already really getting into the coding aspect of creating digital art.

I do believe that beginning this way will pay off when we get to making movies.  Since we’ll already know the basics and understand the difference between user space and screen space, I can focus more on the interactive abilities of Processing — such as having features of the displayed image change by moving the mouse or pressing different keys on the keyboard.
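
Just to give the flavor, here is a minimal sketch (not from the course) where a dot follows the mouse and changes color as it moves:

def setup():
    size(400, 400)

def draw():
    background(0)
    fill(mouseX % 256, 100, 200)      # the color changes as the mouse moves
    ellipse(mouseX, mouseY, 50, 50)   # the dot follows the mouse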

The first two assignments were essentially the same as last semester.  We began with discussing color and the work of Josef Albers, emphasizing the fact that there is no such thing as “pure color” — colors are only perceived in relation to other colors.

Again, I was surprised by the diversity of the images students created.  Like last year, a few students experimented with a minimalist approach.  Here is what Alex generated using just a 2-by-2 grid of squares.

Day112Alex

I should point out that outlining the geometrical objects (using the strokeWeight function) is not “pure” Albers — you aren’t really seeing one color on top of another due to the black outlines.  But I did have students submit three pieces, insisting that one of the pieces was created only by changing the parameters in the original Albers routine, as shown in the following submission.

Day112Linh2

Here is Courtney’s submission on this theme, again created only by changing parameters to the drawing routine.

Day112Courtney


Most students — I think in part due to the fact that we started discussing code even earlier than previous semesters — really pushed the geometry far beyond the simple idea of rectangles within rectangles.

While toying with various geometrical motifs, Tera found something that reminded her of a rose.  This influenced her color palette:  reds and pinks for the flowers, with a green background, meant to suggest that the flowers were in a garden.

Day112Tera

Cissy explored the geometry as well.  Note how keeping the stroke weight at zero — so that the geometrical objects have no outline — creates a more subtle effect, especially since the randomness from the dominant color is not too pronounced.

Day112Cissy

The second art assignment, as in the previous semesters, was to explore creating textures using randomness in both color and shape.  As with the first assignment, I wanted students to submit one piece which involved only changing the parameters to a given function.  In this case, the function created a grid of gray circles, with both the intensity of the gray and the size of the circles having some degree of randomness.  I think it is important that students do some work within given constraints — it really challenges their creativity.  Here is Terry’s piece along these lines.

Day112Terry


The second piece was based on a function which created a grid of squares of the same size, but random colors.  Here, there were no constraints — students could modify the geometry in any way they wanted to.  Several were quite creative.  For example, Sepid approached this task by choosing both shape and color to create an image reminiscent of a stained glass window.

Day112Sepid

The third piece involved a color gradient (see my previous posts on Evaporation).  If you look back at these posts, you’ll recall that a color gradient can be created by increasing the randomness of the colors as you move from the top of the image to the bottom using a power function:  f(y)=y corresponds to a linear gradient, f(y)=y^2 corresponds to a quadratic gradient, etc.  Different effects can be created by varying the exponent.
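
In a sketch, the heart of the algorithm looks something like this (the details here are stand-ins, using Processing’s random function, not the actual course code):

def gradient_color(r, g, b, y, n):
    # y runs from 0 at the top of the image to 1 at the bottom; subtract
    # randomness proportional to y ** n from each of the RGB values
    amount = 255 * random(1) * y ** n
    return (r - amount, g - amount, b - amount)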

As I was discussing this in class, one student asked what would happen if you used a negative exponent.  I had never thought about this before!  Jack used this idea in his piece, which he said reminds him of looking at a fire.

Day112Jack

It turns out that using a negative exponent creates a gradient beginning with black on the top.  Why is this?  As the image proceeds lower down the screen, the algorithm subtracts values from the RGB parameters proportional to y^n, where y=0 corresponds to the top of the image, and y=1 corresponds to the bottom of the image.

So if the exponent n is positive, there is very little randomness subtracted near the top.  But if the exponent is negative, a lot of randomness is subtracted, since now the numbers near 0 are in the denominator.  Because the RGB values only go up to 255, subtracting a large degree of randomness leaves nothing left — in other words, black.  Now some of the numbers will end up being negative near the top — but putting all negative numbers in a color specification in Processing does in fact give you black.

Another student also worked with yellows and reds to imitate fire in another way.  Instead of making small circles, he made larger circles with quite a bit of overlap, creating a rather different effect.

Day112Ali.png

And Rosalie found an interesting way to create stripes with the algorithm.  I had not seen this effect before.

Day112Rosalie

So that’s it for the first update of the Fall 2017 installment of Mathematics and Digital Art.  As you can see, my students are already being quite creative.  I look forward to seeing their work develop as the semester progresses!


Using Processing for the First Time

While I have discussed how to code in Processing in several previous posts, I realized I have not written about getting Processing working on your own computer.  Naturally I tell students how to do this in my Mathematics and Digital Art course.  But now that I have started a Digital Art Club at the University of San Francisco, it’s worth having the instructions readily accessible.

The file I will discuss may be used to create an image based on the work of Josef Albers, as shown below.

0001

See Day002 of my blog,  Josef Albers and Interaction of Color, for more about how color is used in creating this piece.

As you would expect, the first step is to download Processing.  You can do that here.  It may take a few moments, so be patient.

The default language used in Processing is Java.  I won’t go into details of why I’m not a fan of Java — so I use Python mode.  When you open Processing, you’ll see a blank document like this:

Day110Screen1

Note the “Java” in the upper right corner.  Click on that button, and you should see a menu with the option “Add Mode…”  Select this option, and then you should see a variety of choices — select the one for Python and click “Install.”  This will definitely take a few minutes to download, so again, be patient.

Now you’re ready to go!  Next, find some Processing code written in Python (from my website, or any other example you want to play around with).  For convenience, here is the one I’ll be talking about today:  Day03JosefAlbers.pyde.  Note that it is an Open Office document; WordPress doesn’t let you upload a .pyde file.  So just open this document, copy, and paste into the blank sketch.  Be aware that indentation is important in Python, since it separates blocks of code.  When I copied and pasted the code from the Open Office document, it worked just fine.  But in case something goes awry, I always use four spaces for successive indents.

Now run the sketch (the button with the triangle pointing to the right).  You will be asked to create a new folder; just say yes.  When Processing runs, it often creates additional files (as we’ll see in a moment), and so keeping them all in one folder is helpful.  You should also see the image I showed above; that is the default image created by this Processing program.

Incidentally, the button with the square in the middle stops running your sketch.  Processing occasionally runs into a glitch or crashes, so stopping and restarting your sketch is sometimes necessary.  (I still haven’t figured out why it crashes at random times.)

Next, go to the folder that you just created.  You should see a directory called “frames.”  Inside, you should see some copies of the image.

Day110Screen2

Inside the “draw” function, there is a function call to “saveFrame,” which saves copies of the frames you make.  You can call the folder whatever you want; this is convenient, since you might want to make two different images with the same program.  Just change the folder name you save the images to.

A word about the syntax.  The “####” means the frames will be numbered with four digits, as in 0001.png, 0002.png, etc.  If you need more than 10,000 frames (likely you won’t when first starting), just add more hashtags.  The “.png” is the type of file.  You can use “.tif” as well.  I use “.tif” for making movies, and “.png” for making animated gifs.  There are other file types as well; see the documentation on saveFrame for more details.
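
Concretely, the call inside the draw function looks something like

saveFrame("frames/####.png")

so the frames end up as frames/0001.png, frames/0002.png, and so on.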

Now let’s take a look at making your own image using this program.

Day110Screen3

You’ll notice lines labelled “CHANGE 1” to “CHANGE 6” in the setup and draw functions.  These are the only lines you need to change in order to design your own piece.  You may tweak the other code later if you like.  But especially for beginning programmers, I like to make the first examples very user-friendly.

So let me talk you through changing these lines.  I won’t bother talking about the other code right now — that would take some time!  But you don’t need to know what’s under the hood in order to create some interesting artwork….

CHANGE 1:  The hashtags here, by the way, indicate a comment in your code:  when your program runs, anything after a hashtag is ignored.  This makes it easy to give hints and provide instructions in a program (like telling you what lines to change).  I created a window 800 x 600 pixels; you can make it any size you want by changing those numbers. The “P2D” just means you’re working with a two-dimensional geometry.  You can work in 3D in Processing, but we won’t discuss that today.

CHANGE 2:  The “sqSide” variable indicates how big the squares are, in units of pixels.  The default unit in Processing is always pixels, so if you want to use another geometry (like a Cartesian coordinate system), you have to convert from one coordinate system to another.  I do all this for you in the code; all you need to do is say how large each square is.  And if you didn’t go back and read the Josef Albers piece, by “square,” I mean a unit like this:

Day002Square

CHANGE 3, CHANGE 4:  The variables “sqRows” and “sqCols” indicate, as you would expect, how many rows and columns are in the final image.  Since I have 15 rows and the squares are 30 pixels on a side, the height of the image is 450 pixels.  Since my window is 600 pixels in height, this means there are margins of 75 pixels on the top and bottom.  If your image is too tall (or too wide) for the screen, it will be cropped to fit the screen.  Processing will not automatically resize — once you set the size of the screen, it’s fixed.

CHANGE 5:  The “background” function call sets the color of the background, using the usual RGB values from 0-255.

CHANGE 6:  The first three numbers are the RGB values of the central rectangles in a square unit.  The next three numbers indicate how the background colors of the surrounding rectangles change.  (I won’t go into that here, since I explain it in detail in the post on Josef Albers mentioned above.  The only difference is that in that post, I use RGB values from 0-1, but in the Processing code here, I use values from 0-255.  The underlying concept is the same.)

The last (and seventh) number is the random number seed.  Why is this important?  If you don’t say what the random number seed is (the code does involve choosing random numbers), every time you run the code you will get a different image — the computer just keeps generating more (and different) random numbers in a sequence.  So if you find an image you really like and don’t know what the seed is, you’ll likely never be able to reproduce it again!  And if you don’t quite like the texture created by using one random seed, you can try another.  It doesn’t matter so much if you have many rows and columns, but if you try a more minimalist approach with fewer and larger squares, the random number seed makes a big difference.
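
Pulling the six CHANGE lines out of context, they look something like this (the particular values, and the name of the drawing routine in CHANGE 6, are placeholders rather than the actual code):

def setup():
    size(800, 600, P2D)       # CHANGE 1:  window size, in pixels
    global sqSide, sqRows, sqCols
    sqSide = 30               # CHANGE 2:  side of each square, in pixels
    sqRows = 15               # CHANGE 3:  number of rows
    sqCols = 20               # CHANGE 4:  number of columns

def draw():
    background(200, 200, 200)   # CHANGE 5:  background color, RGB from 0-255
    # CHANGE 6:  central RGB values, three color changes, and the random seed
    albersGrid(208, 49, 65, 30, 20, 10, 4041)   # hypothetical routine name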

OK, now you’re on your own….  This should be enough to get you started, and will hopefully inspire you to learn a lot more about Python and Processing!


To Processing I

I made a decision last week to abandon using Sage (now called CoCalc) as a platform in my Mathematics and Digital Art class.  It was not an easy decision to make, as there are some nice features (which I’ll get to in a moment).  But now any effective use of Sage comes with a cost — the free version uses servers, and you are given this pleasant message:  “This project runs on a free server (which may be unavailable during peak hours)….”

This means that to guarantee access to CoCalc, you need a subscription.  It would not be prohibitively expensive for my class — but as I am committed to being open source, I am reluctant to continue putting sample code on my web page which costs money to use.  Yes, there is the free version — as long as the server is available….

When I asked my students last semester about moving exclusively to Processing, they responded with comments to the effect that using Sage was a gentle introduction to coding, and that I should stick with it.  I fully intended to do this, and got started preparing for the first few days of classes.  I opened my Sage worksheet, and waited for it to load.  And waited….

That’s when I began thinking — I did have experiences last year where the class virtually came to a halt because Sage was too slow.  It’s a problem I no longer wanted to have.

So now I’m going to Processing right from the beginning.  But why was I reluctant in the past?

The issue is that of user space versus screen space.  (See Making Movies with Processing I for a discussion of these concepts.)  With Sage, students could work in user space — the usual Cartesian coordinate system.  And the programming was particularly easy, since to create geometrical objects, all you needed to do was specify the vertices of a polygon.

I felt this issue was important.  Recall the success I felt students achieved by being able to alter the geometry of the rectangles in the assignment about Josef Albers and color.  (See the post on Josef Albers and Interaction of Color for a refresher on the Albers assignment.)

peyton
Peyton’s piece on Josef Albers.

Most students experimented significantly with the geometry, so I wanted to make that feature readily accessible.  It was easy in Sage, the essential code looking something like this:

Screen Shot 2017-08-26 at 11.08.35 AM
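
In outline, it was a nested loop something like this (the vertices and color here are stand-ins, not the actual screenshot):

art = Graphics()
for i in range(10):        # columns
    for j in range(10):    # rows
        # a polygon inside the unit square with lower-left corner (i, j);
        # students sketch a shape on graph paper and type in its coordinates
        art += polygon([(i + 0.2, j + 0.2), (i + 0.8, j + 0.2),
                        (i + 0.8, j + 0.8), (i + 0.2, j + 0.8)],
                       rgbcolor=(0.8, 0.3, 0.2))
art.show(axes=False, aspect_ratio=1)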

What is happening here is that the base piece is essentially an array of rectangles within unit squares, with lower-left corners of the squares at coordinates (i, j).  So it was easy for students to alter the polygons rendered by using graph paper to sketch some other polygon, approximate its coordinates, and then enter these coordinates into the nested loops.

Then Sage rendered whatever image you created on the screen, automatically sizing the image for you.

But here is the problem:  Processing doesn’t render images this way.  When you specify a polygon, the coordinates must be in screen space, whose units are pixels.  The pedagogical issue is this:  jumping into screen space right at the beginning of the semester, when we’re just learning about colors and hex codes, is just too big a leap.  I want the first assignment to focus on getting used to coding and thinking about color, not changing coordinate systems.

Moreover, creating polygons in Processing involves methods — object-oriented programming.  Another leap for students brand new to coding.

The solution?  I had to create a function in Processing which essentially mimicked the “polygon” function used in Sage.  In addition, I wanted to make the editing process easy for my students, since they needed to input more information this time.

Day108pyde1
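
In other words, something like this (with stand-in values):

def setup():
    global sqSide, sqRows, sqCols, xoffset, yoffset
    size(800, 600)                             # screen size, in pixels
    sqSide = 30                                # side of each square, in pixels
    sqRows, sqCols = 15, 20
    xoffset = (width - sqCols * sqSide) / 2    # margins, computed automatically
    yoffset = (height - sqRows * sqSide) / 2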

In Processing — in addition to the number of rows and columns — students must specify the screen size and the length of the sides of the squares, both in pixels.  The margins — xoffset and yoffset — are automatically calculated.

Here is the structure of the revised nested for loops:

Day108pyde2
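
A sketch of that structure, again with stand-in colors and vertices:

def draw():
    for i in range(sqCols):
        for j in range(sqRows):
            fill(208, 49, 65)    # set the fill color first...
            # ...then draw the polygon; the square's position (i, j) is
            # passed as arguments instead of being added to every vertex
            myshape([(0.2, 0.2), (0.8, 0.2), (0.8, 0.8), (0.2, 0.8)], i, j)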

Of course there are many more function calls in the loops — stroke weights, additional fill colors and polygons, etc.  But it looks very similar to the loop formerly written in Sage — even a bit simpler, since I moved the translations to arguments (instead of needing to include them in each vertex coordinate) and moved all the output routines to the “myshape” function.

Again, the reason for this is that creating arbitrary shapes involves object-oriented concepts.  See this Processing documentation page for more details.

Here is what the myshape function looks like:

Day108pyde3
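
In outline, myshape does something like the following (the exact pixel conversion may differ from the actual code, which also has to account for Processing's y-axis running downward):

def myshape(vertices, i, j):
    s = createShape()
    s.beginShape()
    for (x, y) in vertices:
        # shift by the square's position (i, j), scale to pixels, add margins
        s.vertex(xoffset + (i + x) * sqSide, yoffset + (j + y) * sqSide)
    s.endShape(CLOSE)
    shape(s)    # drawn with the most recently set fill color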

The structure is not complicated.  Start by invoking “createShape,” and then use the “beginShape” method.  Adding vertices to a shape involves using the “vertex” method, once for each vertex.  This seems a bit cumbersome to me; I am new to using shapes in Processing, so I’m hoping to learn more.  I had been able to get by with just creating points, lines, rectangles, and circles so far — but that doesn’t give students as much room to be creative as including arbitrary shapes does.

I should point out that shapes can have subshapes (children) and other various attributes.  There is also a “fill” method for shapes, but I have students use the fill function call in the for loop to avoid having too many arguments to myshape.  I also think it helps in understanding the logical structure of Processing — the order in which function calls are invoked matters.  So you first set your fill color, then when you specify the vertices of your polygon, the most recently defined fill color is used.  That subtlety would get lost if I absorbed the fill into the myshape function.

As in previous semesters, I’ll let you know how it goes!  Like last semester, I’ll give updates approximately monthly, since the content was specified in detail in the first semester of the course (see Section L. of 100 Posts! for a complete listing of posts about the Mathematics and Digital Art course).

Throughout the semester, I’ll be continuously moving code from Sage to Processing.  It might not always warrant a post, but if I come across something interesting, I’ll certainly let you know!

On Coding XI: Computer Graphics III, POV-Ray

It has been a while since the last installment of On Coding.  I realized there is still more to say about computer graphics; I mentioned the graphics package POV-Ray briefly in On Coding IX, but feel it deserves much more than just a mere mention.

I’d say I began using POV-Ray in the late 1990’s, though I can’t be more precise.  This is one of the first images I recall creating, and the comments in the file reveal its creation date to be 19 September 1997.

petrie

Not very sophisticated, but I was just trying to get POV-Ray to work, as I (very vaguely) remember.  Since then, I’ve created some more polished images, like the polyhedron shown below.  I’ll talk more about polyhedra in a moment….

XT-18-19-28

First, a very brief introduction.  POV-Ray stands for Persistence of Vision Raytracer.  Ray tracing is a technique used in computer graphics to create amazingly realistic images.  Essentially, the color of each pixel in the image is determined by sending an imaginary light ray from a viewing source (the camera in the image below) and seeing where it ends up in the scene.

800px-Ray_trace_diagram.svg
Image by Henrik, Wikimedia Commons.

It is possible to create various effects by using different light sources, having objects reflect light, etc.  You are welcome to read the Wikipedia page on ray tracing for more details on how ray tracing actually works; my emphasis here will be on the code (of course!) I wrote to create various three-dimensional images.  And while the images you can produce using a ray tracing program are quite stunning at times, there is a trade-off: it takes longer to generate images because the color of each pixel in the image must be individually calculated.  But I do think the results are worth the wait!

My interest in POV-Ray stemmed from wanting to render polyhedra.  The images I found online indicated that you could create images substantially more sophisticated than those created with Mathematica.  And better yet, POV-Ray was (and still is!) open source.  That means all I needed to do was download the program, and start coding….

R_3-5_Dual

POV-Ray is a procedural programming language all in itself.  So to give you an idea of what a typical program looks like, I’ll show you the code which makes the following polyhedron:

tRp_52-3-5

Here’s how it begins.

POV4

The #include directive indicates that the given file is to be imported.  POV-Ray has many predefined colors and literally hundreds of different textures:  glass, wood, stone, gold and other metals, etc.  You just include those files you actually need.  To give you an idea, here is the logo I created for Dodecahedron Day using  silver and stone textures.

logo3

Next are the global settings.  There are actually many global settings which do a lot more than I actually understand…but I just used the two which were included in the file I modeled my code after.  Of course you can play with all the different settings (there is extensive online documentation) and see what effects they have on your final image, but I didn’t feel the need to.  I was getting results I liked, so I didn’t go any further down this path.

Then you set the camera.  This is fairly self-explanatory — position the camera in space, point it, and set the viewing angle.  A fair bit of tweaking is necessary here.  Because of the way the image is rendered, you can’t zoom in, rotate, or otherwise interact with the created image.  So there is a bit of trial-and-error in these settings.

Lighting comes next.  You can use point light sources in any color, or have grids of light sources.  The online documentation suggested the area lighting here (a 5-by-5 grid of lights spanning 5 units on the x-axis and 5 units on the z-axis), and it worked well for me.  Since I wanted the contrast of the colors of the faces of the polyhedra on a black background, I needed a little more light than just a single point source.  You can read all about the “adaptive” and “jitter” parameters here, but I just used the defaults suggested in the documentation and my images came out fine.

There are spotlights and cylindrical lighting sources as well, and all may be given various parameters.  Lighting in Mathematica is quite a bit simpler with fewer options, so lighting was one of the most challenging features of POV-Ray to learn how to use effectively.

So that’s the basic setup.  Now for the geometry!  I can’t go into all the details here, but I can give you the gist of how I proceeded.

POV2

The nested while loops produce the 60 yellow faces of the polyhedron.  Because of the high icosahedral symmetry of the polyhedron, once you specify one yellow face, you can use matrices to move that face around and get the 59 others.

To get the 60 matrices, you can take each of 12 matrices representing the symmetries of a tetrahedron (tetra[Count]), and multiply each by the five matrices representing the rotations around the vertices of an icosahedron (fiverot[Count2]).  There is a lot of linear algebra going on here; the matrices are defined in the file “Mico.inc”, which is included.

The vertices of the yellow faces are given in “VF[16]”, where the “VF” stands for “vertex figure.”  These vertex figures are imported from the file “VFico.inc”.  Lots of geometry going on here, but what’s important is this:  the polygon with vertices listed in VF[16] is successively transformed by two symmetry matrices, and then the result is defined to be an “object” in our scene.  So the nested loops place 60 objects (yellow triangles) in the scene.  The red pentagrams are created similarly.

POV3

Finally, the glass tabletop!  I wanted the effect to be subtle, and found that the built-in texture “T_Glass2” did the trick — I created a square (note the first and last vertices are the same; a quirk of POV-Ray) of glass for the polyhedron to sit on.

POV-Ray does the rest!  The overall idea is this:  put whatever objects you want in a scene.  Then set up your camera, adjust the lighting, and let POV-Ray render the resulting image.

petrie10

Of course this introduction is necessarily brief — I just wanted to give you the flavor of coding with POV-Ray.  There are lots of examples online and within the documentation, so it should not be too difficult to get started if you want to experiment yourself!

Still more on: What is…Inversive Geometry?

Now for the final post on inversive geometry!  I’ve been generating some fascinating images, and I’d like to share a bit about how I make them.

2017-07-19Inversion3v2.png

In order to create such images in Mathematica, you need to go beyond the geometrical definition of inversion and use coordinate geometry.  Let’s take a moment to see how to do this.

Recall that P′, the inverse point of P, is that point on a ray drawn from the origin through P such that

[OP]\cdot[OP']=1,

where [AB] denotes the distance from A to B.  Feel free to reread the previous two posts on inversive geometry for a refresher (here are links to the first post and the second post).

Now suppose that the point P has Cartesian coordinates (x,y).  Points on a ray drawn from the origin through P will then have coordinates (kx, ky), where k>0.  Thus, we just need to find the right k so that the point P'=(kx,ky) satisfies the definition of an inverse point.

This is just a matter of a little algebra:  since [OP′] = k[OP], the definition requires k[OP]^2 = 1, so that k = 1/(x^2+y^2).  The result is

P'=\left(\dfrac{x}{x^2+y^2},\dfrac{y}{x^2+y^2}\right).

What this means is that if you have an equation of a curve in terms of x and y, you can substitute x/(x^2+y^2) everywhere you see x, and substitute y/(x^2+y^2) everywhere you see y, and you’ll get an equation for the inverse curve.

Let’s illustrate with a simple example — in general, the computer will be doing all the work, so we won’t need to actually do the algebra in practice.  We’ll look at the line x=1.  From our previous work, we know that the inverse curve must be a circle going through the origin.

Making the substitution just discussed, we get the equation

\dfrac x{x^2+y^2}=1,

which may be written (after completing the square) in the form

\left(x-\dfrac12\right)^2+y^2=\dfrac14.

It is not hard to see that this is in fact a circle which passes through the point (0,0).

Now we need to add one more step.  In the definition of an inverse point, we had the point O being the origin with coordinates (0,0).  What if O were some other point, say with coordinates (a,b)?

Let’s proceed incrementally.  Beginning with a point (x,y), translate to the point (x-a,y-b) so that the point (a,b) now acts like the origin.  Now use the previous formula to invert:

\left(\dfrac{x-a}{(x-a)^2+(y-b)^2},\dfrac{y-b}{(x-a)^2+(y-b)^2}\right).

Finally, translate back:

\left(a+\dfrac{x-a}{(x-a)^2+(y-b)^2},b+\dfrac{y-b}{(x-a)^2+(y-b)^2}\right).

This is now the inverse of the point (x,y) about the point (a,b).
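
Here is a minimal Python version of this formula:

def invert(x, y, a=0.0, b=0.0):
    # inverse of the point (x, y) about the point (a, b)
    dx, dy = x - a, y - b
    d2 = float(dx * dx + dy * dy)   # squared distance to the center of inversion
    return (a + dx / d2, b + dy / d2)

print invert(1, 2)   # (0.2, 0.4), on the circle (x - 1/2)^2 + y^2 = 1/4 from above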

So what you see in the above image is several copies of the parabola y=x^2 inverted about a series of equally spaced points along the line segment with endpoints (1/2,-1/2) and (3/2,1/2).  This might seem a little arbitrary, but it takes quite a bit of experimentation to find a set of points to invert about in order to create an aesthetically pleasing image.

Of course there is another perspective on accomplishing the same task — just shift the parabolas first, invert about the origin, and then shift back.  This is geometrically equivalent (and the algebra is the same); it just depends on how you want to look at it.

Here is another image created by inverting the parabola y=x^2 about points which lie on a circle.

2017-07-18Inversion1v2

And while we’re on the subject of inverting parabolas, let’s take a moment to discuss the cardioid example we looked at in our last conversation about inversion:

Cardioid2

To prove that this construction of circles actually yields a cardioid, the trick is to take the inverse of a parabola about its focus.  If you do this, the tangent lines of the parabola will then invert to circles tangent to a cardioid.  I won’t go into all the details here, but I’ll outline how the proof goes using the following diagram.

Day104ParabolaProperty

Draw a line (shown in black) tangent to the blue parabola at its vertex; the inverse curves are shown in the same color, but dashed.  Note that the black circle must be tangent to the blue cardioid since the inverse black line is tangent to the inverse parabola.

The small red disk is the focus of the parabola.  Key to the proof is the property of the parabola that if you draw a line from the focus to a point on the black line and then bounce off at a right angle (the red lines), the resulting line is tangent to the parabola.  So the inverse of this line — the red dashed circle — must be tangent to the cardioid.

Since perpendicularity is preserved and the line from the focus inverts to itself (since we’re inverting about the focus), the red circle must be perpendicular to this line — meaning that the line from the focus in fact contains a diameter, and hence the center, of the red circle.  Then using properties of circles, you can show that all centers of circles formed in this way lie on a circle (shown dotted in purple) which is half the size of the black circle.  I’ll leave the details to you….

Finally, I’d like to show a few examples of using the other conic sections.  Here is an image with 80 inversions of an ellipse around centers which lie on a line segment.

2017-07-22Inversion1v2

And here is an example of 100 hyperbolas inverted around centers which lie on a line segment.  Since the tails of the branches of a hyperbola all go to infinity, they all meet at the same point when inverted.

2017-07-22Inversion2v2

So now you know how to work with geometrical inversion from an algebraic standpoint.  I hope seeing some of the fascinating images you can create will inspire you to try creating some yourself!