Imagifractalous! 5: Fractal Binary Trees III

Last week I talked about working with binary trees whose branching ratio is 1 or greater.  The difficulty with having a branching ratio larger than one is that the tree keeps growing, getting larger and larger with each iteration.

But when you create such a tree in software like Mathematica, you can specify the size of the displayed image on the screen.

So the trees above both have branching ratio 2 and branching angle of 70°.  The left image is drawn to a depth of 7, and the right image is drawn to a depth of 12.  I specified that both images be drawn the same size in Mathematica.

But even though they are visually the same size, if you start with a trunk 1 unit in length, the left image is about 200 units wide, while the second is 6000 units wide!

So this prompted us to look at scaling back trees with large branching ratios.  In other words, as trees kept getting larger, scale them back even more.  You saw why this was important last week:  if the scale isn’t right, when you overlap trees with r less than one on top of the reciprocal tree with branching ratio 1/r, the leaves of the trees won’t overlap.  The scale has to be just right.

2017-04-08ris2d.png

So what should these scale factors be?  This is such an interesting story about collaboration and creativity — and how new ideas are generated — that I want to share it with you.

For your usual binary tree with branching ratio less than one, you don’t have to scale at all.  The tree remains bounded, which is easy to prove using convergent geometric series.
2017-01-20ris1

What about the case when r is exactly 1, as shown in the above figure?  At depth n, if you start with a trunk of length 1, the path from the base of the trunk to any leaf consists of exactly n + 1 segments of length 1, so no point of the tree can be farther than n + 1 from the base.  As the branching angle gets closer to 0°, you do approach this bound of n + 1.  So we thought that scaling back by a factor of n + 1 would keep the tree bounded in the case when r is 1.

What about the case when r > 1?  Let’s consider the case when r = 2 as an example.  The segments in any path are of length 1, 2, 4, 8, 16, etc., getting longer each time by a factor of 2.  Going to a depth of n, the total length of a path is 2^{n+1}-1, which is proportional to 2^n.  In general, the total length is the geometric sum \dfrac{r^{n+1}-1}{r-1}, which grows like r^n when r > 1, so scaling back by a factor of r^n would keep the trees bounded as well.

So we knew how to keep the trees bounded, and started including these scaling factors when drawing our images.  But there were two issues.  First, we still had to do some fudging when drawing trees together with their reciprocal trees.  We could still create very appealing images, but we couldn’t use the scale factor on its own.

And second — and perhaps more importantly — Nick had been doing extensive exploration on his computer generating binary trees.  At that point, we had three different cases for scaling factors, depending on whether r < 1, r = 1, or r > 1.  But in Nick’s experience, when he moved continuously from values of r less than 1 to values of r greater than 1, the transition looked very smooth to him.  There didn’t seem to be any “jump” when passing through r = 1, as there was with the scale factors we had at the moment.

I wasn’t too bothered by it, though.  There are lots of instances in mathematics where 1 is some sort of boundary point.  Take geometric series, for example.  Or perhaps there is another boundary point which separates three fundamentally different types of solutions.  For example, consider the quadratic equation

x^2+c=0.

The three fundamentally different solution sets correspond to c < 0, c = 0, and c > 0 (two real solutions, one repeated solution, and no real solutions, respectively).  There is a common example from differential equations, too, though I won’t go into that here.  Suffice it to say, this type of trichotomy occurs rather frequently.

I tried explaining this to Nick, but he just wouldn’t budge.  He had looked at so many binary trees, his intuition led him to firmly believe there just had to be a way to unify these scale factors.

I can still remember the afternoon — the moment — when I saw it.  It was truly beautiful, and I’ll share it in just a moment.  But my point is this:  I was so used to seeing trichotomies in mathematics, I was just willing to live with these three scale factors.  But Nick wasn’t.  He was tenacious, and just insisted that there was further digging to do.

Don’t ask me to explain how I came up with it.  It was like that feeling when you were just holding some small thing, and now you can’t find it.  But you never left the room, so it just has to be there.  So you keep looking, not giving up until you find it.

And there it was:  if the branching ratio was r and you were iterating to a depth of n, you scaled back by a factor of

\displaystyle\sum_{k=0}^n r^k.

This took care of all three cases at once!  When r < 1, this sum is bounded (think geometric series), so the boundedness of the tree isn’t affected.  When r = 1, you just get n + 1 — the same scaling factor we looked at before!  And when r > 1, this sum is large enough to scale back your tree so it’s bounded.
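Here is a minimal sketch in Python of how this unified factor might be computed and applied; the function names are mine, not code from our paper.

```python
def scale_factor(r, n):
    """Unified scaling factor: sum of r^k for k = 0, ..., n."""
    return sum(r**k for k in range(n + 1))

def scaled_branch_length(r, level, n):
    """Length of a branch at the given level of a tree drawn to depth n,
    after dividing by the unified scaling factor."""
    return r**level / scale_factor(r, n)

# The three cases behave consistently:
for r in (0.5, 1.0, 2.0):
    print(r, [round(scale_factor(r, n), 3) for n in range(6)])
# r < 1: factors stay bounded; r = 1: factors are n + 1; r > 1: factors grow like r^n.
```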

Not only that, this scale factor made proving the Dual Tree Theorem so nice.  The scaling factors for a tree with r < 1 and its reciprocal tree with branching ratio 1/r matched perfectly.  No need to fudge!

This isn’t the place to go into all the mathematics, but I’d be happy to share a copy of our paper if you’re interested.  We go into a lot more detail than I ever could in a blog post.

This is how mathematics happens, incidentally.  It isn’t just a matter of finding a right answer, or just solving an equation.  It’s a give-and-take, an exploration, a discovery here and there, tenacity, persistence.  A living, breathing endeavor.

But the saga isn’t over yet….  There is a lot more to say about binary trees.  I’ll do just that in my next installment of Imagifractalous!

Imagifractalous! 4: Fractal Binary Trees II

Now that the paper Nick and I wrote on binary trees was accepted for Bridges 2017 (yay!), I’d like to say a little more about what we discovered.  I’ll presume you’ve already read the first Imagifractalous! post on binary trees (see Day077 for a refresher if you need it).

Recall that in that post, I discussed creating binary trees with branching ratios which were 1 or larger.  Below are three examples of binary trees, with branching ratios less than 1, equal to 1, and larger than 1, respectively.

2016-12-18threetrees.png

It was Nick’s insight to consider the following question:  how are trees with branching ratio r related to those with branching ratio 1/r?  He had done a lot of exploring with graphics in Python, and observed that there was definitely some relationship.

Let’s look at an example.  The red tree is a binary tree with branching ratio r less than one, and the gray tree has a branching ratio which is the reciprocal 1/r.  Both are drawn to the same depth.

2016-12-03-doubletree1.png

Of course you notice what’s happening — the leaves of the trees are overlapping!  This was happening so frequently, it just couldn’t be coincidence.  Here is another example.

tree1

Notice how three copies of the trees with branching ratio less than one are covering some of the leaves of a tree with the reciprocal ratio.

Now if you’ve ever created your own binary trees, you’ll likely have noticed that I left out a particularly important piece of information:  the size of the trunks of the trees.  You can imagine that if the sizes of the trunks of the r trees and the 1/r trees were not precisely related, you wouldn’t have the nice overlap.

Here is a figure taken from our paper which explains just how to find the correct relationship between the trunk sizes.  It illustrates the main idea which we used to rigorously prove just about everything we observed about these reciprocal trees.

tree2

Let’s take a look at what’s happening.  The thick, black tree has a branching ratio of 5/8, and a branching angle of 25°.  The thick, black path going from O to P is created by following the sequence of instructions RRRLL (and so the tree is rendered to a depth of 5).

Now make a symmetric path (thick, gray, dashed) starting at P and going to O.  If we start at P with the same trunk length we started with at O, and follow the exact same instructions, we have to end up back at O.

The trick is to now look at this gray path backwards, starting from O.  The branches now get larger each time, by a factor of 8/5 (since they were getting smaller by a factor of 5/8 when going in the opposite direction).  The size of the gray tree’s trunk, you can readily see, is the length of the last branch drawn in following the black path from O to P.  This must be (5/8)^5 times the length of the black tree’s trunk, since the tree is of depth 5.

The sequence of instructions needed to follow this gray path is RRLLL.  It turns out this is easy to predict from the geometry.  Recall that beginning at P, we followed the instructions RRRLL along the gray path to get to O.  When we reverse this path and go from O to P, we follow the instructions in reverse — except that in going in the reverse direction, what was previously a left turn becomes a right turn, and vice versa.

So all we need to do to get the reverse instructions is to reverse the string RRRLL to get LLRRR, and then change the L‘s to R‘s and the R‘s to L‘s, yielding RRLLL.
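If you want to experiment with this yourself, here is a quick Python sketch of that reversal (a helper of my own, not code from the paper):

```python
def reverse_instructions(path):
    """Reverse an L/R instruction string and swap the turns, giving the
    instructions for traversing the same path in the opposite direction."""
    swap = {"L": "R", "R": "L"}
    return "".join(swap[c] for c in reversed(path))

print(reverse_instructions("RRRLL"))   # RRLLL, as in the figure
```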

There’s one important detail to address:  the fact that the black tree with branching ratio 5/8 is rotated by 25° to make everything work out.  Again, this is easy to see from the geometry of the figure.  Look at the thick gray path for a moment.  Since following the instructions RRLLL means that in total, you make one more left turn than you do right turns, the last branch of the path must be oriented 25° to the left of your starting orientation (which was vertical).  This tells you precisely how much you need to rotate the black tree to make the two paths have the same starting and ending points.

Of course one example does not make a proof — but in fact all the important ideas are contained in this one illustration.  It is not difficult to make the argument more general, and we have successfully accomplished that (though this blog is not the place for it!).

If you look carefully at the diagram, you’ll count exactly 10 leaves that these two trees with reciprocal branching ratios have in common.  There is some nice combinatorics going on here, which is again easy to explain from the geometry.

You can see that these common leaves (illustrated with small, black dots) are at the ends of gray branches which are oriented 25° from the vertical.  Recall that this specific angle came from the fact that there was one more L than there were R‘s in the string RRLLL.

Now if you have a sequence of 5 instructions, the only way to have exactly one more L than R‘s is to have precisely three L‘s (and hence two R‘s).  And the number of ways to have three L‘s in a string of length 5 is just

\displaystyle{5\choose3}=10.

Again, these observations are easy to generalize and prove rigorously.
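For instance, here is a short Python sketch (using only the standard library) that counts the strings with exactly one more L than R for any depth:

```python
from math import comb

def common_leaf_count(n):
    """Number of length-n L/R strings with exactly one more L than R;
    this is only possible when n is odd."""
    if n % 2 == 0:
        return 0
    return comb(n, (n + 1) // 2)

print(common_leaf_count(5))   # 10, matching the figure
```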

And where does this take us?

canopies.png

On the right are 12 copies of a tree with a branching ratio of r less than one and a branching angle of 30°, and on the left are 12 copies of a tree with the reciprocal branching ratio 1/r, also with a branching angle of 30°.  All are drawn to depth 4, and the trunks are appropriately scaled as previously discussed.

These sets of trees produce exactly the same leaves!  We called this the Dual Tree Theorem, which was the culmination of all these observations.  Here is an illustration with both sets of trees on top of each other.

2016-12-14gtree.png

As intriguing as this discovery was, it was only the beginning of a much broader and deeper exploration into the fractal world of binary trees.  I’ll continue a discussion of our adventures in the next installment of Imagifractalous!

Imagifractalous! 3: Fractal Binary Trees

I’ve taken a break from Koch-like curves and p-adic sequences for an arboreal interlude….  Yes, there’s a story about why — I needed to work with Nick on a paper he was writing for Bridges — but that story isn’t quite finished yet.  When it is, I’ll tell it.  But for now, I thought I’d share some of the fascinating images we created along the way.

b17depth6-7v2

Let’s start with a few examples of simple binary trees.  If you want to see more, just do a quick online search — there are lots of fractal trees out there on the web!  The construction is pretty straightforward.  Start by drawing a vertical trunk of length 1.  Then turn left and right by some specified angle, and draw a branch of some length r < 1 in each direction.  Recursively repeat from where you left off, always adding two smaller branches at the tip of each branch you’ve already drawn.

If you look at these two examples for a moment, you’ll get the idea.  Here, the angle used is 40 degrees, and the ratio is 5/8.  On the left, there are 5 iterations of the recursive drawing, and there are 6 iterations on the right.
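If you’d like to generate trees of your own, here is a minimal recursive sketch in Python; the function and parameter names are mine (our actual code was more elaborate), and the segments it collects can be handed to whatever plotting library you prefer.

```python
from math import sin, cos, radians

def binary_tree(x, y, heading, length, angle, r, depth, segments):
    """Recursively collect the segments of a binary tree.  The heading is
    measured in degrees counterclockwise from the positive x-axis."""
    if depth < 0:
        return
    x2 = x + length * cos(radians(heading))
    y2 = y + length * sin(radians(heading))
    segments.append(((x, y), (x2, y2)))
    # Branch left and right from the tip of the segment just drawn.
    binary_tree(x2, y2, heading + angle, length * r, angle, r, depth - 1, segments)
    binary_tree(x2, y2, heading - angle, length * r, angle, r, depth - 1, segments)

segments = []
binary_tree(0, 0, 90, 1.0, 40, 5/8, 5, segments)   # vertical trunk of length 1
print(len(segments))   # 63 segments: the trunk plus 5 levels of branches
```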

Here’s another example with a lot more interaction among the branches.

2016-12-01-tree1.png

This type of fractal binary tree has been studied quite a bit.  There is a well-known paper by Mandelbrot and Frame which discusses these trees, but it’s not available without paying for it.  So here is a paper by Pons which addresses the same issues, but is available online.  It’s an interesting read, but be forewarned that there’s a lot of mathematics in it!

2017-01-20ris1.png

In trying to understand various properties of these fractal trees, it’s natural to write code which creates them.  But here’s the interesting thing about writing programs like that — once they’re written, you can input anything you like!  Who says that r has to be less than 1?  The tree above is a nice example of a fractal tree with r = 1.  All the branches are of the same length, and there is a lot of overlap.  This helps create an interesting texture.

But here’s the catch.  The more iterations you go, the bigger the tree gets.  In a mathematical sense, the iterations are said to be unbounded.  But when Mathematica outputs a graphic, it is automatically scaled to fit your viewing window.  So in practice, you don’t really care how large the tree gets, since it will automatically be scaled down so the entire tree is visible.

It is important to note that when r < 1, the trees are bounded, so they are easier to study mathematically.  The paper Nick and I are working on scales unbounded trees so they are more accessible, but as I said, I’ll talk more about this in a later post.

2017-01-21rgt1

Here are a few examples with r > 1.  Notice that as there are more and more iterations, the branches keep getting larger.  This creates a very different type of binary tree, and again, a tree which keeps getting bigger (and unbounded) as the number of iterations increases.  But as mentioned earlier, Mathematica will automatically scale an image, so these trees are easy to generate and look at.

Nick created the following image using copies of binary trees with r approximately equal to 1.04.  The ever-expanding branches allow for the creation of interesting textures you really can’t achieve when r < 1.

blackwhitetree

Another of my favorites is the following tree, created with r = 1.  The angle used, though, is 90.9 degrees.  Making the angle just slightly larger than a right angle creates an interesting visual effect.

2017-01-18binarytree

But the exploration didn’t stop with just varying r so it could take on values 1 or greater.  I started thinking about other ways to alter the parameters used to create fractal binary trees.

For example, why does r have to stay the same at each iteration?  Well, it doesn’t!  The following image was created using values of r which alternate between iterations.

2016-12-21-ralt.png

And the values of r can vary in other ways from iteration to iteration.  There is a lot more to investigate, such as generating a binary tree from any sequence of r values.  But studying these mathematically may be somewhat more difficult….

Now in a typical binary tree, the angle you branch to the left is the same as the angle you branch to the right.  Of course these two angles don’t have to be the same.  What happens if the branching angle to the left is different from the branching angle to the right?  Below is one possibility.

2016-12-25-2a1.png

And for another possibility?  What if you choose two different angles, but have the computer randomly decide which is used to branch left/right at each iteration?  What then?

2017-01-02randangles2.png

Here is one example, where the branching angles are 45 and 90 degrees, but which is left or right is chosen randomly (with equal probability) at each iteration.  Gives the fractal tree a funky feel….

You might have noticed that none of these images are in color.  One very practical reason is that for writing Bridges papers, you need to make sure your images look OK printed in black-and-white, since the book of conference papers is not printed in color.

But there’s another reason I didn’t include color images in this post.  Yes, I’ve got plenty…and I will share them with you later.  What I want to communicate is the amazing variety of textures available by using a simple algorithm to create binary trees.  Nick and I never imagined there would be such a fantastic range of images we could create.  But there are.  You’ve just seen them.

Once the Bridges paper is submitted, accepted (hopefully!), and revised, I’ll continue the story of our arboreal adventure.  There is a lot more to share, and it will certainly be worth the wait!

Imagifractalous! 2: p-adic sequences

In Imagifractalous! 1, I talked about varying parameters in the usual algorithm for creating the Koch curve to produce a variety of images, and casually mentioned p-adic valuations.  What happened was about a year after I began exploring these interesting images, I did a Google search (as one periodically does when researching new and exciting topics), and stumbled upon the paper Arithmetic Self-Similarity of Infinite Sequences.

It was there that I recognized the sequence of 0’s and 1’s I’d been using all along, and this sequence was called a 2-adic valuation (mod 2).

Here’s the definition:  the p-adic valuation is the sequence whose nth term is the exponent of the highest power of p which divides n.  So the 2-adic valuation begins

0, 1, 0, 2, 0, 1, 0, 3, 0, 1, 0, 2, 0, 1, 0, 4, ….

Of course all terms with odd n are 0, and for even n, you just take the exponent of the highest power of 2 dividing that even number.  And you can take this sequence (mod 2), or mod anything you like.
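Here is a short Python sketch of the valuation (the helper name is mine), which reproduces the sequence above:

```python
def padic_valuation(n, p=2):
    """Exponent of the highest power of p dividing n (for n >= 1)."""
    count = 0
    while n % p == 0:
        n //= p
        count += 1
    return count

print([padic_valuation(n) for n in range(1, 17)])
# [0, 1, 0, 2, 0, 1, 0, 3, 0, 1, 0, 2, 0, 1, 0, 4]
print([padic_valuation(n) % 2 for n in range(1, 17)])   # the same sequence taken mod 2
```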

I naturally did what any right-thinking mathematician would do — looked up p-adic sequences in The On-Line Encyclopedia of Integer Sequences.  Now it’s not that p-adic valuations are all that obscure, it’s just that I had never encountered them before.

Of course there it was:  A096268.  And if you read through the comments, you’ll see one which states that this sequence can be used to determine the angle you need to turn to create the Koch snowflake.

I wasn’t particularly discouraged — this sort of thing happens all the time when exploring new mathematical ideas.  You just get used to it.  But something unexpected happened.

2016-10-16-7adic2.png
Image based on a 7-adic valuation.

I started experimenting with other p-adic valuations mod 2 (since I had two choices of angles), and found similar interesting behavior.  Not only that, p didn’t have to be prime — any integer would do.

But most papers about p-adic valuations assumed p was prime.  Why is that?  If \nu_p is a p-adic valuation and p is prime, it’s not hard to show that

\nu_p(mn)=\nu_p(m)+\nu_p(n),

an often-used property in developing a theory of p-adic valuations.  But it only takes a moment to see that

\nu_6(4)=0,\quad \nu_6(9)=0,\quad \nu_6(4\cdot9)\ne\nu_6(4)+\nu_6(9).

Bad news for the p-adic theorists, but the fractal images couldn’t seem to care whether p was prime or not….
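A quick check of the counterexample, as a self-contained sketch:

```python
def nu(n, p):
    """Exponent of the highest power of p dividing n."""
    count = 0
    while n % p == 0:
        n //= p
        count += 1
    return count

# nu_6 fails to be additive over products:
print(nu(4, 6), nu(9, 6), nu(4 * 9, 6))   # 0 0 2, so nu_6(36) != nu_6(4) + nu_6(9)
```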

2016-10-31-12adic1.png
Beginning segment of a curve based on a 12-adic valuation.

So I didn’t plunge into researching p-adic valuations, since I needed a treatment which included p composite, which didn’t seem to be out there.

But here’s the neat part.  Most of the work I’d done to prove something already known — that a 2-adic valuation (mod 2) could be used to produce a Koch curve — could be used to study generic p.  So I was able to make fairly good progress in a relatively short amount of time, since I’d already thought about it a lot before.  I suspect it would have taken me quite a bit longer if I’d just casually read about the 2-adic result rather than prove it myself.

The progress made was similar to that of the 2-adic case — number of segments in each arm in the symmetric case, how many solutions given a particular order of symmetry, and so forth.

Now my fractal brain was truly revved up!  I enjoyed creating images using various p-adic valuations — the variety seemed another dimension of endless.  So I started brainstorming about ways to even further diversify my repertoire of fractal images.

The first two weeks of November were particularly fruitful.  Two ideas seemed to coalesce.  The first revolved around an old unexplored question:  what happened when you didn’t only change the angles in the algorithm to produce the Koch snowflake, but you also changed the lengths?

Of course this seemed to make the parameter space impossibly large, but I was in an adventurous mood, and the only thing at stake was a few moments with Mathematica generating uninteresting images.

2016-10-07-3adic.png
Image based on a 3-adic valuation with different edge lengths.

 

But what I found was that if an image closed up with a particular symmetry, then as long as the sequence of edge lengths was appropriately periodic, the image with different edge lengths also closed up with the same order of symmetry!

This was truly mind-boggling at first.  But after looking at lots of images and diving into the algorithm, it’s not all that improbable.  You can observe that in the image above, segments of the same length occur in six different orientations which are 60 degrees apart, and so will ultimately “cancel out” in any final vector sum and take you right back to the origin.

Now I don’t have the precise characterization of “appropriately periodic” as yet, but I know it’s out there.  Just a matter of time.

The second big idea at the beginning of November involved skimming through the paper on arithmetic self-similarity mentioned above.  Some results discussed adding one sequence to another, and so I wondered:  what if you added two p-adic valuations together and then took the result (mod 2)?

2016-11-22-8+64adic.png
Image based on adding 8-adic and 64-adic valuations with different edge lengths.

Well, preliminary results were promising only when the p-adic valuations involved were both powers of the same p, like in the image above (which also involves different edge lengths).

These ideas are only in a very preliminary stage — it’s the perennial problem of, well, waiting….  It may look like adding 2-adic and 3-adic valuations doesn’t get you anywhere, but maybe that’s just because you need so many more iterations to see what’s actually happening.  So there’s lots more to explore here.

2016-10-31-42adic1v3b.png
Beginning segment of an image based on a 42-adic valuation.

So as the parameter space gets larger — multiple different lengths, adding several different p-adic valuations — the variety becomes infinitely diverse, and the analysis becomes that much more involved.

But that makes it all the more intriguing.  And it will be all the more rewarding when I finally figure everything out….

 

Imagifractalous! 1: How it all began.

September 2, 2015 — that was the day Thomas sent an email to the faculty in the math department asking if anyone was willing to help him learn something about fractals.  And that was when it all began.

Now I’ve told this story in various forms over the past year or so on my blog, but I want to begin a thread along a different direction. I’d like to give a brief chronological history of my work with fractals, with an eye toward the fellow mathematician interested in really understanding what is going on.

I’ll be formal enough to state fairly precisely the mathematical nature of what I’m observing, so as to give the curious reader an idea of the nature of the mathematics involved.  But I’ll skip the proofs, so that the discussion doesn’t get bogged down in details.  And of course provide lots of pictures….

You might want to go back to read my post of Day007 — there is a more detailed discussion there.  But the gist of the post is the question posed by Thomas at one of our early meetings:  what happens when you change the angles in the algorithm which produces the Koch curve?

What sometimes happens is that the curve actually closes up and exhibits rotational symmetry.  As an example, when the recursive routine

F   +40   F   +60   F   +40   F

is implemented, the following sequence of segments is drawn.

Now this doesn’t look recursive at all, but rather iterative.  And you certainly will have noticed something curious — some sequences of eight segments, which I call arms, are traversed twice.

All of this behavior can be precisely described — all that’s involved is some fairly elementary number theory.  The crux of the analysis revolves around the following way of describing the algorithm:  after drawing the nth segment, turn +40 degrees if the exponent of the highest power of 2 dividing n is even, and turn +60 degrees if it’s odd.  Then move forward.
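Here is how that rule might look in Python; this is just a sketch with my own bookkeeping, using the +40/+60 angle pair from the example above.

```python
from math import sin, cos, radians

def nu2(n):
    """Exponent of the highest power of 2 dividing n (the 2-adic valuation)."""
    count = 0
    while n % 2 == 0:
        n //= 2
        count += 1
    return count

def koch_like_points(angles, num_segments, step=1.0):
    """Vertices of the curve: after drawing the nth segment, turn by
    angles[0] if the 2-adic valuation of n is even, angles[1] if odd."""
    x, y, heading = 0.0, 0.0, 0.0
    points = [(x, y)]
    for n in range(1, num_segments + 1):
        x += step * cos(radians(heading))
        y += step * sin(radians(heading))
        points.append((x, y))
        heading += angles[nu2(n) % 2]
    return points

points = koch_like_points((40, 60), 400)   # feed these to any plotting routine
```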

I would learn later about p-adic valuations (this is just a 2-adic valuation), but that’s jumping a little ahead in the story.  I’ll just continue on with what I observed.

What is also true is that the kth arm (in this case, the kth sequence of eight segments) is retraced precisely when the exponent of the highest power of 2 dividing k is odd.  This implies the following curious fact:  the curve in the video is traced over and over again as the recursion deepens, but never periodically.  This is because the 2-adic valuation of the positive integers isn’t periodic.

So I’ve written up some results and submitted them to a math journal.  What I essentially do is find large families of angle pairs (like +40 and +60) which close up, and I can describe in detail how the curves are drawn, what the symmetry is, etc.  I can use this information to create fractal images with a desired symmetry, and I discuss several examples on a page of my mathematical art website.

As one example, for my talks in Europe this past summer, I wanted to create images with 64 segments in each arm and 42-fold symmetry.  I also chose to divide the circle into 336 parts — subdivision into degrees is arbitrary.  Or another way of looking at it is that I want my angles to be rational multiples of \pi, and I’m specifying the denominator of that fraction.

Why 336?  First, I needed to make sure 42 divided evenly into 336, since each arm is then 8/336 away from the previous one (although they are not necessarily drawn that way).  And second, I wanted there to be enough angle pairs so I’d have some choices.  I knew there would be 96 distinct images from the work I’d already done, so it seemed a reasonable choice.  Below is one of my favorites.  The angles used here are 160 and 168 of the 336 parts the circle is divided into.

koch_336_6_160_168

Now my drawing routine has the origin at the center of the squares in the previous two images, so that the rotational symmetry is with respect to the origin.  If you think about it for a moment, you’ll realize that if both angles are the same rational multiple of \pi, you’ll get an image like the following, where one of the vertices of the figure is at the origin, but the center of symmetry is not at the origin.

fractal_40_60_4

Of course it could be (and usually is!) the case that the curve does not close — for example, if one angle is a rational multiple of \pi, and the other is an irrational multiple of \pi.  Even when they do close, some appealing results are obtained by cropping the images.

koch-60-254web

So where does this bring me?  I’d like a Theorem like this:  For the recursive Koch algorithm described by

F   \alpha_0   F   \alpha_1   F   \alpha_0   F,

the curve closes up precisely when (insert condition on \alpha_0 and \alpha_1), and has the following symmetry (insert description of the symmetry here).

I’m fairly confident I can handle the cases where the center of symmetry is at the origin, but how to address the case when the center of symmetry is not the origin is still baffling.  The only cases I’ve found so far are when \alpha_0=\alpha_1, but that does not preclude the possibility of there being others, of course.

At this point, I was really enjoying creating digital artwork with these Koch-like images, and was also busy working on understanding the underlying mathematics.  I was also happy that I could specify certain aspects of the images (like the number of segments in each arm and the symmetry) and find parameters which produced images with these features.  By stark contrast, when I began this process, I would just randomly choose angles and hope for the best!  I’d consider myself lucky to stumble on something nice….

So it took about a year to move from blindly wandering around in a two-dimensional parameter space (the two angles to be specified) to being able to precisely engineer certain features of fractal images.  The coveted “if-and-only-if” Theorem which says it all is still yet to be formulated, but significant progress toward that end has been made.

But then my fractal world explodes by moving from 2-adic to p-adic.  That for next time….

 

Color II: Opacity and Josef Albers

Remember from last week we were discussing the following image:

AlbersSquares1

What colors are the squares?  Recall that when we allowed one or both squares to be transparent to some degree, there were many possible answers to this question.  Last week, we found out all possible color/opacity combinations for the purple square alone.

First, we’ll examine the case that the pink square is transparent and on top of the purple square.  Whether the purple square is transparent actually doesn’t matter, so we’ll be perfectly fine if we assume the purple square has RGB values (0.6,0.5,1) with opacity 1.

To begin, we need to establish the apparent color of the pink square.  If I load this image into Photoshop and use the eyedropper tool, I get integer RGB values of  (229, 51, 255).  Dividing these by 255 to convert to values between 0 and 1, I get (0.898039, 0.2, 1).  I actually used RGB values of (0.9,0.2,1) to create the image, so the eyedropper tool did its job….

Now let’s call the color of the pink square (R,G,B), and the opacity a.  Recall the formula from last week for the result of looking through a transparent color:

a\, C_1+(1-a)\,C_2.

In our case, C_1 is the color of the pink square, and C_2 is the color of the purple square.  So we get the equation

(0.9,0.2,1)=a\,(R,G,B)+(1-a)\,(0.6,0.5,1).

As before, we need to break this down into three separate equations, one for each component.  For exactly the same reasons as last week, we must have B=1 (go back and reread this argument if you forgot).  The equations for the Red and Green components are

0.9=a\,R+(1-a)\,0.6,\qquad 0.2=a\,G+(1-a)\,0.5,

which may be rearranged to yield

R=0.6+\dfrac{0.3}a,\qquad G=0.5-\dfrac{0.3}a.

Since color values lie between 0 and 1, we see from the Red equation that we must have

0\le\dfrac{0.3}a\le0.4.

Note that when this is true, the Green value also lies between 0 and 1, so this is all we need to check.  A little simplification shows that this implies 0.75\le a, so the pink square can be transparent as long as

0.75\le a\le1.

We can then use the formulas above to find the Red and Green values (remembering that Blue is always 1).  Below are the possibilities in RG space:

RGSpace2

The point X corresponds to the value a=1, while the point Y corresponds to a=0.75.  Note that while the possible points in RG space lie on a line segment (it is easy to see from the above formulas that R+G=1.1), the points on the line segment do not vary linearly with a since the a occurs in the denominator in the above formulas.
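Here is a quick numerical sketch of that family of solutions (the variable and function names are mine):

```python
def pink_color(a):
    """Color (R, G, B) a pink square of opacity a must have so that, sitting
    on top of the purple (0.6, 0.5, 1), it appears as (0.9, 0.2, 1)."""
    return (round(0.6 + 0.3 / a, 3), round(0.5 - 0.3 / a, 3), 1)

for a in (0.75, 0.8, 0.9, 1.0):
    print(a, pink_color(a))
# a = 1 gives the point X = (0.9, 0.2, 1), and a = 0.75 gives the point Y = (1.0, 0.1, 1).
```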

So now we’ve looked at all the possibilities when the pink square is transparent and on top of the purple square.  What if the purple square is transparent and on top of the pink square?

We first note that half the work is already done, since we worked out the possibilities for a transparent purple square last week.  Here is what we obtained, where again the opacity of the purple square is denoted by a:

R=\dfrac{a-0.4}a,\quad G=\dfrac{a-0.5}a,\quad B=1.

This corresponds to the color C_1 in the formula for opacity.  Now let (R,G,B) denote the color of the pink square underneath, which will be C_2.  Using the opacity formula, we obtain the equation

(0.9,0.2,1)=a\,\left(\dfrac{a-0.4}a,\dfrac{a-0.5}a,1\right)+(1-a)\,(R,G,B).

This may look a little complicated, but it turns out we only need to look at the Red component.  Looking at the Red color value, we get

0.9=a\left(\dfrac{a-0.4}a\right)+(1-a)\,R,

which after multiplying out and rearranging results in

R=\dfrac{1.3-a}{1-a}.

Can you see any problem with this formula for R?  Since the opacity must be between 0 and 1 — in other words, 0\le a\le1 — the numerator of this expression will always be greater than the denominator.  This means that the Red color value would have to be greater than 1, which we know is not possible!

Our conclusion?  It is not possible that the two squares can be obtained by a pinkish square beneath a transparent purple square.  Essentially, the pink is “too red.”  In order to make the pink show through the purple, the opacity of the purple would have to be too close to 0, which would then mean that we’re not seeing enough blue.
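A quick numerical check of this impossibility, again just a sketch:

```python
# R = (1.3 - a) / (1 - a) exceeds 1 for every opacity a strictly between 0 and 1,
# so no valid Red value exists for a pink square beneath a transparent purple square.
for a in (0.1, 0.3, 0.5, 0.7, 0.9, 0.99):
    print(a, round((1.3 - a) / (1 - a), 3))
```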

In general, this is not easy to just see by quickly glancing at an image.  But if we use the formula for opacity and are careful with our calculations, we can prove that certain color/transparency combinations are impossible.

And what about a more complex figure?

Josef_Albers's_painting_'Homage_to_the_Square',_1965

Well, there are four different squares to consider, and several different possible layerings.  But it’s even more complicated than that.

What you’re seeing is an image on your color monitor or phone, which is on my website.  I got the image from the Wikipedia commons.  Someone uploaded a digital file of the image, which was either taken of the original piece, or digitized from a photograph of the piece.  Which might have been from a book, published long enough after Albers finished the piece that the photo was actually of a faded original.

So what we’re actually seeing is only an approximation to Albers’ original painting.  To make the analysis more realistic, we’d have to assume that the apparent color we’re seeing is within a certain tolerance of the original.  Meaning each color doesn’t have just one value, but a range of possible values.

We won’t go into this more complicated issue today.  But I hope you now appreciate that an image of just a few squares may be much more intriguing than you might originally think!

Color I: Opacity and Josef Albers

What do you see?

AlbersSquares1

Looks like a pink square on top of a purple square.  But after hearing a talk about Josef Albers’ work at Bridges 2016 a few weeks ago, I realized there is more than one way to look at this image.

Maybe the square is actually more red, but transparent — so that the purple showing through makes it pink.  Or maybe the pink square is behind the purple square, and it’s the purple square which is transparent.  Or maybe the purple square isn’t a square at all, but a frame — that is, a square with a hole in it!

This first post in a series on color will explore this apparently simple figure.  The ultimate goal will be to analyze images in Josef Albers’ series Homage to the Square.  But as mentioned last week, James Mai found that there are 171 possible combinations of opaqueness, transparency, frames, and squares in this series!  So we’ll start with a basic example.

But first, we have to understand how opacity works in computer graphics.  We’ve used RGB colors in several posts so far, but rarely mentioned opacity.  When this is included, the color system is referred to as RGBA, where the “A” represents the opacity (or alternatively, the transparency), which is also sometimes called the alpha.  A value of a=0 means that the color is completely transparent, so it doesn’t affect the image at all, while a value of a=1 means the color is completely opaque, meaning you can’t see through it at all.

As an example, in the series below, the squares have RGB values of (0.2, 0, 1.0), and A values of 0.25, 0.5, 0.75, and 1.0 (going from left to right).

OpacityExample

But these squares could also have been created without any transparency at all!  What this means is that if you include transparency/opacity, there are many different ways to specify a color that appears on your screen, not just one.  So in an Albers piece with four squares, where all of them may have some degree of transparency, the analysis can be quite difficult.

Josef_Albers's_painting_'Homage_to_the_Square',_1965

Next, we have to understand how the opacity is used to create the colors you see.  If color C_1 has opacity a and is on top of a color C_2, the apparent or observed color is then

a\,C_1+(1-a)\,C_2.

If you thought to yourself, “Oh, it’s linear interpolation again!”, you’d be right!  In other words, the observed color is a linear combination of the transparent color and the background color.
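In code, this compositing rule is just the interpolation applied componentwise; here is a small Python sketch (the function name is mine):

```python
def over(c1, c2, a):
    """Apparent color when color c1 with opacity a sits on top of color c2."""
    return tuple(a * x1 + (1 - a) * x2 for x1, x2 in zip(c1, c2))

# The second square from the opacity example above: (0.2, 0, 1.0) at a = 0.5 over white.
print(over((0.2, 0, 1.0), (1, 1, 1), 0.5))   # (0.6, 0.5, 1.0)
```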

Keep in mind that this is an idealized situation.  It is very difficult to make a correspondence with how we “actually” see.  If you were looking at an object through a square of colored glass — maybe sunglasses — there would be concerns about the distance between the glass and the object, multiple lighting sources, etc.  For now, we just want to understand the mathematics of using opacity in computer graphics.

Another complication arises in considering the following two squares — a pure red square on top of a pure blue square.

AlbersSquares2

In this case, it is impossible that the red square has any degree of transparency.  If it did, some of the blue would show through.  If the red square had a transparency of a, our linear interpolation formula would give an apparent color of

a\,(1,0,0)+(1-a)\,(0,0,1)=(a,0,1-a).

This means that if there were any transparency at all, there would be a Blue component in the observed color, which is not possible since pure Red has RGB values of (1,0,0), with no Blue at all.  So before you go about calculating opacity, you’ve got to decide if it’s even possible!

Let’s start with the purple square.

PurpleSquare

What color could this square be, with what transparency?  Let’s call the color C_1 and the opacity a.  Since the background (screen color) is white, we use C_2=(1,1,1).

To assess what the color “looks like,” you’d go to Photoshop and use the eyedropper tool, for example — or any other application which allows you to point to a color and get the RGB values.  In this case, you’d get (0.6, 0.5, 1.0).  Basically, what you get with such tools are RGB values, assuming an opacity of a=1.

But of course we’re wondering what is possible if the opacity is not 1.  If we denote C_1 by (R,G,B), we get

(0.6, 0.5, 1.0) = a\,(R,G,B)+(1-a)\,(1,1,1),

which simplifies to

(0.6, 0.5, 1.0)=(a\,R+1-a, a\,G+1-a, a\,B+1-a).

This gives us three equations in four unknowns, which makes sense — we know the answer is likely not unique since it may be possible to create the square using different opacities.

Note that the third equation is

1.0 = a\,B+1-a,

which when rearranged gives us

a\,(B-1)=0.

Of course a cannot be 0, or we wouldn’t see anything!  This means B=1, which makes sense since if we’re interpolating between B and 1, the only way to get a result of 1.0 would be if B=1.

What about the other equations,

0.6=a\,R+1-a,\qquad 0.5=a\,G+1-a?

Solving them, we get

R=\dfrac{a-0.4}a,\qquad G=\dfrac{a-0.5}a.

Keep in mind that the R, G, and a values must all be between 0 and 1.  So we must at least have

a\ge0.5

to guarantee that all values are positive.  Further, observe that for any a value between 0.5 and 1 (inclusive), the numerators of both color values are less than the denominators, and are therefore less than 1.

So this means that the purple square may be created using the color values

R=\dfrac{a-0.4}a,\quad G=\dfrac{a-0.5}a,\quad B=1,

as long as the opacity satisfies 0.5\le a\le1.  This is illustrated graphically below, where the solid line segment represents all possible colors for the square.

RGSpace

Here, the horizontal axis is the Red component and the vertical axis is the Green component.  Since we observed earlier that the Blue component must always be 1, the lower-left corner would be (0,0) in RG space (Blue), and the upper-right corner would be (1,1) (White).  When a=1, we obtain the point X with coordinates (0.6, 0.5) in RG space.  This makes sense, since we observed the color to have RGB values (0.6, 0.5, 1.0).

When a=2/3\approx0.67, we obtain the point Y with coordinates (0.4, 0.25) in RG space.  This means the purple square could be obtained using RGB values of (0.4, 0.25, 1.0) and opacity 2/3.

And when a=0.5, we obtain the point Z with coordinates (0.2, 0) in RG space.  This means the purple square could be obtained using RGB values of (0.2, 0, 1.0) and opacity 0.5.
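All three points are easy to check against the formulas above; a quick sketch:

```python
def purple_color(a):
    """RGB values that produce the purple (0.6, 0.5, 1.0) over white at opacity a."""
    return (round((a - 0.4) / a, 3), round((a - 0.5) / a, 3), 1.0)

for a, label in ((1.0, "X"), (2/3, "Y"), (0.5, "Z")):
    print(label, purple_color(a))
# X (0.6, 0.5, 1.0), Y (0.4, 0.25, 1.0), Z (0.2, 0.0, 1.0)
```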

Note that even though Y is  halfway between X and Z in RG space, the a value is not halfway between 0.5 and 1.0.  This is because of the a in the denominators in the fractions above giving solutions for R and G.

There’s a nice geometrical interpretation of this picture, too.  If one color is an interpolation of two others, it must lie on the segment joining those two others.  So we need all colors C such that X is on the segment between C and (1,1).  This is just that part of RG space which is a continuation of the line starting at (1,1) and going past X.

Now we could have just used this geometrical idea from the beginning — but I wanted to work out the mathematics so you could see how to use the definition of opacity to solve color problems.  And of course sometimes the geometry is not so obvious, so you need to start with the algebraic definition.

So working with opacity isn’t too difficult as long as you understand what your computer is doing.  In the next post on Color, we’ll tackle the issue of the pink square….

Bridges: Mathematics and Art III. Jyvaskyla, Finland!

After a 24-hour journey yesterday, I finally returned home from my month-long trip to Europe.  Not that Vienna and Florence weren’t fantastic, but I must admit that Bridges 2016 was the highlight of my trip!  I’d like to share some especially memorable moments of the conference this week.

Of course the most memorable is the annual Art Exhibition.

IMAG3283

I’ll discuss some of the pieces in this post, but there is no way to include them all!  Luckily, there are online galleries for all the Bridges conferences.  You can access them at the  Bridges online art galleries, although the gallery for Bridges 2016 is not up yet (but will be soon).

Two of the most memorable displays in the exhibit (no bias here, of course!) are my two pieces,

IMAG3259

and my student Nick Mendler’s two pieces (the second piece is shown later).

IMAG3236

It’s a real pleasure just to walk among the artwork and admire the intricacy of the pieces close-up.  Pictures absolutely do not do justice to the experience….  Some pieces which particularly stood out for me were these spheres created by Kiyoko Urata.

IMAG3438

Look closely at how they are made.  The medium is silk thread.  Yes, these are knitted!  These models won the award for Best Craftsmanship, and they definitely deserved it.

Many of the talks in the Bridges conferences were about works in the exhibition, but not all.  I was particularly fascinated by a talk given by James Mai of Illinois State University (click here to read the full paper).  He discussed Josef Albers’ series Homage to the Square; one piece in the series is shown below (image courtesy of Wikipedia).  I have written about Josef Albers before (see Day002), so you might imagine I was rather interested in seeing this talk.

Josef_Albers's_painting_'Homage_to_the_Square',_1965

I’d seen these images before, but I just thought they were color studies.  The point of the talk was to suggest that there are many possible ways to interpret these figures.  For example, a naive interpretation is just to look at this image as four squares, one on top of another.  What fascinated me was how many other ways there are to look at this series by Albers.

For example, what if some of the squares are transparent?  Then some of the colors you see are the result of other colors showing through some of the squares.  But which ones?  What combinations of transparency are possible?  If we take it a step further, we might imagine that there aren’t four squares, but perhaps some frames — that is, squares with smaller squares cut out.  Further still, one or more of these frames might also be transparent.

In total, Albers produced over 1000 images of four different configurations of squares in this series (see the paper for details).  In the talk, James Mai found that there are in fact 171 different combinations of frames/transparency that are possible!  This is an entirely different level of complexity that I never imagined possible in this Albers series.  I intend to discuss this phenomenon in my Mathematics and Digital Art course this fall, so be sure to follow along if you’re interested to see more.

Nick and I did talk about our own work.  I was happy with my talk, which prompted a lot of questions from the audience — always a good sign!  Nick really did well for a first time at a conference like this.  He had 15 minutes allotted for his talk, and if you’ve never given a short talk, you may not realize how difficult it is.  As Nick and I sat through sessions, I kept noting to him how rushed many of the presenters were at the end because they spent too long at the beginning getting into their talk.

IMAG3240

But Nick had no time issues at all.  We worked at getting rid of a few slides which took too much time to explain — Nick talked through the remaining slides at a leisurely pace, and even had a few minutes for questions at the end.  Really well done.

One special treat was a talk on spherical mirrors given by Henry Segerman.  It really can’t be described in words, but if you look at his YouTube channel, you’ll find some truly amazing videos.  I highly recommend it!

There were also many scheduled events outside the university.  Again, there is no way to describe them all, but my favorite was the exhibition by Rinus Roelofs (who was attending the conference) at the Art Gallery of Central Finland’s Natural History Museum.  This was a spectacular display of polyhedra with a special opening night during the conference.

IMAG3320

There must have been at least 200 models exhibited, all very intricate and exquisitely assembled.  See my Twitter @cre8math for more pictures of this one-of-a-kind display.

Expect the unexpected — if you’ve been following me on Twitter, you’ll recall I posted about “found art” just walking around the streets of Florence.  There was a really wonderful surprise along the pedestrian walkway through central Jyvaskyla.

IMAG3286

Not much explanation is needed here — but it was just so neat to find this piece in the middle of the street!

Hopefully this gives you a sense of the atmosphere of the conference.  Truly magical!  If you are really interested, Bridges 2017 is in Waterloo, Canada, so it’s a lot easier to get to than Jyvaskyla, Finland.  Nick and I are already determined to attend, and will be starting our planning later this Fall.  I’m planning to give a report on my Mathematics and Digital Art course I’m teaching this semester, and Nick is hoping to take his investigations into the third dimension and make some awesome movies.  You’re welcome to join us in Waterloo for Bridges 2017!


Guest Blogger: Geoffrey Owen Miller, II

Let’s hear more from Geoffrey about his use of color!  Without further ado….

Last week, I mentioned a way to combine the RGB and CMYK color wheels.

color wheels-02

This lovely wheel is often called the Yurmby wheel because it’s somewhat more pronounceable than YRMBCG(Y). The benefit of the Yurmby is that the primary of one system is the secondary of the other. With the RGB system,

Red light + Blue light = Magenta light.

Red and Blue are the “primaries,” meaning they are used to mix all other colors, like Magenta, and they can’t be mixed by other colors within that system. The CMYK system is different because pigments absorb light and so it is the reverse of the RGB system:

Cyan pigment + Yellow pigment = Green pigment.

This spatial relationship on the wheel is important as it is an oversimplified representation of an important aspect of our vision. Our eyes have only three types of color-sensitive cells—typically called Red, Blue, and Green cone cells. Each cell is sensitive to a different range of wavelengths of light:  Long, Medium, and Short. When all three frequencies of light are seen together, we see white light. But more significantly, when we see a combination of Medium (green) and Long (red) wavelengths, our brain gets the same signal as if we saw yellow light, which has a medium-longish wavelength!

color wheels-03

Going back to the Color Relationships, the first, and easiest to see is a Triadic relationship. Here three colors are chosen at similar distances from each other on the color circle. If we choose Red, Green and Blue, then you should feel a sense of familiarity. When light of all three colors is brought together we have white. But what if we move the triangle, you may ask? Well, let’s try Magenta, Yellow, and Cyan. To shorten the number of words written, I am going to increasingly use more abbreviations. C + Y + M = Black, right? Well yes, if mixed as pigment on paper it becomes a dark gray, as Cyan absorbs Red, Yellow absorbs Blue, and Magenta absorbs Green. But as light, they mix to White.

Cyan light is made up of both Green light and Blue light as that is how you make Cyan light: C = G + B. If we go back to our original equation

C + Y + M

and simplify further, we get

(G + B) + (G + R) + (R + B), or 2R + 2G + 2B.

color wheels-04

We can remove the 2’s as they are not important (think stoichiometry in chemistry!) and we are left with RGB, or white. Now let’s try moving half a step clockwise:

 Magenta-Blue + Cyan-Green + Yellow-Red,

or

M + B + C + G + Y + R,

which simplifies to

(R + B) + B + (B + G) + G + (G + R) + R, or 3R + 3G + 3B,

which once again is RGB!

color wheels-05

The point of these “color relations” is to simplify the number of colors in use. Any three equally spaced colors would mix to white. A complementary color pair is the fewest number of colors necessary to engage all three cone cells in the eye; for example,

R + C = R + (G + B)= White.

A split complement is in between a complement-pair and a triad, as shown in the second figure below.

color wheels-06

So for the watercolors I started this discussion about, I had found an old tube of flesh color that I had bought while in Taiwan. I was oddly attracted to this totally artificial-looking hue as a representative for human skin, and it also seemed like a good challenge to get it to work harmoniously in a painting. To understand it better, I first went about trying to remix that flesh color with my other single pigment paints. It turned out to be a mixture of red, yellow, and lots of white — basically a pastel orange. The complementary color of this is a Cyan-Blue color, which I chose to be the pigment Indigo. I added to this palette Yellow Ochre, which is a reddish-yellow, and Venetian Red, which is a yellowish-red; when mixed they make an orange, though much earthier than the orange in that tube of Barbie-like flesh tone.

color wheels-07

Once I had the four colors I felt best about (which also took into consideration other characteristics of a pigment, such as how grainy they are), the painting became a process where intuition, chance, luck, skill, and the weather all played equal parts in creating a work of historical record — never to be repeated equally again.

Disciplines of Geography 10.28.11
Disciplines of Geography 10.28.11

–Geoffrey Owen Miller

Thanks, Geoffrey!  I’d like to remark that I asked Geoffrey to elaborate on the statement “Once I had the four colors I felt best about….”  How did he know what colors were “best”?  Geoffrey commented that it was “like Tiger Woods describing what goes through his mind when he swings a golf club.”

As I thought about his comment, I appreciated it more and more.  A color wheel is a tool to help you organize thoughts about colors — but a color wheel cannot choose the colors for you.  This is where artistry comes in.  Using your tools as a guide, you navigate your way through the color spectrum until — based on years of practice and experience — you’ve “found it.”

But this is exactly what happens when creating digital art!  It is easy to find the complement of a color from the RGB values — just subtract each RGB value from 1.  Sometimes it’s just what you want, but sometimes you need to tweak it a little.  While there are simple arithmetic rules relating colors, they are not “absolute.”
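For example, a quick sketch:

```python
def complement(rgb):
    """RGB complement: subtract each component from 1."""
    return tuple(round(1 - c, 3) for c in rgb)

print(complement((0.6, 0.5, 1.0)))   # (0.4, 0.5, 0.0)
```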

So it all comes back to the question, “What is art?”  I won’t attempt to answer this question here, but only say that having tools and techniques (and code!) at your disposal will allow you to create images, but that doesn’t necessarily mean that these images will have artistic value.

I hope you’ve enjoyed my first guest blogger — I’ll occasionally invite other bloggers as well in the future.  I’m off to Finland for the Bridges 2016 conference tomorrow, so next week I’ll be talking about all the wonderful art I encounter there!

 

Guest Blogger: Geoffrey Owen Miller, I

Disciplines of Geography 11.5.11.jpg
Disciplines of Geography 11.5.11

Geoffrey was one of the two most influential artists who guided me along the path of creating digital art.  We worked at the same school a few years ago, and I sat in on many of his art classes.  Since the faculty often ate lunch together, we’d sit and have many casual chats about art and color.

What I always appreciate about Geoffrey is that even though he’s not a mathematician, he is still able to understand and appreciate the mathematical aspects of what I create.  He isn’t intimidated by the mathematics, and I’m not intimidated by his artistic expertise.  He really helps me develop as an artist.

Geoffrey is passionate about the use of color, and thinks and writes extensively about the subject.  I’m also fascinated by color, so I thought I’d invite him to be a guest on my blog.  He had so many great things to say, though, that it soon became too much to say in one post.

So enjoy these few weeks, and learn something more about color!  If you like what you see and read, visit Geoffrey’s website at www.geoffreyowenmiller.com.  Enough from me — we’ll let Geoffrey speak for himself….

Back in 2011, I started a series of watercolors that came to be called the Disciplines of Geography. I had just been in Europe and had spent a lot of time visiting the museums and thinking about history while walking amongst all those giant oil paintings. On my return, I was asked by a friend and talented poet to help create a cover for a book of her poems. I had been using the medium of watercolor to sketch during my travels and I decided to use it to make something more substantial and sustained. I found watercolor quite difficult compared to oil paint as every mark you make on the paper remains at some level visible. It forces you to accept mistakes while you do all you can to try and minimize them. But mostly I loved the colors the transparent washes could create despite my perpetual uncertainty and constant mistakes.

Discipline of Geography  1.6.11
Disciplines of Geography 1.6.11

I determined to make my own version of a history painting with watercolor by focusing on the process of painting. I started thinking about how time and space were linked with the history of the European borders. To watch the ebb and flow of the borders of different countries and governments overlapping and being overlapped was similar to the way watercolor extends out from a pour of paint on wet paper. Each layer of color is effectively redrawing the boundaries, while simultaneously building a history where every subsequent state is influenced by the previous colors, values, and borders of those before them.

As these paintings are nonobjective abstractions, meaning I was not looking at anything specific to inform my color choices, I needed something to inform my decision making. Often abstractions come from found or referenced materials, like photos, or found objects, which can often help direct the choices of value, color, line, etc., that one is making. Why choose one blue instead of another? At some level we choose colors we like or that look better to us, but I like working in or creating systems that push me outside of what I normally feel comfortable with. As these paintings existed in the realm of ideas and needed a similar structure to build upon, I really started looking into the color wheel as a tool to think about the relationships of the different colors. (This itself became a multi-year ongoing project.) Without going down that path too far, I wanted to share something I found interesting and helpful in the context of making these watercolors, as well as providing a greater understanding of how color relates in other contexts.

As a student I was shown, in conjunction with the color wheel, certain color relationships that were supposed to help us make harmonious color choices in our images. We were supposed to make color choices based off of the relationships of the colors’ locations on the color circle. Complementary colors were any two colors on opposite sides of the circle, split complements exchanged one of those two colors for the two colors on either side of it, triadic colors were three colors equally distanced from each other, and so on. But was yellow the complement of purple?  Or of blue?  Because artists use more than one color wheel, it depended on which color circle you decided to go with.

The Red Yellow Blue (RYB) primary color circle was most often used during my schooling. This was because those were the pigments we most often used, and in this case yellow did complement the mixture of red and blue, which was called purple. If you mixed all three you tended to get a dark color that, depending on your ratios, was a pretty decent neutral, which is really important in color mixing. Hardly anything in the world around us is a fully saturated color.

color wheels-01

However, with light it is very clear, and also demonstrable in your own home with flashlights and colored films, that blue is the complement of yellow. And when they mix they create a fairly neutral white. As color is essentially about light (and pigments are a complicated world in themselves) I decided to go with the Red Green Blue (RGB) color circle. As an added bonus, those clever artists and color scientists figured out that certain other pigments can work quite nicely in this structure. Cyan, Yellow, and Magenta are the colors that Red, Green, and Blue light make when mixed, while Red, Green, and Blue are made with Cyan, Yellow, and Magenta pigments. Learning that was incredibly satisfying.

color wheels-02

But since Cyan, Magenta, and Yellow pigments don’t actually create Black, Black is needed for the very darkest colors.  So this system is called the “CMYK” system, where “K” is used for Black so it’s not confused with Blue.

There’s a neat way these two systems can be combined, called the “Yurmby” wheel, which I used to create Disciplines of Geography.  That’s where we’ll start next week!