https://drive.google.com/drive/folders/1TSgLRdO0tKIFUb6Oj8yN...
Here's a link to my post (unfortunately, in Russian) describing the process: https://t.me/little_pieces/822?single
A quick translation:
1. We have a 2D space and a regular polygon with N vertices, where N is a parameter.
2. Choose a function f : R -> R (e.g. f(x) = x / 2 for the Sierpinski triangle).
3. Choose a random "current" point.
4. Choose a random vertex of the N-gon.
5. Calculate the distance D between the two points, then move the current point toward the selected vertex by the distance f(D).
6. Repeat steps 4-5, saving the position of the current point at each step, until there is enough data to draw a density map.
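The steps above fit in a few lines of Python (a minimal sketch; placing the vertices on the unit circle and the particular random starting point are my own choices):

```python
import math
import random

# Chaos game as described in steps 1-6, for a regular N-gon with
# vertices on the unit circle and f(D) = D/2 by default (Sierpinski).
def chaos_game(n_vertices=3, f=lambda d: d / 2, steps=10000, seed=0):
    rng = random.Random(seed)
    verts = [(math.cos(2 * math.pi * k / n_vertices),
              math.sin(2 * math.pi * k / n_vertices))
             for k in range(n_vertices)]
    x, y = rng.uniform(-1, 1), rng.uniform(-1, 1)  # step 3: random start
    points = []
    for _ in range(steps):
        vx, vy = rng.choice(verts)        # step 4: random vertex
        d = math.hypot(vx - x, vy - y)    # step 5: distance D
        if d > 0:
            t = f(d) / d                  # fraction of the way to move
            x, y = x + (vx - x) * t, y + (vy - y) * t
        points.append((x, y))             # step 6: save position
    return points

pts = chaos_game()
```

With n_vertices=3 and the default f, a density map of pts is the Sierpinski triangle; other values of N and f give the generalized shapes the post explores.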
Another person from gen-club[0] (Russian telegram chat dedicated to generative art) made a 3D-version of it[1][2]:
[1] https://t.me/mathimages/156
[2] https://codepen.io/strangerintheq/full/ZEKXamr (zoom out at startup)
f1(x,y) = (x/2, y/2)
f2(x,y) = (x/2+1/2, y/2)
f3(x,y) = (x/2 + 1/4, y/2 + sqrt(3)/4)
which are precisely the maps of the Iterated Function System [1] that generates the Sierpinski triangle!
It's a very nice observation that the three maps have a concise representation: each takes the midpoint between the given point and one of the triangle's vertices.
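A quick sanity check of that observation (assuming the triangle with vertices (0,0), (1,0), (1/2, sqrt(3)/2), which these formulas imply):

```python
import math

# The maps f1, f2, f3 above are exactly "take the midpoint with a vertex"
# for the triangle with vertices (0,0), (1,0), (1/2, sqrt(3)/2).
VERTS = [(0.0, 0.0), (1.0, 0.0), (0.5, math.sqrt(3) / 2)]

def midpoint(p, v):
    return ((p[0] + v[0]) / 2, (p[1] + v[1]) / 2)

def f1(x, y): return (x / 2, y / 2)
def f2(x, y): return (x / 2 + 1 / 2, y / 2)
def f3(x, y): return (x / 2 + 1 / 4, y / 2 + math.sqrt(3) / 4)

p = (0.3, 0.7)
for f, v in zip((f1, f2, f3), VERTS):
    fx, fy = f(*p)
    mx, my = midpoint(p, v)
    assert math.isclose(fx, mx) and math.isclose(fy, my)
```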
Edit: In my view, to show that this draws the Sierpinski triangle, one would need to show that (1) we only draw points that are in the Sierpinski triangle and (2) we draw all the points of the Sierpinski triangle.
(1) is clearly false (we start with a random point), but the claim is of course only that we draw an “approximation”. What that means exactly would need to be defined. I assume a rigorous argument would involve 2D probability densities. Given that, (2) isn’t obvious to me either: what’s the argument that all parts of the Sierpinski shape correspond to areas of high probability density?
Taking the midpoint between a point and a triangle vertex is a transformation that scales everything by 1/2 with that vertex as the pivot.
By choosing one of these three transformations at random, you map the whole triangle into one of the three smaller copies of itself that it consists of.
If you start with a point belonging to the Sierpinski triangle, you keep adding more and more points belonging to that triangle.
The fun thing is that you can use the same algorithm with a different number and kind of transformations to get other fractals from such a random walk: for example a fractal tree, a fern leaf, or the Sierpinski carpet.
Another interesting thing is that you can start with a point not belonging to the fractal, and it converges pretty quickly. That's because these fractals are attractors.
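For example, swapping in the four affine maps of the Barnsley fern turns the same random-walk loop into a fern (the coefficients and weights below are the standard published ones, to the best of my knowledge):

```python
import random

# Same chaos-game loop, different maps: the Barnsley fern's four affine
# transformations (standard coefficients; weights 0.01, 0.85, 0.07, 0.07).
MAPS = [
    lambda x, y: (0.0, 0.16 * y),                                    # stem
    lambda x, y: (0.85 * x + 0.04 * y, -0.04 * x + 0.85 * y + 1.6),  # main frond
    lambda x, y: (0.20 * x - 0.26 * y, 0.23 * x + 0.22 * y + 1.6),   # left leaflet
    lambda x, y: (-0.15 * x + 0.28 * y, 0.26 * x + 0.24 * y + 0.44), # right leaflet
]
WEIGHTS = [0.01, 0.85, 0.07, 0.07]

def fern(steps=20000, seed=1):
    rng = random.Random(seed)
    x, y = 0.0, 0.0
    pts = []
    for _ in range(steps):
        f = rng.choices(MAPS, weights=WEIGHTS)[0]  # pick a map at random
        x, y = f(x, y)
        pts.append((x, y))
    return pts

pts = fern()
```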
Suppose you start by drawing a point uniformly at random from the triangle. We'll call this step 0. With probability 1 this point is not in the Sierpinski triangle, but if we consider finite approximations of the Sierpinski triangle, it is in the level 0 Sierpinski triangle (see this random illustration I found on Google [1]). In particular, since the level 0 Sierpinski triangle is just the whole triangle, the initial point is uniformly distributed in the level 0 Sierpinski triangle.
This is a base case. Then we can do an induction step: if we have a point which is uniformly distributed in the level k Sierpinski triangle, applying the transformation (pick one vertex of the triangle at random and move halfway towards it) results in a point which is uniformly distributed in the level k+1 Sierpinski triangle. This is because each corner of the level k+1 Sierpinski triangle is a scaled copy of the level k Sierpinski triangle.
Putting the base case and induction step together, we deduce that when we repeatedly apply the transformation of picking a vertex and moving halfway towards it, at each step k we have a point which is uniformly distributed in the level k Sierpinski triangle.
This is enough to show that if we start with a sample of N points drawn uniformly from the triangle, after k steps we'll have N points drawn uniformly from the level k Sierpinski triangle (i.e. an arbitrarily good approximation of the Sierpinski triangle, if we make k large enough). It's not quite enough to show that if we start with a single point and record the location after each step we'll end up with something that looks like the Sierpinski triangle. For that we need to establish ergodicity of the Markov chain. The details are technical but not difficult to show; basically the Markov chain must not get caught in fixed-length loops (aperiodicity) and must never get stuck unable to reach part of the triangle (irreducibility). Both conditions are satisfied, so the Markov chain is ergodic and starting from a single point will approximate a Sierpinski triangle.
I've glossed over a bunch of details but hopefully that shows the basic idea.
[1] https://www.researchgate.net/profile/Ali-Bashir-4/publicatio...
> The most common algorithm to compute IFS fractals is called the "chaos game". It consists of picking a random point in the plane, then iteratively applying one of the functions chosen at random from the function system to transform the point to get a next point. An alternative algorithm is to generate each possible sequence of functions up to a given maximum length, and then to plot the results of applying each of these sequences of functions to an initial point or shape.
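The alternative (deterministic) algorithm from the quote can be sketched by enumerating every composition of maps up to a fixed depth; here with the three Sierpinski midpoint maps and an arbitrary starting point of my choosing:

```python
import itertools
import math

# The three Sierpinski IFS maps (midpoint toward each vertex of the
# triangle (0,0), (1,0), (1/2, sqrt(3)/2)).
FS = [
    lambda p: (p[0] / 2, p[1] / 2),
    lambda p: (p[0] / 2 + 0.5, p[1] / 2),
    lambda p: (p[0] / 2 + 0.25, p[1] / 2 + math.sqrt(3) / 4),
]

def enumerate_ifs(depth=5, start=(0.2, 0.2)):
    """Apply every length-`depth` composition of the maps to `start`."""
    pts = []
    for seq in itertools.product(FS, repeat=depth):
        p = start
        for f in seq:
            p = f(p)
        pts.append(p)
    return pts

pts = enumerate_ifs()
```

At depth 5 this plots 3^5 = 243 points; at depth 10 it would plot the 59049 leaves mentioned elsewhere in the thread.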
A sketch of the argument in the IFS goes like this: consider the set of maps {f1, f2, f3} as acting on compact subsets of R^2 by the action F -> f1(F) U f2(F) U f3(F). Since the maps fi are continuous, the image is a compact set. But one can also prove that this action is a contraction mapping on the space of compact subsets of R^2 equipped with Hausdorff metric [1], and appealing to the Banach fixed point theorem [2],
(1) there is a unique compact set K fixed by this action, and
(2) the images of F converge "geometrically fast" to the set K
In other words, there is some constant 0 < r < 1 such that after k steps of this algorithm (not the random process), the distance from the k-th approximation to the Sierpinski triangle is at most r^k.
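A quick way to see where a rate like this comes from (my own illustration, not the cited paper's method): each midpoint map contracts distances by exactly 1/2, so two trajectories driven by the same random vertex choices approach each other at exactly (1/2)^k:

```python
import math
import random

# The three midpoint maps contract distances by exactly 1/2, so two
# trajectories driven by the *same* random choices approach each other
# at the rate (1/2)^k, regardless of where they start.
VERTS = [(0.0, 0.0), (1.0, 0.0), (0.5, math.sqrt(3) / 2)]

rng = random.Random(42)
p, q = (5.0, -3.0), (0.2, 0.2)   # two very different starting points
d0 = math.hypot(p[0] - q[0], p[1] - q[1])
for k in range(1, 21):
    vx, vy = rng.choice(VERTS)    # same choice applied to both points
    p = ((p[0] + vx) / 2, (p[1] + vy) / 2)
    q = ((q[0] + vx) / 2, (q[1] + vy) / 2)
    d = math.hypot(p[0] - q[0], p[1] - q[1])
    assert math.isclose(d, d0 / 2 ** k, rel_tol=1e-9)
```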
Actual convergence rates of the corresponding random process are more complicated. One of my colleagues recently wrote a paper on convergence rates of the "chaos game" (essentially what this is doing), and one can get really sharp information on how fast the random process converges [3]. However, this uses some non-trivial covering time theory for Markov processes. It's not too complex to follow, but definitely way more detail than I can write up in this note here.
[1] https://en.wikipedia.org/wiki/Hausdorff_distance
[2] https://en.wikipedia.org/wiki/Banach_fixed-point_theorem
[3] https://arxiv.org/abs/2007.11517
Edit: To see that the points converge to the Sierpinski triangle (with no quantitative information) is a "bit less challenging" (i.e. within the realm of "standard" material - not necessarily easier!) - you can essentially reduce this problem to an application of the Birkhoff ergodic theorem [4]. Firstly, we can consider this process as a random map
pi: {1,2,3}^{\mathbb{N}} -> R^2
which is defined by taking finite sequences (i1, i2, ..., ik), mapping them to the images f_{i1}(f_{i2}(...(f_{ik}(x_0))...)), and taking the limit as k -> infinity, for some fixed starting point x_0.
Then "apply map f_i" can be interpreted as looking at pi(sigma(i1, i2, ...)), where sigma is the left-shift map sigma(i1, i2, ...) = (i2, i3, ...). But the shift map plays very nicely with the measure on the sequence space (the Bernoulli measure), in the sense that the Birkhoff ergodic theorem can be applied. This gives that for "typical" sequences of random function applications, this process will converge to the Sierpinski triangle.
Here's another way to see it: let's say I apply maps f_{i1}, f_{i2}, ..., f_{ik} to my starting point x_0. Then if we code the "level k" approximations to the Sierpinski triangle in the same way [triangle(i1, ..., ik) = f_{i1}(...(f_{ik}(initial triangle)))], the image of the point x_0 will lie in triangle(i1, ..., ik). But these triangles, for large k, approximate the Sierpinski triangle. Some more argumentation is required to deal with the case where the starting point does not sit in the Sierpinski triangle, but this isn't a problem because of the fast convergence result explained above (if you wait a long time, all the points will be "very close"). The above ergodic theory argument just shows that every possible finite subsequence will show up in "typical" random function applications.
https://maths.anu.edu.au/people/academics/michael-barnsley
and
I remember sitting on the floor of our living room programming this in character by character on that TI keyboard. It's the first BASIC program I ever wrote. Everything before that was js and html.
"The sierpinski triangle page to end most sierpinski triangle pages ™"
Just to add a curiosity, the method as it was explained to me as a kid: "Plot a pixel on any corner of the triangle. Pick a corner at random and plot a pixel halfway between there and where your last pixel was, then just repeat."
The recursive implementation always seemed more intuitive for me.
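That recursive construction can be sketched like this (a minimal sketch; the coordinates and depth are arbitrary choices of mine):

```python
import math

# Recursive construction: subdivide a triangle into its three corner
# triangles (skipping the middle one) down to a given depth.
def sierpinski(v0, v1, v2, depth, out):
    if depth == 0:
        out.append((v0, v1, v2))
        return
    m01 = ((v0[0] + v1[0]) / 2, (v0[1] + v1[1]) / 2)
    m12 = ((v1[0] + v2[0]) / 2, (v1[1] + v2[1]) / 2)
    m20 = ((v2[0] + v0[0]) / 2, (v2[1] + v0[1]) / 2)
    sierpinski(v0, m01, m20, depth - 1, out)
    sierpinski(m01, v1, m12, depth - 1, out)
    sierpinski(m20, m12, v2, depth - 1, out)

tris = []
sierpinski((0, 0), (1, 0), (0.5, math.sqrt(3) / 2), 4, tris)
assert len(tris) == 3 ** 4  # 81 small triangles at depth 4
```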
1. The Chaos Game, as described in this post.
2. Other ways of rendering the IFS; for example, iterating over the possibility tree of transforming a single point 10 times and plotting the 59049 leaves, or searching over the tree of inverse transformations from a given pixel to some depth like 6 and coloring the pixel with the length of the longest chain that doesn't blow up. It's a very well behaved IFS, so even methods that have trouble with some IFSs will do fine.
3. Coloring the odd numbers in Pascal's Triangle, aka Yang Hui's Triangle or the triangle from मेरु प्रस्तार.
4. As a generalization, plotting histories of an astonishingly wide range of 1-D cellular automata, starting with a single "live" cell. For example, the totalistic rule where a cell is live iff exactly 1 of itself and its neighbors were alive (rule 14). About a third of the 256 2-state neighborhood-3 CAs like this are some deformation of the Triangle IIRC.
5. A surprisingly large variety of L-systems also generate the Triangle. In http://canonical.org/~kragen/laserboot/cut-6 I used, I think, F = !F!-F-!F!, where ! swaps left and right. An interesting thing about this curve in particular is that it avoids self-intersections, which is what makes it somewhat suitable for laser cutting.
6. And of course you can start with a solid triangle, cut a hole in its middle to make a triforce, and then recursively do the same for each of the resulting three remaining triangles.
7. For use as a calendar, http://canonical.org/~kragen/sw/dev3/siercal I generated a tour of the triangle with a non-L-system method, just subdividing the triangle recursively.
8. It's also interesting to note that the state space of the Towers of Hanoi puzzle has the shape of the Sierpinski Triangle. So a suitable 2D graph layout algorithm (all edges equal length, maximizing total distance from the center) applied to the state space graph ought to generate it. This is a little handwavy, though.
9. I'd forgotten this, but as John D. Cook points out in https://www.johndcook.com/blog/2019/11/26/fractal-via-bit-tw..., the pixels where the bitwise AND of the X and Y coordinates is zero form a distorted Sierpinski Triangle.
10. Cook points out in https://www.johndcook.com/blog/2019/10/19/binary-surprise/ that the numbers of sides in the regular n-gons constructible with compass and straightedge, when written down in binary, also form the Sierpinski Triangle.
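The bitwise-AND construction in item 9 is especially compact; a sketch (the grid size is arbitrary):

```python
# Pixels where x & y == 0 trace out a (right-angled) Sierpinski triangle.
size = 32
rows = ["".join("#" if (x & y) == 0 else " " for x in range(size))
        for y in range(size)]
print("\n".join(rows))
```

Each bit position contributes independently (a pixel is drawn iff no bit is set in both coordinates), which is why the count of drawn pixels in a 2^n x 2^n grid is exactly 3^n.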
Going a step further, changing the "functions" over time can create smoothly animated fractals. Electric Sheep (https://electricsheep.org/) is a screensaver version of this which doubles as a distributed computing network to generate them. It also mixes in non-linear transformations which makes for some impressive, mesmerizing patterns.
The wiki page for chaos games has animated examples that make it pretty clear how it works for the Sierpinski triangle: https://en.wikipedia.org/wiki/Chaos_game
I enjoy that Rule 90 cellular automata also generate approximate Sierpinski triangles, as a mod-2 version of Pascal's triangle: https://en.wikipedia.org/wiki/Rule_90#Sierpiński_triangle Some other rules also generate the Sierpinski triangle. Fun stuff.
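Rule 90 (each cell's next state is the XOR of its two neighbors) takes only a few lines to simulate; a sketch with an arbitrary width and step count:

```python
# Rule 90: each cell's next state is the XOR of its two neighbors
# (boundaries wrap, which doesn't matter at this width/step count).
width, steps = 63, 31
cells = [0] * width
cells[width // 2] = 1            # start from a single live cell
history = []
for _ in range(steps):
    history.append("".join("#" if c else " " for c in cells))
    cells = [cells[i - 1] ^ cells[(i + 1) % width] for i in range(width)]
print("\n".join(history))
```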
After spending many years idly reflecting on why the chaos game converged to the Sierpinski triangle, I'd like to share an illustrated proof of mine that might be enjoyed by this audience: https://arun.chagantys.org/technical/2020/04/28/chaos-game.h...
https://www.speicherleck.de/iblech/stuff/chaosspiel-2015/cha...
Developed as part of a mathematics day for children, the worksheet is available here: https://www.speicherleck.de/iblech/stuff/chaosspiel-2015/arb...
[0] https://en.wikipedia.org/wiki/Fractint
EDIT: Also, as a bonus this method is very easy to do with a pen and paper. I remember trying that, but it takes a lot of points before the structure starts to become visible.
Then a relative wrote this algorithm in about 10 lines of QBasic. I was amazed that such simple code could do something so complex and unexpected. As a 5-year-old, I spent hours trying to figure out how on earth this algorithm worked - even though it's blatantly obvious to adult me.
https://www.manualslib.com/manual/325928/Ti-Ti-82.html?page=...
It prompted me to learn a bit more about IFS and discover one of my favorite fractals ever. The word "chaos" where each stroke is made up of the word "chaos" [1]. I spent a bunch of time coding up the fractals on that site [2] on my calculator
[0] http://manualsdump.com/en/manuals/texas_instruments-ti-83/13...
Python Code:https://nbviewer.org/github/Nydhal/Python-Notebooks/blob/mas...
Sierpinski carpet L-system
677 bytes will render a Sierpinski carpet using an L-system to create a Z-order curve:
https://blogs.sas.com/content/iml/2020/10/19/random-points-i...
Since each row of Pascal’s triangle is generated from the row just above it…
P(x,y) = P(x,y-1) + P(x-1,y-1)
…and since for any even or odd numbers
even + even = even, odd + odd = even, odd + even = odd,
you can compute the parity of any entry in Pascal’s triangle just by checking the parity of the two numbers above it and XORing them.
So you can compute each pixel as S(x,y) = xor(S(x,y-1), S(x-1,y-1)),
where the initial row is all 0s with a single 1 in it.
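That XOR recurrence in a few lines of Python (a sketch; the width and number of rows are arbitrary):

```python
# S(x, y) = xor(S(x, y-1), S(x-1, y-1)), starting from a row that is
# all 0s except a single 1; cells left of the edge count as 0.
width, height = 32, 16
row = [1] + [0] * (width - 1)
grid = []
for _ in range(height):
    grid.append("".join("#" if b else " " for b in row))
    row = [row[x] ^ (row[x - 1] if x > 0 else 0) for x in range(width)]
print("\n".join(grid))
```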
I wrote a program on the Mac II in high school to compute this…it was quite a lot of fun. The process of going from integer to floating point to large integer to single bits was instructive.
Imagine what would be required to put a dot in an area that ought to be blank: the previous point would have had to be outside the triangle, or in another blank area, at the previous step.
Hence, it is impossible to draw in the blank areas.
https://play.rust-lang.org/?version=stable&mode=debug&editio...
Edit: quick Shadertoy version: https://www.shadertoy.com/view/flVXDh
(From the repo)