Morphing

#creative coding #generative art

Morphing nature’s patterns onto humans to reveal deep interconnection.

Abstract

“Morphing” is a project that visually explores patterns found in nature and overlays them onto human figures captured through a webcam. The aim is to promote awareness of our interconnectedness with nature. The project consists of two parts: generating patterns from the mathematical principle behind them, and capturing the viewer's figure and morphing the patterns onto it. It was developed with Turing's Reaction-Diffusion Model in the Processing programming language.

Toolkit

Processing, Generative Algorithms

Demo Video

Process

Concept

Similar patterns have been found in animal bodies, plants, and even landscapes. This suggests that things in nature, humans included, may share the same basic algorithm for forming their bodies. However, with the development of technology, we have come to believe we control other beings and nature, and we begin to forget that nature's wisdom has been inherent in our own bodies all along.

I want to visually explore patterns found in nature and overlay them onto the viewer's figure on screen, captured through their webcam. Through this project, I aim to promote awareness of our interconnectedness with nature and other living beings. While humans have developed great abilities, we remain part of the natural world we evolved from, and we should respect and learn from nature rather than try to exploit it.


Design and Implementation

My project consists of two main parts:

  1. Generating the patterns based on the mathematical principle behind them.
  2. Capturing the user's figure using the camera and morphing the patterns onto the figure.

I used Turing's Reaction-Diffusion Model as the blueprint for generating the patterns, because it shows how different patterns in nature, from stripes to spots, can arise naturally from a homogeneous state. The model is based on the interplay between two kinds of chemicals, an Activator and an Inhibitor: the activator tries to reproduce, while the inhibitor stops it from doing so. Different feed and kill rates for these chemicals create a variety of interesting behaviors that help explain the mystery of animal and plant patterns.


I mainly referred to Karl Sims's version of the reaction-diffusion equation; he has a wonderful research and web art project about Turing patterns: https://www.karlsims.com/rd.html

The update equations:

A' = A + (dA · ∇²A − A·B² + feed · (1 − A)) · Δt
B' = B + (dB · ∇²B + A·B² − (k + feed) · B) · Δt

I also learned how to translate this equation into code from The Coding Train: https://youtu.be/BV9ny785UNc?si=aoU4__mLw6Pze6ir

// Gray-Scott update for one cell: a and b are the cell's previous
// concentrations, dA and dB the diffusion rates, feed and k the
// feed and kill rates; the trailing "* 1" is the time step.
grid[y][x].a = a + ((dA * laplaceA(x, y)) - (a * b * b) + (feed * (1 - a))) * 1;
grid[y][x].b = b + ((dB * laplaceB(x, y)) + (a * b * b) - ((k + feed) * b)) * 1;

I created a class that stores the concentrations of chemicals A and B for every pixel in a 2D array.
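A minimal sketch of what that class and grid might look like in Processing (the fields a and b match the update code above; the class name, grid size, and initial values are my assumptions):

// One cell of the simulation grid, holding the local
// concentrations of the two chemicals.
class Cell {
  float a = 1; // chemical A starts at full concentration
  float b = 0; // chemical B gets seeded in a few spots to start the reaction
}

int cols = 200;
int rows = 200;
Cell[][] grid = new Cell[rows][cols];

void setup() {
  size(400, 400);
  for (int y = 0; y < rows; y++) {
    for (int x = 0; x < cols; x++) {
      grid[y][x] = new Cell();
    }
  }
}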

One interesting aspect is that the diffusion step works similarly to the Game of Life: in every update, the new concentration at the center of a 3x3 neighborhood is calculated from the concentrations of its 8 neighbors, each with a different weight. This causes the chemicals to diffuse into the areas around them. In our case, the weights are as follows.

0.05   0.2   0.05
0.2   -1.0   0.2
0.05   0.2   0.05
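With these weights, the laplaceA helper used in the update code could look like the following sketch (the wrap-around edge handling is my assumption; it builds on the Cell grid above):

// Convolve the A concentrations around (x, y) with the 3x3
// Laplacian kernel shown above; edges wrap around.
float laplaceA(int x, int y) {
  float[][] w = {
    {0.05,  0.2, 0.05},
    { 0.2, -1.0,  0.2},
    {0.05,  0.2, 0.05}
  };
  float sum = 0;
  for (int j = -1; j <= 1; j++) {
    for (int i = -1; i <= 1; i++) {
      int nx = (x + i + cols) % cols; // wrap horizontally
      int ny = (y + j + rows) % rows; // wrap vertically
      sum += grid[ny][nx].a * w[j + 1][i + 1];
    }
  }
  return sum;
}

laplaceB is identical except that it reads the b field.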

Here are some of the resulting patterns:

[Images: patterns generated by the reaction-diffusion simulation]

However, the heavy per-pixel calculations slow the sketch down, and enlarging the canvas makes the lag worse. After testing, I found that the p5.js library itself was causing the problem (it is a rather large library).

Even the difference between using the full p5.js file and the minified p5.min.js file caused a huge difference in running efficiency: starting two sketches from the same seed, the p5.min version ran about twice as fast as the other.


Therefore, I decided to use Processing as the platform for my project. It runs locally, so it doesn't have to fetch the library from the web.

Moreover, I reduced the simulation resolution by running it on a 2D array coarser than the canvas itself. (In the webcam part, I similarly reduced the resolution by storing only the top-left pixel of each 8x8 block of pixels; a sketch of this follows.) This allowed me to expand the canvas size.
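The downsampling might look like this, using Processing's video library (the variable names and capture size are my assumptions):

import processing.video.*;

Capture video;
int step = 8; // keep one pixel per 8x8 block
color[][] small; // downsampled copy of the frame

void setup() {
  size(640, 480);
  video = new Capture(this, 640, 480);
  video.start();
  small = new color[height / step][width / step];
}

void draw() {
  if (video.available()) {
    video.read();
    video.loadPixels();
    // Store only the top-left pixel of each 8x8 block.
    for (int y = 0; y < video.height; y += step) {
      for (int x = 0; x < video.width; x += step) {
        small[y / step][x / step] = video.pixels[y * video.width + x];
      }
    }
  }
}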

Then comes the next step: capturing the user's figure with the webcam and projecting the patterns onto it.

The logic of the implementation is as follows.


First, we capture an empty background without any humans in it and store its color data in a 2D array. Then we compare the real-time video captured by the webcam against that empty background and identify the areas with large color differences (I used the Euclidean distance); these areas represent where the figure is. Finally, we use this processed layer as a mask that controls which parts of the pattern layer are shown to the user. As a result, the patterns grow only on the user's figure, not on the background!
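A sketch of the comparison step (the threshold value and helper names are my assumptions):

// Compare a live pixel against the stored background pixel
// using Euclidean distance in RGB space.
boolean isFigure(color live, color bg, float threshold) {
  float dr = red(live) - red(bg);
  float dg = green(live) - green(bg);
  float db = blue(live) - blue(bg);
  return sqrt(dr * dr + dg * dg + db * db) > threshold;
}

// Build the mask over the downsampled grids: true where the
// figure is, false where the background shows through.
boolean[][] buildMask(color[][] live, color[][] bg, float threshold) {
  boolean[][] mask = new boolean[live.length][live[0].length];
  for (int y = 0; y < live.length; y++) {
    for (int x = 0; x < live[0].length; x++) {
      mask[y][x] = isFigure(live[y][x], bg[y][x], threshold);
    }
  }
  return mask;
}

Each cell of the mask then decides whether the corresponding cell of the pattern layer is drawn.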

I added some customizable values to make the project adaptable to different lighting conditions, skin colors, and preferences. As a user, you can move the mouse along the X-axis to change the exposure and along the Y-axis to change the transparency of the mask.
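In code, each of these is a simple mapping of the mouse position (the output ranges here are illustrative, not the project's actual values):

void draw() {
  // X-axis controls the exposure used in the background comparison;
  // Y-axis controls the alpha of the pattern mask.
  float exposure = map(mouseX, 0, width, 0.5, 2.0);
  float maskAlpha = map(mouseY, 0, height, 0, 255);
  // ... use exposure and maskAlpha when drawing the layers ...
}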

Finally, I added a GUI using the controlP5 library, so the user can pick preset patterns and color palettes as well as adjust the parameters on their own.
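A minimal controlP5 setup might look like this (the slider ranges and layout are my assumptions, not the project's actual presets):

import controlP5.*;

ControlP5 cp5;
float feed = 0.055; // controlP5 binds each slider to the field with the same name
float k = 0.062;

void setup() {
  size(800, 600);
  cp5 = new ControlP5(this);
  cp5.addSlider("feed")
     .setPosition(10, 10)
     .setRange(0.01, 0.1);
  cp5.addSlider("k")
     .setPosition(10, 40)
     .setRange(0.03, 0.07);
}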

User testing at the IM Show:

[Photos: visitors trying the project at the IM Show]