Unusual displays bring me joy… Past projects involved a wooden segment display prototype and a binary clock:
So maybe you can understand why I got intrigued by Adobe Premiere’s analytics tools, called Lumetri Scopes – in particular the Waveform and Vectorscope displays. They show the colour brightness, distribution & saturation of an image:
But is this a one-way street? I got curious and wanted to create images inside these analytics tools:
Surely this is not a novel idea but I couldn’t find much about it online – let alone code to do so. Code? Yes, we will use Python scripts to do the heavy lifting. I didn’t want to draw over 2 million pixels per image by hand. I might be weird – but not that weird.
This process involves three steps:
- Input image: A regular image file.
- Latent image: An image created via Python script, which produces the desired output image.
- Output image: What we see in the Waveform or Vectorscope display.
I will briefly introduce each display before going into how to craft the latent images that produce the desired output images inside it. Let’s start with the Waveform:
Waveform
This tool shows the colour intensity horizontally across your input image. It is used to measure the brightness and to inform choices during colour correction. A regular input image usually shows up like wispy waves. Let’s see how we can “draw” an actual image…
How it works
The Waveform shows colour intensity per pixel column, so let’s look at a single column… In fact, let’s make it even easier and consider a single pixel first:
The red/green/blue amount of each input pixel contributes a dot in the Waveform display: 0 to 100 intensity from bottom to top. So a pixel that is:
- 50% red
- 20% green
- 0% blue
becomes 3 marks in the Waveform display:
- a red dot in the middle of the column
- a green dot one fifth from the bottom
- a blue dot at the very bottom
Do this for every input pixel and you have yourself a Waveform image:
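To make the rule concrete, here is a minimal sketch of that mapping in Python. The function name and the tuple format are mine, purely for illustration – Lumetri obviously processes whole frames, not single pixels:

```python
# Sketch of the Waveform mapping: each input pixel contributes one dot
# per colour channel. The dot's horizontal position is the pixel's column;
# its vertical position is that channel's intensity (0 = bottom, 100 = top).

def waveform_dots(x, r, g, b):
    """Map one pixel (channel values 0.0-1.0) at column x to three dots.

    Returns a list of (channel, column, height_percent) tuples.
    """
    return [
        ("red",   x, round(r * 100)),
        ("green", x, round(g * 100)),
        ("blue",  x, round(b * 100)),
    ]

# The example pixel from above: 50% red, 20% green, 0% blue becomes a red
# dot in the middle, a green dot one fifth up, and a blue dot at the bottom.
dots = waveform_dots(0, 0.5, 0.2, 0.0)
```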
Drawing Waveform images
Now that we know how a pixel of an input image correlates with a dot in the Waveform display, we can exploit this to provoke a desired output. As we saw in the previous section, a position in the Waveform display is defined by:
- horizontal position in the Waveform display = horizontal position of the input pixel
- vertical position in the Waveform display = colour intensity of the input pixel
We basically lose the option to dim colours, because the colour intensity is used to place a dot along the vertical axis. In this display a pixel either has red/green/blue or it does not. No in between. Not very exciting for colour pickers…
To deal with this limitation we can check whether colours are above a certain threshold when we convert our input image. For example: if a pixel’s blue component is above 40%, we consider it blue and it becomes 100% blue in the output image. As you can imagine, that results in a pretty harsh image with 1 bit per colour channel:
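The thresholding step boils down to a one-liner; here is a sketch (function name and the 40% default are mine, matching the example above):

```python
# 1-bit-per-channel threshold: any channel above the cutoff becomes fully
# on, everything else fully off.

def threshold_pixel(r, g, b, cutoff=0.4):
    """Clamp each 0.0-1.0 channel to 0.0 or 1.0."""
    return tuple(1.0 if c > cutoff else 0.0 for c in (r, g, b))

print(threshold_pixel(0.5, 0.2, 0.41))  # (1.0, 0.0, 1.0)
```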
There is an old technique to improve the quality of images with low bit depth, called dithering (quite interesting: the word & idea have their origins in WW2 bomber computers). It is basically Pointillism for digital images; dots create the illusion of colours that aren’t part of our pure red, green and blue palette.
I used this implementation of dithering to dither the input image before converting it, which greatly improves the quality of the output image:
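The linked implementation does the real work; purely as an illustration of the idea, here is a bare-bones Floyd–Steinberg error-diffusion sketch for a single channel. Run it on R, G and B separately to get the 1-bit-per-channel latent image:

```python
# Floyd-Steinberg dithering: quantise each pixel to 0 or 1 and push the
# rounding error onto not-yet-visited neighbours, so the average
# brightness of a region survives quantisation.

def floyd_steinberg(channel):
    """Dither a 2D list of floats (0.0-1.0) in place; returns the list."""
    h, w = len(channel), len(channel[0])
    for y in range(h):
        for x in range(w):
            old = channel[y][x]
            new = 1.0 if old >= 0.5 else 0.0
            channel[y][x] = new
            err = old - new
            # Classic error-diffusion weights: 7/16 right, 3/16 down-left,
            # 5/16 down, 1/16 down-right.
            if x + 1 < w:
                channel[y][x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    channel[y + 1][x - 1] += err * 3 / 16
                channel[y + 1][x] += err * 5 / 16
                if x + 1 < w:
                    channel[y + 1][x + 1] += err * 1 / 16
    return channel

# A flat 25% grey patch dithers to roughly one lit pixel in four.
patch = [[0.25] * 8 for _ in range(8)]
dithered = floyd_steinberg(patch)
```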
Now we have a crappier version of our image. So what?
Well, these latent images have an interesting characteristic: The vertical order of pixels doesn’t matter, because the height in the Waveform display is determined by the colour intensity. Therefore we can freely rearrange pixels vertically: Sorted by brightness, random, etc. All of these latent images produce the same output image as seen above:
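Here is a sketch of one such rearrangement – sorting every column by brightness. The representation (a nested list of `(r, g, b)` tuples) and the function name are mine; the point is that permuting pixels within a column cannot change the Waveform output, because each dot’s height comes from intensity, not from the pixel’s row:

```python
# Sort each column of a latent image by pixel brightness. The Waveform
# display of the result is identical to that of the input, since only
# the column and the channel intensities determine where dots land.

def sort_columns_by_brightness(pixels):
    """Return a copy of `pixels` (rows of (r, g, b) tuples) with every
    column sorted dark-to-bright."""
    h, w = len(pixels), len(pixels[0])
    out = [[None] * w for _ in range(h)]
    for x in range(w):
        column = sorted((pixels[y][x] for y in range(h)), key=sum)
        for y in range(h):
            out[y][x] = column[y]
    return out
```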
Selectively blending the input images with the generated latent images can yield interesting results:
Vectorscope
This tool shows the hue and saturation of colours in a picture. It is used to dial in specific colours, such as skin tones. A regular input image usually shows up as greyscale patches on the circle, like you see on the left. Let’s see how we can “draw” an actual image…
How it works
The hue of a pixel of the input image defines the angle around the centre (starting with red at the top, going counterclockwise), whereas the saturation defines the distance from the centre point.
It is essentially a polar coordinate system, with the hue defining the angle and the saturation the distance from the centre:
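As a sketch, the polar mapping looks like this in Python. The coordinate convention (red straight up, hue increasing counterclockwise, saturation scaled to a unit radius) follows the description above; the exact scaling inside Lumetri is my assumption:

```python
# Map a pixel's hue and saturation to an (x, y) position in the
# Vectorscope, treating the display as a polar coordinate system with
# its centre at the origin.
import colorsys
import math

def vectorscope_position(r, g, b, radius=1.0):
    """Return the (x, y) dot position for an RGB pixel (channels 0.0-1.0)."""
    h, s, _ = colorsys.rgb_to_hsv(r, g, b)
    angle = 2 * math.pi * h             # hue as an angle, red at 0
    x = -math.sin(angle) * s * radius   # minus sign: counterclockwise
    y = math.cos(angle) * s * radius    # red points straight up
    return x, y

x, y = vectorscope_position(1, 0, 0)        # pure red: top of the circle
x, y = vectorscope_position(0.5, 0.5, 0.5)  # grey: collapses to the centre
```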
Drawing Vectorscope images
As a first step we can simply check whether a pixel of the input image is brighter than a certain threshold (for this we are essentially looking at a black & white version of the input image). If it is, we add a pixel to our latent image whose hue and saturation position the dot correctly in the output image.
After writing a script and letting the computer do this for every pixel of the input image we end up with this output image:
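The interesting part of such a script is the inverse mapping: given a target dot position, construct a latent pixel. Here is a sketch, using the same polar convention as described above (centre at the origin, red pointing up, hue counterclockwise); the fixed value of 1.0 and the function name are my choices:

```python
# For a target dot at (x, y) in the Vectorscope, pick a latent pixel
# whose hue and saturation put the dot there. Hue is the angle,
# saturation the distance from the centre; value is fixed at full.
import colorsys
import math

def latent_pixel_for(x, y):
    """Return an (r, g, b) pixel whose Vectorscope dot lands at (x, y)."""
    sat = min(math.hypot(x, y), 1.0)   # distance from the centre
    angle = math.atan2(-x, y)          # counterclockwise from "up"
    hue = (angle / (2 * math.pi)) % 1.0
    return colorsys.hsv_to_rgb(hue, sat, 1.0)

print(latent_pixel_for(0, 1))  # top of the circle → (1.0, 0.0, 0.0), pure red
```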
To achieve gradients we can add multiple dots depending on how bright the input pixel is. For the following output image I increased the available shades of grey from 1 to 8:
We end up with another crappier version of the input image, but again with interesting properties. For example, the positions of the pixels in the latent image don’t matter at all, since only hue and saturation define where a dot is placed in the Vectorscope. Therefore we can rearrange the pixels of these psychedelic latent images however we please. All of these produce the same output image as seen above:
Another interesting property is that shifting the hue of the latent image rotates the output image:
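The hue shift itself is simple; here is a sketch for a single pixel (function name is mine). Since hue is the angle in the Vectorscope, a shift of 0.5 – half a trip around the colour wheel – rotates the whole output image by 180 degrees:

```python
# Shift a pixel's hue by a fixed amount; applied to every pixel of the
# latent image, this rotates the Vectorscope output around its centre.
import colorsys

def shift_hue(pixel, shift):
    """Shift an (r, g, b) pixel's hue by `shift` (0.0-1.0, wraps around)."""
    h, s, v = colorsys.rgb_to_hsv(*pixel)
    return colorsys.hsv_to_rgb((h + shift) % 1.0, s, v)

print(shift_hue((1.0, 0.0, 0.0), 0.5))  # red → cyan: (0.0, 1.0, 1.0)
```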
Some more transformations achieved by adjusting the colours of the latent image:
Descent into madness
While rotating an input image I noticed something interesting in the Waveform display:
This looks a bit “3D”, doesn’t it? I got curious and wanted to come up with a latent image of my own that creates this fairly basic 3D grid in the Waveform display:
If you have followed me on this journey so far, I suggest you ponder for a bit what latent image you would have to craft to get this result. It is a fun little challenge, and the answer might seem trivial once you see it.
This is how I approached it:
A grid in the output image is most easily created by a grid in the latent image, so let us start with that. But because all these colours are at 100% brightness, we will not see anything yet.
To achieve a “tilt” – as if we were looking down onto the rotating grid – we can darken the latent image from top to bottom slightly (it is very subtle, so it might be hard to see).
Dark, blurry circles around the centre of the latent image create the waves.
Finally we can rotate the grid. This leaves us with this final latent image sequence that produces the output sequence seen at the beginning of this section.
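The brightness part of those steps can be sketched as a single function: full brightness, faded slightly from top to bottom for the tilt, multiplied by soft rings around the centre for the waves. All constants here are guesses for illustration; the real latent image additionally needs the grid pattern and the per-frame rotation:

```python
# Brightness field for the latent image: a subtle vertical fade (the
# "tilt") multiplied by concentric cosine ripples (the "waves").
import math

def latent_brightness(x, y, w, h, ripple_freq=4.0, ripple_depth=0.15):
    """Return a brightness multiplier (0.0-1.0) for latent pixel (x, y)."""
    tilt = 1.0 - 0.3 * (y / (h - 1))  # darken slightly towards the bottom
    cx, cy = (w - 1) / 2, (h - 1) / 2
    # Normalised distance from the centre: 0 at the centre, 1 in a corner.
    r = math.hypot(x - cx, y - cy) / math.hypot(cx, cy)
    ripple = 1.0 - ripple_depth * (0.5 + 0.5 * math.cos(2 * math.pi * ripple_freq * r))
    return tilt * ripple
```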
Feel free to download the After Effects file and play around with the setup yourself.
Sources & code
All bike videos used for the examples are taken from the beautiful “Life Cycles” movie. Go watch it – it’s free!
The code in github.com/mischakolbe/lumetri_scope_display is slow and might be buggy, but it should get you started if you want to explore further. For example: Combining the Waveform & Vectorscope latent images into one, by rearranging pixels in a clever way. Or making it more efficient by running it on the GPU.
Note: Do NOT save the latent images in a lossy format, such as JPEG. The compression blends colours of adjacent pixels and ruins the output!
I embarked on this journey much the way I hope you read this post: not knowing what I was signing up for, but getting hooked on exploring something that was objectively pretty useless, yet intriguing and oddly beautiful nonetheless.
Let’s keep exploring. I am looking forward to seeing what oddity keeps you up at night.