What Color is the Best Green-Screen Color?

A green-screen is a cinematographic processing technique where an objective (a person, an object, etc.) is placed in front of a mono-colored background so that, later in processing, the background can be cut out, leaving only the objective to be placed into a different scene. For example, this is how actors are "painted" onto scenes that do not exist in reality but are rendered using computer programs, with the intent of making the audience believe that the actor is part of the scene and even interacting with it. Typically the background color is green, which is why the technique is frequently called a green-screen, but in reality the background can be any color, ideally one that is not part of the objective and does not blend too easily with it. For example, if a blue and white teapot is to be filmed, cut out and transplanted onto a different scene, then ideally the background should be neither blue nor white, or else the cutting process might remove pieces of the teapot itself.
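As a minimal illustration of the cut-out step, the following Python sketch (using the Pillow library) makes every pixel that falls within a given tolerance of the key color fully transparent; the key color, the tolerance value and the file names are assumptions made for the example rather than fixed values.

    from PIL import Image  # Pillow

    def key_out(image, key=(0, 255, 0), tolerance=60):
        """Make every pixel close to the key color fully transparent."""
        result = image.convert("RGBA")
        pixels = result.load()
        width, height = result.size
        for y in range(height):
            for x in range(width):
                r, g, b, _ = pixels[x, y]
                # Euclidean distance in RGB space between pixel and key color.
                distance = ((r - key[0]) ** 2
                            + (g - key[1]) ** 2
                            + (b - key[2]) ** 2) ** 0.5
                if distance <= tolerance:
                    pixels[x, y] = (r, g, b, 0)
        return result

    # Hypothetical usage on a single extracted frame:
    frame = Image.open("frame.png")
    key_out(frame).save("frame_keyed.png")

The distance threshold is what makes the choice of background color matter: if the objective contains colors near the key, those pixels would be cut out as well, which is exactly the teapot problem described above.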

Background Color for Animations using Indexed Colors

The green-screen color problem is simplified when creating cartoons, where the animations are created on the computer with a "transparent" color as the background and with a fixed set of colors. Without going into too much detail, some video formats do not accept transparency, such that a background color has to be chosen instead of just leaving the background "transparent". When overlaying the animation onto a scene, the background has to be cut out again, such that the problem of picking an optimal background color that will be cut out later still persists. One approach to picking the best background color involves generating a color palette from all the frames of the objective being animated, then using the palette as a reference and picking the background color to be a color that does not show up on the palette, thereby ensuring that when the background is cut out, no pieces of the objective will be cut out with it.
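A minimal sketch of that approach, in Python with the Pillow library, might look as follows; the frames/ directory, the PNG frame format and the sampling stride through the RGB cube are assumptions made for illustration.

    from pathlib import Path
    from PIL import Image  # Pillow

    def palette_from_frames(frame_dir):
        """Collect the set of every color used across all frames."""
        colors = set()
        for path in sorted(Path(frame_dir).glob("*.png")):
            colors.update(Image.open(path).convert("RGB").getdata())
        return colors

    def pick_background(colors):
        """Return the first sampled RGB color absent from every frame."""
        for r in range(0, 256, 17):
            for g in range(0, 256, 17):
                for b in range(0, 256, 17):
                    if (r, g, b) not in colors:
                        return (r, g, b)
        raise ValueError("every sampled color appears in the frames")

    print(pick_background(palette_from_frames("frames/")))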

With a color palette made out of a sequence of frames for an animated objective, the problem is reduced to choosing a color that is farthest away from the colors on the palette. For that task complementary colors can be used, perhaps even for a larger number of colors than two by summing up and averaging the channel values, in order to determine the background color that contrasts best with the colors on the palette.
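In code, the averaging-and-complement idea could be sketched as below; taking the per-channel complement ($255 - value$) is one simple way of approximating a complementary color, and the result should still be checked against the palette as described previously.

    def contrasting_background(colors):
        """Average all palette colors, then take the per-channel complement."""
        count = len(colors)
        average = tuple(sum(color[i] for color in colors) // count
                        for i in range(3))
        return tuple(255 - channel for channel in average)

    # Example: a palette of dark reds averages out dark and reddish,
    # so the complement comes out bright and cyan-ish.
    print(contrasting_background({(200, 10, 10), (150, 30, 20)}))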

Shutter Speed, Frames per Second and Flickering Lights

Shutter speed is the short time during which a camera exposes the film to an objective. The shorter the shutter speed, the more defined the picture will be, since there is less time for movement to occur while photographing the image; conversely, the longer the shutter speed, the more likely that movement during the exposure will result in a blurred effect. When shooting video, as an evolution of a standard still-frame camera, a video camera records a given number of pictures per second, or rather frames per second (FPS), that are then strung together, resulting in dynamic motion. Very similar to cartoons, the higher the FPS, the more fluid the video appears in the end, due to having a larger amount of images to construct a fluid motion path.

Lights that are not smoothed by an AC-to-DC circuit with a reserve (ie: a capacitor or battery) that would ensure a constant, non-alternating current will trivially oscillate at a frequency clamped to the oscillation of the AC circuit. In other words, given the mains current typical of Europe, the frequency would be $50Hz$, whereas in the US the mains current oscillates at $60Hz$ ($Hz$, or Hertz, just indicates the number of on-off oscillations per second).

With those definitions in mind, lights flickering in video recordings are an indication of the relative difference between the video camera FPS and the frequency at which the lights oscillate.

  • If a camera records at more FPS than the oscillation frequency of the mains lights, then the intermittent "off" phase of the lights will be caught on video, such that the resulting footage will contain on-off flickers.
  • Conversely, if a camera records at fewer FPS than the oscillation frequency of the mains lights, then more than likely the camera will miss those "fast oscillations", such that the resulting footage will end up without any on-off flickers of the mains lights.

Historically speaking, when there were very few camera brands and makes around, whether photo cameras or cinematographic cameras, one could assess the geographic location where a video was taken just by noticing whether the mains lights flickered or not. It was possible to obtain the specifications of the camera, find out the FPS it records at and then watch the lights flicker on the recording in order to estimate, in terms of political region, where the footage was taken.

Furthermore, if one knows the FPS that footage was taken at and one measures the speed of the flicker, then it is possible to determine the mains current oscillation frequency in terms of $Hz$. The formula for finding the oscillation frequency of the mains current is the following:

$$
\begin{eqnarray*}
f_{camera} - f_{mains} = \frac{\text{flickers}}{s}
\end{eqnarray*}
$$

where:

  • $f_{camera}$ is the frames-per-second of the camera (FPS),
  • $f_{mains}$ is the frequency of the mains current in Hertz ($Hz$),
  • $\text{flickers}$ is the number of on-off sequences of the mains lights observed in 1 second of video footage.

This can be rearranged a little in order to obtain a pluggable / programmable formula by solving for $f_{mains}$:

$$
\begin{eqnarray*}
f_{mains} &=& f_{camera} - \frac{\text{flickers}}{s}
\end{eqnarray*}
$$

So, how would one do this? With the camera FPS established or assumed, one would play back the video footage and count the number of times the light flickers within a period of 1 second. That value would then be plugged into the formula as the flickers parameter and the equation solved in order to determine the mains frequency.
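As a worked example, the rearranged formula translates directly into a few lines of Python; the 60 FPS camera and the 10 observed flickers per second are assumed values chosen purely to illustrate the computation.

    def mains_frequency(camera_fps, flickers_per_second):
        """Solve f_mains = f_camera - flickers/s from the formula above."""
        return camera_fps - flickers_per_second

    # A 60 FPS recording showing 10 on-off flickers per second
    # points to a 50 Hz grid, typical of Europe.
    print(mains_frequency(60, 10))  # 50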

As a discussion, one would need to know the recording FPS of the camera, which is something that can't really be determined from other parameters. There are indicators, in terms of camera brand or the resolution that the video is shot at, such that given the quality it would be possible to assume some FPS - for example, as of 2025, vloggers typically shoot at 60fps because they want the best quality available and, without spending enormous amounts of money, 60fps is more or less the current standard of the technological sliding window.

In fact, one could actually assume some camera FPS and use it to solve the equation while judging whether the result makes any sense, say, with the help of the rest of the data, if available. Such an estimate is possible to some degree because both mains grids and camera tendencies are more or less standardized (ie: nobody would suddenly roll their own camera that shoots at $27FPS$ on, say, an amateur vlogger video, just like that and for no reason; nor would any country for whatever reason want to come up with its own mains standard, mainly because electricity can be bought and sold such that compatibility with other grids is only beneficial).
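One possible sketch of that guess-and-check procedure is the following, where the candidate FPS values and grid frequencies are the common standardized ones; any pair whose difference matches the observed flicker rate is a plausible explanation of the footage.

    COMMON_FPS = (24, 25, 30, 50, 60)   # standard camera frame rates
    GRID_FREQUENCIES = (50, 60)         # standard mains frequencies

    def plausible_setups(flickers_per_second):
        """Return (fps, mains) pairs consistent with an observed flicker rate."""
        return [(fps, fps - flickers_per_second)
                for fps in COMMON_FPS
                if fps - flickers_per_second in GRID_FREQUENCIES]

    # 10 flickers per second is only explained by a 60 FPS camera
    # filming under a 50 Hz grid.
    print(plausible_setups(10))  # [(60, 50)]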

Blurring and Partial Blurring of Videos

Depending on the software package, there might be features that facilitate blurring a video or different parts of a video. However, a software-package agnostic way exists that does not depend on a particular software brand and can be used with any video processing software in order to achieve a blurring effect.

+-----------------------------------------------------+
| Blurred Copy of Video                               | <-- track 2
+-----------------------------------------------------+
 
+-----------------------------------------------------+
| Video                                               | <-- track 1
+-----------------------------------------------------+

Simply, a copy of the original video track is made and placed on top of the original video track as an overlay, such that the original video is in Track 1 and the copy is in Track 2. After that, blurring effects are used to blur the entirety of Track 2.

Finally, in order to achieve a partial blur, Track 2 is just cropped, perhaps using key-frames as in the censoring case, in order to achieve a partial effect. In fact, the only difference between this method and the censoring black screens from the censoring writeup is that the censoring writeup just uses a solid black overlay on top of the video, whereas here a blurred copy of the video is used as the overlay instead.
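Applied per frame rather than per track, the same idea can be sketched in Python with the Pillow library; the blur radius, the rectangle coordinates and the file names are assumptions, and a real video editor would apply the equivalent operation across the whole of Track 2 instead.

    from PIL import Image, ImageFilter  # Pillow

    def partially_blur(frame, box):
        """Paste a cropped, blurred copy of the frame over the original.

        `box` is a (left, top, right, bottom) rectangle, mirroring the
        cropped Track 2 overlay described above.
        """
        blurred = frame.filter(ImageFilter.GaussianBlur(radius=8))  # "Track 2"
        region = blurred.crop(box)
        result = frame.copy()          # "Track 1"
        result.paste(region, box[:2])  # overlay the cropped blur on top
        return result

    # Hypothetical usage on a single extracted frame:
    frame = Image.open("frame_0001.png").convert("RGB")
    partially_blur(frame, (50, 50, 250, 150)).save("frame_0001_blurred.png")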

