I did not initially plan to write this critical report on lighting design in film. The first topic I submitted was in fact the wrong one, but in the course of my thesis research I came to realise that the role of lighting in film is far more complex than I had thought. It is not only a visual aid; it directly affects the audience's perception of reality and their emotional resonance. And while making my graduation design, as I began to integrate large amounts of CGI with live-action footage on my own, I found that the more elements I composited together, the less realistic the result looked. This made me rethink the direction of my research, and I ultimately decided to delve deeper into how film lighting can strike a balance between realism and artistic expression.
In the course of my research, I reviewed a large amount of literature, including classics such as Bordwell and Thompson's Film Art: An Introduction and Brown's Cinematography: Theory and Practice. These books helped me understand the evolution of film lighting from early Hollywood to modern digital cinema. I also referenced many studies on visual effects, such as Malkiewicz's discussion of natural versus artificial light in Film Lighting and Ebert's critique of the over-reliance on digital lighting in CGI films. This theoretical background made me realise that many modern films sacrifice audience immersion in the pursuit of flashy visual effects.
In addition to academic research, I also analysed some specific film cases. For example, Thor: Ragnarok (2017) uses an extremely saturated lighting style that pulls it away from a realistic tone, while Blade Runner 2049 (2017) employs heavy neon lighting that enhances the cyberpunk atmosphere but also brings a certain degree of visual discomfort. In contrast, The Revenant (2015) was shot almost entirely with natural light, giving the film an extremely realistic look, which matched the feedback from many viewers, who preferred lighting designs that enhance the authenticity of the story rather than artificial lighting deployed for purely aesthetic reasons.
The whole creative process was not easy, and the biggest challenge was to find a balance between technology, history and audience psychology. I kept adjusting the structure of the thesis, expanding from the initial technical analysis to a discussion of audience immersion, trying to explore the dual role of film lighting from a more critical perspective. During this process, I also reflected on my own experience of watching films: those that moved me deeply tended to have more natural lighting, while those that relied too heavily on visual effects often gave me a sense of ‘distance’. These personal observations ultimately became an important argument in my thesis, further supporting my core ideas.
In addition, I found that film lighting not only affects the audience's perception of reality but also shapes their emotional experience on a subconscious level. Many directors use lighting to suggest emotion: low-saturation natural light tends to make people feel immersed and close, whereas cool-toned artificial light may create a sense of distance or even unease. This also got me thinking about why the visual effects of some films feel 'hollow' or 'detached from reality': not just because they rely on CGI, but because their lighting and shadows lack any reference to the real world. To test this, I looked at audience feedback on different films and compared it with how their lighting was designed. It turned out that films shot with natural light or with simulated real light changes, such as 1917 (2019) or The Wave (2008), tended to be perceived as more immersive, whereas films that used extreme lighting effects and heavy artificial lighting were more likely to leave the audience feeling visually 'overloaded' or lacking in authenticity.
During this process, I gradually realised that lighting is not just a technical tool, but more like a ‘visual language’, capable of conveying messages in silence. How to find a balance between ‘artistic expression’ and ‘realism’ has become a problem in modern film production, and this is exactly the core issue that I hope to explore in my thesis.
In the end, this thesis not only gave me a deeper understanding of the impact of film lighting, but also a broader knowledge of filmmaking. During the writing process, I learned how to critically analyse visual elements and combine theory with practical examples. This not only helped me complete my academic research, but also made me more conscious of how to use lighting to enhance storytelling in my future work, rather than just pursuing visual showmanship.
This project was a big one that lasted almost half a year, and I'm very happy with the final result. Even now I can still find plenty of mistakes and points I could have optimised, but it's a good result for this stage. When I was working on Gonzalo's last big project in August, I was already planning to make it. I had also planned to practise scenes every week before the summer vacation started, about 8 of them, but in the end I only finished about 5 by September. All of these small practices were me imitating shots and effects from films, and a lot of the time went into learning the basics of the software. The software I used for this project was Maya, C4D, Houdini, NUKE, Substance Painter, Pr, AE, and PS. It was a great opportunity to train in these tools, especially NUKE and C4D; I learned quite a lot of new things and can now composite quickly and accurately. VFX is largely about bits and pieces of know-how: you have to look for multiple solutions to a problem, and I'm having a lot of fun with that.
The main shortcoming of this project is that the sea battle scene was not produced entirely in Houdini, due to time constraints. The magic shield effect was composited directly, and there is no fog on the sea; adding some volumetric fog would raise the quality considerably. Then there is the forest shot: I didn't find a high-quality soldier model at first, and the key issue is that I didn't capture a panoramic image along with the shot for an HDRI, so in the composite I am using other HDRI maps to mimic the real scene. Next is the magic bullet scene, where the problem is similar: I learned how to animate the bullet being fired but didn't use it, because the bullet's speed was too slow and looked unrealistic. I could have kept accelerating the particles to simulate real bullet flight, but time was limited. There is also a problem with the Part 4 scene: the airplane effect looks fake, and I need to recreate it.
But all in all, I'm confident in this project, and for the finished result and the flow of the plot overall, I feel I can give myself a score of 85/100.
This is the last chapter of this project, and I have mixed feelings as I write this blog: on the one hand it celebrates the completion of my graduation design, and on the other my school life is coming to an end, and a whole new phase of my life is beginning. I would like to thank all my teachers first of all: you played a positive role in developing my professional ability, were very tolerant in your teaching, and always kept a cheerful attitude towards life. You were more like friends, helping us plan our career paths. This chapter covers the remaining bits and pieces of the production process as well as the revision process, and at the end two versions of the video will be posted: the director's version and an abridged version.
Those who don't like it or don't want to see it are free not to watch. In fact, since the beginning of the Russian-Ukrainian conflict, many European countries have made movies or films based on the war, and those works don't deliberately remove parts of it either.
History is history and we can’t deny its existence.
First of all, I completely redid the composite shot of the tank, as the previous one had a lot of issues and needed to be rebuilt from scratch. This time I made very fine adjustments to the mask of the trees, then adjusted the material of the tank (giving it a new Normal map), adjusted its angle, and added some real-life features of the T-90, such as its very iconic infrared detection lights and green laser aiming lines, all made in C4D.
The green laser and the infrared light are animated.
Moving on to the second issue from last time: the right side of the tank stretched when it moved. I realised this was because there was a bush in front of the tank, so I used a Keyer to pull the green of this section individually, combined it with a luminance key to get a composite alpha, and then used a Grade to adjust the contrast of the alpha; otherwise a portion of the foliage could have cut through the middle of the tank. The image below shows the alpha channel of the bush.
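The luminance-to-matte idea above can be sketched in a few lines of Python. This is only an illustrative approximation of what a lumakey plus a Grade on the alpha does, not NUKE's actual implementation; the pixel values and black/white points are made-up assumptions.

```python
def luminance(r, g, b):
    """Rec. 709 luminance weighting, commonly used for luma keys."""
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def grade_alpha(a, blackpoint, whitepoint):
    """Remap alpha so blackpoint -> 0 and whitepoint -> 1, clamped.
    This is the contrast boost a Grade node applies to the matte."""
    a = (a - blackpoint) / (whitepoint - blackpoint)
    return min(1.0, max(0.0, a))

# A dark background pixel falls to 0, a bright leaf pixel is pushed to 1,
# so the foliage no longer bleeds into the middle of the tank.
matte_bg   = grade_alpha(luminance(0.05, 0.08, 0.04), 0.10, 0.50)  # -> 0.0
matte_leaf = grade_alpha(luminance(0.30, 0.70, 0.20), 0.10, 0.50)  # -> 1.0
```

The black/white points have to be tuned per shot, exactly as the Grade contrast was tuned in the composite.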
This part actually flickers a bit in the RGBA channels because I shot it on a phone, so the resolution is lower; shooting with a proper camera would fix this. I tried using AI repair to upscale the pixels, but it still looks like a mosaic.
Moving on to the character part, I changed the soldier's texture to 4K and placed the soldier on the far side of the screen instead of beside the road, which sidesteps a lot of problems, since this is a free model and not of the highest quality. I've learned my lesson: from now on it's better to use a MetaHuman or another high-quality model in the compositing workflow. For the colour of the clothes, to make him match the character in the foreground, I brought the original material and clothing texture into PS and painted the texture by sampling colours directly from the original image.
And I made a new animation for the soldier's skeleton; now the character stays on the ground the whole time. The shadows looked weird in the previous version because one of the character's feet was always floating.
Then Alwin finished some parts. I checked the files and found some problems: for example, the circled parts of this scene on the left side have mask problems, and on the right side the airplane sits too low. I want the airplane to be the main subject of the shot, so it should sit on the golden-section line.
Then I made all the voiceovers with AI and replaced the parts that were previously recorded directly from the game, so the project is now completely free of hidden copyright issues.
ElevenLabs is really, really good.
Then Manos made some suggestions for my project, starting with the opening, to which I added particle effects, made this part of the metal look redder, and added some other effects.
I also added another layer of lens halo effect to it
Then came the colour changes to the forest scenes. In the previous version I hadn't graded these shots in NUKE, only added a colour grade in Pr, so they looked off-colour compared with the rest of the section; now I've re-exported them and put them back in place. Also, the previous version of Part 2 was far too long, and I realised this was a serious problem, so when I re-edited it I deleted one scene and shortened three other shots; in total this part is now about 8-10 seconds shorter.
Then there was the colour issue in Part 4: there was a colour difference between the first and second shots. I compared both shots in NUKE and found that the right one was indeed a bit more yellow, which I hadn't noticed at all during production. So I readjusted the Grade on the right part to add a little light blue and removed the node that was making it so yellow.
While adjusting, I found that the heat-haze effect on the airplane in the previous version wasn't very good, so I rebuilt it. This time I used Noise to control the flow of the heat and animated it. To make it look real, I applied a blur after making the Roto, and this blur has a shape: it is very large at the tail and smaller on the two sides.
Then I started creating the sound-effects part. I have to state one thing first: all the sound FX in the work are open-source, free material, mostly from a Chinese sound-effects website that hosts a lot of free resources made by individual artists. For the bullet-time FX, I couldn't find many of the sounds, so I used an AI website that can generate sound FX; the effects it generates may not be the best, but at least they keep the video and sound in sync.
AI is always an aid to people
Then came the tweaks to the National Theatre shot. In the previous version there were too few elements in this shot; only the three Soviet flags and some camera effects were VFX, so I added some other elements. The first was a missile flight and explosion, as this shot is preceded by background audio announcing a missile attack on London. Then I also created the explosion's shockwave. The steps were: first key the luminance from the frame, then create a rough Roto to cover the sky, and then, to make the explosion look more realistic, animate the Roto over the houses of the distant city so the explosion spreads to more places over time instead of just sitting there like a sticker. For the shockwave I used a Gizmo that makes the screen wobble (its actual purpose is to create a heat-haze effect), and I layered multiple animations to simulate the jitter of a real shockwave.
This is not the final result; there are still some issues to address.
Then Alwin's naval battle section was all done; here's how I added effects to it. I added motion blur, generic filters, and frame noise to the firing scene (these effects are applied to all of the naval battle footage). I optimised the explosions by isolating their brightest part and controlling it individually to create a Glow effect. For the sound, I layered the sound of five shells firing in AE and matched it to the image.
Next is the ship's shield effect. Our original plan was to build the whole thing in Houdini, but time was running out; done fully in Houdini, the sea battle could have been at least twice as polished, but that would have taken about 1.5 months. I couldn't find any suitable copyright-free magic-shield material, so I bought a commercial resource to avoid copyright issues, because it works so well. In the process, I first used the Heatwave Gizmo to add a heat-haze effect at the shield's location, so that it looks like there aren't just particles inside the shield but an actual wall. Next, I drew an area with Roto so that Heatwave acts only on this region; this Roto is of course animated, because the shield spawns and the spawning process needs an animation. Then I added a glow as well as motion blur to the particle effect.
It cost me about £2.
Look at that, so beautiful!
Here I adjusted a lot of parameters, otherwise the blur effect fluctuates badly.
I also added camera jitter because the rendered FX don't have it.
Here's the final result in Pr, which I personally think is pretty good.
Then there's the shot of the missile being fired, to which I also added this magic shield, but this time with an explosion effect where a missile hits the shield; this effect is a free clip from ActionFX. In NUKE I matched the colour of the background, and when I started compositing I noticed the clip had a black edge, probably because of a missing Unpremult step when the clip was exported, so I solved the problem by shrinking the alpha by one pixel with an Erode node.
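The black-edge problem comes down to the premultiplied "over" math. Here is a tiny sketch (all pixel values invented for illustration) of why an edge pixel with baked-in black fringe produces a grey halo, and why zeroing its alpha, which is effectively what the one-pixel Erode does, fixes it:

```python
def over(fg_rgb, fg_a, bg_rgb):
    """Premultiplied 'over' composite: out = FG + (1 - alpha) * BG."""
    return tuple(f + (1.0 - fg_a) * b for f, b in zip(fg_rgb, bg_rgb))

# Edge pixel of the explosion clip: colour already darkened toward black
# (the baked-in black edge), while alpha still says "half transparent".
bad_edge = over((0.1, 0.1, 0.1), 0.5, (0.8, 0.8, 0.8))   # muddy grey halo

# Eroding the alpha effectively discards the contaminated edge pixel:
# with alpha 0 the background shows through untouched.
fixed_edge = over((0.0, 0.0, 0.0), 0.0, (0.8, 0.8, 0.8))  # pure background
```

The proper fix would be to re-export the clip with a correct Unpremult/Premult chain, but when you only have the rendered clip, eating one pixel of alpha is the pragmatic workaround.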
In fact, the effect is not very good, because a real explosion would also produce smoke, which this material lacks, and the angle I need is tricky, since the missile explodes side-on; with more time this could be done better in Houdini.
Then I added more sound effects in Pr, such as the fighter jets flying over in the previous shot; I won't play each one here.
Moving on to the first shot of Alwin's naval battle, I rotoscoped and 2D-tracked the four parts of the instrument panel in the plane, then placed a fire-control radar on it and added a Glow effect. The result is effective enough that, with the Glow illuminating it, you can't tell the element is floating.
Then came the concept poster. I mainly used material from the project for the background and mimicked the typography of many famous movie posters. On the FX side, I processed it several times in PhotoMosh after it was made to add a sense of wartime atmosphere, and I exported it as a GIF. I may also use this image as the front of a business card.
The following pictures show the whole process of Pr production and the whole process of NUKE.
This chapter documents the creation process from 12.11 to 19.11.2024, focusing on: the creation of the bullet-time scene and its compositing, the post colour grading of Part 3 and its compositing (if necessary), fixing the bugs that appeared in the previous videos, the compositing ideas for the final video, and the post dubbing process.
The topics above are covered below in no particular order.
First let's start with the basement scene. In the video, Russian soldiers kick down the blue steel door to enter the room, so I created a 3D basement scene and made the gun fire bullets that hit the magical walls; the final result is much better than the one in Part 1. In the previous scene the bullets were fired as whole rounds, not divided into warheads and casings, and I manually keyframed the firing animation, which was honestly stupid. Now I've made the distinction, and I learned to use C4D's particle emitter workflow along with a spatial velocity field, so it's now very easy to create the effect of a gun firing. I also created 3 different NATO soldier animations.
Overall view.
Electric light.
Making my own mapping in PS.
Reproduction of the blue iron gate.
The large scenes were all handmade; the iron gates, oil drums, and very small decorations are copyright-free material from Fab.com.
Moving on to the bullets, I manually created a simple 7.62x39mm bullet model in C4D, then created its textures in Substance; the model is of course divided into two parts, the casing and the tip, so there are also two different texture sets.
In the end I found a default fabric material and transferred its Normal map to the bullet, which makes the scratches more visible.
Then I adjusted the intensity of the lights and added GI lighting to the scene so the lighting quality would be better. Then I rendered a frame as a test.
Here I used the Blendspill Gizmo, which combines BG and FG and automatically handles spill (the spill itself is obtained from the denoised original RGBA minus the result of a simple Keylight). Essentially it removes the grey or black fringe that will inevitably appear around a character's edge. The usual causes fall into three cases: 1. Alpha channel problems: Keylight generates an alpha channel for transparency, but if the alpha in the background area is not exactly 0 (i.e. not keyed out cleanly), mixed colours (usually grey) can remain around the edges. 2. Premult problems: during rendering the original image may already have been premultiplied, and if the green-screen key doesn't correctly manage or re-multiply the image, the edge alpha values won't match. 3. Spill: when Keylight processes the green-screen edges, green spill may not be completely removed but instead converted to grey to avoid visible jaggies. This node handles all of this with a single click; it also incorporates Lightwrap's functionality and can control the strength based on the RGB of the green-screen footage.
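The spill-suppression half of this can be illustrated with one classic formula: never let green exceed the average of red and blue. This is just one common "average" despill rule, sketched here for illustration; I'm not claiming it is exactly what Blendspill computes internally.

```python
def despill_green(r, g, b):
    """Classic average despill: clamp green to the mean of red and blue.
    Spilled pixels lose their green cast; neutral pixels pass unchanged."""
    limit = (r + b) / 2.0
    return (r, min(g, limit), b)

# A skin-tone pixel contaminated by green bounce: green gets pulled down.
spilled = despill_green(0.8, 0.75, 0.5)    # green clamped to (0.8+0.5)/2

# A pixel with no spill (green already below the limit) is untouched.
clean = despill_green(0.5, 0.4, 0.5)
```

Real keyers blend the removed spill back as a neutral tone or a lightwrap from the background, which is the extra step that makes the edges sit naturally.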
While actually using this node I ran into a serious problem that bothered me for a long time: at first it filled the whole background with noise, even though I was already using Denoise, which was strange. In the end the cause was related to the IBK_Master node I used to grab the alpha of the character's core, and I solved it by adjusting the size of the green value.
When sampling the background green, the keyer recognised both the overly dark areas and the green reflections on the glasses as green screen, so I needed to add a mask manually. The main problems with this footage are that the clothes are too green, which creates issues, and that some shots of the environment are very dark, making the character edges hard to handle in post.
This is the green channel, which looks pretty good.
I spent a lot of time on the next step: getting the Z channel (depth) out of C4D's Redshift renderer so I could isolate the depth channel in NUKE and drive realistic depth-of-field effects. In practice, even if you do everything right in the render settings and AOV settings, you can't find the Z channel in the resulting multi-layer EXR file. This problem had been bothering me for about half a year; I couldn't find the answer on YouTube or in the official C4D tutorials, and I kept asking ChatGPT, which never gave me the exact solution either. In the end I found that the fix is to render an extra Depth pass directly to disk in the AOV manager while keeping the multi-pass Z channel enabled, so the render produces two results: the normal EXR data and the extra floating-point Depth data.
Testing whether Z takes effect. It finally worked! Adding virtual fog in NUKE by projection lets you add scene fog very quickly instead of adding it in the modelling software, which drastically speeds up the work since you don't need to render it.
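Once the Z pass is available, depth-driven fog is just a per-pixel blend. A minimal sketch of the standard exponential-fog formula the depth channel enables (the colours and density here are made-up example values, not the project's settings):

```python
import math

def add_depth_fog(rgb, z, fog_rgb, density):
    """Blend a fog colour over a pixel based on its Z-depth.
    transmittance = exp(-density * z): near pixels keep their colour,
    far pixels fade toward the fog colour."""
    t = math.exp(-density * z)
    return tuple(t * c + (1.0 - t) * f for c, f in zip(rgb, fog_rgb))

# A pixel right at the camera (z = 0) is unchanged.
near = add_depth_fog((1.0, 0.2, 0.1), 0.0, (0.6, 0.6, 0.6), 0.1)

# A distant pixel (z = 50) is almost entirely fog colour.
far = add_depth_fog((1.0, 0.2, 0.1), 50.0, (0.6, 0.6, 0.6), 0.1)
```

Because this runs entirely off the rendered Z data, the fog density can be re-tuned in compositing without touching the 3D scene, which is exactly why the extra Depth pass was worth the trouble.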
Later I used Mixamo to generate the shooting motion, then used C4D's motion editing system to adjust the timing of the shooting animation and loop it until frame 200.
One pose every 12 frames looks smoother.
Guns added and bound.
I love this one; it looks really cool.
For the actual renders, I tweaked the position of the soldiers in each shot, because in the green-screen footage my characters sometimes block the centre completely.
In the actual animation, the character moving at the back will appear in the opening shot; this is just a preview of everything, and of course the soldiers don't shoot in the opening shot, they only aim.
The animation of the bullet casings being ejected appears in the GIF above, so let me explain how it was created. At first I wanted to animate the ejected shells by hand, but then I thought: I'm a graduate student, so why not find a more convenient way? C4D has a particle emitter system (which I'd never used before), so I started experimenting with it, and the effect turned out exactly as I'd hoped: you can adjust its emission speed, angle, and frequency, and most importantly it integrates perfectly with C4D's dynamics system. I learned a lot in this series of steps, and because I have a lot of experience with C4D, I worked out all the steps on my own without consulting a single tutorial.
Of course, this is a bullet-time effect, so I also created an auto-fire effect. I again used the particle emitter system, but added a velocity field and an invisible wall to block the bullets' flight. The difficulty in this step is that the velocity field isn't there by default; you have to test the effect with other fields, such as a gravity field. By default a field applies to all the objects it affects, so you need to restrict it to a specific volume of space.
The exact steps to do this are in Breakdown
Next is the rendering of the bullets. To save time I rendered the scene and the bullets separately, so that 1) the renders are faster, and 2) I can control them independently in NUKE. The difficulty is rendering the bullets separately while still having them affected by the environment, and having them cast shadows on an invisible ground; here I added Redshift tags to the bullets and the ground and adjusted the settings in those tags.
Nice projection. It works in NUKE and looks good.
Since Alwin is in charge of the magic barrier, that's all I have for this section; I've also built a generic NUKE compositing workflow to make it easier for him to add the magic barriers later.
It's a pretty tidy set of processes: not a single node in it is redundant, and the results are high quality.
Next, because Alwin's part of the project was progressing a little slowly (Houdini is difficult), I took over that part and created a VFX scene of a missile flying over London, which I explain below.
First of all, I should note that I had planned to do this part myself from the beginning. I started looking for footage of the sky over London around October, and I ended up taking screen captures of the 3D scenes in the web version of Apple Maps, then using Pr and NUKE to stabilise the footage, speed it up, and add motion blur.
Moving on to the missile, I found an AIM-120 model on Sketchfab, then added a rotation in C4D and removed the Bezier easing from the motion curve so it rotates seamlessly.
For the missile's flame, I downloaded a copyright-free meteorite-fall clip from MotionFX and brought it into NUKE. I then applied the heat-haze preset I talked about in Part 1 to the background and added a Glow effect to the clip. This step requires controlling the flame's luminance separately: I fed the luminance-based alpha into the Glow on its own so that the smoke doesn't glow.
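The "glow only the bright flame, not the smoke" trick can be sketched as a luminance-gated glow. This is a simplified, hard-threshold illustration (a real Glow node blurs and softens the matte; all values here are invented):

```python
def glow_bright_only(r, g, b, threshold, glow_gain):
    """Add glow only where Rec. 709 luminance exceeds a threshold,
    so dark smoke in the same clip stays un-glowed."""
    luma = 0.2126 * r + 0.7152 * g + 0.0722 * b
    a = 1.0 if luma > threshold else 0.0     # luminance-based alpha
    return (r + a * glow_gain * r,
            g + a * glow_gain * g,
            b + a * glow_gain * b)

# A hot flame pixel gets boosted by the glow.
flame = glow_bright_only(0.9, 0.6, 0.2, threshold=0.5, glow_gain=0.5)

# A dark smoke pixel in the same clip passes through untouched.
smoke = glow_bright_only(0.2, 0.2, 0.2, threshold=0.5, glow_gain=0.5)
```

Feeding that alpha to the Glow is what keeps the smoke from lighting up along with the flame.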
I started out using masks for control, but the results were mediocre. You can clearly see the Glow effect in the alpha channel.
Above you can see noise in the background of the alpha channel; let me explain what it does. This is a virtual fog effect I applied to the background: I used a Noise node, did a simple 2D track of the scene, and then tuned the noise parameters to make it look more realistic.
This is the final result; it would be better if we did the whole thing in Houdini, but we don't have the time.
Next I'll show the production process for the HUD: I drew a simple animated HUD in AE and added a glow effect in Pr to give it a more sci-fi feel.
I have to say the HUD instantly makes it feel like a movie.
Then I finished the AI dubbing for the project, and I found a perfect AI model for it: ElevenLabs. It has almost no daily usage cap, the voice quality is high, and many of the paid engines are probably built on this site's model. It's also decent at producing Russian dubs, though with some minor glitches, but overall it's good. In Pr I treated the voiceovers the same way as before; for example, I used a multichannel processor on each voice to give it the effect of, say, a walkie-talkie or a radio broadcast.
Here's a video of this stage of the project. Basically only Alwin's part is left now; what I still need to produce is the colour calibration of the earlier scenes and the subtitles for the video. There may be more to come; you can see my production process in 4.
This post documents the effects compositing portion and includes some other content at the end, such as the updated storyboards. I'll also put our progress for the week at the beginning: most of the naval battle footage should be done by the end of the week (except for the effects), and the tank firing footage should have a general look as well.
In September I first started trying to create the composite shot below. Many of the steps were very similar to those used for the London project in a previous NUKE course: I shot a video at home and tracked it in NUKE. Next, I found an Su-57 model on Sketchfab and created an animation in Maya, using a similar HDRI as a Dome Light.
I also used these nodes to create an area of heat haze from the flames at the tail of the airplane, which makes close-up shots look more realistic; it's the effect you often see above cars idling in a queue on a hot day.
Next is one of the most challenging shots. It is built almost entirely on a 3D scene, into which I composited smoke (made in Houdini), a lot of soldiers, tents, several tanks, and Su-57s. The shot has only one camera, but I put a lot of effort into it. My process is similar to the one above, except that I needed to build a higher-quality scene and models in Maya. First, let's see how the scene is made.
For the rendering, since I only wanted the objects and the shadows they cast on the ground, I created two different render layers: one layer controls the shadows alone, i.e. the renderer handles only the objects' shadows and the ground (both invisible in themselves, of course). This step is very important, as it makes controlling the colour transformations in NUKE much easier. I did 2+2 renders of these objects here; it should have been 1+1, but some of the objects moved incorrectly in NUKE (I think the problem is with NUKE's 3D tracking).
In the compositing stage, the process can be divided into three main stages: first the shadows, second the objects, and third the clouds. Of course, the specific steps differ a little; for example, this scene has three airplanes crossing the camera, so that part basically comes last. Below is the node graph of the whole process.
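The reason for keeping the shadow pass on its own layer becomes clear in the first of those stages: a separate shadow pass can be multiplied over the plate with a tweakable density. A minimal sketch (the density value is an illustrative assumption, not the project's actual setting):

```python
def comp_shadow_pass(bg, shadow_alpha, density=0.5):
    """Darken the plate wherever the shadow pass has coverage:
    out = bg * (1 - density * alpha). Keeping shadows on their own
    pass lets you re-grade this density without re-rendering the 3D."""
    return tuple(c * (1.0 - density * shadow_alpha) for c in bg)

# Full shadow coverage: the plate is darkened by the density factor.
shadowed = comp_shadow_pass((0.8, 0.7, 0.6), shadow_alpha=1.0)

# No shadow coverage: the plate passes through unchanged.
clear = comp_shadow_pass((0.8, 0.7, 0.6), shadow_alpha=0.0)
```

The objects layer then goes over the top with a normal premultiplied merge, and the clouds last.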
Here’s a picture of the composite cloud, the object and the overall effect
Next is the creation of smoke in the scene. Because the angle of this smoke is quite particular, I decided to create the effect myself, which was also a chance to learn how to create smoke particles. The process is easy compared with a fluid system; the overall difficulty lies in tuning the volumetric node parameters, as the smoke needs to be emitted naturally.
Volumetric node parameters.
Final FX render.
Next up are two simpler shots, created with C4D's cloth simulation system and NUKE. The National Theatre scene is just a JPG, but I wanted some camera feel, so I added a node that simulates natural camera shake and a little motion blur. The second composite shot was a special trip I made to the bank opposite Big Ben to shoot on location. For this shot I stabilised the footage: I used a Tracker to grab 6 points on the buildings in the frame and auto-generated a Transform (Stabilize); then I animated several Soviet flags and fighter jets flying over them in C4D.
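The translation part of that stabilization can be sketched very simply: average the tracked points each frame, compare against the reference frame, and shift the image back by the drift. This toy version handles translation only (a real multi-point Tracker solve also recovers rotation and scale); the point coordinates are invented for illustration.

```python
def stabilize_offset(tracked_points, reference_points):
    """Return the (dx, dy) translation that cancels the drift of the
    tracked points' centroid relative to the reference frame."""
    def centroid(pts):
        return (sum(p[0] for p in pts) / len(pts),
                sum(p[1] for p in pts) / len(pts))
    cx, cy = centroid(tracked_points)
    rx, ry = centroid(reference_points)
    return (rx - cx, ry - cy)   # shift the frame back by the drift

# Frame 1 (reference) vs a later frame where everything drifted by (+3, -1):
reference = [(0.0, 0.0), (2.0, 0.0), (1.0, 2.0)]
tracked   = [(3.0, -1.0), (5.0, -1.0), (4.0, 1.0)]
offset = stabilize_offset(tracked, reference)   # roughly (-3, +1)
```

Using several points instead of one averages out per-point tracking noise, which is why grabbing 6 building corners gives a steadier result than a single track.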
The shadows behind the fabric are relatively simple to create in C4D: just add a Shadow material to the surface the fabric projects onto.
In the Big Ben scene, the steps were much the same as above. The difference is that I added a dynamic background in C4D while building it (this makes it easy to see if the positions are wrong, which was crucial; otherwise I would have needed to test them one at a time. I didn’t know this was possible until last week and wasted a lot of time). Then, in compositing, I masked the Big Ben clock face and the utility poles on the right side so the planes could fly over them from behind. Finally I composited multiple smoke and cloud effects in Nuke to give the result more of a battlefield atmosphere.
Then I used the Keyer approach to change the weather: the original video didn’t look very distinctive, so I used a Noise map to create slowly shifting, dark, cloudy weather.
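The Noise-driven cloud idea is just an animated 2D noise field used as a matte. As a rough illustration only (not Nuke’s actual Noise implementation), a few drifting sine octaves are enough to sketch the behaviour, with `scale` and `speed` as assumed tuning knobs:

```python
import math

def cloud_mask(x, y, t, scale=0.05, speed=0.02):
    """Cheap animated pseudo-noise in 0..1: a sum of drifting sine
    octaves, standing in for a time-driven Noise node whose output
    is used as a matte to darken the sky."""
    u, v = x * scale + t * speed, y * scale
    n = (math.sin(u * 1.7 + v * 2.3)
         + 0.5 * math.sin(u * 3.9 - v * 1.1 + t * speed)
         + 0.25 * math.sin(u * 8.1 + v * 5.7))
    # n is in [-1.75, 1.75]; normalise into 0..1
    return 0.5 + n / 3.5
```

Feeding `t` with the frame number is what makes the clouds drift instead of sitting still.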
Then I realised the centre of the frame was too empty, so I added a post-explosion smoke effect on the road next to Big Ben; the smoke element is a royalty-free ActionFX asset.
I made a simple mask underneath to hide the bridge and the trees in front, and of course added an EdgeErode to make the edges less visible. Aeroplanes will fly over this area from behind.
Next, I felt the scene was still missing some elements of war, so I used Runway’s AI image generation tool to add bullet and shell damage to Big Ben (sensible control of AI tools is a skill designers must master; technology always needs to serve people).
Next, I found the smoke effect I had just created a bit monotonous; in many films the flame at the centre reads as much more dramatic, so I added a Glow. The way I did it was to use the Keyer to pull an alpha for just the high-temperature area of the flame and Premult with that alpha, so the Glow could affect the centre part on its own.
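The Keyer + Premult trick can be sketched as a luminance ramp that isolates the hot core, followed by a premultiply so a downstream Glow only sees the centre. A minimal per-pixel sketch; the `lo`/`hi` thresholds are assumed values, not my actual Keyer settings:

```python
def hot_core_alpha(rgb, lo=0.8, hi=1.0):
    """Keyer-style luminance matte: 0 below lo, 1 above hi,
    linear ramp in between. Isolates the hot flame core."""
    lum = 0.2126 * rgb[0] + 0.7152 * rgb[1] + 0.0722 * rgb[2]
    if lum <= lo:
        return 0.0
    if lum >= hi:
        return 1.0
    return (lum - lo) / (hi - lo)

def premult(rgb, a):
    """Premultiply the colour by the core matte, so a Glow applied
    downstream only brightens the hot centre, not the whole smoke."""
    return tuple(c * a for c in rgb)
```

Anything outside the ramp goes to black after `premult`, which is exactly why the Glow leaves the surrounding smoke alone.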
This picture shows all the steps in compositing this part.
In Part 2 I did effects compositing on three scenes, so let’s start with the first one. Here I first did detailed 3D tracking of the scene, which is very important in this shot, and I also used Runway’s AI masking tool to mask out the characters (highly recommended: its automated results are fast and effective). I then added three Cubes to the 3D environment in Nuke to check that the three positions matched the camera, and exported them as FBX.
Runway
Then I imported the particles, Cubes, and camera into the 3D software, placed the tank and the soldiers according to the Cube positions, and animated the tank’s turret to make it look realistic.
For the character animation, I found a soldier model on Sketchfab, imported it into Mixamo, and created several animations such as raising the gun and aiming. I then stitched these animations together in C4D using the motion editing system and the bone rig (I needed to bind the gun to the hand).
When rendering, I rendered each of the three separately, because their positions differed a lot and I didn’t want a wrong position to force a redo in compositing. At the same time, I did a manual Roto of the tree next to the tanks in Nuke, because the tanks were supposed to appear behind that tree, not in front of it, and then graded and colour-corrected them. In the final composite, the BMP tank doesn’t appear behind the figure: after many attempts, having the figure occlude it turned out to be a very big pain, as it was very difficult to make an accurate Roto of the motion-blurred parts of the figure (it was possible, but it would have taken me at least a week, and the BMP tank behind is relatively small, so it wasn’t really necessary).
Here the shadows of all the objects are rendered as a separate layer, so there are actually four renders for this shot.
But the result so far is still unsatisfactory, because the colours of the tanks and the soldiers look a bit strange: they are too far from the colour of the soldier in the foreground. Another problem is the T-90’s position; it’s a bit too big, which makes it stretch at the edges of the frame as the camera moves. My conclusion so far is that the camera focal length in C4D differs from the real one, and I’m doing my best to optimise both parts right now.
Here’s the process for another shot. I’ll skip the instructions because the steps are almost exactly the same as above, and even simpler since there’s only one object; the only difference is that I keyed the characters by hand.
The yellow area shows the flow of the above two shots in Nuke
Moving on to the next FX shot, in the forest: the soldier hears the sound of a fighter jet, then walks out to look at it. Here I followed the same steps as before, but this time I was already very experienced, so the production cycle was much shorter and I finished this shot in only three hours. I used Runway for the tracking, and because the AI results can sometimes be a little problematic, I expanded the AI mask slightly before tracking so that it would cover me completely. This step is actually the most important: I feel VFX compositing is like building a high-rise apartment block, and this step is the foundation. If the foundation is ruined, you can’t fix the house on top of it.
This is what Runway automatically generates
Then I animated and rendered the fighter jets, and retimed the start of the video in Nuke because I wanted it to begin at the end of the clip. To create the effect of a plane flying out from behind the bushes, I used Keylight to pick up the colour of the sky and a Roto to limit the Keylight’s range. I then blurred the edges of the mask slightly to match the motion blur of the fast-moving aeroplane.
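Conceptually, Keylight plus a limiting Roto works like a distance-from-screen-colour matte that is only allowed to act inside the roto. A toy per-pixel sketch of that idea (the assumed `screen` colour and `tol` tolerance are placeholders, not real Keylight parameters):

```python
def screen_key_alpha(rgb, screen=(0.35, 0.55, 0.9), tol=0.3):
    """Distance-from-screen-colour key: pixels near the picked sky
    colour go transparent (alpha 0), distant pixels stay opaque."""
    d = sum((a - b) ** 2 for a, b in zip(rgb, screen)) ** 0.5
    return min(1.0, d / tol)

def limit_by_roto(alpha, roto):
    """The Roto (0..1) restricts where the key may act; outside
    the roto (roto = 0) the pixel is forced fully opaque."""
    return alpha + (1.0 - alpha) * (1.0 - roto)
```

Softening the roto edge (values between 0 and 1) is what lets the key fade out gradually to match the plane’s motion blur.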
I also added hot exhaust haze to the tail of the fighter, which of course only appears on the rear of the nearest aeroplane.
I have been working on this FMP blog since 5 November, though the project itself started in September, as I had been planning it for a long time. However, many of the earlier processes were not documented at the time, so they may be covered relatively thinly here, but I’ll still do my best to introduce them.
This post records some of the current, fragmented production process as well as the project introduction; the specific compositing steps and scene setups will go in Part 2. At the end, this post includes the production process of Bullet Time 1.0 and of the satellite scene (because they are relatively short).
Project: Red Dawn
This project was inspired by one I created two years ago: a 3D animated short film about anti-colonialism and hegemony, in which I used aliens as colonisers invading the Earth as a form of neo-colonialism. I researched a lot of history, mainly that of ancient countries, and explored the question of absolutes in human historical development. For this project I focused on the war in Ukraine and used it as a blueprint to create a fictional story of World War III. Here is my story, with a short version at the end.
Next, in September, I gathered a lot of inspiration from special effects films, and many of those shots later became objects of study for this project.
Later I drew this storyboard. Its order may be a bit problematic because I didn’t follow the actual sequence of the shots: I made many modifications to the overall story along the way (for example, the bullet-time shot changed many times). In production this project is actually divided into four parts, and I’ll introduce the meaning of each chapter one by one later on.
Here is the line list for the video. Before this week the plan was to have three voice actors voice the work, but I ended up cancelling that due to the sensitive elements this project may involve. I’ll be using AI dubbing instead; I’ve finished a few clips so far and will put some of the audio here.
The next step was the audio production process, I used multiple platforms and software to create the sounds and BGM for this project
I used C++ to extract voice pack #1 of Verikov, a character in the game Escape from Tarkov, and translated the snippets that might be usable.
For the BGM, I chose a theme from the Soviet DLC of the game HOI4 (Hearts of Iron IV): a new arrangement of the Russian song “Katyusha” with many futuristic elements, which gives the whole track a more lyrical, futuristic feel. I think it’s perfect for the opening of this project.
The full version of the song, which you can play as BGM, continues below ^ ^
All the footage here was shot in China at the beginning of October, as that worked better considering the need to use a gun. Of course, before that I had already tried a green-screen shoot in my apartment in the UK (for the bullet time), and the results were very poor: the green screen was of very low quality, and neither NUKE nor AE could extract the green-screen information correctly.
Here’s a frame from my live-action footage, shot on an iPhone 15 Pro at 4K 30 fps (I also tried shooting the bullet-time effect with my phone, but no matter what, the phone produced some vignetting and too much noise).
Alwin and I also finished the green-screen shoot at school last week. We rented LCC’s best cinema camera, the Blackmagic Mini 6K, which definitely provides a great foundation for post-production. Having learned from the failure of my last shoot at home, this one went quickly, though there are still some very minor issues that may not be visible in post unless one looks very carefully.
Here are the results of the first green-screen shoot. You can see the green screen is very unclean, especially around the feet, which can cause a lot of problems (the main issues being that the folds of the fabric are too dark, and my shoes are also black).
As the main person in charge of the project, I also wrote a detailed weekly worksheet, to which I make small changes based on the actual difficulty.
We also summarised his part in class this week in accordance with the storyboard, and finalised all the subplots.
In the section on shared assets, I created a sheet of 3D assets so that it would be much easier for us to call them up in post. The main one is the T-90 model, because we both need to use that asset.
Next came the video edit. I used Pr to make a rough cut of the video; some of the special effects shots were also produced with plug-ins in Pr.
For the sound mix, I used a multi-band compressor to give different characters to the sounds of passing bullets, so that they sound as if fired from different angles and distances. I also used Key to control the sound of the aeroplane flying over the camera. Of course, Manos told me today that I need to pay attention to the difference in speed between sound and image, and I’ll deal with that in the later edit. The plugin below is used to create the lens flare in the title; it’s a 3D component that automatically generates a scene and a camera, and its only downside is that it drastically slows down rendering.
For the title, I used C4D with Redshift. I created two materials, one for the rust and one for the metal, then blended the two effects with a Mix material.
I also created a moving light as well as a camera to make the effect look a bit better
At the beginning of Part 1, I referenced a lot of sci-fi films as well as COD11: Future War, some of whose chapters involve satellites and the Earth, so I created a satellite shot too. That shot connects to a later one, which I’ll write about in the blog after Alwin finishes it: a hyper-realistic view from a satellite’s perspective overlooking the Earth.
The Earth is a simple sphere with a googled panorama texture of the Earth with a blue halo; the satellite model came from Sketchfab.
Although the part below won’t appear in the finished product, I think the learning steps in it are still important. Here are the detailed steps of how I used Maya and NUKE to create Bullet Time 1.0 (it’s a real shame the final quality is so poor; it took me a week, since the rendering, filming, and creating the bullet’s trajectory all take time).
Let’s put up a clip of Bullet Time 1.0 first for easy cross-checking 😉
First, after shooting, I denoised the video so the Keyer tool wouldn’t pick up the noise; then I tracked the footage and imported the information into Maya.
Of course, static shots can simply skip this step. I processed the green screen directly using the IBK workflow, but it didn’t work well; I also tried Keylight with similar results. Still, IBK is the best choice here, because my background was very cluttered.
I also used Primatte to round out the workflow. This node can quickly pick colours to separate BG and FG; I usually use it to extract the character’s core area, then use EdgeBlur and FilterErode to adjust the size and softness of that core. The purpose of this core alpha layer is to merge it underneath the character after IBK or Keylight, because those two tools also remove green from the character itself; without a core layer, the alpha in many places would not be 1.
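The core-matte step can be thought of as taking the max of the soft edge key and the hard Primatte core, after the core has been eroded and softened. A rough 1-D sketch of that idea (the real FilterErode and EdgeBlur work in 2-D, of course):

```python
def combine_mattes(edge_alpha, core_alpha):
    """Merge (max) the soft IBK/Keylight edge matte with the hard
    Primatte core matte, so the character's interior stays solid
    alpha = 1 even where the keyer ate into green-spill areas."""
    return max(edge_alpha, core_alpha)

def erode_then_blur(alpha_row, erode=1):
    """1-D sketch of FilterErode + EdgeBlur on a matte row:
    shrink the core with a min filter, then soften it with a
    small box blur so the hard core never pokes past the edge."""
    n = len(alpha_row)
    eroded = [min(alpha_row[max(0, i - erode):i + erode + 1])
              for i in range(n)]
    out = []
    for i in range(n):
        win = eroded[max(0, i - 1):i + 2]
        out.append(sum(win) / len(win))
    return out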
Look at my body: the matte is now fully white.
The overall node diagram is shown below
Next came the Maya part. I first found some house footage on Bridge and integrated it in Maya, then found a 9 mm bullet model on Sketchfab and added its position and firing animation in Maya. I used the Bullet system to animate the drop, and then imported the tracked camera information.
I chose EXR when exporting because I needed the scene’s depth information to create the focus effect, and at the end I added motion blur to the bullets in Nuke.
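The depth channel drives the defocus: the further a sample is from the focus plane, the larger the blur radius, up to a clamp. A minimal sketch of that mapping; `falloff` and `max_radius` are assumed illustration values, not my render settings:

```python
def defocus_radius(depth, focus_dist, max_radius=8.0, falloff=2.0):
    """Blur radius (in pixels) from an EXR depth sample: zero at
    the focus plane, growing linearly with distance from it,
    clamped at max_radius so far objects don't blur to mush."""
    r = abs(depth - focus_dist) * falloff
    return min(r, max_radius)
```

This is why an EXR with a proper depth pass matters: an 8-bit format would quantise `depth` and the focus falloff would band visibly.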
This post is about the final compositing, with a video of the last stage (it works very badly right now, but it’s the only thing we can submit at the moment; I’ll show the final version next semester). Our team uses GitHub to record every change a team member makes to the project and to track important issues within it. Here is the link: Issues · Sturmt1ger/The-Bag-Choice (github.com)
First, below are the final result at this stage and a snippet of my personal VFX breakdown for the team.
In this phase I was responsible for quality control. As of the end of June, I had only received a video that had barely progressed; the last time I sent the team my file was the end of May, and that video was pretty much the same cut as the one from the beginning of May (except for the effects shots). The video below shows the file from the end of June.
So I asked the final compositor to make changes and wrote a change proposal, complete with images.
I communicated with them several times, but there was still no progress; the explanation I received was that the file I had sent the team wasn’t usable, so I redid a new NUKE file. But I found the result unacceptable; please see the picture below. On the left is the file I sent (the saturation is natural; look at the cloud in particular, as I don’t know what the cloud in the right-hand picture is supposed to be), and on the right is the new version (distorted and overexposed; it looks like a cartoon).
I’ve raised the specific issues in the Fix Report above; I think resolving them will greatly improve this shot.
Critical Reflection
My personal problem in this project is that I didn’t propose more than one FX shot at the beginning, which would have enriched our final video. In fact, more than one appeared in my original storyboard, but for multiple reasons I didn’t make the second one at a later stage.
The second point: for the grass, I wanted to make a growing effect, but unfortunately I couldn’t find any tutorials for it.
The third point is the division of labour in the group. I should have explained to the others at the beginning that I wanted to do the final composite, because this step is very important; I’m very confident in my ability to monitor the aesthetics, and there would have been far fewer problems if that step had been mine.
The fourth point is keying. Our team was very hasty when confirming the VFX shots at the start, so I could only use a semi-AI approach to key the characters, which leads to very mediocre quality. In future I will work out the compositing process before production starts, which will avoid a lot of trouble; and if the characters are shot on green screen, the quality of the work can be greatly improved.
Scroll down to see more of the previous process ^ ^
This post is the final documentation of a personal project, featuring a demonstration of the final result, adjustments to the blood shading and the landscape shots, and a making-of video of the overall production process.
Here’s the final artwork, with the VFX breakdown below.
For the shadow adjustments, I redid all the blood in NUKE, then split the original video into three layers and added them in order to the main pipeline; the two shadow layers use the same parameters and feed two Blur nodes.
I then added a Roto channel to each shadow layer so that they appear only on the left and right sides respectively.
In the landscape frame, I noticed the camera shake was causing the blood EXR sequence to drift out of register, and a portion of the blood video was now inside the frame, so I added a 2D track to it to match the camera shake.
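Matching an element to a 2D track amounts to copying the track’s per-frame drift onto the element, relative to a reference frame. A minimal sketch of that match-move, assuming one track point per frame:

```python
def matchmove(element_pos, track, ref_frame=0):
    """Re-position a 2D element per frame so it follows the
    camera shake recorded by a single 2D track.

    element_pos -- (x, y) of the element on the reference frame
    track       -- list of (x, y) tracked positions, one per frame
    """
    rx, ry = track[ref_frame]
    return [(element_pos[0] + x - rx, element_pos[1] + y - ry)
            for (x, y) in track]
```

The element inherits exactly the plate’s shake, so the blood stays pinned instead of sliding against the footage.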
I had online meetings with my teacher over the past three weeks, from which I made some adjustments to the paper; this post documents the results of the final Term 3 paper and the next steps. The essay’s Literature Review, Draft Chapter 5, and Draft Chapter 6 sections are now complete, and the Literature Cited section records 21 valid references in standard form.
The Word file below is over the required word count, but given that three weeks is more than enough time, a shorter version will follow, for example by removing the section on the work’s main point of conflict (around 150 words).
This week I completed the entire Fall of London project, successfully compositing the flames and tweaking various values such as those of the nuke explosion, and had my assignment critiqued in class by Gonzalo, who asked me to make a few tweaks later on.
Here’s the full video presentation
I added the nuclear fallout cloud and made it appear behind the mushroom cloud; here I used the node’s built-in tools to change the order of the masks.
Then I made smoke trails on the windows, mainly using the Grade and Roto nodes, and imported the flame elements into NUKE in 3D mode.
I then added a Glow node for the flames, but found a softer alternative that automatically samples the environment around the object to generate a more realistic effect.
It’s a smoky wall effect.
What follows are some minor corrections I made after my teacher’s in-class critique. First, the nuke explosion: Gonzalo noticed on the big screen that some debris ran through the mask into other areas.
Then there are the flaming windows; I’ve strengthened the effect of the flames on their surroundings.
Then I redid the track on the Shard building, which is currently jittery; the changes will fix some of this.
There’s also the bright part of the bomb when the nuke explodes. Gonzalo told me that flame compositing is one of the most complicated steps in NUKE, because we don’t know how a flame like this would actually look to the human eye, especially with explosions, and it’s generally hard for designers to match the brightness of the centre of an explosion. But I still think the centre of my explosion is currently too dark, and so is the rest of it, so I tweaked it again.
I mainly used the Keyer node to pull a matte for the brightest inner part, then used a Grade node to control that area’s brightness and colour.
Then I found that the ColorCorrect node was causing white edges to appear at the borders of the frame. Looking at the blue channel, I saw that raising the highlights turned the black edge areas blue (the original values there were not 0, so they became very obvious when the contrast was pushed up), so I cancelled the ColorCorrect adjustments.
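The blue-edge problem comes down to “black” pixels that are not actually 0: any highlight gain scales them too, and the change becomes visible once contrast rises. A toy illustration of that failure mode (not the ColorCorrect node’s real maths; `gain` and `threshold` are assumed values):

```python
def highlight_gain(value, gain=2.0, threshold=0.0):
    """Naive highlight gain: every value above threshold is scaled.
    A 'black' edge pixel that is really 0.02 rather than 0.0 gets
    pushed to 0.04 in the blue channel and reads as a fringe,
    while a true 0.0 black is unaffected."""
    return value if value <= threshold else value * gain
```

This is why checking the individual channels (here, blue) near the frame edge catches the problem before it shows on a big screen.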