Categories
NUKE

NUKE 2 Weeks Learning Summary and Assignment Progress

This fortnight we learnt more about compositing techniques and green screen keying.

PART I: LEARNING

The picture above shows a Gizmo node. It is different from Keylight: a Gizmo can be built to handle more colour sources, while Keylight is the more comprehensive tool.

The first thing we learnt about was luminance. The core of green-screen compositing in NUKE lies in a handful of nodes, each with its own unique role and speciality, and I will explain them in detail below. The workflow is nothing like removing a green screen in AE: NUKE is more powerful, and it can also preserve the motion blur of the original image.
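To pin down what "luminance" means here, a stand-alone sketch in plain Python (my own illustration; the Rec. 709 weights are a common default assumption, not the maths of any one NUKE node):

```python
# Rec. 709 luma weights: green dominates, which is one reason a bright
# green screen separates so cleanly from dark foreground detail.
REC709 = (0.2126, 0.7152, 0.0722)

def luminance(r, g, b, weights=REC709):
    """Weighted sum of the RGB channels -> a single brightness value."""
    wr, wg, wb = weights
    return wr * r + wg * g + wb * b

# A saturated green pixel keys far brighter than dark hair:
print(round(luminance(0.1, 0.8, 0.1), 4))     # 0.6006 (green screen)
print(round(luminance(0.05, 0.05, 0.05), 4))  # 0.05   (dark hair)
```

A luminance key simply thresholds this value, which is why it separates a bright screen from a dark subject but struggles when both are similar in brightness.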

The HueColor node is an old friend from our last lesson! In the last assignment this node worked well with the generic colours of the screen, blending objects into the environment. In the green-screen workflow, though, it is almost useless, as you can see from all the residue clinging to the character's hair.

In this section we found that before green-screen keying or colour correction, the footage must be denoised first: the noise makes the key very inaccurate, because noise usually contains all three primary colours.

Sensible use of Shuffle to separate channels, followed by a re-Merge, lets you transform the colour of just one part of the image; as the picture shows, a Roto combined with a multi-channel composite recolours only the region you want.

Using a Luminance key and adjusting its values reveals the different brightness levels, and the EdgeDetect node produces a similar effect (on this concrete wall).

A combination of the above techniques makes the sunlight in the middle of this image huge and magical!

IMPORTANT:

1: IBK stands for Image Based Keyer. It operates with a subtractive, or difference, methodology. It is one of the best keyers in NUKE for pulling detail out of fine hair and severely motion-blurred edges.

2: The ChromaKeyer tends to work better with more evenly lit screens in a well-saturated colour. A bonus of NUKE's ChromaKeyer is that it takes full advantage of any GPU device in your system.

3: Keylight: a really great all-round keyer, and it handles colour spill too!

4: Primatte is what is called a 3D keyer. It uses a special algorithm that places the colours in a 3D colour space and builds a 3D geometric shape to select colours from that space.

5: Ultimatte: its advantage is that you can pull phenomenal keys with really fine detail, and extract shadows and transparency from the same image.
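To make point 1's "subtractive or difference" methodology concrete, here is a minimal stand-alone Python sketch (my own illustration of the general idea, not IBK's actual algorithm): the matte comes from how much each pixel differs from a clean plate of the empty screen.

```python
def difference_matte(fg, clean, threshold=0.2):
    """Alpha from the largest per-channel difference between a
    foreground pixel and the matching clean-plate pixel. Pixels that
    match the empty screen get alpha 0; pixels that differ are kept."""
    diff = max(abs(f - c) for f, c in zip(fg, clean))
    return min(diff / threshold, 1.0)

screen = (0.1, 0.8, 0.1)                          # clean green-screen pixel
print(difference_matte(screen, screen))           # 0.0 (pure screen)
print(difference_matte((0.5, 0.4, 0.3), screen))  # 1.0 (solid subject)
```

Fine hair and motion-blurred edges survive because a small difference from the plate yields partial alpha rather than a hard cut.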

Keylight is everything! When sampling the green screen you can simply set the screen colour's green value to 1, so the keyer pulls all the green out of the frame, and then process the different parts of the original RGBA through a Merge node, which produced this funny image!

The IBK setup is a bit more complicated: it needs its nodes configured manually, step by step, first blending the plate into a clean green and then subtracting it. Tedious, but the result is okay.

Then comes the real work: you need to process the screen in three parts, which gives the best and most controllable result, because the green screen is often not pure green and also contains shadows. So we divide the screen into the ALPHA, the BG and the DESPILL MATTE.
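Two of those three parts can be sketched in stand-alone Python (my own illustration of common textbook formulas, not the exact maths of any NUKE node): a screen matte from how much greener a pixel is than its other channels, and a despill that clamps green to the average of red and blue.

```python
def screen_matte(r, g, b):
    """How much the pixel looks like green screen: green minus the
    larger of red/blue, clamped to [0, 1]. Invert it for the subject."""
    return min(max(g - max(r, b), 0.0), 1.0)

def despill(r, g, b):
    """Classic green despill: green may never exceed the average of
    red and blue, killing the green fringe on edges and hair."""
    return (r, min(g, (r + b) / 2.0), b)

print(round(screen_matte(0.1, 0.8, 0.1), 2))  # 0.7 -> clearly screen
print(despill(0.3, 0.8, 0.2))                 # green clamped to 0.25
```

The despill matte (the amount of green removed) is itself useful later, e.g. for adding background colour back into semi-transparent edges.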

PART II: BASEMENT HOMEWORK PROCESS

Finally learned MAYA's multi-channel rendering skills. So hard, I had to ask three people before I got it! But now I'll be able to make some interesting stuff!

Perfect ROTO

Work with the different Shuffle channels, each Shuffle processing only a little bit. Step by step, don't rush.

I added two LightWrap nodes so my tank blends in more with the background, and I set them up with a couple of keyframes, otherwise the edges of the tank at the beginning of the video would be affected by that wall error.

Categories
NUKE

NUKE Compositing and Progress of Work

This week we had a refresher on multi-channel compositing and put the results into the basement homework, and we also learnt some new techniques such as light matching and shadow compositing. A very big and difficult lesson!

PART I: LEARNING

Light matching

1: Do not change the distortion of the original video when solving with the Distortion node; keep it as a separate file and name all the videos.

Remove distortion before solving the marks

3: In multi-channel compositing, don't create multiple colour nodes to process a single element (e.g. the reflections); just merge it through a single Grade or other colour-processing node.


4: In the Grade node, sampling a colour of the image into the whitepoint value remaps that colour towards white (sample a region of fairly average brightness, and take care not to change the picture too much). Using gain to sample a colour A tints the whole image towards colour A (so the graded image blends into the background). Blackpoint picks up the darkest colour of the object, lift picks up the darker colour of the background, and multiply picks up a colour in the background that you want the object to blend into.
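As a sanity check on those sampling tips, here is the Grade-style maths in stand-alone Python (a sketch based on my understanding of the node's published formula; treat the exact form as an assumption):

```python
def grade(x, blackpoint=0.0, whitepoint=1.0, lift=0.0, gain=1.0,
          multiply=1.0, offset=0.0, gamma=1.0):
    """Grade-style remap: blackpoint/whitepoint pick the input range,
    lift/gain pick the output range, then multiply/offset/gamma apply."""
    a = multiply * (gain - lift) / (whitepoint - blackpoint)
    b = offset + lift - a * blackpoint
    return (a * x + b) ** (1.0 / gamma)

# Sampling the object's darkest value into blackpoint pins it to 0,
# which is exactly the "blackpoint picks up the darkest colour" tip:
print(round(grade(0.08, blackpoint=0.08), 6))  # 0.0
print(round(grade(1.0, blackpoint=0.08), 6))   # 1.0
```

The same shape explains the other tips: sampling into lift raises where black lands, and multiply scales everything towards the sampled background colour.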

5: Ctrl+Shift-dragging while sampling selects the average RGB value of a region in the frame.
6: Merge (divide) outputs the B input divided by the A input for each RGB channel.
7: After Shuffle(id) is connected to Keylight, you can select a 3D part just by changing the screen colour.
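Point 6 in stand-alone Python (an illustration of the per-channel maths, with a divide-by-zero guard that the real node handles in its own way):

```python
def merge_divide(b_pixel, a_pixel, eps=1e-6):
    """Merge (divide): output = B / A for each RGB channel.
    Dividing a plate by a blurred copy of itself is a handy way to
    even out patchy screen illumination before keying."""
    return tuple(b / a if abs(a) > eps else 0.0
                 for b, a in zip(b_pixel, a_pixel))

print(merge_divide((0.4, 0.8, 0.2), (0.5, 0.8, 0.4)))  # (0.8, 1.0, 0.5)
```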

HueCorrect

8: Changing a colour in HueCorrect is easy: select the curve you want to change, then hold down Ctrl and Alt to add a point on the yellow line.
9: The Toe node matches the blacks of the original plate; use lift to sample the darker areas of the original, and the image instantly sits into the plate (the shadows end up neither too bright nor crushed to black).

10: The Exposure node can be used to adjust the exposure of the image.
11: Connect the specular pass to a Glow node and sample a colour A to make colour A glow.

PART II: PERSONAL WORK

I tried using C4D for rendering at first, and the two images above went through the OCTANE renderer. Don't they look good! But alas, after a gruelling three-hour struggle, I failed! I found that no matter how the OCTANE renderer is set up, it cannot render the multi-channel EXR file correctly. Sometimes there is no RGBA image, sometimes there is no alpha channel, OCTANE lacks many colour channels such as SSS and metallic, and many of its channel names don't match NUKE's.

The image above is an example of what I mean: it's not hard to see that many channels are missing from the file on the left, many of them black. But that's not even the worst part. The worst part is that if I put the left file into the scene, the scene will contain the "ground" under and behind the tank, which is not what I want: I want the tank's shadow, but without the "ground".

Despite the failure, I did successfully import the wall-shattering file I made in HOUDINI for the tank's firing into NUKE, yay!

I then imported the tank’s ABC file directly into the SCENE node to complete this week’s stage assignment.

Categories
Practice

3DEqualizer Learning Notes

These two weeks we have learnt this powerful 3D tracking software. Unlike NUKE, it lets all tracking be customised to the user's needs, and it has a huge advantage when tracking faces. Many film and TV studios use it for effects on faces as well as bodies, especially Marvel.

3DE Showreel

1: This software operates very differently from other packages. To begin, you need to open the file yourself from the menu bar; this is the first step.

2: First adjust the toolbars in the view from the CONFIG interface: select the commonly used ones and close the unwanted ones. Then search Google for the VFX CAMERA DATABASE, find the camera that shot this video on the site, and manually enter its parameters into 3DE, so that the later lens-distortion step works from more accurate data.

3: Next you need to manually create a series of tracking points and track them correctly. Of course, these points don't always track well; sometimes you need to stop them manually and re-track them at the right moment.

Creating some tracking points

4: Finally, open the PARAMETER window (press ALT+C to open it quickly). Inside is a 3D view generated from the tracked points: the more tracking points, the more accurate the result. You should also minimise the points' drift so the result is satisfying (and later effects work becomes more convenient and accurate).

The topmost graph shows the deviation of the tracking points; the smoother the green line, the better.

5: The next step is tracking the portrait. Create a new group in the left toolbar and name it Head; this keeps the process tidy and lets you make changes at a later stage.

Tip: during tracking (environments as well as people) you can turn on the colour controls and lower or raise the contrast and brightness to make the features you are tracking more distinctive, so the computer locks onto those points more quickly.

6: When tracking a character's face, you need to divide the face into several areas. The eyes especially are best tracked all the way around, and you should have more than 8 points from the forehead to the nose, so that the 3D items you add later are more accurate (the more, the better).

7: The next stage is adding the 3D object. Select your OBJ file in the interface, then open an F6 window (the 3D view), move and rotate the object into the right place, and close the 3D window when you are done. Then select all the tracking points of the face and match them to the 3D model; now your 3D model will follow the character!

Tip: when tracking very large scenes, sort the tracking points into different colours so the next round of corrections is easy.

Categories
NUKE

NUKE Multichannel Compositing & Modelling

This week we've been learning NUKE's multi-channel compositing for processing EXR images, and I've been doing some preparatory work for modelling the underground scene.

PART1

The main adjustments I made were to the sports car's reflection layer and metal layer, the two attributes I think matter most in this night image.

And in this photo there is a lot of water on the ground reflecting the green light source on the left side of the image, so I added green to the Metallicity node to keep it realistic.

I tried lowering the sports car's overall brightness as well as some contrast, but this makes the car very dark, even though it does make it blend in nicely with the background.

In the last step, I wanted the car's lights on, with some fog in the foreground and the light shining through the fog to create a God light, so I tried to build a God-light beam.

Here's my final render. I think it still needs a lot of improvement: the side of the car nearest the camera needs more shadow, and the lower part of the car doesn't feel like it matches the ground. Both are in dire need of work, and I'll address them in the next session.
But overall I am happy with the colour of the car.

PART II

I still want to make something that interests me, and I think this model is more difficult than a steam engine or other industrial models. I found that out in the making: the caterpillar tracks are full of difficulties.