Cyanilux

Game Dev Blog & Tutorials

Intro to Shader Graph

Hey! This post is a detailed introduction to using Shader Graph - a node based shader editor for Unity that is provided for the Scriptable Render Pipelines (SRPs), including the Universal Render Pipeline (URP) and High Definition Render Pipeline (HDRP). The post goes over how to use the tool and its features, not necessarily how to create shaders using it - I have other tutorial breakdowns that go over creating specific shader effects if you’re interested in that.

Shader Graph is also now supported in the Built-in Pipeline as of Unity 2021.2+, though it may be limited in some features. It is not supported in earlier versions.

Also note, you may also see the “Lightweight Render Pipeline (LWRP)” mentioned online, which was renamed to Universal RP during development. You may still find LWRP listed in the package manager but the newer versions (e.g. v7 onwards) should be identical to URP. If you are installing the package manually, I’d suggest just using the Universal RP one instead.

If you aren’t familiar with what shaders and materials are, I’d recommend looking at my Intro to the Shader Pipeline post before reading this one.

(Also note, any keyboard shortcuts will be based on Windows)

Sections :


Installing Shader Graph

Shader Graph is a node based shader editor that is mostly intended for the Scriptable Render Pipelines. But it does also have some Built-in RP support now for Unity 2021.2+. If you’re on that version or higher, you can install the Shader Graph package via the Package Manager (accessed via Window on the top menu).

For the Universal RP and High Definition RP, Shader Graph is already included in their packages so will automatically be installed alongside them. Select one of the URP/HDRP templates when starting a new project or install the URP/HDRP package manually via the Package Manager.

If installed manually, this will also require following some additional setup - such as creating and assigning a Pipeline Asset under the Project Settings → Graphics tab. The asset can also be overridden per Quality setting. You’ll also need to convert materials to use the shaders from the appropriate pipeline; this can be done automatically from the Edit → Render Pipeline menu but will only work for the Standard built-in shaders. Any custom shaders cannot be upgraded automatically and will require rewriting. For additional setup instructions see the “Getting Started” tabs in the documentation :

I would mainly recommend that beginners start with URP as it is much simpler than HDRP (and most of my tutorials are written for URP). Some tutorials may also work in HDRP but may require some additional tweaks (due to differences like Camera-Relative rendering and different master nodes/stack).

There is also documentation for Shader Graph. The Node Library in particular can be very useful for learning what each node does, but it can also be accessed in Shader Graph by right-clicking a node and selecting “Open Documentation”.

Also be aware you can view your current Shader Graph version in the Package Manager window (accessed via Window on the top menu). You might also be able to update the version here though some aren’t available unless you also update Unity itself. (e.g. v10 is only available for 2020.2 onwards)


Graph Types

To create a Shader Graph Asset, right-click somewhere in your Assets in the Project window then select Create → Shader Graph.

Older versions may have them listed under the Shader heading, but that also includes writing shaders using code (CG/HLSL), such as the Surface Shader and Unlit Shader options. Those templates are intended for the Built-in Render Pipeline and may not function in other pipelines.

Example : Create → Shader → Universal Render Pipeline → Unlit Shader Graph

In v10+ (Unity 2020.2+) there are three graphs listed :

If URP is installed, a “Universal Render Pipeline” heading is shown with the following :

For HDRP graph types, see docs.


Graph Inspector

In v10+, the Graph Inspector window was added, which can be toggled using the button in the top right of the graph. It has two tabs, Node Settings and Graph Settings.

(Image)

The Graph Settings tab is important to set up the graph correctly if you’ve selected the Blank Shader Graph or want to switch to a different graph type. We need to select an Active Target (Universal, HDRP or Visual Effect, depending on which is installed in the project). It’s also possible to add multiple active targets to the list, if you need it to be compiled for more than one that is.

Once a target is selected, more settings are added below the list. I’ll be going through the Universal ones below but HDRP has some similar ones too (listed on their graph type / master node pages, as linked in the docs above).

(Image)

Prior to v10, these settings can instead be found on the Master node. While hovering over it, a small cog/gear icon appears in the top right of the node which shows the settings when clicked.

Common settings for URP include :


Master Stack

(Image)

Prior to v10, you could switch between graph types by creating a new Master Node (e.g. Unlit, PBR), right-clicking it and setting it as the Active one.

In v10 and newer you can still switch types via the Material dropdown in the Graph Settings, as explained in the previous section. But “Master nodes” were removed and replaced by the Master Stack.

Unlike Master nodes, there can only be one Master Stack in the graph. Each port in the stack is attached to a Block.

The blocks required by the graph type will automatically change when switching the type (explained in the section above), assuming the “Automatically Add or Remove Blocks” option is enabled in Edit → Preferences → Shader Graph.

Any blocks that the previous type had (which have something connected, or a default value that has been edited) will remain in the stack, but may become greyed out if the new graph type doesn’t make use of them. For example, a Lit graph has a Smoothness block/port. If something is connected and the graph type changes to Unlit, the port will remain but become greyed out as it is no longer used.

Blocks in the stack can be removed by right-clicking them and selecting “Delete” from the dropdown.

(Image)

Blocks can also be added manually by hovering over gaps between two blocks, or at the end of the last block. Right-click and select Create Node, or press Space. While you can add blocks, they’ll still be greyed out if the graph type doesn’t use them.

Vertex vs Fragment

The stack also provides a clearer distinction between the Vertex and Fragment stages of the shader, and different blocks are available for each.

If you are unfamiliar, the Vertex stage runs for each vertex in the mesh being rendered, transforming the vertex positions onto the screen. Then rasterisation occurs, which produces fragments (basically pixels) from the triangles in the mesh. The Fragment stage then runs for each fragment/pixel created and outputs the colour to the screen. This is discussed in slightly more detail in my Intro to the Shader Pipeline post.

The Vertex stage is also responsible for passing through any data (e.g. from the mesh) that may be needed in Fragment stage calculations. These are known as interpolators (as the data for each vertex is interpolated across the triangle). This is mostly handled automatically by Shader Graph - e.g. for nodes requiring UVs, the Position node, Normal Vector and other nodes under the Input → Geometry heading in the Create Node menu.

But in some cases (discussed later), we may also want to pass our own custom data between the two stages. In Shader Graph v12 (Unity 2021.2+), we can do this by adding Custom Interpolator blocks to the Vertex stage. I’ll discuss this in more detail near the end of the article.


Save Graph

While I haven’t explained the other parts of a graph yet, this is something that is a common problem for beginners so I’m mentioning it early. Remember to save your graph or changes won’t appear in Scene/Game View!

In newer versions, using Ctrl+S should save the graph. You can tell it worked if the * on the window/tab disappears.

Older versions did not have a shortcut and you had to use the Save Asset button in the top left corner of the graph.

When attempting to close the graph, it will also prompt you to save if unsaved changes have been made.


Nodes

To add a node to the graph, right-click and choose Create Node from the dropdown, or press the Spacebar. A box will open at the mouse position allowing you to search and/or choose a node from the list. As the list is quite extensive, I recommend typing the name of a node once you learn them.

(Image)

Ports & Connections

Nodes have inputs and outputs shown as small circles, known as ports. Inputs are always on the left side of the node while outputs are on the right.

(Image)

A small collection of nodes, showing connections and various previews.

You can create connections between nodes by clicking on the port. While still holding the mouse button, drag over to another port and a connection will be made. You can only connect an output to an input. It’s not possible to connect an input to another input, or output to another output. Output ports can also have multiple connections to other nodes, but input ports must only have a single connection.

(The documentation refers to these connections as “Edges” but I’ve never heard anyone actually call them that, except for Unity that is. I’m just going to refer to them as connections)

While dragging a connection from a port, if you release the mouse while not hovering over another node port, it will open the Create Node menu – filtered to only show nodes where a connection is allowed. Selecting a node from there will then automatically connect it up too. This is very useful for quickly creating graphs.

Existing connections can be changed by clicking and dragging on the connection wire near to the port. In v10+ it’s also possible to drag a selection box over multiple connections coming out of a single output port, and move them all at once to another output.

(Image)

It’s also possible to collapse unused ports to make the node smaller. This can be toggled with the small arrow in the top right of the node (that appears when hovering over it). You can also right-click the node and use View → Collapse Ports and Expand Ports to show all ports again.

Redirect / Elbow

In v10+ you can add a redirect/elbow node by double-clicking along the connection wire, (or right-clicking the connection and selecting it from the dropdown). This will create a tiny node with a single input and output port that allows you to better organise connections (example shown below).

To help keep a graph readable, you should avoid having connections pass over other nodes. I’d also recommend keeping connections going horizontally or vertically, rather than diagonally, but this is more of a personal preference.

(Image)

An example using (probably way too many unnecessary) redirect nodes, (but hopefully you get the point).

Versions prior to v10 did not have this feature, but the Preview node could be used instead, with the preview collapsed to make the node smaller to work with.

Selecting Nodes

(Image)

Nodes can be selected by clicking on them using the left mouse button, or by clicking in an empty space and dragging a selection box over multiple nodes to select them all at once.

While holding Ctrl, you can toggle the selection of nodes rather than unselecting everything else.

Previews

Most nodes will have a Preview, shown on the bottom of the node. They allow you to preview what the shader is doing at that particular stage, but the result might look different in the scene depending on the mesh used. You can also use the Preview node to preview a particular port if it doesn’t already have a preview.

Because the Preview can take up a lot of space, you can collapse it by clicking the upwards-pointing arrow that appears on the preview while hovering over it. The preview can then be shown again by clicking the downwards-pointing arrow on the bottom of the node. Alternatively, right-click the node and use View → Collapse Preview and Expand Preview.

(Image)

A collection of nodes showing different previews

Nodes like Position and Normal Vector use a 3D/Sphere version of the Preview rather than the 2D/Quad one, as they are meant to show 3D positions/vectors. All nodes in the chain after it will also switch to using the 3D one when connected.

In Shader Graph v11+ (Unity 2021.1+) it is also now possible to choose between the 2D and 3D versions of the preview on a per-node basis. You can do this in the Node Settings tab of the Graph Inspector window. (This was something I suggested, though some others may have too. Still, I’m very happy to see it included). For Sub Graphs you can also find a setting in the Graph Settings to select its default preview state.

Before we can go into exactly what these previews are showing, we need to understand the different data types included in Shader Graph :


Data Types

The colours of each port and connection will also change based on the port data type, which is also shown in brackets after the input/output name. Usually, different data types cannot be connected, though the Float/Vector types are an exception to this.

The data types are as follows :

Float & Vector

A Float is a scalar (meaning single component) floating point value. While it’s called float, its precision can be changed between Single (32-bit) and Half (16-bit) on a per-node basis (under the Node Settings tab of the Graph Inspector in v10+. Prior to v10, there was a cog/gear on the node itself to change this). Usually nodes will Inherit the precision of the previous one. Changing the precision can help with optimisation, mainly for mobile platforms. More information about the precision modes can be found in the docs.

A Vector contains multiple components of these floating point values. This could be interpreted as a position or direction, but can also contain any arbitrary numbers. The Vector4 contains four floating point values, Vector3 contains three, and Vector2 contains two. The precision again can be varied.

When put into a Split node, we can obtain each Float the vector contains. These components are usually labelled as R, G, B, A as shaders deal with colours a lot, but Shader Graph also uses X, Y, Z, W in the case of the Vector2 / Vector3 and Vector4 nodes.

(The labels themselves aren’t too important, they are the same. R=X, G=Y, B=Z and A=W)

Truncate

Unlike other data types, different size vectors can be connected to other sized vector ports. For example, a Vector3 or Vector4 can be connected to a Vector2 port (e.g. the UV input on a Sample Texture 2D node; some more examples are shown below).

(Image)

This will cause the vector to be truncated. In the case of the Float input port, only the R/X component is used while GBA/YZW is truncated. For a Vector2, the RG/XY components are used and BA/ZW is truncated, and Vector3, RGB/XYZ with A/W truncated.

A Float output can’t be truncated as it’s already the smallest possible size with only a single component.

Promote

Similarly, a smaller sized vector put into a larger sized port will be promoted. For example, a Vector2 put into a Vector3. The third component will be filled with a default value of 0.

If put into a Vector4 port, the fourth component has a default value of 1 (as it typically corresponds to an alpha value, and it defaults to fully opaque, rather than invisible. It’s also useful for matrix multiplication where the fourth component should be 1 to allow for translation. Don’t worry about that for now though, we’ll be going into matrices in a bit).

(Image)

On the left, the Float is promoted to the Vector3 size, (0.5, 0.5, 0.5). After the add, resulting in (0.5, 0.5, 1).
On the right, the Vector3 is truncated to the Vector2 size, (0, 0). After the add, resulting in (0.5, 0).

If a Float is connected to a Vector port, it is promoted such that all components are filled with the float value. e.g. 0.5 would become (0.5, 0.5, 0.5, 0.5) if connected to a Vector4 port. Note that the alpha value does not default to 1 in this case, which is something to be aware of.

If you want more control over promotion, you can handle it manually, using the Vector node that corresponds to the size you want (e.g. Vector2, Vector3 or Vector4), or the Combine node which gives you an output for all sizes.

These have Float input ports for each component. If you are starting with a Vector type, you can use a Split node to obtain each component (a Split isn’t needed if you start with a Float, since it’s already a single component). Of course, you would only connect the components you want to keep and override the others. Note that when a Split is used, promotion does not take place and outputs that aren’t used by the input Float/Vector size will be 0.

Dynamic Ports

While not really a data type, some nodes also have Dynamic Vector ports. This means a port can change between Float/Vector1, Vector2, Vector3 and Vector4 depending on the inputs.

Examples of these include many of the Math based nodes, e.g. Add, Subtract, Divide, Power, Fraction, Normalize, etc.

(Image)

Add node with ports switching between Float, Vector3 and Vector2 depending on what is connected

In most cases, these dynamic vector ports truncate both to the same size, based on the lowest size passed in (so if a Vector3 and Vector2 is passed in to each port, both ports will be a Vector2). However a Float/Vector1 will always be promoted to the same size as the other input.

The Multiply node’s ports are also dynamic, but it also has a special case where a Matrix (explained below) can be connected. Similar to the Vectors, the Dynamic ports are truncated to the smallest of the two matrix sizes (the output also changes to a matrix type of the same size). If a matrix and vector are connected, the size of the matrix is used and the Vector will be promoted or truncated to match.

Matrix

A matrix has up to 16 components, defined in rows and columns. Shaders written in code can define matrices with different row/column sizes, and even different data types, however Shader Graph can only handle square ones consisting of floating point values (floats or halfs depending on precision).

A Matrix4x4 means it has 4 rows and 4 columns, so 16 components in total. Similarly, a Matrix3x3 has 9 components, and Matrix2x2 has 4 components.

Matrices are used for applying transformations which consists of translation (adjusting position), rotation, and scaling. Shader Graph has a few useful spaces built-in to nodes though, such as Object, World, View and Tangent, so usually you don’t need to deal with matrices. I’ll be going over these spaces in more detail later. There’s also a Transform node to transform between these spaces.

(Image)

It is possible that you may want to handle your own transformations to a custom space too though. The Multiply node can be used to achieve matrix multiplication if a Matrix is connected. It’s unlikely that you’ll need to deal with matrix multiplication unless you’re doing some very custom stuff, but if you aren’t familiar with it I’d recommend looking it up. This CatlikeCoding tutorial goes over them.

Something to be aware of is when dealing with matrix multiplciation, the order is important. Usually the matrix will be in the first input and the vector in the second. A Vector in the second input is treated like a Matrix consisting of up to 4 rows (depending on the size of the vector), and a single column. A Vector in the first input is instead treated as a Matrix consisting of 1 row and up to 4 columns.

Due to the Vector promotion (explained a few sections above), if a Vector3 is multiplied with a Matrix4x4, it’ll promote the vector to a Vector4 with the alpha component defaulting to 1, which allows the translation part of the matrix to work. If this isn’t what you want, you can Split and use Combine or Vector4 to set the alpha component yourself.

Boolean

A type with two possible values, True or False. It is used in Logic based nodes, for example the Comparison node outputs a Boolean based on comparing two Floats. The Boolean is mainly used in the Predicate input of the Branch node, which is used to select between two Floats or Vectors based on whether the Boolean is True or False. For example :

(Image)

Example showing a Branch based on a Boolean Property. Allows switching between two different UV types. Planar projected / regular model UVs.

Internally the type is actually treated as a Float with a value of 0 or 1, so can be set from C# using material.SetFloat (or SetInteger).
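For example, a minimal sketch (where “_ExampleBool” is a hypothetical Reference for a Boolean property) :

Material material = GetComponent<Renderer>().sharedMaterial;

// Set the Boolean property to True (1) or False (0)
material.SetFloat("_ExampleBool", 1f);
material.SetFloat("_ExampleBool", 0f);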

Don’t confuse the Boolean type/property with a Boolean Keyword - these are not the same! I’ll be explaining keywords in a later section.

Textures

Textures store a colour for each texel - which is basically the same as a “pixel”, but as that is used to refer to the screen (or current render target) we instead use the term “texels” when referring to textures. They also aren’t limited to just two dimensions.

The fragment shader stage runs on a per-pixel basis, where we can access the colour of a texel with a given coordinate. Textures can have different sizes (widths/heights/depth), but the coordinate used to sample the texture is normalised to a 0-1 range. These are known as Texture Coordinates or UVs - U corresponding to the horizontal axis of the texture, while V is the vertical.

(Though I mentioned texels have colours, the data doesn’t necessarily have to be output as a colour. It’s a Vector4 which could represent any arbitrary data and be used for any calculations. Depending on the texture’s format it might also result in values outside a 0-1 range)

Filter Modes

To access a colour of a texel, we sample it with filtering applied. There are 3 different Filter Modes :

Mip Maps

Mipmaps are versions of a texture which are smaller than the original, with each mip map level being half the size of the previous. This produces downsampled/blurry versions of the texture which is useful for lowering detail at a distance, to avoid artifacts like moiré patterns.

The original texture is known as mipmap level 0. If its size was 512x512, then level 1 would have a size of 256x256, level 2 would be 128x128, level 3 would be 64x64, and so on until it reaches 1x1 (level 9 in this case), or a maximum mip map chain count set in the case of textures created through C# code. You can visualise what these mipmaps look like by using the slider on the Texture Preview in Unity’s Inspector :

(Image)

Mipmap levels of an example noise texture
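As a side note, when creating textures from C# you can control whether a mipmap chain is generated, and (in Unity 2019.2+) limit how many levels it contains. A quick sketch :

// Texture with a full mipmap chain (mipChain : true)
Texture2D texA = new Texture2D(512, 512, TextureFormat.RGBA32, true);

// Texture limited to 4 mipmap levels : 512, 256, 128 and 64 (mipCount : 4)
Texture2D texB = new Texture2D(512, 512, TextureFormat.RGBA32, 4, false);

// A full chain for a square power-of-two texture has log2(size)+1 levels
// e.g. 512x512 -> 10 levels (levels 0 to 9)
int levels = Mathf.RoundToInt(Mathf.Log(512, 2)) + 1;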

Wrap Modes

This filtering method is stored in the sampler (SamplerState), along with a Wrap Mode :

SamplerState

Every Texture object also defines its own SamplerState (in DX10/11 at least; DX9 has them combined into a single “sampler” object, but DX9 is deprecated as of Unity 2017.3 anyway. I believe OpenGL also doesn’t support separate textures and samplers, but Shader Graph takes care of this).

DX11 allows you to have up to 128 textures in a single shader but it’s important to be aware that there is a maximum of 16 samplers per shader. If you use more than 16, an error similar to maximum ps_5_0 sampler register index (16) exceeded will be shown. While every texture object in Shader Graph will define a sampler, if left unused the compiler will ignore them.

This means, in order to have more than 16 Sample Texture 2D nodes sampling different textures, you will need to connect a SamplerState node - Which is the equivalent of HLSL’s inline sampler states.

A SamplerState property could also be connected, it acts the same way. It cannot be exposed.

Texture2D

There are a few different texture types, as I listed earlier.

Texture2D is the most common one and the type you’ll likely be most familiar with. It is sampled with a 2D coordinate (UV). If the UV input on the Sample Texture 2D is left blank, it automatically uses the UV0 channel from the mesh. If the SamplerState input is left blank, it’ll automatically use the one from the texture.

Texel Size & Texture Size

The Texture Size node allows us access to the Width and Height of a 2D texture (in texels), as well as the size of one texel (in terms of the normalised UV coords - so equal to the reciprocal of the texture’s width and height). In older versions of Shader Graph, the node was named Texel Size despite only outputting the texture’s size - hence it was renamed. You would have had to use Reciprocal to calculate the texel sizes. Later, ports for the texel size were added.

Specifically the node obtains components of a special Vector4 property that Unity passes into the shader alongside a Texture2D, which matches the texture’s name with _TexelSize appended. The .zw components of that are the Width & Height, while .xy is the Texel Width & Texel Height.

While the node is limited to Texture2D only, you could always set up your own Vector property and pass the size of a different texture type manually or through C# if required. More information about setting up properties will be discussed in a later section.
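For example, a sketch passing the dimensions of a texture assigned to a hypothetical “_ExampleTex” property, via a Vector4 property with a hypothetical “_ExampleTex_Size” Reference :

// x, y = width & height (in texels), z, w = texel width & height (in UV space)
Texture tex = material.GetTexture("_ExampleTex");
material.SetVector("_ExampleTex_Size",
    new Vector4(tex.width, tex.height, 1f / tex.width, 1f / tex.height));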

Texture3D

Texture3D is similar, but three dimensional, so has a depth to it as well as the width and height. It defines a volume of texels so must be sampled with a 3D coordinate in order to select which pixel is output. Shader Graph still labels this as a UV, though it’s common to refer to this as a UVW coordinate too, where W refers to this depth-based axis similar to the Z axis in regular 3D space.

Texture2DArray

Texture2DArray is very similar to a Texture3D, it is a collection of multiple 2D textures. It is also sampled with a 3D coordinate, though the URP Shader Library functions split these into two variables - a 2D UV coordinate and an index. The same occurs on the Sample Texture 2D Array node, with Vector2 “UV” and Float “Index” ports.

Cubemap

A Cubemap (also referred to as a TextureCube in HLSL) is another type of 2D texture array that contains 6 slices, one for each side of a cube. It is sampled with a 3D vector that points out from the center of the texture cube. The returned colour is the result of where that vector intersects it.

Gradient

(Image)

The gradient type is a bit of an odd one. The type does not exist in regular shaders; it is something specific to Shader Graph and is actually defined as a struct containing an array of up to 8 colour and alpha values.

struct Gradient
{
    int type;
    int colorsLength;
    int alphasLength;
    float4 colors[8];
    float2 alphas[8];
};
// From com.unity.shadergraph/ShaderGraphLibrary/Functions.hlsl

The values are currently something that gets hardcoded into the generated shader code, so the Gradient type (created from the Blackboard at least) cannot be exposed or set by C# scripts.

If you need to expose something similar to a gradient, the foldout below has a few alternatives :

Color properties & Lerp

If you only need two colours, you can create a gradient quite easily from two Color properties and a Lerp node. Same with three, perhaps four, but you’ll need to link together multiple lerps and remap the T inputs. The below image provides an example for a 3-colour gradient.

While this can work for even more colours, adding that many properties can be tedious.

Texture

Another option (especially when lots of colours are required) is to use a Texture2D instead. We sample it with the Sample Texture 2D node, using the same “Time” value that would have been used in the Sample Gradient node as the UV coordinate.

This gives us the flexibility of more colours, and the gradient can be swapped out more easily, but a texture sample might end up being more expensive than a math-based method, and can potentially suffer from colour banding issues.

For creating the gradient texture, it can either be done manually in your preferred image manipulation software (e.g. Photoshop, GIMP, etc), or baked from a C# Gradient object. For example, using a function like :

// Converts Gradient to Texture2D
// width defaults to 256, but can be changed if a different resolution is needed
public Texture2D CreateGradientTexture(Gradient gradient, int width = 256) {
    Color32[] colors = new Color32[width];
    for (int i = 0; i < width; i++) {
        colors[i] = gradient.Evaluate(i / (width - 1f)); // width-1, so the last texel samples the gradient at exactly t=1
    }
    Texture2D tex = new Texture2D(width, 1, TextureFormat.ARGB32, false);
    tex.SetPixels32(colors, 0);
    tex.wrapMode = TextureWrapMode.Clamp;
    tex.Apply();
    return tex;
}

// To save it as an asset, if that's what you want to do
public void SaveTexture2DAsset(Texture2D texture, string path = "Assets/Texture.asset") {
#if UNITY_EDITOR
    UnityEditor.AssetDatabase.CreateAsset(texture, path);
#endif
    // Or could use this instead, to write to png : (path must end in .png)
    // byte[] png = texture.EncodeToPNG();
    // System.IO.File.WriteAllBytes(path, png);
}
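Usage could then look something like this (assuming the material has a Texture2D property with a hypothetical “_GradientTex” Reference) :

Texture2D gradientTex = CreateGradientTexture(gradient);
material.SetTexture("_GradientTex", gradientTex);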

Custom Function

It is also technically possible to create a Gradient object in a Custom Function and pass it out. This is a node that allows us to write HLSL code, which is useful for accessing things that Shader Graph can’t handle, like arrays and loops. This method still cannot be exposed to the material inspector, but can be set through C# - though only globally.

int _GradientType;
int _GradientColorsLength;
int _GradientAlphasLength;
float4 _GradientColors[8];
float _GradientAlphas[16];

void Gradient_float(out Gradient output){
	// Converts the float array to float2, into format that Gradient struct expects
	float2 Alphas[8];
	for (int i=0;i<8;i++){
		Alphas[i] = float2(_GradientAlphas[i*2], _GradientAlphas[i*2+1]);
	}
	
	Gradient g = {
	_GradientType,
	_GradientColorsLength,
	_GradientAlphasLength,
	_GradientColors,
	Alphas
	};
	output = g;
}

// Required to make a Gradient work as an output
// (I think something to do with how the preview is generated)
bool isfinite(Gradient g) {
   return true;
}

The problem here is that these aren’t defined as properties (and arrays can’t be) so should be set through the Shader.SetGlobalX functions, not as per-material ones to avoid batching-related issues.
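As a rough (untested) sketch of what setting those globals might look like, matching the names in the HLSL above and assuming the struct stores each colour’s time in the .w component of the float4 :

public static void SetGlobalGradient(Gradient gradient) {
    GradientColorKey[] colorKeys = gradient.colorKeys;
    GradientAlphaKey[] alphaKeys = gradient.alphaKeys;

    // (r, g, b, time) per entry, matching float4 _GradientColors[8]
    Vector4[] colors = new Vector4[8];
    for (int i = 0; i < colorKeys.Length && i < 8; i++) {
        Color c = colorKeys[i].color;
        colors[i] = new Vector4(c.r, c.g, c.b, colorKeys[i].time);
    }

    // (alpha, time) pairs flattened, matching float _GradientAlphas[16]
    float[] alphas = new float[16];
    for (int i = 0; i < alphaKeys.Length && i < 8; i++) {
        alphas[i * 2] = alphaKeys[i].alpha;
        alphas[i * 2 + 1] = alphaKeys[i].time;
    }

    Shader.SetGlobalInt("_GradientType", (int)gradient.mode); // Blend = 0, Fixed = 1
    Shader.SetGlobalInt("_GradientColorsLength", colorKeys.Length);
    Shader.SetGlobalInt("_GradientAlphasLength", alphaKeys.Length);
    Shader.SetGlobalVectorArray("_GradientColors", colors);
    Shader.SetGlobalFloatArray("_GradientAlphas", alphas);
}

(Also be aware that shader array sizes are fixed the first time they are set, so always send the full 8/16 element arrays)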

While it might be possible to work around this using multiple Vector4 properties… It would be annoying to set these up in every graph you need an exposed Gradient for. You’d need 12 Vector4s to support all 8 colour/alpha options, (and another Vector4 / 3 Float properties for the ints), and then you have to deal with setting them through C#.

Using Matrix4x4 instead might work, that would only require 3 which is a lot better than 12. But matrices can’t be serialised so would need to be set through C# at runtime.

I’d personally recommend the Lerp or Sample Texture 2D approaches instead, but thought I should still mention this too… Maybe it’ll give someone else a better idea for another workaround.


Coordinate Spaces

Some nodes that obtain positions/directions will have the option to switch between different Spaces, including :

While not a space that is included on nodes, it may still be useful to be aware that Clip space exists. Behind the scenes the Vertex stage converts the Object space Vertex Position (port on the Master Stack) to Clip space, which is important for clipping triangles to what is visible by the camera - so we don’t waste time trying to render fragments/pixels outside the screen. It’s a Vector4, where the XY components range from -W to +W (W being the fourth component of that Vector4).

With a bit of remapping and dividing by W, clip space is converted to Screen space, aka Screen Position node (also known as “Normalised Device Coordinates” in SRP shader code). That node also has a few modes, including :


Understanding Previews

Float/Vector1

(Image)

For Float types, previews are always greyscale. Black means the value is 0, or negative, and White means a value of 1 or higher. Values in-between 0 and 1 show as different greyscale shades.

The exact shade isn’t too important as it can vary depending on whether the project is using Gamma or Linear colour space (a setting under Project Settings → Player). If you’re using Gamma space, your previews might show darker areas as the conversion to linear isn’t handled. Projects should use Linear as it’s more realistic, but Gamma can be more performant, especially for mobile devices.

Visualising values outside of a 0-1 range can be difficult, but nodes like these can help :

Vector

For Vector types, the same applies but previews show up to three components in their respective colours (red, green, blue). The alpha channel is always ignored. Red corresponds to the first component, green the second, and blue the third.

As well as for colours, vectors are commonly used for positions and other coordinates. Some examples are below.

UV

(Image)

The UV node for example shows a 2D/Quad preview. Its output is a Vector4, but typically UVs (aka Texture Coordinates) are Vector2. As mentioned, the 4th alpha component (which is 1 here) is ignored for previews, and the 3rd component is 0, so we don’t see any Blue, only Red and Green. From the colours shown, we can see that the red values start in the bottom left corner at 0 as they are darker, then increase to 1 horizontally across the quad, while green starts at 0 increasing to 1 vertically.

Where values of red and green are both larger than 0, we start getting shades of yellow due to the additive nature of the RGB colour model.

It’s important to note that these UVs are for the Quad mesh that’s used in previews. Meshes in the scene might have their own UVs (assuming they were UV mapped when created in a 3D modelling program, e.g. Blender) and might even make use of the Blue or Alpha channels too. e.g. For the Skybox, the UVs are actually filled with a 3D position, equal to the Position node.

When we use the Sample Texture 2D node, these UVs are used as an input, where they correspond to the part of the texture to sample. In the case of the default quad UVs this just shows the texture (though since alpha is ignored it can show either black areas, or colour bleeding as some programs/formats won’t store RGB values in areas with 0 alpha).

It’s possible to offset or adjust the tiling of the UVs through Add and Multiply nodes, or by using the Tiling And Offset node. We can even offset each part of the UVs by different amounts, e.g. using Simple Noise or Gradient Noise nodes (or noise sampled from another texture). This distorts the UV coordinates, and so the resulting texture sample is also distorted.

(Image)

The noise has values in a range from 0 to 1 (though this may vary depending on the noise used), so we Subtract 0.5 to center the distortion. We also Multiply by a small value to lower the strength of the distortion, as values of -0.5 and 0.5 would offset a coordinate by half of the texture.

Something to also note is the noise output is a Float, but the Offset is a Vector2, so it is being promoted. The distortion uses the same value in both axes of the texture so it will only ever offset in a diagonal direction towards the bottom left or top right. There are other methods that allow the distortion to be in any direction, for example using a flow map (which are very similar to Bump/Normal maps). This CatlikeCoding article is a good example of those, also offset by repeating time to create flowing water materials.

Position

(Image)

The Position node shows similar colours, but shows a 3D/Sphere and the output is Vector3. Again no blue shades are shown, but this time it’s because the side of the sphere facing us is the negative Z axis. The blue parts of the sphere are hidden on the other side. We can Negate the result to flip everything and blue will be shown. We can also use an Absolute node to bring negative values back into a positive range.

In the preview, the center is the point (0,0,0), with the X axis (red) increasing to the right, and decreasing to the left. The Y axis (green) increasing upwards and decreasing downwards. The values range from -0.6 to 0.6 in each axis but this isn’t important as it will vary depending on the mesh and space used.

Visualising some positions in the Scene View can be difficult. e.g. For World space, (0,0,0) (visualised as black) would be near the scene’s origin, but the object the shader is applied to could be much further away. I find it useful to put the position through a Fraction node to see each unit square, especially when trying to debug something like Reconstructing a position from Depth.


Main Preview

You can also use the Main Preview window to view the final shader result based on what is connected to the Master Stack (or active Master Node), though personally I prefer to view the shader applied to an object in the Scene View instead.

(Image)

Example Main Preview for a Soft Foliage Shader, showing a custom mesh

Use the button in the top right of the graph to toggle the Main Preview window. The window can be resized by dragging the bottom right corner of it (same goes for the other windows). Unlike node previews, this will show alpha/transparency.

Dragging will rotate the model. It doesn’t move the camera or light direction though so this isn’t always useful.

Right-clicking anywhere inside the window will also allow you to change the mesh being used. It defaults to a sphere mesh, but strangely it’s not the same as the “Sphere” on the dropdown. The default (which is the same sphere used for node previews) has UVs that wrap around it twice, while the one on the dropdown is the same as Unity’s default sphere mesh which has UVs that wrap around once. This isn’t too important since it’s just a preview, but it might be something to be aware of still.


Blackboard Window

Shaders can expose some of these data types as Properties, which are created from the Blackboard window. The visibility of the window can be toggled using the button in the top right of the graph.

Path

The Blackboard window also allows you to adjust the path of the shader when it appears in the Shader dropdown menu on the Material. This is the grey text under the graph name at the top.

(Image)

By default it shows under the “Shader Graphs” heading but this can be changed. It really doesn’t look like an editable field - but you can double-click it and it will change into an editable field.

You can also use slashes to create sub-headings to organise your shaders better.

Properties

To create a Property, use the “+” button in the Blackboard and a menu will be shown with the different data types as listed earlier. Once created we can name the property. Right-clicking the property will give us options like Rename, Delete, and in v10+ Duplicate was also added.

If we have a data type node in the graph, (e.g. Nodes like Float, Vector2, Color, etc), we can also right-click them and choose Convert To → Property from the dropdown to create the property automatically, with the default value set from the value it had before converting. It’s also possible to convert a property back using Convert To → Inline Node if required. These can be done to multiple nodes/properties at once if multiple are selected.

Each property in the Blackboard can be dragged into the graph to add it as a node. You can also use the Create Node menu by typing its name, or “Property” to see all of them.

(Image)

Prior to v10, an arrow was on the left of each property to expand extra settings for it. In v10+ these settings were moved to the Graph Inspector window in the Node Settings tab. While a property is selected in the Blackboard or in the graph, the settings will appear there. (I’m personally not a fan of this change as it makes creating properties a bit awkward when having to go back and forth between two separate windows, so I’m very much hoping it’s reverted).

The settings available for each property can vary slightly but most types will have Reference, Exposed, Default and Precision options. Some also have additional Modes. I’ll discuss these in the sections below.

Some may also include an “Override Property Declaration” option (or “Hybrid Instanced” prior to v10). The Hybrid Per-Instance option on this is related to the DOTS Hybrid Renderer. I’m not very familiar with DOTS, so if you want more information about it, see here and here. There’s also Global and Per-Material options, but I would avoid overriding the property declaration unless you are using the Hybrid Renderer, as it can break SRP Batcher compatibility.

Reference

The settings for each property can vary slightly, but all will have a Reference field.

Prior to v12 (Unity 2021.2) this defaults to an auto-generated string, while v12+ attempts to use the same string as the name, but with “_” added to the front (and in place of any spaces/hyphens/other symbols). This reference field is important for setting the property from a C# script, which will be discussed in a later section.

You can edit this reference if you’d like. I’d recommend making it begin with an underscore ("_") as this helps avoid issues with certain words - specifically words that are used as important keywords in HLSL or the Shaderlab section of the generated shader code.

For example, “Cull”, “Offset”, “float”, “void” will cause the graph to break. It might show an error similar to syntax error: unexpected TOK_CULL, expecting TVAL_ID or TVAL_VARREF. But “_Cull”, “_Offset”, etc would work fine (though they have nothing to do with the actual Cull/Offset functions, it’s just a string/name used to identify the property). Note that in v12+ Shader Graph automatically attempts to fix references like this in order to prevent errors.

Note that the SamplerState node generates its reference based on the other settings so it cannot be edited. That data type also cannot be set through C# though.

Exposed / Scope

Most properties will also have an Exposed tickbox, or Scope dropdown in newer versions (with Local or Global options).

If exposed/local, these properties will appear in the Inspector window when viewing a Material (Unity’s regular Inspector that is, not the Graph Inspector). Materials allow you to have separate textures/values for each instance, while still sharing the same shader. These properties can also be set through C# as I’ll explain later.

If un-exposed/global, the properties will not appear on the Material, but can still be set from C# as a Global Shader Property - by using the static methods in the Shader class. This means all instances of the material, and even materials sharing the same reference will use that value. This can be very useful for easily sending data into all shaders/materials that shouldn’t change per-object.
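For example (where “_ExampleGlobalColor” is a hypothetical Reference for an un-exposed Color property) :

// Sets the value for every material whose shader declares this property
Shader.SetGlobalColor("_ExampleGlobalColor", Color.red);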

Not all properties in the Blackboard can be exposed however. e.g. Matrices, SamplerState and Gradient objects. (Though matrices can actually be set locally, on the material, through C#)

Default Value / Preview Value

The Default field is the value the property will default to when a Material is first created.

If a Material is already created, changing the Default values for an exposed property won’t affect them. You would have to adjust the value on the material manually through Unity’s Inspector. Be aware of this as it’s a common issue that you might encounter when trying to debug your shader. If your shader is producing different results in-graph than in-scene, the values on the Material would be the first place to check. (If the problem isn’t there, it could also be related to the mesh, e.g. UV Mapping related, if your shader is using the UV channels that is).

It is also the value that will be used in the node Previews so it can be important to set this field.

Modes

(Image)

Some properties will have different “Modes”, with options shown in these foldouts :

As a note, prior to v10 Unity didn’t correct HDR colours to the correct colorspace for the project. Graphs imported into v10+ will show the property as “Deprecated” but it can be upgraded in the Node Settings tab of the Graph Inspector, assuming you want the proper behaviour. If you’re in v10+ and want the old behaviour, you can also use a Colorspace Conversion node to convert from RGB to Linear to mimic it.

Note that other texture types (Texture3D, Texture2DArray, Cubemap, etc) default to the Grey mode when no texture is assigned (despite the generated code trying to use "white", which isn’t a valid mode for those types)

Precision

All properties will also have a Precision option, which can be changed between Inherit, Single (32-bit) and Half (16-bit). If set to Inherit the properties will default to the Precision in the Graph Settings tab of the Graph Inspector.

I usually don’t worry too much about the precision, but it may be important if you’re targeting mobile platforms. It can help to make the shader more performant. You can learn more about precision in the shadergraph docs.

Setting Properties from C#

Properties can also be set from a C# script through the Material class (only if exposed), or set globally through the static methods in the Shader class. Follow those links to see a list of these methods. When setting a property using them, it’s important to use the Reference of the property, and not its name in the Blackboard.
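For example, a few sketches using hypothetical References (swap these for the Reference fields in your own Blackboard) :

Material material = GetComponent<Renderer>().material;

// Exposed properties, set per-material (always use the Reference, not the display name)
material.SetFloat("_ExampleFloat", 0.5f);
material.SetVector("_ExampleVector", new Vector4(1f, 0f, 0f, 1f));
material.SetColor("_ExampleColor", Color.cyan);
material.SetTexture("_ExampleTexture", Texture2D.whiteTexture);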

Note, since a Matrix cannot be exposed, by default they are declared in the UnityPerMaterial cbuffer which allows the SRP Batcher to function and the matrix can be different for each material. This however leads to garbage results when setting the matrix globally. If you want a global matrix, you should use the Override Property Declaration in the Node Settings to force it to Global. Don’t override it for other property types (unless you know what you are doing) as it can break SRP Batcher compatibility.

For more information and example code on setting properties from C#, see the section in my Intro to the Shader Pipeline post. It’s also important to understand the difference between using Renderer.material and Renderer.sharedMaterial, so see the section here too.

Keywords & Shader Variants

(Image)

As well as Properties, the Blackboard also allows us to add Keywords. In the “+” menu, under the Keywords heading there is the following options :

Keywords allow you to create multiple versions of the shader, with and without calculations included. These are known as Shader Variants. Each Material uses the given shader but can enable/disable keywords to select which variant is used.

Shader Graph has a set maximum number of Shader Variants that can be generated, which defaults to 128. A warning should show if your shader generates more than the maximum (though that seems to be bugged in v10.2.2 due to a NullReferenceException). Only variants produced from Keywords created in the Blackboard count towards the maximum; keywords used internally won’t affect it.

The maximum can be edited in Edit → Preferences → Shader Graph, however it is important to realise that each keyword added multiplies the number of variants, so they grow exponentially. More variants also slow down build times. With 7 Boolean Keywords (2 states, On and Off), the shader produces 2^7 or 128 variants, hitting the limit exactly. I recommend being careful with keywords as you don’t want to over-use them for everything. They should be used in situations where you need to toggle an effect that would greatly affect performance if a Branch was used instead (which would still calculate both sides and discard one).

Each keyword can be dragged into the graph to add it as a node. You can also use the Create Node menu by typing its name, or “Keyword” to see all of them. The created node will resemble other nodes and have input ports (either On/Off for the Boolean Keyword, or the Entries for the Enum Keyword). Only Float and Vector types can be connected.

(Image)

Similar to Properties, you can find settings for each keyword under the Graph Inspector window in the Node Settings tab. While a keyword is selected in the Blackboard, the settings will appear there. (Prior to v10, an arrow was on the left of each keyword to expand extra settings for it).

Strangely, selecting the keyword node in the graph does not show the settings. I assume this wasn’t intended and might be fixed in the future.

Scope

Each keyword has a Scope setting, which can be set to Local or Global. This can affect how the keyword is set from C#, as explained in the next section. It does not affect how the keyword is exposed in the Material Inspector however. The inspector will always toggle the keyword locally.

There is a maximum of 256 Global keywords that can be used per-project. Unity uses some of these internally (apparently around 60, but that was referring to the Built-in RP). Local keywords have a maximum of 64 per-shader.

These limits cannot be changed, so it is recommended to always use Local keywords (unless you need to set it globally through C# that is).

Exposed

To expose a Boolean Keyword in the Material Inspector, its Reference must end in “_ON”.

For versions prior to v10, keywords had the Exposed tickbox but this still required the “_ON” suffix. The tickbox could only be used to un-expose the keyword if it had the suffix. This was removed in v10+, but you can always use a reference without the suffix if you don’t want it exposed.

Enum Keywords are always exposed by default and there doesn’t appear to be a way to set it as unexposed since the tickbox was removed. This shouldn’t be that important though as the keyword can still be accessed from C# regardless of whether it is exposed or not.

Definition

Each keyword has a definition field which has the following options :

It’s important to be aware that since Shader Feature strips unused variants from the build, it is not recommended to set the keywords at runtime from C#, since the required variant might not be included and result in a magenta error shader. If you plan on adjusting keywords at runtime, you should use Multi Compile instead.

Setting Keywords from C#

To adjust keywords in C#, we can use the EnableKeyword and DisableKeyword methods in the Material class to toggle a keyword per-material, or the Shader class to toggle a keyword globally, for all materials. Like properties, the string used to toggle a keyword in C# is its Reference field in the Blackboard.

Keywords using the Global scope can be set through either class. Note that if a keyword is enabled globally through Shader.EnableKeyword it cannot be disabled or overridden using Material.DisableKeyword (or the inspector toggle).

Keywords using the Local scope can only be set through methods in the Material class.

It is also recommended to only adjust Multi Compile keywords at runtime, see the Definition section above for more information.

For example, for Boolean Keywords :

Material material = GetComponent<Renderer>().sharedMaterial;
// Or use material, if you want to create an instance.
// This would be important if you only want to enable the keyword for this GameObject/Renderer
// See https://www.cyanilux.com/tutorials/intro-to-the-shader-pipeline/#material-instances

// Enables EXAMPLE_KEYWORD on material
material.EnableKeyword("EXAMPLE_KEYWORD");

// Disable EXAMPLE_KEYWORD on material
material.DisableKeyword("EXAMPLE_KEYWORD");

// Enable Globally, (Only for Global Scope)
Shader.EnableKeyword("EXAMPLE_KEYWORD");

// Disable Globally, (Only for Global Scope)
Shader.DisableKeyword("EXAMPLE_KEYWORD");

For an Enum Keyword you need to append the Entry’s Reference to the Keyword Reference, separated with “_”, to obtain the full string to use in the C# methods. The references for each entry are shown in the table in the Node Settings tab.

As an example, a Keyword with a Reference of “ENUM” and the entry “A” can be toggled using “ENUM_A”. You also shouldn’t enable multiple entries at the same time. To switch between entries, you should always disable the others or the variant used might not change properly. e.g.

material.EnableKeyword("ENUM_A");
material.DisableKeyword("ENUM_B");
material.DisableKeyword("ENUM_C");
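If you have many entries, a small helper like this sketch (not part of Unity’s API) can keep that tidy :

// Enables the entry at the given index and disables all other entries
void SetEnumKeyword(Material material, string keywordRef, string[] entryRefs, int index) {
    for (int i = 0; i < entryRefs.Length; i++) {
        string keyword = keywordRef + "_" + entryRefs[i];
        if (i == index) material.EnableKeyword(keyword);
        else material.DisableKeyword(keyword);
    }
}

// e.g. SetEnumKeyword(material, "ENUM", new string[]{ "A", "B", "C" }, 0);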

Categories

(Image)

In v12+ it is also now possible to create Categories, to better organise the properties in the Blackboard (and in the Inspector when viewing the material).

You can create them from the + button (the same one you would use to add a property/keyword), then rename them by double-clicking the grey “Category” text.

They are also collapsible (by pressing the triangle to the left), which is useful if the graph has a large number of properties.


Grouping Nodes

To help with organising your graphs, you can group sections of nodes together. Drag a selection box over multiple nodes, right-click on one of those selected nodes, then choose Group Selection. Or use the Ctrl+G shortcut.

(Image)

Example section of a graph containing a couple of grouped nodes. Helps to understand what the nodes are doing at a quick glance.

Once a group is created you can give it a name which appears at the top in a large text size. The group can also be renamed at any time by double-clicking the name. The name will always be on a single line and affect the width of the group so it’s a good idea to keep it short.

A group can be moved around (along with its contents) by dragging the group’s name.

If you want to remove nodes from the group, select them and use Ungroup Selection or Ctrl+U to separate them from the group.

To delete a group entirely without affecting its contents, select the group by clicking its name, then use the Delete key on your keyboard. To delete a group along with its contents, right-click the group and use the option from that dropdown.


Sticky Notes

You can also add Sticky Notes to a graph (Added in v7.4-ish?), by right-clicking an empty space in the graph and selecting Create Sticky Note from the dropdown.

(Image)

These are for adding comments to your graph. They have a text field at the top for a title, and a larger box for the text body. The text can be edited by double-clicking one of the fields. They can also be resized by dragging the edges.

Right-clicking the Sticky Note will give you some additional options including :

In v11+ (Unity 2021.1+) it should also now be possible to use Rich Text to add additional formatting like bold (using <b>example</b>), italics <i>example</i>, colors (<color=red>example</color>), etc.


Shortcut Keys

There are a few keyboard shortcuts available to speed up working with Shader Graph. Some of these might have been mentioned in other sections. These shortcuts include:

Shader Graph currently does not have any shortcuts for easily placing/creating nodes.


Sub Graphs

As briefly mentioned earlier in the post, a Sub Graph is a special type of graph that can be used as a node in other graphs, which is useful for functions you’d like to reuse. They can even be nested in other Sub Graphs (though it’ll prevent you from creating a recursive loop where Sub Graphs try to call themselves).

Similar to other graphs, the path can be adjusted in the Blackboard, which determines how it appears in the Create Node menu. Use slashes to create sub-headings. See the Path section above for more information.

There are two ways to create a Sub Graph :

(Image)

Outputs

In v10+ we can adjust the number of outputs and change their data types by selecting the Output node (left-click on it). A list will appear in the Node Settings tab of the Graph Inspector window, (It’s incorrectly labelled as “Inputs”, in v12 at least, but this might be fixed in the future).

Prior to v10 a cog/gear could be found on the Output node to adjust these instead.

(Image)

The first output is used to create the node’s Preview. It must be a Float, Vector, Matrix, or Boolean type or the Sub Graph will error. Other outputs can be any type from the dropdown (though Virtual Texture appears to not be supported and is greyed out).

(You may also see some other “Bare” texture types if in v10.3+ but they’ll be greyed out. They are only used if a Sub Graph was using those texture types prior to updating. See the Better Texture Data Types section for more information on why this change was made).

Inputs

It’s also possible to create Inputs for the Sub Graph, which uses Properties that can be created from the Blackboard. See the Properties section above for more information. Note however that some settings (like modes) will be hidden.

These inputs will always show as ports when the Sub Graph is added to another graph as a node and cannot be exposed to the Material Inspector.

Branch On Input Connection

A pretty neat feature added in v12+ (Unity 2021.2+) is the Branch On Input Connection node, which is available for Sub Graphs. Its purpose is to allow you to create a Branch (the graph selects between two options) based on whether a node is connected to the Input port when the subgraph is used.

To use this, we first need to enable the Use Custom Binding option on the Input, which is found in the Node Settings tab of the Graph Inspector window when the Input is selected.

When ticked, a text field for a Label appears. This will be the text that appears next to the port when nothing is connected to the Sub Graph node (Similar to how “AbsoluteWorld Space” and “World Space” appear on the Position and Normal ports on the Triplanar node).

(Image)

In the Sub Graph, we can then add the Branch On Input Connection node and connect our Input to the Input port, as well as to the Connected port as we want to use its value.

(Image)

In this example I’m using the Normal Vector node (mesh normal) for a custom lighting calculation. Using the Branch On Input Connection allows us to optionally connect a Vector3, letting this custom lighting support normal maps.

Having this in our Sub Graph is much better than not providing support for normal maps, or having to expose a regular Vector3 port and making the user connect a Normal Vector node (or normal map transformed into World space) for it to function correctly.

(Image)

In this case I haven’t got a proper normal map, so instead of using a Sample Texture 2D node with Normal mode, I’m using the Normal From Texture node, which converts a height map into a normal map using 3 texture samples (though it may be less accurate). See the Normal From Texture docs page for more info.


Fragment-Only Nodes

Some features of shaders are limited to the Fragment shader stage - the main example being partial derivatives (the DDX, DDY and DDXY nodes).

Connecting these to the Vertex stage is not possible. Quite a few nodes also make use of these partial derivatives, so they can also only be used in the Fragment stage.

Several nodes also use 2D texture samples (which rely on derivatives for automatic mip selection). It may be possible to create alternatives to these that use the Sample Texture 2D LOD version instead.

Note that when working with Sub Graphs, if one of these Fragment-only nodes is used, all of its outputs cannot be connected to the Vertex stage. The shader won’t break if connections were made prior to converting it to a Sub Graph, but if they become un-connected, it won’t be possible to reconnect them without editing the Sub Graph. I recommend keeping Vertex and Fragment parts in completely separate Sub Graphs to avoid these issues.

If you somehow manage to create a connection between a Fragment-only node and a Vertex port, it will result in a shader error similar to cannot map expression to vs_5_0 instruction set.


Advanced

This final section includes some additional features of Shader Graph that I’d consider to be more advanced.

(I would have put some of the content in the Gradient & Array sections here instead, as that was also more advanced than I wanted the post to be, but felt it fits better to keep that under the Data Types section/heading)

Learning stuff below might not be necessary to get started, but may be useful to come back to! If you stop reading here, thanks for reading! Please consider sharing a link to others if you found this useful! <3

Custom Interpolators

The Vertex stage is responsible for passing through any data (e.g. from the mesh) that may be needed in Fragment stage calculations. This data is known as interpolators (as the value for each vertex is interpolated across the triangle). It is mostly handled automatically by Shader Graph - e.g. for nodes requiring UVs, the Position node, the Normal Vector node, and other nodes under the Input → Geometry heading in the Create Node menu.

But in some cases we may also want to pass our own custom data between the two stages. In Shader Graph v12 (Unity 2021.2+), we can do this by adding Custom Interpolator blocks to the Vertex stage (by right-clicking, then selecting Add Block Node). There is a docs page here for this, but below also explains how to use it and some usage examples.

(Image)

We can then change the name and type in the Node Settings tab of the Graph Inspector window while the block is selected.

(Image)

You can have multiple interpolators. The maximum is 32, though the actual amount you can use will depend on the target platform (I believe gles platforms may be limited to 8, as they use the SubShader with target 2.0, even though gles3 can actually support more). The Shader Compile Targets docs page has some related info.

Also note that :

Some uses of Custom Interpolators :

Custom Interpolators Example 1 - Undisplaced Position

When using the Position node in the Fragment stage, it will always have any vertex displacement applied (from calculations passed into the Position port on the Vertex stage). As an example, here we can use a Custom Interpolator block to pass the un-displaced position through.

(Image)

In the Fragment stage, we can then create the interpolator node (found under Create Node → Custom Interpolator).

Now compare our interpolator with the regular Position node in the Fragment stage (we can use the Main Preview for this - right-click in that window to swap the mesh to a Plane). The Y axis of the interpolator produces a completely black result (as the vertex positions are at Y=0), but with the Position node version we can see a gradient (specifically from -1 to 1, though the negative values also show as black here).

(Image)

Of course, you’d only do this if the black result is what you required. I used this recently to colour a fried egg shader. The egg yolk has higher vertices, and I (somewhat lazily) used a Step node (fed into a Lerp) rather than applying a texture or vertex colors - but this required the undisplaced position, otherwise other parts of the egg would appear yellow too as they displace upwards.

Custom Interpolators Example 2 - Vertex Based Distortion

Perhaps you’d also use this if you are applying a texture using a projection via the Position node (usually known as worldspace planar mapping, somewhat of a precursor to triplanar mapping). If we want the texture to distort as the vertices move, rather than stay static, then we need to handle the mapping with the position prior to the distortion. Here’s an example using Gradient Noise for displacement (on all XYZ axes) :

(Image)

And then Fragment, comparing the result with the Main Preview (with Plane mesh) :

(Image)

We could also do the same thing but with the Screen Position node instead for a screenspace mapping.

As mentioned previously, this is likely cheaper than a per-pixel approach to distortion (which is essentially the same calculation, but all in the Fragment stage), but it definitely doesn’t look as accurate and would likely only fit an old-PS1 style (or similar platforms around that time) or low-poly style (though this plane may still need to be fairly high-poly).


Custom Functions

Shader Graph is a powerful tool, but the nodes can’t do everything. Some things might require code snippets instead, which is where the Custom Function node comes in - for example, creating loops, dynamic branching, or complex math calculations (which may be possible through nodes, but can be easier/shorter in code form).

Back in Unity 2018.1, there was a method of creating custom nodes using the CodeFunctionNode class in the UnityEditor.ShaderGraph namespace. This method is no longer accessible and will error if you attempt to use CodeFunctionNode in Shader Graph/URP/HDRP versions for Unity 2019.1+. The replacement was the in-graph node.

The Custom Function node can be added to a graph, and then configured under the Node Settings tab of the Graph Inspector window, while the node is selected in the graph. (Prior to v10 the settings could be accessed by the small cog/gear icon on the node itself)

(Image)

We can set the Inputs and Outputs for the function by using the “+” icon to add to the list. For each item, a dropdown is shown to change its data type. This affects the ports on the node.

There are two Types for the node :

String - the body of the function is written directly into a text box on the node settings, with the inputs/outputs available as variables. For example :

Out = A + B;

File - an external HLSL file containing the function is referenced instead.

The Name of the function isn’t too important if using the String mode, but names must be unique or one function may override another.

Typically the whole file contents are placed inside an #ifndef KEYWORD / #define KEYWORD / #endif block (an include guard), which ensures the functions are only defined once - even if the file is included multiple times, which may occur if two Custom Function nodes reference the same HLSL file. The keyword used should be unique to that file; if two files use the same keyword, the contents of one won’t be included (likely causing an undefined function error).

#ifndef MYHLSLINCLUDE_INCLUDED
#define MYHLSLINCLUDE_INCLUDED

// The function name "MyFunction" will be the Name used in the Custom Function node.
// The _float suffix is used to support the Float/Single precision.
void MyFunction_float(float3 A, float B, out float3 Out) {
	// A is a Vector3 input, which in code is float3 (or half3 if using Half precision)
	// B is a Float/Vector1 input, which in code is float (or half)
	// Outputs use the "out" term in the function arguments/parameters.
    Out = A + B;
}

// If we need to support the half precision too,
// we can define the same function but with the _half suffix.
void MyFunction_half(half3 A, half B, out half3 Out) {
    Out = A + B;
}

#endif // ends the MYHLSLINCLUDE_INCLUDED check at the beginning of the file.

As shown above, the Name field must match the name of the function you want to use, without the precision suffix (MyFunction in this case). A file can contain multiple functions - the Custom Function node will only use the one with the name you set.
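For instance, here’s a minimal sketch (the function names here are my own) of a file containing a helper function alongside the node’s function. Only the function matching the Name field is wired up to the node, but it can freely call the others :

#ifndef HELPER_EXAMPLE_INCLUDED
#define HELPER_EXAMPLE_INCLUDED

// A helper function that the Custom Function node doesn't reference directly
float3 Remap01(float3 v, float3 minV, float3 maxV){
	return (v - minV) / (maxV - minV);
}

// Set the node's Name field to "RemapExample" to use this function
void RemapExample_float(float3 In, out float3 Out){
	// e.g. remaps values in the -1 to 1 range into the 0 to 1 range
	Out = Remap01(In, float3(-1,-1,-1), float3(1,1,1));
}

#endif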

Calling ShaderLibrary Functions in a Custom Function

There may be cases where you need to access functions from the URP/HDRP ShaderLibrary in your code. Some common macros are defined across pipelines (via the core ShaderLibrary). For example, sampling a texture :

#ifndef SAMPLE_TEXTURE_EXAMPLE_INCLUDED
#define SAMPLE_TEXTURE_EXAMPLE_INCLUDED

void ExampleSampleTexture2D_float(UnityTexture2D Texture, float2 UV, out float4 Out){
   Out = SAMPLE_TEXTURE2D(Texture, Texture.samplerstate, UV);
}

#endif

But other more pipeline-specific macros/functions require special care. Previews within Shader Graph need to render in any render pipeline, so they use their own separate ShaderLibrary, which won’t have access to URP/HDRP-specific functions.

It’s typical to use the SHADERGRAPH_PREVIEW keyword to output something different for previews, especially when they wouldn’t be useful anyway. For example, this function from my Custom Lighting Package obtains Directional Light info in URP :

void MainLight_float (out float3 Direction, out float3 Color, out float DistanceAtten){
	#ifdef SHADERGRAPH_PREVIEW
		Direction = normalize(float3(1,1,-0.4));
		Color = float3(1,1,1); // (the output is a float3, so avoid implicit truncation)
		DistanceAtten = 1;
	#else
		Light mainLight = GetMainLight();
		Direction = mainLight.direction;
		Color = mainLight.color;
		DistanceAtten = mainLight.distanceAttenuation;
	#endif
}

GetMainLight() here is a function that exists only in the URP ShaderLibrary. That is fine for the final generated shader but, as explained above, previews do not have access to it. You might think of adding #include lines for Lighting.hlsl or RealtimeLights.hlsl so the function can be used, but those files contain other includes, such as URP’s UnityInput.hlsl which defines variables like _Time. Those already exist in the ShaderGraphLibrary used by previews, so redefinition errors would be caused - hence the SHADERGRAPH_PREVIEW block must be used instead.

Sampling Textures in a Custom Function

In shader code online you might typically see sampler2D and the tex2D() function being used. This is the older DX9 syntax; with Shader Graph we mostly use the newer DX11+ syntax, which has separate objects for the texture and sampler. This isn’t supported on all platforms though (e.g. GLES2), so Unity provides a bunch of macros which can generate the appropriate code per-platform. These macros are defined in files found under render-pipelines.core/ShaderLibrary/API.

Common macros for sampling textures include SAMPLE_TEXTURE2D, SAMPLE_TEXTURE2D_LOD, SAMPLE_TEXTURE2D_ARRAY, SAMPLE_TEXTURE3D, SAMPLE_TEXTURECUBE and SAMPLE_TEXTURECUBE_LOD.

In Shader Graph v10.3+ the following structs were added, mirroring the HLSL types : UnityTexture2D, UnityTexture2DArray, UnityTexture3D, UnityTextureCube and UnitySamplerState.

These structs (defined in render-pipelines.core/ShaderLibrary/Texture.hlsl) wrap the HLSL texture objects (Texture2D, Texture2DArray, Texture3D, TextureCube) and their associated samplers (SamplerState) together, allowing them both to be passed into a Custom Function node as a single parameter.

Some examples :

void SampleTexture2D_float(UnityTexture2D Texture, float2 UV, out float4 Out){
   Out = SAMPLE_TEXTURE2D(Texture, Texture.samplerstate, UV);
}

void SampleTexture2DLOD_float(UnityTexture2D Texture, float2 UV, float LOD, out float4 Out){
   Out = SAMPLE_TEXTURE2D_LOD(Texture, Texture.samplerstate, UV, LOD);
}

Thanks to these types, it’s possible to access the SamplerState used by the texture through .samplerstate (as shown above), as well as the texel size through .texelSize (that one is UnityTexture2D only though). The actual texture object is accessed via .tex, but the sampling macros can be used with the new struct types too.
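For example, a small sketch (the function name is my own) making use of those extra members :

void TexelOffsetExample_float(UnityTexture2D Texture, float2 UV, out float4 Out){
	// texelSize stores (1/width, 1/height, width, height)
	float2 offsetUV = UV + Texture.texelSize.xy; // offsets the UV by one texel
	Out = SAMPLE_TEXTURE2D(Texture, Texture.samplerstate, offsetUV);
}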

Custom Function nodes created prior to this change will set their inputs/outputs as the “Bare” types when updated. This allows the function to still work as it was originally intended, but switching them to “non-Bare” allows you to use the new samplerstate/texelsize access.

In case you are still on older versions or using Bare types, the above functions would instead typically look like the following, where a Sampler State node would have to be connected too.

void OldExample_float(Texture2D Tex, SamplerState SS, float2 UV, out float4 Out){
   Out = SAMPLE_TEXTURE2D(Tex, SS, UV);
}

void OldLODExample_float(Texture2D Tex, SamplerState SS, float2 UV, out float4 Out){
    Out = SAMPLE_TEXTURE2D_LOD(Tex, SS, UV, 0); // samples the top mip level (0)
}

Arrays & Loops

While not a type exposed in Shader Graph, shaders also have array types, and it is possible to define them in a Custom Function.

Defining arrays can only be done with the File mode, as it needs to be outside the scope of the function itself. Arrays can also only be set from C# and won’t appear in the material inspector.

For example :

float4 _VectorArray[10]; // Vector array
float _FloatArray[10]; // Float array

void ArrayExample_float(out float Out){
	float add = 0;
	[unroll]
    for (int i = 0; i < 10; i++){
		add += _FloatArray[i];
	}
	Out = add;
}

In the above example, a loop has been used to sum the values. Typically you’ll need a loop inside the function in order to handle the array (unless you just need a single value at a given index). Another example can be found in my Forcefield Shader Breakdown that uses an array to produce ripples on the forcefield surface.

[unroll] is specified in the above code before the loop, which tells the compiler that we’ve written the loop more out of convenience and it shouldn’t actually be executed on the GPU as a loop. It would be equivalent to writing :

add += _FloatArray[0];
add += _FloatArray[1];
add += _FloatArray[2];
add += _FloatArray[3];
// ... (etc)
add += _FloatArray[9];

Generally unrolling is faster, though you might instead use an actual [loop] if the body is expensive and exiting early may be more performant than the overhead of the loop itself - perhaps when dealing with raymarching, for example.
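For instance, a minimal sketch of an actual loop with an early exit (reusing the _FloatArray from above; the function name and Threshold input are my own) :

void LoopExample_float(float Threshold, out float Out){
	float add = 0;
	[loop]
	for (int i = 0; i < 10; i++){
		add += _FloatArray[i];
		// Exit early once the accumulated value passes the threshold,
		// skipping the remaining iterations entirely
		if (add > Threshold) break;
	}
	Out = add;
}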

For more info see this forum answer.

It’s technically also possible to have other types of arrays, however Unity can only set Vector (float4) and Float arrays from a C# script.

I also recommend always setting arrays globally, using Shader.SetGlobalVectorArray and/or Shader.SetGlobalFloatArray, rather than the material.SetVectorArray / SetFloatArray versions, as those can lead to glitchy behaviour due to SRP Batching.

Note that arrays are limited to a maximum size of 1023. If you need larger, you might need to try alternative solutions instead, e.g. Compute Buffers (StructuredBuffer), though that is more something you’d index rather than loop over.
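As a rough sketch of what that alternative could look like (names here are my own; the buffer would be created as a ComputeBuffer in C# and bound via material.SetBuffer or Shader.SetGlobalBuffer, and requires platforms that support StructuredBuffer) :

// Defined outside the function, so requires the File mode
StructuredBuffer<float3> _Positions;

void BufferExample_float(float Index, out float3 Out){
	// Buffers are typically indexed rather than looped over in full
	Out = _Positions[(uint)Index];
}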

Shader Pass Defines

In URP, prior to v10, it was possible to use a Predefined Boolean Keyword to switch between code for different shader passes, as a define would be added in each pass. These defines are still available in v10+, but how they are defined was changed, which broke the ability to use a Keyword. It is still possible to handle this through a Custom Function node however.

These defines for URP are :

For HDRP a similar list can be found here.

Shader Graph also uses SHADERPASS_PREVIEW for its previews, though you can also check for this using :

#ifdef SHADERGRAPH_PREVIEW
	// Code for previews only
#else
	// Code for actual shader
#endif

This can be important for some code that isn’t available in the preview stage.

To use one of the pass defines in v10+ to output different values for a specific pass, the following Custom Function could be used :

#ifdef SHADERGRAPH_PREVIEW
	Out = 1;
#else
	// For URP this is needed :
	#include "Packages/com.unity.render-pipelines.universal/Editor/ShaderGraph/Includes/ShaderPass.hlsl"
	
	// Test if we are in the SHADOWCASTER pass
	#if (SHADERPASS == SHADERPASS_SHADOWCASTER)
		Out = 1;
	#else
		Out = 0;
	#endif
#endif

Here I’m using the String mode, but you could also use the File mode, and the #include could then be moved outside the function too. The ShaderPass.hlsl file is only important for defining values for each of the SHADERPASS defines listed before. I find it a bit strange that Shader Graph isn’t generating the include for this file automatically (in URP v10.2.2 at least) - it is using the defines from that file so really should be including it. I would assume this is a bug.

(Thanks to ElliotB who shared their findings on the changes to this in v10+ on the Unity discord & forums).

As for HDRP, they correctly include their ShaderPass.cs.hlsl file. Note that their shadow pass is SHADERPASS_SHADOWS instead.

While I’ve used SHADERPASS_SHADOWCASTER as an example, if you actually want an object to cast shadows without rendering normally (or only remove shadows), you can do this more easily by changing the Cast Shadows option on the Mesh Renderer component to Shadows Only or Off. The function here is just an example. The 0 and 1 used as the output could be replaced with two inputs for more control over which values are used, which would also allow them to differ per fragment/pixel (e.g. based on Simple Noise or the result of a Sample Texture 2D).
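For example, the String-mode body above could be adjusted like so (a sketch, where InShadowPass and InOtherPasses are two Float inputs I’ve added to the node) :

#ifdef SHADERGRAPH_PREVIEW
	Out = InOtherPasses;
#else
	// For URP this is needed :
	#include "Packages/com.unity.render-pipelines.universal/Editor/ShaderGraph/Includes/ShaderPass.hlsl"

	#if (SHADERPASS == SHADERPASS_SHADOWCASTER)
		Out = InShadowPass;
	#else
		Out = InOtherPasses;
	#endif
#endif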


Viewing/Editing Generated Shader Code

Behind the scenes, Shader Graph generates shader code specific to the render pipeline you’re working in (URP/HDRP). In v10+ it’s possible to view this code using the View Generated Shader button in Unity’s Inspector window when the Shader Graph Asset is selected in the Project window. Prior to v10 you could instead right-click the Master node to find this option.

(Image)

This will generate the code into a temporary file, allowing you to view it.

If you plan to edit the code, you must first save the shader file somewhere in your Assets folder. It can then be treated similarly to any other code-written shader (but still requires the Shader Graph package to be installed to function). It is separate from the graph asset now though, so you’ll need to switch the shader on materials to use the code version. Changes to the code will also not update the graph, and vice versa.

Sometimes Shader Graph can be a bit limited, so it can be useful to edit the generated code to extend what is possible in graph form - for example, adding Stencil operations. (It’s also possible to override stencil state on the RenderObjects feature for the URP Forward Renderer, but that is outside the scope of this tutorial - it’s long enough already!)
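As a sketch of what such an edit might look like, a Stencil block (ShaderLab syntax - the values here are just placeholders) could be added inside the SubShader, or a specific Pass, of the saved shader file :

Stencil {
	Ref 1        // the reference value to compare against / write
	Comp Always  // the comparison function
	Pass Replace // writes Ref into the stencil buffer when the pixel passes
}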

Be aware that the code might include multiple SubShaders depending on the targets set under the Graph Settings. I’ve also seen URP graphs generate two SubShaders - one intended for gles platforms (using shader model 2.0) and another for other platforms (using shader model 4.5). If you’re editing the code, make sure you edit the one being used by your target (or both).


Thanks for reading! 😊

If you find this post helpful, please consider sharing it with others / on socials
Donations are also greatly appreciated! 🙏✨

(Keeps this site free from ads and allows me to focus more on tutorials)

