Unity 5.6 Post-processing

Unity 5.6 has a brand new post-processing system, so I have decided to dedicate a post to it.

Before Unity 5.6, post-processing required the developer to attach a number of scripts to the camera, and these scripts had to be placed in the correct order for the post-processing to work correctly. This often caused confusion and undesirable results, so Unity's new post-processing system is contained in a single component, which takes a profile as its input. Adding the new post-processing component to the camera is illustrated below.
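To give an idea of what this looks like from script, below is a minimal sketch that adds the single post-processing component to a camera and assigns it a profile. It assumes the Post-Processing Stack asset is imported (which provides the UnityEngine.PostProcessing namespace), and the profile name used here is just a placeholder for illustration.

```csharp
using UnityEngine;
using UnityEngine.PostProcessing;

// Minimal sketch: one component on the camera replaces the old stack of ordered scripts.
// Assumes the Post-Processing Stack asset is imported and that a profile asset called
// "MainPostProfile" (placeholder name) exists in a Resources folder.
[RequireComponent(typeof(Camera))]
public class AttachPostProcessing : MonoBehaviour
{
    void Awake()
    {
        var profile = Resources.Load<PostProcessingProfile>("MainPostProfile");

        // The single post-processing component takes the profile as its input.
        var behaviour = gameObject.AddComponent<PostProcessingBehaviour>();
        behaviour.profile = profile;
    }
}
```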

Once the post-processing profile is created and connected to the camera, changing it before run time is as easy as editing the profile within Unity by double clicking it.

One thing to note is that many of the new post-process effects are designed to work in a deferred rendering pipeline. The post-processing profile contains effects ranging from bloom to screen space real-time reflections. More information about the new post-processing effects can be found in Unity's official quick start guide.
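If the project is not already using deferred shading, the rendering path can be switched per camera in the Inspector or from script; a minimal sketch of the script route is shown below.

```csharp
using UnityEngine;

// Minimal sketch: force this camera onto the deferred rendering path so effects
// designed for deferred shading (e.g. screen space reflections) have the data they need.
[RequireComponent(typeof(Camera))]
public class UseDeferredPath : MonoBehaviour
{
    void Awake()
    {
        GetComponent<Camera>().renderingPath = RenderingPath.DeferredShading;
    }
}
```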

Unity’s Post Processing Manual

Below are two images, one with post-processing enabled and the other without.


Unreal Engine Experimentation

Intro

This post shows some experimentation I undertook using the Unreal Engine. I decided to experiment with the Unreal Engine because I wanted to ensure that using Unity was the best choice for me in terms of creating this game.

Because I am focusing on programming, I decided to create a racing AI system within Unreal. This would not only familiarize me with the Unreal Engine but also give me some more programming experience.

Production Log

A full production log of the experimentation can be found in this forum post.
Unreal Engine Experimentation Log
The final results of this experimentation can be found at the bottom of this post.

Summary

The primary issue I encountered with the Unreal Engine was the programming language it accepts. Unreal uses C++ as its programming language, whereas I currently work in C# or Java. Learning C++ while trying to create the game at the same time would not be the most optimal path, although learning C++ is something I plan on doing in the near future.

Unreal does have a secondary visual programming system called Blueprint, and this is what I used in my experimentation to create the AI. Blueprint programming is very intuitive for small operations, and the visual system is very easy to debug. However, for me Blueprints have more drawbacks than positives. For large systems they quickly become cluttered, as shown below; moreover, because extra work is needed to interpret Blueprints at run time, they generally carry more overhead than hand-written code.

picture

The out-of-the-box visuals included in the Unreal Engine are very rewarding and generally look better than Unity's; however, with Unity's new 5.6 post-processing and some tweaks to the engine, similar results can be achieved.

Final Results

 

Audacity – Sound Editor

This short post covers my use of Audacity to edit sounds.

Because I am downloading royalty-free sound effects for this game, I need the ability to edit these sounds, as they may not be up to standard or to my specification. For this task I will use Audacity.

Audacity is a free, open source sound editor capable of not only cutting sounds but also making adjustments to them, such as fades, pitch or volume.

To illustrate how I will be using Audacity, I have created a production log of me editing just one sound within the game. If you listen closely to the looped sound in the video below, you can clearly hear a click or artifact on each loop.

This artifact is down to bad editing; essentially, the volume at the start or end of the loop is not zero, so the sound does not loop seamlessly. I will therefore use Audacity to fix this; the process is illustrated below.

Once the file is open in Audacity, a waveform is displayed; this represents the peaks and troughs of the sound. By zooming (Ctrl + 1) into the beginning or end of the waveform, we can see it in more detail.

 

The images above show the zoomed view of the waveform. The first image represents the very start of the sound; as shown, the volume comes in at 0 and then begins to change. The second image, however, represents the end of the sound, which does not finish on a volume of 0; the sound therefore will not loop cleanly, as the volume jumps from some value back to 0 instantly when the sound restarts. To fix this we can use Audacity's fade feature to fade out the volume at the end.
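To make the idea of a fade-out concrete, here is a rough sketch of what it does to the raw samples of a clip; this is purely an illustration of the concept, not Audacity's actual implementation.

```csharp
// Conceptual sketch (not Audacity's implementation): a linear fade-out scales the
// last `fadeSamples` samples from full volume down to zero, so the clip ends at an
// amplitude of 0 and no click is heard when the loop wraps around.
public static class AudioFade
{
    public static void FadeOut(float[] samples, int fadeSamples)
    {
        int start = samples.Length - fadeSamples;
        for (int i = start; i < samples.Length; i++)
        {
            // Gain falls linearly from ~1 at the start of the fade to 0 at the last sample.
            float gain = 1f - (i - start + 1) / (float)fadeSamples;
            samples[i] *= gain;
        }
    }
}
```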

By using Audacity's Fade Out tool we have brought the volume at the end of the waveform down to zero; the same process can be used for the beginning if need be. The resulting fixed sound can be heard below.

Audacity has a range of audio tools which I will utilize during the development of this game. This post shows only one of these processes, but it demonstrates my typical use of Audacity.

Unity Plugins

This post covers the Unity Plugins used in this project.

Terrain Toolkit 2017 (Free)

The Terrain Toolkit is a script that unlocks new features within the terrain editor. The toolkit contains algorithms that are capable of generating terrain procedurally based on the user's input. This can then be fine-tuned using the traditional manual terrain tools.
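To illustrate the general idea of generating a terrain procedurally from a couple of user inputs, here is a rough sketch using Unity's TerrainData API. Perlin noise is just one simple example of such an algorithm; it is not necessarily what the Toolkit uses internally, and the parameter values are placeholders.

```csharp
using UnityEngine;

// Rough sketch: fill a terrain's heightmap from Perlin noise, driven by two user inputs.
// This only illustrates the idea of procedural generation; it is not the Terrain Toolkit's code.
public class SimpleProceduralTerrain : MonoBehaviour
{
    public Terrain terrain;
    public float noiseScale = 10f;   // user input: feature size of the noise
    public float heightScale = 0.3f; // user input: overall height (0..1 of the terrain's max height)

    void Start()
    {
        TerrainData data = terrain.terrainData;
        int res = data.heightmapResolution;
        float[,] heights = new float[res, res];

        for (int y = 0; y < res; y++)
        {
            for (int x = 0; x < res; x++)
            {
                heights[y, x] = Mathf.PerlinNoise(x / noiseScale, y / noiseScale) * heightScale;
            }
        }

        data.SetHeights(0, 0, heights);
    }
}
```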

Not only is the Terrain Toolkit capable of generating procedural terrain, it can also automatically lay down textures on that terrain based on some input variables. The texture positions are calculated using the angle and height of the terrain to determine which textures should be laid where; for instance, cliffs will have a high angle close to 90 degrees, while snowy mountain tops may have a low angle but a high height value.
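To show roughly how an angle/height rule like this can work under the hood, here is a sketch using Unity's TerrainData API. The thresholds and the assumption of exactly three splat textures (grass, rock, snow in that order) are placeholders; this is not the Toolkit's actual code.

```csharp
using UnityEngine;

// Rough sketch of angle/height based texturing: steep areas get the cliff texture,
// high flat areas get snow, everything else gets grass. Placeholder thresholds;
// assumes the terrain has three splat textures assigned in that order.
public class SimpleSlopeTexturing : MonoBehaviour
{
    public Terrain terrain;

    void Start()
    {
        TerrainData data = terrain.terrainData;
        int w = data.alphamapWidth;
        int h = data.alphamapHeight;
        float[,,] splat = new float[h, w, data.alphamapLayers];

        for (int y = 0; y < h; y++)
        {
            for (int x = 0; x < w; x++)
            {
                float nx = x / (float)(w - 1);
                float ny = y / (float)(h - 1);

                float angle = data.GetSteepness(nx, ny);                         // degrees
                float height = data.GetInterpolatedHeight(nx, ny) / data.size.y; // normalised 0..1

                int layer;
                if (angle > 45f) layer = 1;        // steep slopes -> rock/cliff
                else if (height > 0.7f) layer = 2; // high ground  -> snow
                else layer = 0;                    // everything else -> grass

                splat[y, x, layer] = 1f;
            }
        }

        data.SetAlphamaps(0, 0, splat);
    }
}
```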

The images below explain how the Terrain Toolkit can be used.

Terrain_ToolKit

Topology and XNormal

This post covers two subjects:

  • Mesh Topology, so that creating high poly models is easier.
  • Use of XNormal to Bake Texture Maps

Topology for Static Objects

Although there are no set rules for creating topology, in certain situations one topology may perform ‘better’ than another on the same model. For instance, in game engines low-polygon models are preferred over high-poly models, because the engine's renderer must render so many frames in such a short space of time.

However, topology should not just be used to control the polygon count of a model; it can also be used to make the workflow easier. For instance, Autodesk's Maya has a number of tools designed to help create high-polygon models more efficiently; however, they require a certain type of topology to work correctly.

The images below explain this further (click the images to enlarge).

Unity Lighting Research

Global Illumination (GI)

Global illumination is a technique for calculating how light bounces from one object onto the next; in simple terms, GI uses ray tracing calculations to work out how light would behave. This ultimately creates much nicer-looking and more realistic lighting in a scene by indirectly lighting objects through light bounces.

The calculations required to produce real-time GI are very CPU intensive; for video games this means that full real-time GI cannot be achieved. However, Unity offers two GI solutions that are not fully real time but emulate the results.

Because the GI calculations are so CPU intensive, Unity offers two separate GI systems; both rely on pre-computing information and storing it in the game files so that it can be accessed at run time. The two solutions offered by Unity are known as ‘Pre-Computed GI’ and ‘Baked GI’. It is up to the developer to decide which system they use, but each system is targeted towards specific hardware: whether it is a PC, mobile or console game, for instance.
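The choice is normally made in Unity's Lighting window, but the same toggles are exposed to editor scripts. The sketch below shows the two options; the menu path is just a placeholder for illustration.

```csharp
using UnityEditor;

// Editor-only sketch: these flags correspond to the 'Realtime Global Illumination'
// and 'Baked Global Illumination' toggles in the Lighting window.
public static class GISettingsMenu
{
    [MenuItem("Tools/GI/Use Pre-Computed (Realtime) GI")]
    static void UsePrecomputedGI()
    {
        Lightmapping.realtimeGI = true;
        Lightmapping.bakedGI = false;
    }

    [MenuItem("Tools/GI/Use Baked GI")]
    static void UseBakedGI()
    {
        Lightmapping.realtimeGI = false;
        Lightmapping.bakedGI = true;
    }
}
```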

Pre-Computed GI

As mentioned above, Unity cannot handle full GI computations at run time because they are simply too CPU intensive. Unity does, however, offer a pre-computed GI solution. This solution uses real-time lighting at run time to directly illuminate objects, while the GI information for indirect lighting is ‘pre-computed’. The pre-computed information is stored and then accessed at run time to emulate real-time GI without the heavy CPU cost.

Pre-computed GI is useful when the developer has animated lights or many moving objects within the scene. Because the GI is pre-computed but only finalized at run time, lighting changes still affect objects at run time.

More information can be found in the Unity Global Illumination Manual.

Baked GI

In contrast to Pre-Computed GI is Baked GI. Baked GI performs no GI calculations at run time; instead it reads a number of lightmaps baked during the development process. These lightmaps are applied to objects within the scene on load; because of this, real-time indirect lighting cannot be achieved with Baked GI. However, because Baked GI uses almost no CPU at run time, it is the most efficient solution in terms of CPU power and is therefore often used for mobile platforms or older hardware. Baked GI can also be used if the lights within the scene are not animated; however, lighting animated objects at run time with Baked GI requires an additional step.
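In Unity that additional step is typically a Light Probe Group: probes baked around the scene store indirect lighting that moving objects can sample at run time. The sketch below simply makes sure a moving object's renderers actually blend between the baked probes; placing the probe group itself is done in the editor.

```csharp
using UnityEngine;
using UnityEngine.Rendering;

// Minimal sketch: with Baked GI, moving objects receive indirect light from baked
// light probes rather than lightmaps. This ensures the object's renderers blend
// between nearby probes; the Light Probe Group must already be placed and baked.
public class UseLightProbes : MonoBehaviour
{
    void Awake()
    {
        foreach (var r in GetComponentsInChildren<Renderer>())
        {
            r.lightProbeUsage = LightProbeUsage.BlendProbes;
        }
    }
}
```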

More information can be found in the Unity Global Illumination Manual.
