The solution is to animate meshes on the GPU (in shaders) and use the batchable MeshRenderer as a substitute for its skinned variant. Unity doesn't have a built-in tool for this, so I made one myself. I wrote a rough editor script that reads the vertex positions of the skinned mesh on every frame and stores them in a texture. A custom shader then repositions the vertices at runtime by reading the positions from that texture.
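For context, the runtime half could look roughly like this in a hand-written vertex shader; this is only a sketch under my own assumptions (one texture row per keyframe, one texel per vertex, hypothetical property names), since the actual project uses Shader Graph:

```hlsl
// Hypothetical vertex-fetch helper: read the baked position for this vertex
// from the animation texture instead of relying on GPU skinning.
Texture2D _AnimTex;              // baked positions: x = vertex, y = keyframe
SamplerState sampler_AnimTex;
float _Time01;                   // normalized animation time, 0..1

float3 SampleBakedPosition(uint vertexID, uint vertexCount)
{
    // Sample the texel center for this vertex; the v coordinate selects
    // the keyframe row, and bilinear filtering blends adjacent frames.
    float2 uv = float2((vertexID + 0.5) / vertexCount, _Time01);
    return _AnimTex.SampleLevel(sampler_AnimTex, uv, 0).xyz;
}
```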
As I used a shader graph and my baker utility class is very specialized for my needs, it's hard to share any useful code just yet. But in case you're interested, I found this article and this repository helpful.
As a starting point, the function AnimationClip.SampleAnimation sets the mesh into a proper pose at a given time, and SkinnedMeshRenderer.BakeMesh can be used to extract the mesh data at that moment. In the shader, the vertex index is available in the built-in variable vertexID. An interesting finding is that if you store each keyframe on a separate texture row, bilinear filtering will automatically handle the animation blending (see the article linked above).
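A minimal sketch of how such a baking loop might look; the variable names, frame count, and row-per-keyframe layout are my assumptions, not the author's actual baker:

```csharp
// Hypothetical editor-side baking sketch: one texture row per keyframe,
// one texel per vertex (assumes vertexCount fits into the texture width).
// `clip`, `gameObject`, `skinnedRenderer`, and `frameCount` are assumed
// to be set up elsewhere.
Mesh baked = new Mesh();
var tex = new Texture2D(skinnedRenderer.sharedMesh.vertexCount, frameCount,
                        TextureFormat.RGBAHalf, false);
for (int frame = 0; frame < frameCount; frame++) {
    float t = frame / (float)frameCount * clip.length;
    clip.SampleAnimation(gameObject, t);   // pose the rig at time t
    skinnedRenderer.BakeMesh(baked);       // snapshot the deformed mesh
    Vector3[] verts = baked.vertices;
    for (int v = 0; v < verts.Length; v++)
        tex.SetPixel(v, frame, new Color(verts[v].x, verts[v].y, verts[v].z, 1f));
}
tex.Apply();
```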
Resizing the texture to a power of two (not strictly required in desktop development) was a surprisingly complex task that required an external render texture (or I've just missed some blindingly obvious utility function). Here's what I'm doing for texture scaling (note the texture format, which is needed for storing negative values with decent precision):
public static void ResizeTexture(Texture2D pTexture, int pWidth, int pHeight) {
    // Blit the source texture into a temporary half-float render texture at the target size
    var rt = RenderTexture.GetTemporary(pWidth, pHeight, 0, RenderTextureFormat.ARGBHalf, RenderTextureReadWrite.Default);
    RenderTexture.active = rt;
    Graphics.Blit(pTexture, rt);
    // Reallocate the texture at the new size; RGBAHalf preserves negative values with decent precision
    pTexture.Reinitialize(pWidth, pHeight, TextureFormat.RGBAHalf, false);
    pTexture.filterMode = FilterMode.Bilinear;
    // Copy the scaled pixels back from the active render texture
    pTexture.ReadPixels(new Rect(0f, 0f, pWidth, pHeight), 0, 0);
    pTexture.Apply();
    RenderTexture.active = null;
    RenderTexture.ReleaseTemporary(rt);
}
In the end, animation baking wasn't nearly as complex as I feared, but it did result in some hilarious glitches. During the first attempts, models distorted heavily and made the audience look like a horde of Lovecraftian monsters; sadly I didn't record a video of it. Of course, there's still plenty to do. Instead of repeating identical gestures, each spectator needs individual variation, so blending different animations at different speeds per individual is required; this will probably introduce some batching issues. I also need to bake normals and tangents and solve smaller issues with camera frustum culling (probably related to AABB calculation). But at least I now have a solid base to build on, already able to run thousands of animated spectators without a noticeable FPS drop.

I might write a lengthy blog post about the subject. Perhaps I will someday. Writing about the process seems to be slower than the actual implementation, so hopefully somebody finds these notes useful.
Besides the GPU magic, I've also improved the background of the shipyard, experimented with GUI design and ACES tone mapping (I have to boost all those baked lights…), and so on. I'll come back to these in upcoming posts.
