I forgot that I already use an individual VFX graph for each player in the scene. So now the question becomes whether I use two VFX graphs per player, or whether I make a single shared VFX graph for all players that works in conjunction with each player’s existing graph. If I go the first route, I can actually optimize by running both particle systems within the same VFX graph, since a single graph asset can contain multiple systems. That seems like the simpler, quicker solution, and as long as my computer can handle it, the better one.
However, the optimizer in me believes I can somehow make this work with a single shared VFX graph. My first idea is to color the particles with a color map based on their locations, and I’ve somehow stumbled upon the basis for doing so, though that snippet doesn’t include a method for mapping data to a texture in real time. Essentially, I’d have to actively capture the current screen to a texture, then read each pixel’s data into a Set Color From Map node. Here’s what ChatGPT recommends… actually, hold that thought. I’ve literally already done this before. Lolz… I can probably use KlakSpout to record the scene into a render texture that I directly sample.
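As a rough sketch of that render-texture route: the script below assumes the shared graph exposes a Texture2D property (I’m calling it `ColorMap`, a hypothetical name) and fills a RenderTexture from a capture camera. A KlakSpout receiver could write into the same RenderTexture instead; inside the graph, a Sample Texture2D operator driven by each particle’s projected position would then feed the color block.

```csharp
using UnityEngine;
using UnityEngine.VFX;

// Feeds a live capture of the scene into the shared VFX graph so each
// particle can look up its color from the pixel under its position.
// "ColorMap" is an assumed exposed Texture2D property on the graph; the
// capture camera and render texture are assumptions too -- a KlakSpout
// receiver could fill the same RenderTexture instead of a camera.
public class SceneColorMapFeeder : MonoBehaviour
{
    [SerializeField] Camera captureCamera;   // renders the scene we sample
    [SerializeField] VisualEffect sharedVfx; // the single shared graph
    [SerializeField] int mapSize = 512;

    RenderTexture _map;

    void OnEnable()
    {
        _map = new RenderTexture(mapSize, mapSize, 0, RenderTextureFormat.ARGB32);
        _map.Create();
        captureCamera.targetTexture = _map;     // camera writes here every frame
        sharedVfx.SetTexture("ColorMap", _map); // graph samples it per particle
    }

    void OnDisable()
    {
        captureCamera.targetTexture = null;
        if (_map != null) _map.Release();
    }
}
```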
Still, the other issue with using a single shared VFX graph is setting the spawn positions of the particles. Ideally this would be done dynamically based on the number of players using some sort of random function; I’d like to avoid chaining together a huge switch like I did in my 12/17/23 log. If I could figure this out, I could potentially use the same technique to assign a color and player to each particle, and then I could get rid of the individual VFX graphs entirely! Since this issue is more of a systemic one, I feel like I should take the easy route for now and just duplicate my graphs. I definitely will circle back to this.
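For when I circle back, here’s one hedged sketch of the no-switch approach: push every player’s position into a GraphicsBuffer the graph can sample at a random index. The property names `PlayerPositions` and `PlayerCount` are hypothetical, and the graph side assumes a Random Number operator feeding a Sample Graphics Buffer operator; the same random index could drive color and player assignment, too.

```csharp
using UnityEngine;
using UnityEngine.VFX;

// Pushes every player's position into a GraphicsBuffer so the shared
// graph can sample a random index instead of a hand-built switch chain.
// "PlayerPositions" and "PlayerCount" are assumed exposed properties on
// the graph; how players are tracked is also an assumption.
public class PlayerSpawnBufferFeeder : MonoBehaviour
{
    [SerializeField] VisualEffect sharedVfx;
    [SerializeField] Transform[] players;

    GraphicsBuffer _buffer;
    Vector3[] _positions;

    void OnEnable()
    {
        _positions = new Vector3[players.Length];
        _buffer = new GraphicsBuffer(GraphicsBuffer.Target.Structured,
                                     players.Length, 3 * sizeof(float));
        sharedVfx.SetGraphicsBuffer("PlayerPositions", _buffer);
        sharedVfx.SetInt("PlayerCount", players.Length);
    }

    void Update()
    {
        // Refresh positions each frame so spawns track moving players.
        for (int i = 0; i < players.Length; i++)
            _positions[i] = players[i].position;
        _buffer.SetData(_positions);
    }

    void OnDisable() => _buffer?.Dispose();
}
```

The nice part of this shape is that player count is just the buffer length, so adding or removing players means resizing one buffer rather than rewiring the graph.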
Tags: gamedev unity vfx particles optimization scripting realtime