It’s been a while since I’ve written in here. I’ve been very busy at my new job. However, some ideas have brewed in my brain since my last entry and I wanted to document them here.
Firstly, regarding the scaling idea in my previous entry: I’ve come up with two different methods for scaling the width of the body based on velocity, but each has drawbacks and each needs testing.
- Lock an axis of the body to match the velocity vector, then scale that axis based on velocity. To do this, I would probably use Vector3.RotateTowards and Quaternion.LookRotation (see the first sketch after this list). Locking the rotation has the potential to interfere with physics calculations down the line. This brings to mind the issues manually setting the velocity caused (I eventually pivoted to setting velocity by force). In fact, it may already interfere with physics. I’m not sure whether adding forces to a rotating object changes its course if angular drag is set to 0. I’m also not sure whether locking the rotation of a mesh affects whether its SDF equivalent can experience rotation. I’ll need to test both of these things if I go this route.
- Use a compute shader to scale the mesh along the velocity vector. The reason I can use a compute shader is that it can pass data from the GPU back to an object’s mesh renderer, which means that any changes it makes would get reflected in the SDF rendition of that object’s mesh (see the second sketch after this list). I found a guide from LogRocket on scaling a mesh using a compute shader, and would need to adapt it to my needs. The good news is that it’s possible. The open questions are whether I can figure out how to do it and whether it significantly impacts performance. I downloaded the sample Unity project containing the compute example from the guide and did some testing. I ran into memory leak errors, and noticed that after adding four of those modified meshes to the scene, performance began to drop significantly. Granted, the mesh being deformed had significantly more vertex data than my simple spheres do, and, like I said, there were memory leak errors. I’ll need to fix those leaks first, and then work on adapting the algorithm to my needs so that I can accurately test performance. My gut tells me that compute shaders are the way to go.
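Here’s a minimal sketch of the first approach, assuming a Rigidbody-driven body. The component name and the turnSpeed/stretchFactor parameters are placeholders of mine, not anything final:

```csharp
using UnityEngine;

// Sketch: ease the body's forward axis toward its velocity each physics
// step, then stretch that axis based on speed.
[RequireComponent(typeof(Rigidbody))]
public class VelocityStretch : MonoBehaviour
{
    [SerializeField] float turnSpeed = 10f;      // max radians/sec fed to RotateTowards
    [SerializeField] float stretchFactor = 0.1f; // how much speed elongates the body

    Rigidbody rb;
    Vector3 baseScale;

    void Awake()
    {
        rb = GetComponent<Rigidbody>();
        baseScale = transform.localScale;
    }

    void FixedUpdate()
    {
        Vector3 v = rb.velocity;
        if (v.sqrMagnitude < 0.0001f) return;

        // Rotate the forward axis toward the velocity vector rather than
        // snapping, so the "lock" is soft instead of a hard constraint.
        Vector3 dir = Vector3.RotateTowards(
            transform.forward, v.normalized, turnSpeed * Time.fixedDeltaTime, 0f);
        rb.MoveRotation(Quaternion.LookRotation(dir));

        // Stretch along the (now velocity-aligned) local z axis.
        float stretch = 1f + v.magnitude * stretchFactor;
        transform.localScale = new Vector3(baseScale.x, baseScale.y, baseScale.z * stretch);
    }
}
```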
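And a sketch of the second approach. This is not the LogRocket code, just my rough idea of the shape it would take; the StretchVerts kernel and every parameter name here are assumptions. The GetData readback is the part I’d expect to dominate the performance cost, and the Release call is what the leak errors are probably about, since Unity warns when ComputeBuffers aren’t released manually:

```csharp
using UnityEngine;

// Sketch: push the base vertices to the GPU, stretch them along the current
// velocity direction in a kernel, then read the result back into the mesh so
// the SDF rendition picks up the change. The assumed .compute file would be
// roughly:
//
//   #pragma kernel StretchVerts
//   RWStructuredBuffer<float3> _Vertices;
//   float3 _Dir; float _Stretch; uint _VertexCount;
//   [numthreads(64,1,1)]
//   void StretchVerts (uint3 id : SV_DispatchThreadID) {
//       if (id.x >= _VertexCount) return;             // guard the last group
//       float3 v = _Vertices[id.x];
//       float along = dot(v, _Dir);                   // component along velocity
//       _Vertices[id.x] = v + _Dir * along * (_Stretch - 1.0);
//   }
[RequireComponent(typeof(MeshFilter), typeof(Rigidbody))]
public class ComputeStretch : MonoBehaviour
{
    [SerializeField] ComputeShader stretchShader;  // holds the kernel above
    [SerializeField] float stretchFactor = 0.1f;

    Mesh mesh;
    Rigidbody rb;
    Vector3[] baseVerts;
    Vector3[] outVerts;
    ComputeBuffer vertBuffer;
    int kernel;

    void Start()
    {
        rb = GetComponent<Rigidbody>();
        mesh = GetComponent<MeshFilter>().mesh;    // instance copy, safe to edit
        baseVerts = mesh.vertices;
        outVerts = new Vector3[baseVerts.Length];
        vertBuffer = new ComputeBuffer(baseVerts.Length, sizeof(float) * 3);
        kernel = stretchShader.FindKernel("StretchVerts");
    }

    void Update()
    {
        Vector3 v = rb.velocity;
        if (v.sqrMagnitude < 0.0001f) return;

        vertBuffer.SetData(baseVerts);             // always deform from the rest shape
        stretchShader.SetBuffer(kernel, "_Vertices", vertBuffer);
        stretchShader.SetVector("_Dir", transform.InverseTransformDirection(v.normalized));
        stretchShader.SetFloat("_Stretch", 1f + v.magnitude * stretchFactor);
        stretchShader.SetInt("_VertexCount", baseVerts.Length);
        stretchShader.Dispatch(kernel, Mathf.CeilToInt(baseVerts.Length / 64f), 1, 1);

        vertBuffer.GetData(outVerts);              // GPU -> CPU readback (the slow part)
        mesh.vertices = outVerts;
        mesh.RecalculateNormals();
        mesh.RecalculateBounds();
    }

    void OnDestroy() => vertBuffer?.Release();     // unreleased buffers trigger leak warnings
}
```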
Digging into compute shaders gave me an idea. If they can modify mesh data, could they be used to make metaballs? Lucky me, I found a guide. Did I mention I need to test performance??
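For reference, and not from the guide, just the standard formulation as I understand it: the core of a metaball is a summed falloff field with a threshold. A compute shader would evaluate something like this per voxel or pixel; here it’s plain C# just to show the math, with all names mine:

```csharp
using UnityEngine;

// Sketch of the metaball idea: each ball contributes a falloff field, the
// fields are summed, and the surface is wherever the sum crosses a threshold.
public static class MetaballField
{
    // Classic inverse-square falloff: r^2 / |p - center|^2.
    public static float Field(Vector3 p, Vector3 center, float radius)
    {
        float d2 = (p - center).sqrMagnitude;
        return d2 > 0f ? (radius * radius) / d2 : float.MaxValue;
    }

    // Sum contributions from every ball; a point is inside the blob surface
    // when the summed field reaches the threshold.
    public static bool InsideSurface(Vector3 p, Vector3[] centers, float[] radii, float threshold = 1f)
    {
        float sum = 0f;
        for (int i = 0; i < centers.Length; i++)
            sum += Field(p, centers[i], radii[i]);
        return sum >= threshold;
    }
}
```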
Next, I have some ideas for the particle effects based on my own experiences with qi gong. I’ve noticed that when I practice, the energy seems to build up over time. So wouldn’t it be neat to incentivize players to practice for longer by increasing particle vibrancy and SDF stick force over time? This would mean that the longer a body is in the scene, the more “intense” it becomes. I originally thought this wouldn’t be possible since all bodies share a single SDF, but then I remembered that the particles emitted by each user will be drawn to the most attractive body, which in 9/10 cases would be the one nearest to them (their own).
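A rough sketch of what that ramp could look like, with placeholder names throughout since none of this is wired up yet; the stick force would be read by whatever ends up driving the SDF attraction:

```csharp
using UnityEngine;

// Sketch of the "builds up over time" idea: a body tracks how long it has
// been in the scene and ramps its particle vibrancy and stick force toward a cap.
public class QiBuildup : MonoBehaviour
{
    [SerializeField] float rampDuration = 60f;        // seconds to reach full intensity
    [SerializeField] Gradient vibrancyRamp;           // dull -> vibrant particle color
    [SerializeField] float minStickForce = 1f;
    [SerializeField] float maxStickForce = 5f;
    [SerializeField] ParticleSystem particles;

    public float StickForce { get; private set; }     // read by the SDF attraction code

    float age;

    void Update()
    {
        age += Time.deltaTime;
        float t = Mathf.Clamp01(age / rampDuration);  // 0 at spawn, 1 when fully "charged"

        StickForce = Mathf.Lerp(minStickForce, maxStickForce, t);

        var main = particles.main;
        main.startColor = vibrancyRamp.Evaluate(t);   // particles grow more vibrant
    }
}
```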