Optimize Light Engine

Massive update to the light engine to improve performance and chunk loading/generation.

1) Massive bit packing/unpacking optimizations and inlining.
  A lot of the cost comes from constantly packing and unpacking bits.
  We now inline most bit operations and re-use the base x/y/z bits in many places.
  This lets the CPU do all the math at once instead of having to jump in and
  out of function calls. (See the packing sketch after this list.)

  This much logic is also likely over the JVM's inlining limit for the JIT.
2) Applied a few of JellySquid's Phosphor mod optimizations, such as:
  - ensuring we don't notify neighbor chunks when a neighbor chunk doesn't need to be notified
  - reducing hasLight checks when initializing light, and likely a few more; places influenced by Phosphor are tagged JellySquid.
3) Optimize hot-path accesses to the updating chunk to have less branching
4) Optimize getBlock accesses to have less branching and less unpacking
5) Add a separate urgent bucket for chunk light tasks. These tasks always cut in line ahead of non-blocking light tasks. (See the queue sketch after this list.)
6) Retain chunk priority while light tasks are enqueued. Previously, if a task came in at high
   priority while the queue was already full of lower-priority tasks, it was simply added to the
   end; now it can cut in line to the front. This applies to both urgent and non-urgent tasks.
7) Buffer non-urgent tasks even when queueUpdate is called multiple times, to improve efficiency. (See the buffering sketch after this list.)
8) Fix an NPE risk that could crash the server when getting nibble data.
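
Below is a minimal, hypothetical Java sketch of the packing idea from item 1. The class name, bit layout, and constants are assumptions made up for this example, not the actual NMS/Paper code; the point is that packing, neighbor derivation, and unpacking stay as straight-line arithmetic instead of round-tripping through separate helper calls.

    // Hypothetical example only - layout and names are assumed, not the real code.
    final class PackedPos {
        // Assumed layout: x in the high 26 bits, z in the middle 26 bits, y in the low 12 bits.
        static final int Y_BITS = 12, Z_BITS = 26;
        static final long Y_MASK = (1L << Y_BITS) - 1;
        static final long Z_MASK = (1L << Z_BITS) - 1;

        static long pack(int x, int y, int z) {
            return (((long) x & Z_MASK) << (Y_BITS + Z_BITS))
                 | (((long) z & Z_MASK) << Y_BITS)
                 | ((long) y & Y_MASK);
        }

        // Re-use the already packed x/z "base" bits and only adjust y, so a neighbor
        // lookup is plain math with no unpack/re-pack round trip through helpers.
        static long withYOffset(long packed, int dy) {
            long y = ((packed & Y_MASK) + dy) & Y_MASK;
            return (packed & ~Y_MASK) | y;
        }

        static int unpackY(long packed) {
            return (int) (packed & Y_MASK);
        }
    }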
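
The queue sketch below is a simplified, hypothetical model of the behavior described in items 5 and 6 (the real logic lives in the light engine's task scheduling and differs in detail): urgent tasks always drain first, and non-urgent tasks sit in per-priority buckets so a higher-priority task is picked up before lower-priority work that was enqueued earlier.

    import java.util.ArrayDeque;
    import java.util.Deque;

    // Hypothetical model of the urgent bucket + priority retention described above.
    final class LightTaskQueue {
        private final Deque<Runnable> urgent = new ArrayDeque<>();
        private final Deque<Runnable>[] buckets; // index 0 = highest priority

        @SuppressWarnings("unchecked")
        LightTaskQueue(int priorityLevels) {
            buckets = new Deque[priorityLevels];
            for (int i = 0; i < priorityLevels; i++) buckets[i] = new ArrayDeque<>();
        }

        // Urgent chunk light tasks always cut in line over non-blocking tasks (item 5).
        synchronized void enqueueUrgent(Runnable task) {
            urgent.add(task);
        }

        // Tasks keep their chunk priority while queued, so a high-priority task is
        // drained before lower-priority tasks that were enqueued earlier (item 6).
        synchronized void enqueue(int priority, Runnable task) {
            buckets[priority].add(task);
        }

        synchronized Runnable poll() {
            if (!urgent.isEmpty()) return urgent.poll();
            for (Deque<Runnable> bucket : buckets) {
                if (!bucket.isEmpty()) return bucket.poll();
            }
            return null;
        }
    }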
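
The buffering sketch below illustrates item 7 under assumed names and structure: repeated queueUpdate calls only record pending sections, and one flush hands the whole batch to the scheduler instead of submitting work on every call.

    import java.util.LinkedHashSet;
    import java.util.Set;
    import java.util.function.LongConsumer;

    // Hypothetical sketch of buffering repeated queueUpdate calls (item 7).
    final class UpdateBuffer {
        private final Set<Long> pendingSections = new LinkedHashSet<>();
        private final LongConsumer flusher;

        UpdateBuffer(LongConsumer flusher) {
            this.flusher = flusher;
        }

        // Calling this many times for the same section just records it once;
        // nothing is submitted to the scheduler yet.
        synchronized void queueUpdate(long sectionPos) {
            pendingSections.add(sectionPos);
        }

        // One flush submits all buffered sections in a single pass.
        synchronized void flush() {
            for (long sectionPos : pendingSections) {
                flusher.accept(sectionPos);
            }
            pendingSections.clear();
        }
    }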

Fixes #3489
Fixes #3363
Aikar
2020-06-05 01:25:11 -04:00
parent 9258a3ebcb
commit 799bd8f5e9
9 changed files with 1605 additions and 94 deletions

@@ -40,8 +40,11 @@ index 0000000000000000000000000000000000000000..00000000000000000000000000000000
protected void a(LightEngineLayer<?, ?> lightenginelayer, long i) {
@@ -0,0 +0,0 @@ public abstract class LightEngineStorage<M extends LightEngineStorageArray<M>> e
protected void a(long i, @Nullable NibbleArray nibblearray) {
if (nibblearray != null) {
this.i.put(i, nibblearray);
- this.i.put(i, nibblearray);
+ NibbleArray remove = this.i.put(i, nibblearray); if (remove != null && remove.cleaner != null) remove.cleaner.run(); // Paper - clean up when removed
} else {
- this.i.remove(i);
+ NibbleArray remove = this.i.remove(i); if (remove != null && remove.cleaner != null) remove.cleaner.run(); // Paper - clean up when removed
@@ -59,7 +62,7 @@ index 0000000000000000000000000000000000000000..00000000000000000000000000000000
- this.data.queueUpdate(i, ((NibbleArray) this.data.getUpdating(i)).b()); // Paper - avoid copying light data
+ NibbleArray updating = this.data.getUpdating(i); // Paper - pool nibbles
+ this.data.queueUpdate(i, new NibbleArray().markPoolSafe(updating.getCloneIfSet())); // Paper - avoid copying light data - pool safe clone
+ if (updating.cleaner != null) updating.cleaner.run(); // Paper
+ if (updating.cleaner != null) MCUtil.scheduleTask(2, updating.cleaner); // Paper - delay clean incase anything holding ref was still using it
this.c();
}
@@ -262,7 +265,7 @@ index 0000000000000000000000000000000000000000..00000000000000000000000000000000
+ public void onPacketDispatchFinish(EntityPlayer player, io.netty.channel.ChannelFuture future) {
+ if (remainingSends.decrementAndGet() <= 0) {
+ // incase of any race conditions, schedule this delayed
+ MCUtil.scheduleTask(1, () -> {
+ MCUtil.scheduleTask(5, () -> {
+ if (remainingSends.get() == 0) {
+ cleaner1.run();
+ cleaner2.run();