
Perf: optimize BigText binary search, findPlaceholders deep lookup, and reduce per-tick overhead#16

Merged
daedeloth merged 3 commits into master from copilot/optimize-performance-low-end-devices
Feb 10, 2026
Conversation

Contributor

Copilot AI commented Feb 10, 2026

Frame rate drops and input lag on low-end devices were traced to expensive per-frame operations in BigText, the placeholder tree search, and redundant getBounds() calls.

BigText: reuse measurement object in binary search

  • goBigOrGoHomeBinary previously allocated a new createjs.Text per binary search step (~10 objects per resize). Now reuses a single measurement object, mutating only font/lineHeight, and creates the final display object once at the end.
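The reuse pattern can be sketched as a plain binary search over font sizes that mutates a single measurement object per step. This is an illustrative sketch, not the actual implementation: the real code mutates a createjs.Text instance, while here a hypothetical measureFactory stand-in supplies the measurement object.

```javascript
// Sketch of the allocation-free binary search (internals assumed; the real
// code works on createjs.Text). measureFactory(text) returns one measurement
// object with setFont(size), width(), and height() — a stand-in for the text
// engine's measurement API.
function fitFontSize(textString, maxWidth, maxHeight, loFont, hiFont, measureFactory) {
    // One measurement object for the whole search instead of one per step.
    var measure = measureFactory(textString);
    var best = loFont;
    while (loFont <= hiFont) {
        var mid = Math.floor((loFont + hiFont) / 2);
        measure.setFont(mid); // mutate font only; no new object allocated
        if (measure.width() <= maxWidth && measure.height() <= maxHeight) {
            best = mid;
            loFont = mid + 1; // fits: try a bigger font
        } else {
            hiFont = mid - 1; // overflows: try a smaller font
        }
    }
    return best; // caller creates the final display object once, at this size
}
```

Binary search still runs the same ~10 comparison steps per resize; the saving is purely in allocations and the resulting GC pressure.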

MovieClipHelper: O(1) deep name lookup via pre-built map

  • buildNamedChildMap now also builds _mh_deep_named_children_map — a flattened name→children[] index across all descendants.
  • findFromNameInContainer uses this map for direct lookup instead of recursively walking _mh_named_children_map on every call. The map is built once and cached alongside the existing _mh_named_children_map.
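The flattened index might look roughly like the sketch below. The map-building walk and property names here (`_deepMap`, plain `children`/`name` objects) are assumptions for illustration; the real code stores the index as `_mh_deep_named_children_map` on createjs containers.

```javascript
// Build a flattened name → descendants[] index in one recursive pass.
// Plain objects stand in for display objects, each with an optional
// `name` and `children` array.
function buildDeepNamedChildMap(container) {
    var map = {};
    (function walk(node) {
        var children = node.children || [];
        for (var i = 0; i < children.length; i++) {
            var child = children[i];
            if (child.name) {
                (map[child.name] = map[child.name] || []).push(child);
            }
            walk(child); // index all descendants, not just direct children
        }
    })(container);
    return map;
}

// Lookup then becomes O(1): build once, cache, read directly.
function findFromName(container, name) {
    if (!container._deepMap) {
        container._deepMap = buildDeepNamedChildMap(container);
    }
    return container._deepMap[name] || [];
}
```

The trade-off is the usual one for caches over mutable trees: if children are added or renamed after the map is built, the index must be invalidated and rebuilt.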

Placeholder: guard z-index check in _tick

  • Skip updateZIndex() when element.parent is null, avoiding two getChildIndex calls per tick on orphaned elements.
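The guard itself is a one-line null check; a minimal illustration with counting stubs (names follow the PR, internals assumed — the real updateZIndex performs the two getChildIndex calls):

```javascript
// Hypothetical stand-in for the Placeholder tick path. updateZIndex is
// stubbed to count invocations instead of calling getChildIndex twice.
function makePlaceholder(element) {
    var calls = 0;
    return {
        element: element,
        updateZIndex: function () { calls += 1; },
        _tick: function () {
            // Skip z-index maintenance entirely when the element is orphaned.
            if (this.element && this.element.parent) {
                this.updateZIndex();
            }
        },
        zIndexCalls: function () { return calls; }
    };
}
```

Small per-call, but _tick runs every frame for every placeholder, so the saving scales with scene size and frame rate.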

Eliminate duplicate getBounds() calls

  • BigText.getAvailableSpace and Background.getChildBounds/getAvailableSpace now cache the getBounds() result in a local variable instead of calling it 2-3× per invocation.
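The caching fix is the standard "call once, read many" pattern. A sketch under assumed names (the counting stub below is hypothetical; createjs's real getBounds() may walk children, which is why repeat calls add up):

```javascript
// Stub display object that counts how often getBounds() is invoked.
function countingBounds() {
    var calls = 0;
    return {
        getBounds: function () {
            calls += 1;
            return { x: 0, y: 0, width: 100, height: 50 };
        },
        boundsCalls: function () { return calls; }
    };
}

// After the fix: one getBounds() per invocation, cached in a local,
// instead of one call per property read.
function getAvailableSpace(displayObject) {
    var bounds = displayObject.getBounds();
    return bounds ? { width: bounds.width, height: bounds.height } : null;
}
```
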
Original prompt

This section details the original issue you should resolve

<issue_title>Performance Audit & Optimization for Low-End Devices (Focus: Expensive Components)</issue_title>
<issue_description>As we expand the use of EaselBone within the QuizWitz game client, we are seeing performance degradation on lower-spec hardware (older tablets, budget smartphones). Certain components, specifically those requiring intensive real-time calculations, are causing frame rate drops and input lag.

The primary culprit identified is the BigText component, which uses an iterative approach to fit text into a container. However, a general audit of the adapter's update cycle is needed.

Another culprit is findAllPlaceholders which does quite a heavy search on named elements in the tree.

Also look at the other components to see if any other improvements can be made.</issue_description>

Comments on the Issue (you are @copilot in this section)



…duce per-tick overhead

Co-authored-by: daedeloth <1168599+daedeloth@users.noreply.github.com>
Copilot AI changed the title [WIP] Optimize performance for low-end devices in QuizWitz Perf: optimize BigText binary search, findPlaceholders deep lookup, and reduce per-tick overhead Feb 10, 2026
Copilot AI requested a review from daedeloth February 10, 2026 09:09
@github-actions

📦 PR Preview deployed!

Examples are available at:

@daedeloth
Member

@copilot i don't suppose there is a way to measure performance improvements?

Contributor Author

Copilot AI commented Feb 10, 2026

@copilot i don't suppose there is a way to measure performance improvements?

There's no built-in benchmark suite in this repo, but here are practical ways to measure the impact:

Browser DevTools Performance panel – Record a session before/after in Chrome DevTools (Performance tab). Compare frame times and look for long-task markers in the flame chart, especially around goBigOrGoHomeBinary and findFromNameInContainer.

Quick micro-benchmarks – Wrap the hot paths with performance.now():

var t0 = performance.now();
var text = this.goBigOrGoHome(this.textstring, space.width, space.height);
console.log('BigText fit: ' + (performance.now() - t0).toFixed(2) + 'ms');

GC pressure (BigText) – The main BigText win is fewer allocations (~10 new Text() per resize → 1 measurement + 1 final). Chrome's Memory tab → Allocation timeline will show reduced GC spikes during text resizing.

Placeholder lookup – findFromNameInContainer went from a recursive tree walk to a single map lookup. The deeper/wider the display tree, the bigger the speedup. Wrapping calls in console.time('findPlaceholders') / console.timeEnd('findPlaceholders') in a real QuizWitz session would show this clearly.

The gains are most visible on low-end devices under real workloads (many BigText instances resizing, deep display trees) rather than in synthetic benchmarks.

@daedeloth daedeloth marked this pull request as ready for review February 10, 2026 09:40
@daedeloth daedeloth merged commit 5cb0f38 into master Feb 10, 2026
2 checks passed


Development

Successfully merging this pull request may close these issues.

Performance Audit & Optimization for Low-End Devices (Focus: Expensive Components)
