Conversation
```ts
if (
  this.focusKey &&
  (eventType === DIRECTION_LEFT ||
    eventType === DIRECTION_RIGHT ||
    eventType === DIRECTION_UP ||
    eventType === DIRECTION_DOWN)
) {
  this.onArrowRelease(eventType);
```
```ts
/**
 * Must be called BEFORE updateParentsHasFocusedChild so we can compare
 * the new parent chain against the current one to detect newly entered regions.
 */
```
This implementation makes it fully "stateless" in the sense that we don't need to remember which parents were entered before and which are being entered for the first time. It simply checks the parent tree before updating the focus, so any parents that do not have a focused child NOW are treated as regions being "entered for the first time".
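Under that approach, the check can be sketched as a pure helper. The `ParentNode` shape and the function name below are hypothetical, for illustration only, not the library's actual types:

```typescript
// Hypothetical shape of a focusable parent node; field names are assumptions.
interface ParentNode {
  focusKey: string;
  hasFocusedChild: boolean;
}

// Called BEFORE updateParentsHasFocusedChild: the parents that do NOT
// currently have a focused child are exactly the regions being
// "entered for the first time" by the new focus target.
function getNewlyEnteredParents(newParentChain: ParentNode[]): ParentNode[] {
  return newParentChain.filter((parent) => !parent.hasFocusedChild);
}
```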
```diff
- distanceCalculationMethod: 'center'
+ distanceCalculationMethod: 'center',
+ onUtterText: (text: string) => {
+   console.log('onUtterText', text);
```
This will go into the platforms' TTS method.
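As a sketch of that wiring, a browser-based TV platform could route the callback into the Web Speech API, while keeping other engines pluggable behind the same function type. The `TtsSink` type and `makeOnUtterText` helper below are illustrative assumptions, not part of this PR:

```typescript
// A pluggable TTS sink; each platform substitutes its own engine here.
type TtsSink = (text: string) => void;

// Web Speech API backend, available on most browser-based TV platforms.
// Cast through globalThis so this also compiles outside a DOM environment.
const webSpeechTts: TtsSink = (text) => {
  const g = globalThis as any;
  g.speechSynthesis.speak(new g.SpeechSynthesisUtterance(text));
};

// Picks the provided platform sink, then the Web Speech API if present,
// and falls back to plain logging when no speech engine is available.
function makeOnUtterText(sink?: TtsSink): TtsSink {
  if (sink) return sink;
  if ((globalThis as any).speechSynthesis) return webSpeechTts;
  return (text) => console.log('onUtterText (no TTS engine)', text);
}
```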
predikament left a comment:
Looks good to me 👍🏻
I guess we want to update the CHANGELOG and README accordingly?
This PR adds the capability to pass accessibility labels to focusable components, so that when they are focused, a global `onUtterText` callback is called with the concatenated labels of all parent nodes plus the leaf child node, to be spoken by the platform's Text To Speech engine. This library does not implement the TTS itself; it only provides a way of specifying labels and listening to the callback.

The main motivation is the fragmented support for Aria labels on different TV platforms: when this library is used in a cross-platform project, it can provide a unified way of defining accessibility labels and connecting the callback to each platform's TTS methods.
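The concatenation described above can be sketched as follows. The `buildUtterance` name, the top-down ordering, and the ", " separator are assumptions for illustration; the library may join the labels differently:

```typescript
// Builds the spoken utterance from the ancestor chain plus the focused leaf.
// Ordering (top-down) and the ", " separator are illustrative assumptions.
function buildUtterance(parentLabels: string[], leafLabel: string): string {
  return [...parentLabels, leafLabel]
    .filter((label) => label.length > 0)
    .join(', ');
}
```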
A basic usage example is added to `App.tsx`.

TL;DR:

- Connect the `onUtterText` callback to your platform TTS engine method
- Pass the `accessibilityLabel` prop to each `useFocusable` where you need the element to be accessible

TODO: