speech package

speech.Spri

alias of SpeechPriority

speech.logBadSequenceTypes(sequence: Iterable[SpeechCommand | str], raiseExceptionOnError=False) bool

Check if the provided sequence is valid, otherwise log an error (only if speech is checked in the “log categories” setting of the advanced settings panel).
@param sequence: the sequence to check
@param raiseExceptionOnError: if True, an exception is raised. Useful to help track down the introduction of erroneous speechSequence data.
@return: True if the sequence is valid.

class speech.GeneratorWithReturn(gen: Iterable, defaultReturnValue=None)

Bases: Iterable

Helper class, used with generator functions to access the ‘return’ value after there are no more values to iterate over.

_abc_impl = <_abc._abc_data object>
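A sketch of how such a helper can capture a generator's return value (illustrative, not NVDA's exact implementation):

```python
# Illustrative wrapper: iterates a generator and stores its 'return' value
# once it is exhausted, so callers can read it after iteration.
class GeneratorWithReturn:
	def __init__(self, gen, defaultReturnValue=None):
		self.gen = gen
		self.returnValue = defaultReturnValue

	def __iter__(self):
		# 'yield from' forwards the items and captures the return value.
		self.returnValue = yield from self.gen


def gen():
	yield 1
	yield 2
	return "done"  # normally lost when iterating with a plain for-loop


g = GeneratorWithReturn(gen())
items = list(g)
result = g.returnValue
```

Here `items` is `[1, 2]` and `result` is `"done"`; without the wrapper, the return value of a generator is only reachable by catching `StopIteration` manually.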
speech._flattenNestedSequences(nestedSequences: Iterable[list[SpeechCommand | str]] | GeneratorWithReturn) Generator[SpeechCommand | str, Any, bool | None]

Turns [[a,b,c],[d,e]] into [a,b,c,d,e]
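A minimal re-creation of this flattening (not the module's actual code, which also propagates a return value via GeneratorWithReturn):

```python
from typing import Any, Generator, Iterable

def flattenNestedSequences(nestedSequences: Iterable[list]) -> Generator[Any, None, None]:
	# Yield each item of each inner sequence in order.
	for seq in nestedSequences:
		yield from seq

flat = list(flattenNestedSequences([["a", "b", "c"], ["d", "e"]]))
```

`flat` is `["a", "b", "c", "d", "e"]`, matching the example in the docstring.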

speech._getSpellingSpeechAddCharMode(seq: Generator[SpeechCommand | str, None, None]) Generator[SpeechCommand | str, None, None]

Inserts CharacterMode commands in a speech sequence generator to ensure any single character is spelled by the synthesizer. @param seq: The speech sequence to be spelt.

speech._getSpellingCharAddCapNotification(speakCharAs: str, sayCapForCapitals: bool, capPitchChange: int, beepForCapitals: bool, reportNormalized: bool = False) Generator[SpeechCommand | str, None, None]

This function produces a speech sequence containing a character to be spelt as well as commands to indicate that this character is uppercase and/or normalized, if applicable.
:param speakCharAs: The character as it will be spoken by the synthesizer.
:param sayCapForCapitals: indicates if ‘cap’ should be reported along with the currently spelled character.
:param capPitchChange: pitch offset to apply while spelling the currently spelled character.
:param beepForCapitals: indicates if a cap notification beep should be produced while spelling the currently spelled character.
:param reportNormalized: Indicates if ‘normalized’ should be reported along with the currently spelled character.

speech._getSpellingSpeechWithoutCharMode(text: str, locale: str, useCharacterDescriptions: bool, sayCapForCapitals: bool, capPitchChange: int, beepForCapitals: bool, fallbackToCharIfNoDescription: bool = True, unicodeNormalization: bool = False, reportNormalizedForCharacterNavigation: bool = False) Generator[SpeechCommand | str, None, None]

Processes text when spoken by character. This doesn’t take care of character mode (option “Use spelling functionality”).
:param text: The text to speak. This is usually one character or a string containing a decomposite character (or glyph).

Parameters:
  • locale – The locale used to generate character descriptions, if applicable.

  • useCharacterDescriptions – Whether or not to use character descriptions, e.g. speak “a” as “alpha”.

  • sayCapForCapitals – Indicates if ‘cap’ should be reported along with the currently spelled character.

  • capPitchChange – Pitch offset to apply while spelling the currently spelled character.

  • beepForCapitals – Indicates if a cap notification beep should be produced while spelling the currently spelled character.

  • fallbackToCharIfNoDescription – Only applies if useCharacterDescriptions is True. If fallbackToCharIfNoDescription is True, and no character description is found, the character itself will be announced. Otherwise, nothing will be spoken.

  • unicodeNormalization – Whether to use Unicode normalization for the given text.

  • reportNormalizedForCharacterNavigation – When unicodeNormalization is true, indicates if ‘normalized’ should be reported along with the currently spelled character.

Returns:

A speech sequence generator.

speech._extendSpeechSequence_addMathForTextInfo(speechSequence: list[SpeechCommand | str], info: TextInfo, field: Field) None
speech._getPlaceholderSpeechIfTextEmpty(obj, reason: OutputReason) Tuple[bool, list[SpeechCommand | str]]
Attempt to get speech for placeholder attribute if text for ‘obj’ is empty. Don’t report the placeholder value unless the text is empty, because it is confusing to hear the current value (presumably typed by the user) and the placeholder. The placeholder should “disappear” once the user types a value.

@return: (True, SpeechSequence) if text for obj was considered empty and we attempted to get speech for the placeholder value. (False, []) if text for obj was not considered empty.

speech._getSelectionMessageSpeech(message: str, text: str) list[SpeechCommand | str]
speech._getSpeakMessageSpeech(text: str) list[SpeechCommand | str]

Gets the speech sequence for a given message. @param text: the message to speak

speech._objectSpeech_calculateAllowedProps(reason: OutputReason, shouldReportTextContent: bool, objRole: Role) dict[str, bool]
speech._suppressSpeakTypedCharacters(number: int)

Suppress speaking of typed characters. This should be used when sending a string of characters to the system and those characters should not be spoken individually as if the user were typing them. @param number: The number of characters to suppress.

speech.cancelSpeech()

Interrupts the synthesizer if it is currently speaking.

speech.clearTypedWordBuffer() None

Forgets any word currently being built up with typed characters for speaking. This should be called when the user’s context changes such that they could no longer complete the word (such as a focus change or choosing to move the caret).

speech.getCharDescListFromText(text, locale)

This method prepares a list containing each character (or character cluster) of the text together with its description, by checking for character descriptions in characterDescriptions.dic of the given locale for all possible combinations of consecutive characters in the text. This handles conjunct characters present in several languages such as Hindi, Urdu, etc.
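The longest-match lookup described above can be sketched as follows. The description table here is invented for illustration (it is not characterDescriptions.dic), and the real function's return format may differ:

```python
# Hypothetical description table; multi-character keys model conjuncts.
charDescriptions = {
	"a": ["alpha"],
	"b": ["bravo"],
	"ab": ["hypothetical conjunct a+b"],  # multi-character entry wins
}

def getCharDescListFromText(text, table):
	result = []
	i = 0
	while i < len(text):
		# Try the longest run of consecutive characters first.
		for j in range(len(text), i, -1):
			chunk = text[i:j]
			if chunk in table:
				result.append((chunk, table[chunk]))
				i = j
				break
		else:
			result.append((text[i], None))  # no description found
			i += 1
	return result

pairs = getCharDescListFromText("abb", charDescriptions)
```

With this table, `"abb"` resolves to the cluster `"ab"` followed by the single character `"b"`, illustrating why consecutive-character combinations are checked before falling back to single characters.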

speech.getControlFieldSpeech(attrs: ControlField, ancestorAttrs: List[Field], fieldType: str, formatConfig: Dict[str, bool] | None = None, extraDetail: bool = False, reason: OutputReason | None = None) list[SpeechCommand | str]
speech.getCurrentLanguage() str
speech.getFormatFieldSpeech(attrs: Field, attrsCache: Field | None = None, formatConfig: Dict[str, bool] | None = None, reason: OutputReason | None = None, unit: str | None = None, extraDetail: bool = False, initialFormat: bool = False) list[SpeechCommand | str]
speech.getIndentationSpeech(indentation: str, formatConfig: Dict[str, bool]) list[SpeechCommand | str]

Retrieves the indentation speech sequence for a given string of indentation. @param indentation: The string of indentation. @param formatConfig: The configuration to use.

speech.getObjectPropertiesSpeech(obj: NVDAObjects.NVDAObject, reason: OutputReason = OutputReason.QUERY, _prefixSpeechCommand: SpeechCommand | None = None, **allowedProperties) list[SpeechCommand | str]
speech.getObjectSpeech(obj: NVDAObjects.NVDAObject, reason: OutputReason = OutputReason.QUERY, _prefixSpeechCommand: SpeechCommand | None = None) list[SpeechCommand | str]
speech.getPreselectedTextSpeech(text: str) list[SpeechCommand | str]

Helper method to get the speech sequence to announce that a newly focused control already has text selected. This method will speak the word “selected” with the provided text appended. The announcement order is different from L{speakTextSelected} in order to inform a user that the newly focused control has content that is selected, which they may unintentionally overwrite.

@remarks: Implemented using L{_getSelectionMessageSpeech}, which allows for

creating a speech sequence with an arbitrary attached message.
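The ordering difference can be sketched with plain strings (a simplification: the real helpers return full speech sequences built via _getSelectionMessageSpeech):

```python
def preselectedTextSpeech(text):
	# "selected" first: warns the user before the content is spoken,
	# because the selection was not caused by their own action.
	return ["selected", text]

def textSelectedSpeech(text):
	# content first: confirms text the user just selected themselves.
	return [text, "selected"]
```

So a newly focused field announces "selected hello", while a user-driven selection announces "hello selected".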

speech.getPropertiesSpeech(reason: OutputReason = OutputReason.QUERY, **propertyValues) list[SpeechCommand | str]
speech.getSpellingSpeech(text: str, locale: str | None = None, useCharacterDescriptions: bool = False) Generator[SpeechCommand | str, None, None]
speech.getState()
speech.getTableInfoSpeech(tableInfo: Dict[str, Any] | None, oldTableInfo: Dict[str, Any] | None, extraDetail: bool = False) list[SpeechCommand | str]
speech.getTextInfoSpeech(info: TextInfo, useCache: bool | SpeakTextInfoState = True, formatConfig: Dict[str, bool] = None, unit: str | None = None, reason: OutputReason = OutputReason.QUERY, _prefixSpeechCommand: SpeechCommand | None = None, onlyInitialFields: bool = False, suppressBlanks: bool = False) Generator[list[SpeechCommand | str], None, bool]
speech.isBlank(text)

Determine whether text should be reported as blank. @param text: The text in question. @type text: str @return: C{True} if the text is blank, C{False} if not. @rtype: bool

speech.pauseSpeech(switch)
speech.processText(locale: str, text: str, symbolLevel: SymbolLevel, normalize: bool = False) str

Processes text for symbol pronunciation, speech dictionaries and Unicode normalization.
:param locale: The language the given text is in, passed for symbol pronunciation.
:param text: The text to process.
:param symbolLevel: The verbosity level used for symbol pronunciation.
:param normalize: Whether to apply Unicode normalization to the text after it has been processed for symbol pronunciation and speech dictionaries.

Returns:

The processed text

speech.setSpeechMode(newMode: SpeechMode)
speech.speak(speechSequence: list[SpeechCommand | str], symbolLevel: SymbolLevel | None = None, priority: SpeechPriority = SpeechPriority.NORMAL)

Speaks a sequence of text and speech commands.
@param speechSequence: the sequence of text and L{SpeechCommand} objects to speak
@param symbolLevel: The symbol verbosity level; C{None} (default) to use the user’s configuration.
@param priority: The speech priority.
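A speech sequence mixes plain strings with SpeechCommand objects. The sketch below mimics that shape with stand-in command classes; inside NVDA the commands come from speech.commands and the list is passed to speech.speak():

```python
# Stand-in command classes, mirroring the shape of speech.commands
# (illustrative only; not the real implementations).
class LangChangeCommand:
	def __init__(self, lang):
		self.lang = lang

class PitchCommand:
	def __init__(self, offset=0, multiplier=1):
		self.offset, self.multiplier = offset, multiplier

sequence = [
	"Hello",
	PitchCommand(offset=30),  # raise pitch for the next part
	"world",
	PitchCommand(),           # back to the configured default
	LangChangeCommand("fr_FR"),
	"bonjour",
]
```

Inside NVDA, `speech.speak(sequence)` would speak "Hello", raise the pitch for "world", restore it, switch language, then speak "bonjour".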

speech.speakSsml(ssml: str, markCallback: MarkCallbackT | None = None, symbolLevel: SymbolLevel | None = None, _prefixSpeechCommand: SpeechCommand | None = None, priority: SpeechPriority | None = None) None

Speaks a given speech sequence provided as SSML.
:param ssml: The SSML data to speak.
:param markCallback: An optional callback called for every mark command in the SSML.
:param symbolLevel: The symbol verbosity level.
:param _prefixSpeechCommand: A SpeechCommand to prepend to the sequence.
:param priority: The speech priority.

speech.speakMessage(text: str, priority: SpeechPriority | None = None) None

Speaks a given message. @param text: the message to speak @param priority: The speech priority.

speech.speakObject(obj, reason: OutputReason = OutputReason.QUERY, _prefixSpeechCommand: SpeechCommand | None = None, priority: SpeechPriority | None = None)
speech.speakObjectProperties(obj: NVDAObjects.NVDAObject, reason: OutputReason = OutputReason.QUERY, _prefixSpeechCommand: SpeechCommand | None = None, priority: SpeechPriority | None = None, **allowedProperties)
speech.speakPreselectedText(text: str, priority: SpeechPriority | None = None)

Helper method to announce that a newly focused control already has text selected. This method is in contrast with L{speakTextSelected}. The method will speak the word “selected” with the provided text appended. The announcement order is different from L{speakTextSelected} in order to inform a user that the newly focused control has content that is selected, which they may unintentionally overwrite.

@remarks: Implemented using L{getPreselectedTextSpeech}

speech.speakSelectionChange(oldInfo: TextInfo, newInfo: TextInfo, speakSelected: bool = True, speakUnselected: bool = True, generalize: bool = False, priority: SpeechPriority | None = None)

Speaks a change in selection, either selected or unselected text.
@param oldInfo: a TextInfo instance representing what the selection was before
@param newInfo: a TextInfo instance representing what the selection is now
@param generalize: if True, then this function knows that the text may have changed between the creation of the oldInfo and newInfo objects, meaning that changes need to be spoken more generally, rather than speaking the specific text, as the bounds may be all wrong.
@param priority: The speech priority.

speech.speakSelectionMessage(message: str, text: str, priority: SpeechPriority | None = None)
speech.speakSpelling(text: str, locale: str | None = None, useCharacterDescriptions: bool = False, priority: SpeechPriority | None = None) None
speech.speakText(text: str, reason: OutputReason = OutputReason.MESSAGE, symbolLevel: SymbolLevel | None = None, priority: SpeechPriority | None = None)

Speaks some text. @param text: The text to speak. @param reason: Unused @param symbolLevel: The symbol verbosity level; C{None} (default) to use the user’s configuration. @param priority: The speech priority.

speech.speakTextInfo(info: TextInfo, useCache: bool | SpeakTextInfoState = True, formatConfig: Dict[str, bool] = None, unit: str | None = None, reason: OutputReason = OutputReason.QUERY, _prefixSpeechCommand: SpeechCommand | None = None, onlyInitialFields: bool = False, suppressBlanks: bool = False, priority: SpeechPriority | None = None) bool
class speech.SpeakTextInfoState(obj)

Bases: object

Caches the state of speakTextInfo such as the current controlField stack, current formatfield and indentation.

objRef
controlFieldStackCache
formatFieldAttributesCache
indentationCache
updateObj()
copy()
speech.speakTextSelected(text: str, priority: SpeechPriority | None = None)

Helper method to announce that the user has caused text to be selected. This method is in contrast with L{speakPreselectedText}. The method will speak the provided text with the word “selected” appended.

@remarks: Implemented using L{speakSelectionMessage}, which allows for

speaking text with an arbitrary attached message.

speech.speakTypedCharacters(ch: str)
class speech.SpeechMode(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)

Bases: DisplayStringIntEnum

off = 0
beeps = 1
talk = 2
onDemand = 3
property _displayStringLabels: dict[Self, str]

Specify a dictionary which takes members of the Enum and returns the translated display string.
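The enum's values can be reproduced with the standard library for illustration; the real class derives from DisplayStringIntEnum and adds translated display strings:

```python
from enum import IntEnum

# Stand-in mirroring the documented values of speech.SpeechMode.
class SpeechMode(IntEnum):
	off = 0
	beeps = 1
	talk = 2
	onDemand = 3

# Inside NVDA one would call speech.setSpeechMode(SpeechMode.talk).
current = SpeechMode.talk
```

Because it is an IntEnum, members compare equal to their integer values, so configuration code can store and compare them as plain ints.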

speech.spellTextInfo(info: TextInfo, useCharacterDescriptions: bool = False, priority: SpeechPriority | None = None) None

Spells the text from the given TextInfo, honouring any LangChangeCommand objects it finds if autoLanguageSwitching is enabled.

speech.splitTextIndentation(text)

Splits indentation from the rest of the text. @param text: The text to split. @type text: str @return: Tuple of indentation and content. @rtype: (str, str)
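A plausible re-implementation of this split, assuming indentation means leading spaces and tabs:

```python
def splitTextIndentation(text):
	# Everything up to the first non-whitespace character is indentation.
	stripped = text.lstrip(" \t")
	indentation = text[: len(text) - len(stripped)]
	return indentation, stripped
```

For example, `splitTextIndentation("\t    x = 1")` returns `("\t    ", "x = 1")`, and text with no indentation returns an empty first element.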

Submodules

speech.commands module

Commands that can be embedded in a speech sequence for changing synth parameters, playing sounds or running

other callbacks.

class speech.commands.SpeechCommand

Bases: object

The base class for objects that can be inserted between strings of text to perform actions, change voice parameters, etc.

Note: Some of these commands are processed by NVDA and are not directly passed to synth drivers. Synth drivers will only receive commands derived from L{SynthCommand}.

class speech.commands._CancellableSpeechCommand(reportDevInfo=False)

Bases: SpeechCommand

A command that allows cancelling the utterance that contains it. Support is currently experimental and may be subject to change.

@param reportDevInfo: If true, developer info is reported for repr implementation.

abstract _checkIfValid()
abstract _getDevInfo()
_checkIfCancelled()
property isCancelled
cancelUtterance()
_getFormattedDevInfo()
class speech.commands.SynthCommand

Bases: SpeechCommand

Commands that can be passed to synth drivers.

class speech.commands.IndexCommand(index)

Bases: SynthCommand

Marks this point in the speech with an index. When speech reaches this index, the synthesizer notifies NVDA, thus allowing NVDA to perform actions at specific points in the speech; e.g. synchronizing the cursor, beeping or playing a sound. Callers should not use this directly. Instead, use one of the subclasses of L{BaseCallbackCommand}. NVDA handles the indexing and dispatches callbacks as appropriate.

@param index: the value of this index @type index: integer

class speech.commands.SynthParamCommand

Bases: SynthCommand

A synth command which changes a parameter for subsequent speech.

isDefault = False

Whether this command returns the parameter to its default value. Note that the default might be configured by the user; e.g. for pitch, rate, etc. @type: bool

class speech.commands.CharacterModeCommand(state)

Bases: SynthParamCommand

Turns character mode on and off for speech synths.

@param state: if true, character mode is on; if false, it is turned off. @type state: boolean

class speech.commands.LangChangeCommand(lang: str | None)

Bases: SynthParamCommand

A command to switch the language within speech.

@param lang: the language to switch to: If None then the NVDA locale will be used.

class speech.commands.BreakCommand(time: int = 0)

Bases: SynthCommand

Insert a break between words.

@param time: The duration of the pause to be inserted in milliseconds.

time

Time in milliseconds

class speech.commands.EndUtteranceCommand

Bases: SpeechCommand

End the current utterance at this point in the speech. Any text after this will be sent to the synthesizer as a separate utterance.

class speech.commands.SuppressUnicodeNormalizationCommand(state: bool = True)

Bases: SpeechCommand

Suppresses Unicode normalization at a point in a speech sequence. For any text after this, Unicode normalization will be suppressed when state is True. When state is False, original behavior of normalization will be restored. This command is a no-op when normalization is disabled.

Parameters:

state – Suppress normalization if True, don’t suppress when False

state: bool
class speech.commands.BaseProsodyCommand(offset=0, multiplier=1)

Bases: SynthParamCommand

Base class for commands which change voice prosody; i.e. pitch, rate, etc. The change to the setting is specified using either an offset or a multiplier, but not both. The L{offset} and L{multiplier} properties convert between the two if necessary. To return to the default value, specify neither. This base class should not be instantiated directly.

Constructor. Either of C{offset} or C{multiplier} may be specified, but not both.
@param offset: The amount by which to increase/decrease the user configured setting; e.g. 30 increases by 30, -10 decreases by 10, 0 returns to the configured setting.
@type offset: int
@param multiplier: The number by which to multiply the user configured setting; e.g. 0.5 is half, 1 returns to the configured setting.
@type multiplier: int/float

settingName = None

The name of the setting in the configuration; e.g. pitch, rate, etc.

property defaultValue

The default value for the setting as configured by the user.

property multiplier

The number by which to multiply the default value.

property offset

The amount by which to increase/decrease the default value.

property newValue

The new absolute value after the offset or multiplier is applied to the default value.
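As a worked example of the offset/multiplier rules above, the sketch below stubs defaultValue to 50 rather than reading the user's configuration; it is not NVDA's implementation:

```python
# Illustrative prosody command: resolve offset or multiplier to an
# absolute value, per the documented semantics.
class ProsodySketch:
	defaultValue = 50  # stand-in for the user-configured setting

	def __init__(self, offset=0, multiplier=1):
		if offset != 0 and multiplier != 1:
			raise ValueError("Specify either offset or multiplier, not both")
		self.offset = offset
		self.multiplier = multiplier

	@property
	def newValue(self):
		if self.offset != 0:
			# e.g. 30 raises 50 to 80, -10 lowers it to 40
			return self.defaultValue + self.offset
		# e.g. 0.5 halves 50 to 25; multiplier 1 returns the default
		return int(self.defaultValue * self.multiplier)
```

So `ProsodySketch(offset=30).newValue` is 80, `ProsodySketch(multiplier=0.5).newValue` is 25, and specifying neither returns the configured default of 50.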

class speech.commands.PitchCommand(offset=0, multiplier=1)

Bases: BaseProsodyCommand

Change the pitch of the voice.

Constructor. Either of C{offset} or C{multiplier} may be specified, but not both.
@param offset: The amount by which to increase/decrease the user configured setting; e.g. 30 increases by 30, -10 decreases by 10, 0 returns to the configured setting.
@type offset: int
@param multiplier: The number by which to multiply the user configured setting; e.g. 0.5 is half, 1 returns to the configured setting.
@type multiplier: int/float

settingName = 'pitch'

The name of the setting in the configuration; e.g. pitch, rate, etc.

class speech.commands.VolumeCommand(offset=0, multiplier=1)

Bases: BaseProsodyCommand

Change the volume of the voice.

Constructor. Either of C{offset} or C{multiplier} may be specified, but not both.
@param offset: The amount by which to increase/decrease the user configured setting; e.g. 30 increases by 30, -10 decreases by 10, 0 returns to the configured setting.
@type offset: int
@param multiplier: The number by which to multiply the user configured setting; e.g. 0.5 is half, 1 returns to the configured setting.
@type multiplier: int/float

settingName = 'volume'

The name of the setting in the configuration; e.g. pitch, rate, etc.

class speech.commands.RateCommand(offset=0, multiplier=1)

Bases: BaseProsodyCommand

Change the rate of the voice.

Constructor. Either of C{offset} or C{multiplier} may be specified, but not both.
@param offset: The amount by which to increase/decrease the user configured setting; e.g. 30 increases by 30, -10 decreases by 10, 0 returns to the configured setting.
@type offset: int
@param multiplier: The number by which to multiply the user configured setting; e.g. 0.5 is half, 1 returns to the configured setting.
@type multiplier: int/float

settingName = 'rate'

The name of the setting in the configuration; e.g. pitch, rate, etc.

class speech.commands.PhonemeCommand(ipa, text=None)

Bases: SynthCommand

Insert a specific pronunciation. This command accepts Unicode International Phonetic Alphabet (IPA) characters. Note that this is not well supported by synthesizers.

@param ipa: Unicode IPA characters.
@type ipa: str
@param text: Text to speak if the synthesizer does not support some or all of the specified IPA characters; C{None} to ignore this command instead.
@type text: str

class speech.commands.BaseCallbackCommand

Bases: SpeechCommand

Base class for commands which cause a function to be called when speech reaches them. This class should not be instantiated directly. It is designed to be subclassed to provide specific functionality; e.g. L{BeepCommand}. To supply a generic function to run, use L{CallbackCommand}. This command is never passed to synth drivers.

abstract run()

Code to run when speech reaches this command. This method is executed in NVDA’s main thread, therefore it must return as soon as practically possible; otherwise it will block production of further speech and/or other functionality in NVDA.

_abc_impl = <_abc._abc_data object>
class speech.commands.CallbackCommand(callback, name: str | None = None)

Bases: BaseCallbackCommand

Call a function when speech reaches this point. Note that the provided function is executed in NVDA’s main thread, therefore it must return as soon as practically possible; otherwise it will block production of further speech and/or other functionality in NVDA.

run(*args, **kwargs)

Code to run when speech reaches this command. This method is executed in NVDA’s main thread, therefore it must return as soon as practically possible; otherwise it will block production of further speech and/or other functionality in NVDA.

_abc_impl = <_abc._abc_data object>
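To illustrate the intended usage, here is a minimal sketch of embedding a callback in a speech sequence. The CallbackCommand class below is a stand-in for speech.commands.CallbackCommand, and the manager's index handling is reduced to a direct run() call:

```python
# Stand-in for speech.commands.CallbackCommand (illustrative only):
# stores a function to be invoked when speech reaches this point.
events = []

class CallbackCommand:
	def __init__(self, callback, name=None):
		self.callback = callback
		self.name = name

	def run(self):
		# In NVDA, run() is executed on the main thread when the synth
		# reports the index associated with this command.
		self.callback()

sequence = [
	"Saving",
	CallbackCommand(lambda: events.append("file saved"), name="afterSave"),
	"done",
]

# Simulate speech reaching the command:
sequence[1].run()
```

After the simulated index is reached, `events` contains `"file saved"`; in real use the callback must return quickly, since it blocks further speech.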
class speech.commands.BeepCommand(hz, length, left=50, right=50)

Bases: BaseCallbackCommand

Produce a beep.

run()

Code to run when speech reaches this command. This method is executed in NVDA’s main thread, therefore it must return as soon as practically possible; otherwise it will block production of further speech and/or other functionality in NVDA.

_abc_impl = <_abc._abc_data object>
class speech.commands.WaveFileCommand(fileName)

Bases: BaseCallbackCommand

Play a wave file.

run()

Code to run when speech reaches this command. This method is executed in NVDA’s main thread, therefore it must return as soon as practically possible; otherwise it will block production of further speech and/or other functionality in NVDA.

_abc_impl = <_abc._abc_data object>
class speech.commands.ConfigProfileTriggerCommand(trigger, enter=True)

Bases: SpeechCommand

Applies (or stops applying) a configuration profile trigger to subsequent speech.

@param trigger: The configuration profile trigger.
@type trigger: L{config.ProfileTrigger}
@param enter: C{True} to apply the trigger, C{False} to stop applying it.
@type enter: bool

speech.extensions module

Extension points for speech.

speech.extensions.speechCanceled = <extensionPoints.Action object>

Notifies when speech is canceled. Handlers are called without arguments.

speech.extensions.pre_speechCanceled = <extensionPoints.Action object>

Notifies when speech is about to be canceled. Handlers are called without arguments.

speech.extensions.pre_speech = <extensionPoints.Action object>

Notifies when code attempts to speak text.

@param speechSequence: the sequence of text and L{SpeechCommand} objects to speak @type speechSequence: speech.SpeechSequence

@param symbolLevel: The symbol verbosity level; C{None} (default) to use the user’s configuration. @type symbolLevel: characterProcessing.SymbolLevel

@param priority: The speech priority. @type priority: priorities.Spri

speech.extensions.filter_speechSequence = <extensionPoints.Filter object>

Filters speech sequence before it passes to synthDriver.

Parameters:

value (SpeechSequence) – the speech sequence to be filtered.

speech.manager module

speech.manager._shouldCancelExpiredFocusEvents()
speech.manager._shouldDoSpeechManagerLogging()
speech.manager._speechManagerDebug(msg, *args, **kwargs) None

Log ‘msg % args’ with severity ‘DEBUG’ if speech manager logging is enabled. ‘SpeechManager-’ is prefixed to all messages to make searching the log easier.

speech.manager._speechManagerUnitTest(msg, *args, **kwargs) None

Log ‘msg % args’ with severity ‘DEBUG’ when speech manager logging is enabled. ‘SpeechManUnitTest-’ is prefixed to all messages to make searching the log easier.

class speech.manager.ParamChangeTracker

Bases: object

Keeps track of commands which change parameters from their defaults. This is useful when an utterance needs to be split. As you are processing a sequence, you update the tracker with a parameter change using the L{update} method. When you split the utterance, you use the L{getChanged} method to get the parameters which have been changed from their defaults.
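The described update/getChanged cycle can be sketched as follows; the command class and keying-by-name are simplifications of NVDA's SynthParamCommand handling:

```python
# Stand-in parameter-change command: isDefault marks a return to the
# user-configured default, mirroring SynthParamCommand.isDefault.
class ParamCommand:
	def __init__(self, name, isDefault):
		self.name, self.isDefault = name, isDefault

class ParamChangeTracker:
	def __init__(self):
		self._changed = {}

	def update(self, command):
		if command.isDefault:
			# Back to the default: no longer needs replaying.
			self._changed.pop(command.name, None)
		else:
			self._changed[command.name] = command

	def getChanged(self):
		# Commands to replay when an utterance is split.
		return list(self._changed.values())

tracker = ParamChangeTracker()
tracker.update(ParamCommand("pitch", isDefault=False))
tracker.update(ParamCommand("rate", isDefault=False))
tracker.update(ParamCommand("rate", isDefault=True))  # rate restored
changed = [c.name for c in tracker.getChanged()]
```

After these updates only the pitch change remains, so splitting the utterance would replay just the pitch command.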

update(command)

Update the tracker with a parameter change. @param command: The parameter change command. @type command: L{SynthParamCommand}

getChanged()

Get the commands for the parameters which have been changed from their defaults. @return: List of parameter change commands. @type: list of L{SynthParamCommand}

class speech.manager._ManagerPriorityQueue(priority: SpeechPriority)

Bases: object

A speech queue for a specific priority. This is intended for internal use by L{_SpeechManager} only. Each priority has a separate queue. It holds the pending speech sequences to be spoken, as well as other information necessary to restore state when this queue is preempted by a higher priority queue.

pendingSequences: List[list[SpeechCommand | str]]

The pending speech sequences to be spoken. These are split at indexes, so a single utterance might be split over multiple sequences.

enteredProfileTriggers: List[ProfileTrigger]

The configuration profile triggers that have been entered during speech.

paramTracker: ParamChangeTracker

Keeps track of parameters that have been changed during an utterance.

class speech.manager.SpeechManager

Bases: object

Manages queuing of speech utterances, calling callbacks at desired points in the speech, profile switching, prioritization, etc. This is intended for internal use only. It is used by higher level functions such as L{speak}.

The high level flow of control is as follows:

  1. A speech sequence is queued with L{speak}, which in turn calls L{_queueSpeechSequence}.

  2. L{_processSpeechSequence} is called to normalize, process and split the input sequence. It converts callbacks to indexes. All indexing is assigned and managed by this class. It maps any indexes to their corresponding callbacks. It splits the sequence at indexes so we easily know what has completed speaking. If there are end utterance commands, the sequence is split at that point. We ensure there is an index at the end of all utterances so we know when they’ve finished speaking. We ensure any config profile trigger commands are preceded by an utterance end. Parameter changes are re-applied after utterance breaks. We ensure any entered profile triggers are exited at the very end.

  3. L{_queueSpeechSequence} places these processed sequences in the queue for the priority specified by the caller in step 1. There is a separate queue for each priority.

  4. L{_pushNextSpeech} is called to begin pushing speech. It looks for the highest priority queue with pending speech. Because there’s no other speech queued, that’ll be the queue we just touched.

  5. If the input begins with a profile switch, it is applied immediately.

  6. L{_buildNextUtterance} is called to build a full utterance and it is sent to the synth.

  7. For every index reached, L{_handleIndex} is called. The completed sequence is removed from L{_pendingSequences}. If there is an associated callback, it is run. If the index marks the end of an utterance, L{_pushNextSpeech} is called to push more speech.

  8. If there is another utterance before a profile switch, it is built and sent as per steps 6 and 7.

  9. In L{_pushNextSpeech}, if a profile switch is next, we wait for the synth to finish speaking before pushing more. This is because we don’t want to start speaking too early with a different synth. L{_handleDoneSpeaking} is called when the synth finishes speaking. It pushes more speech, which includes applying the profile switch.

  10. The flow then repeats from step 6 onwards until there are no more pending sequences.

  11. If another sequence is queued via L{speak} during speech, it is processed and queued as per steps 2 and 3.

  12. If this is the first utterance at the new priority, speech is interrupted and L{_pushNextSpeech} is called. Otherwise, L{_pushNextSpeech} is called when the current utterance completes as per step 7.

  13. When L{_pushNextSpeech} is next called, it looks for the highest priority queue with pending speech. If that priority is different to the priority of the utterance just spoken, any relevant profile switches are applied to restore the state for this queue.

  14. If a lower priority utterance was interrupted in the middle, L{_buildNextUtterance} applies any parameter changes that applied before the interruption.

  15. The flow then repeats from step 6 onwards until there are no more pending sequences.

Note: All of this activity is (and must be) synchronized and serialized on the main thread.

_cancelCommandsForUtteranceBeingSpokenBySynth: Dict[_CancellableSpeechCommand, int]
_priQueues: Dict[Any, _ManagerPriorityQueue]
_curPriQueue: _ManagerPriorityQueue | None
_indexCounter

A counter for indexes sent to the synthesizer for callbacks, etc.

MAX_INDEX: int = 9999

Maximum index number to pass to synthesizers.

_generateIndexes() Generator[int, None, None]

Generator of index numbers. We don’t want to reuse index numbers too quickly, as there can be race conditions when cancelling speech which might result in an index from a previous utterance being treated as belonging to the current utterance. However, we don’t want the counter increasing indefinitely, as some synths might not be able to handle huge numbers. Therefore, we use a counter which starts at 1, counts up to L{MAX_INDEX}, wraps back to 1 and continues cycling thus. This maximum is arbitrary, but it’s small enough that any synth should be able to handle it and large enough that previous indexes won’t reasonably get reused in the same or previous utterance.
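The cycling behaviour described above can be sketched as follows (a re-creation, not the module's code):

```python
from itertools import islice

MAX_INDEX = 9999  # mirrors SpeechManager.MAX_INDEX

def generateIndexes():
	# Count 1..MAX_INDEX, then wrap back to 1 and keep cycling, so index
	# numbers are never reused too soon and never grow unboundedly.
	while True:
		yield from range(1, MAX_INDEX + 1)

seq = list(islice(generateIndexes(), MAX_INDEX + 2))
```

The sequence starts at 1, reaches MAX_INDEX, then wraps: `seq[MAX_INDEX]` is 1 again.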

_reset()
_synthStillSpeaking() bool
_hasNoMoreSpeech()
speak(speechSequence: list[SpeechCommand | str], priority: SpeechPriority)
_queueSpeechSequence(inSeq: list[SpeechCommand | str], priority: SpeechPriority) bool

@return: Whether to interrupt speech.

_ensureEndUtterance(seq: list[SpeechCommand | str], outSeqs, paramsToReplay, paramTracker)

We split at EndUtteranceCommands so the ends of utterances are easily found. This function ensures the given sequence ends with an EndUtteranceCommand and includes an index command at the end, places the complete sequence in outSeqs, clears the given sequence list ready to build a new one, and clears and refills paramsToReplay with any parameters that need to be repeated if a new sequence is going to be built.

_processSpeechSequence(inSeq: list[SpeechCommand | str])
_pushNextSpeech(doneSpeaking: bool)
_getNextPriority()

Get the highest priority queue containing pending speech.

_buildNextUtterance()

Since an utterance might be split over several sequences, build a complete utterance to pass to the synth.

_checkForCancellations(utterance: list[SpeechCommand | str]) bool

Checks utterance to ensure it is not cancelled (via a _CancellableSpeechCommand). Because synthesizers do not expect CancellableSpeechCommands, they are removed from the utterance. :arg utterance: The utterance to check for cancellations. Modified in place, CancellableSpeechCommands are removed. :return True if sequence is still valid, else False

_WRAPPED_INDEX_MAGNITUDE = 4999
classmethod _isIndexABeforeIndexB(indexA: int, indexB: int) bool

Was indexA created before indexB? Because indexes wrap after MAX_INDEX, custom logic is needed to compare relative positions. The boundary for considering a wrapped value as before another value is based on the distance between the indexes: if the distance is greater than half the available index space, it is no longer considered before. @return: True if indexA was created before indexB, else False.
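The half-space rule above can be sketched like this. This is an illustrative implementation of the documented rule (with _WRAPPED_INDEX_MAGNITUDE = 4999, half of MAX_INDEX), not necessarily NVDA's exact code:

```python
MAX_INDEX = 9999
WRAPPED_INDEX_MAGNITUDE = MAX_INDEX // 2  # 4999: half the available index space


def isIndexABeforeIndexB(indexA: int, indexB: int) -> bool:
    """True if indexA was created before indexB, allowing for wraparound.

    The forward distance from indexA to indexB (modulo MAX_INDEX) must be
    non-zero and no greater than half the index space; beyond that, the
    value is treated as wrapped and indexA is no longer "before" indexB.
    """
    distance = (indexB - indexA) % MAX_INDEX
    return 0 < distance <= WRAPPED_INDEX_MAGNITUDE
```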

classmethod _isIndexAAfterIndexB(indexA: int, indexB: int) bool
_getMostRecentlyCancelledUtterance() int | None
removeCancelledSpeechCommands()
_doRemoveCancelledSpeechCommands()
_getUtteranceIndex(utterance: list[SpeechCommand | str])
_onSynthIndexReached(synth=None, index=None)
_removeCompletedFromQueue(index: int) Tuple[bool, bool]

Removes completed speech sequences from the queue. @param index: The index just reached, indicating a completed sequence. @return: Tuple of (valid, endOfUtterance), where valid indicates whether the index was valid and endOfUtterance indicates whether this sequence was the end of the current utterance. @rtype: (bool, bool)

_handleIndex(index: int)
_onSynthDoneSpeaking(synth: SynthDriver | None = None)
_handleDoneSpeaking()
_switchProfile()
_exitProfileTriggers(triggers)
_restoreProfileTriggers(triggers)
cancel()

speech.priorities module

Speech priority enumeration.

class speech.priorities.SpeechPriority(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)

Bases: IntEnum

Facilitates the ability to prioritize speech. Note: This enum has its counterpart in the NVDAController RPC interface (nvdaController.idl). Additions to this enum should also be reflected in nvdaController.idl.

NORMAL = 0

Indicates that a speech sequence should have normal priority.

NEXT = 1

Indicates that a speech sequence should be spoken after the next utterance of lower priority is complete.

NOW = 2

Indicates that a speech sequence is very important and should be spoken right now, interrupting low priority speech. After it is spoken, interrupted speech will resume. Note that this does not interrupt previously queued speech at the same priority.

speech.priorities.Spri

Easy shorthand for the SpeechPriority class.

speech.priorities.SPEECH_PRIORITIES = (SpeechPriority.NOW, SpeechPriority.NEXT, SpeechPriority.NORMAL)

The speech priorities ordered from highest to lowest.
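This ordering is what lets the manager pick the next queue to service, as described for _getNextPriority above. A sketch of the idea (the enum values mirror the documented ones; the getNextPriority function and the dict-of-lists queue shape are illustrative assumptions):

```python
from enum import IntEnum
from typing import Dict, List, Optional


class SpeechPriority(IntEnum):
    NORMAL = 0
    NEXT = 1
    NOW = 2


# Ordered from highest to lowest, mirroring speech.priorities.SPEECH_PRIORITIES.
SPEECH_PRIORITIES = (SpeechPriority.NOW, SpeechPriority.NEXT, SpeechPriority.NORMAL)


def getNextPriority(queues: Dict[SpeechPriority, List[str]]) -> Optional[SpeechPriority]:
    """Return the highest priority that has pending speech, scanning in order."""
    for priority in SPEECH_PRIORITIES:
        if queues.get(priority):
            return priority
    return None
```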

speech.sayAll module

class speech.sayAll.CURSOR(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)

Bases: IntEnum

CARET = 0
REVIEW = 1
TABLE = 2
speech.sayAll.initialize(speakFunc: Callable[[list[SpeechCommand | str]], None], speakObject: speakObject, getTextInfoSpeech: getTextInfoSpeech, SpeakTextInfoState: SpeakTextInfoState)
class speech.sayAll._SayAllHandler(speechWithoutPausesInstance: SpeechWithoutPauses, speakObject: speakObject, getTextInfoSpeech: getTextInfoSpeech, SpeakTextInfoState: SpeakTextInfoState)

Bases: object

_getActiveSayAll

The active say all manager. This is a weakref because the manager should be allowed to die once say all is complete.

stop()

Stops any active objects reader and resets the SayAllHandler’s SpeechWithoutPauses instance

isRunning()

Determine whether say all is currently running. @return: C{True} if say all is currently running, C{False} if not. @rtype: bool

readObjects(obj: NVDAObjects.NVDAObject, startedFromScript: bool | None = False)

Starts or restarts the object reader. :param obj: the object to be read :param startedFromScript: whether the current say all action was initially started from a script; use None to keep the last value unmodified, e.g. when the say all action is resumed during skim reading.

readText(cursor: CURSOR, startPos: TextInfo | None = None, nextLineFunc: Callable[[TextInfo], TextInfo] | None = None, shouldUpdateCaret: bool = True, startedFromScript: bool | None = False) None

Starts or restarts the reader. :param cursor: the type of cursor used for say all :param startPos: start position (only used for table say all) :param nextLineFunc: function called to read the next line (only used for table say all) :param shouldUpdateCaret: whether the caret should be updated during say all (only used for table say all) :param startedFromScript: whether the current say all action was initially started from a script; use None to keep the last value unmodified, e.g. when the say all action is resumed during skim reading.

class speech.sayAll._ObjectsReader(handler: _SayAllHandler, root: NVDAObjects.NVDAObject)

Bases: TrackedObject

walk(obj: NVDAObjects.NVDAObject)
next()
stop()
class speech.sayAll._TextReader(handler: _SayAllHandler)

Bases: TrackedObject

Manages continuous reading of text. This is intended for internal use only.

The high level flow of control is as follows:

  1. The constructor sets things up.

  2. L{nextLine} is called to read the first line.

  3. When it speaks a line, L{nextLine} requests that L{lineReached} be called when we start speaking this line, providing the position and state at this point.

  4. When we start speaking a line, L{lineReached} is called and moves the cursor to that line.

  5. L{lineReached} calls L{nextLine}.

  6. If there are more lines, L{nextLine} works as per steps 3 and 4.

  7. Otherwise, if the object doesn’t support page turns, we’re finished.

  8. If the object does support page turns, we request that L{turnPage} be called when speech is finished.

  9. L{turnPage} tries to turn the page.

  10. If there are no more pages, we’re finished.

  11. If there is another page, L{turnPage} calls L{nextLine}.
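The callback-driven loop above can be sketched synchronously as follows. This is a heavily simplified illustration: in NVDA, lineReached and turnPage are invoked via speech index callbacks, whereas here "speaking" a line calls them directly, and all names are illustrative:

```python
class TextReaderSketch:
    """Synchronous sketch: nextLine speaks a line, lineReached moves the
    cursor and requests the next line, turnPage continues onto a new page."""

    def __init__(self, lines, supportsPageTurns=False, nextPage=None):
        self._lines = iter(lines)
        self._supportsPageTurns = supportsPageTurns
        self._nextPage = nextPage  # callable returning the next page's lines, or None
        self.spoken = []
        self.cursor = None

    def nextLine(self):
        line = next(self._lines, None)
        if line is None:
            if self._supportsPageTurns:
                self.turnPage()
            return  # without page turns, we're finished
        self.spoken.append(line)  # "speak" the line...
        self.lineReached(line)  # ...then the start-of-line callback fires

    def lineReached(self, line):
        self.cursor = line  # move the cursor to the line being spoken
        self.nextLine()  # and request the next one

    def turnPage(self):
        page = self._nextPage() if self._nextPage else None
        if page is None:
            return  # no more pages: finished
        self._lines = iter(page)
        self.nextLine()
```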

MAX_BUFFERED_LINES = 10
abstract getInitialTextInfo() TextInfo
abstract updateCaret(updater: TextInfo) None
shouldReadInitialPosition() bool
nextLineImpl() bool

Advances cursor to the next reading chunk (e.g. paragraph). @return: C{True} if advanced successfully, C{False} otherwise.

collapseLineImpl() bool

Collapses to the end of this line, ready to read the next. @return: C{True} if collapsed successfully, C{False} otherwise.

nextLine()
lineReached(obj, bookmark, state)
turnPage()
finish()
stop()
_abc_impl = <_abc._abc_data object>
class speech.sayAll._CaretTextReader(handler: _SayAllHandler)

Bases: _TextReader

getInitialTextInfo() TextInfo
updateCaret(updater: TextInfo) None
_abc_impl = <_abc._abc_data object>
class speech.sayAll._ReviewTextReader(handler: _SayAllHandler)

Bases: _TextReader

getInitialTextInfo() TextInfo
updateCaret(updater: TextInfo) None
_abc_impl = <_abc._abc_data object>
class speech.sayAll._TableTextReader(handler: _SayAllHandler, startPos: TextInfo | None = None, nextLineFunc: Callable[[TextInfo], TextInfo] | None = None, shouldUpdateCaret: bool = True)

Bases: _CaretTextReader

getInitialTextInfo() TextInfo
nextLineImpl() bool

Advances cursor to the next reading chunk (e.g. paragraph). @return: C{True} if advanced successfully, C{False} otherwise.

collapseLineImpl() bool

Collapses to the end of this line, ready to read the next. @return: C{True} if collapsed successfully, C{False} otherwise.

shouldReadInitialPosition() bool
updateCaret(updater: TextInfo) None
_abc_impl = <_abc._abc_data object>
class speech.sayAll.SayAllProfileTrigger

Bases: ProfileTrigger

A configuration profile trigger for when say all is in progress.

spec = 'sayAll'

speech.shortcutKeys module

Functions to create speech sequences for shortcut keys.

speech.shortcutKeys.speakKeyboardShortcuts(keyboardShortcutsStr: str | None) None
speech.shortcutKeys.getKeyboardShortcutsSpeech(keyboardShortcutsStr: str | None) list[SpeechCommand | str]

Gets the speech sequence for a shortcuts string containing one or more shortcuts. @param keyboardShortcutsStr: the shortcuts string.

speech.shortcutKeys._getKeyboardShortcutSpeech(keyboardShortcut: str) list[SpeechCommand | str]

Gets the speech sequence for a single shortcut string. @param keyboardShortcut: the shortcuts string.

speech.shortcutKeys.shouldUseSpellingFunctionality() bool
speech.shortcutKeys._getKeySpeech(key: str) list[SpeechCommand | str]

Gets the speech sequence for a string describing a key. @param key: the key string.

speech.shortcutKeys._splitShortcut(shortcut: str) Tuple[List[str], str]

Splits a string representing a shortcut key combination. @param shortcut: the shortcut to split.

It may be of the form “NVDA+R” or “NVDA + R”, i.e. key names separated by a “+” symbol with or without spaces around it.

@return: 2-tuple containing the list of keys and the separator used between them.

E.g. ([‘NVDA’, ‘R’], ‘ + ’)
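A sketch of this split (an illustrative implementation of the described behaviour, not the actual NVDA code):

```python
import re
from typing import List, Tuple


def splitShortcut(shortcut: str) -> Tuple[List[str], str]:
    """Split "NVDA+R" or "NVDA + R" into the key names and the separator used."""
    match = re.search(r"\s*\+\s*", shortcut)
    separator = match.group(0) if match else ""
    keys = [key for key in re.split(r"\s*\+\s*", shortcut) if key]
    return keys, separator
```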

speech.shortcutKeys._splitSequentialShortcut(shortcut: str) Tuple[List[str], List[str]]

Splits a string representing a sequential shortcut key combination (the ones found in ribbons). @param shortcut: the shortcut to split.

It should be of the form “Alt, F, L, Y 1”, i.e. key names separated by a comma or a space.

@return: 2-tuple containing the list of keys and the list of separators used between each key.

E.g.: ([‘Alt’, ‘F’, ‘L’, ‘Y’, ‘1’], [‘, ’, ‘, ’, ‘, ’, ‘ ’])
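A sketch of this split, keeping the separators via a capturing group (illustrative, not the actual implementation):

```python
import re
from typing import List, Tuple


def splitSequentialShortcut(shortcut: str) -> Tuple[List[str], List[str]]:
    """Split e.g. "Alt, F, L, Y 1" into key names and the separators between them."""
    parts = re.split(r"(,\s*|\s+)", shortcut)  # the capturing group keeps separators
    return parts[0::2], parts[1::2]
```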

speech.speech module

High-level functions to speak information.

class speech.speech.SpeechMode(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)

Bases: DisplayStringIntEnum

off = 0
beeps = 1
talk = 2
onDemand = 3
property _displayStringLabels: dict[Self, str]

Specify a dictionary which takes members of the Enum and returns the translated display string.

class speech.speech.SpeechState(speechMode: speech.speech.SpeechMode = <SpeechMode.talk: 2>, _suppressSpeakTypedCharactersTime: Optional[float] = None)

Bases: object

beenCanceled = True
isPaused = False
speechMode: SpeechMode = 2

How speech should be handled

speechMode_beeps_ms = 15
_suppressSpeakTypedCharactersNumber = 0

The number of typed characters for which to suppress speech.

_suppressSpeakTypedCharactersTime: float | None = None

The time at which suppressed typed characters were sent.

oldTreeLevel = None
oldTableID = None
oldRowNumber = None
oldRowSpan = None
oldColumnNumber = None
oldColumnSpan = None
speech.speech.getState()
speech.speech.setSpeechMode(newMode: SpeechMode)
speech.speech.initialize()
speech.speech.BLANK_CHUNK_CHARS = frozenset({'\x00', '\n', '\r', ' ', '\xa0'})

If a chunk of text contains only these characters, it will be considered blank.

speech.speech.isBlank(text)

Determine whether text should be reported as blank. @param text: The text in question. @type text: str @return: C{True} if the text is blank, C{False} if not. @rtype: bool
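Given the documented BLANK_CHUNK_CHARS, isBlank can be sketched as follows (the frozenset matches the documented value; the one-line body is an assumption consistent with the description):

```python
BLANK_CHUNK_CHARS = frozenset({"\0", "\r", "\n", " ", "\xa0"})


def isBlank(text: str) -> bool:
    """True if the text is empty or contains only blank-chunk characters."""
    return not text or set(text) <= BLANK_CHUNK_CHARS
```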

speech.speech.processText(locale: str, text: str, symbolLevel: SymbolLevel, normalize: bool = False) str

Processes text for symbol pronunciation, speech dictionaries and Unicode normalization. :param locale: The language the given text is in, passed for symbol pronunciation. :param text: The text to process. :param symbolLevel: The verbosity level used for symbol pronunciation. :param normalize: Whether to apply Unicode normalization to the text

after it has been processed for symbol pronunciation and speech dictionaries.

Returns:

The processed text

speech.speech.cancelSpeech()

Interrupts the synthesizer if it is currently speaking.

speech.speech.pauseSpeech(switch)
speech.speech._getSpeakMessageSpeech(text: str) list[SpeechCommand | str]

Gets the speech sequence for a given message. @param text: the message to speak

speech.speech.speakMessage(text: str, priority: SpeechPriority | None = None) None

Speaks a given message. @param text: the message to speak @param priority: The speech priority.

speech.speech._getSpeakSsmlSpeech(ssml: str, markCallback: MarkCallbackT | None = None, _prefixSpeechCommand: SpeechCommand | None = None) list[SpeechCommand | str]

Gets the speech sequence for given SSML. :param ssml: The SSML data to speak. :param markCallback: An optional callback called for every mark command in the SSML. :param _prefixSpeechCommand: A SpeechCommand to prepend to the sequence.

speech.speech.speakSsml(ssml: str, markCallback: MarkCallbackT | None = None, symbolLevel: SymbolLevel | None = None, _prefixSpeechCommand: SpeechCommand | None = None, priority: SpeechPriority | None = None) None

Speaks a given speech sequence provided as SSML. :param ssml: The SSML data to speak. :param markCallback: An optional callback called for every mark command in the SSML. :param symbolLevel: The symbol verbosity level. :param _prefixSpeechCommand: A SpeechCommand to prepend to the sequence. :param priority: The speech priority.

speech.speech.getCurrentLanguage() str
speech.speech.spellTextInfo(info: TextInfo, useCharacterDescriptions: bool = False, priority: SpeechPriority | None = None) None

Spells the text from the given TextInfo, honouring any LangChangeCommand objects it finds if autoLanguageSwitching is enabled.

speech.speech.speakSpelling(text: str, locale: str | None = None, useCharacterDescriptions: bool = False, priority: SpeechPriority | None = None) None
speech.speech._getSpellingSpeechAddCharMode(seq: Generator[SpeechCommand | str, None, None]) Generator[SpeechCommand | str, None, None]

Inserts CharacterMode commands in a speech sequence generator to ensure any single character is spelled by the synthesizer. @param seq: The speech sequence to be spelt.

speech.speech._getSpellingCharAddCapNotification(speakCharAs: str, sayCapForCapitals: bool, capPitchChange: int, beepForCapitals: bool, reportNormalized: bool = False) Generator[SpeechCommand | str, None, None]

This function produces a speech sequence containing a character to be spelt as well as commands to indicate that this character is uppercase and/or normalized, if applicable. :param speakCharAs: The character as it will be spoken by the synthesizer. :param sayCapForCapitals: indicates if ‘cap’ should be reported along with the currently spelled character. :param capPitchChange: pitch offset to apply while spelling the currently spelled character. :param beepForCapitals: indicates if a cap notification beep should be produced while spelling the currently spelled character. :param reportNormalized: Indicates if ‘normalized’ should be reported along with the currently spelled character.

speech.speech._getSpellingSpeechWithoutCharMode(text: str, locale: str, useCharacterDescriptions: bool, sayCapForCapitals: bool, capPitchChange: int, beepForCapitals: bool, fallbackToCharIfNoDescription: bool = True, unicodeNormalization: bool = False, reportNormalizedForCharacterNavigation: bool = False) Generator[SpeechCommand | str, None, None]

Processes text when spoken by character. This doesn’t take care of character mode (Option “Use spelling functionality”). :param text: The text to speak.

This is usually one character or a string containing a decomposed character (or glyph).

Parameters:
  • locale – The locale used to generate character descriptions, if applicable.

  • useCharacterDescriptions – Whether or not to use character descriptions, e.g. speak “a” as “alpha”.

  • sayCapForCapitals – Indicates if ‘cap’ should be reported along with the currently spelled character.

  • capPitchChange – Pitch offset to apply while spelling the currently spelled character.

  • beepForCapitals – Indicates if a cap notification beep should be produced while spelling the currently spelled character.

  • fallbackToCharIfNoDescription – Only applies if useCharacterDescriptions is True. If fallbackToCharIfNoDescription is True, and no character description is found, the character itself will be announced. Otherwise, nothing will be spoken.

  • unicodeNormalization – Whether to use Unicode normalization for the given text.

  • reportNormalizedForCharacterNavigation – When unicodeNormalization is true, indicates if ‘normalized’ should be reported along with the currently spelled character.

Returns:

A speech sequence generator.

speech.speech.getSingleCharDescriptionDelayMS() int

@returns: 1 second, a default delay for delayed character descriptions. In the future, this should fetch its value from a user defined NVDA idle time. Blocked by: https://github.com/nvaccess/nvda/issues/13915

speech.speech.getSingleCharDescription(text: str, locale: str | None = None) Generator[SpeechCommand | str, None, None]

Returns a speech sequence: a pause, the length determined by getSingleCharDescriptionDelayMS, followed by the character description.

speech.speech.getSpellingSpeech(text: str, locale: str | None = None, useCharacterDescriptions: bool = False) Generator[SpeechCommand | str, None, None]
speech.speech.getCharDescListFromText(text, locale)

This method prepares a list containing each character of the text and its description, by checking characterDescriptions.dic of the given locale for all possible combinations of consecutive characters in the text. This is done to take care of conjunct characters present in several languages such as Hindi, Urdu, etc.
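The longest-match idea described above might be sketched as follows. All names here (getCharDescList, descriptions, maxCombo) are hypothetical stand-ins; the real function reads characterDescriptions.dic for the locale:

```python
from typing import Dict, List, Tuple


def getCharDescList(
    text: str,
    descriptions: Dict[str, List[str]],
    maxCombo: int = 3,
) -> List[Tuple[str, List[str]]]:
    """Greedily pair the longest runs of consecutive characters that have a
    description (handling conjunct characters) with their descriptions."""
    result = []
    i = 0
    while i < len(text):
        # Try the longest combination of consecutive characters first.
        for length in range(min(maxCombo, len(text) - i), 0, -1):
            chunk = text[i:i + length]
            if length == 1 or chunk in descriptions:
                # Fall back to the character itself if it has no description.
                result.append((chunk, descriptions.get(chunk, [chunk])))
                i += length
                break
    return result
```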

speech.speech.speakObjectProperties(obj: NVDAObjects.NVDAObject, reason: OutputReason = OutputReason.QUERY, _prefixSpeechCommand: SpeechCommand | None = None, priority: SpeechPriority | None = None, **allowedProperties)
speech.speech.getObjectPropertiesSpeech(obj: NVDAObjects.NVDAObject, reason: OutputReason = OutputReason.QUERY, _prefixSpeechCommand: SpeechCommand | None = None, **allowedProperties) list[SpeechCommand | str]
speech.speech._getPlaceholderSpeechIfTextEmpty(obj, reason: OutputReason) Tuple[bool, list[SpeechCommand | str]]
Attempts to get speech for the placeholder attribute if the text for ‘obj’ is empty. The placeholder value is not reported unless the text is empty, because it is confusing to hear both the current value (presumably typed by the user) and the placeholder; the placeholder should “disappear” once the user types a value.

@return: (True, SpeechSequence) if the text for obj was considered empty and we attempted to get speech for the placeholder value; (False, []) if the text for obj was not considered empty.

speech.speech.speakObject(obj, reason: OutputReason = OutputReason.QUERY, _prefixSpeechCommand: SpeechCommand | None = None, priority: SpeechPriority | None = None)
speech.speech.getObjectSpeech(obj: NVDAObjects.NVDAObject, reason: OutputReason = OutputReason.QUERY, _prefixSpeechCommand: SpeechCommand | None = None) list[SpeechCommand | str]
speech.speech._objectSpeech_calculateAllowedProps(reason: OutputReason, shouldReportTextContent: bool, objRole: Role) dict[str, bool]
speech.speech.speakText(text: str, reason: OutputReason = OutputReason.MESSAGE, symbolLevel: SymbolLevel | None = None, priority: SpeechPriority | None = None)

Speaks some text. @param text: The text to speak. @param reason: Unused @param symbolLevel: The symbol verbosity level; C{None} (default) to use the user’s configuration. @param priority: The speech priority.

speech.speech.splitTextIndentation(text)

Splits indentation from the rest of the text. @param text: The text to split. @type text: str @return: Tuple of indentation and content. @rtype: (str, str)

speech.speech.getIndentationSpeech(indentation: str, formatConfig: Dict[str, bool]) list[SpeechCommand | str]

Retrieves the indentation speech sequence for a given string of indentation. @param indentation: The string of indentation. @param formatConfig: The configuration to use.

speech.speech.speak(speechSequence: list[SpeechCommand | str], symbolLevel: SymbolLevel | None = None, priority: SpeechPriority = SpeechPriority.NORMAL)

Speaks a sequence of text and speech commands @param speechSequence: the sequence of text and L{SpeechCommand} objects to speak @param symbolLevel: The symbol verbosity level; C{None} (default) to use the user’s configuration. @param priority: The speech priority.

speech.speech.speakPreselectedText(text: str, priority: SpeechPriority | None = None)

Helper method to announce that a newly focused control already has text selected. This method is in contrast with L{speakTextSelected}. The method will speak the word “selected” with the provided text appended. The announcement order is different from L{speakTextSelected} in order to inform a user that the newly focused control has content that is selected, which they may unintentionally overwrite.

@remarks: Implemented using L{getPreselectedTextSpeech}

speech.speech.getPreselectedTextSpeech(text: str) list[SpeechCommand | str]

Helper method to get the speech sequence to announce a newly focused control already has text selected. This method will speak the word “selected” with the provided text appended. The announcement order is different from L{speakTextSelected} in order to inform a user that the newly focused control has content that is selected, which they may unintentionally overwrite.

@remarks: Implemented using L{_getSelectionMessageSpeech}, which allows for

creating a speech sequence with an arbitrary attached message.

speech.speech.speakTextSelected(text: str, priority: SpeechPriority | None = None)

Helper method to announce that the user has caused text to be selected. This method is in contrast with L{speakPreselectedText}. The method will speak the provided text with the word “selected” appended.

@remarks: Implemented using L{speakSelectionMessage}, which allows for

speaking text with an arbitrary attached message.

speech.speech.speakSelectionMessage(message: str, text: str, priority: SpeechPriority | None = None)
speech.speech._getSelectionMessageSpeech(message: str, text: str) list[SpeechCommand | str]
speech.speech.speakSelectionChange(oldInfo: TextInfo, newInfo: TextInfo, speakSelected: bool = True, speakUnselected: bool = True, generalize: bool = False, priority: SpeechPriority | None = None)

Speaks a change in selection, either selected or unselected text. @param oldInfo: a TextInfo instance representing what the selection was before @param newInfo: a TextInfo instance representing what the selection is now @param generalize: if True, then this function knows that the text may have changed between the creation of the oldInfo and newInfo objects, meaning that changes need to be spoken more generally, rather than speaking the specific text, as the bounds may be all wrong. @param priority: The speech priority.

speech.speech._suppressSpeakTypedCharacters(number: int)

Suppress speaking of typed characters. This should be used when sending a string of characters to the system and those characters should not be spoken individually as if the user were typing them. @param number: The number of characters to suppress.

speech.speech.PROTECTED_CHAR = '*'

The character to use when masking characters in protected fields.

speech.speech.FIRST_NONCONTROL_CHAR = ' '

The first character which is not a Unicode control character. This is used to test whether a character should be spoken as a typed character; i.e. it should have a visual or spatial representation.
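The documented test can be sketched as follows (the function name is illustrative; only the constant and the comparison idea come from the description above):

```python
FIRST_NONCONTROL_CHAR = " "  # U+0020: the first character after the C0 control range


def shouldSpeakAsTypedCharacter(ch: str) -> bool:
    """True if the character has a visual or spatial representation,
    i.e. it sorts at or above the first non-control character."""
    return ch >= FIRST_NONCONTROL_CHAR
```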

speech.speech.speakTypedCharacters(ch: str)
class speech.speech.SpeakTextInfoState(obj)

Bases: object

Caches the state of speakTextInfo such as the current controlField stack, current formatfield and indentation.

objRef
controlFieldStackCache
formatFieldAttributesCache
indentationCache
updateObj()
copy()
speech.speech._extendSpeechSequence_addMathForTextInfo(speechSequence: list[SpeechCommand | str], info: TextInfo, field: Field) None
speech.speech.speakTextInfo(info: TextInfo, useCache: bool | SpeakTextInfoState = True, formatConfig: Dict[str, bool] = None, unit: str | None = None, reason: OutputReason = OutputReason.QUERY, _prefixSpeechCommand: SpeechCommand | None = None, onlyInitialFields: bool = False, suppressBlanks: bool = False, priority: SpeechPriority | None = None) bool
speech.speech.getTextInfoSpeech(info: TextInfo, useCache: bool | SpeakTextInfoState = True, formatConfig: Dict[str, bool] = None, unit: str | None = None, reason: OutputReason = OutputReason.QUERY, _prefixSpeechCommand: SpeechCommand | None = None, onlyInitialFields: bool = False, suppressBlanks: bool = False) Generator[list[SpeechCommand | str], None, bool]
speech.speech._isControlEndFieldCommand(command: str | FieldCommand)
speech.speech._getTextInfoSpeech_considerSpelling(unit: TextInfo | None, onlyInitialFields: bool, textWithFields: List[str | FieldCommand], reason: OutputReason, speechSequence: list[SpeechCommand | str], language: str) Generator[list[SpeechCommand | str], None, None]
speech.speech._getTextInfoSpeech_updateCache(useCache: bool | SpeakTextInfoState, speakTextInfoState: SpeakTextInfoState, newControlFieldStack: List[ControlField], formatFieldAttributesCache: Field)
speech.speech.getPropertiesSpeech(reason: OutputReason = OutputReason.QUERY, **propertyValues) list[SpeechCommand | str]
speech.speech._rowAndColumnCountText(rowCount: int, columnCount: int) str | None
speech.speech._rowCountText(count: int) str
speech.speech._columnCountText(count: int) str
speech.speech._shouldSpeakContentFirst(reason: OutputReason, role: int, presCat: str, attrs: ControlField, tableID: Any, states: Iterable[int]) bool

Determines whether or not to speak the content before the controlField information. Helper function for getControlFieldSpeech.

speech.speech.getControlFieldSpeech(attrs: ControlField, ancestorAttrs: List[Field], fieldType: str, formatConfig: Dict[str, bool] | None = None, extraDetail: bool = False, reason: OutputReason | None = None) list[SpeechCommand | str]
speech.speech.getFormatFieldSpeech(attrs: Field, attrsCache: Field | None = None, formatConfig: Dict[str, bool] | None = None, reason: OutputReason | None = None, unit: str | None = None, extraDetail: bool = False, initialFormat: bool = False) list[SpeechCommand | str]
speech.speech.getTableInfoSpeech(tableInfo: Dict[str, Any] | None, oldTableInfo: Dict[str, Any] | None, extraDetail: bool = False) list[SpeechCommand | str]
speech.speech._manager = <speech.manager.SpeechManager object>

The singleton _SpeechManager instance used for speech functions. @type: L{manager.SpeechManager}

speech.speech.clearTypedWordBuffer() None

Forgets any word currently being built up with typed characters for speaking. This should be called when the user’s context changes such that they could no longer complete the word (such as a focus change or choosing to move the caret).

speech.speechWithoutPauses module

speech.speechWithoutPauses._yieldIfNonEmpty(seq: list[SpeechCommand | str])

Helper method to yield the sequence only if it is neither None nor empty.

class speech.speechWithoutPauses.SpeechWithoutPauses(speakFunc: Callable[[list[SpeechCommand | str]], None])

Bases: object

Parameters:

speakFunc – Function used by L{speakWithoutPauses} to speak. This will likely be speech.speak.

_pendingSpeechSequence: list[SpeechCommand | str]
re_last_pause = re.compile('^(.*(?<=[^\\s.!?])[.!?][\\"\'”’)]?(?:\\s+|$))(.*$)', re.DOTALL)
reset()
speakWithoutPauses(speechSequence: list[SpeechCommand | str] | None, detectBreaks: bool = True) bool

Speaks the speech sequences given over multiple calls, only sending to the synth at acceptable phrase or sentence boundaries, or when given None for the speech sequence. @return: C{True} if something was actually spoken, C{False} if only buffering occurred.
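The documented re_last_pause pattern is what finds those boundaries: group 1 captures everything up to and including the last complete sentence, group 2 the remainder to keep buffered. Only the regex below comes from the class; the splitAtLastPause wrapper is an illustrative assumption:

```python
import re

# The class's documented sentence-boundary pattern (re.DOTALL so the buffer
# may span multiple lines).
re_last_pause = re.compile(
    r"^(.*(?<=[^\s.!?])[.!?][\"'”’)]?(?:\s+|$))(.*$)",
    re.DOTALL,
)


def splitAtLastPause(text):
    """Return (speakNow, keepBuffered); speakNow is None if no boundary is found."""
    match = re_last_pause.match(text)
    if not match:
        return None, text
    return match.group(1), match.group(2)
```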

getSpeechWithoutPauses(speechSequence: list[SpeechCommand | str] | None, detectBreaks: bool = True) Generator[list[SpeechCommand | str], None, bool]

Generate speech sequences over multiple calls, only returning a speech sequence at acceptable phrase or sentence boundaries, or when given None for the speech sequence. @return: The speech sequence that can be spoken without pauses. The ‘return’ value of this generator function is a bool which indicates whether this sequence should be considered valid speech; use L{GeneratorWithReturn} to retain it. A generator is used because the previous implementation made several calls to speech; this approach replicates that.

_detectBreaksAndGetSpeech(speechSequence: list[SpeechCommand | str]) Generator[list[SpeechCommand | str], None, bool]
_flushPendingSpeech() list[SpeechCommand | str]

@return: may be empty sequence

_getSpeech(speechSequence: list[SpeechCommand | str]) list[SpeechCommand | str]

@return: May be an empty sequence

speech.types module

Types used by speech package. Kept here so they can be re-used without having to worry about circular imports.

speech.types._isDebugForSpeech() bool

Check if debug logging for speech is enabled.

class speech.types.GeneratorWithReturn(gen: Iterable, defaultReturnValue=None)

Bases: Iterable

Helper class, used with generator functions to access the ‘return’ value after there are no more values to iterate over.

_abc_impl = <_abc._abc_data object>
speech.types._flattenNestedSequences(nestedSequences: Iterable[list[SpeechCommand | str]] | GeneratorWithReturn) Generator[SpeechCommand | str, Any, bool | None]

Turns [[a,b,c],[d,e]] into [a,b,c,d,e]
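These two helpers work together: GeneratorWithReturn captures a generator's ‘return’ value while it is iterated, so _flattenNestedSequences can forward it. A sketch of the same idea (illustrative, not the exact NVDA code):

```python
from typing import Any, Generator, Iterable


class GeneratorWithReturn(Iterable):
    """Iterate over a generator while capturing its 'return' value."""

    def __init__(self, gen: Iterable, defaultReturnValue=None):
        self.gen = gen
        self.returnValue = defaultReturnValue

    def __iter__(self):
        # "yield from" forwards every item and captures the generator's return.
        self.returnValue = yield from self.gen


def flattenNestedSequences(nestedSequences) -> Generator[Any, Any, Any]:
    """Turns [[a, b, c], [d, e]] into a, b, c, d, e, forwarding any return value."""
    for seq in nestedSequences:
        yield from seq
    if isinstance(nestedSequences, GeneratorWithReturn):
        return nestedSequences.returnValue
```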

speech.types.logBadSequenceTypes(sequence: Iterable[SpeechCommand | str], raiseExceptionOnError=False) bool

Check if the provided sequence is valid, otherwise log an error (only if “speech” is checked in the “log categories” setting of the advanced settings panel). @param sequence: the sequence to check @param raiseExceptionOnError: if True, an exception is raised. Useful to help track down the introduction of erroneous speechSequence data.

@return: True if the sequence is valid.