Technician Handbook

Interpreter24 Operations Manual for Event Technicians

This handbook is written for technical operators, AV engineers, event production staff, and integration teams who need to deploy Interpreter24 in real event conditions. It covers the full operational flow from machine preparation to session export, with the terminology and decision points that matter in production: source acquisition, routing, output mode, session state, redundancy, receiver delivery, and fault isolation.

1. System Overview

Interpreter24 is a native macOS operator console for live speech recognition, machine translation, multilingual caption delivery, and translated audio routing. In practical deployment terms, the application acts as a control layer between one spoken source and multiple delivery targets.

Layer | Function | Operator concern
Input | Captures the selected source microphone or audio interface feed | Correct device selection, stable sample path, input level
ASR | Generates the source transcript from the incoming signal | Language choice, recognition vocabulary, network/API health
MT | Generates translations per target language | Target list, terminology control, latency
Delivery | Sends outputs either to local USB routing or cloud/mobile receivers | Physical channel map vs. receiver/session delivery model
Monitoring | Shows transcript, translation, audio meter, receiver QR, and system status | Operational confidence and fault isolation during the event

Screenshot required here: full Session Configuration view with top session ID field, Audio tab, routing summary, broadcast block, and health strip.
Use this image when introducing the control surface and naming the major operator areas.
handbook-01-session-configuration-overview.png

2. Pre-Deployment Requirements

Before opening the application on show day, confirm the full signal chain and credential layer. Interpreter24 depends on both local resources and cloud services.

2.1 Minimum technical prerequisites

  • macOS machine dedicated to translation operation
  • Stable internet connection for ASR, translation, and optionally cloud receiver delivery
  • Valid provider credentials for speech recognition and translation
  • Known source audio device and, if using local multichannel distribution, a confirmed USB output interface
  • If using cloud mode, receiver access path verified on the venue or event network

2.2 Show-time technician checklist

  • Verify operator machine power profile, sleep disablement, and network path
  • Verify source audio device appears in macOS and in Interpreter24
  • Verify USB output interface appears with expected channels if local distribution is required
  • Verify provider API keys and endpoints in Settings
  • Verify at least one fallback plan: spare machine, spare internet path, or operator procedure for reconnection
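
Several of the checks above can be scripted into a pre-show probe. The sketch below tests TCP reachability for the network-dependent items; the endpoint list is a placeholder and must be replaced with the actual ASR, MT, and server hosts configured in Settings.

```python
import socket

def reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def preflight(endpoints) -> dict:
    """Probe each (host, port) pair; every entry must be True before show time."""
    return {f"{host}:{port}": reachable(host, port) for host, port in endpoints}

# Example (placeholder hosts, not real Interpreter24 endpoints):
# preflight([("asr.example.com", 443), ("mt.example.com", 443)])
```

Run this on the operator machine and on the spare, over both the primary and the fallback internet path.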

3. Installation and First Launch

On first launch, the operator should not go directly to Live Console. The correct sequence is Settings, Session Configuration, then Live Console. This ensures the runtime configuration, terminology, output mode, and receiver behavior are all deterministic before the first live sentence is processed.

  1. Launch the app. Confirm the app opens to the main shell and that the navigation loads without license or credential errors.
  2. Go to Settings. Load or verify service credentials, endpoints, and local app parameters before building sessions.
  3. Build the session. Create the source-target delivery model and save it before opening live translation.

Screenshot required here: Home or main shell view showing the navigation and the Session Configuration / Live Console entries.
Shows the operator where to start and how the app sections are separated.
handbook-02-app-shell-navigation.png

4. Settings and Credentials

The Settings area is the credential and environment control plane. Treat it as the place where the machine is bound to the correct vendor APIs and deployment mode.

4.1 What must be verified

  • Speech recognition provider credentials
  • Translation provider credentials
  • Text-to-speech provider if voice output is required
  • Server URL if cloud session storage or receiver functionality is used
  • License status for the local machine

Operational note: if Settings are incomplete, the app may still open, but live execution will fail at the point where the missing backend is actually called. For production, validate the full path before audience ingress.
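
That failure mode suggests a simple guard before going live: check that every setting the configured mode depends on is actually present. The key names below are illustrative placeholders, not Interpreter24's actual setting identifiers.

```python
def missing_settings(settings: dict, voice_output: bool = False, cloud: bool = False):
    """Return the required settings that are absent or empty.

    Recognition and translation credentials are always mandatory; TTS and the
    server URL become mandatory only when voice output or cloud delivery is used.
    """
    required = ["asr_credentials", "mt_credentials"]
    if voice_output:
        required.append("tts_credentials")
    if cloud:
        required.append("server_url")
    return [key for key in required if not settings.get(key)]
```

An empty result means the configured mode has everything it needs; anything else should block the session from going live.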
Screenshot required here: Settings view showing credential sections, save buttons, and license block.
Use this screenshot to annotate which parameters are mandatory for recognition, translation, TTS, and cloud behavior.
handbook-03-settings-credentials.png

5. Building a Session

A session is the saved operational definition of one event instance. It binds together the source language set, target language set, input/output routing, delivery mode, receiver behavior, and descriptive metadata.

5.1 Session creation workflow

  1. Assign a session ID that will remain stable for the event lifecycle.
  2. Select one or more source languages under Speaking.
  3. Select one or more target languages under Translations.
  4. Choose delivery mode: USB for local channel routing or Cloud for mobile receivers.
  5. Choose service mode: captions only, voice only, or captions and voice where the output mode permits it.
  6. Save the session before entering live operation.
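
The six steps above amount to building and validating one session record. A minimal sketch of that validation, assuming hypothetical field names (Interpreter24's stored schema is not documented here):

```python
def validate_session(session: dict):
    """Collect the reasons a session definition is not ready to save."""
    errors = []
    if not session.get("session_id"):
        errors.append("session ID is required and must stay stable for the event")
    if not session.get("source_languages"):
        errors.append("at least one source language must be selected")
    if not session.get("target_languages"):
        errors.append("at least one target language must be selected")
    if session.get("delivery_mode") not in ("usb", "cloud"):
        errors.append("delivery mode must be USB or Cloud")
    if session.get("service_mode") not in ("captions", "voice", "captions+voice"):
        errors.append("service mode must be captions, voice, or captions and voice")
    return errors
```

Anything returned here should stop the operator before entering live operation, not after the first live sentence.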
Screenshot required here: Session creation wizard first page with session ID and description.
Shows the first stage of session creation and the required operator metadata.
handbook-04-create-session-details.png
Screenshot required here: Session creation wizard language selection page with source and target strips populated.
Shows how to configure one or more source languages and one or more target outputs.
handbook-05-create-session-languages.png
Screenshot required here: Session creation wizard broadcast setup page with USB/Cloud selection and service selection.
Use this image to explain the decision point between local distribution and mobile/cloud delivery.
handbook-06-create-session-broadcast-setup.png
Screenshot required here: Session Configuration Audio tab after a valid session is loaded, with languages, routing summary, and broadcast mode visible.
Shows the saved session in its operational state.
handbook-07-session-audio-tab-loaded.png

6. Recognition and Glossary

The Customisation area has two distinct purposes:

  • Word recognition improves the upstream ASR path by injecting difficult terms and optional pronunciation hints.
  • Glossary constrains or stabilizes translation output across target languages.

6.1 Word recognition usage

Add brand names, speaker names, products, acronyms, venue-specific terminology, or any low-frequency terms likely to be misrecognized. Use the Sounds like column to provide phonetic hints that steer recognition toward the intended term.

6.2 Glossary usage

Use the glossary when terminology must remain controlled across languages, for example sponsor names, role titles, regulated product names, or event branding phrases.
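
Conceptually, a glossary acts as a term-locking pass over translation output. Interpreter24's internal mechanism is not specified here; the sketch below only illustrates the idea, with longer entries applied first so a short entry cannot pre-empt a longer entry that contains it.

```python
def apply_glossary(text: str, glossary: dict) -> str:
    """Force locked terms to their approved renderings in MT output."""
    # Apply longer source terms first so that, e.g., a lock on "Cloud"
    # does not fire before a lock on "Acme Cloud" has a chance to match.
    for source_term in sorted(glossary, key=len, reverse=True):
        text = text.replace(source_term, glossary[source_term])
    return text
```

In practice this is exactly the class of term (sponsor names, role titles, branding phrases) that should never vary between target languages.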

Screenshot required here: Customisation tab with Word recognition table visible and both columns readable.
Use this image to explain ASR vocabulary injection and the purpose of the Sounds like column.
handbook-08-customisation-word-recognition.png
Screenshot required here: Customisation tab with Glossary section populated for at least two languages.
Use this image to explain multi-language terminology locking for MT output.
handbook-09-customisation-glossary.png

7. Live Console Operations

The Live Console is the run-time monitoring and control surface. Once a saved session is loaded, the operator uses Live Console to observe transcription, translation activity, audio metering, receiver status, and current session state.

7.1 What the operator should monitor continuously

  • Source transcript continuity
  • Translated output continuity
  • Per-language activity indicators
  • Audio floor meter for source presence
  • Receiver QR visibility when cloud delivery is active

Do not start blind. If the live transcript panel is empty, the entire downstream translation chain will also be empty. Always isolate faults from left to right: source input, ASR, MT, then delivery.
Screenshot required here: Live Console with transcript panel, translation panel, audio monitoring meter, start button, and language buttons all visible.
Main operations screenshot for live run mode.
handbook-10-live-console-overview.png
Screenshot required here: Live Console while running, with active transcript, translated text, and green per-language status indicators.
Use this screenshot to show a healthy live state.
handbook-11-live-console-running-state.png

8. USB and Cloud Delivery

8.1 USB mode

USB mode is intended for local distribution workflows where translated language outputs are mapped to a multichannel interface and fed to physical downstream infrastructure such as RF transmitter chains, Bosch-style distribution, recording systems, or venue routing.

8.2 Cloud mode

Cloud mode is intended for participant delivery to mobile devices via receiver URL and QR onboarding. In this mode the operator should validate both receiver access and session-specific content before audience exposure.

Mode | Best for | Technician focus
USB | Onsite routing to physical hardware | Output device selection, channel mapping, local signal test
Cloud | Participant BYOD delivery | Receiver URL, QR onboarding, network reachability, mobile experience

Screenshot required here: Session Audio tab in USB mode with channel cards and play test buttons visible.
Use this image to explain channel-level verification in local distribution mode.
handbook-12-session-usb-routing.png
Screenshot required here: Session Audio tab in Cloud mode with QR visible and receiver URL shown.
Use this image to explain participant onboarding and cloud delivery verification.
handbook-13-session-cloud-qr.png

9. Receiver Branding and User Info

The User's App tab lets the operator control what the participant sees. This includes logo branding and the extra information panel shown inside the receiver. That content supports basic HTML formatting and external links, but it is sanitized on the receiver side.

9.1 Operational use cases

  • Show sponsor or event logo
  • Provide listening instructions
  • Publish room-level operational notes or support contact
  • Add agenda, speaker list, or compliance note
Screenshot required here: User's App tab with logo area and rich-text Extra Info editor visible.
Use this image to explain receiver-side branding and information publishing.
handbook-14-user-app-branding-and-info.png
Screenshot required here: Mobile receiver screen showing the rendered extra info panel and logo.
Use this screenshot to connect operator-side setup with participant-side result.
handbook-15-receiver-extra-info-rendered.png

10. Transcript Export

Interpreter24 stores segment events in its database and exports them on demand. The Recording tab is the operator-facing export point.

10.1 CSV export

CSV export produces one row per stored segment event, including timing, source text, and translation columns by language.
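
Downstream tooling can consume that CSV directly. The sketch below parses an export and rebuilds a single-language transcript; the column names in the example are assumptions, since the actual header layout depends on the session's configured languages.

```python
import csv
import io

def load_segments(csv_text: str):
    """Parse an exported transcript CSV into one dict per stored segment event."""
    return list(csv.DictReader(io.StringIO(csv_text)))

def transcript_for(segments, language_column: str) -> str:
    """Join one language column into a plain transcript string."""
    return " ".join(seg[language_column] for seg in segments if seg.get(language_column))
```

This is the engineering-data path; for a clean per-language deliverable, the Word export described below is usually the better choice.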

10.2 Word export

Word export is language-specific and produces plain transcript output for the selected language only. Use this when you need a clean transcript deliverable rather than an engineering data export.

Screenshot required here: Recording tab with Export transcript button visible.
Use this image to introduce transcript extraction from stored segment events.
handbook-16-recording-tab-export-button.png
Screenshot required here: Export transcript contextual menu opened, showing CSV and Word entries plus the Word language submenu.
Use this image to explain the difference between engineering export and per-language deliverable export.
handbook-17-export-transcript-menu.png

11. Troubleshooting Workflow

Troubleshooting should always follow the signal path. Do not troubleshoot cloud delivery first if the source transcript is already absent.

Symptom | Likely layer | Operator action
No transcript appears | Input or ASR | Check source device, input level, language selection, and ASR credentials
Transcript appears, no translation appears | MT | Check target languages, MT credentials, glossary/term payload, and internet path
Translation visible, no audience audio | TTS or delivery | Check TTS provider, output mode, output device, and channel routing
Cloud QR visible but receiver has no result | Receiver path or caption delivery | Check room/session binding, network access, and cloud output mode
Specific names consistently wrong | Recognition vocabulary | Update Word recognition list and add Sounds like hints

Fast fault-isolation order:
  1. Confirm source audio is present
  2. Confirm transcript updates
  3. Confirm translated text updates
  4. Confirm meter and output behavior
  5. Confirm receiver or USB destination path
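
The left-to-right isolation order maps directly onto a decision chain. A sketch of that logic, taking the five observations as booleans:

```python
def diagnose(source_audio: bool, transcript: bool, translation: bool,
             output: bool, destination: bool) -> str:
    """Walk the signal path left to right and name the first failing layer."""
    if not source_audio:
        return "Input: check source device, cabling, and input level"
    if not transcript:
        return "ASR: check language selection, vocabulary, and ASR credentials"
    if not translation:
        return "MT: check target languages, MT credentials, and internet path"
    if not output:
        return "TTS/delivery: check provider, output mode, and channel routing"
    if not destination:
        return "Destination: check receiver path or USB channel map"
    return "Healthy: all layers passing"
```

The point of the ordering is that each later check is meaningless until the earlier one passes, which is exactly why cloud delivery should never be the first thing debugged.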
Screenshot required here: Live Console in a fault condition, for example transcript active but translation missing, or translation active but delivery not healthy.
Useful for the troubleshooting section because it gives technicians a real-world reference for partial-failure states.
handbook-18-live-console-fault-example.png