Voice
The implementation of the Immerse voice solution varies depending on your app's platform (e.g. WebGL vs. Android).
For WebGL apps, the voice solution is managed entirely within the browser, requiring no additional setup by the developer.
When launching a WebGL app in the browser, users will be prompted with an audio check before the app is presented to them. During this check, they can select their input device and ensure they can be heard and hear other users.
If you wish to skip this audio check during development, append the parameter skipAudioCheck=true to the URL.
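For example, assuming your app is hosted at a hypothetical URL such as the one below, the parameter is appended as an ordinary query-string parameter:

```
https://example.com/my-immerse-app?skipAudioCheck=true
```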
For Android clients, the voice functionality is managed by a plugin that must be included in the build. This plugin is part of the Voice module in the Immerse SDK but requires configuration in your project’s Gradle templates.
The connection to the voice service is made automatically as soon as the network connection has been successfully established. To configure the Gradle templates:
- Open the Unity Project Settings: Edit > Project Settings…
- Select the Player section and switch to the Android build target.
- Scroll down and expand Publishing Settings.
- Under the Build header, enable the ‘Custom Main Gradle Template’ and ‘Custom Gradle Properties Template’ options. The following files will be added to your project:
Assets\Plugins\Android\mainTemplate.gradle
Assets\Plugins\Android\gradleTemplate.properties
- Open the mainTemplate.gradle file (any text editor is fine).
- In the dependencies section towards the top of the file, add the following line (see the sketch after this list for context):
implementation 'com.opentok.android:opentok-android-sdk:2.22.2'
- Save and close the file.
- Open the gradleTemplate.properties file (any text editor is fine).
- Add the following line before the **ADDITIONAL PROPERTIES** line (see the sketch after this list for context):
android.useAndroidX=true
- Save and close the file.
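For reference, after adding the dependency the dependencies block of mainTemplate.gradle should look roughly like the sketch below. The surrounding lines are only illustrative; the exact contents of the template (including the placeholders Unity expands at build time) depend on your Unity version and other installed plugins.

```groovy
dependencies {
    implementation fileTree(dir: 'libs', include: ['*.jar'])
    // OpenTok dependency required by the Immerse SDK Voice plugin.
    implementation 'com.opentok.android:opentok-android-sdk:2.22.2'
**DEPS**}
```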
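Similarly, a sketch of the edited gradleTemplate.properties is shown below, presumably needed because the voice plugin relies on AndroidX libraries. The other properties shown here are illustrative and may differ in your project.

```properties
org.gradle.jvmargs=-Xmx**JVM_HEAP_SIZE**M
org.gradle.parallel=true
# Line added in the step above.
android.useAndroidX=true
**ADDITIONAL PROPERTIES**
```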