add C# code tabs to pages under tutorials/audio

This commit is contained in:
Paul Joannon
2021-03-04 11:31:03 +01:00
parent 1a1f4c5b76
commit 7e27d6a0fa
2 changed files with 105 additions and 0 deletions

View File

@@ -38,6 +38,20 @@ An ``AudioStreamPlayer`` named ``AudioStreamRecord`` is used for recording.
# as an "AudioEffectRecord" resource.
effect = AudioServer.get_bus_effect(idx, 0)
.. code-tab:: csharp
    private AudioEffectRecord _effect;
    private AudioStreamSample _recording;

    public override void _Ready()
    {
        // We get the index of the "Record" bus.
        int idx = AudioServer.GetBusIndex("Record");
        // And use it to retrieve its first effect, which has been defined
        // as an "AudioEffectRecord" resource.
        _effect = (AudioEffectRecord)AudioServer.GetBusEffect(idx, 0);
    }
The audio recording is handled by the :ref:`class_AudioEffectRecord` resource,
which has three methods:
:ref:`get_recording() <class_AudioEffectRecord_method_get_recording>`,
@@ -62,6 +76,29 @@ and :ref:`set_recording_active() <class_AudioEffectRecord_method_set_recording_a
$RecordButton.text = "Stop"
$Status.text = "Recording..."
.. code-tab:: csharp
    public void OnRecordButtonPressed()
    {
        if (_effect.IsRecordingActive())
        {
            _recording = _effect.GetRecording();
            GetNode<Button>("PlayButton").Disabled = false;
            GetNode<Button>("SaveButton").Disabled = false;
            _effect.SetRecordingActive(false);
            GetNode<Button>("RecordButton").Text = "Record";
            GetNode<Label>("Status").Text = "";
        }
        else
        {
            GetNode<Button>("PlayButton").Disabled = true;
            GetNode<Button>("SaveButton").Disabled = true;
            _effect.SetRecordingActive(true);
            GetNode<Button>("RecordButton").Text = "Stop";
            GetNode<Label>("Status").Text = "Recording...";
        }
    }
At the start of the demo, the recording effect is not active. When the user
presses the ``RecordButton``, the effect is enabled with
``set_recording_active(true)``.
@@ -84,6 +121,22 @@ the recorded stream can be stored into the ``recording`` variable by calling
$AudioStreamPlayer.stream = recording
$AudioStreamPlayer.play()
.. code-tab:: csharp
    public void OnPlayButtonPressed()
    {
        GD.Print(_recording);
        GD.Print(_recording.Format);
        GD.Print(_recording.MixRate);
        GD.Print(_recording.Stereo);
        byte[] data = _recording.Data;
        GD.Print(data);
        GD.Print(data.Length);
        var audioStreamPlayer = GetNode<AudioStreamPlayer>("AudioStreamPlayer");
        audioStreamPlayer.Stream = _recording;
        audioStreamPlayer.Play();
    }
To play back the recording, you assign the recording as the stream of the
``AudioStreamPlayer`` and call ``play()``.
@@ -95,5 +148,14 @@ To playback the recording, you assign the recording as the stream of the
recording.save_to_wav(save_path)
$Status.text = "Saved WAV file to: %s\n(%s)" % [save_path, ProjectSettings.globalize_path(save_path)]
.. code-tab:: csharp
    public void OnSaveButtonPressed()
    {
        string savePath = GetNode<LineEdit>("SaveButton/Filename").Text;
        _recording.SaveToWav(savePath);
        GetNode<Label>("Status").Text = string.Format("Saved WAV file to: {0}\n({1})", savePath, ProjectSettings.GlobalizePath(savePath));
    }
To save the recording, you call ``save_to_wav()`` with the path to a file.
In this demo, the path is defined by the user via a ``LineEdit`` input box.

View File

@@ -56,6 +56,25 @@ Add these two and it's possible to guess almost exactly when sound or music will
time = max(0, time)
print("Time is: ", time)
.. code-tab:: csharp
    private double _timeBegin;
    private double _timeDelay;

    public override void _Ready()
    {
        _timeBegin = OS.GetTicksUsec();
        _timeDelay = AudioServer.GetTimeToNextMix() + AudioServer.GetOutputLatency();
        GetNode<AudioStreamPlayer>("Player").Play();
    }

    public override void _Process(float _delta)
    {
        double time = (OS.GetTicksUsec() - _timeBegin) / 1000000.0d;
        time = Math.Max(0.0d, time - _timeDelay);
        GD.Print(string.Format("Time is: {0}", time));
    }
In the long run, though, as the sound hardware clock is never exactly in sync with the system clock, the timing information will slowly drift away.
@@ -76,6 +95,11 @@ Adding the return value from this function to *get_playback_position()* increase
var time = $Player.get_playback_position() + AudioServer.get_time_since_last_mix()
.. code-tab:: csharp
    double time = GetNode<AudioStreamPlayer>("Player").GetPlaybackPosition() + AudioServer.GetTimeSinceLastMix();
To increase precision, subtract the latency information (how long it takes for the audio to be heard after it was mixed):
.. tabs::
@@ -83,6 +107,10 @@ To increase precision, subtract the latency information (how much it takes for t
var time = $Player.get_playback_position() + AudioServer.get_time_since_last_mix() - AudioServer.get_output_latency()
.. code-tab:: csharp
    double time = GetNode<AudioStreamPlayer>("Player").GetPlaybackPosition() + AudioServer.GetTimeSinceLastMix() - AudioServer.GetOutputLatency();
The result may be a bit jittery due to how multiple threads work. Just check that the value is not less than it was in the previous frame (discard it if so). This is also a less precise approach than the previous one, but it will work for songs of any length, and for synchronizing anything (sound effects, for example) to the music.
Here is the same code as before using this approach:
@@ -100,3 +128,18 @@ Here is the same code as before using this approach:
# Compensate for output latency.
time -= AudioServer.get_output_latency()
print("Time is: ", time)
.. code-tab:: csharp
    public override void _Ready()
    {
        GetNode<AudioStreamPlayer>("Player").Play();
    }

    public override void _Process(float _delta)
    {
        double time = GetNode<AudioStreamPlayer>("Player").GetPlaybackPosition() + AudioServer.GetTimeSinceLastMix();
        // Compensate for output latency.
        time -= AudioServer.GetOutputLatency();
        GD.Print(string.Format("Time is: {0}", time));
    }
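
The prose above recommends discarding values that would move backwards compared to the previous frame, but the snippet does not implement that check. A minimal C# sketch of the monotonic filter, assuming an extra ``_lastTime`` field (the field name is an illustration, not part of the original commit):

.. code-tab:: csharp

    // Assumption for illustration: a field tracking the last reported time.
    private double _lastTime;

    public override void _Process(float _delta)
    {
        double time = GetNode<AudioStreamPlayer>("Player").GetPlaybackPosition() + AudioServer.GetTimeSinceLastMix();
        // Compensate for output latency.
        time -= AudioServer.GetOutputLatency();
        // Discard values smaller than the previous frame's, so the
        // reported time never jumps backwards due to thread jitter.
        if (time < _lastTime)
        {
            time = _lastTime;
        }
        _lastTime = time;
        GD.Print(string.Format("Time is: {0}", time));
    }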