# Audio Node

The audio node brings voice, sound, and sonic atmosphere into your FLORA canvas. It closes the loop between what you see and what you hear, letting you generate voiceovers, sound effects, and transcriptions alongside the image and video work already happening on your canvas. No jumping out to a separate tool, no booking scratch VO, no silent concepts. Audio nodes turn sound into another first-class node you can pipe into the rest of your workflow.


Here is a quick introduction to getting started with audio nodes:

{% embed url="https://youtu.be/sP1yhgYfOnw?si=pGA3quOEqMjRreTN" %}

## Capabilities

* Text-to-speech. Type a script, choose a voice, and get a voiceover back. Play it on the canvas or export it as MP3 or WAV.
* Text-to-SFX. Describe a sound effect in plain language and generate it in place.
* Text-to-Music. Describe a sonic environment in plain language and receive a matching track.
* Audio-to-text. Transcribe an audio clip into a text node for editing, captioning, or downstream prompting.
* Lipsync. Pair an audio node with a video node to drive a lipsynced performance.
* FAUNA-aware. FAUNA can create and chain audio nodes for you as part of a multi-step workflow.

## Models

Visit our [Audio Models](/models/audio-models.md) section to learn about the audio models and capabilities available in the Audio Node.


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available in this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://docs.flora.ai/nodes/audio-node.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.

Use this mechanism when the answer is not explicitly present in the current page, you need clarification or additional context, or you want to retrieve related documentation sections.
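As a minimal sketch of building such a request URL, the helper below appends a URL-encoded `ask` parameter to a page URL. The function name and the sample question are illustrative, not part of the documented API; only the URL pattern above is given by this page.

```python
from urllib.parse import urlencode

def build_ask_url(page_url: str, question: str) -> str:
    # Append the `ask` query parameter, URL-encoding the question
    # so spaces and punctuation survive transport.
    return f"{page_url}?{urlencode({'ask': question})}"

url = build_ask_url(
    "https://docs.flora.ai/nodes/audio-node.md",
    "Which audio models support text-to-SFX?",
)
print(url)
```

Issuing a plain HTTP GET against the resulting URL (with `curl`, `fetch`, or any HTTP client) returns the answer along with relevant excerpts and sources.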
