Scene Overview
This section gives a brief overview of the components within an Audectra Scene.
```mermaid
flowchart LR
    A[Scene] --> B[Visualization A]
    A --> C[Visualization B]
    B --> E[State 0]
    B --> F[State 1]
    C --> G[State 0]
```
Visualizations
Each scene consists of at least one visualization.
Visualization States
Each visualization consists of at least one visualization state. Each state defines an independent way of rendering the visualization.
Although a visualization can have multiple states, only one state can be active at any given time.
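The scene, visualization, and state hierarchy described above can be sketched as a simple data model. This is an illustrative sketch only; the class and property names are assumptions, not Audectra's actual API.

```python
# Hypothetical model of the scene hierarchy; names are illustrative.
from dataclasses import dataclass, field
from typing import List

@dataclass
class VisualizationState:
    name: str

@dataclass
class Visualization:
    # Each visualization has at least one state.
    states: List[VisualizationState]
    # Only one state is active at any given time.
    active_index: int = 0

    @property
    def active_state(self) -> VisualizationState:
        return self.states[self.active_index]

@dataclass
class Scene:
    # Each scene consists of at least one visualization.
    visualizations: List[Visualization]

scene = Scene(visualizations=[
    Visualization(states=[VisualizationState("State 0"),
                          VisualizationState("State 1")]),
    Visualization(states=[VisualizationState("State 0")]),
])
print(scene.visualizations[0].active_state.name)  # State 0
```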
Visualization State Layers
A visualization state defines how its parent visualization is rendered when that state is active. Each state is built from one or more layers, which can be blended on top of each other to create unique visualizations.
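To make the blending idea concrete, here is a minimal sketch that additively blends stacked layers, assuming each layer renders to a single RGB color. Audectra's real layer system and its available blend modes are not documented here; this only illustrates the principle.

```python
# Illustrative additive blend of stacked layers (bottom-to-top).
# Each layer is assumed to produce one RGB tuple in the 0..255 range.
def blend_layers(layers):
    """Blend layer colors by clamped channel-wise addition."""
    r = g = b = 0
    for lr, lg, lb in layers:
        r = min(r + lr, 255)
        g = min(g + lg, 255)
        b = min(b + lb, 255)
    return (r, g, b)

print(blend_layers([(200, 0, 0), (0, 100, 0), (100, 0, 50)]))  # (255, 100, 50)
```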
Timeline
Whereas the visualization states and their layers define how scenes are rendered, it is the timeline's responsibility to define which visualization state is rendered at any given time.
The time unit is beats, synchronized to the beats detected by the live audio processor. All visualizations within the scene are synchronized via this timeline.
Visualization Tracks
Each visualization within the scene has a matching track in the timeline. Within each track, the position of visualization state blocks determines which state is rendered at each point in the timeline.
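A track can be thought of as a list of blocks on the beat axis, each covering a beat range and naming a state. The lookup below is a hedged sketch under that assumption; the tuple layout and function name are not Audectra's API.

```python
# Illustrative track lookup: which state block covers a given beat position?
def state_at(track, beat):
    """Return the state of the block covering `beat`, or None in a gap."""
    for start_beat, end_beat, state in track:
        if start_beat <= beat < end_beat:
            return state
    return None

# A track with two blocks: State 0 for beats 0-8, State 1 for beats 8-16.
track = [(0, 8, "State 0"), (8, 16, "State 1")]
print(state_at(track, 3))   # State 0
print(state_at(track, 12))  # State 1
```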
Timeline Modifiers
While beat synchronization determines the flow of the timeline, the timeline can also be controlled by placing modifiers in it, which affect all tracks simultaneously. One example is the stop modifier, which halts the timeline at the modifier's position until a configurable condition is satisfied.
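The stop modifier's behavior can be sketched as follows: the playhead advances one beat per detected beat, but is held at the modifier's position while its condition is unsatisfied. The function and parameter names here are assumptions for illustration.

```python
# Hedged sketch of a stop modifier holding the timeline playhead.
def advance(position, stop_at, condition_met):
    """Advance the playhead by one beat unless held by a stop modifier."""
    if position == stop_at and not condition_met():
        return position  # halted until the configurable condition is satisfied
    return position + 1

print(advance(4, stop_at=4, condition_met=lambda: False))  # 4 (held)
print(advance(4, stop_at=4, condition_met=lambda: True))   # 5 (released)
```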
Binding Node Networks
Many layer settings can be bound to node networks, opening endless possibilities for adding custom dynamic behavior to visualizations. For example, you could bind some of your effect layer settings to live-detected audio features.
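As an illustration of such a binding, the sketch below maps a live audio feature to a layer's brightness setting through a tiny scale-and-clamp network. All names and the 0..1 value range are assumptions; they do not reflect Audectra's actual binding interface.

```python
# Illustrative binding: audio feature (0..1) -> scaled, clamped brightness.
def make_binding(scale):
    """Return a network mapping a feature value to a brightness in 0..1."""
    def network(feature_value):
        return min(max(feature_value * scale, 0.0), 1.0)
    return network

brightness_binding = make_binding(scale=1.5)
print(brightness_binding(0.5))  # 0.75
print(brightness_binding(0.9))  # 1.0 (clamped)
```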
There are two different types of nodes.
Network Input/Output Nodes
These nodes are the inputs and outputs of a node network. They are defined by the network itself and cannot be removed.
Signal Nodes
Signal nodes process or manipulate an incoming signal, generate an output signal, and/or publish or consume triggers. Their throughput ranges from outputting a single configured constant to processing audio features in real time.
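The two extremes mentioned above can be sketched as a constant node and a node that processes an incoming signal. The class names and the smoothing example are illustrative assumptions, not Audectra's node set.

```python
# Illustrative signal nodes: a constant source and a signal processor.
class ConstantNode:
    """Lowest throughput: always outputs a single configured constant."""
    def __init__(self, value):
        self.value = value

    def output(self, _sample=None):
        return self.value

class SmoothingNode:
    """Processes an incoming signal by exponential smoothing,
    e.g. to tame a jittery live audio feature."""
    def __init__(self, alpha):
        self.alpha = alpha
        self.state = 0.0

    def output(self, sample):
        self.state += self.alpha * (sample - self.state)
        return self.state

const = ConstantNode(0.5)
print(const.output())  # 0.5

smooth = SmoothingNode(alpha=0.5)
print(smooth.output(1.0))  # 0.5
print(smooth.output(1.0))  # 0.75
```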