
Constructible and connect()able AudioParams #367

Closed
ugur-zongur opened this issue Oct 13, 2014 · 43 comments

@ugur-zongur

Binding AudioParam of another AudioNode to an AudioWorker

As far as I understand, it is impossible to change a native AudioNode's parameters in real time. Some use cases I can think of:

  1. Changing parameters of AudioNodes with respect to audio input/output signals (e.g. side-chaining)
  2. Changing parameters of AudioNodes with respect to some internal real-time algorithm.
  3. Changing parameters of AudioNodes with respect to MIDI messages at minimum latency possible.
    (also resolving the issue "MIDI API should be available from Workers?" web-midi-api#99)

A solution can be achieved by making AudioParams bindable to AudioWorkers.

Possible WebIDL can be something like:

interface AudioWorkerNode : AudioNode {
    ...
    void  bindParameter(DOMString name, AudioParam parameter);
    ...
};

Example main file javascript:

envelope = context.createGain();
worker.bindParameter('envelopeGain', envelope.gain);

Example worker code:

onaudioprocess = function (e) {
  e.parameters.envelopeGain.value = calculateValue();
}

Since AudioWorkers are processed sequentially in the audio processing graph, I think changing an AudioParam in an AudioWorker's context won't be a problem in terms of concurrency.
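The proposed binding could behave roughly like the following plain-JS simulation (all names here - `bindParameter`-style bound params, `envelopeGain`, the `parameters` bag - are the hypothetical ones from the proposal above, not real API):

```javascript
// Non-normative simulation of the proposed binding: each render quantum,
// the "worker" callback may overwrite a bound AudioParam before the block
// is rendered. The names mirror the hypothetical proposal above.
function renderBlocks(numBlocks, boundParams, onaudioprocess) {
  const valuesUsed = [];
  for (let i = 0; i < numBlocks; i++) {
    onaudioprocess({ parameters: boundParams }); // worker-side update
    valuesUsed.push(boundParams.envelopeGain.value); // value seen by this block
  }
  return valuesUsed;
}

const params = { envelopeGain: { value: 1.0 } };
let block = 0;
const valuesUsed = renderBlocks(4, params, (e) => {
  // e.g. a linear fade computed in "real time" by the worker
  e.parameters.envelopeGain.value = 1.0 - 0.25 * block++;
});
// valuesUsed: [1, 0.75, 0.5, 0.25]
```

Because the callback runs once per block before that block is rendered, no cross-thread locking is needed, which is the concurrency point made above.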

@cwilso cwilso changed the title Binding AudioParam of another AudioNode to an AudioWorker Connecting AudioParam of one AudioNode to another Node's AudioParam Oct 13, 2014
@cwilso
Contributor

cwilso commented Oct 13, 2014

I don't think you want binding here. You just want .connect() on AudioParam, to enable it to drive another AudioParam (or be a source, which would solve the "I need a DC offset" problem).

@sebpiq

sebpiq commented Oct 13, 2014

Actually ... is there a good reason to have this complicated behaviour in AudioParam in the first place? Why aren't parameters just "slots" to which any AudioNode can be connected, with the automation functionality currently handled by AudioParam provided by a first-class AudioNode called, for example, RampNode? That would make the API simpler and more consistent.

@ugur-zongur
Author

In fact I needed this behavior while trying to design a Pure Data clone for the Web Audio API. Control-rate messages of Pure Data can be implemented in an AudioWorker without any problem. But when these messages need to change a DSP object parameter (e.g. line~), I have no option other than message passing through the UI thread, which has no real-time guarantees. Automation is not helpful, because the control-rate behavior is calculated in real time and is not known beforehand (because of the possibility of data flow in the audio-to-control-rate direction). There needs to be some kind of inter-node communication for this use case.

This was the 2nd use case in my original post. I think the 3rd case is also crucial. Think about a web audio synth that is controlled through MIDI. If a user wants to use this software in a live performance, there should be the minimum latency possible, so the UI thread shouldn't be involved there either. Even if MIDI message processing is made available to an AudioWorker, the MIDI data is stuck in that AudioWorker and cannot be passed to other AudioNodes, since there is no realtime inter-node communication (especially with native nodes). I don't know if I'm missing something here, but as far as I can see, in both cases native AudioNodes cannot be used effectively, due to the fact that AudioNode parameters cannot be changed in realtime.

@sebpiq

sebpiq commented Oct 14, 2014

If by "pure data clone" you mean a full-blown JavaScript clone of Pd, you should check this out: https://github.com/sebpiq/WebPd. It's far from perfect, but there is already a lot of work done.

@ugur-zongur
Author

Yes, I know that one, but I preferred Emscripten for performance :) so the codebase is actually C++. Besides, WebPd couldn't use native AudioNodes; its author had some complaints in the past (http://lists.w3.org/Archives/Public/public-audio/2013OctDec/0073.html). Thanks for sharing anyway.

@sebpiq

sebpiq commented Oct 14, 2014

Hahaha ... yeah I know ;)

If you wanna use emscripten, you'll have everything running in one AudioWorker right? So 1) and 2) shouldn't be a problem.

@ugur-zongur
Author

:). Yes, all the control-rate computation will be in one node. But for the DSP part I want to exploit native nodes as much as I can. So for the DSP graph I have a one-to-one mapping to the Web Audio graph in mind; I want to use native AudioNodes where available. But this results in the problem I stated: I need a mechanism to pass the values calculated in the control-rate node within the same batch. So I still think cases 1) and 2) are a problem.

@sebpiq

sebpiq commented Oct 14, 2014

Well ... this is exactly what I had in mind for WebPd, but it is very hard to achieve, for the reasons listed in the discussion you linked above (which AudioWorker partially addresses) ...

Also, there are very few native nodes that can be used to reimplement Pd objects. Even the AudioBufferSourceNode cannot be used for implementing a simple tabread~ ... and let's not talk about the mess that event scheduling would be.

Trust me, it is really not worth it. You will write a lot of ugly code to glue it all together, trying to reuse just a very small subset of the native functionality of the Web Audio API (I've been there : ), and maybe even create a performance overhead if you have a lot of separate AudioWorkerNodes, as opposed to all the DSP running in a single AudioWorkerNode.

This said, if you really want to try, I'd be very curious to see what you come up with. But I am pretty sure that waa native functionality won't be very useful for you...

@ugur-zongur
Author

@cwilso I think it should be a two-way update mechanism, so that the code below is valid.

onaudioprocess = function (e) {
  e.parameters.envelopeGain.value = e.parameters.envelopeGain.value * 0.5;
}

Because there are cases (e.g. the snapshot~ object in Pd) where a data value from the previous batch is required.
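The read-modify-write above can be modeled outside the audio thread. A sketch (plain JS with hypothetical names, not real AudioWorker API) of why reading the previous block's value matters, as with Pd's snapshot~:

```javascript
// Each block, the worker reads the value the parameter held in the
// previous block and writes a new one (a two-way update). envelopeGain
// is the hypothetical bound parameter from the proposal in this thread.
function runBlocks(numBlocks, param, process) {
  const observed = [];
  for (let i = 0; i < numBlocks; i++) {
    process({ parameters: { envelopeGain: param } });
    observed.push(param.value);
  }
  return observed;
}

const envelopeGain = { value: 1.0 };
const observed = runBlocks(3, envelopeGain, (e) => {
  // requires reading the previous block's value, snapshot~-style
  e.parameters.envelopeGain.value = e.parameters.envelopeGain.value * 0.5;
});
// observed: [0.5, 0.25, 0.125]
```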

@ugur-zongur
Author

@sebpiq thank you for the advice. I'm in the design phase right now, so there are certainly things I'm missing. I'll keep you informed, and maybe ask you for further advice in the future, if that's ok? :)

@sebpiq

sebpiq commented Oct 14, 2014

@ugur-zongur sure, I can help. And actually I have been desperately searching for people to help on WebPd. So if you feel like giving a hand, that would be awesome. I am quite open about how we do it since I haven't found a satisfying way until now. If you get good results with your experiments, I'd be happy to take it into WebPd. Good luck!

@pendragon-andyh

If you need to connect an AudioParam to an AudioNode then check out:

They use the WaveShaperNode or AudioBufferSourceNode to provide a DC-offset into a GainNode - and then allow its "gain" property to be connected to other nodes. This should allow you to use MIDI notes as inputs to your PD-like audio graph.
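A sketch of that workaround as I understand it (browser-only Web Audio code, so the helper is defined but not called here; `createDCSource` is a hypothetical name):

```javascript
// Workaround sketch: loop a 1-sample buffer of ones through a GainNode.
// The GainNode then outputs a DC signal whose level is its gain AudioParam,
// which is automatable and can feed other nodes' parameters.
function createDCSource(ctx, level) {
  const buffer = ctx.createBuffer(1, 1, ctx.sampleRate);
  buffer.getChannelData(0)[0] = 1; // a single constant-1 sample
  const src = ctx.createBufferSource();
  src.buffer = buffer;
  src.loop = true;
  const gain = ctx.createGain();
  gain.gain.value = level; // the DC level lives on this AudioParam
  src.connect(gain);
  src.start();
  return gain; // connect this to another node or AudioParam
}

// The signal the looping buffer carries before the gain stage: all ones.
const loopedSamples = new Float32Array(4).fill(1);
```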

@cwilso
Contributor

cwilso commented Oct 14, 2014

@sebpiq Yeah, we could get rid of AudioParam altogether and only have LinearRampNode, LogRampNode, DCValueNode, LogTargetNode. But it would make it MORE complex to do the simple cases - certainly

n.frequency.value = 1500;

would be a bit of a pain, and definitely less understandable, and most of all less efficient, as:

context.CreateDCValueNode(1500).connect(n.frequency);

@ugur-zongur that code is valid. Given that you're assigning one value to half of itself, I'm not even sure what you mean, precisely, if you're intending a live connection there. But I think a connection is better than an assignment, which is why I think just adding .connect to AudioParam would fix this.

@pendragon-andyh That's not really connecting an AudioParam to an AudioNode per se; he's using an AudioParam in another part of the graph and just copying a reference to that node, much like copying references on a chorus "node". If we just added .connect on AudioParam and made AudioParams instantiable, I think this would address @sebpiq's issue too.

@ugur-zongur
Author

@cwilso it was 4 in the morning :). The code has something to do with the solution I had in mind; I wrote it without fully comprehending yours, I suppose. I think I get it now, and yes, I also think it's better. Just to be sure: according to your solution, AudioParams will be able to be connected both to and from, which implies they can be considered like named inputs or outputs for AudioNodes now, right?

@cwilso
Contributor

cwilso commented Oct 14, 2014

@ugur-zongur AudioParams already can be connected TO - like a named input, as you put it - the connections sum and are added to the computed value (calculated from the scheduled values and any .value). However, they aren't currently available as a source (i.e. you can't .connect() them to another node's inputs.)
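Numerically, the behavior @cwilso describes (per the spec's "computation of value" rules) can be sketched as: the value the node actually sees is the intrinsic value (from .value and scheduled automation) plus the sum of all connected signals. A toy model, not real API:

```javascript
// Toy model of an AudioParam's computed value: intrinsic value plus the
// sum of the signals connected to the param (before any per-node clamping).
function computedParamValue(intrinsicValue, connectedInputs) {
  return intrinsicValue + connectedInputs.reduce((sum, x) => sum + x, 0);
}

// A gain of 0.5 with an LFO currently at 0.25 and a DC source at 0.125:
const v = computedParamValue(0.5, [0.25, 0.125]); // 0.875
const unconnected = computedParamValue(0.5, []);  // just the intrinsic 0.5
```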

@ugur-zongur
Author

@cwilso yes, that was what I meant. I should have emphasised "outputs" :). Thanks for clearing that up.

@sebpiq

sebpiq commented Oct 15, 2014

@cwilso

have LinearRampNode, LogRampNode, DCValueNode, LogTargetNode

No, just a RampNode with the same methods as AudioParam's:

n.frequency = 1500
context.createRampNode(1500).connect(n.frequency)

The fewer concepts the better. So all in all it would be IMO much simpler to understand. Definitely not more complex...

Look at the beauty ... you remove a concept from the spec and open up a world of possibilities at the same time:

// Simple frequency modulation
var freqMod = context.createOscillator()
var mult = context.createGain()
var add = context.createRamp()
var osc = context.createOscillator()

freqMod.connect(mult)
mult.connect(add)
add.connect(osc)

mult.gain = 50
add.setValueAtTime(440, 0)

Right now you cannot even do this most simple FM in an obvious way.
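For reference, what the graph above computes can be written as per-sample math (a standalone numeric sketch, not Web Audio API code): the carrier's instantaneous frequency is base + depth × modulator, and the carrier phase accumulates that frequency.

```javascript
// Per-sample FM: freq(n) = base + depth * sin(2*pi*modFreq*n/sampleRate),
// with the carrier phase accumulated sample by sample.
function renderFM({ base, depth, modFreq, sampleRate, numSamples }) {
  const out = new Float32Array(numSamples);
  let phase = 0;
  for (let n = 0; n < numSamples; n++) {
    const mod = Math.sin((2 * Math.PI * modFreq * n) / sampleRate);
    const freq = base + depth * mod; // the "mult" + "add" stages above
    phase += (2 * Math.PI * freq) / sampleRate;
    out[n] = Math.sin(phase);
  }
  return out;
}

// With depth 0 this degenerates to a plain sine at the base frequency.
const plain = renderFM({ base: 440, depth: 0, modFreq: 50, sampleRate: 44100, numSamples: 8 });
```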

@ugur-zongur
Author

@sebpiq @cwilso said "make them (AudioParams) instantiatable" in his answer to @pendragon-andyh.
So what i understand from this, he has something like this in mind.

// Simple frequency modulation
var freqMod = context.createOscillator()
var mult = context.createGain()
var add = context.createAudioParam() // instantiatable
var osc = context.createOscillator()

freqMod.connect(mult)
mult.connect(add)
add.connect(osc) // connectable

mult.gain = 50
add.setValueAtTime(440, 0)

So you can do this. Fewer concepts is better, but I cannot speculate about that right now, because I'll talk about more concepts now :)

@cwilso if this will be the way to go, then I think there should be a separation like InputAudioParam and OutputAudioParam, since an input-to-input connection, or an output-to-output connection, is meaningless. Excluding connect() for InputAudioParam would do the trick, I suppose, and also removing modification functions, e.g. setValueAtTime, from OutputAudioParam (edit: we lose the functionality above, so modification functions should exist).

Another issue: the documentation of the AudioWorker section says "read-only Float32Array". I don't know if this "read-only" means the array is immutable or just that the reference to it is read-only, but if it's the former, then I think this should obviously change for the output case too.

@hoch
Member

hoch commented Oct 15, 2014

Jumping in to sidetrack...

The instantiable AudioParam object is just a converter from an audio signal to a-rate control data. I believe it is (and should be) only useful where you create your own node design with AudioWorker.

Also, the obvious example of FM simply looks like a PureData patch (osc~, *~, and line~), and I rather think connecting the output of an AudioNode directly into an AudioParam makes more sense in terms of the semantics. Having to instantiate a "ramp" is sort of the PureData way of doing this; I would say it is just a different paradigm. In addition, the simple FM implementation based on the current spec is not really that different from the example code above.

I partially agree that the current AudioParam design is not perfect, but it serves various use cases pretty well. Let's not forget that we have to deal with event scheduling very carefully, due to the architecture of the JavaScript threading model. I guess the main goal of the current AudioParam design is to achieve precise scheduling of sample-accurate automation/interpolation.

Sorry about the distraction, but I would definitely love to hear more ideas and opinions about this.

@sebpiq

sebpiq commented Oct 15, 2014

Also the obvious example of FM simply looks like a PureData patch

and a SuperCollider patch, and a chuck patch, and a csound patch, ... etc ...
Pd and SuperCollider have been around for twenty years, so it would be good to take inspiration from them, as they have been refined over all this time for this specific purpose. Let's not reinvent the wheel.

Basically, to make proper FM synthesis you need to be able to control your modulator, and for this you need a DC (index), and you need to be able to schedule value changes for this DC. And it turns out that is exactly what AudioParam does, except that it adds an unnecessary layer of complexity.

@cwilso
Contributor

cwilso commented Oct 15, 2014

Except AudioParams reduce the unnecessary layer of complexity for the cases they're mostly used for - namely, controlling audio parameters on other nodes.

If you have instantiable AudioParams that are connectable, you essentially have precisely what you've asked for - a schedulable value node.

@hoch
Member

hoch commented Oct 15, 2014

@sebpiq

and a SuperCollider patch, and a chuck patch, and a csound patch, ... etc ...

No. I was specifically referring to PureData, because the example code is just equivalent to a PureData patch. ChucK doesn't have an extra layer of automatable parameters for unit generators. SuperCollider has the concept of a-rate and k-rate, which I believe is very similar to what we can conceptually see in the current spec of the Web Audio API.

I believe the current design - encapsulating AudioParams into the node - was a reasonable one, because the API itself was geared toward a wide range of audiences. However, as @cwilso suggested, instantiable AudioParams might be the most elegant solution to this type of issue.

By the way, WAAX has several classes to abstract AudioParams with more musically meaningful data. This might not be directly related to OP's issue, but it can be an example of the abstraction of AudioParams.

https://github.com/hoch/WAAX/blob/master/src/waax.core.js#L60

@sebpiq

sebpiq commented Oct 15, 2014

yeah ... I probably got a bit carried away with ChucK. I haven't used it for several years.

SuperCollider is conceptually very similar to the Web Audio API, as you have a graph with nodes that run on the server, and an API for a client language to change some of the parameters of these nodes ... and, yeah, schedule things. Pd (and Max) is also quite close, and Pd also has a sort of k-rate and a-rate (messages vs dsp). So I believe both should be sources of inspiration. I'm not saying they are perfect, of course; both have their share of ugliness. But the basic concepts are solid and really suited to programming with sound. And yeah ... Pd and Max are also geared toward a wide audience - people (from my experience giving workshops) who might not understand a thing about programming or sound. And still they manage!

I understand the reasoning behind AudioParams, and using them as the main tool for control and scheduling, but the fact that there is a need to add so many different ways of using them (plugging an AudioNode into an AudioParam, instantiating an AudioParam, ...) makes me think that it was probably not the best decision. It makes very basic things not very intuitive to do (e.g. the DC thing), which is not good for beginners.

Anyways, I guess AudioParams are here to stay, so I will stop criticizing them :)

@joeberkovitz
Contributor

Can't this be addressed today (and perhaps for longer) by making use of multiple audio outputs from a node, some of which are intended to be connected to AudioParams of other nodes?

@cwilso
Contributor

cwilso commented Oct 27, 2014

Well, you could certainly use multiple outputs of a node for this. The only node that has multiple outputs today is channelSplitter - but you could, say, have the known semantic that DynamicsCompressor has a second output that is the envelope-follower tracking.

However, this doesn't address the use case of "I want a DC offset" - where if you could instantiate an AudioParam, you could easily do:

var dc = new AudioParam();
dc.value = 1;
dc.connect( nodeIWantADCOffsetInputTo );

And any other stuff (for example, it would be even more obvious that you're creating and scheduling an envelope).

@cwilso cwilso changed the title Connecting AudioParam of one AudioNode to another Node's AudioParam Constructible AudioParams Apr 8, 2016
@cwilso cwilso changed the title Constructible AudioParams Constructible and connect()able AudioParams Apr 8, 2016
@mdjp
Member

mdjp commented Apr 8, 2016

F2F: To be reviewed for next conference call to establish effort required to include in V1.

@mdjp mdjp modified the milestones: Needs WG decision, Web Audio v.next Apr 8, 2016
@rtoy
Member

rtoy commented Jun 22, 2016

Can't the constructible AudioParam be done with a UnitSource source node whose output is 1? This node would have one AudioParam, say, gain. Would this not allow you to construct, in effect, an AudioParam, and allow you to connect an AudioParam to other AudioParams?

I find myself creating unit sources all the time and it's really annoying to have to create a looping 1-sample buffer source for this.

@joeberkovitz
Contributor

@rtoy That does seem elegant, and much less of a perturbation than adding new bells and whistles to either AudioNode or AudioParam.

@cwilso
Contributor

cwilso commented Jun 23, 2016

This can be done that way (by creating a ValueNode that has a single AudioParam, .value, which controls its value). I think the "elegance" in that is elegance in having to put less in the spec, rather than elegance for users using it (the pattern would result in this code):

var value = context.createValueNode();
value.value.value = 5; // <- hahahaha :)

It's okay, I suppose. A constructible/connectable AudioParam would still be cleaner.


@joeberkovitz
Contributor

ValueNode is definitely a better name.

But using value as the name of the AudioParam (the node.value.value thing) stylistically frightens me. Maybe the param could be called output or something else.

@rtoy
Member

rtoy commented Jun 23, 2016

What would a constructible AudioParam look like? Would you have to do basically the same thing:

var p = context.createAudioParam();  // or new AudioParam();
p.value.value = 5;

Or is there some other approach you have in mind?

Regardless, I personally would love to have a constant-value source node; I can of course work around this, but when using hoch.github.io/canopy to hack up a test, a constant-value node would be sweet.

@cwilso
Contributor

cwilso commented Jun 23, 2016

Yeah, that's exactly it. The only real change is that AudioParam would
need to acquire a .connect().


@rtoy
Member

rtoy commented Jun 23, 2016

In that case, let me cast my vote for a constant source node (of some appropriate name) with an audioparam.

@rtoy
Member

rtoy commented Jun 30, 2016

Per teleconf: the constant source node probably doesn't work quite right. If you connect, say, an oscillator to the AudioParam, the output would be 1 + the oscillator. This isn't desired. It can be worked around by having the user set the constant source node's value to 0, but this might not be what we want.

@rtoy
Member

rtoy commented Jul 7, 2016

According to https://webaudio.github.io/web-audio-api/#computation-of-value, this is the correct behavior. If we make the default value of the constant source node be 0, I think everything will work out as desired. Then it's up to the developer to do the right thing with this constant source node. But it will behave as if it were a constructible AudioParam.
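The arithmetic behind that conclusion, sketched numerically (a toy model, not real API): per the computation-of-value rules, each output sample would be the param's default plus the connected signal, so a default of 0 passes a connected oscillator through unchanged, while a default of 1 shifts it.

```javascript
// Toy model: output[n] = paramDefault + connectedSignal[n].
function constantSourceOutput(paramDefault, connectedSignal) {
  return connectedSignal.map((s) => paramDefault + s);
}

const osc = [0, 0.5, -0.5];                           // a few oscillator samples
const withZeroDefault = constantSourceOutput(0, osc); // [0, 0.5, -0.5]
const withOneDefault = constantSourceOutput(1, osc);  // [1, 1.5, 0.5]
```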

@rtoy
Member

rtoy commented Jul 7, 2016

See #902 for a proposed ConstantSourceNode with one audio param named sourceValue defaulting to 0.

@pendragon-andyh

Do you need to specify anything about garbage collection?

Maybe "the node will become eligible for garbage collection when there are no JavaScript references to the node AND when the node is no longer connected to a part of the audio graph that is being kept alive by an oscillator or buffer-source node".

What should happen if the new ConstantSourceNode is connected directly to the destination node (with no supporting oscillator)? Should it cause a DC offset until it goes out of scope ... or should it be silent, because no real node is driving the graph?

@rtoy
Member

rtoy commented Jul 7, 2016

A ConstantSourceNode is a real AudioNode, very similar to an OscillatorNode. I would expect it to behave the same in terms of GC just like an OscillatorNode. Thus, I wouldn't expect to need to say anything special.

Unless, of course, you're thinking of ConstantSourceNode as if it were a constructible AudioParam. But it's not; it's an AudioNode, at least as I've defined it here. The group needs to decide if this is the correct approach or not.

@pendragon-andyh

The ConstantSourceNode differs from other source nodes because it does not have start and stop methods.

I have not rechecked the spec, but my memory says that the audio context holds a reference to oscillator nodes until they stop - which therefore makes them eligible for garbage collection.

@rtoy
Member

rtoy commented Jul 8, 2016

Ah. The current (updated) PR actually includes start and stop methods. But @hongchan and I were just discussing whether this makes sense or not. For an AudioParam, this probably doesn't make sense. But for a source node it does, along with an onended event.

@mdjp
Member

mdjp commented Sep 22, 2016

Keep the factory method = yes
Name of attribute = offset
Default value = 1
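Putting the decision together, usage would presumably look like this (a sketch only; it needs a browser AudioContext, so the helper is defined here but not invoked, and addDCOffset is a hypothetical name):

```javascript
// Decided API: factory method kept, attribute named "offset", default 1.
// Hypothetical helper showing the DC-offset use case from this thread.
function addDCOffset(ctx, destination, level) {
  const dc = ctx.createConstantSource(); // dc.offset defaults to 1
  dc.offset.value = level;               // offset is an AudioParam
  dc.connect(destination);               // destination: AudioNode or AudioParam
  dc.start();
  return dc;
}
```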

@mdjp mdjp modified the milestones: Web Audio V1, Needs WG decision Sep 23, 2016
@rtoy rtoy closed this as completed in 85a8138 Sep 27, 2016
rtoy added a commit that referenced this issue Sep 27, 2016