
Scheduled Interrupt #4609

Merged (10 commits) on Jun 19, 2018

Conversation

@hreintke (Contributor) commented Apr 5, 2018

Hi,

This PR adds the possibility to schedule the ISR routine, i.e. to execute it after loop().
In this way:

  • There is no need to set a flag in the ISR and poll it in loop().
  • Flash access and long-running functions are no longer restricted, because the actual execution happens outside the ISR.

This builds on top of the functional interrupt functionality (#2745).

In this implementation the handler type is std::function<void(InterruptInfo)>, with InterruptInfo carrying the pin, the pin status, and micros() captured at the time of the interrupt.
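
For illustration, a minimal usage sketch of the scheduled variant. The attachScheduledInterrupt() entry point and the InterruptInfo field names (pin, value, micro) are assumptions based on how the feature ended up in FunctionalInterrupt.h and may differ slightly from this revision:

#include <Arduino.h>
#include <FunctionalInterrupt.h>

constexpr uint8_t BUTTON_PIN = 0;   // GPIO0, illustrative choice

void setup()
{
	Serial.begin(115200);
	pinMode(BUTTON_PIN, INPUT_PULLUP);
	// The lambda is scheduled and runs after loop(), outside interrupt
	// context, so Serial and flash access are allowed inside it.
	attachScheduledInterrupt(BUTTON_PIN, [](InterruptInfo info) {
		Serial.printf("pin %d -> %d at %u us\r\n",
		              info.pin, info.value, (unsigned)info.micro);
	}, CHANGE);
}

void loop()
{
	// Nothing to poll; the scheduled handler is invoked automatically.
}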

@devyte : This also solves the memory leak you mentioned for FunctionalInterrupt.

Comments are welcome.

@igrr (Member) commented Apr 5, 2018

@hreintke have you considered proposing this to the Arduino cores (AVR/ARM) as well? It would be great if we could keep such features more or less in sync.

@hreintke (Contributor, Author) commented Apr 5, 2018

@igrr No, I did not (yet).
I only used an Arduino Nano briefly before moving to the ESP8266, so I will need to look into that.

Isn't schedule_function an ESP8266-only feature?

@devyte (Collaborator) commented Apr 12, 2018

@hreintke I've been looking at this on and off (I keep getting interrupted... don't laugh). There is a subtlety with using the scheduled functions from a GPIO ISR. I am describing it here for reference.

The number of functions that can be scheduled for the next loop run is currently limited. If the ISR triggers more times than that limit allows before the next scheduler service after loop(), calls are lost, because the scheduled-function list fills up.
Examples where this could happen:

  • a button that has some bounce, followed by a trigger that you can't miss
  • use of all available GPIOs for interrupt input, all triggering within a very short time span
  • a pulse counter

I think there is some additional work that should accompany this implementation (not necessarily in this same PR):

  1. The sequence is currently loop() -> scheduled functions. This should be changed to scheduled functions -> loop(), which would reduce the overhead between the execution of the ISR and the call of the scheduled function. This has been discussed internally by the maintainers and is agreed upon. I've been meaning to do this myself, but again, interruptions...
  2. The fixed limit on the number of scheduled functions should be removed and made dynamic. A solution that does not lose the advantages of the current implementation is not straightforward.
  3. Some protection should be imposed against a high rate of ISR triggers, to avoid the heap filling up (a sketch of one mitigation follows below).
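
One pattern that sidesteps both the fixed queue limit and a flood of queued nodes is to coalesce in the ISR: count events in a volatile variable and schedule at most one drain function at a time. A minimal sketch, assuming schedule_function() from Schedule.h may be called from an ISR (which this PR relies on) and that it returns a bool indicating whether the function was queued; the pin, counter, and flag names are illustrative:

#include <Arduino.h>
#include <Schedule.h>

constexpr uint8_t PULSE_PIN = 14;        // illustrative GPIO

volatile uint32_t pulseCount = 0;        // bumped in the ISR
volatile bool drainScheduled = false;    // at most one drain pending

void drainPulses()
{
	noInterrupts();
	uint32_t n = pulseCount;
	pulseCount = 0;
	drainScheduled = false;
	interrupts();
	Serial.printf("%u pulse(s) since last drain\r\n", (unsigned)n);
}

void ICACHE_RAM_ATTR onPulse()
{
	++pulseCount;                        // cheap, ISR-safe bookkeeping
	// Only one queue slot is used no matter how many pulses arrive
	// before the scheduled-function list is next serviced.
	if (!drainScheduled && schedule_function(drainPulses)) {
		drainScheduled = true;
	}
}

void setup()
{
	Serial.begin(115200);
	pinMode(PULSE_PIN, INPUT_PULLUP);
	attachInterrupt(digitalPinToInterrupt(PULSE_PIN), onPulse, FALLING);
}

void loop() {}

This trades a callback per event for an aggregated count, which is usually what a pulse counter or a bouncy button needs anyway.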

@hreintke (Contributor, Author) commented Apr 12, 2018

@devyte Nice catch on the number of interrupts; I didn't think of that.
I can propose a solution for these interrupt/scheduled_function updates if you want.

Remarks on:

1/ sequence
What would be the difference? In both cases you get the same sequence:
now: loop, scheduled, loop, scheduled, loop, scheduled
then: scheduled, loop, scheduled, loop, scheduled, loop, scheduled

2/ limit
What are the main advantages you want to preserve?
You mentioned in #4551 that you are thinking about a rewrite based on C++; what base design do you have in mind?

3/ protection
Do you see that as an interrupt or a scheduled_function requirement?
Non-interrupt scheduled functions can also come in at a high volume.

@devyte (Collaborator) commented Apr 26, 2018

@hreintke

  1. sequence
    I'm not thinking so much about the order as about the latency between the ISR and the scheduled function. Maybe it is more important in the functional Ticker implementation than here, but in essence:
    Current behavior:
    In case of trigger during an os task:
  1. first part of os task
    -> trigger
  2. ISR, function is scheduled
  3. second part of os task
  4. entire loop executes <--- this is extra latency which would be gone if scheduled functions ran before loop
  5. scheduled function executes
  6. ...

In case of trigger during a loop pass:

  1. first part of loop
    -> trigger
  2. ISR, function is scheduled
  3. second part of loop
  4. os tasks execute, if any
  5. scheduled function executes
  6. next loop...

If we change the execution of the scheduled functions to before the loop, the new behavior would be this:
In case of trigger during an os task:

  1. first part of os task
    -> trigger
  2. ISR, function is scheduled
  3. second part of os task
  4. scheduled function executes
  5. loop executes
  6. ...

In case of trigger during a loop pass:

  1. first part of loop
    -> trigger
  2. ISR, function is scheduled
  3. second part of loop
  4. os tasks execute, if any
  5. scheduled function executes
  6. next loop...
  2. limit
    The current implementation does all allocation up-front at boot, and no allocation is done during runtime. The number of list nodes is fixed, and the nodes are recycled. This helps with memory fragmentation.
    An implementation with a variable number of nodes would need to allocate on the fly, which would increase memory fragmentation.
    I just want to remove the C-style list handling and replace it with e.g. std::forward_list, maybe wrapping the whole thing in an object (see the sketch after this list).

  3. protection
    The danger I see is that, if an ISR triggers too often, many scheduled functions will queue up in the list. Currently this is limited to 10, but in a variable-length implementation there would be no limit. If the ISR is triggered by e.g. a button, and the button bounces, you could easily end up with many ISR triggers and many nodes queued up in the list. From this point of view, limiting the maximum number of functions that can be scheduled makes sense, but only as protection against a wildly repeating ISR that is not really meant to trigger so often. That means a hard limit on the scheduled functions would help against this, but it would also constrain applications where many functions are scheduled by code on purpose.
    So: how can a limit be imposed that protects against the bad cases but still allows the good ones?
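
To make the trade-off in point 2 concrete, here is a sketch of the "allocate up-front, recycle the nodes" idea described above. It is not the actual Schedule.cpp code, just the shape of it; the interrupt-safety guards the real list handling needs around the push/pop operations are omitted for brevity:

#include <cstddef>
#include <functional>
#include <utility>

class FixedScheduler {
public:
	static constexpr size_t kMaxNodes = 10;   // illustrative limit

	FixedScheduler() {
		for (auto& n : pool_) pushFree(&n);   // build the free list once, at construction
	}

	// Returns false when all pre-allocated nodes are in use (the call is lost).
	bool schedule(std::function<void()> fn) {
		Node* n = popFree();
		if (!n) return false;
		n->fn = std::move(fn);   // note: std::function may still heap-allocate for large captures
		n->next = nullptr;
		pushPending(n);
		return true;
	}

	// Run once per pass, between loop() iterations.
	void runAll() {
		while (Node* n = popPending()) {
			n->fn();
			n->fn = nullptr;     // drop any captures
			pushFree(n);         // recycle the node: no heap churn
		}
	}

private:
	struct Node { std::function<void()> fn; Node* next = nullptr; };

	Node  pool_[kMaxNodes];      // all node storage reserved up-front
	Node* free_ = nullptr;
	Node* pendingHead_ = nullptr;
	Node* pendingTail_ = nullptr;

	Node* popFree() { Node* n = free_; if (n) free_ = n->next; return n; }
	void  pushFree(Node* n) { n->next = free_; free_ = n; }
	void  pushPending(Node* n) {
		if (pendingTail_) pendingTail_->next = n; else pendingHead_ = n;
		pendingTail_ = n;
	}
	Node* popPending() {
		Node* n = pendingHead_;
		if (n) { pendingHead_ = n->next; if (!pendingHead_) pendingTail_ = nullptr; }
		return n;
	}
};

A std::forward_list-based variant would drop the fixed pool_ and allocate a node per scheduled call, which is exactly where the fragmentation concern above comes from.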

@hreintke (Contributor, Author) commented:

Possible Schedule implementation.

I added the following functionality:

  • It is encapsulated within a class; each instance can have its own maximum number of queued elements.
  • There is an option to run "continuous" -> the function is executed after each loop until removed.
  • Removal options are:
    • calling removeFunction
    • letting the ScheduledRegistration variable go out of scope.
  • Functions can be added to the beginning or the end of the queue -> adding to the front lowers latency when used by interrupts.

I did not yet integrate this with Schedule.h & FunctionalInterrupt.
It will need more work; this is for discussion.

Example sketch:

#include "ScheduledFunctions.h"
#include "Schedule.h"
#include <memory>

// instantiate ScheduledFunctions, max elements = 10
ScheduledFunctions sf(10);


class exClass
{
public:
	exClass()
	{
		// schedule continuous, guarded by sr
		sr = sf.scheduleFunctionReg(std::bind(&exClass::proc,this), true, false);

	};
	~exClass(){};

	int exc = 0;
	ScheduledRegistration sr;
	void proc()
	{
		Serial.printf("exClass proc, exc  = %d, sr %d\r\n",exc++,sr.use_count());
	}
};

int cc = 0;
void count()
{
	Serial.printf("count proc, cc= %d\r\n",cc++);
}

exClass* e ;
ScheduledRegistration sr1;

void setup()
{
	Serial.begin(115200);

	sr1 = sf.scheduleFunctionReg(count, true, false);
	e = new exClass(); // This will schedule from within the class constructor
}

// The loop function is called in an endless loop
void loop()
{

	if (cc == 100) { sf.removeFunction(sr1); } // stop the scheduled count() after 100 runs

	if (e)
	{
	   if (e->exc > 100) { delete e; e = nullptr; } // when e is deleted, e->sr goes out of scope -> the scheduled function is removed
	}

	// Run the scheduled functions after each loop pass; this call needs to be moved into Schedule.h
	schedule_function(std::bind(&ScheduledFunctions::runScheduledFunctions,&sf));
}

@devyte (Collaborator) commented Jun 18, 2018

@hreintke there seem to be warnings with this, hence the failing CI. Could you please take a look?

@hreintke (Contributor, Author) commented:

@devyte Done

@devyte (Collaborator) commented Jun 19, 2018

OK, my (shallow) testing shows this working; I have nothing to add to the code changes, and there is one report of the memory leak being fixed. I'm merging.

devyte merged commit 641c5cd into esp8266:master on Jun 19, 2018
@hreintke (Contributor, Author) commented Jul 4, 2018

@devyte This PR also included the WIP ScheduledFunctions (.h & .cpp).
Do you want to keep that in and work from there, or should I open a PR that removes those files?

@devyte (Collaborator) commented Jul 4, 2018

@hreintke you're right, and I now realize that I have some comments in there.
No point in removing it. Will you be continuing your work?

@hreintke (Contributor, Author) commented Jul 4, 2018

@devyte Yes, I will.
Just let me know your comments and I'll try to adapt.

PS: I see you are busy; no problem if it takes some time.

@hreintke (Contributor, Author) commented Jul 4, 2018

@devyte see #4890

d-a-v added this to the 2.4.2 milestone on Aug 1, 2018