
starlark: support thread cancellation and limits on computation #298

Merged 1 commit into master from steps-and-cancel on Aug 21, 2020

Conversation

alandonovan
Contributor

This change adds two related features:
(1) asynchronous cancellation of Starlark threads (Thread.Cancel), and
(2) recording and limiting of the number of abstract computation steps.

Fixes issue #252
Fixes issue #160

starlark/eval.go Outdated
// before and after a computation. It may also be used to limit
// a computation to k steps by setting its initial value to
// 1<<64 - k.
Steps uint64
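The limiting trick described in the doc comment relies on unsigned wraparound: seed the counter with `1<<64 - k` and it reaches zero after exactly k increments. A stdlib-only sketch of that idea (the `run` function and its loop are invented for illustration, not the actual interpreter code):

```go
package main

import "fmt"

// run simulates an interpreter loop. Each iteration increments steps;
// a counter seeded with 1<<64 - k wraps to zero after exactly k steps.
func run(steps uint64, work int) (uint64, bool) {
	for i := 0; i < work; i++ {
		steps++ // one abstract computation step
		if steps == 0 {
			return steps, false // budget of k steps exhausted
		}
	}
	return steps, true
}

func main() {
	const k = 5
	_, ok := run(1<<64-k, 10) // 10 steps of work against a budget of 5
	fmt.Println(ok)           // false: budget exhausted
	_, ok = run(1<<64-k, 4)   // 4 steps of work against a budget of 5
	fmt.Println(ok)           // true: within budget
}
```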
Collaborator

No change needed necessarily, just thinking about alternatives:

When I worked on V8, they had a counter which was set to some value, then decremented at function returns and loop back branches. If it reached zero, the profiler was notified, which could trigger optimization.

I think it's a little more intuitive to have a decrementing counter, since you can initially set it to k rather than 1<<64 - k. But then measuring how many steps were taken is harder, so perhaps there's no advantage either way.

In any case, I wonder if it would be better to provide methods to get / set this, rather than exporting the counter itself. There are a lot of uses and possible representations for a counter like this, and it could easily change in the future.

Contributor Author

Fair point about encapsulation; I've changed the API to a pair of functions ExecutionSteps and SetMaxExecutionSteps. (No set or reset operation is necessary.) The check must now do an extra load of 'maxSteps', but it's in the same cache line as 'steps'.

if thread.Steps == 0 {
thread.Cancel("too many steps")
}
if reason := atomic.LoadPointer((*unsafe.Pointer)(unsafe.Pointer(&thread.cancelReason))); reason != nil {
Collaborator

The atomic load, compare, and branch on every iteration seems expensive.

Cancel could set Steps to ^0, but I suppose an atomic load would still be needed.

That could be optimized by keeping a local step count and updating the thread's step count on certain instructions (on builtin, return, or back branch) or when the local step count crosses some constant threshold.

No need to optimize that now, but it may require changing the contract of Steps in the future. Better to keep it abstract or vague.

Contributor Author

I measured it on x86 and it wasn't noticeable (sorry I didn't keep the data). The address is always in L1 cache, the branch is extremely predictable, and on Intel at least, every load is an atomic load. (The atomic load is much cheaper than selecting on the Done channel of a context: even though fundamentally that too boils down to an atomic load, compare, and branch, the function call overheads are significant.)

I considered using a local variable but it complicates the logic, and I suspect it could increase register pressure in the rest of the bytecode interpreter.

starlark/eval.go Outdated
// Starlark execution, by computing the difference in its value
// before and after a computation. It may also be used to limit
// a computation to k steps by setting its initial value to
// 1<<64 - k.
Collaborator

Consider adding a note that the number of steps taken by a given computation is an implementation detail and may change.

Contributor Author

Done.

This change adds two related features:
(1) asynchronous cancellation of Starlark threads (Thread.Cancel), and
(2) recording and limiting of the number of abstract computation steps.

Fixes issue #252
Fixes issue #160

Change-Id: Ia2e594d742363f18e650a04a0e3745678182f0e7
@alandonovan alandonovan merged commit 949cc6f into master Aug 21, 2020
@alandonovan alandonovan deleted the steps-and-cancel branch August 21, 2020 14:29
aarzilli added a commit to aarzilli/delve that referenced this pull request Aug 24, 2020
derekparker pushed a commit to go-delve/delve that referenced this pull request Aug 24, 2020
* vendor: update starlark

* terminal: use new mechanism to cancel starlark threads

See: google/starlark-go#298
martyspiewak pushed a commit to martyspiewak/starlark-go that referenced this pull request Apr 29, 2022
…le#298)

This change adds two related features:
(1) asynchronous cancellation of Starlark threads (Thread.Cancel), and
(2) recording and limiting of the number of abstract computation steps.

Fixes issue google#252
Fixes issue google#160
abner-chenc pushed a commit to loongson/delve that referenced this pull request Mar 1, 2024
* vendor: update starlark

* terminal: use new mechanism to cancel starlark threads

See: google/starlark-go#298