From 160315d9eb8b919b8f72229551b978fd6c4c5540 Mon Sep 17 00:00:00 2001 From: Benjie Gillam Date: Fri, 28 Apr 2023 16:58:49 +0100 Subject: [PATCH 01/37] Extract common logic from ExecuteQuery, ExecuteMutation and ExecuteSubscriptionEvent --- spec/Section 6 -- Execution.md | 44 +++++++++++++++++++++------------- 1 file changed, 27 insertions(+), 17 deletions(-) diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md index 8184f95bb..97c74dde6 100644 --- a/spec/Section 6 -- Execution.md +++ b/spec/Section 6 -- Execution.md @@ -131,12 +131,8 @@ ExecuteQuery(query, schema, variableValues, initialValue): - Let {queryType} be the root Query type in {schema}. - Assert: {queryType} is an Object type. - Let {selectionSet} be the top level selection set in {query}. -- Let {data} be the result of running {ExecuteSelectionSet(selectionSet, - queryType, initialValue, variableValues)} _normally_ (allowing - parallelization). -- Let {errors} be the list of all _field error_ raised while executing the - selection set. -- Return an unordered map containing {data} and {errors}. +- Return {ExecuteRootSelectionSet(variableValues, initialValue, queryType, + selectionSet)}. ### Mutation @@ -153,11 +149,8 @@ ExecuteMutation(mutation, schema, variableValues, initialValue): - Let {mutationType} be the root Mutation type in {schema}. - Assert: {mutationType} is an Object type. - Let {selectionSet} be the top level selection set in {mutation}. -- Let {data} be the result of running {ExecuteSelectionSet(selectionSet, - mutationType, initialValue, variableValues)} _serially_. -- Let {errors} be the list of all _field error_ raised while executing the - selection set. -- Return an unordered map containing {data} and {errors}. +- Return {ExecuteRootSelectionSet(variableValues, initialValue, mutationType, + selectionSet, true)}. 
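The refactor above can be sketched in executable form. This is a hypothetical Python model (all names invented; it is not graphql-js and field resolution is stubbed out) showing how {ExecuteQuery} and {ExecuteMutation}, after this patch, differ only in the root type they pass and the trailing serial flag:

```python
# Hypothetical sketch of the extracted common logic. execute_selection_set is
# a stub standing in for the spec's ExecuteSelectionSet(); real resolution,
# error collection, and parallelism are out of scope here.

def execute_selection_set(selection_set, object_type, initial_value,
                          variable_values, serial):
    # Stub: pretend every requested response key resolves to a string.
    return {response_key: f"resolved:{response_key}"
            for response_key in selection_set}

def execute_root_selection_set(variable_values, initial_value, object_type,
                               selection_set, serial=False):
    # The shared steps previously duplicated in ExecuteQuery and
    # ExecuteMutation: execute the set, gather field errors, and return an
    # unordered map containing data and errors.
    errors = []
    data = execute_selection_set(selection_set, object_type, initial_value,
                                 variable_values, serial)
    return {"data": data, "errors": errors}

def execute_query(query_selection_set, query_type, variable_values,
                  initial_value):
    # Queries execute "normally" (parallelization allowed): serial=False.
    return execute_root_selection_set(variable_values, initial_value,
                                      query_type, query_selection_set)

def execute_mutation(mutation_selection_set, mutation_type, variable_values,
                     initial_value):
    # Mutations execute serially, matching the trailing {true} argument.
    return execute_root_selection_set(variable_values, initial_value,
                                      mutation_type, mutation_selection_set,
                                      serial=True)
```

Both entry points now reduce to a one-line delegation, which is the point of the extraction.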
### Subscription @@ -301,12 +294,8 @@ ExecuteSubscriptionEvent(subscription, schema, variableValues, initialValue): - Let {subscriptionType} be the root Subscription type in {schema}. - Assert: {subscriptionType} is an Object type. - Let {selectionSet} be the top level selection set in {subscription}. -- Let {data} be the result of running {ExecuteSelectionSet(selectionSet, - subscriptionType, initialValue, variableValues)} _normally_ (allowing - parallelization). -- Let {errors} be the list of all _field error_ raised while executing the - selection set. -- Return an unordered map containing {data} and {errors}. +- Return {ExecuteRootSelectionSet(variableValues, initialValue, + subscriptionType, selectionSet)}. Note: The {ExecuteSubscriptionEvent()} algorithm is intentionally similar to {ExecuteQuery()} since this is how each event result is produced. @@ -322,6 +311,27 @@ Unsubscribe(responseStream): - Cancel {responseStream}. +## Executing the Root Selection Set + +To execute the root selection set, the object value being evaluated and the +object type need to be known, as well as whether it must be executed serially, +or may be executed in parallel. + +Executing the root selection set works similarly for queries (parallel), +mutations (serial), and subscriptions (where it is executed for each event in +the underlying Source Stream). + +ExecuteRootSelectionSet(variableValues, initialValue, objectType, selectionSet, +serial): + +- If {serial} is not provided, initialize it to {false}. +- Let {data} be the result of running {ExecuteSelectionSet(selectionSet, + objectType, initialValue, variableValues)} _serially_ if {serial} is {true}, + _normally_ (allowing parallelization) otherwise. +- Let {errors} be the list of all _field error_ raised while executing the + selection set. +- Return an unordered map containing {data} and {errors}. 
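The serial-versus-normal distinction that {ExecuteRootSelectionSet} threads through can be illustrated with a small asyncio sketch (resolver and field names invented; this is an illustration of the scheduling difference, not a prescribed implementation):

```python
import asyncio

# Serial execution completes each root field before starting the next;
# "normal" execution may interleave root fields.

async def resolve_field(response_key, log):
    log.append(f"start:{response_key}")
    await asyncio.sleep(0)  # stand-in for real I/O; yields to the event loop
    log.append(f"end:{response_key}")
    return response_key.upper()

async def execute_fields(response_keys, serial, log):
    if serial:
        # _serially_: one field at a time, in field order (mutations).
        return {key: await resolve_field(key, log) for key in response_keys}
    # _normally_: fields may be resolved concurrently (queries and
    # subscription events).
    values = await asyncio.gather(
        *(resolve_field(key, log) for key in response_keys))
    return dict(zip(response_keys, values))

serial_log, parallel_log = [], []
asyncio.run(execute_fields(["a", "b"], True, serial_log))
asyncio.run(execute_fields(["a", "b"], False, parallel_log))
```

In the serial run the log shows `a` finishing before `b` starts; in the normal run both fields start before either finishes, which is why mutations, whose root fields may have side effects, require the serial mode.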
+ ## Executing Selection Sets To execute a _selection set_, the object value being evaluated and the object From c5c33a0508d47bcfad8337f1f1b72f4ce961f5f7 Mon Sep 17 00:00:00 2001 From: Benjie Gillam Date: Fri, 28 Apr 2023 17:20:43 +0100 Subject: [PATCH 02/37] Change ExecuteSelectionSet to ExecuteGroupedFieldSet --- spec/Section 6 -- Execution.md | 49 ++++++++++++++++++++-------------- 1 file changed, 29 insertions(+), 20 deletions(-) diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md index 97c74dde6..5fc42d8fa 100644 --- a/spec/Section 6 -- Execution.md +++ b/spec/Section 6 -- Execution.md @@ -321,31 +321,34 @@ Executing the root selection set works similarly for queries (parallel), mutations (serial), and subscriptions (where it is executed for each event in the underlying Source Stream). +First, the selection set is turned into a grouped field set; then, we execute +this grouped field set and return the resulting {data} and {errors}. + ExecuteRootSelectionSet(variableValues, initialValue, objectType, selectionSet, serial): - If {serial} is not provided, initialize it to {false}. -- Let {data} be the result of running {ExecuteSelectionSet(selectionSet, +- Let {groupedFieldSet} be the result of {CollectFields(objectType, + selectionSet, variableValues)}. +- Let {data} be the result of running {ExecuteGroupedFieldSet(groupedFieldSet, objectType, initialValue, variableValues)} _serially_ if {serial} is {true}, _normally_ (allowing parallelization) otherwise. - Let {errors} be the list of all _field error_ raised while executing the selection set. - Return an unordered map containing {data} and {errors}. -## Executing Selection Sets +## Executing a Grouped Field Set -To execute a _selection set_, the object value being evaluated and the object +To execute a grouped field set, the object value being evaluated and the object type need to be known, as well as whether it must be executed serially, or may be executed in parallel. 
-First, the selection set is turned into a grouped field set; then, each -represented field in the grouped field set produces an entry into a response -map. +Each represented field in the grouped field set produces an entry into a +response map. -ExecuteSelectionSet(selectionSet, objectType, objectValue, variableValues): +ExecuteGroupedFieldSet(groupedFieldSet, objectType, objectValue, +variableValues): -- Let {groupedFieldSet} be the result of {CollectFields(objectType, - selectionSet, variableValues)}. - Initialize {resultMap} to an empty ordered map. - For each {groupedFieldSet} as {responseKey} and {fields}: - Let {fieldName} be the name of the first entry in {fields}. Note: This value @@ -363,8 +366,8 @@ is explained in greater detail in the Field Collection section below. **Errors and Non-Null Fields** -If during {ExecuteSelectionSet()} a field with a non-null {fieldType} raises a -_field error_ then that error must propagate to this entire selection set, +If during {ExecuteGroupedFieldSet()} a field with a non-null {fieldType} raises +a _field error_ then that error must propagate to this entire selection set, either resolving to {null} if allowed or further propagated to a parent field. If this occurs, any sibling fields which have not yet executed or have not yet @@ -704,8 +707,9 @@ CompleteValue(fieldType, fields, result, variableValues): - Let {objectType} be {fieldType}. - Otherwise if {fieldType} is an Interface or Union type. - Let {objectType} be {ResolveAbstractType(fieldType, result)}. - - Let {subSelectionSet} be the result of calling {MergeSelectionSets(fields)}. - - Return the result of evaluating {ExecuteSelectionSet(subSelectionSet, + - Let {groupedFieldSet} be the result of calling {CollectSubfields(objectType, + fields, variableValues)}. + - Return the result of evaluating {ExecuteGroupedFieldSet(groupedFieldSet, objectType, result, variableValues)} _normally_ (allowing for parallelization). 
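The shape of {ExecuteGroupedFieldSet()} after this change can be sketched as follows (a minimal Python model with an invented resolver map; error propagation and value completion are omitted):

```python
# Each entry of the grouped field set maps a response key to its list of
# fields; execution walks the map in collection order, so the result map
# keeps a stable, predictable field order.

def execute_grouped_field_set(grouped_field_set, object_value, resolvers):
    result_map = {}  # Python dicts preserve insertion order, mirroring the
                     # spec's ordered result map.
    for response_key, fields in grouped_field_set.items():
        # The field name comes from the first entry; an alias changes the
        # response key but not this name.
        field_name = fields[0]["name"]
        resolver = resolvers.get(field_name)
        if resolver is None:
            continue  # fields not defined on the type yield no entry
        result_map[response_key] = resolver(object_value)
    return result_map

resolvers = {"name": lambda obj: obj["name"]}
grouped = {"alias": [{"name": "name"}], "missing": [{"name": "unknownField"}]}
result = execute_grouped_field_set(grouped, {"name": "Ada"}, resolvers)
```

Note how the caller now supplies {groupedFieldSet} directly: field collection has moved out of this algorithm and into its callers.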
@@ -752,9 +756,9 @@ ResolveAbstractType(abstractType, objectValue): **Merging Selection Sets** -When more than one field of the same name is executed in parallel, the -_selection set_ for each of the fields are merged together when completing the -value in order to continue execution of the sub-selection sets. +When more than one field of the same name is executed in parallel, during value +completion their selection sets are collected together to produce a single +grouped field set in order to continue execution of the sub-selection sets. An example operation illustrating parallel fields with the same name with sub-selections. @@ -773,14 +777,19 @@ sub-selections. After resolving the value for `me`, the selection sets are merged together so `firstName` and `lastName` can be resolved for one value. -MergeSelectionSets(fields): +CollectSubfields(objectType, fields, variableValues): -- Let {selectionSet} be an empty list. +- Let {groupedFieldSet} be an empty map. - For each {field} in {fields}: - Let {fieldSelectionSet} be the selection set of {field}. - If {fieldSelectionSet} is null or empty, continue to the next field. - - Append all selections in {fieldSelectionSet} to {selectionSet}. -- Return {selectionSet}. + - Let {subGroupedFieldSet} be the result of {CollectFields(objectType, + fieldSelectionSet, variableValues)}. + - For each {subGroupedFieldSet} as {responseKey} and {subfields}: + - Let {groupForResponseKey} be the list in {groupedFieldSet} for + {responseKey}; if no such list exists, create it as an empty list. + - Append all fields in {subfields} to {groupForResponseKey}. +- Return {groupedFieldSet}. 
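The difference between the old {MergeSelectionSets()} (concatenating raw selections) and the new {CollectSubfields()} (merging grouped field sets key by key) can be sketched with a toy data model, where a "selection set" is just a list of response keys and {CollectFields()} is reduced to a trivial stand-in:

```python
# Hedged sketch of CollectSubfields: sub-selection sets of fields sharing a
# response key are collected into ONE grouped field set, concatenating the
# groups that share a response key.

def collect_fields(object_type, selection_set, variable_values):
    # Trivial stand-in for the spec's CollectFields(): group by response key.
    grouped = {}
    for response_key in selection_set:
        grouped.setdefault(response_key, []).append(response_key)
    return grouped

def collect_subfields(object_type, fields, variable_values):
    grouped_field_set = {}
    for field in fields:
        field_selection_set = field.get("selection_set")
        if not field_selection_set:
            continue  # leaf fields contribute no subfields
        sub_grouped = collect_fields(object_type, field_selection_set,
                                     variable_values)
        for response_key, subfields in sub_grouped.items():
            grouped_field_set.setdefault(response_key, []).extend(subfields)
    return grouped_field_set

# The two parallel `me` fields from the example above:
me_fields = [{"selection_set": ["firstName"]}, {"selection_set": ["lastName"]}]
merged = collect_subfields("User", me_fields, {})
```

Here `merged` contains one group each for `firstName` and `lastName`, so both can be completed against the single resolved `me` value.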
### Handling Field Errors From 3488636235021100675d5eddf5d788447bb068eb Mon Sep 17 00:00:00 2001 From: Benjie Gillam Date: Mon, 21 Aug 2023 12:15:34 +0100 Subject: [PATCH 03/37] Correct reference to MergeSelectionSets --- spec/Section 5 -- Validation.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/spec/Section 5 -- Validation.md b/spec/Section 5 -- Validation.md index 473cf5457..44a7433b9 100644 --- a/spec/Section 5 -- Validation.md +++ b/spec/Section 5 -- Validation.md @@ -463,7 +463,7 @@ unambiguous. Therefore any two field selections which might both be encountered for the same object are only valid if they are equivalent. During execution, the simultaneous execution of fields with the same response -name is accomplished by {MergeSelectionSets()} and {CollectFields()}. +name is accomplished by {CollectSubfields()}. For simple hand-written GraphQL, this rule is obviously a clear developer error, however nested fragments can make this difficult to detect manually. From 0ffed6352a3a7471e4f517217884f90ef43d41bf Mon Sep 17 00:00:00 2001 From: Yaacov Rydzinski Date: Thu, 15 Feb 2024 22:23:30 +0200 Subject: [PATCH 04/37] moves Field Collection section earlier --- spec/Section 6 -- Execution.md | 212 ++++++++++++++++----------------- 1 file changed, 106 insertions(+), 106 deletions(-) diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md index 5fc42d8fa..510142115 100644 --- a/spec/Section 6 -- Execution.md +++ b/spec/Section 6 -- Execution.md @@ -337,6 +337,112 @@ serial): selection set. - Return an unordered map containing {data} and {errors}. +### Field Collection + +Before execution, the _selection set_ is converted to a grouped field set by +calling {CollectFields()}. Each entry in the grouped field set is a list of +fields that share a response key (the alias if defined, otherwise the field +name). This ensures all fields with the same response key (including those in +referenced fragments) are executed at the same time. 
+ +As an example, collecting the fields of this selection set would collect two +instances of the field `a` and one of field `b`: + +```graphql example +{ + a { + subfield1 + } + ...ExampleFragment +} + +fragment ExampleFragment on Query { + a { + subfield2 + } + b +} +``` + +The depth-first-search order of the field groups produced by {CollectFields()} +is maintained through execution, ensuring that fields appear in the executed +response in a stable and predictable order. + +CollectFields(objectType, selectionSet, variableValues, visitedFragments): + +- If {visitedFragments} is not provided, initialize it to the empty set. +- Initialize {groupedFields} to an empty ordered map of lists. +- For each {selection} in {selectionSet}: + - If {selection} provides the directive `@skip`, let {skipDirective} be that + directive. + - If {skipDirective}'s {if} argument is {true} or is a variable in + {variableValues} with the value {true}, continue with the next {selection} + in {selectionSet}. + - If {selection} provides the directive `@include`, let {includeDirective} be + that directive. + - If {includeDirective}'s {if} argument is not {true} and is not a variable + in {variableValues} with the value {true}, continue with the next + {selection} in {selectionSet}. + - If {selection} is a {Field}: + - Let {responseKey} be the response key of {selection} (the alias if + defined, otherwise the field name). + - Let {groupForResponseKey} be the list in {groupedFields} for + {responseKey}; if no such list exists, create it as an empty list. + - Append {selection} to the {groupForResponseKey}. + - If {selection} is a {FragmentSpread}: + - Let {fragmentSpreadName} be the name of {selection}. + - If {fragmentSpreadName} is in {visitedFragments}, continue with the next + {selection} in {selectionSet}. + - Add {fragmentSpreadName} to {visitedFragments}. + - Let {fragment} be the Fragment in the current Document whose name is + {fragmentSpreadName}. 
+ - If no such {fragment} exists, continue with the next {selection} in + {selectionSet}. + - Let {fragmentType} be the type condition on {fragment}. + - If {DoesFragmentTypeApply(objectType, fragmentType)} is {false}, continue + with the next {selection} in {selectionSet}. + - Let {fragmentSelectionSet} be the top-level selection set of {fragment}. + - Let {fragmentGroupedFieldSet} be the result of calling + {CollectFields(objectType, fragmentSelectionSet, variableValues, + visitedFragments)}. + - For each {fragmentGroup} in {fragmentGroupedFieldSet}: + - Let {responseKey} be the response key shared by all fields in + {fragmentGroup}. + - Let {groupForResponseKey} be the list in {groupedFields} for + {responseKey}; if no such list exists, create it as an empty list. + - Append all items in {fragmentGroup} to {groupForResponseKey}. + - If {selection} is an {InlineFragment}: + - Let {fragmentType} be the type condition on {selection}. + - If {fragmentType} is not {null} and {DoesFragmentTypeApply(objectType, + fragmentType)} is {false}, continue with the next {selection} in + {selectionSet}. + - Let {fragmentSelectionSet} be the top-level selection set of {selection}. + - Let {fragmentGroupedFieldSet} be the result of calling + {CollectFields(objectType, fragmentSelectionSet, variableValues, + visitedFragments)}. + - For each {fragmentGroup} in {fragmentGroupedFieldSet}: + - Let {responseKey} be the response key shared by all fields in + {fragmentGroup}. + - Let {groupForResponseKey} be the list in {groupedFields} for + {responseKey}; if no such list exists, create it as an empty list. + - Append all items in {fragmentGroup} to {groupForResponseKey}. +- Return {groupedFields}. + +DoesFragmentTypeApply(objectType, fragmentType): + +- If {fragmentType} is an Object Type: + - If {objectType} and {fragmentType} are the same type, return {true}, + otherwise return {false}. 
+- If {fragmentType} is an Interface Type: + - If {objectType} is an implementation of {fragmentType}, return {true} + otherwise return {false}. +- If {fragmentType} is a Union: + - If {objectType} is a possible type of {fragmentType}, return {true} + otherwise return {false}. + +Note: The steps in {CollectFields()} evaluating the `@skip` and `@include` +directives may be applied in either order since they apply commutatively. + ## Executing a Grouped Field Set To execute a grouped field set, the object value being evaluated and the object @@ -474,112 +580,6 @@ A correct executor must generate the following result for that _selection set_: } ``` -### Field Collection - -Before execution, the _selection set_ is converted to a grouped field set by -calling {CollectFields()}. Each entry in the grouped field set is a list of -fields that share a response key (the alias if defined, otherwise the field -name). This ensures all fields with the same response key (including those in -referenced fragments) are executed at the same time. - -As an example, collecting the fields of this selection set would collect two -instances of the field `a` and one of field `b`: - -```graphql example -{ - a { - subfield1 - } - ...ExampleFragment -} - -fragment ExampleFragment on Query { - a { - subfield2 - } - b -} -``` - -The depth-first-search order of the field groups produced by {CollectFields()} -is maintained through execution, ensuring that fields appear in the executed -response in a stable and predictable order. - -CollectFields(objectType, selectionSet, variableValues, visitedFragments): - -- If {visitedFragments} is not provided, initialize it to the empty set. -- Initialize {groupedFields} to an empty ordered map of lists. -- For each {selection} in {selectionSet}: - - If {selection} provides the directive `@skip`, let {skipDirective} be that - directive. 
- - If {skipDirective}'s {if} argument is {true} or is a variable in - {variableValues} with the value {true}, continue with the next {selection} - in {selectionSet}. - - If {selection} provides the directive `@include`, let {includeDirective} be - that directive. - - If {includeDirective}'s {if} argument is not {true} and is not a variable - in {variableValues} with the value {true}, continue with the next - {selection} in {selectionSet}. - - If {selection} is a {Field}: - - Let {responseKey} be the response key of {selection} (the alias if - defined, otherwise the field name). - - Let {groupForResponseKey} be the list in {groupedFields} for - {responseKey}; if no such list exists, create it as an empty list. - - Append {selection} to the {groupForResponseKey}. - - If {selection} is a {FragmentSpread}: - - Let {fragmentSpreadName} be the name of {selection}. - - If {fragmentSpreadName} is in {visitedFragments}, continue with the next - {selection} in {selectionSet}. - - Add {fragmentSpreadName} to {visitedFragments}. - - Let {fragment} be the Fragment in the current Document whose name is - {fragmentSpreadName}. - - If no such {fragment} exists, continue with the next {selection} in - {selectionSet}. - - Let {fragmentType} be the type condition on {fragment}. - - If {DoesFragmentTypeApply(objectType, fragmentType)} is {false}, continue - with the next {selection} in {selectionSet}. - - Let {fragmentSelectionSet} be the top-level selection set of {fragment}. - - Let {fragmentGroupedFieldSet} be the result of calling - {CollectFields(objectType, fragmentSelectionSet, variableValues, - visitedFragments)}. - - For each {fragmentGroup} in {fragmentGroupedFieldSet}: - - Let {responseKey} be the response key shared by all fields in - {fragmentGroup}. - - Let {groupForResponseKey} be the list in {groupedFields} for - {responseKey}; if no such list exists, create it as an empty list. - - Append all items in {fragmentGroup} to {groupForResponseKey}. 
- - If {selection} is an {InlineFragment}: - - Let {fragmentType} be the type condition on {selection}. - - If {fragmentType} is not {null} and {DoesFragmentTypeApply(objectType, - fragmentType)} is {false}, continue with the next {selection} in - {selectionSet}. - - Let {fragmentSelectionSet} be the top-level selection set of {selection}. - - Let {fragmentGroupedFieldSet} be the result of calling - {CollectFields(objectType, fragmentSelectionSet, variableValues, - visitedFragments)}. - - For each {fragmentGroup} in {fragmentGroupedFieldSet}: - - Let {responseKey} be the response key shared by all fields in - {fragmentGroup}. - - Let {groupForResponseKey} be the list in {groupedFields} for - {responseKey}; if no such list exists, create it as an empty list. - - Append all items in {fragmentGroup} to {groupForResponseKey}. -- Return {groupedFields}. - -DoesFragmentTypeApply(objectType, fragmentType): - -- If {fragmentType} is an Object Type: - - If {objectType} and {fragmentType} are the same type, return {true}, - otherwise return {false}. -- If {fragmentType} is an Interface Type: - - If {objectType} is an implementation of {fragmentType}, return {true} - otherwise return {false}. -- If {fragmentType} is a Union: - - If {objectType} is a possible type of {fragmentType}, return {true} - otherwise return {false}. - -Note: The steps in {CollectFields()} evaluating the `@skip` and `@include` -directives may be applied in either order since they apply commutatively. 
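{DoesFragmentTypeApply()} itself is small enough to model directly. This sketch uses a toy type registry (all type names invented) covering the three cases the algorithm distinguishes:

```python
# A fragment's type condition applies if the concrete object type is that
# exact object type, implements that interface, or is a member of that union.

OBJECT_TYPES = {"Dog", "Cat"}
INTERFACE_IMPLEMENTATIONS = {"Pet": {"Dog", "Cat"}}
UNION_MEMBERS = {"CatOrDog": {"Cat", "Dog"}}

def does_fragment_type_apply(object_type, fragment_type):
    if fragment_type in OBJECT_TYPES:
        # Object type condition: must be the same type.
        return object_type == fragment_type
    if fragment_type in INTERFACE_IMPLEMENTATIONS:
        # Interface condition: the object type must implement it.
        return object_type in INTERFACE_IMPLEMENTATIONS[fragment_type]
    if fragment_type in UNION_MEMBERS:
        # Union condition: the object type must be a possible member.
        return object_type in UNION_MEMBERS[fragment_type]
    return False
```

During {CollectFields()} this check prunes fragments whose condition cannot match the object type currently being executed.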
- ## Executing Fields Each field requested in the grouped field set that is defined on the selected From ffbfd3ca043661272adeff3b6ed09f022605238b Mon Sep 17 00:00:00 2001 From: Yaacov Rydzinski Date: Thu, 15 Feb 2024 22:30:17 +0200 Subject: [PATCH 05/37] Introduce `@defer` directive --- spec/Section 6 -- Execution.md | 383 ++++++++++++++++++++++++++++----- 1 file changed, 332 insertions(+), 51 deletions(-) diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md index 510142115..3028bca7e 100644 --- a/spec/Section 6 -- Execution.md +++ b/spec/Section 6 -- Execution.md @@ -252,12 +252,13 @@ CreateSourceEventStream(subscription, schema, variableValues, initialValue): - Let {groupedFieldSet} be the result of {CollectFields(subscriptionType, selectionSet, variableValues)}. - If {groupedFieldSet} does not have exactly one entry, raise a _request error_. -- Let {fields} be the value of the first entry in {groupedFieldSet}. -- Let {fieldName} be the name of the first entry in {fields}. Note: This value - is unaffected if an alias is used. -- Let {field} be the first entry in {fields}. +- Let {fieldDetailsList} be the value of the first entry in {groupedFieldSet}. +- Let {fieldDetails} be the first entry in {fieldDetailsList}. +- Let {field} be the corresponding entry on {fieldDetails}. +- Let {fieldName} be the name of {field}. Note: This value is unaffected if an + alias is used. - Let {argumentValues} be the result of {CoerceArgumentValues(subscriptionType, - field, variableValues)}. + node, variableValues)}. - Let {fieldStream} be the result of running {ResolveFieldEventStream(subscriptionType, initialValue, fieldName, argumentValues)}. @@ -328,14 +329,142 @@ ExecuteRootSelectionSet(variableValues, initialValue, objectType, selectionSet, serial): - If {serial} is not provided, initialize it to {false}. -- Let {groupedFieldSet} be the result of {CollectFields(objectType, - selectionSet, variableValues)}. 
-- Let {data} be the result of running {ExecuteGroupedFieldSet(groupedFieldSet, - objectType, initialValue, variableValues)} _serially_ if {serial} is {true}, - _normally_ (allowing parallelization) otherwise. -- Let {errors} be the list of all _field error_ raised while executing the - selection set. -- Return an unordered map containing {data} and {errors}. +- Let {groupedFieldSet} and {newDeferUsages} be the result of + {CollectFields(objectType, selectionSet, variableValues)}. +- Let {fieldPlan} be the result of {BuildFieldPlan(groupedFieldSet)}. +- Let {data} and {incrementalDataRecords} be the result of + {ExecuteFieldPlan(newDeferUsages, fieldPlan, objectType, initialValue, + variableValues, serial)}. +- Let {errors} be the list of all _field error_ raised while completing {data}. +- If {incrementalDataRecords} is empty, return an unordered map containing + {data} and {errors}. +- Let {incrementalResults} be the result of {YieldIncrementalResults(data, + errors, incrementalDataRecords)}. +- Wait for the first result in {incrementalResults} to be available. +- Let {initialResult} be that result. +- Return {initialResult} and {BatchIncrementalResults(incrementalResults)}. + +### Yielding Incremental Results + +The procedure for yielding incremental results is specified by the +{YieldIncrementalResults()} algorithm. + +YieldIncrementalResults(data, errors, incrementalDataRecords): + +- Initialize {graph} to an empty directed acyclic graph. +- For each {incrementalDataRecord} of {incrementalDataRecords}: + - Add {incrementalDataRecord} to {graph} as a new Pending Data node directed + from the {pendingResults} that it completes, adding each of {pendingResults} + to {graph} as new nodes, if necessary, each directed from its {parent}, if + defined, recursively adding each {parent} as necessary. 
+- Prune root nodes of {graph} containing no direct child Incremental Data + Records, repeatedly if necessary, promoting any direct child Deferred + Fragments of the pruned nodes to root nodes. (This ensures that no empty + fragments are reported as pending). +- Let {newPendingResults} be the set of root nodes in {graph}. +- Let {pending} be the result of {GetPending(newPendingResults)}. +- Let {hasNext} be {true}. +- Yield an unordered map containing {data}, {errors}, {pending}, and {hasNext}. +- For each completed child Pending Incremental Data node of a root node in + {graph}: + - Let {incrementalDataRecord} be the Pending Incremental Data for that node; + let {result} be the corresponding completed result. + - If {data} on {result} is {null}: + - Initialize {completed} to an empty list. + - Let {parents} be the parent nodes of {deferredGroupedFieldSetRecord}. + - Initialize {completed} to an empty list. + - For each {pendingResult} of {parents}: + - Append {GetCompletedEntry(parent, errors)} to {completed}. + - Remove {pendingResult} and all of its descendant nodes from {graph}, + except for any descendant Incremental Data Record nodes with other + parents. + - Let {hasNext} be {false}, if {graph} is empty. + - Yield an unordered map containing {completed} and {hasNext}. + - Continue to the next completed child Incremental Data node in {graph}. + - Replace {node} in {graph} with a new node corresponding to the Completed + Incremental Data for {result}. + - Add each {incrementalDataRecord} of {incrementalDataRecords} on {result} to + {graph} via the same procedure as above. + - Let {completedDeferredFragments} be the set of root nodes in {graph} without + any child Pending Data nodes. + - Let {completedIncrementalDataNodes} be the set of completed Incremental Data + nodes that are children of {completedDeferredFragments}. + - If {completedIncrementalDataNodes} is empty, continue to the next completed + child Incremental Data node in {graph}. 
+ - Initialize {incremental} to an empty list. + - For each {node} of {completedIncrementalDataNodes}: + - Let {incrementalDataRecord} be the corresponding record for {node}. + - Append {GetIncrementalEntry(incrementalDataRecord, graph)} to + {incremental}. + - Remove {node} from {graph}. + - Initialize {completed} to an empty list. + - For each {pendingResult} of {completedDeferredFragments}: + - Append {GetCompletedEntry(pendingResult)} to {completed}. + - Remove {pendingResult} from {graph}, promoting its child nodes to root + nodes. + - Prune root nodes of {graph} containing no direct child Incremental Data + Records, as above. + - Let {hasNext} be {false} if {graph} is empty. + - Let {incrementalResult} be an unordered map containing {hasNext}. + - If {incremental} is not empty, set the corresponding entry on + {incrementalResult} to {incremental}. + - If {completed} is not empty, set the corresponding entry on + {incrementalResult} to {completed}. + - Let {newPendingResults} be the set of new root nodes in {graph}, promoted by + the above steps. + - If {newPendingResults} is not empty: + - Let {pending} be the result of {GetPending(newPendingResults)}. + - Set the corresponding entry on {incrementalResult} to {pending}. + - Yield {incrementalResult}. +- Complete this incremental result stream. + +GetPending(newPendingResults): + +- Initialize {pending} to an empty list. +- For each {newPendingResult} of {newPendingResults}: + - Let {id} be a unique identifier for {newPendingResult}. + - Let {path} and {label} be the corresponding entries on {newPendingResult}. + - Let {pendingEntry} be an unordered map containing {id}, {path}, and {label}. + - Append {pendingEntry} to {pending}. +- Return {pending}. + +GetIncrementalEntry(incrementalDataRecord, graph): + +- Let {deferredFragments} be the Deferred Fragments incrementally completed by + {incrementalDataRecord} at {path}. +- Let {result} be the result of {incrementalDataRecord}. 
+- Let {data} and {errors} be the corresponding entries on {result}. +- Let {releasedDeferredFragments} be the members of {deferredFragments} that are + root nodes in {graph}. +- Let {bestDeferredFragment} be the member of {releasedDeferredFragments} with + the shortest {path} entry. +- Let {subPath} be the portion of {path} not contained by the {path} entry of + {bestDeferredFragment}. +- Let {id} be the unique identifier for {bestDeferredFragment}. +- Return an unordered map containing {id}, {subPath}, {data}, and {errors}. + +GetCompletedEntry(pendingResult, errors): + +- Let {id} be the unique identifier for {pendingResult}. +- Let {completedEntry} be an unordered map containing {id}. +- If {errors} is not empty, set the corresponding entry on {completedEntry} to + {errors}. +- Return {completedEntry}. + +### Batching Incremental Results + +BatchIncrementalResults(incrementalResults): + +- Return a new stream {batchedIncrementalResults} which yields events as + follows: +- While {incrementalResults} is not closed: + - Let {availableIncrementalResults} be a list of one or more Incremental + Results available on {incrementalResults}. + - Let {batchedIncrementalResult} be an unordered map created by merging the + items in {availableIncrementalResults} into a single unordered map, + concatenating list entries as necessary, and setting {hasNext} to the value + of {hasNext} on the final item in the list. + - Yield {batchedIncrementalResult}. ### Field Collection @@ -368,10 +497,12 @@ The depth-first-search order of the field groups produced by {CollectFields()} is maintained through execution, ensuring that fields appear in the executed response in a stable and predictable order. -CollectFields(objectType, selectionSet, variableValues, visitedFragments): +CollectFields(objectType, selectionSet, variableValues, deferUsage, +visitedFragments): - If {visitedFragments} is not provided, initialize it to the empty set. 
- Initialize {groupedFields} to an empty ordered map of lists. +- Initialize {newDeferUsages} to an empty list. - For each {selection} in {selectionSet}: - If {selection} provides the directive `@skip`, let {skipDirective} be that directive. @@ -386,14 +517,24 @@ CollectFields(objectType, selectionSet, variableValues, visitedFragments): - If {selection} is a {Field}: - Let {responseKey} be the response key of {selection} (the alias if defined, otherwise the field name). + - Let {fieldDetails} be a new unordered map containing {deferUsage}. + - Set the entry for {field} on {fieldDetails} to {selection}. and + {deferUsage}. - Let {groupForResponseKey} be the list in {groupedFields} for {responseKey}; if no such list exists, create it as an empty list. - - Append {selection} to the {groupForResponseKey}. + - Append {fieldDetails} to the {groupForResponseKey}. - If {selection} is a {FragmentSpread}: - Let {fragmentSpreadName} be the name of {selection}. - - If {fragmentSpreadName} is in {visitedFragments}, continue with the next - {selection} in {selectionSet}. - - Add {fragmentSpreadName} to {visitedFragments}. + - If {fragmentSpreadName} provides the directive `@defer` and its {if} + argument is not {false} and is not a variable in {variableValues} with the + value {false}: + - Let {deferDirective} be that directive. + - If this execution is for a subscription operation, raise a _field + error_. + - If {deferDirective} is not defined: + - If {fragmentSpreadName} is in {visitedFragments}, continue with the next + {selection} in {selectionSet}. + - Add {fragmentSpreadName} to {visitedFragments}. - Let {fragment} be the Fragment in the current Document whose name is {fragmentSpreadName}. - If no such {fragment} exists, continue with the next {selection} in @@ -402,31 +543,45 @@ CollectFields(objectType, selectionSet, variableValues, visitedFragments): - If {DoesFragmentTypeApply(objectType, fragmentType)} is {false}, continue with the next {selection} in {selectionSet}. 
  - Let {fragmentSelectionSet} be the top-level selection set of {fragment}.
-    - Let {fragmentGroupedFieldSet} be the result of calling
-      {CollectFields(objectType, fragmentSelectionSet, variableValues,
-      visitedFragments)}.
+    - If {deferDirective} is defined, let {fragmentDeferUsage} be
+      {deferDirective} and append it to {newDeferUsages}.
+    - Otherwise, let {fragmentDeferUsage} be {deferUsage}.
+    - Let {fragmentGroupedFieldSet} and {fragmentNewDeferUsages} be the result
+      of calling {CollectFields(objectType, fragmentSelectionSet,
+      variableValues, fragmentDeferUsage, visitedFragments)}.
    - For each {fragmentGroup} in {fragmentGroupedFieldSet}:
      - Let {responseKey} be the response key shared by all fields in
        {fragmentGroup}.
      - Let {groupForResponseKey} be the list in {groupedFields} for
        {responseKey}; if no such list exists, create it as an empty list.
      - Append all items in {fragmentGroup} to {groupForResponseKey}.
+    - Append all items in {fragmentNewDeferUsages} to {newDeferUsages}.
  - If {selection} is an {InlineFragment}:
    - Let {fragmentType} be the type condition on {selection}.
    - If {fragmentType} is not {null} and {DoesFragmentTypeApply(objectType,
      fragmentType)} is {false}, continue with the next {selection} in
      {selectionSet}.
    - Let {fragmentSelectionSet} be the top-level selection set of {selection}.
-    - Let {fragmentGroupedFieldSet} be the result of calling
-      {CollectFields(objectType, fragmentSelectionSet, variableValues,
-      visitedFragments)}.
+    - If {selection} provides the directive `@defer` and its {if} argument
+      is not {false} and is not a variable in {variableValues} with the value
+      {false}:
+      - Let {deferDirective} be that directive.
+      - If this execution is for a subscription operation, raise a _field
+        error_.
+    - If {deferDirective} is defined, let {fragmentDeferUsage} be
+      {deferDirective} and append it to {newDeferUsages}.
+    - Otherwise, let {fragmentDeferUsage} be {deferUsage}.
+    - Let {fragmentGroupedFieldSet} and {fragmentNewDeferUsages} be the result
+      of calling {CollectFields(objectType, fragmentSelectionSet,
+      variableValues, fragmentDeferUsage, visitedFragments)}.
    - For each {fragmentGroup} in {fragmentGroupedFieldSet}:
      - Let {responseKey} be the response key shared by all fields in
        {fragmentGroup}.
      - Let {groupForResponseKey} be the list in {groupedFields} for
        {responseKey}; if no such list exists, create it as an empty list.
      - Append all items in {fragmentGroup} to {groupForResponseKey}.
-- Return {groupedFields}.
+    - Append all items in {fragmentNewDeferUsages} to {newDeferUsages}.
+- Return {groupedFields} and {newDeferUsages}.

DoesFragmentTypeApply(objectType, fragmentType):

@@ -443,6 +598,105 @@ DoesFragmentTypeApply(objectType, fragmentType):
Note: The steps in {CollectFields()} evaluating the `@skip` and `@include`
directives may be applied in either order since they apply commutatively.

+### Field Plan Generation
+
+BuildFieldPlan(originalGroupedFieldSet, parentDeferUsages):
+
+- If {parentDeferUsages} is not provided, initialize it to the empty set.
+- Initialize {fieldPlan} to an empty ordered map.
+- For each {responseKey} and {groupForResponseKey} of
+  {originalGroupedFieldSet}:
+  - Let {deferUsageSet} be the result of
+    {GetDeferUsageSet(groupForResponseKey)}.
+  - Let {groupedFieldSet} be the entry in {fieldPlan} for any equivalent set to
+    {deferUsageSet}; if no such map exists, create it as an empty ordered map.
+  - Set the entry for {responseKey} in {groupedFieldSet} to
+    {groupForResponseKey}.
+- Return {fieldPlan}.
+
+GetDeferUsageSet(fieldDetailsList):
+
+- Let {deferUsageSet} be the set containing the {deferUsage} entry from each
+  item in {fieldDetailsList}.
+- For each {deferUsage} of {deferUsageSet}:
+  - Let {ancestors} be the set of {deferUsage} entries that are ancestors of
+    {deferUsage}, collected by recursively following the {parent} entry on
+    {deferUsage}.
+  - If any of {ancestors} is contained by {deferUsageSet}, remove {deferUsage}
+    from {deferUsageSet}.
+- Return {deferUsageSet}.
+
+## Executing a Field Plan
+
+To execute a field plan, the object value being evaluated and the object type
+need to be known, as well as whether the non-deferred grouped field set must be
+executed serially, or may be executed in parallel.
+
+ExecuteFieldPlan(newDeferUsages, fieldPlan, objectType, objectValue,
+variableValues, serial, path, deferUsageSet, deferMap):
+
+- If {path} is not provided, initialize it to an empty list.
+- Let {newDeferMap} be the result of {GetNewDeferMap(newDeferUsages, path,
+  deferMap)}.
+- Let {groupedFieldSet} be the entry in {fieldPlan} for the set equivalent to
+  {deferUsageSet}.
+- Let {newGroupedFieldSets} be the remaining portion of {fieldPlan}.
+- Allowing for parallelization, perform the following steps:
+  - Let {data} and {nestedIncrementalDataRecords} be the result of running
+    {ExecuteGroupedFieldSet(groupedFieldSet, objectType, objectValue,
+    variableValues, path, deferUsageSet, newDeferMap)} _serially_ if {serial} is
+    {true}, _normally_ (allowing parallelization) otherwise.
+  - Let {incrementalDataRecords} be the result of
+    {ExecuteDeferredGroupedFieldSets(objectType, objectValue, variableValues,
+    newGroupedFieldSets, path, newDeferMap)}.
+- Append all items in {nestedIncrementalDataRecords} to
+  {incrementalDataRecords}.
+- Return {data} and {incrementalDataRecords}.
+
+GetNewDeferMap(newDeferUsages, path, deferMap):
+
+- If {newDeferUsages} is empty, return {deferMap}.
+- Let {newDeferMap} be a new unordered map containing all entries in {deferMap}.
+- For each {deferUsage} in {newDeferUsages}:
+  - Let {parentDeferUsage} and {label} be the corresponding entries on
+    {deferUsage}.
+  - Let {parent} be the entry in {deferMap} for {parentDeferUsage}.
+  - Let {newDeferredFragment} be an unordered map containing {parent}, {path}
+    and {label}.
+  - Set the entry for {deferUsage} in {newDeferMap} to {newDeferredFragment}.
+- Return {newDeferMap}.
+
+ExecuteDeferredGroupedFieldSets(objectType, objectValue, variableValues,
+newGroupedFieldSets, path, deferMap):
+
+- Initialize {incrementalDataRecords} to an empty list.
+- For each {deferUsageSet} and {groupedFieldSet} in {newGroupedFieldSets}:
+  - Let {deferredFragments} be an empty list.
+  - For each {deferUsage} in {deferUsageSet}:
+    - Let {deferredFragment} be the entry for {deferUsage} in {deferMap}.
+    - Append {deferredFragment} to {deferredFragments}.
+  - Let {incrementalDataRecord} represent the future execution of
+    {ExecuteDeferredGroupedFieldSet(groupedFieldSet, objectType, objectValue,
+    variableValues, path, deferUsageSet, deferMap)}, incrementally completing
+    {deferredFragments} at {path}.
+  - Append {incrementalDataRecord} to {incrementalDataRecords}.
+  - Schedule initiation of execution of {incrementalDataRecord} following any
+    implementation-specific deferral.
+- Return {incrementalDataRecords}.
+
+Note: {incrementalDataRecord} can be safely initiated without blocking
+higher-priority data once any of {deferredFragments} are released as pending.
+
+ExecuteDeferredGroupedFieldSet(groupedFieldSet, objectType, objectValue,
+variableValues, path, deferUsageSet, deferMap):
+
+- Let {data} and {incrementalDataRecords} be the result of running
+  {ExecuteGroupedFieldSet(groupedFieldSet, objectType, objectValue,
+  variableValues, path, deferUsageSet, deferMap)} _normally_ (allowing
+  parallelization).
+- Let {errors} be the list of all _field error_ raised while completing {data}.
+- Return an unordered map containing {data}, {errors}, and
+  {incrementalDataRecords}.
+
## Executing a Grouped Field Set

To execute a grouped field set, the object value being evaluated and the object
@@ -452,23 +706,27 @@ be executed in parallel.

Each represented field in the grouped field set produces an entry into a
response map.
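For intuition only (this sketch is not part of the specification diff), the per-response-key mapping described above can be written in TypeScript. The `FieldDetails` shape and the resolver map are simplified assumptions, not the reference implementation:

```typescript
// Illustrative sketch: each response key in a grouped field set yields one
// entry in the result map, preserving the order fields appear in the
// operation. FieldDetails and Resolver are simplified for this sketch.
type FieldDetails = { fieldName: string };
type Resolver = (source: unknown) => unknown;

function executeGroupedFieldSet(
  groupedFieldSet: Map<string, FieldDetails[]>,
  resolvers: Record<string, Resolver>,
  objectValue: unknown,
): Map<string, unknown> {
  const resultMap = new Map<string, unknown>();
  for (const [responseKey, fields] of groupedFieldSet) {
    // The field name is taken from the first entry; an alias changes the
    // response key but not the field name.
    const fieldName = fields[0].fieldName;
    const resolver = resolvers[fieldName];
    if (resolver !== undefined) {
      resultMap.set(responseKey, resolver(objectValue));
    }
  }
  return resultMap;
}
```

Aliased fields share a resolver through their field name while keeping their own response key, which is why the response key, not the field name, becomes the entry in the result map.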
-ExecuteGroupedFieldSet(groupedFieldSet, objectType, objectValue, -variableValues): +ExecuteGroupedFieldSet(groupedFieldSet, objectType, objectValue, variableValues, +path, deferUsageSet, deferMap): - Initialize {resultMap} to an empty ordered map. +- Initialize {incrementalDataRecords} to an empty list. - For each {groupedFieldSet} as {responseKey} and {fields}: - Let {fieldName} be the name of the first entry in {fields}. Note: This value is unaffected if an alias is used. - Let {fieldType} be the return type defined for the field {fieldName} of {objectType}. - If {fieldType} is defined: - - Let {responseValue} be {ExecuteField(objectType, objectValue, fieldType, - fields, variableValues)}. + - Let {responseValue} and {fieldIncrementalDataRecords} be the result of + {ExecuteField(objectType, objectValue, fieldType, fields, variableValues, + path)}. - Set {responseValue} as the value for {responseKey} in {resultMap}. -- Return {resultMap}. + - Append all items in {fieldIncrementalDataRecords} to + {incrementalDataRecords}. +- Return {resultMap} and {incrementalDataRecords}. Note: {resultMap} is ordered by which fields appear first in the operation. This -is explained in greater detail in the Field Collection section below. +is explained in greater detail in the Field Collection section above. **Errors and Non-Null Fields** @@ -588,16 +846,19 @@ coerces any provided argument values, then resolves a value for the field, and finally completes that value either by recursively executing another selection set or coercing a scalar value. -ExecuteField(objectType, objectValue, fieldType, fields, variableValues): +ExecuteField(objectType, objectValue, fieldType, fieldDetailsList, +variableValues, path, deferUsageSet, deferMap): -- Let {field} be the first entry in {fields}. +- Let {fieldDetails} be the first entry in {fieldDetailsList}. +- Let {field} be the corresponding entry on {fieldDetails}. - Let {fieldName} be the field name of {field}. 
+- Append {fieldName} to {path}.
- Let {argumentValues} be the result of {CoerceArgumentValues(objectType,
  field, variableValues)}.
- Let {resolvedValue} be {ResolveFieldValue(objectType, objectValue, fieldName,
  argumentValues)}.
- Return the result of {CompleteValue(fieldType, fields, resolvedValue,
-  variableValues)}.
+  variableValues, path, deferUsageSet, deferMap)}.

### Coercing Field Arguments

@@ -684,22 +945,22 @@ After resolving the value for a field, it is completed by ensuring it adheres
to the expected return type. If the return type is another Object type, then
the field execution process continues recursively.

-CompleteValue(fieldType, fields, result, variableValues):
+CompleteValue(fieldType, fieldDetailsList, result, variableValues, path,
+deferUsageSet, deferMap):

- If the {fieldType} is a Non-Null type:
  - Let {innerType} be the inner type of {fieldType}.
-  - Let {completedResult} be the result of calling {CompleteValue(innerType,
-    fields, result, variableValues)}.
+  - Let {completedResult} and {incrementalDataRecords} be the result of calling
+    {CompleteValue(innerType, fieldDetailsList, result, variableValues, path)}.
  - If {completedResult} is {null}, raise a _field error_.
-  - Return {completedResult}.
+  - Return {completedResult} and {incrementalDataRecords}.
- If {result} is {null} (or another internal value similar to {null} such as
  {undefined}), return {null}.
- If {fieldType} is a List type:
  - If {result} is not a collection of values, raise a _field error_.
  - Let {innerType} be the inner type of {fieldType}.
-  - Return a list where each list item is the result of calling
-    {CompleteValue(innerType, fields, resultItem, variableValues)}, where
-    {resultItem} is each item in {result}.
+  - Return the result of {CompleteListValue(innerType, fieldDetailsList, result,
+    variableValues, path, deferUsageSet, deferMap)}.
- If {fieldType} is a Scalar or Enum type:
  - Return the result of {CoerceResult(fieldType, result)}.
- If {fieldType} is an Object, Interface, or Union type:
  - If {fieldType} is an Object type.
    - Let {objectType} be {fieldType}.
  - Otherwise if {fieldType} is an Interface or Union type.
    - Let {objectType} be {ResolveAbstractType(fieldType, result)}.
-  - Let {groupedFieldSet} be the result of calling {CollectSubfields(objectType,
-    fields, variableValues)}.
-  - Return the result of evaluating {ExecuteGroupedFieldSet(groupedFieldSet,
-    objectType, result, variableValues)} _normally_ (allowing for
-    parallelization).
+  - Let {groupedFieldSet} and {newDeferUsages} be the result of calling
+    {CollectSubfields(objectType, fieldDetailsList, variableValues)}.
+  - Let {fieldPlan} be the result of {BuildFieldPlan(groupedFieldSet,
+    deferUsageSet)}.
+  - Return the result of {ExecuteFieldPlan(newDeferUsages, fieldPlan,
+    objectType, result, variableValues, false, path, deferUsageSet, deferMap)}.
+
+CompleteListValue(innerType, fieldDetailsList, result, variableValues, path,
+deferUsageSet, deferMap):
+
+- Initialize {items} and {incrementalDataRecords} to empty lists.
+- Let {index} be {0}.
+- For each {resultItem} of {result}:
+  - Let {itemPath} be {path} with {index} appended.
+  - Let {completedItem} and {itemIncrementalDataRecords} be the result of
+    calling {CompleteValue(innerType, fieldDetailsList, resultItem,
+    variableValues, itemPath)}.
+  - Append {completedItem} to {items}.
+  - Append all items in {itemIncrementalDataRecords} to
+    {incrementalDataRecords}.
+  - Increment {index} by {1}.
+- Return {items} and {incrementalDataRecords}.

**Coercing Results**

@@ -777,18 +1055,21 @@ sub-selections. After resolving the value for `me`, the selection sets are
merged together so `firstName` and `lastName` can be resolved for one value.

-CollectSubfields(objectType, fields, variableValues):
+CollectSubfields(objectType, fieldDetailsList, variableValues):

-- Let {groupedFieldSet} be an empty map.
-- For each {field} in {fields}:
+- Initialize {groupedFieldSet} to an empty ordered map of lists.
+- Initialize {newDeferUsages} to an empty list.
+- For each {fieldDetails} in {fieldDetailsList}:
+  - Let {field} and {deferUsage} be the corresponding entries on {fieldDetails}.
  - Let {fieldSelectionSet} be the selection set of {field}.
  - If {fieldSelectionSet} is null or empty, continue to the next field.
-  - Let {subGroupedFieldSet} be the result of {CollectFields(objectType,
-    fieldSelectionSet, variableValues)}.
+  - Let {subGroupedFieldSet} and {subNewDeferUsages} be the result of
+    {CollectFields(objectType, fieldSelectionSet, variableValues, deferUsage)}.
  - For each {subGroupedFieldSet} as {responseKey} and {subfields}:
    - Let {groupForResponseKey} be the list in {groupedFieldSet} for
      {responseKey}; if no such list exists, create it as an empty list.
    - Append all fields in {subfields} to {groupForResponseKey}.
+  - Append all defer usages in {subNewDeferUsages} to {newDeferUsages}.
-- Return {groupedFieldSet}.
+- Return {groupedFieldSet} and {newDeferUsages}.

### Handling Field Errors

From 5ce10a571fd5084222dcb45c06aa59f1e51c5e61 Mon Sep 17 00:00:00 2001
From: Yaacov Rydzinski
Date: Thu, 13 Jun 2024 15:04:00 +0300
Subject: [PATCH 06/37] refactor a few lines out of YieldSubsequentResults

---
 spec/Section 6 -- Execution.md | 80 ++++++++++++++++++++++------------
 1 file changed, 51 insertions(+), 29 deletions(-)

diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md
index 3028bca7e..b5c3c331f 100644
--- a/spec/Section 6 -- Execution.md
+++ b/spec/Section 6 -- Execution.md
@@ -356,15 +356,12 @@ YieldIncrementalResults(data, errors, incrementalDataRecords):
  - Add {incrementalDataRecord} to {graph} as a new Pending Data node directed
    from the {pendingResults} that it completes, adding each of {pendingResults}
    to {graph} as new nodes, if necessary, each directed from its {parent}, if
-    defined, recursively adding each {parent} as necessary.
-- Prune root nodes of {graph} containing no direct child Incremental Data - Records, repeatedly if necessary, promoting any direct child Deferred - Fragments of the pruned nodes to root nodes. (This ensures that no empty - fragments are reported as pending). -- Let {newPendingResults} be the set of root nodes in {graph}. -- Let {pending} be the result of {GetPending(newPendingResults)}. -- Let {hasNext} be {true}. -- Yield an unordered map containing {data}, {errors}, {pending}, and {hasNext}. + defined, recursively adding each {parent} as necessary until + {incrementalDataRecord} is connected to {graph}. +- Let {pendingResults} be the result of {GetNonEmptyNewPending(graph)}. +- Prune root nodes from {graph} not in {pendingResults}, repeating as necessary + until all root nodes in {graph} are also in {pendingResults}. +- Yield the result of {GetInitialResult(data, errors, pending)}. - For each completed child Pending Incremental Data node of a root node in {graph}: - Let {incrementalDataRecord} be the Pending Incremental Data for that node; @@ -380,7 +377,7 @@ YieldIncrementalResults(data, errors, incrementalDataRecords): parents. - Let {hasNext} be {false}, if {graph} is empty. - Yield an unordered map containing {completed} and {hasNext}. - - Continue to the next completed child Incremental Data node in {graph}. + - Continue to the next completed Pending Incremental Data node. - Replace {node} in {graph} with a new node corresponding to the Completed Incremental Data for {result}. - Add each {incrementalDataRecord} of {incrementalDataRecords} on {result} to @@ -390,7 +387,7 @@ YieldIncrementalResults(data, errors, incrementalDataRecords): - Let {completedIncrementalDataNodes} be the set of completed Incremental Data nodes that are children of {completedDeferredFragments}. - If {completedIncrementalDataNodes} is empty, continue to the next completed - child Incremental Data node in {graph}. + Pending Incremental Data Node. 
- Initialize {incremental} to an empty list. - For each {node} of {completedIncrementalDataNodes}: - Let {incrementalDataRecord} be the corresponding record for {node}. @@ -402,32 +399,57 @@ YieldIncrementalResults(data, errors, incrementalDataRecords): - Append {GetCompletedEntry(pendingResult)} to {completed}. - Remove {pendingResult} from {graph}, promoting its child nodes to root nodes. - - Prune root nodes of {graph} containing no direct child Incremental Data - Records, as above. - - Let {hasNext} be {false} if {graph} is empty. - - Let {incrementalResult} be an unordered map containing {hasNext}. - - If {incremental} is not empty, set the corresponding entry on - {incrementalResult} to {incremental}. - - If {completed} is not empty, set the corresponding entry on - {incrementalResult} to {completed}. - - Let {newPendingResults} be the set of new root nodes in {graph}, promoted by - the above steps. - - If {newPendingResults} is not empty: - - Let {pending} be the result of {GetPending(newPendingResults)}. - - Set the corresponding entry on {incrementalResult} to {pending}. - - Yield {incrementalResult}. + - Let {newPendingResults} be a new set containing the result of + {GetNonEmptyNewPending(graph, pendingResults)}. + - Add all nodes in {newPendingResults} to {pendingResults}. + - Prune root nodes from {graph} not in {pendingResults}, repeating as + necessary until all root nodes in {graph} are also in {pendingResults}. + - Let {pending} be the result of {GetPendingEntry(newPendingResults)}. + - Yield the result of {GetIncrementalResult(graph, incremental, completed, + pending)}. - Complete this incremental result stream. -GetPending(newPendingResults): +GetNonEmptyNewPending(graph, oldPendingResults): + +- If not provided, initialize {oldPendingResults} to the empty set. +- Let {rootNodes} be the set of root nodes in {graph}. +- For each {rootNode} of {rootNodes}: + - If {rootNodes} is in {oldPendingResults}: + - Continue to the next {rootNode}. 
+  - If {rootNode} has no children Pending Incremental Data nodes:
+    - Let {children} be the set of child Deferred Fragment nodes of {rootNode}.
+    - Remove {rootNode} from {rootNodes}.
+    - Add each of the nodes in {children} to {rootNodes}.
+- Return {rootNodes}.
+
+GetInitialResult(data, errors, pendingResults):
+
+- Let {pending} be the result of {GetPendingEntry(pendingResults)}.
+- Let {hasNext} be {true}.
+- Return an unordered map containing {data}, {errors}, {pending}, and {hasNext}.
+
+GetPendingEntry(pendingResults):

- Initialize {pending} to an empty list.
-- For each {newPendingResult} of {newPendingResults}:
-  - Let {id} be a unique identifier for {newPendingResult}.
-  - Let {path} and {label} be the corresponding entries on {newPendingResult}.
+- For each {pendingResult} of {pendingResults}:
+  - Let {id} be a unique identifier for {pendingResult}.
+  - Let {path} and {label} be the corresponding entries on {pendingResult}.
  - Let {pendingEntry} be an unordered map containing {id}, {path}, and {label}.
  - Append {pendingEntry} to {pending}.
- Return {pending}.

+GetIncrementalResult(graph, incremental, completed, pending):
+
+- Let {hasNext} be {false} if {graph} is empty, otherwise {true}.
+- Let {incrementalResult} be an unordered map containing {hasNext}.
+- If {incremental} is not empty:
+  - Set the corresponding entry on {incrementalResult} to {incremental}.
+- If {completed} is not empty:
+  - Set the corresponding entry on {incrementalResult} to {completed}.
+- If {pending} is not empty:
+  - Set the corresponding entry on {incrementalResult} to {pending}.
+- Return {incrementalResult}.
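As a rough illustration (not part of the specification diff), the conditional assembly performed by {GetIncrementalResult} can be sketched in TypeScript; the entry types here are placeholder assumptions, and only the presence/absence logic is modeled:

```typescript
// Sketch of GetIncrementalResult: lists are attached to the payload only when
// non-empty, and hasNext is false only once the graph is exhausted.
interface SubsequentResult {
  hasNext: boolean;
  incremental?: unknown[];
  completed?: unknown[];
  pending?: unknown[];
}

function getIncrementalResult(
  graphIsEmpty: boolean,
  incremental: unknown[],
  completed: unknown[],
  pending: unknown[],
): SubsequentResult {
  const result: SubsequentResult = { hasNext: !graphIsEmpty };
  // Each entry is set only when its list is non-empty, so clients never
  // receive empty `incremental`, `completed`, or `pending` arrays.
  if (incremental.length > 0) result.incremental = incremental;
  if (completed.length > 0) result.completed = completed;
  if (pending.length > 0) result.pending = pending;
  return result;
}
```

Omitting the empty lists keeps the final payload minimal: the last event of a stream can carry nothing but `{"hasNext": false}`.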
+ GetIncrementalEntry(incrementalDataRecord, graph): - Let {deferredFragments} be the Deferred Fragments incrementally completed by From b9a2500c3d9e14f168577501495b8139369267e5 Mon Sep 17 00:00:00 2001 From: Yaacov Rydzinski Date: Tue, 18 Jun 2024 22:37:22 +0300 Subject: [PATCH 07/37] add a word or two about which child nodes are being promoted --- spec/Section 6 -- Execution.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md index b5c3c331f..fadeb0de8 100644 --- a/spec/Section 6 -- Execution.md +++ b/spec/Section 6 -- Execution.md @@ -397,8 +397,8 @@ YieldIncrementalResults(data, errors, incrementalDataRecords): - Initialize {completed} to an empty list. - For each {pendingResult} of {completedDeferredFragments}: - Append {GetCompletedEntry(pendingResult)} to {completed}. - - Remove {pendingResult} from {graph}, promoting its child nodes to root - nodes. + - Remove {pendingResult} from {graph}, promoting its child Deferred Fragment + nodes to root nodes. - Let {newPendingResults} be a new set containing the result of {GetNonEmptyNewPending(graph, pendingResults)}. - Add all nodes in {newPendingResults} to {pendingResults}. From c7d5ccdb54159ce5512f81a5e17aae5bb9e0586f Mon Sep 17 00:00:00 2001 From: Yaacov Rydzinski Date: Tue, 18 Jun 2024 22:58:32 +0300 Subject: [PATCH 08/37] be more graphy --- spec/Section 6 -- Execution.md | 28 ++++++++++++++++------------ 1 file changed, 16 insertions(+), 12 deletions(-) diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md index fadeb0de8..4776b6e82 100644 --- a/spec/Section 6 -- Execution.md +++ b/spec/Section 6 -- Execution.md @@ -351,17 +351,10 @@ The procedure for yielding incremental results is specified by the YieldIncrementalResults(data, errors, incrementalDataRecords): -- Initialize {graph} to an empty directed acyclic graph. 
-- For each {incrementalDataRecord} of {incrementalDataRecords}: - - Add {incrementalDataRecord} to {graph} as a new Pending Data node directed - from the {pendingResults} that it completes, adding each of {pendingResults} - to {graph} as new nodes, if necessary, each directed from its {parent}, if - defined, recursively adding each {parent} as necessary until - {incrementalDataRecord} is connected to {graph}. +- Let {graph} be the result of {BuildGraph(incrementalDataRecords)}. - Let {pendingResults} be the result of {GetNonEmptyNewPending(graph)}. -- Prune root nodes from {graph} not in {pendingResults}, repeating as necessary - until all root nodes in {graph} are also in {pendingResults}. -- Yield the result of {GetInitialResult(data, errors, pending)}. +- Update {graph} to the subgraph rooted at nodes in {pendingResults}. +- Yield the result of {GetInitialResult(data, errors, pendingResults)}. - For each completed child Pending Incremental Data node of a root node in {graph}: - Let {incrementalDataRecord} be the Pending Incremental Data for that node; @@ -402,13 +395,24 @@ YieldIncrementalResults(data, errors, incrementalDataRecords): - Let {newPendingResults} be a new set containing the result of {GetNonEmptyNewPending(graph, pendingResults)}. - Add all nodes in {newPendingResults} to {pendingResults}. - - Prune root nodes from {graph} not in {pendingResults}, repeating as - necessary until all root nodes in {graph} are also in {pendingResults}. + - Update {graph} to the subgraph rooted at nodes in {pendingResults}. - Let {pending} be the result of {GetPendingEntry(newPendingResults)}. - Yield the result of {GetIncrementalResult(graph, incremental, completed, pending)}. - Complete this incremental result stream. +BuildGraph(incrementalDataRecords): + +- Initialize {graph} to an empty directed acyclic graph, where the root nodes + represent the Subsequent Result nodes that have been released as pending. 
+- For each {incrementalDataRecord} of {incrementalDataRecords}: + - Add {incrementalDataRecord} to {graph} as a new Pending Data node directed + from the {pendingResults} that it completes, adding each of {pendingResults} + to {graph} as new nodes, if necessary, each directed from its {parent}, if + defined, recursively adding each {parent} as necessary until + {incrementalDataRecord} is connected to {graph}. +- Return {graph}. + GetNonEmptyNewPending(graph, oldPendingResults): - If not provided, initialize {oldPendingResults} to the empty set. From bfe47f3c09adc2eca3ca49e12720c1a978db759a Mon Sep 17 00:00:00 2001 From: Yaacov Rydzinski Date: Wed, 19 Jun 2024 06:09:52 +0300 Subject: [PATCH 09/37] fix timing --- spec/Section 6 -- Execution.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md index 4776b6e82..3e18aef98 100644 --- a/spec/Section 6 -- Execution.md +++ b/spec/Section 6 -- Execution.md @@ -404,7 +404,7 @@ YieldIncrementalResults(data, errors, incrementalDataRecords): BuildGraph(incrementalDataRecords): - Initialize {graph} to an empty directed acyclic graph, where the root nodes - represent the Subsequent Result nodes that have been released as pending. + represent the pending Subsequent Results. 
- For each {incrementalDataRecord} of {incrementalDataRecords}:
  - Add {incrementalDataRecord} to {graph} as a new Pending Data node directed
    from the {pendingResults} that it completes, adding each of {pendingResults}
    to {graph} as new nodes, if necessary, each directed from its {parent}, if
    defined, recursively adding each {parent} as necessary until
    {incrementalDataRecord} is connected to {graph}.
- Return {graph}.

From 587589c322224482aaae39c1d0920c98173dee93 Mon Sep 17 00:00:00 2001
From: Yaacov Rydzinski
Date: Wed, 19 Jun 2024 06:16:58 +0300
Subject: [PATCH 10/37] reuse function

---
 spec/Section 6 -- Execution.md | 22 +++++++++++-----------
 1 file changed, 11 insertions(+), 11 deletions(-)

diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md
index 3e18aef98..f53b4237f 100644
--- a/spec/Section 6 -- Execution.md
+++ b/spec/Section 6 -- Execution.md
@@ -373,8 +373,8 @@ YieldIncrementalResults(data, errors, incrementalDataRecords):
  - Continue to the next completed Pending Incremental Data node.
  - Replace {node} in {graph} with a new node corresponding to the Completed
    Incremental Data for {result}.
-  - Add each {incrementalDataRecord} of {incrementalDataRecords} on {result} to
-    {graph} via the same procedure as above.
+  - Let {resultIncrementalDataRecords} be {incrementalDataRecords} on {result}.
+  - Update {graph} to {BuildGraph(resultIncrementalDataRecords, graph)}.
  - Let {completedDeferredFragments} be the set of root nodes in {graph} without
    any child Pending Data nodes.
  - Let {completedIncrementalDataNodes} be the set of completed Incremental Data
@@ -401,17 +401,17 @@ YieldIncrementalResults(data, errors, incrementalDataRecords):
    pending)}.
- Complete this incremental result stream.

-BuildGraph(incrementalDataRecords):
+BuildGraph(incrementalDataRecords, graph):

-- Initialize {graph} to an empty directed acyclic graph, where the root nodes
-  represent the pending Subsequent Results.
+- If {graph} is not provided, initialize it to an empty directed acyclic
+  graph.
+- Let {newGraph} be a new directed acyclic graph containing all of the nodes and
+  edges in {graph}.
- For each {incrementalDataRecord} of {incrementalDataRecords}: - - Add {incrementalDataRecord} to {graph} as a new Pending Data node directed - from the {pendingResults} that it completes, adding each of {pendingResults} - to {graph} as new nodes, if necessary, each directed from its {parent}, if - defined, recursively adding each {parent} as necessary until - {incrementalDataRecord} is connected to {graph}. -- Return {graph}. + - Add {incrementalDataRecord} to {newGraph} as a new Pending Data node + directed from the {pendingResults} that it completes, adding each of + {pendingResults} to {newGraph} as new nodes, if necessary, each directed + from its {parent}, if defined, recursively adding each {parent} as necessary + until {incrementalDataRecord} is connected to {newGraph}. +- Return {newGraph}. GetNonEmptyNewPending(graph, oldPendingResults): From e8368ed6cd24003368f473e19ef8e5742e919f8e Mon Sep 17 00:00:00 2001 From: Yaacov Rydzinski Date: Wed, 19 Jun 2024 06:21:19 +0300 Subject: [PATCH 11/37] fix --- spec/Section 6 -- Execution.md | 16 +++++++--------- 1 file changed, 7 insertions(+), 9 deletions(-) diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md index f53b4237f..5095f4ee7 100644 --- a/spec/Section 6 -- Execution.md +++ b/spec/Section 6 -- Execution.md @@ -392,8 +392,7 @@ YieldIncrementalResults(data, errors, incrementalDataRecords): - Append {GetCompletedEntry(pendingResult)} to {completed}. - Remove {pendingResult} from {graph}, promoting its child Deferred Fragment nodes to root nodes. - - Let {newPendingResults} be a new set containing the result of - {GetNonEmptyNewPending(graph, pendingResults)}. + - Let {newPendingResults} be the result of {GetNonEmptyNewPending(graph)}. - Add all nodes in {newPendingResults} to {pendingResults}. - Update {graph} to the subgraph rooted at nodes in {pendingResults}. - Let {pending} be the result of {GetPendingEntry(newPendingResults)}. 
@@ -413,18 +412,17 @@ BuildGraph(incrementalDataRecords, graph): until {incrementalDataRecord} is connected to {newGraph}. - Return {newGraph}. -GetNonEmptyNewPending(graph, oldPendingResults): +GetNonEmptyNewPending(graph): -- If not provided, initialize {oldPendingResults} to the empty set. -- Let {rootNodes} be the set of root nodes in {graph}. +- Initialize {newPendingResults} to the empty set. +- Initialize {rootNodes} to the set of root nodes in {graph}. - For each {rootNode} of {rootNodes}: - - If {rootNodes} is in {oldPendingResults}: - - Continue to the next {rootNode}. - If {rootNode} has no children Pending Incremental Data nodes: - Let {children} be the set of child Deferred Fragment nodes of {rootNode}. - - Remove {rootNode} from {rootNodes}. - Add each of the nodes in {children} to {rootNodes}. -- Return {rootNodes}. + - Continue to the next {rootNode} of {rootNodes}. + - Add {rootNode} to {newPendingResults}. +- Return {newPendingResults}. GetInitialResult(data, errors, pendingResults): From 7d8b9d085299d7cc10c0c8b83fd412a4b6403540 Mon Sep 17 00:00:00 2001 From: Yaacov Rydzinski Date: Wed, 19 Jun 2024 06:23:15 +0300 Subject: [PATCH 12/37] rename BuildGraph to GraphFromRecords --- spec/Section 6 -- Execution.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md index 5095f4ee7..91b8f0179 100644 --- a/spec/Section 6 -- Execution.md +++ b/spec/Section 6 -- Execution.md @@ -351,7 +351,7 @@ The procedure for yielding incremental results is specified by the YieldIncrementalResults(data, errors, incrementalDataRecords): -- Let {graph} be the result of {BuildGraph(incrementalDataRecords)}. +- Let {graph} be the result of {GraphFromRecords(incrementalDataRecords)}. - Let {pendingResults} be the result of {GetNonEmptyNewPending(graph)}. - Update {graph} to the subgraph rooted at nodes in {pendingResults}. - Yield the result of {GetInitialResult(data, errors, pendingResults)}. 
@@ -374,7 +374,7 @@ YieldIncrementalResults(data, errors, incrementalDataRecords): - Replace {node} in {graph} with a new node corresponding to the Completed Incremental Data for {result}. - Let {resultIncrementalDataRecords} be {incrementalDataRecords} on {result}. - - Update {graph} to {BuildGraph(resultIncrementalDataRecords, graph)}. + - Update {graph} to {GraphFromRecords(resultIncrementalDataRecords, graph)}. - Let {completedDeferredFragments} be the set of root nodes in {graph} without any child Pending Data nodes. - Let {completedIncrementalDataNodes} be the set of completed Incremental Data @@ -400,7 +400,7 @@ YieldIncrementalResults(data, errors, incrementalDataRecords): pending)}. - Complete this incremental result stream. -BuildGraph(incrementalDataRecords, graph): +GraphFromRecords(incrementalDataRecords, graph): - Let {newGraph} be a new directed acyclic graph containing all of the nodes and edges in {graph}. From a4b506cba72aa411dffc77a216c9b4ec12216ecf Mon Sep 17 00:00:00 2001 From: Yaacov Rydzinski Date: Wed, 19 Jun 2024 06:25:31 +0300 Subject: [PATCH 13/37] reword recursive abort case --- spec/Section 6 -- Execution.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md index 91b8f0179..d6094e06d 100644 --- a/spec/Section 6 -- Execution.md +++ b/spec/Section 6 -- Execution.md @@ -407,9 +407,9 @@ GraphFromRecords(incrementalDataRecords, graph): - For each {incrementalDataRecord} of {incrementalDataRecords}: - Add {incrementalDataRecord} to {newGraph} as a new Pending Data node directed from the {pendingResults} that it completes, adding each of - {pendingResults} to {newGraph} as new nodes, if necessary, each directed - from its {parent}, if defined, recursively adding each {parent} as necessary - until {incrementalDataRecord} is connected to {newGraph}. 
+ {pendingResults} to {newGraph} as a new node directed from its {parent}, + recursively adding each {parent} until {incrementalDataRecord} is connected + to {newGraph}, or the {parent} is not defined. - Return {newGraph}. GetNonEmptyNewPending(graph): From c796f03eaf54948ddbe7f638292eb221106c4d4b Mon Sep 17 00:00:00 2001 From: Yaacov Rydzinski Date: Wed, 17 Jul 2024 22:51:33 +0300 Subject: [PATCH 14/37] bring BuildFieldPlan in line with implementation --- spec/Section 6 -- Execution.md | 55 +++++++++++++++++++++------------- 1 file changed, 34 insertions(+), 21 deletions(-) diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md index d6094e06d..cdc8d9295 100644 --- a/spec/Section 6 -- Execution.md +++ b/spec/Section 6 -- Execution.md @@ -627,27 +627,41 @@ directives may be applied in either order since they apply commutatively. BuildFieldPlan(originalGroupedFieldSet, parentDeferUsages): - If {parentDeferUsages} is not provided, initialize it to the empty set. -- Initialize {fieldPlan} to an empty ordered map. +- Initialize {groupedFieldSet} to an empty ordered map. +- Initialize {newGroupedFieldSets} to an empty unordered map. +- Let {fieldPlan} be an unordered map containing {groupedFieldSet} and + {newGroupedFieldSets}. - For each {responseKey} and {groupForResponseKey} of {groupedFieldSet}: - - Let {deferUsageSet} be the result of - {GetDeferUsageSet(groupForResponseKey)}. - - Let {groupedFieldSet} be the entry in {fieldPlan} for any equivalent set to - {deferUsageSet}; if no such map exists, create it as an empty ordered map. - - Set the entry for {responseKey} in {groupedFieldSet} to - {groupForResponseKey}. + - Let {filteredDeferUsageSet} be the result of + {GetFilteredDeferUsageSet(groupForResponseKey)}. + - If {filteredDeferUsageSet} is the equivalent set to {parentDeferUsages}: + - Set the entry for {responseKey} in {groupedFieldSet} to + {groupForResponseKey}. 
+  - Otherwise:
+    - Let {newGroupedFieldSet} be the entry in {newGroupedFieldSets} for any
+      equivalent set to {filteredDeferUsageSet}; if no such map exists, create
+      it as an empty ordered map.
+    - Set the entry for {responseKey} in {newGroupedFieldSet} to
+      {groupForResponseKey}.
- Return {fieldPlan}.

-GetDeferUsageSet(fieldDetailsList):
-
-- Let {deferUsageSet} be the set containing the {deferUsage} entry from each
-  item in {fieldDetailsList}.
-- For each {deferUsage} of {deferUsageSet}:
-  - Let {ancestors} be the set of {deferUsage} entries that are ancestors of
-    {deferUsage}, collected by recursively following the {parent} entry on
-    {deferUsage}.
-  - If any of {ancestors} is contained by {deferUsageSet}, remove {deferUsage}
-    from {deferUsageSet}.
-- Return {deferUsageSet}.
+GetFilteredDeferUsageSet(fieldGroup):
+
+- Initialize {filteredDeferUsageSet} to the empty set.
+- For each {fieldDetails} of {fieldGroup}:
+  - Let {deferUsage} be the corresponding entry on {fieldDetails}.
+  - If {deferUsage} is not defined:
+    - Remove all entries from {filteredDeferUsageSet}.
+    - Return {filteredDeferUsageSet}.
+  - Add {deferUsage} to {filteredDeferUsageSet}.
+- For each {deferUsage} in {filteredDeferUsageSet}:
+  - Let {parentDeferUsage} be the corresponding entry on {deferUsage}.
+  - While {parentDeferUsage} is defined:
+    - If {parentDeferUsage} is contained by {filteredDeferUsageSet}:
+      - Remove {deferUsage} from {filteredDeferUsageSet}.
+      - Continue to the next {deferUsage} in {filteredDeferUsageSet}.
+    - Reset {parentDeferUsage} to the corresponding entry on {parentDeferUsage}.
+- Return {filteredDeferUsageSet}.

## Executing a Field Plan

@@ -661,9 +675,8 @@ variableValues, serial, path, deferUsageSet, deferMap):

- If {path} is not provided, initialize it to an empty list.
- Let {newDeferMap} be the result of {GetNewDeferMap(newDeferUsages, path,
  deferMap)}.
-- Let {groupedFieldSet} be the entry in {fieldPlan} for the set equivalent to
-  {deferUsageSet}.
-- Let {newGroupedFieldSets} be the remaining portion of {fieldPlan}. +- Let {groupedFieldSet} and {newGroupedFieldSets} be the corresponding entries + on {fieldPlan}. - Allowing for parallelization, perform the following steps: - Let {data} and {nestedIncrementalDataRecords} be the result of running {ExecuteGroupedFieldSet(groupedFieldSet, objectType, objectValue, From f0ebc12ab1f5e31778a7f306878f05456ec3a343 Mon Sep 17 00:00:00 2001 From: Yaacov Rydzinski Date: Wed, 17 Jul 2024 23:01:18 +0300 Subject: [PATCH 15/37] rename "deferred grouped field set record" to "execution group" --- spec/Section 6 -- Execution.md | 12 ++++++------ 1 file changed, 6 insertions(+), 6 deletions(-) diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md index cdc8d9295..bc3113d7c 100644 --- a/spec/Section 6 -- Execution.md +++ b/spec/Section 6 -- Execution.md @@ -361,7 +361,7 @@ YieldIncrementalResults(data, errors, incrementalDataRecords): let {result} be the corresponding completed result. - If {data} on {result} is {null}: - Initialize {completed} to an empty list. - - Let {parents} be the parent nodes of {deferredGroupedFieldSetRecord}. + - Let {parents} be the parent nodes of {executionGroup}. - Initialize {completed} to an empty list. - For each {pendingResult} of {parents}: - Append {GetCompletedEntry(parent, errors)} to {completed}. @@ -683,7 +683,7 @@ variableValues, serial, path, deferUsageSet, deferMap): variableValues, path, deferUsageSet, newDeferMap)} _serially_ if {serial} is {true}, _normally_ (allowing parallelization) otherwise. - Let {incrementalDataRecords} be the result of - {ExecuteDeferredGroupedFieldSets(objectType, objectValue, variableValues, + {ExecuteExecutionGroups(objectType, objectValue, variableValues, newGroupedFieldSets, path, newDeferMap)}. - Append all items in {nestedIncrementalDataRecords} to {incrementalDataRecords}. 
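The two-part plan split used here, {groupedFieldSet} for fields delivered with the parent result and {newGroupedFieldSets} for fields that need their own execution groups, can be illustrated with a small TypeScript sketch. All shapes are assumed, and the ancestor filtering performed by {GetFilteredDeferUsageSet()} is deliberately omitted:

```typescript
type DeferUsage = { label?: string };
type FieldDetails = { field: string; deferUsage?: DeferUsage };

function setsEqual<T>(a: Set<T>, b: Set<T>): boolean {
  if (a.size !== b.size) return false;
  for (const x of a) if (!b.has(x)) return false;
  return true;
}

// Simplified: a group reachable without @defer filters to the empty set;
// ancestor filtering is omitted for brevity.
function getFilteredDeferUsageSet(group: FieldDetails[]): Set<DeferUsage> {
  const result = new Set<DeferUsage>();
  for (const details of group) {
    if (details.deferUsage === undefined) return new Set();
    result.add(details.deferUsage);
  }
  return result;
}

function buildFieldPlan(
  originalGroupedFieldSet: Map<string, FieldDetails[]>,
  parentDeferUsages: Set<DeferUsage> = new Set(),
) {
  const groupedFieldSet = new Map<string, FieldDetails[]>();
  const newGroupedFieldSets = new Map<
    Set<DeferUsage>,
    Map<string, FieldDetails[]>
  >();
  for (const [responseKey, group] of originalGroupedFieldSet) {
    const filtered = getFilteredDeferUsageSet(group);
    if (setsEqual(filtered, parentDeferUsages)) {
      // Delivered together with the parent result.
      groupedFieldSet.set(responseKey, group);
      continue;
    }
    // Bucket into an existing entry for an equivalent set, if one exists.
    let key = [...newGroupedFieldSets.keys()].find((k) => setsEqual(k, filtered));
    if (key === undefined) {
      key = filtered;
      newGroupedFieldSets.set(key, new Map());
    }
    newGroupedFieldSets.get(key)!.set(responseKey, group);
  }
  return { groupedFieldSet, newGroupedFieldSets };
}
```

Keying the deferred buckets by equivalent defer-usage sets is what lets several `@defer` fragments that cover the same fields share a single execution group.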
@@ -702,7 +702,7 @@ GetNewDeferMap(newDeferUsages, path, deferMap): - Set the entry for {deferUsage} in {newDeferMap} to {newDeferredFragment}. - Return {newDeferMap}. -ExecuteDeferredGroupedFieldSets(objectType, objectValue, variableValues, +ExecuteExecutionGroups(objectType, objectValue, variableValues, newGroupedFieldSets, path, deferMap): - Initialize {incrementalDataRecords} to an empty list. @@ -712,7 +712,7 @@ newGroupedFieldSets, path, deferMap): - Let {deferredFragment} be the entry for {deferUsage} in {deferMap}. - Append {deferredFragment} to {deferredFragments}. - Let {incrementalDataRecord} represent the future execution of - {ExecuteDeferredGroupedFieldSet(groupedFieldSet, objectType, objectValue, + {ExecuteExecutionGroup(groupedFieldSet, objectType, objectValue, variableValues, deferredFragments, path, deferUsageSet, deferMap)}, incrementally completing {deferredFragments} at {path}. - Append {incrementalDataRecord} to {incrementalDataRecords}. @@ -723,8 +723,8 @@ newGroupedFieldSets, path, deferMap): Note: {incrementalDataRecord} can be safely initiated without blocking higher-priority data once any of {deferredFragments} are released as pending. 
-ExecuteDeferredGroupedFieldSet(groupedFieldSet, objectType, objectValue,
-variableValues, path, deferUsageSet, deferMap):
+ExecuteExecutionGroup(groupedFieldSet, objectType, objectValue, variableValues,
+path, deferUsageSet, deferMap):

- Let {data} and {incrementalDataRecords} be the result of running
  {ExecuteGroupedFieldSet(groupedFieldSet, objectType, objectValue,

From 4b862500c2c11f25adb322d928f6c1a3ffcecf56 Mon Sep 17 00:00:00 2001
From: Yaacov Rydzinski
Date: Wed, 17 Jul 2024 23:02:43 +0300
Subject: [PATCH 16/37] rename ExecuteExecutionGroup to CollectExecutionGroup

---
 spec/Section 6 -- Execution.md | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md
index bc3113d7c..0371cfa52 100644
--- a/spec/Section 6 -- Execution.md
+++ b/spec/Section 6 -- Execution.md
@@ -683,7 +683,7 @@ variableValues, serial, path, deferUsageSet, deferMap):
  variableValues, path, deferUsageSet, newDeferMap)} _serially_ if {serial} is
  {true}, _normally_ (allowing parallelization) otherwise.
- Let {incrementalDataRecords} be the result of
-  {ExecuteExecutionGroups(objectType, objectValue, variableValues,
+  {CollectExecutionGroups(objectType, objectValue, variableValues,
  newGroupedFieldSets, path, newDeferMap)}.
- Append all items in {nestedIncrementalDataRecords} to
  {incrementalDataRecords}.
@@ -702,7 +702,7 @@ GetNewDeferMap(newDeferUsages, path, deferMap):
- Set the entry for {deferUsage} in {newDeferMap} to {newDeferredFragment}.
- Return {newDeferMap}.

-ExecuteExecutionGroups(objectType, objectValue, variableValues,
+CollectExecutionGroups(objectType, objectValue, variableValues,
newGroupedFieldSets, path, deferMap):

- Initialize {incrementalDataRecords} to an empty list.
@@ -712,7 +712,7 @@ newGroupedFieldSets, path, deferMap):
  - Let {deferredFragment} be the entry for {deferUsage} in {deferMap}.
  - Append {deferredFragment} to {deferredFragments}.
- Let {incrementalDataRecord} represent the future execution of
-    {ExecuteExecutionGroup(groupedFieldSet, objectType, objectValue,
+    {CollectExecutionGroup(groupedFieldSet, objectType, objectValue,
    variableValues, deferredFragments, path, deferUsageSet, deferMap)},
    incrementally completing {deferredFragments} at {path}.
  - Append {incrementalDataRecord} to {incrementalDataRecords}.
@@ -723,7 +723,7 @@ newGroupedFieldSets, path, deferMap):
Note: {incrementalDataRecord} can be safely initiated without blocking
higher-priority data once any of {deferredFragments} are released as pending.

-ExecuteExecutionGroup(groupedFieldSet, objectType, objectValue, variableValues,
+CollectExecutionGroup(groupedFieldSet, objectType, objectValue, variableValues,
path, deferUsageSet, deferMap):

- Let {data} and {incrementalDataRecords} be the result of running
  {ExecuteGroupedFieldSet(groupedFieldSet, objectType, objectValue,

From db54ad8cefbd9c5a8ee2cc732c404b2b12828bb5 Mon Sep 17 00:00:00 2001
From: Yaacov Rydzinski
Date: Thu, 18 Jul 2024 17:26:26 +0300
Subject: [PATCH 17/37] properly initialize deferUsages with their parents

---
 spec/Section 6 -- Execution.md | 14 ++++++++++----
 1 file changed, 10 insertions(+), 4 deletions(-)

diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md
index 0371cfa52..a53183f0b 100644
--- a/spec/Section 6 -- Execution.md
+++ b/spec/Section 6 -- Execution.md
@@ -567,8 +567,11 @@ visitedFragments):
  - If {DoesFragmentTypeApply(objectType, fragmentType)} is {false}, continue
    with the next {selection} in {selectionSet}.
  - Let {fragmentSelectionSet} be the top-level selection set of {fragment}.
-  - If {deferDirective} is defined, let {fragmentDeferUsage} be
-    {deferDirective} and append it to {newDeferUsages}.
+  - If {deferDirective} is defined:
+    - Let {label} be the corresponding entry on {deferDirective}.
+    - Let {parentDeferUsage} be {deferUsage}.
+    - Let {fragmentDeferUsage} be an unordered map containing {label} and
+      {parentDeferUsage}.
+    - Append {fragmentDeferUsage} to {newDeferUsages}.
  - Otherwise, let {fragmentDeferUsage} be {deferUsage}.
- Let {fragmentGroupedFieldSet} and {fragmentNewDeferUsages} be the result
    of calling {CollectFields(objectType, fragmentSelectionSet,
@@ -592,8 +595,11 @@ visitedFragments):
      - Let {deferDirective} be that directive.
      - If this execution is for a subscription operation, raise a _field
        error_.
-    - If {deferDirective} is defined, let {fragmentDeferUsage} be
-      {deferDirective} and append it to {newDeferUsages}.
+    - If {deferDirective} is defined:
+      - Let {label} be the corresponding entry on {deferDirective}.
+      - Let {parentDeferUsage} be {deferUsage}.
+      - Let {fragmentDeferUsage} be an unordered map containing {label} and
+        {parentDeferUsage}.
+      - Append {fragmentDeferUsage} to {newDeferUsages}.
    - Otherwise, let {fragmentDeferUsage} be {deferUsage}.
    - Let {fragmentGroupedFieldSet} and {fragmentNewDeferUsages} be the result
      of calling {CollectFields(objectType, fragmentSelectionSet,

From a2516e2891861364c107298685cb233e69bb1513 Mon Sep 17 00:00:00 2001
From: Yaacov Rydzinski
Date: Thu, 18 Jul 2024 17:27:41 +0300
Subject: [PATCH 18/37] move Field Collection back to where it was

mostly to reduce the diff.
---
 spec/Section 6 -- Execution.md | 358 ++++++++++++++++-----------------
 1 file changed, 179 insertions(+), 179 deletions(-)

diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md
index a53183f0b..942c24416 100644
--- a/spec/Section 6 -- Execution.md
+++ b/spec/Section 6 -- Execution.md
@@ -490,185 +490,6 @@ BatchIncrementalResults(incrementalResults):
    of {hasNext} on the final item in the list.
  - Yield {batchedIncrementalResult}.

-### Field Collection
-
-Before execution, the _selection set_ is converted to a grouped field set by
-calling {CollectFields()}. Each entry in the grouped field set is a list of
-fields that share a response key (the alias if defined, otherwise the field
-name). This ensures all fields with the same response key (including those in
-referenced fragments) are executed at the same time.
- -As an example, collecting the fields of this selection set would collect two -instances of the field `a` and one of field `b`: - -```graphql example -{ - a { - subfield1 - } - ...ExampleFragment -} - -fragment ExampleFragment on Query { - a { - subfield2 - } - b -} -``` - -The depth-first-search order of the field groups produced by {CollectFields()} -is maintained through execution, ensuring that fields appear in the executed -response in a stable and predictable order. - -CollectFields(objectType, selectionSet, variableValues, deferUsage, -visitedFragments): - -- If {visitedFragments} is not provided, initialize it to the empty set. -- Initialize {groupedFields} to an empty ordered map of lists. -- Initialize {newDeferUsages} to an empty list. -- For each {selection} in {selectionSet}: - - If {selection} provides the directive `@skip`, let {skipDirective} be that - directive. - - If {skipDirective}'s {if} argument is {true} or is a variable in - {variableValues} with the value {true}, continue with the next {selection} - in {selectionSet}. - - If {selection} provides the directive `@include`, let {includeDirective} be - that directive. - - If {includeDirective}'s {if} argument is not {true} and is not a variable - in {variableValues} with the value {true}, continue with the next - {selection} in {selectionSet}. - - If {selection} is a {Field}: - - Let {responseKey} be the response key of {selection} (the alias if - defined, otherwise the field name). - - Let {fieldDetails} be a new unordered map containing {deferUsage}. - - Set the entry for {field} on {fieldDetails} to {selection}. and - {deferUsage}. - - Let {groupForResponseKey} be the list in {groupedFields} for - {responseKey}; if no such list exists, create it as an empty list. - - Append {fieldDetails} to the {groupForResponseKey}. - - If {selection} is a {FragmentSpread}: - - Let {fragmentSpreadName} be the name of {selection}. 
- - If {fragmentSpreadName} provides the directive `@defer` and its {if} - argument is not {false} and is not a variable in {variableValues} with the - value {false}: - - Let {deferDirective} be that directive. - - If this execution is for a subscription operation, raise a _field - error_. - - If {deferDirective} is not defined: - - If {fragmentSpreadName} is in {visitedFragments}, continue with the next - {selection} in {selectionSet}. - - Add {fragmentSpreadName} to {visitedFragments}. - - Let {fragment} be the Fragment in the current Document whose name is - {fragmentSpreadName}. - - If no such {fragment} exists, continue with the next {selection} in - {selectionSet}. - - Let {fragmentType} be the type condition on {fragment}. - - If {DoesFragmentTypeApply(objectType, fragmentType)} is {false}, continue - with the next {selection} in {selectionSet}. - - Let {fragmentSelectionSet} be the top-level selection set of {fragment}. - - If {deferDirective} is defined: - - Let {path} be the corresponding entry on {deferDirective}. - - Let {parentDeferUsage} be {deferUsage}. - - Let {fragmentDeferUsage} be an unordered map containing {path} and - {parentDeferUsage}. - - Otherwise, let {fragmentDeferUsage} be {deferUsage}. - - Let {fragmentGroupedFieldSet} and {fragmentNewDeferUsages} be the result - of calling {CollectFields(objectType, fragmentSelectionSet, - variableValues, fragmentDeferUsage, visitedFragments)}. - - For each {fragmentGroup} in {fragmentGroupedFieldSet}: - - Let {responseKey} be the response key shared by all fields in - {fragmentGroup}. - - Let {groupForResponseKey} be the list in {groupedFields} for - {responseKey}; if no such list exists, create it as an empty list. - - Append all items in {fragmentGroup} to {groupForResponseKey}. - - Append all items in {fragmentNewDeferUsages} to {newDeferUsages}. - - If {selection} is an {InlineFragment}: - - Let {fragmentType} be the type condition on {selection}. 
- - If {fragmentType} is not {null} and {DoesFragmentTypeApply(objectType, - fragmentType)} is {false}, continue with the next {selection} in - {selectionSet}. - - Let {fragmentSelectionSet} be the top-level selection set of {selection}. - - If {InlineFragment} provides the directive `@defer` and its {if} argument - is not {false} and is not a variable in {variableValues} with the value - {false}: - - Let {deferDirective} be that directive. - - If this execution is for a subscription operation, raise a _field - error_. - - If {deferDirective} is defined: - - Let {path} be the corresponding entry on {deferDirective}. - - Let {parentDeferUsage} be {deferUsage}. - - Let {fragmentDeferUsage} be an unordered map containing {path} and - {parentDeferUsage}. - - Otherwise, let {fragmentDeferUsage} be {deferUsage}. - - Let {fragmentGroupedFieldSet} and {fragmentNewDeferUsages} be the result - of calling {CollectFields(objectType, fragmentSelectionSet, - variableValues, fragmentDeferUsage, visitedFragments)}. - - For each {fragmentGroup} in {fragmentGroupedFieldSet}: - - Let {responseKey} be the response key shared by all fields in - {fragmentGroup}. - - Let {groupForResponseKey} be the list in {groupedFields} for - {responseKey}; if no such list exists, create it as an empty list. - - Append all items in {fragmentGroup} to {groupForResponseKey}. - - Append all items in {fragmentNewDeferUsages} to {newDeferUsages}. -- Return {groupedFields} and {newDeferUsages}. - -DoesFragmentTypeApply(objectType, fragmentType): - -- If {fragmentType} is an Object Type: - - If {objectType} and {fragmentType} are the same type, return {true}, - otherwise return {false}. -- If {fragmentType} is an Interface Type: - - If {objectType} is an implementation of {fragmentType}, return {true} - otherwise return {false}. -- If {fragmentType} is a Union: - - If {objectType} is a possible type of {fragmentType}, return {true} - otherwise return {false}. 
- -Note: The steps in {CollectFields()} evaluating the `@skip` and `@include` -directives may be applied in either order since they apply commutatively. - -### Field Plan Generation - -BuildFieldPlan(originalGroupedFieldSet, parentDeferUsages): - -- If {parentDeferUsages} is not provided, initialize it to the empty set. -- Initialize {groupedFieldSet} to an empty ordered map. -- Initialize {newGroupedFieldSets} to an empty unordered map. -- Let {fieldPlan} be an unordered map containing {groupedFieldSet} and - {newGroupedFieldSets}. -- For each {responseKey} and {groupForResponseKey} of {groupedFieldSet}: - - Let {filteredDeferUsageSet} be the result of - {GetFilteredDeferUsageSet(groupForResponseKey)}. - - If {filteredDeferUsageSet} is the equivalent set to {parentDeferUsages}: - - Set the entry for {responseKey} in {groupedFieldSet} to - {groupForResponseKey}. - - Otherwise: - - Let {newGroupedFieldSet} be the entry in {newGroupedFieldSets} for any - equivalent set to {deferUsageSet}; if no such map exists, create it as an - empty ordered map. - - Set the entry for {responseKey} in {newGroupedFieldSet} to - {groupForResponseKey}. -- Return {fieldPlan}. - -GetFilteredDeferUsageSet(fieldGroup): - -- Initialize {filteredDeferUsageSet} to the empty set. -- For each {fieldDetails} of {fieldGroup}: - - Let {deferUsage} be the corresponding entry on {fieldDetails}. - - If {deferUsage} is not defined: - - Remove all entries from {filteredDeferUsageSet}. - - Return {filteredDeferUsageSet}. - - Add {deferUsage} to {filteredDeferUsageSet}. -- For each {deferUsage} in {filteredDeferUsageSet}: - - Let {parentDeferUsage} be the corresponding entry on {deferUsage}. - - While {parentDeferUsage} is defined: - - If {parentDeferUsage} is contained by {filteredDeferUsageSet}: - - Remove {deferUsage} from {filteredDeferUsageSet}. - - Continue to the next {deferUsage} in {filteredDeferUsageSet}. - - Reset {parentDeferUsage} to the corresponding entry on {parentDeferUsage}. 
-- Return {filteredDeferUsageSet}. - ## Executing a Field Plan To execute a field plan, the object value being evaluated and the object type @@ -881,6 +702,185 @@ A correct executor must generate the following result for that _selection set_: } ``` +### Field Collection + +Before execution, the _selection set_ is converted to a grouped field set by +calling {CollectFields()}. Each entry in the grouped field set is a list of +fields that share a response key (the alias if defined, otherwise the field +name). This ensures all fields with the same response key (including those in +referenced fragments) are executed at the same time. + +As an example, collecting the fields of this selection set would collect two +instances of the field `a` and one of field `b`: + +```graphql example +{ + a { + subfield1 + } + ...ExampleFragment +} + +fragment ExampleFragment on Query { + a { + subfield2 + } + b +} +``` + +The depth-first-search order of the field groups produced by {CollectFields()} +is maintained through execution, ensuring that fields appear in the executed +response in a stable and predictable order. + +CollectFields(objectType, selectionSet, variableValues, deferUsage, +visitedFragments): + +- If {visitedFragments} is not provided, initialize it to the empty set. +- Initialize {groupedFields} to an empty ordered map of lists. +- Initialize {newDeferUsages} to an empty list. +- For each {selection} in {selectionSet}: + - If {selection} provides the directive `@skip`, let {skipDirective} be that + directive. + - If {skipDirective}'s {if} argument is {true} or is a variable in + {variableValues} with the value {true}, continue with the next {selection} + in {selectionSet}. + - If {selection} provides the directive `@include`, let {includeDirective} be + that directive. + - If {includeDirective}'s {if} argument is not {true} and is not a variable + in {variableValues} with the value {true}, continue with the next + {selection} in {selectionSet}. 
+  - If {selection} is a {Field}:
+    - Let {responseKey} be the response key of {selection} (the alias if
+      defined, otherwise the field name).
+    - Let {fieldDetails} be a new unordered map containing {deferUsage}.
+    - Set the entry for {field} on {fieldDetails} to {selection}.
+    - Let {groupForResponseKey} be the list in {groupedFields} for
+      {responseKey}; if no such list exists, create it as an empty list.
+    - Append {fieldDetails} to the {groupForResponseKey}.
+  - If {selection} is a {FragmentSpread}:
+    - Let {fragmentSpreadName} be the name of {selection}.
+    - If {fragmentSpreadName} provides the directive `@defer` and its {if}
+      argument is not {false} and is not a variable in {variableValues} with the
+      value {false}:
+      - Let {deferDirective} be that directive.
+      - If this execution is for a subscription operation, raise a _field
+        error_.
+    - If {deferDirective} is not defined:
+      - If {fragmentSpreadName} is in {visitedFragments}, continue with the next
+        {selection} in {selectionSet}.
+      - Add {fragmentSpreadName} to {visitedFragments}.
+    - Let {fragment} be the Fragment in the current Document whose name is
+      {fragmentSpreadName}.
+    - If no such {fragment} exists, continue with the next {selection} in
+      {selectionSet}.
+    - Let {fragmentType} be the type condition on {fragment}.
+    - If {DoesFragmentTypeApply(objectType, fragmentType)} is {false}, continue
+      with the next {selection} in {selectionSet}.
+    - Let {fragmentSelectionSet} be the top-level selection set of {fragment}.
+    - If {deferDirective} is defined:
+      - Let {label} be the corresponding entry on {deferDirective}.
+      - Let {parentDeferUsage} be {deferUsage}.
+      - Let {fragmentDeferUsage} be an unordered map containing {label} and
+        {parentDeferUsage}.
+      - Append {fragmentDeferUsage} to {newDeferUsages}.
+    - Otherwise, let {fragmentDeferUsage} be {deferUsage}.
+    - Let {fragmentGroupedFieldSet} and {fragmentNewDeferUsages} be the result
+      of calling {CollectFields(objectType, fragmentSelectionSet,
+      variableValues, fragmentDeferUsage, visitedFragments)}.
+    - For each {fragmentGroup} in {fragmentGroupedFieldSet}:
+      - Let {responseKey} be the response key shared by all fields in
+        {fragmentGroup}.
+      - Let {groupForResponseKey} be the list in {groupedFields} for
+        {responseKey}; if no such list exists, create it as an empty list.
+      - Append all items in {fragmentGroup} to {groupForResponseKey}.
+    - Append all items in {fragmentNewDeferUsages} to {newDeferUsages}.
+  - If {selection} is an {InlineFragment}:
+    - Let {fragmentType} be the type condition on {selection}.
+    - If {fragmentType} is not {null} and {DoesFragmentTypeApply(objectType,
+      fragmentType)} is {false}, continue with the next {selection} in
+      {selectionSet}.
+    - Let {fragmentSelectionSet} be the top-level selection set of {selection}.
+    - If {selection} provides the directive `@defer` and its {if} argument is
+      not {false} and is not a variable in {variableValues} with the value
+      {false}:
+      - Let {deferDirective} be that directive.
+      - If this execution is for a subscription operation, raise a _field
+        error_.
+    - If {deferDirective} is defined:
+      - Let {label} be the corresponding entry on {deferDirective}.
+      - Let {parentDeferUsage} be {deferUsage}.
+      - Let {fragmentDeferUsage} be an unordered map containing {label} and
+        {parentDeferUsage}.
+      - Append {fragmentDeferUsage} to {newDeferUsages}.
+    - Otherwise, let {fragmentDeferUsage} be {deferUsage}.
+    - Let {fragmentGroupedFieldSet} and {fragmentNewDeferUsages} be the result
+      of calling {CollectFields(objectType, fragmentSelectionSet,
+      variableValues, fragmentDeferUsage, visitedFragments)}.
+    - For each {fragmentGroup} in {fragmentGroupedFieldSet}:
+      - Let {responseKey} be the response key shared by all fields in
+        {fragmentGroup}.
+      - Let {groupForResponseKey} be the list in {groupedFields} for
+        {responseKey}; if no such list exists, create it as an empty list.
+      - Append all items in {fragmentGroup} to {groupForResponseKey}.
+    - Append all items in {fragmentNewDeferUsages} to {newDeferUsages}.
+- Return {groupedFields} and {newDeferUsages}.
+
+DoesFragmentTypeApply(objectType, fragmentType):
+
+- If {fragmentType} is an Object Type:
+  - If {objectType} and {fragmentType} are the same type, return {true},
+    otherwise return {false}.
+- If {fragmentType} is an Interface Type:
+  - If {objectType} is an implementation of {fragmentType}, return {true}
+    otherwise return {false}.
+- If {fragmentType} is a Union:
+  - If {objectType} is a possible type of {fragmentType}, return {true}
+    otherwise return {false}.
+
+Note: The steps in {CollectFields()} evaluating the `@skip` and `@include`
+directives may be applied in either order since they apply commutatively.
+
+### Field Plan Generation
+
+BuildFieldPlan(originalGroupedFieldSet, parentDeferUsages):
+
+- If {parentDeferUsages} is not provided, initialize it to the empty set.
+- Initialize {groupedFieldSet} to an empty ordered map.
+- Initialize {newGroupedFieldSets} to an empty unordered map.
+- Let {fieldPlan} be an unordered map containing {groupedFieldSet} and
+  {newGroupedFieldSets}.
+- For each {responseKey} and {groupForResponseKey} of {originalGroupedFieldSet}:
+  - Let {filteredDeferUsageSet} be the result of
+    {GetFilteredDeferUsageSet(groupForResponseKey)}.
+  - If {filteredDeferUsageSet} is the equivalent set to {parentDeferUsages}:
+    - Set the entry for {responseKey} in {groupedFieldSet} to
+      {groupForResponseKey}.
+  - Otherwise:
+    - Let {newGroupedFieldSet} be the entry in {newGroupedFieldSets} for any
+      equivalent set to {filteredDeferUsageSet}; if no such map exists, create
+      it as an empty ordered map.
+    - Set the entry for {responseKey} in {newGroupedFieldSet} to
+      {groupForResponseKey}.
+- Return {fieldPlan}.
+ +GetFilteredDeferUsageSet(fieldGroup): + +- Initialize {filteredDeferUsageSet} to the empty set. +- For each {fieldDetails} of {fieldGroup}: + - Let {deferUsage} be the corresponding entry on {fieldDetails}. + - If {deferUsage} is not defined: + - Remove all entries from {filteredDeferUsageSet}. + - Return {filteredDeferUsageSet}. + - Add {deferUsage} to {filteredDeferUsageSet}. +- For each {deferUsage} in {filteredDeferUsageSet}: + - Let {parentDeferUsage} be the corresponding entry on {deferUsage}. + - While {parentDeferUsage} is defined: + - If {parentDeferUsage} is contained by {filteredDeferUsageSet}: + - Remove {deferUsage} from {filteredDeferUsageSet}. + - Continue to the next {deferUsage} in {filteredDeferUsageSet}. + - Reset {parentDeferUsage} to the corresponding entry on {parentDeferUsage}. +- Return {filteredDeferUsageSet}. + ## Executing Fields Each field requested in the grouped field set that is defined on the selected From 4b19cf5ab7775cf099f1a0fb7bc5ccfee9209824 Mon Sep 17 00:00:00 2001 From: Yaacov Rydzinski Date: Thu, 18 Jul 2024 17:28:37 +0300 Subject: [PATCH 19/37] f --- spec/Section 6 -- Execution.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md index 942c24416..b54f12674 100644 --- a/spec/Section 6 -- Execution.md +++ b/spec/Section 6 -- Execution.md @@ -590,7 +590,7 @@ path, deferUsageSet, deferMap): - Return {resultMap} and {incrementalDataRecords}. Note: {resultMap} is ordered by which fields appear first in the operation. This -is explained in greater detail in the Field Collection section above. +is explained in greater detail in the Field Collection section below. 
**Errors and Non-Null Fields** From 313aaa65042ab6542bb21457ff99ff13281dac40 Mon Sep 17 00:00:00 2001 From: Yaacov Rydzinski Date: Thu, 18 Jul 2024 17:30:44 +0300 Subject: [PATCH 20/37] use fieldDetailsList consistently instead of sometimes fieldGroup, for consistency and so as to remove another "Group" term --- spec/Section 6 -- Execution.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md index b54f12674..47cbc84df 100644 --- a/spec/Section 6 -- Execution.md +++ b/spec/Section 6 -- Execution.md @@ -863,10 +863,10 @@ BuildFieldPlan(originalGroupedFieldSet, parentDeferUsages): {groupForResponseKey}. - Return {fieldPlan}. -GetFilteredDeferUsageSet(fieldGroup): +GetFilteredDeferUsageSet(fieldDetailsList): - Initialize {filteredDeferUsageSet} to the empty set. -- For each {fieldDetails} of {fieldGroup}: +- For each {fieldDetails} of {fieldDetailsList}: - Let {deferUsage} be the corresponding entry on {fieldDetails}. - If {deferUsage} is not defined: - Remove all entries from {filteredDeferUsageSet}. From 4571da97d25c12a9adf8feae3ba08a870b40c690 Mon Sep 17 00:00:00 2001 From: Yaacov Rydzinski Date: Thu, 18 Jul 2024 23:12:42 +0300 Subject: [PATCH 21/37] add info re: data structures --- spec/Section 6 -- Execution.md | 38 ++++++++++++++++++++++++++++++++++ 1 file changed, 38 insertions(+) diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md index 47cbc84df..68662c638 100644 --- a/spec/Section 6 -- Execution.md +++ b/spec/Section 6 -- Execution.md @@ -733,6 +733,40 @@ The depth-first-search order of the field groups produced by {CollectFields()} is maintained through execution, ensuring that fields appear in the executed response in a stable and predictable order. +The {CollectFields()} algorithm makes use of the following data types: + +Defer Usage Records are unordered maps representing the usage of a `@defer` +directive within a given operation. 
Defer Usages are "abstract" in that they +include information about the `@defer` directive from the AST of the GraphQL +document. A single Defer Usage may be used to create many "concrete" Delivery +Groups when a `@defer` is included within a list type. + +Defer Usages contain the following information: + +- {label}: the `label` argument provided by the given `@defer` directive, if + any, otherwise {undefined}. +- {parentDeferUsage}: a Defer Usage corresponding to the `@defer` directive + enclosing this `@defer` directive, if any, otherwise {undefined}. + +The {parentDeferUsage} entry is used to build distinct Execution Groups as +discussed within the Field Plan Generation section below. + +Field Details Records are unordered maps containing the following entries: + +- {field}: the Field selection. +- {deferUsage}: the Defer Usage enclosing the selection, if any, otherwise + {undefined}. + +A Grouped Field Set is an ordered map of keys to lists of Field Details. The +keys are the same as that of the response, the alias for the field, if defined, +otherwise the field name. + +The {CollectFields()} algorithm returns: + +- {groupedFieldSet}: the Grouped Field Set for the fields in the selection set. +- {newDeferUsages}: a list of new Defer Usages encountered during this field + collection. + CollectFields(objectType, selectionSet, variableValues, deferUsage, visitedFragments): @@ -840,6 +874,10 @@ DoesFragmentTypeApply(objectType, fragmentType): Note: The steps in {CollectFields()} evaluating the `@skip` and `@include` directives may be applied in either order since they apply commutatively. +Note: When completing a List field, the {CollectFields} algorithm is invoked +with the same arguments for each element of the list. GraphQL Services may +choose to memoize their implementations of {CollectFields}. 
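The response-key grouping that a Grouped Field Set uses can be sketched in Python. This is an illustrative stand-in rather than the specification's data model; entries here hold bare fields instead of full Field Details Records.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Field:
    name: str
    alias: Optional[str] = None

def response_key(field: Field) -> str:
    # The response key is the alias, if defined, otherwise the field name.
    return field.alias if field.alias is not None else field.name

def group_fields(fields: list[Field]) -> dict[str, list[Field]]:
    # A plain dict preserves insertion order, giving the ordered map a
    # Grouped Field Set requires.
    grouped: dict[str, list[Field]] = {}
    for field in fields:
        grouped.setdefault(response_key(field), []).append(field)
    return grouped
```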
+ ### Field Plan Generation BuildFieldPlan(originalGroupedFieldSet, parentDeferUsages): From 3556851e332f779114840825730e63558246f5e7 Mon Sep 17 00:00:00 2001 From: Yaacov Rydzinski Date: Sat, 20 Jul 2024 21:43:11 +0300 Subject: [PATCH 22/37] rename FieldPlan to ExecutionPlan --- spec/Section 6 -- Execution.md | 30 +++++++++++++++--------------- 1 file changed, 15 insertions(+), 15 deletions(-) diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md index 68662c638..6bae848ec 100644 --- a/spec/Section 6 -- Execution.md +++ b/spec/Section 6 -- Execution.md @@ -331,9 +331,9 @@ serial): - If {serial} is not provided, initialize it to {false}. - Let {groupedFieldSet} and {newDeferUsages} be the result of {CollectFields(objectType, selectionSet, variableValues)}. -- Let {fieldPlan} be the result of {BuildFieldPlan(groupedFieldSet)}. +- Let {executionPlan} be the result of {BuildExecutionPlan(groupedFieldSet)}. - Let {data} and {incrementalDataRecords} be the result of - {ExecuteFieldPlan(newDeferUsages, fieldPlan, objectType, initialValue, + {ExecuteExecutionPlan(newDeferUsages, executionPlan, objectType, initialValue, variableValues, serial)}. - Let {errors} be the list of all _field error_ raised while completing {data}. - If {incrementalDataRecords} is empty, return an unordered map containing @@ -490,20 +490,20 @@ BatchIncrementalResults(incrementalResults): of {hasNext} on the final item in the list. - Yield {batchedIncrementalResult}. -## Executing a Field Plan +## Executing an Execution Plan -To execute a field plan, the object value being evaluated and the object type -need to be known, as well as whether the non-deferred grouped field set must be -executed serially, or may be executed in parallel. +To execute a execution plan, the object value being evaluated and the object +type need to be known, as well as whether the non-deferred grouped field set +must be executed serially, or may be executed in parallel. 
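The partition that an execution plan represents can be sketched in Python. This is a hypothetical simplification: defer usages are plain strings, and each entry's filtered defer-usage set is assumed to be precomputed rather than derived from the field details as {GetFilteredDeferUsageSet()} describes.

```python
def build_execution_plan(original_grouped_field_set, parent_defer_usages=frozenset()):
    """Partition a grouped field set into a non-deferred part and a map of
    deferred parts keyed by their filtered defer-usage sets."""
    grouped_field_set = {}
    new_grouped_field_sets = {}
    for response_key, (defer_usages, fields) in original_grouped_field_set.items():
        if frozenset(defer_usages) == parent_defer_usages:
            # Same delivery requirements as the parent: execute immediately.
            grouped_field_set[response_key] = fields
        else:
            # Deferred: group with other fields sharing the same defer usages,
            # using a frozenset so the usage set can serve as a map key.
            key = frozenset(defer_usages)
            new_grouped_field_sets.setdefault(key, {})[response_key] = fields
    return {"grouped_field_set": grouped_field_set,
            "new_grouped_field_sets": new_grouped_field_sets}
```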
-ExecuteFieldPlan(newDeferUsages, fieldPlan, objectType, objectValue, +ExecuteExecutionPlan(newDeferUsages, executionPlan, objectType, objectValue, variableValues, serial, path, deferUsageSet, deferMap): - If {path} is not provided, initialize it to an empty list. - Let {newDeferMap} be the result of {GetNewDeferMap(newDeferUsages, path, deferMap)}. - Let {groupedFieldSet} and {newGroupedFieldSets} be the corresponding entries - on {fieldPlan}. + on {executionPlan}. - Allowing for parallelization, perform the following steps: - Let {data} and {nestedIncrementalDataRecords} be the result of running {ExecuteGroupedFieldSet(groupedFieldSet, objectType, objectValue, @@ -749,7 +749,7 @@ Defer Usages contain the following information: enclosing this `@defer` directive, if any, otherwise {undefined}. The {parentDeferUsage} entry is used to build distinct Execution Groups as -discussed within the Field Plan Generation section below. +discussed within the Execution Plan Generation section below. Field Details Records are unordered maps containing the following entries: @@ -878,14 +878,14 @@ Note: When completing a List field, the {CollectFields} algorithm is invoked with the same arguments for each element of the list. GraphQL Services may choose to memoize their implementations of {CollectFields}. -### Field Plan Generation +### Execution Plan Generation -BuildFieldPlan(originalGroupedFieldSet, parentDeferUsages): +BuildExecutionPlan(originalGroupedFieldSet, parentDeferUsages): - If {parentDeferUsages} is not provided, initialize it to the empty set. - Initialize {groupedFieldSet} to an empty ordered map. - Initialize {newGroupedFieldSets} to an empty unordered map. -- Let {fieldPlan} be an unordered map containing {groupedFieldSet} and +- Let {executionPlan} be an unordered map containing {groupedFieldSet} and {newGroupedFieldSets}. 
- For each {responseKey} and {groupForResponseKey} of {groupedFieldSet}: - Let {filteredDeferUsageSet} be the result of @@ -899,7 +899,7 @@ BuildFieldPlan(originalGroupedFieldSet, parentDeferUsages): empty ordered map. - Set the entry for {responseKey} in {newGroupedFieldSet} to {groupForResponseKey}. -- Return {fieldPlan}. +- Return {executionPlan}. GetFilteredDeferUsageSet(fieldDetailsList): @@ -1051,9 +1051,9 @@ deferUsageSet, deferMap): - Let {objectType} be {ResolveAbstractType(fieldType, result)}. - Let {groupedFieldSet} and {newDeferUsages} be the result of calling {CollectSubfields(objectType, fieldDetailsList, variableValues)}. - - Let {fieldPlan} be the result of {BuildFieldPlan(groupedFieldSet, + - Let {executionPlan} be the result of {BuildExecutionPlan(groupedFieldSet, deferUsageSet)}. - - Return the result of {ExecuteFieldPlan(newDeferUsages, fieldPlan, + - Return the result of {ExecuteExecutionPlan(newDeferUsages, executionPlan, objectType, result, variableValues, false, path, deferUsageSet, deferMap)}. CompleteListValue(innerType, fieldDetailsList, result, variableValues, path, From a950a968d02e437b3c0071e741389b53ee7ebd54 Mon Sep 17 00:00:00 2001 From: Yaacov Rydzinski Date: Wed, 24 Jul 2024 20:31:30 +0300 Subject: [PATCH 23/37] path => label --- spec/Section 6 -- Execution.md | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md index 6bae848ec..3c03f94cb 100644 --- a/spec/Section 6 -- Execution.md +++ b/spec/Section 6 -- Execution.md @@ -814,9 +814,9 @@ visitedFragments): with the next {selection} in {selectionSet}. - Let {fragmentSelectionSet} be the top-level selection set of {fragment}. - If {deferDirective} is defined: - - Let {path} be the corresponding entry on {deferDirective}. + - Let {label} be the corresponding entry on {deferDirective}. - Let {parentDeferUsage} be {deferUsage}. 
- - Let {fragmentDeferUsage} be an unordered map containing {path} and + - Let {fragmentDeferUsage} be an unordered map containing {label} and {parentDeferUsage}. - Otherwise, let {fragmentDeferUsage} be {deferUsage}. - Let {fragmentGroupedFieldSet} and {fragmentNewDeferUsages} be the result @@ -842,9 +842,9 @@ visitedFragments): - If this execution is for a subscription operation, raise a _field error_. - If {deferDirective} is defined: - - Let {path} be the corresponding entry on {deferDirective}. + - Let {label} be the corresponding entry on {deferDirective}. - Let {parentDeferUsage} be {deferUsage}. - - Let {fragmentDeferUsage} be an unordered map containing {path} and + - Let {fragmentDeferUsage} be an unordered map containing {label} and {parentDeferUsage}. - Otherwise, let {fragmentDeferUsage} be {deferUsage}. - Let {fragmentGroupedFieldSet} and {fragmentNewDeferUsages} be the result From afacc0ac85016bba6f0d7173704dd5a1937d2ab3 Mon Sep 17 00:00:00 2001 From: Yaacov Rydzinski Date: Thu, 25 Jul 2024 19:46:19 +0300 Subject: [PATCH 24/37] add missing arguments --- spec/Section 6 -- Execution.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md index 3c03f94cb..955e2344c 100644 --- a/spec/Section 6 -- Execution.md +++ b/spec/Section 6 -- Execution.md @@ -583,7 +583,7 @@ path, deferUsageSet, deferMap): - If {fieldType} is defined: - Let {responseValue} and {fieldIncrementalDataRecords} be the result of {ExecuteField(objectType, objectValue, fieldType, fields, variableValues, - path)}. + path, deferUsageSet, deferMap)}. - Set {responseValue} as the value for {responseKey} in {resultMap}. - Append all items in {fieldIncrementalDataRecords} to {incrementalDataRecords}. 
From 8677044c54ce88d934eac1080a1dcfbb99eab11e Mon Sep 17 00:00:00 2001 From: Yaacov Rydzinski Date: Thu, 25 Jul 2024 19:50:45 +0300 Subject: [PATCH 25/37] add missing return value --- spec/Section 6 -- Execution.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md index 955e2344c..2eb4a9ed2 100644 --- a/spec/Section 6 -- Execution.md +++ b/spec/Section 6 -- Execution.md @@ -1151,7 +1151,7 @@ CollectSubfields(objectType, fieldDetailsList, variableValues): {responseKey}; if no such list exists, create it as an empty list. - Append all fields in {subfields} to {groupForResponseKey}. - Append all defer usages in {subNewDeferUsages} to {newDeferUsages}. -- Return {groupedFieldSet}. +- Return {groupedFieldSet} and {newDeferUsages}. ### Handling Field Errors From 5e9ea96e44f3233ec6f28b40121b4e1a22d78eae Mon Sep 17 00:00:00 2001 From: Yaacov Rydzinski Date: Thu, 25 Jul 2024 20:16:26 +0300 Subject: [PATCH 26/37] fix some renaming around CollectExecutionGroups and ExecuteExecutionGroup --- spec/Section 6 -- Execution.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md index 2eb4a9ed2..ebfff81bd 100644 --- a/spec/Section 6 -- Execution.md +++ b/spec/Section 6 -- Execution.md @@ -510,7 +510,7 @@ variableValues, serial, path, deferUsageSet, deferMap): variableValues, path, deferUsageSet, newDeferMap)} _serially_ if {serial} is {true}, _normally_ (allowing parallelization) otherwise. - Let {incrementalDataRecords} be the result of - {CollectExecutionGroup(objectType, objectValue, variableValues, + {CollectExecutionGroups(objectType, objectValue, variableValues, newGroupedFieldSets, path, newDeferMap)}. - Append all items in {nestedIncrementalDataRecords} to {incrementalDataRecords}. @@ -539,7 +539,7 @@ newGroupedFieldSets, path, deferMap): - Let {deferredFragment} be the entry for {deferUsage} in {deferMap}. 
- Append {deferredFragment} to {deferredFragments}. - Let {incrementalDataRecord} represent the future execution of - {CollectExecutionGroup(groupedFieldSet, objectType, objectValue, + {ExecuteExecutionGroup(groupedFieldSet, objectType, objectValue, variableValues, deferredFragments, path, deferUsageSet, deferMap)}, incrementally completing {deferredFragments} at {path}. - Append {incrementalDataRecord} to {incrementalDataRecords}. @@ -550,7 +550,7 @@ newGroupedFieldSets, path, deferMap): Note: {incrementalDataRecord} can be safely initiated without blocking higher-priority data once any of {deferredFragments} are released as pending. -CollectExecutionGroup(groupedFieldSet, objectType, objectValue, variableValues, +ExecuteExecutionGroup(groupedFieldSet, objectType, objectValue, variableValues, path, deferUsageSet, deferMap): - Let {data} and {incrementalDataRecords} be the result of running From 04d5803c636b09ef587f3284d7156b9e8b9d5ef7 Mon Sep 17 00:00:00 2001 From: Yaacov Rydzinski Date: Mon, 26 Aug 2024 09:02:46 -0400 Subject: [PATCH 27/37] Correct argument name "node" should be "field" within CreateSourceEventStream Co-authored-by: Rob Richard --- spec/Section 6 -- Execution.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md index ebfff81bd..4ab99f810 100644 --- a/spec/Section 6 -- Execution.md +++ b/spec/Section 6 -- Execution.md @@ -258,7 +258,7 @@ CreateSourceEventStream(subscription, schema, variableValues, initialValue): - Let {fieldName} be the name of {field}. Note: This value is unaffected if an alias is used. - Let {argumentValues} be the result of {CoerceArgumentValues(subscriptionType, - node, variableValues)}. + field, variableValues)}. - Let {fieldStream} be the result of running {ResolveFieldEventStream(subscriptionType, initialValue, fieldName, argumentValues)}. 
From 0058d2a26992860e98855f13cd28ce860b576031 Mon Sep 17 00:00:00 2001 From: Yaacov Rydzinski Date: Mon, 26 Aug 2024 16:07:29 +0300 Subject: [PATCH 28/37] clarify errors from ExecuteExecutionPlan accompanying note is a WIP, open to further suggestions as to how to clarify --- spec/Section 6 -- Execution.md | 6 +++++- 1 file changed, 5 insertions(+), 1 deletion(-) diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md index 4ab99f810..535b4e76b 100644 --- a/spec/Section 6 -- Execution.md +++ b/spec/Section 6 -- Execution.md @@ -335,7 +335,8 @@ serial): - Let {data} and {incrementalDataRecords} be the result of {ExecuteExecutionPlan(newDeferUsages, executionPlan, objectType, initialValue, variableValues, serial)}. -- Let {errors} be the list of all _field error_ raised while completing {data}. +- Let {errors} be the list of all _field error_ raised while executing the + execution plan. - If {incrementalDataRecords} is empty, return an unordered map containing {data} and {errors}. - Let {incrementalResults} be the result of {YieldIncrementalResults(data, @@ -344,6 +345,9 @@ serial): - Let {initialResult} be that result. - Return {initialResult} and {BatchIncrementalResults(incrementalResults)}. +Note: {ExecuteExecutionPlan()} does not directly raise field errors from the +incremental portion of the Execution Plan. 
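The tail of {ExecuteRootSelectionSet()}, which returns either a plain response map or an initial result plus a stream of subsequent results, can be sketched in Python. This is a hypothetical simplification: the incremental generator is passed in as a parameter, and the batching performed by {BatchIncrementalResults()} is omitted.

```python
def finish_root_selection_set(data, errors, incremental_data_records,
                              yield_incremental_results):
    # With nothing deferred, the response is a single map of data and errors.
    if not incremental_data_records:
        return {"data": data, "errors": errors}
    # Otherwise the first yielded value is the initial result; the rest of the
    # generator becomes the subsequent-result stream.
    stream = yield_incremental_results(data, errors, incremental_data_records)
    initial_result = next(stream)
    return initial_result, stream
```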
+

### Yielding Incremental Results

The procedure for yielding incremental results is specified by the

From 09e89dd987de3dcdc7ea5b252759e7f0b5ea33d5 Mon Sep 17 00:00:00 2001
From: Yaacov Rydzinski
Date: Mon, 26 Aug 2024 16:36:42 +0300
Subject: [PATCH 29/37] add initial versions of explanations for the algorithms
 in the "Executing an Execution Plan" section

---
 spec/Section 6 -- Execution.md | 38 ++++++++++++++++++++++++++++++----
 1 file changed, 34 insertions(+), 4 deletions(-)

diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md
index 535b4e76b..c414a5401 100644
--- a/spec/Section 6 -- Execution.md
+++ b/spec/Section 6 -- Execution.md
@@ -496,9 +496,12 @@

 ## Executing an Execution Plan

-To execute a execution plan, the object value being evaluated and the object
-type need to be known, as well as whether the non-deferred grouped field set
-must be executed serially, or may be executed in parallel.
+Executing an execution plan consists of two tasks that may be performed in
+parallel. The first task is simply the execution of the non-deferred grouped
+field set. The second task is to use the partitioned grouped field sets within
+the execution plan to generate Execution Groups, i.e. Incremental Data Records,
+where each Incremental Data Record represents the deferred execution of one of
+the partitioned grouped field sets.

 ExecuteExecutionPlan(newDeferUsages, executionPlan, objectType, objectValue,
 variableValues, serial, path, deferUsageSet, deferMap):
@@ -520,6 +523,15 @@ variableValues, serial, path, deferUsageSet, deferMap):
   {incrementalDataRecords}.
 - Return {data} and {incrementalDataRecords}.

+Because `@defer` directives may be nested within list types, a map is required
+to associate a Defer Usage record as recorded within Field Details Records and
+an actual Deferred Fragment so that any additional Execution Groups may be
+associated with the correct Deferred Fragment.
The {GetNewDeferMap()} algorithm +creates that map. Given a list of new Defer Usages, the actual path at which the +fields they defer are spread, and an initial map, it returns a new map +containing all entries in the provided defer map, as well as new entries for +each new Defer Usage. + GetNewDeferMap(newDeferUsages, path, deferMap): - If {newDeferUsages} is empty, return {deferMap}: @@ -533,6 +545,11 @@ GetNewDeferMap(newDeferUsages, path, deferMap): - Set the entry for {deferUsage} in {newDeferMap} to {newDeferredFragment}. - Return {newDeferMap}. +The {CollectExecutionGroups()} algorithm is responsible for creating the +Execution Groups, i.e. Incremental Data Records, for each partitioned grouped +field set. It uses the map created by {GetNewDeferMap()} algorithm to associate +each Execution Group with the correct Deferred Fragment. + CollectExecutionGroups(objectType, objectValue, variableValues, newGroupedFieldSets, path, deferMap): @@ -554,6 +571,9 @@ newGroupedFieldSets, path, deferMap): Note: {incrementalDataRecord} can be safely initiated without blocking higher-priority data once any of {deferredFragments} are released as pending. +The {ExecuteExecutionGroup()} algorithm is responsible for actually executing +the deferred grouped field set and collecting the result and any raised errors. + ExecuteExecutionGroup(groupedFieldSet, objectType, objectValue, variableValues, path, deferUsageSet, deferMap): @@ -561,7 +581,8 @@ path, deferUsageSet, deferMap): {ExecuteGroupedFieldSet(groupedFieldSet, objectType, objectValue, variableValues, path, deferUsageSet, deferMap)} _normally_ (allowing parallelization). -- Let {errors} be the list of all _field error_ raised while completing {data}. +- Let {errors} be the list of all _field error_ raised while executing + {ExecuteGroupedFieldSet()}. - Return an unordered map containing {data}, {errors}, and {incrementalDataRecords}. @@ -884,6 +905,15 @@ choose to memoize their implementations of {CollectFields}. 
### Execution Plan Generation

+A grouped field set may contain fields that have been deferred by the use of the
+`@defer` directive on their enclosing fragments. Given a grouped field set,
+{BuildExecutionPlan()} generates an execution plan by partitioning the grouped
+field set as specified by the operation's use of `@defer` and the requirements
+of the incremental response format. An execution plan consists of a single new
+grouped field set containing the fields that do not require deferral, and a map
+of new grouped field sets, where the keys represent the sets of Defer Usages
+containing those fields.
+
 BuildExecutionPlan(originalGroupedFieldSet, parentDeferUsages):

 - If {parentDeferUsages} is not provided, initialize it to the empty set.

From 689a6b4a92d3d58ccc2e484948f7be2dc9d381d0 Mon Sep 17 00:00:00 2001
From: Yaacov Rydzinski
Date: Thu, 5 Sep 2024 23:22:17 +0300
Subject: [PATCH 30/37] add subheadings

---
 spec/Section 6 -- Execution.md | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md
index c414a5401..65560e943 100644
--- a/spec/Section 6 -- Execution.md
+++ b/spec/Section 6 -- Execution.md
@@ -523,6 +523,8 @@ variableValues, serial, path, deferUsageSet, deferMap):
   {incrementalDataRecords}.
 - Return {data} and {incrementalDataRecords}.
 
+### Mapping Deferred Fragments to Delivery Groups
+
 Because `@defer` directives may be nested within list types, a map is required
 to associate a Defer Usage record as recorded within Field Details Records and
 an actual Deferred Fragment so that any additional Execution Groups may be
@@ -545,6 +547,8 @@ GetNewDeferMap(newDeferUsages, path, deferMap):
   - Set the entry for {deferUsage} in {newDeferMap} to {newDeferredFragment}.
 - Return {newDeferMap}.
 
+### Collecting Execution Groups
+
 The {CollectExecutionGroups()} algorithm is responsible for creating the
 Execution Groups, i.e. Incremental Data Records, for each partitioned grouped
 field set.
It uses the map created by {GetNewDeferMap()} algorithm to associate
each Execution Group with the correct Deferred Fragment.

From 517362f6c0b8c95043e2bb2ffa3ca2fd1bc02a35 Mon Sep 17 00:00:00 2001
From: Yaacov Rydzinski
Date: Fri, 6 Sep 2024 09:51:21 +0300
Subject: [PATCH 31/37] adjust heading

---
 spec/Section 6 -- Execution.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md
index 65560e943..da8bb45e2 100644
--- a/spec/Section 6 -- Execution.md
+++ b/spec/Section 6 -- Execution.md
@@ -523,7 +523,7 @@ variableValues, serial, path, deferUsageSet, deferMap):
   {incrementalDataRecords}.
 - Return {data} and {incrementalDataRecords}.
 
-### Mapping Deferred Fragments to Delivery Groups
+### Mapping @defer Directives to Delivery Groups
 
 Because `@defer` directives may be nested within list types, a map is required
 to associate a Defer Usage record as recorded within Field Details Records and

From a67def46d857955f38225b85e0b77804e826f174 Mon Sep 17 00:00:00 2001
From: Yaacov Rydzinski
Date: Fri, 6 Sep 2024 09:52:54 +0300
Subject: [PATCH 32/37] Initialize graph

---
 spec/Section 6 -- Execution.md | 1 +
 1 file changed, 1 insertion(+)

diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md
index da8bb45e2..a362aec6a 100644
--- a/spec/Section 6 -- Execution.md
+++ b/spec/Section 6 -- Execution.md
@@ -406,6 +406,7 @@ YieldIncrementalResults(data, errors, incrementalDataRecords):
 
 GraphFromRecords(incrementalDataRecords, graph):
 
+- If {graph} is not provided, initialize it to an empty graph.
 - Let {newGraph} be a new directed acyclic graph containing all of the nodes
   and edges in {graph}.
- For each {incrementalDataRecord} of {incrementalDataRecords}: From 359441e7cc8a4895cabce412b8e324f5cce3208a Mon Sep 17 00:00:00 2001 From: Yaacov Rydzinski Date: Fri, 6 Sep 2024 10:09:34 +0300 Subject: [PATCH 33/37] adjust YieldSubsequentResults algorithm per review --- spec/Section 6 -- Execution.md | 37 +++++++++++++++++----------------- 1 file changed, 18 insertions(+), 19 deletions(-) diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md index a362aec6a..498b88da5 100644 --- a/spec/Section 6 -- Execution.md +++ b/spec/Section 6 -- Execution.md @@ -356,23 +356,21 @@ The procedure for yielding incremental results is specified by the YieldIncrementalResults(data, errors, incrementalDataRecords): - Let {graph} be the result of {GraphFromRecords(incrementalDataRecords)}. -- Let {pendingResults} be the result of {GetNonEmptyNewPending(graph)}. -- Update {graph} to the subgraph rooted at nodes in {pendingResults}. +- Let {rootNodes} be the result of {GetNewRootNodes(graph)}. +- Update {graph} to the subgraph rooted at nodes in {rootNodes}. - Yield the result of {GetInitialResult(data, errors, pendingResults)}. - For each completed child Pending Incremental Data node of a root node in {graph}: - Let {incrementalDataRecord} be the Pending Incremental Data for that node; let {result} be the corresponding completed result. - If {data} on {result} is {null}: - - Initialize {completed} to an empty list. - Let {parents} be the parent nodes of {executionGroup}. - Initialize {completed} to an empty list. - - For each {pendingResult} of {parents}: + - For each {node} of {parents}: - Append {GetCompletedEntry(parent, errors)} to {completed}. - - Remove {pendingResult} and all of its descendant nodes from {graph}, - except for any descendant Incremental Data Record nodes with other - parents. - - Let {hasNext} be {false}, if {graph} is empty. 
+ - Remove {node} and all of its descendant nodes from {graph}, except for + any descendant Incremental Data Record nodes with other parents. + - Let {hasNext} be {false} if {graph} is empty; otherwise, {true}. - Yield an unordered map containing {completed} and {hasNext}. - Continue to the next completed Pending Incremental Data node. - Replace {node} in {graph} with a new node corresponding to the Completed @@ -386,20 +384,21 @@ YieldIncrementalResults(data, errors, incrementalDataRecords): - If {completedIncrementalDataNodes} is empty, continue to the next completed Pending Incremental Data Node. - Initialize {incremental} to an empty list. - - For each {node} of {completedIncrementalDataNodes}: - - Let {incrementalDataRecord} be the corresponding record for {node}. + - For each {completedIncrementalDataNode} of {completedIncrementalDataNodes}: + - Let {incrementalDataRecord} be the corresponding record for + {completedIncrementalDataNode}. - Append {GetIncrementalEntry(incrementalDataRecord, graph)} to {incremental}. - Remove {node} from {graph}. - Initialize {completed} to an empty list. - - For each {pendingResult} of {completedDeferredFragments}: - - Append {GetCompletedEntry(pendingResult)} to {completed}. - - Remove {pendingResult} from {graph}, promoting its child Deferred Fragment - nodes to root nodes. - - Let {newPendingResults} be the result of {GetNonEmptyNewPending(graph)}. - - Add all nodes in {newPendingResults} to {pendingResults}. - - Update {graph} to the subgraph rooted at nodes in {pendingResults}. - - Let {pending} be the result of {GetPendingEntry(newPendingResults)}. + - For each {completedDeferredFragment} of {completedDeferredFragments}: + - Append {GetCompletedEntry(completedDeferredFragment)} to {completed}. + - Remove {completedDeferredFragment} from {graph}, promoting its child + Deferred Fragment nodes to root nodes. + - Let {newRootNodes} be the result of {GetNewRootNodes(graph)}. + - Add all nodes in {newRootNodes} to {rootNodes}. 
+ - Update {graph} to the subgraph rooted at nodes in {rootNodes}. + - Let {pending} be the result of {GetPendingEntry(newRootNodes)}. - Yield the result of {GetIncrementalResult(graph, incremental, completed, pending)}. - Complete this incremental result stream. @@ -417,7 +416,7 @@ GraphFromRecords(incrementalDataRecords, graph): to {newGraph}, or the {parent} is not defined. - Return {newGraph}. -GetNonEmptyNewPending(graph): +GetNewRootNodes(graph): - Initialize {newPendingResults} to the empty set. - Initialize {rootNodes} to the set of root nodes in {graph}. From 8292813b0ff464da019a4f6c98469b9af0030423 Mon Sep 17 00:00:00 2001 From: Yaacov Rydzinski Date: Fri, 6 Sep 2024 10:15:39 +0300 Subject: [PATCH 34/37] reuse GetIncrementalResult() for the error case --- spec/Section 6 -- Execution.md | 9 ++++----- 1 file changed, 4 insertions(+), 5 deletions(-) diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md index 498b88da5..baa378b59 100644 --- a/spec/Section 6 -- Execution.md +++ b/spec/Section 6 -- Execution.md @@ -370,8 +370,7 @@ YieldIncrementalResults(data, errors, incrementalDataRecords): - Append {GetCompletedEntry(parent, errors)} to {completed}. - Remove {node} and all of its descendant nodes from {graph}, except for any descendant Incremental Data Record nodes with other parents. - - Let {hasNext} be {false} if {graph} is empty; otherwise, {true}. - - Yield an unordered map containing {completed} and {hasNext}. + - Yield the result of {GetIncrementalResult(graph, completed)}. - Continue to the next completed Pending Incremental Data node. - Replace {node} in {graph} with a new node corresponding to the Completed Incremental Data for {result}. @@ -444,15 +443,15 @@ GetPendingEntry(pendingResults): - Append {pendingEntry} to {pending}. - Return {pending}. 
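The formatting performed by {GetPendingEntry()} can be sketched in Python. This is a hypothetical illustration: nodes are plain dicts, and since the specification only requires identifiers to be unique, a simple counter stands in for id generation.

```python
import itertools

_next_id = itertools.count(1)

def get_pending_entry(new_root_nodes):
    # Produce one `pending` entry per newly reported delivery group, each
    # carrying a unique id plus the group's path and label (the label may be
    # None when the `@defer` directive provided none).
    pending = []
    for node in new_root_nodes:
        pending.append({
            "id": str(next(_next_id)),
            "path": node["path"],
            "label": node["label"],
        })
    return pending
```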
-GetIncrementalResult(graph, incremental, completed, pending): +GetIncrementalResult(graph, completed, incremental, pending): - Let {hasNext} be {false} if {graph} is empty, otherwise, {true}. - Let {incrementalResult} be an unordered map containing {hasNext}. -- If {incremental} is not empty: +- If {incremental} is provided and not empty: - Set the corresponding entry on {incrementalResult} to {incremental}. - If {completed} is not empty: - Set the corresponding entry on {incrementalResult} to {completed}. -- If {pending} is not empty: +- If {pending} is provided and not empty: - Set the corresponding entry on {incrementalResult} to {pending}. - Return {incrementalResult}. From e933424f2c683e6fd8ce8da5de04a4714bac658c Mon Sep 17 00:00:00 2001 From: Yaacov Rydzinski Date: Fri, 6 Sep 2024 10:52:10 +0300 Subject: [PATCH 35/37] add descriptions and fix bug within GetNewRootNodes, it needs the old root nodes before the graph was adjusted --- spec/Section 6 -- Execution.md | 91 ++++++++++++++++++++++++++++------ 1 file changed, 77 insertions(+), 14 deletions(-) diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md index baa378b59..4d57455fe 100644 --- a/spec/Section 6 -- Execution.md +++ b/spec/Section 6 -- Execution.md @@ -351,14 +351,50 @@ incremental portion of the Execution Plan. ### Yielding Incremental Results The procedure for yielding incremental results is specified by the -{YieldIncrementalResults()} algorithm. +{YieldIncrementalResults()} algorithm. The incremental state is stored within a +graph, with root nodes representing the currently pending delivery groups. 
+ +For example, given the following operation: + +```graphql example +{ + ...SlowFragment @defer + fastField +} + +fragment SlowFragment on Query { + ...SlowestFragment @defer + slowField +} + +fragment SlowestFragment on Query { + slowestField +} +``` + +A valid GraphQL executor deferring `SlowFragment` must include a `pending` entry +to that effect within the initial result, while the `pending` entry for +`SlowestFragment` should be delivered together with `SlowFragment`. + +Delivery group nodes may have three different types of child nodes: + +1. Other delivery group nodes, i.e. the node representing `SlowFragment` should + have a child node representing `SlowestFragment`. +2. Pending incremental data nodes, i.e. the node for `SlowFragment` should + initially have a node for `slowField`. +3. Completed incremental data nodes, i.e. when `slowField` is completed, the + pending incremental data node for `slowField` should be replaced with a node + representing the completed data. + +The {YieldIncrementalResults()} algorithm is responsible for updating the graph +as it yields the incremental results. YieldIncrementalResults(data, errors, incrementalDataRecords): - Let {graph} be the result of {GraphFromRecords(incrementalDataRecords)}. - Let {rootNodes} be the result of {GetNewRootNodes(graph)}. - Update {graph} to the subgraph rooted at nodes in {rootNodes}. -- Yield the result of {GetInitialResult(data, errors, pendingResults)}. +- Yield the result of {GetInitialResult(data, errors, rootNodes)}. - For each completed child Pending Incremental Data node of a root node in {graph}: - Let {incrementalDataRecord} be the Pending Incremental Data for that node; @@ -370,7 +406,7 @@ YieldIncrementalResults(data, errors, incrementalDataRecords): - Append {GetCompletedEntry(parent, errors)} to {completed}. - Remove {node} and all of its descendant nodes from {graph}, except for any descendant Incremental Data Record nodes with other parents. 
- - Yield the result of {GetIncrementalResult(graph, completed)}. + - Yield the result of {GetSubsequentResult(graph, completed)}. - Continue to the next completed Pending Incremental Data node. - Replace {node} in {graph} with a new node corresponding to the Completed Incremental Data for {result}. @@ -394,11 +430,11 @@ YieldIncrementalResults(data, errors, incrementalDataRecords): - Append {GetCompletedEntry(completedDeferredFragment)} to {completed}. - Remove {completedDeferredFragment} from {graph}, promoting its child Deferred Fragment nodes to root nodes. - - Let {newRootNodes} be the result of {GetNewRootNodes(graph)}. + - Let {newRootNodes} be the result of {GetNewRootNodes(graph, rootNodes)}. - Add all nodes in {newRootNodes} to {rootNodes}. - Update {graph} to the subgraph rooted at nodes in {rootNodes}. - Let {pending} be the result of {GetPendingEntry(newRootNodes)}. - - Yield the result of {GetIncrementalResult(graph, incremental, completed, + - Yield the result of {GetSubsequentResult(graph, incremental, completed, pending)}. - Complete this incremental result stream. @@ -415,17 +451,28 @@ GraphFromRecords(incrementalDataRecords, graph): to {newGraph}, or the {parent} is not defined. - Return {newGraph}. -GetNewRootNodes(graph): +The {GetNewRootNodes()} algorithm is responsible for determining the new root +nodes that must be reported as pending. Any delivery groups without any +execution groups should not be reported as pending, and any child delivery +groups for these "empty" delivery groups should be reported as pending in their +stead. + +GetNewRootNodes(graph, oldRootNodes): -- Initialize {newPendingResults} to the empty set. +- Initialize {newRootNodes} to the empty set. - Initialize {rootNodes} to the set of root nodes in {graph}. - For each {rootNode} of {rootNodes}: - If {rootNode} has no children Pending Incremental Data nodes: - Let {children} be the set of child Deferred Fragment nodes of {rootNode}. 
- Add each of the nodes in {children} to {rootNodes}. - Continue to the next {rootNode} of {rootNodes}. - - Add {rootNode} to {newPendingResults}. -- Return {newPendingResults}. + - If {oldRootNodes} does not contain {rootNode}, add {rootNode} to + {newRootNodes}. +- Return {newRootNodes}. + +Formatting of the initial result is defined by the {GetInitialResult()} +algorithm. It will only be called when there is an incremental result stream, +and so `hasNext` will always be set to {true}. GetInitialResult(data, errors, pendingResults): @@ -433,17 +480,26 @@ GetInitialResult(data, errors, pendingResults): - Let {hasNext} be {true}. - Return an unordered map containing {data}, {errors}, {pending}, and {hasNext}. -GetPendingEntry(pendingResults): +Formatting the `pending` of initial and subsequentResults is defined by the +{GetPendingEntry()} algorithm. Given a set of new root nodes added to the graph, +{GetPendingEntry()} returns a list of formatted `pending` entries. + +GetPendingEntry(newRootNodes): - Initialize {pending} to an empty list. -- For each {pendingResult} of {pendingResult}: - - Let {id} be a unique identifier for {pendingResult}. - - Let {path} and {label} be the corresponding entries on {pendingResult}. +- For each {newRootNode} of {newRootNodes}: + - Let {id} be a unique identifier for {newRootNode}. + - Let {path} and {label} be the corresponding entries on {newRootNode}. - Let {pendingEntry} be an unordered map containing {id}, {path}, and {label}. - Append {pendingEntry} to {pending}. - Return {pending}. -GetIncrementalResult(graph, completed, incremental, pending): +Formatting of subsequent incremental results is defined by the +{GetSubsequentResult()} algorithm. Given the current graph, and any `completed`, +`incremental`, and `pending` entries, it produces an appropriately formatted +subsequent incremental response. + +GetSubsequentResult(graph, completed, incremental, pending): - Let {hasNext} be {false} if {graph} is empty, otherwise, {true}. 
 - Let {incrementalResult} be an unordered map containing {hasNext}.
@@ -455,6 +511,10 @@ GetIncrementalResult(graph, completed, incremental, pending):
 - Set the corresponding entry on {incrementalResult} to {pending}.
 - Return {incrementalResult}.
 
+Formatting of `incremental` entries is defined by the {GetIncrementalEntry()}
+algorithm. Execution groups are tagged with the `id` and `subPath` combination
+optimized to produce the shortest `subPath`.
+
 GetIncrementalEntry(incrementalDataRecord, graph):
 
 - Let {deferredFragments} be the Deferred Fragments incrementally completed by
@@ -470,6 +530,9 @@ GetIncrementalEntry(incrementalDataRecord, graph):
 - Let {id} be the unique identifier for {bestDeferredFragment}.
 - Return an unordered map containing {id}, {subPath}, {data}, and {errors}.
 
+Formatting of completed incremental results is defined by the
+{GetCompletedEntry()} algorithm.
+
 GetCompletedEntry(pendingResult, errors):
 
 - Let {id} be the unique identifier for {pendingResult}.

From d9014e4fbd1c482449bdfce57b053a7866659201 Mon Sep 17 00:00:00 2001
From: Yaacov Rydzinski
Date: Fri, 6 Sep 2024 11:06:01 +0300
Subject: [PATCH 36/37] finish addressing review comments

---
 spec/Section 6 -- Execution.md | 44 ++++++++++++++++++++++++++--------
 1 file changed, 34 insertions(+), 10 deletions(-)

diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md
index 4d57455fe..ed3403a6f 100644
--- a/spec/Section 6 -- Execution.md
+++ b/spec/Section 6 -- Execution.md
@@ -378,16 +378,18 @@ to that effect within the initial result, while the `pending` entry for
 
 Delivery group nodes may have three different types of child nodes:
 
-1. Other delivery group nodes, i.e. the node representing `SlowFragment` should
+1. Child Delivery Group nodes, i.e. the node representing `SlowFragment` should
    have a child node representing `SlowestFragment`.
-2. Pending incremental data nodes, i.e. the node for `SlowFragment` should
+2. Pending Incremental Data nodes, i.e. 
the node for `SlowFragment` should initially have a node for `slowField`. -3. Completed incremental data nodes, i.e. when `slowField` is completed, the +3. Completed Incremental Data nodes, i.e. when `slowField` is completed, the pending incremental data node for `slowField` should be replaced with a node representing the completed data. The {YieldIncrementalResults()} algorithm is responsible for updating the graph -as it yields the incremental results. +as it yields the incremental results. When a delivery group contains only +completed incremental data nodes, the group is removed from the graph as it is +delivered. YieldIncrementalResults(data, errors, incrementalDataRecords): @@ -413,7 +415,7 @@ YieldIncrementalResults(data, errors, incrementalDataRecords): - Let {resultIncrementalDataRecords} be {incrementalDataRecords} on {result}. - Update {graph} to {GraphFromRecords(resultIncrementalDataRecords, graph)}. - Let {completedDeferredFragments} be the set of root nodes in {graph} without - any child Pending Data nodes. + any child Pending Incremental Data nodes. - Let {completedIncrementalDataNodes} be the set of completed Incremental Data nodes that are children of {completedDeferredFragments}. - If {completedIncrementalDataNodes} is empty, continue to the next completed @@ -438,17 +440,39 @@ YieldIncrementalResults(data, errors, incrementalDataRecords): pending)}. - Complete this incremental result stream. +New Incremental Data Records are added to the {graph} by the +{GraphFromRecords()} algorithm as Pending Incremental Data nodes directed from +the Deferred Fragments they incrementally complete. + GraphFromRecords(incrementalDataRecords, graph): - If {graph} is not provided, initialize to an empty graph. - Let {newGraph} be a new directed acyclic graph containing all of the nodes and edges in {graph}. 
- For each {incrementalDataRecord} of {incrementalDataRecords}: - - Add {incrementalDataRecord} to {newGraph} as a new Pending Data node - directed from the {pendingResults} that it completes, adding each of - {pendingResults} to {newGraph} as a new node directed from its {parent}, - recursively adding each {parent} until {incrementalDataRecord} is connected - to {newGraph}, or the {parent} is not defined. + - Let {deferredFragments} be the Deferred Fragments incrementally completed by + {incrementalDataRecord}. + - For each {deferredFragment} of {deferredFragments}: + - Reset {newGraph} to the result of + {GraphWithDeferredFragmentRecord(deferredFragment, newGraph)}. + - Add {incrementalDataRecord} to {newGraph} as a new Pending Incremental Data + node directed from the {deferredFragments} that it completes. +- Return {newGraph}. + +The {GraphWithDeferredFragmentRecord()} algorithm returns a new graph containing +the provided Deferred Fragment Record, recursively adding its parent Deferred +Fragment nodes. + +GraphWithDeferredFragmentRecord(deferredFragment, graph): + +- If {graph} contains a Deferred Fragment node representing {deferredFragment}, + return {graph}. +- Let {parent} be the parent Deferred Fragment of {deferredFragment}. +- If {parent} is defined, let {newGraph} be the result of + {GraphWithDeferredFragmentRecord(parent, graph)}; otherwise, let {newGraph} be + a new directed acyclic graph containing all of the nodes and edges in {graph}. +- Add {deferredFragment} to {newGraph} as a new Deferred Fragment node directed + from {parent}, if defined. - Return {newGraph}. 
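
The {GraphFromRecords()} and {GraphWithDeferredFragmentRecord()} algorithms
above can be sketched in TypeScript. This is a minimal illustration, not part
of the specification: the `DeferredFragment` and `IncrementalDataRecord` shapes
are assumptions, and the graph is mutated in place for brevity where the spec
describes returning a new graph.

```typescript
// Illustrative record shapes; the specification does not prescribe these.
interface DeferredFragment {
  label: string;
  parent?: DeferredFragment;
}

interface IncrementalDataRecord {
  // The Deferred Fragments this record incrementally completes.
  deferredFragments: DeferredFragment[];
}

// The graph: Deferred Fragment nodes with parent-to-child edges, plus
// Pending Incremental Data nodes attached to the fragments they complete.
interface Graph {
  fragments: Set<DeferredFragment>;
  children: Map<DeferredFragment, Set<DeferredFragment>>;
  pendingData: Map<DeferredFragment, Set<IncrementalDataRecord>>;
}

function emptyGraph(): Graph {
  return { fragments: new Set(), children: new Map(), pendingData: new Map() };
}

// GraphWithDeferredFragmentRecord: ensure the fragment is present,
// recursively adding its parent chain first so edges run parent -> child.
function graphWithDeferredFragmentRecord(
  fragment: DeferredFragment,
  graph: Graph,
): Graph {
  if (graph.fragments.has(fragment)) return graph;
  const parent = fragment.parent;
  if (parent !== undefined) graphWithDeferredFragmentRecord(parent, graph);
  graph.fragments.add(fragment);
  if (parent !== undefined) {
    if (!graph.children.has(parent)) graph.children.set(parent, new Set());
    graph.children.get(parent)!.add(fragment);
  }
  return graph;
}

// GraphFromRecords: add each record as a Pending Incremental Data node
// directed from every Deferred Fragment it completes.
function graphFromRecords(
  records: IncrementalDataRecord[],
  graph: Graph = emptyGraph(),
): Graph {
  for (const record of records) {
    for (const fragment of record.deferredFragments) {
      graphWithDeferredFragmentRecord(fragment, graph);
      let pending = graph.pendingData.get(fragment);
      if (pending === undefined) {
        pending = new Set();
        graph.pendingData.set(fragment, pending);
      }
      pending.add(record);
    }
  }
  return graph;
}
```

With the `SlowFragment`/`SlowestFragment` example earlier in this section,
adding a record that completes `SlowestFragment` also adds the `SlowFragment`
node, with an edge directed from parent to child.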
 The {GetNewRootNodes()} algorithm is responsible for determining the new root

From b8e518726923e1efe3602225f9bc295dc56e0dac Mon Sep 17 00:00:00 2001
From: Yaacov Rydzinski
Date: Fri, 6 Sep 2024 11:09:44 +0300
Subject: [PATCH 37/37] add missing word

---
 spec/Section 6 -- Execution.md | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md
index ed3403a6f..e43a395cd 100644
--- a/spec/Section 6 -- Execution.md
+++ b/spec/Section 6 -- Execution.md
@@ -504,9 +504,9 @@ GetInitialResult(data, errors, pendingResults):
 - Let {hasNext} be {true}.
 - Return an unordered map containing {data}, {errors}, {pending}, and {hasNext}.
 
-Formatting the `pending` of initial and subsequentResults is defined by the
-{GetPendingEntry()} algorithm. Given a set of new root nodes added to the graph,
-{GetPendingEntry()} returns a list of formatted `pending` entries.
+Formatting the `pending` entries of initial and subsequent results is defined by
+the {GetPendingEntry()} algorithm. Given a set of new root nodes added to the
+graph, {GetPendingEntry()} returns a list of formatted `pending` entries.
 
 GetPendingEntry(newRootNodes):
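
The {GetPendingEntry()} formatting step touched by this last patch can be
sketched in TypeScript. The `RootNode` and `PendingEntry` shapes here are
illustrative assumptions; the spec only requires that each entry carry a unique
`id` together with the node's `path` and `label`, and identifier assignment is
implementation-defined.

```typescript
// Illustrative shapes; the specification does not prescribe these.
interface RootNode {
  path: Array<string | number>;
  label?: string;
}

interface PendingEntry {
  id: string;
  path: Array<string | number>;
  label?: string;
}

// One simple identifier scheme: a counter keyed by node identity, so the
// same root node always formats with the same `id`.
const assignedIds = new Map<RootNode, string>();
let nextId = 0;

function idFor(node: RootNode): string {
  let id = assignedIds.get(node);
  if (id === undefined) {
    id = String(nextId++);
    assignedIds.set(node, id);
  }
  return id;
}

// GetPendingEntry: one formatted `pending` entry per newly added root node.
function getPendingEntry(newRootNodes: Iterable<RootNode>): PendingEntry[] {
  const pending: PendingEntry[] = [];
  for (const node of newRootNodes) {
    pending.push({ id: idFor(node), path: node.path, label: node.label });
  }
  return pending;
}
```

The resulting list is what the executor attaches under the `pending` key of the
initial result and of any subsequent result that introduces new root nodes.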