1. Introduction
This paper proposes a self-contained design for a Standard C++ framework for managing asynchronous execution on generic execution resources. It is based on the ideas in A Unified Executors Proposal for C++ and its companion papers.
1.1. Motivation
Today, C++ software is increasingly asynchronous and parallel, a trend that is likely only to continue. Asynchrony and parallelism appear everywhere, from processor hardware interfaces, to networking, to file I/O, to GUIs, to accelerators. Every C++ domain and every platform needs to deal with asynchrony and parallelism, from scientific computing to video games to financial services, from the smallest mobile devices to your laptop to GPUs in the world’s fastest supercomputer.
While the C++ Standard Library has a rich set of concurrency primitives (std::mutex, std::atomic, std::future, and so forth), it lacks a standard vocabulary for describing, composing, and launching asynchronous work generically across execution resources.
This paper proposes a Standard C++ model for asynchrony, based around three key abstractions: schedulers, senders, and receivers, and a set of customizable asynchronous algorithms.
1.2. Priorities
- Be composable and generic, allowing users to write code that can be used with many different types of execution resources.
- Encapsulate common asynchronous patterns in customizable and reusable algorithms, so users don’t have to invent things themselves.
- Make it easy to be correct by construction.
- Support the diversity of execution resources and execution agents, because not all execution agents are created equal; some are less capable than others, but not less important.
- Allow everything to be customized by an execution resource, including transfer to other execution resources, but don’t require that execution resources customize everything.
- Care about all reasonable use cases, domains and platforms.
- Errors must be propagated, but error handling must not present a burden.
- Support cancellation, which is not an error.
- Have clear and concise answers for where things execute.
- Be able to manage and terminate the lifetimes of objects asynchronously.
1.3. Examples: End User
In this section we demonstrate the end-user experience of asynchronous programming directly with the sender algorithms presented in this paper. See § 4.20 User-facing sender factories, § 4.21 User-facing sender adaptors, and § 4.22 User-facing sender consumers for short explanations of the algorithms used in these code examples.
1.3.1. Hello world
using namespace std::execution;

scheduler auto sch = thread_pool.scheduler();                               // 1

sender auto begin = schedule(sch);                                          // 2
sender auto hi = then(begin, []{                                            // 3
    std::cout << "Hello world! Have an int.";                               // 3
    return 13;                                                              // 3
});                                                                         // 3
sender auto add_42 = then(hi, [](int arg) { return arg + 42; });            // 4

auto [i] = this_thread::sync_wait(add_42).value();                          // 5
This example demonstrates the basics of schedulers, senders, and receivers:
- First we need to get a scheduler from somewhere, such as a thread pool. A scheduler is a lightweight handle to an execution resource.
- To start a chain of work on a scheduler, we call § 4.20.1 execution::schedule, which returns a sender that completes on the scheduler. A sender describes asynchronous work and sends a signal (value, error, or stopped) to some recipient(s) when that work completes.
- We use sender algorithms to produce senders and compose asynchronous work. § 4.21.2 execution::then is a sender adaptor that takes an input sender and a std::invocable, and calls the std::invocable on the signal sent by the input sender. The sender returned by then sends the result of that invocation. Here, the input sender came from schedule, so it sends no values, which is why our lambda takes no arguments. It returns an int, which is sent to the next step in the chain.
- Now, we add another operation to the chain, again using § 4.21.2 execution::then. This time we are sent a value: the int produced by the previous step. We add 42 to it, and the result is sent onwards.
- Finally, we’re ready to submit the entire asynchronous pipeline and wait for its completion. Everything up until this point has been completely asynchronous; the work may not have even started yet. To ensure the work has started and then block pending its completion, we use § 4.22.2 this_thread::sync_wait, which either returns a std::optional<std::tuple<...>> holding the values sent by the last sender in the chain, returns an empty std::optional if the work was stopped, or throws if the work completed with an error.
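Note that, because then and the other sender adaptors in this paper are pipeable, the same pipeline can be written more compactly with operator| composition, which later examples use heavily. A minimal equivalent sketch:

using namespace std::execution;

auto [i] = this_thread::sync_wait(
    schedule(sch)
      | then([] { std::cout << "Hello world! Have an int."; return 13; })
      | then([](int arg) { return arg + 42; })
  ).value();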
1.3.2. Asynchronous inclusive scan
using namespace std::execution;

sender auto async_inclusive_scan(scheduler auto sch,                        // 2
                                 std::span<const double> input,             // 1
                                 std::span<double> output,                  // 1
                                 double init,                               // 1
                                 std::size_t tile_count)                    // 3
{
  std::size_t const tile_size = (input.size() + tile_count - 1) / tile_count;

  std::vector<double> partials(tile_count + 1);                             // 4
  partials[0] = init;                                                       // 4

  return transfer_just(sch, std::move(partials))                            // 5
       | bulk(tile_count,                                                   // 6
           [=](std::size_t i, std::vector<double>& partials) {              // 7
             auto start = i * tile_size;                                    // 8
             auto end   = std::min(input.size(), (i + 1) * tile_size);      // 8
             partials[i + 1] = *--std::inclusive_scan(begin(input) + start, // 9
                                                      begin(input) + end,   // 9
                                                      begin(output) + start); // 9
           })                                                               // 10
       | then(                                                              // 11
           [](std::vector<double>&& partials) {
             std::inclusive_scan(begin(partials), end(partials),            // 12
                                 begin(partials));                          // 12
             return std::move(partials);                                    // 13
           })
       | bulk(tile_count,                                                   // 14
           [=](std::size_t i, std::vector<double>& partials) {              // 14
             auto start = i * tile_size;                                    // 14
             auto end   = std::min(input.size(), (i + 1) * tile_size);      // 14
             std::for_each(begin(output) + start, begin(output) + end,      // 14
               [&](double& e) { e = partials[i] + e; }                      // 14
             );
           })
       | then(                                                              // 15
           [=](std::vector<double>&& partials) {                            // 15
             return output;                                                 // 15
           });                                                              // 15
}
This example builds an asynchronous computation of an inclusive scan:
- It scans a sequence of doubles, reading from the std::span<const double> input and writing the result into the std::span<double> output.
- It takes a scheduler, which specifies what execution resource the scan should be launched on.
- It also takes a tile_count parameter that controls the number of execution agents that will be spawned to perform the scan.
- First we need to allocate temporary storage needed for the algorithm, which we’ll do with a std::vector, partials. We need one double of temporary storage per tile, plus one extra element for the initial value.
- Next we’ll create our initial sender with § 4.20.3 execution::transfer_just. This sender will send the temporary storage, which we’ve moved into the sender. The sender has a completion scheduler of sch, meaning the work attached to it will execute on sch.
- Senders and sender adaptors support composition via operator|, similar to C++ ranges. We use operator| here to attach a § 4.21.9 execution::bulk operation of shape tile_count to our initial sender.
- § 4.21.9 execution::bulk spawns tile_count execution agents, and each agent calls the given std::invocable with an index i in the range [0, tile_count) as well as the value sent by the input sender (here, a reference to partials).
- We start by computing the start and end of the range of input and output elements that this agent is responsible for, based on our agent index.
- Then we do a sequential std::inclusive_scan over our tile of the input, writing the scanned values into the corresponding portion of output and storing the sum of the tile (the last scanned value) into its slot in partials.
- After all computation in that initial § 4.21.9 execution::bulk pass has completed, every one of the spawned execution agents will have written the sum of its elements into its slot in partials.
- Now we need to scan all of the values in partials. This must happen on a single execution agent after the entire first pass has completed, so we use § 4.21.2 execution::then.
- § 4.21.2 execution::then takes an input sender and a std::invocable, and calls the std::invocable with the value sent by the input sender; inside the std::invocable we perform a sequential std::inclusive_scan over partials.
- Then we return partials, which the next phase of the computation needs.
- Finally we do another § 4.21.9 execution::bulk of the same shape as before. In this § 4.21.9 execution::bulk, each agent uses the scanned values in partials to add the proper offset (the sum of all preceding tiles) to each of the output elements it is responsible for.
- async_inclusive_scan returns a sender that sends the output std::span<double>. A consumer of the algorithm can attach further asynchronous work to that sender; at no point does async_inclusive_scan itself block waiting for the computation to finish.
1.3.3. Asynchronous dynamically-sized read
using namespace std::execution;

// Declaration only: reads into the sent buffer and sends the number of bytes read.
sender_of<set_value_t(std::size_t)> auto async_read(                        // 1
    sender_of<set_value_t(std::span<std::byte>)> auto buffer,               // 1
    auto handle);                                                           // 1

struct dynamic_buffer {                                                     // 3
  std::unique_ptr<std::byte[]> data;                                        // 3
  std::size_t size;                                                         // 3
};                                                                          // 3

sender_of<set_value_t(dynamic_buffer)> auto async_read_array(auto handle) { // 2
  return just(dynamic_buffer{})                                             // 4
       | let_value([handle](dynamic_buffer& buf) {                          // 5
           return just(std::as_writable_bytes(std::span(&buf.size, 1)))     // 6
                | async_read(handle)                                        // 7
                | then(                                                     // 8
                    [&buf](std::size_t bytes_read) {                        // 9
                      assert(bytes_read == sizeof(buf.size));               // 10
                      buf.data = std::make_unique<std::byte[]>(buf.size);   // 11
                      return std::span(buf.data.get(), buf.size);           // 12
                    })
                | async_read(handle)                                        // 13
                | then(
                    [&buf](std::size_t bytes_read) {
                      assert(bytes_read == buf.size);                       // 14
                      return std::move(buf);                                // 15
                    });
         });
}
This example demonstrates a common asynchronous I/O pattern - reading a payload of a dynamic size by first reading the size, then reading the number of bytes specified by the size:
- async_read is a pipeable sender adaptor: it takes a sender that sends a buffer in the form of a std::span<std::byte> and a handle to an I/O context that will perform the read, and it returns a sender that sends the number of bytes read into that std::span.
- async_read_array takes an I/O handle, reads a size from it, and then reads a buffer of that many bytes. It returns a sender of dynamic_buffer.
- dynamic_buffer is an aggregate that owns a std::unique_ptr<std::byte[]> and stores the size of the allocation.
- The first thing we do inside of async_read_array is create a sender that will send a new, empty dynamic_buffer object, using execution::just. We can attach more work to the pipeline using operator| composition.
- We need the lifetime of this dynamic_buffer object to last for the entire pipeline, so we use let_value, which takes an input sender and a std::invocable that must itself return a sender. let_value sends the value from the input sender to the std::invocable; crucially, the lifetime of the sent object lasts until the sender returned by the std::invocable completes.
- Inside of the let_value std::invocable, we have the rest of our logic. First, we want to initiate an async_read of the buffer size. To do that, we need to send a std::span pointing to buf.size, which we do with execution::just.
- We chain the async_read onto that execution::just sender with operator|.
- Next, we pipe a std::invocable that will be invoked after the async_read completes, using § 4.21.2 execution::then.
- That std::invocable gets sent the number of bytes read.
- We need to check that the number of bytes read is what we expected.
- Now that we have read the size of the data, we can allocate storage for it.
- We return a std::span<std::byte> referring to the newly allocated storage from the std::invocable; it is sent onward to the next step in the pipeline.
- And that recipient will be another async_read, which reads the payload into the storage we just allocated.
- Once the data has been read, in another § 4.21.2 execution::then, we confirm that we read the right number of bytes.
- Finally, we move out of and return our dynamic_buffer object. It will be sent by the sender returned by async_read_array, and the caller can attach further work that uses the data in the buffer.
1.4. Asynchronous Windows socket recv 
To get a better feel for how this interface might be used by low-level operations, see this example implementation of a cancellable asynchronous Windows socket recv() operation, written in terms of the proposed sender/receiver interface.
struct operation_base : WSAOVERLAPPED {
    using completion_fn = void(operation_base* op, DWORD bytesTransferred, int errorCode) noexcept;
    // Assume IOCP event loop will call this when this OVERLAPPED structure is dequeued.
    completion_fn* completed;
};

template <typename Receiver>
struct recv_op : operation_base {
    recv_op(SOCKET s, void* data, size_t len, Receiver r)
    : receiver(std::move(r))
    , sock(s) {
        this->Internal = 0;
        this->InternalHigh = 0;
        this->Offset = 0;
        this->OffsetHigh = 0;
        this->hEvent = NULL;
        this->completed = &recv_op::on_complete;
        buffer.len = len;
        buffer.buf = static_cast<CHAR*>(data);
    }

    friend void tag_invoke(std::execution::start_t, recv_op& self) noexcept {
        // Avoid even calling WSARecv() if operation already cancelled
        auto st = std::execution::get_stop_token(std::get_env(self.receiver));
        if (st.stop_requested()) {
            std::execution::set_stopped(std::move(self.receiver));
            return;
        }

        // Store and cache result here in case it changes during execution
        const bool stopPossible = st.stop_possible();
        if (!stopPossible) {
            self.ready.store(true, std::memory_order_relaxed);
        }

        // Launch the operation
        DWORD bytesTransferred = 0;
        DWORD flags = 0;
        int result = WSARecv(self.sock, &self.buffer, 1, &bytesTransferred, &flags,
                             static_cast<WSAOVERLAPPED*>(&self), NULL);
        if (result == SOCKET_ERROR) {
            int errorCode = WSAGetLastError();
            if (errorCode != WSA_IO_PENDING) {
                if (errorCode == WSA_OPERATION_ABORTED) {
                    std::execution::set_stopped(std::move(self.receiver));
                } else {
                    std::execution::set_error(std::move(self.receiver),
                                              std::error_code(errorCode, std::system_category()));
                }
                return;
            }
        } else {
            // Completed synchronously (assuming FILE_SKIP_COMPLETION_PORT_ON_SUCCESS has been set)
            std::execution::set_value(std::move(self.receiver), bytesTransferred);
            return;
        }

        // If we get here then operation has launched successfully and will complete asynchronously.
        // May be completing concurrently on another thread already.
        if (stopPossible) {
            // Register the stop callback
            self.stopCallback.emplace(std::move(st), cancel_cb{self});

            // Mark as 'completed'; if the I/O has already completed on another thread,
            // we must deliver the result here.
            if (self.ready.load(std::memory_order_acquire) ||
                self.ready.exchange(true, std::memory_order_acq_rel)) {
                // Already completed on another thread
                self.stopCallback.reset();

                BOOL ok = WSAGetOverlappedResult(self.sock, (WSAOVERLAPPED*)&self,
                                                 &bytesTransferred, FALSE, &flags);
                if (ok) {
                    std::execution::set_value(std::move(self.receiver), bytesTransferred);
                } else {
                    int errorCode = WSAGetLastError();
                    std::execution::set_error(std::move(self.receiver),
                                              std::error_code(errorCode, std::system_category()));
                }
            }
        }
    }

    struct cancel_cb {
        recv_op& op;
        void operator()() noexcept {
            CancelIoEx((HANDLE)op.sock, (OVERLAPPED*)(WSAOVERLAPPED*)&op);
        }
    };

    static void on_complete(operation_base* op, DWORD bytesTransferred, int errorCode) noexcept {
        recv_op& self = *static_cast<recv_op*>(op);
        if (self.ready.load(std::memory_order_acquire) ||
            self.ready.exchange(true, std::memory_order_acq_rel)) {
            // Unsubscribe any stop-callback so we know that CancelIoEx() is not accessing 'op'
            // any more
            self.stopCallback.reset();

            if (errorCode == 0) {
                std::execution::set_value(std::move(self.receiver), bytesTransferred);
            } else {
                std::execution::set_error(std::move(self.receiver),
                                          std::error_code(errorCode, std::system_category()));
            }
        }
    }

    Receiver receiver;
    SOCKET sock;
    WSABUF buffer;
    std::optional<typename stop_callback_type_t<Receiver>::template callback_type<cancel_cb>> stopCallback;
    std::atomic<bool> ready{false};
};

struct recv_sender {
    using is_sender = void;
    SOCKET sock;
    void* data;
    size_t len;

    template <typename Receiver>
    friend recv_op<Receiver> tag_invoke(std::execution::connect_t, const recv_sender& s, Receiver r) {
        return recv_op<Receiver>{s.sock, s.data, s.len, std::move(r)};
    }
};

recv_sender async_recv(SOCKET s, void* data, size_t len) {
    return recv_sender{s, data, len};
}
1.4.1. More end-user examples
1.4.1.1. Sudoku solver
This example comes from Kirk Shoop, who ported an example from TBB’s documentation to sender/receiver in his fork of the libunifex repo. It is a Sudoku solver that uses a configurable number of threads to explore the search space for solutions.
The sender/receiver-based Sudoku solver can be found here. Some things that are worth noting about Kirk’s solution:
- 
Although it schedules asynchronous work onto a thread pool, and each unit of work will schedule more work, its use of structured concurrency patterns makes reference counting unnecessary. The solution does not make use of shared_ptr.
- 
     In addition to eliminating the need for reference counting, the use of structured concurrency makes it easy to ensure that resources are cleaned up on all code paths. In contrast, the TBB example that inspired this one leaks memory. 
For comparison, the TBB-based Sudoku solver can be found here.
1.4.1.2. File copy
This example also comes from Kirk Shoop. It uses sender/receiver to recursively copy the files in a directory tree, and it demonstrates how sender/receiver can be used to do I/O, using a scheduler that schedules work on Linux’s io_uring.
As with the Sudoku example, this example obviates the need for reference counting by employing structured concurrency. It uses iteration with an upper limit to avoid having too many open file handles.
You can find the example here.
1.4.1.3. Echo server
Dietmar Kuehl has a hobby project that implements networking APIs on top of sender/receiver. He recently implemented an echo server as a demo. His echo server code can be found here.
Below, I show the part of the echo server code that handles a single connection. This code is executed for each client that connects to the echo server. In a loop, it reads input from a socket and echoes the input back to the same socket. All of this, including the loop, is implemented with generic async algorithms.
outstanding.start(
    EX::repeat_effect_until(
        EX::let_value(
            NN::async_read_some(ptr->d_socket,
                                context.scheduler(),
                                NN::buffer(ptr->d_buffer))
              | EX::then([ptr](::std::size_t n) {
                    ::std::cout << "read='" << ::std::string_view(ptr->d_buffer, n) << "'\n";
                    ptr->d_done = n == 0;
                    return n;
                }),
            [&context, ptr](::std::size_t n) {
                return NN::async_write_some(ptr->d_socket,
                                            context.scheduler(),
                                            NN::buffer(ptr->d_buffer, n));
            })
          | EX::then([](auto&&...) {}),
        [owner = ::std::move(owner)] { return owner->d_done; }));
In this code, EX is a namespace alias for std::execution and NN is an alias for the namespace of Dietmar’s networking library.
This is a good example of seamless composition of async I/O functions with non-I/O operations. And by composing the senders in this structured way, all the state for the composite operation -- the read, the write, and the loop itself -- is kept together in a single object.
1.5. Examples: Algorithms
In this section we show a few simple sender/receiver-based algorithm implementations.
1.5.1. then 
namespace exec = std::execution;

template <class R, class F>
class _then_receiver : exec::receiver_adaptor<_then_receiver<R, F>, R> {
    friend exec::receiver_adaptor<_then_receiver, R>;
    F f_;

    // Customize set_value by invoking the callable and passing the result to the inner receiver
    template <class... As>
    void set_value(As&&... as) && noexcept try {
        exec::set_value(std::move(*this).base(), std::invoke((F&&) f_, (As&&) as...));
    } catch (...) {
        exec::set_error(std::move(*this).base(), std::current_exception());
    }

public:
    _then_receiver(R r, F f)
      : exec::receiver_adaptor<_then_receiver, R>{std::move(r)}
      , f_(std::move(f)) {}
};

template <exec::sender S, class F>
struct _then_sender {
    using is_sender = void;

    S s_;
    F f_;

    template <class... Args>
    using _set_value_t =
        exec::completion_signatures<exec::set_value_t(std::invoke_result_t<F, Args...>)>;

    // Compute the completion signatures
    template <class Env>
    friend auto tag_invoke(exec::get_completion_signatures_t, _then_sender&&, Env)
        -> exec::make_completion_signatures<
               S, Env,
               exec::completion_signatures<exec::set_error_t(std::exception_ptr)>,
               _set_value_t>;

    // Connect:
    template <exec::receiver R>
    friend auto tag_invoke(exec::connect_t, _then_sender&& self, R r)
        -> exec::connect_result_t<S, _then_receiver<R, F>> {
        return exec::connect((S&&) self.s_, _then_receiver<R, F>{(R&&) r, (F&&) self.f_});
    }

    friend decltype(auto) tag_invoke(exec::get_env_t, const _then_sender& self) noexcept {
        return exec::get_env(self.s_);
    }
};

template <exec::sender S, class F>
exec::sender auto then(S s, F f) {
    return _then_sender<S, F>{(S&&) s, (F&&) f};
}
This code builds a then algorithm: a sender adaptor that invokes a function with the value(s) sent by the input sender and sends the result of that invocation onward.
In detail, it does the following:
- Defines a receiver, _then_receiver, in terms of execution::receiver_adaptor. It wraps the downstream receiver and the callable, and:
  - Customizes set_value to invoke the callable with the incoming values and pass the result to the wrapped receiver.
  - Delivers any exception thrown by the callable to the wrapped receiver’s error channel.
  - Leaves all other signals (set_error, set_stopped) and environment queries to be forwarded unchanged by execution::receiver_adaptor.
- Defines a sender, _then_sender, that aggregates the input sender and the invocable. It defines a tag_invoke customization of std::execution::connect that wraps the incoming receiver in a _then_receiver and connects it to the input sender, and a tag_invoke customization of get_completion_signatures that computes this sender’s completion signatures from those of the input sender.
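As a quick usage sketch (not part of the example itself), the then defined above composes with the proposed just factory and sync_wait consumer:

namespace exec = std::execution;

// Sends 6, multiplies it by 7 in the adapted sender, and waits for the result.
auto [val] = std::this_thread::sync_wait(then(exec::just(6), [](int i) { return i * 7; })).value();
// val == 42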
1.5.2. retry 
using namespace std;
namespace exec = execution;

template <class From, class To>
concept _decays_to = same_as<decay_t<From>, To>;

// _conv needed so we can emplace construct non-movable types into
// a std::optional.
template <invocable F>
    requires is_nothrow_move_constructible_v<F>
struct _conv {
    F f_;
    explicit _conv(F f) noexcept : f_((F&&) f) {}
    operator invoke_result_t<F>() && {
        return ((F&&) f_)();
    }
};

template <class S, class R>
struct _op;

// pass through all customizations except set_error, which retries the operation.
template <class S, class R>
struct _retry_receiver : exec::receiver_adaptor<_retry_receiver<S, R>> {
    _op<S, R>* o_;

    R&& base() && noexcept { return (R&&) o_->r_; }
    const R& base() const & noexcept { return o_->r_; }

    explicit _retry_receiver(_op<S, R>* o) : o_(o) {}

    void set_error(auto&&) && noexcept {
        o_->_retry(); // This causes the op to be retried
    }
};

// Hold the nested operation state in an optional so we can
// re-construct and re-start it if the operation fails.
template <class S, class R>
struct _op {
    S s_;
    R r_;
    optional<exec::connect_result_t<S&, _retry_receiver<S, R>>> o_;

    _op(S s, R r) : s_((S&&) s), r_((R&&) r), o_{_connect()} {}
    _op(_op&&) = delete;

    auto _connect() noexcept {
        return _conv{[this] {
            return exec::connect(s_, _retry_receiver<S, R>{this});
        }};
    }

    void _retry() noexcept try {
        o_.emplace(_connect()); // potentially-throwing
        exec::start(*o_);
    } catch (...) {
        exec::set_error((R&&) r_, std::current_exception());
    }

    friend void tag_invoke(exec::start_t, _op& o) noexcept {
        exec::start(*o.o_);
    }
};

template <class S>
struct _retry_sender {
    using is_sender = void;

    S s_;
    explicit _retry_sender(S s) : s_((S&&) s) {}

    template <class... Ts>
    using _value_t = exec::completion_signatures<exec::set_value_t(Ts...)>;
    template <class>
    using _error_t = exec::completion_signatures<>;

    // Declare the signatures with which this sender can complete
    template <class Env>
    friend auto tag_invoke(exec::get_completion_signatures_t, const _retry_sender&, Env)
        -> exec::make_completion_signatures<
               S&, Env,
               exec::completion_signatures<exec::set_error_t(std::exception_ptr)>,
               _value_t, _error_t>;

    template <exec::receiver R>
    friend _op<S, R> tag_invoke(exec::connect_t, _retry_sender&& self, R r) {
        return {(S&&) self.s_, (R&&) r};
    }

    friend decltype(auto) tag_invoke(exec::get_env_t, const _retry_sender& self) noexcept {
        return exec::get_env(self.s_);
    }
};

template <exec::sender S>
exec::sender auto retry(S s) {
    return _retry_sender{(S&&) s};
}
The retry algorithm takes a multi-shot sender and retries it on error: values and stopped signals are passed through unchanged, while an error causes the input sender to be reconnected and restarted.
This example does the following:
- Defines a _conv utility that makes it possible to emplace-construct non-movable types (such as operation states) into a std::optional.
- Defines a _retry_receiver that passes through all customizations except set_error, which instead calls _retry() on the operation state.
- Defines an operation state, _op, that aggregates the input sender and receiver, and declares storage for the nested operation state in an optional. The nested operation state results from connecting the input sender to a _retry_receiver.
- Starting the operation state dispatches to start on the nested operation state.
- The _retry() function reinitializes the inner operation state by connecting the input sender to a fresh _retry_receiver.
- After reinitializing the inner operation state, _retry() calls start on it, restarting the operation. If reconnecting throws, the exception is delivered to the outer receiver’s error channel.
- Defines a _retry_sender whose connect customization constructs an _op from the wrapped sender and the given receiver.
- _retry_sender customizes get_completion_signatures to declare that it completes with the input sender’s value signatures and with std::exception_ptr in the error channel; the input sender’s own error types are not forwarded, because an error triggers a retry instead.
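As a usage sketch, assuming flaky_request() is some multi-shot sender that may complete with an error (the name is illustrative only), the retry defined above composes like any other sender adaptor:

// Keeps re-connecting and re-starting flaky_request()'s sender on error;
// values and stopped signals pass through unchanged.
auto result = std::this_thread::sync_wait(retry(flaky_request()));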
1.6. Examples: Schedulers
In this section we look at some schedulers of varying complexity.
1.6.1. Inline scheduler
class inline_scheduler {
    template <class R>
    struct _op {
        [[no_unique_address]] R rec_;

        friend void tag_invoke(std::execution::start_t, _op& op) noexcept {
            std::execution::set_value((R&&) op.rec_);
        }
    };

    struct _env {
        template <class Tag>
        friend inline_scheduler
        tag_invoke(std::execution::get_completion_scheduler_t<Tag>, _env) noexcept {
            return {};
        }
    };

    struct _sender {
        using is_sender = void;
        using completion_signatures =
            std::execution::completion_signatures<std::execution::set_value_t()>;

        template <class R>
        friend auto tag_invoke(std::execution::connect_t, _sender, R&& rec)
            noexcept(std::is_nothrow_constructible_v<std::remove_cvref_t<R>, R>)
            -> _op<std::remove_cvref_t<R>> {
            return {(R&&) rec};
        }

        friend _env tag_invoke(std::execution::get_env_t, _sender) noexcept {
            return {};
        }
    };

    friend _sender tag_invoke(std::execution::schedule_t, const inline_scheduler&) noexcept {
        return {};
    }

public:
    inline_scheduler() = default;
    bool operator==(const inline_scheduler&) const noexcept = default;
};
The inline scheduler is a trivial scheduler that completes immediately and synchronously on the thread that calls std::execution::start on the operation state produced by its sender; starting the operation is equivalent to calling set_value on the receiver directly.
Although not a particularly useful scheduler, it serves to illustrate the basics of implementing one. The inline_scheduler:
- Customizes execution::schedule to return an instance of the sender type _sender.
- The _sender type models the sender concept and declares, via execution::completion_signatures, that it completes only with set_value() -- it never completes with set_error or set_stopped.
- The _sender type customizes execution::connect to accept a receiver of any type and return an instance of the operation state type _op, which holds the receiver by value.
- The operation state customizes std::execution::start to call std::execution::set_value on the receiver, completing inline on the calling thread.
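A brief usage sketch: because the inline scheduler completes synchronously, the work below runs entirely on the calling thread before sync_wait returns.

inline_scheduler sched;
auto [v] = std::this_thread::sync_wait(
    std::execution::then(std::execution::schedule(sched), [] { return 42; })
  ).value();
// v == 42, computed inline on this thread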
1.6.2. Single thread scheduler
This example shows how to create a scheduler for an execution resource that consists of a single thread. It is implemented in terms of a lower-level execution resource proposed in this paper, std::execution::run_loop.
class single_thread_context {
    std::execution::run_loop loop_;
    std::thread thread_;

public:
    single_thread_context()
      : loop_()
      , thread_([this] { loop_.run(); }) {}

    ~single_thread_context() {
        loop_.finish();
        thread_.join();
    }

    auto get_scheduler() noexcept {
        return loop_.get_scheduler();
    }

    std::thread::id get_thread_id() const noexcept {
        return thread_.get_id();
    }
};
The single_thread_context owns a run_loop and a std::thread that drives it by calling loop_.run(); its destructor tells the loop to finish and joins the thread.
The interesting bits are in the std::execution::run_loop resource itself: get_scheduler() returns a scheduler whose schedule operation enqueues work onto the loop, and everything enqueued there runs on the context’s single thread.
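A short usage sketch, assuming the proposed schedule, then, and sync_wait algorithms: all work scheduled through the context's scheduler runs on its single worker thread.

single_thread_context ctx;
std::execution::scheduler auto sch = ctx.get_scheduler();

auto [id] = std::this_thread::sync_wait(
    std::execution::schedule(sch)
      | std::execution::then([] { return std::this_thread::get_id(); })
  ).value();
assert(id == ctx.get_thread_id());   // the work ran on the context's thread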
1.7. Examples: Server theme
In this section we look at some examples of how one would use senders to implement an HTTP server. The examples ignore the low-level details of the HTTP server and look at how senders can be combined to achieve the goals of the project.
General application context:
- server application that processes images
- execution resources:
  - 1 dedicated thread for network I/O
  - N worker threads used for CPU-intensive work
  - M threads for auxiliary I/O
  - optional GPU context that may be used on some types of servers
- all parts of the application can be asynchronous
- no locks shall be used in user code
1.7.1. Composability with execution::let_*
Example context:
- we are looking at the flow of processing an HTTP request and sending back the response
- show how one can break the (slightly complex) flow into steps with the execution::let_* algorithms
- different phases of processing HTTP requests are broken down into separate concerns
- each part of the processing might use different execution resources (details not shown in this example)
- error handling is generic, regardless of which component fails; we always send the right response to the clients
Goals:
- show how one can break more complex flows into steps with the let_* functions
- exemplify the use of let_value, let_error, let_stopped, and just
namespace ex = std::execution;

// Returns a sender that yields an http_request object for an incoming request
ex::sender auto schedule_request_start(read_requests_ctx ctx) {...}
// Sends a response back to the client; yields a void signal on success
ex::sender auto send_response(const http_response& resp) {...}
// Validate that the HTTP request is well-formed; forwards the request on success
ex::sender auto validate_request(const http_request& req) {...}

// Handle the request; main application logic
ex::sender auto handle_request(const http_request& req) {
    //...
    return ex::just(http_response{200, result_body});
}

// Transforms server errors into responses to be sent to the client
ex::sender auto error_to_response(std::exception_ptr err) {
    try {
        std::rethrow_exception(err);
    } catch (const std::invalid_argument& e) {
        return ex::just(http_response{404, e.what()});
    } catch (const std::exception& e) {
        return ex::just(http_response{500, e.what()});
    } catch (...) {
        return ex::just(http_response{500, "Unknown server error"});
    }
}

// Transforms cancellation of the server into responses to be sent to the client
ex::sender auto stopped_to_response() {
    return ex::just(http_response{503, "Service temporarily unavailable"});
}

//...

// The whole flow for transforming incoming requests into responses
ex::sender auto snd =
    // get a sender when a new request comes
    schedule_request_start(the_read_requests_ctx)
    // make sure the request is valid; throw if not
    | ex::let_value(validate_request)
    // process the request in a function that may be using a different execution resource
    | ex::let_value(handle_request)
    // If there are errors transform them into proper responses
    | ex::let_error(error_to_response)
    // If the flow is cancelled, send back a proper response
    | ex::let_stopped(stopped_to_response)
    // write the result back to the client
    | ex::let_value(send_response)
    // done
    ;

// execute the whole flow asynchronously
ex::start_detached(std::move(snd));
The example shows how one can separate out the concerns for interpreting requests, validating requests, running the main logic for handling the request, generating error responses, handling cancellation and sending the response back to the client.
They are all different phases in the application, and they are joined together with the let_value, let_error, and let_stopped sender adaptors.
All our functions return senders, so each phase composes directly with the next; no phase needs to know how the others are implemented or where they run.
Also, because the whole flow is expressed with senders, error handling and cancellation are handled generically: whichever component fails or is stopped, let_error and let_stopped transform the signal into a proper response for the client.
1.7.2. Moving between execution resources with execution::on and execution::transfer
   Example context:
- 
     reading data from the socket before processing the request 
- 
     reading of the data is done on the I/O context 
- 
     no processing of the data needs to be done on the I/O context 
Goals:
- 
     show how one can change the execution resource 
- 
     exemplify the use of on and transfer 
namespace ex = std::execution;

size_t legacy_read_from_socket(int sock, char* buffer, size_t buffer_len) {...}
void process_read_data(const char* read_data, size_t read_len) {...}
//...

// A sender that just calls the legacy read function
auto snd_read = ex::just(sock, buf, buf_len) | ex::then(legacy_read_from_socket);

// The entire flow
auto snd =
    // start by reading data on the I/O thread
    ex::on(io_sched, std::move(snd_read))
    // do the processing on the worker threads pool
    | ex::transfer(work_sched)
    // process the incoming data (on worker threads)
    | ex::then([buf](int read_len) { process_read_data(buf, read_len); })
    // done
    ;

// execute the whole flow asynchronously
ex::start_detached(std::move(snd));
The example assumes that we need to wrap some legacy code for reading from sockets, and handle the switching between execution resources. (This style of reading from a socket may not be the most efficient one, but it works for our purposes.) For performance reasons, the reading from the socket needs to be done on the I/O thread, and all the processing needs to happen on a work-specific execution resource (i.e., the worker thread pool).
Calling ex::on(io_sched, std::move(snd_read)) ensures that the legacy read is performed on the I/O scheduler.
The completion signal will be issued on the I/O execution resource, so we have to move it to the worker thread pool.
This is achieved with the help of the ex::transfer algorithm: everything downstream of the transfer point runs on work_sched.
The reader should notice the difference between on and transfer: on dictates where the given sender itself executes, while transfer changes where the work that follows it executes.
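A minimal sketch of that difference, reusing the io_sched and work_sched names from the example above:

auto a = ex::on(io_sched, std::move(snd_read));   // snd_read itself runs on io_sched
auto b = ex::schedule(io_sched)                   // start on io_sched...
       | ex::transfer(work_sched)                 // ...then hop to work_sched
       | ex::then([] { /* runs on work_sched */ });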
1.8. What this proposal is not
This paper is not a patch on top of A Unified Executors Proposal for C++; we are not asking to update the existing paper, we are asking to retire it in favor of this paper, which is already self-contained; any example code within this paper can be written in Standard C++, without the need to standardize any further facilities.
This paper is not an alternative design to A Unified Executors Proposal for C++; rather, we have taken the design in the current executors paper, and applied targeted fixes to allow it to fulfill the promises of the sender/receiver model, as well as provide all the facilities we consider essential when writing user code using standard execution concepts; we have also applied the guidance of removing one-way executors from the paper entirely, and instead provided an algorithm based around senders that serves the same purpose.
1.9. Design changes from P0443
- The executor concept has been removed; its functionality is subsumed by schedulers and senders.
- Properties are not included in this paper. We see them as a possible future extension, if the committee gets more comfortable with them.
- Senders now advertise what scheduler, if any, their evaluation will complete on.
- The places of execution of user code in P0443 weren’t precisely defined, whereas they are in this paper. See § 4.5 Senders can propagate completion schedulers.
- P0443 did not propose a suite of sender algorithms necessary for writing sender code; this paper does. See § 4.20 User-facing sender factories, § 4.21 User-facing sender adaptors, and § 4.22 User-facing sender consumers.
- P0443 did not specify the semantics of the variously qualified overloads of connect; this paper does.
- This paper extends the sender traits/typed sender design to support typed senders whose value/error types depend on type information provided late via the receiver.
- Support for untyped senders is dropped; the typed_sender concept is renamed sender, and sender_traits is replaced with completion_signatures_of_t.
- Specific type erasure facilities are omitted, as per LEWG direction. Type erasure facilities can be built on top of this proposal, as discussed in § 5.9 Ranges-style CPOs vs tag_invoke.
- A specific thread pool implementation is omitted, as per LEWG direction.
- Some additional utilities are added:
  - run_loop, an execution resource that runs enqueued work on the thread that drives it;
  - receiver_adaptor, a utility that simplifies writing receivers by forwarding unhandled signals and queries to a wrapped receiver;
  - completion_signatures and make_completion_signatures, utilities for declaring and transforming the signatures with which a sender may complete.
1.10. Prior art
This proposal builds upon and learns from years of prior art with asynchronous and parallel programming frameworks in C++. In this section, we discuss async abstractions that have previously been suggested as a possible basis for asynchronous algorithms and why they fall short.
1.10.1. Futures
A future is a handle to work that has already been scheduled for execution. It is one end of a communication channel; the other end is a promise, used to receive the result from the concurrent operation and to communicate it to the future.
Futures, as traditionally realized, require the dynamic allocation and management of a shared state, synchronization, and typically type-erasure of work and continuation. Many of these costs are inherent in the nature of "future" as a handle to work that is already scheduled for execution. These expenses rule out the future abstraction for many uses and makes it a poor choice for a basis of a generic mechanism.
1.10.2. Coroutines
C++20 coroutines are frequently suggested as a basis for asynchronous algorithms. It’s fair to ask why, if we added coroutines to C++, are we suggesting the addition of a library-based abstraction for asynchrony. Certainly, coroutines come with huge syntactic and semantic advantages over the alternatives.
Although coroutines are lighter weight than futures, coroutines suffer many of the same problems. Since they typically start suspended, they can avoid synchronizing the chaining of dependent work. However in many cases, coroutine frames require an unavoidable dynamic allocation and indirect function calls. This is done to hide the layout of the coroutine frame from the C++ type system, which in turn makes possible the separate compilation of coroutines and certain compiler optimizations, such as optimization of the coroutine frame size.
Those advantages come at a cost, though. Because of the dynamic allocation of coroutine frames, coroutines in embedded or heterogeneous environments, which often lack support for dynamic allocation, require great attention to detail. And the allocations and indirections tend to complicate the job of the inliner, often resulting in sub-optimal codegen.
The coroutine language feature mitigates these shortcomings somewhat with the HALO optimization Halo: coroutine Heap Allocation eLision Optimization: the joint response, which leverages existing compiler optimizations such as allocation elision and devirtualization to inline the coroutine, completely eliminating the runtime overhead. However, HALO requires a sophisticated compiler, and a fair number of stars need to align for the optimization to kick in. In our experience, more often than not in real-world code today’s compilers are not able to inline the coroutine, resulting in allocations and indirections in the generated code.
In a suite of generic async algorithms that are expected to be callable from hot code paths, the extra allocations and indirections are a deal-breaker. It is for these reasons that we consider coroutines a poor choice for a basis of all standard async.
1.10.3. Callbacks
Callbacks are the oldest, simplest, most powerful, and most efficient mechanism for creating chains of work, but suffer problems of their own. Callbacks must propagate either errors or values. This simple requirement yields many different interface possibilities. The lack of a standard callback shape obstructs generic design.
Additionally, few of these possibilities accommodate cancellation signals when the user requests upstream work to stop and clean up.
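To illustrate, here are two perfectly reasonable callback shapes for the same hypothetical read operation (the names are illustrative only); a generic algorithm cannot consume both without bespoke adaptation, and neither shape has an obvious slot for a cancellation signal:

#include <cstddef>
#include <exception>
#include <functional>
#include <system_error>

// Shape 1: a single callback receiving either an error or a value.
void async_read_a(int fd, void* buf, std::size_t len,
                  std::function<void(std::error_code, std::size_t)> done);

// Shape 2: separate success and error callbacks.
void async_read_b(int fd, void* buf, std::size_t len,
                  std::function<void(std::size_t)> on_success,
                  std::function<void(std::exception_ptr)> on_error);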
1.11. Field experience
1.11.1. libunifex
This proposal draws heavily from our field experience with libunifex. Libunifex implements all of the concepts and customization points defined in this paper (with slight variations -- the design of P2300 has evolved due to LEWG feedback), many of this paper’s algorithms (some under different names), and much more besides.
Libunifex has several concrete schedulers in addition to the run_loop proposed here, including one backed by Linux’s io_uring.
In addition to the proposed interfaces and the additional schedulers, it has several important extensions to the facilities described in this paper, which demonstrate directions in which these abstractions may be evolved over time, including:
- 
     Timed schedulers, which permit scheduling work on an execution resource at a particular time or after a particular duration has elapsed. In addition, it provides time-based algorithms. 
- 
     File I/O schedulers, which permit filesystem I/O to be scheduled. 
- 
     Two complementary abstractions for streams (asynchronous ranges), and a set of stream-based algorithms. 
Libunifex has seen heavy production use at Facebook. As of October 2021, it is currently used in production within the following applications and platforms:
- 
     Facebook Messenger on iOS, Android, Windows, and macOS 
- 
     Instagram on iOS and Android 
- 
     Facebook on iOS and Android 
- 
     Portal 
- 
     An internal Facebook product that runs on Linux 
All of these applications are making direct use of the sender/receiver abstraction as presented in this paper. One product (Instagram on iOS) is making use of the sender/coroutine integration as presented. The monthly active users of these products number in the billions.
1.11.2. Other implementations
The authors are aware of a number of other implementations of sender/receiver from this paper. These are presented here in perceived order of maturity and field experience.
- 
     HPX - The C++ Standard Library for Parallelism and Concurrency HPX is a general purpose C++ runtime system for parallel and distributed applications that has been under active development since 2007. HPX exposes a uniform, standards-oriented API, and keeps abreast of the latest standards and proposals. It is used in a wide variety of high-performance applications. The sender/receiver implementation in HPX has been under active development since May 2020. It is used to erase the overhead of futures and to make it possible to write efficient generic asynchronous algorithms that are agnostic to their execution resource. In HPX, algorithms can migrate execution between execution resources, even to GPUs and back, using a uniform standard interface with sender/receiver. Far and away, the HPX team has the greatest usage experience outside Facebook. Mikael Simberg summarizes the experience as follows: Summarizing, for us the major benefits of sender/receiver compared to the old model are: - 
        Proper hooks for transitioning between execution resources. 
- 
        The adaptors. Things like let_value 
- 
        Separation of the error channel from the value channel (also cancellation, but we don’t have much use for it at the moment). Even from a teaching perspective having to explain that the future f2 f1 . then ([]( future < T > f2 ) {...}) 
- 
        For futures we have a thing called hpx :: dataflow when_all (...). then (...) when_all (...) | then (...) 
 
- 
     kuhllib by Dietmar Kuehl This is a prototype Standard Template Library with an implementation of sender/receiver that has been under development since May, 2021. It is significant mostly for its support for sender/receiver-based networking interfaces. Here, Dietmar Kuehl speaks about the perceived complexity of sender/receiver: ... and, also similar to STL: as I had tried to do things in that space before I recognize sender/receivers as being maybe complicated in one way but a huge simplification in another one: like with STL I think those who use it will benefit - if not from the algorithm from the clarity of abstraction: the separation of concerns of STL (the algorithm being detached from the details of the sequence representation) is a major leap. Here it is rather similar: the separation of the asynchronous algorithm from the details of execution. Sure, there is some glue to tie things back together but each of them is simpler than the combined result. Elsewhere, he said: ... to me it feels like sender/receivers are like iterators when STL emerged: they are different from what everybody did in that space. However, everything people are already doing in that space isn’t right. Kuehl also has experience teaching sender/receiver at Bloomberg. About that experience he says: When I asked [my students] specifically about how complex they consider the sender/receiver stuff the feedback was quite unanimous that the sender/receiver parts aren’t trivial but not what contributes to the complexity. 
- 
     The NVIDIA reference implementation, available at https://github.com/NVIDIA/stdexec. This is a complete implementation written from the specification in this paper. Its primary purpose is to help find specification bugs and to harden the wording of the proposal. It is fit for broad use and for contribution to libc++. It is current with R7 of this paper. 
- 
     Reference implementation for the Microsoft STL by Michael Schellenberger Costa. This is another reference implementation of this proposal, this time in a fork of the Microsoft STL implementation. Michael Schellenberger Costa is not affiliated with Microsoft. He intends to contribute this implementation upstream when it is complete. 
1.11.3. Inspirations
This proposal also draws heavily from our experience with Thrust and Agency. It is also inspired by the needs of countless other C++ frameworks for asynchrony, parallelism, and concurrency.
2. Revision history
2.1. R7
The changes since R6 are as follows:
Fixes:
- 
     Make it valid to pass non-variadic templates to the exposition-only alias template gather - signatures value_types_of_t error_types_of_t sync - wait - type 
- 
     Removed the query forwarding from receiver_adaptor 
- 
     When adapting a sender to an awaitable with as_awaitable variant 
- 
     Correctly specify the completion signatures of the schedule_from 
- 
     The sender_of T T && 
- 
     The just just_error 
Enhancements:
- 
     The sender receiver enable_sender enable_receiver is_sender is_receiver 
- 
     get_attrs get_env 
- 
     The exposition-only type empty - env empty_env 
- 
     get_env empty_env {} tag_invoke 
- 
     get_env 
- 
     get_env empty_env env_of_t std :: 
- 
     Add a new subsection describing the async programming model of senders in abstract terms. See § 11.3 Asynchronous operations [async.ops]. 
2.2. R6
The changes since R5 are as follows:
Fixes:
- 
     Fix typo in the specification of in_place_stop_source 
- 
     get_completion_signatures connect 
- 
     A coroutine promise type is an environment provider (that is, it implements get_env () 
Enhancements:
- 
     Sender queries are moved into a separate queryable "attributes" object that is accessed by passing the sender to get_attrs () sender get_attrs () sender_in < Snd , Env > 
- 
     The placeholder types no_env dependent_completion_signatures <> 
- 
     ensure_started split get_attrs () 
- 
     Reorder constraints of the scheduler receiver 
- 
     Re-express the sender_of 
- 
     Make the specification of the alias templates value_types_of_t error_types_of_t sends_done gather - signatures 
2.2.1. Environments and attributes
In earlier revisions, receivers, senders, and schedulers were all directly queryable. In R4, receiver queries were moved into a separate "environment" object, obtainable from a receiver with a call to get_env(). This revision does the same for sender queries, which move into a separate "attributes" object obtainable from the sender (see the R6 change list above).
Schedulers, however, remain directly queryable. As lightweight handles that are required to be movable and copyable, there is little reason to want to dispose of a scheduler and yet persist the scheduler’s queries.
This revision also makes operation states directly queryable, even though there isn’t yet a use for such queries. Some early prototypes of cooperative bulk parallel sender algorithms done at NVIDIA suggest the utility of forwardable operation state queries. The authors chose to make opstates directly queryable since the opstate object is itself required to be kept alive for the duration of the asynchronous operation.
2.3. R5
The changes since R4 are as follows:
Fixes:
- 
     start_detached void set_value 
Enhancements:
- 
     Receiver concepts refactored to no longer require an error channel for exception_ptr 
- 
     sender_of connect 
- 
     get_completion_signatures completion_signatures dependent_completion_signatures 
- 
     make_completion_signatures 
- 
     receiver_adaptor get_env set_ * receiver_adaptor get_env () get_env_t 
- 
     just just_error just_stopped into_variant 
2.4. R4
The changes since R3 are as follows:
Fixes:
- 
     Fix specification of get_completion_scheduler transfer schedule_from transfer_when_all set_error 
- 
     The value of sends_stopped is changed from false to true to acknowledge the fact that some coroutine types are generally awaitable and may implement the unhandled_stopped() protocol. 
- 
     Fix the incorrect use of inline namespaces in the < execution > header. 
- 
     Shorten the stable names for the sections. 
- 
     sync_wait std :: error_code std :: system_error 
- 
     Fix how ADL isolation from class template arguments is specified so it doesn’t constrain implementations. 
- 
     Properly expose the tag types in the < execution > header. 
Enhancements:
- 
     Support for "dependently-typed" senders, where the completion signatures -- and thus the sender metadata -- depend on the type of the receiver connected to it. See the section dependently-typed senders below for more information. 
- 
     Add a read ( query ) 
- 
     Add completion_signatures make_completion_signatures 
- 
     Add make_completion_signatures 
- 
     Drop support for untyped senders and rename typed_sender to sender. 
- 
     Rename set_done to set_stopped, and the corresponding done signal to stopped. 
- 
     Add customization points for controlling the forwarding of scheduler, sender, receiver, and environment queries through layers of adaptors; specify the behavior of the standard adaptors in terms of the new customization points. 
- 
     Add get_delegatee_scheduler 
- 
     Add schedule_result_t 
- 
     More precisely specify the sender algorithms, including precisely what their completion signatures are. 
- 
     stopped_as_error 
- 
     tag_invoke 
2.4.1. Dependently-typed senders
Background:
In the sender/receiver model, as with coroutines, contextual information about
the current execution is most naturally propagated from the consumer to the
producer. In coroutines, that means information like stop tokens, allocators and
schedulers are propagated from the calling coroutine to the callee. In
sender/receiver, that means that contextual information is associated with
the receiver and is queried by the sender and/or operation state after the
sender and the receiver are connected.
Problem:
The implication of the above is that the sender alone does not have all the
information about the async computation it will ultimately initiate; some of
that information is provided late via the receiver. However, the sender traits interface of earlier revisions is given only the sender type, so it cannot take that late-provided information into account.
Example:
To get concrete, consider the sender returned by get_scheduler() in the example below: the value it completes with (the current scheduler) comes from the receiver’s environment, so its value types cannot be computed from the sender type alone.
This causes knock-on problems, since some important algorithms require a typed sender, such as the this_thread::sync_wait used below:
namespace ex = std::execution;

ex::sender auto task =
    ex::let_value(
        ex::get_scheduler(),            // Fetches scheduler from receiver.
        [](auto current_sched) {
            // Launch some nested work on the current scheduler:
            return ex::on(current_sched, nested work ...);
        });

std::this_thread::sync_wait(std::move(task));
The code above is attempting to schedule some work onto the computation’s current scheduler, whatever that happens to be, by reading that scheduler out of the receiver’s environment with get_scheduler(). Prior to this change the code would not compile: sync_wait requires a typed sender, and the sender returned from let_value cannot know its completion types without knowing what type of scheduler get_scheduler() will produce.
Solution:
The solution is conceptually quite simple: extend the sender traits mechanism so that it can additionally take the receiver’s environment into account when computing a sender’s completion types.
Design:
Using the receiver type to compute the sender traits turns out to have pitfalls in practice. Many receivers make use of that type information in their implementation. It is very easy to create cycles in the type system, leading to inscrutable errors. The design pursued in R4 is to give receivers an associated environment object -- a bag of key/value pairs -- and to move the contextual information (schedulers, etc.) out of the receiver and into the environment. The environment, rather than the receiver itself, is then used when computing a sender’s completion signatures.
A further refinement of this design would be to separate the receiver and the environment entirely, passing them as separate arguments along with the sender to execution::connect.
Impact:
This change, apart from increasing the expressive power of the sender/receiver abstraction, has the following impact:
- 
     Typed senders become moderately more challenging to write. (The new completion_signatures and make_completion_signatures utilities are provided to ease this.) 
- 
     Sender adaptor algorithms that previously constrained their sender arguments with the typed_sender concept must defer that check until the receiver’s environment is known, i.e., until connect. 
- 
     Operation states that own receivers that add to or change the environment are typically larger by one pointer. It comes with the benefit of far fewer indirections to evaluate queries. 
"Has it been implemented?"
Yes, the reference implementation, which can be found at https://github.com/NVIDIA/stdexec, has implemented this design as well as some dependently-typed senders to confirm that it works.
Implementation experience
Although this change has not yet been made in libunifex, the most widely adopted sender/receiver implementation, a similar design can be found in Folly’s coroutine support library. In Folly.Coro, it is possible to await a special awaitable to obtain the current coroutine’s associated scheduler (called an executor in Folly).
For instance, the following Folly code grabs the current executor, schedules a task for execution on that executor, and starts the resulting (scheduled) task by enqueueing it for execution.
// From Facebook’s Folly open source library:
template <class T>
folly::coro::Task<void> CancellableAsyncScope::co_schedule(folly::coro::Task<T>&& task) {
    this->add(std::move(task).scheduleOn(co_await co_current_executor));
    co_return;
}
Facebook relies heavily on this pattern in its coroutine code. But as described
above, this pattern doesn’t work with R3 of this proposal.
Why now?
The authors are loath to make any changes to the design, however small, at this stage of the C++23 release cycle. But we feel that, for a relatively minor design change -- adding an extra template parameter to the sender traits and the sender concept -- the benefit is well worth the cost.
One might wonder why this missing feature has not been added to sender/receiver before now. The designers of sender/receiver have long been aware of the need. What was missing was a clean, robust, and simple design for the change, which we now have.
Drive-by:
We took the opportunity to make an additional drive-by change: rather than providing the sender traits via a class template for users to specialize, we changed it into a sender query, get_completion_signatures(sndr, env), which senders customize directly.
Details:
Below are the salient parts of the new support for dependently-typed senders in R4:
- 
     Receiver queries have been moved from the receiver into a separate environment object. 
- 
     Receivers have an associated environment. The new get_env customization point, when passed a receiver, returns the receiver’s environment object, on which queries are answered. 
- 
     sender_traits now takes an additional Env template parameter: the type of the receiver’s environment. 
- 
     The primary sender_traits template and the completion_signatures_of_t alias are implemented in terms of a new get_completion_signatures customization point, which a sender customizes via tag_invoke; get_completion_signatures is passed both the sender and an environment. 
- 
     Support for untyped senders is dropped. The typed_sender concept is renamed sender. 
- 
     The environment argument to the sender get_completion_signatures no_env no_env 
- 
     A type S sender < S > dependent_completion_signatures 
- 
     If a sender satisfies both sender < S > sender < S , Env > 
- 
     All of the algorithms and examples have been updated to work with dependently-typed senders. 
2.5. R3
The changes since R2 are as follows:
Fixes:
- 
     Fix specification of the on get_scheduler 
- 
     Fix a memory safety bug in the implementation of connect-awaitable. 
- 
     Fix the recursive definition of the scheduler concept. 
Enhancements:
- 
     Add run_loop 
- 
     Add receiver_adaptor 
- 
     Require a scheduler’s sender to model sender_of 
- 
     Specify the cancellation scope of the when_all algorithm. 
- 
     Make as_awaitable 
- 
     Change connect as_awaitable 
- 
     Add value_types_of_t error_types_of_t stop_token_type_t stop_token_of_t 
- 
     Add a design rationale for the removal of the possibly eager algorithms. 
- 
     Expand the section on field experience. 
2.6. R2
The changes since R1 are as follows:
- 
     Remove the eagerly executing sender algorithms. 
- 
     Extend the execution::connect customization point and the sender_traits<> template to recognize awaitables as typed senders. 
- 
     Add the utilities as_awaitable() and with_awaitable_senders<>. 
- 
     Add a section describing the design of the sender/awaitable interactions. 
- 
     Add a section describing the design of the cancellation support in sender/receiver. 
- 
     Add a section showing examples of simple sender adaptor algorithms. 
- 
     Add a section showing examples of simple schedulers. 
- 
     Add a few more examples: a sudoku solver, a parallel recursive file copy, and an echo server. 
- 
     Refined the forward progress guarantees on the bulk algorithm. 
- 
     Add a section describing how to use a range of senders to represent async sequences. 
- 
     Add a section showing how to use senders to represent partial success. 
- 
     Add the sender factories execution::just_error and execution::just_stopped. 
- 
     Add the sender adaptors execution::stopped_as_optional and execution::stopped_as_error. 
- 
     Document more production uses of sender/receiver at scale. 
- 
     Various fixes of typos and bugs. 
2.7. R1
The changes since R0 are as follows:
- 
     Added a new concept, sender_of. 
- 
     Added a new scheduler query, this_thread::execute_may_block_caller(). 
- 
     Added a new scheduler query, get_forward_progress_guarantee(). 
- 
     Removed the unschedule 
- 
     Various fixes of typos and bugs. 
2.8. R0
Initial revision.
3. Design - introduction
The following three sections describe the entirety of the proposed design.
- 
     § 3 Design - introduction describes the conventions used through the rest of the design sections, as well as an example illustrating how we envision code will be written using this proposal. 
- 
     § 4 Design - user side describes all the functionality from the perspective we intend for users: it describes the various concepts they will interact with, and what their programming model is. 
- 
     § 5 Design - implementer side describes the machinery that allows for that programming model to function, and the information contained there is necessary for people implementing senders and sender algorithms (including the standard library ones) - but is not necessary to use senders productively. 
3.1. Conventions
The following conventions are used throughout the design section:
- 
     The namespace proposed in this paper is the same as in A Unified Executors Proposal for C++: std::execution. For brevity, when the text refers to execution::foo or simply foo, read std::execution::foo. 
- 
     Universal references and explicit calls to std::move and std::forward are omitted from the code samples for brevity; assume they are used where appropriate. 
- 
     None of the names proposed here are names that we are particularly attached to; consider the names to be reasonable placeholders that can freely be changed, should the committee want to do so. 
3.2. Queries and algorithms
A query is a callable that takes some set of objects (usually one) as parameters and returns facts about those objects without modifying them. Queries are usually customization point objects, but in some cases may be functions.
An algorithm is a callable that takes some set of objects as parameters and causes those objects to do something. Algorithms are usually customization point objects, but in some cases may be functions.
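For example (a sketch only; get_some_scheduler is a placeholder for any way of obtaining a scheduler): get_completion_scheduler is a query -- it reports a fact about a sender without changing it -- while schedule is an algorithm -- it causes the scheduler to produce a sender.

namespace ex = std::execution;

ex::scheduler auto sch = get_some_scheduler();                // placeholder
ex::sender auto snd = ex::schedule(sch);                      // algorithm: produces a sender
auto cs = ex::get_completion_scheduler<ex::set_value_t>(      // query: asks where snd
    ex::get_env(snd));                                        // will complete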
4. Design - user side
4.1. Execution resources describe the place of execution
An execution resource is a resource that represents the place where execution will happen. This could be a concrete resource - like a specific thread pool object, or a GPU - or a more abstract one, like the current thread of execution. Execution resources don’t need to have a representation in code; they are simply a term describing certain properties of execution of a function.
4.2. Schedulers represent execution resources
A scheduler is a lightweight handle that represents a strategy for
scheduling work onto an execution resource. Since execution resources don’t
necessarily manifest in C++ code, it’s not possible to program directly against
their API. A scheduler is a solution to that problem: the scheduler concept is
defined by a single sender algorithm, schedule, which returns a sender that will complete on an execution agent belonging to the scheduler's associated execution resource:
execution::scheduler auto sch = thread_pool.scheduler();
execution::sender auto snd = execution::schedule(sch);
// snd is a sender (see below) describing the creation of a new execution agent
// on the execution resource associated with sch
Note that a particular scheduler type may provide other kinds of scheduling operations
which are supported by its associated execution resource. It is not limited to scheduling
purely using the execution::schedule API.
Future papers will propose additional scheduler concepts that extend scheduler:
- 
     A time_scheduler concept that extends scheduler to support time-based scheduling, providing access to APIs such as schedule_after(sched, duration), schedule_at(sched, time_point) and now(sched). 
- 
     Concepts that extend scheduler to support opening, reading, and writing files asynchronously. 
- 
     Concepts that extend scheduler to support connecting, sending data, and receiving data over the network asynchronously. 
4.3. Senders describe work
A sender is an object that describes work. Senders are similar to futures in existing asynchrony designs, but unlike futures, the work that is being done to arrive at the values they will send is also directly described by the sender object itself. A sender is said to send some values if a receiver connected (see § 5.3 execution::connect) to that sender will eventually receive said values.
The primary defining sender algorithm is § 5.3 execution::connect; this function, however, is not a user-facing API; it is used to facilitate communication between senders and various sender algorithms, but end user code is not expected to invoke it directly.
The way user code is expected to interact with senders is by using sender algorithms. This paper proposes an initial set of such sender algorithms, which are described in § 4.4 Senders are composable through sender algorithms, § 4.20 User-facing sender factories, § 4.21 User-facing sender adaptors, and § 4.22 User-facing sender consumers. For example, here is how a user can create a new sender on a scheduler, attach a continuation to it, and then wait for execution of the continuation to complete:
execution::scheduler auto sch = thread_pool.scheduler();

execution::sender auto snd = execution::schedule(sch);
execution::sender auto cont = execution::then(snd, []{
    std::fstream file{ "result.txt" };
    file << compute_result;
});

this_thread::sync_wait(cont);
// at this point, cont has completed execution
4.4. Senders are composable through sender algorithms
Asynchronous programming often departs from traditional code structure and control flow that we are familiar with. A successful asynchronous framework must provide an intuitive story for composition of asynchronous work: expressing dependencies, passing objects, managing object lifetimes, etc.
The true power and utility of senders is in their composability. With senders, users can describe generic execution pipelines and graphs, and then run them on and across a variety of different schedulers. Senders are composed using sender algorithms:
- 
     sender factories, algorithms that take no senders and return a sender. 
- 
     sender adaptors, algorithms that take (and potentially execution::connect) senders and return a sender. 
- 
     sender consumers, algorithms that take (and potentially execution::connect) senders and do not return a sender. 
4.5. Senders can propagate completion schedulers
One of the goals of executors is to support a diverse set of execution resources, including traditional thread pools, task and fiber frameworks (like HPX and Legion), and GPUs and other accelerators (managed by runtimes such as CUDA or SYCL). On many of these systems, not all execution agents are created equal and not all functions can be run on all execution agents. Having precise control over the execution resource used for any given function call being submitted is important on such systems, and the users of standard execution facilities will expect to be able to express such requirements.
A Unified Executors Proposal for C++ was not always clear about the place of execution of any given piece of code. Precise control was present in the two-way execution API present in earlier executor designs, but it has so far been missing from the senders design. There has been a proposal (Towards C++23 executors: A proposal for an initial set of algorithms) to provide a number of sender algorithms that would enforce certain rules on the places of execution of the work described by a sender, but we have found those sender algorithms to be insufficient for achieving the best performance on all platforms that are of interest to us. The implementation strategies that we are aware of result in one of the following situations:
- 
     trying to submit work to one execution resource (such as a CPU thread pool) from another execution resource (such as a GPU or a task framework), which assumes that all execution agents are as capable as a std::thread (which they are not). 
- 
     forcibly interleaving two adjacent execution graph nodes that are both executing on one execution resource (such as a GPU) with glue code that runs on another execution resource (such as a CPU), which is prohibitively expensive for some execution resources (such as CUDA or SYCL). 
- 
     having to customise most or all sender algorithms to support an execution resource, so that you can avoid problems described in 1. and 2, which we believe is impractical and brittle based on months of field experience attempting this in Agency. 
None of these implementation strategies are acceptable for many classes of parallel runtimes, such as task frameworks (like HPX) or accelerator runtimes (like CUDA or SYCL).
Therefore, in addition to the on sender algorithm from Towards C++23 executors: A proposal for an initial set of algorithms, we are proposing a way for senders to advertise what scheduler (and by extension what execution resource) they will complete on. Any given sender may have completion schedulers for some or all of the signals (value, error, or stopped) it completes with (for more detail on the completion signals, see § 5.1 Receivers serve as glue between senders). When further work is attached to such a sender by invoking sender algorithms, that work will also complete on the advertised completion scheduler.
4.5.1. execution::get_completion_scheduler
get_completion_scheduler is a query that reads the completion scheduler for a given completion signal out of a sender's environment; that is, it returns the scheduler on which the sender will complete with that signal:
execution::scheduler auto cpu_sched = new_thread_scheduler{};
execution::scheduler auto gpu_sched = cuda::scheduler();

execution::sender auto snd0 = execution::schedule(cpu_sched);
execution::scheduler auto completion_sch0 =
    execution::get_completion_scheduler<execution::set_value_t>(get_env(snd0));
// completion_sch0 is equivalent to cpu_sched

execution::sender auto snd1 = execution::then(snd0, []{
    std::cout << "I am running on cpu_sched!\n";
});
execution::scheduler auto completion_sch1 =
    execution::get_completion_scheduler<execution::set_value_t>(get_env(snd1));
// completion_sch1 is equivalent to cpu_sched

execution::sender auto snd2 = execution::transfer(snd1, gpu_sched);
execution::sender auto snd3 = execution::then(snd2, []{
    std::cout << "I am running on gpu_sched!\n";
});
execution::scheduler auto completion_sch3 =
    execution::get_completion_scheduler<execution::set_value_t>(get_env(snd3));
// completion_sch3 is equivalent to gpu_sched
4.6. Execution resource transitions are explicit
A Unified Executors Proposal for C++ does not contain any mechanisms for performing an execution resource transition. The only sender algorithm in it that can create a sender that will move execution to a specific execution resource is execution::schedule, which does not take an input sender.
We propose that, for senders advertising their completion scheduler, all execution resource transitions must be explicit; running user code anywhere but where they defined it to run must be considered a bug.
The execution::transfer sender adaptor performs such an explicit transition from one scheduler to another:
execution::scheduler auto sch1 = ...;
execution::scheduler auto sch2 = ...;

execution::sender auto snd1 = execution::schedule(sch1);
execution::sender auto then1 = execution::then(snd1, []{
    std::cout << "I am running on sch1!\n";
});

execution::sender auto snd2 = execution::transfer(then1, sch2);
execution::sender auto then2 = execution::then(snd2, []{
    std::cout << "I am running on sch2!\n";
});

this_thread::sync_wait(then2);
4.7. Senders can be either multi-shot or single-shot
Some senders may only support launching their operation a single time, while others may be repeatable and support being launched multiple times. Executing the operation may consume resources owned by the sender.
For example, a sender may contain a std::unique_ptr whose ownership is transferred into the operation state when the sender is connected to a receiver; such a sender can only be connected, and therefore executed, once.
A single-shot sender can only be connected to a receiver
at most once. Its implementation of execution::connect only has an overload for an rvalue-qualified sender, so callers must pass the sender as an rvalue, indicating that the call consumes the sender.
A multi-shot sender can be connected to multiple
receivers and can be launched multiple times. Multi-shot senders customise execution::connect to also accept an lvalue reference to the sender.
If the user of a sender does not require the sender to remain valid after connecting it to a
receiver then it can pass an rvalue-reference to the sender to the call to execution::connect. Such usage automatically works with both single-shot and multi-shot senders.
If the caller does wish for the sender to remain valid after the call then it can pass an lvalue-qualified sender
to the call to execution::connect. Such usage will only work with multi-shot senders.
Algorithms that accept senders will typically either decay-copy an input sender and store it somewhere
for later usage (for example as a data-member of the returned sender) or will immediately call execution::connect on the input sender, as this_thread::sync_wait and execution::start_detached do.
Some multi-use sender algorithms may require that an input sender be copy-constructible but will only call execution::connect on an rvalue of each copy, while other multi-use sender algorithms may require that the sender be connectable from an lvalue.
For a sender to be usable in both multi-use scenarios, it will generally be required to be both copy-constructible and lvalue-connectable.
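As a short illustration of the two usage patterns described above (expository only; snd1, snd2, rcv1 and rcv2 are assumed to model the relevant sender and receiver concepts):

// Passing an rvalue consumes the sender; this works with both single-shot
// and multi-shot senders.
auto op1 = execution::connect(std::move(snd1), rcv1);

// Passing an lvalue keeps the sender valid for later reuse; this is only
// well-formed for multi-shot (lvalue-connectable) senders.
auto op2 = execution::connect(snd2, rcv2);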
4.8. Senders are forkable
Any non-trivial program will eventually want to fork a chain of senders into independent streams of work, regardless of whether they are single-shot or multi-shot. For instance, an incoming event to a middleware system may be required to trigger events on more than one downstream system. This requires that we provide well defined mechanisms for making sure that connecting a sender multiple times is possible and correct.
The split sender adaptor facilitates this by turning any sender, single-shot or multi-shot, into a multi-shot sender that can be connected multiple times:
auto some_algorithm(execution::sender auto&& input) {
    execution::sender auto multi_shot = split(input);
    // "multi_shot" is guaranteed to be multi-shot,
    // regardless of whether "input" was multi-shot or not

    return when_all(
        then(multi_shot, []{ std::cout << "First continuation\n"; }),
        then(multi_shot, []{ std::cout << "Second continuation\n"; })
    );
}
4.9. Senders are joinable
Just as it is hard to write a non-trivial program that never forks a chain of senders into independent streams, it is also hard to write one that never needs join nodes, where multiple independent streams of execution are merged back into a single one in an asynchronous fashion. The § 4.21.11 execution::when_all and § 4.21.12 execution::transfer_when_all adaptors serve this purpose.
4.10. Senders support cancellation
Senders are often used in scenarios where the application may be concurrently executing multiple strategies for achieving some program goal. When one of these strategies succeeds (or fails) it may not make sense to continue pursuing the other strategies as their results are no longer useful.
For example, we may want to try to simultaneously connect to multiple network servers and use whichever server responds first. Once the first server responds we no longer need to continue trying to connect to the other servers.
Ideally, in these scenarios, we would somehow be able to request that those other strategies stop executing promptly so that their resources (e.g. cpu, memory, I/O bandwidth) can be released and used for other work.
While the design of senders supports cancelling an operation before it starts simply by destroying the sender, or the operation state returned from execution::connect, before calling execution::start, there also needs to be a standard, generic mechanism for asking an already started operation to complete early.
The ability to be able to cancel in-flight operations is fundamental to supporting some kinds of generic concurrency algorithms.
For example:
- 
     a when_all(ops...) algorithm should request that the other operations stop as soon as any one of them fails; 
- 
     a first_successful(ops...) algorithm should request that the other operations stop as soon as one of them completes successfully; 
- 
     a generic timeout(src, duration) algorithm needs to be able to cancel the src operation after the duration has elapsed; and 
- 
     a stop_when(src, trigger) algorithm should cancel src if trigger completes first, and cancel trigger if src completes first. 
The mechanism used for communicating cancellation-requests, or stop-requests, needs to have a uniform interface so that generic algorithms that compose sender-based operations, such as the ones listed above, are able to communicate these cancellation requests to senders that they don't know anything about.
The design is intended to be composable so that cancellation of higher-level operations can propagate those cancellation requests through intermediate layers to lower-level operations that need to actually respond to the cancellation requests.
For example, we can compose the algorithms mentioned above so that child operations are cancelled when any one of the multiple cancellation conditions occurs:
sender auto composed_cancellation_example(auto query) {
    return stop_when(
        timeout(
            when_all(
                first_successful(
                    query_server_a(query),
                    query_server_b(query)),
                load_file("some_file.jpg")),
            5s),
        cancelButton.on_click());
}
In this example, if we take the operation returned by composed_cancellation_example(query), connect it to a receiver and start it, the child operations will receive stop-requests as follows:
- 
     the first_successful algorithm will send a stop-request to query_server_b(query) once query_server_a(query) completes successfully, and vice versa; 
- 
     the when_all algorithm will send a stop-request to its other child operations once either the first_successful(...) operation or the load_file("some_file.jpg") operation completes with an error or is stopped; 
- 
     the timeout algorithm will send a stop-request to its child operation once five seconds have elapsed; 
- 
     the stop_when algorithm will send a stop-request to the timeout(...) operation if the cancelButton.on_click() sender completes first, i.e. if the user clicks the cancel button; and 
- 
     the parent operation consuming the composed_cancellation_example() sender can itself send a stop-request to cancel the whole composed operation. 
Note that within this code there is no explicit mention of cancellation, stop-tokens, callbacks, etc. yet the example fully supports and responds to the various cancellation sources.
The intent of the design is that the common usage of cancellation in sender/receiver-based code is primarily through use of concurrency algorithms that manage the detailed plumbing of cancellation for you. Much like algorithms that compose senders relieve the user from having to write their own receiver types, algorithms that introduce concurrency and provide higher-level cancellation semantics relieve the user from having to deal with low-level details of cancellation.
4.10.1. Cancellation design summary
The design of cancellation described in this paper is built on top of, and extends, the std::stop_token-based cancellation facilities added in C++20.
At a high-level, the facilities proposed by this paper for supporting cancellation include:
- 
     Add std::stoppable_token and std::stoppable_token_for concepts that generalise the interface of the C++20 std::stop_token type, allowing stop token types with other implementation strategies to be used with these facilities. 
- 
     Add a std::unstoppable_token concept for identifying, at compile time, stoppable_token types for which a stop-request can never be issued. 
- 
     Add std::in_place_stop_token, std::in_place_stop_source and std::in_place_stop_callback<CB> types, a more efficient stop token implementation for structured situations that do not require shared ownership of the stop state. 
- 
     Add a std::never_stop_token type: an unstoppable token for use where a stop token is required but stop-requests will never be issued. 
- 
     Add a std::execution::get_stop_token() CPO for obtaining, from a receiver's execution environment, the stop token an operation should use to check for stop-requests. 
- 
     Add a std::execution::stop_token_of_t<T> alias for the type returned by a call to get_stop_token() on an object of type T. 
In addition, there are requirements added to some of the algorithms to specify what their cancellation behaviour is and what the requirements of customisations of those algorithms are with respect to cancellation.
The key component that enables generic cancellation within sender-based operations is the execution::get_stop_token() CPO. It takes a single parameter, the execution environment of the receiver passed to execution::connect, and returns a std::stoppable_token that the operation can use to check for stop-requests.
As the caller of execution::connect typically has control over the receiver it passes in, it can customise get_stop_token() for that receiver's environment so that it returns a stop token of the caller's choosing, one that the caller can later use to communicate a stop-request to the operation after it has started.
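The following is a minimal sketch, not proposed wording, of how an operation state might use this query inside execution::start; the names recv_operation, launch_work and request_early_completion are assumptions made for illustration only.

// Expository operation state that subscribes a stop-callback to the stop token
// supplied by the receiver's execution environment.
template <std::execution::receiver Receiver>
struct recv_operation {
    Receiver rcv;

    struct on_stop {                                  // runs if a stop-request arrives
        recv_operation& self;
        void operator()() noexcept { self.request_early_completion(); }
    };

    using token_t = std::execution::stop_token_of_t<
        decltype(std::execution::get_env(std::declval<Receiver&>()))>;
    std::optional<typename token_t::template callback_type<on_stop>> stop_cb;

    friend void tag_invoke(std::execution::start_t, recv_operation& self) noexcept {
        // The caller of execution::connect controls the receiver, and therefore
        // controls which stop token this query returns.
        auto token = std::execution::get_stop_token(std::execution::get_env(self.rcv));
        self.stop_cb.emplace(std::move(token), on_stop{self});
        self.launch_work();
    }

    void launch_work() noexcept;               // assumed: launches the real work
    void request_early_completion() noexcept;  // assumed: tries to finish early
};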
4.10.2. Support for cancellation is optional
Support for cancellation is optional, both on part of the author of the receiver and on part of the author of the sender.
If the receiver's execution environment does not customise the get_stop_token() CPO, the query returns a std::never_stop_token: an unstoppable token that can statically never receive a stop-request.
Sender code that tries to use this stop-token will in general result in code that handles stop-requests being compiled out and having little to no run-time overhead.
If the sender doesn't call get_stop_token(), for example because the operation it describes has no way to be interrupted, then a stop-request issued by the caller will simply have no effect and the operation will run to completion.
Note that stop-requests are generally racy in nature, as there is often a race between an operation completing naturally and the stop-request being made. If the operation has already completed, or has passed the point at which it can be cancelled, when the stop-request is sent, then the stop-request may just be ignored. An application will typically need to be able to cope with senders that might ignore a stop-request anyway.
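To make the "compiled out" claim above concrete, here is a small expository sketch (the enclosing operation state and the env variable holding the receiver's execution environment are assumptions):

using token_t = std::execution::stop_token_of_t<decltype(env)>;

if constexpr (std::unstoppable_token<token_t>) {
    // e.g. std::never_stop_token: no stop-callback is registered, and the
    // else-branch below is discarded at compile time.
} else {
    auto token = std::execution::get_stop_token(env);
    // ... register a stop-callback with `token` here ...
}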
4.10.3. Cancellation is inherently racy
Usually, an operation will attach a stop-callback at some point inside the call to execution::start() so that a stop-request arriving later can interrupt the logic.
A stop-request can be issued concurrently from another thread. This means the implementation of execution::start() needs to be careful to ensure that a concurrently arriving stop-request is neither missed nor the source of a data race.
An implementation of execution::start() that supports cancellation therefore generally performs (at least) two steps: launching the operation, and subscribing a stop-callback to the receiver's stop token. Both orderings of these steps require care.
If the stop-callback is subscribed first and then the operation is launched, care needs to be taken to ensure that a stop-request that invokes the stop-callback on another thread after the stop-callback is registered but before the operation finishes launching does not result in either a missed cancellation request or a data race, e.g. by performing an atomic write after the launch has finished executing.
If the operation is launched first and then the stop-callback is subscribed, care needs to be taken to ensure
that if the launched operation completes concurrently on another thread it does not destroy the operation-state
until after the stop-callback registration has completed, e.g. by deferring destruction of the operation-state with an atomic flag or reference count until both the completion and the registration have finished.
For an example of an implementation strategy for solving these data-races see § 1.4 Asynchronous Windows socket recv.
4.10.4. Cancellation design status
This paper currently includes the design for cancellation as proposed in Composable cancellation for sender-based async operations (P2175R0), which contains more details on the background, motivation, prior art, and rationale of this design.
It is important to note, however, that initial review of this design in the SG1 concurrency subgroup raised some concerns related to runtime overhead of the design in single-threaded scenarios and these concerns are still being investigated.
The design of P2175R0 has been included in this paper for now, despite its potential to change, as we believe that support for cancellation is a fundamental requirement for an async model and is required in some form to be able to talk about the semantics of some of the algorithms proposed in this paper.
This paper will be updated in the future with any changes that arise from the investigations into P2175R0.
4.11. Sender factories and adaptors are lazy
In an earlier revision of this paper, some of the proposed algorithms supported executing their logic eagerly; i.e., before the returned sender has been connected to a receiver and started. These algorithms were removed because eager execution has a number of negative semantic and performance implications.
We had originally included this functionality in the paper because of a long-standing belief that eager execution is a mandatory feature for a standard executors facility to be acceptable to accelerator vendors. A particular concern was that we must be able to write generic algorithms that can run either eagerly or lazily, depending on the kind of input sender or scheduler that has been passed into them as an argument. We considered this a requirement because the latency of launching work on an accelerator can sometimes be considerable.
However, in the process of working on this paper and on implementations of the features
proposed within, our set of requirements has shifted: we came to understand the different
implementation strategies available for this feature set better,
and, after weighing the earlier concerns against the points presented below, we
have arrived at the conclusion that a purely lazy model is enough for most algorithms;
users who intend to launch work early may use an algorithm such as § 4.21.13 execution::ensure_started to achieve that effect.
4.11.1. Eager execution leads to detached work or worse
One of the questions that arises with APIs that can potentially return
eagerly-executing senders is: "What happens when those senders are destructed
without ever being connected to a receiver and started?"
In these cases, the operation represented by the sender is potentially executing concurrently in another thread at the time that the destructor of the sender and/or operation-state is running. In the case that the operation has not completed executing by the time that the destructor is run we need to decide what the semantics of the destructor is.
There are three main strategies that can be adopted here, none of which is particularly satisfactory:
- 
     Make this undefined behaviour - the caller must ensure that any eagerly-executing sender is always joined by connecting and starting that sender. This approach is generally pretty hostile to programmers, particularly in the presence of exceptions, since it complicates the ability to compose these operations. Eager operations typically need to acquire resources when they are first called in order to start the operation early, which makes eager algorithms prone to failure. Consider, then, what might happen in an expression such as when_all(eager_op_1(), eager_op_2()): if eager_op_2() throws after eager_op_1() has started, the when_all sender is never constructed, and the temporary sender returned by eager_op_1() is destroyed without ever being joined. It then becomes the responsibility, not of the algorithm, but of the end user to handle the exception and ensure that eager_op_1() is joined before the exception propagates. 
- 
     Detach from the computation - let the operation continue in the background, like an implicit call to std::thread::detach(). While this approach can work, it is generally accompanied by problems keeping alive the resources the detached operation needs (often forcing shared ownership via std::shared_ptr) and by lifetime and shutdown hazards, since the program may exit while detached work is still running. 
- 
     Block in the destructor until the operation completes. This approach is probably the safest to use as it preserves the structured nature of the concurrent operations, but also introduces the potential for deadlocking the application if the completion of the operation depends on the current thread making forward progress. The risk of deadlock might occur, for example, if a thread-pool with a small number of threads is executing code that creates a sender representing an eagerly-executing operation and then calls the destructor of that sender without joining it (e.g. because an exception was thrown). If the current thread blocks waiting for that eager operation to complete and that eager operation cannot complete until some entry enqueued to the thread-pool’s queue of work is run then the thread may wait for an indefinite amount of time. If all threads of the thread-pool are simultaneously performing such blocking operations then deadlock can result. 
There are also minor variations on each of these choices. For example:
- 
     A variation of (1): Call std::terminate if an eagerly-executing sender is destroyed without being joined. This is the approach std::thread takes in its destructor when the thread has been neither joined nor detached. 
- 
     A variation of (2): Request cancellation of the operation before detaching. This reduces the chances of operations continuing to run indefinitely in the background once they have been detached but does not solve the lifetime- or shutdown-related challenges. 
- 
     A variation of (3): Request cancellation of the operation before blocking on its completion. This is the strategy that std::jthread uses in its destructor; it reduces the risk of deadlock but does not eliminate it. 
4.11.2. Eager senders complicate algorithm implementations
Algorithms that can assume they are operating on senders with strictly lazy
semantics are able to make certain optimizations that are not available if
senders can be potentially eager. With lazy senders, an algorithm can safely
assume that the work described by an input sender has not yet started, and therefore that nothing can complete concurrently while the algorithm composes, connects, and starts that work; no defensive synchronization is needed.
When an algorithm needs to deal with potentially eager senders, the potential race conditions can be resolved one of two ways, neither of which is desirable:
- 
     Assume the worst and implement the algorithm defensively, assuming all senders are eager. This obviously has overheads both at runtime and in algorithm complexity. Resolving race conditions is hard. 
- 
     Require senders to declare whether they are eager or not with a query. Algorithms can then implement two different implementation strategies, one for strictly lazy senders and one for potentially eager senders. This addresses the performance problem of (1) while compounding the complexity problem. 
4.11.3. Eager senders incur cancellation-related overhead
Another implication of the use of eager operations is with regards to cancellation. The eagerly executing operation will not have access to the caller’s stop token until the sender is connected to a receiver. If we still want to be able to cancel the eager operation then it will need to create a new stop source and pass its associated stop token down to child operations. Then when the returned sender is eventually connected it will register a stop callback with the receiver’s stop token that will request stop on the eager sender’s stop source.
As the eager operation does not know, at the time that it is launched, what the
type of the receiver is going to be, and thus whether or not the stop token
returned from get_stop_token() will be an unstoppable token, it must pessimistically create and manage a stop source, stop token, and stop callback, incurring this overhead even in cases where cancellation is never requested.
The eager operation will also need to do this to support sending a stop request to the eager operation in the case that the sender representing the eager work is destroyed before it has been joined (assuming strategy (5) or (6) listed above is chosen).
4.11.4. Eager senders cannot access execution resource from the receiver
In sender/receiver, contextual information is passed from parent operations to their children by way of receivers. Information like stop tokens, allocators, current scheduler, priority, and deadline are propagated to child operations with custom receivers at the time the operation is connected. That way, each operation has the contextual information it needs before it is started.
But if the operation is started before it is connected to a receiver, then there isn’t a way for a parent operation to communicate contextual information to its child operations, which may complete before a receiver is ever attached.
4.12. Schedulers advertise their forward progress guarantees
To decide whether a scheduler (and its associated execution resource) is sufficient for a specific task, it may be necessary to know what kind of forward progress guarantees it provides for the execution agents it creates. The C++ Standard defines the following forward progress guarantees:
- 
     concurrent, which requires that a thread makes progress eventually; 
- 
     parallel, which requires that a thread makes progress once it executes a step; and 
- 
     weakly parallel, which does not require that the thread makes progress. 
This paper introduces a scheduler query, get_forward_progress_guarantee, which returns one of the values of a new enumeration, execution::forward_progress_guarantee, corresponding to the guarantee listed above that the scheduler provides for the execution agents it creates.
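For example, a caller could reject a scheduler whose agents are too weak for a given task (a sketch; the thread pool and the reaction to a weak guarantee are assumptions, and the enumerator names are those proposed in the wording):

execution::scheduler auto sch = thread_pool.scheduler();

if (execution::get_forward_progress_guarantee(sch) ==
    execution::forward_progress_guarantee::weakly_parallel) {
    throw std::runtime_error("this scheduler is too weak for the requested task");
}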
4.13. Most sender adaptors are pipeable
To facilitate an intuitive syntax for composition, most sender adaptors are pipeable; they can be composed (piped) together with operator|, similar to C++20 range adaptors. A pipeable sender adaptor takes the sender it operates on as its first parameter, so the following three expressions are equivalent:
execution::bulk(snd, N, [](std::size_t i, auto d) {});
execution::bulk(N, [](std::size_t i, auto d) {})(snd);
snd | execution::bulk(N, [](std::size_t i, auto d) {});
Piping enables you to compose together senders with a linear syntax. Without it, you’d have to use either nested function call syntax, which would cause a syntactic inversion of the direction of control flow, or you’d have to introduce a temporary variable for each stage of the pipeline. Consider the following example where we want to execute first on a CPU thread pool, then on a CUDA GPU, then back on the CPU thread pool:
The same pipeline can be written as nested function calls, as function calls with named temporaries for each stage, or with the pipe syntax; a sketch of the three styles follows.
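For illustration (a sketch; cpu_sched, gpu_sched, and the placeholder invocables cpu_work_1, gpu_work and cpu_work_2 are assumptions):

// Function call (nested)
execution::sender auto nested =
    execution::then(
        execution::transfer(
            execution::then(
                execution::transfer(
                    execution::then(execution::schedule(cpu_sched), cpu_work_1),
                    gpu_sched),
                gpu_work),
            cpu_sched),
        cpu_work_2);

// Function call (named temporaries)
execution::sender auto stage1 = execution::then(execution::schedule(cpu_sched), cpu_work_1);
execution::sender auto stage2 = execution::then(execution::transfer(stage1, gpu_sched), gpu_work);
execution::sender auto named  = execution::then(execution::transfer(stage2, cpu_sched), cpu_work_2);

// Pipe
execution::sender auto piped =
    execution::schedule(cpu_sched)
  | execution::then(cpu_work_1)
  | execution::transfer(gpu_sched)
  | execution::then(gpu_work)
  | execution::transfer(cpu_sched)
  | execution::then(cpu_work_2);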
Certain sender adaptors are not pipeable, because using the pipeline syntax can result in confusion of the semantics of the adaptors involved. Specifically, the following sender adaptors are not pipeable:
- 
     execution::when_all and execution::when_all_with_variant: these adaptors take a variadic pack of senders, so a partially applied (pipeable) form would be ambiguous with a non-partially-applied form of one fewer arity. 
- 
     execution::on: this adaptor changes where the sender it is given executes, whereas transfer changes where the work that comes after it executes; allowing both in a pipeline would make it easy to confuse which parts of the pipeline run where. 
Sender consumers could be made pipeable, but we have chosen not to do so: they are terminal nodes in a pipeline, nothing can be piped after them, and so a pipe syntax would be unnecessary and potentially confusing. We believe sender consumers read better with function call syntax.
4.14. A range of senders represents an async sequence of data
Senders represent a single unit of asynchronous work. In many cases though, what is being modelled is a sequence of data arriving asynchronously, and you want computation to happen on demand, when each element arrives. This requires nothing more than what is in this paper and the range support in C++20. A range of senders would allow you to model such inputs as keystrokes, mouse movements, sensor readings, or network requests.
Given some expression R that is a range of senders, consider the following in a coroutine that returns an async generator type:
for (auto snd : R) {
    if (auto opt = co_await execution::stopped_as_optional(std::move(snd)))
        co_yield fn(*std::move(opt));
    else
        break;
}
This transforms each element of the asynchronous sequence R with the function fn on demand, as the data arrives. The result is a new asynchronous sequence of the transformed values.
Now imagine that R is a lazy range of senders each of which completes immediately with monotonically increasing integers. The above code churns through the range, generating a new element from the previous one and calling fn on each.
Far more interesting would be if R were a range of senders representing, say, user actions in a UI. The above code then gives a simple way to respond to user actions on demand.
4.15. Senders can represent partial success
Receivers have three ways they can complete: with success, failure, or cancellation. This raises the question of how they can be used to represent async operations that partially succeed. For example, consider an API that reads from a socket. The connection could drop after the API has filled in some of the buffer. In cases like that, it makes sense to want to report both that the connection dropped and that some data has been successfully read.
Often in the case of partial success, the error condition is not fatal nor does it mean the API has failed to satisfy its post-conditions. It is merely an extra piece of information about the nature of the completion. In those cases, "partial success" is another way of saying "success". As a result, it is sensible to pass both the error code and the result (if any) through the value channel, as shown below:
// Capture a buffer for read_socket_async to fill in
execution::just(array<byte, 1024>{})
  | execution::let_value([socket](array<byte, 1024>& buff) {
      // read_socket_async completes with two values: an error_code and
      // a count of bytes:
      return read_socket_async(socket, span{buff})
        // For success (partial and full), specify the next action:
        | execution::let_value([](error_code err, size_t bytes_read) {
            if (err != 0) {
              // OK, partial success. Decide how to deal with the partial results
            } else {
              // OK, full success here.
            }
          });
    })
In other cases, the partial success is more of a partial failure. That happens when the error condition indicates that in some way the function failed to satisfy its post-conditions. In those cases, sending the error through the value channel loses valuable contextual information. It’s possible that bundling the error and the incomplete results into an object and passing it through the error channel makes more sense. In that way, generic algorithms will not miss the fact that a post-condition has not been met and react inappropriately.
Another possibility is for an async API to return a range of senders: if the API completes with full success, full error, or cancellation, the returned range contains just one sender with the result. Otherwise, if the API partially fails (doesn’t satisfy its post-conditions, but some incomplete result is available), the returned range would have two senders: the first containing the partial result, and the second containing the error. Such an API might be used in a coroutine as follows:
// Declare a buffer for read_socket_async to fill in
array<byte, 1024> buff;

for (auto snd : read_socket_async(socket, span{buff})) {
    try {
        if (optional<size_t> bytes_read =
                co_await execution::stopped_as_optional(std::move(snd))) {
            // OK, we read some bytes into buff. Process them here....
        } else {
            // The socket read was cancelled and returned no data. React
            // appropriately.
        }
    } catch (...) {
        // read_socket_async failed to meet its post-conditions.
        // Do some cleanup and propagate the error...
    }
}
Finally, it’s possible to combine these two approaches when the API can both partially succeed (meeting its post-conditions) and partially fail (not meeting its post-conditions).
4.16. All awaitables are senders
Since C++20 added coroutines to the standard, we expect that coroutines and awaitables will be how a great many will choose to express their asynchronous code. However, in this paper, we are proposing to add a suite of asynchronous algorithms that accept senders, not awaitables. One might wonder whether and how these algorithms will be accessible to those who choose coroutines instead of senders.
In truth there will be no problem, because all generally awaitable types
automatically model the sender concept; the adaptation is transparent and happens in the sender customization points, which are aware of awaitables.
For an example, imagine a coroutine type called task<T> that knows nothing about senders and does not implement any of the sender customization points. Despite that fact, and despite this_thread::sync_wait being constrained with the sender concept, the following compiles and does what the user expects:
task<int> doSomeAsyncWork();

int main() {
    // OK, awaitable types satisfy the requirements for senders:
    auto o = this_thread::sync_wait(doSomeAsyncWork());
}
Since awaitables are senders, writing a sender-based asynchronous algorithm is trivial if you have a coroutine task type: implement the algorithm as a coroutine. If you are not bothered by the possibility of allocations and indirections as a result of using coroutines, then there is no need to ever write a sender, a receiver, or an operation state.
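For instance, the awaitable task from the example above can be passed straight into a sender adaptor (a sketch reusing the hypothetical doSomeAsyncWork from the previous example):

// Because awaitables model the sender concept, they compose with sender algorithms.
execution::sender auto next =
    execution::then(doSomeAsyncWork(), [](int i) { return i + 1; });

auto [j] = this_thread::sync_wait(std::move(next)).value();  // j is the task's result plus one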
4.17. Many senders can be trivially made awaitable
If you choose to implement your sender-based algorithms as coroutines, you'll run into the issue of how to retrieve results from a passed-in sender. This is not a problem: if the coroutine type opts in to sender support - trivial with the execution::with_awaitable_senders utility - a sender can simply be awaited with co_await.
For example, consider the following trivial implementation of the sender-based retry algorithm:
template <class S>
  requires single-sender<S&>  // see [exec.as.awaitable]
task<single-sender-value-type<S>> retry(S s) {
    for (;;) {
        try {
            co_return co_await s;
        } catch (...) {
        }
    }
}
Only some senders can be made awaitable directly because of the fact that callbacks are more expressive than coroutines. An awaitable expression has a single type: the result value of the async operation. In contrast, a callback can accept multiple arguments as the result of an operation. What's more, the callback can have overloaded function call signatures that take different sets of arguments. There is no way to automatically map such senders into awaitables. The § 4.21.6 execution::into_variant adaptor can be used to collapse a sender's multiple possible sets of results into a single variant-of-tuples value, making the result awaitable.
4.18. Cancellation of a sender can unwind a stack of coroutines
When looking at the sender-based retry algorithm in the previous section, we can see that the value and error channels map naturally onto co_return and exceptions. But what about cancellation - what happens when an awaited sender completes with the "stopped" signal?
When your task type's promise inherits from with_awaitable_senders and an awaited sender completes with the "stopped" signal, the coroutine - and the entire stack of coroutines awaiting it - is unwound, much as if an uncatchable exception had been thrown, and the "stopped" signal is propagated onward to the task's own consumer.
In order to "catch" this uncatchable stopped "exception", one of the calling coroutines in the stack has to await a sender that maps the stopped channel into either a value or an error. That is achievable with the § 4.21.7 execution::stopped_as_optional or § 4.21.8 execution::stopped_as_error sender adaptors. For instance, stopped_as_optional maps the stopped signal into an empty std::optional:
if (auto opt = co_await execution::stopped_as_optional(some_sender)) {
    // OK, some_sender completed successfully, and opt contains the result.
} else {
    // some_sender completed with a cancellation signal.
}
As described in the section "All awaitables are senders", the sender customization points recognize awaitables and adapt them transparently to model the sender concept. When connecting an awaitable to a receiver, the adaptation layer awaits the awaitable within a coroutine whose promise implements unhandled_stopped; the effect is that the "uncatchable" stopped signal propagates seamlessly out of awaitables, causing execution::set_stopped to be called on the receiver.
Obviously, unhandled_stopped is a library extension of the coroutine promise interface; a coroutine type must provide it - for example by using with_awaitable_senders as its promise's base class - in order to await senders that may complete with the stopped signal.
4.19. Composition with parallel algorithms
The C++ Standard Library provides a large number of algorithms that offer the potential for non-sequential execution via the use of execution policies. The set of algorithms with execution policy overloads are often referred to as "parallel algorithms", although additional policies are available.
Existing policies, such as std::execution::par, give the implementation permission to execute the algorithm in parallel, but they provide no way to control where that execution happens: there is no way to specify which execution resources the parallel work should use.
We will propose a customization point for combining schedulers with policies in order to provide control over where work will execute.
template <class ExecutionPolicy>
unspecified executing_on(execution::scheduler auto scheduler, ExecutionPolicy&& policy);
This function would return an object of an unspecified type which can be used in place of an execution policy as the first argument to one of the parallel algorithms. The overload selected by that object should execute its computation as requested by the provided policy, except that the execution agents running the algorithm are created by the provided scheduler and therefore execute on its associated execution resource.
The existing parallel algorithms are synchronous; all of the effects performed by the computation are complete before the algorithm returns to its caller. This remains unchanged with the executing_on customization point.
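A brief usage sketch of the proposed customization point (my_pool and the data are assumptions made for illustration):

// Run a parallel for_each, but require that the work runs on agents created by
// my_pool's scheduler rather than wherever the implementation would choose.
std::vector<double> data = get_data();  // assumed input
execution::scheduler auto sched = my_pool.scheduler();

std::for_each(executing_on(sched, std::execution::par),
              data.begin(), data.end(),
              [](double& x) { x *= 2.0; });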
In the future, we expect additional papers will propose asynchronous forms of the parallel algorithms which (1) return senders rather than values or void and (2) compose naturally with the sender algorithms proposed in this paper.
4.20. User-facing sender factories
A sender factory is an algorithm that takes no senders as parameters and returns a sender.
4.20.1. execution :: schedule 
execution :: sender auto schedule ( execution :: scheduler auto scheduler ); 
Returns a sender describing the start of a task graph on the provided scheduler. See § 4.2 Schedulers represent execution resources.
execution :: scheduler auto sch1 = get_system_thread_pool (). scheduler (); execution :: sender auto snd1 = execution :: schedule ( sch1 ); // snd1 describes the creation of a new task on the system thread pool 
4.20.2. execution :: just 
execution::sender auto just(auto&&... values);
Returns a sender with no completion schedulers, which sends the provided values. The input values are decay-copied into the returned sender. When the returned sender is connected to a receiver, the values are moved into the operation state if the sender is an rvalue; otherwise, they are copied. Then xvalues referencing the values in the operation state are passed to the receiver's set_value.
execution::sender auto snd1 = execution::just(3.14);
execution::sender auto then1 = execution::then(snd1, [](double d) {
    std::cout << d << "\n";
});

execution::sender auto snd2 = execution::just(3.14, 42);
execution::sender auto then2 = execution::then(snd2, [](double d, int i) {
    std::cout << d << ", " << i << "\n";
});

std::vector v3{1, 2, 3, 4, 5};
execution::sender auto snd3 = execution::just(v3);
execution::sender auto then3 = execution::then(snd3, [](std::vector<int>&& v3copy) {
    for (auto&& e : v3copy) { e *= 2; }
    return std::move(v3copy);
});
auto&& [v3copy] = this_thread::sync_wait(then3).value();
// v3 contains {1, 2, 3, 4, 5}; v3copy will contain {2, 4, 6, 8, 10}.

execution::sender auto snd4 = execution::just(std::vector{1, 2, 3, 4, 5});
execution::sender auto then4 = execution::then(std::move(snd4), [](std::vector<int>&& v4) {
    for (auto&& e : v4) { e *= 2; }
    return std::move(v4);
});
auto&& [v4] = this_thread::sync_wait(std::move(then4)).value();
// v4 contains {2, 4, 6, 8, 10}. No vectors were copied in this example.
4.20.3. execution :: transfer_just 
execution::sender auto transfer_just(execution::scheduler auto scheduler, auto&&... values);
Returns a sender whose value completion scheduler is the provided scheduler, and which sends the provided values in the same manner as § 4.20.2 execution::just.
execution::sender auto vals = execution::transfer_just(
    get_system_thread_pool().scheduler(),
    1, 2, 3);
execution::sender auto snd = execution::then(vals, [](auto... args) {
    (std::cout << ... << args);
});
// when snd is executed, it will print "123"
This adaptor is included as it greatly simplifies lifting values into senders.
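For intuition, transfer_just(sched, 1, 2, 3) can be thought of as describing the same work as the following composition (a sketch of the intended semantics, not proposed wording):

execution::sender auto equivalent =
    execution::transfer(execution::just(1, 2, 3), sched);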
4.20.4. execution :: just_error 
execution :: sender auto just_error ( auto && error ); 
Returns a sender with no completion schedulers, which completes with the specified error. If the provided error is an lvalue reference, a copy is made inside the returned sender and a non-const lvalue reference to the copy is sent to the receiver's set_error. If the provided error is an rvalue reference, it is moved into the returned sender and an rvalue reference to it is sent to the receiver's set_error.
4.20.5. execution :: just_stopped 
execution :: sender auto just_stopped (); 
Returns a sender with no completion schedulers, which completes immediately by calling the receiver's set_stopped.
4.20.6. execution :: read 
execution::sender auto read(auto tag);

execution::sender auto get_scheduler() {
    return read(execution::get_scheduler);
}
execution::sender auto get_delegatee_scheduler() {
    return read(execution::get_delegatee_scheduler);
}
execution::sender auto get_allocator() {
    return read(execution::get_allocator);
}
execution::sender auto get_stop_token() {
    return read(execution::get_stop_token);
}
Returns a sender that reaches into a receiver's environment, pulls out the current value associated with the customization point denoted by tag, and sends that value back to the receiver through the value channel. For instance, get_scheduler() (with no arguments) is a sender that asks the receiver which scheduler, if any, it suggests subsequent work be run on, and passes that scheduler to the receiver's set_value.
This can be useful when scheduling nested dependent work. The following sender pulls the current scheduler into the value channel and then schedules more work onto it.
execution::sender auto task =
    execution::get_scheduler()
  | execution::let_value([](auto sched) {
        return execution::on(sched, some nested work here);
    });

this_thread::sync_wait(std::move(task));  // wait for it to finish
This code uses the fact that sync_wait associates a scheduler with the receiver it connects to the given sender: get_scheduler() reads that scheduler out of the receiver's environment and sends it through the value channel to let_value, whose lambda then uses it to schedule the nested work.
4.21. User-facing sender adaptors
A sender adaptor is an algorithm that takes one or more senders, which it may execution::connect, as parameters, and returns a sender whose completion is related to the sender arguments it received.
Sender adaptors are lazy, that is, they are never allowed to submit any work for execution prior to the returned sender being started later on, and are also guaranteed to not start any input senders passed into them. Sender consumers such as § 4.22.1 execution::start_detached and § 4.22.2 this_thread::sync_wait start senders.
For more implementer-centric description of starting senders, see § 5.5 Sender adaptors are lazy.
4.21.1. execution :: transfer 
execution :: sender auto transfer ( execution :: sender auto input , execution :: scheduler auto scheduler ); 
Returns a sender describing the transition from the execution agent of the input sender to the execution agent of the target scheduler. See § 4.6 Execution resource transitions are explicit.
execution :: scheduler auto cpu_sched = get_system_thread_pool (). scheduler (); execution :: scheduler auto gpu_sched = cuda :: scheduler (); execution :: sender auto cpu_task = execution :: schedule ( cpu_sched ); // cpu_task describes the creation of a new task on the system thread pool execution :: sender auto gpu_task = execution :: transfer ( cpu_task , gpu_sched ); // gpu_task describes the transition of the task graph described by cpu_task to the gpu 
4.21.2. execution :: then 
execution :: sender auto then ( execution :: sender auto input , std :: invocable < values - sent - by ( input ) ... > function ); 
execution::sender auto input = get_input();
execution::sender auto snd = execution::then(input, [](auto... args) {
    (std::cout << ... << args);
});
// snd describes the work described by input,
// followed by printing all of the values sent by input
This adaptor is included as it is necessary for writing any sender code that actually performs a useful function.
4.21.3. execution :: upon_ * 
execution :: sender auto upon_error ( execution :: sender auto input , std :: invocable < errors - sent - by ( input ) ... > function ); execution :: sender auto upon_stopped ( execution :: sender auto input , std :: invocable auto function ); 
4.21.4. execution :: let_ * 
execution :: sender auto let_value ( execution :: sender auto input , std :: invocable < values - sent - by ( input ) ... > function ); execution :: sender auto let_error ( execution :: sender auto input , std :: invocable < errors - sent - by ( input ) ... > function ); execution :: sender auto let_stopped ( execution :: sender auto input , std :: invocable auto function ); 
4.21.5. execution :: on 
execution :: sender auto on ( execution :: scheduler auto sched , execution :: sender auto snd ); 
Returns a sender which, when started, will start the provided sender on an execution agent belonging to the execution resource associated with the provided scheduler. This returned sender has no completion schedulers.
4.21.6. execution :: into_variant 
execution :: sender auto into_variant ( execution :: sender auto snd ); 
Returns a sender which sends a variant of tuples of all the possible sets of types sent by the input sender. Senders can send multiple sets of values depending on runtime conditions; this is a helper function that turns them into a single variant value.
4.21.7. execution :: stopped_as_optional 
execution :: sender auto stopped_as_optional ( single - sender auto snd ); 
Returns a sender that maps the value channel from a T to an std::optional<std::decay_t<T>>, and maps the stopped channel to an empty std::optional<std::decay_t<T>>.
4.21.8. execution :: stopped_as_error 
template < move_constructible Error > execution :: sender auto stopped_as_error ( execution :: sender auto snd , Error err ); 
Returns a sender that maps the stopped channel to an error of the provided value err.
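A brief sketch of how this adaptor might be used to surface cancellation as an ordinary error (make_request and the error type are assumptions):

struct request_cancelled {};  // hypothetical error type

// If make_request() completes with the stopped signal, downstream code will
// instead see an error carrying request_cancelled.
execution::sender auto snd =
    execution::stopped_as_error(make_request(), request_cancelled{});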
4.21.9. execution :: bulk 
execution::sender auto bulk(
    execution::sender auto input,
    std::integral auto shape,
    invocable<decltype(shape), values-sent-by(input)...> function);
Returns a sender describing the task of invoking the provided function with every index in the provided shape along with the values sent by the input sender. The returned sender completes once all invocations have completed, or an error has occurred. If it completes by sending values, they are equivalent to those sent by the input sender.
No instance of function will begin executing until the returned sender is started.
In this proposal, only integral types are used to specify the shape of the bulk section. We expect that future papers may wish to explore extensions of the interface to explore additional kinds of shapes, such as multi-dimensional grids, that are commonly used for parallel computing tasks.
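As a usage sketch (the scheduler sch and the data are assumptions for illustration), the following squares every element of a vector, with one invocation of the function per index:

std::vector<double> data(100, 1.5);

execution::sender auto work =
    execution::schedule(sch)                              // start on sch
  | execution::bulk(data.size(), [&data](std::size_t i) { // one call per index i
        data[i] = data[i] * data[i];
    });

this_thread::sync_wait(std::move(work));  // safe: data outlives the wait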
4.21.10. execution :: split 
execution :: sender auto split ( execution :: sender auto sender ); 
If the provided sender is a multi-shot sender, returns that sender. Otherwise, returns a multi-shot sender which sends values equivalent to the values sent by the provided sender. See § 4.7 Senders can be either multi-shot or single-shot.
4.21.11. execution :: when_all 
execution :: sender auto when_all ( execution :: sender auto ... inputs ); execution :: sender auto when_all_with_variant ( execution :: sender auto ... inputs ); 
The returned sender has no completion schedulers.
See § 4.9 Senders are joinable.
execution::sender auto sends_1 = ...;
execution::sender auto sends_abc = ...;

execution::sender auto both = execution::when_all(sends_1, sends_abc);

execution::sender auto final = execution::then(both, [](auto... args) {
    std::cout << std::format("the two args: {}, {}", args...);
});
// when final executes, it will print "the two args: 1, abc"
4.21.12. execution :: transfer_when_all 
execution :: sender auto transfer_when_all ( execution :: scheduler auto sched , execution :: sender auto ... inputs ); execution :: sender auto transfer_when_all_with_variant ( execution :: scheduler auto sched , execution :: sender auto ... inputs ); 
Similar to § 4.21.11 execution::when_all, but returns a sender whose value completion scheduler is the provided scheduler.
See § 4.9 Senders are joinable.
4.21.13. execution :: ensure_started 
execution :: sender auto ensure_started ( execution :: sender auto sender ); 
Once ensure_started returns, it is known that the provided sender has been connected and start has been called on the resulting operation state (see § 5.2 Operation states represent work); in other words, the work described by the provided sender has been submitted for execution on the appropriate execution resources. Returns a sender which completes when the provided sender completes and sends values equivalent to those of the provided sender.
If the returned sender is destroyed before execution::connect() is called, or if execution::connect() is called but the returned operation state is destroyed before execution::start() is called, then the started operation is detached: it continues running to completion and its result is discarded.
Note that the application will need to make sure that resources are kept alive in the case that the operation detaches.
e.g. by holding a std::shared_ptr to those resources in a way that keeps them alive until the detached operation has completed.
4.22. User-facing sender consumers
A sender consumer is an algorithm that takes one or more senders, which it may execution::connect, as parameters, and does not return a sender.
4.22.1. execution :: start_detached 
void start_detached ( execution :: sender auto sender ); 
Like execution::ensure_started, but does not return a value; if the provided sender sends an error instead of a value, std::terminate is called.
4.22.2. this_thread :: sync_wait 
auto sync_wait ( execution :: sender auto sender ) requires ( always - sends - same - values ( sender )) -> std :: optional < std :: tuple < values - sent - by ( sender ) >> ; 
If the provided sender sends an error instead of values, sync_wait throws that error as an exception, or rethrows the original exception if the error is of type std::exception_ptr.
If the provided sender sends the "stopped" signal instead of values, sync_wait returns an empty optional.
For an explanation of the requires clause, see § 5.8 All senders are typed. That section also describes another sender consumer, built on top of sync_wait: sync_wait_with_variant.
Note: This function is specified inside std::this_thread, and not inside execution, because it blocks the current thread of execution while waiting for the provided sender to complete.
4.23. execution :: execute 
   In addition to the three categories of functions presented above, we also propose to include a convenience function for fire-and-forget eager one-way submission of an invocable to a scheduler, to fulfil the role of one-way executors from P0443.
void execution::execute(execution::scheduler auto sched, std::invocable auto fn);
Submits the provided function for execution on the provided scheduler, as-if by:
auto snd = execution::schedule(sched);
auto work = execution::then(snd, fn);
execution::start_detached(work);
5. Design - implementer side
5.1. Receivers serve as glue between senders
A receiver is a callback that supports more than one channel. In fact, it supports three of them:
- 
     set_value, which is the moral equivalent of an operator() or a function call, and which signals successful completion of the operation, carrying its result values; 
- 
     set_error, which signals that an error has happened while scheduling the current work, executing the current work, or somewhere earlier in the sender chain; and 
- 
     set_stopped, which signals that the operation completed without succeeding (set_value) and without failing (set_error); this usually means the operation was asked to stop early because its results were no longer needed. 
Once an async operation has been started, exactly one of these functions must be invoked on the receiver before it is destroyed.
While the receiver interface may look novel, it is in fact very similar to the
interface of std::promise, which also provides separate ways to set the value (set_value) or the error (set_exception) of an asynchronous result.
Receivers are not a part of the end-user-facing API of this proposal; they are necessary to allow unrelated senders to communicate with each other, but the only users who will interact with receivers directly are authors of senders.
Receivers are what is passed as the second argument to § 5.3 execution::connect.
5.2. Operation states represent work
An operation state is an object that represents work. Unlike senders, it is not a chaining mechanism; instead, it is a concrete object that packages the work described by a full sender chain, ready to be executed. An operation state is neither movable nor
copyable, and its interface consists of a single algorithm: start, which serves as the submission point for the work that the operation state represents.
Operation states are not a part of the user-facing API of this proposal; they are necessary for implementing sender consumers like this_thread::sync_wait, and knowledge of them is required to implement senders, so the only users who will interact with operation states directly are authors of senders and of sender algorithms.
The return value of § 5.3 execution::connect must satisfy the operation state concept.
5.3. execution::connect
execution::connect is a customization point that connects a sender with a receiver, producing an operation state object that packages the described work so that it can later be started:
execution::sender auto snd = some input sender;
execution::receiver auto rcv = some receiver;
execution::operation_state auto state = execution::connect(snd, rcv);

execution::start(state);
// at this point, it is guaranteed that the work represented by state has been submitted
// to an execution resource, and that execution resource will eventually call one of the
// completion operations on rcv

// operation states are not movable, and therefore this operation state object must be
// kept alive until the operation finishes
5.4. Sender algorithms are customizable
Senders being able to advertise what their completion schedulers are fulfills one of the promises of senders: that of being able to customize an implementation of a sender algorithm based on what scheduler any work it depends on will complete on.
The simple way to provide customizations for functions like then, which was also the way proposed in A Unified Executors Proposal for C++, is to follow the customization scheme adopted by the C++20 ranges library; that is, to define the expression then(sender, invocable) to be equivalent to:
- 
     sender.then(invocable), if that expression is well-formed; otherwise 
- 
     then(sender, invocable), found via argument-dependent lookup, if that expression is well-formed; otherwise 
- 
     a default implementation of then, which returns a standard-library sender adaptor whose semantics are then specified exactly. 
However, this definition is problematic. Imagine another sender adaptor, bulk, which is a structured abstraction of a loop over an index space. Its default implementation is just a for loop, but for accelerator runtimes such as CUDA or SYCL we would like to customize it to launch the iterations as a parallel kernel. Consider the following code:
execution::scheduler auto cuda_sch = cuda_scheduler{};

execution::sender auto initial = execution::schedule(cuda_sch);
// the type of initial is a type defined by the cuda_scheduler
// let's call it cuda::schedule_sender<>

execution::sender auto next = execution::then(initial, []{ return 1; });
// the type of next is a standard-library unspecified sender adaptor
// that wraps the cuda sender
// let's call it execution::then_sender_adaptor<cuda::schedule_sender<>>

execution::sender auto kernel_sender = execution::bulk(next, shape, [](int i){ ... });
How can we specialize the bulk sender algorithm for our custom cuda::schedule_sender type? One possible approach, using the ADL-based customization scheme described above, is the following:
namespace cuda::for_adl_purposes {
template<typename... SentValues>
class schedule_sender {
    execution::operation_state auto connect(execution::receiver auto rcv);
    execution::scheduler auto get_completion_scheduler() const;
};

execution::sender auto bulk(
    execution::sender auto&& input,
    execution::shape auto&& shape,
    invocable<sender-values(input)> auto&& fn)
{
    // return a cuda sender representing a bulk kernel launch
}
} // namespace cuda::for_adl_purposes
However, if the input sender is not just a cuda::schedule_sender, but the execution::then_sender_adaptor<cuda::schedule_sender<>> from the example above, this customization will not be found: the adaptor type lives in namespace std::execution, so argument-dependent lookup will not consider the overload in namespace cuda::for_adl_purposes, even though the work it describes will ultimately execute on the CUDA scheduler.
This means that well-meant specialization of sender algorithms that are entirely scheduler-agnostic can have negative consequences. The scheduler-specific specialization - which is essential for good performance on platforms providing specialized ways to launch certain sender algorithms - would not be selected in such cases. But it’s really the scheduler that should control the behavior of sender algorithms when a non-default implementation exists, not the sender. Senders merely describe work; schedulers, however, are the handle to the runtime that will eventually execute said work, and should thus have the final say in how the work is going to be executed.
Therefore, we are proposing the following customization scheme (also modified to take § 5.9 Ranges-style CPOs vs tag_invoke into account): for any sender algorithm that takes a sender as its first argument, the expression execution::<sender-algorithm>(sender, args...) should be equivalent to:
- 
     tag_invoke(<sender-algorithm>, get_completion_scheduler<Tag>(get_env(sender)), sender, args...), if that expression is well-formed; otherwise 
- 
     tag_invoke(<sender-algorithm>, sender, args...), if that expression is well-formed; otherwise 
- 
     a default implementation, if there exists a default implementation of the given sender algorithm. 
where <sender-algorithm> is the customization point object of the given sender algorithm, and Tag is the completion tag (set_value_t, set_error_t, or set_stopped_t) appropriate to that algorithm.
For sender algorithms which accept concepts other than sender as their first argument, we propose that the customization scheme remains as it has been in A Unified Executors Proposal for C++ so far, i.e. customization via tag_invoke on the algorithm and its arguments, without the completion-scheduler-based dispatch.
5.5. Sender adaptors are lazy
Contrary to early revisions of this paper, we propose to make all sender adaptors perform strictly lazy submission, unless specified otherwise (the one notable exception in this paper is § 4.21.13 execution::ensure_started, whose sole purpose is to start an input sender).
Strictly lazy submission means that there is a guarantee that no work is submitted to an execution resource before a receiver is connected to a sender and execution::start is called on the resulting operation state.
5.6. Lazy senders provide optimization opportunities
Because lazy senders fundamentally describe work, instead of describing or representing the submission of said work to an execution resource, and thanks to the flexibility of the customization of most sender algorithms, they provide an opportunity for fusing multiple algorithms in a sender chain together, into a single function that can later be submitted for execution by an execution resource. There are two ways this can happen.
The first (and most common) way for such optimizations to happen is thanks to the structure of the implementation: because all the work is done within callbacks invoked on the completion of an earlier sender, recursively up to the original source of computation, the compiler is able to see a chain of work described using senders as a tree of tail calls, allowing for inlining and removal of most of the sender machinery. In fact, when work is not submitted to execution resources outside of the current thread of execution, compilers are capable of removing the senders abstraction entirely, while still allowing for composition of functions across different parts of a program.
The second way for this to occur is when a sender algorithm is specialized for a specific set of arguments. For instance, we expect that, for senders which are known to have been started already, § 4.21.13 execution::ensure_started will be an identity transformation, because the sender algorithm will be specialized for such senders. Similarly, an implementation could recognize two subsequent § 4.21.9 execution::bulks of compatible shapes, and merge them together into a single submission of a GPU kernel.
5.7. Execution resource transitions are two-step
Because execution::transfer takes a sender as its first argument, under the customization scheme described in § 5.4 Sender algorithms are customizable it would be customized by the scheduler that its input sender completes on, i.e. the source of the transition.
This, however, is a problem: because customization of sender algorithms must be controlled by the scheduler they will run on (see § 5.4 Sender algorithms are customizable), the type of the sender returned from transfer must be controllable by the target scheduler, so that sender algorithms chained after the transition can be customized for it; yet the target scheduler gets no say in the customization of transfer itself.
To allow for such customization from both ends, we propose the inclusion of a secondary transitioning sender adaptor, called schedule_from. This adaptor is not meant to be used by end users; it exists only so that schedulers that are the target of a transition can customize it.
The default implementation of transfer(snd, sched) is defined in terms of schedule_from(sched, snd).
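The relationship just described can be pictured with the following expository sketch (illustration only; the normative definition is in § 11 Execution control library [exec]):

// transfer's default behaviour delegates to the secondary adaptor, which the
// target scheduler may customize.
execution::sender auto transfer_like(execution::sender auto snd,
                                     execution::scheduler auto sched) {
    return execution::schedule_from(std::move(sched), std::move(snd));
}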
5.8. All senders are typed
All senders must advertise the types they will send when they complete.
This is necessary for a number of features, and writing code in a way that’s
agnostic of whether an input sender is typed or not in common sender adaptors
such as execution::let_value proved to be hard and brittle.
The mechanism for this advertisement is similar to the one in A Unified Executors Proposal for C++; the
way to query the types is through the sender's associated traits: its value_types and error_types alias templates and its sends_stopped constant.
There’s a choice made in the specification of § 4.22.2 this_thread::sync_wait: it returns a tuple of values sent by the
sender passed to it, wrapped in std::optional to handle the case of the sender completing with the "stopped" signal. For example:
execution::sender auto sends_1 = ...;
execution::sender auto sends_2 = ...;
execution::sender auto sends_3 = ...;

auto [a, b, c] = this_thread::sync_wait(
    execution::transfer_when_all(
        execution::get_completion_scheduler<execution::set_value_t>(get_env(sends_1)),
        sends_1,
        sends_2,
        sends_3
    )).value();
// a == 1
// b == 2
// c == 3
This works well for senders that always send the same set of arguments. If we ignore the possibility of a sender sending different sets of arguments into a receiver, we can specify the "canonical" (i.e. required to be followed by all senders) form of the value_types alias of a sender which sends Types... as the following:
template <template <typename...> typename TupleLike>
using value_types = TupleLike<Types...>;
If senders could only ever send one specific set of values, this would probably need to be the required form of value_types for all senders.
This matter is somewhat complicated by the fact that (1) senders may send multiple different sets of values depending on runtime conditions, and (2) receivers may accept multiple different sets of values by overloading set_value. The canonical form of value_types for a sender that may send any one of the sets Types1..., Types2..., ..., TypesN... is therefore the following:
template <template <typename...> typename TupleLike,
          template <typename...> typename VariantLike>
using value_types = VariantLike<
    TupleLike<Types1...>,
    TupleLike<Types2...>,
    ...,
    TupleLike<TypesN...>
>;
This, however, introduces a couple of complications:
- 
     A simple sender such as just(1) would also need to follow this form, advertising std::variant<std::tuple<int>> as its value_types, even though it can only ever send one specific set of values. 
- 
     As a consequence of (1): because we would like sync_wait to return a std::tuple<int> for just(1), and not a std::variant<std::tuple<int>>, the return type of sync_wait cannot simply mirror value_types, and it is not obvious what sync_wait should return for senders that send multiple different sets of values. 
One possible solution to (2) above is to place a requirement on sync_wait that the sender passed to it may only send a single set of values (this is the always-sends-same-values requirement shown in its signature), and to provide a separate consumer, sync_wait_with_variant, for senders that may send multiple different sets of values. This is the approach this paper takes:
auto sync_wait_with_variant(execution::sender auto sender)
    -> std::optional<std::variant<
           std::tuple<values0-sent-by(sender)>,
           std::tuple<values1-sent-by(sender)>,
           ...,
           std::tuple<valuesn-sent-by(sender)>
       >>;

auto sync_wait(execution::sender auto sender)
    requires (always-sends-same-values(sender))
    -> std::optional<std::tuple<values-sent-by(sender)>>;
5.9. Ranges-style CPOs vs tag_invoke 
   The contemporary technique for customization in the Standard Library is customization point objects. A customization point object first looks for member functions, and then for non-member functions found by argument-dependent lookup, with the same name as the customization point, and calls them if they match. This is the technique used by the C++20 ranges library, and previous executors proposals (A Unified Executors Proposal for C++ and Towards C++23 executors: A proposal for an initial set of algorithms) intended to use it as well. However, it has several unfortunate consequences:
- 
     It does not allow for easy propagation of customization points unknown to the adaptor to a wrapped object, which makes writing universal adapter types much harder - and this proposal uses quite a lot of those. 
- 
     It effectively reserves names globally. Because neither member names nor ADL-found functions can be qualified with a namespace, every customization point object that uses the ranges scheme reserves the name for all types in all namespaces. This is unfortunate due to the sheer number of customization points already in the paper, but also ones that we are envisioning in the future. It’s also a big problem for one of the operations being proposed already: sync_wait, where a hypothetical future std::this_fiber::sync_wait would need to coexist with std::this_thread::sync_wait.
This paper proposes to instead use the mechanism described in tag_invoke: A general pattern for supporting customisable functions: 
In short, instead of using globally reserved names, 
Using 
- 
     It reserves only a single global name, instead of reserving a global name for every customization point object we define. 
- 
     It is possible to propagate customizations to a subobject, because the information of which customization point is being resolved is in the type of an argument, and not in the name of the function: // forward most customizations to a subobject template < typename Tag , typename ... Args > friend auto tag_invoke ( Tag && tag , wrapper & self , Args && ... args ) { return std :: forward < Tag > ( tag )( self . subobject , std :: forward < Args > ( args )...); } // but override one of them with a specific value friend auto tag_invoke ( specific_customization_point_t , wrapper & self ) { return self . some_value ; } 
- 
     It is possible to pass those as template arguments to types, because the information of which customization point is being resolved is in the type. Similarly to how A Unified Executors Proposal for C++ defines a polymorphic executor wrapper which accepts a list of properties it supports, we can imagine scheduler and sender wrappers that accept a list of queries and operations they support. That list can contain the types of the customization point objects, and the polymorphic wrappers can then specialize those customization points on themselves using tag_invoke unifex :: any_unique 
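For illustration, a new customization point object defined in terms of std::tag_invoke (as specified in § 9.1 below) and a type customizing it through a hidden friend might look as follows; get_name and widget are, of course, hypothetical:

    #include <functional>   // std::tag_invoke facilities as added in § 9.1
    #include <string>

    struct get_name_t {
      template <class T>
        requires std::tag_invocable<get_name_t, const T&>
      auto operator()(const T& t) const
        noexcept(std::nothrow_tag_invocable<get_name_t, const T&>)
        -> std::tag_invoke_result_t<get_name_t, const T&> {
        return std::tag_invoke(*this, t);
      }
    };
    inline constexpr get_name_t get_name{};

    struct widget {
      friend std::string tag_invoke(get_name_t, const widget&) { return "widget"; }
    };

    // get_name(widget{}) == "widget"; only the single name tag_invoke is reserved globally.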
6. Specification
Much of this wording follows the wording of A Unified Executors Proposal for C++.
§ 8 Library introduction [library] is meant to be a diff relative to the wording of the [library] clause of Working Draft, Standard for Programming Language C++.
§ 9 General utilities library [utilities] is meant to be a diff relative to the wording of the [utilities] clause of Working Draft, Standard for Programming Language C++. This diff applies changes from tag_invoke: A general pattern for supporting customisable functions.
§ 10 Thread support library [thread] is meant to be a diff relative to the wording of the [thread] clause of Working Draft, Standard for Programming Language C++. This diff applies changes from Composable cancellation for sender-based async operations.
§ 11 Execution control library [exec] is meant to be added as a new library clause to the working draft of C++.
7. Exception handling [except]
7.1. Special functions [except.special]
7.1.1. General [except.special.general]
7.1.1.1. The std :: terminate function [except.terminate]
   
when a callback invocation exits via an exception when requesting stop on a std :: stop_source or a std :: in_place_stop_source ([stopsource.mem], [stopsource.inplace.mem]), or in the constructor of std :: stop_callback or std :: in_place_stop_callback ([stopcallback.cons], [stopcallback.inplace.cons]) when a callback invocation exits via an exception.
8. Library introduction [library]
Add the header < execution > to the table of C++ library headers in [headers]. In subclause [conforming], after [lib.types.movedfrom], add the following new subclause with suggested stable name [lib.tmpl-heads].
16.4.6.17 Class template-heads
If a class template’s template-head is marked with "arguments are not associated entities", any template arguments do not contribute to the associated entities ([basic.lookup.argdep]) of a function call where a specialization of the class template is an associated entity. In such a case, the class template can be implemented as an alias template referring to a templated class, or as a class template where the template arguments themselves are templated classes.
[Example:
    template < class T >   // arguments are not associated entities
    struct S {};

    namespace N {
      int f ( auto );
      struct A {};
    }

    int x = f ( S < N :: A > {});   // error: N::f not a candidate

The template S specified above can be implemented as

    template < class T >
    struct s - impl {
      struct type { };
    };

    template < class T >
    using S = typename s - impl < T >:: type ;

or as

    template < class T >
    struct hidden {
      using type = struct _ {
        using type = T ;
      };
    };

    template < class HiddenT >
    struct s - impl {
      using T = typename HiddenT :: type ;
    };

    template < class T >
    using S = s - impl < typename hidden < T >:: type > ;

-- end example]
9. General utilities library [utilities]
9.1. Function objects [function.objects]
9.1.1. Header < functional > 
   At the end of this subclause, insert the following declarations into the synopsis within 
// expositon only: template < class Fn , class ... Args > concept callable = requires ( Fn && fn , Args && ... args ) { std :: forward < Fn > ( fn )( std :: forward < Args > ( args )...); }; template < class Fn , class ... Args > concept nothrow - callable = callable < Fn , Args ... > && requires ( Fn && fn , Args && ... args ) { { std :: forward < Fn > ( fn )( std :: forward < Args > ( args )...) } noexcept ; }; template < class Fn , class ... Args > using call - result - t = decltype ( declval < Fn > ()( declval < Args > ()...)); // [func.tag_invoke], tag_invoke namespace tag - invoke { // exposition only void tag_invoke (); template < class Tag , class ... Args > concept tag_invocable = requires ( Tag && tag , Args && ... args ) { tag_invoke ( std :: forward < Tag > ( tag ), std :: forward < Args > ( args )...); }; template < class Tag , class ... Args > concept nothrow_tag_invocable = tag_invocable < Tag , Args ... > && requires ( Tag && tag , Args && ... args ) { { tag_invoke ( std :: forward < Tag > ( tag ), std :: forward < Args > ( args )...) } noexcept ; }; template < class Tag , class ... Args > using tag_invoke_result_t = decltype ( tag_invoke ( declval < Tag > (), declval < Args > ()...)); template < class Tag , class ... Args > struct tag_invoke_result < Tag , Args ... > { using type = tag_invoke_result_t < Tag , Args ... > ; // present if and only if tag_invocable<Tag, Args...> is true }; struct tag ; // exposition only } inline constexpr tag - invoke :: tag tag_invoke {}; using tag - invoke :: tag_invocable ; using tag - invoke :: nothrow_tag_invocable ; using tag - invoke :: tag_invoke_result_t ; using tag - invoke :: tag_invoke_result ; template < auto & Tag > using tag_t = decay_t < decltype ( Tag ) > ; 
9.1.2. tag_invoke 
   Insert this section as a new subclause, between Searchers [func.search] and Class template 
Given a subexpression E, let REIFY ( E ) be expression-equivalent to a glvalue with the same type and value as E, as if by identity ()( E ).
The name std :: tag_invoke denotes a customization point object [customization.point.object]. Given subexpressions T and A ..., the expression std :: tag_invoke ( T , A ...) is expression-equivalent [defns.expression-equivalent] to tag_invoke ( REIFY ( T ), REIFY ( A )...), with overload resolution performed in a context in which unqualified lookup for tag_invoke finds only the declaration
    void tag_invoke ();
[Note: Diagnosable ill-formed cases above result in substitution failure when std :: tag_invoke ( T , A ...) appears in the immediate context of a template instantiation. —end note]
10. Thread support library [thread]
10.1. Stop tokens [thread.stoptoken]
10.1.1. Header < stop_token > 
   At the beginning of this subclause, insert the following declarations into the synopsis within 
template < template < class > class > struct check - type - alias - exists ; // exposition-only template < class T > concept stoppable_token = see - below ; template < class T , class CB , class Initializer = CB > concept stoppable_token_for = see - below ; template < class T > concept unstoppable_token = see - below ; 
At the end of this subclause, insert the following declarations into the synopsis within 
// [stoptoken.never], class never_stop_token class never_stop_token ; // [stoptoken.inplace], class in_place_stop_token class in_place_stop_token ; // [stopsource.inplace], class in_place_stop_source class in_place_stop_source ; // [stopcallback.inplace], class template in_place_stop_callback template < class CB > class in_place_stop_callback ; template < class T , class CB > using stop_callback_for_t = typename T :: template callback_type < CB > ; 
10.1.2. Stop token concepts [thread.stoptoken.concepts]
Insert this section as a new subclause between Header 
The stoppable_token concept checks for the basic interface of a stop token that is copyable and allows polling to see if stop has been requested and also whether a stop request is possible. For a stop token type T and a type CB that is callable with no arguments, the type T :: callback_type < CB > is valid and denotes the stop callback type to use to register a callback to be executed if a stop request is ever made on a stoppable_token of type T. The stoppable_token_for concept checks for a stop token type compatible with a given callback type. The unstoppable_token concept checks for a stop token type that does not allow stopping.

    template < class T >
      concept stoppable_token =
        copyable < T > &&
        equality_comparable < T > &&
        requires ( const T t ) {
          { T ( t ) } noexcept ;   // see implicit expression variations ([concepts.equality])
          { t . stop_requested () } noexcept -> same_as < bool > ;
          { t . stop_possible () } noexcept -> same_as < bool > ;
          typename check - type - alias - exists < T :: template callback_type > ;
        };

    template < class T , class CB , class Initializer = CB >
      concept stoppable_token_for =
        stoppable_token < T > &&
        invocable < CB > &&
        constructible_from < CB , Initializer > &&
        requires { typename stop_callback_for_t < T , CB > ; } &&
        constructible_from < stop_callback_for_t < T , CB > , const T & , Initializer > ;

    template < class T >
      concept unstoppable_token =
        stoppable_token < T > &&
        requires {
          { bool_constant < T :: stop_possible () > {} } -> same_as < false_type > ;
        };

LWG directed me to replace T :: stop_possible () with t . stop_possible () because of the recent constexpr changes in P2280R2. However, even with those changes, a nested requirement like requires ( ! t . stop_possible ()), where t is an argument in the requirement-parameter-list, is ill-formed according to [expr.prim.req.nested/p2]: A local parameter shall only appear as an unevaluated operand within the constraint-expression.
This is the subject of core issue 2517.
Let t and u be distinct, valid objects of type T. The type T models stoppable_token only if:

If t . stop_possible () evaluates to false then, if t and u reference the same logical shared stop state, u . stop_possible () shall also subsequently evaluate to false and u . stop_requested () shall also subsequently evaluate to false.

If t . stop_requested () evaluates to true then, if t and u reference the same logical shared stop state, u . stop_requested () shall also subsequently evaluate to true and u . stop_possible () shall also subsequently evaluate to true.

Let t and u be distinct, valid objects of type T and let init be an object of type Initializer. Then for some type CB, the type T models stoppable_token_for < CB , Initializer > only if:

The type T :: callback_type < CB > models: constructible_from < T , Initializer > && constructible_from < T & , Initializer > && constructible_from < const T , Initializer >

Direct non-list initializing an object cb of type T :: callback_type < CB > from t , init shall, if t . stop_possible () is true, construct an instance, callback, of type CB, direct-initialized with init, and register callback with t's shared stop state such that callback will be invoked with an empty argument list if a stop request is made on the shared stop state.

If t . stop_requested () evaluates to true at the time callback is registered then callback can be invoked on the thread executing cb's constructor.

If callback is invoked then, if t and u reference the same shared stop state, an evaluation of u . stop_requested () will be true if the beginning of the invocation of callback strongly-happens-before the evaluation of u . stop_requested ().

[Note: If t . stop_possible () evaluates to false then the construction of cb is not required to construct and initialize callback. --end note]

Construction of a T :: callback_type < CB > instance shall only throw exceptions thrown by the initialization of the CB instance from the value of type Initializer.

Destruction of the T :: callback_type < CB > object, cb, removes callback from the shared stop state such that callback will not be invoked after the destructor returns.

If callback is currently being invoked on another thread then the destructor of cb will block until the invocation of callback returns such that the return from the invocation of callback strongly-happens-before the destruction of callback.

Destruction of a callback cb shall not block on the completion of the invocation of some other callback registered with the same shared stop state.
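Non-normative illustration: assuming an implementation of the facilities proposed in this subclause and the classes specified below, the following hold:

    static_assert(std::stoppable_token<std::stop_token>);     // with the callback_type alias added in [stoptoken.general]
    static_assert(std::stoppable_token<std::in_place_stop_token>);
    static_assert(std::unstoppable_token<std::never_stop_token>);
    static_assert(std::stoppable_token_for<std::in_place_stop_token, decltype([]{})>);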
10.1.3. Class stop_token 
   10.1.3.1. General [stoptoken.general]
Modify the synopsis of class 
namespace std { class stop_token { public : template < class T > using callback_type = stop_callback < T > ; // [stoptoken.cons], constructors, copy, and assignment stop_token () noexcept ; // ... 
10.1.4. Class never_stop_token 
   Insert a new subclause, Class 
10.1.4.1. General [stoptoken.never.general]
- 
     The class never_stop_token unstoppable_token 
namespace std { class never_stop_token { // exposition only struct callback { explicit callback ( never_stop_token , auto && ) noexcept {} }; public : template < class > using callback_type = callback ; [[ nodiscard ]] static constexpr bool stop_requested () noexcept { return false; } [[ nodiscard ]] static constexpr bool stop_possible () noexcept { return false; } [[ nodiscard ]] friend bool operator == ( const never_stop_token & , const never_stop_token & ) noexcept = default ; }; } 
10.1.5. Class in_place_stop_token 
   Insert a new subclause, Class 
10.1.5.1. General [stoptoken.inplace.general]
- 
     The class in_place_stop_token stop_requested stop_possible in_place_stop_source in_place_stop_token in_place_stop_callback in_place_stop_source 
namespace std { class in_place_stop_token { public : template < class CB > using callback_type = in_place_stop_callback < CB > ; // [stoptoken.inplace.cons], constructors, copy, and assignment in_place_stop_token () noexcept ; ~ in_place_stop_token (); void swap ( in_place_stop_token & ) noexcept ; // [stoptoken.inplace.mem], stop handling [[ nodiscard ]] bool stop_requested () const noexcept ; [[ nodiscard ]] bool stop_possible () const noexcept ; [[ nodiscard ]] friend bool operator == ( const in_place_stop_token & , const in_place_stop_token & ) noexcept = default ; friend void swap ( in_place_stop_token & lhs , in_place_stop_token & rhs ) noexcept ; private : const in_place_stop_source * source_ ; // exposition only }; } 
10.1.5.2. Constructors, copy, and assignment [stoptoken.inplace.cons]
in_place_stop_token () noexcept ; 
- 
     Effects: initializes source_ nullptr 
void swap ( stop_token & rhs ) noexcept ; 
- 
     Effects: Exchanges the values of source_ rhs . source_ 
10.1.5.3. Members [stoptoken.inplace.mem]
[[ nodiscard ]] bool stop_requested () const noexcept ; 
- 
     Effects: Equivalent to: return source_ != nullptr && source_ -> stop_requested (); 
- 
     [Note: The behavior of stop_requested () in_place_stop_source 
[[ nodiscard ]] bool stop_possible () const noexcept ; 
- 
     Effects: Equivalent to: return source_ != nullptr ; 
- 
     [Note: The behavior of stop_possible () in_place_stop_source 
10.1.5.4. Non-member functions [stoptoken.inplace.nonmembers]
friend void swap ( in_place_stop_token & x , in_place_stop_token & y ) noexcept ; 
- 
     Effects: Equivalent to: x . swap ( y ) 
10.1.6. Class in_place_stop_source 
   Insert a new subclause, Class 
10.1.6.1. General [stopsource.inplace.general]
- 
     The class in_place_stop_source in_place_stop_source in_place_stop_token in_place_stop_token in_place_stop_source in_place_stop_source 
namespace std { class in_place_stop_source { public : // [stopsource.inplace.cons], constructors, copy, and assignment in_place_stop_source () noexcept ; in_place_stop_source ( in_place_stop_source && ) noexcept = delete ; ~ in_place_stop_source (); //[stopsource.inplace.mem], stop handling [[ nodiscard ]] in_place_stop_token get_token () const noexcept ; [[ nodiscard ]] static constexpr bool stop_possible () noexcept { return true; } [[ nodiscard ]] bool stop_requested () const noexcept ; bool request_stop () noexcept ; }; } 
- 
     An instance of in_place_stop_source - 
       The stop state is checked. If stop has not been requested, the callback invocation is added to the list of registered callback invocations, and registration has succeeded. 
- 
       Otherwise, registration has failed. 
 When an invocation of a callback is unregistered, the invocation is atomically removed from the list of registered callback invocations. The removal is not blocked by the concurrent execution of another callback invocation in the list. If the callback invocation being unregistered is currently executing, then: - 
       If the execution of the callback invocation is happening concurrently on another thread, the completion of the execution strongly happens before ([intro.races]) the end of the callback’s lifetime. 
- 
       Otherwise, the execution is happening on the current thread. Removal of the callback invocation does not block waiting for the execution to complete. 
 
- 
       
10.1.6.2. Constructors, copy, and assignment [stopsource.inplace.cons]
in_place_stop_source () noexcept ; 
- 
     Effects: Initializes a new stop state inside * this 
- 
     Postconditions: stop_requested () false.
10.1.6.3. Members [stopsource.inplace.mem]
[[ nodiscard ]] in_place_stop_token get_token () const noexcept ; 
- 
     Returns: A new associated in_place_stop_token 
[[ nodiscard ]] bool stop_requested () const noexcept ; 
- 
     Returns: trueif the stop state inside* this false.
bool request_stop () noexcept ; 
- 
     Effects: Atomically determines whether the stop state inside * this terminate 
- 
     Postconditions: stop_requested () true.
- 
     Returns: trueif this call made a stop request; otherwisefalse.
10.1.7. Class template in_place_stop_callback 
   Insert a new subclause, Class template 
10.1.7.1. General [stopcallback.inplace.general]
- 
namespace std { template < class Callback > class in_place_stop_callback { public : using callback_type = Callback ; // [stopcallback.inplace.cons], constructors and destructor template < class C > explicit in_place_stop_callback ( in_place_stop_token st , C && cb ) noexcept ( is_nothrow_constructible_v < Callback , C > ); ~ in_place_stop_callback (); in_place_stop_callback ( in_place_stop_callback && ) = delete ; private : Callback callback_ ; // exposition only }; template < class Callback > in_place_stop_callback ( in_place_stop_token , Callback ) -> in_place_stop_callback < Callback > ; } 
- 
     Mandates: in_place_stop_callback Callback invocable destructible 
- 
     Preconditions: in_place_stop_callback Callback invocable destructible 
- 
     Recommended practice: Implementations should use the storage of the in_place_stop_callback in_place_stop_source 
10.1.7.2. Constructors and destructor [stopcallback.inplace.cons]
template < class C > explicit in_place_stop_callback ( in_place_stop_token st , C && cb ) noexcept ( is_nothrow_constructible_v < Callback , C > ); 
- 
     Constraints: Callback C constructible_from < Callback , C > 
- 
     Preconditions: Callback C constructible_from < Callback , C > 
- 
     Effects: Initializes callback_ std :: forward < C > ( cb ) in_place_stop_source st * this std :: forward < Callback > ( callback_ )() in_place_stop_source 
- 
     Throws: Any exception thrown by the initialization of callback_ 
- 
     Remarks: If evaluating std :: forward < Callback > ( callback_ )() terminate 
~ in_place_stop_callback (); 
- 
     Effects: Unregisters ([stopsource.inplace.general]) the callback invocation from the associated in_place_stop_source 
- 
     Remarks: A program has undefined behavior if the start of this destructor does not strongly happen before the start of the destructor of the associated in_place_stop_source 
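Non-normative illustration: assuming an implementation of the < stop_token > additions proposed in this clause, the source, token, and callback components compose as follows (the lambda and the printed message are arbitrary):

    #include <cassert>
    #include <cstdio>
    #include <stop_token>   // as amended by this proposal

    int main() {
      std::in_place_stop_source src;
      std::in_place_stop_token tok = src.get_token();

      // Register a callback; it is invoked if and when stop is requested.
      std::in_place_stop_callback cb{tok, [] { std::puts("stop requested"); }};

      assert(!tok.stop_requested());
      src.request_stop();              // invokes the registered callback on this thread
      assert(tok.stop_requested());
      assert(!src.request_stop());     // only the first call makes a stop request
    }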
11. Execution control library [exec]
11.1. General [exec.general]
- 
     This Clause describes components supporting execution of function objects [function.objects]. 
- 
     The following subclauses describe the requirements, concepts, and components for execution control primitives as summarized in Table 1. 
| Subclause | | Header |
| [exec.execute] | One-way execution | <execution> |
- 
     [Note: A large number of execution control primitives are customization point objects. For an object one might define multiple types of customization point objects, for which different rules apply. Table 2 shows the types of customization point objects used in the execution control library: 
| Customization point object type | Purpose | Examples | 
|---|---|---|
| core | provide core execution functionality, and connection between core components | |
| completion functions | called by senders to announce the completion of the work (success, error, or cancellation) | set_value, set_error, set_stopped |
| senders | allow the specialization of the provided sender algorithms | |
| queries | allow querying different properties of objects | |
-- end note]
- 
     This clause makes use of the following exposition-only entities: - 
template < class Fn , class ... Args > requires callable < Fn , Args ... > constexpr auto mandate - nothrow - call ( Fn && fn , Args && ... args ) noexcept -> call - result - t < Fn , Args ... > { return std :: forward < Fn > ( fn )( std :: forward < Args > ( args )...); } - 
         Mandates: nothrow - callable < Fn , Args ... > true.
 
- 
         
- 
template < class T > concept movable - value = move_constructible < decay_t < T >> && constructible_from < decay_t < T > , T > ; 
- 
       For function types F1 and F2 denoting R1 ( Args1 ...) and R2 ( Args2 ...) respectively, MATCHING - SIG ( F1 , F2 ) is true if and only if same_as < R1 ( Args1 && ...), R2 ( Args2 && ...) > is true.
 
- 
11.2. Queries and queryables [exec.queryable]
11.2.1. General [exec.queryable.general]
- 
     A queryable object is a read-only collection of key/value pairs where each key is a customization point object known as a query object. A query is an invocation of a query object with a queryable object as its first argument and a (possibly empty) set of additional arguments. The result of a query expression is valid as long as the queryable object is valid. A query imposes syntactic and semantic requirements on its invocations. 
- 
     Given a subexpression e q F args F ( e , args ...) F ( c , args ...) c const q 
- 
     The type of a query expression cannot be void 
- 
     The expression F ( e , args ...) 
- 
     Unless otherwise specified, the value returned by the expression F ( e , args ...) e 
11.2.2. queryable 
template < class T > concept queryable = destructible < T > ; 
- 
     The queryable 
- 
     Let e E E queryable F args requires { F ( e , args ...) } truethenF ( e , args ...) F 
11.3. Asynchronous operations [async.ops]
- 
     An execution resource is a program entity that manages a (possibly dynamic) set of execution agents ([thread.req.lockable.general]), which it uses to execute parallel work on behalf of callers. [Example 1: The currently active thread, a system-provided thread pool, and uses of an API associated with an external hardware accelerator are all examples of execution resources. -- end example] Execution resources execute asynchronous operations. An execution resource is either valid or invalid. 
- 
     An asynchronous operation is a distinct unit of program execution that: - 
       is explicitly created; 
- 
       can be explicitly started; an asynchronous operation can be started once at most; 
- 
       if started, eventually completes with a (possibly empty) set of result datums, and in exactly one of three modes: success, failure, or cancellation, known as the operation’s disposition; an asynchronous operation can only complete once; a successful completion, also known as a value completion, can have an arbitrary number of result datums; a failure completion, also known as an error completion, has a single result datum; a cancellation completion, also known as a stopped completion, has no result datum; an asynchronous operation’s async result is its disposition and its (possibly empty) set of result datums. 
- 
       can complete on a different execution resource than that on which it started; and 
- 
       can create and start other asynchronous operations called child operations. A child operation is an asynchronous operation that is created by the parent operation and, if started, completes before the parent operation completes. A parent operation is the asynchronous operation that created a particular child operation. 
 An asynchronous operation can in fact execute synchronously; that is, it can complete during the execution of its start operation on the thread of execution that started it. 
- 
       
- 
     An asynchronous operation has associated state known as its operation state. 
- 
     An asynchronous operation has an associated environment. An environment is a queryable object ([exec.queryable]) representing the execution-time properties of the operation’s caller. The caller of an asynchronous operation is its parent operation or the function that created it. An asynchronous operation’s operation state owns the operation’s environment. 
- 
     An asynchronous operation has an associated receiver. A receiver is an aggregation of three handlers for the three asynchronous completion dispositions: a value completion handler for a value completion, an error completion handler for an error completion, and a stopped completion handler for a stopped completion. A receiver has an associated environment. An asynchronous operation’s operation state owns the operation’s receiver. The environment of an asynchronous operation is equal to its receiver’s environment. 
- 
     For each completion disposition, there is a completion function. A completion function is a customization point object ([customization.point.object]) that accepts an asynchronous operation’s receiver as the first argument and the result datums of the asynchronous operation as additional arguments. The value completion function invokes the receiver’s value completion handler with the value result datums; likewise for the error completion function and the stopped completion function. A completion function has an associated type known as its completion tag that names the unqualified type of the completion function. A valid invocation of a completion function is called a completion operation. 
- 
     The lifetime of an asynchronous operation, also known as the operation’s async lifetime, begins when its start operation begins executing and ends when its completion operation begins executing. If the lifetime of an asynchronous operation’s associated operation state ends before the lifetime of the asynchronous operation, the behavior is undefined. After an asynchronous operation executes a completion operation, its associated operation state is invalid. Accessing any part of an invalid operation state is undefined behavior. 
- 
     An asynchronous operation shall not execute a completion operation before its start operation has begun executing. After its start operation has begun executing, exactly one completion operation shall execute. The lifetime of an asynchronous operation’s operation state can end during the execution of the completion operation. 
- 
     A sender is a factory for one or more asynchronous operations. Connecting a sender and a receiver creates an asynchronous operation. The asynchronous operation’s associated receiver is equal to the receiver used to create it, and its associated environment is equal to the environment associated with the receiver used to create it. The lifetime of an asynchronous operation’s associated operation state does not depend on the lifetimes of either the sender or the receiver from which it was created. A sender sends its results by way of the asynchronous operation(s) it produces, and a receiver receives those results. A sender is either valid or invalid; it becomes invalid when its parent sender (see below) becomes invalid. 
- 
     A scheduler is an abstraction of an execution resource with a uniform, generic interface for scheduling work onto that resource. It is a factory for senders whose asynchronous operations execute value completion operations on an execution agent belonging to the scheduler’s associated execution resource. A schedule-expression obtains such a sender from a scheduler. A schedule sender is the result of a schedule expression. On success, an asynchronous operation produced by a schedule sender executes a value completion operation with an empty set of result datums. Multiple schedulers can refer to the same execution resource. A scheduler can be valid or invalid. A scheduler becomes invalid when the execution resource to which it refers becomes invalid, as do any schedule senders obtained from the scheduler, and any operation states obtained from those senders. 
- 
     An asynchronous operation has one or more associated completion schedulers for each of its possible dispositions. A completion scheduler is a scheduler whose associated execution resource is used to execute a completion operation for an asynchronous operation. A value completion scheduler is a scheduler on which an asynchronous operation’s value completion operation can execute. Likewise for error completion schedulers and stopped completion schedulers. 
- 
     A sender has an associated queryable object ([exec.queryable]) known as its attributes that describes various characteristics of the sender and of the asynchronous operation(s) it produces. For each disposition, there is a query object for reading the associated completion scheduler from a sender’s attributes; i.e., a value completion scheduler query object for reading a sender’s value completion scheduler, etc. If a completion scheduler query is well-formed, the returned completion scheduler is unique for that disposition for any asynchronous operation the sender creates. A schedule sender is required to have a value completion scheduler attribute whose value is equal to the scheduler that produced the schedule sender. 
- 
     A completion signature is a function type that describes a completion operation. An asynchronous operation has a finite set of possible completion signatures. The completion signature’s return type is the completion tag associated with the completion function that executes the completion operation. The completion signature’s argument types are the types and value categories of the asynchronous operation’s result datums. Together, a sender type and an environment type E E 
- 
     A sender algorithm is a function that takes and/or returns a sender. There are three categories of sender algorithms: - 
       A sender factory is a function that takes non-senders as arguments and that returns a sender. 
- 
       A sender adaptor is a function that constructs and returns a parent sender from a set of one or more child senders and a (possibly empty) set of additional arguments. An asynchronous operation created by a parent sender is a parent to the child operations created by the child senders. 
- 
       A sender consumer is a function that takes one or more senders and a (possibly empty) set of additional arguments, and whose return type is not the type of a sender. 
 
- 
       
11.4. Header < execution > 
namespace std { // [exec.general], helper concepts template < class T > concept movable - value = see - below ; // exposition only template < class From , class To > concept decays - to = same_as < decay_t < From > , To > ; // exposition only template < class T > concept class - type = decays - to < T , T > && is_class_v < T > ; // exposition only // [exec.queryable], queryable objects template < class T > concept queryable = destructible ; // [exec.queries], queries namespace queries { // exposition only struct forwarding_query_t ; struct get_allocator_t ; struct get_stop_token_t ; } using queries :: forwarding_query_t ; using queries :: get_allocator_t ; using queries :: get_stop_token_t ; inline constexpr forwarding_query_t forwarding_query {}; inline constexpr get_allocator_t get_allocator {}; inline constexpr get_stop_token_t get_stop_token {}; template < class T > using stop_token_of_t = remove_cvref_t < decltype ( get_stop_token ( declval < T > ())) > ; template < class T > concept forwarding - query = // exposition only forwarding_query ( T {}); namespace exec - envs { // exposition only struct empty_env {}; struct get_env_t ; } using envs - envs :: empty_env ; using envs - envs :: get_env_t ; inline constexpr get_env_t get_env {}; template < class T > using env_of_t = decltype ( get_env ( declval < T > ())); } namespace std :: execution { // [exec.queries], queries enum class forward_progress_guarantee ; namespace queries { // exposition only struct get_scheduler_t ; struct get_delegatee_scheduler_t ; struct get_forward_progress_guarantee_t ; template < class CPO > struct get_completion_scheduler_t ; } using queries :: get_scheduler_t ; using queries :: get_delegatee_scheduler_t ; using queries :: get_forward_progress_guarantee_t ; using queries :: get_completion_scheduler_t ; inline constexpr get_scheduler_t get_scheduler {}; inline constexpr get_delegatee_scheduler_t get_delegatee_scheduler {}; inline constexpr get_forward_progress_guarantee_t get_forward_progress_guarantee {}; template < class CPO > inline constexpr get_completion_scheduler_t < CPO > get_completion_scheduler {}; // [exec.sched], schedulers template < class S > concept scheduler = see - below ; // [exec.recv], receivers template < class R > inline constexpr bool enable_receiver = see - below ; template < class R > concept receiver = see - below ; template < class R , class Completions > concept receiver_of = see - below ; namespace receivers { // exposition only struct set_value_t ; struct set_error_t ; struct set_stopped_t ; } using receivers :: set_value_t ; using receivers :: set_error_t ; using receivers :: set_stopped_t ; inline constexpr set_value_t set_value {}; inline constexpr set_error_t set_error {}; inline constexpr set_stopped_t set_stopped {}; // [exec.opstate], operation states template < class O > concept operation_state = see - below ; namespace op - state { // exposition only struct start_t ; } using op - state :: start_t ; inline constexpr start_t start {}; // [exec.snd], senders template < class S > inline constexpr bool enable_sender = see below ; template < class S > concept sender = see - below ; template < class S , class E = empty_env > concept sender_in = see - below ; template < class S , class R > concept sender_to = see - below ; template < class S , class Sig , class E = empty_env > concept sender_of = see below ; template < class ... 
Ts > struct type - list ; // exposition only template < class S , class E = empty_env > using single - sender - value - type = see below ; // exposition only template < class S , class E = empty_env > concept single - sender = see below ; // exposition only // [exec.getcomplsigs], completion signatures namespace completion - signatures { // exposition only struct get_completion_signatures_t ; } using completion - signatures :: get_completion_signatures_t ; inline constexpr get_completion_signatures_t get_completion_signatures {}; template < class S , class E = empty_env > requires sender_in < S , E > using completion_signatures_of_t = call - result - t < get_completion_signatures_t , S , E > ; template < class ... Ts > using decayed - tuple = tuple < decay_t < Ts > ... > ; // exposition only template < class ... Ts > using variant - or - empty = see below ; // exposition only template < class S , class E = empty_env , template < class ... > class Tuple = decayed - tuple , template < class ... > class Variant = variant - or - empty > requires sender_in < S , E > using value_types_of_t = see below ; template < class S , class Env = empty_env , template < class ... > class Variant = variant - or - empty > requires sender_in < S , E > using error_types_of_t = see below ; template < class S , class E = empty_env > requires sender_in < S , E > inline constexpr bool sends_stopped = see below ; // [exec.connect], the connect sender algorithm namespace senders - connect { // exposition only struct connect_t ; } using senders - connect :: connect_t ; inline constexpr connect_t connect {}; template < class S , class R > using connect_result_t = decltype ( connect ( declval < S > (), declval < R > ())); // [exec.factories], sender factories namespace senders - factories { // exposition only struct schedule_t ; struct transfer_just_t ; } inline constexpr unspecified just {}; inline constexpr unspecified just_error {}; inline constexpr unspecified just_stopped {}; using senders - factories :: schedule_t ; using senders - factories :: transfer_just_t ; inline constexpr schedule_t schedule {}; inline constexpr transfer_just_t transfer_just {}; inline constexpr unspecified read {}; template < scheduler S > using schedule_result_t = decltype ( schedule ( declval < S > ())); // [exec.adapt], sender adaptors namespace sender - adaptor - closure { // exposition only template < class - type D > struct sender_adaptor_closure { }; } using sender - adaptor - closure :: sender_adaptor_closure ; namespace sender - adaptors { // exposition only struct on_t ; struct transfer_t ; struct schedule_from_t ; struct then_t ; struct upon_error_t ; struct upon_stopped_t ; struct let_value_t ; struct let_error_t ; struct let_stopped_t ; struct bulk_t ; struct split_t ; struct when_all_t ; struct when_all_with_variant_t ; struct transfer_when_all_t ; struct transfer_when_all_with_variant_t ; struct into_variant_t ; struct stopped_as_optional_t ; struct stopped_as_error_t ; struct ensure_started_t ; } using sender - adaptors :: on_t ; using sender - adaptors :: transfer_t ; using sender - adaptors :: schedule_from_t ; using sender - adaptors :: then_t ; using sender - adaptors :: upon_error_t ; using sender - adaptors :: upon_stopped_t ; using sender - adaptors :: let_value_t ; using sender - adaptors :: let_error_t ; using sender - adaptors :: let_stopped_t ; using sender - adaptors :: bulk_t ; using sender - adaptors :: split_t ; using sender - adaptors :: when_all_t ; using sender - adaptors :: when_all_with_variant_t ; using 
sender - adaptors :: transfer_when_all_t ; using sender - adaptors :: transfer_when_all_with_variant_t ; using sender - adaptors :: into_variant_t ; using sender - adaptors :: stopped_as_optional_t ; using sender - adaptors :: stopped_as_error_t ; using sender - adaptors :: ensure_started_t ; inline constexpr on_t on {}; inline constexpr transfer_t transfer {}; inline constexpr schedule_from_t schedule_from {}; inline constexpr then_t then {}; inline constexpr upon_error_t upon_error {}; inline constexpr upon_stopped_t upon_stopped {}; inline constexpr let_value_t let_value {}; inline constexpr let_error_t let_error {}; inline constexpr let_stopped_t let_stopped {}; inline constexpr bulk_t bulk {}; inline constexpr split_t split {}; inline constexpr when_all_t when_all {}; inline constexpr when_all_with_variant_t when_all_with_variant {}; inline constexpr transfer_when_all_t transfer_when_all {}; inline constexpr transfer_when_all_with_variant_t transfer_when_all_with_variant {}; inline constexpr into_variant_t into_variant {}; inline constexpr stopped_as_optional_t stopped_as_optional ; inline constexpr stopped_as_error_t stopped_as_error ; inline constexpr ensure_started_t ensure_started {}; // [exec.consumers], sender consumers namespace sender - consumers { // exposition only struct start_detached_t ; } using sender - consumers :: start_detached_t ; inline constexpr start_detached_t start_detached {}; // [exec.utils], sender and receiver utilities // [exec.utils.rcvr.adptr] template < class - type Derived , receiver Base = unspecified > // arguments are not associated entities ([lib.tmpl-heads]) class receiver_adaptor ; template < class Fn > concept completion - signature = // exposition only see below ; // [exec.utils.cmplsigs] template < completion - signature ... Fns > struct completion_signatures {}; template < class ... Args > // exposition only using default - set - value = completion_signatures < set_value_t ( Args ...) > ; template < class Err > // exposition only using default - set - error = completion_signatures < set_error_t ( Err ) > ; template < class Sigs > // exposition only concept valid - completion - signatures = see below ; // [exec.utils.mkcmplsigs] template < sender Sndr , class Env = empty_env , valid - completion - signatures AddlSigs = completion_signatures <> , template < class ... 
> class SetValue = see below , template < class > class SetError = see below , valid - completion - signatures SetStopped = completion_signatures < set_stopped_t () >> requires sender_in < Sndr , Env > using make_completion_signatures = completion_signatures < see below > ; // [exec.ctx], execution resources class run_loop ; } namespace std :: this_thread { // [exec.queries], queries namespace queries { // exposition only struct execute_may_block_caller_t ; } using queries :: execute_may_block_caller_t ; inline constexpr execute_may_block_caller_t execute_may_block_caller {}; namespace this - thread { // exposition only struct sync - wait - env ; // exposition only template < class S > requires sender_in < S , sync - wait - env > using sync - wait - type = see - below ; // exposition only template < class S > using sync - wait - with - variant - type = see - below ; // exposition only struct sync_wait_t ; struct sync_wait_with_variant_t ; } using this - thread :: sync_wait_t ; using this - thread :: sync_wait_with_variant_t ; inline constexpr sync_wait_t sync_wait {}; inline constexpr sync_wait_with_variant_t sync_wait_with_variant {}; } namespace std :: execution { // [exec.execute], one-way execution namespace execute { // exposition only struct execute_t ; } using execute :: execute_t ; inline constexpr execute_t execute {}; // [exec.as.awaitable] namespace coro - utils { // exposition only struct as_awaitable_t ; } using coro - utils :: as_awaitable_t ; inline constexpr as_awaitable_t as_awaitable ; // [exec.with.awaitable.senders] template < class - type Promise > struct with_awaitable_senders ; } 
- 
     The exposition-only type variant - or - empty < Ts ... > - 
       If sizeof ...( Ts ) variant - or - empty < Ts ... > variant < Us ... > Us ... decay_t < Ts > ... 
- 
       Otherwise, variant - or - empty < Ts ... > struct empty - variant { empty - variant () = delete ; }; 
 
- 
       
11.5. Queries [exec.queries]
11.5.1. std :: get_env 
   - 
     get_env o O get_env ( o ) - 
       tag_invoke ( std :: get_env , const_cast < const O &> ( o )) - 
         Mandates: The type of the expression above satisfies queryable 
 
- 
         
- 
       Otherwise, empty_env {} 
 
- 
       
- 
     The value of get_env ( o ) o 
- 
     When passed a sender object, get_env get_env 
11.5.2. std :: forwarding_query 
   - 
     std :: forwarding_query 
- 
     The name std :: forwarding_query q Q std :: forwarding_query ( q ) - 
       mandate - nothrow - call ( tag_invoke , std :: forwarding_query , q ) - 
         Mandates: The expression above has type bool q 
 
- 
         
- 
       Otherwise, trueifderived_from < Q , std :: forwarding_query_t > true.
- 
       Otherwise, false.
 
- 
       
- 
     For a queryable object o FWD - QUERIES ( o ) q as q ( FWD - QUERIES ( o ), as ...) forwarding_query ( q ) false; otherwise, it is expression-equivalent toq ( o , as ...) 
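Non-normative illustration: a user-defined query object (get_priority here is hypothetical) can opt into forwarding simply by deriving from std::forwarding_query_t, relying on the derived_from branch above; FWD - QUERIES will then pass it through to wrapped environments:

    struct get_priority_t : std::forwarding_query_t {     // hypothetical query
      template <class Env>
        requires std::tag_invocable<get_priority_t, const Env&>
      constexpr auto operator()(const Env& env) const
        noexcept(std::nothrow_tag_invocable<get_priority_t, const Env&>)
        -> std::tag_invoke_result_t<get_priority_t, const Env&> {
        return std::tag_invoke(*this, env);
      }
    };
    inline constexpr get_priority_t get_priority{};

    // std::forwarding_query(get_priority) == true without any further customization.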
11.5.3. std :: get_allocator 
   - 
     get_allocator 
- 
     The name get_allocator r get_allocator ( r ) mandate - nothrow - call ( tag_invoke , std :: get_allocator , as_const ( r )) - 
       Mandates: The type of the expression above satisfies Allocator. 
 
- 
       
- 
     std :: forwarding_query ( std :: get_allocator ) true.
- 
     get_allocator () execution :: read ( std :: get_allocator ) 
11.5.4. std :: get_stop_token 
   - 
     get_stop_token 
- 
     The name get_stop_token r get_stop_token ( r ) - 
       mandate - nothrow - call ( tag_invoke , std :: get_stop_token , as_const ( r )) - 
         Mandates: The type of the expression above satisfies stoppable_token 
 
- 
         
- 
       Otherwise, never_stop_token {} 
 
- 
       
- 
     std :: forwarding_query ( std :: get_stop_token ) true.
- 
     get_stop_token () execution :: read ( std :: get_stop_token ) 
11.5.5. execution :: get_scheduler 
   - 
     get_scheduler 
- 
     The name get_scheduler r get_scheduler ( r ) mandate - nothrow - call ( tag_invoke , get_scheduler , as_const ( r )) - 
       Mandates: The type of the expression above satisfies scheduler 
 
- 
       
- 
     std :: forwarding_query ( std :: get_scheduler ) true.
- 
     get_scheduler () execution :: read ( get_scheduler ) 
11.5.6. execution :: get_delegatee_scheduler 
   - 
     get_delegatee_scheduler 
- 
     The name get_delegatee_scheduler r get_delegatee_scheduler ( r ) mandate - nothrow - call ( tag_invoke , get_delegatee_scheduler , as_const ( r )) - 
       Mandates: The type of the expression above satisfies scheduler 
 
- 
       
- 
     std :: forwarding_query ( std :: get_delegatee_scheduler ) true.
- 
     get_delegatee_scheduler () execution :: read ( get_delegatee_scheduler ) 
11.5.7. execution :: get_forward_progress_guarantee 
enum class forward_progress_guarantee { concurrent , parallel , weakly_parallel }; 
- 
     get_forward_progress_guarantee 
- 
     The name get_forward_progress_guarantee s S decltype (( s )) S scheduler get_forward_progress_guarantee get_forward_progress_guarantee ( s ) - 
       mandate - nothrow - call ( tag_invoke , get_forward_progress_guarantee , as_const ( s )) - 
         Mandates: The type of the expression above is forward_progress_guarantee 
 
- 
         
- 
       Otherwise, forward_progress_guarantee :: weakly_parallel 
 
- 
       
- 
     If get_forward_progress_guarantee ( s ) s forward_progress_guarantee :: concurrent forward_progress_guarantee :: parallel 
11.5.8. this_thread :: execute_may_block_caller 
   - 
     this_thread :: execute_may_block_caller s execute ( s , f ) f 
- 
     The name this_thread :: execute_may_block_caller s S decltype (( s )) S scheduler this_thread :: execute_may_block_caller this_thread :: execute_may_block_caller ( s ) - 
       mandate - nothrow - call ( tag_invoke , this_thread :: execute_may_block_caller , as_const ( s )) - 
         Mandates: The type of the expression above is bool 
 
- 
         
- 
       Otherwise, true.
 
- 
       
- 
     If this_thread :: execute_may_block_caller ( s ) s false, noexecute ( s , f ) f 
11.5.9. execution :: get_completion_scheduler 
   - 
     get_completion_scheduler < completion - tag > 
- 
     The name get_completion_scheduler q Q decltype (( q )) Tag get_completion_scheduler < Tag > ( q ) set_value_t set_error_t set_stopped_t get_completion_scheduler < Tag > ( q ) get_completion_scheduler < Tag > ( q ) mandate - nothrow - call ( tag_invoke , get_completion_scheduler , as_const ( q )) - 
       Mandates: The type of the expression above satisfies scheduler 
 
- 
       
- 
     If, for some sender s C Tag get_completion_scheduler < Tag > ( get_env ( s )) sch s C ( r , args ...) r s args ... sch 
- 
     The expression forwarding_query ( get_completion_scheduler < CPO > ) true.
11.6. Schedulers [exec.sched]
- 
     The scheduler schedule schedule template < class S > concept scheduler = queryable < S > && requires ( S && s , const get_completion_scheduler_t < set_value_t > tag ) { { schedule ( std :: forward < S > ( s )) } -> sender ; { tag_invoke ( tag , std :: get_env ( schedule ( std :: forward < S > ( s )))) } -> same_as < remove_cvref_t < S >> ; } && equality_comparable < remove_cvref_t < S >> && copy_constructible < remove_cvref_t < S >> ; 
- 
     Let S E sender_in < schedule_result_t < S > , E > true. Thensender_of < schedule_result_t < S > , set_value_t (), E > true.
- 
     None of a scheduler’s copy constructor, destructor, equality comparison, or swap 
- 
     None of these member functions, nor a scheduler type’s schedule 
- 
     For any two (possibly const s1 s2 S s1 == s2 trueonly if boths1 s2 
- 
     For a given scheduler expression s get_completion_scheduler < set_value_t > ( std :: get_env ( schedule ( s ))) s 
- 
     A scheduler type’s destructor shall not block pending completion of any receivers connected to the sender objects returned from schedule 
11.7. Receivers [exec.recv]
11.7.1. Receiver concepts [exec.recv.concepts]
- 
     A receiver represents the continuation of an asynchronous operation. The receiver receiver_of get_env template < class R > inline constexpr bool enable_receiver = requires { typename R :: is_receiver ; }; template < class R > concept receiver = enable_receiver < remove_cvref_t < R >> && requires ( const remove_cvref_t < R >& r ) { { get_env ( r ) } -> queryable ; } && move_constructible < remove_cvref_t < R >> && // rvalues are movable, and constructible_from < remove_cvref_t < R > , R > ; // lvalues are copyable template < class Signature , class R > concept valid - completion - for = // exposition only requires ( Signature * sig ) { [] < class Tag , class ... Args > ( Tag ( * )( Args ...)) requires callable < Tag , remove_cvref_t < R > , Args ... > {}( sig ); }; template < class R , class Completions > concept receiver_of = receiver < R > && requires ( Completions * completions ) { [] < valid - completion - for < R > ... Sigs > ( completion_signatures < Sigs ... >* ) {}( completions ); }; 
- 
     Remarks: Pursuant to [namespace.std], users can specialize enable_receiver truefor cv-unqualified program-defined types that modelreceiver falsefor types that do not. Such specializations shall be usable in constant expressions ([expr.const]) and have typeconst bool 
- 
     Let r op_state r token get_stop_token ( get_env ( r )) token r op_state token token 
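Non-normative illustration: assuming an implementation of this proposal, a minimal receiver can be written with tag_invoke-based completion handlers; get_env need not be customized, since its default ([exec.queries]) yields empty_env {}:

    #include <cstdio>
    #include <exception>
    #include <execution>   // as proposed in this paper

    struct print_receiver {
      using is_receiver = void;   // opts in via enable_receiver

      friend void tag_invoke(std::execution::set_value_t, print_receiver&&, int v) noexcept {
        std::printf("value: %d\n", v);
      }
      friend void tag_invoke(std::execution::set_error_t, print_receiver&&, std::exception_ptr) noexcept {}
      friend void tag_invoke(std::execution::set_stopped_t, print_receiver&&) noexcept {}
    };

    int main() {
      // connect ([exec.connect]) and start ([exec.opstate.start]); just(42)
      // completes inline by calling set_value(print_receiver{...}, 42).
      auto op = std::execution::connect(std::execution::just(42), print_receiver{});
      std::execution::start(op);
    }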
11.7.2. execution :: set_value 
   - 
     set_value set_value_t set_value ( R , Vs ...) R Vs R const mandate - nothrow - call ( tag_invoke , set_value , R , Vs ...) 
11.7.3. execution :: set_error 
   - 
     set_error set_error_t set_error ( R , E ) R E R const mandate - nothrow - call ( tag_invoke , set_error , R , E ) 
11.7.4. execution :: set_stopped 
   - 
     set_stopped set_stopped_t set_stopped ( R ) R R const mandate - nothrow - call ( tag_invoke , set_stopped , R ) 
11.8. Operation states [exec.opstate]
- 
     The operation_state template < class O > concept operation_state = queryable < O > && is_object_v < O > && requires ( O & o ) { { start ( o ) } noexcept ; }; 
- 
     If an operation_state 
- 
     Library-provided operation state types are non-movable. 
11.8.1. execution :: start 
   - 
     The name start start ( O ) O O mandate - nothrow - call ( tag_invoke , start , O ) 
- 
     If the function selected by tag_invoke O start ( O ) 
11.9. Senders [exec.snd]
11.9.1. Sender concepts [exec.snd.concepts]
- 
     The sender sender_in sender_to get_env connect template < class Sigs > concept valid - completion - signatures = see below ; template < class S > inline constexpr bool enable_sender = requires { typename S :: is_sender ; }; template < is - awaitable < env - promise < empty_env >> S > // [exec.awaitables] inline constexpr bool enable_sender < S > = true; template < class S > concept sender = enable_sender < remove_cvref_t < S >> && requires ( const remove_cvref_t < S >& s ) { { get_env ( s ) } -> queryable ; } && move_constructible < remove_cvref_t < S >> && // rvalues are movable, and constructible_from < remove_cvref_t < S > , S > ; // lvalues are copyable template < class S , class E = empty_env > concept sender_in = sender < S > && requires ( S && s , E && e ) { { get_completion_signatures ( std :: forward < S > ( s ), std :: forward < E > ( e )) } -> valid - completion - signatures ; }; template < class S , class R > concept sender_to = sender_in < S , env_of_t < R >> && receiver_of < R , completion_signatures_of_t < S , env_of_t < R >>> && requires ( S && s , R && r ) { connect ( std :: forward < S > ( s ), std :: forward < R > ( r )); }; 
- 
     A type Sigs valid - completion - signatures completion_signatures 
- 
     Remarks: Pursuant to [namespace.std], users can specialize enable_sender truefor cv-unqualified program-defined types that modelsender falsefor types that do not. Such specializations shall be usable in constant expressions ([expr.const]) and have typeconst bool 
- 
     The sender_of template < class > struct sender - of - helper ; // exposition only template < class R , class ... As > struct sender - of - helper < R ( As ...) > { using tag = R ; template < class ... Bs > using as - sig = R ( Bs ...); }; template < class S , class Sig , class E = empty_env > concept sender_of = sender_in < S , E > && MATCHING - SIG ( // see [exec.general] Sig , gather - signatures < // see [exec.utils.cmplsigs] typename sender - of - helper < Sig >:: tag , S , E , sender - of - helper < Sig >:: template as - sig , type_identity_t > ); - 
       [Example: auto s1 = just () | then ([]{}); using S1 = decltype ( s1 ); static_assert ( sender_of < S1 , set_value_t () > ); static_assert ( sender_of < S1 , set_error_t ( exception_ptr ) > ); static_assert ( ! sender_of < S1 , set_stopped_t () > ); auto s2 = s1 | let_error ([]( auto ) { return just ( 'a' ); }); using S2 = decltype ( s2 ); static_assert ( ! sender_of < S2 , set_value_t () > ); static_assert ( ! sender_of < S2 , set_value_t ( char ) > ); static_assert ( ! sender_of < S2 , set_error_t ( exception_ptr ) > ); static_assert ( ! sender_of < S2 , set_stopped_t () > ); -- end example] 
 
- 
       
- 
     For a type T SET - VALUE - SIG ( T ) set_value_t () T void set_value_t ( T ) 
- 
     Library-provided sender types: - 
       Always expose an overload of a customization of connect 
- 
       Only expose an overload of a customization of connect copy_constructible 
- 
       Model copy_constructible copy_constructible 
 
- 
       
11.9.2. Awaitable helpers [exec.awaitables]
- 
     The sender concepts recognize awaitables as senders. For this clause ([exec]), an awaitable is an expression that would be well-formed as the operand of a co_await 
- 
     For a subexpression c GET - AWAITER ( c , p ) c e p P await_transform operator co_await I have opened cwg#250 to give these transformations a term-of-art so we can more easily refer to it here.
- 
     Let is - awaitable template < class T > concept await - suspend - result = see below ; template < class A , class P > concept is - awaiter = // exposition only requires ( A & a , coroutine_handle < P > h ) { a . await_ready () ? 1 : 0 ; { a . await_suspend ( h ) } -> await - suspend - result ; a . await_resume (); }; template < class C , class P > concept is - awaitable = requires ( C ( * fc )() noexcept , P & p ) { { GET - AWAITER ( fc (), p ) } -> is - awaiter < P > ; }; await - suspend - result < T > trueif and only if one of the following istrue:- 
       T void 
- 
       T bool 
- 
       T coroutine_handle 
 
- 
       
- 
     For a subexpression c decltype (( c )) C p P await - result - type < C , P > decltype ( GET - AWAITER ( c , p ). await_resume ()) 
- 
     Let with - await - transform template < class Derived > struct with - await - transform { template < class T > T && await_transform ( T && value ) noexcept { return std :: forward < T > ( value ); } template < class T > requires tag_invocable < as_awaitable_t , T , Derived &> auto await_transform ( T && value ) noexcept ( nothrow_tag_invocable < as_awaitable_t , T , Derived &> ) -> tag_invoke_result_t < as_awaitable_t , T , Derived &> { return tag_invoke ( as_awaitable , std :: forward < T > ( value ), static_cast < Derived &> ( * this )); } }; 
- 
     Let env - promise template < class Env > struct env - promise : with - await - transform < env - promise < Env >> { unspecified get_return_object () noexcept ; unspecified initial_suspend () noexcept ; unspecified final_suspend () noexcept ; void unhandled_exception () noexcept ; void return_void () noexcept ; coroutine_handle <> unhandled_stopped () noexcept ; friend const Env & tag_invoke ( get_env_t , const env - promise & ) noexcept ; }; Specializations of env - promise 
11.9.3. execution :: get_completion_signatures 
   - 
     get_completion_signatures s decltype (( s )) S e decltype (( e )) E get_completion_signatures ( s , e ) - 
       tag_invoke_result_t < get_completion_signatures_t , S , E > {} - 
         Mandates: valid - completion - signatures < Sigs > Sigs tag_invoke_result_t < get_completion_signatures_t , S , E > 
 
- 
         
- 
       Otherwise, remove_cvref_t < S >:: completion_signatures {} - 
         Mandates: valid - completion - signatures < Sigs > Sigs remove_cvref_t < S >:: completion_signatures 
 
- 
         
- 
       Otherwise, if is - awaitable < S , env - promise < E >> true, then:completion_signatures < SET - VALUE - SIG ( await - result - type < S , env - promise < E >> ), // see [exec.snd.concepts] set_error_t ( exception_ptr ), set_stopped_t () > {} 
- 
       Otherwise, get_completion_signatures ( s , e ) 
 
- 
       
- 
     Let r R S sender_in < S , env_of_t < R >> true. LetSigs ... completion_signatures completion_signatures_of_t < S , env_of_t < R >> CSO S CSO ( r , args ...) Sig Sigs ... MATCHING - SIG ( tag_t < CSO > ( decltype ( args )...), Sig ) true([exec.general]).
11.9.4. execution :: connect 
   - 
     connect 
- 
     The name connect s r S decltype (( s )) R decltype (( r )) DS DR S R 
- 
     Let connect - awaitable - promise struct connect - awaitable - promise : with - await - transform < connect - awaitable - promise > { DR & rcvr ; // exposition only connect - awaitable - promise ( DS & , DR & r ) noexcept : rcvr ( r ) {} suspend_always initial_suspend () noexcept { return {}; } [[ noreturn ]] suspend_always final_suspend () noexcept { std :: terminate (); } [[ noreturn ]] void unhandled_exception () noexcept { std :: terminate (); } [[ noreturn ]] void return_void () noexcept { std :: terminate (); } coroutine_handle <> unhandled_stopped () noexcept { set_stopped (( DR && ) rcvr ); return noop_coroutine (); } operation - state - task get_return_object () noexcept { return operation - state - task { coroutine_handle < connect - awaitable - promise >:: from_promise ( * this )}; } friend auto tag_invoke ( get_env_t , connect - awaitable - promise & self ) noexcept ( nothrow - callable < get_env_t , const DR &> ) -> env_of_t < const DR &> { return get_env ( self . rcvr ); } }; 
- 
     Let operation - state - task struct operation - state - task { using promise_type = connect - awaitable - promise ; coroutine_handle <> coro ; // exposition only explicit operation - state - task ( coroutine_handle <> h ) noexcept : coro ( h ) {} operation - state - task ( operation - state - task && o ) noexcept : coro ( exchange ( o . coro , {})) {} ~ operation - state - task () { if ( coro ) coro . destroy (); } friend void tag_invoke ( start_t , operation - state - task & self ) noexcept { self . coro . resume (); } }; 
- 
     Let V await - result - type < DS , connect - awaitable - promise > Sigs completion_signatures < SET - VALUE - SIG ( V ), // see [exec.snd.concepts] set_error_t ( exception_ptr ), set_stopped_t () > and let connect - awaitable template < class Fun , class ... Ts > auto suspend - complete ( Fun fun , Ts && ... as ) noexcept { // exposition only auto fn = [ & , fun ]() noexcept { fun ( std :: forward < Ts > ( as )...); }; struct awaiter { decltype ( fn ) fn_ ; static bool await_ready () noexcept { return false; } void await_suspend ( coroutine_handle <> ) noexcept { fn_ (); } [[ noreturn ]] void await_resume () noexcept { unreachable (); } }; return awaiter { fn }; }; operation - state - task connect - awaitable ( DS s , DR r ) requires receiver_of < DR , Sigs > { exception_ptr ep ; try { if constexpr ( same_as < V , void > ) { co_await std :: move ( s ); co_await suspend - complete ( set_value , std :: move ( r )); } else { co_await suspend - complete ( set_value , std :: move ( r ), co_await std :: move ( s )); } } catch (...) { ep = current_exception (); } co_await suspend - complete ( set_error , std :: move ( r ), std :: move ( ep )); } 
- 
     If S sender R receiver connect ( s , r ) connect ( s , r ) - 
       tag_invoke ( connect , s , r ) connectable - with - tag - invoke < S , R > - 
         Mandates: The type of the tag_invoke operation_state 
 
- 
         
- 
       Otherwise, connect - awaitable ( s , r ) 
- 
       Otherwise, connect ( s , r ) 
 
- 
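A non-normative sketch of connecting a sender to a receiver by hand, assuming a conforming implementation of this proposal; the receiver type below is illustrative only, and the necessary standard headers and using-directive are assumed as in this paper's other examples:

  using namespace std::execution;

  // A minimal receiver for exposition; the is_receiver member type is the
  // receiver opt-in used by this revision.
  struct print_receiver {
    using is_receiver = void;

    friend void tag_invoke(set_value_t, print_receiver&&, int v) noexcept {
      std::cout << "value: " << v << '\n';
    }
    friend void tag_invoke(set_error_t, print_receiver&&, std::exception_ptr) noexcept {}
    friend void tag_invoke(set_stopped_t, print_receiver&&) noexcept {}
    friend empty_env tag_invoke(get_env_t, const print_receiver&) noexcept { return {}; }
  };

  auto op = connect(just(42), print_receiver{});  // packages the work; nothing has started yet
  start(op);                                      // completes inline with set_value(r, 42)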
       
11.9.5. Sender factories [exec.factories]
11.9.5.1. execution :: schedule 
   - 
     schedule 
- 
     The name schedule s schedule ( s ) - 
       tag_invoke ( schedule , s ) tag_invoke set_value s schedule ( s ) - 
         Mandates: The type of the tag_invoke sender 
 
- 
         
- 
       Otherwise, schedule ( s ) 
 
- 
       
11.9.5.2. execution :: just execution :: just_error execution :: just_stopped 
   - 
     just just_error just_stopped 
- 
     Let just - sender template < class Tag , movable - value ... Ts > struct just - sender { // exposition only using is_sender = unspecified ; using completion_signatures = execution :: completion_signatures < Tag ( Ts ...) > ; tuple < Ts ... > vs_ ; // exposition only template < class R > struct operation { // exposition only tuple < Ts ... > vs_ ; // exposition only R r_ ; // exposition only friend void tag_invoke ( start_t , operation & s ) noexcept { apply ([ & s ]( Ts & ... values ) { Tag ()( std :: move ( s . r_ ), std :: move ( values )...); }, s . vs_ ); } }; template < receiver_of < completion_signatures > R > requires ( copy_constructible < Ts > && ...) friend operation < decay_t < R >> tag_invoke ( connect_t , const just - sender & s , R && r ) { return { s . vs_ , std :: forward < R > ( r ) }; } template < receiver_of < completion_signatures > R > friend operation < decay_t < R >> tag_invoke ( connect_t , just - sender && s , R && r ) { return { std :: move ( s . vs_ ), std :: forward < R > ( r ) }; } }; 
- 
     The name just vs Vs decltype (( vs )) just ( vs ...) just - sender < set_value_t , remove_cvref_t < Vs > ... > ({ vs ...}) 
- 
     The name just_error err Err decltype (( err )) just_error ( err ) just - sender < set_error_t , remove_cvref_t < Err >> ({ err }) 
- 
     The name just_stopped denotes a customization point object. The expression just_stopped() is expression-equivalent to just-sender<set_stopped_t>().
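A non-normative usage sketch of the just family (assuming using namespace std::execution and the usual standard headers):

  using namespace std::execution;

  sender auto ok   = just(42, std::string("hi"));  // will complete with set_value(r, 42, "hi")
  sender auto err  = just_error(std::make_exception_ptr(std::runtime_error("boom")));
                                                   // will complete with set_error(r, eptr)
  sender auto stop = just_stopped();               // will complete with set_stopped(r)

  auto [i, s] = this_thread::sync_wait(std::move(ok)).value();  // i == 42, s == "hi"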
11.9.5.3. execution :: transfer_just 
   - 
     transfer_just 
- 
     The name transfer_just s vs S decltype (( s )) Vs decltype (( vs ))... S scheduler V Vs movable - value transfer_just ( s , vs ...) transfer_just ( s , vs ...) - 
       tag_invoke ( transfer_just , s , vs ...) as decay_t < Vs > ... vs tag_invoke s as transfer_just ( s , vs ...) - 
         Mandates: sender_of < R , set_value_t ( decay_t < Vs > ...), E > R tag_invoke E 
 
- 
         
- 
       Otherwise, transfer ( just ( vs ...), s ) 
 
- 
       
11.9.5.4. execution :: read 
   - 
     read 
- 
     read template < class Tag > struct read - sender ; // exposition only struct read - t { // exposition only template < class Tag > constexpr read - sender < Tag > operator ()( Tag ) const noexcept { return {}; } }; 
- 
     read - sender template < class Tag > struct read - sender { // exposition only using is_sender = unspecified ; template < class R > struct operation - state { // exposition only R r_ ; // exposition only friend void tag_invoke ( start_t , operation - state & s ) noexcept { TRY - SET - VALUE ( std :: move ( s . r_ ), Tag {}( get_env ( s . r_ ))); } }; template < receiver R > friend operation - state < decay_t < R >> tag_invoke ( connect_t , read - sender , R && r ) { return { std :: forward < R > ( r ) }; } template < class Env > requires callable < Tag , Env > friend auto tag_invoke ( get_completion_signatures_t , read - sender , Env ) -> completion_signatures < set_value_t ( call - result - t < Tag , Env > ), set_error_t ( exception_ptr ) > ; // not defined template < class Env > requires nothrow - callable < Tag , Env > friend auto tag_invoke ( get_completion_signatures_t , read - sender , Env ) -> completion_signatures < set_value_t ( call - result - t < Tag , Env > ) > ; // not defined friend empty_env tag_invoke ( get_env_t , const read - sender & ) noexcept { return {}; } }; where TRY - SET - VALUE ( r , e ) r e try { set_value ( r , e ); } catch (...) { set_error ( r , current_exception ()); } if e set_value ( r , e ) 
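A non-normative sketch of read combined with let_value; it relies on the fact that sync_wait supplies a scheduler in the receiver's environment (see § 11.9.7.2), so the value produced by the query can be used to launch further work:

  using namespace std::execution;

  // read(get_scheduler) completes with whatever scheduler the eventual
  // receiver's environment provides.
  sender auto s =
    let_value(read(get_scheduler), [](auto sch) {
      return transfer_just(sch, 42);   // continue on the queried scheduler
    });

  auto [v] = this_thread::sync_wait(std::move(s)).value();  // v == 42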
11.9.6. Sender adaptors [exec.adapt]
11.9.6.1. General [exec.adapt.general]
- 
     Subclause [exec.adapt] specifies a set of sender adaptors. 
- 
     The bitwise OR operator is overloaded for the purpose of creating sender chains. The adaptors also support function call syntax with equivalent semantics. 
- 
     Unless otherwise specified, a sender adaptor is required to not begin executing any functions that would observe or modify any of the arguments of the adaptor before the returned sender is connected with a receiver using connect start 
- 
     Unless otherwise specified, a parent sender ([async.ops]) with a single child sender s FWD - QUERIES ( get_env ( s )) empty_env {} 
- 
     Unless otherwise specified, when a parent sender is connected to a receiver r FWD - QUERIES ( get_env ( r )) 
- 
     For any sender type, receiver type, operation state type, queryable type, or coroutine promise type that is part of the implementation of any sender adaptor in this subclause and that is a class template, the template arguments do not contribute to the associated entities ([basic.lookup.argdep]) of a function call where a specialization of the class template is an associated entity. [Example: namespace sender - adaptors { // exposition only template < class Sch , class S > // arguments are not associated entities ([lib.tmpl-heads]) class on - sender { // ... }; struct on_t { template < scheduler Sch , sender S > on - sender < Sch , S > operator ()( Sch && sch , S && s ) const { // ... } }; } inline constexpr sender - adaptors :: on_t on {}; -- end example] 
- 
     If a sender returned from a sender adaptor specified in this subsection is specified to include set_error_t ( E ) decay_t < E > exception_ptr exception_ptr exception_ptr 
11.9.6.2. Sender adaptor closure objects [exec.adapt.objects]
- 
     A pipeable sender adaptor closure object is a function object that accepts one or more sender sender C S decltype (( S )) sender sender C ( S ) S | C Given an additional pipeable sender adaptor closure object D C | D E E - 
       Its target object is an object d decay_t < decltype (( D )) > D 
- 
       It has one bound argument entity, an object c decay_t < decltype (( C )) > C 
- 
       Its call pattern is d ( c ( arg )) arg E 
 The expression C | D E 
- 
       
- 
     An object t T T derived_from < sender_adaptor_closure < T >> T sender_adaptor_closure < U > U T sender 
- 
     The template parameter D sender_adaptor_closure cv D | D derived_from < sender_adaptor_closure < D >> cv D | operator | 
- 
     A pipeable sender adaptor object is a customization point object that accepts a sender sender 
- 
     If a pipeable sender adaptor object accepts only one argument, then it is a pipeable sender adaptor closure object. 
- 
     If a pipeable sender adaptor object adaptor s decltype (( s )) sender args ... adaptor ( s , args ...) BoundArgs decay_t < decltype (( args )) > ... adaptor ( args ...) f - 
       Its target object is a copy of adaptor 
- 
       Its bound argument entities bound_args BoundArgs ... std :: forward < decltype (( args )) > ( args )... 
- 
       Its call pattern is adaptor ( r , bound_args ...) r f 
 The expression adaptor ( args ...) 
- 
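A non-normative sketch of pipeable sender adaptor closure objects; it shows that a partially-applied adaptor can be stored, composed with |, and applied with either pipe or function-call syntax:

  using namespace std::execution;

  auto add_one = then([](int i) { return i + 1; });            // a closure object
  auto chain   = add_one | then([](int i) { return i * 2; });  // closures compose with |

  sender auto a = just(1) | add_one;   // pipe syntax
  sender auto b = add_one(just(1));    // equivalent call syntax
  sender auto c = just(1) | chain;     // completes with (1 + 1) * 2 == 4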
       
11.9.6.3. execution :: on 
   - 
     on 
- 
     Let replace - scheduler ( e , sch ) e 'get_scheduler ( e ) sch tag_invoke ( tag , e ', args ...) tag ( e , args ...) args ... tag forwarding - query get_scheduler_t 
- 
     The name on sch s Sch decltype (( sch )) S decltype (( s )) Sch scheduler S sender on on ( sch , s ) - 
       tag_invoke ( on , sch , s ) s sch on ( sch , s ) - 
         Mandates: The type of the tag_invoke sender 
 
- 
         
- 
       Otherwise, constructs a sender s1 s1 out_r - 
         Constructs a receiver r - 
           When set_value ( r ) connect ( s , r2 ) r2 op_state3 start ( op_state3 ) set_error out_r current_exception () 
- 
           set_error ( r , e ) set_error ( out_r , e ) 
- 
           set_stopped ( r ) set_stopped ( out_r ) 
- 
           get_env ( r ) get_env ( out_r ) 
 
- 
           
- 
         Calls schedule ( sch ) s2 connect ( s2 , r ) op_state2 
- 
         op_state2 op_state1 
- 
         r2 out_r get_env ( r2 ) replace - scheduler ( e , sch ) 
- 
         When start op_state1 start op_state2 
- 
         The lifetime of op_state2 op_state3 op_state1 op_state3 op_state1 
 
- 
         
- 
       Given subexpressions s1 e s1 on S1 decltype (( s1 )) E 'decltype (( replace - scheduler ( e , sch ))) tag_invoke ( get_completion_signatures , s1 , e ) make_completion_signatures < copy_cvref_t < S1 , S > , E ', make_completion_signatures < schedule_result_t < Sch > , E , completion_signatures < set_error_t ( exception_ptr ) > , no - value - completions >> ; where no - value - completions < As ... > completion_signatures <> As ... 
 
- 
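A non-normative usage sketch of on; here sch is assumed to be a scheduler for an execution context that runs concurrently with the calling thread (for example, a thread pool):

  using namespace std::execution;

  sender auto work =
    on(sch, just(21) | then([](int i) { return i * 2; }));  // the chain starts on sch

  auto [v] = this_thread::sync_wait(std::move(work)).value();  // v == 42, computed on sch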
       
11.9.6.4. execution :: transfer 
   - 
     transfer set_value 
- 
     The name transfer sch s Sch decltype (( sch )) S decltype (( s )) Sch scheduler S sender transfer transfer ( s , sch ) - 
       tag_invoke ( transfer , get_completion_scheduler < set_value_t > ( get_env ( s )), s , sch ) - 
         Mandates: The type of the tag_invoke sender 
 
- 
         
- 
       Otherwise, tag_invoke ( transfer , s , sch ) - 
         Mandates: The type of the tag_invoke sender 
 
- 
         
- 
       Otherwise, schedule_from ( sch , s ) 
 If the function selected above does not return a sender which is a result of a call to schedule_from ( sch , s2 ) s2 s transfer ( s , sch ) 
- 
       
- 
     For a sender t transfer ( s , sch ) get_env ( t ) q get_completion_scheduler < CPO > ( q ) sch CPO set_value_t set_stopped_t get_completion_scheduler < set_error_t > Q forwarding - query Q ( q , args ...) Q ( get_env ( s ), args ...) 
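A non-normative usage sketch of transfer; sch_a and sch_b are assumed to be schedulers for two different execution contexts that run concurrently with the calling thread:

  using namespace std::execution;

  sender auto s =
      on(sch_a, just(40) | then([](int i) { return i + 2; }))  // runs on sch_a
    | transfer(sch_b)                                          // completion delivered on sch_b
    | then([](int i) { return i; });                           // runs on sch_b

  auto [v] = this_thread::sync_wait(std::move(s)).value();     // v == 42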
11.9.6.5. execution :: schedule_from 
   - 
     schedule_from schedule_from transfer 
- 
     The name schedule_from sch s Sch decltype (( sch )) S decltype (( s )) Sch scheduler S sender schedule_from schedule_from ( sch , s ) - 
       tag_invoke ( schedule_from , sch , s ) tag_invoke sch s schedule_from ( sch , s ) - 
         Mandates: The type of the tag_invoke sender 
 
- 
         
- 
       Otherwise, constructs a sender s2 s2 out_r - 
         Constructs a receiver r Tag ( r , args ...) args ... op_state args '... r2 - 
           When set_value ( r2 ) Tag ( out_r , std :: move ( args ')...) 
- 
           set_error ( r2 , e ) set_error ( out_r , e ) 
- 
           set_stopped ( r2 ) set_stopped ( out_r ) 
 It then calls schedule ( sch ) s3 connect ( s3 , r2 ) op_state3 start ( op_state3 ) set_error ( out_r , current_exception ()) Tag ( r , args ...) 
- 
           
- 
         Calls connect ( s , r ) op_state2 connect ( s2 , out_r ) 
- 
         Returns an operation state op_state op_state2 start ( op_state ) start ( op_state2 ) op_state3 op_state 
 
- 
         
- 
       Given subexpressions s2 e s2 schedule_from S2 decltype (( s2 )) E decltype (( e )) tag_invoke ( get_completion_signatures , s2 , e ) make_completion_signatures < copy_cvref_t < S2 , S > , E , make_completion_signatures < schedule_result_t < Sch > , E , potenially - throwing - completions , no - completions > , value - completions , error - completions > ; where potentially - throwing - completions no - completions value - completions error - completions template < class ... Ts > using all - nothrow - decay - copyable = boolean_constant < ( is_nothrow_constructible_v < decay_t < Ts > , Ts > && ...) > ; template < class ... Ts > using conjunction = boolean_constant < ( Ts :: value && ...) > ; using potentially - throwing - completions = conditional_t < error_types_of_t < copy_cvref_t < S2 , S > , E , all - nothrow - decay - copyable >:: value && value_types_of_t < copy_cvref_t < S2 , S > , E , all - nothrow - decay - copyable , conjunction >:: value , completion_signatures <> , completion_signatures < set_error_t ( exception_ptr ) > ; template < class ... > using no - completions = completion_signatures <> ; template < class ... Ts > using value - completions = completion_signatures < set_value_t ( decay_t < Ts >&& ...) > ; template < class T > using error - completions = completion_signatures < set_error_t ( decay_t < T >&& ) > ; 
 
- 
       
- 
     For a sender t schedule_from ( sch , s ) get_env ( t ) q get_completion_scheduler < CPO > ( q ) sch CPO set_value_t set_stopped_t get_completion_scheduler < set_error_t > Q forwarding_query Q ( q , args ...) Q ( get_env ( s ), args ...) 
11.9.6.6. execution :: then 
   - 
     then 
- 
     The name then s f S decltype (( s )) F f f 'f S sender F movable - value then then ( s , f ) - 
       tag_invoke ( then , get_completion_scheduler < set_value_t > ( get_env ( s )), s , f ) - 
         Mandates: The type of the tag_invoke sender 
 
- 
         
- 
       Otherwise, tag_invoke ( then , s , f ) - 
         Mandates: The type of the tag_invoke sender 
 
- 
         
- 
       Otherwise, constructs a sender s2 s2 out_r - 
         Constructs a receiver r - 
           When set_value ( r , args ...) v invoke ( f ', args ...) decltype ( v ) void set_value ( out_r ) set_value ( out_r , v ) set_error ( out_r , current_exception ()) set_value ( r , args ...) 
- 
           set_error ( r , e ) set_error ( out_r , e ) 
- 
           set_stopped ( r ) set_stopped ( out_r ) 
 
- 
           
- 
         Returns an expression-equivalent to connect ( s , r ) 
- 
         Let compl - sig - t < Tag , Args ... > Tag () Args ... void Tag ( Args ...) s2 e s2 then S2 decltype (( s2 )) E decltype (( e )) tag_invoke ( get_completion_signatures , s2 , e ) make_completion_signatures < copy_cvref_t < S2 , S > , E , set - error - signature , set - value - completions > ; where set - value - completions template < class ... As > set - value - completions = completion_signatures < compl - sig - t < set_value_t , invoke_result_t < F , As ... >>> and set - error - signature completion_signatures < set_error_t ( exception_ptr ) > type - list value_types_of_t < copy_cvref_t < S2 , S > , E , potentially - throwing , type - list > true_type completion_signatures <> potentially - throwing template < class ... As > using potentially - throwing = bool_constant <! is_nothrow_invocable_v < F , As ... >> ; 
 
- 
         
 If the function selected above does not return a sender that invokes f s f then ( s , f ) 
- 
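A non-normative sketch of then's error behavior: if the callable exits with an exception, the adaptor completes with set_error(out_r, current_exception()), and sync_wait rethrows that exception on the caller's thread:

  using namespace std::execution;

  sender auto s = just(0) | then([](int i) {
    if (i == 0) throw std::runtime_error("division by zero");
    return 100 / i;
  });

  try {
    this_thread::sync_wait(std::move(s));
  } catch (const std::runtime_error& e) {
    std::cout << "caught: " << e.what() << '\n';   // the error surfaces here
  }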
       
11.9.6.7. execution :: upon_error 
   - 
     upon_error 
- 
     The name upon_error s f S decltype (( s )) F f f 'f S sender F movable - value upon_error upon_error ( s , f ) - 
       tag_invoke ( upon_error , get_completion_scheduler < set_error_t > ( get_env ( s )), s , f ) - 
         Mandates: The type of the tag_invoke sender 
 
- 
         
- 
       Otherwise, tag_invoke ( upon_error , s , f ) - 
         Mandates: The type of the tag_invoke sender 
 
- 
         
- 
       Otherwise, constructs a sender s2 s2 out_r - 
         Constructs a receiver r - 
           set_value ( r , args ...) set_value ( out_r , args ...) 
- 
           When set_error ( r , e ) v invoke ( f ', e ) decltype ( v ) void set_value ( out_r ) set_value ( out_r , v ) set_error ( out_r , current_exception ()) set_error ( r , e ) 
- 
           set_stopped ( r ) set_stopped ( out_r ) 
 
- 
           
- 
         Returns an expression-equivalent to connect ( s , r ) 
- 
         Let compl - sig - t < Tag , Args ... > Tag () Args ... void Tag ( Args ...) s2 e s2 upon_error S2 decltype (( s2 )) E decltype (( e )) tag_invoke ( get_completion_signatures , s2 , e ) make_completion_signatures < copy_cvref_t < S2 , S > , E , set - error - signature , default - set - value , set - error - completion > ; where set - error - completion template < class E > set - error - completion = completion_signatures < compl - sig - t < set_value_t , invoke_result_t < F , E >>> and set - error - signature completion_signatures < set_error_t ( exception_ptr ) > type - list error_types_of_t < copy_cvref_t < S2 , S > , E , potentially - throwing > true_type completion_signatures <> potentially - throwing template < class ... Es > using potentially - throwing = type - list <! bool_constant < is_nothrow_invocable_v < F , Es >> ... > ; 
 
- 
         
 If the function selected above does not return a sender which invokes f s f upon_error ( s , f ) 
- 
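A non-normative usage sketch of upon_error, which maps an error completion into a value completion while letting value completions pass through unchanged:

  using namespace std::execution;

  sender auto s =
      just_error(std::make_exception_ptr(std::runtime_error("oops")))
    | upon_error([](std::exception_ptr) { return -1; });     // the error becomes the value -1

  auto [v] = this_thread::sync_wait(std::move(s)).value();   // v == -1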
       
11.9.6.8. execution :: upon_stopped 
   - 
     upon_stopped 
- 
     The name upon_stopped s f S decltype (( s )) F f f 'f S sender F movable - value invocable upon_stopped upon_stopped ( s , f ) - 
       tag_invoke ( upon_stopped , get_completion_scheduler < set_stopped_t > ( get_env ( s )), s , f ) - 
         Mandates: The type of the tag_invoke sender 
 
- 
         
- 
       Otherwise, tag_invoke ( upon_stopped , s , f ) - 
         Mandates: The type of the tag_invoke sender 
 
- 
         
- 
       Otherwise, constructs a sender s2 s2 out_r - 
         Constructs a receiver r - 
           set_value ( r , args ...) set_value ( out_r , args ...) 
- 
           set_error ( r , e ) set_error ( out_r , e ) 
- 
           When set_stopped ( r ) v invoke ( f ') v void set_value ( out_r ) set_value ( out_r , v ) set_error ( out_r , current_exception ()) set_stopped ( r ) 
 
- 
           
- 
         Returns an expression-equivalent to connect ( s , r ) 
- 
         Let compl - sig - t < Tag , Args ... > Tag () Args ... void Tag ( Args ...) s2 e s2 upon_stopped S2 decltype (( s2 )) E decltype (( e )) tag_invoke ( get_completion_signatures , s2 , e ) make_completion_signatures < copy_cvref_t < S2 , S > , E , set - error - signature , default - set - value , default - set - error , set - stopped - completions > ; where set - stopped - completions completion_signatures < compl - sig - t < set_value_t , invoke_result_t < F >> set - error - signature completion_signatures < set_error_t ( exception_ptr ) > is_nothrow_invocable_v < F > true, orcompletion_signatures <> 
 
- 
         
 If the function selected above does not return a sender that invokes f s f s upon_stopped ( s , f ) 
- 
       
11.9.6.9. execution :: let_value execution :: let_error execution :: let_stopped 
   - 
     let_value let_error let_stopped 
- 
     The names let_value let_error let_stopped let - cpo let_value let_error let_stopped s f S decltype (( s )) F f f 'f S sender let - cpo ( s , f ) F invocable let_stopped ( s , f ) let - cpo ( s , f ) - 
       tag_invoke ( let - cpo , get_completion_scheduler < set_value_t > ( get_env ( s )), s , f ) - 
         Mandates: The type of the tag_invoke sender 
 
- 
         
- 
       Otherwise, tag_invoke ( let - cpo , s , f ) - 
         Mandates: The type of the tag_invoke sender 
 
- 
         
- 
       Otherwise, given a receiver out_r out_r 'out_r - 
         For let_value set - cpo set_value let_error set - cpo set_error let_stopped set - cpo set_stopped completion - function set_value set_error set_stopped 
- 
         Let r R - 
           When set - cpo ( r , args ...) r args ... op_state2 args '... invoke ( f ', args '...) s3 connect ( s3 , std :: move ( out_r ')) op_state3 op_state3 op_state2 start ( op_state3 ) set_error ( std :: move ( out_r '), current_exception ()) set - cpo ( r , args ...) 
- 
           completion - function ( r , args ...) completion - function ( std :: move ( out_r '), args ...) completion - function set - cpo 
 
- 
           
- 
         let - cpo ( s , f ) s2 - 
           If the expression connect ( s , r ) connect ( s2 , out_r ) 
- 
           Otherwise, let op_state2 connect ( s , r ) connect ( s2 , out_r ) op_state op_state2 start ( op_state ) start ( op_state2 ) 
 
- 
           
- 
         Given subexpressions s2 e s2 let - cpo ( s , f ) S2 decltype (( s2 )) E decltype (( e )) DS copy_cvref_t < S2 , S > tag_invoke ( get_completion_signatures , s2 , e ) - 
           If sender_in < DS , E > false, the expressiontag_invoke ( get_completion_signatures , s2 , e ) 
- 
           Otherwise, let Sigs ... completion_signatures completion_signatures_of_t < DS , E > Sigs2 ... Sigs ... set - cpo Rest ... Sigs ... Sigs2 ... 
- 
           For each Sig2 i Sigs2 ... Vs i ... Sig2 i S3 i invoke_result_t < F , decay_t < Vs i >& ... > S3 i sender_in < S3 i , E > tag_invoke ( get_completion_signatures , s2 , e ) 
- 
           Otherwise, let Sigs3 i ... completion_signatures completion_signatures_of_t < S3 i , E > tag_invoke ( get_completion_signatures , s2 , e ) completion_signatures < Sigs3 0 ..., Sigs3 1 ..., ... Sigs3 n -1 . .., Rest ..., set_error_t ( exception_ptr ) > n sizeof ...( Sigs2 ) 
 
- 
           
 
- 
         
 If let - cpo ( s , f ) f set - cpo f s let - cpo ( s , f ) 
- 
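A non-normative usage sketch of let_value; the decay-copied results of the input sender stay alive for as long as the sender returned by the callable is running, so the callable may take them by reference:

  using namespace std::execution;

  sender auto s =
      just(std::string("hello"))
    | let_value([](std::string& str) {      // str refers to state owned by the operation
        return just(str.size());
      });

  auto [n] = this_thread::sync_wait(std::move(s)).value();  // n == 5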
       
11.9.6.10. execution :: bulk 
   - 
     bulk 
- 
     The name bulk s shape f S decltype (( s )) Shape decltype (( shape )) F decltype (( f )) S sender Shape integral bulk bulk ( s , shape , f ) - 
       tag_invoke ( bulk , get_completion_scheduler < set_value_t > ( get_env ( s )), s , shape , f ) - 
         Mandates: The type of the tag_invoke sender 
 
- 
         
- 
       Otherwise, tag_invoke ( bulk , s , shape , f ) - 
         Mandates: The type of the tag_invoke sender 
 
- 
         
- 
       Otherwise, constructs a sender s2 s2 out_r - 
         Constructs a receiver r - 
           When set_value ( r , args ...) f ( i , args ...) i Shape 0 shape set_value ( out_r , args ...) set_error ( out_r , current_exception ()) 
- 
           When set_error ( r , e ) set_error ( out_r , e ) 
- 
           When set_stopped(r) is called, it calls set_stopped(out_r). 
 
- 
           
- 
         Calls connect ( s , r ) op_state2 
- 
         Returns an operation state op_state op_state2 start ( op_state ) start ( op_state2 ) 
- 
         Given subexpressions s2 e s2 bulk S2 decltype (( s2 )) E decltype (( e )) DS copy_cvref_t < S2 , S > Shape decltype (( shape )) nothrow - callable template < class ... As > using nothrow - callable = bool_constant < is_nothrow_invocable_v < decay_t < F >& , Shape , As ... >> ; - 
           If any of the types in the type - list value_types_of_t < DS , E , nothrow - callable , type - list > false_type tag_invoke ( get_completion_signatures , s2 , e ) make_completion_signatures < DS , E , completion_signatures < set_error_t ( exception_ptr ) >> 
- 
           Otherwise, the type of tag_invoke ( get_completion_signatures , s2 , e ) completion_signatures_of_t < DS , E > 
 
- 
           
 
- 
         
- 
       If the function selected above does not return a sender that invokes f ( i , args ...) i Shape 0 shape args bulk ( s , shape , f ) 
 
- 
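A non-normative usage sketch of bulk: the callable is invoked once for each index i in [0, shape) with the input sender's values, which are then forwarded unchanged to the value completion:

  using namespace std::execution;

  std::array<int, 4> data{1, 2, 3, 4};

  auto [result] = this_thread::sync_wait(
      just(data)
    | bulk(data.size(), [](std::size_t i, std::array<int, 4>& arr) {
        arr[i] *= 2;                         // f(i, args...) for each i
      })
  ).value();
  // result == {2, 4, 6, 8}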
       
11.9.6.11. execution :: split 
   - 
     split 
- 
     Let split - env e get_stop_token ( e ) stop_token 
- 
     The name split s S decltype (( s )) sender_in < S , split - env > constructible_from < decay_t < env_of_t < S >> , env_of_t < S >> false,split split ( s ) - 
       tag_invoke ( split , get_completion_scheduler < set_value_t > ( get_env ( s )), s ) - 
         Mandates: The type of the tag_invoke sender 
 
- 
         
- 
       Otherwise, tag_invoke ( split , s ) - 
         Mandates: The type of the tag_invoke sender 
 
- 
         
- 
       Otherwise, constructs a sender s2 - 
         Creates an object sh_state stop_source s - 
           the operation state that results from connecting s r 
- 
           the sets of values and errors with which s exception_ptr 
- 
           the result of decay-copying get_env ( s ) 
 
- 
           
- 
         Constructs a receiver r - 
           When set_value ( r , args ...) args ... sh_state sh_state set_error ( r , current_exception ()) 
- 
           When set_error ( r , e ) e sh_state sh_state 
- 
           When set_stopped ( r ) sh_state 
- 
           get_env ( r ) e split - env get_stop_token ( e ) get_token () sh_state 
 
- 
           
- 
         Calls get_env ( s ) sh_state 
- 
         Calls connect ( s , r ) op_state2 op_state2 sh_state 
- 
         When s2 out_r OutR op_state - 
           An object out_r 'OutR out_r 
- 
           A reference to sh_state 
- 
           A stop callback of type optional < stop_token_of_t < env_of_t < OutR >>:: callback_type < stop - callback - fn >> stop - callback - fn struct stop - callback - fn { stop_source & stop_src_ ; void operator ()() noexcept { stop_src_ . request_stop (); } }; 
 
- 
           
- 
         When start ( op_state ) - 
           If one of r Tag Tag ( out_r ', args2 ...) args2 ... sh_state Tag ( r , args ...) 
- 
           Otherwise, it emplace constructs the stop callback optional with the arguments get_stop_token ( get_env ( out_r ')) stop - callback - fn { stop - src } stop - src sh_state 
- 
           Otherwise, it adds a pointer to op_state sh_state op_state - 
             If stop - src . stop_requested () true, all of the operation states insh_state set_stopped ( r ) 
- 
             Otherwise, start ( op_state2 ) 
 
- 
             
 
- 
           
- 
         When r op_state Tag r op_state Tag ( std :: move ( out_r '), args2 ...) args2 ... sh_state Tag ( r , args ...) 
- 
         Ownership of sh_state s2 op_state s2 
 
- 
         
- 
       Given subexpressions s2 s2 split get_env ( s2 ) sh_state get_env ( s ) 
- 
       Given subexpressions s2 e s2 split S2 decltype (( s2 )) E decltype (( e )) tag_invoke ( get_completion_signatures , s2 , e ) make_completion_signatures < copy_cvref_t < S2 , S > , E , completion_signatures < set_error_t ( exception_ptr ), set_error_t ( Es )... > , value - signatures , error - signatures > ; where Es value - signatures template < class ... Ts > using value - signatures = completion_signatures < set_value_t ( const decay_t < Ts >& ...) > ; and error - signatures template < class E > using error - signatures = completion_signatures < set_error_t ( const decay_t < E >& ) > ; 
- 
       Let s r s2 split ( s ) r2 s2 args r CSO s s2 CSO ( r2 , args2 ...) args2 args set_error ( r2 , e2 ) e2 r2 r2 
 
- 
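A non-normative usage sketch of split; make_work() is a placeholder for a sender that completes with a single int. The work behind the shared sender runs at most once, and every consumer observes the same results, delivered as const lvalue references:

  using namespace std::execution;

  sender auto shared = split(make_work());   // make_work() is a placeholder

  sender auto a = shared | then([](int v) { return v + 1; });
  sender auto b = shared | then([](int v) { return v * 2; });

  auto [x] = this_thread::sync_wait(std::move(a)).value();
  auto [y] = this_thread::sync_wait(std::move(b)).value();  // same underlying result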
       
11.9.6.12. execution :: when_all 
   - 
     when_all when_all_with_variant when_all when_all_with_variant ( s ...) when_all ( into_variant ( s )...) s 
- 
     The name when_all s i ... S i ... decltype (( s i ))... when_all ( s i ...) - 
       If the number of subexpressions s i ... 
- 
       If any type S i sender 
 Otherwise, the expression when_all ( s i ...) - 
       tag_invoke ( when_all , s i ...) tag_invoke s i ... set_value when_all ( s i ...) - 
         Mandates: The type of the tag_invoke sender 
 
- 
         
- 
       Otherwise, constructs a sender w W w out_r OutR op_state - 
         For each sender s i r i - 
           If set_value ( r i , t i ...) r i op_state set_value ( out_r , t 0 ..., t 1 ..., ..., t n -1 . ..) n s i ... 
- 
           Otherwise, set_error set_stopped r i set_error ( r i , e ) request_stop op_state op_state set_error ( out_r , e ) 
- 
           Otherwise, request_stop op_state op_state set_stopped ( out_r ) 
- 
           For each receiver r i get_env ( r i ) e get_stop_token ( e ) get_token () op_state tag_invoke ( tag , e , args ...) tag ( get_env ( out_r ), args ...) args ... tag forwarding - query get_stop_token_t 
 
- 
           
- 
         For each sender s i connect ( s i , r i ) child_op i 
- 
         Returns an operation state op_state - 
           Each operation state child_op i 
- 
           A stop source of type in_place_stop_source 
- 
           A stop callback of type optional < stop_token_of_t < env_of_t < OutR >>:: callback_type < stop - callback - fn >> stop - callback - fn struct stop - callback - fn { in_place_stop_source & stop_src_ ; void operator ()() noexcept { stop_src_ . request_stop (); } }; 
 
- 
           
- 
         When start ( op_state ) - 
           Emplace constructs the stop callback optional with the arguments get_stop_token ( get_env ( out_r )) stop - callback - fn { stop - src } stop - src op_state 
- 
           Then, it checks to see if stop - src . stop_requested () set_stopped ( out_r ) 
- 
           Otherwise, calls start ( child_op i ) child_op i 
 
- 
           
- 
         Given subexpressions s2 e s2 when_all S2 decltype (( s2 )) E decltype (( e )) Ss ... when_all s2 WE stop_token_of_t < WE > in_place_stop_token tag_invoke_result_t < Tag , WE , As ... > call - result - t < Tag , E , As ... > As ... Tag get_stop_token_t tag_invoke ( get_completion_signatures , s2 , e ) - 
           For each type S i Ss ... DS i copy_cvref_t < S2 , S i > DS i completion_signatures_of_t < DS i , WE > tag_invoke ( get_completion_signatures , s2 , e ) 
- 
           Otherwise, for each type DS i Sigs i ... completion_signatures completion_signatures_of_t < DS i , WE > C i Sigs i ... set_value_t C i tag_invoke ( get_completion_signatures , s2 , e ) 
- 
           Otherwise, let Sigs2 i ... Sigs i ... set_value_t Ws ... [ Sigs2 0 ..., Sigs2 1 ..., ... Sigs2 n -1 . .., set_stopped_t ()] n sizeof ...( Ss ) C i 0 tag_invoke ( get_completion_signatures , s2 , e ) completion_signatures < Ws ... > 
- 
           Otherwise, let V i ... Sigs i ... set_value_t tag_invoke ( get_completion_signatures , s2 , e ) completion_signatures < Ws ..., set_value_t ( decay_t < V 0 >&& ..., decay_t < V 1 >&& ..., ... decay_t < V n -1 >&& ...) > 
 
- 
           
 
- 
         
 
- 
       
- 
     The name when_all_with_variant s ... S decltype (( s )) S i S ... sender when_all_with_variant when_all_with_variant ( s ...) - 
       tag_invoke ( when_all_with_variant , s ...) tag_invoke R into - variant - type < S , env_of_t < R >> ... set_value when_all ( s i ...) - 
         Mandates: The type of the tag_invoke sender 
 
- 
         
- 
       Otherwise, when_all ( into_variant ( s )...) 
 
- 
       
- 
     For a sender s2 when_all when_all_with_variant get_env ( s2 ) empty_env 
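A non-normative usage sketch of when_all; each input sender must have exactly one value completion signature, and the resulting values are concatenated into a single set_value completion:

  using namespace std::execution;

  sender auto s = when_all(just(1), just(2.0), just(std::string("three")));

  auto [i, d, str] = this_thread::sync_wait(std::move(s)).value();
  // i == 1, d == 2.0, str == "three"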
11.9.6.13. execution :: transfer_when_all 
   - 
     transfer_when_all and transfer_when_all_with_variant are used to join multiple sender chains and deliver their combined completion on an execution agent belonging to a specified scheduler. Unless customized, transfer_when_all(scheduler, input-senders...) is equivalent to transfer(when_all(input-senders...), scheduler), and transfer_when_all_with_variant(scheduler, input-senders...) is equivalent to transfer_when_all(scheduler, into_variant(input-senders)...); customizations may avoid the extra scheduling transition that the default formulation implies.
- 
     The name transfer_when_all sch s ... Sch decltype ( sch ) S decltype (( s )) Sch scheduler S i S ... sender transfer_when_all transfer_when_all ( sch , s ...) - 
       tag_invoke ( transfer_when_all , sch , s ...) tag_invoke s ... set_value sch transfer_when_all ( sch , s ...) - 
         Mandates: The type of the tag_invoke sender 
 
- 
         
- 
       Otherwise, transfer ( when_all ( s ...), sch ) 
 
- 
       
- 
     The name transfer_when_all_with_variant sch s ... Sch decltype (( sch )) S decltype (( s )) S i S ... sender transfer_when_all_with_variant transfer_when_all_with_variant ( sch , s ...) - 
       tag_invoke ( transfer_when_all_with_variant , s ...) tag_invoke R into - variant - type < S , env_of_t < R >> ... set_value transfer_when_all_with_variant ( sch , s ...) - 
         Mandates: The type of the tag_invoke sender 
 
- 
         
- 
       Otherwise, transfer_when_all ( sch , into_variant ( s )...) 
 
- 
       
- 
     For a sender t transfer_when_all ( sch , s ...) get_env ( t ) q get_completion_scheduler < CPO > ( q ) sch CPO set_value_t set_stopped_t get_completion_scheduler < set_error_t > 
11.9.6.14. execution :: into_variant 
   - 
     into_variant variant tuple 
- 
     The template into - variant - type into_variant template < class S , class E > requires sender_in < S , E > using into - variant - type = value_types_of_t < S , E > ; 
- 
     into_variant s S decltype (( s )) S sender into_variant ( s ) into_variant ( s ) s2 s2 out_r - 
       Constructs a receiver r - 
         If set_value ( r , ts ...) set_value ( out_r , into - variant - type < S , env_of_t < decltype (( r )) >> ( decayed - tuple < decltype ( ts )... > ( ts ...))) set_error ( out_r , current_exception ()) 
- 
         set_error ( r , e ) set_error ( out_r , e ) 
- 
         set_stopped ( r ) set_stopped ( out_r ) 
 
- 
         
- 
       Calls connect ( s , r ) op_state2 
- 
       Returns an operation state op_state op_state2 start ( op_state ) start ( op_state2 ) 
- 
       Given subexpressions s2 e s2 into_variant S2 decltype (( s2 )) E decltype (( e )) into - variant - set - value template < class S , class E > struct into - variant - set - value { template < class ... Args > using apply = set_value_t ( into - variant - type < S , E > ); }; Let into - variant - is - nothrow template < class S , class E > struct into - variant - is - nothrow { template < class ... Args > requires constructible_from < decayed - tuple < Args ... > , Args ... > using apply = bool_constant < noexcept ( into - variant - type < S , E > ( decayed - tuple < Args ... > ( declval < Args > ()...))) > ; }; Let INTO - VARIANT - ERROR - SIGNATURES ( S , E ) completion_signatures < set_error_t ( exception_ptr ) > type - list value_types_of_t < S , E , into - variant - is - nothrow < S , E >:: template apply , type - list > false_type completion_signatures <> The type of tag_invoke ( get_completion_signatures_t {}, s2 , e ) make_completion_signatures < S2 , E , INTO - VARIANT - ERROR - SIGNATURES ( S , E ), into - variant - set - value < S2 , E >:: template apply > 
 
- 
       
11.9.6.15. execution :: stopped_as_optional 
   - 
     stopped_as_optional 
- 
     The name stopped_as_optional s S decltype (( s )) get - env - sender connect r start set_value ( r , get_env ( r )) stopped_as_optional ( s ) let_value ( get - env - sender , [] < class E > ( const E & ) requires single - sender < S , E > { return let_stopped ( then ( s , [] < class T > ( T && t ) { return optional < decay_t < single - sender - value - type < S , E >>> { std :: forward < T > ( t ) }; } ), [] () noexcept { return just ( optional < decay_t < single - sender - value - type < S , E >>> {}); } ); } ) 
11.9.6.16. execution :: stopped_as_error 
   - 
     stopped_as_error 
- 
     The name stopped_as_error s e S decltype (( s )) E decltype (( e )) S sender E movable - value stopped_as_error ( s , e ) stopped_as_error ( s , e ) let_stopped ( s , [] { return just_error ( e ); }) 
11.9.6.17. execution :: ensure_started 
   - 
     ensure_started 
- 
     Let ensure - started - env e get_stop_token ( e ) stop_token 
- 
     The name ensure_started s S decltype (( s )) sender_in < S , ensure - started - env > constructible_from < decay_t < env_of_t < S >> , env_of_t < S >> false,ensure_started ( s ) ensure_started ( s ) - 
       tag_invoke ( ensure_started , get_completion_scheduler < set_value_t > ( get_env ( s )), s ) - 
         Mandates: The type of the tag_invoke sender 
 
- 
         
- 
       Otherwise, tag_invoke ( ensure_started , s ) - 
         Mandates: The type of the tag_invoke sender 
 
- 
         
- 
       Otherwise, constructs a sender s2 - 
         Creates an object sh_state stop_source - 
           the operation state that results from connecting s r 
- 
           the sets of values and errors with which s exception_ptr 
- 
           the result of decay-copying get_env ( s ) 
 s2 sh_state r 
- 
           
- 
         Constructs a receiver r - 
           When set_value ( r , args ...) args ... sh_state sh_state set_error ( r , current_exception ()) 
- 
           When set_error ( r , e ) e sh_state 
- 
           When set_stopped ( r ) 
- 
           get_env ( r ) e ensure - started - env get_stop_token ( e ) get_token () sh_state 
- 
           r sh_state s2 r sh_state 
 
- 
           
- 
         Calls get_env ( s ) sh_state 
- 
         Calls connect ( s , r ) op_state2 op_state2 sh_state start ( op_state2 ) 
- 
         When s2 out_r OutR op_state - 
           An object out_r 'OutR out_r 
- 
           A reference to sh_state 
- 
           A stop callback of type optional < stop_token_of_t < env_of_t < OutR >>:: callback_type < stop - callback - fn >> stop - callback - fn struct stop - callback - fn { stop_source & stop_src_ ; void operator ()() noexcept { stop_src_ . request_stop (); } }; 
 s2 sh_state op_state 
- 
           
- 
         When start ( op_state ) - 
           If r CF r CF ( out_r ', args2 ...) args2 ... sh_state CF ( r , args ...) 
- 
           Otherwise, it emplace constructs the stop callback optional with the arguments get_stop_token ( get_env ( out_r ')) stop - callback - fn { stop - src } stop - src sh_state 
- 
           Then, it checks to see if stop - src . stop_requested () true. If so, it callsset_stopped ( out_r ') 
- 
           Otherwise, it sets sh_state op_state r 
 
- 
           
- 
         When r op_state CF r op_state CF ( std :: move ( out_r '), args2 ...) args2 ... sh_state CF ( r , args ...) 
- 
         [Note: If sender s2 r sh_state sh_state 
 
- 
         
- 
       Given a subexpression s s2 ensure_started ( s ) get_env ( s2 ) sh_state get_env ( s ) 
- 
       Given subexpressions s2 e s2 ensure_started S2 decltype (( s2 )) E decltype (( e )) tag_invoke ( get_completion_signatures , s2 , e ) make_completion_signatures < copy_cvref_t < S2 , S > , ensure - started - env , completion_signatures < set_error_t ( exception_ptr && ), set_error_t ( Es )... > , set - value - signature , error - types > where Es set - value - signature template < class ... Ts > using set - value - signature = completion_signatures < set_value_t ( decay_t < Ts >&& ...) > ; and error - types template < class E > using error - types = completion_signatures < set_error_t ( decay_t < E >&& ) > ; 
 
- 
       
- 
     Let s r s2 ensure_started ( s ) r2 s2 args r CSO s s2 CSO ( r2 , args2 ...) args2 args set_error ( r2 , e2 ) e2 r2 r2 
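A non-normative usage sketch of ensure_started; sch is assumed to be a scheduler for a concurrently-running context, and expensive_computation and do_other_work are placeholder functions. The eagerly started work overlaps with the caller's subsequent work:

  using namespace std::execution;

  sender auto started =
    ensure_started(on(sch, just() | then([] { return expensive_computation(); })));

  do_other_work();   // overlaps with the already-running computation

  auto [v] = this_thread::sync_wait(std::move(started)).value();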
11.9.7. Sender consumers [exec.consumers]
11.9.7.1. execution :: start_detached 
   - 
     start_detached 
- 
     The name start_detached s S decltype (( s )) S sender start_detached start_detached ( s ) - 
       tag_invoke ( start_detached , get_completion_scheduler < set_value_t > ( get_env ( s )), s ) - 
         Mandates: The type of the tag_invoke void 
 
- 
         
- 
       Otherwise, tag_invoke ( start_detached , s ) - 
         Mandates: The type of the tag_invoke void 
 
- 
         
- 
       Otherwise: - 
         Let R r R cr const R - 
           The expression set_value ( r ) 
- 
           For any subexpression e set_error ( r , e ) terminate () 
- 
           The expression set_stopped ( r ) 
- 
           The expression get_env ( cr ) empty_env {} 
 
- 
           
- 
         Calls connect ( s , r ) op_state start ( op_state ) 
 
- 
         
   If the function selected above does not eagerly start the sender s after connecting it with a receiver that ignores the value and stopped completion operations and calls terminate() on the error completion, the behavior of calling start_detached(s) is undefined.
- 
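A non-normative usage sketch of start_detached; sch is assumed to be a scheduler whose context outlives the submitted work, and log_heartbeat is a placeholder function. The value and stopped completions are discarded, and an error completion calls terminate():

  using namespace std::execution;

  start_detached(
      schedule(sch)
    | then([] { log_heartbeat(); }));   // fire-and-forget; nothing to await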
       
11.9.7.2. this_thread :: sync_wait 
   - 
     this_thread :: sync_wait this_thread :: sync_wait_with_variant sync_wait 
- 
     For any receiver r sync_wait sync_wait_with_variant get_scheduler ( get_env ( r )) get_delegatee_scheduler ( get_env ( r )) this_thread :: sync_wait sync_wait run_loop sync_wait 
- 
     The templates sync - wait - type sync - wait - with - variant - type this_thread :: sync_wait this_thread :: sync_wait_with_variant sync - wait - env get_env ( r ) r sync_wait template < sender_in < sync - wait - env > S > using sync - wait - type = optional < value_types_of_t < S , sync - wait - env , decayed - tuple , type_identity_t >> ; template < sender_in < sync - wait - env > S > using sync - wait - with - variant - type = optional < into - variant - type < S , sync - wait - env >> ; 
- 
     The name this_thread :: sync_wait s S decltype (( s )) sender_in < S , sync - wait - env > false, or the number of the argumentscompletion_signatures_of_t < S , sync - wait - env >:: value_types Variant this_thread :: sync_wait ( s ) this_thread :: sync_wait ( s ) - 
       tag_invoke ( this_thread :: sync_wait , get_completion_scheduler < set_value_t > ( get_env ( s )), s ) - 
         Mandates: The type of the tag_invoke sync - wait - type < S , sync - wait - env > 
 
- 
         
- 
       Otherwise, tag_invoke ( this_thread :: sync_wait , s ) - 
         Mandates: The type of the tag_invoke sync - wait - type < S , sync - wait - env > 
 
- 
         
- 
       Otherwise: - 
         Constructs a receiver r 
- 
         Calls connect ( s , r ) op_state start ( op_state ) 
- 
         Blocks the current thread until a completion operation of r - 
           If set_value ( r , ts ...) sync - wait - type < S , sync - wait - env > { decayed - tuple < decltype ( ts )... > { ts ...}} sync_wait 
- 
           If set_error ( r , e ) E e E exception_ptr std :: rethrow_exception ( e ) E error_code system_error ( e ) e 
- 
           If set_stopped ( r ) sync - wait - type < S , sync - wait - env > {} 
 
- 
           
 
- 
         
 
- 
       
- 
     The name this_thread :: sync_wait_with_variant s S into_variant ( s ) sender_in < S , sync - wait - env > false,this_thread :: sync_wait_with_variant ( s ) this_thread :: sync_wait_with_variant ( s ) - 
       tag_invoke ( this_thread :: sync_wait_with_variant , get_completion_scheduler < set_value_t > ( get_env ( s )), s ) - 
         Mandates: The type of the tag_invoke sync - wait - with - variant - type < S , sync - wait - env > 
 
- 
         
- 
       Otherwise, tag_invoke ( this_thread :: sync_wait_with_variant , s ) - 
         Mandates: The type of the tag_invoke sync - wait - with - variant - type < S , sync - wait - env > 
 
- 
         
- 
       Otherwise, this_thread :: sync_wait ( into_variant ( s )) 
 
- 
       
11.10. execution :: execute 
   - 
     execute 
- 
     The name execute sch f Sch decltype (( sch )) F decltype (( f )) Sch scheduler F invocable execute execute - 
       tag_invoke ( execute , sch , f ) tag_invoke f f sch std :: terminate execute - 
         Mandates: The type of the tag_invoke void 
 
- 
         
- 
       Otherwise, start_detached ( then ( schedule ( sch ), f )) 
 
- 
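A non-normative usage sketch of execute, which by default is equivalent to start_detached(then(schedule(sch), f)); sch is a placeholder scheduler:

  using namespace std::execution;

  execute(sch, [] {
    std::puts("running on sch's execution resource");
  });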
       
11.11. Sender/receiver utilities [exec.utils]
- 
     This section makes use of the following exposition-only entities: // [ Editorial note: copy_cvref_t as in [[P1450R3]] -- end note ] // Mandates: is_base_of_v<T, remove_reference_t<U>> is true template < class T , class U > copy_cvref_t < U && , T > c - style - cast ( U && u ) noexcept requires decays - to < T , T > { return ( copy_cvref_t < U && , T > ) std :: forward < U > ( u ); } 
- 
     [Note: The C-style cast in c-style-cast is to disable accessibility checks. -- end note] 
11.11.1. execution :: receiver_adaptor 
template < class - type Derived , receiver Base = unspecified > // arguments are not associated entities ([lib.tmpl-heads]) class receiver_adaptor ; 
- 
     receiver_adaptor tag_invoke 
- 
     If Base - 
       Let HAS - BASE false, and
- 
       Let GET - BASE ( d ) d . base () 
 otherwise, let: - 
       Let HAS - BASE true, and
- 
       Let GET - BASE ( d ) c - style - cast < receiver_adaptor < Derived , Base >> ( d ). base () 
 Let BASE - TYPE ( D ) GET - BASE ( declval < D > ()) 
- 
       
- 
     receiver_adaptor < Derived , Base > template < class - type Derived , receiver Base = unspecified > // arguments are not associated entities ([lib.tmpl-heads]) class receiver_adaptor { friend Derived ; public : using is_receiver = unspecified ; // Constructors receiver_adaptor () = default ; template < class B > requires HAS - BASE && constructible_from < Base , B > explicit receiver_adaptor ( B && base ) : base_ ( std :: forward < B > ( base )) {} private : using set_value = unspecified ; using set_error = unspecified ; using set_stopped = unspecified ; using get_env = unspecified ; // Member functions template < class Self > requires HAS - BASE decltype ( auto ) base ( this Self && self ) noexcept { return ( std :: forward < Self > ( self ). base_ ); } // [exec.utils.rcvr.adptr.nonmembers] Non-member functions template < class ... As > friend void tag_invoke ( set_value_t , Derived && self , As && ... as ) noexcept ; template < class E > friend void tag_invoke ( set_error_t , Derived && self , E && e ) noexcept ; friend void tag_invoke ( set_stopped_t , Derived && self ) noexcept ; friend decltype ( auto ) tag_invoke ( get_env_t , const Derived & self ) noexcept ( see below ); [[ no_unique_address ]] Base base_ ; // present if and only if HAS-BASE is true }; 
- 
     [Note: receiver_adaptor tag_invoke Derived receiver_adaptor 
- 
     [Example: using _int_completion = completion_signatures < set_value_t ( int ) > ; template < receiver_of < _int_completion > R > class my_receiver : receiver_adaptor < my_receiver < R > , R > { friend receiver_adaptor < my_receiver , R > ; void set_value () && { set_value ( std :: move ( * this ). base (), 42 ); } public : using receiver_adaptor < my_receiver , R >:: receiver_adaptor ; }; -- end example] 
11.11.1.1. Non-member functions [exec.utils.rcvr.adptr.nonmembers]
template < class ... As > friend void tag_invoke ( set_value_t , Derived && self , As && ... as ) noexcept ; 
- 
     Let SET - VALUE std :: move ( self ). set_value ( std :: forward < As > ( as )...) 
- 
     Constraints: Either SET - VALUE typename Derived :: set_value callable < set_value_t , BASE - TYPE ( Derived ), As ... > true.
- 
     Mandates: SET - VALUE 
- 
     Effects: Equivalent to: - 
       If SET - VALUE SET - VALUE 
- 
       Otherwise, set_value ( GET - BASE ( std :: move ( self )), std :: forward < As > ( as )...) 
 
- 
       
template < class E > friend void tag_invoke ( set_error_t , Derived && self , E && e ) noexcept ; 
- 
     Let SET - ERROR std :: move ( self ). set_error ( std :: forward < E > ( e )) 
- 
     Constraints: Either SET - ERROR typename Derived :: set_error callable < set_error_t , BASE - TYPE ( Derived ), E > true.
- 
     Mandates: SET - ERROR 
- 
     Effects: Equivalent to: - 
       If SET - ERROR SET - ERROR 
- 
       Otherwise, set_error ( GET - BASE ( std :: move ( self )), std :: forward < E > ( e )) 
 
- 
       
friend void tag_invoke ( set_stopped_t , Derived && self ) noexcept ; 
- 
     Let SET - STOPPED std :: move ( self ). set_stopped () 
- 
     Constraints: Either SET - STOPPED typename Derived :: set_stopped callable < set_stopped_t , BASE - TYPE ( Derived ) > true.
- 
     Mandates: SET - STOPPED 
- 
     Effects: Equivalent to: - 
       If SET - STOPPED SET - STOPPED 
- 
       Otherwise, set_stopped ( GET - BASE ( std :: move ( self ))) 
 
- 
       
friend decltype ( auto ) tag_invoke ( get_env_t , const Derived & self ) noexcept ( see below ); 
- 
     Constraints: Either self . get_env () typename Derived :: get_env callable < get_env_t , BASE - TYPE ( const Derived & ) > true.
- 
     Effects: Equivalent to: - 
       If self . get_env () self . get_env () 
- 
       Otherwise, std :: get_env ( GET - BASE ( self )) 
 
- 
       
- 
     Remarks: The expression in the noexcept - 
       If self . get_env () noexcept ( self . get_env ()) 
- 
       Otherwise, noexcept ( std :: get_env ( GET - BASE ( self ))) 
 
- 
       
11.11.2. execution :: completion_signatures 
   - 
     completion_signatures 
- 
     [Example: class my_sender { using completion_signatures = completion_signatures < set_value_t (), set_value_t ( int , float ), set_error_t ( exception_ptr ), set_error_t ( error_code ), set_stopped_t () > ; }; // Declares my_sender to be a sender that can complete by calling // one of the following for a receiver expression R: // set_value(R) // set_value(R, int{...}, float{...}) // set_error(R, exception_ptr{...}) // set_error(R, error_code{...}) // set_stopped(R) -- end example] 
- 
     This section makes use of the following exposition-only entities: template < class Fn > concept completion - signature = see below ; template < bool > struct indirect - meta - apply { template < template < class ... > class T , class ... As > using meta - apply = T < As ... > ; // exposition only }; template < class ... > concept always - true= true; // exposition only - 
       A type Fn completion - signature - 
         set_value_t ( Vs ...) Vs 
- 
         set_error_t ( E ) E 
- 
         set_stopped_t () 
 
- 
         
 template < class Tag , class S , class E , template < class ... > class Tuple , template < class ... > class Variant > requires sender_in < S , E > using gather - signatures = see below ; - 
       Let Fns ... completion_signatures completion_signatures_of_t < S , E > TagFns Fns Tag Ts n n TagFns Tuple Variant gather - signatures < Tag , S , E , Tuple , Variant > META - APPLY ( Variant , META - APPLY ( Tuple , Ts 0 ...), META - APPLY ( Tuple , Ts 1 ...), ... META - APPLY ( Tuple , Ts m -1 . ..)) m TagFns META - APPLY ( T , As ...) typename indirect - meta - apply < always - true< As ... >>:: template meta - apply < T , As ... > ; 
- 
       The purpose of META - APPLY Variant Tuple gather - signatures 
 
- 
       
- 
template < completion - signature ... Fns > struct completion_signatures {}; template < class S , class E = empty_env , template < class ... > class Tuple = decayed - tuple , template < class ... > class Variant = variant - or - empty > requires sender_in < S , E > using value_types_of_t = gather - signatures < set_value_t , S , E , Tuple , Variant > ; template < class S , class E = empty_env , template < class ... > class Variant = variant - or - empty > requires sender_in < S , E > using error_types_of_t = gather - signatures < set_error_t , S , E , type_identity_t , Variant > ; template < class S , class E = empty_env > requires sender_in < S , E > inline constexpr bool sends_stopped = ! same_as < type - list <> , gather - signatures < set_stopped_t , S , E , type - list , type - list >> ; 
11.11.3. execution :: make_completion_signatures 
   - 
     make_completion_signatures completion_signatures 
- 
     [Example: // Given a sender S and an environment Env, adapt S’s completion // signatures by lvalue-ref qualifying the values, adding an additional // exception_ptr error completion if its not already there, and leaving the // other completion signatures alone. template < class ... Args > using my_set_value_t = completion_signatures < set_value_t ( add_lvalue_reference_t < Args > ...) > ; using my_completion_signatures = make_completion_signatures < S , Env , completion_signatures < set_error_t ( exception_ptr ) > , my_set_value_t > ; -- end example] 
- 
     This section makes use of the following exposition-only entities: template < class ... As > using default - set - value = completion_signatures < set_value_t ( As ...) > ; template < class Err > using default - set - error = completion_signatures < set_error_t ( Err ) > ; 
- 
template < sender Sndr , class Env = empty_env , valid - completion - signatures AddlSigs = completion_signatures <> , template < class ... > class SetValue = default - set - value , template < class > class SetError = default - set - error , valid - completion - signatures SetStopped = completion_signatures < set_stopped_t () >> requires sender_in < Sndr , Env > using make_completion_signatures = completion_signatures < see below > ; - 
       SetValue As ... SetValue < As ... > valid - completion - signatures < SetValue < As ... >> 
- 
       SetError Err SetError < Err > valid - completion - signatures < SetError < Err >> 
 Then: - 
       Let Vs ... type - list value_types_of_t < Sndr , Env , SetValue , type - list > 
- 
       Let Es ... type - list error_types_of_t < Sndr , Env , error - list > error - list error - list < Ts ... > type - list < SetError < Ts > ... > 
- 
       Let Ss completion_signatures <> sends_stopped < Sndr , Env > false; otherwise,SetStopped 
 Then: - 
       If any of the above types are ill-formed, then make_completion_signatures < Sndr , Env , AddlSigs , SetValue , SetError , SetStopped > 
- 
       Otherwise, make_completion_signatures < Sndr , Env , AddlSigs , SetValue , SetError , SetStopped > completion_signatures < Sigs ... > Sigs ... completion_signatures [ AddlSigs , Vs ..., Es ..., Ss ] 
 
- 
       
11.12. Execution contexts [exec.ctx]
- 
     This section specifies some execution resources on which work can be scheduled. 
11.12.1. run_loop 
   - 
     A run_loop is an execution resource on which work can be scheduled. It maintains a simple, thread-safe first-in-first-out queue of work. Its run() member function removes elements from the queue and executes them in a loop on whatever thread of execution calls run().
- 
     A run_loop instance has an associated count that corresponds to the number of work items that are in its queue, and an associated state that can be one of starting, running, or finishing.
- 
     Concurrent invocations of the member functions of run_loop, other than run and its destructor, do not introduce data races. The member functions pop_front, push_back, and finish execute atomically.
- 
     [Note: Implementations are encouraged to use an intrusive queue of operation states to hold the work units to make scheduling allocation-free. — end note] class run_loop { // [exec.run.loop.types] Associated types class run - loop - scheduler ; // exposition only class run - loop - sender ; // exposition only struct run - loop - opstate - base { // exposition only virtual void execute () = 0 ; run_loop * loop_ ; run - loop - opstate - base * next_ ; }; template < receiver_of < completion_signatures < set_value_t () >> R > using run - loop - opstate = unspecified ; // exposition only // [exec.run.loop.members] Member functions: run - loop - opstate - base * pop_front (); // exposition only void push_back ( run - loop - opstate - base * ); // exposition only public : // [exec.run.loop.ctor] construct/copy/destroy run_loop () noexcept ; run_loop ( run_loop && ) = delete ; ~ run_loop (); // [exec.run.loop.members] Member functions: run - loop - scheduler get_scheduler (); void run (); void finish (); }; 
11.12.1.1. Associated types [exec.run.loop.types]
class run-loop-scheduler;
- 
     run-loop-scheduler is an unspecified type that models the scheduler concept. 
- 
     Instances of run-loop-scheduler remain valid until the end of the lifetime of the run_loop instance from which they were obtained. 
- 
     Two instances of run-loop-scheduler compare equal if and only if they were obtained from the same run_loop instance. 
- 
     Let sch be an expression of type run-loop-scheduler. The expression schedule(sch) has type run-loop-sender. 
class run-loop-sender;
- 
     run-loop-sender is an unspecified type such that sender_of<run-loop-sender, set_value_t()> is true. Additionally, the types reported by its error_types associated type are exception_ptr and its sends_stopped trait is true. 
- 
     An instance of run-loop-sender remains valid until the end of the lifetime of its associated run_loop instance. 
- 
     Let s be an expression of type run-loop-sender, let r be an expression such that decltype(r) models the receiver_of concept, and let C be either set_value_t or set_stopped_t. Then: 
- 
       The expression connect(s, r) has type run-loop-opstate<decay_t<decltype(r)>> and is potentially-throwing if and only if the initialization of decay_t<decltype(r)> from r is potentially-throwing. 
- 
       The expression get_completion_scheduler<C>(get_env(s)) is not potentially-throwing, has type run-loop-scheduler, and is equal to the run-loop-scheduler instance from which s was obtained. 
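     [Example: A non-normative illustration of the completion scheduler reported by the sender returned from schedule on a run_loop’s scheduler; all names used below are proposed in this paper:

     run_loop loop;
     scheduler auto sch = loop.get_scheduler();
     sender auto begin = schedule(sch);

     // The sender's environment reports sch as the scheduler on which its
     // value completion will happen.
     assert(get_completion_scheduler<set_value_t>(get_env(begin)) == sch);

     -- end example]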
 
       
template <receiver_of<completion_signatures<set_value_t()>> R> // arguments are not associated entities ([lib.tmpl-heads])
  struct run-loop-opstate;
- 
     run-loop-opstate<R> inherits, unambiguously, from run-loop-opstate-base. 
- 
     Let o be a non-const lvalue of type run-loop-opstate<R>, and let REC(o) be a non-const lvalue reference to an instance of type R that was initialized with the expression r passed to the invocation of connect that returned o. Then: 
- 
       The object to which REC(o) refers remains valid for the lifetime of the object to which o refers. 
- 
       The type run-loop-opstate<R> overrides run-loop-opstate-base::execute() such that o.execute() is equivalent to the following:

       if (get_stop_token(REC(o)).stop_requested()) {
         set_stopped(std::move(REC(o)));
       } else {
         set_value(std::move(REC(o)));
       }
- 
       The expression start(o) is equivalent to the following:

       try {
         o.loop_->push_back(&o);
       } catch(...) {
         set_error(std::move(REC(o)), current_exception());
       }
11.12.1.2. Constructor and destructor [exec.run.loop.ctor]
run_loop::run_loop() noexcept;
- 
     Postconditions: count is 0 and state is starting. 
run_loop::~run_loop();
- 
     Effects: If count is not 0, or if state is running, invokes terminate(). Otherwise, has no effects. 
11.12.1.3. Member functions [exec.run.loop.members]
run-loop-opstate-base* run_loop::pop_front();
- 
     Effects: Blocks ([defns.block]) until one of the following conditions is true:
- 
       count is 0 and state is finishing, in which case pop_front returns nullptr; or 
- 
       count is greater than 0, in which case an item is removed from the front of the queue, count is decremented by 1, and the removed item is returned. 
void run_loop::push_back(run-loop-opstate-base* item);
- 
     Effects: Adds item to the back of the queue and increments count by 1. 
- 
     Synchronization: This operation synchronizes with the pop_front operation that obtains item. 
run-loop-scheduler run_loop::get_scheduler();
- 
     Returns: an instance of run-loop-scheduler that can be used to schedule work onto this run_loop instance. 
void run_loop::run();
- 
     Effects: Equivalent to:

     while (auto* op = pop_front()) {
       op->execute();
     }
- 
     Precondition: state is starting. 
- 
     Postcondition: state is finishing. 
- 
     Remarks: While the loop is executing, state is running. When state changes, it does so without introducing data races. 
void run_loop::finish();
- 
     Effects: Changes state to finishing. 
- 
     Synchronization: This operation synchronizes with all pop_front operations on this object. 
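[Example: A non-normative sketch of one way a run_loop might be driven: the current thread executes the queued work while another thread submits work and eventually calls finish(). All names used are proposed in this paper except the lambda body, which is illustrative:

run_loop loop;

// Another thread submits work to the loop and then asks it to finish.
std::jthread producer{[&] {
  scheduler auto sch = loop.get_scheduler();
  sender auto work = then(schedule(sch), [] {
    std::cout << "executed on the thread calling loop.run()\n";
  });
  start_detached(std::move(work)); // enqueues the work onto the run_loop
  loop.finish();                   // run() returns once the queue is drained
}};

// Drives the loop on this thread: executes queued items until finish() has
// been called and the queue is empty.
loop.run();

-- end example]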
11.13. Coroutine utilities [exec.coro.utils]
11.13.1. execution::as_awaitable 
   - 
     as_awaitable transforms an object into one that is awaitable within a particular coroutine. This section makes use of the following exposition-only entities:

     template <class S, class E>
       using single-sender-value-type = see below;

     template <class S, class E>
       concept single-sender =
         sender_in<S, E> &&
         requires { typename single-sender-value-type<S, E>; };

     template <class S, class P>
       concept awaitable-sender =
         single-sender<S, ENV-OF(P)> &&
         sender_to<S, awaitable-receiver> && // see below
         requires (P& p) {
           { p.unhandled_stopped() } -> convertible_to<coroutine_handle<>>;
         };

     template <class S, class P>
       class sender-awaitable;

     where ENV-OF(P) names the type env_of_t<P> if that type is well-formed, and empty_env otherwise.
- 
       Alias template single-sender-value-type is defined as follows: 
- 
         If value_types_of_t<S, E, Tuple, Variant> would have the form Variant<Tuple<T>>, then single-sender-value-type<S, E> is an alias for type decay_t<T>. 
- 
         Otherwise, if value_types_of_t<S, E, Tuple, Variant> would have the form Variant<Tuple<>> or Variant<>, then single-sender-value-type<S, E> is an alias for type void. 
- 
         Otherwise, single-sender-value-type<S, E> is ill-formed. 
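       [Example: A non-normative illustration of the alias above in terms of value_types_of_t; just is the sender factory proposed in this paper, and std::tuple and std::variant stand in for the Tuple and Variant parameters:

       // just(42) has the single value completion set_value_t(int); a coroutine
       // awaiting it resumes with an int.
       static_assert(std::same_as<
         value_types_of_t<decltype(just(42)), empty_env, std::tuple, std::variant>,
         std::variant<std::tuple<int>>>);

       // just() has the single value completion set_value_t(); awaiting it
       // yields void.
       static_assert(std::same_as<
         value_types_of_t<decltype(just()), empty_env, std::tuple, std::variant>,
         std::variant<std::tuple<>>>);

       // A sender with more than one value completion (for example, one that
       // may complete with either an int or a string) is not a single sender,
       // and single-sender-value-type is ill-formed for it.

       -- end example]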
 
         
- 
       The type sender-awaitable<S, P> is equivalent to the following:

       template <class S, class P> // arguments are not associated entities ([lib.tmpl-heads])
       class sender-awaitable {
         struct unit {};
         using value_t = single-sender-value-type<S, ENV-OF(P)>;
         using result_t = conditional_t<is_void_v<value_t>, unit, value_t>;
         struct awaitable-receiver;

         variant<monostate, result_t, exception_ptr> result_{};
         connect_result_t<S, awaitable-receiver> state_;

        public:
         sender-awaitable(S&& s, P& p);
         bool await_ready() const noexcept { return false; }
         void await_suspend(coroutine_handle<P>) noexcept { start(state_); }
         value_t await_resume();
       };
- 
         awaitable-receiver is equivalent to the following:

         struct awaitable-receiver {
           using is_receiver = unspecified;
           variant<monostate, result_t, exception_ptr>* result_ptr_;
           coroutine_handle<P> continuation_;
           // ... see below
         };

         Let r be an rvalue expression of type awaitable-receiver, let cr be a const lvalue that refers to r, let vs... be a pack of subexpressions of types Vs... respectively, and let err be an expression of type Err. Then: 
- 
           If constructible_from<result_t, Vs...> is satisfied, the expression set_value(r, vs...) is equivalent to:

           try {
             r.result_ptr_->emplace<1>(vs...);
           } catch(...) {
             r.result_ptr_->emplace<2>(current_exception());
           }
           r.continuation_.resume();

           Otherwise, set_value(r, vs...) is ill-formed. 
- 
           The expression set_error(r, err) is equivalent to:

           r.result_ptr_->emplace<2>(AS-EXCEPT-PTR(err));
           r.continuation_.resume();

           where AS-EXCEPT-PTR(err) is: 
- 
             err if decay_t<Err> names the same type as exception_ptr; 
- 
             Otherwise, make_exception_ptr(system_error(err)) if decay_t<Err> names the same type as error_code; 
- 
             Otherwise, make_exception_ptr(err). 
- 
           The expression set_stopped(r) is equivalent to static_cast<coroutine_handle<>>(r.continuation_.promise().unhandled_stopped()).resume(). 
- 
           For any expression tag whose type satisfies forwarding-query and for any pack of subexpressions as, tag_invoke(tag, get_env(cr), as...) is expression-equivalent to tag(get_env(as_const(cr.continuation_.promise())), as...). 
- 
         sender-awaitable::sender-awaitable(S&& s, P& p);
- 
           Effects: initializes state_ with connect(std::forward<S>(s), awaitable-receiver{&result_, coroutine_handle<P>::from_promise(p)}). 
- 
         value_t sender-awaitable::await_resume();
- 
           Effects: equivalent to:

           if (result_.index() == 2)
             rethrow_exception(get<2>(result_));
           if constexpr (!is_void_v<value_t>)
             return std::forward<value_t>(get<1>(result_));
- 
       as_awaitable is a customization point object. For some subexpressions e and p where p is an lvalue, E names the type decltype((e)) and P names the type decltype((p)), the expression as_awaitable(e, p) is expression-equivalent to the following: 
- 
         tag_invoke(as_awaitable, e, p) if that expression is well-formed. 
- 
           Mandates: is-awaitable<A, P> is true, where A is the type of the tag_invoke expression above. 
- 
         Otherwise, e if is-awaitable<E, U> is true, where U is an unspecified class type that lacks a member named await_transform. The condition is not is-awaitable<E, P>, as that creates the potential for constraint recursion. 
- 
           Preconditions: is-awaitable<E, P> is true and the expression co_await e in a coroutine with promise type U is expression-equivalent to the same expression in a coroutine with promise type P. 
- 
         Otherwise, sender-awaitable{e, p} if awaitable-sender<E, P> is true. 
- 
         Otherwise, e. 
11.13.2. execution::with_awaitable_senders 
   - 
     with_awaitable_senders, when used as the base class of a coroutine promise type, makes senders awaitable in that coroutine type. In addition, it provides a default implementation of unhandled_stopped() such that if a sender completes by calling set_stopped, the coroutine is never resumed; instead, the unhandled_stopped of the coroutine caller’s promise type is called.

     template <class-type Promise>
     struct with_awaitable_senders {
       template <class OtherPromise>
         requires (!same_as<OtherPromise, void>)
       void set_continuation(coroutine_handle<OtherPromise> h) noexcept;

       coroutine_handle<> continuation() const noexcept { return continuation_; }

       coroutine_handle<> unhandled_stopped() noexcept {
         return stopped_handler_(continuation_.address());
       }

       template <class Value>
       see-below await_transform(Value&& value);

      private:
       // exposition only
       [[noreturn]] static coroutine_handle<> default_unhandled_stopped(void*) noexcept {
         terminate();
       }
       coroutine_handle<> continuation_ {};                     // exposition only
       coroutine_handle<> (*stopped_handler_)(void*) noexcept   // exposition only
         = &default_unhandled_stopped;
     };
- 
     void set_continuation(coroutine_handle<OtherPromise> h) noexcept;
- 
       Effects: equivalent to:

       continuation_ = h;
       if constexpr ( requires (OtherPromise& other) { other.unhandled_stopped(); } ) {
         stopped_handler_ = [](void* p) noexcept -> coroutine_handle<> {
           return coroutine_handle<OtherPromise>::from_address(p)
             .promise().unhandled_stopped();
         };
       } else {
         stopped_handler_ = default_unhandled_stopped;
       }
- 
     call-result-t<as_awaitable_t, Value, Promise&> await_transform(Value&& value);
- 
       Effects: equivalent to:

       return as_awaitable(std::forward<Value>(value), static_cast<Promise&>(*this));
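[Example: A non-normative sketch of a minimal coroutine type whose promise derives from with_awaitable_senders, making senders awaitable in its body. The type simple_task is illustrative only; it produces no result and omits the details a real task type would need:

struct simple_task {
  struct promise_type : with_awaitable_senders<promise_type> {
    simple_task get_return_object() { return {}; }
    std::suspend_never initial_suspend() noexcept { return {}; }
    std::suspend_never final_suspend() noexcept { return {}; }
    void return_void() {}
    void unhandled_exception() { std::terminate(); }
  };
};

simple_task example(scheduler auto sch) {
  // await_transform, inherited from with_awaitable_senders, wraps each sender
  // with as_awaitable, so senders can be awaited directly:
  co_await schedule(sch);    // resumes on an execution agent of sch
  int i = co_await just(42); // single sender: resumes with the value 42
  std::cout << i << '\n';
}

-- end example]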
 