diff --git a/.gitignore b/.gitignore index c86cd5ee..a829fe81 100644 --- a/.gitignore +++ b/.gitignore @@ -1,3 +1,4 @@ +/.vs /build-*/ /build/* !/build/Jamfile diff --git a/doc/modules/ROOT/nav.adoc b/doc/modules/ROOT/nav.adoc index 7cdf742c..15142409 100644 --- a/doc/modules/ROOT/nav.adoc +++ b/doc/modules/ROOT/nav.adoc @@ -4,9 +4,9 @@ * xref:why-not-cobalt-2.adoc[Why Not Cobalt Concepts?] * xref:why-not-tmc.adoc[Why Not TooManyCooks?] * xref:quick-start.adoc[Quick Start] -* xref:2.cpp20-coroutines/2.intro.adoc[Introduction To C++20 Coroutines] +* xref:2.cpp20-coroutines/2.intro.adoc[Introduction To {cpp}20 Coroutines] ** xref:2.cpp20-coroutines/2a.foundations.adoc[Part I: Foundations] -** xref:2.cpp20-coroutines/2b.syntax.adoc[Part II: C++20 Syntax] +** xref:2.cpp20-coroutines/2b.syntax.adoc[Part II: {cpp}20 Syntax] ** xref:2.cpp20-coroutines/2c.machinery.adoc[Part III: Coroutine Machinery] ** xref:2.cpp20-coroutines/2d.advanced.adoc[Part IV: Advanced Topics] * xref:3.concurrency/3.intro.adoc[Introduction to Concurrency] diff --git a/doc/modules/ROOT/pages/2.cpp20-coroutines/2.intro.adoc b/doc/modules/ROOT/pages/2.cpp20-coroutines/2.intro.adoc index f57b96a9..771ca65f 100644 --- a/doc/modules/ROOT/pages/2.cpp20-coroutines/2.intro.adoc +++ b/doc/modules/ROOT/pages/2.cpp20-coroutines/2.intro.adoc @@ -7,12 +7,12 @@ // Official repository: https://github.com/cppalliance/capy // -= Introduction To C++20 Coroutines += Introduction To {cpp}20 Coroutines -Every C++ function you have ever written follows the same contract: it runs from start to finish, then returns. The caller waits. The stack frame lives and dies in lockstep with that single invocation. This model has served us well for decades, but it forces a hard tradeoff when programs need to wait--for a network response, a disk read, a timer, or another thread. 
The function either blocks (wasting a thread) or you restructure your code into callbacks, state machines, or futures that scatter your logic across multiple places. +Every {cpp} function you have ever written follows the same contract: it runs from start to finish, then returns. The caller waits. The stack frame lives and dies in lockstep with that single invocation. This model has served us well for decades, but it forces a hard tradeoff when programs need to wait--for a network response, a disk read, a timer, or another thread. The function either blocks (wasting a thread) or you restructure your code into callbacks, state machines, or futures that scatter your logic across multiple places. -C++20 coroutines change the rules. A coroutine can _suspend_ its execution--saving its local state somewhere outside the stack--and _resume_ later, picking up exactly where it left off. The control flow reads top-to-bottom, like the synchronous code you already know, but the runtime behavior is asynchronous. No blocked threads. No callback chains. No lost context. +{cpp}20 coroutines change the rules. A coroutine can _suspend_ its execution--saving its local state somewhere outside the stack--and _resume_ later, picking up exactly where it left off. The control flow reads top-to-bottom, like the synchronous code you already know, but the runtime behavior is asynchronous. No blocked threads. No callback chains. No lost context. This is not a minor syntactic convenience. It is a fundamental shift in how you can structure programs that wait. -This section takes you from zero to a working understanding of C++20 coroutines. No prior experience with coroutines or async programming is needed. You will start with the problem that coroutines solve, move through the language syntax and compiler machinery, and finish with the performance characteristics that make coroutines practical for real systems. 
By the end, you will understand not only _how_ to write coroutines but _why_ they work the way they do--knowledge that will make everything in the rest of this documentation click into place. +This section takes you from zero to a working understanding of {cpp}20 coroutines. No prior experience with coroutines or async programming is needed. You will start with the problem that coroutines solve, move through the language syntax and compiler machinery, and finish with the performance characteristics that make coroutines practical for real systems. By the end, you will understand not only _how_ to write coroutines but _why_ they work the way they do--knowledge that will make everything in the rest of this documentation click into place. diff --git a/doc/modules/ROOT/pages/2.cpp20-coroutines/2a.foundations.adoc b/doc/modules/ROOT/pages/2.cpp20-coroutines/2a.foundations.adoc index 3893d217..cbbda983 100644 --- a/doc/modules/ROOT/pages/2.cpp20-coroutines/2a.foundations.adoc +++ b/doc/modules/ROOT/pages/2.cpp20-coroutines/2a.foundations.adoc @@ -1,16 +1,16 @@ = Part I: Foundations -This section introduces the fundamental concepts you need before working with C++20 coroutines. You will learn how normal functions work, what makes coroutines different, and why coroutines exist as a language feature. +This section introduces the fundamental concepts you need before working with {cpp}20 coroutines. You will learn how normal functions work, what makes coroutines different, and why coroutines exist as a language feature. 
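To make the run-to-completion contrast concrete before diving in, here is a small illustrative snippet (not from the tutorial's own examples; the function name is invented): a normal function's locals die when it returns, so it cannot remember progress between calls.

```cpp
#include <cassert>

// A normal function: its stack frame is created on entry and destroyed on
// return. The local `i` is re-initialized on every call, so the function
// cannot "pause" partway through a sequence; it always starts over.
int next_value()
{
    int i = 0;   // fresh frame, fresh local, every single call
    return i++;  // this increment is lost when the frame is torn down
}
```

Calling `next_value()` twice returns `0` both times. A coroutine, by contrast, can keep `i` alive across suspensions because its state lives outside the stack.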
== Prerequisites Before beginning this tutorial, you should have: -* A C++ compiler with C++20 support (GCC 10+, Clang 14+, or MSVC 2019 16.8+) -* Familiarity with basic C++ concepts: functions, classes, templates, and lambdas +* A {cpp} compiler with {cpp}20 support (GCC 10+, Clang 14+, or MSVC 2019 16.8+) +* Familiarity with basic {cpp} concepts: functions, classes, templates, and lambdas * Understanding of how function calls work: the call stack, local variables, and return values -The examples in this tutorial use standard C++20 features. Compile with: +The examples in this tutorial use standard {cpp}20 features. Compile with: * GCC: `g++ -std=c++20 -fcoroutines your_file.cpp` * Clang: `clang++ -std=c++20 your_file.cpp` @@ -42,7 +42,7 @@ This model has a fundamental constraint: *run-to-completion*. Once a function st == What Is a Coroutine? -A *coroutine* is a function that can suspend its execution and resume later from exactly where it left off. Think of it as a bookmark in a book of instructions—instead of reading the entire book in one sitting, you can mark your place, do something else, and return to continue reading. +A *coroutine* is a function that can suspend its execution and resume later from exactly where it left off. Think of it as a bookmark in a book of instructions—instead of reading the entire book in one sitting, you can mark your place, do something else, and return to continue reading. When a coroutine suspends: @@ -55,7 +55,7 @@ When a coroutine resumes: * Local variables are restored to their previous values * Execution continues from the suspension point -This capability is implemented through a *coroutine frame*—a heap-allocated block of memory that stores the coroutine's state. Unlike stack frames, coroutine frames persist across suspension points because they live on the heap rather than the stack. +This capability is implemented through a *coroutine frame*—a heap-allocated block of memory that stores the coroutine's state. 
Unlike stack frames, coroutine frames persist across suspension points because they live on the heap rather than the stack. [source,cpp] ---- @@ -138,8 +138,8 @@ This code reads like the original blocking version. Local variables like `reques Coroutines also enable: -* *Generators* — Functions that produce sequences of values on demand, computing each value only when requested -* *State machines* — Complex control flow expressed as linear code with suspension points -* *Cooperative multitasking* — Multiple logical tasks interleaved on a single thread +* *Generators* — Functions that produce sequences of values on demand, computing each value only when requested +* *State machines* — Complex control flow expressed as linear code with suspension points +* *Cooperative multitasking* — Multiple logical tasks interleaved on a single thread -You have now learned what coroutines are and why they exist. In the next section, you will learn the C++20 syntax for creating coroutines. +You have now learned what coroutines are and why they exist. In the next section, you will learn the {cpp}20 syntax for creating coroutines. diff --git a/doc/modules/ROOT/pages/2.cpp20-coroutines/2b.syntax.adoc b/doc/modules/ROOT/pages/2.cpp20-coroutines/2b.syntax.adoc index 6772088c..6c3c1dd7 100644 --- a/doc/modules/ROOT/pages/2.cpp20-coroutines/2b.syntax.adoc +++ b/doc/modules/ROOT/pages/2.cpp20-coroutines/2b.syntax.adoc @@ -1,6 +1,6 @@ -= Part II: C++20 Syntax += Part II: {cpp}20 Syntax -This section introduces the three C++20 keywords that create coroutines and walks you through building your first coroutine step by step. +This section introduces the three {cpp}20 keywords that create coroutines and walks you through building your first coroutine step by step. == Prerequisites @@ -26,7 +26,7 @@ task fetch_page(std::string url) === co_yield -The `co_yield` keyword produces a value and suspends the coroutine. 
This pattern creates *generators*—functions that produce sequences of values one at a time. After yielding a value, the coroutine pauses until someone asks for the next value. +The `co_yield` keyword produces a value and suspends the coroutine. This pattern creates *generators*—functions that produce sequences of values one at a time. After yielding a value, the coroutine pauses until someone asks for the next value. [source,cpp] ---- @@ -91,11 +91,11 @@ For now, observe that the presence of `co_return` transforms what looks like a r == Awaitables and Awaiters -When you write `co_await expr`, the expression `expr` must be an *awaitable*—something that knows how to suspend and resume a coroutine. The awaitable produces an *awaiter* object that implements three methods: +When you write `co_await expr`, the expression `expr` must be an *awaitable*—something that knows how to suspend and resume a coroutine. The awaitable produces an *awaiter* object that implements three methods: -* `await_ready()` — Returns `true` if the result is immediately available and no suspension is needed -* `await_suspend(handle)` — Called when the coroutine suspends; receives a handle to the coroutine for later resumption -* `await_resume()` — Called when the coroutine resumes; its return value becomes the value of the `co_await` expression +* `await_ready()` — Returns `true` if the result is immediately available and no suspension is needed +* `await_suspend(handle)` — Called when the coroutine suspends; receives a handle to the coroutine for later resumption +* `await_resume()` — Called when the coroutine resumes; its return value becomes the value of the `co_await` expression === Example: Understanding the Awaiter Protocol @@ -183,10 +183,10 @@ The variable `i` inside `counter` maintains its value across all these suspensio === Standard Awaiters -The C++ standard library provides two predefined awaiters: +The {cpp} standard library provides two predefined awaiters: -* 
`std::suspend_always` — `await_ready()` returns `false` (always suspend) -* `std::suspend_never` — `await_ready()` returns `true` (never suspend) +* `std::suspend_always` — `await_ready()` returns `false` (always suspend) +* `std::suspend_never` — `await_ready()` returns `true` (never suspend) These are useful building blocks for promise types and custom awaitables. @@ -199,4 +199,4 @@ co_await std::suspend_always{}; co_await std::suspend_never{}; ---- -You have now learned the three coroutine keywords and how awaitables work. In the next section, you will learn about the promise type and coroutine handle—the machinery that makes coroutines function. +You have now learned the three coroutine keywords and how awaitables work. In the next section, you will learn about the promise type and coroutine handle—the machinery that makes coroutines function. diff --git a/doc/modules/ROOT/pages/2.cpp20-coroutines/2c.machinery.adoc b/doc/modules/ROOT/pages/2.cpp20-coroutines/2c.machinery.adoc index 7d5b04ee..a1454dd4 100644 --- a/doc/modules/ROOT/pages/2.cpp20-coroutines/2c.machinery.adoc +++ b/doc/modules/ROOT/pages/2.cpp20-coroutines/2c.machinery.adoc @@ -1,16 +1,16 @@ = Part III: Coroutine Machinery -This section explains the promise type and coroutine handle—the core machinery that controls coroutine behavior. You will build a complete generator type by understanding how these pieces work together. +This section explains the promise type and coroutine handle—the core machinery that controls coroutine behavior. You will build a complete generator type by understanding how these pieces work together. == Prerequisites -* Completed xref:2b.syntax.adoc[Part II: C++20 Syntax] +* Completed xref:2b.syntax.adoc[Part II: {cpp}20 Syntax] * Understanding of the three coroutine keywords * Familiarity with awaitables and awaiters == The Promise Type -Every coroutine has an associated *promise type*. 
This type acts as a controller for the coroutine, defining how it behaves at key points in its lifecycle. The promise type is not something you pass to the coroutine—it is a nested type inside the coroutine's return type that the compiler uses automatically. +Every coroutine has an associated *promise type*. This type acts as a controller for the coroutine, defining how it behaves at key points in its lifecycle. The promise type is not something you pass to the coroutine—it is a nested type inside the coroutine's return type that the compiler uses automatically. The compiler expects to find a type named `promise_type` nested inside your coroutine's return type. If your coroutine returns `Generator`, the compiler looks for `Generator::promise_type`. @@ -60,7 +60,7 @@ The compiler transforms your coroutine body into something resembling this pseud Important observations: * The return object is created before `initial_suspend()` runs, so it is available even if the coroutine suspends immediately -* `final_suspend()` determines whether the coroutine frame persists after completion—if it returns `suspend_always`, you must manually destroy the coroutine; if it returns `suspend_never`, the frame is destroyed automatically +* `final_suspend()` determines whether the coroutine frame persists after completion—if it returns `suspend_always`, you must manually destroy the coroutine; if it returns `suspend_never`, the frame is destroyed automatically === Tracing Promise Behavior @@ -153,10 +153,10 @@ A `std::coroutine_handle<>` is a lightweight object that refers to a suspended c === Basic Operations -* `handle()` or `handle.resume()` — Resume the coroutine -* `handle.done()` — Returns `true` if the coroutine has completed -* `handle.destroy()` — Destroy the coroutine frame (frees memory) -* `handle.promise()` — Returns a reference to the promise object (typed handles only) +* `handle()` or `handle.resume()` — Resume the coroutine +* `handle.done()` — Returns `true` if the 
coroutine has completed +* `handle.destroy()` — Destroy the coroutine frame (frees memory) +* `handle.promise()` — Returns a reference to the promise object (typed handles only) === Typed vs Untyped Handles diff --git a/doc/modules/ROOT/pages/2.cpp20-coroutines/2d.advanced.adoc b/doc/modules/ROOT/pages/2.cpp20-coroutines/2d.advanced.adoc index 64d1fbdf..38f425b7 100644 --- a/doc/modules/ROOT/pages/2.cpp20-coroutines/2d.advanced.adoc +++ b/doc/modules/ROOT/pages/2.cpp20-coroutines/2d.advanced.adoc @@ -9,7 +9,7 @@ This section covers advanced coroutine topics: symmetric transfer for efficient == Symmetric Transfer -When a coroutine completes or awaits another coroutine, control must transfer somewhere. The naive approach—simply calling `handle.resume()`—has a problem: each nested coroutine adds a frame to the call stack. With deep nesting, you risk stack overflow. +When a coroutine completes or awaits another coroutine, control must transfer somewhere. The naive approach—simply calling `handle.resume()`—has a problem: each nested coroutine adds a frame to the call stack. With deep nesting, you risk stack overflow. *Symmetric transfer* solves this by returning a coroutine handle from `await_suspend`. Instead of resuming the target coroutine via a function call, the compiler generates a tail call that transfers control without growing the stack. @@ -102,7 +102,7 @@ auto final_suspend() noexcept == Coroutine Allocation -Every coroutine needs memory for its *coroutine frame*—the heap-allocated structure holding local variables, parameters, and suspension state. +Every coroutine needs memory for its *coroutine frame*—the heap-allocated structure holding local variables, parameters, and suspension state. 
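The frame idea can be approximated by hand. The following sketch is an analogy only, not what any compiler literally emits, and `CounterFrame`/`make_counter` are invented names: "locals" and a resume point are stored in a heap object that outlives every individual call, just as a coroutine frame outlives any single resumption.

```cpp
#include <cassert>
#include <memory>

// Hand-rolled analogue of a coroutine frame for a counting "coroutine":
// the local `i` lives in a heap object, so it survives between resumptions.
struct CounterFrame
{
    int i = 0;   // a "local variable" preserved across suspensions
    int limit;   // a "parameter" copied into the frame
    explicit CounterFrame(int n) : limit(n) {}
    bool done() const { return i >= limit; }
    int resume() { return i++; }  // each call continues where the last stopped
};

std::unique_ptr<CounterFrame> make_counter(int n)
{
    // The frame is heap-allocated, like a real coroutine frame by default.
    return std::make_unique<CounterFrame>(n);
}
```

Real coroutine frames are laid out by the compiler and managed through `std::coroutine_handle`, but the ownership picture is the same: the state persists until the frame is destroyed.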
=== Default Allocation @@ -454,13 +454,13 @@ This generator: == Conclusion -You have now learned the complete mechanics of C++20 coroutines: +You have now learned the complete mechanics of {cpp}20 coroutines: -* *Keywords* — `co_await`, `co_yield`, and `co_return` transform functions into coroutines -* *Promise types* — Control coroutine behavior at initialization, suspension, completion, and error handling -* *Coroutine handles* — Lightweight references for resuming, querying, and destroying coroutines -* *Symmetric transfer* — Efficient control flow without stack accumulation -* *Allocation* — Custom allocation and HALO optimization -* *Exception handling* — Capturing and propagating exceptions across suspension points +* *Keywords* — `co_await`, `co_yield`, and `co_return` transform functions into coroutines +* *Promise types* — Control coroutine behavior at initialization, suspension, completion, and error handling +* *Coroutine handles* — Lightweight references for resuming, querying, and destroying coroutines +* *Symmetric transfer* — Efficient control flow without stack accumulation +* *Allocation* — Custom allocation and HALO optimization +* *Exception handling* — Capturing and propagating exceptions across suspension points These fundamentals prepare you for understanding Capy's `task` type and the IoAwaitable protocol, which build on standard coroutine machinery with executor affinity and stop token propagation. diff --git a/doc/modules/ROOT/pages/3.concurrency/3a.foundations.adoc b/doc/modules/ROOT/pages/3.concurrency/3a.foundations.adoc index e9d2bc4f..4252789f 100644 --- a/doc/modules/ROOT/pages/3.concurrency/3a.foundations.adoc +++ b/doc/modules/ROOT/pages/3.concurrency/3a.foundations.adoc @@ -6,8 +6,8 @@ This section introduces the fundamental concepts of concurrent programming. 
You

Before beginning this tutorial, you should have:

-* A C++ compiler with C++11 or later support
-* Familiarity with basic C++ concepts: functions, classes, and lambdas
+* A {cpp} compiler with {cpp}11 or later support
+* Familiarity with basic {cpp} concepts: functions, classes, and lambdas
 * Understanding of how programs execute sequentially

== Why Concurrency Matters

@@ -32,7 +32,7 @@ This sharing is both the power and the peril of threads.

== Creating Threads

-The `<thread>` header provides `std::thread`, the standard way to create threads in C++.
+The `<thread>` header provides `std::thread`, the standard way to create threads in {cpp}.

[source,cpp]
----
diff --git a/doc/modules/ROOT/pages/3.concurrency/3b.synchronization.adoc b/doc/modules/ROOT/pages/3.concurrency/3b.synchronization.adoc
index 5dd953f6..2a13227e 100644
--- a/doc/modules/ROOT/pages/3.concurrency/3b.synchronization.adoc
+++ b/doc/modules/ROOT/pages/3.concurrency/3b.synchronization.adoc
@@ -9,7 +9,7 @@ This section introduces the dangers of shared data access and the synchronizatio

== The Danger: Race Conditions

-When multiple threads read the same data, all is well. But when at least one thread writes while others read or write, you have a *data race*. The result is undefined behavior—crashes, corruption, or silent errors.
+When multiple threads read the same data, all is well. But when at least one thread writes while others read or write, you have a *data race*. The result is undefined behavior—crashes, corruption, or silent errors.

Consider this code:

@@ -39,9 +39,9 @@ int main()
}
----

-Two threads, each incrementing 100,000 times. You would expect 200,000. But run this repeatedly and you will see different results—180,000, 195,327, maybe occasionally 200,000. Something is wrong.
+Two threads, each incrementing 100,000 times. You would expect 200,000. But run this repeatedly and you will see different results—180,000, 195,327, maybe occasionally 200,000. Something is wrong.
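The unlucky interleaving behind that surprising result can be replayed deterministically in plain single-threaded code. This sketch (illustrative only; the function name is invented) performs each "thread's" read, add, and write as separate steps, in the order that loses an update:

```cpp
#include <cassert>

// Deterministic replay of a "lost update": both logical threads read the
// same starting value before either writes its result back.
int lost_update_replay()
{
    int counter = 5;
    int a = counter;  // thread A reads 5
    int b = counter;  // thread B reads 5, before A writes back
    a = a + 1;        // A computes 6
    b = b + 1;        // B computes 6
    counter = a;      // A writes 6
    counter = b;      // B overwrites with 6: A's increment vanishes
    return counter;   // two increments, yet the counter rose by only one
}
```

The result is 6, not 7: two increments happened, one survived.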
-The `++counter` operation looks atomic—indivisible—but it is not. It actually consists of three steps: +The `++counter` operation looks atomic—indivisible—but it is not. It actually consists of three steps: 1. Read the current value 2. Add one @@ -49,7 +49,7 @@ The `++counter` operation looks atomic—indivisible—but it is not. It actuall Between any of these steps, the other thread might execute its own steps. Imagine both threads read `counter` when it is 5. Both add one, getting 6. Both write 6 back. Two increments, but the counter only went up by one. This is a *lost update*, a classic race condition. -The more threads, the more opportunity for races. The faster your processor, the more instructions execute between context switches, potentially hiding the bug—until one critical day in production. +The more threads, the more opportunity for races. The faster your processor, the more instructions execute between context switches, potentially hiding the bug—until one critical day in production. == Mutual Exclusion: Mutexes @@ -91,11 +91,11 @@ int main() Now the output is always 200,000. The mutex ensures that between `lock()` and `unlock()`, only one thread executes. The increment is now effectively atomic. -But there is a problem with calling `lock()` and `unlock()` directly. If code between them throws an exception, `unlock()` never executes. The mutex stays locked forever, and any thread waiting for it blocks eternally—a *deadlock*. +But there is a problem with calling `lock()` and `unlock()` directly. If code between them throws an exception, `unlock()` never executes. The mutex stays locked forever, and any thread waiting for it blocks eternally—a *deadlock*. == Lock Guards: Safety Through RAII -C++ has a powerful idiom: *RAII* (Resource Acquisition Is Initialization). The idea: acquire resources in a constructor, release them in the destructor. Since destructors run even when exceptions are thrown, cleanup is guaranteed. 
+{cpp} has a powerful idiom: *RAII* (Resource Acquisition Is Initialization). The idea: acquire resources in a constructor, release them in the destructor. Since destructors run even when exceptions are thrown, cleanup is guaranteed. Lock guards apply RAII to mutexes: @@ -121,9 +121,9 @@ void increment_many_times() The `std::lock_guard` locks the mutex on construction and unlocks it on destruction. Even if an exception is thrown, the destructor runs and the mutex is released. This is the correct way to use mutexes. -=== std::scoped_lock (C++17) +=== std::scoped_lock ({cpp}17) -Since C++17, `std::scoped_lock` is preferred. It works like `lock_guard` but can lock multiple mutexes simultaneously, avoiding a class of deadlock: +Since {cpp}17, `std::scoped_lock` is preferred. It works like `lock_guard` but can lock multiple mutexes simultaneously, avoiding a class of deadlock: [source,cpp] ---- @@ -194,9 +194,9 @@ void safe_function() === Deadlock Prevention Rules -1. *Lock in consistent order* — Define a global ordering for mutexes and always lock in that order -2. *Use std::scoped_lock for multiple mutexes* — Let the library handle deadlock avoidance -3. *Hold locks for minimal time* — Reduce the window for contention -4. *Avoid nested locks when possible* — Simpler designs prevent deadlock by construction +1. *Lock in consistent order* — Define a global ordering for mutexes and always lock in that order +2. *Use std::scoped_lock for multiple mutexes* — Let the library handle deadlock avoidance +3. *Hold locks for minimal time* — Reduce the window for contention +4. *Avoid nested locks when possible* — Simpler designs prevent deadlock by construction You have now learned about race conditions, mutexes, lock guards, and deadlocks. In the next section, you will explore advanced synchronization primitives: atomics, condition variables, and shared locks. 
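The RAII guarantee can be checked directly. In this sketch (the function name is invented for illustration), an exception escapes a `std::lock_guard` scope, and a subsequent `try_lock()` proves the guard's destructor released the mutex during unwinding:

```cpp
#include <cassert>
#include <mutex>
#include <stdexcept>

// Returns true if the mutex is free again after an exception unwound
// through a lock_guard scope, i.e. the destructor performed the unlock.
bool guard_unlocks_on_exception()
{
    std::mutex m;
    try
    {
        std::lock_guard<std::mutex> guard(m);  // locks m
        throw std::runtime_error("boom");      // unwinding runs ~lock_guard
    }
    catch (const std::runtime_error&)
    {
        // the guard is gone; m should be unlocked again
    }
    bool released = m.try_lock();  // succeeds only if the guard unlocked m
    if (released)
        m.unlock();
    return released;
}
```

With manual `lock()`/`unlock()` the same exception would have left the mutex locked forever.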
diff --git a/doc/modules/ROOT/pages/3.concurrency/3c.advanced.adoc b/doc/modules/ROOT/pages/3.concurrency/3c.advanced.adoc
index af2ed07d..3156a2ca 100644
--- a/doc/modules/ROOT/pages/3.concurrency/3c.advanced.adoc
+++ b/doc/modules/ROOT/pages/3.concurrency/3c.advanced.adoc
@@ -44,14 +44,14 @@ No mutex, no lock guard, yet the result is always 200,000. The `std::atomic

=== When to Use Atomics

-Atomics work best for single-variable operations: counters, flags, simple state. They are faster than mutexes when contention is low. But they cannot protect complex operations involving multiple variables—for that, you need mutexes.
+Atomics work best for single-variable operations: counters, flags, simple state. They are faster than mutexes when contention is low. But they cannot protect complex operations involving multiple variables—for that, you need mutexes.

Common atomic types include:

-* `std::atomic<bool>` — Thread-safe boolean flag
-* `std::atomic<int>` — Thread-safe integer counter
-* `std::atomic<T*>` — Thread-safe pointer
-* `std::atomic<std::shared_ptr<T>>` — Thread-safe shared pointer (C++20)
+* `std::atomic<bool>` — Thread-safe boolean flag
+* `std::atomic<int>` — Thread-safe integer counter
+* `std::atomic<T*>` — Thread-safe pointer
+* `std::atomic<std::shared_ptr<T>>` — Thread-safe shared pointer ({cpp}20)

Any trivially copyable type can be made atomic.

@@ -134,12 +134,12 @@ The worker thread calls `cv.wait()`, which atomically releases the mutex and sus

=== The Predicate

-The lambda `[]{ return ready; }` is the *predicate*. `wait()` will not return until this evaluates to true. This guards against *spurious wakeups*—rare events where a thread wakes without notification. Always use a predicate.
+The lambda `[]{ return ready; }` is the *predicate*. `wait()` will not return until this evaluates to true. This guards against *spurious wakeups*—rare events where a thread wakes without notification. Always use a predicate.
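A minimal handshake shows the predicate in action (a sketch with invented names, not the tutorial's own listing). The worker's `wait` atomically releases the mutex and returns only once the predicate holds, even if the wakeup is spurious or the notification arrives before the wait begins:

```cpp
#include <cassert>
#include <condition_variable>
#include <mutex>
#include <thread>

// One worker waits, predicate-guarded, for `ready`; the launching thread
// flips the flag under the mutex and notifies. Returns the worker's result.
int predicate_handshake()
{
    std::mutex m;
    std::condition_variable cv;
    bool ready = false;
    int result = 0;

    std::thread worker([&] {
        std::unique_lock<std::mutex> lock(m);
        cv.wait(lock, [&] { return ready; });  // guards against spurious wakeups
        result = 42;
    });

    {
        std::lock_guard<std::mutex> lock(m);
        ready = true;  // modify shared state under the same mutex
    }
    cv.notify_one();
    worker.join();
    return result;
}
```

Note that `ready` is written under the mutex: the predicate reads it under the same mutex, so there is no data race on the flag.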
=== Notification Methods -* `notify_one()` — Wake a single waiting thread -* `notify_all()` — Wake all waiting threads +* `notify_one()` — Wake a single waiting thread +* `notify_all()` — Wake all waiting threads Use `notify_one()` when only one thread needs to proceed (e.g., producer-consumer with single consumer). Use `notify_all()` when multiple threads might need to check the condition (e.g., broadcast events, shutdown signals). @@ -160,7 +160,7 @@ auto status = cv.wait_until(lock, deadline, predicate); == Shared Locks: Readers and Writers -Consider a data structure that is read frequently but written rarely. A regular mutex serializes all access—but why block readers from each other? Multiple threads can safely read simultaneously; only writes require exclusive access. +Consider a data structure that is read frequently but written rarely. A regular mutex serializes all access—but why block readers from each other? Multiple threads can safely read simultaneously; only writes require exclusive access. *Shared mutexes* support this pattern: @@ -191,10 +191,10 @@ void writer(int value) === Lock Types `std::shared_lock`:: -Acquires a *shared lock*—multiple threads can hold shared locks simultaneously. +Acquires a *shared lock*—multiple threads can hold shared locks simultaneously. `std::unique_lock` (on shared_mutex):: -Acquires an *exclusive lock*—no other locks (shared or exclusive) can be held. +Acquires an *exclusive lock*—no other locks (shared or exclusive) can be held. === Behavior diff --git a/doc/modules/ROOT/pages/3.concurrency/3d.patterns.adoc b/doc/modules/ROOT/pages/3.concurrency/3d.patterns.adoc index 36bf279d..84429b2e 100644 --- a/doc/modules/ROOT/pages/3.concurrency/3d.patterns.adoc +++ b/doc/modules/ROOT/pages/3.concurrency/3d.patterns.adoc @@ -9,7 +9,7 @@ This section covers communication mechanisms for getting results from threads an == Futures and Promises: Getting Results Back -Threads can perform work, but how do you get results from them? 
Passing references works but is clunky. C++ offers a cleaner abstraction: *futures* and *promises*. +Threads can perform work, but how do you get results from them? Passing references works but is clunky. {cpp} offers a cleaner abstraction: *futures* and *promises*. A `std::promise` is a write-once container: a thread can set its value. A `std::future` is the corresponding read-once container: another thread can get that value. They form a one-way communication channel. @@ -52,7 +52,7 @@ The worker thread calls `set_value()`. The main thread calls `get()`, which bloc == std::async: The Easy Path -Creating threads manually, managing promises, joining at the end—it is mechanical. `std::async` automates it: +Creating threads manually, managing promises, joining at the end—it is mechanical. `std::async` automates it: [source,cpp] ---- @@ -98,7 +98,7 @@ For quick parallel tasks, `std::async` is often the cleanest choice. == Thread-Local Storage -Sometimes each thread needs its own copy of a variable—not shared, not copied each call, but persistent within that thread. +Sometimes each thread needs its own copy of a variable—not shared, not copied each call, but persistent within that thread. Declare it `thread_local`: @@ -268,25 +268,25 @@ The work is divided into chunks, each handled by its own thread. 
For CPU-bound w You have learned the fundamentals of concurrent programming: -* *Threads* — Independent flows of execution within a process -* *Mutexes* — Mutual exclusion to prevent data races -* *Lock guards* — RAII wrappers that ensure mutexes are properly released -* *Atomics* — Lock-free safety for single operations -* *Condition variables* — Efficient waiting for events -* *Shared locks* — Multiple readers or one writer -* *Futures and promises* — Communication of results between threads -* *std::async* — Simplified launching of parallel work +* *Threads* — Independent flows of execution within a process +* *Mutexes* — Mutual exclusion to prevent data races +* *Lock guards* — RAII wrappers that ensure mutexes are properly released +* *Atomics* — Lock-free safety for single operations +* *Condition variables* — Efficient waiting for events +* *Shared locks* — Multiple readers or one writer +* *Futures and promises* — Communication of results between threads +* *std::async* — Simplified launching of parallel work -You have seen the dangers—race conditions, deadlocks—and the tools to avoid them. +You have seen the dangers—race conditions, deadlocks—and the tools to avoid them. 
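As a compact illustration of the future-based flow summarized above (a sketch; the function name is invented): `std::async` launches the work and the returned `std::future` delivers the result, with `get()` doing the blocking and the exception propagation for you.

```cpp
#include <cassert>
#include <future>

// Launch a computation asynchronously and collect its result.
int async_square(int x)
{
    std::future<int> f = std::async(std::launch::async, [x] { return x * x; });
    return f.get();  // blocks until the async task finishes
}
```

No manual thread creation, no promise plumbing, no join: the future owns all of it.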
=== Best Practices * *Start with std::async* when possible -* *Prefer immutable data* — shared data that never changes needs no synchronization -* *Protect mutable shared state carefully* — minimize the data that is shared -* *Minimize lock duration* — hold locks for as brief a time as possible -* *Avoid nested locks* — when unavoidable, use `std::scoped_lock` -* *Test thoroughly* — test with many threads, on different machines, under load +* *Prefer immutable data* — shared data that never changes needs no synchronization +* *Protect mutable shared state carefully* — minimize the data that is shared +* *Minimize lock duration* — hold locks for as brief a time as possible +* *Avoid nested locks* — when unavoidable, use `std::scoped_lock` +* *Test thoroughly* — test with many threads, on different machines, under load Concurrency is challenging. Bugs hide until the worst moment. Testing is hard because timing varies. But the rewards are substantial: responsive applications, full hardware utilization, and elegant solutions to naturally parallel problems. diff --git a/doc/modules/ROOT/pages/4.coroutines/4.intro.adoc b/doc/modules/ROOT/pages/4.coroutines/4.intro.adoc index c265e5f4..08c39b75 100644 --- a/doc/modules/ROOT/pages/4.coroutines/4.intro.adoc +++ b/doc/modules/ROOT/pages/4.coroutines/4.intro.adoc @@ -9,7 +9,7 @@ = Coroutines in Capy -You know how C++20 coroutines work at the language level. You understand threads, synchronization, and the problems that concurrency introduces. Now it is time to see how Capy brings these together into a practical, high-performance library. +You know how {cpp}20 coroutines work at the language level. You understand threads, synchronization, and the problems that concurrency introduces. Now it is time to see how Capy brings these together into a practical, high-performance library. Capy's coroutine model is built around a single principle: asynchronous code should look like synchronous code. 
You write a function that reads from a socket, processes the data, and writes a response--top to bottom, with local variables and normal control flow. Capy handles suspension, resumption, thread scheduling, and cancellation behind the scenes. The result is code that is both easier to read and harder to get wrong. diff --git a/doc/modules/ROOT/pages/4.coroutines/4a.tasks.adoc b/doc/modules/ROOT/pages/4.coroutines/4a.tasks.adoc index 4f5b2c5b..53acd996 100644 --- a/doc/modules/ROOT/pages/4.coroutines/4a.tasks.adoc +++ b/doc/modules/ROOT/pages/4.coroutines/4a.tasks.adoc @@ -1,10 +1,10 @@ = The task Type -This section introduces Capy's `task` type—the fundamental coroutine type for asynchronous programming in Capy. +This section introduces Capy's `task` type—the fundamental coroutine type for asynchronous programming in Capy. == Prerequisites -* Completed xref:../2.cpp20-coroutines/2d.advanced.adoc[C++20 Coroutines Tutorial] +* Completed xref:../2.cpp20-coroutines/2d.advanced.adoc[{cpp}20 Coroutines Tutorial] * Understanding of promise types, coroutine handles, and symmetric transfer == Overview @@ -13,11 +13,11 @@ This section introduces Capy's `task` type—the fundamental coroutine type f Key characteristics: -* *Lazy execution* — The coroutine does not start until awaited -* *Symmetric transfer* — Efficient resumption without stack accumulation -* *Executor inheritance* — Inherits the caller's executor unless explicitly bound -* *Stop token propagation* — Forward-propagates cancellation signals -* *HALO support* — Enables heap allocation elision when possible +* *Lazy execution* — The coroutine does not start until awaited +* *Symmetric transfer* — Efficient resumption without stack accumulation +* *Executor inheritance* — Inherits the caller's executor unless explicitly bound +* *Stop token propagation* — Forward-propagates cancellation signals +* *HALO support* — Enables heap allocation elision when possible == Declaring task Coroutines @@ -128,7 +128,7 @@ 
Computing... Result: 42 ---- -Lazy execution enables efficient composition—tasks that are never awaited never run, consuming no resources beyond their initial allocation. +Lazy execution enables efficient composition—tasks that are never awaited never run, consuming no resources beyond their initial allocation. == Symmetric Transfer diff --git a/doc/modules/ROOT/pages/4.coroutines/4b.launching.adoc b/doc/modules/ROOT/pages/4.coroutines/4b.launching.adoc index 3cf088df..7da933f4 100644 --- a/doc/modules/ROOT/pages/4.coroutines/4b.launching.adoc +++ b/doc/modules/ROOT/pages/4.coroutines/4b.launching.adoc @@ -9,10 +9,10 @@ This section explains how to launch coroutines for execution. You will learn abo == The Execution Model -Capy tasks are lazy—they do not execute until something drives them. Two mechanisms exist: +Capy tasks are lazy—they do not execute until something drives them. Two mechanisms exist: -* *Awaiting* — One coroutine awaits another (`co_await task`) -* *Launching* — Non-coroutine code initiates execution (`run_async`) +* *Awaiting* — One coroutine awaits another (`co_await task`) +* *Launching* — Non-coroutine code initiates execution (`run_async`) When a task is awaited, the awaiting coroutine provides context: an executor for dispatching completion and a stop token for cancellation. But what about the first task in a chain? That task needs explicit launching. @@ -43,9 +43,9 @@ int main() === Two-Call Syntax -Notice the unusual syntax: `run_async(executor)(task)`. This is intentional and relates to C++17 evaluation order. +Notice the unusual syntax: `run_async(executor)(task)`. This is intentional and relates to {cpp}17 evaluation order. -C++17 guarantees that in the expression `f(a)(b)`: +{cpp}17 guarantees that in the expression `f(a)(b)`: 1. `f(a)` is evaluated first, producing a callable 2. 
`b` is evaluated second diff --git a/doc/modules/ROOT/pages/4.coroutines/4d.io-awaitable.adoc b/doc/modules/ROOT/pages/4.coroutines/4d.io-awaitable.adoc index f9ba64ca..bdca135b 100644 --- a/doc/modules/ROOT/pages/4.coroutines/4d.io-awaitable.adoc +++ b/doc/modules/ROOT/pages/4.coroutines/4d.io-awaitable.adoc @@ -1,6 +1,6 @@ = The IoAwaitable Protocol -This section explains the IoAwaitable protocol—Capy's mechanism for propagating execution context through coroutine chains. +This section explains the IoAwaitable protocol—Capy's mechanism for propagating execution context through coroutine chains. == Prerequisites @@ -9,7 +9,7 @@ This section explains the IoAwaitable protocol—Capy's mechanism for propagatin == The Problem: Context Propagation -Standard C++20 coroutines define awaiters with this `await_suspend` signature: +Standard {cpp}20 coroutines define awaiters with this `await_suspend` signature: [source,cpp] ---- @@ -22,9 +22,9 @@ std::coroutine_handle<> await_suspend(std::coroutine_handle<> h); The awaiter receives only a handle to the suspended coroutine. But real I/O code needs more: -* *Executor* — Where should completions be dispatched? -* *Stop token* — Should this operation support cancellation? -* *Allocator* — Where should memory be allocated? +* *Executor* — Where should completions be dispatched? +* *Stop token* — Should this operation support cancellation? +* *Allocator* — Where should memory be allocated? How does an awaitable get this information? 
@@ -47,11 +47,11 @@ std::coroutine_handle<> await_suspend(std::coroutine_handle<> h, io_env const* e This signature receives: -* `h` — The coroutine handle (as in standard awaiters) -* `env` — The execution environment containing: -** `env->executor` — The caller's executor for dispatching completions -** `env->stop_token` — A stop token for cooperative cancellation -** `env->allocator` — An optional allocator for frame allocation +* `h` — The coroutine handle (as in standard awaiters) +* `env` — The execution environment containing: +** `env->executor` — The caller's executor for dispatching completions +** `env->stop_token` — A stop token for cooperative cancellation +** `env->allocator` — An optional allocator for frame allocation The return type enables symmetric transfer. @@ -75,10 +75,10 @@ The key difference from standard awaitables is the two-argument `await_suspend` For tasks that can be launched from non-coroutine contexts, the `IoRunnable` concept refines `IoAwaitable` with: -* `handle()` — Access the typed coroutine handle -* `release()` — Transfer ownership of the frame -* `exception()` — Check for captured exceptions -* `result()` — Access the result value (non-void tasks) +* `handle()` — Access the typed coroutine handle +* `release()` — Transfer ownership of the frame +* `exception()` — Check for captured exceptions +* `result()` — Access the result value (non-void tasks) These methods exist because launch functions like `run_async` cannot `co_await` the task directly. The trampoline must be allocated before the task type is known, so it type-erases the task through function pointers and needs a common API to manage lifetime and extract results. @@ -110,9 +110,9 @@ The child receives the parent's executor and stop token automatically. 
Forward propagation offers several advantages: -* *Decoupling* — Awaitables don't need to know caller's promise type -* *Composability* — Any IoAwaitable works with any IoRunnable task -* *Explicit flow* — Context flows downward through the call chain, not queried upward +* *Decoupling* — Awaitables don't need to know caller's promise type +* *Composability* — Any IoAwaitable works with any IoRunnable task +* *Explicit flow* — Context flows downward through the call chain, not queried upward This design enables Capy's type-erased wrappers (`any_stream`, etc.) to work without knowing the concrete executor type. diff --git a/doc/modules/ROOT/pages/4.coroutines/4e.cancellation.adoc b/doc/modules/ROOT/pages/4.coroutines/4e.cancellation.adoc index adc7c7fa..4da7ff5f 100644 --- a/doc/modules/ROOT/pages/4.coroutines/4e.cancellation.adoc +++ b/doc/modules/ROOT/pages/4.coroutines/4e.cancellation.adoc @@ -1,6 +1,6 @@ = Stop Tokens and Cancellation -This section teaches cooperative cancellation from the ground up, explaining C++20 stop tokens as a general-purpose notification mechanism and how Capy uses them for coroutine cancellation. +This section teaches cooperative cancellation from the ground up, explaining {cpp}20 stop tokens as a general-purpose notification mechanism and how Capy uses them for coroutine cancellation. 
== Prerequisites @@ -35,22 +35,22 @@ void worker() This approach has problems: -* *No standardization* — Every component invents its own cancellation flag -* *Race conditions* — Checking the flag and acting on it is not atomic -* *No cleanup notification* — The worker just stops; no opportunity for graceful cleanup -* *Polling overhead* — Must check the flag repeatedly +* *No standardization* — Every component invents its own cancellation flag +* *Race conditions* — Checking the flag and acting on it is not atomic +* *No cleanup notification* — The worker just stops; no opportunity for graceful cleanup +* *Polling overhead* — Must check the flag repeatedly === The Thread Interruption Problem -Some systems support forceful thread interruption. This is dangerous because it can leave resources in inconsistent states—files half-written, locks held, transactions uncommitted. +Some systems support forceful thread interruption. This is dangerous because it can leave resources in inconsistent states—files half-written, locks held, transactions uncommitted. === The Goal: Cooperative Cancellation The solution is *cooperative cancellation*: ask nicely, let the work clean up. The cancellation requestor signals intent; the worker decides when and how to respond. -== Part 2: C++20 Stop Tokens—A General-Purpose Signaling Mechanism +== Part 2: {cpp}20 Stop Tokens—A General-Purpose Signaling Mechanism -C++20 introduces `std::stop_token`, `std::stop_source`, and `std::stop_callback`. While named for "stopping," these implement a general-purpose *Observer pattern*—a thread-safe one-to-many notification system. +{cpp}20 introduces `std::stop_token`, `std::stop_source`, and `std::stop_callback`. While named for "stopping," these implement a general-purpose *Observer pattern*—a thread-safe one-to-many notification system. === The Three Components @@ -133,7 +133,7 @@ Registration and invocation are thread-safe. 
You can register callbacks, request *Critical limitation*: `stop_token` is a *one-shot* mechanism. * Can only transition from "not signaled" to "signaled" once -* No reset mechanism—once `stop_requested()` returns true, it stays true forever +* No reset mechanism—once `stop_requested()` returns true, it stays true forever * `request_stop()` returns `true` only on the first successful call * *You cannot "un-cancel" a stop_source* ==== @@ -177,9 +177,9 @@ If you need repeatable signals, `stop_token` is the wrong tool. Consider: The "stop" naming obscures the mechanism's generality. `stop_token` implements *one-shot broadcast notification*, useful for: -* *Starting things* — Signal "ready" to trigger initialization -* *Configuration loaded* — Notify components when config is available -* *Resource availability* — Signal when database connected or cache warmed +* *Starting things* — Signal "ready" to trigger initialization +* *Configuration loaded* — Notify components when config is available +* *Resource availability* — Signal when database connected or cache warmed * *Any one-shot broadcast scenario* == Part 5: Stop Tokens in Coroutines @@ -205,7 +205,7 @@ task<> child() } ---- -No manual threading—the protocol handles it. +No manual threading—the protocol handles it. === Accessing the Stop Token @@ -287,8 +287,8 @@ task fetch_with_cancel() Capy's I/O operations (provided by Corosio) respect stop tokens at the OS level: -* *IOCP* (Windows) — Pending operations can be cancelled via `CancelIoEx` -* *io_uring* (Linux) — Operations can be cancelled via `IORING_OP_ASYNC_CANCEL` +* *IOCP* (Windows) — Pending operations can be cancelled via `CancelIoEx` +* *io_uring* (Linux) — Operations can be cancelled via `IORING_OP_ASYNC_CANCEL` When you request stop, pending I/O operations are cancelled at the OS level, providing immediate response rather than waiting for the operation to complete naturally. 
@@ -375,7 +375,7 @@ public: == Reference -The stop token mechanism is part of the C++ standard library: +The stop token mechanism is part of the {cpp} standard library: [source,cpp] ---- @@ -384,8 +384,8 @@ The stop token mechanism is part of the C++ standard library: Key types: -* `std::stop_source` — Creates and manages stop state -* `std::stop_token` — Observes stop state -* `std::stop_callback` — Registers callbacks for stop notification +* `std::stop_source` — Creates and manages stop state +* `std::stop_token` — Observes stop state +* `std::stop_callback` — Registers callbacks for stop notification You have now learned how stop tokens provide cooperative cancellation for coroutines. In the next section, you will learn about concurrent composition with `when_all` and `when_any`. diff --git a/doc/modules/ROOT/pages/4.coroutines/4g.allocators.adoc b/doc/modules/ROOT/pages/4.coroutines/4g.allocators.adoc index e9632d7c..64394fa2 100644 --- a/doc/modules/ROOT/pages/4.coroutines/4g.allocators.adoc +++ b/doc/modules/ROOT/pages/4.coroutines/4g.allocators.adoc @@ -1,15 +1,15 @@ -= Frame Allocators += Frame Allocators This section explains how coroutine frames are allocated and how to customize allocation for performance. == Prerequisites * Completed xref:4f.composition.adoc[Concurrent Composition] -* Understanding of coroutine frame allocation from xref:../2.cpp20-coroutines/2d.advanced.adoc[C++20 Coroutines Tutorial] +* Understanding of coroutine frame allocation from xref:../2.cpp20-coroutines/2d.advanced.adoc[{cpp}20 Coroutines Tutorial] == The Timing Constraint -Coroutine frame allocation has a unique constraint: memory must be allocated *before* the coroutine body begins executing. The standard C++ mechanism—promise type's `operator new`—is called before the promise is constructed. +Coroutine frame allocation has a unique constraint: memory must be allocated *before* the coroutine body begins executing. 
The standard {cpp} mechanism—promise type's `operator new`—is called before the promise is constructed. This creates a challenge: how can a coroutine use a custom allocator when the allocator might be passed as a parameter, which is stored *in* the frame? @@ -43,7 +43,7 @@ After the window closes (at the first suspension), the TLS allocator may be rest == The FrameAllocator Concept -Custom allocators must satisfy the `FrameAllocator` concept, which is compatible with C++ allocator requirements: +Custom allocators must satisfy the `FrameAllocator` concept, which is compatible with {cpp} allocator requirements: [source,cpp] ---- diff --git a/doc/modules/ROOT/pages/5.buffers/5a.overview.adoc b/doc/modules/ROOT/pages/5.buffers/5a.overview.adoc index 8fd1f2f7..bca51278 100644 --- a/doc/modules/ROOT/pages/5.buffers/5a.overview.adoc +++ b/doc/modules/ROOT/pages/5.buffers/5a.overview.adoc @@ -4,18 +4,18 @@ This section explains why Capy uses concept-driven buffer sequences instead of ` == Prerequisites -* Basic C++ experience with memory and pointers -* Familiarity with C++20 concepts +* Basic {cpp} experience with memory and pointers +* Familiarity with {cpp}20 concepts == The I/O Use Case -Buffers exist to interface with operating system I/O. When you read from a socket, write to a file, or transfer data through any I/O channel, you work with contiguous memory regions—addresses and byte counts. +Buffers exist to interface with operating system I/O. When you read from a socket, write to a file, or transfer data through any I/O channel, you work with contiguous memory regions—addresses and byte counts. The fundamental unit is a `(pointer, size)` pair. The OS reads bytes from or writes bytes to linear addresses. == The Reflexive Answer: span -The instinctive C++ answer to "how should I represent a buffer?" is `std::span`: +The instinctive {cpp} answer to "how should I represent a buffer?" 
is `std::span`: [source,cpp] ---- @@ -23,7 +23,7 @@ void write_data(std::span data); void read_data(std::span buffer); ---- -This works for single contiguous buffers. But I/O often involves multiple buffers—a technique called *scatter/gather I/O*. +This works for single contiguous buffers. But I/O often involves multiple buffers—a technique called *scatter/gather I/O*. == Scatter/Gather I/O @@ -34,7 +34,7 @@ Consider assembling an HTTP message. The headers are in one buffer; the body is 3. Copy body after headers 4. Send the combined buffer -This is wasteful. The data already exists—why copy it? +This is wasteful. The data already exists—why copy it? Scatter/gather I/O solves this. Operating systems provide vectored I/O calls (`writev` on POSIX, scatter/gather with IOCP on Windows) that accept multiple buffers and transfer them as a single logical operation. @@ -76,9 +76,9 @@ write_data(combined); Every composition allocates. This leads to: -* Overload proliferation—separate functions for single buffer, multiple buffers, common cases -* Performance overhead—allocation on every composition -* Boilerplate—manual copying everywhere +* Overload proliferation—separate functions for single buffer, multiple buffers, common cases +* Performance overhead—allocation on every composition +* Boilerplate—manual copying everywhere == The Concept-Driven Alternative @@ -97,7 +97,7 @@ This single signature accepts: * A `vector` * A `string_view` (converts to single buffer) * A custom composite type -* *Any composition of the above—without allocation* +* *Any composition of the above—without allocation* == Zero-Allocation Composition @@ -114,7 +114,7 @@ auto combined = cat(headers, body); // No allocation! write_data(combined); // Works because combined satisfies ConstBufferSequence ---- -The `cat` function returns a lightweight object that, when iterated, first yields buffers from `headers`, then from `body`. The buffers themselves are not copied—only iterators are composed. 
+The `cat` function returns a lightweight object that, when iterated, first yields buffers from `headers`, then from `body`. The buffers themselves are not copied—only iterators are composed. == STL Parallel @@ -124,21 +124,21 @@ The span reflex is a regression from thirty years of generic programming. Concep == The Middle Ground -Concepts provide flexibility at user-facing APIs. But at type-erasure boundaries—virtual functions, library boundaries—concrete types are necessary. +Concepts provide flexibility at user-facing APIs. But at type-erasure boundaries—virtual functions, library boundaries—concrete types are necessary. Capy's approach: -* *User-facing APIs* — Accept concepts for maximum flexibility -* *Type-erasure boundaries* — Use concrete spans internally -* *Library handles conversion* — Users get concepts; implementation uses spans +* *User-facing APIs* — Accept concepts for maximum flexibility +* *Type-erasure boundaries* — Use concrete spans internally +* *Library handles conversion* — Users get concepts; implementation uses spans This gives users the composition benefits of concepts while hiding the concrete types needed for virtual dispatch. == Why Not std::byte? -Even `std::byte` imposes a semantic opinion. POSIX uses `void*` for semantic neutrality—"raw memory, I move bytes without opining on contents." +Even `std::byte` imposes a semantic opinion. POSIX uses `void*` for semantic neutrality—"raw memory, I move bytes without opining on contents." -But `span` doesn't compile—C++ can't express type-agnostic buffer abstraction with `span`. +But `span` doesn't compile—{cpp} can't express type-agnostic buffer abstraction with `span`. Capy provides `const_buffer` and `mutable_buffer` as semantically neutral buffer types. They have known layout compatible with OS structures (`iovec`, `WSABUF`) without imposing `std::byte` semantics. 
diff --git a/doc/modules/ROOT/pages/5.buffers/5b.types.adoc b/doc/modules/ROOT/pages/5.buffers/5b.types.adoc index 10ae8bbd..1f181df6 100644 --- a/doc/modules/ROOT/pages/5.buffers/5b.types.adoc +++ b/doc/modules/ROOT/pages/5.buffers/5b.types.adoc @@ -1,4 +1,4 @@ -= Buffer Types += Buffer Types This section introduces Capy's fundamental buffer types: `const_buffer` and `mutable_buffer`. @@ -13,7 +13,7 @@ This section introduces Capy's fundamental buffer types: `const_buffer` and `mut POSIX uses `void*` for buffers. This expresses semantic neutrality: "I move memory without opining on what it contains." The OS doesn't care if the bytes represent text, integers, or compressed data—it moves them. -But `std::span<void>` doesn't compile. C++ can't express a type-agnostic buffer abstraction using `span`. +But `std::span<void>` doesn't compile. {cpp} can't express a type-agnostic buffer abstraction using `span`. Capy provides `const_buffer` and `mutable_buffer` as semantically neutral buffer types with known layout. diff --git a/doc/modules/ROOT/pages/6.streams/6f.isolation.adoc b/doc/modules/ROOT/pages/6.streams/6f.isolation.adoc index 755d0c3f..ffd9de9b 100644 --- a/doc/modules/ROOT/pages/6.streams/6f.isolation.adoc +++ b/doc/modules/ROOT/pages/6.streams/6f.isolation.adoc @@ -9,7 +9,7 @@ This section explains how type-erased wrappers enable compilation firewalls and == The Compilation Firewall Pattern -C++ templates are powerful but have a cost: every instantiation compiles in every translation unit that uses it. Change a template, and everything that includes it recompiles. +{cpp} templates are powerful but have a cost: every instantiation compiles in every translation unit that uses it. Change a template, and everything that includes it recompiles. Type-erased wrappers break this dependency: @@ -122,7 +122,7 @@ any_write_sink sink{mock}; send_message(sink, msg); ---- -Same `send_message` function, different transports—compile once, use everywhere.
+Same `send_message` function, different transports—compile once, use everywhere. == API Design Guidelines @@ -237,4 +237,4 @@ Type-erased wrappers are in ``: * `any_read_source`, `any_write_sink` * `any_buffer_source`, `any_buffer_sink` -You have now completed the Stream Concepts section. These abstractions—streams, sources, sinks, and their type-erased wrappers—form the foundation for Capy's I/O model. Continue to xref:../7.examples/7a.hello-task.adoc[Example Programs] to see complete working examples. +You have now completed the Stream Concepts section. These abstractions—streams, sources, sinks, and their type-erased wrappers—form the foundation for Capy's I/O model. Continue to xref:../7.examples/7a.hello-task.adoc[Example Programs] to see complete working examples. diff --git a/doc/modules/ROOT/pages/7.examples/7a.hello-task.adoc b/doc/modules/ROOT/pages/7.examples/7a.hello-task.adoc index 5c1be812..ae5a24d0 100644 --- a/doc/modules/ROOT/pages/7.examples/7a.hello-task.adoc +++ b/doc/modules/ROOT/pages/7.examples/7a.hello-task.adoc @@ -10,7 +10,7 @@ The minimal Capy program: a task that prints a message. == Prerequisites -* C++20 compiler +* {cpp}20 compiler * Capy library installed == Source Code @@ -57,7 +57,7 @@ task<> say_hello() } ---- -`task<>` is equivalent to `task`—a coroutine that completes without returning a value. The `co_return` keyword marks this as a coroutine. +`task<>` is equivalent to `task`—a coroutine that completes without returning a value. The `co_return` keyword marks this as a coroutine. Tasks are lazy: calling `say_hello()` creates a task object but does not execute the body. The `"Hello"` message is not printed until the task is launched. @@ -81,8 +81,8 @@ run_async(pool.get_executor())(say_hello()); `run_async` bridges non-coroutine code (like `main`) to coroutine code. The two-call syntax: -1. `run_async(pool.get_executor())` — Creates a launcher with the executor -2. `(say_hello())` — Accepts the task and starts execution +1. 
`run_async(pool.get_executor())` — Creates a launcher with the executor +2. `(say_hello())` — Accepts the task and starts execution The task runs on one of the pool's worker threads. @@ -100,4 +100,4 @@ Hello from Capy! == Next Steps -* xref:7b.producer-consumer.adoc[Producer-Consumer] — Multiple tasks communicating +* xref:7b.producer-consumer.adoc[Producer-Consumer] — Multiple tasks communicating diff --git a/doc/modules/ROOT/pages/8.design/8a.ReadStream.adoc b/doc/modules/ROOT/pages/8.design/8a.ReadStream.adoc index 38b8e9f6..931e2800 100644 --- a/doc/modules/ROOT/pages/8.design/8a.ReadStream.adoc +++ b/doc/modules/ROOT/pages/8.design/8a.ReadStream.adoc @@ -296,7 +296,7 @@ reasoning presented here was reconstructed from three sources: terse but the implications are deep. - *Analysis of the underlying system calls.* POSIX `recv()` and Windows `WSARecv()` both enforce a binary outcome per call: data - or error, never both. This is not because the C++ abstraction + or error, never both. This is not because the {cpp} abstraction copied the OS, but because both levels face the same fundamental constraint. @@ -528,7 +528,7 @@ an error -- it chooses to report the partial data as success. POSIX `recv()` independently enforces the same rule: `N > 0` on success, `-1` on error, `0` on EOF. The kernel never returns "here are your last 5 bytes, and also EOF." It delivers the available bytes -on one call and returns 0 on the next. This is not because the C++ +on one call and returns 0 on the next. This is not because the {cpp} abstraction copied POSIX semantics. It is because the kernel faces the same fundamental constraint: state is discovered through the act of I/O. 
The alignment between `read_some` and `recv()` is convergent diff --git a/doc/modules/ROOT/pages/8.design/8f.BufferSink.adoc b/doc/modules/ROOT/pages/8.design/8f.BufferSink.adoc index 9a5cfb42..740be7f2 100644 --- a/doc/modules/ROOT/pages/8.design/8f.BufferSink.adoc +++ b/doc/modules/ROOT/pages/8.design/8f.BufferSink.adoc @@ -479,7 +479,7 @@ This was replaced by the span-based interface because: - `std::span` is self-describing: it carries both the pointer and the size, eliminating a class of off-by-one errors. -- Returning a subspan of the input span is idiomatic C++ and composes +- Returning a subspan of the input span is idiomatic {cpp} and composes well with range-based code. - The raw-pointer interface required two parameters (pointer + count) where the span interface requires one. diff --git a/doc/modules/ROOT/pages/8.design/8g.RunApi.adoc b/doc/modules/ROOT/pages/8.design/8g.RunApi.adoc index 893c7713..83d5bbfe 100644 --- a/doc/modules/ROOT/pages/8.design/8g.RunApi.adoc +++ b/doc/modules/ROOT/pages/8.design/8g.RunApi.adoc @@ -142,7 +142,7 @@ co_await run(my_alloc)(subtask()); ---- The builder pattern reads well as English, but it creates problems -in C++ practice. See <> below for the full analysis. +in {cpp} practice. See <> below for the full analysis. === Single-Call with Named Method @@ -172,7 +172,7 @@ namespace collision problems of `on`/`with`. The objection is minor: `.spawn()` and `.call()` add vocabulary without adding clarity. The wrapper already has exactly one purpose -- accepting a task. A named method implies the wrapper has a richer interface than it does. -`operator()` is the conventional C++ way to express "this object +`operator()` is the conventional {cpp} way to express "this object does exactly one thing." That said, this alternative has legs and could be revisited if the `()()` syntax proves too confusing in practice. @@ -201,7 +201,7 @@ The `run` prefix was chosen for several reasons: as a coherent pair. 
- **Consistency.** The naming follows the established pattern from - `io_context::run()`, `std::jthread`, and other C++ APIs where + `io_context::run()`, `std::jthread`, and other {cpp} APIs where `run` means "begin executing work." - **No false promises.** A builder-pattern syntax like @@ -309,7 +309,7 @@ Any mechanism that injects the allocator _after_ the call -- receiver queries, `await_transform`, explicit method calls -- arrives too late. The frame is already allocated. -This is the fundamental tension identified in D4003 §3.3: +This is the fundamental tension identified in D4003 §3.3: [quote] ____ @@ -321,9 +321,9 @@ Any mechanism that injects context later -- receiver connection, `await_transform`, explicit method calls -- arrives too late. ____ -=== The Solution: C++17 Postfix Evaluation Order +=== The Solution: {cpp}17 Postfix Evaluation Order -C++17 guarantees that in a postfix-expression call, the +{cpp}17 guarantees that in a postfix-expression call, the postfix-expression is sequenced before the argument expressions: [quote] @@ -447,6 +447,6 @@ The `run` name is greppable, unambiguous, and won't collide with local variables in a namespace-heavy Boost codebase. The `f(ctx)(task)` syntax exists because coroutine frame allocation requires the allocator to be set _before_ the task expression is evaluated, and -C++17 postfix sequencing guarantees exactly that ordering. The syntax +{cpp}17 postfix sequencing guarantees exactly that ordering. The syntax is intentionally explicit about its two steps -- it tells the reader that something important happens between them. 
diff --git a/doc/modules/ROOT/pages/8.design/8j.Executor.adoc b/doc/modules/ROOT/pages/8.design/8j.Executor.adoc
index 40049c84..62f60cb7 100644
--- a/doc/modules/ROOT/pages/8.design/8j.Executor.adoc
+++ b/doc/modules/ROOT/pages/8.design/8j.Executor.adoc
@@ -141,7 +141,7 @@
 would execute while `await_suspend` has not yet returned --
 resuming a coroutine from inside `await_suspend` before the
 suspension machinery completes risks undefined behavior.
-The C++ standard describes the sequencing in
+The {cpp} standard describes the sequencing in
 https://eel.is/c++draft/expr.await[[expr.await]/5.1]:
 
 [quote]
@@ -451,9 +451,9 @@ have overhead that executor usage cannot tolerate:
 to a `static constexpr` structure in `.rodata`. One indirection,
 no branches, no allocation.
 
-=== Why Not C++ Virtual Functions
+=== Why Not {cpp} Virtual Functions
 
-C++ virtual dispatch places the vtable pointer inside each
+{cpp} virtual dispatch places the vtable pointer inside each
 heap-allocated object. Every virtual call chases a pointer
 from the object to its vtable, which may reside at an
 unpredictable address in memory. When objects of different types are
diff --git a/doc/modules/ROOT/pages/index.adoc b/doc/modules/ROOT/pages/index.adoc
index 3e885bec..77201075 100644
--- a/doc/modules/ROOT/pages/index.adoc
+++ b/doc/modules/ROOT/pages/index.adoc
@@ -1,45 +1,45 @@
 = Capy
 
-Capy abstracts away sockets, files, and asynchrony with type-erased streams and buffer sequences—code compiles fast because the implementation is hidden. It provides the framework for concurrent algorithms that transact in buffers of memory: networking, serial ports, console, timers, and any platform I/O. This is only possible because Capy is coroutine-only, enabling optimizations and ergonomics that hybrid approaches must sacrifice.
+Capy abstracts away sockets, files, and asynchrony with type-erased streams and buffer sequences—code compiles fast because the implementation is hidden. It provides the framework for concurrent algorithms that transact in buffers of memory: networking, serial ports, console, timers, and any platform I/O. This is only possible because Capy is coroutine-only, enabling optimizations and ergonomics that hybrid approaches must sacrifice.
 
 == What This Library Does
 
-* *Lazy coroutine tasks* — `task` with forward-propagating stop tokens and automatic cancellation
-* *Buffer sequences* — taken straight from Asio and improved
-* *Stream concepts* — `ReadStream`, `WriteStream`, `ReadSource`, `WriteSink`, `BufferSource`, `BufferSink`
-* *Type-erased streams* — `any_stream`, `any_read_stream`, `any_write_stream` for fast compilation
-* *Concurrency facilities* — executors, strands, thread pools, `when_all`, `when_any`
-* *Test utilities* — mock streams, mock sources/sinks, error injection
+* *Lazy coroutine tasks* — `task` with forward-propagating stop tokens and automatic cancellation
+* *Buffer sequences* — taken straight from Asio and improved
+* *Stream concepts* — `ReadStream`, `WriteStream`, `ReadSource`, `WriteSink`, `BufferSource`, `BufferSink`
+* *Type-erased streams* — `any_stream`, `any_read_stream`, `any_write_stream` for fast compilation
+* *Concurrency facilities* — executors, strands, thread pools, `when_all`, `when_any`
+* *Test utilities* — mock streams, mock sources/sinks, error injection
 
 == What This Library Does Not Do
 
-* *Networking* — no sockets, acceptors, or DNS; that's what Corosio provides
-* *Protocols* — no HTTP, WebSocket, or TLS; see the Http and Beast2 libraries
-* *Platform event loops* — no io_uring, IOCP, epoll, or kqueue; Capy is the layer above
-* *Callbacks or futures* — coroutine-only means no other continuation styles
-* *Sender/receiver* — Capy uses the IoAwaitable protocol, not `std::execution`
+* *Networking* — no sockets, acceptors, or DNS; that's what Corosio provides
+* *Protocols* — no HTTP, WebSocket, or TLS; see the Http and Beast2 libraries
+* *Platform event loops* — no io_uring, IOCP, epoll, or kqueue; Capy is the layer above
+* *Callbacks or futures* — coroutine-only means no other continuation styles
+* *Sender/receiver* — Capy uses the IoAwaitable protocol, not `std::execution`
 
 == Target Audience
 
-* Users of *Corosio* — portable coroutine networking
-* Users of *Http* — sans-I/O HTTP/1.1 clients and servers
-* Users of *Websocket* — sans-I/O WebSocket
-* Users of *Beast2* — high-level HTTP/WebSocket servers
-* Users of *Burl* — high-level HTTP client
+* Users of *Corosio* — portable coroutine networking
+* Users of *Http* — sans-I/O HTTP/1.1 clients and servers
+* Users of *Websocket* — sans-I/O WebSocket
+* Users of *Beast2* — high-level HTTP/WebSocket servers
+* Users of *Burl* — high-level HTTP client
 
-All of these are built on Capy. Understanding its concepts—tasks, buffer sequences, streams, executors—unlocks the full power of the stack.
+All of these are built on Capy. Understanding its concepts—tasks, buffer sequences, streams, executors—unlocks the full power of the stack.
 
 == Design Philosophy
 
-* *Use case first.* Buffer sequences, stream concepts, executor affinity—these exist because I/O code needs them, not because they're theoretically elegant.
+* *Use case first.* Buffer sequences, stream concepts, executor affinity—these exist because I/O code needs them, not because they're theoretically elegant.
 * *Coroutines-only.* No callbacks, futures, or sender/receiver. Hybrid support forces compromises; full commitment unlocks optimizations that adapted models cannot achieve.
-* *Address the complaints of C++.* Type erasure at boundaries, minimal dependencies, and hidden implementations keep builds fast and templates manageable.
+* *Address the complaints of {cpp}.* Type erasure at boundaries, minimal dependencies, and hidden implementations keep builds fast and templates manageable.
 
 == Requirements
 
 === Assumed Knowledge
 
-* C++20 coroutines, concepts, and ranges
+* {cpp}20 coroutines, concepts, and ranges
 * Basic concurrent programming
 
 === Compiler Support
@@ -104,15 +104,15 @@ int main()
 }
 ----
 
-The `echo` function accepts an `any_stream&`—a type-erased wrapper that works with any concrete stream implementation. The function reads data into a buffer, then writes it back. Both operations use `co_await` to suspend until the I/O completes.
+The `echo` function accepts an `any_stream&`—a type-erased wrapper that works with any concrete stream implementation. The function reads data into a buffer, then writes it back. Both operations use `co_await` to suspend until the I/O completes.
 
 The `task<>` return type (equivalent to `task`) creates a lazy coroutine that does not start executing until awaited or launched with `run_async`.
 
 == Next Steps
 
-* xref:quick-start.adoc[Quick Start] — Set up your first Capy project
-* xref:cpp20-coroutines/foundations.adoc[C++20 Coroutines Tutorial] — Learn coroutines from the ground up
-* xref:concurrency/foundations.adoc[Concurrency Tutorial] — Understand threads, mutexes, and synchronization
-* xref:coroutines/tasks.adoc[Coroutines in Capy] — Deep dive into `task` and the IoAwaitable protocol
-* xref:buffers/overview.adoc[Buffer Sequences] — Master the concept-driven buffer model
-* xref:streams/overview.adoc[Stream Concepts] — Understand the six stream concepts
+* xref:quick-start.adoc[Quick Start] — Set up your first Capy project
+* xref:cpp20-coroutines/foundations.adoc[{cpp}20 Coroutines Tutorial] — Learn coroutines from the ground up
+* xref:concurrency/foundations.adoc[Concurrency Tutorial] — Understand threads, mutexes, and synchronization
+* xref:coroutines/tasks.adoc[Coroutines in Capy] — Deep dive into `task` and the IoAwaitable protocol
+* xref:buffers/overview.adoc[Buffer Sequences] — Master the concept-driven buffer model
+* xref:streams/overview.adoc[Stream Concepts] — Understand the six stream concepts
diff --git a/doc/modules/ROOT/pages/quick-start.adoc b/doc/modules/ROOT/pages/quick-start.adoc
index dd80adf6..9886625c 100644
--- a/doc/modules/ROOT/pages/quick-start.adoc
+++ b/doc/modules/ROOT/pages/quick-start.adoc
@@ -11,7 +11,7 @@
 
 This page gets you from zero to a working coroutine program in five minutes.
 
-NOTE: Capy requires C++20 with coroutine support.
+NOTE: Capy requires {cpp}20 with coroutine support.
 
 == Minimal Example
 
@@ -115,6 +115,6 @@
 
 Now that you have a working program:
 
-* xref:coroutines/tasks.adoc[Tasks] — Learn how lazy tasks work
-* xref:coroutines/launching.adoc[Launching Tasks] — Understand `run_async` in detail
-* xref:coroutines/affinity.adoc[Executor Affinity] — Control where coroutines execute
+* xref:coroutines/tasks.adoc[Tasks] — Learn how lazy tasks work
+* xref:coroutines/launching.adoc[Launching Tasks] — Understand `run_async` in detail
+* xref:coroutines/affinity.adoc[Executor Affinity] — Control where coroutines execute
diff --git a/doc/modules/ROOT/pages/why-not-cobalt-2.adoc b/doc/modules/ROOT/pages/why-not-cobalt-2.adoc
index 9aefc807..8884e24b 100644
--- a/doc/modules/ROOT/pages/why-not-cobalt-2.adoc
+++ b/doc/modules/ROOT/pages/why-not-cobalt-2.adoc
@@ -17,7 +17,7 @@ Each section below examines one design choice and its technical consequences.
 
 == Task Requirements
 
-Capy formally defines what makes a task type conforming. Two C++20 concepts form a refinement hierarchy:
+Capy formally defines what makes a task type conforming. Two {cpp}20 concepts form a refinement hierarchy:
 
 ....
 IoAwaitable
@@ -493,7 +493,7 @@ The preceding sections each examined a specific design choice. A common thread r
 
 Cobalt's `write_stream` is an abstract base class. The abstraction and the runtime wrapper are the same type. Writing against the abstraction means using virtual dispatch. The return type (`write_op`), the buffer parameter type (`const_buffer_sequence`), the allocation strategy (4096-byte SBO), and the context propagation mechanism (promise probing) are all fixed by the base class definition.
 
-Capy separates the abstraction from the wrapper. `WriteStream` is a C++20 concept:
+Capy separates the abstraction from the wrapper. `WriteStream` is a {cpp}20 concept:
 
 [source,cpp]
 ----
@@ -521,7 +521,7 @@ This separation is the architectural root of the differences examined in this do
 | Aspect | Capy | Cobalt
 
 | Abstraction mechanism
-| C++20 concept (`WriteStream`)
+| {cpp}20 concept (`WriteStream`)
 | Abstract base class (`write_stream`)
 
 | Runtime wrapper
diff --git a/doc/unlisted/io-awaitables-concepts.adoc b/doc/unlisted/io-awaitables-concepts.adoc
index d43c0884..39f88369 100644
--- a/doc/unlisted/io-awaitables-concepts.adoc
+++ b/doc/unlisted/io-awaitables-concepts.adoc
@@ -1,4 +1,4 @@
-//
+//
 // Copyright (c) 2025 Vinnie Falco (vinnie.falco@gmail.com)
 //
 // Distributed under the Boost Software License, Version 1.0. (See accompanying
@@ -14,7 +14,7 @@ context propagation through coroutine chains.
 
 == The Problem: Lost Context
 
-Standard C++20 awaiters receive only a coroutine handle:
+Standard {cpp}20 awaiters receive only a coroutine handle:
 
 [source,cpp]
 ----
diff --git a/doc/unlisted/io-awaitables-stop-token.adoc b/doc/unlisted/io-awaitables-stop-token.adoc
index f33cd86a..1ced5be1 100644
--- a/doc/unlisted/io-awaitables-stop-token.adoc
+++ b/doc/unlisted/io-awaitables-stop-token.adoc
@@ -14,7 +14,7 @@ using `std::stop_token`.
 
 == Cooperative Cancellation
 
-Capy supports _cooperative_ cancellation through C++20's `std::stop_token`:
+Capy supports _cooperative_ cancellation through {cpp}20's `std::stop_token`:
 
 * **Cooperative:** Operations check the token and decide how to respond
 * **Non-preemptive:** Nothing is forcibly terminated
@@ -37,7 +37,7 @@ task long_operation()
 }
 ----
 
 == The Stop Token API
 
-C++20 provides three related types:
+{cpp}20 provides three related types:
 
 [source,cpp]
 ----
@@ -156,7 +156,7 @@ task cancellable_work()
 }
 ----
 
-`this_coro::environment` never suspends—it's intercepted by `await_transform` and
+`this_coro::environment` never suspends—it's intercepted by `await_transform` and
 returns immediately.
 
 == Implementing Stoppable Awaitables
@@ -255,7 +255,7 @@ source.request_stop(); // All children in when_all see this
 
 == Error Handling vs Cancellation
 
-Cancellation is not an error—it's an expected outcome:
+Cancellation is not an error—it's an expected outcome:
 
 [source,cpp]
 ----
@@ -272,7 +272,7 @@ task> fetch_with_timeout()
 }
 ----
 
 Use `std::optional`, sentinel values, or error codes to signal
-cancellation—reserve exceptions for unexpected failures.
+cancellation—reserve exceptions for unexpected failures.
 
 == When NOT to Use Cancellation
@@ -313,5 +313,5 @@ Do NOT use cancellation when:
 
 == Next Steps
 
-* xref:allocator.adoc[The Allocator] — Frame allocation strategy
-* xref:launching.adoc[Launching Coroutines] — Pass stop tokens to `run_async`
+* xref:allocator.adoc[The Allocator] — Frame allocation strategy
+* xref:launching.adoc[Launching Coroutines] — Pass stop tokens to `run_async`
diff --git a/doc/unlisted/library-io-result.adoc b/doc/unlisted/library-io-result.adoc
index 0f34ae83..3c808e7b 100644
--- a/doc/unlisted/library-io-result.adoc
+++ b/doc/unlisted/library-io-result.adoc
@@ -1,4 +1,4 @@
-//
+//
 // Copyright (c) 2025 Vinnie Falco (vinnie.falco@gmail.com)
 //
 // Distributed under the Boost Software License, Version 1.0. (See accompanying
@@ -111,7 +111,7 @@ auto [ec, v1, v2, v3] = co_await complex_operation();
 
 == Using Structured Bindings
 
-C++17 structured bindings make `io_result` ergonomic:
+{cpp}17 structured bindings make `io_result` ergonomic:
 
 [source,cpp]
 ----
diff --git a/doc/unlisted/performance-tuning-high-performance-allocators.adoc b/doc/unlisted/performance-tuning-high-performance-allocators.adoc
index 5fc07027..58b217d8 100644
--- a/doc/unlisted/performance-tuning-high-performance-allocators.adoc
+++ b/doc/unlisted/performance-tuning-high-performance-allocators.adoc
@@ -18,7 +18,7 @@ tuning, see xref:../execution/frame-allocation.adoc[Frame Allocation].
 
 == Why Replace the Default Allocator?
 
-The default memory allocator provided by most C++ standard library implementations
+The default memory allocator provided by most {cpp} standard library implementations
 is general-purpose, but not always optimal for high-throughput applications.
 Common issues include:
 
@@ -47,8 +47,8 @@ allocator.
 
 You can use both approaches together:
 
-* **Global replacement** — Handles all allocations (containers, strings, etc.)
-* **frame_allocator** — Optimizes coroutine frame allocation specifically
+* **Global replacement** — Handles all allocations (containers, strings, etc.)
+* **frame_allocator** — Optimizes coroutine frame allocation specifically
 
 For applications dominated by coroutine creation, a custom `frame_allocator`
 (like the built-in recycling allocator) may provide better results than just
@@ -97,11 +97,11 @@ installation, linking, and configuration instructions.
 
 When using a replacement allocator:
 
-* **Benchmark your specific workload** — allocator performance varies by
+* **Benchmark your specific workload** — allocator performance varies by
 allocation pattern
-* **Monitor memory usage** — some allocators trade memory for speed
-* **Consider configuration** — most allocators have tunable parameters
-* **Test under load** — benefits are most visible under concurrent allocation
+* **Monitor memory usage** — some allocators trade memory for speed
+* **Consider configuration** — most allocators have tunable parameters
+* **Test under load** — benefits are most visible under concurrent allocation
 
 == Impact on Capy Coroutines
@@ -142,5 +142,5 @@ first step before exploring custom frame allocators.
 
 == Next Steps
 
-* xref:../execution/frame-allocation.adoc[Frame Allocation] — Coroutine-specific memory tuning
-* xref:../coroutines/launching.adoc[Launching Tasks] — Running coroutines efficiently
+* xref:../execution/frame-allocation.adoc[Frame Allocation] — Coroutine-specific memory tuning
+* xref:../coroutines/launching.adoc[Launching Tasks] — Running coroutines efficiently