threadpool-exec (default)
- Provides a wrapper implementation around the threadpool crate.
- See threadpool_executor.
- Provides a thread pool executor with a single global queue.
- See crossbeam_channel_pool.
- Provides a thread pool executor with thread-local queues in addition to a global injector queue.
- See crossbeam_workstealing_pool.
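Whichever of these pools is enabled, jobs are submitted through the shared Executor trait. The sketch below follows the crate's own usage pattern closely, but treat the exact constructor and shutdown signatures as assumptions; it uses the crossbeam_channel_pool:

```rust
use executors::*;
use executors::crossbeam_channel_pool::ThreadPool;
use std::sync::mpsc::channel;

fn main() {
    let n_workers = 4;
    let n_jobs = 8;
    let pool = ThreadPool::new(n_workers);

    let (tx, rx) = channel();
    for _ in 0..n_jobs {
        let tx = tx.clone();
        // `execute` comes from the shared `Executor` trait, so the same call
        // works with threadpool_executor and crossbeam_workstealing_pool too.
        pool.execute(move || {
            tx.send(1usize).expect("channel should outlive the pool");
        });
    }

    // Collect one result per job to make sure everything ran.
    let sum: usize = rx.iter().take(n_jobs).sum();
    assert_eq!(sum, n_jobs);

    // Shut the pool down; assumed here to return a Result.
    pool.shutdown().expect("pool should shut down cleanly");
}
```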
- This feature flag determines the fairness mechanism between local and global queues in the crossbeam_workstealing_pool.
- If the flag is enabled, fairness is time-based: the global queue will be checked every 100ms.
- If the flag is absent, fairness is count-based: the global queue will be checked every 100 local jobs.
- Which one you should pick depends on your application.
- Time-based fairness is a compromise between the latency of externally scheduled jobs and overall throughput.
- Count-based fairness depends heavily on how long your jobs typically run, but counting is cheaper than checking the time, so it can lead to higher throughput.
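To make the trade-off concrete, here is a conceptual sketch of the two policies; it is illustrative only and not the crate's actual internals:

```rust
use std::time::{Duration, Instant};

// Conceptual parameters matching the description above.
const CHECK_INTERVAL: Duration = Duration::from_millis(100); // time-based
const CHECK_EVERY_N_JOBS: u64 = 100;                         // count-based

// Time-based fairness: a clock read per decision, but the worst-case wait
// for an externally scheduled job is bounded by CHECK_INTERVAL.
fn should_check_global_timed(last_check: Instant) -> bool {
    last_check.elapsed() >= CHECK_INTERVAL
}

// Count-based fairness: only an integer comparison per decision, so it is
// cheaper, but the real time between global-queue checks depends on how
// long the local jobs run.
fn should_check_global_counted(jobs_since_check: u64) -> bool {
    jobs_since_check >= CHECK_EVERY_N_JOBS
}
```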
- Disable thread parking for the crossbeam_workstealing_pool.
- This is generally detrimental to performance, as idle threads will unnecessarily hang on to CPU resources.
- However, for very latency sensitive interactions with external resources (e.g., I/O), this can reduce overall job latency.
- Allows pool threads to be pinned to specific cores.
- This can reduce cache invalidation overhead when threads sleep and then are woken up later.
- However, if other processes need your cores, it can also introduce additional scheduling delay when the pinned core isn’t immediately available at wake time.
- Use with care.
- Make memory-architecture-aware decisions.
- Concretely this setting currently only affects crossbeam_workstealing_pool.
- When it is enabled, work-stealing will happen by memory proximity.
- That is, threads with too little work will try to steal from memory-close threads first, before trying threads that are further away.
- Every executor provided in this crate can produce metrics using the metrics crate.
- The metrics are executors.jobs_executed (“How many jobs were executed in total?”) and executors.jobs_queued (“How many jobs are currently waiting to be executed?”).
- Not all executors produce all metrics.
- WARNING: Collecting these metrics typically has a serious performance impact. You should only consider using this in production if your jobs are fairly large anyway (say in the millisecond range).
Re-exports
pub use crate::common::CanExecute;
pub use crate::common::Executor;
pub use crate::futures_executor::FuturesExecutor;
pub use crate::futures_executor::JoinHandle;
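The re-exported FuturesExecutor and JoinHandle are what allow futures to be spawned onto a pool. A rough sketch, assuming spawn hands back a JoinHandle that can itself be driven as a future (here with futures::executor::block_on, an extra dependency):

```rust
use executors::*;
use executors::crossbeam_channel_pool::ThreadPool;

fn main() {
    let pool = ThreadPool::new(2);

    // `spawn` is assumed to come from the re-exported `FuturesExecutor`
    // trait and to hand back a `JoinHandle` for the future's result.
    let handle = pool.spawn(async { 21 * 2 });

    // Drive the handle to completion; `block_on` is from the `futures` crate.
    let result = futures::executor::block_on(handle);
    // Depending on the crate version, this may be the value itself or an
    // Option wrapping it.
    println!("spawned future produced: {:?}", result);

    pool.shutdown().expect("pool should shut down cleanly");
}
```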
Modules
A simple abstraction for bidirectional 1-to-1 channels built over std::sync::mpsc.
The core traits and reusable functions of this crate.
A thread pool Executor used to execute functions in parallel.
A thread pool Executor used to execute functions in parallel.
Support for Rust’s futures and async/await APIs.
This module contains helpers for executors that are NUMA-ware.
A reusable thread-pool-parking mechanism.
An Executor that simply runs tasks immediately on the current thread.
A thread pool Executor used to execute functions in parallel.
Functions
Tries to run the job on the same executor that spawned the thread running the job.