The functools library¶
The asyncstdlib.functools library implements Python’s functools for (async) functions and (async) iterables.
Iterator reducing¶
- await reduce(function: (T, T) → (await) T, iterable: (async) iter T, initial: T) → T [source]¶
Reduce an (async) iterable by cumulative application of an (async) function
- Raises:
TypeError – if iterable is empty and initial is not given
Applies function from the beginning of iterable, as if executing await function(current, anext(iterable)) until iterable is exhausted. Note that the output of function should be valid as its first input.
The optional initial is prepended to all items of iterable when applying function. If the combination of initial and iterable contains exactly one item, it is returned without calling function.
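These semantics can be sketched with a minimal, stdlib-only implementation (a hypothetical `areduce` helper, not the library’s own code; for brevity it handles only a plain sync function over an async iterator):

```python
import asyncio

_SENTINEL = object()

async def areduce(function, iterable, initial=_SENTINEL):
    # Minimal sketch of the reduce semantics described above.
    iterator = iterable.__aiter__()
    if initial is _SENTINEL:
        try:
            current = await iterator.__anext__()
        except StopAsyncIteration:
            raise TypeError("reduce of empty iterable with no initial value")
    else:
        current = initial
    # if only one item remains in total, this loop never runs
    # and function is never called
    async for item in iterator:
        current = function(current, item)
    return current

async def numbers():
    for n in (1, 2, 3, 4):
        yield n

print(asyncio.run(areduce(lambda x, y: x + y, numbers())))  # 10
```

Note how `initial` simply seeds `current`, so an empty iterable with an `initial` value returns that value unchanged.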
Async Caches¶
The regular functools.lru_cache() and functools.cached_property() are not appropriate for async callables, such as an async def coroutine function: their direct return value is an awaitable instead of the desired value. This causes the cache to store only temporary helpers, not the actual values.
Both lru_cache() and cached_property() of asyncstdlib work only with async callables (they are not async neutral). Notably, they also work with regular callables that return an awaitable, such as an async def function wrapped by partial().
Attribute Caches¶
This type of cache tracks awaiting an attribute.
- @cached_property(getter: (Self) → await T)¶
Transform a method into an attribute whose value is cached
When applied to an asynchronous method of a class, instances have an attribute of the same name as the method (similar to property). Using this attribute with await provides the value of using the method with await.
The attribute value is cached on the instance after being computed; subsequent uses of the attribute with await provide the cached value, without executing the method again. The cached value can be cleared using del, in which case the next access will recompute the value using the wrapped method.

    import asyncstdlib as a

    class Resource:
        def __init__(self, url):
            self.url = url

        @a.cached_property
        async def data(self):
            return await asynclib.get(self.url)

    resource = Resource("https://example.org/data")
    print(await resource.data)  # needs some time...
    print(await resource.data)  # finishes instantly
    del resource.data
    print(await resource.data)  # needs some time...
Unlike a property, this type does not support setter() or deleter().
Note
Instances on which a value is to be cached must have a __dict__ attribute that is a mutable mapping.
Added in version 1.1.0.
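The caching behaviour can be illustrated with a small, stdlib-only sketch (a hypothetical `async_cached_property` descriptor; the library’s actual implementation differs):

```python
import asyncio

class AwaitableValue:
    """Wraps an already-computed value so it can be awaited repeatedly."""
    def __init__(self, value):
        self.value = value

    def __await__(self):
        return self.value
        yield  # unreachable, but makes __await__ a generator function

class async_cached_property:
    """Sketch of an attribute cache: the first await runs the coroutine
    method and stores its result on the instance; later awaits return
    the stored value, and del clears it."""
    def __init__(self, getter):
        self.getter = getter

    def __set_name__(self, owner, name):
        self.name = name

    def __get__(self, instance, owner=None):
        if instance is None:
            return self
        async def compute_and_cache():
            value = await self.getter(instance)
            # shadow this (non-data) descriptor with a re-awaitable value
            # in the instance __dict__ -- which is why a mutable __dict__
            # is required, and why del works
            instance.__dict__[self.name] = AwaitableValue(value)
            return value
        return compute_and_cache()

class Resource:
    def __init__(self, url):
        self.url = url
        self.fetches = 0

    @async_cached_property
    async def data(self):
        self.fetches += 1  # stands in for a slow network request
        return f"payload from {self.url}"

async def main():
    resource = Resource("https://example.org")
    await resource.data  # computed
    await resource.data  # cached
    del resource.data    # clear the cache
    await resource.data  # computed again
    return resource.fetches

print(asyncio.run(main()))  # 2
```

Storing the cached value directly in the instance `__dict__` is what makes clearing via `del` possible without a `deleter()`.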
Callable Caches¶
This type of cache tracks call argument patterns and their return values.
A pattern is an ordered representation of positional and keyword arguments;
notably, this disregards defaults and overlap between positional and keyword arguments.
This means that for a function f(a, b)
, the calls f(1, 2)
, f(a=1, b=2)
and f(b=2, a=1)
are considered three distinct patterns.
Note that exceptions are not considered return values and thus never cached. This makes the caches suitable for queries that may fail, caching any eventual result for quick and reliable lookup.
- @cache((...) → await R) → LRUAsyncCallable [source]¶
Simple unbounded cache, aka memoization, for async functions
This is a convenience function, equivalent to lru_cache() with a maxsize of None.
Added in version 3.9.0.
- @lru_cache((...) → await R) → LRUAsyncCallable
- @lru_cache(maxsize: ?int = 128, typed: bool = False)((...) → await R) → LRUAsyncCallable ¶
Least Recently Used cache for async functions
Applies an LRU cache storing call arguments and their awaited return value. This is appropriate for coroutine functions, partial() coroutines and any other callable that returns an awaitable.
Arguments to the cached function must be hashable; when the arguments are in the cache, the underlying function is not called. This means any side-effects, including scheduling in an event loop, are skipped. Ideally, lru_cache is used for long-running queries or requests that return the same result for the same input.
The maximum number of cached items is defined by maxsize:
- If set to a positive integer, up to maxsize function argument patterns are stored; further calls with different patterns replace the oldest pattern in the cache.
- If set to zero or a negative integer, the cache is disabled. Every call is directly forwarded to the underlying function, and counted as a cache miss.
- If set to None, the cache has unlimited size. Every used function argument pattern adds an entry to the cache; patterns are never automatically evicted.
In addition to automatic cache eviction from maxsize, the cache can be explicitly emptied via cache_clear() and cache_discard(). Use the cache’s cache_info() to inspect the cache’s performance and filling level.
If typed is True, values in argument patterns are compared by value and type. For example, this means 3 and 3.0 are treated as distinct arguments; however, this is not applied recursively so the type of both (3, 4) and (3.0, 4.0) is the same.
Note
This LRU cache supports overlapping await calls, provided that the wrapped async function does as well. Unlike the original functools.lru_cache(), it is not thread-safe.
A cached async callable can be queried for its cache metadata and allows clearing
entries from the cache. This can be useful to explicitly monitor cache performance,
and to manage caches of unrestricted size.
While the maxsize
of a cache cannot be changed at runtime,
the __wrapped__
callable may be wrapped with a new cache of different size.
- class LRUAsyncCallable¶
Protocol of an LRU cache wrapping a callable to an awaitable
- __wrapped__¶
The callable wrapped by this cache
- __call__(...) → await R ¶
Call self as a function.
- cache_discard(...)[source]¶
Evict the call argument pattern and its result from the cache
When a cache is wrapped by another descriptor (property, staticmethod, …), the descriptor must support wrapping descriptors for this method to detect implicit arguments such as self.
Changed in version Python 3.9: classmethod() properly wraps caches.
Changed in version Python 3.13: classmethod() no longer wraps caches in a way that supports cache_discard.
Added in version 3.10.4.
- cache_info() → (hits=..., misses=..., maxsize=..., currsize=...)[source]¶
Get the current performance and boundary of the cache as a NamedTuple