The functools library

The asyncstdlib.functools library implements Python’s functools for (async) functions and (async) iterables.

Iterator reducing

await reduce(function: (T, T) → (await) T, iterable: (async) iter T, initial: T) → T

Reduce an (async) iterable by cumulative application of an (async) function


Raises: TypeError – if iterable is empty and initial is not given

Applies the function from the beginning of iterable, as if executing await function(current, anext(iterable)) until iterable is exhausted. Note that the output of function should be valid as its first input.

The optional initial is prepended to all items of iterable when applying function. If the combination of initial and iterable contains exactly one item, it is returned without calling function.
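The reduction logic described above can be sketched in plain Python. The following is a hypothetical minimal re-implementation for illustration only, not asyncstdlib's actual code; it handles only a synchronous combining function and an async iterable:

```python
import asyncio

_MISSING = object()  # sentinel: distinguishes "no initial" from initial=None

async def areduce(function, iterable, initial=_MISSING):
    """Minimal sketch of the reduction logic; not asyncstdlib's implementation."""
    iterator = iterable.__aiter__()
    if initial is _MISSING:
        try:
            # without an initial value, the first item seeds the reduction
            current = await iterator.__anext__()
        except StopAsyncIteration:
            raise TypeError("cannot reduce an empty iterable without an initial value")
    else:
        current = initial
    async for item in iterator:
        current = function(current, item)  # a plain sync function, for simplicity
    return current

async def numbers():
    for n in (1, 2, 3, 4):
        yield n

total = asyncio.run(areduce(lambda x, y: x + y, numbers()))          # 1+2+3+4
with_initial = asyncio.run(areduce(lambda x, y: x + y, numbers(), 100))
```

Note how the initial value is simply used as the starting `current`, which is exactly what "prepended to all items" means for the reduction.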

Async Caches

The regular functools.lru_cache() and functools.cached_property() are not appropriate for async callables, such as an async def coroutine function: their direct return value is an awaitable instead of the desired value. This causes the cache to store only temporary helpers, not the actual values.
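The failure mode can be demonstrated directly: the standard cache stores the coroutine object of the first call, and a coroutine can only be awaited once. This snippet uses a made-up `fetch` function purely for illustration:

```python
import asyncio
import functools

@functools.lru_cache(maxsize=None)
async def fetch(x):
    # functools caches the coroutine object this call creates, not its result
    return x * 2

async def main():
    first = await fetch(1)   # works: a fresh coroutine is created and awaited
    try:
        await fetch(1)       # fails: the cached coroutine was already awaited
    except RuntimeError as err:
        return first, str(err)

outcome = asyncio.run(main())
```

The second call raises `RuntimeError` because the cache hands back the same, already-consumed coroutine instead of the computed value.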

Both lru_cache() and cached_property() of asyncstdlib work only with async callables (they are not async neutral). Here, an async callable is anything that returns an awaitable when called — notably including regular callables such as an async def function wrapped by partial().

Attribute Caches

This type of cache tracks awaiting an attribute.

@cached_property(getter: (Self) → await T)

Transform a method into an attribute whose value is cached

When applied to an asynchronous method of a class, instances have an attribute of the same name as the method (similar to property). Using this attribute with await provides the value of using the method with await.

The attribute value is cached on the instance after being computed; subsequent uses of the attribute with await provide the cached value, without executing the method again. The cached value can be cleared using del, in which case the next access will recompute the value using the wrapped method.

import asyncstdlib as a

class Resource:
    def __init__(self, url):
        self.url = url

    @a.cached_property
    async def data(self):
        return await asynclib.get(self.url)

resource = Resource("http://example.com")
print(await resource.data)  # needs some time...
print(await resource.data)  # finishes instantly
del resource.data
print(await resource.data)  # needs some time...

Unlike a property, this type does not support setter() or deleter().


Instances on which a value is to be cached must have a __dict__ attribute that is a mutable mapping.

New in version 1.1.0.

Callable Caches

This type of cache tracks call argument patterns and their return values. A pattern is an ordered representation of positional and keyword arguments; notably, this disregards defaults and overlap between positional and keyword arguments. This means that for a function f(a, b), the calls f(1, 2), f(a=1, b=2) and f(b=2, a=1) are considered three distinct patterns.
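The standard functools.lru_cache uses the same notion of argument patterns, so it can illustrate the behaviour without the async machinery:

```python
import functools

@functools.lru_cache(maxsize=None)
def f(a, b):
    return a + b

f(1, 2)        # pattern: positional (1, 2)
f(a=1, b=2)    # pattern: keywords a=1, b=2
f(b=2, a=1)    # pattern: keywords b=2, a=1
info = f.cache_info()  # three misses, three distinct cache entries
```

All three calls compute the same result, yet none of them hits the cache of another: defaults and positional/keyword overlap are not normalized away.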

Note that exceptions are not considered return values and thus never cached. This makes the caches suitable for queries that may fail, caching any eventual result for quick and reliable lookup.

@cache((...) → await R) → LRUAsyncCallable

Simple unbounded cache, aka memoization, for async functions

This is a convenience function, equivalent to lru_cache() with a maxsize of None.

New in version 3.9.0.

@lru_cache((...) → await R) → LRUAsyncCallable
@lru_cache(maxsize: ?int = 128, typed: bool = False)((...) → await R) → LRUAsyncCallable

Least Recently Used cache for async functions

Applies an LRU cache storing call arguments and their awaited return value. This is appropriate for coroutine functions, partial() coroutines and any other callable that returns an awaitable.

Arguments to the cached function must be hashable; when the arguments are in the cache, the underlying function is not called. This means any side-effects, including scheduling in an event loop, are skipped. Ideally, lru_cache is used for long-running queries or requests that return the same result for the same input.

The maximum number of cached items is defined by maxsize:

  • If set to a positive integer, up to maxsize function argument patterns are stored; further calls with different patterns replace the least recently used pattern in the cache.

  • If set to zero or a negative integer, the cache is disabled. Every call is directly forwarded to the underlying function, and counted as a cache miss.

  • If set to None, the cache has unlimited size. Every used function argument pattern adds an entry to the cache; patterns are never automatically evicted.
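The effect of the maxsize settings can be sketched with the synchronous functools.lru_cache, which follows the same rules:

```python
import functools

# maxsize=0 disables storage entirely; every call is counted as a miss
@functools.lru_cache(maxsize=0)
def disabled(x):
    return x + 1

disabled(1)
disabled(1)
disabled_info = disabled.cache_info()  # hits=0, misses=2, currsize=0

# a positive maxsize evicts the least recently used pattern on overflow
@functools.lru_cache(maxsize=2)
def bounded(x):
    return x * x

bounded(1); bounded(2); bounded(3)  # the pattern for 1 is evicted
bounded(1)                          # miss again: must be recomputed
bounded_info = bounded.cache_info()
```

With maxsize=2, the fourth call misses even though `bounded(1)` was computed earlier, because its pattern was the least recently used one when `bounded(3)` overflowed the cache.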

In addition to automatic cache eviction from maxsize, the cache can be explicitly emptied via cache_clear() and cache_discard(). Use the cache’s cache_info() to inspect the cache’s performance and filling level.

If typed is True, values in argument patterns are compared by both value and type. For example, 3 and 3.0 are then treated as distinct arguments. However, this is not applied recursively: (3, 4) and (3.0, 4.0) are both of type tuple and compare equal, so they count as the same argument.
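The typed behaviour, including its non-recursive nature, can again be shown with the synchronous functools.lru_cache, which shares the semantics:

```python
import functools

@functools.lru_cache(maxsize=None, typed=True)
def double(x):
    return x * 2

double(3)            # miss: int argument
double(3.0)          # miss: float argument, distinct under typed=True
double((3, 4))       # miss: tuple argument
double((3.0, 4.0))   # hit: typed is not recursive; both are equal tuples
typed_info = double.cache_info()
```

Only the outermost type is checked: both containers are tuples that compare equal, so the inner int/float difference is invisible to the cache.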


This LRU cache supports overlapping await calls, provided that the wrapped async function does as well. Unlike the original functools.lru_cache(), it is not thread-safe.

A cached async callable can be queried for its cache metadata and allows clearing entries from the cache. This can be useful to explicitly monitor cache performance, and to manage caches of unrestricted size. While the maxsize of a cache cannot be changed at runtime, the __wrapped__ callable may be wrapped with a new cache of different size.
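The re-wrapping pattern mentioned above can be sketched with the synchronous functools.lru_cache, whose __wrapped__ attribute behaves analogously:

```python
import functools

@functools.lru_cache(maxsize=128)
def query(x):
    return x * x

# the maxsize of an existing cache cannot change, but the original
# callable is exposed as __wrapped__ and can be given a fresh, larger cache
resized = functools.lru_cache(maxsize=1024)(query.__wrapped__)
```

The new wrapper starts with an empty cache; entries from the old cache are not carried over.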

class LRUAsyncCallable

Protocol of a LRU cache wrapping a callable to an awaitable


__wrapped__

The callable wrapped by this cache

__call__(...) → await R

Call self as a function.


cache_clear()

Evict all call argument patterns and their results from the cache


cache_discard(*args, **kwargs)

Evict the call argument pattern and its result from the cache

When a cache is wrapped by another descriptor (property, staticmethod, …), the descriptor must support wrapping descriptors for this method to detect implicit arguments such as self.

Changed in version Python 3.9: classmethod() properly wraps caches.

New in version 3.10.4.

cache_info() → (hits=..., misses=..., maxsize=..., currsize=...)

Get the current performance and boundary of the cache as a NamedTuple

cache_parameters() → {"maxsize": ..., "typed": ...}

Get the parameters of the cache

New in version 3.9.0.