The functools library

The asyncstdlib.functools library implements Python’s functools for (async) functions and (async) iterables.

Iterator reducing

await reduce(function: (T, T) -> (await) T, iterable: (async) iter T, initial: T) -> T

Reduce an (async) iterable by cumulative application of an (async) function


Raises: TypeError – if iterable is empty and initial is not given

Applies the function from the beginning of iterable, as if executing await function(current, anext(iterable)) until iterable is exhausted. Note that the output of function should be valid as its first input.

The optional initial is prepended to all items of iterable when applying function. If the combination of initial and iterable contains exactly one item, it is returned without calling function.

Async Caches

The regular functools.lru_cache() and functools.cached_property() are not appropriate for async callables, such as an async def coroutine function: their direct return value is an awaitable instead of the desired value. This causes the cache to store only temporary helpers, not the actual values.

Both lru_cache() and cached_property() of asyncstdlib work only with async callables (they are not async neutral). Notably, this includes regular callables that return an awaitable, such as an async def function wrapped by partial().
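To see why the stdlib cache is unsuitable, consider what functools.lru_cache() does to a coroutine function: the cache stores the coroutine object from the first call, and a later cache hit hands back that same, already-awaited object. A minimal reproduction:

```python
import asyncio
import functools

@functools.lru_cache(maxsize=None)  # the *stdlib* cache, misapplied
async def double(x):
    return x * 2

async def main():
    first = await double(21)       # works once: the fresh coroutine yields 42
    try:
        await double(21)           # cache hit returns the same, already
    except RuntimeError as error:  # awaited coroutine object
        return first, type(error).__name__

result = asyncio.run(main())
print(result)  # (42, 'RuntimeError')
```

The asyncstdlib variants avoid this by caching the awaited value rather than the awaitable.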

@cached_property(getter: (Self) -> await T)

Transform a method into an attribute whose value is cached

When applied to an asynchronous method of a class, instances have an attribute of the same name as the method (similar to property). Using this attribute with await provides the value of using the method with await.

The attribute value is cached on the instance after being computed; subsequent uses of the attribute with await provide the cached value, without executing the method again. The cached value can be cleared using del, in which case the next access will recompute the value using the wrapped method.

import asyncstdlib as a

class Resource:
    def __init__(self, url):
        self.url = url

    @a.cached_property
    async def data(self):
        # asynclib stands in for any async HTTP client
        return await asynclib.get(self.url)

resource = Resource("https://example.com")  # illustrative URL
print(await resource.data)  # needs some time...
print(await resource.data)  # finishes instantly
del resource.data
print(await resource.data)  # needs some time...

Unlike a property, this type does not support setter() or deleter().


Note: Instances on which a value is to be cached must have a __dict__ attribute that is a mutable mapping.

New in version 1.1.0.

The lru_cache() can be applied as a decorator, both with and without arguments:

@a.lru_cache  # applied without arguments: default maxsize of 128
async def get_pep(num):
    url = f'{num:04}/'
    request = await asynclib.get(url)
    return request.body()

@a.lru_cache(maxsize=32)  # applied with arguments (maxsize=32 is illustrative)
async def get_pep(num):
    url = f'{num:04}/'
    request = await asynclib.get(url)
    return request.body()
@cache((...) -> await R)

Simple unbounded cache, aka memoization, for async functions

This is a convenience function, equivalent to lru_cache() with a maxsize of None.

New in version 3.9.0.

@lru_cache(maxsize: ?int = 128, typed: bool = False)((...) -> await R)

Least Recently Used cache for async functions

Applies an LRU cache, mapping the most recent function call arguments to the awaited function return value. This makes this cache appropriate for coroutine functions, partial() coroutines and any other callable that returns an awaitable.

Arguments to the cached function must be hashable. On a successful cache hit, the underlying function is not called. This means any side-effects, including scheduling in an internal event loop, are skipped. Ideally, lru_cache is used for long-running queries or requests that return the same result for the same input.

The maximum number of cached items is defined by maxsize:

  • If set to a positive integer, at most maxsize distinct function argument patterns are stored; further calls with different patterns evict the oldest stored pattern from the cache.

  • If set to zero or a negative integer, the cache is disabled. Every call is directly forwarded to the underlying function, and counted as a cache miss.

  • If set to None, the cache has unlimited size. Every new function argument pattern adds an entry to the cache; patterns and values are never automatically evicted.

The cache can always be explicitly emptied via cache_clear(). Use the cache’s cache_info() to inspect the cache’s performance and filling level.
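The shape of cache_info() and cache_clear() mirrors the synchronous functools.lru_cache(), so it can be sketched with the stdlib version (the function square is illustrative):

```python
import functools

@functools.lru_cache(maxsize=2)
def square(x):
    return x * x

square(1); square(1); square(2)  # miss, hit, miss
info = square.cache_info()
print(info)  # CacheInfo(hits=1, misses=2, maxsize=2, currsize=2)

square.cache_clear()  # explicitly empty the cache
print(square.cache_info().currsize)  # 0
```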

If typed is True, values in argument patterns are compared by value and type. For example, this means that 3 and 3.0 passed as the same argument are treated as distinct pattern elements.
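The effect of typed on the cache key can be sketched as follows; the two key helpers are illustrative stand-ins, not the library's internals:

```python
# With typed=False the key records only values; with typed=True it
# also records their types, so 3 and 3.0 no longer share an entry.
def untyped_key(*args):
    return args

def typed_key(*args):
    return args + tuple(type(arg) for arg in args)

print(untyped_key(3) == untyped_key(3.0))  # True  - shared cache entry
print(typed_key(3) == typed_key(3.0))      # False - distinct entries
```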


Note: This wrapper is intended for use with a single event loop, and supports overlapping concurrent calls. Unlike the original functools.lru_cache(), it is not thread-safe.

The cache tracks call argument patterns and maps them to observed return values. A pattern is an ordered representation of provided positional and keyword arguments; notably, this disregards default arguments, as well as any overlap between positional and keyword arguments. This means that for a function f(a, b), the calls f(1, 2), f(a=1, b=2) and f(b=2, a=1) are considered three distinct patterns.
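This pattern construction can be sketched as recording arguments exactly as provided; the helper pattern below is a hypothetical stand-in for the internal key, not the library's implementation:

```python
# Positional and keyword arguments are recorded as given, with no
# normalization against the signature or keyword-order canonicalization.
def pattern(*args, **kwargs):
    return (args, tuple(kwargs.items()))

# For f(a, b), these three equivalent calls yield three distinct patterns:
print(pattern(1, 2))      # ((1, 2), ())
print(pattern(a=1, b=2))  # ((), (('a', 1), ('b', 2)))
print(pattern(b=2, a=1))  # ((), (('b', 2), ('a', 1)))
```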

In addition, exceptions are not treated as return values and are never cached. This allows retrying a long-running query that may fail, caching any eventual result for quick and reliable lookup.
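This retry behaviour can be sketched with a stdlib-only memoizing wrapper; memoize_async is an illustrative stand-in for the cache, not the library's implementation:

```python
import asyncio

def memoize_async(fn):
    cache = {}
    async def wrapper(*args):
        if args in cache:
            return cache[args]
        result = await fn(*args)  # an exception propagates; nothing is stored
        cache[args] = result      # only successful results are cached
        return result
    return wrapper

attempts = 0

@memoize_async
async def flaky(x):
    global attempts
    attempts += 1
    if attempts < 2:
        raise ConnectionError("transient failure")
    return x * 2

async def main():
    try:
        await flaky(21)        # first attempt fails and is *not* cached
    except ConnectionError:
        pass
    first = await flaky(21)    # retry succeeds; the result is cached
    second = await flaky(21)   # served from the cache, no new attempt
    return attempts, first, second

outcome = asyncio.run(main())
print(outcome)  # (2, 42, 42)
```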

A wrapped async callable can be queried for its cache metadata, and allows clearing the entire cache. This can be useful to explicitly monitor cache performance, and to manage caches of unrestricted size. Note that the maxsize of a cache cannot be changed at runtime – however, the __wrapped__ callable may be wrapped with a new cache of different size.

class LRUAsyncCallable

Protocol of an LRU cache wrapping a callable to an awaitable

__wrapped__

The callable wrapped by this cache

__call__(...) -> await R

Call self as a function.

cache_clear()

Evict all call argument patterns and their results from the cache

cache_info() -> (hits=..., misses=..., maxsize=..., currsize=...)

Get the current performance and boundary of the cache as a NamedTuple

cache_parameters() -> {"maxsize": ..., "typed": ...}

Get the parameters of the cache

New in version 3.9.0: The cache_parameters() method.