
PEP 810 – Explicit lazy imports



Author: Pablo Galindo, Germán Méndez Bravo, Thomas Wouters, Dino Viehland, Brittany Reynoso, Noah Kim, Tim Stumbaugh
Discussions-To: Discourse thread
Status: Draft
Type: Standards Track
Created: 02-Oct-2025
Python-Version: 3.15

Abstract

This PEP introduces lazy imports as an explicit language feature. Currently, a module is eagerly loaded at the point of the import statement. Lazy imports defer the loading and execution of a module until the first time the imported name is used. By allowing developers to mark individual imports as lazy with explicit syntax, Python programs can reduce startup time, memory usage, and unnecessary work. This is particularly beneficial for command-line tools, test suites, and applications with large dependency graphs. The proposal preserves full backwards compatibility: normal import statements remain unchanged, and lazy imports are enabled only where explicitly requested.

Motivation

A common convention in Python code is to place all imports at the module level, typically at the beginning of the file. This avoids repetition, makes dependencies clear, and minimizes runtime overhead by only evaluating an import statement once per module.

A major drawback of this approach is that importing the first module for an execution of Python (the “main” module) often triggers an immediate cascade of imports, optimistically loading many dependencies that may never be used. The effect is especially costly for command-line tools with multiple subcommands, where even running the command with --help can load dozens of unnecessary modules and take several seconds, just to give the user helpful feedback on how to run the program at all. Worse, the user incurs this overhead again when they figure out the command they want and invoke the program “for real.”

A somewhat common way to delay imports is to move the imports into functions (inline imports), but this practice is manual to implement and maintain, and it obscures the full set of dependencies for a module. Analysis of the Python standard library shows that approximately 17% of all imports outside tests (nearly 3500 total imports across 730 files) are already placed inside functions, classes, or methods specifically to defer their execution. This demonstrates that developers are already manually implementing lazy imports in performance-sensitive code, but doing so requires scattering imports throughout the codebase and makes the full dependency graph harder to understand at a glance.

The standard library provides importlib.util.LazyLoader to solve some of these inefficiency problems. It permits imports at the module level to work mostly like inline imports do. Scientific Python libraries have adopted a similar pattern, formalized in SPEC 1. There is also the third-party lazy_loader package.
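The importlib.util.LazyLoader approach mentioned above can be used today; the following is a minimal, self-contained sketch based on the recipe in the importlib documentation (the helper name lazy_import is ours, not part of the stdlib):

```python
import importlib.util
import sys

def lazy_import(name):
    """Import a module lazily using the stdlib LazyLoader.

    The module object is created and registered in sys.modules
    immediately, but its code only executes on first attribute access.
    """
    spec = importlib.util.find_spec(name)
    loader = importlib.util.LazyLoader(spec.loader)
    spec.loader = loader
    module = importlib.util.module_from_spec(spec)
    sys.modules[name] = module
    loader.exec_module(module)
    return module

json = lazy_import("json")
# The module body has not run yet; this attribute access triggers it.
print(json.dumps({"hello": "world"}))
```

Note how this shifts laziness into a helper call rather than the import statement itself, which is part of why the PEP argues for dedicated syntax.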
Imports used solely for static type checking are another source of potentially unneeded imports, and there are similarly disparate approaches to minimizing their overhead. These approaches do not cover all use cases, and they add runtime overhead in unexpected places, in non-obvious ways, and without a clear standard.

This proposal introduces lazy imports syntax with a design that is local, explicit, controlled, and granular. Each of these qualities is essential to making the feature predictable and safe to use in practice.

The behavior is local: laziness applies only to the specific import marked with the lazy keyword, and it does not cascade recursively into other imports. This ensures that developers can reason about the effect of laziness by looking only at the line of code in front of them, without worrying about whether imported modules will themselves behave differently. A lazy import is an isolated decision in a single module, not a global shift in semantics.

The semantics are explicit. When a name is imported lazily, the binding is created in the importing module immediately, but the target module is not loaded until the first time the name is accessed. After this point, the binding is indistinguishable from one created by a normal import. This clarity reduces surprises and makes the feature accessible to developers who may not be deeply familiar with Python’s import machinery.

Lazy imports are controlled, in the sense that deferred loading is only triggered by the importing code itself. In the general case, a library will only experience lazy imports if its own authors choose to mark them as such. This avoids shifting responsibility onto downstream users and prevents accidental surprises in library behavior. Since library authors typically manage their own import subgraphs, they retain predictable control over when and how laziness is applied.

The mechanism is also granular.
It is introduced through explicit syntax on individual imports, rather than a global flag or implicit setting. This allows developers to adopt it incrementally, starting with the most performance-sensitive areas of a codebase. As this feature is introduced to the community, we want to make the experience of onboarding optional, progressive, and adaptable to the needs of each project.

In addition to the new lazy import syntax, we also propose a way to control lazy imports at the application level: globally disabling or enabling, and selectively disabling. These are provided for debugging, testing and experimentation, and are not expected to be the common way to control lazy imports.

The design of lazy imports provides several concrete advantages:

- Command-line tools are often invoked directly by a user, so latency (in particular startup latency) is quite noticeable. These programs are also typically short-lived processes (contrasted with, e.g., a web server). Most conventions would have a CLI with multiple subcommands import every dependency up front, even if the user only requests tool --help (or tool subcommand --help). With lazy imports, only the code paths actually reached will import a module. This can reduce startup time by 50–70% in practice, providing a visceral improvement to a common user experience and improving Python’s competitiveness in domains where fast startup matters most.

- Type annotations frequently require imports that are never used at runtime. The common workaround is to wrap them in if TYPE_CHECKING: blocks. With lazy imports, annotation-only imports impose no runtime penalty, eliminating the need for such guards and making annotated codebases cleaner.

- Large applications often import thousands of modules, and each module creates function and type objects, incurring memory costs. In long-lived processes, this noticeably raises baseline memory usage. Lazy imports defer these costs until a module is needed, keeping unused subsystems unloaded. Memory savings of 30–40% have been observed in real workloads.
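The if TYPE_CHECKING: workaround for annotation-only imports, which lazy imports would make unnecessary, looks like this today (the add_tax function and its Decimal annotations are an illustrative example, not from the PEP):

```python
from __future__ import annotations

from typing import TYPE_CHECKING

if TYPE_CHECKING:
    # Only executed by static type checkers; never at runtime.
    from decimal import Decimal

def add_tax(price: Decimal, rate: Decimal) -> Decimal:
    # With `from __future__ import annotations`, annotations are not
    # evaluated at runtime, so Decimal need not be importable here.
    return price + price * rate
```

The guard works, but it splits the import story in two: type checkers see the import, the runtime does not, and every annotation-only dependency needs this ceremony.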

Rationale

The design of this proposal is centered on clarity, predictability, and ease of adoption. Each decision was made to ensure that lazy imports provide tangible benefits without introducing unnecessary complexity into the language or its runtime. While this PEP outlines one specific approach, we also list alternate implementation strategies for some of the core aspects and semantics of the proposal. If the community expresses a strong preference for a different technical path that preserves the same core semantics, or if there is fundamental disagreement over a specific option, the brainstorming we completed in preparation for this proposal is included as a reference.

The choice to introduce a new lazy keyword reflects the need for explicit syntax. Import behavior is too fundamental to be left implicit or hidden behind global flags or environment variables. By marking laziness directly at the import site, the intent is immediately visible to both readers and tools. This avoids surprises, reduces the cognitive burden of reasoning about imports, and keeps the semantics in line with Python’s tradition of explicitness.

Another important decision is to represent lazy imports with proxy objects in the module’s namespace, rather than by modifying dictionary lookup. Earlier approaches experimented with embedding laziness into dictionaries, but this blurred abstractions and risked affecting unrelated parts of the runtime. The dictionary is a fundamental data structure in Python (practically every object is built on top of dicts), and adding hooks to dictionaries would prevent critical optimizations and complicate the entire runtime. The proxy approach is simpler: it behaves like a placeholder until first use, at which point it resolves the import and rebinds the name. From then on, the binding is indistinguishable from a normal import. This makes the mechanism easy to explain and keeps the rest of the interpreter unchanged.

Compatibility for library authors was also a key concern. Many maintainers need a migration path that allows them to support both new and old versions of Python at once. For this reason, the proposal includes the __lazy_modules__ global as a transitional mechanism. A module can declare which imports should be treated as lazy (by listing the module names as strings), and on Python 3.15 or later those imports will become lazy automatically. On earlier versions the declaration is ignored, leaving imports eager. This gives authors a practical bridge until they can rely on the keyword as the canonical syntax.

Finally, the feature is designed to be adopted incrementally. Nothing changes unless a developer explicitly opts in, and adoption can begin with just a few imports in performance-sensitive areas. This mirrors the experience of gradual typing in Python: a mechanism that can be introduced progressively, without forcing projects to commit globally from day one. Notably, adoption can also happen from the “outside in,” permitting CLI authors to introduce lazy imports and speed up user-facing tools without requiring changes to every library the tool might use.

By combining explicit syntax, a simple runtime model, a compatibility layer, and gradual adoption, this proposal balances performance improvements with the clarity and stability that Python users expect.

Other design decisions

The scope of laziness is deliberately local and non-recursive. A lazy import only affects the specific statement where it appears; it does not cascade into other modules or submodules. This choice is crucial for predictability. When developers read code, they can reason about import behavior line by line, without worrying about hidden laziness deeper in the dependency graph. The result is a feature that is powerful but still easy to understand in context.

In addition, it is useful to provide a mechanism to activate or deactivate lazy imports at a global level. While the primary design centers on explicit syntax, there are scenarios—such as large applications, testing environments, or frameworks—where enabling laziness consistently across many modules provides the most benefit. A global switch makes it easy to experiment with or enforce consistent behavior, while still working in combination with the filtering API to respect exclusions or tool-specific configuration. This ensures that global adoption can be practical without reducing flexibility or control.
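As a concrete illustration of the transitional __lazy_modules__ mechanism described in the Rationale: the declaration is a plain module global, so a library can ship it today and interpreters that predate this PEP simply ignore it (the module names and the export function here are illustrative):

```python
# On Python 3.15+ (per this PEP), imports of the listed modules become
# potentially lazy; earlier interpreters ignore the attribute entirely.
__lazy_modules__ = ["json", "csv"]

import json
import csv

def export(rows):
    # The modules are fully usable either way: eagerly loaded on <=3.14,
    # or reified on this first use on 3.15+ with lazy imports enabled.
    return json.dumps(rows)
```

This is what makes the migration path practical: the same file runs unchanged across versions, gaining laziness only where the runtime supports it.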

Specification

Grammar

A new soft keyword lazy is added. A soft keyword is a context-sensitive keyword that only has special meaning in specific grammatical contexts; elsewhere it can be used as a regular identifier (e.g., as a variable name). The lazy keyword only has special meaning when it appears before import statements:

```
import_name:
    | 'lazy'? 'import' dotted_as_names
import_from:
    | 'lazy'? 'from' ('.' | '...')* dotted_name 'import' import_from_targets
    | 'lazy'? 'from' ('.' | '...')+ 'import' import_from_targets
```

Syntax restrictions

The soft keyword is only allowed at the global (module) level: not inside functions, class bodies, try or with blocks, and not with import *. Import statements that use the soft keyword are potentially lazy. Imports that can’t be lazy are unaffected by the global lazy imports flag, and instead are always eager. Examples of syntax errors:

```
# SyntaxError: lazy import not allowed inside functions
def foo():
    lazy import json

# SyntaxError: lazy import not allowed inside classes
class Bar:
    lazy import json

# SyntaxError: lazy import not allowed inside try/except blocks
try:
    lazy import json
except ImportError:
    pass

# SyntaxError: lazy import not allowed inside with blocks
with suppress(ImportError):
    lazy import json

# SyntaxError: lazy from ... import * is not allowed
lazy from json import *
```

Semantics

When the lazy keyword is used, the import becomes potentially lazy. Unless lazy imports are disabled or suppressed (see below), the module is not loaded immediately at the import statement; instead, a lazy proxy object is created and bound to the name. The actual module is loaded on first use of that name. Example:

```
import sys

lazy import json
print('json' in sys.modules)  # False - module not loaded yet

# First use triggers loading
result = json.dumps({"hello": "world"})
print('json' in sys.modules)  # True - now loaded
```

A module may contain a __lazy_modules__ attribute, which is a sequence of fully qualified module names (strings) to make potentially lazy (as if the lazy keyword was used). This attribute is checked on each import statement to determine whether the import should be made potentially lazy. When a module is made lazy this way, from-imports using that module are also lazy, but not necessarily imports of sub-modules. The normal (non-lazy) import statement will check the global lazy imports flag. If it is “enabled”, all imports are potentially lazy (except for imports that can’t be lazy, as mentioned above). Example:

```
import sys

__lazy_modules__ = ["json"]

import json
print('json' in sys.modules)  # False

result = json.dumps({"hello": "world"})
print('json' in sys.modules)  # True
```

If the global lazy imports flag is set to “disabled”, no potentially lazy import is ever imported lazily, and the behavior is equivalent to a regular import statement: the import is eager (as if the lazy keyword was not used).

For a potentially lazy import, the lazy imports filter (if set) is called with the name of the module doing the import, the name of the module being imported, and (if applicable) the fromlist. If the lazy imports filter returns True, the potentially lazy import becomes a lazy import. Otherwise, the import is not lazy, and the normal (eager) import continues.

Lazy import mechanism

When an import is lazy, __lazy_import__ is called instead of __import__. __lazy_import__ has the same function signature as __import__. It adds the module name to sys.lazy_modules, a set of module names which have been lazily imported at some point (primarily for diagnostics and introspection), and returns a “lazy module object.” The implementation of from ... import (the IMPORT_FROM bytecode implementation) checks if the module it’s fetching from is a lazy module object, and if so, returns a lazy object for each name instead.
The end result of this process is that lazy imports (regardless of how they are enabled) result in lazy objects being assigned to global variables. Lazy module objects do not appear in sys.modules; they’re just listed in the sys.lazy_modules set. Under normal operation lazy objects should only end up stored in global variables, and the common ways to access those variables (regular variable access, module attributes) will resolve lazy imports (“reify”) and replace them when they’re accessed. It is still possible to expose lazy objects through other means, like debuggers. This is not considered a problem.

Reification

When a lazy object is first used, it needs to be reified. This means resolving the import at that point in the program and replacing the lazy object with the concrete one. Reification imports the module in the same way as it would have been if it had been imported eagerly, barring intervening changes to the import system (e.g. to sys.path, sys.meta_path, sys.path_hooks or __import__). Reification still calls __import__ to resolve the import. When the module is first reified, it’s removed from sys.lazy_modules (even if there are still other unreified lazy references to it).

When a package is reified and submodules in the package were also previously lazily imported, those submodules are not automatically reified, but they are added to the reified package’s globals (unless the package already assigned something else to the name of the submodule).

If reification fails (e.g., due to an ImportError), the exception is enhanced with chaining to show both where the lazy import was defined and where it was first accessed (even though it propagates from the code that triggered reification). This provides clear debugging information:

```
# app.py - has a typo in the import
lazy from json import dumsp  # Typo: should be 'dumps'

print("App started successfully")
print("Processing data...")

# Error occurs here on first use
result = dumsp({"key": "value"})
```

The traceback shows both locations:

```
App started successfully
Processing data...
Traceback (most recent call last):
  File "app.py", line 2, in <module>
    lazy from json import dumsp
ImportError: deferred import of 'json.dumsp' raised an exception during resolution

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "app.py", line 8, in <module>
    result = dumsp({"key": "value"})
             ^^^^^
ImportError: cannot import name 'dumsp' from 'json'. Did you mean: 'dump'?
```

This exception chaining clearly shows: (1) where the lazy import was defined, (2) that it was deferred, and (3) where the actual access happened that triggered the error.

Reification does not automatically occur when a module that was previously lazily imported is subsequently eagerly imported. Reification does not immediately resolve all lazy objects (e.g. lazy from statements) that referenced the module; it only resolves the lazy object being accessed.

Accessing a lazy object (from a global variable or a module attribute) reifies the object. Accessing a module’s __dict__ reifies all lazy objects in that module. Operations that indirectly access __dict__ (such as dir()) also trigger this behavior. Example using __dict__ from external code:

```
# my_module.py
import sys

lazy import json
print('json' in sys.modules)  # False - still lazy
```

```
# main.py
import sys
import my_module

# Accessing __dict__ from external code DOES reify all lazy imports
d = my_module.__dict__
print('json' in sys.modules)  # True - reified by __dict__ access
print(type(d['json']))        # <class 'module'>
```

However, calling globals() does not trigger reification; it returns the module’s dictionary, and accessing lazy objects through that dictionary still returns lazy proxy objects that need to be manually reified upon use. A lazy object can be resolved explicitly by calling the get method.
Other, more indirect ways of accessing arbitrary globals (e.g. inspecting frame.f_globals) also do not reify all the objects. Example using globals():

```
import sys

lazy import json

# Calling globals() does NOT trigger reification
g = globals()
print('json' in sys.modules)  # False - still lazy
print(type(g['json']))        # a lazy proxy object, not the module

# Explicitly reify using the get() method
resolved = g['json'].get()
print(type(resolved))         # <class 'module'>
print('json' in sys.modules)  # True - now loaded
```
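The reify-on-access behavior, including the explicit get() method, can be approximated in current Python with a small wrapper class. This is an illustrative emulation (LazyProxy is not part of the PEP), and unlike the real feature it never rebinds the global name to the concrete module:

```python
import importlib

class LazyProxy:
    """Placeholder that imports its target module on first use."""

    def __init__(self, name):
        self._name = name
        self._module = None

    def get(self):
        # Mirrors the explicit reification step described in the PEP.
        if self._module is None:
            self._module = importlib.import_module(self._name)
        return self._module

    def __getattr__(self, attr):
        # Any other attribute access reifies the target module first.
        return getattr(self.get(), attr)

json = LazyProxy("json")
print(json.dumps({"key": "value"}))  # the import happens here, on first use
```

The real mechanism goes further: after reification the interpreter replaces the binding with the actual module object, so subsequent accesses pay no proxy overhead.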

Implementation

Bytecode and adaptive specialization

Lazy imports are implemented through modifications to four bytecode instructions: IMPORT_NAME, IMPORT_FROM, LOAD_GLOBAL, and LOAD_NAME.

The lazy syntax sets a flag in the IMPORT_NAME instruction’s oparg (oparg & 0x01). The interpreter checks this flag and calls _PyEval_LazyImportName() instead of _PyEval_ImportName(), creating a lazy import object rather than executing the import immediately. The IMPORT_FROM instruction checks whether its source is a lazy import (PyLazyImport_CheckExact()) and creates a lazy object for the attribute rather than accessing it immediately.

When a lazy object is accessed, it must be reified. The LOAD_GLOBAL instruction (used in function scopes) and the LOAD_NAME instruction (used at module and class level) both check whether the object being loaded is a lazy import. If so, they call _PyImport_LoadLazyImportTstate() to perform the actual import and store the module in sys.modules. This check incurs a very small cost on each access. However, Python’s adaptive interpreter can specialize LOAD_GLOBAL after observing that a lazy import has been reified. After several executions, LOAD_GLOBAL becomes LOAD_GLOBAL_MODULE, which accesses the module dictionary directly without checking for lazy imports.

Examples of the bytecode generated:

```
lazy import json  # IMPORT_NAME with flag set
```

Generates:

```
IMPORT_NAME 1 (json + lazy)
```

```
lazy from json import dumps  # IMPORT_NAME + IMPORT_FROM
```

Generates:

```
IMPORT_NAME 1 (json + lazy)
IMPORT_FROM 1 (dumps)
```

```
lazy import json
x = json  # Module-level access
```

Generates:

```
LOAD_NAME 0 (json)
```

```
lazy import json

def use_json():
    return json.dumps({})  # Function scope
```

Before any calls:

```
LOAD_GLOBAL 0 (json)
LOAD_ATTR 2 (dumps)
```

After several calls, LOAD_GLOBAL specializes to LOAD_GLOBAL_MODULE:

```
LOAD_GLOBAL_MODULE 0 (json)
LOAD_ATTR_MODULE 2 (dumps)
```

Lazy imports filter

This PEP adds two new functions to the sys module to manage the lazy imports filter:

sys.set_lazy_imports_filter(func) - Sets the filter function. The func parameter must have the signature:

```
func(importer: str, name: str, fromlist: tuple[str, ...] | None) -> bool
```
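A filter matching the signature above might look like the following sketch. Since sys.set_lazy_imports_filter is proposed by this PEP and does not exist in released Pythons, the registration is guarded; the policy itself (a hypothetical myapp.plugins package) is an invented example:

```python
import sys

def lazy_filter(importer, name, fromlist):
    """Return True to let a potentially lazy import actually be lazy.

    Example policy: keep imports of a hypothetical plugins package
    lazy, and force everything else to import eagerly.
    """
    return name.startswith("myapp.plugins")

# Guard so this sketch also runs on interpreters that predate the PEP.
if hasattr(sys, "set_lazy_imports_filter"):
    sys.set_lazy_imports_filter(lazy_filter)
```

Per the specification, the filter is consulted only for potentially lazy imports, so returning False here falls back to a normal eager import rather than blocking the import entirely.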
